
Long-term memory for AI Agents

Project description

Mem0 - The Memory Layer for Personalized AI


Learn more · Join Discord · Demo


📄 Benchmarking Mem0's token-efficient memory algorithm →

New Memory Algorithm (April 2026)

Benchmark     Old    New    Tokens  Latency p50
LoCoMo        71.4   91.6   7.0K    0.88s
LongMemEval   67.8   93.4   6.8K    1.09s
BEAM (1M)     --     64.1   6.7K    1.00s
BEAM (10M)    --     48.6   6.9K    1.05s

All benchmarks run on the same production-representative model stack. Single-pass retrieval (one call, no agentic loops).

What changed:

  • Single-pass ADD-only extraction -- one LLM call, no UPDATE/DELETE. Memories accumulate; nothing is overwritten.
  • Agent-generated facts are first-class -- when an agent confirms an action, that information is now stored with equal weight.
  • Entity linking -- entities are extracted, embedded, and linked across memories for retrieval boosting.
  • Multi-signal retrieval -- semantic, BM25 keyword, and entity matching scored in parallel and fused.
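As an illustration of how multi-signal fusion can combine semantic, BM25, and entity rankings, here is a minimal reciprocal-rank-fusion sketch. The function name, weights, and memory IDs are hypothetical; this is not Mem0's actual implementation, just the general technique.

```python
from collections import defaultdict

def fuse_rankings(ranked_lists, weights, k=60):
    """Weighted reciprocal-rank fusion: each ranking contributes
    weight / (k + rank) per item; items are re-sorted by fused score."""
    scores = defaultdict(float)
    for docs, w in zip(ranked_lists, weights):
        for rank, doc_id in enumerate(docs):
            scores[doc_id] += w / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical per-signal rankings over memory IDs.
semantic = ["m1", "m2", "m3"]   # vector similarity order
keyword  = ["m2", "m4"]         # BM25 order
entity   = ["m2", "m1"]         # entity-match order

fused = fuse_rankings([semantic, keyword, entity], weights=[1.0, 0.5, 0.5])
# "m2" ranks first: it appears in all three signals.
```

Reciprocal-rank fusion is a common choice here because it needs no score normalization across signals, only ranks.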

See the migration guide for upgrade instructions. The evaluation framework is open-sourced so anyone can reproduce the numbers.

Research Highlights

  • 91.6 on LoCoMo -- +20 points over the previous algorithm
  • 93.4 on LongMemEval -- +26 points, with +53.6 on assistant memory recall
  • 64.1 on BEAM (1M) -- production-scale memory evaluation at 1M tokens
  • Read the full paper

Introduction

Mem0 ("mem-zero") enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. It remembers user preferences, adapts to individual needs, and continuously learns over time—ideal for customer support chatbots, AI assistants, and autonomous systems.

Key Features & Use Cases

Core Capabilities:

  • Multi-Level Memory: Seamlessly retains User, Session, and Agent state with adaptive personalization
  • Developer-Friendly: Intuitive API, cross-platform SDKs, and a fully managed service option

Applications:

  • AI Assistants: Consistent, context-rich conversations
  • Customer Support: Recall past tickets and user history for tailored help
  • Healthcare: Track patient preferences and history for personalized care
  • Productivity & Gaming: Adaptive workflows and environments based on user behavior

🚀 Quickstart Guide

                   Library                Self-Hosted Server                          Cloud Platform
Best for           Testing, prototyping   Teams running on their own infrastructure   Zero-ops production use
Setup              pip install mem0ai     docker compose up                           Sign up at app.mem0.ai
Dashboard          --                     Yes                                         Yes
Auth & API Keys    --                     Yes                                         Yes
Advanced Features  --                     Teasers                                     All included

Just testing? Use the library. Building for a team? Self-hosted. Want zero ops? Cloud.

Library (pip / npm)

pip install mem0ai

For enhanced hybrid search with BM25 keyword matching and entity extraction, install with NLP support:

pip install mem0ai[nlp]
python -m spacy download en_core_web_sm
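For intuition about what the BM25 keyword leg of hybrid search scores, here is a self-contained textbook BM25 sketch. It is illustrative only; Mem0's internal scorer (and its tokenization) may differ.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25.
    Tokenization is naive whitespace splitting for illustration."""
    tokenized = [d.lower().split() for d in docs]
    N = len(docs)
    avgdl = sum(len(t) for t in tokenized) / N
    df = Counter()                       # document frequency per term
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1
    scores = []
    for toks in tokenized:
        tf = Counter(toks)               # term frequency in this doc
        s = 0.0
        for q in query_terms:
            if df[q] == 0:
                continue
            idf = math.log((N - df[q] + 0.5) / (df[q] + 0.5) + 1)
            s += idf * tf[q] * (k1 + 1) / (
                tf[q] + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append(s)
    return scores

docs = ["alice prefers dark mode",
        "bob likes light mode",
        "alice uses vim keybindings"]
scores = bm25_scores(["alice", "vim"], docs)
# The doc matching both rare terms scores highest.
```

Rare terms ("vim") get high IDF weight, which is why keyword matching complements embedding similarity for exact-name recall.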

Install the SDK via npm:

npm install mem0ai

Self-Hosted Server

Note: Self-hosted auth is on by default. If you are upgrading from a pre-auth build, set ADMIN_API_KEY, register an admin through the wizard, or set AUTH_DISABLED=true for local development only. See the upgrade notes.

# Recommended: one command — start the stack, create an admin, issue the first API key.
cd server && make bootstrap

# Manual: start the stack and finish setup via the browser wizard.
cd server && docker compose up -d    # http://localhost:3000

See the self-hosted docs for configuration.

Cloud Platform

  1. Sign up on Mem0 Platform
  2. Embed the memory layer via SDK or API keys

CLI

Manage memories from your terminal:

npm install -g @mem0/cli   # or: pip install mem0-cli

mem0 init
mem0 add "Prefers dark mode and vim keybindings" --user-id alice
mem0 search "What does Alice prefer?" --user-id alice

See the CLI documentation for the full command reference.

Agent Skills

Teach your AI coding assistant (Claude Code, Codex, Cursor, Windsurf, OpenCode, OpenClaw, and any tool that supports the skills standard) how to build with Mem0. Two categories:

Reference skills — always on (SDK knowledge loaded into the assistant's context):

npx skills add https://github.com/mem0ai/mem0 --skill mem0
npx skills add https://github.com/mem0ai/mem0 --skill mem0-cli
npx skills add https://github.com/mem0ai/mem0 --skill mem0-vercel-ai-sdk

Pipeline skills — run on demand (execute an end-to-end workflow in an existing repo):

npx skills add https://github.com/mem0ai/mem0 --skill mem0-integrate
npx skills add https://github.com/mem0ai/mem0 --skill mem0-test-integration

Use /mem0-integrate to wire Mem0 into an existing repo via a test-first pipeline, then /mem0-test-integration to verify. See the skills catalog or Vibecoding with Mem0 for the full picture.

Basic Usage

Mem0 requires an LLM to function and uses OpenAI's gpt-5-mini by default. It supports a variety of LLMs; for details, refer to our Supported LLMs documentation.

Mem0 uses text-embedding-3-small from OpenAI as the default embedding model. For best results with hybrid search (semantic + keyword + entity boosting), we recommend using at least Qwen 600M or a comparable embedding model. See Supported Embeddings for configuration details.
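Defaults can be overridden through a configuration dict passed to Memory.from_config. The sketch below follows the config shape in Mem0's docs, but treat the provider and model names as illustrative and check the Supported LLMs / Supported Embeddings pages for the options your version accepts.

```python
# Illustrative Mem0 configuration overriding the default LLM and embedder.
config = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-5-mini", "temperature": 0.1},
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
}
# memory = Memory.from_config(config)  # requires `from mem0 import Memory`
#                                      # and the provider's API key in the env
```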

The first step is to instantiate the memory:

from openai import OpenAI
from mem0 import Memory

openai_client = OpenAI()
memory = Memory()

def chat_with_memories(message: str, user_id: str = "default_user") -> str:
    # Retrieve relevant memories
    relevant_memories = memory.search(query=message, filters={"user_id": user_id}, top_k=3)
    memories_str = "\n".join(f"- {entry['memory']}" for entry in relevant_memories["results"])

    # Generate Assistant response
    system_prompt = f"You are a helpful AI. Answer the question based on query and memories.\nUser Memories:\n{memories_str}"
    messages = [{"role": "system", "content": system_prompt}, {"role": "user", "content": message}]
    response = openai_client.chat.completions.create(model="gpt-5-mini", messages=messages)
    assistant_response = response.choices[0].message.content

    # Create new memories from the conversation
    messages.append({"role": "assistant", "content": assistant_response})
    memory.add(messages, user_id=user_id)

    return assistant_response

def main():
    print("Chat with AI (type 'exit' to quit)")
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() == 'exit':
            print("Goodbye!")
            break
        print(f"AI: {chat_with_memories(user_input)}")

if __name__ == "__main__":
    main()

For detailed integration steps, see the Quickstart and API Reference.

🔗 Integrations & Demos

  • ChatGPT with Memory: Personalized chat powered by Mem0 (Live Demo)
  • Browser Extension: Store memories across ChatGPT, Perplexity, and Claude (Chrome Extension)
  • Langgraph Support: Build a customer bot with Langgraph + Mem0 (Guide)
  • CrewAI Integration: Tailor CrewAI outputs with Mem0 (Example)

📚 Documentation & Support

Citation

We now have a paper you can cite:

@article{mem0,
  title={Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory},
  author={Chhikara, Prateek and Khant, Dev and Aryan, Saket and Singh, Taranjeet and Yadav, Deshraj},
  journal={arXiv preprint arXiv:2504.19413},
  year={2025}
}

⚖️ License

Apache 2.0 — see the LICENSE file for details.

Project details


Release history

This version

2.0.2

Download files

Download the file for your platform.

Source Distribution

mem0ai-2.0.2.tar.gz (214.6 kB)

Uploaded Source

Built Distribution


mem0ai-2.0.2-py3-none-any.whl (302.3 kB)

Uploaded Python 3

File details

Details for the file mem0ai-2.0.2.tar.gz.

File metadata

  • Download URL: mem0ai-2.0.2.tar.gz
  • Upload date:
  • Size: 214.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for mem0ai-2.0.2.tar.gz

Algorithm    Hash digest
SHA256       482d11a55c8aa00dce29a66eeecbba2fd5f5c7501c054ce8ba606baee3755f99
MD5          45925b2581c5d0186125d91f35cf643f
BLAKE2b-256  ce8f86975bdf11a0aece78b6f4d5551b8b92c38b9bdf5f5764fc879e6cb1227b


Provenance

The following attestation bundles were made for mem0ai-2.0.2.tar.gz:

Publisher: cd.yml on mem0ai/mem0

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file mem0ai-2.0.2-py3-none-any.whl.

File metadata

  • Download URL: mem0ai-2.0.2-py3-none-any.whl
  • Upload date:
  • Size: 302.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for mem0ai-2.0.2-py3-none-any.whl

Algorithm    Hash digest
SHA256       6fda32549c213fb63b393f4b1920f54961ea75c51d95b6492096108609accf3b
MD5          a4d7db1cb05f69cebc95ea0e66a08fb4
BLAKE2b-256  01f859902456c1950e63b6b226634463bc8b24d81697e7114f0740059fd0a270


Provenance

The following attestation bundles were made for mem0ai-2.0.2-py3-none-any.whl:

Publisher: cd.yml on mem0ai/mem0

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
