Mem0

install
source · Clone the upstream repo
git clone https://github.com/mem0ai/mem0
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/mem0ai/mem0 "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/mem0" ~/.claude/skills/mem0ai-mem0-mem0-944238 && rm -rf "$T"
manifest: skills/mem0/SKILL.md
source content

Mem0 Platform Integration

Skill Graph: This skill is part of the Mem0 skill graph; related skills are listed at the end of this document.

Mem0 is a managed memory layer for AI applications. It stores, retrieves, and manages user memories via API — no infrastructure to deploy. For self-hosted usage, see the OSS section in the client references below.

Step 1: Install and authenticate

Python:

pip install mem0ai
export MEM0_API_KEY="m0-your-api-key"

TypeScript/JavaScript:

npm install mem0ai
export MEM0_API_KEY="m0-your-api-key"

Get an API key at: https://app.mem0.ai/dashboard/api-keys

Step 2: Initialize the client

Python:

from mem0 import MemoryClient
client = MemoryClient(api_key="m0-xxx")

TypeScript:

import MemoryClient from 'mem0ai';
const client = new MemoryClient({ apiKey: 'm0-xxx' });

For async Python, use `AsyncMemoryClient`.
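The async client is awaited but otherwise mirrors the sync flow. A minimal sketch of the shape, using a stand-in coroutine in place of a live `AsyncMemoryClient` so it runs offline (`fake_search` is purely illustrative):

```python
import asyncio

async def fetch_context(search, query: str, user_id: str) -> str:
    """Await a search coroutine and join memory texts into a context string."""
    results = await search(query, filters={"user_id": user_id})
    return "\n".join(m["memory"] for m in results.get("results", []))

# Stand-in for AsyncMemoryClient.search — swap in the real client in practice
async def fake_search(query, filters=None):
    return {"results": [{"memory": "vegetarian"}]}

print(asyncio.run(fetch_context(fake_search, "diet", "alice")))  # vegetarian
```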

Step 3: Core operations

Every Mem0 integration follows the same pattern: retrieve → generate → store.

Add memories

messages = [
    {"role": "user", "content": "I'm a vegetarian and allergic to nuts."},
    {"role": "assistant", "content": "Got it! I'll remember that."}
]
client.add(messages, user_id="alice")

Search memories

results = client.search("dietary preferences", filters={"user_id": "alice"})
for mem in results.get("results", []):
    print(mem["memory"])
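The `results` payload above is typically folded into a prompt context string before generation. A small helper sketch, run here against a sample dict in the same shape rather than a live response:

```python
def format_context(results: dict) -> str:
    """Join retrieved memory texts into a newline-separated context block."""
    return "\n".join(m["memory"] for m in results.get("results", []))

# Sample payload in the shape returned by client.search()
sample = {"results": [{"memory": "Vegetarian"}, {"memory": "Allergic to nuts"}]}
print(format_context(sample))
```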

Get all memories

all_memories = client.get_all(filters={"user_id": "alice"})

Update a memory

client.update("memory-uuid", text="Updated: vegetarian, nut allergy, prefers organic")

Delete a memory

client.delete("memory-uuid")
client.delete_all(user_id="alice")  # delete all for a user

Common integration pattern

from mem0 import MemoryClient
from openai import OpenAI

mem0 = MemoryClient()
openai = OpenAI()

def chat(user_input: str, user_id: str) -> str:
    # 1. Retrieve relevant memories
    memories = mem0.search(user_input, filters={"user_id": user_id})
    context = "\n".join([m["memory"] for m in memories.get("results", [])])

    # 2. Generate response with memory context
    response = openai.chat.completions.create(
        model="gpt-5-mini",
        messages=[
            {"role": "system", "content": f"User context:\n{context}"},
            {"role": "user", "content": user_input},
        ]
    )
    reply = response.choices[0].message.content

    # 3. Store interaction for future context
    mem0.add(
        [{"role": "user", "content": user_input}, {"role": "assistant", "content": reply}],
        user_id=user_id
    )
    return reply

Common edge cases

  • Search returns empty: Memories process asynchronously. Wait 2-3s after `add()` before searching. Also verify `user_id` matches exactly (case-sensitive) and use the `filters={"user_id": "..."}` syntax.
  • AND filter with user_id + agent_id returns empty: Entities are stored separately. Use OR instead, or query separately.
  • Duplicate memories: Don't mix `infer=True` (default) and `infer=False` for the same data. Stick to one mode.
  • Wrong import: Always use `from mem0 import MemoryClient` (or `AsyncMemoryClient` for async). Do not use `from mem0 import Memory`.
  • v3 defaults: `top_k=20`, `threshold=0.1`, `rerank=False`. Adjust as needed for your use case.
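The first edge case (results are eventually consistent after `add()`) is handled most robustly by polling with a timeout rather than a fixed sleep. A sketch against a stand-in search function so it runs offline — in practice you would pass `client.search`:

```python
import time

def wait_for_memories(search_fn, query, filters, timeout=10.0, interval=0.5):
    """Poll search_fn until it returns results or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while True:
        results = search_fn(query, filters=filters).get("results", [])
        if results or time.monotonic() >= deadline:
            return results
        time.sleep(interval)

# Stand-in that is empty on the first call, mimicking async indexing lag
calls = {"n": 0}
def fake_search(query, filters=None):
    calls["n"] += 1
    return {"results": [{"memory": "vegetarian"}]} if calls["n"] > 1 else {"results": []}

print(wait_for_memories(fake_search, "diet", {"user_id": "alice"}, interval=0.01))
```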

v2 Compatibility

If you're using SDK v2.x, note these differences:

  • Entity IDs: Pass `user_id` as a top-level kwarg to `search()` instead of inside `filters`
  • Defaults: `top_k=100`, no threshold, `rerank=True`
  • Graph memory: Available via `enable_graph=True`

See the migration guide for details.
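If a codebase must straddle both SDK majors, the call-shape difference can be hidden behind a thin wrapper. A hypothetical sketch — `FakeClient` below just records the call shape for illustration; in practice the `client` argument would be a real `MemoryClient`:

```python
def search_compat(client, query, user_id, v2=False, **kwargs):
    """Route user_id the way each SDK major expects it."""
    if v2:
        # v2: entity ID as a top-level kwarg
        return client.search(query, user_id=user_id, **kwargs)
    # v3: entity ID inside the filters dict
    return client.search(query, filters={"user_id": user_id}, **kwargs)

class FakeClient:
    def search(self, query, **kwargs):
        return kwargs  # echo the keyword shape that was sent

print(search_compat(FakeClient(), "diet", "alice"))           # {'filters': {'user_id': 'alice'}}
print(search_compat(FakeClient(), "diet", "alice", v2=True))  # {'user_id': 'alice'}
```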

Live documentation search

For the latest docs beyond what's in the references, use the doc search tool:

python ${CLAUDE_SKILL_DIR}/scripts/mem0_doc_search.py --query "topic"
python ${CLAUDE_SKILL_DIR}/scripts/mem0_doc_search.py --page "/platform/features/graph-memory"
python ${CLAUDE_SKILL_DIR}/scripts/mem0_doc_search.py --index

No API key needed — searches docs.mem0.ai directly.

Client SDK References

Language-specific deep references (Platform + OSS):

| Language | File |
| --- | --- |
| Python (MemoryClient + AsyncMemoryClient + Memory OSS) | client/python.md |
| TypeScript/Node.js (MemoryClient + Memory OSS) | client/node.md |
| Python vs TypeScript differences | client/differences.md |

Platform References

Load these on demand for deeper detail:

| Topic | File |
| --- | --- |
| Quickstart (Python, TS, cURL) | references/quickstart.md |
| SDK guide (all methods, both languages) | references/sdk-guide.md |
| API reference (endpoints, filters, object schema) | references/api-reference.md |
| Architecture (pipeline, lifecycle, scoping, performance) | references/architecture.md |
| Platform features (retrieval, graph, categories, MCP, etc.) | references/features.md |
| Framework integrations (LangChain, CrewAI, OpenAI Agents, etc.) | references/integration-patterns.md |
| Use cases & examples (real-world patterns with code) | references/use-cases.md |

Related Mem0 Skills

| Skill | When to use | Link |
| --- | --- | --- |
| mem0-cli | Terminal commands, scripting, CI/CD, agent tool loops | local / GitHub |
| mem0-vercel-ai-sdk | Vercel AI SDK provider with automatic memory | local / GitHub |