# Mem0
```bash
# clone the full repository
git clone https://github.com/mem0ai/mem0

# or copy just the skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/mem0ai/mem0 "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/mem0" ~/.claude/skills/mem0ai-mem0-mem0-944238 && rm -rf "$T"
```
`skills/mem0/SKILL.md`

## Mem0 Platform Integration
Skill Graph: This skill is part of the Mem0 skill graph:
- mem0 (this skill) -- Platform Client SDK + OSS (Python + TypeScript)
- mem0-cli (GitHub) -- Command-line interface
- mem0-vercel-ai-sdk (GitHub) -- Vercel AI SDK provider
Mem0 is a managed memory layer for AI applications. It stores, retrieves, and manages user memories via API — no infrastructure to deploy. For self-hosted usage, see the OSS section in the client references below.
### Step 1: Install and authenticate
Python:
```bash
pip install mem0ai
export MEM0_API_KEY="m0-your-api-key"
```
TypeScript/JavaScript:
```bash
npm install mem0ai
export MEM0_API_KEY="m0-your-api-key"
```
Get an API key at: https://app.mem0.ai/dashboard/api-keys
### Step 2: Initialize the client
Python:
```python
from mem0 import MemoryClient

client = MemoryClient(api_key="m0-xxx")
```
TypeScript:
```typescript
import MemoryClient from 'mem0ai';

const client = new MemoryClient({ apiKey: 'm0-xxx' });
```
For async Python, use `AsyncMemoryClient`.
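With the async client every method is awaited, which makes it easy to fan out several lookups concurrently. A minimal sketch, assuming only that behavior: `gather_searches` is a hypothetical helper (not part of the SDK) that accepts any object with an awaitable `search(query, filters=...)` method, such as an `AsyncMemoryClient` instance.

```python
import asyncio

async def gather_searches(client, queries, user_id):
    """Run several memory searches concurrently.

    `client` is anything with an awaitable .search(query, filters=...)
    method, e.g. an AsyncMemoryClient instance.
    """
    tasks = [client.search(q, filters={"user_id": user_id}) for q in queries]
    return await asyncio.gather(*tasks)
```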
### Step 3: Core operations
Every Mem0 integration follows the same pattern: retrieve → generate → store.
#### Add memories
```python
messages = [
    {"role": "user", "content": "I'm a vegetarian and allergic to nuts."},
    {"role": "assistant", "content": "Got it! I'll remember that."},
]
client.add(messages, user_id="alice")
```
#### Search memories
```python
results = client.search("dietary preferences", filters={"user_id": "alice"})
for mem in results.get("results", []):
    print(mem["memory"])
```
#### Get all memories
```python
all_memories = client.get_all(filters={"user_id": "alice"})
```
#### Update a memory
```python
client.update("memory-uuid", text="Updated: vegetarian, nut allergy, prefers organic")
```
#### Delete a memory
```python
client.delete("memory-uuid")
client.delete_all(user_id="alice")  # delete all for a user
```
#### Common integration pattern
```python
from mem0 import MemoryClient
from openai import OpenAI

mem0 = MemoryClient()
openai = OpenAI()

def chat(user_input: str, user_id: str) -> str:
    # 1. Retrieve relevant memories
    memories = mem0.search(user_input, filters={"user_id": user_id})
    context = "\n".join([m["memory"] for m in memories.get("results", [])])

    # 2. Generate response with memory context
    response = openai.chat.completions.create(
        model="gpt-5-mini",
        messages=[
            {"role": "system", "content": f"User context:\n{context}"},
            {"role": "user", "content": user_input},
        ],
    )
    reply = response.choices[0].message.content

    # 3. Store interaction for future context
    mem0.add(
        [{"role": "user", "content": user_input},
         {"role": "assistant", "content": reply}],
        user_id=user_id,
    )
    return reply
```
### Common edge cases
- Search returns empty: Memories process asynchronously. Wait 2-3s after `add()` before searching. Also verify `user_id` matches exactly (case-sensitive) and use `filters={"user_id": "..."}` syntax.
- AND filter with user_id + agent_id returns empty: Entities are stored separately. Use `OR` instead, or query separately.
- Duplicate memories: Don't mix `infer=True` (default) and `infer=False` for the same data. Stick to one mode.
- Wrong import: Always use `from mem0 import MemoryClient` (or `AsyncMemoryClient` for async). Do not use `from mem0 import Memory`.
- v3 defaults: `top_k=20`, `threshold=0.1`, `rerank=False`. Adjust as needed for your use case.
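Since memories are written asynchronously, a search issued immediately after `add()` can legitimately come back empty, and a short poll is more robust than a fixed sleep. A minimal sketch of that idea: `search_with_retry` is a hypothetical helper (not an SDK feature), and `search_fn` stands in for any callable returning the usual `{"results": [...]}` shape, e.g. `client.search` bound with your filters.

```python
import time

def search_with_retry(search_fn, query, attempts=3, delay=2.0):
    """Poll a search callable until results appear.

    search_fn: callable taking a query string and returning a dict
    with a "results" list, e.g.
    lambda q: client.search(q, filters={"user_id": "alice"}).
    """
    for i in range(attempts):
        results = search_fn(query).get("results", [])
        if results:
            return results
        if i < attempts - 1:
            time.sleep(delay)  # give async memory processing time to finish
    return []
```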
### v2 Compatibility
If you're using SDK v2.x, note these differences:
- Entity IDs: Pass `user_id` as a top-level kwarg to `search()` instead of inside `filters`
- Defaults: `top_k=100`, no threshold, `rerank=True`
- Graph memory: Available via `enable_graph=True`
See the migration guide for details.
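Code that must run against both SDK generations can hide the differing `search()` call shapes behind a small dispatch function. This is a hypothetical sketch assuming only the v2 vs v3 signatures listed above; `search_memories` is not part of either SDK.

```python
def search_memories(client, query, user_id, sdk_v3=True):
    """Dispatch between the v3 and v2 search() call shapes.

    v3 scopes the query via filters={"user_id": ...}; v2 took
    user_id as a top-level keyword argument instead.
    """
    if sdk_v3:
        return client.search(query, filters={"user_id": user_id})
    return client.search(query, user_id=user_id)
```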
### Live documentation search
For the latest docs beyond what's in the references, use the doc search tool:
```bash
python ${CLAUDE_SKILL_DIR}/scripts/mem0_doc_search.py --query "topic"
python ${CLAUDE_SKILL_DIR}/scripts/mem0_doc_search.py --page "/platform/features/graph-memory"
python ${CLAUDE_SKILL_DIR}/scripts/mem0_doc_search.py --index
```
No API key needed — searches docs.mem0.ai directly.
### Client SDK References
Language-specific deep references (Platform + OSS):
| Language | File |
|---|---|
| Python (MemoryClient + AsyncMemoryClient + Memory OSS) | client/python.md |
| TypeScript/Node.js (MemoryClient + Memory OSS) | client/node.md |
| Python vs TypeScript differences | client/differences.md |
### Platform References
Load these on demand for deeper detail:
| Topic | File |
|---|---|
| Quickstart (Python, TS, cURL) | references/quickstart.md |
| SDK guide (all methods, both languages) | references/sdk-guide.md |
| API reference (endpoints, filters, object schema) | references/api-reference.md |
| Architecture (pipeline, lifecycle, scoping, performance) | references/architecture.md |
| Platform features (retrieval, graph, categories, MCP, etc.) | references/features.md |
| Framework integrations (LangChain, CrewAI, OpenAI Agents, etc.) | references/integration-patterns.md |
| Use cases & examples (real-world patterns with code) | references/use-cases.md |
### Related Mem0 Skills
| Skill | When to use | Link |
|---|---|---|
| mem0-cli | Terminal commands, scripting, CI/CD, agent tool loops | local / GitHub |
| mem0-vercel-ai-sdk | Vercel AI SDK provider with automatic memory | local / GitHub |