Mem0 mem0-vercel-ai-sdk

Install

Clone the upstream repo:

```bash
git clone https://github.com/mem0ai/mem0
```

Claude Code: install into ~/.claude/skills/

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/mem0ai/mem0 "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/mem0-vercel-ai-sdk" ~/.claude/skills/mem0ai-mem0-mem0-vercel-ai-sdk && rm -rf "$T"
```

Manifest: skills/mem0-vercel-ai-sdk/SKILL.md
Mem0 Vercel AI SDK Provider
Memory-enhanced AI provider for Vercel AI SDK. Automatically retrieves and stores memories during LLM calls.
Step 1: Install
```bash
npm install @mem0/vercel-ai-provider ai
```
Step 2: Set up environment variables
```bash
export MEM0_API_KEY="m0-xxx"
export OPENAI_API_KEY="sk-xxx"  # or ANTHROPIC_API_KEY, GOOGLE_API_KEY, etc.
```
Get a Mem0 API key at: https://app.mem0.ai/dashboard/api-keys
Pattern 1: Wrapped Model
The wrapped model approach is the simplest: `createMem0` returns a provider that wraps any supported LLM with automatic memory retrieval and storage.

```ts
import { generateText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0();

const { text } = await generateText({
  model: mem0("gpt-5-mini", { user_id: "alice" }),
  prompt: "Recommend a restaurant",
});
```
What happens under the hood:
- The prompt is sent to Mem0 search (`POST /v3/memories/search/`) to retrieve relevant memories
- Retrieved memories are injected as a system message at the start of the prompt
- The underlying LLM (e.g., OpenAI gpt-5-mini) generates a response using the enriched prompt
- The conversation is stored back to Mem0 (`POST /v3/memories/add/`) as a fire-and-forget async call (no `await`)
Pattern 2: Standalone Utilities
Use the standalone utilities when you want full control over the memory retrieve/store cycle, or when you want to use a model provider you have already configured separately.
```ts
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";
import { retrieveMemories, addMemories } from "@mem0/vercel-ai-provider";

const prompt = "Recommend a restaurant";

// Retrieve memories -- returns a formatted system prompt string
const memories = await retrieveMemories(prompt, {
  user_id: "alice",
  mem0ApiKey: "m0-xxx",
});

// Generate using any provider with injected memories
const { text } = await generateText({
  model: openai("gpt-5-mini"),
  prompt,
  system: memories,
});

// Optionally store the conversation back
await addMemories(
  [
    { role: "user", content: [{ type: "text", text: prompt }] },
    { role: "assistant", content: [{ type: "text", text }] },
  ],
  { user_id: "alice", mem0ApiKey: "m0-xxx" }
);
```
Pattern 3: Streaming
Use `streamText` for streaming responses with memory augmentation:

```ts
import { streamText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0();

const result = streamText({
  model: mem0("gpt-5-mini", { user_id: "alice" }),
  prompt: "What should I cook for dinner?",
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```
The wrapped model handles memory retrieval before streaming begins and stores the conversation after.
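If you need streaming with the standalone utilities instead (for example, to await memory storage explicitly rather than relying on fire-and-forget), the same retrieve/inject/store cycle from Pattern 2 works with `streamText`. A minimal sketch, assuming `MEM0_API_KEY` is set in the environment:

```ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";
import { retrieveMemories, addMemories } from "@mem0/vercel-ai-provider";

const prompt = "What should I cook for dinner?";

// Fetch memories up front and inject them as the system prompt
// (mem0ApiKey omitted here; MEM0_API_KEY is read from the environment)
const memories = await retrieveMemories(prompt, { user_id: "alice" });

const result = streamText({
  model: openai("gpt-5-mini"),
  prompt,
  system: memories,
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

// Unlike the wrapped model, storage here is explicit and awaited
const text = await result.text;
await addMemories(
  [
    { role: "user", content: [{ type: "text", text: prompt }] },
    { role: "assistant", content: [{ type: "text", text }] },
  ],
  { user_id: "alice" }
);
```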
Supported Providers
| Provider | Config value | Required env var |
|---|---|---|
| OpenAI (default) | `"openai"` | `OPENAI_API_KEY` |
| Anthropic | `"anthropic"` | `ANTHROPIC_API_KEY` |
| Google | `"google"` | `GOOGLE_API_KEY` |
| Groq | `"groq"` | `GROQ_API_KEY` |
| Cohere | `"cohere"` | `COHERE_API_KEY` |
Select a provider when creating the Mem0 instance:
```ts
const mem0 = createMem0({ provider: "anthropic" });

const { text } = await generateText({
  model: mem0("claude-sonnet-4-5", { user_id: "alice" }),
  prompt: "Hello!",
});
```

Pass a model id that belongs to the selected provider (here an Anthropic model, not an OpenAI one).
How It Works Internally
Wrapped model flow
1. User prompt --> `searchInternalMemories` (`POST /v3/memories/search/`)
2. Retrieved memories injected as a system message at the start of the prompt
3. Underlying LLM generates the response (`doGenerate` or `doStream`)
4. `processMemories` fires `addMemories` as fire-and-forget (no `await`)
5. Response returned to the caller
Standalone flow
User controls each step:

1. `retrieveMemories` / `getMemories` / `searchMemories` -> fetch memories
2. Inject into the system prompt manually
3. Call `generateText` / `streamText` with any provider
4. `addMemories` -> store the new conversation to Mem0
Key Differences Between the 4 Utility Functions
| Function | Returns | Use when |
|---|---|---|
| `retrieveMemories` | Formatted system prompt string | Injecting directly into the `system` parameter |
| `getMemories` | Raw memory array | Processing memories programmatically |
| `searchMemories` | Full search response (results + relations) | You need relations, scores, metadata |
| `addMemories` | API response | Storing new messages to Mem0 |
All four accept `LanguageModelV2Prompt | string` as the first argument and an optional `Mem0ConfigSettings` object as the second.
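A sketch of the two lower-level fetchers. The field names in the comments (`memory`, `results`) are assumptions based on the Mem0 search API; check the provider's exported types for the exact shapes:

```ts
import { getMemories, searchMemories } from "@mem0/vercel-ai-provider";

const config = { user_id: "alice" }; // MEM0_API_KEY read from the environment

// Raw memory array -- iterate and format however you like.
// The `memory` field name is an assumption, not confirmed by this doc.
const memories = await getMemories("Recommend a restaurant", config);
for (const m of memories) {
  console.log(m.memory);
}

// Full search response -- includes relations/scores alongside results.
const response = await searchMemories("Recommend a restaurant", config);
console.log(response.results?.length, "results");
```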
Common Edge Cases and Tips
- Always provide `user_id` (or `agent_id` / `app_id` / `run_id`) for consistent memory retrieval. Without an entity identifier, memories cannot be scoped.
- Standalone utilities require an explicit API key: pass `mem0ApiKey` in the config object, or set the `MEM0_API_KEY` environment variable.
- This uses Vercel AI SDK v5 (the LanguageModelV2 / ProviderV2 interfaces). It is not compatible with AI SDK v3 or v4.
- `processMemories` fires `addMemories` as fire-and-forget (`.then()` without `await`). Memory storage happens asynchronously and does not block the LLM response.
- The `"gemini"` alias exists in the provider switch but is NOT in the `supportedProviders` list. Use `"google"` instead.
- Custom host: set `host` in the config to point to a different Mem0 API endpoint (default: `https://api.mem0.ai`); see the sketch after this list.
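A minimal configuration sketch pulling these options together. The `provider` key is documented above; passing `mem0ApiKey` and `host` to `createMem0` is an assumption that it shares the `Mem0ConfigSettings` keys used by the standalone utilities:

```ts
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0({
  provider: "google",                    // use "google", not "gemini" (see tip above)
  mem0ApiKey: process.env.MEM0_API_KEY,  // assumption: same key as standalone config
  host: "https://api.mem0.ai",           // override for self-hosted or proxy endpoints
});
```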
References
| Topic | File |
|---|---|
| Provider API | local / GitHub |
| Memory utilities | local / GitHub |
| Usage patterns and examples | local / GitHub |
Related Mem0 Skills
| Skill | When to use | Link |
|---|---|---|
| mem0 | Python/TypeScript SDK, REST API, framework integrations | local / GitHub |
| mem0-cli | Terminal commands, scripting, CI/CD, agent tool loops | local / GitHub |