Skills supermemory
install
source · Clone the upstream repo
git clone https://github.com/TerminalSkills/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/TerminalSkills/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/supermemory" ~/.claude/skills/terminalskills-skills-supermemory && rm -rf "$T"
manifest:
skills/supermemory/SKILL.md
safety · automated scan (medium risk)
This is a pattern-based risk scan, not a security review. Our crawler flagged:
- pip install
- references .env files
- references API keys
Always read a skill's source content before installing. Patterns alone don't mean the skill is malicious — but they warrant attention.
source content
Supermemory
Overview
Supermemory is the memory and context layer for AI -- ranked #1 on LongMemEval, LoCoMo, and ConvoMem benchmarks. It automatically extracts facts from conversations, maintains user profiles with ~50ms retrieval, handles temporal changes and contradictions, and delivers the right context at the right time. Supports hybrid search (RAG + memory), connectors (Google Drive, Gmail, Notion, GitHub), and multi-modal input (PDFs, images, videos, code).
Instructions
Installation
npm install supermemory # or pip install supermemory
Get an API key at https://console.supermemory.ai
Core Memory Operations
import Supermemory from "supermemory";

const client = new Supermemory({ apiKey: process.env.SUPERMEMORY_API_KEY });

// Add a memory
const memory = await client.memories.add({
  content: "User prefers dark mode and uses TypeScript exclusively",
  userId: "user_123",
  metadata: { source: "conversation", timestamp: new Date().toISOString() },
});

// Search memories
const results = await client.memories.search({
  query: "user preferences",
  userId: "user_123",
  limit: 5,
});

// Delete a memory
await client.memories.delete(memory.id);
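To work with the hits directly, iterate over the search response. A minimal sketch, assuming the response exposes a results array whose items carry content and a numeric score (the Python example later in this skill reads the same fields):

// Sketch: print rank-ordered hits from the search call above.
// Field names (results.results, hit.content, hit.score) are assumed to
// match the Python example in this document.
for (const hit of results.results) {
  console.log(`[${hit.score.toFixed(2)}] ${hit.content}`);
}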
User Profiles (Auto-maintained)
const profile = await client.users.getProfile("user_123");
// Returns: { stable_facts, recent_activity, preferences }
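A common use of the profile is folding it into a system prompt before calling the model. A hedged sketch, assuming the three fields shown in the comment above are arrays of short strings (the exact shape is not documented here):

// Sketch: build a system-prompt preamble from the auto-maintained profile.
// Assumes stable_facts, recent_activity, and preferences are string arrays.
function profileToSystemPrompt(profile: {
  stable_facts: string[];
  recent_activity: string[];
  preferences: string[];
}): string {
  return [
    "What you know about the user:",
    ...profile.stable_facts.map((f) => `- ${f}`),
    ...profile.preferences.map((p) => `- ${p}`),
    "Recent activity:",
    ...profile.recent_activity.map((a) => `- ${a}`),
  ].join("\n");
}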
Adding Memory to AI Conversations
- Retrieve relevant memories before each response
- Include memory context in the system prompt
- Store new information from each conversation turn
async function chatWithMemory(userId: string, userMessage: string) {
  const memories = await client.memories.search({
    query: userMessage,
    userId,
    limit: 5,
  });
  const memoryContext = memories.results.map(m => `- ${m.content}`).join("\n");

  const response = await claude.messages.create({
    model: "claude-opus-4-5",
    max_tokens: 1024,
    system: `You know this about the user:\n${memoryContext}`,
    messages: [{ role: "user", content: userMessage }],
  });

  await client.memories.add({
    content: `User said: "${userMessage}"`,
    userId,
  });

  return response.content[0].text;
}
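A usage sketch for the helper above. It assumes the claude variable is an Anthropic SDK client constructed elsewhere; the userId and question are illustrative:

// Sketch: construct the model client and call the helper above.
// Assumes claude refers to an @anthropic-ai/sdk client instance.
import Anthropic from "@anthropic-ai/sdk";

const claude = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

const reply = await chatWithMemory("user_123", "Help me pick a database for my app");
console.log(reply);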
Python Usage
from supermemory import Supermemory

client = Supermemory(api_key="your_api_key")

client.memories.add(
    content="User is building a B2B SaaS targeting HR teams",
    user_id="user_123",
)

results = client.memories.search(query="what is the user building", user_id="user_123", limit=3)
for r in results.results:
    print(f"[{r.score:.2f}] {r.content}")
Connectors (Auto-sync External Sources)
await client.connectors.connect({
  type: "google_drive",
  userId: "user_123",
  credentials: { access_token: googleAccessToken },
});

// Search across Drive docs + memories together
const results = await client.memories.search({
  query: "project requirements",
  userId: "user_123",
  includeConnectors: true,
});
MCP Integration (Claude Desktop)
Add to claude_desktop_config.json:
{
  "mcpServers": {
    "supermemory": {
      "command": "npx",
      "args": ["-y", "supermemory-mcp"],
      "env": { "SUPERMEMORY_API_KEY": "your_api_key" }
    }
  }
}
Examples
Example 1: Personal AI Assistant with Memory
Build a chatbot that remembers user preferences across sessions:
- On first conversation: user mentions they work in fintech, prefer Python, and are building a payment API
- The assistant stores this: client.memories.add({ content: "Works in fintech, prefers Python, building payment API", userId })
- Next session, user asks "help me with error handling" -- search returns their context
- System prompt includes: "User works in fintech, prefers Python, is building a payment API"
- Response is tailored: Python error handling examples specific to payment processing, not generic code
- Profile auto-updates: { stable_facts: ["Works in fintech", "Prefers Python"], recent_activity: ["Building payment API"] }
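A compact sketch of this flow, reusing the chatWithMemory helper defined earlier; the userId and the follow-up question are illustrative:

// Sketch of Example 1. First session: store the extracted facts.
await client.memories.add({
  content: "Works in fintech, prefers Python, building payment API",
  userId: "user_123",
});

// Later session: the stored context shapes the answer.
const answer = await chatWithMemory("user_123", "Help me with error handling");
console.log(answer);

// The auto-maintained profile reflects the same facts.
const profile = await client.users.getProfile("user_123");
console.log(profile.stable_facts, profile.recent_activity);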
Example 2: Knowledge Base with Connector Sync
Sync a team's Google Drive and let anyone search across all documents plus conversation history:
- Connect Google Drive: client.connectors.connect({ type: "google_drive", userId: "team_shared" })
- Supermemory indexes all Drive documents automatically
- Team member asks: "What did we decide about the pricing model?"
- Search with includeConnectors: true returns both the pricing doc from Drive and a memory from a previous conversation where the CEO said "let us go with usage-based"
- Response synthesizes both sources: "The pricing doc outlines three tiers, and in your last discussion the team decided on usage-based pricing"
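A hedged sketch of the retrieval step, combining the connector-backed search from the Connectors section with the shared team scope; the "team_shared" id and the question are illustrative:

// Sketch of Example 2: one query spans Drive documents and conversation memories.
const hits = await client.memories.search({
  query: "What did we decide about the pricing model?",
  userId: "team_shared",
  includeConnectors: true,
});

const context = hits.results.map((h) => `- ${h.content}`).join("\n");
// Pass `context` into the model's system prompt, as in chatWithMemory above.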
Guidelines
- Always scope memories to a userId for multi-user applications
- Use metadata to tag memories with source and timestamp for traceability
- Search before adding to avoid duplicate memories -- Supermemory handles contradictions but duplicates waste quota (see the sketch after this list)
- Retrieve 3-5 memories per query for optimal context without noise
- User profiles are auto-maintained -- no need to manually build them
- Free tier: 1,000 memories, 100 searches/day. Pro: $20/month for 100k memories, unlimited search
- Keep API keys in environment variables, never hardcode them
- Connectors sync automatically after initial setup -- no polling required
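A sketch of the search-before-add guideline. addIfNew is a hypothetical helper, not part of the SDK, and the 0.9 similarity threshold is an illustrative assumption rather than a documented value:

// Sketch: skip the add when an existing memory already looks like a near-duplicate.
// addIfNew is a hypothetical helper; the 0.9 score threshold is an assumption.
async function addIfNew(userId: string, content: string) {
  const existing = await client.memories.search({ query: content, userId, limit: 1 });
  const top = existing.results[0];
  if (top && top.score >= 0.9) {
    return top; // close enough to an existing memory; don't spend quota
  }
  return client.memories.add({ content, userId });
}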