Claude-skill-registry inmemoria
Use when building persistent codebase intelligence for AI agents or integrating knowledge systems via MCP
install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/inmemoria" ~/.claude/skills/majiayu000-claude-skill-registry-inmemoria && rm -rf "$T"
manifest:
skills/data/inmemoria/SKILL.md · source content
In Memoria: Persistent Codebase Intelligence
In Memoria is an MCP server that learns your codebase patterns once, then exposes that intelligence to AI agents persistently. Instead of re-analyzing code on every interaction, it maintains a semantic understanding of your architecture, conventions, and decisions.
Core Concept
Setup → Learn → Verify → Serve. After that, AI agents query persistent intelligence without repeated parsing.
Quick Start (5 minutes)
```shell
# 1. Configure for your project
npx in-memoria setup --interactive

# 2. Build intelligence database
npx in-memoria learn ./src

# 3. Verify it worked
npx in-memoria check ./src --verbose

# 4. Keep it fresh (optional but recommended)
npx in-memoria watch ./src

# 5. Expose to agents via MCP
npx in-memoria server
```
When to Use
✅ Use In Memoria:
- Building long-lived AI agent partnerships (Claude, Copilot, etc.)
- Projects where consistency across sessions matters
- Teams wanting shared codebase intelligence
❌ Skip it:
- One-off analysis (use `npx in-memoria analyze [path]` directly)
- Simple projects agents can read directly
The 5 Core Commands
| Command | Purpose | When |
|---|---|---|
| `setup` | Configure exclusions, paths, preferences | First time only |
| `learn` | Build/rebuild intelligence database | After setup, major refactors |
| `check` | Validate intelligence layer | After learn, before server |
| `watch` | Auto-update intelligence on code changes | During development (optional) |
| `server` | Start MCP server for agent queries | After check passes |
Key difference:
`learn` builds persistent knowledge; `analyze` is one-time reporting only.
What Agents See
When connected, agents can query:
- Project structure - Tech stack, entry points, architecture
- Code patterns - Your naming conventions, error handling, patterns used
- Smart routing - "Add password reset" → suggests `src/auth/password-reset.ts`
- Semantic search - Find code by meaning, not keywords
- Work context - Track decisions, tasks, approach consistency
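Under the hood, these queries travel over MCP's JSON-RPC 2.0 transport. A minimal sketch of what one tool invocation looks like on the wire (the tool name `semantic_search` and its arguments are hypothetical here; In Memoria's actual tool names may differ, so treat this as shape, not spec):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "semantic_search",
    "arguments": { "query": "password reset flow" }
  }
}
```

Your MCP client builds and sends these messages for you; you only see the tool results surfaced in the agent's context.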
Troubleshooting
| Issue | Fix |
|---|---|
| Learn fails | Verify path is correct; check file permissions |
| Check reports missing intelligence | Run `learn` again |
| Agent doesn't see new code | Is `watch` running? Start it: `npx in-memoria watch ./src` |
| Server won't start | Run `check` first; if issues persist, rebuild with `learn` |
| Multiple projects conflict | Run one server per project directory (or a different port per project) |
Performance Notes
- Small projects (<1K files): 5-15s to learn
- Medium (1K-10K files): 30-60s
- Large (10K+ files): 2-5min
If learning stalls (>10min), verify you're not indexing `node_modules/`, `dist/`, or build artifacts; use `setup`'s exclusion patterns.
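Before a long learn pass, it can help to sanity-check how many files would actually be indexed once the usual noise is excluded. A minimal sketch using only standard `find` (the scratch tree below is illustrative; point `find` at your real project root instead):

```shell
# Illustrative scratch tree standing in for a real project
T=$(mktemp -d)
mkdir -p "$T/src" "$T/node_modules/dep" "$T/dist"
touch "$T/src/app.ts" "$T/src/auth.ts" "$T/node_modules/dep/index.js" "$T/dist/app.js"

# Count candidate files with the usual exclusions applied
COUNT=$(find "$T" -type f \
  -not -path "*/node_modules/*" \
  -not -path "*/dist/*" | wc -l)
echo "$COUNT"   # only the two files under src/ remain

rm -rf "$T"
```

If the count is far larger than expected, add the offending directories to `setup`'s exclusion patterns before running `learn`.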
Key Principles
- Local-first - Everything stays on your machine; no telemetry
- Persistent - One learning pass; intelligence updates incrementally with `watch`
- Agent-native - Designed for MCP; works with Claude, Copilot, and any MCP-compatible tool
- Pattern-based - Learns from your actual code, not rules you define
Deployment Pattern (3 terminals)
```shell
# Terminal 1: One-time setup
npx in-memoria setup --interactive
npx in-memoria learn ./src
npx in-memoria check ./src --verbose

# Terminal 2: Keep intelligence fresh
npx in-memoria watch ./src

# Terminal 3: Expose to agents
npx in-memoria server

# Now agents (Claude, Copilot, etc.) have persistent codebase context
```
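For agents to find the running server, it must also be registered in the MCP client's configuration. A minimal sketch for Claude Desktop's `claude_desktop_config.json` (the `in-memoria` key name is arbitrary, and this follows the standard `mcpServers` format; check the project's README for the exact registration it recommends):

```json
{
  "mcpServers": {
    "in-memoria": {
      "command": "npx",
      "args": ["in-memoria", "server"]
    }
  }
}
```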
See GitHub for full API docs and agent integration examples.