smart-memory
Persistent local transcript-first memory for OpenClaw via a Node adapter and FastAPI engine.
Install
Source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/bluepointdigital/smart-memory" ~/.claude/skills/openclaw-skills-smart-memory && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/bluepointdigital/smart-memory" ~/.openclaw/skills/openclaw-skills-smart-memory && rm -rf "$T"
Manifest: skills/bluepointdigital/smart-memory/SKILL.md
Smart Memory v3.1 Skill
Smart Memory v3.1 is a local transcript-first cognitive memory runtime with revision-aware derivation, pinned context lanes, entity-aware retrieval, and bounded prompt composition.
Core runtime:
- Node adapter: smart-memory/index.js
- Local API: server.py
- System facade: cognitive_memory_system.py
- Canonical store: storage/sqlite_memory_store.py plus transcripts/
Core Capabilities
- transcript-first ingest and per-message transcript logging
- typed long-term memory including preference, identity, and task_state
- evidence-backed revision lifecycle decisions and supersession chains
- explicit core and working memory lanes
- entity-aware retrieval with lightweight relationship hints
- deterministic rebuild from transcript history
- hot-memory compatibility projection for working context
- strict token-bounded prompt composition with trace metadata (see the sketch after this list)
- inspection endpoints for transcripts, evidence, history, lanes, and eval runs
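As a sketch of the composition capability, a call might look like the following. The POST /compose route is documented below, but the address, the request-body field names (session_id, token_budget), and the response shape are assumptions for illustration, not the engine's actual contract:

```js
// Sketch only: request a token-bounded prompt from the local engine.
// The /compose route is documented; the field names here are assumed.
const BASE = "http://127.0.0.1:8000"; // assumed default address

const res = await fetch(`${BASE}/compose`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ session_id: "session_abc", token_budget: 2000 }), // assumed fields
});
const composed = await res.json(); // expected to carry the prompt plus trace metadata
```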
OpenClaw Integration
Use the native wrapper package in skills/smart-memory-openclaw/.
Primary exports:
- createSmartMemorySkill(options)
- createOpenClawHooks({ skill, agentIdentity, summarizeWithLLM })
The wrapper API remains stable; under the hood, the backend is now transcript-first.
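A minimal wiring sketch, assuming the package is imported by path and that you supply your own async summarizer; only the two export names and the option keys shown above come from this manifest:

```js
// Sketch: register the skill with an OpenClaw agent via the native wrapper.
// The import path, option values, and stub summarizer are assumptions.
import {
  createSmartMemorySkill,
  createOpenClawHooks,
} from "./skills/smart-memory-openclaw/index.js";

const skill = createSmartMemorySkill({}); // backend options, e.g. a base URL

const hooks = createOpenClawHooks({
  skill,
  agentIdentity: "my-agent",              // hypothetical identity string
  summarizeWithLLM: async (text) => text, // stub; replace with a real LLM call
});
```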
Tool Interface
memory_search
- purpose: query relevant memory through /retrieve
- supports query, type, limit, min_relevance, and optional conversation_history
- health-checks the backend before execution
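The parameter names in this sketch (query, type, limit, min_relevance, conversation_history) are the documented ones; the address, the health-check-then-query pattern, and the example values are assumptions:

```js
// Sketch: health-check the backend, then query /retrieve as memory_search does.
const BASE = "http://127.0.0.1:8000"; // assumed default address

async function memorySearch(query) {
  const health = await fetch(`${BASE}/health`);
  if (!health.ok) throw new Error("smart-memory backend is not healthy");

  const res = await fetch(`${BASE}/retrieve`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query,
      type: "preference",       // one of the typed memory kinds
      limit: 5,
      min_relevance: 0.4,       // illustrative threshold
      conversation_history: [], // optional recent turns
    }),
  });
  return res.json();
}
```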
memory_commit
- purpose: persist important facts, decisions, beliefs, goals, or session summaries
- health-checks the backend before execution
- serializes commits to protect local embedding throughput
- queues failed commits in .memory_retry_queue.json
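A sketch of the two behaviors above: one in-flight commit at a time, with failures appended to the retry queue. Only the queue filename is documented; the /ingest route as the commit target and everything else here are assumptions about how an adapter might do it:

```js
// Sketch: serialize commits and queue failures for later retry.
import fs from "node:fs";

const QUEUE = ".memory_retry_queue.json"; // documented retry-queue file
let chain = Promise.resolve();            // one in-flight commit at a time

function memoryCommit(payload) {
  chain = chain.then(async () => {
    try {
      const res = await fetch("http://127.0.0.1:8000/ingest", { // assumed commit route
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload),
      });
      if (!res.ok) throw new Error(`ingest failed: ${res.status}`);
    } catch (err) {
      // Park the failed commit so a later pass can retry it.
      const queue = fs.existsSync(QUEUE)
        ? JSON.parse(fs.readFileSync(QUEUE, "utf8"))
        : [];
      queue.push({ payload, error: String(err) });
      fs.writeFileSync(QUEUE, JSON.stringify(queue, null, 2));
    }
  });
  return chain;
}
```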
memory_insights
- purpose: surface pending background insights
- health-checks the backend before execution
- calls /insights/pending
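Polling for insights reduces to a single GET; the route is documented, the address is assumed:

```js
// Sketch: poll for pending background insights.
const res = await fetch("http://127.0.0.1:8000/insights/pending"); // assumed address
const insights = res.ok ? await res.json() : [];
```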
API Endpoints
Core endpoints:
- GET /health
- POST /ingest
- POST /retrieve
- POST /compose
- POST /run_background
- GET /memories
- GET /memory/{memory_id}
- GET /insights/pending
Transcript and inspection endpoints:
- POST /transcripts/message
- GET /transcripts/{session_id}
- GET /transcript/message/{message_id}
- GET /memory/{memory_id}/evidence
- POST /revise
- GET /memory/{memory_id}/history
- GET /memory/{memory_id}/active
- GET /memory/{memory_id}/chain
- GET /lanes/{lane_name}
- POST /lanes/{lane_name}/{memory_id}
- DELETE /lanes/{lane_name}/{memory_id}
- POST /rebuild
- POST /rebuild/{session_id}
- GET /eval/suite/{suite_name}
- GET /eval/case/{case_id}
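To make the lane and rebuild routes concrete: the route shapes below come from the list above, while the memory id, session id, and address are placeholders ("core" is one of the documented lanes):

```js
// Sketch: pin a memory into a lane, inspect the lane, then rebuild one session.
const BASE = "http://127.0.0.1:8000"; // assumed default address

await fetch(`${BASE}/lanes/core/mem_123`, { method: "POST" });  // pin (placeholder id)
const lane = await (await fetch(`${BASE}/lanes/core`)).json();  // inspect lane contents
await fetch(`${BASE}/rebuild/session_abc`, { method: "POST" }); // deterministic re-derive
```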
Operating Guidance
- query memory before speaking when continuity matters
- do not claim prior context unless retrieval actually supports it
- transcripts are canonical, memories are derived
- treat SQLite as canonical runtime storage
- treat JSON as offline export or backup only
- keep the CPU-only PyTorch policy intact
Deprecated
Legacy vector-memory CLI artifacts remain deprecated and should not be revived.