Skills · memory-guardian
install
source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/5rbdmak7f-alt/memory-guardian-agent" ~/.claude/skills/openclaw-skills-memory-guardian && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/5rbdmak7f-alt/memory-guardian-agent" ~/.openclaw/skills/openclaw-skills-memory-guardian && rm -rf "$T"
manifest:
skills/5rbdmak7f-alt/memory-guardian-agent/SKILL.md
memory-guardian
Workspace memory lifecycle system. Dual-layer Bayesian decay + four-state quality gate + case lifecycle + compaction.
Design Principles
- Check status before any write — memory_status first
- Dry-run before apply — preview destructive operations
- Default behavior > toggles — workspace defaults from MG_WORKSPACE
- Write ordering > content correctness — sequence matters
- Observable but not brittle — signal degradation → WARNING, not crash
- Single source of truth — all defaults in mg_schema/meta_defaults.py
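The dry-run-before-apply principle reduces to a small wrapper pattern. The sketch below is illustrative only: `apply_safely` and the `"changes"` key in the preview result are assumptions for the example, not part of the memory-guardian API.

```python
# Hypothetical wrapper illustrating the dry-run-before-apply principle.
# `op` stands in for any destructive tool that accepts a dry_run flag
# (e.g. memory_compact or memory_decay); the "changes" key is assumed.

def apply_safely(op, **kwargs):
    """Preview a destructive operation, then apply only if it would change anything."""
    preview = op(dry_run=True, **kwargs)
    if not preview.get("changes"):
        return preview  # nothing to do; skip the real write
    return op(dry_run=False, **kwargs)
```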
MCP Tools (10)
Workspace defaults come from the MG_WORKSPACE env var; only non-default params are listed.
Query
- memory_status() — System overview (memory count / gate state / case summary / references integrity)
- memory_query(type="active", min_score=0.3) — Search memories (keyword / memory_type filter)
Write
- memory_ingest(content="...", importance="auto", tags=[]) — Create a new memory
- memory_decay(lambda=0.01, dry_run=false) — Run five-track Bayesian decay
Audit
- quality_check(layer="all") — Quality gate (retire_rate / similar_case_signal / stale_cases)
- case_query(filter="frozen") — Query cases (active/frozen/retired/stale/ignored)
- case_review(case_id, action="retire", origin_type="agent_initiated") — Case operations (active/frozen/retired/unfreeze/ignore)
Batch
- run_batch(skip_compact=true, dry_run=false, timeout=300) — Full maintenance (includes sync + signal merge)
- memory_sync(dry_run=true) — Sync file changes → meta.json (auto-run in run_batch)
- memory_compact(dry_run=true, aggressive=false) — Compact MEMORY.md
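For intuition, memory_decay's score decay can be pictured as a single exponential track. The sketch below is a deliberate simplification: the real engine runs five tracks with extra terms (β scar, PID gains; see parameters.md), and `decayed_score` is a hypothetical helper, not a shipped function.

```python
import math

def decayed_score(score: float, days_since_access: float, lam: float = 0.01) -> float:
    """Single-track exponential decay sketch: score shrinks as e^(-lambda * t).

    Illustration only; the real engine runs five tracks with additional
    terms (see parameters.md).
    """
    return score * math.exp(-lam * days_since_access)
```

With the default lambda of 0.01, a memory untouched for 100 days keeps roughly e⁻¹ ≈ 37% of its score under this simplified model.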
Workflows
New Installation
- Run memory_status → confirm references.complete: true
- If false, create the missing files per the missing list, then re-verify
- Create cron task (see signal-loop.md for cron template)
- Manually trigger once to verify
Daily Maintenance (cron)
run_batch(skip_compact=true) runs automatically. Includes:
- memory_sync (incremental file scan)
- Signal merge (access_log + cron inference)
- Decay + quality gate check
- Compact triggers only when MEMORY.md > 15KB
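The size trigger in the last bullet can be checked locally. A minimal sketch, assuming "15KB" means 15 × 1024 bytes and with `needs_compact` as a hypothetical helper:

```python
from pathlib import Path

COMPACT_THRESHOLD_BYTES = 15 * 1024  # assumption: "15KB" read as 15 KiB

def needs_compact(memory_md: Path) -> bool:
    """True when MEMORY.md exists and exceeds the compaction threshold."""
    return memory_md.exists() and memory_md.stat().st_size > COMPACT_THRESHOLD_BYTES
```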
Diagnostics
- D1: Memory bloat → memory_compact(dry_run=true) → apply if needed → see compaction.md
- D2: Quality anomaly → quality_check(layer="all") → see error_recovery.md
- D3: Case invalidation → case_query(filter="stale") → case_review(action="retire"|"active"|"unfreeze") → see case-management.md
Signal Loop (v0.4.6)
Dual-layer access signals feed the decay engine:
- Layer 1 (weight 1.0): access_log.jsonl — agent appends after memory_get
- Layer 2 (weight 0.5): cron keyword inference from daily notes
- Health check auto-degrades to Layer 2 if access_log stale > 24h
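The layer weights and the 24-hour degradation rule above can be sketched as a merge function. All names and signatures here are illustrative assumptions; the shipped logic is in signal-loop.md.

```python
import time
from typing import Optional

LAYER1_WEIGHT = 1.0   # direct access_log.jsonl signal
LAYER2_WEIGHT = 0.5   # cron keyword inference
STALE_AFTER_S = 24 * 3600

def merged_signal(layer1_count: int, layer2_count: int,
                  last_log_write_ts: float, now: Optional[float] = None) -> float:
    """Weighted signal merge; degrades to Layer 2 only when the access log is stale >24h."""
    now = time.time() if now is None else now
    if now - last_log_write_ts > STALE_AFTER_S:
        return layer2_count * LAYER2_WEIGHT  # degraded: ignore stale Layer 1
    return layer1_count * LAYER1_WEIGHT + layer2_count * LAYER2_WEIGHT
```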
Agent must append to access_log.jsonl after each memory_get call. See signal-loop.md for integration code.
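A minimal append helper might look like the sketch below; the record fields (memory_id, ts) are assumptions, and signal-loop.md remains the authoritative source for the integration code.

```python
import json
import time
from pathlib import Path

def log_access(access_log: Path, memory_id: str) -> None:
    """Append one access record per memory_get call (one JSON object per line)."""
    record = {"memory_id": memory_id, "ts": time.time()}  # field names assumed
    with access_log.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```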
References
Load on demand per scenario:
- signal-loop.md — Signal loop setup, AGENTS.md integration code, cron template, config fields
- triggers.md — Trigger/anti-trigger rules
- parameters.md — Decay params, β scar, PID gains, TTL
- compaction.md — D1 diagnosis and compaction strategy
- error_recovery.md — D2 diagnosis, anomaly states, self-healing
- case-management.md — D3 diagnosis, case audit, L3 review
- advanced-tools.md — Quiet degradation, topic lock, PID adaptive
CLI Fallback
When MCP is unavailable, fall back to the CLI (path relative to the skill dir):
python3 scripts/memory_guardian.py <command> [--workspace <path>]
Commands:
status, ingest, bootstrap, snapshot, run, violations, migrate