Openclaw-superpowers compaction-resilience-guard

Monitors memory compaction for failures and enforces a three-level fallback chain — normal, aggressive, deterministic truncation — ensuring compaction always makes forward progress.

install
source · Clone the upstream repo
git clone https://github.com/ArchieIndian/openclaw-superpowers
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/ArchieIndian/openclaw-superpowers "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/openclaw-native/compaction-resilience-guard" ~/.claude/skills/archieindian-openclaw-superpowers-compaction-resilience-guard && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/ArchieIndian/openclaw-superpowers "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/openclaw-native/compaction-resilience-guard" ~/.openclaw/skills/archieindian-openclaw-superpowers-compaction-resilience-guard && rm -rf "$T"
manifest: skills/openclaw-native/compaction-resilience-guard/SKILL.md
source content

Compaction Resilience Guard

What it does

Memory compaction can fail silently: the LLM produces empty output, summaries that are larger than their input, or garbled text. When this happens, compaction stalls and context overflows.

Compaction Resilience Guard enforces a three-level escalation chain inspired by lossless-claw:

Level | Strategy | When used
--- | --- | ---
L1 — Normal | Standard summarization prompt | First attempt
L2 — Aggressive | Low temperature, reduced reasoning, shorter output target | After L1 failure
L3 — Deterministic | Pure truncation: keep first N + last N lines, drop middle | After L2 failure

This ensures compaction always makes progress — even if the LLM is broken.
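The chain can be sketched as follows. This is a minimal illustration, not guard.py's actual implementation: `summarize` and `is_valid` are stand-ins for the LLM call and the failure checks, and the 30%/20% split at L3 follows the note at the end of this page.

```python
def truncate_middle(text, head_frac=0.3, tail_frac=0.2):
    """L3 fallback: keep the first 30% and last 20% of lines, drop the middle."""
    lines = text.splitlines()
    head = lines[: max(1, int(len(lines) * head_frac))]
    tail = lines[len(lines) - max(1, int(len(lines) * tail_frac)):]
    return "\n".join(head + ["[TRUNCATED]"] + tail)


def compact(text, summarize, is_valid):
    """Walk the escalation chain; never finish without making progress."""
    attempts = (
        (1, {"temperature": 0.7}),  # L1: normal summarization prompt
        (2, {"temperature": 0.0}),  # L2: aggressive -- low temperature, shorter target
    )
    for level, opts in attempts:
        summary = summarize(text, **opts)
        if is_valid(summary, text):
            return level, summary
    # L3 never touches the LLM, so it always succeeds.
    return 3, truncate_middle(text)
```

Because L3 is pure string manipulation, the chain terminates with a smaller context even when every LLM call misbehaves.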

When to invoke

  • After any compaction event — validate the output
  • When context usage approaches 90% — compaction may be failing
  • When summaries seem unusually long or empty — detect inflation
  • As a pre-check before memory-dag-compactor runs

How to use

python3 guard.py --check                       # Validate recent compaction outputs
python3 guard.py --check --file <summary.yaml> # Check a specific summary file
python3 guard.py --simulate <text>             # Run the 3-level chain on sample text
python3 guard.py --report                      # Show failure/escalation history
python3 guard.py --status                      # Last check summary
python3 guard.py --format json                 # Machine-readable output

Failure detection

The guard detects these compaction failures:

Failure | How detected | Action
--- | --- | ---
Empty output | Summary length < 10 chars | Escalate to next level
Inflation | Summary tokens > input tokens | Escalate to next level
Garbled text | Entropy score > 5.0 (random chars) | Escalate to next level
Repetition | Same 20+ char phrase repeated 3+ times | Escalate to next level
Truncation marker | Contains [FALLBACK] or [TRUNCATED] | Record as L3 usage
Stale | Summary unchanged from previous run | Flag for review
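The checks in the table map naturally onto a few lines of Python. The sketch below is illustrative rather than guard.py's real code: the function names are assumptions, character counts stand in for tokens, and the stale check is omitted because it needs the previous run's state.

```python
import math
import re
from collections import Counter


def shannon_entropy(text):
    """Shannon entropy in bits per character over the character distribution."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())


def detect_failure(summary, source):
    """Return the first detected failure name, or None if the summary looks valid."""
    if len(summary) < 10:
        return "empty"
    if len(summary) > len(source):  # chars as a rough stand-in for tokens
        return "inflation"
    if shannon_entropy(summary) > 5.0:
        return "garbled"
    # Any 20+ char phrase occurring 3+ times (backreference matched twice more).
    if re.search(r"(.{20,}?)(?:.*?\1){2}", summary, re.DOTALL):
        return "repetition"
    if "[FALLBACK]" in summary or "[TRUNCATED]" in summary:
        return "truncation_marker"
    return None
```

Note that normal English prose sits around 4 bits/char of character-level entropy, so the 5.0 threshold only trips on near-random output.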

Procedure

Step 1 — Check recent compaction outputs

python3 guard.py --check

Validates all summary nodes in memory-dag-compactor state. Reports failures by level and whether escalation was needed.

Step 2 — Simulate the fallback chain

python3 guard.py --simulate "$(cat long-text.txt)"

Runs the 3-level chain on sample text to test that each level produces valid output.

Step 3 — Review escalation history

python3 guard.py --report

Shows how often each level was used. High L2/L3 usage indicates the primary summarization prompt needs improvement.
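As a worked example of reading that report, suppose level usage over 200 compactions looked like this (the counts and the `level_usage` name are hypothetical, mirroring the state fields below; the 10% L3 red line comes from the notes at the end, while the 20% combined-escalation threshold is an assumed illustration):

```python
# Hypothetical escalation counts, e.g. as recorded under level_usage in state.
level_usage = {"L1": 180, "L2": 14, "L3": 6}

total = sum(level_usage.values())    # 200 compactions
l3_rate = level_usage["L3"] / total  # 6/200 = 3%, under the 10% red line
if l3_rate > 0.10:
    print("High L3 usage: primary summarization is likely broken")
elif level_usage["L2"] + level_usage["L3"] > 0.2 * total:
    print("Frequent escalation: tune the L1 summarization prompt")
```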

State

Failure counts, escalation history, and per-summary validation results are stored in ~/.openclaw/skill-state/compaction-resilience-guard/state.yaml.

Fields: last_check_at, level_usage, failures, check_history.

Notes

  • Read-only monitoring — does not perform compaction itself
  • Works alongside memory-dag-compactor as a quality gate
  • Deterministic truncation (L3) preserves first 30% and last 20% of input, drops middle
  • Entropy is measured using Shannon entropy on character distribution
  • High L3 usage (>10% of compactions) suggests a systemic LLM issue