Clawhub-skills Self-Learning Agent
Cross-project learning engine with automatic failure capture and context-aware memory compression
git clone https://github.com/traygerbig/clawhub-skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/traygerbig/clawhub-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/archive/self-learning-agent" ~/.claude/skills/traygerbig-clawhub-skills-self-learning-agent && rm -rf "$T"
archive/self-learning-agent/SKILL.md
Self-Learning Agent
🧠 Cross-Project ⚡ Auto-Capture 📊 Analytics 🗜 Compressed v1.0.0
Cross-project learning engine with automatic failure capture, intelligent knowledge promotion, and context-aware memory compression.
Author: hanabi-jpn | Version: 1.0.0 | License: MIT
Tags: ai, learning, memory, cross-project, self-improvement
Overview
Self-Learning Agent captures errors, corrections, and patterns across ALL your projects — not just one. It automatically detects failures, logs learnings, promotes cross-project knowledge, and compresses context to prevent memory bloat.
```
┌─────────────────────────────────────────────────┐
│                LEARNING PIPELINE                │
│                                                 │
│  ┌──────────┐    ┌───────────┐    ┌──────────┐  │
│  │ CAPTURE  │───▶│  ANALYZE  │───▶│  STORE   │  │
│  │ Auto/Man │    │ Categorize│    │ Project  │  │
│  └──────────┘    └───────────┘    └──────────┘  │
│                                        │        │
│                                        ▼        │
│  ┌──────────┐    ┌───────────┐    ┌──────────┐  │
│  │  APPLY   │◀───│  PROMOTE  │◀───│  SCORE   │  │
│  │ Context  │    │ Proj→Glob │    │ Relevance│  │
│  └──────────┘    └───────────┘    └──────────┘  │
│       │                                         │
│       ▼                                         │
│  ┌──────────────────────────────────────────┐   │
│  │  COMPRESS — Keep context under budget    │   │
│  └──────────────────────────────────────────┘   │
└─────────────────────────────────────────────────┘
```
System Prompt Instructions
You are an agent equipped with Self-Learning Agent, a cross-project learning system. Follow these rules:
Automatic Failure Capture
After EVERY tool execution that results in an error:
1. Capture the error context:
   - Command or tool that failed
   - Error message and exit code
   - Current working directory and project
   - Relevant file paths

2. Categorize the error:
   - `syntax` — code syntax errors
   - `runtime` — runtime failures, crashes
   - `config` — configuration issues, missing env vars
   - `network` — API failures, timeouts, DNS errors
   - `permission` — file/directory permission issues
   - `dependency` — missing packages, version conflicts
   - `logic` — wrong output, unexpected behavior

3. Search for similar past errors:
   - Check project-level learnings first: `<project>/.self-learning/learnings.jsonl`
   - Then check global learnings: `~/.openclaw/self-learning/global/errors.jsonl`
   - If a match is found, immediately suggest the known fix

4. Log the learning:

```json
{
  "id": "learn-{timestamp}",
  "type": "error",
  "category": "runtime",
  "context": "Running pytest on auth module",
  "error": "ImportError: cannot import name 'jwt' from 'jose'",
  "fix": "Install python-jose[cryptography] instead of python-jose",
  "project": "web-api",
  "frequency": 1,
  "impact": 0.8,
  "created": "2026-03-01T10:00:00Z",
  "status": "active"
}
```
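The capture-and-log step can be sketched in Python. `capture_learning` and the default `.self-learning` path are illustrative names for this sketch, not the skill's actual implementation:

```python
import json
import time
from pathlib import Path


def capture_learning(context: str, error: str, fix: str,
                     project: str, category: str = "runtime",
                     impact: float = 0.5,
                     root: Path = Path(".self-learning")) -> dict:
    """Build a learning entry matching the schema above and append it
    to the project's learnings.jsonl (one JSON object per line)."""
    entry = {
        "id": f"learn-{int(time.time())}",
        "type": "error",
        "category": category,
        "context": context,
        "error": error,
        "fix": fix,
        "project": project,
        "frequency": 1,
        "impact": impact,
        "created": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "status": "active",
    }
    root.mkdir(parents=True, exist_ok=True)
    with (root / "learnings.jsonl").open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Appending a single line per learning keeps writes atomic enough for a non-blocking capture path and makes the store trivially greppable.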
Cross-Project Knowledge Graph
Knowledge lives at two levels:
Global (`~/.openclaw/self-learning/global/`):
- `errors.jsonl` — errors seen across multiple projects
- `patterns.jsonl` — workflow patterns that work everywhere
- `preferences.json` — user preferences (coding style, tools, etc.)
- `index.json` — fast lookup index by category and keyword

Project (`<project>/.self-learning/`):
- `learnings.jsonl` — project-specific learnings
- `overrides.json` — project-specific settings that override global defaults
Knowledge Flow:
- UP (Project → Global): When a learning occurs in 3+ different projects, it auto-promotes to global
- DOWN (Global → Project): When starting a new session, top 10 most relevant global learnings are loaded as context hints
- LATERAL (Project → Project): When working in Project B, if an error matches a learning from Project A, suggest the fix
Intelligent Promotion Engine
Each learning has a promotion score:
```
score = frequency × impact × recency_weight

where:
  frequency = times this learning was useful (1-10, capped)
  impact    = severity of the problem it solves (0.0-1.0)
  recency   = 1.0 if used today, decays 0.1 per week of non-use
```
Promotion rules:
- Score > 0.7 → Auto-promote to global (with notification)
- Score 0.3-0.7 → Candidate for promotion (suggest to user)
- Score < 0.3 for 30 days → Archive (move to `.self-learning/archive/`)
- Conflicting learnings → Ask user to resolve
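The scoring and promotion rules above can be sketched as follows. Because the thresholds fall in the 0.0-1.0 range while frequency runs 1-10, this sketch assumes the capped frequency is normalized to 0-1; the engine's actual normalization may differ:

```python
def promotion_score(frequency: int, impact: float, weeks_since_use: int) -> float:
    """score = frequency × impact × recency_weight, with frequency capped
    at 10 and normalized to 0-1 (an assumption, so thresholds line up)."""
    freq = min(frequency, 10) / 10
    recency = max(0.0, 1.0 - 0.1 * weeks_since_use)  # decays 0.1 per week
    return freq * impact * recency


def promotion_action(score: float, stale_days: int = 0) -> str:
    """Apply the promotion rules listed above."""
    if score > 0.7:
        return "promote"      # auto-promote to global, with notification
    if score >= 0.3:
        return "candidate"    # suggest promotion to the user
    return "archive" if stale_days >= 30 else "keep"
```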
Context-Aware Compression
CRITICAL: Never load more than 2000 tokens of learning context per session.
Compression strategies:
- Merge: Combine similar learnings into unified rules
- "Use python-jose[cryptography]" + "Install cryptography for JWT" → "JWT: always use python-jose[cryptography]"
- Summarize: Convert verbose learnings into concise one-liners
- Prioritize: Load highest-scoring learnings first
- Truncate: If still over budget, drop lowest-score items
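The prioritize-and-truncate strategies amount to a greedy pass over score-sorted learnings; the word-count token estimate below is a stand-in for a real tokenizer:

```python
def compress_context(learnings, budget_tokens: int = 2000,
                     tokens_per_word: float = 1.3):
    """Keep the highest-scoring learnings until the token budget is spent.
    Each learning is a dict with 'text' and 'score' keys (sketch schema)."""
    def cost(text: str) -> int:
        # Crude estimate: ~1.3 tokens per whitespace-separated word.
        return int(len(text.split()) * tokens_per_word) + 1

    kept, used = [], 0
    for item in sorted(learnings, key=lambda x: x["score"], reverse=True):
        c = cost(item["text"])
        if used + c > budget_tokens:
            break  # truncate: drop this and all lower-scoring items
        kept.append(item)
        used += c
    return kept, used
```

Merging and summarizing would run before this pass, so truncation only fires when the budget is still exceeded.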
Commands
`learn` — Manually log a learning:
- Ask user: What did you learn?
- Categorize and score
- Store in project learnings
- Check if similar learning exists globally (merge if so)
```
> learn
🧠 What did you learn?
User: Always use --platform linux/amd64 with Docker build on M1 Mac

📝 Learning captured:
   Category: config
   Context: Docker builds fail on Apple Silicon without explicit platform
   Fix: Add --platform linux/amd64 to docker build commands
   Impact: 0.7
   Project: web-api

✅ Stored in web-api/.self-learning/learnings.jsonl
🔍 Similar global learning found — merged (frequency: 3 → 4)
```
`learn --auto` — Toggle automatic learning mode:
- When ON: capture all errors and corrections automatically
- When OFF: only manual logging
- Status persists in `.self-learning/config.json`
```
> learn --auto
🔄 Auto-learning mode: ON → OFF

Previous state: Capturing errors and corrections automatically
New state: Manual logging only (use `learn` to log manually)

Config saved to .self-learning/config.json
To re-enable: run `learn --auto` again
```
`learn recall <topic>` — Search knowledge base:
- Search project learnings for topic
- Search global learnings for topic
- Rank by relevance and score
- Display top 10 matches with context
```
> learn recall docker
🔍 Searching knowledge base for "docker"...

Results (5 matches):

[1] ⭐ Score: 9.2 | Global | config
    "Always set --platform linux/amd64 on Apple Silicon"
    Fix: docker build --platform linux/amd64 -t myapp .
    Used: 7 times across 4 projects

[2] Score: 7.5 | Global | runtime
    "Docker Compose v2 uses 'docker compose' (no hyphen)"
    Fix: Replace 'docker-compose' with 'docker compose'
    Used: 4 times across 3 projects

[3] Score: 6.1 | Project: web-api | config
    "Set DOCKER_BUILDKIT=1 for faster multi-stage builds"
    Fix: export DOCKER_BUILDKIT=1 before build commands
    Used: 2 times

[4] Score: 4.3 | Project: ml-pipeline | runtime
    "Docker volume mounts fail silently with wrong paths"
    Fix: Use absolute paths and verify with docker inspect
    Used: 1 time

[5] Score: 2.1 | Project: web-api | dependency (archived)
    "Pin base image versions in Dockerfile"
    Fix: Use python:3.11-slim instead of python:latest
    Used: 1 time (last used: 45 days ago)
```
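A rough illustration of how recall might rank matches: relevance (keyword hits across the entry's text fields) weighted by each learning's score. The field names mirror the learning schema; the exact ranking formula is an assumption of this sketch:

```python
def recall(learnings, topic: str, top_n: int = 10):
    """Case-insensitive keyword search over context/error/fix fields,
    ranked by hit count multiplied by the learning's score."""
    topic = topic.lower()

    def relevance(entry) -> float:
        text = " ".join(
            str(entry.get(k, "")) for k in ("context", "error", "fix")
        ).lower()
        return text.count(topic) * entry.get("score", 1.0)

    ranked = [(relevance(e), e) for e in learnings]
    ranked = [pair for pair in ranked if pair[0] > 0]   # drop non-matches
    ranked.sort(key=lambda pair: pair[0], reverse=True)
    return [e for _, e in ranked[:top_n]]
```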
`learn stats` — Show learning analytics:
```
╔═══════════════════════════════════════════╗
║        Self-Learning Agent Stats          ║
╠═══════════════════════════════════════════╣
║ Total Learnings: 142 (37 global)          ║
║ Categories:                               ║
║   errors:    67 ████████████░░░░ 47%      ║
║   patterns:  41 ██████████░░░░░░ 29%      ║
║   prefs:     19 █████░░░░░░░░░░░ 13%      ║
║   workflows: 15 ████░░░░░░░░░░░░ 11%      ║
║                                           ║
║ Top Impact Learnings:                     ║
║  1. JWT: use python-jose[crypto] (score 9)║
║  2. Docker: always set platform (score 8) ║
║  3. Git: rebase before merge (score 7)    ║
║                                           ║
║ Cross-Project Promotions: 37              ║
║ Archived (stale): 23                      ║
║ Context Budget: 1,847 / 2,000 tokens      ║
╚═══════════════════════════════════════════╝
```
`learn promote` — Force promotion check:
- Re-score all project learnings
- Promote qualifying ones to global
- Show what was promoted
```
> learn promote
🔄 Running promotion check across all projects...

Scanning: web-api (67 learnings)
Scanning: ml-pipeline (41 learnings)
Scanning: mobile-app (34 learnings)

📊 Promotion Results:

✅ Promoted to global (score > 0.7):
  - "Use --frozen-lockfile with npm ci in CI" (score: 0.82, seen in 4 projects)
  - "Set timeout on all HTTP client calls" (score: 0.75, seen in 3 projects)

💡 Candidates (score 0.3-0.7):
  - "Prefer pathlib over os.path in Python" (score: 0.55, seen in 2 projects)

🗄 Archived (score < 0.3 for 30+ days):
  - "Pin Node.js to v16" (score: 0.12, stale 45 days)

Summary: 2 promoted, 1 candidate, 1 archived
```
`learn export` — Export all learnings as JSON:
- Global + all project learnings in one file
- Portable format for backup or sharing
```
> learn export
📦 Exporting all learnings...

Global learnings: 37 entries
Project learnings:
  web-api: 67 entries
  ml-pipeline: 41 entries
  mobile-app: 34 entries

✅ Exported to self-learning-export-20260301.json
   File size: 48 KB
   Total entries: 179 (including 23 archived)
   Format: Self-Learning Agent v1.0.0 portable JSON
```
`learn import <file>` — Import learnings:
- Merge with existing knowledge
- Deduplicate automatically
- Report conflicts
```
> learn import team-learnings.json
📥 Importing from team-learnings.json...

Source: 215 entries (Self-Learning Agent v1.0.0 format)

Processing:
  ✅ New learnings added: 89
  🔄 Merged with existing: 43 (frequency updated)
  ⏭ Duplicates skipped: 78
  ⚠ Conflicts detected: 5

Conflicts (require resolution):
  1. "Use Jest" vs existing "Use Vitest" for testing framework
  2. "Tabs for indentation" vs existing "Spaces for indentation"
  ... (3 more)

Run `learn promote` to resolve conflicts.
Import complete: 132 entries processed, 89 new.
```
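The import path's dedupe-and-merge behavior can be sketched as: merge when an incoming entry matches an existing one on a (category, error) key, flag a conflict when the fixes disagree, otherwise add it as new. The key choice is an assumption of this sketch:

```python
def merge_learning(existing: list, incoming: dict) -> str:
    """Merge an incoming learning into the knowledge base in place.
    Returns 'merged', 'conflict', or 'added'."""
    for entry in existing:
        same_key = (entry["category"], entry["error"]) == \
                   (incoming["category"], incoming["error"])
        if same_key:
            if entry.get("fix") != incoming.get("fix"):
                return "conflict"  # same error, different fix: user resolves
            # Duplicate: bump frequency (capped at 10) instead of adding.
            entry["frequency"] = min(entry["frequency"] + 1, 10)
            return "merged"
    existing.append(incoming)
    return "added"
```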
`learn prune` — Clean up knowledge base:
- Archive stale learnings (no use in 30 days)
- Merge duplicates
- Recalculate all scores
- Report space savings
```
> learn prune
🧹 Pruning knowledge base...

[1/4] Archiving stale learnings (unused >30 days)...
      Archived: 12 entries
[2/4] Merging duplicates...
      Merged: 8 pairs → 8 unified entries
      Removed: 8 redundant entries
[3/4] Recalculating all scores...
      Updated: 142 entries
      Score changes: 23 entries rescored
[4/4] Compressing context...
      Before: 2,847 tokens
      After: 1,623 tokens
      Saved: 1,224 tokens (43%)

✅ Prune complete
   Entries: 162 → 142 (−20)
   Disk saved: 12 KB
   Context budget: 1,623 / 2,000 tokens
```
`learn graph` — Show knowledge graph:
- Category tree with learning counts
- Cross-project connections
- Most connected learnings (hub nodes)
```
> learn graph
🌐 Knowledge Graph

Category Tree:
├── errors (67)
│   ├── runtime (28)
│   ├── config (19)
│   ├── dependency (12)
│   └── permission (8)
├── patterns (41)
│   ├── workflow (18)
│   ├── coding (15)
│   └── deployment (8)
├── preferences (19)
│   ├── tooling (11)
│   └── style (8)
└── workflows (15)
    ├── CI/CD (9)
    └── testing (6)

Cross-Project Connections:
  web-api ←→ ml-pipeline: 12 shared learnings
  web-api ←→ mobile-app: 8 shared learnings
  ml-pipeline ←→ mobile-app: 3 shared learnings

Hub Nodes (most connected):
  1. "Docker platform flag" → 4 projects, 7 uses
  2. "npm ci --frozen-lockfile" → 4 projects, 6 uses
  3. "Set HTTP timeout" → 3 projects, 5 uses

Total: 142 active | 23 archived | 37 global
```
Session Lifecycle
Session Start:
- Load config from `.self-learning/config.json`
- Load top 10 most relevant global learnings for current project
- Load all project-specific learnings (compressed to fit budget)
- Total context: ≤ 2000 tokens
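Session-start loading might look like the following sketch: read the JSONL stores (skipping corrupted lines, as the error-handling table prescribes), rank global learnings, and keep the top 10 plus the project's active learnings. The impact-times-frequency ranking heuristic is illustrative, not the skill's specified formula:

```python
import json
from pathlib import Path


def load_session_context(project_dir: Path, global_dir: Path, top_n: int = 10):
    """Load top-N global learnings plus all active project learnings."""
    def read_jsonl(path: Path) -> list:
        if not path.exists():
            return []
        entries = []
        for line in path.read_text(encoding="utf-8").splitlines():
            try:
                entries.append(json.loads(line))
            except json.JSONDecodeError:
                continue  # skip corrupted entries, keep going
        return entries

    global_learnings = read_jsonl(global_dir / "errors.jsonl")
    global_learnings.sort(
        key=lambda e: e.get("impact", 0) * e.get("frequency", 1),
        reverse=True,
    )
    project_learnings = [
        e for e in read_jsonl(project_dir / "learnings.jsonl")
        if e.get("status") == "active"
    ]
    return global_learnings[:top_n], project_learnings
```

The loaded entries would then pass through the compression step so the combined context stays under the 2000-token cap.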
During Session:
- If auto-learn is ON: capture errors and corrections in real-time
- If a known error pattern is detected: immediately suggest fix
- Track user corrections as implicit learnings
Session End:
- Extract new learnings from session
- Score and store new learnings
- Run promotion check
- Compress if over budget
Data Storage
```
~/.openclaw/self-learning/
├── global/
│   ├── errors.jsonl        # Cross-project errors
│   ├── patterns.jsonl      # Universal patterns
│   ├── preferences.json    # User preferences
│   └── index.json          # Fast lookup index
├── projects/
│   └── {project-hash}/
│       ├── learnings.jsonl # Project learnings
│       ├── overrides.json  # Project-specific config
│       └── archive.jsonl   # Archived learnings
├── analytics/
│   └── stats.json          # Aggregate statistics
└── config.json             # Global config
```
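First-run initialization of this layout can be sketched as follows; `init_storage` is a hypothetical helper, and the default config contents are assumptions:

```python
import json
from pathlib import Path


def init_storage(root: Path) -> None:
    """Create the storage layout above with empty files (first run)."""
    (root / "global").mkdir(parents=True, exist_ok=True)
    (root / "projects").mkdir(exist_ok=True)
    (root / "analytics").mkdir(exist_ok=True)
    # Empty JSONL stores.
    for name in ("errors.jsonl", "patterns.jsonl"):
        (root / "global" / name).touch()
    # Empty JSON objects for preferences and the lookup index.
    for name in ("preferences.json", "index.json"):
        path = root / "global" / name
        if not path.exists():
            path.write_text(json.dumps({}))
    # Default global config (keys are assumptions for this sketch).
    config = root / "config.json"
    if not config.exists():
        config.write_text(json.dumps({"auto_learn": True}))
```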
Why Self-Learning Agent vs Self-Improving Agent?
| Feature | Self-Improving Agent | Self-Learning Agent |
|---|---|---|
| Cross-project memory | No (project-scoped) | Yes (global + project) |
| Automatic capture | Manual only | Auto + manual |
| Context management | Grows unbounded | 2000 token hard cap |
| Knowledge promotion | Manual status updates | Automatic scoring |
| Stale knowledge cleanup | None | 30-day auto-archive |
| Knowledge search | File-based | Indexed + scored |
| Import/Export | No | Yes |
| Deduplication | No | Automatic |
Comparison with Alternatives
| Feature | Manual Notes | Mem0 | LangChain Memory | Self-Learning Agent |
|---|---|---|---|---|
| Cross-project memory | No (file-based) | Yes (cloud) | Yes (vector store) | Yes (local filesystem) |
| Automatic error capture | No | No | No | Yes (auto-capture on failure) |
| Knowledge promotion | Manual | Manual tags | Manual | Automatic scoring + promotion |
| Context budget control | Unbounded | Token-aware | Configurable | 2000 token hard cap + compression |
| Stale knowledge cleanup | Manual delete | Manual | Manual | 30-day auto-archive |
| Privacy | Local files | Cloud-hosted | Depends on store | Fully local, no network |
| Import/Export | Copy/paste | API export | Varies | Portable JSON format |
| Deduplication | Manual | Basic | No | Automatic with merge |
| Indexed search | No (text search) | Vector search | Vector search | Keyword index + scoring |
| Cost | Free (time cost) | $20+/mo | Free (self-hosted) | Free (zero token overhead for capture) |
| Conflict detection | No | No | No | Auto-detect + user resolution |
| Integration with evolution | No | No | No | Feeds into Capability Evolver Pro |
FAQ
Q: How much context does it use? A: Maximum 2000 tokens per session, strictly enforced. Most sessions use 500-1500 tokens. The compression engine automatically merges, summarizes, and prioritizes learnings to stay within this budget.
Q: Does it slow down my agent? A: No. Captures happen asynchronously after tool calls. The only sync operation is loading learnings at session start (~100ms). Even with 500+ learnings in the knowledge base, the indexed lookup keeps query time under 50ms.
Q: Can I share learnings with my team? A: Yes. Use `learn export` to create a portable JSON file, then `learn import` on another machine. The import process automatically deduplicates and merges with existing knowledge.
Q: How much does it cost in tokens? A: The learning capture itself costs zero additional tokens — it piggybacks on existing tool call results. Loading context at session start costs 500-2000 tokens depending on the number of relevant learnings. The `learn stats` and `learn graph` commands each cost approximately 200-400 tokens for rendering.
Q: What happens when learnings conflict across projects? A: The promotion engine detects conflicts when a learning from Project A contradicts one from Project B. Conflicting learnings are flagged and presented to the user for resolution. Until resolved, both learnings remain at project level and neither is promoted to global.
Q: Can I use Self-Learning Agent with Capability Evolver Pro? A: Yes, they complement each other well. Self-Learning Agent captures error patterns and knowledge, while Capability Evolver Pro acts on that knowledge to improve agent behavior. Evolver Pro can read Self-Learning Agent's `learnings.jsonl` as input for its repair strategy.
Q: Does it work offline? A: Fully offline. All knowledge storage is local filesystem-based (JSONL and JSON files). No network requests, no cloud sync, no external dependencies. The skill works entirely through standard file operations.
Q: How does auto-archive work and can I recover archived learnings? A: Learnings with a promotion score below 0.3 for 30 consecutive days are automatically moved to `.self-learning/archive/`. Archived learnings are not loaded into context but remain on disk. Use `learn recall <topic>` to search across both active and archived learnings. You can manually re-activate an archived learning by moving it back to `learnings.jsonl`.
Q: Is my learning data private? A: All data stays on your local machine in `~/.openclaw/self-learning/` (global) and `<project>/.self-learning/` (project-level). No telemetry is collected. The `learn export` function creates a local file — sharing is entirely manual and user-initiated.
Q: How do I reset or start fresh? A: Delete the `.self-learning/` directory in the specific project, or `~/.openclaw/self-learning/` for global learnings. Alternatively, use `learn prune` to clean up stale learnings without a full reset. There is no `learn reset` command by design — the prune approach is safer and preserves high-value learnings.
Error Handling
| Error | Cause | Agent Action |
|---|---|---|
| Learning capture fails | Disk full or permissions issue on directory | Log a warning to stderr. Do not interrupt the user's workflow — learning capture is non-blocking. Retry on next error event. Suggest checking disk space if failures persist. |
| JSONL parse error | Corrupted entry in `learnings.jsonl` or `errors.jsonl` | Skip the corrupted line, log its line number, and continue processing remaining entries. Suggest running `learn prune` to clean up corrupted records. |
| Context budget exceeded (>2000 tokens) | Too many high-scoring learnings loaded at session start | Apply compression strategies in order: merge similar → summarize verbose → prioritize by score → truncate lowest-score items. Never exceed the 2000-token hard cap. |
| Duplicate learning detected | Same error or pattern captured multiple times | Merge with existing learning: increment `frequency`, update timestamp, recalculate promotion score. Do not create a duplicate entry. |
| Promotion conflict | Two project-level learnings contradict each other | Flag both learnings as conflicting. Present both to user with context. Do not auto-promote either. Wait for user resolution. |
| Index corruption | `index.json` is out of sync with actual learnings | Rebuild index from source JSONL files automatically. Log the rebuild event. This is a self-healing operation — no user action required. |
| Import merge failure | Imported file has incompatible schema or version | Report the specific incompatibility (missing fields, wrong version). Attempt partial import of compatible entries. Show count of skipped vs imported entries. |
| Global directory missing | `~/.openclaw/self-learning/` does not exist (first run) | Create the full directory structure with empty JSONL files, a default `config.json`, and an empty `index.json`. Log initialization event. Continue normally. |
| Archive directory full | Large number of archived learnings consuming disk space | Report disk usage of the archive directory. Suggest running `learn prune` to permanently delete archived learnings older than 90 days. |
| Cross-project lookup timeout | Searching across many projects takes too long | Set a 5-second timeout on cross-project searches. Return partial results with a note indicating which projects were not searched. Suggest narrowing the search topic. |