# awesome-openclaw-skills: self-improvement-3
Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks.
Install by cloning the repository:

```bash
git clone https://github.com/sundial-org/awesome-openclaw-skills
```

Or copy just this skill into Claude Code's skills directory:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/sundial-org/awesome-openclaw-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/self-improvement-3" ~/.claude/skills/sundial-org-awesome-openclaw-skills-self-improvement-3 && rm -rf "$T"
```

Or into OpenClaw's skills directory:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/sundial-org/awesome-openclaw-skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/self-improvement-3" ~/.openclaw/skills/sundial-org-awesome-openclaw-skills-self-improvement-3 && rm -rf "$T"
```

The skill's `skills/self-improvement-3/SKILL.md` follows.

# Self-Improvement Skill
Log learnings and errors to markdown files for continuous improvement. Coding agents can later process these into fixes, and important learnings get promoted to project memory.
## Quick Reference

| Situation | Action |
|---|---|
| Command/operation fails | Log to `.learnings/ERRORS.md` |
| User corrects you | Log to `.learnings/LEARNINGS.md` with category `correction` |
| User wants missing feature | Log to `.learnings/FEATURE_REQUESTS.md` |
| API/external tool fails | Log to `.learnings/ERRORS.md` with integration details |
| Knowledge was outdated | Log to `.learnings/LEARNINGS.md` with category `knowledge_gap` |
| Found better approach | Log to `.learnings/LEARNINGS.md` with category `best_practice` |
| Similar to existing entry | Link with `See Also`, consider priority bump |
| Broadly applicable learning | Promote to `CLAUDE.md`, `AGENTS.md`, and/or `.github/copilot-instructions.md` |
## Setup

Create a `.learnings/` directory in the project root if it doesn't exist:

```bash
mkdir -p .learnings
```

Copy templates from `assets/`, or create the files with headers yourself.
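If creating them by hand, a minimal sketch (the header lines are an assumption; the `assets/` templates are authoritative):

```bash
# Create the three log files with simple headers, without clobbering existing ones
mkdir -p .learnings
for f in LEARNINGS ERRORS FEATURE_REQUESTS; do
  [ -f ".learnings/$f.md" ] || printf '# %s\n\n' "$f" > ".learnings/$f.md"
done
```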
## Logging Format

### Learning Entry

Append to `.learnings/LEARNINGS.md`:

```markdown
## [LRN-YYYYMMDD-XXX] category

**Logged**: ISO-8601 timestamp
**Priority**: low | medium | high | critical
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
One-line description of what was learned

### Details
Full context: what happened, what was wrong, what's correct

### Suggested Action
Specific fix or improvement to make

### Metadata
- Source: conversation | error | user_feedback
- Related Files: path/to/file.ext
- Tags: tag1, tag2
- See Also: LRN-20250110-001 (if related to existing entry)

---
```
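For example, appending a concrete entry from the shell (the values are illustrative):

```bash
# Append an illustrative learning entry to the log
cat >> .learnings/LEARNINGS.md <<'EOF'
## [LRN-20250115-001] correction

**Logged**: 2025-01-15T14:32:00Z
**Priority**: medium
**Status**: pending
**Area**: config

### Summary
Project uses pnpm workspaces; npm install fails against pnpm-lock.yaml

### Details
Ran `npm install`, which errored because the lock file is pnpm-lock.yaml.

### Suggested Action
Use `pnpm install`; consider promoting this to CLAUDE.md.

### Metadata
- Source: error
- Related Files: pnpm-lock.yaml
- Tags: tooling, package-manager

---
EOF
```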
### Error Entry

Append to `.learnings/ERRORS.md`:

```markdown
## [ERR-YYYYMMDD-XXX] skill_or_command_name

**Logged**: ISO-8601 timestamp
**Priority**: high
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
Brief description of what failed

### Error
Actual error message or output

### Context
- Command/operation attempted
- Input or parameters used
- Environment details if relevant

### Suggested Fix
If identifiable, what might resolve this

### Metadata
- Reproducible: yes | no | unknown
- Related Files: path/to/file.ext
- See Also: ERR-20250110-001 (if recurring)

---
```
### Feature Request Entry

Append to `.learnings/FEATURE_REQUESTS.md`:

```markdown
## [FEAT-YYYYMMDD-XXX] capability_name

**Logged**: ISO-8601 timestamp
**Priority**: medium
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Requested Capability
What the user wanted to do

### User Context
Why they needed it, what problem they're solving

### Complexity Estimate
simple | medium | complex

### Suggested Implementation
How this could be built, what it might extend

### Metadata
- Frequency: first_time | recurring
- Related Features: existing_feature_name

---
```
## ID Generation

Format: `TYPE-YYYYMMDD-XXX`

- TYPE: `LRN` (learning), `ERR` (error), `FEAT` (feature)
- YYYYMMDD: Current date
- XXX: Sequential number or 3 random characters (e.g., `001`, `A7B`)

Examples: `LRN-20250115-001`, `ERR-20250115-A3F`, `FEAT-20250115-002`
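One way to mint the next sequential ID for today, as a sketch (assumes the file layout above):

```bash
# Next sequential learning ID for today: LRN-YYYYMMDD-NNN
TODAY=$(date +%Y%m%d)
N=$(grep -c "\[LRN-$TODAY" .learnings/LEARNINGS.md 2>/dev/null)
printf 'LRN-%s-%03d\n' "$TODAY" $(( ${N:-0} + 1 ))
```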
## Resolving Entries

When an issue is fixed, update the entry:

- Change `**Status**: pending` → `**Status**: resolved`
- Add a resolution block after Metadata:

```markdown
### Resolution
- **Resolved**: 2025-01-16T09:00:00Z
- **Commit/PR**: abc123 or #42
- **Notes**: Brief description of what was done
```

Other status values:

- `in_progress` - Actively being worked on
- `wont_fix` - Decided not to address (add reason in Resolution notes)
- `promoted` - Elevated to CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md
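A minimal sketch of the pending → resolved flip, assuming GNU sed (`-i`) and the example entry ID:

```bash
# Flip one entry's Status from pending to resolved, in place (GNU sed)
sed -i '/\[LRN-20250115-001\]/,/^---$/ s/\*\*Status\*\*: pending/**Status**: resolved/' \
  .learnings/LEARNINGS.md
```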
## Promoting to Project Memory

When a learning is broadly applicable (not a one-off fix), promote it to permanent project memory.

### When to Promote

- Learning applies across multiple files/features
- Knowledge any contributor (human or AI) should know
- Prevents recurring mistakes
- Documents project-specific conventions
### Promotion Targets

| Target | What Belongs There |
|---|---|
| `CLAUDE.md` | Project facts, conventions, gotchas for all Claude interactions |
| `AGENTS.md` | Agent-specific workflows, tool usage patterns, automation rules |
| `.github/copilot-instructions.md` | Project context and conventions for GitHub Copilot |
### How to Promote

1. Distill the learning into a concise rule or fact
2. Add to the appropriate section in the target file (create the file if needed)
3. Update the original entry:
   - Change `**Status**: pending` → `**Status**: promoted`
   - Add `**Promoted**: CLAUDE.md`, `AGENTS.md`, or `.github/copilot-instructions.md`
### Promotion Examples

Learning (verbose):

> Project uses pnpm workspaces. Attempted `npm install` but it failed. Lock file is `pnpm-lock.yaml`. Must use `pnpm install`.

In CLAUDE.md (concise):

```markdown
## Build & Dependencies
- Package manager: pnpm (not npm) - use `pnpm install`
```

Learning (verbose):

> When modifying API endpoints, must regenerate the TypeScript client. Forgetting this causes type mismatches at runtime.

In AGENTS.md (actionable):

```markdown
## After API Changes
1. Regenerate client: `pnpm run generate:api`
2. Check for type errors: `pnpm tsc --noEmit`
```
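Mechanically, promotion can be as simple as appending the distilled rule (a sketch; placement within an existing section is up to you):

```bash
# Append a promoted rule to CLAUDE.md, creating the file if needed
cat >> CLAUDE.md <<'EOF'

## Build & Dependencies
- Package manager: pnpm (not npm) - use `pnpm install`
EOF
```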
## Recurring Pattern Detection

If logging something similar to an existing entry:

- Search first: `grep -r "keyword" .learnings/` (see the sketch below)
- Link entries: add `**See Also**: ERR-20250110-001` in Metadata
- Bump priority if the issue keeps recurring
- Consider a systemic fix; recurring issues often indicate:
  - Missing documentation (→ promote to CLAUDE.md or .github/copilot-instructions.md)
  - Missing automation (→ add to AGENTS.md)
  - Architectural problem (→ create a tech debt ticket)
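A minimal recurrence check before logging (the keyword is illustrative):

```bash
# Surface prior entries mentioning the same symptom before logging a new one
MATCHES=$(grep -ril "ECONNRESET" .learnings/*.md 2>/dev/null | wc -l)
if [ "$MATCHES" -gt 0 ]; then
  echo "Found $MATCHES related file(s); link with See Also and consider a priority bump"
  grep -rin "ECONNRESET" .learnings/*.md | head -5
fi
```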
## Periodic Review

Review `.learnings/` at natural breakpoints.

### When to Review

- Before starting a new major task
- After completing a feature
- When working in an area with past learnings
- Weekly during active development

### Quick Status Check

```bash
# Count pending items
grep -h "Status\*\*: pending" .learnings/*.md | wc -l

# List pending high-priority items
grep -B5 "Priority\*\*: high" .learnings/*.md | grep "^## \["

# Find learnings for a specific area
grep -l "Area\*\*: backend" .learnings/*.md
```
### Review Actions
- Resolve fixed items
- Promote applicable learnings
- Link related entries
- Escalate recurring issues
## Detection Triggers

Automatically log when you notice:

**Corrections** (→ learning with `correction` category):
- "No, that's not right..."
- "Actually, it should be..."
- "You're wrong about..."
- "That's outdated..."
**Feature Requests** (→ feature request):
- "Can you also..."
- "I wish you could..."
- "Is there a way to..."
- "Why can't you..."
**Knowledge Gaps** (→ learning with `knowledge_gap` category):
- User provides information you didn't know
- Documentation you referenced is outdated
- API behavior differs from your understanding
**Errors** (→ error entry):
- Command returns non-zero exit code
- Exception or stack trace
- Unexpected output or behavior
- Timeout or connection failure
## Priority Guidelines

| Priority | When to Use |
|---|---|
| `critical` | Blocks core functionality, data loss risk, security issue |
| `high` | Significant impact, affects common workflows, recurring issue |
| `medium` | Moderate impact, workaround exists |
| `low` | Minor inconvenience, edge case, nice-to-have |
## Area Tags

Use these to filter learnings by codebase region:

| Area | Scope |
|---|---|
| `frontend` | UI, components, client-side code |
| `backend` | API, services, server-side code |
| `infra` | CI/CD, deployment, Docker, cloud |
| `tests` | Test files, testing utilities, coverage |
| `docs` | Documentation, comments, READMEs |
| `config` | Configuration files, environment, settings |
## Best Practices
- Log immediately - context is freshest right after the issue
- Be specific - future agents need to understand quickly
- Include reproduction steps - especially for errors
- Link related files - makes fixes easier
- Suggest concrete fixes - not just "investigate"
- Use consistent categories - enables filtering
- Promote aggressively - if in doubt, add to CLAUDE.md or .github/copilot-instructions.md
- Review regularly - stale learnings lose value
## Gitignore Options

Keep learnings local (per-developer):

```gitignore
.learnings/
```

Track learnings in repo (team-wide): don't add to `.gitignore`; learnings become shared knowledge.

Hybrid (track templates, ignore entries):

```gitignore
.learnings/*.md
!.learnings/.gitkeep
```
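To seed the hybrid layout, a one-line sketch:

```bash
mkdir -p .learnings && touch .learnings/.gitkeep
```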
## Hook Integration

Enable automatic reminders through agent hooks. This is opt-in - you must explicitly configure hooks.

### Quick Setup (Claude Code / Codex)

Create `.claude/settings.json` in your project:

```json
{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/activator.sh"
      }]
    }]
  }
}
```

This injects a learning evaluation reminder after each prompt (~50-100 tokens overhead).

### Full Setup (With Error Detection)

```json
{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/activator.sh"
      }]
    }],
    "PostToolUse": [{
      "matcher": "Bash",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/error-detector.sh"
      }]
    }]
  }
}
```
### Available Hook Scripts

| Script | Hook Type | Purpose |
|---|---|---|
| `activator.sh` | UserPromptSubmit | Reminds to evaluate learnings after tasks |
| `error-detector.sh` | PostToolUse (Bash) | Triggers on command errors |

See `references/hooks-setup.md` for detailed configuration and troubleshooting.
## Automatic Skill Extraction

When a learning is valuable enough to become a reusable skill, extract it using the provided helper.

### Skill Extraction Criteria

A learning qualifies for skill extraction when ANY of these apply:

| Criterion | Description |
|---|---|
| Recurring | Has `See Also` links to 2+ similar issues |
| Verified | Status is `resolved` with a working fix |
| Non-obvious | Required actual debugging/investigation to discover |
| Broadly applicable | Not project-specific; useful across codebases |
| User-flagged | User says "save this as a skill" or similar |
### Extraction Workflow

1. **Identify candidate**: learning meets the extraction criteria
2. **Run helper** (or create manually):

   ```bash
   ./skills/self-improvement/scripts/extract-skill.sh skill-name --dry-run
   ./skills/self-improvement/scripts/extract-skill.sh skill-name
   ```

3. **Customize SKILL.md**: fill in the template with the learning content
4. **Update learning**: set status to `promoted_to_skill`, add `Skill-Path`
5. **Verify**: read the skill in a fresh session to ensure it's self-contained
### Manual Extraction

If you prefer manual creation:

1. Create `skills/<skill-name>/SKILL.md`
2. Use the template from `assets/SKILL-TEMPLATE.md`
3. Follow the Agent Skills spec:
   - YAML frontmatter with `name` and `description`
   - Name must match the folder name
   - No README.md inside the skill folder
### Extraction Detection Triggers

Watch for these signals that a learning should become a skill:

**In conversation:**

- "Save this as a skill"
- "I keep running into this"
- "This would be useful for other projects"
- "Remember this pattern"

**In learning entries:**

- Multiple `See Also` links (recurring issue)
- High priority + resolved status
- Category `best_practice` with broad applicability
- User feedback praising the solution
### Skill Quality Gates

Before extraction, verify (a lint sketch follows the list):
- Solution is tested and working
- Description is clear without original context
- Code examples are self-contained
- No project-specific hardcoded values
- Follows skill naming conventions (lowercase, hyphens)
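A minimal pre-publish lint against the gates above, as a sketch (the folder path is illustrative):

```bash
# Sanity-check a skill folder against the quality gates
s="skills/my-new-skill"   # illustrative path
name=$(basename "$s")
echo "$name" | grep -Eq '^[a-z0-9]+(-[a-z0-9]+)*$' || echo "naming: use lowercase and hyphens"
[ -f "$s/SKILL.md" ] || echo "missing SKILL.md"
[ -e "$s/README.md" ] && echo "remove README.md from the skill folder"
head -1 "$s/SKILL.md" 2>/dev/null | grep -qx -- '---' || echo "missing YAML frontmatter"
```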
## Multi-Agent Support

This skill works across different AI coding agents with agent-specific activation.

### Claude Code

- **Activation**: Hooks (UserPromptSubmit, PostToolUse)
- **Setup**: `.claude/settings.json` with hook configuration
- **Detection**: Automatic via hook scripts

### Codex CLI

- **Activation**: Hooks (same pattern as Claude Code)
- **Setup**: `.codex/settings.json` with hook configuration
- **Detection**: Automatic via hook scripts

### GitHub Copilot

- **Activation**: Manual (no hook support)
- **Setup**: Add to `.github/copilot-instructions.md`:

```markdown
## Self-Improvement

After solving non-obvious issues, consider logging to `.learnings/`:

1. Use the format from the self-improvement skill
2. Link related entries with See Also
3. Promote high-value learnings to skills

Ask in chat: "Should I log this as a learning?"
```

- **Detection**: Manual review at session end
### Agent-Agnostic Guidance
Regardless of agent, apply self-improvement when you:
- Discover something non-obvious - solution wasn't immediate
- Correct yourself - initial approach was wrong
- Learn project conventions - discovered undocumented patterns
- Hit unexpected errors - especially if diagnosis was difficult
- Find better approaches - improved on your original solution
### Copilot Chat Integration

For Copilot users, add this to your prompts when relevant:

> After completing this task, evaluate if any learnings should be logged to `.learnings/` using the self-improvement skill format.

Or use quick prompts:

- "Log this to learnings"
- "Create a skill from this solution"
- "Check .learnings/ for related issues"