Self-improvement through conversation analysis. Extracts learnings from corrections and success patterns, proposes updates to agent files or creates new skills. Philosophy: "Correct once, never again." Use when: (1) User explicitly corrects behavior ("never do X", "always Y"), (2) Session ending or context compaction, (3) User requests /reflect, (4) Successful pattern worth preserving.
```bash
# Clone the full repository
git clone https://github.com/seaworld008/Commonly-used-high-value-skills

# Or install just this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/seaworld008/Commonly-used-high-value-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/task-understanding-decomposition/reflect-learn" ~/.claude/skills/seaworld008-commonly-used-high-value-skills-reflect-learn-6cec39 && rm -rf "$T"
```
`skills/task-understanding-decomposition/reflect-learn/SKILL.md`

# Reflect - Self-Improvement Skill
## Quick Reference
| Command | Action |
|---|---|
| `/reflect` | Analyze conversation for learnings |
| `/reflect on` | Enable auto-reflection |
| `/reflect off` | Disable auto-reflection |
| `/reflect status` | Show state and metrics |
| `/reflect review` | Review low-confidence learnings |
| `/reflect [agent-name]` | Focus on specific agent |
## Core Philosophy

**"Correct once, never again."**

When users correct behavior, those corrections become permanent improvements encoded into the agent system and carried into all future sessions.
## Workflow

### Step 1: Initialize State

Check and initialize state files using the state manager:

```bash
# Check for existing state
python scripts/state_manager.py init

# State directory is configurable via REFLECT_STATE_DIR env var
# Default: ~/.reflect/ (portable) or ~/.claude/session/ (Claude Code)
```
State includes:

- `reflect-state.yaml`: toggle state, pending reviews
- `reflect-metrics.yaml`: aggregate metrics
- `learnings.yaml`: log of all applied learnings
### Step 2: Scan Conversation for Signals

Use the signal detector to identify learnings:

```bash
python scripts/signal_detector.py --input conversation.txt
```
#### Signal Confidence Levels
| Confidence | Triggers | Examples |
|---|---|---|
| HIGH | Explicit corrections | "never", "always", "wrong", "stop", "the rule is" |
| MEDIUM | Approved approaches | "perfect", "exactly", accepted output |
| LOW | Observations | Patterns that worked, not validated |
See signal_patterns.md for full detection rules.
### Step 3: Classify & Match to Target Files

Map each signal to the appropriate target:

**Learning Categories:**
| Category | Target Files |
|---|---|
| Code Style | , , |
| Architecture | , , |
| Process | , orchestrator agents |
| Domain | Domain-specific agents, |
| Tools | , relevant specialists |
| New Skill | |
See agent_mappings.md for mapping rules.
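The real category-to-file mappings are defined in agent_mappings.md and are not reproduced in this document. Purely to illustrate the shape of the lookup (the target file names below are hypothetical placeholders, not the skill's actual mapping):

```python
# Hypothetical targets for illustration only; the authoritative table lives in
# references/agent_mappings.md.
CATEGORY_TARGETS: dict[str, tuple[str, ...]] = {
    "Code Style": ("style-agent.md",),      # placeholder name
    "Process": ("orchestrator-agent.md",),  # placeholder name
    "New Skill": (".claude/skills/",),
}

def targets_for(category: str) -> tuple[str, ...]:
    """Look up target files for a classified signal; empty tuple if unmapped."""
    return CATEGORY_TARGETS.get(category, ())
```

An unmapped category returning an empty tuple (rather than raising) lets the proposal step surface "no target found" to the user instead of failing.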
### Step 4: Check for Skill-Worthy Signals

Some learnings should become new skills rather than agent updates:

**Skill-Worthy Criteria:**
- Non-obvious debugging (>10 min investigation)
- Misleading error (root cause different from message)
- Workaround discovered through experimentation
- Configuration insight (differs from documented)
- Reusable pattern (helps in similar situations)
**Quality Gates** (must pass all):
- Reusable: Will help with future tasks
- Non-trivial: Requires discovery, not just docs
- Specific: Can describe exact trigger conditions
- Verified: Solution actually worked
- No duplication: Doesn't exist already
See skill_template.md for skill creation guidelines.
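The five gates above form a simple conjunction: failing any one blocks skill creation. A minimal sketch (the class and method names are assumed, not taken from the skill's scripts):

```python
from dataclasses import dataclass

@dataclass
class SkillCandidate:
    """One potential new skill, with a boolean verdict per quality gate."""
    reusable: bool
    non_trivial: bool
    specific: bool
    verified: bool
    no_duplication: bool

    def passes_quality_gates(self) -> bool:
        """All five gates must pass before a new skill is proposed."""
        return all((self.reusable, self.non_trivial, self.specific,
                    self.verified, self.no_duplication))
```

Recording each gate separately (rather than one pass/fail flag) lets the proposal output show *which* gate failed, which matches the per-gate checklist format in Step 5.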
### Step 5: Generate Proposals

Produce output in this format:
````markdown
# Reflection Analysis

## Session Context
- **Date**: [timestamp]
- **Messages Analyzed**: [count]
- **Focus**: [all agents OR specific agent name]

## Signals Detected

| # | Signal | Confidence | Source Quote | Category |
|---|--------|------------|--------------|----------|
| 1 | [learning] | HIGH | "[exact words]" | Code Style |
| 2 | [learning] | MEDIUM | "[context]" | Architecture |

## Proposed Agent Updates

### Change 1: Update [agent-name]
**Target**: `[file path]`
**Section**: [section name]
**Confidence**: [HIGH/MEDIUM/LOW]
**Rationale**: [why this change]

```diff
--- a/path/to/agent.md
+++ b/path/to/agent.md
@@ -82,6 +82,7 @@
 ## Section
 * Existing rule
+* New rule from learning
```

## Proposed New Skills

### Skill 1: [skill-name]
**Quality Gate Check:**
- Reusable: [why]
- Non-trivial: [why]
- Specific: [trigger conditions]
- Verified: [how verified]
- No duplication: [checked against]

Will create: `.claude/skills/[skill-name]/SKILL.md`

## Conflict Check
- No conflicts with existing rules detected
- OR: Warning - potential conflict with [file:line]

## Commit Message

```
reflect: add learnings from session [date]

Agent updates:
- [learning 1 summary]

New skills:
- [skill-name]: [brief description]

Extracted: [N] signals ([H] high, [M] medium, [L] low confidence)
```

## Review Prompt

Apply these changes?
- `Y` - apply all changes and commit
- `N` - discard all changes
- `modify` - adjust specific changes
- `1,3` - apply only changes 1 and 3
- `s1` - apply only skill 1
- `all-skills` - apply all skills, skip agent updates
````
### Step 6: Handle User Response

**On `Y` (approve):**
1. Apply each change using Edit tool
2. Run `git add` on modified files
3. Commit with generated message
4. Update learnings log
5. Update metrics

**On `N` (reject):**
1. Discard proposed changes
2. Log rejection for analysis
3. Ask if user wants to modify any signals

**On `modify`:**
1. Present each change individually
2. Allow editing the proposed addition
3. Reconfirm before applying

**On selective (e.g., `1,3`):**
1. Apply only specified changes
2. Log partial acceptance
3. Commit only applied changes

### Step 7: Update Metrics

```bash
python scripts/metrics_updater.py --accepted 3 --rejected 1 --confidence high:2,medium:1
```
## Toggle Commands

### Enable Auto-Reflection

```bash
/reflect on
# Sets auto_reflect: true in state file
# Will trigger on PreCompact hook
```

### Disable Auto-Reflection

```bash
/reflect off
# Sets auto_reflect: false in state file
```

### Check Status

```bash
/reflect status
# Shows current state and metrics
```

### Review Pending

```bash
/reflect review
# Shows low-confidence learnings awaiting validation
```
## Output Locations

**Project-level (versioned with repo):**

- Full reflection: `.claude/reflections/YYYY-MM-DD_HH-MM-SS.md`
- Project summary: `.claude/reflections/index.md`
- New skills: `.claude/skills/{name}/SKILL.md`

**Global (user-level):**

- Cross-project: `~/.claude/reflections/by-project/{project}/`
- Per-agent: `~/.claude/reflections/by-agent/{agent}/learnings.md`
- Global summary: `~/.claude/reflections/index.md`
## Memory Integration

Some learnings belong in auto-memory (`~/.claude/projects/*/memory/MEMORY.md`) rather than agent files:
| Learning Type | Best Target |
|---|---|
| Behavioral correction ("always do X") | Agent file |
| Project-specific pattern | MEMORY.md |
| Recurring bug/workaround | New skill OR MEMORY.md |
| Tool preference | CLAUDE.md |
| Domain knowledge | MEMORY.md or compound-docs |
When a signal is LOW confidence and project-specific, prefer writing to MEMORY.md over modifying agents.
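The routing table above, plus the LOW-confidence rule, can be sketched as a small dispatch function (the learning-type keys are paraphrased from the table; the function itself is illustrative, not part of the skill's scripts):

```python
# Keys paraphrase the "Learning Type" column of the table above.
ROUTES = {
    "behavioral-correction": "agent file",
    "project-pattern": "MEMORY.md",
    "recurring-workaround": "new skill or MEMORY.md",
    "tool-preference": "CLAUDE.md",
    "domain-knowledge": "MEMORY.md or compound-docs",
}

def route(learning_type: str, confidence: str = "HIGH",
          project_specific: bool = False) -> str:
    """Pick a write target; LOW-confidence project-specific signals go to MEMORY.md."""
    if confidence == "LOW" and project_specific:
        return "MEMORY.md"
    return ROUTES.get(learning_type, "MEMORY.md")
```

Defaulting unknown types to MEMORY.md is the conservative choice: a memory note is easy to revise later, while a wrong agent-file edit persists across sessions.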
## Safety Guardrails

### Human-in-the-Loop
- NEVER apply changes without explicit user approval
- Always show full diff before applying
- Allow selective application
### Git Versioning

- All changes committed with descriptive messages
- Easy rollback via `git revert`
- Learning history preserved
### Incremental Updates
- ONLY add to existing sections
- NEVER delete or rewrite existing rules
- Preserve original structure
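An add-only edit can be enforced mechanically: append the new rule at the end of the named section and copy every existing line through unchanged. A sketch under the assumption that agent files use `##` section headings and `*` bullets, as in the diff example of Step 5 (the helper name is illustrative):

```python
def append_rule(markdown: str, section: str, rule: str) -> str:
    """Add a bullet at the end of `## section`, leaving all existing lines untouched."""
    out, in_section, inserted = [], False, False
    for line in markdown.splitlines():
        # Section ends where the next "## " heading begins: insert just before it.
        if in_section and line.startswith("## ") and not inserted:
            out.append(f"* {rule}")
            inserted, in_section = True, False
        if line.strip() == f"## {section}":
            in_section = True
        out.append(line)
    if in_section and not inserted:  # section ran to end of file
        out.append(f"* {rule}")
        inserted = True
    if not inserted:
        raise ValueError(f"section not found: {section}")
    return "\n".join(out)
```

Because the function only ever appends one line, a reviewer can verify the guardrail by checking that the diff contains a single `+` line and no `-` lines.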
### Conflict Detection
- Check if proposed rule contradicts existing
- Warn user if conflict detected
- Suggest resolution strategy
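One cheap contradiction check is polarity inversion: a proposed "never X" conflicts with an existing "always X" and vice versa. A heuristic sketch (illustrative only; the skill's actual conflict logic is not shown in this document):

```python
NEGATIONS = {"never": "always", "always": "never"}

def conflicts(new_rule: str, existing_rules: list[str]) -> list[str]:
    """Return existing rules that state the opposite polarity of new_rule."""
    def parse(rule: str):
        words = rule.lower().split()
        if words and words[0] in NEGATIONS:
            return words[0], tuple(words[1:])
        return None  # rule has no leading never/always polarity

    parsed = parse(new_rule)
    if parsed is None:
        return []
    opposite = (NEGATIONS[parsed[0]], parsed[1])
    return [r for r in existing_rules if parse(r) == opposite]
```

Exact word matching misses paraphrases ("always use tabs" vs "never indent with spaces"), so flagged rules are a warning for the user to resolve, not an automatic rejection.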
## Integration

### With `/handover`

If auto-reflection is enabled, the PreCompact hook triggers reflection before handover.
### With Session Health

At 70%+ context usage (Yellow status), reminders to run `/reflect` are injected.
### Hook Integration (Claude Code)

The skill includes hook scripts for automatic integration:

```bash
# Install hook to your Claude hooks directory
cp hooks/precompact_reflect.py ~/.claude/hooks/
```
Configure in `~/.claude/settings.json`:

```json
{
  "hooks": {
    "PreCompact": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "uv run ~/.claude/hooks/precompact_reflect.py --auto"
          }
        ]
      }
    ]
  }
}
```
See hooks/README.md for full configuration options.
## Portability
This skill works with any LLM tool that supports:
- File read/write operations
- Text pattern matching
- Git operations (optional, for commits)
### Configurable State Location

```bash
# Set custom state directory
export REFLECT_STATE_DIR=/path/to/state

# Or use a default:
# ~/.reflect/         (portable default)
# ~/.claude/session/  (Claude Code default)
```
### No Task Tool Dependency
Unlike the previous agent-based approach, this skill executes directly without spawning subagents. The LLM reads SKILL.md and follows the workflow.
### Git Operations Optional
Commits are wrapped with availability checks - if not in a git repo, changes are still saved but not committed.
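The availability check can be done with `git rev-parse --is-inside-work-tree`, which also handles the case where git is not installed at all. A sketch of how such a wrapper might look (function names are illustrative, not from the skill's scripts):

```python
import subprocess

def in_git_repo(path: str = ".") -> bool:
    """True if `path` is inside a git work tree; False if git is missing or it isn't."""
    try:
        result = subprocess.run(
            ["git", "-C", path, "rev-parse", "--is-inside-work-tree"],
            capture_output=True, text=True,
        )
    except FileNotFoundError:  # git binary not installed
        return False
    return result.returncode == 0 and result.stdout.strip() == "true"

def maybe_commit(message: str, path: str = ".") -> bool:
    """Commit only when a repo is available; edited files stay saved either way."""
    if not in_git_repo(path):
        return False
    subprocess.run(["git", "-C", path, "commit", "-m", message], check=False)
    return True
```

Returning `False` rather than raising keeps the reflection workflow usable outside a repo, at the cost of skipping the rollback safety net described under Git Versioning.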
## Troubleshooting
**No signals detected:**

- Session may not have had corrections
- Try `/reflect review` to check pending items
**Conflict warning:**
- Review the existing rule cited
- Decide if new rule should override
- Can modify before applying
**Agent file not found:**

- Check agent name spelling
- Use `/reflect status` to see available targets
- May need to create agent file first
## File Structure

```
reflect/
├── SKILL.md                    # This file
├── scripts/
│   ├── state_manager.py        # State file CRUD
│   ├── signal_detector.py      # Pattern matching
│   ├── metrics_updater.py      # Metrics aggregation
│   └── output_generator.py     # Reflection file & index generation
├── hooks/
│   ├── precompact_reflect.py   # PreCompact hook integration
│   ├── settings-snippet.json   # Settings.json examples
│   └── README.md               # Hook configuration guide
├── references/
│   ├── signal_patterns.md      # Detection rules
│   ├── agent_mappings.md       # Target mappings
│   └── skill_template.md       # Skill generation
└── assets/
    ├── reflection_template.md  # Output template
    └── learnings_schema.yaml   # Schema definition
```