# claude-skill-registry: lessons-learned

Use when capturing discoveries after phase completion, before shipping, or when reflecting on completed work to extract reusable patterns.

Clone the full registry:

```shell
git clone https://github.com/majiayu000/claude-skill-registry
```

Or copy only this skill into your local skills directory:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/lessons-learned" ~/.claude/skills/majiayu000-claude-skill-registry-lessons-learned && rm -rf "$T"
```

`skills/data/lessons-learned/SKILL.md`

# Lessons Learned
## Overview
The lessons-learned system captures discoveries, patterns, and pitfalls found during implementation and feeds them back into project memory. Lessons are stored in `.shipyard/LESSONS.md` and optionally surfaced in `CLAUDE.md` so future agents benefit from past experience.
## When to Use
- After phase completion during `/shipyard:ship` (Step 3a)
- When reflecting on completed work to extract reusable knowledge
- When a build summary contains notable discoveries worth preserving
## LESSONS.md Format
Store lessons in `.shipyard/LESSONS.md` using this exact structure:
```markdown
# Shipyard Lessons Learned

## [YYYY-MM-DD] Phase N: {Phase Name}

### What Went Well
- {Bullet point}

### Surprises / Discoveries
- {Pattern discovered}

### Pitfalls to Avoid
- {Anti-pattern encountered}

### Process Improvements
- {Workflow enhancement}

---
```
New entries are prepended after the `# Shipyard Lessons Learned` heading so the most recent phase appears first. Each phase gets its own dated section with all four subsections.
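The prepend step can be sketched in shell. This is an illustrative sketch, not the skill's actual implementation: the sandbox path, phase name, and placeholder bullets are invented for the example.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sandbox standing in for .shipyard/LESSONS.md (contents are illustrative).
dir=$(mktemp -d)
lessons="$dir/LESSONS.md"
printf '# Shipyard Lessons Learned\n\n## [2024-01-01] Phase 1: Old Phase\n' > "$lessons"

# New dated section with all four subsections (placeholder bullets).
entry="## [$(date +%F)] Phase 2: Example Phase

### What Went Well
- {bullet}

### Surprises / Discoveries
- {bullet}

### Pitfalls to Avoid
- {bullet}

### Process Improvements
- {bullet}

---"

# Prepend: keep the top-level heading on line 1, insert the new entry,
# then the earlier entries, so the most recent phase appears first.
{ head -n 1 "$lessons"; printf '\n%s\n' "$entry"; tail -n +2 "$lessons"; } > "$lessons.tmp"
mv "$lessons.tmp" "$lessons"

cat "$lessons"
```

After running, the file keeps its heading on line 1, with the Phase 2 entry above the older Phase 1 entry.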
## Structured Prompts
Present these four questions to the user during lesson capture:
- What went well in this phase? -- Patterns, tools, or approaches that worked effectively.
- What surprised you or what did you learn? -- Unexpected behaviors, new techniques, or revised assumptions.
- What should future work avoid? -- Anti-patterns, dead ends, or approaches that caused problems.
- Any process improvements discovered? -- Workflow changes, tooling suggestions, or efficiency gains.
Pre-populate suggested answers from build artifacts before asking (see Pre-Population below).
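As a minimal sketch, the four questions can be held as data so a driver script can iterate over them and pair each with its pre-populated suggestions. The plain-terminal loop below is hypothetical; the skill itself presents these interactively.

```shell
#!/usr/bin/env bash
set -euo pipefail

# The four capture questions, in order, as data a driver can iterate over.
prompts=(
  "What went well in this phase? -- Patterns, tools, or approaches that worked effectively."
  "What surprised you or what did you learn? -- Unexpected behaviors, new techniques, or revised assumptions."
  "What should future work avoid? -- Anti-patterns, dead ends, or approaches that caused problems."
  "Any process improvements discovered? -- Workflow changes, tooling suggestions, or efficiency gains."
)

# Print each prompt; a real driver would collect an answer here.
for q in "${prompts[@]}"; do
  printf '? %s\n' "$q"
done
```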
## Pre-Population
Before presenting prompts, extract candidate lessons from completed build summaries:
- Read all `SUMMARY-*.md` files in `.shipyard/phases/{N}/results/`.
- Extract entries from "Issues Encountered" sections -- these often contain workarounds and edge cases.
- Extract entries from "Decisions Made" sections -- these capture rationale worth preserving.
- Present extracted items as pre-populated suggestions the user can accept, edit, or discard.
This reduces friction and ensures discoveries documented during building are not lost.
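One way to sketch the extraction, assuming the summary files use `## Issues Encountered` and `## Decisions Made` headings as described above. The awk logic and sample file contents are illustrative, not part of the skill.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sandbox standing in for .shipyard/phases/{N}/results/ (illustrative).
dir=$(mktemp -d)
cat > "$dir/SUMMARY-1.md" <<'EOF'
# Summary

## Issues Encountered
- set -e swallowed a pipeline failure

## Decisions Made
- Chose jq over sed for JSON edits
EOF

# Collect bullets from the two target sections of every summary file:
# start capturing at a target heading, stop at the next heading.
candidates=$(awk '
  /^## (Issues Encountered|Decisions Made)$/ { capture = 1; next }
  /^## /                                     { capture = 0 }
  capture && /^- /                           { print }
' "$dir"/SUMMARY-*.md)

printf '%s\n' "$candidates"
```

The collected bullets become the pre-populated suggestions offered to the user.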
## Memory Enrichment
If the `shipyard:memory` skill is available and memory is enabled:
- Search memory for the milestone's date range and project path.
- Use Haiku to extract insights about:
- Debugging struggles and resolutions
- Rejected approaches and why they failed
- Key decisions and their rationale
- Add memory-derived insights to candidates (marked separately from summary-derived).
Memory captures implicit knowledge from conversation context that may not appear in formal `SUMMARY.md` files.
## CLAUDE.md Integration
After the user approves lessons, optionally append a summary to the project's `CLAUDE.md`:
- **Check for CLAUDE.md** -- If no `CLAUDE.md` exists in the project root, skip this step entirely.
- **Find existing section** -- Look for a `## Lessons Learned` heading in `CLAUDE.md`.
- **Append if exists** -- Add new bullet points under the existing `## Lessons Learned` section.
- **Create if missing** -- If `CLAUDE.md` exists but has no `## Lessons Learned` section, append the section at the end of the file.
- **Format for CLAUDE.md** -- Use concise single-line bullets. Omit phase dates; focus on actionable guidance:
```markdown
## Lessons Learned

- Bash `set -e` interacts poorly with pipelines -- use explicit error checks after pipes
- jq `.field // "default"` prevents null propagation in optional config values
```
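The branching above can be sketched as a small script. This is a simplified, hypothetical sketch: the sandbox file and lesson text are examples, and the append branch simply writes to the end of the file rather than locating the exact position under the heading.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sandbox standing in for the project's CLAUDE.md (no lessons section yet).
dir=$(mktemp -d)
claude="$dir/CLAUDE.md"
printf '# Project Notes\n' > "$claude"

lesson='- jq `.field // "default"` prevents null propagation in optional config values'

if [ ! -f "$claude" ]; then
  echo "No CLAUDE.md; skipping."                                # no file: skip entirely
elif grep -q '^## Lessons Learned$' "$claude"; then
  printf '%s\n' "$lesson" >> "$claude"                          # heading exists: append a bullet
else
  printf '\n## Lessons Learned\n\n%s\n' "$lesson" >> "$claude"  # heading missing: create the section
fi

cat "$claude"
```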
## Quality Standards
Lessons must be specific, actionable, and reusable. Apply these filters:
Good lessons (specific, transferable):
- "Bash `set -e` interacts poorly with pipelines -- use explicit error checks after pipes"
- "jq `.field // \"default\"` prevents null propagation in optional config values"
- "bats-core `run` captures exit code but swallows stderr -- use `2>&1` to capture both"
Bad lessons (too vague or too specific):
- "Tests are important" -- too generic, not actionable
- "Fixed a bug on line 47" -- too specific, not transferable
- "Code should be clean" -- vague platitude
- "Changed variable name from x to y" -- implementation detail, not a lesson
Anti-Patterns to reject:
- Lessons that duplicate existing entries in LESSONS.md
- Lessons that reference specific line numbers or ephemeral file locations
- Lessons that are generic truisms rather than discovered knowledge
- Lessons longer than two sentences -- split or summarize
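The duplicate filter is the one rejection rule that is mechanical, and it can be sketched as an exact-line match against the existing file. File contents and the candidate text are examples; the real skill applies the other filters by judgment.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sandbox standing in for .shipyard/LESSONS.md with existing bullets.
dir=$(mktemp -d)
lessons="$dir/LESSONS.md"
printf -- '- Tests are important\n- Bash `set -e` interacts poorly with pipelines\n' > "$lessons"

candidate='- Bash `set -e` interacts poorly with pipelines'

# -F: fixed-string match, -x: whole line must match, -q: quiet.
if grep -qxF -- "$candidate" "$lessons"; then
  verdict="duplicate"
else
  verdict="new"
fi
echo "$verdict"
```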
## Integration
- **Referenced by:** `commands/ship.md` Step 3a for post-phase lesson capture.
- **Pairs with:** `shipyard:shipyard-verification` for validating lesson quality before persisting.