Claude-skill-registry debug
Systematic error debugging with analysis, solution discovery, and verification
install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/debug-skills" ~/.claude/skills/majiayu000-claude-skill-registry-debug-10b554 && rm -rf "$T"
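To sanity-check the copy, a small helper like the following can confirm the target directory contains a SKILL.md (the `check_skill` function name is illustrative, not part of the registry):

```shell
# check_skill DIR: succeed only if DIR holds a SKILL.md manifest
check_skill() {
  if [ -f "$1/SKILL.md" ]; then
    echo "installed: $1"
  else
    echo "missing: $1"
    return 1
  fi
}

check_skill "$HOME/.claude/skills/majiayu000-claude-skill-registry-debug-10b554" || true
```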
manifest: skills/data/debug-skills/SKILL.md
source content:
<objective>
Debug errors systematically through a 5-step workflow: analyze the error, find potential solutions, propose options to the user, implement the fix, and verify it works through multi-layer verification.
</objective>
<quick_start> Debug an error (interactive):
/debug login page crashes on submit
Auto mode (fully automatic, use recommended solutions):
/debug -a API returning 500 on POST
What it does:
- Analyze: Reproduce error, identify root cause → ask if you have more context
- Find Solutions: Research 2-3+ potential fixes with pros/cons
- Propose: Present options → you choose which solution
- Fix: Implement solution with strategic logging
- Verify: Multi-layer verification (Static → Build → Runtime)
Key principle: Tests passing ≠ fix working. Always execute the actual code path.
Note: Only 2 questions in the entire workflow:
- After analysis: "Do you have additional info?"
- After finding solutions: "Which solution to implement?" </quick_start>
<methodology> <core_principles>
- Reproduce Before Anything Else - If you can't reproduce it, you can't verify the fix
- Hypothesis-Driven Analysis - List 3-5 causes ranked by likelihood, test systematically
- Multi-Layer Verification - Tests alone give false confidence (20-40% still fail in production) </core_principles>
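The hypothesis-driven principle can be sketched as a small shell harness; the hypothesis descriptions and probe commands below are placeholders for illustration, not part of the skill:

```shell
# check_hypothesis DESC PROBE...: run a probe for one candidate cause;
# succeed (and report) only if the probe confirms it.
check_hypothesis() {
  desc="$1"; shift
  echo "testing: $desc"
  if "$@"; then
    echo "confirmed: $desc"
  else
    return 1
  fi
}

# Test candidate causes in likelihood order; stop at the first confirmed one.
check_hypothesis "stale build artifacts" false ||
check_hypothesis "missing API_KEY env var" sh -c '[ -z "${API_KEY:-}" ]' ||
check_hypothesis "upstream service down" false ||
echo "no hypothesis confirmed - gather more data"
```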
<verification_pyramid> Verification Pyramid:
       ┌─────────────┐
       │   Manual    │ ← User confirms
       └──────┬──────┘
    ┌─────────┴─────────┐
    │ Runtime Execution │ ← CRITICAL: Real execution
    └─────────┬─────────┘
  ┌───────────┴───────────┐
  │    Automated Checks   │ ← Build, Types, Lint, Tests
  └───────────┬───────────┘
┌─────────────┴─────────────┐
│      Static Analysis      │ ← Syntax, Imports
└───────────────────────────┘
Key Insight: Tests passing ≠ fix working. ALWAYS execute the actual code path. </verification_pyramid> </methodology>
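As a sketch, the pyramid's automated layers can be chained as a shell gate; the layer commands here are placeholders (`true`) you would swap for your project's real linter, build tool, and runtime invocation:

```shell
# run_layer NAME CMD...: execute one verification layer; a failure stops
# the pipeline so higher layers never run on top of a broken lower layer.
run_layer() {
  layer="$1"; shift
  if "$@"; then
    echo "PASS: $layer"
  else
    echo "FAIL: $layer"
    return 1
  fi
}

# Bottom-up: static analysis -> build/tests -> real execution of the code path.
run_layer "static analysis" true &&
run_layer "build + tests" true &&
run_layer "runtime execution" true &&
echo "automated layers passed - ask the user to confirm"
```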
<parameters> Flags:
| Flag | Name | Description |
|---|---|---|
| -a | Auto mode | Fully automatic mode: don't ask the user, use the recommended solutions |
Arguments:
- Everything after the flags is treated as {error_context}: a description of the error, or context about what's failing. </parameters>
<state_variables> Persist throughout all steps:
| Variable | Type | Description |
|---|---|---|
| error_context | string | User's description of the error |
|  | boolean | Skip confirmations, use recommended options |
|  | object | Detailed analysis from step 1 |
|  | list | Potential solutions found in step 2 |
|  | object | User's chosen solution from step 3 |
|  | list | Files changed during the fix |
|  | object | Results from verification step |
</state_variables>
<entry_point> Load
steps/step-00-init.md
</entry_point>
<step_files>
| Step | File | Description |
|---|---|---|
| 0 | steps/step-00-init.md | Parse flags, set up state |
| 1 | | Reproduce error, form hypotheses, identify root cause |
| 2 | | Research 2-3+ solutions with pros/cons |
| 3 | | Present solutions for user selection |
| 4 | | Implement with strategic logging |
| 5 | | Multi-layer verification (Static → Build → Runtime → User) |
</step_files>
<success_criteria>
- Error successfully reproduced
- Root cause identified through hypothesis testing
- 2-3+ potential solutions researched with pros/cons
- Solution selected (by user or auto mode)
- Fix implemented with strategic logging
- Static analysis passes (syntax, imports)
- Build completes successfully
- Tests pass (if tests exist)
- Runtime execution verified (actual code path executed)
- User confirms fix resolves the original issue
- No regressions introduced </success_criteria>