Claude-night-market bug-review
Bug hunting with evidence trails: find defects, document them, and verify fixes
git clone https://github.com/athola/claude-night-market
T=$(mktemp -d) && git clone --depth=1 https://github.com/athola/claude-night-market "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/pensive/skills/bug-review" ~/.claude/skills/athola-claude-night-market-bug-review && rm -rf "$T"
plugins/pensive/skills/bug-review/SKILL.md

Table of Contents
- Quick Start
- When to Use
- Required TodoWrite Items
- Progressive Loading
- Workflow
- Step 1: Detect Languages (bug-review:language-detected)
- Step 2: Plan Reproduction (bug-review:repro-plan)
- Step 3: Document Defects (bug-review:defects-documented)
- Step 4: Prepare Fixes (bug-review:fixes-prepared)
- Step 5: Verification Plan (bug-review:verification-plan)
- Defect Classification (Condensed)
- Output Format
- Summary
- Defects Found
- [D1] file.rs:142 - Title
- Proposed Fixes
- Fix for D1
- Test Updates
- Evidence
- Best Practices
- Exit Criteria
Bug Review Workflow
Systematic bug identification and fixing with language-specific expertise.
Quick Start
/bug-review
Verification: Run the command with the --help flag to verify availability.
When To Use
- Reviewing code for potential bugs
- After receiving bug reports
- Before major releases
- During security audits
- Investigating production issues
When NOT To Use
- Test coverage audit - use test-review instead
Required TodoWrite Items
- bug-review:language-detected
- bug-review:repro-plan
- bug-review:defects-documented
- bug-review:fixes-prepared
- bug-review:verification-plan
Progressive Loading
Load additional context as needed:
- Language Detection (@include modules/language-detection.md): manifest heuristics, expertise framing, version constraints
- Defect Documentation (@include modules/defect-documentation.md): severity classification, root cause analysis, static analyzers
- Fix Preparation (@include modules/fix-preparation.md): minimal patches, idiomatic patterns, test coverage
Workflow
Step 1: Detect Languages (bug-review:language-detected)
Identify dominant languages using manifest files (Cargo.toml → Rust, package.json → Node, etc.).
State expertise persona appropriate for the language ecosystem.
Note version constraints (MSRV, Python versions, Node engines).
Progressive: Load modules/language-detection.md for detailed manifest heuristics.
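The manifest heuristic above can be sketched in a few lines of Python. The manifest-to-ecosystem map below is a minimal illustration, not the full table from modules/language-detection.md:

```python
from pathlib import Path

# Minimal illustrative manifest-to-ecosystem map; the full heuristics
# live in modules/language-detection.md.
MANIFESTS = {
    "Cargo.toml": "Rust",
    "package.json": "Node",
    "pyproject.toml": "Python",
    "go.mod": "Go",
}

def detect_languages(root="."):
    """Return ecosystems whose manifest files exist anywhere under root."""
    root = Path(root)
    return sorted(
        lang
        for manifest, lang in MANIFESTS.items()
        if any(root.rglob(manifest))
    )
```

A real detector would also weigh file counts and lockfiles to pick the dominant ecosystem rather than listing all of them.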
Step 2: Plan Reproduction (bug-review:repro-plan)
Identify reproduction methods:
- Unit/integration test suites
- Fuzzing tools
- Manual reproduction commands
Document exact commands:
- cargo test -p core
- pytest tests/test_api.py
- npm test -- pkg
Verification: Run pytest -v tests/test_api.py to verify.
Capture blockers and propose mocks when dependencies are unavailable.
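When a dependency cannot run locally, a stub can stand in for it so the reproduction still exercises the code under review. This is a sketch using Python's unittest.mock; fetch_user and the client API are hypothetical names, not part of the skill:

```python
from unittest import mock

def fetch_user(client, user_id):
    """Look up a user via an external API client (hypothetical)."""
    resp = client.get(f"/users/{user_id}")
    return resp["name"]

def test_fetch_user_with_mocked_client():
    # The real API is unavailable, so stub out the client's get() call.
    client = mock.Mock()
    client.get.return_value = {"name": "ada"}
    assert fetch_user(client, 7) == "ada"
    client.get.assert_called_once_with("/users/7")
```

Record in the repro plan which dependencies were mocked, so the verification step knows which paths still need an end-to-end check.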
Step 3: Document Defects (bug-review:defects-documented)
Review code line-by-line, logging each bug with:
- File:line reference: Precise location
- Severity: Critical, High, Medium, Low
- Root cause: Logic error, API misuse, concurrency, resource leak
- Impact: What breaks and how
Run static analyzers (cargo clippy, ruff check, golangci-lint, eslint).
Use imbue:proof-of-work for reproducible capture.
Progressive: Load modules/defect-documentation.md for classification details and analyzer commands.
Step 4: Prepare Fixes (bug-review:fixes-prepared)
Draft minimal, idiomatic patches using language best practices:
- Guard clauses (Rust: pattern matching, Python: early returns)
- Resource cleanup (Go: defer, Python: context managers)
- Error propagation (Rust: ?, Go: wrapped errors)
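As a sketch, here is a Python fix combining two of the patterns above: an early-return guard clause and a context manager for resource cleanup. The function and file format are illustrative, not taken from the skill's modules:

```python
def load_config(path):
    """Parse a key=value config file, guarding against bad input."""
    if not path:  # guard clause: reject empty input early
        raise ValueError("config path must be non-empty")
    settings = {}
    with open(path) as fh:  # context manager guarantees the handle is closed
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):  # skip blanks and comments
                continue
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings
```

The same shape carries over to the other ecosystems: the guard clause becomes a match arm in Rust, and the with-block becomes defer in Go.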
Create tests following Red → Green pattern:
- Write failing test
- Apply minimal fix
- Verify test passes
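The Red → Green flow can be sketched with a small pytest example. The clamp function and its off-by-one defect are hypothetical, chosen only to show the pattern:

```python
# Hypothetical defect: clamp() returned the raw value for inputs below
# the lower bound. The first test fails (Red) against the buggy version
# and passes (Green) once the one-line fix below is applied.

def clamp(value, low, high):
    if value < low:
        return low   # minimal fix: buggy version returned `value` here
    if value > high:
        return high
    return value

def test_clamp_respects_lower_bound():
    assert clamp(-5, 0, 10) == 0  # Red before the fix, Green after

def test_clamp_passes_through_in_range():
    assert clamp(4, 0, 10) == 4
```

Committing the failing test first gives the evidence trail a before/after pair for the verification step.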
Progressive: Load modules/fix-preparation.md for language-specific patterns and test strategies.
Step 5: Verification Plan (bug-review:verification-plan)
Execute reproduction steps with fixes applied.
Capture evidence:
- Test output logs
- Benchmark comparisons
- Coverage reports
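One way to make this capture reproducible is to wrap each verification command in a small logger. This is a sketch; capture_evidence is a hypothetical helper, not part of the skill:

```python
import datetime
import pathlib
import subprocess

def capture_evidence(cmd, log_dir="evidence"):
    """Run a verification command and persist its output as a timestamped log."""
    pathlib.Path(log_dir).mkdir(exist_ok=True)
    result = subprocess.run(cmd, capture_output=True, text=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    name = pathlib.Path(cmd[0]).name  # strip any directory from the tool name
    log = pathlib.Path(log_dir) / f"{stamp}-{name}.log"
    log.write_text(
        f"$ {' '.join(cmd)}\nexit={result.returncode}\n"
        f"{result.stdout}{result.stderr}"
    )
    return result.returncode, log
```

Each log then records the exact command, exit code, and output, which is the evidence format the summary below expects.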
Document remaining risks using imbue:diff-analysis/modules/risk-assessment-framework.
Assign owners and deadlines for follow-up items.
Defect Classification (Condensed)
Severity: Critical (crash/data loss) → High (broken features) → Medium (degraded UX) → Low (edge cases)
Root Causes: Logic errors | API misuse | Concurrency issues | Resource leaks | Validation gaps
Output Format
## Summary
[Brief scope description]

## Defects Found

### [D1] file.rs:142 - Title
- Severity: High
- Root Cause: Logic error
- Impact: Data corruption possible
- Fix: [description]

## Proposed Fixes

### Fix for D1
[code diff with explanation]

## Test Updates
[new/updated tests with Red → Green verification]

## Evidence
- Commands executed
- Logs and outputs
- External references
Verification: Run pytest -v to verify tests pass.
Best Practices
- Evidence-based: Every finding has file:line reference
- Reproducible: Clear steps to reproduce each bug
- Minimal fixes: Smallest change that fixes the issue
- Test coverage: Every fix has corresponding test
- Risk awareness: Document remaining risks with severity scoring
Exit Criteria
- All defects documented with precise references
- Fixes prepared with test coverage verified
- Verification plan includes commands and expected outputs
- Remaining risks assessed and owners assigned