Skills test-review
Evaluate test suites for coverage gaps, quality issues, and TDD/BDD compliance
git clone https://github.com/openclaw/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/athola/nm-pensive-test-review" ~/.claude/skills/clawdbot-skills-test-review && rm -rf "$T"
skills/athola/nm-pensive-test-review/SKILL.md
Night Market Skill — ported from claude-night-market/pensive. For the full experience with agents, hooks, and commands, install the Claude Code plugin.
Table of Contents
- Quick Start
- When to Use
- Required TodoWrite Items
- Progressive Loading
- Workflow
- Step 1: Detect Languages (test-review:languages-detected)
- Step 2: Inventory Coverage (test-review:coverage-inventoried)
- Step 3: Assess Scenario Quality (test-review:scenario-quality)
- Step 4: Plan Remediation (test-review:gap-remediation)
- Step 5: Log Evidence (test-review:evidence-logged)
- Test Quality Checklist (Condensed)
- Output Format
- Summary
- Framework Detection
- Coverage Analysis
- Quality Issues
- Remediation Plan
- Recommendation
- Integration Notes
- Exit Criteria
Test Review Workflow
Evaluate and improve test suites with TDD/BDD rigor.
Quick Start
/test-review
Verification: Run pytest -v to verify tests pass.
When To Use
- Reviewing test suite quality
- Analyzing coverage gaps
- Before major releases
- After test failures
- Planning test improvements
When NOT To Use
- Writing new tests - use parseltongue:python-testing
- Updating existing tests - use sanctum:test-updates
Required TodoWrite Items
- test-review:languages-detected
- test-review:coverage-inventoried
- test-review:scenario-quality
- test-review:gap-remediation
- test-review:evidence-logged
Progressive Loading
Load modules as needed based on review depth:
- Basic review: Core workflow (this file)
- Framework detection: Load modules/framework-detection.md
- Coverage analysis: Load modules/coverage-analysis.md
- Quality assessment: Load modules/scenario-quality.md
- Remediation planning: Load modules/remediation-planning.md
Workflow
Step 1: Detect Languages (test-review:languages-detected)
Identify testing frameworks and version constraints. → See: modules/framework-detection.md
Quick check:
find . -maxdepth 2 -name "Cargo.toml" -o -name "pyproject.toml" -o -name "package.json" -o -name "go.mod"
Verification: Run each detected test command with the --help flag to confirm it is available.
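The quick check above can also be sketched in Python. This is a minimal illustration, not the skill's actual framework-detection module; the manifest-to-framework pairings are common defaults, not guarantees.

```python
from pathlib import Path

# Common defaults: manifest file -> the test runner it usually implies.
# These pairings are assumptions for illustration, not authoritative.
MANIFESTS = {
    "Cargo.toml": "cargo test",
    "pyproject.toml": "pytest",
    "package.json": "jest/vitest",
    "go.mod": "go test",
}

def detect_frameworks(root: str = ".") -> dict[str, str]:
    """Return {manifest: likely framework} for manifests within two levels."""
    found = {}
    base = Path(root)
    # Mirror `find . -maxdepth 2` by scanning the root and one level down.
    for path in list(base.glob("*")) + list(base.glob("*/*")):
        if path.name in MANIFESTS:
            found[path.name] = MANIFESTS[path.name]
    return found
```

A repository with both a pyproject.toml and a nested go.mod would report both frameworks, flagging the review as multi-language.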
Step 2: Inventory Coverage (test-review:coverage-inventoried)
Run coverage tools and identify gaps. → See: modules/coverage-analysis.md
Quick check:
git diff --name-only | rg 'tests|spec|feature'
Verification: Run pytest -v to verify tests pass.
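One rough heuristic for gap inventory is pairing source modules with test files. This sketch assumes a test_&lt;module&gt;.py naming convention; real coverage numbers should come from a tool such as coverage.py, as the coverage-analysis module describes.

```python
from pathlib import Path

def untested_modules(src_dir: str, tests_dir: str) -> list[str]:
    """List source modules with no test_<name>.py counterpart.

    A naming-based heuristic only: a matching test file does not
    prove coverage, and a missing one does not prove a gap.
    """
    tests = {p.name for p in Path(tests_dir).glob("test_*.py")}
    gaps = []
    for src in Path(src_dir).glob("*.py"):
        if src.name == "__init__.py":
            continue  # package markers rarely need dedicated tests
        if f"test_{src.name}" not in tests:
            gaps.append(src.name)
    return sorted(gaps)
```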
Step 3: Assess Scenario Quality (test-review:scenario-quality)
Evaluate test quality using BDD patterns and assertion checks. → See: modules/scenario-quality.md
Focus on:
- Given/When/Then clarity
- Assertion specificity
- Anti-patterns (dead waits, mocking internals, repeated boilerplate)
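The focus points above can be made concrete with a small example. The validate_email helper below is hypothetical, used only to show Given/When/Then structure and assertions that pin down an exact expectation rather than mere truthiness.

```python
# Hypothetical helper under test -- not part of this skill.
def validate_email(address: str) -> bool:
    """Accept only addresses whose domain contains a dot."""
    return "@" in address and "." in address.split("@")[-1]

def test_rejects_address_without_domain_dot():
    # Given: an address whose domain has no dot
    address = "user@localhost"
    # When: it is validated
    result = validate_email(address)
    # Then: it is rejected (assert the exact value, not just falsiness)
    assert result is False

def test_accepts_well_formed_address():
    # Given a routable address, When validated, Then it passes
    assert validate_email("user@example.com") is True
```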
Step 4: Plan Remediation (test-review:gap-remediation)
Create a concrete improvement plan with owners and dates. → See: modules/remediation-planning.md
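A remediation plan is easiest to act on when each item carries an owner and a date. One possible shape, sketched here with illustrative field names (the remediation-planning module may prescribe a different schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RemediationItem:
    """One planned improvement; field names are illustrative assumptions."""
    action: str
    owner: str
    due: date
    severity: str = "medium"

def overdue(items: list[RemediationItem], today: date) -> list[RemediationItem]:
    """Return items past their due date, most overdue first."""
    late = [item for item in items if item.due < today]
    return sorted(late, key=lambda item: item.due)
```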
Step 5: Log Evidence (test-review:evidence-logged)
Record executed commands, outputs, and recommendations. → See: imbue:proof-of-work
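Evidence logging can be as simple as appending each executed command and its output as a JSON line. This is a minimal sketch; the field names are assumptions, and the imbue:proof-of-work module may define its own schema.

```python
import datetime
import json
import subprocess

def log_evidence(cmd: list[str], logfile: str) -> dict:
    """Run a command and append command, exit code, and output to a JSONL log."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "command": " ".join(cmd),
        "exit_code": proc.returncode,
        "stdout": proc.stdout[-2000:],  # cap stored output for readability
    }
    with open(logfile, "a") as fh:
        fh.write(json.dumps(entry) + "\n")
    return entry
```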
Test Quality Checklist (Condensed)
- Clear test structure (Arrange-Act-Assert)
- Critical paths covered (auth, validation, errors)
- Specific assertions with context
- No flaky tests (dead waits, order dependencies)
- Reusable fixtures/factories
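The last checklist item, reusable fixtures/factories, can be illustrated with a plain factory function: defaults keep setup out of each test, and a test overrides only the field it asserts on. The User model here is a hypothetical stand-in.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    """Illustrative domain object -- not from this skill."""
    name: str
    email: str
    roles: list[str] = field(default_factory=list)

def make_user(**overrides) -> User:
    """Build a valid User; each test overrides only what it cares about."""
    defaults = {"name": "alice", "email": "alice@example.com", "roles": ["viewer"]}
    defaults.update(overrides)
    return User(**defaults)
```

A permissions test can then write make_user(roles=["admin"]) without repeating the name and email boilerplate it never asserts on.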
Output Format
```
## Summary
[Brief assessment]

## Framework Detection
- Languages: [list] | Frameworks: [list] | Versions: [constraints]

## Coverage Analysis
- Overall: X% | Critical: X% | Gaps: [list]

## Quality Issues
[Q1] [Issue] - Location - Fix

## Remediation Plan
1. [Action] - Owner - Date

## Recommendation
Approve / Approve with actions / Block
```
Integration Notes
- Use imbue:proof-of-work for reproducible evidence capture
- Reference imbue:diff-analysis for risk assessment
- Format output using imbue:structured-output patterns
Exit Criteria
- Frameworks detected and documented
- Coverage analyzed and gaps identified
- Scenario quality assessed
- Remediation plan created with owners and dates
- Evidence logged with citations
Troubleshooting
Common Issues
Tests not discovered: Ensure test files match the pattern test_*.py or *_test.py. Run pytest --collect-only to verify.
Import errors: Check that the module under test is on PYTHONPATH, or install it with pip install -e .
Async tests failing: Install pytest-asyncio and decorate async test functions with @pytest.mark.asyncio.
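A minimal async test following that advice might look like the sketch below; fetch_status is a hypothetical coroutine, and the marker has an effect only once pytest-asyncio is installed.

```python
import asyncio
import pytest

async def fetch_status() -> int:
    """Hypothetical async helper; sleep(0) stands in for real I/O."""
    await asyncio.sleep(0)
    return 200

@pytest.mark.asyncio  # requires the pytest-asyncio plugin to run under pytest
async def test_fetch_status_returns_ok():
    assert await fetch_status() == 200
```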