Leos_claude_starter review-tests
Comprehensive test review using parallel test-reviewer agents.
Install
Source · Clone the upstream repo
git clone https://github.com/leogodin217/leos_claude_starter
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/leogodin217/leos_claude_starter "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/review-tests" ~/.claude/skills/leogodin217-leos-claude-starter-review-tests && rm -rf "$T"
Manifest:
.claude/skills/review-tests/SKILL.md
Review Tests
Orchestrate a comprehensive test review using parallel test-reviewer agents.
Process
1. Discover
Find all test files:
Glob: **/test_*.py, **/*_test.py
Map each test file to its source:
tests/test_foo.py → src/.../foo.py
src/.../tests/test_bar.py → src/.../bar.py
Report: "Found N test files covering M source modules"
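A minimal Python sketch of this discovery step, assuming the two repository layouts shown above; `find_test_files`, `map_test_to_source`, and the `src/` root are illustrative assumptions, not names defined by the skill:

```python
# Sketch of the discovery step; names and layout are assumptions.
from __future__ import annotations

from pathlib import Path

def find_test_files(root: Path) -> list[Path]:
    # Same two naming conventions as the glob patterns above.
    return sorted(set(root.rglob("test_*.py")) | set(root.rglob("*_test.py")))

def map_test_to_source(test_file: Path, src_root: Path) -> Path | None:
    # tests/test_foo.py -> search src/ for foo.py; first match wins.
    stem = test_file.stem.removeprefix("test_").removesuffix("_test")
    matches = sorted(src_root.rglob(f"{stem}.py"))
    return matches[0] if matches else None

root = Path(".")
tests = find_test_files(root)
mapping = {t: map_test_to_source(t, root / "src") for t in tests}
covered = {s for s in mapping.values() if s is not None}
print(f"Found {len(tests)} test files covering {len(covered)} source modules")
```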
2. Plan Parallelization
Group tests for parallel review. Options:
- By file (maximum parallelism)
- By module (balanced)
- By package (fewer agents)
Default: By module (all tests for a source module → one agent); a grouping sketch follows below.
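As one way to implement the default strategy, this hypothetical helper builds on the `mapping` dict from the discovery sketch; the "unmapped" bucket for tests with no matched source is an assumption:

```python
# Hypothetical grouping helper, reusing `mapping` from the sketch above.
from collections import defaultdict

def group_by_module(mapping: dict) -> dict:
    groups = defaultdict(list)
    for test_file, source_file in mapping.items():
        # All tests for one source module share a single reviewer agent;
        # tests with no mapped source get their own "unmapped" bucket.
        key = str(source_file) if source_file is not None else "unmapped"
        groups[key].append(test_file)
    return dict(groups)
```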
3. Launch Agents
For each group, launch a test-reviewer agent:
Task(
    subagent_type="test-reviewer",
    prompt="Review tests.\n\ntest_path: {paths}\nsource_path: {paths}",
    run_in_background=true
)
Launch ALL agents in a single message for true parallelism.
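One way the per-group prompt strings might be assembled; the Task tool itself is invoked by Claude Code, and `build_review_prompt` is a hypothetical helper that only mirrors the prompt shape shown above:

```python
# Hypothetical helper; it only reproduces the prompt format above.
def build_review_prompt(test_paths, source_paths) -> str:
    return (
        "Review tests.\n\n"
        f"test_path: {', '.join(map(str, test_paths))}\n"
        f"source_path: {', '.join(map(str, source_paths))}"
    )
```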
4. Collect Results
Wait for all agents to complete. Gather structured output.
5. Aggregate
Combine findings across all agents:
# Test Review Summary

## Overall
- Files reviewed: N
- Tests reviewed: N
- Remove: N | Improve: N | Add: N | Keep: N

## By Priority

### High (Remove - tests with no value)
[List across all files]

### Medium (Add - important gaps)
[List across all files]

### Low (Improve - existing tests to refine)
[List across all files]

## By Module
[Findings grouped by source module]
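A minimal aggregation sketch, assuming each agent returns per-verdict counts; the `counts` key and its remove/improve/add/keep fields are assumptions, not a documented output schema:

```python
# Assumed result shape: each agent returns {"counts": {"remove": 2, ...}}.
from collections import Counter

def aggregate_counts(agent_results: list[dict]) -> Counter:
    totals = Counter()
    for result in agent_results:
        # Counter.update with a mapping adds counts rather than replacing.
        totals.update(result.get("counts", {}))
    return totals
```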
6. Recommendations
Based on findings, recommend:
- Tests to delete immediately
- Critical gaps to fill
- Improvements to make
- Whether overall test health is good/fair/poor (one possible heuristic is sketched below)
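The skill does not define a numeric rule for the health rating; this sketch uses invented thresholds purely for illustration, applied to the totals from the aggregation sketch above:

```python
# Invented thresholds, for illustration only; the skill defines no
# numeric good/fair/poor rule.
def test_health(totals: dict) -> str:
    reviewed = sum(totals.values()) or 1  # avoid division by zero
    problem_ratio = (totals.get("remove", 0) + totals.get("add", 0)) / reviewed
    if problem_ratio < 0.1:
        return "good"
    if problem_ratio < 0.3:
        return "fair"
    return "poor"
```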