# claude-skill-registry · analyze-test-results

Analyze test failures and CI artifacts to identify and fix bugs.

## Install

**Source** · Clone the upstream repo:

```bash
git clone https://github.com/majiayu000/claude-skill-registry
```

**Claude Code** · Install into `~/.claude/skills/`:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/analyze-test-results" ~/.claude/skills/majiayu000-claude-skill-registry-analyze-test-results && rm -rf "$T"
```

Manifest: `skills/data/analyze-test-results/SKILL.md`
# Analyze Test Results Skill

Analyze test results from CI or local runs to identify failures, diagnose root causes, and fix bugs.

## When to Use

- After tests fail (CI or local)
- After running `/download-ci-artifacts`
- When asked to "fix test failures"
- When asked to "analyze test results"
## Workflow

### 1. Locate Test Results

```bash
# From CI artifacts
cat ci-artifacts/test-report/test-report.md

# From local run
uv run pytest MouseMaster/Testing/Python/ -v --tb=long 2>&1 | tee test-output.log
```
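When only the plain log is available, the failing test IDs can be pulled out of pytest's short-summary lines. A minimal sketch, assuming the `test-output.log` name from the `tee` command above and pytest's standard `FAILED path::Class::test` summary format:

```python
# Sketch: pull failing test IDs out of the pytest log captured above.
from pathlib import Path

# "test-output.log" is the filename from the tee command above.
for line in Path("test-output.log").read_text().splitlines():
    # pytest's short summary lines look like:
    #   FAILED path/test_file.py::TestClass::test_name - AssertionError: ...
    if line.startswith(("FAILED ", "ERROR ")):
        print(line)
```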
### 2. Parse JUnit XML (if available)

```bash
# Extract failed tests from the XML report
grep -E "(failure|error)" ci-artifacts/unit/unit-tests.xml | head -20
```
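`grep` gives a quick look, but the JUnit XML is easier to read with a small parser. A minimal sketch, assuming the pytest-style JUnit schema (`<testcase>` elements with `<failure>`/`<error>` children) and the artifact path used above:

```python
# Sketch: list failed and errored tests from a JUnit XML report.
import xml.etree.ElementTree as ET

tree = ET.parse("ci-artifacts/unit/unit-tests.xml")  # path taken from the grep example above
for case in tree.iter("testcase"):
    problem = case.find("failure")
    if problem is None:
        problem = case.find("error")
    if problem is None:
        continue
    name = f"{case.get('classname', '')}::{case.get('name', '')}"
    # First line of the failure message, falling back to the tag name if absent.
    message = (problem.get("message") or problem.tag).splitlines()[0]
    print(f"{name}: {message}")
```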
### 3. For Each Failure

#### a. Identify the failing test

```
FAILED MouseMaster/Testing/Python/test_event_handler.py::TestClass::test_name
```
#### b. Read the test file
Understand what the test expects:
- What behavior is being tested?
- What are the assertions?
- What mocks are set up?
#### c. Read the implementation
Find the code being tested and understand:
- Current behavior
- Why it might fail
- Edge cases
#### d. Diagnose root cause

Common issues:

- Mock not configured: Check the mock setup in the test (a hedged sketch of this case follows the list)
- API changed: Update test or implementation
- Race condition: Add proper synchronization
- Missing dependency: Check imports and fixtures
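An illustration of the most common case, the unconfigured mock. The `EventHandler` and `fetch_state` names are hypothetical placeholders, not part of this project:

```python
# Hypothetical example of a "mock not configured" failure and its fix.
from unittest import mock


class EventHandler:
    """Placeholder implementation standing in for the real code under test."""

    def __init__(self, client):
        self.client = client

    def latest_state(self):
        return self.client.fetch_state()["status"]


def test_latest_state():
    client = mock.Mock()
    # Left unconfigured, client.fetch_state() returns a bare Mock, and
    # subscripting it raises "TypeError: 'Mock' object is not subscriptable".
    # The fix is to give the mock a concrete return value:
    client.fetch_state.return_value = {"status": "ready"}
    handler = EventHandler(client)
    assert handler.latest_state() == "ready"
```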
### 4. Fix the Issue
Apply the minimal fix needed:
```bash
# If the test expectation is wrong
#   → update the test
# If the implementation is wrong
#   → fix the implementation, re-run tests
# If the mock setup is incomplete
#   → add proper mock configuration
```
### 5. Verify Fix

```bash
# Run the specific failing test
uv run pytest MouseMaster/Testing/Python/test_file.py::TestClass::test_name -v

# Run the full suite
uv run pytest MouseMaster/Testing/Python/ -v
```
## Analyzing Slicer Test Failures

### Read the Slicer output log

```bash
cat ci-artifacts/slicer/slicer-output.log
```
### Common Slicer issues

| Error | Fix |
|---|---|
| Extension not loaded | Check module paths |
| Widget not ready | Add `processEvents()` (sketch below) |
| View closed | Check object lifetime |
| Invalid UI operation | Run on main thread |
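A sketch of the `processEvents()` fix for the "Widget not ready" row. It only runs inside Slicer's embedded Python; `slicer.app.processEvents()` is the usual way to flush pending Qt events there:

```python
# Runs only inside Slicer's Python environment.
import slicer

# After creating or showing a widget in a test, flush pending Qt events so the
# UI is fully laid out before the test inspects it or takes screenshots.
slicer.app.processEvents()
```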
### Screenshot analysis

If screenshots were captured, review them with:

```bash
# View the manifest
cat ci-artifacts/screenshots/manifest.json

# Then use the /review-ui-screenshots skill
```
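For a quick look without the screenshot skill, a minimal sketch that pretty-prints the manifest; the schema depends on how the capture step wrote the file, so no specific fields are assumed:

```python
# Sketch: dump whatever the screenshot manifest contains.
import json
from pathlib import Path

manifest = json.loads(Path("ci-artifacts/screenshots/manifest.json").read_text())
print(json.dumps(manifest, indent=2))
```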
## Decision Rules

### Fix immediately
- Clear implementation bugs
- Missing mock configuration
- Import errors
- Typos
### Investigate more
- Intermittent failures
- Platform-specific issues
- Complex logic errors
### Report only
- Test infrastructure issues
- CI configuration problems
- External dependency failures
## Report Format

After analysis:

```markdown
## Test Analysis Report

### Failures Found

| Test | Error | Root Cause |
|------|-------|------------|
| test_name | AssertionError | Incorrect mock setup |

### Fixes Applied

- `file.py:123` - Fixed mock configuration
- `test.py:45` - Updated assertion

### Remaining Issues

- Issue requiring further investigation

### Verification

- [ ] All tests pass locally
- [ ] Lint passes
- [ ] Type check passes
```
## Integration

After fixing:

- Run `/run-tests` to verify
- Commit with `fix: description of fix`
- Push to trigger CI
## Related Skills

- `/download-ci-artifacts` - Get artifacts first
- `/review-ui-screenshots` - Analyze UI issues
- `/fix-bad-practices` - Fix code quality issues