Claude-skill-registry agent-ops-testing
Test strategy, execution, and coverage analysis. Use when designing tests, running test suites, or analyzing test results beyond baseline checks.
```bash
git clone https://github.com/majiayu000/claude-skill-registry

T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/core/agent-ops-testing-majiayu000-claude-skill-registr" ~/.claude/skills/majiayu000-claude-skill-registry-agent-ops-testing && rm -rf "$T"
```
`skills/core/agent-ops-testing-majiayu000-claude-skill-registr/SKILL.md`

# Testing Workflow
Works with or without the `aoc` CLI installed. Issue tracking can be done via direct file editing.
## Purpose
Provide structured guidance for test design, execution, and analysis that goes beyond baseline capture. This skill covers test strategy during planning, incremental testing during implementation, and coverage analysis.
## Test Commands (from constitution)
```bash
# Python (uv/pytest)
uv run pytest                                # Run all tests
uv run pytest tests/ -v                      # Verbose output
uv run pytest tests/ -m "not slow"           # Skip slow tests
uv run pytest tests/ --tb=short -q           # Quick summary
uv run pytest --cov=src --cov-report=html    # Coverage report

# TypeScript/Node (vitest/jest)
npm run test                  # Run all tests
npm run test -- --coverage    # With coverage

# .NET (dotnet test)
dotnet test                                   # Run all tests
dotnet test --collect:"XPlat Code Coverage"   # With coverage
```
## Issue Tracking (File-Based — Default)
| Operation | How to Do It |
|---|---|
| Create test issue | Append to the appropriate priority file with type `TEST` |
| Create bug from failure | Append to the appropriate priority file with type `BUG` |
| Log test results | Edit the issue's log section in its priority file |
### Example: Post-Test Issue Creation (File-Based)
- Increment `.agent/issues/.counter`
- Append the issue to the appropriate priority file
- Add a log entry with test run results
## CLI Integration (when aoc is available)
When the `aoc` CLI is detected in `.agent/tools.json`, these commands provide convenience shortcuts:
| Operation | Command |
|---|---|
| Create test issue | |
| Create bug from failure | |
| Log test results | |
## Test Isolation (MANDATORY)
Tests must NEVER create, modify, or delete files in the project folder.
### Unit Tests
- Use mocks/patches for ALL file system operations
- Use in-memory data structures where possible
- NEVER call real file I/O against project paths
- Use `unittest.mock.patch` for `Path`, `open()`, and file operations
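A minimal sketch of the mocking rule above; `save_note()` is a hypothetical piece of project code, not part of this skill:

```python
# Unit test that exercises file-writing logic without touching the disk.
from unittest.mock import patch, mock_open

def save_note(path, text):
    # Stand-in for real project code that writes a file
    with open(path, "w") as f:
        f.write(text)

def test_save_note_never_touches_disk():
    m = mock_open()
    with patch("builtins.open", m):
        save_note("notes.md", "hello")   # no real file is created
    m.assert_called_once_with("notes.md", "w")
    m().write.assert_called_once_with("hello")
```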
### Integration Tests
- ALWAYS use the pytest `tmp_path` fixture (auto-cleaned)
- Use Docker containers for service dependencies (API, DB, etc.)
- Fixtures MUST handle cleanup on both success AND failure
- Test data lives ONLY in temp directories
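The cleanup-on-failure guarantee behind pytest yield fixtures can be sketched with `contextlib` so it runs standalone; in a real suite, prefer the built-in `tmp_path` fixture:

```python
# try/finally ensures teardown runs whether the test body succeeds or raises.
from contextlib import contextmanager
from pathlib import Path
import shutil
import tempfile

@contextmanager
def scratch_dir():
    d = Path(tempfile.mkdtemp())
    try:
        yield d                                # the test body runs here
    finally:
        shutil.rmtree(d, ignore_errors=True)   # runs on success AND failure
```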
### Forbidden Patterns
```python
# ❌ NEVER do this - pollutes project
Path(".agent/test.md").write_text("test")
Path("src/data/fixture.json").write_text("{}")
open("tests/output.log", "w").write("log")

# ✅ Always use tmp_path
def test_example(tmp_path):
    test_file = tmp_path / "test.md"
    test_file.write_text("test")  # Auto-cleaned
```
### Review Checklist (before approving tests)
- No hardcoded paths to project directories
- All file operations use `tmp_path` or mocks
- Integration tests use fixtures with cleanup
- Docker fixtures auto-remove containers
## When to Use
- During planning: designing test strategy for new features
- During implementation: running incremental tests
- During review: analyzing coverage and gaps
- On demand: investigating test failures, improving test suite
## Preconditions

- `.agent/constitution.md` exists with confirmed test command
- `.agent/baseline.md` exists (for comparison)
## Test Strategy Design

### For New Features
1. Identify test levels needed:
   - Unit tests: isolated function/method behavior
   - Integration tests: component interaction
   - E2E tests: user-facing workflows (if applicable)
2. Define test cases from requirements:
   - Happy path: expected inputs → expected outputs
   - Edge cases: boundary values, empty inputs, max values
   - Error cases: invalid inputs, failure scenarios
   - Regression cases: ensure existing behavior unchanged
3. Document in the task/plan:
   ```markdown
   ## Test Strategy
   - Unit: [list of unit test cases]
   - Integration: [list of integration scenarios]
   - Edge cases: [specific edge cases to cover]
   - Not testing: [explicitly excluded with rationale]
   ```
### For Bug Fixes
- Write failing test FIRST (reproduces the bug)
- Fix the bug
- Verify test passes
- Check for related regression tests needed
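The failing-test-first flow in miniature; `slugify()` and its bug are hypothetical examples, not project code:

```python
# Step 2 (the fix): split() collapses runs of whitespace. The buggy version
# used title.lower().replace(" ", "-"), turning "Hello  World" into "hello--world".
def slugify(title):
    return "-".join(title.lower().split())

def test_slugify_collapses_repeated_spaces():
    # Step 1: this test was written first and failed against the buggy version
    assert slugify("Hello  World") == "hello-world"   # Step 3: passes after the fix
```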
## Test Execution

### Incremental Testing (during implementation)
After each implementation step:
- Run the smallest reliable test subset covering changed code
- If tests fail: stop, diagnose, fix before proceeding
- Log test results in focus.md
### Full Test Suite (end of implementation)
- Run complete test command from constitution
- Compare results to baseline
- Investigate ANY new failures (even in unrelated areas)
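The comparison against the baseline amounts to a set difference; `new_failures` is an illustrative helper, not an existing tool:

```python
# Every failure absent from the baseline needs investigation,
# even if it looks unrelated to the change.
def new_failures(baseline_failed, current_failed):
    return sorted(set(current_failed) - set(baseline_failed))
```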
### Test Command Patterns

```bash
# Run specific test file
<test-runner> path/to/test_file.py

# Run tests matching pattern
<test-runner> -k "test_feature_name"

# Run with coverage
<test-runner> --coverage

# Run failed tests only (re-run)
<test-runner> --failed
```
Actual commands must come from the constitution.
## Coverage Analysis

### Confidence-Based Coverage Thresholds (MANDATORY)
Coverage requirements scale with confidence level:
| Confidence | Line Coverage | Branch Coverage | Enforcement |
|---|---|---|---|
| LOW | ≥90% on changed code | ≥85% on changed code | HARD — blocks completion |
| NORMAL | ≥80% on changed code | ≥70% on changed code | SOFT — warning if missed |
| HIGH | Tests pass | N/A | MINIMAL — existing tests only |
Rationale:
- LOW confidence = more unknowns = more code paths to verify
- HIGH confidence = well-understood = existing tests sufficient
Enforcement:
```
🎯 COVERAGE CHECK — {CONFIDENCE} Confidence

Required: ≥{line_threshold}% line, ≥{branch_threshold}% branch
Actual:   {actual_line}% line, {actual_branch}% branch

[PASS] Coverage meets threshold
— OR —
[FAIL] Coverage below threshold — must add tests before completion
```
For LOW confidence failures:
- Coverage failure is a HARD BLOCK
- Cannot proceed until threshold is met
- Document why if threshold is truly unachievable (rare)
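The threshold table can be expressed as a small gate function; `coverage_gate` and `THRESHOLDS` are illustrative names, not an existing API:

```python
# Thresholds mirror the confidence table above; HIGH only requires passing tests.
THRESHOLDS = {
    "LOW":    {"line": 90, "branch": 85, "enforcement": "HARD"},
    "NORMAL": {"line": 80, "branch": 70, "enforcement": "SOFT"},
    "HIGH":   {"line": 0,  "branch": 0,  "enforcement": "MINIMAL"},
}

def coverage_gate(confidence, actual_line, actual_branch):
    t = THRESHOLDS[confidence]
    if actual_line >= t["line"] and actual_branch >= t["branch"]:
        return "PASS"
    # HARD enforcement blocks completion; SOFT only warns
    return "FAIL" if t["enforcement"] == "HARD" else "WARN"
```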
### When to Analyze Coverage
- After completing a feature (before critical review)
- When investigating untested code paths
- During improvement discovery
### Coverage Metrics to Track
| Metric | Target | Notes |
|---|---|---|
| Line coverage | ≥80% for new code | Not a hard rule; quality over quantity |
| Branch coverage | Critical paths covered | Focus on decision points |
| Uncovered lines | Document rationale | Some code legitimately untestable |
### Coverage Gaps to Flag
- New code with 0% coverage → must address
- Error handling paths untested → should address
- Complex logic untested → investigate
- Generated/boilerplate untested → acceptable
## Test Quality Checklist

### Good Tests
- Test behavior, not implementation
- Independent (no test order dependencies)
- Deterministic (same result every run)
- Fast (< 1 second per unit test)
- Readable (test name describes scenario)
- Minimal mocking (only external dependencies)
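A minimal illustration of "test behavior, not implementation"; the `Stack` class is hypothetical:

```python
class Stack:
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

def test_pop_returns_last_pushed():
    # Asserts the LIFO contract — survives an internal refactor.
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2
    # ❌ brittle alternative: assert s._items == [1]  (implementation detail)
```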
### Anti-Patterns to Avoid
- ❌ Testing implementation details (breaks on refactor)
- ❌ Excessive mocking (tests mock, not real code)
- ❌ Flaky tests (intermittent failures)
- ❌ Slow tests without justification
- ❌ Tests that require manual setup
- ❌ Commented-out tests
## Failure Investigation

When tests fail unexpectedly, invoke `agent-ops-debugging`:
1. Apply the systematic debugging process:
   - Isolate: run the failing test alone
   - Reproduce: confirm the failure is consistent
   - Form a hypothesis: what might cause this?
   - Test the hypothesis: add logging, inspect state
2. Categorize the failure:

   | Category | Evidence | Action |
   |---|---|---|
   | Agent's change | Test passed in baseline | Fix the change |
   | Pre-existing | Test failed in baseline | Document, create issue |
   | Flaky | Intermittent, no code change | Fix test or document |
   | Environment | Works elsewhere | Check constitution assumptions |

3. Handoff decision:

   ```
   🔍 Test failure analysis:
   - Test: {test_name}
   - Category: {agent_change | pre_existing | flaky | environment}
   - Root cause: {diagnosis}

   Next steps:
   1. Fix and re-run (if agent's change)
   2. Create issue and continue (if pre-existing)
   3. Deep dive with /agent-debug (if unclear)
   ```
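The categorization step can be read as a decision procedure; this sketch uses illustrative names and is not part of the skill's tooling:

```python
# Evidence flags map onto the failure categories described above.
def categorize_failure(passed_in_baseline, intermittent, passes_elsewhere):
    if intermittent:
        return "flaky"           # intermittent, no code change
    if not passed_in_baseline:
        return "pre_existing"    # was already failing in the baseline
    if passes_elsewhere:
        return "environment"     # check constitution assumptions
    return "agent_change"        # passed in baseline, fails after the change
```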
## Output
After test activities, update:
- `.agent/focus.md`: test results summary
- `.agent/baseline.md`: if establishing a new baseline
## Issue Discovery After Testing

After test analysis, invoke the `agent-ops-tasks` discovery procedure:
1. Collect test-related findings:
   - Failing tests → `BUG` (high)
   - Missing test coverage → `TEST` (medium)
   - Flaky tests identified → `CHORE` (medium)
   - Test anti-patterns found → `REFAC` (low)
   - Missing edge case tests → `TEST` (medium)
2. Present to user:

   ```
   📋 Test analysis found {N} items:

   High:
   - [BUG] Flaky test: PaymentService.processAsync (failed 2/10 runs)

   Medium:
   - [TEST] Missing coverage for error handling in UserController
   - [TEST] No edge case tests for empty input scenarios

   Low:
   - [REFAC] Tests have excessive mocking in OrderService.test.ts

   Create issues for these? [A]ll / [S]elect / [N]one
   ```

3. After creating issues:

   ```
   Created {N} test-related issues. What's next?
   1. Start fixing highest priority (BUG-0024@abc123 - flaky test)
   2. Continue with current work
   3. Review test coverage report
   ```
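The finding-to-issue mapping above can be sketched as a lookup table; the names here are illustrative and not part of the `aoc` CLI:

```python
# Maps finding kinds to (issue type, priority), per the discovery list above.
FINDING_MAP = {
    "failing_test":       ("BUG",   "high"),
    "missing_coverage":   ("TEST",  "medium"),
    "flaky_test":         ("CHORE", "medium"),
    "test_anti_pattern":  ("REFAC", "low"),
    "missing_edge_cases": ("TEST",  "medium"),
}

def triage(findings):
    """Group issue types by priority for the user-facing summary."""
    out = {"high": [], "medium": [], "low": []}
    for f in findings:
        issue_type, priority = FINDING_MAP[f]
        out[priority].append(issue_type)
    return out
```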