Create and execute comprehensive tests including unit tests, integration tests, CLI tests, web/mobile UI tests, API tests, and log analysis. Find bugs, verify requirements, identify improvements, and create change/bug/improve backlogs. Use when testing implementations or ensuring quality.
Installation:

```sh
git clone https://github.com/majiayu000/claude-skill-registry
```

Or install just this skill:

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/dev-swarm-code-test" ~/.claude/skills/majiayu000-claude-skill-registry-dev-swarm-code-test && rm -rf "$T"
```
AI Builder - Code Test
Source: `skills/data/dev-swarm-code-test/SKILL.md`
This skill creates and executes comprehensive test suites to verify code quality and functionality. As a QA Engineer expert, you'll design test plans, write automated tests, perform manual testing, analyze results, identify issues, and create backlogs for changes, bugs, or improvements.
When to Use This Skill
- User asks to test a backlog or feature
- User requests test creation or execution
- Code review is complete and testing is needed
- User wants to verify implementation meets requirements
- User asks to run test suite
- User wants to validate a sprint before completion
Prerequisites
This skill requires:
- Code implementation completed
- Code review completed (recommended)
- `04-prd/` - Product Requirements Document (business requirements and acceptance criteria)
- `07-tech-specs/` - Engineering standards and constraints
- `features/` folder with feature design and implementation docs
- `09-sprints/` folder with backlog and test plan
- `src/` folder (organized as defined in `source-code-structure.md`)
- Access to source code and running environment
Feature-Driven Testing Workflow
CRITICAL: This skill follows a strict feature-driven approach where `feature-name` is the index for the entire project.
For Each Backlog:
1. Read the backlog from `09-sprints/SPRINT-XX-descriptive-name/[BACKLOG_TYPE]-XX-[feature-name]-<sub-feature>.md`
2. Extract the `feature-name` from the backlog file name
3. Read `features/features-index.md` to find the feature file
4. Read feature documentation in this order:
   - `features/[feature-name].md` - Feature definition (WHAT/WHY/SCOPE)
   - `features/flows/[feature-name].md` - User flows and process flows (if exists)
   - `features/contracts/[feature-name].md` - API/data contracts (if exists)
   - `features/impl/[feature-name].md` - Implementation notes (if exists)
5. Locate code and test files in `src/` using `features/impl/[feature-name].md`
6. Write/execute tests following `07-tech-specs/testing-standards.md`
7. Update `backlog.md` with test results and findings
This approach ensures AI testers can test large projects without reading all code at once.
Your Roles in This Skill
See `dev-swarm/docs/general-dev-stage-rule.md` for role selection guidance.
Role Communication
See `dev-swarm/docs/general-dev-stage-rule.md` for the required role announcement format.
Test Types Overview
This skill handles multiple test types:
- Unit Tests: Test individual functions/components in isolation
- Integration Tests: Test component interactions and data flow
- API Tests: Test REST/GraphQL endpoints, contracts, error handling
- CLI Tests: Test command-line interfaces and scripts
- Web UI Tests: Test web interfaces (Playwright, Selenium, Cypress)
- Mobile UI Tests: Test mobile apps (if applicable)
- Log Analysis: Verify logging, monitoring, error tracking
- Performance Tests: Load testing, stress testing, benchmarks
- Security Tests: Vulnerability scanning, penetration testing
Instructions
Follow these steps in order:
Step 0: Verify Prerequisites and Gather Context (Feature-Driven Approach)
IMPORTANT: Follow this exact order to efficiently locate all relevant context:
- Identify the backlog to test:
  - User specifies which backlog to test
  - Or test the latest reviewed backlog from the sprint:
    ```
    09-sprints/
    └── SPRINT-XX-descriptive-name/
        └── [BACKLOG_TYPE]-XX-[feature-name]-<sub-feature>.md
    ```
  - Locate the sprint README at `09-sprints/SPRINT-XX-descriptive-name/README.md` for required progress log updates
- Read the backlog file:
  - Understand requirements and acceptance criteria
  - Read the test plan defined in the backlog
  - Extract the `feature-name` from the file name (CRITICAL)
  - Verify the `Feature Name` in the backlog metadata matches the file name
  - If they do not match, stop and ask the user to confirm the correct feature name
  - Note the backlog type (FEATURE/CHANGE/BUG/IMPROVE)
  - Identify success criteria
- Read testing standards:
  - Understand test coverage requirements
  - Note test frameworks and conventions
- Read PRD and tech specs:
  - Read `04-prd/` (all markdown files) - Product requirements and acceptance criteria for the feature
  - Read `07-tech-specs/` (all markdown files) - Technical specifications and engineering standards
  - Understand the business context and technical constraints
- Read feature documentation (using `feature-name` as the index):
  - Read `features/features-index.md` to confirm the feature exists
  - Read `features/[feature-name].md` - Feature definition (expected behavior)
  - Read `features/flows/[feature-name].md` - User flows (test these flows)
  - Read `features/contracts/[feature-name].md` - API contracts (test these contracts)
  - Read `features/impl/[feature-name].md` - Implementation notes (what was built)
- Locate code and tests:
  - Use `features/impl/[feature-name].md` to find code locations
  - Navigate to the `src/` directory
  - Check existing test files in `src/` (locations from `features/impl/[feature-name].md`)
  - Identify files to test
- Read the sprint test plan:
  - Check `09-sprints/SPRINT-XX-descriptive-name/README.md` for the sprint-level test plan
  - Understand end-user test scenarios
  - Note manual vs automated test requirements
- Determine test scope:
  - What test types are needed?
  - Manual, automated, or both?
  - Environment requirements?

DO NOT read the entire codebase. Use `feature-name` to find only the relevant files.
Step 1: Design Test Strategy
Before writing tests, plan the approach:
- Identify test scenarios:
Happy Path:
- Normal, expected user flows
- Valid inputs and operations
- Successful outcomes
Edge Cases:
- Boundary values (min, max, zero, negative)
- Empty inputs
- Very large inputs
- Special characters
Error Cases:
- Invalid inputs
- Missing required data
- Permission denials
- Network failures
- System errors
Security Cases:
- SQL injection attempts
- XSS attempts
- Authentication bypass attempts
- Authorization violations
- CSRF attacks
- Select test types:
- Which test types are appropriate?
- What can be automated?
- What requires manual testing?
- What's the priority order?
- Define success criteria:
- What does passing mean?
- What coverage is needed?
- Performance benchmarks?
- Security requirements?
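For example, the SQL injection scenario above can be probed with an automated check. This is a minimal sketch using an in-memory sqlite3 database as a stand-in for the project's real data layer; the table, function name, and payload are illustrative, not from this project:

```python
import sqlite3

def find_user(conn, username):
    # Parameterized query: user input is bound, never string-concatenated
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

# In-memory database standing in for the real persistence layer
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice'), ('bob')")

payload = "alice' OR '1'='1"      # classic injection attempt
rows = find_user(conn, payload)   # should match no rows
legit = find_user(conn, "alice")  # legitimate lookup should still work
```

A passing run confirms the injection payload is treated as a literal string while normal lookups succeed; the same pattern applies to the project's actual query layer.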
Step 2: Write Automated Tests
Create automated test suites based on test type:
Unit Tests
Test individual functions/components:
Best Practices:
- Test one thing per test case
- Clear, descriptive test names
- Arrange-Act-Assert pattern
- Mock external dependencies
- Test both success and failure paths
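A minimal pytest-style sketch of these practices; the discount function and its rules are hypothetical, not from this project:

```python
# Hypothetical unit under test -- names and rules are illustrative only
def calculate_discount(price: float, percent: float) -> float:
    """Return the discounted price; reject out-of-range percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# One behavior per test, descriptive names, Arrange-Act-Assert
def test_applies_percentage_discount():
    assert calculate_discount(100.0, 25.0) == 75.0

def test_zero_discount_returns_original_price():
    assert calculate_discount(80.0, 0.0) == 80.0

def test_rejects_discount_above_100_percent():
    try:
        calculate_discount(100.0, 150.0)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Note that both the success path and the failure path get their own test, and each test name states the behavior it verifies.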
Integration Tests
Test component interactions:
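A sketch of an integration test, assuming a service/repository split (the class names are hypothetical). Unlike a unit test, it exercises the real interaction between the two layers, with an in-memory store standing in for the database:

```python
class InMemoryUserRepo:
    """In-memory stand-in for the real persistence layer."""
    def __init__(self):
        self._users = {}
    def save(self, user_id, name):
        self._users[user_id] = name
    def get(self, user_id):
        return self._users.get(user_id)

class UserService:
    """Business logic that depends on the repository."""
    def __init__(self, repo):
        self.repo = repo
    def register(self, user_id, name):
        if self.repo.get(user_id) is not None:
            raise ValueError("user already exists")
        self.repo.save(user_id, name)

def test_register_persists_user_through_repo():
    repo = InMemoryUserRepo()
    UserService(repo).register("u1", "Ada")
    assert repo.get("u1") == "Ada"

def test_register_rejects_duplicate_user():
    repo = InMemoryUserRepo()
    service = UserService(repo)
    service.register("u1", "Ada")
    try:
        service.register("u1", "Grace")
        assert False, "expected ValueError"
    except ValueError:
        pass
```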
API Tests
Test endpoints and contracts:
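A sketch of contract-style API assertions. The in-process `handle` function stands in for the running endpoint (the route, payloads, and status codes are assumptions); in a real project the same assertions would run against the live service via an HTTP client:

```python
import json

def handle(method, path, body=None):
    """Tiny stand-in for a REST endpoint serving an /items collection."""
    if method == "POST" and path == "/items":
        data = json.loads(body or "{}")
        if "name" not in data:
            return 400, {"error": "name is required"}
        return 201, {"id": 1, "name": data["name"]}
    return 404, {"error": "not found"}

def test_create_item_returns_201_with_body():
    status, payload = handle("POST", "/items", json.dumps({"name": "widget"}))
    assert status == 201
    assert payload["name"] == "widget"

def test_missing_field_returns_400():
    status, payload = handle("POST", "/items", "{}")
    assert status == 400
    assert "error" in payload

def test_unknown_route_returns_404():
    status, _ = handle("GET", "/nope")
    assert status == 404
```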
CLI Tests
Test command-line interfaces:
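A sketch of CLI testing via subprocesses: run the command, then assert on exit code, stdout, and stderr. Here `sys.executable -c` stands in for the real CLI under test:

```python
import subprocess
import sys

def run_cli(*args):
    # Replace [sys.executable, *args] with the real command under test
    return subprocess.run(
        [sys.executable, *args],
        capture_output=True, text=True, timeout=30,
    )

def test_success_path_exits_zero():
    result = run_cli("-c", "print('ok')")
    assert result.returncode == 0
    assert result.stdout.strip() == "ok"

def test_failure_path_exits_nonzero_with_error():
    result = run_cli("-c", "import sys; sys.exit('boom')")
    assert result.returncode != 0
    assert "boom" in result.stderr
```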
Web UI Tests (Playwright/Cypress)
Test web interfaces:
Step 3: Execute Manual Tests
For scenarios that can't be easily automated:
- Follow the test plan from the backlog:
  - Execute each manual test step
  - Use curl for API testing
  - Use the CLI for command testing
  - Use a browser for UI testing
- Document test execution:
  - Record what was tested
  - Note any issues encountered
  - Capture screenshots/logs for failures
  - Time performance-critical operations
- Test across environments:
  - Development environment
  - Different browsers (Chrome, Firefox, Safari)
  - Different devices (mobile, tablet, desktop)
  - Different operating systems (if applicable)
Step 4: Analyze Logs
Review application logs for issues:
- Check for errors:
  - Unhandled exceptions
  - Stack traces
  - Error messages
- Verify logging quality:
  - Appropriate log levels (debug, info, warn, error)
  - No sensitive data in logs (passwords, tokens)
  - Sufficient context in log messages
  - Proper error tracking
- Monitor performance:
  - Slow queries or operations
  - Memory usage patterns
  - Resource leaks
- Security audit:
  - No secrets logged
  - Proper access control logging
  - Suspicious activity detection
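Parts of this review can be automated. A minimal sketch that scans log lines for errors and for accidentally logged secrets; the log format and patterns are assumptions and would be adapted to the project's actual logging conventions:

```python
import re

# Looks for key=value or key: value pairs that suggest a leaked credential
SECRET_PATTERN = re.compile(r"(password|token|secret)\s*[=:]\s*\S+", re.IGNORECASE)

def scan_logs(lines):
    """Return error lines and suspected secret leaks with line numbers."""
    findings = {"errors": [], "leaked_secrets": []}
    for n, line in enumerate(lines, start=1):
        if " ERROR " in line or "Traceback" in line:
            findings["errors"].append((n, line.strip()))
        if SECRET_PATTERN.search(line):
            findings["leaked_secrets"].append((n, line.strip()))
    return findings

sample = [
    "2024-01-01 12:00:00 INFO  request handled in 45ms",
    "2024-01-01 12:00:01 ERROR unhandled exception in /checkout",
    "2024-01-01 12:00:02 DEBUG login attempt password=hunter2",
]
report = scan_logs(sample)
```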
Step 5: Performance Testing (When Needed)
For performance-critical features:
- Load testing:
  - Simulate multiple concurrent users
  - Measure response times
  - Identify bottlenecks
- Stress testing:
  - Push the system beyond normal limits
  - Find breaking points
  - Test recovery behavior
- Benchmark key operations:
  - Database query performance
  - API response times
  - Page load times
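A minimal micro-benchmark sketch for such operations: time the operation repeatedly and report percentile latencies rather than a single run. The `operation` function is a placeholder for the code under test:

```python
import statistics
import time

def operation():
    sum(range(1000))  # placeholder workload; replace with the real operation

def benchmark(fn, runs=200):
    """Time fn() `runs` times and report p50/p95 latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(len(samples) * 0.95) - 1],
    }

result = benchmark(operation)
```

Percentiles matter because a benchmark that only reports the mean hides tail latency, which is usually what users notice.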
Step 6: Analyze Results and Identify Issues
Categorize findings into three types:
1. Changes (Doesn't meet requirements)
Implementation doesn't meet original requirements:
- Missing acceptance criteria
- Incorrect behavior vs specification
- Doesn't follow test plan
- Feature doesn't work as designed
Action: Create a `change`-type backlog
2. Bugs (Defects found)
Code has defects or errors:
- Functional bugs (incorrect results)
- UI bugs (broken layouts, wrong text)
- API bugs (wrong status codes, incorrect responses)
- Performance bugs (timeouts, slowness)
- Security vulnerabilities
- Crashes or exceptions
- Data corruption
Action: Create a `bug`-type backlog
3. Improvements (Enhancement opportunities)
Non-critical enhancements:
- Better error messages
- UX improvements
- Performance optimizations
- Additional validation
- Better logging
- Test coverage gaps
- Accessibility improvements
Action: Create an `improve`-type backlog
Step 7: Create Backlogs for Issues
For each issue found, create a backlog:
- Determine severity:
  - Critical: System unusable, data loss, security breach
  - High: Major feature broken, significant user impact
  - Medium: Minor feature broken, workaround exists
  - Low: Cosmetic issues, minor improvements
- Create a backlog file in `09-sprints/`:

  Test Bug Backlog Template:

  ```markdown
  # Backlog: [Type] - [Brief Description]

  ## Type
  [change | bug | improve]

  ## Severity
  [critical | high | medium | low]

  ## Original Feature/Backlog
  Reference to the original backlog that was tested

  ## Issue Description
  Clear description of the bug or issue

  ## Steps to Reproduce
  1. Step-by-step instructions to reproduce
  2. Include specific inputs/actions
  3. Note environment details

  ## Expected Behavior
  What should happen

  ## Actual Behavior
  What actually happens

  ## Test Evidence
  - Screenshots
  - Log excerpts
  - Error messages
  - Performance metrics

  ## Affected Components
  - Files/functions involved
  - APIs or UI elements broken

  ## Reference Features
  Related features to consult

  ## Test Plan
  How to verify the fix works
  ```
- Notify Project Management:
  - Critical issues need immediate attention
  - High-severity bugs should be prioritized
  - Medium/low severity issues can be batched
Step 8: Create Test Report
Document test results:
- Test Summary:
  - Total test cases executed
  - Passed vs failed
  - Test coverage achieved
  - Time taken
- Test Results by Type:
  - Unit tests: X passed, Y failed
  - Integration tests: X passed, Y failed
  - API tests: X passed, Y failed
  - UI tests: X passed, Y failed
  - Manual tests: X passed, Y failed
- Issues Found:
  - Changes required: count
  - Bugs found: count
  - Improvements suggested: count
  - Breakdown by severity
- Test Decision:
  - Passed: All tests pass, ready for production
  - Passed with minor issues: Non-critical improvements noted
  - Failed: Critical issues must be fixed before release
  - Blocked: Cannot test due to environment or dependency issues
Step 9: Update Backlog with Test Results
CRITICAL: Update the backlog.md file to track testing progress:
- Update the backlog status:
  - Change the status from "In Testing" to "Done" (if all tests pass)
  - Or change it to "In Development" (if bugs were found that require fixes)
  - Add a "Testing Notes" section if not present
- Document testing findings:
  - Test Summary: Total tests executed, passed, failed
  - Test Types Executed: Unit, integration, API, UI, manual
  - Test Coverage: Percentage of code/features tested
  - Issues Found: Count of CHANGE/BUG/IMPROVE backlogs created
  - Test Decision: Passed, Passed with minor issues, Failed, or Blocked
  - Test Evidence: Screenshots, logs, performance metrics
  - Related Backlogs: Links to the created CHANGE/BUG/IMPROVE backlogs
- Update feature documentation:
  - Add test notes to `features/impl/[feature-name].md`
  - Document known issues or limitations discovered
  - Note the test coverage achieved
  - Record any testing insights
- Notify the user:
  - Summarize test results
  - Report pass/fail status
  - List critical issues found
  - Recommend next steps (fix bugs, deploy, etc.)
- Update the sprint README (CRITICAL):
  - Update the backlog status in the sprint backlog table
  - Append a log entry to the sprint progress log for the Testing step

These `backlog.md` and sprint README updates create the audit trail showing that testing was completed and what it found.