Gsd-2 test
Generate or run tests. Auto-detects test framework, generates comprehensive tests for source files, or runs existing test suites with failure analysis.
git clone https://github.com/gsd-build/gsd-2
T=$(mktemp -d) && git clone --depth=1 https://github.com/gsd-build/gsd-2 "$T" && mkdir -p ~/.claude/skills && cp -r "$T/src/resources/skills/test" ~/.claude/skills/gsd-build-gsd-2-test && rm -rf "$T"
src/resources/skills/test/SKILL.md

<objective>
Accepts optional arguments:
- A file path: generate tests for that source file
- "run": run the existing test suite and analyze results
- No arguments: suggest what to test based on recent changes
</objective>
<quick_start>
<step_1_detect_framework>
Detect the test framework and conventions before doing anything else.
Check these sources in order:
- `package.json` (Node/JS/TS projects):
  - `scripts.test` for the test command
  - `devDependencies` for jest, vitest, mocha, ava, tap, node:test, playwright, cypress
  - `jest` or `vitest` config keys
- Config files:
  - `jest.config.*`, `vitest.config.*`, `.mocharc.*`, `ava.config.*`
  - `pytest.ini`, `pyproject.toml` (look for `[tool.pytest]`), `setup.cfg`
  - `go.mod` (Go projects use `go test` by default)
  - `Cargo.toml` (Rust projects use `cargo test`)
- Existing test files:
  - Scan for `*.test.*`, `*.spec.*`, `*_test.*`, `test_*.*` files
  - Read 1-2 existing test files to understand patterns, imports, assertion style, and structure
  - Note the directory structure (co-located tests vs `__tests__/` vs `tests/` vs `test/`)
Record your findings (see the detection sketch after this list):
- Framework name and version
- Test file naming convention
- Test file location convention
- Import/require style
- Assertion style (expect, assert, chai, etc.)
- Any custom utilities, fixtures, or helpers used
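
To make the `package.json` checks concrete, here is a minimal detection sketch for a Node project. It is illustrative only: the `detectNodeFramework` helper and the ordering of the runner list are assumptions, not part of the skill.

```typescript
// Minimal framework-detection sketch for a Node project (illustrative only).
// Assumes it runs from the repository root; the helper name is hypothetical.
import { existsSync, readFileSync } from "node:fs";

interface DetectionResult {
  framework: string | null;
  testCommand: string | null;
}

function detectNodeFramework(): DetectionResult {
  if (!existsSync("package.json")) {
    return { framework: null, testCommand: null };
  }

  const pkg = JSON.parse(readFileSync("package.json", "utf8"));
  const deps: Record<string, string> = { ...pkg.dependencies, ...pkg.devDependencies };

  // Check well-known runners; the order here is an assumption, not a recommendation.
  const known = ["vitest", "jest", "mocha", "ava", "tap", "playwright", "cypress"];
  const framework = known.find((name) => name in deps) ?? null;

  return {
    framework,
    testCommand: pkg.scripts?.test ?? null, // scripts.test is the usual entry point
  };
}

console.log(detectNodeFramework());
```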
</step_1_detect_framework>
<step_2_handle_arguments>
Route based on the argument provided.
- File path given -> Go to `generate_tests`
- "run" given -> Go to `run_tests`
- No arguments -> Go to `suggest_tests`
</step_2_handle_arguments>
<generate_tests>
Generate tests for the specified source file.
A. Read and analyze the source file:
- Identify all exported/public functions, classes, methods, and types
- Understand each function's parameters, return types, and side effects
- Note error handling patterns (throws, returns null, returns Result, etc.)
- Identify dependencies that will need mocking
B. Read existing test files in the project (1-2 files minimum):
- Match their import style exactly
- Match their describe/it or test block structure
- Match their assertion patterns
- Match their mock/stub approach
- Use the same test utilities and helpers
C. Generate tests covering:
- Happy paths: Normal expected inputs produce correct outputs
- Edge cases:
- Empty inputs (empty string, empty array, null, undefined, zero)
- Boundary values (min/max integers, very long strings)
- Single element collections
- Error handling:
- Invalid inputs that should throw or return errors
- Missing required parameters
- Type mismatches (if applicable)
- Async behavior (if the function is async):
- Successful resolution
- Rejection/error cases
- Timeout scenarios (if relevant)
- Dependencies:
- Mock external dependencies (APIs, databases, file system)
- Verify correct interaction with dependencies (called with right args)
D. Place the test file correctly:
- Follow the project's existing convention for test file location
- Use the project's naming convention (`.test.ts`, `.spec.js`, `_test.go`, `test_*.py`, etc.)
E. Run the generated tests immediately to verify they pass.
- If tests fail, read the error output carefully
- Fix the test code (not the source code)
- Re-run until all tests pass
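
For a sense of shape only, here is what a generated file might look like in a project that uses vitest with co-located tests. Everything in it is hypothetical: the `slugify` module, its `fetchTitle` API dependency, and the expected values stand in for whatever the analyzed source file actually exports.

```typescript
// Hypothetical generated test file: src/slugify.test.ts in a vitest project.
// The module under test (./slugify) and its ./api dependency are assumptions.
import { describe, it, expect, vi } from "vitest";
import { slugify, slugifyRemoteTitle } from "./slugify";
import * as api from "./api";

describe("slugify", () => {
  // Happy path
  it("converts a normal title to a slug", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });

  // Edge cases
  it("returns an empty string for empty input", () => {
    expect(slugify("")).toBe("");
  });

  it("handles a single-character title", () => {
    expect(slugify("A")).toBe("a");
  });

  // Error handling
  it("throws on non-string input", () => {
    expect(() => slugify(42 as unknown as string)).toThrow(TypeError);
  });
});

describe("slugifyRemoteTitle", () => {
  // Async behavior with the external dependency mocked
  it("slugifies the title returned by the API", async () => {
    const spy = vi.spyOn(api, "fetchTitle").mockResolvedValue("Remote Title");
    await expect(slugifyRemoteTitle("post-1")).resolves.toBe("remote-title");
    expect(spy).toHaveBeenCalledWith("post-1"); // verify the interaction
  });

  it("rejects when the API call fails", async () => {
    vi.spyOn(api, "fetchTitle").mockRejectedValue(new Error("network down"));
    await expect(slugifyRemoteTitle("post-1")).rejects.toThrow("network down");
  });
});
```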
</generate_tests>
<run_tests>
Run the existing test suite and analyze results.
A. Determine the test command:
- Check `package.json` `scripts.test` for Node projects
- Use `pytest` for Python projects
- Use `go test ./...` for Go projects
- Use `cargo test` for Rust projects
- Fall back to the detected framework's CLI
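
A minimal sketch of that resolution order, assuming the checks run from the repository root (the `resolveTestCommand` helper is hypothetical):

```typescript
// Hypothetical helper: map project markers to a test command.
import { existsSync, readFileSync } from "node:fs";

function resolveTestCommand(): string {
  if (existsSync("package.json")) {
    const pkg = JSON.parse(readFileSync("package.json", "utf8"));
    if (pkg.scripts?.test) return "npm test"; // defer to scripts.test
  }
  if (existsSync("pytest.ini") || existsSync("pyproject.toml")) return "pytest";
  if (existsSync("go.mod")) return "go test ./...";
  if (existsSync("Cargo.toml")) return "cargo test";
  throw new Error("No recognized test setup; fall back to the detected framework's CLI");
}
```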
B. Run the tests:
- Execute the test command
- Capture full output including failures and errors
C. Analyze results:
- Report total passed, failed, skipped counts
- For each failure:
- Identify the failing test name and file
- Show the assertion that failed (expected vs actual)
- Read the relevant source code if needed
- Provide a specific diagnosis of why it failed
- Suggest a concrete fix (is it a test bug or a source bug?)
D. Present a summary:
Test Results: X passed, Y failed, Z skipped

Failures:
1. [test name] - [brief diagnosis]
   Fix: [specific suggestion]
2. [test name] - [brief diagnosis]
   Fix: [specific suggestion]
</run_tests>
<suggest_tests>
Suggest what to test when no arguments are given.
A. Check recent changes:
- Run `git diff --name-only HEAD~5` to find recently changed files
- Run `git diff --name-only --cached` for staged files
- Filter to source files (exclude configs, docs, lockfiles)
B. Check test coverage gaps:
- Find source files that have no corresponding test file
- Prioritize files that were recently modified
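
A rough sketch of the coverage-gap check, assuming the `glob` package is available and the project co-locates `*.test.*` / `*.spec.*` files (both assumptions; adapt to the detected convention):

```typescript
// Hypothetical scan: source files with no matching *.test.* / *.spec.* file.
import { globSync } from "glob";
import { basename } from "node:path";

const sources = globSync("src/**/*.{ts,js}", {
  ignore: ["**/*.test.*", "**/*.spec.*"],
});

// Normalize test file names back to their source name, e.g. foo.test.ts -> foo.ts
const tested = new Set(
  globSync("**/*.{test,spec}.{ts,js}", { ignore: "node_modules/**" }).map((file) =>
    basename(file).replace(/\.(test|spec)/, "")
  )
);

const untested = sources.filter((src) => !tested.has(basename(src)));
console.log("No corresponding test file:", untested);
```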
C. Present suggestions:
Suggested files to test (based on recent changes and coverage gaps):
1. [file path] - modified recently, no test file exists
2. [file path] - modified recently, tests exist but may need updating
3. [file path] - no test coverage found

Run `/test <file path>` to generate tests for any of these.
Run `/test run` to run the existing test suite.
</suggest_tests>
</quick_start>
<critical_rules>
- MATCH EXISTING PATTERNS: Never impose a new test style. Always mirror what the project already does.
- READ BEFORE WRITING: Always read existing test files before generating new ones.
- VERIFY GENERATED TESTS: Always run generated tests. Untested test code is unreliable.
- DON'T MODIFY SOURCE CODE: If generated tests fail, fix the tests, not the source. If the source has a real bug, report it to the user.
- MOCK EXTERNAL DEPENDENCIES: Never let tests hit real APIs, databases, or file systems unless the project explicitly uses integration tests that way.
- ONE FILE AT A TIME: Generate tests for one source file per invocation. Keep scope manageable.
- USE PROJECT DEPENDENCIES: Only use test libraries already installed in the project. Do not add new dependencies without asking.
</critical_rules>
<success_criteria>
Before completing:
- Test framework and conventions were detected correctly
- Generated tests match the project's existing test style
- All generated tests pass when run
- Tests cover happy paths, edge cases, and error handling
- Test file is placed in the correct location with the correct naming convention
- No source code was modified
</success_criteria>