Claude-code-blueprint test-check
MUST use after implementing new features or bug fixes, when user asks 'run the tests', 'are tests passing?', 'test this', or before any deployment step. Also trigger when tests were previously failing and fixes were applied.
install
source · Clone the upstream repo
git clone https://github.com/faizkhairi/claude-code-blueprint
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/faizkhairi/claude-code-blueprint "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/test-check" ~/.claude/skills/faizkhairi-claude-code-blueprint-test-check && rm -rf "$T"
manifest:
skills/test-check/SKILL.md
Run the full test suite and provide analysis:
- Detect active project from current working directory or recent context:
  - Check `CLAUDE.md` or `package.json` in the project root for the test command and package manager
  - Common patterns: `yarn test:unit`, `npm run test`, `npx vitest run`
  - Default (no context): ask which project
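The detection step above can be sketched as a shell helper. This is a minimal sketch, not the skill's actual implementation: the `test:unit` script name, the lockfile heuristic, and the grep-based matching are illustrative assumptions — a real implementation would parse `package.json` with `jq` or `node`.

```shell
# Sketch: pick a test command from package.json (grep keeps this dependency-free;
# prefer jq/node for real JSON parsing).
detect_test_cmd() {
  [ -f package.json ] || return 1
  local name
  if grep -q '"test:unit"' package.json; then
    name="test:unit"                     # prefer an explicit unit-test script
  elif grep -q '"test"' package.json; then
    name="test"                          # fall back to the plain "test" script
  else
    return 1                             # no context: caller should ask which project
  fi
  # Choose the package manager from the lockfile that is present
  if [ -f yarn.lock ]; then
    printf 'yarn %s\n' "$name"
  else
    printf 'npm run %s\n' "$name"
  fi
}
```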
- Run the appropriate test command and capture output
- Report summary: total tests, passed, failed, skipped, duration
- If failures exist: analyze root cause of each failing test
- Compare against known baselines:
  - Check `CLAUDE.md` for the documented test baseline count for the active project
  - If no baseline is documented: parse the test summary output line for the total count; compare against previous runs if agent memory has a baseline
  - If the actual count is higher than the baseline, the test suite grew; if lower, investigate missing tests
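The baseline comparison could be sketched like this, assuming a vitest-style summary line that ends in a parenthesized total — the exact summary format varies by test runner, so the parsing below is an assumption:

```shell
# Sketch: pull the total test count from a summary line and compare it
# to a documented baseline count.
compare_baseline() {
  local summary="$1" baseline="$2" total
  # e.g. "Tests  2 failed | 140 passed | 3 skipped (145)"
  total=$(printf '%s\n' "$summary" | grep -oE '\([0-9]+\)' | tr -d '()')
  if [ "$total" -gt "$baseline" ]; then
    echo "suite grew ($total > $baseline)"
  elif [ "$total" -lt "$baseline" ]; then
    echo "investigate missing tests ($total < $baseline)"
  else
    echo "matches baseline ($total)"
  fi
}
```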
- Check if recently modified files have corresponding tests:
  - Identify test coverage gaps for critical paths
  - Suggest new tests if coverage is insufficient
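One way to sketch the "recently modified files have tests" check, assuming a `foo.ts` / `foo.test.ts` naming convention (that convention, the `.ts` extension, and diffing against `HEAD~1` are all illustrative assumptions, not part of the skill):

```shell
# Sketch: list source files changed in the last commit that have no
# sibling *.test.* file under the assumed naming convention.
missing_tests() {
  git diff --name-only HEAD~1 -- '*.ts' | grep -v '\.test\.' | while read -r f; do
    test_file="${f%.ts}.test.ts"
    [ -f "$test_file" ] || echo "no test for $f"
  done
}
```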
- E2E tests (run if user says "e2e", "all", or "full"):
  - Check `CLAUDE.md` for the dev server port and E2E test command
  - Check if the dev server is running on that port before proceeding
  - Run the E2E test command from `CLAUDE.md` (e.g., `yarn test:e2e` or `npm run test:e2e`)
  - E2E baselines: documented in `CLAUDE.md`
  - Parse Playwright output: passed/failed/skipped counts
  - On failure: note which spec file and test name failed
  - If the dev server is NOT running, report and skip (do NOT auto-start)
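The dev-server guard above can be sketched as follows. This assumes `curl` is available and that an HTTP probe is an acceptable liveness check; the port and command come from `CLAUDE.md` in practice, so both are parameters here rather than hard-coded values:

```shell
# Sketch: only run the E2E command if something answers on the dev server port;
# otherwise report and skip -- never auto-start the server.
run_e2e_if_server_up() {
  local port="$1" cmd="$2"
  if curl -sf -o /dev/null --max-time 2 "http://localhost:$port/"; then
    eval "$cmd"
  else
    echo "dev server not running on :$port -- skipping E2E (not auto-starting)"
    return 1
  fi
}
```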
Present results in a clear dashboard format.