EasyPlatform integration-test-verify
[Testing] Verify integration tests pass after writing and reviewing them. Reads project-specific run guidance from docs/project-config.json (integrationTestVerify section). Generic: supports any test runner via config.
git clone https://github.com/duc01226/EasyPlatform
T=$(mktemp -d) && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/integration-test-verify" ~/.claude/skills/duc01226-easyplatform-integration-test-verify && rm -rf "$T"
.claude/skills/integration-test-verify/SKILL.md

[IMPORTANT] Use TaskCreate to break ALL work into small tasks BEFORE starting. A verify step that does not actually run tests is not verification. It is theater. Read project config FIRST to understand how to run tests for this specific project.

<!-- SYNC:critical-thinking-mindset -->
Critical Thinking Mindset — Apply critical thinking, sequential thinking. Every claim needs traced proof, confidence >80% to act. Anti-hallucination: Never present guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, stay skeptical of own confidence — certainty without evidence is the root of all hallucination.
<!-- /SYNC:critical-thinking-mindset -->

<!-- SYNC:ai-mistake-prevention -->
AI Mistake Prevention — Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether bug is in caller (wrong data) or callee (wrong handling). Fix at responsible layer — never patch symptom site.
- Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging — resist nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding — don't pick silently. If request has multiple interpretations, present each with effort estimate and ask. Never assume all-records, file-based, or more complex path.
<!-- /SYNC:ai-mistake-prevention -->
Quick Summary
Goal: Run integration tests after /integration-test writes them and /integration-test-review reviews them. Confirm all pass.
Workflow:
- Read Config — Load docs/project-config.json → integrationTestVerify section for project-specific run guidance
- System Check — Verify the required system is healthy before running
- Determine Test Projects — Discover via testProjectPattern glob, testProjects list, or git auto-detect
- Run Tests — Execute quickRunCommand on the determined test projects
- Report — Pass/fail counts, failed test names, next steps on failure
Key Rules:
- MUST read the project config integrationTestVerify section before doing anything else
- Use quickRunCommand from config — NEVER hardcode dotnet test or any language-specific command
- If the system check fails → instruct the user how to start the system (reference startupScript from config)
- On test failure → diagnose the root cause: test bug or service bug. NEVER weaken assertions.
- Always report exact failure counts and names — "all passed" requires evidence
Be skeptical. Apply critical thinking. Every pass/fail claim needs actual test runner output.
Step 1: Read Project Config
Read
docs/project-config.json and extract the integrationTestVerify section.
Expected config shape:

```
{
  "integrationTestVerify": {
    "guidance": string            — instructions for this project's test run approach
    "quickRunCommand": string     — test runner command (e.g., "dotnet test --no-build", "npm test", "pytest")
    "testProjectPattern": string  — glob pattern to discover test projects (e.g., "src/Services/**/*.IntegrationTests.csproj")
    "testProjects": string[]      — explicit list of test project paths (fallback if no pattern)
    "systemCheckCommand": string  — shell command to check system readiness
    "runScript": string           — path to CI-style full run script (reference only)
    "startupScript": string       — path to system startup script (reference only)
  }
}
```
Config priority:
testProjectPattern (auto-discovers via glob) > testProjects (explicit list) > git auto-detect (fallback).
If the integrationTestVerify section is missing: proceed to Fallback Mode.
If section exists: display the
guidance value to the user verbatim — it contains project-specific instructions the implementer wrote intentionally.
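A minimal sketch of this step, assuming jq is available and the config lives at the documented default path; the variable names are illustrative only:

```bash
# Read the integrationTestVerify section (illustrative sketch; assumes jq is installed).
CONFIG="docs/project-config.json"

if [ -f "$CONFIG" ] && jq -e '.integrationTestVerify' "$CONFIG" > /dev/null 2>&1; then
  # Show the implementer's guidance verbatim before doing anything else
  jq -r '.integrationTestVerify.guidance // "No guidance provided"' "$CONFIG"
  QUICK_RUN=$(jq -r '.integrationTestVerify.quickRunCommand // empty' "$CONFIG")
  SYSTEM_CHECK=$(jq -r '.integrationTestVerify.systemCheckCommand // empty' "$CONFIG")
  PATTERN=$(jq -r '.integrationTestVerify.testProjectPattern // empty' "$CONFIG")
else
  echo "No integrationTestVerify section found; switching to Fallback Mode"
fi
```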
Step 2: System Check
If systemCheckCommand exists in config:
Run the system check via Bash:
{systemCheckCommand}
Evaluate output:
- Healthy → proceed to Step 3
- Partially healthy / no containers → display startup instructions to user:
"System not fully ready. To start: run
(or follow the guidance above). Wait for all services to be healthy, then re-run{startupScript}
." STOP — do not run tests against an unhealthy system. Results would be unreliable./integration-test-verify
If no systemCheckCommand: skip this step and proceed to Step 3.
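A sketch of the gating logic, assuming SYSTEM_CHECK was extracted in Step 1; it treats a non-zero exit code as "unhealthy", which is an assumption to adapt to whatever the configured command actually reports:

```bash
# Gate the test run on the configured system check (sketch; exit-code convention is assumed).
if [ -n "$SYSTEM_CHECK" ]; then
  if bash -c "$SYSTEM_CHECK"; then
    echo "System healthy. Proceeding to Step 3."
  else
    echo "System not fully ready. Start it via the configured startupScript, then re-run /integration-test-verify."
    exit 1   # STOP: never run tests against an unhealthy system
  fi
fi
```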
Step 3: Determine Test Projects
Priority order:
testProjectPattern (glob auto-discover) > testProjects (explicit list) > git auto-detect (fallback).
If testProjectPattern exists in config:
Discover test projects by running a glob search for the pattern:
```bash
# Example for .NET projects (pattern: "src/Services/**/*.IntegrationTests.csproj")
find . -path "{testProjectPattern}" -type f
# or use a language-appropriate glob tool
```
Use all discovered
.csproj files (or equivalent) as the test project list. Exclude any paths outside the pattern scope.
If no testProjectPattern but a testProjects list exists: use the explicit list from config directly.
If neither exists — auto-detect from git:
```bash
# Auto-detect changed test projects
git diff --name-only HEAD | grep -i "IntegrationTest" | sed 's|/[^/]*$||' | sort -u
```
If auto-detect finds nothing (no uncommitted test changes), ask user: "No changed test files detected. Run all test projects or skip?"
Filter rule: Only run projects relevant to the current change. If user explicitly asks to run all → run all discovered/configured projects.
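In shell, the priority order could look like this sketch; PATTERN comes from the Step 1 read, the grep keyword simply mirrors the auto-detect example above, and the exact glob tool is up to the runner in use:

```bash
# Resolve test projects: testProjectPattern > testProjects > git auto-detect (sketch).
if [ -n "$PATTERN" ]; then
  mapfile -t TEST_PROJECTS < <(find . -path "./${PATTERN}" -type f)
elif jq -e '.integrationTestVerify.testProjects' docs/project-config.json > /dev/null 2>&1; then
  mapfile -t TEST_PROJECTS < <(jq -r '.integrationTestVerify.testProjects[]' docs/project-config.json)
else
  mapfile -t TEST_PROJECTS < <(git diff --name-only HEAD | grep -i "IntegrationTest" | sed 's|/[^/]*$||' | sort -u)
fi

if [ ${#TEST_PROJECTS[@]} -eq 0 ]; then
  echo "No changed test files detected. Run all test projects or skip?"
fi
```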
Step 4: Run Tests
Execute using
quickRunCommand from config. Example for a .NET project:
```bash
# Run each test project individually for clear per-project results
{quickRunCommand} {testProject1}
{quickRunCommand} {testProject2}
# ...
```
Or run all at once using the solution filter if supported:
```bash
{quickRunCommand} --filter "Category=integration"
```
Capture output: count Passed, Failed, Skipped. Note: skipped tests (tests marked with a framework-specific skip annotation, e.g.,
[Fact(Skip=...)] in xUnit, @Disabled in JUnit) are expected and not a failure.
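A sketch of the per-project run loop; it relies only on the runner's exit code, so it works for any quickRunCommand, and the log path is purely an illustration:

```bash
# Run quickRunCommand against each resolved project and tally failures (sketch).
set -o pipefail   # make each pipeline report the runner's exit code, not tee's
FAILED_PROJECTS=0
for PROJ in "${TEST_PROJECTS[@]}"; do
  echo "=== ${PROJ} ==="
  if ! $QUICK_RUN "$PROJ" 2>&1 | tee "/tmp/$(basename "$PROJ").log"; then
    FAILED_PROJECTS=$((FAILED_PROJECTS + 1))
  fi
done
echo "${FAILED_PROJECTS} project(s) reported failing tests"
```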
Step 5: Report Results
After all tests complete, report:
```markdown
### Integration Test Verify Results

**Run command:** {quickRunCommand}
**Projects tested:** {N}

| Project | Passed | Failed | Skipped |
|---------|--------|--------|---------|
| {Project1} | X | 0 | Y |
| {Project2} | X | 0 | Y |

**Total:** {total_passed} passed, {total_failed} failed, {total_skipped} skipped (expected skip annotations)

Status: ✅ ALL PASS | ❌ {N} FAILURES
```
On failure:
- List each failing test name + failure message
- Diagnose: test bug (wrong assertion setup) or service bug (handler actually broken)?
- If test bug → fix in the test file (do NOT weaken assertions — fix setup/data)
- If service bug → report as finding, do NOT silently fix without telling user
- After fixing → re-run verify (see the re-run sketch below)
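After a fix, the re-run can first target only the failing test, then the whole project. The --filter syntax below is .NET-specific and purely illustrative; substitute the selector your configured runner uses (for example pytest -k or jest -t). FAILING_PROJECT and the test name are hypothetical placeholders:

```bash
# Re-run only the previously failing test, then the full project (illustrative sketch).
$QUICK_RUN "$FAILING_PROJECT" --filter "FullyQualifiedName~Some.Namespace.FailingTest"
$QUICK_RUN "$FAILING_PROJECT"   # confirm the whole project is green before reporting
```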
Fallback Mode (No Project Config)
When
docs/project-config.json has no integrationTestVerify section:
- Detect project type from root files (see the sketch after this list):
  - *.sln or *.csproj → dotnet test
  - package.json → npm test or npx jest
  - pytest.ini / setup.py / pyproject.toml → pytest
  - go.mod → go test ./...
- Auto-detect changed test files from git: git diff --name-only HEAD
- Run the detected command on the changed test projects.
- Report results and recommend: "Add integrationTestVerify to docs/project-config.json for project-specific run guidance."
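A sketch of the detection mapping above; the file checks and default commands mirror the list and can be extended for other ecosystems:

```bash
# Fallback Mode: pick a default test command from root files (sketch).
if ls ./*.sln ./*.csproj > /dev/null 2>&1; then
  RUN_CMD="dotnet test"
elif [ -f package.json ]; then
  RUN_CMD="npm test"
elif [ -f pytest.ini ] || [ -f setup.py ] || [ -f pyproject.toml ]; then
  RUN_CMD="pytest"
elif [ -f go.mod ]; then
  RUN_CMD="go test ./..."
else
  echo "Unknown project type; ask the user for the test command"
fi
```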
CI-Style Full Run (Reference)
When
runScript is configured, reference it for the full CI-style run (not run by AI directly — Windows .cmd scripts and CI runners require user/pipeline execution):
"For a full CI-style run including Docker orchestration and health polling, execute:
"{runScript}
This script typically: creates networks → removes stale containers → builds images → starts infrastructure (wait healthy) → starts APIs (wait healthy) → runs all tests.
On Test Failure Protocol
NEVER do these to make failures go away:
- ❌ Remove or weaken assertions
- ❌ Add skip annotations (e.g., [Fact(Skip=...)] in xUnit, @Disabled in JUnit) to hide failures
- ❌ Mark passing by ignoring error output
- ❌ Report "all passed" without showing actual runner output
DO this instead:
- Read the failing test method
- Read the handler/service the test targets
- Identify: is the assertion wrong, or is the code wrong?
- Fix at the root cause layer
- Re-run to confirm green
If a test fails because the system is unavailable → report as "system not ready" and reference
startupScript / runScript. Never change the test.
Next Steps
MANDATORY IMPORTANT MUST ATTENTION — NO EXCEPTIONS: after completing this skill, you MUST use AskUserQuestion to present these options. Do NOT skip because the task seems "simple" or "obvious" — the user decides:
- "/workflow-review-changes (Recommended)" — Review all changes before committing
- "/docs-update" — Update documentation if test counts changed
- "Skip, continue manually" — user decides
Closing Reminders
- MANDATORY IMPORTANT MUST ATTENTION read docs/project-config.json → integrationTestVerify FIRST — project-specific guidance overrides defaults
- MANDATORY IMPORTANT MUST ATTENTION use quickRunCommand from config, not hardcoded dotnet test — this skill is language-agnostic
- MANDATORY IMPORTANT MUST ATTENTION run system check before tests — unreliable system = unreliable results
- MANDATORY IMPORTANT MUST ATTENTION never weaken assertions to fix failures — diagnose and fix root cause
- MANDATORY IMPORTANT MUST ATTENTION show actual test runner output — "all passed" without evidence is not verification
- MANDATORY IMPORTANT MUST ATTENTION on failure: diagnose (test bug vs service bug) before fixing anything <!-- SYNC:critical-thinking-mindset:reminder -->
- MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact. <!-- /SYNC:critical-thinking-mindset:reminder --> <!-- SYNC:ai-mistake-prevention:reminder -->
- MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction. <!-- /SYNC:ai-mistake-prevention:reminder -->