Claude-skill-registry-data mcaf-testing
Add or update automated tests for a change (bugfix, feature, refactor) using the repository’s testing rules in `AGENTS.md`. Use TDD where applicable; derive scenarios from docs/Features/* and ADR invariants; prefer stable integration/API/UI tests, run build before tests, and verify meaningful assertions for happy/negative/edge cases.
install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry-data
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry-data "$T" && mkdir -p ~/.claude/skills && cp -r "$T/data/mcaf-testing" ~/.claude/skills/majiayu000-claude-skill-registry-data-mcaf-testing && rm -rf "$T"
manifest: `data/mcaf-testing/SKILL.md` (source content below)
MCAF: Testing
Outputs
- New/updated automated tests that encode documented behaviour (happy path + negative + edge), with integration/API/UI preferred
- For new behaviour and bugfixes: tests drive the change (TDD: reproduce/specify → test fails → implement → test passes)
- Updated verification sections in relevant docs (`docs/Features/*`, `docs/ADR/*`) when needed (tests + commands must match reality)
- Evidence of verification: commands run (`build`/`test`/`coverage`/`analyze`) + result + the report/artifact path written by the tool (when applicable)
Workflow
- Read `AGENTS.md`:
  - commands: `build`, `test`, `format`, `analyze`, and the repo's coverage path (either a dedicated `coverage` command or a `test` command that generates coverage)
  - testing rules (levels, mocks policy, suites to run, containers, etc.)
- Start from the docs that define behaviour (no guessing):
  - `docs/Features/*` for user/system flows and business rules
  - `docs/ADR/*` for architectural decisions and invariants that must remain true
  - if the docs are missing or contradictory, fix the docs first (or write a minimal spec + test plan in the task/PR)
  - follow `AGENTS.md` scoping rules (Architecture map → relevant docs → relevant module code; avoid repo-wide scanning)
- Follow `AGENTS.md` verification timing (optimize time + tokens):
  - run tests/coverage only when you have a reason (changed code/tests, bug reproduction, baseline confirmation)
  - start with the smallest scope (new/changed tests), then expand to required suites
- Define the scenarios you must prove (map them back to docs):
  - positive (happy path)
  - negative (validation/forbidden/unauthorized/error paths)
  - edge (limits, concurrency, retries/idempotency, time-sensitive behaviour)
  - for ADRs: test the invariants and the "must not happen" behaviours the decision relies on
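The scenario mapping above can be sketched as a small test module; the `create_order` function and its quantity limit here are hypothetical stand-ins for behaviour that would be defined in `docs/Features/*`:

```python
# Hypothetical system under test: a minimal order API whose rules
# (positive quantities only, max quantity 100) stand in for rules
# documented in docs/Features/*.
def create_order(quantity: int) -> dict:
    if quantity <= 0:
        raise ValueError("quantity must be positive")  # negative path
    if quantity > 100:
        raise ValueError("quantity exceeds limit")     # edge/limit path
    return {"status": "created", "quantity": quantity}

# Positive (happy path): a valid order is accepted.
def test_create_order_happy_path():
    assert create_order(3)["status"] == "created"

# Negative: forbidden input is rejected with a meaningful error.
def test_create_order_rejects_non_positive_quantity():
    try:
        create_order(0)
    except ValueError as exc:
        assert "positive" in str(exc)
    else:
        raise AssertionError("expected ValueError for quantity=0")

# Edge: behaviour exactly at the documented limit.
def test_create_order_at_limit_boundary():
    assert create_order(100)["quantity"] == 100  # at the limit: still allowed
```

Each test name points back at one documented scenario, which keeps the docs-to-tests mapping auditable.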
- Choose the highest meaningful test level:
  - prefer integration/API/UI when the behaviour crosses boundaries
  - use unit tests only when logic is isolated and higher-level coverage is impractical
- Implement via a TDD loop (per scenario):
  - write the test first and make sure it fails for the right reason
  - implement the minimum change to make it pass
  - refactor safely (keep tests green)
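A minimal sketch of one red→green iteration, using a hypothetical `slugify` helper as the change being driven:

```python
# Step 1 (red): this test is written first, before slugify exists.
# Running it then fails for the right reason (NameError/ImportError),
# which proves the test is actually exercising the missing behaviour.
# Step 2 (green): the minimal implementation below makes it pass.
import re

def slugify(title: str) -> str:
    # Minimal change to satisfy the test: lowercase, then collapse
    # runs of non-alphanumerics into single hyphens and trim them.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_produces_url_safe_slug():
    assert slugify("  Hello, World!  ") == "hello-world"
```

Only after the test passes would you refactor (step 3), rerunning the same test to keep it green.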
- Write tests that assert outcomes (not "it runs"):
  - assert returned values/responses
  - assert DB state / emitted events / observable side effects
  - include negative and edge cases when relevant
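Outcome-focused assertions can look like this; the in-memory repository and `register_user` service are illustrative, not a real API:

```python
# Illustrative service with an observable side effect (stored state)
# and a returned response -- both are asserted, not just "it runs".
class UserRepo:
    def __init__(self):
        self.users = {}

def register_user(repo: UserRepo, email: str) -> dict:
    if email in repo.users:
        return {"ok": False, "error": "duplicate"}
    repo.users[email] = {"email": email, "active": True}
    return {"ok": True, "email": email}

def test_register_user_asserts_response_and_state():
    repo = UserRepo()
    resp = register_user(repo, "a@example.com")
    assert resp == {"ok": True, "email": "a@example.com"}  # returned value
    assert repo.users["a@example.com"]["active"] is True   # stored state
    # Negative path: duplicates are rejected and state is unchanged.
    assert register_user(repo, "a@example.com")["ok"] is False
    assert len(repo.users) == 1
```

In a real integration test the same shape applies: assert the HTTP response body, then assert the database row or emitted event it should have produced.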
- Keep tests stable (treat flakiness as a bug):
  - deterministic data/fixtures, no hidden dependencies
  - avoid `sleep`-based timing; prefer "wait until condition"/polling with a timeout
  - keep test setup/teardown reliable (reset state between tests)
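A sketch of the "wait until condition" pattern, assuming nothing beyond the standard library:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns truthy or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)  # short poll interval, not a fixed wait
    return bool(condition())  # final check at the deadline

# Usage: instead of time.sleep(2) and hoping a background worker has
# finished, poll the observable state with a hard timeout.
events = []
events.append("done")  # stands in for an async worker completing
assert wait_until(lambda: "done" in events, timeout=1.0)
```

This keeps the fast case fast (returns as soon as the condition holds) while still failing deterministically with a clear timeout instead of hanging forever.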
- Coverage (follow `AGENTS.md`, optimize time/tokens):
  - run coverage only if it's part of the repo's required verification path or if you need it to find gaps
  - run coverage once per change (it is heavier than tests)
  - capture where the report/artifacts were written (path, summary) if generated
- If the repo has UI:
  - run UI/E2E tests
  - inspect screenshots/videos/traces produced by the runner for failures and obvious UI regressions
- Run verification in layers (as required by `AGENTS.md`):
  - new/changed tests first
  - then the related suite
  - then broader regressions if required
  - run `analyze` if required
- Keep docs and skills consistent:
  - ensure `docs/Features/*` and `docs/ADR/*` verification sections point to the real tests and real commands
  - if you change test/coverage commands or rules, update `AGENTS.md` and this skill in the same PR
Guardrails
- All test discipline and prohibitions come from `AGENTS.md`. Do not contradict it in this skill.