claude-skill-registry-data · mcaf-feature-spec
Create or update a feature spec under `docs/Features/` using `docs/templates/Feature-Template.md`: business rules, user flows, system behaviour, Mermaid diagram(s), verification plan, and Definition of Done. Use before implementing a non-trivial feature or when behaviour changes; keep the spec executable (test flows + traceability to tests).
install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry-data
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry-data "$T" && mkdir -p ~/.claude/skills && cp -r "$T/data/mcaf-feature-spec" ~/.claude/skills/majiayu000-claude-skill-registry-data-mcaf-feature-spec && rm -rf "$T"
manifest:
data/mcaf-feature-spec/SKILL.md · source content
MCAF: Feature Spec
Outputs
- `docs/Features/<feature>.md` (create or update)
- Update links from/to ADRs and architecture map when needed
Spec Quality (anti-guesswork checklist)
Write a spec that can be implemented and verified without guessing:
- No placeholders: avoid “TBD”, “later”, “etc.”; if something is unknown, list it as an explicit question.
- Concrete modules: use real module/boundary names from `docs/Architecture/Overview.md`.
- Rules are testable: numbered business rules with clear inputs → outputs (no vague adjectives); a sketch follows this list.
- Flows are executable: scenarios include preconditions, steps, expected results (happy + negative + edge).
- Verification is real: commands copied from `AGENTS.md`, and scenarios mapped to test IDs.
- Stakeholders covered: Product / Dev / DevOps / QA each get the information they need to ship safely.
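As referenced above, a minimal sketch of what testable rules and executable flows look like, assuming a hypothetical CSV-export feature (all IDs, endpoints, and limits below are illustrative, not from this repo):

```markdown
## Business rules
- BR-1: An export with zero matching rows returns an empty file with a header row, not an error.
- BR-2: Exports above 10,000 rows are rejected with error code EXPORT_TOO_LARGE.

## Flows
### F-1: Happy path
- Preconditions: user has the report-read permission; at least one matching row exists.
- Steps: request the export, poll its status, download the file.
- Expected: status reaches `done`; downloaded row count equals the query's match count.
```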
Workflow
- Start from `docs/Architecture/Overview.md` to pick the affected module(s).
- Create/update the feature doc using `docs/templates/Feature-Template.md`.
  - follow `AGENTS.md` scoping rules (do not scan the whole repo; use the architecture map to stay focused)
  - keep the feature’s `## Implementation plan (step-by-step)` updated while executing
- Define behaviour precisely:
  - purpose and scope (in/out)
  - business rules (numbered, testable)
  - primary flow + edge cases
- Describe system behaviour in terms of entry points, reads/writes, side effects, idempotency, and errors.
- Add a Mermaid diagram for the main flow (modules + interactions; keep it readable); a sketch follows this list.
- Write verification that can be executed:
  - test environment assumptions
  - concrete test flows (positive/negative/edge)
  - mapping to where tests live (or will live)
  - traceability: rules/flows → test IDs (so tests reflect the spec); a table sketch follows this list
- Keep Definition of Done strict:
  - behaviour covered by automated tests
  - static analysis clean
  - docs updated (feature + ADR + architecture overview if boundaries changed)
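To make the Mermaid step concrete, a sketch of a main-flow diagram for the same hypothetical export feature (module names are illustrative; take the real ones from `docs/Architecture/Overview.md`):

```mermaid
sequenceDiagram
    participant Client
    participant API
    participant ExportService
    participant Storage
    Client->>API: POST /exports
    API->>ExportService: validate request, enqueue job
    ExportService->>Storage: write CSV file
    ExportService-->>API: job status done
    API-->>Client: download URL
```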
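And for the traceability bullet, a sketch of a rules/flows → test IDs table (paths and IDs are hypothetical; the real verification commands come from `AGENTS.md`):

```markdown
## Verification
| Rule / Flow | Scenario                       | Test (ID / location)                 |
|-------------|--------------------------------|--------------------------------------|
| BR-1        | export with zero matching rows | tests/export/test_rules.py::test_br1 |
| BR-2        | export above the row limit     | tests/export/test_rules.py::test_br2 |
| F-1         | happy path, end to end         | tests/export/test_flows.py::test_f1  |
```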
Guardrails
- If the feature introduces a new dependency/boundary, write an ADR and update `docs/Architecture/Overview.md` (a minimal stub follows).
- Don’t hide decisions inside the feature doc: decisions go to ADRs.
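A minimal ADR stub for the first guardrail, assuming a conventional context/decision/consequences layout (the number, title, and library are placeholders; reuse whatever ADR template the repo already has):

```markdown
# ADR-NNN: Adopt <csv library> for streaming export
Status: accepted

Context: the export feature needs streaming CSV generation; hand-rolling it duplicates a solved problem.
Decision: add <csv library> as a direct dependency, wrapped behind the ExportService boundary.
Consequences: one more third-party dependency to track; `docs/Architecture/Overview.md` gains the new boundary.
```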