Spec-forge review
git clone https://github.com/tercel/spec-forge
T=$(mktemp -d) && git clone --depth=1 https://github.com/tercel/spec-forge "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/review" ~/.claude/skills/tercel-spec-forge-review && rm -rf "$T"
skills/review/SKILL.md

Review — Specification Quality & Consistency Review
Systematically review spec-forge generated documents for quality, completeness, and internal consistency. Optionally auto-fix issues found.
Core Principles
- Evidence-based: Every finding must cite a specific file and section — no vague complaints
- Spec-focused: Review specification documents only (tech-design, feature specs, overview) — not code, not upstream idea drafts
- Upstream-aware: Use idea drafts and project manifests as reference context to validate spec accuracy, but do not review them
- Prioritized: Findings classified by severity so the user can act on what matters first
- Conservative fixes: Auto-fix only touches cited sections; when domain knowledge is missing, leave a `<!-- REVIEW: {question} -->` comment instead of guessing
- Honest: Report real issues, don't inflate findings to look thorough
Severity Levels
| Severity | Meaning | Example |
|---|---|---|
| Critical | Wrong, contradictory, or misleading content | Feature spec API signature contradicts tech-design, component boundary mismatch |
| Major | Significant gap that would block or confuse implementation | Empty required section, missing error handling spec, undefined edge cases |
| Minor | Quality issue that degrades usefulness but isn't blocking | Vague description, missing cross-reference, inconsistent terminology |
Workflow
Step 1: Determine Review Scope
Parse the arguments to determine what to review:
- If a `feature_name` argument is provided, look for:
  - `docs/{feature_name}/tech-design.md`
  - All feature specs in `docs/features/` (glob for `docs/features/*.md`)
  - Upstream reference: `ideas/{feature_name}/draft.md` (if exists)
  - Project manifest: `docs/project-{feature_name}.md` (if exists, for multi-split context)
- If no argument, scan `docs/` for the most recent tech-design and all feature specs in `docs/features/`
- If no spec documents found, inform the user and stop
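A minimal shell sketch of this lookup, assuming the directory layout described above (the paths are this skill's conventions and may not all exist in a given project):

```bash
#!/usr/bin/env bash
# Sketch: collect review targets and reference context for a feature name.
feature="$1"

targets=()
[ -f "docs/${feature}/tech-design.md" ] && targets+=("docs/${feature}/tech-design.md")
for spec in docs/features/*.md; do            # overview.md plus one spec per component
  [ -e "$spec" ] && targets+=("$spec")
done

refs=()
[ -f "ideas/${feature}/draft.md" ] && refs+=("ideas/${feature}/draft.md")      # upstream requirements
[ -f "docs/project-${feature}.md" ] && refs+=("docs/project-${feature}.md")    # multi-split manifest

if [ "${#targets[@]}" -eq 0 ]; then
  echo "No spec documents found; nothing to review."
  exit 0
fi
printf 'Review target: %s\n' "${targets[@]}"
[ "${#refs[@]}" -gt 0 ] && printf 'Reference context: %s\n' "${refs[@]}"
```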
Use `AskUserQuestion` to ask:
- Review scope: Review all generated specs, or focus on specific documents? (Options: All / Tech design only / Feature specs only / Specific files)
- Auto-fix: Should I auto-fix issues found? (Options: Yes — fix Critical+Major automatically / Yes — fix all / No — report only)
Step 2: Document Inventory
Build the list of documents to review:
- Review targets (will be reviewed):
  - `docs/{feature_name}/tech-design.md`
  - `docs/features/overview.md`
  - `docs/features/{component-1}.md`, `docs/features/{component-2}.md`, etc.
  - For multi-split: tech-designs for all sub-features
- Reference context (read for context, NOT reviewed):
  - `ideas/{feature_name}/draft.md` — upstream requirements
  - `docs/project-{feature_name}.md` — project manifest with sub-feature scope
- Read each review target document fully
Display the inventory:
Review scope: {feature_name}

Review targets: {N} documents
- docs/{feature_name}/tech-design.md
- docs/features/overview.md
- docs/features/{component-1}.md
- docs/features/{component-2}.md
...

Reference context: {N} documents
- ideas/{feature_name}/draft.md
Step 3: Review
Check each review target document against the following checklist:
3.1 Completeness
- Any empty sections, TBD/TODO markers, placeholder text, or `{placeholder}` template variables?
- Missing required sections per document type?
  - Tech design: Goals, Non-goals, Scope, Architecture, API Design, Data Model, Component Overview
  - Feature spec: Purpose, API/Interface, Logic/Behavior, Error Handling, Dependencies
  - Overview: Feature index listing all generated specs, dependency graph
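A quick first pass for the placeholder checks can be mechanical; this is only a heuristic sketch (the marker patterns are assumptions, matches still need judgement, and the `feature` variable is reused from the Step 1 sketch):

```bash
# Flag TBD/TODO markers and unexpanded {placeholder} template variables in the review targets.
grep -rnE 'TBD|TODO|\{[a-z_]+\}' docs/features/ "docs/${feature}/tech-design.md" \
  || echo "No obvious placeholder markers found."
```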
3.2 Internal Consistency
- Component name matching: For every feature spec file (`docs/features/{name}.md`), verify there is a corresponding row in the tech-design's §8.1 Component Overview with an identical slug. Raise a Critical finding if a feature spec exists with no matching component or vice versa — this breaks the traceability chain and confuses downstream consumers like code-forge.
- Do feature spec API signatures match the tech-design's API Design section?
- Do component boundaries in feature specs align with tech-design's Component Overview?
- Are data models consistent across documents? (field names, types, relationships)
- Do feature specs reference the same architectural patterns described in the tech-design?
- For multi-split: are cross-sub-feature interfaces consistent?
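The component-name check lends itself to a mechanical comparison. A rough sketch, assuming the §8.1 table is a markdown table whose first column holds the component slug (the real table layout and header may differ, and in practice the extraction should be limited to the §8.1 section):

```bash
# Slugs implied by feature spec filenames (overview.md is the index, not a component)
ls docs/features/*.md | xargs -n1 basename | sed 's/\.md$//' | grep -vx 'overview' | sort > /tmp/spec_slugs

# Slugs listed in the tech-design's Component Overview table (first column of markdown table rows)
awk -F'|' '/^\|/ { gsub(/ /, "", $2); print $2 }' "docs/${feature}/tech-design.md" \
  | grep -vE '^(-*|Component)$' | sort -u > /tmp/design_slugs

# Anything present on only one side is a Critical traceability finding
comm -3 /tmp/spec_slugs /tmp/design_slugs
```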
3.3 Specificity
- Vague descriptions that should be concrete (e.g., "handles errors appropriately" → specific error codes/behaviors)
- Ambiguous quantifiers (e.g., "fast", "large", "many" without concrete thresholds)
- Undefined behavior for edge cases or boundary conditions
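Vague wording can also be surfaced mechanically before a substantive read; the word list below is purely illustrative (an assumption, not a definition of vagueness), and matches are prompts for judgement rather than automatic findings:

```bash
# Heuristic scan for weasel words and unquantified claims in feature specs.
grep -rniE '\b(appropriately|as needed|fast|slow|large|small|many|robust|gracefully)\b' docs/features/ || true
```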
3.4 Traceability
- Do feature specs reference back to tech-design sections they implement?
- Does overview.md list ALL generated feature specs? (no missing entries)
- Are dependency relationships between feature specs documented and consistent?
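The overview-index check can be partially automated as well; a small sketch, assuming overview.md references each spec by its filename slug:

```bash
# Verify overview.md mentions every generated feature spec.
for spec in docs/features/*.md; do
  slug="$(basename "$spec" .md)"
  [ "$slug" = "overview" ] && continue
  grep -q "$slug" docs/features/overview.md || echo "Missing from overview.md index: $slug"
done
```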
3.5 Actionability
- Could a developer implement from these specs without guessing?
- Are input/output formats fully specified?
- Are error scenarios and recovery behaviors defined?
- Are configuration options and defaults documented?
Step 4: Generate Findings
For each issue found, produce a structured finding:
- [{severity}] {file_path} § {section}: {description of issue} → FIX: {concrete fix instruction}
Compile the full review result:
REVIEW_RESULT: {PASS | ISSUES_FOUND}
CRITICAL_COUNT: {N}
MAJOR_COUNT: {N}
MINOR_COUNT: {N}
FINDINGS:
- [{severity}] {file_path} § {section}: {description} → FIX: {fix instruction}
...
Step 5: Present Results
Display summary:
spec-forge review: {feature_name}
Documents reviewed: {N}
Result: {PASS | ISSUES_FOUND}
Findings: {critical} critical, {major} major, {minor} minor

{If ISSUES_FOUND, list top findings}
If `REVIEW_RESULT: PASS`, inform the user and stop.
If `REVIEW_RESULT: ISSUES_FOUND`, proceed based on the user's auto-fix preference from Step 1:
- Report only: Display all findings and stop
- Auto-fix: Proceed to Step 6
If auto-fix was not pre-selected, ask now via `AskUserQuestion`:
- Fix Critical+Major — auto-fix significant issues
- Fix all — auto-fix everything
- Skip — report only
Step 6: Auto-Fix (Iterative)
Maximum iterations: 2 (one fix + one re-review). If issues persist after 2 iterations, report remaining issues and stop.
6.1 Apply Fixes
For each finding to fix:
- Read the target file
- Apply the concrete fix described in the finding
- Rules:
- Only modify the specific section cited in the finding
- Do NOT restructure or rewrite entire documents
- Do NOT add new documents — only fix existing ones
- If a fix requires information you don't have (e.g., specific domain logic), add a `<!-- REVIEW: {question} -->` comment instead of guessing
- Do NOT change content unrelated to the findings
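Since unanswerable fixes are recorded as comments rather than guessed, they can be collected afterwards and handed back to the user; a one-line sketch:

```bash
# List open review questions left behind by auto-fix.
grep -rn '<!-- REVIEW:' docs/ || echo "No open REVIEW comments."
```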
6.2 Re-Review
After all fixes are applied, re-run the review (Step 3-4) on the same documents.
- If `REVIEW_RESULT: PASS`: Display success:

  spec-forge review: PASS after fixes — {N} issues resolved

- If `REVIEW_RESULT: ISSUES_FOUND` (iteration 2): Display remaining issues and stop:

  spec-forge review: {N} issues remain after auto-fix
  Remaining issues:
  - [{severity}] {file} § {section}: {description}
  ...
  These may require manual attention or domain-specific decisions.
Step 7: Summary
Display final status:
spec-forge review complete: {feature_name}
Documents reviewed: {N}
Issues found: {total}
Issues fixed: {fixed}
Issues remaining: {remaining}

Next steps:
- /code-forge:plan @docs/features/{component-name}.md → Generate implementation plan
- /spec-forge:review {feature_name} → Re-run review after manual fixes