Spec-forge review

Install

Source · Clone the upstream repo:

git clone https://github.com/tercel/spec-forge

Claude Code · Install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/tercel/spec-forge "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/review" ~/.claude/skills/tercel-spec-forge-review && rm -rf "$T"

Manifest: skills/review/SKILL.md

Source content

Review — Specification Quality & Consistency Review

Systematically review spec-forge generated documents for quality, completeness, and internal consistency. Optionally auto-fix issues found.

Core Principles

  1. Evidence-based: Every finding must cite a specific file and section — no vague complaints
  2. Spec-focused: Review specification documents only (tech-design, feature specs, overview) — not code, not upstream idea drafts
  3. Upstream-aware: Use idea drafts and project manifests as reference context to validate spec accuracy, but do not review them
  4. Prioritized: Findings classified by severity so the user can act on what matters first
  5. Conservative fixes: Auto-fix only touches cited sections; when domain knowledge is missing, leave a <!-- REVIEW: {question} --> comment instead of guessing
  6. Honest: Report real issues, don't inflate findings to look thorough

Severity Levels

| Severity | Meaning | Example |
|----------|---------|---------|
| Critical | Wrong, contradictory, or misleading content | Feature spec API signature contradicts tech-design; component boundary mismatch |
| Major | Significant gap that would block or confuse implementation | Empty required section, missing error-handling spec, undefined edge cases |
| Minor | Quality issue that degrades usefulness but isn't blocking | Vague description, missing cross-reference, inconsistent terminology |

Workflow

Step 1: Determine Review Scope

Parse the arguments to determine what to review:

  1. If a feature_name argument is provided, look for:
    • docs/{feature_name}/tech-design.md
    • All feature specs in docs/features/ (glob for docs/features/*.md)
    • Upstream reference: ideas/{feature_name}/draft.md (if it exists)
    • Project manifest: docs/project-{feature_name}.md (if it exists, for multi-split context)
  2. If no argument is given, scan docs/ for the most recent tech-design and all feature specs in docs/features/ (a bash sketch of this resolution follows the list)
  3. If no spec documents are found, inform the user and stop
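
A minimal bash sketch of this resolution, assuming the directory layout above (variable names are illustrative, and the AskUserQuestion step is handled separately):

# Sketch: resolve review targets from an optional feature_name argument
FEATURE="$1"    # may be empty
CONTEXT=""

if [ -n "$FEATURE" ]; then
  TARGETS="docs/$FEATURE/tech-design.md $(ls docs/features/*.md 2>/dev/null)"
  [ -f "ideas/$FEATURE/draft.md" ] && CONTEXT="ideas/$FEATURE/draft.md"
  [ -f "docs/project-$FEATURE.md" ] && CONTEXT="$CONTEXT docs/project-$FEATURE.md"
else
  # No argument: most recently modified tech-design plus all feature specs
  TARGETS="$(ls -t docs/*/tech-design.md 2>/dev/null | head -1) $(ls docs/features/*.md 2>/dev/null)"
fi

# No spec documents found: inform the user and stop
[ -z "$(echo $TARGETS | tr -d ' ')" ] && { echo "No spec documents found."; exit 1; }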

Use AskUserQuestion to ask:

  • Review scope: Review all generated specs, or focus on specific documents? (Options: All / Tech design only / Feature specs only / Specific files)
  • Auto-fix: Should I auto-fix issues found? (Options: Yes — fix Critical+Major automatically / Yes — fix all / No — report only)

Step 2: Document Inventory

Build the list of documents to review:

  1. Review targets (will be reviewed):

    • docs/{feature_name}/tech-design.md
    • docs/features/overview.md
    • docs/features/{component-1}.md, docs/features/{component-2}.md, etc.
    • For multi-split: tech-designs for all sub-features
  2. Reference context (read for context, NOT reviewed):

    • ideas/{feature_name}/draft.md — upstream requirements
    • docs/project-{feature_name}.md — project manifest with sub-feature scope
  3. Read each review target document fully

Display the inventory:

Review scope: {feature_name}
  Review targets: {N} documents
    - docs/{feature_name}/tech-design.md
    - docs/features/overview.md
    - docs/features/{component-1}.md
    - docs/features/{component-2}.md
    ...
  Reference context: {N} documents
    - ideas/{feature_name}/draft.md

Step 3: Review

Check each review target document against the following checklist:

3.1 Completeness

  • Any empty sections, TBD/TODO markers, placeholder text, or {placeholder} template variables? (see the grep sketch after this list)
  • Missing required sections per document type?
    • Tech design: Goals, Non-goals, Scope, Architecture, API Design, Data Model, Component Overview
    • Feature spec: Purpose, API/Interface, Logic/Behavior, Error Handling, Dependencies
    • Overview: Feature index listing all generated specs, dependency graph
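
As a mechanical first pass over this checklist, leftover markers can be surfaced with grep (a sketch; the patterns are illustrative, and empty sections still need a manual read):

# Flag TBD/TODO markers and unexpanded {placeholder} template variables
grep -rnE 'TBD|TODO|\{[a-zA-Z_-]+\}' docs/features/ docs/*/tech-design.md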

3.2 Internal Consistency

  • Component name matching: For every feature spec file (docs/features/{name}.md), verify there is a corresponding row in the tech-design's §8.1 Component Overview with an identical slug. Raise a Critical finding if a feature spec exists with no matching component or vice versa — this breaks the traceability chain and confuses downstream consumers like code-forge. (A cross-check sketch follows this list.)
  • Do feature spec API signatures match the tech-design's API Design section?
  • Do component boundaries in feature specs align with tech-design's Component Overview?
  • Are data models consistent across documents? (field names, types, relationships)
  • Do feature specs reference the same architectural patterns described in the tech-design?
  • For multi-split: are cross-sub-feature interfaces consistent?
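
The slug cross-check lends itself to automation. A bash sketch, assuming the Component Overview is a markdown table whose rows carry each component slug in backticks in the first column (that table shape is an assumption, not something the manifest guarantees):

# Slugs from feature spec filenames (overview.md is the index, not a component)
ls docs/features/*.md | xargs -n1 basename | sed 's/\.md$//' | grep -v '^overview$' | sort > /tmp/spec_slugs
# Slugs from the tech-design's Component Overview table rows
grep -hoE '^\| *`[^`]+`' docs/*/tech-design.md | tr -d '`| ' | sort > /tmp/design_slugs
# Any line printed here is a candidate Critical finding
comm -3 /tmp/spec_slugs /tmp/design_slugs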

3.3 Specificity

  • Vague descriptions that should be concrete (e.g., "handles errors appropriately" → specific error codes/behaviors)
  • Ambiguous quantifiers (e.g., "fast", "large", "many" without concrete thresholds; see the word-list grep after this list)
  • Undefined behavior for edge cases or boundary conditions
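
A word-list grep (GNU grep) can seed the quantifier check; the word list is illustrative and will produce false positives, so judgment still applies:

# Flag common ambiguous quantifiers and hand-wavy adverbs
grep -rniE '\b(fast|slow|large|small|many|few|appropriately|properly|efficiently)\b' docs/features/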

3.4 Traceability

  • Do feature specs reference back to tech-design sections they implement?
  • Does overview.md list ALL generated feature specs? (no missing entries; see the loop sketch after this list)
  • Are dependency relationships between feature specs documented and consistent?
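
The overview completeness check can be sketched in bash, assuming overview.md references each spec by filename (how the index is formatted is an assumption):

# Report any feature spec missing from the overview index
for f in docs/features/*.md; do
  b=$(basename "$f")
  [ "$b" = "overview.md" ] && continue
  grep -q "$b" docs/features/overview.md || echo "Missing from overview: $b"
done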

3.5 Actionability

  • Could a developer implement from these specs without guessing?
  • Are input/output formats fully specified?
  • Are error scenarios and recovery behaviors defined?
  • Are configuration options and defaults documented?

Step 4: Generate Findings

For each issue found, produce a structured finding:

- [{severity}] {file_path} § {section}: {description of issue} → FIX: {concrete fix instruction}

Compile the full review result:

REVIEW_RESULT: {PASS | ISSUES_FOUND}
CRITICAL_COUNT: {N}
MAJOR_COUNT: {N}
MINOR_COUNT: {N}

FINDINGS:
- [{severity}] {file_path} § {section}: {description} → FIX: {fix instruction}
...
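
Filled in, a result block might look like the following (the file names, sections, and counts are invented for illustration):

REVIEW_RESULT: ISSUES_FOUND
CRITICAL_COUNT: 1
MAJOR_COUNT: 1
MINOR_COUNT: 0

FINDINGS:
- [Critical] docs/features/auth.md § API/Interface: login() takes (username, password) but the tech-design's API Design section specifies a single credentials object → FIX: align the feature spec signature with the tech-design
- [Major] docs/features/auth.md § Error Handling: section is empty → FIX: specify behavior for invalid credentials, lockout, and upstream timeouts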

Step 5: Present Results

Display summary:

spec-forge review: {feature_name}

  Documents reviewed: {N}
  Result: {PASS | ISSUES_FOUND}
  Findings: {critical} critical, {major} major, {minor} minor

  {If ISSUES_FOUND, list top findings}

If REVIEW_RESULT: PASS, inform the user and stop.

If REVIEW_RESULT: ISSUES_FOUND, proceed based on the user's auto-fix preference from Step 1:

  • Report only: Display all findings and stop
  • Auto-fix: Proceed to Step 6

If auto-fix was not pre-selected, ask now via AskUserQuestion:

  • Fix Critical+Major — auto-fix significant issues
  • Fix all — auto-fix everything
  • Skip — report only

Step 6: Auto-Fix (Iterative)

Maximum iterations: 2, where each iteration is one fix pass followed by one re-review. If issues persist after the second re-review, report the remaining issues and stop.

6.1 Apply Fixes

For each finding to fix:

  1. Read the target file
  2. Apply the concrete fix described in the finding
  3. Rules:
    • Only modify the specific section cited in the finding
    • Do NOT restructure or rewrite entire documents
    • Do NOT add new documents — only fix existing ones
    • If a fix requires information you don't have (e.g., specific domain logic), add a <!-- REVIEW: {question} --> comment instead of guessing (see the example after this list)
    • Do NOT change content unrelated to the findings
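
For instance, if a finding calls for a concrete retry limit the reviewer has no basis to choose, the fix inserts a question rather than a guessed value (a hypothetical before/after):

Before:  On transient failure, the client retries the request.
After:   On transient failure, the client retries the request.
         <!-- REVIEW: How many retry attempts, and with what backoff policy? -->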

6.2 Re-Review

After all fixes are applied, re-run the review (Step 3-4) on the same documents.

  • If REVIEW_RESULT: PASS: Display success

    spec-forge review: PASS after fixes — {N} issues resolved

  • If REVIEW_RESULT: ISSUES_FOUND (iteration 2): Display remaining issues and stop

    spec-forge review: {N} issues remain after auto-fix
    
    Remaining issues:
      - [{severity}] {file} § {section}: {description}
      ...
    
    These may require manual attention or domain-specific decisions.
    

Step 7: Summary

Display final status:

spec-forge review complete: {feature_name}

  Documents reviewed: {N}
  Issues found: {total}
  Issues fixed: {fixed}
  Issues remaining: {remaining}

Next steps:
  /code-forge:plan @docs/features/{component-name}.md   → Generate implementation plan
  /spec-forge:review {feature_name}                      → Re-run review after manual fixes