# Awesome-omni-skill spec-review

How to verify an implementation against OpenSpec artifacts.

## Install

Source: clone the upstream repo.

```sh
git clone https://github.com/diegosouzapw/awesome-omni-skill
```

Claude Code: install into `~/.claude/skills/`.

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data-ai/spec-review" ~/.claude/skills/diegosouzapw-awesome-omni-skill-spec-review && rm -rf "$T"
```

Manifest: `skills/data-ai/spec-review/SKILL.md`

## OpenSpec Verification

A guide for verifying that an implementation matches its spec artifacts.

### Input Files

| File | Required | Purpose |
|------|----------|---------|
| `{specs_path}/design.md` | Yes | Requirements and scenarios to verify |
| `{specs_path}/tasks.md` | Yes | Completion checklist |
| `{specs_path}/proposal.md` | Optional | Original intent reference |
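A minimal loader sketch for these inputs, treating `proposal.md` as optional while failing fast on the required files. The function name and return shape are illustrative assumptions, not part of the skill:

```python
from pathlib import Path

def load_artifacts(specs_path: str) -> dict[str, str | None]:
    """Read the spec artifacts; proposal.md may legitimately be absent."""
    artifacts: dict[str, str | None] = {}
    for name, required in (("design.md", True), ("tasks.md", True), ("proposal.md", False)):
        path = Path(specs_path) / name
        if path.exists():
            artifacts[name] = path.read_text(encoding="utf-8")
        elif required:
            raise FileNotFoundError(f"required artifact missing: {path}")
        else:
            artifacts[name] = None  # optional: downstream checks degrade gracefully
    return artifacts
```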

### Three Verification Dimensions

#### 1. Completeness

**Question:** Are all tasks done and all requirements covered?

| Check | Method | Issue Level |
|-------|--------|-------------|
| All checkboxes marked `[x]` | Parse tasks.md | CRITICAL if incomplete |
| All requirements have code | Search codebase for keywords | CRITICAL if missing |
| All new files exist | Verify file paths from tasks | CRITICAL if missing |
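A minimal sketch of the checkbox check, assuming tasks.md uses standard Markdown task-list syntax (`- [ ]` / `- [x]`); the function name is illustrative:

```python
import re

# Matches "- [ ] task" / "* [x] task" list items.
TASK = re.compile(r"^\s*[-*]\s*\[([ xX])\]\s*(.+)$")

def unchecked_tasks(tasks_md: str) -> list[str]:
    """Return every task whose checkbox is not marked [x]."""
    return [
        m.group(2).strip()
        for line in tasks_md.splitlines()
        if (m := TASK.match(line)) and m.group(1) == " "
    ]
```

Any non-empty result is a CRITICAL completeness issue per the table above.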

#### 2. Correctness

**Question:** Does the code do what the spec says?

| Check | Method | Issue Level |
|-------|--------|-------------|
| GIVEN/WHEN/THEN satisfied | Trace scenario through code | WARNING if divergent |
| Tests cover scenarios | Match test names to scenarios | WARNING if uncovered |
| Edge cases handled | Check error paths in code | WARNING if missing |
| Validation commands pass | Run `pytest` / `npm run validate` | CRITICAL if failing |
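A rough sketch of the test-coverage check. It assumes scenarios appear as `Scenario:` headings in design.md and that covering tests embed a slug of the scenario name in their function names; both are conventions, not guarantees:

```python
import re

def untested_scenarios(design_md: str, test_source: str) -> list[str]:
    """Return scenario names with no test function that mentions them."""
    scenarios = re.findall(r"^#+\s*Scenario:\s*(.+)$", design_md, re.MULTILINE)
    test_names = re.findall(r"\bdef\s+(test_\w+)", test_source)
    missing = []
    for scenario in scenarios:
        # "User can log in" -> "user_can_log_in"
        slug = re.sub(r"\W+", "_", scenario.strip().lower()).strip("_")
        if not any(slug in name for name in test_names):
            missing.append(scenario)  # WARNING: no obvious covering test
    return missing
```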

#### 3. Coherence

**Question:** Does the code match design decisions?

| Check | Method | Issue Level |
|-------|--------|-------------|
| Design decisions followed | Compare Decisions section to code | WARNING if violated |
| Patterns consistent | Check naming, structure, style | SUGGESTION |
| No undocumented changes | Diff scope vs design scope | WARNING if extra |
| No design deviations | Cross-reference architecture | WARNING if different |
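One way to catch undocumented changes is to diff the changed file list against paths the design mentions. This sketch shells out to git; the base ref and the "path appears verbatim in design.md" heuristic are assumptions:

```python
import subprocess

def out_of_scope_files(design_md: str, base_ref: str = "main") -> list[str]:
    """Return changed files whose paths design.md never mentions."""
    changed = subprocess.run(
        ["git", "diff", "--name-only", base_ref],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    # WARNING if non-empty: implementation scope exceeds the documented design
    return [path for path in changed if path and path not in design_md]
```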

### Verification Process

1. Load artifacts - read design.md, tasks.md, proposal.md
2. Check completeness - parse checkboxes, search for requirement implementations
3. Check correctness - trace each scenario through code, verify test coverage
4. Check coherence - compare decisions to implementation, check patterns
5. Generate report - summarize findings with issue levels
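The process reads as a small pipeline. In this sketch the dimension checks are stubs standing in for the checks above, and the `Issue` shape mirrors the report columns; all names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Issue:
    level: str           # CRITICAL | WARNING | SUGGESTION
    dimension: str       # Completeness | Correctness | Coherence
    detail: str
    recommendation: str  # every issue must be actionable

# Stubs: real implementations would parse the artifacts as sketched above.
def check_completeness(artifacts: dict) -> list[Issue]: return []
def check_correctness(artifacts: dict) -> list[Issue]: return []
def check_coherence(artifacts: dict) -> list[Issue]: return []

def verify(artifacts: dict) -> list[Issue]:
    """Steps 2-4; step 1 loads the artifacts, step 5 renders the report."""
    issues: list[Issue] = []
    issues += check_completeness(artifacts)  # checkboxes, requirements, files
    issues += check_correctness(artifacts)   # scenarios, tests, validation runs
    issues += check_coherence(artifacts)     # decisions, patterns, scope
    return issues
```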

### Report Format

```markdown
## Verification Report

### Summary
| Dimension    | Status              |
|--------------|---------------------|
| Completeness | X/Y tasks, N reqs   |
| Correctness  | M/N scenarios pass  |
| Coherence    | Followed / N issues |

### Critical Issues
| # | Dimension | Issue | File | Recommendation |
|---|-----------|-------|------|----------------|
| 1 | Completeness | Task 2.3 incomplete | - | Complete or mark blocked |

### Warnings
| # | Dimension | Issue | File | Recommendation |
|---|-----------|-------|------|----------------|
| 1 | Correctness | Scenario X not tested | test_foo.py | Add test case |

### Suggestions
- [Pattern deviation details with file reference]

### Assessment
[CRITICAL: N issues | WARNINGS: N | Ready for archive: Yes/No]
```

### Graceful Degradation

| Available Artifacts | Checks Performed |
|---------------------|------------------|
| tasks.md only | Completeness (checkboxes) only |
| tasks.md + design.md | Completeness + Correctness |
| All three | All three dimensions |

Always note which checks were skipped and why.
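A sketch of that selection logic, mirroring the table above; the function name and skip-note format are assumptions:

```python
from pathlib import Path

def plan_checks(specs_path: str) -> tuple[list[str], list[str]]:
    """Pick dimensions to run from the artifacts present; note what is skipped."""
    have = {n for n in ("tasks.md", "design.md", "proposal.md")
            if (Path(specs_path) / n).exists()}
    run, skipped = [], []
    if "tasks.md" in have:
        run.append("Completeness")
    else:
        skipped.append("Completeness skipped: tasks.md missing")
    if {"tasks.md", "design.md"} <= have:
        run.append("Correctness")
    else:
        skipped.append("Correctness skipped: needs tasks.md + design.md")
    if {"tasks.md", "design.md", "proposal.md"} <= have:
        run.append("Coherence")
    else:
        skipped.append("Coherence skipped: needs all three artifacts")
    return run, skipped
```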

### Verification Heuristics

- **Completeness:** focus on objective items (checkboxes, requirement lists)
- **Correctness:** use keyword search plus file-path analysis; don't require certainty
- **Coherence:** look for glaring inconsistencies; don't nitpick style
- **False positives:** when uncertain, prefer SUGGESTION over WARNING and WARNING over CRITICAL (see the sketch after this list)
- **Actionability:** every issue must have a specific recommendation with file references
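The false-positive heuristic can be made mechanical: when a finding is uncertain, report it one level softer. A tiny sketch, where the `certain` flag is an assumption about how checks report their confidence:

```python
# Each level maps to the next-softer level; SUGGESTION is already the floor.
DOWNGRADE = {"CRITICAL": "WARNING", "WARNING": "SUGGESTION", "SUGGESTION": "SUGGESTION"}

def effective_level(level: str, certain: bool) -> str:
    """Keep the reported level only when the finding is certain."""
    return level if certain else DOWNGRADE[level]
```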