Install
Source · Clone the upstream repo
git clone https://github.com/diegosouzapw/awesome-omni-skill
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data-ai/spec-review" ~/.claude/skills/diegosouzapw-awesome-omni-skill-spec-review && rm -rf "$T"
Manifest:
skills/data-ai/spec-review/SKILL.md
Source content
OpenSpec Verification
A guide for verifying that an implementation matches its spec artifacts.
Input Files
| File | Required | Purpose |
|---|---|---|
| design.md | Yes | Requirements and scenarios to verify |
| tasks.md | Yes | Completion checklist |
| proposal.md | Optional | Original intent reference |
Three Verification Dimensions
1. Completeness
Question: Are all tasks done and all requirements covered?
| Check | Method | Issue Level |
|---|---|---|
| All checkboxes marked | Parse tasks.md | CRITICAL if incomplete |
| All requirements have code | Search codebase for keywords | CRITICAL if missing |
| All new files exist | Verify file paths from tasks | CRITICAL if missing |
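The checkbox pass lends itself to automation. A minimal sketch, assuming GitHub-style `- [ ]` / `- [x]` task items in tasks.md (the regex and file layout are assumptions, not something this skill prescribes):

```python
import re
from pathlib import Path

# GitHub-style task items: "- [ ] ..." is open, "- [x] ..." is done.
CHECKBOX = re.compile(r"^\s*[-*]\s*\[([ xX])\]\s*(.+)$")

def unchecked_tasks(path: str = "tasks.md") -> list[str]:
    """Return descriptions of unchecked tasks; an empty list means complete."""
    unchecked = []
    for line in Path(path).read_text().splitlines():
        m = CHECKBOX.match(line)
        if m and m.group(1) == " ":
            unchecked.append(m.group(2).strip())
    return unchecked

for task in unchecked_tasks():
    print(f"CRITICAL: incomplete task: {task}")
```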
2. Correctness
Question: Does the code do what the spec says?
| Check | Method | Issue Level |
|---|---|---|
| GIVEN/WHEN/THEN satisfied | Trace scenario through code | WARNING if divergent |
| Tests cover scenarios | Match test names to scenarios | WARNING if uncovered |
| Edge cases handled | Check error paths in code | WARNING if missing |
| Validation commands pass | Run pytest / npm run validate | CRITICAL if failing |
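Scenario coverage is harder to automate fully, but a keyword heuristic goes a long way. A rough sketch, assuming scenarios appear as `Scenario:` headings in design.md and tests live under tests/ (both assumptions; adjust to the project):

```python
import re
import subprocess
from pathlib import Path

SCENARIO = re.compile(r"^#+\s*Scenario:\s*(.+)$", re.MULTILINE)

def uncovered_scenarios(spec_path: str = "design.md", test_dir: str = "tests") -> list[str]:
    """Scenarios whose keywords never appear in any test file (WARNING level)."""
    scenarios = SCENARIO.findall(Path(spec_path).read_text())
    tests = " ".join(p.read_text().lower() for p in Path(test_dir).rglob("test_*.py"))
    missing = []
    for name in scenarios:
        # Treat a scenario as covered if any distinctive word shows up in the tests.
        keywords = [w.lower() for w in re.findall(r"\w{4,}", name)]
        if keywords and not any(w in tests for w in keywords):
            missing.append(name)
    return missing

def validation_passes() -> bool:
    """Run the project's validation command; a non-zero exit is CRITICAL."""
    return subprocess.run(["pytest", "-q"]).returncode == 0
```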
3. Coherence
Question: Does the code match design decisions?
| Check | Method | Issue Level |
|---|---|---|
| Design decisions followed | Compare Decisions section to code | WARNING if violated |
| Patterns consistent | Check naming, structure, style | SUGGESTION |
| No undocumented changes | Diff scope vs design scope | WARNING if extra |
| No design deviations | Cross-reference architecture | WARNING if different |
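The "no undocumented changes" check can be approximated by comparing the branch diff against the design's stated scope. A sketch, assuming a git checkout and a design.md that names the files it intends to touch (both assumptions):

```python
import subprocess
from pathlib import Path

def out_of_scope_changes(design_path: str = "design.md", base: str = "main") -> list[str]:
    """Changed files never mentioned in design.md: flag as WARNING (extra scope)."""
    design = Path(design_path).read_text()
    changed = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    # A file counts as in scope if its path or bare name appears in the design doc.
    return [f for f in changed if f not in design and Path(f).name not in design]
```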
Verification Process
1. Load artifacts - Read design.md, tasks.md, proposal.md
2. Check completeness - Parse checkboxes, search for requirement implementations
3. Check correctness - Trace each scenario through code, verify test coverage
4. Check coherence - Compare decisions to implementation, check patterns
5. Generate report - Summarize findings with issue levels
Report Format
```markdown
## Verification Report

### Summary
| Dimension | Status |
|--------------|---------------------|
| Completeness | X/Y tasks, N reqs |
| Correctness | M/N scenarios pass |
| Coherence | Followed / N issues |

### Critical Issues
| # | Dimension | Issue | File | Recommendation |
|---|-----------|-------|------|----------------|
| 1 | Completeness | Task 2.3 incomplete | - | Complete or mark blocked |

### Warnings
| # | Dimension | Issue | File | Recommendation |
|---|-----------|-------|------|----------------|
| 1 | Correctness | Scenario X not tested | test_foo.py | Add test case |

### Suggestions
- [Pattern deviation details with file reference]

### Assessment
[CRITICAL: N issues | WARNINGS: N | Ready for archive: Yes/No]
```
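Rendering that template from collected findings is straightforward. A sketch with a hypothetical Issue record (the Summary and Suggestions sections are omitted for brevity):

```python
from dataclasses import dataclass

@dataclass
class Issue:
    level: str        # "CRITICAL", "WARNING", or "SUGGESTION"
    dimension: str    # "Completeness", "Correctness", or "Coherence"
    text: str
    file: str = "-"
    recommendation: str = "-"

def render_report(issues: list[Issue]) -> str:
    out = ["## Verification Report", ""]
    for level, heading in (("CRITICAL", "Critical Issues"), ("WARNING", "Warnings")):
        rows = [i for i in issues if i.level == level]
        out += [f"### {heading}",
                "| # | Dimension | Issue | File | Recommendation |",
                "|---|---|---|---|---|"]
        out += [f"| {n} | {i.dimension} | {i.text} | {i.file} | {i.recommendation} |"
                for n, i in enumerate(rows, 1)]
        out.append("")
    crit = sum(i.level == "CRITICAL" for i in issues)
    warn = sum(i.level == "WARNING" for i in issues)
    ready = "Yes" if crit == 0 else "No"
    out += ["### Assessment",
            f"[CRITICAL: {crit} issues | WARNINGS: {warn} | Ready for archive: {ready}]"]
    return "\n".join(out)
```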
Graceful Degradation
| Available Artifacts | Checks Performed |
|---|---|
| tasks.md only | Completeness (checkboxes) only |
| tasks.md + design.md | Completeness + Correctness |
| All three | All three dimensions |
Always note which checks were skipped and why.
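Selecting checks from whatever is on disk reduces to one test per dimension. A sketch, assuming the three artifacts sit together in a single change directory (an assumption about layout):

```python
from pathlib import Path

def plan_checks(change_dir: str = ".") -> dict[str, bool]:
    """Map each dimension to whether its required artifacts are present."""
    root = Path(change_dir)
    has = {name: (root / name).exists()
           for name in ("tasks.md", "design.md", "proposal.md")}
    plan = {
        "completeness": has["tasks.md"],
        "correctness": has["tasks.md"] and has["design.md"],
        "coherence": all(has.values()),
    }
    for dimension, runnable in plan.items():
        if not runnable:
            print(f"Skipped {dimension}: required artifact missing")
    return plan
```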
Verification Heuristics
- Completeness: Focus on objective items (checkboxes, requirement lists)
- Correctness: Use keyword search + file path analysis; don't require certainty
- Coherence: Look for glaring inconsistencies, don't nitpick style
- False positives: Prefer SUGGESTION over WARNING, WARNING over CRITICAL when uncertain
- Actionability: Every issue must have a specific recommendation with file references
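The false-positive rule reduces to a one-step severity downgrade. A hypothetical helper:

```python
LEVELS = ["SUGGESTION", "WARNING", "CRITICAL"]

def hedged_level(level: str, certain: bool) -> str:
    """When uncertain, step down one level: CRITICAL -> WARNING -> SUGGESTION."""
    return level if certain else LEVELS[max(LEVELS.index(level) - 1, 0)]
```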