Claude-skill-registry ln-310-story-validator

Validates Stories/Tasks with GO/NO-GO verdict, Readiness Score (1-10), Penalty Points, and Anti-Hallucination verification. Auto-fixes to reach 0 points, delegates to ln-002 for docs. Use when reviewing Stories before execution or when user requests validation.

install

source · Clone the upstream repo:

```shell
git clone https://github.com/majiayu000/claude-skill-registry
```

Claude Code · Install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/ln-310-story-validator" ~/.claude/skills/majiayu000-claude-skill-registry-ln-310-story-validator-53235d && rm -rf "$T"
```

manifest: `skills/data/ln-310-story-validator/SKILL.md`

source content

Story Verification Skill

Validate Stories/Tasks with explicit GO/NO-GO verdict, Readiness Score, and Anti-Hallucination verification.

Purpose & Scope

  • Validate Story plus child Tasks against industry standards and project patterns
  • Calculate Penalty Points for violations, then auto-fix to reach 0 points
  • Delegate to ln-002-best-practices-researcher for creating documentation (guides, manuals, ADRs, research)
  • Support Plan Mode: show audit results, wait for approval, then fix
  • Approve Story after fixes (Backlog -> Todo) with tabular output summary

When to Use

  • Reviewing Stories before approval (Backlog -> Todo)
  • Validating implementation path across Story and Tasks
  • Ensuring standards, architecture, and solution fit
  • Optimizing or correcting proposed approaches

Penalty Points System

Goal: Quantitative assessment of Story/Tasks quality. Target = 0 penalty points after fixes.

| Severity | Points | Description |
|----------|--------|-------------|
| CRITICAL | 10 | RFC/OWASP/security violations |
| HIGH | 5 | Outdated libraries, architecture issues |
| MEDIUM | 3 | Best practices violations |
| LOW | 1 | Structural/cosmetic issues |

Workflow:

  1. Audit: Calculate penalty points for all 19 criteria
  2. Fix: Auto-fix and zero out points
  3. Report: Total Before -> 0 After
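The audit tally can be sketched in a few lines; `violations` here is a hypothetical list of severity labels (one per detected violation), with point values taken from the severity table above.

```python
# Severity -> points, mirroring the severity table in this section.
SEVERITY_POINTS = {"CRITICAL": 10, "HIGH": 5, "MEDIUM": 3, "LOW": 1}

def total_penalty(violations: list[str]) -> int:
    """Sum penalty points over detected violations (one severity label each)."""
    return sum(SEVERITY_POINTS[s] for s in violations)

# e.g. one CRITICAL, one HIGH, and two MEDIUM violations
print(total_penalty(["CRITICAL", "HIGH", "MEDIUM", "MEDIUM"]))  # 21
```

After the fix phase the same function run over the remaining violations should return 0.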

Mode Detection

Detect operating mode at startup:

Plan Mode Active:

  • Phase 1-2: Full audit (discovery + research + penalty calculation)
  • Phase 3: Show results + fix plan -> WAIT for user approval
  • Phase 4-5: After approval -> execute fixes

Normal Mode:

  • Phase 1-5: Standard workflow without stopping
  • Automatically fix and approve

Workflow Overview

Phase 1: Discovery & Loading

Step 1: Configuration & Metadata Loading

  • Auto-discover configuration: Team ID (`docs/tasks/kanban_board.md`), project docs (`CLAUDE.md`), epic from Story.project
  • Load metadata only: Story ID/title/status/labels, child Task IDs/titles/status/labels
  • Expect 3-8 implementation tasks; record parentId for filtering
  • Rationale: keep loading light; full descriptions arrive in Phase 2

Phase 2: Research & Audit

Always execute for every Story - no exceptions.

Step 1: Domain Extraction

  • Extract technical domains from Story title + Technical Notes + Implementation Tasks
  • Load pattern registry from `references/domain_patterns.md`
  • Scan Story content for pattern matches via keyword detection
  • Build list of detected domains requiring documentation
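The keyword detection above can be sketched as follows; the pattern registry here is purely illustrative (the real one lives in `references/domain_patterns.md`).

```python
# Illustrative pattern registry: domain -> trigger keywords (hypothetical values).
PATTERNS = {
    "REST API": ["rest", "endpoint", "http api"],
    "Rate Limiting": ["rate limit", "throttle", "429"],
    "Authentication": ["oauth", "jwt", "session"],
}

def detect_domains(story_text: str) -> list[str]:
    """Return every domain whose keywords appear in the Story text."""
    text = story_text.lower()
    return [domain for domain, keywords in PATTERNS.items()
            if any(keyword in text for keyword in keywords)]
```

Each detected domain then becomes one ln-002 delegation in Step 2.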

Step 2: Documentation Delegation

  • For EACH detected pattern, delegate to ln-002:
    Skill(skill="ln-002-best-practices-researcher",
          args="doc_type=[guide|manual|adr] topic='[pattern]'")
  • Receive file paths to created documentation (`docs/guides/`, `docs/manuals/`, `docs/adrs/`, `docs/research/`)

Step 3: Research via MCP

  • Query MCP Ref for industry standards:
    ref_search_documentation(query="[topic] RFC OWASP best practices 2025")
  • Query Context7 for library versions: `resolve-library-id` + `query-docs`
  • Extract: standards (RFC numbers, OWASP rules), library versions, patterns

Step 3.5: Anti-Hallucination Verification

  • Scan Story/Tasks for technical claims (RFC references, library versions, security requirements)
  • Verify each claim has MCP Ref/Context7 evidence
  • Flag unverified claims for correction
  • Status: VERIFIED (all sourced) or FLAGGED (list unverified)

Step 4: Penalty Points Calculation

  • Evaluate all 19 criteria against Story/Tasks
  • Assign penalty points per violation (CRITICAL=10, HIGH=5, MEDIUM=3, LOW=1)
  • Calculate total penalty points
  • Build fix plan for each violation

Phase 3: Audit Results & Fix Plan

Display audit results:

  • Penalty Points table (criterion, severity, points, description)
  • Total: X penalty points
  • Fix Plan: list of fixes for each criterion

Mode handling:

  • IF Plan Mode: Show results + "After your approval, changes will be applied" -> WAIT
  • ELSE (Normal Mode): Proceed to Phase 4 immediately

Phase 4: Auto-Fix

Execute fixes for ALL 19 criteria on the spot.

  • Execution order (7 groups):
    1. Structural (#1-#4) — Story/Tasks template compliance + AC completeness/specificity
    2. Standards (#5) — RFC/OWASP compliance FIRST (before YAGNI/KISS!)
    3. Solution (#6) — Library versions
    4. Workflow (#7-#13) — Test strategy, docs integration, size, cleanup, YAGNI, KISS, task order, Database Creation
    5. Quality (#14-#15) — Documentation complete, hardcoded values
    6. Dependencies (#18-#19) — Story/Task independence (no forward dependencies)
    7. Traceability (#16-#17) — Story-Task alignment, AC coverage quality (LAST, after all fixes)
  • Use Auto-Fix Actions table below as authoritative checklist
  • Zero out penalty points as fixes applied
  • Test Strategy section must exist but remain empty (testing handled separately)

Phase 5: Approve & Notify

  • Set Story + all Tasks to Todo (Linear); update `kanban_board.md` with APPROVED marker
  • Add Linear comment with full validation summary:
    • Penalty Points table (Before -> After = 0)
    • Auto-Fixes Applied table
    • Documentation Created table (docs created via ln-002)
    • Standards Compliance Evidence table
  • Display tabular output (Unicode box-drawing) to terminal
  • Final: Total Penalty Points = 0
  • Optional: If `--execute` flag provided, delegate to ln-400-story-executor to start execution immediately after approval

Auto-Fix Actions Reference

Structural (#1-#4)

| # | Criterion | What it checks | Penalty | Auto-fix actions |
|---|-----------|----------------|---------|------------------|
| 1 | Story Structure | 8 sections per template | LOW (1) | Add/reorder sections with TODO placeholders; update Linear |
| 2 | Tasks Structure | Each Task has 7 sections | LOW (1) | Load each Task; add/reorder sections; update Linear |
| 3 | Story Statement | As a/I want/So that clarity | LOW (1) | Rewrite using persona/capability/value; update Linear |
| 4 | Acceptance Criteria | Given/When/Then, 3-5 items | MEDIUM (3) | Normalize to G/W/T; add edge cases; update Linear |

Standards (#5)

| # | Criterion | What it checks | Penalty | Auto-fix actions |
|---|-----------|----------------|---------|------------------|
| 5 | Standards Compliance | RFC, OWASP, REST, Security | CRITICAL (10) | Query MCP Ref; update Technical Notes with compliant approach |

Solution (#6)

| # | Criterion | What it checks | Penalty | Auto-fix actions |
|---|-----------|----------------|---------|------------------|
| 6 | Library & Version | Libraries are latest stable | HIGH (5) | Query Context7; update to recommended versions |

Workflow (#7-#13)

| # | Criterion | What it checks | Penalty | Auto-fix actions |
|---|-----------|----------------|---------|------------------|
| 7 | Test Strategy | Section exists but empty | LOW (1) | Ensure section present; leave empty (testing handled separately) |
| 8 | Documentation Integration | No standalone doc tasks | MEDIUM (3) | Remove doc-only tasks; fold into implementation DoD |
| 9 | Story Size | 3-8 tasks; 3-5h each | MEDIUM (3) | If <3 or >8, add TODO; flag task size issues |
| 10 | Test Task Cleanup | No premature test tasks | MEDIUM (3) | Remove test tasks before final; testing appears later |
| 11 | YAGNI | No premature features | MEDIUM (3) | Move speculative items to Out of Scope unless standards require |
| 12 | KISS | Simplest solution | MEDIUM (3) | Simplify unless standards require complexity |
| 13 | Task Order | DB→Service→API→UI | MEDIUM (3) | Reorder Tasks foundation-first |

Quality (#14-#15)

| # | Criterion | What it checks | Penalty | Auto-fix actions |
|---|-----------|----------------|---------|------------------|
| 14 | Documentation Complete | Pattern docs exist + referenced | HIGH (5) | Delegate to ln-002; add all doc links to Technical Notes |
| 15 | Code Quality Basics | No hardcoded values | MEDIUM (3) | Add TODOs for constants/config/env |

Traceability (#16-#17)

| # | Criterion | What it checks | Penalty | Auto-fix actions |
|---|-----------|----------------|---------|------------------|
| 16 | Story-Task Alignment | Tasks implement Story statement | MEDIUM (3) | Add TODO to misaligned Tasks; warn user |
| 17 | AC-Task Coverage | Each AC has implementing Task | MEDIUM (3) | Add TODO for uncovered ACs; suggest missing Tasks |

Dependencies (#18-#19)

| # | Criterion | What it checks | Penalty | Auto-fix actions |
|---|-----------|----------------|---------|------------------|
| 18 | Story Dependencies | No forward Story dependencies | CRITICAL (10) | Flag forward dependencies; suggest reorder |
| 19 | Task Dependencies | No forward Task dependencies | MEDIUM (3) | Flag forward dependencies; reorder Tasks |

Maximum Penalty: 67 points (one violation per criterion across all 19 criteria)

Final Assessment Model

Outputs after all fixes applied:

| Metric | Value | Meaning |
|--------|-------|---------|
| Gate | GO / NO-GO | Final verdict for execution readiness |
| Readiness Score | 1-10 | Quality confidence level |
| Penalty Points | 0 (after fixes) | Validation completeness |
| Anti-Hallucination | VERIFIED / FLAGGED | Technical claims verified |
| AC Coverage | 100% (N/N) | All ACs mapped to Tasks |

Readiness Score Calculation

`Readiness Score = 10 - (Penalty Points / 5)`

| Score | Status | Gate |
|-------|--------|------|
| 9-10 | Excellent | GO |
| 7-8 | Good | GO |
| 5-6 | Acceptable | GO (with notes) |
| 3-4 | Concerns | NO-GO (requires review) |
| 1-2 | Critical | NO-GO (major issues) |
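The formula and bands above combine into one helper; note the clamp to the 1-10 scale is an assumption on my part, since the raw formula drops below 1 at high penalty totals.

```python
def readiness(penalty_points: int) -> tuple[float, str]:
    """Readiness Score = 10 - (Penalty Points / 5); GO at score >= 5."""
    score = max(1.0, 10 - penalty_points / 5)  # clamp to 1-10 (assumed behavior)
    gate = "GO" if score >= 5 else "NO-GO"
    return score, gate

print(readiness(0))   # (10.0, 'GO')
print(readiness(40))  # (2.0, 'NO-GO')
```

Scores in the 5-6 band still gate GO, matching the "GO (with notes)" row.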

Anti-Hallucination Verification

Verify technical claims have evidence:

| Claim Type | Verification |
|------------|--------------|
| RFC/Standard reference | MCP Ref search confirms existence |
| Library version | Context7 query confirms version |
| Security requirement | OWASP/CWE reference exists |
| Performance claim | Benchmark/doc reference |

Status: VERIFIED (all claims sourced) or FLAGGED (unverified claims listed)

Task-AC Coverage Matrix

Output explicit mapping:

| AC | Task(s) | Coverage |
|----|---------|----------|
| AC1: Given/When/Then | T-001, T-002 | ✅ |
| AC2: Given/When/Then | T-003 | ✅ |
| AC3: Given/When/Then | — | ❌ UNCOVERED |

Coverage: {covered}/{total} ACs (target: 100%)
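A sketch of the coverage computation; `mapping` is a hypothetical dict from each AC to the Task IDs implementing it, where an empty list means uncovered.

```python
def ac_coverage(mapping: dict[str, list[str]]) -> tuple[str, list[str]]:
    """Return a '{covered}/{total} ACs' summary plus the uncovered ACs."""
    uncovered = [ac for ac, tasks in mapping.items() if not tasks]
    covered = len(mapping) - len(uncovered)
    return f"{covered}/{len(mapping)} ACs", uncovered
```

Any entry in the uncovered list maps straight to a suggested missing Task under criterion #17.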

Self-Audit Protocol (Mandatory)

Before marking any criterion as complete, provide concrete evidence (doc path, MCP result, Linear update).

| # | Self-Audit Question | Required Evidence |
|---|---------------------|-------------------|
| 1 | Validated all 8 Story sections? | Section list |
| 2 | Loaded full description for each Task? | Task validation count |
| 3 | Statement in As a/I want/So that? | Quoted statement |
| 4 | AC are G/W/T and testable? | AC count and format |
| 5 | Verified RFC/OWASP/REST compliance? | Standards list + MCP result |
| 6 | Checked library versions via Context7? | Context7 result |
| 7 | Test Strategy kept empty? | Note that testing deferred |
| 8 | Docs integrated, no standalone tasks? | Integration evidence |
| 9 | Task count 3-8 and 3-5h? | Task count/sizes |
| 10 | No premature test tasks? | Search result |
| 11 | Only current-scope features (YAGNI)? | Scope review |
| 12 | Simplest approach within standards (KISS)? | Simplicity justification |
| 13 | Tasks ordered Foundation-First? | Task order list |
| 14 | All pattern docs exist and referenced? | Doc paths from ln-002 |
| 15 | Hardcoded values handled? | TODO/config evidence |
| 16 | Each Task aligns with Story statement? | Alignment check result |
| 17 | Each AC has implementing Task? | Coverage matrix |
| 18 | No forward Story dependencies? | Dependency check result |
| 19 | No forward Task dependencies? | Task order/dependency result |

Definition of Done

  • Phase 1: Auto-discovery done; Story + Tasks metadata loaded; task count checked
  • Phase 2: Domain extraction complete; ln-002 delegated for docs; MCP research done; Anti-Hallucination verification done; Penalty Points calculated
  • Phase 3: Audit results shown; IF Plan Mode: user approved
  • Phase 4: All 19 criteria auto-fixed; Penalty Points = 0; Test Strategy empty; test tasks removed
  • Phase 5: Final Assessment output:

    ```yaml
    gate: GO | NO-GO
    readiness_score: {1-10}
    penalty_points: 0 (was {N})
    anti_hallucination: VERIFIED | FLAGGED
    ac_coverage: "{N}/{M} (100%)"
    ac_matrix:
      - ac: "AC1"
        tasks: ["T-001", "T-002"]
        status: covered
    ```
  • Story/Tasks set to Todo; `kanban_board.md` updated; Linear comment with Final Assessment added
  • Optional: If `--execute` flag, ln-400-story-executor invoked after approval

Example Workflow

Story: "Create user management API with rate limiting"

  1. Phase 1: Load metadata (5 Tasks, status Backlog)
  2. Phase 2:
    • Domain extraction: REST API, Rate Limiting
    • Delegate ln-002: creates Guide-05 (REST patterns), Guide-06 (Rate Limiting)
    • MCP Ref: RFC 7231 compliance, OWASP API Security
    • Context7: Express v4.19 is latest stable (Story pins v4.17)
    • Penalty Points: 23 total (standards=10, version=5, missing docs=5, structure=3)
  3. Phase 3:
    • Show Penalty Points table
    • IF Plan Mode: "23 penalty points found. Fix plan ready. Approve?"
  4. Phase 4:
    • Fix #4: Normalize Acceptance Criteria to Given/When/Then
    • Fix #5: Add RFC 7231 compliance notes
    • Fix #6: Update Express v4.17 -> v4.19
    • Fix #14: Add Guide-05, Guide-06 references (docs already created by ln-002 in Phase 2)
    • All fixes applied, Penalty Points = 0
  5. Phase 5: Story -> Todo, tabular report

Template Loading

Templates: `story_template.md`, `task_template_implementation.md`

Loading Logic:

  1. Check if `docs/templates/{template}.md` exists in the target project
  2. IF NOT EXISTS:
     a. Create `docs/templates/` directory if missing
     b. Copy `shared/templates/{template}.md` to `docs/templates/{template}.md`
     c. Replace placeholders in the LOCAL copy:
        • `{{TEAM_ID}}` → from `docs/tasks/kanban_board.md`
        • `{{DOCS_PATH}}` → "docs" (standard)
  3. Use the LOCAL copy (`docs/templates/{template}.md`) for all validation operations

Rationale: Templates are copied to target project on first use, ensuring:

  • Project independence (no dependency on skills repository)
  • Customization possible (project can modify local templates)
  • Placeholder replacement happens once at copy time
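The copy-on-first-use logic above can be sketched as follows; the function name and the team ID value are illustrative, and paths mirror this section.

```python
from pathlib import Path

def load_template(template: str, team_id: str = "LN") -> Path:
    """Copy a template into the project on first use, filling placeholders once."""
    local = Path("docs/templates") / template
    if not local.exists():
        local.parent.mkdir(parents=True, exist_ok=True)
        text = (Path("shared/templates") / template).read_text()
        # placeholder replacement happens once, at copy time
        text = text.replace("{{TEAM_ID}}", team_id).replace("{{DOCS_PATH}}", "docs")
        local.write_text(text)
    return local  # all validation operations use the LOCAL copy
```

On later calls the local copy already exists, so project-specific customizations survive.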

Reference Files

  • Final Assessment: `references/readiness_scoring.md` (GO/NO-GO rules, Readiness Score calculation)
  • Templates (centralized): `shared/templates/story_template.md`, `shared/templates/task_template_implementation.md`
  • Local copies: `docs/templates/` (in target project)
  • Validation Checklists (Progressive Disclosure):
    • `references/verification_checklist_template.md` (overview of 7 categories)
    • `references/structural_validation.md` (criteria #1-#4)
    • `references/standards_validation.md` (criterion #5)
    • `references/solution_validation.md` (criterion #6)
    • `references/workflow_validation.md` (criteria #7-#13)
    • `references/quality_validation.md` (criteria #14-#15)
    • `references/dependency_validation.md` (criteria #18-#19)
    • `references/traceability_validation.md` (criteria #16-#17)
    • `references/domain_patterns.md` (pattern registry for ln-002 delegation)
    • `references/penalty_points.md` (penalty system details)
  • Linear integration: `../shared/templates/linear_integration.md`

Version: 7.0.0 (BREAKING: Added 2 new criteria #18-#19 for Story/Task dependencies per BMAD Method. Expanded criterion #4 with AC completeness/specificity, #9 with Database Creation Principle, #13 with forward dependency checks, #17 with STRONG/WEAK/MISSING coverage quality. Total 19 criteria, max 67 penalty points.) Last Updated: 2026-02-03