EasyPlatform dor-gate

[Code Quality] Validates a PBI against the Definition of Ready before grooming. Blocks unready PBIs from entering grooming.

Install

Source · Clone the upstream repo:

    git clone https://github.com/duc01226/EasyPlatform

Claude Code · Install into `~/.claude/skills/`:

    T=$(mktemp -d) && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/dor-gate" ~/.claude/skills/duc01226-easyplatform-dor-gate && rm -rf "$T"

Manifest: `.claude/skills/dor-gate/SKILL.md`

Source content

[IMPORTANT] Use `TaskCreate` to break ALL work into small tasks BEFORE starting.

Evidence Gate: MANDATORY — every claim requires `file:line` proof or traced evidence with a confidence percentage (>80% to act).

<!-- SYNC:critical-thinking-mindset -->

Critical Thinking Mindset — Apply critical thinking and sequential thinking. Every claim needs traced proof; confidence must exceed 80% to act. Anti-hallucination: never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence. Certainty without evidence is the root of all hallucination.

<!-- /SYNC:critical-thinking-mindset --> <!-- SYNC:ai-mistake-prevention -->

AI Mistake Prevention — Failure modes to avoid on every task:

  • Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
  • Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
  • Trace full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
  • Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just happy path.
  • When debugging, ask "whose responsibility?" before fixing. Trace whether bug is in caller (wrong data) or callee (wrong handling). Fix at responsible layer — never patch symptom site.
  • Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
  • Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
  • Holistic-first debugging — resist nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
  • Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
  • Surface ambiguity before coding — don't pick silently. If request has multiple interpretations, present each with effort estimate and ask. Never assume all-records, file-based, or more complex path.
<!-- /SYNC:ai-mistake-prevention -->

Quick Summary

Goal: Validate a PBI artifact against the Definition of Ready (DoR) checklist. Block PBIs that fail required criteria from entering grooming.

Key distinction: This is an automated quality gate, not a collaborative review — use `/pbi-challenge` for that.

Be skeptical. Apply critical thinking and sequential thinking. Every claim needs traced proof and a confidence percentage (act only above 80%).

Workflow

  1. Locate PBI — Find the PBI artifact in `team-artifacts/pbis/` or the active plan context. If not found, ask the user for the path.
  2. Load DoR protocol — Apply the 7-criteria DoR checklist (story template, testable AC, wireframes, UI design, AI pre-review, story points, dependencies).
  3. Evaluate each criterion — Parse PBI sections against 7 DoR items:
    • Check user story template format ("As a... I want... So that...")
    • Scan AC for vague language ("should", "might", "TBD", "etc.", "various")
    • Verify GIVEN/WHEN/THEN format (min 3 scenarios)
    • Check for wireframe/mockup references (or explicit "N/A" for backend-only)
    • Check for UI design status
    • Verify story_points and complexity fields present with valid values
    • Verify dependencies table with correct columns
  4. Classify result:
    • PASS — All 7 criteria pass → ready for grooming
    • FAIL — Any criterion fails → blocked, list fixes needed
  5. Output verdict — Use the DoR Gate Output Template from protocol
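The per-criterion checks in step 3 could be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the skill's actual implementation; the regexes and the vague-word list are approximations of the criteria described above:

```python
import re

# Vague-language markers from the AC check (step 3).
VAGUE_WORDS = ("should", "might", "TBD", "etc.", "various")

def check_story_template(text: str) -> bool:
    # Criterion 1: "As a {role}, I want {goal}, so that {benefit}" present.
    return re.search(r"As an? .+?,? I want .+?,? so that .+", text, re.I) is not None

def check_acceptance_criteria(ac_text: str) -> tuple[bool, list[str]]:
    # Criterion 2: GIVEN/WHEN/THEN format, no vague language, min 3 scenarios.
    issues = []
    scenarios = re.findall(r"GIVEN\b.*?WHEN\b.*?THEN\b", ac_text, re.S | re.I)
    if len(scenarios) < 3:
        issues.append(f"only {len(scenarios)} GIVEN/WHEN/THEN scenarios (need 3)")
    for word in VAGUE_WORDS:
        if word.lower() in ac_text.lower():
            issues.append(f"vague language: {word!r}")
    return (not issues, issues)
```

Each check returns evidence (the specific issue found) rather than a bare boolean, matching the "no guessing" rule: every verdict row must cite what was actually seen in the PBI.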

Checklist (from protocol)

Required (ALL must pass)

  • User story template — "As a {role}, I want {goal}, so that {benefit}" present
  • AC testable — All AC use GIVEN/WHEN/THEN, no vague language, min 3 scenarios
  • Wireframes/mockups — Present or explicit "N/A" for backend-only
  • UI design ready — Completed or "N/A" for backend-only
  • AI pre-review — `/refine-review` or `/pbi-challenge` result is PASS or WARN
  • Story points — Valid Fibonacci (1-21) + complexity (Low/Medium/High)
  • Dependencies table — Complete with Type column (must-before/can-parallel/blocked-by/independent)
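The last two criteria are mechanical enough to sketch directly. This is an illustrative sketch only; the row shape for the dependencies table is an assumption based on the checklist wording:

```python
FIBONACCI_POINTS = {1, 2, 3, 5, 8, 13, 21}
COMPLEXITY_LEVELS = {"Low", "Medium", "High"}
DEPENDENCY_TYPES = {"must-before", "can-parallel", "blocked-by", "independent"}

def check_estimation(story_points, complexity):
    """Criterion 6: Fibonacci points 1-21 plus a complexity rating.

    Points above 13 still pass, but carry a strong WARN to split the PBI
    (see Key Rules below).
    """
    if story_points not in FIBONACCI_POINTS:
        return False, [f"story_points {story_points!r} is not Fibonacci 1-21"]
    if complexity not in COMPLEXITY_LEVELS:
        return False, [f"complexity {complexity!r} not Low/Medium/High"]
    warnings = []
    if story_points > 13:
        warnings.append("story_points > 13: recommend splitting (WARN, not FAIL)")
    return True, warnings

def check_dependency_types(rows):
    # Criterion 7: every row of the dependencies table needs a valid Type.
    bad = [r for r in rows if r.get("Type") not in DEPENDENCY_TYPES]
    return (not bad, [f"invalid Type {r.get('Type')!r}" for r in bad])
```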

Output

## DoR Gate Result

**PBI:** {PBI filename}
**Status:** PASS | FAIL
**Date:** {date}

### Checklist Results

| #   | Criterion                   | Status    | Evidence / Issue |
| --- | --------------------------- | --------- | ---------------- |
| 1   | User story template         | ✅/❌     | {evidence}       |
| 2   | AC testable and unambiguous | ✅/❌     | {evidence}       |
| 3   | Wireframes/mockups          | ✅/❌/N/A | {evidence}       |
| 4   | UI design ready             | ✅/❌/N/A | {evidence}       |
| 5   | AI pre-review passed        | ✅/❌     | {evidence}       |
| 6   | Story points estimated      | ✅/❌     | {evidence}       |
| 7   | Dependencies complete       | ✅/❌     | {evidence}       |

### Blocking Items (if FAIL)

1. {Fix instruction}

### Verdict

**{READY_FOR_GROOMING | FIX_REQUIRED}**
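Under the template above, step 4's classification reduces to an all-pass check over the seven criteria. A minimal sketch (the dictionary keys are illustrative):

```python
def classify(criteria_results: dict) -> str:
    """Map per-criterion pass/fail booleans to the template's verdict line.

    Any single failure blocks grooming; there is no partial pass.
    """
    return "READY_FOR_GROOMING" if all(criteria_results.values()) else "FIX_REQUIRED"
```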

Key Rules

  • FAIL blocks grooming — If ANY required criterion fails, PBI cannot enter grooming. List specific fixes.
  • No guessing — Every check must reference specific content (line numbers) in the PBI artifact.
  • Protocol is source of truth — Always reference `refinement-dor-checklist-protocol.md` for criteria definitions.
  • Story points >13 — Flag recommendation to split (not a FAIL, but a strong WARN).

Next Steps

MANDATORY — NO EXCEPTIONS: after completing this skill, you MUST use `AskUserQuestion` to present these options. Do NOT skip this because the task seems "simple" or "obvious" — the user decides:

  • "/handoff (Recommended)" — If PASS: hand off to grooming presentation
  • "/refine" — If FAIL: revise PBI
  • "/pbi-challenge" — If collaborative review needed before re-checking DoR
  • "Skip, continue manually" — user decides

Closing Reminders

MANDATORY: break work into small todo tasks using `TaskCreate` BEFORE starting. MANDATORY: validate decisions with the user via `AskUserQuestion` — never auto-decide. MANDATORY: add a final review todo task to verify work quality.

<!-- SYNC:critical-thinking-mindset:reminder -->
  • MUST apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present a guess as fact. <!-- /SYNC:critical-thinking-mindset:reminder --> <!-- SYNC:ai-mistake-prevention:reminder -->
  • MUST apply AI mistake prevention — holistic-first debugging, fix at the responsible layer, surface ambiguity before coding, re-read files after compaction. <!-- /SYNC:ai-mistake-prevention:reminder -->