EasyPlatform refine-review

[Code Quality] Review PBI artifact for completeness, missing concerns, and quality before proceeding to story creation. AI self-review gate after /refine.

install

source · Clone the upstream repo:
git clone https://github.com/duc01226/EasyPlatform

Claude Code · Install into ~/.claude/skills/:
T=$(mktemp -d) && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/refine-review" ~/.claude/skills/duc01226-easyplatform-refine-review && rm -rf "$T"

manifest: .claude/skills/refine-review/SKILL.md

source content

[IMPORTANT] Use TaskCreate to break ALL work into small tasks BEFORE starting — including tasks for each file read. This prevents context loss from long files. For simple tasks, the AI MUST ask the user whether to skip this breakdown.

Evidence Gate: MANDATORY IMPORTANT MUST ATTENTION — every claim, finding, and recommendation requires file:line proof or traced evidence with a confidence percentage (>80% to act, <80% must verify first).
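A minimal sketch (Python, with a hypothetical helper name that is not part of the skill) of how this confidence gate can be applied before acting on a finding:

```python
# Illustrative sketch only: gate an action on the stated confidence
# percentage and on the presence of file:line proof, per the Evidence Gate.

def evidence_gate(confidence_pct: float, has_file_line_proof: bool) -> str:
    """Return the allowed next step for a claim, finding, or recommendation."""
    if not has_file_line_proof:
        return "BLOCKED: provide file:line proof or traced evidence first"
    if confidence_pct > 80:
        return "ACT"
    return "VERIFY FIRST: confidence is not above 80%"
```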

OOP & DRY Enforcement: MANDATORY IMPORTANT MUST ATTENTION — flag duplicated patterns that should be extracted to a base class, generic, or helper. Classes in the same group or with the same suffix (e.g., *Entity, *Dto, *Service) MUST inherit a common base (even if empty now — this enables future shared logic and child overrides). Verify the project has code linting/analyzers configured for the stack.
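As an illustration of the same-suffix rule, a minimal sketch (Python, with hypothetical OrderEntity and CustomerEntity names; the actual stack may be C# or TypeScript):

```python
# Hypothetical illustration of the same-suffix base-class rule: every *Entity
# inherits a shared base, even if the base starts empty, so shared logic and
# child overrides later have a single home.

class BaseEntity:
    """Common base for all *Entity classes; may start empty."""

    def validate(self) -> None:
        """Shared hook that child entities can override later."""
        return None


class OrderEntity(BaseEntity):
    def __init__(self, order_id: str) -> None:
        self.order_id = order_id


class CustomerEntity(BaseEntity):
    def __init__(self, customer_id: str) -> None:
        self.customer_id = customer_id
```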

External Memory: For complex or lengthy work (research, analysis, scan, review), write intermediate findings and final results to a report file in plans/reports/ — this prevents context loss and serves as the deliverable.

<!-- SYNC:critical-thinking-mindset -->

Critical Thinking Mindset — Apply critical thinking and sequential thinking. Every claim needs traced proof and confidence >80% to act. Anti-hallucination: never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence — certainty without evidence is the root of all hallucination.

<!-- /SYNC:critical-thinking-mindset --> <!-- SYNC:ai-mistake-prevention -->

AI Mistake Prevention — Failure modes to avoid on every task:

  • Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
  • Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
  • Trace full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
  • Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just happy path.
  • When debugging, ask "whose responsibility?" before fixing. Trace whether bug is in caller (wrong data) or callee (wrong handling). Fix at responsible layer — never patch symptom site.
  • Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
  • Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
  • Holistic-first debugging — resist nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
  • Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
  • Surface ambiguity before coding — don't pick silently. If request has multiple interpretations, present each with effort estimate and ask. Never assume all-records, file-based, or more complex path.
<!-- /SYNC:ai-mistake-prevention --> <!-- SYNC:double-round-trip-review -->

Deep Multi-Round Review — Escalating rounds. Round 1 in main session. Round 2+ and EVERY recursive re-review iteration MUST use a fresh sub-agent.

Round 1: Main-session review. Read target files, build understanding, note issues. Output baseline findings.

Round 2: MANDATORY fresh sub-agent review — see SYNC:fresh-context-review for the spawn mechanism and SYNC:review-protocol-injection for the canonical Agent prompt template. The sub-agent re-reads ALL files from scratch with ZERO Round 1 memory. It must catch:

  • Cross-cutting concerns missed in Round 1
  • Interaction bugs between changed files
  • Convention drift (new code vs existing patterns)
  • Missing pieces that should exist but don't
  • Subtle edge cases the main session rationalized away

Round 3+ (recursive after fixes): After ANY fix cycle, MANDATORY fresh sub-agent re-review. Spawn a NEW Agent tool call each iteration — never reuse Round 2's agent. Each new agent re-reads ALL files from scratch with full protocol injection. Continue until PASS or 3 fresh-subagent rounds max, then escalate to the user via AskUserQuestion.

Rules:

  • NEVER declare PASS after Round 1 alone
  • NEVER reuse a sub-agent across rounds — every iteration spawns a NEW Agent call
  • Main agent READS sub-agent reports but MUST NOT filter, reinterpret, or override findings
  • Max 3 fresh-subagent rounds per review — if still FAIL, escalate via AskUserQuestion (do NOT silently loop)
  • Track round count in conversation context (session-scoped)
  • Final verdict must incorporate ALL rounds

Report must include a ## Round N Findings (Fresh Sub-Agent) section for every round N≥2.
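A minimal sketch (Python pseudocode, with hypothetical main_session_review / spawn_fresh_subagent / apply_fixes / ask_user helpers) of the escalating round policy described above:

```python
# Illustrative sketch only: Round 1 runs in the main session, every later
# round spawns a brand-new fresh sub-agent, and after 3 fresh rounds without
# a PASS the review escalates to the user instead of silently looping.

MAX_FRESH_ROUNDS = 3

def run_review_rounds(main_session_review, spawn_fresh_subagent, apply_fixes, ask_user):
    reports = [main_session_review()]                      # Round 1: baseline findings
    for i in range(MAX_FRESH_ROUNDS):
        report = spawn_fresh_subagent(round_number=i + 2)  # NEW agent every round
        reports.append(report)                             # recorded, never filtered
        if report["status"] == "PASS":
            return {"verdict": "PASS", "rounds": reports}
        apply_fixes(report)                                # fix, then re-review fresh
    # 3 fresh rounds still FAIL: do NOT silently loop, ask the user instead
    return ask_user("Still failing after 3 fresh sub-agent rounds", reports)
```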

<!-- /SYNC:double-round-trip-review --> <!-- SYNC:fresh-context-review -->

Fresh Sub-Agent Review — Eliminate orchestrator confirmation bias via isolated sub-agents.

Why: The main agent knows what it (or /cook) just fixed and rationalizes findings accordingly. A fresh sub-agent has ZERO memory, re-reads from scratch, and catches what the main agent dismissed. Sub-agent bias is mitigated by (1) fresh context, (2) verbatim protocol injection, (3) the main agent not filtering the report.

When: Round 2 of ANY review AND every recursive re-review iteration after fixes. NOT needed when Round 1 already PASSes with zero issues.

How:

  1. Spawn a NEW Agent tool call — use the code-reviewer subagent_type for code reviews, general-purpose for plan/doc/artifact reviews
  2. Inject ALL required review protocols VERBATIM into the prompt — see SYNC:review-protocol-injection for the full list and template. Never reference protocols by file path; AI compliance drops behind file-read indirection (see SYNC:shared-protocol-duplication-policy)
  3. Sub-agent re-reads ALL target files from scratch via its own tool calls — never pass file contents inline in the prompt
  4. Sub-agent writes a structured report to plans/reports/{review-type}-round{N}-{date}.md
  5. Main agent reads the report, integrates findings into its own report, DOES NOT override or filter

Rules:

  • NEVER reuse a sub-agent across rounds — every iteration spawns a NEW Agent call
  • NEVER skip fresh-subagent review because "last round was clean" — every fix triggers a fresh round
  • Max 3 fresh-subagent rounds per review — escalate via AskUserQuestion if still failing; do NOT silently loop or fall back to any prior protocol
  • Track iteration count in conversation context (session-scoped, no persistent files)
<!-- /SYNC:fresh-context-review --> <!-- SYNC:review-protocol-injection -->

Review Protocol Injection — Every fresh sub-agent review prompt MUST embed 10 protocol blocks VERBATIM. The template below has ALL 10 bodies already expanded inline. Copy the template wholesale into the Agent call's prompt field at runtime, replacing only the {placeholders} in the Task / Round / Reference Docs / Target Files / Output sections with context-specific values. Do NOT touch the embedded protocol sections.

Why inline expansion: Placeholder markers would force file-read indirection at runtime. AI compliance drops significantly behind indirection (see SYNC:shared-protocol-duplication-policy). Therefore the template carries all 10 protocol bodies pre-embedded.

Subagent Type Selection

  • code-reviewer — for code reviews (reviewing source files, git diffs, implementation)
  • general-purpose — for plan / doc / artifact reviews (reviewing markdown plans, docs, specs)

Canonical Agent Call Template (Copy Verbatim)

Agent({
  description: "Fresh Round {N} review",
  subagent_type: "code-reviewer",
  prompt: `
## Task
{review-specific task — e.g., "Review all uncommitted changes for code quality" | "Review plan files under {plan-dir}" | "Review integration tests in {path}"}

## Round
Round {N}. You have ZERO memory of prior rounds. Re-read all target files from scratch via your own tool calls. Do NOT trust anything from the main agent beyond this prompt.

## Protocols (follow VERBATIM — these are non-negotiable)

### Evidence-Based Reasoning
Speculation is FORBIDDEN. Every claim needs proof.
1. Cite file:line, grep results, or framework docs for EVERY claim
2. Declare confidence: >80% act freely, 60-80% verify first, <60% DO NOT recommend
3. Cross-service validation required for architectural changes
4. "I don't have enough evidence" is valid and expected output
BLOCKED until: Evidence file path (file:line) provided; Grep search performed; 3+ similar patterns found; Confidence level stated.
Forbidden without proof: "obviously", "I think", "should be", "probably", "this is because".
If incomplete → output: "Insufficient evidence. Verified: [...]. Not verified: [...]."

### Bug Detection
MUST check categories 1-4 for EVERY review. Never skip.
1. Null Safety: Can params/returns be null? Are they guarded? Optional chaining gaps? .find() returns checked?
2. Boundary Conditions: Off-by-one (< vs <=)? Empty collections handled? Zero/negative values? Max limits?
3. Error Handling: Try-catch scope correct? Silent swallowed exceptions? Error types specific? Cleanup in finally?
4. Resource Management: Connections/streams closed? Subscriptions unsubscribed on destroy? Timers cleared? Memory bounded?
5. Concurrency (if async): Missing await? Race conditions on shared state? Stale closures? Retry storms?
6. Stack-Specific: JS: === vs ==, typeof null. C#: async void, missing using, LINQ deferred execution.
Classify: CRITICAL (crash/corrupt) → FAIL | HIGH (incorrect behavior) → FAIL | MEDIUM (edge case) → WARN | LOW (defensive) → INFO.

### Design Patterns Quality
Priority checks for every code change:
1. DRY via OOP: Same-suffix classes (*Entity, *Dto, *Service) MUST share base class. 3+ similar patterns → extract to shared abstraction.
2. Right Responsibility: Logic in LOWEST layer (Entity > Domain Service > Application Service > Controller). Never business logic in controllers.
3. SOLID: Single responsibility (one reason to change). Open-closed (extend, don't modify). Liskov (subtypes substitutable). Interface segregation (small interfaces). Dependency inversion (depend on abstractions).
4. After extraction/move/rename: Grep ENTIRE scope for dangling references. Zero tolerance.
5. YAGNI gate: NEVER recommend patterns unless 3+ occurrences exist. Don't extract for hypothetical future use.
Anti-patterns to flag: God Object, Copy-Paste inheritance, Circular Dependency, Leaky Abstraction.

### Logic & Intention Review
Verify WHAT code does matches WHY it was changed.
1. Change Intention Check: Every changed file MUST serve the stated purpose. Flag unrelated changes as scope creep.
2. Happy Path Trace: Walk through one complete success scenario through changed code.
3. Error Path Trace: Walk through one failure/edge case scenario through changed code.
4. Acceptance Mapping: If plan context available, map every acceptance criterion to a code change.
NEVER mark review PASS without completing both traces (happy + error path).

### Test Spec Verification
Map changed code to test specifications.
1. From changed files → find TC-{FEAT}-{NNN} in docs/business-features/{Service}/detailed-features/{Feature}.md Section 15.
2. Every changed code path MUST map to a corresponding TC (or flag as "needs TC").
3. New functions/endpoints/handlers → flag for test spec creation.
4. Verify TC evidence fields point to actual code (file:line, not stale references).
5. Auth changes → TC-{FEAT}-02x exist? Data changes → TC-{FEAT}-01x exist?
6. If no specs exist → log gap and recommend /tdd-spec.
NEVER skip test mapping. Untested code paths are the #1 source of production bugs.

### Fix-Layer Accountability
NEVER fix at the crash site. Trace the full flow, fix at the owning layer. The crash site is a SYMPTOM, not the cause.
MANDATORY before ANY fix:
1. Trace full data flow — Map the complete path from data origin to crash site across ALL layers (storage → backend → API → frontend → UI). Identify where bad state ENTERS, not where it CRASHES.
2. Identify the invariant owner — Which layer's contract guarantees this value is valid? Fix at the LOWEST layer that owns the invariant, not the highest layer that consumes it.
3. One fix, maximum protection — If fix requires touching 3+ files with defensive checks, you are at the wrong layer — go lower.
4. Verify no bypass paths — Confirm all data flows through the fix point. Check for direct construction skipping factories, clone/spread without re-validation, raw data not wrapped in domain models, mutations outside the model layer.
BLOCKED until: Full data flow traced (origin → crash); Invariant owner identified with file:line evidence; All access sites audited (grep count); Fix layer justified (lowest layer that protects most consumers).
Anti-patterns (REJECT): "Fix it where it crashes" (crash site ≠ cause site, trace upstream); "Add defensive checks at every consumer" (scattered defense = wrong layer); "Both fix is safer" (pick ONE authoritative layer).

### Rationalization Prevention
AI skips steps via these evasions. Recognize and reject:
- "Too simple for a plan" → Simple + wrong assumptions = wasted time. Plan anyway.
- "I'll test after" → RED before GREEN. Write/verify test first.
- "Already searched" → Show grep evidence with file:line. No proof = no search.
- "Just do it" → Still need TaskCreate. Skip depth, never skip tracking.
- "Just a small fix" → Small fix in wrong location cascades. Verify file:line first.
- "Code is self-explanatory" → Future readers need evidence trail. Document anyway.
- "Combine steps to save time" → Combined steps dilute focus. Each step has distinct purpose.

### Graph-Assisted Investigation
MANDATORY when .code-graph/graph.db exists.
HARD-GATE: MUST run at least ONE graph command on key files before concluding any investigation.
Pattern: Grep finds files → trace --direction both reveals full system flow → Grep verifies details.
- Investigation/Scout: trace --direction both on 2-3 entry files
- Fix/Debug: callers_of on buggy function + tests_for
- Feature/Enhancement: connections on files to be modified
- Code Review: tests_for on changed functions
- Blast Radius: trace --direction downstream
CLI: python .claude/scripts/code_graph {command} --json. Use --node-mode file first (10-30x less noise), then --node-mode function for detail.

### Understand Code First
HARD-GATE: Do NOT write, plan, or fix until you READ existing code.
1. Search 3+ similar patterns (grep/glob) — cite file:line evidence.
2. Read existing files in target area — understand structure, base classes, conventions.
3. Run python .claude/scripts/code_graph trace <file> --direction both --json when .code-graph/graph.db exists.
4. Map dependencies via connections or callers_of — know what depends on your target.
5. Write investigation to .ai/workspace/analysis/ for non-trivial tasks (3+ files).
6. Re-read analysis file before implementing — never work from memory alone.
7. NEVER invent new patterns when existing ones work — match exactly or document deviation.
BLOCKED until: Read target files; Grep 3+ patterns; Graph trace (if graph.db exists); Assumptions verified with evidence.

## Reference Docs (READ before reviewing)
- docs/project-reference/code-review-rules.md
- {skill-specific reference docs — e.g., integration-test-reference.md for integration-test-review; backend-patterns-reference.md for backend reviews; frontend-patterns-reference.md for frontend reviews}

## Target Files
{explicit file list OR "run git diff to see uncommitted changes" OR "read all files under {plan-dir}"}

## Output
Write a structured report to plans/reports/{review-type}-round{N}-{date}.md with sections:
- Status: PASS | FAIL
- Issue Count: {number}
- Critical Issues (with file:line evidence)
- High Priority Issues (with file:line evidence)
- Medium / Low Issues
- Cross-cutting findings

Return the report path and status to the main agent.
Every finding MUST have file:line evidence. Speculation is forbidden.
`
})

Rules

  • DO copy the template wholesale — including all 10 embedded protocol sections
  • DO replace only the {placeholders} in the Task / Round / Reference Docs / Target Files / Output sections with context-specific content
  • DO choose the code-reviewer subagent_type for code reviews and general-purpose for plan / doc / artifact reviews
  • DO NOT paraphrase, summarize, or skip any protocol section
  • DO NOT pass file contents inline — the sub-agent reads via its own tool calls so it has a fresh context
  • DO NOT reference protocols by file path or tag name — the bodies are already embedded above
  • DO NOT introduce placeholder markers for the protocols — they must stay literally expanded
<!-- /SYNC:review-protocol-injection --> <!-- SYNC:graph-impact-analysis -->

Graph Impact Analysis — When .code-graph/graph.db exists, run blast-radius --json to detect ALL files affected by changes (7 edge types: CALLS, MESSAGE_BUS, API_ENDPOINT, TRIGGERS_EVENT, PRODUCES_EVENT, TRIGGERS_COMMAND_EVENT, INHERITS). Compute the gap: impacted_files - changed_files = potentially stale files. Risk: <5 Low, 5-20 Medium, >20 High. Use trace --direction downstream for deep chains on high-impact files.
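A minimal sketch (Python) of the gap and risk computation. It assumes the code_graph CLI invocation shown earlier in this file and a hypothetical impacted_files key in its JSON output; the actual output shape may differ.

```python
# Illustrative sketch only: compute impacted_files - changed_files and bucket
# the gap into the Low / Medium / High risk bands described above.

import json
import subprocess

def stale_file_gap(changed_files: set[str]) -> dict:
    out = subprocess.run(
        ["python", ".claude/scripts/code_graph", "blast-radius", "--json"],
        capture_output=True, text=True, check=True,
    )
    # "impacted_files" is an assumed key name for the blast-radius output
    impacted = set(json.loads(out.stdout).get("impacted_files", []))
    gap = impacted - changed_files
    risk = "Low" if len(gap) < 5 else "Medium" if len(gap) <= 20 else "High"
    return {"potentially_stale": sorted(gap), "risk": risk}
```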

<!-- /SYNC:graph-impact-analysis -->

Quick Summary

Goal: Auto-review a refined PBI artifact for completeness, quality, and correctness before story creation proceeds.

Key distinction: AI self-review (automatic), NOT user interview.

Be skeptical. Apply critical thinking and sequential thinking. Every claim needs traced proof and a stated confidence percentage (above 80% before acting on it).

Frontend/UI Context (if applicable)

When this task involves frontend or UI changes, consult the following:

<!-- SYNC:ui-system-context -->

UI System Context — For ANY task touching .ts, .html, .scss, or .css files:

MUST ATTENTION READ before implementing:

  1. docs/project-reference/frontend-patterns-reference.md — component base classes, stores, forms
  2. docs/project-reference/scss-styling-guide.md — BEM methodology, SCSS variables, mixins, responsive
  3. docs/project-reference/design-system/README.md — design tokens, component inventory, icons

Reference docs/project-config.json for project-specific paths.

<!-- /SYNC:ui-system-context -->
  • Component patterns: docs/project-reference/frontend-patterns-reference.md (content auto-injected by hook — check for [Injected: ...] header before reading)
  • Styling/BEM guide: docs/project-reference/scss-styling-guide.md
  • Design system tokens: docs/project-reference/design-system/README.md

Adversarial Review Mindset (NON-NEGOTIABLE)

Default stance: SKEPTIC reviewing your own work. You just produced this PBI. Your natural bias is to find it acceptable. Break that bias deliberately.

Self-review trap: The AI wrote this PBI moments ago. Every reasoning chain you used to WRITE it is still active in this context. You will find it coherent because YOU made it coherent — not because it is correct. This section forces adversarial challenge.

Adversarial Techniques (apply ALL before concluding)

1. Scope Creep Detector: Read each acceptance criterion. Ask: "Is this required for the core user value, or is it a 'nice to have' that crept in during writing?" Flag any AC that is NOT directly traceable to the problem statement. A PBI with 8 ACs for a simple feature is almost certainly over-scoped.

2. Assumption Stress Test: List the 3 biggest assumptions embedded in the acceptance criteria. For each, ask: "What if this assumption is wrong?" Common hidden assumptions: "users will provide valid input," "the data is in the expected format," "the system is in a known state." If any AC fails when an assumption breaks, it needs a corresponding error-path AC.

3. Missing Stakeholder Check: Ask: "Who else uses this feature who was NOT the primary user in the problem statement?" (e.g., admins, support staff, other services). Does any missing stakeholder have a use case the PBI does NOT cover? Flag gaps.

4. Pre-Mortem: Assume this PBI is implemented exactly as written and deployed. In 3 months, a user files a critical support ticket. Write the most plausible complaint — the gap the PBI failed to address. If you can't find one, you wrote too safe a PBI.

5. Testability Challenge: For each acceptance criterion, ask: "Can a QA engineer write an automated test for this AC without asking any clarifying questions?" If not, the AC is ambiguous or unmeasurable. Flag it.

6. Contrarian Pass: Before writing any verdict, generate at least 2 sentences arguing the OPPOSITE conclusion. Then decide which argument is stronger.

Forbidden Patterns (self-review specific)

  • "ACs look comprehensive" — You wrote them. Of course they look comprehensive to you. Challenge each one.
  • "Scope is well-defined" — Did you REDUCE scope during writing? What did you cut? Was cutting justified?
  • "Problem statement is clear" — Could it be framed differently to lead to a different (better) solution?
  • "Ready for story creation" — Have you checked with INVEST? Would a developer estimate this in < 1 sprint?
  • Confirming your own work without adversarial challenge — Forbidden. Find at least one thing to question.

Anti-Bias Gate (MANDATORY before finalizing verdict)

Complete this checklist before writing the final verdict:

  • Checked each AC for scope creep (traceable to problem statement)
  • Identified 3 hidden assumptions and stress-tested them
  • Checked for missing stakeholders
  • Ran pre-mortem (most plausible support complaint in 3 months)
  • Verified all ACs are unambiguously testable
  • Generated at least 2 sentences arguing the opposite verdict

If any box is unchecked — you have NOT completed the adversarial review. Go back.

Workflow

  1. Locate PBI — Find the latest PBI artifact in team-artifacts/pbis/ or the active plan context
  2. Evaluate checklist — Score each check as PASS/FAIL
  3. Classify — PASS (all Required pass and >=50% of Recommended pass), WARN (all Required pass but <50% of Recommended), FAIL (any Required check fails); see the sketch after this list
  4. Output verdict — Status, issues, recommendations
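A minimal sketch (Python, with hypothetical boolean check lists) of the classification rule in step 3:

```python
# Illustrative sketch only: all Required checks must pass; PASS additionally
# requires at least 50% of Recommended checks to pass; otherwise WARN.

def classify(required: list[bool], recommended: list[bool]) -> str:
    if not all(required):
        return "FAIL"
    if recommended and sum(recommended) / len(recommended) >= 0.5:
        return "PASS"
    return "WARN"

# Example: all 9 Required pass, 3 of 7 Recommended pass -> "WARN" (3/7 < 50%)
print(classify([True] * 9, [True] * 3 + [False] * 4))
```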

Checklist

Required (all must pass)

| # | Check | Presence | Quality Depth |
|---|-------|----------|---------------|
| 1 | Problem statement — Clear problem defined (not just solution description) | Is a problem statement section present? | Is the problem scoped correctly? Does it confuse symptoms with root cause? Is it framed to lead to only one solution, or is the solution space left open? |
| 2 | Acceptance criteria — Minimum 3 GIVEN/WHEN/THEN scenarios | Are at least 3 GIVEN/WHEN/THEN ACs present? | Are they realistic scenarios, not trivial? Do they cover error paths? Are edge cases specified, or only the happy path? |
| 3 | Story points + complexity — Both fields present with valid values | Are story points and complexity both filled in? | Do both values match the scope of work? Is complexity above 5 justified with a rationale, or just assigned arbitrarily? |
| 4 | Dependencies table — Has dependency table with must-before/can-parallel/blocked-by types | Is a dependencies table present with typed rows? | Are ALL dependencies captured (technical, data, service)? Are "can-parallel" items truly safe to parallelize, or do they share a resource? |
| 5 | Stakeholder validation — User interview was conducted (validation section present) | Is a stakeholder/validation section present? | Was the interview substantive? Does the validation section show actual user responses and insights, or just "interviewed on [date]"? |
| 6 | No vague language — No "should work", "might need", "TBD" in acceptance criteria | Are all ACs free of vague terms? | Are "should work"/"TBD"/"might need" absent from ALL acceptance criteria, including edge cases and error paths? |
| 7 | Scope boundary — Clear "out of scope" or "not included" section | Is an out-of-scope section present? | Are out-of-scope items specific enough to prevent scope creep during implementation? Is anything ambiguously in/out? |
| 8 | Authorization defined — PBI has "Authorization & Access Control" section with roles × CRUD table | Is an authorization section with a CRUD table present? | Does the CRUD table cover ALL PBI-relevant roles? Are there roles (admin, support, service account) that are missing or not addressed? |
| 9 | UI Layout section — If UI: has ## UI Layout section per wireframe protocol. If backend-only: explicit "N/A" | Is a UI Layout section or explicit N/A present? | If UI: does the wireframe have component tree + states + design tokens? If backend-only: is the N/A explicit and justified, not just omitted? |

Recommended (>=50% should pass)

| # | Check | Presence | Quality Depth |
|---|-------|----------|---------------|
| 1 | RICE/MoSCoW score — Prioritization applied | Is a RICE or MoSCoW prioritization score present? | Is the scoring justified with data or reasoning, or assigned arbitrarily without rationale? |
| 2 | Domain vocabulary — Uses project-specific terms from domain-entities-reference.md | Are domain-specific terms used in the PBI? | Is domain language consistent with domain-entities-reference.md, or are new terms introduced without definition? |
| 3 | Risk assessment — Risks identified with mitigations | Is a risk section present with at least one item? | Are risks specific to this PBI, or generic "technical risk"? Are mitigations actionable (who does what, when triggered)? |
| 4 | Non-functional requirements — Performance, security, accessibility considered | Is an NFR section or NFR notes present? | Are perf/security/accessibility numbers specified (e.g., "< 200ms response time", "WCAG AA"), not just "considered" or "addressed"? |
| 5 | Production readiness concerns — PBI includes "Production Readiness Concerns" table with Yes/No/Existing | Is the Production Readiness Concerns table present with all 5 rows? | Are all 5 table rows meaningfully filled — not just "Yes" for all? Are gaps called out with rationale? |
| 6 | Seed data assessed — PBI addresses seed data needs or explicitly states "N/A" | Is seed data addressed or explicitly marked N/A? | If "N/A" is used, is it justified? Is it the right N/A (no new reference data needed), or is seed data actually required and skipped? |
| 7 | Data migration assessed — PBI addresses schema changes and data migration needs or explicitly states "N/A" | Is data migration addressed or explicitly marked N/A? | If "N/A" is used, is it justified? Are there hidden schema implications (added columns, changed constraints) that require migration even if "no new tables"? |

Output

## PBI Review Result

**Status:** PASS | WARN | FAIL
**Artifact:** {pbi-path}

### Required ({X}/{Y})

- ✅/❌ Check description

### Recommended ({X}/{Y})

- ✅/⚠️ Check description

### Issues Found

- ❌ FAIL: {issue}
- ⚠️ WARN: {issue}

### Verdict

{PROCEED | REVISE_FIRST}

Round 2+: Fresh Sub-Agent Re-Review (MANDATORY)

Protocol: SYNC:double-round-trip-review + SYNC:fresh-context-review + SYNC:review-protocol-injection (all inlined above in this file).

After completing the Round 1 checklist evaluation, spawn a fresh general-purpose sub-agent for Round 2 using the canonical Agent template from SYNC:review-protocol-injection above. PBI artifact reviews are NOT code reviews — use subagent_type: "general-purpose". When constructing the Agent call prompt:

  1. Copy the Agent call shape from the SYNC:review-protocol-injection template verbatim
  2. Set subagent_type: "general-purpose"
  3. Embed the full verbatim body of these SYNC blocks: SYNC:evidence-based-reasoning, SYNC:rationalization-prevention, SYNC:understand-code-first (omit code-specific protocols such as SYNC:bug-detection, SYNC:design-patterns-quality, and SYNC:fix-layer-accountability, which are not applicable to PBI artifacts)
  4. Set the Task as "Review the PBI artifact for completeness and quality. Focus on: implicit assumptions not validated, missing acceptance criteria coverage, edge cases not addressed, cross-references not verified, vague language, authorization gaps."
  5. Set Target Files as the explicit PBI artifact file path
  6. Set the report path as plans/reports/refine-review-round{N}-{date}.md

After sub-agent returns:

  1. Read the sub-agent's report
  2. Integrate findings as ## Round {N} Findings (Fresh Sub-Agent) in the main report — DO NOT filter or override
  3. If FAIL: fix issues in the PBI, then spawn a NEW Round N+1 fresh sub-agent (a new Agent call — never reuse Round 2's agent)
  4. Max 3 fresh rounds — escalate to the user via AskUserQuestion if still failing after 3 rounds
  5. Final verdict must incorporate findings from ALL rounds

Key Rules

  • FAIL blocks workflow — If FAIL, do NOT proceed to /story. List specific fixes needed.
  • WARN allows proceeding — Note gaps but continue.
  • No guessing — Every check must reference specific content in the PBI artifact.
  • Constructive — Focus on implementation-blocking issues, not pedantic details.
  • Production Readiness — When reviewing production readiness concerns in a PBI, verify coverage of: Code Quality Tooling, Error Handling Foundation, Loading State Management, Docker Development Environment, Integration Points (see SYNC:scaffold-production-readiness for definitions).

Next Steps

MANDATORY IMPORTANT — NO EXCEPTIONS: after completing this skill, you MUST use AskUserQuestion to present these options. Do NOT skip because the task seems "simple" or "obvious" — the user decides:

  • "/story (Recommended)" — Create user stories from validated PBI
  • "/refine" — Re-refine if FAIL verdict
  • "Skip, continue manually" — user decides

Closing Reminders

MANDATORY IMPORTANT: you MUST break work into small todo tasks using TaskCreate BEFORE starting. MANDATORY IMPORTANT: you MUST validate decisions with the user via AskUserQuestion — never auto-decide. MANDATORY IMPORTANT: you MUST add a final review todo task to verify work quality. MANDATORY IMPORTANT: you MUST READ the following before starting:

<!-- SYNC:double-round-trip-review:reminder -->
  • MANDATORY IMPORTANT MUST ATTENTION execute TWO review rounds. Round 2 delegates to fresh code-reviewer sub-agent (zero prior context) — never skip or combine with Round 1. <!-- /SYNC:double-round-trip-review:reminder --> <!-- SYNC:graph-impact-analysis:reminder -->
  • IMPORTANT MUST ATTENTION run blast-radius when graph.db exists. Flag impacted files NOT in the changeset as potentially stale. <!-- /SYNC:graph-impact-analysis:reminder --> <!-- SYNC:ui-system-context:reminder -->
  • IMPORTANT MUST ATTENTION read frontend-patterns-reference, scss-styling-guide, design-system/README before any UI change. <!-- /SYNC:ui-system-context:reminder --> <!-- SYNC:critical-thinking-mindset:reminder -->
  • MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact. <!-- /SYNC:critical-thinking-mindset:reminder --> <!-- SYNC:ai-mistake-prevention:reminder -->
  • MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction. <!-- /SYNC:ai-mistake-prevention:reminder -->