EasyPlatform review-artifact

[Code Quality] Review artifact quality before handoff. Use to verify that PBIs, designs, and stories meet quality standards.

Install

Source · Clone the upstream repo:
git clone https://github.com/duc01226/EasyPlatform

Claude Code · Install into ~/.claude/skills/:
T=$(mktemp -d) && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/review-artifact" ~/.claude/skills/duc01226-easyplatform-review-artifact && rm -rf "$T"

Manifest: .claude/skills/review-artifact/SKILL.md

Source content

[IMPORTANT] Use TaskCreate to break ALL work into small tasks BEFORE starting — including tasks for each file read. This prevents context loss from long files. For simple tasks, the AI MUST ask the user whether to skip.

<!-- SYNC:critical-thinking-mindset -->

Critical Thinking Mindset — Apply critical thinking and sequential thinking. Every claim needs traced proof and >80% confidence to act. Anti-hallucination: never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence. Certainty without evidence is the root of all hallucination.

<!-- /SYNC:critical-thinking-mindset --> <!-- SYNC:ai-mistake-prevention -->

AI Mistake Prevention — Failure modes to avoid on every task:

  • Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
  • Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
  • Trace the full dependency chain after edits. Changing a definition without updating the variables and consumers derived from it leaves them stale. Always trace the full chain.
  • Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just happy path.
  • When debugging, ask "whose responsibility?" before fixing. Trace whether bug is in caller (wrong data) or callee (wrong handling). Fix at responsible layer — never patch symptom site.
  • Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
  • Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
  • Holistic-first debugging — resist nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
  • Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
  • Surface ambiguity before coding — don't pick silently. If request has multiple interpretations, present each with effort estimate and ask. Never assume all-records, file-based, or more complex path.
<!-- /SYNC:ai-mistake-prevention -->

Prerequisites: MUST READ before executing:

<!-- SYNC:evidence-based-reasoning -->

Evidence-Based Reasoning — Speculation is FORBIDDEN. Every claim needs proof.

  1. Cite file:line, grep results, or framework docs for EVERY claim
  2. Declare confidence: >80% act freely, 60-80% verify first, <60% DO NOT recommend
  3. Cross-service validation required for architectural changes
  4. "I don't have enough evidence" is valid and expected output

BLOCKED until: Evidence file path (file:line) provided; Grep search performed; 3+ similar patterns found; Confidence level stated.

Forbidden without proof: "obviously", "I think", "should be", "probably", "this is because".

If incomplete → output: "Insufficient evidence. Verified: [...]. Not verified: [...]."

<!-- /SYNC:evidence-based-reasoning --> <!-- SYNC:understand-code-first -->

Understand Code First — HARD-GATE: Do NOT write, plan, or fix until you READ existing code.

  1. Search 3+ similar patterns (grep/glob) — cite file:line evidence
  2. Read existing files in target area — understand structure, base classes, conventions
  3. Run python .claude/scripts/code_graph trace <file> --direction both --json when .code-graph/graph.db exists
  4. Map dependencies via connections or callers_of — know what depends on your target
  5. Write investigation to .ai/workspace/analysis/ for non-trivial tasks (3+ files)
  6. Re-read analysis file before implementing — never work from memory alone
  7. NEVER invent new patterns when existing ones work — match exactly or document deviation

BLOCKED until:

- [ ] Read target files
- [ ] Grep 3+ patterns
- [ ] Graph trace (if graph.db exists)
- [ ] Assumptions verified with evidence

<!-- /SYNC:understand-code-first --> <!-- SYNC:graph-impact-analysis -->

Graph Impact Analysis — When .code-graph/graph.db exists, run blast-radius --json to detect ALL files affected by changes (7 edge types: CALLS, MESSAGE_BUS, API_ENDPOINT, TRIGGERS_EVENT, PRODUCES_EVENT, TRIGGERS_COMMAND_EVENT, INHERITS). Compute gap: impacted_files - changed_files = potentially stale files. Risk: <5 Low, 5-20 Medium, >20 High. Use trace --direction downstream for deep chains on high-impact files.
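
A minimal sketch of the gap computation and risk tiers. The impactedFiles field name is an assumption about the blast-radius --json output shape; only the set logic matters:

```typescript
// Sketch only: `impactedFiles` is an assumed field name for the blast-radius --json output.
type BlastRadius = { impactedFiles: string[] };

function staleRisk(blast: BlastRadius, changedFiles: string[]) {
  const changed = new Set(changedFiles);
  // gap = impacted_files - changed_files = potentially stale files
  const potentiallyStale = blast.impactedFiles.filter((file) => !changed.has(file));
  const risk = potentiallyStale.length < 5 ? "Low" : potentiallyStale.length <= 20 ? "Medium" : "High";
  return { potentiallyStale, risk };
}
```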

<!-- /SYNC:graph-impact-analysis --> <!-- SYNC:double-round-trip-review -->

Deep Multi-Round Review — Escalating rounds. Round 1 in main session. Round 2+ and EVERY recursive re-review iteration MUST use a fresh sub-agent.

Round 1: Main-session review. Read target files, build understanding, note issues. Output baseline findings.

Round 2: MANDATORY fresh sub-agent review — see SYNC:fresh-context-review for the spawn mechanism and SYNC:review-protocol-injection for the canonical Agent prompt template. The sub-agent re-reads ALL files from scratch with ZERO Round 1 memory. It must catch:

  • Cross-cutting concerns missed in Round 1
  • Interaction bugs between changed files
  • Convention drift (new code vs existing patterns)
  • Missing pieces that should exist but don't
  • Subtle edge cases the main session rationalized away

Round 3+ (recursive after fixes): After ANY fix cycle, MANDATORY fresh sub-agent re-review. Spawn a NEW Agent tool call each iteration — never reuse Round 2's agent. Each new agent re-reads ALL files from scratch with full protocol injection. Continue until PASS or 3 fresh-subagent rounds max, then escalate to user via AskUserQuestion.

Rules:

  • NEVER declare PASS after Round 1 alone
  • NEVER reuse a sub-agent across rounds — every iteration spawns a NEW Agent call
  • Main agent READS sub-agent reports but MUST NOT filter, reinterpret, or override findings
  • Max 3 fresh-subagent rounds per review — if still FAIL, escalate via AskUserQuestion (do NOT silently loop)
  • Track round count in conversation context (session-scoped)
  • Final verdict must incorporate ALL rounds

Report must include ## Round N Findings (Fresh Sub-Agent) for every round N≥2.

<!-- /SYNC:double-round-trip-review --> <!-- SYNC:fresh-context-review -->

Fresh Sub-Agent Review — Eliminate orchestrator confirmation bias via isolated sub-agents.

Why: The main agent knows what it (or /cook) just fixed and rationalizes findings accordingly. A fresh sub-agent has ZERO memory, re-reads from scratch, and catches what the main agent dismissed. Sub-agent bias is mitigated by (1) fresh context, (2) verbatim protocol injection, (3) main agent not filtering the report.

When: Round 2 of ANY review AND every recursive re-review iteration after fixes. NOT needed when Round 1 already PASSes with zero issues.

How:

  1. Spawn a NEW Agent tool call — use code-reviewer subagent_type for code reviews, general-purpose for plan/doc/artifact reviews
  2. Inject ALL required review protocols VERBATIM into the prompt — see SYNC:review-protocol-injection for the full list and template. Never reference protocols by file path; AI compliance drops behind file-read indirection (see SYNC:shared-protocol-duplication-policy)
  3. Sub-agent re-reads ALL target files from scratch via its own tool calls — never pass file contents inline in the prompt
  4. Sub-agent writes structured report to plans/reports/{review-type}-round{N}-{date}.md
  5. Main agent reads the report, integrates findings into its own report, DOES NOT override or filter

Rules:

  • NEVER reuse a sub-agent across rounds — every iteration spawns a NEW Agent call
  • NEVER skip fresh-subagent review because "last round was clean" — every fix triggers a fresh round
  • Max 3 fresh-subagent rounds per review — escalate via AskUserQuestion if still failing; do NOT silently loop or fall back to any prior protocol
  • Track iteration count in conversation context (session-scoped, no persistent files)
<!-- /SYNC:fresh-context-review --> <!-- SYNC:review-protocol-injection -->

Review Protocol Injection — Every fresh sub-agent review prompt MUST embed 10 protocol blocks VERBATIM. The template below has ALL 10 bodies already expanded inline. Copy the template wholesale into the Agent call's prompt field at runtime, replacing only the {placeholders} in the Task / Round / Reference Docs / Target Files / Output sections with context-specific values. Do NOT touch the embedded protocol sections.

Why inline expansion: Placeholder markers would force file-read indirection at runtime. AI compliance drops significantly behind indirection (see SYNC:shared-protocol-duplication-policy). Therefore the template carries all 10 protocol bodies pre-embedded.

Subagent Type Selection

  • code-reviewer — for code reviews (reviewing source files, git diffs, implementation)
  • general-purpose — for plan / doc / artifact reviews (reviewing markdown plans, docs, specs)

Canonical Agent Call Template (Copy Verbatim)

Agent({
  description: "Fresh Round {N} review",
  subagent_type: "code-reviewer",
  prompt: `
## Task
{review-specific task — e.g., "Review all uncommitted changes for code quality" | "Review plan files under {plan-dir}" | "Review integration tests in {path}"}

## Round
Round {N}. You have ZERO memory of prior rounds. Re-read all target files from scratch via your own tool calls. Do NOT trust anything from the main agent beyond this prompt.

## Protocols (follow VERBATIM — these are non-negotiable)

### Evidence-Based Reasoning
Speculation is FORBIDDEN. Every claim needs proof.
1. Cite file:line, grep results, or framework docs for EVERY claim
2. Declare confidence: >80% act freely, 60-80% verify first, <60% DO NOT recommend
3. Cross-service validation required for architectural changes
4. "I don't have enough evidence" is valid and expected output
BLOCKED until: Evidence file path (file:line) provided; Grep search performed; 3+ similar patterns found; Confidence level stated.
Forbidden without proof: "obviously", "I think", "should be", "probably", "this is because".
If incomplete → output: "Insufficient evidence. Verified: [...]. Not verified: [...]."

### Bug Detection
MUST check categories 1-4 for EVERY review. Never skip.
1. Null Safety: Can params/returns be null? Are they guarded? Optional chaining gaps? .find() returns checked?
2. Boundary Conditions: Off-by-one (< vs <=)? Empty collections handled? Zero/negative values? Max limits?
3. Error Handling: Try-catch scope correct? Silent swallowed exceptions? Error types specific? Cleanup in finally?
4. Resource Management: Connections/streams closed? Subscriptions unsubscribed on destroy? Timers cleared? Memory bounded?
5. Concurrency (if async): Missing await? Race conditions on shared state? Stale closures? Retry storms?
6. Stack-Specific: JS: === vs ==, typeof null. C#: async void, missing using, LINQ deferred execution.
Classify: CRITICAL (crash/corrupt) → FAIL | HIGH (incorrect behavior) → FAIL | MEDIUM (edge case) → WARN | LOW (defensive) → INFO.

### Design Patterns Quality
Priority checks for every code change:
1. DRY via OOP: Same-suffix classes (*Entity, *Dto, *Service) MUST share base class. 3+ similar patterns → extract to shared abstraction.
2. Right Responsibility: Logic in LOWEST layer (Entity > Domain Service > Application Service > Controller). Never business logic in controllers.
3. SOLID: Single responsibility (one reason to change). Open-closed (extend, don't modify). Liskov (subtypes substitutable). Interface segregation (small interfaces). Dependency inversion (depend on abstractions).
4. After extraction/move/rename: Grep ENTIRE scope for dangling references. Zero tolerance.
5. YAGNI gate: NEVER recommend patterns unless 3+ occurrences exist. Don't extract for hypothetical future use.
Anti-patterns to flag: God Object, Copy-Paste inheritance, Circular Dependency, Leaky Abstraction.

### Logic & Intention Review
Verify WHAT code does matches WHY it was changed.
1. Change Intention Check: Every changed file MUST serve the stated purpose. Flag unrelated changes as scope creep.
2. Happy Path Trace: Walk through one complete success scenario through changed code.
3. Error Path Trace: Walk through one failure/edge case scenario through changed code.
4. Acceptance Mapping: If plan context available, map every acceptance criterion to a code change.
NEVER mark review PASS without completing both traces (happy + error path).

### Test Spec Verification
Map changed code to test specifications.
1. From changed files → find TC-{FEAT}-{NNN} in docs/business-features/{Service}/detailed-features/{Feature}.md Section 15.
2. Every changed code path MUST map to a corresponding TC (or flag as "needs TC").
3. New functions/endpoints/handlers → flag for test spec creation.
4. Verify TC evidence fields point to actual code (file:line, not stale references).
5. Auth changes → TC-{FEAT}-02x exist? Data changes → TC-{FEAT}-01x exist?
6. If no specs exist → log gap and recommend /tdd-spec.
NEVER skip test mapping. Untested code paths are the #1 source of production bugs.

### Fix-Layer Accountability
NEVER fix at the crash site. Trace the full flow, fix at the owning layer. The crash site is a SYMPTOM, not the cause.
MANDATORY before ANY fix:
1. Trace full data flow — Map the complete path from data origin to crash site across ALL layers (storage → backend → API → frontend → UI). Identify where bad state ENTERS, not where it CRASHES.
2. Identify the invariant owner — Which layer's contract guarantees this value is valid? Fix at the LOWEST layer that owns the invariant, not the highest layer that consumes it.
3. One fix, maximum protection — If fix requires touching 3+ files with defensive checks, you are at the wrong layer — go lower.
4. Verify no bypass paths — Confirm all data flows through the fix point. Check for direct construction skipping factories, clone/spread without re-validation, raw data not wrapped in domain models, mutations outside the model layer.
BLOCKED until: Full data flow traced (origin → crash); Invariant owner identified with file:line evidence; All access sites audited (grep count); Fix layer justified (lowest layer that protects most consumers).
Anti-patterns (REJECT): "Fix it where it crashes" (crash site ≠ cause site, trace upstream); "Add defensive checks at every consumer" (scattered defense = wrong layer); "Both fix is safer" (pick ONE authoritative layer).

### Rationalization Prevention
AI skips steps via these evasions. Recognize and reject:
- "Too simple for a plan" → Simple + wrong assumptions = wasted time. Plan anyway.
- "I'll test after" → RED before GREEN. Write/verify test first.
- "Already searched" → Show grep evidence with file:line. No proof = no search.
- "Just do it" → Still need TaskCreate. Skip depth, never skip tracking.
- "Just a small fix" → Small fix in wrong location cascades. Verify file:line first.
- "Code is self-explanatory" → Future readers need evidence trail. Document anyway.
- "Combine steps to save time" → Combined steps dilute focus. Each step has distinct purpose.

### Graph-Assisted Investigation
MANDATORY when .code-graph/graph.db exists.
HARD-GATE: MUST run at least ONE graph command on key files before concluding any investigation.
Pattern: Grep finds files → trace --direction both reveals full system flow → Grep verifies details.
- Investigation/Scout: trace --direction both on 2-3 entry files
- Fix/Debug: callers_of on buggy function + tests_for
- Feature/Enhancement: connections on files to be modified
- Code Review: tests_for on changed functions
- Blast Radius: trace --direction downstream
CLI: python .claude/scripts/code_graph {command} --json. Use --node-mode file first (10-30x less noise), then --node-mode function for detail.

### Understand Code First
HARD-GATE: Do NOT write, plan, or fix until you READ existing code.
1. Search 3+ similar patterns (grep/glob) — cite file:line evidence.
2. Read existing files in target area — understand structure, base classes, conventions.
3. Run python .claude/scripts/code_graph trace <file> --direction both --json when .code-graph/graph.db exists.
4. Map dependencies via connections or callers_of — know what depends on your target.
5. Write investigation to .ai/workspace/analysis/ for non-trivial tasks (3+ files).
6. Re-read analysis file before implementing — never work from memory alone.
7. NEVER invent new patterns when existing ones work — match exactly or document deviation.
BLOCKED until: Read target files; Grep 3+ patterns; Graph trace (if graph.db exists); Assumptions verified with evidence.

## Reference Docs (READ before reviewing)
- docs/project-reference/code-review-rules.md
- {skill-specific reference docs — e.g., integration-test-reference.md for integration-test-review; backend-patterns-reference.md for backend reviews; frontend-patterns-reference.md for frontend reviews}

## Target Files
{explicit file list OR "run git diff to see uncommitted changes" OR "read all files under {plan-dir}"}

## Output
Write a structured report to plans/reports/{review-type}-round{N}-{date}.md with sections:
- Status: PASS | FAIL
- Issue Count: {number}
- Critical Issues (with file:line evidence)
- High Priority Issues (with file:line evidence)
- Medium / Low Issues
- Cross-cutting findings

Return the report path and status to the main agent.
Every finding MUST have file:line evidence. Speculation is forbidden.
`
})

Rules

  • DO copy the template wholesale — including all 10 embedded protocol sections
  • DO replace only the {placeholders} in the Task / Round / Reference Docs / Target Files / Output sections with context-specific content
  • DO choose code-reviewer subagent_type for code reviews and general-purpose for plan / doc / artifact reviews
  • DO NOT paraphrase, summarize, or skip any protocol section
  • DO NOT pass file contents inline — the sub-agent reads via its own tool calls so it has a fresh context
  • DO NOT reference protocols by file path or tag name — the bodies are already embedded above
  • DO NOT introduce placeholder markers for the protocols — they must stay literally expanded
<!-- /SYNC:review-protocol-injection -->

OOP & DRY Enforcement: MANDATORY IMPORTANT MUST ATTENTION — flag duplicated patterns that should be extracted to a base class, generic, or helper. Classes in the same group or suffix (ex *Entity, *Dto, *Service, etc...) MUST ATTENTION inherit a common base (even if empty now — enables future shared logic and child overrides). Verify project has code linting/analyzer configured for the stack.

Quick Summary

Goal: Review an artifact (PBI, design spec, story, test spec) for completeness and quality before handoff.

Workflow:

  1. Identify — What artifact type is being reviewed
  2. Checklist — Apply type-specific quality criteria
  3. Verdict — READY or NEEDS WORK with specific items

Key Rules:

  • Use type-specific checklists
  • Every NEEDS WORK item must be actionable
  • Never block on stylistic preferences — focus on completeness

Be skeptical. Apply critical thinking and sequential thinking. Every claim needs traced proof and a stated confidence percentage (act only above 80%).

Adversarial Review Mindset (NON-NEGOTIABLE)

Default stance: SKEPTIC challenging artifact quality and completeness, not confirming presence of sections.

Presence-quality confusion trap: An artifact with all required sections LOOKS complete. But sections that exist but contain weak, ambiguous, or untestable content are worse than missing sections — they create false confidence. This section forces quality challenge beyond existence checks.

Adversarial Techniques (apply ALL before concluding)

1. Steel-Man the Alternatives. Before accepting the chosen approach in any design artifact: argue FOR the strongest rejected alternative as vigorously as possible. Would a senior domain expert seriously consider it? If yes — the artifact's dismissal needs stronger justification.

2. Assumption Stress Test. List the 3 biggest assumptions embedded in the artifact. For each: "What if this is wrong?" An artifact that breaks when 2 of its 3 core assumptions fail is fragile. Flag unaddressed failure modes.

3. Acceptance Criteria Testability. For each acceptance criterion: "Can a QA engineer write a specific automated test for this — without asking clarifying questions?" If not — the AC is ambiguous. Flag it. Vague ACs ("the feature works correctly") are NOT acceptance criteria.

4. Pre-Mortem. Assume the artifact is implemented exactly as written and the feature fails in production within 3 months. Write the most plausible failure scenario. If you can't find one, look harder — every implementation has a failure mode.

5. Unseen Alternatives. Identify 1-2 approaches NOT mentioned in the artifact. Were they genuinely not considered, or considered and excluded without documented reasoning? Missing alternatives without exclusion reasoning = incomplete analysis.

6. Contrarian Pass. Before writing any verdict, generate at least 2 sentences arguing the OPPOSITE conclusion. Then decide which argument is stronger based on evidence.

Forbidden Patterns

  • "Required sections present" → Presence ≠ quality. What's IN them?
  • "Acceptance criteria are defined" → Are they TESTABLE? Name the automated test for each.
  • "Scope is well-defined" → What is explicitly OUT of scope? If nothing is out of scope, the scope is undefined.
  • "Alternatives were considered" → Were they real alternatives, or strawmen set up to lose?
  • "Looks complete" → What specific failure mode is NOT addressed?

Anti-Bias Gate (MANDATORY before finalizing verdict)

- [ ] Steel-manned at least one rejected alternative
- [ ] Identified 3 hidden assumptions and stress-tested them
- [ ] Verified each AC is unambiguously testable (can write automated test without clarification)
- [ ] Ran pre-mortem (one concrete production failure scenario)
- [ ] Identified at least 1 unexamined alternative (not in artifact)
- [ ] Generated at least 2 sentences arguing the opposite verdict

If any box is unchecked → adversarial review incomplete. Go back.

Type-Specific Checklists

PBI Review

| # | Check | Presence | Quality Depth |
|---|-------|----------|---------------|
| 1 | Problem statement is clear — the problem being solved is described in concrete terms | Is a problem statement present? Is it 2+ sentences? | Is the problem scoped correctly? Could it be framed differently to lead to a different (simpler) solution? Are symptoms confused with root cause? |
| 2 | Acceptance criteria are testable and measurable — each AC can be verified by a test | Are ACs present? Do they use measurable language? | Can a QA engineer write an automated test for EACH AC without clarification? Are they specific enough to catch regressions? Vague ACs ("feature works correctly") are not acceptance criteria. |
| 3 | Scope is well-defined (what's in and out) — both in-scope and out-of-scope items are explicitly listed | Is an in/out scope list present? Does it have both sides? | Are out-of-scope items specific enough to prevent scope creep? Is anything ambiguously in/out? A scope that says nothing is out of scope is an undefined scope. |
| 4 | Dependencies are identified — all external dependencies the PBI relies on are listed | Is a dependencies section present? Does it list items? | Are ALL dependencies listed (technical, data, service, team)? Are "can-parallel" items truly safe to parallelize, or do they share a resource? |
| 5 | Business value is articulated — the why behind the PBI is stated in terms of user or business outcome | Is business value described? | Is the value quantified or just stated? Does it connect to a user outcome, not just a feature delivery? "Users can now do X" is better than "we implemented feature Y". |
| 6 | Priority is assigned — the PBI has an explicit priority level | Is a priority level assigned? | Is priority justified with data (RICE/MoSCoW), or arbitrary? Is it consistent with other PBIs in the same sprint? A PBI that is "high priority" without justification is unranked. |

User Story Review

| # | Check | Presence | Quality Depth |
|---|-------|----------|---------------|
| 1 | Follows GIVEN/WHEN/THEN format — the story uses the structured BDD format | Are all three parts (GIVEN, WHEN, THEN) present? | Are all 3 parts present AND meaningful? Or is GIVEN trivial ("Given a user exists")? A GIVEN that describes no precondition adds no value. |
| 2 | Is independent (not dependent on other stories) — the story can be implemented without requiring another story first | Is independence stated or inferable? | Would descoping other stories prevent this story from being implemented? Implicit dependencies are as blocking as explicit ones. |
| 3 | Is estimable (team can size it) — the team has enough information to assign story points | Is the story sized or estimable based on content? | Does the team have enough info to estimate? Is "can't estimate" a sign of missing AC? If it can't be sized, it's not ready for sprint. |
| 4 | Is small enough for one sprint — the story fits within a single sprint's capacity | Is the story sized at ≤8 story points or scoped to one sprint? | Could this be split further? Stories >8SP should always be split. A story that "could fit" in a sprint but requires multiple sub-systems is likely too large. |
| 5 | Has acceptance criteria — the story defines measurable conditions for completion | Are acceptance criteria present? | Are criteria testable? Would they catch a bug if the feature works in 9/10 cases? ACs that only describe the happy path are incomplete. |

Design Spec Review

| # | Check | Presence | Quality Depth |
|---|-------|----------|---------------|
| 1 | All component states covered (default, hover, active, disabled, error, loading) — spec defines visual behavior for all interaction states | Are all 6 states defined? | Are edge-case states (error, loading) as fully designed as the default state, or sketched? An undesigned error state will be improvised in implementation. |
| 2 | Design tokens specified (colors, spacing, typography) — specific token values are called out, not ad-hoc values | Are token references present instead of raw values? | Are tokens from the project's token system, or are new values introduced? New values outside the token system break design consistency silently. |
| 3 | Responsive behavior defined — how the component adapts across breakpoints is documented | Are breakpoint behaviors defined? | Are ALL breakpoints covered, or only desktop and mobile? Tablet-specific layouts are the most frequently omitted. Are content truncation / overflow behaviors specified? |
| 4 | Accessibility requirements noted — WCAG-relevant requirements (color contrast, keyboard nav, ARIA) are documented | Are accessibility notes present? | Are requirements specific (WCAG level, contrast ratio) or vague ("should be accessible")? Vague accessibility notes produce non-compliant implementations. Is keyboard navigation flow defined? |
| 5 | Interaction patterns documented — animations, transitions, and user interaction flows are specified | Are interaction behaviors described? | Are timing and easing values specified? Is behavior defined for both forward and reverse interactions (e.g., open AND close)? Unspecified interactions are implemented inconsistently. |

Test Spec Review

| # | Check | Presence | Quality Depth |
|---|-------|----------|---------------|
| 1 | Coverage adequate for acceptance criteria — each AC maps to at least one test case | Is there a test for each AC? | Is coverage 1:1 (each AC has one test) or is each AC covered by multiple test angles (boundary, happy path, error)? Single-test-per-AC misses edge cases. |
| 2 | Edge cases included — boundary conditions and atypical inputs are tested | Are edge case tests present? | Are edge cases derived from the AC boundaries (min/max values, empty inputs, nulls), or are they generic? Edge cases that aren't tied to actual boundaries are noise. |
| 3 | Test data requirements specified — the data setup needed to run each test is documented | Are test data requirements stated per test? | Is test data specific enough to create fixtures without guessing? Vague data requirements ("a valid user") will cause test setup divergence across environments. |
| 4 | GIVEN/WHEN/THEN format used — tests follow the structured BDD format | Are all tests written in GIVEN/WHEN/THEN? | Are the THEN clauses assertions on observable outcomes, or on internal state? Tests asserting on internal state are brittle and break on refactoring. |
| 5 | Negative test cases included — tests cover rejection, failure, and unauthorized access scenarios | Are negative tests present? | Do negative tests assert on specific error conditions (error code, message) or just that an error occurred? Unspecific negative tests don't verify correct failure behavior. |
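
A minimal sketch, as a Vitest-style TypeScript test, of what these checks look for. The TC id, domain rule, and error code are hypothetical, and the system under test is stubbed inline so the example is self-contained:

```typescript
// Illustrative only: TC id, domain rule, and error code are hypothetical.
import { describe, expect, it } from "vitest";

type Leave = { employeeId: string; from: string; to: string };

// Stand-in for the real system under test, so this sketch runs on its own.
function submitLeaveRequest(request: Leave, existing: Leave[]): { ok: boolean; errorCode?: string } {
  const overlaps = existing.some(
    (l) => l.employeeId === request.employeeId && request.from <= l.to && request.to >= l.from
  );
  return overlaps ? { ok: false, errorCode: "OVERLAPPING_LEAVE" } : { ok: true };
}

describe("TC-LEAVE-021: reject overlapping leave request", () => {
  it("GIVEN an approved leave 2024-03-01..2024-03-05, WHEN submitting 2024-03-04..2024-03-06, THEN it fails with OVERLAPPING_LEAVE", () => {
    // GIVEN: concrete, fixture-ready test data (not "a valid user")
    const existing: Leave[] = [{ employeeId: "E-001", from: "2024-03-01", to: "2024-03-05" }];

    // WHEN: a single observable action
    const result = submitLeaveRequest({ employeeId: "E-001", from: "2024-03-04", to: "2024-03-06" }, existing);

    // THEN: assert the specific error condition, not just "an error occurred"
    expect(result.ok).toBe(false);
    expect(result.errorCode).toBe("OVERLAPPING_LEAVE");
  });
});
```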

Readability Checklist (MUST evaluate)

Before approving, verify the code is easy to read, maintain, and understand (see the sketch after this list):

  • Schema visibility — If a function computes a data structure (object, map, config), a comment should show the output shape so readers don't have to trace the code
  • Non-obvious data flows — If data transforms through multiple steps (A → B → C), a brief comment should explain the pipeline
  • Self-documenting signatures — Function params should explain their role; flag unused params
  • Magic values — Unexplained numbers/strings should be named constants or have inline rationale
  • Naming clarity — Variables/functions should reveal intent without reading the implementation
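
A minimal sketch, with hypothetical names and values, of what these checks look for in practice:

```typescript
// Hypothetical example: names and values are illustrative only.

// Magic value turned into a named constant with inline rationale.
const MAX_RETRY_ATTEMPTS = 3; // hypothetical gateway rejects the idempotency key after 3 retries

// Schema visibility: the comment shows the output shape so readers don't trace the code.
// Returns: { [employeeId: string]: { approved: number; pending: number } }
function summarizeLeaveByEmployee(requests: { employeeId: string; status: "Approved" | "Pending" }[]) {
  const summary: Record<string, { approved: number; pending: number }> = {};
  for (const request of requests) {
    const entry = (summary[request.employeeId] ??= { approved: 0, pending: 0 });
    if (request.status === "Approved") entry.approved += 1;
    else entry.pending += 1;
  }
  return summary;
}
```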

Output Format

## Artifact Review

**Artifact Type:** [PBI | Story | Design | Test Spec]
**Artifact:** [Reference/title]
**Date:** {date}
**Verdict:** READY | NEEDS WORK

### Checklist Results
- [pass] [Item] — [evidence]
- [fail] [Item] — [what's missing/wrong]

### Action Items (if NEEDS WORK)
1. [Specific actionable item]

Round 2+: Fresh Sub-Agent Re-Review (MANDATORY)

Protocol: SYNC:double-round-trip-review + SYNC:fresh-context-review + SYNC:review-protocol-injection (all inlined above in this file).

After completing Round 1 evaluation, spawn a fresh general-purpose sub-agent for Round 2 using the canonical Agent template from SYNC:review-protocol-injection above. Artifact reviews (PBI, story, design spec, test spec) are NOT code — use subagent_type: "general-purpose", not "code-reviewer". When constructing the Agent call prompt:

  1. Copy the Agent call shape from the SYNC:review-protocol-injection template verbatim
  2. Set subagent_type: "general-purpose"
  3. Embed the full verbatim body of these SYNC blocks (inlined above in this skill file): SYNC:evidence-based-reasoning, SYNC:rationalization-prevention, SYNC:understand-code-first (omit code-specific protocols like SYNC:bug-detection, SYNC:design-patterns-quality, SYNC:fix-layer-accountability, which are not applicable to artifact files)
  4. Set the Task as "Review the {artifact-type} artifact for completeness and quality. Focus on: implicit assumptions, missing coverage of edge cases / error scenarios, unverified cross-references, completeness gaps only visible on second reading, whether acceptance criteria are truly testable and measurable."
  5. Set Target Files as the explicit artifact file path(s)
  6. Set the report path as plans/reports/review-artifact-round{N}-{date}.md

After sub-agent returns:

  1. Read the sub-agent's report
  2. Integrate findings as ## Round {N} Findings (Fresh Sub-Agent) in the main report — DO NOT filter or override
  3. If NEEDS WORK: fix artifact issues, then spawn a NEW Round N+1 fresh sub-agent (new Agent call — never reuse Round 2's agent)
  4. Max 3 fresh rounds — escalate to user via AskUserQuestion if still failing after 3 rounds
  5. Final verdict must incorporate findings from ALL rounds

IMPORTANT Task Planning Notes (MUST FOLLOW)

  • Always plan and break work into many small todo tasks using TaskCreate
  • Always add a final review todo task to verify work quality and identify fixes/enhancements

Systematic Review Protocol (for 10+ artifacts)

When reviewing many artifacts at once, categorize by type, fire parallel code-reviewer sub-agents per category, then synchronize findings. See review-changes/SKILL.md § "Systematic Review Protocol" for the full 4-step protocol (Categorize → Parallel Sub-Agents → Synchronize → Holistic Assessment).


AI Agent Integrity Gate (NON-NEGOTIABLE)

Completion ≠ Correctness. Before reporting ANY work done, prove it:

  1. Grep every removed name. Extraction/rename/delete touched N files? Grep confirms 0 dangling refs across ALL file types.
  2. Ask WHY before changing. Existing values are intentional until proven otherwise. No "fix" without traced rationale.
  3. Verify ALL outputs. One build passing ≠ all builds passing. Check every affected stack.
  4. Evaluate pattern fit. Copying nearby code? Verify preconditions match — same scope, lifetime, base class, constraints.
  5. New artifact = wired artifact. Created something? Prove it's registered, imported, and reachable by all consumers.

Closing Reminders

  • IMPORTANT: break work into small todo tasks using TaskCreate BEFORE starting
  • IMPORTANT: search the codebase for 3+ similar patterns before creating new code
  • IMPORTANT: cite file:line evidence for every claim (confidence >80% to act)
  • IMPORTANT: add a final review todo task to verify work quality
  • IMPORTANT: execute two review rounds (Round 1: understand, Round 2: catch missed issues) <!-- SYNC:understand-code-first:reminder -->
  • IMPORTANT: search 3+ existing patterns and read code BEFORE any modification. Run graph trace when graph.db exists. <!-- /SYNC:understand-code-first:reminder --> <!-- SYNC:graph-impact-analysis:reminder -->
  • IMPORTANT: run blast-radius when graph.db exists. Flag impacted files NOT in the changeset as potentially stale. <!-- /SYNC:graph-impact-analysis:reminder --> <!-- SYNC:critical-thinking-mindset:reminder -->
  • MUST apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present a guess as fact. <!-- /SYNC:critical-thinking-mindset:reminder --> <!-- SYNC:ai-mistake-prevention:reminder -->
  • MUST apply AI mistake prevention — holistic-first debugging, fix at the responsible layer, surface ambiguity before coding, re-read files after compaction. <!-- /SYNC:ai-mistake-prevention:reminder -->