EasyPlatform code-review
[Code Quality] Use when receiving code review feedback (especially if unclear or technically questionable), when completing tasks requiring review before proceeding, or before making completion claims. Covers receiving feedback with technical rigor, requesting reviews via code-reviewer subagent, and verification gates requiring evidence before status claims.
git clone https://github.com/duc01226/EasyPlatform
T=$(mktemp -d) && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/code-review" ~/.claude/skills/duc01226-easyplatform-code-review && rm -rf "$T"
.claude/skills/code-review/SKILL.md

<!-- SYNC:critical-thinking-mindset -->[IMPORTANT] Use TaskCreate to break ALL work into small tasks BEFORE starting — including tasks for each file read. This prevents context loss from long files. For simple tasks, AI MUST ATTENTION ask user whether to skip.
<!-- /SYNC:critical-thinking-mindset --> <!-- SYNC:ai-mistake-prevention -->Critical Thinking Mindset — Apply critical thinking, sequential thinking. Every claim needs traced proof, confidence >80% to act. Anti-hallucination: Never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, stay skeptical of your own confidence — certainty without evidence is the root of all hallucination.
<!-- /SYNC:ai-mistake-prevention --> <!-- SYNC:evidence-based-reasoning -->AI Mistake Prevention — Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether bug is in caller (wrong data) or callee (wrong handling). Fix at responsible layer — never patch symptom site.
- Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging — resist nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding — don't pick silently. If request has multiple interpretations, present each with effort estimate and ask. Never assume all-records, file-based, or more complex path.
<!-- /SYNC:evidence-based-reasoning --> <!-- SYNC:design-patterns-quality -->Evidence-Based Reasoning — Speculation is FORBIDDEN. Every claim needs proof.
- Cite file:line, grep results, or framework docs for EVERY claim
- Declare confidence: >80% act freely, 60-80% verify first, <60% DO NOT recommend
- Cross-service validation required for architectural changes
- "I don't have enough evidence" is valid and expected output
BLOCKED until:
- [ ] Evidence file path (file:line)
- [ ] Grep search performed
- [ ] 3+ similar patterns found
- [ ] Confidence level stated
Forbidden without proof: "obviously", "I think", "should be", "probably", "this is because". If incomplete → output:
"Insufficient evidence. Verified: [...]. Not verified: [...]."
<!-- /SYNC:design-patterns-quality --> <!-- SYNC:double-round-trip-review -->Design Patterns Quality — Priority checks for every code change:
- DRY via OOP: Same-suffix classes (*Entity, *Dto, *Service) MUST ATTENTION share base class. 3+ similar patterns → extract to shared abstraction.
- Right Responsibility: Logic in LOWEST layer (Entity > Domain Service > Application Service > Controller). Never business logic in controllers.
- SOLID: Single responsibility (one reason to change). Open-closed (extend, don't modify). Liskov (subtypes substitutable). Interface segregation (small interfaces). Dependency inversion (depend on abstractions).
- After extraction/move/rename: Grep ENTIRE scope for dangling references. Zero tolerance.
- YAGNI gate: NEVER recommend patterns unless 3+ occurrences exist. Don't extract for hypothetical future use.
Anti-patterns to flag: God Object, Copy-Paste inheritance, Circular Dependency, Leaky Abstraction.
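The same-suffix base-class rule can be sketched as follows — a hypothetical illustration (BaseDto, UserDto, OrderDto are invented names, not EasyPlatform classes):

```javascript
// Hypothetical sketch: every *Dto inherits a common base, even if the base
// is nearly empty today — it enables shared logic and overrides later.
class BaseDto {
  constructor() {
    // Shared audit fields live once here, not copy-pasted into every Dto.
    this.createdDate = null;
    this.updatedDate = null;
  }
}

class UserDto extends BaseDto {
  constructor(id, email) {
    super();
    this.id = id;
    this.email = email;
  }
}

class OrderDto extends BaseDto {
  constructor(id, total) {
    super();
    this.id = id;
    this.total = total;
  }
}

// One instanceof check now covers the whole *Dto family:
console.log(new UserDto("u1", "a@b.com") instanceof BaseDto); // true
```

Even an empty base is cheap insurance: when shared validation or audit logic arrives later, it lands in one place instead of N copies.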
<!-- /SYNC:double-round-trip-review --> <!-- SYNC:fresh-context-review -->Deep Multi-Round Review — Escalating rounds. Round 1 in main session. Round 2+ and EVERY recursive re-review iteration MUST use a fresh sub-agent.
Round 1: Main-session review. Read target files, build understanding, note issues. Output baseline findings.
Round 2: MANDATORY fresh sub-agent review — see SYNC:fresh-context-review for the spawn mechanism and SYNC:review-protocol-injection for the canonical Agent prompt template. The sub-agent re-reads ALL files from scratch with ZERO Round 1 memory. It must catch:
- Cross-cutting concerns missed in Round 1
- Interaction bugs between changed files
- Convention drift (new code vs existing patterns)
- Missing pieces that should exist but don't
- Subtle edge cases the main session rationalized away
Round 3+ (recursive after fixes): After ANY fix cycle, MANDATORY fresh sub-agent re-review. Spawn a NEW Agent tool call each iteration — never reuse Round 2's agent. Each new agent re-reads ALL files from scratch with full protocol injection. Continue until PASS or 3 fresh-subagent rounds max, then escalate to user via AskUserQuestion.
Rules:
- NEVER declare PASS after Round 1 alone
- NEVER reuse a sub-agent across rounds — every iteration spawns a NEW Agent call
- Main agent READS sub-agent reports but MUST NOT filter, reinterpret, or override findings
- Max 3 fresh-subagent rounds per review — if still FAIL, escalate via AskUserQuestion (do NOT silently loop)
- Track round count in conversation context (session-scoped)
- Final verdict must incorporate ALL rounds
Report must include ## Round N Findings (Fresh Sub-Agent) for every round N≥2.
<!-- /SYNC:fresh-context-review --> <!-- SYNC:review-protocol-injection -->Fresh Sub-Agent Review — Eliminate orchestrator confirmation bias via isolated sub-agents.
Why: The main agent knows what it (or /cook) just fixed and rationalizes findings accordingly. A fresh sub-agent has ZERO memory, re-reads from scratch, and catches what the main agent dismissed. Sub-agent bias is mitigated by (1) fresh context, (2) verbatim protocol injection, (3) main agent not filtering the report.
When: Round 2 of ANY review AND every recursive re-review iteration after fixes. NOT needed when Round 1 already PASSes with zero issues.
How:
- Spawn a NEW Agent tool call — use subagent_type code-reviewer for code reviews, general-purpose for plan/doc/artifact reviews
- Inject ALL required review protocols VERBATIM into the prompt — see SYNC:review-protocol-injection for the full list and template. Never reference protocols by file path; AI compliance drops behind file-read indirection (see SYNC:shared-protocol-duplication-policy)
- Sub-agent re-reads ALL target files from scratch via its own tool calls — never pass file contents inline in the prompt
- Sub-agent writes structured report to plans/reports/{review-type}-round{N}-{date}.md
- Main agent reads the report, integrates findings into its own report, DOES NOT override or filter
Rules:
- NEVER reuse a sub-agent across rounds — every iteration spawns a NEW Agent call
- NEVER skip fresh-subagent review because "last round was clean" — every fix triggers a fresh round
- Max 3 fresh-subagent rounds per review — escalate via AskUserQuestion if still failing; do NOT silently loop or fall back to any prior protocol
- Track iteration count in conversation context (session-scoped, no persistent files)
Review Protocol Injection — Every fresh sub-agent review prompt MUST embed 10 protocol blocks VERBATIM. The template below has ALL 10 bodies already expanded inline. Copy the template wholesale into the Agent call's prompt field at runtime, replacing only the {placeholders} in Task / Round / Reference Docs / Target Files / Output sections with context-specific values. Do NOT touch the embedded protocol sections.
Why inline expansion: Placeholder markers would force file-read indirection at runtime. AI compliance drops significantly behind indirection (see SYNC:shared-protocol-duplication-policy). Therefore the template carries all 10 protocol bodies pre-embedded.
Subagent Type Selection
- code-reviewer — for code reviews (reviewing source files, git diffs, implementation)
- general-purpose — for plan / doc / artifact reviews (reviewing markdown plans, docs, specs)
Canonical Agent Call Template (Copy Verbatim)
```
Agent({
  description: "Fresh Round {N} review",
  subagent_type: "code-reviewer",
  prompt: `
## Task
{review-specific task — e.g., "Review all uncommitted changes for code quality" | "Review plan files under {plan-dir}" | "Review integration tests in {path}"}

## Round
Round {N}. You have ZERO memory of prior rounds. Re-read all target files from scratch via your own tool calls. Do NOT trust anything from the main agent beyond this prompt.

## Protocols (follow VERBATIM — these are non-negotiable)

### Evidence-Based Reasoning
Speculation is FORBIDDEN. Every claim needs proof.
1. Cite file:line, grep results, or framework docs for EVERY claim
2. Declare confidence: >80% act freely, 60-80% verify first, <60% DO NOT recommend
3. Cross-service validation required for architectural changes
4. "I don't have enough evidence" is valid and expected output
BLOCKED until: Evidence file path (file:line) provided; Grep search performed; 3+ similar patterns found; Confidence level stated.
Forbidden without proof: "obviously", "I think", "should be", "probably", "this is because".
If incomplete → output: "Insufficient evidence. Verified: [...]. Not verified: [...]."

### Bug Detection
MUST check categories 1-4 for EVERY review. Never skip.
1. Null Safety: Can params/returns be null? Are they guarded? Optional chaining gaps? .find() returns checked?
2. Boundary Conditions: Off-by-one (< vs <=)? Empty collections handled? Zero/negative values? Max limits?
3. Error Handling: Try-catch scope correct? Silent swallowed exceptions? Error types specific? Cleanup in finally?
4. Resource Management: Connections/streams closed? Subscriptions unsubscribed on destroy? Timers cleared? Memory bounded?
5. Concurrency (if async): Missing await? Race conditions on shared state? Stale closures? Retry storms?
6. Stack-Specific: JS: === vs ==, typeof null. C#: async void, missing using, LINQ deferred execution.
Classify: CRITICAL (crash/corrupt) → FAIL | HIGH (incorrect behavior) → FAIL | MEDIUM (edge case) → WARN | LOW (defensive) → INFO.

### Design Patterns Quality
Priority checks for every code change:
1. DRY via OOP: Same-suffix classes (*Entity, *Dto, *Service) MUST share base class. 3+ similar patterns → extract to shared abstraction.
2. Right Responsibility: Logic in LOWEST layer (Entity > Domain Service > Application Service > Controller). Never business logic in controllers.
3. SOLID: Single responsibility (one reason to change). Open-closed (extend, don't modify). Liskov (subtypes substitutable). Interface segregation (small interfaces). Dependency inversion (depend on abstractions).
4. After extraction/move/rename: Grep ENTIRE scope for dangling references. Zero tolerance.
5. YAGNI gate: NEVER recommend patterns unless 3+ occurrences exist. Don't extract for hypothetical future use.
Anti-patterns to flag: God Object, Copy-Paste inheritance, Circular Dependency, Leaky Abstraction.

### Logic & Intention Review
Verify WHAT code does matches WHY it was changed.
1. Change Intention Check: Every changed file MUST serve the stated purpose. Flag unrelated changes as scope creep.
2. Happy Path Trace: Walk through one complete success scenario through changed code.
3. Error Path Trace: Walk through one failure/edge case scenario through changed code.
4. Acceptance Mapping: If plan context available, map every acceptance criterion to a code change.
NEVER mark review PASS without completing both traces (happy + error path).

### Test Spec Verification
Map changed code to test specifications.
1. From changed files → find TC-{FEAT}-{NNN} in docs/business-features/{Service}/detailed-features/{Feature}.md Section 15.
2. Every changed code path MUST map to a corresponding TC (or flag as "needs TC").
3. New functions/endpoints/handlers → flag for test spec creation.
4. Verify TC evidence fields point to actual code (file:line, not stale references).
5. Auth changes → TC-{FEAT}-02x exist? Data changes → TC-{FEAT}-01x exist?
6. If no specs exist → log gap and recommend /tdd-spec.
NEVER skip test mapping. Untested code paths are the #1 source of production bugs.

### Fix-Layer Accountability
NEVER fix at the crash site. Trace the full flow, fix at the owning layer. The crash site is a SYMPTOM, not the cause.
MANDATORY before ANY fix:
1. Trace full data flow — Map the complete path from data origin to crash site across ALL layers (storage → backend → API → frontend → UI). Identify where bad state ENTERS, not where it CRASHES.
2. Identify the invariant owner — Which layer's contract guarantees this value is valid? Fix at the LOWEST layer that owns the invariant, not the highest layer that consumes it.
3. One fix, maximum protection — If fix requires touching 3+ files with defensive checks, you are at the wrong layer — go lower.
4. Verify no bypass paths — Confirm all data flows through the fix point. Check for direct construction skipping factories, clone/spread without re-validation, raw data not wrapped in domain models, mutations outside the model layer.
BLOCKED until: Full data flow traced (origin → crash); Invariant owner identified with file:line evidence; All access sites audited (grep count); Fix layer justified (lowest layer that protects most consumers).
Anti-patterns (REJECT): "Fix it where it crashes" (crash site ≠ cause site, trace upstream); "Add defensive checks at every consumer" (scattered defense = wrong layer); "Both fix is safer" (pick ONE authoritative layer).

### Rationalization Prevention
AI skips steps via these evasions. Recognize and reject:
- "Too simple for a plan" → Simple + wrong assumptions = wasted time. Plan anyway.
- "I'll test after" → RED before GREEN. Write/verify test first.
- "Already searched" → Show grep evidence with file:line. No proof = no search.
- "Just do it" → Still need TaskCreate. Skip depth, never skip tracking.
- "Just a small fix" → Small fix in wrong location cascades. Verify file:line first.
- "Code is self-explanatory" → Future readers need evidence trail. Document anyway.
- "Combine steps to save time" → Combined steps dilute focus. Each step has distinct purpose.

### Graph-Assisted Investigation
MANDATORY when .code-graph/graph.db exists. HARD-GATE: MUST run at least ONE graph command on key files before concluding any investigation.
Pattern: Grep finds files → trace --direction both reveals full system flow → Grep verifies details.
- Investigation/Scout: trace --direction both on 2-3 entry files
- Fix/Debug: callers_of on buggy function + tests_for
- Feature/Enhancement: connections on files to be modified
- Code Review: tests_for on changed functions
- Blast Radius: trace --direction downstream
CLI: python .claude/scripts/code_graph {command} --json. Use --node-mode file first (10-30x less noise), then --node-mode function for detail.

### Understand Code First
HARD-GATE: Do NOT write, plan, or fix until you READ existing code.
1. Search 3+ similar patterns (grep/glob) — cite file:line evidence.
2. Read existing files in target area — understand structure, base classes, conventions.
3. Run python .claude/scripts/code_graph trace <file> --direction both --json when .code-graph/graph.db exists.
4. Map dependencies via connections or callers_of — know what depends on your target.
5. Write investigation to .ai/workspace/analysis/ for non-trivial tasks (3+ files).
6. Re-read analysis file before implementing — never work from memory alone.
7. NEVER invent new patterns when existing ones work — match exactly or document deviation.
BLOCKED until: Read target files; Grep 3+ patterns; Graph trace (if graph.db exists); Assumptions verified with evidence.

## Reference Docs (READ before reviewing)
- docs/project-reference/code-review-rules.md
- {skill-specific reference docs — e.g., integration-test-reference.md for integration-test-review; backend-patterns-reference.md for backend reviews; frontend-patterns-reference.md for frontend reviews}

## Target Files
{explicit file list OR "run git diff to see uncommitted changes" OR "read all files under {plan-dir}"}

## Output
Write a structured report to plans/reports/{review-type}-round{N}-{date}.md with sections:
- Status: PASS | FAIL
- Issue Count: {number}
- Critical Issues (with file:line evidence)
- High Priority Issues (with file:line evidence)
- Medium / Low Issues
- Cross-cutting findings
Return the report path and status to the main agent. Every finding MUST have file:line evidence. Speculation is forbidden.
`
})
```
Rules
- DO copy the template wholesale — including all 10 embedded protocol sections
- DO replace only the {placeholders} in Task / Round / Reference Docs / Target Files / Output sections with context-specific content
- DO choose subagent_type code-reviewer for code reviews and general-purpose for plan / doc / artifact reviews
- DO NOT paraphrase, summarize, or skip any protocol section
- DO NOT pass file contents inline — the sub-agent reads via its own tool calls so it has a fresh context
- DO NOT reference protocols by file path or tag name — the bodies are already embedded above
- DO NOT introduce placeholder markers for the protocols — they must stay literally expanded
docs/project-reference/domain-entities-reference.md — Domain entity catalog, relationships, cross-service sync (read when task involves business entities/models). Content auto-injected by hook — check for [Injected: ...] header before reading.
Critical Purpose: Ensure quality — no flaws, no bugs, no missing updates, no stale content. Verify both code AND documentation.
External Memory: For complex or lengthy work (research, analysis, scan, review), write intermediate findings and final results to a report file in plans/reports/ — prevents context loss and serves as deliverable.
<!-- SYNC:rationalization-prevention -->Evidence Gate: MANDATORY IMPORTANT MUST ATTENTION — every claim, finding, and recommendation requires file:line proof or traced evidence with confidence percentage (>80% to act, <80% must verify first).
<!-- /SYNC:rationalization-prevention --> <!-- SYNC:logic-and-intention-review -->Rationalization Prevention — AI skips steps via these evasions. Recognize and reject:
| Evasion | Rebuttal |
|---|---|
| "Too simple for a plan" | Simple + wrong assumptions = wasted time. Plan anyway. |
| "I'll test after" | RED before GREEN. Write/verify test first. |
| "Already searched" | Show grep evidence with file:line. No proof = no search. |
| "Just do it" | Still need TaskCreate. Skip depth, never skip tracking. |
| "Just a small fix" | Small fix in wrong location cascades. Verify file:line first. |
| "Code is self-explanatory" | Future readers need evidence trail. Document anyway. |
| "Combine steps to save time" | Combined steps dilute focus. Each step has distinct purpose. |
<!-- /SYNC:logic-and-intention-review --> <!-- SYNC:bug-detection -->Logic & Intention Review — Verify WHAT code does matches WHY it was changed.
- Change Intention Check: Every changed file MUST ATTENTION serve the stated purpose. Flag unrelated changes as scope creep.
- Happy Path Trace: Walk through one complete success scenario through changed code
- Error Path Trace: Walk through one failure/edge case scenario through changed code
- Acceptance Mapping: If plan context available, map every acceptance criterion to a code change
NEVER mark review PASS without completing both traces (happy + error path).
<!-- /SYNC:bug-detection --> <!-- SYNC:test-spec-verification -->Bug Detection — MUST ATTENTION check categories 1-4 for EVERY review. Never skip.
- Null Safety: Can params/returns be null? Are they guarded? Optional chaining gaps? .find() returns checked?
- Boundary Conditions: Off-by-one (< vs <=)? Empty collections handled? Zero/negative values? Max limits?
- Error Handling: Try-catch scope correct? Silent swallowed exceptions? Error types specific? Cleanup in finally?
- Resource Management: Connections/streams closed? Subscriptions unsubscribed on destroy? Timers cleared? Memory bounded?
- Concurrency (if async): Missing await? Race conditions on shared state? Stale closures? Retry storms?
- Stack-Specific: JS: === vs ==, typeof null. C#: async void, missing using, LINQ deferred execution.
Classify: CRITICAL (crash/corrupt) → FAIL | HIGH (incorrect behavior) → FAIL | MEDIUM (edge case) → WARN | LOW (defensive) → INFO
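A few of these categories can be illustrated with minimal, hypothetical JavaScript snippets (the data and function names are invented, not project code):

```javascript
// 1. Null safety: Array.prototype.find returns undefined on no match —
// guard before dereferencing instead of assuming a hit.
const users = [{ id: "a", name: "Ann" }];
const found = users.find(u => u.id === "zz");
const displayName = found?.name ?? "unknown"; // "unknown", no crash

// 2. Boundary conditions: handle the empty collection explicitly.
function lastIndex(arr) {
  return arr.length === 0 ? -1 : arr.length - 1;
}

// 3. Stack-specific JS traps from the checklist:
console.log(0 == "");     // true  — loose equality coerces; prefer ===
console.log(typeof null); // "object" — typeof cannot detect null
```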
<!-- /SYNC:test-spec-verification --> <!-- SYNC:fix-layer-accountability -->Test Spec Verification — Map changed code to test specifications.
- From changed files → find TC-{FEAT}-{NNN} in docs/business-features/{Service}/detailed-features/{Feature}.md Section 15
- Every changed code path MUST ATTENTION map to a corresponding TC (or flag as "needs TC")
- New functions/endpoints/handlers → flag for test spec creation
- Verify TC evidence fields point to actual code (file:line, not stale references)
- Auth changes → TC-{FEAT}-02x exist? Data changes → TC-{FEAT}-01x exist?
- If no specs exist → log gap and recommend /tdd-spec
NEVER skip test mapping. Untested code paths are the #1 source of production bugs.
<!-- /SYNC:fix-layer-accountability -->Fix-Layer Accountability — NEVER fix at the crash site. Trace the full flow, fix at the owning layer.
AI default behavior: see error at Place A → fix Place A. This is WRONG. The crash site is a SYMPTOM, not the cause.
MANDATORY before ANY fix:
- Trace full data flow — Map the complete path from data origin to crash site across ALL layers (storage → backend → API → frontend → UI). Identify where the bad state ENTERS, not where it CRASHES.
- Identify the invariant owner — Which layer's contract guarantees this value is valid? That layer is responsible. Fix at the LOWEST layer that owns the invariant — not the highest layer that consumes it.
- One fix, maximum protection — Ask: "If I fix here, does it protect ALL downstream consumers with ONE change?" If fix requires touching 3+ files with defensive checks, you are at the wrong layer — go lower.
- Verify no bypass paths — Confirm all data flows through the fix point. Check for: direct construction skipping factories, clone/spread without re-validation, raw data not wrapped in domain models, mutations outside the model layer.
BLOCKED until:
- [ ] Full data flow traced (origin → crash)
- [ ] Invariant owner identified with file:line evidence
- [ ] All access sites audited (grep count)
- [ ] Fix layer justified (lowest layer that protects most consumers)
Anti-patterns (REJECT these):
- "Fix it where it crashes" — Crash site ≠ cause site. Trace upstream.
- "Add defensive checks at every consumer" — Scattered defense = wrong layer. One authoritative fix > many scattered guards.
- "Both fix is safer" — Pick ONE authoritative layer. Redundant checks across layers send mixed signals about who owns the invariant.
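A minimal sketch of the principle, with invented names (makeOrder, renderTotal — not project code): the factory owns the "total is numeric" invariant, so one upstream fix protects every consumer, whereas guarding at the crash site only patches the symptom.

```javascript
// SYMPTOM patch (wrong layer): defensive check repeated at every consumer.
function renderTotalDefensive(order) {
  return (Number(order.total) || 0).toFixed(2); // scattered guard
}

// CAUSE fix (right layer): the factory is the invariant owner —
// bad state is rejected where it ENTERS, not where it CRASHES.
function makeOrder(raw) {
  const total = Number(raw.total);
  if (Number.isNaN(total)) throw new Error("order.total must be numeric");
  return { ...raw, total };
}

function renderTotal(order) {
  return order.total.toFixed(2); // safe: invariant guaranteed upstream
}

console.log(renderTotal(makeOrder({ id: "o1", total: "19.5" }))); // "19.50"
```

If makeOrder is the only way orders are constructed (no clone/spread bypass), every downstream renderer stays guard-free.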
OOP & DRY Enforcement: MANDATORY IMPORTANT MUST ATTENTION — flag duplicated patterns that should be extracted to a base class, generic, or helper. Classes in the same group or suffix (e.g. *Entity, *Dto, *Service) MUST ATTENTION inherit a common base (even if empty now — enables future shared logic and child overrides). Verify project has code linting/analyzer configured for the stack.
Quick Summary
Goal: Ensure technical correctness through three practices: receiving feedback with verification over performative agreement, requesting systematic reviews via code-reviewer subagent, and enforcing verification gates before completion claims.
MANDATORY IMPORTANT MUST ATTENTION Plan ToDo Task to READ the following project-specific reference docs:
- docs/project-reference/code-review-rules.md — anti-patterns, review checklists, quality standards (READ FIRST)
- docs/project-reference/integration-test-reference.md — Integration test patterns, fixture setup, seeder conventions, lessons learned (MUST READ before reviewing/writing integration tests)
- backend-patterns-reference.md — backend CQRS, validation, entity patterns
- frontend-patterns-reference.md — component hierarchy, store, forms patterns
- docs/project-reference/design-system/README.md — design tokens, component inventory, icons
If files not found, search for: project coding standards, architecture documentation.
Workflow:
- Create Review Report — Initialize report file in plans/reports/code-review-{date}-{slug}.md
- Phase 1: File-by-File — Review each file, update report with issues (naming, typing, magic numbers, responsibility)
- Phase 2: Holistic Review — Re-read accumulated report, assess overall approach, architecture, duplication
- Phase 3: Final Result — Update report with overall assessment, critical issues, recommendations
Key Rules:
- Report-Driven: Build report incrementally, re-read for big picture
- Two-Phase: Individual file review → holistic assessment of accumulated findings
- No Performative Agreement: Technical evaluation, not social comfort ("You're right!" banned)
- Verification Gates: Evidence required before any completion claims (tests pass, build succeeds)
Code Review
Three practices: (1) Receiving feedback with technical rigor, (2) Requesting systematic reviews via code-reviewer subagent, (3) Enforcing verification gates before completion claims.
<!-- SYNC:graph-assisted-investigation --><!-- /SYNC:graph-assisted-investigation --> <!-- SYNC:subagent-return-contract -->Graph-Assisted Investigation — MANDATORY when .code-graph/graph.db exists.
HARD-GATE: MUST ATTENTION run at least ONE graph command on key files before concluding any investigation.
Pattern: Grep finds files → trace --direction both reveals full system flow → Grep verifies details
| Task | Minimum Graph Action |
|---|---|
| Investigation/Scout | trace --direction both on 2-3 entry files |
| Fix/Debug | callers_of on buggy function + tests_for |
| Feature/Enhancement | connections on files to be modified |
| Code Review | tests_for on changed functions |
| Blast Radius | trace --direction downstream |
CLI: python .claude/scripts/code_graph {command} --json. Use --node-mode file first (10-30x less noise), then --node-mode function for detail.
<!-- /SYNC:subagent-return-contract -->Sub-Agent Return Contract — When this skill spawns a sub-agent, the sub-agent MUST return ONLY this structure. Main agent reads only this summary — NEVER requests full sub-agent output inline.
```
## Sub-Agent Result: [skill-name]
Status: ✅ PASS | ⚠️ PARTIAL | ❌ FAIL
Confidence: [0-100]%
### Findings (Critical/High only — max 10 bullets)
- [severity] [file:line] [finding]
### Actions Taken
- [file changed] [what changed]
### Blockers (if any)
- [blocker description]
Full report: plans/reports/[skill-name]-[date]-[slug].md
```
Main agent reads the Full report file ONLY when: (a) resolving a specific blocker, or (b) building a fix plan. Sub-agent writes full report incrementally (per SYNC:incremental-persistence) — not held in memory.
Run python .claude/scripts/code_graph query tests_for <function> --json on changed functions to flag coverage gaps.
Review Mindset (NON-NEGOTIABLE)
Be skeptical. Every claim needs traced proof with file:line evidence. Confidence >80% to act.
- NEVER accept code correctness at face value — trace call paths to confirm
- NEVER include a finding without file:line evidence (grep results, read confirmations)
- ALWAYS question: "Does this actually work?" → trace it. "Is this all?" → grep cross-service.
- ALWAYS verify side effects: check consumers and dependents before approving
Core Principles (ENFORCE ALL)
| Principle | Rule |
|---|---|
| YAGNI | Flag code solving hypothetical problems (unused params, speculative interfaces) |
| KISS | Flag unnecessary complexity. "Is there a simpler way?" |
| DRY | Grep for similar/duplicate code. 3+ similar patterns → flag for extraction |
| Clean Code | Readable > clever. Names reveal intent. Functions do ONE thing. Nesting <=3. Methods <30 lines |
| Convention | MUST ATTENTION grep 3+ existing examples before flagging violations. Codebase convention wins over textbook |
| No Bugs | Trace logic paths. Verify edge cases (null, empty, boundary). Check error handling |
| Proof Required | Every claim backed by evidence. Speculation is forbidden |
| Doc Staleness | Cross-ref changed files against related docs. Flag stale/missing updates |
Technical correctness over social comfort. Verify before implementing. Evidence before claims.
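The nesting and method-size rules above can be illustrated with a hypothetical before/after (shipOrder is an invented example): guard clauses keep nesting at one level while preserving behavior.

```javascript
// Before: nesting depth 3 buries the single success path.
function shipOrderNested(order) {
  if (order) {
    if (order.paid) {
      if (order.items.length > 0) {
        return "shipped";
      }
    }
  }
  return "rejected";
}

// After: early returns — nesting depth 1, intent readable top-to-bottom.
function shipOrder(order) {
  if (!order) return "rejected";
  if (!order.paid) return "rejected";
  if (order.items.length === 0) return "rejected";
  return "shipped";
}

console.log(shipOrder({ paid: true, items: ["a"] })); // "shipped"
console.log(shipOrder(null));                         // "rejected"
```

A reviewer flagging a KISS violation should be able to show an equivalent flattened form like this, not just assert "too complex".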
Graph-Enhanced Review (RECOMMENDED if graph.db exists)
- python .claude/scripts/code_graph graph-blast-radius --json — prioritize files by impact (most dependents first)
- python .claude/scripts/code_graph query tests_for <function_name> --json — flag untested changed functions
- python .claude/scripts/code_graph trace <file> --direction downstream --json — downstream impact (events, bus, cross-service)
- python .claude/scripts/code_graph trace <file> --direction both --json — full flow context for controllers/commands/handlers
- Wide blast radius (>20 impacted nodes) = high-risk. Flag in report.
Review Approach (Report-Driven Two-Phase - CRITICAL)
⛔ MANDATORY FIRST: Create Todo Tasks Before starting, call TaskCreate with review phase tasks:
- [Review Phase 1] Create report file — in_progress
- [Review Phase 1] Review file-by-file and update report — pending
- [Review Phase 2] Re-read report for holistic assessment — pending
- [Review Phase 3] Generate final review findings — pending
- [Review Round 2] Focused re-review of all files — pending
- [Review Final] Consolidate Round 1 + Round 2 findings — pending
Update todo status as each phase completes.
Step 0: Create Report File
- Create plans/reports/code-review-{date}-{slug}.md
- Initialize with Scope, Files to Review sections
Phase 1: File-by-File Review (Build Report) For EACH file, immediately update report with:
- File path, Change Summary, Purpose, Issues Found
- Check naming, typing, magic numbers, responsibility placement
- Convention check: Grep for 3+ similar patterns in codebase — does new code follow existing convention?
- Correctness check: Trace logic paths — does the code handle null, empty, boundary values, error cases?
- DRY check: Grep for similar/duplicate code — does this logic already exist elsewhere?
Phase 2: Holistic Review (Review the Report) After ALL files reviewed, re-read accumulated report to see big picture:
- Technical Solution: Does overall approach make sense as unified plan?
- Responsibility: New files in correct layers? Logic in LOWEST layer?
- Backend: Mapping in Command/DTO (not Handler)?
- Frontend: Constants/columns in Model (not Component)?
- Duplication: Any duplicated logic across changes? Similar code elsewhere? (grep to verify)
- Architecture: Clean Architecture followed? Service boundaries respected?
- Plan Compliance (if active plan exists): Check ## Plan Context → if plan path exists, verify: implementation matches plan requirements, plan TCs have code evidence (not "TBD"), no plan requirement unaddressed
- Design Patterns (per design-patterns-quality-checklist.md): Pattern opportunities (switch→Strategy, scattered new→Factory)? Anti-patterns (God Object, Copy-Paste, Circular Dependency)? DRY via base classes/generics? Right responsibility layer? Tech-agnostic abstractions?
MUST ATTENTION CHECK — Clean Code: YAGNI (unused params, speculative interfaces)? KISS (simpler alternative exists)? Methods >30 lines or nesting >3? Abstractions for single-use?
MUST ATTENTION CHECK — Correctness: Null/empty/boundary handled? Error paths caught and propagated? Async race conditions? Trace one happy path + one error path through business logic.
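The null/empty/boundary trace can be made concrete with a small sketch. The helper below is hypothetical (not from this codebase); it only shows what "handled" looks like for each path the check names:

```typescript
// Hypothetical helper illustrating the correctness checks: null, empty,
// and boundary inputs each have a deliberate outcome, plus an error path.
function averageScore(scores: number[] | null | undefined): number {
  // Null/empty path — return a documented default rather than crashing
  if (!scores || scores.length === 0) {
    return 0;
  }
  // Boundary path — reject out-of-range values loudly (error path)
  if (scores.some((s) => s < 0 || s > 100)) {
    throw new RangeError("score out of range [0, 100]");
  }
  // Happy path
  const total = scores.reduce((sum, s) => sum + s, 0);
  return total / scores.length;
}
```

A review tracing this function would tick off one happy path (`[80, 100]`), one error path (`[-1]` throws), and both degenerate inputs.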
Documentation Staleness Check:
Cross-reference changed files against related documentation:
| Changed file pattern | Docs to check |
|---|---|
| Service code | Business feature docs for affected service |
| Frontend code | Frontend patterns doc, relevant business-feature docs |
| Framework code | Backend patterns doc, advanced patterns doc |
| Hooks | hook count tables in related docs |
| Skills | skill catalogs |
| Workflows | workflow catalog, references |
Flag stale counts/tables/examples, missing docs for new features, outdated test specs. Do NOT auto-fix — flag in report with specific stale section and what changed.
Phase 3: Final Review Result Update report with: Overall Assessment, Critical Issues, High Priority, Architecture Recommendations, Documentation Staleness, Positive Observations
Round 2+ : Fresh Sub-Agent Re-Review (MANDATORY)
Protocol: SYNC:review-protocol-injection + SYNC:double-round-trip-review + SYNC:fresh-context-review (all inlined above in this file).
After completing Phase 3 (Round 1), spawn a fresh code-reviewer sub-agent for Round 2 using the canonical Agent template from
SYNC:review-protocol-injection above. When constructing the Agent call prompt:
- Copy the Agent call shape from the `SYNC:review-protocol-injection` template verbatim
- Embed the full verbatim body of these 9 SYNC blocks (all present inline above in this skill file): `SYNC:evidence-based-reasoning`, `SYNC:bug-detection`, `SYNC:design-patterns-quality`, `SYNC:logic-and-intention-review`, `SYNC:test-spec-verification`, `SYNC:fix-layer-accountability`, `SYNC:rationalization-prevention`, `SYNC:graph-assisted-investigation`, `SYNC:understand-code-first`
- Set the Task as "Review ALL uncommitted changes for code quality. Focus on cross-cutting concerns, interaction bugs, convention drift, missing pieces, subtle edge cases, logic errors, and test spec gaps."
- Set Target Files as "run git diff to see all uncommitted changes"
- Set report path as `plans/reports/code-review-round{N}-{date}.md`
After sub-agent returns:
- Read the sub-agent's report from `plans/reports/code-review-round{N}-{date}.md`
- Integrate findings as `## Round {N} Findings (Fresh Sub-Agent)` in the main report — DO NOT filter or override
- If FAIL: fix issues, then spawn a NEW Round N+1 fresh sub-agent (new Agent call — never reuse Round 2's agent)
- Max 3 fresh rounds — escalate to user via AskUserQuestion if still failing after 3 rounds
- Final verdict must incorporate findings from ALL rounds
Clean Code Rules (MUST ATTENTION CHECK)
| # | Rule | Details |
|---|---|---|
| 1 | No Magic Values | All literals → named constants |
| 2 | Type Annotations | Explicit parameter and return types on all functions |
| 3 | Single Responsibility | One concern per method/class. Event handlers/consumers: one handler = one concern. NEVER bundle — platform swallows exceptions silently |
| 4 | DRY | No duplication; extract shared logic |
| 5 | Naming | Specific names (not generic), Verb+Noun methods, is/has/can/should booleans, no abbreviations |
| 6 | Performance | No O(n²) (use dictionary). Project in query (not load-all). ALWAYS paginate. Batch-by-IDs (not N+1) |
| 7 | Entity Indexes | Collections: index management methods. EF Core: composite indexes. Expression fields match index order. Text search → text indexes |
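Rule 6's "no O(n²), use dictionary" fix can be sketched in a few lines. The `User`/`Order` shapes below are invented for illustration; the point is the shape of the rewrite, not these names:

```typescript
// Hypothetical shapes for illustrating rule 6's dictionary fix
interface User { id: string; name: string; }
interface Order { id: string; userId: string; }

// ❌ O(n²): scans every user for every order
function joinSlow(orders: Order[], users: User[]): Array<{ order: Order; user?: User }> {
  return orders.map((o) => ({ order: o, user: users.find((u) => u.id === o.userId) }));
}

// ✅ O(n): build a Map once, then O(1) lookups per order
function joinFast(orders: Order[], users: User[]): Array<{ order: Order; user?: User }> {
  const usersById = new Map(users.map((u) => [u.id, u] as const));
  return orders.map((o) => ({ order: o, user: usersById.get(o.userId) }));
}
```

The same reshaping applies server-side: batch-load related entities by a collected ID list, then join in memory via a dictionary, instead of issuing one query per row (N+1).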
Data Lifecycle Rules (MUST ATTENTION CHECK)
Decision test: "Delete DB and start fresh — does this data still need to exist?" Yes → Seeder. No → Migration.
| Type | Contains | NEVER contains |
|---|---|---|
| Seeder | Default records, system config, reference data (idempotent, every run) | Schema changes |
| Migration | Schema changes, column adds/removes, data transforms, indexes | Default records, permission seeds, system config |
```csharp
// ❌ Seed data in migration — lost after DB reset
class SeedDefaultRecords : DataMigrationExecutor { ... }

// ✅ Idempotent seeder — always runs
class ApplicationDataSeeder { if (exists) return; else create(); }
```
Legacy Frontend Pattern Compliance
When reviewing legacy frontend apps (check
docs/project-config.json → modules[].tags for "legacy"), MUST ATTENTION verify:
- MUST ATTENTION component extends base component class (search for: app base component hierarchy) with `super(...)` in constructor
- MUST ATTENTION uses subscription cleanup pattern (search for: subscription cleanup pattern) — NO manual destroy Subject
- MUST ATTENTION services extend API service base class — NO direct `HttpClient`
- MUST ATTENTION store API calls use store effect pattern — NOT deprecated patterns
CRITICAL anti-patterns to flag:
```typescript
// ❌ Manual destroy Subject / takeUntil pattern
private destroy$ = new Subject<void>();
.pipe(takeUntil(this.destroy$))

// ❌ Raw Component without base class
export class MyComponent implements OnInit, OnDestroy { }
```
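For contrast, a minimal sketch of the compliant shape. Names like `AppBaseComponent` are hypothetical stand-ins for the app's real base class hierarchy (the real one is Angular- and RxJS-aware); the sketch uses a tiny `Subscription` stub so it is self-contained:

```typescript
// Stand-in so the sketch is self-contained — the real app uses RxJS
// Subscription and an Angular lifecycle hook instead.
class Subscription {
  closed = false;
  unsubscribe(): void { this.closed = true; }
}

// ✅ Base class owns subscription cleanup — no per-component destroy$ Subject
class AppBaseComponent {
  private subscriptions: Subscription[] = [];

  // Subclasses register subscriptions instead of managing teardown manually
  protected track(sub: Subscription): Subscription {
    this.subscriptions.push(sub);
    return sub;
  }

  // Called once on teardown — every tracked subscription is closed centrally
  destroy(): void {
    this.subscriptions.forEach((s) => s.unsubscribe());
    this.subscriptions = [];
  }
}

// ✅ Component extends the base class and calls super(...)
class MyComponent extends AppBaseComponent {
  readonly dataSub: Subscription;
  constructor() {
    super();
    this.dataSub = this.track(new Subscription());
  }
}
```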
When to Use This Skill
| Practice | Triggers | MUST ATTENTION READ |
|---|---|---|
| Receiving Feedback | Review comments received, feedback unclear/questionable, conflicts with existing decisions | |
| Requesting Review | After each subagent task, major feature done, before merge, after complex bug fix | |
| Verification Gates | Before any completion claim, commit, push, or PR. ANY success/satisfaction statement | |
Quick Decision Tree
```
SITUATION?
│
├─ Received feedback
│  ├─ Unclear items? → STOP, ask for clarification first
│  ├─ From human partner? → Understand, then implement
│  └─ From external reviewer? → Verify technically before implementing
│
├─ Completed work
│  ├─ Major feature/task? → Request code-reviewer subagent review
│  └─ Before merge? → Request code-reviewer subagent review
│
└─ About to claim status
   ├─ Have fresh verification? → State claim WITH evidence
   └─ No fresh verification? → RUN verification command first
```
Receiving Feedback Protocol
Pattern: READ → UNDERSTAND → VERIFY → EVALUATE → RESPOND → IMPLEMENT
- NEVER use performative agreement ("You're right!", "Great point!", "Thanks for...")
- NEVER implement before verification
- MUST ATTENTION restate requirement, ask questions, or push back with technical reasoning
- MUST ATTENTION ask for clarification on ALL unclear items BEFORE starting
- MUST ATTENTION grep for usage before implementing suggested "proper" features (YAGNI check)
Source handling: Human partner → implement after understanding. External reviewer → verify technically, push back if wrong.
Full protocol:
references/code-review-reception.md
Requesting Review Protocol
- Get git SHAs: `BASE_SHA=$(git rev-parse HEAD~1)` and `HEAD_SHA=$(git rev-parse HEAD)`
- Dispatch code-reviewer subagent with: WHAT_WAS_IMPLEMENTED, PLAN_OR_REQUIREMENTS, BASE_SHA, HEAD_SHA, DESCRIPTION
- Act on feedback: Critical → fix immediately. Important → fix before proceeding. Minor → note for later.
Full protocol:
references/requesting-code-review.md
Verification Gates Protocol
Iron Law: NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
Gate: IDENTIFY command → RUN it → READ output → VERIFY it confirms claim → THEN claim. Skip any step = lying.
| Claim | Required Evidence |
|---|---|
| Tests pass | Test output shows 0 failures |
| Build succeeds | Build command exit 0 |
| Bug fixed | Original symptom test passes |
| Requirements met | Line-by-line checklist verified |
Red Flags — STOP: "should"/"probably"/"seems to", satisfaction before verification, committing without verification, trusting agent reports.
Full protocol:
references/verification-before-completion.md
Related
code-simplifier, debug-investigate, refactoring
Systematic Review Protocol (for 10+ changed files)
When the changeset is large (10+ files), categorize files by concern, fire parallel code-reviewer sub-agents per category, then synchronize findings into a holistic report. See review-changes/SKILL.md § "Systematic Review Protocol" for the full 4-step protocol (Categorize → Parallel Sub-Agents → Synchronize → Holistic Assessment).
Workflow Recommendation
MANDATORY IMPORTANT MUST ATTENTION — NO EXCEPTIONS: If you are NOT already in a workflow, you MUST ATTENTION use AskUserQuestion to ask the user. Do NOT judge task complexity or decide this is "simple enough to skip" — the user decides whether to use a workflow, not you:
- Activate quality-audit workflow (Recommended) — code-review → plan → code → review-changes → test
- Execute /code-review directly — run this skill standalone
Architecture Boundary Check
For each changed file, verify it does not import from a forbidden layer:
- Read rules from `docs/project-config.json` → `architectureRules.layerBoundaries`
- Determine layer — For each changed file, match its path against each rule's `paths` glob patterns
- Scan imports — Grep the file for `using` (C#) or `import` (TS) statements
- Check violations — If any import path contains a layer name listed in `cannotImportFrom`, it is a violation
- Exclude framework — Skip files matching any pattern in `architectureRules.excludePatterns`
- BLOCK on violation — Report as critical: `"BLOCKED: {layer} layer file {filePath} imports from {forbiddenLayer} layer ({importStatement})"`

If `architectureRules` is not present in `project-config.json`, skip this check silently.
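The steps above can be sketched as a pure function. The `LayerRule` field names are assumptions mirroring what `architectureRules.layerBoundaries` is expected to contain — a reviewer would read the real shape from `docs/project-config.json`:

```typescript
// Assumed rule shape — field names are illustrative, not the real schema
interface LayerRule {
  layer: string;              // e.g. "Domain"
  paths: string[];            // path fragments identifying files in this layer
  cannotImportFrom: string[]; // forbidden layer names
}

function findViolations(filePath: string, imports: string[], rules: LayerRule[]): string[] {
  // Determine the file's layer (real code would use glob matching, not includes)
  const rule = rules.find((r) => r.paths.some((p) => filePath.includes(p)));
  if (!rule) return []; // file not covered by any rule — nothing to check

  // Any import mentioning a forbidden layer is reported as a critical block
  return imports
    .filter((imp) => rule.cannotImportFrom.some((layer) => imp.includes(layer)))
    .map((imp) => `BLOCKED: ${rule.layer} layer file ${filePath} imports from forbidden layer (${imp})`);
}
```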
Next Steps
MANDATORY IMPORTANT MUST ATTENTION — NO EXCEPTIONS after completing this skill, you MUST ATTENTION use
AskUserQuestion to present these options. Do NOT skip because the task seems "simple" or "obvious" — the user decides:
- "/fix (Recommended)" — If review found issues that need fixing
- "/watzup" — If review is clean, wrap up session
- "Skip, continue manually" — user decides
AI Agent Integrity Gate (NON-NEGOTIABLE)
Completion ≠ Correctness. Before reporting ANY work done, prove it:
- Grep every removed name. Extraction/rename/delete touched N files? Grep confirms 0 dangling refs across ALL file types.
- Ask WHY before changing. Existing values are intentional until proven otherwise. No "fix" without traced rationale.
- Verify ALL outputs. One build passing ≠ all builds passing. Check every affected stack.
- Evaluate pattern fit. Copying nearby code? Verify preconditions match — same scope, lifetime, base class, constraints.
- New artifact = wired artifact. Created something? Prove it's registered, imported, and reachable by all consumers.
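The "grep every removed name" gate reduces to a pure check over file contents. This sketch uses an in-memory map for self-containment; real use would walk the repo with grep across all file types:

```typescript
// Returns the paths still referencing a removed symbol — the result must
// be empty before an extraction/rename/delete may be reported as done.
function danglingRefs(files: Map<string, string>, removedName: string): string[] {
  const hits: string[] = [];
  for (const [path, content] of files) {
    if (content.includes(removedName)) {
      hits.push(path);
    }
  }
  return hits;
}
```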
Closing Reminders
MANDATORY IMPORTANT MUST ATTENTION break work into small todo tasks using
TaskCreate BEFORE starting.
MANDATORY IMPORTANT MUST ATTENTION validate decisions with user via AskUserQuestion — never auto-decide.
MANDATORY IMPORTANT MUST ATTENTION add a final review todo task to verify work quality.
MANDATORY IMPORTANT MUST ATTENTION READ the following files before starting:
<!-- SYNC:evidence-based-reasoning:reminder -->
- MANDATORY IMPORTANT MUST ATTENTION cite `file:line` evidence for every claim. Confidence >80% to act, <60% = do NOT recommend. <!-- /SYNC:evidence-based-reasoning:reminder --> <!-- SYNC:design-patterns-quality:reminder -->
- MANDATORY IMPORTANT MUST ATTENTION check DRY via OOP (same-suffix → base class), right responsibility (lowest layer), SOLID. Grep for dangling refs after changes. <!-- /SYNC:design-patterns-quality:reminder --> <!-- SYNC:double-round-trip-review:reminder -->
- MANDATORY IMPORTANT MUST ATTENTION execute TWO review rounds. Round 2 delegates to fresh code-reviewer sub-agent (zero prior context) — never skip or combine with Round 1. <!-- /SYNC:double-round-trip-review:reminder --> <!-- SYNC:rationalization-prevention:reminder -->
- MANDATORY IMPORTANT MUST ATTENTION follow ALL steps regardless of perceived simplicity. "Too simple to plan" is an evasion, not a reason. <!-- /SYNC:rationalization-prevention:reminder --> <!-- SYNC:graph-assisted-investigation:reminder -->
- MANDATORY IMPORTANT MUST ATTENTION run at least ONE graph command on key files when graph.db exists. Pattern: grep → graph trace → grep verify. <!-- /SYNC:graph-assisted-investigation:reminder --> <!-- SYNC:logic-and-intention-review:reminder -->
- MANDATORY IMPORTANT MUST ATTENTION verify every changed file serves stated purpose. Trace happy + error paths. Flag scope creep. <!-- /SYNC:logic-and-intention-review:reminder --> <!-- SYNC:bug-detection:reminder -->
- MANDATORY IMPORTANT MUST ATTENTION check null safety, boundary conditions, error handling, resource management for every review. <!-- /SYNC:bug-detection:reminder --> <!-- SYNC:test-spec-verification:reminder -->
- MANDATORY IMPORTANT MUST ATTENTION map every changed function/endpoint to a TC-{FEAT}-{NNN}. Flag gaps, recommend `/tdd-spec`. <!-- /SYNC:test-spec-verification:reminder --> <!-- SYNC:fix-layer-accountability:reminder -->
- IMPORTANT MUST ATTENTION trace full data flow and fix at the owning layer, not the crash site. Audit all access sites before adding `?.`. <!-- /SYNC:fix-layer-accountability:reminder --> <!-- SYNC:critical-thinking-mindset:reminder -->
- MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact. <!-- /SYNC:critical-thinking-mindset:reminder --> <!-- SYNC:ai-mistake-prevention:reminder -->
- MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction. <!-- /SYNC:ai-mistake-prevention:reminder -->