EasyPlatform sre-review

[Code Quality] Production readiness review for service-layer and API changes

install

source · Clone the upstream repo:
`git clone https://github.com/duc01226/EasyPlatform`

Claude Code · Install into `~/.claude/skills/`:
`T=$(mktemp -d) && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/sre-review" ~/.claude/skills/duc01226-easyplatform-sre-review && rm -rf "$T"`

manifest: `.claude/skills/sre-review/skill.md`

[IMPORTANT] Use `TaskCreate` to break ALL work into small tasks BEFORE starting — including tasks for each file read. This prevents context loss from long files. For simple tasks, the AI MUST ATTENTION ask the user whether to skip.

<!-- SYNC:critical-thinking-mindset -->

Critical Thinking Mindset — Apply critical thinking and sequential thinking. Every claim needs traced proof; confidence >80% to act. Anti-hallucination: never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence — certainty without evidence is the root of all hallucination.

<!-- /SYNC:critical-thinking-mindset --> <!-- SYNC:ai-mistake-prevention -->

AI Mistake Prevention — Failure modes to avoid on every task:

  • Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
  • Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
  • Trace full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
  • Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just happy path.
  • When debugging, ask "whose responsibility?" before fixing. Trace whether bug is in caller (wrong data) or callee (wrong handling). Fix at responsible layer — never patch symptom site.
  • Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
  • Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
  • Holistic-first debugging — resist nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
  • Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
  • Surface ambiguity before coding — don't pick silently. If request has multiple interpretations, present each with effort estimate and ask. Never assume all-records, file-based, or more complex path.
<!-- /SYNC:ai-mistake-prevention --> <!-- SYNC:evidence-based-reasoning -->

Evidence-Based Reasoning — Speculation is FORBIDDEN. Every claim needs proof.

  1. Cite `file:line`, grep results, or framework docs for EVERY claim
  2. Declare confidence: >80% act freely, 60-80% verify first, <60% DO NOT recommend
  3. Cross-service validation required for architectural changes
  4. "I don't have enough evidence" is valid and expected output

BLOCKED until:

- [ ] Evidence file path (`file:line`)
- [ ] Grep search performed
- [ ] 3+ similar patterns found
- [ ] Confidence level stated

Forbidden without proof: "obviously", "I think", "should be", "probably", "this is because". If incomplete → output:

"Insufficient evidence. Verified: [...]. Not verified: [...]."

<!-- /SYNC:evidence-based-reasoning -->
  • docs/project-reference/domain-entities-reference.md — Domain entity catalog, relationships, cross-service sync (read when task involves business entities/models). Content is auto-injected by hook — check for the [Injected: ...] header before reading.
<!-- SYNC:double-round-trip-review -->

Deep Multi-Round Review — Escalating rounds. Round 1 in main session. Round 2+ and EVERY recursive re-review iteration MUST use a fresh sub-agent.

Round 1: Main-session review. Read target files, build understanding, note issues. Output baseline findings.

Round 2: MANDATORY fresh sub-agent review — see SYNC:fresh-context-review for the spawn mechanism and SYNC:review-protocol-injection for the canonical Agent prompt template. The sub-agent re-reads ALL files from scratch with ZERO Round 1 memory. It must catch:

  • Cross-cutting concerns missed in Round 1
  • Interaction bugs between changed files
  • Convention drift (new code vs existing patterns)
  • Missing pieces that should exist but don't
  • Subtle edge cases the main session rationalized away

Round 3+ (recursive after fixes): After ANY fix cycle, MANDATORY fresh sub-agent re-review. Spawn a NEW Agent tool call each iteration — never reuse Round 2's agent. Each new agent re-reads ALL files from scratch with full protocol injection. Continue until PASS or 3 fresh-subagent rounds max, then escalate to the user via `AskUserQuestion`.

Rules:

  • NEVER declare PASS after Round 1 alone
  • NEVER reuse a sub-agent across rounds — every iteration spawns a NEW Agent call
  • Main agent READS sub-agent reports but MUST NOT filter, reinterpret, or override findings
  • Max 3 fresh-subagent rounds per review — if still FAIL, escalate via `AskUserQuestion` (do NOT silently loop)
  • Track round count in conversation context (session-scoped)
  • Final verdict must incorporate ALL rounds

Report must include a `## Round N Findings (Fresh Sub-Agent)` section for every round N≥2.

<!-- /SYNC:double-round-trip-review --> <!-- SYNC:fresh-context-review -->

Fresh Sub-Agent Review — Eliminate orchestrator confirmation bias via isolated sub-agents.

Why: The main agent knows what it (or `/cook`) just fixed and rationalizes findings accordingly. A fresh sub-agent has ZERO memory, re-reads from scratch, and catches what the main agent dismissed. Sub-agent bias is mitigated by (1) fresh context, (2) verbatim protocol injection, (3) the main agent not filtering the report.

When: Round 2 of ANY review AND every recursive re-review iteration after fixes. NOT needed when Round 1 already PASSes with zero issues.

How:

  1. Spawn a NEW `Agent` tool call — use the `code-reviewer` subagent_type for code reviews, `general-purpose` for plan/doc/artifact reviews
  2. Inject ALL required review protocols VERBATIM into the prompt — see SYNC:review-protocol-injection for the full list and template. Never reference protocols by file path; AI compliance drops behind file-read indirection (see SYNC:shared-protocol-duplication-policy)
  3. Sub-agent re-reads ALL target files from scratch via its own tool calls — never pass file contents inline in the prompt
  4. Sub-agent writes a structured report to `plans/reports/{review-type}-round{N}-{date}.md`
  5. Main agent reads the report, integrates findings into its own report, DOES NOT override or filter

Rules:

  • NEVER reuse a sub-agent across rounds — every iteration spawns a NEW `Agent` call
  • NEVER skip fresh-subagent review because "last round was clean" — every fix triggers a fresh round
  • Max 3 fresh-subagent rounds per review — escalate via `AskUserQuestion` if still failing; do NOT silently loop or fall back to any prior protocol
  • Track iteration count in conversation context (session-scoped, no persistent files)
<!-- /SYNC:fresh-context-review --> <!-- SYNC:review-protocol-injection -->

Review Protocol Injection — Every fresh sub-agent review prompt MUST embed the 9 protocol blocks VERBATIM. The template below has ALL 9 bodies already expanded inline. Copy the template wholesale into the Agent call's `prompt` field at runtime, replacing only the `{placeholders}` in the Task / Round / Reference Docs / Target Files / Output sections with context-specific values. Do NOT touch the embedded protocol sections.

Why inline expansion: Placeholder markers would force file-read indirection at runtime, and AI compliance drops significantly behind indirection (see SYNC:shared-protocol-duplication-policy). Therefore the template carries all 9 protocol bodies pre-embedded.

Subagent Type Selection

  • `code-reviewer` — for code reviews (reviewing source files, git diffs, implementation)
  • `general-purpose` — for plan / doc / artifact reviews (reviewing markdown plans, docs, specs)

Canonical Agent Call Template (Copy Verbatim)

```
Agent({
  description: "Fresh Round {N} review",
  subagent_type: "code-reviewer",
  prompt: `
## Task
{review-specific task — e.g., "Review all uncommitted changes for code quality" | "Review plan files under {plan-dir}" | "Review integration tests in {path}"}

## Round
Round {N}. You have ZERO memory of prior rounds. Re-read all target files from scratch via your own tool calls. Do NOT trust anything from the main agent beyond this prompt.

## Protocols (follow VERBATIM — these are non-negotiable)

### Evidence-Based Reasoning
Speculation is FORBIDDEN. Every claim needs proof.
1. Cite file:line, grep results, or framework docs for EVERY claim
2. Declare confidence: >80% act freely, 60-80% verify first, <60% DO NOT recommend
3. Cross-service validation required for architectural changes
4. "I don't have enough evidence" is valid and expected output
BLOCKED until: Evidence file path (file:line) provided; Grep search performed; 3+ similar patterns found; Confidence level stated.
Forbidden without proof: "obviously", "I think", "should be", "probably", "this is because".
If incomplete → output: "Insufficient evidence. Verified: [...]. Not verified: [...]."

### Bug Detection
MUST check categories 1-4 for EVERY review. Never skip.
1. Null Safety: Can params/returns be null? Are they guarded? Optional chaining gaps? .find() returns checked?
2. Boundary Conditions: Off-by-one (< vs <=)? Empty collections handled? Zero/negative values? Max limits?
3. Error Handling: Try-catch scope correct? Silent swallowed exceptions? Error types specific? Cleanup in finally?
4. Resource Management: Connections/streams closed? Subscriptions unsubscribed on destroy? Timers cleared? Memory bounded?
5. Concurrency (if async): Missing await? Race conditions on shared state? Stale closures? Retry storms?
6. Stack-Specific: JS: === vs ==, typeof null. C#: async void, missing using, LINQ deferred execution.
Classify: CRITICAL (crash/corrupt) → FAIL | HIGH (incorrect behavior) → FAIL | MEDIUM (edge case) → WARN | LOW (defensive) → INFO.

### Design Patterns Quality
Priority checks for every code change:
1. DRY via OOP: Same-suffix classes (*Entity, *Dto, *Service) MUST share base class. 3+ similar patterns → extract to shared abstraction.
2. Right Responsibility: Logic in LOWEST layer (Entity > Domain Service > Application Service > Controller). Never business logic in controllers.
3. SOLID: Single responsibility (one reason to change). Open-closed (extend, don't modify). Liskov (subtypes substitutable). Interface segregation (small interfaces). Dependency inversion (depend on abstractions).
4. After extraction/move/rename: Grep ENTIRE scope for dangling references. Zero tolerance.
5. YAGNI gate: NEVER recommend patterns unless 3+ occurrences exist. Don't extract for hypothetical future use.
Anti-patterns to flag: God Object, Copy-Paste inheritance, Circular Dependency, Leaky Abstraction.

### Logic & Intention Review
Verify WHAT code does matches WHY it was changed.
1. Change Intention Check: Every changed file MUST serve the stated purpose. Flag unrelated changes as scope creep.
2. Happy Path Trace: Walk through one complete success scenario through changed code.
3. Error Path Trace: Walk through one failure/edge case scenario through changed code.
4. Acceptance Mapping: If plan context available, map every acceptance criterion to a code change.
NEVER mark review PASS without completing both traces (happy + error path).

### Test Spec Verification
Map changed code to test specifications.
1. From changed files → find TC-{FEAT}-{NNN} in docs/business-features/{Service}/detailed-features/{Feature}.md Section 15.
2. Every changed code path MUST map to a corresponding TC (or flag as "needs TC").
3. New functions/endpoints/handlers → flag for test spec creation.
4. Verify TC evidence fields point to actual code (file:line, not stale references).
5. Auth changes → TC-{FEAT}-02x exist? Data changes → TC-{FEAT}-01x exist?
6. If no specs exist → log gap and recommend /tdd-spec.
NEVER skip test mapping. Untested code paths are the #1 source of production bugs.

### Fix-Layer Accountability
NEVER fix at the crash site. Trace the full flow, fix at the owning layer. The crash site is a SYMPTOM, not the cause.
MANDATORY before ANY fix:
1. Trace full data flow — Map the complete path from data origin to crash site across ALL layers (storage → backend → API → frontend → UI). Identify where bad state ENTERS, not where it CRASHES.
2. Identify the invariant owner — Which layer's contract guarantees this value is valid? Fix at the LOWEST layer that owns the invariant, not the highest layer that consumes it.
3. One fix, maximum protection — If fix requires touching 3+ files with defensive checks, you are at the wrong layer — go lower.
4. Verify no bypass paths — Confirm all data flows through the fix point. Check for direct construction skipping factories, clone/spread without re-validation, raw data not wrapped in domain models, mutations outside the model layer.
BLOCKED until: Full data flow traced (origin → crash); Invariant owner identified with file:line evidence; All access sites audited (grep count); Fix layer justified (lowest layer that protects most consumers).
Anti-patterns (REJECT): "Fix it where it crashes" (crash site ≠ cause site, trace upstream); "Add defensive checks at every consumer" (scattered defense = wrong layer); "Both fix is safer" (pick ONE authoritative layer).

### Rationalization Prevention
AI skips steps via these evasions. Recognize and reject:
- "Too simple for a plan" → Simple + wrong assumptions = wasted time. Plan anyway.
- "I'll test after" → RED before GREEN. Write/verify test first.
- "Already searched" → Show grep evidence with file:line. No proof = no search.
- "Just do it" → Still need TaskCreate. Skip depth, never skip tracking.
- "Just a small fix" → Small fix in wrong location cascades. Verify file:line first.
- "Code is self-explanatory" → Future readers need evidence trail. Document anyway.
- "Combine steps to save time" → Combined steps dilute focus. Each step has distinct purpose.

### Graph-Assisted Investigation
MANDATORY when .code-graph/graph.db exists.
HARD-GATE: MUST run at least ONE graph command on key files before concluding any investigation.
Pattern: Grep finds files → trace --direction both reveals full system flow → Grep verifies details.
- Investigation/Scout: trace --direction both on 2-3 entry files
- Fix/Debug: callers_of on buggy function + tests_for
- Feature/Enhancement: connections on files to be modified
- Code Review: tests_for on changed functions
- Blast Radius: trace --direction downstream
CLI: python .claude/scripts/code_graph {command} --json. Use --node-mode file first (10-30x less noise), then --node-mode function for detail.

### Understand Code First
HARD-GATE: Do NOT write, plan, or fix until you READ existing code.
1. Search 3+ similar patterns (grep/glob) — cite file:line evidence.
2. Read existing files in target area — understand structure, base classes, conventions.
3. Run python .claude/scripts/code_graph trace <file> --direction both --json when .code-graph/graph.db exists.
4. Map dependencies via connections or callers_of — know what depends on your target.
5. Write investigation to .ai/workspace/analysis/ for non-trivial tasks (3+ files).
6. Re-read analysis file before implementing — never work from memory alone.
7. NEVER invent new patterns when existing ones work — match exactly or document deviation.
BLOCKED until: Read target files; Grep 3+ patterns; Graph trace (if graph.db exists); Assumptions verified with evidence.

## Reference Docs (READ before reviewing)
- docs/project-reference/code-review-rules.md
- {skill-specific reference docs — e.g., integration-test-reference.md for integration-test-review; backend-patterns-reference.md for backend reviews; frontend-patterns-reference.md for frontend reviews}

## Target Files
{explicit file list OR "run git diff to see uncommitted changes" OR "read all files under {plan-dir}"}

## Output
Write a structured report to plans/reports/{review-type}-round{N}-{date}.md with sections:
- Status: PASS | FAIL
- Issue Count: {number}
- Critical Issues (with file:line evidence)
- High Priority Issues (with file:line evidence)
- Medium / Low Issues
- Cross-cutting findings

Return the report path and status to the main agent.
Every finding MUST have file:line evidence. Speculation is forbidden.
`
})
```

Rules

  • DO copy the template wholesale — including all 9 embedded protocol sections
  • DO replace only the `{placeholders}` in the Task / Round / Reference Docs / Target Files / Output sections with context-specific content
  • DO choose the `code-reviewer` subagent_type for code reviews and `general-purpose` for plan / doc / artifact reviews
  • DO NOT paraphrase, summarize, or skip any protocol section
  • DO NOT pass file contents inline — the sub-agent reads via its own tool calls so it has a fresh context
  • DO NOT reference protocols by file path or tag name — the bodies are already embedded above
  • DO NOT introduce placeholder markers for the protocols — they must stay literally expanded
<!-- /SYNC:review-protocol-injection -->

Critical Purpose: Ensure quality — no flaws, no bugs, no missing updates, no stale content. Verify both code AND documentation.

External Memory: For complex or lengthy work (research, analysis, scan, review), write intermediate findings and final results to a report file in `plans/reports/` — this prevents context loss and serves as the deliverable.

Evidence Gate: MANDATORY IMPORTANT MUST ATTENTION — every claim, finding, and recommendation requires `file:line` proof or traced evidence with a confidence percentage (>80% to act, <80% must verify first).

Quick Summary

Goal: Assess production readiness of service-layer and API changes by scoring observability, reliability, and operational preparedness.

When to use: After implementing backend service or API changes, before committing.

Scope: Service-layer and API changes only — frontend-only changes exempt.

Why this exists: Code that works but can't be debugged, monitored, or rolled back is technical debt in disguise.

Deployment Context: Read the `infrastructure` section of `docs/project-config.json` for the deployment platform:

  • `containerization` — e.g., "docker" → check Dockerfiles, docker-compose
  • `orchestration` — e.g., "kubernetes" → check K8s manifests, Helm charts
  • `cicd.tool` — e.g., "azure-devops" → check pipeline configs

Your Mission

<task> $ARGUMENTS </task>

Review Mindset (NON-NEGOTIABLE)

Be skeptical. Apply critical thinking and sequential thinking. Every claim needs traced proof and a stated confidence percentage (must exceed 80% to act).

  • Do NOT accept operational readiness at face value — verify by reading actual implementations
  • Every score must include `file:line` evidence (grep results, read confirmations)
  • If you cannot prove a score with a code trace, score it 0
  • Question assumptions: "Is this really handled?" → trace the error/retry/timeout path to confirm
  • Challenge completeness: "Are all failure modes covered?" → check what happens when dependencies fail
  • Verify observability: "Can we actually debug this in production?" → check logging, correlation, metrics
  • No "looks fine" without proof — state what you verified and how

Scope Resolution

  1. If arguments specify files/directories → review those
  2. Else review uncommitted changes (`git diff --name-only`)
  3. Focus on: `*.cs` files in `src/Services/`, API controllers, service classes
  4. Skip: frontend files, test files, documentation, configuration-only changes

Production Readiness Checklist

Review the changed files and score each criterion 0-2:

  • 0 = Not addressed
  • 1 = Partially addressed
  • 2 = Fully addressed

Observability (max 8 points)

| #   | Criterion          | What to Check                                                                                                 |
| --- | ------------------ | ------------------------------------------------------------------------------------------------------------- |
| 1   | Structured Logging | External API calls and critical operations log errors with context (request ID, user, parameters)              |
| 2   | Error Context      | Exceptions include enough context to diagnose without reproducing (entity IDs, operation type, input summary)  |
| 3   | Metrics Awareness  | Operations >100ms consider tracking duration. New endpoints consider latency monitoring                        |
| 4   | Correlation        | Cross-service calls include or propagate correlation IDs for distributed tracing                               |
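Where these criteria meet C# service code, a passing change looks roughly like the sketch below — a minimal illustration assuming ASP.NET Core's `ILogger<T>` and `HttpClient`; the `PaymentService` name, endpoint path, and `X-Correlation-ID` header are hypothetical, not this project's actual conventions.

```csharp
// Minimal sketch of criteria 1, 2, and 4 — hypothetical names throughout.
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

public class PaymentService
{
    private readonly ILogger<PaymentService> logger;
    private readonly HttpClient httpClient;

    public PaymentService(ILogger<PaymentService> logger, HttpClient httpClient)
    {
        this.logger = logger;
        this.httpClient = httpClient;
    }

    public async Task ChargeAsync(string orderId, string correlationId)
    {
        // Criterion 4: propagate the correlation ID on the cross-service call.
        using var request = new HttpRequestMessage(HttpMethod.Post, $"payments/{orderId}/charge");
        request.Headers.Add("X-Correlation-ID", correlationId);

        try
        {
            var response = await httpClient.SendAsync(request);
            response.EnsureSuccessStatusCode();
        }
        catch (HttpRequestException ex)
        {
            // Criteria 1-2: structured log carrying enough context to diagnose
            // without reproducing — entity ID, operation, correlation ID.
            logger.LogError(ex, "Charge failed for order {OrderId} (correlation {CorrelationId})",
                orderId, correlationId);
            throw;
        }
    }
}
```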

Reliability (max 8 points)

| #   | Criterion             | What to Check                                                                                                  |
| --- | --------------------- | --------------------------------------------------------------------------------------------------------------- |
| 5   | Retry Strategy        | Transient failures (HTTP, DB timeouts) have retry logic or a documented reason for not retrying                  |
| 6   | Timeout Configuration | HTTP clients and external calls have explicit timeouts (not relying on defaults)                                 |
| 7   | Error Handling        | Errors handled gracefully — no swallowed exceptions, no generic catch-all without logging                        |
| 8   | Fallback Behavior     | Critical paths define what happens when dependencies fail (degraded mode, cached response, user-facing error)    |
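A minimal sketch of criteria 5-7 under stated assumptions: a plain retry loop with an explicit timeout, deliberately not tied to any resilience library; the three-attempt limit and backoff values are illustrative only.

```csharp
// Criteria 5-7 sketch. A real service would reuse an injected HttpClient
// rather than creating one per call; values here are illustrative.
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class ResilientHttp
{
    public static async Task<string> GetWithRetryAsync(string url)
    {
        // Criterion 6: explicit timeout — never rely on the default.
        using var client = new HttpClient { Timeout = TimeSpan.FromSeconds(10) };

        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return await client.GetStringAsync(url);
            }
            catch (Exception ex) when (ex is HttpRequestException or TaskCanceledException)
            {
                // Criterion 5: retry only transient failures, with backoff.
                // Criterion 7: nothing is swallowed — after 3 attempts, rethrow.
                if (attempt >= 3) throw;
                await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
            }
        }
    }
}
```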

Data Integrity (max 4 points)

| #   | Criterion          | What to Check                                                                                                    |
| --- | ------------------ | ----------------------------------------------------------------------------------------------------------------- |
| 9   | Seed vs Migration  | Seed data (default records, system config) lives in startup data seeders, NOT in one-time migration executors      |
| 10  | Seeder Idempotency | Data seeders use a check-then-create pattern (query before insert) — safe for repeated runs on any environment     |
Decision test for reviewers: "If the database is reset, does this data still need to exist?" Yes → must be in a seeder. No → migration is acceptable.
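A seeder that passes the decision test and criterion 10 looks roughly like this — the `IRepository<T>` abstraction and `SystemConfig` entity below are hypothetical stand-ins, not EasyPlatform's actual seeder base classes.

```csharp
// Check-then-create sketch for criteria 9-10 — hypothetical types throughout.
using System;
using System.Linq.Expressions;
using System.Threading.Tasks;

public class SystemConfig
{
    public string Key { get; set; } = "";
    public string Value { get; set; } = "";
}

public interface IRepository<T> where T : class
{
    Task<T?> FirstOrDefaultAsync(Expression<Func<T, bool>> predicate);
    Task CreateAsync(T entity);
}

public class SystemConfigSeeder
{
    private readonly IRepository<SystemConfig> repository;

    public SystemConfigSeeder(IRepository<SystemConfig> repository) => this.repository = repository;

    public async Task SeedAsync()
    {
        // Criterion 10: query before insert — safe to run on every startup,
        // in any environment, any number of times.
        var existing = await repository.FirstOrDefaultAsync(c => c.Key == "DefaultLocale");
        if (existing == null)
            await repository.CreateAsync(new SystemConfig { Key = "DefaultLocale", Value = "en-US" });
    }
}
```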

Database Performance (max 4 points)

[IMPORTANT] Database Performance Protocol (MANDATORY):

  1. Paging Required — ALL list/collection queries MUST ATTENTION use pagination. NEVER load all records into memory. Verify: no unbounded `GetAll()`, `ToList()`, or `Find()` without `Skip/Take` or cursor-based paging (a passing sketch follows the table below).
  2. Index Required — ALL query filter fields, foreign keys, and sort columns MUST ATTENTION have database indexes configured. Verify: entity expressions match index field order, database collections have index management methods, migrations include indexes for WHERE/JOIN/ORDER BY columns.

| #   | Criterion        | What to Check                                                                                                      |
| --- | ---------------- | -------------------------------------------------------------------------------------------------------------------- |
| 11  | Pagination       | List/collection queries use pagination (Skip/Take, cursor). No unbounded GetAll/ToList loading all records into memory |
| 12  | Database Indexes | Query filter fields, foreign keys, and sort columns have matching database indexes. Migrations include index creation  |
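For criterion 11 (and the index expectations of criterion 12), a bounded list query looks roughly like the sketch below — plain LINQ over `IQueryable`, with a hypothetical `OrderItem` shape; real queries would run against the project's repository or DbContext, with the matching indexes created in migrations.

```csharp
// Bounded list query sketch for criteria 11-12 — hypothetical entity shape.
using System;
using System.Collections.Generic;
using System.Linq;

public record OrderItem(Guid Id, DateTime CreatedDate, bool IsOpen);

public static class OrderQueries
{
    public static List<OrderItem> GetPage(IQueryable<OrderItem> orders, int pageIndex, int pageSize)
    {
        return orders
            .Where(o => o.IsOpen)          // filter field → needs a matching index (criterion 12)
            .OrderBy(o => o.CreatedDate)   // sort column → needs a matching index (criterion 12)
            .Skip(pageIndex * pageSize)    // criterion 11: bounded read,
            .Take(pageSize)                // never an unbounded ToList()
            .ToList();
    }
}
```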

Scoring

| Score | Verdict    | Recommendation                                                                              |
| ----- | ---------- | -------------------------------------------------------------------------------------------- |
| 19-24 | PASS       | Production-ready. Proceed to commit.                                                          |
| 13-18 | NEEDS WORK | Address gaps before deploying to production. OK for dev/staging.                              |
| 0-12  | NOT READY  | Significant operational gaps. Review Operational Readiness rules in code-review-rules.md.     |
<!-- SYNC:graph-assisted-investigation -->

Graph-Assisted Investigation — MANDATORY when `.code-graph/graph.db` exists.

HARD-GATE: MUST ATTENTION run at least ONE graph command on key files before concluding any investigation.

Pattern: Grep finds files → `trace --direction both` reveals full system flow → Grep verifies details

| Task                | Minimum Graph Action                           |
| ------------------- | ---------------------------------------------- |
| Investigation/Scout | `trace --direction both` on 2-3 entry files    |
| Fix/Debug           | `callers_of` on buggy function + `tests_for`   |
| Feature/Enhancement | `connections` on files to be modified          |
| Code Review         | `tests_for` on changed functions               |
| Blast Radius        | `trace --direction downstream`                 |
CLI: `python .claude/scripts/code_graph {command} --json`. Use `--node-mode file` first (10-30x less noise), then `--node-mode function` for detail.

<!-- /SYNC:graph-assisted-investigation --> <!-- SYNC:subagent-return-contract -->

Sub-Agent Return Contract — When this skill spawns a sub-agent, the sub-agent MUST return ONLY this structure. Main agent reads only this summary — NEVER requests full sub-agent output inline.

```markdown
## Sub-Agent Result: [skill-name]

Status: ✅ PASS | ⚠️ PARTIAL | ❌ FAIL
Confidence: [0-100]%

### Findings (Critical/High only — max 10 bullets)

- [severity] [file:line] [finding]

### Actions Taken

- [file changed] [what changed]

### Blockers (if any)

- [blocker description]

Full report: plans/reports/[skill-name]-[date]-[slug].md
```

Main agent reads the Full report file ONLY when: (a) resolving a specific blocker, or (b) building a fix plan. Sub-agent writes the full report incrementally (per SYNC:incremental-persistence) — not held in memory.

<!-- /SYNC:subagent-return-contract -->

Run `python .claude/scripts/code_graph connections <file> --json` on service boundary files for cross-service impact.

Structural Impact Analysis (RECOMMENDED if graph.db exists)

If `.code-graph/graph.db` exists, include structural impact in the production readiness assessment:

  • Run: `python .claude/scripts/code_graph graph-blast-radius --json`
  • High blast radius (>20 impacted nodes) → flag as high-risk deployment
  • Check whether changed functions have test coverage via `python .claude/scripts/code_graph query tests_for <function_name> --json`

Graph-Trace for Production Flow

When the graph DB is available, use `trace` to verify production readiness:

  • `python .claude/scripts/code_graph trace <service-file> --direction downstream --json` — verify all downstream dependencies are accounted for (event handlers, bus consumers, cross-service calls)
  • `python .claude/scripts/code_graph trace <service-file> --direction both --json` — full flow: entry points + downstream cascade
  • Flag any cross-service MESSAGE_BUS consumer that lacks error handling or monitoring — see the sketch below
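What such a consumer should look like, roughly — the `OrderCreatedConsumer` and message shape below are hypothetical, not EasyPlatform's actual message-bus API; the point is the log-with-context-then-rethrow pattern, so the bus can retry or dead-letter.

```csharp
// Hypothetical bus consumer sketch — illustrative types, not the real API.
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

public record OrderCreatedMessage(Guid OrderId, string CorrelationId);

public class OrderCreatedConsumer
{
    private readonly ILogger<OrderCreatedConsumer> logger;

    public OrderCreatedConsumer(ILogger<OrderCreatedConsumer> logger) => this.logger = logger;

    public async Task HandleAsync(OrderCreatedMessage message)
    {
        try
        {
            await ProcessAsync(message);
        }
        catch (Exception ex)
        {
            // Traceable failure: entity ID + correlation ID, never swallowed —
            // rethrow so the bus infrastructure can retry or dead-letter.
            logger.LogError(ex, "OrderCreated handling failed for order {OrderId} (correlation {CorrelationId})",
                message.OrderId, message.CorrelationId);
            throw;
        }
    }

    private Task ProcessAsync(OrderCreatedMessage message) => Task.CompletedTask;
}
```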

Round 2: Focused Re-Review (MANDATORY)

Protocol: Deep Multi-Round Review (inlined via SYNC:double-round-trip-review above)

After completing Round 1 scoring, execute a second full review round:

  1. Re-read the Round 1 score and findings
  2. Re-evaluate ALL scoring criteria — do NOT rely on Round 1 memory
  3. Focus on what Round 1 typically misses:
    • Operational concerns that span multiple services
    • Subtle reliability gaps (retry logic, circuit breakers, timeout handling)
    • Missing observability (structured logging, correlation IDs, metrics)
    • Data integrity edge cases under concurrent load
  4. Re-score all criteria — verify Round 1 scoring accuracy
  5. Update the report with a `## Round 2 Findings (Fresh-Context)` section
  6. Final score must incorporate findings from BOTH rounds

Output Format

```markdown
## SRE Review Results

**Scope:** {files reviewed}
**Date:** {date}
**Score:** {X}/24
**Verdict:** PASS / NEEDS WORK / NOT READY

### Observability ({X}/8)

| #   | Criterion          | Score | Evidence                   |
| --- | ------------------ | ----- | -------------------------- |
| 1   | Structured Logging | 0/1/2 | {file:line or "not found"} |
| 2   | Error Context      | 0/1/2 | ...                        |
| 3   | Metrics Awareness  | 0/1/2 | ...                        |
| 4   | Correlation        | 0/1/2 | ...                        |

### Reliability ({X}/8)

| #   | Criterion         | Score | Evidence |
| --- | ----------------- | ----- | -------- |
| 5   | Retry Strategy    | 0/1/2 | ...      |
| 6   | Timeout Config    | 0/1/2 | ...      |
| 7   | Error Handling    | 0/1/2 | ...      |
| 8   | Fallback Behavior | 0/1/2 | ...      |

### Data Integrity ({X}/4)

| #   | Criterion          | Score | Evidence |
| --- | ------------------ | ----- | -------- |
| 9   | Seed vs Migration  | 0/1/2 | ...      |
| 10  | Seeder Idempotency | 0/1/2 | ...      |

### Database Performance ({X}/4)

| #   | Criterion        | Score | Evidence |
| --- | ---------------- | ----- | -------- |
| 11  | Pagination       | 0/1/2 | ...      |
| 12  | Database Indexes | 0/1/2 | ...      |

### Gaps to Address

- {specific actionable item}

### Recommendation

{Proceed / Address gaps first}
```

Important Notes

  • Advisory only (final VERDICT) — the score and verdict inform the team but do not block commits; the PROCESS steps (graph gate, Round 2, Database Performance Protocol) remain MANDATORY and are not advisory
  • Evidence-based — cite specific file:line for each score
  • Proportional — small bug fixes need less rigor than new endpoints (applies to VERDICT interpretation, not to skipping MANDATORY process steps)
  • Check for project framework patterns (background job handlers, base controller error handling)

Workflow Recommendation

MANDATORY IMPORTANT MUST ATTENTION — NO EXCEPTIONS: If you are NOT already in a workflow, you MUST ATTENTION use `AskUserQuestion` to ask the user. Do NOT judge task complexity or decide this is "simple enough to skip" — the user decides whether to use a workflow, not you:

  1. Activate the `feature` workflow (Recommended) — scout → investigate → plan → cook → review → sre-review → test → docs
  2. Execute `/sre-review` directly — run this skill standalone

Next Steps

MANDATORY IMPORTANT MUST ATTENTION — NO EXCEPTIONS: after completing this skill, you MUST ATTENTION use `AskUserQuestion` to present these options. Do NOT skip because the task seems "simple" or "obvious" — the user decides:

  • "/watzup (Recommended)" — Wrap up and check for doc staleness
  • "/test" — Run tests before wrapping up
  • "Skip, continue manually" — user decides

Closing Reminders

MANDATORY IMPORTANT MUST ATTENTION break work into small todo tasks using `TaskCreate` BEFORE starting. MANDATORY IMPORTANT MUST ATTENTION validate decisions with the user via `AskUserQuestion` — never auto-decide. MANDATORY IMPORTANT MUST ATTENTION add a final review todo task to verify work quality. MANDATORY IMPORTANT MUST ATTENTION READ the following files before starting:

<!-- SYNC:evidence-based-reasoning:reminder -->
  • IMPORTANT MUST ATTENTION cite `file:line` evidence for every claim. Confidence >80% to act, <60% do NOT recommend. <!-- /SYNC:evidence-based-reasoning:reminder --> <!-- SYNC:double-round-trip-review:reminder -->
  • MANDATORY IMPORTANT MUST ATTENTION execute TWO review rounds. Round 2 delegates to fresh code-reviewer sub-agent (zero prior context) — never skip or combine with Round 1. <!-- /SYNC:double-round-trip-review:reminder --> <!-- SYNC:graph-assisted-investigation:reminder -->
  • IMPORTANT MUST ATTENTION run at least ONE graph command on key files before concluding (when graph.db exists). <!-- /SYNC:graph-assisted-investigation:reminder --> <!-- SYNC:critical-thinking-mindset:reminder -->
  • MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact. <!-- /SYNC:critical-thinking-mindset:reminder --> <!-- SYNC:ai-mistake-prevention:reminder -->
  • MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction. <!-- /SYNC:ai-mistake-prevention:reminder -->