EasyPlatform scan-code-review-rules
[Documentation] Scan project and populate/sync docs/project-reference/code-review-rules.md with code conventions, anti-patterns, architecture rules, and review checklists.
```shell
# Option 1: clone the full repo
git clone https://github.com/duc01226/EasyPlatform

# Option 2: copy only this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/scan-code-review-rules" ~/.claude/skills/duc01226-easyplatform-scan-code-review-rules && rm -rf "$T"
```
.claude/skills/scan-code-review-rules/SKILL.md

<!-- SYNC:critical-thinking-mindset -->
[IMPORTANT] Use `TaskCreate` to break ALL work into small tasks BEFORE starting — including a task for each file read. This prevents context loss from long files. For simple tasks, ask the user whether to skip this.
<!-- /SYNC:critical-thinking-mindset --> <!-- SYNC:ai-mistake-prevention -->
Critical Thinking Mindset — apply critical thinking and sequential thinking. Every claim needs traced proof; require >80% confidence before acting. Anti-hallucination: never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check your output for errors, cross-reference independently, and stay skeptical of your own confidence; certainty without evidence is the root of all hallucination.
<!-- /SYNC:ai-mistake-prevention -->
AI Mistake Prevention — failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether bug is in caller (wrong data) or callee (wrong handling). Fix at responsible layer — never patch symptom site.
- Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging — resist nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding — don't pick silently. If request has multiple interpretations, present each with effort estimate and ask. Never assume all-records, file-based, or more complex path.
Prerequisites — MUST READ before executing:
<!-- SYNC:scan-and-update-reference-doc -->
Scan & Update Reference Doc — surgical updates only, never a full rewrite.
- Read the existing doc first — understand its current structure and manual annotations
- Detect mode: placeholder (headings only, no content) → init mode; has content → sync mode
- Scan the codebase for the current state (grep/glob for patterns, counts, file paths)
- Diff findings against doc content — identify only the stale sections
- Update ONLY sections where the code diverged from the doc; preserve manual annotations
- Update metadata (date, counts, version) in the frontmatter or header
- NEVER rewrite the entire doc. NEVER remove a section without evidence that it is obsolete.
<!-- /SYNC:scan-and-update-reference-doc -->
<!-- SYNC:output-quality-principles -->
Output Quality — token efficiency without sacrificing quality.
- No inventories/counts — AI can `grep | wc -l`; counts go stale instantly
- No directory trees — AI can `ls`/`glob`; use 1-line path conventions
- No TOCs — AI reads linearly; a TOC wastes tokens
- No examples that repeat what a rule already says — one example only if non-obvious
- Lead with the answer, not the reasoning; skip filler words and preamble
- Sacrifice grammar for concision in reports
- Unresolved questions go at the end, if any
<!-- /SYNC:output-quality-principles -->
Quick Summary
Goal: Scan the project codebase for established conventions, lint rules, common patterns, and anti-patterns, then populate `docs/project-reference/code-review-rules.md` with actionable review rules and checklists. (Content is auto-injected by a hook — check for an [Injected: ...] header before reading.)
Workflow:
- Read — Load current target doc, detect init vs sync mode
- Scan — Discover conventions and patterns via parallel sub-agents
- Report — Write findings to external report file
- Generate — Build/update reference doc from report
- Verify — Validate rules reference real code patterns
Key Rules:
- Generic — works with any language/framework combination
- Derive rules from ACTUAL codebase patterns, not generic best practices
- Every rule should have a "DO" example from the project and a "DON'T" counterexample
- Focus on project-specific conventions that differ from framework defaults
Be skeptical. Apply critical thinking and sequential thinking. Every claim needs traced proof and a stated confidence percentage (act only above 80%).
Scan Code Review Rules
Phase 0: Read & Assess
- Read `docs/project-reference/code-review-rules.md`
- Detect mode: init (placeholder) or sync (populated)
- If sync: extract existing sections and note what's already well-documented
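The init-vs-sync detection above can be sketched as a one-pass check — assuming a "placeholder" doc contains only headings, blank lines, and HTML comments, and any other line counts as real content:

```shell
# Sketch: decide init vs sync mode for the target reference doc.
# Assumption: placeholder docs hold only headings, blanks, and HTML comments.
detect_mode() { # $1 = doc path
  if grep -qvE '^(#|[[:space:]]*$|<!--)' "$1"; then
    echo "sync"   # real content present -> surgical update
  else
    echo "init"   # headings only -> populate from scratch
  fi
}

# Demo against a throwaway placeholder doc:
tmp=$(mktemp)
printf '# Code Review Rules\n\n## Backend Rules\n' > "$tmp"
detect_mode "$tmp"   # prints "init"
rm -f "$tmp"
```

A doc with any non-heading line flips the result to `sync`, which triggers the diff-and-patch path instead of a full population.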
Phase 1: Plan Scan Strategy
Discover code quality infrastructure:
- Linter configs (`.eslintrc`, `.editorconfig`, `stylecop.json`, `.prettierrc`, `ruff.toml`)
- CI quality gates (build scripts, test requirements, coverage thresholds)
- Code analysis configs (SonarQube, CodeClimate, custom analyzers)
- Existing code standards docs (CONTRIBUTING.md, CODING_STANDARDS.md)
- Git hooks (pre-commit, husky configs)
Use `docs/project-config.json`, if available, for architecture rules and naming conventions.
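The discovery list above can be sketched as a single pass over common config locations — the file names below are typical defaults, not an exhaustive or project-confirmed list:

```shell
# Sketch: surface code-quality infrastructure in one pass.
# File names are common defaults; extend per project.
list_quality_configs() {
  for f in .eslintrc .eslintrc.json .prettierrc .editorconfig stylecop.json \
           ruff.toml .pre-commit-config.yaml CONTRIBUTING.md \
           CODING_STANDARDS.md docs/project-config.json; do
    if [ -e "$f" ]; then echo "found: $f"; fi
  done
  if [ -d .husky ]; then echo "found: .husky/ hooks"; fi
}

list_quality_configs
```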
Phase 2: Execute Scan (Parallel Sub-Agents)
Launch 3 Explore agents in parallel:
Agent 1: Backend Rules
- Grep for naming conventions (class suffixes, method prefixes, interface naming)
- Find common base classes and when they're used vs not used
- Discover error handling patterns (try-catch, Result types, error middleware)
- Find dependency injection patterns (registration conventions, lifetime choices)
- Look for anti-patterns (direct DB access from controllers, business logic in wrong layer)
- Identify logging conventions (structured logging, log levels, correlation IDs)
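Agent 1's naming-convention scan might start with a suffix census like this sketch — the suffix list and the `*.cs` glob are assumptions, so derive the real list from the codebase:

```shell
# Sketch: count how often candidate class-name suffixes appear, to surface
# the project's actual naming conventions. Suffixes and the *.cs glob are
# illustrative assumptions.
count_suffix_files() { # $1 = class-name suffix, $2 = file glob
  grep -rEl "class [A-Za-z0-9_]*$1\b" --include="$2" . 2>/dev/null | wc -l
}

for suffix in Service Repository Controller Handler Command Query; do
  echo "$suffix: $(count_suffix_files "$suffix" '*.cs') files"
done
```

High counts indicate an established convention worth encoding as a rule; near-zero counts suggest the suffix is not part of this project's vocabulary.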
Agent 2: Frontend Rules
- Grep for component conventions (naming, file organization, template patterns)
- Find state management rules (what goes in store vs component vs service)
- Discover styling conventions (BEM, CSS modules, utility classes, naming)
- Find subscription/memory management patterns (cleanup, unsubscribe)
- Look for accessibility patterns (ARIA, semantic HTML, keyboard navigation)
- Identify performance patterns (lazy loading, change detection, memoization)
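Agent 2's cleanup check can be sketched by comparing subscription sites to cleanup signals — RxJS/Angular patterns and a `src/` root are assumed here; adjust both to the actual framework:

```shell
# Sketch: compare subscription sites to cleanup signals to spot leak-prone
# frontend code. Patterns assume an RxJS/Angular-style codebase.
count_matches() { # $1 = ERE pattern, $2 = directory
  grep -rE "$1" --include='*.ts' "$2" 2>/dev/null | wc -l
}

subs=$(count_matches '\.subscribe\(' src)
cleanups=$(count_matches 'unsubscribe\(|takeUntil\(|ngOnDestroy' src)
echo "subscribe sites: $subs, cleanup signals: $cleanups"
```

A large gap between the two counts is a signal to inspect components for missing teardown, not proof of a leak.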
Agent 3: Architecture Rules
- Find layer boundaries (what imports what, dependency direction)
- Discover cross-service communication patterns (direct calls vs messages)
- Find shared code conventions (what's shared vs duplicated)
- Look for testing conventions (test naming, test organization, mock patterns)
- Identify security patterns (auth checks, input validation, output encoding)
- Find configuration patterns (env vars, config files, secrets management)
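Agent 3's layer-boundary check can be sketched as below — "Domain must not depend on Infrastructure" is an assumed rule, and `src/Domain`/`Infrastructure` are assumed names, not confirmed project paths:

```shell
# Sketch: flag files in an inner layer that reference an outer-layer
# namespace. Layer directory and namespace names are illustrative.
check_layer_boundary() { # $1 = inner-layer dir, $2 = forbidden namespace
  grep -rlE "(using|import).*$2" "$1" 2>/dev/null
}

check_layer_boundary src/Domain Infrastructure || echo "no violations found"
```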
Write all findings to `plans/reports/scan-code-review-rules-{YYMMDD}-{HHMM}-report.md`.
Phase 3: Analyze & Generate
Read the report. Build these sections:
Target Sections
| Section | Content |
|---|---|
| Critical Rules | Top 5-10 rules that cause the most bugs/issues if violated |
| Backend Rules | Naming, patterns, error handling, DI conventions with DO/DON'T examples |
| Frontend Rules | Component, state, styling, cleanup conventions with DO/DON'T examples |
| Architecture Rules | Layer boundaries, cross-service rules, shared code conventions |
| Anti-Patterns | Common mistakes found in codebase with explanations and fixes |
| Decision Trees | Flowcharts for common decisions (which base class, where to put logic, etc.) |
| Checklists | PR review checklists for backend, frontend, and cross-cutting concerns |
Content Rules
- Every rule must have a "DO" code example from the actual project
- Every rule should have a "DON'T" counterexample (real or realistic)
- Use `file:line` references for all code examples
- Prioritize rules by impact (bugs prevented, not style preferences)
- Decision trees can use markdown flowchart format or nested bullet lists
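A resulting rule entry might look like the sketch below — the rule, the `file:line` path, and both snippets are illustrative placeholders, not drawn from the real codebase:

```markdown
### Rule: No direct DB access from controllers
Impact: keeps data-access logic testable and out of the API layer.

DO (src/Api/UserController.cs:42 — illustrative path):
    return await userService.GetUserAsync(id);

DON'T:
    return await dbContext.Users.FindAsync(id); // controller touching DbContext
```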
Phase 4: Write & Verify
- Write the updated doc with `<!-- Last scanned: YYYY-MM-DD -->` at the top
- Verify: 5 code-example file paths exist (Glob check)
- Verify: anti-pattern examples are realistic (not fabricated)
- Report: sections updated, rules count, anti-patterns discovered
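The path-existence part of the verify step can be sketched as follows — the `file:line` regex is a guess at the citation format, so tune it to the doc's actual style:

```shell
# Sketch: confirm that every file:line reference cited in the doc points at
# a real file. The regex assumes citations like src/Foo/Bar.cs:42.
verify_refs() { # $1 = doc path
  grep -oE '[A-Za-z0-9_./-]+\.[A-Za-z]+:[0-9]+' "$1" 2>/dev/null | sort -u |
    while IFS=: read -r path _line; do
      [ -f "$path" ] || echo "missing: $path"
    done
}

verify_refs docs/project-reference/code-review-rules.md
```

Each `missing:` line marks a rule whose DO example should be re-grepped against the current codebase before the doc ships.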
Closing Reminders
- [IMPORTANT] Break work into small todo tasks using `TaskCreate` BEFORE starting
- [IMPORTANT] Search the codebase for 3+ similar patterns before creating new code
- [IMPORTANT] Cite `file:line` evidence for every claim (confidence >80% to act)
- [IMPORTANT] Add a final review todo task to verify work quality
- [IMPORTANT] Execute two review rounds (Round 1: understand; Round 2: catch missed issues) <!-- SYNC:scan-and-update-reference-doc:reminder -->
- [IMPORTANT] Read the existing doc first, scan the codebase, diff, and update surgically; never rewrite the entire doc. <!-- /SYNC:scan-and-update-reference-doc:reminder --> <!-- SYNC:output-quality-principles:reminder -->
- [IMPORTANT] Follow the output quality rules: no counts/trees/TOCs, rules over descriptions, one example per pattern, primacy-recency anchoring. <!-- /SYNC:output-quality-principles:reminder --> <!-- SYNC:critical-thinking-mindset:reminder -->
- [IMPORTANT] Apply critical thinking — every claim needs traced proof, with >80% confidence to act. Anti-hallucination: never present a guess as fact. <!-- /SYNC:critical-thinking-mindset:reminder --> <!-- SYNC:ai-mistake-prevention:reminder -->
- [IMPORTANT] Apply AI mistake prevention — holistic-first debugging, fix at the responsible layer, surface ambiguity before coding, re-read files after compaction. <!-- /SYNC:ai-mistake-prevention:reminder -->