EasyPlatform code-auto

[Implementation] [AUTO] Start coding & testing an existing plan (trust me bro)

Install

Source · Clone the upstream repo:
`git clone https://github.com/duc01226/EasyPlatform`

Claude Code · Install into `~/.claude/skills/`:
`T=$(mktemp -d) && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/code-auto" ~/.claude/skills/duc01226-easyplatform-code-auto && rm -rf "$T"`

Manifest: `.claude/skills/code-auto/SKILL.md`

Source content

[IMPORTANT] Use `TaskCreate` to break ALL work into small tasks BEFORE starting — including tasks for each file read. This prevents context loss from long files. For simple tasks, AI MUST ATTENTION ask user whether to skip.

<!-- SYNC:critical-thinking-mindset -->

Critical Thinking Mindset — Apply critical thinking, sequential thinking. Every claim needs traced proof, confidence >80% to act. Anti-hallucination: Never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, stay skeptical of own confidence — certainty without evidence is the root of all hallucination.

<!-- /SYNC:critical-thinking-mindset --> <!-- SYNC:ai-mistake-prevention -->

AI Mistake Prevention — Failure modes to avoid on every task:

  • Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
  • Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
  • Trace full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
  • Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just happy path.
  • When debugging, ask "whose responsibility?" before fixing. Trace whether bug is in caller (wrong data) or callee (wrong handling). Fix at responsible layer — never patch symptom site.
  • Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
  • Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
  • Holistic-first debugging — resist nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
  • Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
  • Surface ambiguity before coding — don't pick silently. If request has multiple interpretations, present each with effort estimate and ask. Never assume all-records, file-based, or more complex path.
<!-- /SYNC:ai-mistake-prevention --> <!-- SYNC:understand-code-first -->

Understand Code First — HARD-GATE: Do NOT write, plan, or fix until you READ existing code.

  1. Search 3+ similar patterns (`grep`/`glob`) — cite `file:line` evidence
  2. Read existing files in target area — understand structure, base classes, conventions
  3. Run `python .claude/scripts/code_graph trace <file> --direction both --json` when `.code-graph/graph.db` exists
  4. Map dependencies via `connections` or `callers_of` — know what depends on your target
  5. Write investigation to `.ai/workspace/analysis/` for non-trivial tasks (3+ files)
  6. Re-read analysis file before implementing — never work from memory alone
  7. NEVER invent new patterns when existing ones work — match exactly or document deviation
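As a rough sketch, steps 1, 3, and 5 above might look like the following in shell. The pattern name, source directory, and traced file are hypothetical placeholders — substitute your actual target:

```shell
# Hypothetical sketch of the evidence-gathering gate. "MyService", src/,
# and the traced file are placeholders for the real target area.
pattern="MyService"
mkdir -p .ai/workspace/analysis

{
  echo "## Evidence for: $pattern"
  # Cite file:line hits for similar patterns before writing any code
  grep -rn "$pattern" src/ 2>/dev/null | head -20
} > .ai/workspace/analysis/evidence.md

# Run the graph trace only when the database actually exists
if [ -f .code-graph/graph.db ]; then
  python .claude/scripts/code_graph trace src/MyService.cs --direction both --json
fi
```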

BLOCKED until:

- [ ] Read target files
- [ ] Grep 3+ patterns
- [ ] Graph trace (if graph.db exists)
- [ ] Assumptions verified with evidence

<!-- /SYNC:understand-code-first -->
  • `docs/project-reference/domain-entities-reference.md` — Domain entity catalog, relationships, cross-service sync. Read when task involves business entities/models; content is auto-injected by hook — check for an [Injected: ...] header before reading.

Quick Summary

Goal: Automatically execute an existing plan with testing and code review — no user approval gate (trust mode).

Workflow:

  1. Plan Detection — Find latest plan or use provided path, select next incomplete phase
  2. Analysis & Tasks — Extract tasks into TaskCreate with step numbering
  3. Implementation — Implement phase step-by-step, run type checks
  4. Testing — Tester subagent; must reach 100% pass
  5. Code Review — Code-reviewer subagent; must reach 0 critical issues
  6. Finalize — Update status, docs, auto-commit; optionally loop to next phase

Key Rules:

  • No user approval gate (unlike `/code`, which has a blocking Step 5)
  • Tests must be 100% passing; critical issues must be 0
  • `$ALL_PHASES=Yes` (default) processes all phases in one run
  • Never comment out tests or use fake data to pass

MUST ATTENTION READ `CLAUDE.md`, then THINK HARDER to start working on the following plan:

Be skeptical. Apply critical thinking, sequential thinking. Every claim needs traced proof and a stated confidence percentage (above 80% before acting).

<plan>$ARGUMENTS</plan>

Arguments

  • $PLAN: $1 (a specific plan, or auto-detected; default: latest plan)
  • $ALL_PHASES: $2 (`Yes` to finish all phases in one run, `No` to implement phase-by-phase; default: `Yes`)

Step 0: Plan Detection & Phase Selection

If `$PLAN` is empty:

  1. Find latest `plan.md` in `./plans`
  2. Parse plan for phases and status, auto-select next incomplete

If `$PLAN` is provided: use that plan and detect which phase to work on.
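A minimal sketch of the auto-detection, assuming plans live at `plans/<name>/plan.md` (that layout is an assumption — adjust the glob to match your repo):

```shell
# Hypothetical sketch: pick the most recently modified plan.md under ./plans.
PLAN=$(ls -t plans/*/plan.md 2>/dev/null | head -1)
if [ -z "$PLAN" ]; then
  echo "No plan found in ./plans - ask the user for a path" >&2
else
  echo "Selected plan: $PLAN"
fi
```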

Output:

✓ Step 0: [Plan Name] - [Phase Name]


Workflow Sequence

Rules: Follow steps 1-5 in order. Each step requires output marker `✓ Step N:`. Mark each complete in TaskCreate before proceeding. Do not skip steps.


Step 1: Analysis & Task Extraction

Use the `project-manager` agent to read the plan file completely. Map dependencies. List ambiguities. Identify required skills. If the plan references analysis files in `.ai/workspace/analysis/`, re-read them before implementation.

TaskCreate Initialization:

  • Initialize TaskCreate with `Step 0: [Plan Name] - [Phase Name]` and all steps (1-5)
  • Read phase file; look for tasks/steps/phases/sections and numbered/bulleted lists
  • Convert to TaskCreate tasks with UNIQUE names:
    • Phase Implementation tasks → Step 2.X (Step 2.1, Step 2.2, etc.)
    • Phase Testing tasks → Step 3.X
    • Phase Code Review tasks → Step 4.X

Output:

✓ Step 1: Found [N] tasks across [M] phases - Ambiguities: [list or "none"]


Step 2: Implementation

Implement the selected plan phase step-by-step. Mark tasks complete as done. For UI work, call the `ui-ux-designer` subagent. Run type checking and compile.

Output:

✓ Step 2: Implemented [N] files - [X/Y] tasks complete, compilation passed


Step 3: Testing

Call the `tester` subagent. If ANY tests fail: STOP, call `debugger`, fix, re-run. Repeat until 100% pass.

Testing standards — forbidden: commenting out tests, changing assertions to pass, adding TODO/FIXME to defer fixes.
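The fail-fix-retry loop can be sketched as a shell function. The retry cap and the `true` stand-in for the real test command are illustrative; a .NET project would pass something like `dotnet test` instead:

```shell
# Hypothetical sketch of the Step 3 blocking gate: re-run the suite until
# 100% pass, with a retry cap so a permanently broken build cannot spin.
run_test_gate() {
  test_cmd="$1"            # the project's real test command
  max_attempts="${2:-3}"
  attempt=1
  until $test_cmd; do
    echo "Tests failed (attempt $attempt): call debugger, fix, re-run" >&2
    attempt=$((attempt + 1))
    if [ "$attempt" -gt "$max_attempts" ]; then
      echo "Gate blocked: still failing after $max_attempts attempts" >&2
      return 1
    fi
  done
  echo "Step 3 gate passed: 100% tests green"
}

run_test_gate true   # demo: 'true' stands in for the real suite
```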

Output:

✓ Step 3: Tests [X/X passed] - All requirements met

Validation: If X ≠ total, Step 3 INCOMPLETE - do not proceed.


Step 4: Code Review

Call the `code-reviewer` subagent. If critical issues are found: STOP, fix, re-run `tester`, re-run `code-reviewer`. Repeat until no critical issues remain.

Output:

✓ Step 4: Code reviewed - [0] critical issues

Validation: If critical issues > 0, Step 4 INCOMPLETE - do not proceed.


Step 5: Finalize

  1. STATUS UPDATE (PARALLEL): Call `project-manager` + `docs-manager` subagents.
  2. ONBOARDING CHECK: Detect onboarding requirements + generate summary.
  3. AUTO-COMMIT: Call `git-manager` subagent. Run only if Steps 1-2 succeeded and tests passed.

If $ALL_PHASES is `Yes`: proceed to the next phase automatically. If $ALL_PHASES is `No`: ask the user before proceeding to the next phase.

If last phase: generate a summary report. Ask the user about `/preview` and `/plan-archive`.
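The phase loop, as a control-flow sketch (the phase names are placeholders, and each echo stands in for a full Step 1-5 run on that phase):

```shell
# Hypothetical sketch of the $ALL_PHASES loop. The first echo stands in
# for the full Step 1-5 sequence on each phase.
run_phases() {
  all="$1"
  for phase in "Phase 1" "Phase 2" "Phase 3"; do
    echo "Running Steps 1-5 for: $phase"
    if [ "$all" != "Yes" ]; then
      echo "ALL_PHASES=No: ask the user before the next phase"
      break
    fi
  done
}

run_phases Yes   # default: process every phase in one run
```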

Output:

✓ Step 5: Finalize - Status updated - Git committed


Critical Enforcement Rules

Step output format:

✓ Step [N]: [Brief status] - [Key metrics]

TaskCreate tracking required: Initialize at Step 0, mark each step complete before next.

Mandatory subagent calls: Step 3: `tester` | Step 4: `code-reviewer` | Step 5: `project-manager` AND `docs-manager` AND `git-manager`

Blocking gates:

  • Step 3: Tests must be 100% passing
  • Step 4: Critical issues must be 0

Do not skip steps. Do not proceed if validation fails. One plan phase per command run.


Next Steps (Standalone: MUST ATTENTION ask user via `AskUserQuestion`. Skip if inside workflow.)

MANDATORY IMPORTANT MUST ATTENTION — NO EXCEPTIONS: If this skill was called outside a workflow, you MUST ATTENTION use `AskUserQuestion` to present these options. Do NOT skip because the task seems "simple" or "obvious" — the user decides:

  • "Proceed with full workflow (Recommended)" — I'll detect the best workflow to continue from here (code implemented). This ensures review, testing, and docs steps aren't skipped.
  • "/code-simplifier" — Simplify implementation
  • "/workflow-review-changes" — Review changes before commit
  • "Skip, continue manually" — user decides

If already inside a workflow, skip — the workflow handles sequencing.

Closing Reminders

  • IMPORTANT MUST ATTENTION break work into small todo tasks using `TaskCreate` BEFORE starting
  • IMPORTANT MUST ATTENTION search codebase for 3+ similar patterns before creating new code
  • IMPORTANT MUST ATTENTION cite `file:line` evidence for every claim (confidence >80% to act)
  • IMPORTANT MUST ATTENTION add a final review todo task to verify work quality
  • IMPORTANT MUST ATTENTION validate decisions with user via `AskUserQuestion` — never auto-decide
  • MANDATORY IMPORTANT MUST ATTENTION READ the following files before starting: <!-- SYNC:understand-code-first:reminder -->
  • IMPORTANT MUST ATTENTION search 3+ existing patterns and read code BEFORE any modification. Run graph trace when graph.db exists. <!-- /SYNC:understand-code-first:reminder -->
  • IMPORTANT MUST ATTENTION READ `CLAUDE.md` before starting <!-- SYNC:critical-thinking-mindset:reminder -->
  • MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact. <!-- /SYNC:critical-thinking-mindset:reminder --> <!-- SYNC:ai-mistake-prevention:reminder -->
  • MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction. <!-- /SYNC:ai-mistake-prevention:reminder -->