Ai-coding-project-boilerplate subagents-orchestration-guide
Coordinates subagent task distribution and collaboration. Controls scale determination and autonomous execution mode.
git clone https://github.com/shinpr/ai-coding-project-boilerplate
T=$(mktemp -d) && git clone --depth=1 https://github.com/shinpr/ai-coding-project-boilerplate "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills-en/subagents-orchestration-guide" ~/.claude/skills/shinpr-ai-coding-project-boilerplate-subagents-orchestration-guide && rm -rf "$T"
.claude/skills-en/subagents-orchestration-guide/SKILL.md
Sub-agents Practical Guide - Orchestration Guidelines for Claude (Me)
This document provides practical behavioral guidelines for me (Claude) to efficiently process tasks by utilizing subagents.
Core Principle: I Am an Orchestrator
Role Definition: I am an orchestrator, not an executor.
Required Actions
- New tasks: ALWAYS start with requirement-analyzer
- During flow execution: STRICTLY follow scale-based flow
- Each phase: DELEGATE to appropriate subagent
- Stop points: ALWAYS wait for user approval
- Investigation: Delegate all investigation to requirement-analyzer or codebase-analyzer (Grep/Glob/Read are specialist-internal tools)
- Analysis/Design: Delegate to the appropriate specialist subagent
- First action: Pass user requirements to requirement-analyzer before any other step
First Action Rule
When receiving a new task, pass user requirements directly to requirement-analyzer. Determine the workflow based on its scale assessment result.
Requirement Change Detection During Flow
During flow execution, if any of the following is detected in a user response, stop the flow and return to requirement-analyzer:
- Mentions of new features/behaviors (additional operation methods, display on different screens, etc.)
- Additions of constraints/conditions (data volume limits, permission controls, etc.)
- Changes in technical requirements (processing methods, output format changes, etc.)
If any one applies -> Restart from requirement-analyzer with integrated requirements
Subagents I Can Utilize
Implementation Support Agents
- quality-fixer: Self-contained processing for overall quality assurance and fixes until completion
- task-decomposer: Appropriate task decomposition of work plans
- task-executor: Individual task execution and structured response
- integration-test-reviewer: Review integration/E2E tests for skeleton compliance
- security-reviewer: Security compliance review against Design Doc and project coding standards after all tasks complete
Document Creation Agents
- requirement-analyzer: Requirement analysis and work scale determination (WebSearch enabled, latest technical information research)
- codebase-analyzer: Analyze existing codebase to produce focused guidance for technical design
- prd-creator: Product Requirements Document creation (WebSearch enabled, market trend research)
- ui-spec-designer: UI Specification creation from PRD and optional prototype code (frontend/fullstack features)
- technical-designer: ADR/Design Doc creation (latest technology research, Property annotation assignment)
- work-planner: Work plan creation from Design Doc and test skeletons
- document-reviewer: Single document quality, completeness, and rule compliance check
- code-verifier: Verify document-code consistency. Pre-implementation: Design Doc claims against existing codebase. Post-implementation: implementation against Design Doc
- design-sync: Design Doc consistency verification (detects explicit conflicts only)
- acceptance-test-generator: Generate separate integration and E2E test skeletons from Design Doc ACs and optional UI Spec
My Orchestration Principles
Delegation Boundary: What vs How
I pass what to accomplish and where to work. Each specialist determines how to execute autonomously.
I pass to specialists (what/where/constraints):
- Task file path — executor agents (task-executor, task-decomposer) receive a task file path; broader scope requires explicit user request
- Target directory or package scope — for discovery/review agents (codebase-analyzer, code-verifier, security-reviewer, integration-test-reviewer)
- Acceptance criteria and hard constraints from the user or design artifacts
I let specialists determine (how):
- Specific commands to run (specialists discover these from project configuration and repo conventions)
- Execution order and tool flags
- Executor/fixer agents: which files to inspect or modify within the given scope
- Review/discovery agents: which files to inspect within the given scope (read-only access)
| Agent | Bad (I prescribe how) | Good (I pass what) |
|---|---|---|
| quality-fixer | "Run these checks: 1. lint 2. test" | "Execute all quality checks and fixes" |
| task-executor | "Edit file X and add handler Y" | "Task file: docs/plans/tasks/003-feature.md" |
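As a minimal sketch of this boundary (the dict shape is illustrative, not the real Task tool schema), a delegation carries what and where, never how:

```python
def delegate(subagent_type: str, description: str, prompt: str) -> dict:
    """Build a delegation payload: what to accomplish and where, never how."""
    return {
        "subagent_type": subagent_type,
        "description": description,
        "prompt": prompt,
    }

# Good: pass the task file path; the executor decides which files to edit.
good = delegate("task-executor", "Execute task",
                "Task file: docs/plans/tasks/003-feature.md")

# Good: state the goal; quality-fixer discovers the project's own checks.
fix = delegate("quality-fixer", "Quality assurance",
               "Execute all quality checks and fixes")
```

The prompt strings mirror the "Good" column above; anything more prescriptive (specific commands, file edits) crosses into the specialist's "how".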
Decision precedence when outputs conflict:
1. User instructions (explicit requests or constraints)
2. Task files and design artifacts (Design Doc, PRD, work plan)
3. Objective repo state (git status, file system, project configuration)
4. Specialist judgment
When two specialists conflict, or when a specialist conflicts with my expectation, I apply the precedence order above: I verify against objective repo state (item 3) and follow specialist output when it aligns with items 1 and 2. When specialist output conflicts with user instructions or design artifacts, user instructions win first, then design artifacts.
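A minimal sketch of this precedence (the source labels and claim values are mine, for illustration):

```python
# Precedence: earlier entries win. Labels mirror the numbered list above.
PRECEDENCE = [
    "user_instructions",    # 1. explicit requests or constraints
    "design_artifacts",     # 2. task files, Design Doc, PRD, work plan
    "repo_state",           # 3. git status, file system, project config
    "specialist_judgment",  # 4. subagent output
]

def resolve(claims: dict) -> str:
    """Return the claim from the highest-precedence source present."""
    for source in PRECEDENCE:
        if source in claims:
            return claims[source]
    raise ValueError("no claims to resolve")

# Specialist output conflicts with a design artifact: the artifact wins.
winner = resolve({
    "specialist_judgment": "rename the endpoint",
    "design_artifacts": "keep the endpoint name from the Design Doc",
})
```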
When a specialist cannot determine execution method from repo state and artifacts, the specialist escalates as blocked. I then escalate to the user with the specialist's blocked details.
Task Assignment with Responsibility Separation
I understand each subagent's responsibilities and assign work appropriately:
task-executor Responsibilities (DELEGATE these):
- Implementation work and test addition
- Confirmation that ONLY added tests pass (existing tests are NOT in scope)
- DO NOT delegate quality assurance to task-executor
quality-fixer Responsibilities (DELEGATE these):
- Overall quality assurance (type check, lint, ALL test execution)
- Complete execution of quality error fixes
- Self-contained processing until fix completion
- Final approved judgment (ONLY after all fixes are complete)
Standard Flow I Manage
Basic Cycle: I manage the 4-step cycle of task-executor -> escalation judgment/follow-up -> quality-fixer -> commit.
I repeat this cycle for each task to ensure quality.
Layer-Aware Routing: For cross-layer features, select executor and quality-fixer by task filename pattern (see Cross-Layer Orchestration).
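The basic cycle can be sketched as a loop. This is a hypothetical sketch: `run_agent` and `git_commit` stand in for the real Task and Bash tool calls, and the status handling is simplified.

```python
def run_task_cycle(task_file: str, run_agent, git_commit) -> str:
    """One pass of: task-executor -> escalation judgment -> quality-fixer -> commit."""
    result = run_agent("task-executor", f"Task file: {task_file}")
    if result["status"] == "escalation_needed":
        return "escalate"  # step 2: escalation judgment / follow-up
    quality = run_agent("quality-fixer", f"Task file: {task_file}")
    if quality["status"] != "approved":
        return "handle quality status"  # stub_detected / blocked handling
    git_commit(result.get("changeSummary", "task complete"))
    return "committed"
```

One such cycle runs per task; the commit only happens after quality-fixer's approved judgment.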
Constraints Between Subagents
Important: Subagents cannot directly call other subagents. When coordinating multiple subagents, the main AI (Claude) operates as the orchestrator.
Scale Determination and Document Requirements
| Scale | File Count | PRD | ADR | Design Doc | Work Plan |
|---|---|---|---|---|---|
| Small | 1-2 | Update1 | Not needed | Not needed | Simplified (inline comments only) |
| Medium | 3-5 | Update1 | Conditional2 | Required | Required |
| Large | 6+ | Required3 | Conditional2 | Required | Required |
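The table transcribes into a lookup (a sketch; the key and value names are mine, not a schema from this project):

```python
# Document requirements per scale, transcribed from the table above.
DOC_REQUIREMENTS = {
    "small":  {"prd": "update", "adr": None, "design_doc": None,
               "work_plan": "simplified"},
    "medium": {"prd": "update", "adr": "conditional", "design_doc": "required",
               "work_plan": "required"},
    "large":  {"prd": "required", "adr": "conditional", "design_doc": "required",
               "work_plan": "required"},
}

def scale_for(file_count: int) -> str:
    """Map an estimated changed-file count to a scale bucket."""
    if file_count <= 2:
        return "small"
    if file_count <= 5:
        return "medium"
    return "large"
```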
Structured Response Specifications
Subagents respond in JSON format. Key fields for orchestrator decisions:
| Agent | Key Fields | Decision Logic |
|---|---|---|
| requirement-analyzer | scale, confidence, adrRequired, crossLayerScope, scopeDependencies, questions | Select flow by scale; check adrRequired for ADR step |
| codebase-analyzer | analysisScope.categoriesDetected, dataModel.detected, qualityAssurance (mechanisms[], domainConstraints[]), focusAreas[], existingElements count, limitations | Pass focusAreas to technical-designer as context |
| code-verifier | status (consistent/mostly_consistent/needs_review/inconsistent), consistencyScore, discrepancies[], reverseCoverage (dataOperationsInCode, testBoundariesSectionPresent). Pre-implementation: verifies Design Doc claims against existing codebase. Post-implementation: verifies implementation consistency against Design Doc (pass scoped to changed files) | Flag discrepancies for document-reviewer |
| task-executor | status (escalation_needed/completed), escalation_type (design_compliance_violation/similar_function_found/similar_component_found/investigation_target_not_found/out_of_scope_file/dependency_version_uncertain), testsAdded, requiresTestReview | On escalation_needed: handle by escalation_type |
| quality-fixer | Input: path to current task file (always pass this in orchestrated flows). status (approved/stub_detected/blocked) | On stub_detected: route back to task-executor with details for completion, then re-run quality-fixer. On blocked: see quality-fixer Blocked Handling below |
| document-reviewer | approvalReady (true/false) | Proceed to next step on true; request fixes on false |
| design-sync | sync_status (NO_CONFLICTS/CONFLICTS_FOUND) | On CONFLICTS_FOUND: present conflicts to user before proceeding |
| integration-test-reviewer | status (approved/needs_revision/blocked), requiredFixes | On needs_revision: pass requiredFixes back to task-executor |
| security-reviewer | status (approved/approved_with_notes/needs_revision/blocked), findings, notes, requiredFixes | On needs_revision: pass requiredFixes back to task-executor |
| acceptance-test-generator | status, generatedFiles (integration: path|null, e2e: path|null), budgetUsage, e2eAbsenceReason (null when E2E emitted, otherwise: no_multi_step_journey|below_threshold_user_confirmed) | Verify files exist, pass to work-planner with absence reason |
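The decision logic in this table can be sketched as a dispatcher. Only a few agents are shown, and the returned action labels are mine; the status field names come from the table above.

```python
def next_action(agent: str, response: dict) -> str:
    """Route on the status fields of a subagent's structured response."""
    if agent == "task-executor" and response.get("status") == "escalation_needed":
        return f"handle:{response.get('escalation_type')}"
    if agent == "quality-fixer":
        return {"approved": "commit",
                "stub_detected": "re-invoke task-executor",
                "blocked": "blocked handling"}[response["status"]]
    if agent == "document-reviewer":
        return "proceed" if response.get("approvalReady") else "request fixes"
    if agent == "design-sync" and response.get("sync_status") == "CONFLICTS_FOUND":
        return "present conflicts to user"
    return "proceed"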
quality-fixer Blocked Handling
When quality-fixer returns `status: "blocked"`, discriminate by reason:
- "Cannot determine due to unclear specification" → read `blockingIssues[]` for specification details
- "Execution prerequisites not met" → read `missingPrerequisites[]` with `resolutionSteps` and present to user as actionable next steps
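A sketch of this blocked-reason discrimination (field names follow the quality-fixer response described in this guide; the `action` labels are mine):

```python
def handle_blocked(response: dict) -> dict:
    """Discriminate a quality-fixer blocked response by its reason."""
    reason = response.get("reason", "")
    if "unclear specification" in reason:
        return {"action": "escalate",
                "details": response.get("blockingIssues", [])}
    if "prerequisites not met" in reason:
        return {"action": "escalate",
                "details": response.get("missingPrerequisites", []),
                "next_steps": response.get("resolutionSteps", [])}
    # Unknown reason: still escalate, passing the raw reason through.
    return {"action": "escalate", "details": [reason]}
```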
My Basic Flow for Work Planning
When receiving new features or change requests, I first request requirement analysis from requirement-analyzer. According to scale determination:
Large Scale (6+ Files) - 13 Steps (backend) / 15 Steps (frontend/fullstack)
- requirement-analyzer → Requirement analysis + Check existing PRD [Stop]
- prd-creator → PRD creation
- document-reviewer → PRD review [Stop: PRD Approval]
- (frontend/fullstack only) Ask user for prototype code → ui-spec-designer → UI Spec creation
- (frontend/fullstack only) document-reviewer → UI Spec review [Stop: UI Spec Approval]
- technical-designer → ADR creation (if architecture/technology/data flow changes)
- document-reviewer → ADR review (if ADR created) [Stop: ADR Approval]
- codebase-analyzer → Codebase analysis (pass requirement-analyzer output + PRD path)
- technical-designer → Design Doc creation (pass codebase-analyzer output as additional context; cross-layer: per layer, see Cross-Layer Orchestration)
- code-verifier → Verify Design Doc against existing code (doc_type: design-doc)
- document-reviewer → Design Doc review (pass code-verifier results as code_verification; cross-layer: per Design Doc)
- design-sync → Consistency verification [Stop: Design Doc Approval]
- acceptance-test-generator → Test skeleton generation, pass to work-planner (*1)
- work-planner → Work plan creation [Stop: Batch approval]
- task-decomposer → Autonomous execution → Completion report
Medium Scale (3-5 Files) - 9 Steps (backend) / 11 Steps (frontend/fullstack)
- requirement-analyzer → Requirement analysis [Stop]
- (frontend/fullstack only) Ask user for prototype code → ui-spec-designer → UI Spec creation (UI Spec informs component structure for technical design)
- (frontend/fullstack only) document-reviewer → UI Spec review [Stop: UI Spec Approval]
- codebase-analyzer → Codebase analysis (pass requirement-analyzer output)
- technical-designer → Design Doc creation (pass codebase-analyzer output as additional context; cross-layer: per layer, see Cross-Layer Orchestration)
- code-verifier → Verify Design Doc against existing code (doc_type: design-doc)
- document-reviewer → Design Doc review (pass code-verifier results as code_verification; cross-layer: per Design Doc)
- design-sync → Consistency verification [Stop: Design Doc Approval]
- acceptance-test-generator → Test skeleton generation, pass to work-planner (*1)
- work-planner → Work plan creation [Stop: Batch approval]
- task-decomposer → Autonomous execution → Completion report
Small Scale (1-2 Files) - 2 Steps
- work-planner → Simplified work plan creation [Stop: Batch approval]
- Direct implementation → Completion report
Cross-Layer Orchestration
When requirement-analyzer determines the feature spans multiple layers (backend + frontend) via `crossLayerScope`, the following extensions apply. Step numbers below follow the large-scale flow. For medium-scale flows where Design Doc creation starts at step 2, apply the same pattern as steps 2a/2b/3/4.
Design Phase Extensions
Replace the standard Design Doc creation step with per-layer creation:
| Step | Agent | Purpose |
|---|---|---|
| 8 | codebase-analyzer ×2 | Codebase analysis per layer (pass req-analyzer output, filtered to layer) |
| 9 | technical-designer | Backend Design Doc (with backend codebase-analyzer context) |
| 10 | code-verifier | Verify Backend Design Doc against existing code (its result JSON becomes `prior_layer_verification` for step 12) |
| 11 | document-reviewer | Review Backend Design Doc (pass step-10 result as `code_verification` and backend codebase-analyzer JSON as `codebase_analysis`). [Stop on critical issues] — structural defects here block step 12. |
| 12 | technical-designer-frontend | Frontend Design Doc (with frontend codebase-analyzer context + reviewed Backend Design Doc + `prior_layer_verification` from step 10 + UI Spec) |
| 13 | code-verifier | Verify Frontend Design Doc against existing code |
| 14 | document-reviewer | Review Frontend Design Doc (pass step-13 result as `code_verification` and frontend codebase-analyzer JSON as `codebase_analysis`). [Stop on critical issues] — structural defects here block step 15. |
| 15 | design-sync | Cross-layer consistency verification [Stop] |
The codebase-analyzer ×2 invocations can run in parallel. The backend path (steps 9-11) runs sequentially before step 12 so that the frontend designer reads a backend Design Doc whose structural defects (AC gaps, Fact Disposition Table issues, Verification Strategy defects) have already been surfaced by document-reviewer, and whose code/doc discrepancies have already been enumerated by code-verifier. The frontend designer can then identify which backend contracts have known issues via `prior_layer_verification.discrepancies[]` and the step-11 review feedback, and design around those unstable surfaces (route integration points to stable contracts, or record the dependency in `## Cross-Layer Assumptions`).
Layer Context in Design Doc Creation:
- Backend: "Create a backend Design Doc from PRD at [path]. Codebase analysis: [JSON from codebase-analyzer for backend layer]. Focus on: API contracts, data layer, business logic, service architecture."
- Frontend: "Create a frontend Design Doc from PRD at [path]. Codebase analysis: [JSON from codebase-analyzer for frontend layer]. Reviewed Backend Design Doc at [path] — extract API contracts and Integration Points from this document to populate the frontend Integration Point Map. Backend review findings: [critical/important issues from step-11 document-reviewer, if any]. prior_layer_verification: [JSON from code-verifier on backend Design Doc]. Identify unstable backend contracts via `prior_layer_verification.discrepancies[]` and the review findings; limit verified-claim inference to what the verifier output states explicitly. For contracts you must depend on that remain unverified, list them in the `## Cross-Layer Assumptions` section with justification and verification target. Reference UI Spec at [path] for component structure. Focus on: component hierarchy, state management, UI interactions, data fetching."
design-sync: Use frontend Design Doc as source. design-sync auto-discovers other Design Docs in `docs/design/` for comparison.
Work Planning with Multiple Design Docs
Pass all Design Docs to work-planner with vertical slicing instruction:
- Provide all Design Doc paths explicitly
- Instruct: "Compose phases as vertical feature slices — each phase should contain both backend and frontend work for the same feature area, enabling early integration verification per phase."
Layer-Aware Agent Routing
During autonomous execution, route agents by task filename pattern:
| Filename Pattern | Executor | Quality Fixer |
|---|---|---|
| backend tasks (or no layer suffix) | task-executor | quality-fixer |
| frontend tasks | task-executor-frontend | quality-fixer-frontend |
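A sketch of this routing. The exact filename convention is an assumption here (a "frontend" marker in the task filename); the real pattern comes from task-decomposer's naming.

```python
def route(task_filename: str) -> tuple:
    """Return (executor, quality_fixer) for a task file by filename pattern."""
    # Assumed convention: frontend tasks carry "frontend" in the filename.
    if "frontend" in task_filename:
        return ("task-executor-frontend", "quality-fixer-frontend")
    return ("task-executor", "quality-fixer")
```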
Autonomous Execution Mode
Authority Delegation
After starting autonomous execution mode:
- Batch approval for entire implementation phase delegates authority to subagents
- task-executor: Implementation authority (can use Edit/Write)
- quality-fixer: Fix authority (automatic quality error fixes)
Step 2 Execution Details
- `status: escalation_needed` or `status: blocked` -> Escalate to user
- `requiresTestReview` is `true` -> Execute integration-test-reviewer
  - If verdict is `needs_revision` -> Return to task-executor with `requiredFixes`
  - If verdict is `approved` -> Proceed to quality-fixer
Conditions for Stopping Autonomous Execution
Stop autonomous execution and escalate to user in the following cases:
- Escalation from subagent
  - When receiving response with `status: "escalation_needed"`
  - When receiving response with `status: "blocked"`
- When requirement change detected
  - Any match in requirement change detection checklist
  - Stop autonomous execution and re-analyze with integrated requirements in requirement-analyzer
- When work-planner update restriction is violated
  - Requirement changes after task-decomposer starts require overall redesign
  - Restart entire flow from requirement-analyzer
- When user explicitly stops
  - Direct stop instruction or interruption
Prompt Construction Rule
Every subagent prompt must include:
- Input deliverables with file paths (from previous step or prerequisite check)
- Expected action (what the agent should do)
Construct the prompt from the agent's Input Parameters section and the deliverables available at that point in the flow.
Call Example (codebase-analyzer)
- subagent_type: "codebase-analyzer"
- description: "Codebase analysis"
- prompt: "requirement_analysis: [JSON from requirement-analyzer]. prd_path: [path if exists]. requirements: [original user requirements]. Analyze the existing codebase and produce design guidance."
Call Example (code-verifier — design flow)
- subagent_type: "code-verifier"
- description: "Design Doc verification"
- prompt: "doc_type: design-doc document_path: [Design Doc path] Verify Design Doc against existing code."
My Main Roles as Orchestrator
- State Management: Grasp current phase, each subagent's state, and next action
- Information Bridging: Data conversion and transmission between subagents
  - Convert each subagent's output to next subagent's input format
  - Always pass deliverables from previous process to next agent
  - Extract necessary information from structured responses
  - Compose commit messages from changeSummary -> Execute git commit with Bash
  - Explicitly integrate initial and additional requirements when requirements change
codebase-analyzer → technical-designer
Pass to codebase-analyzer: requirement-analyzer JSON output, PRD path (if exists), original user requirements.
Pass to technical-designer: codebase-analyzer JSON output as additional context in the Design Doc creation prompt. Required downstream uses:
- `focusAreas` → canonical disposition-target list for the Fact Disposition Table (one row per focusArea, carrying through `fact_id` and `evidence` verbatim)
- `dataModel`, `dataTransformationPipelines`, `qualityAssurance` → Existing Codebase Analysis, Verification Strategy, and Quality Assurance Mechanisms sections
code-verifier → document-reviewer (Design Doc review)
Pass to code-verifier: Design Doc path (doc_type: design-doc). Omit `code_paths`; the verifier independently discovers code scope from the document.
Pass to document-reviewer: code-verifier JSON output as `code_verification` parameter, and the same codebase-analyzer JSON previously given to the designer as `codebase_analysis`. The reviewer uses `codebase_analysis.focusAreas` to verify Fact Disposition Table coverage.
code-verifier + document-reviewer → next-layer technical-designer (cross-layer flow only)
Pass to next-layer technical-designer: reviewed prior-layer Design Doc path plus `prior_layer_verification` (the JSON from the prior-layer code-verifier). See Cross-Layer Orchestration section for sequencing. Use `prior_layer_verification.discrepancies[]` plus prior-layer review findings to identify unstable contracts. Limit verified-claim inference to what the verifier output states explicitly; when the design must depend on a claim not confirmed by the verifier, record it in the frontend Design Doc's `## Cross-Layer Assumptions` section with justification and a verification target (escalation uses the same section with `verify at: escalation to user` — choose escalation only when the dependency cannot be bounded by a downstream verification step).
technical-designer → work-planner
Pass to work-planner: Design Doc path. Work-planner scans all DD sections and extracts technical requirements per its Step 5 categories (impl-target, connection-switching, contract-change, verification, prerequisite), then produces a Design-to-Plan Traceability table.
Gap handling (orchestrator responsibility): If work-planner outputs a draft plan containing `gap` entries, the orchestrator MUST:
- Present the gap entries to the user with justifications
- Keep the plan in draft status until the user confirms each gap
- Do NOT pass the plan to downstream agents (task-decomposer, etc.) until all gaps are resolved or confirmed

Unjustified gaps are errors — return to work-planner to add covering tasks or justification.
*1 acceptance-test-generator → work-planner
Pass to acceptance-test-generator:
- Design Doc: [path]
- UI Spec: [path] (if exists)
Orchestrator verification items:
- Verify `generatedFiles.integration` is a valid path (when not null) and the file exists
- Verify `generatedFiles.e2e` is a valid path (when not null) and the file exists
- When `generatedFiles.e2e` is null, verify `e2eAbsenceReason` is present — this is intentional absence, not an error
Pass to work-planner:
- Integration test file: [path] (create and execute simultaneously with each phase implementation)
- E2E test file: [path] or null (execute only in final phase, when provided)
- E2E absence reason: [reason] (when E2E is null — pass this so work-planner can skip E2E Gap Check for intentional absence)
On error: Escalate to user if integration file generation failed unexpectedly (status != completed). E2E being null with a valid absence reason is not an error.
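A sketch of these orchestrator checks, assuming the response schema shown in the structured-response table (`generatedFiles`, `e2eAbsenceReason`); the error strings are mine:

```python
import os

def verify_generated(response: dict) -> list:
    """Check acceptance-test-generator output per the rules above."""
    errors = []
    files = response.get("generatedFiles", {})
    integration, e2e = files.get("integration"), files.get("e2e")
    if integration is None or not os.path.exists(integration):
        errors.append("integration skeleton missing")
    if e2e is not None:
        if not os.path.exists(e2e):
            errors.append("e2e skeleton path invalid")
    elif not response.get("e2eAbsenceReason"):
        # e2e null WITH a reason is intentional absence, not an error
        errors.append("e2e null without e2eAbsenceReason")
    return errors
```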
- ADR Status Management: Update ADR status after user decision (Accepted/Rejected)
Important Constraints
- Quality check is MANDATORY: quality-fixer approval REQUIRED before commit
- Structured response is MANDATORY: Information transmission between subagents MUST use JSON format
- Approval management: Document creation -> Execute document-reviewer -> Get user approval BEFORE proceeding
- Flow confirmation: After getting approval, ALWAYS check next step with work planning flow (large/medium/small scale)
- Consistency verification: When subagent outputs conflict, apply Decision precedence (see Delegation Boundary section)
Progress Tracking
Register overall phases using TaskCreate. Update each phase with TaskUpdate as it completes.
Post-Implementation Verification Pass/Fail Criteria
| Verifier | Pass | Fail | Blocked |
|---|---|---|---|
| code-verifier | status is `consistent` or `mostly_consistent` | status is `needs_review` or `inconsistent` | — |
| security-reviewer | status is `approved` or `approved_with_notes` | status is `needs_revision` | status is `blocked` → Escalate to user |
Re-run rule: After fix cycle, re-run only verifiers that returned fail. Verifiers that passed on the previous run are not re-run. Maximum 2 fix cycles — if still failing after 2 cycles, escalate to user with remaining findings.
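The re-run rule can be sketched as a bounded loop. This is a hypothetical sketch: `run_verifier` and `apply_fixes` stand in for the real verifier invocations and the task-executor fix routing, and "fail"/"pass" abbreviate the per-verifier criteria above.

```python
def verification_cycles(verifiers, run_verifier, apply_fixes, max_cycles=2):
    """Run verifiers, fix failures, and re-run only the failed ones."""
    pending = list(verifiers)
    for _ in range(max_cycles):
        # Re-run only verifiers that are still failing; passed ones stay passed.
        pending = [v for v in pending if run_verifier(v) == "fail"]
        if not pending:
            return "all passed"
        apply_fixes(pending)  # route requiredFixes back to task-executor
    # Still failing after the maximum fix cycles: escalate with findings.
    return "escalate"
```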