# Orchestrator-supaconductor loop-planner
Evaluate-Loop Step 1: PLAN. Use this agent when starting a new track or feature to create a detailed execution plan. Reads spec.md, loads project context, and produces a phased plan.md with specific tasks, acceptance criteria, and dependencies. Triggered by: 'plan feature', 'create plan', 'start track', '/conductor implement' (planning phase).
```shell
git clone https://github.com/Ibrahim-3d/orchestrator-supaconductor
```

Or install only this skill into `~/.claude/skills`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/Ibrahim-3d/orchestrator-supaconductor "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/skills/loop-planner" ~/.claude/skills/ibrahim-3d-orchestrator-supaconductor-loop-planner \
  && rm -rf "$T"
```
**`skills/loop-planner/SKILL.md`**

# Loop Planner Agent — Step 1: PLAN
Creates detailed, scoped execution plans for tracks. This is Step 1 of the Evaluate-Loop.
## Inputs Required
- Track's `spec.md` — what needs to be built
- `conductor/tracks.md` — what's already been done (to avoid overlap)
- Track's `plan.md` (if exists) — check for prior progress
## Workflow
### 1. Load Context
`read_file` in order:

- `conductor/tracks.md` — completed tracks and their deliverables
- Track's `spec.md` — requirements for this track
- Track's `plan.md` (if exists) — check what's already done `[x]`
- `conductor/product.md` — product scope reference
- `conductor/tech-stack.md` — technical constraints
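The load order above can be sketched as a small loader. This is illustrative only: `read_file` is the agent's tool, approximated here with `pathlib`, and treating every file except `plan.md` as required is an assumption (only `plan.md` is marked optional in the list).

```python
from pathlib import Path

# Load order from the list above; only plan.md is explicitly optional.
CONTEXT_FILES = [
    ("conductor/tracks.md", True),
    ("spec.md", True),
    ("plan.md", False),
    ("conductor/product.md", True),
    ("conductor/tech-stack.md", True),
]

def load_context(root: str) -> dict:
    """Read context files in order; skip missing optional files, fail on missing required ones."""
    context = {}
    for rel_path, required in CONTEXT_FILES:
        path = Path(root) / rel_path
        if path.exists():
            context[rel_path] = path.read_text()
        elif required:
            raise FileNotFoundError(f"Required context file missing: {path}")
    return context
```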
### 2. Identify Scope Boundaries
Before writing any plan:
- List what `spec.md` asks for (deliverables)
- List what's already done in other tracks (from `tracks.md`)
- Identify overlap — anything in the spec that was already delivered elsewhere
- Flag overlap items as "SKIP — already done in [TRACK-ID]"
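One way to mechanize the overlap check (a sketch: it assumes deliverables have already been extracted from `spec.md` and `tracks.md` as comparable strings, which the skill does not specify):

```python
def check_overlap(spec_deliverables: list[str],
                  completed: dict[str, list[str]]) -> list[str]:
    """Return one plan line per deliverable, flagging items already done elsewhere.

    `completed` maps track IDs to the deliverables they shipped.
    """
    lines = []
    for item in spec_deliverables:
        # Find any completed track that already shipped this deliverable
        done_in = next(
            (tid for tid, items in completed.items() if item in items), None
        )
        if done_in:
            lines.append(f"SKIP — already done in [{done_in}]: {item}")
        else:
            lines.append(f"TODO: {item}")
    return lines
```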
### 3. Create Phased Plan with DAG
`write_file` the track's `plan.md` with this structure (now includes a dependency DAG for parallel execution):
````markdown
# [Track Name] — Execution Plan

## Context

- **Track**: [ID]
- **Spec**: [one-line summary]
- **Dependencies**: [list prerequisite tracks]
- **Overlap Check**: [tracks checked, conflicts found/none]
- **Execution Mode**: PARALLEL | SEQUENTIAL

## Dependency Graph

<!-- YAML DAG for parallel execution -->
```yaml
dag:
  nodes:
    - id: "1.1"
      name: "Task name"
      type: "code"  # code | ui | integration | test | docs | config
      files: ["src/path/to/file.ts"]
      depends_on: []
      estimated_duration: "30m"
      phase: 1
    - id: "1.2"
      name: "Another task"
      type: "code"
      files: ["src/another/file.ts"]
      depends_on: []
      phase: 1
    - id: "1.3"
      name: "Depends on 1.1 and 1.2"
      type: "code"
      files: ["src/path/to/file.ts"]
      depends_on: ["1.1", "1.2"]
      phase: 1
  parallel_groups:
    - id: "pg-1"
      tasks: ["1.1", "1.2"]
      conflict_free: true
    - id: "pg-2"
      tasks: ["1.3", "1.4"]
      conflict_free: false
      shared_resources: ["src/path/to/file.ts"]
      coordination_strategy: "file_lock"
```

## Phase 1: [Phase Name]

### Tasks

- Task 1.1: [Specific action] <!-- deps: none, parallel: pg-1 -->
  - Type: code
  - Acceptance: [How to verify this is done]
  - Files: [Expected files to create/modify]
- Task 1.2: [Specific action] <!-- deps: none, parallel: pg-1 -->
  - Type: code
  - Acceptance: [How to verify]
  - Files: [Expected files]
- Task 1.3: [Depends on above] <!-- deps: 1.1, 1.2 -->
  - Type: code
  - Acceptance: [How to verify]
  - Files: [Expected files]

## Phase 2: [Phase Name]

...

## Discovered Work

<!-- Add items here during execution if scope expansion is needed -->
````

### 3.1 DAG Generation Algorithm

When creating the plan, build the dependency graph:

```python
from collections import defaultdict

# determine_task_type, estimate_duration, and compute_topological_levels
# are helpers defined elsewhere in this skill.

def generate_dag(tasks: list) -> dict:
    """
    Generate DAG from task list.

    1. Create nodes for each task
    2. Analyze dependencies (explicit + file-based)
    3. Identify parallel groups (tasks at same level with no conflicts)
    4. Detect shared resources
    """
    nodes = []
    for task in tasks:
        nodes.append({
            "id": task['id'],
            "name": task['name'],
            "type": determine_task_type(task),
            "files": task.get('files', []),
            "depends_on": task.get('depends_on', []),
            "estimated_duration": estimate_duration(task),
            "phase": task['phase']
        })

    # Build adjacency list
    dependents = defaultdict(list)
    for node in nodes:
        for dep in node['depends_on']:
            dependents[dep].append(node['id'])

    # Compute topological levels
    levels = compute_topological_levels(nodes)

    # Group tasks by level for parallel execution
    parallel_groups = []
    for level_num, level_tasks in enumerate(levels):
        if len(level_tasks) >= 2:
            # Analyze file conflicts
            file_usage = defaultdict(list)
            for task_id in level_tasks:
                task = next(n for n in nodes if n['id'] == task_id)
                for f in task.get('files', []):
                    file_usage[f].append(task_id)

            # Find conflict-free groups
            shared_files = {
                f: task_ids for f, task_ids in file_usage.items()
                if len(task_ids) > 1
            }
            if not shared_files:
                parallel_groups.append({
                    "id": f"pg-{level_num + 1}",
                    "tasks": level_tasks,
                    "conflict_free": True
                })
            else:
                parallel_groups.append({
                    "id": f"pg-{level_num + 1}",
                    "tasks": level_tasks,
                    "conflict_free": False,
                    "shared_resources": list(shared_files.keys()),
                    "coordination_strategy": "file_lock"
                })

    return {"nodes": nodes, "parallel_groups": parallel_groups}
```
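The `compute_topological_levels` helper referenced in the algorithm is not defined in the skill; a minimal sketch using Kahn's algorithm, which groups nodes level by level and also surfaces dependency cycles:

```python
from collections import defaultdict

def compute_topological_levels(nodes: list) -> list:
    """Group node IDs into levels: a node's level is one past its deepest dependency."""
    ids = {n['id'] for n in nodes}
    # Count only dependencies that exist in this node set
    indegree = {
        n['id']: len([d for d in n['depends_on'] if d in ids]) for n in nodes
    }
    dependents = defaultdict(list)
    for n in nodes:
        for dep in n['depends_on']:
            dependents[dep].append(n['id'])

    levels = []
    current = sorted(nid for nid, deg in indegree.items() if deg == 0)
    while current:
        levels.append(current)
        nxt = []
        for nid in current:
            for child in dependents[nid]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    nxt.append(child)
        current = sorted(nxt)

    if sum(len(level) for level in levels) != len(nodes):
        raise ValueError("Dependency cycle detected in plan DAG")
    return levels
```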
### 3.2 Bite-Sized Task Format
Each task MUST follow the TDD bite-sized format. Every task is one focused action (2-5 minutes) with exact file paths and complete code:
````markdown
### Task 1.1: [Component Name]

**Files:**
- Create: `exact/path/to/file.ts`
- Modify: `exact/path/to/existing.ts:123-145`
- Test: `tests/exact/path/to/test.ts`

**Step 1: Write the failing test**

```typescript
test('specific behavior', () => {
  const result = specificFunction(input);
  expect(result).toBe(expected);
});
```

**Step 2: Run test to verify it fails**

Run: `npm test -- --grep "specific behavior"`
Expected: FAIL with "specificFunction is not defined"

**Step 3: Write minimal implementation**

```typescript
export function specificFunction(input: string): string {
  return expected;
}
```

**Step 4: Run test to verify it passes**

Run: `npm test -- --grep "specific behavior"`
Expected: PASS

**Step 5: Commit**

```bash
git add tests/path/test.ts src/path/file.ts
git commit -m "feat: add specific feature"
```
````
Key rules:
- Exact file paths always — no "add to the appropriate file"
- Complete code in plan — not "add validation" or "implement logic"
- Exact commands with expected output
- DRY, YAGNI, TDD, frequent commits
### 3.3 Task Type Detection
Automatically detect the task type from keywords in the description and file paths, for example:

| Indicators | Type |
|---|---|
| `test`, `spec`, `.test.` | test |
| `.tsx`, `component`, `style` | ui |
| `api`, `endpoint`, `webhook` | integration |
| `README`, `docs`, `.md` | docs |
| `config`, `.json`, `.yaml` | config |
| Default | code |
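A `determine_task_type` matching this idea could look like the following sketch. The keyword sets are illustrative (not a spec); first match wins, with `code` as the fallback:

```python
def determine_task_type(task: dict) -> str:
    """Classify a task by keywords in its name and file paths (illustrative keyword sets)."""
    text = " ".join([task.get('name', '')] + task.get('files', [])).lower()
    rules = [
        ("test",        ["test", "spec", ".test."]),
        ("ui",          [".tsx", "component", "style"]),
        ("integration", ["api", "endpoint", "webhook"]),
        ("docs",        ["readme", "docs", ".md"]),
        ("config",      ["config", ".json", ".yaml"]),
    ]
    for task_type, keywords in rules:
        if any(k in text for k in keywords):
            return task_type
    return "code"  # default
```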
### 3.4 Parallel Group Identification
Tasks can run in parallel if:

- No dependency relationship (neither depends on the other)
- They are at the same topological level
- Either:
  - No shared files (`conflict_free: true`), or
  - Shared files with a coordination strategy (`conflict_free: false`)
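These conditions can be checked pairwise. A sketch (`levels` is assumed to be the output of a topological leveling such as `compute_topological_levels`; this variant reports whether two tasks can run concurrently without coordination):

```python
def can_parallelize(a: dict, b: dict, levels: list) -> bool:
    """True if two task nodes may run concurrently without coordination."""
    # Condition 1: no dependency relationship in either direction
    if a['id'] in b.get('depends_on', []) or b['id'] in a.get('depends_on', []):
        return False
    # Condition 2: both tasks sit at the same topological level
    level_of = {nid: i for i, level in enumerate(levels) for nid in level}
    if level_of[a['id']] != level_of[b['id']]:
        return False
    # Condition 3: conflict-free only if their file sets are disjoint
    return not (set(a.get('files', [])) & set(b.get('files', [])))
```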
### 3.5 Plan Quality Checklist
Before finalizing, verify:
| Check | Question |
|---|---|
| Scoped | Does every task trace back to a requirement? |
| No Overlap | Does any task duplicate work from completed tracks? |
| Testable | Does every task have clear acceptance criteria? |
| Ordered | Are tasks sequenced by dependency? |
| Sized | Can each task be completed in a single session? |
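The Testable, Scoped, and Ordered rows can be checked mechanically. A sketch (the task dict shape, with `id`, `acceptance`, `files`, and `depends_on` keys, mirrors the plan structure above; the function name is hypothetical):

```python
def check_plan_quality(tasks: list) -> list:
    """Run the mechanical parts of the quality checklist; return human-readable findings."""
    findings = []
    for t in tasks:
        if not t.get('acceptance'):
            findings.append(f"Task {t['id']}: no acceptance criteria (Testable)")
        if not t.get('files'):
            findings.append(f"Task {t['id']}: no target files listed (Scoped)")
        for dep in t.get('depends_on', []):
            if not any(other['id'] == dep for other in tasks):
                findings.append(f"Task {t['id']}: unknown dependency {dep} (Ordered)")
    return findings
```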
### 4. Output
Save the plan to the track's `plan.md` and report:

```markdown
## Plan Created

**Track**: [track-id]
**Phases**: [count]
**Tasks**: [total count]
**Dependencies**: [list]
**Ready for**: Step 2 (Evaluate Plan) → hand off to loop-plan-evaluator
```
## Metadata Checkpoint Updates
The planner MUST update the track's `metadata.json` at key points:
### On Start
```json
{
  "loop_state": {
    "current_step": "PLAN",
    "step_status": "IN_PROGRESS",
    "step_started_at": "[ISO timestamp]",
    "checkpoints": {
      "PLAN": {
        "status": "IN_PROGRESS",
        "started_at": "[ISO timestamp]",
        "agent": "loop-planner"
      }
    }
  }
}
```
### On Completion
```json
{
  "loop_state": {
    "current_step": "EVALUATE_PLAN",
    "step_status": "NOT_STARTED",
    "checkpoints": {
      "PLAN": {
        "status": "PASSED",
        "started_at": "[start timestamp]",
        "completed_at": "[ISO timestamp]",
        "agent": "loop-planner",
        "commit_sha": "[if plan was committed]",
        "plan_version": 1
      },
      "EVALUATE_PLAN": {
        "status": "NOT_STARTED"
      }
    }
  }
}
```
### Update Protocol
1. `read_file` the current `metadata.json`
2. Update `loop_state.checkpoints.PLAN` fields
3. Advance `current_step` to `EVALUATE_PLAN`
4. `write_file` back to `metadata.json`
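The protocol amounts to a read-modify-write of `metadata.json`. A sketch (field names follow the On Completion example above; the helper name is hypothetical):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def complete_plan_checkpoint(metadata_path: str, commit_sha: str = "") -> None:
    """Mark the PLAN checkpoint PASSED and advance to EVALUATE_PLAN."""
    path = Path(metadata_path)
    meta = json.loads(path.read_text())            # 1. read_file current metadata.json
    state = meta.setdefault("loop_state", {})
    checkpoints = state.setdefault("checkpoints", {})
    plan = checkpoints.setdefault("PLAN", {})
    plan.update({                                  # 2. update loop_state.checkpoints.PLAN
        "status": "PASSED",
        "completed_at": datetime.now(timezone.utc).isoformat(),
        "agent": "loop-planner",
    })
    if commit_sha:
        plan["commit_sha"] = commit_sha
    state["current_step"] = "EVALUATE_PLAN"        # 3. advance current_step
    state["step_status"] = "NOT_STARTED"
    checkpoints.setdefault("EVALUATE_PLAN", {"status": "NOT_STARTED"})
    path.write_text(json.dumps(meta, indent=2))    # 4. write_file back
```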
If `metadata.json` doesn't exist or is in v1 format, create the v2 structure with default values.
## Handoff
After creating the plan, the Conductor should dispatch the loop-plan-evaluator agent to verify the plan before execution begins.