Agentops crank
Hands-free epic execution. Runs until ALL children are CLOSED. Uses Codex session agents for parallel waves. NO human prompts, NO stopping. Triggers: "crank", "run epic", "execute epic", "run all tasks", "hands-free execution", "crank it".
git clone https://github.com/boshu2/agentops
T=$(mktemp -d) && git clone --depth=1 https://github.com/boshu2/agentops "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills-codex/crank" ~/.claude/skills/boshu2-agentops-crank-641dcc && rm -rf "$T"
skills-codex/crank/SKILL.md

$crank - Autonomous Epic Execution (Codex Native)
Quick Ref: Execute every open issue in an epic via wave-based workers using `spawn_agent`, `wait_agent`, `send_input`, and `close_agent`. Output: closed issues + final validation.
You must execute this workflow. Do not just describe it.
Architecture
```
Crank (lead agent)
  |
  +-> bd ready (current wave)
  |
  +-> Build a wave task packet
  |
  +-> spawn_agent per issue (worker or explorer role)
  |
  +-> wait_agent for all worker ids
  |
  +-> Validate results + bd update
  |
  +-> Loop until epic DONE
```
Backend Rules
- Prefer Codex session agents when `spawn_agent` is available.
- Use `agent_type=worker` for implementation agents and `agent_type=explorer` for discovery agents when the runtime exposes roles.
- Use `send_input` only for short steering or retry prompts.
- Use `close_agent` for stalled or unnecessary agents.
- Never depend on legacy CSV fan-out or host-task result polling. Use `spawn_agent`, `wait_agent`, `send_input`, and `close_agent` instead.
Codex Lifecycle Guard
When this skill runs in Codex hookless mode (
CODEX_THREAD_ID is set or
CODEX_INTERNAL_ORIGINATOR_OVERRIDE is Codex Desktop), ensure startup context
before the first wave:
ao codex ensure-start 2>/dev/null || true
ao codex ensure-start is the single startup guard for Codex skills. It records
startup once per thread and skips duplicate startup automatically. Leave
ao codex ensure-stop to closeout skills after the implementation wave ends.
Flags
| Flag | Default | Description |
|---|---|---|
| | off | SPEC -> TEST -> IMPL wave sequence. Workers classify tests by pyramid level (L0-L3) per the test pyramid standard ( in the standards skill). When includes metadata, carry it into . |
Global Limits
MAX_EPIC_WAVES = 50 (hard limit). Typical epics use 5-10 waves.
Completion Enforcement (Sisyphus Rule)
After each wave, output one of:
- `<promise>DONE</promise>` - epic complete, all issues closed
- `<promise>BLOCKED</promise>` - cannot proceed, with reason
- `<promise>PARTIAL</promise>` - incomplete, with remaining items
Never claim completion without the marker.
Node Repair Operator
When a task fails during wave execution, classify as RETRY (transient — re-add with adjustment, max 2), DECOMPOSE (too complex — split into sub-issues, terminal), or PRUNE (blocked — escalate immediately). Budget: 2 per task.
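A minimal sketch of this classification, assuming the orchestrator tracks retry counts itself; `repair_task` and its argument names are illustrative, not part of the skill:

```shell
# Map a failure classification and current retry count to a repair action.
# failure_kind: transient | complex | blocked (illustrative labels)
repair_task() {
  local failure_kind="$1" retry_count="${2:-0}"
  case "$failure_kind" in
    transient)
      # RETRY only while within the 2-retry budget; then escalate.
      if [ "$retry_count" -lt 2 ]; then echo "RETRY"; else echo "PRUNE"; fi
      ;;
    complex) echo "DECOMPOSE" ;;  # split into sub-issues (terminal)
    blocked) echo "PRUNE" ;;      # escalate immediately
  esac
}
```

The lead calls this once per failed task and logs the resulting mutation per the rules below.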
Mutation logging on failure: DECOMPOSE logs
task_removed + task_added per sub-task. PRUNE logs task_removed. RETRY logs nothing (task identity unchanged).
Execution Steps
Given
$crank [epic-id | .agents/rpi/execution-packet.json | plan-file.md | "description"]:
Step 0: Load Knowledge Context
```
if command -v ao &>/dev/null; then
  ao lookup --query "<epic-title>" --limit 5 2>/dev/null || true
  ao ratchet status 2>/dev/null || true
fi
```
Apply retrieved knowledge: If learnings are returned, check each for applicability to this epic. For applicable learnings, treat as implementation constraints and cite by filename. Record citations with the correct type:
ao metrics cite "<path>" --type applied when the learning influenced a decision, or --type retrieved when loaded but not referenced.
Section evidence: When lookup results include
section_heading, matched_snippet, or match_confidence fields, prefer the matched section over the whole file — it pinpoints the relevant portion. Higher match_confidence (>0.7) means the section is a strong match; lower values (<0.4) are weaker signals. Use the matched_snippet as the primary context rather than reading the full file.
Step 0.5: Detect Tracking Mode
```
if bd ready --json >/dev/null 2>&1 && bd list --type epic --status open --json >/dev/null 2>&1; then
  TRACKING_MODE="beads"
else
  TRACKING_MODE="tasklist"
fi
```
Step 0.6: Initialize Shared Task Notes
Create the shared notes file for cross-wave context persistence. See
references/shared-task-notes.md for the full pattern.
```
mkdir -p .agents/crank
cat > .agents/crank/SHARED_TASK_NOTES.md <<EOF
# Shared Task Notes — Epic ${EPIC_ID:-unknown}

> Cross-wave context for workers. Read before starting. Report discoveries in task output.
> Maintained by the crank orchestrator — workers do NOT write to this file directly.
EOF
```
Step 0.7: Initialize Plan Mutation Audit Trail
Create the JSONL file that tracks every plan mutation during execution. See
references/plan-mutations.md for the full schema and mutation budget.
```
mkdir -p .agents/rpi
: > .agents/rpi/plan-mutations.jsonl

# Budget counters
MUTATION_TASK_ADDED=0
MUTATION_TASK_ADDED_LIMIT=5
MUTATION_TASK_REORDERED=0
MUTATION_TASK_REORDERED_LIMIT=3
```
Helper function:
```
log_plan_mutation() {
  local mutation_type="$1" task_id="$2" before="$3" after="$4"
  local ts
  ts=$(date -Iseconds)
  if [[ "$mutation_type" == "task_added" ]]; then
    MUTATION_TASK_ADDED=$((MUTATION_TASK_ADDED + 1))
    if [[ $MUTATION_TASK_ADDED -gt $MUTATION_TASK_ADDED_LIMIT ]]; then
      echo "WARN: task_added budget exceeded ($MUTATION_TASK_ADDED/$MUTATION_TASK_ADDED_LIMIT). Consider re-running $plan."
    fi
  elif [[ "$mutation_type" == "task_reordered" ]]; then
    MUTATION_TASK_REORDERED=$((MUTATION_TASK_REORDERED + 1))
    if [[ $MUTATION_TASK_REORDERED -gt $MUTATION_TASK_REORDERED_LIMIT ]]; then
      echo "WARN: task_reordered budget exceeded ($MUTATION_TASK_REORDERED/$MUTATION_TASK_REORDERED_LIMIT)."
    fi
  fi
  echo "{\"timestamp\":\"$ts\",\"wave\":$wave,\"task_id\":\"$task_id\",\"mutation_type\":\"$mutation_type\",\"before\":$before,\"after\":$after}" \
    >> .agents/rpi/plan-mutations.jsonl
}
```
Mutation types:
task_added, task_removed, task_reordered, scope_changed, dependency_changed.
Step 1: Identify the Execution Target
Beads mode:
- If epic ID provided: use it directly
- If no epic ID:
bd list --type epic --status open 2>/dev/null | head -5
Execution-packet/file mode:
- If the input is `.agents/rpi/execution-packet.json`, read `objective`, `epic_id`, `tracker_mode`, `done_criteria`, and `validation_commands`
- If `epic_id` exists inside the execution packet, keep that epic as the execution spine
- If `epic_id` is absent, keep the packet `objective` as the execution spine and continue in file-backed mode instead of inventing an epic ID
- For other plan files, read the plan file and extract tasks
Step 2: Load Execution Details
Beads mode:
bd show <epic-id> 2>/dev/null
Execution-packet/file mode:
- Read the packet or plan file into local state for the current objective
- Preserve the same objective across retries; do not narrow to one slice from
bd ready
Step 3: List Ready Work for the Current Wave
Beads mode:
bd ready 2>/dev/null
bd ready returns all unblocked issues - these can run in parallel.
Execution-packet/file mode:
- Read remaining tasks from `.agents/rpi/execution-packet.json` or the plan file
- Execute against the packet objective until the plan-backed work is done, blocked, or the retry budget is exhausted
Step 3a: Pre-flight Checks
- Verify there are ready issues. Empty list is an error unless the epic is already complete.
- If 3+ issues are ready, check `.agents/council/` for pre-mortem evidence.
- If tracking mode is `beads` and `scripts/bd-audit.sh` exists, run the backlog audit before spawning workers.
- If bd-audit flags backlog hygiene issues, stop and clean them up before continuing. Use `--skip-audit` only when you intentionally want to bypass that gate.
- For every string being modified, grep the codebase for stale cross-references.
Step 3b: Language Standards Injection
Detect project language (
go.mod -> Go, pyproject.toml -> Python, etc.) and read applicable standards from $standards. Include a Testing section in worker prompts.
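A hedged sketch of the detection step; the Rust and TypeScript markers are assumptions beyond the examples named above:

```shell
# Detect project language from marker files in the current directory.
detect_language() {
  if   [ -f go.mod ];         then echo "go"
  elif [ -f pyproject.toml ]; then echo "python"
  elif [ -f Cargo.toml ];     then echo "rust"        # assumed marker
  elif [ -f package.json ];   then echo "typescript"  # assumed marker
  else echo "unknown"
  fi
}
```

The result selects which standards file to read before building worker prompts.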
Step 4: Execute the Wave with Codex Session Agents
Crank follows the FIRE loop for each wave:
- FIND: locate the next ready set
- IGNITE: spawn workers
- REAP: wait, validate, and merge results
- ESCALATE: retry or block when needed
4a: Load Shared Task Notes
Read cross-wave context to include in worker prompts:
```
SHARED_NOTES=""
if [ -f .agents/crank/SHARED_TASK_NOTES.md ]; then
  SHARED_NOTES=$(cat .agents/crank/SHARED_TASK_NOTES.md)
fi
```
If
SHARED_NOTES exceeds ~50 lines, summarize older waves (keep last 3 in full detail, preserve [CRITICAL] entries).
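One way to sketch that summarization, assuming wave sections start with `## Wave` headings as Step 5.8 writes them; `trim_notes` is an illustrative helper, not part of the skill:

```shell
# Print the notes header, all [CRITICAL] lines, and the last 3 wave sections.
trim_notes() {
  awk '
    /^## Wave / { wave++ }
    { line[NR] = $0; w[NR] = wave }
    END {
      for (i = 1; i <= NR; i++)
        if (w[i] == 0 || w[i] > wave - 3 || line[i] ~ /\[CRITICAL\]/)
          print line[i]
    }' "$1"
}
```

Older waves could be replaced by a one-line summary instead of dropped outright; the point is that `[CRITICAL]` entries always survive.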
4b: Build a Wave Task Packet
Create one packet per ready issue. Do not use CSV fan-out.
```
mkdir -p .agents/crank
cat > ".agents/crank/wave-${wave}-tasks.json" << EOF
{
  "wave": $wave,
  "epic_id": "$EPIC_ID",
  "tasks": [
    {
      "issue_id": "bd-123",
      "subject": "Short issue summary",
      "description": "Issue details and acceptance criteria",
      "files": ["path/to/file.go"],
      "validation_cmd": "go test ./...",
      "metadata": { "issue_type": "feature" }
    }
  ]
}
EOF
```
Each task packet must include
metadata.issue_type.
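A fail-closed check for that requirement can be sketched with `jq` (assumed available); `check_issue_types` is an illustrative helper:

```shell
# Return non-zero if any task in the packet lacks metadata.issue_type.
check_issue_types() {
  local missing
  missing=$(jq '[.tasks[] | select(.metadata.issue_type == null)] | length' "$1")
  [ "$missing" -eq 0 ]
}
```

Run it on the wave packet before spawning workers and treat failure as a packet-construction bug.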
4c: Pre-spawn File Conflict Check
```
wave_tasks = [tasks from packet]
all_files = {}
for task in wave_tasks:
    for f in task.files:
        if f in all_files:
            CONFLICT -> serialize into sub-waves
        all_files[f] = task.id
```
Display an ownership table before spawning workers. If conflicts exist, split into sub-waves and keep file ownership disjoint.
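The conflict check above can also be run in plain shell; the `task_id:file` pair encoding is an assumption for illustration:

```shell
# Args: "task_id:file" pairs. Prints each conflicting file; non-zero exit on conflict.
detect_conflicts() {
  local seen=" " conflict=0 pair owner file
  for pair in "$@"; do
    owner="${pair%%:*}"
    file="${pair#*:}"
    case "$seen" in
      *" $file "*) echo "CONFLICT: $file also claimed by $owner"; conflict=1 ;;
      *) seen="$seen$file " ;;
    esac
  done
  return "$conflict"
}
```

A non-zero exit means the wave must be split into sub-waves before any worker is spawned.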
4d: Spawn Workers
Spawn one agent per issue. Prefer
worker roles for implementation and explorer roles for file discovery when the runtime exposes agent_type.
```
spawn_agent(
  agent_type="worker",
  message="You are worker-<issue-id>.

Assignment: <subject>
<description>

---
Context from prior waves (read before starting):
<SHARED_NOTES content, or 'First wave — no prior context.' if empty>
---

FILE MANIFEST (files you are permitted to modify):
<list of files>

Rules:
1. Stay within your assigned files
2. Run validation: <validation_cmd>
3. Keep your response short
4. Write any durable notes to .agents/crank/results/<issue-id>.md or .agents/crank/results/<issue-id>.json
5. DISCOVERY REPORTING: If you discover codebase quirks, failed approaches, convention requirements, or dependency constraints, include a section in your output titled '## Discoveries' with one bullet per finding.

Use the repo's current Codex primitives only."
)
```
If a task is missing its file manifest, spawn a short-lived
explorer agent first:
```
spawn_agent(
  agent_type="explorer",
  message="You are explorer-<issue-id>. Task: identify the files that must be created or modified for this issue. Return a JSON array of paths only."
)
```
4e: Wait for Workers
wait_agent(ids=["agent-id-1", "agent-id-2"])
If a worker needs a short correction, use
send_input(id=..., message=...).
If a worker stalls or is no longer needed, use
close_agent(id=...).
Step 5: Verify and Sync
External Gate Enforcement: After each worker completes, the orchestrator (not the worker) runs the gate command. Workers must not declare their own completion. See
references/external-gate-protocol.md.
For each completed worker:
- PASS -> close the issue.
- FAIL -> log the failure, keep the issue open, and retry only if the issue is still within the retry budget.
- BLOCKED -> mark blocked with the reason and continue the wave.
Update beads:
```
bd close "$issue_id" 2>/dev/null
bd update "$issue_id" --status blocked --append-notes "Wave $wave FAIL: $reason" 2>/dev/null
```
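The PASS/FAIL/BLOCKED triage above can be sketched as a verdict-to-action mapping; `gate_action` and the action names are illustrative:

```shell
# Map the orchestrator's gate verdict (and current retry count) to the wave action.
gate_action() {
  local verdict="$1" retry_count="${2:-0}"
  case "$verdict" in
    PASS) echo "close" ;;
    FAIL)
      # Retry only while within the 2-retry budget from the Retry Policy.
      if [ "$retry_count" -lt 2 ]; then echo "retry"; else echo "block"; fi
      ;;
    BLOCKED) echo "block" ;;
  esac
}
```

Only the lead runs the gate command and calls this; workers never self-report completion.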
Step 5.5: Wave Acceptance Check
After all workers complete:
- Compute `git diff` for the wave.
- Run project-level tests appropriate to the wave.
- If tests fail, identify which worker's changes broke things and requeue only that work.
Step 5.7: Wave Checkpoint
```
FILES_CHANGED_JSON="${FILES_CHANGED_JSON:-$(git diff --name-only "${WAVE_START_SHA:-HEAD~1}..HEAD" | jq -R -s -c 'split("\n")[:-1]')}"
GIT_SHA="$(git rev-parse HEAD)"
cat > ".agents/crank/wave-${wave}-checkpoint.json" << EOF
{
  "schema_version": 1,
  "wave": $wave,
  "epic_id": "$EPIC_ID",
  "timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
  "tasks_completed": ${TASKS_COMPLETED_JSON:-[]},
  "tasks_failed": ${TASKS_FAILED_JSON:-[]},
  "files_changed": $FILES_CHANGED_JSON,
  "git_sha": "$GIT_SHA",
  "acceptance_verdict": "${ACCEPTANCE_VERDICT:-WARN}",
  "commit_strategy": "${COMMIT_STRATEGY:-wave-batch}",
  "mutations_this_wave": $(grep -c "\"wave\":${wave}" .agents/rpi/plan-mutations.jsonl 2>/dev/null || echo 0),
  "total_mutations": $(wc -l < .agents/rpi/plan-mutations.jsonl 2>/dev/null | tr -d ' '),
  "mutation_budget": {
    "task_added": {"used": ${MUTATION_TASK_ADDED:-0}, "limit": 5},
    "task_reordered": {"used": ${MUTATION_TASK_REORDERED:-0}, "limit": 3}
  }
}
EOF
bash skills-codex/crank/scripts/validate-wave-checkpoint.sh ".agents/crank/wave-${wave}-checkpoint.json"
```
Do not copy or consume the checkpoint downstream until validation passes. The validator fails closed when
git_sha does not resolve in the current repo, timestamp is invalid or more than 5 minutes in the future, or required checkpoint fields are missing/malformed.
Step 5.8: Update Shared Task Notes
Harvest discoveries from completed workers and append to the shared notes file:
```
WAVE_DISCOVERIES=""
for result_file in .agents/crank/results/*; do
  if [ -f "$result_file" ]; then
    DISCOVERIES=$(sed -n '/^## Discoveries/,/^## /{ /^## Discoveries/d; /^## /d; p; }' "$result_file" 2>/dev/null)
    if [ -n "$DISCOVERIES" ]; then
      WAVE_DISCOVERIES="${WAVE_DISCOVERIES}${DISCOVERIES}\n"
    fi
  fi
done
if [ -n "$WAVE_DISCOVERIES" ]; then
  cat >> .agents/crank/SHARED_TASK_NOTES.md <<EOF

## Wave ${wave} ($(date -Iseconds))
$(echo -e "$WAVE_DISCOVERIES")
EOF
fi
```
Capture: Failed approaches, codebase quirks, convention discoveries, dependency notes. Skip: Full error logs, implementation details, task status.
Step 5.9: Log Plan Mutations
After processing wave results, log mutations for any plan changes. Call
log_plan_mutation for each:
- DECOMPOSE: `task_removed` for the original, `task_added` for each sub-task
- PRUNE: `task_removed` with the block reason
- Scope change: `scope_changed` when the file manifest is updated after exploration
- Dependency discovered: `dependency_changed` when the blocked-by list is modified
- Wave reassignment: `task_reordered` when a task moves between waves
```
# Example: task decomposed into sub-tasks
log_plan_mutation "task_removed" "$decomposed_id" \
  "{\"subject\":\"$ORIGINAL_SUBJECT\",\"status\":\"decomposed\"}" "null"
log_plan_mutation "task_added" "$sub_id" "null" \
  "{\"subject\":\"$SUB_SUBJECT\",\"reason\":\"Split from $decomposed_id\"}"

# Example: scope change after exploration
log_plan_mutation "scope_changed" "$task_id" \
  "{\"files\":$ORIGINAL_FILES}" \
  "{\"files\":$UPDATED_FILES,\"reason\":\"$REASON\"}"
```
Mutations are append-only to
.agents/rpi/plan-mutations.jsonl. Read by $post-mortem for drift analysis.
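Since each JSONL line carries a `mutation_type` field, a quick drift summary can be sketched without jq; `count_mutations` is an illustrative helper, not part of the skill:

```shell
# Count mutations of one type in the append-only audit trail.
# $1 = path to plan-mutations.jsonl, $2 = mutation type to count.
count_mutations() {
  grep -c "\"mutation_type\":\"$2\"" "$1" 2>/dev/null || true
}
```

Comparing `task_added` against `task_removed` counts gives a rough sense of how far execution drifted from the original plan.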
Step 6: Commit Wave Results
Lead-only commit - workers write files, lead validates and commits once per wave:
```
for f in $WORKER_FILES_CHANGED; do
  git add -- "$f"
done
git commit -m "feat(<scope>): wave $wave - $COMPLETED_COUNT issues completed"
```
Step 7: Loop or Complete
```
wave=$((wave + 1))
if [[ $wave -ge 50 ]]; then
  echo "<promise>BLOCKED</promise>"
  echo "Global wave limit (50) reached."
  exit 1
fi

REMAINING=$(bd ready 2>/dev/null | wc -l)
if [[ $REMAINING -eq 0 ]]; then
  ALL_CLOSED=$(bd children "$EPIC_ID" 2>/dev/null | grep -c "CLOSED" || echo 0)
  ALL_TOTAL=$(bd children "$EPIC_ID" 2>/dev/null | wc -l || echo 0)
  if [[ $ALL_CLOSED -eq $ALL_TOTAL ]]; then
    echo "<promise>DONE</promise>"
  else
    echo "<promise>BLOCKED</promise>"
    echo "No ready issues but $((ALL_TOTAL - ALL_CLOSED)) issues remain unclosed."
  fi
else
  : # Continue to next wave - return to Step 3
fi
```
Step 8: Final Validation
When the epic is DONE:
$vibe validate the completed epic
Step 8.5: Archive Shared Task Notes
Move the shared notes to an archive after epic completion:
```
if [ -f .agents/crank/SHARED_TASK_NOTES.md ]; then
  mkdir -p .agents/crank/archives
  mv .agents/crank/SHARED_TASK_NOTES.md \
    ".agents/crank/archives/SHARED_TASK_NOTES-${EPIC_ID:-unknown}-$(date +%Y%m%d-%H%M%S).md"
fi
```
Retry Policy
- Max 2 retries per issue across all waves
- On third failure: mark BLOCKED and continue with remaining issues
- Track retries with
bd comments add "$issue_id" "retry $N: $reason"
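Retry counts also need to survive across waves on the orchestrator side; a minimal sketch, assuming a flat file under `.agents/crank/` (the path and helper names are assumptions):

```shell
# Record one retry for an issue; count how many it has had so far.
record_retry() {
  mkdir -p .agents/crank
  echo "$1" >> .agents/crank/retries
}
retry_count() {
  if [ -f .agents/crank/retries ]; then
    grep -c "^$1$" .agents/crank/retries || true
  else
    echo 0
  fi
}
```

`retry_count` feeds the max-2 check before a FAIL is requeued; on the third failure the issue is marked BLOCKED instead.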
Failure Recovery
| Scenario | Action |
|---|---|
| Worker timeout | Mark BLOCKED, log reason, continue wave |
| Test failure | Identify breaking change, retry once |
| All workers fail | `<promise>BLOCKED</promise>` with diagnostics |
| File conflict detected | Split into sub-waves, re-run |
Reference Documents
- references/de-sloppify.md - cleanup pass after implementation waves
- references/plan-mutations.md - plan mutation audit trail for drift analysis
- references/shared-task-notes.md - cross-wave context persistence
- references/commit-strategies.md - per-task vs wave-batch commits
- references/contract-template.md - contract template for worker specs
- references/failure-recovery.md - escalation and retry logic
- references/failure-taxonomy.md - failure classification
- references/fire.md - FIRE loop specification
- references/ralph-loop-contract.md - Ralph Wiggum loop contract
- references/taskcreate-examples.md - task creation examples
- references/team-coordination.md - worker coordination details
- references/external-gate-protocol.md - external gate protocol for wave validation
- references/test-first-mode.md - test-first wave sequence
- references/troubleshooting.md - common issues and fixes
- references/uat-integration-wave.md - UAT integration wave patterns
- references/wave-patterns.md - acceptance checks and checkpoints
- references/gc-pool-dispatch.md - gc pool worker dispatch
- references/wave1-spec-consistency-checklist.md - Wave 1 spec consistency checklist
- references/worktree-per-worker.md - worktree isolation pattern