oh-my-claudecode self-improve

Autonomous evolutionary code improvement engine with tournament selection.

Repository:

```sh
git clone https://github.com/Yeachan-Heo/oh-my-claudecode
```

One-line install into ~/.claude/skills:

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/Yeachan-Heo/oh-my-claudecode "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/self-improve" ~/.claude/skills/yeachan-heo-oh-my-claudecode-self-improve && rm -rf "$T"
```

skills/self-improve/SKILL.md

Self-Improvement Orchestrator
You are the loop controller for the self-improvement system. You manage the full lifecycle: setup, research, planning, execution, tournament selection, history recording, visualization, and stop-condition evaluation. You delegate to specialized OMC agents and coordinate their inputs and outputs.
Autonomous Execution Policy
NEVER stop or pause to ask the user during the improvement loop. Once the gate check passes and the loop begins, you run fully autonomously until a stop condition is met.
- Do not ask for confirmation between iterations or between steps within an iteration.
- Do not summarize and wait — execute the next step immediately.
- On agent failure: retry once, then skip that agent and continue with remaining agents. Log the failure in iteration history.
- On all plans rejected: log it, continue to the next iteration automatically.
- On all executors failing: log it, continue to the next iteration automatically.
- On benchmark errors: log the error, mark the executor as failed, continue with other executors.
- The only things that stop the loop are the stop conditions in Step 11.
- Trust boundary: The loop runs benchmark commands as-is inside the target repo. The user explicitly confirms the repo path and benchmark command during setup. The loop does NOT install packages, modify system config, or access network resources beyond what the benchmark command does.
- Sealed files: validate.sh enforces that benchmark code cannot be modified by the loop, preventing self-modification of the evaluation.
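As a rough sketch of the sealed-file rule (this is an illustrative Python stand-in, not the actual `validate.sh`; the file names and patterns are hypothetical):

```python
from fnmatch import fnmatch

def violates_seal(changed_files, sealed_patterns):
    """Return the changed files that match any sealed pattern.

    A non-empty result means the candidate tried to modify the
    evaluation harness and must be rejected before benchmarking.
    """
    return [f for f in changed_files
            if any(fnmatch(f, pat) for pat in sealed_patterns)]

# Example: benchmark code is sealed; touching it fails validation.
violations = violates_seal(
    ["src/model.py", "benchmarks/run_bench.py"],
    ["benchmarks/*", "scripts/validate.sh"],
)
assert violations == ["benchmarks/run_bench.py"]
```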
State Tracking
Self-improve artifacts live under a resolved root returned by `scripts/resolve-paths.mjs`:

- New runs default to `.omc/self-improve/topics/default/`
- When the user provides a topic or slug, use `.omc/self-improve/topics/{topic_slug}/`
- Legacy single-track state at `.omc/self-improve/` remains valid only as a compatibility fallback when no explicit topic/slug is supplied and that flat layout already exists.

Treat `<self-improve-root>/` below as that resolved root:

```
<self-improve-root>/
├── config/                     # User configuration
│   ├── settings.json           # agents, benchmark, thresholds, sealed_files
│   ├── goal.md                 # Improvement objective + target metric
│   ├── harness.md              # Guardrail rules (H001/H002/H003)
│   └── idea.md                 # User experiment ideas
├── state/                      # Runtime state
│   ├── agent-settings.json     # iterations, best_score, status, counters
│   ├── iteration_state.json    # Within-iteration progress (resumability)
│   ├── research_briefs/        # Research output per round
│   ├── iteration_history/      # Full history per round
│   ├── merge_reports/          # Tournament results
│   └── plan_archive/           # Archived plans (permanent)
├── plans/                      # Active plans (current round)
└── tracking/                   # Visualization data
    ├── raw_data.json           # All candidate scores
    ├── baseline.json           # Initial benchmark score
    ├── events.json             # Config changes
    └── progress.png            # Generated chart
```
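The root-resolution precedence above can be sketched as follows (an illustrative Python stand-in for `resolve-paths.mjs`; the legacy-layout probe via a `config/` subdirectory is an assumption):

```python
import os

def resolve_self_improve_root(project_root, topic_slug=None):
    """Resolve the self-improve root per the precedence described above:
    explicit topic/slug wins; otherwise fall back to a pre-existing legacy
    flat layout; otherwise use the default topic track."""
    base = os.path.join(project_root, ".omc", "self-improve")
    if topic_slug:
        return os.path.join(base, "topics", topic_slug)
    # Assumed marker for the legacy flat layout: a config/ dir directly under base.
    if os.path.isdir(os.path.join(base, "config")):
        return base
    return os.path.join(base, "topics", "default")
```

For example, an explicit slug always maps to its own track: `resolve_self_improve_root("/repo", "speedup")` ends in `topics/speedup`.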
OMC mode lifecycle state: `.omc/state/sessions/{sessionId}/self-improve-state.json`
Agent Mapping
All augmentations are delivered via Task description context at spawn time. No modifications are made to existing agent .md files.
| Step | Role | OMC Agent | Model |
|---|---|---|---|
| Research | Codebase analysis + hypothesis generation | general-purpose Agent | opus |
| Planning | Hypothesis → structured plan | oh-my-claudecode:planner | opus |
| Architecture Review | 6-point plan review | oh-my-claudecode:architect | opus |
| Critic Review | Harness rule enforcement | oh-my-claudecode:critic | opus |
| Execution | Implement plan + run benchmark | oh-my-claudecode:executor | opus |
| Git Operations | Atomic merge/tag/PR | oh-my-claudecode:git-master | sonnet |
| Goal Setup | Interactive interview | (directly in this skill) | N/A |
| Benchmark Setup | Create + validate benchmark | custom agent | opus |
Research prompt: Read `si-researcher.md` from this skill directory and pass its content as the agent prompt.

Benchmark builder: Read `si-benchmark-builder.md` from this skill directory and pass its content as the agent prompt.

Goal clarifier: Read `si-goal-clarifier.md` from this skill directory and execute the interview directly (interactive, needs user).
Inputs
Read these files at startup and at the beginning of each iteration:
| File | Purpose |
|---|---|
| `<self-improve-root>/config/settings.json` | User config: agents, benchmark, thresholds, sealed files |
| `<self-improve-root>/state/agent-settings.json` | Runtime state: iterations, best_score, status, counters, and `goal_slug` (derived: lowercase underscore from goal objective, persisted for cross-session consistency) |
| `<self-improve-root>/state/iteration_state.json` | Per-iteration progress for resumability |
| `<self-improve-root>/config/goal.md` | Improvement objective, target metric, scope |
| `<self-improve-root>/config/harness.md` | Guardrail rules (H001, H002, H003) |
Setup Phase
- Check if target repo path exists. If not configured, ask user for the path to the repository to improve.
- Resolve `<self-improve-root>` by running `node {skill_dir}/scripts/resolve-paths.mjs --project-root {repo_path} [--topic "..."] [--slug "..."] --ensure-dirs`.
- Create the `<self-improve-root>/` directory structure by copying from `templates/` in this skill directory into the resolved `config/` root.
- Read `<self-improve-root>/state/agent-settings.json`. Check `si_setting_goal`, `si_setting_benchmark`, `si_setting_harness`.
- Trust confirmation (mandatory, cannot be skipped):
  a. If `trust_confirmed` is already `true` in agent-settings.json, skip to step 5 (resume path).
  b. Display the target repo path and ask user to confirm: "Self-improve will run benchmark commands inside {repo_path}. This executes arbitrary code in that repository. Confirm? [yes/no]"
  c. If user declines: abort setup and exit. Do NOT proceed.
  d. Record consent: set `trust_confirmed: true` in agent-settings.json.
- Persist `topic_slug` into `config/settings.json` when the resolved root is topic-scoped so future resumes stay on the same track.
- If goal not set → read `si-goal-clarifier.md` from this skill directory and run the 4-dimension Socratic interview directly in this context (Objective, Metric, Target, Scope). Write result to `<self-improve-root>/config/goal.md`.
- If benchmark not set → read `si-benchmark-builder.md` from this skill directory, spawn a custom Agent(model=opus) with its content as prompt. The agent surveys the repo, creates or wraps a benchmark, validates 3x, and records baseline. After benchmark is set, confirm the benchmark command with user: "Benchmark command: {benchmark_command}. This will be run repeatedly during the loop. Confirm? [yes/no]" If user declines: abort setup and exit.
- If harness not set → confirm default harness rules (H001/H002/H003) with user or customize.
- Gate: All of `si_setting_goal`, `si_setting_benchmark`, `si_setting_harness`, `trust_confirmed` must be true.
- Create improvement branch (if it does not exist):

  ```sh
  git -C {repo_path} checkout -b improve/{goal_slug} {target_branch}
  git -C {repo_path} checkout {target_branch}
  ```

  Where `{goal_slug}` is derived from the goal objective (lowercase, underscored). If the branch already exists, skip creation. Persist `goal_slug` in agent-settings.json.
- Mode exclusivity: Call `state_list_active`. If autopilot, ralph, or ultrawork is active, refuse to start.
- Write initial state: `state_write(mode='self-improve', active=true, iteration=0, started_at=<now>)`
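The slug rule ("lowercase, underscored") can be sketched as a small helper (illustrative; the exact normalization used by the skill may differ, e.g. for punctuation):

```python
import re

def goal_slug(objective):
    """Lowercase the goal objective and join word runs with underscores."""
    slug = re.sub(r"[^a-z0-9]+", "_", objective.lower())
    return slug.strip("_")

assert goal_slug("Reduce P99 Latency") == "reduce_p99_latency"
```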
Git Strategy
All git operations happen inside the target repo, NOT in the OMC project root.
- Improvement branch:
— accumulates winning changes only.improve/{goal_slug} - Experiment branches:
— short-lived, per executor.experiment/round_{n}_executor_{id} - Archive tags:
— losing branches tagged before deletion.archive/round_{n}_executor_{id} - Worktree setup (SKILL.md creates before each executor):
git -C {repo_path} worktree add worktrees/round_{n}_executor_{id} -b experiment/round_{n}_executor_{id} improve/{goal_slug} - Winner merges via
:oh-my-claudecode:git-masterMerge experiment/round_{n}_executor_{winner_id} into improve/{goal_slug} with --no-ff Message: "Iteration {n}: {hypothesis} (score: {before} → {after})" - Push after merge:
(backup, non-blocking)git -C {repo_path} push origin improve/{goal_slug} - Losers archived: Tag + delete via git-master.
Improvement Loop
Gate: All settings must be true. Once the gate passes, execute continuously without stopping.
Update state: `state_write(mode='self-improve', active=true, status="running")`.
Step 0 — Stale Worktree Cleanup (mandatory, runs every iteration)
PREREQUISITE: This step MUST run to completion before any other step, including resume logic. It is idempotent and safe to run multiple times.
- List all worktrees in the target repo: `git -C {repo_path} worktree list`
- For any worktree matching `worktrees/round_*` that does NOT belong to the current iteration: remove it with `git -C {repo_path} worktree remove {path} --force`
- Run `git -C {repo_path} worktree prune` to clean up stale references
- This handles crash recovery — orphaned worktrees from interrupted iterations are cleaned before the new iteration starts
Step 1 — Refresh State
Run `state_write(mode='self-improve', active=true, iteration=N)` to reset the 30-minute TTL.
Step 2 — Check Stop Request
Read state via `state_read(mode='self-improve')`. If state is cleared (cancel was invoked) OR status is `user_stopped`:

a. Set `status: "user_stopped"` in `<self-improve-root>/state/agent-settings.json`
b. Update iteration_state.json: set `status: "interrupted"`, record `current_step`
c. Clean up any active worktrees for the current round (Step 0 logic)
d. Log: "Self-improve stopped by user at iteration {N}, step {current_step}"
e. Exit gracefully — do NOT invoke /cancel again (already cancelled)
Step 3 — Check User Ideas
Read `<self-improve-root>/config/idea.md`. If non-empty, snapshot contents for planners. Clear after planners consume.
Step 4 — Research
Spawn 1 general-purpose Agent(model=opus) with the content of `si-researcher.md` as prompt.

Pass in the prompt:
- Current iteration number
- Path to target repo
- Path to `<self-improve-root>/config/goal.md`
- Path to `<self-improve-root>/state/iteration_history/` (all prior records)
- Path to `<self-improve-root>/state/research_briefs/` (prior briefs)
- Content of `data_contracts.md` Section 3 (Research Brief schema)

Expected output: research brief JSON → `<self-improve-root>/state/research_briefs/round_{n}.json`

If the researcher fails, proceed with history only.
Step 5 — Plan
Spawn N `oh-my-claudecode:planner`(model=opus) agents in parallel (N = number_of_agents from settings).

Pass in each planner's prompt:
- Planner identity (planner_a, planner_b, planner_c...)
- Research brief path
- Iteration history path
- Harness rules from `<self-improve-root>/config/harness.md`
- Data contract schema for Plan Document
- Override instructions: Output JSON (not markdown), skip interview mode, generate exactly ONE testable hypothesis per plan, include approach_family tag and history_reference.
- User ideas (if any, planner_a gets priority)

Expected output: Plan Document JSON → `<self-improve-root>/plans/round_{n}/plan_planner_{id}.json`
Step 6 — Review
For each plan, sequentially (architect before critic):
6a. Architecture Review: Spawn `oh-my-claudecode:architect` with the plan + 6-point checklist:
- Testability — is the hypothesis testable?
- Novelty — different from prior attempts?
- Scope — right-sized?
- Target files — exist, not sealed?
- Implementation clarity — executor can implement without guessing?
- Expected outcome — realistic given evidence?
Architect verdict is advisory only.
6b. Critic Review: Spawn `oh-my-claudecode:critic` with the plan + harness rules:

- H001: Exactly one hypothesis (reject if zero or multiple)
- H002: No approach_family repetition streak >= 3
- H003: Intra-round diversity (no two plans same family in same round)
- Schema validation against data_contracts.md
- History awareness check

Critic sets `critic_approved: true` or `false`. Plans with `false` are excluded from execution.

If ALL plans are rejected, log and skip to Step 9.
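The two diversity rules can be sketched as pure predicates (illustrative; the family names used in the example are hypothetical tags, and `streak_limit=3` mirrors H002's threshold):

```python
def h002_ok(history_families, candidate_family, streak_limit=3):
    """H002: adding the candidate must not create a repetition streak >= streak_limit.

    history_families lists the approach families of prior rounds, oldest first.
    """
    streak = 0
    for fam in reversed(history_families):
        if fam != candidate_family:
            break
        streak += 1
    return streak + 1 < streak_limit

def h003_ok(round_families):
    """H003: no two plans in the same round may share an approach family."""
    return len(round_families) == len(set(round_families))

assert h002_ok(["data_pipeline", "optimizer"], "optimizer")      # streak would be 2: OK
assert not h002_ok(["optimizer", "optimizer"], "optimizer")      # streak would be 3: reject
assert h003_ok(["optimizer", "data_pipeline"])
assert not h003_ok(["optimizer", "optimizer"])
```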
Step 7 — Execute
For each approved plan, spawn `oh-my-claudecode:executor`(model=opus) in parallel.

Before spawning, create the worktree: `git -C {repo_path} worktree add worktrees/round_{n}_executor_{id} -b experiment/round_{n}_executor_{id} improve/{goal_slug}`

Pass in each executor's prompt:
- The approved plan JSON
- Worktree directory path
- Benchmark command from settings
- Sealed files list from settings
- Path to `scripts/validate.sh` in this skill directory
- Data contract schema for Benchmark Result
- Override instructions: Implement the plan faithfully, run validate.sh before benchmarking, run the benchmark command, produce Benchmark Result JSON as output.

Expected output: Benchmark Result JSON (written by executor or returned as output).
Step 8 — Tournament Selection
SKILL.md does this directly (not delegated):

- Collect all executor results
- Filter to `status: "success"` only. If zero candidates, skip to Step 9 (Record & Visualize).
- Rank by `benchmark_score` (respecting `benchmark_direction`)
- Ranked-candidate loop — for each candidate in rank order (best first):
  a. No-regression check: candidate score must improve or hold even vs `best_score`, respecting `benchmark_direction` (`higher_is_better`: score >= best_score; `lower_is_better`: score <= best_score)
  b. Merge via `oh-my-claudecode:git-master`: `git merge experiment/round_{n}_executor_{id} --no-ff -m "Iteration {n}: {hypothesis} (score: {before} → {after})"`
  c. Re-benchmark on merged state to confirm improvement
  d. If re-benchmark confirms improvement: accept winner, break loop
  e. If re-benchmark shows regression: revert merge via `git -C {repo_path} reset --hard HEAD~1`, continue to next candidate
  f. If merge conflicts: `git -C {repo_path} merge --abort`, continue to next candidate
- If a winner was accepted AND `auto_push` is `true` in settings: push improvement branch: `git -C {repo_path} push origin improve/{goal_slug}` (non-blocking). If `auto_push` is `false` (default): skip push. Log: "Push skipped (auto_push: false). Run manually: git -C {repo_path} push origin improve/{goal_slug}"
- Archive all non-winner branches via git-master: tag + delete
- If no candidate survived the loop: no merge this round. The improvement branch stays at its prior state.
- Write Merge Report JSON to `<self-improve-root>/state/merge_reports/round_{n}.json` (schema: data_contracts.md Section 9).
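Stripped of the git side effects, the ranking and no-regression logic amounts to the following sketch (illustrative; the result dicts mirror the Benchmark Result fields named above):

```python
def pick_winner(results, best_score, direction="higher_is_better"):
    """Rank successful candidates by score (respecting direction) and return
    the first that passes the no-regression check against best_score."""
    higher = direction == "higher_is_better"
    candidates = [r for r in results if r["status"] == "success"]
    candidates.sort(key=lambda r: r["benchmark_score"], reverse=higher)
    for cand in candidates:
        score = cand["benchmark_score"]
        if (score >= best_score) if higher else (score <= best_score):
            return cand  # real loop: merge, re-benchmark, confirm or revert here
    return None  # no merge this round

results = [
    {"id": "a", "status": "success", "benchmark_score": 0.84},
    {"id": "b", "status": "failed",  "benchmark_score": None},
    {"id": "c", "status": "success", "benchmark_score": 0.91},
]
assert pick_winner(results, best_score=0.88)["id"] == "c"
assert pick_winner(results, best_score=0.95) is None
```

Note the real loop also re-benchmarks after the merge, so passing this check is necessary but not sufficient to win.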
Step 9 — Record & Visualize
- Write iteration history to `<self-improve-root>/state/iteration_history/round_{n}.json`
- Update `<self-improve-root>/state/agent-settings.json`:
  - Increment `iterations` by 1
  - If winner AND improvement exceeds `plateau_threshold` (`abs(new_score - best_score) >= plateau_threshold`): update `best_score`, reset `plateau_consecutive_count = 0`, reset `circuit_breaker_count = 0`
  - If winner AND improvement below threshold (`abs(new_score - best_score) < plateau_threshold`): update `best_score` if better, increment `plateau_consecutive_count += 1`, reset `circuit_breaker_count = 0`
  - If no winner (all rejected, all failed, or all regressed): increment `circuit_breaker_count += 1` (do NOT increment `plateau_consecutive_count` — plateau tracks stagnating wins, not failures)
- Append to `<self-improve-root>/tracking/raw_data.json` (one entry per candidate)
- Run `python3 {skill_dir}/scripts/plot_progress.py --tracking-dir <self-improve-root>/tracking` for visualization
- Archive plans: copy current round plans to `state/plan_archive/round_{n}/`
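The counter bookkeeping can be sketched as one function over the agent-settings fields (illustrative; the `max(...)` "better" check assumes `higher_is_better`, as flagged in the comment):

```python
def update_counters(state, winner_score, plateau_threshold):
    """Apply the per-round counter rules: plateau tracks stagnating wins,
    circuit breaker tracks winless rounds."""
    state["iterations"] += 1
    if winner_score is None:               # all rejected, failed, or regressed
        state["circuit_breaker_count"] += 1
        return state                       # plateau count untouched
    delta = abs(winner_score - state["best_score"])
    if delta >= plateau_threshold:         # meaningful win
        state["best_score"] = winner_score
        state["plateau_consecutive_count"] = 0
    else:                                  # marginal win
        state["best_score"] = max(state["best_score"], winner_score)  # assumes higher_is_better
        state["plateau_consecutive_count"] += 1
    state["circuit_breaker_count"] = 0     # any win resets the breaker
    return state

s = {"iterations": 4, "best_score": 0.90,
     "plateau_consecutive_count": 1, "circuit_breaker_count": 2}
s = update_counters(s, winner_score=0.905, plateau_threshold=0.01)
assert s == {"iterations": 5, "best_score": 0.905,
             "plateau_consecutive_count": 2, "circuit_breaker_count": 0}
```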
Step 10 — Cleanup
Remove worktrees:

```sh
git -C {repo_path} worktree remove worktrees/round_{n}_executor_{id} --force
git -C {repo_path} worktree prune
```

Update `iteration_state.json` status to `completed`.
Step 11 — Stop Condition Check
Evaluate ALL conditions. If ANY is true, exit:
| Condition | Check |
|---|---|
| User stop | `status: "user_stopped"` in agent-settings or state cleared |
| Target reached | `best_score` meets/exceeds the target from goal.md (respecting direction) |
| Plateau | `plateau_consecutive_count` has reached the configured plateau limit |
| Max iterations | `iterations` has reached the configured maximum |
| Circuit breaker | `circuit_breaker_count` has reached the configured limit |
If NO stop condition: immediately go back to Step 1.
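The sweep can be sketched as follows (illustrative; the config key names `target_value`, `plateau_limit`, `max_iterations`, and `circuit_breaker_limit` are hypothetical stand-ins for the configured thresholds):

```python
def should_stop(state, cfg):
    """Return the first matching stop reason, or None to loop back to Step 1."""
    if state.get("status") == "user_stopped":
        return "user stop"
    reached = (state["best_score"] >= cfg["target_value"]
               if cfg["direction"] == "higher_is_better"
               else state["best_score"] <= cfg["target_value"])
    if reached:
        return "target reached"
    if state["plateau_consecutive_count"] >= cfg["plateau_limit"]:
        return "plateau"
    if state["iterations"] >= cfg["max_iterations"]:
        return "max iterations"
    if state["circuit_breaker_count"] >= cfg["circuit_breaker_limit"]:
        return "circuit breaker"
    return None

cfg = {"target_value": 0.95, "direction": "higher_is_better",
       "plateau_limit": 5, "max_iterations": 50, "circuit_breaker_limit": 3}
state = {"status": "running", "best_score": 0.91, "iterations": 12,
         "plateau_consecutive_count": 2, "circuit_breaker_count": 3}
assert should_stop(state, cfg) == "circuit breaker"
```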
Resumability
PREREQUISITE: Step 0 (stale worktree cleanup) MUST run to completion before any resume logic executes, regardless of prior state.
On invocation, before entering the loop:
- Always run Step 0 (stale worktree cleanup) — even on fresh start
- Read `<self-improve-root>/state/agent-settings.json`:
  - If `status: "user_stopped"`: ask user "Previous run was stopped at iteration {N}. Resume? [yes/no]". If no, exit. If yes, continue.
  - If `status: "running"`: session crashed — resume automatically (no user prompt)
  - If `status: "idle"`: fresh start
- Re-confirm trust gate only if `trust_confirmed` is `false` in agent-settings.json
- Read `<self-improve-root>/state/iteration_state.json`:
  - `status: "in_progress"` → resume from `current_step`, skip completed sub-steps
  - `status: "completed"` → start next iteration
  - `status: "failed"` → complete recording step if needed, start next iteration
  - File missing → start from iteration 1
Completion
When the loop exits:
- Update agent-settings.json with final status
- If `target_reached` AND `auto_pr` is `true` in settings: spawn git-master to create a PR from `improve/{goal_slug}` to upstream. If `auto_pr` is `false` (default): skip PR creation. Log: "PR creation skipped (auto_pr: false). Run manually: gh pr create --head improve/{goal_slug} --base {target_branch}"
- Run plot_progress.py one final time
- Print summary report:

  ```
  === Self-Improvement Loop Complete ===
  Status: {status}
  Iterations: {iterations}
  Best Score: {best_score} (baseline: {baseline})
  Improvement: {delta} ({delta_pct}%)
  ```

- Run `/oh-my-claudecode:cancel` for clean state cleanup
Error Handling
| Situation | Action |
|---|---|
| Agent fails to produce output | Retry once. If still no output, log and continue. |
| Researcher produces empty brief | Proceed — planners work from history alone. |
| All plans rejected by critic | Skip execution. Log. Continue to next iteration. |
| All executors fail | Skip tournament. Record failures. Continue. |
| Merge conflict | Reject candidate, try next. |
| Re-benchmark regression | Reject candidate, revert merge, try next. |
| Push failure | Log warning. Continue — push is backup. |
| Worktree already exists | Remove and recreate. |
| Settings corrupted | Report and stop. |
Approach Family Taxonomy
Every plan must be tagged with exactly one:
| Tag | Description |
|---|---|
| | Model/component structure changes |
| | Optimizer, LR, scheduler, batch size |
| | Data loading, augmentation, preprocessing |
| | Mixed precision, distributed training, compiled kernels |
| | Algorithmic/numerical optimizations |
| | Evaluation methodology changes |
| | Documentation-only changes |
| | Does not fit above — explain in evidence |