# autoresearch
Set up and run an autonomous experiment loop for any optimization target. Use when asked to "run autoresearch", "optimize X in a loop", "set up autoresearch for X", or "start experiments".
```shell
git clone https://github.com/drivelineresearch/autoresearch-claude-code
```

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/drivelineresearch/autoresearch-claude-code "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/autoresearch" ~/.claude/skills/drivelineresearch-autoresearch-claude-code-autoresearch && rm -rf "$T"
```
`skills/autoresearch/SKILL.md`

# Autoresearch
Autonomous experiment loop: try ideas, keep what works, discard what doesn't, never stop.
## Setup
- Ask (or infer): Goal, Command, Metric (+ direction), Files in scope, Constraints.
- Create a branch: `git checkout -b autoresearch/<goal>-<date>`
- Read the source files. Understand the workload deeply before writing anything.
- `mkdir -p experiments`, then write `experiments/worklog.md`, `autoresearch.md`, and `autoresearch.sh` (see below). Commit all three.
- Initialize experiment (write config header to `autoresearch.jsonl`) → run baseline → log result → start looping immediately.
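The setup steps above can be sketched as a shell sequence. This is illustrative only: the goal name `speedup`, the metric name, and the throwaway demo repo are placeholders, not part of the skill.

```shell
# Demo repo so the sketch is self-contained; in practice you run this
# inside the project you are optimizing.
cd "$(mktemp -d)" && git init -q

# 1. Branch for the session (goal and date are placeholders).
git checkout -qb "autoresearch/speedup-$(date +%Y-%m-%d)"

# 2. Scaffold the experiment files (contents come from the sections below).
mkdir -p experiments
printf '# Autoresearch worklog: speedup\n' > experiments/worklog.md

# 3. Write the JSONL config header; the baseline run comes next.
echo '{"type":"config","name":"speedup","metricName":"wall_time_s","metricUnit":"s","bestDirection":"lower"}' > autoresearch.jsonl
```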
## autoresearch.md

This file is the heart of the session. A fresh agent with no context should be able to read this file and run the loop effectively. Invest time making it excellent.
```markdown
# Autoresearch: <goal>

## Objective
<Specific description of what we're optimizing and the workload.>

## Metrics
- **Primary**: <name> (<unit>, lower/higher is better)
- **Secondary**: <name>, <name>, ...

## How to Run
`./autoresearch.sh` — outputs `METRIC name=number` lines.

## Files in Scope
<Every file the agent may modify, with a brief note on what it does.>

## Off Limits
<What must NOT be touched.>

## Constraints
<Hard rules: tests must pass, no new deps, etc.>

## What's Been Tried
<Update this section as experiments accumulate. Note key wins, dead ends, and architectural insights so the agent doesn't repeat failed approaches.>
```
Update `autoresearch.md` periodically — especially the "What's Been Tried" section — so resuming agents have full context.
## autoresearch.sh

A Bash script (`set -euo pipefail`) that pre-checks fast (syntax errors in <1s), runs the benchmark, and outputs `METRIC name=number` lines. Keep it fast — every second is multiplied by hundreds of runs. Update it during the loop as needed.
## JSONL State Protocol
All experiment state lives in `autoresearch.jsonl`. This is the source of truth for resuming across sessions.
### Config Header
The first line (and any re-initialization line) is a config header:
```json
{"type":"config","name":"<session name>","metricName":"<primary metric name>","metricUnit":"<unit>","bestDirection":"lower|higher"}
```
Rules:
- First line of the file is always a config header.
- Each subsequent config header (re-init) starts a new segment. Segment index increments with each config header.
- The baseline for a segment is the first result line after the config header.
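Under these rules, a resuming agent can recover the active config, segment index, and segment baseline directly from the file. A sketch assuming `jq` is available; the demo file below stands in for a real `autoresearch.jsonl`:

```shell
# Demo state: two segments (a re-init switched the metric).
f=$(mktemp)
cat > "$f" <<'EOF'
{"type":"config","name":"demo","metricName":"wall_time_s","metricUnit":"s","bestDirection":"lower"}
{"run":1,"commit":"abc1234","metric":42.3,"metrics":{},"status":"keep","description":"baseline","timestamp":1,"segment":0}
{"type":"config","name":"demo","metricName":"peak_rss_kb","metricUnit":"kB","bestDirection":"lower"}
{"run":2,"commit":"abc1234","metric":9001,"metrics":{},"status":"keep","description":"baseline (segment 1)","timestamp":2,"segment":1}
EOF

# Active config = last config header; segment index = header count - 1.
config=$(jq -c 'select(.type == "config")' "$f" | tail -n 1)
segment=$(( $(jq -c 'select(.type == "config")' "$f" | wc -l) - 1 ))

# Baseline = first result line in the current segment.
baseline=$(jq -r --argjson seg "$segment" 'select(.run != null and .segment == $seg) | .metric' "$f" | head -n 1)

echo "segment=$segment metric=$(echo "$config" | jq -r .metricName) baseline=$baseline"
```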
### Result Lines
Each experiment result is appended as a JSON line:
```json
{"run":1,"commit":"abc1234","metric":42.3,"metrics":{"secondary_metric":123},"status":"keep","description":"baseline","timestamp":1234567890,"segment":0}
```
Fields:
- `run`: sequential run number (1-indexed, across all segments)
- `commit`: 7-char git short hash (the commit hash AFTER the auto-commit for keeps, or current HEAD for discard/crash)
- `metric`: primary metric value (0 for crashes)
- `metrics`: object of secondary metric values — once you start tracking a secondary metric, include it in every subsequent result
- `status`: `keep` | `discard` | `crash`
- `description`: short description of what this experiment tried
- `timestamp`: Unix epoch seconds
- `segment`: current segment index
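With these fields, the best kept result in a segment is a one-line query. A `jq` sketch on fabricated demo data, assuming `bestDirection=lower` (use `max_by` when higher is better):

```shell
f=$(mktemp)
cat > "$f" <<'EOF'
{"run":1,"commit":"abc1234","metric":42.3,"metrics":{},"status":"keep","description":"baseline","timestamp":1,"segment":0}
{"run":2,"commit":"def5678","metric":40.1,"metrics":{},"status":"keep","description":"optimize hot loop","timestamp":2,"segment":0}
{"run":3,"commit":"def5678","metric":43.0,"metrics":{},"status":"discard","description":"try vectorization","timestamp":3,"segment":0}
EOF

# Minimum metric among kept runs of segment 0.
best=$(jq -s '[.[] | select(.status == "keep" and .segment == 0)] | min_by(.metric) | .metric' "$f")
echo "best=$best"
```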
### Initialization (equivalent of `init_experiment`)

To initialize, write the config header to `autoresearch.jsonl`:
```shell
echo '{"type":"config","name":"<name>","metricName":"<metric>","metricUnit":"<unit>","bestDirection":"<lower|higher>"}' > autoresearch.jsonl
```
To re-initialize (change optimization target), append a new config header:
```shell
echo '{"type":"config","name":"<name>","metricName":"<metric>","metricUnit":"<unit>","bestDirection":"<lower|higher>"}' >> autoresearch.jsonl
```
### Running Experiments (equivalent of `run_experiment`)

Run the benchmark command, capturing timing and output:
```shell
START_TIME=$(date +%s%N)
bash -c "./autoresearch.sh" 2>&1 | tee /tmp/autoresearch-output.txt
EXIT_CODE=${PIPESTATUS[0]}  # plain $? would report tee's exit code, not the benchmark's
END_TIME=$(date +%s%N)
DURATION=$(echo "scale=3; ($END_TIME - $START_TIME) / 1000000000" | bc)
echo "Duration: ${DURATION}s, Exit code: ${EXIT_CODE}"
```
After running:
- Parse `METRIC name=number` lines from the output to extract metric values
- If exit code != 0 → this is a crash
- Read the output to understand what happened
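Parsing the `METRIC name=number` lines can look like this; the captured output below is fabricated for the demo:

```shell
out=$(mktemp)
cat > "$out" <<'EOF'
compiling... done
METRIC wall_time_s=40.1
METRIC peak_rss_kb=123456
EOF

# All metrics as "name=value" pairs:
sed -n 's/^METRIC //p' "$out"

# Grab one metric directly:
primary=$(awk -F'=' '$1 == "METRIC wall_time_s" {print $2}' "$out")
echo "primary=$primary"
```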
### Logging Results (equivalent of `log_experiment`)

After each experiment run, follow this exact protocol:
1. Determine status
- `keep`: primary metric improved (lower if `bestDirection=lower`, higher if `bestDirection=higher`)
- `discard`: primary metric worse or equal to best kept result
- `crash`: command failed (non-zero exit code)
Secondary metrics are for monitoring only — they almost never affect keep/discard decisions. Only discard a primary improvement if a secondary metric degraded catastrophically, and explain why in the description.
2. Git operations
If keep:
```shell
git add -A
git diff --cached --quiet && echo "nothing to commit" || git commit -m "<description> Result: {\"status\":\"keep\",\"<metricName>\":<value>,<secondary metrics>}"
```
Then get the new commit hash:
```shell
git rev-parse --short=7 HEAD
```
If discard or crash:
```shell
git checkout -- .
git clean -fd
```
Warning: Never use `git clean -fdx` — the `-x` flag deletes gitignored files including JSONL state, dashboards, and experiment artifacts.
Use the current HEAD hash (before revert) as the commit field.
3. Append result to JSONL
```shell
echo '{"run":<N>,"commit":"<hash>","metric":<value>,"metrics":{<secondaries>},"status":"<status>","description":"<desc>","timestamp":'$(date +%s)',"segment":<seg>}' >> autoresearch.jsonl
```
4. Update dashboard
After every log, regenerate `autoresearch-dashboard.md` (see Dashboard section below).
5. Append to worklog
After every experiment, append a concise entry to `experiments/worklog.md`. This file survives context compactions and crashes, giving any resuming agent (or the user) a complete narrative of the session. Format:
```markdown
### Run N: <short description> — <primary_metric>=<value> (<STATUS>)
- Timestamp: YYYY-MM-DD HH:MM
- What changed: <1-2 sentences describing the code/config change>
- Result: <metric values>, <delta vs best>
- Insight: <what was learned, why it worked/failed>
- Next: <what to try next based on this result>
```
Also update the "Key Insights" and "Next Ideas" sections at the bottom of the worklog when you learn something new.
On setup, create `experiments/worklog.md` with the session header, data summary, and baseline result. On resume, read `experiments/worklog.md` to recover context.
6. Secondary metric consistency
Once you start tracking a secondary metric, you MUST include it in every subsequent result. Parse the JSONL to discover which secondary metrics have been tracked and ensure all are present.
If you want to add a new secondary metric mid-session, that's fine — but from that point forward, always include it.
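Discovering which secondary metrics have ever been tracked is a union over all logged `metrics` objects. A `jq` sketch (assuming `jq` is available) on fabricated demo data:

```shell
f=$(mktemp)
cat > "$f" <<'EOF'
{"type":"config","name":"demo","metricName":"wall_time_s","metricUnit":"s","bestDirection":"lower"}
{"run":1,"commit":"abc1234","metric":42.3,"metrics":{"peak_rss_kb":100},"status":"keep","description":"baseline","timestamp":1,"segment":0}
{"run":2,"commit":"def5678","metric":40.1,"metrics":{"peak_rss_kb":99,"cache_misses":5},"status":"keep","description":"tune","timestamp":2,"segment":0}
EOF

# Every secondary key ever logged; all must appear in future results.
tracked=$(jq -r 'select(.metrics != null) | .metrics | keys[]' "$f" | sort -u)
echo "$tracked"
```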
## Dashboard

After each experiment, regenerate `autoresearch-dashboard.md`:
```markdown
# Autoresearch Dashboard: <name>

**Runs:** 12 | **Kept:** 8 | **Discarded:** 3 | **Crashed:** 1
**Baseline:** <metric_name>: <value><unit> (#1)
**Best:** <metric_name>: <value><unit> (#8, -26.2%)

| # | commit | <metric_name> | status | description |
|---|--------|---------------|--------|-------------|
| 1 | abc1234 | 42.3s | keep | baseline |
| 2 | def5678 | 40.1s (-5.2%) | keep | optimize hot loop |
| 3 | abc1234 | 43.0s (+1.7%) | discard | try vectorization |
...
```
Include delta percentages vs baseline for each metric value. Show ALL runs in the current segment (not just recent ones).
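The header counts and per-run deltas can be computed straight from the JSONL. A `jq` sketch on fabricated demo data, with abridged table columns and deltas rounded to one decimal (lower-is-better assumed):

```shell
f=$(mktemp)
cat > "$f" <<'EOF'
{"type":"config","name":"demo","metricName":"wall_time_s","metricUnit":"s","bestDirection":"lower"}
{"run":1,"commit":"abc1234","metric":42.3,"metrics":{},"status":"keep","description":"baseline","timestamp":1,"segment":0}
{"run":2,"commit":"def5678","metric":40.1,"metrics":{},"status":"keep","description":"optimize hot loop","timestamp":2,"segment":0}
{"run":3,"commit":"def5678","metric":43.0,"metrics":{},"status":"discard","description":"try vectorization","timestamp":3,"segment":0}
EOF

# Header stats, then one table row per run with delta vs baseline.
dash=$(jq -rs '
  [.[] | select(.run != null)] as $r
  | $r[0].metric as $base
  | ([$r[] | select(.status == "keep")] | length) as $kept
  | "Runs: \($r | length) | Kept: \($kept)",
    ($r[] | "| \(.run) | \(.commit) | \(.metric)s (\((((.metric - $base) / $base * 1000) | round) / 10)%) | \(.status) |")
' "$f")
echo "$dash"
```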
## Loop Rules
LOOP FOREVER. Never ask "should I continue?" — the user expects autonomous work.
- Primary metric is king. Improved → `keep`. Worse/equal → `discard`. Secondary metrics rarely affect this.
- Simpler is better. Removing code for equal perf = keep. Ugly complexity for tiny gain = probably discard.
- Don't thrash. Repeatedly reverting the same idea? Try something structurally different.
- Crashes: fix if trivial, otherwise log and move on. Don't over-invest.
- Think longer when stuck. Re-read source files, study the profiling data, reason about what the CPU is actually doing. The best ideas come from deep understanding, not from trying random variations.
- Resuming: if `autoresearch.md` exists, read it + `autoresearch.jsonl` + `experiments/worklog.md` + git log, then continue looping. The worklog has the full narrative and insights.
NEVER STOP. The user may be away for hours. Keep going until interrupted.
## Ideas Backlog
When you discover complex but promising optimizations that you decide not to pursue right now, append them as bullet points to `autoresearch.ideas.md`. Don't let good ideas get lost.
If the loop stops (context limit, crash, etc.) and `autoresearch.ideas.md` exists, you'll be asked to:
- Read the ideas file and use it as inspiration for new experiment paths
- Prune ideas that are duplicated, already tried, or clearly bad
- Create experiments based on the remaining ideas
- If nothing is left, try to come up with your own new ideas
- If all paths are exhausted, delete `autoresearch.ideas.md` and write a final summary report
When there is no `autoresearch.ideas.md` file and the loop ends, the research is complete.
## User Steers
User messages sent while an experiment is running should be noted and incorporated into the NEXT experiment. Finish the current experiment first — don't stop or ask for confirmation.
## Updating autoresearch.md
Periodically update `autoresearch.md` — especially the "What's Been Tried" section — so that a fresh agent resuming the loop has full context on what worked, what didn't, and what architectural insights have been gained. Do this every 5-10 experiments or after any significant breakthrough.