Auto-claude-code-research-in-sleep result-to-claim
Use when experiments complete to judge what claims the results support, what they don't, and what evidence is still missing. Codex MCP evaluates results against intended claims and routes to next action (pivot, supplement, or confirm). Use after experiments finish — before writing the paper or running ablations.
git clone https://github.com/wanshuiyin/Auto-claude-code-research-in-sleep
T=$(mktemp -d) && git clone --depth=1 https://github.com/wanshuiyin/Auto-claude-code-research-in-sleep "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/result-to-claim" ~/.claude/skills/wanshuiyin-auto-claude-code-research-in-sleep-result-to-claim && rm -rf "$T"
skills/result-to-claim/SKILL.md
Result-to-Claim Gate
Experiments produce numbers; this gate decides what those numbers mean. Collect results from available sources, get a Codex judgment, then auto-route based on the verdict.
Context: $ARGUMENTS
When to Use
- After a set of experiments completes (main results, not just sanity checks)
- Before committing to claims in a paper or review response
- When results are ambiguous and you need an objective second opinion
Workflow
Step 1: Collect Results
Gather experiment data from whatever sources are available in the project:
- W&B (preferred): `wandb.Api().run("<entity>/<project>/<run_id>").history()` for metrics, training curves, and comparisons (see the sketch after this list)
- EXPERIMENT_LOG.md: full results table with baselines and verdicts
- EXPERIMENT_TRACKER.md: check which experiments are DONE vs still running
- Log files: `ssh server "tail -100 /path/to/training.log"` if no other sources are available
- docs/research_contract.md: intended claims and experiment design
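A minimal sketch of the W&B pull, assuming the `wandb` package is installed and the placeholder entity/project/run IDs are filled in; the metric names will vary by project:

```python
# Sketch: pull final metrics, config, and curves for one run from W&B.
# <entity>/<project>/<run_id> are the placeholders from the bullet above.
import wandb

api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")

config = dict(run.config)                     # method, dataset, hyperparameters
summary = {k: run.summary[k]                  # final logged metrics
           for k in run.summary.keys() if not k.startswith("_")}
history = run.history()                       # per-step training curves (DataFrame)

print(run.name, run.state)
print(summary)
```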
Assemble the key information:
- What experiments were run (method, dataset, config)
- Main metrics and baseline comparisons (deltas)
- The intended claim these experiments were designed to test
- Any known confounds or caveats
Step 2: Codex Judgment
Send the collected results to Codex for objective evaluation:
mcp__codex__codex:
  config: {"model_reasoning_effort": "xhigh"}
  prompt: |
    RESULT-TO-CLAIM EVALUATION

    I need you to judge whether experimental results support the intended claim.

    Intended claim: [the claim these experiments test]
    Experiments run: [list experiments with method, dataset, metrics]
    Results: [paste key numbers, comparison deltas, significance]
    Baselines: [baseline numbers and sources — reproduced or from paper]
    Known caveats: [any confounding factors, limited datasets, missing comparisons]

    Please evaluate:
    1. claim_supported: yes | partial | no
    2. what_results_support: what the data actually shows
    3. what_results_dont_support: where the data falls short of the claim
    4. missing_evidence: specific evidence gaps
    5. suggested_claim_revision: if the claim should be strengthened, weakened, or reframed
    6. next_experiments_needed: specific experiments to fill gaps (if any)
    7. confidence: high | medium | low

    Be honest. Do not inflate claims beyond what the data supports. A single positive result on one dataset does not support a general claim.
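If the evidence from Step 1 is already collected into a dict, the prompt can be rendered mechanically before the MCP call. A small illustrative sketch; the keys of `collected` are assumptions, only the prompt structure is taken from the template above:

```python
# Sketch: render the RESULT-TO-CLAIM prompt from the evidence gathered in Step 1.
# The dict keys are illustrative placeholders, not part of the skill.
from textwrap import dedent

def build_prompt(collected: dict) -> str:
    return dedent(f"""\
        RESULT-TO-CLAIM EVALUATION

        I need you to judge whether experimental results support the intended claim.

        Intended claim: {collected["claim"]}
        Experiments run: {collected["experiments"]}
        Results: {collected["results"]}
        Baselines: {collected["baselines"]}
        Known caveats: {collected.get("caveats", "none noted")}

        Please evaluate claim_supported, what_results_support, what_results_dont_support,
        missing_evidence, suggested_claim_revision, next_experiments_needed, and confidence.
        """)
```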
Step 3: Parse and Normalize
Extract structured fields from Codex response:
- claim_supported: yes | partial | no
- what_results_support: "..."
- what_results_dont_support: "..."
- missing_evidence: "..."
- suggested_claim_revision: "..."
- next_experiments_needed: "..."
- confidence: high | medium | low
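One way to extract those fields, assuming Codex answers with `field: value` lines as the prompt requests; a sketch, not the skill's mandated parser:

```python
# Sketch: normalize the Codex reply into the seven fields listed above.
import re

FIELDS = [
    "claim_supported", "what_results_support", "what_results_dont_support",
    "missing_evidence", "suggested_claim_revision", "next_experiments_needed",
    "confidence",
]

def parse_verdict(reply: str) -> dict:
    verdict = {}
    for field in FIELDS:
        match = re.search(rf"{field}\s*:\s*(.+)", reply, re.IGNORECASE)
        verdict[field] = match.group(1).strip() if match else ""
    # Clamp the enum-like fields so routing can switch on clean values.
    verdict["claim_supported"] = (verdict["claim_supported"].split() or ["no"])[0].lower()
    verdict["confidence"] = (verdict["confidence"].split() or ["low"])[0].lower()
    return verdict
```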
Step 3.5: Check Experiment Integrity (if audit exists)
Skip this step if EXPERIMENT_AUDIT.json does not exist.
if EXPERIMENT_AUDIT.json exists:
    read integrity_status from file
    attach to verdict output: integrity_status: pass | warn | fail
    if integrity_status == "fail":
        append to verdict: "[INTEGRITY CONCERN] — audit found issues, see EXPERIMENT_AUDIT.md"
        downgrade confidence to "low" regardless of Codex judgment
    if integrity_status == "warn":
        append to verdict: "[INTEGRITY: WARN] — audit flagged potential issues"
else:
    integrity_status = "unavailable"
    verdict is labeled "provisional — no integrity audit run"
    (this does NOT block anything; the pipeline continues normally)
See shared-references/experiment-integrity.md for the full integrity protocol.
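The same logic as a Python sketch, assuming `verdict` is the dict produced in Step 3; the file name and `integrity_status` values come from the skill, while the keys used for notes and labels are illustrative:

```python
# Sketch: attach the integrity audit (if any) to the parsed verdict.
import json
from pathlib import Path

def attach_integrity(verdict: dict, audit_path: str = "EXPERIMENT_AUDIT.json") -> dict:
    path = Path(audit_path)
    if not path.exists():
        verdict["integrity_status"] = "unavailable"
        verdict["label"] = "provisional - no integrity audit run"
        return verdict                      # nothing is blocked; the pipeline continues

    status = json.loads(path.read_text()).get("integrity_status", "unavailable")
    verdict["integrity_status"] = status
    if status == "fail":
        verdict["notes"] = "[INTEGRITY CONCERN] audit found issues, see EXPERIMENT_AUDIT.md"
        verdict["confidence"] = "low"       # downgrade regardless of Codex judgment
    elif status == "warn":
        verdict["notes"] = "[INTEGRITY: WARN] audit flagged potential issues"
    return verdict
```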
Step 4: Route Based on Verdict
`no`: Claim not supported
- Record postmortem in findings.md (Research Findings section):
  - What was tested, what failed, hypotheses for why
  - Constraints for future attempts (what NOT to try again)
- Update CLAUDE.md Pipeline Status
- Decide whether to pivot to next idea from IDEA_CANDIDATES.md or try an alternative approach
`partial`: Claim partially supported
- Update the working claim to reflect what IS supported
- Record the gap in findings.md
- Design and run supplementary experiments to fill evidence gaps
- Re-run result-to-claim after supplementary experiments complete
- Multiple rounds of `partial` on the same claim → record analysis in findings.md, consider whether to narrow the claim scope or switch ideas
`yes`: Claim supported
- Record confirmed claim in project notes
- If ablation studies are incomplete → trigger /ablation-planner
- If all evidence is in → ready for paper writing
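A minimal dispatch sketch over the normalized verdict from Step 3; the returned action names are placeholders for the branches listed above:

```python
# Sketch: map claim_supported to the next pipeline action.
def route(verdict: dict) -> str:
    supported = verdict.get("claim_supported", "no")
    if supported == "yes":
        # record the confirmed claim; trigger /ablation-planner if ablations remain
        return "confirm"
    if supported == "partial":
        # narrow the working claim, log the gap, queue supplementary experiments
        return "supplement"
    # "no": write the postmortem to findings.md and decide whether to pivot
    return "pivot"
```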
Step 5: Update Research Wiki (if active)
Skip this step entirely if research-wiki/ does not exist.
if research-wiki/ exists:
    # 1. Create experiment page
    Create research-wiki/experiments/<exp_id>.md with:
      - node_id: exp:<id>
      - idea_id: idea:<active_idea>
      - date, hardware, duration, metrics
      - verdict, confidence, reasoning summary

    # 2. Update claim status
    for each claim resolved by this verdict:
        if verdict == "yes":
            Update claim page: status → supported
            python3 tools/research_wiki.py add_edge research-wiki/ --from "exp:<id>" --to "claim:<cid>" --type supports --evidence "<metric>"
        elif verdict == "partial":
            Update claim page: status → partial
            python3 tools/research_wiki.py add_edge research-wiki/ --from "exp:<id>" --to "claim:<cid>" --type supports --evidence "partial"
        else:
            Update claim page: status → invalidated
            python3 tools/research_wiki.py add_edge research-wiki/ --from "exp:<id>" --to "claim:<cid>" --type invalidates --evidence "<why>"

    # 3. Update idea outcome
    Update research-wiki/ideas/<idea_id>.md:
      - outcome: positive | mixed | negative
      - If negative: fill "Failure / Risk Notes" and "Lessons Learned"
      - If positive: fill "Actual Outcome" and "Reusable Components"

    # 4. Rebuild + log
    python3 tools/research_wiki.py rebuild_query_pack research-wiki/
    python3 tools/research_wiki.py log research-wiki/ "result-to-claim: exp:<id> verdict=<verdict> for idea:<idea_id>"

    # 5. Re-ideation suggestion
    Count failed/partial ideas since last /idea-creator run.
    If >= 3:
        print "💡 3+ ideas tested since last ideation. Consider re-running /idea-creator — the wiki now knows what doesn't work."
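A sketch of this bookkeeping, assuming the `add_edge` and `rebuild_query_pack` subcommands of tools/research_wiki.py shown above; the experiment-page body is heavily stripped down for illustration:

```python
# Sketch: create the experiment page, link it to one claim, then rebuild the query pack.
import subprocess
from pathlib import Path

def update_wiki(exp_id: str, claim_id: str, verdict: str, evidence: str) -> None:
    wiki = Path("research-wiki")
    if not wiki.exists():
        return                              # Step 5 is skipped when no wiki is active

    page = wiki / "experiments" / f"{exp_id}.md"
    page.parent.mkdir(parents=True, exist_ok=True)
    page.write_text(f"node_id: exp:{exp_id}\nverdict: {verdict}\n")  # stripped-down page

    edge_type = "supports" if verdict in ("yes", "partial") else "invalidates"
    subprocess.run(
        ["python3", "tools/research_wiki.py", "add_edge", str(wiki),
         "--from", f"exp:{exp_id}", "--to", f"claim:{claim_id}",
         "--type", edge_type, "--evidence", evidence],
        check=True,
    )
    subprocess.run(
        ["python3", "tools/research_wiki.py", "rebuild_query_pack", str(wiki)],
        check=True,
    )
```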
Rules
- Codex is the judge, not CC. CC collects evidence and routes; Codex evaluates. This prevents post-hoc rationalization.
- Do not inflate claims beyond what the data supports. If Codex says "partial", do not round up to "yes".
- A single positive result on one dataset does not support a general claim. Be honest about scope.
- If `confidence` is low, treat the judgment as inconclusive and add experiments rather than committing to a claim.
- If Codex MCP is unavailable (the call fails), CC makes its own judgment and marks it `[pending Codex review]`; do not block the pipeline.
- Always record the verdict and reasoning in findings.md, regardless of outcome.
Review Tracing
After each `mcp__codex__codex` or `mcp__codex__codex-reply` reviewer call, save the trace following shared-references/review-tracing.md. Use tools/save_trace.sh or write files directly to `.aris/traces/<skill>/<date>_run<NN>/`. Respect the `--- trace:` parameter (default: full).
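When tools/save_trace.sh is not used, the trace can be written directly under the path pattern above; a sketch, where the file names inside the run directory are assumptions:

```python
# Sketch: write a reviewer trace to .aris/traces/<skill>/<date>_runNN/.
import datetime
from pathlib import Path

def save_trace(skill: str, prompt: str, reply: str) -> Path:
    date = datetime.date.today().isoformat()
    base = Path(".aris/traces") / skill
    base.mkdir(parents=True, exist_ok=True)
    run_no = len(list(base.glob(f"{date}_run*"))) + 1   # next run number for today
    run_dir = base / f"{date}_run{run_no:02d}"
    run_dir.mkdir()
    (run_dir / "prompt.md").write_text(prompt)           # illustrative file names
    (run_dir / "reply.md").write_text(reply)
    return run_dir
```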