AutoResearchClaw · researchclaw
Run the ResearchClaw autonomous research pipeline from a topic, config, and output directory.
Install
Source · Clone the upstream repo:

```shell
git clone https://github.com/aiming-lab/AutoResearchClaw
```

Claude Code · Install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/aiming-lab/AutoResearchClaw "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/researchclaw" ~/.claude/skills/aiming-lab-autoresearchclaw-researchclaw && rm -rf "$T"
```

Manifest: `.claude/skills/researchclaw/SKILL.md`
ResearchClaw — Autonomous Research Pipeline Skill
Description
Run ResearchClaw's 23-stage autonomous research pipeline. Given a research topic, this skill orchestrates the entire research workflow: literature review → hypothesis generation → experiment design → code generation & execution → result analysis → paper writing → peer review → final export.
Trigger Conditions
Activate this skill when the user:
- Asks to "research [topic]", "write a paper about [topic]", or "investigate [topic]"
- Wants to run an autonomous research pipeline
- Asks to generate a research paper from scratch
- Mentions "ResearchClaw" by name
Instructions
Prerequisites Check
- Verify a config file exists: `ls config.yaml || ls config.researchclaw.example.yaml`
- If there is no `config.yaml`, create one from the example: `cp config.researchclaw.example.yaml config.yaml`
- Ensure the user's LLM API key is configured in `config.yaml` under `llm.api_key`, or via the environment variable named by `llm.api_key_env`.
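The checks above can also be done programmatically. A minimal sketch (the helper name `ensure_config` is ours, not part of the ResearchClaw package):

```python
import shutil
from pathlib import Path

def ensure_config(path="config.yaml",
                  example="config.researchclaw.example.yaml"):
    """Return the config path, creating it from the example if missing."""
    cfg = Path(path)
    if not cfg.exists():
        src = Path(example)
        if not src.exists():
            raise FileNotFoundError(f"Neither {path} nor {example} found")
        # Start from the shipped example config; edit llm.api_key afterwards.
        shutil.copy(src, cfg)
    return cfg
```

Remember to fill in the API key after copying; the example file ships without one.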
Running the Pipeline
Option A: CLI (recommended)
researchclaw run --topic "Your research topic here" --auto-approve
Options:
- `-t` / `--topic`: Override the research topic from config
- `-c` / `--config`: Config file path (default: `config.yaml`)
- `-o` / `--output`: Output directory (default: `artifacts/rc-YYYYMMDD-HHMMSS-HASH/`)
- `--from-stage`: Resume from a specific stage (e.g., `PAPER_OUTLINE`)
- `--auto-approve`: Auto-approve gate stages (5, 9, 20) without human input
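If you invoke the CLI from Python, the flags above can be assembled into an argv list; this is an illustrative sketch (the helper `build_run_command` is ours, only the flags come from the docs above):

```python
def build_run_command(topic, config="config.yaml", output=None,
                      from_stage=None, auto_approve=False):
    """Assemble a `researchclaw run` invocation as an argv list
    suitable for subprocess.run(cmd)."""
    cmd = ["researchclaw", "run", "--topic", topic, "--config", config]
    if output:
        cmd += ["--output", output]
    if from_stage:
        cmd += ["--from-stage", from_stage]
    if auto_approve:
        cmd.append("--auto-approve")
    return cmd
```

Passing an argv list (rather than a shell string) avoids quoting issues with multi-word topics.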
Option B: Python API
```python
from pathlib import Path

from researchclaw.pipeline.runner import execute_pipeline
from researchclaw.config import RCConfig
from researchclaw.adapters import AdapterBundle

config = RCConfig.load("config.yaml", check_paths=False)

results = execute_pipeline(
    run_dir=Path("artifacts/my-run"),
    run_id="research-001",
    config=config,
    adapters=AdapterBundle(),
    auto_approve_gates=True,
)

# Check results
for r in results:
    print(f"Stage {r.stage.name}: {r.status.value}")
```
Option C: Iterative Pipeline (multi-round improvement)
```python
from pathlib import Path

from researchclaw.pipeline.runner import execute_iterative_pipeline
from researchclaw.config import RCConfig
from researchclaw.adapters import AdapterBundle

# Load the config as in Option B.
config = RCConfig.load("config.yaml", check_paths=False)

results = execute_iterative_pipeline(
    run_dir=Path("artifacts/my-run"),
    run_id="research-001",
    config=config,
    adapters=AdapterBundle(),
    max_iterations=3,
    convergence_rounds=2,
)
```
Output Structure
After a successful run, the output directory contains:
```
artifacts/<run-id>/
├── stage-1/                      # TOPIC_INIT outputs
├── stage-2/                      # PROBLEM_DECOMPOSE outputs
├── ...
├── stage-10/
│   └── experiment.py             # Generated experiment code
├── stage-12/
│   └── runs/run-1.json           # Experiment execution results
├── stage-14/
│   ├── experiment_summary.json   # Aggregated metrics
│   └── results_table.tex         # LaTeX results table
├── stage-17/
│   └── paper_draft.md            # Full paper draft
├── stage-22/
│   └── charts/                   # Generated visualizations
│       ├── metric_trajectory.png
│       └── experiment_comparison.png
└── pipeline_summary.json         # Overall pipeline status
```
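To inspect a finished run, `pipeline_summary.json` is the natural entry point. A minimal sketch (the helper `summarize_run` is ours; the file's exact schema is not documented here, so it only reports whatever top-level keys are present rather than assuming names):

```python
import json
from pathlib import Path

def summarize_run(run_dir):
    """Load pipeline_summary.json from a run directory and list its
    top-level keys (or return the raw value if it is not a dict)."""
    summary_path = Path(run_dir) / "pipeline_summary.json"
    data = json.loads(summary_path.read_text())
    return sorted(data.keys()) if isinstance(data, dict) else data
```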
Experiment Modes
| Mode | Description | Config |
|---|---|---|
| | LLM generates synthetic results (no code execution) | |
| | Execute generated code locally via subprocess | |
| | Execute on remote GPU server via SSH | |
Troubleshooting
- Config validation error: Run `researchclaw validate --config config.yaml`
- LLM connection failure: Check `llm.base_url` and the API key
- Sandbox execution failure: Verify `experiment.sandbox.python_path` exists and has numpy installed
- Gate rejection: Use `--auto-approve` or manually approve at stages 5, 9, 20
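The sandbox check above can be automated; this is a minimal sketch (the helper `check_sandbox_python` is ours), which probes the configured interpreter with a throwaway import:

```python
import subprocess

def check_sandbox_python(python_path, module="numpy"):
    """Return True if the interpreter at python_path exists and can
    import the given module; False on any failure."""
    try:
        result = subprocess.run(
            [python_path, "-c", f"import {module}"],
            capture_output=True, timeout=30,
        )
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0
```

Run it against the path set in `experiment.sandbox.python_path` before starting a long pipeline run.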
Tools Required
- File read/write (for config and artifacts)
- Bash (for CLI execution)
- No external MCP servers required for basic operation