AutoResearchClaw meta-analysis
Statistical methods for combining results across multiple studies. Use when aggregating cross-study or cross-experiment results.
install
source · Clone the upstream repo
git clone https://github.com/aiming-lab/AutoResearchClaw
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/aiming-lab/AutoResearchClaw "$T" && mkdir -p ~/.claude/skills && cp -r "$T/researchclaw/skills/builtin/experiment/meta-analysis" ~/.claude/skills/aiming-lab-autoresearchclaw-meta-analysis && rm -rf "$T"
manifest:
researchclaw/skills/builtin/experiment/meta-analysis/SKILL.md
Meta-Analysis Best Practice
When comparing results across studies or experiments:
- Report effect sizes, not just p-values
- Use standardized metrics for cross-study comparison
- Account for heterogeneity (different setups, datasets, seeds)
- Report confidence intervals alongside point estimates
- Use forest plots to visualize cross-study comparisons
- Identify and discuss outliers or inconsistent results
- Consider publication bias when interpreting aggregate results
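The practices above can be sketched end to end with a small random-effects meta-analysis. This is a minimal illustration, not part of the skill itself: the study names and (effect size, standard error) values are hypothetical, and it uses the standard DerSimonian-Laird estimator to pool effects, quantify heterogeneity (Cochran's Q, I²), and report 95% confidence intervals alongside the point estimates.

```python
import math

# Hypothetical per-study results: (name, standardized effect size, standard error)
studies = [
    ("exp-A", 0.42, 0.10),
    ("exp-B", 0.31, 0.08),
    ("exp-C", 0.55, 0.12),
]
effects = [e for _, e, _ in studies]
ses = [se for _, _, se in studies]

# Fixed-effect (inverse-variance) weights and pooled estimate
w = [1 / se**2 for se in ses]
fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)

# Heterogeneity: Cochran's Q and I^2 (share of variance beyond chance)
Q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
df = len(studies) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird between-study variance tau^2
C = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects pooled estimate with a 95% confidence interval
w_re = [1 / (se**2 + tau2) for se in ses]
pooled = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Report effect sizes with CIs per study (the rows of a forest plot),
# then the aggregate
for name, e, se in studies:
    print(f"{name}: d={e:.2f}, 95% CI [{e - 1.96*se:.2f}, {e + 1.96*se:.2f}]")
print(f"pooled d={pooled:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}], I^2={I2:.1f}%")
```

The per-study lines are exactly the rows a forest plot would display; a high I² here would signal that the experiments differ in setup or seed enough that the pooled number should be interpreted cautiously.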