ml-experiment-tracker
Plan reproducible ML experiment runs with explicit parameters, metrics, and artifacts. Use before model training to standardize tracking-ready experiment definitions.
Install
source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/0x-professor/ml-experiment-tracker" ~/.claude/skills/openclaw-skills-ml-experiment-tracker && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/0x-professor/ml-experiment-tracker" ~/.openclaw/skills/openclaw-skills-ml-experiment-tracker && rm -rf "$T"
manifest:
skills/0x-professor/ml-experiment-tracker/SKILL.md
ML Experiment Tracker
Overview
Generate structured experiment plans that can be logged consistently in experiment tracking systems.
Workflow
- Define dataset, target task, model family, and parameter search space.
- Define metrics and acceptance thresholds before training.
- Produce a run plan with version and artifact expectations.
- Export the run plan for execution in tracking tools.
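The workflow above can be sketched as a plain-Python plan builder. This is a minimal sketch: the `build_plan` helper and its field names are illustrative assumptions, not the actual API of the bundled script.

```python
import json

def build_plan(dataset, task, model_family, search_space, metrics, thresholds):
    """Assemble a tracking-ready experiment plan as a plain dict.

    Every field is explicit and machine-readable, so the plan can be
    logged as-is in an experiment tracking system.
    """
    return {
        "dataset": dataset,
        "task": task,
        "model_family": model_family,
        "search_space": search_space,          # parameter ranges to sweep
        "metrics": metrics,                    # what to measure on every run
        "acceptance_thresholds": thresholds,   # fixed before training starts
        "artifacts": ["model.pkl", "metrics.json"],  # expected run outputs
        "version": 1,
    }

# Hypothetical example values, for illustration only:
plan = build_plan(
    dataset="imdb-reviews-v2",
    task="binary-classification",
    model_family="logistic-regression",
    search_space={"C": [0.1, 1.0, 10.0]},
    metrics=["accuracy", "f1"],
    thresholds={"f1": 0.85},
)
print(json.dumps(plan, indent=2))  # export for the tracking tool
```

Keeping the plan as plain JSON-serializable data (rather than tool-specific objects) is what makes the final export step portable across tracking tools.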
Use Bundled Resources
- Run scripts/build_experiment_plan.py to generate consistent run plans.
- Read references/tracking-guide.md for the reproducibility checklist.
Guardrails
- Keep inputs explicit and machine-readable.
- Always include metrics and baseline criteria.
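The second guardrail can be enforced mechanically. As a hedged illustration, a validator might reject plans that omit metrics or baseline criteria; the `metrics` and `baseline` field names here are assumptions, not part of the skill's documented schema.

```python
def validate_plan(plan: dict) -> list[str]:
    """Return a list of guardrail violations for an experiment plan."""
    errors = []
    if not plan.get("metrics"):
        errors.append("plan must declare at least one metric")
    if "baseline" not in plan:
        errors.append("plan must include baseline criteria")
    return errors

# A plan missing its baseline fails the check:
issues = validate_plan({"metrics": ["f1"]})
print(issues)  # → ['plan must include baseline criteria']
```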