ml-experiment-tracker

Plan reproducible ML experiment runs with explicit parameters, metrics, and artifacts. Use it before model training to standardize tracking-ready experiment definitions.

Install
Source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/0x-professor/ml-experiment-tracker" ~/.claude/skills/openclaw-skills-ml-experiment-tracker && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/0x-professor/ml-experiment-tracker" ~/.openclaw/skills/openclaw-skills-ml-experiment-tracker && rm -rf "$T"
Manifest: skills/0x-professor/ml-experiment-tracker/SKILL.md
Source content

ML Experiment Tracker

Overview

Generate structured experiment plans that can be logged consistently in experiment tracking systems.

Workflow

  1. Define the dataset, target task, model family, and parameter search space.
  2. Define metrics and acceptance thresholds before training.
  3. Produce a run plan with version and artifact expectations.
  4. Export the run plan for execution in tracking tools (a minimal sketch of such a plan follows this list).
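
A minimal sketch of what such a run plan could look like, assuming a plain-Python dict serialized to JSON. Every field name here (dataset, search_space, thresholds, artifacts, and so on) is an illustrative assumption, not the schema emitted by the bundled script.

  # Illustrative run plan; field names are assumptions, not the schema
  # produced by scripts/build_experiment_plan.py.
  import json

  run_plan = {
      # Step 1: dataset, task, model family, and search space, all explicit.
      "dataset": {"name": "reviews-v3", "version": "2024-06-01"},  # hypothetical dataset
      "task": "binary-classification",
      "model_family": "gradient-boosted-trees",
      "search_space": {"max_depth": [4, 6, 8], "learning_rate": [0.05, 0.1]},
      # Step 2: metrics and acceptance thresholds fixed before training.
      "metrics": ["auroc", "f1"],
      "thresholds": {"auroc": 0.85},
      "baseline": {"auroc": 0.80},
      # Step 3: version and artifact expectations.
      "code_version": "git:<commit-sha>",  # placeholder for the pinned commit
      "artifacts": ["model.pkl", "metrics.json", "confusion_matrix.png"],
  }

  # Step 4: export the plan for execution in a tracking tool.
  with open("run_plan.json", "w") as f:
      json.dump(run_plan, f, indent=2)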

Use Bundled Resources

  • Run scripts/build_experiment_plan.py to generate consistent run plans (an invocation sketch follows this list).
  • Read references/tracking-guide.md for the reproducibility checklist.
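
The script's command-line interface is not documented in this section, so the following is only a hedged sketch: it assumes the script runs under plain python and prints a JSON plan to stdout. Check SKILL.md and the script itself for the real interface.

  # Hypothetical invocation; the assumptions are that the script takes no
  # required arguments and writes a JSON run plan to stdout.
  import json
  import subprocess

  result = subprocess.run(
      ["python", "scripts/build_experiment_plan.py"],
      capture_output=True, text=True, check=True,
  )
  plan = json.loads(result.stdout)  # assumption: JSON on stdout
  print(plan.get("metrics"))        # inspect the planned metrics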

Guardrails

  • Keep inputs explicit and machine-readable.
  • Always include metrics and baseline criteria (a pre-flight check is sketched below).
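
One way to enforce both guardrails mechanically is a small pre-flight check run before a plan is exported. This is a sketch under the same assumed field names as the run-plan example above; it is not part of the bundled skill.

  # Pre-flight guardrail check (sketch; field names are assumptions).
  def validate_plan(plan: dict) -> None:
      # Inputs must be explicit and machine-readable, not free text.
      for key in ("dataset", "search_space"):
          if not isinstance(plan.get(key), dict):
              raise ValueError(f"plan[{key!r}] must be a structured mapping")
      # Metrics and baseline criteria are mandatory before training starts.
      if not plan.get("metrics"):
          raise ValueError("plan must declare at least one metric")
      if not plan.get("baseline"):
          raise ValueError("plan must declare baseline criteria")

  validate_plan({
      "dataset": {"name": "reviews-v3"},      # hypothetical values
      "search_space": {"max_depth": [4, 6]},
      "metrics": ["auroc"],
      "baseline": {"auroc": 0.80},
  })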