dl-transformer-finetune
Build transformer fine-tuning run plans with task settings, hyperparameters, and model-card outputs. Use for repeatable Hugging Face or PyTorch fine-tuning workflows.
Install
Source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/0x-professor/dl-transformer-finetune" ~/.claude/skills/openclaw-skills-dl-transformer-finetune && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/0x-professor/dl-transformer-finetune" ~/.openclaw/skills/openclaw-skills-dl-transformer-finetune && rm -rf "$T"
Manifest:
skills/0x-professor/dl-transformer-finetune/SKILL.md
DL Transformer Finetune
Overview
Generate reproducible fine-tuning run plans for transformer models and downstream tasks.
Workflow
- Define the base model, task type, and dataset.
- Set training hyperparameters and evaluation cadence.
- Produce a run plan plus a model-card skeleton.
- Export configuration-ready artifacts for training pipelines (see the plan sketch after this list).
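A minimal sketch of what such a run plan could look like in Python, assuming a hypothetical dict-based schema; the field names and values below are illustrative, not the skill's actual output format.

run_plan = {
    "base_model": "bert-base-uncased",        # any Hugging Face model id
    "task": "sequence-classification",
    "dataset": "glue/sst2",
    "seed": 42,                               # explicit seed keeps the run reproducible
    "output_dir": "runs/sst2-bert-seed42",
    "hyperparameters": {
        "learning_rate": 2e-5,
        "per_device_train_batch_size": 16,
        "num_train_epochs": 3,
        "weight_decay": 0.01,
    },
    "evaluation": {"strategy": "steps", "every_n_steps": 500},
    "model_card": {                           # skeleton, filled in after training
        "intended_use": "",
        "training_data": "glue/sst2",
        "metrics": [],
        "limitations": "",
    },
}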
Use Bundled Resources
- Run scripts/build_finetune_plan.py for deterministic plan output (a hedged sketch of such an emitter follows this list).
- Read references/finetune-guide.md for hyperparameter baseline guidance.
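The bundled script itself is not reproduced here. As a sketch of the determinism it advertises, assuming the plan is serialized as JSON, a plan emitter could look like the following; the write_plan helper is hypothetical, and the real script's interface may differ.

import json
from pathlib import Path

def write_plan(plan: dict, out_dir: str) -> Path:
    """Serialize a run plan so identical inputs yield byte-identical files."""
    path = Path(out_dir) / "finetune_plan.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    # Sorted keys and a fixed indent make the JSON output deterministic.
    path.write_text(json.dumps(plan, sort_keys=True, indent=2) + "\n")
    return path

write_plan({"base_model": "bert-base-uncased", "seed": 42}, "runs/demo")

Sorting keys matters because Python dicts preserve insertion order, so two plans built in different orders would otherwise serialize differently.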
Guardrails
- Keep run plans reproducible with explicit seeds and output directories.
- Include evaluation and rollback criteria (see the TrainingArguments sketch below).
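A minimal sketch of how these guardrails map onto the Hugging Face transformers Trainer API, assuming a recent v4.x release (older releases used evaluation_strategy instead of eval_strategy). Values are illustrative, and load_best_model_at_end is only one way to approximate a rollback criterion.

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="runs/sst2-bert-seed42",  # explicit output directory
    seed=42,                             # explicit seed
    eval_strategy="steps",               # evaluation cadence
    eval_steps=500,
    save_strategy="steps",
    save_steps=500,
    load_best_model_at_end=True,         # restore the best checkpoint if the final one regressed
    metric_for_best_model="accuracy",
)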