AutoResearchClaw pytorch-training

Best practices for building robust PyTorch training loops. Use when generating or reviewing ML training code.

install
source · Clone the upstream repo
git clone https://github.com/aiming-lab/AutoResearchClaw
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/aiming-lab/AutoResearchClaw "$T" && mkdir -p ~/.claude/skills && cp -r "$T/researchclaw/skills/builtin/tooling/pytorch-training" ~/.claude/skills/aiming-lab-autoresearchclaw-pytorch-training && rm -rf "$T"
manifest: researchclaw/skills/builtin/tooling/pytorch-training/SKILL.md
source content

PyTorch Training Best Practices

  1. Seed every RNG for reproducibility: torch.manual_seed() seeds only PyTorch, so also call numpy.random.seed() and random.seed()
  2. Use DataLoader with num_workers>0 and pin_memory=True when training on GPU
  3. Enable torch.backends.cudnn.benchmark=True when input sizes are fixed across batches
  4. Use learning rate schedulers (CosineAnnealingLR or OneCycleLR)
  5. Implement early stopping based on validation metric
  6. Log metrics every epoch, save best model checkpoint
  7. Use torch.no_grad() for evaluation
  8. Clear gradients with optimizer.zero_grad(set_to_none=True) for efficiency
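Item 1 can be sketched as a small helper (the name `set_seed` is illustrative, not part of the skill):

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Seed all three RNG sources; torch.manual_seed alone is not enough."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # seeds PyTorch's CPU generator
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)  # and every CUDA device

set_seed(42)
a = torch.randn(3)
set_seed(42)
b = torch.randn(3)
# Re-seeding reproduces the same draws.
assert torch.equal(a, b)
```

For full determinism you may also need `torch.use_deterministic_algorithms(True)`, at some speed cost.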
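Items 2 and 3 together, as a minimal sketch (the tensor shapes and worker count here are placeholders to tune for your hardware):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# cuDNN autotunes convolution kernels; a win only when input shapes
# stay fixed from batch to batch (it re-tunes on every new shape).
torch.backends.cudnn.benchmark = True

# Toy dataset standing in for real data.
dataset = TensorDataset(torch.randn(256, 3, 32, 32),
                        torch.randint(0, 10, (256,)))
loader = DataLoader(
    dataset,
    batch_size=64,
    shuffle=True,
    num_workers=2,                           # parallel loading; tune to CPU cores
    pin_memory=torch.cuda.is_available(),    # page-locked memory speeds H2D copies
)
images, labels = next(iter(loader))
```

With `pin_memory=True`, pair it with `tensor.to(device, non_blocking=True)` in the loop to overlap transfers with compute.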
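Items 4 through 8 can be combined into one loop. A minimal sketch on toy data; the model, epoch count, and patience value are illustrative:

```python
import copy
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy regression data standing in for real train/val splits.
train_loader = DataLoader(TensorDataset(torch.randn(128, 10), torch.randn(128, 1)),
                          batch_size=32, shuffle=True)
val_loader = DataLoader(TensorDataset(torch.randn(32, 10), torch.randn(32, 1)),
                        batch_size=32)

model = nn.Linear(10, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=20)

best_val, best_state = float("inf"), None
patience, bad_epochs = 5, 0

for epoch in range(20):
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad(set_to_none=True)  # frees grad buffers instead of zero-filling
        loss_fn(model(x), y).backward()
        optimizer.step()
    scheduler.step()  # one scheduler step per epoch

    model.eval()
    with torch.no_grad():  # skip autograd bookkeeping during evaluation
        val_loss = sum(loss_fn(model(x), y).item() for x, y in val_loader) / len(val_loader)
    print(f"epoch {epoch}: val_loss={val_loss:.4f}")  # per-epoch metric logging

    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
        best_state = copy.deepcopy(model.state_dict())  # keep best checkpoint
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # early stopping on the validation metric
            break

model.load_state_dict(best_state)  # restore best weights before inference
```

In a real run you would persist the checkpoint with `torch.save(best_state, path)` rather than keeping it in memory.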