AutoResearchClaw mixed-precision
Use FP16/BF16 mixed precision to accelerate training and reduce memory usage. Use when optimizing GPU performance.
install
source · Clone the upstream repo
git clone https://github.com/aiming-lab/AutoResearchClaw
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/aiming-lab/AutoResearchClaw "$T" && mkdir -p ~/.claude/skills && cp -r "$T/researchclaw/skills/builtin/tooling/mixed-precision" ~/.claude/skills/aiming-lab-autoresearchclaw-mixed-precision && rm -rf "$T"
manifest:
researchclaw/skills/builtin/tooling/mixed-precision/SKILL.md
Mixed Precision Training Best Practice
Use torch.cuda.amp for automatic mixed precision (see the sketches after this list):
- Wrap forward pass in torch.cuda.amp.autocast()
- Use GradScaler for loss scaling
- BF16 preferred over FP16 on Ampere+ GPUs (RTX 3xxx, A100, RTX 4xxx)
- Watch for NaN gradients — reduce learning rate if needed
- Do NOT use amp with custom CUDA kernels unless tested
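A minimal training-step sketch of the autocast + GradScaler pattern above, assuming a CUDA device; the model, optimizer, and synthetic data are illustrative placeholders, not part of the skill:

```python
import torch
from torch import nn

# Placeholders for illustration: any model, optimizer, and data follow the same pattern.
device = "cuda"
model = nn.Linear(512, 10).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # loss scaling guards against FP16 gradient underflow

for step in range(100):
    inputs = torch.randn(32, 512, device=device)           # synthetic batch
    targets = torch.randint(0, 10, (32,), device=device)

    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():                        # forward pass runs in FP16 where safe
        loss = loss_fn(model(inputs), targets)

    scaler.scale(loss).backward()   # backward on the scaled loss
    scaler.step(optimizer)          # unscales grads; skips the step if grads are inf/NaN
    scaler.update()                 # adapts the scale factor for the next step
```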
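On Ampere+ GPUs the BF16 variant is simpler: BF16 shares FP32's exponent range, so GradScaler is generally unnecessary. This sketch reuses the placeholder names from the example above:

```python
# BF16 variant (reuses model, optimizer, loss_fn, inputs, targets from above).
optimizer.zero_grad(set_to_none=True)
with torch.cuda.amp.autocast(dtype=torch.bfloat16):
    loss = loss_fn(model(inputs), targets)
loss.backward()   # no GradScaler: BF16 rarely underflows
optimizer.step()
```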