AutoResearchClaw nlp-alignment

Best practices for LLM alignment techniques including RLHF, DPO, and instruction tuning. Use when working on alignment or safety.

install
source · Clone the upstream repo
git clone https://github.com/aiming-lab/AutoResearchClaw
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/aiming-lab/AutoResearchClaw "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/researchclaw/skills/builtin/domain/nlp-alignment" \
    ~/.claude/skills/aiming-lab-autoresearchclaw-nlp-alignment \
  && rm -rf "$T"
manifest: researchclaw/skills/builtin/domain/nlp-alignment/SKILL.md
source content

LLM Alignment Best Practices

Methods:

  • RLHF: train a reward model on human preferences, then fine-tune the policy with PPO (complex but powerful)
  • DPO: direct preference optimization on chosen/rejected pairs (simpler, no reward model needed; see the loss sketch after this list)
  • GRPO: group relative policy optimization; estimates advantages from groups of sampled completions instead of a learned value model
  • SFT: supervised fine-tuning on instruction data as the alignment baseline
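For intuition on why DPO skips the reward model: it treats log-probability ratios against a frozen reference model as implicit rewards. A minimal PyTorch sketch, assuming log-probs have already been summed per sequence (the function name and tensor names are illustrative, not from the source):

import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss: widen the policy's preference margin over the frozen
    reference model's margin, scaled by beta. No reward model needed."""
    # Implicit rewards are log-prob ratios against the reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Bradley-Terry preference loss on the reward margin.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

Each logps argument is a (batch,) tensor: the summed log-probability the model assigns to the chosen or rejected completion.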

Training recipe:

  • Start with SFT on high-quality instruction data
  • DPO: lr=5e-7, beta=0.1, batch_size=64
  • PPO: lr=1e-6, clip=0.2, KL coeff=0.02
  • Use a frozen reference model for the KL penalty (a minimal sketch follows this list)
  • Evaluate on safety benchmarks (TruthfulQA, BBQ, etc.)
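The per-token KL penalty keeps PPO from drifting too far from the reference model. A sketch of how the penalized reward is commonly assembled; kl_coeff=0.02 mirrors the recipe above, while the names and shapes are assumptions for illustration:

import torch

def kl_penalized_rewards(reward, policy_logps, ref_logps, kl_coeff=0.02):
    """Subtract a per-token KL penalty (log-prob difference estimator)
    from the reward, crediting the scalar reward at the final token."""
    # policy_logps, ref_logps: (batch, seq_len) log-probs of sampled tokens.
    kl = policy_logps - ref_logps        # per-token KL estimate
    rewards = -kl_coeff * kl             # penalty applied at every token
    rewards[:, -1] += reward             # reward model score at the end
    return rewards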

Common pitfalls (a monitoring sketch follows the list):

  • Reward hacking: the model finds shortcuts to high reward without genuinely improving (KL penalties and held-out reward evaluation help)
  • Mode collapse: the model generates repetitive, low-diversity outputs
  • Catastrophic forgetting: the model loses general capabilities from pretraining (mixing in SFT or pretraining data mitigates this)
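All three pitfalls show up in cheap training-time signals: KL to the reference model blows up under reward hacking, sample diversity (e.g., the distinct-bigram ratio) drops under mode collapse, and held-out general-benchmark scores fall under forgetting. A hypothetical monitor (all names and thresholds are illustrative, not from the source):

def alignment_health_check(kl_to_ref, distinct_2, benchmark_delta):
    """Flag classic alignment failure modes from training-time signals.
    Thresholds are illustrative and should be tuned per setup."""
    warnings = []
    if kl_to_ref > 10.0:         # policy has drifted far from reference
        warnings.append("possible reward hacking (KL blow-up)")
    if distinct_2 < 0.1:         # few distinct bigrams across samples
        warnings.append("possible mode collapse (repetitive outputs)")
    if benchmark_delta < -0.05:  # general-capability score dropped
        warnings.append("possible catastrophic forgetting")
    return warnings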