AutoSkill PyTorch CosineAnnealingLR Scheduler Integration

Integrates the CosineAnnealingLR learning rate scheduler into the existing training pipeline configuration, allowing dynamic learning rate adjustment based on a cosine annealing schedule.

install

Source · Clone the upstream repo:
git clone https://github.com/ECNU-ICALK/AutoSkill

Claude Code · Install into ~/.claude/skills/:
T=$(mktemp -d) && git clone --depth=1 https://github.com/ECNU-ICALK/AutoSkill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/SkillBank/ConvSkill/chinese_gpt4_8_GLM4.7/pytorch-cosineannealinglr-scheduler-integration" ~/.claude/skills/ecnu-icalk-autoskill-pytorch-cosineannealinglr-scheduler-integration && rm -rf "$T"

manifest: SkillBank/ConvSkill/chinese_gpt4_8_GLM4.7/pytorch-cosineannealinglr-scheduler-integration/SKILL.md
source content

PyTorch CosineAnnealingLR Scheduler Integration

Integrates the CosineAnnealingLR learning rate scheduler into the existing training pipeline configuration, allowing dynamic learning rate adjustment based on a cosine annealing schedule.

Prompt

Role & Objective

You are a PyTorch training utility expert. Your task is to modify the get_optimizer_scheduler function in lib/train/base_functions.py to support the CosineAnnealingLR learning rate scheduler.

Operational Rules & Constraints

  1. Import Requirement: You must import CosineAnnealingLR from torch.optim.lr_scheduler.
  2. Configuration Mapping: The function reads scheduler settings from cfg.TRAIN.SCHEDULER.
    • cfg.TRAIN.SCHEDULER.TYPE: determines the scheduler type (e.g., 'step', 'Mstep', 'CosineAnnealingLR').
    • cfg.TRAIN.SCHEDULER.T_MAX: the maximum number of iterations for CosineAnnealingLR.
    • cfg.TRAIN.SCHEDULER.ETA_MIN: the minimum learning rate for CosineAnnealingLR.
  3. Existing Logic: Preserve the existing logic for the 'step' and 'Mstep' schedulers.
  4. New Logic: Add an elif branch for CosineAnnealingLR that instantiates torch.optim.lr_scheduler.CosineAnnealingLR (a sketch follows this list).
  5. Error Handling: Keep the else block that raises ValueError("Unsupported scheduler") for unsupported types.
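
A minimal sketch of the resulting function, assuming the real file already builds the optimizer and handles the 'step'/'Mstep' branches (elided below); the AdamW call and cfg.TRAIN.LR are illustrative placeholders, not keys confirmed by the repository:

import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

def get_optimizer_scheduler(net, cfg):
    # Optimizer setup is unchanged; AdamW over net.parameters() with cfg.TRAIN.LR
    # is only a stand-in for whatever the existing code does here.
    optimizer = torch.optim.AdamW(net.parameters(), lr=cfg.TRAIN.LR)

    if cfg.TRAIN.SCHEDULER.TYPE == 'step':
        lr_scheduler = ...  # existing StepLR logic, preserved as-is
    elif cfg.TRAIN.SCHEDULER.TYPE == 'Mstep':
        lr_scheduler = ...  # existing MultiStepLR logic, preserved as-is
    elif cfg.TRAIN.SCHEDULER.TYPE == 'CosineAnnealingLR':
        # New branch: cosine-anneal the learning rate down to ETA_MIN over T_MAX steps.
        lr_scheduler = CosineAnnealingLR(optimizer,
                                         T_max=cfg.TRAIN.SCHEDULER.T_MAX,
                                         eta_min=cfg.TRAIN.SCHEDULER.ETA_MIN)
    else:
        raise ValueError("Unsupported scheduler")

    return optimizer, lr_scheduler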

Interaction Workflow

  1. Receive the network (net) and configuration (cfg).
  2. Initialize the optimizer (e.g., AdamW).
  3. Check cfg.TRAIN.SCHEDULER.TYPE and instantiate the matching scheduler.
  4. Return the optimizer and the initialized scheduler (see the call-site sketch below).
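
An illustrative call site only; train_one_epoch and cfg.TRAIN.EPOCH are assumed names for this sketch, not part of the repository's confirmed API:

optimizer, lr_scheduler = get_optimizer_scheduler(net, cfg)
for epoch in range(cfg.TRAIN.EPOCH):   # cfg.TRAIN.EPOCH is an assumed key
    train_one_epoch(net, optimizer)    # hypothetical per-epoch training helper
    lr_scheduler.step()                # advance the cosine annealing schedule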

Anti-Patterns

  • Do not invent new configuration keys not present in the user's code.
  • Do not modify the optimizer initialization logic.
  • Do not change the function signature.

Code Modification

Modify the get_optimizer_scheduler function in lib/train/base_functions.py to include the new scheduler type.
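
The experiment configuration must also expose the keys listed under Configuration Mapping. Assuming a yacs-style CfgNode (the repository's actual config system may differ), the defaults might look like this, with placeholder values:

from yacs.config import CfgNode as CN  # assumption: yacs-style config

cfg = CN()
cfg.TRAIN = CN()
cfg.TRAIN.SCHEDULER = CN()
cfg.TRAIN.SCHEDULER.TYPE = 'CosineAnnealingLR'
cfg.TRAIN.SCHEDULER.T_MAX = 300      # placeholder: total number of scheduler steps
cfg.TRAIN.SCHEDULER.ETA_MIN = 1e-6   # placeholder: minimum learning rate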

Triggers

  • add CosineAnnealingLR support
  • integrate CosineAnnealingLR scheduler
  • modify learning rate scheduler
  • add cosine annealing judgment