AutoSkill PyTorch Learning Rate Scheduler Configuration (CosineAnnealingLR Support)
Configure the training script to support the CosineAnnealingLR learning rate scheduler, allowing dynamic adjustment of the learning rate based on a cosine annealing strategy.
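For context, this is how the scheduler behaves once wired in. A minimal standalone sketch (the model, base learning rate, and epoch count here are illustrative assumptions, not part of the skill):

```python
import torch

# Illustrative model and optimizer; any parameter group works the same way.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Anneal the LR along a cosine curve from 1e-3 down to eta_min over T_max steps.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50, eta_min=1e-6)

for epoch in range(50):
    # ... forward pass, loss.backward(), optimizer.step() ...
    scheduler.step()  # advance the cosine schedule once per epoch
    print(epoch, scheduler.get_last_lr())
```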
install
source · Clone the upstream repo
git clone https://github.com/ECNU-ICALK/AutoSkill
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/ECNU-ICALK/AutoSkill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/SkillBank/ConvSkill/chinese_gpt4_8/pytorch-learning-rate-scheduler-configuration-cosineannealinglr-" ~/.claude/skills/ecnu-icalk-autoskill-pytorch-learning-rate-scheduler-configuration-cosineanneali && rm -rf "$T"
manifest:
SkillBank/ConvSkill/chinese_gpt4_8/pytorch-learning-rate-scheduler-configuration-cosineannealinglr-/SKILL.md
source content
PyTorch Learning Rate Scheduler Configuration (CosineAnnealingLR Support)
Configure the training script to support the CosineAnnealingLR learning rate scheduler, allowing dynamic adjustment of the learning rate based on a cosine annealing strategy.
Prompt
Role & Objective
You are a PyTorch training script developer. Your task is to modify the get_optimizer_scheduler function to support the CosineAnnealingLR learning rate scheduler.
Operational Rules & Constraints
- Scheduler Support: You must add a conditional branch to check if cfg.TRAIN.SCHEDULER.TYPE is "CosineAnnealingLR".
- Parameter Mapping: When "CosineAnnealingLR" is selected, you must read T_MAX from cfg.TRAIN.SCHEDULER.T_MAX and ETA_MIN from cfg.TRAIN.SCHEDULER.ETA_MIN.
- Implementation: Use torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=..., eta_min=...) (a minimal sketch follows this list).
- Preservation: Do not modify the existing logic for the "step" or "Mstep" schedulers. Do not modify the optimizer initialization logic.
- Error Handling: Keep the else: raise ValueError("Unsupported scheduler") block at the end to handle unknown types.
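A minimal sketch of those rules in isolation, using a SimpleNamespace as a hypothetical stand-in for the project's cfg object (only the attribute names given above are taken from the rules; the values are illustrative):

```python
from types import SimpleNamespace

import torch

# Hypothetical stand-in for the training config; attribute names follow the rules above.
cfg = SimpleNamespace(
    TRAIN=SimpleNamespace(
        SCHEDULER=SimpleNamespace(TYPE="CosineAnnealingLR", T_MAX=100, ETA_MIN=1e-6)
    )
)

optimizer = torch.optim.AdamW(torch.nn.Linear(4, 4).parameters(), lr=1e-3)

if cfg.TRAIN.SCHEDULER.TYPE == "CosineAnnealingLR":
    # Map the config fields onto the scheduler's constructor arguments.
    lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer,
        T_max=cfg.TRAIN.SCHEDULER.T_MAX,
        eta_min=cfg.TRAIN.SCHEDULER.ETA_MIN,
    )
```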
Input Code Context
The user provided the following code snippet for get_optimizer_scheduler:
```python
def get_optimizer_scheduler(net, cfg):
    # ... (optimizer setup code) ...
    if cfg.TRAIN.OPTIMIZER == "ADAMW":
        optimizer = torch.optim.AdamW(...)
    else:
        raise ValueError("Unsupported Optimizer")
    if cfg.TRAIN.SCHEDULER.TYPE == 'step':
        lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, cfg.TRAIN.LR_DROP_EPOCH)
    elif cfg.TRAIN.SCHEDULER.TYPE == "Mstep":
        lr_scheduler = torch.optim.lr_scheduler.MultiStepLR(...)
    else:
        raise ValueError("Unsupported scheduler")
    return optimizer, lr_scheduler
```
Required Modification
Add an elif block for CosineAnnealingLR between the Mstep branch and the final else.
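Putting it together, a sketch of the modified function under the rules above (the snippet's elided arguments are kept as ellipses; only the new elif branch is added, with the existing branches and the final else untouched):

```python
import torch

def get_optimizer_scheduler(net, cfg):
    # ... (optimizer setup code, unchanged) ...
    if cfg.TRAIN.OPTIMIZER == "ADAMW":
        optimizer = torch.optim.AdamW(...)
    else:
        raise ValueError("Unsupported Optimizer")
    if cfg.TRAIN.SCHEDULER.TYPE == 'step':
        lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, cfg.TRAIN.LR_DROP_EPOCH)
    elif cfg.TRAIN.SCHEDULER.TYPE == "Mstep":
        lr_scheduler = torch.optim.lr_scheduler.MultiStepLR(...)
    # New branch: sits between "Mstep" and the final else, per the rules above.
    elif cfg.TRAIN.SCHEDULER.TYPE == "CosineAnnealingLR":
        lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
            optimizer,
            T_max=cfg.TRAIN.SCHEDULER.T_MAX,
            eta_min=cfg.TRAIN.SCHEDULER.ETA_MIN,
        )
    else:
        raise ValueError("Unsupported scheduler")
    return optimizer, lr_scheduler
```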
Triggers
- add CosineAnnealingLR scheduler support
- configure CosineAnnealingLR learning rate
- support CosineAnnealingLR in training script