AutoSkill PyTorch CosineAnnealingLR Scheduler Integration
Integrates the CosineAnnealingLR learning rate scheduler into the existing training pipeline configuration, allowing dynamic learning rate adjustment based on cosine annealing strategy.
install
source · Clone the upstream repo
git clone https://github.com/ECNU-ICALK/AutoSkill
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/ECNU-ICALK/AutoSkill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/SkillBank/ConvSkill/chinese_gpt4_8_GLM4.7/pytorch-cosineannealinglr-scheduler-integration" ~/.claude/skills/ecnu-icalk-autoskill-pytorch-cosineannealinglr-scheduler-integration && rm -rf "$T"
manifest:
SkillBank/ConvSkill/chinese_gpt4_8_GLM4.7/pytorch-cosineannealinglr-scheduler-integration/SKILL.md
source content
PyTorch CosineAnnealingLR Scheduler Integration
Integrates the CosineAnnealingLR learning rate scheduler into the existing training pipeline configuration, allowing dynamic learning rate adjustment based on cosine annealing strategy.
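For reference, `torch.optim.lr_scheduler.CosineAnnealingLR` (without restarts) anneals the learning rate from the optimizer's initial value toward `eta_min` following the standard cosine schedule, where `T_max` and `eta_min` correspond to the `T_MAX` and `ETA_MIN` config keys used in the prompt below:

```latex
\eta_t = \eta_{\min} + \tfrac{1}{2}\,(\eta_{\max} - \eta_{\min})\Bigl(1 + \cos\bigl(\tfrac{T_{\mathrm{cur}}}{T_{\max}}\,\pi\bigr)\Bigr)
```

Here \(\eta_{\max}\) is the optimizer's initial learning rate and \(T_{\mathrm{cur}}\) is the number of scheduler steps taken so far.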
Prompt
Role & Objective
You are a PyTorch training utility expert. Your task is to modify the `get_optimizer_scheduler` function in `lib/train/base_functions.py` to support the CosineAnnealingLR learning rate scheduler.
Operational Rules & Constraints
- Import Requirement: You must import `CosineAnnealingLR` from `torch.optim.lr_scheduler`.
- Configuration Mapping: The function reads scheduler settings from `cfg.TRAIN.SCHEDULER`:
  - `cfg.TRAIN.SCHEDULER.TYPE`: Determines the scheduler type (e.g., 'step', 'Mstep', 'CosineAnnealingLR').
  - `cfg.TRAIN.SCHEDULER.T_MAX`: The maximum number of iterations for CosineAnnealingLR.
  - `cfg.TRAIN.SCHEDULER.ETA_MIN`: The minimum learning rate for CosineAnnealingLR.
- Existing Logic: Preserve the existing logic for the 'step' and 'Mstep' schedulers.
- New Logic: Add an `elif` branch for `CosineAnnealingLR` to instantiate `torch.optim.lr_scheduler.CosineAnnealingLR`, as sketched below.
- Error Handling: Keep the `else` block that raises `ValueError("Unsupported scheduler")` for unsupported types.
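A minimal sketch of this selection logic, assuming the config keys above. The helper name `_select_scheduler` and the placeholder arguments for the existing 'step'/'Mstep' branches are illustrative, not the repository's actual code:

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

def _select_scheduler(optimizer, cfg):
    """Illustrative scheduler selection mirroring the rules above."""
    sched_type = cfg.TRAIN.SCHEDULER.TYPE
    if sched_type == 'step':
        # Existing branch: keep whatever StepLR construction the repo already uses.
        lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)  # placeholder args
    elif sched_type == 'Mstep':
        # Existing branch: keep whatever MultiStepLR construction the repo already uses.
        lr_scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[1])  # placeholder args
    elif sched_type == 'CosineAnnealingLR':
        # New branch: T_MAX and ETA_MIN come from the config keys listed above.
        lr_scheduler = CosineAnnealingLR(
            optimizer,
            T_max=cfg.TRAIN.SCHEDULER.T_MAX,
            eta_min=cfg.TRAIN.SCHEDULER.ETA_MIN,
        )
    else:
        raise ValueError("Unsupported scheduler")
    return lr_scheduler
```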
Interaction Workflow
- Receive the network (`net`) and configuration (`cfg`).
- Initialize the optimizer (e.g., AdamW).
- Check `cfg.TRAIN.SCHEDULER.TYPE`.
- Return the optimizer and the initialized scheduler (see the usage sketch below).
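A self-contained sketch of this workflow using toy stand-ins for `net` and `cfg`; the learning rate, `T_MAX`, and `ETA_MIN` values are illustrative assumptions, not values from the repository:

```python
from types import SimpleNamespace
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

# Toy stand-ins for the real net/cfg; the SCHEDULER keys mirror those listed above.
net = torch.nn.Linear(8, 1)
cfg = SimpleNamespace(TRAIN=SimpleNamespace(
    SCHEDULER=SimpleNamespace(TYPE='CosineAnnealingLR', T_MAX=50, ETA_MIN=1e-6)))

optimizer = torch.optim.AdamW(net.parameters(), lr=1e-4)   # initialize the optimizer (lr illustrative)
assert cfg.TRAIN.SCHEDULER.TYPE == 'CosineAnnealingLR'     # check the scheduler type
scheduler = CosineAnnealingLR(optimizer,
                              T_max=cfg.TRAIN.SCHEDULER.T_MAX,
                              eta_min=cfg.TRAIN.SCHEDULER.ETA_MIN)

for _ in range(cfg.TRAIN.SCHEDULER.T_MAX):
    # ... one training epoch would run here ...
    scheduler.step()  # learning rate follows the cosine curve
    # e.g. scheduler.get_last_lr()[0] decays from 1e-4 toward ETA_MIN
```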
Anti-Patterns
- Do not invent new configuration keys not present in the user's code.
- Do not modify the optimizer initialization logic.
- Do not change the function signature.
Code Modification
Modify the `get_optimizer_scheduler` function in `lib/train/base_functions.py` to include the new scheduler type.
Triggers
- add CosineAnnealingLR support
- integrate CosineAnnealingLR scheduler
- modify learning rate scheduler
- add cosine annealing judgment