
PyTorch: LR Scheduler Comparison (Easy)

Implement a cosine annealing learning rate schedule using PyTorch's built-in CosineAnnealingLR scheduler.

Signature: def cosine_lr_schedule(lr_init, T_max, n_steps)

  • lr_init: initial learning rate (float)
  • T_max: number of steps for half the cosine cycle (int)
  • n_steps: total steps to simulate (int)
  • Returns: list of learning rates at each step (length = n_steps)

Steps:

  1. Create a dummy torch.nn.Parameter and torch.optim.SGD optimizer with lr=lr_init
  2. Create torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=T_max)
  3. For each step, call scheduler.step(), then record optimizer.param_groups[0]['lr']
  4. Return the list of recorded learning rates (a sketch following these steps appears after this list)
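
A minimal sketch following these steps; the optimizer.step() call is included only to keep the optimizer-before-scheduler call order PyTorch expects and does not affect the recorded rates:

```python
import torch

def cosine_lr_schedule(lr_init, T_max, n_steps):
    # Dummy parameter so the optimizer has something to own.
    param = torch.nn.Parameter(torch.zeros(1))
    optimizer = torch.optim.SGD([param], lr=lr_init)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=T_max)

    lrs = []
    for _ in range(n_steps):
        optimizer.step()    # keep PyTorch's expected optimizer/scheduler ordering
        scheduler.step()    # advance the cosine schedule by one step
        lrs.append(optimizer.param_groups[0]['lr'])
    return lrs
```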

The cosine annealing formula (with eta_min=0), where t counts completed scheduler.step() calls:

lr(t) = 0.5 * lr_init * (1 + cos(π * t / T_max))
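
As a quick sanity check, the closed form can be evaluated directly; for the first test case below (lr_init=1.0, T_max=4) it traces half a cosine cycle down to zero:

```python
import math

# Closed form at steps t = 1..4 for lr_init=1.0, T_max=4
expected = [0.5 * 1.0 * (1 + math.cos(math.pi * t / 4)) for t in range(1, 5)]
# expected ≈ [0.8536, 0.5, 0.1464, 0.0]
```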

Test Results

  • lr=1.0, T_max=4, 4 steps — full half-cycle
  • lr=0.1, T_max=2, 2 steps
  • lr=0.01, T_max=1, 1 step — reaches 0 immediately
  • lr=0.5, T_max=4, 8 steps — full cosine cycle
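
Assuming the cosine_lr_schedule sketch above, the first test case can be checked against the closed form locally:

```python
import math

lrs = cosine_lr_schedule(1.0, T_max=4, n_steps=4)
for t, lr in enumerate(lrs, start=1):
    closed_form = 0.5 * 1.0 * (1 + math.cos(math.pi * t / 4))
    # small tolerance: the scheduler computes rates recursively in floating point
    assert abs(lr - closed_form) < 1e-6
```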