Implement a cosine annealing learning rate schedule using PyTorch's built-in CosineAnnealingLR scheduler.
Signature: def cosine_lr_schedule(lr_init, T_max, n_steps)
Parameters:
- lr_init: initial learning rate (float)
- T_max: number of steps for half the cosine cycle (int)
- n_steps: total steps to simulate (int)

Steps (see the sketch after the formula below):
1. Create a torch.nn.Parameter and a torch.optim.SGD optimizer with lr=lr_init.
2. Create a torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=T_max) scheduler.
3. Loop for n_steps iterations; on each iteration call scheduler.step() and record optimizer.param_groups[0]['lr'].

The cosine annealing formula (with eta_min=0):
lr(t) = 0.5 * lr_init * (1 + cos(π * t / T_max))
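
A minimal sketch of these steps. It assumes the learning rate is recorded after each scheduler.step() call, as the steps above describe, so the first recorded value corresponds to t = 1 rather than t = 0; the dummy torch.nn.Parameter exists only so SGD has something to manage.

```python
import torch

def cosine_lr_schedule(lr_init, T_max, n_steps):
    # Dummy parameter: the optimizer needs at least one parameter to manage.
    param = torch.nn.Parameter(torch.zeros(1))
    optimizer = torch.optim.SGD([param], lr=lr_init)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=T_max)

    lrs = []
    for _ in range(n_steps):
        optimizer.step()   # no-op here (no gradients); avoids the PyTorch warning
                           # about calling scheduler.step() before optimizer.step()
        scheduler.step()   # advance the cosine schedule by one step
        lrs.append(optimizer.param_groups[0]['lr'])
    return lrs
```

Example values under these assumptions:

```python
lrs = cosine_lr_schedule(0.1, T_max=10, n_steps=10)
# lrs[0] = 0.5 * 0.1 * (1 + cos(pi * 1 / 10)) ~= 0.0976
# lrs[-1] = 0.0, since cos(pi) = -1: the schedule reaches eta_min at t = T_max
```

Note that if n_steps exceeds T_max, PyTorch's CosineAnnealingLR follows the cosine back up toward lr_init rather than holding at eta_min; it implements only the annealing half-cycle of SGDR, not the warm restarts.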