TorchedUp

Backprop: Single Linear Layer

Implement the backward pass for a single linear layer y = xW + b.

Signature: def backprop_single_layer(x: np.ndarray, W: np.ndarray, delta: np.ndarray) -> tuple

Return (dW, db, dx), where delta is the upstream gradient dL/dy and:

  • dW = x.T @ delta — gradient w.r.t. weights
  • db = delta.sum(axis=0) — gradient w.r.t. bias
  • dx = delta @ W.T — gradient w.r.t. input
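A minimal sketch of these three gradients, assuming x has shape (batch, in), W has shape (in, out), and delta has shape (batch, out):

```python
import numpy as np

def backprop_single_layer(x: np.ndarray, W: np.ndarray, delta: np.ndarray) -> tuple:
    # For y = x @ W + b with upstream gradient delta = dL/dy:
    dW = x.T @ delta        # (in, out): gradient w.r.t. weights
    db = delta.sum(axis=0)  # (out,): bias gradient, summed over the batch
    dx = delta @ W.T        # (batch, in): gradient w.r.t. input
    return dW, db, dx
```

Note that b itself is not needed: it enters y additively, so none of the gradients depend on its value.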

Test Results

  • simple 1x2 -> 1x3
  • batch size 2
  • zero gradient (Premium)
  • gradient matches central-difference numerical estimate
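The last test compares the analytic gradient against a numerical estimate. A sketch of how such a check could work for dW, assuming the scalar loss L = sum(delta * (x @ W + b)) so that dL/dy = delta:

```python
import numpy as np

def numerical_dW(x, W, b, delta, eps=1e-6):
    # Central-difference estimate of dL/dW, perturbing one entry at a time,
    # for the loss L = sum(delta * (x @ W + b)).
    dW = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp, Wm = W.copy(), W.copy()
            Wp[i, j] += eps
            Wm[i, j] -= eps
            Lp = np.sum(delta * (x @ Wp + b))
            Lm = np.sum(delta * (x @ Wm + b))
            dW[i, j] = (Lp - Lm) / (2 * eps)
    return dW
```

The result should agree with the analytic x.T @ delta up to rounding error, since the loss is linear in W.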