
PyTorch: Custom Autograd Function (Hard)

Implement a custom ReLU activation using torch.autograd.Function with explicit forward and backward passes.

Signature: def relu_custom_forward(x)

  • x: input as a nested list
  • Returns: ReLU output as nested list
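
For example (illustrative input, not one of the hidden test cases):

relu_custom_forward([[-1.0, 2.0], [0.0, 3.5]])
# -> [[0.0, 2.0], [0.0, 3.5]]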

Implement ReLUFunction extending torch.autograd.Function:

  • forward(ctx, x): save the input mask (x > 0) via ctx.save_for_backward, return x.clamp(min=0)
  • backward(ctx, grad_output): retrieve mask, return grad_output * mask

Then relu_custom_forward(x) converts the input to a float32 tensor, passes it through ReLUFunction.apply(), and returns the result as a nested list via .tolist(); a complete sketch follows the stub below.

Why custom autograd? PyTorch's built-in ops come with autograd support, but custom ops (e.g. fused kernels or novel activations) must define their own forward and backward passes. torch.autograd.Function is the extension point for that.

class ReLUFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x): ...
    @staticmethod
    def backward(ctx, grad_output): ...
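
A minimal sketch of one possible solution, following the spec above (an illustration under the stated assumptions, not the reference answer):

import torch

class ReLUFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Save the positive-input mask; backward only needs to know where x > 0.
        mask = x > 0
        ctx.save_for_backward(mask)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient passes through only where the original input was positive.
        (mask,) = ctx.saved_tensors
        return grad_output * mask

def relu_custom_forward(x):
    # Nested list -> float32 tensor -> custom op -> nested list.
    t = torch.tensor(x, dtype=torch.float32)
    return ReLUFunction.apply(t).tolist()

A quick gradient check (values chosen for illustration):

x = torch.tensor([-1.0, 0.0, 2.0], requires_grad=True)
ReLUFunction.apply(x).sum().backward()
print(x.grad)  # tensor([0., 0., 1.])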

Math
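
The forward pass and the gradient it implies (via the saved x > 0 mask) are:

ReLU(x) = max(0, x)
dReLU/dx = 1 if x > 0, else 0   (the subgradient at x = 0 is taken as 0, matching the x > 0 mask)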


Test cases

  • mixed pos/neg/zero values
  • 2D batch with negatives
  • large magnitude values
  • all positive