Implement a custom ReLU activation using torch.autograd.Function with explicit forward and backward passes.
Signature: def relu_custom_forward(x)
x: input as a nested list

Implement ReLUFunction extending torch.autograd.Function:
- forward(ctx, x): save the input mask (x > 0) via ctx.save_for_backward and return x.clamp(min=0)
- backward(ctx, grad_output): retrieve the mask and return grad_output * mask

Then relu_custom_forward(x) converts the input to a float32 tensor, applies ReLUFunction.apply(), and returns .tolist().
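For concreteness, a small input/output pair (values chosen only for illustration):

relu_custom_forward([[-1.0, 2.0], [0.0, 3.5]])
# expected result: [[0.0, 2.0], [0.0, 3.5]]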
Why custom autograd?
PyTorch's built-in ops ship with autograd support, but custom ops (e.g., fused kernels or novel activations) need an explicit forward/backward definition. torch.autograd.Function is the hook for supplying both, as in the skeleton below:
class ReLUFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x): ...

    @staticmethod
    def backward(ctx, grad_output): ...
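One way to fill in that skeleton, together with the relu_custom_forward wrapper from the spec (a sketch, not a reference solution; anything beyond the names given above is ordinary torch.autograd.Function usage):

import torch

class ReLUFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Remember which elements were positive; only those pass gradient through.
        mask = x > 0
        ctx.save_for_backward(mask)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        # Zero out the gradient wherever the input was non-positive.
        mask, = ctx.saved_tensors
        return grad_output * mask

def relu_custom_forward(x):
    # Nested list -> float32 tensor -> custom op -> nested list of the same shape.
    t = torch.tensor(x, dtype=torch.float32)
    return ReLUFunction.apply(t).tolist()

The backward pass can be sanity-checked against torch.relu, or with torch.autograd.gradcheck on double-precision inputs (keeping in mind that ReLU is not differentiable at exactly zero).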