Implement the KL divergence KL(P || Q) = sum_i P(i) * log(P(i) / Q(i)).
Signature: def kl_divergence(P: np.ndarray, Q: np.ndarray) -> float
Clip Q values to a small epsilon to avoid log(0). Both P and Q are valid probability distributions (each sums to 1).
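One possible solution, sketched under the stated assumptions; the epsilon value (1e-12) and the convention that terms with P(i) = 0 contribute zero are choices not fixed by the problem statement:

```python
import numpy as np

def kl_divergence(P: np.ndarray, Q: np.ndarray) -> float:
    # Clip Q away from zero so log(P / Q) stays finite.
    eps = 1e-12
    Q = np.clip(Q, eps, None)
    # By convention, 0 * log(0) = 0, so skip entries where P is zero.
    mask = P > 0
    return float(np.sum(P[mask] * np.log(P[mask] / Q[mask])))
```

For example, kl_divergence of a distribution with itself is 0, and KL([1, 0] || [0.5, 0.5]) equals log(2).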