TorchedUp
Learn (Beta) · Problems · System Design (Soon) · Premium

Learn (Beta)

Curated tracks instead of a wall of problems. Each is a hand-ordered curriculum — start at the top, finish at the bottom, end up shipping production ML code. Sign in to save your progress across tracks.

∑

ML Basics

The math primitives every ML engineer implements at least once: softmax, cross-entropy, normalization, dropout.

12 problems · 7 easy · 5 medium
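To give a feel for the primitives this track covers, here is a minimal NumPy sketch of a numerically stable softmax and cross-entropy — an illustration, not the site's reference solution:

```python
import numpy as np

def softmax(z):
    # Subtract the row max for numerical stability: exp() of large
    # logits overflows, and softmax is shift-invariant.
    shifted = z - np.max(z, axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=-1, keepdims=True)

def cross_entropy(logits, target):
    # -log p[target], computed via log-sum-exp so we never take
    # log() of a probability that underflowed to zero.
    shifted = logits - np.max(logits)
    log_probs = shifted - np.log(np.sum(np.exp(shifted)))
    return -log_probs[target]

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)            # non-negative, sums to 1
loss = cross_entropy(logits, 0)    # equals -log(probs[0])
```

The shift-by-max trick is the part interviewers (and these problems) tend to probe: the naive `exp(z) / sum(exp(z))` fails on large logits.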
∂

Backpropagation Series

Hand-derive backward passes for every layer. Verified against numerical gradients via gradcheck — no autograd shortcuts.

12 problems · 4 easy · 4 medium · 4 hard
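The verification idea is central-difference gradient checking. A minimal sketch (the function and shapes here are illustrative, not an actual problem from the track):

```python
import numpy as np

def numerical_grad(f, x, eps=1e-5):
    # Central differences: grad_i ≈ (f(x + eps*e_i) - f(x - eps*e_i)) / (2*eps).
    grad = np.zeros_like(x)
    for i in range(x.size):
        orig = x.flat[i]
        x.flat[i] = orig + eps
        f_plus = f(x)
        x.flat[i] = orig - eps
        f_minus = f(x)
        x.flat[i] = orig               # restore before the next coordinate
        grad.flat[i] = (f_plus - f_minus) / (2 * eps)
    return grad

# Check a hand-derived gradient: f(x) = sum(x**2) has gradient 2*x.
x = np.array([1.0, -2.0, 3.0])
analytic = 2 * x
numeric = numerical_grad(lambda v: np.sum(v ** 2), x)
max_err = np.max(np.abs(analytic - numeric))
```

Central differences are O(eps²) accurate, which is why a hand-derived backward pass that disagrees by more than ~1e-6 almost always has a real bug rather than float noise.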
⚡

Transformer Internals

Attention mechanisms, positional encodings, RoPE, multi-head, KV cache, and FlashAttention — the architecture powering modern LLMs.

10 problems · 8 medium · 2 hard
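The core building block of this track is scaled dot-product attention, softmax(QKᵀ/√d)V. A single-head NumPy sketch with assumed toy shapes, omitting masking and batching:

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q @ K.T / sqrt(d)) @ V.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (queries, keys)
    scores -= scores.max(axis=-1, keepdims=True)    # stability shift
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V                               # convex mix of value rows

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))   # 4 query positions, head dim 8 (assumed)
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = attention(Q, K, V)          # shape (4, 8)
```

Each output row is a convex combination of the rows of V, which is a handy invariant to assert when debugging a from-scratch implementation.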
🔥

LLM Inference & Serving

KV caching, sampling strategies, speculative decoding, prefix caching, paged attention — what runs in vLLM and TGI.

9 problems · 2 easy · 2 medium · 5 hard
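As one example of the sampling strategies covered, here is a sketch of temperature plus top-k sampling (the tie-breaking at the k-th logit is simplified for illustration):

```python
import numpy as np

def sample_top_k(logits, k, temperature=1.0, rng=None):
    # Scale by temperature, keep the k highest logits, renormalize, sample.
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    kth = np.sort(scaled)[-k]
    # Mask everything below the k-th value with -inf (ties may keep > k tokens).
    scaled = np.where(scaled >= kth, scaled, -np.inf)
    probs = np.exp(scaled - scaled.max())   # exp(-inf) -> 0 for masked tokens
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [4.0, 3.0, 0.5, -1.0]
tok = sample_top_k(logits, k=2, rng=np.random.default_rng(0))
# With k=2 only tokens 0 and 1 can ever be drawn.
```

Lower temperatures sharpen the kept distribution toward greedy decoding; higher ones flatten it, which is why temperature and top-k are usually tuned together.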
⚙

Distributed Training & Memory Math

Sizing models on real hardware: parameter counts, KV cache, activation memory, ZeRO, DDP, FSDP.

8 problems · 3 easy · 3 medium · 2 hard
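The flavor of this track is back-of-envelope arithmetic. A worked example for an assumed dense 7B-parameter model with a Llama-7B-like shape (numbers are illustrative, not a spec):

```python
# Training memory, mixed precision with Adam (illustrative assumptions).
params = 7e9
GiB = 1024 ** 3

weights_bf16 = params * 2 / GiB        # ~13 GiB: 2 bytes per bf16 weight
grads_bf16 = params * 2 / GiB          # ~13 GiB: gradients in bf16
# Adam in mixed precision typically keeps fp32 master weights plus
# two fp32 moment tensors: 3 * 4 bytes per parameter.
optimizer_fp32 = params * 3 * 4 / GiB  # ~78 GiB

total = weights_bf16 + grads_bf16 + optimizer_fp32  # ~104 GiB before activations

# KV cache per generated token, assumed shape: 32 layers, 32 heads, head dim 128.
n_layers, n_heads, head_dim = 32, 32, 128
kv_bytes_per_token = 2 * n_layers * n_heads * head_dim * 2  # K and V, bf16
# = 524288 bytes = 0.5 MiB per token per sequence
```

Totals like these motivate ZeRO/FSDP: the ~78 GiB of optimizer state dominates and is the first thing sharded across ranks.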
🔍

Debug Gauntlet

Real bugs planted in production-style code. No hints, no checklists — find what an experienced reviewer would catch.

11 problems · 5 easy · 5 medium · 1 hard

Tracks are evolving. New problems land in the catalog all the time; tracks get curated additions when they fit the curriculum. Suggestions welcome — email support@torchedup.dev.

© 2026 TorchedUp. All rights reserved.

Changelog · Contact Us · Terms of Service · Privacy Policy · Refund Policy