Backpropagation is the algorithm that computes gradients through a neural network by applying the chain rule backward through the computation graph. It is built here as a scalar-valued autograd engine, where each operation tracks its inputs and knows how to propagate gradients to them, plus a minimal neural net library on top: roughly 200 lines of Python, zero dependencies.
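As a minimal sketch of what that core looks like, in the spirit of micrograd: the names `Value`, `grad`, `_prev`, and `_backward` follow micrograd's conventions, but the exact code below is illustrative rather than a copy of the repository's.

```python
class Value:
    """A scalar node in the computation graph."""

    def __init__(self, data, _children=()):
        self.data = data                 # scalar payload
        self.grad = 0.0                  # dL/d(this node), filled in by backprop
        self._prev = set(_children)      # the inputs that produced this node
        self._backward = lambda: None    # how to push this node's grad to its inputs

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(out)/d(self) = d(out)/d(other) = 1; accumulate with +=
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # product rule: each input's gradient is scaled by the other's data
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out
```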
9 notebooks: backpropagation and autograd
| # | Notebook | Summary |
|---|----------|---------|
| 01 | Derivatives & Autograd | Limit definition; manual derivative computation |
| 02 | Data Structure & Forward Pass | Building the `Value` class; first forward pass |
| 03 | Backpropagation (by hand) | Hand-assigning gradients at each node |
| 04 | Training a Neuron | `tanh` as an activation function |
| 05 | Manual Backward Pass | Recursive `._backward()` calls (compute each node's gradient) |
| 06 | Automated Backward Pass | Single `.backward()` call, propagated via topological sort; ensure gradients accumulate (multivariate: `+=`, not `=`) — see the sketch below |
| 07 | Express tanh More Simply | Exercise: re-implement `tanh` using its constituent operations |
| 08 | PyTorch: Backpropagation | Exercise: perform backpropagation to update gradients in PyTorch |
| 09 | PyTorch: Gradient Descent | Exercise: perform gradient descent (train the neural net) in PyTorch |

Sources: Andrej Karpathy - Zero to Hero · micrograd · lectures · exercises
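A sketch of how notebook 06's single `.backward()` call can be implemented, built on the `Value` class sketched above (written here as a standalone `backward(root)` function for brevity; the notebooks attach it as a method on `Value`). The topological sort guarantees each node's gradient is complete before it is pushed to that node's inputs, and `+=` rather than `=` handles variables used on more than one path:

```python
def backward(root):
    # Topologically order the graph so that, walking it in reverse, every
    # node's _backward runs only after all nodes depending on it have
    # already pushed their gradient contributions into it.
    topo, visited = [], set()
    def build(v):
        if v not in visited:
            visited.add(v)
            for child in v._prev:
                build(child)
            topo.append(v)
    build(root)

    root.grad = 1.0              # dL/dL = 1 seeds the backward pass
    for node in reversed(topo):
        node._backward()

# A variable used on two paths must accumulate gradient from both,
# which is why _backward uses += : here dy/dx = 2x + 3.
x = Value(2.0)
y = x * x + x * Value(3.0)
backward(y)
print(y.data, x.grad)            # 10.0 7.0
```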