Markov Chain Monte Carlo (MCMC) is one of the most powerful ideas in modern statistics and machine learning. It lets us sample from complex, high-dimensional probability distributions when direct computation is intractable. If you’ve ever wondered how Bayesian models are actually fit in practice, this is the engine.
In this mini-course, we build MCMC from the ground up. We start with intuition: why sampling is needed, and what problem MCMC is solving. Then we develop the mathematical foundation of Markov chains. From there, we derive and deeply understand Metropolis–Hastings, move to Gibbs sampling, and finally explore Hamiltonian Monte Carlo – the method that powers many modern probabilistic programming frameworks.
Throughout the series, I don’t just state the algorithms – I visualize them. Using Manim, we animate trajectories, transition kernels, acceptance probabilities, and high-dimensional geometry, so you can actually see how these methods explore probability space.
The goal is not just to run MCMC – but to understand it at a level where you could derive, implement, and extend it yourself.
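To give a taste of that level of understanding, here is a minimal random-walk Metropolis sampler (the simplest special case of Metropolis–Hastings covered in the course). This is an illustrative sketch, not course material: the function names and the standard-normal target are my own choices for the example.

```python
import math
import random

def metropolis(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + Normal(0, step^2),
    accept with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Work in log space for numerical stability.
        log_alpha = log_target(proposal) - log_target(x)
        if math.log(rng.random()) < log_alpha:
            x = proposal        # accept the move
        samples.append(x)       # on rejection, the chain repeats x
    return samples

# Target: a standard normal. Note only an *unnormalized*
# log-density is needed -- the normalizing constant cancels
# in the acceptance ratio.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
mean = sum(samples) / len(samples)
```

Despite its simplicity, this loop already contains the key idea: the chain only ever compares density ratios, so it can explore a distribution whose normalizing constant is unknown.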
Note: with this course you also get access to the Latent Dirichlet Allocation (LDA) mini-course.
Curriculum
- 1 Section
- 6 Lessons
- Lifetime access
