Multilevel Monte Carlo For Estimating the Expected Value of Stochastic Differential Equations
Motivation
A few days ago, I replied to a tweet about stochastic differential equations (SDEs).
(Insert screenshot of tweet + reply)
Quant Beckman’s response was a bit snarky—especially considering it was his own post—but it did get me thinking seriously about the computational challenges behind SDEs. Problems like computing expectations, option payoffs, or tail probabilities can easily become expensive, especially when the SDE has no closed-form solution.
But computationally expensive doesn’t mean hopeless.
It just means we need better algorithms.
This led me down the path of Multilevel Monte Carlo (MLMC), a method designed to dramatically speed up the estimation of expectations of SDE solutions by combining simulations at different time resolutions. Along the way, I also examined how approximate random variables, single versus double precision, and compensated (Kahan) summation affect accuracy.
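As a quick illustration of that last point, here is a minimal sketch of compensated (Kahan) summation; the values are purely illustrative:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: carries a running correction term
    that recovers the low-order bits lost in each floating-point add."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y  # the part of y that didn't make it into t
        total = t
    return total

vals = [0.1] * 10**6  # exact sum would be 100000.0
print(sum(vals))        # naive accumulation drifts visibly
print(kahan_sum(vals))  # compensated sum stays within a few ulps
```

The same idea matters inside long Monte Carlo accumulations, where millions of small terms are summed into one running total.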
Background: Multilevel Monte Carlo (MLMC)
1. The Problem
Consider the Ornstein–Uhlenbeck (OU) SDE:

$$dX_t = \theta(\mu - X_t)\,dt + \sigma\,dW_t.$$

What we can say from this equation is that if we simulate the SDE from time $0$ to $T$, the value $X_T$ is the random state of the process at the terminal time $T$. Due to the stochastic nature of stochastic differential equations, $X_T$ is non-deterministic, so it is oftentimes a hard problem to estimate the expectation

$$\mathbb{E}_P[f(X)],$$

where $P$ for us is just the distribution of $X_T$. (Oftentimes, for instance in options pricing, $f(x) = \max(x - K, 0)$ for a strike $K$.)
For many SDEs (including OU with general payoffs), computing this expectation analytically is difficult or impossible. So we turn to numerical simulation.
A simple and widely used scheme is the Euler–Maruyama method.
1.1 Discretizing the OU SDE with Euler–Maruyama
Given

$$dX_t = \theta(\mu - X_t)\,dt + \sigma\,dW_t,$$

integrate over the interval $[t_n, t_{n+1}]$ with step size $\Delta t$:

$$X_{t_{n+1}} = X_{t_n} + \int_{t_n}^{t_{n+1}} \theta(\mu - X_s)\,ds + \int_{t_n}^{t_{n+1}} \sigma\,dW_s.$$

Euler–Maruyama approximates these integrals using:

- the drift evaluated at the left endpoint:

$$\int_{t_n}^{t_{n+1}} \theta(\mu - X_s)\,ds \approx \theta(\mu - X_n)\,\Delta t,$$

- the Brownian increment:

$$\Delta W_n = W_{t_{n+1}} - W_{t_n} \sim \mathcal{N}(0, \Delta t),$$

giving the update

$$X_{n+1} = X_n + \theta(\mu - X_n)\,\Delta t + \sigma\,\Delta W_n.$$
This produces a discrete approximation $X_n \approx X_{t_n}$ with step size $\Delta t$.
But refining the time step improves accuracy only at the cost of drastically more computation.
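To make the scheme concrete, here is a minimal Python sketch of Euler–Maruyama for the OU process; the parameter values are illustrative, not from any particular experiment:

```python
import numpy as np

def euler_maruyama_ou(x0, theta, mu, sigma, T, n_steps, rng):
    """One Euler-Maruyama path of dX = theta*(mu - X) dt + sigma dW; returns X_T."""
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x += theta * (mu - x) * dt + sigma * dW
    return x

# Crude single-level Monte Carlo estimate of E[X_T].
rng = np.random.default_rng(0)
est = np.mean([euler_maruyama_ou(1.0, theta=1.0, mu=0.0, sigma=0.3,
                                 T=1.0, n_steps=64, rng=rng)
               for _ in range(2000)])
print(est)  # for these parameters the exact mean is exp(-1) = 0.3679...
```

Halving $\Delta t$ roughly doubles the cost per path, which is exactly the tension MLMC resolves.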
This is where MLMC enters.
2. The MLMC Idea
Instead of running a huge Monte Carlo simulation at the finest step size, MLMC decomposes the expectation using a telescoping sum:

$$\mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{\ell=1}^{L} \mathbb{E}[P_\ell - P_{\ell-1}],$$

where:

- $P_\ell$ is the payoff computed using time step $h_\ell$ (e.g. $h_\ell = 2^{-\ell}\,T$),
- $Y_\ell = P_\ell - P_{\ell-1}$ is a level correction, computed by coupling coarse and fine simulations using the same random normal increments.
The key insight:
Coarse levels are cheap but inaccurate.
Fine levels are expensive but accurate.
MLMC mixes many cheap coarse samples with few expensive fine samples—
reducing total variance at minimal cost.
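Here is a sketch of one coupled level estimator, assuming the payoff is simply $X_T$ and a refinement factor of $M = 2$ between levels (both are assumptions for illustration):

```python
import numpy as np

def mlmc_level(l, n, x0, theta, mu, sigma, T, rng, M=2):
    """Estimate mean and variance of Y_l = P_l - P_{l-1} with coupled paths.

    The fine path takes M**l Euler-Maruyama steps; the coarse path takes
    M**(l-1) steps and reuses the fine Brownian increments (summed in
    groups of M), so both paths are driven by the same noise.
    """
    nf = M**l
    dtf = T / nf
    ys = np.empty(n)
    for i in range(n):
        dWf = rng.normal(0.0, np.sqrt(dtf), size=nf)
        xf = x0
        for dw in dWf:
            xf += theta * (mu - xf) * dtf + sigma * dw
        if l == 0:
            ys[i] = xf  # base level: P_0 itself
        else:
            nc = nf // M
            dtc = T / nc
            dWc = dWf.reshape(nc, M).sum(axis=1)  # coarse increments
            xc = x0
            for dw in dWc:
                xc += theta * (mu - xc) * dtc + sigma * dw
            ys[i] = xf - xc  # coupled correction Y_l = P_l - P_{l-1}
    return ys.mean(), ys.var()

rng = np.random.default_rng(0)
for l in range(5):
    m, v = mlmc_level(l, 2000, 1.0, 1.0, 0.0, 0.3, 1.0, rng)
    print(f"level {l}: mean={m:+.5f}  var={v:.2e}")
```

Because the two paths share the same noise, the variance of $Y_\ell$ shrinks rapidly as $\ell$ grows, which is what lets MLMC get away with very few fine-level samples.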
3. What Quantities MLMC Needs
To carry out the MLMC optimization, we need just three quantities per level $\ell$:

- Variance $V_\ell = \operatorname{Var}[P_\ell - P_{\ell-1}]$
→ Determines how many samples we should take at each level.
- Cost $C_\ell$
→ Number of time steps per sample (e.g. $2^\ell$).
- Bias estimate $\left|\mathbb{E}[P_L - P_{L-1}]\right|$
→ Determines how deep the hierarchy must extend.
These three numbers feed directly into the MLMC formula for the optimal number of samples per level.
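That allocation can be sketched as follows, assuming the standard MLMC sample-allocation rule; the input values for $V_\ell$ and $C_\ell$ below are illustrative, not measured:

```python
import math

def optimal_samples(V, C, eps):
    """Optimal sample counts per level for a target accuracy eps:

    N_l = ceil(2 * eps**-2 * sqrt(V_l / C_l) * S),  S = sum_k sqrt(V_k * C_k),

    which minimizes total cost subject to a variance budget of eps**2 / 2.
    """
    S = sum(math.sqrt(v * c) for v, c in zip(V, C))
    return [math.ceil(2.0 * eps**-2 * math.sqrt(v / c) * S)
            for v, c in zip(V, C)]

# Illustrative inputs: variance decaying like 4**-l, cost C_l = 2**l steps.
V = [4.0**-l for l in range(5)]
C = [2.0**l for l in range(5)]
print(optimal_samples(V, C, eps=0.01))
```

The pattern to notice is that the sample counts fall off steeply with level: almost all the work happens on the cheap coarse grids.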