Vikram Oddiraju | 2025-10-21

Multilevel Monte Carlo for the Ornstein–Uhlenbeck SDE

Estimating expectations of SDEs efficiently using Multilevel Monte Carlo, approximate random variables, and mixed floating-point precision

An exploration of Multilevel Monte Carlo (MLMC) for estimating expectations of stochastic differential equations, using the Ornstein–Uhlenbeck process as a test case. I investigate how approximate random variables and mixed floating-point precision influence computational efficiency and accuracy.

Multilevel Monte Carlo For Estimating the Expected Value of Stochastic Differential Equations

Motivation

A few days ago, I replied to a tweet about stochastic differential equations (SDEs).
(Insert screenshot of tweet + reply)

Quant Beckman’s response was a bit snarky—especially considering it was his own post—but it did get me thinking seriously about the computational challenges behind SDEs. Problems like computing expectations, option payoffs, or tail probabilities can easily become expensive, especially when the SDE has no closed-form solution.

But computationally expensive doesn’t mean hopeless.
It just means we need better algorithms.

This led me down the path of Multilevel Monte Carlo (MLMC), a method designed to dramatically speed up the estimation of expectations of SDEs by combining simulations at different time resolutions. Along the way, I also examined how approximate random variables, single vs double precision, and compensated summation (Kahan) affect accuracy.

Background: Multilevel Monte Carlo (MLMC)

1. The Problem

Consider the Ornstein–Uhlenbeck (OU) SDE:

dx_t = θ(μ − x_t) dt + σ dW_t

If we simulate this SDE from time 0 to T, the value X_T is the random state of the process at the terminal time T. Because the dynamics are stochastic, X_T is non-deterministic, so it is often a hard problem to estimate the expectation

𝔼[P(X_T)]

where P is a payoff function applied to X_T. (In options pricing, for instance, P(x) = max(x − K, 0).)

For many SDEs (including OU with general payoffs), computing this expectation analytically is difficult or impossible. So we turn to numerical simulation.

A simple and widely used scheme is the Euler–Maruyama method.


1.1 Discretizing the OU SDE with Euler–Maruyama

Given

dx_t = θ(μ − x_t) dt + σ dW_t,   x(0) = x_0,

integrate over the interval [t_n, t_{n+1}] with step size Δt:

x_{n+1} − x_n = ∫_{t_n}^{t_{n+1}} θ(μ − x_s) ds + ∫_{t_n}^{t_{n+1}} σ dW_s.

Euler–Maruyama approximates these integrals by freezing the drift at the left endpoint, θ(μ − x_s) ≈ θ(μ − x_n), and replacing the Brownian increment with ΔW_n = W_{t_{n+1}} − W_{t_n} ~ 𝒩(0, Δt) = √Δt Z_n, giving the update

x_{n+1} = x_n + θ(μ − x_n)Δt + σ√Δt Z_n,   Z_n ~ 𝒩(0, 1).

This produces a discrete approximation P_ℓ of the payoff with step size Δt_ℓ = T/2^ℓ.
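As a concrete illustration, here is a minimal NumPy sketch of the Euler–Maruyama update above. The parameter values (θ = 1, μ = 0.5, σ = 0.3, x0 = 0, T = 1) and the choice of 2^6 steps are arbitrary assumptions for the example, not values from this post.

```python
import numpy as np

def euler_maruyama_ou(theta, mu, sigma, x0, T, n_steps, rng):
    """Simulate one OU path on [0, T] via Euler-Maruyama; return x_T."""
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        z = rng.standard_normal()                       # Z_n ~ N(0, 1)
        x = x + theta * (mu - x) * dt + sigma * np.sqrt(dt) * z
    return x

# Crude single-level Monte Carlo estimate of E[x_T] (illustrative values)
rng = np.random.default_rng(0)
samples = [euler_maruyama_ou(1.0, 0.5, 0.3, 0.0, 1.0, 2**6, rng)
           for _ in range(10_000)]
print(np.mean(samples))
```

For the OU process the exact terminal mean is μ + (x0 − μ)e^{−θT} ≈ 0.316 with these parameters, so the printed estimate should land close to that, up to discretization bias and Monte Carlo noise.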

But refining the time step improves accuracy only at the cost of drastically more computation.

This is where MLMC enters.


2. The MLMC Idea

Instead of running a huge Monte Carlo simulation at the finest step size, MLMC decomposes the expectation using a telescoping sum:

𝔼[P_L] = 𝔼[P_0] + Σ_{ℓ=1}^{L} 𝔼[P_ℓ − P_{ℓ−1}],

where P_ℓ is the payoff computed with step size Δt_ℓ = T/2^ℓ, and each correction 𝔼[P_ℓ − P_{ℓ−1}] is estimated by driving the fine and coarse simulations with the same Brownian path.

The key insight:

Coarse levels are cheap but inaccurate.
Fine levels are expensive but accurate.
MLMC mixes many cheap coarse samples with few expensive fine samples—
reducing total variance at minimal cost.
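The coupling behind each correction term can be sketched as follows: simulate the fine path with 2^ℓ steps and the coarse path with 2^{ℓ−1} steps, driving both with the same Brownian increments (each coarse increment is the sum of two fine ones). The identity payoff P(x) = x and all parameter values here are illustrative assumptions.

```python
import numpy as np

def level_correction(l, theta, mu, sigma, x0, T, rng):
    """One coupled sample of P_l - P_{l-1} for the OU SDE, P(x) = x."""
    nf = 2**l
    dtf = T / nf
    dW = rng.standard_normal(nf) * np.sqrt(dtf)   # fine Brownian increments

    xf = x0                                       # fine path, 2^l steps
    for i in range(nf):
        xf = xf + theta * (mu - xf) * dtf + sigma * dW[i]

    if l == 0:
        return xf                                 # level 0 has no coarse partner

    nc = nf // 2
    dtc = T / nc
    dWc = dW.reshape(nc, 2).sum(axis=1)           # coarse increments = pairwise sums
    xc = x0                                       # coarse path, 2^(l-1) steps
    for i in range(nc):
        xc = xc + theta * (mu - xc) * dtc + sigma * dWc[i]
    return xf - xc

rng = np.random.default_rng(1)
diffs = [level_correction(4, 1.0, 0.5, 0.3, 0.0, 1.0, rng) for _ in range(5_000)]
print(np.var(diffs))
```

Because both paths see the same noise, their difference comes only from the drift discretization, so Var(P_ℓ − P_{ℓ−1}) is tiny compared with Var(P_ℓ) and shrinks as ℓ grows; that is exactly what makes the fine levels cheap to correct.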


3. What Quantities MLMC Needs

To carry out the MLMC optimization, we need just three quantities per level ℓ:

  1. Variance

     V_ℓ = Var(P_ℓ − P_{ℓ−1})

→ Determines how many samples we should take at each level.

  2. Cost

     C_ℓ ∝ 2^ℓ

→ Number of time steps per sample.

  3. Bias estimate

     B_L = |𝔼[P_L − P_{L−1}]|

→ Determines how deep the hierarchy must extend.

These three numbers feed directly into the MLMC formula for the optimal number of samples per level.
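That allocation can be sketched directly. Assuming the standard MLMC formula N_ℓ = ⌈(2/ε²) √(V_ℓ/C_ℓ) Σ_m √(V_m C_m)⌉, which minimizes total cost subject to an estimator-variance budget of ε²/2, a minimal implementation (with made-up variance and cost estimates) looks like:

```python
import numpy as np

def mlmc_sample_sizes(V, C, eps):
    """Optimal per-level sample counts N_l for target RMS accuracy eps.

    N_l = ceil( (2 / eps^2) * sqrt(V_l / C_l) * sum_m sqrt(V_m * C_m) )
    """
    V, C = np.asarray(V, float), np.asarray(C, float)
    total = np.sum(np.sqrt(V * C))
    return np.ceil((2.0 / eps**2) * np.sqrt(V / C) * total).astype(int)

# Illustrative (made-up) level variances and costs for a 4-level hierarchy
V = np.array([1e-1, 2e-2, 5e-3, 1e-3])   # V_l shrinks with level
C = np.array([1, 2, 4, 8])               # C_l ~ 2^l time steps
print(mlmc_sample_sizes(V, C, eps=1e-2))
```

Note how the counts decay down the hierarchy: many cheap coarse samples, few expensive fine ones, which is the MLMC cost saving in action.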