Numerical Methods II: Prof. Mike Giles
Variance Reduction
Monte Carlo starts as a very simple method; much of the complexity in practice comes from trying to reduce the variance, and hence the number of samples that have to be simulated to achieve a given accuracy:
antithetic variables
control variates
importance sampling
stratified sampling (lecture 4)
Latin hypercube (lecture 4)
quasi-Monte Carlo (lecture 5)
Review of elementary results
If a, b are random variables, and λ, µ are constants, then
E[a + µ] = E[a] + µ
V[a + µ] = V[a]
E[λ a] = λ E[a]
V[λ a] = λ² V[a]
E[a + b] = E[a] + E[b]
V[a + b] = V[a] + 2 Cov[a, b] + V[b]
where
V[a] ≡ E[ (a − E[a])² ] = E[a²] − (E[a])²
Cov[a, b] ≡ E[ (a − E[a]) (b − E[b]) ]
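A quick numerical check of the identity V[a + b] = V[a] + 2 Cov[a, b] + V[b] (a sketch, not part of the original slides):

N = 1e6;
a = randn(1,N);
b = 0.5*a + randn(1,N);            % correlated with a
C = cov(a,b);                      % 2x2 sample covariance matrix
lhs = var(a+b)
rhs = var(a) + 2*C(1,2) + var(b)   % should agree with lhs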
Review of elementary results
If a, b are independent random variables then Cov[a, b] = 0 and hence
V[a + b] = V[a] + V[b]
and so, for N independent samples x_n from the same distribution,
V[ N^{-1} Σ_{n=1}^{N} x_n ] = N^{-1} V[x]
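A quick numerical check of this 1/N scaling (a sketch, not part of the original slides), using M independent sample means of N standard Normal variables:

N = 100; M = 1e5;
x = randn(N,M);            % each column is one set of N samples
m = sum(x,1)/N;            % M independent sample means
var_mean = var(m)          % should be close to var(x(:))/N = 1/N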
Antithetic variables
The simple estimator from the last lecture has the form
N^{-1} Σ_i f(W^(i))
Antithetic variables
The antithetic estimator replaces f(W^(i)) by
f̄^(i) = (1/2) ( f(W^(i)) + f(−W^(i)) )
Antithetic variables
The variance is always reduced, but the cost is almost doubled, so there is a net benefit only if Cov[f(W), f(−W)] < 0.
Two extremes:
A linear payoff, f = a + b W, is integrated exactly, since the antithetic average is f̄ = a and Cov[f(W), f(−W)] = −V[f]
A symmetric payoff, f(W) = f(−W), is the worst case, since Cov[f(W), f(−W)] = V[f]
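As a sketch (not part of the original slides), an antithetic estimator for the European call used in the control variate example below; S0, r, sig, T and K are assumed to be defined already:

N = 1e5;
Y = norminv(rand(1,N));                      % N(0,1) samples
Sp = S0*exp((r-sig^2/2)*T + sig*sqrt(T)*Y);  % paths using +Y
Sm = S0*exp((r-sig^2/2)*T - sig*sqrt(T)*Y);  % antithetic paths using -Y
Fp = exp(-r*T)*max(0,Sp-K);
Fm = exp(-r*T)*max(0,Sm-K);
Fbar = 0.5*(Fp+Fm);                          % antithetic average per sample
Fave = sum(Fbar)/N;
sd_anti = sqrt( sum((Fbar-Fave).^2)/(N*(N-1)) )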
Control Variates
Suppose we want to approximate E[f] using a simple Monte Carlo average f̄. If there is another payoff g for which E[g] is known, we can instead use
f̂ = f̄ − λ ( ḡ − E[g] )
where ḡ is the Monte Carlo average of g.
Control Variates
For a single sample,
V[ f − λ (g − E[g]) ] = V[f] − 2 λ Cov[f, g] + λ² V[g]
which is minimised by
λ = Cov[f, g] / V[g]
Control Variates
The resulting variance is
N^{-1} V[f] ( 1 − (Cov[f, g])² / ( V[f] V[g] ) ) = N^{-1} V[f] ( 1 − ρ² )
where ρ is the correlation between f and g.
Control Variates
Possible choices:
for a European call option (ignoring the fact that its value is known analytically) we could use g = S, since exp(−rt) S(t) is a martingale and hence E[exp(−rT) S(T)] = S(0).
Application
MATLAB code, part 1 – estimating λ:
N = 1000;                  % pilot run (assumes S0, r, sig, T, K already set)
U = rand(1,N);             % uniform random variables
Y = norminv(U);            % inverts Normal cum. fn.
S = S0*exp((r-sig^2/2)*T + sig*sqrt(T)*Y);
F = exp(-r*T)*max(0,S-K);  % discounted call payoff
C = exp(-r*T)*S;           % control variate, E[C] = S0
Fave = sum(F)/N;
Cave = sum(C)/N;
lam = sum((F-Fave).*(C-Cave)) / sum((C-Cave).^2);   % estimated optimal lambda
Application
MATLAB code, part 2 – control variate estimation:
N = 1e5;
U = rand(1,N);             % uniform random variables
Y = norminv(U);            % inverts Normal cum. fn.
S = S0*exp((r-sig^2/2)*T + sig*sqrt(T)*Y);
F = exp(-r*T)*max(0,S-K);  % discounted call payoff
C = exp(-r*T)*S;           % control variate, E[C] = S0
F2 = F - lam*(C-S0);       % control variate estimator
Fave = sum(F)/N;
F2ave = sum(F2)/N;
sd = sqrt( sum((F -Fave ).^2)/(N*(N-1)) )    % standard error without control variate
sd2 = sqrt( sum((F2-F2ave).^2)/(N*(N-1)) )   % standard error with control variate
Application
Results:
>> lec3
sd =
    0.0599
sd2 =
    0.0151
so the control variate reduces the standard error by a factor of about 4, equivalent to using roughly 16 times as many samples.
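To connect these numbers to the 1 − ρ² formula, a sketch (not part of the original slides) reusing the F and C vectors from the code above:

R = corrcoef(F,C);            % 2x2 correlation matrix
rho = R(1,2);                 % correlation between payoff and control variate
sd_pred = sd*sqrt(1-rho^2)    % predicted standard error; should be close to sd2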
Importance Sampling
Importance sampling involves a change of probability measure. Instead of sampling X from a distribution with p.d.f. p1(X), we take it from a different distribution with p.d.f. p2(X).
E1[f(X)] = ∫ f(X) p1(X) dX
         = ∫ f(X) (p1(X) / p2(X)) p2(X) dX
         = E2[f(X) R(X)]
where R(X) ≡ p1(X) / p2(X) is the likelihood ratio.
Importance Sampling
We want the new variance V2[f(X) R(X)] to be smaller than the old variance V1[f(X)].
Importance Sampling
Really simple rare event example: suppose the random variable X takes the value 1 with probability δ ≪ 1 and is otherwise 0. Then
E[X] = δ,   V[X] = δ (1 − δ) ≈ δ
so the relative error of the standard Monte Carlo estimate is roughly (N δ)^{-1/2}, and an accurate estimate needs N ≫ δ^{-1} samples.
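A minimal sketch of importance sampling for this example (not from the slides): sample from a Bernoulli distribution with a much larger probability delta2 and weight each sample by the likelihood ratio p1/p2.

delta = 1e-4; delta2 = 0.1;      % original and sampling probabilities
N = 1e4;
X = (rand(1,N) < delta2);        % samples from the new distribution
R = X*(delta/delta2) + (1-X)*((1-delta)/(1-delta2));   % likelihood ratio p1/p2
est = sum(X.*R)/N                % unbiased estimate of E[X] = delta
sd  = sqrt( sum((X.*R - est).^2)/(N*(N-1)) )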
Importance Sampling
Digital put option: the discounted payoff is
f = exp(−r T) 1(S(T) < K)
where 1(·) is the indicator function and
S(T) = S(0) exp( (r − σ²/2) T + σ W(T) )
Importance Sampling
A digital put option with very low strike (e.g. K = 0.4 S(0))
is sometimes used as a hedge for credit derivatives.
Importance Sampling
Approach 1: change the mean from µ1 to µ2 < µ1 by using
X = µ2 + σ W(T)
Approach 2: keep the mean but change the volatility from σ to σ2 by using
X = µ + σ2 W(T)
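A sketch of Approach 1 for the digital put (not from the slides; the parameter values are illustrative): sample log S(T) with its mean shifted down towards the strike, and weight each sample by the ratio of the two Normal densities.

S0 = 1; K = 0.4*S0; r = 0.05; sig = 0.2; T = 1;   % illustrative values
s   = sig*sqrt(T);
mu1 = log(S0) + (r-sig^2/2)*T;   % original mean of log S(T)
mu2 = log(K);                    % shifted mean, mu2 < mu1
N = 1e5;
X  = mu2 + s*randn(1,N);         % sample log S(T) with the shifted mean
R  = normpdf(X,mu1,s)./normpdf(X,mu2,s);   % likelihood ratio p1/p2
F  = exp(-r*T)*(X<log(K)).*R;    % re-weighted digital put payoff
Fave = sum(F)/N;
sd   = sqrt( sum((F-Fave).^2)/(N*(N-1)) )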