Week 5 Lecture 1


Monte Carlo Simulation

1
Monte Carlo Simulation and Options
When used to value European stock options, Monte
Carlo simulation involves the following steps:
1. Simulate 1 path for the stock price in a risk-neutral
world
2. Calculate the payoff from the stock option
3. Repeat steps 1 and 2 many times to get many sample
payoffs
4. Calculate the mean payoff
5. Discount the mean payoff at the risk-free rate to get an
estimate of the value of the option

2
Sampling Stock Price Movements
• In a risk-neutral world the process for a stock
price is
  $dS = \hat{\mu}\, S\, dt + \sigma\, S\, dz$
• We can simulate a path by choosing time
steps of length $\Delta t$ and using the discrete
version of this
  $\Delta S = \hat{\mu}\, S\, \Delta t + \sigma\, S\, \varepsilon \sqrt{\Delta t}$
where $\varepsilon$ is a random sample from N(0,1)
3
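A minimal sketch of one simulated path using this discrete update (the parameter values are illustrative assumptions; for a non-dividend-paying stock the risk-neutral drift $\hat{\mu}$ equals the risk-free rate r):

```python
import numpy as np

# Illustrative parameters (assumptions, not from the slides)
S0, r, sigma, T, n_steps = 100.0, 0.05, 0.20, 1.0, 252
dt = T / n_steps

rng = np.random.default_rng(seed=42)
S = S0
for _ in range(n_steps):
    eps = rng.standard_normal()                        # sample from N(0,1)
    S += r * S * dt + sigma * S * eps * np.sqrt(dt)    # Delta S = mu_hat*S*dt + sigma*S*eps*sqrt(dt)
print(S)   # one simulated terminal stock price
```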
A More Accurate Approach
Use
  $d \ln S = \left(\hat{\mu} - \sigma^2/2\right) dt + \sigma\, dz$
The discrete version of this is
  $\ln S(t + \Delta t) - \ln S(t) = \left(\hat{\mu} - \sigma^2/2\right) \Delta t + \sigma\, \varepsilon \sqrt{\Delta t}$
or
  $S(t + \Delta t) = S(t)\, e^{\left(\hat{\mu} - \sigma^2/2\right) \Delta t + \sigma\, \varepsilon \sqrt{\Delta t}}$
4
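A sketch that puts the five steps from the earlier slide together with this exact update to price a European call (the parameters and the call payoff are illustrative assumptions; because the update is exact, a single time step to maturity is enough for a European option):

```python
import numpy as np

# Illustrative parameters (assumptions)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.20, 1.0
N = 100_000                                    # number of simulated paths

rng = np.random.default_rng(seed=0)
eps = rng.standard_normal(N)                   # step 1: samples from N(0,1)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * eps)
payoffs = np.maximum(ST - K, 0.0)              # step 2: call payoffs
price = np.exp(-r * T) * payoffs.mean()        # steps 3-5: average and discount
print(price)   # close to the Black-Scholes value (about 10.45 here)
```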
Sampling from Normal
Distribution
• One simple way to obtain a sample
from N(0,1) is to generate 12 random
numbers between 0.0 & 1.0, take the
sum, and subtract 6.0
• In Excel =NORMSINV(RAND()) gives
a random sample from N(0,1)
• In Matlab: ‘randn’ generates random
samples from N(0,1)
5
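A small sketch contrasting the two sampling methods above (NumPy's standard normal generator plays the role of Excel's NORMSINV(RAND()) or Matlab's randn):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Approximate N(0,1) sample: sum of 12 uniforms on (0,1) minus 6
eps_approx = rng.uniform(0.0, 1.0, size=12).sum() - 6.0

# Exact N(0,1) sample from the library generator
eps_exact = rng.standard_normal()

print(eps_approx, eps_exact)
```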
Standard Errors
• When N is sufficiently large, the price
estimate has a normal distribution with the
following parameters:
– Mean: the true price of the contract
– Standard deviation: $\sigma/\sqrt{N}$, with $\sigma$ the standard
deviation of the discounted payoffs and N the
number of simulated paths
• $\sigma/\sqrt{N}$ is called the “standard error” of the
estimated price
6
Confidence Intervals
• The standard error of the estimate of the
option price is the standard deviation of
the discounted payoffs given by the
simulation trials divided by the square root
of the number of observations.
• Remember: the estimate of the price is
the sample mean of the simulated
discounted payoffs

7
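A sketch of the standard error and a 95% confidence interval computed from simulated discounted payoffs (the `discounted_payoffs` array is assumed to come from a simulation like the one sketched earlier; 1.96 is the usual normal quantile for a 95% interval):

```python
import numpy as np

def price_with_confidence(discounted_payoffs):
    N = len(discounted_payoffs)
    price = discounted_payoffs.mean()                        # sample mean = price estimate
    std_err = discounted_payoffs.std(ddof=1) / np.sqrt(N)    # sigma / sqrt(N)
    return price, (price - 1.96 * std_err, price + 1.96 * std_err)
```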
Extension
When a derivative depends on several
underlying variables we can simulate
paths for each of them in a risk-neutral
world to calculate the values for the
derivative

8
To Obtain 2 Correlated Normal
Samples
• Obtain independent (uncorrelated) normal
samples $x_1$ and $x_2$
• We get two samples $\varepsilon_1$ and $\varepsilon_2$ with correlation $\rho$
as follows:
  $\varepsilon_1 = x_1$
  $\varepsilon_2 = \rho\, x_1 + x_2 \sqrt{1 - \rho^2}$
9
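A minimal sketch of the two-sample construction above ($\rho = 0.6$ is just an illustrative value):

```python
import numpy as np

rho = 0.6
rng = np.random.default_rng(seed=2)
x1, x2 = rng.standard_normal(100_000), rng.standard_normal(100_000)

eps1 = x1
eps2 = rho * x1 + x2 * np.sqrt(1.0 - rho**2)

print(np.corrcoef(eps1, eps2)[0, 1])   # close to 0.6
```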
Cholesky Decomposition
• Obtain n independent (uncorrelated) normal
samples $x_1, \ldots, x_n$
• We want to obtain n samples $\varepsilon_1, \ldots, \varepsilon_n$ with
correlation matrix $\rho$:
  $\rho = \begin{pmatrix} 1 & \rho_{12} & \cdots & \rho_{1n} \\ \rho_{21} & 1 & & \vdots \\ \vdots & & \ddots & \rho_{n-1,n} \\ \rho_{n1} & \cdots & \rho_{n,n-1} & 1 \end{pmatrix}$
• We first calculate the Cholesky decomposition A
of $\rho$, a lower-triangular matrix
  $A = \begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & & \vdots \\ \vdots & & \ddots & 0 \\ a_{n1} & \cdots & & a_{nn} \end{pmatrix}$
such that $A A^T = \rho$
10
Cholesky Decomposition (cont’d)
• The n samples $\varepsilon_1, \ldots, \varepsilon_n$ with correlation matrix $\rho$
are then obtained as:
  $\begin{pmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_n \end{pmatrix} = A \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$
• Remark: A only exists if $\rho$ is indeed a
correlation matrix
11
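A sketch of the general construction using NumPy's Cholesky factorization (the 3×3 correlation matrix is an illustrative assumption; `np.linalg.cholesky` returns the lower-triangular A with $A A^T = \rho$):

```python
import numpy as np

rho = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.2],
                [0.3, 0.2, 1.0]])        # must be a valid correlation matrix
A = np.linalg.cholesky(rho)              # lower triangular, A @ A.T == rho

rng = np.random.default_rng(seed=3)
x = rng.standard_normal((3, 100_000))    # independent N(0,1) samples
eps = A @ x                              # correlated samples, one row per variable

print(np.corrcoef(eps))                  # approximately rho
```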
Cholesky Decomposition (cont’d)
Special case
• If  is a 2x2 matrix:
1 12 

  21 1 
• Then
1 0 
A
 1   2 

12
Application of Monte Carlo Simulation
• Monte Carlo simulation can deal with
path dependent options, and options
with complex payoffs.
• It can easily be used to price options
dependent on several underlying
state variables.
BUT
• It cannot easily deal with American-
style options.

13
Determining Greek Letters

For $\Delta$ (delta):
1. Make a small change to the asset price
2. Carry out the simulation again using the same
random number streams
3. Estimate $\Delta$ as the change in the option price
divided by the change in the asset price

Proceed in a similar manner for other Greek letters

14
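A sketch of this bump-and-revalue delta estimate, reusing the same random numbers for both valuations (the parameters and the bump size h are illustrative assumptions; the exact lognormal update from the earlier slides is used):

```python
import numpy as np

def mc_call_price(S0, K, r, sigma, T, eps):
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * eps)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

rng = np.random.default_rng(seed=4)
eps = rng.standard_normal(200_000)        # same random number stream for both runs

S0, K, r, sigma, T, h = 100.0, 100.0, 0.05, 0.20, 1.0, 0.01
delta = (mc_call_price(S0 + h, K, r, sigma, T, eps)
         - mc_call_price(S0, K, r, sigma, T, eps)) / h
print(delta)   # close to the Black-Scholes delta (about 0.64 here)
```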
Variance Reduction Techniques
1. Antithetic variable technique
2. Control variate technique
3. Importance sampling
4. Stratified sampling
5. Moment matching

15
1. Antithetic variable technique
• Given the random series $\varepsilon_1, \ldots, \varepsilon_n$ used to
generate one discounted payoff $f_1$, we use the
series $-\varepsilon_1, \ldots, -\varepsilon_n$ to generate a second
discounted payoff $f_2$.
• The average of the two discounted payoffs is
used as a single estimate $\bar{f}$:
  $\bar{f} = \frac{f_1 + f_2}{2}$
• The number of independent estimates N is
given by the number of $\bar{f}$'s
• The standard error is given by $\sigma/\sqrt{N}$, with $\sigma$ the
standard deviation of the $\bar{f}$'s
16
• Why does it work?
• Each time we have drawn a random sample of
$\varepsilon$'s that is unusually high, the series $-\varepsilon$ is
unusually low, and vice versa.
• So the two series compensate each other.

17
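A sketch of the antithetic technique for the same European call (illustrative parameters; each pair $(\varepsilon, -\varepsilon)$ yields one averaged estimate $\bar{f}$, and the standard error is computed across the $\bar{f}$'s):

```python
import numpy as np

S0, K, r, sigma, T, N = 100.0, 100.0, 0.05, 0.20, 1.0, 100_000
rng = np.random.default_rng(seed=5)
eps = rng.standard_normal(N)

def discounted_payoff(e):
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * e)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

f_bar = 0.5 * (discounted_payoff(eps) + discounted_payoff(-eps))  # one estimate per pair
price = f_bar.mean()
std_err = f_bar.std(ddof=1) / np.sqrt(N)     # N = number of f_bar's
print(price, std_err)
```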
2. Control variate technique
• Suppose we want to obtain the price $f_A$ of a
derivative A.
• Assume that there is another derivative B, similar
to A, for which we have an analytic
expression for the price.
• We generate price estimates $f_A^*$ and $f_B^*$ using
the same $\varepsilon$'s.
• The price estimate $f_A$ for A is then given by:
  $f_A = f_A^* + (f_B - f_B^*)$
where $f_B$ is the known true price of B calculated
analytically.
18
• Why does it work?
• From the expression for $f_A$:
  $f_A = f_A^* + (f_B - f_B^*)$
one sees that this technique adds the term $(f_B - f_B^*)$ to the
simulated price $f_A^*$ for the derivative A.
• $(f_B - f_B^*)$ is the difference between the known true price of
derivative B and its simulated price. It picks up, and
corrects for, any overestimation or underestimation.

19
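A minimal sketch of the control variate idea, taking derivative B to be the discounted terminal stock price, whose true value $f_B = S_0$ is known exactly under risk-neutral pricing (this choice of B and the parameters are illustrative assumptions; in practice B is chosen to be as similar to A as possible):

```python
import numpy as np

S0, K, r, sigma, T, N = 100.0, 100.0, 0.05, 0.20, 1.0, 100_000
rng = np.random.default_rng(seed=6)
eps = rng.standard_normal(N)                  # the same eps's for both A and B

ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * eps)

f_A_star = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()   # simulated price of A (the call)
f_B_star = (np.exp(-r * T) * ST).mean()                      # simulated price of B
f_B = S0                                                     # known true price of B

f_A = f_A_star + (f_B - f_B_star)             # control-variate corrected estimate
print(f_A)
```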
3. Moment matching
• For each path we store all the $\varepsilon_i$'s.
• We calculate the mean m and the standard
deviation s of the sample of $\varepsilon_i$'s.
• We define a new series of $\varepsilon_i^*$'s as:
  $\varepsilon_i^* = \frac{\varepsilon_i - m}{s}$
• This way, the mean of the series of $\varepsilon_i^*$'s used to
generate the path is exactly zero, and its
standard deviation is exactly one.
20
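A sketch of the moment-matching adjustment (the sample is illustrative; after rescaling, the adjusted $\varepsilon$'s have mean exactly zero and standard deviation exactly one):

```python
import numpy as np

rng = np.random.default_rng(seed=7)
eps = rng.standard_normal(10_000)

m, s = eps.mean(), eps.std()        # sample mean and standard deviation
eps_star = (eps - m) / s            # adjusted samples used to generate the paths

print(eps_star.mean(), eps_star.std())   # 0.0 and 1.0 (up to rounding)
```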
4. Stratified sampling
• We typically generate a series of $\varepsilon_i$'s as
follows:
  $\varepsilon_i = \Phi^{-1}(u_i)$
with the $u_i$'s drawn from a uniform random
distribution on (0,1), and $\Phi^{-1}$ the inverse of the
cumulative standard normal distribution.
• For any sample of M $\varepsilon_i$'s, the $u_i$'s will never be
distributed perfectly uniformly.
• We can force a perfectly uniform spread by
generating the $\varepsilon_i$'s as:
  $\varepsilon_i = \Phi^{-1}\!\left(\frac{i - 0.5}{M}\right)$
21
• Why does it work?
• If we need M $\varepsilon_i$'s, then we divide the interval (0,1)
into M steps of length 1/M.
• We effectively set each $u_i$ equal to the middle of
such an interval.
• This forces the $u_i$'s to be perfectly spread
across the entire range (0,1).
• The order in which we generate the $u_i$'s doesn't
matter.

22
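A sketch of stratified sampling with M strata, using SciPy's inverse normal CDF for $\Phi^{-1}$ (the value of M is an illustrative assumption):

```python
import numpy as np
from scipy.stats import norm

M = 10_000
i = np.arange(1, M + 1)
u = (i - 0.5) / M                 # u_i perfectly spread over (0,1)
eps = norm.ppf(u)                 # eps_i = Phi^{-1}(u_i)

print(eps.mean(), eps.std())      # very close to 0 and 1
```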
5. Quasi-Random Sequences
• With stratified sampling, we need to determine the
number M of $\varepsilon_i$'s at the start, and stick to it.
• If we set M = 100,000, but only use the first 90,000 $u_i$'s,
we will be missing the 10% biggest values.
• Quasi-random sequences are series of $u_i$'s that are
spread uniformly over (0,1), just as with stratified
sampling.
• But extra values for the $u_i$'s are always set such that they
fill in the gaps left between the previous values.
• Quasi-random sequences are generated using
equations. They aren't random at all; they just appear
to be so.
23
Example of quasi-random numbers: the Sobol
sequence in two dimensions (figure)

24
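A sketch of a two-dimensional Sobol sequence using SciPy's quasi-Monte Carlo module (available in scipy >= 1.7; the point count of 256 is an illustrative power of two, as recommended for Sobol sequences):

```python
from scipy.stats import qmc

sampler = qmc.Sobol(d=2, scramble=False)   # 2-dimensional Sobol sequence
points = sampler.random(n=256)             # 256 points spread evenly over the unit square

print(points[:5])
```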
