
2.2.2 Weak convergence for vanilla options

• The goal is to estimate the error

$\epsilon_w := E[g(X_T^\pi)] - E[g(X_T)]$ (2.17)

• (Hr) $g$, $b$, $\sigma$ and $u$ are $C^{1,2}$ with Lipschitz derivatives.

Theorem 2.3. Under (Hr) and (HL),

$|E[g(X_T^\pi)] - E[g(X_T)]| \le C h$ . (2.18)

Proof. First, we note that, for $t \in [t_i, t_{i+1}]$,

$E\big[|f(X_t^\pi) - f(X_{t_i}^\pi)|^q\big]^{1/q} \le L_f \sqrt{h}$ (2.19)

where $L_f$ is the Lipschitz constant of $f$. For later use we introduce

$L_i := \sup_{t \in [t_i, t_{i+1}]} \big\{ |\partial_x u(t,\cdot)|_\infty + |\partial^2_{xx} u(t,\cdot)|_\infty + |\partial^3_{xxx} u(t,\cdot)|_\infty \big\}$

and denote by $L$ the Lipschitz constant of $b$ and $a$ (to simplify, $\sigma$ is assumed bounded in the proof).

We observe that, since $u(T,\cdot) = g$, $X_{t_0}^\pi = X_0$ and $u(0, X_0) = E[g(X_T)]$, the weak error telescopes as

$\epsilon_w = \sum_{i=0}^{n-1} E\big[ u(t_{i+1}, X_{t_{i+1}}^\pi) - u(t_i, X_{t_i}^\pi) \big]$ .

Applying Itô's formula, we have

$E\big[ u(t_{i+1}, X_{t_{i+1}}^\pi) - u(t_i, X_{t_i}^\pi) \big] = \int_{t_i}^{t_{i+1}} E\Big[ \partial_t u(t, X_t^\pi) + b(X_{t_i}^\pi)\,\partial_x u(t, X_t^\pi) + \frac{1}{2}\,\sigma^2(X_{t_i}^\pi)\,\partial^2_{xx} u(t, X_t^\pi) \Big]\, dt$ .

Using the PDE satisfied by $u$, we get

$E\big[ u(t_{i+1}, X_{t_{i+1}}^\pi) - u(t_i, X_{t_i}^\pi) \big] = \int_{t_i}^{t_{i+1}} E\Big[ \{b(X_{t_i}^\pi) - b(X_t^\pi)\}\,\partial_x u(t, X_t^\pi) + \frac{1}{2}\,\{a(X_{t_i}^\pi) - a(X_t^\pi)\}\,\partial^2_{xx} u(t, X_t^\pi) \Big]\, dt$ , (2.20)

where $a := \sigma^2$.

For the first term in the RHS we compute

$|E[\{b(X_{t_i}^\pi) - b(X_t^\pi)\}\,\partial_x u(t, X_t^\pi)]|$
$\quad = |E[\{b(X_{t_i}^\pi) - b(X_t^\pi)\}\{\partial_x u(t, X_t^\pi) - \partial_x u(t, X_{t_i}^\pi)\}] + E[\{b(X_{t_i}^\pi) - b(X_t^\pi)\}\,\partial_x u(t, X_{t_i}^\pi)]|$
$\quad \le L L_i h_i + \Big| E\Big[ \partial_x u(t, X_{t_i}^\pi) \int_{t_i}^{t} \bar{L}_{z = X_{t_i}^\pi} b(X_s^\pi)\, ds \Big] \Big|$

where we used (2.19) and the Cauchy–Schwarz inequality to get the upper bound for the first term in the RHS of the above inequality; here $\bar{L}_z$ denotes the generator with coefficients frozen at $z$, i.e. $\bar{L}_z \varphi := b(z)\,\partial_x \varphi + \frac{1}{2}\sigma^2(z)\,\partial^2_{xx}\varphi$. For the second term, we apply Itô's formula:

$E\big[\{b(X_{t_i}^\pi) - b(X_t^\pi)\}\,\partial_x u(t, X_{t_i}^\pi)\big] = -E\Big[ \partial_x u(t, X_{t_i}^\pi) \Big( \int_{t_i}^{t} \bar{L}_{z = X_{t_i}^\pi} b(X_s^\pi)\, ds + \int_{t_i}^{t} \partial_x b(X_s^\pi)\,\sigma(X_{t_i}^\pi)\, dW_s \Big) \Big]$ .

Conditioning with respect to $\mathcal{F}_{t_i}$ cancels the stochastic integral term. Using the assumptions on $b$, we get

$|E[\{b(X_{t_i}^\pi) - b(X_t^\pi)\}\,\partial_x u(t, X_{t_i}^\pi)]| \le C L_i h_i$ .

We thus obtain

$|E[\{b(X_{t_i}^\pi) - b(X_t^\pi)\}\,\partial_x u(t, X_t^\pi)]| \le C L_i h_i$ .

For the second term in (2.20), we perform similar computations and obtain the same upper bound. This leads to

$|\epsilon_w| \le C \sum_{i=0}^{n-1} L_i h_i^2$ (2.21)

$\le C h$ . (2.22)

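Before moving to the extensions, the first-order rate in (2.18) can be checked numerically. A minimal sketch under illustrative assumptions (GBM dynamics, for which the vanilla price is known in closed form; the call payoff is only Lipschitz, so this is a sanity check rather than a strict instance of the theorem):

```python
import numpy as np
from scipy.stats import norm

# Black-Scholes reference price (illustrative parameters).
S0, r, vol, T, K = 100.0, 0.05, 0.2, 1.0, 100.0
d1 = (np.log(S0 / K) + (r + 0.5 * vol**2) * T) / (vol * np.sqrt(T))
ref = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - vol * np.sqrt(T))

rng = np.random.default_rng(0)
for n in (5, 10, 20, 40):              # the step h = T/n halves from line to line
    h = T / n
    x = np.full(10**6, S0)
    for _ in range(n):                 # Euler scheme with Gaussian increments
        x += r * x * h + vol * x * rng.normal(0.0, np.sqrt(h), x.size)
    est = np.exp(-r * T) * np.maximum(x - K, 0.0).mean()
    print(n, est - ref)                # weak error: should roughly halve with h
```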
Extensions.

i) Error expansion under additional smoothness

Talay & Tubaro [73] have proved:

Theorem 2.4. If $u \in C^\infty$,

$E[g(X_T^\pi)] = E[g(X_T)] + \sum_{i=1}^{n} C_i h^i + O(h^{n+1})$

↪ Generally one cannot beat order one with an Euler scheme.

↪ Possibility of the Romberg–Richardson extrapolation method: e.g. compute $2E\big[g(X_T^h)\big] - E\big[g(X_T^{2h})\big]$, which cancels the first-order term $C_1 h$ of the expansion (see the sketch below).
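As an illustration, here is a minimal Monte Carlo sketch of the Richardson extrapolation $2E[g(X_T^h)] - E[g(X_T^{2h})]$; the SDE coefficients, payoff and parameter values are illustrative assumptions, not part of the lecture:

```python
import numpy as np

def euler_paths(x0, b, sigma, T, n_steps, n_paths, rng):
    """Simulate terminal values X_T^pi of the Euler scheme with n_steps steps."""
    h = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(h), n_paths)   # Brownian increments
        x = x + b(x) * h + sigma(x) * dw
    return x

# Illustrative model: geometric Brownian motion, call payoff.
b = lambda x: 0.05 * x
sigma = lambda x: 0.2 * x
g = lambda x: np.maximum(x - 100.0, 0.0)

rng = np.random.default_rng(0)
T, n, N = 1.0, 50, 10**6            # step h = T/n
p_h  = g(euler_paths(100.0, b, sigma, T, n,      N, rng)).mean()  # step h
p_2h = g(euler_paths(100.0, b, sigma, T, n // 2, N, rng)).mean()  # step 2h
p_extrap = 2.0 * p_h - p_2h         # cancels the C_1 h term of Theorem 2.4
print(p_h, p_2h, p_extrap)
```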

ii) Measurable terminal condition

• For the Euler scheme with Brownian increments, and under a condition on the diffusion coefficient, Bally & Talay [7] have proved

$E[g(X_T^\pi)] = E[g(X_T)] + O(h)$

for $g$ measurable and bounded!
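As an illustration, a digital option fits this setting: $g(s) = 1_{s \ge K}$ is bounded and measurable but discontinuous (it is also the payoff used in Figures 1–3 below). A minimal MC sketch with the Euler scheme, under illustrative GBM dynamics and parameters:

```python
import numpy as np

# Digital payoff: bounded and measurable, but discontinuous at the strike.
digital = lambda s, K=100.0: (s >= K).astype(float)

rng = np.random.default_rng(0)
r, vol, T, n, N = 0.05, 0.2, 1.0, 50, 10**6
h = T / n
S = np.full(N, 100.0)
for _ in range(n):                          # Euler scheme, Gaussian increments
    S += r * S * h + vol * S * rng.normal(0.0, np.sqrt(h), N)
print(np.exp(-r * T) * digital(S).mean())   # bias is still O(h) by Bally & Talay
```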

2.3 Implementation using Monte Carlo Methods

2.3.1 Quick review of the case without bias

• Assuming we know how to sample from $S_T$, the price $p_0 = E\big[e^{-rT} g(S_T)\big] < \infty$ is approximated by

$\hat{p}_0^N = e^{-rT} \frac{1}{N} \sum_{j=1}^{N} g(S_T^j)$ ,

where the $S_T^j$ are iid random variables with the same law as $S_T$. ↪ empirical mean.

• We observe that $\hat{p}_0^N$ is an unbiased estimator of $p_0$ and, importantly, we have the following result.

Theorem 2.5 (LLN). Since $(g(S_T^j))_j$ is a sequence of integrable iid random variables,

$\hat{p}_0^N \to p_0$ a.s.

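To fix ideas, here is a minimal sketch of the unbiased estimator $\hat{p}_0^N$ in a model where $S_T$ can be sampled exactly; the lognormal (Black–Scholes) dynamics and the parameter values are illustrative assumptions:

```python
import numpy as np

def mc_price_exact(S0, r, vol, T, g, N, seed=0):
    """Unbiased MC estimator: S_T is sampled exactly (lognormal law)."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(N)
    ST = S0 * np.exp((r - 0.5 * vol**2) * T + vol * np.sqrt(T) * Z)
    return np.exp(-r * T) * g(ST).mean()

# Example: call payoff with strike 100.
price = mc_price_exact(100.0, 0.05, 0.2, 1.0,
                       lambda s: np.maximum(s - 100.0, 0.0), 10**6)
print(price)
```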
• We need to assess the accuracy of the previous estimate. We can use the following $L^2$-estimate:

Theorem 2.6. Assume that $(g(S_T^j))_j$ is a sequence of square integrable iid random variables. We have that

$\mathrm{Var}\big[\hat{p}_0^N\big] = E\big[|\hat{p}_0^N - p_0|^2\big] = \frac{\mathrm{Var}(e^{-rT} g(S_T))}{N}$ .

• Remark: If $e^{-rT} g(S_T^1) \in L^2$, then

$\hat{V}_0^N := \frac{1}{N-1} \sum_{j=1}^{N} \big( e^{-rT} g(S_T^j) - \hat{p}_0^N \big)^2$

is an unbiased estimator of $\mathrm{Var}\big[e^{-rT} g(S_T)\big]$.

• We can also describe the distribution of $\hat{p}_0^N$, at least asymptotically.

Theorem 2.7 (CLT). Assume that $(g(S_T^j))_j$ is a sequence of square integrable iid random variables. We assume that $\mathrm{Var}[g(S_T)] > 0$ and set $\hat{\sigma}_0^N := \sqrt{\hat{V}_0^N}$; then

$\sqrt{N}\, 1_{\hat{\sigma}_0^N > 0}\, \frac{\hat{p}_0^N - p_0}{\hat{\sigma}_0^N} \to \mathcal{N}(0,1)$ in distribution.

• From this, we can deduce an asymptotic confidence interval:

Corollary 2.1. Under the assumptions of Theorem 2.7, we have

$P\Big( \sqrt{N}\, \frac{|\hat{p}_0^N - p_0|}{\hat{\sigma}_0^N} < z_{\alpha/2} \Big) \to 1 - \alpha$

where, for $\epsilon \in [0,1]$, $z_\epsilon$ denotes the $1-\epsilon$ quantile of the standard normal distribution, i.e. $1 - \epsilon := P(G < z_\epsilon)$ with $G \sim \mathcal{N}(0,1)$.

↪ This tells us that, for $N$ large enough, the probability that

$p_0 \in \Big[ \hat{p}_0^N - z_{\alpha/2} \frac{\hat{\sigma}_0^N}{\sqrt{N}},\ \hat{p}_0^N + z_{\alpha/2} \frac{\hat{\sigma}_0^N}{\sqrt{N}} \Big]$

is close to $1 - \alpha$.

• Using concentration inequalities, one can obtain non-asymptotic confidence intervals.
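A minimal sketch of the estimator, the sample variance $\hat{V}_0^N$ and the resulting asymptotic confidence interval; using scipy for the Gaussian quantile is an implementation assumption ($z_{0.025} \approx 1.96$):

```python
import numpy as np
from scipy.stats import norm

def mc_with_ci(samples, alpha=0.05):
    """Empirical mean, unbiased sample variance and asymptotic (1-alpha) CI."""
    N = len(samples)
    p_hat = samples.mean()
    v_hat = samples.var(ddof=1)        # unbiased estimator V_0^N of the variance
    z = norm.ppf(1.0 - alpha / 2.0)    # z_{alpha/2} (about 1.96 for alpha = 5%)
    half = z * np.sqrt(v_hat / N)
    return p_hat, (p_hat - half, p_hat + half)

# samples should contain the discounted payoffs e^{-rT} g(S_T^j); illustrative example:
rng = np.random.default_rng(0)
ST = 100.0 * np.exp(0.03 + 0.2 * rng.standard_normal(10**5))
samples = np.exp(-0.05) * np.maximum(ST - 100.0, 0.0)
print(mc_with_ci(samples))
```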

2.3.2 The case with bias

• BUT, in practice, we do not know how to sample from $S_T$, and we use a discretisation scheme, e.g. the Euler scheme $S_T^\pi$.

• The simulation of the Euler scheme is quite straightforward as soon as one knows how to simulate the Brownian increments!

• We then compute

$\hat{p}_0^{\pi,N} := e^{-rT} \frac{1}{N} \sum_{j=1}^{N} g(S_T^{\pi,j})$
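A minimal sketch of the biased estimator $\hat{p}_0^{\pi,N}$; the SDE $dS_t = r S_t\,dt + \sigma(S_t)\,dW_t$, the local-volatility coefficient and the parameters are illustrative assumptions:

```python
import numpy as np

def euler_estimator(S0, r, sigma, g, T, n_steps, N, seed=0):
    """Biased MC estimator: the Euler scheme S_T^pi replaces exact sampling."""
    rng = np.random.default_rng(seed)
    h = T / n_steps
    S = np.full(N, S0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h), N)   # Gaussian Brownian increments
        S = S + r * S * h + sigma(S) * dW     # one Euler step for all paths
    return np.exp(-r * T) * g(S).mean()

price = euler_estimator(100.0, 0.05, lambda s: 0.2 * s,
                        lambda s: np.maximum(s - 100.0, 0.0), 1.0, 50, 10**6)
print(price)
```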

• The error is decomposed into

$\hat{p}_0^{\pi,N} - p_0 = \epsilon_{MC} + \epsilon_w$ (2.23)

with $\epsilon_{MC} := \hat{p}_0^{\pi,N} - E\big[e^{-rT} g(S_T^\pi)\big]$

and $\epsilon_w := E\big[e^{-rT} g(S_T^\pi)\big] - E\big[e^{-rT} g(S_T)\big]$ .

↪ There is a balance to find between these two errors (see below).

• The study of the MC error can be done as previously, as soon as $E\big[|g(S_T^\pi)|^2\big] < \infty$.

2.3.3 Convergence of the Mean-Square Error for the MC method

• In estimating the quantity $p_0 = E\big[e^{-rT} g(S_T)\big]$, one faces a tradeoff between reducing the bias ($\epsilon_w$) and reducing the variance ($\epsilon_{MC}$).

• To take this into account, one tries to minimise the Mean-Square Error.

Definition 2.3. For a given quantity $\alpha$ estimated by $\hat{\alpha}$, we set

$MSE := E\big[|\alpha - \hat{\alpha}|^2\big]$

We observe

$MSE = |\alpha - E[\hat{\alpha}]|^2 + \mathrm{Var}[\hat{\alpha}] = \mathrm{bias}^2 + \mathrm{variance}$

Optimising the computational effort

• Assuming weak convergence of order 1, we have

$MSE \sim_c h^2 + \frac{1}{N}$ .

• The computational effort $C$:

1. for one path, the computational effort is $\sim_c 1/h$;

2. there are $N$ paths to simulate;

↪ overall $C \sim_c N/h$.

• The goal is to minimise the MSE taking into account the computational cost:

$\min_{h,N} \Big( c_1 h^2 + \frac{c_2}{N} \Big)$ s.t. $c_3 \frac{N}{h} = C$ .

• This leads to, setting $h^{-2} \sim_c N$,

$\sqrt{MSE} = O(C^{-1/3})$ .

• In other words, to reach a precision $\sqrt{MSE} = O(\epsilon)$, one has

$C = O(\epsilon^{-3})$ .
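Concretely, the balance $h^2 \sim 1/N$ suggests taking $h \sim \epsilon$ and $N \sim \epsilon^{-2}$ for a target precision $\epsilon$. A small helper sketch; the proportionality constants (set to 1) are arbitrary assumptions:

```python
import math

def mc_parameters(eps, T=1.0):
    """Pick the Euler step h ~ eps and sample size N ~ eps^-2 so that
    the squared bias and the variance contribute comparably to the MSE."""
    n_steps = max(1, math.ceil(T / eps))  # h = T/n_steps ~ eps
    N = math.ceil(eps ** -2)              # statistical error ~ 1/sqrt(N) ~ eps
    cost = N * n_steps                    # total work ~ eps^-3
    return n_steps, N, cost

for eps in (0.1, 0.05, 0.01):
    print(eps, mc_parameters(eps))
```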

2.4 Implementation using quantisation of Brownian increments

Discretisation of the Brownian increment ($d = 1$ for ease of presentation).

• $\Delta \hat{W}_i$ stands for a discrete approximation of $\Delta W_i := W_{t_{i+1}} - W_{t_i}$.

$\hat{X}_{t_0}^\pi = X_0$ , $\quad \hat{X}_{t_{i+1}}^\pi = \hat{X}_{t_i}^\pi + b(\hat{X}_{t_i}^\pi)\, h_i + \sigma(\hat{X}_{t_i}^\pi)\, \Delta \hat{W}_i$ , $0 \le i < n$ (2.24)

• Matching moment property up to order $M$: for all $k \le M$,

$E\big[(\Delta \hat{W}_i)^k\big] = E\big[(\Delta W_i)^k\big]$ . (2.25)

Proposition 2.2. Assume that $g$ and $u(t,\cdot)$ are $C_b^4$ (with bounds uniform in time), that $\Delta \hat{W}$ has the matching moment property up to order $M = 3$, and that $E\big[(\Delta \hat{W}_i)^4\big] = O(|\pi|^2)$. Then

$\hat{\epsilon}_w := \big| E\big[g(\hat{X}_T^\pi)\big] - E[g(X_T)] \big| \le C|\pi|$ . (2.26)

Example 2.1. Two-point discretisation (see the sketch below):

$P(\Delta \hat{W}_i = \pm\sqrt{h_i}) = \frac{1}{2}$ .

One observes that

$E\big[(\Delta \hat{W}_i)^2\big] = E\big[(\Delta W_i)^2\big] = h_i$ and $E\big[(\Delta \hat{W}_i)^{2k+1}\big] = E\big[(\Delta W_i)^{2k+1}\big] = 0$ , (2.27)

for all $k \ge 0$, by symmetry.
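A minimal sketch of the Euler scheme driven by the two-point increments of Example 2.1; the GBM coefficients and parameters are illustrative assumptions:

```python
import numpy as np

def euler_two_point(x0, b, sigma, T, n_steps, n_paths, seed=0):
    """Euler scheme where Delta W_i is replaced by +/- sqrt(h), each with prob. 1/2."""
    rng = np.random.default_rng(seed)
    h = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw_hat = np.sqrt(h) * rng.choice([-1.0, 1.0], size=n_paths)  # two-point law
        x = x + b(x) * h + sigma(x) * dw_hat
    return x

# Compare with the Gaussian-increment scheme: both have weak error O(|pi|).
b = lambda x: 0.05 * x
s = lambda x: 0.2 * x
g = lambda x: np.maximum(x - 100.0, 0.0)
xT = euler_two_point(100.0, b, s, 1.0, 50, 10**6)
print(np.exp(-0.05) * g(xT).mean())
```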

Proof. 1. We drop the $\pi$ in the proof and denote by $\bar{X}^{t,\xi}$ the Euler scheme with increment $\Delta W$ started from $\xi$ at time $t$. In particular, we shall use:

$\bar{X}_{t_{i+1}}^{t_i, \hat{X}_{t_i}} = \hat{X}_{t_i} + \Delta \bar{X}_i$ with $\Delta \bar{X}_i := \hat{b}_i h_i + \hat{\sigma}_i \Delta W_i$ ,

where $\hat{b}_i := b(\hat{X}_{t_i})$ and $\hat{\sigma}_i := \sigma(\hat{X}_{t_i})$. We observe that

$\hat{\epsilon}_w = \sum_{i=0}^{n-1} E\big[ u(t_{i+1}, \hat{X}_{t_{i+1}}) - u(t_i, \hat{X}_{t_i}) \big]$
$\quad = \sum_{i=0}^{n-1} E\big[ u(t_{i+1}, \hat{X}_{t_{i+1}}) - u(t_{i+1}, \bar{X}_{t_{i+1}}^{t_i, \hat{X}_{t_i}}) \big] + E\big[ u(t_{i+1}, \bar{X}_{t_{i+1}}^{t_i, \hat{X}_{t_i}}) - u(t_i, \hat{X}_{t_i}) \big]$

The second term has already been studied in the proof of Theorem 2.3. We just have to study the first one. We proceed by expanding the smooth function $u(t_{i+1}, \cdot)$ in terms of the increments of both Euler schemes. In what follows, we perform this for a generic $C_b^4$ function $\varphi$.

Introducing $\lambda \mapsto \varphi(\hat{X}_{t_i} + \lambda \Delta \hat{X}_i)$, with $\Delta \hat{X}_i := \hat{b}_i h_i + \hat{\sigma}_i \Delta \hat{W}_i$, and performing a Taylor expansion, we compute

$\varphi(\hat{X}_{t_i} + \Delta \hat{X}_i) = \sum_{k=0}^{3} \frac{(\Delta \hat{X}_i)^k}{k!} \varphi^{(k)}(\hat{X}_{t_i}) + \int_0^1 \varphi^{(4)}(\hat{X}_{t_i} + \lambda \Delta \hat{X}_i)\,(\Delta \hat{X}_i)^4\, \frac{(1-\lambda)^3}{6}\, d\lambda$ ,

and similarly

$\varphi(\bar{X}_{t_{i+1}}^{t_i, \hat{X}_{t_i}}) = \sum_{k=0}^{3} \frac{(\Delta \bar{X}_i)^k}{k!} \varphi^{(k)}(\hat{X}_{t_i}) + \int_0^1 \varphi^{(4)}(\hat{X}_{t_i} + \lambda \Delta \bar{X}_i)\,(\Delta \bar{X}_i)^4\, \frac{(1-\lambda)^3}{6}\, d\lambda$ .

Due to the matching moment property and the boundedness properties, we get

$E\big[\varphi(\hat{X}_{t_{i+1}})\big] = E\big[\varphi(\bar{X}_{t_{i+1}}^{t_i, \hat{X}_{t_i}})\big] + O(h_i^2)$ .

The proof is then concluded by observing that, from the previous expansion,

$E\big[ u(t_{i+1}, \hat{X}_{t_{i+1}}) - u(t_{i+1}, \bar{X}_{t_{i+1}}^{t_i, \hat{X}_{t_i}}) \big] = O(h_i^2)$ ,

and summing over $i$ yields $\hat{\epsilon}_w \le C|\pi|$.

Tree methods

• When using discrete random variables for the increment, it is possible to compute the approximation on a tree.

↪ the two-point approximation used in Example 2.1 leads to a binomial tree (see the sketch below)

• There is no statistical error, but the computation rapidly becomes intractable (unless special properties, such as recombination, are exploited).
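A minimal sketch of backward pricing on the recombining binomial tree generated by the two-point increments when $b = 0$ and $\sigma$ is constant (constant coefficients are precisely what makes the tree recombine; the payoff and parameters are illustrative):

```python
import numpy as np

def binomial_tree_price(x0, sigma, T, n_steps, g):
    """Backward induction u_i(x) = (u_{i+1}(x + s*sqrt(h)) + u_{i+1}(x - s*sqrt(h)))/2
    on the recombining lattice x0 + k*sigma*sqrt(h)."""
    h = T / n_steps
    dx = sigma * np.sqrt(h)
    x = x0 + dx * np.arange(-n_steps, n_steps + 1, 2)  # terminal layer (recombined)
    u = g(x)
    for _ in range(n_steps):
        u = 0.5 * (u[:-1] + u[1:])   # average the two children of each node
    return u[0]

print(binomial_tree_price(100.0, 20.0, 1.0, 200,
                          lambda x: np.maximum(x - 100.0, 0.0)))
```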

Link with finite difference schemes

• say $b = 0$, $\sigma$ constant: (2.8) ↪ backward heat equation

• The tree approximation gives, for time $t_i$, starting at $x$,

$\bar{u}_i(x) = E\big[\bar{u}_{i+1}(\hat{X}_{t_{i+1}}^{t_i,x})\big] = \frac{1}{2}\bar{u}_{i+1}(x + \sigma\sqrt{h}) + \frac{1}{2}\bar{u}_{i+1}(x - \sigma\sqrt{h})$

where $\bar{u}_i(\cdot)$ (resp. $\bar{u}_{i+1}(\cdot)$) stands for the approximation of $u_{t_i}(\cdot)$ (resp. $u_{t_{i+1}}(\cdot)$).

• Rearranging the terms, we obtain

$\frac{\bar{u}_i(x) - \bar{u}_{i+1}(x)}{h} = \frac{1}{2h}\Big( \bar{u}_{i+1}(x + \sigma\sqrt{h}) + \bar{u}_{i+1}(x - \sigma\sqrt{h}) - 2\bar{u}_{i+1}(x) \Big)$

which corresponds to the (backward explicit) finite difference scheme

$\frac{\bar{u}_i(x) - \bar{u}_{i+1}(x)}{h} = \frac{\sigma^2}{2\delta^2}\Big( \bar{u}_{i+1}(x + \delta) + \bar{u}_{i+1}(x - \delta) - 2\bar{u}_{i+1}(x) \Big)$

with space discretisation $\delta = \sigma\sqrt{h}$ (satisfying the stability condition $\frac{\sigma^2 h}{2\delta^2} \le \frac{1}{2}$).
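A minimal sketch of this backward explicit scheme on a truncated grid; the domain truncation and the frozen boundary values are assumptions for illustration:

```python
import numpy as np

def explicit_fd(g, sigma, T, n_steps, x_min, x_max, n_space):
    """Backward explicit scheme for u_t + 0.5*sigma^2*u_xx = 0 with u(T,.) = g."""
    h = T / n_steps
    x = np.linspace(x_min, x_max, n_space)
    delta = x[1] - x[0]
    lam = 0.5 * sigma**2 * h / delta**2      # must satisfy lam <= 1/2 (stability)
    assert lam <= 0.5, "stability condition violated"
    u = g(x)
    for _ in range(n_steps):
        interior = u[1:-1] + lam * (u[2:] + u[:-2] - 2.0 * u[1:-1])
        u = np.concatenate(([u[0]], interior, [u[-1]]))  # keep boundaries frozen
    return x, u

x, u = explicit_fd(lambda y: np.maximum(y - 100.0, 0.0),
                   20.0, 1.0, 400, 0.0, 200.0, 81)
print(u[40])   # approximation of u(0, 100) since x[40] == 100
```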

Figure 1: MC simulation for a digital (Gaussian increment)

Figure 2: MC simulation for a digital (discrete increment)

Figure 3: Bias for a digital (discrete increment) - no variance
