Proof. First, we note that, for $t \in [t_i, t_{i+1}]$,
$$\mathbb{E}\big[|f(X_t^\pi) - f(X_{t_i}^\pi)|^q\big]^{\frac{1}{q}} \le L_f \sqrt{h}. \qquad (2.19)$$
We set
$$L_i := \sup_{t \in [t_i, t_{i+1}]} \big\{ |\partial_x u(t,\cdot)|_\infty + |\partial^2_{xx} u(t,\cdot)|_\infty + |\partial^3_{xxx} u(t,\cdot)|_\infty \big\}$$
and denote by $L$ the Lipschitz constant of $b$ and $a$ (which, to simplify the proof, are assumed bounded).
We observe that
$$\epsilon_w = \sum_{i=0}^{n-1} \mathbb{E}\big[ u(t_{i+1}, X^\pi_{t_{i+1}}) - u(t_i, X^\pi_{t_i}) \big].$$
Applying Itô's formula, we have
$$\mathbb{E}\big[u(t_{i+1}, X^\pi_{t_{i+1}}) - u(t_i, X^\pi_{t_i})\big] = \int_{t_i}^{t_{i+1}} \mathbb{E}\Big[\partial_t u(t, X^\pi_t) + b(X^\pi_{t_i})\,\partial_x u(t, X^\pi_t) + \tfrac{1}{2}\, a(X^\pi_{t_i})\,\partial^2_{xx} u(t, X^\pi_t)\Big]\,\mathrm{d}t.$$
Since $u$ solves the PDE $\partial_t u + b\,\partial_x u + \tfrac{1}{2} a\,\partial^2_{xx} u = 0$, this rewrites as
$$\mathbb{E}\big[u(t_{i+1}, X^\pi_{t_{i+1}}) - u(t_i, X^\pi_{t_i})\big] = \int_{t_i}^{t_{i+1}} \mathbb{E}\Big[\{b(X^\pi_{t_i}) - b(X^\pi_t)\}\,\partial_x u(t, X^\pi_t) + \tfrac{1}{2}\{a(X^\pi_{t_i}) - a(X^\pi_t)\}\,\partial^2_{xx} u(t, X^\pi_t)\Big]\,\mathrm{d}t. \qquad (2.20)$$
For the first term in the RHS, we compute
$$\big|\mathbb{E}\big[\{b(X^\pi_{t_i}) - b(X^\pi_t)\}\,\partial_x u(t, X^\pi_t)\big]\big| = \big|\mathbb{E}\big[\{b(X^\pi_{t_i}) - b(X^\pi_t)\}\{\partial_x u(t, X^\pi_t) - \partial_x u(t, X^\pi_{t_i})\} + \{b(X^\pi_{t_i}) - b(X^\pi_t)\}\,\partial_x u(t, X^\pi_{t_i})\big]\big|$$
$$\le L L_i\, h_i + \Big|\mathbb{E}\Big[\partial_x u(t, X^\pi_{t_i}) \int_{t_i}^{t} \bar{\mathcal{L}}_{z = X^\pi_{t_i}}\, b(X^\pi_s)\,\mathrm{d}s\Big]\Big|,$$
where we used (2.19) and the Cauchy-Schwarz inequality to get the upper bound for the first term in the RHS, and Itô's formula applied to $b(X^\pi_\cdot)$ for the second. Conditioning with respect to $\mathcal{F}_{t_i}$ cancels the stochastic integral term. Using the assumptions on $b$, we get
$$\big|\mathbb{E}\big[\{b(X^\pi_{t_i}) - b(X^\pi_t)\}\,\partial_x u(t, X^\pi_{t_i})\big]\big| \le C_L\, h_i.$$
We thus obtain
$$\big|\mathbb{E}\big[\{b(X^\pi_{t_i}) - b(X^\pi_t)\}\,\partial_x u(t, X^\pi_t)\big]\big| \le C L_i\, h_i.$$
For the second term in (2.20), we perform similar computations and obtain the same upper bound. This leads to
$$|\epsilon_w| \le C \sum_{i=0}^{n-1} L_i\, h_i^2 \qquad (2.21)$$
$$\le C h. \qquad (2.22)$$
Extensions.
Theorem 2.4. If $u \in C^\infty$, then
$$\mathbb{E}[g(X^\pi_T)] = \mathbb{E}[g(X_T)] + \sum_{i=1}^{n} C_i h^i + O(h^{n+1}).$$
ii) Measurable terminal condition
• For the Euler scheme with Brownian increments and under a non-degeneracy condition on the diffusion coefficient, Bally & Talay [7] have proved that the weak error remains of order one, even for bounded measurable $g$.
2.3 Implementation using Monte Carlo Methods
• Assuming we know how to sample from $S_T$, the price $p_0 = \mathbb{E}\big[e^{-rT} g(S_T)\big] < \infty$ is approximated by
$$\hat{p}^N_0 = \frac{1}{N} \sum_{j=1}^{N} e^{-rT} g(S^j_T),$$
where the $S^j_T$ are iid random variables with the same law as $S_T$ $\hookrightarrow$ empirical mean.
Theorem 2.5. (LLN) Since $(g(S^j_T))_j$ is a sequence of integrable and iid random variables, then
$$\hat{p}^N_0 \to p_0 \quad \text{a.s.}$$
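As an illustration, the estimator $\hat{p}^N_0$ takes a few lines to implement. The sketch below assumes a Black-Scholes model (so that $S_T$ can be sampled exactly) with hypothetical parameters $s_0$, $r$, $\sigma$, $T$ and a digital payoff; none of these choices come from the lecture.

```python
import numpy as np

def mc_price(g, s0, r, sigma, T, N, rng):
    """Crude Monte Carlo estimate of p0 = E[exp(-rT) g(S_T)] when S_T can
    be sampled exactly -- here under a Black-Scholes model."""
    G = rng.standard_normal(N)
    ST = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * G)
    return np.exp(-r * T) * np.mean(g(ST))

# illustrative parameters (assumptions, not from the lecture)
rng = np.random.default_rng(0)
p_hat = mc_price(lambda s: (s >= 100.0).astype(float),
                 s0=100.0, r=0.01, sigma=0.2, T=1.0, N=100_000, rng=rng)
```

By Theorem 2.5, `p_hat` converges a.s. to the exact discounted digital price as $N \to \infty$.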
• We need to assess the accuracy of the previous estimate. We can use the following $L^2$-estimate:
Theorem 2.6. Assume that $(g(S^j_T))_j$ is a sequence of square integrable and iid random variables. We have that
$$\mathrm{Var}\big[\hat{p}^N_0\big] = \mathbb{E}\big[|\hat{p}^N_0 - p_0|^2\big] = \frac{\mathrm{Var}\big(e^{-rT} g(S_T)\big)}{N}.$$
• Remark: If $e^{-rT} g(S^1_T) \in L^2$, then
$$\hat{V}^N_0 := \frac{1}{N-1} \sum_{j=1}^{N} \big(e^{-rT} g(S^j_T) - \hat{p}^N_0\big)^2$$
is an unbiased estimator of $\mathrm{Var}\big[e^{-rT} g(S_T)\big]$.
• We can also describe the distribution of $\hat{p}^N_0$, at least asymptotically.
Theorem 2.7. (CLT) Assume that $(g(S^j_T))_j$ is a sequence of square integrable and iid random variables. We assume that $\mathrm{Var}[g(S_T)] > 0$ and set $\hat{\sigma}^N_0 := \sqrt{\hat{V}^N_0}$; then
$$\sqrt{N}\,\Big(\frac{\hat{p}^N_0 - p_0}{\hat{\sigma}^N_0}\Big)\,\mathbf{1}_{\hat{\sigma}^N_0 > 0} \to \mathcal{N}(0,1) \quad \text{in distribution.}$$
In particular,
$$\mathbb{P}\Big(\sqrt{N}\,\frac{|\hat{p}^N_0 - p_0|}{\hat{\sigma}^N_0} < z_{\frac{\alpha}{2}}\Big) \to 1 - \alpha,$$
where for $\epsilon \in [0,1]$, $z_\epsilon$ denotes the $1-\epsilon$ quantile of the standard normal distribution, i.e. $1-\epsilon := \mathbb{P}(G < z_\epsilon)$ with $G \sim \mathcal{N}(0,1)$.
$\hookrightarrow$ This tells us that, for $N$ large enough, the probability that
$$p_0 \in \Big[\hat{p}^N_0 - z_{\frac{\alpha}{2}}\, \frac{\hat{\sigma}^N_0}{\sqrt{N}},\; \hat{p}^N_0 + z_{\frac{\alpha}{2}}\, \frac{\hat{\sigma}^N_0}{\sqrt{N}}\Big]$$
is close to $1-\alpha$.
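In practice the confidence interval is computed directly from the sample. A minimal sketch (the model, parameters and payoff below are illustrative assumptions, not from the lecture):

```python
import numpy as np

def mc_with_ci(samples, z=1.96):
    """Empirical mean and asymptotic 95% confidence interval
    (z = z_{alpha/2} with alpha = 5%), as in Theorems 2.5-2.7."""
    n = len(samples)
    p_hat = samples.mean()
    sigma_hat = samples.std(ddof=1)   # sqrt of the unbiased estimator V^N_0
    half = z * sigma_hat / np.sqrt(n)
    return p_hat, (p_hat - half, p_hat + half)

# illustrative sample: discounted digital payoff under Black-Scholes, T = 1
rng = np.random.default_rng(1)
G = rng.standard_normal(200_000)
ST = 100.0 * np.exp((0.01 - 0.5 * 0.2**2) + 0.2 * G)
samples = np.exp(-0.01) * (ST >= 100.0)
p_hat, (lo, hi) = mc_with_ci(samples)
```

Note that the interval width shrinks like $1/\sqrt{N}$, consistently with Theorem 2.6.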
2.3.2 The case with bias
• BUT in practice, we do not know how to sample from $S_T$ and we use a discretisation scheme, e.g. the Euler scheme $S^\pi_T$.
• The simulation of the Euler scheme is quite straightforward as soon as one knows how to simulate the Brownian increments!
• We then compute
$$\hat{p}^{\pi,N}_0 := \frac{1}{N} \sum_{j=1}^{N} e^{-rT} g(S^{\pi,j}_T).$$
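A sketch of this simulation, assuming (hypothetically) a geometric Brownian motion for the underlying and a call payoff; coefficients and parameters are illustrative:

```python
import numpy as np

def euler_terminal(b, sigma, x0, T, n, N, rng):
    """Simulate N iid copies of the Euler scheme terminal value X_T^pi
    on a uniform grid with n steps h = T/n, Gaussian Brownian increments."""
    h = T / n
    x = np.full(N, x0)
    for _ in range(n):
        dW = np.sqrt(h) * rng.standard_normal(N)
        x = x + b(x) * h + sigma(x) * dW
    return x

# illustrative model: geometric Brownian motion, b(x) = r x, sigma(x) = 0.2 x
rng = np.random.default_rng(2)
XT = euler_terminal(lambda x: 0.01 * x, lambda x: 0.2 * x,
                    x0=100.0, T=1.0, n=50, N=100_000, rng=rng)
# biased estimator p^{pi,N}_0 for a call payoff g(s) = (s - K)^+
p_hat = np.exp(-0.01) * np.mean(np.maximum(XT - 100.0, 0.0))
```

Unlike the previous section, `p_hat` carries both a statistical error and a discretisation bias, which the next slides decompose.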
• The error is decomposed into
$$\hat{p}^{\pi,N}_0 - p_0 = \epsilon_{MC} + \epsilon_w \qquad (2.23)$$
with $\epsilon_{MC} := \hat{p}^{\pi,N}_0 - \mathbb{E}\big[e^{-rT} g(S^\pi_T)\big]$ and $\epsilon_w := \mathbb{E}\big[e^{-rT} g(S^\pi_T)\big] - \mathbb{E}\big[e^{-rT} g(S_T)\big]$.
• The study of the MC error can be done as previously, as soon as $\mathbb{E}\big[|g(S^\pi_T)|^2\big] < \infty$.
2.3.3 Convergence of the Mean-Square Error for the MC method
• In estimating the quantity $p_0 = \mathbb{E}\big[e^{-rT} g(S_T)\big]$, one faces a tradeoff between reducing the bias ($\epsilon_w$) and reducing the variance ($\epsilon_{MC}$).
• To take this into account, one tries to minimise the Mean-Square Error
$$MSE := \mathbb{E}\big[|\hat{\alpha} - \alpha|^2\big].$$
We observe
$$MSE = |\alpha - \mathbb{E}[\hat{\alpha}]|^2 + \mathrm{Var}[\hat{\alpha}] = \text{bias}^2 + \text{variance}.$$
Optimising the computational effort
$$MSE \sim_c h^2 + \frac{1}{N},$$
$\hookrightarrow$ overall cost $\mathcal{C} \sim_c N/h$.
• The goal is to minimise the MSE taking into account the computational cost:
$$\min_{h,N}\Big(c_1 h^2 + \frac{c_2}{N}\Big) \quad \text{s.t.} \quad \frac{c_3 N}{h} = \mathcal{C}.$$
This yields
$$\sqrt{MSE} = O(\mathcal{C}^{-\frac{1}{3}}).$$
• In other words, to reach a precision $\sqrt{MSE} = O(\epsilon)$, one has
$$\mathcal{C} = O(\epsilon^{-3}).$$
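The stated rate follows from a one-variable computation: substituting the constraint $N = \mathcal{C}h/c_3$ into the objective gives (a sketch, with $c_1, c_2, c_3$ as above):

```latex
\min_h \Big( c_1 h^2 + \frac{c_2 c_3}{\mathcal{C}\, h} \Big), \qquad
0 = 2 c_1 h^* - \frac{c_2 c_3}{\mathcal{C}\,(h^*)^2}
\;\Longrightarrow\;
h^* = \Big( \frac{c_2 c_3}{2 c_1 \mathcal{C}} \Big)^{1/3} \propto \mathcal{C}^{-1/3},
\qquad
N^* = \frac{\mathcal{C} h^*}{c_3} \propto \mathcal{C}^{2/3}.
```

At the optimum both terms of the MSE are of order $\mathcal{C}^{-2/3}$, hence $\sqrt{MSE} = O(\mathcal{C}^{-1/3})$.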
2.4 Implementation using quantisation of Brownian increments
The quantised increments $\Delta\hat{W}_i$ are required to match the moments of the Brownian increments:
$$\mathbb{E}\big[(\Delta\hat{W}_i)^k\big] = \mathbb{E}\big[(\Delta W_i)^k\big], \quad k = 1, \dots, M. \qquad (2.25)$$
Proposition 2.2. Assume that $g$ and $u(t,\cdot)$ are $C^4_b$ (with bounds uniform in time), that $\Delta\hat{W}$ has the matching moment property up to order $M = 3$, and that $\mathbb{E}\big[(\Delta\hat{W}_i)^4\big] = O(|\pi|^2)$; then
$$\hat{\epsilon}_w := \big|\mathbb{E}\big[g(\hat{X}^\pi_T)\big] - \mathbb{E}[g(X_T)]\big| \le C|\pi|. \qquad (2.26)$$
Example: the two-point quantisation
$$\mathbb{P}\big(\Delta\hat{W}_i = \pm\sqrt{h_i}\big) = \frac{1}{2}$$
satisfies
$$\mathbb{E}\big[(\Delta\hat{W}_i)^2\big] = \mathbb{E}\big[(\Delta W_i)^2\big] = h_i \quad \text{and} \quad \mathbb{E}\big[(\Delta\hat{W}_i)^{2k+1}\big] = \mathbb{E}\big[(\Delta W_i)^{2k+1}\big] = 0. \qquad (2.27)$$
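The matching property (2.27) is easy to check numerically. The sketch below compares the moments of the two-point law with the Gaussian moments of $\Delta W_i \sim \mathcal{N}(0, h_i)$ (zero for odd order, $h^{k/2}(k-1)!!$ for even order $k$); the value of $h$ is an arbitrary illustration.

```python
import numpy as np
from math import factorial

def two_point_moment(k, h):
    """k-th moment of the quantised increment P(dW_hat = +/- sqrt(h)) = 1/2."""
    s = np.sqrt(h)
    return 0.5 * s**k + 0.5 * (-s)**k

def gaussian_moment(k, h):
    """k-th moment of dW ~ N(0, h): zero for odd k, h^{k/2} (k-1)!! for even k."""
    if k % 2 == 1:
        return 0.0
    m = k // 2
    return h**m * factorial(k) / (factorial(m) * 2**m)

h = 0.01
# moments match up to order M = 3 ...
matched = [abs(two_point_moment(k, h) - gaussian_moment(k, h)) < 1e-15
           for k in (1, 2, 3)]
# ... and the fourth moment is h^2 = O(|pi|^2) (vs 3 h^2 for the Gaussian)
fourth = two_point_moment(4, h)
```

This is exactly the hypothesis set of Proposition 2.2 for the two-point quantisation.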
Proof. 1. We drop the $\pi$ in the proof and denote by $\bar{X}^{t,\xi}$ the Euler scheme with Gaussian increments $\Delta W$ started from $\xi$ at time $t$, so that
$$\bar{X}^{t_i, \hat{X}_{t_i}}_{t_{i+1}} = \hat{X}_{t_i} + \Delta\bar{X}_i \quad \text{with} \quad \Delta\bar{X}_i := \hat{b}_i h_i + \hat{\sigma}_i\, \Delta W_i$$
(where $\hat{b}_i := b(\hat{X}_{t_i})$ and $\hat{\sigma}_i := \sigma(\hat{X}_{t_i})$). We observe that
$$\hat{\epsilon}_w = \sum_{i=0}^{n-1} \mathbb{E}\big[u(t_{i+1}, \hat{X}_{t_{i+1}}) - u(t_i, \hat{X}_{t_i})\big] = \sum_{i=0}^{n-1} \mathbb{E}\big[u(t_{i+1}, \hat{X}_{t_{i+1}}) - u(t_{i+1}, \bar{X}^{t_i, \hat{X}_{t_i}}_{t_{i+1}})\big] + \mathbb{E}\big[u(t_{i+1}, \bar{X}^{t_i, \hat{X}_{t_i}}_{t_{i+1}}) - u(t_i, \hat{X}_{t_i})\big].$$
The second term has already been studied in the proof of Theorem 2.3. We just have to study the first one. We proceed by doing an expansion of the smooth function $u(t_{i+1}, \cdot)$ in terms of the increments of both Euler schemes.
Introducing $\varphi := u(t_{i+1}, \cdot)$ and performing a Taylor expansion, we compute
$$\varphi(\hat{X}_{t_i} + \Delta\hat{X}_i) = \sum_{k=0}^{3} \frac{(\Delta\hat{X}_i)^k}{k!}\, \varphi^{(k)}(\hat{X}_{t_i}) + \int_0^1 \varphi^{(4)}(\hat{X}_{t_i} + \lambda\Delta\hat{X}_i)\,(\Delta\hat{X}_i)^4\, \frac{(1-\lambda)^3}{6}\, \mathrm{d}\lambda,$$
and similarly
$$\varphi(\bar{X}^{t_i, \hat{X}_{t_i}}_{t_{i+1}}) = \sum_{k=0}^{3} \frac{(\Delta\bar{X}_i)^k}{k!}\, \varphi^{(k)}(\hat{X}_{t_i}) + \int_0^1 \varphi^{(4)}(\hat{X}_{t_i} + \lambda\Delta\bar{X}_i)\,(\Delta\bar{X}_i)^4\, \frac{(1-\lambda)^3}{6}\, \mathrm{d}\lambda.$$
The proof is then concluded by observing that, from the previous expansions,
$$\mathbb{E}\big[u(t_{i+1}, \hat{X}_{t_{i+1}}) - u(t_{i+1}, \bar{X}^{t_i, \hat{X}_{t_i}}_{t_{i+1}})\big] = O(h_i^2):$$
the terms of order $k \le 3$ have the same expectation by the matching moment property, and the remainder terms are $O(h_i^2)$ by the fourth-moment assumption.
Tree methods
• When using discrete random variables for the increments, it is possible to calculate the approximation on a tree.
• There is no statistical error, but the computation rapidly becomes intractable (unless special properties, such as recombination, are used).
• The tree approximation gives, for time $t_i$ and starting point $x$,
$$\bar{u}_i(x) = \mathbb{E}\big[\bar{u}_{i+1}(\hat{X}^{t_i,x}_{t_{i+1}})\big] = \frac{1}{2}\, \bar{u}_{i+1}(x + \sqrt{h}) + \frac{1}{2}\, \bar{u}_{i+1}(x - \sqrt{h}),$$
where $\bar{u}_i(\cdot)$ (resp. $\bar{u}_{i+1}(\cdot)$) stands for the approximation of $u(t_i, \cdot)$ (resp. $u(t_{i+1}, \cdot)$).
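The backward induction above can be coded on a recombining binomial tree. A minimal sketch for the normalised case shown in the formula (increments $\pm\sqrt{h}$, i.e. drift $0$ and unit diffusion; the payoffs are hypothetical examples):

```python
import numpy as np

def tree_value(g, T, n, x0):
    """Backward induction u_i(x) = 1/2 u_{i+1}(x + sqrt(h)) + 1/2 u_{i+1}(x - sqrt(h))
    on the recombining binomial tree with n steps of size h = T/n."""
    h = T / n
    s = np.sqrt(h)
    # terminal values on the recombining grid x0 + (2j - n) sqrt(h), j = 0..n
    u = g(x0 + (2.0 * np.arange(n + 1) - n) * s)
    for _ in range(n):              # one averaging pass per time step
        u = 0.5 * (u[1:] + u[:-1])
    return u[0]                     # value u_0(x0) at the root

v_lin = tree_value(lambda x: x, T=1.0, n=10, x0=5.0)    # martingale: equals x0
v_sq = tree_value(lambda x: x**2, T=1.0, n=10, x0=0.0)  # variance of the sum: T
```

Thanks to recombination, time step $i$ carries only $i+1$ nodes, so the total work is $O(n^2)$ instead of $O(2^n)$.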
Figure 1: MC simulation for a digital (gaussian increment)
Figure 2: MC simulation for a digital (discrete increment)
Figure 3: Bias for a digital (discrete increment) - no variance