
Stochastic Calculus for Finance


Solutions to Exercises

Chapter 1
Exercise 1.1: Show that for each n, the random variables K(1), . . . , K(n)
are independent.
Solution: Since the $K(r)$ have discrete distributions, the independence of $K(1),\dots,K(n)$ means that for each sequence $V_1,\dots,V_n$ with $V_i\in\{U,D\}$ we have
$$P(K(1)=V_1, K(2)=V_2,\dots,K(n)=V_n) = P(K(1)=V_1)\cdots P(K(n)=V_n).$$
Fix a sequence $V_1,\dots,V_n$. Start by splitting the interval $[0,1]$ into two intervals $I_U$, $I_D$ of length $\frac12$, $I_U = \{\omega: K(1)=U\}$, $I_D = \{\omega: K(1)=D\}$. Repeat the splitting for each interval at each stage. At stage two we have $I_U = I_{UU}\cup I_{UD}$, $I_D = I_{DU}\cup I_{DD}$, and the variable $K(2)$ is constant on each of these intervals. For example, $I_{UD} = \{\omega: K(1)=U,\ K(2)=D\}$. Using this notation we have
$$\{K(1)=V_1\} = I_{V_1},\quad \{K(1)=V_1, K(2)=V_2\} = I_{V_1,V_2},\ \dots,\ \{K(1)=V_1,\dots,K(n)=V_n\} = I_{V_1,\dots,V_n}.$$
The Lebesgue measure of $I_{V_1,\dots,V_n}$ is $\frac{1}{2^n}$, so that
$$P(K(1)=V_1,\dots,K(n)=V_n) = \frac{1}{2^n}.$$
From the definition of the $K(r)$ it follows directly that $P(K(1)=V_1)\cdots P(K(n)=V_n) = \frac{1}{2^n}$ as well, which proves the claimed equality.

Exercise 1.2: Redesign the random variables $K(n)$ so that $P(K(n)=U) = p\in(0,1)$, with $p$ arbitrary.
Solution: Given the probability space $(\Omega,\mathcal F,P) = ([0,1],\mathcal B([0,1]),m)$, where $m$ denotes the Lebesgue measure, we define a sequence of random variables $K(n)$, $n=1,2,\dots$, on $\Omega$.
First split $[0,1]$ into two subintervals: $[0,1] = I_U\cup I_D$, where $I_U$, $I_D$ are disjoint intervals with lengths $|I_U| = p$, $|I_D| = q$, $p+q=1$, with $I_U$ to the left of $I_D$. Now set
$$K(1,\omega) = \begin{cases} U & \text{if } \omega\in I_U \\ D & \text{if } \omega\in I_D.\end{cases}$$
Clearly $P(K(1)=U)=p$, $P(K(1)=D)=q$. Repeat the procedure separately on $I_U$ and $I_D$, splitting each into two subintervals in the proportion $p$ to $q$. Then $I_U = I_{UU}\cup I_{UD}$, $I_D = I_{DU}\cup I_{DD}$, with $|I_{UU}| = p^2$, $|I_{UD}| = pq$, $|I_{DU}| = qp$, $|I_{DD}| = q^2$. Repeating this recursive construction $r$ times we obtain intervals of the form $I_{\rho_1,\dots,\rho_r}$, with each $\rho_i$ either $U$ or $D$, and with length $p^lq^{r-l}$, where $l = \#\{i:\rho_i=U\}$.
Again set
$$K(r,\omega) = \begin{cases} U & \text{if } \omega\in I_{\rho_1,\dots,\rho_{r-1},U} \\ D & \text{if } \omega\in I_{\rho_1,\dots,\rho_{r-1},D}.\end{cases}$$
If the value $U$ appears $l$ times in a sequence $\rho_1,\dots,\rho_{r-1}$, then $|I_{\rho_1,\dots,\rho_{r-1},U}| = p\,p^lq^{r-1-l}$. There are $\binom{r-1}{l}$ different sequences $\rho_1,\dots,\rho_{r-1}$ having $U$ at exactly $l$ places. Then for $A_r = \{K(r)=U\}$ we find
$$P(A_r) = P(K(r)=U) = p\sum_{l=0}^{r-1}\binom{r-1}{l}p^lq^{r-1-l} = p(p+q)^{r-1} = p.$$
As a consequence $P(K(r)=D) = q$. The proof that the variables $K(1),\dots,K(n)$ are independent follows as in Exercise 1.1.
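The recursive interval splitting above can be carried out numerically. The following is a minimal sketch (the function names and the Monte Carlo check are illustrative, not taken from the text): given a sample point $\omega\in[0,1]$, it tracks the current interval and records $U$ or $D$ according to whether $\omega$ falls in the left $p$-fraction or the right $q$-fraction.

```python
import random

def K_sequence(omega, n, p):
    """K(1),...,K(n) for a sample point omega in [0,1], built by the
    recursive p:q interval splitting of Exercise 1.2 (illustrative sketch)."""
    a, b = 0.0, 1.0              # current interval I_{rho_1,...,rho_{r-1}}
    ks = []
    for _ in range(n):
        cut = a + p * (b - a)    # left part carries proportion p of the length
        if omega <= cut:         # omega lies in the 'U' sub-interval
            ks.append('U')
            b = cut
        else:                    # omega lies in the 'D' sub-interval
            ks.append('D')
            a = cut
    return ks

# Monte Carlo check that P(K(r) = U) is approximately p for every r
p, n, trials = 0.3, 5, 100_000
counts = [0] * n
for _ in range(trials):
    for r, k in enumerate(K_sequence(random.random(), n, p)):
        counts[r] += (k == 'U')
print([round(c / trials, 3) for c in counts])   # each entry close to 0.3
```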
Exercise 1.3: Find the filtration in $\Omega=[0,1]$ generated by the process $X(n,\omega) = 2\omega\,1_{[0,1-\frac1n]}(\omega)$.
Solution: Since $X(1)(\omega) = 0$ for all $\omega\in[0,1]$, we have $\mathcal F_{X(1)} = \{\emptyset,[0,1]\}$.
For any $B\subset\mathbb R$ and $\lambda\in\mathbb R$, let $\lambda B = \{\lambda\omega:\omega\in B\}$. Now for $k>1$ and $B\in\mathcal B(\mathbb R)$,
$$X(k)^{-1}(B) = \begin{cases}(\tfrac12B)\cap[0,1-\tfrac1k] & \text{if } 0\notin B \\ \big((\tfrac12B)\cap[0,1-\tfrac1k]\big)\cup(1-\tfrac1k,1] & \text{if } 0\in B.\end{cases}$$
Hence $\mathcal F_{X(k)} = \{A\cup E : A\in\mathcal B((0,1-\tfrac1k]),\ E\in\{\emptyset,\ \{0\}\cup(1-\tfrac1k,1]\}\}$.
Suppose $1\le k\le n$. If $C\in\mathcal F_{X(k)}$ and $C\in\mathcal B((0,1-\tfrac1k])$, then $C\in\mathcal B((0,1-\tfrac1n])\subset\mathcal F_{X(n)}$. If $C = A\cup\{0\}\cup(1-\tfrac1k,1]$ with $A\in\mathcal B((0,1-\tfrac1k])$, then $C = \big(A\cup(1-\tfrac1k,1-\tfrac1n]\big)\cup\{0\}\cup(1-\tfrac1n,1]\in\mathcal F_{X(n)}$, because $A\cup(1-\tfrac1k,1-\tfrac1n]\in\mathcal B((0,1-\tfrac1n])$.
In consequence $\mathcal F_{X(k)}\subset\mathcal F_{X(n)}$ for all $k\le n$. This implies $\mathcal F_n^X = \mathcal F_{X(n)}$.
Exercise 1.4: Working on $\Omega=[0,1]$ find (by means of concrete formulae and sketching the graphs) the martingale $E(Y|\mathcal F_n)$, where $Y(\omega) = \omega^2$ and $\mathcal F_n$ is generated by $X(n,\omega) = 2\omega\,1_{[0,1-\frac1n)}(\omega)$ (see Exercise 1.3).
Solution: According to Exercise 1.3 the natural filtration $\mathcal F_n$ of $X$ has the form $\mathcal F_n = \mathcal F_n^X = \mathcal F_{X(n)}$, so
$$\mathcal F_n = \{A\cup E : A\in\mathcal B((0,1-\tfrac1n]),\ E\in\{\emptyset,\ \{0\}\cup(1-\tfrac1n,1]\}\}.$$
Hence the restriction of $E(Y|\mathcal F_n)$ to the interval $(0,1-\frac1n]$ must be a $\mathcal B((0,1-\frac1n])$-measurable variable, and $E(Y|\mathcal F_n) = Y$ on $(0,1-\frac1n]$ satisfies Definition 1.9 for $A\subset(0,1-\frac1n]$. The restriction of $E(Y|\mathcal F_n)$ to the set $\{0\}\cup(1-\frac1n,1]$ must be measurable with respect to the $\sigma$-field $\{\emptyset,\ \{0\}\cup(1-\frac1n,1]\}$. Thus $E(Y|\mathcal F_n)$ has to be a constant function there: $E(Y|\mathcal F_n) = c$ on $\{0\}\cup(1-\frac1n,1]$.
Condition 2 of Definition 1.9 gives $\int_{(1-\frac1n,1]}c\,dP = \int_{(1-\frac1n,1]}\omega^2\,dP$. It follows that
$$E(Y|\mathcal F_n)(\omega) = c = 1-\frac1n+\frac{1}{3n^2}\quad\text{for }\omega\in(1-\tfrac1n,1],$$
while $E(Y|\mathcal F_n)(\omega) = \omega^2$ for $\omega\in(0,1-\tfrac1n]$.

Exercise 1.5: Show that the expectation of a martingale is constant in time. Find an example showing that constant expectation does not imply the martingale property.
Solution: Let $\mathcal N$ be the trivial $\sigma$-algebra, consisting of the $P$-null sets and their complements. For every integrable random variable $X$, $E(X|\mathcal N) = E(X)$. If $M$ is a martingale, then $E(M(n+1)|\mathcal F_n) = M(n)$ for all $n\ge0$. Using the tower property we obtain
$$E(M(n)) = E(M(n)|\mathcal N) = E\big(E(M(n+1)|\mathcal F_n)\,\big|\,\mathcal N\big) = E(M(n+1)|\mathcal N) = E(M(n+1)).$$
For the converse, if $X(n)$, $n\ge0$, is any sequence of integrable random variables, then the centred sequence $\tilde X(n) = X(n)-E(X(n))$ satisfies $E(\tilde X(n)) = E(X(n))-E(X(n)) = 0$ for all $n$, so its expectation is constant, yet $\tilde X$ need not be a martingale: for instance, if the $X(n)$ are i.i.d. and non-degenerate, then $E(\tilde X(n+1)|\mathcal F_n) = 0\ne\tilde X(n)$.
Exercise 1.6: Show that the martingale property is preserved under linear combinations with constant coefficients and adding a constant.
Solution: Let $X$, $Y$ be martingales with respect to the filtration $\mathcal F_n$ and fix $\alpha,\beta\in\mathbb R$. Define $Z = \alpha X+\beta Y$ and $U = X+\beta$. Then $E(|Z(n)|) = E(|\alpha X(n)+\beta Y(n)|)\le|\alpha|E(|X(n)|)+|\beta|E(|Y(n)|)<+\infty$, and $Z(n)$ is $\mathcal F_n$-measurable as a linear combination of $\mathcal F_n$-measurable variables. Finally, the linearity of conditional expectation gives
$$E(Z(n+1)|\mathcal F_n) = \alpha E(X(n+1)|\mathcal F_n)+\beta E(Y(n+1)|\mathcal F_n) = \alpha X(n)+\beta Y(n) = Z(n).$$
The process $U$ is the special case of $Z$ obtained with $\alpha=1$ and $Y(n)=1$ for all $n$ (a constant process is trivially a martingale), so $U$ is a martingale as well.

Exercise 1.7: Prove that if $M$ is a martingale, then for $m<n$,
$$M(m) = E(M(n)|\mathcal F_m).$$
Solution: Using the tower property $n-m-1$ times we obtain
$$M(m) = E(M(m+1)|\mathcal F_m) = E\big(E(M(m+2)|\mathcal F_{m+1})\,\big|\,\mathcal F_m\big) = E(M(m+2)|\mathcal F_m) = \dots = E(M(n)|\mathcal F_m).$$

Exercise 1.8: Let $M$ be a martingale with respect to the filtration generated by $L(n)$ (as defined for the random walk), and assume for simplicity $M(0)=0$. Show that there exists a predictable process $H$ such that $M(n) = \sum_{i=1}^nH(i)L(i)$ (i.e. $M(n) = \sum_{i=1}^nH(i)[Z(i)-Z(i-1)]$, where $Z(i) = \sum_{j=1}^iL(j)$). (We are justified in calling this result a representation theorem: each martingale is a discrete stochastic integral.)
Solution: Here the crucial point is that the random variables $L(n)$ have discrete distributions and the process $(M(n))_{n\ge0}$ is adapted to the filtration $\mathcal F_n^L$, which means that the $M(n)$, $n\ge0$, also have discrete distributions and $M(n)$ is constant on the sets of the partition $\mathcal P(L_1,\dots,L_n)$. From the formula $M(n) = \sum_{i=1}^nH(i)L(i)$ we obtain $M(n+1)-M(n) = H(n+1)L(n+1)$. Since $L^2(k) = 1$ a.s. for all $k\ge1$, we define $H(n+1) = [M(n+1)-M(n)]L(n+1)$ for $n\ge0$. To prove that $(H(n+1))_{n\ge0}$ is a predictable process we have to verify that $H(n+1)$ is $\mathcal F_n^L$-measurable. This is equivalent to the condition that $H(n+1)$ is constant on the sets of the partition $\mathcal P(L_1,\dots,L_n)$.
Write $A_{\epsilon_1,\dots,\epsilon_k} = \{\omega: L_1(\omega)=\epsilon_1,\dots,L_k(\omega)=\epsilon_k\}$, $\epsilon_j\in\{-1,1\}$. Then $\mathcal P(L_1,\dots,L_k) = \{A_{\epsilon_1,\dots,\epsilon_k}:\epsilon_j\in\{-1,1\}\}$ and $A_{\epsilon_1,\dots,\epsilon_k,-1}\cup A_{\epsilon_1,\dots,\epsilon_k,1} = A_{\epsilon_1,\dots,\epsilon_k}$. Moreover, $P(A_{\epsilon_1,\dots,\epsilon_k}) = \frac{1}{2^k}$, because the $L_j$ are i.i.d. random variables. Fix $n$ and a set $A_{\epsilon_1,\dots,\epsilon_n}$. Next, let $\beta_0 = M(n)(\omega)$ for $\omega\in A_{\epsilon_1,\dots,\epsilon_n}$, $\beta_{-1} = M(n+1)(\omega)$ for $\omega\in A_{\epsilon_1,\dots,\epsilon_n,-1}$, and $\beta_1 = M(n+1)(\omega)$ for $\omega\in A_{\epsilon_1,\dots,\epsilon_n,1}$.
Since $M$ is a martingale,
$$\int_{A_{\epsilon_1,\dots,\epsilon_n}}M(n)\,dP = \int_{A_{\epsilon_1,\dots,\epsilon_n}}M(n+1)\,dP,$$
and therefore $\beta_0P(A_{\epsilon_1,\dots,\epsilon_n}) = \beta_{-1}P(A_{\epsilon_1,\dots,\epsilon_n,-1})+\beta_1P(A_{\epsilon_1,\dots,\epsilon_n,1})$. From this and the relation $P(A_{\epsilon_1,\dots,\epsilon_n,-1}) = P(A_{\epsilon_1,\dots,\epsilon_n,1}) = \frac12P(A_{\epsilon_1,\dots,\epsilon_n})$ it follows that $2\beta_0 = \beta_{-1}+\beta_1$ or, equivalently, $-(\beta_{-1}-\beta_0) = \beta_1-\beta_0$. Using this equality we verify finally that
$$H(n+1)1_{A_{\epsilon_1,\dots,\epsilon_n}} = [M(n+1)-M(n)]L(n+1)1_{A_{\epsilon_1,\dots,\epsilon_n}} = (-1)(\beta_{-1}-\beta_0)1_{A_{\epsilon_1,\dots,\epsilon_n,-1}}+(\beta_1-\beta_0)1_{A_{\epsilon_1,\dots,\epsilon_n,1}} = (\beta_1-\beta_0)1_{A_{\epsilon_1,\dots,\epsilon_n}},$$
so that $H(n+1)$ is constant on $A_{\epsilon_1,\dots,\epsilon_n}$.
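The representation can be checked path by path. The sketch below (illustrative only; the particular martingale $M(n) = Z(n)^2-n$ is a choice made here, not prescribed by the exercise) computes $H(n+1) = (M(n+1)-M(n))L(n+1)$ along simulated paths and verifies $M(n) = \sum_{i\le n}H(i)L(i)$.

```python
import random

def check_representation(n_steps=20, trials=1000):
    """Verify M(n) = sum_i H(i) L(i) with H(i) = (M(i) - M(i-1)) * L(i),
    for the illustrative martingale M(n) = Z(n)^2 - n."""
    for _ in range(trials):
        L = [random.choice([-1, 1]) for _ in range(n_steps)]
        Z = [0]
        for step in L:
            Z.append(Z[-1] + step)
        M = [Z[k] ** 2 - k for k in range(n_steps + 1)]          # M(0) = 0
        H = [(M[k + 1] - M[k]) * L[k] for k in range(n_steps)]   # H(k+1)
        assert sum(H[k] * L[k] for k in range(n_steps)) == M[n_steps]

check_representation()
print("representation holds on all simulated paths")
```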


Exercise 1.9: Show that the process $Z^2(n)$, the square of a random walk, is not a martingale, by checking that $E(Z^2(n+1)|\mathcal F_n) = Z^2(n)+1$.
Solution: Assume, as before, that $L(k)$, $k=1,2,\dots$, are the steps of a symmetric random walk, $L(k)\in\{-1,1\}$, and set $L(0)=0$. The variables $(L(k))_{k\ge0}$ are independent, $Z(k+1) = Z(k)+L(k+1)$ for $k\ge0$, and $\mathcal F_k = \mathcal F_k^L$. Then $E(L(k))=0$, $E(L^2(k))=1$ for $k\ge1$; the variables $Z(k)$, $Z^2(k)$ are $\mathcal F_k$-measurable and the variables $L(k+1)$, $L^2(k+1)$ are independent of $\mathcal F_k$. Using the properties of conditional expectation we have
$$E(Z^2(n+1)|\mathcal F_n) = E\big((Z(n)+L(n+1))^2\,\big|\,\mathcal F_n\big) = E(Z^2(n)|\mathcal F_n)+2Z(n)E(L(n+1)|\mathcal F_n)+E(L^2(n+1)|\mathcal F_n)$$
(linearity, measurability)
$$= Z^2(n)+2Z(n)E(L(n+1))+E(L^2(n+1)) \quad\text{(measurability, independence)} = Z^2(n)+1 \quad\text{for } n\ge0.$$
Exercise 1.10: Show that if $X$ is a submartingale, then its expectations increase with $n$:
$$E(X(0))\le E(X(1))\le E(X(2))\le\dots,$$
and if $X$ is a supermartingale, then its expectations decrease as $n$ increases:
$$E(X(0))\ge E(X(1))\ge E(X(2))\ge\dots.$$
Solution: Since $X$ is a submartingale, $X(n)\le E(X(n+1)|\mathcal F_n)$ for all $n$. Taking expectations on both sides of this inequality we obtain
$$E(X(n))\le E\big(E(X(n+1)|\mathcal F_n)\big) = E(X(n+1)) \quad\text{for all } n.$$
For a supermartingale proceed similarly, with the inequalities reversed.
Exercise 1.11: Let $X(n)$ be a martingale (submartingale, supermartingale). For a fixed $m$ consider the sequence $X'(k) = X(m+k)-X(m)$, $k\ge0$. Show that $X'$ is a martingale (submartingale, supermartingale) relative to the filtration $\mathcal F'_k = \mathcal F_{m+k}$.
Solution: Let $X$ be a martingale (submartingale, supermartingale). Then $X(m)$ is an $\mathcal F_{m+k}$-measurable variable for all $m,k$. We have
$$E(X'(k+1)|\mathcal F'_k) = E(X(m+k+1)-X(m)|\mathcal F_{m+k}) = E(X(m+k+1)|\mathcal F_{m+k})-X(m)\ \ (=,\ \ge,\ \le)\ \ X(m+k)-X(m) = X'(k)$$
for all $k$.


Exercise 1.12: Prove the Doob decomposition for submartingales from first principles:
If $Y(n)$ is a submartingale with respect to some filtration, then there exist, for the same filtration, a martingale $M(n)$ and a predictable, increasing process $A(n)$ with $M(0)=A(0)=0$ such that
$$Y(n) = Y(0)+M(n)+A(n).$$
This decomposition is unique.
Solution: The process $Z(n) = Y(n)-Y(0)$, $n\ge0$, is a submartingale with $Z(0)=0$. Therefore we may assume $Y(0)=0$ without loss of generality. We prove the theorem using the principle of induction. For $n=1$, the decomposition formula would imply the relation
$$E(Y(1)|\mathcal F_0) = E(M(1)|\mathcal F_0)+E(A(1)|\mathcal F_0).$$
If this is to hold with $M$ a martingale and $A$ predictable, we must set
$$A(1) := E(Y(1)|\mathcal F_0)-M(0),$$
which shows that $A(1)$ is $\mathcal F_0$-measurable. To arrive at the decomposition formula we now define
$$M(1) := Y(1)-A(1).$$
$M(1)$ is $\mathcal F_1$-measurable because $Y(1)$ and $A(1)$ are. Moreover,
$$E(M(1)|\mathcal F_0) = E(Y(1)|\mathcal F_0)-E(A(1)|\mathcal F_0) = E(Y(1)|\mathcal F_0)-A(1) = M(0),$$
which completes the initial induction step.
Assume now that we have defined a martingale $M(k)$ and a predictable, increasing process $A(k)$, $k\le n$, such that $A(k)$ and $M(k)$ satisfy the decomposition formula for $Y(k)$, for all $k\le n$. Once again the decomposition formula for $k=n+1$ gives
$$E(A(n+1)|\mathcal F_n) = E(Y(n+1)|\mathcal F_n)-E(M(n+1)|\mathcal F_n).$$
Hence it is necessary to define
$$A(n+1) := E(Y(n+1)|\mathcal F_n)-M(n). \tag{0.1}$$
Having $A(n+1)$, to preserve the decomposition formula we set
$$M(n+1) := Y(n+1)-A(n+1). \tag{0.2}$$
Now we verify that $M(k)$, $A(k)$, $k\le n+1$, satisfy the conditions of the theorem. From (0.1), $A(n+1)$ is $\mathcal F_n$-measurable, because $M(n)$ is $\mathcal F_n$-measurable. Next, from (0.1) and the decomposition formula for $n$ we have
$$A(n+1) = E(Y(n+1)|\mathcal F_n)-M(n) = \big[E(Y(n+1)|\mathcal F_n)-Y(n)\big]+A(n)\ \ge\ A(n),$$
because $Y$ is a submartingale. Then $A(k)$ is an increasing, predictable process for $k\le n+1$.
From (0.2), $M(n+1)$ is $\mathcal F_{n+1}$-measurable and, since $A(n+1)$ is $\mathcal F_n$-measurable,
$$E(M(n+1)|\mathcal F_n) = E(Y(n+1)|\mathcal F_n)-E(A(n+1)|\mathcal F_n) = E(Y(n+1)|\mathcal F_n)-A(n+1) = M(n), \quad\text{by (0.1)}.$$
Thus $M(k)$, $k\le n+1$, is a martingale. By construction, the processes $M(k)$, $A(k)$, $k\le n+1$, satisfy the decomposition formula for $Y(k)$ for all $k\le n+1$. By the principle of induction we may deduce that the processes $A$ and $M$, given by (0.1), (0.2) for all $n$, satisfy the conditions of the theorem. The uniqueness is proved in the main text.
Exercise 1.13: Let $Z(n)$ be a random walk (see Example 1.2), $Z(0)=0$, $Z(n) = \sum_{j=1}^nL(j)$, $L(j)=\pm1$, and let $\mathcal F_n$ be the filtration generated by $L(n)$, $\mathcal F_n = \sigma(L(1),\dots,L(n))$. Verify that $Z^2(n)$ is a submartingale and find the increasing process $A$ in its Doob decomposition.
Solution: From relations (0.1), (0.2) we can give an explicit formula for the compensator $A$:
$$A(k) = E(Y(k)|\mathcal F_{k-1})-M(k-1) = E(Y(k)|\mathcal F_{k-1})-Y(k-1)+A(k-1).$$
Hence $A(k)-A(k-1) = E(Y(k)-Y(k-1)|\mathcal F_{k-1})$. Adding these equalities on both sides we obtain
$$A(n) = \sum_{k=1}^nE\big(Y(k)-Y(k-1)\,\big|\,\mathcal F_{k-1}\big), \quad\text{for } n\ge1. \tag{0.3}$$
By Exercise 1.9, $E(Z^2(n+1)|\mathcal F_n) = Z^2(n)+1\ge Z^2(n)$ when $Z(0)=0$, i.e. $Z^2$ is a submartingale. Next, using the formula (0.3) obtained from Exercise 1.12 we find
$$A(n) = \sum_{k=1}^nE\big(Z^2(k)-Z^2(k-1)\,\big|\,\mathcal F_{k-1}\big) = \sum_{k=1}^n\big[(Z^2(k-1)+1)-Z^2(k-1)\big] = n$$
for $n\ge1$.
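Since the compensator is $A(n)=n$, the martingale part $Z^2(n)-n$ has constant zero mean, i.e. $E(Z^2(n)) = n$. A minimal Monte Carlo sanity check of this fact (sample sizes here are arbitrary choices, not from the text):

```python
import random

def mean_square_walk(n, trials=200_000):
    """Estimate E[Z(n)^2] for the symmetric random walk; should be close to n."""
    total = 0.0
    for _ in range(trials):
        z = sum(random.choice([-1, 1]) for _ in range(n))
        total += z * z
    return total / trials

for n in (5, 10, 25):
    print(n, round(mean_square_walk(n), 2))   # each estimate close to n
```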
Exercise 1.14: Using the Doob decomposition, show that if $Y$ is a square-integrable submartingale (resp. supermartingale) and $H$ is predictable with bounded non-negative $H(n)$, then the stochastic integral of $H$ with respect to $Y$ is also a submartingale (resp. supermartingale).
Solution: Let $Y$ be a submartingale. Then by the Doob decomposition (Theorem 1.19) there exist a unique martingale $M$ and a predictable, increasing process $A$, with $M(0)=A(0)=0$, such that $Y(k) = Y(0)+M(k)+A(k)$ for $k\ge0$. Hence $Y(k)-Y(k-1) = [M(k)-M(k-1)]+[A(k)-A(k-1)]$. This relation gives the following representation for the stochastic integral of $H$ with respect to $Y$:
$$X(n+1) = \sum_{k=1}^{n+1}H(k)[Y(k)-Y(k-1)] = \sum_{k=1}^{n+1}H(k)[M(k)-M(k-1)]+\sum_{k=1}^{n+1}H(k)[A(k)-A(k-1)] = Z(n+1)+B(n+1), \quad n\ge0.$$
By Theorem 1.15, $Z(k)$, $k\ge1$, is a martingale. For the second term we have
$$E(B(n+1)|\mathcal F_n) = \sum_{k=1}^{n+1}E\big(H(k)[A(k)-A(k-1)]\,\big|\,\mathcal F_n\big) = \sum_{k=1}^{n+1}H(k)[A(k)-A(k-1)] = B(n)+H(n+1)[A(n+1)-A(n)]\ \ge\ B(n),$$
because by the predictability of $H$ and $A$ the random variables $H(k)[A(k)-A(k-1)]$ are $\mathcal F_n$-measurable for $k\le n+1$; also $H(k)\ge0$ and $A$ is an increasing process. Taking together the properties of $Z$ and $B$ we conclude that
$$E(X(n+1)|\mathcal F_n) = E(Z(n+1)|\mathcal F_n)+E(B(n+1)|\mathcal F_n)\ \ge\ Z(n)+B(n) = X(n).$$
This proves the claim for a submartingale.
If $Y$ is a supermartingale, then $-Y$ is a submartingale, so the above proof applies to $-Y$, which implies that the stochastic integral with respect to a supermartingale is again a supermartingale.
Exercise 1.15: Let $\tau$ be a stopping time relative to the filtration $\mathcal F_n$. Which of the random variables $\tau+1$, $\tau-1$, $\tau^2$ is a stopping time?
Solution: a) $\nu = \tau+1$: yes, because $\{\nu=n\} = \{\tau=n-1\}\in\mathcal F_{n-1}\subset\mathcal F_n$ for $n\ge1$, and $\{\nu=0\} = \emptyset\in\mathcal F_0$.
b) $\nu = \tau-1$: no, because we can only conclude that $\{\nu=n\} = \{\tau=n+1\}\in\mathcal F_{n+1}$ for $n\ge0$, so this set need not be in $\mathcal F_n$.
c) $\nu = \tau^2$: yes, because $\{\nu=n\} = \{\omega:\tau(\omega)=k\}\in\mathcal F_k\subset\mathcal F_n$ when $n=k^2$, $k\in\mathbb N$, while for $n\notin\{k^2:k\in\mathbb N\}$, $\{\nu=n\} = \emptyset\in\mathcal F_n$.
Exercise 1.16: Show that the constant random variable, $\tau(\omega)=m$ for all $\omega$, is a stopping time relative to any filtration.
Solution:
$$\{\tau=n\} = \begin{cases}\emptyset & \text{if } m\ne n \\ \Omega & \text{if } m=n,\end{cases}$$
so in both cases $\{\tau=n\}\in\mathcal F_n$ for all $n\in\mathbb N$.
Exercise 1.17: Show that if $\tau$ and $\nu$ are as in the Proposition, then $\min(\tau,\nu)$ is also a stopping time.
Solution: Use the condition (p. 15): for $g:\Omega\to\mathbb N$, $\{g=n\}\in\mathcal F_n$ for all $n\in\mathbb N$ if and only if $\{g\le n\}\in\mathcal F_n$ for all $n\in\mathbb N$. We have $\{\min(\tau,\nu)\le n\} = \{\tau\le n\}\cup\{\nu\le n\}\in\mathcal F_n$ for all $n$.
Exercise 1.18: Deduce the above theorem from Theorem 1.15 by considering $H(k) = 1_{\{\tau\ge k\}}$. (Let $M$ be a martingale. If $\tau$ is a stopping time, then the stopped process $M^\tau$ is also a martingale.)
Solution: Let $M$ be a martingale and $\tau$ a stopping time for the filtration $(\mathcal F_n)_{n\ge0}$. Take $Y(n) = M(n)-M(0)$ for $n\ge0$. Then $Y$ is also a martingale and $E(Y(0))=0$. Now write for $n\ge1$
$$Y^\tau(n,\omega) = Y(n\wedge\tau(\omega),\omega) = Y(1,\omega)+(Y(2,\omega)-Y(1,\omega))+\dots+\big(Y(n\wedge\tau(\omega),\omega)-Y(n\wedge\tau(\omega)-1,\omega)\big) = \sum_{k=1}^n1_{\{\tau\ge k\}}(\omega)\big(Y(k)-Y(k-1)\big).$$
Thus $Y^\tau$ can be written in the form $Y^\tau(n) = \sum_{k=1}^nH(k)(Y(k)-Y(k-1))$ where $H(k) = 1_{\{\tau\ge k\}}$. The process $H$ is a bounded predictable process because it is the indicator function of the set $\{\tau\ge k\} = \Omega\setminus\bigcup_{m=0}^{k-1}\{\tau=m\}\in\mathcal F_{k-1}$. By Theorem 1.15, $Y^\tau$ is a martingale, and hence so is $M^\tau = Y^\tau+M(0)$. In particular $E(Y^\tau(n)) = E(Y^\tau(0)) = E(Y(0)) = 0$, so $E(M^\tau(n)) = E(M(0))$ for all $n$.
Exercise 1.19: Using the Doob decomposition show that a stopped submartingale is a submartingale (and similarly for a supermartingale). Alternatively, use the above representation of the stopped process and use the definition to reach the same conclusions.
Solution: Use the form of the stopped process given in the proof of Proposition 1.30. Let $M$ be a submartingale (supermartingale) and $\tau$ a finite stopping time. From the form of $M^\tau$ we have
$$M^\tau(n+1) = \sum_{m<n+1}M(m)1_{\{\tau=m\}}+M(n+1)1_{\{\tau\ge n+1\}}.$$
Since each term on the right hand side is an integrable variable, $M^\tau(n+1)$ is also integrable. Now we can write
$$E(M^\tau(n+1)|\mathcal F_n) = \sum_{m<n+1}E\big(M(m)1_{\{\tau=m\}}\,\big|\,\mathcal F_n\big)+E\big(M(n+1)1_{\{\tau\ge n+1\}}\,\big|\,\mathcal F_n\big).$$
The variables $M(m)1_{\{\tau=m\}}$, $m<n+1$, and $1_{\{\tau\ge n+1\}} = 1-1_{\{\tau\le n\}}$ are $\mathcal F_n$-measurable, so
$$E(M^\tau(n+1)|\mathcal F_n) = \sum_{m<n+1}M(m)1_{\{\tau=m\}}+1_{\{\tau\ge n+1\}}E(M(n+1)|\mathcal F_n)\ \ge\ (\le)\ \sum_{m<n}M(m)1_{\{\tau=m\}}+M(n)1_{\{\tau=n\}}+1_{\{\tau\ge n+1\}}M(n)$$
($M$ is a sub-(super-)martingale)
$$= \sum_{m<n}M(m)1_{\{\tau=m\}}+M(n)1_{\{\tau\ge n\}} = M^\tau(n) \quad\text{for all } n\ge0.$$

Exercise 1.20: Show that $\mathcal F_\tau$ is a sub-$\sigma$-field of $\mathcal F$.
Solution: 1) $\Omega\in\mathcal F_\tau$ because $\Omega\cap\{\tau=n\} = \{\tau=n\}\in\mathcal F_n$ for all $n\ge0$.
2) Let $A$ belong to $\mathcal F_\tau$. This is equivalent to the condition $A\cap\{\tau=n\}\in\mathcal F_n$ for all $n$. Then $(\Omega\setminus A)\cap\{\tau=n\} = \{\tau=n\}\setminus(A\cap\{\tau=n\})\in\mathcal F_n$ for all $n$, because the $\mathcal F_n$ are $\sigma$-fields. The last condition means $\Omega\setminus A\in\mathcal F_\tau$.
3) Let $A_k$ belong to $\mathcal F_\tau$ for $k=1,2,\dots$. Then $A_k\cap\{\tau=n\}\in\mathcal F_n$ for all $n$. Hence it follows that $\big(\bigcup_{k=1}^\infty A_k\big)\cap\{\tau=n\} = \bigcup_{k=1}^\infty\big(A_k\cap\{\tau=n\}\big)\in\mathcal F_n$ for all $n$. This means $\bigcup_{k=1}^\infty A_k\in\mathcal F_\tau$.


Exercise 1.21: Show that if $\tau$, $\nu$ are stopping times with $\tau\le\nu$, then $\mathcal F_\tau\subset\mathcal F_\nu$.
Solution: Let $\tau$, $\nu$ be stopping times such that $\tau\le\nu$, and let $A\in\mathcal F_\tau$. According to the definition of $\mathcal F_\tau$, $A\cap\{\tau=m\}\in\mathcal F_m$ for $m=0,1,\dots$. Since $\tau\le\nu$, it holds that $\big(\bigcup_{m=0}^n\{\tau=m\}\big)\cap\{\nu\le n\} = \{\nu\le n\}$. Hence we have
$$A\cap\{\nu\le n\} = \bigcup_{m=0}^n\big((A\cap\{\tau=m\})\cap\{\nu\le n\}\big)\in\mathcal F_n,$$
since $\mathcal F_m\subset\mathcal F_n$ and $\{\nu\le n\}\in\mathcal F_n$. As $n$ was arbitrary, according to the condition (p. 15), $A\in\mathcal F_\nu$.
Exercise 1.22: Any stopping time $\tau$ is $\mathcal F_\tau$-measurable.
Solution: We have to prove $\{\tau\le k\}\in\mathcal F_\tau$ for each $k=0,1,\dots$. This is equivalent to the condition $\{\tau\le k\}\cap\{\tau=n\}\in\mathcal F_n$ for all $n,k$. We have
$$B = \{\tau\le k\}\cap\{\tau=n\} = \begin{cases}\emptyset & \text{if } k<n \\ \{\tau=n\} & \text{if } n\le k.\end{cases}$$
In both cases $B\in\mathcal F_n$. As a consequence $\{\tau\le k\}\in\mathcal F_\tau$.
Exercise 1.23: (Theorem 1.35 for supermartingales.) If $M$ is a supermartingale and $\tau\le\nu$ are bounded stopping times, then
$$E(M(\nu)|\mathcal F_\tau)\le M(\tau).$$
Solution: It is enough to prove that
$$\int_AE(M(\nu)|\mathcal F_\tau)\,dP = \int_AM(\nu)\,dP\ \le\ \int_AM(\tau)\,dP$$
for all $A\in\mathcal F_\tau$. We will prove the equivalent inequality $E\big(1_A(M(\nu)-M(\tau))\big)\le0$ for arbitrary $A\in\mathcal F_\tau$. From the proof of Theorem 1.35 we know that the variable $1_A(M(\nu)-M(\tau))$ can be written in the form $1_A(M(\nu)-M(\tau)) = X^\nu(N)$, where the process $X(n)$ is as follows:
$$X(n) = \sum_{k=1}^nH(k)\big(M(k)-M(k-1)\big), \quad X(0)=0,\ H(k) = 1_A1_{\{\tau<k\}},$$
and $N$ is a constant such that $\nu\le N$. Additionally $H$ is a bounded and predictable process. Now the assumption that $M$ is a supermartingale implies that $X$ is also a supermartingale (Exercise 1.14). Hence it follows by the results of Exercise 1.19 that the stopped process $X^\nu(n)$ is also a supermartingale. But for a supermartingale we have $E(X^\nu(N))\le E(X^\nu(0)) = E(X(0)) = 0$, which completes the proof.
Exercise 1.24: Suppose that, with $M(n)$ and $\lambda$ as in the Theorem, $(M(n))^p$ is integrable for some $p>1$. Show that we can improve (1.4) to read
$$P\big(\max_{k\le n}M(k)\ge\lambda\big)\ \le\ \frac{1}{\lambda^p}\int_{\{\max_{k\le n}M(k)\ge\lambda\}}M^p(n)\,dP\ \le\ \frac{1}{\lambda^p}E(M^p(n)).$$
Solution: If $p\ge1$ the function $x\mapsto x^p$, $x\ge0$, is a convex, nondecreasing function. As $M^p(n)$ is integrable, Jensen's inequality (see p. 10) and the fact that $M$ is a submartingale imply
$$E(M^p(n+1)|\mathcal F_n)\ \ge\ \big(E(M(n+1)|\mathcal F_n)\big)^p\ \ge\ M^p(n),$$
so $M^p$ is a submartingale. Applying Doob's maximal inequality (Theorem 1.36) to the event $\{\max_{k\le n}M(k)\ge\lambda\} = \{\max_{k\le n}M^p(k)\ge\lambda^p\}$ we obtain the result.
Exercise 1.25: Extend the above Lemma to $L^p$ for every $p>1$, to conclude that for non-negative $Y\in L^p$, and with its relation to $X\ge0$ as stated in the Lemma, we obtain $\|X\|_p\le\frac{p}{p-1}\|Y\|_p$. (Hint: the proof is similar to that given for the case $p=2$, and utilises the identity $p\int_0^\infty1_{\{z\ge x\}}x^{p-1}\,dx = z^p$.)
Note: The definition of the normed vector space $L^p$ is not given explicitly in the text, but is well-known: one may prove that if $p>1$ the map $X\mapsto(E(|X|^p))^{1/p} = \|X\|_p$ is a norm on the vector space of all $p$-integrable random variables (i.e. where $E(|X|^p)<\infty$), again with the proviso that we identify random variables that are a.s. equal. The Schwarz inequality in $L^2$ then extends to the Hölder inequality: $E(|XY|)\le\|X\|_p\|Y\|_q$ when $X\in L^p$, $Y\in L^q$, $\frac1p+\frac1q=1$.
Proof: The extension of Lemma 1.38 that we require is the following:
Assume that $X$, $Y$ are non-negative random variables and $Y$ is in $L^p(\Omega)$, $p>1$. Suppose that for all $x>0$,
$$xP(X\ge x)\ \le\ \int_\Omega1_{\{X\ge x\}}Y\,dP.$$
Then $X$ is in $L^p(\Omega)$ and
$$\|X\|_p = (E(X^p))^{\frac1p}\ \le\ \frac{p}{p-1}\|Y\|_p.$$
The proof is similar to that of Lemma 1.38. First consider the case when $X$ is bounded. The following formula is interesting on its own:
$$E(X^p) = \int_0^\infty px^{p-1}P(X\ge x)\,dx, \quad\text{for } p>0.$$
To prove it, substitute $z = X(\omega)$ in the equality $z^p = p\int_0^\infty1_{\{x\le z\}}(x)x^{p-1}\,dx$, and we obtain
$$E(X^p) = \int_\Omega X^p(\omega)\,dP(\omega) = \int_\Omega\Big(p\int_0^\infty1_{\{x\le X(\omega)\}}(x)x^{p-1}\,dx\Big)dP(\omega).$$
By Fubini's theorem
$$E(X^p) = p\int_0^\infty x^{p-1}\Big(\int_\Omega1_{\{X\ge x\}}(\omega)\,dP(\omega)\Big)dx = \int_0^\infty px^{p-1}P(X\ge x)\,dx.$$
Our hypothesis and Fubini's theorem once more give
$$E(X^p)\ \le\ p\int_0^\infty x^{p-2}\Big(\int_\Omega1_{\{X\ge x\}}(\omega)Y(\omega)\,dP(\omega)\Big)dx = p\int_\Omega\Big(\int_0^\infty1_{\{x<X(\omega)\}}(x)x^{p-2}\,dx\Big)Y(\omega)\,dP(\omega)$$
$$= p\int_\Omega\Big(\int_0^{X(\omega)}x^{p-2}\,dx\Big)Y(\omega)\,dP(\omega) = \frac{p}{p-1}\int_\Omega X^{p-1}(\omega)Y(\omega)\,dP(\omega)$$
for $p>1$. The Hölder inequality with $p$ and $q = \frac{p}{p-1}$ yields
$$\int_\Omega X^{p-1}Y\,dP\ \le\ \big(E\big((X^{p-1})^{\frac{p}{p-1}}\big)\big)^{\frac{p-1}{p}}\,(E(Y^p))^{\frac1p} = \|X\|_p^{p-1}\|Y\|_p.$$
The last two inequalities give $\|X\|_p^p = E(X^p)\le\frac{p}{p-1}\|X\|_p^{p-1}\|Y\|_p$. This is equivalent to our claim for $X$ bounded.
If $X$ is not bounded we can take $X_n = X\wedge n$. For $x\le n$ we have $\{X_n\ge x\} = \{X\ge x\}$, while for $x>n$ we have $P(X_n\ge x) = 0$, so in both cases
$$xP(X_n\ge x)\ \le\ \int_\Omega1_{\{X_n\ge x\}}Y\,dP.$$
As $X_n$ is bounded this gives $E(X_n^p)\le(\frac{p}{p-1})^pE(Y^p)$ for all $n\ge1$. The sequence $X_n^p$ increases to $X^p$ a.s., so the monotone convergence theorem implies $E(X^p)\le(\frac{p}{p-1})^pE(Y^p)$, and as a consequence $X\in L^p(\Omega)$.

Exercise 1.26: Find the transition probabilities for the binomial tree. Is it homogeneous?
Solution: From the definition of the binomial tree the behaviour of stock prices is described by a sequence of random variables $S(n) = S(n-1)(1+K(n))$, where $K(n,\omega) = U1_{A_n}(\omega)+D1_{[0,1]\setminus A_n}(\omega)$ and $S(0)$ is given, deterministic. As in Exercise 1.2 we have $P(K(n)=U) = 1-P(K(n)=D) = p$, $p\in(0,1)$, for $n\ge1$, and the variables $K(n)$ are independent. From the definition of $S(n)$, $S(n) = S(0)\prod_{i=1}^n(1+K(i))$. Then $\mathcal F_n^S = \sigma(S(1),\dots,S(n)) = \mathcal F_n^K = \sigma(K(1),\dots,K(n))$, and for any Borel function $f:\mathbb R\to\mathbb R$,
$$E(f(S(n+1))|\mathcal F_n^S) = E\big(f(S(n)(1+K(n+1)))\,\big|\,\mathcal F_n^K\big) = E\big(F(f)(S(n),K(n+1))\,\big|\,\mathcal F_n^K\big),$$
where $F(f)(x,y) = f(x(1+y))$. The variable $K(n+1)$ is independent of $\mathcal F_n^K$ and $S(n)$ is $\mathcal F_n^K$-measurable, so by Lemma 1.43 we have
$$E(f(S(n+1))|\mathcal F_n^S) = G(f)(S(n)),$$
where
$$G(f)(x) = E\big(F(f)(x,K(n+1))\big) = pf(x(1+U))+(1-p)f(x(1+D)).$$
Since $G(f)$ is a Borel function, the penultimate formula implies, by the definition of conditional expectation, that $E(f(S(n+1))|\mathcal F_n^S) = E(f(S(n+1))|\mathcal F_{S(n)})$. So the process $(S(n))_{n\ge0}$ has the Markov property. Taking $f = 1_B$ and $\pi_n(x,B) = G(1_B)(x)$ for Borel sets $B$ we see that, for every fixed $B$,
$$\pi_n(x,B) = p1_B(x(1+U))+(1-p)1_B(x(1+D))$$
is a measurable function of $x$, and for every fixed $x\in\mathbb R$, $\pi_n(x,\cdot)$ is a probability measure on $\mathcal B(\mathbb R)$. We also have
$$P(S(n+1)\in B|\mathcal F_{S(n)}) = E(1_B(S(n+1))|\mathcal F_{S(n)}) = \pi_n(S(n),B).$$
Thus the $\pi_n$ are transition probabilities of the Markov process $(S(n))_{n\ge0}$. This is a homogeneous Markov process, as the $\pi_n$ do not depend on $n$.
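The transition kernel $\pi(x,B) = p\,\delta_{x(1+U)}(B)+(1-p)\,\delta_{x(1+D)}(B)$ is easy to encode directly. A minimal sketch follows; the parameter values $U$, $D$, $p$ are illustrative choices, not taken from the text.

```python
import random

U, D, p = 0.2, -0.1, 0.6   # illustrative up/down returns and up-probability

def step(x):
    """Sample S(n+1) given S(n) = x; the law depends only on x, not on n (homogeneity)."""
    return x * (1 + U) if random.random() < p else x * (1 + D)

def transition_prob(x, in_B):
    """pi(x, B) for a Borel set B given as a membership test in_B."""
    return p * in_B(x * (1 + U)) + (1 - p) * in_B(x * (1 + D))

# Example: probability of moving from x = 100 into the set (105, infinity) in one step
print(transition_prob(100.0, lambda y: y > 105))   # equals p = 0.6
```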
Exercise 1.27: Show that the symmetric random walk is homogeneous.
Solution: According to its definition, a symmetric random walk is obtained by taking $Z(0)$ and setting $Z(n) = Z(n-1)+L(n)$, where the random variables $Z(0),L(1),\dots,L(n)$ are independent for every $n\ge1$. Moreover, $P(L(n)=1) = P(L(n)=-1) = \frac12$ (see Examples 1.4 and 1.46). Since $Z(n) = Z(0)+\sum_{i=1}^nL(i)$, we have $\mathcal F_n^Z = \sigma(Z(0),L(1),\dots,L(n))$.
For any bounded Borel function $f:\mathbb R\to\mathbb R$ we have $f(Z(n+1)) = f(Z(n)+L(n+1)) = F(f)(Z(n),L(n+1))$, where $F(f)(x,y) = f(x+y)$. The variable $L(n+1)$ is independent of $\mathcal F_n^Z$ and $Z(n)$ is $\mathcal F_n^Z$-measurable, so by Lemma 1.43 we obtain
$$E(f(Z(n+1))|\mathcal F_n^Z) = E\big(F(f)(Z(n),L(n+1))\,\big|\,\mathcal F_n^Z\big) = G(f)(Z(n)),$$
where
$$G(f)(x) = E\big(F(f)(x,L(n+1))\big) = E\big(f(x+L(n+1))\big) = \tfrac12\big(f(x+1)+f(x-1)\big).$$
The last two relations show that $G(f)$ is a Borel function and that the equality $E(f(Z(n+1))|\mathcal F_n^Z) = E(f(Z(n+1))|\mathcal F_{Z(n)})$ holds. Thus $(Z(n))_{n\ge0}$ is a Markov process. Taking $f = 1_B$,
$$\pi(x,B) = \tfrac12\big(1_B(x+1)+1_B(x-1)\big) = \tfrac12\big(\delta_{x+1}(B)+\delta_{x-1}(B)\big),$$
where $B$ is a Borel set and $x\in\mathbb R$, we obtain
$$P(Z(n+1)\in B|\mathcal F_{Z(n)}) = G(1_B)(Z(n)) = \pi(Z(n),B).$$
Again, $\pi(\cdot,B)$ is a measurable function for each Borel set $B$ and $\pi(x,\cdot)$ is a probability measure for every $x\in\mathbb R$, so we conclude that $\pi$ is a transition probability for the Markov process $(Z(n))_{n\ge0}$. Since $\pi$ does not depend on $n$, this process is homogeneous.
Exercise 1.28: Let $(Y(n))_{n\ge0}$ be a sequence of independent integrable random variables on $(\Omega,\mathcal F,P)$. Show that the sequence $Z(n) = \sum_{i=0}^nY(i)$ is a Markov process and calculate the transition probabilities, which may depend on $n$. Find a condition for $Z$ to be homogeneous.
Solution: From the definition $Z(n) = \sum_{i=0}^nY(i)$ follow the relations $\mathcal F_n^Z = \sigma(Z(0),\dots,Z(n)) = \sigma(Y(0),\dots,Y(n)) = \mathcal F_n^Y$ and $Z(n+1) = Z(n)+Y(n+1)$. For any bounded Borel function $f:\mathbb R\to\mathbb R$ we have
$$f(Z(n+1)) = f(Z(n)+Y(n+1)) = F(f)(Z(n),Y(n+1)),$$
where $F(f)(x,y) = f(x+y)$. The variable $Z(n)$ is $\mathcal F_n^Z$-measurable and, by our assumption, $(Y(i))_{i\ge0}$ is a sequence of independent variables, so $Y(n+1)$ is independent of $\mathcal F_n^Z$. Now using Lemma 1.43 we obtain
$$E(f(Z(n+1))|\mathcal F_n^Z) = E\big(F(f)(Z(n),Y(n+1))\,\big|\,\mathcal F_n^Z\big) = G_n(f)(Z(n)),$$
where
$$G_n(f)(x) = E\big(F(f)(x,Y(n+1))\big) = E\big(f(x+Y(n+1))\big) = \int_{\mathbb R}f(x+y)\,P_{Y(n+1)}(dy) \quad\text{for } n\ge0.$$
Here $P_{Y(n+1)}$ is the distribution of the random variable $Y(n+1)$. From the form $G_n(f)(x) = \int_{\mathbb R}f(x+y)\,P_{Y(n+1)}(dy)$ and the Fubini theorem, $G_n(f)$ is a measurable function. The equality $E(f(Z(n+1))|\mathcal F_n^Z) = G_n(f)(Z(n))$ implies that $E(f(Z(n+1))|\mathcal F_n^Z)$ is an $\mathcal F_{Z(n)}$-measurable function. Then from the definition of conditional expectation we have $E(f(Z(n+1))|\mathcal F_n^Z) = E(f(Z(n+1))|\mathcal F_{Z(n)})$ a.e. So the process $(Z(n))_{n\ge0}$ is a Markov process.
Putting $\pi_n(x,B) = G_n(1_B)(x)$, $n\ge0$, we see that for every Borel set $B$, $\pi_n(\cdot,B)$ is a measurable function. Next, denote $S_x(y) = x+y$. Of course $S_x$ is a Borel function for every $x$. From the definition of $\pi_n$ we have
$$\pi_n(x,B) = \int_{\mathbb R}1_B(S_x(y))\,P_{Y(n+1)}(dy) = \int_{\mathbb R}1_{S_x^{-1}(B)}(y)\,P_{Y(n+1)}(dy) = P_{Y(n+1)}\big(S_x^{-1}(B)\big).$$
This shows that for every $x\in\mathbb R$, $\pi_n(x,\cdot)$ is a probability measure. Finally,
$$P(Z(n+1)\in B|\mathcal F_{Z(n)}) = E(1_B(Z(n+1))|\mathcal F_{Z(n)}) = G_n(1_B)(Z(n)) = \pi_n(Z(n),B), \quad n\ge0.$$
Collecting together all these properties we conclude that the measures $\pi_n$, $n\ge0$, are the transition probabilities of the Markov process $(Z(n))_{n\ge0}$. From the definition of $\pi_n$ we see that if the distributions $P_{Y(n)}$ of the variables $Y(n)$ differ, then the $\pi_n$ differ and the process $Z(n)$ is not homogeneous. If for all $n$ the variables $Y(n)$ have the same distribution, that is $P_{Y(n)} = P_{Y(0)}$ for all $n$, then $\pi_n = \pi_0$ for all $n$ and the process $Z(n)$ is homogeneous.
Exercise 1.29: A Markov chain is homogeneous if and only if for every pair $i,j\in S$
$$P(X(n+1)=j|X(n)=i) = P(X(1)=j|X(0)=i) = p_{ij} \tag{0.4}$$
for every $n\ge0$.
Solution: A Markov chain $X(n)$, $n\ge0$, is homogeneous if for every Borel set $B$ and $n\ge0$ the equation $E(1_B(X(n+1))|\mathcal F_{X(n)}) = \pi(X(n),B)$ is satisfied, where $\pi$ is a fixed transition probability, not depending on $n$. In the discrete case, the variables $X(n)$, $n\ge0$, take values in a finite set $S = \{0,\dots,N\}$. The relation $1_B(X(n+1)) = \sum_{j\in B}1_{\{X(n+1)=j\}}$ and the additivity of the conditional expectation allow us to restrict attention to sets $B = \{j\}$, $j\in S$. Since here the conditional expectations are simple functions, constant on the sets $A_i^n = \{X(n)=i\}$, the condition that the process $X(n)$ be homogeneous is equivalent to
$$E\big(1_{\{X(n+1)=j\}}\,\big|\,\mathcal F_{X(n)}\big)\,1_{A_i^n} = \pi(i,\{j\})\,1_{A_i^n}$$
for every $i,j\in S$, $n\ge0$. Denoting $\pi(i,\{j\}) = p_{ij}$ we obtain that the last equalities are equivalent to the formulae $P(X(n+1)=j|X(n)=i) = p_{ij}$ for all $i,j\in S$ and $n\ge0$.
Coming back to the financial example (see p. 32) based on credit ratings, this means that the rating process of a country is a homogeneous Markov process if the rating at time $n+1$ depends only on the rating at time $n$ (it does not depend on the previous ratings) and the probabilities $p_{ij}$ of rating changes are the same for all times.
Exercise 1.30: Prove that the transition probabilities of a homogeneous Markov chain satisfy the so-called Chapman-Kolmogorov equation
$$p_{ij}(k+l) = \sum_{r\in S}p_{ir}(k)p_{rj}(l).$$
Proof: Denote by $\tilde P_k$ the matrix with entries $p_{ij}(k)$ and denote the entries of the $k$-th power $P^k$ of the transition matrix $P$ by $p_{ij}^{(k)}$, $i,j\in S$. We now claim that $\tilde P_k = P^k$ for all $k\ge1$, or equivalently $p_{ij}^{(k)} = p_{ij}(k)$ for all $i,j,k$.
To prove our claim we use the induction principle.
Step 1. If $k=1$, then $p_{ij}^{(1)} = p_{ij} = p_{ij}(1)$, so $\tilde P_1 = P$.
Step 2. The induction hypothesis: assume that for all $l\le m$, $\tilde P_l = P^l$.
Step 3. The inductive step. We will prove that $\tilde P_{m+1} = P^{m+1}$. We have
$$p_{ij}(m+1) = P(X(m+1)=j|X(0)=i) = \sum_{r\in S}P(X(m)=r|X(0)=i)\,P(X(m+1)=j|X(m)=r,X(0)=i) = \sum_{r\in S}p_{ir}(m)\,P(X(m+1)=j|X(m)=r,X(0)=i).$$
The following relations hold on the set $\{X(m)=r\}$:
$$P(X(m+1)=j|X(m)=r,X(0)=i) = E\big(1_{\{j\}}(X(m+1))\,\big|\,\mathcal F_{X(0),X(m)}\big)$$
$$= E\Big(E\big(1_{\{j\}}(X(m+1))\,\big|\,\mathcal F_m^X\big)\,\Big|\,\mathcal F_{X(0),X(m)}\Big) \quad\text{(tower property)}$$
$$= E\Big(E\big(1_{\{j\}}(X(m+1))\,\big|\,\mathcal F_{X(m)}\big)\,\Big|\,\mathcal F_{X(0),X(m)}\Big) \quad\text{(Markov property)}$$
$$= E\big(1_{\{j\}}(X(m+1))\,\big|\,\mathcal F_{X(m)}\big) \quad\text{(tower property)}$$
$$= P(X(m+1)=j|X(m)=r) = p_{rj}$$
($X$ is Markov and homogeneous, Exercise 1.29). Utilizing this result we obtain $p_{ij}(m+1) = \sum_{r\in S}p_{ir}(m)p_{rj}$ for all $i,j$. This equality means that $\tilde P_{m+1} = \tilde P_mP$. Hence by the induction assumption $\tilde P_{m+1} = P^mP = P^{m+1}$. Now by the Induction Principle $\tilde P_k = P^k$ for all $k\ge1$ and the claim is true.
Now our exercise is trivial. From the equality $P^{k+l} = P^kP^l$ it follows that $\tilde P_{k+l} = \tilde P_k\tilde P_l$. Writing out the last equation for the entries completes the proof.
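Since the $k$-step matrices are the powers $P^k$, the Chapman-Kolmogorov equation amounts to $P^{k+l} = P^kP^l$, which is easy to confirm numerically. The 3-state matrix below is an illustrative choice, not taken from the text.

```python
import numpy as np

P = np.array([[0.8, 0.15, 0.05],
              [0.1, 0.80, 0.10],
              [0.0, 0.20, 0.80]])   # illustrative transition matrix

k, l = 2, 3
lhs = np.linalg.matrix_power(P, k + l)                              # p_ij(k+l)
rhs = np.linalg.matrix_power(P, k) @ np.linalg.matrix_power(P, l)   # sum_r p_ir(k) p_rj(l)
print(np.allclose(lhs, rhs))                                        # True
```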


Chapter 2
Exercise 2.1: Show that scalings other than by the square root lead nowhere, by proving that for $X(n) = h^\alpha L(n)$ the sum $\sum_{n=1}^NX(n)$ converges to $0$ in $L^2$ when $\alpha>\frac12$, while for $\alpha\in(0,\frac12)$ this sequence goes to infinity in this space.
Solution: Since $L(n)$ has mean $0$ and variance $1$ for each $n$, we have, by independence and since $h = \frac1N$,
$$\Big\|\sum_{n=1}^NX(n)\Big\|_2^2 = \operatorname{Var}\Big(h^\alpha\sum_{n=1}^NL(n)\Big) = h^{2\alpha}\sum_{n=1}^N\operatorname{Var}(L(n)) = h^{2\alpha}N = h^{2\alpha-1}.$$
When $h\to0$, this goes to $0$ if $\alpha>\frac12$ and to $+\infty$ if $\alpha<\frac12$.

Exercise 2.2: Show that $\operatorname{Cov}(W(s),W(t)) = \min(s,t)$.
Solution: Since $E(W(t)) = 0$ and $E(W(t)-W(s))^2 = t-s$ for all $t\ge s\ge0$, we have $\operatorname{Cov}(W(s),W(t)) = E(W(s)W(t))$ and
$$t-s = E(W(t)-W(s))^2 = E(W^2(t))-2E(W(s)W(t))+E(W^2(s)) = t-2E(W(s)W(t))+s.$$
This equality implies the formula $E(W(s)W(t)) = s = \min(s,t)$.
Exercise 2.3: Consider $B(t) = W(t)-tW(1)$ for $t\in[0,1]$ (this process is called the Brownian bridge, since $B(0)=B(1)=0$). Compute $\operatorname{Cov}(B(s),B(t))$.
Solution: $E(B(r)) = E(W(r))-rE(W(1)) = 0$ for all $r\in[0,1]$, since $E(W(r)) = 0$ for all $r$. Then
$$\operatorname{Cov}(B(s),B(t)) = E\big[(W(s)-sW(1))(W(t)-tW(1))\big] = E(W(s)W(t))-sE(W(1)W(t))-tE(W(s)W(1))+stE(W^2(1))$$
$$= \min(s,t)-s\min(t,1)-t\min(s,1)+st = \min(s,t)-st = \begin{cases}s(1-t) & \text{if } s\le t\le1 \\ t(1-s) & \text{if } t\le s\le1.\end{cases}$$
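A quick Monte Carlo check of the formula $\operatorname{Cov}(B(s),B(t)) = \min(s,t)-st$ can be done by simulating $W$ on a grid; the grid and sample sizes below are illustrative choices, not from the text.

```python
import random, math

def bridge_cov(s, t, n_steps=100, trials=50_000):
    """Estimate Cov(B(s), B(t)) for B(t) = W(t) - t W(1), with s, t grid points."""
    dt = 1.0 / n_steps
    acc = 0.0
    for _ in range(trials):
        w = ws = wt = 0.0
        for k in range(1, n_steps + 1):
            w += random.gauss(0.0, math.sqrt(dt))   # W(k*dt)
            if k * dt <= s: ws = w
            if k * dt <= t: wt = w
        acc += (ws - s * w) * (wt - t * w)          # B(s) * B(t), since w = W(1)
    return acc / trials

print(round(bridge_cov(0.3, 0.7), 3), 0.3 * (1 - 0.7))   # both close to 0.09
```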

Exercise 2.4: Show directly from the definition that if $W$ is a Wiener process, then so are the processes given by $-W(t)$ and $\frac1cW(c^2t)$ for any $c>0$.
Solution: The process $-W(t)$ obviously satisfies Definition 2.4. We consider the process $Y(t) = \frac1cW(c^2t)$, $c>0$, $t\ge0$. It is known (see [PF]) that if, for two given random variables $U$, $V$ and every continuous bounded function $f:\mathbb R\to\mathbb R$, we have $E(f(U)) = E(f(V))$, then the distributions $P_U$ and $P_V$ of $U$ and $V$ are the same. First note that $P_{Y(t)} = P_{W(t)}$ for all $t\ge0$, since
$$E(f(Y(t))) = \int_\Omega f\Big(\frac1cW(c^2t)\Big)dP = \int_{\mathbb R}f\Big(\frac xc\Big)\frac{1}{\sqrt{2\pi c^2t}}e^{-\frac{x^2}{2c^2t}}dx \quad\text{($W(c^2t)$ has a normal distribution)}$$
$$= \int_{\mathbb R}f(y)\frac{1}{\sqrt{2\pi t}}e^{-\frac{y^2}{2t}}dy \quad\text{(change of variable $x=cy$)} = E(f(W(t))).$$
We verify the conditions of Definition 2.4. Condition 1 is obvious. For Condition 2 take $0\le s<t$ and $B\in\mathcal B(\mathbb R)$. Then
$$P\big((Y(t)-Y(s))\in B\big) = P\Big(\frac1c\big(W(c^2t)-W(c^2s)\big)\in B\Big) = P\big((W(c^2t)-W(c^2s))\in g_c^{-1}(B)\big) \quad\text{(where } g_c(x)=\tfrac xc\text{)}$$
$$= P\big(W(c^2t-c^2s)\in g_c^{-1}(B)\big) \quad\text{(Condition 2 for $W$: stationary increments)}$$
$$= P\Big(\frac1cW(c^2(t-s))\in B\Big) = P(Y(t-s)\in B) = P(W(t-s)\in B) = P\big((W(t)-W(s))\in B\big).$$
Thus $Y(t)-Y(s)$ and $W(t)-W(s)$ have the same distribution. For the third condition set $0\le t_1<\dots<t_m$. Then $0\le c^2t_1<\dots<c^2t_m$ and the increments $W(c^2t_1)-W(c^2t_0),\dots,W(c^2t_m)-W(c^2t_{m-1})$ are independent by independence of the increments of $W$. Hence the process $Y$ has independent increments. Finally, the paths of $Y$ are continuous for almost all $\omega$ because this holds for $W$.
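The distributional identity $Y(t) = \frac1cW(c^2t)\sim N(0,t)$ can also be checked by simulation. A small sketch (parameters are illustrative choices):

```python
import random, math

def sample_Y(t, c, trials=100_000):
    """Samples of Y(t) = (1/c) * W(c^2 t), with W(c^2 t) ~ N(0, c^2 t)."""
    return [random.gauss(0.0, math.sqrt(c * c * t)) / c for _ in range(trials)]

t, c = 2.0, 3.0
ys = sample_Y(t, c)
print(sum(y * y for y in ys) / len(ys))   # sample variance, close to t = 2.0
```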
Exercise 2.5: Apply the above Proposition to solve Exercise 2.4. In other words, use the following result to give alternative proofs of Exercise 2.4: if a Gaussian process $X$ has $X(0)=0$, constant expectations, a.s. continuous paths and $\operatorname{Cov}(X(s),X(t)) = \min(s,t)$, then it is a Wiener process.
Solution: The proof that $-W$ is again a Wiener process is clear, as it is Gaussian, a.s. continuous and has the right covariances. For the second part we prove two auxiliary claims:
1. If $X(t)$, $t\ge0$, is a Gaussian process and $b>0$, then $Y(t) = X(bt)$, $t\ge0$, is a Gaussian process.
2. If $Y(t)$, $t\ge0$, is a Gaussian process and $c>0$, then $Z(t) = \frac1cY(t)$, $t\ge0$, is a Gaussian process.
Proof of 1: Fix $0\le t_0<t_1<\dots<t_n$. Then the distribution of the vector of increments $(Y(t_1)-Y(t_0),\dots,Y(t_n)-Y(t_{n-1}))$ is the same as the distribution of the vector of increments $(X(s_1)-X(s_0),\dots,X(s_n)-X(s_{n-1}))$ where $s_i = bt_i$, $i=0,\dots,n$. But the last vector is Gaussian because $X$ is a Gaussian process. According to Definition 2.11, $Y$ is a Gaussian process.
For 2 we prove the following more general claim:
If $U$, $U^T = (U_1,\dots,U_n)^T$, is a Gaussian random vector with mean vector $\mu_U$ and covariance matrix $\Sigma_U$, and $A$ is a nonsingular $n\times n$ (real) matrix, then $V = AU$ is a Gaussian vector with mean vector $\mu_V = A\mu_U$ and covariance matrix $\Sigma_V = A\Sigma_UA^T$.
To prove this consider the mapping $\mathcal A(u) = Au$ for $u\in\mathbb R^n$. Then for every Borel set $B\in\mathcal B(\mathbb R^n)$ we have
$$P(V\in B) = P(AU\in B) = P\big(U\in\mathcal A^{-1}(B)\big) = \int_{\mathcal A^{-1}(B)}f_U(u)\,du,$$
where $f_U$ is the density of $U$ and $du = du_1\dots du_n$. Changing the variables, $u = A^{-1}v$, we obtain $P(V\in B) = \int_Bf_U(A^{-1}v)\,|\det(A^{-1})|\,dv$. From Definition 2.12 we conclude that
$$P(V\in B) = \int_B(2\pi)^{-\frac n2}(\det\Sigma_U)^{-\frac12}\exp\big\{-\tfrac12\big(A^{-1}(v-A\mu_U)\big)^T\Sigma_U^{-1}\big(A^{-1}(v-A\mu_U)\big)\big\}\,|\det A|^{-1}\,dv$$
$$= \int_B(2\pi)^{-\frac n2}\big(\det(A\Sigma_UA^T)\big)^{-\frac12}\exp\big\{-\tfrac12(v-A\mu_U)^T(A\Sigma_UA^T)^{-1}(v-A\mu_U)\big\}\,dv.$$
This formula shows that $V$ is a Gaussian vector with mean vector $\mu_V = A\mu_U$ and covariance matrix $\Sigma_V = A\Sigma_UA^T$.
Returning to Point 2 above, let $V$ be the vector of increments of the process $Z$, $V^T = (Z(t_1)-Z(t_0),\dots,Z(t_n)-Z(t_{n-1}))$, and $U$ the vector of increments of $Y$, $U^T = (Y(t_1)-Y(t_0),\dots,Y(t_n)-Y(t_{n-1}))$. Denote $A = \operatorname{diag}(\frac1c,\dots,\frac1c)$. Then $V = AU$ and $A$ is a non-singular matrix ($c>0$). Since $Y$ is Gaussian, $V$ is also a Gaussian vector. The proof of 2 is complete.
Now to solve our Exercise we verify the assumptions of Proposition 2.13. Our claims 1 and 2 show that the process $\widetilde W(t) = \frac1cW(c^2t)$, $t\ge0$, is a Gaussian process because $W$ has this property. Next, $E(\widetilde W(t)) = 0$ for all $t$ because $E(W(t)) = 0$ for each $t$. $\widetilde W$ has continuous paths because $W$ has continuous paths. For the last condition,
$$\operatorname{Cov}(\widetilde W(s),\widetilde W(t)) = \operatorname{Cov}\Big(\frac1cW(c^2s),\frac1cW(c^2t)\Big) = \frac{1}{c^2}\min(c^2s,c^2t) = \min(s,t).$$
By Proposition 2.13, $\widetilde W$ is a Wiener process.
Exercise 2.6: Show that the shifted Wiener process is again a Wiener process and that the inverted Wiener process satisfies conditions 2, 3 of the definition.
Solution: 1. Shifted process. Verify the conditions of Definition 2.4 for $W_u(t) = W(u+t)-W(u)$.
1. $W_u(0) = W(u)-W(u) = 0$.
2. For $0\le s<t$, $W_u(t)-W_u(s) = W(u+t)-W(u+s)$. Hence $W_u(t)-W_u(s)$ has a normal distribution with mean value $0$ and standard deviation $\sqrt{t-s}$.
3. For all $m$ and $0\le t_1<\dots<t_m$ the increments $W_u(t_{n+1})-W_u(t_n) = W(u+t_{n+1})-W(u+t_n)$, $n=1,\dots,m-1$, are independent because the increments of the Wiener process $W(u+t_{n+1})-W(u+t_n)$, $n=1,\dots,m-1$, are independent.
4. For almost all $\omega$ the paths of $W$ are continuous functions, hence so are the paths of $W_u$.
2. Inverted process. Consider the process $Y(t) = tW(\frac1t)$ for $t>0$, $Y(0)=0$. Since $Y(t) = \frac1cW(c^2t)$ for $t>0$ with $c = \frac1t$, by the previous Exercise the $Y(t)$ have normal distributions with $E(Y(t)) = 0$, $\operatorname{Var}Y(t) = t$ for $t>0$. To verify condition 2 of Definition 2.4, choose $0<s<t$. Then $0<\frac1t<\frac1s$ and
$$Y(t)-Y(s) = (-s)\Big(W\Big(\frac1s\Big)-W\Big(\frac1t\Big)\Big)+(t-s)W\Big(\frac1t\Big).$$
Since the increments $W(\frac1t)$, $W(\frac1s)-W(\frac1t)$ are independent Gaussian variables, the variables $(t-s)W(\frac1t)$ and $(-s)(W(\frac1s)-W(\frac1t))$ are also independent and Gaussian. Hence their sum $Y(t)-Y(s)$ also has a Gaussian distribution. Now $E(W(r)) = 0$ for all $r\ge0$ implies $E(Y(t)-Y(s)) = 0$. This lets us calculate the variance of $Y(t)-Y(s)$ as follows:
$$\sigma^2 = \operatorname{Var}(Y(t)-Y(s)) = s^2\operatorname{Var}\Big(W\Big(\frac1s\Big)-W\Big(\frac1t\Big)\Big)+(t-s)^2\operatorname{Var}\Big(W\Big(\frac1t\Big)\Big) = s^2\Big(\frac1s-\frac1t\Big)+(t-s)^2\frac1t = t-s.$$
To verify condition 3 of Definition 2.4 take $0<t_1<\dots<t_m$. It is necessary to prove that the components of the vector $Y_m$, $(Y_m)^T = (Y(t_1),Y(t_2)-Y(t_1),\dots,Y(t_m)-Y(t_{m-1}))^T$, are independent random variables. To obtain this property we prove that $Y_m$ has a Gaussian distribution and $\operatorname{Cov}(Y(s),Y(t)) = \min(t,s)$; these facts give the independence of the components of $Y_m$ (see the proof of Proposition 2.13). We have $0<\frac1{t_m}<\frac1{t_{m-1}}<\dots<\frac1{t_1}$. Hence the components of the vector $(Z_m)^T = \big(W(\frac1{t_m}),W(\frac1{t_{m-1}})-W(\frac1{t_m}),\dots,W(\frac1{t_1})-W(\frac1{t_2})\big)^T$ are independent and have normal distributions, as increments of a Wiener process. Then the vector $Z_m$ has a Gaussian distribution. Now it is easy to check the relation $Y_m = BAZ_m$, where $A$ is the $m\times m$ matrix with entries $A_{ik} = t_i$ for $k\le m-i+1$ and $0$ otherwise (so that $(Y(t_1),\dots,Y(t_m))^T = AZ_m$), and $B$ is the differencing matrix with $1$ on the diagonal, $-1$ directly below the diagonal and $0$ elsewhere.
Since $\det A = \pm t_1\cdots t_m\ne0$ and $\det B = 1\ne0$, we know by Exercise 2.5 that $Y_m$ is a Gaussian vector. Since the sequence $(t_i)$ was arbitrary, $Y(t)$, $t\ge0$, is a Gaussian process.
The last condition we have to verify is $\operatorname{Cov}(Y(t),Y(s)) = \min(t,s)$. Let $0<s\le t$. Then $\operatorname{Cov}(Y(t),Y(s)) = E\big(tW(\tfrac1t)\,sW(\tfrac1s)\big) = ts\min(\tfrac1t,\tfrac1s) = ts\cdot\tfrac1t = s = \min(s,t)$. From the proof of Proposition 2.13 the increments of the process $Y(t)$, $t\ge0$, are independent.

Exercise 2.7: Show that $X(t) = \sqrt tZ$ does not satisfy conditions 2, 3 of the definition of the Wiener process.
Solution: Assume $0\le s<t$. Then we have $X(t)-X(s) = (\sqrt t-\sqrt s)Z$. Hence $E(X(t)-X(s)) = 0$ and $\operatorname{Var}(X(t)-X(s)) = E(X(t)-X(s))^2 = (\sqrt t-\sqrt s)^2 = t+s-2\sqrt{ts}$. The last equality contradicts Condition 2 in Definition 2.4 of the Wiener process, which requires variance $t-s$.
To check condition 3 of Definition 2.4, consider the increments $X(t_{k+1})-X(t_k)$, $k=1,\dots,m-1$, where $t_{k+1} = (\sqrt{t_k}+1)^2$, $t_1\ge0$. Then $\sqrt{t_{k+1}}-\sqrt{t_k} = 1$ and
$$P\big(X(t_2)-X(t_1)\ge0,\dots,X(t_m)-X(t_{m-1})\ge0\big) = P(Z\ge0) = \frac12, \quad\text{while}\quad \prod_{k=1}^{m-1}P\big(X(t_{k+1})-X(t_k)\ge0\big) = P(Z\ge0)^{m-1} = \Big(\frac12\Big)^{m-1}.$$
So Condition 3 of Definition 2.4 is not satisfied.


Exercise 2.8: Prove the last claim, i.e. that if $X$, $Y$ have continuous paths and $Y$ is a modification of $X$, then these processes are indistinguishable.
Solution: Suppose $Y$ is a modification of $X$ and $X$ and $Y$ have continuous paths. Let $T_0 = \{t_k : k=1,2,\dots\}$ be a dense, countable subset of the time set $T$. We know that the sets $A_k = \{\omega : X(t_k,\omega) = Y(t_k,\omega)\}$, $k=1,2,\dots$, have $P(A_k) = 1$ or, equivalently, $P(\Omega\setminus A_k) = 0$. Now take the set $A = \bigcap_{k=1}^\infty A_k$. Since
$$P(\Omega\setminus A) = P\Big(\Omega\setminus\bigcap_{k=1}^\infty A_k\Big) = P\Big(\bigcup_{k=1}^\infty(\Omega\setminus A_k)\Big)\ \le\ \sum_{k=1}^\infty P(\Omega\setminus A_k) = 0,$$
we have $P(A) = 1$. If $\omega_0\in A$, then $\omega_0\in A_k$ for all $k=1,2,\dots$. This means that $X(t,\omega_0) = Y(t,\omega_0)$ for all $t\in T_0$. Since $X(\cdot,\omega_0)$ and $Y(\cdot,\omega_0)$ are continuous functions and $T_0$ is a dense subset of $T$, it follows that $X(t,\omega_0) = Y(t,\omega_0)$ for all $t\in T$. But $\omega_0$ was an arbitrary element of $A$ and $P(A) = 1$, so the processes $X$ and $Y$ are indistinguishable.
Exercise 2.9: Prove that if $M(t)$ is a martingale with respect to $\mathcal F_t$, then
$$E\big(M^2(t)-M^2(s)\,\big|\,\mathcal F_s\big) = E\big([M(t)-M(s)]^2\,\big|\,\mathcal F_s\big),$$
and in particular
$$E\big(M^2(t)-M^2(s)\big) = E\big([M(t)-M(s)]^2\big).$$
Solution: The first equality follows from the relations
$$E\big([M(t)-M(s)]^2\,\big|\,\mathcal F_s\big) = E(M^2(t)+M^2(s)|\mathcal F_s)-2E(M(s)M(t)|\mathcal F_s) \quad\text{(linearity)}$$
$$= E(M^2(t)+M^2(s)|\mathcal F_s)-2M(s)E(M(t)|\mathcal F_s) \quad\text{($M(s)$ is $\mathcal F_s$-measurable)}$$
$$= E(M^2(t)+M^2(s)|\mathcal F_s)-2M^2(s) \quad\text{($M$ is a martingale)} = E(M^2(t)-M^2(s)|\mathcal F_s).$$
The second equality follows from the first by the tower property.


Exercise 2.10: Consider a process $X$ on $\Omega = [0,1]$ with Lebesgue measure, given by $X(0,\omega) = 0$ and $X(t,\omega) = 1_{[0,\frac1t]}(\omega)$ for $t>0$. Find the natural filtration $\mathcal F_t^X$ for $X$.
Solution: The definitions of the probability space $([0,1],\mathcal B([0,1]),m)$, with $m$ the Lebesgue measure, and of the process $X$ yield
$$\mathcal F_{X(s)} = \begin{cases}\{\emptyset,[0,1]\} & \text{for } 0\le s\le1 \\ \{\emptyset,[0,1],[0,\tfrac1s],(\tfrac1s,1]\} & \text{for } s>1.\end{cases}$$
This implies $\mathcal F_t^X = \sigma\big(\bigcup_{0\le s\le t}\mathcal F_{X(s)}\big) = \{\emptyset,[0,1]\}$ for $0\le t\le1$. In the case $t>1$ all intervals $(\frac1{s_1},\frac1{s_2}] = (\frac1{s_1},1]\cap[0,\frac1{s_2}]$, where $0<s_2\le s_1\le t$, also belong to $\mathcal F_t^X$. Hence we must have $\mathcal B((\frac1t,1])\subset\mathcal F_t^X$ and then $[0,\frac1t]\in\mathcal F_t^X$. These conditions give $\mathcal F_t^X = \mathcal B((\frac1t,1])\cup\mathcal B'$, where $\mathcal B' = \{[0,1]\setminus A : A\in\mathcal B((\frac1t,1])\}$, because $\mathcal B((\frac1t,1])\cup\mathcal B'$ is a $\sigma$-field.
Exercise 2.11: Find $M(t) = E(Z|\mathcal F_t^X)$, where $\mathcal F_t^X$ is constructed in the previous exercise.
Solution: Let $Z$ be a random variable on the probability space $([0,1],\mathcal B([0,1]),m)$ such that $\int_0^1|Z|\,dm$ exists. We will calculate the conditional expectations of $Z$ with respect to the filtration $(\mathcal F_t^X)_{t\ge0}$ defined in Exercise 2.10.
From Exercise 2.10 we know that in the case $t>1$ every set $A\in\mathcal F_t^X$ either belongs to $\mathcal B((\frac1t,1])$ or is of the form $A = [0,\frac1t]\cup C$ where $C\in\mathcal B((\frac1t,1])$. Hence every $\mathcal F_t^X$-measurable variable, including $E(Z|\mathcal F_t^X)$, must be a constant function when restricted to the interval $[0,\frac1t]$, while restricted to $(\frac1t,1]$ it is a $\mathcal B((\frac1t,1])$-measurable function. Then from the definition of conditional expectation $\int_{[0,\frac1t]}Z\,dm = \int_{[0,\frac1t]}E(Z|\mathcal F_t^X)\,dm = c\cdot\frac1t$, and for every $A\in\mathcal B((\frac1t,1])$ we have $\int_AZ\,dm = \int_AE(Z|\mathcal F_t^X)\,dm$. The last equality implies $E(Z|\mathcal F_t^X) = Z$ on $(\frac1t,1]$. Finally, in the case $t>1$,
$$E(Z|\mathcal F_t^X)(\omega) = \begin{cases}t\int_0^{1/t}Z\,dm & \text{for }\omega\in[0,\tfrac1t] \\ Z(\omega) & \text{for }\omega\in(\tfrac1t,1]\end{cases}$$
a.e. In the case $t\le1$, $E(Z|\mathcal F_t^X) = E(Z)$ a.e.
Exercise 2.12: Is $Y(t,\omega) = t\omega-\frac12t$ a martingale ($\mathcal F_t^X$ as above)? Compute $E(Y(t))$.
Solution: It costs little to compute the expectation: $E(Y(t)) = \int_0^1(t\omega-\frac12t)\,d\omega = 0$. If the expectation were not constant, we would conclude that the process is not a martingale; however, constant expectation is just a necessary condition, so we have to investigate further. The martingale condition reads $E(Y(t)|\mathcal F_s^X) = Y(s)$. Consider $1<s<t$. The random variable on the left is $\mathcal F_s^X$-measurable, so since $[0,\frac1s]$ is an atom of this $\sigma$-field, it has to be constant on this event. However, $Y(s)$ is not constant there (being a non-constant linear function of $\omega$), so $Y$ is not a martingale for this filtration.
Exercise 2.13: Prove that for almost all paths of the Wiener process $W$ we have $\sup_{t\ge0}W(t) = +\infty$ and $\inf_{t\ge0}W(t) = -\infty$.
Solution: Set $Z = \sup_{t\ge0}W(t)$. Exercise 2.4 shows that for every $c>0$ the process $cW(\frac{t}{c^2})$ is also a Wiener process. Hence $cZ$ and $Z$ have the same distribution for all $c>0$, which implies that $P(0<Z<\infty) = 0$, and so the distribution of $Z$ is concentrated on $\{0,+\infty\}$. It therefore suffices to show that $P(Z=0) = 0$. Now we have
$$P(Z=0)\ \le\ P\Big(\{W(1)\le0\}\cap\bigcap_{u\ge1}\{W(u)\le0\}\Big)\ \le\ P\Big(\{W(1)\le0\}\cap\{\sup_{t\ge0}(W(1+t)-W(1)) = 0\}\Big),$$
since the process $Y(t) = W(1+t)-W(1)$ is also a Wiener process, so that its supremum is almost surely $0$ or $+\infty$. But $(Y(t))_{t\ge0}$ and $(W(t))_{t\in[0,1]}$ are independent, so
$$P(Z=0)\ \le\ P(W(1)\le0)\,P\big(\sup_tY(t) = 0\big) = P(W(1)\le0)\,P(Z=0)$$
(as $Y$ is a Wiener process, $\sup_{t\ge0}Y(t)$ has the same distribution as $Z$), and since $P(W(1)\le0) = \frac12<1$ it follows that $P(Z=0) = 0$. The second claim is now immediate, since $-W$ is also a Wiener process.
Exercise 2.14: Use Proposition 2.35 to complete the proof that the inversion of a Wiener process is a Wiener process, by verifying path-continuity at $t=0$.
Solution: We have to verify that the process
$$Y(t) = \begin{cases}tW(\tfrac1t) & \text{for } t>0 \\ 0 & \text{for } t=0\end{cases}$$
has almost all paths continuous at $0$. This follows from Proposition 2.35, since
$$tW\Big(\frac1t\Big) = \frac{W(\frac1t)}{\frac1t}\longrightarrow0 \quad\text{a.s. as } t\searrow0 \text{ (i.e. as } \tfrac1t\to\infty\text{)}.$$


Exercise 2.15: Let $(\tau_n)_{n\ge1}$ be a sequence of stopping times. Show that $\sup_n\tau_n$ and $\inf_n\tau_n$ are stopping times.
Erratum: The claim for the infimum as stated in the text is false in general. It requires right-continuity of the filtration, as shown in the proof below.
Solution: $\sup_n\tau_n$ is a stopping time because for all $t\ge0$, $\{\sup_n\tau_n\le t\} = \bigcap_n\{\tau_n\le t\}\in\mathcal F_t$ as an intersection of sets in the $\sigma$-field $\mathcal F_t$.
The case of $\inf_n\tau_n$ needs an additional assumption.
Definition. A filtration $(\mathcal F_t)_{t\in T}$ is called right continuous if $\mathcal F_{t+} = \bigcap_{s>t}\mathcal F_s = \mathcal F_t$.
We now prove the following auxiliary result (see also Lemma 2.48 in the text).
Claim. If a filtration $(\mathcal F_t)_{t\in T}$ is right continuous, then $\tau$ is a stopping time for $(\mathcal F_t)_{t\in T}$ if and only if for every $t$, $\{\tau<t\}\in\mathcal F_t$.
Proof. If $\tau$ is a stopping time, then for every $t$ and $n=1,2,\dots$, $\{\tau\le t-\frac1n\}\in\mathcal F_{t-\frac1n}\subset\mathcal F_t$. Hence $\{\tau<t\} = \bigcup_{n=1}^\infty\{\tau\le t-\frac1n\}\in\mathcal F_t$. Conversely, if $\{\tau<t\}\in\mathcal F_t$ for all $t$, then $\{\tau\le t\} = \bigcap_{n=1}^\infty\{\tau<t+\frac1n\}\in\mathcal F_{t+} = \mathcal F_t$.
This allows us to prove the desired result: if $(\tau_n)_{n\ge1}$ is a sequence of stopping times for a right continuous filtration $(\mathcal F_t)_{t\in T}$, then $\inf_n\tau_n$ is a stopping time for $(\mathcal F_t)_{t\in T}$.
Proof. According to the Claim, since the $\tau_n$ are stopping times, for every $t$ and $n$ we have $\{\tau_n<t\}\in\mathcal F_t$. Hence $\{\inf_n\tau_n<t\} = \bigcup_n\{\tau_n<t\}\in\mathcal F_t$ for all $t$, which, again by the Claim and the right-continuity of the filtration, shows that $\inf_n\tau_n$ is a stopping time.
Exercise 2.16: Verify that $\mathcal F_\tau$ is a $\sigma$-field when $\tau$ is a stopping time.
Solution: 1. Because $\mathcal F_t$ is a $\sigma$-field, $\Omega\cap\{\tau\le t\} = \{\tau\le t\}\in\mathcal F_t$ for all $t$. Then by the definition of $\mathcal F_\tau$, $\Omega\in\mathcal F_\tau$.
2. If $A\in\mathcal F_\tau$, then $A\cap\{\tau\le t\}\in\mathcal F_t$ for all $t$. Hence $(\Omega\setminus A)\cap\{\tau\le t\} = \{\tau\le t\}\setminus(A\cap\{\tau\le t\})\in\mathcal F_t$ (both sets are in $\mathcal F_t$). Since $t$ was arbitrary, $\Omega\setminus A\in\mathcal F_\tau$.
3. If $A_k$, $k=1,2,\dots$, belong to $\mathcal F_\tau$, then $A_k\cap\{\tau\le t\}\in\mathcal F_t$ for all $t$. Now $\big(\bigcup_{k=1}^\infty A_k\big)\cap\{\tau\le t\} = \bigcup_{k=1}^\infty\big(A_k\cap\{\tau\le t\}\big)\in\mathcal F_t$ since $\mathcal F_t$ is a $\sigma$-field. Since $t$ was arbitrary, $\bigcup_{k=1}^\infty A_k\in\mathcal F_\tau$.
Points 1, 2, 3 imply that $\mathcal F_\tau$ is a $\sigma$-field.

Exercise 2.17: Show that if $\tau\le\nu$ then $\mathcal F_\tau\subset\mathcal F_\nu$, and that $\mathcal F_{\tau\wedge\nu} = \mathcal F_\tau\cap\mathcal F_\nu$.
Solution: If $A\in\mathcal F_\tau$ then $A\cap\{\tau\le t\}\in\mathcal F_t$ for all $t$. From the assumption $\tau\le\nu$ it follows that $\{\nu\le t\}\subset\{\tau\le t\}$ and hence $\{\nu\le t\} = \{\tau\le t\}\cap\{\nu\le t\}$ for all $t$. Now $A\cap\{\nu\le t\} = (A\cap\{\tau\le t\})\cap\{\nu\le t\}\in\mathcal F_t$, as $\nu$ is a stopping time and $\mathcal F_t$ is a $\sigma$-field. Thus $A\in\mathcal F_\nu$. For the equality $\mathcal F_{\tau\wedge\nu} = \mathcal F_\tau\cap\mathcal F_\nu$, note that by the previous result the relations $\tau\wedge\nu\le\tau$, $\tau\wedge\nu\le\nu$ imply $\mathcal F_{\tau\wedge\nu}\subset\mathcal F_\tau\cap\mathcal F_\nu$. For the reverse inclusion take $A\in\mathcal F_\tau\cap\mathcal F_\nu$; hence $A\cap\{\tau\le t\}\in\mathcal F_t$ and $A\cap\{\nu\le t\}\in\mathcal F_t$ for all $t$. Since $\{\tau\wedge\nu\le t\} = \{\tau\le t\}\cup\{\nu\le t\}$, we have $A\cap\{\tau\wedge\nu\le t\} = (A\cap\{\tau\le t\})\cup(A\cap\{\nu\le t\})\in\mathcal F_t$ for all $t$, because $\mathcal F_t$ is a $\sigma$-field. $A$ was an arbitrary set, so $\mathcal F_\tau\cap\mathcal F_\nu\subset\mathcal F_{\tau\wedge\nu}$ and hence the result follows.
Exercise 2.18: Let $W$ be a Wiener process. Show that the natural filtration is left-continuous: for each $t\ge0$ we have $\mathcal F_t^W = \sigma\big(\bigcup_{s<t}\mathcal F_s^W\big)$. Deduce that if $\tau_n\nearrow\tau$, where the $\tau_n$ are $\mathcal F_t^W$-stopping times, then $\sigma\big(\bigcup_{n\ge1}\mathcal F_{\tau_n}^W\big) = \mathcal F_\tau^W$.
Solution: Proof of the first statement: For any $s>0$ the $\sigma$-field $\mathcal F_s^W$ is generated by sets of the form $A = \{(W(u_1),W(u_2),\dots,W(u_n))\in B\}$ where $B\in\mathcal B(\mathbb R^n)$ and $(u_i)$ is a partition of $[0,s]$. Now fix $t>0$. By continuity of the paths of $W$ we know that $W(t) = \lim_{s_m\nearrow t}W(s_m)$ a.s., so each such generating set $A$ of $\mathcal F_t^W$ belongs to $\sigma\big(\bigcup_{m=1}^\infty\mathcal F_{s_m}^W\big)\subset\sigma\big(\bigcup_{s<t}\mathcal F_s^W\big)$. So this $\sigma$-field contains the generators of $\mathcal F_t^W$, hence contains $\mathcal F_t^W$. The opposite inclusion is true for any filtration $(\mathcal F_t)_{t\ge0}$, since $\mathcal F_s\subset\mathcal F_t$ for $s<t$ gives $\sigma\big(\bigcup_{s<t}\mathcal F_s\big)\subset\mathcal F_t$.
Erratum: The second statement should be deleted. The claim holds only for quasi-left-continuous filtrations, which involves concepts well beyond the scope of this text. (See Dellacherie-Meyer, Probabilities and Potential, Vol. 2, Theorem 83, p. 217.)

Exercise 2.19: Show that if $X(t)$ is Markov then for any $0\le t_0<t_1<\dots<t_N\le T$ the sequence $(X(t_n))_{n=0,\dots,N}$ is a discrete-time Markov process.
Solution: We have to verify that the discrete process $(X(t_n))$, $n=0,1,\dots,N$, is a discrete Markov process with respect to the filtration $(\mathcal F_{t_n})$, $n=0,1,\dots,N$. Let $f$ be a bounded Borel function $f:\mathbb R\to\mathbb R$. Since $X$ is a Markov process, it follows that $E(f(X(t_{n+1}))|\mathcal F_{t_n}) = E(f(X(t_{n+1}))|\mathcal F_{X(t_n)})$ for all $n$. But this means that $(X(t_n))_n$ is a Markov chain (a discrete-parameter Markov process).
Exercise 2.20: Let $W$ be a Wiener process. Show that for $x\in\mathbb R$, $t\ge0$, $M(t) = \exp\{ixW(t)+\frac12x^2t\}$ defines a martingale with respect to the natural filtration of $W$. (Recall from [PF] that expectations of complex-valued random variables are defined by taking the expectations of their real and imaginary parts separately.)
Solution: Recall that $Z:\Omega\to\mathbb C$, where $\mathbb C$ is the set of complex numbers, is a complex-valued random variable if $Z = X_1+iX_2$ and $X_1$, $X_2$ are real-valued random variables. $Z$ has mean value $EZ$ if $X_1$, $X_2$ have mean values, and then $EZ = EX_1+iEX_2$. If $\mathcal G$ is a sub-$\sigma$-field of $\mathcal F$ we also set $E(Z|\mathcal G) = E(X_1|\mathcal G)+iE(X_2|\mathcal G)$; likewise $(Z(t))_t$ is a complex-valued process (martingale) if $X_1$, $X_2$ are real processes (real martingales for the same filtration) and $Z(t) = X_1(t)+iX_2(t)$.
Now for Exercise 2.20 take $0\le s<t$ and denote $F(u,v) = e^{ix(u+v)+\frac12x^2t}$ for $u,v\in\mathbb R$. Let $\operatorname{Re}F(u,v) = F_1(u,v)$ and $\operatorname{Im}F(u,v) = F_2(u,v)$ be the real and imaginary parts of $F(u,v)$. Write $Y = W(t)-W(s)$, $X = W(s)$, $\mathcal F_s^W = \mathcal G$. We prove that $(M(t))_t$ is a martingale for $(\mathcal F_t^W)_t$. Using our notation we can write
$$E(M(t)|\mathcal F_s^W) = E\big(F(W(s),W(t)-W(s))\,\big|\,\mathcal F_s^W\big) = E(F_1(X,Y)|\mathcal G)+iE(F_2(X,Y)|\mathcal G).$$
The variable $Y$ is independent of the $\sigma$-field $\mathcal G$, $X$ is $\mathcal G$-measurable, and the mappings $F_1$, $F_2$ are measurable (continuous) and bounded. So we have $E(F_1(X,Y)|\mathcal G) = G_1(X)$, $E(F_2(X,Y)|\mathcal G) = G_2(X)$, where $G_1(u) = E(F_1(u,Y))$, $G_2(u) = E(F_2(u,Y))$. Setting $G = G_1+iG_2$ we have the formula $E(M(t)|\mathcal F_s^W) = G(X)$, where
$$G(u) = E(F(u,Y)) = E\big(e^{ix(u+Y)+\frac12x^2t}\big) = e^{ixu+\frac12x^2t}E(e^{ixY}).$$
Since the distribution of $Y$ is the same as that of $W(t-s)$ and $E(e^{ixW(t-s)})$ is nothing other than the value at $x$ of the characteristic function of an $N(0,t-s)$-distributed random variable, we obtain $E(e^{ixY}) = e^{-\frac12(t-s)x^2}$ [PF]. Hence $G(u) = e^{ixu+\frac12sx^2}$. Finally, $E(M(t)|\mathcal F_s^W) = e^{ixW(s)+\frac12sx^2} = M(s)$. Since $0\le s<t$ were arbitrary, $(M(t))_t$ is a martingale.
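A consequence of the martingale property is $E(M(t)) = M(0) = 1$, which can be checked numerically. A minimal sketch (the values of $x$, $t$ and the sample size are illustrative):

```python
import cmath, math, random

x, t, trials = 1.3, 2.0, 200_000
acc = 0j
for _ in range(trials):
    w = random.gauss(0.0, math.sqrt(t))              # W(t) ~ N(0, t)
    acc += cmath.exp(1j * x * w + 0.5 * x * x * t)   # M(t)
print(acc / trials)                                  # close to 1 + 0j
```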


Chapter 3
Exercise 3.1: Prove that $W$ and $W^2$ are in $\mathcal M^2$.
Solution: $W$, $W^2$ are measurable (continuous, adapted) processes. Since $E(W^2(t)) = t$ and $E(W^4(t)) = 3t^2$ (see [PF]), by the Fubini theorem we obtain
$$E\Big(\int_0^TW^2(t)\,dt\Big) = \int_0^TE(W^2(t))\,dt = \int_0^Tt\,dt = \frac{T^2}{2}<\infty,$$
$$E\Big(\int_0^TW^4(t)\,dt\Big) = \int_0^TE(W^4(t))\,dt = \int_0^T3t^2\,dt = T^3<\infty.$$

Exercise 3.2: Prove that in general $I(f)$ does not depend on a particular representation of $f$.
Solution: A sequence $0 = t_0<t_1<\dots<t_n = T$ is called a partition of the interval $[0,T]$. We will denote this partition by $(t_i)$. A partition $(u_k)$ of $[0,T]$ is a refinement of the partition $(t_i)$ if the inclusion $\{t_i\}\subset\{u_k\}$ holds. Let $f$ be a simple process on $[0,T]$, $f\in\mathcal S^2$, and let $(t_i)$ be a partition of $[0,T]$ compatible with $f$. The latter means that $f$ can be written in the form
$$f(t,\omega) = \xi_0(\omega)1_{\{0\}}(t)+\sum_{i=0}^{n-1}\xi_i(\omega)1_{(t_i,t_{i+1}]}(t),$$
where $\xi_i$ is $\mathcal F_{t_i}$-measurable and $\xi_i\in L^2(\Omega)$. Note that $f(t_{i+1}) = \xi_i$ for $i\ge0$. To emphasize the presence of the partition in the definition of the integral of $f$ we also write $I(f) = I_{(t_i)}(f)$. If a partition $(u_k)$ is a refinement of a partition $(t_i)$ and $(t_i)$ is compatible with $f$, then $(u_k)$ is also compatible with $f$. This is because for each $i$ there exists $k(i)$ such that $t_i = u_{k(i)}$. Since $f(t) = f(t_{i+1}) = \xi_i$ for $t_i<t\le t_{i+1}$, it follows that $f(u_k) = f(t_{i+1}) = \xi_i$ for all $k$ with $t_i = u_{k(i)}<u_k\le u_{k(i+1)} = t_{i+1}$, and the value of $f$ on each sub-interval $(u_k,u_{k+1}]$, namely $\xi_i$, is $\mathcal F_{u_k}$-measurable since $\mathcal F_{t_i}\subset\mathcal F_{u_k}$. Additionally,
$$\xi_01_{\{0\}}(t)+\sum_{k=0}^{p-1}f(u_{k+1})1_{(u_k,u_{k+1}]}(t) = \xi_01_{\{0\}}(t)+\sum_{i=0}^{n-1}\ \sum_{k(i)\le k<k(i+1)}\xi_i1_{(u_k,u_{k+1}]}(t) = f(t).$$
Now for the integral of $f$ we have
$$I_{(u_k)}(f) = \sum_{k=0}^{p-1}f(u_{k+1})\big(W(u_{k+1})-W(u_k)\big) = \sum_{i=0}^{n-1}\ \sum_{k(i)\le k<k(i+1)}\xi_i\big(W(u_{k+1})-W(u_k)\big) = \sum_{i=0}^{n-1}\xi_i\big(W(t_{i+1})-W(t_i)\big) = I_{(t_i)}(f).$$
Returning to our exercise, let $(t_i)$ and $(s_j)$ be partitions of $[0,T]$ compatible with $f$. We can construct the partition $(v_k)$, where $\{v_k\} = \{t_i\}\cup\{s_j\}$ and the elements $v_k$ are ordered as real numbers. Then the partition $(v_k)$ is a refinement of both partitions $(t_i)$ and $(s_j)$. By the previous results we have $I_{(t_i)}(f) = I_{(v_k)}(f) = I_{(s_j)}(f)$. Thus the Itô integral of a simple process is independent of the representation of that process.
Exercise 3.3: Give a proof for the general case (i.e., linearity of the integral for simple functions).
Solution: We prove two implications.
1. If $f\in\mathcal S^2$ and $\alpha\in\mathbb R$, then $\alpha f\in\mathcal S^2$ and $I(\alpha f) = \alpha I(f)$.
2. If $f,g\in\mathcal S^2$, then $f+g\in\mathcal S^2$ and $I(f+g) = I(f)+I(g)$.
Proof of 1: If $f(t) = \xi_01_{\{0\}}(t)+\sum_{i=0}^{n-1}\xi_i1_{(t_i,t_{i+1}]}(t)$, where $\xi_i$ is $\mathcal F_{t_i}$-measurable, then $\alpha\xi_i$ is also $\mathcal F_{t_i}$-measurable, so $\alpha f\in\mathcal S^2$ and
$$I(\alpha f) = \sum_{i=0}^{n-1}\alpha\xi_i\big(W(t_{i+1})-W(t_i)\big) = \alpha\sum_{i=0}^{n-1}\xi_i\big(W(t_{i+1})-W(t_i)\big) = \alpha I(f).$$
Proof of 2: We use the notation and the results of Exercise 3.2. Let $(t_i)$ and $(s_j)$ be partitions of the interval $[0,T]$ compatible with the processes $f$ and $g$ respectively. We can construct a partition $(v_k)$ which is a refinement of both $(t_i)$ and $(s_j)$ (as was shown in Exercise 3.2). Then $f+g\in\mathcal S^2$ and
$$I(f)+I(g) = I_{(t_i)}(f)+I_{(s_j)}(g) = I_{(v_k)}(f)+I_{(v_k)}(g) = I_{(v_k)}(f+g) = I(f+g).$$
Rb
Exercise 3.4: Prove that for a f (t)dW(t), [a, b] [0, T ] we have
Z b
Z b
Z b
2
E[
f (t)dW(t)] = 0, E[(
f (t)dW(t)) ] = E[
f 2 (t)dt].
a

Solution: Since for f S 2 the process 1[ a, b] f belongs to S 2 , it follows


Rb
RT
that E( a f dW) = E( 0 1[a,b] f dW) (definition) = 0 (Theorem 3.9). SimiRb
RT
RT
larly E( a f dW)2 = E( 0 1[a,b] f dW)2 (definition)= E( 0 1[a,b] f 2 dt)(Theorem
Rb
3.10) = E( a f 2 dt).


Exercise 3.5: Prove that the stochastic integral does not depend on the choice of the sequence $f_n$ approximating $f$.
Solution: Assume $f\in\mathcal M^2$ and let $(f_n)$ and $(g_n)$ be two sequences approximating $f$. That is, $f_n,g_n\in\mathcal S^2$ for all $n$ and $E\big(\int_0^T(f_n-f)^2\,dt\big)\to0$, $E\big(\int_0^T(f-g_n)^2\,dt\big)\to0$ as $n\to\infty$. The last two relations imply, by the inequality $(a+b)^2\le2a^2+2b^2$,
$$E\Big(\int_0^T(f_n-g_n)^2\,dt\Big)\ \le\ 2E\Big(\int_0^T(f_n-f)^2\,dt\Big)+2E\Big(\int_0^T(f-g_n)^2\,dt\Big)\ \to\ 0 \quad\text{as } n\to\infty.$$
By assumption, the integrals $I(g_n)$, $I(f_n)$ exist and the limits $\lim_nI(f_n)$ and $\lim_nI(g_n)$ exist in $L^2(\Omega)$. We want to prove that $\lim_nI(f_n) = \lim_nI(g_n)$. Now
$$E\big((I(f_n)-I(g_n))^2\big) = E\Big(\Big(\int_0^T(f_n(t)-g_n(t))\,dW(t)\Big)^2\Big) \quad\text{(linearity in $\mathcal S^2$)}$$
$$= E\Big(\int_0^T(f_n(t)-g_n(t))^2\,dt\Big) \quad\text{(isometry in $\mathcal S^2$)}\ \to\ 0 \quad\text{as } n\to\infty.$$
That last convergence was shown above. Thus $\lim_nI(f_n) = \lim_nI(g_n)$ in $L^2(\Omega)$-norm.
Exercise 3.6: Show that
$$\int_0^ts\,dW(s) = tW(t)-\int_0^tW(s)\,ds.$$
Solution: We have to choose an approximating sequence for our integrand and then calculate the limit of the Itô integrals of the approximating sequence. Denote $f(s) = s$ and take $f_n(s) = \sum_{i=0}^{n-1}\frac{it}{n}1_{(\frac{it}{n},\frac{(i+1)t}{n}]}(s)$ for $0<s\le t$, $f_n(0) = 0$. Then the $f_n$ are simple functions and $f_n\in\mathcal S^2$ (they do not depend on $\omega$). $(f_n)$ is an approximating sequence for $f$ because $|f_n(s)-f(s)|\le\frac tn$ for all $0\le s\le t$. This inequality gives $E\big(\int_0^t(f(s)-f_n(s))^2\,ds\big)\le\frac{t^2}{n^2}\,t\to0$ as $n\to\infty$. Now we calculate $\lim_nI(f_n)$. According to the definition of the integral of a simple function,
$$I(f_n) = \sum_{i=0}^{n-1}\frac{it}{n}\Big(W\Big(\frac{(i+1)t}{n}\Big)-W\Big(\frac{it}{n}\Big)\Big) = tW(t)-\sum_{j=1}^{n}\frac tnW\Big(\frac{jt}{n}\Big)\ \longrightarrow\ tW(t)-\int_0^tW(s)\,ds \quad\text{a.e.}$$
This holds because for almost all $\omega$, $W(\cdot,\omega)$ is a continuous function and $\sum_{j=1}^{n}\frac tnW(\frac{jt}{n})$ is a Riemann approximating sum for the integral of $W(\cdot,\omega)$. Convergence with probability one is not sufficient: we need convergence in $L^2(\Omega)$-norm. We verify the Cauchy condition to prove $L^2(\Omega)$-norm convergence of $(I(f_n))$:
$$E\big((I(f_n)-I(f_m))^2\big) = E\big((I(f_n-f_m))^2\big) \quad\text{(linearity in $\mathcal S^2$)} = E\Big(\Big(\int_0^t(f_n(s)-f_m(s))\,dW(s)\Big)^2\Big) = E\Big(\int_0^t(f_n(s)-f_m(s))^2\,ds\Big) \quad\text{(Itô isometry)}.$$
Since $f_n\to f$ in $L^2([0,T]\times\Omega)$-norm, $(f_n)$ satisfies the Cauchy condition there. The Itô isometry guarantees that the sequence of integrals also satisfies the Cauchy condition in $L^2(\Omega)$. Then $(I(f_n))_n$ converges in $L^2(\Omega)$-norm. But the limits of a sequence convergent with probability one and at the same time convergent in $L^2(\Omega)$-norm must coincide a.s. Thus
$$I(f) = \lim_nI(f_n) = tW(t)-\int_0^tW(s)\,ds.$$
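The identity can be checked pathwise on a fine grid. A minimal simulation sketch (grid size is an arbitrary illustrative choice):

```python
import random, math

def check_identity(t=1.0, n=100_000):
    """Compare the Ito sum for int_0^t s dW(s) with t*W(t) - int_0^t W(s) ds on one path."""
    dt = t / n
    w, ito_sum, riemann_sum = 0.0, 0.0, 0.0
    for i in range(n):
        dw = random.gauss(0.0, math.sqrt(dt))
        ito_sum += (i * dt) * dw        # left-endpoint (Ito) sum of s dW
        riemann_sum += w * dt           # left-endpoint sum of W(s) ds
        w += dw
    return ito_sum, t * w - riemann_sum

lhs, rhs = check_identity()
print(lhs, rhs)   # the two numbers should be close (difference of order 1/sqrt(n))
```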

Exercise 3.7: Compute the variance of the random variable ∫_0^T (W(t) − t) dW(t).

Solution: In order to calculate the variance of a random variable we need its mean value. Denote by f_n an approximating sequence for the process f(t) = W(t) − t. From the definition of the Ito integral, I(f_n) → I(f) in L²(Ω)-norm. We have the inequalities

|E(I(f)) − E(I(f_n))| ≤ E(|I(f) − I(f_n)|) ≤ (E((I(f) − I(f_n))²))^{1/2}   (Schwarz inequality) → 0 as n → ∞.

Since E(I(f_n)) = 0 for all n (Theorem 3.9), it follows that E(I(f)) = 0. Now we calculate

Var(I(f)) = E((I(f))²)   (E(I(f)) = 0)
= E(∫_0^T f²(t) dt)   (Ito isometry)
= ∫_0^T E((W(t) − t)²) dt = ∫_0^T (E(W²(t)) + t²) dt = ∫_0^T (t + t²) dt = T²/2 + T³/3.
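A small numerical sketch (not part of the original solution): a Monte Carlo estimate of Var(∫_0^T (W(t) − t) dW(t)) compared with the closed form T²/2 + T³/3; the horizon, step count and path count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n, paths = 1.0, 400, 20_000
dt = T / n
t = np.linspace(0.0, T, n + 1)

dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
W = np.concatenate([np.zeros((paths, 1)), np.cumsum(dW, axis=1)], axis=1)

I = np.sum((W[:, :-1] - t[:-1]) * dW, axis=1)    # left-endpoint Ito sums

print("mean   ≈", I.mean())                      # close to 0
print("var    ≈", I.var())                       # close to T^2/2 + T^3/3
print("target =", T**2 / 2 + T**3 / 3)
```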

Exercise 3.8: Prove that if f, g ∈ M² and α, β are real numbers, then

I(αf + βg) = αI(f) + βI(g).

Solution: Let (f_n) and (g_n), f_n, g_n ∈ S², be approximating sequences for f and g, respectively, and fix α, β ∈ R. Then f_n → f, g_n → g in L²([0, T] × Ω)-norm. Hence the sequence (αf_n + βg_n) converges in L²([0, T] × Ω)-norm to αf + βg, and of course αf_n + βg_n ∈ S². So (αf_n + βg_n) is an approximating sequence for the process αf + βg. From the definition of the integral we obtain the relations

I(αf_n + βg_n) → I(αf + βg)

and

I(αf_n + βg_n) = αI(f_n) + βI(g_n) → αI(f) + βI(g)   (linearity in S²)

in L²(Ω)-norm. Since the limit of a convergent sequence in a normed vector space is unique, it follows that

αI(f) + βI(g) = I(αf + βg).
Exercise 3.9: Prove that for f ∈ M², a < c < b,

∫_a^b f(s) dW(s) = ∫_a^c f(s) dW(s) + ∫_c^b f(s) dW(s).

Solution: Let a < c < b. Then

∫_a^c f(s) dW(s) + ∫_c^b f(s) dW(s) = ∫_0^T f(s)1_{[a,c]}(s) dW(s) + ∫_0^T f(s)1_{[c,b]}(s) dW(s)
= ∫_0^T (f(s)1_{[a,c]}(s) + f(s)1_{[c,b]}(s)) dW(s)   (linearity)
= ∫_0^T (f(s)1_{[a,c]}(s) + f(s)1_{(c,b]}(s)) dW(s)
= ∫_0^T f(s)1_{[a,b]}(s) dW(s) = ∫_a^b f(s) dW(s).

Note that a change of the value of an integrand at one point has no influence on the value of the integral. For example, let (f_n) be an approximating sequence for f ∈ M² and let 0 = t_0^{(n)} < t_1^{(n)} < ... < t_n^{(n)} = T be the partition for f_n. Then

E((I(f_n) − I(f_n 1_{(0,T]}))²) = E((f(0)(W(t_1^{(n)}) − W(0)))²)
= E(f²(0) E(W²(t_1^{(n)}) | F_0)) = E(f²(0)) E(W²(t_1^{(n)}))   (W²(t_1^{(n)}) is independent of F_0)
= E(f²(0)) t_1^{(n)} → 0 as n → ∞.

Exercise 3.10: Show that the process M(t) = ∫_0^t sin(W(s)) dW(s) is a martingale.

Solution: Let 0 < s < t. We have to prove the equality

E(∫_0^t sin(W(u)) dW(u) | F_s) = ∫_0^s sin(W(u)) dW(u).

Beginning with the equality

∫_0^t sin(W(u)) dW(u) = ∫_0^s sin(W(u)) dW(u) + ∫_s^t sin(W(u)) dW(u) = α + β,

we see that to solve the problem it is enough to show that E(α|F_s) = α and E(β|F_s) = 0. The first equality means that α should be an F_s-measurable random variable. To prove it, let (f_n)_n, f_n ∈ S², be an approximating sequence for the process W on the interval [0, s]. Then (sin(f_n))_n is an approximating sequence for sin(W), and of course sin(f_n) ∈ S². The integrals α_n = I(sin(f_n)) converge in L²(Ω)-norm to α and they are of the form

α_n = Σ_{i=1}^{m_n} ξ_i^{(n)} (W(t_{i+1}^{(n)}) − W(t_i^{(n)})),

where t_i^{(n)} ≤ s and ξ_i^{(n)} is F_{t_i^{(n)}}-measurable, F_{t_i^{(n)}} ⊂ F_s. Then the α_n are F_s-measurable variables, and as a consequence α must be an F_s-measurable variable. For the equality E(β|F_s) = 0, let (g_n)_n, g_n ∈ S², g_n = 1_{[s,t]} g_n, be an approximating sequence for the process W on the interval [s, t]. Then, similarly to the previous case, the integrals β_n = I(1_{[s,t]} sin(g_n)) are of the form

β_n = Σ_{j=1}^{l_n − 1} η_j^{(n)} (W(s_{j+1}^{(n)}) − W(s_j^{(n)})),

where η_j^{(n)} is F_{s_j^{(n)}}-measurable and s ≤ s_j^{(n)} ≤ t for all j and n. These conditions and the definition of the Wiener process W adapted to the filtration (F_t) imply that the variable W(s_{j+1}^{(n)}) − W(s_j^{(n)}) is independent of the σ-field F_{s_j^{(n)}} for all s_j^{(n)} and n. This property and the fact that η_j^{(n)} is F_{s_j^{(n)}}-measurable imply that

E(β_n | F_s) = Σ_{j=1}^{l_n − 1} E(η_j^{(n)} (W(s_{j+1}^{(n)}) − W(s_j^{(n)})) | F_s),

and further

E(η_j^{(n)} (W(s_{j+1}^{(n)}) − W(s_j^{(n)})) | F_s)
= E(E(η_j^{(n)} (W(s_{j+1}^{(n)}) − W(s_j^{(n)})) | F_{s_j^{(n)}}) | F_s)   (tower property)
= E(η_j^{(n)} E(W(s_{j+1}^{(n)}) − W(s_j^{(n)}) | F_{s_j^{(n)}}) | F_s)
= E(η_j^{(n)} E(W(s_{j+1}^{(n)}) − W(s_j^{(n)})) | F_s)   (independence)
= E(W(s_{j+1}^{(n)}) − W(s_j^{(n)})) E(η_j^{(n)} | F_s) = 0

for all j, n, because E(W(s_{j+1}^{(n)}) − W(s_j^{(n)})) = 0. Thus E(β_n|F_s) = 0 for all n. The convergence β_n → β in L²(Ω)-norm implies E(β_n|F_s) → E(β|F_s) in L²(Ω) (see [PF]). The last result, E((E(β|F_s))²) = 0, gives E(β|F_s) = 0 almost everywhere. The proof is completed.
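A small numerical sketch (not part of the original solution): a necessary condition for the martingale property is that the increment M(t) − M(s) has zero correlation with F_s-measurable variables. This is checked by Monte Carlo for the test variables 1, W(s) and M(s); the choices of s, t, step size and path count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
s, t, n, paths = 0.5, 1.0, 500, 10_000
dt = t / n
ks = int(s / dt)                                    # index of time s on the grid

dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
W = np.concatenate([np.zeros((paths, 1)), np.cumsum(dW, axis=1)], axis=1)

increments = np.sin(W[:, :-1]) * dW                 # sin(W(u_i)) (W(u_{i+1}) - W(u_i))
M_s = increments[:, :ks].sum(axis=1)
M_t = increments.sum(axis=1)
jump = M_t - M_s

for name, h in [("1", np.ones(paths)), ("W(s)", W[:, ks]), ("M(s)", M_s)]:
    print(f"E[(M(t)-M(s)) * {name}] ≈ {np.mean(jump * h):+.4f}")   # all close to 0
```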

Exercise 3.11: For each t in [0, T] compare the mean and variance of the Ito integral ∫_0^T W(s) dW(s) with those of the random variable ½(W(T)² − T).

Solution: We have E(∫_0^T W(s) dW(s)) = 0 (Theorem 3.14). As a consequence

Var(∫_0^T W(s) dW(s)) = E[(∫_0^T W(s) dW(s))²] = E(∫_0^T W²(s) ds)   (isometry)
= ∫_0^T E(W²(s)) ds = ∫_0^T s ds = T²/2.

For the second random variable we obtain

E(½(W²(T) − T)) = ½(E(W²(T)) − T) = ½(T − T) = 0,

and so

Var(½(W²(T) − T)) = ¼ Var(W²(T)) = ¼ (E((W²(T))²) − (E(W²(T)))²)
= ¼ (E(W⁴(T)) − T²) = ¼ (3T² − T²) = T²/2.
Exercise 3.12: Use the identity 2a(b − a) = (b² − a²) − (b − a)² and appropriate approximating partitions to show from first principles that ∫_0^T W(s) dW(s) = ½(W(T)² − T).

Solution: Since the process W belongs to M², we know that the integral of W exists and it is enough to calculate the limit of the integrals for a sequence (f_n)_n approximating W. We take f_n, n = 1, 2, ..., given by the partitions t_i^{(n)} = iT/n, i = 0, 1, ..., n: f_n(t) = Σ_{i=0}^{n−1} W(t_i^{(n)}) 1_{(t_i^{(n)}, t_{i+1}^{(n)}]}(t), f_n(0) = 0. It is easy to verify that f_n → W in L²([0, T] × Ω)-norm. Using our hypothesis about lim_n I(f_n), we see that it is necessary to prove that

I(f_n) = Σ_{i=0}^{n−1} W(t_i^{(n)}) (W(t_{i+1}^{(n)}) − W(t_i^{(n)})) → ½(W²(T) − T)

in L²(Ω)-norm. The identity 2a(b − a) = (b² − a²) − (b − a)² lets us write the Ito sum I(f_n) as follows:

I(f_n) = ½ Σ_{i=0}^{n−1} (W²(t_{i+1}^{(n)}) − W²(t_i^{(n)})) − ½ Σ_{i=0}^{n−1} (W(t_{i+1}^{(n)}) − W(t_i^{(n)}))²
= ½ W²(T) − ½ ξ_n,

where ξ_n = Σ_{i=0}^{n−1} (W(t_{i+1}^{(n)}) − W(t_i^{(n)}))². Then it is sufficient to show that E(ξ_n − T)² → 0. Since E((W(t_{i+1}^{(n)}) − W(t_i^{(n)}))²) = E(W²(t_{i+1}^{(n)} − t_i^{(n)}))   (the same distributions) = E(W²(T/n)) = T/n, we obtain E(ξ_n) = T. Hence

E(ξ_n − T)² = Var(ξ_n) = Σ_{i=0}^{n−1} Var((W(t_{i+1}^{(n)}) − W(t_i^{(n)}))²)   (independence)
= Σ_{i=0}^{n−1} Var(W²(t_{i+1}^{(n)} − t_i^{(n)}))   (the same distributions) = n Var(W²(T/n))
= n (E(W⁴(T/n)) − (E(W²(T/n)))²) = n (3(T/n)² − (T/n)²) = 2T²/n → 0

as n → ∞. The proof is completed.
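A small numerical sketch (not part of the original solution): for a fixed family of simulated paths, the Ito sums of W against itself approach (W(T)² − T)/2 in mean square as the partition is refined; the horizon, partition sizes and path count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_max, paths = 1.0, 4_096, 2_000
dt = T / n_max
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n_max))
W = np.concatenate([np.zeros((paths, 1)), np.cumsum(dW, axis=1)], axis=1)
target = 0.5 * (W[:, -1] ** 2 - T)

for n in (16, 64, 256, 1024, 4096):
    step = n_max // n
    Wn = W[:, ::step]                               # the same paths on a coarser grid
    ito_sum = np.sum(Wn[:, :-1] * np.diff(Wn, axis=1), axis=1)
    err = np.mean((ito_sum - target) ** 2)
    print(f"n = {n:5d}   E(I_n - target)^2 ≈ {err:.5f}")   # about T^2/(2n), shrinking
```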

Exercise 3.13: Give a direct proof of the conditional Ito isometry (Theorem 3.20): if f ∈ M², [a, b] ⊂ [0, T], then

E([∫_a^b f(s) dW(s)]² | F_a) = E(∫_a^b f²(s) ds | F_a),

following the method used for proving the unconditional Ito isometry.

Solution: (Conditional Ito isometry.) The proof has two steps. In Step 1 we prove the theorem for f ∈ S². In this case, let a = t_0 < t_1 < ... < t_n = b be a partition of [a, b] and let f be of the form f(t) = ξ_0 1_{{a}}(t) + Σ_{k=0}^{n−1} ξ_k 1_{(t_k, t_{k+1}]}(t), where, for k < n, ξ_k is an F_{t_k}-measurable variable. Then we can calculate, similarly as in Theorem 3.10,

E([∫_a^b f(s) dW(s)]² | F_a) = E([Σ_{k=0}^{n−1} ξ_k (W(t_{k+1}) − W(t_k))]² | F_a)
= Σ_{k=0}^{n−1} E([ξ_k (W(t_{k+1}) − W(t_k))]² | F_a)
+ 2 Σ_{i<k} E(ξ_i ξ_k (W(t_{i+1}) − W(t_i))(W(t_{k+1}) − W(t_k)) | F_a) = A + 2B.

Consider A. We have

E(ξ_k² (W(t_{k+1}) − W(t_k))² | F_a)
= E(E(ξ_k² (W(t_{k+1}) − W(t_k))² | F_{t_k}) | F_a)   (tower property)
= E(ξ_k² E[(W(t_{k+1}) − W(t_k))² | F_{t_k}] | F_a)   (ξ_k² is F_{t_k}-measurable)
= E(ξ_k² E[(W(t_{k+1}) − W(t_k))²] | F_a)   (independence)
= E((W(t_{k+1} − t_k))²) E(ξ_k² | F_a)   (linearity, the same distribution)
= (t_{k+1} − t_k) E(ξ_k² | F_a) = E(ξ_k² (t_{k+1} − t_k) | F_a).

This proves that

A = Σ_{k=0}^{n−1} E(ξ_k² (t_{k+1} − t_k) | F_a) = E(Σ_{k=0}^{n−1} ξ_k² (t_{k+1} − t_k) | F_a) = E(∫_a^b f²(t) dt | F_a).

Now for B we have

E(ξ_i ξ_k (W(t_{i+1}) − W(t_i))(W(t_{k+1}) − W(t_k)) | F_a)
= E(E[ξ_i ξ_k (W(t_{i+1}) − W(t_i))(W(t_{k+1}) − W(t_k)) | F_{t_k}] | F_a)   (tower property)
= E(ξ_i ξ_k (W(t_{i+1}) − W(t_i)) E(W(t_{k+1}) − W(t_k) | F_{t_k}) | F_a)   (the terms for i < k are F_{t_k}-measurable)
= 0,

because W(t_{k+1}) − W(t_k) is independent of the σ-field F_{t_k}, hence E(W(t_{k+1}) − W(t_k) | F_{t_k}) = E(W(t_{k+1}) − W(t_k)) = 0. Hence also B = 0.

Step 2. The general case. Let f ∈ M² and let (f_n) be a sequence of approximating processes for f on [a, b]. Then f_n ∈ S², n = 1, 2, ..., and ||f − f_n||_{L²([a,b]×Ω)} → 0 as n → ∞. The last condition implies ||I(1_{[a,b]}(f − f_n))||_{L²(Ω)} → 0.

Now we want to use the conditional isometry for f_n and take the limit as n → ∞. This needs the following general observation.

Observation. Let (Z_n) be a sequence of random variables on a probability space (Ω, F, P) and let G be a sub-σ-field of F. If Z_n ∈ L¹(Ω), n = 1, 2, ..., and Z_n → Z in L¹(Ω)-norm, then E(Z_n|G) → E(Z|G) in L¹(Ω)-norm.
Proof. First note that E(Z_n|G), E(Z|G) belong to L¹(Ω). Now we have

E(|E(Z_n|G) − E(Z|G)|) = E(|E(Z_n − Z|G)|) ≤ E(E(|Z_n − Z| | G)) = E(|Z_n − Z|) → 0 as n → ∞.

To use our Observation we have to verify that [I(1_{[a,b]} f_n)]² → [I(1_{[a,b]} f)]² and ∫_0^T (1_{[a,b]} f_n)² ds → ∫_0^T (1_{[a,b]} f)² ds in L¹(Ω)-norm. The following relations hold:

E(|[I(1_{[a,b]} f_n)]² − [I(1_{[a,b]} f)]²|) = E(|I(1_{[a,b]}(f_n − f))| |I(1_{[a,b]}(f_n + f))|)
≤ (E([I(1_{[a,b]}(f_n − f))]²))^{1/2} (E([I(1_{[a,b]}(f_n + f))]²))^{1/2}   (Schwarz inequality)
= (E(∫_0^T 1_{[a,b]}(f_n − f)² ds))^{1/2} (E(∫_0^T 1_{[a,b]}(f_n + f)² ds))^{1/2}   (isometry)
= ||f_n − f||_{L²([a,b]×Ω)} ||f_n + f||_{L²([a,b]×Ω)} → 0

as n → ∞, because the second factor is bounded. For the second sequence we have similarly

E|∫_0^T 1_{[a,b]}(f_n² − f²) ds| ≤ E(∫_0^T |1_{[a,b]}(f_n − f)| |1_{[a,b]}(f_n + f)| ds)
≤ (E(∫_0^T 1_{[a,b]}(f_n − f)² ds))^{1/2} (E(∫_0^T 1_{[a,b]}(f_n + f)² ds))^{1/2}
= ||f_n − f||_{L²([a,b]×Ω)} ||f_n + f||_{L²([a,b]×Ω)} → 0 as n → ∞.

Thus we have by our Observation that

E([I(1_{[a,b]} f_n)]² | F_a) → E([I(1_{[a,b]} f)]² | F_a)

and

E(∫_0^T 1_{[a,b]} f_n² ds | F_a) → E(∫_0^T 1_{[a,b]} f² ds | F_a)

in L¹(Ω)-norm. Hence, and from the equality

E([I(1_{[a,b]} f_n)]² | F_a) = E(∫_0^T 1_{[a,b]} f_n² ds | F_a)

valid for all f_n ∈ S², we obtain the final result.


Exercise 3.14: Show that ∫_0^t g(s) ds = 0 for all t ∈ [0, T] implies g = 0 almost surely on [0, T].
Solution: Denote by g⁺ and g⁻ the positive and the negative parts of g. Then g⁺ ≥ 0, g⁻ ≥ 0 and g⁺ − g⁻ = g. The assumption about g implies ∫_a^b g⁺(s) ds = ∫_a^b g⁻(s) ds for all intervals [a, b] ⊂ [0, T]. Write μ⁺(A) = ∫_A g⁺(s) ds, μ⁻(A) = ∫_A g⁻(s) ds for A ∈ B([0, T]). Of course μ⁺ and μ⁻ are measures on B([0, T]), and the properties of g⁺ and g⁻ give μ⁺([a, b]) = μ⁻([a, b]) for every interval [a, b]. Since intervals generate the σ-field B([0, T]), we must have μ⁺(A) = μ⁻(A) for all A ∈ B([0, T]). Suppose now that the Lebesgue measure of the set {x : g(x) ≠ 0} = {x : g⁺(x) ≠ g⁻(x)} is positive. Then the measure of the set B = {x : g⁺(x) > g⁻(x)} (or of the set {x : g⁺(x) < g⁻(x)}) must also be positive. But this leads to the conclusion

μ⁺(B) − μ⁻(B) = ∫_B (g⁺ − g⁻) ds > 0,

which contradicts the equality μ⁺(A) = μ⁻(A) for all A ∈ B([0, T]).


Chapter 4
Exercise 4.1: Show that for cross-terms all we need is the fact that W²(t) − t is a martingale.
Solution: We need to calculate H = E(F′(W(t_i)) F′(W(t_j)) X_i X_j) for i < j, where X_k = [W(t_{k+1}) − W(t_k)]² − [t_{k+1} − t_k]. As in the proof of Theorem 4.5 (Hurdle 1: cross-terms) we have H = E(F′(W(t_i)) F′(W(t_j)) X_i E(X_j | F_{t_j})). So we calculate E(X_j | F_{t_j}). It is possible to write it in the form

E(X_j | F_{t_j}) = E([W²(t_{j+1}) − t_{j+1}] | F_{t_j}) − 2W(t_j) E(W(t_{j+1}) | F_{t_j}) + W²(t_j) + t_j   (W(t_j) is F_{t_j}-measurable)
= W²(t_j) − t_j − 2W²(t_j) + W²(t_j) + t_j = 0

(W²(t) − t and W(t) are martingales). Hence also H = 0.
Exercise 4.2: Verify the convergence claimed in (4.3), using the fact that the quadratic variation of W is t.
Solution: We essentially repeat the calculation done for the quadratic variation of W. As in the previous exercise write X_i = (W(t_{i+1}) − W(t_i))² − (t_{i+1} − t_i), i = 0, ..., n − 1. Since E(X_i) = 0, and hence E(Σ_{i=0}^{n−1} X_i) = 0, and since X_i, i = 0, ..., n − 1, are independent random variables, we have

Σ_{i=0}^{n−1} E({(W(t_{i+1}) − W(t_i))² − (t_{i+1} − t_i)}²) = Σ_{i=0}^{n−1} E(X_i²)
= Σ_{i=0}^{n−1} Var(X_i)   (E(X_i) = 0)
= Var(Σ_{i=0}^{n−1} X_i)   (the X_i are independent)
= E((Σ_{i=0}^{n−1} X_i)²)   (E(Σ_{i=0}^{n−1} X_i) = 0)
= E[((Σ_{i=0}^{n−1} (W(t_{i+1}) − W(t_i))²) − t)²] = E((V²_{[0,t]}(n) − t)²) → E(([W, W](t) − t)²) = 0

in L²(Ω)-norm, independently of the sequence of partitions with mesh going to 0 as n → ∞. (See Proposition 2.2.)
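A small numerical sketch (not part of the original solution): the sum of squared increments of W over [0, t] concentrates around t as the mesh shrinks, i.e. E[(Σ(ΔW_i)² − t)²] ≈ 2t²/n → 0. The horizon, partition sizes and path count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
t, paths = 1.0, 10_000
for n in (10, 100, 1000):
    dW = rng.normal(0.0, np.sqrt(t / n), size=(paths, n))
    qv = np.sum(dW ** 2, axis=1)                    # discrete quadratic variation
    print(f"n = {n:5d}   E[(QV - t)^2] ≈ {np.mean((qv - t) ** 2):.6f}")   # about 2t^2/n
```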
Exercise 4.3: Prove that

τ_M = inf{t : ∫_0^t |f(s)| ds ≥ M}

is a stopping time.
Solution: The process t ↦ ∫_0^t |f(s)| ds has continuous paths, so we can apply the argument given at the beginning of the section provided it is adapted. For this we have to assume that the process f(t) is adapted and notice that the integral ∫_0^t |f(s)| ds, computed pathwise, is the limit of approximating sums. These sums are F_t-measurable and measurability is preserved in the limit.
Exercise 4.4: Find a process that is in P² but not in M².
Solution: If a process has continuous paths, it is in P², since the integral over any finite time interval of a continuous function is finite. We need an example for which the expectation E(∫_0^T f²(s) ds) is infinite. Fubini's theorem implies that it is sufficient to find f such that ∫_0^T E(f²(s)) ds is infinite. Going for a simple example, let Ω = [0, 1] and T = 1. The goal will be achieved if E(f²(s)) = 1/s. Now E(f²(s)) = ∫_0^1 f²(s, ω) dω, so we need a random variable, i.e. a Borel function X : [0, 1] → R, such that ∫_0^1 X(ω) dω = 1/s. Clearly, X(ω) = (1/s²) 1_{[0,s]}(ω) does the trick, so f(s, ω) = (1/s) 1_{[0,s]}(ω) is the example we are looking for.
Exercise 4.5: Show that the Ito process dX(t) = a(t) dt + b(t) dW(t) has quadratic variation [X, X](t) = ∫_0^t b²(s) ds.
Solution: Under the additional assumption that ∫_0^T b(s) dW(s) is bounded, the result is given by Theorem 3.26. For the general case let τ_n = min{t : |∫_0^t b(s) dW(s)| ≥ n}, so that, writing M(t) = ∫_0^t b(s) dW(s), the stopped process M^{τ_n}(t) is bounded (by n). Since M^{τ_n}(t) = ∫_0^t 1_{[0,τ_n]} b(s) dW(s), we get [X^{τ_n}, X^{τ_n}](t) = ∫_0^t 1_{[0,τ_n]} b²(s) ds → ∫_0^t b²(s) ds almost surely, because τ_n is localising.
Exercise 4.6: Show that the characteristics of an Ito process are uniquely defined by the process, i.e. prove that X = Y implies a_X = a_Y, b_X = b_Y, by applying the Ito formula to find the form of (X(t) − Y(t))².
Solution: Let Z(t) = X(t) − Y(t); by the Ito formula dZ²(t) = 2Z(t)a_Z(t) dt + 2Z(t)b_Z(t) dW(t) + b_Z²(t) dt with a_Z = a_X − a_Y, b_Z = b_X − b_Y. But Z(t) = 0, hence ∫_0^t b_Z²(s) ds = 0 for all t, so b_Z = 0. This implies ∫_0^t a_Z(s) ds = 0 for all t, hence a_Z(t) = 0 as well.
Exercise 4.7: Suppose that the Ito process dX(t) = a(t) dt + b(t) dW(t) is positive for all t and find the characteristics of the processes Y(t) = 1/X(t), Z(t) = ln X(t).
Solution: Y(t) = 1/X(t) = F(X(t)) with F(x) = 1/x, F′(x) = −1/x², F″(x) = 2/x³, so

dY = −(1/X²) a dt − (1/X²) b dW(t) + (1/X³) b² dt.

Z(t) = ln X(t), so Z(t) = F(X(t)) with F(x) = ln x, F′(x) = 1/x, F″(x) = −1/x², and

dZ = (1/X) a dt + (1/X) b dW(t) − (1/(2X²)) b² dt.
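A small numerical sketch (not part of the original solution): take the positive Ito process dX = μX dt + σX dW (an illustrative choice of a and b) and check along one simulated path that ln X(t) − ln X(0) is reproduced by the Euler sums of (a/X − b²/(2X²)) dt + (b/X) dW from the formula above. The parameters and step count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
mu, sigma, x0, T, n = 0.1, 0.3, 1.0, 1.0, 100_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)

X = np.empty(n + 1)
X[0] = x0
Z = 0.0                                             # running value of ln X(t) - ln X(0)
for i in range(n):
    a, b = mu * X[i], sigma * X[i]                  # characteristics of X at time t_i
    Z += (a / X[i] - b ** 2 / (2 * X[i] ** 2)) * dt + (b / X[i]) * dW[i]
    X[i + 1] = X[i] + a * dt + b * dW[i]            # Euler step for X itself

print(np.log(X[-1] / x0), Z)                        # the two numbers should be close
```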
Exercise 4.8: Find the characteristics of exp{at + X(t)}, given the form of the Ito process X.
Solution: Let F(t, x) = exp{at + x}, so F_t = aF, F_x = F, F_xx = F, and with Z(t) = exp{at + X(t)}, dX(t) = a_X(t) dt + b_X(t) dW(t), we have

dZ = aZ dt + a_X Z dt + b_X Z dW(t) + ½ b_X² Z dt.
Exercise 4.9: Find a version of Corollary 4.32 for the case where the integrand is a deterministic function of time.
Solution: Let M(t) = exp{∫_0^t g(s) dW(s) − ½ ∫_0^t g²(s) ds} = exp{X(t)}, where g is the deterministic integrand and X(t) is an Ito process with a_X = −½g², b_X = g. Since X(t) has a normal distribution (g is deterministic), M ∈ M² can be shown in the same way as in the proof of the corollary, and (4.16) is clearly satisfied.
Exercise 4.10: Find the characteristics of the process e^{rt} X(t).
Solution: Let Y(t) = e^{rt}; then a_Y(t) = re^{rt}, b_Y(t) = 0, dY(t) = re^{rt} dt, so integration by parts (the Ito product rule, in other words) gives

d[e^{rt} X(t)] = re^{rt} X(t) dt + e^{rt} dX(t).
Exercise 4.11: Find the form of the process X/Y using Exercise 4.7.
Solution: Write dX(t) = a_X(t) dt + b_X(t) dW(t), d(1/Y(t)) = a_{1/Y}(t) dt + b_{1/Y}(t) dW(t) with the characteristics of 1/Y given by Exercise 4.7:

a_{1/Y} = −(1/Y²) a_Y + (1/Y³) b_Y²,
b_{1/Y} = −(1/Y²) b_Y.

All that is left is to plug these into the claim of Theorem 4.36:

d(X · (1/Y)) = X d(1/Y) + (1/Y) dX + b_X b_{1/Y} dt.


Chapter 5
Exercise 5.1: Find an equation satisfied by X(t) = S(0) exp{μ_X t + σW(t)}.
Solution: Write the process in (5.3) in the form S(t) = S(0) exp{μ_S t + σW(t)} with μ_S = μ − ½σ²; then (5.2) takes the form dS(t) = (μ_S + ½σ²) S(t) dt + σ S(t) dW(t), so immediately

dX(t) = (μ_X + ½σ²) X(t) dt + σ X(t) dW(t).
Exercise 5.2: Find the equations for the functions t ↦ E(S(t)), t ↦ Var(S(t)).
Solution: We have E(S(t)) = S(0) exp{μt} = m(t), say, so m′(t) = μ m(t) with m(0) = S(0). Next, Var(S(t)) = E(S(t) − S(0)e^{μt})² = S²(0) e^{2μt}(e^{σ²t} − 1) = v(t), say, and

v′(t) = 2μ S²(0) e^{2μt}(e^{σ²t} − 1) + σ² S²(0) e^{2μt} e^{σ²t}
= [2μ + σ²] v(t) + σ² m²(t).
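A small numerical sketch (not part of the original solution): a Monte Carlo check of E(S(t)) = S(0)e^{μt} and Var(S(t)) = S²(0)e^{2μt}(e^{σ²t} − 1) for geometric Brownian motion; μ, σ, t and the sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, S0, t, paths = 0.1, 0.3, 1.0, 1.0, 1_000_000

W_t = rng.normal(0.0, np.sqrt(t), size=paths)
S_t = S0 * np.exp((mu - 0.5 * sigma ** 2) * t + sigma * W_t)   # exact GBM samples

print("E(S(t))  :", S_t.mean(), "vs", S0 * np.exp(mu * t))
print("Var(S(t)):", S_t.var(), "vs",
      S0 ** 2 * np.exp(2 * mu * t) * (np.exp(sigma ** 2 * t) - 1))
```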

Exercise 5.3: Show that the linear equation

dS(t) = μ(t) S(t) dt + σ(t) S(t) dW(t)

with continuous deterministic functions μ(t) and σ(t) has a unique solution

S(t) = S(0) exp{∫_0^t (μ(s) − ½σ²(s)) ds + ∫_0^t σ(s) dW(s)}.

Solution: For uniqueness we can repeat the proof of Proposition 5.3 (or notice that the coefficients of the equation satisfy the conditions of Theorem 5.8). To see that the process solves the equation, take

F(t, x) = S(0) exp{∫_0^t (μ(s) − ½σ²(s)) ds + x},

and S(t) = F(t, X(t)) with X(t) = ∫_0^t σ(s) dW(s). Now

F_t(t, x) = (μ(t) − ½σ²(t)) F(t, x),
F_x(t, x) = F_xx(t, x) = F(t, x),
dX(t) = σ(t) dW(t),

so by the Ito formula we get the result:

dS(t) = (μ(t) − ½σ²(t)) S(0) exp{∫_0^t (μ(s) − ½σ²(s)) ds + X(t)} dt
+ σ(t) S(0) exp{∫_0^t (μ(s) − ½σ²(s)) ds + X(t)} dW(t)
+ ½ σ²(t) S(0) exp{∫_0^t (μ(s) − ½σ²(s)) ds + X(t)} dt
= μ(t) S(t) dt + σ(t) S(t) dW(t).

(We have essentially repeated the proof of Theorem 5.2.)
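A small numerical sketch (not part of the original solution): an Euler–Maruyama path of dS = μ(t)S dt + σ(t)S dW compared with the closed-form solution built from the same increments. The coefficient functions, horizon and step count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(8)
mu = lambda u: 0.05 + 0.1 * u               # illustrative deterministic coefficients
sigma = lambda u: 0.2 + 0.1 * np.sin(u)
S0, T, n = 1.0, 1.0, 50_000
dt = T / n
t = np.linspace(0.0, T, n + 1)
dW = rng.normal(0.0, np.sqrt(dt), size=n)

S_euler = S0
drift_int, stoch_int = 0.0, 0.0
for i in range(n):
    S_euler += mu(t[i]) * S_euler * dt + sigma(t[i]) * S_euler * dW[i]
    drift_int += (mu(t[i]) - 0.5 * sigma(t[i]) ** 2) * dt
    stoch_int += sigma(t[i]) * dW[i]

S_closed = S0 * np.exp(drift_int + stoch_int)
print(S_euler, S_closed)                    # close, and closer still as n grows
```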
Exercise 5.4: Find the equation solved by the process sin W(t) = X(t), say.
Solution: Take F(x) = sin x, F′(x) = cos x, F″(x) = −sin x, and the simplest version of the Ito formula gives

dX(t) = cos(W(t)) dW(t) − ½ sin(W(t)) dt = √(1 − X²(t)) dW(t) − ½ X(t) dt.

Exercise 5.5: Find a solution to the equation dX = −√(1 − X²) dW − ½ X dt with X(0) = 1.
Solution: Comparing with Exercise 5.4 we can guess X(t) = cos(W(t)) and check that with F(x) = cos x the Ito formula gives the result.
Exercise 5.6: Find a solution to the equation

dX(t) = ¾ X²(t) dt − X^{3/2}(t) dW(t),

bearing in mind the above derivation of dX(t) = X³(t) dt − X²(t) dW(t).
Solution: An educated guess (the educated part is to solve F′ = −F^{3/2} so that the stochastic term agrees; the guess is to use F of the special form (1 + ax)^b, then keep fingers crossed that the dt term will be as needed) gives F(x) = (1 + ½x)^{−2} with F′(x) = −(1 + ½x)^{−3} = −[F(x)]^{3/2} and F″(x) = (3/2)(1 + ½x)^{−4} = (3/2) F²(x), so that ½F″ = ¾F² and X(t) = F(W(t)) satisfies the equation.
Exercise 5.7: Solve the following Vasicek equation dX(t) = (a − bX(t)) dt + σ dW(t).
Solution: Observe that d[e^{bt} X(t)] = a e^{bt} dt + σ e^{bt} dW(t) (Exercise 4.10), hence

e^{bt} X(t) = X(0) + a ∫_0^t e^{bu} du + σ ∫_0^t e^{bu} dW(u)
= X(0) + (a/b)(e^{bt} − 1) + σ ∫_0^t e^{bu} dW(u),

so that

X(t) = e^{−bt} X(0) + (a/b)(1 − e^{−bt}) + σ e^{−bt} ∫_0^t e^{bu} dW(u).
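A small numerical sketch (not part of the original solution): an Euler path of the Vasicek equation dX = (a − bX) dt + σ dW against the explicit solution above, with the stochastic integral ∫_0^t e^{bu} dW(u) approximated from the same increments. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(9)
a, b, sigma, x0, T, n = 0.5, 1.5, 0.2, 1.0, 2.0, 50_000
dt = T / n
t = np.linspace(0.0, T, n + 1)
dW = rng.normal(0.0, np.sqrt(dt), size=n)

X = np.empty(n + 1)
X[0] = x0
for i in range(n):
    X[i + 1] = X[i] + (a - b * X[i]) * dt + sigma * dW[i]      # Euler step

stoch = np.concatenate([[0.0], np.cumsum(np.exp(b * t[:-1]) * dW)])
X_exact = np.exp(-b * t) * (x0 + (a / b) * (np.exp(b * t) - 1) + sigma * stoch)

print(np.max(np.abs(X - X_exact)))                             # small, shrinks with dt
```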

Exercise 5.8: Find the equation solved by the process X², where X is the Ornstein–Uhlenbeck process.
Solution: Recall dX(t) = −X(t) dt + σ dW(t), so by the Ito formula

dX²(t) = 2X(t) dX(t) + σ² dt = −2X²(t) dt + 2σX(t) dW(t) + σ² dt.
Exercise 5.9: Prove uniqueness using the method of Proposition 5.3 for a general equation with Lipschitz coefficients (take any two solutions and estimate the square of their difference to show that it is zero).
Solution: Suppose

X_i(t) = X_0 + ∫_0^t a(s, X_i(s)) ds + ∫_0^t b(s, X_i(s)) dW(s),   i = 1, 2.

Then

X_1(t) − X_2(t) = ∫_0^t [a(u, X_1(u)) − a(u, X_2(u))] du + ∫_0^t [b(u, X_1(u)) − b(u, X_2(u))] dW(u),

and using (a + b)² ≤ 2a² + 2b² and taking expectations we get

f(t) := E(X_1(t) − X_2(t))²
≤ 2E[(∫_0^t [a(u, X_1(u)) − a(u, X_2(u))] du)²] + 2E[(∫_0^t [b(u, X_1(u)) − b(u, X_2(u))] dW(u))²].

Using the Lipschitz condition for a, the first term on the right is estimated by 2E[(∫_0^t K|X_1(u) − X_2(u)| du)²], and we can continue from here as in the proof of Proposition 5.3 (by the Schwarz inequality this term is at most 2K²T ∫_0^t E[X_1(u) − X_2(u)]² du). The Ito isometry and the Lipschitz condition for b allow us to estimate the second term by

2E[(∫_0^t [b(u, X_1(u)) − b(u, X_2(u))] dW(u))²] = 2 ∫_0^t E[b(u, X_1(u)) − b(u, X_2(u))]² du
≤ 2K² ∫_0^t E[X_1(u) − X_2(u)]² du.

Putting these together we obtain f(t) ≤ 2K²(1 + T) ∫_0^t f(u) du, and the Gronwall lemma implies f(t) = 0, i.e. X_1(t) = X_2(t).
Exercise 5.10: Prove that the solution depends continuously on the initial value in the L² norm; namely, show that if X, Y are solutions of (5.4) with initial conditions X_0, Y_0, respectively, then for all t we have E(X(t) − Y(t))² ≤ c E(X_0 − Y_0)². Find the form of the constant c.
Solution: We proceed as in Exercise 5.9, but the first step is

X(t) − Y(t) = X_0 − Y_0 + ∫_0^t [a(u, X(u)) − a(u, Y(u))] du + ∫_0^t [b(u, X(u)) − b(u, Y(u))] dW(u).

After taking squares and expectations (now there are three terms, so we use (a + b + c)² ≤ 3a² + 3b² + 3c²) and following the same estimates, we end up with

f(t) ≤ 3E(X_0 − Y_0)² + 3K²(1 + T) ∫_0^t f(u) du,

where f(t) = E(X(t) − Y(t))², so after Gronwall

E(X(t) − Y(t))² ≤ 3 exp{3K²(1 + T)T} E(X_0 − Y_0)²,

i.e. the claim holds with c = 3 exp{3K²(1 + T)T}.
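A small numerical sketch (not part of the original solution): for the linear equation dX = μX dt + σX dW (Lipschitz with K = max(|μ|, |σ|)), two Euler solutions driven by the same noise but started at nearby points stay close in mean square, and the Monte Carlo ratio E(X(T) − Y(T))² / (X_0 − Y_0)² sits well below the crude bound 3 exp{3K²(1 + T)T}. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(10)
mu, sigma, T, n, paths = 0.1, 0.3, 1.0, 1_000, 10_000
x0, y0 = 1.0, 1.01
dt = T / n
K = max(abs(mu), abs(sigma))

dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
X = np.full(paths, x0)
Y = np.full(paths, y0)
for i in range(n):                                   # the same noise drives both
    X = X + mu * X * dt + sigma * X * dW[:, i]
    Y = Y + mu * Y * dt + sigma * Y * dW[:, i]

ratio = np.mean((X - Y) ** 2) / (x0 - y0) ** 2
print("E(X(T)-Y(T))^2 / (X0-Y0)^2 ≈", ratio)         # about e^{(2mu + sigma^2)T} here
print("crude bound 3 exp{3K^2(1+T)T} =", 3 * np.exp(3 * K ** 2 * (1 + T) * T))
```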
