
Chapter 18

Itô Integration
and Quadratic Variation
Local and finite-variation martingales, completeness, covariation, continuity,
norm comparison, uniform integrability, Cauchy-type inequalities, martingale
integral, semi-martingales, continuity, dominated convergence, chain rule,
optional stopping, integration by parts, approximation of covariation, Itô's
formula, local integral, conformal mapping, Fisk–Stratonovich integral,
continuity and approximation, random time change, dependence on parameter,
functional representation

Here we initiate the study of stochastic calculus, arguably the most powerful
tool of modern probability, with applications throughout the subject. For the
moment, we may only mention the time-
change reduction and integral representation of continuous local martingales
in Chapter 19, the Girsanov theory for removal of drift in the same chapter,
the predictable transformations in Chapter 27, the construction of local time
in Chapter 29, and the stochastic differential equations in Chapters 32–33.
In this chapter we consider only stochastic integration with respect to con-
tinuous semi-martingales, whereas the more subtle case of integrators with
possible jump discontinuities is postponed until Chapter 20, and an extension
to non-anticipating integrators appears in Chapter 21. The theory of stochas-
tic integration is inextricably linked to the notions of quadratic variation and
covariation, already encountered in Chapter 14 in the special case of Brownian
motion, and together the two notions are developed into a theory of amazing
power and beauty.
We begin with a construction of the covariation process [M, N ] of a pair of
continuous local martingales M and N , which requires an elementary approxi-
mation and completeness argument. The processes M ∗ and [M ] = [M, M ] will
be related by some useful continuity and norm relations, including the powerful
BDG inequalities.
Given the quadratic variation [M], we may next construct the stochastic
integral ∫V dM for suitable progressive processes V, using a simple Hilbert-
space argument. Combining with the ordinary Stieltjes integral ∫V dA for
processes A of locally finite variation, we may finally extend the integral to
arbitrary continuous semi-martingales X = M + A. The continuity properties
of quadratic variation carry over to the stochastic integral, and in conjunction
with the obvious linearity they characterize the integration.

© Springer Nature Switzerland AG 2021
O. Kallenberg, Foundations of Modern Probability, Probability Theory
and Stochastic Modelling 99, https://doi.org/10.1007/978-3-030-61871-1_19

A key result for applications is Itô’s formula, which shows how semi-mart-
ingales are transformed under smooth mappings. Though the present substi-
tution rule differs from the elementary result for Stieltjes integrals, the two
formulas can be brought into agreement by a simple modification of the in-
tegral. We conclude the chapter with some special topics of importance for
applications, such as the transformation of stochastic integrals under a ran-
dom time-change, and the integration of processes depending on a parameter.
The present material may be thought of as a continuation of the martingale
theory of Chapters 9–10. Though no results for Brownian motion are used
explicitly in this chapter, the existence of the Brownian quadratic variation in
Chapter 14 may serve as a motivation. We also need the representation and
measurability of limits obtained in Chapter 5.
Throughout the chapter we take F = (F_t) to be a right-continuous and
complete filtration on R_+. A process M is said to be a local martingale if it
is adapted to F and such that the stopped and centered processes M^{τ_n} − M_0
are true martingales for some optional times τ_n ↑ ∞. By a similar localization
we may define local L²-martingales, locally bounded martingales, locally
integrable processes, etc. The required optional times τ_n are said to form a
localizing sequence.
Any continuous local martingale may clearly be reduced by localization to a
sequence of bounded, continuous martingales. Conversely, we see by dominated
convergence that every bounded local martingale is a true martingale. The
following useful result may be less obvious.

Lemma 18.1 (local martingales) For any process M and optional times τ_n ↑ ∞,
these conditions are equivalent:
(i) M is a local martingale,
(ii) M^{τ_n} is a local martingale for every n.

Proof: (i) ⇒ (ii): If M is a local martingale with localizing sequence (σ_n)
and τ is an arbitrary optional time, then the processes (M^τ)^{σ_n} = (M^{σ_n})^τ are
true martingales. Thus, M^τ is again a local martingale with localizing sequence
(σ_n).
(ii) ⇒ (i): Suppose that each process M^{τ_n} is a local martingale with local-
izing sequence (σ^n_k). Since σ^n_k → ∞ a.s. for each n, we may choose some indices
k_n with
    P{σ^n_{k_n} < τ_n ∧ n} ≤ 2^{-n},   n ∈ N.
Writing τ′_n = τ_n ∧ σ^n_{k_n}, we get τ′_n → ∞ a.s. by the Borel–Cantelli lemma, and
so the optional times τ″_n = inf_{m≥n} τ′_m satisfy τ″_n ↑ ∞ a.s. It remains to note
that the processes M^{τ″_n} = (M^{τ_n})^{τ″_n} are true martingales.  □

Next we show that every continuous martingale of finite variation is a.s.


constant. An extension appears as Lemma 10.11.

Proposition 18.2 (finite-variation martingales) Let M be a continuous local
martingale. Then
    M has locally finite variation  ⇔  M is a.s. constant.

Proof: By localization we may reduce to the case where M_0 = 0 and M
has bounded variation. In fact, let V_t denote the total variation of M on the
interval [0, t], and note that V is continuous and adapted. For each n ∈ N,
we may then introduce the optional time τ_n = inf{t ≥ 0; V_t = n}, and note
that M^{τ_n} − M_0 is a continuous martingale with total variation bounded by n.
We further note that τ_n → ∞, and that if M^{τ_n} = M_0 a.s. for each n, then
even M = M_0 a.s. In the reduced case, fix any t > 0, write t_{n,k} = kt/n, and
conclude from the continuity of M that a.s.
    ζ_n ≡ Σ_{k≤n} (M_{t_{n,k}} − M_{t_{n,k-1}})²
        ≤ V_t max_{k≤n} |M_{t_{n,k}} − M_{t_{n,k-1}}| → 0.
Since ζ_n ≤ V_t², which is bounded by a constant, it follows by the martingale
property and dominated convergence that E M_t² = E ζ_n → 0, and so M_t = 0
a.s. for each t > 0.  □

Our construction of stochastic integrals depends on the quadratic variation


and covariation processes, which therefore need to be constructed first. Here we
use a direct approach, which has the further advantage of giving some insight
into the nature of the basic integration-by-parts formula in Theorem 18.16.
An alternative but less elementary approach would be to use the Doob–Meyer
decomposition in Chapter 10.
The construction utilizes predictable step processes of the form
    V_t = Σ_k ξ_k 1{t > τ_k} = Σ_k η_k 1_{(τ_k, τ_{k+1}]}(t),   t ≥ 0,   (1)
where the τ_n are optional times with τ_n ↑ ∞ a.s., and the ξ_k and η_k are F_{τ_k}-
measurable random variables for all k ∈ N. For any process X we consider the
elementary integral process V · X, given as in Chapter 9 by
    (V · X)_t ≡ ∫_0^t V dX = Σ_k ξ_k (X_t − X_{τ_k ∧ t})
             = Σ_k η_k (X_{τ_{k+1} ∧ t} − X_{τ_k ∧ t}),   (2)
where the series converge since they have only finitely many non-zero terms.
Note that (V ·X)0 = 0, and that V ·X inherits the possible continuity properties
of X. It is further useful to note that V · X = V · (X − X0 ). The following
simple estimate will be needed later.
Lemma 18.3 (martingale preservation and L²-bound) For any continuous
L²-martingale M with M_0 = 0 and predictable step process V with |V| ≤ 1, the
process V · M is again an L²-martingale satisfying
    E(V · M)_t² ≤ E M_t²,   t ≥ 0.

Proof: First suppose that the sum in (1) has only finitely many non-zero
terms. Then V · M is a martingale by Corollary 9.15, and the L²-bound follows
by the computation
    E(V · M)_t² = E Σ_k η_k² (M_{τ_{k+1} ∧ t} − M_{τ_k ∧ t})²
               ≤ E Σ_k (M_{τ_{k+1} ∧ t} − M_{τ_k ∧ t})² = E M_t².
The estimate extends to the general case by Fatou's lemma, and the martingale
property then extends by uniform integrability.  □
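As a numerical aside (not part of the formal development), the elementary integral (2) can be evaluated directly on a discretized path. The sketch below is an illustration with hypothetical helper names: optional times are given as grid indices, and the stopped-increment form of (2) is checked against a direct sum of V over the increments of X.

```python
import numpy as np

def stopped(X, j):
    """Path X stopped at grid index j: (X^tau)_i = X_{min(i, j)}."""
    return X[np.minimum(np.arange(len(X)), j)]

def elementary_integral(eta, tau, X):
    """(V.X)_t = sum_k eta_k (X_{tau_{k+1} ^ t} - X_{tau_k ^ t}) for the step
    process V = sum_k eta_k 1_{(tau_k, tau_{k+1}]}, times as grid indices."""
    VX = np.zeros(len(X))
    for k, e in enumerate(eta):
        VX += e * (stopped(X, tau[k + 1]) - stopped(X, tau[k]))
    return VX

# A deterministic path on the grid 0, 1, ..., 5 and a two-step integrand.
X = np.array([0.0, 1.0, -0.5, 2.0, 2.5, 1.0])
eta = [2.0, -1.0]          # eta_k, F_{tau_k}-measurable in a real model
tau = [1, 3, 5]            # tau_0 < tau_1 < tau_2 as grid indices
VX = elementary_integral(eta, tau, X)

# Cross-check: V equals 2 on (1, 3] and -1 on (3, 5], so it multiplies the
# increments dX over exactly those grid intervals.
i = np.arange(5)
V_left = np.where((i >= 1) & (i < 3), 2.0, np.where(i >= 3, -1.0, 0.0))
direct = np.concatenate([[0.0], np.cumsum(V_left * np.diff(X))])
```

Note that (V · X)_0 = 0 and that the two evaluations agree at every grid time, as (2) requires.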

Now consider the space M² of all L²-bounded, continuous martingales M
with M_0 = 0, equipped with the norm ‖M‖ = ‖M_∞‖_2. Recall that ‖M^*‖_2 ≤
2‖M‖ by Proposition 9.17.

Lemma 18.4 (completeness) Let M² be the space of L²-bounded, continuous
martingales with M_0 = 0 and norm ‖M‖ = ‖M_∞‖_2. Then M² is complete and
hence a Hilbert space.

Proof: For any Cauchy sequence M¹, M², . . . in M², the sequence (M^n_∞)
is Cauchy in L² and thus converges toward some ξ ∈ L². Introduce the L²-
martingale M_t = E(ξ | F_t), t ≥ 0, and note that M_∞ = ξ a.s. since ξ is
F_∞-measurable. Hence,
    ‖(M^n − M)^*‖_2 ≤ 2‖M^n − M‖ = 2‖M^n_∞ − M_∞‖_2 → 0,
and so ‖M^n − M‖ → 0. Moreover, (M^n − M)^* → 0 a.s. along a sub-sequence,
which implies that M is a.s. continuous with M_0 = 0.  □

We may now establish the existence and basic properties of the quadratic
variation and covariation processes [M ] and [M, N ]. Extensions to possibly
discontinuous processes are considered in Chapter 20.

Theorem 18.5 (covariation) For any continuous local martingales M, N,
there exists a continuous process [M, N] with locally finite variation and [M, N]_0
= 0, such that a.s.
(i) MN − [M, N] is a local martingale,
(ii) [M, N] = [N, M],
(iii) [aM_1 + bM_2, N] = a[M_1, N] + b[M_2, N],
(iv) [M, N] = [M − M_0, N],
(v) [M] = [M, M] is non-decreasing,
(vi) [M^τ, N] = [M^τ, N^τ] = [M, N]^τ for any optional time τ.
The process [M, N] is determined a.s. uniquely by (i).

Proof: The a.s. uniqueness of [M, N] follows from Proposition 18.2, and
(ii)−(iii) are immediate consequences. If [M, N] exists with the stated proper-
ties and τ is an optional time, then Lemma 18.1 shows that M^τ N^τ − [M, N]^τ
is a local martingale, as is the process M^τ(N − N^τ) by Corollary 9.15. Hence,
even M^τ N − [M, N]^τ is a local martingale, and (vi) follows. Furthermore,
MN − (M − M_0)N = M_0 N is a local martingale, which yields (iv) whenever
either side exists. If both [M + N] and [M − N] exist, then
    4MN − [M + N] + [M − N]
        = ((M + N)² − [M + N]) − ((M − N)² − [M − N])
is a local martingale, and so we may take [M, N] = ([M + N] − [M − N])/4.
It is then enough to prove the existence of [M] when M_0 = 0.
First let M be bounded. For each n ∈ N, let τ^n_0 = 0, and define recursively
    τ^n_{k+1} = inf{t > τ^n_k; |M_t − M_{τ^n_k}| = 2^{-n}},   k ≥ 0.
Note that τ^n_k → ∞ as k → ∞ for fixed n. Introduce the processes
    V^n_t = Σ_k M_{τ^n_k} 1{t ∈ (τ^n_k, τ^n_{k+1}]},
    Q^n_t = Σ_k (M_{t ∧ τ^n_k} − M_{t ∧ τ^n_{k-1}})².
The V^n are bounded, predictable step processes, and clearly
    M_t² = 2(V^n · M)_t + Q^n_t,   t ≥ 0.   (3)

By Lemma 18.3, the integrals V^n · M are continuous L²-martingales, and since
|V^n − M| ≤ 2^{-n} for each n, we have for m ≤ n
    ‖V^m · M − V^n · M‖ = ‖(V^m − V^n) · M‖ ≤ 2^{-m+1}‖M‖.
Hence, Lemma 18.4 yields a continuous martingale N with (V^n · M − N)^* → 0
in probability. The process [M] = M² − 2N is again continuous, and by (3) we
have
    (Q^n − [M])^* = 2(N − V^n · M)^* → 0 in probability.
In particular, [M] is a.s. non-decreasing on the random time set T = {τ^n_k; n, k
∈ N}, which extends by continuity to the closure T̄. Also note that [M] is
constant on each interval in T̄^c, since this is true for M and hence also for
every Q^n. Thus, (v) follows.
In the unbounded case, define
    τ_n = inf{t > 0; |M_t| = n},   n ∈ N.
The processes [M^{τ_n}] exist as before, and clearly [M^{τ_m}]^{τ_m} = [M^{τ_n}]^{τ_m} a.s. for all
m < n. Hence, [M^{τ_m}] = [M^{τ_n}] a.s. on [0, τ_m], and since τ_n → ∞, there exists a
non-decreasing, continuous, and adapted process [M], such that [M] = [M^{τ_n}]
a.s. on [0, τ_n] for every n. Here (M^{τ_n})² − [M]^{τ_n} is a local martingale for every
n, and so M² − [M] is a local martingale by Lemma 18.1.  □
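The level-crossing scheme of the proof can be imitated numerically. The rough sketch below (an illustration only, not part of the text) takes a discretized Brownian path for M, so that [M]_1 = 1, detects surrogate crossing times on the simulation grid, and computes Q^n at t = 1; up to discretization bias, Q^n should approach [M]_1 as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized Brownian path M on [0, 1], for which [M]_1 = 1.
steps = 2**16
M = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, (1.0/steps)**0.5, steps))])

def crossing_qv(M, eps):
    """Q^n at t = 1 from the proof: squared increments of M between successive
    surrogate crossing times, detected as the first grid points where the path
    has moved by eps = 2^{-n} since the previous crossing."""
    increments = []
    last = M[0]
    for x in M[1:]:
        if abs(x - last) >= eps:
            increments.append(x - last)
            last = x
    increments.append(M[-1] - last)       # final partial increment up to t = 1
    return float(np.sum(np.square(increments)))

qv = [crossing_qv(M, 2.0**-n) for n in (2, 3, 4, 5)]
```

The values drift toward 1; the residual error reflects both the finite grid (crossings are detected with overshoot) and the Monte Carlo path itself.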

Next we establish a basic continuity property.

Proposition 18.6 (continuity) For any continuous local martingales M_n start-
ing at 0, we have
    M_n^* → 0 in probability  ⇔  [M_n]_∞ → 0 in probability.

Proof: First let M_n^* → 0 in probability. Fix any ε > 0, and define τ_n =
inf{t ≥ 0; |M_n(t)| > ε}, n ∈ N. Write N_n = M_n² − [M_n], and note that N_n^{τ_n}
is a true martingale on R̄_+. In particular, E[M_n]_{τ_n} ≤ ε², and so by Chebyshev's
inequality
    P{[M_n]_∞ > ε} ≤ P{τ_n < ∞} + ε^{-1} E[M_n]_{τ_n}
                   ≤ P{M_n^* > ε} + ε.
Here the right-hand side tends to zero as n → ∞ and then ε → 0, which shows
that [M_n]_∞ → 0 in probability.
Conversely, let [M_n]_∞ → 0 in probability. By a localization argument and
Fatou's lemma, we may reduce to the case where the M_n are L²-bounded. Now
proceed as before to get M_n^* → 0 in probability.  □

Next we prove a pair of basic norm inequalities¹ involving the quadratic
variation, known as the BDG inequalities. Partial extensions to discontinuous
martingales appear in Theorem 20.12.

Theorem 18.7 (norm comparison, Burkholder, Millar, Gundy, Novikov) For
a continuous local martingale M with M_0 = 0, we have²
    E M^{*p} ≍ E[M]^{p/2}_∞,   p > 0.

Proof: By optional stopping, we may take M and [M] to be bounded. Write
M′ = M − M^τ with τ = inf{t; M_t² = r}, and define N = (M′)² − [M′]. By
Corollary 9.31, we have for any r > 0 and c ∈ (0, 2^{-p})
    P{M^{*2} ≥ 4r} − P{[M]_∞ ≥ cr} ≤ P{M^{*2} ≥ 4r, [M]_∞ < cr}
        ≤ P{inf_t N_t > −cr, sup_t N_t > r − cr}
        ≤ c P{N^* > 0}
        ≤ c P{M^{*2} ≥ r}.
Multiplying by (p/2) r^{p/2−1} and integrating over R_+, we get by Lemma 4.4
    2^{-p} E M^{*p} − c^{-p/2} E[M]^{p/2}_∞ ≤ c E M^{*p},
which gives the bound ≲ with domination constant c_p = c^{-p/2}/(2^{-p} − c).
¹ Recall that f ≍ g means f ≤ cg and g ≤ cf for some constant c > 0.
² The domination constants are understood to depend only on p.

Defining N as before with τ = inf{t; [M]_t = r}, we get for any r > 0 and
c ∈ (0, 2^{-p/2-2})
    P{[M]_∞ ≥ 2r} − P{M^{*2} ≥ cr} ≤ P{[M]_∞ ≥ 2r, M^{*2} < cr}
        ≤ P{sup_t N_t < 4cr, inf_t N_t < 4cr − r}
        ≤ 4c P{[M]_∞ ≥ r}.
Integrating as before yields
    2^{-p/2} E[M]^{p/2}_∞ − c^{-p/2} E M^{*p} ≤ 4c E[M]^{p/2}_∞,
and the bound ≳ follows with domination constant c_p = c^{-p/2}/(2^{-p/2} − 4c).  □
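As an illustration with discretization assumptions (not a proof), the two-sided bound can be probed by Monte Carlo. Stopping a Brownian motion when it first reaches ±1 makes M^* ≈ 1 while [M]_∞ equals the hitting time, whose mean is 1, so for p = 2 both sides should be of order one.

```python
import numpy as np

rng = np.random.default_rng(1)

def bdg_moments(p, n_paths=1000, steps=4000, horizon=4.0):
    """Monte Carlo estimates of E M*^p and E [M]_inf^{p/2} for the Brownian
    motion M = B stopped when |B| first reaches 1, so that M* is about 1 and
    [M]_inf is the (discretized) hitting time."""
    dt = horizon / steps
    max_p = np.empty(n_paths)
    qv_p = np.empty(n_paths)
    for i in range(n_paths):
        B = np.cumsum(rng.normal(0.0, np.sqrt(dt), steps))
        crossed = np.abs(B) >= 1.0
        hit = int(np.argmax(crossed)) if crossed.any() else steps - 1
        M = B[:hit + 1]
        max_p[i] = np.max(np.abs(M))**p
        qv_p[i] = ((hit + 1) * dt)**(p / 2)   # [M]_inf = hitting time for BM
    return max_p.mean(), qv_p.mean()

# For p = 2 both sides are close to 1 (E tau = 1 for the hitting time of +-1),
# so the BDG ratio should be of order one, as the theorem predicts.
m2, q2 = bdg_moments(2.0)
ratio = m2 / q2
```

The simulation only exhibits the comparison for one martingale and one p; the theorem asserts it uniformly, with constants depending on p alone.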

We often need to certify that a given local martingale is a true martingale.


The last theorem yields a useful criterion.

Corollary 18.8 (uniform integrability) Let M be a continuous local martin-
gale. Then M is a uniformly integrable martingale, whenever
    E(|M_0| + [M]^{1/2}_∞) < ∞.

Proof: By Theorem 18.7 we have E M^* < ∞, and the martingale property
follows by dominated convergence.  □

The basic properties of [M, N ] suggest that we think of the covariation


process as a kind of inner product. A further justification is given by the
following Cauchy-type inequalities.

Proposition 18.9 (Cauchy-type inequalities, Courrège) For any continuous
local martingales M, N and product-measurable processes U, V, we have a.s.
(i) |[M, N]| ≤ ∫ |d[M, N]| ≤ ([M][N])^{1/2},
(ii) |∫_0^t UV d[M, N]| ≤ ((U² · [M])_t (V² · [N])_t)^{1/2},   t ≥ 0.

Proof: (i) By Theorem 18.5 (iii) and (v), we have a.s. for any a, b ∈ R and
t > 0
    0 ≤ [aM + bN]_t = a²[M]_t + 2ab[M, N]_t + b²[N]_t.
By continuity we may choose a common exceptional null set for all a, b, and
so [M, N]_t² ≤ [M]_t [N]_t a.s. Applying this inequality to the processes M − M^s
and N − N^s for any s < t, we obtain a.s.
    |[M, N]_t − [M, N]_s| ≤ ([M]_t − [M]_s)^{1/2} ([N]_t − [N]_s)^{1/2},   (4)
and by continuity we may again choose a common null set. Now let 0 = t_0 <
t_1 < · · · < t_n = t be arbitrary, and conclude from (4) and the classical Cauchy
inequality that
    |[M, N]_t| ≤ Σ_k |[M, N]_{t_k} − [M, N]_{t_{k-1}}| ≤ ([M]_t [N]_t)^{1/2}.
It remains to take the supremum over all partitions of [0, t].
(ii) Writing dμ = d[M], dν = d[N], and dρ = |d[M, N]|, we conclude from
(i) that (ρI)² ≤ (μI)(νI) a.s. for every interval I. By continuity, we may
choose the exceptional null set A to be independent of I. Letting G ⊂ R_+ be
open with connected components I_k and using Cauchy's inequality, we get on
A^c
    ρG = Σ_k ρI_k ≤ Σ_k (μI_k νI_k)^{1/2}
       ≤ (Σ_j μI_j Σ_k νI_k)^{1/2} = (μG νG)^{1/2}.
By Lemma 1.36, the last relation extends to any B ∈ B_{R_+}.
Now fix any simple, measurable functions f = Σ_k a_k 1_{B_k} and g = Σ_k b_k 1_{B_k}.
Using Cauchy's inequality again, we obtain on A^c
    ρ|fg| ≤ Σ_k |a_k b_k| ρB_k ≤ Σ_k |a_k b_k| (μB_k νB_k)^{1/2}
          ≤ (Σ_j a_j² μB_j Σ_k b_k² νB_k)^{1/2} = (μf² νg²)^{1/2},
which extends by monotone convergence to any measurable functions f, g on
R_+. By Lemma 1.35, we may choose f(t) = U_t(ω) and g(t) = V_t(ω) for any
fixed ω ∈ A^c.  □

Let E be the class of bounded, predictable step processes with jumps at
finitely many fixed times. To motivate the construction of general stochastic
integrals, and for subsequent needs, we derive a basic identity for elementary
integrals.

Lemma 18.10 (covariation identity) For any continuous local martingales
M, N and processes U, V ∈ E, the integrals U · M and V · N are again continuous
local martingales, and we have
    [U · M, V · N] = (UV) · [M, N] a.s.   (5)
Proof: We may clearly take M_0 = N_0 = 0. The first assertion follows by
localization from Lemma 18.3. To prove (5), let U_t = Σ_{k≤n} ξ_k 1_{(t_k, t_{k+1}]}(t), where
ξ_k is bounded and F_{t_k}-measurable for each k. By localization we may take M,
N, [M, N] to be bounded, so that M, N, and MN − [M, N] are martingales
on R̄_+. Then
    E(U · M)_∞ N_∞ = E Σ_j ξ_j (M_{t_{j+1}} − M_{t_j}) Σ_k (N_{t_{k+1}} − N_{t_k})
        = E Σ_k ξ_k (M_{t_{k+1}} N_{t_{k+1}} − M_{t_k} N_{t_k})
        = E Σ_k ξ_k ([M, N]_{t_{k+1}} − [M, N]_{t_k})
        = E(U · [M, N])_∞.
Replacing M, N by M^τ, N^τ for an arbitrary optional time τ, we get
    E(U · M)_τ N_τ = E(U · M^τ)_∞ N^τ_∞
        = E(U · [M^τ, N^τ])_∞
        = E(U · [M, N])_τ.
By Lemma 9.14, the process (U · M)N − U · [M, N] is then a martingale, and
so [U · M, N] = U · [M, N] a.s. The general formula follows by iteration.  □

To extend the stochastic integral V · M to more general processes V, we take
(5) to be the characteristic property. For any continuous local martingale M,
let L(M) be the class of all progressive processes V, such that (V² · [M])_t < ∞
a.s. for every t > 0.

Theorem 18.11 (martingale integral, Itô, Kunita & Watanabe) For any con-
tinuous local martingale M and process V ∈ L(M), there exists an a.s. unique
continuous local martingale V · M with (V · M)_0 = 0, such that for any con-
tinuous local martingale N,
    [V · M, N] = V · [M, N] a.s.

Proof: To prove the uniqueness, let M′, M″ be continuous local martingales
with M′_0 = M″_0 = 0, such that
    [M′, N] = [M″, N] = V · [M, N] a.s.
for all continuous local martingales N. Then by linearity [M′ − M″, N] = 0
a.s. Taking N = M′ − M″ gives [M′ − M″] = 0 a.s., and so M′ = M″ a.s. by
Proposition 18.6.
To prove the existence, we first assume ‖V‖²_M ≡ E(V² · [M])_∞ < ∞. Since
V is measurable, we get by Proposition 18.9 and Cauchy's inequality
    E|(V · [M, N])_∞| ≤ ‖V‖_M ‖N‖,   N ∈ M².
The mapping N ↦ E(V · [M, N])_∞ is then a continuous linear functional on
M², and so Lemma 18.4 yields an element V · M ∈ M² with
    E(V · [M, N])_∞ = E(V · M)_∞ N_∞,   N ∈ M².
Replacing N by N^τ for an arbitrary optional time τ and using Theorem 18.5
and optional sampling, we get
    E(V · [M, N])_τ = E(V · [M, N]^τ)_∞
        = E(V · [M, N^τ])_∞
        = E(V · M)_∞ N_τ
        = E(V · M)_τ N_τ.
Since V is progressive, Lemma 9.14 shows that V · [M, N] − (V · M)N is
a martingale, which implies [V · M, N] = V · [M, N] a.s. The last relation
extends by localization to any continuous local martingale N.
For general V, define
    τ_n = inf{t > 0; (V² · [M])_t = n},   n ∈ N.

By the previous argument, there exist some continuous local martingales V · M^{τ_n}
such that, for any continuous local martingale N,
    [V · M^{τ_n}, N] = V · [M^{τ_n}, N] a.s.,   n ∈ N.   (6)
For m < n it follows that (V · M^{τ_n})^{τ_m} satisfies the corresponding relation with
[M^{τ_m}, N], and so (V · M^{τ_n})^{τ_m} = V · M^{τ_m} a.s. Hence, there exists a continuous
process V · M with (V · M)^{τ_n} = V · M^{τ_n} a.s. for all n, and Lemma 18.1 shows
that V · M is again a local martingale. Finally, (6) yields [V · M, N] = V · [M, N]
a.s. on [0, τ_n] for each n, and so the same relation holds on R_+.  □
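Taking N = V · M in the defining relation gives the isometry E(V · M)²_∞ = E(V² · [M])_∞, which can be illustrated by simulation. The sketch below (an illustration only) takes M to be a discretized Brownian motion, so [M]_t = t, with the progressive integrand V_t = B_t; both sides then equal 1/2. The integral is discretized with left-point (predictable) sums.

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo check of the isometry E(V.B)_1^2 = E(V^2.[B])_1 behind the
# Hilbert-space construction, with [B]_t = t and V_t = B_t.
n_paths, steps = 4000, 1000
dt = 1.0 / steps
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, steps))
B_left = np.cumsum(dB, axis=1) - dB           # B at left endpoints, B_0 = 0
ito = np.sum(B_left * dB, axis=1)             # left-point sums for (V.B)_1
lhs = float(np.mean(ito**2))                  # estimate of E(V.B)_1^2
rhs = float(np.mean(np.sum(B_left**2, axis=1) * dt))  # E of integral of B_t^2 dt
```

Both estimates should be near 1/2, since E ∫_0^1 B_t² dt = ∫_0^1 t dt = 1/2.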

By Lemma 18.10, the stochastic integral V · M of Theorem 18.11 extends


the previously defined elementary integral. It is also clear that V · M is a.s.
bilinear in the pair (V, M ), with the following basic continuity property.

Lemma 18.12 (continuity) For any continuous local martingales M_n and
processes V_n ∈ L(M_n), we have
    (V_n · M_n)^* → 0 in probability  ⇔  (V_n² · [M_n])_∞ → 0 in probability.

Proof: Note that [V_n · M_n] = V_n² · [M_n], and use Proposition 18.6.  □

Before continuing our study of basic properties, we extend the stochastic


integral to a larger class of integrators. By a continuous semi-martingale we
mean a process X = M + A, where M is a continuous local martingale and
A is a continuous, adapted process with locally finite variation and A0 = 0.
The decomposition X = M + A is then a.s. unique by Proposition 18.2, and it
is often referred to as the canonical decomposition of X. A continuous semi-
martingale in Rd is defined as a process X = (X 1 , . . . , X d ), where X 1 , . . . , X d
are continuous semi-martingales in R.
Let L(A) be the class of progressive processes V, such that the processes
(V · A)_t = ∫_0^t V dA exist as elementary Stieltjes integrals. For any continuous
semi-martingale X = M + A, we write L(X) = L(M) ∩ L(A), and define the
X-integral of a process V ∈ L(X) as the sum V · X = V · M + V · A, which makes
V · X a continuous semi-martingale with canonical decomposition V · M + V · A.
For progressive processes V, it is further clear that
    V ∈ L(X)  ⇔  V² ∈ L([M]), V ∈ L(A).

Lemma 18.12 yields the following stochastic version of the dominated con-
vergence theorem.
Corollary 18.13 (dominated convergence) For any continuous semi-martin-
gale X and processes U, V, V_1, V_2, . . . ∈ L(X), we have
    V_n → V, |V_n| ≤ U  ⇒  (V_n · X − V · X)^*_t → 0 in probability,   t ≥ 0.

Proof: Let X = M + A. Since U ∈ L(X), we have U² ∈ L([M]) and
U ∈ L(A). By dominated convergence for ordinary Stieltjes integrals, we
obtain a.s.
    ((V_n − V)² · [M])_t + (V_n · A − V · A)^*_t → 0.
Here the former convergence implies (V_n · M − V · M)^*_t → 0 in probability by
Lemma 18.12, and the assertion follows.  □

We further extend the elementary chain rule of Lemma 1.25 to stochastic


integrals.

Lemma 18.14 (chain rule) For any continuous semi-martingale X and pro-
gressive processes U, V with V ∈ L(X), we have
(i) U ∈ L(V · X) ⇔ UV ∈ L(X), and then
(ii) U · (V · X) = (UV) · X a.s.

Proof: (i) Letting X = M + A, we have
    U ∈ L(V · X)  ⇔  U² ∈ L([V · M]), U ∈ L(V · A),
    UV ∈ L(X)    ⇔  (UV)² ∈ L([M]), UV ∈ L(A).
Since [V · M] = V² · [M], the two pairs of conditions are equivalent.
(ii) The relation U · (V · A) = (UV) · A is elementary. To see that even
U · (V · M) = (UV) · M a.s., consider any continuous local martingale N, and
note that
    [(UV) · M, N] = (UV) · [M, N]
        = U · (V · [M, N])
        = U · [V · M, N]
        = [U · (V · M), N].  □

Next we examine the behavior under optional stopping.

Lemma 18.15 (optional stopping) For any continuous semi-martingale X,
process V ∈ L(X), and optional time τ, we have a.s.
    (V · X)^τ = V · X^τ = (V 1_{[0,τ]}) · X.

Proof: The relations being obvious for ordinary Stieltjes integrals, we may
take X = M to be a continuous local martingale. Then (V · M)^τ is a continuous
local martingale starting at 0, and we have
    [(V · M)^τ, N] = [V · M, N^τ]
        = V · [M, N^τ]
        = V · [M, N]^τ
        = (V 1_{[0,τ]}) · [M, N].
Thus, (V · M)^τ satisfies the conditions characterizing the integrals V · M^τ and
(V 1_{[0,τ]}) · M.  □

We may extend the definitions of quadratic variation and covariation to any
continuous semi-martingales X = M + A and Y = N + B by putting [X] = [M]
and [X, Y] = [M, N]. As a crucial step toward a general substitution rule,
we show how the covariation process can be expressed in terms of stochastic
integrals. For martingales X and Y, the result is implicit in the proof of
Theorem 18.5.

Theorem 18.16 (integration by parts) For any continuous semi-martingales
X, Y, we have a.s.
    XY = X_0 Y_0 + X · Y + Y · X + [X, Y].   (7)
Proof: We may take X = Y, since the general result will then follow by
polarization. First let X = M ∈ M², and define V^n and Q^n as in the proof of
Theorem 18.5. Then V^n → M and |V^n_t| ≤ M^*_t < ∞, and so Corollary 18.13
yields (V^n · M)_t → (M · M)_t in probability for each t ≥ 0. Now (7) follows as
we let n → ∞ in the relation M² = 2 V^n · M + Q^n, and it extends by localization
to general continuous local martingales M with M_0 = 0. If instead X = A,
then (7) reduces to A² = 2 A · A, which holds by Fubini's theorem.
For general X we may take X_0 = 0, since the formula for general X_0 will
then follow by an easy computation from the result for X − X_0. In this case,
(7) reduces to X² = 2 X · X + [M]. Subtracting the formulas for M² and A²,
it remains to show that AM = A · M + M · A a.s. Then fix any t > 0, put
t^n_k = (k/n)t, and introduce for s ∈ (t^n_{k−1}, t^n_k], k, n ∈ N, the processes
    A^n_s = A_{t^n_{k−1}},   M^n_s = M_{t^n_k}.
Note that
    A_t M_t = (A^n · M)_t + (M^n · A)_t,   n ∈ N.
Here (A^n · M)_t → (A · M)_t in probability by Corollary 18.13, and (M^n · A)_t →
(M · A)_t by dominated convergence for ordinary Stieltjes integrals.  □

Our terminology is justified by the following result, which extends Theo-
rem 14.9 for Brownian motion. It also shows that [X, Y] is a.s. measurably
determined³ by X and Y.
³ This is remarkable, since it is defined by martingale properties that depend on both
the probability measure and the filtration.

Proposition 18.17 (approximation of covariation, Fisk) For any continuous
semi-martingales X, Y on [0, t] and partitions 0 = t^n_0 < t^n_1 < · · · < t^n_{k_n} = t,
n ∈ N, with max_k (t^n_k − t^n_{k−1}) → 0, we have
    ζ_n ≡ Σ_k (X_{t^n_k} − X_{t^n_{k−1}})(Y_{t^n_k} − Y_{t^n_{k−1}}) → [X, Y]_t in probability.   (8)

Proof: We may clearly take X_0 = Y_0 = 0. Introduce for s ∈ (t^n_{k−1}, t^n_k],
k, n ∈ N, the predictable step processes
    X^n_s = X_{t^n_{k−1}},   Y^n_s = Y_{t^n_{k−1}},
and note that
    X_t Y_t = (X^n · Y)_t + (Y^n · X)_t + ζ_n,   n ∈ N.
Since X^n → X and Y^n → Y, and also (X^n)^*_t ≤ X^*_t < ∞ and (Y^n)^*_t ≤ Y^*_t < ∞,
we get by Corollary 18.13 and Theorem 18.16
    ζ_n → X_t Y_t − (X · Y)_t − (Y · X)_t = [X, Y]_t in probability.  □
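The approximation (8) is easy to see in simulation. The sketch below (an illustration only) takes X = B and Y = B + A with A_t = t, so that [X, Y]_1 = [B]_1 = 1 and the finite-variation part drops out of the limit, as the proposition implies.

```python
import numpy as np

rng = np.random.default_rng(2)

# X = B and Y = B + A with A_t = t, so [X, Y]_1 = [B]_1 = 1.
steps = 2**14
t = np.linspace(0.0, 1.0, steps + 1)
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, (1.0/steps)**0.5, steps))])
X, Y = B, B + t

def zeta(X, Y, k):
    """zeta_n of (8) over the dyadic partition of [0, 1] into 2^k intervals
    (the simulation grid must refine the partition)."""
    idx = np.arange(0, len(X), (len(X) - 1) // 2**k)
    return float(np.sum(np.diff(X[idx]) * np.diff(Y[idx])))

approx = [zeta(X, Y, k) for k in (4, 8, 12)]
```

As the mesh shrinks, ζ_n settles near 1, the covariation of the underlying Brownian part.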

We turn to a version of Itô's formula, arguably the most important formula
of modern probability.⁴ It shows that the class of continuous semi-martingales
is closed under smooth maps, and exhibits the canonical decomposition of the
image process in terms of the components of the original process. Extended
versions appear in Corollaries 18.19 and 18.20, as well as in Theorems 20.7 and
29.5.
Write C^k = C^k(R^d) for the class of k times continuously differentiable
functions on R^d. When f ∈ C², we write ∂_i f and ∂²_{ij} f for the first and second
order partial derivatives of f. Here and below, summation over repeated indices
is understood.

Theorem 18.18 (substitution rule, Itô) For any continuous semi-martingale
X in R^d and function f ∈ C²(R^d), we have a.s.
    f(X) = f(X_0) + ∂_i f(X) · X^i + ½ ∂²_{ij} f(X) · [X^i, X^j].   (9)
This may also be written in differential form as
    df(X) = ∂_i f(X) dX^i + ½ ∂²_{ij} f(X) d[X^i, X^j].   (10)
It is suggestive to write d[X^i, X^j] = dX^i dX^j, and think of (10) as a second
order Taylor expansion.
If X has canonical decomposition M + A, we get the corresponding de-
composition of f(X) by substituting M^i + A^i for X^i on the right side of (9).
When M = 0, the last term vanishes, and (9) reduces to the familiar substitu-
tion rule for ordinary Stieltjes integrals. In general, the appearance of this Itô
correction term shows that the rules of ordinary calculus fail for the Itô integral.
⁴ Possible contenders might include the representation of infinitely divisible distributions,
the polynomial representation of multiple WI-integrals, and the formula for the generator of
a continuous Feller process.

Proof of Theorem 18.18: For notational convenience, we may take d = 1,
the general case being similar. Then fix a continuous semi-martingale X in R,
and let C be the class of functions f ∈ C² satisfying (9), now written in the
form
    f(X) = f(X_0) + f′(X) · X + ½ f″(X) · [X].   (11)
The class C is clearly a linear subspace of C² containing the functions f(x) ≡ 1
and f(x) ≡ x. We shall prove that C is closed under multiplication, and hence
contains all polynomials.
Then suppose that (11) holds for both f and g, so that F = f(X) and
G = g(X) are continuous semi-martingales. Using the definition of the integral,
along with Lemma 18.14 and Theorem 18.16, we get
    (fg)(X) − (fg)(X_0) = FG − F_0 G_0
        = F · G + G · F + [F, G]
        = F · (g′(X) · X + ½ g″(X) · [X])
          + G · (f′(X) · X + ½ f″(X) · [X]) + [f′(X) · X, g′(X) · X]
        = (fg′ + f′g)(X) · X + ½ (fg″ + 2f′g′ + f″g)(X) · [X]
        = (fg)′(X) · X + ½ (fg)″(X) · [X].


Now let f ∈ C² be arbitrary. By Weierstrass' approximation theorem⁵, we
may choose some polynomials p_1, p_2, . . . , such that sup_{|x|≤c} |p_n(x) − f″(x)| → 0
for every c > 0. Integrating the p_n twice yields some polynomials f_n satisfying
    sup_{|x|≤c} (|f_n(x) − f(x)| ∨ |f′_n(x) − f′(x)| ∨ |f″_n(x) − f″(x)|) → 0,   c > 0.
In particular, f_n(X_t) → f(X_t) for each t > 0. Letting X have canonical
decomposition M + A and using dominated convergence for ordinary Stieltjes
integrals, we get for any t ≥ 0
    (f′_n(X) · A + ½ f″_n(X) · [X])_t → (f′(X) · A + ½ f″(X) · [X])_t.
Similarly, ({f′_n(X) − f′(X)}² · [M])_t → 0 for all t, and so by Lemma 18.12
    (f′_n(X) · M)_t → (f′(X) · M)_t in probability,   t ≥ 0.
Thus, formula (11) for the polynomials f_n extends in the limit to the same
relation for f.  □
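The one-dimensional formula (11) can be checked pathwise on a fine grid. The sketch below (an illustration only, with f = cos and B a discretized Brownian motion) discretizes the stochastic integral at left endpoints and uses Δt for the increments of [B]; the two sides then agree up to a discretization error of order Δt^{1/2}.

```python
import numpy as np

rng = np.random.default_rng(3)

steps = 2**16
dt = 1.0 / steps
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), steps))])
dB = np.diff(B)

f, fp, fpp = np.cos, lambda x: -np.sin(x), lambda x: -np.cos(x)  # f, f', f''

# Pathwise check of (11): f(B_1) ~ f(B_0) + sum f'(B) dB + (1/2) sum f''(B) dt,
# with the stochastic integral discretized at LEFT endpoints (the Ito choice)
# and [B]_t = t supplying the correction term.
rhs = f(B[0]) + np.sum(fp(B[:-1]) * dB) + 0.5 * np.sum(fpp(B[:-1])) * dt
err = abs(float(f(B[-1]) - rhs))
```

Dropping the ½ f″ term leaves an O(1) discrepancy, which is precisely the failure of ordinary calculus noted above.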

We also need a local version of the last theorem, involving stochastic in-
tegrals up to the first time ζ_D when X exits a given domain D ⊂ R^d. For
continuous and adapted X, the time ζ_D is clearly predictable, in the sense of
being announced by some optional times τ_n ↑ ζ_D with τ_n < ζ_D a.s. on {ζ_D > 0}
for all n. Indeed, writing ρ for the Euclidean metric in R^d, we may choose
    τ_n = inf{t ∈ [0, n]; ρ(X_t, D^c) ≤ n^{-1}},   n ∈ N.   (12)
We say that X is a semi-martingale on [0, ζ_D), if the stopped process X^{τ_n}
is a semi-martingale in the usual sense for every n ∈ N. To define the co-
variation processes [X^i, X^j] on the interval [0, ζ_D), we require [X^i, X^j]^{τ_n} =
[(X^i)^{τ_n}, (X^j)^{τ_n}] a.s. for every n. Stochastic integrals with respect to X^1, . . . , X^d
are defined on [0, ζ_D) in a similar way.
⁵ Any continuous function on [0, 1] admits a uniform approximation by polynomials.

Corollary 18.19 (local Itô formula) For any domain D ⊂ R^d, let X be a
continuous semi-martingale on [0, ζ_D). Then (9) holds a.s. on [0, ζ_D) for every
f ∈ C²(D).

Proof: Choose some f_n ∈ C²(R^d) with f_n(x) = f(x) when ρ(x, D^c) ≥ n^{-1}.
Applying Theorem 18.18 to f_n(X^{τ_n}) with τ_n as in (12), we get (9) on [0, τ_n].
Since n was arbitrary, the result extends to [0, ζ_D).  □

By a complex, continuous semi-martingale we mean a process of the form
Z = X + iY, where X, Y are real, continuous semi-martingales. The bilinearity
of the covariation process suggests that we define the quadratic variation of Z
as
    [Z] = [Z, Z] = [X + iY, X + iY] = [X] + 2i[X, Y] − [Y].
Let L(Z) be the class of processes W = U + iV with U, V ∈ L(X) ∩ L(Y). For
such a process W, we define the integral by
    W · Z = (U + iV) · (X + iY) = U · X − V · Y + i(U · Y + V · X).

Corollary 18.20 (conformal mapping) Let f be an analytic function on a


domain D ⊂ C. Then (9) holds for every continuous semi-martingale Z in D.

Proof: Writing f (x + iy) = g(x, y) + ih(x, y) for any x + iy ∈ D, we get

g1 + ih1 = f  , g2 + ih2 = if  ,


and so by iteration

g11 + ih11 = f  , 
g12 + ih12 = if  , 
g22 + ih22 = −f  .

Now (9) follows for Z = X + iY , as we apply Corollary 18.19 to the semi-mar-


tingale (X, Y ) and the functions g, h. 2
410 Foundations of Modern Probability

Under suitable regularity conditions, we may modify the Itô integral so


that it will obey the rules of ordinary calculus. Then for any continuous semi-
martingales X, Y , we define the Fisk–Stratonovich integral by
∫_0^t X ◦ dY = (X · Y )t + ½ [X, Y ]t ,        t ≥ 0,          (13)

or, in differential form, X ◦ dY = X dY + ½ d[X, Y ], where the first term on


the right is an ordinary Itô integral. The point of this modification is that
the substitution rule simplifies to df (X) = ∂i f (X) ◦ dX i , conforming with the
chain rule of ordinary calculus. We may also prove a version for FS -integrals
of the chain rule in Proposition 18.14.
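The difference between the two integrals is already visible in discrete approximations of ∫_0^t B dB for a Brownian motion B: left-endpoint Riemann sums approximate the Itô integral (Bt² − t)/2, while the symmetrized sums ½ Σk (B_{tk} + B_{t,k−1})(B_{tk} − B_{t,k−1}) telescope to Bt²/2, as the ordinary chain rule predicts for the FS-integral. A quick numerical sketch (grid size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, t = 100_000, 1.0
dt = t / n

# A Brownian path on a uniform grid of [0, t].
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
dB = np.diff(B)

ito = np.sum(B[:-1] * dB)                    # left endpoints: Ito integral
strat = np.sum(0.5 * (B[:-1] + B[1:]) * dB)  # symmetrized: FS integral

print(ito, strat)
assert abs(strat - B[-1]**2 / 2) < 1e-9      # telescopes exactly to B_t^2 / 2
assert abs(strat - ito - t / 2) < 0.02       # correction term [B]_t / 2 = t/2
```

The gap between the two sums is exactly half the discrete quadratic variation, matching the correction term in (13).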

Theorem 18.21 (modified Itô integral, Fisk, Stratonovich) The integral in


(13) satisfies the computational rules of elementary calculus. Thus, a.s.,
(i) for any continuous semi-martingale X in Rd and function f ∈ C 3 (Rd ),
f (Xt ) = f (X0 ) + ∫_0^t ∂i f (X) ◦ dX i ,        t ≥ 0,

(ii) for any real, continuous semi-martingales X, Y, Z,


X ◦ (Y ◦ Z) = (XY ) ◦ Z.

Proof: (i) By Itô’s formula,

∂i f (X) = ∂i f (X0 ) + ∂ij² f (X) · X j + ½ ∂ijk³ f (X) · [X j , X k ].

Using Itô’s formula again together with (5) and (13), we get

∫ ∂i f (X) ◦ dX i = ∂i f (X) · X i + ½ [∂i f (X), X i ]
= ∂i f (X) · X i + ½ ∂ij² f (X) · [X j , X i ]
= f (X) − f (X0 ).

(ii) By Lemma 18.14 and integration by parts,

X ◦ (Y ◦ Z) = X · (Y ◦ Z) + ½ [X, Y ◦ Z]
= X · (Y · Z) + ½ X · [Y, Z] + ½ [X, Y · Z] + ¼ [X, [Y, Z]]
= XY · Z + ½ [XY, Z]
= XY ◦ Z.                                                        2

The more convenient substitution rule of Theorem 18.21 comes at a high
price: The FS -integral does not preserve the martingale property, and it re-
quires even the integrand to be a continuous semi-martingale, which forces us
to impose the stronger regularity constraint on the function f in the substitu-
tion rule.
Our next task is to establish a basic uniqueness property, justifying our
reference to the process V · M in Theorem 18.11 as an integral.
Our next task is to establish a basic uniqueness property, justifying our
reference to the process V · M in Theorem 18.11 as an integral.

Theorem 18.22 (continuity) The integral V ·M in Theorem 18.11 is the a.s.


unique linear extension of the elementary stochastic integral, such that for any
t > 0,
(Vn2 · [M ])t →P 0   ⇒   (Vn · M )∗t →P 0.

This follows immediately from Lemmas 18.10 and 18.12, together with the
following approximation of progressive processes by predictable step processes.

Lemma 18.23 (approximation) For any continuous semi-martingale X =


M + A and process V ∈ L(X), there exist some processes V1 , V2 , . . . ∈ E, such
that a.s., simultaneously for all t > 0,

((Vn − V )2 · [M ])t + (|Vn − V | · A)t → 0.          (14)

Proof: We may take t = 1, since we can then combine the processes Vn


for disjoint, finite intervals to construct an approximating sequence on R+ . It
is further enough to consider approximations in the sense of convergence in
probability, since the a.s. versions will then follow for a suitable sub-sequence.
This allows us to perform the construction in steps, first approximating V by
bounded and progressive processes V ′ , next approximating each V ′ by contin-
uous and adapted processes V ′′ , and finally approximating each V ′′ by pre-
dictable step processes V ′′′ .
The first and last steps being elementary, we may focus on the second
step. Then let V be bounded. We need to construct some continuous, adapted
processes Vn , such that (14) holds a.s. for t = 1. Since the Vn can be chosen
to be uniformly bounded, we may replace the first term by (|Vn − V | · [M ])1 .
Thus, it is enough to establish the approximation (|Vn − V | · A)1 → 0 when the
process A is non-decreasing, continuous, and adapted with A0 = 0. Replacing
At by At + t if necessary, we may even take A to be strictly increasing.
To form the required approximations, we may introduce the inverse process
Ts = sup{t ≥ 0; At ≤ s}, and define for all t, h > 0
Vth = h−1 ∫_{T (At −h)}^{t} V dA
    = h−1 ∫_{(At −h)+}^{At} V (Ts ) ds.

Then Theorem 2.15 yields V h ◦ T → V ◦ T as h → 0, a.e. on [0, A1 ], and so by


dominated convergence,
∫_0^1 |V h − V | dA = ∫_0^{A1} |V h (Ts ) − V (Ts )| ds → 0.

The processes V h are clearly continuous. To prove their adaptedness, we note


that the process T (At − h) is adapted for every h > 0, by the definition of
T . Since V is progressive, we further note that V · A is adapted and hence
progressive. The adaptedness of (V · A)T (At −h) now follows by composition. 2
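In the special case At = t we have Ts = s, and the approximation above reduces to the moving average V_t^h = h−1 ∫_{(t−h)+}^{t} Vs ds, with convergence supplied by Theorem 2.15 and dominated convergence. A discrete sketch of this special case for a step-function integrand (the grid and window sizes are arbitrary choices):

```python
import numpy as np

dt = 1e-4
t = np.arange(0.0, 1.0, dt)
V = (t >= 0.5).astype(float)        # a bounded "progressive" step integrand

def smooth(V, h, dt):
    """V^h_t = h^{-1} * integral of V over ((t - h)^+, t], here with A_t = t."""
    w = int(round(h / dt))
    c = np.concatenate([[0.0], np.cumsum(V) * dt])   # running integral of V
    k = np.arange(1, len(V) + 1)
    lo = np.maximum(k - w, 0)                        # truncate window at 0
    return (c[k] - c[lo]) / h

errs = [dt * np.sum(np.abs(smooth(V, h, dt) - V)) for h in (0.2, 0.1, 0.05)]
print(errs)
assert errs[0] > errs[1] > errs[2]        # L1-error shrinks with the window
assert abs(errs[2] - 0.05 / 2) < 0.01     # for a unit jump, error ~ h/2
```

The averaged processes are continuous in t, exactly as in the proof, while the L1-distance to V tends to 0 as h → 0.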

Though the class L(X) of stochastic integrands is sufficient for most pur-
poses, it is sometimes useful to allow integration of slightly more general pro-
cesses. Given any continuous semi-martingale X = M + A, let L̂(X) denote
the class of product-measurable processes V , such that (V − Ṽ ) · [M ] = 0 and
(V − Ṽ ) · A = 0 a.s. for some process Ṽ ∈ L(X). For V ∈ L̂(X) we define
V · X = Ṽ · X a.s. The extension clearly enjoys all the previously established
properties of stochastic integration.
We often need to see how semi-martingales, covariation processes, and
stochastic integrals are transformed by a random time change. Then consider a
non-decreasing, right-continuous family of finite optional times τs , s ≥ 0, here
referred to as a finite random time change τ . If F is right-continuous, then so
is the induced filtration Gs = Fτs , s ≥ 0, by Lemma 9.3. A process X is said to
be τ -continuous, if it is a.s. continuous on R+ and constant on every interval
[τs− , τs ], s ≥ 0, where τ0− = X0− = 0 by convention.

Theorem 18.24 (random time change, Kazamaki) Let τ be a finite random


time change with induced filtration G, and let X = M + A be a τ -continuous
F-semi-martingale. Then
(i) X ◦ τ is a continuous G-semi-martingale with canonical decomposition
M ◦ τ + A ◦ τ , such that [X ◦ τ ] = [X] ◦ τ a.s.,
(ii) V ∈ L(X) implies V ◦ τ ∈ L̂(X ◦ τ ) and
(V ◦ τ ) · (X ◦ τ ) = (V · X) ◦ τ a.s. (15)

Proof: (i) It is easy to check that the time change X → X ◦ τ preserves


continuity, adaptedness, monotonicity, and the local martingale property. In
particular, X ◦ τ is then a continuous G -semi-martingale with canonical de-
composition M ◦ τ + A ◦ τ . Since M 2 − [M ] is a continuous local martingale,
so is the time-changed process M 2 ◦ τ − [M ] ◦ τ , and we get
[X ◦ τ ] = [M ◦ τ ]
= [M ] ◦ τ
= [X] ◦ τ a.s.
If V ∈ L(X), we also note that V ◦ τ is product-measurable, since this is true
for both V and τ .
(ii) Fixing any t ≥ 0 and using the τ -continuity of X, we get
(1[0,t] ◦ τ ) · (X ◦ τ ) = 1[0, τ_t^{-1}] · (X ◦ τ )
= (X ◦ τ )^{τ_t^{-1}}
= (1[0,t] · X) ◦ τ,

which proves (15) when V = 1[0,t] . If X has locally finite variation, the result
extends by a monotone-class argument and monotone convergence to arbitrary
V ∈ L(X). In general, Lemma 18.23 yields some continuous, adapted processes
V1 , V2 , . . . , such that a.s.
18. Itô Integration and Quadratic Variation 413

 
 
∫ (Vn − V )2 d[M ] + ∫ |Vn − V | dA → 0.

By (15) the corresponding properties hold for the time-changed processes, and
since the processes Vn ◦ τ are right-continuous and adapted, hence progressive,
we obtain V ◦ τ ∈ L̂(X ◦ τ ).
Now assume instead that the approximating processes V1 , V2 , . . . are pre-
dictable step processes. Then (15) holds as before for each Vn , and the relation
extends to V by Lemma 18.12. 2
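Part (i) can be checked numerically in the simplest setting of a deterministic time change, say τs = s², for which every continuous process is trivially τ-continuous: with X = B a Brownian motion, the discrete quadratic variation of B ∘ τ should track [B]_{τs} = τs = s². A sketch (the grid sizes and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400_000
dt = 1.0 / n
# A Brownian path B on [0, 1], sampled on a fine uniform grid.
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

# Deterministic time change tau_s = s^2, evaluated on a coarser s-grid.
s = np.linspace(0.0, 1.0, 20_001)
tau = s**2
Bt = B[np.round(tau / dt).astype(int)]   # (B o tau)_s, nearest fine-grid point

qv = np.cumsum(np.diff(Bt)**2)           # discrete quadratic variation of B o tau
print(qv[-1], qv[9_999])                 # compare with tau_1 = 1, tau_{1/2} = 1/4
assert abs(qv[-1] - 1.0) < 0.06          # [B o tau]_1 ~ tau_1 = 1
assert abs(qv[9_999] - 0.25) < 0.03      # [B o tau]_{1/2} ~ 1/4
```

The squared increments of B ∘ τ over a partition of [0, 1] in s are squared increments of B over the image partition of [0, 1] in time, which is why the two quadratic variations agree.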

Next we consider stochastic integrals of processes depending on a param-


eter. Given a measurable space (S, S), we say that a process V on S × R+
is progressive (short for progressively measurable), if its restriction to S × [0, t]
is S ⊗ Bt ⊗ Ft -measurable for every
t ≥ 0, where Bt = B[0,t] . A simple version of the following result will be useful
in Chapter 19.

Theorem 18.25 (dependence on parameter, Doléans, Stricker & Yor) For


measurable S, consider a continuous semi-martingale X and a progressive pro-
cess Vs (t) on S × R+ such that Vs ∈ L(X) for all s ∈ S. Then the process
Ys (t) = (Vs · X)t has a version that is progressive and a.s. continuous at each
s ∈ S.

Proof: Let X have canonical decomposition M + A, and let the processes


Vsn on S × R+ be progressive and such that for any t ≥ 0 and s ∈ S,
((Vsn − Vs )2 · [M ])t + (|Vsn − Vs | · A)t →P 0.

Then Lemma 18.12 yields (Vsn · X − Vs · X)∗t →P 0 for every s and t. Proceeding
as in the proof of Proposition 5.32, we may choose a sub-sequence {nk (s)} ⊂ N


depending measurably on s, such that the same convergence holds a.s. along
{nk (s)} for any s and t. Define Ys,t = lim supk (Vsnk · X)t whenever this is finite,
and put Ys,t = 0 otherwise. If the processes (Vsn · X)t have progressive versions
on S × R+ that are a.s. continuous for each s, then Ys,t is clearly a version
of the process (Vs · X)t with the same properties. We apply this argument in
three steps.
First we reduce to the case of bounded, progressive integrands by taking
V n = V 1{|V | ≤ n}. Next, we apply the transformation in the proof of Lemma
18.23, to reduce to the case of continuous and progressive integrands. Finally,
we approximate any continuous, progressive process V by the predictable step
processes Vsn (t) = Vs (2−n [2n t]). The integrals Vsn · X are then elementary, and
the desired continuity and measurability are obvious by inspection. 2

We turn to the related topic of functional representations. For motivation,


we note that the construction of a stochastic integral V · X depends in a subtle
way on the underlying probability measure P and filtration F. Thus, we cannot
expect a general representation F (V, X) of the integral process V · X. In view

of Proposition 5.32, we might still hope for a modified representation of the


form F (μ, V, X), where μ = L(V, X). Even this may be too optimistic, since
the canonical decomposition of X also depends on F.
Here we consider only a special case, sufficient for our needs in Chapter
32. Fixing any progressive functions σji and b i of suitable dimension, defined
on the path space CR+ ,Rd , we consider an adapted process X satisfying the
stochastic differential equation

dXti = σji (t, X) dBtj + bi (t, X) dt, (16)

where B is a Brownian motion in Rr . A detailed discussion of such equations


appears in Chapter 32. Here we need only the simple fact from Lemma 32.1
that the coefficients σji (t, X) and bi (t, X) are again progressive. Write aij =
σki σkj , with summation over k.
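For illustration, equation (16) can be simulated by the Euler (Euler–Maruyama) scheme, which freezes the coefficients on each grid interval; solutions are constructed rigorously in Chapter 32. The scalar coefficients σ(t, x) = 0.4x and b(t, x) = 0.1x below are hypothetical choices, giving geometric Brownian motion, picked only because the explicit solution Xt = X0 exp((b − σ²/2)t + σBt ) is available for comparison; the scheme itself is generic in σ and b:

```python
import numpy as np

def euler_maruyama(sigma, b, x0, t, n, rng):
    """Euler scheme for dX_t = sigma(t, X_t) dB_t + b(t, X_t) dt on [0, t]."""
    dt = t / n
    dB = rng.normal(0.0, np.sqrt(dt), n)    # Brownian increments
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        # Coefficients frozen at the left endpoint of each grid interval.
        x[k + 1] = x[k] + sigma(k * dt, x[k]) * dB[k] + b(k * dt, x[k]) * dt
    return x, np.concatenate([[0.0], np.cumsum(dB)])   # path and driving B

rng = np.random.default_rng(3)
sig, mu = 0.4, 0.1
x, B = euler_maruyama(lambda s, y: sig * y, lambda s, y: mu * y,
                      x0=1.0, t=1.0, n=100_000, rng=rng)
exact = np.exp((mu - sig**2 / 2) + sig * B[-1])   # explicit solution at t = 1
print(x[-1], exact)
assert abs(x[-1] - exact) < 0.01
```

Comparing the scheme with the explicit solution driven by the same Brownian increments shows the pathwise (strong) error shrinking with the step size.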

Proposition 18.26 (functional representation) For any progressive functions


σ, b, f of suitable dimension, there exists a measurable mapping
 
F : M̂(CR+ ,Rd ) × CR+ ,Rd → CR+ ,R ,          (17)
such that whenever X solves (16) with L(X) = μ and f i (X) ∈ L(X i ) for all i,
we have
f i (X) · X i = F (μ, X) a.s.

Proof: From (16) we note that X is a semi-martingale with covariation


processes [X i , X j ] = aij (X) · λ and drift components bi (X) · λ. Hence, f i (X) ∈
L(X i ) for all i iff the processes (f i )2 aii (X) and f i bi (X) are a.s. Lebesgue in-
tegrable, which holds in particular when f is bounded. Letting f1 , f2 , . . . be
progressive with
 
 
((fni − f i )2 aii (X)) · λ + (|fni − f i | |bi (X)|) · λ → 0,          (18)

we get by Lemma 18.12


(fni (X) · X i − f i (X) · X i )∗t →P 0,        t ≥ 0.

Thus, if fni (X) · X i = Fn (μ, X) a.s. for some measurable mappings Fn as in


(17), Proposition 5.32 yields a similar representation for the limit f i (X) · X i .
As in the previous proof, we may apply this argument in steps, reducing
first to the case where f is bounded, next to the case of continuous f , and
finally to the case where f is a predictable step function. Here the first and
last steps are again elementary. For the second step, we may now use the
simpler approximation
fn (t, x) = n ∫_{(t−n−1 )+}^{t} f (s, x) ds,        t ≥ 0, n ∈ N, x ∈ CR+ ,Rd .

By Theorem 2.15 we have fn (t, x) → f (t, x) a.e. in t for each x ∈ CR+ ,Rd , and
(18) follows by dominated convergence. 2

Exercises

1. Show that if M is a local martingale and ξ is an F0 -measurable random variable,


then the process Nt = ξMt is again a local martingale.
2. Show that every local martingale M ≥ 0 with EM0 < ∞ is a super-martingale.
Also show by an example that M may fail to be a martingale. (Hint: Use Fatou’s
lemma. Then take Mt = Xt/(1−t)+ , where X is a Brownian motion starting at 1,
stopped when it reaches 0.)
3. For a continuous local martingale M , show that M and [M ] have a.s. the same
intervals of constancy. (Hint: For any r ∈ Q+ , put τ = inf{t > r; [M ]t > [M ]r }.
Then M τ is a continuous local martingale on [r, ∞) with quadratic variation 0, and
so M τ is a.s. constant on [r, τ ]. Use a similar argument in the other direction.)
4. For any continuous local martingales Mn starting at 0 and associated optional
times τn , show that (Mn )∗τn →P 0 iff [Mn ]τn →P 0. State the corresponding result for
stochastic integrals.
5. Give examples of continuous semi-martingales X1 , X2 , . . . , such that Xn∗ →P 0,
and yet [Xn ]t does not tend to 0 in probability for any t > 0. (Hint: Let B be a
Brownian motion stopped at time 1, put An (k2−n ) = B((k−1)+ 2−n ), and interpolate
linearly. Define X n = B − An .)
6. For a Brownian motion B and an optional time τ , show that EBτ = 0 when
Eτ 1/2 < ∞ and EBτ2 = Eτ when Eτ < ∞. (Hint: Use optional sampling and
Theorem 18.7.)
7. Deduce the first inequality of Proposition 18.9 from Proposition 18.17 and the
classical Cauchy inequality.
8. For any continuous semi-martingales X, Y , show that [X + Y ]1/2 ≤ [X]1/2 +
[Y ]1/2 a.s.
9. (Kunita & Watanabe) Let M, N be continuous local martingales, and fix any
p, q, r > 0 with p−1 + q −1 = r−1 . Show that ‖[M, N ]t‖_{2r}^2 ≤ ‖[M ]t‖_p ‖[N ]t‖_q for all
t > 0.
10. Let M, N be continuous local martingales with M0 = N0 = 0. Show that
M ⊥⊥N implies [M, N ] ≡ 0 a.s. Also show by an example that the converse is false.
(Hint: Let M = U · B and N = V · B for a Brownian motion B and suitable
U, V ∈ L(B).)
11. Fix a continuous semi-martingale X, and let U, V ∈ L(X) with U = V a.s. on
a set A ∈ F0 . Show that U · X = V · X a.s. on A. (Hint: Use Proposition 18.15.)
12. For a continuous local martingale M , let U, U1 , U2 , . . . and V, V1 , V2 , . . . ∈
L(M ) with |Un | ≤ Vn , Un → U , Vn → V , and {(Vn − V ) · M }∗t →P 0 for all t > 0.
Show that (Un · M )t →P (U · M )t for all t. (Hint: Write (Un − U )2 ≤ 2(Vn − V )2 + 8V 2 ,
and use Theorem 1.23 and Lemmas 5.2 and 18.12.)
13. Let B be a Brownian bridge. Show that Xt = Bt∧1 is a semi-martingale on
R+ with the induced filtration. (Hint: Note that Mt = (1 − t)−1 Bt is a martingale
on [0, 1), integrate by parts, and check that the compensator has finite variation.)
14. Show by an example that the canonical decomposition of a continuous semi-
martingale may depend on the filtration. (Hint: Let B be a Brownian motion with
induced filtration F, put Gt = Ft ∨ σ(B1 ), and use the preceding result.)

15. Show by stochastic calculus that t−p Bt → 0 a.s. as t → ∞, where B is
a Brownian motion and p > ½. (Hint: Integrate by parts to find the canonical
decomposition. Compare with the L1 -limit.)
16. Extend Theorem 18.16 to a product of n semi-martingales.
17. Consider a Brownian bridge X and a bounded, progressive process V with
∫_0^1 V dt = 0 a.s. Show that E ∫_0^1 V dX = 0. (Hint: Integrate by parts to get
∫_0^1 V dX = ∫_0^1 (V − U ) dB, where B is a Brownian motion and Ut = (1 − t)−1 ∫_t^1 Vs ds.)
18. Show that Proposition 18.17 remains valid for any finite optional times t and
tnk satisfying maxk (tnk − tn,k−1 ) →P 0.
19. Let M be a continuous local martingale. Find the canonical decomposition of
|M |p when p ≥ 2. For such a p, deduce the second relation in Theorem 18.7. (Hint:
Use Theorem 18.18. For the last part, use Hölder’s inequality.)
20. Let M be a continuous local martingale with M0 = 0 and [M ]∞ ≤ 1. Show
that P {supt Mt ≥ r} ≤ e−r²/2 for all r ≥ 0. (Hint: Consider the super-martingale
Z = exp(cM − c²[M ]/2) for a suitable c > 0.)

21. Let X, Y be continuous semi-martingales. Fix a t > 0 and some partitions
(tnk ) of [0, t] with maxk (tnk − tn,k−1 ) → 0. Show that ½ Σk (Ytnk + Ytn,k−1 )(Xtnk −
Xtn,k−1 ) →P (Y ◦ X)t . (Hint: Use Corollary 18.13 and Proposition 18.17.)
23. Say that a process is predictable if it is measurable with respect to the σ-field
in R+ × Ω induced by all predictable step processes. Show that every predictable
process is progressive. Conversely, given any progressive process X and constant
h > 0, show that the process Yt = X(t−h)+ is predictable.
24. For a progressive process V and a non-decreasing, continuous, adapted process
A, prove the existence of a predictable process Ṽ with |V − Ṽ | · A = 0 a.s. (Hint:
Use Lemma 18.23.)
25. Use the preceding statement to deduce Lemma 18.23. (Hint: Begin with
predictable V , using a monotone-class argument.)
26. Construct the stochastic integral V · M by approximation from elementary
integrals, using Lemmas 18.10 and 18.23. Show that the resulting integral satisfies
the relation in Theorem 18.11. (Hint: First let M ∈ M2 and E(V 2 · [M ])∞ < ∞,
and extend by localization.)
27. Let (V, B) =d (Ṽ , B̃) for some Brownian motions B, B̃ on possibly different
filtered probability spaces and some V ∈ L(B), Ṽ ∈ L(B̃). Show that (V, B, V · B) =d
(Ṽ , B̃, Ṽ · B̃). (Hint: Argue as in the proof of Proposition 18.26.)
28. Let X be a continuous F-semi-martingale. Show that X remains conditionally
a semi-martingale given F0 , and that the conditional quadratic variation agrees
with [X]. Also show that if V ∈ L(X), where V = σ(Y ) for some continuous
process Y and measurable function σ, then V remains conditionally X-integrable,
and the conditional integral agrees with V · X. (Hint: Conditioning on F0 preserves
martingales.)
