978-3-030-61871-1_19
Itô Integration and Quadratic Variation
Local and finite-variation martingales, completeness, covariation, continuity,
norm comparison, uniform integrability, Cauchy-type inequalities, martingale
integral, semi-martingales, continuity, dominated convergence, chain rule,
optional stopping, integration by parts, approximation of covariation, Itô's
formula, local integral, conformal mapping, Fisk–Stratonovich integral,
continuity and approximation, random time change, dependence on parameter,
functional representation
Here we initiate the study of stochastic calculus, arguably the most powerful
tool of modern probability, with applications throughout the subject. For the
moment, we may only mention the time-change reduction and integral
representation of continuous local martingales
in Chapter 19, the Girsanov theory for removal of drift in the same chapter,
the predictable transformations in Chapter 27, the construction of local time
in Chapter 29, and the stochastic differential equations in Chapters 32–33.
In this chapter we consider only stochastic integration with respect to con-
tinuous semi-martingales, whereas the more subtle case of integrators with
possible jump discontinuities is postponed until Chapter 20, and an extension
to non-anticipating integrators appears in Chapter 21. The theory of stochas-
tic integration is inextricably linked to the notions of quadratic variation and
covariation, already encountered in Chapter 14 in the special case of Brownian
motion, and together the two notions are developed into a theory of amazing
power and beauty.
We begin with a construction of the covariation process [M, N ] of a pair of
continuous local martingales M and N , which requires an elementary approxi-
mation and completeness argument. The processes M ∗ and [M ] = [M, M ] will
be related by some useful continuity and norm relations, including the powerful
BDG inequalities.
Given the quadratic variation [M ], we may next construct the stochastic
integral ∫ V dM for suitable progressive processes V , using a simple Hilbert-
space argument. Combining with the ordinary Stieltjes integral ∫ V dA for
processes A of locally finite variation, we may finally extend the integral to
arbitrary continuous semi-martingales X = M + A. The continuity properties
of quadratic variation carry over to the stochastic integral, and in conjunction
with the obvious linearity they characterize the integration.
A key result for applications is Itô's formula, which shows how semi-martingales
are transformed under smooth mappings. Though the present substitution
rule differs from the elementary result for Stieltjes integrals, the two
formulas can be brought into agreement by a simple modification of the in-
tegral. We conclude the chapter with some special topics of importance for
applications, such as the transformation of stochastic integrals under a ran-
dom time-change, and the integration of processes depending on a parameter.
The present material may be thought of as a continuation of the martingale
theory of Chapters 9–10. Though no results for Brownian motion are used
explicitly in this chapter, the existence of the Brownian quadratic variation in
Chapter 14 may serve as a motivation. We also need the representation and
measurability of limits obtained in Chapter 5.
Throughout the chapter we take F = (Ft ) to be a right-continuous and
complete filtration on R+ . A process M is said to be a local martingale, if it
is adapted to F and such that the stopped and centered processes M τn − M0
are true martingales for some optional times τn ↑ ∞. By a similar localiza-
tion we may define local L2 -martingales, locally bounded martingales, locally
integrable processes, etc. The required optional times τn are said to form a
localizing sequence.
Any continuous local martingale may clearly be reduced by localization to a
sequence of bounded, continuous martingales. Conversely, we see by dominated
convergence that every bounded local martingale is a true martingale. The
following useful result may be less obvious.
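As a concrete illustration of the reduction by localization mentioned above (our addition, not part of the text): for a continuous local martingale M, a standard localizing sequence is given by the hitting times

```latex
\tau_n = \inf\{\, t \ge 0 : |M_t - M_0| \ge n \,\}, \qquad n \in \mathbb{N},
```

so that τn ↑ ∞ a.s. by continuity, and each stopped, centered process M^{τn} − M0 is bounded, hence a true martingale.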
Lemma 18.1 (local martingales) For any process M and optional times τn ↑
∞, these conditions are equivalent:
(i) M is a local martingale,
(ii) M τn is a local martingale for every n.
Proof: First suppose that the sum in (1) has only finitely many non-zero
terms. Then V · M is a martingale by Corollary 9.15, and the L2 -bound follows
by the computation
E(V · M )_t² = E Σ_k η_k² (M^t_{τ_{k+1}} − M^t_{τ_k})²
≤ E Σ_k (M^t_{τ_{k+1}} − M^t_{τ_k})²
= E M_t².
The estimate extends to the general case by Fatou’s lemma, and the martingale
property then extends by uniform integrability. □
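The orthogonality behind the computation above can be checked exactly in a toy discrete setting. The sketch below (our hypothetical example, not from the text) takes M to be a ±1 random walk and V a predictable process with |V| ≤ 1, enumerates all paths, and confirms that the cross terms vanish, so that E(V · M)² ≤ E M².

```python
from itertools import product

# Toy check of the L2 bound for a martingale transform: M is a +-1 random
# walk over n steps, V is predictable (depends only on past increments) with
# |V| <= 1, and all 2^n paths are equally likely.

def transform_bound(n=3):
    paths = list(product([-1, 1], repeat=n))

    def V(k, incs):
        # hypothetical predictable integrand, clipped to [-1, 1]
        return max(-1.0, min(1.0, 0.5 * sum(incs[:k])))

    E_VM2 = E_M2 = 0.0
    for incs in paths:
        M = sum(incs)                                     # M_n
        VM = sum(V(k, incs) * incs[k] for k in range(n))  # (V . M)_n
        E_VM2 += VM ** 2 / len(paths)
        E_M2 += M ** 2 / len(paths)
    return E_VM2, E_M2

vm2, m2 = transform_bound()
assert vm2 <= m2          # the L2 bound of the lemma, here 0.75 <= 3
```

Since the cross terms cancel exactly, E(V · M)² equals Σ_k E V_k², which is at most E M² = n.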
We may now establish the existence and basic properties of the quadratic
variation and covariation processes [M ] and [M, N ]. Extensions to possibly
discontinuous processes are considered in Chapter 20.
Proof: The a.s. uniqueness of [M, N ] follows from Proposition 18.2, and
(ii)−(iii) are immediate consequences. If [M, N ] exists with the stated proper-
ties and τ is an optional time, then Lemma 18.1 shows that M τ N τ − [M, N ]τ
is a local martingale, as is the process M τ (N − N τ ) by Corollary 9.15. Hence,
even M τ N − [M, N ]τ is a local martingale, and (vi) follows. Furthermore,
M N − (M − M0 )N = M0 N is a local martingale, which yields (iv) whenever
either side exists. If both [M + N ] and [M − N ] exist, then
4M N − [M + N ] + [M − N ]
= {(M + N )² − [M + N ]} − {(M − N )² − [M − N ]}
is a local martingale.
Hence, Lemma 18.4 yields a continuous martingale N with (V n · M − N )∗ → 0
in probability.
The processes [M τn ] exist as before, and clearly [M τm ]τm = [M τn ]τm a.s. for all
m < n. Hence, [M τm ] = [M τn ] a.s. on [0, τm ], and since τn → ∞, there exists a
non-decreasing, continuous, and adapted process [M ], such that [M ] = [M τn ]
400 Foundations of Modern Probability
a.s. on [0, τn ] for every n. Here (M τn )2 − [M ]τn is a local martingale for every
n, and so M ² − [M ] is a local martingale by Lemma 18.1. □
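For Brownian motion, the quadratic variation just constructed can be illustrated numerically. The sketch below (our hypothetical simulation, with an arbitrary seed and partition size) sums squared increments of a simulated path on [0, 1] and finds the result close to [B]_1 = 1.

```python
import math
import random

# Discrete quadratic variation of a simulated Brownian path on [0, 1]:
# the sums of squared increments over a fine partition approach t = 1.

random.seed(0)
n = 2 ** 17                                   # fine partition of [0, 1]
dB = [random.gauss(0.0, math.sqrt(1.0 / n)) for _ in range(n)]

qv = sum(x * x for x in dB)                   # discrete [B]_1
print(qv)                                     # close to 1, within a few 0.01
```

The fluctuation of the sum around t has standard deviation of order (2/n)^{1/2}, which explains the fast concentration.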
Proof: First let Mn∗ → 0 in probability. Fix any ε > 0, and define
τn = inf{t ≥ 0; |Mn (t)| > ε}, n ∈ N. Write Nn = Mn² − [Mn ], and note that
Nn^{τn} is a true martingale on R̄+ . In particular, E[Mn ]τn ≤ ε², and so by
Chebyshev's inequality
Here the right-hand side tends to zero as n → ∞ and then ε → 0, which shows
that [Mn ]∞ → 0 in probability.
Conversely, let [Mn ]∞ → 0 in probability. By a localization argument and
Fatou's lemma, we see that Mn is L²-bounded. Now proceed as before to get
Mn∗ → 0 in probability. □
P{M∗² ≥ 4r} − P{[M ]∞ ≥ c r} ≤ P{M∗² ≥ 4r, [M ]∞ < c r}
≤ P{N > −c r, sup_t N_t > r − c r}
≤ c P{N∗ > 0}
≤ c P{M∗² ≥ r},
which gives the bound ≲ with domination constant c_p = c^{−p/2}/(2^{−p} − c).
¹ Recall that f ≍ g means f ≤ c g and g ≤ c f for some constant c > 0.
² The domination constants are understood to depend only on p.
Defining N as before with τ = inf{t; [M ]_t = r}, we get for any r > 0 and
c ∈ (0, 2^{−p/2−2})
P{[M ]∞ ≥ 2r} − P{M∗² ≥ c r} ≤ P{[M ]∞ ≥ 2r, M∗² < c r}
≤ P{N < 4c r, inf_t N_t < 4c r − r}
≤ 4c P{[M ]∞ ≥ r}.
Integrating as before yields
and the bound ≳ follows with domination constant c_p = c^{−p/2}/(2^{−p/2} − 4c). □
Proof: (i) By Theorem 18.5 (iii) and (v), we have a.s. for any a, b ∈ R and
t>0
0 ≤ [aM + bN ]t
= a2 [M ]t + 2ab [M, N ]t + b2 [N ]t .
By continuity we may choose a common exceptional null set for all a, b, and
so [M, N ]2t ≤ [M ]t [N ]t a.s. Applying this inequality to the processes M − M s
and N − N s for any s < t, we obtain a.s.
|[M, N ]_t − [M, N ]_s| ≤ {([M ]_t − [M ]_s)([N ]_t − [N ]_s)}^{1/2},    (4)
and by continuity we may again choose a common null set. Now let 0 = t0 <
t1 < · · · < tn = t be arbitrary, and conclude from (4) and the classical Cauchy
inequality that
|[M, N ]_t| ≤ Σ_k |[M, N ]_{t_k} − [M, N ]_{t_{k−1}}| ≤ ([M ]_t [N ]_t)^{1/2}.
It remains to take the supremum over all partitions of [0, t].
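The inequality |[M, N]_t| ≤ ([M]_t [N]_t)^{1/2} has a discrete counterpart that holds path by path, by the same classical Cauchy inequality used in the proof. The sketch below (our hypothetical simulation; the correlation ρ, seed, and step count are arbitrary choices) checks it on partition sums for two correlated simulated Brownian paths.

```python
import math
import random

# Discrete Kunita-Watanabe check: for partition increments,
# |sum dM dN| <= (sum dM^2)^(1/2) (sum dN^2)^(1/2) by Cauchy's inequality.

random.seed(1)
n = 10_000
dW1 = [random.gauss(0.0, math.sqrt(1.0 / n)) for _ in range(n)]
dW2 = [random.gauss(0.0, math.sqrt(1.0 / n)) for _ in range(n)]
rho = 0.6
dM = dW1
dN = [rho * a + math.sqrt(1.0 - rho * rho) * b for a, b in zip(dW1, dW2)]

cov = sum(a * b for a, b in zip(dM, dN))   # discrete [M, N]_1, near rho
qM = sum(a * a for a in dM)                # discrete [M]_1, near 1
qN = sum(b * b for b in dN)                # discrete [N]_1, near 1
assert abs(cov) <= math.sqrt(qM * qN)      # holds for every path, not just a.s.
```

Note that the discrete bound is purely algebraic; the probabilistic content of the theorem is that the partition sums converge to the covariation processes.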
(ii) Writing dμ = d[M ], dν = d[N ], and dρ = |d[M, N ]|, we conclude from
(i) that (ρI)2 ≤ (μI)(νI) a.s. for every interval I. By continuity, we may
choose the exceptional null set A to be independent of I. Letting G ⊂ R+ be
open with connected components Ik and using Cauchy's inequality, we get on Ac
ρG = Σ_k ρI_k
≤ Σ_k (μI_k · νI_k)^{1/2}
≤ (Σ_j μI_j)^{1/2} (Σ_k νI_k)^{1/2}
= (μG · νG)^{1/2}.
By Lemma 1.36, the last relation extends to any B ∈ BR+ .
Now fix any simple, measurable functions f = Σk ak 1Bk and g = Σk bk 1Bk .
Using Cauchy’s inequality again, we obtain on Ac
ρ|f g| ≤ Σ_k |a_k b_k| ρB_k
≤ Σ_k |a_k b_k| (μB_k · νB_k)^{1/2}
≤ (Σ_j a_j² μB_j)^{1/2} (Σ_k b_k² νB_k)^{1/2}
≤ (μf ² · νg ²)^{1/2},
which extends by monotone convergence to any measurable functions f, g on
R+ . By Lemma 1.35, we may choose f (t) = Ut (ω) and g(t) = Vt (ω) for any
fixed ω ∈ Ac . □
E(U · M )_τ N_τ = E(U · M^τ )_∞ N^τ_∞
= E(U · [M^τ , N^τ ])_∞
= E(U · [M, N ])_τ .
Theorem 18.11 (martingale integral, Itô, Kunita & Watanabe) For any con-
tinuous local martingale M and process V ∈ L(M ), there exists an a.s. unique
continuous local martingale V · M with (V · M )0 = 0, such that for any con-
tinuous local martingale N ,
[V · M, N ] = V · [M, N ] a.s.
For m < n it follows that (V · M τn )τm satisfies the corresponding relation with
[M τm , N ], and so (V · M τn )τm = V · M τm a.s. Hence, there exists a continuous
process V · M with (V · M )τn = V · M τn a.s. for all n, and Lemma 18.1 shows
that V ·M is again a local martingale. Finally, (6) yields [V ·M, N ] = V ·[M, N ]
a.s. on [0, τn ] for each n, and so the same relation holds on R+ . □
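The defining relation [V · M, N] = V · [M, N] implies in particular [V · M] = V² · [M]. At the discrete level this is visible directly: the squared increments of the Itô sums accumulate to the time integral of V². The sketch below is our hypothetical illustration, with the adapted integrand V_t = sin(B_t) chosen arbitrarily.

```python
import math
import random

# Discrete check of [V . B] = V^2 . [B] for a simulated Brownian path B on
# [0, 1]: the squared increments of the Ito sums V_j dB_j approximate the
# time integral of V^2, since [B]_t = t.

random.seed(2)
n = 20_000
dB = [random.gauss(0.0, math.sqrt(1.0 / n)) for _ in range(n)]
B = [0.0]
for x in dB:
    B.append(B[-1] + x)
V = [math.sin(B[j]) for j in range(n)]       # adapted: uses left endpoints

qv_VB = sum((V[j] * dB[j]) ** 2 for j in range(n))   # discrete [V . B]_1
v2_dt = sum(V[j] ** 2 / n for j in range(n))         # discrete (V^2 . t)_1
assert abs(qv_VB - v2_dt) < 0.1                      # both approximate [V.B]_1
```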
Proof: Note that [Vn · Mn ] = Vn² · [Mn ], and use Proposition 18.6. □
Lemma 18.12 yields the following stochastic version of the dominated con-
vergence theorem.
Lemma 18.14 (chain rule) For any continuous semi-martingale X and pro-
gressive processes U, V with V ∈ L(X), we have
(i) U ∈ L(V · X) ⇔ U V ∈ L(X), and then
(ii) U · (V · X) = ( U V ) · X a.s.
(V · X)τ = V · X τ = (V 1[0,τ ] ) · X.
Proof: The relations being obvious for ordinary Stieltjes integrals, we may
take X = M to be a continuous local martingale. Then (V ·M )τ is a continuous
local martingale starting at 0, and we have
[(V · M )^τ , N ] = [V · M, N^τ ]
= V · [M, N^τ ]
= V · [M, N ]^τ
= (V 1[0,τ ] ) · [M, N ].
Thus, (V · M )τ satisfies the conditions characterizing the integrals V · M τ and
(V 1[0,τ ] ) · M . □
Since X n → X and Y n → Y , and also (X n )∗t ≤ Xt∗ < ∞ and (Y n )∗t ≤ Yt∗ < ∞,
we get by Corollary 18.13 and Theorem 18.16
ζn → Xt Yt − (X · Y )t − (Y · X)t = [X, Y ]t in probability. □
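Integration by parts has an exact discrete counterpart: over any partition, X_tY_t − X_0Y_0 equals the left-point sums for ∫X dY and ∫Y dX plus the sum of increment products, which converges to [X, Y]_t. The sketch below (our hypothetical example on simulated paths) verifies the telescoping identity.

```python
import math
import random

# Discrete integration by parts: for ANY two paths over a partition,
# X_n Y_n - X_0 Y_0 = sum X dY + sum Y dX + sum dX dY holds exactly,
# and the last sum is the discrete covariation.

random.seed(3)
n = 1_000
X = [0.0]
Y = [0.0]
for _ in range(n):
    X.append(X[-1] + random.gauss(0.0, math.sqrt(1.0 / n)))
    Y.append(Y[-1] + random.gauss(0.0, math.sqrt(1.0 / n)))

sum_XdY = sum(X[k] * (Y[k + 1] - Y[k]) for k in range(n))
sum_YdX = sum(Y[k] * (X[k + 1] - X[k]) for k in range(n))
sum_dXdY = sum((X[k + 1] - X[k]) * (Y[k + 1] - Y[k]) for k in range(n))

lhs = X[-1] * Y[-1] - X[0] * Y[0]
assert abs(lhs - (sum_XdY + sum_YdX + sum_dXdY)) < 1e-9   # exact telescoping
```

For paths of finite variation the last sum vanishes in the limit, recovering the classical formula; for martingale paths it survives as [X, Y].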
correction term shows that the rules of ordinary calculus fail for the Itô integral.
Similarly, ({fn (X) − f (X)}² · [M ])t → 0 for all t, and so by Lemma 18.12
Thus, formula (11) for the polynomials fn extends in the limit to the same
relation for f . □
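The quadratic correction term in Itô's formula can be seen by hand in the simplest case f(x) = x². The discrete identity b² − a² = 2a(b − a) + (b − a)² telescopes over any partition, and the squared-increment sum survives in the limit as [B]_t. The sketch below is our hypothetical check on a simulated path.

```python
import math
import random

# Ito's formula for f(x) = x^2 at the discrete level: over any partition,
# B_n^2 = sum 2 B dB + sum (dB)^2 exactly, and the second sum converges
# to the correction term [B]_t rather than to 0.

random.seed(4)
n = 1_000
B = [0.0]
for _ in range(n):
    B.append(B[-1] + random.gauss(0.0, math.sqrt(1.0 / n)))

ito_sum = sum(2.0 * B[k] * (B[k + 1] - B[k]) for k in range(n))
qv = sum((B[k + 1] - B[k]) ** 2 for k in range(n))
assert abs(B[-1] ** 2 - (ito_sum + qv)) < 1e-9    # exact at the discrete level
```

For a C¹ path the squared increments would vanish in the limit; for Brownian motion they contribute t, which is precisely the ½f″ term of the formula.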
We also need a local version of the last theorem, involving stochastic in-
tegrals up to the first time ζD when X exits a given domain D ⊂ Rd . For
⁵ Any continuous function on [0, 1] admits a uniform approximation by polynomials.
Proof: Choose some fn ∈ C 2 (Rd ) with fn (x) = f (x) when ρ(x, Dc ) ≥ n−1 .
Applying Theorem 18.18 to fn (X τn ) with τn as in (12), we get (9) on [0, τn ].
Since n was arbitrary, the result extends to [0, ζD ). □
W · Z = (U + iV ) · (X + iY )
= U · X − V · Y + i (U · Y + V · X).
Using Itô’s formula again together with (5) and (13), we get
X ◦ (Y ◦ Z) = X · (Y ◦ Z) + ½ [X, Y ◦ Z]
= X · (Y · Z) + ½ X · [Y, Z] + ½ [X, Y · Z] + ¼ [X, [Y, Z]]
= XY · Z + ½ [XY, Z]
= XY ◦ Z. □
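The difference between the Itô and Fisk–Stratonovich integrals is already visible in Riemann sums: left-point sums give the Itô integral, midpoint sums the Stratonovich one, and the two differ by half the quadratic variation. The sketch below (our hypothetical check, for the integrand V = B) verifies both facts exactly at the discrete level.

```python
import math
import random

# Midpoint (Fisk-Stratonovich) vs left-point (Ito) sums for int B dB:
# the midpoint sums telescope exactly to B_t^2 / 2, while the left-point
# sums fall short of it by half the discrete quadratic variation.

random.seed(5)
n = 1_000
B = [0.0]
for _ in range(n):
    B.append(B[-1] + random.gauss(0.0, math.sqrt(1.0 / n)))

strat = sum(0.5 * (B[k] + B[k + 1]) * (B[k + 1] - B[k]) for k in range(n))
ito = sum(B[k] * (B[k + 1] - B[k]) for k in range(n))
qv = sum((B[k + 1] - B[k]) ** 2 for k in range(n))

assert abs(strat - B[-1] ** 2 / 2) < 1e-9     # midpoint rule telescopes
assert abs(strat - ito - qv / 2) < 1e-9       # correction term [B]_t / 2
```

This is the discrete shadow of the modification X ◦ Y = X · Y + ½[X, Y] that restores the rules of ordinary calculus.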
This follows immediately from Lemmas 18.10 and 18.12, together with the
following approximation of progressive processes by predictable step processes.
Though the class L(X) of stochastic integrands is sufficient for most pur-
poses, it is sometimes useful to allow integration of slightly more general pro-
cesses. Given any continuous semi-martingale X = M + A, let L̂(X) denote
the class of product-measurable processes V , such that (V − Ṽ )² · [M ] = 0 and
(V − Ṽ ) · A = 0 a.s. for some process Ṽ ∈ L(X). For V ∈ L̂(X) we define
V · X = Ṽ · X a.s. The extension clearly enjoys all the previously established
properties of stochastic integration.
We often need to see how semi-martingales, covariation processes, and
stochastic integrals are transformed by a random time change. Then consider a
non-decreasing, right-continuous family of finite optional times τs , s ≥ 0, here
referred to as a finite random time change τ . If even F is right-continuous, so
is the induced filtration Gs = Fτs , s ≥ 0, by Lemma 9.3. A process X is said to
be τ -continuous, if it is a.s. continuous on R+ and constant on every interval
[τs− , τs ], s ≥ 0, where τ0− = X0− = 0 by convention.
which proves (15) when V = 1[0,t] . If X has locally finite variation, the result
extends by a monotone-class argument and monotone convergence to arbitrary
V ∈ L(X). In general, Lemma 18.23 yields some continuous, adapted processes
V1 , V2 , . . . , such that a.s.
∫ (Vn − V )² d[M ] + ∫ |Vn − V | dA → 0.
By (15) the corresponding properties hold for the time-changed processes, and
since the processes Vn ◦ τ are right-continuous and adapted, hence progressive,
we obtain V ◦ τ ∈ L̂(X ◦ τ ).
Now assume instead that the approximating processes V1 , V2 , . . . are pre-
dictable step processes. Then (15) holds as before for each Vn , and the relation
extends to V by Lemma 18.12. 2
Then Lemma 18.12 yields (V^n_s · X − Vs · X)∗t → 0 in probability for every s
and t. Proceeding
By Theorem 2.15 we have fn (t, x) → f (t, x) a.e. in t for each x ∈ CR+ ,Rd , and
(18) follows by dominated convergence. □
Exercises
stochastic integrals.
5. Give examples of continuous semi-martingales X1 , X2 , . . . , such that Xn∗ → 0
in probability, and yet [Xn ]t fails to tend to 0 in probability for any t > 0. (Hint:
Let B be a Brownian motion stopped at time 1, put A^n_{k2^{−n}} = B_{(k−1)^+ 2^{−n}} ,
and interpolate linearly. Define X n = B − An .)
6. For a Brownian motion B and an optional time τ , show that EBτ = 0 when
Eτ 1/2 < ∞ and EBτ2 = Eτ when Eτ < ∞. (Hint: Use optional sampling and
Theorem 18.7.)
7. Deduce the first inequality of Proposition 18.9 from Proposition 18.17 and the
classical Cauchy inequality.
8. For any continuous semi-martingales X, Y , show that [X + Y ]^{1/2} ≤ [X]^{1/2} +
[Y ]^{1/2} a.s.
9. (Kunita & Watanabe) Let M, N be continuous local martingales, and fix any
p, q, r > 0 with p^{−1} + q^{−1} = r^{−1} . Show that ‖[M, N ]t ‖²_{2r} ≤ ‖[M ]t ‖p ‖[N ]t ‖q
for all t > 0.
10. Let M, N be continuous local martingales with M0 = N0 = 0. Show that
M ⊥⊥N implies [M, N ] ≡ 0 a.s. Also show by an example that the converse is false.
(Hint: Let M = U · B and N = V · B for a Brownian motion B and suitable
U, V ∈ L(B).)
11. Fix a continuous semi-martingale X, and let U, V ∈ L(X) with U = V a.s. on
a set A ∈ F0 . Show that U · X = V · X a.s. on A. (Hint: Use Proposition 18.15.)
12. For a continuous local martingale M , let U, U1 , U2 , . . . and V, V1 , V2 , . . . ∈
L(M ) with |Un | ≤ Vn , Un → U , Vn → V , and {(Vn − V ) · M }∗t → 0 in probability
for all t > 0. Show that (Un · M )t → (U · M )t in probability for all t. (Hint: Write
(Un − U )² ≤ 2 (Vn − V )² + 8 V ², and use Theorem 1.23 and Lemmas 5.2 and 18.12.)
13. Let B be a Brownian bridge. Show that Xt = Bt∧1 is a semi-martingale on
R+ with the induced filtration. (Hint: Note that Mt = (1 − t)−1 Bt is a martingale
on [0, 1), integrate by parts, and check that the compensator has finite variation.)
14. Show by an example that the canonical decomposition of a continuous semi-
martingale may depend on the filtration. (Hint: Let B be a Brownian motion with
induced filtration F, put Gt = Ft ∨ σ(B1 ), and use the preceding result.)