Calcul Stochastique — ENSTA 2020-21, IPP Master SFA
Francesco RUSSO
November 2020
(iv) Itô and chain rule formulae, a first approach to stochastic differential
equations.
1 Introduction
PROBLEM 1
Consider a simple model of population growth
$$\frac{dN}{dt}(t) = a(t)\,N(t), \qquad N(0) = A, \qquad (1.1)$$
where $N(t)$ is the size of the population at time $t$ and $a(t)$ is the growth rate at time $t$. It might happen that $a(t)$ is not completely known, but subject to some random environmental effects, so that we have
$$a(t) = r(t) + \text{“noise”}, \qquad (1.2)$$
where we do not know the exact behaviour of the noise term, only its probability distribution. The function $r(t)$ is assumed to be nonrandom. How do we solve (1.1) in this case?
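The following minimal Python sketch (not part of the original notes) illustrates the situation numerically: modelling the noise term as white noise, (1.1)-(1.2) becomes $dN = r(t)N\,dt + \alpha N\,dW$, which we simulate with an Euler scheme; the rate $r$ and the intensity $\alpha$ are illustrative assumptions.

```python
import numpy as np

# A minimal sketch, assuming the noise is white noise:
# dN = r(t) N dt + alpha N dW, simulated by Euler-Maruyama steps.
rng = np.random.default_rng(0)
T, n = 1.0, 1000
dt = T / n
A, alpha = 100.0, 0.3          # initial size and assumed noise intensity
r = lambda t: 0.05             # assumed deterministic mean growth rate

N = np.empty(n + 1)
N[0] = A
t = 0.0
for k in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))                        # Brownian increment
    N[k + 1] = N[k] + r(t) * N[k] * dt + alpha * N[k] * dW   # Euler step
    t += dt
print(N[-1])
```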
(B) Stochastic approach to a deterministic problem with boundary or initial condition
i) f˜ = f on ∂U ,
PROBLEM 3
We consider the problem
$$\begin{cases} \dfrac{\partial u}{\partial t}(t, x) - \dfrac{1}{2}\,\Delta u(t, x) = 0, \\ u(0, x) = f(x). \end{cases}$$
(C) Optimal stopping
PROBLEM 4
Suppose that a person has an asset or resource (e.g. a house, stocks, oil...)
that she is planning to sell. The price Xt at time t of her asset on the open
market varies according to a stochastic differential equation of the same type
as in PROBLEM 1, i.e.
$$\frac{d}{dt} X_t = X_t\,(r + \alpha\,\text{“noise”}),$$
where r, α are known constants. The discount rate is a known constant ρ.
At what time should she decide to sell?
We assume that she knows the behaviour of (Xs ) up to the present time t,
but because of the noise in the system, she can of course never be sure at
the time of the sale if her choice of time will turn out to be the best. So
what we are searching for is a stopping strategy that gives the best result in the
long run, i.e. maximizes the expected profit when the inflation is taken into
account.
This is an optimal stopping problem. It turns out that the solution can be
expressed in terms of the solution of a corresponding boundary value problem
(PROBLEM 2), except that the boundary is unknown (free) as well.
PROBLEM 5
Suppose that, in order to improve our knowledge about the solution of a
random differential equation, say of (1.1) in PROBLEM 1, we perform ob-
servations Z(s) of N (s) at times s ≤ t. However, due to inaccuracies in our
measurements we do not really measure N (s) but a disturbed version of it,
i.e.
Z(s) = N (s) + “noise”, (1.3)
So in this case, there are two sources of noise, the second coming from the
error of measurement.
The filtering problem is the following. What is the best estimate of N (s)
satisfying (1.1), based on the observations (1.3) when s ≤ t ?
Intuitively, the problem is to filter the noise away from the observations in an
optimal way. In 1960 Kalman and in 1961 Kalman and Bucy proved what is
now known as the Kalman-Bucy filter. Basically the filter gives a procedure
for estimating the state of a system which satisfies a "noisy" linear differential
equation, based on a series of "noisy" observations. Almost immediately
the discovery found applications in aerospace engineering (Ranger, Mariner,
Apollo etc.), and it has since found a wide range of other applications.
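As an illustration of the filtering idea, here is a minimal discrete-time scalar Kalman filter sketch (the Kalman-Bucy filter is its continuous-time counterpart); the linear state model and all noise variances below are assumptions chosen for illustration.

```python
import numpy as np

# Assumed model: x_{k+1} = a x_k + process noise, z_k = x_k + measurement noise.
rng = np.random.default_rng(1)
a, q, r_meas = 1.01, 0.01, 0.25       # assumed dynamics and noise variances
n = 50
x = np.empty(n); z = np.empty(n)
x[0] = 1.0
z[0] = x[0] + rng.normal(0, np.sqrt(r_meas))
for k in range(1, n):                  # simulate "noisy" state and observations
    x[k] = a * x[k - 1] + rng.normal(0, np.sqrt(q))
    z[k] = x[k] + rng.normal(0, np.sqrt(r_meas))

x_hat, p = 0.0, 1.0                    # initial estimate and its variance
for k in range(n):
    x_pred, p_pred = a * x_hat, a * a * p + q        # predict
    gain = p_pred / (p_pred + r_meas)                # Kalman gain
    x_hat = x_pred + gain * (z[k] - x_pred)          # update with observation
    p = (1 - gain) * p_pred
print(f"true x_T = {x[-1]:.3f}, filtered estimate = {x_hat:.3f}")
```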
Remark 1.1 This equation is the continuous time version of the difference
equations
$$S^0_{n+1} = S^0_n (1 + r), \qquad (1.5)$$
where $\Delta t = 1$ and $\Delta S^0_n = S^0_{n+1} - S^0_n$.
In (1.4), the interest rate related to the non-risky asset is constant and equal
to r.
We set $S^0_0 = 1$, such that $S^0_t = e^{rt}$, $\forall t \ge 0$.
We suppose that the evolution of the risky asset price (stock) is described by
an SDE of the type (1.1) and (1.2):
$$dS_t = S_t\,(\mu + \text{“noise”})\,dt.$$
At time $t$, we denote by $\pi^0_t$ (resp. $\pi_t$) the quantity of the non-risky (resp. risky) asset held in the portfolio.
Suppose that at time $t = 0$ the investor is offered the right (but not the obligation) to buy one unit of the risky asset at a specified price $K$ at a specified future time $t = T$. Such a right is called a European call option. How much should the person be willing to pay for such an option? This problem was solved when Fischer Black and Myron Scholes (1973) used stochastic analysis to compute a theoretical value for the price, the now famous Black-Scholes option price formula. They also solved the so-called hedging problem, i.e. determining the investment strategy that the seller of the option has to follow in order to replicate the sum owed to the owner of the option in case he exercises it. In particular he has to evaluate the quantities $\pi^0_t$ and $\pi_t$ to invest at each time $t$ in order to be able to reimburse the owner of the option at the maturity $T$.
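For reference, a sketch of the resulting closed formula, the well-known Black-Scholes call price, stated here only as an illustration (the parameter names are the usual ones, not notation from these notes):

```python
from math import log, sqrt, exp
from statistics import NormalDist

# Standard Black-Scholes call price: s0 spot, K strike, T maturity,
# r interest rate, sigma volatility.
def bs_call(s0: float, K: float, T: float, r: float, sigma: float) -> float:
    d1 = (log(s0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    Phi = NormalDist().cdf
    return s0 * Phi(d1) - K * exp(-r * T) * Phi(d2)

print(bs_call(s0=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2))  # about 10.45
```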
PROBLEM 6 Stochastic control
The wealth at time $t$ is
$$X_t = S^0_t \pi^0_t + S_t \pi_t.$$
At each time $t$, the investor can decide the portion $u_t$ of his wealth to invest in the risky asset; he will put $(1 - u_t)X_t$ in the non-risky asset. Given a utility function $U$ and a maturity $T$, the problem consists in determining the optimal portfolio, i.e. finding a good allocation $u_t$, $0 \le t \le T$, which maximizes the expected utility of the wealth $X^{(u)}_T$ at maturity; one tries to maximize over $0 \le u_t \le 1$ the expected utility
$$E\left\{ U\big(X^{(u)}_T\big) \right\}.$$
2.1 Preliminaries
$$E = \mathbb{R}^d,\ \mathbb{C},\ \mathbb{R}, \qquad (2.1)$$
$$\nu(B) = P\{X \in B\} = P(X^{-1}(B)).$$
If X = (X1 , . . . , Xd ) is a r.v. taking values in Rd (or random vector), we call
characteristic function of X the function ϕX : Rd → C defined by
Let X be a square integrable random vector, i.e. such that E(|X|2 ) < ∞.
The covariance matrix of X is the matrix Γ(X) = (σij )1≤i,j≤d whose
coefficients are given by
E(X) = µ, V ar(X) = σ 2 .
If $\sigma = 0$, we say that $X \sim N(\mu, \sigma^2)$ if the law of $X$ is the (discrete) Dirac law $\delta_\mu$, i.e.
$$\delta_\mu(A) = \begin{cases} 1 & \text{if } \mu \in A \\ 0 & \text{otherwise.} \end{cases}$$
In that case $X \equiv \mu$ a.s.
Proof. Exercise. Hint: use Remark 2.2.
Exercise 2.9 Let $X$ be a square integrable random vector. Prove that the covariance matrix $\Gamma(X)$ is a symmetric positive semidefinite matrix.
Here we only recall some well-known results that one does not necessarily have in mind.
Let (Ω, F, P ) be a probability space and G a sub σ-field of F.
This property admits no converse in general. However, the interesting result below holds.
Theorem 2.13 Let $(E_1, \mathcal{E}_1)$, $(E_2, \mathcal{E}_2)$ be two measurable spaces. Let $X$ be a $\mathcal{G}$-measurable r.v. taking values in $(E_1, \mathcal{E}_1)$ and $Y$ be a r.v. with values in $(E_2, \mathcal{E}_2)$. We suppose that $Y$ is independent of $\mathcal{G}$.
Let $\Phi : (E_1 \times E_2, \mathcal{E}_1 \otimes \mathcal{E}_2) \to \mathbb{C}$ be measurable.
We define $\varphi : E_1 \to \mathbb{C}$ by $\varphi(x) = E\big(\Phi(x, Y)\big)$. Then
$$E\big(\Phi(X, Y)\,\big|\,\mathcal{G}\big) = \varphi(X) \quad \text{a.s.}$$
Remark 2.14 i) Under the stated hypotheses, we can calculate E(Φ(X, Y )|G)
as if X were constant.
Proposition 2.15 Suppose that the covariance matrix Γ(n) of (ξ1 , . . . , ξn )
is invertible. Then the linear regression ξˆn+1 of ξn+1 on ξ1 , ..., ξn is given by
$$\hat{\xi}_{n+1} = \mu_{n+1} + \sum_{j=1}^{n} (\xi_j - \mu_j)\, b_j,$$
Remark 2.16 When (ξ1 , . . . , ξn+1 ) is Gaussian, then the linear regression
ξˆn+1 coincides with E(ξn+1 |ξ1 , . . . , ξn ).
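A small numerical sketch of Proposition 2.15 follows, assuming (as in the standard normal equations, an assumption not spelled out above) that the coefficients $b$ solve $\Gamma^{(n)} b = \mathrm{Cov}\big((\xi_1, \dots, \xi_n), \xi_{n+1}\big)$.

```python
import numpy as np

# Best linear prediction of xi_3 from (xi_1, xi_2) for a Gaussian vector;
# assumption: b solves Gamma_n b = Cov((xi_1, xi_2), xi_3).
rng = np.random.default_rng(2)
data = rng.multivariate_normal(
    mean=[0.0, 0.0, 0.0],
    cov=[[1.0, 0.5, 0.3], [0.5, 1.0, 0.4], [0.3, 0.4, 1.0]],
    size=100_000,
)
xi, target = data[:, :2], data[:, 2]
gamma_n = np.cov(xi, rowvar=False)                 # covariance of (xi_1, xi_2)
c = np.array([np.cov(xi[:, j], target)[0, 1] for j in range(2)])
b = np.linalg.solve(gamma_n, c)
prediction = target.mean() + (xi - xi.mean(axis=0)) @ b
print("mean squared error:", np.mean((prediction - target) ** 2))
```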
We introduce now basic material about the topic, which will be widely used
in the sequel. For more specific properties, the reader can however consult
for instance [17, 30].
We go on with some reminders about measure theory on the real line, see
[18] (Chapter 9) and [31] (Chapter 8) for more details. See also [25].
$f$ is said to be right-continuous or càd (resp. to have left limits, or to be làg) if for any $x$, $f(x^+) = f(x)$ (resp. $f(x^-)$ exists).
$f$ is said to be left-continuous or càg (resp. to have right limits, or to be làd) if for any $x$, $f(x^-) = f(x)$ (resp. $f(x^+)$ exists).
3.2 Generalities on continuous time processes
d) We will also consider processes indexed by the time interval $[0, T]$. Most of the notions introduced for $\mathbb{R}_+$ can be expressed for $[0, T]$; we will allow some ambiguity between the two settings.
Definition 3.5 (i) A process (Xt )t≥0 is said to have independent in-
crements if for every 0 ≤ t0 < t1 < . . . < tn < ∞, the family
{Xt1 − Xt0 , . . . , Xtn − Xtn−1 } is independent.
(iii) A process (Xt )t≥0 is said to have stationary increments if for every
0 ≤ t0 < t1 < . . . < tn < ∞, the law of (Xt1 +h − Xt0 +h , . . . , Xtn +h −
Xtn−1 +h ) does not depend on h.
(iv) (Xt )t≥0 is said to be square integrable if E(|Xt |2 ) < ∞, ∀t ≥ 0.
Xt (ω) = Yt (ω), ∀t ≥ 0, ∀ω ∈ N c .
Proof.
It is enough to show that two left-continuous (resp. right-continuous) processes $(X_t)$ and $(Y_t)$ which are modifications of each other are indistinguishable. We only consider the case when the paths are left-continuous, the other cases being similar. Let $N$ be a negligible set such that for $\omega \notin N$ the paths $t \mapsto X_t(\omega)$ and $t \mapsto Y_t(\omega)$ are left-continuous.
On the other hand, for each $n \in \mathbb{N}^*$, $k \in \mathbb{N}$, there is a null set $N_{k,n}$ such that
$$X_{k/2^n}(\omega) = Y_{k/2^n}(\omega), \quad \forall \omega \notin N_{k,n}.$$
We set $\tilde{N} = \bigcup_{k,n} N_{k,n} \cup N$, which is still a null set.
By left-continuity of the paths, for each $\omega \notin \tilde{N}$, $t \ge 0$, we have
$$X_t(\omega) = \lim_{n\to\infty} \sum_{k \ge 0} 1_{[\frac{k}{2^n}, \frac{k+1}{2^n}[}(t)\, X_{\frac{k}{2^n}}(\omega) = \lim_{n\to\infty} \sum_{k \ge 0} 1_{[\frac{k}{2^n}, \frac{k+1}{2^n}[}(t)\, Y_{\frac{k}{2^n}}(\omega) = Y_t(\omega). \qquad (3.1)$$
where $C, \alpha, \beta > 0$.
Then there is a modification of $X$ (still denoted by the same letter) such that for any $\gamma \in \left]0, \frac{\beta}{\alpha}\right[$, a.s. we have
$$\frac{1}{h^\gamma}\, \sup_{|t-s|\le h} |X_t - X_s| \to 0 \quad (h \downarrow 0). \qquad (3.3)$$
We now introduce the concept of filtration and related definitions.
Given a process (Xt )t≥0 , let (Ft ) be the filtration Ft = σ(Xs , s ≤ t). Ob-
viously (Ft ) is the smallest filtration for which (Xt )t≥0 is adapted. A priori
the above filtration does not verify the usual conditions.
Remark 3.11 Given the (natural) filtration (Ft ) it is not difficult to show the
existence of a smallest filtration (FtX ) fulfilling the usual conditions such that
Ft ⊂ FtX for every t ∈ [0, T ]. In particular, F0X contains all the P -negligible
subsets in F. This filtration is called the canonical filtration of the process
(Xt )t≥0 . When we speak about a filtration associated with a process (Xt )t≥0
without other comments, we intend its canonical filtration.
Remark 3.12 (i) If (Xt ), (Yt ) are two modifications of each other, they
will have the same canonical filtration.
(ii) If (Xt ) is (Ft )-adapted , then every modification remains (Ft )-adapted.
Sometimes it is not sufficient to only suppose that a process (Xt )t≥0 is (Ft )-
adapted, we need a little bit more.
Remark 3.14 (i) It is obvious that any (Ft )-progressively measurable pro-
cess is (Ft )-adapted.
(ii) If (Xt ) is (Ft )-adapted, and càd (or càg), then (Xt ) is progressively
measurable, see [17] Proposition 1.13, chapter 1.
$$X^\tau_t = X_{t \wedge \tau}, \quad t \ge 0.$$
$$\{\tau \le t\} \in \mathcal{F}_t, \quad \forall t \ge 0.$$
Let τ be a stopping time. We introduce
Fτ = {A ∈ F|A ∩ {τ ≤ t} ∈ Ft ; ∀t ≥ 0} .
Exercise 3.15 (i) Prove that Fτ is a σ-field. Prove that it coincides with
Ft when τ = t.
a) τ is Fτ -measurable.
b) If S ≤ τ a.s., then FS ⊂ Fτ .
Definition 3.17 A real valued process (Bt )t≥0 will be called Brownian mo-
tion (or Wiener process) with variance σ 2 , if
b) Assumptions (i) and (ii) imply that the process has stationary increments.
B0 ≡ x,
Proof. It will be enough to show that for all 0 ≤ t0 < t1 < . . . <
tn , (Bt1 − Bt0 , . . . , Btn − Btn−1 ) is a Gaussian vector. This is a consequence
of independence and of the Gaussian character of each component.
In the sequel, when we mention a Brownian motion without further precision, it will be a classical Brownian motion.
In this case the law of Wt is N (0, t), i.e. its density with respect to Lebesgue
measure is
$$\frac{1}{\sqrt{2\pi t}}\, \exp\left( \frac{-x^2}{2t} \right).$$
We will also need the notion of Brownian motion related to a particular
filtration (Ft )t≥0 .
Definition 3.22 Let (Ft ) be a filtration fulfilling the usual conditions. We will
call (Ft )-Brownian motion (with variance σ 2 ), a (continuous) real process
(Bt )t≥0 which verifies the following properties.
(i) ∀t ≥ 0, Bt is Ft -measurable;
Also, we will call classical (Ft )-Brownian motion an (Ft )-Brownian mo-
tion (Wt ) if σ = 1 and W0 = 0 a.s.
Remark 3.23 Let $(B_t)_{t\ge0}$ be an (Ft)-Brownian motion. Then the filtration $\sigma(B_s,\, s \le t) \vee \mathcal{N}$, where $\mathcal{N}$ is the family of $P$-null sets, is the canonical filtration $(\mathcal{F}^B_t)$ of $B$; see Theorem 31, Section 4, Chapter 1 of [29].
where we do not know the exact behaviour of the noise, but only its probability distribution, while its “mean” $r(t)$ is known.
A particular case of (3.5) and (3.6) can be
$$\frac{dN}{dt} = (r + \alpha\,\text{“noise”})\, N(t), \qquad N(0) = A. \qquad (3.7)$$
$N(t)$ can represent for instance the price of a resource, such as oil, apartments, currencies, stocks and so on; $\alpha$ could represent the volatility of the resource. If $\alpha = 0$, one would have $N(t) = A\, e^{rt}$.
More generally, we will be interested in equations of the type
$$\frac{dX_t}{dt} = b(t, X_t) + a(t, X_t) \cdot \text{“noise”},$$
where b, a are given functions.
(iii) E(ξt ) = 0, ∀t ≥ 0.
(i’) (Bt ) has independent increments,
(iii’) B0 = 0 a.s.
• an (Ft )-submartingale if E(Mt |Fs ) ≥ Ms , ∀t ≥ s.
Remark 4.1 From that definition, we can deduce that if (Mt )t≥0 is a Ft -
martingale, then E(Mt ) = E(M0 ), ∀t ≥ 0. If (Mt )t≥0 is an Ft -supermartingale
(resp. Ft -submartingale) then t 7→ E(Mt ) is decreasing (resp. increasing).
When one speaks about a martingale, without σ-field specification, one refers
to the canonical filtration.
• (Mt ) is a martingale,
• ϕ is non-decreasing.
At this stage we can provide some examples coming from Brownian motion.
Proposition 4.6 Let (Wt )t≥0 be (Ft )-classical Brownian motion. Then, the
following processes are (Ft )-martingales:
1) $W_t$;
2) $W_t^2 - t$;
3) $\exp\left( \sigma W_t - \frac{\sigma^2}{2} t \right)$.
where $G \sim N(0, 1)$. At this stage (2.3) implies that the previous quantity equals $e^{\frac{\sigma^2(t-s)}{2}}$. This yields the announced result.
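A quick Monte Carlo sanity check of item 3), illustrative and not part of the proof: the exponential martingale has constant expectation equal to 1.

```python
import numpy as np

# Check E[exp(sigma * W_t - sigma^2 t / 2)] = 1 for a fixed t.
rng = np.random.default_rng(3)
sigma, t = 0.7, 2.0
W_t = rng.normal(0.0, np.sqrt(t), size=1_000_000)   # W_t ~ N(0, t)
M_t = np.exp(sigma * W_t - 0.5 * sigma**2 * t)
print(M_t.mean())   # should be close to 1
```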
Lemma 4.7 Any (Ft )-martingale admits a càdlàg modification.
If $(M_t)_{t\ge0}$ is a martingale, the relation $E(M_t | \mathcal{F}_s) = M_s$, $t > s$, can be generalized to random times, provided these are bounded stopping times. The next theorem, namely Doob's stopping time theorem, will be stated without proof.
Theorem 4.9 Let $(M_t)_{t\ge0}$ be a càdlàg martingale with respect to a filtration $(\mathcal{F}_t)_{t\ge0}$ and let $\tau_1, \tau_2$ be two stopping times such that $\tau_1 \le \tau_2 \le K$, $K$ being a real positive constant. Then $M_{\tau_2}$ is integrable and
$$E(M_{\tau_2} \,|\, \mathcal{F}_{\tau_1}) = M_{\tau_1} \quad \text{a.s.} \qquad (4.10)$$
Remark 4.10 (i) Let $\tau$ be a bounded stopping time. The previous result implies that $E(M_\tau) = E(M_0)$. To see this, it is enough to apply Doob's stopping time theorem with $\tau_1 = 0$, $\tau_2 = \tau$ and to take the expectation of both sides.
(ii) If (Mt ) is a submartingale, the same result holds replacing (4.10) with
Proof. We can replace (Xt ) with an indistinguishable process whose paths
are continuous, see Remark 3.18 a).
Remark 4.12 The path continuity allows to write $T_a(\omega) = \min\{s \ge 0 \,|\, X_s(\omega) \ge a\}$, if $\{s \ge 0 \,|\, X_s(\omega) \ge a\} \ne \emptyset$. In particular $X_{T_a}(\omega) = a$ if $\{s \ge 0 \,|\, X_s(\omega) \ge a\} \ne \emptyset$.
Since
$$\{T_a \le t\} = \Big\{ \max_{s \le t} X_s \ge a \Big\},$$
$$\max_{s = k 2^{-n} t,\ k = 0, \dots, 2^n} X_s,$$
Equality (4.12) means that the two previous complex functions are identical on $\mathbb{R}_+$. By analytic continuation, they also coincide on $\{\mathrm{Re}\,\lambda > 0\}$. By continuity, they are equal on $\{\mathrm{Re}\,\lambda \ge 0\}$; this determines the characteristic function, which uniquely determines the law.
continuous. The obtained indistinguishable process is still an (Ft)-Brownian motion.
We suppose a ≥ 0. Since the paths are continuous, τa = Ta , so that τa is
again a stopping time.
We will apply Doob’s stopping time theorem to the martingale
$$M_t = \exp\left( \sigma W_t - \frac{\sigma^2 t}{2} \right).$$
Unfortunately, we cannot apply that theorem directly to $\tau_a$, which is not a priori bounded. However, if $n$ is a strictly positive integer, Proposition 3.16 says that $\tau_a \wedge n$ is still a stopping time, this time bounded.
Applying Remark 4.10 (i), we obtain $E(M_{\tau_a \wedge n}) = 1$. Since $W_r(\omega) \le a$ if $r \le \tau_a(\omega)$,
$$M_{\tau_a \wedge n} = e^{\sigma W_{\tau_a \wedge n} - \frac{\sigma^2}{2}(\tau_a \wedge n)} \le e^{\sigma a}.$$
Letting $\sigma$ go to 0, we obtain
$$P\{\tau_a < \infty\} = 1.$$
This means that the classical Brownian motion reaches the barrier $a$ with probability one. Then
$$E\left( e^{-\frac{\sigma^2 \tau_a}{2}} \right) = e^{-\sigma a}. \qquad (4.13)$$
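A Monte Carlo illustration of (4.13) on discretized paths (a sketch only; the time discretization introduces a small upward bias in the hitting time):

```python
import numpy as np

# Estimate E[exp(-sigma^2 tau_a / 2)] and compare with exp(-sigma * a).
rng = np.random.default_rng(4)
a, sigma = 1.0, 1.0
n_paths, n_steps, T = 5_000, 10_000, 50.0   # long horizon: tau_a < infinity a.s.
dt = T / n_steps

vals = np.empty(n_paths)
for i in range(n_paths):
    W = np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))
    hit = np.argmax(W >= a)                   # first index with W >= a (0 if never)
    tau = (hit + 1) * dt if W[hit] >= a else np.inf
    vals[i] = np.exp(-0.5 * sigma**2 * tau)   # exp(-inf) = 0 if never hit
print(vals.mean(), "vs", np.exp(-sigma * a))
```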
Let us suppose now that −a > 0. We remark that
Now (−Wt )t≥0 is an (Ft )-Brownian motion and the result follows.
Proposition 4.16 Let (Mt )0≤t≤T be a càdlàg (Ft )-martingale (resp. sub-
martingale, supermartingale). Then M τ is still an (Ft )-martingale (resp.
submartingale, supermartingale).
Remark 4.19 As mentioned earlier, the space C([0, T ]) equipped with the
distance ρ is a complete metric space.
Indeed the space of random elements with values in the Banach space $E := C([0, T])$ (equipped with the sup-norm $\|\cdot\|$) is a complete metric space when equipped with the metric
$$\rho_C(X, Y) = E\left( \frac{\|X - Y\|}{1 + \|X - Y\|} \right),$$
which describes convergence in probability.
So if $(X^n)$ is a Cauchy sequence in $C([0, T])$, then by definition of $\rho$, $(X^n)$ is Cauchy as a sequence of random elements with values in $E$ with respect to the metric $\rho_C$. Therefore there is a random element $X$ with values in $E$ such that $X^n$ converges in probability to $X$.
Proof. $(|M_t|)_{0 \le t \le T}$ is a submartingale and $x \mapsto |x|$ is a convex function.
• X τn is an (Ft )-martingale;
• limn→∞ τn = +∞ a.s.
• St := sups≤t Ws . Why?
Remark 4.27 When the filtration is not mentioned, again the canonical fil-
tration will be underlying.
Proposition 4.28 (Stricker theorem).
Let S be an (Ft )- semimartingale and (Gt ) be a subfiltration of (Ft ) such that
S is (Gt )- adapted. Then S is also a (Gt )- semimartingale.
Lemma 4.29 Let $(M_t)_{t\in[0,T]}$ be an (Ft)-càdlàg local martingale such that $\sup_{t\in[0,T]} |M_t| \in L^1$.
$$E[M_{s \wedge \tau_n} 1_\Lambda] = E[M_{t \wedge \tau_n} 1_\Lambda].$$
Theorem 4.30 (Doob-Meyer decomposition of a continuous sub-
martingale)
Let $X$ be a continuous (Ft)-submartingale which is non-negative, or such that there is $\gamma > 1$ with $E(|X_T|^\gamma) < \infty$. Then there is a continuous (Ft)-martingale $M$ and an adapted, continuous, increasing process $V$ (with $V_0 = 0$) such that $X = M + V$. The decomposition is unique.
Remark 4.36 Let $K = (K_s)$ be a process such that $\int_0^T |K_s|\, ds < \infty$ a.s.
(i) $V_t = \int_0^t K_s\, ds$ defines a finite variation process and its total variation process is given by
$$\|V\|_{[0,t]} = \int_0^t |K_s|\, ds.$$
In the sequel we will often have to deal with processes depending on a parameter $\varepsilon$ which converge to zero when $\varepsilon \to 0$. For this reason, the generic notation $(R(\varepsilon, s),\, s \in [0, T])$ indicates a family of processes such that
in probability. In the definition below we come back to the notion of subdi-
vision introduced in Definition 3.1.
a) We denote
$$\int_0^t Y\, d^-X := \lim_{|\Pi| \to 0} S^-_t(\Pi, Y, X).$$
b) We set
$$\int_0^t Y\, d^\circ X := \lim_{|\Pi| \to 0} S^\circ_t(\Pi, Y, X).$$
for any $t \in [0, T]$, $\int_0^t Y\, d^-X$ (resp. $\int_0^t Y\, d^\circ X$) exists and the process $\big(\int_0^t Y\, d^-X\big)_t$ (resp. $\big(\int_0^t Y\, d^\circ X\big)_t$) admits a càdlàg version. In that case that version will be chosen by default.
If $S^-_t(\Pi, Y, X)$ (resp. $S^\circ_t(\Pi, Y, X)$) converges ucp (with no dependence on the chosen sequence of subdivisions), then we say that the forward (resp. symmetric) integral exists in the ucp sense.
If the forward integral of $Y$ with respect to $X$ exists, for $a < b$ in $[0, T]$ we also set $\int_a^b Y\, d^-X := \int_0^b Y\, d^-X - \int_0^a Y\, d^-X$. Similar considerations can be done for the symmetric integral.
We define now the notions of covariation and quadratic variation.
c) (Covariation) We set
$$[Y, X]_t := \lim_{|\Pi| \to 0} C(\Pi, Y, X)_t \qquad (5.2)$$
if the limit holds in the ucp sense. $[Y, X]$ is called the covariation of $Y$ and $X$. If $Y = X$, then $[X] := [X, X]$ is called the quadratic variation of $X$.
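A numerical illustration of this definition (a sketch): for a discretized Brownian path, $C(\Pi, W, W)_t$ approaches $[W, W]_t = t$ as the mesh goes to zero.

```python
import numpy as np

# Sum of squared increments of a Brownian path on a fine grid approaches t.
rng = np.random.default_rng(5)
t, n = 1.0, 1_000_000
dW = rng.normal(0.0, np.sqrt(t / n), size=n)    # increments on a fine grid
print(np.sum(dW**2))                            # close to t = 1.0
```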
iii) If the forward (resp. symmetric) integral exists in the ucp sense then
the (indefinite) forward (resp. symmetric) integral exists.
iv) Like covariations, stochastic integrals, if they exist in the ucp sense, allow a stopping property:
$$\int_0^t Y^\tau\, d^\star X = \int_0^t Y\, d^\star X^\tau = \int_0^t Y^\tau\, d^\star X^\tau = \int_0^{t \wedge \tau} Y\, d^\star X,$$
where $\star$ denotes $-$ or $\circ$.
Definition 5.7 A vector (X 1 , . . . , X n ) of càdlàg processes is said to have
all its mutual brackets (covariations) if [X i , X j ] exist for every 1 ≤ i,
j ≤ n.
[X i + X j , X i + X j ] = [X i , X i ] + 2[X i , X j ] + [X j , X j ]; (5.4)
(iii) (Kunita-Watanabe).
If X and Y are such that (X, Y ) has all its mutual brackets, we have
$$|[X, Y]| \le \big\{ [X, X]\, [Y, Y] \big\}^{1/2}.$$
(iv) If X is a finite quadratic variation process and Y is a zero quadratic
variation process then (X, Y ) has all its mutual brackets and [X, Y ] = 0.
(v) Let X be a bounded variation continuous process. Then for any càdlàg
(or càglàd) process Y we have
a) $\int_0^t Y\, d^-X = \int_0^t Y\, dX = \int_0^t Y\, d^\circ X$,
b) [X, Y ] = 0. In particular a bounded variation continuous process is
a zero quadratic variation process.
(i) Let
Π = {0 = t0 < ... < tn = T }, (5.5)
Taking the limit when |Π| goes to zero gives the result (i).
Clearly for almost all $\omega$,
$$\sup_{t \le T} \left| \int_0^t (Y^\Pi - Y)\, dX \right| \le \int_0^T |Y^\Pi_s - Y_s|\, d\|X\|_{[0,s]}.$$
We need at this point a Dini-type lemma in the stochastic framework. The deterministic version of this lemma is well known.
Corollary 5.12 Let $(X_t)_{0 \le t \le T}$ be a process. Suppose there is a continuous process $(A_t)$ such that
$$\lim_{|\Pi| \to 0} C(\Pi, X, X)_t$$
Proof (of Lemma 5.11). Since the process $Z$ is continuous and because of assumptions (i) and (ii), it is clear that $Z$ is non-decreasing.
Let $\rho, \alpha > 0$, $N \in \mathbb{N}^*$. We set $t^N_i = \frac{iT}{N}$, $0 \le i \le N$.
For almost all $\omega$ in $\Omega$,
$$\sup_i \big| Z(t^N_{i+1}) - Z(t^N_i) \big|(\omega) \le \delta\!\left( Z(\cdot, \omega);\, \tfrac{T}{N} \right),$$
where $\delta\big(Z(\cdot, \omega);\, \frac{T}{N}\big)$ is the continuity modulus of $Z(t, \omega)$, $0 \le t \le T$. The a.s. convergence of $\delta\big(Z(\cdot, \omega);\, \frac{T}{N}\big)$ to zero, when $N \to \infty$, implies the convergence in probability. Therefore, we choose $N$ (fixed) so that
$$P\left\{ \delta\!\left( Z(\cdot);\, \tfrac{T}{N} \right) > \frac{\alpha}{4} \right\} \le \rho.$$
We define
$$A = \left\{ \sup_{0 \le t \le T} |Z_\ell(t) - Z(t)| > \alpha \right\}.$$
The event $A$ is included in the union $\bigcup_{i=0}^{N-1} A_i$ where
$$A_i = \left\{ \sup_{t \in [t_i, t_{i+1}]} |Z_\ell(t) - Z(t)| > \alpha \right\}, \quad 0 \le i \le N-1.$$
We rearrange the right-hand side of the above inequality. Then
$$Z_\ell(t) - Z(t) \le Z_\ell(t_{i+1}) - Z(t_{i+1}) + \delta\!\left( Z(\cdot);\, \tfrac{T}{N} \right).$$
Similarly
$$Z_\ell(t) - Z(t) \ge Z_\ell(t_i) - Z(t_i) - \delta\!\left( Z(\cdot);\, \tfrac{T}{N} \right).$$
Therefore,
$$|Z_\ell(t) - Z(t)| \le \delta\!\left( Z(\cdot);\, \tfrac{T}{N} \right) + |Z_\ell(t_{i+1}) - Z(t_{i+1})| + |Z_\ell(t_i) - Z(t_i)|, \quad \forall t \in [t_i, t_{i+1}].$$
Consequently,
$$A_i \subset \left\{ \delta\!\left( Z(\cdot);\, \tfrac{T}{N} \right) > \frac{\alpha}{2} \right\} \cup \tilde{A}_i \cup \tilde{A}_{i+1},$$
where
$$\tilde{A}_i = \left\{ |Z_\ell(t_i) - Z(t_i)| > \frac{\alpha}{4} \right\}.$$
Finally,
$$P(A) \le P\left( \delta\!\left( Z(\cdot);\, \tfrac{T}{N} \right) > \frac{\alpha}{4} \right) + P\Big( \bigcup_{i=0}^{N} \tilde{A}_i \Big) \le P\left( \delta\!\left( Z(\cdot);\, \tfrac{T}{N} \right) > \frac{\alpha}{4} \right) + \sum_{i=0}^{N} P(\tilde{A}_i).$$
$$M_t^2 = N_t + A_t, \quad t \ge 0, \qquad (5.6)$$
Theorem 5.13 Let (Mt ; t ≥ 0) be a continuous (Ft )-local martingale. Then
the process A coming from the Doob decomposition of M 2 coincides with
[M, M ]. In particular (Mt2 − [M, M ]t ; t ≥ 0) is a continuous (Ft )-local mar-
tingale. If (Mt ; t ≥ 0) is a continuous and square integrable martingale (see
Definition 4.2) then (Mt2 − [M, M ]t ; t ≥ 0) is a continuous martingale and
E[[M, M ]t ] < ∞, for any t ≥ 0.
The proof of Theorem 5.13 is adapted from Theorem 5.8 chap. 1 of [17]. The
key fact employed here is that, when squaring sums of martingale increments
and taking the expectation, one can neglect the cross-product terms. Before
the sketch of the proof we propose an easy exercise.
Let $A$ be the process appearing in the decomposition (5.6).
(ii)
Lemma 5.17 Let $M$ be a square integrable (Ft)-martingale and let $K$ be such that $\sup_{t \le T} |M_t| \le K$. Then, for every $t \in [0, T]$,
$$E\big( C(\Pi, M, M)_t^2 \big) \le 3K^4.$$
So
$$E\left( \sum_{k=0}^{m-1} \sum_{j=k+1}^{m-1} (M_{s_{j+1}} - M_{s_j})^2 (M_{s_{k+1}} - M_{s_k})^2 \right) = \sum_{k=0}^{m-1} E\left( (M_{s_{k+1}} - M_{s_k})^2\, E\Big( \sum_{j=k+1}^{m-1} (M_{s_{j+1}} - M_{s_j})^2 \,\Big|\, \mathcal{F}_{s_{k+1}} \Big) \right) \le K^2 E\left( \sum_{k=0}^{m-1} (M_{s_{k+1}} - M_{s_k})^2 \right) \le K^4, \qquad (5.9)$$
where the latter bound can be justified by taking the expectation in (5.8). We also have
$$E\left( \sum_{k=0}^{m-1} (M_{s_{k+1}} - M_{s_k})^4 \right) \le K^2 E\left( \sum_{k=0}^{m-1} (M_{s_{k+1}} - M_{s_k})^2 \right) \le K^4. \qquad (5.10)$$
Inequalities (5.9) and (5.10) imply
$$E\big( C(\Pi, M, M)_t^2 \big) = E\left( \sum_{k=0}^{m-1} (M_{s_{k+1}} - M_{s_k})^4 \right) + 2\, E\left( \sum_{k=0}^{m-1} \sum_{j=k+1}^{m-1} (M_{s_{j+1}} - M_{s_j})^2 (M_{s_{k+1}} - M_{s_k})^2 \right) \le 3K^4.$$
Lemma 5.18 Let $M$ be a square integrable continuous martingale and let $K$ be such that $\sup_{t \le T} |M_t| \le K$ a.s. Then, given a sequence $(\Pi)$ of subdivisions whose mesh converges to zero, for every $t \in [0, T]$, we have
$$\lim_{|\Pi| \to 0} E\left( \sum_{k=0}^{m-1} (M_{s_{k+1}} - M_{s_k})^4 \right) = 0.$$
As $|\Pi|$ approaches zero, the first factor on the right-hand side remains bounded by Lemma 5.17 and the second term tends to zero, by the uniform continuity of $M$ and by the Lebesgue dominated convergence theorem.
We go on with the proof of Theorem 5.13. Taking into account Dini's type Lemma 5.11, it is enough to show that for any fixed $t \in [0, T]$, $C(\Pi, M, M)_t - A_t$ converges to zero in probability. Let $\Pi$ be of the type (5.1), as at the beginning of the proof of the present theorem.
We first suppose that $\sup_{0 \le t \le T} |M_t| \le K$ and that the process $A$ intervening in the Doob decomposition $M^2 = N + A$ is also bounded by $K$, for some constant $K$. Then $N = M^2 - A$ is also bounded, and it is of course even a square integrable martingale. Since for $0 \le j \le m-1$ we have
and taking into account Exercise 5.15 it follows that
$$E\{(C(\Pi, M, M)_t - A_t)^2\} = E\left( \Big( \sum_{j=0}^{m-1} \big\{ (M_{s_{j+1}} - M_{s_j})^2 - (A_{s_{j+1}} - A_{s_j}) \big\} \Big)^2 \right) = E\left( \sum_{j,k=0}^{m-1} \big( (M_{s_{j+1}} - M_{s_j})^2 - (A_{s_{j+1}} - A_{s_j}) \big) \big( (M_{s_{k+1}} - M_{s_k})^2 - (A_{s_{k+1}} - A_{s_k}) \big) \right).$$
So
$$E\{(C(\Pi, M, M)_t - A_t)^2\} = E\left( \sum_{j=0}^{m-1} \big( (M_{s_{j+1}} - M_{s_j})^2 - (A_{s_{j+1}} - A_{s_j}) \big)^2 \right) \le 2\, E\left( \sum_{j=0}^{m-1} (M_{s_{j+1}} - M_{s_j})^4 + (A_{s_{j+1}} - A_{s_j})^2 \right) \le 2\left( \sum_{j=0}^{m-1} E(M_{s_{j+1}} - M_{s_j})^4 + E\big( A_t\, \delta(A; \Pi) \big) \right).$$
As the mesh of Π approaches zero, the first term on the right-hand side of this
inequality converges to zero because of Lemma 5.18; the second term does
as well, by the Lebesgue dominated convergence theorem and the sample
path uniform continuity of A. Convergence in L2 implies convergence in
probability, so this proves the theorem for martingales which are uniformly
bounded.
We now treat the general case. We proceed by localization. Let $(\tau_N)$ be the localizing sequence defined by
is a martingale; by the first part of the proof we have
$$[M^{\tau_N}, M^{\tau_N}] = A^{\tau_N}.$$
Proposition 5.19 Let (Ft ) be a usual filtration and M is a càdlàg (Ft )-local
martingale vanishing at zero with [M ] = 0. Then M is identically zero.
Proof.
Let us consider the sequence (τk ) intervening in the definition of local mar-
tingale. For every k, M τk is a martingale so (M 2 )τk is a non-negative sub-
martingale which is a martingale since [M τk , M τk ] = [M, M ]τk ≡ 0. See
Doob decomposition and Theorem 5.13. Without restriction of generality,
we can suppose that $\lim_{k \to +\infty} \tau_k(\omega) = \infty$ for every $\omega \in \Omega$. It will be enough to show that, for every $k$,
$$1_{\{\tau_k > T\}} M \equiv 0,$$
or better that $M^{\tau_k} \equiv 0$. Finally, it will be sufficient to show the proposition
when M is a martingale such that M 2 is a martingale.
Consequently, E(Mt2 ) = E(M02 ) = 0 and so Mt = 0 a.s. Since M is càdlàg
then M will be indistinguishable from 0.
Exercise 5.21 Show that if $M, N$ are continuous (Ft)-local martingales, then $(M, N)$ has all its mutual brackets.
Proof. The result follows directly from Proposition 5.10 (v) and the bilin-
earity of the covariation.
Proof. Items (i) and (ii) follow from the bilinearity of the covariation and
Theorem 5.13.
to show that $(M^\varphi)^2 - [M]^\varphi$ is a martingale. This follows because $N = M^2 - [M]$ is a martingale, together with the definition of the quadratic variation.
Proposition 6.2 Let $M, N$ be two processes such that $(M, N)$ has all its mutual brackets. Let $H, K$ be measurable processes such that $\int_0^T H_s^2\, d[M, M]_s + \int_0^T K_s^2\, d[N, N]_s < \infty$ a.s. Then for all $t \in [0, T]$
$$\left| \int_0^t H_r K_r\, d[M, N]_r \right| \le \sqrt{ \int_0^t H_r^2\, d[M, M]_r \int_0^t K_r^2\, d[N, N]_r }.$$
Proposition 6.4 Let $(M_t)_{t\ge0}$ be a continuous (Ft)-local martingale such that $M_0 = 0$. We set $M^*_t = \max_{s \le t} |M_s|$. For every $m > 0$, there are universal positive constants $k_m$, $K_m$ such that
$$k_m\, E\big( [M]^m_\tau \big) \le E\big( (M^*_\tau)^{2m} \big) \le K_m\, E\big( [M]^m_\tau \big), \qquad (6.1)$$
Proposition 6.5 Let $(M_t)_{t\in[0,T]}$ be a continuous (Ft)-local martingale such that $M_0 \in L^p$, $p \ge 1$. Suppose that one of the items below is valid.
(i) $E\big( [M]_T^{p/2} \big) < \infty$.
Corollary 6.6 Let (Mt ) be a continuous (Ft )-local martingale such that
M0 ∈ L2 and
E([M ]T ) < ∞, ∀T > 0.
• The case p > 1 follows since (ii) implies that supt∈[0,T ] |Mt | ∈ L1 .
We remark that, if M, N ∈ HT2 , then M + N and M − N belong to HT2 , and
by Corollary 5.23 M N − [M, N ] is a martingale.
Proof. The positivity and the bilinearity properties for (7.2) are obvious. Suppose that $\langle M, M \rangle_{\mathcal{H}^2_T} = 0$. This implies that $E(M_T^2) = 0$. So $M$ is a submartingale with
$$E\big( (M_T^{(n)} - M_T)^2 \big) \to 0,$$
$$E\Big( \sup_{s \le T} |M_s^{(n)} - M_s^{(m)}|^2 \Big) \le 4\, E\big( M_T^{(n)} - M_T^{(m)} \big)^2,$$
which converges to zero when $n, m \to \infty$. Consequently the r.v. $\sup_{s \le T} |M_s^{(n)} - M_s^{(m)}|$ converges in probability to zero, and so $(M^{(n)})$ is a Cauchy sequence in $C_{\mathcal{F}}([0, T])$ equipped with the usual metric $d$ related to ucp convergence; hence it converges to some (continuous) process $M$. It is square integrable since for every $t$, $M_t^{(n)}$ converges in $L^2$ to $M_t$.
It remains to show that $M$ is a martingale: indeed
$$M_t = \lim_{n \to \infty} M_t^{(n)} = \lim_{n \to \infty} E(M_T^{(n)} | \mathcal{F}_t) = E(M_T | \mathcal{F}_t).$$
Definition 7.3 Given a continuous (Ft)-local martingale $M$, we denote by $L^2(M)_T$ the set of progressively measurable processes $H$ such that $E\big( \int_0^T H_s^2\, d[M, M]_s \big) < \infty$, equipped with the inner product
$$\langle H, K \rangle_{L^2(M)_T} = E\left( \int_0^T H_s K_s\, d[M, M]_s \right).$$
Definition 7.4 We denote by $\mathcal{E}$ the set of elementary processes, i.e. the set of processes $H$ which are linear combinations of processes of the type
$$\sum_{i=0}^{n-1} H_i\, 1_{[t_i, t_{i+1}[}, \qquad (7.3)$$
If H is of the type (7.3) and M is a continuous square integrable martingale
then we denote
$$(H \cdot M)_t := \sum_{i=0}^{n-1} H_i\, (M_{t_{i+1} \wedge t} - M_{t_i \wedge t}).$$
where the latter passage can be explained by (5.7) and Theorem 5.13. Consequently this equals
$$E(H \cdot M)_T^2 = E\left( \sum_{i=0}^{n-1} E\big( H_i^2\, ([M, M]_{t_{i+1}} - [M, M]_{t_i}) \,\big|\, \mathcal{F}_{t_i} \big) \right) = E\left( \sum_{i=0}^{n-1} H_i^2\, ([M, M]_{t_{i+1}} - [M, M]_{t_i}) \right) = E\left( \int_0^T \sum_{i=0}^{n-1} H_i^2\, 1_{[t_i, t_{i+1}[}(s)\, d[M, M]_s \right) = E\left( \int_0^T H_s^2\, d[M, M]_s \right).$$
Theorem 7.8 The map $\mathcal{E} \to \mathcal{H}^2_T$ defined by $H \mapsto H \cdot M$ extends to an isometry from $L^2(M)_T$ to $\mathcal{H}^2_T$.
Proposition 7.10 Let (F1 , d1 ) and (F2 , d2 ) be two metric spaces, D a dense
subset of F1 . Let T : D → F2 for which there is c > 0 such that
for every x, y ∈ D.
If $(F_2, d_2)$ is complete, then the map $T$ extends to the full space $F_1$, and (7.5) remains true for the extension.
Proof. We set Φ := H · M and we prove (7.6). Taking into account Corol-
lary 5.23 and Corollary 5.20, we need to show that
$$(H \cdot M)N - \int_0^\cdot H_r\, d[M, N]_r \quad \text{is a martingale.} \qquad (7.7)$$
For this let 0 ≤ s < t ≤ T . We have to show that, for every ξ bounded
Fs -measurable we have
$$E\left( \Big( (H \cdot M)_t N_t - (H \cdot M)_s N_s - \int_s^t H_r\, d[M, N]_r \Big)\, \xi \right) = 0. \qquad (7.8)$$
[X, N ]T = [H · M, N ]T .
Exercise 7.12 Show (7.8) when $H$ is an elementary process. Hint: make use of (5.7) in Exercise 5.15.
Exercise 7.13 Let N ∈ HT2 . Show that the map M 7→ [M, N ]T is continu-
ous from HT2 to L1 (Ω).
7.2 The case of local martingales
For this part we do not develop all the details; the interested reader can consult [17], Chapter 3, Section 3.2 D. Let $M$ be a continuous (Ft)-local martingale vanishing at zero and let $\mathcal{P}(M)$ be the collection of (equivalence classes of) all progressively measurable processes $X$ satisfying
$$\int_0^T X_t^2\, d[M]_t < \infty \quad \text{a.s.} \qquad (7.9)$$
Then
$$\lim_{n \to \infty} \sup_{t \le \tau} \left| \int_0^t H^n_r\, dM_r - \int_0^t H_r\, dM_r \right| = 0$$
in probability.
Remark 7.19 Taking $\tau = T$, the previous convergence says that $\int_0^\cdot H^n\, dM$ converges to $\int_0^\cdot H\, dM$ ucp.
Then
$$\int_0^\cdot (\alpha H^1_s + \beta H^2_s)\, dM_s = \alpha \int_0^\cdot H^1_s\, dM_s + \beta \int_0^\cdot H^2_s\, dM_s.$$
Proof. Exercise.
The stochastic integral with respect to a local martingale extends the one related to square integrable martingales. The proof of the proposition below is an easy exercise.
The following chain rule holds and it is an easy consequence of Theorem 7.11
and Proposition 7.15.
The Itô stochastic integral can be extended to the case where the integrator is a continuous (Ft)-semimartingale $(S_t)$. Let $S = M + V$ be its decomposition, where $M$ is an (Ft)-local martingale and $V$ an (Ft)-progressively measurable finite variation process, with $V_0 = 0$ for simplicity.
Example 8.1 Itô process
Let (Wt )t≥0 be an (Ft )-classical Brownian motion and X0 be F0 -measurable,
(Ht ), (Kt ) (Ft )-progressively measurable processes such that
$$\int_0^T H_s^2\, ds + \int_0^T |K_s|\, ds < \infty \quad \text{a.s.}$$
Then
$$S_t = X_0 + \int_0^t H_s\, dW_s + \int_0^t K_s\, ds$$
is called (W, Ft )-Itô process or simply Itô process.
and S = M + V , then
$$\int_0^t Y_s\, dS_s := \int_0^t Y_s\, dM_s + \int_0^t Y_s\, dV_s$$
Application.
Let $(W_t)$ be an (Ft)-standard Brownian motion. Let $S_t = X_0 + \int_0^t H_s\, dW_s + \int_0^t K_s\, ds$ be an Itô process. Let $(Y_t)$ be an (Ft)-progressively measurable process such that $\int_0^T Y_s^2 H_s^2\, ds + \int_0^T |Y_s K_s|\, ds < +\infty$ a.s. Then
$$\int_0^t Y\, dS = \int_0^t Y H\, dW + \int_0^t Y_s K_s\, ds.$$
ii)
$$\left[ \int_0^\cdot Y^1\, dS^1,\ \int_0^\cdot Y^2\, dS^2 \right]_t = \int_0^t Y^1 Y^2\, d[S^1, S^2]_s. \qquad (8.13)$$
Solutions
ii) We make use of Corollary 7.16 and the definition of the stochastic integral with respect to a semimartingale. We also take into account the fact that, whenever $V$ is a bounded variation process, $\int_0^\cdot Y^i\, dV$ is of finite variation, $i = 1, 2$, and so
$$\left[ \int_0^\cdot Y^i\, dV \right] \equiv [V] \equiv 0.$$
Remark 8.5 Let $M, N$ be two (Ft)-local martingales and $(Y_t)$, $(Z_t)$ (Ft)-progressively measurable processes such that $\int_0^T Y_s^2\, d[M]_s + \int_0^T Z_s^2\, d[N]_s < \infty$ a.s. Let $\Omega_0 \in \mathcal{F}$ such that
Then
$$1_{\Omega_0} \int_0^t Y_s\, dM_s = 1_{\Omega_0} \int_0^t Z_s\, dN_s, \quad t \le T.$$
(True for elementary processes, afterwards we go to the limit).
We set
$$Z^a_t = \int_0^t H(a, s)\, dX_s.$$
Definition 8.7 Let $(S_t)_{t\ge0}$, $(Y_t)_{t\ge0}$ be two (Ft)-continuous semimartingales. We then denote
$$\int_0^t Y \circ dS := \int_0^t Y\, dS + \frac{1}{2} [Y, S]_t. \qquad (8.15)$$
This process will be called the Stratonovich integral of $Y$ with respect to $S$.
We remark that
• the Itô integral $\int_0^t Y\, dS$ exists,
• $[Y, S]$ exists.
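A numerical sketch of (8.15): comparing the forward (Itô) and symmetric (Stratonovich) Riemann sums for $\int W\, dW$, which differ by $\frac{1}{2}[W, W]_t = t/2$.

```python
import numpy as np

# Ito gives (W_t^2 - t)/2; Stratonovich gives W_t^2 / 2.
rng = np.random.default_rng(6)
t, n = 1.0, 1_000_000
dW = rng.normal(0.0, np.sqrt(t / n), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))
ito = np.sum(W[:-1] * dW)                    # left-point (forward) sum
strat = np.sum(0.5 * (W[:-1] + W[1:]) * dW)  # symmetric (midpoint) sum
print(ito, "vs", 0.5 * (W[-1]**2 - t))
print(strat, "vs", 0.5 * W[-1]**2)
```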
Proof.
Suppose that $Z$ is càglàd; if it were càdlàg, we could replace $Z$ with $Z^-$, where $Z^-_s = Z_{s^-}$, $s \ge 0$, which is càglàd, see Exercise 8.9 below.
We use the fact that a càglàd function $g$ is a pointwise limit of simple functions. We have $S^-(\Pi, Z, X)_t = \int_0^t Z^\Pi\, dX$ where
$$Z^\Pi(s) = \sum_{k=0}^{n-1} Z_{t_k}\, 1_{]t_k, t_{k+1}]}(s).$$
We need to show that $\int_0^t (Z^\Pi - Z)\, dX \to 0$ in the ucp sense. Now $Z^\Pi(s) \to Z_s$ pointwise a.s. We decompose $X = M + V$ where $M$ is an (Ft)-local martingale and $V$ a bounded variation process. Taking into account Proposition 5.10 (v), it is enough to consider the case $X = M$.
We proceed via localization. For k > 0 we consider the stopping time
Indeed the càglàd process (Zs ) is locally bounded, see Remark 5.1. We set
Ωk = {τk > T }. If ω ∈ Ωk , for t ∈ [0, T ], we have
Now
$$1_{\Omega_k} \int_0^t (Z - Z^\Pi)\, dX = 1_{\Omega_k} \int_0^t (Z - Z^\Pi)^{\tau_k}\, dX^{\tau_k}. \qquad (8.16)$$
Moreover
$$\int_0^t \big( (Z - Z^\Pi)^{\tau_k}_s \big)^2\, 1_{[0, \tau_k]}\, d[X^{\tau_k}] \le k^3 \quad \text{a.s.} \qquad (8.17)$$
We have, up to a negligible set,
$$\Omega = \bigcup_k \Omega_k,$$
converges to zero in probability. This will hold if we show that the expectation of the previous expression (8.18) goes to zero when $|\Pi| \to 0$. Because of (8.16), that expectation equals
$$E\left( \sup_{t \le T} \Big( 1_{\Omega_k} \int_0^t (Z - Z^\Pi)^{\tau_k}\, dX^{\tau_k} \Big)^2 \right) \le 4\, E\left( \int_0^T \big( (Z - Z^\Pi)^{\tau_k} \big)_s^2\, 1_{[0, \tau_k]}\, d[X]^{\tau_k}_s \right).$$
This converges to zero by the Lebesgue dominated convergence theorem, taking into account (8.17).
ii) This is a consequence of the previous point and of the definition of the Stratonovich integral.
Exercise 8.9 Let $(\mathcal{F}_t)$, $X$, $Y$ be as in the statement of Theorem 8.8. Let $Z$ be an (Ft)-progressively measurable càdlàg process.
(i) Show that the Itô integrals $\int_0^t Z\, dX$ and $\int_0^t Z^-\, dX$ are equal, where $Z^-_s = Z_{s^-}$, $s \ge 0$.
(ii) Prove item i) of Theorem 8.8 by first proving that $\int_0^t Z\, d^-X = \int_0^t Z^-\, dX$ if $Z$ is càdlàg.
(iii) A càdlàg (càglàd) function is bounded and has at most countably many jumps.
(iv) For basic properties (as those above), the reader can refer to [4], chapter
1.
• Vn (0) −→ V (0),
The natural topology is that of total variation on each compact set, but it is too strong for our purposes.
Remark 9.2 a) The sequence $(dV_n)$ converges to $dV$ if and only if for every $\alpha \in C^0(\mathbb{R})$
$$\int_0^t \alpha\, dV_n \to \int_0^t \alpha\, dV$$
holds (at every continuity point $t$ of $V$).
We have the following particular cases.
We recall that ucp convergence coincides with convergence in probability of random variables with values in the metric space $E = C[0, T]$.
We recall the following fact.
We will not prove Proposition 9.3, but only point a) of Remark 9.4. The Proposition and point b) can be proven analogously.
Let $\Pi = \{0 = t_0 < \dots < t_n = T\}$ be an element of a sequence of subdivisions whose mesh converges to zero. We set $\Pi_t = \{0 = s_0 < \dots < s_n = t\}$, obtained by setting $s_i = t_i \wedge t$. We expand
where
$$|R^\Pi(t)| = \left| \int_0^1 f'\big( X_{s_i} + a(X_{s_{i+1}} - X_{s_i}) \big)\, da - f'(X_{s_i}) \right| \le \delta\big( f',\, \delta(X, |\Pi|) \big),$$
with
$$I_1(t, \Pi) = \int_0^t f'(X_s)^2\, d\mu^\Pi(s) + I_{11}(t, \Pi),$$
$$I_2(t, \Pi) \le \sum_{i=0}^{n-1} (X_{t_{i+1} \wedge t} - X_{t_i \wedge t})^2\, R^\Pi(t),$$
Now
$$\sup_{t \le T} |I_2(t, \Pi)| \le \sup_{t \le T} |R^\Pi(t)|\, \sum_{i=0}^{n-1} (X_{t_{i+1} \wedge t} - X_{t_i \wedge t})^2.$$
Since $[X, X]$ exists, $I_2(\cdot, \Pi) \xrightarrow{\text{ucp}} 0$; it remains to prove that
$$\int_0^t Y_s\, d\mu^\Pi(s) \to \int_0^t Y_s\, d[X, X]_s \quad \text{ucp.} \qquad (9.3)$$
By Dini’s lemma 5.11, it will be enough to show that (9.4) holds in probability
for every fixed t.
By Exercise 9.6, we can consider a null set $N$ such that for $\omega \notin N$ the sequence of real functions $(\mu^{\Pi_n}(\cdot)(\omega))$ converges in BV to $[X, X](\omega)$. Therefore
$$\int_0^t Y_s(\omega)\, d\mu^{\Pi_n}(s)(\omega) \to \int_0^t Y_s(\omega)\, d[X, X]_s(\omega),$$
where
• A is a zero (continuous) quadratic variation process such that A0 = 0.
$$X = M^i + A^i, \quad i = 1, 2,$$
$$M + A = 0, \qquad M = M^1 - M^2, \quad A = A^1 - A^2.$$
$$0 = [M + A, M + A] = [M, M] + 2[M, A] + [A, A].$$
Since A has zero quadratic variation, Proposition 5.10 (iii) implies that
[M, A] = 0 so that [M ] = 0. Recalling that M0 = 0, Proposition 5.19
yields that M ≡ 0. Finally A will also be zero.
We have already seen that the class of finite quadratic variation processes
is stable under C 1 transformations. This property extends to the case of
Dirichlet processes as the following result shows.
Proof. Let X = M + A, as in (9.5).
We set
$$M^f_t = \int_0^t f'(X)\, dM + f(X_0).$$
To conclude, it is enough to show that
(ii) Since $M^f$ is a local martingale, taking into account Corollary 7.14 and Proposition 5.10 (v) it follows that
$$[M^f, M^f]_t = \int_0^t f'(X_s)^2\, d[M]_s,$$
Example 9.12 Let $f$ be of class $C^1$. Then $X = f(W)$ is an (Ft)-Dirichlet process.
The previous Example and Remark easily show that the class of (Ft)-Dirichlet processes strictly includes the class of (Ft)-semimartingales.
Open question
Let (Ft ) be a filtration such that X is a (Ft )-Dirichlet process and let (Gt )
be a subfiltration of (Ft ) so that X is still (Gt )-adapted.
Is X a (Gt )-Dirichlet process ?
In general, to our knowledge, the answer is unknown. However, at least in the particular case when $X$ is an (Ft)-semimartingale, $X$ is a (Gt)-Dirichlet process; in fact the Stricker Theorem 4.28 asserts that if $X$ is an (Ft)-semimartingale then it is also a (Gt)-semimartingale.
Proposition 9.14 Suppose that $[X, X]$ exists and let $f \in C^2(\mathbb{R})$. Then
$$\int_0^\cdot f'(X)\, d^-X \quad \text{and} \quad \int_0^\cdot f'(X)\, d^\circ X \quad \text{exist.} \qquad (9.6)$$
Moreover
a) $f(X_t) = f(X_0) + \int_0^t f'(X)\, d^-X + \frac{1}{2} \int_0^t f''(X_s)\, d[X, X]_s$.
b) $f(X_t) = f(X_0) + \int_0^t f'(X)\, d^-X + \frac{1}{2} [f'(X), X]_t$.
c) $f(X_t) = f(X_0) + \int_0^t f'(X)\, d^\circ X$.
b) follows from a) because Remark 9.4 a) implies that
$$[f'(X), X]_t = \int_0^t f''(X)\, d[X, X].$$
c) follows from b) and Proposition 5.10 (i). It remains to justify a) and (9.6).
We write the Taylor expansion up to second order to obtain, for $t \ge 0$,
where
$$I_1(t, \Pi) = \sum_{i=0}^{n-1} f'(X_{s_i})(X_{s_{i+1}} - X_{s_i}),$$
$$I_2(t, \Pi) = \frac{1}{2} \sum_{i=0}^{n-1} f''(X_{s_i})(X_{s_{i+1}} - X_{s_i})^2,$$
$$I_3(t, \Pi) \le C(\Pi, X, X)(t)\, |R^\Pi(t)|,$$
a) $[X, X]$ exists.
b) $\int_0^\cdot X\, d^-X$ exists.
a) $[X, X]$ exists.
b) $\int_0^\cdot g(X)\, d^-X$ exists for all $g \in C^1$.
exist for $2 \le i \le d$. Then
$$f(X_t) = f(X_0) + \sum_{i=1}^d \int_0^t \partial_i f(X_s)\, d^-X^i_s + \frac{1}{2} \sum_{i,j=1}^d \int_0^t \partial^2_{i,j} f(X_s)\, d[X^i, X^j]_s. \qquad (9.11)$$
In particular
$$\int_0^\cdot \partial_1 f(X_s)\, d^-X^1 \quad \text{exists.}$$
Remark 9.18 a) One could define a vector stochastic integral of an $\mathbb{R}^d$-valued process $Y$ with respect to an $\mathbb{R}^d$-valued process $X$, denoted $\int_0^t Y \cdot d^-X$. It would be defined as the limit in probability of
$$\sum_{i=0}^{n-1} \sum_{\ell=1}^{d} Y^\ell_{s_i} (X^\ell_{s_{i+1}} - X^\ell_{s_i}), \qquad (9.12)$$
and
$$f(X_t) = f(X_0) + \int_0^t \partial_1 f(X)\, d^-X^1 + \sum_{i=2}^{n} \int_0^\cdot \partial_i f(X_s)\, dX^i_s + \frac{1}{2} \int_0^t \partial_1^2 f(X_s)\, d[X^1, X^1]_s.$$
d) In reality, if one knows a priori that the process (Xt ) will remain in an
open set O and setting Vt = t, we only need to suppose f ∈ C 1,2 (R+ ×O)
in order to expand f (t, Xt ). For instance if X is a strictly positive finite
quadratic variation process and f (t, x) := log x, we can write
$$\log X_t = \log X_0 + \int_0^t \frac{1}{X_s}\, d^-X_s - \frac{1}{2} \int_0^t \frac{1}{X_s^2}\, d[X]_s.$$
Taking into account the relation between symmetric and forward integrals (see Proposition 5.10 (i) and Proposition 9.3), we can easily prove the following.
Let $(S_t)_{t\ge0}$ be an (Ft)-semimartingale and $(W_t)_{t\ge0}$ an (Ft)-Brownian motion. Let $(X_t)_{t\in[0,T]}$ be an Itô process of the form
$$X_t = X_0 + \int_0^t K_s\, ds + \int_0^t H_s\, dW_s, \quad t \in [0, T]. \qquad (9.13)$$
Proposition 9.20 Let f ∈ C 1,2 (R+ × R), (s, x) 7→ f (s, x). We have the
following.
i)
$$f(t, S_t) = f(0, S_0) + \int_0^t \frac{\partial f}{\partial s}(s, S_s)\, ds + \int_0^t \frac{\partial f}{\partial x}(s, S_s)\, dS_s + \frac{1}{2} \int_0^t \frac{\partial^2 f}{\partial x^2}(s, S_s)\, d[S, S]_s.$$
ii)
$$f(t, X_t) = f(0, X_0) + \int_0^t \frac{\partial f}{\partial s}(s, X_s)\, ds + \int_0^t \frac{\partial f}{\partial x}(s, X_s)\, (H_s\, dW_s + K_s\, ds) + \frac{1}{2} \int_0^t \frac{\partial^2 f}{\partial x^2}(s, X_s)\, H_s^2\, ds.$$
Proof.
i) We recall that the Itô and forward integrals coincide, see Theorem 8.8. The result then follows from Remark 9.18 c), which follows from Theorem 9.17 with $d = 2$, $V_t = t$, $X_t = S_t$.
ii) We apply the chain rule of Proposition 8.3, the fact that $[W, W]_t = t$, and Corollary 7.16, which implies that $[X, X]_t = \int_0^t H_s^2\, d[W]_s$.
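A numerical check of the Itô formula in the simplest case $f(x) = x^2$, $X = W$ (a sketch): $W_t^2 = 2 \int_0^t W\, dW + t$, i.e. the forward integral plus the quadratic variation.

```python
import numpy as np

rng = np.random.default_rng(7)
t, n = 1.0, 1_000_000
dW = rng.normal(0.0, np.sqrt(t / n), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))
lhs = W[-1] ** 2
rhs = 2.0 * np.sum(W[:-1] * dW) + t     # 2 * forward integral + [W, W]_t
print(lhs, "vs", rhs)
```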
Exercise 9.22 a) Evaluate the quadratic variation $[\sin(W), \sin(W)]$. Let $A$ be a zero quadratic variation process and set $D = W + A$. Deduce $[\sin(D), \sin(D)]$.
b) If $X$ is an Itô process of the form given above, express $\int_0^t X \circ dW$.
is an (Ft)-martingale if $\int_{\mathbb{R}} |F'|^2(y)\, \frac{p(y/\sqrt{T})}{1 + y^2}\, dy < \infty$, where $p$ is the density of the $N(0, 1)$ law.
• X is (Ft )-adapted;
• $\int_0^T a^2(s, X_s)\, ds + \int_0^T |b|(s, X_s)\, ds < \infty$ a.s.;
• $X_t = \xi + \int_0^t a(s, X_s)\, dW_s + \int_0^t b(s, X_s)\, ds$ a.s., $\forall t \in [0, T]$.
Let (St )t≥0 be a strictly positive solution (if it exists) of
$$\begin{cases} dS_t = \mu S_t\, dt + \sigma S_t\, dW_t \\ S_0 = s_0. \end{cases} \qquad (9.15)$$
We now proceed using the Itô formula in an open set, i.e. Remark 9.18 d), since $f(x) = \log(x)$ is not of class $C^2(\mathbb{R})$. Such a solution is an Itô process. We have
$$\log(S_t) = \log(s_0) + \int_0^t \frac{dS_s}{S_s} + \frac{1}{2} \int_0^t \frac{-1}{S_s^2}\, \sigma^2 S_s^2\, ds. \qquad (9.16)$$
Using (9.15), for $Y_t = \log S_t$ we have
$$Y_t = Y_0 + \int_0^t \left( \mu - \frac{\sigma^2}{2} \right) ds + \int_0^t \sigma\, dW_s. \qquad (9.17)$$
We deduce
$$Y_t = \log S_t = \log S_0 + \left( \mu - \frac{\sigma^2}{2} \right) t + \sigma W_t. \qquad (9.18)$$
It seems that
$$S_t = s_0 \exp\left( \left( \mu - \frac{\sigma^2}{2} \right) t + \sigma W_t \right) \qquad (9.19)$$
is a candidate solution of (9.15). We will verify rigorously that this is indeed the unique solution to (9.15).
We start with the existence property which will be stated in a more general
framework in the next Proposition 9.26.
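A numerical sketch comparing the candidate (9.19) with an Euler discretization of (9.15) along the same Brownian path; the parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
mu, sigma, s0 = 0.1, 0.3, 1.0
T, n = 1.0, 100_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)

S_euler = s0
for dw in dW:                       # Euler scheme for dS = mu S dt + sigma S dW
    S_euler += mu * S_euler * dt + sigma * S_euler * dw

W_T = dW.sum()
S_closed = s0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)  # formula (9.19)
print(S_euler, "vs", S_closed)
```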
Remark 9.25 Suppose $a, b$ are continuous functions as before. If $A$ is an (Ft)-semimartingale, an (Ft)-adapted process $X$ is a solution to the equation in the Itô sense if and only if it is a solution in the forward sense.
$$\partial_x f(v, x) = \sigma f(v, x), \qquad \partial^2_{xx} f(v, x) = \sigma^2 f(v, x), \qquad \partial_v f(v, x) = f(v, x).$$
Set $H(x) = \int_0^x \frac{1}{a}(y)\, dy$, $x \in \mathbb{R}$, and set $Y = H(X)$. Show explicitly that $Y$ solves an equation driven by a bounded variation process (so not involving stochastic integrals).
So
$$S_t = s_0 \exp\left( \left( \mu - \frac{\sigma^2}{2} \right) t + \sigma W_t \right)$$
is a solution of (9.15); we now suppose that $(X_t)_{t\ge0}$ is another one. We will try to express the “Itô stochastic differential” of $X_t S_t^{-1}$. We set
$$Z_t = s_0 S_t^{-1} = \exp\left( \left( -\mu + \frac{\sigma^2}{2} \right) t - \sigma W_t \right);$$
if $\mu' = -\mu + \sigma^2$ and $\sigma' = -\sigma$, then $Z_t = \exp\left\{ \left( \mu' - \frac{\sigma'^2}{2} \right) t + \sigma' W_t \right\}$. Proposition 9.26 shows again that
$$Z_t = 1 + \int_0^t Z_s\, (\mu'\, ds + \sigma'\, dW_s) = 1 + \int_0^t Z_s\, \big( (\sigma^2 - \mu)\, ds - \sigma\, dW_s \big). \qquad (9.23)$$
We deduce that
$$X_t Z_t = s_0 + \int_0^t X_s \underbrace{dZ_s}_{Z_s((\sigma^2 - \mu)\, ds - \sigma\, dW_s)} + \int_0^t Z_s \underbrace{dX_s}_{X_s(\mu\, ds + \sigma\, dW_s)} - \int_0^t \sigma^2 X_s Z_s\, ds$$
$$= s_0 + \int_0^t (\sigma^2 - \mu) X_s Z_s\, ds - \sigma \int_0^t Z_s X_s\, dW_s + \int_0^t Z_s X_s (\mu - \sigma^2)\, ds + \int_0^t \sigma Z_s X_s\, dW_s = s_0.$$
So $X_t Z_t \equiv s_0$, $\forall t \ge 0$, $P$-a.s. This yields
$$X_t = s_0 Z_t^{-1} = S_t, \quad \forall t \ge 0, \ P\text{-a.s.}$$
$$\begin{cases} dX_t = \mu X_t\, dt + \sigma X_t\, dW_t \\ X_0 = s_0. \end{cases} \qquad (9.24)$$
Remark 9.30 (i) The process $S_t$ that we have constructed constitutes the classical model for the price of a stock in the Black-Scholes financial model.
Remark 9.32 The previous equation admits a unique solution in the class of continuous processes. This is an elementary consequence of a fixed point theorem applied on each path.
Definition 9.35 (i) A non-empty class $\mathcal{D}$ of subsets of $\Omega$ is called a $\pi$-system if
$$A, B \in \mathcal{D} \Rightarrow A \cap B \in \mathcal{D}.$$
(a) Ω ∈ Λ;
(b) A, B ∈ Λ, B ⊂ A ⇒ A − B ∈ Λ;
(c) If $(A_n)$ is an increasing sequence of elements of $\Lambda$ then $\bigcup_n A_n \in \Lambda$.
E(Mt |Ms ) = Ms , t ≥ s.
In other words
E(Mt 1C ) = E(Ms 1C ), ∀C ∈ Ms . (9.27)
Now $\mathcal{M}_s = \sigma(\mathcal{D})$ where
$$\mathcal{D} = \{ A \cap B,\ A \in \mathcal{F}^M_s,\ B \in \mathcal{Y}_T \},$$
see Proposition 9.37. Let $\Lambda$ be the family of $C$ for which (9.27) is verified.
• Ω ∈ Λ;
where
$$Y^{\Pi,+}_s = \sum_k Y_{s_{k+1}}\, 1_{]s_k, s_{k+1}]}(s),$$
where $s_k = t_k \wedge t$, $1 \le k \le n$.
Since $(Y^{\Pi,+}_s)$ is $(\mathcal{M}_s)$-adapted, this equals the Itô integral
$$\int_0^t Y^{\Pi,+}_s\, dM_s.$$
By a similar argument as in the proof of Theorem 8.8, this can be shown to converge ucp, again to the Itô integral $\int_0^t Y\, dM$.
This concludes the proof.
Theorem 8.8, Theorem 9.17 and Remark 9.18 c), give the following multidi-
mensional Itô formula.
Definition 9.43 Let W = (W 1 , . . . , W p ) be an (Ft )-p-dimensional classical
Brownian motion. We say that a process (Xt )t≥0 is a (W, F)-Itô process (or
simply general Itô process) if
$$X_t = X_0 + \int_0^t K_s\, ds + \sum_{i=1}^{p} \int_0^t H^i_s\, dW^i_s,$$
where
• $X_0$ is $\mathcal{F}_0$-measurable;
Remark 9.42 and Corollary 7.16 (which allows to calculate brackets of Itô
integrals related to local martingales) provide the following result.
$$F(W_t) - F(0) - \frac{1}{2} \int_0^t \Delta F(W_s)\, ds$$
defines an (Ft)-martingale if $\nabla F$ is bounded.
Proposition 9.44 also allows to evaluate the covariation of two general Itô
processes. Consider in fact
$$X^i_t = X^i_0 + \int_0^t K^i_s\, ds + \sum_{j=1}^{p} \int_0^t H^{i,j}_s\, dW^j_s, \quad i = 1, 2.$$
Then
$$[X^1, X^2]_t = \sum_{j=1}^{p} \int_0^t H^{1,j}_s H^{2,j}_s\, ds.$$
We recall that these equations are called Itô stochastic differential equations. A solution of such an equation is called a diffusion.
A function γ : [0, T ] × Rm −→ Rd is said to have linear growth (with respect
to x uniformly with respect to t), if there is a constant C > 0 with
In fact, if x, y ∈ Rm , t ≥ 0
a (unique up to indistinguishability) continuous solution $X$. Moreover, for every $T > 0$,
$$E\left( \sup_{s \le T} |X_s|^2 \right) < \infty.$$
In fact
$$\|X\|^2 = \sup_{t \in [0, T]} E\left( \sup_{s \le t} |X_s|^2 \right).$$
In fact, Doob's inequality gives
$$E\left( \sup_{t \le T} |\Phi(X)_t|^2 \right) \le 4E(Z^2) + 4E\left( \sup_{t \le T} \left| \int_0^t a(s, X_s)\, dW_s \right|^2 \right) + 4E\left( \left( \int_0^T |b(s, X_s)|\, ds \right)^2 \right) \le 4E(|Z|^2) + 16E\left( \int_0^T a^2(s, X_s)\, ds \right) + 4T\, E\left( \int_0^T b^2(s, X_s)\, ds \right).$$
Therefore
$$\|\Phi(X) - \Phi(Y)\|^2_{\sim, A} = \sup_{u \in [0, T]} E\left( \sup_{s \le u} |\Phi(X)_s - \Phi(Y)_s|^2 \right) e^{-Au} \le \sup_{u \in [0, T]} \frac{\tilde{K}}{A} \left( 1 - e^{-Au} \right) \|X - Y\|^2_{\sim, A} \le \frac{\tilde{K}}{A} \|X - Y\|^2_{\sim, A}.$$
Choosing $A > \tilde{K}$, the map $\Phi$ is a contraction on $\mathcal{A}$ equipped with the norm $\|\cdot\|_{\sim, A}$.
By the Banach fixed point theorem there is a “unique process in $\mathcal{A}$” $(X_t)$ such that
$$X_t = Z + \int_0^t a(s, X_s)\, dW_s + \int_0^t b(s, X_s)\, ds, \quad \forall t \in [0, T]\ P\text{-a.s.} \qquad (10.5)$$
where by a solution we mean again a continuous process $(X_t)_{t\ge0}$ with values in $\mathbb{R}^m$, adapted to $(\mathcal{F}_t)_{t\ge0}$ and such that (10.6) is verified $P$-a.s. for all $t \in [0, T]$, $i \in \{1, \dots, m\}$.
Equation (10.6) can be written compactly as
$$X_t = Z + \int_0^t b(s, X_s)\, ds + \int_0^t a(s, X_s)\, dW_s. \qquad (10.7)$$
Theorem 10.6 We suppose a, b Lipschitz with linear growth and E(|Z|2 ) <
∞. Then, there is a solution to (10.7), unique up to indistinguishability.
Moreover, that solution verifies
$$E\left( \sup_{t \le T} |X_t|^2 \right) < \infty, \quad \forall T > 0.$$
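In practice the solution granted by Theorem 10.6 is approximated numerically; here is a minimal Euler-Maruyama sketch (the scheme itself is not discussed in these notes, and the coefficients below are illustrative choices assumed Lipschitz).

```python
import numpy as np

# Euler-Maruyama for (10.7): dX = b(t, X) dt + a(t, X) dW.
def euler_maruyama(a, b, x0, T=1.0, n=10_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    X = np.empty(n + 1)
    X[0] = x0
    t = 0.0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        X[k + 1] = X[k] + b(t, X[k]) * dt + a(t, X[k]) * dW
        t += dt
    return X

# Example: an Ornstein-Uhlenbeck-type equation dX = -X dt + 0.5 dW.
path = euler_maruyama(a=lambda t, x: 0.5, b=lambda t, x: -x, x0=1.0)
print(path[-1])
```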
(i) Determine a particular strictly increasing solution $h$ of the equation
$$\frac{1}{2} a^2 h'' + b h' = 0.$$
We set $\tilde{a}(x) = (a h')(h^{-1}(x))$, where $h^{-1}$ is the inverse of $h$.
(iv) Explain why the previous arguments provide uniqueness for problem (10.8).
One motivation for studying Dirichlet processes comes from diffusion in irregular media, see e.g. [33], [23, 24].
Those authors formally consider an equation of the type
11 Change of probability and martingale representation theorems
P (N ) = 0 ⇒ Q(N ) = 0 ∀N ∈ F.
Exercise 11.3 Verify that a discrete probability on the Borel sets of R is not
absolutely continuous with respect to Lebesgue measure.
11.2 Girsanov Theorem
Theorem 11.4 Let $(\theta_t)_{t\in[0,T]}$ be a progressively measurable process such that $\int_0^T \theta_s^2\, ds < \infty$ a.s. We set
$$L_t = \exp\left( \int_0^t \theta_s\, dW_s - \frac{1}{2} \int_0^t |\theta_s|^2\, ds \right).$$
iii) For the proof, the reader can consult for instance [17], section 3.5.
$$dL_t = \theta_t L_t\, dW_t.$$
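A Monte Carlo sketch of the change of measure for a constant $\theta$ (illustrative only): under $dQ = L_T\, dP$, the process $W$ acquires drift $\theta$, so $E_P[L_T f(W_T)] = E_P[f(W_T + \theta T)]$.

```python
import numpy as np

# Girsanov reweighting check for constant theta.
rng = np.random.default_rng(9)
theta, T = 0.8, 1.0
f = lambda x: np.maximum(x, 0.0)                 # a test function

W_T = rng.normal(0.0, np.sqrt(T), size=2_000_000)
L_T = np.exp(theta * W_T - 0.5 * theta**2 * T)   # density dQ/dP on F_T
print(np.mean(L_T * f(W_T)), "vs", np.mean(f(W_T + theta * T)))
```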
Theorem 11.7 Let W be a standard Brownian motion and θ be a progres-
sively measurable process such that there are instants 0 = t0 < t1 < . . . <
tn = T such that for every i ∈ {1, . . . , n}
$$E\left( \exp\left( \frac{1}{2} \int_{t_{i-1}}^{t_i} |\theta_s|^2\, ds \right) \right) < \infty.$$
Then the process
$$L_t = \exp\left( \int_0^t \theta_s\, dW_s - \frac{1}{2} \int_0^t |\theta_s|^2\, ds \right), \quad 0 \le t \le T,$$
Proof. For every $i \in \{1, \dots, n\}$, we set $\theta(i)_t = \theta_t\, 1_{[t_{i-1}, t_i[}$. Novikov's condition is fulfilled for $\theta(i)$ and so the process
$$L_t(\theta(i)) := \exp\left( \int_0^t \theta(i)_s\, dW_s - \frac{1}{2} \int_0^t |\theta(i)_s|^2\, ds \right)$$
is a martingale, so
We set
$$L_t := \exp\left( \int_0^t b(s, W_s)\, dW_s - \frac{1}{2} \int_0^t b^2(s, W_s)\, ds \right), \quad t \in [0, T].$$
Then L is a martingale. In particular E(LT ) = 1.
Remark 11.9 Let $N$ be a standard Gaussian r.v. If $c_0 < \frac{1}{2}$, then $E\big( \exp(c_0 N^2) \big) < \infty$.
Exercise.
Proof (of Proposition 11.8). Let $n$ be a positive integer and $t_i = \frac{iT}{n}$, $i \in \{1, \dots, n\}$. For every $i \in \{1, \dots, n\}$,
$$\int_{t_{i-1}}^{t_i} |b(s, W_s)|^2\, ds \le 2K^2\, \frac{T}{n} \left( 1 + \sup_{0 \le s \le T} |W_s|^2 \right).$$
We set $c = \frac{K^2 T}{n}$. We choose $n$ large enough so that $cT < \frac{1}{2}$. By Remark 11.9, for $t \ge 0$
Example 11.11 Let (Xt )t∈[0,T ] be a solution to
Theorem 11.12 Let (Mt )0≤t≤T be an (Ft )t∈[0,T ] -martingale such that E(Mt2 ) <
∞ (i.e. square integrable).
There is a progressively measurable process $(H_t)_{t\in[0,T]}$ such that
$$E\left( \int_0^T H_s^2\, ds \right) < \infty \quad \text{and} \quad M_t = M_0 + \int_0^t H_s\, dW_s \quad \text{a.s.}, \ \forall t \in [0, T]. \qquad (11.3)$$
In particular the martingale admits a continuous modification.
12 Markov property of solutions of stochastic dif-
ferential equations (SDEs)
12.1 Definition
A process $(X_t)_{t\ge0}$ with values in $\mathbb{R}^m$ is said to fulfill the Markov property, and is called a Markov process, when its future behaviour after an instant $t$ depends only on $X_t$ and not on the past before $t$.
Mathematically speaking, a process (Xt )t≥0 verifies the Markov property
with respect to a filtration (Ft )t≥0 to which it is adapted, if for any Borel
bounded function f : Rm −→ R and for any s ≤ t we have
$$E\big( f(X_t) \,\big|\, \mathcal{F}_s \big) = E\big( f(X_t) \,\big|\, X_s \big).$$
We will see that solutions of SDEs are Markov. We consider the solution
(Xt ) of an SDE of the type
$$X_t = x_0 + \int_0^t a(u, X_u)\, dW_u + \int_0^t b(u, X_u)\, du, \qquad (12.1)$$
Remark 12.1 (i) $(X_t)_{t\ge0}$ can be considered as a r.v. $\omega \mapsto X_\cdot(\omega)$ from $\Omega$ with values in $C(\mathbb{R}_+)$ (equipped with the topology of uniform convergence on each compact and the related Borel $\sigma$-field). Indeed we can write
$$X = \Phi(x, W_\cdot),$$
with respect to W coincides with the integral with respect to (W̄u )u≥0 ,
where W̄u = Ws+u − Ws . We remark that W̄ is a classical Brownian
motion independent of Fs .
Consequently, (12.2) with $Z = y$ can also be written as
$$Y_t = y + \int_s^t a(u, Y_u)\, d\bar{W}_u + \int_s^t b(u, Y_u)\, du, \quad t \ge s.$$
Via point (i), there is Φ̄ : R × C([0, ∞[) −→ C([s, ∞[) such that Y =
Φ̄(y, W̄ ). In other words, Y is measurable with respect to y and to the
process of Brownian motion increments.
For $x \in \mathbb{R}$, $(X^{s,x}_t)_{t \ge s}$ will denote the unique solution $(Y_t)_{t \ge s}$ of
$$Y_t = x + \int_s^t a(u, Y_u)\, dW_u + \int_s^t b(u, Y_u)\, du, \quad t \ge s.$$
This notion generalizes the flow property fulfilled in the case of ordinary
differential equations.
Since the left- and right-hand members are continuous in $(s, t, y)$, we have (12.3) for every $t \ge s \ge 0$, $y \in \mathbb{R}$, $P$-a.s. In fact, two continuous processes (random fields) which are modifications of one another are indistinguishable.
Finally, for $y = X^x_s$,
$$X^{s, X^x_s}_t = X^x_s + \int_s^t b\big( u, X^{s, X^x_s}_u \big)\, du + \int_s^t a\big( u, X^{s, X^x_s}_u \big)\, dW_u. \qquad (12.4)$$
We recall that equation (12.4), which is (12.2) with initial condition Z = Xsx ,
has a unique solution. This implies the result.
Theorem 12.4 Let (Xt )t≥0 be a solution of (12.1). Then it has the Markov
property with respect to the filtration (Ft ). More precisely, for every bounded
Borel function f , we have
$$E\big( f(X_t) \,\big|\, \mathcal{F}_s \big) = \varphi(X_s), \qquad (12.5)$$
where
$$\varphi(y) = E\big( f(X^{s,y}_t) \big).$$
By Remark 12.1, there is Φ : R × C(R+ ) −→ C([s, +∞[) such that
The previous result can be generalized to the case when Φ depends on the
whole path of the diffusion after time s.
Theorem 12.6 Let $(X_t)_{t\ge0}$ be a solution to (12.1) and $r(s, x)$ be a non-negative measurable function. If $t > s$ we have
$$E\left( e^{-\int_s^t r(u, X_u)\, du}\, f(X_t) \,\Big|\, \mathcal{F}_s \right) = \varphi(X_s) \quad P\text{-a.s.}$$
with
$$\varphi(y) = E\left( e^{-\int_s^t r(u, X^{s,y}_u)\, du}\, f\big( X^{s,y}_t \big) \right).$$
Remark 12.7 We can in fact establish a more general result than the one
previously stated. Omitting technical details, we can affirm that, given a
function $\Phi$ of the whole path of $X$ after $s$, we have
$$E\big( \Phi(X^x_t;\ t \ge s) \,\big|\, \mathcal{F}_s \big) = E\big( \Phi(X^{s,y}_t,\ t \ge s) \big)\Big|_{y = X^x_s} \quad P\text{-a.s.}$$
12.3 Infinitesimal generator of a diffusion
Suppose a, b : R → R Lipschitz.
We denote by (Xt )t≥0 a solution of
Proof. Exercise.
Remark 12.9 We denote by (Xtx )t≥0 the solution of the SDE (12.7) such
that X0x = x.
Let f ∈ C 2 (R) with bounded derivatives. Let Kf be a constant bounding the
first and second derivatives and K such that |b(x)| + |a(x)| ≤ K(1 + |x|).
Theorem 10.4 of existence and uniqueness for SDEs says that
$$E\left( \sup_{s \le T} \big| Af(X^x_s) \big| \right) \le K'_f \left( 1 + E\Big( \sup_{s \le T} |X^x_s|^2 \Big) \right) < \infty,$$
Proposition 12.10 Let $(t, x) \mapsto u(t, x)$ be of class $C^{1,2}(\mathbb{R}_+ \times \mathbb{R})$. We suppose moreover that the first order derivative with respect to $x$ is bounded. Then the process
$$M_t = u(t, X_t) - \int_0^t \left( \frac{\partial u}{\partial t} + A_s u \right)(s, X_s)\, ds$$
is a martingale, where $A_s$ is the operator acting on the variable $x$ defined by
$$(A_s f)(x) = \frac{a^2(s, x)}{2} f''(x) + b(s, x) f'(x).$$
We make the assumptions of existence and uniqueness of Section 10.2. For every $t$, we introduce the operator $A_t$ which to a function $f \in C^2(\mathbb{R}^m; \mathbb{R})$ associates the function
$$A_t f(x) = \frac{1}{2} \sum_{i,j=1}^{n} \ell_{ij}(t, x)\, \frac{\partial^2 f}{\partial x_i \partial x_j}(x) + \sum_{j=1}^{n} b_j(t, x)\, \frac{\partial f}{\partial x_j}(x),$$
With matrix notations, we will have Σ(t, x) = (σij (t, x)) with Σ(t, x) =
A(t, x)A⊤ (t, x), where A⊤ (t, x) is the transposed matrix of A(t, x) = (aij (t, x)).
Remark 12.13 The differential operator $\frac{\partial}{\partial t} + A_t$ is called the Dynkin operator of the diffusion.
classical problems of the theory of financial mathematics consists in evaluating
$$V_s = E\left( e^{-\int_s^T r(t, X_t)\, dt}\, f(X_T) \,\Big|\, \mathcal{F}_s \right).$$
As we have seen at the beginning of this chapter for the case $m = 1$, we equally have
$$V_s = F(s, X_s),$$
where
$$F(s, x) = E\left( e^{-\int_s^T r(t, X^{s,x}_t)\, dt}\, f\big( X^{s,x}_T \big) \right), \qquad (12.10)$$
$X^{s,x}$ being the unique solution to system (12.9) issued from $x$ at time $s$.
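A Monte Carlo sketch of (12.10): simulate paths of $X^{s,x}$ with an Euler scheme and average the discounted payoff; all coefficients below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)
a = lambda t, x: 0.2 * x          # assumed diffusion coefficient
b = lambda t, x: 0.05 * x         # assumed drift
r = lambda t, x: 0.03             # assumed discount rate
f = lambda x: np.maximum(x - 1.0, 0.0)

s, T, x0 = 0.0, 1.0, 1.0
n_steps, n_paths = 500, 200_000
dt = (T - s) / n_steps

X = np.full(n_paths, x0)
discount = np.zeros(n_paths)      # accumulates int_s^T r(t, X_t) dt
t = s
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    discount += r(t, X) * dt
    X += b(t, X) * dt + a(t, X) * dW
    t += dt
print("F(s, x) approx:", np.mean(np.exp(-discount) * f(X)))
```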
We can extend this result and show that, if r is a function only depending
on x (time-homogeneity), we have
$$E\left\{ e^{-\int_s^{s+t} r(X^{s,y}_u)\, du}\, f\big( X^{s,y}_{s+t} \big) \right\} = E\left\{ e^{-\int_0^t r(X^{0,y}_u)\, du}\, f\big( X^{0,y}_t \big) \right\}.$$
then
$$u(s, x) = F(s, x) := E\left( e^{-\int_s^T r(t, X^{s,x}_t)\, dt}\, f\big( X^{s,x}_T \big) \right).$$
Proof. We prove the equality $u(s, x) = F(s, x)$ for $s = 0$.
By Proposition 12.12, we know that the process
$$M_t = e^{-\int_0^t r(v, X^{0,x}_v)\, dv}\, u(t, X^{0,x}_t)$$
we have
$$u(s, x) = E\big( f(x + W_{T-s}) \big),$$
condition for operator At , which has the form
In the sequel of this chapter, we will have a process $(X_t)_{t\ge0}$ which is a solution of a stochastic differential equation with coefficients $a, b$ not depending on time. Let us also consider $r : \mathbb{R} \to \mathbb{R}$ bounded and continuous.
So we have
$$dX_t = a(X_t)\, dW_t + b(X_t)\, dt. \qquad (12.14)$$
We consider the differential operator
$$Ag(x) = \frac{1}{2} a^2(x) g''(x) + b(x) g'(x). \qquad (12.15)$$
We denote $\tilde{A}g(x) = Ag(x) - r(x) g(x)$. Equation (12.11) can be written as
$$\begin{cases} \dfrac{\partial u}{\partial t}(t, x) + \tilde{A}u(t, x) = 0 & \text{in } ]0, T] \times \mathbb{R} \\ u(T, x) = f(x), & \forall x \in \mathbb{R}. \end{cases} \qquad (12.16)$$
If we state Problem (12.16) no longer on the whole real line, but on $O = ]x_0, x_1[$, we need to impose boundary conditions at $x_0$ and $x_1$. We will be particularly interested in the case when we impose zero boundary conditions (of Dirichlet type). Let $f : O \to \mathbb{R}$. We try in this case to solve
$$\begin{cases} \dfrac{\partial u}{\partial t}(t, x) + \tilde{A}u(t, x) = 0 & \text{in } ]0, T] \times O \\ u(t, x_0) = u(t, x_1) = 0, & \forall t \le T \\ u(T, x) = f(x), & \forall x \in O. \end{cases} \qquad (12.17)$$
A regular solution of (12.17) can thus be interpreted with the help of the diffusion equation (12.14). We denote by $X^{s,x}$ the solution of (12.14) issued from $x$ at time $s$.
Theorem 12.18 Let $u : \mathbb{R}_+ \times \bar{O} \to \mathbb{R}$ be a function of $(t, x)$ of class $C^{1,2}([0, T] \times \bar{O})$. We suppose moreover $u$ to be a solution of (12.17). Then
$$u(s, x) = E\left\{ 1_{\{\forall t \in [s, T],\ X^{s,x}_t \in O\}}\, e^{-\int_s^T r(X^{s,x}_t)\, dt}\, f(X^{s,x}_T) \right\}.$$
Proof. We check the result when $s = 0$, the proof being similar in the other cases.
We can extend $u$ from $[0, T] \times \bar{O}$ to $[0, T] \times \mathbb{R}$, preserving the $C^{1,2}$ character of $u$ on $[0, T] \times \mathbb{R}$. We keep denoting such an extension by $u$. Through Proposition 12.11, we know that the process
$$M_t = e^{-\int_0^t r(X^{0,x}_v)\, dv}\, u(t, X^{0,x}_t) - \int_0^t \exp\left( -\int_0^v r(X^{0,x}_\alpha)\, d\alpha \right) \left( \frac{\partial u}{\partial t} + Au - ru \right)(v, X^{0,x}_v)\, dv$$
we have
$$u(0, x) = E\left\{ e^{-\int_0^{\tau^x} r(X^{0,x}_v)\, dv}\, u(\tau^x, X^{0,x}_{\tau^x}) \right\} = E\left\{ 1_{\{\forall t \in [0, T],\ X^{0,x}_t \in O\}}\, e^{-\int_0^T r(X^{0,x}_v)\, dv}\, u(T, X^{0,x}_T) \right\} + E\left\{ 1_{\{\exists t \in [0, T],\ X^{0,x}_t \notin O\}}\, e^{-\int_0^{\tau^x} r(X^{0,x}_v)\, dv}\, u(\tau^x, X^{0,x}_{\tau^x}) \right\}.$$
Now, on the event $\{\exists t \in [0, T],\ X^{0,x}_t \notin O\}$, we have $u(\tau^x, X^{0,x}_{\tau^x}) = 0$, by the boundary condition in (12.17).
13.1 Generalities
The study of stochastic differential equations is much richer than that of ordinary differential equations, which are well-posed essentially only when the coefficients are Lipschitz, with very few exceptions.
We consider first the following well-known problem with m = p = 1.
$$\begin{cases} \dot{x}(t) = \sqrt{x(t)} \\ x(0) = 0. \end{cases}$$
Generally speaking, we will be interested in an SDE of Itô type as follows:
Remark 13.2 Item iii) means that the left- and right-hand sides are indistinguishable. If $X$ is a continuous process, then the equality holds if for any $t$,
$$X_t = X_0 + \int_0^t a(s, X_s)\, dW_s + \int_0^t b(s, X_s)\, ds \quad \text{a.s.}$$
Definition 13.5 (Existence in law or Weak existence). Let ν be a
probability law (on Rm ). We will say that E(a, b; ν) admits weak existence if
there is a probability space (Ω, F, P ), a usual filtration (Ft )t≥0 , an (Ft )t≥0 -
p-dimensional classical Brownian motion (Wt )t≥0 and a process (Xt )t≥0 so-
lution of E(a, b) where ν is the law of X0 .
We say that E(a, b) admits weak existence if E(a, b; ν) admits weak existence
for every ν.
In Section 7, we considered in some detail the case where $a$ and $b$ are Lipschitz.
t), if for every K > 0 (resp. for every T > 0, K > 0) γ|[0,T ]×[−K,K]m is
Lipschitz (with respect to x uniformly with respect to t).
Remark 13.9 i) If a and b are Lipschitz with linear growth, E(a,b) admits
a unique solution, of course (Ft )t≥0 - adapted. This means that E(a,b)
admits strong existence and pathwise uniqueness. This result can be
deduced from Theorem 10.4.
ii) If a and b are only locally Lipschitz, it is possible to show existence until
some suitable (explosion) stopping time. The local Lipschitz property
guarantees however pathwise uniqueness.
Proof.
We set G(t) = a + b ∫_0^t g(s) ds. Then by assumption g(t) ≤ G(t) and
    (e^{−bt} G(t))′ = −b e^{−bt} G(t) + e^{−bt} G′(t) = −b e^{−bt} G(t) + b e^{−bt} g(t) ≤ 0.    (13.3)
So t ↦ e^{−bt} G(t) is non-increasing, hence e^{−bt} G(t) ≤ G(0) = a and g(t) ≤ G(t) ≤ a e^{bt}.
But now the continuous function G fulfills (13.2), so we can apply the first part of the proof. Finally g(t) ≤ G(t) ≤ a e^{bt} for every t ≥ 0.
We start with an example where E(a, b) does not admit pathwise uniqueness,
even though it admits uniqueness in law.
Before this, we need to state an important result due to Paul Lévy, called
Lévy characterization of Brownian motion.
Theorem 13.11 Let (Mt )t≥0 be an (Ft )t≥0 - continuous local martingale
such that M0 = 0. Then (Mt )t≥0 is an (Ft )t≥0 - classical Brownian motion if
and only if [M, M ]t ≡ t.
Remark 13.12 A solution X of equation E(a, b; δ0) is also a solution to
    X_t = ∫_0^t sign⁻(X_s) dW_s
and
    X_t = ∫_0^t sign(X_s) dW_s,
where
    sign⁻(x) = 1 if x > 0,  −1 if x ≤ 0,
and
    sign(x) = 1 if x > 0,  −1 if x < 0,  0 if x = 0.
In fact, since X is a Brownian motion, ∫_0^t 1_{X_s=0} dW_s = 0 for any t ≥ 0. Indeed, using Fubini's theorem and the isometry of the stochastic integral,
    E( ∫_0^t 1_{X_s=0} ds ) = ∫_0^t P{X_s = 0} ds = 0,
Later we will show that E(a, 0; δ0) admits weak existence. Let now (Ω, F, P) be a probability space, (W_t) an (F_t)_{t≥0}-classical Brownian motion with respect to a usual filtration and (X_t)_{t≥0} a solution of E(a, b; δ0). Since X is a solution to X_t = ∫_0^t sign⁺(X_s) dW_s, t ≥ 0, then X̃_t = −X_t verifies X̃_t = ∫_0^t sign⁻(X̃_s) dW_s, t ≥ 0. Since X and X̃ are both solutions of E(a, b; δ0), E(a, b; δ0) does not admit pathwise uniqueness.
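On the other hand, uniqueness in law does hold in this example: any solution X satisfies [X]_t = ∫_0^t sign(X_s)² ds = t (the time spent by X at 0 being negligible), so by Theorem 13.11 every solution is a classical Brownian motion.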
We recall here the known results about existence and uniqueness in law in the
multi-dimensional case. Let a : [0, T ] × Rm → Rm×p , b : [0, T ] × Rm → Rm
be two Borel functions.
We start by introducing the so-called non-degeneracy condition for a given matrix function σ.
Assumption 13.13 A function σ : [0, T] × R^m → R^{m×m} is said to fulfill the non-degeneracy condition if
    ∃C > 0, ∀(t, x) ∈ [0, T] × R^m, ∀(ξ_1, …, ξ_m) ∈ R^m :  Σ_{i,j=1}^m σ_ij(t, x) ξ_i ξ_j ≥ C Σ_{i=1}^m |ξ_i|².    (13.4)
The first result of existence in law that we state is taken from Theorem 1 in Section 2.6 of [19].
In Exercise 7.3.3 of [35], one can find a theorem on uniqueness in law, only for m = p = 1.
Then the problem E(a, b) admits weak existence and uniqueness in law.
We remark that in Theorem 13.18 a could also be degenerate. A consequence of this theorem is the following.
(i) We truncate the coefficients a and b, defining, for each N > 0, a^N(t, x) = (a^N_ij(t, x)) where a^N_ij(t, x) = (a_ij(t, x) ∧ N) ∨ (−N) and, writing b = (b_1, …, b_m)^⊤,
    b^N_i(t, x) = (b_i(t, x) ∧ N) ∨ (−N),  1 ≤ i, j ≤ m.
(iii) On the other hand, as in the previous item, the same BDG and Jensen inequalities, together with Gronwall's lemma, allow one to show the existence of a constant C(γ, T, c_a, c_b), where c_a, c_b are the linear growth constants related to a and b, such that
(iv) At this point we can show that the laws of X^N, as r.v. with values in C([0, T]), are tight. For this we make use of the Prohorov and Kolmogorov theorems, whose consequences for us are summarized in Problem 4.11 in Section 2.4 of [17]. It is enough to show
where the constant const does not depend on N. So (13.7) holds with α = γ, β = γ/2 − m.
(v) Since the laws of X^N are tight, there exists a subsequence (still denoted by X^N) which converges in law. By the Skorokhod theorem there is a sequence X̃^N of copies of X^N, with the same distributions as X^N, such that X̃^N converges a.s., in particular ucp, to some process X. Without restriction of generality we still write X^N = X̃^N.
(vii) Since the concept of solution in law is equivalent to the one of martingale problem, see Proposition 4.16 of [17], it is possible to show that X solves E(a, b; ν). See also Exercise 13.20 below.
Exercise 13.20 Let (Ω, F, P) be a probability space and (F_t) a filtration fulfilling the usual conditions. Let a : [0, T] × R^m → R^{m×p}, b : [0, T] × R^m → R^m be bounded Borel functions.
(i) Let X be a solution of E(a, b). Show that for every f ∈ C²(R^m),
    f(X_t) − f(X_0) − ∫_0^t ∇f(X_s) · b(s, X_s) ds − (1/2) Σ_{i,j} ∫_0^t ∂²_{x_i x_j} f(X_s) (a a^⊤)_ij(s, X_s) ds    (13.9)
is a local martingale, where a^⊤ is the transpose of the matrix a.
If (13.9) is verified for every f ∈ C²(R^m), we say that (X, P) solves the martingale problem related to b and aa^⊤.
(ii) Suppose now that (X, Q) solves the previous martingale problem, where X is a continuous process and Q is a probability on (Ω, F). Q is the reference probability for the items below.
then we define the set I(γ) as the set of real numbers x such that
    ∫_{x−ε}^{x+ε} dy / γ²(y) = ∞,  ∀ε > 0.
For a clear exposition of the next result, see Theorems 5.4 and 5.5 of [17].
ii) E(a, 0; ν) admits weak existence and uniqueness in law if and only if
ii) (13.10) is verified also for some discontinuous functions, as for instance sign⁺. This confirms what was affirmed previously, i.e. the weak existence for E(a, 0), see Theorem 13.15.
13.5 Feller test for explosion
    Ag = (a²/2) g″ + b g′.
We denote
    Σ(x) := ∫_0^x (2b/a²)(y) dy.    (13.12)
Let ℓ ∈ C⁰(R) and u_0, u_1 ∈ R. Then there is a unique solution to
    Au = ℓ,  u(0) = u_0, u′(0) = u_1,    (13.13)
which is given by
    u(0) = u_0,
    u′(x) = e^{−Σ(x)} ( ∫_0^x (2ℓ/a²) e^{Σ}(y) dy + u_1 ).
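To check this formula (a step we spell out here), note that Σ′ = 2b/a², so that
    ( e^{Σ} u′ )′ = e^{Σ} ( u″ + (2b/a²) u′ ) = (2/a²) e^{Σ} Au = (2ℓ/a²) e^{Σ}.
Integrating from 0 to x and using Σ(0) = 0, u′(0) = u_1 gives e^{Σ(x)} u′(x) = ∫_0^x (2ℓ/a²) e^{Σ}(y) dy + u_1, which is the stated expression.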
Remark 13.24 This procedure can be extended to the case when b is the
derivative of a continuous function β, therefore a distribution, see [10].
Proposition 13.25 (Feller test for explosion). Suppose a > 0 and that a, b, b/a² are locally integrable. Let v be the unique solution to
    Av = 1,  v(0) = 0, v′(0) = 0.
Then weak existence and uniqueness in law for E(a, b) hold (without explosion, for every t ≥ 0) if and only if
    v(−∞) = v(+∞) = +∞.    (13.14)
Proof. Theorem 5.5.29 in [17] and the remark below, which follows from (13.13) and (13.14), give the result.
Remark 13.26 v given above is explicitly given by
    v(0) = 0,
    v′(x) = e^{−Σ(x)} ∫_0^x (2 e^{Σ}/a²)(y) dy.
Exercise 13.27 i) Verify that whenever
    ∫_0^∞ e^{−Σ(y)} dy = ∫_{−∞}^0 e^{−Σ(y)} dy = ∞,    (13.15)
then (13.14) holds; check that this is the case when
a) b = 0;
b) Σ is upper bounded;
c) b/a² is integrable.
Solutions
i) It will be enough to check that v(+∞) = +∞, since the −∞ case can be justified similarly. For z ≥ 1 we have
    v(z) = 2 ∫_0^z dx e^{−Σ(x)} ∫_0^x (e^{Σ}/a²)(y) dy
         = 2 ∫_0^z dy (e^{Σ}/a²)(y) ∫_y^z e^{−Σ(x)} dx
         ≥ 2 ∫_0^1 dy (e^{Σ}/a²)(y) ∫_1^z e^{−Σ(x)} dx.
Therefore
    liminf_{z→+∞} v(z) ≥ 2 ∫_0^1 dy (e^{Σ}/a²)(y) ∫_1^∞ e^{−Σ(x)} dx = +∞,
by (13.15).
a) Σ = 0 since b = 0, so (13.15) holds trivially;
b) if Σ is bounded from above, then
    e^{−Σ} ≥ e^{−sup Σ} > 0,
so both integrals in (13.15) diverge;
iii) We have
    ∫_0^t a(X_s) ∘ dW_s = ∫_0^t a(X_s) dW_s + (1/2) [a(X), W]_t
                        = ∫_0^t a(X_s) dW_s + ∫_0^t ((a²)′/4)(X_s) ds.
So X can be considered as a solution to E(a, b) with b = (a²)′/4. We have
    Σ(x) = ∫_0^x ( 2 (a²/4)′ / a² )(y) dy = (1/2) ∫_0^x ( (a²)′/a² )(y) dy = (1/2) (log a²)(y)|_0^x = log( a(x)/a(0) ),
so that
    e^{Σ(x)} = a(x)/a(0)
and we have
    v′(x) = (1/a(x)) ∫_0^x (1/a(y)) dy.
(13.14) becomes
    ∫_0^∞ (1/a(x)) ( ∫_0^x (1/a(y)) dy ) dx = (1/2) ( ∫_0^∞ (1/a(x)) dx )² = +∞;
    ∫_{−∞}^0 (1/a(x)) ( ∫_x^0 (1/a(y)) dy ) dx = (1/2) ( ∫_{−∞}^0 (1/a(x)) dx )² = +∞.
(Sketch). Without restriction of generality we suppose a to be continuous. We only discuss uniqueness in the particular case when (13.15) is verified, i.e.
    ∫_0^∞ e^{−Σ(y)} dy = ∫_{−∞}^0 e^{−Σ(y)} dy = ∞.
Let (X_t)_{t≥0} be a solution to E(a, b) related to some filtered probability space and an (F_t)_{t≥0}-classical Brownian motion (W_t)_{t≥0}.
We denote by h : R → R the function such that h′ = e^{−Σ}, h(0) = 0; we remark that h ∈ C²(R) and Ah = 0. We set Y_t = h(X_t); using Itô's formula we obtain
    Y_t = h(X_0) + ∫_0^t h′(X_s) dX_s + (1/2) ∫_0^t h″(X_s) d[X]_s
        = h(X_0) + ∫_0^t (a h′)(X_s) dW_s + ∫_0^t (Ah)(X_s) ds   (with Ah = 0)
        = h(X_0) + ∫_0^t a_0(Y_s) dW_s,
• h(0) = 0;
•   ∫_0^ε (1/h²)(y) dy = ∞,  ∀ε > 0;    (13.17)
Proof (of Proposition 13.29). Taking into account the conditions imposed on h, there is a strictly decreasing sequence (a_n) in [0, 1], with
    a_0 = 1,  lim_{n→∞} a_n = 0,  ∫_{a_n}^{a_{n−1}} (1/h²)(x) dx = n,
We can show how such a construction is possible. We fix n and, for u ∈ [0, 1], we set
    φ(u) = ((1+u)/2) a_n + ((1−u)/2) a_{n−1},
    ψ(u) = ((1−u)/2) a_n + ((1+u)/2) a_{n−1}.
We define the family of functions ρ_n(·, u) such that
    ρ_n(x, u) = 0  if x ∉ [a_n, a_{n−1}],
    ρ_n(x, u) = 2u/(n h²(x))  if x ∈ [φ(u), ψ(u)],
    ρ_n(·, u) linear affine on [a_n, φ(u)[ ∪ ]ψ(u), a_{n−1}].
In fact we have:
    if u = 0, ρ_n(x, u) ≡ 0;
    if u = 1, ρ_n(x, u) = 2/(n h²(x)), x ∈ [a_n, a_{n−1}].
We set
    Φ(u) = ∫_{a_n}^{a_{n−1}} ρ_n(x, u) dx.
Now, Φ : [0, 1] → R_+ is continuous and Φ(0) = 0, Φ(1) = 2.
ψ_n is an even function with |ψ_n′(x)| ≤ 1. On the other hand, if y > 0, there is N(y) with
    n > N(y) ⇒ ∫_0^y ρ_n(z) dz = ∫_{a_n}^{a_{n−1}} ρ_n(z) dz = 1.
Consequently we have
Consequently we have
a.s. It is enough to show that X^{(1)} and X^{(2)} are indistinguishable under the assumption
    E( ∫_0^t a²(s, X_s^{(i)}) ds ) < ∞,
    E( ∫_0^t |b|(s, X_s^{(i)}) ds ) < ∞,    (13.19)
We conclude that
    E(ψ_n(Δ_t)) ≤ E( ∫_0^t ψ_n′(Δ_s) ( b(s, X_s^{(1)}) − b(s, X_s^{(2)}) ) ds ) + t/n.    (13.21)
If k is the Lipschitz constant of b we get
    E(ψ_n(Δ_t)) ≤ t/n + k ∫_0^t E(|Δ_s|) ds.
We let n go to ∞, which by the Lebesgue dominated convergence theorem gives
    E(|Δ_t|) ≤ k ∫_0^t E(|Δ_s|) ds.
Gronwall's lemma then yields E(|Δ_t|) = 0 for every t, so X^{(1)} and X^{(2)} are indistinguishable.
Remark 13.32 We explain here how the proof of Proposition 13.29 can be reduced to hypothesis (13.19).
(i) First we reduce to the case when the initial condition X_0 belongs to a compact interval [−M, M]. For this we replace the initial probability P with the conditional probability P^M := P(· | X_0 ∈ [−M, M]). Under P^M the Brownian motion remains a Brownian motion and the solutions of the SDE remain solutions.
(ii) Suppose now that X_0 lives in a compact interval. For each positive integer N we define the stopping times
with the convention that the infimum of the empty set is +∞. We remark that (up to a null set) Ω = ∪_N Ω^N, Ω^N := {τ^N > T}. Consequently, it is enough to show that X^1 ≡ X^2 on Ω^N. On Ω^N we have (for i = 1, 2)
    X_t^i = X_0 + ∫_0^t a_N(s, X_s^i) dW_s + ∫_0^t b_N(s, X_s^i) ds,
where
Corollary 13.33 Suppose the assumptions of Proposition 13.29 are verified and that a, b are continuous with linear growth. Then E(a, b) admits strong existence and pathwise uniqueness.
Example 13.34
    X_t = ∫_0^t |X_s|^α dW_s,  t ≥ 0.    (13.22)
We set a(x) = |x|^α, 0 < α < 1, b = 0.
• If α ≥ 1/2 then I(a) = {0}.
• If α < 1/2 then I(a) = ∅.
Indeed, ∫_{−ε}^{ε} dy/|y|^{2α} = ∞ exactly when 2α ≥ 1.
where h(z) = z^α. This follows from the fact that, if β ≥ 1 and a, t ≥ 0,
    (a + t)^β − a^β − t^β ≥ 0.    (13.24)
Indeed, applying (13.24) with β = 1/α, a = |x|^α and t = |y − x|^α gives (|x|^α + |y − x|^α)^{1/α} ≥ |x| + |y − x| ≥ |y|, therefore
    |x|^α + |y − x|^α ≥ |y|^α.
By Corollary 13.33, (13.22) admits strong existence and pathwise uniqueness; the unique solution is X ≡ 0.
If α < 1/2, X ≡ 0 is still a solution, but it is not the only one; even uniqueness in law fails, by the Engelbert-Schmidt criterion.
Then
    P{ X_t^1 ≤ X_t^2, ∀t ≥ 0 } = 1.
In some cases E(a, b) admits pathwise uniqueness even if b is only measurable,
and sometimes for unbounded drifts b.
We conclude the section about strong existence and pathwise uniqueness with
a celebrated theorem of A.Yu. Veretennikov.
Remark 13.38 (i) There is still considerable activity in this field. For instance, [20] establishes pathwise uniqueness for the case m = p, a being the identity and b locally in suitable L^q spaces in time and space. In particular, if b is time-independent, then b is allowed to be locally in L^q(R^m) with q > m.
(iii) Concerning the study of SDEs in the multidimensional case, see also
[9] and references therein.
14 Bessel processes
Example 14.1 Let a(x) = c√|x|, where c is a positive constant. Let b be continuous and Lipschitz. Then E(a, b) admits strong existence and pathwise uniqueness.
In fact, first we observe that a, b are continuous with linear growth. We have
    |a(x) − a(y)| = c | √|x| − √|y| | ≤ c √|x − y|
by (13.23). Proposition 13.29 can be applied taking h(z) = c√z. By Corollary 13.33, the result follows.
Let x_0 ≥ 0, δ ≥ 0. Consider the SDE E(a, b) with a(x) = 2√|x|, b(t, x) = δ:
    Z_t = x_0 + 2 ∫_0^t √|Z_s| dW_s + δ t.    (14.26)
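The squared Bessel equation (14.26) is easy to simulate; the following Python sketch (our illustration, not part of the notes) applies the Euler-Maruyama scheme:

import numpy as np

def besq_paths(x0, delta, T, n_paths=1000, n_steps=1000, seed=0):
    # Hedged sketch: Euler-Maruyama scheme for (14.26),
    # dZ = 2 sqrt(|Z|) dW + delta dt, Z_0 = x0; returns the terminal values Z_T.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    Z = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        Z = Z + 2.0 * np.sqrt(np.abs(Z)) * dW + delta * dt
    return Z

Since the drift is the constant δ, E(Z_T) should be close to x0 + δT, which gives a quick check of the scheme.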
where
    M_t = Σ_{i=1}^p ∫_0^t (W_s^i + y_0^i) dW_s^i.
Therefore
    B_t = ∫_0^t dM_s / ||W_s + y_0||
defines a classical Brownian motion, because [B]_t = t and because of the Lévy characterization of Brownian motion (Theorem 13.11). Now,
    2 ∫_0^t √Z_s dB_s = 2 ∫_0^t ||W_s + y_0|| dM_s / ||W_s + y_0|| = 2M_t,
Exercise 14.5 Show item a) supposing that Xt > 0 a.s. for every t.
Exercise 14.6 Let (S_t) be a BESQ(x_0, δ). For N > 0, we set τ_N = inf{ s ≥ 0 : S_s > N }.
(i) Show that a.s. (τ_N) is an increasing sequence of stopping times diverging to infinity.
(v) Calculate Var(S_t) for any t ≥ 0.
where W″ is another Brownian motion independent of W, W′, possibly obtained by enlarging the probability space. We omit the details about this. We show that M is a Brownian motion. Since M is a local martingale, according to Lévy's characterization theorem it will be enough to show [M]_t = t.
Recalling that (Z_s + Z_s′)/X_s = 1, we obtain
    [M]_t = ∫_0^t 1_{X_s>0} ds + ∫_0^t 1_{X_s=0} ds = t.
Proof.
We remark that this quantity is finite since µ is a positive measure and the Bessel process is always non-negative.
Let Z (resp. Z′) be a BESQ(x, δ) (resp. BESQ(x′, δ′)) for x, x′ ∈ R_+, δ, δ′ ≥ 0. Using Lemma 14.7 we have
    ϕ(x + x′, δ + δ′) = E( e^{−λ ∫_0^∞ (Z_s + Z_s′) dµ(s)} ) = E( e^{−λ ∫_0^∞ Z_s dµ(s)} ) E( e^{−λ ∫_0^∞ Z_s′ dµ(s)} ) = ϕ(x, δ) ϕ(x′, δ′).
So we have proved that
In particular, for x, δ ≥ 0,
Proof. We omit the details of the proof. However the exercise below allows one to show that if there are two sequences (x_n) and (δ_n) which converge increasingly (or decreasingly) to x_0 and δ_0 respectively, then ϕ(x_n, δ_n) → ϕ(x_0, δ_0).
where 0 ≤ x1 ≤ x2 , 0 ≤ δ1 ≤ δ2 .
(iii) Show that
    Var(S_t^2 − S_t^1) ≤ (x_2 − x_1) t + (δ_2 − δ_1) t² / 2.
(iv) Using Doob's inequality, find an upper bound for
    E( sup_{t≤T} |S_t^1 − S_t^2|² ).
(vi) Deduce that ϕ(x_n, δ_n) → ϕ(x, δ) when µ has its support in a compact interval [0, T].
The next theorem allows one to calculate the Laplace transform of the law of a squared Bessel process.
Proof. We apply the previous results with µ the (positive) Dirac measure δ_t. Let λ > 0. Using the notation ϕ as before, we have
    E(exp(−λS_t)) = ϕ(x, δ) = ϕ(x/δ, 1)^δ,
by Lemma 14.9. So by Proposition 14.3 it is enough to evaluate the previous quantity for S_t = (√(x/δ) + W_t)², where (W_t) is a classical Wiener process. We denote x_0 = x/δ. We get
    ϕ(x_0, 1) = E( e^{−λ(√x_0 + W_t)²} ) = (1/√(2π)) ∫_R e^{−λ(√x_0 + y√t)² − y²/2} dy
              = (1/√(2π)) e^{−λx_0} ∫_R e^{−(λt + 1/2) y² − 2λy√(x_0 t)} dy
              = (1/√(2π)) e^{−λx_0} ∫_R e^{−(1/2){ (y√(2λt+1))² + 4λy√(x_0 t) }} dy
              = (1/√(2π)) e^{−λx_0} e^{2λ² x_0 t/(2λt+1)} ∫_R e^{−(1/2) ( y√(2λt+1) + 2λ√(x_0 t)/√(2λt+1) )²} dy
              = exp( −λx_0 + 2λ² x_0 t/(2λt+1) ) (1/√(2λt+1)) ∫_R q(y) dy,
where q is some law density. Finally, since −λx_0 + 2λ² x_0 t/(2λt+1) = −λx_0/(2λt+1),
    ϕ(x_0, 1) = (1 + 2λt)^{−1/2} exp( −λx_0/(2λt+1) ).
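As a hedged numerical check (our illustration), one can compare this closed formula with a Monte Carlo estimate produced by the besq_paths sketch given after (14.26):

lam, x, delta, t = 0.7, 1.3, 2.0, 1.0
Z_T = besq_paths(x0=x, delta=delta, T=t, n_paths=200_000)
mc = np.exp(-lam * Z_T).mean()
closed = (1 + 2 * lam * t) ** (-delta / 2) * np.exp(-lam * x / (1 + 2 * lam * t))
print(mc, closed)   # the two values should agree up to Monte Carlo and Euler error

Here the closed expression (1 + 2λt)^{−δ/2} exp(−λx/(2λt + 1)) is obtained from ϕ(x, δ) = ϕ(x/δ, 1)^δ and the formula just proved.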
15 Cox-Ingersoll-Ross equation
where b, c ∈ R and a, x0 ≥ 0.
Proof.
The function x ↦ a + bx is Lipschitz and x ↦ √|x| fulfills the assumption of Proposition 13.29. Moreover those functions are continuous. The result follows by Corollary 13.33.
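For illustration (ours, not part of the notes), and assuming (15.33) has the form suggested by the proof above, namely dX_t = (a + bX_t) dt + c√|X_t| dW_t with X_0 = x_0 ≥ 0, the Cox-Ingersoll-Ross process can be simulated as follows:

import numpy as np

def cir_paths(x0, a, b, c, T, n_paths=1000, n_steps=1000, seed=0):
    # Hedged sketch: Euler-Maruyama scheme for the assumed CIR dynamics
    # dX = (a + b X) dt + c sqrt(|X|) dW, X_0 = x0.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = X + (a + b * X) * dt + c * np.sqrt(np.abs(X)) * dW
    return X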
Remark 15.2 (i) The unique solution X of Proposition 15.1 is called the Cox-Ingersoll-Ross process.
(ii) Again, as in the case of the squared Bessel process, we apply the comparison theorem for SDEs, setting
The next step allows us to associate the solution of (15.31) with a squared Bessel process. Before this we need a lemma that can be found in [17].
Lemma 15.3 Let (M_t) be an (F_t)-local martingale with quadratic variation [M]_t = ∫_0^t Φ_s² ds, for Φ a measurable (F_t)-adapted non-negative process. Then there is an enlarged (filtered) probability space (Ω̄, F̄, P̄, (F̄_t)) and on it a classical Wiener process (W̄_t) such that
    M_t = M_0 + ∫_0^t Φ_s dW̄_s.
Remark 15.4 By an enlarged probability space we mean a space of the type Ω̄ = Ω × Ω_0, F̄ = F ⊗ F_0.
Proposition 15.5 Let (X_t) be the solution of (15.33) and (T_t) a BESQ(x_0, δ) with δ = 4a/c². Then X_t = e^{bt} T_{ϕ(t)} in law, where
    ϕ(t) = (c²/(4b)) (1 − e^{−bt}).
Consequently, by (15.34) we have
    R_t = R_0 + a ∫_0^t e^{−bs} ds + c ∫_0^t e^{−bs/2} √R_s dW_s.
Assumption 16.1 a) We are given a square integrable random variable ξ which plays the role of a final condition.
Remark 16.2 (i) In this first section, Assumption (iii) on f will not operate.
(ii) The Lipschitz conditions with respect to y given in item (iii) are not necessary. They can be replaced by monotonicity conditions.
(iii) Since the right-hand side in item b) is continuous, every solution is a continuous process.
Remark 16.5 (i) f = 0. This case corresponds to the representation theorem of martingales and to the hedging problem in finance.
In this case the equation BSDE(ξ, f) becomes
    Y_t = ξ − ∫_t^T Z_s dW_s,   E( ∫_0^T Z_s² ds ) < ∞.
So
    Y_t = Y_T − ∫_t^T Z_s dW_s = ξ − ∫_t^T Z_s dW_s,
which shows that BSDE(ξ, 0) has a solution.
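Explicitly, taking conditional expectations in the first equation gives Y_t = E(ξ | F_t), and Z is the integrand provided by the Brownian martingale representation theorem applied to the martingale t ↦ E(ξ | F_t).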
We go on with uniqueness.
Let (Y 1 , Z 1 ), (Y 2 , Z 2 ) be two solutions. We set
Y = Y 1 − Y 2, Z = Z 1 − Z 2.
We have
    Y_t = − ∫_t^T Z_s dW_s,  t ∈ [0, T].    (16.1)
Since E( ∫_0^T Z_s² ds ) < ∞, the process ∫_0^· Z_s dW_s is a (square integrable) martingale. Taking the conditional expectation in (16.1) with respect to F_t, we get Y_t = 0 and so Y ≡ 0. Then
    0 = ∫_0^t Z_s dW_s,  t ≥ 0.
Taking the quadratic variation we obtain
    0 = ∫_0^t Z_s² ds,
where (X_t) is the solution of a classical SDE
    X_t = x_0 + ∫_0^t a(s, X_s) dW_s + ∫_0^t b(s, X_s) ds,   E(a, b),    (16.2)
so
    Y_t = Y_τ − ∫_t^τ Z_s dW_s + ∫_t^τ g(s, X_s, Y_s, Z_s) ds.
Finally, letting τ converge to T we get
    Y_t = ξ − ∫_t^T Z_s dW_s + ∫_t^T g(s, X_s, Y_s, Z_s) ds,
Lemma 16.7 Let Y be an adapted measurable process such that E(sup_{t≤T} Y_t²) < ∞. Let Z ∈ L²(W)_T. Then M_t := ∫_0^t Y_s Z_s dW_s, t ≥ 0, is a martingale.
which is finite.
From now on, we will make Assumption (iii) (Lipschitz on (y, z)) on f .
Proof. If (Y, Z) is a solution, then
    Y_t = Y_0 − ∫_0^t f(s, Y_s, Z_s) ds + ∫_0^t Z_s dW_s    (16.4)
(with the convention that the infimum over an empty set is infinite) and the stopped process
    Y_t^n = Y_{t∧τ_n}.
Since Y is continuous, and therefore almost all its paths are bounded on [0, T], we have τ_n → +∞ as n → ∞. Then |Y_t^n| ≤ n ∨ |Y_0| and
    Y_t^n = Y_0 − ∫_0^t 1_{[0,τ_n]}(s) f(s, Y_s, Z_s) ds + ∫_0^t 1_{[0,τ_n]}(s) Z_s dW_s.
Taking into account Assumption 16.1 b) (ii), (iii), there is a constant C depending on K (and which in the sequel can change from one line to the other) such that the previous expression is smaller than or equal to
    C ( |Y_0|² + ∫_0^t E( f(s, 0, 0)² + |Y_s^n|² + |Z_s|² ) ds ).
So there exists a quantity C, depending on K, Y_0, E(∫_0^T f(s, 0, 0)² ds), E(∫_0^T Z_s² ds) but not on n, such that
    E(|Y_t^n|²) ≤ C ( 1 + ∫_0^t E(|Y_s^n|²) ds ).
Therefore, by Fatou’s lemma
E(Yt2 ) = E(lim inf (Ytn )2 ) ≤ lim inf E(Ytn )2 ≤ CeCT , t ∈ |0, T ]. (16.5)
n→∞ n→∞
147
Taking the expectation in the previous expression and using (16.9) we get
    E(ξ²) = E(Y_t²) − 2 ∫_t^T E( Y_s f(s, Y_s, Z_s) ) ds + E( ∫_t^T Z_s² ds ).
Therefore
    E(Y_t²) + E( ∫_t^T Z_s² ds ) = E(ξ²) + 2 ∫_t^T E( Y_s f(s, Y_s, Z_s) ) ds.    (16.10)
On the other hand, taking into account the initial assumptions on f, we get
    2E(|Y_s f(s, Y_s, Z_s)|) ≤ 2E( |Y_s (f(s, Y_s, Z_s) − f(s, 0, 0))| ) + 2E( |Y_s f(s, 0, 0)| )
                             ≤ 2K E( |Y_s| (|Y_s| + |Z_s|) ) + 2E( |Y_s| |f(s, 0, 0)| )
                             ≤ c ( E(|Y_s|²) + E(|Y_s Z_s|) + E(|Y_s| |f(s, 0, 0)|) ).
Using
    ab ≤ (a² + b²)/2  with  a² = c|Y_s|²,  b² = |Z_s|²/c,
it yields
    E(|Y_s Z_s|) ≤ (1/2) E( c Y_s² + Z_s²/c ).
So
    2E(|Y_s f(s, Y_s, Z_s)|) ≤ c E(|Y_s|²) + (c²/2) E(|Y_s|²) + (1/2) E(|Z_s|²) + (c/2) E(f(s, 0, 0)²) + (c/2) E(|Y_s|²)
                             = C_0 ( E(|Y_s|²) + E(f(s, 0, 0)²) ) + (1/2) E(|Z_s|²),
where C_0 is a constant only depending on K, T. Coming back to (16.10), we obtain
    E(Y_t²) + E( ∫_t^T Z_s² ds ) ≤ E(ξ²) + C_0 E( ∫_t^T Y_s² ds + ∫_t^T f(s, 0, 0)² ds ) + (1/2) E( ∫_t^T Z_s² ds ).
Therefore, since E(Y_t²) < ∞ for t ∈ [0, T],
    E(Y_t²) + (1/2) E( ∫_t^T Z_s² ds ) ≤ E(ξ²) + C_0 ∫_t^T E(Y_s²) ds + C_0 E( ∫_0^T f(s, 0, 0)² ds ).    (16.11)
Combining (16.11) and (16.12) provides the final statement, but with the supremum outside the expectation, i.e.
    sup_{0≤t≤T} E(|Y_t|²) + E( ∫_0^T |Z_t|² dt ) ≤ c E( |ξ|² + ∫_0^T |f(t, 0, 0)|² dt ).    (16.13)
We recall that by the Brownian martingale representation theorem there is Z ∈ L²(W)_T such that
    M_t = E( ξ + ∫_0^T f(s, U_s, V_s) ds ) + ∫_0^t Z_r dW_r,  t ∈ [0, T].    (16.15)
We also set
    Y_t = M_t − ∫_0^t f(s, U_s, V_s) ds,  t ∈ [0, T],    (16.16)
which is a continuous process. In particular Y_T = ξ.
(iv) (Y, Z) is a fixed point of Φ if and only if (Y, Z) is a solution of BSDE(f, ξ).
where
    M_t = E( ξ + ∫_0^T f(r, U_r, V_r) dr | F_t )  a.s.,
which says that Φ(U, V) = (Y, Z). This shows (iii). (iv) is a direct consequence of (iii).
    2ab ≤ 4K a² + b²/(4K),
with
    a = |Ȳ_s|,  b = |Ū_s| + |V̄_s|.
We choose γ = 1 + 4K², hence
    E( ∫_0^T e^{γs} (|Ȳ_s|² + |Z̄_s|²) ds ) ≤ (1/2) E( ∫_0^T e^{γs} (|Ū_s|² + |V̄_s|²) ds ),
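In other words, Φ is a strict contraction (with constant 1/2) for the norm whose square is E(∫_0^T e^{γs}(|U_s|² + |V_s|²) ds); by the Banach fixed point theorem it admits a unique fixed point, and by item (iv) above this yields existence and uniqueness of the solution of BSDE(ξ, f).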
Proof. We use Itô’s formula to develop the increment of |Ys − Ys′ |2 between
s = t and T , yielding
Z T
|Yt − Yt′ |2 + |Zs − Zs′ |2 ds = |ξ − ξ ′ |2
t
Z T
+ 2 (Ys − Ys′ )(f (s, Ys , Zs ) − f ′ (s, Ys′ , Zs′ ))ds (16.19)
t
− 2(MT − Mt ),
152
where M_t = ∫_0^t (Y_s − Y_s′)(Z_s − Z_s′) dW_s is a martingale by Lemma 16.7. We remark that
and
    a = |Y_s − Y_s′|,  b = |Z_s − Z_s′|.
Hence, taking the expectation in (16.19) and taking (16.20) into account, we obtain
    E( |Y_t − Y_t′|² + (1/2) ∫_t^T |Z_s − Z_s′|² ds ) ≤ E(|ξ − ξ′|²) + E( ∫_t^T ( f(s, Y_s′, Z_s′) − f′(s, Y_s′, Z_s′) )² ds ) + (1 + 2K + 2K²) E( ∫_t^T |Y_s − Y_s′|² ds ).    (16.21)
where
    ℓ(s) = f(s, Y_s, Z_s) − f′(s, Y_s, Z_s).
By Doob's inequality we get
    E( sup_{t≤T} |Y_t − Y_t′|² ) ≤ 4|Y_0 − Y_0′|² + 4T E( ∫_0^T ℓ(s)² ds ) + 8K² E( ∫_0^T (Z_s − Z_s′)² ds ) + 8K² E( ∫_0^T (Y_s − Y_s′)² ds ) + 16 E( ∫_0^T (Z_s − Z_s′)² ds ).
The previous expression together with (16.23) and Fubini's theorem provides the final statement.
Theorem 16.12 Let ξ, ξ′ be two square integrable r.v. such that ξ ≤ ξ′ a.s. Let f, f′ fulfill Assumption 16.1 with f(t, y, z) ≤ f′(t, y, z) dt ⊗ dP a.e. Let Y (resp. Y′) be the solution of BSDE(ξ, f) (resp. BSDE(ξ′, f′)). We have the following.
(iii) Whenever P{ξ < ξ′} > 0, or f(t, Y_t′, Z_t′) < f′(t, Y_t′, Z_t′) on a set of positive dt ⊗ dP measure, then Y_0 < Y_0′.
Proof. We define
    α_t = ( f(t, Y_t′, Z_t′) − f(t, Y_t, Z_t′) ) / (Y_t′ − Y_t)  if Y_t ≠ Y_t′,   α_t = 0  if Y_t = Y_t′.
We note that (αt )0≤t≤T and (βt )0≤t≤T are measurable adapted processes,
|α| ≤ K, |β| ≤ K.
For 0 ≤ s ≤ t ≤ T, let
    Γ_{s,t} := exp( ∫_s^t ( α_r − |β_r|²/2 ) dr + ∫_s^t β_r dW_r ).
Define
    (Ȳ_t, Z̄_t) = (Y_t′ − Y_t, Z_t′ − Z_t),  ξ̄ = ξ′ − ξ,  U_t = f′(t, Y_t′, Z_t′) − f(t, Y_t′, Z_t′).
We state a lemma.
Then we apply integration by parts to Γ_{s,t} Ȳ_t using (16.25) and (16.26). (16.25) gives
    Ȳ_t = Ȳ_s − ∫_s^t ( α_r Ȳ_r + β_r Z̄_r ) dr − ∫_s^t U_r dr + ∫_s^t Z̄_r dW_r.
Consequently
    Γ_{s,t} Ȳ_t = Γ_{s,s} Ȳ_s + ∫_s^t Γ_{s,r} dȲ_r + ∫_s^t Ȳ_r dΓ_{s,r} + [Ȳ, Γ_{s,·}]_t   (with Γ_{s,s} = 1)
                = Ȳ_s + ∫_s^t Γ_{s,r} ( −α_r Ȳ_r − β_r Z̄_r ) dr − ∫_s^t Γ_{s,r} U_r dr + ∫_s^t Γ_{s,r} Z̄_r dW_r + ∫_s^t Ȳ_r Γ_{s,r} ( β_r dW_r + α_r dr ) + ∫_s^t Γ_{s,r} β_r Z̄_r dr
                = Ȳ_s − ∫_s^t Γ_{s,r} U_r dr + ∫_s^t Γ_{s,r} ( Z̄_r + β_r Ȳ_r ) dW_r,
which implies the first equality. The second equality of the statement follows by taking the conditional expectation of the first expression with respect to F_s. Indeed the local martingale
    M_t := ∫_s^t Γ_{s,r} ( Z̄_r + Ȳ_r β_r ) dW_r
is a martingale by Lemma 16.7. Indeed, first, β is bounded and so E( ∫_0^T (Z̄_r + Ȳ_r β_r)² dr ) < ∞. Moreover
    sup_{s≤t≤T} Γ_{s,t} ≤ exp( T ||α||_∞ ) Γ̃_{s,t},    (16.27)
where
    Γ̃_{s,t} = exp( −∫_s^t |β_r|²/2 dr + ∫_s^t β_r dW_r ),  s ≤ t ≤ T,
which is a martingale because of the Novikov criterion. Indeed
    E exp( ∫_0^T |β_r|²/2 dr ) < ∞,
with
    L_{s,t} = exp( −∫_s^t |2β_r|²/2 dr + ∫_s^t 2β_r dW_r ),
References
[4] Patrick Billingsley. Convergence of probability measures. Wiley Series
in Probability and Statistics: Probability and Statistics. John Wiley
& Sons Inc., New York, second edition, 1999. A Wiley-Interscience
Publication.
[10] F. Flandoli, F. Russo, and J. Wolf. Some SDEs with distributional drift.
I. General calculus. Osaka J. Math., 40(2):493–542, 2003.
[11] F. Flandoli, F. Russo, and J. Wolf. Some SDEs with distributional drift.
II. Lyons-Zheng structure, Itô’s formula and semimartingale character-
ization. Random Oper. Stochastic Equations, 12(2):145–184, 2004.
[14] Ĭ. Ī. Gı̄hman and A. V. Skorohod. The theory of stochastic processes. I,
volume 210 of Grundlehren der Mathematischen Wissenschaften [Fun-
damental Principles of Mathematical Sciences]. Springer-Verlag, Berlin,
english edition, 1980. Translated from the Russian by Samuel Kotz.
[15] T. Hida, H.-H. Kuo, J. Potthoff, and L. Streit. White noise, volume
253 of Mathematics and its Applications. Kluwer Academic Publishers
Group, Dordrecht, 1993. An infinite-dimensional calculus.
[23] P. Mathieu. Zero white noise limit through Dirichlet forms, with appli-
cation to diffusions in a random medium. Probab. Theory Related Fields,
99(4):549–580, 1994.
[31] W. Rudin. Real and complex analysis. McGraw-Hill Book Co., New
York, third edition, 1987.
[34] Ch. Stricker. Quasimartingales, martingales locales, semimartingales et
filtration naturelle. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete,
39(1):55–63, 1977.
[38] Marc Yor. Some aspects of Brownian motion. Part II. Lectures in
Mathematics ETH Zürich. Birkhäuser Verlag, Basel, 1997. Some recent
martingale problems.