
Mean square convergent numerical methods

for random differential equations

J.C. Cortés, L. Jódar


Instituto de Matemática Multidisciplinar
Universidad Politécnica de Valencia
Edificio 8G, 46022, Valencia, Spain

L. Villafuerte
Facultad de Ingeniería, Universidad Autónoma de Chiapas
Calle 4a Ote. Nte. 1428, Tuxtla Gutiérrez, Chiapas, México

Email addresses: {jccortes, ljodar}@imm.upv.es (J.C. Cortés, L. Jódar), lva5@hotmail.com (L. Villafuerte).

Abstract

This paper deals with the construction of numerical solutions of random matrix
initial value problems by means of a random Euler scheme. Conditions for the mean
square convergence of the method are established without resorting to the random
mean value theorem. Finally, several illustrative examples are included in which
the main statistical properties of the stochastic approximation processes are given.

Key words: Random differential equation, mean square calculus, numerical solution
MSC2000: 35R60, 60H15, 65N12, 65N25

1 Introduction

Random differential equations are useful for modelling problems involving rates
of change of quantities that represent variables subject to uncertainty or
randomness, and which are therefore stochastic processes instead of deterministic
functions [1], [2], [3], [8], [9], [10], [15]. The model takes the form




Ẋ(t) = F(X(t), t),  t0 ≤ t ≤ te,
X(t0) = X0,                                                          (1.1)

where X0 is a second order random variable, and both the unknown X(t) and the
right-hand side F(X(t), t) are second order stochastic processes.
The random initial value problem (1.1) has been treated from the theoretical
point of view by many authors [14,13,6]. We may distinguish two main approaches.
One of them treats problem (1.1) as an initial value problem in an abstract
Banach space [14], [6], and the other is the so-called sample approach, where,
through the realizations of problem (1.1), one gets information about the
stochastic process solution of (1.1) [13, Appendix A], [7].

It is important to remark that, unlike the deterministic case, when dealing with
numerical methods it is not enough to construct discrete approximating stochastic
processes that converge to the exact theoretical solution; it is also necessary
to compute at least the expectation and the variance functions of the
approximating process, in order to have a representative statistical description
of the solution process.
In recent papers [4,5] the authors proposed a random Euler method to solve
problem (1.1) numerically, but those results are based on a strong version of the
random mean value theorem whose proof is not correct. Thus, in this paper we
establish the convergence of a random Euler method for problem (1.1) without
using the random mean value theorem.

This paper is organized as follows. Section 2 deals with some preliminary
definitions, results, notations and examples that clarify the presentation of the
paper, as well as the extension of the results to the matrix framework, which is
necessary to address problem (1.1) in the case of a system of random differential
equations. Section 3 is devoted to the presentation and the proof of the
convergence of the random Euler method in the mean square sense, for the scalar
case. The case of systems of random differential equations is treated in Section 4.
Illustrative examples are included in Section 5.

2 Preliminaries

This section deals with some preliminary notations, results and examples that
will clarify the presentation of the main results of the paper, related to the
random Euler method for the numerical solution of initial value problems
associated with random differential equations.

Let (Ω, F, P) be a probability space. In what follows we are interested in
second order real random variables (2-r.v.'s), Y : Ω → R, having a probability
density function fY(y) such that

E[Y²] = ∫_{−∞}^{+∞} y² fY(y) dy < +∞,

where E [·] denotes the expectation operator. The space of all 2-r.v.’s defined
on (Ω, F, P ) and endowed with the norm
‖Y‖ = (E[Y²])^{1/2},                                                 (2.1)

has a Banach space structure, denoted by L2. If {X^ij : 1 ≤ i ≤ r, 1 ≤ j ≤ s} is
a set of r × s 2-r.v.'s, then the second order random matrix (2-r.m.) associated
with this family is defined as

X = (X^ij)_{1≤i≤r, 1≤j≤s}.                                           (2.2)

The set of all 2-r.m.'s X of size r × s, endowed with the norm

‖X‖_{r×s} = max_{1≤i≤r} Σ_{j=1}^{s} ‖X^ij‖,                          (2.3)

has a Banach space structure, denoted by L2^{r×s}. Given an interval T ⊆ R, a
stochastic process {X(t), t ∈ T} defined on (Ω, F, P) is called a second order
stochastic process (2-s.p.) if, for each t ∈ T, X(t) is a 2-r.v. In an analogous
way, if for each t ∈ T, X(t) is a 2-r.m. of size r × s, then {X(t), t ∈ T} is a
second order matrix stochastic process (2-m.s.p.). In what follows, we shall
assume that every r.v., r.m., s.p. and m.s.p. is of second order unless the
contrary is stated.
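Since the norms (2.1) and (2.3) are defined through expectations, they can be
estimated in practice from simulated realizations. The following is a minimal
sketch (assuming NumPy; the Gaussian sample data are a hypothetical illustration,
not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_norm(samples):
    """Monte Carlo estimate of the norm (2.1): ||Y|| = (E[Y^2])^(1/2)."""
    return np.sqrt(np.mean(samples ** 2))

def matrix_norm(samples):
    """Estimate of the norm (2.3): ||X||_{r x s} = max_i sum_j ||X^ij||.

    `samples` has shape (n_realizations, r, s).
    """
    entry_norms = np.sqrt(np.mean(samples ** 2, axis=0))  # matrix of ||X^ij||
    return entry_norms.sum(axis=1).max()

# Hypothetical data: Y ~ N(0, 1), so ||Y|| = 1; X has i.i.d. N(0, 1) entries.
Y = rng.standard_normal(100_000)
X = rng.standard_normal((100_000, 2, 3))
print(l2_norm(Y), matrix_norm(X))  # approximately 1 and approximately 3
```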
We say that a sequence of 2-r.m.'s {Xn}_{n≥0} is mean square (m.s.) convergent
to X ∈ L2^{r×s}, denoted by Xn → X (m.s.) as n → ∞, if

lim_{n→∞} ‖Xn − X‖_{r×s} = 0.                                        (2.4)

From the corresponding properties for its components (see [13, p. 88]), if
{Xn}_{n≥0} is a sequence of random matrices in L2^{r×s} m.s. convergent to X, then

E[Xn] → E[X]  as n → ∞,                                              (2.5)

where, using the notation introduced in (2.2), E[X] = (E[X^ij])_{r×s}. In the
particular case where the 2-m.s.p. X(t) is a vector, say
X(t) = (X^1(t), ..., X^r(t))^T, its expectation function is the deterministic
vector function E[X(t)] = (E[X^i(t)])_{r×1}, and note that by definition one gets
E[X^T(t)] = (E[X(t)])^T, where X^T(t) denotes the transpose of the vector X(t).
The covariance matrix function of {X(t), t ∈ T} is defined by

Λ_{X(t)} = E[(X(t) − E[X(t)])(X(t) − E[X(t)])^T] = (v^ij(t))_{r×r},  (2.6)

where

v^ij(t) = E[(X^i(t) − E[X^i(t)])(X^j(t) − E[X^j(t)])]
        = E[X^i(t) X^j(t)] − E[X^i(t)] E[X^j(t)],  1 ≤ i, j ≤ r,  t ∈ T.   (2.7)

Note that v^ii(t), denoted by V[X^i(t)] = E[(X^i(t))²] − (E[X^i(t)])², is the
variance of the r.v. X^i(t), 1 ≤ i ≤ r. If E[X(t)] = 0, then Λ_{X(t)} is called
the correlation matrix function of {X(t), t ∈ T}. Also, for the covariance matrix
one gets the following property (see [13, p. 88]):

Xn → X (m.s.)  ⇒  Λ_{Xn} → Λ_X  as n → ∞.                            (2.8)
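Property (2.8) is what justifies reading approximate covariances off an m.s.
convergent approximation. As a minimal illustration (assuming NumPy; the
correlated Gaussian sample below is hypothetical), the covariance matrix
(2.6)-(2.7) can be estimated from realizations:

```python
import numpy as np

def covariance_matrix(samples):
    """Sample version of (2.6); `samples` has shape (n_realizations, r)."""
    centered = samples - samples.mean(axis=0)      # X - E[X]
    return centered.T @ centered / len(samples)    # E[(X - E[X])(X - E[X])^T]

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.0], [0.5, 1.0]])
Z = rng.standard_normal((50_000, 2)) @ A           # exact covariance is A^T A
print(covariance_matrix(Z))                        # close to [[1.25, 0.5], [0.5, 1.0]]
```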

Given a matrix A = (a^ij) in R^{r×r}, we denote by ‖A‖_∞ the norm defined as
[11, p. 57]

‖A‖_∞ = max_{1≤i≤r} Σ_{j=1}^{r} |a^ij|.                              (2.9)

The following lemma deals with the norm of the product of a deterministic matrix
and a random matrix of compatible sizes, and it will play an important role in
what follows:

Lemma 2.1 Let A, B be matrices in R^{r×r} and R^{s×s}, respectively, and let
X, Y ∈ L2^{r×s}. Then

‖AX‖_{r×s} ≤ ‖A‖_∞ ‖X‖_{r×s},  ‖YB‖_{r×s} ≤ ‖B‖_∞ ‖Y‖_{r×s}.         (2.10)

Proof. Since the procedure for establishing these two inequalities is analogous,
we only prove the first one. By (2.3) and (2.9) it follows that

‖AX‖_{r×s} = max_{1≤i≤r} Σ_{j=1}^{s} ‖Σ_{k=1}^{r} a^ik X^kj‖
           ≤ max_{1≤i≤r} Σ_{j=1}^{s} Σ_{k=1}^{r} |a^ik| ‖X^kj‖
           = max_{1≤i≤r} Σ_{k=1}^{r} |a^ik| Σ_{j=1}^{s} ‖X^kj‖
           ≤ (max_{1≤i≤r} Σ_{k=1}^{r} |a^ik|) (max_{1≤k≤r} Σ_{j=1}^{s} ‖X^kj‖)
           = ‖A‖_∞ ‖X‖_{r×s}.

We say that a 2-m.s.p. {X(t) : t ∈ T} in L2^{r×s}, T an interval of the real
line, is m.s. continuous at t ∈ T if

lim_{τ→0} ‖X(t + τ) − X(t)‖_{r×s} = 0,  t, t + τ ∈ T,

and it is m.s. differentiable at t ∈ T if there exists a 2-m.s.p., denoted by
{Ẋ(t) : t ∈ T}, such that

lim_{τ→0} ‖(X(t + τ) − X(t))/τ − Ẋ(t)‖_{r×s} = 0,  t, t + τ ∈ T.     (2.11)

Example 2.2 Let Y be a 2-r.v. and consider the 2-s.p. Y(t) = Y · t for t lying
in an interval T. Note that Y(t) is m.s. differentiable at t because, for a real
number τ such that t + τ ∈ T, one gets

‖(Y(t + τ) − Y(t))/τ − Y‖² = E[((Y·(t + τ) − Y·t)/τ − Y)²]
                           = E[Y²] ((t + τ − t)/τ − 1)² → 0 as τ → 0.

Definition 2.3 Let S be a bounded set in L2^{r×s}, T ⊆ R an interval and h > 0.
We say that F : S × T → L2^{r×s} is randomly bounded time uniformly continuous
in S if

lim_{h→0} ω(S, h) = 0,                                               (2.12)

where

ω(S, h) = sup_{X∈S} sup_{|t−t′|≤h} ‖F(X, t) − F(X, t′)‖_{r×s}.       (2.13)

Example 2.4 Let us consider the second order vector stochastic process
F(X, t) = A(t)X + G(t), 0 ≤ t ≤ te, where

X = [X^1, X^2]^T,  A(t) = A = [[0, 1], [−ω0², −2ω0ξ]],  G(t) = [0, −Y(t)]^T   (2.14)

(for convenience we have identified vector notation with matrix notation), with

Y(t) = Σ_{j=1}^{m} t a_j e^{−α_j t} cos(ω_j t + θ_j),  t ≥ 0,        (2.15)

where a_j, α_j, ω_j and ξ are positive real numbers, and the θ_j are pairwise
independent 2-r.v.'s uniformly distributed on [0, 2π]. Note that

(A(t)X + G(t))^T = [X^2, −ω0² X^1 − 2ω0ξ X^2 − Y(t)],                (2.16)

and

(F(X, t) − F(X, t′))^T = [0, Y(t′) − Y(t)].                          (2.17)
Now, by considering the relationship

E[cos(ω_j t + θ_j) cos(ω_k t′ + θ_k)] = 0 if j ≠ k,
                                        (1/2) cos(ω_j (t − t′)) if j = k,   (2.18)

one gets

E[(Y(t′) − Y(t))²] = Σ_{j=1}^{m} (a_j²/2) (t² e^{−2α_j t} + (t′)² e^{−2α_j t′})
                     − Σ_{j=1}^{m} a_j² t t′ e^{−α_j (t + t′)} cos(ω_j (t − t′)).   (2.19)

Hence, using the fact that the deterministic exponential and cosine functions
involved in (2.19) are continuous with respect to the variable t, from
Definition 2.3 it follows that F(X, t) is randomly bounded time uniformly
continuous.
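A quick numerical sanity check of (2.19) is possible by simulation. The
following is a minimal sketch (assuming NumPy; the parameter values m = 3,
a_j = α_j = ω_j = 1 and the evaluation points are arbitrary choices, not values
from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n_samples = 3, 200_000
a, alpha, omega = np.ones(m), np.ones(m), np.ones(m)

def Y(t, theta):
    """Realizations of the process (2.15); `theta` has shape (n_samples, m)."""
    return (t * a * np.exp(-alpha * t) * np.cos(omega * t + theta)).sum(axis=1)

theta = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, m))
t, tp = 1.0, 1.3

mc = np.mean((Y(tp, theta) - Y(t, theta)) ** 2)   # Monte Carlo estimate
exact = (0.5 * a**2 * (t**2 * np.exp(-2 * alpha * t) + tp**2 * np.exp(-2 * alpha * tp))
         - a**2 * t * tp * np.exp(-alpha * (t + tp)) * np.cos(omega * (t - tp))).sum()
print(mc, exact)   # the two values should agree closely for large n_samples
```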

For the sake of clarity in the presentation, we conclude this section by
stating, without proof, some results that may be found in [13, Chap. 4].

Theorem 2.5 (Integration by parts) Let {X(t), t ∈ T} be a 2-s.p. m.s.
differentiable on T = [t0, t] and let the deterministic function h(t, u) be
continuous on T × T and such that its partial derivative ∂h(t, u)/∂u exists. If
one denotes by

Y(t) = ∫_{t0}^{t} h(t, u) Ẋ(u) du                                    (2.20)

(where the above integral is considered in the Riemann mean square sense), then

Y(t) = [h(t, u) X(u)]_{u=t0}^{u=t} − ∫_{t0}^{t} (∂h(t, u)/∂u) X(u) du.   (2.21)

Taking h(t, u) ≡ 1 in (2.20)-(2.21) one deduces the following

Theorem 2.6 (Fundamental theorem of the mean square calculus) Let
{X(t), t ∈ T} be a 2-s.p. m.s. differentiable on T = [t0, t], such that Ẋ(t) is
m.s. Riemann integrable on T. Then

∫_{t0}^{t} Ẋ(u) du = X(t) − X(t0).                                   (2.22)

Example 2.7 In the framework of Example 2.2, note that taking the m.s.
differentiable process X(t) = Y · t and applying the integration by parts
formula with h(t, u) ≡ 1, one gets

∫_{t0}^{t} Y du = Y (t − t0).

Proposition 2.8 If {X(t), t ∈ T} is a 2-s.p. m.s. continuous on T = [t0, t],
then

‖∫_{t0}^{t} X(u) du‖ ≤ ∫_{t0}^{t} ‖X(u)‖ du ≤ M_X (t − t0),  M_X = max_{t0≤u≤t} ‖X(u)‖.   (2.23)

3 Convergence of the scalar random Euler method

Let us consider the random initial value problem (1.1) under the following
hypotheses on F : S × T → L2, with S ⊂ L2:

• H1: F(X, t) is m.s. randomly bounded time uniformly continuous.
• H2: F(X, t) satisfies the m.s. Lipschitz condition

‖F(X, t) − F(Y, t)‖ ≤ k(t) ‖X − Y‖,  ∫_{t0}^{te} k(t) dt < +∞.       (3.1)

Note that condition H2 guarantees the m.s. continuity of F(X, t) with respect to
the first variable, while H1 guarantees the continuity of F(X, t) with respect
to the second variable. Hence, from the inequality

‖F(X, t) − F(Y, t′)‖ ≤ ‖F(X, t) − F(Y, t)‖ + ‖F(Y, t) − F(Y, t′)‖,

one gets the m.s. continuity of F(X, t) with respect to both variables.

Let us introduce the random Euler method for problem (1.1), defined by

X_{n+1} = Xn + h F(Xn, tn),  n ≥ 0,
X0 = X(t0),                                                          (3.2)

where Xn, F(Xn, tn) are 2-r.v.'s and h = tn − t_{n−1}, with tn = t0 + nh, for
n = 0, 1, 2, .... We wish to prove that, under hypotheses H1 and H2, the Euler
method (3.2) is m.s. convergent in the fixed station sense, i.e., for fixed
t ∈ [t0, te] and taking n so that t = tn = t0 + nh, the m.s. error

en = Xn − X(t) = Xn − X(tn)                                          (3.3)

tends to zero in L2 as h → 0, n → ∞ with t − t0 = nh.
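In practice, the scheme (3.2) can be run pathwise on simulated realizations of
the random data, and the statistics of Xn are then estimated across
realizations. The following is a minimal sketch (assuming NumPy; the test
equation Ẋ(t) = −A X(t) with a uniformly distributed coefficient A is a
hypothetical illustration, not an example from the paper):

```python
import numpy as np

def random_euler(F, X0, t0, te, h):
    """Random Euler method (3.2), applied pathwise to an array of realizations.

    `X0` holds realizations of the initial 2-r.v.; `F(X, t)` acts on them
    componentwise.
    """
    n_steps = int(round((te - t0) / h))
    X, t = X0.copy(), t0
    for _ in range(n_steps):
        X = X + h * F(X, t)
        t += h
    return X

rng = np.random.default_rng(3)
A = rng.uniform(1.0, 2.0, size=100_000)     # hypothetical random coefficient
X0 = np.ones_like(A)                        # deterministic initial condition
Xn = random_euler(lambda X, t: -A * X, X0, 0.0, 1.0, h=1e-3)
print(Xn.mean(), Xn.var())                  # estimates of E[X(1)] and V[X(1)]
# Pathwise exact solution: X(1) = e^{-A}, so E[X(1)] = e^{-1} - e^{-2} = 0.2325...
```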


Note that, under hypotheses H1 and H2, Theorem 5.1.2 of [13, p. 118] guarantees
the existence and uniqueness of a m.s. solution X(t) in [tn, t_{n+1}] ⊂ [t0, te],
and by the m.s. fundamental theorem of calculus (Theorem 2.6) it follows that

X(t_{n+1}) = X(tn) + ∫_{tn}^{t_{n+1}} Ẋ(u) du,  n ≥ 0.               (3.4)
From (3.2)-(3.4) it follows that

e_{n+1} − en = (X_{n+1} − Xn) − (X(t_{n+1}) − X(tn))
             = h F(Xn, tn) − ∫_{tn}^{t_{n+1}} Ẋ(u) du.               (3.5)

Note also that, as F(Xn, tn) ∈ L2, the first term appearing on the right-hand
side of (3.5) can be written as follows (see Example 2.7):

h F(Xn, tn) = F(Xn, tn)(t_{n+1} − tn) = ∫_{tn}^{t_{n+1}} F(Xn, tn) du.   (3.6)

By (3.5), (3.6) and using that Ẋ(u) = F(X(u), u), one gets

e_{n+1} = en + ∫_{tn}^{t_{n+1}} [F(Xn, tn) − Ẋ(u)] du
        = en + ∫_{tn}^{t_{n+1}} [F(Xn, tn) − F(X(u), u)] du.          (3.7)

As, under hypotheses H1 and H2, F(X, t) is m.s. continuous with respect to both
variables, the 2-s.p. defined by

G(u) = F(Xn, tn) − F(X(u), u)                                        (3.8)

is m.s. continuous for u ∈ [tn, t_{n+1}]. Taking norms in (3.7) and using
Proposition 2.8, it follows that

‖e_{n+1}‖ ≤ ‖en‖ + ∫_{tn}^{t_{n+1}} ‖F(Xn, tn) − F(X(u), u)‖ du.     (3.9)

Let us bound the integrand appearing in (3.9) in the form

‖F(Xn, tn) − F(X(u), u)‖ ≤ ‖F(Xn, tn) − F(X(tn), tn)‖
                         + ‖F(X(tn), tn) − F(X(u), tn)‖
                         + ‖F(X(u), tn) − F(X(u), u)‖.               (3.10)

For the first two terms, using hypothesis H2, one gets the following bounds:

‖F(Xn, tn) − F(X(tn), tn)‖ ≤ k(tn) ‖Xn − X(tn)‖ = k(tn) ‖en‖,        (3.11)

‖F(X(tn), tn) − F(X(u), tn)‖ ≤ k(tn) ‖X(tn) − X(u)‖,  u ∈ [tn, t_{n+1}].   (3.12)

Note that applying (3.4) in [tn, u] ⊂ [tn, t_{n+1}] and using Proposition 2.8
again, it follows that

‖X(u) − X(tn)‖ = ‖∫_{tn}^{u} Ẋ(v) dv‖ ≤ ∫_{tn}^{u} ‖Ẋ(v)‖ dv ≤ M_Ẋ (u − tn),   (3.13)

where

M_Ẋ = sup {‖Ẋ(v)‖ : t0 ≤ v ≤ te}.                                   (3.14)

From (3.12)-(3.14) one gets

‖F(X(tn), tn) − F(X(u), tn)‖ ≤ k(tn) h M_Ẋ.                          (3.15)

Let S_X be the bounded set in L2 defined by the exact theoretical solution of
problem (1.1),

S_X = {X(t), t0 ≤ t ≤ te}.                                           (3.16)

Then, by hypothesis H1 and Definition 2.3, we have

‖F(X(u), tn) − F(X(u), u)‖ ≤ ω(S_X, h),                              (3.17)

and by (3.11), (3.15) and (3.17) it follows that (3.10) can be written in the form

‖F(Xn, tn) − F(X(u), u)‖ ≤ k(tn) ‖en‖ + h M_Ẋ k(tn) + ω(S_X, h),

and hence (3.9) takes the form

‖e_{n+1}‖ ≤ ‖en‖ [1 + h k(tn)] + h [ω(S_X, h) + h M_Ẋ k(tn)].        (3.18)

By (3.18) and Lemma 1.2 of [12, p. 28], one gets

‖en‖ ≤ e^{nh k(tn)} ‖e0‖ + ((e^{nh k(tn)} − 1)/k(tn)) [ω(S_X, h) + h M_Ẋ k(tn)],

and as nh = t − t0 and ‖e0‖ = 0, the last inequality can be written in the form

‖en‖ ≤ ((e^{(t−t0) k(t)} − 1)/k(t)) [ω(S_X, h) + h M_Ẋ k(t)].        (3.19)

From (3.19) it follows that {en} is m.s. convergent to zero, and summarizing,
the following result has been established:

Theorem 3.1 With the previous notation, under the hypotheses H1 and H2,
the random Euler method (3.2) is m.s. convergent and the discretization error
en defined by (3.3) satisfies the inequality (3.19) for t = t0 + nh, h > 0,
t0 ≤ t ≤ te .
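Since ω(S_X, h) → 0 as h → 0 and the remaining term in the bracket of (3.19) is
O(h), halving h should roughly halve the error when F is smooth in t. A minimal
numerical check of this behaviour (assuming NumPy and reusing the hypothetical
test equation Ẋ = −AX from the sketch above):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.uniform(1.0, 2.0, size=100_000)        # hypothetical random coefficient
exact = np.exp(-A)                             # pathwise exact solution at t = 1

for h in (0.1, 0.05, 0.025):
    X = np.ones_like(A)
    for _ in range(int(round(1.0 / h))):
        X = X + h * (-A * X)                   # Euler step for F(X, t) = -A X
    err = np.sqrt(np.mean((X - exact) ** 2))   # m.s. error ||e_n||
    print(h, err)                              # decreases roughly linearly in h
```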

4 The matrix case

Let us consider the random initial value problem (1.1) where now X(t) and
F(X, t), with F : S × T → L2^{r×s} and S ⊂ L2^{r×s}, are matrix stochastic
processes of size r × s, and X0 is a random matrix of size r × s. The aim of
this section is to extend the m.s. convergence of the random Euler method to the
matrix framework. Let us introduce the notation

X(t) = (X^ij(t))_{r×s},  F(X, t) = (F^ij(X, t))_{r×s};

the corresponding random matrix Euler method associated with (1.1) then takes
the form

X_{n+1} = Xn + h F(Xn, tn),  n ≥ 0,
X0 = X(t0),                                                          (4.1)

where Xn^ij is the (i, j)-th entry of the random matrix Xn and F^ij(Xn, tn) is
the (i, j)-th entry of F(Xn, tn). The discretization error en has corresponding
(i, j)-th entry

en^ij = Xn^ij − X^ij(tn).                                            (4.2)
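The matrix scheme (4.1) runs exactly like the scalar one, with the update
applied entrywise. A minimal sketch (assuming NumPy; realizations are stored
along the leading axis, and the linear right-hand side with a random frequency
is a hypothetical illustration):

```python
import numpy as np

def random_matrix_euler(F, X0, t0, te, h):
    """Random matrix Euler method (4.1) on an array of matrix realizations.

    `X0` has shape (n_realizations, r, s); `F(X, t)` returns the same shape.
    """
    n_steps = int(round((te - t0) / h))
    X, t = X0.copy(), t0
    for _ in range(n_steps):
        X = X + h * F(X, t)
        t += h
    return X

# Hypothetical example: Xdot = omega J X with a random frequency omega.
rng = np.random.default_rng(5)
omega = rng.uniform(0.5, 1.5, size=(10_000, 1, 1))
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
X0 = np.broadcast_to(np.eye(2), (10_000, 2, 2)).copy()
Xn = random_matrix_euler(lambda X, t: omega * (J @ X), X0, 0.0, 1.0, h=1e-3)
print(Xn.mean(axis=0))   # entrywise estimate of the expectation matrix E[X(1)]
```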
We assume the same hypotheses H1 and H2, with the only difference that the norm
in L2^{r×s} is the norm ‖·‖_{r×s} introduced in Section 2; that is,
F : S × T → L2^{r×s}, S ⊂ L2^{r×s}, satisfies:

• H1: F(X, t) is m.s. randomly bounded time uniformly continuous.
• H2: F(X, t) satisfies the m.s. Lipschitz condition

‖F(X, t) − F(Y, t)‖_{r×s} ≤ k(t) ‖X − Y‖_{r×s},  ∫_{t0}^{te} k(t) dt < +∞.

If X(t) is the theoretical solution of the matrix problem (1.1), then each of
its entries X^ij(t) is m.s. differentiable, and applying the m.s. fundamental
theorem of calculus in the interval [tn, t_{n+1}] ⊂ [t0, te], and considering
the (i, j)-th entry of Ẋ(u) = F(X(u), u), it follows that

X^ij(t_{n+1}) − X^ij(tn) = ∫_{tn}^{t_{n+1}} Ẋ^ij(u) du = ∫_{tn}^{t_{n+1}} F^ij(X(u), u) du.   (4.3)

Taking into account the (i, j)-th entry of (4.1) and Example 2.7 with
Y = F^ij(Xn, tn) ∈ L2, one gets

X^ij_{n+1} − Xn^ij = h F^ij(Xn, tn) = ∫_{tn}^{t_{n+1}} F^ij(Xn, tn) du.   (4.4)

From (4.2)-(4.4) it follows that

e^ij_{n+1} = en^ij + ∫_{tn}^{t_{n+1}} [F^ij(Xn, tn) − F^ij(X(u), u)] du.   (4.5)

In order to prove the m.s. convergence of the error random matrix sequence
en = (en^ij) to the zero matrix in L2^{r×s}, note that, as F(X, t) is m.s.
continuous, by Proposition 2.8 it follows that

max_{1≤i≤r} Σ_{j=1}^{s} ‖∫_{tn}^{t_{n+1}} [F^ij(Xn, tn) − F^ij(X(u), u)] du‖
  ≤ max_{1≤i≤r} Σ_{j=1}^{s} ∫_{tn}^{t_{n+1}} ‖F^ij(Xn, tn) − F^ij(X(u), u)‖ du
  ≤ ∫_{tn}^{t_{n+1}} max_{1≤i≤r} Σ_{j=1}^{s} ‖F^ij(Xn, tn) − F^ij(X(u), u)‖ du.   (4.6)

By applying the inequality (3.10) to each component (i, j) of F(X, t), for
u ∈ [tn, t_{n+1}] we have

‖F^ij(Xn, tn) − F^ij(X(u), u)‖ ≤ ‖F^ij(Xn, tn) − F^ij(X(tn), tn)‖
                               + ‖F^ij(X(tn), tn) − F^ij(X(u), tn)‖
                               + ‖F^ij(X(u), tn) − F^ij(X(u), u)‖.   (4.7)

Note that, using the Lipschitz condition given by H2 and the definition of
‖·‖_{r×s}, we have

‖F^ij(Xn, tn) − F^ij(X(tn), tn)‖ ≤ ‖F(Xn, tn) − F(X(tn), tn)‖_{r×s}
                                 ≤ k(tn) ‖Xn − X(tn)‖_{r×s} = k(tn) ‖en‖_{r×s},   (4.8)

‖F^ij(X(u), tn) − F^ij(X(u), u)‖ ≤ ‖F(X(u), tn) − F(X(u), u)‖_{r×s} ≤ ω(S_X, h),   (4.9)
where S_X is defined as in (3.16), X(t) being the exact theoretical solution of
the matrix problem (1.1). In order to bound ‖F^ij(X(tn), tn) − F^ij(X(u), tn)‖,
first note that F(X(tn), tn) and F(X(u), tn) depend on (r × s) + 1 arguments,
and we shall express the above difference as a sum of r × s terms in each of
which only one argument changes while all the remaining entries stay fixed.
Hence, let us write

F^ij(X(tn), tn) = F^ij(X^11(tn), ..., X^1s(tn); ...; X^r1(tn), ..., X^rs(tn); tn),
F^ij(X(u), tn) = F^ij(X^11(u), ..., X^1s(u); ...; X^r1(u), ..., X^rs(u); tn),

and by decomposing the difference F^ij(X(tn), tn) − F^ij(X(u), tn) into r × s
such terms and applying the Lipschitz condition, we have

‖F^ij(X(tn), tn) − F^ij(X(u), tn)‖ ≤ r s k(tn) max_{1≤i≤r} Σ_{j=1}^{s} ‖X^ij(u) − X^ij(tn)‖,   (4.10)

because

‖F^ij(X(tn), tn) − F^ij(X(u), tn)‖ ≤ ‖F(X(tn), tn) − F(X(u), tn)‖_{r×s}
 ≤ ‖F(X^11(tn), X^12(tn), ..., X^rs(tn); tn) − F(X^11(u), X^12(tn), ..., X^rs(tn); tn)‖_{r×s}
 + ‖F(X^11(u), X^12(tn), ..., X^rs(tn); tn) − F(X^11(u), X^12(u), X^13(tn), ..., X^rs(tn); tn)‖_{r×s}
 + ··· +
 + ‖F(X^11(u), ..., X^{r,s−1}(u), X^rs(tn); tn) − F(X^11(u), ..., X^{r,s−1}(u), X^rs(u); tn)‖_{r×s}
 ≤ r s k(tn) ‖X(u) − X(tn)‖_{r×s}
 = r s k(tn) max_{1≤i≤r} Σ_{j=1}^{s} ‖X^ij(u) − X^ij(tn)‖.

As each entry X^ij(t) of the solution X(t) of the matrix problem (1.1) is m.s.
differentiable, in an analogous way to (3.13)-(3.14) it follows that

‖X^ij(tn) − X^ij(u)‖ = ‖∫_{tn}^{u} Ẋ^ij(v) dv‖ ≤ ∫_{tn}^{u} ‖Ẋ^ij(v)‖ dv ≤ h M_Ẋ,   (4.11)

where X(t) is the theoretical solution of the matrix problem (1.1) and we have
used that

‖Ẋ^ij(v)‖ ≤ ‖Ẋ(v)‖_{r×s} ≤ M_Ẋ = sup {‖Ẋ(t)‖_{r×s} : t0 ≤ t ≤ te}.   (4.12)

By (4.10)-(4.12) one gets

‖F^ij(X(tn), tn) − F^ij(X(u), tn)‖ ≤ s² r k(tn) h M_Ẋ,  1 ≤ i ≤ r, 1 ≤ j ≤ s.   (4.13)

From (4.7)-(4.8) and (4.13) it follows that

‖F^ij(Xn, tn) − F^ij(X(u), u)‖ ≤ k(tn) [‖en‖_{r×s} + s² r h M_Ẋ] + ω(S_X, h),   (4.14)

and hence

max_{1≤i≤r} Σ_{j=1}^{s} ‖F^ij(Xn, tn) − F^ij(X(u), u)‖
  ≤ s k(tn) [‖en‖_{r×s} + s² r h M_Ẋ] + s ω(S_X, h).                 (4.15)

Taking norms in (4.5) and using (4.6) and (4.15), one gets

‖e^ij_{n+1}‖ ≤ ‖en^ij‖ + ‖∫_{tn}^{t_{n+1}} [F^ij(Xn, tn) − F^ij(X(u), u)] du‖
            ≤ ‖en^ij‖ + ∫_{tn}^{t_{n+1}} ‖F^ij(Xn, tn) − F^ij(X(u), u)‖ du
            ≤ ‖en^ij‖ + h s k(tn) ‖en‖_{r×s} + r h² s³ k(tn) M_Ẋ + h s ω(S_X, h).

Hence,

‖e_{n+1}‖_{r×s} ≤ ‖en‖_{r×s} + h s² k(tn) ‖en‖_{r×s} + r h² s⁴ k(tn) M_Ẋ + h s² ω(S_X, h)
               = [1 + h s² k(tn)] ‖en‖_{r×s} + h [s² ω(S_X, h) + r h s⁴ k(tn) M_Ẋ].

From this inequality and Lemma 1.2 of [12, p. 28], it follows that

‖e_{n+1}‖_{r×s} ≤ e^{nhs²k(tn)} ‖e0‖_{r×s} + ((e^{nhs²k(tn)} − 1)/k(tn)) [k(tn) r s⁴ h M_Ẋ + s² ω(S_X, h)].

As e0 = 0 and nh = tn − t0 = t − t0, the last expression can be written in the form

‖e_{n+1}‖_{r×s} ≤ ((e^{(t−t0)s²k(t)} − 1)/k(t)) [k(t) r s⁴ h M_Ẋ + s² ω(S_X, h)].   (4.16)

By hypothesis H1, ω(S_X, h) → 0 as h → 0, and by (4.16) one gets that
‖e_{n+1}‖_{r×s} → 0 as h → 0, n → ∞, nh = t − t0.

Summarizing, the following result has been established:

Theorem 4.1 With the notation introduced in this section, under the hy-
potheses H1 and H2, the random Euler method (4.1) is m.s. convergent and
the discretization error en defined by (4.2) satisfies the inequality (4.16) for
t = t0 + nh, h > 0, t0 ≤ t ≤ te .

5 Examples

In this section we provide several illustrative applications of the random Euler
method. Since in the first one the exact solution is available, we shall compare
the numerical approximations with the exact values.

Example 5.1 Consider the problem of determining the effect on an earthbound
structure of an earthquake-type disturbance. Let us assume that the structure is
at rest at t = 0, and let X(t) > 0, t ≥ 0, be the relative horizontal
displacement of the roof with respect to the ground. Then, based upon an
idealized linear model, the relative displacement X(t) is governed by

Ẍ(t) + 2ξω0 Ẋ(t) + ω0² X(t) = −Y(t),  t ≥ 0,
X(0) = 0,  Ẋ(0) = 0,                                                 (5.1)

where Y(t) is the 2-s.p. given by (2.15). The m.s. solution of problem (5.1) has
the form (see [13, p. 165])

X(t) = −∫_{0}^{t} h(t − z) Y(z) dz,                                  (5.2)

with ξ < 1, and where h(t) is the impulse response

h(t) = (1/ω̂0) e^{−ξω0 t} sin(ω̂0 t),  t ≥ 0,  ω̂0 = ω0 (1 − ξ²)^{1/2}.   (5.3)

The expectation and the autocorrelation functions of the 2-s.p. Y(t) are given by

E[Y(t)] = 0,  t ≥ 0,                                                 (5.4)

and

Γ_Y(u, v) = E[Y(u)Y(v)] = (1/2) Σ_{j=1}^{m} u v a_j² e^{−α_j(u+v)} cos((u − v)ω_j),  u, v ≥ 0.   (5.5)

Hence, from equations (5.2)-(5.5) it follows that

E[X(t)] = 0,  t ≥ 0,

and

Var[X(t)] = E[X(t)X(t)] = ∫_{0}^{t} ∫_{0}^{t} h(t − u) h(t − v) Γ_Y(u, v) du dv,  t ≥ 0.   (5.6)
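The theoretical variance (5.6) can be evaluated by numerical quadrature. A
minimal sketch (assuming NumPy/SciPy; the parameter values are those used later
for Figure 1, except ω0 = 1, which is an assumed value since the paper does not
state it):

```python
import numpy as np
from scipy.integrate import dblquad

xi, w0 = 0.05, 1.0
wh = w0 * np.sqrt(1.0 - xi**2)                  # damped frequency in (5.3)
m = 21                                          # terms j = 0, ..., 20 in (2.15)
a, alpha, w = np.ones(m), 0.5 ** np.arange(m), np.ones(m)

def h(t):
    """Impulse response (5.3)."""
    return np.exp(-xi * w0 * t) * np.sin(wh * t) / wh

def corr_Y(u, v):
    """Autocorrelation (5.5) of the excitation Y(t)."""
    return 0.5 * np.sum(u * v * a**2 * np.exp(-alpha * (u + v)) * np.cos((u - v) * w))

def var_X(t):
    """Double integral (5.6) evaluated by adaptive quadrature."""
    val, _ = dblquad(lambda v, u: h(t - u) * h(t - v) * corr_Y(u, v), 0.0, t, 0.0, t)
    return val

print(var_X(1.0))
```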

In order to apply the random Euler scheme to the problem, we shall convert the
second order differential equation (5.1) into a system of first-order
differential equations. Let X^1(t) = X(t), X^2(t) = Ẋ(t) and
(X(t))^T = [X^1(t), X^2(t)]; then the vector-matrix form of equation (5.1) is

Ẋ(t) = A(t)X(t) + G(t),

where A(t) and G(t) are given by (2.14) and (X(0))^T = 0^T = [0, 0]. As
X0^T = [0, 0], the expression of the random Euler method in this case takes the
form

Xn = (I + hA)^n X0 + h Σ_{i=0}^{n−1} (I + hA)^{n−i−1} G(ti) = h Σ_{i=0}^{n−1} (I + hA)^{n−i−1} G(ti),

where I denotes the identity matrix of size 2. It is important to point out that
hypotheses H1 and H2 hold, so the m.s. convergence of the random Euler method is
guaranteed. In fact, H1 holds by Example 2.4 and, by Lemma 2.1, one gets the
Lipschitz condition

‖F(X, t) − F(Y, t)‖_{r×s} ≤ ‖A‖_∞ ‖X − Y‖_{r×s} = k ‖X − Y‖_{r×s},

where

k = max {1, ω0² + 2ω0ξ},  ∫_{t0}^{te} k dt < +∞.

Note that, as E[Y(ti)] = 0, one gets E[Xn] = 0 and E[Xn](E[Xn])^T = 0, so the
covariance matrix of the random vector Xn is given by

Λ_{Xn} = E[Xn (Xn)^T]
       = h² Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} (I + hA)^{n−i−1} E[G(ti)(G(tj))^T] ((I + hA)^{n−j−1})^T,   (5.7)

where

E[G(ti)(G(tj))^T] = [[0, 0], [0, E[Y(ti)Y(tj)]]],

and E[Y(ti)Y(tj)] is defined by (5.5). Note that if Λ_{Xn} = (V^ij)_{2×2}
denotes the square matrix of size 2 given by (5.7), then V^11 is the
approximating variance of the 2-s.p. solution of equation (5.1) obtained from
the random Euler method. Figure 1 shows V^11 = Var[Xn] for h = 1/20, h = 1/40,
h = 1/80, together with the theoretical variance Var[X(tn)] given by (5.6),
taking in (5.1) ξ = 0.05 and ωj = 1, aj = 1, αj = (1/2)^j, 0 ≤ j ≤ 20, for Y(t)
given by (2.15). It illustrates that the numerical and the exact values get
closer as h decreases, in accordance with property (2.8).
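A direct implementation of (5.7) is straightforward once the powers (I + hA)^k
are precomputed. A minimal sketch (assuming NumPy, and again taking ω0 = 1 as an
assumed value):

```python
import numpy as np

xi, w0, h, n = 0.05, 1.0, 1.0 / 40.0, 40        # n steps of size h on [0, 1]
m = 21
a, alpha, w = np.ones(m), 0.5 ** np.arange(m), np.ones(m)

def corr_Y(u, v):
    """Autocorrelation (5.5) of the excitation Y(t)."""
    return 0.5 * np.sum(u * v * a**2 * np.exp(-alpha * (u + v)) * np.cos((u - v) * w))

A = np.array([[0.0, 1.0], [-w0**2, -2.0 * w0 * xi]])
M = np.eye(2) + h * A
P = [np.linalg.matrix_power(M, k) for k in range(n)]   # (I + hA)^k
t = h * np.arange(n)

Lam = np.zeros((2, 2))                                 # covariance matrix (5.7)
for i in range(n):
    for j in range(n):
        EG = np.array([[0.0, 0.0], [0.0, corr_Y(t[i], t[j])]])  # E[G(ti) G(tj)^T]
        Lam += h**2 * P[n - i - 1] @ EG @ P[n - j - 1].T
print(Lam[0, 0])    # V^11, the approximating variance of X(t_n)
```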

Fig. 1. Comparison between the theoretical and numerical values for Example 5.1
The next example shows an application of the random Euler method to a random
Sylvester differential system. The work developed in Section 4 provides an
adequate framework in which to test the reliability of this random numerical
scheme for random differential systems.

Example 5.2 Let us consider the non-homogeneous rectangular random differential
system given by

Ẋ(t) = A(t)X(t) + X(t)B(t) + C(t),  t ≥ 0,
X(0) = X0,                                                           (5.8)

where

A(t) = [[0, 1], [1, 0]],  B(t) = [[1, 1], [0, 0]],  C(t) = [[B(t), 0], [0, (B(t))²]],  X0 = 0,   (5.9)

0 being the null matrix of size 2 × 2 and B(t) inside C(t) denoting a Brownian
motion process. Since E[(B(t))²] = t and E[(B(t))⁴] = 3t², for 0 ≤ t ≤ t′ it
follows that

‖F(X, t) − F(X, t′)‖_{r×s} = max {‖B(t) − B(t′)‖, ‖(B(t))² − (B(t′))²‖}
                           = max {|t − t′|^{1/2}, (−t² + 3(t′)² − 2t′t)^{1/2}},

which tends to zero as t′ → t, so H1 holds, and by Lemma 2.1 one gets

‖F(X, t) − F(Y, t)‖_{r×s} ≤ ‖A(X − Y)‖_{r×s} + ‖(X − Y)B‖_{r×s}
                          ≤ (‖A‖_∞ + ‖B‖_∞) ‖X − Y‖_{r×s};

then problem (5.8) satisfies hypotheses H1 and H2, and thus the random Euler
method is m.s. convergent.

Let us denote by E[Xn] = (E^ij)_{2×2} the 2 × 2 expectation matrix of the Euler
approximations. Figures 2-5 provide the values of this matrix for different
values of the step h. One observes from the results that the numerical values
obtained get closer to each other as the step size h decreases.
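A Monte Carlo implementation of the matrix Euler scheme (4.1) for (5.8)-(5.9)
only requires simulated Brownian paths for the forcing term C(t). A minimal
sketch (assuming NumPy; the number of paths and the step are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)
n_paths, h, n_steps = 20_000, 1.0 / 40.0, 40    # integrate on [0, 1]

Amat = np.array([[0.0, 1.0], [1.0, 0.0]])       # A(t) in (5.9)
Bmat = np.array([[1.0, 1.0], [0.0, 0.0]])       # B(t) in (5.9)

X = np.zeros((n_paths, 2, 2))                   # X0 = 0
Bt = np.zeros(n_paths)                          # Brownian motion, B(0) = 0
for _ in range(n_steps):
    C = np.zeros((n_paths, 2, 2))               # C(t) in (5.9), pathwise
    C[:, 0, 0] = Bt
    C[:, 1, 1] = Bt ** 2
    X = X + h * (Amat @ X + X @ Bmat + C)       # Euler step (4.1) for (5.8)
    Bt = Bt + np.sqrt(h) * rng.standard_normal(n_paths)   # advance the path

print(X.mean(axis=0))   # estimate of the expectation matrix E[X_n]
```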

Fig. 2. Numerical values of the E^11 entry of the expectation matrix for Example 5.2

Fig. 3. Numerical values of the E^12 entry of the expectation matrix for Example 5.2

Fig. 4. Numerical values of the E^21 entry of the expectation matrix for Example 5.2

Fig. 5. Numerical values of the E^22 entry of the expectation matrix for Example 5.2

References

[1] R. Barron, G. Ayala, El método de yuxtaposición de dominios en la solución
numérica de ecuaciones diferenciales estocásticas, Proc. Métodos Numéricos en
Ingeniería y Ciencias Aplicadas (CIMNE), Monterrey, Mexico (2002) 267–276.
[2] C.A. Braumann, Variable effort harvesting models in random environments:
generalization to density-dependent noise intensities, Math. Biosci. 177–178
(2002) 229–245.

[3] J. Chilès and P. Delfiner, Geostatistics: Modelling Spatial Uncertainty
(John Wiley, New York, 1999).

[4] J.C. Cortés, L. Jódar, L. Villafuerte, Mean square numerical solution of random
differential equations: facts and possibilities, Computers Math. Appl. 53(7)
(2007) 1098–1106.

[5] J.C. Cortés, L. Jódar, L. Villafuerte, Numerical solution of random
differential equations: a mean square approach, Math. Comput. Model. 45(7-8)
(2007) 757–765.

[6] J. Dieudonné, Foundations of Modern Analysis (Academic Press, New York,
1960).

[7] L. Jódar, J.C. Cortés, L. Villafuerte, A discrete eigenfunctions method for


numerical solution of random diffusion models, Conf. Differential and Difference
Equations and Applications, Miami, 2005 13 (Hindawi Publ. Corp., Miami,
2006) 457–466.

[8] J.B. Keller, Wave propagation in random media, Proc. Symp. Appl. Math. New
York, 1960 13 (Amer. Math. Soc., Providence, Rhode Island, 1962) 227–246.

[9] J.B. Keller, Stochastic equations and wave propagation in random media, Proc.
Symp. Appl. Math., New York, 1963 16 (Amer. Math. Soc., Providence, Rhode
Island, 1964) 145–170.

[10] P.E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential
Equations (Springer, Berlin, 1992).

[11] G. Golub and C.F. Van Loan, Matrix Computations, (The Johns Hopkins
University Press, Baltimore, MD, 1989).

[12] P. Henrici, Discrete Variable Methods in Ordinary Differential Equations (John


Wiley and Sons, New York, 1962).

[13] T.T. Soong, Random Differential Equations in Science and Engineering
(Academic Press, New York, 1973).

[14] J.L. Strand, Random ordinary differential equations, J. Differential Equations


7 (1973) 538–553.

[15] D. Talay, Expansion of the global error for numerical schemes solving
stochastic differential equations, Stochastic Analysis and Applications 8(4)
(1990) 483–509.
