Euler Random
L. Villafuerte
Facultad de Ingenierı́a, Universidad Autónoma de Chiapas
Calle 4a Ote. Nte. 1428, Tuxtla Gutiérrez, Chiapas, México
Abstract
This paper deals with the construction of numerical solutions of random matrix
initial value problems by means of a random Euler scheme. Conditions for the mean
square convergence of the method are established, avoiding the application of the
mean value theorem. Finally, several illustrative examples are included where the
main statistical properties of the stochastic approximation processes are given.
1 Introduction
Consider the random initial value problem

Ẋ(t) = F(X(t), t),  t ∈ T = [t₀, te],   X(t₀) = X₀,  (1.1)
where X₀ is a second order random variable and the unknown X(t), as well
as the second member F(X(t), t), are second order stochastic processes.
The random initial value problem (1.1) has been treated from the theoretical
point of view by many authors [14,13] and [6]. We may distinguish two
main approaches. One of them treats problem (1.1) as an initial value problem
in an abstract Banach space [14], [6]; the other is the so-called sample approach,
where through the realizations of problem (1.1) one obtains information about
the stochastic process solution of (1.1), [13, Appendix A] and [7].
It is important to remark that, unlike the deterministic case, when dealing with
numerical methods it is not enough to construct discrete approximating stochastic
processes that converge to the exact theoretical solution; it is also necessary
to compute at least the expectation and the variance functions of the
approximating process, in order to have a representative statistical description
of the solution process.
In recent papers [4,5] the authors proposed a random Euler method to solve
problem (1.1) numerically, but its convergence proofs are based on a strong
version of the random mean value theorem whose proof is not correct. Thus, in
this paper we establish the convergence of a random Euler method for problem
(1.1) without using the random mean value theorem.
2 Preliminaries
This section deals with some preliminary notations, results and examples that
will clarify the presentation of the main results of the paper related to the
random Euler method for solving numerically initial value problems associ-
ated to random differential equations.
Let (Ω, F, P) be a probability space. In the following we are interested in
second order real random variables (2-r.v.'s), Y : Ω → R, having a probability
density function f_Y(y) such that

E[Y²] = ∫_{−∞}^{+∞} y² f_Y(y) dy < +∞,

where E[·] denotes the expectation operator. The space of all 2-r.v.'s defined
on (Ω, F, P) and endowed with the norm

‖Y‖ = (E[Y²])^{1/2},  (2.1)
X = (X^{ij})_{r×s} =
⎡ X^{11} · · · X^{1s} ⎤
⎢   ⋮      ⋱     ⋮   ⎥   (2.2)
⎣ X^{r1} · · · X^{rs} ⎦
From the corresponding properties for its components (see [13, p. 88]), if
{X_n}_{n≥0} is a sequence of random matrices in L₂^{r×s} m.s. convergent to X, then

E[X_n] −−→ E[X]  as n → ∞,
where, using the notation introduced in (2.2), E[X] = (E[X^{ij}])_{r×s}. In the
particular case where the 2-m.s.p. X(t) is a vector, say X⃗(t), its expectation
function is the deterministic vector function E[X⃗(t)] = (E[X^{i}(t)])_{r×1}, and note
that by definition one gets E[X⃗^{T}(t)] = (E[X⃗(t)])^{T}, where X⃗^{T}(t) denotes the
transposed vector of X⃗(t). The covariance matrix function of {X⃗(t), t ∈ T}
is defined by

Λ_{X⃗(t)} = E[ (X⃗(t) − E[X⃗(t)]) (X⃗(t) − E[X⃗(t)])^{T} ] = (v_{ij}(t))_{r×r},  (2.6)

where

v_{ij}(t) = E[ (X^{i}(t) − E[X^{i}(t)]) (X^{j}(t) − E[X^{j}(t)]) ]
         = E[X^{i}(t)X^{j}(t)] − E[X^{i}(t)] E[X^{j}(t)],  1 ≤ i, j ≤ r, t ∈ T.  (2.7)

Note that v_{ii}(t), denoted by V[X^{i}(t)] = E[(X^{i}(t))²] − (E[X^{i}(t)])², is the
variance of the r.v. X^{i}(t), 1 ≤ i ≤ r. If E[X⃗(t)] = 0⃗, then Λ_{X⃗(t)} is called the
correlation matrix function of {X⃗(t), t ∈ T}. Also, for the covariance matrix
one gets the following property (see [13, p. 88]):

X⃗_n −−→ X⃗ (m.s.) as n → ∞   ⇒   Λ_{X⃗_n} −−→ Λ_{X⃗} as n → ∞.  (2.8)
The following lemma deals with the norm of the product of a deterministic
matrix function by a random matrix of compatible sizes and it will play an
important role in the following:
Proof. Since the procedure for establishing these two inequalities is analogous,
we only prove the first one. By (2.3) and (2.9) it follows that

‖AX‖_{r×s} = max_{1≤i≤r} Σ_{j=1}^{s} ‖ Σ_{k=1}^{r} a^{ik} X^{kj} ‖₂
           ≤ max_{1≤i≤r} Σ_{j=1}^{s} Σ_{k=1}^{r} |a^{ik}| ‖X^{kj}‖₂
           = max_{1≤i≤r} Σ_{k=1}^{r} |a^{ik}| Σ_{j=1}^{s} ‖X^{kj}‖₂
           ≤ ( max_{1≤i≤r} Σ_{k=1}^{r} |a^{ik}| ) ( max_{1≤k≤r} Σ_{j=1}^{s} ‖X^{kj}‖₂ )
           = ( max_{1≤i≤r} Σ_{k=1}^{r} |a^{ik}| ) ‖X‖_{r×s}
           = ‖A‖_∞ ‖X‖_{r×s}.
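As a quick numerical sanity check of this inequality, the sketch below estimates the entrywise L₂ norms by Monte Carlo (NumPy; the helper names and the random test matrices are our own choices) and compares ‖AX‖_{r×s} against ‖A‖_∞‖X‖_{r×s} on the sampled data:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_norm(samples):
    # Monte Carlo estimate of ||Y||_2 = (E[Y^2])^{1/2}
    return np.sqrt(np.mean(samples ** 2))

def matrix_l2_norm(X):
    # ||X||_{r x s} = max_i sum_j ||X^{ij}||_2 for X of shape (r, s, N)
    return max(sum(l2_norm(X[i, j]) for j in range(X.shape[1]))
               for i in range(X.shape[0]))

r, s, N = 3, 2, 100_000
A = rng.standard_normal((r, r))       # deterministic matrix
X = rng.standard_normal((r, s, N))    # random r x s matrix, N samples per entry

AX = np.einsum('ik,kjn->ijn', A, X)   # (AX)^{ij} = sum_k a^{ik} X^{kj}, samplewise
lhs = matrix_l2_norm(AX)
rhs = np.max(np.sum(np.abs(A), axis=1)) * matrix_l2_norm(X)   # ||A||_inf ||X||
```

Since the sample-based norms are genuine norms on the sample vectors, the inequality holds exactly for the estimates as well, not only in the limit.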
and it is m.s. differentiable at t ∈ T if there exists a 2-m.s.p., denoted by
{Ẋ(t) : t ∈ T}, such that

lim_{τ→0} ‖ (X(t + τ) − X(t))/τ − Ẋ(t) ‖_{r×s} = 0,  t, t + τ ∈ T.  (2.11)
Example 2.2 Let Y be a 2-r.v. and let us consider the 2-s.p. Y(t) = Y · t for
t lying in the interval T. Note that Y(t) is m.s. differentiable at t because for
τ a real number such that t + τ ∈ T, one gets

‖ (Y(t + τ) − Y(t))/τ − Y ‖² = E[ ( (Y·(t + τ) − Y·t)/τ − Y )² ]
                             = E[Y²] ( (t + τ − t)/τ − 1 )² −−→ 0  as τ → 0.
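The computation in example 2.2 can be mirrored sample-wise; a minimal sketch (NumPy; taking Y standard normal is our choice) checks that the mean square difference quotient already coincides with Y up to round-off:

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal(100_000)      # samples of the 2-r.v. Y (our choice)
t, tau = 0.5, 1e-3

diff_quot = (Y * (t + tau) - Y * t) / tau     # (Y(t+tau) - Y(t)) / tau
err = np.sqrt(np.mean((diff_quot - Y) ** 2))  # L2 distance to the candidate derivative
```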
Example 2.4 Let us consider the second order vector stochastic process
F(X, t) = A(t)X + G(t), 0 ≤ t ≤ te, where

X⃗ = X = ⎡ X¹ ⎤ ,  A(t) = A = ⎡   0       1   ⎤ ,  G⃗(t) = G(t) = ⎡   0   ⎤  (2.14)
        ⎣ X² ⎦              ⎣ −ω₀²   −2ω₀ξ ⎦                   ⎣ −Y(t) ⎦

(for convenience we have identified vectorial notation with matrix notation),
being

Y(t) = Σ_{j=1}^{m} a_j t e^{−α_j t} cos(ω_j t + θ_j),  t ≥ 0,  (2.15)

where a_j, α_j, ω_j and ξ are positive real numbers, and the θ_j are pairwise
independent 2-r.v.'s uniformly distributed on [0, 2π]. Note that

(A(t)X + G(t))^{T} = [ X², −ω₀²X¹ − 2ω₀ξX² − Y(t) ],  (2.16)

and

(F(X, t) − F(X, t′))^{T} = [ 0, Y(t′) − Y(t) ].  (2.17)

Now, by considering the relationship

E[cos(ω_j t + θ_j) cos(ω_k t′ + θ_k)] = { 0 if j ≠ k;  (1/2) cos(ω_j(t − t′)) if j = k },  (2.18)

one gets

E[(Y(t′) − Y(t))²] = Σ_{j=1}^{m} (a_j²/2) ( t² e^{−2α_j t} + (t′)² e^{−2α_j t′} )
                   − Σ_{j=1}^{m} a_j² t t′ e^{−α_j(t+t′)} cos(ω_j(t − t′)).  (2.19)

Hence, using the fact that the exponential and cosine deterministic functions
involved in (2.19) are continuous with respect to the variable t, from definition 2.3
it follows that F(X, t) is randomly bounded time uniformly continuous.
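Formula (2.19) is easy to validate by simulation; the following sketch (NumPy; the particular values of m, a_j, α_j and ω_j are our own choices) compares a Monte Carlo estimate of E[(Y(t′) − Y(t))²] with the closed form:

```python
import numpy as np

rng = np.random.default_rng(2)
m, N = 3, 400_000
a = np.array([1.0, 0.7, 0.4])         # a_j > 0 (our choice)
alpha = np.array([0.5, 0.25, 0.125])  # alpha_j > 0
omega = np.array([1.0, 2.0, 3.0])     # omega_j > 0
theta = rng.uniform(0, 2 * np.pi, (N, m))   # independent U[0, 2pi] phases

def Y(t):
    # Y(t) = sum_j a_j t e^{-alpha_j t} cos(omega_j t + theta_j), samplewise
    return np.sum(a * t * np.exp(-alpha * t) * np.cos(omega * t + theta), axis=1)

t, tp = 1.0, 1.5                      # t and t'
mc = np.mean((Y(tp) - Y(t)) ** 2)     # Monte Carlo estimate of E[(Y(t')-Y(t))^2]
exact = (0.5 * np.sum(a**2 * (t**2 * np.exp(-2 * alpha * t)
                              + tp**2 * np.exp(-2 * alpha * tp)))
         - np.sum(a**2 * t * tp * np.exp(-alpha * (t + tp))
                  * np.cos(omega * (t - tp))))    # right-hand side of (2.19)
```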
For the sake of clarity in the presentation, we close this section by enunciating
without proof some results that may be found in [13, chap. 4].
(where the above integral is considered in the Riemann mean square sense)
then

Y(t) = [h(t, u)X(u)]_{u=t₀}^{u=t} − ∫_{t₀}^{t} (∂h(t, u)/∂u) X(u) du.  (2.21)
Theorem 2.6 (Fundamental theorem of the mean square calculus) Let
{X(t), t ∈ T} be a 2-s.p. m.s. differentiable on T = [t₀, t], such that Ẋ(t) is
m.s. Riemann integrable on T. Then one gets

∫_{t₀}^{t} Ẋ(u) du = X(t) − X(t₀).  (2.22)
Example 2.7 In the framework of example 2.2, note that taking the m.s.
differentiable process X(t) = Y · t and applying the formula of integration by
parts with h(t, u) ≡ 1, one gets

∫_{t₀}^{t} Y du = Y (t − t₀).
Let us consider the random initial value problem (1.1) under the following
hypotheses on F : S × T → L2 , with S ⊂ L2
one gets the m.s. continuity of F (X, t) with respect to both variables.
Let us introduce the random Euler method for problem (1.1), defined by

X_{n+1} = X_n + h F(X_n, t_n),  n ≥ 0,   X₀ = X(t₀),  (3.2)
where X_n, F(X_n, t_n) are 2-r.v.'s, h = t_n − t_{n−1}, with t_n = t₀ + nh, for n =
0, 1, 2, …. We wish to prove that, under hypotheses H1 and H2, the Euler
method (3.2) is m.s. convergent in the fixed station sense, i.e., for fixed t ∈ [t₀, te]
and taking n so that t = t_n = t₀ + nh, the m.s. error
Note also that, as F(X_n, t_n) ∈ L₂, the first term appearing on the right-hand
side of (3.5) can be written as follows (see example 2.7):

h F(X_n, t_n) = F(X_n, t_n)(t_{n+1} − t_n) = ∫_{t_n}^{t_{n+1}} F(X_n, t_n) du.  (3.6)

By (3.5), (3.6) and using that Ẋ(u) = F(X(u), u), one gets

e_{n+1} = e_n + ∫_{t_n}^{t_{n+1}} [F(X_n, t_n) − Ẋ(u)] du
        = e_n + ∫_{t_n}^{t_{n+1}} [F(X_n, t_n) − F(X(u), u)] du.  (3.7)

is m.s. continuous for u ∈ [t_n, t_{n+1}]. Taking norms in (3.7) and using
proposition 2.8, it follows that

‖e_{n+1}‖ ≤ ‖e_n‖ + ∫_{t_n}^{t_{n+1}} ‖F(X_n, t_n) − F(X(u), u)‖ du.  (3.9)
For the first two terms, using hypothesis H2, one gets the following bounds

where

M_Ẋ = sup { ‖Ẋ(v)‖ : t₀ ≤ v ≤ te }.  (3.14)

From (3.12)-(3.14) one gets

and by (3.11), (3.15) and (3.17), it follows that (3.10) can be written in the
form

‖e_n‖ ≤ e^{nhk(t_n)} ‖e₀‖ + ((e^{nhk(t_n)} − 1)/k(t_n)) [ω(S_X, h) + h M_Ẋ k(t_n)],

and as nh = t − t₀ and ‖e₀‖ = 0, the last inequality can be written in the form

‖e_n‖ ≤ ((e^{(t−t₀)k(t)} − 1)/k(t)) [ω(S_X, h) + h M_Ẋ k(t)].  (3.19)
From (3.19) it follows that {e_n} is m.s. convergent to zero; summarizing,
the following result has been established:
Theorem 3.1 With the previous notation, under the hypotheses H1 and H2,
the random Euler method (3.2) is m.s. convergent and the discretization error
en defined by (3.3) satisfies the inequality (3.19) for t = t0 + nh, h > 0,
t0 ≤ t ≤ te .
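Theorem 3.1 can also be observed empirically; the sketch below (same toy problem Ẋ(t) = −X(t), our own choice) estimates the m.s. error at t = 1 for successively halved steps and checks that it decays roughly linearly in h, as the bound (3.19) suggests for smooth F:

```python
import numpy as np

rng = np.random.default_rng(4)
X0 = rng.uniform(1.0, 2.0, 50_000)    # samples of X0 ~ U[1, 2] (our choice)

def ms_error(h):
    # m.s. error ||X_n - X(t_n)|| of the random Euler scheme at t_n = 1
    n = round(1.0 / h)
    X = X0.copy()
    for _ in range(n):
        X = X + h * (-X)              # Euler step for F(X, t) = -X
    exact = X0 * np.exp(-1.0)         # exact m.s. solution at t = 1
    return np.sqrt(np.mean((X - exact) ** 2))

errors = [ms_error(h) for h in (0.1, 0.05, 0.025)]
```

Each halving of h roughly halves the m.s. error, consistent with first-order convergence.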
4 The matrix case
Let us consider the random initial value problem (1.1) where now X(t) and
F(X, t), with F : S × T → L₂^{r×s}, S ⊂ L₂^{r×s}, are matrix stochastic processes
of size r × s and X₀ is a random matrix of size r × s. The aim of this section
is to extend the m.s. convergence of the random Euler method to the matrix
framework. Let us introduce the notation
X(t) = (X^{ij}(t))_{r×s},  F(X, t) = (F^{ij}(X, t))_{r×s},

and the corresponding random matrix Euler method associated to (1.1) takes
the form

X_{n+1} = X_n + h F(X_n, t_n),  n ≥ 0,   X₀ = X(t₀),  (4.1)
where X_n^{ij} is the (i, j)-th entry of the random matrix X_n and F^{ij}(X_n, t_n) is the
(i, j)-th entry of F(X_n, t_n). The discretization error e_n has the corresponding
(i, j)-th entry

e_n^{ij} = X_n^{ij} − X^{ij}(t_n).  (4.2)
We assume the same hypotheses H1 and H2, with the only difference that the
norm in L₂^{r×s} is the norm ‖·‖_{r×s} introduced in section 2, and
F : S × T → L₂^{r×s}, S ⊂ L₂^{r×s}, satisfies
If X(t) is the theoretical solution of matrix problem (1.1), then each of its
entries X^{ij}(t) is m.s. differentiable, and applying the m.s. fundamental
theorem of calculus in the interval [t_n, t_{n+1}] ⊂ [t₀, te] and considering
the (i, j)-th entry of Ẋ(u) = F(X(u), u), it follows that

X^{ij}(t_{n+1}) − X^{ij}(t_n) = ∫_{t_n}^{t_{n+1}} Ẋ^{ij}(u) du = ∫_{t_n}^{t_{n+1}} F^{ij}(X(u), u) du.  (4.3)
Taking into account the (i, j)-th entry of (4.1) and example 2.7 with Y =
F^{ij}(X_n, t_n) ∈ L₂, one gets

X^{ij}_{n+1} − X^{ij}_n = h F^{ij}(X_n, t_n) = ∫_{t_n}^{t_{n+1}} F^{ij}(X_n, t_n) du.  (4.4)
From (4.2)-(4.4) it follows that

e^{ij}_{n+1} = e^{ij}_n + ∫_{t_n}^{t_{n+1}} [F^{ij}(X_n, t_n) − F^{ij}(X(u), u)] du.  (4.5)
In order to prove the m.s. convergence of the error random matrix sequence
e_n = (e^{ij}_n) to the zero matrix in L₂^{r×s}, as F(X, t) is m.s. continuous, by
proposition 2.8 it follows that

max_{1≤i≤r} Σ_{j=1}^{s} ‖ ∫_{t_n}^{t_{n+1}} [F^{ij}(X_n, t_n) − F^{ij}(X(u), u)] du ‖
  ≤ max_{1≤i≤r} Σ_{j=1}^{s} ∫_{t_n}^{t_{n+1}} ‖ F^{ij}(X_n, t_n) − F^{ij}(X(u), u) ‖ du   (4.6)
  ≤ ∫_{t_n}^{t_{n+1}} max_{1≤i≤r} Σ_{j=1}^{s} ‖ F^{ij}(X_n, t_n) − F^{ij}(X(u), u) ‖ du.
Note that using the Lipschitz condition given by H2 and the definition of
‖·‖_{r×s}, we have

‖F^{ij}(X_n, t_n) − F^{ij}(X(t_n), t_n)‖ ≤ ‖F(X_n, t_n) − F(X(t_n), t_n)‖_{r×s}
and by decomposing the difference F^{ij}(X(t_n), t_n) − F^{ij}(X(u), t_n) in r × s
terms and applying the Lipschitz condition we have

‖F^{ij}(X(t_n), t_n) − F^{ij}(X(u), t_n)‖ ≤ r s k(t_n) max_{1≤i≤r} Σ_{j=1}^{s} ‖X^{ij}(u) − X^{ij}(t_n)‖,  (4.10)

because

‖F^{ij}(X(t_n), t_n) − F^{ij}(X(u), t_n)‖
 ≤ ‖F(X(t_n), t_n) − F(X(u), t_n)‖
 ≤ ‖F(X^{11}(t_n), X^{12}(t_n), …, X^{1s}(t_n); …; X^{r1}(t_n), X^{r2}(t_n), …, X^{rs}(t_n), t_n)
    − F(X^{11}(u), X^{12}(t_n), …, X^{1s}(t_n); …; X^{r1}(t_n), X^{r2}(t_n), …, X^{rs}(t_n), t_n)‖
 + ‖F(X^{11}(u), X^{12}(t_n), X^{13}(t_n), …, X^{1s}(t_n); …; X^{r1}(t_n), …, X^{rs}(t_n), t_n)
    − F(X^{11}(u), X^{12}(u), X^{13}(t_n), …, X^{1s}(t_n); …; X^{r1}(t_n), …, X^{rs}(t_n), t_n)‖
 + ··· +
 + ‖F(X^{11}(u), X^{12}(u), …, X^{1s}(u); …; X^{r1}(u), …, X^{r,s−1}(u), X^{rs}(t_n), t_n)
    − F(X^{11}(u), X^{12}(u), …, X^{1s}(u); …; X^{r1}(u), …, X^{r,s−1}(u), X^{rs}(u), t_n)‖
 ≤ r s k(t_n) ‖X(u) − X(t_n)‖_{r×s}
 = r s k(t_n) max_{1≤i≤r} Σ_{j=1}^{s} ‖X^{ij}(u) − X^{ij}(t_n)‖.
As each entry X^{ij}(t) of the solution X(t) of matrix problem (1.1) is m.s.
differentiable, in an analogous way to (3.13)-(3.14) it follows that

‖X^{ij}(t_n) − X^{ij}(u)‖ = ‖ ∫_{t_n}^{u} Ẋ^{ij}(v) dv ‖ ≤ ∫_{t_n}^{u} ‖Ẋ^{ij}(v)‖ dv ≤ h M_Ẋ,  (4.11)

where X(t) is the theoretical solution of matrix problem (1.1) and we have
applied that

‖Ẋ^{ij}(v)‖ ≤ ‖Ẋ(v)‖_{r×s} ≤ M_Ẋ = sup { ‖Ẋ(t)‖_{r×s} : t₀ ≤ t ≤ te }.  (4.12)
max_{1≤i≤r} Σ_{j=1}^{s} ‖F^{ij}(X_n, t_n) − F^{ij}(X(u), u)‖
  ≤ s k(t_n) [ ‖e_n‖_{r×s} + s² r h M_Ẋ ] + s ω(S_X, h).  (4.15)
Taking norms in (4.5) and using (4.6), (4.15) one gets

‖e^{ij}_{n+1}‖ ≤ ‖e^{ij}_n‖ + ‖ ∫_{t_n}^{t_{n+1}} [F^{ij}(X_n, t_n) − F^{ij}(X(u), u)] du ‖
             ≤ ‖e^{ij}_n‖ + ∫_{t_n}^{t_{n+1}} ‖F^{ij}(X_n, t_n) − F^{ij}(X(u), u)‖ du
             ≤ ‖e^{ij}_n‖ + h s k(t_n) ‖e_n‖_{r×s} + r h² s³ k(t_n) M_Ẋ + h s ω(S_X, h).
Hence,

‖e_{n+1}‖_{r×s} ≤ ‖e_n‖_{r×s} + h s² k(t_n) ‖e_n‖_{r×s} + r h² s⁴ k(t_n) M_Ẋ + h s² ω(S_X, h)
             = [1 + h s² k(t_n)] ‖e_n‖_{r×s} + h [ s² ω(S_X, h) + r h s⁴ k(t_n) M_Ẋ ].
From this inequality and lemma 1.2 of [12, p. 28], it follows that

‖e_{n+1}‖_{r×s} ≤ e^{nhs²k(t_n)} ‖e₀‖_{r×s} + ((e^{nhs²k(t_n)} − 1)/k(t_n)) [ k(t_n) r s⁴ h M_Ẋ + s² ω(S_X, h) ].

As e₀ = 0 and nh = t_n − t₀ = t − t₀, the last expression can be written in the
form

‖e_{n+1}‖_{r×s} ≤ ((e^{(t−t₀)s²k(t)} − 1)/k(t)) [ k(t) r s⁴ h M_Ẋ + s² ω(S_X, h) ].  (4.16)
Theorem 4.1 With the notation introduced in this section, under the hy-
potheses H1 and H2, the random Euler method (4.1) is m.s. convergent and
the discretization error en defined by (4.2) satisfies the inequality (4.16) for
t = t0 + nh, h > 0, t0 ≤ t ≤ te .
5 Examples
Example 5.1 Consider the problem of determining the effect on an earthbound
structure of an earthquake-type disturbance. Let us assume that the structure is
at rest at t = 0, and let X(t) > 0, t ≥ 0, be the relative horizontal displacement
of the roof with respect to the ground. Then, based upon an idealized linear
model, the relative displacement X(t) is governed by

Ẍ(t) + 2ξω₀Ẋ(t) + ω₀²X(t) = −Y(t),  t ≥ 0,   X(0) = 0,  Ẋ(0) = 0,  (5.1)

where Y(t) is the 2-s.p. given by (2.15). The m.s. solution of problem (5.1)
has the form (see [13, p. 165])

X(t) = − ∫₀ᵗ h(t − z) Y(z) dz,  (5.2)

where

h(t) = (1/ω̂₀) e^{−ξω₀t} sin(ω̂₀ t),  t ≥ 0,   ω̂₀ = ω₀ √(1 − ξ²).  (5.3)
The expectation and the autocorrelation functions of the 2-s.p. Y(t) are given
by

E[Y(t)] = 0,  t ≥ 0,  (5.4)

and

Γ_Y(u, v) = E[Y(u)Y(v)] = (1/2) Σ_{j=1}^{m} u v a_j² e^{−α_j(u+v)} cos((u − v)ω_j),  u, v ≥ 0.  (5.5)

Hence

E[X(t)] = 0,  t ≥ 0,

and

Var[X(t)] = E[X(t)X(t)] = ∫₀ᵗ ∫₀ᵗ h(t − u) h(t − v) Γ_Y(u, v) du dv,  t ≥ 0.  (5.6)
In order to apply the random Euler scheme to the problem, we shall convert the
second-order differential equation (5.1) into a system of first-order differential
equations. Let X¹(t) = X(t), X²(t) = Ẋ(t) and (X(t))^T = [X¹(t), X²(t)]; then
the vector-matrix form of equation (5.1) is

Ẋ(t) = A(t)X(t) + G(t),  t ≥ 0,   X(0) = 0,

where A(t) and G(t) are given by (2.14) and (X(0))^T = 0^T = [0, 0]. As
X₀^T = [0, 0], the expression of the random Euler method in this case takes the
form
X_n = (I + hA)ⁿ X₀ + h Σ_{i=0}^{n−1} (I + hA)^{n−i−1} G(t_i) = h Σ_{i=0}^{n−1} (I + hA)^{n−i−1} G(t_i),
where I denotes the identity matrix of size 2. It is important to point out that
hypotheses H1 and H2 hold, so the m.s. convergence of the random Euler method is
guaranteed. In fact, H1 holds by example 2.4, and by lemma 2.1 one gets the
Lipschitz condition

where

k = max { 1, ω₀² + 2ω₀ξ },   ∫_{t₀}^{te} k dt < +∞.
Note that as E[Y(t_i)] = 0, one gets E[X_n] = 0 and E[X_n](E[X_n])^T = 0, so
the covariance matrix of the random vector X_n is given by

Λ_{X_n} = E[X_n(X_n)^T]
       = h² Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} (I + hA)^{n−i−1} E[G(t_i)(G(t_j))^T] ((I + hA)^{n−j−1})^T,  (5.7)

being

E[G(t_i)(G(t_j))^T] = ⎡ 0          0          ⎤ ,
                      ⎣ 0   E[Y(t_i)Y(t_j)]  ⎦
where E[Y(t_i)Y(t_j)] is defined by (5.5). Note that if Λ_{X_n} = (V^{ij})_{2×2} denotes
the square matrix of size 2 given by (5.7), then V^{11} is the approximating
variance of the 2-s.p. solution of equation (5.1) obtained from the scalar
random Euler method. Figure 1 shows V^{11} = Var[X_n] for h = 1/20, h = 1/40,
h = 1/80 together with the theoretical variance Var[X(t_n)] given by (5.6), taking
in (5.1) ξ = 0.05 and, for Y(t) given by (2.15), ω_j = 1, a_j = 1, α_j = (1/2)^j,
0 ≤ j ≤ 20. It illustrates that the numerical and the exact values get closer as
h decreases, in accordance with property (2.8).
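The construction above can be reproduced in a few lines; the sketch below (NumPy, with fewer terms m = 5 than in the figure, our own choice) runs the vector Euler recursion on sampled phases and checks its sample variance of X¹ against the V¹¹ entry obtained from the deterministic formula (5.7):

```python
import numpy as np

rng = np.random.default_rng(6)
m, N = 5, 40_000
omega0, xi = 1.0, 0.05
a = np.ones(m)                        # a_j = 1
alpha = 0.5 ** np.arange(m)           # alpha_j = (1/2)^j, j = 0..m-1
omega = np.ones(m)                    # omega_j = 1
theta = rng.uniform(0, 2 * np.pi, (N, m))   # independent U[0, 2pi] phases

h, n = 1 / 40, 120                    # step h and number of steps, t_n = 3.0
A = np.array([[0.0, 1.0], [-omega0**2, -2 * omega0 * xi]])
B = np.eye(2) + h * A                 # one-step matrix I + hA

def Y(t):
    # Y(t) from (2.15), one value per sampled phase vector
    return np.sum(a * t * np.exp(-alpha * t) * np.cos(omega * t + theta), axis=1)

# Monte Carlo: vector Euler recursion X_{k+1} = (I + hA) X_k + h G(t_k)
X = np.zeros((2, N))
for i in range(n):
    G = np.vstack([np.zeros(N), -Y(i * h)])
    X = B @ X + h * G
v11_mc = X[0].var()                   # sample variance of X^1 at t_n

# Deterministic: V^{11} entry of the covariance matrix (5.7)
def gamma_Y(u, v):
    # Gamma_Y(u, v) from (5.5)
    return 0.5 * np.sum(a**2 * u * v * np.exp(-alpha * (u + v)) * np.cos((u - v) * omega))

P = [np.linalg.matrix_power(B, n - i - 1) for i in range(n)]
Lam = sum(h * h * gamma_Y(i * h, j * h) * np.outer(P[i][:, 1], P[j][:, 1])
          for i in range(n) for j in range(n))
v11 = Lam[0, 0]
```

Since both quantities come from the same discrete recursion, they agree up to Monte Carlo sampling error only.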
Fig. 1. Comparison between the theoretical and numerical values for example 5.1
The next example shows an application of the random Euler method to the
Sylvester random differential system. The work developed in the previous sections
provides us with an adequate framework to test the reliability of this random
numerical scheme for random differential systems.
Ẋ(t) = A(t)X(t) + X(t)B(t) + C(t),  t ≥ 0,   X(0) = X₀,  (5.8)

where

A(t) = ⎡ 0 1 ⎤ ,  B(t) = ⎡ 1 1 ⎤ ,  C(t) = ⎡ B(t)     0     ⎤ ,  X₀ = 0.  (5.9)
       ⎣ 1 0 ⎦          ⎣ 0 0 ⎦          ⎣  0    (B(t))²  ⎦
Since

‖F(X, t) − F(X, t′)‖_{r×s} = max { ‖B(t) − B(t′)‖, ‖(B(t))² − (B(t′))²‖ }
                           = max { |t − t′|^{1/2}, (−t² + 3(t′)² − 2t′t)^{1/2} },

problem (5.8) satisfies hypotheses H1 and H2; thus the random Euler
method is m.s. convergent.
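A pathwise sketch of the matrix Euler scheme (4.1) on system (5.8)-(5.9) follows; we take the scalar process B(t) appearing in C(t) to be a standard Wiener process, an assumption consistent with ‖B(t) − B(t′)‖ = |t − t′|^{1/2} used above, and compare the sampled mean of X_n with the Euler recursion satisfied by E[X(t)]:

```python
import numpy as np

rng = np.random.default_rng(7)
N, h, n = 40_000, 1 / 100, 100        # N paths, step h, t_n = 1.0

A = np.array([[0.0, 1.0], [1.0, 0.0]])
Bmat = np.array([[1.0, 1.0], [0.0, 0.0]])

# Monte Carlo: one matrix Euler step per simulated path of B(t)
X = np.zeros((N, 2, 2))               # X_0 = 0 on every path
W = np.zeros(N)                       # B(t_0) = 0 (Wiener paths, our assumption)
for i in range(n):
    C = np.zeros((N, 2, 2))
    C[:, 0, 0] = W                    # C(t_i) = [[B(t_i), 0], [0, B(t_i)^2]]
    C[:, 1, 1] = W**2
    X = X + h * (A @ X + X @ Bmat + C)            # X_{k+1} = X_k + h F(X_k, t_k)
    W = W + np.sqrt(h) * rng.standard_normal(N)   # advance B to t_{i+1}
EX = X.mean(axis=0)                   # sampled estimate of E[X(t_n)]

# Deterministic Euler recursion for the mean, using E[C(t)] = [[0, 0], [0, t]]
M = np.zeros((2, 2))
for i in range(n):
    EC = np.array([[0.0, 0.0], [0.0, i * h]])
    M = M + h * (A @ M + M @ Bmat + EC)
```

Because the mean of the stochastic recursion satisfies exactly the deterministic one, the gap between EX and M is pure sampling noise.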
Fig. 2. Numerical values of E 11 entry of the matrix expectation for example 5.2
Fig. 3. Numerical values of E 12 entry of the matrix expectation for example 5.2
Fig. 4. Numerical values of E 21 entry of the matrix expectation for example 5.2
Fig. 5. Numerical values of E 22 entry of the matrix expectation for example 5.2
References
[2] C.A. Braumann, Variable effort harvesting models in random environments:
generalization to density-dependent noise intensities, Math. Biosci. 177-178
(2002) 229-245.
[4] J.C. Cortés, L. Jódar, L. Villafuerte, Mean square numerical solution of random
differential equations: facts and possibilities, Computers Math. Appl. 53(7)
(2007) 1098–1106.
[8] J.B. Keller, Wave propagation in random media, Proc. Symp. Appl. Math. New
York, 1960 13 (Amer. Math. Soc., Providence, Rhode Island, 1962) 227–246.
[9] J.B. Keller, Stochastic equations and wave propagation in random media, Proc.
Symp. Appl. Math., New York, 1963 16 (Amer. Math. Soc., Providence, Rhode
Island, 1964) 145–170.
[11] G. Golub and C.F. Van Loan, Matrix Computations, (The Johns Hopkins
University Press, Baltimore, MD, 1989).
[15] D. Talay, Expansion of the global error for numerical schemes solving
stochastic differential equations, Stochastic Analysis and Applications 8(4)
(1990) 483-509.