Journal of Computational and Applied Mathematics 388 (2021) 113297


A discontinuous Galerkin method for systems of stochastic differential equations with applications to population biology, finance, and physics

Mahboub Baccouch^{a,∗}, Helmi Temimi^{b}, Mohamed Ben-Romdhane^{b}

^a Department of Mathematics, University of Nebraska at Omaha, Omaha, NE 68182, United States of America
^b Department of Mathematics and Natural Sciences, Gulf University for Science and Technology, Kuwait

Article history: Received 10 February 2019; Received in revised form 17 August 2020

MSC: 65C20; 65C30; 65L20; 65L60; 60H10; 60H20; 60H35

Keywords: Systems of stochastic differential equations; Discontinuous Galerkin method; Wong–Zakai approximation; m-dimensional Brownian motion; Mean-square convergence

Abstract: In this paper, we propose a discontinuous Galerkin (DG) method for systems of stochastic differential equations (SDEs) driven by m-dimensional Brownian motion. We first construct a new approximate system of SDEs on each element whose solution converges to the solution of the original system. The new system is then discretized using the standard DG method for deterministic ordinary differential equations (ODEs). For the case of additive noise, we prove that the proposed scheme is convergent in the mean-square sense. Our numerical experiments suggest that our results hold true for the case of multiplicative noise as well. Several linear and nonlinear test problems are presented to show the accuracy and effectiveness of the proposed method. In particular, the proposed scheme is illustrated by considering different examples arising in population biology, physics, and mathematical finance.

© 2020 Elsevier B.V. All rights reserved.

1. Introduction

In this paper, we propose a stochastic discontinuous Galerkin (SDG) method for solving the following system of
stochastic differential equations (SDEs)

dX(t) = a(t, X(t)) dt + b(t, X(t)) dW(t), t ∈ [0, T], X(0) = X_0, (1.1)


where X = [X_1, X_2, …, X_d]^t : [0, T] → R^d is the unknown continuous stochastic process, a = [a_1, a_2, …, a_d]^t : [0, T] × R^d → R^d is the drift coefficient, b : [0, T] × R^d → R^{d×m} is the diffusion coefficient with entries b_ij, and W = [W_1, W_2, …, W_m]^t : [0, T] → R^m is an m-dimensional standard Brownian motion (also called a Wiener process). Here, W_j, j = 1, 2, …, m, are independent scalar Wiener processes defined on the complete probability space (Ω, F, P) with a filtration {F_t : 0 ≤ t ≤ T} ⊂ F satisfying the usual conditions (that is, it is increasing and right-continuous while F_0 contains all P-negligible sets in F). Each Wiener process W_j is a Gaussian process with the property that E[W_j(t)] = 0 and E[W_j(t)W_j(s)] = min{t, s}, where E[X] denotes the expected value of X. We note that the Wiener increments W_j(t) − W_j(s)

are independent Gaussian random variables with mean 0 and variance |t − s|. The initial data X_0 = [X_{0,1}, X_{0,2}, …, X_{0,d}]^t ∈ R^d is assumed to be independent of the Wiener process W and to satisfy E[|X_0|²] < ∞.

∗ Corresponding author. E-mail address: mbaccouch@unomaha.edu (M. Baccouch).
https://doi.org/10.1016/j.cam.2020.113297
Eq. (1.1) can be put in componentwise form as

dX_i(t) = a_i(t, X(t)) dt + ∑_{j=1}^{m} b_{ij}(t, X(t)) dW_j(t), t ∈ [0, T], X_i(0) = X_{0,i}, i = 1, 2, …, d. (1.2)

The SDE (1.2) is, in fact, only a symbolic representation of the Itô stochastic integral equation

X_i(t) = X_{0,i} + ∫_0^t a_i(s, X(s)) ds + ∑_{j=1}^{m} ∫_0^t b_{ij}(s, X(s)) dW_j(s), t ∈ [0, T], i = 1, 2, …, d, (1.3)

where the second integral is an Itô stochastic integral with respect to the Wiener process W_j(t). It cannot be defined pathwise as a deterministic Riemann–Stieltjes integral because the sample paths of the Wiener process are not differentiable on any finite interval.
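The Itô integral in (1.3) is the mean-square limit of left-endpoint Riemann sums, and the simplest scheme built directly on that definition is the classical Euler–Maruyama method mentioned later in this introduction. As a point of reference for the SDG method of this paper (not the proposed method itself), here is a minimal sketch for a system of the form (1.1); the drift and diffusion coefficients in the demo are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def euler_maruyama(a, b, x0, T, N, m, rng):
    """Euler-Maruyama for dX = a(t,X) dt + b(t,X) dW, with X in R^d, W in R^m.

    a : (t, x) -> (d,) drift vector
    b : (t, x) -> (d, m) diffusion matrix
    The left-endpoint evaluation of a and b mirrors the Ito sums behind (1.3).
    """
    h = T / N
    x = np.array(x0, dtype=float)
    for n in range(N):
        t = n * h
        dW = np.sqrt(h) * rng.standard_normal(m)  # increments ~ N(0, h)
        x = x + h * a(t, x) + b(t, x) @ dW
    return x

# Illustrative 2-d system with additive noise (hypothetical coefficients):
drift = lambda t, x: np.array([-x[0], -2.0 * x[1]])
diff = lambda t, x: 0.1 * np.eye(2)
rng = np.random.default_rng(0)
x_T = euler_maruyama(drift, diff, [1.0, 1.0], T=1.0, N=1000, m=2, rng=rng)
```

With b ≡ 0 the loop reduces to the explicit Euler method for the drift ODE, which gives a convenient deterministic sanity check.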
In our analysis, we always assume that a_i and b_ij are measurable functions and satisfy both Lipschitz and linear growth bound conditions in X, i.e., for all t ∈ [0, T] and all X, Y ∈ R^d,

|a_i(t, X) − a_i(t, Y)| + |b_ij(t, X) − b_ij(t, Y)| ≤ K ∥X − Y∥, i = 1, 2, …, d, j = 1, 2, …, m,

and for all t ≥ 0 and X ∈ R^d there is a constant C > 0 such that

|a_i(t, X)| + |b_ij(t, X)| ≤ C (1 + ∥X∥), i = 1, 2, …, d, j = 1, 2, …, m. (1.4)

Here, ∥X∥ = (∑_{i=1}^{d} X_i²)^{1/2} is the Euclidean norm of X. Under the above assumptions, (1.2) has a unique F_t-adapted R^d-valued solution X with E[ sup_{t∈[0,T]} |X_i(t)|² ] < ∞, i = 1, 2, …, d; see [1].
SDEs are powerful tools for modeling real-world problems with uncertainties. They arise in many areas including finance, economics, chemistry, biology, population dynamics and genetics, physics, fluid flows in random media, and neuroscience; see e.g., [1–3] and the references therein. Unlike the deterministic case, SDEs are difficult to solve numerically since they typically involve a white noise, which is almost everywhere discontinuous [2]. Furthermore, developing high-order numerical methods for SDEs is very difficult. For instance, it is well known that most schemes for deterministic equations do not extend naturally to handle SDEs; see e.g., [1].
Many numerical methods have been constructed for solving various types of SDEs with different properties; for example, see [1–11] and the references therein. These methods include the Euler–Maruyama method, the Milstein method, and Taylor methods. We also refer the reader to [12] for a survey of some of the numerical methods for SDEs together with a more extensive list of references. However, to the best of our knowledge, the discontinuous Galerkin (DG) finite element method has not been analyzed for systems of SDEs. The DG method was first presented in 1973 by Reed and Hill [13] for solving the neutron transport equation. Recently, DG methods have become very attractive for solving differential equations, mainly because they are high-order accurate, nonlinearly stable, highly parallelizable, able to handle complicated geometries and side conditions, and able to capture discontinuities without spurious oscillations. Also, the DG method can easily handle adaptivity strategies, since h-refinement (mesh refinement and coarsening) and p-refinement (method order variation) can be achieved without the continuity restrictions typical of conforming FEMs. Moreover, the degree of the approximating polynomial p can be varied throughout the mesh. Adaptivity is of particular importance in hyperbolic problems given the complexity of the structure of the discontinuities.
In [5], we developed a DG method for solving single SDEs driven by one-dimensional Wiener processes. We first constructed an approximate deterministic ODE with a random coefficient on each element using the well-known Wong–Zakai approximation theorem. Since the resulting ODE converges to the solution of the corresponding Stratonovich SDE, we applied a transformation to the drift term to obtain a deterministic ODE which converges to the solution of the original SDE. The order of convergence is still an open problem for the general nonlinear SDE. The corrected equation is then discretized using the standard DG method for deterministic ODEs. We proved that the proposed stochastic DG (SDG) method is equivalent to an implicit stochastic Runge–Kutta method. Then, we studied the numerical stability of the SDG scheme applied to linear SDEs with an additive noise term. The method is shown to be numerically stable in the mean sense and also A-stable. Moreover, the method is proved to be convergent in the mean-square sense. Several linear and nonlinear test problems were presented to show the accuracy and effectiveness of the method. In that work, we essentially restricted ourselves to one-dimensional processes: we solved first-order SDEs of the form dX(t) = a(t, X(t)) dt + b(t, X(t)) dW(t), t ∈ [0, T], X(0) = X_0, driven by one-dimensional Brownian motion. In this work, we extend our previous work [5] to systems of multi-dimensional SDEs of the form (1.1) driven by m-dimensional Brownian motion. Systems of stochastic differential equations are common in applications and have been treated in various books; see e.g., [1,2,14–16]. Furthermore, higher-order stochastic differential equations can be reduced to first-order systems of SDEs.
The purpose of this paper is to present a stochastic DG (SDG) method for systems of SDEs of the form (1.1). We first construct a new approximate system of SDEs with random coefficients on each element using the well-known Wong–Zakai

approximation theorem. It turns out that the solution of the resulting system of SDEs converges to the solution of the
corresponding Stratonovich SDE. For this, we apply a transformation to the drift term to obtain a system of SDEs that
converges to the solution of the original system of SDEs. The corrected system is then discretized using the standard DG
method for deterministic equations. When the noise is additive (i.e., the diffusion coefficient b is independent of X), we
prove that the proposed SDG method is convergent in the mean-square sense. We observed similar results for the case
of multiplicative noise (i.e., the diffusion coefficient b depends on X). Several linear and nonlinear test problems driven
by additive and multiplicative noises are presented to show the accuracy and effectiveness of the proposed method.
In particular, the scheme is illustrated by considering different examples arising in population biology, physics, and
mathematical finance. We would like to point out that the present DG method has several advantages over standard numerical methods due to the following nice properties: (i) the DG method can be easily designed for any order of accuracy (the order of accuracy can be locally determined in each cell, thus allowing for efficient p-adaptivity), (ii) it can be used on arbitrary triangulations, even those with hanging nodes, thus allowing for efficient h-adaptivity, (iii) it provides optimal convergence properties for the solution, (iv) it is extremely local in data communications (the evolution of the solution in each cell needs to communicate only with the immediate neighbors, regardless of the order of accuracy, thus allowing for efficient parallel implementations), and (v) it achieves superconvergence properties, which play a key role in constructing asymptotically exact a posteriori error estimators.
This paper is organized as follows: In Section 2, we describe the proposed SDG method for solving systems of SDEs
of the form (1.2). In Section 3, we perform the global error analysis of the proposed SDG scheme. Applications to
population biology, finance, and physics are considered in Section 4. In Section 5, we present several numerical examples
to demonstrate the convergence and effectiveness of the proposed method. We conclude and discuss our results in
Section 6.

2. The proposed scheme

In this section, we develop a stochastic DG (SDG) scheme for solving (1.2). Let 0 = t_0 < t_1 < ⋯ < t_N = T be a partition of the interval [0, T]. The length of I_n = [t_n, t_{n+1}], n = 0, …, N − 1, is denoted by h_n = t_{n+1} − t_n. We let h = max_{n=0,…,N−1} (t_{n+1} − t_n) be the length of the largest interval. Next, we approximate the Wiener process W_j(t), t ∈ I_n, by the continuous linear stochastic process that interpolates W_j(t) at t_n and t_{n+1}:

W_{n,j}(t) = W_j(t_n) + [(W_j(t_{n+1}) − W_j(t_n))/(t_{n+1} − t_n)] (t − t_n), t ∈ I_n. (2.1)

Thus, an approximation of the Wiener process W_j(t), t ∈ [0, T], is given by the following piecewise linear random process:

Ŵ_j(t) = ∑_{n=0}^{N−1} χ_{I_n}(t) W_{n,j}(t), t ∈ [0, T], (2.2)

where χ_{I_n}(t) = 1 if t ∈ I_n and χ_{I_n}(t) = 0 otherwise. We would like to mention that the continuous process Ŵ_j(t) is of bounded variation and has a piecewise continuous derivative on [0, T]. These properties are not satisfied by the Wiener process W_j(t).
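The construction (2.1)–(2.2) is straightforward to implement once the Brownian path has been sampled at the mesh points; Ŵ_j is then just linear interpolation of those samples, and its derivative on I_n is the constant ∆W_{n,j}/h_n. A minimal sketch (the helper names are ours):

```python
import numpy as np

def sample_brownian(T, N, rng):
    """Sample W(t_n), n = 0..N, on the uniform mesh t_n = nT/N (W(0) = 0)."""
    h = T / N
    dW = np.sqrt(h) * rng.standard_normal(N)  # independent N(0, h) increments
    return np.concatenate(([0.0], np.cumsum(dW)))

def W_hat(t, W_mesh, T):
    """Piecewise linear interpolant (2.2) of the sampled Brownian path."""
    t_mesh = np.linspace(0.0, T, len(W_mesh))
    return np.interp(t, t_mesh, W_mesh)

rng = np.random.default_rng(1)
W_mesh = sample_brownian(T=1.0, N=8, rng=rng)
# On I_0 the interpolant is linear, so its value at the midpoint of I_0
# equals the average of the two endpoint samples:
mid_value = W_hat(1.0 / 16.0, W_mesh, 1.0)
```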
We replace the Wiener process W_j(t), t ∈ [0, T], by Ŵ_j(t). Let X̂_i(t) be the smoother solution of the approximated IVP

dX̂_i(t) = a_i(t, X̂(t)) dt + ∑_{j=1}^{m} b_{ij}(t, X̂(t)) dŴ_j(t), t ∈ [0, T], X̂_i(0) = X_{0,i}, i = 1, …, d, (2.3)

where X̂ = [X̂_1, X̂_2, …, X̂_d]^t ∈ R^d. Since dŴ_j(t) = (∆W_{n,j}/h_n) dt on I_n, where ∆W_{n,j} = W_j(t_{n+1}) − W_j(t_n) and h_n = t_{n+1} − t_n, (2.3) is equivalent to the following modified system

dX̂_i(t)/dt = a_i(t, X̂(t)) + ∑_{j=1}^{m} b_{ij}(t, X̂(t)) ∑_{n=0}^{N−1} χ_{I_n}(t) ∆W_{n,j}/h_n, t ∈ [0, T], X̂_i(0) = X_{0,i}, i = 1, …, d. (2.4)

Wong and Zakai [17,18] used deterministic ODEs with random coefficients to approximate SDEs. They approximated the Brownian motion by the piecewise linear interpolant and proved that the solution of the resulting ODE converges in the mean to the solution of the original SDE, but in the Stratonovich sense. Their work was later extended to stochastic differential equations of higher dimensions, for example, by McShane [19,20], Stroock and Varadhan [21], Ikeda et al. [22], Ikeda and Watanabe [23], and recently by Kelly and Melbourne [24], in which the same approximation of the noise process as in this paper was studied. To be more precise, the sequence of random variables X̂_i(t) converges in the mean to the solution of the following Stratonovich SDE

dY_i(t) = a_i(t, Y(t)) dt + ∑_{j=1}^{m} b_{ij}(t, Y(t)) ∘ dW_j(t), Y_i(0) = X_{0,i}, (2.5)

where the notation ∘ refers to the Stratonovich interpretation. This equation is equivalent to the modified Itô SDE

dY_i(t) = ( a_i(t, Y(t)) + (1/2) ∑_{j=1}^{m} ∑_{k=1}^{d} b_{kj}(t, Y(t)) ∂b_{ij}(t, Y(t))/∂X_k ) dt + ∑_{j=1}^{m} b_{ij}(t, Y(t)) dW_j(t), Y_i(0) = X_{0,i}. (2.6)

The second term on the right-hand side of (2.6) is the so-called Wong–Zakai correction term. For the sake of completeness, we recall the following result from [23].

Theorem 2.1. Suppose that X_i(t), X̂_i(t), and Y_i(t), respectively, satisfy Eqs. (1.2), (2.3), and (2.6) for i = 1, …, d. Assume that the functions a_i(t, X) and b_ij(t, X) are bounded smooth functions on [0, T] × R^d. To be more precise, the conditions a_i(t, X) ∈ C_b^1([0, T] × R^d) and b_ij(t, X) ∈ C_b^2([0, T] × R^d) are enough, where C_b^m([0, T] × R^d) is the set of real m-times continuously differentiable functions which are bounded together with their derivatives up to the mth order. Then, for any T > 0 and for i = 1, …, d, we have

lim_{N→∞} E[ sup_{t∈[0,T]} |Y_i(t) − X̂_i(t)|² ] = 0.

Proof. A detailed proof can be found in [23]. □
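The correction term in (2.6) can be checked on a concrete example. For scalar geometric Brownian motion, b(t, x) = σx with d = m = 1, the double sum (1/2) ∑_j ∑_k b_kj ∂b_ij/∂X_k collapses to σ²x/2. The sketch below evaluates the general expression with finite-difference derivatives and compares it against that closed form; the helper names and the value of σ are illustrative assumptions.

```python
import numpy as np

def wz_correction(b, t, x, eps=1e-6):
    """Wong-Zakai correction (1/2) sum_j sum_k b_kj * db_ij/dX_k from (2.6).

    b : (t, x) -> (d, m) diffusion matrix; x is a length-d vector.
    Partial derivatives are approximated by central differences.
    Returns a length-d vector (one correction per component i).
    """
    d = len(x)
    B = b(t, x)
    corr = np.zeros(d)
    for k in range(d):
        e = np.zeros(d)
        e[k] = eps
        dB_dxk = (b(t, x + e) - b(t, x - e)) / (2.0 * eps)  # shape (d, m)
        corr += 0.5 * dB_dxk @ B[k, :]  # adds sum_j (db_ij/dX_k) * b_kj
    return corr

# Scalar GBM: b(t, x) = sigma * x, so the correction is sigma^2 * x / 2.
sigma = 0.4
b_gbm = lambda t, x: sigma * x.reshape(1, 1)
c = wz_correction(b_gbm, 0.0, np.array([2.0]))
# c[0] agrees with sigma**2 * 2.0 / 2 = 0.16 up to finite-difference error
```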

We note that the approximation (2.4) converges to the Itô SDE with correction term (2.6). To achieve convergence to the solution of the original system (1.2), we apply a transformation to the drift term and consider the following modified system

dX̂_i(t) = ( a_i(t, X̂(t)) − (1/2) ∑_{j=1}^{m} ∑_{k=1}^{d} b_{kj}(t, X̂(t)) ∂b_{ij}(t, X̂(t))/∂X_k ) dt + ∑_{j=1}^{m} b_{ij}(t, X̂(t)) dŴ_j(t), X̂_i(0) = X_{0,i}, (2.7)

which is equivalent to the system

dX̂_i(t)/dt = â_i(t, X̂(t)), X̂_i(0) = X_{0,i}, i = 1, 2, …, d, (2.8a)

where

â_i(t, X̂(t)) = a_i(t, X̂(t)) − (1/2) ∑_{j=1}^{m} ∑_{k=1}^{d} b_{kj}(t, X̂(t)) ∂b_{ij}(t, X̂(t))/∂X_k + ∑_{j=1}^{m} b_{ij}(t, X̂(t)) ∑_{n=0}^{N−1} χ_{I_n}(t) ∆W_{n,j}/h_n. (2.8b)

Under the assumptions of Theorem 2.1, the system (2.7) has a unique solution X̂(t), and the solution of the modified system (2.7) converges to the solution of the original system (1.2). The following result follows immediately from Theorem 2.1.

Theorem 2.2. Suppose that the assumptions of Theorem 2.1 are satisfied. Let X = [X_1, X_2, …, X_d]^t be the solution of (1.2) and X̂ = [X̂_1, X̂_2, …, X̂_d]^t be the solution of (2.7). Then, for any T > 0 and for i = 1, …, d, we have

lim_{N→∞} E[ sup_{t∈[0,T]} |X_i(t) − X̂_i(t)|² ] = 0. (2.9)

Remark 2.1. The reason for the interest in the smoothed version (2.7) of the system of SDEs is that its solution X̂_i(t) is smoother than the solution X_i(t) of (1.2). Thus, standard numerical methods can be applied to approximate X̂_i(t). Suppose that X_{i,h}(t) approximates X̂_i(t). If we can show that X_{i,h}(t) converges to X̂_i(t), then we can use Theorem 2.2 to prove that X_{i,h}(t) converges to the solution X_i(t) of the original system (1.2).

2.1. The SDG scheme

Since (2.8a) is a system of ODEs, we apply the standard DG method for deterministic ODEs [12,25]. Multiplying (2.8a) by a test function v, integrating over I_n, and using integration by parts, we obtain the DG weak formulation

∫_{I_n} X̂_i v′ dt + ∫_{I_n} â_i(t, X̂) v dt − X̂_i(t_{n+1}) v(t_{n+1}) + X̂_i(t_n) v(t_n) = 0, i = 1, 2, …, d. (2.10)

We define the piecewise polynomial space V_h^p = {v : v|_{I_n} ∈ P^p(I_n), n = 0, 1, …, N − 1}, where P^p(I_n) is the set of all polynomials of degree less than or equal to p on I_n. Since polynomials in V_h^p are allowed to have discontinuities across element boundaries, we use v(t_n^−) = lim_{s→0^−} v(t_n + s) and v(t_n^+) = lim_{s→0^+} v(t_n + s) to denote the left and right limits of v at t_n.

Next, we approximate each X̂_i(t) by a piecewise polynomial X_{i,h}(t) ∈ V_h^p. The discrete DG scheme consists of finding X_{i,h} ∈ V_h^p such that, for all v ∈ V_h^p and n = 0, …, N − 1,

∫_{I_n} v′ X_{i,h} dt + ∫_{I_n} â_i(t, X_h) v dt − X_{i,h}(t_{n+1}^−) v(t_{n+1}^−) + X_{i,h}(t_n^−) v(t_n^+) = 0, i = 1, 2, …, d, (2.11)

where we used the classical upwind numerical flux. Here,

â_i(t, X_h(t)) = a_i(t, X_h(t)) − (1/2) ∑_{j=1}^{m} ∑_{k=1}^{d} b_{kj}(t, X_h(t)) ∂b_{ij}(t, X_h(t))/∂X_k + ∑_{j=1}^{m} b_{ij}(t, X_h(t)) ∆W_{n,j}/h_n, t ∈ I_n.

We will refer to this DG scheme as the stochastic DG (SDG) scheme.

2.2. Implementation

The SDG solution X_{i,h}(t) can be computed in an element-by-element fashion. Indeed, we first obtain X_{i,h}(t) in I_0 using (2.11) with n = 0, since X_{i,h}(t_0^−) = X_{0,i} is given. After obtaining X_{i,h}(t) in I_0, we can obtain X_{i,h}(t) in I_1. This process is repeated to obtain X_{i,h}(t) in I_n, since X_{i,h}(t_n^−) is already available.

In practice, X_{i,h}(t) is computed locally on I_n as follows: (i) we write X_{i,h}(t) = ∑_{k=0}^{p} c_{k,n} L_{k,n}(t), t ∈ I_n, n = 0, …, N − 1, as a linear combination of basis functions L_{k,n}(t), k = 0, …, p, where L_{k,n} denotes the kth-degree Legendre polynomial on I_n, and (ii) we choose v = L_{j,n}(t), j = 0, …, p, to obtain a small system of nonlinear algebraic equations on each I_n. This system can be solved for c_{0,n}, …, c_{p,n} using, e.g., Newton's method for systems. In practice, ∆W_{n,j} = W_j(t_{n+1}) − W_j(t_n) can be simulated as √(h_n) Z, where Z ∼ N(0, 1). Once we obtain the SDG solution on I_n, n = 0, …, N − 1, our approximate SDG solution to the original SDE (1.2) is a piecewise discontinuous polynomial of degree at most p.
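For p = 0, the local problem in (2.11) is easy to write down explicitly and makes the element-by-element sweep concrete: on I_n the trial and test functions are constants, so v′ = 0 and the scheme reduces to the scalar equation c = X_h(t_n^−) + ∫_{I_n} â(t, c) dt for the constant value c on I_n. The sketch below solves this implicit equation by fixed-point iteration with midpoint quadrature for a scalar SDE; it is a simplified illustration of the sweep, not the higher-order Newton-based solver described above.

```python
import numpy as np

def sdg_p0(a, b, db_dx, x0, T, N, rng):
    """Piecewise-constant (p = 0) SDG sweep for a scalar SDE.

    On each I_n we solve c = x_prev + h*(a - 0.5*b*db_dx)(t_mid, c) + b(t_mid, c)*dW
    by fixed-point iteration; the middle term is the Wong-Zakai correction
    from (2.8b), and dW is simulated as sqrt(h)*Z with Z ~ N(0, 1).
    """
    h = T / N
    x = x0
    for n in range(N):
        t_mid = (n + 0.5) * h
        dW = np.sqrt(h) * rng.standard_normal()
        c = x
        for _ in range(50):  # fixed-point iteration for the implicit step
            corrected = a(t_mid, c) - 0.5 * b(t_mid, c) * db_dx(t_mid, c)
            c = x + h * corrected + b(t_mid, c) * dW
        x = c  # upwind flux: the next element sees the left trace X_h(t_{n+1}^-)
    return x

# Deterministic sanity check (b = 0): the sweep becomes a backward-Euler-type
# method, so for dX = -X dt, X(0) = 1, it should approximate exp(-T).
rng = np.random.default_rng(2)
zero = lambda t, x: 0.0
x_T = sdg_p0(lambda t, x: -x, zero, zero, x0=1.0, T=1.0, N=1000, rng=rng)
```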

3. Error analysis

The noise is called additive if the matrix b(t , X) in (1.2) does not depend on X; otherwise it is called multiplicative. In
this section, we shall present an error analysis for the case of additive noise, i.e.,
dX_i(t) = a_i(t, X(t)) dt + ∑_{j=1}^{m} b_{ij}(t) dW_j(t), t ∈ [0, T], X_i(0) = X_{0,i}, i = 1, 2, …, d. (3.1)

Let X̂(t) be the solution of the approximated IVP

dX̂_i(t)/dt = a_i(t, X̂(t)) + ∑_{j=1}^{m} b_{ij}(t) dŴ_j(t)/dt, t ∈ [0, T], X̂_i(0) = X_{0,i}, i = 1, …, d, (3.2)

which is equivalent to

dX̂_i(t)/dt = a_i(t, X̂(t)) + ∑_{j=1}^{m} b_{ij}(t) ∑_{n=0}^{N−1} χ_{I_n}(t) ∆W_{n,j}/h_n = â_i(t, X̂(t)), X̂_i(0) = X_{0,i}, i = 1, …, d. (3.3)

The weak formulation (2.10) becomes

∫_{I_n} X̂_i v′ dt + ∫_{I_n} â_i(t, X̂) v dt − X̂_i(t_{n+1}) v(t_{n+1}) + X̂_i(t_n) v(t_n) = 0, i = 1, 2, …, d. (3.4)

The SDG scheme (2.11) reduces to: find X_{i,h} ∈ V_h^p such that, for all v ∈ V_h^p and n = 0, …, N − 1,

∫_{I_n} v′ X_{i,h} dt + ∫_{I_n} â_i(t, X_h) v dt − X_{i,h}(t_{n+1}^−) v(t_{n+1}^−) + X_{i,h}(t_n^−) v(t_n^+) = 0, i = 1, 2, …, d, (3.5a)

where

â_i(t, X_h(t)) = a_i(t, X_h(t)) + ∑_{j=1}^{m} b_{ij}(t) ∆W_{n,j}/h_n, t ∈ I_n. (3.5b)

We will show that the SDG solution Xh , defined in (3.5), converges to the solution X of the system (3.1).
We first introduce some norms that will be used throughout this paper. If X = [X_1, X_2, …, X_d]^t, we use the notation |X| = (∑_{i=1}^{d} |X_i|²)^{1/2}. We define the standard L²-norm of an integrable function X = X(t) on I_n as ∥X∥_{0,I_n} = (∫_{I_n} X²(t) dt)^{1/2}. Let H^s(I_n), s = 0, 1, …, denote the standard Sobolev space of square integrable functions on I_n with all derivatives X^{(k)}, k = 0, 1, …, s, being square integrable on I_n, equipped with the norm ∥X∥_{s,I_n} = (∑_{k=0}^{s} ∥X^{(k)}∥²_{0,I_n})^{1/2}. We also define the broken Sobolev norms on [0, T] as ∥X∥_s = (∑_{n=0}^{N−1} ∥X∥²_{s,I_n})^{1/2}, s ≥ 0. For convenience, we use ∥X∥ to denote ∥X∥_0. For any vector-valued function X = [X_1, X_2, …, X_d]^t, we use the norm ∥X∥_s = (∑_{k=0}^{s} ∥X^{(k)}∥²)^{1/2}, where ∥X^{(k)}∥² = ∑_{i=1}^{d} ∥X_i^{(k)}∥².

From now on, C, with or without a subscript, denotes a generic deterministic positive constant which might not be the same in each appearance.

3.1. Regularity of the process X̂(t)

First, we consider the regularity of the approximate process Ŵ_j(t). The following lemma gives some important properties of Ŵ_j(t); it will also be used to establish an estimate of E[∥X̂∥²_s].

Lemma 3.1. The piecewise linear random process Ŵ_j(t), defined by (2.2), satisfies the following properties: the trajectories of dŴ_j/dt belong to L²(0, T), and there exists a constant C independent of h such that

E[ ∥dŴ_j/dt∥² ] = N ≤ Ch^{−1}. (3.6)

Moreover, if the function f(t) satisfies a Lipschitz condition on [0, T] with Lipschitz constant L > 0, i.e., |f(t) − f(s)| ≤ L|t − s| for all t, s ∈ [0, T], then, for all n = 0, 1, …, N − 1,

E[ ( ∫_0^{t_n} f(t) dW_j(t) − ∫_0^{t_n} f(t) dŴ_j(t) )² ] ≤ TL²h². (3.7)

Proof. The proof of this lemma is given in [5], more precisely in its Lemma 4.1. □
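Identity (3.6) has a quick Monte Carlo sanity check: on a uniform mesh, dŴ_j/dt equals ∆W_{n,j}/h on I_n, so ∥dŴ_j/dt∥² = ∑_n ∆W²_{n,j}/h, and each summand has expectation exactly 1. A sketch (the helper name is ours):

```python
import numpy as np

def path_energy(T, N, rng):
    """||dW_hat/dt||^2 = sum_n (Delta W_n)^2 / h for one sampled Brownian path."""
    h = T / N
    dW = np.sqrt(h) * rng.standard_normal(N)
    return float(np.sum(dW**2) / h)

rng = np.random.default_rng(3)
N, M = 32, 4000
est = np.mean([path_energy(1.0, N, rng) for _ in range(M)])
# By (3.6), E[path_energy] = N = 32, so the Monte Carlo average est is near 32.
```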

Theorem 3.1. Suppose that X_i(t) and X̂_i(t) satisfy (3.1) and (3.2), respectively, for i = 1, 2, …, d. If

1. a_i(t, X) and b_ij(t) are continuous functions on [0, T] × R^d and [0, T], respectively,
2. a_i(t, X) satisfies a Lipschitz condition with respect to X with a constant L_a > 0, i.e., |a_i(t, X) − a_i(t, Y)| ≤ L_a ∑_{i=1}^{d} |X_i − Y_i| for all X, Y ∈ R^d, and
3. b_ij(t) satisfies a Lipschitz condition on [0, T] with Lipschitz constant L_b > 0, i.e., |b_ij(t) − b_ij(s)| ≤ L_b |t − s|,

then the sequence of solutions X̂_i(t_n) converges in the mean-square sense to X_i(t_n) as N → ∞; more precisely,

E[ |X_i(t_n) − X̂_i(t_n)|² ] ≤ Ch², n = 0, 1, …, N − 1, (3.8)

|E[X_i(t_n)] − E[X̂_i(t_n)]|² ≤ Ch², n = 0, 1, …, N − 1, (3.9)

E[ ∥X_i − X̂_i∥² ] ≤ Ch². (3.10)

Proof. We note that the integral forms of (3.1) and (3.2) are

X_i(t) = X_{0,i} + ∫_0^t a_i(s, X(s)) ds + ∑_{j=1}^{m} ∫_0^t b_{ij}(s) dW_j(s), t ∈ [0, T], i = 1, 2, …, d, (3.11)

X̂_i(t) = X_{0,i} + ∫_0^t a_i(s, X̂(s)) ds + ∑_{j=1}^{m} ∫_0^t b_{ij}(s) dŴ_j(s), t ∈ [0, T], i = 1, 2, …, d. (3.12)

Subtracting (3.12) from (3.11) with t = t_n yields

X_i(t_n) − X̂_i(t_n) = ∫_0^{t_n} ( a_i(s, X(s)) − a_i(s, X̂(s)) ) ds + ∑_{j=1}^{m} ( ∫_0^{t_n} b_{ij}(s) dW_j(s) − ∫_0^{t_n} b_{ij}(s) dŴ_j(s) ).

Using the Lipschitz condition and the Cauchy–Schwarz inequality, we get

|X_i(t_n) − X̂_i(t_n)| ≤ L_a ∑_{i=1}^{d} ∫_0^{t_n} |X_i(s) − X̂_i(s)| ds + ∑_{j=1}^{m} | ∫_0^{t_n} b_{ij}(s) dW_j(s) − ∫_0^{t_n} b_{ij}(s) dŴ_j(s) |

≤ L_a t_n^{1/2} ∑_{i=1}^{d} ( ∫_0^{t_n} |X_i(s) − X̂_i(s)|² ds )^{1/2} + ∑_{j=1}^{m} | ∫_0^{t_n} b_{ij}(s) dW_j(s) − ∫_0^{t_n} b_{ij}(s) dŴ_j(s) |.

Squaring both sides, using the inequality (a + b)² ≤ 2a² + 2b², and then the inequality (∑_{k=1}^{l} α_k)² ≤ l ∑_{k=1}^{l} α_k² with l = d and l = m, we get

|X_i(t_n) − X̂_i(t_n)|² ≤ 2L_a² t_n d ∫_0^{t_n} |X(s) − X̂(s)|² ds + 2m ∑_{j=1}^{m} | ∫_0^{t_n} b_{ij}(s) dW_j(s) − ∫_0^{t_n} b_{ij}(s) dŴ_j(s) |².

Summing over i, taking the expectation of both sides, and applying the estimate (3.7), we obtain

E[ |X(t_n) − X̂(t_n)|² ] ≤ 2L_a² t_n d² ∫_0^{t_n} E[ |X(s) − X̂(s)|² ] ds + 2dm² TL_b² h².

Applying Gronwall's inequality gives

E[ |X(t_n) − X̂(t_n)|² ] ≤ 2dm² TL_b² h² e^{2L_a² t_n² d²},

which establishes (3.8). Next, (3.9) follows from (3.8) and Jensen's inequality φ(E[v]) ≤ E[φ(v)] with φ(v) = v² and v = X_i(t_n) − X̂_i(t_n).
Finally, we will prove (3.10). We first write (3.11) and (3.12) as

X_i(t) = X_{0,i} + ∫_0^t a_i(s, X(s)) ds + ∑_{j=1}^{m} ∫_0^T H(t − s) b_{ij}(s) dW_j(s), t ∈ [0, T], i = 1, 2, …, d, (3.13)

X̂_i(t) = X_{0,i} + ∫_0^t a_i(s, X̂(s)) ds + ∑_{j=1}^{m} ∫_0^T H(t − s) b_{ij}(s) dŴ_j(s), t ∈ [0, T], i = 1, 2, …, d, (3.14)

where H(t) is the Heaviside function. Subtracting (3.14) from (3.13), we get

X_i(t) − X̂_i(t) = ∫_0^t ( a_i(s, X(s)) − a_i(s, X̂(s)) ) ds + ∑_{j=1}^{m} ( ∫_0^T H(t − s) b_{ij}(s) dW_j(s) − ∫_0^T H(t − s) b_{ij}(s) dŴ_j(s) ).

Using the Lipschitz condition and the Cauchy–Schwarz inequality, we obtain

|X_i(t) − X̂_i(t)| ≤ L_a t^{1/2} ∑_{i=1}^{d} ( ∫_0^t |X_i(s) − X̂_i(s)|² ds )^{1/2} + ∑_{j=1}^{m} | ∫_0^T H(t − s) b_{ij}(s) dW_j(s) − ∫_0^T H(t − s) b_{ij}(s) dŴ_j(s) |.

Squaring both sides, using the inequality (a + b)² ≤ 2a² + 2b², and then the inequality (∑_{k=1}^{l} α_k)² ≤ l ∑_{k=1}^{l} α_k² with l = d and l = m, we get

|X_i(t) − X̂_i(t)|² ≤ 2L_a² T d ∫_0^t |X(s) − X̂(s)|² ds + 2m ∑_{j=1}^{m} | ∫_0^T H(t − s) b_{ij}(s) dW_j(s) − ∫_0^T H(t − s) b_{ij}(s) dŴ_j(s) |².

Summing over i yields

|X(t) − X̂(t)|² ≤ 2L_a² T d² ∫_0^t |X(s) − X̂(s)|² ds + 2m ∑_{i=1}^{d} ∑_{j=1}^{m} | ∫_0^T H(t − s) b_{ij}(s) dW_j(s) − ∫_0^T H(t − s) b_{ij}(s) dŴ_j(s) |².

Applying Gronwall's inequality, we obtain

|X(t) − X̂(t)|² ≤ 2m e^{2L_a² T d² t} ∑_{i=1}^{d} ∑_{j=1}^{m} | ∫_0^T H(t − s) b_{ij}(s) dW_j(s) − ∫_0^T H(t − s) b_{ij}(s) dŴ_j(s) |²

≤ 2m e^{2L_a² T² d²} ∑_{i=1}^{d} ∑_{j=1}^{m} | ∫_0^T H(t − s) b_{ij}(s) dW_j(s) − ∫_0^T H(t − s) b_{ij}(s) dŴ_j(s) |².

Taking the expectation of both sides, we get

E[ |X(t) − X̂(t)|² ] ≤ 2m e^{2L_a² T² d²} ∑_{i=1}^{d} ∑_{j=1}^{m} E[ | ∫_0^T H(t − s) b_{ij}(s) dW_j(s) − ∫_0^T H(t − s) b_{ij}(s) dŴ_j(s) |² ].

Clearly, for any fixed t ∈ [0, T], the function f(s) = H(t − s) b_{ij}(s) is Lipschitz continuous in s with some Lipschitz constant L. Applying (3.7) with f(s) = H(t − s) b_{ij}(s) and n = N, we obtain

E[ ( ∫_0^T H(t − s) b_{ij}(s) dW_j(s) − ∫_0^T H(t − s) b_{ij}(s) dŴ_j(s) )² ] ≤ TL²h².

Consequently, we have

E[ |X(t) − X̂(t)|² ] ≤ 2m e^{2L_a² T² d²} · dm · TL²h² = 2dm² TL² e^{2L_a² T² d²} h² = Ch².

Integrating over [0, T] yields (3.10), which completes the proof of the theorem. □

Next, we discuss the regularity of the approximate solution X̂(t), which is necessary to apply standard analysis techniques for the DG method. In particular, we require X̂(t) ∈ H^{p+1}(0, T). In the next theorem, we present important estimates which will be needed in our convergence error analysis.

Theorem 3.2. Suppose that the assumptions of Theorem 3.1 are satisfied. Then there exists a positive constant C independent of h such that

E[ sup_{t∈[0,T]} |X̂_i(t)|² ] ≤ C, E[ |X̂_i(t_n)|² ] ≤ C, n = 0, 1, …, N − 1, (3.15)

E[ ∥X̂_i∥² ] ≤ C. (3.16)

Moreover, if a_i(t, X) ∈ C^p([0, T] × R^d) and b_ij(t) ∈ C^p([0, T]), then X̂_i ∈ H^{p+1}(0, T) with

E[ ∥X̂_i∥²_s ] ≤ Ch^{−1}, s = 1, 2, …, p + 1, (3.17)

where the constant C is deterministic and independent of h.

Proof. First, we will prove (3.15). Under the assumptions of Theorem 3.1, we have E[ sup_{t∈[0,T]} |X_i(t)|² ] < ∞; see e.g., [26]. On the other hand, by Theorem 2.2, we have lim_{N→∞} E[ sup_{t∈[0,T]} |X_i(t) − X̂_i(t)|² ] = 0. Since the sequence { E[ sup_{t∈[0,T]} |X_i(t) − X̂_i(t)|² ] } converges, it is bounded. Using the triangle inequality and the standard inequality (a + b)² ≤ 2a² + 2b², we write

|X̂_i(t)|² ≤ ( |X_i(t)| + |X_i(t) − X̂_i(t)| )² ≤ 2|X_i(t)|² + 2|X_i(t) − X̂_i(t)|².

Taking the supremum and then the expectation of both sides, we obtain

E[ sup_{t∈[0,T]} |X̂_i(t)|² ] ≤ 2E[ sup_{t∈[0,T]} |X_i(t)|² ] + 2E[ sup_{t∈[0,T]} |X_i(t) − X̂_i(t)|² ].

Since the right-hand side is bounded, we deduce the first estimate in (3.15). The second estimate in (3.15) follows from the first.

Next, we estimate E[∥X̂_i∥²]. Using the first estimate in (3.15), we have

E[ ∥X̂_i∥² ] = E[ ∫_0^T |X̂_i(t)|² dt ] = ∫_0^T E[ |X̂_i(t)|² ] dt ≤ ∫_0^T E[ sup_{t∈[0,T]} |X̂_i(t)|² ] dt = T E[ sup_{t∈[0,T]} |X̂_i(t)|² ] ≤ TC_1 = C,

which completes the proof of (3.16).
If a_i(t, X) ∈ C^p([0, T] × R^d) and b_ij(t) ∈ C^p([0, T]), then (3.5b) shows that â_i ∈ H^p([0, T] × R^d), since dŴ_j(t)/dt is piecewise constant on [0, T]. Thus, X̂_i ∈ H^{p+1}(0, T). We will use an induction argument to prove (3.17). Multiplying (3.3) by dX̂_i(t)/dt and integrating over [0, T], we get

∥dX̂_i/dt∥² = ∫_0^T ( a_i(t, X̂) + ∑_{j=1}^{m} b_{ij}(t) dŴ_j/dt ) (dX̂_i/dt) dt ≤ ∫_0^T |a_i(t, X̂)| |dX̂_i/dt| dt + M_1 ∑_{j=1}^{m} ∫_0^T |dŴ_j/dt| |dX̂_i/dt| dt,

where M_1 is an upper bound for |b_ij(t)| on [0, T]. Applying the Cauchy–Schwarz inequality, we obtain

∥dX̂_i/dt∥ ≤ ∥a_i(t, X̂)∥ + M_1 ∑_{j=1}^{m} ∥dŴ_j/dt∥.

Squaring both sides and using the inequalities (a + b)² ≤ 2a² + 2b² and (∑_{j=1}^{m} α_j)² ≤ m ∑_{j=1}^{m} α_j², we get

∥dX̂_i/dt∥² ≤ 2∥a_i(t, X̂)∥² + 2M_1² m ∑_{j=1}^{m} ∥dŴ_j/dt∥².

Under the assumption that the function a_i(t, X̂) is continuous and satisfies the Lipschitz condition on [0, T] × R^d with a Lipschitz constant L_a > 0, a_i(t, X̂) satisfies the linear growth bound

|a_i(t, X̂)| ≤ |a_i(t, 0)| + |a_i(t, X̂) − a_i(t, 0)| ≤ max_{t∈[0,T]} |a_i(t, 0)| + L_a ∑_{i=1}^{d} |X̂_i| ≤ K ( 1 + ∑_{i=1}^{d} |X̂_i| ), (3.18)

where K = max( L_a, max_{t∈[0,T]} |a_i(t, 0)| ). Using the inequalities (a + b)² ≤ 2a² + 2b² and (∑_{i=1}^{d} α_i)² ≤ d ∑_{i=1}^{d} α_i², we obtain

∥a_i(t, X̂)∥² ≤ K² ∫_0^T ( 1 + ∑_{i=1}^{d} |X̂_i| )² dt ≤ 2K² ∫_0^T ( 1 + d ∑_{i=1}^{d} |X̂_i|² ) dt = 2K²T + 2dK² ∥X̂∥².

Thus, we have

∥dX̂_i/dt∥² ≤ 4K²T + 4dK² ∥X̂∥² + 2M_1² m ∑_{j=1}^{m} ∥dŴ_j/dt∥².

Taking the expectation and using the estimates (3.16) and (3.6), we get

E[ ∥dX̂_i/dt∥² ] ≤ 4K²T + 4dK² E[ ∥X̂∥² ] + 2M_1² m ∑_{j=1}^{m} E[ ∥dŴ_j/dt∥² ] ≤ 4K²T + 4dK² C_1 + 2M_1² m² C_2 h^{−1} ≤ Ch^{−1}. (3.19)

Using E[ ∥X̂_i∥²_1 ] = E[ ∥X̂_i∥² ] + E[ ∥dX̂_i/dt∥² ] and the estimates (3.16) and (3.19), we establish (3.17) for s = 1.


Now, we assume that (3.17) holds for s = 1, 2, . . . , p and we will prove that it is valid for s = p + 1. Taking the pth derivative of (3.2) with respect to t, multiplying the resulting equation by $\hat X_i^{(p+1)}$, and integrating over [0, T], we obtain

$$\left\|\hat X_i^{(p+1)}\right\|^2=\int_0^T\frac{d^p a_i(t,\hat{\mathbf X})}{dt^p}\,\hat X_i^{(p+1)}\,dt+\sum_{j=1}^m\int_0^T b_{ij}^{(p)}(t)\frac{d\hat W_j}{dt}\,\hat X_i^{(p+1)}\,dt,$$

since dŴj/dt is piecewise constant and so $\hat X_i^{(p+1)}=\frac{d^p a_i(t,\hat{\mathbf X})}{dt^p}+\sum_{j=1}^m b_{ij}^{(p)}(t)\frac{d\hat W_j}{dt}$.

Since $a_i(t,\hat{\mathbf X})\in C^p([0,T]\times\mathbb R^d)$ and $b_{ij}(t)\in C^p([0,T])$, $b_{ij}^{(p)}(t)$ is a bounded function on [0, T] and $\frac{d^p a_i(t,\hat{\mathbf X})}{dt^p}$ is continuous and satisfies the Lipschitz condition with a Lipschitz constant L_a > 0. Thus, $\frac{d^p a_i(t,\hat{\mathbf X})}{dt^p}$ satisfies the linear growth bound. Repeating the same steps used to show (3.17) for s = 1, we establish (3.17) for s = p + 1. This completes the proof of the theorem. □

3.2. Convergence of the SDG solution in the mean-square sense

In our error analysis, we will be using two special Gauss–Radau projections $P_h^\pm$. These projections are defined element-by-element as follows: for any integrable function u, $P_h^\pm u\in V_h^p$ and the restriction of $P_h^\pm u$ to $I_n$ is a polynomial in $P^p(I_n)$ satisfying the conditions

$$\int_{I_n}(u-P_h^-u)v\,dt=0,\ \forall\,v\in P^{p-1}(I_n),\quad\text{and}\quad (u-P_h^-u)(t_{n+1}^-)=0,\tag{3.20}$$

$$\int_{I_n}(u-P_h^+u)v\,dt=0,\ \forall\,v\in P^{p-1}(I_n),\quad\text{and}\quad (u-P_h^+u)(t_n^+)=0.\tag{3.21}$$

For p = 0, we use $P^{-1}(I_n)=\{0\}$. For these projections, we have the following projection result: for any function $u\in H^{p+1}(0,T)$ there exists a positive constant C independent of the mesh size h and u such that

$$\left\|u-P_h^\pm u\right\|+h\left\|(u-P_h^\pm u)'\right\|\le Ch^{p+1}\|u\|_{p+1}.\tag{3.22}$$
Next, we present some inverse properties of the finite element space $V_h^p$: for any $v\in V_h^p$, there exists a positive constant C independent of v and h such that

$$\|v^{(k)}\|\le Ch^{-k}\|v\|,\qquad\|v\|_\infty\le Ch^{-1/2}\|v\|,\qquad\left(\sum_{n=0}^{N-1}\left(v^2(t_n^+)+v^2(t_{n+1}^-)\right)\right)^{1/2}\le Ch^{-1/2}\|v\|.\tag{3.23}$$

In our analysis we need the pth-degree Legendre polynomial defined by the Rodrigues formula [27]

$$\tilde L_p(\xi)=\frac{1}{2^p p!}\frac{d^p}{d\xi^p}\left[(\xi^2-1)^p\right],\quad -1\le\xi\le 1,$$

which satisfies the following properties: $\tilde L_p(1)=1$, $\tilde L_p(-1)=(-1)^p$, $\tilde L_p'(-1)=\frac{p(p+1)}{2}(-1)^{p+1}$, and the orthogonality relation

$$\int_{-1}^1\tilde L_k(\xi)\tilde L_p(\xi)\,d\xi=\frac{2}{2k+1}\delta_{kp},\quad\text{where }\delta_{kp}\text{ is the Kronecker symbol.}\tag{3.24}$$
Mapping the physical element $I_n=[t_n,t_{n+1}]$ into the reference element [−1, 1] by the standard affine mapping

$$t(\xi,h_n)=\frac{t_n+t_{n+1}}{2}+\frac{h_n}{2}\xi,\tag{3.25}$$

we obtain the pth-degree shifted Legendre polynomial $L_{p,n}(t)=\tilde L_p\!\left(\frac{2t-t_n-t_{n+1}}{h_n}\right)$ on $I_n$. Using the mapping (3.25) and the orthogonality relation (3.24), we obtain

$$\left\|L_{p,n}\right\|_{0,I_n}^2=\int_{I_n}L_{p,n}^2(t)\,dt=\frac{h_n}{2}\int_{-1}^1\tilde L_p^2(\xi)\,d\xi=\frac{h_n}{2}\cdot\frac{2}{2p+1}=\frac{h_n}{2p+1}\le h_n.\tag{3.26}$$
Now we are ready to derive error estimates in the mean-square sense for the SDG method. It follows from Theorem 3.1
that the SDE (3.2) possesses a unique solution X̂i (t) ∈ H p+1 (0, T ). Consequently, we can apply the projection result (3.22)
with u = X̂i .
Throughout this paper, we use ei = X̂i − Xi,h to indicate the error between the exact solution X̂i of (3.3) and the SDG
solution Xi,h defined in (3.5). The projection error is defined as ϵi = X̂i − Ph− X̂i and the error between the projection of the
exact solution Ph− X̂i and the DG solution Xi,h is denoted by ēi = Ph− X̂i − Xi,h . We remark that the actual error can be split
as
ei = ϵi + ēi . (3.27)

Next, we state and prove optimal error estimates for $E\left[\|e_i\|^2\right]$.

Theorem 3.3. Assume that the assumptions of Theorem 3.2 are satisfied. Let p ≥ 0 and $X_{i,h}$ be the SDG solution of (3.5). Then, for sufficiently small h, there exists a positive constant C independent of h such that

$$E\left[\|e_i\|^2\right]\le Ch^{2p+1},\quad i=1,2,\dots,d,\tag{3.28}$$

$$E\left[\left\|\bar e_i'\right\|^2\right]\le Ch^{2p+1},\quad i=1,2,\dots,d.\tag{3.29}$$
Proof. The idea of the proof is to mimic the deterministic situation and to derive the error equation by subtracting (3.5a) from (3.4) with $v\in V_h^p$:

$$\int_{I_n}e_iv'\,dt+\int_{I_n}\left(a_i(t,\hat{\mathbf X})-\hat a_i(t,\mathbf X_h)\right)v\,dt-e_i(t_{n+1}^-)v(t_{n+1}^-)+e_i(t_n^-)v(t_n^+)=0,\quad i=1,2,\dots,d,\tag{3.30}$$

which, after a simple integration by parts on the first term, is equivalent to

$$\int_{I_n}\left(e_i'-a_i(t,\hat{\mathbf X})+\hat a_i(t,\mathbf X_h)\right)v\,dt+[e_i](t_n)v(t_n^+)=0.\tag{3.31}$$

Using Taylor's series with integral remainder in the second argument, we write

$$a_i(t,\hat{\mathbf X})-\hat a_i(t,\mathbf X_h)=\sum_{j=1}^d\theta_{ij}(\hat X_j-X_{j,h})=\sum_{j=1}^d\theta_{ij}e_j,\tag{3.32}$$

where $\theta_{ij}=\int_0^1\frac{\partial a_i}{\partial X_j}(t,\hat{\mathbf X}+s(\mathbf X_h-\hat{\mathbf X}))\,ds=\int_0^1\frac{\partial a_i}{\partial X_j}(t,\hat{\mathbf X}-s\mathbf e)\,ds$. Substituting (3.32) into (3.31) yields

$$\int_{I_n}\Bigg(e_i'-\sum_{j=1}^d\theta_{ij}e_j\Bigg)v\,dt+[e_i](t_n)v(t_n^+)=0,\quad\forall\,v\in V_h^p.\tag{3.33}$$

In vector form, we get

$$\int_{I_n}\mathbf v^t\left(\mathbf e'-\Theta\mathbf e\right)dt+\mathbf v^t(t_n^+)[\mathbf e](t_n)=0,\quad\forall\,\mathbf v\in(V_h^p)^d,\tag{3.34}$$

where the entries of the matrix Θ are $\theta_{ij}$. For convenience, we introduce the operator $A_n(\mathbf V)$ as

$$A_n(\mathbf V)=\int_{I_n}\mathbf V^t\left(\mathbf e'-\Theta\mathbf e\right)dt+\mathbf V^t(t_n^+)[\mathbf e](t_n).\tag{3.35}$$

We remark that (3.34) can be written as

$$A_n(\mathbf v)=0,\quad\forall\,\mathbf v\in(V_h^p)^d.\tag{3.36}$$

Integrating by parts, $A_n(\mathbf V)$ can be written as

$$A_n(\mathbf V)=\int_{I_n}\left(-\mathbf V'-\Theta^t\mathbf V\right)^t\mathbf e\,dt+\mathbf V^t(t_{n+1}^-)\mathbf e(t_{n+1}^-)-\mathbf V^t(t_n^+)\mathbf e(t_n^-).\tag{3.37}$$

Now, adding and subtracting $P_h^+\mathbf V$ to $\mathbf V$, we rewrite (3.35) as

$$A_n(\mathbf V)=A_n(\mathbf V-P_h^+\mathbf V)+A_n(P_h^+\mathbf V).\tag{3.38}$$

Applying (3.36) with $\mathbf v=P_h^+\mathbf V\in(P^p(I_n))^d$ and using the property of the projection $P_h^+$, i.e., $(\mathbf V-P_h^+\mathbf V)(t_n^+)=\mathbf 0$, we obtain

$$A_n(\mathbf V)=A_n(\mathbf V-P_h^+\mathbf V)=\int_{I_n}(\mathbf V-P_h^+\mathbf V)^t\left(\mathbf e'-\Theta\mathbf e\right)dt+(\mathbf V-P_h^+\mathbf V)^t(t_n^+)[\mathbf e](t_n)=\int_{I_n}(\mathbf V-P_h^+\mathbf V)^t\left(\mathbf e'-\Theta\mathbf e\right)dt.\tag{3.39}$$

By the property of the projection $P_h^+$, we have

$$\int_{I_n}(\mathbf V-P_h^+\mathbf V)^t\mathbf v'\,dt=0,\quad\forall\,\mathbf v\in(P^p(I_n))^d,\tag{3.40}$$

since $\mathbf v\in(P^p(I_n))^d$ and thus $\mathbf v'\in(P^{p-1}(I_n))^d$.



Substituting $\mathbf e=\boldsymbol\epsilon+\bar{\mathbf e}$ into (3.39) and using (3.40) with $\mathbf v=\bar{\mathbf e}$, we get

$$A_n(\mathbf V)=\int_{I_n}(\mathbf V-P_h^+\mathbf V)^t\left(\boldsymbol\epsilon'+\bar{\mathbf e}'-\Theta\mathbf e\right)dt=\int_{I_n}(\mathbf V-P_h^+\mathbf V)^t\left(\boldsymbol\epsilon'-\Theta\mathbf e\right)dt.\tag{3.41}$$

Now, we are ready to prove (3.28). We first construct the following adjoint problem: find a function $\mathbf W$ such that

$$-\mathbf W'-\Theta^t\mathbf W=\mathbf e,\quad t\in[0,T),\qquad\mathbf W(T)=\mathbf 0,\tag{3.42}$$

where the entries of the matrix Θ are $\theta_{ij}=\int_0^1\frac{\partial a_i}{\partial X_j}(t,\hat{\mathbf X}-s\mathbf e)\,ds$. The solution to (3.42) can be expressed in terms of the fundamental matrix as

$$\mathbf W(t)=M(t)\int_t^T M^{-1}(s)\mathbf e(s)\,ds,\tag{3.43}$$

where the d × d fundamental matrix M(t) satisfies the initial-value problem

$$M'(t)=-\Theta^tM(t),\qquad M(0)=I,\tag{3.44}$$

with I the d × d identity matrix. Under the assumptions of Theorem 3.2, the entries of the matrix Θ(t) are bounded on [0, T]. Using (3.42) and (3.43), we can deduce that there exists a constant C such that (see [28, Lemma 4.2])

$$\|\mathbf W\|_1\le C\|\mathbf e\|.\tag{3.45}$$
Using the well-known projection result and the estimate (3.45), we obtain

$$\left\|\mathbf W-P_h^+\mathbf W\right\|\le C_1h|\mathbf W|_1\le C_2h\|\mathbf e\|.\tag{3.46}$$

Using (3.37) with $\mathbf V=\mathbf W$ and (3.42), we have

$$A_n(\mathbf W)=\int_{I_n}\mathbf e^t\mathbf e\,dt+\mathbf W^t(t_{n+1})\mathbf e(t_{n+1}^-)-\mathbf W^t(t_n)\mathbf e(t_n^-).$$

Summing over all elements and using the fact that $\mathbf W(T)=\mathbf e(0^-)=\mathbf 0$, we obtain

$$\sum_{n=0}^{N-1}A_n(\mathbf W)=\|\mathbf e\|^2+\mathbf W^t(T)\mathbf e(T^-)-\mathbf W^t(0)\mathbf e(0^-)=\|\mathbf e\|^2.\tag{3.47}$$

Next, taking $\mathbf V=\mathbf W$ in (3.41), summing over all elements, and applying the Cauchy–Schwarz inequality yields

$$\sum_{n=0}^{N-1}A_n(\mathbf W)=\sum_{n=0}^{N-1}\int_{I_n}(\mathbf W-P_h^+\mathbf W)^t\left(\boldsymbol\epsilon'-\Theta\mathbf e\right)dt\le\left\|\mathbf W-P_h^+\mathbf W\right\|\left(\left\|\boldsymbol\epsilon'\right\|+C_1\|\mathbf e\|\right).\tag{3.48}$$

Applying the estimates (3.22) and (3.46), we get

$$\sum_{n=0}^{N-1}A_n(\mathbf W)\le C_2h\left(C_3h^p\big\|\hat{\mathbf X}\big\|_{p+1}+C_1\|\mathbf e\|\right)\|\mathbf e\|\le C_1h^{p+1}\big\|\hat{\mathbf X}\big\|_{p+1}\|\mathbf e\|+C_2h\|\mathbf e\|^2.\tag{3.49}$$

Combining the two formulas (3.47) and (3.49) yields

$$\|\mathbf e\|\le C_1h^{p+1}\big\|\hat{\mathbf X}\big\|_{p+1}+C_2h\|\mathbf e\|.\tag{3.50}$$

Consequently, $(1-C_2h)\|\mathbf e\|\le C_1h^{p+1}\big\|\hat{\mathbf X}\big\|_{p+1}$, where $C_2$ is a constant independent of h. Hence, for sufficiently small h, e.g., $h\le\frac{1}{2C_2}$, we have $\frac12\|\mathbf e\|\le(1-C_2h)\|\mathbf e\|\le C_1h^{p+1}\big\|\hat{\mathbf X}\big\|_{p+1}$, which gives $\|\mathbf e\|\le 2C_1h^{p+1}\big\|\hat{\mathbf X}\big\|_{p+1}$ for small h. Squaring both sides, taking the expectation, and using the estimate (3.17) with s = p + 1, we get

$$E\left[\|\mathbf e\|^2\right]\le 4C_1^2h^{2p+2}E\left[\big\|\hat{\mathbf X}\big\|_{p+1}^2\right]\le 4C_1^2h^{2p+2}C_2h^{-1}\le Ch^{2p+1},$$

which completes the proof of (3.28).


Next, we will prove (3.29). By the property of the projection $P_h^-$, we have

$$\int_{I_n}v'\epsilon_i\,dt=0,\ \forall\,v\in P^p(I_n),\quad\text{and}\quad\epsilon_i(t_{n+1}^-)=0,\quad n=0,1,\dots,N-1.\tag{3.51}$$

Substituting $e_i=\epsilon_i+\bar e_i$ into (3.30) and applying (3.51) yields

$$\int_{I_n}\bar e_iv'\,dt+\int_{I_n}\left(a_i(t,\hat{\mathbf X})-\hat a_i(t,\mathbf X_h)\right)v\,dt-\bar e_i(t_{n+1}^-)v(t_{n+1}^-)+\bar e_i(t_n^-)v(t_n^+)=0,\quad i=1,2,\dots,d.$$

Using integration by parts, we get

$$-\int_{I_n}\bar e_i'v\,dt+\int_{I_n}\left(a_i(t,\hat{\mathbf X})-\hat a_i(t,\mathbf X_h)\right)v\,dt-[\bar e_i](t_n)v(t_n^+)=0,\quad i=1,2,\dots,d.\tag{3.52}$$

Taking $v(t)=\bar e_i'(t)-(-1)^p\bar e_i'(t_n^+)L_{p,n}(t)\in P^p(I_n)$ in (3.52), we have, by the property $\tilde L_p(-1)=(-1)^p$ (so that $v(t_n^+)=0$) and the orthogonality relation (3.24),

$$\int_{I_n}(\bar e_i')^2\,dt=\int_{I_n}\left(a_i(t,\hat{\mathbf X})-\hat a_i(t,\mathbf X_h)\right)\left(\bar e_i'(t)-(-1)^p\bar e_i'(t_n^+)L_{p,n}(t)\right)dt.\tag{3.53}$$

Using the Lipschitz condition and applying the Cauchy–Schwarz inequality yields

$$\left\|\bar e_i'\right\|_{I_n}^2\le\int_{I_n}\left|a_i(t,\hat{\mathbf X})-\hat a_i(t,\mathbf X_h)\right|\left(\left|\bar e_i'\right|+\left|\bar e_i'(t_n^+)\right|\left|L_{p,n}\right|\right)dt\le L_a\sum_{j=1}^d\int_{I_n}\left|e_j\right|\left(\left|\bar e_i'\right|+\left|\bar e_i'(t_n^+)\right|\left|L_{p,n}\right|\right)dt\le L_a\sum_{j=1}^d\left\|e_j\right\|_{I_n}\left(\left\|\bar e_i'\right\|_{I_n}+\left|\bar e_i'(t_n^+)\right|\left\|L_{p,n}\right\|_{I_n}\right).$$

Using (3.23) and (3.26), we obtain

$$\left\|\bar e_i'\right\|_{I_n}^2\le L_a\sum_{j=1}^d\left\|e_j\right\|_{I_n}\left(\left\|\bar e_i'\right\|_{I_n}+C_1h_n^{-1/2}\left\|\bar e_i'\right\|_{I_n}h_n^{1/2}\right)=C_2\left\|\bar e_i'\right\|_{I_n}\sum_{j=1}^d\left\|e_j\right\|_{I_n},$$

where $C_2=L_a(1+C_1)$. Thus, we have

$$\left\|\bar e_i'\right\|_{I_n}\le C_2\sum_{j=1}^d\left\|e_j\right\|_{I_n}.$$
Squaring both sides, using $\left(\sum_{j=1}^d\alpha_j\right)^2\le d\sum_{j=1}^d\alpha_j^2$, and summing over all elements, we obtain

$$\left\|\bar e_i'\right\|^2\le C_2^2d\sum_{j=1}^d\left\|e_j\right\|^2=C_3\|\mathbf e\|^2.$$

Taking the expectation of both sides and applying the estimate (3.28), we get

$$E\left[\left\|\bar e_i'\right\|^2\right]\le C_3E\left[\|\mathbf e\|^2\right]\le Ch^{2p+1},$$

which completes the proof of (3.29). □


In the next theorem, we establish a bound for $E\left[\left|\hat X_i(t_n)-X_{i,h}(t_n)\right|^2\right]$, which shows that the SDG solution $X_{i,h}$ converges in the mean-square sense to $\hat X_i$ as h → 0.


Theorem 3.4. Suppose that the assumptions of Theorem 3.3 are satisfied. In addition, we assume that $\frac{\partial a_i(t,\mathbf X)}{\partial X_j}$ is sufficiently smooth with respect to the variables t and X (for example, $F_{ij}(t)=\frac{\partial a_i(t,\mathbf X(t))}{\partial X_j}\in C^{p+1}([0,T])$ is enough). Let p ≥ 0, $\hat X_i(t)$ be the solution of (3.2), and $X_{i,h}(t)$ be the SDG solution defined in (3.5). Then there exists a positive constant C independent of h such that

$$E\left[\left|e_i(t_k^-)\right|^2\right]\le Ch^{4p+1},\quad k=0,1,\dots,N-1,\tag{3.54}$$

$$E\left[\left|\bar e_i(t_k^-)\right|^2\right]\le Ch^{4p+1},\quad k=0,1,\dots,N-1,\tag{3.55}$$

$$E\left[\left\|\bar e_i\right\|^2\right]\le Ch^{2p+3}.\tag{3.56}$$

Proof. We proceed by a duality argument. For 1 ≤ i ≤ d and 1 ≤ k ≤ N − 1, let $\mathbf U$ be the solution of the following auxiliary problem:

$$\mathbf U'+\Theta^t\mathbf U=\mathbf 0,\quad t\in[0,t_k],\qquad\mathbf U(t_k)=\mathbf U_0,\tag{3.57}$$

where $\mathbf U_0\in\mathbb R^d$ is the zero vector except that the ith entry is one, and the entries of the matrix Θ are $\theta_{ij}=\int_0^1\frac{\partial a_i}{\partial X_j}(t,\hat{\mathbf X}-s\mathbf e)\,ds$. Under the assumptions of the theorem, one can easily verify that the solution of (3.57) satisfies the regularity estimate

$$\|\mathbf U\|_{p+1,[0,t_k]}\le C.\tag{3.58}$$

Substituting (3.57) into (3.37) yields

$$A_n(\mathbf U)=\int_{I_n}\left(-\mathbf U'-\Theta^t\mathbf U\right)^t\mathbf e\,dt+\mathbf U^t(t_{n+1}^-)\mathbf e(t_{n+1}^-)-\mathbf U^t(t_n^+)\mathbf e(t_n^-)=\mathbf U^t(t_{n+1})\mathbf e(t_{n+1}^-)-\mathbf U^t(t_n)\mathbf e(t_n^-).$$

Summing over the first k elements $I_n$, n = 0, 1, . . . , k − 1, and using $\mathbf U(t_k)=\mathbf U_0$ and $\mathbf e(t_0^-)=\mathbf 0$, we obtain

$$\sum_{n=0}^{k-1}A_n(\mathbf U)=\mathbf U^t(t_k)\mathbf e(t_k^-)-\mathbf U^t(t_0)\mathbf e(t_0^-)=e_i(t_k^-).\tag{3.59}$$

Next, we take $\mathbf V=\mathbf U$ in (3.41) to get

$$A_n(\mathbf U)=\int_{I_n}(\mathbf U-P_h^+\mathbf U)^t\left(\boldsymbol\epsilon'-\Theta\mathbf e\right)dt.$$

Summing over the first k elements $I_n$, n = 0, 1, . . . , k − 1, with 1 ≤ k ≤ N − 1 and applying (3.59), we obtain

$$e_i(t_k^-)=\sum_{n=0}^{k-1}\int_{I_n}(\mathbf U-P_h^+\mathbf U)^t\left(\boldsymbol\epsilon'-\Theta\mathbf e\right)dt.$$

Using the Cauchy–Schwarz inequality, we get

$$\left|e_i(t_k^-)\right|\le\left\|\mathbf U-P_h^+\mathbf U\right\|_{0,[0,t_k]}\left(\left\|\boldsymbol\epsilon'\right\|_{0,[0,t_k]}+C_1\|\mathbf e\|\right).$$

Using the estimates (3.22) and (3.58), we obtain

$$\left|e_i(t_k^-)\right|\le\left(C_2h^p\big\|\hat{\mathbf X}\big\|_{p+1}+C_1\|\mathbf e\|\right)C_3h^{p+1}|\mathbf U|_{p+1,[0,t_k]}\le C_4h^{2p+1}\big\|\hat{\mathbf X}\big\|_{p+1}+C_5h^{p+1}\|\mathbf e\|.$$
Squaring both sides, using the inequality $(a+b)^2\le 2a^2+2b^2$, and taking the expectation of both sides, we obtain

$$E\left[\left|e_i(t_k^-)\right|^2\right]\le 2C_4^2h^{4p+2}E\left[\big\|\hat{\mathbf X}\big\|_{p+1}^2\right]+2C_5^2h^{2p+2}E\left[\|\mathbf e\|^2\right].$$

Applying the estimate (3.17) with s = p + 1 and (3.28), we conclude that

$$E\left[\left|e_i(t_k^-)\right|^2\right]\le 2C_4^2h^{4p+2}C_6h^{-1}+2C_5^2h^{2p+2}C_7h^{2p+1}\le Ch^{4p+1}.\tag{3.60}$$

In order to show (3.55), we use the relation $e_i=\bar e_i+\epsilon_i$, the property of the projection $P_h^-$, i.e., $\epsilon_i(t_k^-)=0$, and the estimate (3.54) to get

$$E\left[\left|\bar e_i(t_k^-)\right|^2\right]=E\left[\left|e_i(t_k^-)-\epsilon_i(t_k^-)\right|^2\right]=E\left[\left|e_i(t_k^-)\right|^2\right]\le Ch^{4p+1}.$$
Finally, we estimate $E\left[\|\bar e_i\|^2\right]$. By the Fundamental Theorem of Calculus, we have

$$\left|\bar e_i(t)\right|=\left|\bar e_i(t_n^-)+\int_{t_n}^t\bar e_i'(s)\,ds\right|\le\left|\bar e_i(t_n^-)\right|+\int_{I_n}\left|\bar e_i'(s)\right|ds,\quad\forall\,t\in I_n.$$

Squaring both sides, using $(a+b)^2\le 2a^2+2b^2$, and applying the Cauchy–Schwarz inequality, we obtain

$$\left|\bar e_i(t)\right|^2\le 2\left|\bar e_i(t_n^-)\right|^2+2\left(\int_{I_n}\left|\bar e_i'(s)\right|ds\right)^2\le 2\left|\bar e_i(t_n^-)\right|^2+2h_n\int_{I_n}\left|\bar e_i'(s)\right|^2ds=2\left|\bar e_i(t_n^-)\right|^2+2h_n\left\|\bar e_i'\right\|_{I_n}^2.$$

Integrating this inequality with respect to t over $I_n$, we get

$$\|\bar e_i\|_{I_n}^2\le 2h_n\left|\bar e_i(t_n^-)\right|^2+2h_n^2\left\|\bar e_i'\right\|_{I_n}^2.$$

Summing over all elements and using the fact that $h=\max h_n$, we obtain

$$\|\bar e_i\|^2\le 2h\sum_{n=0}^{N-1}\left|\bar e_i(t_n^-)\right|^2+2h^2\left\|\bar e_i'\right\|^2.\tag{3.61}$$

Taking the expectation and using (3.55) and (3.29), we get

$$E\left[\|\bar e_i\|^2\right]\le 2h\sum_{n=0}^{N-1}E\left[\left|\bar e_i(t_n^-)\right|^2\right]+2h^2E\left[\left\|\bar e_i'\right\|^2\right]\le 2hNC_1h^{4p+1}+2h^2C_2h^{2p+1}\le C_3h^{4p+1}+C_4h^{2p+3}\le Ch^{2p+3},$$

since Nh is bounded and 4p + 1 ≥ 2p + 3 for p ≥ 1. Thus, we have completed the proof of the theorem. □



Now, we are ready to prove our main results. In particular, we prove that the SDG solution converges in the
mean-square sense to the exact solution of the original SDE.

Corollary 3.1. Suppose that the assumptions of Theorem 3.4 are satisfied. Let $\mathbf X=[X_1,X_2,\dots,X_d]^t$ be the solution of (3.1) and $\mathbf X_h=\left[X_{1,h},X_{2,h},\dots,X_{d,h}\right]^t$ be the SDG solution of (3.5). Then, for i = 1, 2, . . . , d, we have

$$\lim_{N\to\infty}E\left[\left|X_i(t_n^-)-X_{i,h}(t_n^-)\right|^2\right]=0.\tag{3.62}$$

Proof. Using the inequality $(a+b)^2\le 2(a^2+b^2)$, we have

$$\left|X_i(t_n^-)-X_{i,h}(t_n^-)\right|^2=\left|X_i(t_n^-)-\hat X_i(t_n^-)+\hat X_i(t_n^-)-X_{i,h}(t_n^-)\right|^2\le 2\left|X_i(t_n^-)-\hat X_i(t_n^-)\right|^2+2\left|\hat X_i(t_n^-)-X_{i,h}(t_n^-)\right|^2.$$

Taking the expectation of both sides and using the estimates (3.8) and (3.54), we obtain, for p ≥ 1,

$$E\left[\left|X_i(t_n^-)-X_{i,h}(t_n^-)\right|^2\right]\le 2E\left[\left|X_i(t_n^-)-\hat X_i(t_n^-)\right|^2\right]+2E\left[\left|\hat X_i(t_n^-)-X_{i,h}(t_n^-)\right|^2\right]\le 2E\left[\sup_{t\in[0,T]}\left|X_i(t)-\hat X_i(t)\right|^2\right]+2E\left[\left|\hat X_i(t_n^-)-X_{i,h}(t_n^-)\right|^2\right].$$

Taking the limit as N → ∞ and using (2.9) and (3.54), we complete the proof of the corollary. □

Remark 3.1. In our error analysis, we assumed that the drift coefficient a and the diffusion coefficient b satisfy both the Lipschitz and the linear growth bound conditions. These assumptions are the usual hypotheses of the existence and uniqueness theorem for (1.1). We were unable to prove our results under weaker conditions; this remains an open question for this problem. We believe that a new technique will be needed to obtain similar convergence results. We plan to investigate numerical approximations for stochastic differential equations under conditions weaker than the Lipschitz condition.

Remark 3.2. In this paper we presented an a priori error analysis of the DG method for the model (1.1) driven by additive noise. In Section 5, we present numerical examples which suggest that the results hold true for the multiplicative noise case as well. We were unable to extend the analysis to the multiplicative case; this remains an open question for this problem. We remark that the estimate (3.7) is only valid under the assumption that the function f is deterministic. To extend the analysis to the case of multiplicative noise, one needs to prove the estimate (3.7) when the function f is stochastic, i.e., f = f(t, X). We plan to investigate the case of multiplicative noise in a separate paper; we believe that a new technique will be needed to obtain similar convergence results.

4. Applications

4.1. Application in biology: A SIR epidemic model

We consider a SIR (susceptible–infected–removed) epidemic model in which the population is divided into susceptible
individuals, infected individuals, and recovered individuals. A susceptible individual may become infected and then recover. We assume that individuals develop immunity to the disease: once an individual has recovered from the disease, he or she never becomes susceptible again. We denote the susceptible population at time t by S(t),
the infected population by I(t), and the recovered population by R(t). Hence, the total population at time t is N(t) =
S(t) + I(t) + R(t). We can assume that stochastic perturbations are of a white noise type which are directly proportional to
S(t), I(t), and R(t) [29]. We can also assume that the disease transmission coefficient is subject to the environmental white
noise [30]. Considering both aforementioned perturbations, a stochastic SIR model describing the spread of the disease
is [31]
dS(t) = (Λ − β S(t)I(t) − µS(t)) dt + σ1 S(t) dW1 (t) − σ4 S(t)I(t) dW4 (t), (4.1a)
dI(t) = (β S(t)I(t) − (γ + µ + ϵ) I(t)) dt + σ2 I(t) dW2 (t) + σ4 S(t)I(t) dW4 (t), (4.1b)
dR(t) = (γ I(t) − µR(t)) dt + σ3 R(t) dW3 (t), (4.1c)
where the parameters in the model are summarized in the following list:

• Λ > 0 is the influx of individuals into the susceptible (the natural birth rate in the population),
• β > 0 is the contact rate for the infection transmission, also called the transmission coefficient (i.e., the average number of individuals to whom an infected individual passes the infection after sufficient contact),
• µ > 0 is the natural death rate of the S, I, and R compartments,
• ϵ > 0 is the disease-induced mortality rate,
• γ > 0 is the relative removal rate, also called the recovery rate (i.e., the average number of infected individuals who recover and move from the infected to the recovered population during a unit of time).

In (4.1), $W_j(t)$, j = 1, . . . , 4, are standard Brownian motions (Wiener processes), and σ₁, σ₂, σ₃, and σ₄ are the intensities of the environmental white noises. The global existence and positivity of the solution of system (4.1) are studied in [31].

We note that (4.1) can be written as


dX(t) = a(t , X(t))dt + b(t , X(t))dW(t), t ∈ [0, T ], X(0) = X0 ,
where $\mathbf X=[X_1,X_2,X_3]^t=[S,I,R]^t$ and

$$\mathbf a=\begin{bmatrix}\Lambda-\beta X_1X_2-\mu X_1\\ \beta X_1X_2-(\gamma+\mu+\epsilon)X_2\\ \gamma X_2-\mu X_3\end{bmatrix},\quad \mathbf b=\begin{bmatrix}\sigma_1X_1 & 0 & 0 & -\sigma_4X_1X_2\\ 0 & \sigma_2X_2 & 0 & \sigma_4X_1X_2\\ 0 & 0 & \sigma_3X_3 & 0\end{bmatrix},\quad \mathbf W=\begin{bmatrix}W_1\\ W_2\\ W_3\\ W_4\end{bmatrix}.$$

This application will be considered in Example 5.3.
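For readers who want a quick feel for the dynamics of (4.1), the following sketch integrates the system with the simple Euler–Maruyama method (not the SDG scheme of this paper); the function name and default arguments are ours, with the parameter values taken from Example 5.3:

```python
# Euler-Maruyama sample path of the stochastic SIR system (4.1).
# Illustrative sketch only; the SDG scheme of the paper is higher order.
import numpy as np

def simulate_sir(T=1.0, N=1000, Lam=0.4, beta=0.8, mu=0.2, eps=0.1, gam=0.2,
                 s1=0.2, s2=0.2, s3=0.2, s4=0.2, X0=(0.62, 0.54, 0.54), seed=0):
    """One Euler-Maruyama path of (4.1) on [0, T] with N steps."""
    rng = np.random.default_rng(seed)
    h = T / N
    S, I, R = X0
    for _ in range(N):
        dW = rng.normal(0.0, np.sqrt(h), size=4)      # increments of W_1, ..., W_4
        dS = (Lam - beta*S*I - mu*S)*h + s1*S*dW[0] - s4*S*I*dW[3]
        dI = (beta*S*I - (gam + mu + eps)*I)*h + s2*I*dW[1] + s4*S*I*dW[3]
        dR = (gam*I - mu*R)*h + s3*R*dW[2]
        S, I, R = S + dS, I + dI, R + dR
    return S, I, R

S, I, R = simulate_sir()
```

Averaging many such paths (over different seeds) approximates the mean behavior shown later in Fig. 14.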

4.2. Second model: A stock–price stochastic model

We consider two stocks along with a fixed-interest money market account. Let us denote the two stock prices by
S1 and S2 . The probability distribution of S1 and S2 can be modeled by the following Itô stochastic differential equation
system [32]
dS = µ(t , S1 , S2 ) dt + B(t , S1 , S2 ) dW(t), (4.2)
where $\mathbf S=[S_1,S_2]^t$, $\mathbf W(t)=[W_1,W_2]^t$ is a two-dimensional Wiener process, and

$$\boldsymbol\mu=\begin{bmatrix}(b_1-d_1)S_1+(m_{22}+m_{21}-m_{12}-m_{11})S_1S_2\\ (b_2-d_2)S_2+(m_{22}-m_{21}+m_{12}-m_{11})S_1S_2\end{bmatrix},\quad B=\frac{1}{d}\begin{bmatrix}c_1+c_2+c_3+w & c_2-c_3\\ c_2-c_3 & c_2+c_3+c_4+w\end{bmatrix},$$

in which the parameters $b_1,d_1,b_2,d_2,m_{11},m_{12},m_{21}$, and $m_{22}$ are the rates at which stocks experience individual gains or losses or experience simultaneous gains and/or losses, and w and d are given by

$$w=\sqrt{(c_1+c_2+c_3)(c_2+c_3+c_4)-(c_2-c_3)^2},\qquad d=\sqrt{c_1+2c_2+2c_3+c_4+2w}.$$
We note that each parameter above may depend on the time t [32]. We note also that the results established for the case
of two stocks can be further generalized to a system of n stocks. This application will be considered in Example 5.1.
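The expressions for w and d make B the symmetric square root of the matrix $\begin{bmatrix}c_1+c_2+c_3 & c_2-c_3\\ c_2-c_3 & c_2+c_3+c_4\end{bmatrix}$, so that $BB^t$ reproduces this covariance structure; this is easy to check numerically. A minimal sketch (the function name and the sample values of c₁, . . . , c₄ are ours):

```python
# Diffusion matrix B of the stock model (4.2); B is the symmetric square
# root of the 2x2 covariance matrix A built from c_1, ..., c_4.
import numpy as np

def diffusion_matrix(c1, c2, c3, c4):
    """Build the 2x2 diffusion matrix B of the stock model (4.2)."""
    w = np.sqrt((c1 + c2 + c3) * (c2 + c3 + c4) - (c2 - c3) ** 2)
    d = np.sqrt(c1 + 2 * c2 + 2 * c3 + c4 + 2 * w)
    return np.array([[c1 + c2 + c3 + w, c2 - c3],
                     [c2 - c3, c2 + c3 + c4 + w]]) / d

# Check B @ B.T against the covariance matrix A for sample parameter values:
c1, c2, c3, c4 = 1.0, 0.5, 0.25, 0.75
B = diffusion_matrix(c1, c2, c3, c4)
A = np.array([[c1 + c2 + c3, c2 - c3], [c2 - c3, c2 + c3 + c4]])
assert np.allclose(B @ B.T, A)
```

The square-root property follows from the Cayley–Hamilton theorem applied to the 2 × 2 matrix A.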

4.3. Application in engineering: Mechanical vibration model

Consider the following general form of a second-order SDE:

$$u''=f(t,u,u')+g(t,u,u')\frac{dW(t)}{dt},\qquad u(0)=u_0,\quad u'(0)=u_1,\tag{4.3}$$

where u is a one-dimensional stochastic process defined on the closed time interval [0, T], dW(t)/dt is white noise (the formal derivative of the Wiener process W), and the real functions f and g are stochastically integrable. We can write (4.3) as a pair of first-order equations for X₁ and X₂:
$$d\mathbf X(t)=\mathbf a(t,\mathbf X(t))\,dt+\mathbf b(t,\mathbf X(t))\,dW(t),\quad t\in[0,T],\qquad\mathbf X(0)=\mathbf X_0,$$

where

$$\mathbf X=\begin{bmatrix}X_1\\ X_2\end{bmatrix}=\begin{bmatrix}u\\ u'\end{bmatrix},\quad \mathbf a=\begin{bmatrix}X_2\\ f(t,X_1,X_2)\end{bmatrix},\quad \mathbf b=\begin{bmatrix}0\\ g(t,X_1,X_2)\end{bmatrix},\quad \mathbf X_0=\begin{bmatrix}u_0\\ u_1\end{bmatrix}.$$
In mechanical vibration, the general governing equation of motion for a single-degree-of-freedom dynamic system has
the form:
x′′ (t) + r(x(t)) + c(x(t), x′ (t)) = f (t), (4.4)
where x(t) is the displacement of the mass from the rest point, r(x(t)) represents restoring forces, c(x(t), x′ (t)) models
damping forces, and f (t) is a stochastic excitation process [33,34].

Engineering applications of the stochastic differential equation (4.4) arise, for example, in reliability analyses of structures subject to wind, current, or earthquake loads. For instance, a spring–mass system can be modeled by an SDE of the form (4.4), where x(t) is the displacement of the mass from equilibrium, v(t) = x′(t) is the velocity, and m is the mass. An important example of (4.4) is the second-order stochastic system

$$mx''+cx'+kx=\sqrt{2\gamma^2\lambda}\,\frac{dW(t)}{dt},\qquad x(0)=x_0,\quad v(0)=v_0.\tag{4.5}$$

Equations of the form (4.5) are well known in the study of random vibration for mechanical systems [33,34] and represent a form of SDE [35]. This application will be considered in Example 5.2.
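As an illustration of the reduction to a first-order system, the following sketch integrates (4.5) with Euler–Maruyama (again, not the SDG scheme of the paper; the function name and the default parameter values are ours):

```python
# Euler-Maruyama path of the randomly forced oscillator (4.5), written as
# the first-order system  dX1 = X2 dt,
#                         dX2 = (-(c/m) X2 - (k/m) X1) dt + (sqrt(2 gamma^2 lam)/m) dW.
import numpy as np

def simulate_oscillator(T=10.0, N=10000, m=1.0, c=0.5, k=2.0,
                        gamma=0.3, lam=1.0, x0=1.0, v0=0.0, seed=0):
    """One Euler-Maruyama path of (4.5) on [0, T]; returns (x(T), v(T))."""
    rng = np.random.default_rng(seed)
    h = T / N
    sigma = np.sqrt(2.0 * gamma**2 * lam) / m
    x, v = x0, v0
    for _ in range(N):
        dW = rng.normal(0.0, np.sqrt(h))
        # tuple assignment evaluates the right-hand side with the old (x, v)
        x, v = x + v * h, v + (-(c / m) * v - (k / m) * x) * h + sigma * dW
    return x, v
```

Setting gamma = 0 recovers the deterministic damped oscillator, which decays to rest; this gives a quick sanity check of the scheme.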

5. Numerical examples

In this section, we test the convergence of the proposed SDG scheme on several numerical examples. We use $X^{(j)}(T)$ and $\hat X_h^{(j)}(T)$ to denote the exact solution and the SDG solution at the final time $t_N=T$ in the jth simulation, respectively. The global error at time T of simulation j is denoted by $\epsilon_j=\left|X^{(j)}(T)-\hat X_h^{(j)}(T)\right|$. In all numerical experiments, the mean absolute error, defined by $\hat\epsilon=\frac{1}{M}\sum_{j=1}^M\epsilon_j$, where M denotes the total number of random simulations, is used to estimate the error of the proposed SDG scheme. We note that $\hat\epsilon$ approximates the mean error $E\left[|X(T)-\hat X_h(T)|\right]$. The numerical rate of convergence, defined by $\hat\alpha=\frac{\ln(\hat\epsilon^{N_1}/\hat\epsilon^{N_2})}{\ln(N_2/N_1)}$, where $\hat\epsilon^{N}$ denotes the mean absolute error using N elements, is used to measure the convergence properties of the proposed SDG scheme. In the actual implementation, different values of h were used. For each step size h, M runs are performed with different samples of noise. In all experiments, we use a random number generator to produce independent pseudo-random numbers from the N(0, 1) distribution. We do multiple runs of the code with different random seeds so that we can collect results from various sample paths. We note that the approximate solutions include computer rounding error.
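The rate computation described above can be sketched as follows (the helper function and the sample error values are ours, chosen to mimic second-order decay):

```python
# Observed convergence rates from mean errors on a sequence of meshes:
# alpha = ln(e_{N1}/e_{N2}) / ln(N2/N1) for consecutive mesh pairs.
import math

def convergence_rate(errors):
    """errors: list of (N, mean_error) pairs, N increasing."""
    rates = []
    for (N1, e1), (N2, e2) in zip(errors, errors[1:]):
        rates.append(math.log(e1 / e2) / math.log(N2 / N1))
    return rates

# Hypothetical errors decaying like C * N^{-2} (order-2 convergence):
data = [(8, 1.6e-1), (16, 4.0e-2), (32, 1.0e-2)]
print(convergence_rate(data))   # rates close to 2
```

Applying this to the columns of Tables 1–3 reproduces the reported orders.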

Example 5.1. We start from a decoupled system of two equations:

$$dY_1(t)=a_1Y_1(t)\,dt+b_1Y_1(t)\,dW_1(t),\qquad dY_2(t)=a_2Y_2(t)\,dt+b_2Y_2(t)\,dW_2(t),\quad t\in[0,T],$$

subject to $Y_1(0)=Y_{0,1}$ and $Y_2(0)=Y_{0,2}$, where $a_i$ and $b_i$ for i = 1, 2 are constants. The exact solution of this system is [1]

$$Y_1(t)=Y_{0,1}e^{\left(a_1-\frac{b_1^2}{2}\right)t+b_1W_1(t)},\qquad Y_2(t)=Y_{0,2}e^{\left(a_2-\frac{b_2^2}{2}\right)t+b_2W_2(t)}.$$

Let $X_1=Y_1+Y_2$ and $X_2=Y_1-Y_2$; then we obtain the following coupled two-dimensional linear system

$$dX_1=\frac12\left(a_1(X_1+X_2)+a_2(X_1-X_2)\right)dt+\frac{b_1}{2}(X_1+X_2)\,dW_1+\frac{b_2}{2}(X_1-X_2)\,dW_2,\tag{5.1a}$$

$$dX_2=\frac12\left(a_1(X_1+X_2)-a_2(X_1-X_2)\right)dt+\frac{b_1}{2}(X_1+X_2)\,dW_1-\frac{b_2}{2}(X_1-X_2)\,dW_2,\tag{5.1b}$$
where $X_1(0)=Y_1(0)+Y_2(0)$ and $X_2(0)=Y_1(0)-Y_2(0)$. The exact solution of the system (5.1) is given by

$$\mathbf X(t)=\begin{bmatrix}\frac12\left(X_{0,1}+X_{0,2}\right)e^{\left(a_1-\frac{b_1^2}{2}\right)t+b_1W_1(t)}+\frac12\left(X_{0,1}-X_{0,2}\right)e^{\left(a_2-\frac{b_2^2}{2}\right)t+b_2W_2(t)}\\[1ex] \frac12\left(X_{0,1}+X_{0,2}\right)e^{\left(a_1-\frac{b_1^2}{2}\right)t+b_1W_1(t)}-\frac12\left(X_{0,1}-X_{0,2}\right)e^{\left(a_2-\frac{b_2^2}{2}\right)t+b_2W_2(t)}\end{bmatrix},\quad t\in[0,T].$$
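Given a sampled Brownian path, the exact solution above can be evaluated directly at any time; this is how reference values for the error tables can be produced. A minimal sketch (the function name is ours):

```python
# Exact solution of the coupled linear system (5.1) at time T,
# given the sampled Brownian values W1(T) and W2(T).
import numpy as np

def exact_X(T, a1, a2, b1, b2, X01, X02, W1_T, W2_T):
    """Return (X1(T), X2(T)) from the closed-form solution of (5.1)."""
    Y1 = 0.5 * (X01 + X02) * np.exp((a1 - 0.5 * b1**2) * T + b1 * W1_T)
    Y2 = 0.5 * (X01 - X02) * np.exp((a2 - 0.5 * b2**2) * T + b2 * W2_T)
    return Y1 + Y2, Y1 - Y2
```

With a₁ = a₂ = b₁ = b₂ = 0 the solution is constant in time, which gives a quick sanity check of the formula.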

We choose the parameters as a₁ = 1, a₂ = 2, b₁ = 2, b₂ = 1, X₀ = [1, 2]^t, and T = 1. We apply the proposed SDG method to solve (5.1) on a uniform mesh having N = 2ⁿ, n = 1, 2, . . . , 7 elements. In Fig. 1 we show the exact mean solutions E[X₁(t)], E[X₂(t)] with M = 10 sample paths using p = 1 and N = 8. In Figs. 2 and 3 we present the exact mean solutions E[X₁(t)], E[X₂(t)] and 1000 sample paths using p = 1 and N = 8, 64. The exact mean values E[X₁(t)], E[X₂(t)] and the mean of 1000 sample paths using N = 8, 64 and p = 1 are shown in Figs. 4 and 5. In Figs. 6 and 7, we use 1000 simulations to show the errors $E[X_k(t)]-E\left[X_{k,h}\right]$ using N = 8, 64 and p = 1. In Table 1 and Fig. 8, we simulate M = 1000 sample trajectories and present the errors $\hat\epsilon_k=\frac{1}{M}\sum_{j=1}^M\left|X_k^{(j)}(T)-X_{k,h}^{(j)}(T)\right|$, k = 1, 2, the L² errors $\bar\epsilon_k=\frac{1}{M}\sum_{j=1}^M\left(\int_0^T\left|X_k^{(j)}(t)-X_{k,h}^{(j)}(t)\right|^2dt\right)^{1/2}$, k = 1, 2, and their numerical orders of convergence $\hat\alpha_k=\frac{\ln(\hat\epsilon_k^{N_1}/\hat\epsilon_k^{N_2})}{\ln(N_2/N_1)}$ and $\bar\alpha_k=\frac{\ln(\bar\epsilon_k^{N_1}/\bar\epsilon_k^{N_2})}{\ln(N_2/N_1)}$, k = 1, 2, using p = 1 and N = 2ⁿ, n = 1, 2, . . . , 7 elements. These results suggest an O(h²) convergence rate in the mean-square sense, in full agreement with the theoretical result. Finally, in Table 2 and Fig. 9 we display the errors and their numerical orders of convergence using p = 2 and N = 2ⁿ, n = 1, 2, . . . , 7 elements. These results indicate that the SDG method is strongly convergent with order 2 in the mean-square sense.

Example 5.2. In this example we apply our scheme to the following system with d = 2 and m = 2 [1]:

$$d\begin{bmatrix}X_1\\ X_2\end{bmatrix}=\begin{bmatrix}X_2\\ -X_1\end{bmatrix}dt+\begin{bmatrix}\alpha & 0\\ 0 & \beta\end{bmatrix}d\begin{bmatrix}W_1\\ W_2\end{bmatrix},\quad t\in[0,2\pi],\qquad\mathbf X(0)=\mathbf X_0,\tag{5.2}$$

where α and β are constants. The exact solution is given by

$$X_1(t)=X_{0,1}\cos(t)+X_{0,2}\sin(t)+\alpha\int_0^t\cos(t-s)\,dW_1(s)+\beta\int_0^t\sin(t-s)\,dW_2(s),$$

$$X_2(t)=-X_{0,1}\sin(t)+X_{0,2}\cos(t)-\alpha\int_0^t\sin(t-s)\,dW_1(s)+\beta\int_0^t\cos(t-s)\,dW_2(s).$$

Fig. 1. The mean solution E[X1 (t)] (left) and E[X2 (t)] (right) with 10 sample paths obtained using the DG method for Example 5.1 using N = 8 and
p = 1.

Fig. 2. The mean solution E[X1 (t)] (left) and E[X2 (t)] (right) with 1000 sample paths obtained using the DG method for Example 5.1 using N = 8
and p = 1.

Fig. 3. The mean solution E[X1 (t)] (left) and E[X2 (t)] (right) with 1000 sample paths obtained using the DG method for Example 5.1 using N = 64
and p = 1.

The means of X₁ and X₂ are given by

$$E[X_1(t)]=X_{0,1}\cos(t)+X_{0,2}\sin(t),\qquad E[X_2(t)]=-X_{0,1}\sin(t)+X_{0,2}\cos(t).$$

We choose X₀,₁ = 0, X₀,₂ = 1, and α = β = 1. We solve (5.2) using the SDG method on a uniform mesh having N = 2ⁿ, n = 3, 4, . . . , 10 elements. In Fig. 10 we display the exact mean solutions E[X_k(t)], k = 1, 2, and 100 sample paths using N = 128 and p = 1. The exact mean values E[X_k(t)], k = 1, 2, and the mean of 100 sample paths using N = 128 and p = 1 are shown in Fig. 11. In Fig. 12, we use 100 simulations to show the errors $E[X_k(t)]-E\left[X_{k,h}\right]$ using N = 128

Fig. 4. Mean solution E[X1 (t)] (left), E[X2 (t)] (right) and the mean of 1000 sample paths for Example 5.1 using N = 8 and p = 1.

Fig. 5. Mean solution E[X1 (t)] (left), E[X2 (t)] (right) and the mean of 1000 sample paths for Example 5.1 using N = 64 and p = 1.

Fig. 6. The errors E[X1] − E[X1,h] (left) and E[X2] − E[X2,h] (right) for Example 5.1 using N = 8 and p = 1. E[Xk,h], k = 1, 2, are obtained by averaging the solutions of 1000 simulations.

and p = 1. Finally, Table 3 and Fig. 13 show that the SDG method has order of convergence 2 in the mean-square sense,
which confirms the theoretical results.

Example 5.3. Consider the stochastic SIR model (4.1) describing the spread of the disease. As in [31], we choose the parameters Λ = 0.4, β = 0.8, µ = 0.2, ϵ = 0.1, γ = 0.2, σ₁ = σ₂ = σ₃ = σ₄ = 0.2, and the initial condition [S(0), I(0), R(0)]^t = [0.62, 0.54, 0.54]^t. In Fig. 14, we present the SDG solutions over the interval [0, 400] using p = 1, M = 100, and N = 1024. These results are similar to those in [31]. In Table 4, we report the approximations $X_{i,h}$, i = 1, 2, 3, at T = 1 using N = 2ⁿ, n = 1, 2, . . . , 10 with M = 10, M = 100, and M = 1000. It can be seen that the SDG solution converges under mesh refinement. Since the exact solution is not available, we compute the strong error of the numerical

Fig. 7. The errors E[X1] − E[X1,h] (left) and E[X2] − E[X2,h] (right) for Example 5.1 using N = 64 and p = 1. E[Xk,h], k = 1, 2, are obtained by averaging the solutions of 1000 simulations.

Fig. 8. The errors ϵ̂k , ϵ̄k and their orders of convergence α̂k and ᾱk with k = 1, 2 for Example 5.1 on uniform meshes having N = 2n , n = 1, 2, . . . , 7
elements using p = 1 and M = 1000 simulations.

Table 1
The errors ϵ̂k , ϵ̄k and their orders of convergence α̂k and ᾱk with k = 1, 2 for Example 5.1 on uniform meshes having
N = 2n , n = 1, 2, . . . , 7 elements using p = 1 and M = 1000 simulations.
N ϵ̂1 α̂1 ϵ̂2 α̂2 ϵ̄1 ᾱ1 ϵ̄2 ᾱ2
2 2.5920e+00 NA 2.8507e+00 NA 7.7878e−01 NA 8.5713e−01 NA
4 5.2255e−01 2.3104 5.4828e−01 2.3783 1.5581e−01 2.3214 1.6370e−01 2.3884
8 1.2671e−01 2.0440 1.2873e−01 2.0906 3.7895e−02 2.0397 3.8384e−02 2.0925
16 3.2760e−02 1.9516 3.3064e−02 1.9610 9.5101e−03 1.9945 9.5709e−03 2.0038
32 8.5343e−03 1.9406 8.5560e−03 1.9503 2.3936e−03 1.9903 2.3964e−03 1.9978
64 2.2158e−03 1.9454 2.2167e−03 1.9485 6.0188e−04 1.9916 6.0153e−04 1.9942
128 5.6918e−04 1.9609 5.6802e−04 1.9644 1.5091e−04 1.9958 1.5079e−04 1.9961

Table 2
The errors ϵ̂k , ϵ̄k and their orders of convergence α̂k and ᾱk with k = 1, 2 for Example 5.1 on uniform meshes having
N = 2n , n = 1, 2, . . . , 7 elements using p = 2 and M = 1000 simulations.
N ϵ̂1 α̂1 ϵ̂2 α̂2 ϵ̄1 ᾱ1 ϵ̄2 ᾱ2
2 3.1229e+00 NA 3.5454e+00 NA 8.6404e−01 NA 9.8736e−01 NA
4 5.2762e−01 2.5653 5.5467e−01 2.6762 1.5648e−01 2.4651 1.6453e−01 2.5852
8 1.2681e−01 2.0568 1.2884e−01 2.1060 3.7917e−02 2.0450 3.8400e−02 2.0992
16 3.2763e−02 1.9526 3.3067e−02 1.9621 9.5116e−03 1.9951 9.5716e−03 2.0043
32 8.5344e−03 1.9407 8.5560e−03 1.9504 2.3937e−03 1.9905 2.3965e−03 1.9979
64 2.2158e−03 1.9454 2.2167e−03 1.9485 6.0188e−04 1.9917 6.0153e−04 1.9942
128 5.6918e−04 1.9609 5.6802e−04 1.9644 1.5091e−04 1.9958 1.5079e−04 1.9961


Fig. 9. The errors ϵ̂k , ϵ̄k and their orders of convergence α̂k and ᾱk with k = 1, 2 for Example 5.1 on uniform meshes having N = 2n , n = 1, 2, . . . , 7
elements using p = 2 and M = 1000 simulations.

Fig. 10. The mean solution E[X1 (t)] (left) and E[X2 (t)] (right) with 100 sample paths obtained using the DG method for Example 5.2 using N = 128
and p = 1.

Fig. 11. Mean solution E[X1 (t)] (left), E[X2 (t)] (right) and the mean of 100 sample paths for Example 5.2 using N = 128 and p = 1.

solution as

$$\hat\epsilon_{i,h}=\frac{1}{M}\sum_{j=1}^M\left|\hat X_{i,2h}^{(j)}(T)-\hat X_{i,h}^{(j)}(T)\right|,$$

and the strong convergence order is defined as $\hat\alpha_{i,h}=-\frac{\ln\left(\hat\epsilon_{i,h}/\hat\epsilon_{i,2h}\right)}{\ln(2)}$. These definitions can be found in [36]. In Table 5, we report the approximations $X_{i,h}$, i = 1, 2, 3, and their orders of convergence at T = 1 using N = 2ⁿ, n = 5, 6, . . . , 10 with

Fig. 12. The errors E[X1] − E[X1,h] (left) and E[X2] − E[X2,h] (right) for Example 5.2 using N = 128 and p = 1. E[Xk,h], k = 1, 2, are obtained by
averaging the solution of 100 simulations.

Fig. 13. The errors ϵ̂k, ϵ̄k, and their orders of convergence α̂k and ᾱk with k = 1, 2 for Example 5.2 on uniform meshes having N = 2^n, n = 3, 4, . . . , 10
elements using p = 1 and M = 100 simulations.

Table 3
The errors ϵ̂k, ϵ̄k, and their orders of convergence α̂k and ᾱk with k = 1, 2 for Example 5.2 on uniform meshes having
N = 2^n, n = 3, 4, . . . , 10 elements using p = 1 and M = 100 simulations.
N ϵ̂1 α̂1 ϵ̂2 α̂2 ϵ̄1 ᾱ1 ϵ̄2 ᾱ2
2^3 6.5802 NA 6.7386 NA 10.679 NA 10.662 NA
2^4 1.9159 1.7801 1.9496 1.7893 2.9745 1.8440 2.9695 1.8442
2^5 5.1464e−01 1.8964 5.0751e−01 1.9417 7.5637e−01 1.9755 7.4612e−01 1.9927
2^6 1.3257e−01 1.9569 1.3075e−01 1.9566 1.9066e−01 1.9881 1.8785e−01 1.9899
2^7 3.4347e−02 1.9485 3.3279e−02 1.9742 4.7934e−02 1.9918 4.7157e−02 1.9940
2^8 8.7060e−03 1.9801 8.5028e−03 1.9686 1.2015e−02 1.9962 1.1816e−02 1.9967
2^9 2.2069e−03 1.9800 2.1510e−03 1.9829 3.0055e−03 1.9992 2.9530e−03 2.0005
2^10 5.5672e−04 1.9870 5.4126e−04 1.9906 7.5179e−04 1.9992 7.3862e−04 1.9993

M = 1000. It can be seen that the SDG scheme is convergent with order one in the mean-square sense. Similar results
are observed using M = 10, 100, 10,000, and 100,000 simulations; these results are not included to save space.
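For concreteness, the error and order computations above can be sketched in a few lines of Python. This is an illustration, not the authors' code; the function name `strong_error_and_order` and the array layout are our assumptions. Note that the same Brownian increments must drive the coupled sample paths at step sizes h and 2h for the difference to measure strong convergence.

```python
import numpy as np

def strong_error_and_order(X_fine_T, X_coarse_T, eps_prev=None):
    """Compute the error proxy
        eps_h = (1/M) * sum_j |X_{2h}^{(j)}(T) - X_h^{(j)}(T)|
    over M coupled sample paths, and the empirical strong order
        alpha_h = -ln(eps_h / eps_prev) / ln 2
    when the error at the next-coarser resolution (eps_prev) is given.
    X_fine_T, X_coarse_T: length-M arrays of terminal values at T."""
    eps_h = float(np.mean(np.abs(np.asarray(X_coarse_T) - np.asarray(X_fine_T))))
    alpha_h = None if eps_prev is None else -np.log(eps_h / eps_prev) / np.log(2.0)
    return eps_h, alpha_h
```

Halving the error at each mesh refinement then yields an observed order of one, consistent with the behavior reported in Table 5.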

Remark 5.1. We note that the approximation Xh (t) of the solution X(t) of the original SDE (1.2) consists of two steps:

1. Approximate X(t) by the Wong–Zakai approximation X̂(t), where X̂(t) is the solution of (2.7).
2. Approximate X̂(t) by the DG approximation Xh (t), which depends on p.
We would like to mention that in order to improve the convergence speed of Xh (t) to X(t), we need to improve the error
of the Wong–Zakai approximation. This is currently under investigation and will be discussed in a separate manuscript.
We expect that a new technique will be needed to approximate the Wiener process Wj (t), t ∈ [0, T ].
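As a reference point for step 1, the piecewise-linear smoothing of a Wiener path that underlies a Wong–Zakai-type approximation can be sketched as follows. This is a hypothetical illustration on a uniform grid; `piecewise_linear_wiener` is not from the paper.

```python
import numpy as np

def piecewise_linear_wiener(T, N, rng=None):
    """Sample a Wiener path W on the uniform grid t_n = n*T/N and return
    a callable evaluating its piecewise-linear interpolant W_hat.
    Between grid points W_hat is differentiable, so the smoothed SDE can
    be treated pathwise as a deterministic ODE on each element."""
    rng = np.random.default_rng() if rng is None else rng
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    # Brownian increments ~ N(0, h); cumulative sum gives W(t_n), with W(0) = 0.
    W = np.concatenate(([0.0], np.cumsum(np.sqrt(h) * rng.standard_normal(N))))
    def W_hat(s):
        return np.interp(s, t, W)
    return t, W, W_hat
```

The interpolant agrees with the sampled path at the grid points and is linear in between, which is the property that lets the DG machinery for deterministic ODEs be applied element by element.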


Fig. 14. The solution of the stochastic system (4.1) using N = 1024 and p = 1. E[Xk,h], k = 1, 2, 3, are obtained by averaging the solution of 100
simulations.

Table 4
The SDG solution Xi,h(1), i = 1, 2, 3, for Example 5.3 on uniform meshes having N = 2^n, n = 1, 2, . . . , 10 elements using
p = 1 and M = 10, 100, 1000 simulations.
M = 10 M = 100 M = 1000
N X1,h X2,h X3,h X1,h X2,h X3,h X1,h X2,h X3,h
2 0.4134 0.2949 0.2901 0.3892 0.2733 0.2847 0.3922 0.2818 0.2880
4 0.3990 0.2844 0.2846 0.3862 0.2737 0.2810 0.3872 0.2779 0.2827
8 0.3916 0.2804 0.2820 0.3850 0.2751 0.2801 0.3855 0.2771 0.2810
16 0.3880 0.2786 0.2809 0.3846 0.2760 0.2799 0.3849 0.2770 0.2804
32 0.3862 0.2778 0.2804 0.3845 0.2765 0.2799 0.3847 0.2770 0.2801
64 0.3854 0.2774 0.2801 0.3845 0.2768 0.2799 0.3846 0.2771 0.2800
128 0.3849 0.2773 0.2800 0.3845 0.2770 0.2799 0.3845 0.2771 0.2799
256 0.3847 0.2772 0.2799 0.3845 0.2770 0.2799 0.3845 0.2771 0.2799
512 0.3846 0.2771 0.2799 0.3845 0.2771 0.2799 0.3845 0.2771 0.2799
1024 0.3845 0.2771 0.2799 0.3845 0.2771 0.2799 0.3845 0.2771 0.2799

Table 5
The errors ϵ̂i,h and their orders of convergence α̂i,h with i = 1, 2, 3 for Example 5.3 on uniform meshes having
N = 2^n, n = 5, 6, . . . , 10 elements using p = 1 and M = 1000 simulations.
N ϵ̂1,h α̂1,h ϵ̂2,h α̂2,h ϵ̂3,h α̂3,h
32 3.8466692770e−01 NA 2.7703405387e−01 NA 2.8009688502e−01 NA
64 3.8456547043e−01 NA 2.7706251729e−01 NA 2.7998316838e−01 NA
128 3.8451810052e−01 1.0988 2.7708149461e−01 0.5848 2.7992979715e−01 1.0913
256 3.8449517977e−01 1.0473 2.7709218122e−01 0.8285 2.7990402221e−01 1.0501
512 3.8448399951e−01 1.0357 2.7709780638e−01 0.9258 2.7989134060e−01 1.0232
1024 3.8447845789e−01 1.0126 2.7710069295e−01 0.9625 2.7988505566e−01 1.0128

6. Concluding remarks

In this paper, we developed and analyzed a DG method for solving systems of stochastic differential equations (SDEs)
arising in population biology, physics, and mathematical finance. We first constructed a new approximate system of
SDEs on each element whose solution converges to the solution of the original system. The approximate system is then
discretized using the standard DG method for deterministic equations. We proved that the proposed scheme is convergent
in the mean-square sense when the noise is additive. Several linear and nonlinear test problems driven by additive
and multiplicative noises are given to show the accuracy and effectiveness of the proposed SDG method. Currently, we
are designing SDG methods for stochastic partial differential equations. We are planning to investigate the numerical
approximations for stochastic differential equations under conditions weaker than the Lipschitz condition. Finally,
we would like to mention that our error analysis is valid for systems of SDEs driven by additive noise; for the multiplicative
noise case, the analysis remains an open problem. We believe that a new technique will be needed to obtain similar
convergence results.

Acknowledgments

The authors would like to thank the two anonymous reviewers for the valuable comments and suggestions which
improved the quality of the paper. This research was supported by the Kuwait Foundation for the Advancement of Sciences
(KFAS).

References

[1] P. Kloeden, E. Platen, Numerical solution of stochastic differential equations, in: Stochastic Modelling and Applied Probability, Springer Berlin
Heidelberg, 2010.
[2] B. Oksendal, Stochastic Differential Equations: An Introduction with Applications, Springer, 2010.
[3] E. Platen, An introduction to numerical methods for stochastic differential equations, Acta Numer. 8 (1999) 197–246.
[4] M. Baccouch, A stochastic local discontinuous Galerkin method for stochastic two-point boundary-value problems driven by additive noises,
Appl. Numer. Math. 128 (2018) 43–64.
[5] M. Baccouch, B. Johnson, A high-order discontinuous Galerkin method for Itô stochastic ordinary differential equations, J. Comput. Appl. Math.
308 (2016) 138–165.
[6] M. Baccouch, H. Temimi, M. Ben-Romdhane, The discontinuous Galerkin method for stochastic differential equations driven by additive noises,
Appl. Numer. Math. 152 (2020) 285–309.
[7] K. Burrage, P. Burrage, High strong order explicit Runge-Kutta methods for stochastic ordinary differential equations, Appl. Numer. Math. 22
(1996) 81–101.
[8] Y. Cao, L. Yin, Spectral Galerkin method for stochastic wave equations driven by space–time white noise, Commun. Pure Appl. Anal. 6 (3)
(2007) 607–617.
[9] D.J. Higham, An algorithmic introduction to numerical simulation of stochastic differential equations, SIAM Rev. 43 (2001) 525–546.
[10] K. Ito, Approximation of the Zakai equation for nonlinear filtering, SIAM J. Control Optim. 34 (2) (1996) 620–634.
[11] G.N. Milstein, M.V. Tretyakov, Stochastic Numerics for Mathematical Physics, Springer Berlin Heidelberg, Berlin, Heidelberg, 2004.
[12] M. Baccouch, Analysis of a posteriori error estimates of the discontinuous Galerkin method for nonlinear ordinary differential equations, Appl.
Numer. Math. 106 (2016) 129–153.
[13] W.H. Reed, T.R. Hill, Triangular Mesh Methods for the Neutron Transport Equation, Tech. Rep. LA-UR-73-479, Los Alamos Scientific Laboratory,
Los Alamos, 1973.
[14] N.V. Krylov, Introduction to the Theory of Diffusion Processes, American Mathematical Society, Providence, R.I, 1995.
[15] X. Mao, Stochastic Differential Equations and Applications, Horwood Pub, Chichester, 2008.
[16] O. Calin, An Informal Introduction to Stochastic Calculus with Applications, World Scientific Publishing, 2015.
[17] E. Wong, M. Zakai, On the convergence of ordinary integrals to stochastic integrals, Ann. Math. Stat. (1965) 1560–1564.
[18] E. Wong, M. Zakai, On the relation between ordinary and stochastic differential equations, Internat. J. Engrg. Sci. 3 (2) (1965) 213–229.
[19] E.J. McShane, Stochastic differential equations and models of random processes, in: Proceedings of the Sixth Berkeley Symposium on
Mathematical Statistics and Probability, Volume 3: Probability Theory, University of California Press, Berkeley, Calif., 1972.
[20] E.J. McShane, Stochastic Calculus and Stochastic Models, Academic Press, New York, 1974.
[21] D.W. Stroock, S.R.S. Varadhan, On the support of diffusion processes with applications to the strong maximum principle, in: Proceedings of the
Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 3: Probability Theory, University of California Press, Berkeley,
Calif., 1972.
[22] N. Ikeda, S. Nakao, Y. Yamato, A class of approximations of Brownian motion, Publ. Res. Inst. Math. Sci. 13 (1977) 285–300.
[23] S. Watanabe, N. Ikeda, Stochastic Differential Equations and Diffusion Processes, North-Holland Mathematical Library, Elsevier Science, 1981.
[24] D. Kelly, I. Melbourne, Smooth approximation of stochastic differential equations, Ann. Probab. 44 (1) (2016) 479–520.
[25] M. Baccouch, The discontinuous Galerkin finite element method for ordinary differential equations, in: R. Petrova (Ed.), Perusal of the Finite
Element Method, IntechOpen, 2016, pp. 31–68.
[26] G. Prato, Stochastic Equations in Infinite Dimensions, Cambridge University Press, Cambridge, United Kingdom, 2014.
[27] M. Abramowitz, I.A. Stegun, Handbook of Mathematical Functions, Dover, New York, 1965.
[28] M. Delfour, W. Hager, F. Trochu, Discontinuous Galerkin methods for ordinary differential equations, Math. Comp. 36 (154) (1981) 455–473.
[29] D.Q. Jiang, J.J. Yu, C.Y. Ji, N.Z. Shi, Asymptotic behavior of global positive solution to a stochastic SIR model, Math. Comput. Model. 54 (2011)
221–232.
[30] Y.G. Lin, D.Q. Jiang, Long-time behavior of perturbed SIR model by white noise, Discrete Contin. Dyn. Syst. Ser. B 18 (2013) 1873–1887.
[31] Y. Zhou, W. Zhang, S. Yuan, Survival and stationary distribution of a SIR epidemic model with stochastic perturbations, Appl. Math. Comput.
244 (2014) 118–131.
[32] E. Allen, Modeling with Ito Stochastic Differential Equations, Springer, The Netherlands, 2007.
[33] H. Langtangen, Numerical solution of first passage problems in random vibration, SIAM J. Sci. Comput. 15 (1994) 977–996.
[34] J. Roberts, First-passage probabilities for randomly excited systems: Diffusion methods, Probabilistic Eng. Mech. 1 (1986) 66–81.
[35] H. Schurz, On Stochastic Liénard Equations, Preprint m-05-008, Department of Mathematics, Southern Illinois University, 2006.
[36] W. Cao, Z. Zhang, G. Karniadakis, Numerical methods for stochastic delay differential equations via the Wong-Zakai approximation, SIAM J. Sci.
Comput. 37 (1) (2015) 295–318.
