
Annual Reviews in Control 51 (2021) 268–330


Review article

Control theory for stochastic distributed parameter systems, an engineering perspective✩

Qi Lü, Xu Zhang ∗

School of Mathematics, Sichuan University, Chengdu 610064, Sichuan Province, China

ARTICLE INFO

Keywords:
Stochastic distributed parameter system
Controllability
Optimal control
Pontryagin-type maximum principle
Stochastic linear quadratic problem

ABSTRACT

The main purpose of this paper is to survey some recent progress on control theory for stochastic distributed parameter systems, i.e., systems governed by stochastic differential equations in infinite dimensions, typically by stochastic partial differential equations. We explain the new phenomena and difficulties in the study of controllability and optimal control problems for one dimensional stochastic parabolic equations and stochastic hyperbolic equations. In particular, we shall see that both the formulation of the corresponding stochastic control problems and the tools to solve them may differ considerably from their deterministic/finite-dimensional counterparts. More importantly, one has to develop new tools, such as the stochastic transposition method introduced in our previous works, to solve some problems in this field.

1. Introduction

It is well-known that Control Theory was founded by N. Wiener in 1948. From then on, owing to the great effort of numerous mathematicians and engineers, this theory has been greatly extended to various complicated settings and widely used in science, technology and economics, particularly in Artificial Intelligence in recent years.

Roughly speaking, Control Theory as a whole is divided into two main sub-categories. The first one is control theory for deterministic systems, and the second one is that for stochastic systems. Inevitably, these two sub-categories are not completely separated but inextricably linked to each other.

Control theory for deterministic systems can be divided into two parts. The first one is that for finite dimensional systems, governed mainly by ordinary differential equations, and the second one is that for (deterministic) distributed parameter systems, mainly described by differential equations in infinite dimensional spaces, typically by partial differential equations (PDEs for short). Control theory for finite dimensional systems is by now relatively mature. For control theory of distributed parameter systems, one can find a huge list of works, but the field is still quite active.

Likewise, control theory for stochastic systems can be divided into two parts. The first part is that for stochastic finite dimensional systems, governed by stochastic (ordinary) differential equations, for which one can find a great many publications, including applications in Mathematical Finance; the second part is that for stochastic distributed parameter systems (SDPSs for short), described by stochastic differential equations in infinite dimensions, typically by stochastic partial differential equations (SPDEs for short).

Control theory for SDPSs is still at its very beginning stage, though it was ''born'' at almost the same time as that for deterministic distributed parameter control systems.

Due to the inherent complexity of the underlying processes, many control systems in reality (such as those in the microelectronics industry, in atmospheric motion, in communications and transportation, and also in chemistry, biology, the pharmaceutical industry, and so on) exhibit very complicated dynamics, including substantial model uncertainty, actuator and state constraints, and high dimensionality (usually even infinite). These systems are often best described by SPDEs or even more complicated stochastic equations in infinite dimensions (e.g., Carmona & Rozovskii, 1999; Da Prato & Zabczyk, 1992; Greenwood & Ward, 2016; Holden, Øksendal, Ubøe, & Zhang, 2010; Kotelenez, 2008; Majda, Timofeyev, & Vanden Eijnden, 2001; Murray, 2003). Some typical examples in this respect will be presented below.

From now on, we fix $T>0$ and a filtered probability space $(\Omega, \mathcal{F}, \mathbf{F}, P)$ satisfying the usual conditions (see Appendices A and B for the notations/notions used throughout this paper). In the rest of this paper, unless otherwise stated, we omit the argument $\omega\,(\in\Omega)$ in any random

✩ This work is the result of research projects funded by the NSF of China under grants 11971334, 12025105, 11931011 and 11821001, by the Chang Jiang Scholars Program from the Chinese Education Ministry, and by the Science Development Project of Sichuan University under grants 2020SCUNL201 and 2020SCUNL101.
∗ Corresponding author.
E-mail addresses: lu@scu.edu.cn (Q. Lü), zhang_xu@scu.edu.cn (X. Zhang).

https://doi.org/10.1016/j.arcontrol.2021.04.002
Received 31 December 2020; Received in revised form 25 March 2021; Accepted 1 April 2021
Available online 28 April 2021
1367-5788/© 2021 Elsevier Ltd. All rights reserved.
variable or stochastic process defined on the probability space $(\Omega, \mathcal{F}, P)$. Assume that $\{W(t)\}_{t\in[0,T]}$ is a 1-dimensional standard Brownian motion defined on $(\Omega, \mathcal{F}, \mathbf{F}, P)$.

Example 1.1 (Stochastic Parabolic Equations). The following equation was introduced to describe the evolution of the density of a bacteria population (e.g., Dawson, 1972):

$$
\begin{cases}
dy = \partial_{xx} y\,dt + \alpha\sqrt{y}\,dW(t) & \text{in } (0,T)\times(0,1),\\
y_x = 0 & \text{on } (0,T)\times\{0,1\},\\
y(0) = y_0 & \text{in } (0,1),
\end{cases}
\tag{1.1}
$$

where $\alpha>0$ is a given constant, and $y_0\in L^2(0,1)$. To control the population's density, people can put in and/or draw out some species. Under such actions, Eq. (1.1) becomes the following controlled stochastic parabolic equation:

$$
\begin{cases}
dy = (\partial_{xx} y + u_1)\,dt + (\alpha\sqrt{y} + u_2)\,dW(t) & \text{in } (0,T)\times(0,1),\\
y_x = 0 & \text{on } (0,T)\times\{0,1\},\\
y(0) = y_0 & \text{in } (0,1),
\end{cases}
\tag{1.2}
$$

where $u_1$ and $u_2$ are suitable functions which depend on the way of putting in and/or drawing out species.

Example 1.2 (Stochastic Wave Equation). To study the vibration of small strings, such as a DNA molecule, perturbed by a random force, people introduced the following stochastic wave equation (e.g., Funaki, 1983; Schlick, 1995):

$$
\begin{cases}
dy_t = (y_{xx} + a_1 y)\,dt + a_2 y\,dW(t) & \text{in } (0,T)\times(0,1),\\
y = 0 & \text{on } (0,T)\times\{0,1\},\\
y(0) = y_0,\quad y_t(0) = y_1 & \text{in } (0,1)
\end{cases}
\tag{1.3}
$$

with suitable coefficients $a_1, a_2$ and initial value $(y_0, y_1)$.

Many biological events are related to the DNA molecule. Hence, there is a strong motivation for controlling its motion. This leads to the study of the controlled stochastic wave equation as follows:

$$
\begin{cases}
dy_t = (y_{xx} + a_1 y + g_0)\,dt + (a_2 y + g_1)\,dW(t) & \text{in } (0,T)\times(0,1),\\
y = h_0 & \text{on } (0,T)\times\{0\},\\
y = h_1 & \text{on } (0,T)\times\{1\},\\
y(0) = y_0,\quad y_t(0) = y_1 & \text{in } (0,1).
\end{cases}
\tag{1.4}
$$

Here $y$ is the state, and $g_0, g_1, h_0$ and $h_1$ are the controls. A natural question is: can one drive the DNA molecule from $(y_0, y_1)$ to a given configuration $(y_0', y_1')$ by applying a control $(h_0, h_1, g_0, g_1)$?

The controls in (1.4) are the strongest ones that people can introduce into the system. However, as we shall see in Section 3, (1.4) is not exactly controllable for controls belonging to some reasonable set. This differs significantly from the well-known controllability property of deterministic wave equations (e.g., Zhang, 2011; Zuazua, 2007). Since (1.4) is a generalization of the classical wave equation to the stochastic setting, from the viewpoint of Control Theory, we believe that some key features have been ignored in the derivation of Eq. (1.4). Because of this, we proposed a refined model as follows (Lü & Zhang, 2019a):

Example 1.3 (A Refined Stochastic Wave Equation). In Lü and Zhang (2019a), the first equation in (1.3) was modified as the following one (for suitable coefficients $a_1, a_2$ and $a_3$):

$$
\begin{cases}
dy = \hat y\,dt + a_3 y\,dW(t) & \text{in } (0,T)\times(0,1),\\
d\hat y = (y_{xx} + a_1 y)\,dt + a_2 y\,dW(t) & \text{in } (0,T)\times(0,1).
\end{cases}
$$

Hence, similar to (1.4), we obtain the following control system:

$$
\begin{cases}
dy = (\hat y + a_5 u_1)\,dt + (a_3 y + u_1)\,dW(t) & \text{in } (0,T)\times(0,1),\\
d\hat y = (y_{xx} + a_1 y + a_4 u_2)\,dt + (a_2 y + u_2)\,dW(t) & \text{in } (0,T)\times(0,1),\\
y = h & \text{on } (0,T)\times\{0\},\\
y = 0 & \text{on } (0,T)\times\{1\},\\
y(0) = y_0,\quad \hat y(0) = \hat y_0 & \text{in } (0,1).
\end{cases}
\tag{1.5}
$$

In (1.5), $(y,\hat y)$ is the state, and $u_1$, $u_2$ and $h$ are the controls. The coefficients $a_4$ and $a_5$ are suitable stochastic processes. In the above, we put two controls $u_1$ and $u_2$ in the diffusion terms and one control $h$ on the boundary to obtain the exact controllability of the system (1.5). Usually, if we put a control in the diffusion term, it may affect the drift term in some way. Here we assume that the effects are of the form ''$a_5 u_1$'' and ''$a_4 u_2$'', as in the first and second equations of (1.5), respectively.

In Control Theory, it is of fundamental importance to ''understand'' and then to modify the behavior of the system under consideration by means of suitable ''control'' actions in an ''optimal'' way. This leads to the formulation of controllability and optimal control problems, actually the two fundamental problems that we shall focus on in the sequel.

Controllability means that one can find at least one way to achieve a goal. According to different goals, there are different notions of controllability, such as exact controllability, approximate controllability, null controllability, partial controllability, and so on. Optimal control means that people are expected to find the best way, in some sense, to achieve their goals.

In this paper, we mainly focus on some recent progress in the study of the above two problems for SDPSs. We shall outline the existing theory as it has been developed in the last two decades and indicate some important unsolved problems.

Although this paper is focused on SDPSs, it is helpful (in particular for beginners) to introduce some fundamental ideas in a simpler setting. Thus, we will review below, very briefly, the fundamental notions/results of control theory for (deterministic) finite dimensional systems.

Let us begin with the following linear control system:

$$
\begin{cases}
y_t = Ay + Bu & \text{in } [0,T],\\
y(0) = y_0,
\end{cases}
\tag{1.6}
$$

where $A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times m}$ ($n,m\in\mathbb{N}$), $y(\cdot)$ is the state, $u(\cdot)$ is the control, and $\mathbb{R}^n$ and $L^2(0,T;\mathbb{R}^m)$ are the state and control spaces, respectively.

Definition 1.1. The system (1.6) is called exactly controllable at time $T$ if for any $y_0, y_1\in\mathbb{R}^n$, there is a control $u(\cdot)\in L^2(0,T;\mathbb{R}^m)$ such that the solution $y(\cdot)$ to (1.6) satisfies $y(T)=y_1$.

It is well known (Kalman, 1961) that the exact controllability of (1.6) is equivalent to the condition that the rank of the matrix $(B, AB, \cdots, A^{n-1}B)$ is $n$, which is independent of $T$. Consequently, the exact controllability of (1.6) is a property which either holds or fails for all intervals $[0,T]$ whenever $T>0$. A direct proof of this nice property is based on the representation of solutions to (1.6) by means of the variation of constants formula. Unfortunately, such a method cannot be extended to study controllability problems of (stochastic) PDEs, for which people have to develop different approaches.

In the rest of this paper, to simplify the presentation, we shall write $C$ for a generic positive constant, which may vary from one place to another (unless otherwise stated).

For (1.6), the adjoint equation is¹

$$
\begin{cases}
z_t = -A^\top z & \text{in } [0,T],\\
z(T) = z_T\in\mathbb{R}^n.
\end{cases}
\tag{1.7}
$$

One can show the following result:

¹ For any matrix $D$, denote by $D^\top$ the transpose of $D$.
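The Kalman rank condition recalled above is straightforward to carry out numerically. The following minimal Python sketch (the matrices $A$ and $B$ are our own illustrative choices, not taken from the paper) builds the matrix $(B, AB, \cdots, A^{n-1}B)$ and checks its rank:

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack the blocks [B, AB, ..., A^{n-1}B] column-wise."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_exactly_controllable(A, B):
    """Kalman rank test: (1.6) is exactly controllable iff the rank equals n."""
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# A chain of two integrators driven through the second state: controllable.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
print(is_exactly_controllable(A, B))      # True

# The same A with the control entering only the first state: not controllable.
B_bad = np.array([[1.0],
                  [0.0]])
print(is_exactly_controllable(A, B_bad))  # False
```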
Theorem 1.1. The system (1.6) is exactly controllable at time $T$ if and only if solutions to (1.7) satisfy

$$
|z_T|^2_{\mathbb{R}^n} \le C\int_0^T |B^\top z(t)|^2_{\mathbb{R}^m}\,dt, \quad \forall\, z_T\in\mathbb{R}^n.
\tag{1.8}
$$

The inequality (1.8) is called an observability estimate for (1.7). The equivalence between the controllability of the control system and the observability of its adjoint equation is one of the most classical ingredients of controllability theory for finite dimensional linear control systems.

By Theorem 1.1, the controllability problem is reduced to an a priori estimate for the adjoint equation. This idea has been greatly extended to different kinds of control systems. Indeed, most of the controllability results for (stochastic) distributed parameter systems are proved by establishing the corresponding observability estimates for their adjoint equations, as we shall see in Sections 2–3 of this paper.

Besides the connection with controllability, observability has its own interest in Control Theory. Roughly speaking, it concerns whether the state of the system under consideration can be fully reconstructed from a measurement of the output during a given time interval. Mathematically, this is reduced to the problem of whether the total energy of the system can be estimated in terms of some partial energy.

Controllability problems have been studied extensively for deterministic distributed parameter systems (e.g., Coron, 2007; Fursikov & Imanuvilov, 1996; Li, 2010; Lions, 1988; Zhang, 2011; Zuazua, 2007 and the references therein). Recently, there have been some works on controllability problems for SDPSs (e.g., Dou & Lü, 2019; Fu & Liu, 2017a; Gao, Chen, & Li, 2015; Liu, 2014a; Lü, 2011, 2013a, 2014; Lü & Zhang, 0000a, 0000b, 2021; Tang & Zhang, 2009; Zhang, 2011 and the references therein). It is worth pointing out that controllability is strongly related to (or in some situations even equivalent to) other important issues in Control Theory, say stabilization, existence of an optimal control, and so on. One can find a large literature on these topics (e.g., Coron, 2007; Li & Yong, 1995; Lions, 1988; Zhang, 2011; Zuazua, 2007 and the references therein), addressing mainly the deterministic problems.

Next, we present a typical optimal control problem. Let $U$ be a nonempty set in $\mathbb{R}^m$ and

$$
\mathcal{U} \triangleq \big\{\, u : [0,T]\to U \mid u \text{ is measurable} \,\big\}.
$$

Consider the following controlled ordinary differential equation:

$$
\begin{cases}
y_t(t) = f(t, y(t), u(t)), & \text{a.e. } t\in[0,T],\\
y(0) = y_0,
\end{cases}
\tag{1.9}
$$

with a cost functional

$$
J(u(\cdot)) = \int_0^T g(t, y(t), u(t))\,dt + h(y(T)), \quad u(\cdot)\in\mathcal{U},
\tag{1.10}
$$

where $y_0\in\mathbb{R}^n$, $y(\cdot)$ is the state, $u(\cdot)\in\mathcal{U}$ is the control, and $f : [0,T]\times\mathbb{R}^n\times\mathbb{R}^m\to\mathbb{R}^n$, $g : [0,T]\times\mathbb{R}^n\times\mathbb{R}^m\to\mathbb{R}$ and $h : \mathbb{R}^n\to\mathbb{R}$ are suitable functions. The optimal control problem is as follows:

Problem (DOP). To find a $\bar u(\cdot)\in\mathcal{U}$ which minimizes the cost functional (1.10), i.e.,

$$
J(\bar u(\cdot)) = \inf_{u(\cdot)\in\mathcal{U}} J(u(\cdot)).
\tag{1.11}
$$

Any control $\bar u(\cdot)\in\mathcal{U}$ satisfying (1.11) is called an optimal control. The corresponding solution to (1.9), denoted by $\bar y(\cdot)$, is called an optimal state, and $(\bar y(\cdot), \bar u(\cdot))$ is called an optimal pair.

Optimal control problems are strongly related to the classical calculus of variations and optimization theory. Nevertheless, since the control domain $U$ may be quite general, the classical variation technique cannot be applied to optimal control problems directly.

Similarly to Calculus, a usual way to solve an optimal control problem is to find some necessary conditions satisfied by optimal pairs. In particular, for Problem (DOP), one introduces the Hamiltonian

$$
\mathbb{H}(t,y,u,p) \triangleq \langle p, f(t,y,u)\rangle_{\mathbb{R}^n} - g(t,y,u), \quad (t,y,u,p)\in[0,T]\times\mathbb{R}^n\times U\times\mathbb{R}^n,
$$

and proves the following Pontryagin Maximum Principle (Boltyanskiy, Gamkrelidze, Mischenko, & Pontryagin, 1962):

Theorem 1.2. Let $(\bar y(\cdot), \bar u(\cdot))$ be an optimal pair. Then, for a.e. $t\in[0,T]$,

$$
\mathbb{H}(t, \bar y(t), \bar u(t), z(t)) = \max_{u\in U} \mathbb{H}(t, \bar y(t), u, z(t)),
\tag{1.12}
$$

where $z(\cdot) : [0,T]\to\mathbb{R}^n$ solves

$$
\begin{cases}
z_t(t) = -f_y(t, \bar y(t), \bar u(t))^\top z(t) + g_y(t, \bar y(t), \bar u(t)), & \text{a.e. } t\in[0,T],\\
z(T) = -h_y(\bar y(T)).
\end{cases}
\tag{1.13}
$$

Pontryagin's maximum principle (Boltyanskiy et al., 1962) for deterministic optimal control problems in finite dimensions is one of the three milestones of modern optimal control theory. Such results have been extended to more general control systems (e.g., Boltyanskiy et al., 1962; Li & Yong, 1995; Lü & Zhang, 2014; Yong & Zhou, 1999 and the rich references cited therein).

Thanks to Theorem 1.2, the optimization problem (1.11) (which is usually infinite dimensional) is reduced to a finite dimensional optimization problem (1.12) (in the pointwise sense). As we shall see in Section 4, the story for the stochastic situation is quite different, especially in infinite dimensions.

Let $n\in\mathbb{N}$. Write $\mathbb{S}(\mathbb{R}^n)$ for the set of all $n\times n$ symmetric matrices. For $M, N\in\mathbb{S}(\mathbb{R}^n)$, the notation $M\ge N$ (resp. $M>N$) indicates that $M-N$ is positive semi-definite (resp. positive definite).

As a special case of Problem (DOP), we consider a linear quadratic optimal control problem (LQ problem for short), for which the control system is (1.6) with the following cost functional:

$$
\mathcal{J}(y_0; u(\cdot)) = \frac{1}{2}\int_0^T \big[\langle My(t), y(t)\rangle_{\mathbb{R}^n} + \langle Ru(t), u(t)\rangle_{\mathbb{R}^m}\big]\,dt + \frac{1}{2}\langle Gy(T), y(T)\rangle_{\mathbb{R}^n},
\tag{1.14}
$$

where $M\in\mathbb{S}(\mathbb{R}^n)$, $R\in\mathbb{S}(\mathbb{R}^m)$, $G\in\mathbb{S}(\mathbb{R}^n)$, and $u(\cdot)\in L^2(0,T;\mathbb{R}^m)$. In order to emphasize the dependence of the optimal control on the initial datum $y_0$, unlike (1.10), we also put it explicitly in the cost functional $\mathcal{J}(\cdot,\cdot)$ in (1.14).

The LQ problem for the system (1.6) is as follows:

Problem (DLQ). To find a $\bar u(\cdot)\in L^2(0,T;\mathbb{R}^m)$ which minimizes the cost functional (1.14), i.e.,

$$
\mathcal{J}(y_0; \bar u(\cdot)) = \inf_{u(\cdot)\in L^2(0,T;\mathbb{R}^m)} \mathcal{J}(y_0; u(\cdot)).
\tag{1.15}
$$

Kalman's theory of the LQ problem for finite dimensional systems (Kalman, 1961) is another of the three milestones of modern control theory and has been extensively studied for different kinds of control systems. It is fundamentally important for the following reasons:

• It can be used to model many problems in applications;
• Many optimal control problems for nonlinear systems can be reasonably approximated by LQ problems.

The deterministic LQ problem was first studied in Bellman, Glicksberg, and Gross (1958). In Kalman (1961), the optimal feedback control was found via the matrix-valued Riccati equation, and hence a systematic LQ control theory was established.

By Theorem 1.2, we have the following necessary condition for $(\bar y(\cdot), \bar u(\cdot))$ (for Problem (DLQ)):

$$
R\bar u(t) = -B^\top\psi(t), \quad \text{a.e. } t\in[0,T],
$$

where $\psi$ solves

$$
\begin{cases}
\psi_t(t) = -A^\top\psi(t) + M\bar y(t),\\
\psi(T) = -G\bar y(T).
\end{cases}
$$
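The observability estimate (1.8) can also be tested directly in this finite dimensional setting: since solutions to (1.7) are $z(t) = e^{A^\top(T-t)}z_T$, the right-hand side of (1.8) equals $\langle Wz_T, z_T\rangle_{\mathbb{R}^n}$ with the Gramian $W = \int_0^T e^{A(T-t)}BB^\top e^{A^\top(T-t)}\,dt$, so (1.8) holds if and only if $W$ is positive definite, which is in turn equivalent to the Kalman rank condition. A small numerical sketch (the matrices are our own illustrative choice; the Taylor-series exponential is adequate only for small, well-scaled matrices):

```python
import numpy as np

def mat_exp(M, terms=25):
    """Taylor-series matrix exponential (sufficient for the tiny matrices here)."""
    E = np.eye(M.shape[0])
    P = np.eye(M.shape[0])
    for k in range(1, terms):
        P = P @ M / k
        E = E + P
    return E

def gramian(A, B, T=1.0, steps=2000):
    """Midpoint-rule approximation of W = ∫_0^T e^{A(T-t)} B Bᵀ e^{Aᵀ(T-t)} dt."""
    n = A.shape[0]
    dt = T / steps
    W = np.zeros((n, n))
    for k in range(steps):
        s = T - (k + 0.5) * dt
        E = mat_exp(A * s)
        W += E @ B @ B.T @ E.T * dt
    return W

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (illustrative only)
B = np.array([[0.0], [1.0]])

W = gramian(A, B)
ctrb = np.hstack([B, A @ B])
# (1.8) holds  <=>  W is positive definite  <=>  Kalman rank condition holds.
print(np.linalg.eigvalsh(W).min() > 1e-8, np.linalg.matrix_rank(ctrb) == A.shape[0])
```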
In particular, if $R$ is invertible, then the optimal control

$$
\bar u(t) = -R^{-1}B^\top\psi(t), \quad \text{a.e. } t\in[0,T].
\tag{1.16}
$$

The formula (1.16) gives an explicit form of the optimal control. However, it is not of feedback form. In order to find optimal feedback controls for Problem (DLQ), we need the following concept.

Definition 1.2. We call $\Theta(\cdot)\in L^2(0,T;\mathbb{R}^{m\times n})$ an optimal feedback matrix for Problem (DLQ) if

$$
\mathcal{J}(y_0; \Theta(\cdot)\bar y(\cdot)) \le \mathcal{J}(y_0; u(\cdot)), \quad \forall\,(y_0, u(\cdot))\in\mathbb{R}^n\times L^2(0,T;\mathbb{R}^m),
\tag{1.17}
$$

where $\bar y(\cdot) = \bar y(\cdot\,; y_0, \Theta(\cdot)\bar y(\cdot))$ solves

$$
\begin{cases}
\bar y_t = (A + B\Theta)\bar y & \text{in } [0,T],\\
\bar y(0) = y_0.
\end{cases}
\tag{1.18}
$$

Clearly, the existence of an optimal feedback matrix implies the existence of an optimal control for each initial datum $y_0\in\mathbb{R}^n$. However, the converse is not always true. To obtain the existence of an optimal feedback matrix, one introduces the following Riccati equation:

$$
\begin{cases}
P_t + PA + A^\top P + M - PBR^{-1}B^\top P = 0 & \text{in } [0,T],\\
P(T) = G.
\end{cases}
\tag{1.19}
$$

Theorem 1.3. Assume $R>0$. If $P(\cdot)\in C([0,T];\mathbb{S}(\mathbb{R}^n))$ is a solution to (1.19), then there is an optimal feedback matrix for Problem (DLQ) of the following form:

$$
\Theta(\cdot) = -R^{-1}B^\top P(\cdot).
$$

Moreover, for each $y_0\in\mathbb{R}^n$,

$$
\inf_{u(\cdot)\in L^2(0,T;\mathbb{R}^m)} \mathcal{J}(y_0; u(\cdot)) = \frac{1}{2}\langle P(0)y_0, y_0\rangle_{\mathbb{R}^n}.
$$

Since (1.19) is locally Lipschitz in the unknown $P(\cdot)$, it is locally solvable; that is, there exists $s<T$ such that (1.19) admits a solution on $[s,T]$. Nevertheless, due to the quadratic nonlinearity, global well-posedness is not guaranteed without additional conditions. The following result gives the global solvability of the Riccati equation (1.19) under some assumptions.

Theorem 1.4. Assume $R>0$, $M\ge 0$ and $G\ge 0$. Then Eq. (1.19) admits a unique global solution.

Various LQ problems have been extensively studied in the literature (e.g., Kalman, 1961; Lasiecka & Triggiani, 2000a, 2000b; Lü & Zhang, 2019b; Sun & Yong, 2020; Yong & Zhou, 1999 and the rich references cited therein). Nevertheless, compared with the deterministic setting, stochastic LQ problems (SLQs for short) are much less understood.

Control theory for SDPSs in its general form is quite technical (even if only for controllability and optimal control problems), requiring heavy analytic machinery to obtain the results in a rigorous way. Hence, for the convenience of engineering-oriented readers, in this paper we only present some of the control results for two typical one dimensional SPDEs, for the following three reasons:

• These equations are almost the simplest SDPSs which have sufficient complexity to permit an exposition of a wide variety of questions that are of interest for general SDPSs. Thus, this helps the readers avoid the technical details needed to understand the general theory presented in Lü and Zhang (0000b).
• These equations are two typical stochastic control systems for which the controllability and observability are quite different from each other.
• In the subsequent context, we may provide some detailed proofs for the main results without taking too much space, which illustrates the ideas and mathematical techniques employed in the field of control theory for SDPSs.

One of the most essential difficulties in the study of control problems for SDPSs is that, compared with the deterministic setting, people know very little about SPDEs. Further, both the formulation of control problems for those systems and the tools to solve them may differ considerably from their deterministic/finite-dimensional counterparts. As a result, one has to develop new tools, say, the stochastic transposition method introduced in our works (Lü & Zhang, 2013, 2014, 2015b, 2018, 2019b) (see also Theorem 4.2 in Section 4, Theorem 5.4 in Section 5, and Theorem B.16 in Appendix B.7 for more technical details), to solve some problems in this field, even if some of them look quite simple.

Much has been done for SDPSs, and still much more remains to be done. Indeed, the field of SDPSs is full of challenging problems (we shall present some of them in the sequel), which offers a rare opportunity for the new generations in Control Theory.

We believe that control theory for SDPSs is a direction which deserves to be developed with great effort within the whole of Control Science. Actually, in the framework of classical physics, SDPSs are very likely the most general control systems, and therefore a deep study of this field may provide some useful hints for the development of control theory for quantum systems.

The rest of this paper consists of three parts. The first (resp. second) one is devoted to studying controllability (resp. optimal control) problems for SDPSs. Since this paper is designed for readers having only some basic knowledge of elementary calculus and linear algebra, the third part collects some mathematical preliminaries used throughout the paper.

The first part of the paper consists of Sections 2 and 3. Section 2 is devoted to the null and approximate controllability of stochastic parabolic equations, while Section 3 is addressed to the exact controllability of stochastic hyperbolic equations.

The second part of the paper consists of Sections 4 and 5. In Section 4, we provide some results on the Pontryagin-type maximum principle for optimal control problems for nonlinear SDPSs in infinite dimensions. In Section 5, we focus on some recent results on LQ problems for SDPSs.

The third part of the paper consists of Appendices A and B. We recall some preliminary results (without proofs) from Functional Analysis, Partial Differential Equations, Probability Theory and Stochastic Analysis.

2. Null and approximate controllability of stochastic parabolic equations

The main aim of this section is to review the null and approximate controllability of stochastic parabolic equations. We first study these controllability problems mainly for one dimensional stochastic parabolic equations. Then we survey some recent results for their multidimensional counterparts. At last, we list some open problems.

In this section and also the next one (i.e., Section 3), we let $(\Omega, \mathcal{F}, \mathbf{F}, P)$ be a complete filtered probability space with the filtration $\mathbf{F} = \{\mathcal{F}_t\}_{t\ge 0}$, on which a one dimensional standard Brownian motion $\{W(t)\}_{t\ge 0}$ is defined, and $\mathbf{F}$ is the natural filtration generated by $\{W(t)\}_{t\ge 0}$. Denote by $\mathbb{F}$ the progressive $\sigma$-field with respect to (w.r.t. for short) $\mathbf{F}$.

2.1. Formulation of the problem and the main result

Let $T>0$ and let $G_0$ be a given nonempty open subset of $(0,1)$. Denote by $\chi_{G_0}$ the characteristic function of $G_0$.

Controllability problems for deterministic systems are often special cases of the analogous problems for stochastic systems. Therefore, it is instructive to review the null and approximate controllability problems for deterministic parabolic equations first. For simplicity, we consider the following control system:

$$
\begin{cases}
y_t - y_{xx} = ay + \chi_{G_0} u & \text{in } (0,T)\times(0,1),\\
y = 0 & \text{on } (0,T)\times\{0,1\},\\
y(0) = y_0 & \text{in } (0,1),
\end{cases}
\tag{2.1}
$$
where $a\in L^\infty((0,T)\times(0,1))$. In (2.1), $y$ is the state and $u$ is the control. The state and control spaces are chosen to be $L^2(0,1)$ and $L^2((0,T)\times G_0)$, respectively.

Definition 2.1. Eq. (2.1) is said to be null (resp. approximately) controllable at time $T$ if for any given $y_0\in L^2(0,1)$ (resp. for any $\varepsilon>0$ and $y_0, y_1\in L^2(0,1)$), one can find a control $u\in L^2((0,T)\times G_0)$ such that the solution $y(\cdot)\in C([0,T];L^2(0,1))\cap L^2(0,T;H_0^1(0,1))$ to (2.1) satisfies $y(T)=0$ (resp. $|y(T)-y_1|_{L^2(0,1)}\le\varepsilon$).

Remark 2.1. Due to the smoothing effect of solutions to parabolic equations, exact controllability for (2.1) is impossible, i.e., the above $\varepsilon$ cannot be zero.

To study the controllability of (2.1), we introduce its adjoint equation:

$$
\begin{cases}
z_t + z_{xx} = -az & \text{in } (0,T)\times(0,1),\\
z = 0 & \text{on } (0,T)\times\{0,1\},\\
z(T) = z_T & \text{in } (0,1).
\end{cases}
\tag{2.2}
$$

By means of a standard duality argument, it is easy to show the following result.

Proposition 2.1. (i) Eq. (2.1) is null controllable at time $T$ if and only if every solution to Eq. (2.2) satisfies the following observability estimate:

$$
|z(0)|_{L^2(0,1)} \le C|z|_{L^2((0,T)\times G_0)}, \quad \forall\, z_T\in L^2(0,1).
\tag{2.3}
$$

(ii) Eq. (2.1) is approximately controllable at time $T$ if and only if any solution to Eq. (2.2) satisfies the following unique continuation property:

$$
z = 0 \text{ in } (0,T)\times G_0 \;\Longrightarrow\; z_T = 0.
\tag{2.4}
$$

According to Proposition 2.1 and a global Carleman estimate, one can prove the following result.

Theorem 2.1. The system (2.1) is null and approximately controllable at time $T$.

Controllability and observability problems for deterministic parabolic equations are now well understood (e.g., Fernández-Cara & Zuazua, 2000a, 2000b; Fu, Lü, & Zhang, 2019; Fursikov & Imanuvilov, 1996; Lebeau & Robbiano, 1995; Zhang, 2011; Zuazua, 2007). Indeed, one can use the global Carleman estimate to prove the observability inequality (2.3) (cf. Fursikov & Imanuvilov, 1996), via which the null controllability of (2.1) follows. On the other hand, by the global Carleman estimate, it is easy to deduce the unique continuation property (2.4), which implies the approximate controllability of (2.1).

Motivated by Example 1.1, we consider the following controlled stochastic parabolic equation:

$$
\begin{cases}
dy - y_{xx}\,dt = \big(a_1 y + \chi_{G_0}u_1 + a_3 u_2\big)\,dt + (a_2 y + u_2)\,dW(t) & \text{in } (0,T)\times(0,1),\\
y = 0 & \text{on } (0,T)\times\{0,1\},\\
y(0) = y_0 & \text{in } (0,1),
\end{cases}
\tag{2.5}
$$

where $a_1, a_2, a_3\in L^\infty_{\mathbb{F}}(0,T;L^\infty(0,1))$. In the system (2.5), the initial state $y_0\in L^2(0,1)$, $y$ is the state, and the control is a pair $(u_1, u_2)\in L^2_{\mathbb{F}}(0,T;L^2(G_0))\times L^2_{\mathbb{F}}(0,T;L^2(0,1))$ (see Appendix B.4 for the definitions of these spaces). The term $a_3 u_2$ (in the first equation of (2.5)) means that the control $u_2$ introduced in the diffusion term may affect the drift term.

By Theorem B.9, for any $y_0\in L^2(0,1)$ and $(u_1, u_2)\in L^2_{\mathbb{F}}(0,T;L^2(G_0))\times L^2_{\mathbb{F}}(0,T;L^2(0,1))$, the control system (2.5) admits a unique mild solution $y\in L^2_{\mathbb{F}}(\Omega;C([0,T];L^2(0,1)))\cap L^2_{\mathbb{F}}(0,T;H_0^1(0,1))$. Moreover,

$$
|y|_{L^2_{\mathbb{F}}(\Omega;C([0,T];L^2(0,1)))\cap L^2_{\mathbb{F}}(0,T;H_0^1(0,1))} \le C\big(|y_0|_{L^2(0,1)} + |(u_1,u_2)|_{L^2_{\mathbb{F}}(0,T;L^2(G_0))\times L^2_{\mathbb{F}}(0,T;L^2(0,1))}\big).
$$

Definition 2.2. The control system (2.5) is said to be null (resp. approximately) controllable at time $T$ if for any $y_0\in L^2(0,1)$ (resp. for any $\varepsilon>0$, $y_0\in L^2(0,1)$ and $y_1\in L^2_{\mathcal{F}_T}(\Omega;L^2(0,1))$), there exists $(u_1, u_2)\in L^2_{\mathbb{F}}(0,T;L^2(G_0))\times L^2_{\mathbb{F}}(0,T;L^2(0,1))$ such that the corresponding solution to (2.5) fulfills $y(T)=0$, a.s. (resp. $|y(T)-y_1|_{L^2_{\mathcal{F}_T}(\Omega;L^2(0,1))}\le\varepsilon$).

We have the following result, which is a special case of Tang and Zhang (2009, Theorem 2.1).

Theorem 2.2. The control system (2.5) is null and approximately controllable at any time $T>0$.

Remark 2.2. One may suspect that Theorem 2.2 is trivial and propose the following easy ''proof'': choosing $u_2 = -a_2 y$, the system (2.5) becomes

$$
\begin{cases}
y_t - y_{xx} = (a_1 - a_2 a_3)y + \chi_{G_0}u_1 & \text{in } (0,T)\times(0,1),\\
y = 0 & \text{on } (0,T)\times\{0,1\},\\
y(0) = y_0 & \text{in } (0,1),
\end{cases}
\tag{2.6}
$$

which is a parabolic equation with random coefficients. If one regards the sample point $\omega$ as a parameter, then for every given $\omega\in\Omega$, there is a control $u_1(\cdot,\omega)$ such that the solution to (2.6) satisfies $y(T,x,\omega)=0$, a.s. However, in this way it is unclear whether the control can be chosen to be measurable with respect to $\mathbb{F}$.

To prove Theorem 2.2, by a standard duality argument, it suffices to establish a suitable observability estimate for the following equation:

$$
\begin{cases}
dz + z_{xx}\,dt = -\big(a_1 z + a_2 Z\big)\,dt + Z\,dW(t) & \text{in } (0,T)\times(0,1),\\
z = 0 & \text{on } (0,T)\times\{0,1\},\\
z(T) = z_T & \text{in } (0,1).
\end{cases}
\tag{2.7}
$$

By Theorem B.14, for any $z_T\in L^2_{\mathcal{F}_T}(\Omega;L^2(0,1))$, Eq. (2.7) admits a unique weak solution $(z,Z)\in \big(L^2_{\mathbb{F}}(\Omega;C([0,T];L^2(0,1)))\cap L^2_{\mathbb{F}}(0,T;H_0^1(0,1))\big)\times L^2_{\mathbb{F}}(0,T;L^2(0,1))$. Moreover, for any $t\in[0,T]$,

$$
|(z,Z)|_{(L^2_{\mathbb{F}}(\Omega;C([0,t];L^2(0,1)))\cap L^2_{\mathbb{F}}(0,t;H_0^1(0,1)))\times L^2_{\mathbb{F}}(0,t;L^2(0,1))} \le e^{Cr_1}|z(t)|_{L^2_{\mathcal{F}_t}(\Omega;L^2(0,1))},
\tag{2.8}
$$

where $r_1 = \sum_{j=1}^{2}|a_j|^2_{L^\infty_{\mathbb{F}}(0,T;L^\infty(0,1))}$.

Theorem 2.2 is implied by the following observability and unique continuation results for (2.7):

Theorem 2.3. For all $z_T\in L^2_{\mathcal{F}_T}(\Omega;L^2(0,1))$, mild solutions $(z,Z)$ to Eq. (2.7) satisfy

$$
|z(0)|_{L^2(0,1)} \le e^{C\hat r_1}\big(|z|_{L^2_{\mathbb{F}}(0,T;L^2(G_0))} + |a_3 z + Z|_{L^2_{\mathbb{F}}(0,T;L^2(0,1))}\big),
\tag{2.9}
$$

where $\hat r_1 = r_1 + |a_3|^2_{L^\infty_{\mathbb{F}}(0,T;L^\infty(0,1))}$.

Theorem 2.4. The final datum $z_T = 0$, a.s., provided that the corresponding mild solution $(z,Z)$ to Eq. (2.7) satisfies $z = 0$ in $(0,T)\times G_0$, a.s., and $a_3 z + Z = 0$ in $(0,T)\times(0,1)$, a.s.

Two controls $u_1$ and $u_2$ have been introduced in (2.5). In view of the controllability result for the deterministic parabolic equation (2.1), it would be more natural to use only one control and consider the following system (which is a special case of (2.5) with $u_2\equiv 0$):

$$
\begin{cases}
dy - y_{xx}\,dt = \big(a_1 y + \chi_{G_0}u_1\big)\,dt + a_2 y\,dW(t) & \text{in } (0,T)\times(0,1),\\
y = 0 & \text{on } (0,T)\times\{0,1\},\\
y(0) = y_0 & \text{in } (0,1).
\end{cases}
\tag{2.10}
$$
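To get a concrete feel for the state equation (2.10), the following sketch simulates it with $u_1\equiv 0$ and constant coefficients by an explicit finite-difference Euler–Maruyama scheme. This is purely a numerical illustration of the system class; the coefficient values, grid sizes and initial datum are our own assumptions, not taken from the paper:

```python
import numpy as np

def simulate_stochastic_heat(a1=1.0, a2=0.5, T=0.1, N=50, steps=4000, seed=0):
    """Euler-Maruyama / finite-difference scheme for
    dy = (y_xx + a1*y) dt + a2*y dW(t),  y = 0 on the boundary of (0,1)."""
    rng = np.random.default_rng(seed)
    h = 1.0 / N
    dt = T / steps                      # dt << h^2/2, so the explicit scheme is stable
    x = np.linspace(0.0, 1.0, N + 1)
    y = np.sin(np.pi * x)               # initial datum y0 in H^1_0(0,1)
    for _ in range(steps):
        dW = np.sqrt(dt) * rng.standard_normal()  # one Brownian increment for the whole domain
        lap = np.zeros_like(y)
        lap[1:-1] = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2
        y = y + (lap + a1 * y) * dt + a2 * y * dW
        y[0] = y[-1] = 0.0              # homogeneous Dirichlet boundary condition
    return x, y

x, y = simulate_stochastic_heat()
print(float(np.max(np.abs(y))))         # the sample path stays bounded on [0, T]
```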
To obtain the null controllability of (2.10), one needs to prove that solutions to the system (2.7) satisfy the following observability estimate:

|𝑧(0)|_{𝐿²(0,1)} ≤ 𝒞|𝑧|_{𝐿²_𝐅(0,𝑇;𝐿²(𝐺0))},  ∀ 𝑧_𝑇 ∈ 𝐿²_{ℱ_𝑇}(𝛺; 𝐿²(0, 1)).   (2.11)

The main difficulty to prove (2.11) is that, though the correction term ``𝑍'' plays a nice ``coercive'' role for the well-posedness of (2.7), it seems a ``bad'' (non-homogeneous) term when one tries to prove (2.11) by using the global Carleman estimate. At this moment, it is unknown how to prove the observability estimate (2.11) except for some special cases (e.g., Liu, 2014a; Lü, 2011; Yang & Zhong, 2016).

2.2. A weighted identity

In order to prove Theorem 2.3, we need to derive a Carleman estimate for the stochastic parabolic-like operator ``𝑑𝑣 + 𝑣ₓₓ𝑑𝑡''. To this end, we first give an identity, which is a special case of Tang and Zhang (2009, Theorem 3.1).

Theorem 2.5. Let 𝑣 be an 𝐻²(0, 1)-valued Itô process. Assume that 𝓁, 𝛹 ∈ 𝐶^{1,3}((0, 𝑇) × (0, 1)). Set 𝜃 = 𝑒^𝓁 and 𝑤 = 𝜃𝑣. Then, for any 𝑡 ∈ [0, 𝑇] and a.e. (𝑥, 𝜔) ∈ (0, 1) × 𝛺,

2𝜃(𝑤ₓₓ + 𝒜𝑤)(𝑑𝑣 + 𝑣ₓₓ𝑑𝑡) − 2(𝑤ₓ𝑑𝑤)ₓ + 2[𝓁ₓ𝑤ₓ² − 𝛹𝑤𝑤ₓ + (𝒜𝓁ₓ + 𝛹ₓ∕2)𝑤²]ₓ 𝑑𝑡
  = 3𝓁ₓₓ𝑤ₓ²𝑑𝑡 + ℬ𝑤²𝑑𝑡 − 𝑑(𝑤ₓ² − 𝒜𝑤²) + 2(𝑤ₓₓ + 𝒜𝑤)²𝑑𝑡 + 𝜃²|𝑑𝑣ₓ + 𝓁ₓ𝑑𝑣|² − 𝒜𝜃²(𝑑𝑣)²,   (2.12)

where
⎧ 𝒜 = 𝓁ₓ² − 𝓁ₓₓ − 𝛹 − 𝓁ₜ,
⎨ ℬ = 2[𝒜𝛹 + (𝒜𝓁ₓ)ₓ] − 𝒜ₜ + 𝛹ₓₓ.      (2.13)

(See Remark B.3 for the meaning of (𝑑𝑣)².)

Proof. From 𝜃 = 𝑒^𝓁 and 𝑤 = 𝜃𝑣, one has

𝑑𝑣 = 𝜃⁻¹(𝑑𝑤 − 𝓁ₜ𝑤𝑑𝑡),  𝑣ₓ = 𝜃⁻¹(𝑤ₓ − 𝓁ₓ𝑤)   (2.14)

and

𝜃𝑣ₓₓ = 𝜃[𝜃⁻¹(𝑤ₓ − 𝓁ₓ𝑤)]ₓ = (𝑤ₓ − 𝓁ₓ𝑤)ₓ − 𝓁ₓ(𝑤ₓ − 𝓁ₓ𝑤) = 𝑤ₓₓ − 2𝓁ₓ𝑤ₓ + (|𝓁ₓ|² − 𝓁ₓₓ)𝑤.   (2.15)

Put

𝐼 ≜ 𝑤ₓₓ + 𝒜𝑤,  𝐼1 ≜ 𝐼𝑑𝑡,  𝐼2 ≜ 𝑑𝑤 − 2𝓁ₓ𝑤ₓ𝑑𝑡,  𝐼3 ≜ 𝛹𝑤𝑑𝑡.   (2.16)

By (2.14)–(2.16), we see that

2𝜃(𝑤ₓₓ + 𝒜𝑤)(𝑑𝑣 + 𝑣ₓₓ𝑑𝑡) = 2𝐼(𝐼1 + 𝐼2 + 𝐼3).   (2.17)

Let us first compute 2𝐼𝐼2. We have that

−4(𝑤ₓₓ + 𝒜𝑤)𝓁ₓ𝑤ₓ = −2(𝓁ₓ𝑤ₓ²)ₓ + 2𝓁ₓₓ𝑤ₓ² − 2𝒜𝓁ₓ(𝑤²)ₓ = −2(𝓁ₓ𝑤ₓ² + 𝒜𝓁ₓ𝑤²)ₓ + 2𝓁ₓₓ𝑤ₓ² + 2(𝒜𝓁ₓ)ₓ𝑤².   (2.18)

By Itô's formula (see Theorem B.5 and Remark B.2), we obtain that

2(𝑤ₓₓ + 𝒜𝑤)𝑑𝑤 = 2(𝑤ₓ𝑑𝑤)ₓ − 2𝑤ₓ𝑑𝑤ₓ + 2𝒜𝑤𝑑𝑤 = 2(𝑤ₓ𝑑𝑤)ₓ + 𝑑(−𝑤ₓ² + 𝒜𝑤²) + |𝑑𝑤ₓ|² − 𝒜ₜ𝑤²𝑑𝑡 − 𝒜(𝑑𝑤)².   (2.19)

From (2.16), (2.18) and (2.19), we arrive at

2𝐼𝐼2 = −2(𝓁ₓ𝑤ₓ² + 𝒜𝓁ₓ𝑤²)ₓ𝑑𝑡 + 2(𝑤ₓ𝑑𝑤)ₓ + 𝑑(−𝑤ₓ² + 𝒜𝑤²) + 2𝓁ₓₓ𝑤ₓ²𝑑𝑡 − [𝒜ₜ − 2(𝒜𝓁ₓ)ₓ]𝑤²𝑑𝑡 + |𝑑𝑤ₓ|² − 𝒜(𝑑𝑤)².   (2.20)

Next, we compute 2𝐼𝐼3. By (2.16), we get

2𝐼𝐼3 = 2(𝑤ₓₓ + 𝒜𝑤)𝛹𝑤𝑑𝑡 = [2(𝛹𝑤𝑤ₓ)ₓ − 2𝛹𝑤ₓ² − 𝛹ₓ(𝑤²)ₓ + 2𝒜𝛹𝑤²]𝑑𝑡 = [(2𝛹𝑤𝑤ₓ − 𝛹ₓ𝑤²)ₓ − 2𝛹𝑤ₓ² + (𝛹ₓₓ + 2𝒜𝛹)𝑤²]𝑑𝑡.   (2.21)

Finally, combining (2.17), (2.20) and (2.21), and noting that

−|𝑑𝑤ₓ|² + 𝒜(𝑑𝑤)² = −𝜃²|𝑑𝑣ₓ + 𝓁ₓ𝑑𝑣|² + 𝒜𝜃²(𝑑𝑣)²,

we obtain the desired equality (2.12) immediately. □

2.3. Carleman estimate for backward stochastic parabolic equations

As a key preliminary to prove Theorem 2.3, we need to establish a global Carleman estimate for Eq. (2.7). The following lemma is a special case of a known technical result, i.e., Fursikov and Imanuvilov (1996, p. 4, Lemma 1.1). (Actually, such a special case in one space dimension can be easily proved by means of elementary calculus, which is left as an exercise for interested readers.)

Lemma 2.1. If 𝐺1 is a nonempty open interval with 𝐺̄1 ⊂ 𝐺0, then there is a function 𝜓 ∈ 𝐶^∞([0, 1]) so that 𝜓 > 0 in (0, 1), 𝜓(0) = 𝜓(1) = 0, and |𝜓ₓ(𝑥)| > 0 for all 𝑥 ∈ [0, 1] ⧵ 𝐺1.

Let 𝜓 be given by Lemma 2.1. For any parameters 𝜆 > 0 and 𝜇 > 0, we put

𝛼(𝑡, 𝑥) = (𝑒^{𝜇𝜓(𝑥)} − 𝑒^{2𝜇|𝜓|_{𝐶([0,1])}})∕(𝑡(𝑇 − 𝑡)),  𝜑(𝑡, 𝑥) = 𝑒^{𝜇𝜓(𝑥)}∕(𝑡(𝑇 − 𝑡)),   (2.22)

and choose the functions 𝓁 and 𝛹 in Theorem 2.5 as follows:

𝓁 = 𝜆𝛼,  𝛹 = −2𝓁ₓₓ.   (2.23)

In the sequel, for a positive integer 𝑟, we denote by 𝑂(𝜇^𝑟) a function of order 𝜇^𝑟 for large 𝜇 (which is independent of 𝜆), and by 𝑂_𝜇(𝜆^𝑟) a function of order 𝜆^𝑟 for a fixed 𝜇 and for large 𝜆. In a similar way, we use the notation 𝑂(𝑒^{2𝜇|𝜓|_{𝐶([0,1])}}) and so on. It is easy to check that

𝓁ₜ = 𝜆𝛼ₜ,  𝓁ₓ = 𝜆𝜇𝜑𝜓ₓ,  𝓁ₓₓ = 𝜆𝜇²𝜑𝜓ₓ² + 𝜆𝜇𝜑𝜓ₓₓ   (2.24)

and that

𝛼ₜ = 𝜑²𝑂(𝑒^{2𝜇|𝜓|_{𝐶([0,1])}}),  𝛼ₜₜ = 𝜑³𝑂(𝑒^{2𝜇|𝜓|_{𝐶([0,1])}}),  𝜑ₜ = −((𝑇 − 2𝑡)∕(𝑡(𝑇 − 𝑡)))𝜑.   (2.25)

The desired global Carleman estimate for (2.7) is stated as follows:

Theorem 2.6. There is a constant 𝜇0 = 𝜇0(𝐺0, 𝑇) > 0 such that for all 𝜇 ≥ 𝜇0, one can find two constants 𝒞 = 𝒞(𝜇) > 0 and 𝜆1 = 𝜆1(𝜇) > 0 such that for all 𝜆 ≥ 𝜆1 and 𝑧_𝑇 ∈ 𝐿²_{ℱ_𝑇}(𝛺; 𝐿²(0, 1)), the mild solution (𝑧, 𝑍) to (2.7) satisfies that

𝜆³𝜇⁴E∫₀ᵀ∫₀¹ 𝜃²𝜑³𝑧²𝑑𝑥𝑑𝑡 + 𝜆𝜇²E∫₀ᵀ∫₀¹ 𝜃²𝜑𝑧ₓ²𝑑𝑥𝑑𝑡 ≤ 𝒞(𝜆³𝜇⁴E∫₀ᵀ∫_{𝐺0} 𝜃²𝜑³𝑧²𝑑𝑥𝑑𝑡 + 𝜆²𝜇²E∫₀ᵀ∫₀¹ 𝜃²𝜑²|𝑎3𝑧 + 𝑍|²𝑑𝑥𝑑𝑡).   (2.26)


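Before turning to the proof of Theorem 2.6, the weight functions in (2.22) can be sanity-checked numerically. The concrete choices below, 𝐺1 = (0.4, 0.6), 𝜓(𝑥) = 𝑥(1 − 𝑥), 𝑇 = 1, 𝜇 = 2 and 𝜆 = 3, are illustrative assumptions and are not taken from the text; this 𝜓 satisfies Lemma 2.1 for this 𝐺1 because 𝜓ₓ = 1 − 2𝑥 vanishes only at 𝑥 = 1∕2, which lies inside 𝐺1.

```python
import numpy as np

# Sanity check of Lemma 2.1 and of the weights alpha, phi, theta in (2.22):
# psi(x) = x(1-x) is smooth, positive on (0,1), vanishes at 0 and 1, and its
# derivative 1-2x is bounded away from 0 outside G1 = (0.4, 0.6) (assumed).

T, mu, lam = 1.0, 2.0, 3.0
x = np.linspace(0.0, 1.0, 401)
t = np.linspace(1.0e-3, T - 1.0e-3, 401)

psi = x * (1.0 - x)
psi_x = 1.0 - 2.0 * x
outside_G1 = (x <= 0.4) | (x >= 0.6)

Tt, X = np.meshgrid(t, x, indexing="ij")
Psi = X * (1.0 - X)
M = np.exp(2.0 * mu * np.max(np.abs(psi)))        # e^{2 mu |psi|_{C([0,1])}}
alpha = (np.exp(mu * Psi) - M) / (Tt * (T - Tt))  # alpha in (2.22): negative
phi = np.exp(mu * Psi) / (Tt * (T - Tt))          # phi in (2.22): positive
theta = np.exp(lam * alpha)                       # theta = e^{l}, l = lam*alpha

print("psi endpoints:", psi[0], psi[-1])
print("min |psi_x| outside G1:", np.min(np.abs(psi_x[outside_G1])))
print("max alpha:", alpha.max(), "  min phi:", phi.min())
print("theta near t=0 and t=T:", theta[0].max(), theta[-1].max())
```

The printed values illustrate the two features the Carleman method relies on: 𝜑 blows up while 𝛼 tends to −∞ as 𝑡 → 0⁺ and 𝑡 → 𝑇⁻, so 𝜃 = 𝑒^{𝜆𝛼} vanishes at both time endpoints, which is what kills the boundary-in-time terms in the proof below.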
Proof. Noting Lemma 2.1 and (2.22)–(2.24), from (2.13), we have 𝓁ₓₓ = 𝜆𝜇²𝜑𝜓ₓ² + 𝜆𝜑𝑂(𝜇). Consequently,

3𝓁ₓₓ𝑤ₓ² = 3𝜆𝜇²𝜑𝜓ₓ²𝑤ₓ² + 𝜆𝜑𝑤ₓ²𝑂(𝜇) ≥ (3𝜆𝜇²𝜑𝜓ₓ² + 𝜆𝜑𝑂(𝜇))𝑤ₓ².   (2.27)

Similarly, by the definition of 𝒜 in (2.13), and noting (2.25), we see that

𝒜 = 𝓁ₓ² + 𝓁ₓₓ − 𝓁ₜ = 𝜆𝜇(𝜆𝜇𝜑²𝜓ₓ² + 𝜇𝜑𝜓ₓ² + 𝜑𝜓ₓₓ) + 𝜆𝜑²𝑂(𝑒^{2𝜇|𝜓|_{𝐶([0,1])}}) = 𝜆²𝜇²𝜑²𝜓ₓ² + 𝜆𝜇²𝜑𝜓ₓ² + 𝜆𝜇𝜑𝜓ₓₓ + 𝜆𝜑²𝑂(𝑒^{2𝜇|𝜓|_{𝐶([0,1])}}).   (2.28)

Now, let us estimate ℬ (defined in (2.13)). By (2.24), and recalling the definition of 𝛹 (in (2.23)), we see that

𝛹 = −2𝜆𝜇(𝜇𝜑𝜓ₓ² + 𝜑𝜓ₓₓ) = −2𝜆𝜇²𝜑𝜓ₓ² + 𝜆𝜑𝑂(𝜇),
𝛹ₓ = −2𝓁ₓₓₓ = −2𝜆𝜇³𝜑𝜓ₓ³ + 𝜆𝜑𝑂(𝜇²),
𝛹ₓₓ = −2𝓁ₓₓₓₓ = −2𝜆𝜇⁴𝜑𝜓ₓ⁴ + 𝜆𝜑𝑂(𝜇³).

Recalling the definition of ℬ (in (2.13)), and using (2.24) and (2.25), we have that

𝒜𝛹 = −2𝜆³𝜇⁴𝜑³𝜓ₓ⁴ + 𝜆³𝜑³𝑂(𝜇³) + 𝜆²𝜇²𝜑³𝑂(𝑒^{2𝜇|𝜓|_{𝐶([0,1])}}),
𝒜ₓ = 2𝓁ₓ𝓁ₓₓ + 𝓁ₓₓₓ − 𝓁ₜₓ = 2𝜆²𝜇³𝜑²𝜓ₓ³ + 𝜆²𝜑²𝑂(𝜇²) + 𝜆𝜇𝜑²𝑂(𝑒^{2𝜇|𝜓|_{𝐶([0,1])}}),
𝒜ₓ𝓁ₓ = 2𝜆³𝜇⁴𝜑³𝜓ₓ⁴ + 𝜆³𝜑³𝑂(𝜇³) + 𝜆²𝜇²𝜑³𝑂(𝑒^{2𝜇|𝜓|_{𝐶([0,1])}}),
(𝒜𝓁ₓ)ₓ = 𝒜ₓ𝓁ₓ + 𝒜𝓁ₓₓ = 3𝜆³𝜇⁴𝜑³𝜓ₓ⁴ + 𝜆³𝜑³𝑂(𝜇³) + 𝜆²𝜇²𝜑³𝑂(𝑒^{2𝜇|𝜓|_{𝐶([0,1])}}),

and that

𝒜ₜ = (𝓁ₓ² + 𝓁ₓₓ − 𝓁ₜ)ₜ = 𝜆²𝜑³𝑂(𝜇²𝑒^{2𝜇|𝜓|_{𝐶([0,1])}}) + 𝜆𝜑³𝑂(𝑒^{2𝜇|𝜓|_{𝐶([0,1])}}).

By (2.13), we have that

ℬ = 2𝒜𝛹 + 2(𝒜𝓁ₓ)ₓ − 𝒜ₜ + 𝛹ₓₓ
  = −4𝜆³𝜇⁴𝜑³𝜓ₓ⁴ + 6𝜆³𝜇⁴𝜑³𝜓ₓ⁴ − 2𝜆𝜇⁴𝜑𝜓ₓ⁴ + 𝜆³𝜑³𝑂(𝜇³) + 𝜆²𝜇²𝜑³𝑂(𝑒^{2𝜇|𝜓|_{𝐶([0,1])}}) + 𝜆𝜑𝑂(𝜇³)   (2.29)
  = 2𝜆³𝜇⁴𝜑³𝜓ₓ⁴ + 𝜆³𝜑³𝑂(𝜇³) + 𝜆²𝜇²𝜑³𝑂(𝑒^{2𝜇|𝜓|_{𝐶([0,1])}}).

Integrating the equality (2.12) (applied to the solution 𝑧 of (2.7), so that 𝑤 = 𝜃𝑧) on (0, 𝑇) × (0, 1), taking expectation on both sides, and noting (2.27)–(2.29), we conclude that

2E∫₀ᵀ∫₀¹ 𝜃(𝑤ₓₓ + 𝒜𝑤)(𝑑𝑧 + 𝑧ₓₓ𝑑𝑡)𝑑𝑥 − 2E∫₀ᵀ∫₀¹ (𝑤ₓ𝑑𝑤)ₓ𝑑𝑥 + 2E∫₀ᵀ∫₀¹ [𝓁ₓ𝑤ₓ² − 𝛹𝑤𝑤ₓ + (𝒜𝓁ₓ + 𝛹ₓ∕2)𝑤²]ₓ𝑑𝑥𝑑𝑡
 ≥ 2E∫₀ᵀ∫₀¹ [𝜑(𝜆𝜇²𝜓ₓ² + 𝜆𝑂(𝜇))𝑤ₓ² + 𝜑³(𝜆³𝜇⁴𝜓ₓ⁴ + 𝜆³𝑂(𝜇³) + 𝜆²𝜇²𝑂(𝑒^{2𝜇|𝜓|_{𝐶([0,1])}}))𝑤²]𝑑𝑥𝑑𝑡   (2.30)
  + 2E∫₀ᵀ∫₀¹ |𝑤ₓₓ + 𝒜𝑤|²𝑑𝑥𝑑𝑡 + E∫₀ᵀ∫₀¹ 𝜃²|𝑑𝑧ₓ + 𝓁ₓ𝑑𝑧|²𝑑𝑥 − E∫₀ᵀ∫₀¹ 𝒜𝜃²(𝑑𝑧)²𝑑𝑥.

It follows from (2.7) that

2E∫₀ᵀ∫₀¹ 𝜃(𝑤ₓₓ + 𝒜𝑤)(𝑑𝑧 + 𝑧ₓₓ𝑑𝑡)𝑑𝑥 = −2E∫₀ᵀ∫₀¹ 𝜃(𝑤ₓₓ + 𝒜𝑤)(𝑎1𝑧 + 𝑎2𝑍)𝑑𝑥𝑑𝑡 ≤ E∫₀ᵀ∫₀¹ (𝑤ₓₓ + 𝒜𝑤)²𝑑𝑥𝑑𝑡 + E∫₀ᵀ∫₀¹ 𝜃²(𝑎1𝑧 + 𝑎2𝑍)²𝑑𝑥𝑑𝑡.   (2.31)

By (2.30)–(2.31), one can show that

2E∫₀ᵀ∫₀¹ [𝜑(𝜆𝜇²𝜓ₓ² + 𝜆𝑂(𝜇))𝑤ₓ² + 𝜑³(𝜆³𝜇⁴𝜓ₓ⁴ + 𝜆³𝑂(𝜇³) + 𝜆²𝑂(𝜇²𝑒^{2𝜇|𝜓|_{𝐶([0,1])}}))𝑤²]𝑑𝑥𝑑𝑡 ≤ 𝒞E∫₀ᵀ∫₀¹ 𝜃²[(𝑎1𝑧 + 𝑎2𝑍)² + 𝒜𝑍²]𝑑𝑥𝑑𝑡.   (2.32)

Noting that |𝜓ₓ| > 0 in [0, 1] ⧵ 𝐺1, from (2.32), we conclude that there is a 𝜇0 > 0 such that for all 𝜇 ≥ 𝜇0, one can find a constant 𝜆0 = 𝜆0(𝜇) so that for any 𝜆 ≥ 𝜆0, it holds that

𝜆𝜇²E∫₀ᵀ∫₀¹ 𝜃²𝜑(𝑧ₓ² + 𝜆²𝜇²𝜑²𝑧²)𝑑𝑥𝑑𝑡 ≤ 𝒞[E∫₀ᵀ∫₀¹ 𝜃²(𝑎1𝑧 + 𝑎2𝑍)²𝑑𝑥𝑑𝑡 + E∫₀ᵀ∫₀¹ 𝜃²𝜆²𝜇²𝜑²(𝑎3²𝑧² + (𝑎3𝑧 + 𝑍)²)𝑑𝑥𝑑𝑡 + 𝜆𝜇²E∫₀ᵀ∫_{𝐺1} 𝜃²𝜑(𝑧ₓ² + 𝜆²𝜇²𝜑²𝑧²)𝑑𝑥𝑑𝑡].   (2.33)

Choose a function 𝜁 ∈ 𝐶₀^∞(𝐺0) so that 0 ≤ 𝜁 ≤ 1 in 𝐺0 and 𝜁 ≡ 1 in 𝐺1. By

𝑑(𝜃²𝜑𝑧²) = 𝑧²(𝜃²𝜑)ₜ𝑑𝑡 + 2𝜃²𝜑𝑧𝑑𝑧 + 𝜃²𝜑(𝑑𝑧)²,

recalling

lim_{𝑡→0⁺} 𝜃(𝑡, ⋅) = lim_{𝑡→𝑇⁻} 𝜃(𝑡, ⋅) ≡ 0

and using (2.7), we find that

0 = E∫₀ᵀ∫_{𝐺0} 𝜃²[𝜁²𝑧²(𝜑ₜ + 2𝜆𝜑𝛼ₜ) + 2𝜁²𝜑𝑧ₓ² + 2𝜇𝜁²𝜑(1 + 2𝜆𝜑)𝜓ₓ𝑧𝑧ₓ + 4𝜁𝜑𝜁ₓ𝑧𝑧ₓ + 2𝜁²𝜑(𝑎1𝑧 + 𝑎2𝑍)𝑧 + 𝜁²𝜑𝑍²]𝑑𝑥𝑑𝑡.

Therefore, for any 𝜀 > 0, one has

2E∫₀ᵀ∫_{𝐺0} 𝜃²𝜁²𝜑𝑧ₓ²𝑑𝑥𝑑𝑡 + E∫_{𝑄0} 𝜃²𝜁²𝜑𝑍²𝑑𝑥𝑑𝑡 ≤ 𝜀E∫₀ᵀ∫_{𝐺0} 𝜃²𝜁²𝜑𝑧ₓ²𝑑𝑥𝑑𝑡 + (𝒞∕𝜀)E∫₀ᵀ∫_{𝐺0} 𝜃²[(1∕(𝜆²𝜇²))(𝑎1𝑧 + 𝑎2𝑍)² + 𝜆²𝜇²𝜑³𝑧²]𝑑𝑥𝑑𝑡,

which concludes that

E∫₀ᵀ∫_{𝐺1} 𝜃²𝜑𝑧ₓ²𝑑𝑥𝑑𝑡 ≤ 𝒞E∫₀ᵀ∫_{𝐺0} 𝜃²[(1∕(𝜆²𝜇²))(𝑎1𝑧 + 𝑎2𝑍)² + 𝜆²𝜇²𝜑³𝑧²]𝑑𝑥𝑑𝑡.

This, together with (2.33), implies that

𝜆³𝜇⁴E∫₀ᵀ∫₀¹ 𝜃²𝜑³𝑧²𝑑𝑥𝑑𝑡 + 𝜆𝜇²E∫₀ᵀ∫₀¹ 𝜃²𝜑𝑧ₓ²𝑑𝑥𝑑𝑡 ≤ 𝒞[𝜆³𝜇⁴E∫₀ᵀ∫_{𝐺0} 𝜃²𝜑³𝑧²𝑑𝑥𝑑𝑡   (2.34)

 + E∫₀ᵀ∫₀¹ 𝜃²(𝑎1𝑧 + 𝑎2𝑍)²𝑑𝑥𝑑𝑡 + E∫₀ᵀ∫₀¹ 𝜃²𝜆²𝜇²𝜑²(𝑎3²𝑧² + (𝑎3𝑧 + 𝑍)²)𝑑𝑥𝑑𝑡].

Let 𝜆1 = max{𝜆0, 𝒞(𝑟1 + |𝑎3|_{𝐿∞_𝐅(0,𝑇;𝐿∞(0,1))}) + 1}. For any 𝜆 ≥ 𝜆1 and 𝜇 ≥ 𝜇0, we get from (2.34) that (2.26) holds. □

2.4. Observability estimate of backward stochastic parabolic equations

Theorem 2.4 follows from Theorem 2.6 immediately. We now prove Theorem 2.3.

Proof of Theorem 2.3. Choosing 𝜇 = 𝜇0 and 𝜆 = 𝜆1, from (2.26), we obtain that

E∫₀ᵀ∫₀¹ 𝜃²𝜑³𝑧²𝑑𝑥𝑑𝑡 ≤ 𝒞[E∫₀ᵀ∫_{𝐺0} 𝜃²𝜑³𝑧²𝑑𝑥𝑑𝑡 + E∫₀ᵀ∫₀¹ 𝜃²𝜑²(𝑎3𝑧 + 𝑍)²𝑑𝑥𝑑𝑡].   (2.35)

Recalling (2.22), it follows from (2.35) that

E∫_{𝑇∕4}^{3𝑇∕4}∫₀¹ 𝑧²𝑑𝑥𝑑𝑡 ≤ 𝒞 (max_{(𝑡,𝑥)∈[0,𝑇]×[0,1]}(𝜃²(𝑡,𝑥)𝜑³(𝑡,𝑥) + 𝜃²(𝑡,𝑥)𝜑²(𝑡,𝑥)) ∕ min_{𝑥∈[0,1]} 𝜃²(𝑇∕4, 𝑥)𝜑³(𝑇∕2, 𝑥))
   × [E∫₀ᵀ∫_{𝐺0} 𝑧²𝑑𝑥𝑑𝑡 + E∫₀ᵀ∫₀¹ (𝑎3𝑧 + 𝑍)²𝑑𝑥𝑑𝑡]   (2.36)
 ≤ 𝒞[E∫₀ᵀ∫_{𝐺0} 𝑧²𝑑𝑥𝑑𝑡 + E∫₀ᵀ∫₀¹ (𝑎3𝑧 + 𝑍)²𝑑𝑥𝑑𝑡].

From (2.8), it follows that

E∫₀¹ 𝑧²(0)𝑑𝑥 ≤ 𝒞E∫₀¹ 𝑧²(𝑡)𝑑𝑥,  ∀ 𝑡 ∈ [0, 𝑇].   (2.37)

Combining (2.36) and (2.37), we conclude that the solution (𝑧, 𝑍) to Eq. (2.7) satisfies (2.9). This completes the proof of Theorem 2.3. □

2.5. Null and approximate controllability for multidimensional stochastic parabolic equations

We have discussed the null and approximate controllability problems for one dimensional stochastic parabolic equations. Now we give a brief introduction to these controllability results for the multidimensional equation.

Let 𝐺 ⊂ ℝⁿ (𝑛 ∈ ℕ) be a given bounded domain with a 𝐶^∞ boundary 𝛤, and let 𝐺0 be a nonempty open subset of 𝐺. Put

𝑄 ≜ (0, 𝑇) × 𝐺,  𝛴 ≜ (0, 𝑇) × 𝛤,  𝑄0 ≜ (0, 𝑇) × 𝐺0.

Also, for 𝑗, 𝑘 = 1, 2, …, 𝑛, we assume that 𝑎^{𝑗𝑘} ∶ 𝐺̄ → ℝ satisfies 𝑎^{𝑗𝑘} ∈ 𝐶²(𝐺̄), 𝑎^{𝑗𝑘} = 𝑎^{𝑘𝑗}, and for some 𝑠0 > 0,

∑_{𝑗,𝑘=1}^{𝑛} 𝑎^{𝑗𝑘}(𝑥)𝜉ⱼ𝜉ₖ ≥ 𝑠0|𝜉|²,  ∀ (𝑥, 𝜉) ≡ (𝑥, 𝜉1, …, 𝜉𝑛) ∈ 𝐺̄ × ℝⁿ.

Consider the following control system:

⎧ 𝑑𝑦 − ∑_{𝑗,𝑘=1}^{𝑛}(𝑎^{𝑗𝑘}𝑦ₓⱼ)ₓₖ𝑑𝑡 = (∑_{𝑗=1}^{𝑛} 𝑎1𝑗𝑦ₓⱼ + 𝑎2𝑦 + 𝜒_{𝐺0}𝑢1 + 𝑎4𝑢2)𝑑𝑡 + (𝑎3𝑦 + 𝑢2)𝑑𝑊(𝑡)  in 𝑄,
⎨ 𝑦 = 0  on 𝛴,      (2.38)
⎩ 𝑦(0) = 𝑦0  in 𝐺,

where

𝑎1𝑗 ∈ 𝐿∞_𝐅(0, 𝑇; 𝑊^{1,∞}(𝐺)), 𝑗 = 1, 2, …, 𝑛;  𝑎𝑘 ∈ 𝐿∞_𝐅(0, 𝑇; 𝐿∞(𝐺)), 𝑘 = 2, 3, 4.   (2.39)

In the system (2.38), the initial state 𝑦0 ∈ 𝐿²(𝐺), 𝑦 is the state, and the control is a pair (𝑢1, 𝑢2) ∈ 𝐿²_𝐅(0, 𝑇; 𝐿²(𝐺0)) × 𝐿²_𝐅(0, 𝑇; 𝐿²(𝐺)).

Remark 2.3. Similarly to the control system (2.5), the term 𝑎4𝑢2 reflects the effect of the control 𝑢2 in the diffusion term on the drift one. One can also consider the case that the control 𝑢1 in the drift term may influence the diffusion term. In such a case, some conditions on that influence should be assumed; otherwise the controls 𝑢1 and 𝑢2 may cancel each other.

By Theorem B.10, for any 𝑦0 ∈ 𝐿²(𝐺) and (𝑢1, 𝑢2) ∈ 𝐿²_𝐅(0, 𝑇; 𝐿²(𝐺0)) × 𝐿²_𝐅(0, 𝑇; 𝐿²(𝐺)), the system (2.38) admits a unique weak solution 𝑦 ∈ 𝐿²_𝐅(𝛺; 𝐶([0, 𝑇]; 𝐿²(𝐺))) ∩ 𝐿²_𝐅(0, 𝑇; 𝐻¹₀(𝐺)).

The following result is a modification of the one in Tang and Zhang (2009).

Theorem 2.7. System (2.38) is null and approximately controllable at any time 𝑇 > 0.

To prove Theorem 2.7, let us introduce the adjoint equation of (2.38):

⎧ 𝑑𝑧 + ∑_{𝑗,𝑘=1}^{𝑛}(𝑎^{𝑗𝑘}𝑧ₓⱼ)ₓₖ𝑑𝑡 = [∑_{𝑗=1}^{𝑛}(𝑎1𝑗𝑧)ₓⱼ − 𝑎2𝑧 − 𝑎3𝑍]𝑑𝑡 + 𝑍𝑑𝑊(𝑡)  in 𝑄,
⎨ 𝑧 = 0  on 𝛴,      (2.40)
⎩ 𝑧(𝑇) = 𝑧_𝑇  in 𝐺.

By Theorem B.14, for any 𝑧_𝑇 ∈ 𝐿²_{ℱ_𝑇}(𝛺; 𝐿²(𝐺)), Eq. (2.40) has a unique mild solution (𝑧, 𝑍) ∈ (𝐿²_𝐅(𝛺; 𝐶([0, 𝑇]; 𝐿²(𝐺))) ∩ 𝐿²_𝐅(0, 𝑇; 𝐻¹₀(𝐺))) × 𝐿²_𝐅(0, 𝑇; 𝐿²(𝐺)).

The null controllability of (2.38) is implied by the following observability estimate:

Theorem 2.8. All solutions (𝑧, 𝑍) ∈ (𝐿²_𝐅(𝛺; 𝐶([0, 𝑇]; 𝐿²(𝐺))) ∩ 𝐿²_𝐅(0, 𝑇; 𝐻¹₀(𝐺))) × 𝐿²_𝐅(0, 𝑇; 𝐿²(𝐺)) to the system (2.40) satisfy that

|𝑧(0)|²_{𝐿²(𝐺)} ≤ 𝑒^{𝒞𝜅1}(|𝜒_{𝐺0}𝑧|²_{𝐿²_𝐅(0,𝑇;𝐿²(𝐺))} + |𝑎4𝑧 + 𝑍|²_{𝐿²_𝐅(0,𝑇;𝐿²(𝐺))}),

where

𝜅1 = ∑_{𝑗=1}^{𝑛}|𝑎1𝑗|²_{𝐿∞_𝐅(0,𝑇;𝑊^{1,∞}(𝐺))} + ∑_{𝑘=2}^{4}|𝑎𝑘|²_{𝐿∞_𝐅(0,𝑇;𝐿∞(𝐺))}.

The approximate controllability of (2.38) is implied by the following unique continuation property:

Theorem 2.9. The final datum 𝑧_𝑇 = 0, a.s., provided that the corresponding mild solution (𝑧, 𝑍) to Eq. (2.40) satisfies that 𝑧 = 0 in (0, 𝑇) × 𝐺0 a.s. and 𝑎4𝑧 + 𝑍 = 0 in (0, 𝑇) × 𝐺, a.s.

The proofs of Theorems 2.8 and 2.9 are similar to (but more complicated than) those of Theorems 2.3 and 2.4, respectively. They can be found in Tang and Zhang (2009).

Although it does not imply any controllability for equations of the same sort, the observability of stochastic parabolic equations is still an interesting control problem. Consider the following stochastic parabolic equation:

⎧ 𝑑𝑧 − ∑_{𝑗,𝑘=1}^{𝑛}(𝑎^{𝑗𝑘}𝑧ₓⱼ)ₓₖ𝑑𝑡 = (∑_{𝑗=1}^{𝑛} 𝑏1𝑗𝑧ₓⱼ + 𝑏2𝑧)𝑑𝑡 + 𝑏3𝑧𝑑𝑊(𝑡)  in 𝑄,
⎨ 𝑧 = 0  on 𝛴,      (2.41)
⎩ 𝑧(0) = 𝑧0  in 𝐺,

where

𝑏1𝑗 ∈ 𝐿∞_𝐅(0, 𝑇; 𝐿∞(𝐺)), 𝑗 = 1, 2, …, 𝑛;  𝑏2, 𝑏3 ∈ 𝐿∞_𝐅(0, 𝑇; 𝐿∞(𝐺)).   (2.42)

We have the following observability estimate for (2.41); its proof can be found in Liu (2014b) (see also Barbu, Răşcanu, & Tessitore, 2003; Tang & Zhang, 2009 for earlier results under stronger assumptions).

Theorem 2.10. Solutions to (2.41) satisfy that, for any 𝑧0 ∈ 𝐿²(𝐺),

|𝑧(𝑇)|_{𝐿²_{ℱ_𝑇}(𝛺;𝐿²(𝐺))} ≤ 𝑒^{𝒞𝜅2}|𝑧|_{𝐿²_𝐅(0,𝑇;𝐿²(𝐺0))},   (2.43)

where

𝜅2 = ∑_{𝑗=1}^{𝑛}|𝑏1𝑗|²_{𝐿∞_𝐅(0,𝑇;𝐿∞(𝐺))} + ∑_{𝑘=2}^{3}|𝑏𝑘|²_{𝐿∞_𝐅(0,𝑇;𝐿∞(𝐺))}.

When 𝑏3 ∈ 𝐿∞_𝐅(0, 𝑇; 𝑊^{2,∞}(𝐺)), (2.41) can be reduced to a random parabolic equation. To see this, we write 𝜗 = 𝑒^{−∫₀ᵗ 𝑏3(𝑠)𝑑𝑊(𝑠)}, and introduce a simple transformation 𝑧̃ = 𝜗𝑧. Then, one can check that 𝑧̃ satisfies

⎧ 𝑧̃ₜ − ∑_{𝑗,𝑘=1}^{𝑛}(𝑎^{𝑗𝑘}𝑧̃ₓⱼ)ₓₖ = ∑_{𝑗=1}^{𝑛} 𝑏̃1𝑗𝑧̃ₓⱼ + 𝑏̃2𝑧̃  in 𝑄,
⎨ 𝑧̃ = 0  on 𝛴,      (2.44)
⎩ 𝑧̃(0) = 𝑧0  in 𝐺,

where

𝑏̃1𝑗 = 𝑏1𝑗 − 2∑_{𝑘=1}^{𝑛} 𝑎^{𝑗𝑘}∫₀ᵗ 𝑏3,ₓₖ(𝑠)𝑑𝑊(𝑠),  𝑗 = 1, 2, …, 𝑛,

𝑏̃2 = 𝑏2 − (1∕2)𝑏3² − ∑_{𝑗=1}^{𝑛} 𝑏1𝑗∫₀ᵗ 𝑏3,ₓⱼ(𝑠)𝑑𝑊(𝑠) − ∑_{𝑗,𝑘=1}^{𝑛}(𝑎^{𝑗𝑘}∫₀ᵗ 𝑏3,ₓⱼ(𝑠)𝑑𝑊(𝑠))ₓₖ + ∑_{𝑗,𝑘=1}^{𝑛} 𝑎^{𝑗𝑘}∫₀ᵗ 𝑏3,ₓⱼ(𝑠)𝑑𝑊(𝑠)∫₀ᵗ 𝑏3,ₓₖ(𝑠)𝑑𝑊(𝑠).

One may expect that the observability estimate (2.43) (in Theorem 2.10) for (2.41) follows from a similar observability estimate for the parabolic equation (2.44) with a parameter 𝜔. But usually this is not the case. Indeed, it is easy to see that, in general, neither 𝑏̃1𝑗 nor 𝑏̃2 is bounded (w.r.t. the sample point 𝜔) unless 𝑏3 is space-independent.

2.6. Notes and open problems

There are mainly two methods to study the controllability of deterministic parabolic equations. One is the global Carleman estimate introduced in Fursikov and Imanuvilov (1996); the other is the time iteration method introduced in Lebeau and Robbiano (1995). Each method has its advantages. Both of them have been generalized to study the controllability problems of stochastic parabolic equations (e.g. Liu, 2014a, 2014b; Liu & Yu, 2019; Lü, 2011; Tang & Zhang, 2009).

It is very interesting to study the controllability of (2.38), in which only one control 𝑢1 is introduced. Until now, this problem is much less understood. Indeed, positive results are available only for some special cases of (2.38) when the time iteration method can be applied (e.g., Liu, 2014a; Lü, 2011). For these special cases, some new phenomena appear in the stochastic setting:

• The null controllability of stochastic parabolic equations does not imply the approximate controllability of such equations (Lü, 2011).
• The null controllability of a system coupled by two stochastic parabolic equations is not robust with respect to the lower order terms (Liu, 2014a). Hence, it seems that the Carleman estimate cannot be used to study the controllability of coupled stochastic parabolic systems.

These differ from deterministic parabolic equations significantly.

There are some other interesting techniques to prove the controllability of deterministic parabolic equations, which should be extended to the stochastic setting, for example:

• In Fattorini and Russell (1971), the null controllability of the heat equation in one space dimension was obtained by solving a moment problem. However, it seems that it is very hard to employ the same idea to prove the null controllability of stochastic parabolic equations. For example, let us consider the following equation

⎧ 𝑑𝑦 − 𝑦ₓₓ𝑑𝑡 = 𝑓𝑢𝑑𝑡 + 𝑎𝑦𝑑𝑊(𝑡)  in (0, 𝑇] × (0, 1),
⎨ 𝑦 = 0  on (0, 𝑇) × {0, 1},      (2.45)
⎩ 𝑦(0, 𝑥) = 𝑦0(𝑥)  in (0, 1).

Here 𝑦0 ∈ 𝐿²(0, 1), 𝑎 ∈ 𝐿∞(0, 1), 𝑓 ∈ 𝐿²(0, 1), 𝑦 is the state and 𝑢 ∈ 𝐿²_𝐅(0, 𝑇) is the control. One can see that it is not easy to reduce the null controllability problem for the system (2.45) to the usual moment problem (but, under some conditions, it is still possible to reduce it to a stochastic moment problem, which remains to be done).

• In Russell (1978), it is shown that if the wave equation is exactly controllable for some 𝑇 > 0 with controls supported in some 𝐺0 ⊂ 𝐺, then the heat equation is null controllable for all 𝑇 > 0 with controls supported in the same controller 𝐺0. It seems that one can follow this idea to establish a connection between the null controllability of stochastic heat equations and stochastic wave equations. Nevertheless, at this moment the null controllability problem for stochastic wave equations is still open, and it seems even more difficult than that for stochastic heat ones.

• In Section 2.5, we assume that 𝑎^{𝑗𝑘} ∈ 𝐶²(𝐺̄), 𝑗, 𝑘 = 1, ⋯, 𝑛. This assumption can be weakened. Indeed, one can easily see that the Carleman estimate approach works for Lipschitz continuous coefficients. Nevertheless, at this moment, we do not know what the minimal regularity condition is for the coefficients 𝑎^{𝑗𝑘}, 𝑗, 𝑘 = 1, ⋯, 𝑛.

One can find some other interesting works related to controllability, observability, unique continuation, stabilization, Hardy's uncertainty principle, insensitizing controls and so on for stochastic parabolic equations, say Barbu (2013), Barbu et al. (2003), Chen (2018), Fernández-Bertolin and Zhong (2020), Fu and Liu (2017a, 2017b), Hernández-Santamaría and Peralta (2020), Li and Lü (2012, 2013), Liu and Liu (2018), Lü and Yin (2015), Wu, Chen, and Wang (2020), Yan (2018), Yan and Sun (2011), Yan, Wu, Lu, and Wang (2020), Yang and Zhong (2016), Yin (2015) and Zhang (2008).

There are lots of interesting open problems for controllability of stochastic parabolic equations. Some of them are listed below:

(1) Null and approximate controllability problems for stochastic parabolic equations with one control
As we have said before, it is more natural to put one control into the stochastic parabolic equations to get the corresponding null/approximate controllability. However, this is a very difficult problem, and we have no idea how to solve it at this moment.

(2) Null/approximate controllability problem for semilinear/quasilinear stochastic parabolic equations
There are some deep results for the null controllability problems of semilinear/quasilinear deterministic parabolic equations. For example, in Fernández-Cara and Zuazua (2000b), the null controllability was proved for weakly blowing-up semilinear deterministic parabolic equations; in Liu and Zhang (2012), the local null controllability of quasilinear deterministic parabolic equations was obtained. However, as far as we know, Hernández-Santamaría, Balc'h, and Peralta (2020) is the only work addressing the null controllability of semilinear stochastic parabolic equations (with globally Lipschitz nonlinearities), and so far there is no controllability result for quasilinear stochastic parabolic equations.

(3) Cost of approximate controllability for stochastic parabolic equations
In Theorem 2.7, we conclude the approximate controllability of some stochastic parabolic equations. However, we do not give any estimate for the cost of the control which drives the state to the destination approximately. Another problem related to this is how to describe the attainable set of a controlled stochastic parabolic equation.

(4) Stabilization for stochastic parabolic equations
The stabilization problem is another important control problem, which is closely related to the null controllability problem. Compared with the fruitful works on the stabilization of deterministic parabolic equations, as far as we know, there are few works on the stabilization of stochastic parabolic equations (e.g., Barbu, 2013; Liu, 2005; Munteanu, 2018; Wu & Zhang, 2020).

3. Exact controllability of stochastic hyperbolic equations

This section is devoted to the exact controllability problem for stochastic hyperbolic equations. First, we shall consider the controllability problems for the models introduced in Examples 1.2 and 1.3. Then, we survey some recent results in the general setting. At last, we list some open problems.

3.1. Formulation of the problem and the main results

We begin with the following controlled (deterministic) one dimensional hyperbolic equation:

⎧ 𝑦ₜₜ − 𝑦ₓₓ = 𝑎𝑦  in (0, 𝑇) × (0, 1),
⎪ 𝑦 = ℎ  on (0, 𝑇) × {0},
⎨ 𝑦 = 0  on (0, 𝑇) × {1},      (3.1)
⎩ 𝑦(0) = 𝑦0, 𝑦ₜ(0) = 𝑦1  in (0, 1).

Here (𝑦0, 𝑦1) ∈ 𝐿²(0, 1) × 𝐻⁻¹(0, 1), 𝑎 ∈ 𝐿∞((0, 𝑇) × (0, 1)), (𝑦, 𝑦ₜ) is the state, and ℎ ∈ 𝐿²(0, 𝑇) is the control. It is well-known (e.g., Lions, 1988) that the system (3.1) admits a unique (transposition) solution 𝑦 ∈ 𝐶([0, 𝑇]; 𝐿²(0, 1)) ∩ 𝐶¹([0, 𝑇]; 𝐻⁻¹(0, 1)).

Controllability (and also observability) for deterministic hyperbolic equations is now well-understood (e.g., Bardos, Lebeau, & Rauch, 1992; Duyckaerts, Zhang, & Zuazua, 2008; Fu et al., 2019; Lions, 1988; Zhang, 2011; Zuazua, 2007). It is known that, for 𝑇 > 2, the system (3.1) is exactly controllable, i.e., for any given (𝑦0, 𝑦1), (𝑦̃0, 𝑦̃1) ∈ 𝐿²(0, 1) × 𝐻⁻¹(0, 1), one can find a control ℎ ∈ 𝐿²(0, 𝑇) such that the solution to (3.1) satisfies (𝑦(𝑇), 𝑦ₜ(𝑇)) = (𝑦̃0, 𝑦̃1) (e.g., Zuazua, 2007).

The main goal of this section is to study what will happen when (3.1) is replaced by stochastic models. We shall see that the corresponding stochastic controllability problems are much less understood.

Let us fix the following coefficients:

𝑎1, 𝑎2, 𝑎3, 𝑎5 ∈ 𝐿∞_𝐅(0, 𝑇; 𝐿∞(0, 1)),  𝑎4 ∈ 𝐿∞_𝐅(0, 𝑇; 𝑊^{1,∞}(0, 1)).   (3.2)

The first control system is

⎧ 𝑑𝑦ₜ = (𝑦ₓₓ + 𝑎1𝑦 + 𝑔0)𝑑𝑡 + (𝑎2𝑦 + 𝑔1)𝑑𝑊(𝑡)  in (0, 𝑇) × (0, 1),
⎪ 𝑦 = ℎ0  on (0, 𝑇) × {0},
⎨ 𝑦 = ℎ1  on (0, 𝑇) × {1},      (3.3)
⎩ 𝑦(0) = 𝑦0, 𝑦ₜ(0) = 𝑦1  in (0, 1).

Here 𝑦 is the state, 𝑔0, 𝑔1 ∈ 𝐿²_𝐅(0, 𝑇; 𝐻⁻¹(0, 1)) and ℎ0, ℎ1 ∈ 𝐿²_𝐅(0, 𝑇) are controls. The second control system is

⎧ 𝑑𝑦 = (𝑦̂ + 𝑎5𝑢1)𝑑𝑡 + (𝑎3𝑦 + 𝑢1)𝑑𝑊(𝑡)  in (0, 𝑇) × (0, 1),
⎪ 𝑑𝑦̂ − 𝑦ₓₓ𝑑𝑡 = (𝑎1𝑦 + 𝑎4𝑢2)𝑑𝑡 + (𝑎2𝑦 + 𝑢2)𝑑𝑊(𝑡)  in (0, 𝑇) × (0, 1),
⎨ 𝑦 = ℎ  on (0, 𝑇) × {0},
⎪ 𝑦 = 0  on (0, 𝑇) × {1},      (3.4)
⎩ 𝑦(0) = 𝑦0, 𝑦̂(0) = 𝑦̂0  in (0, 1).

Here (𝑦0, 𝑦̂0) ∈ 𝐿²(0, 1) × 𝐻⁻¹(0, 1), (𝑦, 𝑦̂) is the state, and 𝑢1 ∈ 𝐿²_𝐅(0, 𝑇; 𝐿²(0, 1)), 𝑢2 ∈ 𝐿²_𝐅(0, 𝑇; 𝐻⁻¹(0, 1)) and ℎ ∈ 𝐿²_𝐅(0, 𝑇) are controls.

Systems (3.3) and (3.4) are equations with nonhomogeneous boundary values. Their solutions are understood in the sense of transposition. To do this, we need the following backward stochastic ``reference'' equation:

⎧ 𝑑𝑧 = 𝑧̂𝑑𝑡 + 𝑍𝑑𝑊(𝑡)  in (0, 𝜏) × (0, 1),
⎪ 𝑑𝑧̂ − 𝑧ₓₓ𝑑𝑡 = (𝑏1𝑧 + 𝑏2𝑍 + 𝑏3𝑍̂)𝑑𝑡 + 𝑍̂𝑑𝑊(𝑡)  in (0, 𝜏) × (0, 1),
⎨ 𝑧 = 0  on (0, 𝜏) × {0, 1},      (3.5)
⎩ 𝑧(𝜏) = 𝑧^𝜏, 𝑧̂(𝜏) = 𝑧̂^𝜏  in (0, 1),

where 𝜏 ∈ (0, 𝑇], (𝑧^𝜏, 𝑧̂^𝜏) ∈ 𝐿²_{ℱ_𝜏}(𝛺; 𝐻¹₀(0, 1) × 𝐿²(0, 1)), and 𝑏𝑖 ∈ 𝐿∞_𝐅(0, 𝑇; 𝐿∞(0, 1)) (𝑖 = 1, 2, 3).

By Theorem B.12, for any (𝑧^𝜏, 𝑧̂^𝜏) ∈ 𝐿²_{ℱ_𝜏}(𝛺; 𝐻¹₀(0, 1)) × 𝐿²_{ℱ_𝜏}(𝛺; 𝐿²(0, 1)), Eq. (3.5) admits a unique solution (𝑧, 𝑧̂, 𝑍, 𝑍̂) ∈ 𝐿²_𝐅(𝛺; 𝐶([0, 𝜏]; 𝐻¹₀(0, 1))) × 𝐿²_𝐅(𝛺; 𝐶([0, 𝜏]; 𝐿²(0, 1))) × 𝐿²_𝐅(0, 𝜏; 𝐻¹₀(0, 1)) × 𝐿²_𝐅(0, 𝜏; 𝐿²(0, 1)). Moreover,

|𝑧|_{𝐿²_𝐅(𝛺;𝐶([0,𝜏];𝐻¹₀(0,1)))} + |𝑧̂|_{𝐿²_𝐅(𝛺;𝐶([0,𝜏];𝐿²(0,1)))} + |𝑍|_{𝐿²_𝐅(0,𝜏;𝐻¹₀(0,1))} + |𝑍̂|_{𝐿²_𝐅(0,𝜏;𝐿²(0,1))} ≤ 𝒞𝑒^{𝒞𝑟2}(|𝑧^𝜏|_{𝐿²_{ℱ_𝜏}(𝛺;𝐻¹₀(0,1))} + |𝑧̂^𝜏|_{𝐿²_{ℱ_𝜏}(𝛺;𝐿²(0,1))}),   (3.6)

where 𝑟2 ≜ ∑_{𝑖=1}^{3}|𝑏𝑖|²_{𝐿∞_𝐅(0,𝑇;𝐿∞(0,1))}.

We need the following hidden regularity result for solutions to (3.5) (which is a special case of Lü and Zhang (2019a, Proposition 3.1)).

Proposition 3.1. For any (𝑧^𝜏, 𝑧̂^𝜏) ∈ 𝐿²_{ℱ_𝜏}(𝛺; 𝐻¹₀(0, 1)) × 𝐿²_{ℱ_𝜏}(𝛺; 𝐿²(0, 1)), the solution (𝑧, 𝑧̂, 𝑍, 𝑍̂) to (3.5) satisfies 𝑧ₓ(0), 𝑧ₓ(1) ∈ 𝐿²_𝐅(0, 𝜏), and

|(𝑧ₓ(0), 𝑧ₓ(1))|_{𝐿²_𝐅(0,𝜏)×𝐿²_𝐅(0,𝜏)} ≤ 𝒞𝑒^{𝒞𝑟2}(|𝑧^𝜏|_{𝐿²_{ℱ_𝜏}(𝛺;𝐻¹₀(0,1))} + |𝑧̂^𝜏|_{𝐿²_{ℱ_𝜏}(𝛺;𝐿²(0,1))}),   (3.7)

where the constant 𝒞 is independent of 𝜏.

Proof. Let 𝜉(𝑥) = 1 − 2𝑥 for 𝑥 ∈ ℝ. By Itô's formula and the first equation of (3.5), we have

𝑑(𝜉𝑧̂𝑧ₓ) = 𝜉(𝑑𝑧̂)𝑧ₓ + 𝜉𝑧̂𝑑𝑧ₓ + 𝜉𝑑𝑧̂𝑑𝑧ₓ
 = 𝜉(𝑑𝑧̂)𝑧ₓ + 𝜉𝑧̂(𝑧̂ₓ𝑑𝑡 + 𝑍ₓ𝑑𝑊(𝑡)) + 𝜉𝑑𝑧̂𝑑𝑧ₓ
 = 𝜉(𝑑𝑧̂)𝑧ₓ + (1∕2)[(𝜉𝑧̂²)ₓ − 𝜉ₓ𝑧̂²]𝑑𝑡 + 𝜉𝑧̂𝑍ₓ𝑑𝑊(𝑡) + 𝜉𝑑𝑧̂𝑑𝑧ₓ.

Next,

𝜉𝑧ₓ𝑧ₓₓ = (1∕2)(𝜉𝑧ₓ²)ₓ − (1∕2)𝜉ₓ𝑧ₓ².

Therefore,

−[𝜉(𝑧̂² + 𝑧ₓ²)]ₓ𝑑𝑡 = 2[−𝑑(𝜉𝑧̂𝑧ₓ) + (𝑑𝑧̂ − 𝑧ₓₓ𝑑𝑡)𝜉𝑧ₓ] − 𝜉ₓ𝑧̂²𝑑𝑡 − 𝜉ₓ𝑧ₓ²𝑑𝑡 + 2𝜉𝑑𝑧̂𝑑𝑧ₓ + 2𝜉𝑧̂𝑍ₓ𝑑𝑊(𝑡).   (3.8)

Integrating (3.8) on (0, 𝜏) × (0, 1), and taking expectation on 𝛺, we get that

E∫₀^𝜏 𝑧ₓ(0)²𝑑𝑡 + E∫₀^𝜏 𝑧ₓ(1)²𝑑𝑡 = −2E∫₀¹ 𝜉𝑧̂^𝜏𝑧^𝜏ₓ𝑑𝑥 + 2E∫₀¹ 𝜉𝑧̂(0)𝑧ₓ(0)𝑑𝑥 + 2E∫_{𝑄𝜏}[𝜉(𝑏1𝑧 + 𝑏2𝑍 + 𝑏3𝑍̂)𝑧ₓ + 𝑧² + 𝑧̂² + 𝜉𝑍̂𝑍ₓ]𝑑𝑥𝑑𝑡.   (3.9)

Denote by 𝒥 the right hand side of (3.9). It follows from (3.6) that

|𝒥| ≤ 𝒞𝑒^{𝒞𝑟2}(|𝑧^𝜏|²_{𝐿²_{ℱ_𝜏}(𝛺;𝐻¹₀(0,1))} + |𝑧̂^𝜏|²_{𝐿²_{ℱ_𝜏}(𝛺;𝐿²(0,1))}).

This, together with (3.9), implies (3.7). □

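The role of the multiplier 𝜉(𝑥) = 1 − 2𝑥 in the proof above can be read off from the boundary terms alone. The following short computation is a sketch, using only 𝜉(0) = 1, 𝜉(1) = −1, 𝜉ₓ = −2, and the fact that 𝑧 = 0 on the lateral boundary forces 𝑧̂ = 𝑍 = 0 there; it shows why the left-hand side of (3.9) consists exactly of the traces 𝑧ₓ(0)² and 𝑧ₓ(1)².

```latex
% Integrate the divergence term of (3.8) in x over (0,1):
\begin{aligned}
-\int_0^1 \bigl[\xi\,(\hat z^2 + z_x^2)\bigr]_x\,dx
  &= \xi(0)\,(\hat z^2 + z_x^2)\big|_{x=0}
   - \xi(1)\,(\hat z^2 + z_x^2)\big|_{x=1}\\
  &= (\hat z^2 + z_x^2)\big|_{x=0} + (\hat z^2 + z_x^2)\big|_{x=1}
   = z_x(t,0)^2 + z_x(t,1)^2,
\end{aligned}
% since z(t,0)=z(t,1)=0 for all t, so dz = \hat z\,dt + Z\,dW(t) yields
% \hat z = Z = 0 at x = 0,1. Moreover -\xi_x = 2 > 0, so the terms
% -\xi_x\hat z^2 and -\xi_x z_x^2 in (3.8) enter with a favorable sign.
```

Any affine multiplier with 𝜉(0) > 0 > 𝜉(1) would produce both traces with positive weights; the choice 𝜉(𝑥) = 1 − 2𝑥 gives weight one at each endpoint.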
With the aid of Proposition 3.1, we introduce the following notions.

Definition 3.1. A stochastic process 𝑦 ∈ 𝐶_𝐅([0, 𝑇]; 𝐿²(𝛺; 𝐿²(0, 1))) ∩ 𝐶¹_𝐅([0, 𝑇]; 𝐿²(𝛺; 𝐻⁻¹(0, 1))) is called a transposition solution to (3.3) if for any 𝜏 ∈ (0, 𝑇] and (𝑧^𝜏, 𝑧̂^𝜏) ∈ 𝐿²_{ℱ_𝜏}(𝛺; 𝐻¹₀(0, 1)) × 𝐿²_{ℱ_𝜏}(𝛺; 𝐿²(0, 1)), it holds that

E⟨𝑦ₜ(𝜏), 𝑧^𝜏⟩_{𝐻⁻¹(0,1),𝐻¹₀(0,1)} − ⟨𝑦1, 𝑧(0)⟩_{𝐻⁻¹(0,1),𝐻¹₀(0,1)} − E⟨𝑦(𝜏), 𝑧̂^𝜏⟩_{𝐿²(0,1)} + ⟨𝑦0, 𝑧̂(0)⟩_{𝐿²(0,1)}
 = E∫₀^𝜏 ⟨𝑔0, 𝑧⟩_{𝐻⁻¹(0,1),𝐻¹₀(0,1)}𝑑𝑡 − E∫₀^𝜏 ⟨𝑔1, 𝑍⟩_{𝐻⁻¹(0,1),𝐻¹₀(0,1)}𝑑𝑡 − E∫₀^𝜏 𝑧ₓ(1)ℎ1𝑑𝑡 + E∫₀^𝜏 𝑧ₓ(0)ℎ0𝑑𝑡.   (3.10)

Here (𝑧, 𝑧̂, 𝑍, 𝑍̂) solves (3.5) with

𝑏1 = 𝑎1,  𝑏2 = 𝑎2,  𝑏3 = 0.

Definition 3.2. A couple of processes (𝑦, 𝑦̂) ∈ 𝐶_𝐅([0, 𝑇]; 𝐿²(𝛺; 𝐿²(0, 1))) × 𝐶_𝐅([0, 𝑇]; 𝐿²(𝛺; 𝐻⁻¹(0, 1))) is called a transposition solution to (3.4) if for any 𝜏 ∈ (0, 𝑇] and (𝑧^𝜏, 𝑧̂^𝜏) ∈ 𝐿²_{ℱ_𝜏}(𝛺; 𝐻¹₀(0, 1)) × 𝐿²_{ℱ_𝜏}(𝛺; 𝐿²(0, 1)), it holds that

E⟨𝑦̂(𝜏), 𝑧^𝜏⟩_{𝐻⁻¹(0,1),𝐻¹₀(0,1)} − ⟨𝑦̂0, 𝑧(0)⟩_{𝐻⁻¹(0,1),𝐻¹₀(0,1)} − E⟨𝑦(𝜏), 𝑧̂^𝜏⟩_{𝐿²(0,1)} + ⟨𝑦0, 𝑧̂(0)⟩_{𝐿²(0,1)}
 = −E∫₀^𝜏∫₀¹ 𝑢1(𝑎5𝑧̂ + 𝑍̂)𝑑𝑥𝑑𝑡 + E∫₀^𝜏 ⟨𝑢2, 𝑎4𝑧 + 𝑍⟩_{𝐻⁻¹(0,1),𝐻¹₀(0,1)}𝑑𝑡 + E∫₀^𝜏 𝑧ₓ(0)ℎ𝑑𝑡.   (3.11)

Here (𝑧, 𝑧̂, 𝑍, 𝑍̂) solves (3.5) with

𝑏1 = 𝑎1,  𝑏2 = 𝑎2,  𝑏3 = −𝑎3.

Remark 3.1. When (ℎ0, ℎ1) = 0 (resp. ℎ = 0), the control system (3.3) (resp. (3.4)) is a homogeneous boundary value problem. By Theorem B.7, (3.3) and (3.4) admit unique weak solutions

𝑦 ∈ 𝐶_𝐅([0, 𝑇]; 𝐿²(𝛺; 𝐿²(0, 1))) ∩ 𝐶¹_𝐅([0, 𝑇]; 𝐿²(𝛺; 𝐻⁻¹(0, 1)))

and

(𝑦, 𝑦̂) ∈ 𝐶_𝐅([0, 𝑇]; 𝐿²(𝛺; 𝐿²(0, 1))) × 𝐶_𝐅([0, 𝑇]; 𝐿²(𝛺; 𝐻⁻¹(0, 1))),

respectively. It follows from Itô's formula that these solutions are respectively transposition solutions to (3.3) and (3.4). Then, for the case of homogeneous boundary conditions, by the uniqueness of transposition solutions, the transposition solution to (3.3) (resp. (3.4)) is also the weak solution to the same equation.

We have the following well-posedness results for (3.3) and (3.4), which are special cases of Lü and Zhang (2019a, Proposition 4.2) and Lü and Zhang (2019a, Proposition 4.1), respectively.

Proposition 3.2. For each (𝑦0, 𝑦1) ∈ 𝐿²(0, 1) × 𝐻⁻¹(0, 1), the system (3.3) admits a unique transposition solution 𝑦. Furthermore,

|𝑦|_{𝐶_𝐅([0,𝑇];𝐿²(𝛺;𝐿²(0,1)))∩𝐶¹_𝐅([0,𝑇];𝐿²(𝛺;𝐻⁻¹(0,1)))} ≤ 𝒞𝑒^{𝒞𝑟3}(|𝑦0|_{𝐿²(0,1)} + |𝑦1|_{𝐻⁻¹(0,1)} + |𝑔0|_{𝐿²_𝐅(0,𝑇;𝐻⁻¹(0,1))} + |𝑔1|_{𝐿²_𝐅(0,𝑇;𝐻⁻¹(0,1))} + |(ℎ0, ℎ1)|_{𝐿²_𝐅(0,𝑇)×𝐿²_𝐅(0,𝑇)}),

where 𝑟3 ≜ ∑_{𝑘=1}^{2}|𝑎𝑘|²_{𝐿∞_𝐅(0,𝑇;𝐿∞(0,1))}.

Proposition 3.3. For each (𝑦0, 𝑦̂0) ∈ 𝐿²(0, 1) × 𝐻⁻¹(0, 1), the system (3.4) admits a unique transposition solution (𝑦, 𝑦̂). Moreover,

|(𝑦, 𝑦̂)|_{𝐶_𝐅([0,𝑇];𝐿²(𝛺;𝐿²(0,1)))×𝐶_𝐅([0,𝑇];𝐿²(𝛺;𝐻⁻¹(0,1)))} ≤ 𝒞𝑒^{𝒞𝑟4}(|𝑦0|_{𝐿²(0,1)} + |𝑦̂0|_{𝐻⁻¹(0,1)} + |𝑢1|_{𝐿²_𝐅(0,𝑇;𝐿²(0,1))} + |𝑢2|_{𝐿²_𝐅(0,𝑇;𝐻⁻¹(0,1))} + |ℎ|_{𝐿²_𝐅(0,𝑇)}).   (3.12)

Here 𝑟4 ≜ ∑_{𝑘=1}^{3}|𝑎𝑘|²_{𝐿∞_𝐅(0,𝑇;𝐿∞(0,1))}.

Proofs of Propositions 3.2 and 3.3 are similar. We only prove the more complex one, i.e., Proposition 3.3.

Proof of Proposition 3.3. Uniqueness. Assume that (𝑦, 𝑦̂) and (𝑦̃, 𝑦̃̂) are two transposition solutions to (3.4). By Definition 3.2, for any 𝜏 ∈ (0, 𝑇] and (𝑧^𝜏, 𝑧̂^𝜏) ∈ 𝐿²_{ℱ_𝜏}(𝛺; 𝐻¹₀(0, 1)) × 𝐿²_{ℱ_𝜏}(𝛺; 𝐿²(0, 1)),

E⟨𝑦̂(𝜏), 𝑧^𝜏⟩_{𝐻⁻¹(0,1),𝐻¹₀(0,1)} − E⟨𝑦(𝜏), 𝑧̂^𝜏⟩_{𝐿²(0,1)} = E⟨𝑦̃̂(𝜏), 𝑧^𝜏⟩_{𝐻⁻¹(0,1),𝐻¹₀(0,1)} − E⟨𝑦̃(𝜏), 𝑧̂^𝜏⟩_{𝐿²(0,1)},

which implies that

(𝑦(𝜏), 𝑦̂(𝜏)) = (𝑦̃(𝜏), 𝑦̃̂(𝜏)),  a.s., ∀ 𝜏 ∈ (0, 𝑇].

Thus, (𝑦, 𝑦̂) = (𝑦̃, 𝑦̃̂) in 𝐶_𝐅([0, 𝑇]; 𝐿²(𝛺; 𝐿²(0, 1))) × 𝐶_𝐅([0, 𝑇]; 𝐿²(𝛺; 𝐻⁻¹(0, 1))).

Existence. Since ℎ ∈ 𝐿²_𝐅(0, 𝑇), there exists a sequence {ℎ𝑚}^∞_{𝑚=1} ⊂ 𝐶²_𝐅([0, 𝑇]) with ℎ𝑚(0) = 0 for all 𝑚 ∈ ℕ such that

lim_{𝑚→∞} ℎ𝑚 = ℎ  in 𝐿²_𝐅(0, 𝑇).   (3.13)

For each 𝑚 ∈ ℕ, we can find an ℎ̃𝑚 ∈ 𝐶²_𝐅([0, 𝑇]; 𝐻²(0, 1)) such that ℎ̃𝑚(0, 𝑡) = ℎ𝑚(𝑡) for all 𝑡 ∈ [0, 𝑇] and ℎ̃𝑚(𝑥, 0) = 0 for a.e. 𝑥 ∈ [0, 1]. Consider the following equation:

⎧ 𝑑𝑦̃𝑚 = (𝑦̃̂𝑚 − ℎ̃𝑚,ₜ + 𝑎5𝑢1)𝑑𝑡 + [𝑎3(𝑦̃𝑚 + ℎ̃𝑚) + 𝑢1]𝑑𝑊(𝑡)  in (0, 𝑇) × (0, 1),
⎪ 𝑑𝑦̃̂𝑚 − 𝑦̃𝑚,ₓₓ𝑑𝑡 = (𝑎1𝑦̃𝑚 + 𝑎4𝑢2 + 𝜁𝑚)𝑑𝑡 + [𝑎2(𝑦̃𝑚 + ℎ̃𝑚) + 𝑢2]𝑑𝑊(𝑡)  in (0, 𝑇) × (0, 1),
⎨ 𝑦̃𝑚 = 0  on (0, 𝑇) × {0, 1},      (3.14)
⎩ 𝑦̃𝑚(0) = 𝑦0, 𝑦̃̂𝑚(0) = 𝑦̂0  in (0, 1),

where 𝜁𝑚 = ℎ̃𝑚,ₓₓ + 𝑎1ℎ̃𝑚. By Theorem B.7, the system (3.14) admits a unique mild (also weak) solution (𝑦̃𝑚, 𝑦̃̂𝑚) ∈ 𝐶_𝐅([0, 𝑇]; 𝐿²(𝛺; 𝐿²(0, 1))) × 𝐶_𝐅([0, 𝑇]; 𝐿²(𝛺; 𝐻⁻¹(0, 1))).

Let 𝑦𝑚 = 𝑦̃𝑚 + ℎ̃𝑚 and 𝑦̂𝑚 = 𝑦̃̂𝑚. For any 𝑚1, 𝑚2 ∈ ℕ, by Itô's formula and integration by parts, we have that

E⟨𝑦̂_{𝑚1}(𝜏), 𝑧^𝜏⟩_{𝐻⁻¹(0,1),𝐻¹₀(0,1)} − ⟨𝑦̂0, 𝑧(0)⟩_{𝐻⁻¹(0,1),𝐻¹₀(0,1)} − E⟨𝑦_{𝑚1}(𝜏), 𝑧̂^𝜏⟩_{𝐿²(0,1)} + ⟨𝑦0, 𝑧̂(0)⟩_{𝐿²(0,1)}
 = E∫₀^𝜏 𝑧ₓ(0)ℎ_{𝑚1}𝑑𝑡 − E∫₀^𝜏∫₀¹ 𝑢1(𝑎5𝑧̂ + 𝑍̂)𝑑𝑥𝑑𝑡 + E∫₀^𝜏 ⟨𝑢2, 𝑎4𝑧 + 𝑍⟩_{𝐻⁻¹(0,1),𝐻¹₀(0,1)}𝑑𝑡   (3.15)

and

E⟨𝑦̂_{𝑚2}(𝜏), 𝑧^𝜏⟩_{𝐻⁻¹(0,1),𝐻¹₀(0,1)} − ⟨𝑦̂0, 𝑧(0)⟩_{𝐻⁻¹(0,1),𝐻¹₀(0,1)} − E⟨𝑦_{𝑚2}(𝜏), 𝑧̂^𝜏⟩_{𝐿²(0,1)} + ⟨𝑦0, 𝑧̂(0)⟩_{𝐿²(0,1)}
 = E∫₀^𝜏 𝑧ₓ(0)ℎ_{𝑚2}𝑑𝑡 − E∫₀^𝜏∫₀¹ 𝑢1(𝑎5𝑧̂ + 𝑍̂)𝑑𝑥𝑑𝑡 + E∫₀^𝜏 ⟨𝑢2, 𝑎4𝑧 + 𝑍⟩_{𝐻⁻¹(0,1),𝐻¹₀(0,1)}𝑑𝑡.

Consequently,

E⟨𝑦̂_{𝑚1}(𝜏) − 𝑦̂_{𝑚2}(𝜏), 𝑧^𝜏⟩_{𝐻⁻¹(0,1),𝐻¹₀(0,1)} − E⟨𝑦_{𝑚1}(𝜏) − 𝑦_{𝑚2}(𝜏), 𝑧̂^𝜏⟩_{𝐿²(0,1)}   (3.16)

𝜏
= −E 𝑧𝑥 (0)(ℎ𝑚1 − ℎ𝑚2 )𝑑𝑡. 𝐿2 (𝛺; 𝐻 −1 (0, 1)), one can find (𝑔1 , 𝑔2 , ℎ0 , ℎ1 ) ∈ 𝐿2F (0, 𝑇 ; 𝐻 −1 (0, 1)) ×
∫0 𝑇
𝐿2F (0, 𝑇 ; 𝐻 −1 (0, 1)) × 𝐿2F (0, 𝑇 ) × 𝐿2F (0, 𝑇 ) such that the corresponding
Let us choose (z_τ, ẑ_τ) ∈ L²_{F_τ}(Ω; H₀¹(0,1)) × L²_{F_τ}(Ω; L²(0,1)) with

  |z_τ|_{L²_{F_τ}(Ω;H₀¹(0,1))} = 1,  |ẑ_τ|_{L²_{F_τ}(Ω;L²(0,1))} = 1

and

  E⟨ŷ_{m₁}(τ) − ŷ_{m₂}(τ), z_τ⟩_{H⁻¹(0,1),H₀¹(0,1)} − E⟨y_{m₁}(τ) − y_{m₂}(τ), ẑ_τ⟩_{L²(0,1)}
    ≥ ½(|y_{m₁}(τ) − y_{m₂}(τ)|_{L²_{F_τ}(Ω;L²(0,1))} + |ŷ_{m₁}(τ) − ŷ_{m₂}(τ)|_{L²_{F_τ}(Ω;H⁻¹(0,1))}).   (3.17)

It follows from (3.16), (3.17) and Proposition 3.1 that

  |y_{m₁}(τ) − y_{m₂}(τ)|_{L²_{F_τ}(Ω;L²(0,1))} + |ŷ_{m₁}(τ) − ŷ_{m₂}(τ)|_{L²_{F_τ}(Ω;H⁻¹(0,1))}
    ≤ 2|E∫₀^τ z_x(0)(h_{m₁} − h_{m₂}) dt|
    ≤ C|h_{m₁} − h_{m₂}|_{L²_F(0,T)} |(z_τ, ẑ_τ)|_{L²_{F_τ}(Ω;H₀¹(0,1))×L²_{F_τ}(Ω;L²(0,1))}
    ≤ C|h_{m₁} − h_{m₂}|_{L²_F(0,T)},

where the constant C is independent of τ. Consequently, it holds that

  |y_{m₁} − y_{m₂}|_{C_F([0,T];L²(Ω;L²(0,1)))} + |ŷ_{m₁} − ŷ_{m₂}|_{C_F([0,T];L²(Ω;H⁻¹(0,1)))} ≤ C|h_{m₁} − h_{m₂}|_{L²_F(0,T)}.

This concludes that {(y_m, ŷ_m)}_{m=1}^∞ is a Cauchy sequence in C_F([0,T]; L²(Ω;L²(0,1))) × C_F([0,T]; L²(Ω;H⁻¹(0,1))). Denote by (y, ŷ) the limit of {(y_m, ŷ_m)}_{m=1}^∞. Letting m → ∞ in (3.15), we obtain (3.11). Thus, (y, ŷ) is a transposition solution to (3.4).

Let us choose (z_τ, ẑ_τ) ∈ L²_{F_τ}(Ω; H₀¹(0,1)) × L²_{F_τ}(Ω; L²(0,1)) to be such that

  |z_τ|_{L²_{F_τ}(Ω;H₀¹(0,1))} = 1,  |ẑ_τ|_{L²_{F_τ}(Ω;L²(0,1))} = 1

and

  E⟨ŷ(τ), z_τ⟩_{H⁻¹(0,1),H₀¹(0,1)} − E⟨y(τ), ẑ_τ⟩_{L²(0,1)} ≥ ½(|y(τ)|_{L²_{F_τ}(Ω;L²(0,1))} + |ŷ(τ)|_{L²_{F_τ}(Ω;H⁻¹(0,1))}).

This, together with (3.11) and Proposition 3.1, implies that

  |y(τ)|_{L²_{F_τ}(Ω;L²(0,1))} + |ŷ(τ)|_{L²_{F_τ}(Ω;H⁻¹(0,1))}
    ≤ 2(|⟨ŷ₀, z(0)⟩_{H⁻¹(0,1),H₀¹(0,1)}| + |⟨y₀, ẑ(0)⟩_{L²(0,1)}|
       + |E∫₀^τ∫₀¹ u₁(a₅ẑ + Ẑ) dx dt| + |E∫₀^τ ⟨u₂, a₄z + Z⟩_{H⁻¹(0,1),H₀¹(0,1)} dt| + |E∫₀^τ z_x(0)h dt|)
    ≤ Ce^{r₄}(|y₀|_{L²(0,1)} + |ŷ₀|_{H⁻¹(0,1)} + |u₁|_{L²_F(0,T;L²(0,1))} + |u₂|_{L²_F(0,T;H⁻¹(0,1))} + |h|_{L²_F(0,T)})
       × |(z_τ, ẑ_τ)|_{L²_{F_τ}(Ω;H₀¹(0,1))×L²_{F_τ}(Ω;L²(0,1))}
    ≤ Ce^{r₄}(|y₀|_{L²(0,1)} + |ŷ₀|_{H⁻¹(0,1)} + |u₁|_{L²_F(0,T;L²(0,1))} + |u₂|_{L²_F(0,T;H⁻¹(0,1))} + |h|_{L²_F(0,T)}),

where the constant C is independent of τ. Therefore, we obtain (3.12). This completes the proof of Proposition 3.3. □

Now we introduce the following notions of exact controllability for the control systems (3.3) and (3.4), respectively.

Definition 3.3. The system (3.3) is called exactly controllable at time T if for any (y₀, y₁) ∈ L²(0,1) × H⁻¹(0,1) and (y′₀, y′₁) ∈ L²_{F_T}(Ω;L²(0,1)) × L²_{F_T}(Ω;H⁻¹(0,1)), one can find (g₁, g₂, h₀, h₁) ∈ L²_F(0,T;H⁻¹(0,1)) × L²_F(0,T;H⁻¹(0,1)) × L²_F(0,T) × L²_F(0,T) such that the corresponding solution y to the system (3.3) satisfies (y(T), y_t(T)) = (y′₀, y′₁), a.s.

Definition 3.4. The system (3.4) is called exactly controllable at time T if for any (y₀, ŷ₀) ∈ L²(0,1) × H⁻¹(0,1) and (y₁, ŷ₁) ∈ L²_{F_T}(Ω;L²(0,1)) × L²_{F_T}(Ω;H⁻¹(0,1)), one can find (u₁, u₂, h) ∈ L²_F(0,T;L²(0,1)) × L²_F(0,T;H⁻¹(0,1)) × L²_F(0,T) such that the corresponding solution (y, ŷ) to (3.4) satisfies (y(T), ŷ(T)) = (y₁, ŷ₁), a.s.

As we have said, since four controls, g₁, g₂, h₀ and h₁, are put in (3.3), one may guess that the desired exact controllability should trivially hold. Surprisingly, we have the following negative controllability result for the system (3.3), which is a special case of Lü and Zhang (2019a, Theorem 2.1).

Theorem 3.1. The system (3.3) is not exactly controllable for any time T > 0.

Next, we consider the exact controllability of (3.4). To this end, let us choose x₀ > 1, c₀ > 0, c₁ > 0 and T > 0 such that the following condition holds:

Condition 3.1.
  1. R₁ ≜ max_{x∈[0,1]} |x − x₀| ≥ R₀ ≜ min_{x∈[0,1]} |x − x₀| ≥ (5/6)R₁.   (3.18)
  2. T > 2R₁ and (2R₁/T)² < c₁ < 2R₁/T.

Theorem 3.2. Suppose that Condition 3.1 holds. Then the system (3.4) is exactly controllable at time T.

Remark 3.2. Due to the finite speed of propagation, the exact controllability of stochastic hyperbolic equations is possible only when the ''waiting'' time T is large enough. Condition 3.1 gives an estimate of such a time T, which is definitely not sharp. We believe T > 2 is enough but we cannot prove it at this moment.

Remark 3.3. One may suspect that Theorem 3.2 is trivial. For instance, one may give a possible ''proof'' of Theorem 3.2 as follows: Choosing u₁ = −a₃y and u₂ = −a₂y, the system (3.4) becomes

  ⎧ dy = (ŷ − a₃a₅y) dt                in (0,T) × (0,1),
  ⎪ dŷ − y_xx dt = (a₁y − a₄a₂y) dt    in (0,T) × (0,1),
  ⎨ y = h                              on (0,T) × {0},     (3.19)
  ⎪ y = 0                              on (0,T) × {1},
  ⎩ y(0) = y₀,  ŷ(0) = ŷ₀              in (0,1).

This is a wave-like equation with random coefficients. For every given ω ∈ Ω, one can find a control h(·, ω) such that the solution to (3.19) satisfies (y(T,x,ω), ŷ(T,x,ω)) = (y₁(x,ω), ŷ₁(x,ω)). However, it is unclear whether such a control can be chosen to be adapted to the filtration 𝐅 or not.

We introduce three controls in the system (3.4), and the controls in the diffusion terms act on the whole domain (0,1). One may ask whether localized controls are enough or whether the boundary control can be dropped. However, the answer is NO. We have the following negative result.

Theorem 3.3. The system (3.4) is not exactly controllable at any time T > 0 provided that one of the following three conditions is satisfied:
  (1) a₃ ∈ C_F([0,T];L^∞(0,1)), (0,1)∖G₀ ≠ ∅ and u₁ is supported in G₀;
  (2) a₂ ∈ C_F([0,T];L^∞(0,1)), (0,1)∖G₀ ≠ ∅ and u₂ is supported in G₀;
  (3) h = 0.
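Since Condition 3.1 is purely arithmetic, it can be checked mechanically. The following sketch is our own illustration (not from the paper; the value x₀ = 6 is a hypothetical choice): it returns the admissible open window for c₁, which is nonempty exactly when the waiting time satisfies T > 2R₁.

```python
def condition_3_1(x0, T):
    """Check Condition 3.1 on the spatial interval (0, 1) for a center
    x0 > 1.  Then R1 = max_{x in [0,1]} |x - x0| = x0 and
    R0 = min_{x in [0,1]} |x - x0| = x0 - 1.

    Returns the open interval ((2*R1/T)**2, 2*R1/T) of admissible c1
    from part 2 of the condition, or None if Condition 3.1 cannot hold.
    """
    R1, R0 = float(x0), float(x0) - 1.0
    if R0 < 5.0 / 6.0 * R1:            # part 1 of (3.18) fails
        return None
    if T <= 2.0 * R1:                  # waiting time too short
        return None
    lo, hi = (2.0 * R1 / T) ** 2, 2.0 * R1 / T
    return (lo, hi)                    # nonempty: 2*R1/T < 1, so lo < hi

# x0 = 6 gives R0/R1 = 5/6 exactly, so any T > 12 admits some c1:
print(condition_3_1(6.0, 13.0))   # a valid (lo, hi) window
print(condition_3_1(6.0, 10.0))   # None: T <= 2*R1
```

Note that the window shrinks toward the single point 2R₁/T = 1 as T approaches the critical time 2R₁ from above, which is the quantitative form of the ''waiting time'' discussed in Remark 3.2.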

279
Q. Lü and X. Zhang Annual Reviews in Control 51 (2021) 268–330

3.2. Lack of exact controllability

Before proving Theorems 3.1 and 3.3, we recall the following known result (Peng, 1990, Lemma 2.1).

Lemma 3.1. There is a random variable ξ ∈ L²(Ω) such that it is impossible to find (ϱ₁, ϱ₂) ∈ L²_F(0,T) × C_F([0,T]) and η ∈ ℝ satisfying

  ξ = η + ∫₀^T ϱ₁(t) dt + ∫₀^T ϱ₂(t) dW(t).

Proof of Theorem 3.1. We use the contradiction argument. Choose ψ ∈ H₀¹(0,1) satisfying |ψ|_{L²(0,1)} = 1 and let ỹ₀ = ξψ, where ξ is given in Lemma 3.1. Assume that (3.3) were exactly controllable. Then, for any y₀ ∈ L²(0,1), we would find (g₁, g₂, h₀, h₁) ∈ L²_F(0,T;H⁻¹(0,1)) × L²_F(0,T;H⁻¹(0,1)) × L²_F(0,T) × L²_F(0,T) such that the corresponding solution y ∈ C_F([0,T];L²(Ω;L²(0,1))) ∩ C¹_F([0,T];L²(Ω;H⁻¹(0,1))) to Eq. (3.3) satisfies y(T) = ỹ₀. Clearly,

  ∫₀¹ ỹ₀ψ dx − ∫₀¹ y₀ψ dx = ∫₀^T ⟨y_t, ψ⟩_{H⁻¹(0,1),H₀¹(0,1)} dt,

which leads to

  ξ = ∫₀¹ y₀ψ dx + ∫₀^T ⟨y_t, ψ⟩_{H⁻¹(0,1),H₀¹(0,1)} dt.

This contradicts Lemma 3.1. □

Proof of Theorem 3.3. Let us employ the contradiction argument, and divide the proof into three cases.

Case (1): a₃ ∈ C_F([0,T];L^∞(0,1)) and supp u₁ ⊂ G₀.

Since G₀ ⊂ (0,1) is an open subset and (0,1)∖G₀ ≠ ∅, we can find a ρ ∈ C₀^∞((0,1)∖G₀) satisfying |ρ|_{L²(0,1)} = 1.

Assume that (3.4) were exactly controllable. Then, for (y₀, ŷ₀) = (0,0), one could find (u₁, u₂, h) ∈ L²_F(0,T;L²(0,1)) × L²_F(0,T;H⁻¹(0,1)) × L²_F(0,T) with supp u₁ ⊂ G₀, a.e. (t,ω) ∈ (0,T)×Ω, such that the corresponding solution to (3.4) fulfills (y(T), ŷ(T)) = (ρξ, 0), where ξ is given in Lemma 3.1. Thus,

  ρξ = ∫₀^T (ŷ + a₅u₁) dt + ∫₀^T (a₃y + u₁) dW(t).   (3.20)

Multiplying both sides of (3.20) by ρ and integrating in (0,1), we get that

  ξ = ∫₀^T ⟨ŷ, ρ⟩_{H⁻¹(0,1),H₀¹(0,1)} dt + ∫₀^T ∫₀¹ a₃yρ dx dW(t).   (3.21)

Since (y, ŷ) ∈ C_F([0,T];L²(Ω;L²(0,1))) × C_F([0,T];L²(Ω;H⁻¹(0,1))) solves Eq. (3.4), we have

  ⟨ŷ, ρ⟩_{H⁻¹(0,1),H₀¹(0,1)} ∈ C_F([0,T];L²(Ω))  and  ⟨a₃y, ρ⟩_{L²(0,1)} ∈ C_F([0,T];L²(Ω)).

These, together with (3.21), contradict Lemma 3.1.

Case (2): a₂ ∈ C_F([0,T];L^∞(0,1)) and supp u₂ ⊂ G₀.

If (3.4) were exactly controllable, then, for (y₀, ŷ₀) = (0,0), one can find (u₁, u₂, h) ∈ L²_F(0,T;L²(0,1)) × L²_F(0,T;H⁻¹(0,1)) × L²_F(0,T) with supp u₂ ⊂ G₀, a.e. (t,ω) ∈ (0,T)×Ω, such that the corresponding solution to (3.4) fulfills (y(T), ŷ(T)) = (0, ξ).

Choose ρ as in Case (1). It is clear that (φ, φ̂) = (ρy, ρŷ) solves the following equation:

  ⎧ dφ = (φ̂ + a₅ρu₁) dt + (a₃φ + ρu₁) dW(t)  in (0,T) × (0,1),
  ⎪ dφ̂ − φ_xx dt = ζ dt + a₂φ dW(t)          in (0,T) × (0,1),
  ⎨ φ = 0                                     on (0,T) × {0,1},    (3.22)
  ⎪ φ̂ = 0                                     on (0,T) × {0,1},
  ⎩ φ(0) = 0,  φ̂(0) = 0                       in (0,1),

where ζ = −ρ_xx y − 2y_x ρ_x + ρa₁y. Further, we have φ(T) = 0 and φ̂(T) = ρξ. Noting that (φ, φ̂) is the weak solution to (3.22), we see that

  ⟨ρξ, ρ⟩_{H⁻²(0,1),H₀²(0,1)} = ∫₀^T (⟨φ_xx, ρ⟩_{H⁻²(0,1),H₀²(0,1)} + ⟨ζ, ρ⟩_{H⁻¹(0,1),H₀¹(0,1)}) dt + ∫₀^T ∫₀¹ a₂φρ dx dW(t),

which implies that

  ξ = ∫₀^T (⟨φ_xx, ρ⟩_{H⁻²(0,1),H₀²(0,1)} + ⟨ζ, ρ⟩_{H⁻¹(0,1),H₀¹(0,1)}) dt + ∫₀^T ∫₀¹ a₂φρ dx dW(t).   (3.23)

Since (φ, φ̂) ∈ C_F([0,T];L²(Ω;L²(0,1))) × C_F([0,T];L²(Ω;H⁻¹(0,1))), then

  ⟨φ_xx, ρ⟩_{H⁻²(0,1),H₀²(0,1)} + ⟨ζ, ρ⟩_{H⁻¹(0,1),H₀¹(0,1)} ∈ L²_F(0,T)  and  ⟨a₂φ, ρ⟩_{L²(0,1)} ∈ C_F([0,T]).

These, together with (3.23), contradict Lemma 3.1.

Case (3): h = 0. For simplicity, we only consider the special case that aᵢ = 0 (i = 1,2,3,4,5). Assume that the system (3.4) were exactly controllable. Then, by taking expectation on both sides of each equation in (3.4) (with aᵢ = 0, i = 1,…,5), it is easy to see that the following (deterministic) wave equation (without controls)

  ⎧ (Ey)_tt − (Ey)_xx = 0            in (0,T) × (0,1),
  ⎨ Ey = 0                           on (0,T) × {0,1},
  ⎩ (Ey)(0) = y₀,  (Ey)_t(0) = ŷ₀    in (0,1)

would be exactly controllable at time T, which is clearly impossible! □

3.3. Observability estimate for backward stochastic hyperbolic equations

By a standard duality argument (e.g., Lü & Zhang, 0000b, Chapter 7), Theorem 3.2 is equivalent to the following result.

Theorem 3.4. Under Condition 3.1, all solutions to Eq. (3.5) with τ = T satisfy

  |(z_T, ẑ_T)|_{L²_{F_T}(Ω;H₀¹(0,1)×L²(0,1))}
    ≤ Ce^{r₅}(|z_x(·,0)|_{L²_F(0,T)} + |a₄z + Z|_{L²_F(0,T;H₀¹(0,1))} + |a₅ẑ + Ẑ|_{L²_F(0,T;L²(0,1))}),
    ∀ (z_T, ẑ_T) ∈ L²_{F_T}(Ω;H₀¹(0,1)×L²(0,1)),   (3.24)

where r₅ ≜ r₄ + |a₄|²_{L^∞_F(0,T;W^{1,∞}(0,1))} + |a₅|²_{L^∞_F(0,T;L^∞(0,1))}.

To prove Theorem 3.4, we need the following pointwise identity for stochastic wave operators in one space dimension.

Lemma 3.2. Let 𝐳 be an H²(0,1)-valued Itô process and 𝐳̂ be an L²(0,1)-valued Itô process such that

  d𝐳 = 𝐳̂ dt + 𝐙 dW(t)  in (0,T) × (0,1)   (3.25)

for some 𝐙 ∈ L²_F(0,T;H¹(0,1)). For ℓ, Ψ ∈ C²([0,T]×ℝ), set

  θ = e^ℓ,  v = θ𝐳,  v̂ = θ𝐳̂ + ℓ_t v

and

  ⎧ 𝒜 ≜ (ℓ_t² − ℓ_tt) − (ℓ_x² − ℓ_xx) − Ψ,
  ⎩ ℬ ≜ 𝒜Ψ + (𝒜ℓ_t)_t − (𝒜ℓ_x)_x + ½(Ψ_tt − Ψ_xx).   (3.26)


Then, for any t ∈ [0,T], a.e. x ∈ (0,1) and a.s. ω ∈ Ω,³

  θ(−2ℓ_t v̂ + 2ℓ_x v_x + Ψv)(d𝐳̂ − 𝐳_xx dt)
    + (ℓ_x v_x² − 2ℓ_t v_x v̂ + ℓ_x v̂² + Ψ v_x v − (Ψ_x/2)v² − 𝒜ℓ_x v²)_x dt
    + d(ℓ_t v_x² + ℓ_t v̂² − 2ℓ_x v_x v̂ − Ψ v v̂ + (𝒜ℓ_t + Ψ_t/2)v²)
  = [(ℓ_tt + ℓ_xx − Ψ)v̂² + (ℓ_tt + ℓ_xx + Ψ)v_x² − 2ℓ_tx v_x v̂ + ℬv² + (−2ℓ_t v̂ + 2ℓ_x v_x + Ψv)²] dt   (3.27)
    + ℓ_t(dv̂)² − 2ℓ_x dv_x dv̂ − Ψ dv dv̂ + ℓ_t(dv_x)² + 𝒜ℓ_t(dv)²
    − {θ(−2ℓ_t v̂ + 2ℓ_x v_x + Ψv)ℓ_t 𝐙 + [2(θ𝐙)_x ℓ_x v̂ − θΨ_t v𝐙 + θΨ v̂𝐙] − 2[v_x(θ𝐙)_x + 𝒜θv𝐙]ℓ_t} dW(t).

Proof. By (3.25), and recalling v = θ𝐳 and v̂ = θ𝐳̂ + ℓ_t v, we obtain that

  dv = d(θ𝐳) = θ_t 𝐳 dt + θ d𝐳 = ℓ_t θ𝐳 dt + θ𝐳̂ dt + θ𝐙 dW(t) = v̂ dt + θ𝐙 dW(t).   (3.28)

Hence,

  d𝐳̂ = d[θ⁻¹(v̂ − ℓ_t v)] = θ⁻¹[dv̂ − ℓ_tt v dt − ℓ_t dv − ℓ_t(v̂ − ℓ_t v) dt]
      = θ⁻¹[dv̂ − (2ℓ_t v̂ + ℓ_tt v − ℓ_t² v) dt − θℓ_t 𝐙 dW(t)].   (3.29)

Similarly, we have

  𝐳_xx = θ⁻¹[v_xx − 2ℓ_x v_x + (ℓ_x² − ℓ_xx)v].   (3.30)

Therefore, by (3.29)–(3.30) and the definition of 𝒜 in (3.26), we get

  θ(−2ℓ_t v̂ + 2ℓ_x v_x + Ψv)(d𝐳̂ − 𝐳_xx dt)
  = (−2ℓ_t v̂ + 2ℓ_x v_x + Ψv)[dv̂ − v_xx dt + 𝒜v dt + (−2ℓ_t v̂ + 2ℓ_x v_x + Ψv) dt − θℓ_t 𝐙 dW(t)]
  = (−2ℓ_t v̂ + 2ℓ_x v_x + Ψv) dv̂ + (−2ℓ_t v̂ + 2ℓ_x v_x + Ψv)² dt   (3.31)
    + (−2ℓ_t v̂ + 2ℓ_x v_x + Ψv)(−v_xx + 𝒜v) dt − θ(−2ℓ_t v̂ + 2ℓ_x v_x + Ψv)ℓ_t 𝐙 dW(t).

We now analyze the first and third terms on the right-hand side of (3.31). Using Itô's formula and noting (3.28), we have

  (−2ℓ_t v̂ + 2ℓ_x v_x + Ψv) dv̂
  = d(−ℓ_t v̂² + 2ℓ_x v_x v̂ + Ψ v v̂) − 2ℓ_x v̂ dv_x − Ψ v̂ dv − (−ℓ_tt v̂² + 2ℓ_xt v_x v̂ + Ψ_t v v̂) dt
    + ℓ_t(dv̂)² − 2ℓ_x dv_x dv̂ − Ψ dv dv̂   (3.32)
  = d(−ℓ_t v̂² + 2ℓ_x v_x v̂ + Ψ v v̂ − (Ψ_t/2)v²) − (ℓ_x v̂²)_x dt
    + [(ℓ_tt + ℓ_xx − Ψ)v̂² − 2ℓ_xt v_x v̂ + (Ψ_tt/2)v²] dt
    + ℓ_t(dv̂)² − 2ℓ_x dv_x dv̂ − Ψ dv dv̂ − [2(θ𝐙)_x ℓ_x v̂ − θΨ_t v𝐙 + θΨ v̂𝐙] dW(t).

Next,

  −2ℓ_t v̂(−v_xx + 𝒜v) dt
  = 2(ℓ_t v_x v̂)_x dt − 2ℓ_tx v_x v̂ dt − 2ℓ_t v_x v̂_x dt − 2𝒜ℓ_t v v̂ dt
  = 2(ℓ_t v_x v̂)_x dt − 2ℓ_tx v_x v̂ dt − 2ℓ_t v_x (dv − θ𝐙 dW(t))_x − 2𝒜ℓ_t v (dv − θ𝐙 dW(t))   (3.33)
  = 2(ℓ_t v_x v̂)_x dt − 2ℓ_tx v_x v̂ dt − d(ℓ_t v_x² + 𝒜ℓ_t v²) + ℓ_tt v_x² dt + (𝒜ℓ_t)_t v² dt
    + ℓ_t(dv_x)² + 𝒜ℓ_t(dv)² + 2[v_x(θ𝐙)_x + 𝒜θv𝐙]ℓ_t dW(t).

Further, by some direct computation, one may check that

  2ℓ_x v_x(−v_xx + 𝒜v) = −(ℓ_x v_x² − 𝒜ℓ_x v²)_x + ℓ_xx v_x² − (𝒜ℓ_x)_x v²   (3.34)

and

  Ψv(−v_xx + 𝒜v) = −(Ψ v_x v − (Ψ_x/2)v²)_x + Ψ v_x² + (−(Ψ_xx/2) + 𝒜Ψ)v².   (3.35)

Finally, combining (3.31)–(3.35), we arrive at the desired equality (3.27). □

Proof of Theorem 3.4. We divide the proof into four steps.

Step 1. For λ > 0 and μ > 0, let

  ψ(t,x) = (x − x₀)² − c₁(t − T/2)²,  φ(t,x) = e^{μψ(t,x)},  ℓ(t,x) = λφ(t,x),   (3.36)

where c₁ is given in Condition 3.1. Put

  Λ₁ ≜ {(t,x) ∈ (0,T)×(0,1) | (x − x₀)² − c₁(t − T/2)² > 3R₁²/4}   (3.37)

and

  Λ₂ ≜ {(t,x) ∈ (0,T)×(0,1) | (x − x₀)² − c₁(t − T/2)² > R₁²/2}.   (3.38)

Noting that R₀ > (5/6)R₁, we see that both Λ₁ and Λ₂ are nonempty. It follows from (3.38) that

  (x − x₀)² > 2c₁²(t − T/2)²,  ∀ (x,t) ∈ Λ₂.   (3.39)

From Condition 3.1 and (3.36), we find that

  ψ(0,x) = ψ(T,x) ≤ R₁² − c₁T²/4 < 0,  ∀ x ∈ (0,1).   (3.40)

Hence, there exists ε₂ ∈ (0, 1/2) such that

  Λ₂ ⊂ Q₂ ≜ (T/2 − ε₂T, T/2 + ε₂T) × (0,1)   (3.41)

and that

  ψ(t,x) < 0,  ∀ (t,x) ∈ Q∖Q₂.   (3.42)

Next, since {T/2} × (0,1) ⊂ Λ₁, we can find an ε₁ ∈ (0, ε₂) such that

  Q₁ ≜ (T/2 − ε₁T, T/2 + ε₁T) × (0,1) ⊂ Λ₁.   (3.43)

Step 2. By (3.39), we may choose c₀ > 0 such that

  4c₁²(t − T/2)² < c₀ < 4(x − x₀)²,  ∀ (x,t) ∈ Λ₂.   (3.44)

For each (z_T, ẑ_T) ∈ L²_{F_T}(Ω;H₀¹(0,1)) × L²_{F_T}(Ω;L²(0,1)), let us apply Lemma 3.2 with

  Ψ = ℓ_tt + ℓ_xx − c₀λμ²φ   (3.45)

to the corresponding solution (z, ẑ, Z, Ẑ) ∈ L²_F(Ω;C([0,T];H₀¹(0,1))) × L²_F(Ω;C([0,T];L²(0,1))) × L²_F(0,T;H₀¹(0,1)) × L²_F(0,T;L²(0,1)) of Eq. (3.5) with τ = T, and then analyze the resulting terms in (3.27) one by one. In what follows, we use the notations v = θz and v̂ = θẑ + ℓ_t v, where θ = e^ℓ for ℓ given by (3.36).

Let us first analyze the terms which stand for the ''energy'' of the solution. From (3.36), we have

  ℓ_t = −2c₁λμ(t − T/2)φ,  ℓ_x = 2λμ(x − x₀)φ,   (3.46)

³ See Remark B.3 for (dv̂)², (dv_x)², (dv)², dv_x dv̂ and dv dv̂.
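The qualitative behaviour of the Carleman weight in (3.36) — ψ < 0 at t ∈ {0, T} as in (3.40), while the whole mid-time slice lies in Λ₂ of (3.38) — can be checked numerically. The sketch below is our own illustration; the parameter values x₀ = 6, T = 13, c₁ = 0.9 are hypothetical choices consistent with Condition 3.1, and μ is kept small only to avoid floating-point overflow in the doubly exponential weight θ = e^{λφ}.

```python
import math

# Hypothetical parameters consistent with Condition 3.1:
x0, T = 6.0, 13.0
R1 = x0                      # max_{x in [0,1]} |x - x0|
c1 = 0.9                     # lies in ((2*R1/T)**2, 2*R1/T) ~ (0.852, 0.923)
lam, mu = 1.0, 0.05          # mu kept small to avoid overflow in theta

def psi(t, x):               # the quadratic weight of (3.36)
    return (x - x0) ** 2 - c1 * (t - T / 2.0) ** 2

def weight(t, x):            # phi = e^{mu*psi}, theta = e^{lam*phi}
    phi = math.exp(mu * psi(t, x))
    return phi, math.exp(lam * phi)

grid = [k / 100.0 for k in range(101)]
# (3.40): psi(0, x) = psi(T, x) < 0 on [0, 1] ...
assert all(psi(0.0, x) < 0.0 for x in grid)
assert all(psi(0.0, x) == psi(T, x) for x in grid)
# ... while the whole mid-time slice {T/2} x (0, 1) lies in Lambda_2 (3.38):
assert all(psi(T / 2.0, x) > R1 ** 2 / 2.0 for x in grid)
# Hence phi (and so theta) is large at mid-time and small at t = 0, T:
phi_mid, _ = weight(T / 2.0, 0.5)
phi_end, _ = weight(0.0, 0.5)
print(phi_mid > 1.0 > phi_end)   # True
```

This is precisely the mechanism of Steps 2–4 below: the weight concentrates the estimate on the slab Q₁ around t = T/2 and makes the terms evaluated at the end times τ, τ′ exponentially negligible.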


  ℓ_tt = 4c₁²λμ²(t − T/2)²φ − 2c₁λμφ,  ℓ_xx = 4λμ²(x − x₀)²φ + 2λμφ,  ℓ_tx = −4c₁λμ²(t − T/2)(x − x₀)φ,   (3.47)

  ℓ_tttt = c₁⁴λμ⁴(2t − T)⁴φ − 12c₁³λμ³(2t − T)²φ + 12c₁²λμ²φ,   (3.48)

  ℓ_xxxx = 16λμ⁴(x − x₀)⁴φ + 48λμ³(x − x₀)²φ + 12λμ²φ   (3.49)

and

  ℓ_ttxx = 4c₁²λμ⁴(2t − T)²(x − x₀)²φ + 2c₁²λμ³(2t − T)²φ − 8c₁λμ³(x − x₀)²φ − 4c₁λμ²φ.   (3.50)

By (3.46) and (3.47), we get

  (ℓ_tt + ℓ_xx − Ψ)v̂² = c₀λμ²φ v̂²,   (3.51)

  (ℓ_tt + ℓ_xx + Ψ)v_x² = 8λμ²φ[c₁²(t − T/2)² + (x − x₀)² − c₀/8]v_x² + 4(1 − c₁)λμφ v_x²   (3.52)

and

  2ℓ_xt v_x v̂ = −8c₁λμ²(t − T/2)(x − x₀)φ v_x v̂ ≥ −4c₁²λμ²(t − T/2)²φ v̂² − 4λμ²(x − x₀)²φ v_x².   (3.53)

By (3.44) and (3.51)–(3.53), there exists c₂ > 0 such that

  (ℓ_tt + ℓ_xx − Ψ)v̂² + (ℓ_tt + ℓ_xx + Ψ)v_x² + 2ℓ_xt v_x v̂ ≥ c₂λμ²φ(v̂² + v_x²),  ∀ (x,t) ∈ Λ₂.   (3.54)

By (3.26), (3.46) and (3.47), we have

  𝒜 = (ℓ_t² − ℓ_tt) − (ℓ_x² − ℓ_xx) − Ψ = ℓ_t² − ℓ_x² − 2ℓ_tt + c₀λμ²φ
    = 4λ²μ²φ²[c₁²(t − T/2)² − (x − x₀)²] − 8c₁²λμ²(t − T/2)²φ + 4c₁λμφ + c₀λμ²φ,   (3.55)

  𝒜_t = −16λ²μ³φ²c₁(t − T/2)[c₁²(t − T/2)² − (x − x₀)²] + λ²O(μ²)φ² + λO(μ³)φ,   (3.56)

  𝒜_x = 16λ²μ³φ²(x − x₀)[c₁²(t − T/2)² − (x − x₀)²] + λ²O(μ²)φ² + λO(μ³)φ,   (3.57)

  𝒜_t ℓ_t = 32λ³μ⁴φ³c₁²(t − T/2)²[c₁²(t − T/2)² − (x − x₀)²] + λ³O(μ³)φ³ + λ²O(μ⁴)φ²,   (3.58)

  𝒜ℓ_tt = 16λ³μ⁴φ³c₁²(t − T/2)²[c₁²(t − T/2)² − (x − x₀)²] + λ³O(μ³)φ³ + λ²O(μ⁴)φ²   (3.59)–(3.60)

and

  c₀λμ²φ𝒜 = 4c₀λ³μ⁴φ³[c₁²(t − T/2)² − (x − x₀)²] + λ²O(μ⁴)φ².   (3.61)

Similarly,

  𝒜_x ℓ_x = 32λ³μ⁴φ³(x − x₀)²[c₁²(t − T/2)² − (x − x₀)²] + λ³O(μ³)φ³ + λ²O(μ⁴)φ²   (3.62)

and

  ½(Ψ_tt − Ψ_xx) = λO(μ⁴)φ.   (3.63)

By the definition of ℬ and recalling (3.45), we see that

  ℬ = 𝒜Ψ + (𝒜ℓ_t)_t − (𝒜ℓ_x)_x + ½(Ψ_tt − Ψ_xx)
    = 𝒜Ψ + 𝒜_t ℓ_t + 𝒜ℓ_tt − 𝒜_x ℓ_x − 𝒜ℓ_xx + ½(Ψ_tt − Ψ_xx)   (3.64)
    = 𝒜_t ℓ_t + 2𝒜ℓ_tt − 𝒜_x ℓ_x − c₀λμ²φ𝒜 + ½(Ψ_tt − Ψ_xx).

Combining (3.55)–(3.64), we obtain that

  ℬ ≥ 32λ³μ⁴φ³(x − x₀)⁴ − 96c₁²λ³μ⁴φ³(t − T/2)²(x − x₀)² + 64c₁⁴λ³μ⁴φ³(t − T/2)⁴
      + 4c₀λ³μ⁴φ³(x − x₀)² − 4c₀c₁²λ³μ⁴φ³(t − T/2)² + λ³O(μ³)φ³ + λ²O(μ⁴)φ² + λO(μ⁴)φ
    ≥ 16λ³μ⁴φ³(x − x₀)⁴ − 32c₁²λ³μ⁴φ³(t − T/2)²(x − x₀)² + λ³O(μ³)φ³ + λ²O(μ⁴)φ² + λO(μ⁴)φ.

This, together with (3.39), implies that there exist λ₁ > 0, μ₁ > 0 and c₃ > 0 such that for all λ ≥ λ₁ and μ ≥ μ₁,

  ℬ ≥ c₃λ³μ⁴φ³,  ∀ (t,x) ∈ Λ₂.   (3.65)

Step 3. In this step, we analyze the terms concerning (dv̂)², (dv_x)², (dv)², dv_x dv̂ and dv dv̂.

For any τ ∈ (0, T/2 − ε₁T) and τ′ ∈ (T/2 + ε₁T, T), put

  Q_τ^{τ′} ≜ (τ, τ′) × (0,1).   (3.66)

Noting that dv̂ = θℓ_t ẑ dt + θ dẑ and dv = θℓ_t z dt + θ dz, we have

  |E∫_{Q_τ^{τ′}} ℓ_t (dv̂)² dx| = 2c₁λμ|E∫_{Q_τ^{τ′}} (t − T/2)φθ²Ẑ² dx dt|
    ≤ Cλμ E∫_{Q_τ^{τ′}} θ²φẐ² dx dt   (3.67)
    ≤ Cλμ E∫_{Q_τ^{τ′}} θ²φ|a₅ẑ + Ẑ|² dx dt + Cλμ r₅ E∫_{Q_τ^{τ′}} θ²φ|ẑ|² dx dt,

  |E∫_{Q_τ^{τ′}} ℓ_x dv_x dv̂ dx|
    = 2λμ|E∫_{Q_τ^{τ′}} (x − x₀)φ[2λμ(x − x₀)φθZ + θZ_x]θẐ dx dt|   (3.68)
    ≤ Cλμ E∫_{Q_τ^{τ′}} φθ²(λ²μ²φ²Z² + |Z_x|² + Ẑ²) dx dt
    ≤ Cλμ[r₅ E∫_{Q_τ^{τ′}} φ(|v_x|² + |v̂|² + λ²μ²φ²|v|²) dx dt
        + E∫_{Q_τ^{τ′}} φθ²(λ²μ²φ²|a₄z + Z|² + |(a₄z + Z)_x|² + |a₅ẑ + Ẑ|²) dx dt],

  |E∫_{Q_τ^{τ′}} Ψ dv dv̂ dx|
    = |E∫_{Q_τ^{τ′}} [4c₁²λμ²(t − T/2)²φ − 2c₁λμφ + 4λμ²(x − x₀)²φ + 2λμφ − c₀λμ²φ] θZ·θẐ dx dt|   (3.69)
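All of the derivative formulas (3.46)–(3.50) and the O(μᵏ) bookkeeping in (3.55)–(3.63) follow from the chain rule for ℓ = λe^{μψ} with the quadratic weight ψ of (3.36): each t- or x-derivative either hits a polynomial prefactor or brings down one extra factor of μψ_t or μψ_x. As a sanity check (our own computation, consistent with the first identity in (3.47)):

```latex
\ell_{t}= \lambda\mu\,\psi_t\,\phi ,\qquad \psi_t=-2c_1\Bigl(t-\frac{T}{2}\Bigr),\qquad
\ell_{tt}=\lambda\mu\bigl(\psi_{tt}+\mu\,\psi_t^{\,2}\bigr)\phi
         =4c_1^2\lambda\mu^2\Bigl(t-\frac{T}{2}\Bigr)^{2}\phi-2c_1\lambda\mu\,\phi .
```

Iterating the same rule shows that every additional derivative raises the power of μ by at most one, which is exactly why the remainders in (3.56)–(3.63) can be recorded in the form λʲO(μᵏ)φʲ.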

    ≤ C[λμ E∫_{Q_τ^{τ′}} θ²φ(|a₅ẑ + Ẑ|² + μ²|a₄z + Z|²) dx dt + λμ r₅ E∫_{Q_τ^{τ′}} φ(|v̂|² + μ²|v|²) dx dt],

  |E∫_{Q_τ^{τ′}} ℓ_t (dv_x)² dx|
    = 2c₁λμ|E∫_{Q_τ^{τ′}} (t − T/2)φ[2λμ(x − x₀)φθZ + θZ_x]² dx dt|   (3.70)
    ≤ λμ r₅ E∫_{Q_τ^{τ′}} φ(|v_x|² + λ²μ²φ²|v|²) dx dt + λμ E∫_{Q_τ^{τ′}} φ[|(a₄z + Z)_x|² + λ²μ²φ²|a₄z + Z|²] dx dt,

and

  |E∫_{Q_τ^{τ′}} 𝒜ℓ_t (dv)² dx|
    = |E∫_{Q_τ^{τ′}} {c₁(2t − T)λ³μ³φ³[c₁²(2t − T)² − 4(x − x₀)²] − λ²O(μ³)φ²}(θZ)² dx dt|   (3.71)
    ≤ λ³μ³ r₅ E∫_{Q_τ^{τ′}} φ³|v|² dx dt + λ³μ³ E∫_{Q_τ^{τ′}} φ³θ²|a₄z + Z|² dx dt.

Step 4. Integrating (3.27) in Q_τ^{τ′}, taking expectation and using (3.67)–(3.71), we obtain that

  E∫_{Q_τ^{τ′}} θ(−2ℓ_t v̂ + 2ℓ_x v_x + Ψv)(dẑ − z_xx dt) dx
    + λ(1 − x₀)E∫_τ^{τ′} |v_x(t,1)|² dt + λx₀ E∫_τ^{τ′} |v_x(t,0)|² dt
    + E∫_{Q_τ^{τ′}} d[ℓ_t v_x² + ℓ_t v̂² − 2ℓ_x v_x v̂ − Ψ v v̂ + (𝒜ℓ_t + Ψ_t/2)v²] dx   (3.72)
  ≥ E∫_{Q_τ^{τ′}} [(ℓ_tt + ℓ_xx − Ψ)v̂² + (ℓ_tt + ℓ_xx + Ψ)v_x² + 2ℓ_xt v_x v̂] dx dt + E∫_{Q_τ^{τ′}} ℬv² dx dt
    + E∫_{Q_τ^{τ′}} (−2ℓ_t v̂ + 2ℓ_x v_x + Ψv)² dx dt
    − λμ E∫_{Q_τ^{τ′}} φ[|(a₄z + Z)_x|² + |a₅ẑ + Ẑ|² + λ²μ²φ²|a₄z + Z|²] dx dt
    − λ³μ³ r₅ E∫_{Q_τ^{τ′}} φ³|v|² dx dt − λμ r₅ E∫_{Q_τ^{τ′}} φ(|v_x|² + |v̂|²) dx dt.

Clearly,

  |E∫_{Q_τ^{τ′}} d[ℓ_t v_x² + ℓ_t v̂² − 2ℓ_x v_x v̂ − Ψ v v̂ + (𝒜ℓ_t + Ψ_t/2)v²] dx|
  = |E∫₀¹ [ℓ_t v_x(τ′)² + ℓ_t v̂(τ′)² − 2ℓ_x v_x(τ′)v̂(τ′) − Ψ v(τ′)v̂(τ′) + (𝒜ℓ_t + Ψ_t/2)v(τ′)²] dx
     − E∫₀¹ [ℓ_t v_x(τ)² + ℓ_t v̂(τ)² − 2ℓ_x v_x(τ)v̂(τ) − Ψ v(τ)v̂(τ) + (𝒜ℓ_t + Ψ_t/2)v(τ)²] dx|   (3.73)
  ≤ Cλ³μ³ E∫₀¹ {φ[v̂(τ)² + |v_x(τ)|²] + φ³v(τ)² + φ[v̂(τ′)² + |v_x(τ′)|²] + φ³v(τ′)²} dx.

By (3.42), there exist λ₂ > 0 and μ₂ > 0 such that for all λ ≥ λ₂ and μ ≥ μ₂,

  λ³μ³θ(τ)²[φ(τ) + φ(τ)³] ≤ e^λ,  λ³μ³θ(τ′)²[φ(τ′) + φ(τ′)³] ≤ e^λ.   (3.74)

Since v = θz and v̂ = θẑ + ℓ_t v, it follows from (3.73) and (3.74) that

  |E∫_{Q_τ^{τ′}} d[ℓ_t v_x² + ℓ_t v̂² − 2ℓ_x v_x v̂ − Ψ v v̂ + (𝒜ℓ_t + Ψ_t/2)v²] dx|
    ≤ Ce^λ E∫₀¹ (ẑ(τ)² + |z_x(τ)|² + z(τ)² + ẑ(τ′)² + |z_x(τ′)|² + z(τ′)²) dx.   (3.75)

By (3.37), (3.41) and (3.66), we see that Λ₂ ⊂ Q_τ^{τ′}. It follows from (3.51)–(3.53) and (3.64) that

  |E∫_{Q_τ^{τ′}∖Λ₂} [(ℓ_tt + ℓ_xx − Ψ)v̂² + (ℓ_tt + ℓ_xx + Ψ)v_x² + 2ℓ_xt v_x v̂] dx dt|
    ≤ Cλμ² E∫_{Q_τ^{τ′}∖Λ₂} φθ²(ẑ² + z_x²) dx dt + Cλ³μ⁴ E∫_{Q_τ^{τ′}∖Λ₂} φ³θ²z² dx dt   (3.76)

and

  |E∫_{Q_τ^{τ′}∖Λ₂} ℬv² dx dt| ≤ Cλ³μ⁴ E∫_{Q_τ^{τ′}∖Λ₂} φ³θ²z² dx dt.   (3.77)

From (3.38), we know that

  θ² ≤ exp(2λe^{R₁²μ/2})  in Q_τ^{τ′}∖Λ₂.

Consequently, there exist λ₃ ≥ 0 and μ₃ ≥ 0 such that for all λ ≥ λ₃ and μ ≥ μ₃,

  λμ² max_{(x,t)∈Q_τ^{τ′}∖Λ₂} φθ² ≤ exp(2λe^{5R₁²μ/8}),  λ³μ⁴ max_{(x,t)∈Q_τ^{τ′}∖Λ₂} φ³θ² ≤ exp(2λe^{5R₁²μ/8}).   (3.78)

It follows from (3.76)–(3.78) that there exists c₄ > 0 such that for all λ ≥ λ₄ ≜ max{λ₁, λ₂, λ₃, r₅ + 1} and μ ≥ μ₄ ≜ max{μ₁, μ₂, μ₃},

  E∫_{Q_τ^{τ′}} [(ℓ_tt + ℓ_xx − Ψ)v̂² + (ℓ_tt + ℓ_xx + Ψ)v_x² + 2ℓ_xt v_x v̂] dx dt + E∫_{Q_τ^{τ′}} ℬv² dx dt
    − λμ r₄ E∫_{Q_τ^{τ′}} φ(|v_x|² + |v̂|²) dx dt − λ³μ³ r₄ E∫_{Q_τ^{τ′}} φ³|v|² dx dt   (3.79)
  ≥ c₄λμ² E∫_{Λ₂} φ(v̂² + |v_x|²) dx dt + c₄λ³μ⁴ E∫_{Λ₂} φ³|v|² dx dt
    − C exp(2λe^{5R₁²μ/8}) E∫_Q (|ẑ|² + |z_x|² + |z|²) dx dt.

Noting that (z, ẑ, Z, Ẑ) solves Eq. (3.5) with τ = T, we deduce that

  E∫_{Q_τ^{τ′}} θ(−2ℓ_t v̂ + 2ℓ_x v_x + Ψv)(dẑ − z_xx dt) dx
  = E∫_{Q_τ^{τ′}} θ(−2ℓ_t v̂ + 2ℓ_x v_x + Ψv)(a₁z + a₂Z − a₃Ẑ) dx dt   (3.80)
  ≤ E∫_{Q_τ^{τ′}} (−2ℓ_t v̂ + 2ℓ_x v_x + Ψv)² dx dt + Cr₄ E∫_{Q_τ^{τ′}} θ²(|z_x|² + z² + Z² + Ẑ²) dx dt.

Combining (3.72), (3.75), (3.79) and (3.80), we conclude that there exists λ₅ ≥ max{λ₄, r₅ + 1} such that for any λ ≥ λ₅ and μ = μ₄, it holds that


  E∫_{Λ₁} θ²(|ẑ|² + |z_x|²) dx dt + E∫_{Λ₁} θ²|z|² dx dt
  ≤ C{exp(2λe^{5R₁²μ₄/8}) E∫_Q (|ẑ|² + |z_x|² + |z|²) dx dt
     + exp(2λe^{R₁²μ₄})[E∫₀^T |z_x(0,t)|² dt + E∫_Q (|a₄z + Z|² + |a₅ẑ + Ẑ|²) dx dt]   (3.81)
     + e^λ E∫₀¹ (ẑ(τ)² + |z_x(τ)|² + z(τ)² + ẑ(τ′)² + |z_x(τ′)|² + z(τ′)²) dx}.

Integrating (3.81) with respect to τ and τ′ over [T/2 − ε₂T, T/2 − ε₁T] and [T/2 + ε₁T, T/2 + ε₂T], respectively, we get that

  E∫_{Λ₁} θ²(|ẑ|² + |z_x|²) dx dt + E∫_{Λ₁} θ²|z|² dx dt
  ≤ C{exp(2λe^{5R₁²μ₄/8}) E∫_Q (|ẑ|² + |z_x|² + |z|²) dx dt
     + exp(2λe^{R₁²μ₄})[E∫₀^T |z_x(0,t)|² dt + E∫_Q (|a₄z + Z|² + |a₅ẑ + Ẑ|²) dx dt]   (3.82)
     + e^λ E∫_Q (ẑ² + |z_x|² + z²) dx dt}.

From (3.37) and (3.43), we obtain that

  E∫_{Λ₁} θ²(|ẑ|² + |z_x|² + |z|²) dx dt ≥ exp(2λe^{3R₁²μ₄/4}) E∫_{Q₁} (|ẑ|² + |z_x|² + |z|²) dx dt.   (3.83)

Combining (3.82) and (3.83), we arrive at

  E∫_{Q₁} (|ẑ|² + |z_x|² + |z|²) dx dt
  ≤ C{exp(2λe^{5R₁²μ₄/8} − 2λe^{3R₁²μ₄/4}) E∫_Q (|ẑ|² + |z_x|² + |z|²) dx dt   (3.84)
     + exp(2λe^{R₁²μ₄})[E∫₀^T |z_x(0,t)|² dt + E∫_Q (|a₄z + Z|² + |a₅ẑ + Ẑ|²) dx dt]}.

Applying the standard energy method to Eq. (3.5) with τ = T, we have that

  |(z_T, ẑ_T)|²_{L²_{F_T}(Ω;H₀¹(0,1)×L²(0,1))}
  ≤ Ce^{r₅} E[∫_{Q₁} (|ẑ|² + |z_x|² + |z|²) dx dt + ∫_Q (|a₄z + Z|² + |a₅ẑ + Ẑ|²) dx dt]   (3.85)

and that

  |(z_T, ẑ_T)|²_{L²_{F_T}(Ω;H₀¹(0,1)×L²(0,1))} ≥ Ce^{−r₅} E∫_Q (|ẑ|² + |z_x|² + |z|²) dx dt.   (3.86)

It follows from (3.84)–(3.86) that

  |(z_T, ẑ_T)|²_{L²_{F_T}(Ω;H₀¹(0,1)×L²(0,1))}
  ≤ Ce^{r₅} exp(2λe^{5R₁²μ₄/8} − 2λe^{3R₁²μ₄/4}) |(z_T, ẑ_T)|²_{L²_{F_T}(Ω;H₀¹(0,1)×L²(0,1))}   (3.87)
     + C exp(2λe^{R₁²μ₄})[E∫₀^T |z_x(0,t)|² dt + E∫_Q (|a₄z + Z|² + |a₅ẑ + Ẑ|²) dx dt].

Let us choose λ₆ ≥ λ₅ such that

  Ce^{r₅} exp(2λ₆e^{5R₁²μ₄/8} − 2λ₆e^{3R₁²μ₄/4}) < 1.

Then, from (3.87) with λ = λ₆ and μ = μ₄, we obtain the desired inequality (3.24) immediately. □

3.4. Exact controllability of multidimensional stochastic hyperbolic equations

We have considered the exact controllability of one dimensional stochastic hyperbolic equations. In this subsection, we present very quickly the same controllability results for the multidimensional situation (see Lü & Zhang, 2019a for more details).

Let G ⊂ ℝⁿ (n ∈ ℕ) be a bounded domain with a C² boundary Γ, and let Γ₀ be a nonempty subset of Γ. Write

  Q ≜ (0,T) × G,  Σ ≜ (0,T) × Γ,  Σ₀ ≜ (0,T) × Γ₀.

Assume that (a^{jk})_{1≤j,k≤n} ∈ C³(Ḡ; ℝ^{n×n}) satisfies

  a^{jk} = a^{kj},  j,k = 1,2,…,n,

and, for some constant s₀ > 0,

  ∑_{j,k=1}^n a^{jk} ξ^j ξ^k ≥ s₀|ξ|²,  ∀ (x,ξ) = (x, ξ¹,…,ξⁿ) ∈ Ḡ × ℝⁿ.

Let a₁ ∈ L^∞_F(0,T;W^{1,∞}(G;ℝⁿ)), a₂, a₃, a₄ ∈ L^∞_F(0,T;L^∞(G)) and a₅ ∈ L^∞_F(0,T;W₀^{1,∞}(G)).

Consider the following controlled stochastic hyperbolic equation:

  ⎧ dy_t − ∑_{j,k=1}^n (a^{jk} y_{x_j})_{x_k} dt = (a₁·∇y + a₂y + g₁) dt + (a₃y + g₂) dW(t)  in Q,
  ⎨ y = χ_{Σ₀} h                                                                            on Σ,   (3.88)
  ⎩ y(0) = y₀,  y_t(0) = y₁                                                                 in G,

where the initial datum (y₀, y₁) ∈ L²(G) × H⁻¹(G), (y, y_t) is the state, and g₁, g₂ ∈ L²_F(0,T;H⁻¹(G)) and h ∈ L²_F(0,T;L²(Γ₀)) are controls.

Similarly to the control system (3.3), (3.88) is a nonhomogeneous boundary value problem, for which solutions are also understood in the sense of transposition. One can prove that (3.88) admits a unique transposition solution y ∈ C_F([0,T];L²(Ω;L²(G))) ∩ C¹_F([0,T];L²(Ω;H⁻¹(G))) (Lü & Zhang, 2019a, Proposition 4.2).

Definition 3.5. The system (3.88) is called exactly controllable at time T if for any (y₀, y₁) ∈ L²(G) × H⁻¹(G) and (y′₀, y′₁) ∈ L²_{F_T}(Ω;L²(G)) × L²_{F_T}(Ω;H⁻¹(G)), one can find (g₁, g₂, h) ∈ L²_F(0,T;H⁻¹(G)) × L²_F(0,T;H⁻¹(G)) × L²_F(0,T;L²(Γ₀)) such that the corresponding solution y to (3.88) satisfies (y(T), y_t(T)) = (y′₀, y′₁).

Similarly to that for the system (3.3), we have the following negative result (see Lü & Zhang, 2019a, Theorem 2.1).

Theorem 3.5. The system (3.88) is not exactly controllable for any T > 0 and Γ₀ ⊂ Γ.

Inspired by Theorem 3.5, similarly to (3.4), we consider the following refined version of (3.88):

  ⎧ dy = ŷ dt + (a₄y + u₁) dW(t)                                                              in Q,
  ⎪ dŷ − ∑_{j,k=1}^n (a^{jk} y_{x_j})_{x_k} dt = (a₁·∇y + a₂y + a₅u₂) dt + (a₃y + u₂) dW(t)   in Q,   (3.89)
  ⎨ y = χ_{Σ₀} h                                                                              on Σ,
  ⎩ y(0) = y₀,  ŷ(0) = ŷ₀                                                                     in G.

Here (y₀, ŷ₀) ∈ L²(G) × H⁻¹(G), (y, ŷ) is the state, and u₁ ∈ L²_F(0,T;L²(G)), u₂ ∈ L²_F(0,T;H⁻¹(G)) and h ∈ L²_F(0,T;L²(Γ₀)) are controls. Similarly to (3.4), the control system (3.89) admits a unique transposition solution (y, ŷ) ∈ C_F([0,T];L²(Ω;L²(G))) × C_F([0,T];L²(Ω;H⁻¹(G))).

Remark 3.4. We introduce two controls u₁ and u₂ in the diffusion terms of (3.89). Usually, if we put a control in the diffusion term, it may affect the drift term in some way. Here we assume that the effect is linear and of the form ''a₅u₂ dt''. One may consider more general cases, say, adding a term like ''a₆u₁ dt'' (with a₆ ∈ L^∞_F(0,T;L^∞(G))) to the first equation of (3.89). However, except for the one dimensional case (the control system (3.4)), the corresponding controllability problem (as the one studied in this subsection) is still unsolved.

The exact controllability for (3.89) is defined as follows:

Definition 3.6. The system (3.89) is called exactly controllable at time T if for any (y₀, ŷ₀) ∈ L²(G) × H⁻¹(G) and (y₁, ŷ₁) ∈ L²_{F_T}(Ω;L²(G)) × L²_{F_T}(Ω;H⁻¹(G)), one can find (u₁, u₂, h) ∈ L²_F(0,T;L²(G)) × L²_F(0,T;H⁻¹(G)) × L²_F(0,T;L²(Γ₀)) such that the corresponding solution (y, ŷ) to (3.89) satisfies (y(T), ŷ(T)) = (y₁, ŷ₁), a.s.

Similar to the deterministic case (e.g., Fu, Yong, & Zhang, 2007), in order to give a positive controllability result for the system (3.89), we need the following additional assumption on the coefficients (a^{jk})_{1≤j,k≤n}:

Condition 3.2. There is a positive function φ(·) ∈ C²(Ḡ) satisfying:

(1) For some constant r₀ > 0,

  ∑_{j,k=1}^n { ∑_{j′,k′=1}^n [2a^{jk}(a^{j′k′}φ_{x_{j′}})_{x_{k′}} − a^{jk}_{x_{k′}} a^{j′k′} φ_{x_{j′}}] } ξ^j ξ^k ≥ r₀ ∑_{j,k=1}^n a^{jk} ξ^j ξ^k,  ∀ (x, ξ¹,…,ξⁿ) ∈ Ḡ × ℝⁿ.

(2) The function φ(·) has no critical point in Ḡ, i.e.,

  min_{x∈Ḡ} |∇φ(x)| > 0.

Choose Γ₀ as follows (see (A.15) for ν = (ν¹,…,νⁿ)):

  Γ₀ ≜ {x ∈ Γ | ∑_{j,k=1}^n a^{jk} φ_{x_j}(x) ν^k(x) > 0}.   (3.90)

Also, write

  R₁ ≜ max_{x∈Ḡ} √(φ(x)),  R₀ ≜ min_{x∈Ḡ} √(φ(x)).

If φ(·) satisfies Condition 3.2, then for any given constants α ≥ 1 and β ∈ ℝ, φ̃ = αφ + β still satisfies Condition 3.2 with r₀ replaced by αr₀. Therefore we may choose φ, r₀, c₀ > 0, c₁ > 0 and T such that the following holds:

Condition 3.3. The following inequalities hold:

  (1) (1/4) ∑_{j,k=1}^n a^{jk}(x) φ_{x_j}(x) φ_{x_k}(x) ≥ R₁²,  ∀ x ∈ Ḡ;
  (2) T > T₀ ≜ 2R₁;
  (3) (2R₁/T)² < c₁ < 2R₁/T;
  (4) c₁ < min{1, 1/[16(1 + |a₅|²_{L^∞_F(0,T;L^∞(G))})²]};
  (5) √(r₀ − 4c₁ − c₀) > R₁.

Remark 3.5. Conditions 3.2–3.3 can be regarded as modifications of similar conditions introduced in Imanuvilov (2002). As we have explained, since ∑_{j,k=1}^n a^{jk} φ_{x_j} φ_{x_k} > 0 and one can choose r₀ in Condition 3.2 large enough, Condition 3.3 can obviously be satisfied. We put it here merely to emphasize the relationship among 0 < c₀ < c₁ < 1, r₀ and T. In other words, once Condition 3.2 is fulfilled, Condition 3.3 can always be satisfied.

Remark 3.6. To ensure that (3) in Condition 3.3 holds, the larger the number |a₅|_{L^∞_F(0,T;L^∞(G))} is, the smaller c₁ and the longer the time T we should choose. This does not happen for (3.4). Hence, we believe that it is a technical condition. However, so far we do not know how to drop it.

The exact controllability result for the system (3.89) is stated as follows:

Theorem 3.6. Let Conditions 3.2 and 3.3 hold, and let Γ₀ be given by (3.90). Then the system (3.89) is exactly controllable at time T.

Theorem 3.6 can also be proved by a duality argument and a global Carleman estimate (see Lü & Zhang, 2019a).

3.5. Notes and open problems

Generally speaking, there are five main methods to solve the exact controllability/observability problem for deterministic hyperbolic equations:

• The first one is the method of characteristics (e.g., Li, 2010; Russell, 1978). This method only applies to hyperbolic equations in one space dimension.
• The second one is based on Ingham type inequalities (e.g., Russell, 1978). This method works well for many PDEs evolving on intervals and rectangles. However, it seems very hard to apply to equations in general domains.
• The third one is the classical Rellich-type multiplier approach (e.g., Lions, 1988). Usually, it is used to treat time-reversible PDEs with time independent lower order terms.
• The fourth one is based on microlocal analysis (e.g., Bardos et al., 1992), which may give sharp sufficient conditions for the exact controllability of wave equations. However, we have no idea how to apply it to stochastic hyperbolic equations.
• The last one is the global Carleman estimate (e.g., Fu et al., 2019; Zhang, 2011). This approach allows one to address variable coefficients. More importantly, it can be used to estimate the dependence of the control cost on the coefficients of the lower order terms, which plays a key role in the study of exact controllability of semilinear hyperbolic equations (Duyckaerts et al., 2008; Fu et al., 2007).

Until now, the global Carleman estimate is the only method that can be adapted to solve the exact controllability problem for stochastic hyperbolic equations. Furthermore, the Carleman estimate method which we have employed in Sections 2 and 3 for stochastic parabolic and hyperbolic equations applies not only to controllability and observability problems, but also to inverse problems for many other SPDEs (e.g., Fu & Liu, 2017a, 2017b; Gao et al., 2015; Liu & Yu, 2019; Lü, 2012, 2013a, 2013b, 2013c; Lü & Zhang, 2015a; Yuan, 2015, 2017; Zhang, 2010, 2011). In each case the problem can be reduced to establishing a suitable pointwise identity first, and then choosing a suitable weight function.

We have introduced the null/approximate controllability problem for stochastic parabolic equations and the exact controllability problem

for stochastic hyperbolic equations. As we have explained, there is no exact controllability for stochastic parabolic equations. On the other hand, one can define the null/approximate controllability of stochastic hyperbolic equations similarly. In the literature, there are several other concepts of controllability for stochastic control systems (e.g., Chen, 1980; Dou & Lü, 2019; Sunahara, Aihara, Kamejima, & Kishino, 1977; Sunahara, Kabeuchi, Asada, Aihara, & Kishino, 1974; Zabczyk, 1981). Among them, the so-called ε-controllability is studied extensively. The ε-controllability can be defined for very general stochastic control systems. Let us recall it for the control system (2.5) as an example.

Definition 3.7. The system (2.5) is said to be completely ε-controllable in probability p (for some ε > 0 and p ∈ [0,1]), if for any y₀ ∈ L²(0,1), it holds that

  L²(0,1) = {y_T ∈ L²(0,1) | P(|y(T) − y_T|²_{L²(0,1)} > ε) ≤ 1 − p for some (u₁, u₂) ∈ L²_F(0,T;L²(G₀) × L²(0,1))}.

From Definition 3.7, if the system (2.5) is completely ε-controllable, then it can transfer any given initial state y₀ into the ε-neighborhood of an arbitrary point in L²(0,1) with probability not less than p. This kind of controllability and related problems have been studied for SPDEs with deterministic coefficients (e.g., Bashirov & Kerimov, 1997; Bashirov & Mahmudov, 1999; Mahmudov, 2001a, 2001b, 2003). The main idea in these works is to reduce the ε-controllability problem for a stochastic control system to the exact/approximate controllability of a suitable deterministic control system. It should be pointed out that, when the stochastic control system degenerates to a deterministic one, complete ε-controllability does not coincide with the usual notion of exact or approximate controllability. On the other hand, to the best of our knowledge, there exist no published works on the ε-controllability problem for general stochastic control systems.

There are many open problems related to the topic of this section. Some of them, which are particularly interesting in our opinion, are listed as follows.

(1) Null and approximate controllability for stochastic hyperbolic equations

Similarly to Definition 2.2, one can define the null and approximate controllability of stochastic hyperbolic equations. As an immediate

4. Pontryagin-type maximum principle for controlled stochastic evolution equations

This section is addressed to the Pontryagin-type maximum principle for optimal control problems of nonlinear SEEs in infinite dimensions, in which both the drift and diffusion terms may contain control variables, and the control domain is allowed to be nonconvex. As before, we first give the results for a controlled one dimensional stochastic parabolic equation and provide proofs of these results. Then we survey some recent progress for general controlled SEEs and list some unsolved problems.

4.1. Formulation of the problem

Let (Ω, F, 𝐅, P) be a complete filtered probability space with the filtration 𝐅 = {F_t}_{t≥0} satisfying the usual condition, on which a one dimensional standard Brownian motion {W(t)}_{t≥0} is defined. Denote by F the progressive σ-field w.r.t. 𝐅.

Let U be a subset of L²(0,1) and

  𝒰[0,T] ≜ {u(·): [0,T] × Ω → U | u(·) is 𝐅-adapted}.

Let us impose the following condition.

(B1) For φ = a, b, suppose that φ(·,·,·,·): [0,T] × (0,1) × ℝ × ℝ → ℝ satisfies: (i) for any (r,u) ∈ ℝ × ℝ, the function φ(·,·,r,u): [0,T] × (0,1) → ℝ is Lebesgue measurable; (ii) for any (t,x,r) ∈ [0,T] × (0,1) × ℝ, the function φ(t,x,r,·): ℝ → ℝ is continuous; and (iii) for all (t,x,r₁,r₂,u) ∈ [0,T] × (0,1) × ℝ × ℝ × ℝ,

  |φ(t,x,r₁,u) − φ(t,x,r₂,u)| ≤ C|r₁ − r₂|,  |φ(t,x,0,u)| ≤ C.   (4.1)

Consider the following controlled stochastic parabolic equation:

  ⎧ dy = (y_xx + a(t,·,y,u)) dt + b(t,·,y,u) dW(t)  in (0,T] × (0,1),
  ⎨ y = 0                                           on (0,T] × {0,1},   (4.2)
  ⎩ y(0) = y₀                                       in (0,1),

where u(·) ∈ 𝒰[0,T] and y₀ ∈ L²(0,1). Here and henceforth, to simplify the notations, unless it should be emphasized, we omit the spatial
consequence of Theorem 3.6, we obtain these controllability results variable 𝑥(∈ (0, 1)) in the functions 𝑦, 𝑎, 𝑏, and 𝑔 and ℎ to be given later.
for the system (3.89). However, there is no reason to introduce three Under the assumption (B1), by Eqs. (4.2) and (B.19) admits a unique
controls to guarantee the null/approximate controllability of (3.89). It mild solution 𝑦 ∈ 𝐿2F (𝛺; 𝐶([0, 𝑇 ]; 𝐿2 (0, 1))) ∩ 𝐿2F (𝛺; 𝐿2 (0, 𝑇 ; 𝐻01 (0, 1))).
seems that one boundary control is enough. However, at this moment Also, we need the following condition:
we have no idea about it. (B2) Suppose that 𝑔(⋅, ⋅, ⋅) ∶ [0, 𝑇 ] × (0, 1) × R × R → R and ℎ(⋅) ∶
(2) Exact controllability for stochastic hyperbolic equations (0, 1) × R → R are two functions satisfying: (i) For any (𝑟, 𝑢) ∈ R × R,
with more regular controls the function 𝑔(⋅, ⋅, 𝑟, 𝑢) ∶ [0, 𝑇 ] × (0, 1) → R is Lebesgue measurable; (ii) For
In Theorem 3.6, we use three controls 𝑢1 , 𝑢2 and ℎ to get the any (𝑡, 𝑥, 𝑟) ∈ [0, 𝑇 ] × R, the function 𝑔(𝑡, 𝑥, 𝑟, ⋅) ∶ R → R is continuous; and
exact controllability of (3.89), where 𝑢2 ∈ 𝐿2F (0, 𝑇 ; 𝐻 −1 (𝐺)), which is (iii) For all (𝑡, 𝑥, 𝑟1 , 𝑟2 , 𝑢) ∈ [0, 𝑇 ] × (0, 1) × R × R × R,
highly irregular. It is very interesting to see whether (3.89) is exactly {
controllable with the control 𝑢2 ∈ 𝐿2F (0, 𝑇 ; 𝐿2 (𝐺)). |𝑔(𝑡, 𝑥, 𝑟1 , 𝑢) − 𝑔(𝑡, 𝑥, 𝑟2 , 𝑢)| + |ℎ(𝑥, 𝑟1 ) − ℎ(𝑥, 𝑟2 )| ≤ |𝑟1 − 𝑟2 |,
(3) Controllability problems for semilinear and quasilinear |𝑔(𝑡, 𝑥, 0, 𝑢)| + |ℎ(𝑥, 0)| ≤ .
stochastic hyperbolic equations
Controllability problems for semilinear and quasilinear hyperbolic Define a cost functional  (⋅) as follows:
equations are studied extensively in the literature (e.g., Duyckaerts 𝛥
[ 𝑇 1 1 ]
et al., 2008; Fu et al., 2007; Li, 2010; Zhang, 2010, 2011; Zhou  (𝑢(⋅)) = E 𝑔(𝑡, 𝑦, 𝑢)𝑑𝑥𝑑𝑡 + ℎ(𝑦(𝑇 ))𝑑𝑥 ,
∫0 ∫0 ∫0
& Lei, 2007; Zuazua, 2007). However, as far as we know, there is
no published works addressing the controllability problems for their ∀ 𝑢(⋅) ∈  [0, 𝑇 ], (4.3)
stochastic counterparts.
where 𝑦(⋅) is the corresponding solution to (4.2). We consider the
(4) Stabilization for stochastic hyperbolic equations
following optimal control problem:
Stabilization is another important topic in Control Theory. In fact,
̄ ∈  [0, 𝑇 ] such that
Problem (OP) Find a 𝑢(⋅)
one of the main concerns of Control Theory is the relationship between
controllability and stabilizability. However, as far as we know, there are  (𝑢(⋅))
̄ = inf  (𝑢(⋅)). (4.4)
𝑢(⋅)∈ [0,𝑇 ]
very few works for the stabilization of stochastic evolution equations
(SEEs for short) in infinite dimensions (including in particular stochas- Any 𝑢(⋅)
̄ satisfying (4.4) is called an optimal control. The corresponding
tic hyperbolic equations) with very strong feedbacks (See Tessitore, state 𝑦(⋅)
̄ is called an optimal state, and (𝑦(⋅),
̄ 𝑢(⋅))
̄ is called an optimal
1993, 1994 for some interesting early results in this respect). pair.
4.2. Pontryagin-type maximum principle for the convex control domain

In this subsection, we shall give a necessary condition for optimal controls of Problem (OP) for the case that U is a convex subset of L²(0, 1).

To begin with, we introduce the following assumption on a(⋅, ⋅, ⋅, ⋅), b(⋅, ⋅, ⋅, ⋅), g(⋅, ⋅, ⋅, ⋅) and h(⋅, ⋅).

(B3) The functions a(t, x, ⋅, ⋅), b(t, x, ⋅, ⋅), g(t, x, ⋅, ⋅) ∈ C¹(ℝ × ℝ) and h(x, ⋅) ∈ C¹(ℝ). Moreover, for any (r, u) ∈ ℝ × ℝ and a.e. (t, x) ∈ [0, T] × (0, 1), for φ = a, b, g,

|φ_y(t, x, r, u)| + |h_y(x, r)| + |φ_u(t, x, r, u)| ≤ 𝒞.

For any optimal pair (ȳ(⋅), ū(⋅)) of Problem (OP), let (z(⋅), Z(⋅)) be the corresponding transposition solution to the following equation:

dz = −( z_xx + a_y(t, ȳ(t), ū(t)) z + b_y(t, ȳ(t), ū(t)) Z − g_y(t, ȳ(t), ū(t)) ) dt + Z dW(t)  in [0, T) × (0, 1),
z = 0  on [0, T) × {0, 1},   (4.5)
z(T) = −h_y(ȳ(T))  in (0, 1).

We have the following result.

Theorem 4.1. Under the assumptions (B1)–(B3), it holds that, for any ρ ∈ U,

⟨ a_u(t, ȳ(t), ū(t)) z(t) + b_u(t, ȳ(t), ū(t)) Z(t) − g_u(t, ȳ(t), ū(t)), ρ − ū(t) ⟩_{L²(0,1)} ≤ 0,  a.e. (t, ω) ∈ [0, T] × Ω.   (4.6)

To prove Theorem 4.1, we need the following result.

Lemma 4.1. If F(⋅) ∈ L²_F(0, T; L²(0, 1)) and ū(⋅) ∈ 𝒰[0, T] are such that

E ∫₀ᵀ ∫₀¹ F (u − ū) dx dt ≤ 0,  ∀ u(⋅) ∈ 𝒰[0, T],   (4.7)

then, for any ρ ∈ U,

⟨ F(t), ρ − ū(t) ⟩_{L²(0,1)} ≤ 0,  a.e. (t, ω) ∈ [0, T] × Ω.   (4.8)

Proof. We use the contradiction argument. Suppose that the inequality (4.8) did not hold. Then, there would exist a ρ₀ ∈ U and an ε > 0 such that

α_ε ≜ ∫_Ω ∫₀ᵀ χ_{Λ_ε}(t, ω) dt dP > 0,

where

Λ_ε ≜ { (t, ω) ∈ [0, T] × Ω | ⟨ F(t), ρ₀ − ū(t) ⟩_{L²(0,1)} ≥ ε },

and χ_{Λ_ε} is the characteristic function of Λ_ε. For any m ∈ ℕ, define

Λ_{ε,m} ≜ Λ_ε ∩ { (t, ω) ∈ [0, T] × Ω | |ū(t, ω)|_{L²(0,1)} ≤ m }.

Then there is an m_ε ∈ ℕ such that

∫_Ω ∫₀ᵀ χ_{Λ_{ε,m}}(t, ω) dt dP > α_ε/2 > 0,  ∀ m ≥ m_ε.

Since ⟨F, ρ₀ − ū⟩_{L²(0,1)} is 𝐅-adapted, so is the process χ_{Λ_{ε,m}}(⋅). Let

û_{ε,m}(t, ω) ≜ ρ₀ χ_{Λ_{ε,m}}(t, ω) + ū(t, ω) χ_{Λ_{ε,m}^c}(t, ω),  (t, ω) ∈ [0, T] × Ω.

Noting that |ū(⋅)| ≤ m on Λ_{ε,m}, we see that û_{ε,m}(⋅) ∈ 𝒰[0, T]. Hence, for any m ≥ m_ε, we obtain that

E ∫₀ᵀ ∫₀¹ F ( û_{ε,m} − ū ) dx dt = ∫_Ω ∫₀ᵀ ∫₀¹ χ_{Λ_{ε,m}} F ( ρ₀ − ū ) dx dt dP ≥ ε ∫_Ω ∫₀ᵀ χ_{Λ_{ε,m}} dt dP ≥ εα_ε/2 > 0,

which contradicts (4.7). This completes the proof of Lemma 4.1. □

Proof of Theorem 4.1. We use the convex perturbation technique and divide the proof into several steps.

Step 1. For the optimal pair (ȳ(⋅), ū(⋅)), we fix arbitrarily a control u(⋅) ∈ 𝒰[0, T] satisfying u(⋅) − ū(⋅) ∈ L²_F(0, T; L²(0, 1)). Since U is convex, we see that

u^ε(⋅) ≜ ū(⋅) + ε[u(⋅) − ū(⋅)] = (1 − ε)ū(⋅) + εu(⋅) ∈ 𝒰[0, T],  ∀ ε ∈ [0, 1].

Denote by y^ε(⋅) the state of (4.2) corresponding to the control u^ε(⋅). Write y₁^ε(⋅) = (1/ε)[ y^ε(⋅) − ȳ(⋅) ] and δu(⋅) = u(⋅) − ū(⋅). Since (ȳ(⋅), ū(⋅)) satisfies (4.2), it is easy to see that y₁^ε(⋅) solves

dy₁^ε = ( y^ε_{1,xx} + a₁^ε y₁^ε + a₂^ε δu ) dt + ( b₁^ε y₁^ε + b₂^ε δu ) dW(t)  in (0, T] × (0, 1),
y₁^ε = 0  on (0, T] × {0, 1},   (4.9)
y₁^ε(0) = 0  in (0, 1),

where for φ = a, b,

φ₁^ε(t) = ∫₀¹ φ_y(t, ȳ(t) + σε y₁^ε(t), u^ε(t)) dσ,   φ₂^ε(t) = ∫₀¹ φ_u(t, ȳ(t), ū(t) + σε δu(t)) dσ.   (4.10)

Consider the following stochastic parabolic equation:

dy₂ = ( y_{2,xx} + a₁ y₂ + a₂ δu ) dt + ( b₁ y₂ + b₂ δu ) dW(t)  in (0, T] × (0, 1),
y₂ = 0  on (0, T] × {0, 1},   (4.11)
y₂(0) = 0  in (0, 1),

where for φ = a, b,

φ₁(t) = φ_y(t, ȳ(t), ū(t)),   φ₂(t) = φ_u(t, ȳ(t), ū(t)).   (4.12)

Step 2. In this step, we shall show that

lim_{ε→0+} | y₁^ε − y₂ |_{L^∞_F(0,T;L²(Ω;L²(0,1)))} = 0.   (4.13)

First, using the Burkholder–Davis–Gundy inequality (see Theorem B.4) and the assumption (B1), we find that

E|y₁^ε(t)|²_{L²(0,1)}
= E| ∫₀ᵗ S(t−s) a₁^ε y₁^ε ds + ∫₀ᵗ S(t−s) a₂^ε δu ds + ∫₀ᵗ S(t−s) b₁^ε y₁^ε dW(s) + ∫₀ᵗ S(t−s) b₂^ε δu dW(s) |²_{L²(0,1)}
≤ 𝒞E( | ∫₀ᵗ S(t−s) a₁^ε y₁^ε ds |²_{L²(0,1)} + | ∫₀ᵗ S(t−s) b₁^ε y₁^ε dW(s) |²_{L²(0,1)} + | ∫₀ᵗ S(t−s) a₂^ε δu ds |²_{L²(0,1)} + | ∫₀ᵗ S(t−s) b₂^ε δu dW(s) |²_{L²(0,1)} )   (4.14)
≤ 𝒞( ∫₀ᵗ E|y₁^ε|²_{L²(0,1)} ds + ∫₀ᵀ E|δu|²_{L²(0,1)} dt ).

It follows from (4.14) and Gronwall's inequality that

E|y₁^ε(t)|²_{L²(0,1)} ≤ 𝒞 |u − ū|²_{L²_F(0,T;L²(0,1))},  ∀ t ∈ [0, T].   (4.15)

By a similar computation, we see that

E|y₂(t)|²_{L²(0,1)} ≤ 𝒞 |u − ū|²_{L²_F(0,T;L²(0,1))},  ∀ t ∈ [0, T].   (4.16)
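The passage from (4.14) to (4.15) rests on Gronwall's inequality, whose discrete form is easy to check numerically: if e_k ≤ B + CΔt Σ_{i<k} e_i, then e_k ≤ B e^{C t_k}. The sketch below uses arbitrary toy constants B and C (standing for the 𝒞- and |u − ū|²-terms above, not values from the text) and builds the sequence that saturates the hypothesis.

```python
import numpy as np

# Hedged numerical check of the discrete Gronwall inequality used to pass
# from (4.14) to (4.15). B, C and the time grid are illustrative toy choices.
B, C, T, n = 0.7, 2.5, 1.0, 1000
dt = T/n
e = np.zeros(n+1)
for k in range(n+1):
    e[k] = B + C*dt*np.sum(e[:k])   # saturate the Gronwall hypothesis
t = dt*np.arange(n+1)
bound = B*np.exp(C*t)               # the Gronwall conclusion e_k <= B*exp(C*t_k)
ok = bool(np.all(e <= bound + 1e-12))
```

Here e_k = B(1 + CΔt)^k, so the comparison with B e^{C t_k} holds with room to spare; the same mechanism converts the integral inequality (4.14) into the pointwise-in-time bound (4.15).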
Put y₃^ε = y₁^ε − y₂. Then, y₃^ε solves the following equation:

dy₃^ε = [ y^ε_{3,xx} + a₁^ε y₃^ε + (a₁^ε − a₁) y₂ + (a₂^ε − a₂) δu ] dt + [ b₁^ε y₃^ε + (b₁^ε − b₁) y₂ + (b₂^ε − b₂) δu ] dW(t)  in (0, T] × (0, 1),
y₃^ε = 0  on (0, T] × {0, 1},   (4.17)
y₃^ε(0) = 0  in (0, 1).

It follows from (4.16)–(4.17) that

E|y₃^ε(t)|²_{L²(0,1)}
= E| ∫₀ᵗ S(t−s) a₁^ε y₃^ε ds + ∫₀ᵗ S(t−s) b₁^ε y₃^ε dW(s) + ∫₀ᵗ S(t−s)( a₁^ε − a₁ ) y₂ ds + ∫₀ᵗ S(t−s)( b₁^ε − b₁ ) y₂ dW(s) + ∫₀ᵗ S(t−s)( a₂^ε − a₂ ) δu ds + ∫₀ᵗ S(t−s)( b₂^ε − b₂ ) δu dW(s) |²_{L²(0,1)}
≤ 𝒞E[ ∫₀ᵗ |y₃^ε|²_{L²(0,1)} ds + |y₂|²_{L^∞(0,T;L²(Ω;L²(0,1)))} ∫₀ᵀ ( |a₁^ε − a₁|²_{L^∞(0,1)} + |b₁^ε − b₁|²_{L^∞(0,1)} ) dt + |u − ū|²_{L²_F(0,T;L²(0,1))} ∫₀ᵀ ( |a₂^ε − a₂|²_{L^∞(0,1)} + |b₂^ε − b₂|²_{L^∞(0,1)} ) dt ]
≤ 𝒞( 1 + |u − ū|²_{L²_F(0,T;L²(0,1))} )[ E ∫₀ᵗ |y₃^ε|²_{L²(0,1)} ds + E ∫₀ᵀ ( |a₁^ε − a₁|²_{L^∞(0,1)} + |b₁^ε − b₁|²_{L^∞(0,1)} + |a₂^ε − a₂|²_{L^∞(0,1)} + |b₂^ε − b₂|²_{L^∞(0,1)} ) ds ].

This, together with Gronwall's inequality, implies that

E|y₃^ε(t)|²_{L²(0,1)} ≤ 𝒞 e^{𝒞|u−ū|²_{L²_F(0,T;L²(0,1))}} E ∫₀ᵀ ( |a₁^ε − a₁|²_{L^∞(0,1)} + |b₁^ε − b₁|²_{L^∞(0,1)} + |a₂^ε − a₂|²_{L^∞(0,1)} + |b₂^ε − b₂|²_{L^∞(0,1)} ) ds,  ∀ t ∈ [0, T].   (4.18)

Note that (4.15) implies y^ε(⋅) → ȳ(⋅) in L^∞_F(0, T; L²(Ω; L²(0, 1))) as ε → 0. Hence, by (4.10), (4.12) and the continuity of a_y(t, ⋅, ⋅), b_y(t, ⋅, ⋅), a_u(t, ⋅, ⋅) and b_u(t, ⋅, ⋅), we deduce that

lim_{ε→0} E ∫₀ᵀ ( |a₁^ε − a₁|²_{L^∞(0,1)} + |b₁^ε − b₁|²_{L^∞(0,1)} + |a₂^ε − a₂|²_{L^∞(0,1)} + |b₂^ε − b₂|²_{L^∞(0,1)} ) ds = 0.

This, together with (4.18), gives (4.13).

Step 3. Since (ȳ(⋅), ū(⋅)) is an optimal pair of Problem (OP), from (4.13), we find that

0 ≤ lim_{ε→0} [ 𝒥(u^ε(⋅)) − 𝒥(ū(⋅)) ]/ε = E ∫₀ᵀ ∫₀¹ ( g₁ y₂ + g₂ δu ) dx dt + E ∫₀¹ h_y(ȳ(T)) y₂(T) dx,   (4.19)

where g₁(t) = g_y(t, ȳ(t), ū(t)) and g₂(t) = g_u(t, ȳ(t), ū(t)).

By the definition of the transposition solution to Eq. (4.5), we have

−E ∫₀¹ h_y(ȳ(T)) y₂(T) dx − E ∫₀ᵀ ∫₀¹ g₁ y₂ dx dt = E ∫₀ᵀ ∫₀¹ ( a₂ δu z + b₂ δu Z ) dx dt.   (4.20)

Combining (4.19) and (4.20), we find that

E ∫₀ᵀ ∫₀¹ ( a₂ z + b₂ Z − g₂ )( u − ū ) dx dt ≤ 0   (4.21)

holds for any u(⋅) ∈ 𝒰[0, T] satisfying u(⋅) − ū(⋅) ∈ L²_F(0, T; L²(0, 1)). Hence, by means of Lemma 4.1, we conclude that (4.6) holds. This completes the proof of Theorem 4.1. □

4.3. Pontryagin-type maximum principle for the nonconvex control domain

In this subsection, we shall give a necessary condition for optimal controls of Problem (OP) for the case that U is a nonconvex subset of L²(0, 1).

4.3.1. Transposition solution to operator-valued backward stochastic evolution equations

We begin with some notations. For a Hilbert space H and 1 ≤ p, q ≤ ∞, let (see Appendix A.2 for the definition of ℒ(H), and see Appendix B.4 for the definition of L^p_F(0, T; L^q(Ω; H)))

L^{p,w}_F(0, T; L^q(Ω; ℒ(H))) ≜ { R : [0, T] × Ω → ℒ(H) | |R|_{ℒ(H)} ∈ L^p_F(0, T; L^q(Ω)), Rη ∈ L^p_F(0, T; L^q(Ω; H)), ∀ η ∈ H }

and

L^{p,w}_F(0, T; L^q(Ω; 𝕊(H))) ≜ { R : [0, T] × Ω → 𝕊(H) | R ∈ L^{p,w}_F(0, T; L^q(Ω; ℒ(H))) },

where

𝕊(H) ≜ { F ∈ ℒ(H) | F is self-adjoint },   (4.22)

which is a Banach space with the norm inherited from ℒ(H).

Introduce an unbounded operator A on L²(0, 1) as follows:

D(A) = H²(0, 1) ∩ H₀¹(0, 1),   Af = f_xx,  ∀ f ∈ D(A).   (4.23)

Clearly, A is a self-adjoint operator. Denote by {S(t)}_{t≥0} the C₀-semigroup generated by A.

Denote by Γₙ the orthogonal projection from L²(0, 1) to span{√2 sin(kπx)}_{k=1}^n. For 1 ≤ p, q ≤ ∞, let (see Appendix A.4 for the definition of ℒ₂(L²(0, 1)))

L^{p,w,2}_F(0, T; L^q(Ω; ℒ(L²(0, 1)))) ≜ { R ∈ L^{p,w}_F(0, T; L^q(Ω; ℒ(L²(0, 1)))) | Γₙ R Γₙ ∈ L¹_F(0, T; L^q(Ω; ℒ₂(L²(0, 1)))) }.   (4.24)

To deal with the case of a nonconvex control domain U, one needs to introduce the following ℒ(L²(0, 1))-valued backward stochastic evolution equation (BSEE for short):

dP = −(A + J*)P dt − P(A + J) dt − K*PK dt − (K*Q + QK) dt + F dt + Q dW(t)  in [0, T),   (4.25)
P(T) = P_T.

Here and throughout, for any Hilbert space H and ℒ(H)-valued process (resp. random variable) R, we denote by R* its pointwise dual operator-valued process (resp. random variable). For example, if R ∈ L¹_F(0, T; L²(Ω; ℒ(H))), then R* ∈ L¹_F(0, T; L²(Ω; ℒ(H))) and |R|_{L¹_F(0,T;L²(Ω;ℒ(H)))} = |R*|_{L¹_F(0,T;L²(Ω;ℒ(H)))}.
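In the sine basis {√2 sin(kπx)}_{k≥1}, the projection Γₙ is simply coordinate truncation, so Γₙ R Γₙ in (4.24) is the leading n × n corner of the (infinite) matrix of R and, being of finite rank, is automatically Hilbert–Schmidt. A hedged finite-dimensional sketch follows; the diagonal operator R below (eigenvalues 1/k, truncated at level 200) is a toy Hilbert–Schmidt operator, not one from the text.

```python
import numpy as np

# Hedged sketch of the finite-rank approximations Gamma_n R Gamma_n in (4.24)
# and (4.49): coordinate truncation of the matrix of R in the sine basis.
N = 200                                  # truncation level standing in for infinity
R = np.diag(1.0/np.arange(1, N+1))       # toy Hilbert-Schmidt operator

def project(R, n):
    Rn = np.zeros_like(R)
    Rn[:n, :n] = R[:n, :n]               # matrix of Gamma_n R Gamma_n
    return Rn

# Hilbert-Schmidt distance |R - Gamma_n R Gamma_n|, decreasing in n
hs_err = [np.linalg.norm(R - project(R, n)) for n in (10, 50, 100)]
```

The decreasing Hilbert–Schmidt errors mirror the approximation properties (4.49)–(4.50) used later in the proof of Theorem 4.2.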
where A is given by (4.23), and

F ∈ L^{1,w,2}_F(0, T; L²(Ω; 𝕊(L²(0, 1)))),   J, K ∈ L^{∞,w}_F(0, T; L^∞(Ω; ℒ(L²(0, 1)))),   P_T ∈ L²_{ℱ_T}(Ω; 𝕊(L²(0, 1))).

In the infinite dimensional setting, although ℒ(L²(0, 1)) is still a Banach space, it is neither reflexive nor separable (see Problem 99 in Halmos, 1950). The existing results on stochastic integration/evolution equations in UMD Banach spaces (e.g., van Neerven, Veraar, & Weis, 2008) cannot be used to handle the stochastic integral "∫_⋅^T Q dW(t)" (for an ℒ(L²(0, 1))-valued stochastic process Q) in Eq. (4.25) because, if a Banach space is UMD, then it is reflexive.

To overcome the above-mentioned difficulty, we employ our stochastic transposition method developed in Lü and Zhang (2013). More precisely, we introduce a concept of transposition solution to Eq. (4.25), and develop a way to study the corresponding well-posedness.

To define the solution to (4.25), we need the following two SEEs:

dφ₁ = (A + J)φ₁ ds + u₁ ds + Kφ₁ dW(s) + v₁ dW(s)  in (t, T],   (4.26)
φ₁(t) = ξ₁

and

dφ₂ = (A + J)φ₂ ds + u₂ ds + Kφ₂ dW(s) + v₂ dW(s)  in (t, T],   (4.27)
φ₂(t) = ξ₂.

Here ξ₁, ξ₂ ∈ L⁴_{ℱ_t}(Ω; L²(0, 1)) and u₁, u₂, v₁, v₂ ∈ L²_F(t, T; L⁴(Ω; L²(0, 1))). Denote by φ₁(⋅; ξ₁, u₁, v₁) and φ₂(⋅; ξ₂, u₂, v₂) the solutions to (4.26) and (4.27), respectively. By Theorem B.9, the solutions to (4.26) and (4.27) belong to C_F([t, T]; L⁴(Ω; L²(0, 1))) ∩ L²_F(t, T; L⁴(Ω; H₀¹(0, 1))) and

|φᵢ|_{C_F([t,T];L⁴(Ω;L²(0,1)))} + |φᵢ|_{L²_F(t,T;L⁴(Ω;H₀¹(0,1)))} ≤ 𝒞( |ξᵢ|_{L⁴_{ℱ_t}(Ω;L²(0,1))} + |uᵢ|_{L²_F(t,T;L⁴(Ω;L²(0,1)))} + |vᵢ|_{L²_F(t,T;L⁴(Ω;L²(0,1)))} ),  for i = 1, 2.

Also, we need to introduce the solution space for (4.25). Set

𝕊₂(L²(0, 1); H⁻¹(0, 1)) ≜ { F ∈ ℒ₂(L²(0, 1); H⁻¹(0, 1)) | F|_{H₀¹(0,1)}, the restriction of F on H₀¹(0, 1), is a Hilbert–Schmidt operator from H₀¹(0, 1) to L²(0, 1), and (F|_{H₀¹(0,1)})* = F },   (4.28)

which is a Hilbert space with the inner product inherited from ℒ₂(L²(0, 1); H⁻¹(0, 1)). Put

D_{F,w}([0, T]; L²(Ω; 𝕊(L²(0, 1)))) ≜ { P(⋅) ∈ D_F([0, T]; L²(Ω; 𝕊₂(L²(0, 1); H⁻¹(0, 1)))) | for every t ∈ [0, T] and ξ ∈ L⁴_{ℱ_t}(Ω; L²(0, 1)), χ_{[t,T]} P(⋅)ξ ∈ D_F([t, T]; L^{4/3}(Ω; L²(0, 1))) and |P(⋅)ξ|_{D_F([t,T];L^{4/3}(Ω;L²(0,1)))} ≤ 𝒞|ξ|_{L⁴_{ℱ_t}(Ω;L²(0,1))} }.   (4.29)

The transposition solution to (4.25) is defined as follows:

Definition 4.1. We call

(P(⋅), Q(⋅)) ∈ D_{F,w}([0, T]; L²(Ω; 𝕊(L²(0, 1)))) × L²_F(0, T; 𝕊₂(L²(0, 1); H⁻¹(0, 1)))

a transposition solution to Eq. (4.25) if for any t ∈ [0, T], ξ₁, ξ₂ ∈ L⁴_{ℱ_t}(Ω; L²(0, 1)) and u₁(⋅), u₂(⋅), v₁(⋅), v₂(⋅) ∈ L²_F(t, T; L⁴(Ω; L²(0, 1))), it holds that

E⟨ P_T φ₁(T), φ₂(T) ⟩_{L²(0,1)} − E ∫ₜᵀ ⟨ F(s)φ₁(s), φ₂(s) ⟩_{L²(0,1)} ds
= E⟨ P(t)ξ₁, ξ₂ ⟩_{L²(0,1)} + E ∫ₜᵀ ⟨ P(s)u₁(s), φ₂(s) ⟩_{L²(0,1)} ds + E ∫ₜᵀ ⟨ P(s)φ₁(s), u₂(s) ⟩_{L²(0,1)} ds + E ∫ₜᵀ ⟨ P(s)K(s)φ₁(s), v₂(s) ⟩_{L²(0,1)} ds   (4.30)
+ E ∫ₜᵀ ⟨ P(s)v₁(s), K(s)φ₂(s) + v₂(s) ⟩_{L²(0,1)} ds + E ∫ₜᵀ ⟨ Q(s)v₁(s), φ₂(s) ⟩_{H⁻¹(0,1),H₀¹(0,1)} ds + E ∫ₜᵀ ⟨ φ₁(s), Q(s)v₂(s) ⟩_{H₀¹(0,1),H⁻¹(0,1)} ds.

Here, φ₁(⋅) and φ₂(⋅) solve (4.26) and (4.27), respectively.

We have the following well-posedness result for the transposition solution to Eq. (4.25).

Theorem 4.2. The Eq. (4.25) admits a unique transposition solution (P(⋅), Q(⋅)), and

|P|_{D_{F,w}([0,T];L²(Ω;𝕊(L²(0,1))))} + |Q|_{L²_F(0,T;𝕊₂(L²(0,1);H⁻¹(0,1)))} ≤ 𝒞( ||F|_{𝕊(L²(0,1))}|_{L¹_F(0,T;L²(Ω))} + |P_T|_{L²_{ℱ_T}(Ω;𝕊(L²(0,1)))} ).

4.3.2. Some preliminaries

Let us present some preliminaries for the proof of Theorem 4.2.

For any λ ∈ ρ(A) (the resolvent set of A), denote by A_λ the Yosida approximation of A, that is, A_λ = λA(λ − A)⁻¹. Consider the following two SEEs:

dφ₁^λ = [ (A_λ + J)φ₁^λ + u₁ ] ds + ( Kφ₁^λ + v₁ ) dW(s)  in (t, T],   (4.31)
φ₁^λ(t) = ξ₁

and

dφ₂^λ = [ (A_λ + J)φ₂^λ + u₂ ] ds + ( Kφ₂^λ + v₂ ) dW(s)  in (t, T],   (4.32)
φ₂^λ(t) = ξ₂.

Here (ξ₁, u₁, v₁) (resp. (ξ₂, u₂, v₂)) is the same as that in (4.26) (resp. (4.27)). We have the following result, which is a special case of Lü and Zhang (2014, Lemma 2.7).

Lemma 4.2. Let φ₁^λ(⋅) and φ₂^λ(⋅) be the solutions to (4.31) and (4.32), respectively. Then for j = 1, 2,

lim_{λ→+∞} φ_j^λ(⋅) = φ_j(⋅)  in C_F([t, T]; L⁴(Ω; L²(0, 1))),   (4.33)

where φ₁(⋅) and φ₂(⋅) are the solutions to (4.26) and (4.27), respectively.

Proof. Clearly, for any s ∈ [t, T] and j = 1, 2, it holds that

E| φ_j(s) − φ_j^λ(s) |⁴_{L²(0,1)} = E| [ S(s−t) − S_λ(s−t) ]ξ_j + ∫ₜˢ [ S(s−σ)Jφ_j − S_λ(s−σ)Jφ_j^λ ] dσ + ∫ₜˢ [ S(s−σ) − S_λ(s−σ) ] u_j dσ + ∫ₜˢ [ S(s−σ)Kφ_j − S_λ(s−σ)Kφ_j^λ ] dW(σ) + ∫ₜˢ [ S(s−σ) − S_λ(s−σ) ] v_j dW(σ) |⁴_{L²(0,1)},

where {S_λ(τ)}_{τ≥0} denotes the semigroup generated by A_λ. Since A_λ is the Yosida approximation of A, one can find a positive constant 𝒞 = 𝒞(A, T), independent of λ, such that

| |S_λ(⋅)|_{ℒ(L²(0,1))} |_{L^∞(0,T)} ≤ 𝒞.   (4.34)
Consequently,

E| ∫ₜˢ [ S(s−σ)Jφ_j − S_λ(s−σ)Jφ_j^λ ] dσ |⁴_{L²(0,1)}
≤ 𝒞E ∫ₜˢ | [ S(s−σ) − S_λ(s−σ) ] Jφ_j |⁴_{L²(0,1)} dσ + 𝒞E ∫ₜˢ | S_λ(s−σ) J ( φ_j − φ_j^λ ) |⁴_{L²(0,1)} dσ
≤ 𝒞E| ∫ₜˢ [ S(s−σ) − S_λ(s−σ) ] Jφ_j dσ |⁴_{L²(0,1)} + 𝒞E ∫ₜˢ | |J|_{ℒ(L²(0,1))} |⁴_{L^∞(Ω)} | φ_j − φ_j^λ |⁴_{L²(0,1)} dσ.

It follows from Theorem B.4 that

E| ∫ₜˢ [ S(s−σ)Kφ_j − S_λ(s−σ)Kφ_j^λ ] dW(σ) |⁴_{L²(0,1)}
≤ 𝒞E ∫ₜˢ | [ S(s−σ) − S_λ(s−σ) ] Kφ_j |⁴_{L²(0,1)} dσ + 𝒞E ∫ₜˢ | S_λ(s−σ) K ( φ_j − φ_j^λ ) |⁴_{L²(0,1)} dσ
≤ 𝒞E ∫ₜˢ | [ S(s−σ) − S_λ(s−σ) ] Kφ_j |⁴_{L²(0,1)} dσ + 𝒞E ∫ₜˢ | |K|_{ℒ(L²(0,1))} |⁴_{L^∞(Ω)} | φ_j − φ_j^λ |⁴_{L²(0,1)} dσ.

Hence, for t ≤ s ≤ T,

E| φ_j(s) − φ_j^λ(s) |⁴_{L²(0,1)} ≤ 𝒞Λ(λ, s) + 𝒞E ∫ₜˢ ( | |J|_{ℒ(L²(0,1))} |⁴_{L^∞(Ω)} + | |K|_{ℒ(L²(0,1))} |⁴_{L^∞(Ω)} ) | φ_j − φ_j^λ |⁴_{L²(0,1)} dσ.

Here

Λ(λ, s) = E| [ S(s−t) − S_λ(s−t) ]ξ_j |⁴_{L²(0,1)} + E| ∫ₜˢ [ S(s−σ) − S_λ(s−σ) ] u_j dσ |⁴_{L²(0,1)} + E ∫ₜˢ | [ S(s−σ) − S_λ(s−σ) ] v_j |⁴_{L²(0,1)} dσ + E| ∫ₜˢ [ S(s−σ) − S_λ(s−σ) ] Jφ_j dσ |⁴_{L²(0,1)} + E ∫ₜˢ | [ S(s−σ) − S_λ(s−σ) ] Kφ_j |⁴_{L²(0,1)} dσ.

By Gronwall's inequality, it follows that

E| φ_j(s) − φ_j^λ(s) |⁴_{L²(0,1)} ≤ 𝒞Λ(λ, s) + 𝒞 ∫ₜˢ e^{𝒞(s−τ)} Λ(λ, τ) dτ,  t ≤ s ≤ T.   (4.35)

Since A_λ is the Yosida approximation of A, we see that

lim_{λ→∞} Λ(λ, s) = 0,

which, together with (4.35), implies that

lim_{λ→∞} | φ_j^λ(⋅) − φ_j(⋅) |_{C_F([t,T];L⁴(Ω;L²(0,1)))} = 0.

This completes the proof of Lemma 4.2. □

Lemma 4.3. If u₂ = v₂ = 0 in Eq. (4.27), then for each t ∈ [0, T], there exists an operator 𝐔(⋅, t) ∈ ℒ( L⁴_{ℱ_t}(Ω; L²(0, 1)); C_F([t, T]; L⁴(Ω; L²(0, 1))) ) such that the solution to (4.27) can be represented as φ₂(⋅) = 𝐔(⋅, t)ξ₂. Further, for any t ∈ [0, T), ξ ∈ L⁴_{ℱ_t}(Ω; L²(0, 1)) and ε > 0, there is a δ ∈ (0, T − t) such that for any s ∈ [t, t + δ], it holds that

| 𝐔(⋅, t)ξ − 𝐔(⋅, s)ξ |_{L^∞_F(s,T;L⁴(Ω;L²(0,1)))} < ε.   (4.36)

Proof. Define 𝐔(⋅, t) as follows:

𝐔(⋅, t) : L⁴_{ℱ_t}(Ω; L²(0, 1)) → C_F([t, T]; L⁴(Ω; L²(0, 1))),   𝐔(s, t)ξ₂ = φ₂(s),  ∀ s ∈ [t, T],

where φ₂(⋅) is the mild solution to (4.27) with u₂ = v₂ = 0.

By Eq. (B.19), we obtain that for any t ∈ [0, T],

| φ₂(⋅) |_{C_F([t,T];L⁴(Ω;L²(0,1)))} ≤ 𝒞| ξ₂ |_{L⁴_{ℱ_t}(Ω;L²(0,1))}.

Thus, 𝐔(⋅, t) is a bounded linear operator from L⁴_{ℱ_t}(Ω; L²(0, 1)) to C_F([t, T]; L⁴(Ω; L²(0, 1))) and 𝐔(⋅, t)ξ₂ solves Eq. (4.27) with u₂ = v₂ = 0.

On the other hand, by the definition of 𝐔(⋅, t) and 𝐔(⋅, s), for each s ∈ [t, T] and r ∈ [s, T], we see that

𝐔(r, t)ξ = S(r−t)ξ + ∫ₜʳ S(r−τ) J 𝐔(τ, t)ξ dτ + ∫ₜʳ S(r−τ) K 𝐔(τ, t)ξ dW(τ)

and

𝐔(r, s)ξ = S(r−s)ξ + ∫ₛʳ S(r−τ) J 𝐔(τ, s)ξ dτ + ∫ₛʳ S(r−τ) K 𝐔(τ, s)ξ dW(τ).

Hence,

E| 𝐔(r, s)ξ − 𝐔(r, t)ξ |⁴_{L²(0,1)}
≤ 𝒞E[ | S(r−s)ξ − S(r−t)ξ |⁴_{L²(0,1)} + | ∫ₛʳ S(r−τ) J ( 𝐔(τ, s) − 𝐔(τ, t) )ξ dτ |⁴_{L²(0,1)} + | ∫ₛʳ S(r−τ) K ( 𝐔(τ, s) − 𝐔(τ, t) )ξ dW(τ) |⁴_{L²(0,1)} + | ∫ₜˢ S(r−τ) J 𝐔(τ, t)ξ dτ |⁴_{L²(0,1)} + | ∫ₜˢ S(r−τ) K 𝐔(τ, t)ξ dW(τ) |⁴_{L²(0,1)} ]
≤ 𝒞E| S(r−s)ξ − S(r−t)ξ |⁴_{L²(0,1)} + 𝒞 ∫ₛʳ E| 𝐔(τ, s)ξ − 𝐔(τ, t)ξ |⁴_{L²(0,1)} dτ + 𝒞 ∫ₜˢ E| 𝐔(τ, t)ξ |⁴_{L²(0,1)} dτ
≤ 𝒞E| S(r−s)ξ − S(r−t)ξ |⁴_{L²(0,1)} + 𝒞 ∫ₛʳ E| 𝐔(τ, s)ξ − 𝐔(τ, t)ξ |⁴_{L²(0,1)} dτ + 𝒞(s − t)E|ξ|⁴_{L²(0,1)}.

Then, by Gronwall's inequality, we find that

E| 𝐔(r, s)ξ − 𝐔(r, t)ξ |⁴_{L²(0,1)} ≤ 𝒞( h(r, s, t) + ∫ₛʳ h(σ, s, t) dσ ),

where

h(r, s, t) = E| S(r−s)ξ − S(r−t)ξ |⁴_{L²(0,1)} + (s − t)E|ξ|⁴_{L²(0,1)}.

Clearly,

E| ξ − S(s−t)ξ |⁴_{L²(0,1)} ≤ 𝒞E|ξ|⁴_{L²(0,1)},  ∀ t ∈ [0, T], s ∈ [t, T].

By Lebesgue's dominated convergence theorem, we have

lim_{s→t+0} E| ξ − S(s−t)ξ |⁴_{L²(0,1)} = 0.

Hence, there is a δ ∈ (0, T − t) such that (4.36) holds for any s ∈ [t, t + δ]. This completes the proof of Lemma 4.3. □
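The last step of the proof relies on the strong continuity of {S(t)}_{t≥0}. In the sine basis this is explicit: |S(h)ξ − ξ|²_{L²(0,1)} = Σ_k (1 − e^{−k²π²h})² ξ_k², which tends to 0 as h → 0+ by dominated convergence. A numerical sketch, where the square-summable coefficient sequence ξ_k = 1/k is an arbitrary choice, not one from the text:

```python
import numpy as np

# Hedged spectral check of |S(h)xi - xi| -> 0 as h -> 0+ for the heat
# semigroup generated by (4.23); the mode cutoff 2000 is illustrative.
k = np.arange(1, 2000)
xi = 1.0/k                      # coefficients of some fixed xi in L^2(0,1)

def gap(h):
    return float(np.sqrt(np.sum(((1.0 - np.exp(-(k*np.pi)**2*h))*xi)**2)))

gaps = [gap(h) for h in (1e-1, 1e-3, 1e-5, 1e-7)]   # shrinking time steps
```

Each factor 1 − e^{−k²π²h} decreases strictly with h, so the gaps decrease monotonically, exactly the behavior invoked via dominated convergence above.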
Lemma 4.4. The set

ℛ ≜ { φ₂(⋅) | φ₂(⋅) solves (4.27) with t = 0, ξ₂ = 0, v₂ = 0 and u₂ ∈ L⁴_F(0, T; H₀¹(0, 1)) }

is dense in L⁴_F(0, T; H₀¹(0, 1)).

Proof. Let us prove Lemma 4.4 by contradiction. Assume that there is ρ ∈ L^{4/3}_F(0, T; H⁻¹(0, 1)) such that for any φ₂ ∈ ℛ,

E ∫₀ᵀ ⟨ ρ, φ₂ ⟩_{H⁻¹(0,1),H₀¹(0,1)} ds = 0.   (4.37)

Consider the following backward stochastic parabolic equation:

dψ = −Aψ dt + ( ρ − J*ψ − K*Ψ ) dt + Ψ dW(t)  in [0, T),   (4.38)
ψ(T) = 0.

By Theorem B.17, (4.38) admits a unique transposition solution (ψ(⋅), Ψ(⋅)) ∈ D_F([0, T]; L^{4/3}(Ω; L²(0, 1))) × L²_F(0, T; L^{4/3}(Ω; L²(0, 1))). Hence, for any ϕ₁(⋅) ∈ L¹_F(0, T; L⁴(Ω; L²(0, 1))) and ϕ₂(⋅) ∈ L²_F(0, T; L⁴(Ω; L²(0, 1))), it holds that

−E ∫₀ᵀ ⟨ z, ρ − J*ψ − K*Ψ ⟩_{H₀¹(0,1),H⁻¹(0,1)} ds = E ∫₀ᵀ ⟨ ϕ₁, ψ ⟩_{L²(0,1)} ds + E ∫₀ᵀ ⟨ ϕ₂, Ψ ⟩_{L²(0,1)} ds,   (4.39)

where z(⋅) solves

dz = ( Az + ϕ₁ ) dt + ϕ₂ dW(t)  in (0, T],   z(0) = 0.   (4.40)

For any φ₂(⋅) solving (4.27) with t = 0, ξ₂ = 0, v₂ = 0 and an arbitrarily given u₂ ∈ L⁴_F(0, T; H₀¹(0, 1)), we choose z = φ₂, ϕ₁ = Jφ₂ + u₂ and ϕ₂ = Kφ₂. It follows from (4.39) that

−E ∫₀ᵀ ⟨ φ₂, ρ ⟩_{H₀¹(0,1),H⁻¹(0,1)} ds = E ∫₀ᵀ ⟨ u₂, ψ ⟩_{L²(0,1)} ds.   (4.41)

By (4.41) and recalling (4.37), we conclude that ψ(⋅) = 0. Hence, (4.39) is reduced to

−E ∫₀ᵀ ⟨ z, ρ − K*Ψ ⟩_{H₀¹(0,1),H⁻¹(0,1)} ds = E ∫₀ᵀ ⟨ ϕ₂, Ψ ⟩_{L²(0,1)} ds.   (4.42)

Choosing ϕ₂(⋅) = 0 in (4.40) and (4.42), we obtain that for every ϕ₁(⋅) ∈ L¹_F(0, T; L⁴(Ω; L²(0, 1))),

E ∫₀ᵀ ⟨ ∫₀ˢ S(s−σ)ϕ₁ dσ, ρ − K*Ψ ⟩_{H₀¹(0,1),H⁻¹(0,1)} ds = 0.

Hence,

∫_σ^T S(s−σ)( ρ − K*Ψ ) ds = 0,  ∀ σ ∈ [0, T].   (4.43)

Then, for any given λ₀ ∈ ρ(A) and σ ∈ [0, T], we have

∫_σ^T S(s−σ)(λ₀ − A)⁻¹( ρ − K*Ψ ) ds = (λ₀ − A)⁻¹ ∫_σ^T S(s−σ)( ρ − K*Ψ ) ds = 0.   (4.44)

Differentiating the equality (4.44) with respect to σ, and noting (4.43), we see that

(λ₀ − A)⁻¹( ρ − K*Ψ ) = − ∫_σ^T S(s−σ) A (λ₀ − A)⁻¹( ρ − K*Ψ ) ds = ∫_σ^T S(s−σ)( ρ − K*Ψ ) ds − λ₀ ∫_σ^T S(s−σ)(λ₀ − A)⁻¹( ρ − K*Ψ ) ds = 0,  ∀ σ ∈ [0, T].

Therefore, ρ(⋅) = K(⋅)*Ψ(⋅). Consequently, Eq. (4.38) is reduced to

dψ = −Aψ dt − J*ψ dt + Ψ dW(t)  in [0, T),   ψ(T) = 0.   (4.45)

Clearly, the unique transposition solution to (4.45) is (ψ(⋅), Ψ(⋅)) = (0, 0). Hence, we conclude that ρ(⋅) = 0, which is a contradiction. Therefore, ℛ is dense in L⁴_F(0, T; H₀¹(0, 1)). □

Consider the following equation:

dw = ( w_xx + f ) dt + g dW(t)  in (0, T] × (0, 1),
w = 0  on (0, T] × {0, 1},   (4.46)
w(0) = 0  in (0, 1).

Here f, g ∈ L²_F(0, T; L²(0, 1)) with supp g ⊂ (t₀, T) × (0, 1) for some t₀ > 0 (see (A.17) for the definition of supp g). We have the following result.

Lemma 4.5. The solution w to (4.46) belongs to L^∞_F(0, T; H₀¹(0, 1)). Further, there exists a constant 𝒞, depending on t₀, such that

|w|_{L^∞_F(0,T;H₀¹(0,1))} ≤ 𝒞( |f|_{L²_F(0,T;L²(0,1))} + |g|_{L²_F(0,T;L²(0,1))} ).   (4.47)

Proof. The proof of Lemma 4.5 is standard. For j ∈ ℕ, let

f_j(t) = √2 ∫₀¹ f(t, x) sin(jπx) dx  and  g_j(t) = √2 ∫₀¹ g(t, x) sin(jπx) dx.

Then

f(t, x) = √2 Σ_{j=1}^∞ f_j(t) sin(jπx)  and  g(t, x) = √2 Σ_{j=1}^∞ g_j(t) sin(jπx).

Furthermore,

w(t, x) = √2 Σ_{j=1}^∞ ∫₀ᵗ f_j(s) e^{−j²π²s} sin(jπx) ds + √2 Σ_{j=1}^∞ ∫₀ᵗ g_j(s) e^{−j²π²s} sin(jπx) dW(s).

Then we have

E ∫₀¹ |w_x(t, x)|² dx = 2E ∫₀¹ | Σ_{j=1}^∞ ∫₀ᵗ f_j(s) e^{−j²π²s} jπ cos(jπx) ds + Σ_{j=1}^∞ ∫₀ᵗ g_j(s) e^{−j²π²s} jπ cos(jπx) dW(s) |² dx.   (4.48)
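The sine expansion at the heart of this proof is easy to reproduce numerically: the coefficients f_j = √2 ∫₀¹ f(x) sin(jπx) dx recover f through the truncated series, and Parseval's identity |f|²_{L²(0,1)} = Σ_j f_j² holds up to truncation and quadrature error. A hedged sketch for the arbitrary test function f(x) = x(1 − x) (time frozen, so this is the purely spatial part of the expansions above):

```python
import numpy as np

# Hedged check of the sine expansion and Parseval identity used in the proof
# of Lemma 4.5. Grid size, mode count and the test function are toy choices.
nx, nj = 2000, 50
dx = 1.0/nx
x = np.linspace(0.0, 1.0, nx+1)
f = x*(1.0 - x)                               # smooth, vanishes at x = 0, 1
j = np.arange(1, nj+1)[:, None]
phi = np.sqrt(2.0)*np.sin(j*np.pi*x[None, :]) # orthonormal basis, one row per j
fj = (phi*f[None, :]).sum(axis=1)*dx          # f_j ~ sqrt(2) int f sin(j pi x) dx
f_rec = (fj[:, None]*phi).sum(axis=0)         # truncated sine series of f
max_err = float(np.max(np.abs(f - f_rec)))
parseval_gap = float(abs(np.sum(f*f)*dx - np.sum(fj**2)))
```

For this f the exact coefficients are f_j = 4√2/(jπ)³ for odd j and 0 for even j, so fifty modes already reproduce f to a few parts in 10⁵.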
Since

2E ∫₀¹ | Σ_{j=1}^∞ ∫₀ᵗ f_j(s) e^{−j²π²s} jπ cos(jπx) ds |² dx = E Σ_{j=1}^∞ ( ∫₀ᵗ f_j(s) e^{−j²π²s} jπ ds )² ≤ E Σ_{j=1}^∞ ( ∫₀ᵗ f_j(s)² ds )( ∫₀ᵗ e^{−2j²π²s} j²π² ds ) ≤ 𝒞E Σ_{j=1}^∞ ∫₀ᵗ f_j(s)² ds = 𝒞|f|²_{L²_F(0,T;L²(0,1))}

and

2E ∫₀¹ | Σ_{j=1}^∞ ∫₀ᵗ g_j(s) e^{−j²π²s} jπ cos(jπx) dW(s) |² dx = E Σ_{j=1}^∞ ∫₀ᵗ g_j(s)² e^{−2j²π²s} j²π² ds = E Σ_{j=1}^∞ ∫_{t₀}^t e^{−2j²π²t₀} j²π² e^{−2j²π²(s−t₀)} g_j(s)² ds ≤ max_{j∈ℕ}{ e^{−2j²π²t₀} j²π² } E Σ_{j=1}^∞ ∫_{t₀}^t e^{−2j²π²(s−t₀)} g_j(s)² ds ≤ 𝒞|g|²_{L²_F(0,T;L²(0,1))},

it follows from (4.48) that

sup_{t∈[t₀,T]} E|w(t)|²_{H₀¹(0,1)} ≤ 𝒞( |f|²_{L²_F(0,T;L²(0,1))} + |g|²_{L²_F(0,T;L²(0,1))} ).

On the other hand, noting that the stochastic integral term in w vanishes for t ∈ [0, t₀] (thanks to supp g ⊂ (t₀, T) × (0, 1)), the above bound for the first term applies on [0, t₀] as well, and we get (4.47). □

4.3.3. Proof of Theorem 4.2

This subsection is addressed to proving Theorem 4.2.

Proof of Theorem 4.2. The proof is divided into five steps.

Step 1. Firstly, recall that {S(t)}_{t≥0} is a C₀-semigroup on L²(0, 1) and the restriction of {S(t)}_{t≥0} on H₀¹(0, 1) is a C₀-semigroup on H₀¹(0, 1). Furthermore, {S(t)}_{t≥0} can be extended to be a C₀-semigroup on H⁻¹(0, 1). In the rest of this subsection, for simplicity of notations, we use the same notation {S(t)}_{t≥0} to denote the restriction and the extension if there is no confusion.

Define a family of operators {𝒮(t)}_{t≥0} on ℒ₂(H⁻¹(0, 1); L²(0, 1)) as follows:

𝒮(t)O = S(t)O S(t),  ∀ O ∈ ℒ₂(H⁻¹(0, 1); L²(0, 1)).

Since {S(t)}_{t≥0} are C₀-semigroups on H⁻¹(0, 1) and L²(0, 1), we have that {𝒮(t)}_{t≥0} is a C₀-semigroup on ℒ₂(H⁻¹(0, 1); L²(0, 1)). Write 𝒜 for the infinitesimal generator of {𝒮(t)}_{t≥0}. The restriction of {𝒮(t)}_{t≥0} on ℒ₂(L²(0, 1); H₀¹(0, 1)) is a C₀-semigroup on ℒ₂(L²(0, 1); H₀¹(0, 1)). For simplicity of notations, we use {𝒮(t)}_{t≥0} to denote that restriction as well.

Recall that Γₙ is the orthogonal projection from L²(0, 1) to span{√2 sin(kπx)}_{k=1}^n. Then, it holds that

P_{T,n} ≜ Γₙ P_T Γₙ ∈ L²_{ℱ_T}(Ω; ℒ₂(L²(0, 1))),
F_n ≜ Γₙ F Γₙ ∈ L¹_F(0, T; L²(Ω; ℒ₂(L²(0, 1)))),
lim_{n→∞} | P_T − P_{T,n} |_{L²_{ℱ_T}(Ω;ℒ₂(L²(0,1);H⁻¹(0,1)))} = 0,   (4.49)
lim_{n→∞} | F − F_n |_{L¹_F(0,T;L²(Ω;ℒ₂(H₀¹(0,1);H⁻¹(0,1))))} = 0,

and

lim_{n→∞} | P_T η − P_{T,n} η |_{L^{4/3}(Ω;L²(0,1))} = 0,  ∀ η ∈ L⁴_{ℱ_T}(Ω; L²(0, 1)),
lim_{n→∞} | Fφ − F_n φ |_{L¹_F(0,T;L^{4/3}(Ω;L²(0,1)))} = 0,  ∀ φ ∈ L^∞_F(0, T; L⁴(Ω; L²(0, 1))).   (4.50)

Consider the following equation:

dP_n = −𝒜* P_n dt + f(t, P_n, Q_n) dt + F_n dt + Q_n dW(t)  in [0, T),   (4.51)
P_n(T) = P_{T,n},

where

f(t, P_n, Q_n) = −J* P_n − P_n J − K* P_n K − K* Q_n − Q_n K.   (4.52)

Noting that P_{T,n} ∈ L²_{ℱ_T}(Ω; ℒ₂(L²(0, 1))) and F_n ∈ L¹_F(0, T; L²(Ω; ℒ₂(L²(0, 1)))), by choosing H = ℒ₂(L²(0, 1)), it follows from Theorem B.16 that there exists a unique pair

(P_n, Q_n) ∈ D_F([0, T]; L²(Ω; ℒ₂(L²(0, 1)))) × L²_F(0, T; ℒ₂(L²(0, 1)))

which solves (4.51) in the sense of Eq. (B.28) and

|(P_n, Q_n)|_{D_F([0,T];L²(Ω;ℒ₂(L²(0,1))))×L²_F(0,T;ℒ₂(L²(0,1)))} ≤ 𝒞( |F_n|_{L¹_F(0,T;L²(Ω;ℒ₂(L²(0,1))))} + |P_{T,n}|_{L²_{ℱ_T}(Ω;ℒ₂(L²(0,1)))} ).   (4.53)

Further, for the following (forward) SEE:

dz = ( 𝒜z + û ) ds + v̂ dW(s)  in (t, T],   z(t) = η,   (4.54)

where t ∈ [0, T], û ∈ L¹_F(t, T; L²(Ω; ℒ₂(L²(0, 1)))), v̂ ∈ L²_F(t, T; ℒ₂(L²(0, 1))) and η ∈ L²_{ℱ_t}(Ω; ℒ₂(L²(0, 1))), we have

E⟨ z(T), P_{T,n} ⟩_{ℒ₂(L²(0,1))} − E ∫ₜᵀ ⟨ z, f(s, P_n, Q_n) + F_n ⟩_{ℒ₂(L²(0,1))} ds
= E⟨ η, P_n(t) ⟩_{ℒ₂(L²(0,1))} + E ∫ₜᵀ ⟨ û, P_n ⟩_{ℒ₂(L²(0,1))} ds + E ∫ₜᵀ ⟨ v̂, Q_n ⟩_{ℒ₂(L²(0,1))} ds.   (4.55)

From (4.55), for any m, n ∈ ℕ, we get that

E⟨ z(T), P_{T,n} − P_{T,m} ⟩_{ℒ₂(L²(0,1))} − E ∫ₜᵀ ⟨ z, F_n − F_m ⟩_{ℒ₂(L²(0,1))} ds − E ∫ₜᵀ ⟨ z, f(s, P_n, Q_n) − f(s, P_m, Q_m) ⟩_{ℒ₂(L²(0,1))} ds
= E⟨ η, P_n(t) − P_m(t) ⟩_{ℒ₂(L²(0,1))} + E ∫ₜᵀ ⟨ û, P_n − P_m ⟩_{ℒ₂(L²(0,1))} ds + E ∫ₜᵀ ⟨ v̂, Q_n − Q_m ⟩_{ℒ₂(L²(0,1))} ds.   (4.56)

In Eq. (4.54), we choose η ∈ L²_{ℱ_t}(Ω; ℒ₂(L²(0, 1); H₀¹(0, 1))) ⊂ L²_{ℱ_t}(Ω; ℒ₂(L²(0, 1))), û ∈ L¹_F(t, T; L²(Ω; ℒ₂(L²(0, 1); H₀¹(0, 1)))) ⊂ L¹_F(t, T; L²(Ω; ℒ₂(L²(0, 1)))) and v̂ ∈ L²_F(t, T; ℒ₂(L²(0, 1); H₀¹(0, 1))) ⊂ L²_F(t, T; ℒ₂(L²(0, 1))) such that

|η|_{L²_{ℱ_t}(Ω;ℒ₂(L²(0,1);H₀¹(0,1)))} = 1,   (4.57)

E⟨ η, P_n(t) − P_m(t) ⟩_{ℒ₂(L²(0,1))} = E⟨ η, P_n(t) − P_m(t) ⟩_{ℒ₂(L²(0,1);H₀¹(0,1)),ℒ₂(L²(0,1);H⁻¹(0,1))} ≥ ½ | P_n(t) − P_m(t) |_{L²(Ω;ℒ₂(L²(0,1);H⁻¹(0,1)))},   (4.58)

|û|_{L¹_F(t,T;L²(Ω;ℒ₂(L²(0,1);H₀¹(0,1))))} = 1,   (4.59)

E ∫ₜᵀ ⟨ û, P_n − P_m ⟩_{ℒ₂(L²(0,1))} ds = E ∫ₜᵀ ⟨ û, P_n − P_m ⟩_{ℒ₂(L²(0,1);H₀¹(0,1)),ℒ₂(L²(0,1);H⁻¹(0,1))} ds ≥ ½ | P_n − P_m |_{L^∞_F(t,T;L²(Ω;ℒ₂(L²(0,1);H⁻¹(0,1))))},   (4.60)

|v̂|_{L²_F(t,T;ℒ₂(L²(0,1);H₀¹(0,1)))} = 1,   (4.61)

and

E ∫ₜᵀ ⟨ v̂, Q_n − Q_m ⟩_{ℒ₂(L²(0,1))} ds = E ∫ₜᵀ ⟨ v̂, Q_n − Q_m ⟩_{ℒ₂(L²(0,1);H₀¹(0,1)),ℒ₂(L²(0,1);H⁻¹(0,1))} ds ≥ ½ | Q_n − Q_m |_{L²_F(t,T;ℒ₂(L²(0,1);H⁻¹(0,1)))}.   (4.62)

By Theorem B.7, we have z ∈ C_F([t, T]; L²(Ω; ℒ₂(L²(0, 1); H₀¹(0, 1)))) and

|z|_{C_F([t,T];L²(Ω;ℒ₂(L²(0,1);H₀¹(0,1))))} ≤ 𝒞( |η|_{L²_{ℱ_t}(Ω;ℒ₂(L²(0,1);H₀¹(0,1)))} + |û|_{L¹_F(t,T;L²(Ω;ℒ₂(L²(0,1);H₀¹(0,1))))} + |v̂|_{L²_F(t,T;ℒ₂(L²(0,1);H₀¹(0,1)))} ) ≤ 𝒞,   (4.63)

where the constant 𝒞 is independent of t ∈ [0, T].

From (4.52) and noting that |P_n − P_m|_{ℒ₂(L²(0,1);H⁻¹(0,1))} = |P_n − P_m|_{ℒ₂(H₀¹(0,1);L²(0,1))} and |Q_n − Q_m|_{ℒ₂(L²(0,1);H⁻¹(0,1))} = |Q_n − Q_m|_{ℒ₂(H₀¹(0,1);L²(0,1))}, we have that

| E ∫ₜᵀ ⟨ z, f(s, P_n, Q_n) − f(s, P_m, Q_m) ⟩_{ℒ₂(L²(0,1))} ds |
= | E ∫ₜᵀ ⟨ z(s), J*(P_n − P_m) + (P_n − P_m)J + K*(P_n − P_m)K + K*(Q_n − Q_m) + (Q_n − Q_m)K ⟩_{ℒ₂(L²(0,1))} ds |
≤ 𝒞 |z|_{C_F([t,T];L²(Ω;ℒ₂(L²(0,1);H₀¹(0,1))))} ( ||J|_{ℒ(L²(0,1))}|_{L^∞_F(0,T)} + ||K|_{ℒ(L²(0,1))}|_{L^∞_F(0,T)} )   (4.64)
× E ∫ₜᵀ ( |P_n − P_m|_{ℒ₂(L²(0,1);H⁻¹(0,1))} + |P_n − P_m|_{ℒ₂(H₀¹(0,1);L²(0,1))} + |Q_n − Q_m|_{ℒ₂(L²(0,1);H⁻¹(0,1))} + |Q_n − Q_m|_{ℒ₂(H₀¹(0,1);L²(0,1))} ) ds.

Combining (4.56)–(4.64), we obtain

|P_n − P_m|_{L^∞_F(t,T;L²(Ω;ℒ₂(L²(0,1);H⁻¹(0,1))))} + |Q_n − Q_m|_{L²_F(t,T;ℒ₂(L²(0,1);H⁻¹(0,1)))}
≤ 𝒞|P_{T,n} − P_{T,m}|_{L²_{ℱ_T}(Ω;ℒ₂(L²(0,1)))} + 𝒞|F_n − F_m|_{L¹_F(0,T;L²(Ω;ℒ₂(L²(0,1))))}
+ 𝒞( T − t + √(T − t) )( ||J|_{ℒ(L²(0,1))}|_{L^∞_F(0,T)} + ||K|_{ℒ(L²(0,1))}|_{L^∞_F(0,T)} )   (4.65)
× ( |P_n − P_m|_{L^∞_F(t,T;L²(Ω;ℒ₂(L²(0,1);H⁻¹(0,1))))} + |Q_n − Q_m|_{L²_F(t,T;ℒ₂(L²(0,1);H⁻¹(0,1)))} ).

By choosing t₀ ∈ [0, T) such that

𝒞( T − t₀ + √(T − t₀) )( ||J|_{ℒ(L²(0,1))}|_{L^∞_F(0,T)} + ||K|_{ℒ(L²(0,1))}|_{L^∞_F(0,T)} ) ≤ ½,

we get from (4.65) that

|P_n − P_m|_{L^∞_F(t₀,T;L²(Ω;ℒ₂(L²(0,1);H⁻¹(0,1))))} + |Q_n − Q_m|_{L²_F(t₀,T;ℒ₂(L²(0,1);H⁻¹(0,1)))}
≤ 𝒞( |P_{T,n} − P_{T,m}|_{L²_{ℱ_T}(Ω;ℒ₂(L²(0,1)))} + |F_n − F_m|_{L¹_F(0,T;L²(Ω;ℒ₂(L²(0,1))))} ).   (4.66)

If t₀ ≠ 0, by a similar argument, we can show that there exists t₁ ∈ [0, t₀) such that

|P_n − P_m|_{L^∞_F(t₁,t₀;L²(Ω;ℒ₂(L²(0,1);H⁻¹(0,1))))} + |Q_n − Q_m|_{L²_F(t₁,t₀;ℒ₂(L²(0,1);H⁻¹(0,1)))}
≤ 𝒞( |P_{T,n} − P_{T,m}|_{L²_{ℱ_T}(Ω;ℒ₂(L²(0,1)))} + |F_n − F_m|_{L¹_F(0,T;L²(Ω;ℒ₂(L²(0,1))))} ).   (4.67)

By induction, we have that

|P_n − P_m|_{L^∞_F(0,T;L²(Ω;ℒ₂(L²(0,1);H⁻¹(0,1))))} + |Q_n − Q_m|_{L²_F(0,T;ℒ₂(L²(0,1);H⁻¹(0,1)))}
≤ 𝒞( |P_{T,n} − P_{T,m}|_{L²_{ℱ_T}(Ω;ℒ₂(L²(0,1)))} + |F_n − F_m|_{L¹_F(0,T;L²(Ω;ℒ₂(L²(0,1))))} ).   (4.68)

Thus, {(P_n, Q_n)}_{n=1}^∞ is a Cauchy sequence in D_F(0, T; L²(Ω; ℒ₂(L²(0, 1); H⁻¹(0, 1)))) × L²_F(0, T; ℒ₂(L²(0, 1); H⁻¹(0, 1))). Therefore, there exists (P, Q) ∈ D_F(0, T; L²(Ω; ℒ₂(L²(0, 1); H⁻¹(0, 1)))) × L²_F(0, T; ℒ₂(L²(0, 1); H⁻¹(0, 1))) such that

lim_{n→∞} (P_n, Q_n) = (P, Q)  in D_F(0, T; L²(Ω; ℒ₂(L²(0, 1); H⁻¹(0, 1)))) × L²_F(0, T; ℒ₂(L²(0, 1); H⁻¹(0, 1))).   (4.69)

Step 2. Let O(⋅) = φ₂(⋅) ⊗ φ₁(⋅) with φ₁ and φ₂ solving (4.26) and (4.27) respectively, where ξ₁ ∈ L⁴_{ℱ_t}(Ω; L²(0, 1)), ξ₂ ∈ L⁴_{ℱ_t}(Ω; H₀¹(0, 1)), u₁ ∈ L²_F(t, T; L⁴(Ω; L²(0, 1))) and u₂, v₁, v₂ ∈ L²_F(t, T; L⁴(Ω; H₀¹(0, 1))). As usual, O(t, ω)η = ⟨η, φ₁⟩_{L²(0,1)} φ₂ for a.e. (t, ω) ∈ [0, T] × Ω and
2 0
𝜂 ∈ 𝐿2 (0, 1). Hence, 𝑂(𝑡, 𝜔) ∈ 2 (𝐿2 (0, 1); 𝐻01 (0, 1)).
= 2|𝑧|𝐶 ([𝑡,𝑇 ]; (𝐿2 (0,1);𝐻 1 (0,1)))
(
F 2 0
) For any 𝜆 ∈ 𝜌(𝐴), define a family of operators {𝜆 (𝑡)}𝑡≥0 on
× ||𝐽 |(𝐿2 (0,1)) |𝐿∞ (0,𝑇 ) + ||𝐾|(𝐿2 (0,1)) |𝐿∞ (0,𝑇 ) 2 (𝐿2 (0, 1); 𝐻01 (0, 1)) as follows:
F F
𝑇 ( 𝜆 (𝑡)𝑂 = 𝑆𝜆 (𝑡)𝑂𝑆𝜆 (𝑡), ∀ 𝑂 ∈ 2 (𝐿2 (0, 1); 𝐻01 (0, 1)).
×E |𝑃𝑛 − 𝑃𝑚 |2 (𝐿2 (0,1);𝐻 −1 (0,1))
∫𝑡
) Here {𝑆𝜆 (𝑡)}𝑡≥0 is the 𝐶0 -semigroup generated by the Yosida approxi-
+|𝑄𝑛 − 𝑄𝑚 |2 (𝐿2 (0,1);𝐻 −1 (0,1)) 𝑑𝑠
( √ ) mation 𝐴𝜆 of 𝐴.
≤ 𝑇 −𝑡+ 𝑇 −𝑡 Similarly to Step 1, we can prove that {𝜆 (𝑡)}𝑡≥0 is a 𝐶0 -semigroup
( )
× ||𝐽 |(𝐿2 (0,1)) |𝐿∞ (0,𝑇 ) + ||𝐾|(𝐿2 (0,1)) |𝐿∞ (0,𝑇 ) on 2 (𝐿2 (0, 1); 𝐻01 (0, 1)). Furthermore, for any 𝑂 ∈ 2 (𝐿2 (0, 1);
( F F
𝐻01 (0, 1)), we have that
× |𝑃𝑛 − 𝑃𝑚 |𝐿∞ (𝑡,𝑇 ;𝐿2 (𝛺;2 (𝐿2 (0,1);𝐻 −1 (0,1))))
F
) 𝜆 (𝑡)𝑂 − 𝑂
+|𝑄𝑛 − 𝑄𝑚 |𝐿2 (𝑡,𝑇 ; (𝐿2 (0,1);𝐻 −1 (0,1))) . lim
F 2 𝑡→0+ 𝑡
𝑆 (𝑡)𝑂𝑆𝜆 (𝑡) − 𝑂
Combining (4.56)–(4.64), we get that = lim+ 𝜆
𝑡→0 𝑡
|𝑃𝑛 − 𝑃𝑚 |𝐿∞ (𝑡,𝑇 ;𝐿2 (𝛺;2 (𝐿2 (0,1);𝐻 −1 (0,1)))) 𝑆 (𝑡)𝑂𝑆𝜆 (𝑡) − 𝑂𝑆𝜆 (𝑡) + 𝑂𝑆𝜆 (𝑡) − 𝑂
F
= lim+ 𝜆
𝑡→0 𝑡
+|𝑄𝑛 − 𝑄𝑚 |𝐿2 (𝑡,𝑇 ; 2 −1 = 𝐴𝜆 𝑂 + 𝑂𝐴𝜆 .
F 2 (𝐿 (0,1);𝐻 (0,1)))

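Step 1's contraction argument — pick t₀ close enough to T that the fixed-point map has constant at most ½, solve on [t₀,T], then repeat on [t₁,t₀], [t₂,t₁], … — is a standard bootstrap. The following self-contained Python sketch illustrates the same mechanism on a scalar backward Volterra equation; the dynamics, constants, and grid sizes are illustrative choices, not taken from the paper.

```python
import numpy as np

def solve_piece(L, t_lo, t_hi, p_end, steps=100, iters=60):
    """Picard iteration for p(t) = p_end + \int_t^{t_hi} L p(s) ds on [t_lo, t_hi].

    The iteration map is a sup-norm contraction only when L*(t_hi - t_lo) < 1,
    which is why the interval must be kept short."""
    grid = np.linspace(t_lo, t_hi, steps + 1)
    p = np.full(steps + 1, p_end)                  # initial guess
    for _ in range(iters):
        integrand = L * p
        # cumulative trapezoidal integral from t_lo up to each grid point
        cum = np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(grid))))
        p = p_end + (cum[-1] - cum)                # tail integral over [t, t_hi]
    return p[0]                                    # value at the left endpoint

L, T, pT = 4.0, 1.0, 1.0      # L*T = 4: no global contraction on [0, T]
h = 0.1                       # L*h = 0.4 < 1: contraction on each short piece

t_hi, p_end = T, pT
while t_hi > 1e-12:           # walk backward: T > t0 > t1 > ... > 0
    t_lo = max(0.0, t_hi - h)
    p_end = solve_piece(L, t_lo, t_hi, p_end)
    t_hi = t_lo

exact = pT * np.exp(L * T)    # the equation is p' = -L p backward from p(T) = pT
print(abs(p_end - exact) / exact)
```

Even though no single Picard iteration converges on all of [0,T], stitching the short-interval solutions together recovers the global solution, exactly as the induction from (4.66) to (4.68) does for the operator-valued equation.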
Hence, the infinitesimal generator 𝒜_λ of {𝒯_λ(t)}_{t≥0} is

 𝒜_λ O = A_λ O + O A_λ,  ∀ O ∈ ℒ₂(L²(0,1); H¹₀(0,1)).

For any O = h ⊗ g ∈ H¹₀(0,1) ⊗ L²(0,1) = ℒ₂(L²(0,1); H¹₀(0,1)), it holds that (here 𝒯(t)O ≜ S(t)O S(t))

 lim_{λ→∞} |𝒯(t)O − 𝒯_λ(t)O|_{ℒ₂(L²(0,1);H¹₀(0,1))}
 = lim_{λ→∞} |S(t)O S(t) − S_λ(t)O S_λ(t)|_{ℒ₂(L²(0,1);H¹₀(0,1))}
 = lim_{λ→∞} |S(t)h ⊗ S(t)g − S_λ(t)h ⊗ S_λ(t)g|_{L²(0,1)⊗H¹₀(0,1)}
 = lim_{λ→∞} (|S(t)h − S_λ(t)h|_{H¹₀(0,1)} |S(t)g − S_λ(t)g|_{L²(0,1)})
 = lim_{λ→∞} |S(t)h − S_λ(t)h|_{H¹₀(0,1)} · lim_{λ→∞} |S(t)g − S_λ(t)g|_{L²(0,1)}
 = 0.        (4.70)

Write O^λ = φ₂^λ ⊗ φ₁^λ, where φ₁^λ and φ₂^λ solve accordingly (4.31) and (4.32). Then,

 dO^λ = [(A_λ φ₂^λ) ⊗ φ₁^λ + φ₂^λ ⊗ (A_λ φ₁^λ) + u^λ] ds + v^λ dW(s),        (4.71)

where

 ⎧ u^λ = (Jφ₂^λ) ⊗ φ₁^λ + φ₂^λ ⊗ (Jφ₁^λ) + u₂ ⊗ φ₁^λ + φ₂^λ ⊗ u₁
 ⎪     + (Kφ₂^λ) ⊗ (Kφ₁^λ) + (Kφ₂^λ) ⊗ v₁ + v₂ ⊗ (Kφ₁^λ) + v₂ ⊗ v₁,        (4.72)
 ⎩ v^λ = (Kφ₂^λ) ⊗ φ₁^λ + φ₂^λ ⊗ (Kφ₁^λ) + v₂ ⊗ φ₁^λ + φ₂^λ ⊗ v₁.

Further, for any h ∈ L²(0,1), we have that

 ((A_λ φ₂^λ) ⊗ φ₁^λ)(h) = ⟨h, φ₁^λ⟩_{L²(0,1)} A_λ φ₂^λ = A_λ(⟨h, φ₁^λ⟩_{L²(0,1)} φ₂^λ) = A_λ(φ₂^λ ⊗ φ₁^λ)h.

Thus,

 (A_λ φ₂^λ) ⊗ φ₁^λ = A_λ O^λ.        (4.73)

Similarly, we have the following equalities:

 ⎧ φ₂^λ ⊗ (A_λ φ₁^λ) = O^λ A_λ* = O^λ A_λ,
 ⎪ (Jφ₂^λ) ⊗ φ₁^λ + φ₂^λ ⊗ (Jφ₁^λ) = J O^λ + O^λ J*,
 ⎪ (Kφ₂^λ) ⊗ (Kφ₁^λ) = K O^λ K*,
 ⎨ (Kφ₂^λ) ⊗ v₁ + v₂ ⊗ (Kφ₁^λ) = K(φ₂^λ ⊗ v₁) + (v₂ ⊗ φ₁^λ)K*,        (4.74)
 ⎩ (Kφ₂^λ) ⊗ φ₁^λ + φ₂^λ ⊗ (Kφ₁^λ) = K O^λ + O^λ K*.

It follows from (4.72)–(4.74) that

 ⎧ u^λ = J O^λ + O^λ J* + u₂ ⊗ φ₁^λ + φ₂^λ ⊗ u₁ + K O^λ K*
 ⎨     + K(φ₂^λ ⊗ v₁) + (v₂ ⊗ φ₁^λ)K* + v₂ ⊗ v₁,        (4.75)
 ⎩ v^λ = K O^λ + O^λ K* + v₂ ⊗ φ₁^λ + φ₂^λ ⊗ v₁.

From (4.71), (4.73), (4.75) and the first equality in (4.74), we see that O^λ solves

 { dO^λ = (𝒜_λ O^λ + u^λ) ds + v^λ dW(s) in (t,T],
 { O^λ(t) = ξ₂ ⊗ ξ₁.        (4.76)

Therefore,

 O^λ(s) = 𝒯_λ(s−t)(ξ₂ ⊗ ξ₁) + ∫_t^s 𝒯_λ(s−τ) u^λ dτ + ∫_t^s 𝒯_λ(s−τ) v^λ dW(τ),  ∀ s ∈ [t,T].        (4.77)

We claim that for every t ∈ [0,T),

 lim_{λ→∞} |O^λ − O|_{C_F([t,T]; L²(Ω; ℒ₂(L²(0,1); H¹₀(0,1))))} = 0.        (4.78)

Indeed, for any s ∈ [t,T], we have that

 |O^λ(s) − O(s)|²_{ℒ₂(L²(0,1);H¹₀(0,1))}
 = |φ₂^λ(s) ⊗ φ₁^λ(s) − φ₂(s) ⊗ φ₁(s)|²_{H¹₀(0,1)⊗L²(0,1)}
 ≤ 𝒞 |φ₂^λ(s) − φ₂(s)|²_{H¹₀(0,1)} |φ₁^λ(s) − φ₁(s)|²_{L²(0,1)}.

This, together with Lemma 4.2, implies (4.78).

Likewise, noting (4.70), using Burkholder–Davis–Gundy's inequality and (4.78), we can show that, for any t ∈ [0,T],

 ⎧ lim_{λ→∞} |∫_t^⋅ 𝒯_λ(⋅−τ) u^λ dτ − ∫_t^⋅ 𝒯(⋅−τ) u dτ|_{C_F([t,T];L²(Ω;ℒ₂(L²(0,1);H¹₀(0,1))))} = 0,
 ⎨        (4.79)
 ⎩ lim_{λ→∞} |∫_t^⋅ 𝒯_λ(⋅−τ) v^λ dW(τ) − ∫_t^⋅ 𝒯(⋅−τ) v dW(τ)|_{C_F([t,T];L²(Ω;ℒ₂(L²(0,1);H¹₀(0,1))))} = 0,

where

 ⎧ u = J O + O J* + u₂ ⊗ φ₁ + φ₂ ⊗ u₁ + K O K*
 ⎨   + (Kφ₂) ⊗ v₁ + v₂ ⊗ (Kφ₁) + v₂ ⊗ v₁,        (4.80)
 ⎩ v = K O + O K* + v₂ ⊗ φ₁ + φ₂ ⊗ v₁.

According to (4.77)–(4.79), we obtain that

 O(s) = 𝒯(s−t)(ξ₂ ⊗ ξ₁) + ∫_t^s 𝒯(s−τ) u dτ + ∫_t^s 𝒯(s−τ) v dW(τ),  ∀ s ∈ [t,T].        (4.81)

Hence, O(⋅) solves

 { dO = (𝒜 O + u) ds + v dW(s) in (t,T],
 { O(t) = ξ₂ ⊗ ξ₁.        (4.82)

Step 3. Since (P_n, Q_n) is the transposition solution to (4.51), it follows from (4.82) that

 E⟨O(T), P_{T,n}⟩_{ℒ₂(L²(0,1))} − E∫_t^T ⟨O, f_n(s,P_n(s),Q_n(s))⟩_{ℒ₂(L²(0,1))} ds        (4.83)
 = E⟨ξ₁ ⊗ ξ₂, P_n(t)⟩_{ℒ₂(L²(0,1))} + E∫_t^T ⟨u, P_n⟩_{ℒ₂(L²(0,1))} ds + E∫_t^T ⟨v, Q_n⟩_{ℒ₂(L²(0,1))} ds.

Recalling that O(⋅) = φ₂(⋅) ⊗ φ₁(⋅), we find that

 E∫_t^T ⟨O(s), f_n(s,P_n(s),Q_n(s))⟩_{ℒ₂(L²(0,1))} ds        (4.84)
 = E∫_t^T ⟨(−J P_n − P_n J − K* P_n K − K* Q_n − Q_n K + F_n)φ₁, φ₂⟩_{L²(0,1)} ds.

Further, by (4.80), we have

 E∫_t^T ⟨u, P_n⟩_{ℒ₂(L²(0,1))} ds        (4.85)
 = E∫_t^T (⟨J P_n φ₁, φ₂⟩_{L²(0,1)} + ⟨P_n J φ₁, φ₂⟩_{L²(0,1)} + ⟨P_n u₁, φ₂⟩_{L²(0,1)} + ⟨P_n φ₁, u₂⟩_{L²(0,1)}
   + ⟨K* P_n K φ₁, φ₂⟩_{L²(0,1)} + ⟨K φ₁, P_n* v₂⟩_{L²(0,1)} + ⟨K* P_n v₁, φ₂⟩_{L²(0,1)} + ⟨P_n v₁, v₂⟩_{L²(0,1)}) ds,

and

 E∫_t^T ⟨v, Q_n⟩_{ℒ₂(L²(0,1))} ds        (4.86)
 = E∫_t^T (⟨K* Q_n φ₁, φ₂⟩_{L²(0,1)} + ⟨Q_n K φ₁, φ₂⟩_{L²(0,1)} + ⟨v₁, Q_n* φ₂⟩_{L²(0,1)} + ⟨Q_n φ₁, v₂⟩_{L²(0,1)}) ds.

It follows from (4.83)–(4.86) that

 E⟨P_{T,n} φ₁(T), φ₂(T)⟩_{L²(0,1)} − E∫_t^T ⟨F_n φ₁, φ₂⟩_{L²(0,1)} ds
 = E⟨P_n(t)ξ₁, ξ₂⟩_{L²(0,1)} + E∫_t^T (⟨u₁, P_n* φ₂⟩_{L²(0,1)}
   + ⟨P_n φ₁, u₂⟩_{L²(0,1)} + ⟨P_n K φ₁, v₂⟩_{L²(0,1)}        (4.87)
   + ⟨v₁, P_n* K φ₂ + P_n* v₂⟩_{L²(0,1)}
   + ⟨v₁, Q_n* φ₂⟩_{L²(0,1)} + ⟨Q_n φ₁, v₂⟩_{L²(0,1)}) ds.

From (4.49), (4.50) and (4.87), by letting n → ∞, we obtain that for any t ∈ [0,T], ξ₁ ∈ L⁴_{ℱ_t}(Ω; L²(0,1)), ξ₂ ∈ L⁴_{ℱ_t}(Ω; H¹₀(0,1)), u₁ ∈ L²_F(t,T; L⁴(Ω; L²(0,1))) and u₂, v₁, v₂ ∈ L²_F(t,T; L⁴(Ω; H¹₀(0,1))),

 E⟨P_T φ₁(T), φ₂(T)⟩_{L²(0,1)} − E∫_t^T ⟨F φ₁, φ₂⟩_{L²(0,1)} ds
 = E⟨P(t)ξ₁, ξ₂⟩_{H⁻¹(0,1),H¹₀(0,1)}
  + E∫_t^T (⟨P u₁, φ₂⟩_{H⁻¹(0,1),H¹₀(0,1)}        (4.88)
   + ⟨P φ₁, u₂⟩_{H⁻¹(0,1),H¹₀(0,1)} + ⟨P K φ₁, v₂⟩_{H⁻¹(0,1),H¹₀(0,1)}
   + ⟨v₁(s), P* K φ₂ + P* v₂⟩_{H¹₀(0,1),H⁻¹(0,1)}
   + ⟨Q v₁, φ₂⟩_{H⁻¹(0,1),H¹₀(0,1)} + ⟨Q φ₁, v₂⟩_{H⁻¹(0,1),H¹₀(0,1)}) ds.

Step 4. In this step, we prove that

 P(⋅) ∈ D_{F,w}([0,T]; L²(Ω; ℒ(L²(0,1)))).        (4.89)

To this end, we only need to show that for any ξ ∈ L⁴_{ℱ_t}(Ω; L²(0,1)),

 χ_{[t,T]} P(⋅)ξ ∈ D_F([t,T]; L^{4/3}(Ω; L²(0,1))).

For any ξ₂ ∈ L⁴_{ℱ_t}(Ω; L²(0,1)), there exists a sequence {ξ_{2,n}}_{n=1}^∞ ⊂ L⁴_{ℱ_t}(Ω; H¹₀(0,1)) such that

 lim_{n→∞} ξ_{2,n} = ξ₂ in L⁴_{ℱ_t}(Ω; L²(0,1)).

Then it holds that

 lim_{n→∞} φ₂(⋅; ξ_{2,n}, 0, 0) = φ₂(⋅; ξ₂, 0, 0) in C_F([t,T]; L⁴(Ω; L²(0,1))).

For any ξ₁ ∈ L⁴_{ℱ_t}(Ω; L²(0,1)) and n ∈ ℕ, it follows from (4.88) that

 E⟨P(t)ξ₁, ξ_{2,n}⟩_{H⁻¹(0,1),H¹₀(0,1)}
 = E⟨P_T φ₁(T; ξ₁, 0, 0), φ₂(T; ξ_{2,n}, 0, 0)⟩_{L²(0,1)}        (4.90)
  − E∫_t^T ⟨F(s)φ₁(s; ξ₁, 0, 0), φ₂(s; ξ_{2,n}, 0, 0)⟩_{L²(0,1)} ds.

Letting n → ∞ in (4.90), we get that

 E⟨P(t)ξ₁, ξ₂⟩_{L²(0,1)}
 = E⟨P_T φ₁(T; ξ₁, 0, 0), φ₂(T; ξ₂, 0, 0)⟩_{L²(0,1)}        (4.91)
  − E∫_t^T ⟨F(s)φ₁(s; ξ₁, 0, 0), φ₂(s; ξ₂, 0, 0)⟩_{L²(0,1)} ds.

By the above argument, (4.91) holds for any ξ₁, ξ₂ ∈ L⁴_{ℱ_t}(Ω; L²(0,1)). Hence,

 E⟨P(t)ξ₁, ξ₂⟩_{L²(0,1)} = E⟨𝐔(T,t)* P_T 𝐔(T,t)ξ₁ − ∫_t^T 𝐔(s,t)* F(s) 𝐔(s,t)ξ₁ ds, ξ₂⟩_{L²(0,1)}.

This leads to

 P(t)ξ₁ = E(𝐔(T,t)* P_T 𝐔(T,t)ξ₁ − ∫_t^T 𝐔(s,t)* F(s) 𝐔(s,t)ξ₁ ds | ℱ_t).        (4.92)

Similarly, for any r ∈ [t,T], we can obtain that

 P(r)ξ₁ = E(𝐔(T,r)* P_T 𝐔(T,r)ξ₁ − ∫_r^T 𝐔(s,r)* F(s) 𝐔(s,r)ξ₁ ds | ℱ_r).        (4.93)

It follows from (4.92) and (4.93) that

 E|P(r)ξ − P(t)ξ|^{4/3}_{L²(0,1)}
 ≤ 𝒞[ E|E(𝐔(T,r)* P_T 𝐔(T,r)ξ − ∫_r^T 𝐔(s,r)* F(s) 𝐔(s,r)ξ ds | ℱ_r)
   − E(𝐔(T,t)* P_T 𝐔(T,t)ξ − ∫_t^T 𝐔(s,t)* F(s) 𝐔(s,t)ξ ds | ℱ_r)|^{4/3}_{L²(0,1)}        (4.94)
  + E|E(𝐔(T,t)* P_T 𝐔(T,t)ξ − ∫_t^T 𝐔(s,t)* F(s) 𝐔(s,t)ξ ds | ℱ_r)
   − E(𝐔(T,t)* P_T 𝐔(T,t)ξ − ∫_t^T 𝐔(s,t)* F(s) 𝐔(s,t)ξ ds | ℱ_t)|^{4/3}_{L²(0,1)} ].

By Lemma B.1, it is easy to show that

 lim_{r→t+} E|E(𝐔(T,t)* P_T 𝐔(T,t)ξ − ∫_t^T 𝐔(s,t)* F(s) 𝐔(s,t)ξ ds | ℱ_r)
  − E(𝐔(T,t)* P_T 𝐔(T,t)ξ − ∫_t^T 𝐔(s,t)* F(s) 𝐔(s,t)ξ ds | ℱ_t)|^{4/3}_{L²(0,1)} = 0.        (4.95)

On the other hand,

 E|E(𝐔(T,r)* P_T 𝐔(T,r)ξ − ∫_r^T 𝐔(s,r)* F(s) 𝐔(s,r)ξ ds | ℱ_r)
  − E(𝐔(T,t)* P_T 𝐔(T,t)ξ − ∫_t^T 𝐔(s,t)* F(s) 𝐔(s,t)ξ ds | ℱ_r)|^{4/3}_{L²(0,1)}
 ≤ E|𝐔(T,r)* P_T 𝐔(T,r)ξ − 𝐔(T,t)* P_T 𝐔(T,t)ξ|^{4/3}_{L²(0,1)}        (4.96)
  + E|∫_r^T [𝐔(s,r)* F(s) 𝐔(s,r)ξ − 𝐔(s,t)* F(s) 𝐔(s,t)ξ] ds|^{4/3}_{L²(0,1)}
  + E|∫_t^r 𝐔(s,t)* F(s) 𝐔(s,t)ξ ds|^{4/3}_{L²(0,1)}.

By Lemma B.1 again, we obtain that

 lim_{r→t+} E|E(𝐔(T,r)* P_T 𝐔(T,r)ξ − ∫_r^T 𝐔(s,r)* F(s) 𝐔(s,r)ξ ds | ℱ_r)
  − E(𝐔(T,t)* P_T 𝐔(T,t)ξ − ∫_t^T 𝐔(s,t)* F(s) 𝐔(s,t)ξ ds | ℱ_r)|^{4/3}_{L²(0,1)} = 0.        (4.97)

From (4.94)–(4.97), we obtain that for any ξ ∈ L⁴_{ℱ_t}(Ω; L²(0,1)),

 χ_{[t,T]} P(⋅)ξ ∈ D_F([t,T]; L^{4/3}(Ω; L²(0,1)))

and

 |P(⋅)ξ|_{D_F([t,T];L^{4/3}(Ω;L²(0,1)))}
 ≤ 𝒞(||F|_{ℒ(L²(0,1))}|_{L¹_F(0,T;L²(Ω))} + |P_T|_{L²_{ℱ_T}(Ω;ℒ(L²(0,1)))}) |ξ|_{L⁴_{ℱ_t}(Ω;L²(0,1))}.

Hence, we get that P(⋅) ∈ D_{F,w}([0,T]; L²(Ω; ℒ(L²(0,1)))). This completes the proof of (4.89).

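Step 2's semigroup 𝒯_λ(t)O = S_λ(t)O S_λ(t) with generator O ↦ A_λO + OA_λ has an elementary matrix analogue that can be checked numerically. In the sketch below, A is a discrete Dirichlet Laplacian on (0,1) and O = φ₂ ⊗ φ₁ is a rank-one matrix; all sizes and the choice of t are illustrative, not from the paper.

```python
import numpy as np

n = 20
dx = 1.0 / (n + 1)
# discrete Dirichlet Laplacian on (0,1): the finite-dimensional stand-in for A
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2

w, V = np.linalg.eigh(A)            # A = V diag(w) V^T (A is symmetric)

def S(t):
    """Matrix exponential e^{tA} via the spectral decomposition."""
    return (V * np.exp(t * w)) @ V.T

rng = np.random.default_rng(0)
phi1, phi2 = rng.standard_normal(n), rng.standard_normal(n)
O = np.outer(phi2, phi1)            # rank-one O = phi2 ⊗ phi1: O eta = <eta, phi1> phi2

t = 1e-6                            # small t for a difference quotient
quotient = (S(t) @ O @ S(t) - O) / t
generator = A @ O + O @ A           # the claimed generator O -> AO + OA
err = np.linalg.norm(quotient - generator) / np.linalg.norm(generator)
print(err)
```

The relative error of the difference quotient is of order t·‖A‖, so it shrinks as t does — a finite-dimensional shadow of the limit computed before (4.70).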
Step 5. In this step, we show that (P,Q) satisfies (4.30).

For any t ∈ [0,T], ξ₁, ξ₂ ∈ L⁴_{ℱ_t}(Ω; L²(0,1)) and u₁(⋅), u₂(⋅), v₁(⋅), v₂(⋅) ∈ L²_F(t,T; L⁴(Ω; L²(0,1))), we can find four sequences {ξ_{2,n}}_{n=1}^∞ ⊂ L⁴_{ℱ_t}(Ω; H¹₀(0,1)) and {u_{2,n}}_{n=1}^∞, {v_{1,n}}_{n=1}^∞, {v_{2,n}}_{n=1}^∞ ⊂ L²_F(t,T; L⁴(Ω; H¹₀(0,1))) such that

 ⎧ lim_{n→∞} |ξ_{2,n} − ξ₂|_{L⁴_{ℱ_t}(Ω;L²(0,1))} = 0,
 ⎪ lim_{n→∞} |u_{2,n} − u₂|_{L²_F(t,T;L⁴(Ω;L²(0,1)))} = 0,
 ⎨ lim_{n→∞} |v_{1,n} − v₁|_{L²_F(t,T;L⁴(Ω;L²(0,1)))} = 0,        (4.98)
 ⎩ lim_{n→∞} |v_{2,n} − v₂|_{L²_F(t,T;L⁴(Ω;L²(0,1)))} = 0.

Due to (4.88), we have that

 E⟨P_T φ_{1,n}(T), φ_{2,n}(T)⟩_{L²(0,1)} − E∫_t^T ⟨F(s)φ_{1,n}, φ_{2,n}⟩_{L²(0,1)} ds
 = E⟨P(t)ξ₁, ξ_{2,n}⟩_{H⁻¹(0,1),H¹₀(0,1)}
  + E∫_t^T ⟨P(s)u₁, φ_{2,n}⟩_{H⁻¹(0,1),H¹₀(0,1)} ds
  + E∫_t^T ⟨P φ_{1,n}, u_{2,n}⟩_{H⁻¹(0,1),H¹₀(0,1)} ds        (4.99)
  + E∫_t^T ⟨P K φ_{1,n}, v_{2,n}⟩_{H⁻¹(0,1),H¹₀(0,1)} ds
  + E∫_t^T ⟨v_{1,n}, P*(K φ_{2,n} + v_{2,n})⟩_{H¹₀(0,1),H⁻¹(0,1)} ds
  + E∫_t^T ⟨Q v_{1,n}, φ_{2,n}⟩_{H⁻¹(0,1),H¹₀(0,1)} ds
  + E∫_t^T ⟨Q φ_{1,n}, v_{2,n}⟩_{H⁻¹(0,1),H¹₀(0,1)} ds,

where φ_{1,n}(⋅) solves (4.26) with v₁ replaced by v_{1,n}, and φ_{2,n}(⋅) is the solution to (4.27) with ξ₂, u₂ and v₂ replaced by ξ_{2,n}, u_{2,n} and v_{2,n}, respectively. Clearly,

 { lim_{n→∞} φ_{1,n} = φ₁ in C_F([0,T]; L⁴(Ω; L²(0,1))),
 { lim_{n→∞} φ_{2,n} = φ₂ in C_F([0,T]; L⁴(Ω; L²(0,1))).        (4.100)

Since P(⋅) ∈ D_{F,w}([0,T]; L²(Ω; ℒ(L²(0,1)))),

 E⟨P(t)ξ₁, ξ_{2,n}⟩_{H⁻¹(0,1),H¹₀(0,1)} = E⟨P(t)ξ₁, ξ_{2,n}⟩_{L²(0,1)}.

Therefore, it holds that

 lim_{n→∞} E⟨P(t)ξ₁, ξ_{2,n}⟩_{H⁻¹(0,1),H¹₀(0,1)}
 = lim_{n→∞} E⟨P(t)ξ₁, ξ_{2,n}⟩_{L²(0,1)}        (4.101)
 = E⟨P(t)ξ₁, ξ₂⟩_{L²(0,1)}.

Similarly, we can get that

 lim_{n→∞} ( E∫_t^T ⟨P u₁, φ_{2,n}⟩_{H⁻¹(0,1),H¹₀(0,1)} ds
  + E∫_t^T ⟨P φ_{1,n}, u_{2,n}⟩_{H⁻¹(0,1),H¹₀(0,1)} ds
  + E∫_t^T ⟨P K φ_{1,n}, v_{2,n}⟩_{H⁻¹(0,1),H¹₀(0,1)} ds        (4.102)
  + E∫_t^T ⟨v_{1,n}, P*(K φ_{2,n} + v_{2,n})⟩_{H¹₀(0,1),H⁻¹(0,1)} ds )
 = E∫_t^T ⟨P u₁, φ₂⟩_{L²(0,1)} ds + E∫_t^T ⟨P(s)φ₁, u₂⟩_{L²(0,1)} ds
  + E∫_t^T ⟨P K φ₁, v₂⟩_{L²(0,1)} ds + E∫_t^T ⟨P v₁, K φ₂ + v₂⟩_{L²(0,1)} ds.

From (4.99), (4.101) and (4.102), we see that

 E⟨P_T φ₁(T), φ₂(T)⟩_{L²(0,1)} − E∫_t^T ⟨F φ₁, φ₂⟩_{L²(0,1)} ds
 = E⟨P(t)ξ₁, ξ₂⟩_{L²(0,1)} + E∫_t^T ⟨P u₁, φ₂⟩_{L²(0,1)} ds
  + E∫_t^T ⟨P φ₁, u₂⟩_{L²(0,1)} ds + E∫_t^T ⟨P K φ₁, v₂⟩_{L²(0,1)} ds
  + E∫_t^T ⟨P v₁, K φ₂ + v₂⟩_{L²(0,1)} ds        (4.103)
  + E∫_t^T ⟨Q v₁, φ₂⟩_{H⁻¹(0,1),H¹₀(0,1)} ds
  + E∫_t^T ⟨φ₁, Q* v₂⟩_{H¹₀(0,1),H⁻¹(0,1)} ds.

Step 6. In this step, we complete the proof of Theorem 4.2.

First, we prove the uniqueness of the pair (P,Q) which fulfills (4.103). Assume that there are (P₁,Q₁), (P₂,Q₂) ∈ D_{F,w}([0,T]; L²(Ω; ℒ(L²(0,1)))) × L²_F(0,T; ℒ₂(L²(0,1); H⁻¹(0,1))) fulfilling (4.103). By choosing u₁ = v₁ = 0 and u₂ = v₂ = 0 in (4.26) and (4.27), respectively, we find from (4.103) that

 E⟨P_T φ₁(T), φ₂(T)⟩_{L²(0,1)} − E∫_t^T ⟨F φ₁, φ₂⟩_{L²(0,1)} ds        (4.104)
 = E⟨P₁(t)ξ₁, ξ₂⟩_{L²(0,1)} = E⟨P₂(t)ξ₁, ξ₂⟩_{L²(0,1)}.

By the arbitrariness of ξ₁, ξ₂ ∈ L⁴_{ℱ_t}(Ω; L²(0,1)), we get from (4.104) that P₁(t) = P₂(t) for all t ∈ [0,T]. This, together with (4.103), implies that

 E∫_t^T ⟨Q₁ v₁, φ₂⟩_{H⁻¹(0,1),H¹₀(0,1)} ds
  + E∫_t^T ⟨φ₁, Q₁* v₂⟩_{H¹₀(0,1),H⁻¹(0,1)} ds        (4.105)
 = E∫_t^T ⟨Q₂ v₁, φ₂⟩_{H⁻¹(0,1),H¹₀(0,1)} ds
  + E∫_t^T ⟨φ₁, Q₂* v₂⟩_{H¹₀(0,1),H⁻¹(0,1)} ds.

By choosing v₂ = 0 in (4.27), it follows from (4.105) that

 E∫_t^T ⟨(Q₁ − Q₂)v₁, φ₂⟩_{H⁻¹(0,1),H¹₀(0,1)} ds = 0.

This, together with Lemma 4.4, implies that

 (Q₁(s) − Q₂(s))v₁(s) = 0 for a.e. s ∈ [0,T].

By the arbitrariness of v₁ ∈ L⁴_F(0,T; L⁴(Ω; L²(0,1))), we get that Q₁ = Q₂ in L²_F(0,T; ℒ₂(L²(0,1); H⁻¹(0,1))).

Next, since P_T and F are self-adjoint operator-valued, if (P,Q) satisfies (4.103), then (P*,Q*) also satisfies (4.103). By the uniqueness of the transposition solution, (P,Q) = (P*,Q*). Thus, both P and Q are self-adjoint operator-valued processes. □

4.3.4. Pontryagin-type maximum principle for nonconvex control domain

We assume the following further conditions for the optimal control problem (OP).

(B4) a(t,x,⋅,u), b(t,x,⋅,u), g(t,x,⋅,u) and h(x,⋅) are C², such that for φ = a, b, g, it holds that φ_y(t,x,r,⋅) and φ_{yy}(t,x,r,⋅) are continuous, and for any (r,u) ∈ ℝ × ℝ and a.e. (t,x) ∈ [0,T] × (0,1),

 { |φ_y(t,x,r,u)| + |h_y(x,r)| ≤ 𝒞,
 { |φ_{yy}(t,x,r,u)| + |h_{yy}(x,r)| ≤ 𝒞.

Let

 H(t,y,u,k₁,k₂) ≜ ⟨k₁, a(t,y,u)⟩_{L²(0,1)} + ⟨k₂, b(t,y,u)⟩_{L²(0,1)} − g(t,y,u),        (4.106)
 (t,y,u,k₁,k₂) ∈ [0,T] × L²(0,1) × U × L²(0,1) × L²(0,1).

For any optimal pair (ȳ(⋅), ū(⋅)) of Problem (OP), let (z(⋅), Z(⋅)) be the corresponding transposition solution to Eq. (4.5), and let (P(⋅), Q(⋅)) be the transposition solution to Eq. (4.25) in which

 ⎧ P_T = −h_{yy}(ȳ(T)),  J(t) = a_y(t, ȳ(t), ū(t)),
 ⎨ K(t) = b_y(t, ȳ(t), ū(t)),        (4.107)
 ⎩ F(t) = −H_{yy}(t, ȳ(t), ū(t), z(t), Z(t)).

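For intuition, the Hamiltonian (4.106) is simply two L²(0,1) inner products minus the running cost, so on a spatial grid it reduces to a few dot products, and the pointwise maximization in the maximum principle can be scanned numerically. In the sketch below the coefficients a, b, g and the adjoint stand-ins k₁, k₂ are hypothetical illustrative choices, not the paper's data.

```python
import numpy as np

n = 400
x = np.linspace(0.0, 1.0, n, endpoint=False) + 0.5 / n   # midpoint grid on (0,1)
dx = 1.0 / n

def inner(f, g):
    """<f, g>_{L^2(0,1)} approximated by the midpoint rule."""
    return float(np.dot(f, g) * dx)

def a(t, y, u): return np.sin(y) + u          # illustrative drift
def b(t, y, u): return 0.5 * u * y            # illustrative diffusion coefficient
def g(t, y, u): return inner(y, y) + u**2     # illustrative running cost

def H(t, y, u, k1, k2):
    # the Hamiltonian (4.106): <k1, a> + <k2, b> - g
    return inner(k1, a(t, y, u)) + inner(k2, b(t, y, u)) - g(t, y, u)

y  = np.sin(np.pi * x)        # a state profile
k1 = np.cos(np.pi * x)        # stand-in for the first adjoint z(t)
k2 = x * (1.0 - x)            # stand-in for the second adjoint Z(t)

# scan a grid of scalar control values for the pointwise maximizer of u -> H
us = np.linspace(-2.0, 2.0, 81)
vals = np.array([H(0.0, y, u, k1, k2) for u in us])
print(us[np.argmax(vals)])
```

With these particular stand-ins, u ↦ H is a concave parabola (the −u² term dominates), so the scan picks out the grid point nearest its vertex; the P-dependent quadratic correction of (4.108) would shift that vertex.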
We have the following result⁶.

Theorem 4.3. Let (B1), (B2) and (B4) hold and L²_{ℱ_T}(Ω) be separable. Then, for any ρ ∈ U,

 H(t, ȳ(t), ū(t), z(t), Z(t)) − H(t, ȳ(t), ρ, z(t), Z(t))
 − ½⟨P(t)(b(t, ȳ(t), ū(t)) − b(t, ȳ(t), ρ)),
   b(t, ȳ(t), ū(t)) − b(t, ȳ(t), ρ)⟩_{L²(0,1)}        (4.108)
 ≥ 0,  a.e. (t,ω) ∈ [0,T] × Ω.

⁶ See Stannat and Wessels (0000) for a similar result under the additional assumption that the filtration 𝐅 is natural.

Proof. We divide the proof into six steps.

Step 1. For any τ ∈ (0,T), we choose ε > 0 such that E_{τ,ε} = [τ, τ+ε] ⊂ [0,T]. Put

 u^ε(t) = { ū(t),  t ∈ [0,T] ∖ E_{τ,ε},
          { u(t),  t ∈ E_{τ,ε},

where u(⋅) is an arbitrary given element in 𝒰[0,T]. For φ = a, b, g, write

 ⎧ φ₁(t) = φ_y(t, ȳ(t), ū(t)),
 ⎪ φ₁₁(t) = φ_{yy}(t, ȳ(t), ū(t)),
 ⎨ δφ(t) = φ(t, ȳ(t), u(t)) − φ(t, ȳ(t), ū(t)),        (4.109)
 ⎩ δφ₁(t) = φ_y(t, ȳ(t), u(t)) − φ_y(t, ȳ(t), ū(t)).

Let y^ε(⋅) solve

 ⎧ dy^ε = (y^ε_{xx} + a(t, y^ε, u^ε)) dt + b(t, y^ε, u^ε) dW(t)  in (0,T] × (0,1),
 ⎨ y^ε = 0  on (0,T] × {0,1},        (4.110)
 ⎩ y^ε(0) = y₀  in (0,1).

From Theorem B.7, it follows that

 |y^ε|_{C_F([0,T];L⁸(Ω;L²(0,1)))} ≤ 𝒞(1 + |y₀|_{L²(0,1)}),  ∀ ε ∈ (0, T−τ).        (4.111)

Let y₁^ε(⋅) = y^ε(⋅) − ȳ(⋅). Then, y₁^ε(⋅) satisfies the following stochastic parabolic equation:

 ⎧ dy₁^ε = (y^ε_{1,xx} + a₁^ε y₁^ε + χ_{E_{τ,ε}} δa) dt
 ⎪       + (b₁^ε y₁^ε + χ_{E_{τ,ε}} δb) dW(t)  in (0,T] × (0,1),
 ⎨ y₁^ε = 0  on (0,T] × {0,1},        (4.112)
 ⎩ y₁^ε(0) = 0  in (0,1),

where for φ = a, b,

 φ₁^ε(t) = ∫_0^1 φ_y(t, ȳ(t) + σ y₁^ε(t), u^ε(t)) dσ.        (4.113)

Consider the following two stochastic parabolic equations:

 ⎧ dy₂^ε = (y^ε_{2,xx} + a₁ y₂^ε) dt
 ⎪       + (b₁ y₂^ε + χ_{E_{τ,ε}} δb) dW(t)  in (0,T] × (0,1),
 ⎨ y₂^ε = 0  on (0,T] × {0,1},        (4.114)
 ⎩ y₂^ε(0) = 0  in (0,1)

and

 ⎧ dy₃^ε = [y^ε_{3,xx} + a₁ y₃^ε + χ_{E_{τ,ε}} δa + ½ a₁₁ (y₂^ε)²] dt
 ⎪       + [b₁ y₃^ε + χ_{E_{τ,ε}} δb₁ y₂^ε + ½ b₁₁ (y₂^ε)²] dW(t)
 ⎨          in (0,T] × (0,1),        (4.115)
 ⎪ y₃^ε = 0  on (0,T] × {0,1},
 ⎩ y₃^ε(0) = 0  in (0,1).

In the following Steps 2–4, we shall prove that

 |y₁^ε − y₂^ε − y₃^ε|_{C_F([0,T];L²(Ω;L²(0,1)))} = o(ε),  as ε → 0.        (4.116)

Step 2. In this step, we provide some estimates on y_i^ε (i = 1, 2, 3).

First of all, applying Theorem B.7 to (4.112), by (B1) and (B4), we find that

 sup_{t∈[0,T]} E|y₁^ε(t)|⁸_{L²(0,1)} + |y₁^ε|⁸_{L²_F(0,T;H¹₀(0,1))}
 ≤ 𝒞[ E(∫_0^T χ_{E_{τ,ε}}(s) |δa(s)|_{L²(0,1)} ds)⁸        (4.117)
  + E(∫_0^T χ_{E_{τ,ε}}(s) |δb(s)|²_{L²(0,1)} ds)⁴ ].

From (B4), it follows that

 |δa(s)|_{L²(0,1)}
 = |a(s, ȳ(s), u(s)) − a(s, ȳ(s), ū(s))|_{L²(0,1)}
 ≤ |a(s, ȳ(s), u(s)) − a(s, 0, u(s))|_{L²(0,1)}
  + |a(s, 0, ū(s)) − a(s, ȳ(s), ū(s))|_{L²(0,1)}        (4.118)
  + |a(s, 0, u(s)) − a(s, 0, ū(s))|_{L²(0,1)}
 ≤ |∫_0^1 a_y(s, σȳ(s), u(s)) ȳ(s) dσ|_{L²(0,1)}
  + |∫_0^1 a_y(s, σȳ(s), ū(s)) ȳ(s) dσ|_{L²(0,1)} + 𝒞
 ≤ 𝒞(|ȳ(s)|_{L²(0,1)} + 1),  a.e. s ∈ [0,T].

Hence,

 E(∫_0^T χ_{E_{τ,ε}} |δa|_{L²(0,1)} ds)⁸
 ≤ 𝒞 E[∫_0^T χ_{E_{τ,ε}} (|ȳ|_{L²(0,1)} + 1) ds]⁸
 ≤ 𝒞 E{(∫_0^T χ_{E_{τ,ε}} ds)^{7/8} [∫_0^T χ_{E_{τ,ε}} (|ȳ|⁸_{L²(0,1)} + 1) ds]^{1/8}}⁸
 ≤ 𝒞 ε⁷ ∫_0^T χ_{E_{τ,ε}}(s) (E|ȳ|⁸_{L²(0,1)} + 1) ds        (4.119)
 ≤ 𝒞(|ȳ|⁸_{C_F([0,T];L⁸(Ω;L²(0,1)))} + 1) ε⁷ ∫_0^T χ_{E_{τ,ε}}(s) ds
 ≤ 𝒞(y₀) ε⁸.

Here and henceforth, 𝒞(y₀) is a generic constant (depending on y₀ and T), which may be different from line to line. As (4.118), we have

 |δb(s)|_{L²(0,1)} ≤ 𝒞(|ȳ(s)|_{L²(0,1)} + 1),  a.e. s ∈ [0,T].

Hence, similar to (4.119), one has

 E(∫_0^T χ_{E_{τ,ε}} |δb|²_{L²(0,1)} ds)⁴
 ≤ 𝒞 E[∫_0^T χ_{E_{τ,ε}} (|ȳ|²_{L²(0,1)} + 1) ds]⁴
 ≤ 𝒞 E{(∫_0^T χ_{E_{τ,ε}} ds)^{3/4} [∫_0^T χ_{E_{τ,ε}} (|ȳ|⁸_{L²(0,1)} + 1) ds]^{1/4}}⁴
 ≤ 𝒞 ε³ ∫_0^T χ_{E_{τ,ε}} (E|ȳ|⁸_{L²(0,1)} + 1) ds        (4.120)
 ≤ 𝒞(|ȳ|⁸_{C_F([0,T];L⁸(Ω;L²(0,1)))} + 1) ε³ ∫_0^T χ_{E_{τ,ε}} ds
 ≤ 𝒞(y₀) ε⁴.

Therefore, combining (4.117), (4.119) and (4.120), we end up with

 |y₁^ε|⁸_{C_F([0,T];L⁸(Ω;L²(0,1)))} + |y₁^ε|⁸_{L²_F(0,T;L⁸(Ω;H¹₀(0,1)))} ≤ 𝒞(y₀) ε⁴.        (4.121)

From (4.121) and Hölder's inequality, we find that

 { |y₁^ε|⁴_{C_F([0,T];L⁴(Ω;L²(0,1))) ∩ L²_F(0,T;L⁴(Ω;H¹₀(0,1)))} ≤ 𝒞(y₀) ε²,        (4.122)
 { |y₁^ε|²_{C_F([0,T];L²(Ω;L²(0,1))) ∩ L²_F(0,T;H¹₀(0,1))} ≤ 𝒞(y₀) ε.

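The spike (needle) variation of Step 1 is easy to write down directly. The Python sketch below uses scalar-valued, hypothetical controls ū and u purely for illustration; the one point it makes is that u^ε − ū is supported on a set of measure ε, which is what drives all the O(ε^k) estimates of Step 2.

```python
import numpy as np

def spike_variation(ubar, u, tau, eps):
    """Return the function t -> u^eps(t) for the spike set E = [tau, tau+eps]."""
    def u_eps(t):
        t = np.asarray(t, dtype=float)
        # u on the spike set, the reference control ubar elsewhere
        return np.where((t >= tau) & (t <= tau + eps), u(t), ubar(t))
    return u_eps

ubar = lambda t: np.sin(np.asarray(t, dtype=float))      # hypothetical optimal control
u = lambda t: np.ones_like(np.asarray(t, dtype=float))   # hypothetical competitor
eps = 1e-2
u_eps = spike_variation(ubar, u, tau=0.5, eps=eps)

# the perturbation lives on a set of measure eps, so its L^1(0,1) size is O(eps)
ts, dt = np.linspace(0.0, 1.0, 100001, retstep=True)
l1 = float(np.sum(np.abs(u_eps(ts) - ubar(ts))) * dt)
print(l1)
```

Because the control set U need not be convex, this needle perturbation (rather than a convex combination ū + ε(u − ū)) is the only admissible variation, which is why the second-order terms and the operator P enter the maximum principle (4.108).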
( )
By a similar computation, for 𝑦𝜀2 (⋅), by (4.114), we have ≤ (𝑦0 ) |𝑦𝜀1 (𝑠)|𝐿∞ (0,1) + 𝜒𝐸𝜏,𝜀 (𝑠)
( )
⎧|𝑦𝜀 |8 ≤ (𝑦0 )𝜀4 , ≤ (𝑦0 ) |𝑦𝜀1 (𝑠)|𝐻 1 (0,1) + 𝜒𝐸𝜏,𝜀 (𝑠) , a.e. 𝑠 ∈ [0, 𝑇 ].
⎪ 2 𝐶F ([0,𝑇 ];𝐿8 (𝛺;𝐿2 (0,1)))∩𝐿2F (0,𝑇 ;𝐿8 (𝛺;𝐻01 (0,1))) 0
⎪ 𝜀4 2
⎨|𝑦2 |𝐶F ([0,𝑇 ];𝐿4 (𝛺;𝐿2 (0,1)))∩𝐿2 (0,𝑇 ;𝐿4 (𝛺;𝐻 1 (0,1))) ≤ (𝑦0 )𝜀 , (4.123) It follows from (4.123) and (4.129) that
⎪ 𝜀2 F 0
[ 𝑇 ( ]2
⎪|𝑦2 |𝐶 ([0,𝑇 ];𝐿2 (𝛺;𝐿2 (0,1)))∩𝐿2 (0,𝑇 ;𝐻 1 (0,1)) ≤ (𝑦0 )𝜀. )
⎩ F F 0 E | 𝑎𝜀1 − 𝑎1 𝑦𝜀2 |𝐿2 (0,1) 𝑑𝑠
∫0
Furthermore, by Lemma 4.5, we have that 𝑇( )1∕2
≤ |𝑦𝜀2 |2𝐶 ([0,𝑇 ];𝐿4 (𝛺;𝐿2 (0,1))) E|𝑎𝜀1 − 𝑎1 |4𝐿∞ (0,1) 𝑑𝑠 (4.130)
|𝑦𝜀2 |2 ∞ ≤ (𝑦0 , 𝜏)𝜀. (4.124) F ∫0
𝐿F (0,𝑇 ;𝐿2 (𝛺;𝐻01 (0,1))) 𝑇( ( )1∕2 )
≤ (𝑦0 )𝜀 𝜒𝐸𝜏,𝜀 + E|𝑦𝜀1 |4𝐿∞ (0,1) 𝑑𝑡
Here and in what follows, we use (𝑦0 , 𝜏) to denote a positive constant ∫0
depending on 𝜏, 𝑦0 and 𝑇 . 𝑇 ( ) 1∕2
Similar to (4.119), we have ≤ (𝑦0 )𝜀 𝜒𝐸𝜏,𝜀 + E|𝑦𝜀1 |4 1 𝑑𝑡
∫0 𝐻0 (0,1)
( 𝑇 )4 𝑇( ( )1∕2 )
E 𝜒𝐸𝜏,𝜀 (𝑠)|𝛿𝑎(𝑠)|𝐿2 (0,1) 𝑑𝑠 ≤ (𝑦0 )𝜀4 . ≤ (𝑦0 )𝜀 𝜒𝐸𝜏,𝜀 + E|𝑦𝜀1 |4 1 𝑑𝑡
∫0 ∫0 𝐻0 (0,1)

Hence, applying Theorem B.7 to (4.115), it follows from (4.123) that ≤ (𝑦0 )𝜀2 .

sup E|𝑦𝜀3 (𝑡)|4𝐿2 (0,1) Similar to (4.119), we obtain that


𝑡∈[0,𝑇 ] ( 𝑇 )2
{ [ 𝑇( ) ]4 𝜒𝐸𝜏,𝜀 |𝛿𝑎|𝐿2 (0,1) 𝑑𝑠 ≤ (𝑦0 )𝜀2 . (4.131)
| | E
≤ E 𝜒𝐸𝜏,𝜀 |𝛿𝑎|𝐿2 (0,1) + |𝑎11 |𝑦𝜀2 |2 | 2 𝑑𝑠 ∫0
∫0 | |𝐿 (0,1)
[ 𝑇( ) ]2 } As (4.129), we have for a.e. 𝑠 ∈ [0, 𝑇 ],
| |2 ( )
+E 𝜒𝐸𝜏,𝜀|𝛿𝑏1 𝑦𝜀2 |2𝐿2 (0,1) + |𝑏11 |𝑦𝜀2 |2 | 2 𝑑𝑠
∫0 | |𝐿 (0,1) |𝑏𝜀1 (𝑠) − 𝑏1 (𝑠)|𝐿∞ (0,1) ≤  |𝑦𝜀1 (𝑠)|𝐻 1 (0,1) + 𝜒𝐸𝜏,𝜀 (𝑠) .
[( 𝑇 )4 𝑇 0
≤ E 𝜒𝐸𝜏,𝜀 |𝛿𝑎|𝐿2 (0,1) 𝑑𝑠 + |𝑦𝜀2 |8𝐿2 (0,1) 𝑑𝑠
∫0 ∫0 Hence, similar to (4.130), one gets that
𝑇 ] 𝑇 ( ) 2
| |2
+| |𝜒𝐸𝜏,𝜀 𝑦𝜀2 |2 2 𝑑𝑡| (4.125) | 𝑏𝜀1 − 𝑏1 𝑦𝜀2 |𝐿2 (0,1) 𝑑𝑠
| ∫0 𝐿 (0,1) |
E
∫0
[ ( 𝑇 𝑇 ) ] 𝑇
(4.132)
| |1∕2 | |1∕2 2 |𝑏𝜀1 − 𝑏1 |2𝐿∞ (0,1) |𝑦𝜀2 |2𝐿2 (0,1) 𝑑𝑠
≤  𝜀4 + E | 𝜒𝐸𝜏,𝜀 𝑑𝑠| | 𝜒𝐸𝜏,𝜀 |𝑦𝜀2 |4𝐿2 (0,1) 𝑑𝑠| ≤ E
∫0
|∫0 | |∫0 |
( 𝑇 ) ≤ (𝑦0 )𝜀2 .
≤  𝜀4 + 𝜀 𝜒𝐸𝜏,𝜀 E|𝑦𝜀2 |4𝐿2 (0,1) 𝑑𝑠
∫0 Combining (4.128) and (4.130)–(4.132), we find that
4
≤ (𝑦0 )𝜀 . |𝑦𝜀4 |𝐶 ≤ (𝑦0 )𝜀. (4.133)
2 2
F ([0,𝑇 ];𝐿 (𝛺;𝐿 (0,1)))
Then, by Hölder’s inequality, we conclude that Step 4. We are now in a position to estimate
|𝑦𝜀3 (⋅)|2𝐶 ([0,𝑇 ];𝐿2 (𝛺;𝐿2 (0,1))) 2
≤ (𝑦0 )𝜀 . (4.126) E|𝑦𝜀1 (𝑡) − 𝑦𝜀2 (𝑡) − 𝑦𝜀3 (𝑡)|2𝐿2 (0,1) = E|𝑦𝜀4 (𝑡) − 𝑦𝜀3 (𝑡)|2𝐿2 (0,1) .
F

𝛥 𝛥
Step 3. We now estimate 𝑦𝜀4 = 𝑦𝜀1 − 𝑦𝜀2 . Clearly, 𝑦𝜀4 solves Let 𝑦𝜀5 (⋅) = 𝑦𝜀4 (⋅) − 𝑦𝜀3 (⋅). It is clear that
[ ( ) ]
⎧ 𝑑𝑦𝜀4 = 𝑦𝜀4,𝑥𝑥 + 𝑎𝜀1 𝑦𝜀4 + 𝑎𝜀1 − 𝑎1 𝑦𝜀2 + 𝜒𝐸𝜏,𝜀 𝛿𝑎 𝑑𝑡 𝑦𝜀5 (⋅) = 𝑦𝜀1 (𝑡) − 𝑦𝜀2 (𝑡) − 𝑦𝜀3 (𝑡) = 𝑦𝜀 (⋅) − 𝑦(⋅)
̄ − 𝑦𝜀2 (𝑡) − 𝑦𝜀3 (𝑡).
⎪ [ 𝜀 𝜀 ( 𝜀 ) 𝜀]
+ 𝑏1 𝑦4 + 𝑏1 − 𝑏1 𝑦2 𝑑𝑊 (𝑡)
⎪ By (4.110) and (4.112)–(4.115), the drift term in the equation solved
⎨ in (0, 𝑇 ] × (0, 1), (4.127) by 𝑦𝜀5 (⋅) is
⎪ 𝑦𝜀4 = 0 on (0, 𝑇 ] × {0, 1},
⎪ 𝑦𝜀𝑥𝑥 + 𝑎(𝑡, 𝑦𝜀 , 𝑢𝜀 ) − 𝑦̄𝑥𝑥 − 𝑎(𝑡, 𝑦, ̄ − 𝑦𝜀2,𝑥𝑥 − 𝑎1 (𝑡)𝑦𝜀2
̄ 𝑢)
⎩ 𝑦𝜀4 (0) = 0 in (0, 1).
1 ( )2
Applying Theorem B.7 to (4.127), by (B4), we have −𝑦𝜀3,𝑥𝑥 − 𝑎1 (𝑡)𝑦𝜀3 − 𝜒𝐸𝜏,𝜀 (𝑡)𝛿𝑎(𝑡) − 𝑎11 (𝑡) 𝑦𝜀2
2
sup E|𝑦𝜀4 (𝑡)|2𝐿2 (0,1) + |𝑦𝜀4 |𝐿2 (0,𝑇 ;𝐻 1 (0,1)) = 𝑦𝜀5,𝑥𝑥 + 𝑎(𝑡, 𝑦𝜀 , 𝑢𝜀 ) − 𝑎(𝑡, 𝑦,
̄ 𝑢𝜀 ) − 𝑎1 (𝑡)(𝑦𝜀2 + 𝑦𝜀3 )
𝑡∈[0,𝑇 ]
{ 𝑇 [ (
F 0
1 ( )2
| ) 𝜀 ] |2 − 𝑎11 (𝑡) 𝑦𝜀2 . (4.134)
≤ E| | 𝑎1 − 𝑎1 𝑦2 |𝐿2 (0,1) + 𝜒𝐸𝜏,𝜀 |𝛿𝑎|𝐿2 (0,1) 𝑑𝑠|
𝜀
(4.128) 2
| ∫0 |
𝑇 ( } For 𝜑 = 𝑎, 𝑏, let
) 𝜀2
+E | 𝑏1 − 𝑏1 𝑦2 |𝐿2 (0,1) 𝑑𝑠 .
𝜀
∫0 ⎧ 𝜀 𝛥
1
⎪𝜑11 (𝑡) = 2 ̄ + 𝜎𝑦𝜀1 (𝑡), 𝑢𝜀 (𝑡))𝑑𝜎,
(1 − 𝜎)𝜑𝑦𝑦 (𝑡, 𝑦(𝑡)
By (B4) again, we get that ⎨ ∫ 0
⎪𝛿𝜑 (𝑡) = 𝛥
𝜑𝑦𝑦 (𝑡, 𝑦(𝑡),
̄ 𝑢(𝑡)) − 𝜑𝑦𝑦 (𝑡, 𝑦(𝑡), 𝑢(𝑡)).
⎩ 11
|𝑎𝜀1 (𝑠) − 𝑎1 (𝑠)|𝐿∞ (0,1)
1( (
For 𝜎 ∈ [0, 1], write 𝑓 (𝜎) = 𝑎(𝑡, 𝑦̄ + 𝜎𝑦𝜀1 , 𝑢𝜀 ). Then, by Taylor’s formula
| ) ) |
=| ̄ + 𝜎𝑦𝜀1 (𝑠), 𝑢𝜀 (𝑠) − 𝑎𝑦 (𝑠, 𝑦(𝑠),
𝑎𝑦 𝑠, 𝑦(𝑠) ̄ 𝑢(𝑠))
̄ 𝑑𝜎 | ∞ (4.129) with the integral type remainder, we see that
| ∫0 |𝐿 (0,1)
1
1( ( )
| 𝑓 (1) − 𝑓 (0) = 𝑓 ′ (0) + (1 − 𝜎)𝑓 ′′ (𝜎)𝑑𝜎.
=| ̄ + 𝜎𝑦𝜀1 (𝑠), 𝑢𝜀 (𝑠) − 𝑎𝑦 (𝑠, 𝑦(𝑠),
𝑎𝑦 𝑠, 𝑦(𝑠) ̄ 𝑢𝜀 (𝑠)) ∫0
| ∫0
) | Since 𝑓 ′ (𝜎) = 𝑎𝑦 (𝑡, 𝑦̄ + 𝜎𝑦𝜀1 , 𝑢𝜀 )𝑦𝜀1 and 𝑓 ′′ (𝜎) = 𝑎𝑦𝑦 (𝑡, 𝑦̄ + 𝜎𝑦𝜀1 , 𝑢𝜀 )(𝑦𝜀1 )2 , we
+𝑎𝑦 (𝑠, 𝑦(𝑠),
̄ 𝑢𝜀 (𝑠)) − 𝑎𝑦 (𝑠, 𝑦(𝑠),
̄ 𝑢(𝑠))
̄ 𝑑𝜎 | ∞
|𝐿 (0,1) obtain that
1( 1 ( )
|
=| 𝜎 ̄ + 𝜂𝜎𝑦𝜀1 (𝑠), 𝑢𝜀 (𝑠) 𝑦𝜀1 (𝑠)𝑑𝜂
𝑎 𝑠, 𝑦(𝑠) 𝑎(𝑡, 𝑦𝜀 , 𝑢𝜀 ) − 𝑎(𝑡, 𝑦,
̄ 𝑢𝜀 )
| ∫0 ∫0 𝑦𝑦
) 1
| ̄ 𝑢𝜀 )𝑦𝜀1 + (1 − 𝜎)𝑎𝑦𝑦 (𝑡, 𝑦̄ + 𝜎𝑦𝜀1 , 𝑢𝜀 )(𝑦𝜀1 )2 𝑑𝜎
+𝜒𝐸𝜏,𝜀 (𝑠)𝛿𝑎1 (𝑠) 𝑑𝜎 | ∞ = 𝑎𝑦 (𝑡, 𝑦,
|𝐿 (0,1) ∫0

298
Q. Lü and X. Zhang Annual Reviews in Control 51 (2021) 268–330

1 𝜀
̄ 𝑢𝜀 )𝑦𝜀1 +
= 𝑎𝑦 (𝑡, 𝑦, 𝑎 (𝑡)(𝑦𝜀1 )2 . (4.135) We now estimate the ‘‘drift’’ terms in the right hand side of (4.139).
2 11
By (4.122) and (B4), we have the following estimate:
Next,
𝑇 ( ) 2
| | | |
̄ 𝑢 )𝑦𝜀1 − 𝑎1 (𝑡)(𝑦𝜀2 + 𝑦𝜀3 )
𝑎𝑦 (𝑡, 𝑦, 𝜀 E| 𝜒𝐸𝜏,𝜀 |𝛿𝑎1 𝑦𝜀1 |𝐿2 (0,1) + |𝛿𝑎11 |𝑦𝜀1 |2 | 2 𝑑𝑠|
| ∫0 | |𝐿 (0,1) |
( )
̄ 𝑢𝜀 )𝑦𝜀1 − 𝑎𝑦 (𝑡, 𝑦,
= 𝑎𝑦 (𝑡, 𝑦, ̄ 𝜀1
̄ 𝑢)𝑦+ 𝑎1 (𝑡) 𝑦𝜀1 − 𝑦𝜀2 − 𝑦𝜀3 [ 𝑇 𝑇 ( 𝜀2 ) ]
[ ] ≤ E 𝜒𝐸𝜏,𝜀 𝑑𝑠 𝜒𝐸𝜏,𝜀 |𝑦1 |𝐿2 (0,1) + |𝑦𝜀1 |4𝐿2 (0,1) 𝑑𝑠
= 𝜒𝐸𝜏,𝜀 𝑎𝑦 (𝑡, 𝑦,
̄ 𝑢) − 𝑎𝑦 (𝑡, 𝑦, ̄ 𝑦𝜀1 + 𝑎1 (𝑡)𝑦𝜀5
̄ 𝑢) ∫0 ∫0
𝑇 ( )
= 𝜒𝐸𝜏,𝜀 𝛿𝑎1 (𝑡)𝑦𝜀1 + 𝑎1 (𝑡)𝑦𝜀5 . (4.136) ≤ 𝜀 𝜒𝐸𝜏,𝜀 E|𝑦𝜀1 |2𝐿2 (0,1) + E|𝑦𝜀1 |4𝐿2 (0,1) 𝑑𝑠 (4.140)
∫0
( )
Further, ≤ 𝜀 |𝑦𝜀1 |2𝐶 ([0,𝑇 ];𝐿2 (𝛺;𝐿2 (0,1))) + |𝑦𝜀1 |4𝐶 ([0,𝑇 ];𝐿4 (𝛺;𝐿2 (0,1)))
F F
1 𝜀 1
𝑎 (𝑡)(𝑦𝜀1 )2 − 𝑎11 (𝑡)(𝑦𝜀2 )2 𝑇
2 11 2 × 𝜒𝐸𝜏,𝜀 𝑑𝑠 ≤ (𝑦0 )𝜀3 .
1 𝜀 𝜀 2 1 1 ∫0
= 𝑎11 (𝑦1 ) − 𝑎𝑦𝑦 (𝑡, 𝑥, ̄ 𝑢𝜀 )(𝑦𝜀1 )2 + 𝑎𝑦𝑦 (𝑡, 𝑥,
̄ 𝑢𝜀 )(𝑦𝜀1 )2
2 2 2 Recalling that 𝑦𝜀1 (⋅) = 𝑦𝜀 (⋅) − 𝑦(⋅),
̄ we see that, for a.e. 𝑠 ∈ [0, 𝑇 ],
1 1 1
− 𝑎11 (𝑡)(𝑦𝜀1 )2 + 𝑎11 (𝑡)(𝑦𝜀1 )2 − 𝑎11 (𝑡)(𝑦𝜀2 )2 | 𝜀 |
2 2 2 |𝑎11 (𝑠) − 𝑎𝑦𝑦 (𝑠, 𝑦(𝑠),
̄ 𝑢𝜀 (𝑠))| ∞
( ) | |𝐿 (0,1)
1 𝜀 1
= ̄ 𝑢𝜀 ) (𝑦𝜀1 )2 + 𝜒𝐸𝜏,𝜀 𝛿𝑎11 (𝑡)(𝑦𝜀1 )2
𝑎11 (𝑡) − 𝑎𝑦𝑦 (𝑡, 𝑦, |
1 ( )
2 2 = |2 ̄ + 𝜎𝑦𝜀1 (𝑠), 𝑢𝜀 (𝑠) 𝑑𝜎
(1 − 𝜎)𝑎𝑦𝑦 𝑠, 𝑦(𝑠)
1 𝜀 2 1 𝜀 2
| ∫0
+ 𝑎11 (𝑡)(𝑦1 ) − 𝑎11 (𝑡)(𝑦2 ) . (4.137) |
2 2 −𝑎𝑦𝑦 (𝑠, 𝑦(𝑠),
̄ 𝑢𝜀 (𝑠))| ∞
|𝐿 (0,1)
By (4.134)–(4.137), we conclude that ( (
|
1 )
𝑦𝜀𝑥𝑥 + 𝑎(𝑡, 𝑦𝜀 , 𝑢𝜀 ) − 𝑦̄𝑥𝑥 − 𝑎(𝑡, 𝑦, ̄ − 𝑦𝜀2,𝑥𝑥 − 𝑎1 (𝑡)𝑦𝜀2 = |2 ̄ + 𝜎𝑦𝜀1 (𝑠), 𝑢(𝑠)
(1 − 𝜎) 𝑎𝑦𝑦 𝑠, 𝑦(𝑠) ̄
̄ 𝑢) | ∫0
1 ( )2 )
−𝑦𝜀3,𝑥𝑥 − 𝑎1 (𝑡)𝑦𝜀3 − 𝜒𝐸𝜏,𝜀 (𝑡)𝛿𝑎(𝑡) − 𝑎11 (𝑡) 𝑦𝜀2 − 𝑎𝑦𝑦 (𝑠, 𝑦(𝑠),
̄ 𝑢(𝑠))
̄ 𝑑𝜎 (4.141)
2
= 𝑦𝜀5,𝑥𝑥 + 𝑎1 (𝑡)𝑦𝜀5 + 𝜒𝐸𝜏,𝜀 (𝑡)𝛿𝑎1 (𝑡)𝑦𝜀1 ( (
1 )
1 [( 𝜀 )( )2 ( )2 +2 (1 − 𝜎)𝜒𝐸𝜏,𝜀 (𝑠) 𝑎𝑦𝑦 𝑠, 𝑦(𝑠) ̄ + 𝜎𝑦𝜀1 (𝑠), 𝑢(𝑠)
+ ̄ 𝑢𝜀 ) 𝑦𝜀1 + 𝜒𝐸𝜏,𝜀 (𝑡)𝛿𝑎11 (𝑡) 𝑦𝜀1
𝑎11 (𝑡) − 𝑎𝑦𝑦 (𝑡, 𝑥, ∫0
2 ( 𝜀 )2 ( 𝜀 )2 ] ( ))
+𝑎11 (𝑡) 𝑦1 − 𝑎11 (𝑡) 𝑦2 . ̄ + 𝜎𝑦𝜀1 (𝑠), 𝑢(𝑠)
−𝑎𝑦𝑦 𝑠, 𝑦(𝑠) ̄ 𝑑𝜎
Similarly, the diffusion term (in the equation of 𝑦𝜀5 (⋅)) is ( )|
−𝜒𝐸𝜏,𝜀(𝑠) 𝑎𝑦𝑦 (𝑠, 𝑦(𝑠),
̄ 𝑢(𝑠))−𝑎𝑦𝑦 (𝑠, 𝑦(𝑠),
̄ 𝑢(𝑠))
̄ | ∞
|𝐿 (0,1)
𝑏(𝑡, 𝑦𝜀 , 𝑢𝜀 ) − 𝑏(𝑡, 𝑦, ̄ − 𝑏1 (𝑡)𝑦𝜀2 − 𝑏1 (𝑡)𝑦𝜀3
̄ 𝑢) ( 1 ( )
| 𝜀
1 ( )2 ≤ |𝑎 𝑠, 𝑦(𝑠)̄ + 𝜎𝑦1 (𝑠), 𝑢(𝑠)̄
−𝜒𝐸𝜏,𝜀 (𝑡)𝛿𝑏(𝑡)𝑦𝜀2 − 𝑏11 (𝑡) 𝑦𝜀2 ∫0 | 𝑦𝑦
2 )
|
= 𝑏1 (𝑡)𝑦𝜀5 + 𝜒𝐸𝜏,𝜀 (𝑡)𝛿𝑏1 (𝑡)𝑦𝜀4 − 𝑎𝑦𝑦 (𝑠, 𝑦(𝑠),
̄ ̄ | ∞
𝑢(𝑠)) 𝑑𝜎 + 𝜒𝐸𝜏,𝜀 (𝑠) .
|𝐿 (0,1)
1 [( 𝜀 )( )2
Hence, by (4.121) and noting the continuity of 𝑎𝑦𝑦 (𝑡, 𝑦, 𝑢) with respect
+ ̄ 𝑢𝜀 ) 𝑦𝜀1
𝑏11 (𝑡) − 𝑏𝑥𝑥 (𝑡, 𝑥,
2 to 𝑦, we have
( )2 ( )2 ( )2 ]
+ 𝜒𝐸𝜏,𝜀 (𝑡)𝛿𝑏11 (𝑡) 𝑦𝜀1 + 𝑏11 (𝑡) 𝑦𝜀1 − 𝑏11 (𝑡) 𝑦𝜀2 . [ 𝑇 ( ]2
| 𝜀 ) |
E ̄ 𝑢𝜀 ) |𝑦𝜀1 |2 | 2
| 𝑎 − 𝑎𝑦𝑦 (𝑠, 𝑦, 𝑑𝑠
This verifies that 𝑦𝜀5 (⋅) solves the following equation: ∫0 | 11 |𝐿 (0,1)
[ | 𝜀
𝑇
|2
⎧ 𝑑𝑦𝜀 = 𝑦𝜀 + 𝑎 𝑦𝜀 + 𝜒 𝛿𝑎 𝑦𝜀 ≤ E ̄ 𝑢𝜀 )| ∞
|𝑎 − 𝑎𝑦𝑦 (𝑠, 𝑦, |𝑦𝜀 |4 𝑑𝑡
⎪ 5 5,𝑥𝑥 1 5 𝐸𝜏,𝜀 1 1 ∫0 | 11 |𝐿 (0,1) 1 𝐿2 (0,1)
⎪ 1( )( )2 1 ( )2
+ 𝑎𝜀11 −𝑎𝑦𝑦 (𝑡, 𝑦, ̄ 𝑢𝜀 ) 𝑦𝜀1 + 𝜒𝐸𝜏,𝜀𝛿𝑎11 𝑦𝜀1 ≤ |𝑦𝜀1 |4𝐶 (4.142)
⎪ 2 2] 8 2
F ([0,𝑇 ];𝐿 (𝛺;𝐿 (0,1)))
⎪ 1 ( ) 1 ( )
2
+ 𝑎11 (𝑡) 𝑦𝜀1 − 𝑎11 (𝑡) 𝑦𝜀2
2
𝑑𝑡 𝑇( )1∕2
⎪ 2[ | |4

2 × E|𝑎𝜀11 − 𝑎𝑦𝑦 (𝑠, 𝑦,̄ 𝑢𝜀 )| ∞ 𝑑𝑠
𝜀
+ 𝑏1 𝑦5 + 𝜒𝐸𝜏,𝜀 𝛿𝑏1 𝑦4 𝜀 ∫0 | |𝐿 (0,1)
⎪ 𝑇(
1( )( )2 1 ( )2 | ( )
(4.138) 1

⎪ + 𝑏𝜀11 −𝑏𝑥𝑥 (𝑡, 𝑥, ̄ 𝑢𝜀 ) 𝑦𝜀1 + 𝜒𝐸𝜏,𝜀𝛿𝑏11 𝑦𝜀1 ≤ (𝜂)𝜀2 E |𝑎 𝑠, 𝑦̄ + 𝜎𝑦𝜀1 , 𝑢̄
2 ] 2 ∫0 ∫0 | 𝑦𝑦
⎪ 1 ( )2 1 ( )2 )1∕2
⎪ + 𝑏11 𝑦𝜀1 − 𝑏11 𝑦𝜀3 𝑑𝑊 (𝑡) |4
⎪ 2 2 −𝑎𝑦𝑦 (𝑠, 𝑦, ̄| ∞
̄ 𝑢) 𝑑𝜎 + 𝜒𝐸𝜏,𝜀 𝑑𝑠
in (0, 𝑇 ] × (0, 1), |𝐿 (0,1)
⎪ 2
⎪ 𝑦𝜀 = 0 on (0, 𝑇 ] × {0, 1}, = 𝑜(𝜀 ), as 𝜀 → 0.
⎪ 𝜀 5
⎩ 𝑦5 (0) = 0 in (0, 1).
By means of (4.122), (4.123) and (4.133), and noting that 𝑦𝜀4 =
Applying Theorem B.7 to (4.138), we see that 𝑦𝜀1 − 𝑦𝜀2 , we obtain that
( 𝑇 )2
sup E|𝑦𝜀5 (𝑡)|2𝐿2 (0,1) | |
E |𝑎 |𝑦𝜀 |2 − 𝑎11 |𝑦𝜀2 |2 | 2 𝑑𝑠
𝑡∈[0,𝑇 ] ∫0 | 11 1 |𝐿 (0,1)
{ 𝑇[ ( ) ( 𝑇 )2
| | | | |
≤ E | 𝜒𝐸𝜏,𝜀 |𝛿𝑎1 𝑦𝜀1 |𝐿2 (0,1) + |𝛿𝑎11 |𝑦𝜀1 |2 | 2 |𝑎 𝑦𝜀 𝑦𝜀 + 𝑎11 𝑦𝜀2 𝑦𝜀4 | 2 (4.143)
|∫0 | |𝐿 (0,1) =E
∫0 | 11 1 4 |𝐿 (0,1)
𝑑𝑠
|( ) | ( )
+| 𝑎𝜀11 − 𝑎𝑦𝑦 (𝑠, 𝑦(𝑠),
̄ 𝑢𝜀 (𝑠)) |𝑦𝜀1 |2 | 2 ≤  |𝑦𝜀1 |2𝐶 ([0,𝑇 ];𝐿2 (𝛺;𝐿2 (0,1))) + |𝑦𝜀2 |2𝐶 ([0,𝑇 ];𝐿2 (𝛺;𝐿2 (0,1)))
| |𝐿 (0,1)
] F F
| 𝜀 2 𝜀 2| |2 × |𝑦𝜀4 |2𝐶
+|𝑎11 |𝑦1 | − 𝑎11 |𝑦2 | | 2 𝑑𝑠| 2 2
| |𝐿 (0,1) | F ([0,𝑇 ];𝐿 (𝛺;𝐿 (0,1)))
𝑇[ ( ) ≤ (𝑦0 )𝜀 . 3
| |2
+ 𝜒𝐸𝜏,𝜀 |𝛿𝑏1 𝑦4 |𝐿2 (0,1)+ |𝛿𝑏11 |𝑦𝜀1 |2 | 2
𝜀 2
∫0 | |𝐿 (0,1) Next, we estimate the ‘‘diffusion’’ terms in the right hand side of
|( ) |2 (4.139). Similar to (4.140) and noting (4.133), we obtain that
+| 𝑏𝜀11 − 𝑏𝑥𝑥 (𝑠, 𝑥(𝑠),
̄ 𝑢𝜀 (𝑠)) |𝑦𝜀1 |2 | 2
| |𝐿 (0,1)
] } 𝑇 ( )
| |2 2
𝜒𝐸𝜏,𝜀 |𝛿𝑏1 𝑦𝜀4 |2𝐿2 (0,1) + |𝛿𝑏11 (𝑠)|𝑦𝜀1 |2 |𝐿2 (0,1) 𝑑𝑠
+|𝑏11 |𝑦𝜀1 |2 − 𝑏11 |𝑦𝜀2 |2 | 2 𝑑𝑠 . (4.139) E
| |𝐿 (0,1) ∫0

299
Q. Lü and X. Zhang Annual Reviews in Control 51 (2021) 268–330

𝑇 ( ) Similarly to (4.141), we find that, for a.e. 𝑡 ∈ [0, 𝑇 ],


≤ 𝜒𝐸𝜏,𝜀 E|𝑦𝜀4 |2𝐿2 (0,1) + E|𝑦𝜀1 |4𝐿2 (0,1) 𝑑𝑠
∫0
|
1 ( ( )
≤ (𝑦0 )𝜀3 . (4.144) | (1 − 𝜎) 𝑔𝑦𝑦 𝑡, 𝑦̄ + 𝜎𝑦𝜀1 , 𝑢𝜀
| ∫0
( )) |
̄ 𝑢𝜀
−𝑔𝑦𝑦 𝑡, 𝑦, 𝑑𝜎 | ∞
By virtue of (4.121) again, similar to (4.142), we find that |𝐿 (0,1) (4.149)
( 1 ( )
|
𝑇
|( 𝜀 ) |2 ≤ |𝑔 𝑡, 𝑦̄ + 𝜎𝑦𝜀1 , 𝑢̄
E | 𝑏11 (𝑠) − 𝑏𝑦𝑦 (𝑠, 𝑦(𝑠),
̄ 𝑢𝜀 (𝑠)) 𝑦𝜀1 (𝑠)2 | 2 𝑑𝑠 ∫0 | 𝑦𝑦 )
∫0 | |𝐿 (0,1) ( )|
̄ 𝑢̄ | ∞
−𝑔𝑦𝑦 𝑡, 𝑦, 𝑑𝜎 + 𝜒𝐸𝜏,𝜀 (𝑡) ,
|𝐿 (0,1)
≤ |𝑦𝜀1 |4𝐶 8 2 (4.145)
F ([0,𝑇 ];𝐿 (𝛺;𝐿 (0,1)))
𝑇( )1∕2 and
| |4 ( ( ) ( )) |
× E|𝑏𝜀11 (𝑠)−𝑏𝑦𝑦 (𝑠, 𝑦(𝑠),
̄ 𝑢𝜀 (𝑠))| ∞ 𝑑𝑠 |
1
∫0 | |𝐿 (0,1) | (1−𝜎) ℎ𝑦𝑦 𝑦(𝑇 ̄ )+𝜎𝑦𝜀1 (𝑇 ) −ℎ𝑦𝑦 𝑦(𝑇 ̄ ) 𝑑𝜎 |
| ∫0 |𝐿∞ (0,1)
𝑇 (
| ( )
1
≤ (𝑦0 )𝜀2 ̄ + 𝜎𝑦𝜀1 (𝑠), 𝑢(𝑠)
|𝑏 𝑠, 𝑦(𝑠) ̄ ( 1
∫0
E
∫0 | 𝑦𝑦 | |
≤ ̄ ) + 𝜎𝑦𝜀1 (𝑇 )) − ℎ𝑦𝑦 (𝑦(𝑇
$$
\begin{aligned}
&\le\mathcal C\,\mathbf E\bigg[\int_0^1\big|h_{yy}\big(\bar y(T)+\sigma y_1^\varepsilon(T)\big)-h_{yy}\big(\bar y(T)\big)\big|_{L^\infty(0,1)}\,d\sigma\,\big|y_1^\varepsilon(T)\big|_{L^2(0,1)}^2\\
&\qquad+\int_0^T\Big(\int_0^1\big|b_{yy}\big(s,\bar y(s)+\sigma y_1^\varepsilon(s),u^\varepsilon(s)\big)-b_{yy}\big(s,\bar y(s),\bar u(s)\big)\big|_{L^\infty(0,1)}\,d\sigma+\chi_{E_{\tau,\varepsilon}}(s)\Big)^{1/2}\big|y_1^\varepsilon(s)\big|_{L^4(0,1)}^2\,ds\bigg]\\
&=o(\varepsilon^2).
\end{aligned}\tag{4.150}
$$
Similarly to (4.143), we have that
$$
\mathbf E\int_0^T\Big|b_{11}|y_1^\varepsilon|^2-b_{11}|y_2^\varepsilon|^2\Big|_{L^\infty(0,1)}\,ds\le\mathcal C(y_0)\,\varepsilon^3.\tag{4.146}
$$
From (4.139)–(4.140) and (4.142)–(4.146), we conclude that
$$
|y_5^\varepsilon|^2_{C_{\mathbf F}([0,T];L^2(\Omega;L^2(0,1)))}=o(\varepsilon^2),\qquad\text{as }\varepsilon\to0.\tag{4.147}
$$
This gives (4.116).

Step 5. We now compute $\mathcal J(u^\varepsilon(\cdot))-\mathcal J(\bar u(\cdot))$.

By the definition of $\mathcal J(\cdot)$ in (4.3), using Taylor's formula with the integral type remainder as that in (4.135), we obtain that
$$
\begin{aligned}
&\mathcal J(u^\varepsilon(\cdot))-\mathcal J(\bar u(\cdot))\\
&=\mathbf E\int_0^T\big(g(t,y^\varepsilon(t),u^\varepsilon(t))-g(t,\bar y(t),\bar u(t))\big)\,dt+\mathbf E\,h\big(y^\varepsilon(T)\big)-\mathbf E\,h\big(\bar y(T)\big)\\
&=\mathbf E\int_0^T\Big(\chi_{E_{\tau,\varepsilon}}(t)\delta g(t)+\big\langle g_y(t,\bar y(t),u^\varepsilon(t)),y_1^\varepsilon(t)\big\rangle_{L^2(0,1)}\\
&\qquad\quad+\int_0^1(1-\sigma)\big\langle g_{yy}\big(t,\bar y(t)+\sigma y_1^\varepsilon(t),u^\varepsilon(t)\big)y_1^\varepsilon(t),y_1^\varepsilon(t)\big\rangle_{L^2(0,1)}\,d\sigma\Big)dt\\
&\quad+\mathbf E\big\langle h_y(\bar y(T)),y_1^\varepsilon(T)\big\rangle_{L^2(0,1)}+\mathbf E\int_0^1(1-\sigma)\big\langle h_{yy}\big(\bar y(T)+\sigma y_1^\varepsilon(T)\big)y_1^\varepsilon(T),y_1^\varepsilon(T)\big\rangle_{L^2(0,1)}\,d\sigma.
\end{aligned}
$$
This, together with the definition of $y_j^\varepsilon(\cdot)$ ($j=1,\cdots,5$), yields that
$$
\begin{aligned}
&\mathcal J(u^\varepsilon(\cdot))-\mathcal J(\bar u(\cdot))\\
&=\mathbf E\int_0^T\Big[\chi_{E_{\tau,\varepsilon}}\delta g+\big\langle\delta g_1,y_1^\varepsilon\big\rangle_{L^2(0,1)}\chi_{E_{\tau,\varepsilon}}+\big\langle g_1,y_2^\varepsilon+y_3^\varepsilon\big\rangle_{L^2(0,1)}+\big\langle g_1,y_5^\varepsilon\big\rangle_{L^2(0,1)}\\
&\qquad\quad+\int_0^1(1-\sigma)\big\langle\big(g_{yy}(t,\bar y+\sigma y_1^\varepsilon,u^\varepsilon)-g_{yy}(t,\bar y,u^\varepsilon)\big)y_1^\varepsilon,y_1^\varepsilon\big\rangle_{L^2(0,1)}\,d\sigma\\
&\qquad\quad+\frac12\big\langle\delta g_{11}\,y_1^\varepsilon,y_1^\varepsilon\big\rangle_{L^2(0,1)}\chi_{E_{\tau,\varepsilon}}+\frac12\big\langle g_{11}y_2^\varepsilon,y_2^\varepsilon\big\rangle_{L^2(0,1)}+\frac12\big\langle g_{11}y_4^\varepsilon,y_1^\varepsilon+y_2^\varepsilon\big\rangle_{L^2(0,1)}\Big]dt\\
&\quad+\mathbf E\Big[\big\langle h_y\big(\bar y(T)\big),y_2^\varepsilon(T)+y_3^\varepsilon(T)\big\rangle_{L^2(0,1)}+\big\langle h_y\big(\bar y(T)\big),y_5^\varepsilon(T)\big\rangle_{L^2(0,1)}\\
&\qquad\quad+\frac12\big\langle h_{yy}\big(\bar y(T)\big)y_2^\varepsilon(T),y_2^\varepsilon(T)\big\rangle_{L^2(0,1)}+\frac12\big\langle h_{yy}\big(\bar y(T)\big)y_4^\varepsilon(T),y_1^\varepsilon(T)+y_2^\varepsilon(T)\big\rangle_{L^2(0,1)}\\
&\qquad\quad+\int_0^1(1-\sigma)\big\langle\big(h_{yy}\big(\bar y(T)+\sigma y_1^\varepsilon(T)\big)-h_{yy}\big(\bar y(T)\big)\big)y_1^\varepsilon(T),y_1^\varepsilon(T)\big\rangle_{L^2(0,1)}\,d\sigma\Big].
\end{aligned}\tag{4.148}
$$
By (4.148), noting (4.121), (4.123), (4.125), (4.133), (4.147), (4.149) and (4.150), and using the continuity of both $h_{yy}(\cdot)$ and $g_{yy}(t,\cdot,u)$, we end up with
$$
\begin{aligned}
&\mathcal J(u^\varepsilon(\cdot))-\mathcal J(\bar u(\cdot))\\
&=\mathbf E\int_0^T\Big(\big\langle g_1,y_2^\varepsilon+y_3^\varepsilon\big\rangle_{L^2(0,1)}+\frac12\big\langle g_{11}y_2^\varepsilon,y_2^\varepsilon\big\rangle_{L^2(0,1)}+\chi_{E_{\tau,\varepsilon}}\delta g\Big)dt\\
&\quad+\mathbf E\big\langle h_y\big(\bar y(T)\big),y_2^\varepsilon(T)+y_3^\varepsilon(T)\big\rangle_{L^2(0,1)}+\frac12\,\mathbf E\big\langle h_{yy}\big(\bar y(T)\big)y_2^\varepsilon(T),y_2^\varepsilon(T)\big\rangle_{L^2(0,1)}+o(\varepsilon).
\end{aligned}\tag{4.151}
$$
We now get rid of $y_2^\varepsilon(\cdot)$ and $y_3^\varepsilon(\cdot)$ in (4.151) by solutions to the Eqs. (4.5) and (4.25). By the definition of transposition solution to Eq. (4.5), we obtain that
$$
-\mathbf E\big\langle h_y(\bar y(T)),y_2^\varepsilon(T)\big\rangle_{L^2(0,1)}-\mathbf E\int_0^T\big\langle g_1,y_2^\varepsilon\big\rangle_{L^2(0,1)}\,dt=\mathbf E\int_0^T\big\langle Z,\delta b\big\rangle_{L^2(0,1)}\chi_{E_{\tau,\varepsilon}}(t)\,dt\tag{4.152}
$$
and
$$
\begin{aligned}
&-\mathbf E\big\langle h_y(\bar y(T)),y_3^\varepsilon(T)\big\rangle_{L^2(0,1)}-\mathbf E\int_0^T\big\langle g_1,y_3^\varepsilon\big\rangle_{L^2(0,1)}\,dt\\
&=\mathbf E\int_0^T\Big[\frac12\Big(\big\langle z,a_{11}|y_2^\varepsilon|^2\big\rangle_{L^2(0,1)}+\big\langle Z,b_{11}|y_2^\varepsilon|^2\big\rangle_{L^2(0,1)}\Big)+\chi_{E_{\tau,\varepsilon}}\Big(\big\langle z,\delta a\big\rangle_{L^2(0,1)}+\big\langle Z,\delta b_1\,y_2^\varepsilon\big\rangle_{L^2(0,1)}\Big)\Big]dt.
\end{aligned}\tag{4.153}
$$
According to (4.151)–(4.153), we conclude that, when $\varepsilon\to0$,
$$
\begin{aligned}
&\mathcal J(u^\varepsilon(\cdot))-\mathcal J(\bar u(\cdot))\\
&=\frac12\,\mathbf E\int_0^T\Big(\big\langle g_{11}y_2^\varepsilon,y_2^\varepsilon\big\rangle_{L^2(0,1)}-\big\langle z,a_{11}|y_2^\varepsilon|^2\big\rangle_{L^2(0,1)}-\big\langle Z,b_{11}|y_2^\varepsilon|^2\big\rangle_{L^2(0,1)}\Big)dt\\
&\quad+\mathbf E\int_0^T\chi_{E_{\tau,\varepsilon}}\Big(\delta g-\big\langle z,\delta a\big\rangle_{L^2(0,1)}-\big\langle Z,\delta b\big\rangle_{L^2(0,1)}\Big)dt\\
&\quad+\frac12\,\mathbf E\big\langle h_{yy}\big(\bar y(T)\big)y_2^\varepsilon(T),y_2^\varepsilon(T)\big\rangle_{L^2(0,1)}+o(\varepsilon).
\end{aligned}\tag{4.154}
$$
By the definition of the transposition solution to Eq. (4.25) (with $J$, $K$, $F$ and $P_T$ given by (4.107)), we obtain that
$$
\begin{aligned}
&-\mathbf E\big\langle h_{yy}\big(\bar y(T)\big)y_2^\varepsilon(T),y_2^\varepsilon(T)\big\rangle_{L^2(0,1)}+\mathbf E\int_0^T\big\langle\mathbb H_{yy}\big(t,\bar y(t),\bar u(t),z(t),Z(t)\big)y_2^\varepsilon,y_2^\varepsilon\big\rangle_{L^2(0,1)}\,dt\\
&=2\,\mathbf E\int_0^T\chi_{E_{\tau,\varepsilon}}\big\langle P\,b_1y_2^\varepsilon,\delta b\big\rangle_{L^2(0,1)}\,dt+\mathbf E\int_0^T\chi_{E_{\tau,\varepsilon}}\big\langle P\,\delta b,\delta b\big\rangle_{L^2(0,1)}\,dt\\
&\quad+2\,\mathbf E\int_0^T\chi_{E_{\tau,\varepsilon}}\big\langle Q\,\delta b,y_2^\varepsilon\big\rangle_{H^{-1}(0,1),H_0^1(0,1)}\,dt.
\end{aligned}\tag{4.155}
$$
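The identities (4.152)–(4.153) are instances of the duality between a variational equation and its adjoint that underlies the transposition method. In a deterministic, finite-dimensional analogue the mechanism can be checked directly: for $x'=Ax+Bu$ with $x(0)=0$ and an adjoint $p'=-A^{\top}p+q$, integrating $\tfrac{d}{dt}\langle p,x\rangle$ gives $\langle p(T),x(T)\rangle-\langle p(0),x(0)\rangle=\int_0^T(\langle q,x\rangle+\langle p,Bu\rangle)\,dt$. The sketch below verifies this identity numerically; all matrices, the control $u$ and the source $q$ are arbitrary illustrative choices, not data taken from the text.

```python
import numpy as np

# Deterministic finite-dimensional analogue of the transposition identities:
#   x' = A x + B u, x(0) = 0;   p' = -A^T p + q   (adjoint)
# imply  <p(T), x(T)> - <p(0), x(0)> = \int_0^T ( <q, x> + <p, B u> ) dt.
rng = np.random.default_rng(0)
n, m, N, T = 3, 2, 20000, 1.0
h = T / N
A = 0.5 * rng.standard_normal((n, n))      # illustrative system matrix
B = rng.standard_normal((n, m))            # illustrative control matrix

x, p = np.zeros(n), rng.standard_normal(n) # state; adjoint started forward
u = lambda t: np.array([np.sin(t), np.cos(2.0 * t)])  # arbitrary control
q = lambda t: np.array([t, 1.0, -t])                  # arbitrary source

lhs0, integral = p @ x, 0.0
for k in range(N):
    t = k * h
    integral += h * (q(t) @ x + p @ (B @ u(t)))       # left Riemann sum
    # simultaneous explicit Euler step for state and adjoint
    x, p = x + h * (A @ x + B @ u(t)), p + h * (-A.T @ p + q(t))

# duality identity, up to O(h) discretization error
assert abs((p @ x - lhs0) - integral) < 1e-2
```

In the stochastic setting of (4.152)–(4.153) the same computation is carried out with Itô's formula, which is what produces the extra terms involving $Z$ and the quadratic expressions in $y_2^\varepsilon$.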
It follows from (4.123) that
$$
\begin{aligned}
\Big|2\,\mathbf E\int_0^T\chi_{E_{\tau,\varepsilon}}\big\langle P\,b_1y_2^\varepsilon,\delta b\big\rangle_{L^2(0,1)}\,dt\Big|
&\le\mathcal C\,|y_2^\varepsilon|_{C_{\mathbf F}([0,T];L^2(\Omega;L^2(0,1)))}\Big(\mathbf E\int_0^T\chi_{E_{\tau,\varepsilon}}\,dt\Big)^{1/2}\\
&\quad\times|\delta b|_{L^2_{\mathbf F}(0,T;L^\infty(\Omega;L^\infty(0,1)))}=o(\varepsilon).
\end{aligned}\tag{4.156}
$$
By (4.124), we have that
$$
\begin{aligned}
&\Big|2\,\mathbf E\int_0^T\chi_{E_{\tau,\varepsilon}}\big\langle Q\,\delta b,y_2^\varepsilon\big\rangle_{H^{-1}(0,1),H_0^1(0,1)}\,dt\Big|
\le\mathcal C\,\mathbf E\Big(|y_2^\varepsilon|_{L^\infty(0,T;H_0^1(0,1))}\int_0^T\chi_{E_{\tau,\varepsilon}}|Q\,\delta b|_{H^{-1}(0,1)}\,dt\Big)\\
&\le\mathcal C\sqrt\varepsilon\,\Big(\mathbf E\int_0^T\chi_{E_{\tau,\varepsilon}}\,dt\Big)^{1/2}\Big(\mathbf E\int_0^T|Q\,\delta b|^2_{H^{-1}(0,1)}\,dt\Big)^{1/2}
\le\mathcal C\,\varepsilon\Big(\mathbf E\int_0^T|Q\,\delta b|^2_{H^{-1}(0,1)}\,dt\Big)^{1/2}=o(\varepsilon).
\end{aligned}\tag{4.157}
$$
By (4.154)–(4.157), we arrive at
$$
\begin{aligned}
\mathcal J(u^\varepsilon(\cdot))-\mathcal J(\bar u(\cdot))
&=\mathbf E\int_0^T\chi_{E_{\tau,\varepsilon}}\Big(\delta g-\big\langle z,\delta a\big\rangle_{L^2(0,1)}-\big\langle Z,\delta b\big\rangle_{L^2(0,1)}\Big)dt\\
&\quad-\frac12\,\mathbf E\int_0^T\chi_{E_{\tau,\varepsilon}}\big\langle P\,\delta b,\delta b\big\rangle_{L^2(0,1)}\,dt+o(\varepsilon).
\end{aligned}\tag{4.158}
$$
Step 6. We are now ready to complete the proof, i.e., we shall deduce (4.108) from (4.158). This is more or less standard.

Since $L^2(\Omega)$ is separable, for any $t\in[0,T]$, $\mathcal F_t$ is countably generated by a sequence $\{M_k\}_{k=1}^\infty\subset\mathcal F_t$, that is, for any $M\in\mathcal F_t$, there exists a subsequence $\{M_{k_j}\}_{j=1}^\infty\subset\{M_k\}_{k=1}^\infty$ such that
$$\lim_{j\to\infty}\mathbf P\big((M\setminus M_{k_j})\cup(M_{k_j}\setminus M)\big)=0.$$
Denote by $\{t_i\}_{i=1}^\infty$ the sequence of rational numbers in $(0,T)$, and by $\{v_i\}_{i=1}^\infty$ a dense subset of $U$. For each $i\in\mathbb N$, we choose $\{M_{ij}\}_{j=1}^\infty\subset\mathcal F_{t_i}$ to be a sequence which generates $\mathcal F_{t_i}$.

Fix $i,j,k\in\mathbb N$ arbitrarily. For any $\tau\in[t_i,T)$ and $\theta\in(0,T-\tau)$, write $E_\theta^i=[\tau,\tau+\theta)$. Put
$$
u^{k,\theta}_{ij}=\begin{cases}v_k,&\text{if }(t,\omega)\in E_\theta^i\times M_{ij},\\[1mm]\bar u(t,\omega),&\text{if }(t,\omega)\in([0,T]\times\Omega)\setminus(E_\theta^i\times M_{ij}).\end{cases}
$$
Clearly, $u^{k,\theta}_{ij}\in\mathcal U[0,T]$ and
$$u^{k,\theta}_{ij}(t,\omega)-\bar u(t,\omega)=\big(v_k-\bar u(t,\omega)\big)\chi_{M_{ij}}(\omega)\chi_{E_\theta^i}(t),\qquad(t,\omega)\in[0,T]\times\Omega.$$
From (4.158), it follows that
$$
\begin{aligned}
&\mathbf E\int_\tau^{\tau+\theta}\Big(\mathbb H\big(t,\bar y(t),\bar u(t),z(t),Z(t)\big)-\mathbb H\big(t,\bar y(t),u^{k,\theta}_{ij}(t),z(t),Z(t)\big)\\
&\qquad\quad-\frac12\Big\langle P(t)\big(b(t,\bar y(t),\bar u(t))-b(t,\bar y(t),u^{k,\theta}_{ij}(t))\big),\,b(t,\bar y(t),\bar u(t))-b(t,\bar y(t),u^{k,\theta}_{ij}(t))\Big\rangle_{L^2(0,1)}\Big)dt\\
&\ge o(\theta).
\end{aligned}\tag{4.159}
$$
Divide both sides of (4.159) by $\theta$ and let $\theta\to0^+$; by the property of Lebesgue points, we deduce that for any $i,j,k\in\mathbb N$, there exists a Lebesgue measurable set $E^k_{i,j}\subset[t_i,T)$ with Lebesgue measure $\mathbf m(E^k_{i,j})=0$ such that
$$
\begin{aligned}
&\mathbf E\Big[\Big(\mathbb H\big(t,\bar y(t),\bar u(t),z(t),Z(t)\big)-\mathbb H\big(t,\bar y(t),v_k,z(t),Z(t)\big)\\
&\qquad-\frac12\Big\langle P(t)\big(b(t,\bar y(t),\bar u(t))-b(t,\bar y(t),v_k)\big),\,b(t,\bar y(t),\bar u(t))-b(t,\bar y(t),v_k)\Big\rangle_{L^2(0,1)}\Big)\chi_{M_{ij}}\Big]\ge0,\\
&\hspace{9cm}\forall\,t\in[t_i,T)\setminus E^k_{i,j}.
\end{aligned}
$$
Let $E_0=\bigcup_{i,j,k\in\mathbb N}E^k_{i,j}$. Then its Lebesgue measure $\mathbf m(E_0)=0$, and for any $i,j,k\in\mathbb N$ and $t\in[t_i,T)\setminus E_0$,
$$
\begin{aligned}
&\mathbf E\Big[\Big(\mathbb H\big(t,\bar y(t),\bar u(t),z(t),Z(t)\big)-\mathbb H\big(t,\bar y(t),v_k,z(t),Z(t)\big)\\
&\qquad-\frac12\Big\langle P(t)\big(b(t,\bar y(t),\bar u(t))-b(t,\bar y(t),v_k)\big),\,b(t,\bar y(t),\bar u(t))-b(t,\bar y(t),v_k)\Big\rangle_{L^2(0,1)}\Big)\chi_{M_{ij}}\Big]\ge0.
\end{aligned}\tag{4.160}
$$
By the construction of $\{M_{ij}\}_{j=1}^\infty$, the right continuity of the filtration $\mathbf F$ and the density of $\{v_k\}_{k=1}^\infty$, we conclude from (4.160) that for a.e. $(t,\omega)\in[0,T]\times\Omega$ and all $\rho\in U$,
$$
\begin{aligned}
&\mathbb H\big(t,\bar y(t),\bar u(t),z(t),Z(t)\big)-\mathbb H\big(t,\bar y(t),\rho,z(t),Z(t)\big)\\
&\quad-\frac12\Big\langle P(t)\big(b(t,\bar y(t),\bar u(t))-b(t,\bar y(t),\rho)\big),\,b(t,\bar y(t),\bar u(t))-b(t,\bar y(t),\rho)\Big\rangle_{L^2(0,1)}\ge0.
\end{aligned}\tag{4.161}
$$
Consequently, we get (4.108). This completes the proof of Theorem 4.3. □

Theorem 4.3 gives a necessary condition for optimal controls of Problem (OP), which is also sufficient under some further assumptions, to be presented below.

To give a sufficient condition for optimal controls of Problem (OP), for a given $u(\cdot)\in\mathcal U[0,T]$, let $y(\cdot)$ be the corresponding state, $(z(\cdot),Z(\cdot))$ the transposition solution to (4.5) with $(\bar y(\cdot),\bar u(\cdot))$ replaced by $(y(\cdot),u(\cdot))$, and $(P(\cdot),Q(\cdot))$ the transposition solution to Eq. (4.25) in which $F(\cdot)$, $J(\cdot)$, $K(\cdot)$ and $P_T$ are given by
$$
\begin{cases}
F(t)=-\mathbb H_{yy}\big(t,y(t),u(t),z(t),Z(t)\big),\\
J(t)=a_y(t,y(t),u(t)),\\
K(t)=b_y(t,y(t),u(t)),\\
P_T=-h_{yy}\big(y(T)\big).
\end{cases}\tag{4.162}
$$
Put
$$
\begin{aligned}
\mathcal A(t,\eta,\rho)
\overset{\Delta}{=}\;&\mathbb H\big(t,\eta,\rho,z(t),Z(t)\big)+\frac12\big\langle P(t)b(t,\eta,\rho),b(t,\eta,\rho)\big\rangle_{L^2(0,1)}\\
&-\big\langle P(t)b\big(t,y(t),u(t)\big),b(t,\eta,\rho)\big\rangle_{L^2(0,1)},\qquad\forall\,(t,\eta,\rho)\in[0,T]\times L^2(0,1)\times U.
\end{aligned}
$$
We have the following sufficient condition of optimality for Problem (OP).

Theorem 4.4. Let (B1)–(B4) hold. Suppose that $h(\cdot)$ is convex, $\mathbb H(t,\cdot,\cdot,z(t),Z(t))$ is concave for a.e. $t\in[0,T]$, a.s., and
$$\mathcal A\big(t,y(t),u(t)\big)=\max_{\rho\in U}\mathcal A\big(t,y(t),\rho\big),\qquad\text{a.e. }(t,\omega)\in[0,T]\times\Omega.\tag{4.163}$$
Then $(y(\cdot),u(\cdot))$ is an optimal pair of Problem (OP).

Proof: First of all, we claim that, for a.e. $t\in[0,T]$ and $\omega\in\Omega$,
$$\mathbb H_u\big(t,y(t),u(t),z(t),Z(t)\big)=\mathcal A_u\big(t,y(t),u(t)\big).\tag{4.164}$$
To see this, we fix a $t\in[0,T]$ and denote
$$
\begin{cases}
\mathbb H(\rho)\overset{\Delta}{=}\mathbb H\big(t,y(t),\rho,z(t),Z(t)\big),\\[1mm]
\mathcal A(\rho)\overset{\Delta}{=}\mathcal A\big(t,y(t),\rho\big),\qquad b(\rho)\overset{\Delta}{=}b\big(t,y(t),\rho\big),\\[1mm]
\psi(\rho)\overset{\Delta}{=}\dfrac12\big\langle P(t)b(\rho),b(\rho)\big\rangle_{L^2(0,1)}-\big\langle P(t)b(u(t)),b(\rho)\big\rangle_{L^2(0,1)}.
\end{cases}
$$
Then,
$$\mathcal A(u)=\mathbb H(u)+\psi(u).$$
Note that for any $r\to0^+$ and $v\in U$,
$$
\psi\big(u(t)+rv\big)-\psi\big(u(t)\big)=\frac12\Big\langle P(t)\big[b(u(t)+rv)-b(u(t))\big],\,b(u(t)+rv)-b(u(t))\Big\rangle_{L^2(0,1)}=o(r).
$$
Hence,
$$
\lim_{r\to0^+}\frac{\mathcal A\big(u(t)+rv\big)-\mathcal A\big(u(t)\big)}{r}=\lim_{r\to0^+}\frac{\mathbb H\big(u(t)+rv\big)-\mathbb H\big(u(t)\big)}{r},
$$
which gives (4.164).

Next, by (4.164) and the maximum condition (4.163), we have
$$0=\mathcal A_u\big(t,y(t),u(t)\big)=\mathbb H_u\big(t,y(t),u(t),z(t),Z(t)\big).$$
By the concavity of $\mathbb H(t,\cdot,\cdot,z(t),Z(t))$, one obtains
$$
\int_0^T\Big(\mathbb H\big(t,\tilde y(t),\tilde u(t),z(t),Z(t)\big)-\mathbb H\big(t,y(t),u(t),z(t),Z(t)\big)\Big)dt
\le\int_0^T\big\langle\mathbb H_y\big(t,y(t),u(t),z(t),Z(t)\big),\tilde y(t)-y(t)\big\rangle_{L^2(0,1)}\,dt,
$$
for any admissible pair $(\tilde y(\cdot),\tilde u(\cdot))$.

Let $\xi(t)\overset{\Delta}{=}\tilde y(t)-y(t)$, which satisfies the following equation:
$$
\begin{cases}
d\xi(t)=\big[A\xi(t)+a_y(t,y(t),u(t))\xi(t)+\alpha(t)\big]dt+\big[b_y\big(t,y(t),u(t)\big)\xi(t)+\beta(t)\big]dW(t)&\text{in }(0,T],\\
\xi(0)=0,
\end{cases}
$$
where
$$
\begin{cases}
\alpha(t)\overset{\Delta}{=}-a_y\big(t,y(t),u(t)\big)\xi(t)+a\big(t,\tilde y(t),\tilde u(t)\big)-a\big(t,y(t),u(t)\big),\\
\beta(t)\overset{\Delta}{=}-b_y\big(t,y(t),u(t)\big)\xi(t)+b\big(t,\tilde y(t),\tilde u(t)\big)-b\big(t,y(t),u(t)\big).
\end{cases}
$$
It follows from the definition of transposition solution to (4.5) that
$$
\begin{aligned}
&\mathbf E\big\langle h_y(y(T)),\xi(T)\big\rangle_{L^2(0,1)}\\
&=-\mathbf E\big\langle z(T),\xi(T)\big\rangle_{L^2(0,1)}+\mathbf E\big\langle z(0),\xi(0)\big\rangle_{L^2(0,1)}\\
&=-\mathbf E\int_0^T\Big(\big\langle g_y(t,y(t),u(t)),\xi(t)\big\rangle_{L^2(0,1)}+\big\langle z(t),\alpha(t)\big\rangle_{L^2(0,1)}+\big\langle Z(t),\beta(t)\big\rangle_{L^2(0,1)}\Big)dt\\
&=\mathbf E\int_0^T\big\langle\mathbb H_y\big(t,y(t),u(t),z(t),Z(t)\big),\xi(t)\big\rangle_{L^2(0,1)}\,dt\\
&\quad-\mathbf E\int_0^T\Big(\big\langle z(t),a(t,\tilde y(t),\tilde u(t))-a(t,y(t),u(t))\big\rangle_{L^2(0,1)}+\big\langle Z(t),b(t,\tilde y(t),\tilde u(t))-b(t,y(t),u(t))\big\rangle_{L^2(0,1)}\Big)dt\\
&\ge\mathbf E\int_0^T\Big(\mathbb H\big(t,\tilde y(t),\tilde u(t),z(t),Z(t)\big)-\mathbb H\big(t,y(t),u(t),z(t),Z(t)\big)\Big)dt\\
&\quad-\mathbf E\int_0^T\Big(\big\langle z(t),a(t,\tilde y(t),\tilde u(t))-a(t,y(t),u(t))\big\rangle_{L^2(0,1)}+\big\langle Z(t),b(t,\tilde y(t),\tilde u(t))-b(t,y(t),u(t))\big\rangle_{L^2(0,1)}\Big)dt\\
&=-\mathbf E\int_0^T\Big(g\big(t,\tilde y(t),\tilde u(t)\big)-g\big(t,y(t),u(t)\big)\Big)dt.
\end{aligned}\tag{4.165}
$$
On the other hand, the convexity of $h$ implies
$$\mathbf E\big\langle h_y(y(T)),\xi(T)\big\rangle_{L^2(0,1)}\le\mathbf E\,h\big(\tilde y(T)\big)-\mathbf E\,h\big(y(T)\big).$$
This, together with (4.165), implies that
$$\mathcal J\big(u(\cdot)\big)\le\mathcal J\big(\tilde u(\cdot)\big).\tag{4.166}$$
Since $\tilde u(\cdot)\in\mathcal U[0,T]$ is arbitrary, the desired result follows. □
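The controlled stochastic parabolic dynamics behind Theorems 4.3–4.4 can also be examined numerically. The following sketch discretizes $dy=(y_{xx}+a(t,y,u))\,dt+b(t,y,u)\,dW(t)$ with homogeneous Dirichlet boundary conditions by finite differences in space and an Euler–Maruyama step in time, and estimates a cost functional of the form $\mathbf E\big[\int_0^T\!\int_0^1 g\,dx\,dt+\int_0^1 h(y(T))\,dx\big]$ by Monte Carlo. All concrete coefficients, the control, and the grid parameters below are illustrative assumptions, not data analyzed in the text.

```python
import numpy as np

# Finite-difference Euler-Maruyama sketch for a controlled 1-D stochastic
# heat equation on (0,1), driven by a single Brownian motion W(t), with
# illustrative choices a(t,y,u) = -0.2 y + u and b(t,y,u) = 0.1 (y + u).
rng = np.random.default_rng(1)
Nx, Nt, T, M = 25, 1000, 0.5, 100        # space/time grid, horizon, samples
dx, dt = 1.0 / Nx, T / Nt                # note dt < dx^2 / 2 for stability
x = np.linspace(0.0, 1.0, Nx + 1)[1:-1]  # interior nodes

# second-difference Laplacian with homogeneous Dirichlet data
lap = (np.diag(-2.0 * np.ones(Nx - 1)) + np.diag(np.ones(Nx - 2), 1)
       + np.diag(np.ones(Nx - 2), -1)) / dx ** 2

def u_ctrl(t):                            # an arbitrary open-loop control
    return np.sin(np.pi * x) * np.cos(t)

cost = 0.0
for _ in range(M):
    y = np.sin(np.pi * x)                 # initial state
    for k in range(Nt):
        u = u_ctrl(k * dt)
        drift = lap @ y - 0.2 * y + u
        diffusion = 0.1 * (y + u)
        dW = np.sqrt(dt) * rng.standard_normal()       # scalar increment
        cost += dt * dx * np.sum(y ** 2 + u ** 2) / M  # running cost
        y = y + dt * drift + diffusion * dW
    cost += dx * np.sum(y ** 2) / M       # terminal cost h(y) = y^2

print(f"estimated cost: {cost:.4f}")
```

The explicit scheme is chosen only for transparency; the time step must satisfy the parabolic stability restriction $dt\lesssim dx^2/2$, which is why the grid sizes above are coupled.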
4.4. Pontryagin-type maximum principle for general stochastic evolution equations

In this subsection, we shall give a brief introduction to the Pontryagin-type maximum principle for general SEEs.

Let $H$ be a separable Hilbert space and $A$ be an unbounded linear operator (with the domain $D(A)\subset H$), which generates a $C_0$-semigroup $\{S(t)\}_{t\ge0}$ on $H$. Denote by $A^*$ the adjoint operator of $A$. Let $U$ be a separable metric space with a metric $\mathbf d(\cdot,\cdot)$. Put
$$\mathcal U[0,T]\overset{\Delta}{=}\big\{u(\cdot):[0,T]\times\Omega\to U\;\big|\;u(\cdot)\text{ is }\mathbf F\text{-adapted}\big\}.$$
We assume the following condition:

(B5) For $\psi=a,b$, suppose that $\psi(\cdot,\cdot,\cdot):[0,T]\times\Omega\times H\times U\to H$ is a (vector-valued) function satisfying: (i) For any $(y,u)\in H\times U$, $\psi(\cdot,y,u):[0,T]\times\Omega\to H$ is $\mathbb F$-measurable; (ii) For any $y\in H$ and a.e. $(t,\omega)\in(0,T)\times\Omega$, $\psi(t,y,\cdot):U\to H$ is continuous; and (iii) For any $(y_1,y_2,u)\in H\times H\times U$ and a.e. $(t,\omega)\in(0,T)\times\Omega$,
$$\begin{cases}|\psi(t,y_1,u)-\psi(t,y_2,u)|_H\le\mathcal C|y_1-y_2|_H,\\|\psi(t,0,u)|_H\le\mathcal C.\end{cases}$$
Consider the following controlled SEE:
$$
\begin{cases}
dy(t)=\big(Ay(t)+a(t,y(t),u(t))\big)dt+b\big(t,y(t),u(t)\big)dW(t)&\text{in }(0,T],\\
y(0)=y_0,
\end{cases}\tag{4.167}
$$
where $u(\cdot)\in\mathcal U[0,T]$ is the control, $y(\cdot)$ is the state, and the (given) initial state $y_0\in L^{p_0}_{\mathcal F_0}(\Omega;H)$ for some given $p_0\ge2$.

By Theorem B.7, for any $y_0\in L^{p_0}_{\mathcal F_0}(\Omega;H)$ and $u(\cdot)\in\mathcal U[0,T]$, Eq. (4.167) admits a unique mild solution $y(\cdot)\equiv y(\cdot\,;y_0,u)\in C_{\mathbf F}([0,T];L^{p_0}(\Omega;H))$. Furthermore,
$$|y|_{C_{\mathbf F}([0,T];L^{p_0}(\Omega;H))}\le\mathcal C\big(1+|y_0|_{L^{p_0}(\Omega;H)}\big).$$
Also, we need the following assumption:

(B6) Suppose that $g(\cdot,\cdot,\cdot):[0,T]\times\Omega\times H\times U\to\mathbb R$ and $h(\cdot):\Omega\times H\to\mathbb R$ are two functions satisfying: (i) For any $(y,u)\in H\times U$, the function $g(\cdot,y,u):[0,T]\times\Omega\to\mathbb R$ is $\mathbb F$-measurable, and the function $h(y):\Omega\to\mathbb R$ is $\mathcal F_T$-measurable; (ii) For any $y\in H$ and a.e. $(t,\omega)\in(0,T)\times\Omega$, the function $g(t,y,\cdot):U\to\mathbb R$ is continuous; and (iii) For any $(y_1,y_2,u)\in H\times H\times U$ and a.e. $(t,\omega)\in(0,T)\times\Omega$,
$$\begin{cases}|g(t,y_1,u)-g(t,y_2,u)|+|h(y_1)-h(y_2)|\le\mathcal C|y_1-y_2|_H,\\|g(t,0,u)|+|h(0)|\le\mathcal C.\end{cases}$$
Define a cost functional $\mathcal J(\cdot)$ as follows:
$$\mathcal J(u(\cdot))\overset{\Delta}{=}\mathbf E\Big(\int_0^Tg(t,y(t),u(t))\,dt+h(y(T))\Big),\qquad\forall\,u(\cdot)\in\mathcal U[0,T],\tag{4.168}$$
where $y(\cdot)$ is the corresponding solution to (4.167). Consider the following optimal control problem for the control system (4.167) with the cost functional (4.168):

Problem (GOP). Find a $\bar u(\cdot)\in\mathcal U[0,T]$ such that
$$\mathcal J(\bar u(\cdot))=\inf_{u(\cdot)\in\mathcal U[0,T]}\mathcal J(u(\cdot)).\tag{4.169}$$
Any $\bar u(\cdot)\in\mathcal U[0,T]$ satisfying (4.169) is called an optimal control. The corresponding state $\bar y(\cdot)$ (of (4.167)) is called an optimal state, and $(\bar y(\cdot),\bar u(\cdot))$ is called an optimal pair (of Problem (GOP)).

Similarly to Section 4.3, for the case of a general control domain, we need the following further assumption:

(B7) For $\psi=a,b$, for any $u\in U$ and a.e. $(t,\omega)\in(0,T)\times\Omega$, the functions $\psi(t,\cdot,u):H\to H$, $g(t,\cdot,u):H\to\mathbb R$ and $h(\cdot):H\to\mathbb R$ are $C^2$. For any $y\in H$ and a.e. $(t,\omega)\in(0,T)\times\Omega$, the functions $\psi_y(t,y,\cdot):U\to\mathcal L(H)$, $g_y(t,y,\cdot):U\to H$, $\psi_{yy}(t,y,\cdot):U\to\mathcal L(H,H;H)$, and $g_{yy}(t,y,\cdot):$
𝑈 → (𝐻) are continuous. Moreover, for any (𝑦, 𝑢) ∈ 𝐻 × 𝑈 and a.e. Write
(𝑡, 𝜔) ∈ (0, 𝑇 ) × 𝛺, [0, 𝑇 ]
{ (
⎧ |𝜓𝑦 (𝑡, 𝑦, 𝑢)| + |𝑔𝑦 (𝑡, 𝑦, 𝑢)|𝐻 + |ℎ𝑦 (𝑦)|𝐻 ≤ ,
𝛥 |
= 𝑃 (⋅, ⋅) | 𝑃 (⋅, ⋅) ∈ 𝑝𝑑 𝐿2F (0, 𝑇 ; 𝐿4 (𝛺; 𝐻)); 𝐿2 (0, 𝑇 ;
⎪ (𝐻) |
⎨ |𝜓𝑦𝑦 (𝑡, 𝑦, 𝑢)|(𝐻,𝐻;𝐻) + |𝑔𝑦𝑦 (𝑡, 𝑦, 𝑢)|(𝐻)
4 )
⎪ +|ℎ (𝑦)| 𝐿F3 (𝛺; 𝐻)) and for every 𝑡 ∈ [0, 𝑇 ] and 𝜉
⎩ 𝑦𝑦 (𝐻)
≤ . 4
∈ 𝐿4 (𝛺;𝐻), 𝜒[𝑡,𝑇 ]𝑃 (⋅)𝜉 ∈ 𝐷F ([𝑡,𝑇 ];𝐿 3 (𝛺;𝐻))
𝑡 }
As Section 4.3, in order to establish the Pontryagin-type maximum and |𝑃 (⋅)𝜉| 4 ≤ |𝜉|𝐿4 (𝛺;𝐻)
𝐷F ([𝑡,𝑇 ];𝐿 3 (𝛺;𝐻)) 𝑡
principle for an optimal pair (𝑦(⋅),
̄ 𝑢(⋅))
̄ of Problem (GOP), we need the
following 𝐻-valued BSEE: and
[0, 𝑇 ]
⎧ 𝑑𝑧 = −𝐴∗ 𝑧𝑑𝑡− ( 𝑎 (𝑡, 𝑦, ̄ ∗ 𝑧+𝑏𝑦 (𝑡, 𝑦, ̄ ∗𝑍 {
𝛥 ( )
⎪ )𝑦
̄ 𝑢) ̄ 𝑢)
= 𝑄(⋅) ,𝑄̂(⋅) || For any 𝑡 ∈ [0,𝑇 ], both 𝑄(𝑡) and 𝑄
̂(𝑡) are
⎨ −𝑔𝑦 (𝑡, 𝑦,
̄ 𝑢)̄ 𝑑𝑡 + 𝑍𝑑𝑊 (𝑡) in [0, 𝑇 ), (4.170) |
( )
⎪ 𝑧(𝑇 ) = −ℎ𝑦 𝑦(𝑇 ̄ ) , bounded linear operators from 𝑡
⎩ 4
to 𝐿2F (𝑡, 𝑇 ; 𝐿 3}(𝛺; 𝐻)) and 𝑄(𝑡) (0, 0, ⋅)∗
which serves as the first order adjoint equation. By Theorem B.16, ̂(𝑡) (0, 0, ⋅) .
=𝑄
Eq. (4.170) is well-posed in the sense of transposition solution (Note
that we do not assume the filtration 𝐅 is the natural one). where
Next, to deal with the case of non-convex control domain 𝑈 , we 𝛥
𝑡 = 𝐿4 (𝛺; 𝐻) × 𝐿2F (𝑡, 𝑇 ; 𝐿4 (𝛺; 𝐻))×𝐿2F (𝑡, 𝑇 ; 𝐿4 (𝛺; 𝐻)).
need the following formally (𝐻)-valued BSEE: 𝑡

⎧ 𝑑𝑃 = −(𝐴∗ + 𝐽 ∗ )𝑃 𝑑𝑡 − 𝑃 (𝐴 + 𝐽 )𝑑𝑡 − 𝐾 ∗ 𝑃 𝐾𝑑𝑡 Both [0, 𝑇 ] and [0, 𝑇 ] are Banach spaces w.r.t. the norms
⎪ 𝛥
⎨ −(𝐾 ∗ 𝑄 + 𝑄𝐾)𝑑𝑡 + 𝐹 𝑑𝑡 + 𝑄𝑑𝑊 (𝑡) in [0, 𝑇 ), (4.171) |𝑃 |[0,𝑇 ] = |𝑃 | 4
⎪ 𝑃 (𝑇 ) = 𝑃𝑇 , (𝐿2F (0,𝑇 ;𝐿4 (𝛺;𝐻));𝐿2 (0,𝑇 ;𝐿F3 (𝛺;𝐻)))

and
where
{ 𝛥
𝐽 , 𝐾 ∈ 𝐿4F (0, 𝑇 ; 𝐿∞ (𝛺; (𝐻))), |(𝑄𝑡 ,𝑄
̂𝑡 )|[0,𝑇 ] = sup |(𝑄𝑡 ,𝑄
̂𝑡 )| 4
𝑡∈[0,𝑇 ] (𝑡 ;𝐿2 (𝑡,𝑇 ;𝐿F3 (𝛺;𝐻)))2
𝐹 ∈ 𝐿1F (0, 𝑇 ; 𝐿2 (𝛺; (𝐻))), 𝑃𝑇 ∈ 𝐿2 (𝛺; (𝐻)).
𝑇
The relaxed transposition solution to (4.171) is defined as follows:
The Eq. (4.171) is a general version of Eq. (4.25). We formulate the
( )
transposition solution to (4.25) as in Definition 4.1. This notion is Definition 4.2. We call 𝑃 (⋅), 𝑄(⋅) , 𝑄 ̂(⋅) ∈ [0, 𝑇 ] × [0, 𝑇 ] a relaxed
enough to establish a Pontryagin-type maximum principle for the one transposition solution to Eq. (4.171) if for any 𝑡 ∈ [0, 𝑇 ], 𝜉1 , 𝜉2 ∈
dimensional stochastic parabolic equation (4.2) due to the following 𝐿4 (𝛺; 𝐻) and 𝑢1 (⋅), 𝑢2 (⋅), 𝑣1 (⋅), 𝑣2 (⋅) ∈ 𝐿2F (𝑡, 𝑇 ; 𝐿4 (𝛺; 𝐻)), it holds that
𝑡
reasons:
⟨ ⟩ 𝑇⟨ ⟩
• The embedding operator from 𝐻01 (0, 1) to 𝐿2 (0, 1) is a Hilbert– E 𝑃𝑇 𝜑1 (𝑇 ), 𝜑2 (𝑇 ) 𝐻 − E 𝐹 (𝑠)𝜑1 (𝑠), 𝜑2 (𝑠) 𝐻 𝑑𝑠
∫𝑡
Schmidt operator; ⟨ ⟩ 𝑇⟨ ⟩
• Solutions to stochastic parabolic equations have suitable smooth- = E 𝑃 (𝑡)𝜉1 , 𝜉2 𝐻 + E 𝑃 (𝑠)𝑢1 (𝑠), 𝜑2 (𝑠) 𝐻 𝑑𝑠
∫𝑡
ing effect. 𝑇⟨ ⟩
+E 𝑃 (𝑠)𝜑1 (𝑠), 𝑢2 (𝑠) 𝐻 𝑑𝑠
These are very restrictive and not fulfilled by many SPDEs. They ∫𝑡
are even not fulfilled for stochastic parabolic equations in a multidi- 𝑇⟨ ⟩
mensional domain. To overcome this difficulty, we need to introduce +E 𝑃 (𝑠)𝐾(𝑠)𝜑1 (𝑠), 𝑣2 (𝑠) 𝐻 𝑑𝑠 (4.175)
∫𝑡
the concept of relaxed transposition solutions to (4.171), which is a 𝑇⟨ ⟩
generalization of the one used for (4.25), and via which we can derive +E 𝑃 (𝑠)𝑣1 (𝑠), 𝐾(𝑠)𝜑2 (𝑠) + 𝑣2 (𝑠) 𝐻 𝑑𝑠
∫𝑡
the desired Pontryagin-type maximum principle for Problem (GOP) 𝑇⟨ ⟩
(See Lü & Zhang, 2014, 2015b, 2018). +E ̂(𝑡) (𝜉2 , 𝑢2 , 𝑣2 )(𝑠) 𝑑𝑠
𝑣1 (𝑠), 𝑄
∫𝑡 𝐻
To define the solution of (4.171), we need the following two SEEs: 𝑇⟨ ⟩
{ +E 𝑄(𝑡) (𝜉1 , 𝑢1 , 𝑣1 )(𝑠), 𝑣2 (𝑠) 𝐻 𝑑𝑠.
𝑑𝜑1 = (𝐴 + 𝐽 )𝜑1 𝑑𝑠 + 𝑢1 𝑑𝑠 + (𝐾𝜑1 + 𝑣1 )𝑑𝑊 (𝑠) in (𝑡, 𝑇 ], ∫𝑡
(4.172)
𝜑1 (𝑡) = 𝜉1
Here, 𝜑1 (⋅) and 𝜑2 (⋅) solve (4.172) and (4.173), respectively.
and
{ We have the following well-posedness result for Eq. (4.171) (See Lü
𝑑𝜑2 = (𝐴 + 𝐽 )𝜑2 𝑑𝑠 + 𝑢2 𝑑𝑠 + (𝐾𝜑2 + 𝑣2 )𝑑𝑊 (𝑠) in (𝑡, 𝑇 ], & Zhang, 2014 for its proof).
(4.173)
𝜑2 (𝑡) = 𝜉2 .

Here 𝑡 ∈ [0, 𝑇 ), 𝜉1 , 𝜉2 ∈ 𝐿4 (𝛺; 𝐻) and 𝑢1 , 𝑢2 , 𝑣1 , 𝑣2 ∈ 𝐿2F (𝑡, 𝑇 ; 𝐿4 (𝛺; 𝐻)). Theorem 4.5. Suppose that 𝐿𝑝 (𝛺) (1 ≤ 𝑝 < ∞) is separable.
𝑇
𝑡
Then Eq. (4.171) admits a unique relaxed transposition solution
Also, we should introduce the solution spaces for (4.171). Let 𝐻1 ( )
̂(⋅) ∈ [0, 𝑇 ] × [0, 𝑇 ]. Furthermore,
𝑃 (⋅), 𝑄(⋅) , 𝑄
and 𝐻2 be Hilbert spaces. For 1 ≤ 𝑝1 , 𝑝2 , 𝑞1 , 𝑞2 ≤ ∞, let
( 𝑝 ) ( )
𝑞
𝑝𝑑 𝐿F1 (0, 𝑇 ; 𝐿𝑞1 (𝛺; 𝐻1 )); 𝐿𝑝2 (0, 𝑇 ; 𝐿F1 (𝛺; 𝐻2 )) |𝑃 |[0,𝑇 ] + | 𝑄(⋅) , 𝑄
̂(⋅) |
{ ( [0,𝑇 ] )
𝛥 ( 𝑝 𝑞 )| ≤  |𝐹 |𝐿1 (0,𝑇 ; 𝐿2 (𝛺;(𝐻))) + |𝑃𝑇 |𝐿2 (𝛺; (𝐻)) .
= 𝐿 ∈  𝐿F1 (0,𝑇 ; 𝐿𝑞1 (𝛺; 𝐻1 )); 𝐿𝑝2 (0,𝑇 ; 𝐿F1 (𝛺; 𝐻2 )) |
| F 𝑇

𝐿 is pointwise defined, i.e., for a.e. (𝑡, 𝜔) ∈ [0, 𝑇 ] × 𝛺, (4.174)


( )
̃ 𝜔) ∈ (𝐻1 ;𝐻2 ) such that 𝑢(⋅) (𝑡, 𝜔)
there is 𝐿(𝑡, Remark 4.1. It is well known that 𝐿𝑝 (𝛺) is separable if (𝛺, 𝑇 , P) is
} 𝑇
̃ 𝜔)𝑢(𝑡, 𝜔), ∀ 𝑢(⋅) ∈ 𝐿𝑝1 (0, 𝑇 ; 𝐿𝑞1 (𝛺; 𝐻1 )) . separable, i.e., there exists a countable family  ⊂ 𝑇 such that, for any
= 𝐿(𝑡, F ( )
𝜀 > 0 and 𝐵 ∈ 𝑇 one can find 𝐵1 ∈  with P (𝐵 ⧵ 𝐵1 ) ∪ (𝐵1 ⧵ 𝐵) < 𝜀
In the sequel, if there is no confusion, sometimes we identify 𝐿 ∈ (e.g., Bruckner, Bruckner, & Thomson, 1997, Section 13.4). Probability
( 𝑝 𝑞 )
̃ ⋅).
𝑝𝑑 𝐿F1 (0, 𝑇 ; 𝐿𝑞1 (𝛺; 𝐻)); 𝐿𝑝2 (0, 𝑇 ; 𝐿F1 (𝛺; 𝐻)) with 𝐿(⋅, space enjoying such kind of property is called a standard probability

space enjoying such a property is called a standard probability space (also called a Lebesgue–Rokhlin probability space or just a Lebesgue space). Except for some artificial examples, almost all frequently used probability spaces are standard probability spaces (e.g., Itô, 1984).

Remark 4.2. One can use the Riesz representation theorem to derive $P$, which is the first part of the relaxed transposition solution to (4.171) and is enough to derive a Pontryagin-type maximum principle for Problem (GOP) (e.g., Du & Meng, 2013; Fuhrman, Hu, & Tessitore, 2013, 2018). Nevertheless, the relaxed transposition solution has other important applications, such as deriving second order necessary conditions for optimal controls of Problem (GOP) (e.g., Frankowska & Lü, 2020; Lü, Zhang, & Zhang, 2018).

For any optimal pair $(\bar y(\cdot),\bar u(\cdot))$ of Problem (GOP), let $(z(\cdot),Z(\cdot))$ be the transposition solution to (4.170), and $\big(P(\cdot),Q^{(\cdot)},\widehat Q^{(\cdot)}\big)$ the relaxed transposition solution to Eq. (4.171) in which $P_T$, $J(\cdot)$, $K(\cdot)$ and $F(\cdot)$ are given by
$$
\begin{cases}
J(t)=a_y\big(t,\bar y(t),\bar u(t)\big),\qquad K(t)=b_y\big(t,\bar y(t),\bar u(t)\big),\\
F(t)=-\mathbb H_{yy}\big(t,\bar y(t),\bar u(t),z(t),Z(t)\big),\\
P_T=-h_{yy}\big(\bar y(T)\big).
\end{cases}
$$
Put
$$\mathbb H(t,y,u,k_1,k_2)\overset{\Delta}{=}\big\langle k_1,a(t,y,u)\big\rangle_H+\big\langle k_2,b(t,y,u)\big\rangle_H-g(t,y,u),\qquad(t,y,u,k_1,k_2)\in[0,T]\times H\times U\times H\times H.$$
We have the following result.

Theorem 4.6. Suppose that $L^p_{\mathcal F_T}(\Omega)$ ($1\le p<\infty$) is a separable Banach space, $U$ is a separable metric space, and the assumptions (B5)–(B7) hold. Then, for a.e. $(t,\omega)\in[0,T]\times\Omega$ and for all $\rho\in U$,
$$
\begin{aligned}
&\mathbb H\big(t,\bar y(t),\bar u(t),z(t),Z(t)\big)-\mathbb H\big(t,\bar y(t),\rho,z(t),Z(t)\big)\\
&\quad-\frac12\Big\langle P(t)\big[b\big(t,\bar y(t),\bar u(t)\big)-b\big(t,\bar y(t),\rho\big)\big],\,b\big(t,\bar y(t),\bar u(t)\big)-b\big(t,\bar y(t),\rho\big)\Big\rangle_H\ge0.
\end{aligned}
$$

4.5. Notes and open problems

The main body of this section is a simplified version of the results in Lü and Zhang (2014, 2015b).

A pioneering work on the Pontryagin-type maximum principle for control systems governed by SEEs is Bensoussan (1983). Further progresses are available in the literature (Hu & Peng, 1990; Tang & Li, 2017; Tudor, 1989; Zhou, 1993) and so on. All published works before 2012 on the necessary conditions for optimal controls of infinite dimensional SEEs addressed only the case that one of the following conditions holds:

• The diffusion term does NOT depend on the control variable, or the control domain is convex;
• For a.e. $(t,\omega)\in(0,T)\times\Omega$ and any $(y,u,k_1,k_2)\in H\times U\times H\times H$, $a_{yy}(t,y,u)k_1$, $b_{yy}(t,y,u)k_2$, $g_{yy}(t,y,u)$ and $h_{yy}(y)$ are Hilbert–Schmidt operator-valued functions.

As we have explained, the main difficulty in the infinite dimensional setting is the well-posedness of (4.171). There were several works (Du & Meng, 2013; Fuhrman et al., 2013; Lü & Zhang, 2014, of which the arXiv versions were all posted in 2012) that addressed this difficulty. The main idea is to solve (4.171) in a weak sense. In Du and Meng (2013) and Fuhrman et al. (2013), a partial well-posedness result for (4.171) was established, that is, only $P$ is obtained by the Riesz Representation Theorem (see Fuhrman et al., 2018 for further related works). In Lü and Zhang (2014), a complete well-posedness result for (4.171) was derived by means of the stochastic transposition method. Results in Lü and Zhang (2014) have been further improved in Lü and Zhang (2015b, 2018). Although it is not needed in establishing the Pontryagin-type maximum principle for controlled SEEs, the correction term $Q$ (in Eq. (4.171)) plays an important role in the study of the second order necessary conditions for optimal controls (see Frankowska & Lü, 2020; Frankowska & Zhang, 2020; Lü et al., 2018 for more details).

To our best knowledge, there exist only very few published works on the sufficiency of the Pontryagin-type maximum principle for controlled SEEs in infinite dimensions (e.g., Lenhart, Xiong, & Yong, 2016, Theorem 6.3).

There is another method to solve optimal control problems, that is, the Dynamic Programming Method. After the seminal work (Bellman, 1957), there have been extensive studies of the Dynamic Programming Method for different kinds of control systems (e.g., Fabbri, Gozzi, & Świȩch, 2017; Fleming & Soner, 2006; Krylov, 2009; Lions, 1983; Lü & Zhang, 2013; Yong & Zhou, 1999 and the rich references therein). We refer the readers to Fabbri et al. (2017) for a very nice monograph on the Dynamic Programming Method for SDPSs.

There are many open problems related to the topic of this section. We shall list below some of them which, in our opinion, are particularly interesting and/or important:

(1) Well-posedness of (4.171) corresponding to general SPDEs

In this section, we define the transposition solution to (4.25) as in Definition 4.1. With such a solution, we establish the Pontryagin-type maximum principle (Theorem 4.6). The key points to do this are the smoothing effect of solutions to stochastic parabolic equations and the fact that the embedding operator from $H_0^1(0,1)$ to $L^2(0,1)$ is a Hilbert–Schmidt operator. This is very restrictive. To handle general SPDEs, we introduce the concepts of relaxed transposition solution and $V$-transposition solution (Lü & Zhang, 2014, 2015b, 2018). It would be quite interesting (and also very important for some problems) to prove that this equation is also well-posed in the sense of the transposition solution given by Lü and Zhang (2014, Definition 1.2). Nevertheless, so far the well-posedness of (4.171) in that sense is known only for some very special cases (see Lü & Zhang, 2014, Theorem 4.2). As far as we know, it is a challenging (unsolved) problem to prove the existence of a transposition solution to (4.171), even for the following special case:
$$
\begin{cases}
dP=-A^*P\,dt-PA\,dt+F\,dt+Q\,dW(t)&\text{in }[0,T),\\
P(T)=P_T,
\end{cases}
$$
where $F\in L^1_{\mathbf F}(0,T;L^2(\Omega;\mathcal L(H)))$ and $P_T\in L^2_{\mathcal F_T}(\Omega;\mathcal L(H))$ for some separable Hilbert space $H$. The same can be said even for concrete problems, say when $A=\Delta$, the Laplacian with the usual homogeneous Dirichlet boundary condition.

(2) Optimal control problems with endpoint/state constraints

In this section, we do not consider endpoint/state constraints on the control systems. For some special constraints, such as $\mathbf E\,y(T)>0$, one can use the Ekeland variational principle to establish a similar Pontryagin-type maximum principle with nontrivial Lagrange multipliers. However, for the general case, one does need some further condition to obtain nontrivial results in this respect. In the study of optimal control problems for deterministic PDEs, people introduce the so-called finite codimension condition to guarantee the nontriviality of the Lagrange multiplier (e.g., Li & Yong, 1995; Liu, Lü, & Zhang, 2020). There are some attempts to generalize this condition to the stochastic framework (e.g., Liu, Lü, Zhang, & Zhang, 0000) but so far the results are still not so satisfactory. Another way is to use some tools from set-valued analysis, as developed in the recent paper (Frankowska & Lü, 2020).

(3) Higher order necessary conditions for optimal controls

When there are many admissible controls which satisfy the first order necessary conditions trivially, higher order necessary conditions should be established to distinguish optimal controls from these candidates. Some results on this topic can be found in Frankowska and Lü (2020), Frankowska and Zhang (2020) and Lü et al. (2018). However, these results are far from satisfactory. Indeed, only second order necessary conditions for stochastic optimal controls have been studied, and pointwise second order necessary conditions are obtained only under very strong assumptions.
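The role of the quadratic correction term in Theorem 4.6 can already be seen in a scalar toy problem, stated here purely for illustration: minimize $J(u)=\mathbf E[x(T)^2]$ for $dx=b(u)\,dW(t)$, $x(0)=0$, over constant controls in a non-convex finite set $U$, so that $J(u)=b(u)^2T$. Along the optimal $\bar u$ one computes explicitly $z(t)=-2b(\bar u)W(t)$, $Z(t)=-2b(\bar u)$ and $P(t)=-2$, and the corrected Hamiltonian difference reduces to $b(\rho)^2-b(\bar u)^2\ge0$, while the uncorrected difference $\mathbb H(\bar u)-\mathbb H(\rho)$ may be negative. The sketch below (the set $U$, the map $b$ and the cost are illustrative assumptions) checks both facts:

```python
import numpy as np

# Toy check of the inequality in Theorem 4.6 for dx = b(u) dW, x(0) = 0,
# J(u) = E[x(T)^2], over constant controls in a NON-convex finite set U.
# Along the optimizer: z(t) = -2 b(ubar) W(t), Z = -2 b(ubar), P = -2.
U = np.array([0.0, 1.0])                  # non-convex control domain
b = lambda u: 1.0 - 3.0 * u               # illustrative diffusion coefficient

ubar = U[np.argmin(b(U) ** 2)]            # J(u) = b(u)^2 T is minimized here
Z, P = -2.0 * b(ubar), -2.0               # adjoint processes (constants here)
H = lambda u: Z * b(u)                    # Hamiltonian (a = 0, g = 0)

corrected = H(ubar) - H(U) - 0.5 * P * (b(ubar) - b(U)) ** 2

assert np.all(corrected >= 0.0)           # Theorem 4.6's condition holds
assert np.any(H(ubar) - H(U) < 0.0)       # the naive condition (no P) fails
```

This is precisely why, when the diffusion term depends on the control and the control domain is non-convex, the first order adjoint pair $(z,Z)$ alone cannot characterize optimality and the second order adjoint $P$ enters the maximum condition.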
5. Stochastic linear quadratic optimal control problems

This section is addressed to LQ problems for SEEs in infinite dimensions, in which both drift and diffusion terms may contain the control variables. As before, we first consider an LQ problem for a controlled one dimensional stochastic parabolic equation. Then, we survey some recent progress for the general case and list some open problems.

5.1. Formulation of the problem

Let $(\Omega,\mathcal F,\mathbf F,\mathbf P)$ be a complete filtered probability space with the filtration $\mathbf F=\{\mathcal F_t\}_{t\ge0}$ satisfying the usual condition (see Definition B.9), on which a one dimensional standard Brownian motion $\{W(t)\}_{t\ge0}$ is defined. Denote by $\mathbb F$ the progressive $\sigma$-field w.r.t. $\mathbf F$.

Consider the following control system:
$$
\begin{cases}
dy=(y_{xx}+a_1u)\,dt+(a_2y+a_3u)\,dW(t)&\text{in }(0,T]\times(0,1),\\
y=0&\text{on }(0,T]\times\{0,1\},\\
y(0)=\eta.
\end{cases}\tag{5.1}
$$
Here $a_1,a_2,a_3\in L^\infty_{\mathbf F}(0,T;L^\infty(0,1))$ and $\eta\in L^2(0,1)$. By Theorem B.9, for any $u\in\mathcal U[0,T]\overset{\Delta}{=}L^2_{\mathbf F}(0,T;L^2(0,1))$, (5.1) admits a unique solution $y(\cdot)=y(\cdot\,;\eta,u)\in L^2_{\mathbf F}(\Omega;C([0,T];L^2(0,1)))\cap L^2_{\mathbf F}(0,T;H_0^1(0,1))$ such that
$$|y(\cdot\,;\eta,u)|_{L^2_{\mathbf F}(\Omega;C([0,T];L^2(0,1)))\cap L^2_{\mathbf F}(0,T;H_0^1(0,1))}\le\mathcal C\big(|\eta|_{L^2(0,1)}+|u|_{L^2_{\mathbf F}(0,T;L^2(0,1))}\big).\tag{5.2}$$
The cost functional is
$$\mathcal J(\eta;u(\cdot))=\frac12\,\mathbf E\Big[\int_0^T\int_0^1\big(qy^2+ru^2\big)\,dx\,dt+\int_0^1g\,y(T)^2\,dx\Big].\tag{5.3}$$
Here $q\in L^\infty_{\mathbf F}(0,T;L^\infty(0,1))$, $r\in L^\infty_{\mathbf F}(0,T;L^\infty(0,1))$ and $g\in L^\infty_{\mathcal F_T}(\Omega;L^\infty(0,1))$.

We are concerned with the following optimal control problem:

Problem (SLQ). For any given $\eta\in L^2(0,1)$, find a $\bar u(\cdot)\in\mathcal U[0,T]$ such that
$$\mathcal J(\eta;\bar u(\cdot))=\inf_{u(\cdot)\in\mathcal U[0,T]}\mathcal J(\eta;u(\cdot)).\tag{5.4}$$

Definition 5.1. (1) Problem (SLQ) is said to be solvable at $\eta\in L^2(0,1)$ if there exists a control $\bar u(\cdot)\in\mathcal U[0,T]$ such that (5.4) holds. In this case, $\bar u(\cdot)$ is called an optimal control, and the corresponding $\bar y(\cdot)$ and $(\bar y(\cdot),\bar u(\cdot))$ are called an optimal state and an optimal pair, respectively.

(2) Problem (SLQ) is said to be solvable if it is solvable at any $\eta\in L^2(0,1)$.

5.2. Method of completing the square

In this subsection, we are going to study the existence of optimal controls for Problem (SLQ) by the method of completing the square.

To begin with, let us define four operators
$$
\begin{cases}
\Xi:\mathcal U[0,T]\to L^2_{\mathbf F}(0,T;L^2(0,1)),\\
\widehat\Xi:\mathcal U[0,T]\to L^2_{\mathcal F_T}(\Omega;L^2(0,1)),\\
\Gamma:L^2(0,1)\to L^2_{\mathbf F}(0,T;L^2(0,1)),\\
\widehat\Gamma:L^2(0,1)\to L^2_{\mathcal F_T}(\Omega;L^2(0,1)),
\end{cases}\tag{5.5}
$$
as follows: for all $\eta\in L^2(0,1)$ and $u(\cdot)\in\mathcal U[0,T]$,
$$
\begin{cases}
(\Xi u(\cdot))(\cdot)\overset{\Delta}{=}y(\cdot\,;0,u),\qquad\widehat\Xi u(\cdot)\overset{\Delta}{=}y(T;0,u),\\
(\Gamma\eta)(\cdot)\overset{\Delta}{=}y(\cdot\,;\eta,0),\qquad\;\;\widehat\Gamma\eta\overset{\Delta}{=}y(T;\eta,0),
\end{cases}\tag{5.6}
$$
where $y(\cdot)$ solves (5.1). Then, for any $\eta\in L^2(0,1)$ and $u(\cdot)\in\mathcal U[0,T]$, the corresponding state process $y(\cdot)$ and its terminal value $y(T)$ are given by
$$
\begin{cases}
y(\cdot)=(\Gamma\eta)(\cdot)+(\Xi u(\cdot))(\cdot),\\
y(T)=\widehat\Gamma\eta+\widehat\Xi u(\cdot).
\end{cases}\tag{5.7}
$$
We need to compute the adjoint operators of $\Xi$, $\widehat\Xi$, $\Gamma$ and $\widehat\Gamma$. To this end, we introduce the following backward stochastic parabolic equation:
$$
\begin{cases}
dz=-(z_{xx}+a_2Z+\xi)\,dt+Z\,dW(t)&\text{in }[0,T)\times(0,1),\\
z=0&\text{on }[0,T)\times\{0,1\},\\
z(T)=z_T&\text{in }(0,1),
\end{cases}\tag{5.8}
$$
where $z_T\in L^2_{\mathcal F_T}(\Omega;L^2(0,1))$ and $\xi(\cdot)\in L^2_{\mathbf F}(0,T;L^2(0,1))$.

Proposition 5.1. (1) For any $\xi(\cdot)\in L^2_{\mathbf F}(0,T;L^2(0,1))$,
$$
\begin{cases}
\Xi^*\xi=a_1z^0+a_3Z^0,\\
\Gamma^*\xi=z^0(0),
\end{cases}\tag{5.9}
$$
where $(z^0(\cdot),Z^0(\cdot))$ is the transposition solution to (5.8) with $z_T=0$.

(2) For any $z_T\in L^2_{\mathcal F_T}(\Omega;L^2(0,1))$,
$$
\begin{cases}
\widehat\Xi^*z_T=a_1z^1+a_3Z^1,\\
\widehat\Gamma^*z_T=z^1(0),
\end{cases}\tag{5.10}
$$
where $(z^1(\cdot),Z^1(\cdot))$ is the transposition solution to (5.8) with $\xi(\cdot)=0$.

Proof. For any $z_T\in L^2_{\mathcal F_T}(\Omega;L^2(0,1))$ and $\xi(\cdot)\in L^2_{\mathbf F}(0,T;L^2(0,1))$, by Theorem B.16, there exists a unique transposition solution $(z(\cdot),Z(\cdot))\in D_{\mathbf F}([0,T];L^2(\Omega;L^2(0,1)))\times L^2_{\mathbf F}(0,T;L^2(0,1))$ to (5.8) satisfying
$$|(z(\cdot),Z(\cdot))|_{D_{\mathbf F}([0,T];L^2(\Omega;L^2(0,1)))\times L^2_{\mathbf F}(0,T;L^2(0,1))}\le\mathcal C\big(|z_T|_{L^2_{\mathcal F_T}(\Omega;L^2(0,1))}+|\xi|_{L^2_{\mathbf F}(0,T;L^2(0,1))}\big).$$
For any $\eta\in L^2(0,1)$ and $u(\cdot)\in\mathcal U[0,T]$, let $y(\cdot)$ be the corresponding solution to the control system (5.1). From the definition of the transposition solution to Eq. (5.8), we find
$$\mathbf E\int_0^1y(T)z_T\,dx-\mathbf E\int_0^1\eta\,z(0)\,dx=\mathbf E\int_0^T\int_0^1\big[u(a_1z+a_3Z)-y\xi\big]\,dx\,dt.$$
This implies that
$$\mathbf E\int_0^1\big[(\widehat\Gamma\eta+\widehat\Xi u)z_T-\eta\,z(0)\big]\,dx=\mathbf E\int_0^T\int_0^1\big[u(a_1z+a_3Z)-(\Gamma\eta+\Xi u)\xi\big]\,dx\,dt.\tag{5.11}$$
Let $z_T=0$ and $\eta=0$ in (5.11). We then get
$$\mathbf E\int_0^T\int_0^1u\,\Xi^*\xi\,dx\,dt=\mathbf E\int_0^T\int_0^1(\Xi u)\xi\,dx\,dt=\mathbf E\int_0^T\int_0^1u(a_1z^0+a_3Z^0)\,dx\,dt.$$
This proves the first equality in (5.9).

Choosing $u=0$ and $z_T=0$ in (5.11), we obtain
$$\int_0^1\eta\,\Gamma^*\xi\,dx=\mathbf E\int_0^T\int_0^1(\Gamma\eta)\xi\,dx\,dt=\mathbf E\int_0^1\eta\,z^0(0)\,dx.$$
This proves the second equality in (5.9).

Letting $\eta=0$ and $\xi=0$ in (5.11), we find
$$\mathbf E\int_0^T\int_0^1u\,\widehat\Xi^*z_T\,dx\,dt=\mathbf E\int_0^1(\widehat\Xi u)z_T\,dx=\mathbf E\int_0^T\int_0^1u(a_1z^1+a_3Z^1)\,dx\,dt.$$
This proves the first equality in (5.10).

At last, letting $u=0$ and $\xi=0$ in (5.11), we see that
$$\int_0^1\eta\,\widehat\Gamma^*z_T\,dx=\mathbf E\int_0^1(\widehat\Gamma\eta)z_T\,dx=\mathbf E\int_0^1\eta\,z^1(0)\,dx.$$
This verifies the second equality in (5.10). □

By Proposition 5.1, we may re-write the cost functional (5.3) as follows.

Proposition 5.2. It holds that
$$\mathcal J(\eta;u(\cdot))=\frac12\Big\{\mathbf E\int_0^T\Big(\big\langle\mathcal Nu,u\big\rangle_{L^2(0,1)}+2\big\langle\mathcal L(\eta),u\big\rangle_{L^2(0,1)}\Big)dt+\mathcal K(\eta)\Big\},\tag{5.12}$$
where
$$
\begin{cases}
\mathcal N=r+\Xi^*q\,\Xi+\widehat\Xi^*g\,\widehat\Xi,\\[1mm]
\mathcal L(\eta)=\Xi^*q\,\Gamma\eta+\widehat\Xi^*g\,\widehat\Gamma\eta,\\[1mm]
\mathcal K(\eta)=\mathbf E\Big[\displaystyle\int_0^T\int_0^1q\,(\Gamma\eta)^2\,dx\,dt+\int_0^1g\,(\widehat\Gamma\eta)^2\,dx\Big].
\end{cases}\tag{5.13}
$$
By Proposition 5.2, we have the following result for the solvability of Problem (SLQ).

Theorem 5.1. (1) Problem (SLQ) is solvable at $\eta\in L^2(0,1)$ if and only if $\mathcal N\ge0$ and there exists a $\bar u(\cdot)\in\mathcal U[0,T]$ such that
$$\mathcal N\bar u(\cdot)+\mathcal L(\eta)=0.\tag{5.14}$$
In this case, $\bar u(\cdot)$ is an optimal control.

(2) If $\mathcal N\gg0$, then for any $\eta\in L^2(0,1)$, $\mathcal J(\eta;\cdot)$ admits a unique minimizer $\bar u(\cdot)$ given by
$$\bar u(\cdot)=-\mathcal N^{-1}\mathcal L(\eta).\tag{5.15}$$
In this case,
$$\inf_{u(\cdot)\in\mathcal U[0,T]}\mathcal J(\eta;u(\cdot))=\mathcal J(\eta;\bar u(\cdot))=\frac12\Big(\mathcal K(\eta)-\mathbf E\int_0^T\big\langle\mathcal N^{-1}\mathcal L(\eta),\mathcal L(\eta)\big\rangle_{L^2(0,1)}\,dt\Big),\qquad\eta\in L^2(0,1).$$

Proof. (1) The "only if" part. From (5.12), we see that if $\mathcal N\ge0$ fails, then there would not exist optimal controls. Let $\bar u(\cdot)\in\mathcal U[0,T]$ be an optimal control of Problem (SLQ) for $\eta\in L^2(0,1)$. From the optimality of $\bar u(\cdot)$, for any $u(\cdot)\in\mathcal U[0,T]$, it holds that
$$0\le\liminf_{\lambda\to0}\frac1\lambda\Big(\mathcal J(\eta;\bar u+\lambda u)-\mathcal J(\eta;\bar u)\Big)=\mathbf E\int_0^T\big\langle\mathcal N\bar u+\mathcal L(\eta),u\big\rangle_{L^2(0,1)}\,dt.$$
Consequently,
$$\mathcal N\bar u+\mathcal L(\eta)=0.$$
The "if" part. Let $(\eta,\bar u(\cdot))\in L^2(0,1)\times\mathcal U[0,T]$ satisfy (5.14). For any $u\in\mathcal U[0,T]$, we see
$$
\begin{aligned}
\mathcal J(\eta;u(\cdot))-\mathcal J(\eta;\bar u(\cdot))
&=\mathcal J\big(\eta;\bar u(\cdot)+(u(\cdot)-\bar u(\cdot))\big)-\mathcal J(\eta;\bar u(\cdot))\\
&=\mathbf E\int_0^T\big\langle\mathcal N\bar u+\mathcal L(\eta),u-\bar u\big\rangle_{L^2(0,1)}\,dt+\frac12\,\mathbf E\int_0^T\big\langle\mathcal N(u-\bar u),u-\bar u\big\rangle_{L^2(0,1)}\,dt\\
&=\frac12\,\mathbf E\int_0^T\big\langle\mathcal N(u-\bar u),u-\bar u\big\rangle_{L^2(0,1)}\,dt\ge0.
\end{aligned}
$$
This, together with $\mathcal N\ge0$, concludes that $\bar u(\cdot)$ is an optimal control.

(2) Since all the optimal controls satisfy (5.14) and $\mathcal N$ is invertible, we get the conclusion (2) immediately. □

When
$$r\gg0,\qquad q\ge0,\qquad g\ge0,\qquad\text{a.e. }(t,x,\omega)\in(0,T)\times(0,1)\times\Omega,\tag{5.16}$$
Problem (SLQ) is referred to as a standard SLQ problem.
control of Problem (SLQ) for 𝜂 ∈ 𝐿2 (0, 1). From the optimality of 𝑢(⋅),
̄ where (𝑦(⋅), 𝑧(⋅), 𝑍(⋅)) is the transposition solution to (5.17).
for any 𝑢(⋅) ∈  [0, 𝑇 ], it holds that
( )
1 Proof. By (5.13), we obtain that
0 ≤ lim inf  (𝜂; 𝑢̄ + 𝜆𝑢) −  (𝜂; 𝑢)
̄
𝜆→0 𝜆
𝑇  𝑢 + (𝜂)
= ⟨ 𝑢̄ + (𝜂), 𝑢⟩𝐿2 (0,1) 𝑑𝑡. = (𝑟 + 𝛯 ∗ 𝑞𝐿 + 𝛯̂∗ 𝑔 𝛯)𝑢
̂ + (𝛯 ∗ 𝑞𝛤 𝜂) + 𝛯̂∗ 𝑔 𝛤̂𝜂
∫0 ( ) (5.19)
Consequently, = 𝑟𝑢 + 𝛯 ∗ 𝑞 𝛤 𝜂 + 𝛯𝑢 + 𝛯 ̂∗ 𝐺(𝛤̂𝜂 + 𝛯𝑢)
̂
= 𝑟𝑢 + 𝛯 ∗ 𝑞𝑦 + 𝛯̂∗ 𝑔𝑦(𝑇 ).
 𝑢̄ + (𝜂) = 0.
From (5.9) and (5.10), it follows that
The ‘‘only if’’ part. Let (𝜂, 𝑢(⋅))
̄ ∈ 𝐿2 (0, 1) ×  [0, 𝑇 ] satisfy (5.14).
For any 𝑢 ∈  [0, 𝑇 ], we see 𝛯 ∗ (𝑞𝑦) + 𝛯
̂∗ (𝑔𝑦(𝑇 )) = −𝑎1 𝑧 − 𝑎3 𝑍.

 (𝜂; 𝑢(⋅)) −  (𝜂; 𝑢(⋅))


̄ This, together with (5.19), implies (5.18). □
( )
=  𝜂; 𝑢(⋅)
̄ + 𝑢(⋅) − 𝑢(⋅)̄ −  (𝜂; 𝑢(⋅))
̄
𝑇⟨ ⟩ The following result is a variant of Theorem 5.1.
=E  𝑢̄ + (𝜂), 𝑢 − 𝑢̄ 𝐿2 (0,1) 𝑑𝑡
∫0
𝑇⟨ ( ) ⟩
1 Theorem 5.3. Problem (SLQ) is solvable at 𝜂 ∈ 𝐿2 (0, 1) with an optimal
+ E  𝑢 − 𝑢̄ , 𝑢 − 𝑢̄ 𝐿2 (0,1) 𝑑𝑡
2 ∫ 0 pair (𝑦(⋅),
̄ 𝑢(⋅))
̄ if and only if there exists a unique solution (𝑦(⋅),
̄ 𝑢(⋅),
̄ 𝑧(⋅),
𝑇⟨ ( ) ⟩
1 𝑍(⋅)) to the following forward–backward stochastic parabolic equation
= E  𝑢 − 𝑢̄ , 𝑢 − 𝑢̄ 𝐿2 (0,1) 𝑑𝑡 ≥ 0.
2 ∫ 0
⎧ ( ) ( )
⎪ 𝑑 𝑦̄ = 𝑦̄𝑥𝑥 + 𝑎1 𝑢̄ 𝑑𝑡 + 𝑎2 𝑦̄ + 𝑎3 𝑢̄ 𝑑𝑊 (𝑡) in (0, 𝑇 ] × (0, 1),
This, together with  ≥ 0, concludes that 𝑢(⋅)
̄ is an optimal control. ( )
(2) Since all the optimal controls satisfy (5.14) and  is invertible, ⎪ 𝑑𝑧 = − 𝑧𝑥𝑥 − 𝑞 𝑦̄ + 𝑎2 𝑍 𝑑𝑡 + 𝑍𝑑𝑊 (𝑡) in [0, 𝑇 ) × (0, 1),
⎨ (5.20)
we get the conclusion (2) immediately. □ ⎪ 𝑦̄ = 𝑧 = 0 on [0, 𝑇 ) × {0, 1},
⎪ ̄ = 𝜂,
𝑦(0) 𝑧(𝑇 ) = −𝑔 𝑦(𝑇
̄ ) in (0, 1)
When ⎩
𝑟 ≫ 0, 𝑞 ≥ 0, 𝑔 ≥ 0, so that
(5.16)
a.e. (𝑡, 𝑥, 𝜔) ∈ (0, 𝑇 ) × (0, 1) × 𝛺,
𝑟𝑢̄ − 𝑎1 𝑧(𝑡) − 𝑎3 𝑍 = 0,
Problem (SLQ) is referred to as a standard SLQ problem. (5.21)
a.e. (𝑡, 𝑥, 𝜔) ∈ [0, 𝑇 ] × (0, 1) × 𝛺,

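To fix ideas, the solvability condition of Theorem 5.1 can be illustrated in finite dimensions, where the operators  and  become matrices: for a uniformly convex quadratic cost, the unique minimizer solves the linear system corresponding to (5.14) and is given by the analogue of (5.15). The following sketch is not taken from the paper; the matrices N and M and the vector eta are artificial stand-ins chosen only to make the first-order condition concrete.

```python
import numpy as np

# Finite-dimensional analogue of Theorem 5.1: for a cost whose gradient in u
# is N u + M eta with N >> 0, the unique minimizer satisfies N u + M eta = 0
# (the analogue of (5.14)), i.e. u = -N^{-1} M eta (the analogue of (5.15)).
rng = np.random.default_rng(0)
n, m = 4, 3
N = rng.standard_normal((m, m))
N = N @ N.T + np.eye(m)                 # N >> 0 (uniformly positive definite)
M = rng.standard_normal((m, n))         # stand-in for the operator acting on eta
eta = rng.standard_normal(n)

def J(u):
    """Quadratic cost whose gradient is N u + M eta."""
    return 0.5 * u @ N @ u + u @ (M @ eta)

u_bar = -np.linalg.solve(N, M @ eta)    # analogue of (5.15)

assert np.allclose(N @ u_bar + M @ eta, 0.0)   # first-order condition (5.14)
for _ in range(50):                            # u_bar beats nearby controls
    v = u_bar + 0.1 * rng.standard_normal(m)
    assert J(u_bar) <= J(v) + 1e-12
```

The same computation also explains the closed form of the minimal cost in Theorem 5.1(2): substituting u_bar into J collapses the quadratic term against the linear one.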

and for any 𝑢(⋅) ∈  [0, 𝑇 ], the unique transposition solution (𝑦0 (⋅), From now on, we study the optimal feedback control for Problem
𝑧0 (⋅), 𝑍 0 (⋅))
to (5.17) with 𝜂 = 0 satisfies (SLQ). To this end, we first introduce the following space:
𝑇 1( ) 𝐿𝑝,,2 (𝛺; 𝐿𝑞 (0, 𝑇 ; (𝐿2 (0, 1))))
E 𝑟𝑢 − 𝑎1 𝑧0 − 𝑎3 𝑍 0 𝑢𝑑𝑥𝑑𝑡 ≥ 0. (5.22) 𝛥{
F
∫0 ∫0 |
= 𝐹 ∶ [0, 𝑇 ] × 𝛺 → (𝐿2 (0, 1)) |
𝑝
|
|𝐹 |(𝐿2 (0,1)) ∈ 𝐿F (𝛺; 𝐿 (0, 𝑇 )),
𝑞
Proof. The ‘‘only if’’ part. The equality (5.21) follows from Theo-
𝐹 𝜂 ∈ 𝐿𝑝F (𝛺; 𝐿𝑞 (0, 𝑇 ; 𝐿2 (0, 1))), ∀ 𝜂 ∈ 𝐿2 (0, 1),
rem 5.2 while (5.22) follows from Theorem 5.1 along with (5.18) (for }
𝛤𝑛 𝐹 𝛤𝑛 ∈ 𝐿𝑝F (𝛺; 𝐿𝑞 (0, 𝑇 ; 2 (𝐿2 (0, 1)))) .
the special case 𝜂 = 0).
The ‘‘if’’ part. From Proposition 5.3, the inequality (5.22) is equiv- where 1 ≤ 𝑝, 𝑞 ≤ ∞.
alent to  ≥ 0. Now, let (𝑦(⋅),
̄ 𝑧(⋅), 𝑍(⋅)) be a transposition solution to
(5.20) such that (5.21) holds. Then, by Proposition 5.3, we see that Definition 5.2. An operator-valued progress 𝛩(⋅) ∈ 𝐿∞,,2 F
(𝛺;
2 2
𝐿 (0, 𝑇 ; (𝐿 (0, 1)))) is called an optimal feedback operator for Problem
(5.21) is the same as (5.14). Hence by Theorem 5.2, Problem (SLQ) is
solvable. □ (SLQ) if

Let us now consider the case that 𝑟 ≠ 0 for a.e. (𝑡, 𝑥, 𝜔) ∈ [0, 𝑇 ] ×  (𝜂; 𝛩(⋅)𝑦(⋅))
̄ ≤  (𝜂; 𝑢(⋅)), ∀ 𝜂 ∈ 𝐿2 (0, 1), 𝑢(⋅) ∈  [0, 𝑇 ], (5.26)
(0, 1) × 𝛺 and
where 𝑦(⋅)
̄ = 𝑦(⋅;
̄ 𝜂, 𝛩(⋅)𝑦(⋅))
̄ solves the following equation:
1
∈ 𝐿∞ (0, 𝑇 ; 𝐿∞ (0, 1)). (5.23) ( ) ( )
𝑟 F ⎧ 𝑑 𝑦(𝑡)
̄ = 𝑦̄𝑥𝑥 + 𝑎1 𝛩𝑦̄ 𝑑𝑡 + 𝑎2 𝑦+𝑎
̄ 3 𝛩𝑦̄ 𝑑𝑊(𝑡) in (0, 𝑇 ] × (0, 1),

Note that (5.23) does not imply that 𝑟 ≫ 0. If (5.23) holds, then ⎨ 𝑦̄ = 0 on (0, 𝑇 ] × {0, 1},
(5.20)–(5.21) are equivalent to the following equation: ⎪ 𝑦(0) in (0, 1).
⎩ ̄ =𝜂
⎧ (
⎪ 𝑎21 𝑎1 𝑎3 ) (5.27)
⎪ 𝑑 𝑦̄ = 𝑦̄𝑥𝑥 + 𝑟 𝑧 + 𝑟 𝑍 𝑑𝑡
⎪ ( 𝑎 𝑎 𝑎2 ) By Theorem B.9, the feedback control system (5.27) admits a unique
⎪ ̄ 1 3 𝑧+ 3 𝑍 𝑑𝑊(𝑡)
+ 𝑎2 𝑦+ in (0,𝑇 ]×(0,1), solution 𝑦(⋅) ∈ 𝐿2F (𝛺; 𝐶([0, 𝑇 ]; 𝐿2 (0, 𝑇 ))) ∩ 𝐿𝑝F (𝛺; 𝐿2 (0, 𝑇 ; 𝐻01 (0, 1))). In
⎨ ( 𝑟 𝑟) (5.24)
⎪ 𝑑𝑧(𝑡) = 𝑧𝑥𝑥 −𝑞 𝑦+𝑎 ̄ 2 𝑍 𝑑𝑡+𝑍𝑑𝑊 (𝑡) in [0,𝑇 )×(0,1), Definition 5.2, 𝛩(⋅) is required to be independent of 𝜂 ∈ 𝐿2 (0, 1). For
⎪ 𝑦̄ = 𝑧 = 0 on [0, 𝑇 ) × {0, 1}, a fixed 𝜂 ∈ 𝐿2 (0, 1), the inequality (5.26) implies that the control

⎪ 𝑦(0)
̄ = 𝜂, 𝑧(𝑇 ) = −𝑔 𝑦(𝑇
̄ ) in (0, 1). 𝑢(⋅)
̄ ≡ 𝛩(⋅)𝑦(⋅)
̄ ∈  [0, 𝑇 ] is optimal for Problem (SLQ). Thus, for
⎩ Problem (SLQ), the existence of an optimal feedback operator implies
The Eq. (5.24) is a coupled forward–backward stochastic parabolic equa- the existence of an optimal control for any 𝜂 ∈ 𝐿2 (0, 1).
tion. Similarly to (5.17), we call (𝑦, 𝑧, 𝑍) a transposition solution if 𝑦 is a In the study of deterministic LQ problems, one employs Riccati
mild solution to the first equation in (5.24) and (𝑧, 𝑍) is a transposition equations to construct the desired feedback control. Stimulated by this
solution to the second one in (5.24). Unlike (5.17), since (5.24) is fully and the pioneering work (Bismut, 1976) (for Problem (SLQ) when 𝐻 =
coupled, one cannot solve the first equation first and then solve the R𝑛 ), we need to introduce an operator-valued, backward stochastic
second one. The well-posedness of (5.24) is not straightforward, and as Riccati equation for Problem (SLQ) to study the optimal feedback
far as we know, there are no published works on it. operator (Lü & Zhang, 2019b). In this subsection, we shall give a brief
By Theorem 5.3, we have introduction to the main work in Lü and Zhang (2019b) (To simplify
the presentation, we consider here only a simple case, i.e., that for
Corollary 5.1. Let (5.23) hold and  ≥ 0. Then Problem (SLQ) is stochastic parabolic equations in one space dimension).
uniquely solvable at 𝜂 ∈ 𝐿2 (0, 1) if and only if Eq. (5.24) admits a unique For simplicity of notations, we introduce operator-valued processes
transposition solution (𝑦(⋅),
̄ 𝑧(⋅), 𝑍(⋅)). In this case, the optimal control is 𝐵, 𝐶, 𝐷 ∈ 𝐿∞ (0, 𝑇 ; (𝐿2 (0, 1))), 𝑀, 𝑅 ∈ 𝐿∞ (0, 𝑇 ; S(𝐿2 (0, 1))) and an
F F
given by ∞
operator-valued random variable 𝐺 ∈ 𝐿 (𝛺; S(𝐿2 (0, 1))) as follows:
( ) 𝑇
̄ = 1𝑟 𝑎1 𝑧 + 𝑎3 𝑍 ,
𝑢(𝑡) {
(5.25) 𝐵𝑓 = 𝑎1 𝑓 , 𝐶𝑓 = 𝑎2 𝑓 , 𝐷𝑓 = 𝑎3 𝑓 ,
a.e. (𝑡, 𝑥, 𝜔) ∈ [0, 𝑇 ] × (0, 1) × 𝛺. ∀ 𝑓 ∈ 𝐿2 (0, 1).
𝑀𝑓 = 𝑞𝑓 , 𝑅𝑓 = 𝑟𝑓 , 𝐺𝑓 = 𝑔𝑓 ,
Corollary 5.2. Let (5.16) hold. Then Eq. (5.24) admits a unique transpo- Consider the following backward stochastic Riccati equation:
sition solution (𝑦(⋅),
̄ 𝑧(⋅), 𝑍(⋅)) and Problem (SLQ) is uniquely solvable with
⎧ 𝑑𝑃 = − ( 𝑃 𝐴 + 𝐴∗ 𝑃 + 𝛬𝐶 + 𝐶 ∗ 𝛬 + 𝐶 ∗ 𝑃 𝐶
the optimal control given by (5.25). ⎪ )
⎨ +𝑀 −𝐿∗ 𝐾 −1 𝐿 𝑑𝑡+𝛬𝑑𝑊 (𝑡) in [0, 𝑇 ), (5.28)
⎪ 𝑃 (𝑇 ) = 𝐺,
5.4. The optimal feedback operator and the backward stochastic Riccati ⎩
equation where 𝐴 is defined by (4.23) and
𝛥 𝛥
In the previous subsection, we obtained an open-loop optimal con- 𝐾 = 𝑅 + 𝐷∗ 𝑃 𝐷 > 0, 𝐿 = 𝐵 ∗ 𝑃 + 𝐷∗ (𝑃 𝐶 + 𝛬). (5.29)
trol for Problem (SLQ). In Control Theory, one of the fundamental
The Eq. (5.28) is an operator-valued BSEE. As that in Section 4.3.1,
issues is to find optimal feedback controls, which is particularly impor-
we need to use the stochastic transposition method to solve this equa-
tant in practical applications. Indeed, the main advantage of feedback
tion. Let us first introduce the following two stochastic parabolic equa-
controls is that they keep the corresponding control strategy robust
tions:
with respect to (small) perturbations/disturbances, which are usually
unavoidable in realistic background. Unfortunately, it is actually very ⎧ 𝑑𝜑 = (𝜑 ) ( )
in (𝑡, 𝑇 ] × (0, 1),
⎪ 1 1,𝑥𝑥 + 𝑢1 𝑑𝜏 + 𝑎2 𝜑1 + 𝑣1 𝑑𝑊 (𝜏)
difficult to find optimal feedback controls for general optimal control
⎨ 𝜑1 = 0 on (𝑡, 𝑇 ] × {0, 1},
problems. So far, the most successful attempt in this respect is that for ⎪ 𝜑1 (𝑡) = 𝜉1 in (0, 1)
various LQ problems (e.g., Kalman, 1961; Lasiecka & Triggiani, 2000a, ⎩
2000b; Lü & Zhang, 2019b; Sun & Yong, 2020; Yong & Zhou, 1999). (5.30)

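For readers who wish to experiment, the closed-loop state equation (5.27) can be simulated after a finite-difference discretization in space. The sketch below makes simplifying assumptions not present in the theory above: the coefficients a1, a2, a3 and the feedback gain Theta are constants rather than operator-valued processes, the noise is a single scalar Brownian motion, and a semi-implicit Euler scheme is used so that the stiff discrete Laplacian does not force an impractically small time step.

```python
import numpy as np

# Semi-implicit Euler scheme for a discretized version of (5.27):
# dy = (y_xx + a1*Theta*y) dt + (a2*y + a3*Theta*y) dW,  y(t,0) = y(t,1) = 0.
rng = np.random.default_rng(1)
nx, nt, T = 30, 1000, 1.0
dx, dt = 1.0 / (nx + 1), T / nt
a1, a2, a3, theta = 1.0, 0.3, 0.2, -0.5     # illustrative constants

# Dirichlet Laplacian on the interior grid points
lap = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
       + np.diag(np.ones(nx - 1), -1)) / dx ** 2
x = np.linspace(dx, 1.0 - dx, nx)
y = np.sin(np.pi * x)                        # initial datum eta

A_imp = np.eye(nx) - dt * lap                # diffusion treated implicitly
for _ in range(nt):
    dW = np.sqrt(dt) * rng.standard_normal() # one scalar Brownian increment
    rhs = y + a1 * theta * y * dt + (a2 + a3 * theta) * y * dW
    y = np.linalg.solve(A_imp, rhs)

assert np.all(np.isfinite(y))                # the implicit step keeps the scheme stable
```

Treating the Laplacian implicitly is what makes the time step dt = T/nt usable here; a fully explicit scheme would require dt on the order of dx squared.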

and Furthermore,
⎧ ( ) ( ) 1
𝑑𝜑2 = 𝜑2,𝑥𝑥 + 𝑢2 𝑑𝜏 + 𝑎2 𝜑2 + 𝑣2 𝑑𝑊 (𝜏) in (𝑡, 𝑇 ] × (0, 1), inf  (𝜂; 𝑢) = E⟨𝑃 (0)𝜂, 𝜂⟩𝐿2 (0,1) . (5.35)
⎪ 𝑢∈ [0,𝑇 ] 2
⎨ 𝜑2 = 0 on (𝑡, 𝑇 ] × {0, 1},
⎪ 𝜑2 (𝑡) = 𝜉2 in (0, 1). 5.5. Proof of Theorem 5.4

(5.31)
This subsection is devoted to the proof of Theorem 5.4. We first
Here 𝑡 ∈ [0, 𝑇 ), 𝜉1 , 𝜉2 ∈ 𝐿2 (𝛺; 𝐿2 (0, 1))
and 𝑢1 , 𝑢2 , 𝑣1 , 𝑣2 ∈ 𝐿2F (𝑡, 𝑇 ; introduce the following operator-valued backward stochastic Lyapunov
𝑡
2
𝐿 (0, 1)). equation:
Recall (4.22) (resp. (4.28)) for the definition of the space S(𝐿2 (0, 1))
(resp. S2 (𝐿2 (0, 1); 𝐻 −1 (0, 1))). Put ⎧ 𝑑𝑃 = − [ 𝑃 (𝐴 + 𝐵𝛩) + (𝐴 + 𝐵𝛩)∗ 𝑃 + 𝐶 ∗ 𝑃 𝐶
⎪ ] 𝛩 𝛩
⎨ +𝐶𝛩∗ 𝛬 + 𝛬𝐶𝛩 + 𝑀 + 𝛩∗ 𝑅𝛩 𝑑𝑡 + 𝛬𝑑𝑊 (𝑡) in [0, 𝑇 ),
𝐷F,𝑤 ([0, 𝑇 ]; 𝐿∞ (𝛺; S(𝐿2 (0, 1))))
𝛥{
⎪ 𝑃 (𝑇 ) = 𝐺,
| ⎩
= 𝑃 ∈ 𝐿∞ (0, 𝑇 ; S2 (𝐿2 (0, 1); 𝐻 −1 (0, 1))) |
F |
𝜒[𝑡,𝑇 ] 𝑃 (⋅)𝜁 ∈ 𝐷F ([𝑡, 𝑇 ]; 𝐿 (𝛺; 𝐿 (0, 1))), ∀ 𝜁 ∈ 𝐿2 (𝛺;
2 2 (5.36)
} 𝑡
𝐿2 (0, 1)), |𝑃 (⋅)|(𝐿2 (0,1)) ∈ 𝐿∞ F
(0, 𝑇 ) . 𝛥
where 𝐶𝛩 = 𝐶 + 𝐷𝛩. The Eq. (5.36) is a special case of (4.25). Hence,
( ) by Theorem 4.2, there exists a unique transposition solution
Definition 5.3. We call 𝑃 (⋅), 𝛬(⋅) ∈ 𝐷F,𝑤 ([0, 𝑇 ]; 𝐿∞ (𝛺; S(𝐿2 (0, 1))))×
2 2 −1
𝐿F (0, 𝑇 ; S2 (𝐿 ((0, 1); 𝐻 (0, 1))) a transposition )solution to (5.28) if (𝑃 (⋅), 𝛬(⋅)) ∈ 𝐷F,𝑤 ([0, 𝑇 ]; 𝐿2 (𝛺; S(𝐿2 (0, 1))))
(1) 𝐾(𝑡, 𝜔) ≡ 𝑅(𝑡, 𝜔) + 𝐷(𝑡, 𝜔)∗ 𝑃 (𝑡, 𝜔)𝐷(𝑡, 𝜔) > 0 and its left inverse ×𝐿2F (0, 𝑇 ; S2 (𝐿2 (0, 1); 𝐻 −1 (0, 1)))
𝐾(𝑡, 𝜔)−1 is a densely defined closed operator for a.e. (𝑡, 𝜔) ∈ [0, 𝑇 ] × 𝛺; to (5.36) in the sense of Definition 4.1. Furthermore, noting 𝐺 ∈
(2) For any 𝑡 ∈ [0, 𝑇 ], 𝜉1 , 𝜉2 ∈ 𝐿4 (𝛺; 𝐿2 (0, 1)) and 𝑢1 (⋅), 𝑢2 (⋅), 𝑣1 (⋅), 𝐿∞ (𝛺; S(𝐿2 (0, 1))), 𝑀, 𝑅 ∈ 𝐿∞ (0, 𝑇 ; S(𝐿2 (0, 1))) and 𝛩(⋅) ∈ 𝐿∞,,2
𝑡 𝑇 F F
𝑣2 (⋅) ∈ 𝐿4F (𝛺; 𝐿2 (𝑡, 𝑇 ; 𝐿2 (0, 1))), it holds that 2 2
(𝛺; 𝐿 (0, 𝑇 ; (𝐿 (0, 1)))), by (4.92), we have that 𝑃 (⋅) ∈ 𝐷F,𝑤 ([0, 𝑇 ];
𝑇⟨ ⟩ 𝐿∞ (𝛺; S(𝐿2 (0, 1)))).
E⟨𝐺𝜑1 (𝑇 ), 𝜑2 (𝑇 )⟩𝐿2 (0,1) + E 𝑀𝜑1 , 𝜑2 𝐿2 (0,1)
𝑑𝜏 The following result reveals the relationship between the Riccati
∫𝑡
𝑇⟨ (5.28) and the Lyapunov equation (5.36).

−E 𝐾 −1 𝐿𝜑1 , 𝐿𝜑2 𝐿2 (0,1) 𝑑𝜏 (5.32)
∫𝑡
Lemma 5.1. Suppose 𝛩 is an optimal feedback operator, and (𝑃 , 𝛬) is
⟨ ⟩ 𝑇⟨ ⟩
= E 𝑃 (𝑡)𝜉1 , 𝜉2 𝐿2 (0,1) + E 𝑃 𝑢1 , 𝜑2 𝐿2 (0,1) 𝑑𝜏 the transposition solution to (5.36) corresponding to 𝛩. Let
∫𝑡
𝑇⟨ ⟩ 𝑇⟨ ⟩ 𝐾 = 𝑅 + 𝐷∗ 𝑃 𝐷, 𝐿 = 𝐷∗ 𝑃 𝐶 + 𝐵 ∗ 𝑃 + (𝛬𝐷)∗ . (5.37)
+E 𝑃 𝜑1 ,𝑢2 𝐿2 (0,1)𝑑𝜏 +E 𝑃 𝐶𝜑1 ,𝑣2 𝐿2 (0,1)𝑑𝜏
∫𝑡 ∫𝑡 Then,
𝑇⟨ ⟩
+E 𝑃 𝑣1 , 𝐶𝜑2 + 𝑣2 𝐿2 (0,1) 𝑑𝜏 𝐾(𝑡, 𝜔) ≥ 0, for a.e. (𝑡, 𝜔) ∈ (0, 𝑇 ) × 𝛺, (5.38)
∫𝑡
𝑇⟨ ⟩
+E
∫𝑡
𝛬𝑣1 , 𝜑2 𝐻 −1 (0,1),𝐻 1 (0,1) 𝑑𝜏 𝐿 ∈ 𝐿∞,,2
F
(𝛺; 𝐿2 (0, 𝑇 ; (𝐿2 (0, 1)))), (5.39)
0
𝑇⟨ ⟩ and
+E 𝜑1 , 𝛬𝑣2 𝐻 1 (0,1),𝐻 −1 (0,1) 𝑑𝜏,
∫𝑡 0 𝐾(𝑡, 𝜔)𝛩(𝑡, 𝜔) + 𝐿(𝑡, 𝜔) = 0 in (𝐿2 (0, 1))
(5.40)
where 𝜑1 (⋅) and 𝜑2 (⋅) solve (5.30) and (5.31), respectively. for a.e. (𝑡, 𝜔) ∈ (0, 𝑇 ) × 𝛺.
Before proving Lemma 5.1, we give the following preliminary result.
Remark 5.1. One may expect that Eq. (5.28) would admit a unique
transposition solution (𝑃 , 𝛬) ∈ 𝐶F ([0, 𝑇 ]; 𝐿∞ (𝛺; S(𝐿2 (0, 1)))) × 𝐿∞ (𝛺;
F Lemma 5.2. Suppose 𝑌 (⋅) ∈ 𝐿2F (0, 𝑇 ; 𝐿2 (0, 1)) is a given process satisfying
𝐿2 (0, 𝑇 ; S(𝐿2 (0, 1)))). Unfortunately, it is not the case even in finite
that for a.e. 𝑡 ∈ [0, 𝑇 ], there is a sequence of decreasing positive numbers
dimensions (e.g., Lü, Wang, & Zhang, 2017a, Example 6.2).
{𝜀𝑛 }∞𝑛=1
converging to 0 such that
( )
Remark 5.2. In Definition 5.3, we only require that 𝐾(𝑡, 𝜔) has left 𝑡+𝜀𝑛 E 𝑌 (𝑠)|
𝑡
inverse for a.e. (𝑡, 𝜔) ∈ (0, 𝑇 ) × 𝛺, and therefore 𝐾(𝑡, 𝜔)−1 may be lim 𝑑𝑠 = 0 in 𝐿2 (0, 1), a.s. (5.41)
𝜀𝑛 →0 ∫𝑡 𝜀𝑛
unbounded. Nevertheless, this cannot be improved (e.g., Lü & Zhang,
2019b). Then, 𝑌 = 0, for a.e. (𝑡, 𝜔) ∈ [0, 𝑇 ) × 𝛺.

The following result provides a relationship between the existence When 𝑌 ∈ 𝐿2F (0, 𝑇 ; R𝑛 ), a similar result was proved in Hu, Jin,
of an optimal feedback operator for Problem (SLQ) and the well- and Zhou (2017). Since 𝐿2 (0, 1) is a separable Hilbert space, one can
posedness of (5.28) in the sense of transposition solution. follow Hu et al. (2017) to prove Lemma 5.2 (hence we omit the details).

Theorem 5.4. Suppose that 𝑅 > 0. The Riccati equation (5.28) admits a Proof of Lemma 5.1. Let us divide the proof into four steps.
unique transposition solution Step 1. Given 𝑓 ∈ 𝐿2F (0, 𝑇 ; 𝐿2 (0, 1)), consider the following SEE:
( )
𝑃 (⋅), 𝛬(⋅) ∈ 𝐷F,𝑤 ([0, 𝑇 ]; 𝐿∞ (𝛺; S(𝐿2 (0, 1)))) { [( ) ] ( )
𝑑𝜑 = 𝐴 + 𝐵𝛩 𝜑 + 𝐵𝑓 𝑑𝑟 + 𝐶𝛩 𝜑 + 𝐷𝑓 𝑑𝑊 (𝑟) in (𝑡, 𝑇 ],
×𝐿2F (0, 𝑇 ; S2 (𝐿2 (0, 1); 𝐻 −1 (0, 1)))
𝜑(𝑡) = 𝜉.
so that
[ ] (5.42)
𝐾(⋅)−1 𝐵(⋅)∗ 𝑃 (⋅) + 𝐷(⋅)∗ 𝑃 (⋅)𝐶(⋅) + (𝛬(⋅)𝐷(⋅))∗
(5.33)
∈ 𝐿F∞,,2 (𝛺; 𝐿2 (0, 𝑇 ; (𝐿2 (0, 1)))) By the definition of transposition solution to (5.36), we have
if and only if Problem (SLQ) admits an optimal feedback operator 𝛩(⋅). ⟨ ⟩
E 𝑃𝑇 𝜑(𝑇 ), 𝜑(𝑇 ) 𝐿2 (0,1)
In this case,
𝑇⟨ ⟩
[ ] +E (𝑀 + 𝛩∗ 𝑅𝛩)𝜑, 𝜑 𝐿2 (0,1) 𝑑𝑟
𝛩(⋅) = −𝐾(⋅)−1 𝐵(⋅)∗ 𝑃 (⋅)+𝐷(⋅)∗ 𝑃 (⋅)𝐶(⋅)+(𝛬(⋅)𝐷(⋅))∗ . (5.34) ∫𝑡

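In finite dimensions and with the stochastic term and the feedback dropped, the Lyapunov equation (5.36) reduces to the matrix equation A'P + PA + Q = 0, which can be solved directly by vectorization. The sketch below, with artificial A and Q, is only meant to make the algebraic structure concrete; the infinite-dimensional stochastic case is precisely where the transposition method is needed.

```python
import numpy as np

# Deterministic, finite-dimensional analogue of the Lyapunov equation (5.36):
# A' P + P A + Q = 0.  Vectorizing (column-major) gives the linear system
# (I kron A' + A' kron I) vec(P) = -vec(Q).
rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)) - 4.0 * np.eye(n)   # shifted to be stable
Q = rng.standard_normal((n, n))
Q = Q @ Q.T                                         # Q >= 0, self-adjoint

Kron = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
vecP = np.linalg.solve(Kron, -Q.flatten(order="F")) # column-major vec
P = vecP.reshape((n, n), order="F")

assert np.allclose(A.T @ P + P @ A + Q, 0.0)        # solves the Lyapunov equation
assert np.allclose(P, P.T)                          # and is self-adjoint
```

For a stable A the unique solution is in addition positive semi-definite, which mirrors the role P plays as a value-type operator in the arguments above.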

⟨ ⟩ 𝑇⟨ ⟩ ⟨ ⟩
= E 𝑃 (𝑡)𝜉, 𝜉 𝐿2 (0,1) + E 𝑃 𝐵𝑓 , 𝜑 𝐿2 (0,1) 𝑑𝑟 = E 𝐺𝜑𝜏,𝑣,𝜀 (𝑇 ), 𝜑𝜏,𝑣,𝜀 (𝑇 ) 𝐿2 (0,1) (5.46)
∫𝑡 𝑇 ⟨(
𝑇⟨
) ⟩
⟩ 𝑇⟨ ⟩ +E 𝑀 + 𝛩∗ 𝑃 𝛩 𝜑𝜏,𝑣,𝜀 , 𝜑𝜏,𝑣,𝜀 𝐿2 (0,1) 𝑑𝑟
+E 𝑃 𝜑, 𝐵𝑓 𝐿2 (0,1) 𝑑𝑟+E 𝑃 𝐶𝛩 𝜑, 𝐷𝑓 𝐿2 (0,1) 𝑑𝑟 ∫0
∫𝑡 ∫𝑡 𝑇[
𝑇⟨
⟨ ⟩ ⟨ ⟩ ]
⟩ +E 2 𝑅𝛩𝜑𝜏,𝑣,𝜀 , 𝑓 𝐿2 (0,1) + 𝑅𝑓 , 𝑓 𝐿2 (0,1) 𝑑𝑟.
+E 𝑃 𝐷𝑓 , 𝐶𝛩 𝜑 + 𝐷𝑓 𝐿2 (0,1) 𝑑𝑟 ∫0
∫𝑡
𝑇⟨ By (5.43) again, we get that

+E 𝛬𝐷𝑓 , 𝜑 𝐻 −1 (0,1),𝐻 1 (0,1) 𝑑𝑟 ⟨ ⟩
∫𝑡 0 E 𝐺𝜑𝜏,𝑣,𝜀 (𝑇 ), 𝜑𝜏,𝑣,𝜀 (𝑇 ) 𝐿2 (0,1)
𝑇⟨ ⟩ 𝑇 ⟨( ) ⟩
+E 𝛬𝐷𝑓 , 𝜑 𝐻 −1 (0,1),𝐻 1 (0,1) 𝑑𝑟 (5.43) +E 𝑄 + 𝛩∗ 𝑃 𝛩 𝜑𝜏,𝑣,𝜀 , 𝜑𝜏,𝑣,𝜀 𝐿2 (0,1) 𝑑𝑟
∫𝑡 0 ∫0
⟨ ⟩ 𝑇⟨ ⟩ 𝑇[ ⟨ ⟩ ⟨ ⟩ ]
= E 𝑃 (𝑡)𝜉, 𝜉 𝐿2 (0,1) + E 𝑃 𝐵𝑓 , 𝜑 𝐿2 (0,1) 𝑑𝑟 +E 2 𝑅𝛩𝜑𝜏,𝑣,𝜀 , 𝑓 𝐿2 (0,1) + 𝑅𝑓 , 𝑓 𝐿2 (0,1) 𝑑𝑟
∫𝑡 ∫0
𝑇⟨ ⟩ 𝑇⟨ ⟩ ⟨ ⟩ 𝑇⟨ ⟩
+E 𝑃 𝜑, 𝐵𝑓 𝐿2 (0,1) 𝑑𝑟+2E 𝑃 𝐶𝛩 𝜑, 𝐷𝑓 𝐿2 (0,1) 𝑑𝑟 = E 𝑃 (0)𝜉, 𝜉 𝐿2 (0,1) + E 𝑃 𝐵𝑓 , 𝜑𝜏,𝑣,𝜀 (𝑟) 𝐿2 (0,1) 𝑑𝑟
∫𝑡 ∫𝑡 ∫0
𝑇⟨ ⟩ 𝑇⟨ ⟩
+E 𝑃 𝐷𝑓 , 𝐷𝑓 𝐿2 (0,1) 𝑑𝑟 +E 𝑃 𝜑𝜏,𝑣,𝜀 , 𝐵𝑓 𝑑𝑟 (5.47)
∫𝑡 ∫0 𝐿2 (0,1)
𝑇⟨ ⟩ 𝑇⟨ ( ) ⟩
+2E 𝛬𝐷𝑓 , 𝜑 𝐻 −1 (0,1),𝐻 1 (0,1) 𝑑𝑟. +2E 𝑃 𝐶 + 𝐷𝛩 𝜑𝜏,𝑣,𝜀 , 𝐷𝑓 𝐿2 (0,1) 𝑑𝑟
∫𝑡 0 ∫0
Denote by 𝜑(⋅)
̄ the solution to (5.42) with 𝑓 = 0 and 𝑡 = 0. 𝑇⟨ ⟩
Let 𝑣(⋅) ∈ 𝐿∞ (0, 𝑇 ; 𝐿2 (0, 1)), 𝜏 ∈ [0, 𝑇 ) and 𝜀 be small enough such +E 𝑃 𝐷𝑓 , 𝐷𝑓 𝐿2 (0,1)
𝑑𝑟
F ∫0
that 𝜏 + 𝜀 ≤ 𝑇 . Let 𝑓 (⋅) = 𝜒[𝜏,𝜏+𝜀] (⋅)𝑣(⋅) and denote by 𝜑𝜏,𝑣,𝜀 (⋅) the 𝑇⟨ ⟩
corresponding solution to (5.42) with 𝑡 = 0. It follows that 𝜑𝜏,𝑣,𝜀 (𝜏) = +2E 𝛬𝐷𝑓 , 𝜑𝜏,𝑣,𝜀 𝐻 −1 (0,1),𝐻01 (0,1)
𝑑𝑟
∫0
𝜑(𝜏).
̄ Furthermore, for 𝑟 ∈ [𝜏, 𝜏 + 𝜀], 𝑇[ ⟨ ⟩ ⟨ ⟩ ]
E|𝜑𝜏,𝑣,𝜀 (𝑟) − 𝜑(𝑟)|
̄ 2 +E 2 𝑅𝛩𝜑𝜏,𝑣,𝜀 , 𝑓 𝐿2 (0,1) + 𝑅𝑓 , 𝑓 𝐿2 (0,1) 𝑑𝑟
𝐿2 (0,1) ∫0
|
𝑟 ( ) (⟨ ⟩ ) 𝜏+𝜀 [ ⟨ ⟩
= E| 𝑆(𝑟 − 𝜎)𝐵𝛩 𝜑𝜏,𝑣,𝜀 − 𝜑̄ 𝑑𝜎 = E 𝑃 (0)𝜉, 𝜉 𝐿2 (0,1) + E 𝑣, 𝐾𝑣 𝐿2 (0,1)
| ∫𝜏 ∫𝜏
⟨ ( )∗ ⟩ ]
𝑟 ( ) +2 𝜑𝜏,𝑣,𝜀 , 𝐾𝛩 + 𝐿 𝑣 𝐻 1 (0,1),𝐻 −1 (0,1) 𝑑𝑟.
+ 𝑆(𝑟 − 𝜎)𝐶𝛩 𝜑𝜏,𝑣,𝜀 − 𝜑̄ 𝑑𝑊 (𝜎) 0
∫𝜏
𝑟 𝑟 Combining (5.45)–(5.47), we obtain that
|2
+ 𝑆(𝑟−𝜎)𝐵𝑣𝑑𝜎 + 𝑆(𝑟−𝜎)𝐷𝑣𝑑𝑊 (𝜎)| 2
∫𝜏 ∫𝜏 |𝐿 (0,1) 2 (𝜉; 𝛩𝜑𝜏,𝑣,𝜀 + 𝑓 )
( 𝑟 ( ) |2 𝜏+𝜀 [⟨ ⟩
| 𝜏,𝑣,𝜀
≤ E | 𝑆(𝑟 − 𝜎)𝐵𝛩 𝜑 − 𝜑̄ 𝑑𝜎 | 2 = 2 (𝜉; 𝛩𝜑)
̄ +E 𝑣, 𝐾𝑣 𝐿2 (0,1)
| ∫𝜏 |𝐿 (0,1) ∫𝜏
𝑟 ( 𝜏,𝑣,𝜀 ) ⟨ ( )∗ ⟩ ]
| |2 + 2 𝜑𝜏,𝑣,𝜀 , 𝐾𝛩 + 𝐿 𝑣 𝐻 1 (0,1),𝐻 −1 (0,1) 𝑑𝑟.
+| 𝑆(𝑟 − 𝜎)𝐶𝛩 𝜑 − 𝜑̄ 𝑑𝑊 (𝜎)| 2
| ∫𝜏 |𝐿 (0,1) 0

𝑟 2 Since 𝛩 is an optimal feedback operator of Problem (SLQ), for any


| |
+| 𝑆(𝑟 − 𝜎)𝐵𝑣𝑑𝜎 | 𝜏 ∈ [0, 𝑇 ), we have
| ∫𝜏 |𝐿2 (0,1)
𝑟 2 ) 2 (𝜉; 𝛩𝜑𝜏,𝑣,𝜀 + 𝑓 ) − 2 (𝜉; 𝛩𝜑)
̄
| | 𝜏+𝜀 [⟨
+| 𝑆(𝑟 − 𝜎)𝐷𝑣𝑑𝑊 (𝜎)| ⟩
| ∫𝜏 |𝐿2 (0,1) =E 𝑣, 𝐾𝑣 𝐿2 (0,1) (5.48)
𝑟( ) 𝜏,𝑣,𝜀 ∫𝜏
≤ E |𝐵𝛩|2(𝐿2 (0,1)) + |𝐶𝛩 |2(𝐿2 (0,1)) |𝜑 ̄ 2𝐿2 (0,1) 𝑑𝜎
− 𝜑| ⟨ 𝜏,𝑣,𝜀 ( )∗ ⟩ ]
∫𝜏 +2 𝜑 , 𝐾𝛩 + 𝐿 𝑣 𝐻 1 (0,1),𝐻 −1 (0,1) 𝑑𝑟 ≥ 0.
0
+(𝑟 − 𝜏).
Noting that 𝑣 ∈ 𝐿∞ F
(0, 𝑇 ; 𝐿2 (0, 1)), we get
This, together with Gronwall’s inequality, implies that ⟨ 𝜏,𝑣,𝜀 ( )∗ ⟩
| 𝜏+𝜀 𝜑 − 𝜑,
̄ 𝐾𝛩+𝐿 𝑣 𝐻 1 (0,1),𝐻 −1 (0,1) |
2
E|𝜑𝜏,𝑣,𝜀 (𝑟) − 𝜑(𝑟)|
̄ ≤ 𝜀, ∀ 𝑟 ∈ [𝜏, 𝜏 + 𝜀]. (5.44) |E 0
𝑑𝑟||
𝐿2 (0,1) | ∫
| 𝜏 𝜀 |
Step 2. From (5.43), we have that (
𝜏+𝜀 )
≤ E |𝐾𝛩|(𝐿2 (0,1);𝐻 −1 (0,1)) +|𝐿|(𝐿2 (0,1);𝐻 −1 (0,1))
2 (𝜉; 𝛩𝜑)
̄ ∫𝜏
⟨ ⟩ |𝜑𝜏,𝑣,𝜀 − 𝜑|
̄ 𝐻 1 (0,1)
= E 𝐺𝜑(𝑇̄ ), 𝜑(𝑇
̄ ) 𝐿2 (0,1) (5.45) 0
× 𝑑𝑟
𝑇 (⟨ ⟩ ⟨ ⟩ ) 𝜀
+E ̄ 𝜑̄ 𝐿2 (0,1) + 𝑅𝛩𝜑,
𝑀 𝜑, ̄ 𝛩𝜑̄ 𝐿2 (0,1) 𝑑𝑟 [ 𝜏+𝜀 (

∫0 ≤ E |𝐾𝛩|(𝐿2 (0,1);𝐻 −1 (0,1))
𝜀 ∫𝜏
⟨ ⟩ 𝑇⟨ ⟩
= E 𝑃𝑇 𝜑(𝑇
̄ ), 𝜑(𝑇
̄ ) 𝐿2 (0,1) + E (𝑀 +𝛩∗ 𝑅𝛩)𝜑,
̄ 𝜑̄ 𝐿2 (0,1)
𝑑𝑟 )2 ] 21
∫0 +|𝐿|(𝐿2 (0,1);𝐻 −1 (0,1)) 𝑑𝑟 (5.49)
⟨ ⟩
= E 𝑃 (0)𝜉, 𝜉 𝐿2 (0,1) [ 𝜏+𝜀 ]1
̄ 2
2
× E|𝜑𝜏,𝑣,𝜀 − 𝜑| 𝑑𝑟
and ∫𝜏 𝐻01 (0,1)
[ 𝜏+𝜀 (
2 (𝜉; 𝛩𝜑𝜏,𝑣,𝜀 + 𝑓 ) ≤ E |𝐾𝛩|(𝐿2 (0,1);𝐻 −1 (0,1))
⟨ ⟩ ∫𝜏
= E 𝐺𝜑𝜏,𝑣,𝜀 (𝑇 ), 𝜑𝜏,𝑣,𝜀 (𝑇 ) 𝐿2 (0,1)
)2 ] 12
𝑇 [⟨ ⟩ +|𝐿|(𝐿2 (0,1);𝐻 −1 (0,1)) 𝑑𝑟
+E 𝑀𝜑𝜏,𝑣,𝜀 , 𝜑𝜏,𝑣,𝜀 𝐿2 (0,1)
∫0 [ ]1
⟨ ( ) ⟩ ] 1
̄ 2
2
+ 𝑅 𝛩𝜑𝜏,𝑣,𝜀 + 𝑓 , 𝛩𝜑𝜏,𝑣,𝜀 + 𝑓 𝐿2 (0,1) 𝑑𝑟 × sup E|𝜑𝜏,𝑣,𝜀 − 𝜑| .
𝜀 𝑟∈[𝜏,𝜏+𝜀] 𝐻01 (0,1)

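The passage from averaged inequalities such as (5.48) and (5.51) to pointwise conclusions rests on Lebesgue's differentiation theorem: the averages (1/ε)∫ from t to t+ε converge to the integrand at almost every t. A scalar numerical illustration of this averaging step, with an arbitrary smooth function f and an arbitrary point t (both chosen only for the example):

```python
import numpy as np

# (1/eps) * integral_t^{t+eps} f(s) ds  ->  f(t)  as eps -> 0
# at every Lebesgue point t; for smooth f the error is O(eps).
f = lambda s: np.sin(3.0 * s) + s ** 2
t = 0.4

errs = []
for eps in (1e-1, 1e-2, 1e-3):
    s = np.linspace(t, t + eps, 10001)
    avg = f(s).mean()                  # ~ (1/eps) * integral over [t, t+eps]
    errs.append(abs(avg - f(t)))

assert errs[0] > errs[1] > errs[2]     # the averages converge to f(t)
assert errs[2] < 1e-2
```

In the proof above the same idea is applied to conditional expectations of operator-valued processes, which is why the exceptional null set Γ depends on the processes involved.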

This, together with (5.44), implies that follows from (5.43) that
⟨ 𝜏,𝑣,𝜀 ( )∗ ⟩ ⟨ ⟩
𝜏+𝜀 𝜑 − 𝜑,
̄ 𝐾𝛩+𝐿 𝑣 𝐻 1 (0,1),𝐻 −1 (0,1) | ̂ ), 𝜑(𝑇
E 𝐺𝜑(𝑇 ̂ ) 𝐿2 (0,1)
|
lim |E 0
𝑑𝑟|| = 0. (5.50) 𝑇 (⟨ ⟩ ⟨ ⟩ )
𝜀→0 || ∫𝜏 𝜀 | +E ̂ 𝜑̂ 𝐿2 (0,1) + 𝑅𝛩𝜑,
𝑀 𝜑, ̂ 𝛩𝜑̂ 𝐿2 (0,1) 𝑑𝑟
∫𝑡
It follows from (5.48) and (5.50) that ⟨ ⟩ 𝑇⟨ ⟩
= E 𝑃𝑇 𝜑(𝑇
̂ ), 𝜑(𝑇
̂ ) 𝐻+E (𝑀 +𝛩∗ 𝑅𝛩)𝜑,
̂ 𝜑̂ 𝐿2 (0,1) 𝑑𝑟
[ ∫
1 ⟨ ⟩ ⟨ ⟩
𝜏+𝜀 𝑡
lim inf E 𝑣, 𝐾𝑣 𝐿2 (0,1) = E 𝑃 (𝑡)𝜉, 𝜉 𝐿2 (0,1)
𝜀→0 ∫𝜏 𝜀 ] (5.51)
⟨ ( )∗ ⟩
+2 𝜑,̄ 𝐾𝛩+𝐿 𝑣 𝐻 1 (0,1),𝐻 −1 (0,1) 𝑑𝑟 ≥ 0. and that
0 ⟨ ⟩
E 𝐺𝜑𝑣,𝜀 (𝑇 ), 𝜑𝑣,𝜀 (𝑇 ) 𝐿2 (0,1)
Therefore, by Lebesgue’s differentiation theorem, there exists a measure 𝑇 [⟨ ⟩
subset 𝛤 of [0, 𝑇 ], depending on 𝑣(⋅), 𝛩(⋅), 𝐾(⋅), 𝐿(⋅) and 𝜑(⋅),
̄ such that +E 𝑀𝜑𝑣,𝜀 , 𝜑𝑣,𝜀 𝐿2 (0,1)
∫⟨𝑡 ⟩ ]
𝐦([0, 𝑇 ] ⧵ 𝛤 ) = 0 (where 𝐦 denotes the Lebesgue measure on [0, 𝑇 ]),
+ 𝑅(𝛩𝜑𝑣,𝜀 + 𝑓 ), 𝛩𝜑𝑣,𝜀 + 𝑓 𝐿2 (0,1) 𝑑𝑟
and for each 𝑡 ∈ 𝛤 , [
[⟨ ⟨ ⟩ 𝑇 ⟨ ⟩
⟩ = E 𝑃 (𝑡)𝜉, 𝜉 𝐿2 (0,1) + E 𝐾𝑓 , 𝑓 𝐿2 (0,1)
E 𝑣(𝑡), 𝐾(𝑡)𝑣(𝑡) 𝐿2 (0,1) ∫𝑡 ]
⟨ ( )∗ ⟩ ] ⟨ ( ) ⟩
+2 𝜑(𝑡),
̄ 𝐾(𝑡)𝛩(𝑡) + 𝐿(𝑡) 𝑣(𝑡) 𝐻 1 (0,1),𝐻 −1 (0,1) ≥ 0. +2 𝜑𝑣,𝜀 , 𝐾𝛩 + 𝐿 𝑓 𝐻 1 (0,1),𝐻 −1 (0,1) 𝑑𝑠
0
0 ⟨ ⟩ 𝑡+𝜀 ⟨ ⟩
= E 𝜉, 𝑃 (𝑡)𝜉 𝐿2 (0,1) + E 𝑣, 𝐾𝑣 𝐿2 (0,1) 𝑑𝑠
Hence, for any 𝑣 ∈ 𝐿∞
F
(0, 𝑇 ; 𝐿2 (0, 1)), ∫𝑡
𝑡+𝜀 ⟨ ( )∗ ⟩
[⟨ ] +2E 𝜑𝑣,𝜀 , 𝐾𝛩 + 𝐿 𝑣 𝐻 1 (0,1),𝐻 −1 (0,1) 𝑑𝑠.
𝑇 ⟩ ⟨ ( )∗ ⟩ ∫𝑡 0
E ̄ 𝐾𝛩 + 𝐿 𝑣 𝐻 1 (0,1),𝐻 −1 (0,1) 𝑑𝑡 ≥ 0.
𝑣, 𝐾𝑣 𝐿2 (0,1) + 2 𝜑,
∫0 0 By the optimality of 𝛩, one has
(5.52) 𝑡+𝜀 ⟨ ⟩
1
E 𝑣, 𝐾𝑣 𝐿2 (0,1) 𝑑𝑠
Step 3. Put 𝜀 ∫𝑡
𝑡+𝜀 ⟨ ( )∗ ⟩
2
𝛥 ⟨ ⟩ + E 𝜑𝑣,𝜀 , 𝐾𝛩 + 𝐿 𝑣 𝐻 1 (0,1),𝐻 −1 (0,1) 𝑑𝑠
(𝑡, 𝑣) = 𝑣, 𝐾(𝑡)𝑣 𝐿2 (0,1) 𝜀 ∫𝑡 0
⟨ ( )∗ ⟩ ≥ 0.
+2 𝜑(𝑡),
̄ 𝐾(𝑡)𝛩(𝑡)+𝐿(𝑡) 𝑣 𝐻 1 (0,1),𝐻 −1 (0,1) ,
0
for 𝑣 ∈ 𝐿2 (0, 1), 𝑡 ∈ [0, 𝑇 ]. Similarly to the proof of (5.51), we can get that for any 𝑣 ∈ 𝐿2 (0, 1),
𝑡+𝜀 ⟨ ⟩
We claim that 𝛥 1
H(𝑡, 𝑣) = lim inf E 𝑣, 𝐾𝑣 𝐿2 (0,1) 𝑑𝑠
𝜀→0 𝜀 ∫𝑡
(𝑡, 𝑣) ≥ 0, ∀ 𝑣 ∈ 𝐿2 (0, 1),
(5.53) 2 ⟨ (
𝑡+𝜀 )∗ ⟩
for a.e. (𝑡, 𝜔) ∈ [0, 𝑇 ] × 𝛺. + lim inf E 𝜉, 𝐾𝛩+𝐿 𝑣 𝐻 1 (0,1),𝐻 −1 (0,1) 𝑑𝑠
𝜀→0 𝜀 ∫𝑡 0

Otherwise there would exist 𝑣′ ∈ 𝐿2 (0, 1) such that for some 𝛿 > 0, the ≥ 0. (5.56)
set
{ } By (5.56), we get that for all 𝑛 ∈ N and 𝑣 ∈ 𝐿2 (𝛺; 𝐿2 (0, 1)),
𝛥 | 𝑡
K = (𝑡, 𝜔) ∈ [0, 𝑇 ] × 𝛺 | (𝑡, 𝑣′ ) ≤ −𝛿 ⊂ [0, 𝑇 ] × 𝛺 ( )
| 𝑣
𝑛H 𝑡, ≥ 0, for a.e. (𝑡, 𝜔) ∈ [0, 𝑇 ] × 𝛺. (5.57)
is F-measurable and 𝑛
Letting 𝑛 → ∞ in (5.57), we get that
(𝐦 × P)(K) > 0, ( )∗
𝑡+𝜀 ⟨ 𝐾𝛩 + 𝐿 𝑣 ⟩
where 𝐦 × P denotes the product measure of 𝐦 and P. lim inf E 𝜉, 𝑑𝑟 ≥ 0,
𝜀→0 ∫𝑡 𝜀 𝐻01 (0,1),𝐻 −1 (0,1)
Define a process 𝑣0 by
∀ 𝑣 ∈ 𝐿2 (0, 1). (5.58)
𝑣0 (𝑡, 𝜔) = 𝜒K (𝑡, 𝜔)𝑣′ , (𝑡, 𝜔) ∈ [0, 𝑇 ] × 𝛺.
Furthermore, we have that
Clearly, 𝑣0 ∈ 𝐿∞ (0, 𝑇 ; 𝐿2 (0, 1)) and ( )∗
F 𝑡+𝜀 ⟨ 𝐾𝛩 + 𝐿 𝑣 ⟩
𝑇 lim inf E 𝜉, 𝑑𝑟
𝜀→0 ∫𝑡 𝜀 𝐻01 (0,1),𝐻 −1 (0,1)
E (𝑡, 𝑣0 )𝑑𝑡 ≤ −𝛿(𝐦 × P)(K) < 0. ( )∗
∫0 ⟨ ( 𝑡+𝜀 𝐾𝛩+𝐿 𝑣 )⟩
|
= lim inf E 𝜉, E |𝑡 𝑑𝑟
This contradicts (5.52). As a result, we get (5.53). 𝜀→0 ∫𝑡 𝜀 | 𝐻01 (0,1),𝐻 −1 (0,1)
It follows from (5.53) that
= 0, ∀ 𝜉 ∈ 𝐿2 (𝛺; 𝐻01 (0, 1)). (5.59)
𝑡
2
(𝑡, 𝑣) + (𝑡, −𝑣) ≥ 0, ∀ 𝑣 ∈ 𝐿 (0, 1),
Similarly, we can prove that
for a.e. (𝑡, 𝜔) ∈ [0, 𝑇 ] × 𝛺.
( )
⟨ ( 𝑡+𝜀 𝐾𝛩+𝐿 ∗ 𝑣 )⟩
Thus |
lim sup E 𝜉, E |𝑡 𝑑𝑟
⟨ ⟩ 𝜀→0 ∫𝑡 𝜀 | 𝐻01 (0,1),𝐻 −1 (0,1)
𝑣, 𝐾(𝑡)𝑣 𝐿2 (0,1) ≥ 0, ∀ 𝑣 ∈ 𝐿2 (0, 1),
(5.54) = 0, ∀ 𝜉 ∈ 𝐿2 (𝛺; 𝐻01 (0, 1)). (5.60)
for a.e. (𝑡, 𝜔) ∈ [0, 𝑇 ] × 𝛺. 𝑡

This concludes that It follows from (5.59) and (5.60) that


( )
( 𝑡+𝜀 𝐾𝛩 + 𝐿 ∗ 𝑣 )
𝐾(𝑡) ≥ 0 for a.e. (𝑡, 𝜔) ∈ [0, 𝑇 ] × 𝛺. (5.55) |
lim E |𝑡 = 0.
𝜀→0 ∫𝑡 𝜀 |
Step 4. Denote by 𝜑(⋅)
̂ the solution to (5.42) with 𝜉 ∈ 𝐻01 (0, 1).
Let
This, together with Lemma 5.2, implies that
𝑡 ∈ [0, 𝑇 ) and 𝜀 > 0 be such that 𝑡 + 𝜀 ≤ 𝑇 . Write 𝜑𝑣,𝜀 (⋅) for the solution
( )∗
to (5.42) with the same 𝜉 and 𝑓 (⋅) = 𝜒[𝑡,𝑡+𝜀] (⋅)𝑣 for some 𝑣 ∈ 𝐻01 (0, 1). It 𝐾(𝑡)𝛩(𝑡) + 𝐿(𝑡) 𝑣 = 0 for a.e. 𝑡 ∈ [0, 𝑇 ], a.s.


𝑇⟨ ⟩ 𝑇⟨ ⟩
By the separability of 𝐿2 (0, 1), we can find a countable dense subset 𝑈̃1
+E 𝛩∗ 𝐾𝛩𝑦, 𝑦 𝐿2 (0,1) 𝑑𝑟 + E 𝑅𝑢, 𝑢 𝐿2 (0,1) 𝑑𝑟
2
of 𝐿 (0, 1) and a measurable subset 𝛤2 of [0, 𝑇 ] such that 𝐦(𝛤2 ) = 𝑇 and ∫0 ∫0
[⟨ ⟩
̃1 ,
that for all 𝑣 ∈ 𝑈 = E 𝑃 (0)𝜂, 𝜂 𝐿2 (0,1)
( )∗ 𝑇 (⟨
𝐾(𝑡)𝛩(𝑡) + 𝐿(𝑡) 𝑣 = 0 for all 𝑡 ∈ 𝛤2 , a.s. ⟩ ⟨ ⟩
+ 𝛩∗ 𝐾𝛩𝑦, 𝑦 𝐿2 (0,1) + 2 𝐿𝑦, 𝑢 𝐿2 (0,1)
∫0
This implies that for all 𝑣 ∈ 𝐿2 (0, 1), ⟨ ⟩ ) ]
( )∗ + 𝐾𝑢, 𝑢 𝐿2 (0,1) 𝑑𝑟 .
𝐾(𝑡)𝛩(𝑡) + 𝐿(𝑡) 𝑣 = 0 for all 𝑡 ∈ 𝛤2 , a.s.
This, together with (5.34), implies that
Consequently,
( )∗  (𝜂; 𝑢(⋅))
𝐾(𝑡)𝛩(𝑡) + 𝐿(𝑡) = 0 in (𝐿2 (0, 1); 𝐻 −1 (0, 1)), [
1 ⟨ ⟩ 𝑇 (⟨ ⟩
for a.e. (𝑡, 𝜔) ∈ [0, 𝑇 ] × 𝛺. = E 𝑃 (0)𝜂, 𝜂 𝐿2 (0,1) + 𝐾𝛩𝑦, 𝛩𝑦 𝐿2 (0,1)
2 ∫0
Thus, ⟨ ⟩ ) ]
−2 𝐾𝛩𝑦, 𝑢 𝐿2 (0,1) + ⟨𝐾𝑢, 𝑢⟩𝐿2 (0,1) 𝑑𝑟
𝐾(𝑡)𝛩(𝑡) + 𝐿(𝑡) = 0 in (𝐿2 (0, 1); 𝐻 −1 (0, 1)), (
1 ⟨ ⟩
for a.e. (𝑡, 𝜔) ∈ [0, 𝑇 ] × 𝛺. = E 𝑃 (0)𝜂, 𝜂 𝐿2 (0,1)
2
𝑇⟨ ⟩ )
This, together with (5.37), implies that + 𝐾(𝑢 − 𝛩𝑦), 𝑢 − 𝛩𝑦 𝐿2 (0,1) 𝑑𝑟
∫0
(𝛬(𝑡)𝐷(𝑡))∗ = −𝐷(𝑡)∗ 𝑃 (𝑡)𝐶(𝑡) − 𝐵(𝑡)∗ 𝑃 (𝑡) − 𝐾(𝑡)𝛩(𝑡),
(5.61) ( ) 1 𝑇⟨ ⟩
for a.e. (𝑡, 𝜔) ∈ [0, 𝑇 ] × 𝛺. =  𝜂; 𝛩𝑦̄ + E 𝐾(𝑢 − 𝛩𝑦), 𝑢 − 𝛩𝑦 𝐿2 (0,1) 𝑑𝑟.
2 ∫0
Noting that the right hand side of (5.61) belongs to 𝐿∞,,2 F
(𝛺; Hence,
𝐿2 (0, 𝑇 ; (𝐿2 (0, 1)))), we have (𝛬𝐷)∗ ∈ 𝐿∞,,2 F
(𝛺; 𝐿 2 (0, 𝑇 ; (𝐿2 (0, 1)))).

Thus, 𝐿 ∈ 𝐿∞,,2 (𝛺; 𝐿2 (0, 𝑇 ; (𝐿2 (0, 1)))) and  (𝜂; 𝛩𝑦)
̄ ≤  (𝜂; 𝑢), ∀ 𝑢(⋅) ∈  [0, 𝑇 ].
F

𝐾(𝑡)𝛩(𝑡) + 𝐿(𝑡) = 0 in (𝐿2 (0, 1)), Consequently, 𝛩(⋅) is an optimal feedback operator for Problem (SLQ),
□ and (5.35) holds. This completes the proof of the ‘‘only if’’ part of
for a.e. (𝑡, 𝜔) ∈ [0, 𝑇 ] × 𝛺.
Theorem 5.4.
Now we are in a position to prove Theorem 5.4. The ‘‘if’’ part. First, similarly to Step 7 in the proof of (Lü & Zhang,
2019b, Theorem 2.2), we can prove that
Proof of Theorem 5.4. The ‘‘only if’’ part. For any 𝜂 ∈ 𝐿2 (0, 1) and
𝑢(⋅) ∈  [0, 𝑇 ], let 𝑦(⋅) ≡ 𝑦(⋅; 𝜂, 𝑢(⋅)) be the corresponding solution to 𝐾 > 0 for a.e. (𝑡, 𝜔) ∈ [0, 𝑇 ] × 𝛺 (5.63)
(5.1). Choose 𝜉1 = 𝜉2 = 𝜂, 𝑢1 = 𝑢2 = 𝐵𝑢 and 𝑣1 = 𝑣2 = 𝐷𝑢 in (5.30)–
and 𝐾(𝑡, 𝜔)−1 is a densely defined closed operator on 𝐿2 (0, 1) for a.e.
(5.31). From (5.29) and the pointwise symmetry of 𝐾(⋅), we obtain
(𝑡, 𝜔) ∈ (0, 𝑇 ) × 𝛺.
that
Let 𝑢1 = 𝑢̃ 1 − 𝐵𝛩𝜑1 (resp. 𝑢2 = 𝑢̃ 2 − 𝐵𝛩𝜑2 ) and 𝑣1 = 𝑣̃1 − 𝐷𝛩𝜑1
𝑇⟨ ⟩
E⟨𝐺𝑦(𝑇 ), 𝑦(𝑇 )⟩𝐿2 (0,1) + E 𝑀𝑦, 𝑦 𝐿2 (0,1) 𝑑𝑟 (resp. 𝑣2 = 𝑣̃2 − 𝐷𝛩𝜑2 ) in (5.30) (resp. (5.31)), where 𝑢̃ 1 , 𝑢̃ 2 , 𝑣̃1 , 𝑣̃2 ∈
∫0 𝐿2F (𝑡, 𝑇 ; 𝐿2 (0, 1)). We have that
𝑇⟨ ⟩ ⟨ ⟩
−E 𝛩∗ 𝐾𝛩𝑦, 𝑦 𝐿2 (0,1) 𝑑𝑟 E 𝑃𝑇 𝜑1 (𝑇 ), 𝜑2 (𝑇 ) 𝐿2 (0,1)
∫0
𝑇⟨ ⟩
⟨ ⟩ 𝑇⟨ ⟩
= E 𝑃 (0)𝜂, 𝜂 𝐿2 (0,1) + E 𝑃 𝐵𝑢, 𝑦 𝐿2 (0,1) 𝑑𝑟 (5.62) +E (𝑀 + 𝛩∗ 𝑅𝛩)𝜑1 , 𝜑2 𝐿2 (0,1)
𝑑𝑟
∫0 ∫𝑡
𝑇⟨ ⟩ 𝑇⟨ ⟩ ⟨ ⟩ 𝑇⟨ ( ) ⟩
+E 𝑃 𝑦, 𝐵𝑢 𝐿2 (0,1) 𝑑𝑟 + E 𝑃 𝐶𝑦, 𝐷𝑢 𝐿2 (0,1) 𝑑𝑟 = E 𝑃 (𝑡)𝜉1 , 𝜉2 𝐿2 (0,1)+E 𝑃 𝑢̃ 1−𝐵𝛩𝜑1 , 𝜑2 𝐿2 (0,1) 𝑑𝑟
∫0 ∫0 ∫𝑡
𝑇⟨ 𝑇⟨ ⟩

+E 𝑃 𝐷𝑢, 𝐶𝑦 + 𝐷𝑢 𝑑𝑟 +E 𝑃 𝜑1 , 𝑢̃ 2 − 𝐵𝛩𝜑2 𝐿2 (0,1)
𝑑𝑟
∫0 𝐿2 (0,1) ∫𝑡
𝑇⟨ 𝑇⟨ ( ) ⟩

+E 𝛬𝐷𝑢, 𝑦 𝐻 −1 (0,1),𝐻 1 (0,1) 𝑑𝑟 +E 𝑃 𝐶 + 𝐷𝛩 𝜑1 , 𝑣̃2 − 𝐷𝛩𝜑2 𝐿2 (0,1) 𝑑𝑟
∫0 0
∫𝑡
𝑇⟨ 𝑇⟨ ( )

+E 𝑦, 𝛬𝐷𝑢 𝐻 1 (0,1),𝐻 −1 (0,1) 𝑑𝑟. +E 𝑃 𝑣̃1 − 𝐷𝛩𝜑1 ,
∫0 ∫𝑡
0
( ) ⟩
𝐶 + 𝐷𝛩 𝜑2 + 𝑣̃2 − 𝐷𝛩𝜑2 𝐿2 (0,1) 𝑑𝑟
Then, by (5.62), and recalling (5.29) for the definition of 𝐿(⋅) and 𝐾(⋅),
𝑇⟨ ⟩
we arrive at
+E 𝛬𝑣̃1 , 𝜑2 𝐻 −1 (0,1),𝐻01 (0,1)
𝑑𝑟
∫𝑡
2 (𝜂; 𝑢(⋅)) 𝑇⟨
[ 𝑇 (⟨ ⟩
⟩ ⟨ ⟩ ) −E 𝛬𝐷𝛩𝜑1 , 𝜑2 𝑑𝑟
=E 𝑀𝑦, 𝑦 𝐿2 (0,1) + 𝑅𝑢, 𝑢 𝐿2 (0,1) 𝑑𝑟 ∫𝑡 𝐻 −1 (0,1),𝐻01 (0,1)
∫0
] 𝑇⟨ ⟩
+⟨𝐺𝑦(𝑇 ), 𝑦(𝑇 )⟩𝐿2 (0,1) +E 𝛬𝑣̃2 , 𝜑1 𝐻 −1 (0,1),𝐻01 (0,1)
𝑑𝑟
∫𝑡
⟨ ⟩ 𝑇⟨ ⟩ 𝑇⟨ ⟩
= E 𝑃 (0)𝜂, 𝜂 𝐿2 (0,1) + E 𝑃 𝐵𝑢, 𝑦 𝐿2 (0,1) 𝑑𝑟 −E 𝛬𝐷𝛩𝜑2 , 𝜑1 𝐻 −1 (0,1),𝐻01 (0,1)
𝑑𝑟
∫0 ∫𝑡
𝑇⟨ ⟩ ⟩ 𝑇⟨ ⟨ ⟩ 𝑇⟨ ⟩
+E 𝑃 𝑦, 𝐵𝑢 𝐿2 (0,1) 𝑑𝑟 + E 𝑃 𝐶𝑦, 𝐷𝑢 𝐿2 (0,1) 𝑑𝑟 = E 𝑃 (𝑡)𝜉1 , 𝜉2 𝐿2 (0,1) + E 𝑃 𝑢̃ 1 , 𝜑2 𝐿2 (0,1)
𝑑𝑟 (5.64)
∫0 ∫0 ∫𝑡
𝑇⟨ ⟩ 𝑇⟨ ⟩ 𝑇⟨ ⟩
+E 𝑃 𝐷𝑢, 𝐶𝑦 + 𝐷𝑢 𝐿2 (0,1) 𝑑𝑟 +E 𝑃 𝜑1 , 𝑢̃ 2 𝐿2 (0,1)
𝑑𝑟+E 𝑃 𝐶𝜑1 , 𝑣̃2 𝐿2 (0,1)
𝑑𝑟
∫0 ∫𝑡 ∫𝑡
𝑇⟨ ⟩ 𝑇⟨ ⟩
+2E 𝑦, 𝛬𝐷𝑢 𝐻 1 (0,1),𝐻 −1 (0,1) 𝑑𝑟 +E 𝑃 𝑣̃1 , 𝐶𝜑2 + 𝑣̃2 𝐿2 (0,1)
𝑑𝑟
∫0 0 ∫𝑡


𝑇⟨ ⟩ 𝑇⟨ ⟩ 𝑇⟨ ⟩
+E 𝛬𝑣̃1 , 𝜑2 𝐻 −1 (0,1),𝐻01 (0,1)
𝑑𝑟 +E 𝑣̃1 , 𝛬𝜑2 𝐿2 (0,1)
𝑑𝜏 + E 𝛬𝜑1 , 𝑣̃2 𝐿2 (0,1)
𝑑𝜏.
∫𝑡 ∫𝑡 ∫𝑡
𝑇⟨ ⟩ From (5.63) and (5.65), we find that (𝑃 , 𝛬) is a transposition solu-
+E 𝜑1 , 𝛬𝑣̃2 𝐻01 (0,1),𝐻 −1 (0,1)
𝑑𝑟
∫𝑡 tion to (5.28).
𝑇⟨ ⟩ Finally, we prove the uniqueness of transposition solutions to (5.28).
−E 𝑃 𝐵𝛩𝜑1 , 𝜑2 𝐿2 (0,1)
𝑑𝑟 Assume that
∫𝑡
𝑇⟨ ⟩ (𝑃1 (⋅), 𝛬1 (⋅)), (𝑃2 (⋅), 𝛬2 (⋅))
−E 𝐵𝛩𝜑2 , 𝑃 𝜑1 𝑑𝑟
∫𝑡 𝐿2 (0,1) ∈ 𝐷F,𝑤 ([0, 𝑇 ]; 𝐿∞ (𝛺; (𝐿2 (0, 1)))) × 𝐿2F (0, 𝑇 ; 2 (𝐿2 (0, 1); 𝐻 −1 (0, 1)))
𝑇⟨ ⟩
−E 𝑃 𝐶𝜑1 , 𝐷𝛩𝜑2 𝑑𝑟 are two transposition solutions to (5.28).
∫𝑡 𝐿2 (0,1)
By choosing 𝑢1 = 𝑣1 = 𝑢2 = 𝑣2 = 0 in (5.32), we see that, for any
𝑇⟨ ⟩ 𝑡 ∈ [0, 𝑇 ) and 𝜉1 , 𝜉2 ∈ 𝐿2 (𝛺; 𝐿2 (0, 1)),
−E 𝑃 𝐷𝛩𝜑1 , 𝐷𝛩𝜑2 𝐿2 (0,1)
𝑑𝑟 𝑡
∫𝑡
𝑇⟨ ⟩ E⟨𝑃1 (𝑡)𝜉1 , 𝜉2 ⟩𝐿2 (0,1) = E⟨𝑃2 (𝑡)𝜉1 , 𝜉2 ⟩𝐿2 (0,1) . (5.66)
−E 𝐶𝜑2 , 𝑃 𝐷𝛩𝜑1 𝐿2 (0,1)
𝑑𝑟
∫𝑡 Thus, for any 𝜉, 𝜂 ∈ 𝐿2 (𝛺; 𝐿2 (0, 1)),
𝑡
𝑇⟨ ( )∗ ⟩
−E 𝛩𝜑1 , 𝛬𝐷 𝜑2 𝐻 −1 (0,1),𝐻 1 (0,1) 𝑑𝑟 E⟨𝑃1 (𝑡)(𝜂 + 𝜉), 𝜂 + 𝜉⟩𝐿2 (0,1) = E⟨𝑃2 (𝑡)(𝜂 + 𝜉), 𝜂 + 𝜉⟩𝐿2 (0,1) ,
∫𝑡 0
𝑇⟨ ( )∗ ⟩ and
−E 𝛩𝜑2 , 𝛬𝐷 𝜑1 𝐻 −1 (0,1),𝐻 1 (0,1) 𝑑𝑟
∫𝑡 0
E⟨𝑃1 (𝑡)(𝜂 − 𝜉), 𝜂 − 𝜉⟩𝐿2 (0,1) = E⟨𝑃2 (𝑡)(𝜂 − 𝜉), 𝜂 − 𝜉⟩𝐿2 (0,1) .
⟨ ⟩ 𝑇⟨ ⟩
= E 𝑃 (𝑡)𝜉1 , 𝜉2 𝐿2 (0,1) + E 𝑃 𝑢̃ 1 , 𝜑2 𝐿2 (0,1)
𝑑𝑟 These, together with 𝑃1 (⋅) = 𝑃1 (⋅)∗ and 𝑃2 (⋅) = 𝑃2 (⋅)∗ , imply that
∫𝑡
𝑇⟨ ⟩ 𝑇⟨ ⟩
+E 𝑃 𝜑1 ,𝑢̃ 2 𝑑𝑟+E 𝑃 𝐶𝜑1 ,𝑣̃2 𝑑𝑟 E⟨𝑃1 (𝑡)𝜂, 𝜉⟩𝐿2 (0,1) = E⟨𝑃2 (𝑡)𝜂, 𝜉⟩𝐿2 (0,1) , ∀ 𝜉, 𝜂 ∈ 𝐿2 (𝛺; 𝐿2 (0, 1)).
∫𝑡 𝐿2 (0,1)
∫𝑡 𝐿2 (0,1) 𝑡

𝑇⟨ ⟩ (5.67)
+E 𝑃 𝑣̃1 , 𝐶𝜑2 + 𝑣̃2 𝐿2 (0,1) 𝑑𝑟
∫𝑡 Hence,
𝑇⟨ ⟩
+E 𝛬𝑣̃1 , 𝜑2 𝐻 −1 (0,1),𝐻 1 (0,1)
𝑑𝑟 𝑃1 (𝑡) = 𝑃2 (𝑡) for any 𝑡 ∈ [0, 𝑇 ], a.s.
∫𝑡 0
𝑇⟨ ⟩ Let 𝑣2 = 0 in (5.31). By (5.32) and noting
+E 𝜑1 , 𝛬𝑣̃2 𝐻01 (0,1),𝐻 −1 (0,1)
𝑑𝑟 ⟨ ⟩
∫𝑡
𝐾(⋅)−1 𝐿(⋅)𝜑1 (⋅), 𝐿(⋅)𝜑2 (⋅) 𝐿2 (0,1)
𝑇⟨ [ ( )∗ ] ⟩ ⟨ ⟩
−E 𝛩𝜑1 , 𝐵 ∗ 𝑃 +𝐷∗ 𝑃 𝐶 + 𝛬𝐷 𝜑2 𝐿2 (0,1) 𝑑𝑟 = − 𝛩(⋅)𝜑1 (⋅), 𝐾(⋅)𝛩(⋅)𝜑2 (⋅) 𝐿2 (0,1) ,
∫𝑡
𝑇⟨ [ ( )∗ ] ⟩ we see that for any 𝑣̃1 ∈ 𝐿4F (𝛺; 𝐿2 (0, 𝑇 ; 𝐿2 (0, 1))),
−E 𝛩𝜑2 , 𝐵 ∗ 𝑃 +𝐷∗ 𝑃 𝐶 + 𝛬𝐷 𝜑1 𝐿2 (0,1) 𝑑𝑟
∫𝑡 𝑇 ⟨( ) ⟩
𝑇⟨ ⟩ 0=E 𝛬1 − 𝛬2 𝑣̃1 , 𝜑2 𝐻 −1 (0,1),𝐻 1 (0,1) 𝑑𝑠. (5.68)
−E 𝑃 𝐷𝛩𝜑1 , 𝐷𝛩𝜑2 𝑑𝑟. ∫0 0
∫𝑡 𝐿2 (0,1)
This, together with Lemma 4.4, implies that
Noting ( ) 4
𝛬1 − 𝛬2 𝑣̃1 = 0 in 𝐿F3 (𝛺; 𝐿2 (0, 𝑇 ; 𝐻 −1 (0, 1))). (5.69)
𝑇⟨ ⟩
E 𝐾 −1 𝐿𝜑1 , 𝐿𝜑2 𝐿2 (0,1)
𝑑𝑟 Consequently,
∫𝑡
𝑇 ⟨[ ( )∗ ] ⟩
= −E 𝐵 ∗ 𝑃 +𝐷∗ 𝑃̄ 𝐶 + 𝛬𝐷 𝜑2 , 𝛩𝜑1 𝐿2 (0,1) 𝑑𝑟 𝛬1 = 𝛬2 in 𝐿2F (0, 𝑇 ; 2 (𝐿2 (0, 1); 𝐻 −1 (0, 1))).
∫𝑡
Hence, the desired uniqueness follows. This completes the proof of
and
Theorem 5.4. □
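The uniqueness step just completed, passing from the equality of quadratic forms (5.66) to the equality of bilinear forms (5.67), is a polarization argument: for a self-adjoint operator the quadratic form determines the bilinear form. In finite dimensions this is the elementary identity checked below (the matrix P and the vectors eta, xi are artificial):

```python
import numpy as np

# Polarization identity behind the step (5.66) -> (5.67): for self-adjoint P,
# <P(eta+xi), eta+xi> - <P(eta-xi), eta-xi> = 4 <P eta, xi>.
rng = np.random.default_rng(4)
n = 6
P = rng.standard_normal((n, n))
P = P + P.T                            # self-adjoint, like P1(t) - P2(t) above
eta, xi = rng.standard_normal(n), rng.standard_normal(n)

lhs = eta @ P @ xi
rhs = 0.25 * ((eta + xi) @ P @ (eta + xi) - (eta - xi) @ P @ (eta - xi))
assert np.isclose(lhs, rhs)
```

Self-adjointness is essential here; for a non-symmetric P the right-hand side only recovers the symmetric part of the bilinear form.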
𝑇⟨ ⟩
E 𝛩∗ 𝑅𝛩𝜑1 , 𝜑2 𝐿2 (0,1)
𝑑𝑟
∫𝑡 5.6. Stochastic linear quadratic optimal control problems with deterministic
𝑇⟨ ⟩
+E 𝑃 𝐷𝛩𝜑1 , 𝐷𝛩𝜑2 𝑑𝑟 coefficients
∫𝑡 𝐿2 (0,1)
𝑇 ⟨( ) ⟩
=E 𝑅 + 𝐷∗ 𝑃 𝐷 𝛩𝜑1 , 𝛩𝜑2 𝐿2 (0,1) 𝑑𝑟 In this subsection, we pay our attention to a special case of Problem
∫𝑡 (SLQ), that is, all coefficients under consideration are deterministic,
𝑇 ⟨[ ( )∗ ] ⟩
= −E 𝐵 ∗ 𝑃 +𝐷∗ 𝑃 𝐶 + 𝛬𝐷 𝜑1 , 𝛩𝜑2 𝐿2 (0,1) 𝑑𝑟, namely, we assume that
∫𝑡 𝑎1 , 𝑎2 , 𝑎3 , 𝑞, 𝑟 ∈ 𝐿∞ (0, 𝑇 ; 𝐿∞ (0, 1)), 𝑔 ∈ 𝐿∞ (0, 1).
we get from (5.64) that For notational brevity, we introduce operator valued functions
𝐵, 𝐶, 𝐷 ∈ 𝐿∞ (0, 𝑇 ; (𝐿2 (0, 1))), 𝑀, 𝑅 ∈ 𝐿∞ (0, 𝑇 ; S(𝐿2 (0, 1))) and 𝐺 ∈
𝑇⟨ ⟩
E⟨𝐺𝜑1 (𝑇 ), 𝜑2 (𝑇 )⟩𝐿2 (0,1) + E 𝑀𝜑1 , 𝜑2 𝑑𝑟 S(𝐿2 (0, 1)) as follows:
∫𝑡 𝐿2 (0,1) {
𝑇⟨ 𝐵𝑓 = 𝑎1 𝑓 , 𝐶𝑓 = 𝑎2 𝑓 , 𝐷𝑓 = 𝑎3 𝑓 ,
⟩ ∀ 𝑓 ∈ 𝐿2 (0, 1).
−E 𝐾 −1 𝐿𝜑1 , 𝐿𝜑2 𝐿2 (0,1)
𝑑𝜏 𝑀𝑓 = 𝑞𝑓 , 𝑅𝑓 = 𝑟𝑓 , 𝐺𝑓 = 𝑔𝑓 ,
∫𝑡
⟨ ⟩ 𝑇⟨ ⟩ Also, recall (4.23) for the definition of the unbounded linear operator
= E 𝑃 (𝑡)𝜉1 , 𝜉2 𝐿2 (0,1) + E 𝑃 𝑢̃ 1 , 𝜑2 𝐿2 (0,1)
𝑑𝜏
∫𝑡 𝐴.
𝑇⟨ ⟩ 𝑇⟨ ⟩ The backward stochastic Riccati equation (5.28) becomes the fol-
+E 𝑃 𝜑1 , 𝑢̃ 2 𝑑𝜏 +E
𝐿2 (0,1)
𝑃 𝐶𝜑1 , 𝑣̃2 𝐿2 (0,1)
𝑑𝜏 lowing deterministic Riccati equation:
∫𝑡 ∫𝑡
𝑇⟨
{
⟩ 𝑃̇ +𝑃 𝐴+𝐴∗ 𝑃 +𝐶 ∗ 𝑃 𝐶 +𝑀 −𝐿∗ 𝐾 −1 𝐿 = 0 in [0, 𝑇 ),
+E 𝑃 𝑣̃1 , 𝐶𝜑2 + 𝑣̃2 𝐻 𝑑𝑟 (5.65) (5.70)
∫𝑡 𝑃 (𝑇 ) = 𝐺,


where 5.7. Linear quadratic optimal control problem for general stochastic evolu-
𝛥 ∗ 𝛥 ∗ ∗
tion equation
𝐾 = 𝑅 + 𝐷 𝑃 𝐷, 𝐿 = 𝐵 𝑃 + 𝐷 𝑃 𝐶.

Compared with the backward stochastic Riccati equation (5.28), In this subsection, let 𝐻 and 𝑈 be two separable Hilbert spaces, and
𝐴 be an unbounded linear operator (with domain 𝐷(𝐴) ⊂ 𝐻), which
there is no stochastic term in (5.70) and its solution can be defined in
generates a 𝐶0 -semigroup {𝑆(𝑡)}𝑡≥0 on 𝐻. Denote by 𝐴∗ the adjoint
the classical way. Denote by 𝐶 ([0, 𝑇 ]; S(𝐿2 (0, 1))) the set of all strongly
operator of 𝐴.
continuous mappings 𝐹 ∶ [0, 𝑇 ] → S(𝐿2 (0, 1)), that is, 𝐹 (⋅)𝜉 is continu-
Let 𝐻1 be a Hilbert space. Denote by S(𝐻1 ) the Banach space of all
ous on [0, 𝑇 ] for each 𝜉 ∈ 𝐿2 (0, 1). Let {𝐹𝑛 }∞
𝑛=1
⊂ 𝐶 ([0, 𝑇 ]; S(𝐿2 (0, 1))).
∞ self-adjoint (linear bounded) operators on 𝐻1 . For 𝑀, 𝑁 ∈ S(𝐻1 ), we
We say that {𝐹𝑛 }𝑛=1 converges strongly to 𝐹 ∈ 𝐶 ([0, 𝑇 ]; S(𝐿2 (0, 1))) if
use the notation 𝑀 ≥ 𝑁 (resp. 𝑀 > 𝑁, 𝑀 ≫ 𝑁) to indicate that 𝑀 −𝑁
lim 𝐹𝑛 (⋅)𝜉 = 𝐹 (⋅)𝜉, ∀ 𝜉 ∈ 𝐿2 (0, 1), is positive semi-definite (resp. positive definite, 𝑀 − 𝑁 ≥ 𝛿𝐼 for some
𝑛→∞
𝛿 > 0). For any S(𝐻1 )-valued stochastic process 𝐹 on [0, 𝑇 ], we write
and write 𝐹 ≥ 0 (resp. 𝐹 > 0, 𝐹 ≫ 0) if 𝐹 (𝑡, 𝜔) ≥ 0 (resp. 𝐹 (𝑡, 𝜔) > 0, 𝐹 (𝑡, 𝜔) ≥ 𝛿𝐼
for some 𝛿 > 0) for a.e. (𝑡, 𝜔) ∈ [0, 𝑇 ] × 𝛺. One can define 𝐹 ≪ 0 and so
lim 𝐹𝑛 = 𝐹 in 𝐶 ([0, 𝑇 ]; S(𝐿2 (0, 1))). on in a similar way.
𝑛→∞
For any given 𝜂 ∈ 𝐻, we consider a control system governed by the
If 𝐹 ∈ 𝐶 ([0, 𝑇 ]; S(𝐿2 (0, 1))), then, by the Uniform Boundedness Theo-
following SEE:
rem, ( ) ( )
{
𝛥 𝑑𝑦 = 𝐴𝑦+𝐵𝑢 𝑑𝑡+ 𝐶𝑦+𝐷𝑢 𝑑𝑊 (𝑡) in (0, 𝑇 ],
|𝐹 |𝐶 ([0,𝑇 ];S((𝐿2 (0,1)))) = sup |𝐹 (𝑡)|(𝐿2 (0,1)) (5.74)
𝑦(0) = 𝜂,
𝑡∈[0,𝑇 ]
where 𝐵(⋅), 𝐷(⋅) ∈ 𝐿∞ (0, 𝑇 ; (𝑈 ; 𝐻)) and 𝐶(⋅) ∈ 𝐿∞ (0, 𝑇 ; (𝐻)). In
is finite. F F
(5.74), 𝑢(⋅)(∈ 𝐿2F (0, 𝑇 ; 𝑈 )) is the control and 𝑦(⋅) is the state. In view
Let
of Theorem B.7, for any 𝑢 ∈ 𝐿2F (0, 𝑇 ; 𝑈 ), the system (5.74) admits a
𝐿2, (0, 𝑇 ; (𝐿2 (0, 1))) unique (mild) solution 𝑦(⋅) ≡ 𝑦(⋅; 𝜂, 𝑢) ∈ 𝐶F ([0, 𝑇 ]; 𝐿2 (𝛺; 𝐻)) such that
𝛥{ |
= 𝐹 ∶ [0, 𝑇 ] → (𝐿2 (0, 1)) | 𝐹 𝜂 ∈ 𝐿2 (0, 𝑇 ; 𝐿2 (0, 1)), ( )
| } |𝑦|𝐶F ([0,𝑇 ];𝐿2 (𝛺;𝐻)) ≤  |𝜂|𝐻 + |𝑢|𝐿2 (0,𝑇 ;𝑈 ) . (5.75)
∀𝜂 ∈ 𝐿 (0, 1), |𝐹 |(𝐿2 (0,1)) ∈ 𝐿2 (0, 𝑇 ) .
2 F

Choose a quadratic cost functional as follows


Particularly,
 (𝜂; 𝑢(⋅))
𝐿2, (0, 𝑇 ; S(𝐿2 (0, 1))) [ 𝑇( ) ] (5.76)
𝛥 { } 1
| = E ⟨𝑀𝑦, 𝑦⟩𝐻 + ⟨𝑅𝑢, 𝑢⟩𝑈 𝑑𝑡 + ⟨𝐺𝑦(𝑇 ), 𝑦(𝑇 )⟩𝐻 ,
= 𝐹 ∶ [0, 𝑇 ] → S(𝐿2 (0,1)) | 𝐹 ∈ 𝐿2, (0, 𝑇 ; (𝐿2 (0,1))) . 2 ∫0
|
where 𝑀(⋅) ∈ 𝐿∞ F
(0, 𝑇 ; S(𝐻)), 𝑅(⋅) ∈ 𝐿∞
F
(0, 𝑇 ; S(𝑈 )) and 𝐺 ∈ 𝐿∞ 𝑇
(𝛺;
Definition 5.4. We call 𝑃 ∈ 𝐶 ([0, 𝑇 ]; S(𝐿2 (0, 1))) a mild solution to S(𝐻)). We consider the following optimal control problem:
(5.70) if for any 𝜂 ∈ (𝐿2 (0, 1)) and 𝑡 ∈ [0, 𝑇 ], Problem (GSLQ) For each 𝜂 ∈ 𝐻, find a control 𝑢(⋅) ̄ ∈ 𝐿2F (0, 𝑇 ; 𝑈 ) such
that
𝑃 (𝑡)𝜂 = 𝑆(𝑇 − 𝑡)∗ 𝐺𝑆(𝑇 − 𝑡)𝜂
𝑇 ( )  (𝜂; 𝑢(⋅))
̄ = inf  (𝜂; 𝑢(⋅)). (5.77)
+ 𝑆(𝜏 −𝑡)∗ 𝐶 ∗𝑃 𝐶 + 𝑀 − 𝐿∗ 𝐾 −1 𝐿 𝑆(𝜏 − 𝑡)𝜂𝑑𝜏. 𝑢(⋅) ∈ 𝐿2F (0,𝑇 ;𝑈 )
∫𝑡
The following result gives a sufficient and necessary condition for To define the optimal feedback control operator, we need to intro-
the existence of solutions to the Riccati equation (5.70) (See Lü, 2019, duce some notations.
Theorem 2.2 for its proof). Fix 𝑝, 𝑞 ≥ 1 and two Hilbert spaces 𝐻1 , 𝐻2 , write
𝛶𝑝,𝑞 (𝐻1 ; 𝐻2 )
Theorem 5.5. The following statements are equivalent: 𝛥{ |
= 𝐽 ∈ 𝑝𝑑 (𝐿∞ (0, 𝑇 ; 𝐿𝑝 (𝛺; 𝐻1 )); 𝐿𝑞F (0, 𝑇 ; 𝐿𝑝 (𝛺; 𝐻2 ))) |
(i) The map 𝑢(⋅) ↦  (0; 𝑢(⋅)) is uniformly convex, i.e., there exists a F
𝑞 } |

|𝐽 |(𝐻1 ;𝐻2 ) ∈ 𝐿F (0, 𝑇 ; 𝐿 (𝛺)) .
𝜆 > 0 such that
𝑇 1 In the sequel, we shall simply denote 𝛶𝑝,𝑝 (𝐻1 ; 𝐻2 ) (resp. 𝛶𝑝,𝑝 (𝐻; 𝐻)) by
 (0; 𝑢(⋅)) ≥ 𝜆E |𝑢|2 𝑑𝑥𝑑𝑡, ∀ 𝑢(⋅) ∈  [0, 𝑇 ], (5.71) 𝛶𝑝 (𝐻1 ; 𝐻2 ) (resp. 𝛶𝑝 (𝐻)).
∫0 ∫0
(ii) The Riccati equation (5.70) admits a solution 𝑃 (⋅) ∈ 𝐶 ([0, 𝑇 ];
Definition 5.5. An operator 𝛩(⋅) ∈ 𝛶2 (𝐻; 𝑈 ) is called an optimal
S(𝐿2 (0, 1))) such that feedback operator for Problem (GSLQ) if
𝐾(𝑡) ≥ 𝜆𝐼, a.e. 𝑡 ∈ [0, 𝑇 ], (5.72)  (𝜂; 𝛩(⋅)𝑦(⋅))
̄ ≤  (𝜂; 𝑢(⋅)), ∀ 𝜂 ∈ 𝐻, 𝑢(⋅) ∈ 𝐿2F (0, 𝑇 ; 𝑈 ),
for some 𝜆 > 0. where 𝑦(⋅)
̄ = 𝑦(⋅;
̄ 𝜂, 𝛩(⋅)𝑦(⋅))
̄ solves the following equation:
{ ( ) ( )
𝑑 𝑦̄ = 𝐴𝑦̄ + 𝐵𝛩𝑦̄ 𝑑𝑡 + 𝐶 𝑦̄ + 𝐷𝛩𝑦̄ 𝑑𝑊 (𝑡) in (0, 𝑇 ],
Remark 5.3. Clearly, if (5.16) holds, then the map 𝑢(⋅) ↦  (0; 𝑢(⋅)) is
̄ = 𝜂.
𝑦(0)
uniformly convex. On the other hand, there are some interesting cases
for which the map 𝑢(⋅) ↦  (0; 𝑢(⋅)) is uniformly convex but (5.16) does Similarly to Section 5.4, we introduce the following operator-valued,
not hold (See Lü, 2019). backward stochastic Riccati equation:
⎧ 𝑑𝑃 = − ( 𝑃 𝐴+𝐴∗ 𝑃 +𝛬𝐶 +𝐶 ∗ 𝛬 + 𝐶 ∗ 𝑃 𝐶
As a consequence of Theorem 5.5, we obtain the following result. ⎪ )
⎨ +𝑀 −𝐿∗ 𝐾 −1 𝐿 𝑑𝑡+𝛬𝑑𝑊 (𝑡) in [0, 𝑇 ), (5.78)
⎪ 𝑃 (𝑇 ) = 𝐺,
Corollary 5.3. Let (5.71) hold. Then, Problem (SLQ) admits an optimal ⎩
feedback operator where
−1 2, 2 𝛥 𝛥
𝛩(⋅) = −𝐾(⋅) 𝐿(⋅) ∈ 𝐿 (0, 𝑇 ; (𝐿 (0, 1))). (5.73) 𝐾 = 𝑅 + 𝐷∗ 𝑃 𝐷 > 0, 𝐿 = 𝐵 ∗ 𝑃 + 𝐷∗ (𝑃 𝐶 + 𝛬). (5.79)
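When all coefficients are deterministic, the Riccati equation (5.70) is an ordinary differential equation in time and can be integrated numerically backward from P(T) = G. The following finite-dimensional sketch uses artificial matrices and a simple explicit Euler march; it is only meant to make the structure of (5.70) and of the feedback formula concrete, not to reproduce the operator-valued theory.

```python
import numpy as np

# Finite-dimensional sketch of the deterministic Riccati equation (5.70):
# P' + P A + A' P + C' P C + M - L' K^{-1} L = 0,  P(T) = G,
# with K = R + D' P D and L = B' P + D' P C.
rng = np.random.default_rng(3)
n, m, T, nt = 3, 2, 1.0, 4000
dt = T / nt
A = 0.3 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = 0.2 * rng.standard_normal((n, n))
D = 0.2 * rng.standard_normal((n, m))
M, R, G = np.eye(n), np.eye(m), np.eye(n)

P = G.copy()
for _ in range(nt):                        # march from t = T down to t = 0
    K = R + D.T @ P @ D
    L = B.T @ P + D.T @ P @ C
    # P(t - dt) = P(t) + dt * (P A + A'P + C'PC + M - L'K^{-1}L)
    P = P + dt * (P @ A + A.T @ P + C.T @ P @ C + M - L.T @ np.linalg.solve(K, L))

# feedback gain at t = 0, the analogue of (5.73)
Theta = -np.linalg.solve(R + D.T @ P @ D, B.T @ P + D.T @ P @ C)

assert np.allclose(P, P.T, atol=1e-8)                     # P(0) is self-adjoint
assert np.min(np.linalg.eigvalsh(R + D.T @ P @ D)) > 0    # K(0) > 0, cf. (5.72)
```

Since M, R and G are chosen positive definite, the computed P stays positive semi-definite along the march and K = R + D'PD stays uniformly positive, matching condition (5.72) of Theorem 5.5.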


One can follow the method that we employed to solve (5.28) to study (5.78). Nevertheless, as in Section 4.4, we need to revise it to handle the general case.

Let us first introduce the following two SEEs:

  dφ₁ = (Aφ₁ + u₁) dτ + (Cφ₁ + v₁) dW(τ) in (t, T],  φ₁(t) = ξ₁,  (5.80)

and

  dφ₂ = (Aφ₂ + u₂) dτ + (Cφ₂ + v₂) dW(τ) in (t, T],  φ₂(t) = ξ₂.  (5.81)

Here t ∈ [0, T), ξ₁, ξ₂ are suitable random variables and u₁, u₂, v₁, v₂ are suitable stochastic processes.

Also, we need to give the solution spaces for (5.78). Let V be a Hilbert space such that H ⊂ V and the embedding from H to V is a Hilbert–Schmidt operator. Denote by V′ the dual space of V with respect to the pivot space H. Put

  D_{F,w}([0, T]; L^∞(Ω; 𝓛(H))) ≜ { P ∈ D_F([0, T]; L^∞(Ω; 𝓛₂(H; V))) | P(t, ω) ∈ S(H), a.e. (t, ω) ∈ [0, T] × Ω, and χ_{[t,T]} P(·)ζ ∈ D_F([t, T]; L²(Ω; H)), ∀ ζ ∈ L²_{F_t}(Ω; H) }

and

  L²_{F,w}(0, T; 𝓛(H)) ≜ { Λ ∈ L²_F(0, T; 𝓛₂(H; V)) | D*Λ ∈ 𝓛_pd(L^∞_F(0, T; L²(Ω; H)); L²_F(0, T; U)) }.

Now, we introduce the transposition solution to (5.78):

Definition 5.6. We call (P(·), Λ(·)) ∈ D_{F,w}([0, T]; L^∞(Ω; 𝓛(H))) × L²_{F,w}(0, T; 𝓛(H)) a transposition solution to (5.78) if the following three conditions hold:

(1) K(t, ω) ≡ R(t, ω) + D(t, ω)*P(t, ω)D(t, ω) > 0 and its left inverse K(t, ω)⁻¹ is a densely defined closed operator for a.e. (t, ω) ∈ [0, T] × Ω;

(2) For any t ∈ [0, T], ξ₁, ξ₂ ∈ L⁴_{F_t}(Ω; H), u₁(·), u₂(·) ∈ L⁴_F(Ω; L²(t, T; H)) and v₁(·), v₂(·) ∈ L⁴_F(Ω; L²(t, T; V′)), it holds that

  E⟨Gφ₁(T), φ₂(T)⟩_H + E ∫ₜ^T ⟨M(τ)φ₁(τ), φ₂(τ)⟩_H dτ − E ∫ₜ^T ⟨K(τ)⁻¹L(τ)φ₁(τ), L(τ)φ₂(τ)⟩_H dτ
  = E⟨P(t)ξ₁, ξ₂⟩_H + E ∫ₜ^T ⟨P(τ)u₁(τ), φ₂(τ)⟩_H dτ + E ∫ₜ^T ⟨P(τ)φ₁(τ), u₂(τ)⟩_H dτ
    + E ∫ₜ^T ⟨P(τ)C(τ)φ₁(τ), v₂(τ)⟩_H dτ + E ∫ₜ^T ⟨P(τ)v₁(τ), C(τ)φ₂(τ) + v₂(τ)⟩_H dτ
    + E ∫ₜ^T ⟨v₁(τ), Λ(τ)φ₂(τ)⟩_{V′,V} dτ + E ∫ₜ^T ⟨Λ(τ)φ₁(τ), v₂(τ)⟩_{V,V′} dτ,

where φ₁(·) and φ₂(·) solve (5.80) and (5.81), respectively (by Theorem B.7, one has φ₁(·), φ₂(·) ∈ C_F([0, T]; L⁴(Ω; H)));

(3) For any t ∈ [0, T], ξ₁, ξ₂ ∈ L²_{F_t}(Ω; H), u₁(·), u₂(·) ∈ L²_F(t, T; H) and v₁(·), v₂(·) ∈ L²_F(t, T; U), it holds that

  E⟨Gφ₁(T), φ₂(T)⟩_H + E ∫ₜ^T ⟨M(τ)φ₁(τ), φ₂(τ)⟩_H dτ − E ∫ₜ^T ⟨K(τ)⁻¹L(τ)φ₁(τ), L(τ)φ₂(τ)⟩_H dτ
  = E⟨P(t)ξ₁, ξ₂⟩_H + E ∫ₜ^T ⟨P(τ)u₁(τ), φ₂(τ)⟩_H dτ + E ∫ₜ^T ⟨P(τ)φ₁(τ), u₂(τ)⟩_H dτ
    + E ∫ₜ^T ⟨P(τ)C(τ)φ₁(τ), D(τ)v₂(τ)⟩_H dτ + E ∫ₜ^T ⟨P(τ)D(τ)v₁(τ), C(τ)φ₂(τ) + D(τ)v₂(τ)⟩_H dτ
    + E ∫ₜ^T ⟨Λ(τ)D(τ)v₁(τ), φ₂(τ)⟩_H dτ + E ∫ₜ^T ⟨D(τ)*Λ(τ)φ₁(τ), v₂(τ)⟩_U dτ.

Here, φ₁(·) and φ₂(·) solve (5.80) and (5.81) with v₁ and v₂ replaced by Dv₁ and Dv₂, respectively (by Theorem B.7, one has φ₁(·), φ₂(·) ∈ C_F([0, T]; L²(Ω; H))).

Theorem 5.6. If the Riccati equation (5.78) admits a transposition solution (P(·), Λ(·)) ∈ D_{F,w}([0, T]; L^∞(Ω; 𝓛(H))) × L²_{F,w}(0, T; 𝓛(H)) such that

  K(·)⁻¹[B(·)*P(·) + D(·)*P(·)C(·) + D(·)*Λ(·)] ∈ Υ₂(H; U),

then Problem (GSLQ) admits an optimal feedback operator Θ(·) ∈ Υ₂(H; U), which is given by

  Θ(·) = −K(·)⁻¹[B(·)*P(·) + D(·)*P(·)C(·) + D(·)*Λ(·)].

Furthermore,

  inf_{u ∈ L²_F(0,T;U)} 𝒥(η; u) = ½ E⟨P(0)η, η⟩_H.

By Theorem 5.6, the existence of an optimal feedback operator is reduced to the existence of suitable transposition solutions to the Riccati equation (5.78), which is in general a challenging problem. There are only some partial answers to this problem (e.g., Lü & Zhang, 2019b). Due to the limitation of space, we refer the readers to Lü and Zhang (2019b) for the details.

5.8. Notes and open problems

The material of this section is a simplified version of Lü (2019), Lü and Wang (0000) and Lü and Zhang (2019b).

Compared with LQ problems for deterministic distributed parameter systems and stochastic differential equations, there exist only a quite limited number of works addressing LQ problems for SDPSs (e.g., Ahmed, 1981; Dou & Lü, 2020; Guatteri & Tessitore, 2005, 2014; Hafizoglu, Lasiecka, Levajković, Mena, & Tuffaha, 2017; Ichikawa, 1979; Lü, 2019; Lü & Wang, 0000; Lü & Zhang, 2019b; Tessitore, 1992). We list below some of these typical works:

In Ichikawa (1979) and Tessitore (1992), Problem (GSLQ) was studied under the assumption that the diffusion term in the control system (5.74) is Cx(t)dW₁(t) + Du(t)dW₂(t), where W₁(·) and W₂(·) are mutually independent Brownian motions and the coefficients in (5.74) and the cost functional (5.76) are deterministic. In such a context, the corresponding Riccati equation becomes

  dP = −[PA + A*P + C*PC + M − PB(R + D*PD)⁻¹B*P] dt in [0, T),  (5.82)
  P(T) = G.

This is a deterministic operator-valued Riccati equation rather than an operator-valued backward stochastic Riccati equation. Under the assumption that R > 0 and R⁻¹ ∈ 𝓛(U), the well-posedness of (5.82) was obtained. Their results can be improved according to the method in Lü (2019).
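The value formula inf 𝒥 = ½E⟨P(0)η, η⟩_H in Theorem 5.6 can be sanity-checked in the simplest scalar, deterministic specialization of (5.82) (C = D = 0, all coefficients constant). The sketch below is our own illustration with hypothetical data: it solves the Riccati ODE backward, applies the feedback u = −(bP/r)x, and compares the accumulated cost with ½P(0)η² and with the uncontrolled cost.

```python
# Deterministic scalar LQ sketch (assumed data): dx = (a x + b u) dt,
# J(u) = 0.5*∫ (m x^2 + r u^2) dt + 0.5*g x(T)^2, x(0) = eta.
a, b, m, r, g, T, eta, N = 0.5, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 20000
h = T / N

# Riccati ODE: -dP/dt = 2 a P + m - (b P)^2 / r, P(T) = g (backward sweep).
P = [0.0] * (N + 1)
P[N] = g
for k in range(N - 1, -1, -1):
    q = P[k + 1]
    P[k] = q + h * (2 * a * q + m - (b * q) ** 2 / r)

def cost(feedback):
    """Simulate x forward under u = feedback(k, x) and accumulate J(u)."""
    x, J = eta, 0.0
    for k in range(N):
        u = feedback(k, x)
        J += 0.5 * h * (m * x * x + r * u * u)
        x += h * (a * x + b * u)
    return J + 0.5 * g * x * x

J_opt = cost(lambda k, x: -b * P[k] / r * x)   # Riccati feedback
J_zero = cost(lambda k, x: 0.0)                # no control, for comparison
```

Up to discretization error, J_opt matches ½P(0)η², and it is smaller than the cost of the uncontrolled trajectory.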
In Ahmed (1981), by assuming that the diffusion term in (5.74) is σdW(t), with σ being a suitable 𝐅-adapted H-valued process independent of both the state and the control, Problem (GSLQ) was studied and the optimal feedback control was obtained by solving a random operator-valued Riccati equation (similar to (5.82)) and a BSEE.

In Guatteri and Tessitore (2005), Problem (GSLQ) was considered for the special case R = I_U, the identity operator on U, and D = 0, for which Eq. (5.78) specializes to

  dP = −(PA + A*P + ΛC + C*Λ + C*PC + M − PBB*P) dt + Λ dW(t) in [0, T),  (5.83)
  P(T) = G.

Although (5.83) is simpler than (5.78), it is also an operator-valued BSEE (because the "bad" term "ΛdW(t)" is still in (5.83)). In Guatteri and Tessitore (2005), P was obtained as a "weak limit" of solutions to some suitable finite dimensional approximations of (5.83), and nothing was said about Λ. This is enough for D = 0, because the corresponding optimal feedback operator is given by

  Θ(·) = −K(·)⁻¹B(·)*P(·),

which is independent of Λ.

In Guatteri and Tessitore (2014), the well-posedness of (5.83) was further studied when A is a self-adjoint operator on H and there exist an orthonormal basis {e_j}_{j=1}^∞ in H and an increasing sequence of positive numbers {μ_j}_{j=1}^∞ so that Ae_j = −μ_j e_j for j ∈ N and Σ_{j=1}^∞ μ_j^{−r} < ∞ for some r ∈ (1/4, 1/2). This assumption is not satisfied by many controlled SPDEs, such as stochastic wave equations, stochastic Schrödinger equations, stochastic KdV equations, stochastic beam equations, etc. It is not even fulfilled by the classical m-dimensional stochastic heat equation for m ≥ 2.

The above mentioned conditions were dropped in Lü and Wang (0000) and Lü and Zhang (2019b), where the concept of transposition solution to (5.78) played a key role.

There are many open problems related to LQ problems for SEEs. We shall list below some of them which, in our opinion, are particularly interesting:

(1) Control systems with unbounded control operators.
In Lü (2019), Lü and Wang (0000) and Lü and Zhang (2019b), it was assumed that both B(·) and D(·) are bounded operator-valued processes. As a result, the system (5.74) does not cover controlled SPDEs with boundary/pointwise controls. To drop this restriction, one needs to make some further assumptions, such as that the semigroup {S(t)}_{t≥0} has some smoothing effect. When D = 0, some results along this line have been obtained (e.g., Guatteri & Tessitore, 2014; Hafizoglu et al., 2017). On the other hand, when D ≠ 0, there is no published result in this respect.

(2) LQ problems for SEEs in infinite horizon.
There will be some new phenomena when T = ∞. For example, the existence of admissible controls is not guaranteed a priori, which is also connected to the stabilization problems for SEEs. As far as we know, nothing has been published on this topic.

(3) Time-inconsistent LQ problems for SEEs.
Time-inconsistent LQ problems have been studied extensively for stochastic differential equations (e.g., Hu, Jin, & Zhou, 2012; Hu et al., 2017; Yong, 2015). As far as we know, Dou and Lü (2020) is the only published paper on time-inconsistent LQ problems for SEEs.

(4) LQ problems for mean-field SEEs.
The LQ problem for mean-field stochastic differential equations is another interesting topic (e.g., Acciaio, Backhoff-Veraguas, & Carmona, 2019; Wang, 2020; Yong, 2013, 2015). As far as we know, Lü (2020) is the only published paper on optimal feedback control of the LQ problem for mean-field SEEs.

(5) Well-posedness of forward–backward SEEs with general filtration.
By Corollary 5.1, one can find the optimal control of Problem (SLQ) by solving coupled forward–backward stochastic parabolic equations with general filtration. This can be generalized to LQ problems for general SPDEs. However, as we have said, there is no published paper about this. We believe that one can combine the methods in Lü and Zhang (2014) and Ma and Yong (2007) to obtain some results.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A. Some preliminaries on functional analysis and partial differential equations

For the readers' convenience, we recall some basic knowledge of Functional Analysis and Partial Differential Equations. For more details, we refer to Yosida (1995). In this paper, all linear spaces are over the field R (of real numbers) or over the field C (of complex numbers). Clearly, any linear space over the field C is a linear space over the field R.

A.1. Banach spaces and Hilbert spaces

Definition A.1. Let 𝒳 be a linear space. A map |·|_𝒳 : 𝒳 → R is called a norm on 𝒳 if it satisfies the following:

  |x|_𝒳 ≥ 0, ∀ x ∈ 𝒳, and |x|_𝒳 = 0 ⟺ x = 0;
  |αx|_𝒳 = |α||x|_𝒳, ∀ α ∈ C, x ∈ 𝒳;
  |x + y|_𝒳 ≤ |x|_𝒳 + |y|_𝒳, ∀ x, y ∈ 𝒳.

A linear space 𝒳 with the norm |·|_𝒳 is called a normed linear space, denoted by (𝒳, |·|_𝒳) (or simply by 𝒳 if the norm |·|_𝒳 is clear from the context).

We call {x_k}_{k=1}^∞ ⊂ 𝒳 a Cauchy sequence (in (𝒳, |·|_𝒳)) if for any ε > 0, there is k₀ ∈ N such that |x_k − x_j|_𝒳 < ε for all k, j ≥ k₀.

We say that {x_k}_{k=1}^∞ ⊂ 𝒳 converges to some x₀ (∈ 𝒳) in 𝒳 if lim_{k→∞} x_k = x₀ in 𝒳, i.e., lim_{k→∞} |x_k − x₀|_𝒳 = 0.

We call G ⊂ 𝒳 bounded (in (𝒳, |·|_𝒳)) if there is a constant C > 0 such that |x|_𝒳 ≤ C for any x ∈ G.

We call G dense in 𝒳 if for any x ∈ 𝒳, one can find a sequence {g_k}_{k=1}^∞ ⊂ G such that lim_{k→∞} g_k = x in 𝒳.

Definition A.2. A normed linear space (𝒳, |·|_𝒳) is called a Banach space if it is complete, i.e., for any Cauchy sequence {x_k}_{k=1}^∞ ⊂ 𝒳, there exists x₀ ∈ 𝒳 so that lim_{k→∞} x_k = x₀ in 𝒳.

If a normed linear space (𝒳, |·|_𝒳) is not complete, then one can always find a unique Banach space (𝒳̃, |·|_𝒳̃) such that (1) 𝒳 ⊂ 𝒳̃; (2) |x|_𝒳 = |x|_𝒳̃ for any x ∈ 𝒳; and (3) 𝒳 is dense in 𝒳̃. The Banach space 𝒳̃ is called the completion (with respect to the norm |·|_𝒳) of 𝒳.

Definition A.3. A Banach space 𝒳 is called separable if there exists a countable dense subset of 𝒳.

Definition A.4. Let 𝒳 be a linear space. A map ⟨·,·⟩_𝒳 : 𝒳 × 𝒳 → C is called an inner product on 𝒳 if it satisfies the following:

  ⟨x, x⟩_𝒳 ≥ 0, ∀ x ∈ 𝒳, and ⟨x, x⟩_𝒳 = 0 ⟺ x = 0;
  ⟨x, y⟩_𝒳 = the complex conjugate of ⟨y, x⟩_𝒳, ∀ x, y ∈ 𝒳;  (A.1)
  ⟨αx + βy, z⟩_𝒳 = α⟨x, z⟩_𝒳 + β⟨y, z⟩_𝒳, ∀ α, β ∈ C, x, y, z ∈ 𝒳.

A linear space 𝒳 with the inner product ⟨·,·⟩_𝒳 is called an inner product space, denoted by (𝒳, ⟨·,·⟩_𝒳) (or simply by 𝒳 if the inner product ⟨·,·⟩_𝒳 is clear from the context).
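As a concrete check of Definition A.4, the sketch below (our own illustration) verifies the inner-product axioms (A.1) for the Euclidean inner product on R³ at randomly sampled points, together with the triangle inequality of Definition A.1 for the induced map √⟨x, x⟩.

```python
import math
import random

random.seed(0)

def ip(x, y):                      # Euclidean inner product on R^3
    return sum(xi * yi for xi, yi in zip(x, y))

def norm(x):                       # induced map sqrt(<x, x>)
    return math.sqrt(ip(x, x))

def rand_vec():
    return [random.uniform(-5, 5) for _ in range(3)]

for _ in range(100):
    x, y, z = rand_vec(), rand_vec(), rand_vec()
    alpha, beta = random.uniform(-3, 3), random.uniform(-3, 3)
    assert ip(x, x) >= 0.0                                     # positivity
    assert abs(ip(x, y) - ip(y, x)) < 1e-9                     # symmetry (real case)
    lin = ip([alpha * a + beta * c for a, c in zip(x, y)], z)
    assert abs(lin - (alpha * ip(x, z) + beta * ip(y, z))) < 1e-7  # linearity
    s = [a + c for a, c in zip(x, y)]
    assert norm(s) <= norm(x) + norm(y) + 1e-9                 # triangle inequality
```

Over C the second axiom involves complex conjugation; the check above only covers the real case, where conjugation is trivial.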
The following result gives a relationship between the norm and the inner product.

Proposition A.1. Let (𝒳, ⟨·,·⟩_𝒳) be an inner product space. Then, the map defined by √⟨x, x⟩_𝒳, ∀ x ∈ 𝒳, is a norm on 𝒳.

By Proposition A.1, any inner product space (𝒳, ⟨·,·⟩_𝒳) can be regarded as a normed linear space. We call |x|_𝒳 = √⟨x, x⟩_𝒳 the norm induced by ⟨·,·⟩_𝒳.

Definition A.5. An inner product space 𝒳 is called a Hilbert space if it is complete under the norm induced by its inner product.

At last, we recall that a subset G of some vector space is said to be convex if for any x, y ∈ G and λ ∈ [0, 1], one has λx + (1 − λ)y ∈ G. Clearly, the intersection of any number of convex sets is convex; but the union of two convex sets is not necessarily convex. Also, if G₁ and G₂ are convex, then for any λ₁, λ₂ ∈ R, the set λ₁G₁ + λ₂G₂ ≡ {λ₁x₁ + λ₂x₂ ∣ x₁ ∈ G₁, x₂ ∈ G₂} is convex.

A.2. Bounded linear operator

In the rest of this section, unless otherwise stated, 𝒳₁ and 𝒴₁ are normed linear spaces, and 𝒳 and 𝒴 are Banach spaces.

Definition A.6. A map A : 𝒳₁ → 𝒴₁ is called a bounded linear operator if it is linear, i.e.,

  A(αx + βy) = αAx + βAy,  ∀ x, y ∈ 𝒳₁, α, β ∈ C,  (A.2)

and A maps bounded subsets of 𝒳₁ into bounded subsets of 𝒴₁.

Denote by 𝓛(𝒳₁; 𝒴₁) the set of all bounded linear operators from 𝒳₁ to 𝒴₁. When 𝒳₁ = 𝒴₁, we simply write 𝓛(𝒳₁) instead of 𝓛(𝒳₁; 𝒳₁). For any α, β ∈ R and A, B ∈ 𝓛(𝒳₁; 𝒴₁), we define αA + βB as follows:

  (αA + βB)(x) = αAx + βBx,  ∀ x ∈ 𝒳₁.  (A.3)

Then, 𝓛(𝒳₁; 𝒴₁) is also a linear space. Define

  |A|_{𝓛(𝒳₁;𝒴₁)} = sup_{|x|_{𝒳₁}≤1} |Ax|_{𝒴₁} = sup_{x≠0} |Ax|_{𝒴₁}/|x|_{𝒳₁}.  (A.4)

One can show that |·|_{𝓛(𝒳₁;𝒴₁)} defined by (A.4) is a norm on 𝓛(𝒳₁; 𝒴₁), and 𝓛(𝒳; 𝒴) is a Banach space under such a norm.

Let us consider the special case 𝒴₁ = C. Any f ∈ 𝓛(𝒳₁; C) is called a bounded linear functional on 𝒳₁. Hereafter, we write 𝒳₁′ = 𝓛(𝒳₁; C) and call it the dual (space) of 𝒳₁. We also denote

  f(x) = ⟨f, x⟩_{𝒳₁′,𝒳₁},  ∀ x ∈ 𝒳₁.  (A.5)

The symbol ⟨·,·⟩_{𝒳₁′,𝒳₁} is referred to as the duality pairing between 𝒳₁′ and 𝒳₁. It follows from (A.4) that

  |f|_{𝒳₁′} = sup_{x∈𝒳₁, |x|_{𝒳₁}≤1} |f(x)|,  ∀ f ∈ 𝒳₁′.  (A.6)

The following results are quite useful.

Theorem A.1 (Hahn–Banach). Let 𝒳₀ be a linear subspace of 𝒳₁ and f₀ ∈ 𝒳₀′. Then, there exists f ∈ 𝒳₁′ with |f|_{𝒳₁′} = |f₀|_{𝒳₀′}, such that

  f(x) = f₀(x),  ∀ x ∈ 𝒳₀.

Theorem A.2 (Riesz Representation). Let (𝒳, ⟨·,·⟩_𝒳) be a Hilbert space and F ∈ 𝒳′. Then there is y ∈ 𝒳 such that

  F(x) = ⟨x, y⟩_𝒳,  ∀ x ∈ 𝒳.

An immediate consequence of Theorem A.1 is as follows:

Proposition A.2. For any x₀ ∈ 𝒳, there is f ∈ 𝒳′ with |f|_{𝒳′} = 1 so that f(x₀) = |x₀|_𝒳.

For any A ∈ 𝓛(𝒳; 𝒴), we define a map A* : 𝒴′ → 𝒳′ by the following:

  ⟨A*y′, x⟩_{𝒳′,𝒳} = ⟨y′, Ax⟩_{𝒴′,𝒴},  ∀ y′ ∈ 𝒴′, x ∈ 𝒳.  (A.7)

Clearly, A* is linear and bounded. We call A* the adjoint operator of A. It is easy to check that for any A, B ∈ 𝓛(𝒳; 𝒴) and α, β ∈ R, (αA + βB)* = αA* + βB*.

When 𝒳 is a Hilbert space, we identify 𝒳 and 𝒳′. If A ∈ 𝓛(𝒳), then A* ∈ 𝓛(𝒳). If A = A*, then we call A a (bounded) self-adjoint operator. Put

  S(𝒳) ≜ { A ∈ 𝓛(𝒳) | A is self-adjoint }.

A.3. Unbounded linear operator

In this subsection, unless otherwise stated, H is a Hilbert space.

Definition A.7. For a linear map A : D(A) → H, where D(A) is a linear subspace of H, we call D(A) the domain of A. The graph of A is the subset of H × H consisting of all elements of the form (x, Ax) with x ∈ D(A). The operator A is called closed (resp. densely defined) if its graph is a closed subspace of H × H (resp. D(A) is dense in H).

Unlike bounded linear operators, the linear operator A in Definition A.7 might be unbounded, i.e., it may map some bounded sets in H to unbounded sets in H. A typical densely defined (on H = L²(0, 1), the completion of C([0, 1]) with respect to the norm √(∫₀^1 |f|² dt)) and closed unbounded linear operator is the differential operator d/dx, with the domain

  { y ∈ L²(0, 1) | y is absolutely continuous, dy/dx ∈ L²(0, 1), y(0) = 0 }.

In the sequel, we shall assume that the operator A is densely defined. The domain D(A*) of the adjoint operator A* of A is defined as the set of any element f ∈ H such that, for some g_f ∈ H,

  ⟨Ax, f⟩_H = ⟨x, g_f⟩_H,  ∀ x ∈ D(A).

In this case, we define A*f ≜ g_f.

Denote by I the identity operator on H. The resolvent set ρ(A) of A is defined by

  ρ(A) ≜ { λ ∈ C | λI − A : D(A) → H is bijective and (λI − A)⁻¹ ∈ 𝓛(H) }.

The resolvent set ρ(A) is open in C.

A.4. Hilbert–Schmidt operator

Let V₁ and V₂ be separable Hilbert spaces, and {e_j}_{j=1}^∞ be an orthonormal basis of V₁.

Definition A.8. An operator F ∈ 𝓛(V₁; V₂) is said to be a Hilbert–Schmidt operator if Σ_{j=1}^∞ |Fe_j|²_{V₂} < ∞.

Remark A.1. The number Σ_{j=1}^∞ |Fe_j|²_{V₂} is independent of the choice of orthonormal basis {e_j}_{j=1}^∞ in V₁.

Denote by 𝓛₂(V₁; V₂) the space of all Hilbert–Schmidt operators from V₁ into V₂. One can show that 𝓛₂(V₁; V₂) equipped with the inner product

  ⟨F, G⟩_{𝓛₂(V₁;V₂)} = Σ_{j=1}^∞ ⟨Fe_j, Ge_j⟩_{V₂},  ∀ F, G ∈ 𝓛₂(V₁; V₂),

is a separable Hilbert space. When V₁ = V₂, we simply write 𝓛₂(V₁) instead of 𝓛₂(V₁; V₁).
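Remark A.1 states that Σ_j |Fe_j|² does not depend on the chosen orthonormal basis. For a 2×2 matrix, where the Hilbert–Schmidt norm is just the Frobenius norm, this is easy to confirm against a rotated basis, and the adjoint identity (A.7) can be checked at the same time. The matrix, angle, and vectors below are arbitrary choices of our own.

```python
import math

F = [[1.0, 2.0], [3.0, 4.0]]       # an arbitrary operator on R^2

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def hs_sq(M, basis):               # sum_j |M e_j|^2 over an orthonormal basis
    return sum(dot(apply(M, e), apply(M, e)) for e in basis)

std = [[1.0, 0.0], [0.0, 1.0]]     # standard basis of R^2
t = 0.7                            # arbitrary rotation angle
rot = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]

hs_std, hs_rot = hs_sq(F, std), hs_sq(F, rot)

# Adjoint identity <F* y, x> = <y, F x>, with F* the transpose on R^2.
Ft = [[F[0][0], F[1][0]], [F[0][1], F[1][1]]]
x, y = [1.0, -2.0], [0.5, 3.0]
lhs, rhs = dot(apply(Ft, y), x), dot(y, apply(F, x))
```

Here hs_std = 1 + 4 + 9 + 16 = 30, and hs_rot agrees with it to rounding error.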
For any v ∈ V₁ and h ∈ V₂, the tensor product h ⊗ v (of v and h) is a bounded linear operator from V₁ to V₂, defined by

  (h ⊗ v)l = ⟨l, v⟩_{V₁} h,  ∀ l ∈ V₁.  (A.8)

Next, let V₂ ⊗ V₁ be the completion of span{ h ⊗ v | v ∈ V₁, h ∈ V₂ } with respect to the inner product

  ⟨h₁ ⊗ v₁, h₂ ⊗ v₂⟩_{V₂⊗V₁} = ⟨h₁, h₂⟩_{V₂} ⟨v₁, v₂⟩_{V₁}.

For any v ∈ V₁ and h ∈ V₂, it is easy to show that the tensor product h ⊗ v ∈ 𝓛₂(V₁; V₂). Moreover, we have the following result.

Proposition A.3. The space V₂ ⊗ V₁ given above is a separable Hilbert space and it is isometrically isomorphic to 𝓛₂(V₁; V₂).

Proposition A.4. Let Ṽ be a separable Hilbert space. If F₁ ∈ 𝓛(Ṽ; V₁), F₂ ∈ 𝓛(V₂; Ṽ) and F ∈ 𝓛₂(V₁; V₂), then FF₁ ∈ 𝓛₂(Ṽ; V₂) and F₂F ∈ 𝓛₂(V₁; Ṽ). Furthermore,

  |FF₁|_{𝓛₂(Ṽ;V₂)} ≤ |F|_{𝓛₂(V₁;V₂)} |F₁|_{𝓛(Ṽ;V₁)},
  |F₂F|_{𝓛₂(V₁;Ṽ)} ≤ |F|_{𝓛₂(V₁;V₂)} |F₂|_{𝓛(V₂;Ṽ)}.

For more details on Hilbert–Schmidt operators, we refer to Schatten (1970) for example.

A.5. Measure and integration

In this subsection, let us recall some basic definitions and results for measure and integration.

Let Ω be a nonempty set. For any subset E ⊂ Ω, denote by χ_E(·) the characteristic function of E, defined on Ω, i.e., χ_E(ω) = 1 if ω ∈ E, and χ_E(ω) = 0 if ω ∈ Ω⧵E.

Definition A.9. Let 𝓕 be a family of subsets of Ω. 𝓕 is called a σ-field on Ω if (1) Ω ∈ 𝓕; (2) Ω⧵E ∈ 𝓕 for any E ∈ 𝓕; and (3) ⋃_{i=1}^∞ E_i ∈ 𝓕 whenever each E_i ∈ 𝓕.

If 𝓕 is a σ-field on Ω, then (Ω, 𝓕) is called a measurable space. Any element E ∈ 𝓕 is called a measurable set on (Ω, 𝓕), or simply a measurable set.

In the sequel, we shall fix a measurable space (Ω, 𝓕).

Definition A.10. A set function μ : 𝓕 → [0, +∞] is called a measure on (Ω, 𝓕) if μ(∅) = 0 and μ is countably additive, i.e., μ(⋃_{i=1}^∞ E_i) = Σ_{i=1}^∞ μ(E_i) whenever {E_i}_{i=1}^∞ ⊂ 𝓕 are mutually disjoint, i.e., E_i ∩ E_j = ∅ for i, j ∈ N, i ≠ j. The triple (Ω, 𝓕, μ) is called a measure space.

We shall fix below a measure space (Ω, 𝓕, μ).

Definition A.11. The measure μ is called finite (resp. σ-finite) if μ(Ω) < ∞ (resp. there exists a sequence {E_i}_{i=1}^∞ ⊂ 𝓕 so that Ω = ⋃_{i=1}^∞ E_i and μ(E_i) < ∞ for each i ∈ N).

We call any E ∈ 𝓕 a μ-null (measurable) set if μ(E) = 0.

Definition A.12. The measure space (Ω, 𝓕, μ) is said to be complete (and μ is said to be complete on 𝓕) if

  𝓕 ⊃ { Ẽ ⊂ Ω | Ẽ ⊂ E for some μ-null set E ∈ 𝓕 }.

If the measure space (Ω, 𝓕, μ) is not complete, then the class 𝓕̄ of all sets of the form (E⧵N) ∪ (N⧵E), with E ∈ 𝓕 and N ∈ 𝓝 (the class of all subsets of μ-null sets in 𝓕), is a σ-field which contains 𝓕 as a proper sub-class, and the set function μ̄ defined by μ̄((E⧵N) ∪ (N⧵E)) = μ(E) is a complete measure on 𝓕̄. The measure μ̄ is called the completion of μ.

If a certain proposition concerning the points of (Ω, 𝓕, μ) is true for every point ω ∈ Ω, with the exception at most of a set of points which form a μ-null set, it is customary to say that the proposition is true μ-a.e. (or simply a.e., if the measure μ is clear from the context, in particular when μ is the usual Lebesgue measure). For example, a function g : (Ω, 𝓕, μ) → R is called essentially bounded (or simply bounded) if it is bounded μ-a.e., i.e., if there exists a positive, finite constant c such that {ω ∈ Ω | |g(ω)| > c} is a μ-null set. The infimum of the values of c for which this statement is true is called the essential supremum of |g|, abbreviated to esssup_{ω∈Ω} |g(ω)|.

Definition A.13. Let (Ω₁, 𝓕₁), …, (Ω_n, 𝓕_n) be measurable spaces, n ∈ N. Denote by 𝓕₁ × ⋯ × 𝓕_n the σ-field (on the Cartesian product Ω₁ × ⋯ × Ω_n) generated by the subsets of the form E₁ × ⋯ × E_n, where E_i ∈ 𝓕_i, 1 ≤ i ≤ n. (Note that here and henceforth the σ-field 𝓕₁ × ⋯ × 𝓕_n does not stand for the Cartesian product of 𝓕₁, …, 𝓕_n.) Let μ_i be a measure on (Ω_i, 𝓕_i). We call μ a product measure on (Ω₁ × ⋯ × Ω_n, 𝓕₁ × ⋯ × 𝓕_n) induced by μ₁, …, μ_n if

  μ(E₁ × ⋯ × E_n) = ∏_{i=1}^n μ_i(E_i),  ∀ E_i ∈ 𝓕_i.

Theorem A.3. Let (Ω_i, 𝓕_i, μ_i) be a σ-finite measure space, 1 ≤ i ≤ n. Then there is a unique product measure μ (denoted by μ₁ × ⋯ × μ_n) on (Ω₁ × ⋯ × Ω_n, 𝓕₁ × ⋯ × 𝓕_n) induced by {μ_i}_{i=1}^n.

Definition A.14. The smallest σ-field containing all open subsets of 𝒳 is called the Borel σ-field of 𝒳, denoted by 𝓑(𝒳). Any set E ∈ 𝓑(𝒳) is called a Borel set (in 𝒳).

Let (Ω′, 𝓕′) be another measurable space, f : Ω → Ω′ be a map (when Ω′ = 𝒳, we also call f an 𝒳-valued function), and P be a property concerning the map f at some elements in Ω. We shall simply denote by {P} the subset {ω ∈ Ω | P holds for f(ω)}. For example, when Ω′ = R, to simplify the notation we denote {ω ∈ Ω | f(ω) ≥ 0} by {f ≥ 0}.

Definition A.15. The map f : Ω → Ω′ is said to be 𝓕/𝓕′-measurable, or simply 𝓕-measurable, or even measurable (in the case that no confusion would occur), if f⁻¹(𝓕′) ⊂ 𝓕. Particularly, if (Ω′, 𝓕′) = (𝒳, 𝓑(𝒳)), then f is said to be an (𝒳-valued) 𝓕-measurable (or simply measurable) function.

Definition A.16. Let {f_k}_{k=0}^∞ be a sequence of 𝒳-valued functions defined on Ω. The sequence {f_k}_{k=1}^∞ is said to converge to f₀ (denoted by lim_{k→∞} f_k = f₀) in 𝒳, μ-a.e., if {lim_{k→∞} f_k ≠ f₀} ∈ 𝓕 and μ({lim_{k→∞} f_k ≠ f₀}) = 0.

Definition A.17. Let f : Ω → 𝒳 be an 𝒳-valued function.
(1) We call f(·) an 𝓕-simple function (or simply a simple function when 𝓕 is clear from the context) if

  f(·) = Σ_{i=1}^k χ_{E_i}(·) h_i,  (A.9)

for some k ∈ N, h_i ∈ 𝒳, and mutually disjoint sets E₁, …, E_k ∈ 𝓕 satisfying ⋃_{i=1}^k E_i = Ω;
(2) The function f(·) is said to be strongly 𝓕-measurable w.r.t. μ (or simply strongly measurable) if there exists a sequence of 𝓕-simple functions {f_k}_{k=1}^∞ converging to f in 𝒳, μ-a.e.

For a measurable map f : (Ω, 𝓕) → (Ω′, 𝓕′), it is easy to show that f⁻¹(𝓕′) is a sub-σ-field of 𝓕. We call it the σ-field generated by f, and denote it by σ(f). Further, for a given index set Λ and a family
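On finite sets the product measure of Definition A.13 and Theorem A.3 is elementary: every set splits into atoms, and rectangles receive the product mass. The toy sketch below is our own illustration, with two weighted finite spaces of hypothetical masses; it checks the defining identity for rectangles and additivity over a disjoint decomposition.

```python
# Two finite measure spaces: Omega1 with measure mu1, Omega2 with mu2,
# given as dicts of point masses (hypothetical weights).
mu1 = {"a": 0.5, "b": 2.0}
mu2 = {0: 1.0, 1: 3.0, 2: 0.25}

def product_measure(E):
    """Measure of a subset E of Omega1 x Omega2: sum of atom masses
    mu1[x] * mu2[y] over the pairs (x, y) in E."""
    return sum(mu1[x] * mu2[y] for (x, y) in E)

# Defining identity: mu(E1 x E2) = mu1(E1) * mu2(E2) for rectangles.
E1, E2 = {"a", "b"}, {1, 2}
rect = {(x, y) for x in E1 for y in E2}
lhs = product_measure(rect)
rhs = sum(mu1[x] for x in E1) * sum(mu2[y] for y in E2)

# Additivity over a disjoint decomposition of a non-rectangular set.
S = {("a", 0), ("b", 1), ("b", 2)}
parts = [{("a", 0)}, {("b", 1), ("b", 2)}]
add_lhs = product_measure(S)
add_rhs = sum(product_measure(p) for p in parts)
```

Uniqueness in Theorem A.3 is visible here too: once the rectangle values are fixed, the measure of every subset is forced by additivity over atoms.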
of measurable maps {f_λ}_{λ∈Λ} (defined on (Ω, 𝓕), with possibly different ranges), we denote by σ(f_λ; λ ∈ Λ) the σ-field generated by ⋃_{λ∈Λ} σ(f_λ).

Let us fix below a σ-finite measure space (Ω, 𝓕, μ).

Definition A.18. Let f(·) be an (𝒳-valued) simple function in the form (A.9). We call f(·) Bochner integrable if μ(E_i) < ∞ for each i = 1, …, k. In this case, for any E ∈ 𝓕, the Bochner integral of f(·) over E is defined by

  ∫_E f(s) dμ = Σ_{i=1}^k μ(E ∩ E_i) h_i.

In general, we have the following notion.

Definition A.19. A strongly measurable function f(·) : Ω → 𝒳 is said to be Bochner integrable (w.r.t. μ) if there exists a sequence of Bochner integrable simple functions {f_i(·)}_{i=1}^∞ converging to f(·) in 𝒳, μ-a.e., so that

  lim_{i,j→∞} ∫_Ω |f_i(s) − f_j(s)|_𝒳 dμ = 0.

In this case, we also say that f(·) : Ω → 𝒳 is Bochner integrable. For any E ∈ 𝓕, the Bochner integral of f(·) over E is

  ∫_E f(s) dμ ≜ lim_{i→∞} ∫_Ω χ_E(s) f_i(s) dμ(s) in 𝒳.  (A.10)

It is easy to verify that the limit in the right hand side of (A.10) exists and its value is independent of the choice of the sequence {f_i(·)}_{i=1}^∞. Clearly, when 𝒳 = R^n (for some n ∈ N), the above Bochner integral coincides with the usual Lebesgue integral for R^n-valued functions.

The following result reveals the relationship between the Bochner integral (for vector-valued functions) and the usual (Lebesgue) integral (for scalar functions).

Theorem A.4. Let f(·) : Ω → 𝒳 be strongly measurable. Then, f(·) is Bochner integrable (w.r.t. μ) if and only if the scalar function |f(·)|_𝒳 : Ω → R is (Lebesgue) integrable (w.r.t. μ).

Further properties of the Bochner integral are collected as follows.

Theorem A.5. Let f(·), g(·) : Ω → 𝒳 be Bochner integrable. Then,
(1) For any a, b ∈ R, the function af(·) + bg(·) is Bochner integrable, and for any E ∈ 𝓕,

  ∫_E (af(s) + bg(s)) dμ = a ∫_E f(s) dμ + b ∫_E g(s) dμ.

(2) For any E ∈ 𝓕,

  |∫_E f(s) dμ|_𝒳 ≤ ∫_E |f(s)|_𝒳 dμ.

(3) The Bochner integral is μ-absolutely continuous, that is,

  lim_{E∈𝓕, μ(E)→0} ∫_E f(s) dμ = 0 in 𝒳.

(4) If F ∈ 𝓛(𝒳; 𝒴), then Ff(·) is a 𝒴-valued Bochner integrable function, and for any E ∈ 𝓕,

  ∫_E Ff(s) dμ = F ∫_E f(s) dμ.

The following result, known as the Dominated Convergence Theorem, is very useful.

Theorem A.6. Let f : Ω → 𝒳 be strongly measurable, and let g : Ω → R be a real valued nonnegative integrable function. Assume that f_i : Ω → 𝒳 is Bochner integrable so that |f_i|_𝒳 ≤ g, μ-a.e. for each i ∈ N, and lim_{i→∞} f_i = f in 𝒳, μ-a.e. Then, f is Bochner integrable (w.r.t. μ), and

  lim_{i→∞} ∫_E f_i(s) dμ = ∫_E f(s) dμ in 𝒳,  ∀ E ∈ 𝓕.

Also, one has the following Fubini Theorem (on Bochner integrals).

Theorem A.7. Let (Ω₁, 𝓕₁, μ₁) and (Ω₂, 𝓕₂, μ₂) be σ-finite measure spaces. If f(·,·) : Ω₁ × Ω₂ → 𝒳 is Bochner integrable (w.r.t. μ₁ × μ₂), then y(·) ≡ ∫_{Ω₁} f(t, ·) dμ₁(t) and z(·) ≡ ∫_{Ω₂} f(·, s) dμ₂(s) are a.e. defined on Ω₂ and Ω₁ and Bochner integrable w.r.t. μ₂ and μ₁, respectively. Moreover,

  ∫_{Ω₁×Ω₂} f(t, s) d(μ₁ × μ₂)(t, s) = ∫_{Ω₁} z(t) dμ₁(t) = ∫_{Ω₂} y(s) dμ₂(s).

For any p ∈ [1, ∞), denote by L^p(Ω; 𝒳) ≜ L^p(Ω, 𝓕, μ; 𝒳) the set of all (equivalence classes of) strongly measurable functions f : Ω → 𝒳 such that ∫_Ω |f|^p_𝒳 dμ < ∞. It is a Banach space with the norm

  |f|_{L^p(Ω;𝒳)} = (∫_Ω |f|^p_𝒳 dμ)^{1/p}.

When 𝒳 is a Hilbert space, so is L²(Ω; 𝒳). The inner product on L²(Ω; 𝒳) is

  ∫_Ω ⟨f, g⟩_𝒳 dμ,  ∀ f, g ∈ L²(Ω; 𝒳).

Denote by L^∞(Ω; 𝒳) ≜ L^∞(Ω, 𝓕, μ; 𝒳) the set of all (equivalence classes of) strongly measurable (𝒳-valued) functions f such that esssup_{ω∈Ω} |f(ω)|_𝒳 < ∞. This is a Banach space with the norm

  |f|_{L^∞(Ω;𝒳)} = esssup_{ω∈Ω} |f(ω)|_𝒳.

For 1 ≤ p ≤ ∞ and any non-empty open subset G of R^n, we shall simply denote L^p(G, 𝓜, 𝐦; 𝒳) by L^p(G; 𝒳), where 𝓜 is the family of Lebesgue measurable sets in G, and 𝐦 is the Lebesgue measure on G. Also, we simply denote L^p(Ω; R) and L^p(G; R) by L^p(Ω) and L^p(G), respectively. Particularly, if G = (0, T) ⊂ R for some T > 0, we simply denote L^p((0, T); 𝒳) and L^p((0, T)) respectively by L^p(0, T; 𝒳) and L^p(0, T).

The following result gives the dual space of L^p(Ω; 𝒳).

Proposition A.5. Let 𝒳 be a reflexive Banach space and p ∈ [1, ∞), p′ = p/(p − 1). Then,

  L^p(Ω; 𝒳)′ = L^{p′}(Ω; 𝒳′).  (A.11)

Let Φ : (Ω, 𝓕) → (Ω′, 𝓕′) be a measurable map. Then, for the measure μ on (Ω, 𝓕), the map Φ induces a measure μ′ on (Ω′, 𝓕′) via

  μ′(E′) ≜ μ(Φ⁻¹(E′)),  ∀ E′ ∈ 𝓕′.  (A.12)

The following is a change-of-variable formula:

Theorem A.8. A function f(·) : Ω′ → 𝒳 is Bochner integrable w.r.t. μ′ if and only if f(Φ(·)) (defined on (Ω, 𝓕)) is Bochner integrable w.r.t. μ. Furthermore,

  ∫_{Ω′} f(ω′) dμ′(ω′) = ∫_Ω f(Φ(ω)) dμ(ω).  (A.13)

To end this subsection, we recall the notions of continuity and differentiability for vector-valued functions.

Definition A.20. Let 𝒳₀ ⊂ 𝒳 and F : 𝒳₀ → 𝒴 be a function (not necessarily linear).
(1) We say that F is continuous at x₀ ∈ 𝒳₀ if |F(x) − F(x₀)|_𝒴 → 0 whenever x ∈ 𝒳₀ and x → x₀ (in 𝒳). If F is continuous at each point of 𝒳₀, we say that F is continuous on 𝒳₀.
(2) We say that F is Fréchet differentiable at x₀ ∈ 𝒳₀ if there exists F₁ ∈ 𝓛(𝒳; 𝒴) such that

  lim_{x∈𝒳₀, x→x₀} |F(x) − F(x₀) − F₁(x − x₀)|_𝒴 / |x − x₀|_𝒳 = 0.  (A.14)

In this case, we write F_x(x₀) = F₁ and call it the Fréchet derivative of F at x₀. If F is Fréchet differentiable at each point of 𝒳₀, we say that F is Fréchet differentiable on 𝒳₀. If the Fréchet derivative of F is continuous on 𝒳₀, we say that F is continuously Fréchet differentiable on 𝒳₀.
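For a simple function, Definition A.18 reduces the Bochner integral to the finite sum Σ_i μ(E ∩ E_i) h_i with vector values h_i, and on finite spaces Theorem A.7 (Fubini) reduces to an exchange of finite sums. Both are checked below on a discrete example of our own making, with 𝒳 = R² and hypothetical weights.

```python
# Omega = {0,...,5} with a weighted counting measure (hypothetical weights).
mu = {0: 0.5, 1: 1.0, 2: 0.25, 3: 2.0, 4: 1.0, 5: 0.75}

# Simple function f = sum_i chi_{E_i} h_i with values h_i in R^2.
partition = [({0, 1}, (1.0, 0.0)), ({2, 3}, (0.0, 2.0)), ({4, 5}, (-1.0, 1.0))]

def bochner(E):
    """Integral of f over E: sum_i mu(E ∩ E_i) * h_i, componentwise."""
    out = [0.0, 0.0]
    for Ei, h in partition:
        w = sum(mu[p] for p in (E & Ei))
        out[0] += w * h[0]
        out[1] += w * h[1]
    return out

I_all = bochner(set(mu))           # integral over the whole Omega

# Fubini on a product of two finite spaces, for g(s, t) = s + 2t.
nu1 = {1: 0.5, 2: 1.5}
nu2 = {10: 1.0, 20: 0.25}
double = sum(nu1[s] * nu2[t] * (s + 2 * t) for s in nu1 for t in nu2)
iterated = sum(nu1[s] * sum(nu2[t] * (s + 2 * t) for t in nu2) for s in nu1)
```

The double sum over atoms of the product measure and the iterated sum agree exactly, as Theorem A.7 predicts.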
The set of all continuous (resp. continuously Fréchet differentiable) Example A.2. Let 𝑎 ∈ 𝐺. The Dirac 𝛿𝑎 -function (in 𝐺) is defined as
functions from 0 to  is denoted by 𝐶(0 ; ) (resp. 𝐶 1 (0 ; )). When
 = R, we simply denote it by 𝐶(0 ) (resp. 𝐶 1 (0 )). 𝛿𝑎 (𝜑) = 𝜑(𝑎), ∀ 𝜑 ∈ (𝐺).

It is easy to show that 𝛿𝑎 ∈ ′ (𝐺). Also, one can show that 𝛿𝑎 cannot
A.6. Generalized functions and Sobolev spaces
be identified as a ‘‘usual’’ function in 𝐺.

We begin with some notations. For any 𝛼 = (𝛼1 , 𝛼2 , ⋯, 𝛼𝑛 ) (with Let 𝐹 ∈ ′ (𝐺). The support of 𝐹 is defined as follows:

𝑛
𝛼1 𝛼2 𝛼𝑛
𝛼𝑗 ∈ N ∪ {0}, 𝑗 = 1, ⋯ , 𝑛), put |𝛼| = |𝛼𝑗 | and 𝜕 𝛼 ≡ 𝜕 𝛼1 𝜕 𝛼2 ⋯ 𝜕 𝛼𝑛 . { | }
𝜕𝑥1 𝜕𝑥2 𝜕𝑥𝑛 supp 𝐹 = 𝑥 ∈ 𝐺 | 𝐹 ≠ 0 in any neighborhood of 𝑥 . (A.17)
𝑗=1 |
Let 𝐺 ⊂ R𝑛
be a bounded domain with the boundary 𝛤 , and write 𝐺
( ) The generalized derivative of 𝐹 w.r.t. 𝑥𝑘 (for some 𝑘 ∈ {1, 2, … , 𝑛}), 𝜕𝐹
for its closure. For any 𝑥0 ∈ R𝑛 and 𝑟 > 0, denote by 𝐵 𝑥0 , 𝑟 the open 𝜕𝑥𝑘
0
ball with radius 𝑟, centered at 𝑥 . is defined by
( ) ( 𝜕𝜑 )
𝜕𝐹 𝛥
(𝜑) = −𝐹 , ∀ 𝜑 ∈ ′ (𝐺).
Definition A.21. We say the boundary 𝛤 (of 𝐺) is 𝐶 𝑘 (for some 𝜕𝑥𝑘 𝜕𝑥𝑘
𝑘 ∈ N) if for each point 𝑥0 ∈ 𝛤 there exist 𝑟 > 0 and a 𝐶 𝑘 function 𝜕𝐹
𝛾 ∶ R𝑛−1 → R such that, upon relabeling and reorienting the coordinates One can easily show that 𝜕𝑥𝑘
∈ ′ (𝐺).
axes if necessary, Let 𝑝 ∈ [1, ∞) and 𝛤 be 𝐶𝑚.
For 𝑓 ∈ 𝐶 𝑚 (𝐺), define
( ) { ( ) ( )} ( ∑ )1∕𝑝
𝐺 ∩ 𝐵 𝑥0 , 𝑟 = 𝑥 ∈ 𝐵 𝑥0 , 𝑟 ∣ 𝑥𝑛 > 𝛾 𝑥1 , … , 𝑥𝑛−1 . 𝛥
|𝑓 |𝑊 𝑚,𝑝 (𝐺) = |𝜕 𝛼 𝑓 |𝑝 𝑑𝑥 , (A.18)
∫𝐺
|𝛼|≤𝑚
We say that 𝛤 is 𝐶 ∞ if 𝛤 is 𝐶 𝑘 for any 𝑘 = 1, 2, ….
which is a norm on 𝐶 𝑚 (𝐺). Write 𝑊 𝑚,𝑝 (𝐺) for the completion of 𝐶 𝑚 (𝐺)
If 𝛤 is 𝐶¹, then along 𝛤 one can define the unit outward normal vector field (of 𝐺):

𝜈 = (𝜈₁, … , 𝜈ₙ). (A.15)

The unit outward normal vector at each point 𝑥 ∈ 𝛤 is hence 𝜈(𝑥) = (𝜈₁(𝑥), … , 𝜈ₙ(𝑥)). For any 𝑢 ∈ 𝐶¹(𝐺̄), we call 𝜕𝑢/𝜕𝜈 ≜ ∇𝑢 ⋅ 𝜈 the outward normal derivative of 𝑢.

Let 𝐺 ⊂ ℝⁿ be a bounded domain and write 𝐺̄ for its closure. For any 𝑚 ∈ ℕ ∪ {0}, denote by 𝐶^𝑚(𝐺) and 𝐶^𝑚(𝐺̄) the sets of all 𝑚 times continuously differentiable functions on 𝐺 and 𝐺̄, respectively, and by 𝐶₀^𝑚(𝐺) the set of all functions 𝑓 ∈ 𝐶^𝑚(𝐺) such that supp 𝑓 ≜ the closure of {𝑥 ∈ 𝐺 ∣ 𝑓(𝑥) ≠ 0} is compact in 𝐺. 𝐶^𝑚(𝐺̄) is a Banach space with the norm

|𝑓|_{𝐶^𝑚(𝐺̄)} ≜ ∑_{|𝛼|≤𝑚} max_{𝑥∈𝐺̄} |𝜕^𝛼𝑓(𝑥)|, ∀ 𝑓 ∈ 𝐶^𝑚(𝐺̄). (A.16)

In the sequel, we shall simply write 𝐶(𝐺̄) for 𝐶⁰(𝐺̄). Denote by 𝐶^∞(𝐺) the set of infinitely differentiable functions on 𝐺. Put

𝐶₀^∞(𝐺) = {𝑓 ∈ 𝐶^∞(𝐺) ∣ supp 𝑓 is compact in 𝐺}.

Definition A.22. Let {𝑓ₖ}_{𝑘=1}^∞ be a sequence in 𝐶₀^∞(𝐺). We say that 𝑓ₖ → 0 in 𝐶₀^∞(𝐺) as 𝑘 → ∞ if
(1) for some compact subset 𝐾 of 𝐺, supp 𝑓ₖ ⊂ 𝐾, ∀ 𝑘 ∈ ℕ;
(2) for every multi-index 𝛼 = (𝛼₁, … , 𝛼ₙ), sup_{𝑥∈𝐾} |𝜕^𝛼𝑓ₖ(𝑥)| → 0 as 𝑘 → ∞.
Generally, for 𝑓 ∈ 𝐶₀^∞(𝐺), we say that 𝑓ₖ → 𝑓 in 𝐶₀^∞(𝐺) as 𝑘 → ∞ if 𝑓ₖ − 𝑓 → 0 in 𝐶₀^∞(𝐺) as 𝑘 → ∞.

Denote by 𝒟(𝐺) the linear space 𝐶₀^∞(𝐺) equipped with the sequential convergence given in Definition A.22. A linear functional 𝐹 ∶ 𝒟(𝐺) → ℂ is said to be a 𝒟′(𝐺) generalized function, in symbol 𝐹 ∈ 𝒟′(𝐺), if 𝐹 is continuous on 𝒟(𝐺), i.e., for any sequence {𝑓ₖ}_{𝑘=1}^∞ ⊂ 𝐶₀^∞(𝐺) with 𝑓ₖ → 0 in 𝐶₀^∞(𝐺) as 𝑘 → ∞, one has lim_{𝑘→∞} 𝐹(𝑓ₖ) = 0.

Example A.1. Any function 𝑓 ∈ 𝐿¹(𝐺) can be identified with a 𝒟′(𝐺) generalized function as follows:

𝐹_𝑓(𝜑) = ∫_𝐺 𝑓(𝑥)𝜑(𝑥)𝑑𝑥, ∀ 𝜑 ∈ 𝒟(𝐺).

Particularly, 𝑊^{0,𝑝}(𝐺) is denoted by 𝐿^𝑝(𝐺). Similarly, the completion of 𝐶₀^𝑚(𝐺) w.r.t. the norm (A.18) is denoted by 𝑊₀^{𝑚,𝑝}(𝐺). For 𝑝 = 2, we also denote 𝑊^{𝑚,2}(𝐺) by 𝐻^𝑚(𝐺) and 𝑊₀^{𝑚,2}(𝐺) by 𝐻₀^𝑚(𝐺). Both 𝐻^𝑚(𝐺) and 𝐻₀^𝑚(𝐺) are Hilbert spaces w.r.t. the inner product

⟨𝑓, 𝑔⟩_{𝐻^𝑚(𝐺)} ≜ ∑_{|𝛼|≤𝑚} ∫_𝐺 𝜕^𝛼𝑓 𝜕^𝛼𝑔 𝑑𝑥, ∀ 𝑓, 𝑔 ∈ 𝐻^𝑚(𝐺).

It can be proved that a function 𝑦 ∈ 𝑊^{𝑚,𝑝}(𝐺) if and only if there exist functions 𝑓_𝛼 ∈ 𝐿^𝑝(𝐺), |𝛼| ≤ 𝑚, such that

∫_𝐺 𝑦 𝜕^𝛼𝜑 𝑑𝑥 = (−1)^{|𝛼|} ∫_𝐺 𝑓_𝛼 𝜑 𝑑𝑥, ∀ 𝜑 ∈ 𝐶₀^∞(𝐺).

The above function 𝑓_𝛼 is referred to as the 𝛼-th generalized derivative of 𝑦. Denote by 𝑊^{𝑚,∞}(𝐺) the Banach space of all 𝒟′(𝐺) generalized functions 𝑦 whose 𝛼-th generalized derivatives 𝜕^𝛼𝑦 belong to 𝐿^∞(𝐺) for all |𝛼| ≤ 𝑚, with the following norm:

|𝑦|_{𝑊^{𝑚,∞}(𝐺)} ≜ ∑_{|𝛼|≤𝑚} |𝜕^𝛼𝑦|_{𝐿^∞(𝐺)}.

Noting that 𝐶₀^∞(𝐺) ⊂ 𝑊₀^{𝑚,𝑝}(𝐺), people introduce the following space:

Definition A.23. Each element of (𝑊₀^{𝑚,𝑝}(𝐺))′, the dual space of 𝑊₀^{𝑚,𝑝}(𝐺), determines a 𝒟′(𝐺) generalized function. All of these generalized functions form a subspace of 𝒟′(𝐺), which we denote by 𝑊^{−𝑚,𝑞}(𝐺), where 𝑞 = 𝑝/(𝑝−1). Especially, we denote 𝐻^{−𝑚}(𝐺) = 𝑊^{−𝑚,2}(𝐺).

𝑊^{−𝑚,𝑞}(𝐺) is a Banach space with the canonical norm, i.e.,

|𝐹|_{𝑊^{−𝑚,𝑞}(𝐺)} = sup_{𝜑∈𝑊₀^{𝑚,𝑝}(𝐺)⧵{0}} 𝐹(𝜑)/|𝜑|_{𝑊₀^{𝑚,𝑝}(𝐺)}, ∀ 𝐹 ∈ 𝑊^{−𝑚,𝑞}(𝐺).

Especially, 𝐻^{−𝑚}(𝐺) is a Hilbert space.

Remark A.2. Since 𝐻₀^𝑚(𝐺) is a Hilbert space, by the Riesz representation theorem (i.e., Theorem A.2), there exists an isomorphism between 𝐻₀^𝑚(𝐺) and 𝐻^{−𝑚}(𝐺) ≡ (𝐻₀^𝑚(𝐺))′. But this does not mean that these two spaces are the same. Indeed, the elements of 𝐻₀^𝑚(𝐺) are functions with certain regularity, while an element of 𝐻^{−𝑚}(𝐺) need not even be a ‘‘usual’’ function.

All the above spaces 𝑊^{𝑚,𝑝}(𝐺), 𝑊^{𝑚,∞}(𝐺), 𝑊₀^{𝑚,𝑝}(𝐺) and 𝑊^{−𝑚,𝑞}(𝐺) are called Sobolev spaces.
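The defining identity of the generalized derivative can be checked numerically. The sketch below is our own illustration (not from the original text): it takes 𝐺 = (−1, 1) and 𝑦(𝑥) = |𝑥|, whose first generalized derivative is sign(𝑥), together with a smooth, compactly supported test function 𝜑, and verifies ∫_𝐺 𝑦 𝜑′ 𝑑𝑥 = −∫_𝐺 sign(𝑥) 𝜑(𝑥) 𝑑𝑥 by quadrature. All function names and the particular choice of 𝜑 are ours.

```python
import math

def simpson(g, a, b, n=1000):
    # Composite Simpson rule on [a, b] with n (even) subintervals.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def bump(x):
    # A C^infinity function supported in [-1, 1].
    t = 1.0 - x * x
    return math.exp(-1.0 / t) if t > 0 else 0.0

def dbump(x):
    # Derivative of the bump: bump'(x) = -2x / (1 - x^2)^2 * bump(x).
    t = 1.0 - x * x
    return bump(x) * (-2.0 * x) / (t * t) if t > 0 else 0.0

# Test function phi = (x + 0.3) * bump(x), smooth with compact support in G.
phi = lambda x: (x + 0.3) * bump(x)
dphi = lambda x: bump(x) + (x + 0.3) * dbump(x)

# Left-hand side: int_G |x| * phi'(x) dx (split at the kink x = 0).
lhs = simpson(lambda x: -x * dphi(x), -1.0, 0.0) + simpson(lambda x: x * dphi(x), 0.0, 1.0)
# Right-hand side: -int_G sign(x) * phi(x) dx.
rhs = simpson(phi, -1.0, 0.0) - simpson(phi, 0.0, 1.0)
print(lhs, rhs)  # the two values agree
```

The split of each integral at 𝑥 = 0 keeps every integrand smooth on its panel, so the quadrature error is negligible and the two sides agree to high accuracy.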
A.7. Operator semigroups

Recall that 𝐻 is a Hilbert space.

Definition A.24. A 𝐶₀-semigroup on 𝐻 is a family of bounded linear operators {𝑆(𝑡)}_{𝑡≥0} (on 𝐻) such that
(1) 𝑆(𝑡)𝑆(𝑠) = 𝑆(𝑡 + 𝑠) for any 𝑠, 𝑡 ≥ 0;
(2) 𝑆(0) = 𝐼;
(3) for any 𝑦 ∈ 𝐻, lim_{𝑡→0⁺} 𝑆(𝑡)𝑦 = 𝑦 in 𝐻.

For any 𝐶₀-semigroup {𝑆(𝑡)}_{𝑡≥0} on 𝐻, define a linear operator 𝐴 on 𝐻 as follows:

𝐷(𝐴) ≜ {𝜂 ∈ 𝐻 ∣ lim_{𝑡→0⁺} (𝑆(𝑡)𝜂 − 𝜂)/𝑡 exists in 𝐻},
𝐴𝜂 = lim_{𝑡→0⁺} (𝑆(𝑡)𝜂 − 𝜂)/𝑡, ∀ 𝜂 ∈ 𝐷(𝐴). (A.19)

We call the above operator 𝐴 the infinitesimal generator of {𝑆(𝑡)}_{𝑡≥0}, or say that 𝐴 generates the 𝐶₀-semigroup {𝑆(𝑡)}_{𝑡≥0}. One can show that 𝐴 is a densely defined closed operator, and that 𝑆(𝑡)𝐷(𝐴) ⊂ 𝐷(𝐴) for all 𝑡 ≥ 0. The following results hold:

Proposition A.6. Let {𝑆(𝑡)}_{𝑡≥0} be a 𝐶₀-semigroup on 𝐻. Then
(1) There exist positive constants 𝑀 and 𝛼 such that |𝑆(𝑡)|_{ℒ(𝐻)} ≤ 𝑀𝑒^{𝛼𝑡}, ∀ 𝑡 ≥ 0;
(2) (𝑑/𝑑𝑡)𝑆(𝑡)𝜂 = 𝐴𝑆(𝑡)𝜂 = 𝑆(𝑡)𝐴𝜂 for every 𝜂 ∈ 𝐷(𝐴) and 𝑡 ≥ 0;
(3) For every 𝓁 ∈ 𝐷(𝐴∗) and 𝜂 ∈ 𝐻, the map 𝑡 ↦ ⟨𝓁, 𝑆(𝑡)𝜂⟩_𝐻 is differentiable and (𝑑/𝑑𝑡)⟨𝓁, 𝑆(𝑡)𝜂⟩_𝐻 = ⟨𝐴∗𝓁, 𝑆(𝑡)𝜂⟩_𝐻;
(4) If a function 𝑦 ∈ 𝐶¹([0, +∞); 𝐷(𝐴)) satisfies 𝑦_𝑡(𝑡) = 𝐴𝑦(𝑡) for every 𝑡 ∈ [0, +∞), then 𝑦(𝑡) = 𝑆(𝑡)𝑦(0);
(5) {𝑆(𝑡)∗}_{𝑡≥0} is a 𝐶₀-semigroup on 𝐻 with the generator 𝐴∗.

Stimulated by conclusions (2) and (4) in Proposition A.6, it is natural to consider the following evolution equation:

𝑦_𝑡(𝑡) = 𝐴𝑦(𝑡), ∀ 𝑡 > 0; 𝑦(0) = 𝑦₀, (A.20)

and for any 𝑦₀ ∈ 𝐻, we call 𝑦(⋅) = 𝑆(⋅)𝑦₀ the mild solution to (A.20).

The following result is quite useful to show that some specific unbounded linear operator under consideration generates a 𝐶₀-semigroup.

Theorem A.9. Let 𝐴 ∶ 𝐷(𝐴) ⊂ 𝐻 → 𝐻 be a linear, densely defined, closed operator. If

⟨𝐴𝜂, 𝜂⟩_𝐻 ≤ 0, ∀ 𝜂 ∈ 𝐷(𝐴)

and

⟨𝐴∗𝜂′, 𝜂′⟩_𝐻 ≤ 0, ∀ 𝜂′ ∈ 𝐷(𝐴∗),

then 𝐴 generates a 𝐶₀-semigroup on 𝐻.

Example A.3. Let 𝐺 ⊂ ℝⁿ be a bounded domain with a 𝐶² boundary 𝛤. Take 𝐻 = 𝐿²(𝐺), and define an unbounded linear operator 𝐴 on 𝐻 as follows:

𝐷(𝐴) = 𝐻²(𝐺) ∩ 𝐻₀¹(𝐺),
𝐴𝑓 = 𝛥𝑓 = ∑_{𝑘=1}^𝑛 𝜕²𝑓/𝜕𝑥ₖ², ∀ 𝑓 ∈ 𝐷(𝐴).

Then, by Theorem A.9, 𝐴 generates a 𝐶₀-semigroup {𝑆(𝑡)}_{𝑡≥0} on 𝐿²(𝐺). This semigroup arises in the study of the (homogeneous) heat equation:

𝑦_𝑡 − 𝛥𝑦 = 0 in 𝐺 × (0, +∞),
𝑦 = 0 on 𝛤 × (0, +∞), (A.21)
𝑦(0) = 𝑦₀ in 𝐺.

Example A.4. Let 𝐺 be as in Example A.3, 𝐻 = 𝐻₀¹(𝐺) × 𝐿²(𝐺), and define an unbounded linear operator 𝐴 on 𝐻 as follows:

𝐷(𝐴) = {(𝑦, 𝑧)ᵀ ∣ 𝑦 ∈ 𝐻²(𝐺) ∩ 𝐻₀¹(𝐺), 𝑧 ∈ 𝐻₀¹(𝐺)},
𝐴(𝑦, 𝑧)ᵀ = (𝑧, 𝛥𝑦)ᵀ, i.e., 𝐴 = ( 0 𝐼 ; 𝛥 0 ), ∀ (𝑦, 𝑧)ᵀ ∈ 𝐷(𝐴).

By Theorem A.9, both 𝐴 and −𝐴 generate a 𝐶₀-semigroup on 𝐻. This semigroup arises in the study of the following (homogeneous) wave equation:

𝑦_{𝑡𝑡} − 𝛥𝑦 = 0 in 𝐺 × (0, +∞),
𝑦 = 0 on 𝛤 × (0, +∞), (A.22)
𝑦(0) = 𝑦₀, 𝑦_𝑡(0) = 𝑦₁ in 𝐺.

In fact, if we set 𝑧 = 𝑦_𝑡, then (A.22) can be transformed into the following:

𝑤_𝑡 = 𝐴𝑤 in (0, +∞); 𝑤(0) = 𝑤₀, (A.23)

where 𝑤 = (𝑦, 𝑧)ᵀ and 𝑤₀ = (𝑦₀, 𝑦₁)ᵀ.

As an application of 𝐶₀-semigroups, we present some results on the well-posedness of evolution equations. Consider the following semilinear equation:

𝑦̇(𝑡) = 𝐴𝑦(𝑡) + 𝑓(𝑡, 𝑦(𝑡)), 𝑡 ∈ [0, 𝑇]; 𝑦(0) = 𝑦₀, (A.24)

where 𝐴 ∶ 𝐷(𝐴) ⊂ 𝐻 → 𝐻 generates a 𝐶₀-semigroup {𝑆(𝑡)}_{𝑡≥0} on 𝐻 and 𝑓 ∶ [0, 𝑇] × 𝐻 → 𝐻 satisfies the following:
(i) For each 𝜂 ∈ 𝐻, 𝑓(⋅, 𝜂) is strongly measurable.
(ii) There exists a function 𝐿 ∈ 𝐿¹(0, 𝑇) such that

|𝑓(𝑡, 𝜂) − 𝑓(𝑡, 𝜂̂)|_𝐻 ≤ 𝐿(𝑡)|𝜂 − 𝜂̂|_𝐻, ∀ 𝑡 ∈ [0, 𝑇], 𝜂, 𝜂̂ ∈ 𝐻,
|𝑓(𝑡, 0)|_𝐻 ≤ 𝐿(𝑡), ∀ 𝑡 ∈ [0, 𝑇]. (A.25)

Definition A.25. (i) 𝑦 ∈ 𝐶([0, 𝑇]; 𝐻) is called a strong solution to (A.24) if 𝑦(0) = 𝑦₀, 𝑦 is differentiable almost everywhere on [0, 𝑇], 𝑦(𝑡) ∈ 𝐷(𝐴) for a.e. 𝑡 ∈ [0, 𝑇], and the equation in (A.24) is satisfied almost everywhere.
(ii) 𝑦 ∈ 𝐶([0, 𝑇]; 𝐻) is called a weak solution to (A.24) if for any 𝜑 ∈ 𝐷(𝐴∗), ⟨𝑦(⋅), 𝜑⟩_𝐻 is absolutely continuous on [0, 𝑇] and, for any 𝑡 ∈ [0, 𝑇],

⟨𝑦(𝑡), 𝜑⟩_𝐻 = ⟨𝑦₀, 𝜑⟩_𝐻 + ∫₀ᵗ [⟨𝑦(𝑠), 𝐴∗𝜑⟩_𝐻 + ⟨𝑓(𝑠, 𝑦(𝑠)), 𝜑⟩_𝐻] 𝑑𝑠.

(iii) 𝑦 ∈ 𝐶([0, 𝑇]; 𝐻) is called a mild solution to (A.24) if

𝑦(𝑡) = 𝑆(𝑡)𝑦₀ + ∫₀ᵗ 𝑆(𝑡 − 𝑠)𝑓(𝑠, 𝑦(𝑠)) 𝑑𝑠, 𝑡 ∈ [0, 𝑇].

Proposition A.7. 𝑦 ∈ 𝐶([0, 𝑇]; 𝐻) is a mild solution to (A.24) if and only if 𝑦 is a weak solution to (A.24).

Hereafter, we will not distinguish between the mild solution and the weak solution to (A.24). The following result is concerned with the existence and uniqueness of the solution to (A.24).

Proposition A.8. Let (A.25) hold. Then, for any 𝑦₀ ∈ 𝐻, (A.24) admits a unique mild solution 𝑦. Moreover, if we let 𝑦(⋅ ; 𝑦₀) be the solution corresponding to 𝑦₀, and let the 𝐶₀-semigroup {𝑆(𝑡)}_{𝑡≥0} satisfy

|𝑆(𝑡)|_{ℒ(𝐻)} ≤ 𝑀𝑒^{𝛼𝑡}, 𝑡 ≥ 0,

for some 𝑀 ≥ 1 and 𝛼 ∈ ℝ, then, for all 𝑡 ≥ 0,

|𝑦(𝑡; 𝑦₀)|_𝐻 ≤ 𝑀𝑒^{𝛼𝑡+𝑀∫₀ᵗ𝐿(𝑠)𝑑𝑠}(1 + |𝑦₀|_𝐻),
|𝑦(𝑡; 𝑦₀) − 𝑦(𝑡; 𝑦̂₀)|_𝐻 ≤ 𝑀𝑒^{𝛼𝑡+𝑀∫₀ᵗ𝐿(𝑠)𝑑𝑠}|𝑦₀ − 𝑦̂₀|_𝐻.
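In Example A.3 with 𝑛 = 1 and 𝐺 = (0, 𝜋), the heat semigroup acts diagonally on the Fourier sine basis: the 𝑘-th sine coefficient of 𝑆(𝑡)𝑦₀ is 𝑒^{−𝑘²𝑡} times that of 𝑦₀. The sketch below (our own illustration built on this standard spectral representation; all names are ours) checks the semigroup property 𝑆(𝑡)𝑆(𝑠) = 𝑆(𝑡 + 𝑠) and the contraction of the 𝐿² norm on a truncated series.

```python
import math

def S(t, coeffs):
    # Heat semigroup on L^2(0, pi) in the sine basis: coefficient k of
    # S(t)y0 equals exp(-k^2 t) times coefficient k of y0 (k = 1, 2, ...).
    return [math.exp(-(k + 1) ** 2 * t) * c for k, c in enumerate(coeffs)]

y0 = [1.0, -0.5, 0.25, 2.0]     # sine coefficients of some y0 in L^2(0, pi)

lhs = S(0.1, S(0.2, y0))         # S(0.1) S(0.2) y0
rhs = S(0.3, y0)                 # S(0.1 + 0.2) y0
err = max(abs(a - b) for a, b in zip(lhs, rhs))

norm = lambda c: math.sqrt(sum(x * x for x in c))
decay = norm(S(1.0, y0)) <= norm(y0)   # contraction, as in Theorem A.9
print(err, decay)
```

Since each mode is simply multiplied by 𝑒^{−𝑘²𝑡} < 1, both properties hold up to floating-point rounding.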
The solution 𝑦 to (A.24) is not necessarily differentiable, because 𝐴 is unbounded. This is inconvenient for many applications. The following result sometimes helps us to overcome such an inconvenience.

Proposition A.9. Let 𝑦 be the mild solution to (A.24) and let 𝑦_𝜆 be the solution to the following:

𝑦_𝜆(𝑡) = 𝑆_𝜆(𝑡)𝑦₀ + ∫₀ᵗ 𝑆_𝜆(𝑡 − 𝑠)𝑓(𝑠, 𝑦_𝜆(𝑠)) 𝑑𝑠, 𝑡 ∈ [0, 𝑇],

where {𝑆_𝜆(𝑡)}_{𝑡≥0} is the 𝐶₀-semigroup generated by 𝐴_𝜆 = 𝜆𝐴(𝜆𝐼 − 𝐴)⁻¹. Then,

lim_{𝜆→∞} sup_{𝑡∈[0,𝑇]} |𝑦_𝜆(𝑡) − 𝑦(𝑡)|_𝐻 = 0.

The following result is a Newton–Leibniz type formula.

Proposition A.10. Let 𝜑 ∈ 𝐶¹([0, 𝑇] × 𝐻) with 𝐴∗𝜑_𝑥 ∈ 𝐶([0, 𝑇] × 𝐻). Let 𝑦 be the mild solution to (A.24). Then, for 0 ≤ 𝑠 < 𝑡 ≤ 𝑇,

𝜑(𝑡, 𝑦(𝑡)) = 𝜑(𝑠, 𝑦(𝑠)) + ∫ₛᵗ [𝜑_𝑡(𝑟, 𝑦(𝑟)) + ⟨𝐴∗𝜑_𝑥(𝑟, 𝑦(𝑟)), 𝑦(𝑟)⟩ + ⟨𝜑_𝑥(𝑟, 𝑦(𝑟)), 𝑓(𝑟, 𝑦(𝑟))⟩] 𝑑𝑟.

Appendix B. Some preliminary results on probability theory and stochastic analysis

In this section, we recall some basic knowledge of Probability Theory and Stochastic Analysis which will be used in this paper. More details can be found in Lü and Zhang (0000b) and Da Prato and Zabczyk (1992).

B.1. Probability, random variables and expectation

Definition B.1. A probability space is a measure space (𝛺, ℱ, P) for which P(𝛺) = 1. In this case, P is called a probability measure, or simply a probability.

In the rest of this section, we fix a probability space (𝛺, ℱ, P). Any 𝜔 ∈ 𝛺 is called a sample point; any 𝐴 ∈ ℱ is called an event, and P(𝐴) represents the probability of the event 𝐴. If an event 𝐴 ∈ ℱ satisfies P(𝐴) = 1, then we may alternatively say that 𝐴 holds, P-a.s., or simply that 𝐴 holds a.s. (if the probability P is clear from the context).

Recall that 𝒳 is a Banach space. Denote by ℬ(𝒳) the Borel 𝜎-algebra of 𝒳. Each 𝒳-valued, strongly measurable function 𝑓 ∶ 𝛺 → 𝒳 is called an (𝒳-valued) random variable. Clearly, 𝑓⁻¹(ℬ(𝒳)) is a sub-𝜎-field of ℱ, which is called the 𝜎-field generated by 𝑓 and denoted by 𝜎(𝑓). For a given index set 𝛬 and a family of 𝒳-valued random variables {𝑓_𝜆}_{𝜆∈𝛬} (defined on (𝛺, ℱ)), we denote by 𝜎(𝑓_𝜆; 𝜆 ∈ 𝛬) the 𝜎-field generated by ∪_{𝜆∈𝛬} 𝜎(𝑓_𝜆).

If 𝑓 is Bochner integrable w.r.t. the measure P, then we denote the integral by E𝑓 and call it the mean or mathematical expectation of 𝑓.

Next, we introduce an important notion, independence, which distinguishes probability theory from the usual measure theory.

Definition B.2. Let (𝛺, ℱ, P) be a probability space.
(1) We say that two events 𝐴 and 𝐵 are independent if P(𝐴 ∩ 𝐵) = P(𝐴)P(𝐵);
(2) Let ℱ₁ and ℱ₂ be two subsets of ℱ. We say that ℱ₁ and ℱ₂ are independent if P(𝐴 ∩ 𝐵) = P(𝐴)P(𝐵) for any 𝐴 ∈ ℱ₁ and 𝐵 ∈ ℱ₂;
(3) Let 𝑓 and 𝑔 be two random variables defined on (𝛺, ℱ) (with possibly different ranges), and let 𝒢 ⊂ ℱ. We say that 𝑓 and 𝑔 are independent if 𝜎(𝑓) and 𝜎(𝑔) are independent, and that 𝑓 and 𝒢 are independent if 𝜎(𝑓) and 𝒢 are independent;
(4) Let {𝐴_𝜆}_{𝜆∈𝛬} ⊂ ℱ. We say that {𝐴_𝜆}_{𝜆∈𝛬} is a class of mutually independent events if

P(𝐴_{𝜆₁} ∩ ⋯ ∩ 𝐴_{𝜆ₖ}) = P(𝐴_{𝜆₁}) ⋯ P(𝐴_{𝜆ₖ})

for any 𝑘 ∈ ℕ and 𝜆₁, … , 𝜆ₖ ∈ 𝛬 satisfying 𝜆ᵢ ≠ 𝜆ⱼ whenever 𝑖 ≠ 𝑗, 𝑖, 𝑗 = 1, … , 𝑘.

It is easy to show the following result.

Proposition B.1. Let 𝐻 be a Hilbert space. If 𝑓 and 𝑔 are independent, Bochner integrable random variables on (𝛺, ℱ, P), valued in 𝐻, then (𝑓, 𝑔)_𝐻 is integrable, and

E(𝑓, 𝑔)_𝐻 = (E𝑓, E𝑔)_𝐻. (B.1)

B.2. Distribution, density and Gaussian random variables

Let (𝛺, ℱ, P) be a probability space, and let 𝑋 ∶ 𝛺 → 𝒳 be a strongly measurable random variable. Then, 𝑋 induces a probability measure P_𝑋 on (𝒳, ℬ(𝒳)) via

P_𝑋(𝐴) ≜ P(𝑋⁻¹(𝐴)), ∀ 𝐴 ∈ ℬ(𝒳). (B.2)

We call P_𝑋 the distribution of 𝑋. If 𝑋 is Bochner integrable w.r.t. P, then using (A.13) in Theorem A.8, we see that

E𝑋 = ∫_𝒳 𝜂 𝑑P_𝑋(𝜂).

In the case 𝒳 = ℝᵐ (for some 𝑚 ∈ ℕ), P_𝑋 can be uniquely determined by the following function:

𝐹(𝑥) ≜ 𝐹(𝑥₁, … , 𝑥ₘ) = P{𝑋ᵢ ≤ 𝑥ᵢ, 1 ≤ 𝑖 ≤ 𝑚}, (B.3)

where 𝑥 = (𝑥₁, … , 𝑥ₘ) and 𝑋 = (𝑋₁, … , 𝑋ₘ). We call 𝐹(𝑥) the distribution function of 𝑋. If P_𝑋 is absolutely continuous w.r.t. the Lebesgue measure in ℝᵐ, then there exists a (nonnegative) function 𝑓 ∈ 𝐿¹(ℝᵐ) such that

P_𝑋(𝐴) = ∫_𝐴 𝑓(𝑥) 𝑑𝑥, ∀ 𝐴 ∈ ℬ(ℝᵐ).

Particularly,

𝐹(𝑥) = ∫_{−∞}^{𝑥₁} ⋯ ∫_{−∞}^{𝑥ₘ} 𝑓(𝜉₁, … , 𝜉ₘ) 𝑑𝜉₁ ⋯ 𝑑𝜉ₘ.

The function 𝑓(⋅) is called the density of 𝑋. As a special case, if

𝑓(𝑥) = [(2𝜋)ᵐ det 𝑄]^{−1/2} exp{−½(𝑥 − 𝜆)𝑄⁻¹(𝑥 − 𝜆)ᵀ}, 𝑥 ∈ ℝᵐ,

for some 𝜆 ∈ ℝᵐ and 𝑄 ∈ ℝ^{𝑚×𝑚} with 𝑄ᵀ = 𝑄 > 0, then we say that 𝑋 has a normal distribution with parameters (𝜆, 𝑄), denoted by 𝑋 ∼ 𝒩(𝜆, 𝑄). We call 𝑋 a Gaussian random variable (valued in ℝᵐ) if 𝑋 has a normal distribution or 𝑋 is constant.

Remark B.1. Gaussian random variables appear in many physical models. Indeed, under some mild conditions, the (suitably normalized) mean of many independent and identically distributed random variables is close to a Gaussian random variable (we refer to Bryc, 1995 for more details).

B.3. Conditional expectation

In this subsection, we fix a probability space (𝛺, ℱ, P) and a function 𝑓 ∈ 𝐿¹(𝛺; 𝐻) ≜ 𝐿¹(𝛺, ℱ, P; 𝐻).

Definition B.3. Let 𝐵 ∈ ℱ with P(𝐵) > 0. For any event 𝐴 ∈ ℱ, put

P(𝐴 ∣ 𝐵) = P(𝐴 ∩ 𝐵)/P(𝐵).

Then P(⋅ ∣ 𝐵) is a probability measure on (𝛺, ℱ), called the conditional probability given the event 𝐵, and denoted by P_𝐵(⋅). For any given 𝐴 ∈ ℱ, P(𝐴 ∣ 𝐵) is called the conditional probability of 𝐴 given 𝐵. The conditional expectation of 𝑓 given the event 𝐵 is defined by

E(𝑓 ∣ 𝐵) = ∫_𝛺 𝑓 𝑑P_𝐵 = (1/P(𝐵)) ∫_𝐵 𝑓 𝑑P.
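The relation between the density and the distribution function in Section B.2 can be checked numerically in the scalar case 𝑚 = 1. The sketch below is our own illustration for the standard normal 𝒩(0, 1): it integrates the density by quadrature and compares the result with the closed form 𝐹(𝑥) = ½(1 + erf(𝑥/√2)); all names are ours.

```python
import math

def density(x, lam=0.0, q=1.0):
    # Density of N(lam, q) on R (the case m = 1 of the Gaussian formula above).
    return math.exp(-0.5 * (x - lam) ** 2 / q) / math.sqrt(2 * math.pi * q)

def F(x, n=4000):
    # Distribution function F(x) = int_{-infinity}^x density, with the
    # lower limit truncated at -10 (the neglected tail mass is ~1e-23),
    # computed by the composite Simpson rule.
    a = -10.0
    h = (x - a) / n
    s = density(a) + density(x)
    for i in range(1, n):
        s += density(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

numeric = F(1.0)
closed_form = 0.5 * (1.0 + math.erf(1.0 / math.sqrt(2.0)))
print(numeric, closed_form)  # the two values agree closely
```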
Clearly, the conditional expectation of 𝑓 given the event 𝐵 represents the average value of 𝑓 on 𝐵.

In many problems, it is not enough to consider the conditional expectation given by one event. Instead, it is quite useful to define the conditional expectation to be a suitable random variable. For example, when considering the two conditional expectations E(𝑓 ∣ 𝐵) and E(𝑓 ∣ 𝐵ᶜ) simultaneously, we simply define it as the function E(𝑓 ∣ 𝐵)𝜒_𝐵(𝜔) + E(𝑓 ∣ 𝐵ᶜ)𝜒_{𝐵ᶜ}(𝜔), rather than regarding it as two numbers. Before considering the general setting, we begin with the following special case.

Definition B.4. Let {𝐵ᵢ}_{𝑖=1}^∞ ⊂ ℱ be a sequence of mutually disjoint sets in 𝛺 such that P(𝐵ᵢ) > 0 for all 𝑖 = 1, 2, … and 𝛺 = ⋃_{𝑖=1}^∞ 𝐵ᵢ. Put 𝒢 = 𝜎{𝐵₁, 𝐵₂, …}. Then the following 𝒢-measurable function

E(𝑓 ∣ 𝒢)(𝜔) ≜ ∑_{𝑘=1}^∞ E(𝑓 ∣ 𝐵ₖ)𝜒_{𝐵ₖ}(𝜔)

is called the conditional expectation of 𝑓 w.r.t. the 𝜎-field 𝒢.

Clearly, when 𝒢 = 𝜎{𝐵₁, 𝐵₂, …}, the conditional expectation E(𝑓 ∣ 𝒢) of 𝑓 takes its average value on every 𝐵ₖ. Also, it is easy to check that

∫_{𝐵ₖ} E(𝑓 ∣ 𝒢) 𝑑P = ∫_{𝐵ₖ} 𝑓 𝑑P, 𝑘 = 1, 2, … . (B.4)

Generally, we have the following result:

Theorem B.1. Let 𝒢 be a given sub-𝜎-field of ℱ. There is a function in 𝐿¹(𝛺, 𝒢, P; 𝐻), denoted by E(𝑓 ∣ 𝒢), such that

∫_𝐵 E(𝑓 ∣ 𝒢) 𝑑P = ∫_𝐵 𝑓 𝑑P, ∀ 𝐵 ∈ 𝒢. (B.5)

The function E(𝑓 ∣ 𝒢) is called the conditional expectation of 𝑓 w.r.t. the 𝜎-field 𝒢.

We collect some basic properties of the conditional expectation as follows.

Theorem B.2. Let 𝒢 be a sub-𝜎-field of ℱ. It holds that:
(1) The map E(⋅ ∣ 𝒢) ∶ 𝐿¹(𝛺; 𝐻) → 𝐿¹(𝛺; 𝐻) is bounded and linear;
(2) E(𝑎 ∣ 𝒢) = 𝑎, P|_𝒢-a.s., ∀ 𝑎 ∈ 𝐻;
(3) If 𝑓₁, 𝑓₂ ∈ 𝐿¹(𝛺) with 𝑓₁ ≥ 𝑓₂, then E(𝑓₁ ∣ 𝒢) ≥ E(𝑓₂ ∣ 𝒢), P|_𝒢-a.s.;
(4) Let 𝑚, 𝑛, 𝑘 ∈ ℕ, and let 𝑓₁ ∈ 𝐿¹(𝛺; ℝ^{𝑚×𝑛}) be 𝒢-measurable and 𝑓₂ ∈ 𝐿¹(𝛺; ℝ^{𝑛×𝑘}) be such that 𝑓₁𝑓₂ ∈ 𝐿¹(𝛺; ℝ^{𝑚×𝑘}). Then

E(𝑓₁𝑓₂ ∣ 𝒢) = 𝑓₁E(𝑓₂ ∣ 𝒢), P|_𝒢-a.s.

Particularly, E(𝑓₁ ∣ 𝒢) = 𝑓₁, P|_𝒢-a.s. Also, for any 𝑓₃ ∈ 𝐿¹(𝛺; ℝ^{𝑚×𝑛}),

E(E(𝑓₃ ∣ 𝒢)𝑓₂ ∣ 𝒢) = E(𝑓₃ ∣ 𝒢)E(𝑓₂ ∣ 𝒢), P|_𝒢-a.s.;

(5) If 𝑓 is independent of 𝒢, then E(𝑓 ∣ 𝒢) = E𝑓, P|_𝒢-a.s.;
(6) Let 𝒢′ be a sub-𝜎-field of 𝒢. Then

E(E(𝑓 ∣ 𝒢) ∣ 𝒢′) = E(E(𝑓 ∣ 𝒢′) ∣ 𝒢) = E(𝑓 ∣ 𝒢′), P|_{𝒢′}-a.s.;

(7) (Jensen's inequality) Let 𝜙 ∶ 𝐻 → ℝ be a convex function such that 𝜙(𝑓) ∈ 𝐿¹(𝛺). Then

𝜙(E(𝑓 ∣ 𝒢)) ≤ E(𝜙(𝑓) ∣ 𝒢), P|_𝒢-a.s.

Particularly, for any 𝑝 ≥ 1, we have |E(𝑓 ∣ 𝒢)|_𝐻^𝑝 ≤ E(|𝑓|_𝐻^𝑝 ∣ 𝒢), P|_𝒢-a.s., provided that E|𝑓|_𝐻^𝑝 exists.

B.4. Stochastic processes

Definition B.5. Let 𝐼 = [0, 𝑇] with 𝑇 > 0. A family of 𝒳-valued random variables {𝑋(𝑡)}_{𝑡∈𝐼} is called a stochastic process. For any 𝜔 ∈ 𝛺, the map 𝑡 ↦ 𝑋(𝑡, 𝜔) is called a sample path (of 𝑋).

We will interchangeably use {𝑋(𝑡)}_{𝑡∈𝐼}, 𝑋(⋅) or even 𝑋 to denote a (stochastic) process.

Definition B.6. An (𝒳-valued) process 𝑋(⋅) is said to be continuous (resp. càdlàg, i.e., right-continuous with left limits) if there is a P-null set 𝑁 ∈ ℱ such that for any 𝜔 ∈ 𝛺 ⧵ 𝑁, the sample path 𝑋(⋅, 𝜔) is continuous (resp. càdlàg) on 𝐼.

Definition B.7. Two (𝒳-valued) processes 𝑋(⋅) and 𝑋̃(⋅) are said to be stochastically equivalent if P({𝑋(𝑡) = 𝑋̃(𝑡)}) = 1 for any 𝑡 ∈ 𝐼. In this case, each one is said to be a modification of the other.

Definition B.8. We call a family of sub-𝜎-fields {ℱ_𝑡}_{𝑡∈𝐼} of ℱ a filtration if ℱ_{𝑡₁} ⊂ ℱ_{𝑡₂} for all 𝑡₁, 𝑡₂ ∈ 𝐼 with 𝑡₁ ≤ 𝑡₂. For any 𝑡 ∈ 𝐼, we put

ℱ_{𝑡+} ≜ ⋂_{𝑠∈(𝑡,+∞)∩𝐼} ℱ_𝑠, ℱ_{𝑡−} ≜ ⋃_{𝑠∈[0,𝑡)∩𝐼} ℱ_𝑠.

We call {ℱ_𝑡}_{𝑡∈𝐼} right (resp. left) continuous if ℱ_{𝑡+} = ℱ_𝑡 (resp. ℱ_{𝑡−} = ℱ_𝑡).

In the sequel, for simplicity, we write 𝐅 = {ℱ_𝑡}_{𝑡∈𝐼} unless we want to emphasize what ℱ_𝑡 or 𝐼 exactly is. We call (𝛺, ℱ, 𝐅, P) a filtered probability space.

Definition B.9. We say that (𝛺, ℱ, 𝐅, P) satisfies the usual condition if (𝛺, ℱ, P) is complete, ℱ₀ contains all P-null sets in ℱ, and 𝐅 is right continuous.

Unless otherwise stated, we shall always assume that (𝛺, ℱ, 𝐅, P) satisfies the usual condition.

Lemma B.1. For any 𝑟 ≥ 1, 𝜉 ∈ 𝐿^𝑟_{ℱ_𝑇}(𝛺; 𝐻) and 𝑡 ∈ [0, 𝑇), it holds that

lim_{𝑠→𝑡+} | sup_{𝑡≤𝜏≤𝑠} |E(𝜉 ∣ ℱ_𝜏) − E(𝜉 ∣ ℱ_𝑡)|_𝐻 |_{𝐿^𝑟(𝛺)} = 0. (B.6)

Definition B.10. Let 𝑋(⋅) be an 𝒳-valued process.
(1) 𝑋(⋅) is said to be measurable if the map (𝑡, 𝜔) ↦ 𝑋(𝑡, 𝜔) is strongly (ℬ(𝐼) × ℱ)/ℬ(𝒳)-measurable;
(2) 𝑋(⋅) is said to be 𝐅-adapted if it is measurable and, for each 𝑡 ∈ 𝐼, the map 𝜔 ↦ 𝑋(𝑡, 𝜔) is strongly ℱ_𝑡/ℬ(𝒳)-measurable;
(3) 𝑋(⋅) is said to be 𝐅-progressively measurable if for each 𝑡 ∈ 𝐼, the map (𝑠, 𝜔) ↦ 𝑋(𝑠, 𝜔) from [0, 𝑡] × 𝛺 to 𝒳 is strongly (ℬ([0, 𝑡]) × ℱ_𝑡)/ℬ(𝒳)-measurable.

Definition B.11. A set 𝐴 ⊂ 𝐼 × 𝛺 is called progressively measurable w.r.t. 𝐅 if the process 𝜒_𝐴(⋅) is 𝐅-progressively measurable. The class of all progressively measurable sets is a 𝜎-field, called the progressive 𝜎-field w.r.t. 𝐅, denoted below by 𝔽.

Proposition B.2. An (𝒳-valued) process 𝜑 ∶ [0, 𝑇] × 𝛺 → 𝒳 is 𝐅-progressively measurable if and only if it is strongly 𝔽-measurable.

It is clear that if 𝑋(⋅) is 𝐅-progressively measurable, then it is 𝐅-adapted. Conversely, it can be proved that, for any 𝐅-adapted process 𝑋(⋅), there is an 𝐅-progressively measurable process 𝑋̃(⋅) which is stochastically equivalent to 𝑋(⋅). For this reason, in the sequel, by saying that a process 𝑋(⋅) is 𝐅-adapted, we mean that it is 𝐅-progressively measurable.
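The blockwise construction of Definition B.4, the identity (B.4), and the tower property in Theorem B.2(6) can all be verified exactly on a finite probability space. The sketch below is our own toy example (uniform measure on twelve points, with exact rational arithmetic; all names are ours).

```python
from fractions import Fraction

# Omega = {0, ..., 11}, P({w}) = 1/12, f(w) = w, and G generated by a
# partition B1, B2, B3 of Omega.
Omega = list(range(12))
P = Fraction(1, 12)
f = lambda w: Fraction(w)
partition = [set(range(0, 4)), set(range(4, 8)), set(range(8, 12))]

def cond_exp(h, parts):
    # E(h | sigma(parts))(w) = average of h over the block containing w,
    # as in Definition B.4 (uniform measure, so a plain average).
    avg = {frozenset(B): sum(h(w) for w in B) / len(B) for B in parts}
    return lambda w: next(avg[frozenset(B)] for B in parts if w in B)

g = cond_exp(f, partition)

# (B.4): the integrals of f and of E(f | G) over each B_k coincide.
ok_B4 = all(sum(g(w) * P for w in B) == sum(f(w) * P for w in B)
            for B in partition)

# Tower property (Theorem B.2(6)) with the coarser field G' generated by
# C1 = B1 u B2 and C2 = B3: E(E(f | G) | G') = E(f | G').
coarse = [set(range(0, 8)), set(range(8, 12))]
tower_lhs = cond_exp(g, coarse)
tower_rhs = cond_exp(f, coarse)
ok_tower = all(tower_lhs(w) == tower_rhs(w) for w in Omega)
print(ok_B4, ok_tower, g(0))
```

Because the arithmetic is exact, both checks hold with equality rather than up to a tolerance.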
For any 𝑝, 𝑞 ∈ [1, ∞), write

𝐿^𝑝_𝐅(𝛺; 𝐿^𝑞(0, 𝑇; 𝒳)) ≜ {𝜑 ∶ (0, 𝑇) × 𝛺 → 𝒳 ∣ 𝜑(⋅) is 𝐅-adapted and E(∫₀ᵀ |𝜑(𝑡)|_𝒳^𝑞 𝑑𝑡)^{𝑝/𝑞} < ∞},

𝐿^𝑞_𝐅(0, 𝑇; 𝐿^𝑝(𝛺; 𝒳)) ≜ {𝜑 ∶ (0, 𝑇) × 𝛺 → 𝒳 ∣ 𝜑(⋅) is 𝐅-adapted and ∫₀ᵀ (E|𝜑(𝑡)|_𝒳^𝑝)^{𝑞/𝑝} 𝑑𝑡 < ∞}.

Similarly, we can define (for 1 ≤ 𝑝, 𝑞 < ∞) the spaces 𝐿^∞_𝐅(𝛺; 𝐿^𝑞(0, 𝑇; 𝒳)), 𝐿^𝑝_𝐅(𝛺; 𝐿^∞(0, 𝑇; 𝒳)), 𝐿^∞_𝐅(𝛺; 𝐿^∞(0, 𝑇; 𝒳)), 𝐿^∞_𝐅(0, 𝑇; 𝐿^𝑝(𝛺; 𝒳)), 𝐿^𝑞_𝐅(0, 𝑇; 𝐿^∞(𝛺; 𝒳)) and 𝐿^∞_𝐅(0, 𝑇; 𝐿^∞(𝛺; 𝒳)).

All the above spaces are Banach spaces (with the canonical norms). We shall simply denote 𝐿^𝑝_𝐅(𝛺; 𝐿^𝑝(0, 𝑇; 𝒳)) ≡ 𝐿^𝑝_𝐅(0, 𝑇; 𝐿^𝑝(𝛺; 𝒳)) by 𝐿^𝑝_𝐅(0, 𝑇; 𝒳), and further simply denote 𝐿^𝑝_𝐅(0, 𝑇; ℝ) by 𝐿^𝑝_𝐅(0, 𝑇).

For any 𝑝 ∈ [1, ∞), set

𝐿^𝑝_𝐅(𝛺; 𝐶([0, 𝑇]; 𝒳)) ≜ {𝜑 ∶ [0, 𝑇] × 𝛺 → 𝒳 ∣ 𝜑(⋅) is continuous, 𝐅-adapted and E(|𝜑(⋅)|_{𝐶([0,𝑇];𝒳)}^𝑝) < ∞}

and

𝐶_𝐅([0, 𝑇]; 𝐿^𝑝(𝛺; 𝒳)) ≜ {𝜑 ∶ [0, 𝑇] × 𝛺 → 𝒳 ∣ 𝜑(⋅) is 𝐅-adapted and 𝜑(⋅) ∶ [0, 𝑇] → 𝐿^𝑝(𝛺; 𝒳) is continuous}.

One can show that both 𝐿^𝑝_𝐅(𝛺; 𝐶([0, 𝑇]; 𝒳)) and 𝐶_𝐅([0, 𝑇]; 𝐿^𝑝(𝛺; 𝒳)) are Banach spaces with the norms

|𝜑(⋅)|_{𝐿^𝑝_𝐅(𝛺;𝐶([0,𝑇];𝒳))} = (E(|𝜑(⋅)|_{𝐶([0,𝑇];𝒳)}^𝑝))^{1/𝑝}

and

|𝜑(⋅)|_{𝐶_𝐅([0,𝑇];𝐿^𝑝(𝛺;𝒳))} = max_{𝑡∈[0,𝑇]} (E(|𝜑(𝑡)|_𝒳^𝑝))^{1/𝑝},

respectively. Also, we denote by 𝐷_𝐅([0, 𝑇]; 𝐿^𝑝(𝛺; 𝒳)) the Banach space of all processes 𝑋(⋅) such that 𝑋(𝑡) is càdlàg in 𝐿^𝑝(𝛺; 𝒳) w.r.t. 𝑡 ∈ [0, 𝑇] and |E|𝑋(⋅)|_𝒳^𝑝|_{𝐿^∞(0,𝑇)}^{1/𝑝} < ∞, with the canonical norm.

Definition B.12. Let (𝛺, ℱ, 𝐅, P) (with 𝐅 = {ℱ_𝑡}_{𝑡∈𝐼}) be a filtered probability space. A continuous 𝐅-adapted process 𝑊(⋅) is called a 1-dimensional Brownian motion (over 𝐼) if, for all 𝑠, 𝑡 ∈ 𝐼 with 0 ≤ 𝑠 < 𝑡 < 𝑇, 𝑊(𝑡) − 𝑊(𝑠) is independent of ℱ_𝑠 and is normally distributed with mean 0 and variance 𝑡 − 𝑠. In addition, if P(𝑊(0) = 0) = 1, then 𝑊(⋅) is called a 1-dimensional standard Brownian motion.

In the sequel, we fix a 1-dimensional standard Brownian motion on (𝛺, ℱ, 𝐅, P). Write

ℱ_𝑡^𝑊 ≜ 𝜎(𝑊(𝑠); 𝑠 ∈ [0, 𝑡]) ⊂ ℱ_𝑡, ∀ 𝑡 ∈ 𝐼. (B.7)

Generally, the filtration {ℱ_𝑡^𝑊}_{𝑡∈𝐼} is left-continuous, but not necessarily right-continuous. Nevertheless, the augmentation {ℱ̂_𝑡^𝑊}_{𝑡∈𝐼} of {ℱ_𝑡^𝑊}_{𝑡∈𝐼} obtained by adding all P-null sets is continuous, and 𝑊(⋅) is still a Brownian motion on the (augmented) filtered probability space (𝛺, ℱ, {ℱ̂_𝑡^𝑊}_{𝑡∈𝐼}, P). By saying that 𝐅 is the natural filtration generated by 𝑊(⋅), we mean that 𝐅 is generated as in (B.7) with the above augmentation, and hence in this case 𝐅 is continuous.

B.5. Itô's integral and its properties

Let 𝐻 be a Hilbert space. We now define the Itô integral

∫₀ᵀ 𝑓(𝑡) 𝑑𝑊(𝑡) (B.8)

of an 𝐻-valued, 𝐅-adapted stochastic process 𝑓(⋅) (satisfying suitable conditions) w.r.t. the Brownian motion 𝑊(⋅). Note that one cannot define (B.8) to be a Lebesgue–Stieltjes type integral by regarding 𝜔 as a parameter. Indeed, the map 𝑡 (∈ [0, 𝑇]) ↦ 𝑊(𝑡, ⋅) is not of bounded variation, a.s.

Write 𝒮₀ for the set of 𝑓 ∈ 𝐿²_𝐅(0, 𝑇; 𝐻) of the form

𝑓(𝑡, 𝜔) = ∑_{𝑗=0}^𝑛 𝑓_𝑗(𝜔)𝜒_{[𝑡_𝑗,𝑡_{𝑗+1})}(𝑡), (𝑡, 𝜔) ∈ [0, 𝑇] × 𝛺, (B.9)

where 𝑛 ∈ ℕ, 0 = 𝑡₀ < 𝑡₁ < ⋯ < 𝑡_{𝑛+1} = 𝑇, 𝑓_𝑗 is ℱ_{𝑡_𝑗}-measurable, and

sup{|𝑓_𝑗(𝜔)|_𝐻 ∣ 𝑗 ∈ {0, … , 𝑛}, 𝜔 ∈ 𝛺} < ∞.

One can show that 𝒮₀ is dense in 𝐿²_𝐅(0, 𝑇; 𝐻).

Assume that 𝑓 ∈ 𝒮₀ takes the form (B.9). Then we set

𝐼(𝑓)(𝑡, 𝜔) = ∑_{𝑗=0}^𝑛 𝑓_𝑗(𝜔)[𝑊(𝑡 ∧ 𝑡_{𝑗+1}, 𝜔) − 𝑊(𝑡 ∧ 𝑡_𝑗, 𝜔)]. (B.10)

It is easy to show that 𝐼(𝑓)(𝑡) ∈ 𝐿²_{ℱ_𝑡}(𝛺; 𝐻) and that the following Itô isometry holds:

|𝐼(𝑓)(𝑡)|_{𝐿²_{ℱ_𝑡}(𝛺;𝐻)} = |𝑓|_{𝐿²_𝐅(0,𝑡;𝐻)}. (B.11)

Generally, for 𝑓 ∈ 𝐿²_𝐅(0, 𝑇; 𝐻), one can find a sequence {𝑓ₖ}_{𝑘=1}^∞ ⊂ 𝒮₀ such that

lim_{𝑘→∞} |𝑓ₖ − 𝑓|_{𝐿²_𝐅(0,𝑇;𝐻)} = 0.

Since

|𝐼(𝑓ₖ)(𝑡) − 𝐼(𝑓_𝑗)(𝑡)|_{𝐿²_{ℱ_𝑡}(𝛺;𝐻)} = |𝑓ₖ − 𝑓_𝑗|_{𝐿²_𝐅(0,𝑡;𝐻)},

one gets that {𝐼(𝑓ₖ)(𝑡)}_{𝑘=1}^∞ is a Cauchy sequence in 𝐿²_{ℱ_𝑡}(𝛺; 𝐻), and therefore it converges to a unique 𝑋(𝑡) ∈ 𝐿²_{ℱ_𝑡}(𝛺; 𝐻), which is determined uniquely by 𝑓 and is independent of the particular choice of {𝑓ₖ}_{𝑘=1}^∞. We call this element the Itô integral of 𝑓 (w.r.t. the Brownian motion 𝑊(⋅)) on [0, 𝑡] and denote it by 𝑋(𝑡) to emphasize the time variable 𝑡. For 0 ≤ 𝑠 < 𝑡 ≤ 𝑇, we call 𝑋(𝑡) − 𝑋(𝑠) the Itô integral of 𝑓 ∈ 𝐿²_𝐅(0, 𝑇; 𝐻) (w.r.t. the Brownian motion 𝑊(⋅)) on [𝑠, 𝑡]. We shall denote it by ∫ₛᵗ 𝑓(𝜏) 𝑑𝑊(𝜏), or simply by ∫ₛᵗ 𝑓 𝑑𝑊.

For any 𝑝 ∈ [1, ∞), denote by 𝐿^{𝑝,loc}_𝐅(0, 𝑇; 𝐻) the set of 𝐻-valued, 𝐅-adapted stochastic processes 𝑓(⋅) satisfying only ∫₀ᵀ |𝑓(𝑡)|_𝐻^𝑝 𝑑𝑡 < ∞, a.s. One can also define the Itô integral ∫₀ᵗ 𝑓 𝑑𝑊 for 𝑓 ∈ 𝐿^{2,loc}_𝐅(0, 𝑇; 𝐻), and especially for 𝑓 ∈ 𝐿^𝑝_𝐅(𝛺; 𝐿²(0, 𝑇; 𝐻)) (see Lü & Zhang, 0000b for more details). The Itô integral has the following properties.

Theorem B.3. Let 𝑝 ∈ [1, ∞), 𝑓, 𝑔 ∈ 𝐿^𝑝_𝐅(𝛺; 𝐿²(0, 𝑇; 𝐻)), 𝑎, 𝑏 ∈ 𝐿²_{ℱ_𝑠}(𝛺) and 0 ≤ 𝑠 < 𝑡 ≤ 𝑇. Then
(1) ∫ₛᵗ 𝑓 𝑑𝑊 ∈ 𝐿^𝑝_𝐅(𝛺; 𝐶([𝑠, 𝑡]; 𝐻));
(2) ∫ₛᵗ (𝑎𝑓 + 𝑏𝑔) 𝑑𝑊 = 𝑎 ∫ₛᵗ 𝑓 𝑑𝑊 + 𝑏 ∫ₛᵗ 𝑔 𝑑𝑊, a.s.;
(3) E(∫ₛᵗ 𝑓 𝑑𝑊) = 0;
(4) When 𝑝 ≥ 2,

E⟨∫ₛᵗ 𝑓 𝑑𝑊, ∫ₛᵗ 𝑔 𝑑𝑊⟩_𝐻 = E(∫ₛᵗ ⟨𝑓(𝑟, ⋅), 𝑔(𝑟, ⋅)⟩_𝐻 𝑑𝑟).

The following useful result, known as the Burkholder–Davis–Gundy inequality, links Itô's integral to the Lebesgue/Bochner integral.

Theorem B.4. For any 𝑝 ∈ [1, ∞), there exists a constant 𝐶_𝑝 > 0 such that for any 𝑇 > 0 and 𝑓 ∈ 𝐿^𝑝_𝐅(𝛺; 𝐿²(0, 𝑇; 𝐻)),

(1/𝐶_𝑝) E(∫₀ᵀ |𝑓(𝑠)|_𝐻² 𝑑𝑠)^{𝑝/2} ≤ E(sup_{𝑡∈[0,𝑇]} |∫₀ᵗ 𝑓(𝑠) 𝑑𝑊(𝑠)|_𝐻^𝑝) ≤ 𝐶_𝑝 E(∫₀ᵀ |𝑓(𝑠)|_𝐻² 𝑑𝑠)^{𝑝/2}. (B.12)
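The construction (B.10) and the Itô isometry (B.11) can be illustrated by Monte Carlo simulation. The sketch below is our own (arbitrary fixed seed): it takes the simple process 𝑓(𝑡) = 𝑊(𝑡_𝑗) on [𝑡_𝑗, 𝑡_{𝑗+1}), which is ℱ_{𝑡_𝑗}-measurable, so that E∫₀ᵀ 𝑓² 𝑑𝑡 = ∑_𝑗 𝑡_𝑗(𝑡_{𝑗+1} − 𝑡_𝑗), and compares this value with the sample second moment of 𝐼(𝑓)(𝑇); the sample mean illustrates Theorem B.3(3).

```python
import math
import random

random.seed(1)
T, m, N = 1.0, 50, 20000          # horizon, grid size, number of sample paths
dt = T / m
sq = math.sqrt(dt)

vals = []
for _ in range(N):
    W, I = 0.0, 0.0
    for _j in range(m):
        dW = sq * random.gauss(0.0, 1.0)
        I += W * dW               # f_j * (W(t_{j+1}) - W(t_j)), as in (B.10)
        W += dW
    vals.append(I)

mean = sum(vals) / N                        # near 0, Theorem B.3(3)
second = sum(v * v for v in vals) / N       # near E int_0^T f^2 dt, by (B.11)
target = sum((j * dt) * dt for j in range(m))   # = sum_j t_j (t_{j+1} - t_j)
print(mean, second, target)
```

The agreement is up to Monte Carlo error of order 𝑁^{−1/2}; the point is that the second moment of the stochastic integral is computed from a plain Lebesgue integral of the integrand.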
We need the following notion for a class of important stochastic processes.

Definition B.13. An 𝐻-valued, 𝐅-adapted, continuous process 𝑋(⋅) is called an (𝐻-valued) Itô process if there exist two 𝐻-valued stochastic processes 𝜙(⋅) ∈ 𝐿^{1,loc}_𝐅(0, 𝑇; 𝐻) and 𝛷(⋅) ∈ 𝐿^{2,loc}_𝐅(0, 𝑇; 𝐻) such that for any 𝑡 ∈ [0, 𝑇],

𝑋(𝑡) = 𝑋(0) + ∫₀ᵗ 𝜙(𝑠) 𝑑𝑠 + ∫₀ᵗ 𝛷(𝑠) 𝑑𝑊(𝑠), a.s. (B.13)

The following fundamental result is known as Itô's formula.

Theorem B.5. Let 𝑋(⋅) be given by (B.13). Let 𝐹 ∶ [0, 𝑇] × 𝐻 → ℝ be a function such that its partial derivatives 𝐹_𝑡, 𝐹_𝑥 and 𝐹_{𝑥𝑥} are uniformly continuous on any bounded subset of [0, 𝑇] × 𝐻. Then, for any 𝑡 ∈ [0, 𝑇],

𝐹(𝑡, 𝑋(𝑡)) − 𝐹(0, 𝑋(0)) = ∫₀ᵗ [𝐹_𝑡(𝑠, 𝑋(𝑠)) + ⟨𝐹_𝑥(𝑠, 𝑋(𝑠)), 𝜙(𝑠)⟩_𝐻 + ½⟨𝐹_{𝑥𝑥}(𝑠, 𝑋(𝑠))𝛷(𝑠), 𝛷(𝑠)⟩_𝐻] 𝑑𝑠 + ∫₀ᵗ 𝐹_𝑥(𝑠, 𝑋(𝑠))𝛷(𝑠) 𝑑𝑊(𝑠), a.s. (B.14)

Remark B.2. Usually, people write formula (B.14) in the following differential form:

𝑑𝐹(𝑡, 𝑋(𝑡)) = 𝐹_𝑡(𝑡, 𝑋(𝑡)) 𝑑𝑡 + ⟨𝐹_𝑥(𝑡, 𝑋(𝑡)), 𝜙(𝑡)⟩_𝐻 𝑑𝑡 + ½⟨𝐹_{𝑥𝑥}(𝑡, 𝑋(𝑡))𝛷(𝑡), 𝛷(𝑡)⟩_𝐻 𝑑𝑡 + 𝐹_𝑥(𝑡, 𝑋(𝑡))𝛷(𝑡) 𝑑𝑊(𝑡).

Theorem B.5 works well for Itô processes in the (strong) form (B.13). However, this is usually too restrictive in the study of SEEs in infinite dimensions. Indeed, in the infinite dimensional setting one sometimes has to handle Itô processes in a weaker form, to be presented below.

Let 𝑉 be a Hilbert space such that the embedding 𝑉 ⊂ 𝐻 is continuous and dense. Denote by 𝑉∗ the dual space of 𝑉 w.r.t. the pivot space 𝐻. Hence, 𝑉 ⊂ 𝐻 = 𝐻∗ ⊂ 𝑉∗, continuously and densely, and

⟨𝑧, 𝑣⟩_{𝑉∗,𝑉} = ⟨𝑧, 𝑣⟩_𝐻, ∀ 𝑧 ∈ 𝐻, 𝑣 ∈ 𝑉.

We have the following Itô formula for a weak form of Itô processes.

Theorem B.6. Suppose that 𝑋₀ ∈ 𝐿²_{ℱ₀}(𝛺; 𝐻), 𝜙(⋅) ∈ 𝐿²_𝐅(0, 𝑇; 𝑉∗), and 𝛷(⋅) ∈ 𝐿^𝑝_𝐅(𝛺; 𝐿²(0, 𝑇; 𝐻)) for some 𝑝 ≥ 1. Let

𝑋(𝑡) = 𝑋₀ + ∫₀ᵗ 𝜙(𝑠) 𝑑𝑠 + ∫₀ᵗ 𝛷(𝑠) 𝑑𝑊(𝑠), 𝑡 ∈ [0, 𝑇].

If 𝑋 ∈ 𝐿²_𝐅(0, 𝑇; 𝑉), then 𝑋(⋅) ∈ 𝐶([0, 𝑇]; 𝐻), a.s., and for any 𝑡 ∈ [0, 𝑇],

|𝑋(𝑡)|²_𝐻 = |𝑋₀|²_𝐻 + 2∫₀ᵗ ⟨𝜙(𝑠), 𝑋(𝑠)⟩_{𝑉∗,𝑉} 𝑑𝑠 + ∫₀ᵗ |𝛷(𝑠)|²_𝐻 𝑑𝑠 + 2∫₀ᵗ ⟨𝛷(𝑠), 𝑋(𝑠)⟩_𝐻 𝑑𝑊(𝑠), a.s. (B.15)

As an immediate corollary of Theorem B.6, we have the following result.

Corollary B.1. Suppose that 𝑋₀, 𝑌₀ ∈ 𝐿²_{ℱ₀}(𝛺; 𝐻), 𝜙₁(⋅), 𝜙₂(⋅) ∈ 𝐿²_𝐅(0, 𝑇; 𝑉∗), and 𝛷₁(⋅), 𝛷₂(⋅) ∈ 𝐿^𝑝_𝐅(𝛺; 𝐿²(0, 𝑇; 𝐻)) for some 𝑝 ≥ 1. Let

𝑋(𝑡) = 𝑋₀ + ∫₀ᵗ 𝜙₁(𝑠) 𝑑𝑠 + ∫₀ᵗ 𝛷₁(𝑠) 𝑑𝑊(𝑠), 𝑡 ∈ [0, 𝑇]

and

𝑌(𝑡) = 𝑌₀ + ∫₀ᵗ 𝜙₂(𝑠) 𝑑𝑠 + ∫₀ᵗ 𝛷₂(𝑠) 𝑑𝑊(𝑠), 𝑡 ∈ [0, 𝑇].

If 𝑋 ∈ 𝐿²_𝐅(0, 𝑇; 𝑉) (resp. 𝑌 ∈ 𝐿²_𝐅(0, 𝑇; 𝑉)), then 𝑋(⋅) ∈ 𝐶([0, 𝑇]; 𝐻), a.s. (resp. 𝑌(⋅) ∈ 𝐶([0, 𝑇]; 𝐻), a.s.), and for any 𝑡 ∈ [0, 𝑇],

⟨𝑋(𝑡), 𝑌(𝑡)⟩_𝐻 − ⟨𝑋₀, 𝑌₀⟩_𝐻 = ∫₀ᵗ (⟨𝜙₂(𝑠), 𝑋(𝑠)⟩_{𝑉∗,𝑉} + ⟨𝜙₁(𝑠), 𝑌(𝑠)⟩_{𝑉∗,𝑉}) 𝑑𝑠 + ∫₀ᵗ ⟨𝛷₁(𝑠), 𝛷₂(𝑠)⟩_𝐻 𝑑𝑠 + ∫₀ᵗ (⟨𝛷₂(𝑠), 𝑋(𝑠)⟩_𝐻 + ⟨𝛷₁(𝑠), 𝑌(𝑠)⟩_𝐻) 𝑑𝑊(𝑠), a.s. (B.16)

Remark B.3. For simplicity, we usually write the formula (B.15) in the following differential form:

𝑑|𝑋(𝑡)|²_𝐻 = (2⟨𝜙(𝑡), 𝑋(𝑡)⟩_{𝑉∗,𝑉} + |𝛷(𝑡)|²_𝐻) 𝑑𝑡 + 2⟨𝛷(𝑡), 𝑋(𝑡)⟩_𝐻 𝑑𝑊(𝑡),

and denote |𝛷(𝑡)|²_𝐻 𝑑𝑡 by |𝑑𝑋|²_𝐻 for simplicity. Similarly, we write the formula (B.16) in the following differential form:

𝑑⟨𝑋(𝑡), 𝑌(𝑡)⟩_𝐻 = (⟨𝜙₂(𝑡), 𝑋(𝑡)⟩_{𝑉∗,𝑉} + ⟨𝜙₁(𝑡), 𝑌(𝑡)⟩_{𝑉∗,𝑉} + ⟨𝛷₁(𝑡), 𝛷₂(𝑡)⟩_𝐻) 𝑑𝑡 + ⟨𝛷₂(𝑡), 𝑋(𝑡)⟩_𝐻 𝑑𝑊(𝑡) + ⟨𝛷₁(𝑡), 𝑌(𝑡)⟩_𝐻 𝑑𝑊(𝑡),

and denote ⟨𝛷₁(𝑡), 𝛷₂(𝑡)⟩_𝐻 𝑑𝑡 by ⟨𝑑𝑋, 𝑑𝑌⟩_𝐻 for simplicity.

B.6. Stochastic evolution equations

In what follows, we shall always assume that 𝐻 is a separable Hilbert space, and that 𝐴 is a linear operator (with domain 𝐷(𝐴) on 𝐻) which generates a 𝐶₀-semigroup {𝑆(𝑡)}_{𝑡≥0}. Denote by 𝐴∗ the adjoint operator of 𝐴, which generates {𝑆∗(𝑡)}_{𝑡≥0}, the dual 𝐶₀-semigroup of {𝑆(𝑡)}_{𝑡≥0}.

Consider the following SEE:

𝑑𝑋(𝑡) = [𝐴𝑋(𝑡) + 𝐹(𝑡, 𝑋(𝑡))] 𝑑𝑡 + 𝐹̃(𝑡, 𝑋(𝑡)) 𝑑𝑊(𝑡) in (0, 𝑇]; 𝑋(0) = 𝑋₀. (B.17)

Here 𝑋₀ ∈ 𝐿^𝑝_{ℱ₀}(𝛺; 𝐻) (for some 𝑝 ≥ 2), and 𝐹(⋅, ⋅) and 𝐹̃(⋅, ⋅) are measurable functions from [0, 𝑇] × 𝛺 × 𝐻 to 𝐻, satisfying

𝐹(⋅, 0) ∈ 𝐿^𝑝_𝐅(𝛺; 𝐿¹(0, 𝑇; 𝐻)), 𝐹̃(⋅, 0) ∈ 𝐿^𝑝_𝐅(𝛺; 𝐿²(0, 𝑇; 𝐻)),
|𝐹(𝑡, 𝑦) − 𝐹(𝑡, 𝑧)|_𝐻 ≤ 𝐿₁(𝑡)|𝑦 − 𝑧|_𝐻, ∀ 𝑦, 𝑧 ∈ 𝐻, a.e. 𝑡 ∈ [0, 𝑇], a.s., (B.18)
|𝐹̃(𝑡, 𝑦) − 𝐹̃(𝑡, 𝑧)|_𝐻 ≤ 𝐿₂(𝑡)|𝑦 − 𝑧|_𝐻, ∀ 𝑦, 𝑧 ∈ 𝐻, a.e. 𝑡 ∈ [0, 𝑇], a.s.,

for some 𝐿₁(⋅) ∈ 𝐿¹(0, 𝑇) and 𝐿₂(⋅) ∈ 𝐿²(0, 𝑇).

First, we give the notion of a strong solution to Eq. (B.17).

Definition B.14. An 𝐻-valued stochastic process 𝑋(⋅) ∈ 𝐶_𝐅([0, 𝑇]; 𝐿^𝑝(𝛺; 𝐻)) is called a strong solution to (B.17) if 𝑋(𝑡, 𝜔) ∈ 𝐷(𝐴) for a.e. (𝑡, 𝜔) ∈ [0, 𝑇] × 𝛺, 𝐴𝑋(⋅) ∈ 𝐿^{1,loc}_𝐅(0, 𝑇; 𝐻), and for all 𝑡 ∈ [0, 𝑇],

𝑋(𝑡) = 𝑋₀ + ∫₀ᵗ (𝐴𝑋(𝑠) + 𝐹(𝑠, 𝑋(𝑠))) 𝑑𝑠 + ∫₀ᵗ 𝐹̃(𝑠, 𝑋(𝑠)) 𝑑𝑊(𝑠), a.s.

One needs very restrictive conditions to guarantee the existence of a strong solution. Thus, people introduce two types of ‘‘weak’’ solutions.

Definition B.15. An 𝐻-valued stochastic process 𝑋(⋅) ∈ 𝐶_𝐅([0, 𝑇]; 𝐿^𝑝(𝛺; 𝐻)) is called a weak solution to (B.17) if for any 𝑡 ∈ [0, 𝑇] and 𝜉 ∈ 𝐷(𝐴∗),

⟨𝑋(𝑡), 𝜉⟩_𝐻 = ⟨𝑋₀, 𝜉⟩_𝐻 + ∫₀ᵗ (⟨𝑋(𝑠), 𝐴∗𝜉⟩_𝐻 + ⟨𝐹(𝑠, 𝑋(𝑠)), 𝜉⟩_𝐻) 𝑑𝑠 + ∫₀ᵗ ⟨𝐹̃(𝑠, 𝑋(𝑠)), 𝜉⟩_𝐻 𝑑𝑊(𝑠), a.s.
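For 𝑋 = 𝑊 (i.e., 𝜙 ≡ 0 and 𝛷 ≡ 1 in (B.13)) and 𝐹(𝑥) = 𝑥², formula (B.14) reduces to 𝑊(𝑇)² = 𝑇 + 2∫₀ᵀ 𝑊 𝑑𝑊, the extra 𝑇 coming from the second-order term ½𝐹_{𝑥𝑥}|𝛷|². The pathwise sketch below is our own (left-point Riemann sums for the Itô integral, fixed seed) and checks this identity on a fine partition.

```python
import math
import random

random.seed(7)
T, m = 1.0, 20000             # horizon and number of partition points
sq = math.sqrt(T / m)

W, ito = 0.0, 0.0
for _ in range(m):
    dW = sq * random.gauss(0.0, 1.0)
    ito += W * dW             # left-point (Ito) Riemann sum for int_0^T W dW
    W += dW

lhs = W * W                   # W(T)^2
rhs = T + 2.0 * ito           # Ito's formula for F(x) = x^2
print(lhs, rhs)               # close for a fine partition
```

The residual equals ∑_𝑗(𝛥𝑊_𝑗)² − 𝑇, the quadratic-variation error, which vanishes as the mesh is refined; this is exactly the term a naive chain rule would miss.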
𝛥
Definition B.16. An 𝐻-valued stochastic process 𝑋(⋅) ∈ 𝐶F ([0, 𝑇 ]; Here 𝜆 ∈ 𝜌(𝐴), the resolvent set of 𝐴, and 𝑅(𝜆) = 𝜆(𝜆𝐼 − 𝐴)−1 with 𝐼
𝐿𝑝 (𝛺; 𝐻)) is called a mild solution to (B.17) if for any 𝑡 ∈ [0, 𝑇 ], being the identity operator on 𝐻.
𝑡
𝑋(𝑡) = 𝑆(𝑡)𝑋0 +
∫0
𝑆(𝑡 − 𝑠)𝐹 (𝑠, 𝑋(𝑠))𝑑𝑠 Theorem B.11. For any 𝑋0 ∈ 𝐿𝑝 (𝛺; 𝐻) (𝑝 ≥ 2) and 𝜆 ∈ 𝜌(𝐴),
0
𝑡 Eq. (B.21) admits a unique strong solution 𝑋 𝜆 (⋅) ∈ 𝐶F ([0, 𝑇 ]; 𝐿𝑝 (𝛺; 𝐻)).
+ 𝑆(𝑡 − 𝑠)𝐹̃(𝑠, 𝑋(𝑠))𝑑𝑊 (𝑠), a.s.
∫0 Moreover, as 𝜆 → ∞, the solution 𝑋 𝜆 (⋅) converges to 𝑋(⋅) in 𝐶F ([0, 𝑇 ];
It is easiest to show the well-posedness of (B.17) in the framework 𝐿𝑝 (𝛺; 𝐻)), where 𝑋(⋅) solves (B.17) in the sense of the mild solution.
of mild solution among the above three kinds of solutions. Indeed, we
For many results in this paper, rigorous proofs should employ the
have the following result.
above process of approximation. However, write it down every time
could be quite cumbersome. Hence, we omit it.
Theorem B.7. Let 𝑝 ≥ 2. Then, there is a unique mild solution 𝑋(⋅) ∈
𝐶F ([0, 𝑇 ]; 𝐿𝑝 (𝛺; 𝐻)) to (B.17). Moreover,
B.7. Backward stochastic evolution equations
|𝑋(⋅)|𝐶F ([0,𝑇 ];𝐿𝑝 (𝛺;𝐻))
(
≤  |𝑋0 |𝐿𝑝 (𝛺;𝐻) + |𝐹 (⋅, 0)|𝐿𝑝 (𝛺;𝐿1 (0,𝑇 ;𝐻)) (B.19)
0
) F Consider the following 𝐻-valued BSEE:
+|𝐹̃(⋅, 0)|𝐿𝑝 (𝛺;𝐿2 (0,𝑇 ;𝐻)) . { [ ]
F
𝑑𝑧(𝑡) = − 𝐴𝑧(𝑡) + 𝐹 (𝑡, 𝑧(𝑡), 𝑍(𝑡)) 𝑑𝑡 − 𝑍(𝑡)𝑑𝑊 (𝑡) in [0, 𝑇 ),
If {𝑆(𝑡)}𝑡≥0 is a contraction semigroup, then one can get a better 𝑧(𝑇 ) = 𝜉.
regularity for the mild solution w.r.t. time, i.e., 𝑋(⋅, 𝜔) ∈ 𝐶([0, 𝑇 ]; 𝐻),
a.s. (B.22)

Here 𝜉 ∈ 𝐿𝑝 (𝛺; 𝐻) (for some 𝑝 ≥ 1), 𝐹 ∶ [0, 𝑇 ] × 𝛺 × 𝐻 × 𝐻 → 𝐻 is a


Theorem B.8. If 𝐴 generates a contraction semigroup and 𝑝 ≥ 1, then 𝑇
measurable function satisfying that
(B.17) admits a unique mild solution 𝑋(⋅) ∈ 𝐿𝑝F (𝛺; 𝐶([0, 𝑇 ]; 𝐻)). Moreover,

⎪ 𝐹 (⋅, 0, 0) ∈ 𝐿𝑝F (𝛺; 𝐿1 (0, 𝑇 ; 𝐻)),
|𝑋(⋅)|𝐿𝑝 (𝛺;𝐶([0,𝑇 ];𝐻)) ⎪ |𝐹 (𝑡, 𝑧1 , 𝑍1 ) − 𝐹 (𝑡, 𝑧2 , 𝑍2 )|𝐻
( F ⎨ (B.23)
≤  |𝑋0 |𝐿𝑝 (𝛺;𝐻) + |𝐹 (⋅, 0)|𝐿𝑝 (𝛺;𝐿1 (0,𝑇 ;𝐻)) (B.20) ⎪ ≤ 𝐿1 (𝑡)|𝑧1 − 𝑧2 |𝐻 + 𝐿2 (𝑡)|𝑍1 − 𝑍2 |𝐻 ,
0
) F ⎪ ∀ 𝑧1 , 𝑧2 , 𝑍1 , 𝑍2 ∈ 𝐻, a.e. 𝑡 ∈ [0, 𝑇 ], a.s.
+|𝐹̃(⋅, 0)|𝐿𝑝 (𝛺;𝐿2 (0,𝑇 ;𝐻)) . ⎩
F

The following result indicates the space smoothing effect of mild for some 𝐿1 (⋅) ∈ 𝐿1 (0, 𝑇 ) and 𝐿2 (⋅) ∈ 𝐿2 (0, 𝑇 ).
solutions to a class of SEEs, such as the stochastic parabolic equation. Similarly to the case of SEEs, one introduces below the notions of
strong, weak and mild solutions to Eq. (B.22).
Theorem B.9. Let 𝑝 ≥ 1. Assume that 𝐴 is a self-adjoint, negative definite
(unbounded linear) operator on 𝐻. Then, Eq. (B.17) admits a unique mild Definition B.17. A stochastic process (𝑧(⋅), 𝑍(⋅)) ∈ 𝐿𝑝F (𝛺; 𝐶([0, 𝑇 ];
solution 𝐻)) × 𝐿𝑝F (𝛺; 𝐿2 (0, 𝑇 ; 𝐻)) is called a strong solution to (B.22) if 𝑧(𝑡) ∈
1 𝐷(𝐴) for a.e. (𝑡, 𝜔) ∈ [0, 𝑇 ] × 𝛺, 𝐴𝑧(⋅) ∈ 𝐿1,𝑙𝑜𝑐 (0, 𝑇 ; 𝐻), and for all
𝑋(⋅) ∈ 𝐿𝑝F (𝛺; 𝐶([0, 𝑇 ]; 𝐻)) ∩ 𝐿𝑝F (𝛺; 𝐿2 (0, 𝑇 ; 𝐷((−𝐴) 2 ))). F
𝑡 ∈ [0, 𝑇 ],
Moreover, 𝑇( ) 𝑇
|X(⋅)|_{L^p_F(Ω;C([0,T];H))} + |X(⋅)|_{L^p_F(Ω;L²(0,T;D((−A)^{1/2})))}
≤ (|X₀|_{L^p_{F_0}(Ω;H)} + |F(⋅,0)|_{L^p_F(Ω;L¹(0,T;H))} + |F̃(⋅,0)|_{L^p_F(Ω;L²(0,T;H))}).

The next result gives the relationship between mild and weak solutions to (B.17).

Theorem B.10. Any weak solution to (B.17) is also a mild solution to (B.17) and vice versa.

In many applications, the mild solution does not have enough regularity. For example, to establish the pointwise identity for the Carleman estimate, we need the functions to be second order differentiable, in the sense of weak derivatives, w.r.t. the spatial variable. Nevertheless, as in Appendix A.7, such problems can be solved by the following strategy:

1. Introduce a family of equations with strong solutions such that the limit of these strong solutions is the mild or weak solution to the original equation.
2. Obtain the desired properties for these strong solutions.
3. Utilize a density argument to establish the desired properties for the mild/weak solutions.

Let us introduce an approximation equation of (B.17) as follows:

{ dX^λ(t) = AX^λ(t)dt + R(λ)F(t, X^λ(t))dt + R(λ)F̃(t, X^λ(t))dW(t)  in (0, T],
{ X^λ(0) = R(λ)X₀ ∈ D(A).    (B.21)

A strong solution (z(⋅), Z(⋅)) to (B.22) satisfies, for any t ∈ [0, T],

z(t) = ξ + ∫_t^T [Az(s) + F(s, z(s), Z(s))] ds + ∫_t^T Z(s) dW(s),  a.s.

Definition B.18. A stochastic process (z(⋅), Z(⋅)) ∈ L^p_F(Ω; C([0,T]; H)) × L^p_F(Ω; L²(0,T; H)) is called a weak solution to (B.22) if for any t ∈ [0, T] and η ∈ D(A*),

⟨z(t), η⟩_H = ⟨ξ, η⟩_H + ∫_t^T ⟨z(s), A*η⟩_H ds − ∫_t^T ⟨F(s, z(s), Z(s)), η⟩_H ds − ∫_t^T ⟨Z(s), η⟩_H dW(s),  a.s.

Definition B.19. A stochastic process (z(⋅), Z(⋅)) ∈ L^p_F(Ω; C([0,T]; H)) × L^p_F(Ω; L²(0,T; H)) is called a mild solution to (B.22) if for any t ∈ [0, T],

z(t) = S(T − t)ξ + ∫_t^T S(s − t)F(s, z(s), Z(s)) ds + ∫_t^T S(s − t)Z(s) dW(s),  a.s.

It is easy to show the following result.

Proposition B.3. If (z(⋅), Z(⋅)) is a strong solution to Eq. (B.22), then it is also a weak solution to the same equation. Conversely, if a weak solution (z(⋅), Z(⋅)) to Eq. (B.22) satisfies z(t) ∈ D(A) for a.e. (t, ω) ∈ [0, T] × Ω and Az(⋅) ∈ L¹(0, T; H) a.s., then (z(⋅), Z(⋅)) is also a strong solution to Eq. (B.22).

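The first step of the strategy above can be illustrated in finite dimensions. The sketch below is our own illustration, not from the paper: a discrete Dirichlet Laplacian stands in for the generator A, and R(λ) is taken to be the resolvent-type operator λ(λI − A)^{-1} (an assumption about the meaning of R(λ) in (B.21)). It checks numerically that the smoothed datum R(λ)X₀, which lies in D(A), approximates X₀ as λ → ∞:

```python
import numpy as np

# Finite-difference Dirichlet Laplacian on (0, 1): a self-adjoint,
# negative definite matrix playing the role of the unbounded operator A.
n = 200
h = 1.0 / (n + 1)
off = np.ones(n - 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1)) / h**2

x = np.linspace(h, 1 - h, n)
X0 = np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x)  # initial datum

def R(lam):
    """Resolvent-type smoothing operator lam * (lam*I - A)^{-1}."""
    return lam * np.linalg.solve(lam * np.eye(n) - A, np.eye(n))

# Discrete L^2 errors |R(lam)X0 - X0| for increasing lam: they decrease
# toward 0, which is what the density argument in steps 1-3 exploits.
errs = [np.linalg.norm(R(lam) @ X0 - X0) * np.sqrt(h) for lam in (1e2, 1e3, 1e4)]
print(errs)
assert errs[0] > errs[1] > errs[2]
```

The same computation, run with a rougher X₀, shows why the smoothing is needed: A applied to the raw datum is large in norm, while A applied to R(λ)X₀ stays controlled for each fixed λ.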

Similar to Theorem B.7 (but here one needs that the filtration F is the one generated by W(⋅)), we can get the well-posedness of (B.22) in the sense of the mild solution.

Theorem B.12. Assume that F is the natural filtration generated by W(⋅). Then, for any p ≥ 1 and ξ ∈ L^p_{F_T}(Ω; H), Eq. (B.22) admits a unique mild solution (z(⋅), Z(⋅)) ∈ L^p_F(Ω; C([0,T]; H)) × L^p_F(Ω; L²(0,T; H)) satisfying

|(z, Z)|_{L^p_F(Ω;C([0,T];H))×L^p_F(Ω;L²(0,T;H))} ≤ (|ξ|_{L^p_{F_T}(Ω;H)} + |F(⋅,0,0)|_{L^p_F(Ω;L¹(0,T;H))}).    (B.24)

Also, similar to Theorem B.10, we have the following relationship between the weak and mild solutions to (B.22).

Theorem B.13. A stochastic process (z, Z) is a weak solution to (B.22) if and only if it is a mild solution to the same equation.

Similarly to Theorem B.9, the following result gives the smoothing effect of mild solutions to a class of BSEEs.

Theorem B.14. Let F be the natural filtration generated by W(⋅), F(⋅,0,0) ∈ L¹_F(0,T; L²(Ω;H)), and A be a self-adjoint, negative definite (unbounded linear) operator on H. Then, for any ξ ∈ L²_{F_T}(Ω; H), Eq. (B.22) admits a unique mild solution (z(⋅), Z(⋅)) ∈ (L²_F(Ω; C([0,T]; H)) ∩ L²_F(0,T; D((−A)^{1/2}))) × L²_F(0,T; H). Moreover,

|z(⋅)|_{L²_F(Ω;C([0,T];H))} + |z(⋅)|_{L²_F(0,T;D((−A)^{1/2}))} + |Z(⋅)|_{L²_F(0,T;H)} ≤ (|ξ|_{L²_{F_T}(Ω;H)} + |F(⋅,0,0)|_{L¹_F(0,T;L²(Ω;H))}).    (B.25)

Let λ ∈ ρ(A). Similarly to (B.21), we introduce an approximating equation of (B.22) as follows:

{ dz^λ(t) = −[Az^λ(t) + R(λ)F(t, z(t), Z(t))]dt − Z^λ(t)dW(t)  in (0, T],
{ z^λ(T) = R(λ)ξ ∈ D(A).    (B.26)

The following result holds.

Theorem B.15. Assume that F is the natural filtration generated by W(⋅), and F(⋅,0,0) ∈ L¹_F(0,T; L²(Ω;H)). Then, for each ξ ∈ L²_{F_T}(Ω; H) and λ ∈ ρ(A), Eq. (B.26) has a unique strong solution (z^λ(⋅), Z^λ(⋅)) ∈ L²_F(Ω; C([0,T]; D(A))) × L²_F(0,T; D(A)). Moreover, as λ → ∞, (z^λ(⋅), Z^λ(⋅)) converges in L²_F(Ω; C([0,T]; H)) × L²_F(0,T; H) to (z(⋅), Z(⋅)), the mild solution to (B.22).

In Theorems B.12 and B.14–B.15, we need the filtration F to be the natural one generated by the Brownian motion W(⋅). For a general filtration, we should employ the stochastic transposition method (developed in Lü & Zhang, 2013, 2014, 2015b) to show the well-posedness of Eq. (B.22). For simplicity, we only consider the case that p = 2. To define the transposition solution to (B.22), we introduce the following SEE:

{ dX = (A*X + v₁)ds + v₂ dW(s)  in (t, T],
{ X(t) = η,    (B.27)

where t ∈ [0, T], v₁ ∈ L¹_F(t,T; L²(Ω;H)), v₂ ∈ L²_F(t,T; H) and η ∈ L²_{F_t}(Ω; H). By Theorem B.7, Eq. (B.27) admits a unique mild solution X ∈ C_F([t,T]; L²(Ω;H)), and

|X|_{C_F([t,T];L²(Ω;H))} ≤ (|η|_{L²_{F_t}(Ω;H)} + |v₁|_{L¹_F(t,T;L²(Ω;H))} + |v₂|_{L²_F(t,T;H)}).

We now present the following notion.

Definition B.20. We call (z(⋅), Z(⋅)) ∈ D_F([0,T]; L²(Ω;H)) × L²_F(0,T; H) a transposition solution to (B.22) if for any t ∈ [0, T], v₁(⋅) ∈ L¹_F(t,T; L²(Ω;H)), v₂(⋅) ∈ L²_F(t,T; H), η ∈ L²_{F_t}(Ω; H) and the corresponding solution X ∈ C_F([t,T]; L²(Ω;H)) to (B.27), it holds that

E⟨X(T), ξ⟩_H − E ∫_t^T ⟨X(s), F(s, z(s), Z(s))⟩_H ds
= E⟨η, z(t)⟩_H + E ∫_t^T ⟨v₁(s), z(s)⟩_H ds + E ∫_t^T ⟨v₂(s), Z(s)⟩_H ds.    (B.28)

We have the following well-posedness result for (B.22).

Theorem B.16. For any ξ ∈ L²_{F_T}(Ω; H) and F(⋅,0,0) ∈ L¹_F(0,T; L²(Ω;H)), Eq. (B.22) admits a unique transposition solution

(z(⋅), Z(⋅)) ∈ D_F([0,T]; L²(Ω;H)) × L²_F(0,T; H).

Furthermore,

|(z(⋅), Z(⋅))|_{D_F([0,T];L²(Ω;H))×L²_F(0,T;H)} ≤ (|ξ|_{L²_{F_T}(Ω;H)} + |F(⋅,0,0)|_{L¹_F(0,T;L²(Ω;H))}).

In particular, if A is a self-adjoint, negative definite (unbounded linear) operator on H, then, similar to Theorem B.14, one can see some smoothing effect of the transposition solution. We only consider a special case, namely, F(t, z, Z) = B₁z + B₂Z + f, where B₁, B₂ ∈ L^∞_F(0,T; ℒ(H)) and f ∈ L^p_F(0,T; (D((−A)^{1/2}))′) for 1 < p ≤ 2.

Theorem B.17. Let A be a self-adjoint, negative definite (unbounded linear) operator on H. For any ξ ∈ L^p_{F_T}(Ω; H), Eq. (B.22) (with F given above) admits a unique transposition solution

(z(⋅), Z(⋅)) ∈ D_F([0,T]; L^p(Ω;H)) × L²_F(0,T; L^p(Ω;H)).

Moreover,

|(z(⋅), Z(⋅))|_{D_F([0,T];L²(Ω;H))×L²_F(0,T;H)} ≤ (|ξ|_{L²_{F_T}(Ω;H)} + |f|_{L^p_F(0,T;(D((−A)^{1/2}))′)}).

The proofs of Theorems B.16 and B.17 are based on a Riesz-type representation theorem in Lü, Yong, and Zhang (2012, 2017b). We refer the readers to Lü and Zhang (2014) for the details.

References

Acciaio, B., Backhoff-Veraguas, J., & Carmona, R. A. (2019). Extended mean field control problems: Stochastic maximum principle and transport perspective. SIAM Journal on Control and Optimization, 57(6), 3666–3693. http://dx.doi.org/10.1137/18m1196479, arXiv:1802.05754.
Ahmed, N. U. (1981). Stochastic control on Hilbert space for linear evolution equations with random operator-valued coefficients. SIAM Journal on Control and Optimization, 19(3), 401–430. http://dx.doi.org/10.1137/0319023.
Barbu, V. (2013). Note on the internal stabilization of stochastic parabolic equations with linearly multiplicative Gaussian noise. ESAIM: Control, Optimisation and Calculus of Variations, 19(4), 1055–1063. http://dx.doi.org/10.1051/cocv/2012044.
Barbu, V., Răşcanu, A., & Tessitore, G. (2003). Carleman estimates and controllability of linear stochastic heat equations. Applied Mathematics and Optimization, 47(2), 97–120. http://dx.doi.org/10.1007/s00245-002-0757-z.
Bardos, C., Lebeau, G., & Rauch, J. (1992). Sharp sufficient conditions for the observation, control, and stabilization of waves from the boundary. SIAM Journal on Control and Optimization, 30(5), 1024–1065. http://dx.doi.org/10.1137/0330055.
Bashirov, A. E., & Kerimov, K. R. (1997). On controllability conception for stochastic systems. SIAM Journal on Control and Optimization, 35(2), 384–398. http://dx.doi.org/10.1137/s0363012994260970.


Bashirov, A. E., & Mahmudov, N. I. (1999). On concepts of controllability for deterministic and stochastic systems. SIAM Journal on Control and Optimization, 37(6), 1808–1821. http://dx.doi.org/10.1137/s036301299732184x.
Bellman, R. E. (1957). Dynamic programming (1st ed.). Princeton, NJ, USA: Princeton University Press.
Bellman, R. E., Glicksberg, I. L., & Gross, O. A. (1958). Some aspects of the mathematical theory of control processes. Santa Monica, CA: Rand Corporation, Rep. No. R-313.
Bensoussan, A. (1983). Stochastic maximum principle for distributed parameter systems. Journal of the Franklin Institute, 315(5–6), 387–406. http://dx.doi.org/10.1016/0016-0032(83)90059-5.
Bismut, J.-M. (1976). Linear quadratic optimal stochastic control with random coefficients. SIAM Journal on Control and Optimization, 14(3), 419–444. http://dx.doi.org/10.1137/0314028.
Boltyanskiy, V. G., Gamkrelidze, R. V., Mischenko, E. F., & Pontryagin, L. S. (1962). Mathematical theory of optimal processes. New York: Wiley.
Bruckner, A. M., Bruckner, J. B., & Thomson, B. S. (1997). Real analysis. Upper Saddle River, NJ: Prentice-Hall.
Bryc, W. (1995). Lecture notes in statistics, The normal distribution: characterizations with applications. New York, NY: Springer-Verlag New York. http://dx.doi.org/10.1007/978-1-4612-2560-7.
Carmona, R. A., & Rozovskii, B. (Eds.) (1999). Mathematical surveys and monographs: vol. 64, Stochastic partial differential equations: six perspectives. Providence, Rhode Island: American Mathematical Society. http://dx.doi.org/10.1090/surv/064.
Chen, H.-F. (1980). On stochastic observability and controllability. Automatica, 16(2), 179–190. http://dx.doi.org/10.1016/0005-1098(80)90053-9.
Chen, M. (2018). Null controllability with constraints on the state for stochastic heat equation. Journal of Dynamical and Control Systems, 24(1), 39–50. http://dx.doi.org/10.1007/s10883-016-9357-0.
Coron, J.-M. (2007). Mathematical surveys and monographs: vol. 136, Control and nonlinearity. Providence: American Mathematical Society.
Da Prato, G., & Zabczyk, J. (1992). Stochastic equations in infinite dimensions. Cambridge, New York: Cambridge University Press. http://dx.doi.org/10.1017/cbo9780511666223.
Dawson, D. A. (1972). Stochastic evolution equations. Mathematical Biosciences, 15(3–4), 287–316. http://dx.doi.org/10.1016/0025-5564(72)90039-9.
Dou, F., & Lü, Q. (2019). Partial approximate controllability for linear stochastic control systems. SIAM Journal on Control and Optimization, 57(2), 1209–1229. http://dx.doi.org/10.1137/18m1164640.
Dou, F., & Lü, Q. (2020). Time-inconsistent linear quadratic optimal control problems for stochastic evolution equations. SIAM Journal on Control and Optimization, 58(1), 485–509. http://dx.doi.org/10.1137/19m1250339.
Du, K., & Meng, Q. (2013). A maximum principle for optimal control of stochastic evolution equations. SIAM Journal on Control and Optimization, 51(6), 4343–4362. http://dx.doi.org/10.1137/120882433.
Duyckaerts, T., Zhang, X., & Zuazua, E. (2008). On the optimality of the observability inequalities for parabolic and hyperbolic systems with potentials. Annales de l'Institut Henri Poincaré Analyse Non Linéaire, 25(1), 1–41. http://dx.doi.org/10.1016/j.anihpc.2006.07.005.
Fabbri, G., Gozzi, F., & Świȩch, A. (2017). Probability theory and stochastic modelling: vol. 82, Stochastic optimal control in infinite dimension. Cham: Springer International Publishing. http://dx.doi.org/10.1007/978-3-319-53067-3.
Fattorini, H., & Russell, D. L. (1971). Exact controllability theorems for linear parabolic equations in one space dimension. Archive for Rational Mechanics and Analysis, 43(4), 272–292. http://dx.doi.org/10.1007/bf00250466.
Fernández-Bertolin, A., & Zhong, J. (2020). Hardy's uncertainty principle and unique continuation property for stochastic heat equations. ESAIM: Control, Optimisation and Calculus of Variations, 26, 9. http://dx.doi.org/10.1051/cocv/2019009.
Fernández-Cara, E., & Zuazua, E. (2000a). The cost of approximate controllability for heat equations: the linear case. Advances in Differential Equations, 5(4–6), 465–514. URL: https://projecteuclid.org:443/euclid.ade/1356651338.
Fernández-Cara, E., & Zuazua, E. (2000b). Null and approximate controllability for weakly blowing up semilinear heat equations. Annales de l'I.H.P. Analyse Non Linéaire, 17(5), 583–616. URL: http://www.numdam.org/item/AIHPC_2000_17_5_583_0.
Fleming, W. H., & Soner, H. M. (2006). Stochastic modelling and applied probability: vol. 25, Controlled Markov processes and viscosity solutions (2nd ed.). New York: Springer-Verlag. http://dx.doi.org/10.1007/0-387-31071-1.
Frankowska, H., & Lü, Q. (2020). First and second order necessary optimality conditions for controlled stochastic evolution equations with control and state constraints. Journal of Differential Equations, 268(6), 2949–3015. http://dx.doi.org/10.1016/j.jde.2019.09.045.
Frankowska, H., & Zhang, X. (2020). Necessary conditions for stochastic optimal control problems in infinite dimensions. Stochastic Processes and their Applications, 130(7), 4081–4103. http://dx.doi.org/10.1016/j.spa.2019.11.010.
Fu, X., & Liu, X. (2017a). Controllability and observability of some stochastic complex Ginzburg–Landau equations. SIAM Journal on Control and Optimization, 55(2), 1102–1127. http://dx.doi.org/10.1137/15m1039961.
Fu, X., & Liu, X. (2017b). A weighted identity for stochastic partial differential operators and its applications. Journal of Differential Equations, 262(6), 3551–3582. http://dx.doi.org/10.1016/j.jde.2016.11.035.
Fu, X., Lü, Q., & Zhang, X. (2019). SpringerBriefs in mathematics, Carleman estimates for second order partial differential operators and applications, a unified approach. Cham: Springer International Publishing. http://dx.doi.org/10.1007/978-3-030-29530-1.
Fu, X., Yong, J., & Zhang, X. (2007). Exact controllability for multidimensional semilinear hyperbolic equations. SIAM Journal on Control and Optimization, 46(5), 1578–1614. http://dx.doi.org/10.1137/040610222.
Fuhrman, M., Hu, Y., & Tessitore, G. (2013). Stochastic maximum principle for optimal control of SPDEs. Applied Mathematics & Optimization, 68(2), 181–217. http://dx.doi.org/10.1007/s00245-013-9203-7.
Fuhrman, M., Hu, Y., & Tessitore, G. (2018). Stochastic maximum principle for optimal control of partial differential equations driven by white noise. Stochastics and Partial Differential Equations: Analysis and Computations, 6(2), 255–285. http://dx.doi.org/10.1007/s40072-017-0108-3.
Funaki, T. (1983). Random motion of strings and related stochastic evolution equations. Nagoya Mathematical Journal, 89, 129–193. http://dx.doi.org/10.1017/s0027763000020298.
Fursikov, A. V., & Imanuvilov, O. Y. (1996). Lecture notes series: vol. 34, Controllability of evolution equations. Seoul, Korea: Research Institute of Mathematics, Seoul National University. URL: https://books.google.com.hk/books?id=piHvAAAAMAAJ.
Gao, P., Chen, M., & Li, Y. (2015). Observability estimates and null controllability for forward and backward linear stochastic Kuramoto–Sivashinsky equations. SIAM Journal on Control and Optimization, 53(1), 475–500. http://dx.doi.org/10.1137/130943820.
Greenwood, P. E., & Ward, L. M. (2016). Mathematical biosciences institute lecture series. Stochastics in biological systems, 1.5, Stochastic neuron models. Cham: Springer International Publishing. http://dx.doi.org/10.1007/978-3-319-26911-5.
Guatteri, G., & Tessitore, G. (2005). On the backward stochastic Riccati equation in infinite dimensions. SIAM Journal on Control and Optimization, 44(1), 159–194. http://dx.doi.org/10.1137/s0363012903425507.
Guatteri, G., & Tessitore, G. (2014). Well posedness of operator valued backward stochastic Riccati equations in infinite dimensional spaces. SIAM Journal on Control and Optimization, 52(6), 3776–3806. http://dx.doi.org/10.1137/140966873.
Hafizoglu, C., Lasiecka, I., Levajković, T., Mena, H., & Tuffaha, A. (2017). The stochastic linear quadratic control problem with singular estimates. SIAM Journal on Control and Optimization, 55(2), 595–626. http://dx.doi.org/10.1137/16m1056183.
Halmos, P. R. (1950). Measure theory. New York: D. Van Nostrand Company, Inc. http://dx.doi.org/10.1007/978-1-4684-9440-2.
Hernández-Santamaría, V., Balc'h, K. L., & Peralta, L. (2020). Global null-controllability for stochastic semilinear parabolic equations. arXiv:2010.08854.
Hernández-Santamaría, V., & Peralta, L. (2020). Controllability results for stochastic coupled systems of fourth- and second-order parabolic equations. arXiv:2003.01334.
Holden, H., Øksendal, B., Ubøe, J., & Zhang, T. (2010). Stochastic partial differential equations. A modeling, white noise functional approach. New York, NY: Springer New York.
Hu, Y., Jin, H., & Zhou, X. Y. (2012). Time-inconsistent stochastic linear–quadratic control. SIAM Journal on Control and Optimization, 50(3), 1548–1572. http://dx.doi.org/10.1137/110853960.


Hu, Y., Jin, H., & Zhou, X. Y. (2017). Time-inconsistent stochastic linear-quadratic control: Characterization and uniqueness of equilibrium. SIAM Journal on Control and Optimization, 55(2), 1261–1279. http://dx.doi.org/10.1137/15M1019040.
Hu, Y., & Peng, S. (1990). Maximum principle for semilinear stochastic evolution control systems. Stochastics and Stochastics Reports, 33(3–4), 159–180. http://dx.doi.org/10.1080/17442509008833671.
Ichikawa, A. (1979). Dynamic programming approach to stochastic evolution equations. SIAM Journal on Control and Optimization, 17(1), 152–174. http://dx.doi.org/10.1137/0317012.
Imanuvilov, O. Y. (2002). On Carleman estimates for hyperbolic equations. Asymptotic Analysis, 32(3–4), 185–220.
Itô, K. (1984). Introduction to probability theory. Cambridge University Press.
Kalman, R. E. (1961). On the general theory of control systems. In Proceedings of the first IFAC congress, Moscow, 1960 (vol. 1) (pp. 481–492). London: Butterworth.
Kotelenez, P. (2008). Stochastic modelling and applied probability: vol. 58, Stochastic ordinary and stochastic partial differential equations. Transition from microscopic to macroscopic equations. New York, NY: Springer New York. http://dx.doi.org/10.1007/978-0-387-74317-2.
Krylov, N. V. (2009). Stochastic modelling and applied probability: vol. 14, Controlled diffusion processes. Reprint of the 1980 edition. Berlin: Springer-Verlag. http://dx.doi.org/10.1007/978-3-540-70914-5.
Lasiecka, I., & Triggiani, R. (2000a). Encyclopedia of mathematics and its applications: vol. 1, Control theory for partial differential equations: continuous and approximation theories. I. Abstract parabolic systems (74). Cambridge: Cambridge University Press. http://dx.doi.org/10.1017/cbo9781107340848.
Lasiecka, I., & Triggiani, R. (2000b). Encyclopedia of mathematics and its applications: vol. 2, Control theory for partial differential equations: continuous and approximation theories. II. Abstract hyperbolic-like systems over a finite time horizon (75). Cambridge: Cambridge University Press.
Lebeau, G., & Robbiano, L. (1995). Contrôle exact de l'équation de la chaleur. Communications in Partial Differential Equations, 20(1–2), 335–356. http://dx.doi.org/10.1080/03605309508821097.
Lenhart, S., Xiong, J., & Yong, J. (2016). Optimal controls for stochastic partial differential equations with an application in population modeling. SIAM Journal on Control and Optimization, 54(2), 495–535. http://dx.doi.org/10.1137/15M1010233.
Li, T. (2010). AIMS Series on Applied Mathematics: vol. 3, Controllability and observability for quasilinear hyperbolic systems. Springfield, MO: American Institute of Mathematical Sciences.
Li, H., & Lü, Q. (2012). Null controllability for some systems of two backward stochastic heat equations with one control force. Chinese Annals of Mathematics. Series B, 33(6), 909–918. http://dx.doi.org/10.1007/s11401-012-0743-y.
Li, H., & Lü, Q. (2013). A quantitative boundary unique continuation for stochastic parabolic equations. Journal of Mathematical Analysis and Applications, 402(2), 518–526. http://dx.doi.org/10.1016/j.jmaa.2013.01.038.
Li, X., & Yong, J. (1995). Systems & control: foundations & applications, Optimal control theory for infinite dimensional systems. Boston, MA: Birkhäuser Boston. http://dx.doi.org/10.1007/978-1-4612-4260-4.
Lions, P. L. (1983). Optimal control of diffusion processes and Hamilton–Jacobi–Bellman equations, III, regularity of the optimal cost function. In Collège de France seminar: vol. V, Nonlinear partial differential equations and applications (pp. 95–205). Boston: Research Notes in Mathematics.
Lions, J. L. (1988). Contrôlabilité exacte, perturbations et stabilisation de systèmes distribués, Tome 1, Contrôlabilité exacte. Research in Applied Mathematics, 8.
Liu, K. (2005). Pitman monographs and surveys in pure and applied mathematics: vol. 135, Stability of infinite dimensional stochastic differential equations with applications. Chapman and Hall/CRC. http://dx.doi.org/10.1201/9781420034820.
Liu, X. (2014a). Controllability of some coupled stochastic parabolic systems with fractional order spatial differential operators by one control in the drift. SIAM Journal on Control and Optimization, 52(2), 836–860. http://dx.doi.org/10.1137/130926791.
Liu, X. (2014b). Global Carleman estimate for stochastic parabolic equations, and its application. ESAIM: Control, Optimisation and Calculus of Variations, 20(3), 823–839. http://dx.doi.org/10.1051/cocv/2013085.
Liu, L., & Liu, X. (2018). Controllability and observability of some coupled stochastic parabolic systems. Mathematical Control & Related Fields, 8(3), 829–854. http://dx.doi.org/10.3934/mcrf.2018037.
Liu, X., Lü, Q., & Zhang, X. (2020). Finite codimensional controllability and optimal control problems with endpoint state constraints. Journal de Mathématiques Pures et Appliquées, 138, 164–203. http://dx.doi.org/10.1016/j.matpur.2020.03.004.
Liu, X., Lü, Q., Zhang, H., & Zhang, X. (0000). Finite codimensionality technique in optimization and optimal control problems. URL: https://arxiv.org/abs/2102.00652.
Liu, X., & Yu, Y. (2019). Carleman estimates of some stochastic degenerate parabolic equations and application. SIAM Journal on Control and Optimization, 57(5), 3527–3552. http://dx.doi.org/10.1137/18M1221448.
Liu, X., & Zhang, X. (2012). Local controllability of multidimensional quasi-linear parabolic equations. SIAM Journal on Control and Optimization, 50(4), 2046–2064. http://dx.doi.org/10.1137/110851808.
Lü, Q. (2011). Some results on the controllability of forward stochastic heat equations with control on the drift. Journal of Functional Analysis, 260(3), 832–851. http://dx.doi.org/10.1016/j.jfa.2010.10.018.
Lü, Q. (2012). Carleman estimate for stochastic parabolic equations and inverse stochastic parabolic problems. Inverse Problems, 28(4), Article 045008. http://dx.doi.org/10.1088/0266-5611/28/4/045008.
Lü, Q. (2013a). Exact controllability for stochastic Schrödinger equations. Journal of Differential Equations, 255(8), 2484–2504. http://dx.doi.org/10.1016/j.jde.2013.06.021.
Lü, Q. (2013b). Observability estimate and state observation problems for stochastic hyperbolic equations. Inverse Problems, 29(9), Article 095011. http://dx.doi.org/10.1088/0266-5611/29/9/095011.
Lü, Q. (2013c). Observability estimate for stochastic Schrödinger equations and its applications. SIAM Journal on Control and Optimization, 51(1), 121–144. http://dx.doi.org/10.1137/110830964.
Lü, Q. (2014). Exact controllability for stochastic transport equations. SIAM Journal on Control and Optimization, 52(1), 397–419. http://dx.doi.org/10.1137/130910373.
Lü, Q. (2019). Well-posedness of stochastic Riccati equations and closed-loop solvability for stochastic linear quadratic optimal control problems. Journal of Differential Equations, 267(1), 180–227. http://dx.doi.org/10.1016/j.jde.2019.01.008.
Lü, Q. (2020). Stochastic linear quadratic optimal control problems for mean-field stochastic evolution equations. ESAIM: Control, Optimisation and Calculus of Variations, 26, 127. http://dx.doi.org/10.1051/cocv/2020081.
Lü, Q., & Wang, T. (0000). Characterization of optimal feedback for linear quadratic control problems of stochastic evolution equations.
Lü, Q., Wang, T., & Zhang, X. (2017a). Characterization of optimal feedback for stochastic linear quadratic control problems. Probability, Uncertainty and Quantitative Risk, 2(1), 11. http://dx.doi.org/10.1186/s41546-017-0022-7.
Lü, Q., & Yin, Z. (2015). Unique continuation for stochastic heat equations. ESAIM: Control, Optimisation and Calculus of Variations, 21(2), 378–398. http://dx.doi.org/10.1051/cocv/2014027.
Lü, Q., Yong, J., & Zhang, X. (2012). Representation of Itô integrals by Lebesgue/Bochner integrals. Journal of the European Mathematical Society, 14(6), 1795–1823. http://dx.doi.org/10.4171/JEMS/347.
Lü, Q., Yong, J., & Zhang, X. (2017b). Erratum to ''Representation of Itô integrals by Lebesgue/Bochner integrals'' (J. Eur. Math. Soc. 14, 1795–1823 (2012)). Journal of the European Mathematical Society, 20(1), 259–260. http://dx.doi.org/10.4171/jems/765.
Lü, Q., & Zhang, X. (0000). Control theory for stochastic distributed parameter systems: Recent progresses and open problems. A survey paper in preparation.
Lü, Q., & Zhang, X. (0000). Mathematical control theory for stochastic partial differential equations. Springer-Verlag.
Lü, Q., & Zhang, X. (2013). Well-posedness of backward stochastic differential equations with general filtration. Journal of Differential Equations, 254(8), 3200–3227. http://dx.doi.org/10.1016/j.jde.2013.01.010.
Lü, Q., & Zhang, X. (2014). SpringerBriefs in Mathematics, General Pontryagin-type stochastic maximum principle and backward stochastic evolution equations in infinite dimensions. New York: Springer International Publishing. http://dx.doi.org/10.1007/978-3-319-06632-5.
Lü, Q., & Zhang, X. (2015a). Global uniqueness for an inverse stochastic hyperbolic problem with three unknowns. Communications on Pure and Applied Mathematics, 68(6), 948–963. http://dx.doi.org/10.1002/cpa.21503.
Lü, Q., & Zhang, X. (2015b). Transposition method for backward stochastic evolution equations revisited, and its application. Mathematical Control & Related Fields, 5(3), 529–555. http://dx.doi.org/10.3934/mcrf.2015.5.529.


Lü, Q., & Zhang, X. (2018). Operator-valued backward stochastic Lyapunov equations in infinite dimensions, and its application. Mathematical Control & Related Fields, 8(1), 337–381. http://dx.doi.org/10.3934/mcrf.2018014.
Lü, Q., & Zhang, X. (2019a). Exact controllability for a refined stochastic wave equation. arXiv:1901.06074.
Lü, Q., & Zhang, X. (2019b). Optimal feedback for stochastic linear quadratic control and backward stochastic Riccati equations in infinite dimensions. arXiv:1901.00978.
Lü, Q., & Zhang, X. (2021). A concise introduction to control theory for stochastic partial differential equations. Mathematical Control & Related Fields, http://dx.doi.org/10.3934/mcrf.2021020.
Lü, Q., Zhang, H., & Zhang, X. (2018). Second order optimality conditions for optimal control problems of stochastic evolution equations. arXiv:1811.07337.
Ma, J., & Yong, J. (2007). Lecture notes in mathematics: vol. 1702, Forward-backward stochastic differential equations and their applications. Berlin, Heidelberg: Springer Berlin Heidelberg. http://dx.doi.org/10.1007/978-3-540-48831-6.
Mahmudov, N. I. (2001a). Controllability of linear stochastic systems. IEEE Transactions on Automatic Control, 46(5), 724–731. http://dx.doi.org/10.1109/9.920790.
Mahmudov, N. I. (2001b). Controllability of linear stochastic systems in Hilbert spaces. Journal of Mathematical Analysis and Applications, 259(1), 64–82. http://dx.doi.org/10.1006/jmaa.2000.7386.
Mahmudov, N. I. (2003). Controllability of semilinear stochastic systems in Hilbert spaces. Journal of Mathematical Analysis and Applications, 288(1), 197–211. http://dx.doi.org/10.1016/s0022-247x(03)00592-4.
Majda, A. J., Timofeyev, I., & Vanden Eijnden, E. (2001). A mathematical framework for stochastic climate models. Communications on Pure and Applied Mathematics, 54(8), 891–974. http://dx.doi.org/10.1002/cpa.1014.
Munteanu, I. (2018). Boundary stabilization of the stochastic heat equation by proportional feedbacks. Automatica, 87, 152–158. http://dx.doi.org/10.
Tang, S., & Zhang, X. (2009). Null controllability for forward and backward stochastic parabolic equations. SIAM Journal on Control and Optimization, 48(4), 2191–2216. http://dx.doi.org/10.1137/050641508.
Tessitore, G. (1992). Some remarks on the Riccati equation arising in an optimal control problem with state- and control-dependent noise. SIAM Journal on Control and Optimization, 30(3), 717–744. http://dx.doi.org/10.1137/0330040.
Tessitore, G. (1993). Some remarks on the mean square stabilizability of a linear SPDE. Dynamic Systems and Applications, 2, 251–266.
Tessitore, G. (1994). Hautus condition for the pathwise stabilizability of an infinite dimensional stochastic system. Stochastic Analysis and Applications, 12(5), 617–637. http://dx.doi.org/10.1080/07362999408809376.
Tudor, C. (1989). Optimal control for semilinear stochastic evolution equations. Applied Mathematics & Optimization, 20(1), 319–331. http://dx.doi.org/10.1007/BF01447659.
Wang, T. (2020). On closed-loop equilibrium strategies for mean-field stochastic linear quadratic problems. ESAIM: Control, Optimisation and Calculus of Variations, 26, 41. http://dx.doi.org/10.1051/cocv/2019057.
Wu, B., Chen, Q., & Wang, Z. (2020). Carleman estimates for a stochastic degenerate parabolic equation and applications to null controllability and an inverse random source problem. Inverse Problems, 36(7), Article 075014. http://dx.doi.org/10.1088/1361-6420/ab89c3.
Wu, H.-N., & Zhang, X.-M. (2020). Exponential stabilization for 1-D linear Itô-type state-dependent stochastic parabolic PDE systems via static output feedback. Automatica, 121, Article 109173. http://dx.doi.org/10.1016/j.automatica.2020.109173.
Yan, Y. (2018). Carleman estimates for stochastic parabolic equations with Neumann boundary conditions and applications. Journal of Mathematical Analysis and Applications, 457(1), 248–272. http://dx.doi.org/10.1016/j.jmaa.2017.08.003.
Yan, Y., & Sun, F. (2011). Insensitizing controls for a forward stochastic heat
1016/j.automatica.2017.10.003, URL: https://linkinghub.elsevier.com/retrieve/pii/ equation. Journal of Mathematical Analysis and Applications, 384(1), 138–150. http:
S0005109817305022. //dx.doi.org/10.1016/j.jmaa.2011.05.058, URL: https://linkinghub.elsevier.com/
Murray, R. M. (Ed.), (2003). Control in an information rich world: report of the retrieve/pii/S0022247X11005105.
panel on future directions in control, dynamics, and systems. Society for industrial Yan, L., Wu, B., Lu, S., & Wang, Y. (2020). Null controllability and inverse source
and applied mathematics, http://dx.doi.org/10.1137/1.9780898718010, URL: http: problem for stochastic grushin equation with boundary degeneracy and singularity.
//epubs.siam.org/doi/book/10.1137/1.9780898718010. arXiv:2001.01877, URL: http://arxiv.org/abs/2001.01877.
van Neerven, J. M. A. M., Veraar, M. C., & Weis, L. (2008). Stochastic evolution Yang, D., & Zhong, J. (2016). Observability inequality of backward stochastic heat
equations in UMD banach spaces. Journal of Functional Analysis, 255(4), 940–993. equations for measurable sets and its applications. SIAM Journal on Control
http://dx.doi.org/10.1016/j.jfa.2008.03.015, URL: https://linkinghub.elsevier.com/ and Optimization, 54(3), 1157–1175. http://dx.doi.org/10.1137/15M1033289, URL:
retrieve/pii/S0022123608001146. http://epubs.siam.org/doi/10.1137/15M1033289.
Peng, S. (1990). A general stochastic maximum principle for optimal control problems. Yin, Z. (2015). A quantitative internal unique continuation for stochastic parabolic
SIAM Journal on Control and Optimization, 28(4), 966–979. http://dx.doi.org/10. equations. Mathematical Control & Related Fields, 5(1), 165–176. http://dx.doi.org/
1137/0328054, URL: http://epubs.siam.org/doi/10.1137/0328054. 10.3934/mcrf.2015.5.165, URL: http://aimsciences.org//article/doi/10.3934/mcrf.
Russell, D. L. (1978). Controllability and stabilizability theory for linear partial 2015.5.165.
differential equations: Recent progress and open questions. SIAM Review, 20(4), Yong, J. (2013). Linear-quadratic optimal control problems for mean-field stochastic
639–739. http://dx.doi.org/10.1137/1020095, URL: http://epubs.siam.org/doi/10. differential equations. SIAM Journal on Control and Optimization, 51(4), 2809–
1137/1020095. 2838. http://dx.doi.org/10.1137/120892477, URL: http://epubs.siam.org/doi/10.
Schatten, R. (1970). Ergebnisse der mathematik und ihrer grenzgebiete: vol. 27, Norm ideals 1137/120892477.
of completely continuous operators. Berlin-New York: Springer-Verlag, http://dx.
Yong, J. (2015). Linear-quadratic optimal control problems for mean-field stochastic
doi.org/10.1007/978-3-662-35155-0, URL: http://link.springer.com/10.1007/978-
differential equations–time-consistent solutions. Transactions of the American Math-
3-662-35155-0.
ematical Society, 369(8), 5467–5523. http://dx.doi.org/10.1090/tran/6502, URL:
Schlick, T. (1995). Modeling superhelical DNA: recent analytical and dynamic ap- https://www.ams.org/tran/2017-369-08/S0002-9947-2015-06502-7/.
proaches. Current Opinion in Structural Biolog, 5(2), 245–262. http://dx.doi.org/10.
Yong, J., & Zhou, X. Y. (1999). Stochastic controls: Hamiltonian systems and HJB
1016/0959-440x(95)80083-2, URL: https://linkinghub.elsevier.com/retrieve/pii/
equations. New York, NY: Springer-Verlag, http://dx.doi.org/10.1007/978-1-4612-
0959440X95800832.
1466-3, URL: http://link.springer.com/10.1007/978-1-4612-1466-3.
Stannat, W., & Wessels, L. (0000). Peng’s maximum principle for stochastic partial
Yosida, K. (1995). Classics in mathematics: vol. 123, Functional analysis. Berlin, Heidel-
differential equations.
berg: Springer Berlin Heidelberg, http://dx.doi.org/10.1007/978-3-642-61859-8,
Sun, J., & Yong, J. (2020). Stochastic linear-quadratic optimal control theory: Open-loop
URL: http://link.springer.com/10.1007/978-3-642-61859-8.
and closed-loop solutions. Springer International Publishing, http://dx.doi.org/10.
1007/978-3-030-20922-3. Yuan, G. (2015). Determination of two kinds of sources simultaneously for a stochastic
Sunahara, Y., Aihara, S., Kamejima, K., & Kishino, K. (1977). On stochastic observability wave equation. Inverse Problems, 31(8), Article 085003. http://dx.doi.org/10.1088/
and controllability for nonlinear distributed parameter systems. Information and 0266-5611/31/8/085003, URL: https://iopscience.iop.org/article/10.1088/0266-
Control, 34(4), 348–371. http://dx.doi.org/10.1016/s0019-9958(77)90399-0, URL: 5611/31/8/085003.
https://linkinghub.elsevier.com/retrieve/pii/S0019995877903990. Yuan, G. (2017). Conditional stability in determination of initial data for stochas-
Sunahara, Y., Kabeuchi, T., Asada, Y., Aihara, S., & Kishino, K. (1974). On stochastic tic parabolic equations. Inverse Problems, 33(3), Article 035014. http://dx.doi.
controllability for nonlinear systems. IEEE Transactions on Automatic Control, 19(1), org/10.1088/1361-6420/aa5d7a, URL: https://iopscience.iop.org/article/10.1088/
49–54. http://dx.doi.org/10.1109/tac.1974.1100464, URL: http://ieeexplore.ieee. 1361-6420/aa5d7a.
org/document/1100464/. Zabczyk, J. (1981). Controllability of stochastic linear systems. Systems & Control Let-
Tang, S., & Li, X. (2017). Maximum principle for optimal control of distributed ters, 1(1), 25–31. http://dx.doi.org/10.1016/s0167-6911(81)80008-4, URL: https:
parameter stochastic systems with random jumps. In Lecture notes in pure and applied //linkinghub.elsevier.com/retrieve/pii/S0167691181800084.
mathematics: vol. 152, Differential equations,dynamical systems, and control science Zhang, X. (2008). Unique continuation for stochastic parabolic equations. Differential
(pp. 867–890). New York: Routledge, URL: https://www.taylorfrancis.com/books/ Integral Equations, 21(1–2), 81–93, URL: https://projecteuclid.org:443/euclid.die/
9781315141237. 1356039060.


Zhang, X. (2010). Remarks on the controllability of some quasilinear equations. In Series in contemporary applied mathematics, Some problems on nonlinear hyperbolic equations and applications (pp. 437–452). Beijing: Higher Education Press, http://dx.doi.org/10.1142/9789814322898_0020, URL: http://www.worldscientific.com/doi/abs/10.1142/9789814322898_0020.
Zhang, X. (2011). A unified controllability/observability theory for some stochastic and deterministic partial differential equations. In Proceedings of the international congress of mathematicians 2010 (pp. 3008–3034). New Delhi: Hindustan Book Agency (HBA), http://dx.doi.org/10.1142/9789814324359_0177, URL: http://www.worldscientific.com/doi/abs/10.1142/9789814324359_0177.
Zhou, X. Y. (1993). On the necessary conditions of optimal controls for stochastic partial differential equations. SIAM Journal on Control and Optimization, 31(6), 1462–1478. http://dx.doi.org/10.1137/0331068, URL: http://epubs.siam.org/doi/10.1137/0331068.
Zhou, Y., & Lei, Z. (2007). Local exact boundary controllability for nonlinear wave equations. SIAM Journal on Control and Optimization, 46(3), 1022–1051. http://dx.doi.org/10.1137/060650222, URL: http://epubs.siam.org/doi/10.1137/060650222.
Zuazua, E. (2007). Controllability and observability of partial differential equations: Some results and open problems. In Handbook of differential equations: evolutionary equations (vol. 3) (pp. 527–621). Elsevier, http://dx.doi.org/10.1016/s1874-5717(07)80010-7, URL: https://linkinghub.elsevier.com/retrieve/pii/S1874571707800107.
