
Progress in Nonlinear Differential Equations

and Their Applications


Subseries in Control
94

Tatsien Li
Bopeng Rao

Boundary
Synchronization
for Hyperbolic
Systems
Progress in Nonlinear Differential Equations
and Their Applications

PNLDE Subseries in Control

Volume 94

Series Editor
Jean-Michel Coron, Laboratory Jacques-Louis Lions, Pierre and Marie Curie
University, Paris, France

Editorial Board
Viorel Barbu, Faculty of Mathematics, Alexandru Ioan Cuza University, Iaşi,
Romania
Piermarco Cannarsa, Department of Mathematics, University of Rome Tor Vergata,
Rome, Italy
Karl Kunisch, Institute of Mathematics and Scientific Computing, University of
Graz, Graz, Austria
Gilles Lebeau, Dieudonné Laboratory J.A., University of Nice Sophia Antipolis,
Nice, France
Tatsien Li, School of Mathematical Sciences, Fudan University, Shanghai, China
Shige Peng, Institute of Mathematics, Shandong University, Jinan, China
Eduardo Sontag, Department of Electrical & Computer Engineering, Northeastern
University, Boston, MA, USA
Enrique Zuazua, Department of Mathematics, Autonomous University of Madrid,
Madrid, Spain

More information about this subseries at http://www.springer.com/series/15137


Tatsien Li Bopeng Rao

Boundary Synchronization
for Hyperbolic Systems
Tatsien Li
School of Mathematical Sciences
Fudan University
Shanghai, China

Bopeng Rao
Institut de Recherche Mathématique Avancée
Université de Strasbourg
Strasbourg, France

ISSN 1421-1750 ISSN 2374-0280 (electronic)


Progress in Nonlinear Differential Equations and Their Applications
PNLDE Subseries in Control
ISBN 978-3-030-32848-1 ISBN 978-3-030-32849-8 (eBook)
https://doi.org/10.1007/978-3-030-32849-8
© Springer Nature Switzerland AG 2019
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This book is published under the imprint Birkhäuser, www.birkhauser-science.com by the registered
company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Contents

1 Introduction and Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Exact Boundary Null Controllability . . . . . . . . . . . . . . . . . . . . 4
1.3 Exact Boundary Synchronization . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Exact Boundary Synchronization by p-Groups . . . . . . . . . . . . . 7
1.5 Approximate Boundary Null Controllability . . . . . . . . . . . . . . . 9
1.6 Approximate Boundary Synchronization . . . . . . . . . . . . . . . . . . 11
1.7 Approximate Boundary Synchronization by p-Groups . . . . . . . . 13
1.8 Induced Approximate Boundary Synchronization . . . . . . . . . . . 15
1.9 Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2 Algebraic Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.1 Bi-orthonormality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2 Kalman’s Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3 Condition of Cp-Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . 25

Part I Synchronization for a Coupled System of Wave Equations with Dirichlet Boundary Controls: Exact Boundary Synchronization
3 Exact Boundary Controllability and Non-exact Boundary
Controllability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.1 Exact Boundary Controllability . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2 Non-exact Boundary Controllability . . . . . . . . . . . . . . . . . . . . . 39
4 Exact Boundary Synchronization and Non-exact Boundary
Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .... 43
4.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .... 43
4.2 Condition of Compatibility . . . . . . . . . . . . . . . . . . . . . . . .... 44
4.3 Exact Boundary Synchronization and Non-exact Boundary
Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .... 47


5 Exactly Synchronizable States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49


5.1 Attainable Set of Exactly Synchronizable States . . . . . . . . . . . . 49
5.2 Determination of Exactly Synchronizable States . . . . . . . . . . . . 50
5.3 Approximation of Exactly Synchronizable States . . . . . . . . . . . 56
6 Exact Boundary Synchronization by Groups . . . . . . . . . . . . . . . . . 59
6.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
6.2 A Basic Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
6.3 Condition of Cp-Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . 63
6.4 Exact Boundary Synchronization by p-Groups . . . . . . . . . . . . . 65
7 Exactly Synchronizable States by p-Groups . . . . . . ............ 69
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . ............ 69
7.2 Determination of Exactly Synchronizable States
by p-Groups . . . . . . . . . . . . . . . . . . . . . . . . . . ............ 70
7.3 Determination of Exactly Synchronizable States
by p-Groups (Continued) . . . . . . . . . . . . . . . . . ............ 73
7.4 Precise Consideration on the Exact Boundary
Synchronization by 2-Groups . . . . . . . . . . . . . ............ 76

Part II Synchronization for a Coupled System of Wave Equations with Dirichlet Boundary Controls: Approximate Boundary Synchronization
8 Approximate Boundary Null Controllability . . . . . . . . . . . . . . . . . . 81
8.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
8.2 D-Observability for the Adjoint Problem . . . . . . . . . . . . . . . . . 82
8.3 Kalman’s Criterion. Total (Direct and Indirect) Controls . . . . . . 86
8.4 Sufficiency of Kalman's Criterion for T > 0 Large Enough
for the Nilpotent System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
8.5 Sufficiency of Kalman's Criterion for T > 0 Large Enough
for 2 × 2 Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
8.6 The Unique Continuation for Nonharmonic Series . . . . . . . ... 97
8.7 Sufficiency of Kalman's Criterion for T > 0 Large Enough
in the One-Space-Dimensional Case . . . . . . . . . . . . . . . . . . . . . 100
8.8 An Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
9 Approximate Boundary Synchronization . . . . . . . . . . . . . . . . . . . . 105
9.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
9.2 Condition of C1-Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . 106
9.3 Fundamental Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
9.4 Properties Related to the Number of Total Controls . . . . . . . . . 108

10 Approximate Boundary Synchronization by p-Groups . . . . . . . . . . 115


10.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
10.2 Fundamental Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
10.3 Properties Related to the Number of Total Controls . . . . . . . . . 118
10.4 Necessity of the Condition of Cp-Compatibility . . . . . . . . . . . . 125
10.5 Approximate Boundary Null Controllability . . . . . . . . . . . . . . . 130
11 Induced Approximate Boundary Synchronization . . . . . . . . . . . . . . 133
11.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
11.2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
11.3 Induced Approximate Boundary Synchronization . . . . . . . . . . . 138
11.4 Minimal Number of Direct Controls . . . . . . . . . . . . . . . . . . . . 142
11.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

Part III Synchronization for a Coupled System of Wave Equations with Neumann Boundary Controls: Exact Boundary Synchronization
12 Exact Boundary Controllability and Non-exact Boundary
Controllability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
12.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
12.2 Proof of Lemma 12.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
12.3 Observability Inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
12.4 Exact Boundary Controllability . . . . . . . . . . . . . . . . . . . . . . . . 166
12.5 Non-exact Boundary Controllability . . . . . . . . . . . . . . . . . . . . . 169
13 Exact Boundary Synchronization and Non-exact Boundary
Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
13.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
13.2 Condition of C1-Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . 174
13.3 Exact Boundary Synchronization and Non-exact Boundary
Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
13.4 Attainable Set of Exactly Synchronizable States . . . . . . . . . . . . 177
14 Exact Boundary Synchronization by p-Groups . . . . . . . . . . . . . . . . 179
14.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
14.2 Condition of Cp-Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . 180
14.3 Exact Boundary Synchronization by p-Groups
and Non-exact Boundary Synchronization by p-Groups . . . . . . . 182
14.4 Attainable Set of Exactly Synchronizable States
by p-Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
15 Determination of Exactly Synchronizable States by p-Groups . . . . 185
15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
15.2 Determination and Estimation of Exactly Synchronizable
States by p-Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186

15.3 Determination of Exactly Synchronizable States . . . . . . . . . . . . 192


15.4 Determination of Exactly Synchronizable States
by 3-Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193

Part IV Synchronization for a Coupled System of Wave Equations with Neumann Boundary Controls: Approximate Boundary Synchronization
16 Approximate Boundary Null Controllability . . . . . . . . . . . . . . . . . . 201
16.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
16.2 Equivalence Between the Approximate Boundary Null
Controllability and the D-Observability . . . . . . . . . . . . . . . . . . 202
16.3 Kalman’s Criterion. Total (Direct and Indirect) Controls . . . . . . 207
16.4 Sufficiency of Kalman's Criterion for T > 0 Large Enough
in the One-Space-Dimensional Case . . . . . . . . . . . . . . . . . . . . . 209
16.5 Unique Continuation for a Cascade System of Two
Wave Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
17 Approximate Boundary Synchronization . . . . . . . . . . . . . . . . . . . . 215
17.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
17.2 Condition of C1-Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . 216
17.3 Fundamental Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
17.4 Properties Related to the Number of Total Controls . . . . . . . . . 218
17.5 An Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
18 Approximate Boundary Synchronization by p-Groups . . . . . . . . . . 223
18.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
18.2 Fundamental Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
18.3 Properties Related to the Number of Total Controls . . . . . . . . . 226

Part V Synchronization for a Coupled System of Wave Equations with Coupled Robin Boundary Controls: Exact Boundary Synchronization
19 Preliminaries on Problem (III) and (III′) . . . . . . . . . . . . . . . . . . . . 231
19.1 Regularity of Solutions with Neumann Boundary
Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
19.2 Well-Posedness of a Coupled System of Wave Equations
with Coupled Robin Boundary Conditions . . . . . . . . . . . . . . . . 234
19.3 Regularity of Solutions to Problem (III) and (III′) . . . . . . . . . . 237
20 Exact Boundary Controllability and Non-exact Boundary
Controllability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
20.1 Exact Boundary Controllability . . . . . . . . . . . . . . . . . . . . . . . . 239
20.2 Non-exact Boundary Controllability . . . . . . . . . . . . . . . . . . . . . 240

21 Exact Boundary Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . 243


21.1 Exact Boundary Synchronization . . . . . . . . . . . . . . . . . . . . . . . 243
21.2 Conditions of C1-Compatibility . . . . . . . . . . . . . . . . . . . . . . . . 244
22 Determination of Exactly Synchronizable States . . . . . . . . . . . . . . . 247
22.1 Determination of Exactly Synchronizable States . . . . . . . . . . . . 247
22.2 Estimation of Exactly Synchronizable States . . . . . . . . . . . . . . . 251
23 Exact Boundary Synchronization by p-Groups . . . . . . . . . . . . . . . . 255
23.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
23.2 Exact Boundary Synchronization by p-Groups . . . . . . . . . . . . . 256
24 Necessity of the Conditions of Cp-Compatibility . . . . . . . . . . . . . . . 259
24.1 Condition of Cp-Compatibility for the Internal Coupling
Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
24.2 Condition of Cp-Compatibility for the Boundary Coupling
Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
25 Determination of Exactly Synchronizable States by p-Groups . . . . 267
25.1 Determination of Exactly Synchronizable States
by p-Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
25.2 Estimation of Exactly Synchronizable States by p-Groups . . . . . 271

Part VI Synchronization for a Coupled System of Wave Equations with Coupled Robin Boundary Controls: Approximate Boundary Synchronization
26 Some Algebraic Lemmas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
27 Approximate Boundary Null Controllability . . . . . . . . . . . . . . . . . . 281
28 Unique Continuation for Robin Problems . . . . . . . . . . . . . . . . . . . . 285
28.1 General Consideration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
28.2 Examples in Higher Dimensional Cases . . . . . . . . . . . . . . . . . . 287
28.3 One-Dimensional Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
29 Approximate Boundary Synchronization . . . . . . . . . . . . . . . . . . . . 297
29.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
29.2 Fundamental Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
30 Approximate Boundary Synchronization by p-Groups . . . . . . . . . . 305
30.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
30.2 Fundamental Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
31 Approximately Synchronizable States by p-Groups . . . . . . . . . . . . . 315

32 Closing Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321


32.1 Related Literatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
32.2 Prospects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
Chapter 1
Introduction and Overview

An introduction and overview of the whole book can be found in this chapter.

1.1 Introduction

Synchronization is a widespread natural phenomenon. Thousands of fireflies may


twinkle at the same time; audiences in the theater can applaud with a rhythmic beat;
pacemaker cells of the heart function simultaneously; and field crickets give out a
unanimous cry. All these are phenomena of synchronization (cf. [71, 74]).
In principle, synchronization happens when different individuals possess likeness
in nature, that is, they conform essentially to the same governing equation, and
meanwhile, the individuals should bear a certain coupled relation.
The phenomenon of synchronization was first observed by Huygens [22] in 1665.
The research on synchronization from a mathematical point of view dates back to
Wiener [81] in the 1950s.
The previous studies focused on systems described by ordinary differential equa-
tions (ODEs), such as


X_i' = f(t, X_i) + ∑_{j=1}^{N} A_{ij} X_j   (i = 1, ⋯, N),    (1.1)

where X_i (i = 1, ⋯, N) are n-dimensional state vectors, " ' " stands for the time
derivative, A_{ij} (i, j = 1, ⋯, N) are n × n matrices, and f(t, X) is an
n-dimensional vector function independent of i = 1, ⋯, N (cf. [71, 74]). The right-
hand side of (1.1) shows that every X i (i = 1, · · · , N ) possesses two basic features:
satisfying a fundamental governing equation and bearing a coupled relation among


one another. If for any given initial data

t = 0:   X_i = X_i^{(0)}   (i = 1, ⋯, N),    (1.2)

the solution X = (X 1 , · · · , X N )T = X (t) to the system satisfies

X i (t) − X j (t) → 0 (i, j = 1, · · · , N ) as t → +∞, (1.3)

namely, all the states X i (t) (i = 1, · · · , N ) tend to coincide with each other
as t → +∞, then we say that the system possesses the synchronization in the
consensus sense, or, in particular, if the solution X = X (t) satisfies

X i (t) − a(t) → 0 (i = 1, · · · , N ) as t → +∞, (1.4)

where a(t) is a state vector which is a priori unknown, then we say that the system
possesses the synchronization in the pinning sense. Obviously, the synchronization
in the pinning sense implies that in the consensus sense. These kinds of synchro-
nizations are all called the asymptotic synchronization, which should be realized
on the infinite time interval [0, +∞).
What the authors of this monograph have been doing in the recent years is
to extend, in both concept and method, the universal phenomena of synchroniza-
tion from finite-dimensional dynamical systems of ordinary differential equations
to infinite-dimensional dynamical systems of partial differential equations (PDEs).
This should be the first attempt in this regard, and the earliest relevant paper was
published by the authors in [42] in a special issue of Chinese Annals of Mathematics
in honor of the scientific heritage of Lions, the results of which were announced in
2012 in CRAS [41].
To fix the idea, in this chapter we will only consider the following coupled
system of wave equations with Dirichlet boundary conditions (cf. Part 1 and Part 2 of
this monograph; the corresponding consideration with Neumann or coupled Robin
boundary conditions will be given in Part 3 and Part 4 or in Part 5 and Part 6):
\begin{cases}
U'' - \Delta U + AU = 0 & \text{in } (0, +\infty) \times \Omega,\\
U = 0 & \text{on } (0, +\infty) \times \Gamma_0,\\
U = DH & \text{on } (0, +\infty) \times \Gamma_1
\end{cases}    (1.5)

with the initial data

t = 0:   U = Û_0,   U' = Û_1   in Ω,    (1.6)

where Ω is a bounded domain with smooth boundary Γ = Γ_0 ∪ Γ_1 such that
Γ̄_0 ∩ Γ̄_1 = ∅ and mes(Γ_1) > 0, U = (u^{(1)}, ⋯, u^{(N)})^T is the state variable,
Δ = ∑_{i=1}^{n} ∂²/∂x_i² is the n-dimensional Laplacian operator, A ∈ M_N(ℝ) is an N × N
coupling matrix with constant elements, D, called the boundary control matrix,
is an N × M full column-rank matrix (M ≤ N) with constant elements, and H =
(h^{(1)}, ⋯, h^{(M)})^T stands for the boundary control.
Here, a boundary control matrix D is added to the boundary condition on Γ_1.
This approach is more flexible: we will see in what follows that the introduction of
D enables us to simplify the statement and the discussion to a great extent.
For systems governed by PDEs, we can similarly consider the asymptotic syn-
chronization on an infinite time interval as in the case of systems governed by ODEs,
namely, we may ask the following questions: under what conditions do the sys-
tem states with any given initial data possess the asymptotic synchronization in the
consensus sense:

u (i) (t, ·) − u ( j) (t, ·) → 0 (i, j = 1, . . . , N ) as t → +∞, (1.7)

or, in particular, if the system states with any given initial data possess the asymptotic
synchronization in the pinning sense:

u (i) (t, ·) − u(t, ·) → 0 (i = 1, · · · , N ) as t → +∞, (1.8)

where u = u(t, ·) is called the asymptotically synchronizable state, which is a
priori unknown? If the answer to this question is positive, these conclusions should
be realized spontaneously on the infinite time interval [0, +∞), as a result naturally
determined by the nature of the system itself.
But for systems governed by partial differential equations, as there are bound-
ary conditions, another possibility exists, i.e., to give artificial intervention to the
evolution of state variables through appropriate boundary controls, which combines
synchronization with controllability and introduces the study of synchronization to
the field of control. This is also a new perspective on the investigation of synchro-
nization for systems of partial differential equations.
Here, the boundary control comes from the boundary condition U = DH on Γ_1.
The elements in H are adjustable boundary controls, the number of which is M (≤ N).
Putting the boundary control matrix D before H provides many possibilities for
combining boundary controls.
On the other hand, precisely due to the artificial intervention of control, we can
make a higher demand, i.e., to meet the requirement of synchronization within a
limited time, instead of waiting until t → +∞.
The corresponding question is whether there is a suitably large T > 0, such that
for any given initial data (Û_0, Û_1), through proper boundary controls with compact
support in [0, T] (that is, to exert the boundary controls on the time interval [0, T]
and abandon them from the time t = T on), the solution U = U(t, x) to the
corresponding problem (1.5)–(1.6) satisfies, as t ≥ T,

u^{(1)}(t, ·) ≡ u^{(2)}(t, ·) ≡ ⋯ ≡ u^{(N)}(t, ·) := u(t, ·),    (1.9)

that is, all state variables become the same from the time t = T on, while u = u(t, x)
is called the corresponding exactly synchronizable state, which is unknown before-
hand. If the above is satisfied, we say that the system possesses the exact boundary

synchronization. Here, “exact” means that the synchronization of state variables is


exact without error, and the so-called “boundary” indicates the means or method of
control, i.e., to realize the synchronization through boundary controls.
In the above definition of synchronization, through boundary controls on the time
interval [0, T ], we not only demand synchronization at the time t = T , but also
require synchronization to continue when t ≥ T, i.e., after all boundary controls are
eliminated. This kind of synchronization is not a short-lived one, but exists once and
for all, as is needed in applications.
We have to point out that in Part 1 and Part 2 of this monograph we always
assume that the initial value (Û_0, Û_1) ∈ (L²(Ω))^N × (H^{-1}(Ω))^N, and the solution
to problem (1.5)–(1.6) belongs to the corresponding function space, which will not
be specified one by one in this chapter. As to the synchronization considered in the
framework of classical solutions in the one-space-dimensional case, see [20, 21, 34,
57, 58].

1.2 Exact Boundary Null Controllability

The exact boundary synchronization on a finite time interval is closely related to the
exact boundary null controllability.
If there exists T > 0, such that for any given initial data (Û_0, Û_1), through bound-
ary controls with compact support in [0, T], the solution U = U(t, x) to the corre-
sponding problem (1.5)–(1.6) satisfies, as t ≥ T,

u^{(1)}(t, x) ≡ u^{(2)}(t, x) ≡ ⋯ ≡ u^{(N)}(t, x) ≡ 0,    (1.10)

then we say that the system possesses the so-called exact boundary null controllability
in control theory. This is of course a very special case of the abovementioned exact
boundary synchronization.
For a single wave equation, the exact boundary null controllability can be proved
by the HUM method proposed by Lions [61, 62]. For a coupled system of wave equa-
tions, since for the purpose of studying the synchronization the coupling matrix A
should be an arbitrarily given matrix, the proof of the exact boundary null control-
lability cannot be simply reduced to the case of a single wave equation; however,
using a compact perturbation result in [69], it is possible to establish an observability
inequality for the corresponding adjoint system, and then the HUM method can still
be applied.
As a result, we can get the following conclusions (cf. Chap. 3):
(1) Assume that M = N and let the domain Ω satisfy the usual multiplier geo-
metrical condition (cf. [7, 26, 61, 62]): there exists x_0 ∈ ℝ^n, such that, setting
m = x − x_0, we have

(m, ν) > 0, ∀x ∈ Γ_1;   (m, ν) ≤ 0, ∀x ∈ Γ_0,    (1.11)

where ν is the unit outward normal vector, and (·, ·) denotes the inner product in Rn .

Then through N boundary controls, when T > 0 is suitably large, the exact
boundary null controllability can surely be realized for all (Û_0, Û_1) ∈ (L²(Ω))^N ×
(H^{-1}(Ω))^N.
(2) Assume that M < N, that is, the number of boundary controls is fewer than
N. Then no matter how large T > 0 is, the exact boundary null controllability cannot
be achieved for all (Û_0, Û_1) ∈ (L²(Ω))^N × (H^{-1}(Ω))^N.
Thus, in the case of partial lack of boundary controls, which kind of controllability
in a weaker sense can be realized by means of fewer boundary controls? It is a
significant problem from both theoretical and practical points of view, and can be
discussed in many different cases as will be shown in the sequel.

1.3 Exact Boundary Synchronization

Really meaningful synchronization should exclude the trivial situation of null con-
trollability, and thus we get the following results (cf. Chap. 4):
Assume that the system under consideration is exactly synchronizable, but not
exactly null controllable, that is to say, assume that the system is exactly synchro-
nizable with rank(D) < N . Then the coupling matrix A = (ai j ) should satisfy the
following condition of compatibility:


∑_{p=1}^{N} a_{kp} = a   (k = 1, ⋯, N),    (1.12)

where a is a constant independent of k = 1, ⋯, N, namely, the sum of all elements
in every row of A is the same (the row-sum condition).
Let e_1 = (1, …, 1)^T. The condition of compatibility (1.12) is equivalent to the
fact that e_1 is an eigenvector of A corresponding to the eigenvalue a.
Let

C_1 = \begin{pmatrix}
1 & -1 & & & \\
 & 1 & -1 & & \\
 & & \ddots & \ddots & \\
 & & & 1 & -1
\end{pmatrix}_{(N-1) \times N}    (1.13)

be the corresponding synchronization matrix. C1 is a full row-rank matrix. Obvi-


ously, the synchronization requirement (1.9) can be written as

t ≥ T:   C_1 U(t, x) ≡ 0.    (1.14)

Moreover, Ker(C_1) = Span{e_1}, and then the condition of compatibility (1.12) is
equivalent to the fact that Ker(C_1) is a one-dimensional invariant subspace of A:

A Ker(C_1) ⊆ Ker(C_1).    (1.15)

Then, the condition of compatibility (1.12), often called the condition of
C_1-compatibility in what follows, is also equivalent to the existence of a unique
matrix A_1 of order (N − 1), such that

C_1 A = A_1 C_1.    (1.16)

Such a matrix A_1 is called the reduced matrix of A by C_1.
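A purely illustrative aside (not part of the original text): the following Python sketch builds the synchronization matrix C_1 of (1.13), checks the row-sum condition (1.12), and recovers the reduced matrix A_1 of (1.16); the 3 × 3 coupling matrix A used here is a hypothetical example.

import numpy as np

def synchronization_matrix(N):
    # Full row-rank (N-1) x N matrix C_1 of (1.13): the k-th row is e_k - e_{k+1}.
    C1 = np.zeros((N - 1, N))
    for k in range(N - 1):
        C1[k, k], C1[k, k + 1] = 1.0, -1.0
    return C1

def row_sum_condition(A, tol=1e-12):
    # Condition (1.12): every row of A has the same sum a.
    s = A.sum(axis=1)
    return np.allclose(s, s[0], atol=tol), s[0]

def reduced_matrix(A, C1):
    # Solve C_1 A = A_1 C_1 for A_1; since C_1 has full row rank,
    # A_1 = C_1 A C_1^+ works whenever Ker(C_1) is invariant for A.
    return C1 @ A @ np.linalg.pinv(C1)

A = np.array([[0.0,  2.0, -1.0],
              [1.0,  0.0,  0.0],
              [2.0, -3.0,  2.0]])    # hypothetical matrix with common row sum a = 1
C1 = synchronization_matrix(3)
ok, a = row_sum_condition(A)
A1 = reduced_matrix(A, C1)
print(ok, a)                          # True 1.0
print(np.allclose(C1 @ A, A1 @ C1))   # True: (1.16) holds

Here the Moore–Penrose inverse is merely a convenient way to extract the unique A_1; any right inverse of C_1 would serve the same purpose.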


Under the condition of C_1-compatibility, let

W_1 = (w^{(1)}, ⋯, w^{(N-1)})^T = C_1 U.    (1.17)

It is easy to see that the original system (1.5) for the variable U can be reduced to
the following self-closed system for the variable W_1:

\begin{cases}
W_1'' - \Delta W_1 + A_1 W_1 = 0 & \text{in } (0, +\infty) \times \Omega,\\
W_1 = 0 & \text{on } (0, +\infty) \times \Gamma_0,\\
W_1 = C_1 D H & \text{on } (0, +\infty) \times \Gamma_1.
\end{cases}    (1.18)

Obviously, under the condition of C_1-compatibility, the exact boundary synchro-
nization of the original system (1.5) for U is equivalent to the exact boundary null
controllability of the reduced system (1.18) for W_1. Hence, we have:
Assume that the condition of C_1-compatibility is satisfied and the domain Ω
satisfies the usual multiplier geometrical condition. Then there exists a suitably large
T > 0, such that for any boundary control matrix D with rank(C_1 D) = N − 1, the
exact boundary synchronization of system (1.5) can be realized at the time t = T.
On the contrary, if rank(C_1 D) < N − 1, in particular, if rank(D) < N − 1, i.e., the
number of boundary controls is fewer than (N − 1), then no matter how large T > 0
is, the exact boundary synchronization can never be achieved for all initial data
(Û_0, Û_1) ∈ (L²(Ω))^N × (H^{-1}(Ω))^N.
Therefore, the above condition of C1 -compatibility is not only sufficient but also
necessary to ensure the exact boundary synchronization. Under this condition, for the
boundary control matrix D such that rank(D) = rank(C1 D) = N − 1, appropriately
chosen (N − 1) boundary controls suffice to meet the requirement.

We point out that in the study of synchronization for systems governed by ODEs,
the row-sum condition (1.12) is imposed on the system according to physical mean-
ings as a reasonable sufficient condition. However, for our systems governed by
PDEs and for the synchronization on a finite time interval, it is actually a necessary
condition, which makes the theory of synchronization more complete for systems
governed by PDEs.
In the case that system (1.5) possesses the exact boundary synchronization at the
time T > 0, for t ≥ T the exactly synchronizable state u = u(t, x) satisfies

\begin{cases}
u'' - \Delta u + au = 0 & \text{in } (T, +\infty) \times \Omega,\\
u = 0 & \text{on } (T, +\infty) \times \Gamma,
\end{cases}    (1.19)

where a is given by the row-sum condition (1.12). Thus, the evolution of the exactly
synchronizable state u = u(t, x) with respect to t can be uniquely determined by its
initial data:

t = T:   u = û_0,   u' = û_1.    (1.20)

However, generally speaking, the value (û_0, û_1) of (u, u') at t = T should depend
on the original initial data (Û_0, Û_1) as well as on the boundary controls H which
realize the exact boundary synchronization. Moreover, the value of (û_0, û_1) at t = T
for a given initial data (Û_0, Û_1) can be determined, and in some special cases it is
independent of the boundary controls H which realize the exact boundary synchroni-
zation.
The attainable set of all possible values of (û_0, û_1) at t = T is the whole
space L²(Ω) × H^{-1}(Ω), when the original initial data (Û_0, Û_1) vary in the space
(L²(Ω))^N × (H^{-1}(Ω))^N. That is to say, any given (u_0, u_1) in L²(Ω) × H^{-1}(Ω) can
be the value of an exactly synchronizable state (u, u') at t = T (cf. Chap. 5).

1.4 Exact Boundary Synchronization by p-Groups

What will the situation be if the number of boundary controls is further reduced?
Then we can only further lower the standard to be reached.
One possible way is not to require synchronization of all state variables, but to
divide them into several groups, e.g., p groups (p ≥ 1), and then demand synchro-
nization of state variables within every group, whereby we get the concept of exact
boundary synchronization by p-groups.
Precisely speaking, let p ≥ 1 be an integer and let

0 = n_0 < n_1 < n_2 < ⋯ < n_p = N    (1.21)

be integers such that n_r − n_{r−1} ≥ 2 for all 1 ≤ r ≤ p. We rearrange the components
of U into p groups

(u^{(1)}, ⋯, u^{(n_1)}),  (u^{(n_1+1)}, ⋯, u^{(n_2)}),  ⋯,  (u^{(n_{p−1}+1)}, ⋯, u^{(n_p)}).    (1.22)

System (1.5) is exactly synchronizable by p-groups at the time T > 0, if for any
given initial data (Û_0, Û_1) ∈ (L²(Ω))^N × (H^{-1}(Ω))^N, there exist suitable boundary
controls H ∈ L²_loc(0, +∞; (L²(Γ_1))^M) with compact support in [0, T], such that the
corresponding solution U = U(t, x) satisfies the following final conditions:

t ≥ T:  \begin{cases}
u^{(1)} \equiv \cdots \equiv u^{(n_1)} := u_1,\\
u^{(n_1+1)} \equiv \cdots \equiv u^{(n_2)} := u_2,\\
\cdots\cdots\\
u^{(n_{p-1}+1)} \equiv \cdots \equiv u^{(n_p)} := u_p,
\end{cases}    (1.23)

where u = (u 1 , · · · , u p )T is called the exactly synchronizable state by p-groups,


which is a priori unknown.
Let S_r be the following (n_r − n_{r−1} − 1) × (n_r − n_{r−1}) matrix:

S_r = \begin{pmatrix}
1 & -1 & 0 & \cdots & 0\\
0 & 1 & -1 & \cdots & 0\\
\vdots & \ddots & \ddots & \ddots & \vdots\\
0 & 0 & \cdots & 1 & -1
\end{pmatrix}    (1.24)

and let C_p be the following (N − p) × N full row-rank matrix of synchronization
by p-groups:

C_p = \begin{pmatrix}
S_1 & & & \\
 & S_2 & & \\
 & & \ddots & \\
 & & & S_p
\end{pmatrix}.    (1.25)

The exact boundary synchronization by p-groups (1.23) is equivalent to

t ≥ T:   C_p U ≡ 0.    (1.26)
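For illustration only (this example is not from the book), the following Python sketch assembles the matrix C_p of (1.24)–(1.25) from hypothetical group sizes and verifies that a state vector which is synchronized within each group satisfies C_p U = 0, as required by (1.26).

import numpy as np

def S_block(m):
    # (m-1) x m block S_r of (1.24) for one group of size m = n_r - n_{r-1}.
    S = np.zeros((m - 1, m))
    for k in range(m - 1):
        S[k, k], S[k, k + 1] = 1.0, -1.0
    return S

def C_p(group_sizes):
    # Block-diagonal (N - p) x N matrix of (1.25) built from the p group sizes.
    N, p = sum(group_sizes), len(group_sizes)
    C = np.zeros((N - p, N))
    row = col = 0
    for m in group_sizes:
        C[row:row + m - 1, col:col + m] = S_block(m)
        row += m - 1
        col += m
    return C

# hypothetical example: N = 5 components split into p = 2 groups of sizes 2 and 3
C2 = C_p([2, 3])
U = np.array([7.0, 7.0, -1.0, -1.0, -1.0])   # equal within each group
print(C2.shape)     # (3, 5) = (N - p, N)
print(C2 @ U)       # zero vector: the requirement (1.26) is met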

We can establish all the previous results in a similar way after having overcome
some technical difficulties (see Chaps. 6 and 7). For instance, if the exact boundary
synchronization by p-groups can be realized by means of (N − p) boundary controls,
we can get the corresponding condition of C_p-compatibility: there exists a unique
reduced matrix A_p of order (N − p), such that

C_p A = A_p C_p,    (1.27)

which actually means that the coupling matrix A satisfies the row-sum condition
by blocks.

Correspondingly, under the condition of C_p-compatibility, assume that the
domain Ω satisfies the usual multiplier geometrical condition. Then there exists
a suitably large T > 0, such that for any given boundary control matrix D with
rank(C_p D) = N − p, the exact boundary synchronization by p-groups of system
(1.5) can be realized at the time t = T. On the contrary, if rank(C_p D) < N − p,
in particular, if rank(D) < N − p, then no matter how large T > 0 is, the exact
boundary synchronization by p-groups can never be achieved for all initial data
(Û_0, Û_1) ∈ (L²(Ω))^N × (H^{-1}(Ω))^N.
In summary, we get the following table (Table 1.1).

Table 1.1 The exact boundary synchronization by p-groups

                                                 Condition of C_p-compatibility    Minimal number of boundary controls
  Exact boundary null controllability                                              N
  Exact boundary synchronization                 C_1 A = A_1 C_1                   N − 1
  Exact boundary synchronization by 2-groups     C_2 A = A_2 C_2                   N − 2
  .........
  Exact boundary synchronization by p-groups     C_p A = A_p C_p                   N − p

What can we do when the number of boundary controls that can be chosen further
decreases?
The aforementioned controllability and synchronization should both be estab-
lished in the exact sense; however, from the practical point of view, some minor error
will not affect the general situation, so it also makes sense to require only that they
hold in the approximate sense. What we are considering here is the situation
that no matter how small the error given beforehand is, we can always find suit-
able boundary controls so that the controllability or synchronization can be realized
within the permitted range of accuracy. Since the error can be chosen progressively
smaller, this corresponds actually to a limit process, which thus allows the analytical
method to be applied more effectively. The corresponding controllability and syn-
chronization are called approximate boundary null controllability and approximate
boundary synchronization.

1.5 Approximate Boundary Null Controllability

We first consider the approximate boundary null controllability given by the following
definition:

System (1.5) possesses the approximate boundary null controllability at the
time T > 0 if for any given initial data (Û_0, Û_1) ∈ (L²(Ω))^N × (H^{-1}(Ω))^N, there
exists a sequence {H_n} of boundary controls, H_n ∈ L²_loc(0, +∞; (L²(Γ_1))^M) with
compact support in [0, T], such that the corresponding sequence {U_n} of solutions
to problem (1.5)–(1.6) satisfies the following condition:

U_n → 0 as n → +∞    (1.28)

in the space

C⁰_loc([T, +∞); (L²(Ω))^N) ∩ C¹_loc([T, +∞); (H^{-1}(Ω))^N).    (1.29)

Obviously, the exact boundary null controllability leads to the approximate boundary
null controllability. However, since from the previous definition we cannot get the
convergence of the sequence {Hn } of boundary controls, the approximate boundary
null controllability cannot lead to the exact boundary null controllability in general.
Consider the corresponding adjoint problem

\begin{cases}
\Phi'' - \Delta\Phi + A^T\Phi = 0 & \text{in } (0, +\infty) \times \Omega,\\
\Phi = 0 & \text{on } (0, +\infty) \times \Gamma,\\
t = 0:\ \Phi = \widehat\Phi_0,\ \Phi' = \widehat\Phi_1 & \text{in } \Omega,
\end{cases}    (1.30)

where A^T denotes the transpose of A. We give the following definition:
The adjoint problem (1.30) is D-observable on the interval [0, T] if

D^T ∂_ν Φ ≡ 0 on [0, T] × Γ_1   ⟹   (Φ̂_0, Φ̂_1) ≡ 0, i.e., Φ ≡ 0,    (1.31)

where ∂_ν denotes the outward normal derivative on the boundary.


The D-observability is only a weak observability in the sense of uniqueness, which
cannot guarantee the exact boundary null controllability. However, we can prove the
following result (cf. Chap. 8):
System (1.5) is approximately null controllable at the time T > 0 if and only if
the adjoint problem (1.30) is D-observable on the interval [0, T ].
Obviously, if M = N , then by Holmgren’s uniqueness theorem (cf. [18, 62, 75]),
system (1.5) is always approximately null controllable by means of N boundary
controls for T > 0 large enough, even without the multiplier geometrical condition.
However, it should be pointed out that the approximate boundary null controlla-
bility can be realized even if M < N , namely, by means of fewer boundary controls.
Transforming the approximate boundary null controllability of the original system
equivalently into the D-observability of the adjoint problem does not by itself settle
whether a given system possesses the approximate boundary null controllability,
nor does it make clear to what extent the weakened concept may reduce the number
of boundary controls needed. But we can thereby prove the following result
(cf. Chap. 8):

Assume that system (1.5) is approximately null controllable at the time T > 0;
then we necessarily have that the following enlarged matrix composed of A and D
is of full rank:

rank(D, AD, ⋯, A^{N−1}D) = N.    (1.32)

This is a necessary condition which helps to conveniently eliminate a set of sys-


tems that do not meet the requirements. Equation (1.32) is nothing but the so-called
Kalman's criterion guaranteeing the exact controllability of the following system
of ODEs (cf. [23, 73]):

X' = AX + Du,    (1.33)

where u stands for the vector of control variables; here, however, we arrive at it
from a different point of view.
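As a small numerical illustration (not taken from the book), the following Python sketch computes the rank of the enlarged matrix (D, AD, ⋯, A^{N−1}D) appearing in Kalman's criterion (1.32); the nilpotent matrix A and the single-column D below are hypothetical, and the computation concerns only this necessary algebraic condition, not the controllability itself.

import numpy as np

def kalman_rank(A, D):
    # rank(D, AD, ..., A^{N-1} D) for the enlarged matrix of (1.32).
    N = A.shape[0]
    blocks, M = [D], D
    for _ in range(N - 1):
        M = A @ M
        blocks.append(M)
    return np.linalg.matrix_rank(np.hstack(blocks))

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])       # hypothetical nilpotent coupling matrix
D = np.array([[0.0], [0.0], [1.0]])   # a single direct control: M = rank(D) = 1
print(kalman_rank(A, D))              # 3 = N: condition (1.32) holds here

Even though rank(D) = 1 in this example, the iterates AD and A²D supply the missing directions, which is exactly the distinction between "direct" and "total" controls discussed at the end of this section.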
Since condition (1.32) is independent of T, it is not, in general, a sufficient condition
for the D-observability of the adjoint problem (1.30); otherwise, the D-observability
would be realized almost immediately, which is not the case because of the finite
speed of wave propagation.
However, under certain assumptions on A, condition (1.32) is also sufficient for
guaranteeing the approximate boundary null controllability for T > 0 large enough
for some special kinds of systems, for instance, some one-space-dimensional sys-
tems, some 2 × 2 systems, the cascade system, and more generally, the nilpotent
system under the multiplier geometrical condition, etc.
For the exact boundary null controllability of system (1.5), the number M =
rank(D), namely, the number of boundary controls, should be equal to N , the num-
ber of state variables. However, the approximate boundary null controllability of
system (1.5) could be realized even if the number M = rank(D) is quite small,
even if M = rank(D) = 1. Nevertheless, although the rank of D might be small,
because of the existence and influence of the coupling matrix A, in order to real-
ize the approximate boundary null controllability, the rank of the enlarged matrix
(D, AD, ⋯, A^{N−1}D) should still be equal to N, the number of state variables.
From this point of view, we may say that the rank M of D is the number of "direct"
boundary controls acting on Γ_1, and rank(D, AD, ⋯, A^{N−1}D) denotes the number
of “total” controls. Differently from the exact boundary null controllability, for the
approximate boundary null controllability, we should consider not only the number
of direct boundary controls, but also the number of total controls.

1.6 Approximate Boundary Synchronization

Similar to the approximate boundary null controllability, we give the following def-
inition:

System (1.5) possesses the approximate boundary synchronization at the time
T > 0 if for any given initial data (Û_0, Û_1) ∈ (L²(Ω))^N × (H^{-1}(Ω))^N, there exists
a sequence {H_n} of boundary controls, H_n ∈ L²_loc(0, +∞; (L²(Γ_1))^M) with compact
support in [0, T], such that the corresponding sequence {U_n} = {(u_n^{(1)}, ⋯, u_n^{(N)})^T}
of solutions to problem (1.5)–(1.6) satisfies

u_n^{(k)} − u_n^{(l)} → 0 as n → +∞    (1.34)

for all 1 ≤ k, l ≤ N in the space

C⁰_loc([T, +∞); L²(Ω)) ∩ C¹_loc([T, +∞); H^{-1}(Ω)).    (1.35)

Obviously, if system (1.5) is exactly synchronizable, then it must be approximately
synchronizable; however, the converse is not true in general.
Moreover, the approximate boundary null controllability obviously leads to the
approximate boundary synchronization. We should exclude this trivial situation in
advance.
Assume that system (1.5) is approximately synchronizable, but not approximately
null controllable. Then, as in the case of exact boundary synchronization, the coupling
matrix A should satisfy the same condition of C1 -compatibility (1.12) (cf. Chap. 9).
Then, under the condition of C_1-compatibility, setting W_1 = C_1 U as in (1.17),
we get again the reduced system (1.18) and its adjoint problem (the reduced adjoint
problem):

\begin{cases}
\Psi_1'' - \Delta\Psi_1 + A_1^T\Psi_1 = 0 & \text{in } (0, +\infty) \times \Omega,\\
\Psi_1 = 0 & \text{on } (0, +\infty) \times \Gamma,\\
t = 0:\ \Psi_1 = \widehat\Psi_0,\ \Psi_1' = \widehat\Psi_1 & \text{in } \Omega.
\end{cases}    (1.36)

Similarly to the D-observability, we say that the reduced adjoint problem (1.36)
is C_1 D-observable on the interval [0, T] if

(C_1 D)^T ∂_ν Ψ_1 ≡ 0 on [0, T] × Γ_1   ⟹   (Ψ̂_0, Ψ̂_1) ≡ 0, i.e., Ψ_1 ≡ 0.    (1.37)

We can prove that (cf. Chap. 9):


Under the condition of C1 -compatibility, system (1.5) is approximately synchro-
nizable at the time T > 0 if and only if the reduced adjoint problem (1.36) is
C1 D-observable on the interval [0, T ].
Then, it is easy to see that under the condition of C1 -compatibility if rank(C1 D) =
N − 1 (which implies M  N − 1), then, system (1.5) is always approximately
synchronizable, even without the multiplier geometrical condition.
We should point out that even if rank(C1 D) < N − 1, and in particular, if we
essentially use fewer than (N − 1) boundary controls, it is still possible to realize
the approximate boundary synchronization.

Moreover, as in the case of approximate boundary null controllability, under the


condition of C1 -compatibility, we have:
Assume that system (1.5) is approximately synchronizable at the time T > 0,
then we necessarily have

rank(C1 D, C1 AD, · · · , C1 A N −1 D) = N − 1. (1.38)

Condition (1.38) is not sufficient in general for the approximate boundary synchro-
nization; however, it is still sufficient for T > 0 large enough for some special systems
under certain additional assumptions on A.
On the other hand, we can prove:
Assume that system (1.5) is approximately synchronizable under the action of a
boundary control matrix D. No matter whether the condition of C1 -compatibility is
satisfied or not, we necessarily have

rank(D, AD, ⋯, A^{N−1}D) ≥ N − 1,    (1.39)

namely, at least (N − 1) total controls are needed in order to realize the approximate
boundary synchronization of system (1.5).
Furthermore, assume that system (1.5) is approximately synchronizable under the
minimal rank condition

rank(D, AD, ⋯, A^{N−1}D) = N − 1.    (1.40)

Then the matrix A should satisfy the condition of C1 -compatibility and some alge-
braic properties, and there exists a scalar function u as the approximately synchro-
nizable state, such that
u_n^{(k)} → u as n → +∞    (1.41)

in the space (1.35) for all 1 ≤ k ≤ N. Moreover, the approximately synchronizable


state u is independent of the sequence {Hn } of applied boundary controls. Thus the
original approximate boundary synchronization in the consensus sense reduces to
that in the pinning sense.

1.7 Approximate Boundary Synchronization by p-Groups

Generally speaking, we can define the approximate boundary synchronization by
p-groups (p ≥ 1). Let us re-group the components of the state variable as in (1.22).
We say that system (1.5) is approximately synchronizable by p-groups at the time
T > 0 if for any given initial data (Û_0, Û_1) ∈ (L²(Ω))^N × (H^{-1}(Ω))^N, there exists
a sequence {H_n} of boundary controls in L²_loc(0, +∞; (L²(Γ_1))^M) with compact
support in [0, T], such that the sequence {U_n} of the corresponding solutions satisfies
the following conditions:

u_n^{(k)} − u_n^{(l)} → 0 as n → +∞    (1.42)

in the space (1.35) for all n_{r−1} + 1 ≤ k, l ≤ n_r and 1 ≤ r ≤ p, or equivalently

C_p U_n → 0 as n → +∞    (1.43)

in the space

C⁰_loc([T, +∞); (L²(Ω))^{N−p}) ∩ C¹_loc([T, +∞); (H^{-1}(Ω))^{N−p}),    (1.44)

where C_p is given by (1.24)–(1.25).


As in the case of exact boundary synchronization by p-groups, we can get the
corresponding condition of C p -compatibility (cf. Chap. 10): There exists a unique
reduced matrix A p of order (N − p), such that (1.27) holds. Under the condition of
C p -compatibility, we necessarily get the following Kalman’s criterion:

rank(C_p D, C_p AD, ⋯, C_p A^{N−1}D) = N − p.    (1.45)
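Again purely for illustration (hypothetical data, not from the book), the following Python sketch evaluates the rank condition appearing in (1.38) and (1.45) by computing rank(C_p D, C_p AD, ⋯, C_p A^{N−1}D), here for the case p = 1.

import numpy as np

def reduced_kalman_rank(Cp, A, D):
    # rank(C_p D, C_p A D, ..., C_p A^{N-1} D) as in (1.38) and (1.45).
    N = A.shape[0]
    blocks, M = [], D
    for _ in range(N):
        blocks.append(Cp @ M)
        M = A @ M
    return np.linalg.matrix_rank(np.hstack(blocks))

C1 = np.array([[1.0, -1.0, 0.0],
               [0.0, 1.0, -1.0]])      # synchronization matrix (1.13) for N = 3
A = np.array([[1.0, 0.0, 0.0],
              [2.0, -1.0, 0.0],
              [0.0, 3.0, -2.0]])       # hypothetical matrix, all row sums equal 1
D = np.array([[1.0], [0.0], [0.0]])    # one direct boundary control
print(reduced_kalman_rank(C1, A, D))   # 2 = N - 1 for this example, as in (1.38)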

On the other hand, we can prove:


Assume that system (1.5) is approximately synchronizable by p-groups under the
action of a boundary control matrix D. Then, no matter whether the condition of
C p -compatibility is satisfied or not, we necessarily have

rank(D, AD, ⋯, A^{N−1}D) ≥ N − p,    (1.46)

namely, at least (N − p) total controls are needed in order to realize the approximate
boundary synchronization by p-groups of system (1.5).
Furthermore, assume that system (1.5) is approximately synchronizable by
p-groups under the minimal rank condition

rank(D, AD, ⋯, A^{N−1}D) = N − p,    (1.47)

then the matrix A should satisfy the condition of C p -compatibility and Ker(C p )
admits a supplement V , which is also invariant for A. Moreover, there exist some
linearly independent scalar functions u 1 , · · · , u p such that the approximately syn-
chronizable state u = (u 1 , · · · , u p )T is independent of the sequence {Hn } of applied
boundary controls. Thus the original approximate synchronization by p-groups in
the consensus sense reduces to that in the pinning sense.
In summary, we get the following table (Table 1.2).

Table 1.2 The approximate boundary synchronization by p-groups

                                                       Condition of C_p-compatibility    Minimal number of total controls
  Approximate boundary null controllability                                              N
  Approximate boundary synchronization                 C_1 A = A_1 C_1                   N − 1
  Approximate boundary synchronization by 2-groups     C_2 A = A_2 C_2                   N − 2
  .........
  Approximate boundary synchronization by p-groups     C_p A = A_p C_p                   N − p

1.8 Induced Approximate Boundary Synchronization

We cannot always realize the approximate boundary synchronization by p-groups


under the minimal rank condition (1.47). In fact, let D p be the set of all boundary
control matrices D which realize the approximate boundary synchronization by p-
groups for system (1.5). Define the minimal number N p of total controls by

N_p = inf_{D ∈ D_p} rank(D, AD, ⋯, A^{N−1}D).    (1.48)

Under a suitable condition on the matrix A, we can prove

N_p = N − q,    (1.49)

where q ≤ p is the dimension of the largest subspace W which is contained in
Ker(C_p) and admits a supplement V, such that both W and V are invariant for the
matrix A.
So, generally speaking, (N − q) total controls are necessary for the approximate
boundary synchronization by p-groups of system (1.5), while (1.43) provides only
the convergence of (N − p) components of the state variables. Hence, there is a loss
of (p − q) pieces of information hidden in the minimal rank condition

rank(D, AD, ⋯, A^{N−1}D) = N − q.    (1.50)

Let C_q^* be the induced extension matrix defined by Ker(C_q^*) = W. Here Ker(C_q^*)
is the largest subspace which is contained in Ker(C_p) and admits a supplement V,
such that both W and V are invariant for A. Moreover, we have the following rank
condition:

rank(C_q^* D, C_q^* AD, ⋯, C_q^* A^{N−1}D) = N − q,    (1.51)



which is necessary for the approximate boundary null controllability of the corre-
sponding reduced system:

\begin{cases}
W'' - \Delta W + A_q^* W = 0 & \text{in } (0, +\infty) \times \Omega,\\
W = 0 & \text{on } (0, +\infty) \times \Gamma_0,\\
W = C_q^* D H & \text{on } (0, +\infty) \times \Gamma_1
\end{cases}    (1.52)

with the initial data

t = 0:   W = C_q^* Û_0,   W' = C_q^* Û_1   in Ω,    (1.53)

where W = C_q^* U and A_q^* is given by C_q^* A = A_q^* C_q^*.


In some specific situations discussed in Chap. 11, the above rank condition (1.51)
is indeed sufficient for the approximate boundary null controllability of the reduced
system (1.52). On this occasion, we have

C_q^* U_n → 0 as n → +∞    (1.54)

in the space

C⁰_loc([T, +∞); (L²(Ω))^{N−q}) ∩ C¹_loc([T, +∞); (H^{-1}(Ω))^{N−q}).    (1.55)

In this case, we say that system (1.5) is induced approximately synchronizable.


Since (1.54) provides the convergence of (N − q) components of the state variables,
in this way we recover the (p − q) pieces of information missed in (1.43). Moreover,
there exist some scalar functions u_1, ⋯, u_p, independent of the sequence {H_n} of
applied boundary controls, such that the sequence {U_n} of the corresponding
solutions satisfies the following conditions:

u_n^{(k)} → u_r as n → +∞    (1.56)

for all n_{r−1} + 1 ≤ k ≤ n_r and 1 ≤ r ≤ p in the space (1.35). Thus, the approximate


boundary synchronization by p-groups in the consensus sense is in fact in the pinning
sense. Nevertheless, unlike the case N p = N − p, these functions u 1 , · · · , u p are
linearly dependent (cf. Chap. 11).

1.9 Organization

The organization of this monograph is as follows.


Some preliminaries on linear algebra which are necessary in the whole book are
presented in Chap. 2.
We consider the exact boundary synchronization in Part 1 (from Chaps. 3 to 7)
and the approximate boundary synchronization in Part 2 (from Chaps. 8 to 11) for a
coupled system of wave equations with Dirichlet boundary controls.

In Part 3 (from Chaps. 12 to 15) and Part 4 (from Chaps. 16 to 18), we study
the same subjects, respectively, for the coupled system of wave equations with the
following Neumann boundary controls:

∂_ν U = DH on (0, +∞) × Γ_1,    (1.57)

where ∂ν denotes the outward normal derivative.


Part 5 (from Chaps. 19 to 25) and Part 6 (from Chaps. 26 to 31) are devoted to
the corresponding consideration for the coupled system of wave equations with the
following coupled Robin boundary controls:

∂_ν U + BU = DH on (0, +∞) × Γ_1,    (1.58)

where B = (b_{ij}) is a boundary coupling matrix of order N with constant elements.


Although the considerations in Part 3 and Part 5 (resp. Part 4 and Part 6) seem to
be similar to those in Part 1 (resp. Part 2), the solution to the corresponding
problem with Neumann or coupled Robin boundary conditions has less regularity
than that with Dirichlet boundary conditions; moreover, there is a second coupling
matrix in the coupled Robin boundary conditions. Technically speaking, we will
therefore encounter more difficulties, and some different and new treatments are
needed. We omit the details here.
In the last chapter, Chap. 32, we will give some related literature as well as certain
prospects for the further study of exact and approximate boundary synchronizations.
Chapter 2
Algebraic Preliminaries

This chapter contains some algebraic preliminaries, which are useful in the whole
book.

In this chapter, we denote by A a matrix of order N, by D a full column-rank
matrix of order N × M with M ≤ N, and by C_p a full row-rank matrix of order
(N − p) × N with 0 < p < N. All these matrices have constant entries.

2.1 Bi-orthonormality

Definition 2.1 A subspace V of R N is a supplement of a subspace W of R N , if

W + V = R N , W ∩ V = {0}, (2.1)

where the sum of subspaces W and V is defined as

W + V = {w + v : w ∈ W, v ∈ V }. (2.2)

In this case, W is also a supplement of V .

As a direct consequence of the dimension equality:

dim(W + V ) = dim(W ) + dim(V ) − dim(W ∩ V ), (2.3)

we have the following


Proposition 2.2 A subspace V of R N is a supplement of a subspace W of R N if and
only if

dim(W ) + dim(V ) = N , W ∩ V = {0}. (2.4)

In particular, W and V ⊥ (or W ⊥ and V ) have the same dimension.


Definition 2.3 (cf. [15, 83]) Two d-dimensional (0 < d ≤ N) subspaces V and
W of ℝ^N are bi-orthonormal if there exist a basis (ε_1, …, ε_d) of V and a basis
(η_1, …, η_d) of W, such that

(ε_k, η_l) = δ_{kl},   1 ≤ k, l ≤ d,    (2.5)

where δ_{kl} is the Kronecker symbol.


Proposition 2.4 Let V and W be two subspaces in R N . Then the equality

dim(V ⊥ ∩ W ) = dim(V ∩ W ⊥ ) (2.6)

holds if and only if V and W have the same dimension.


Proof First, by the relationship

(V ⊥ + W )⊥ = V ∩ W ⊥ , (2.7)

we have
dim(V ⊥ + W ) = N − dim(V ∩ W ⊥ ). (2.8)

Next, noting (2.3), we have

dim(V ⊥ + W ) = dim(V ⊥ ) + dim(W ) − dim(V ⊥ ∩ W ). (2.9)

Then, it follows that

dim(V ⊥ ) + dim(W ) − dim(V ⊥ ∩ W ) = N − dim(V ∩ W ⊥ ), (2.10)

namely,
dim(W ) − dim(V ) = dim(V ⊥ ∩ W ) − dim(V ∩ W ⊥ ). (2.11)

The proof is thus complete. 

Proposition 2.5 Let V and W be two nontrivial subspaces of R N . Then V and W


are bi-orthonormal if and only if

V ⊥ ∩ W = V ∩ W ⊥ = {0}. (2.12)

Proof Assume that (2.12) holds. By Proposition 2.4, V and W have the same dimen-
sion d.
In order to show that V and W are bi-orthonormal, we will construct, under
condition (2.12), a basis (1 , . . . , d ) of V and a basis (η1 , . . . , ηd ) of W , such
2.1 Bi-orthonormality 21

that (2.5) holds. For this purpose, let (ε_1, …, ε_d) be a basis of V. For each i with
1 ≤ i ≤ d we define a subspace V_i of V by

V_i = Span{ε_1, …, ε_{i−1}, ε_{i+1}, …, ε_d}.    (2.13)

Since it is easy to see from (2.3) that

dim(W ∩ V_i^⊥) ≥ dim(W) + dim(V_i^⊥) − N    (2.14)
             = d + (N − d + 1) − N = 1,    (2.15)

there exists a nontrivial vector η_i ∈ W ∩ V_i^⊥ such that (ε_i, η_i) ≠ 0. Otherwise, we
would have η_i ∈ V^⊥, and then η_i ∈ W ∩ V^⊥ = {0} because of (2.12), which leads to a
contradiction. Thus, we can choose η_i such that (ε_i, η_i) = 1. Moreover, since η_i ∈ V_i^⊥,
obviously we have

(ε_j, η_i) = 0,   j = 1, …, i − 1, i + 1, …, d.    (2.16)

In this way, we obtain a family of vectors (η1 , . . . , ηd ) of W , which satisfies the


relation (2.5), therefore the vectors η1 , . . . , ηd are linearly independent. Since W
has the same dimension d as V , the family of these linearly independent vectors
(η1 , . . . , ηd ) is in fact a basis of W .
Conversely, assume that V and W are bi-orthonormal. Let (1 , . . . , d ) and
(η1 , . . . , ηd ) be the bases of V and W , respectively, such that (2.5) holds. Since
V and W have the same dimension, by Proposition 2.4, it is sufficient to check the
first condition of (2.12). For this purpose, let x ∈ V ⊥ ∩ W. Since x ∈ W , there exist
some coefficients αk (k = 1, . . . , d) such that


d
x= αk ηk . (2.17)
k=1

Noting that x ∈ V ⊥ , the bi-orthonormal relationship (2.5) implies that


d
0 = (x, l ) = αk (ηk , l ) = αl , 1  l  d. (2.18)
k=1

Thus x = 0, therefore V ⊥ ∩ W = {0}. 
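The construction in the first part of this proof can also be carried out numerically. The following sketch is a minimal illustration added here (the helper name and the sample subspaces of R³ are ours, not from the text): given a basis of V and a basis of W satisfying (2.12), it returns the basis of W that is bi-orthonormal to the given basis of V.

```python
import numpy as np

def biorthonormal_basis(L, M):
    """Columns of L span V, columns of M span W, with (2.12) in force.
    Return a new basis N of W (columns) such that L[:,k].N[:,l] = delta_{kl}."""
    G = L.T @ M                    # Gram-type matrix ((eps_k, w_l))
    # G is invertible precisely when V^perp ∩ W = {0} and dim V = dim W
    return M @ np.linalg.inv(G)

L = np.array([[1., 0.], [0., 1.], [0., 0.]])   # basis of V (xy-plane)
M = np.array([[1., 1.], [1., 2.], [1., 1.]])   # basis of W, condition (2.12) holds
N = biorthonormal_basis(L, M)
print(np.allclose(L.T @ N, np.eye(2)))         # True: (eps_k, eta_l) = delta_kl
```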


Definition 2.6 A subspace V of R N is invariant for A, if

AV ⊆ V. (2.19)

Proposition 2.7 (cf. [25]) A subspace V of R N is invariant for A if and only if its
orthogonal complement V ⊥ is invariant for A T . In particular, for any given matrix C,
since {Ker(C)}⊥ = Im(C T ), the subspace Ker(C) is invariant for A if and only if the

subspace Im(C T ) is invariant for A T . Furthermore, V admits a supplement W such


that both V and W are invariant for A if and only if their orthogonal complements
V ⊥ and W ⊥ are invariant for A T , and W ⊥ is a supplement of V ⊥ .

Proof Assume that the subspace V of R N is invariant for A. By definition, we have

(Ax, y)R N = (x, A T y)R N = 0, ∀x ∈ V, ∀y ∈ V ⊥ . (2.20)

This proves the first part of the Proposition. Now let V admit a supplement W such
that both V and W are invariant for A. By the first part of this Proposition, both V ⊥
and W ⊥ are invariant for A T . Moreover, the relation

V ⊥ ∩ W ⊥ = {V ⊕ W }⊥ = {0} (2.21)

implies that W ⊥ is a supplement of V ⊥ . This finishes the proof of the second part of
the Proposition. 

Proposition 2.8 Let W be a nontrivial invariant subspace of A T . Then W admits a


supplement V ⊥ , which is also invariant for A T if and only if V is bi-orthonormal to
W and invariant for A.

Proof Let V be a subspace which is invariant for A and bi-orthonormal to W . By


Proposition 2.7, V ⊥ is an invariant subspace for A T . By Definition 2.1, (2.12) holds.
By Proposition 2.5, V and W have the same dimension, then it follows that

dim(V ⊥ ) + dim(W ) = N − dim(V ) + dim(W ) = N . (2.22)

By Proposition 2.2, V ⊥ is a supplement of W .


Conversely, assume that V ⊥ is a supplement of W , and invariant for A T . By
Proposition 2.7, (V ⊥ )⊥ = V is invariant for A. On the other hand, since V ⊥ is a
supplement of W , by Proposition 2.2, we have

N = dim(V ⊥ ) + dim(W ) = N − dim(V ) + dim(W ), (2.23)

so V and W have the same dimension d > 0. Moreover, noting that V ⊥ ∩ W =


{0}, by Proposition 2.1, (2.12) holds. Then by Proposition 2.5, V and W are
bi-orthonormal. The proof is then complete. 

By duality, Proposition 2.8 can be formulated as

Proposition 2.9 Let V be a nontrivial invariant subspace of A. Then V admits a


supplement W ⊥ , which is also invariant for A if and only if W is bi-orthonormal to
V and invariant for A T .

Remark 2.10 The subspace W ⊥ is a supplement of V and is invariant for A, so


the matrix A is diagonalizable by blocks according to the decomposition V ⊕ W ⊥ ,
where ⊕ stands for the direct sum of subspaces.

Proposition 2.11 Let C and K be two matrices of order M × N and N × L, respec-


tively. Then, the equality
rank(C K ) = rank(K ) (2.24)

holds if and only if


Ker(C) ∩ Im(K ) = {0}. (2.25)

Proof Define the linear map C by Cx = C x for all x ∈ Im(K ). Then we have

Im(C) = Im(C K ), Ker(C) = Ker(C) ∩ Im(K ). (2.26)

From the rank-nullity theorem:

dim Im(C) + dim Ker(C) = rank(K ), (2.27)

namely,
rank (C K ) + dim (Ker(C) ∩ Im(K )) = rank(K ), (2.28)

we achieve the proof of Proposition 2.11. 
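As a quick numerical illustration of Proposition 2.11 (the matrices below are ours, chosen only for this sketch): when Im(K) meets Ker(C) only at the origin the rank is preserved, and it drops otherwise.

```python
import numpy as np

C = np.array([[1., -1., 0.]])            # Ker(C) = {x : x1 = x2}
K_good = np.array([[1.], [0.], [0.]])    # Im(K_good) ∩ Ker(C) = {0}
K_bad = np.array([[1.], [1.], [0.]])     # Im(K_bad) ⊆ Ker(C)

print(np.linalg.matrix_rank(C @ K_good), np.linalg.matrix_rank(K_good))  # 1 1
print(np.linalg.matrix_rank(C @ K_bad), np.linalg.matrix_rank(K_bad))    # 0 1
```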

2.2 Kalman’s Criterion

Proposition 2.12 Let d  0 be an integer. We have the following assertions:


(i) The rank condition

rank(D, AD, . . . , A N −1 D)  N − d (2.29)

holds if and only if the dimension of any given invariant subspace of A T , contained
in Ker(D T ), does not exceed d.
(ii) The rank condition

rank(D, AD, . . . , A N −1 D) = N − d (2.30)

holds if and only if the largest dimension of invariant subspaces of A T , contained in


Ker(D T ), is equal to d.

Proof (i) Let


K = (D, AD, . . . , A N −1 D) (2.31)

and V be an invariant subspace of A T , contained in Ker(D T ). Clearly, we have

V ⊆ Ker(K T ). (2.32)

Assume that (2.29) holds, then

dim(V )  dim Ker(K T ) = N − rank(K T )  N − (N − d) = d. (2.33)

Conversely, assume that (2.29) does not hold, then

dim Ker(K T ) = N − rank(K T ) > N − (N − d) = d. (2.34)

Hence, there exist (d + 1) linearly independent vectors w1 , . . . , wd , wd+1 ∈


Ker(K T ). In particular, we have

Span {wk , A T wk , . . . , ( A T ) N −1 wk } ⊆ Ker(D T ). (2.35)
1kd+1

By Cayley–Hamilton’s Theorem, the subspace



Span {wk , A T wk , . . . , (A T ) N −1 wk } (2.36)
1kd+1

is invariant for A T and its dimension is greater than or equal to (d + 1).


(ii) Equation (2.30) can be written as

rank(D, AD, . . . , A N −1 D)  N − d (2.37)

and
rank(D, AD, . . . , A N −1 D)  N − d. (2.38)

The rank condition (2.37) means that dim(V )  d for any given invariant subspace
V of A T , contained in Ker(D T ). While, by (i), the rank condition (2.38) implies the
existence of a subspace V0 , which is invariant for A T and is contained in Ker(D T ),
such that dim(V0 )  d. It proves (ii). The proof is complete. 
As a direct consequence, we get easily the following well-known Hautus test
(cf. [17]).
Corollary 2.13 Kalman’s rank condition

rank(D, AD, . . . , A N −1 D) = N (2.39)

is equivalent to Hautus test

rank(D, A − λI ) = N , ∀λ ∈ C. (2.40)

Proof Noting that

Ker(D, A − λI )T = Ker(D T ) ∩ Ker(A T − λI ), (2.41)



Equation (2.40) is equivalent to

Ker(D T ) ∩ Ker(A T − λI ) = {0}, (2.42)

which means that none of eigenvectors of A T is contained in Ker(D T ). Therefore,


Ker(D T ) does not contain any invariant subspace of A T , which is just what Propo-
sition 2.12 indicates in the case d = 0. 
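For concrete matrices, both criteria of Corollary 2.13 are easy to evaluate. The sketch below is an added illustration (the function names and the 2 × 2 example are ours): it assembles the Kalman matrix (D, AD, . . . , A^{N−1}D) and compares its rank with the Hautus test (2.40), which only needs to be checked at the eigenvalues of A, since rank(D, A − λI) can drop nowhere else.

```python
import numpy as np

def kalman_rank(A, D):
    """Rank of the Kalman matrix (D, AD, ..., A^{N-1}D)."""
    N = A.shape[0]
    return np.linalg.matrix_rank(
        np.hstack([np.linalg.matrix_power(A, k) @ D for k in range(N)]))

def hautus_ok(A, D):
    """Check rank(D, A - lambda I) = N at every eigenvalue of A."""
    N = A.shape[0]
    return all(np.linalg.matrix_rank(np.hstack([D, A - lam * np.eye(N)])) == N
               for lam in np.linalg.eigvals(A))

A = np.array([[0., 1.], [-2., -3.]])
D = np.array([[0.], [1.]])
print(kalman_rank(A, D) == A.shape[0], hautus_ok(A, D))   # True True
```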

2.3 Condition of C p -Compatibility

Definition 2.14 The matrix A satisfies the condition of C p -compatibility if there


exists a unique matrix A p of order (N − p), such that

C p A = A pC p. (2.43)

The matrix A p will be called the reduced matrix of A by C p .

Proposition 2.15 The matrix A satisfies the condition of C p -compatibility if and


only if the kernel of C p is an invariant subspace of A:

AKer(C p ) ⊆ Ker(C p ). (2.44)

Moreover, the reduced matrix A p is given by

A p = C p AC +
p, (2.45)

where
C+ T −1
p = C p (C p C p )
T
(2.46)

is the Moore–Penrose inverse of C p .

Proof Assume that (2.43) holds. Then

Ker(C p ) ⊆ Ker(A p C p ) = Ker(C p A). (2.47)

Since C p Ax = 0 for any given x ∈ Ker(C p ), we get Ax ∈ Ker(C p ) for any given
x ∈ Ker(C p ), namely, (2.44) holds.
Conversely, assume that (2.44) holds true. Then by Proposition 2.7, we have

A T Im(C Tp ) ⊆ Im(C Tp ). (2.48)

Then, there exists a matrix A p of order (N − p), such that



A^T C_p^T = C_p^T A_p^T,   (2.49)

namely, (2.43) holds.


Moreover, let (e1 , . . . , e p ) be a basis of Ker(C p ). Noting (2.44), we get

(C p A − A p C p )(e1 , . . . , e p , C Tp ) = (0, . . . , 0, (C p A − A p C p )C Tp ). (2.50)

Since the N × N matrix (e1 , . . . , e p , C Tp ) is invertible, (2.43) is equivalent to

(C p A − A p C p )C Tp = 0. (2.51)

Since the matrix C p C Tp is invertible, it follows that

A p = C p AC Tp (C p C Tp )−1 , (2.52)

which gives just (2.45)–(2.46). The proof is complete. 
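Formulae (2.45)–(2.46) are directly computable. The following sketch is an added numerical illustration (the concrete matrices are ours): it takes an A whose row sums are all equal, so that Ker(Cp) = Span{(1,1,1)ᵀ} is A-invariant, forms C_p^+ = C_p^T(C_pC_p^T)^{-1}, and checks the compatibility relation (2.43).

```python
import numpy as np

Cp = np.array([[1., -1., 0.],
               [0., 1., -1.]])           # full row rank, Ker(Cp) = span{(1,1,1)}
A = np.array([[2., -1., 0.],
              [-1., 3., -1.],
              [0., -1., 2.]])            # each row sums to 1, so Ker(Cp) is A-invariant

Cp_plus = Cp.T @ np.linalg.inv(Cp @ Cp.T)     # Moore-Penrose inverse (2.46)
Ap = Cp @ A @ Cp_plus                         # reduced matrix (2.45)
print(np.allclose(Cp @ A, Ap @ Cp))           # True: condition of Cp-compatibility (2.43)
```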

Proposition 2.16 Assume that the matrix A satisfies the condition of


C p -compatibility. Then for given matrix D of order N × M, we have

rank(CpD, ApCpD, . . . , Ap^{N−p−1}CpD) = rank(CpD, CpAD, . . . , CpA^{N−1}D).   (2.53)

Proof By Cayley–Hamilton's Theorem, we have

rank(CpD, ApCpD, . . . , Ap^{N−p−1}CpD) = rank(CpD, ApCpD, . . . , Ap^{N−1}CpD).   (2.54)

Then, noting CpA^l = Ap^l Cp for any given integer l ≥ 0, we get

(CpD, ApCpD, . . . , Ap^{N−1}CpD) = (CpD, CpAD, . . . , CpA^{N−1}D).   (2.55)

The proof is complete. 
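Continuing the numerical sketch given after Proposition 2.15 (again with matrices chosen by us for illustration only), the rank identity (2.53) can be checked directly:

```python
import numpy as np

def kalman_block(M, B, K):
    """The matrix (B, MB, ..., M^{K-1}B)."""
    return np.hstack([np.linalg.matrix_power(M, k) @ B for k in range(K)])

N, p = 3, 1
Cp = np.array([[1., -1., 0.], [0., 1., -1.]])
A = np.array([[2., -1., 0.], [-1., 3., -1.], [0., -1., 2.]])
D = np.array([[1.], [0.], [0.]])
Ap = Cp @ A @ Cp.T @ np.linalg.inv(Cp @ Cp.T)        # reduced matrix (2.45)

lhs = np.linalg.matrix_rank(kalman_block(Ap, Cp @ D, N - p))
rhs = np.linalg.matrix_rank(Cp @ kalman_block(A, D, N))
print(lhs == rhs)    # True, as asserted by (2.53)
```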

Definition 2.17 A subspace V is called A-marked, if it is invariant for A and there


exists a Jordan basis of A in V , which can be extended (by adding new vectors) to a
Jordan basis of A in C N . V is strongly A-marked, if it is invariant for A and every
Jordan basis of A in V can be extended to a Jordan basis of A in C N .

Obviously, any subspace composed of eigenvectors of A is strongly A-marked.


Moreover, if each eigenvalue λ of A is either semi-simple (λ has the same alge-

braic and geometrical multiplicity), or dim Ker(A − λI ) = 1, then every invariant


subspace of A is strongly A-marked.
The existence of non-marked invariant subspaces is sometimes overlooked in
linear algebra. We refer [9] for a complete discussion on this topic.

Definition 2.18 Assume that the matrix A satisfies the condition of C p -compatibility
(2.43). An (N − q) × N (0  q < p) full row-rank matrix Cq∗ is called the induced
extension matrix of C p , related to the matrix A, if

(a) Ker(Cq∗ ) is contained in Ker(C Tp ),


(b) Ker(Cq∗ ) is invariant for A and admits a supplement which is also invari-
ant for A,
(c) Ker(Cq∗ ) is the largest one satisfying the previous two conditions.

By duality, the above requirements are equivalent to the following ones:


(i) Im(Cq∗ T ) contains Im(C Tp ),
(ii) Im(Cq∗ T ) is invariant for A T and admits a supplement which is also invariant
for A T ,
(iii) Im(Cq∗ T ) is the least one satisfying the previous two conditions.
This leads to the following explicit construction for the induced extension matrix
Cq∗ .
Let

E_0^{(j)} = 0,  A^T E_i^{(j)} = λj E_i^{(j)} + E_{i−1}^{(j)},  1 ≤ i ≤ m_j, 1 ≤ j ≤ r   (2.56)

denote the Jordan chains of A^T contained in Im(C_p^T). Assume that the subspace
Im(C_p^T) is A^T-marked. Then for any j with 1 ≤ j ≤ r, there exist new root vectors
E_{m_j+1}^{(j)}, . . . , E_{m*_j}^{(j)} which extend the Jordan chain in Im(C_p^T) to a Jordan chain in C^N:

E_0^{(j)} = 0,  A^T E_i^{(j)} = λj E_i^{(j)} + E_{i−1}^{(j)},  1 ≤ i ≤ m*_j, 1 ≤ j ≤ r.   (2.57)

Then, we define the (N − q) × N full row-rank matrix C_q^* by

C_q^{*T} = (E_1^{(1)}, . . . , E_{m*_1}^{(1)}; . . . . . . ; E_1^{(r)}, . . . , E_{m*_r}^{(r)}),   (2.58)

where

q = N − Σ_{j=1}^r m*_j.   (2.59)

We justify the above construction by

Proposition 2.19 Assume that the matrix A satisfies the condition of


C p -compatibility (2.43) and that the subspace Im(C Tp ) is A T -marked. Then the matrix
Cq∗ defined by (2.58) satisfies the requirements (i)–(iii).

Proof Clearly, Im(Cq∗ T ) contains Im(C Tp ). On the other hand, by Jordan’s theorem,
the space C N can be decomposed into a direct sum of all the Jordan subspaces of
A T , therefore, Im(Cq∗ T ), being direct sum of some Jordan subspaces, is invariant
and admits a supplement which is invariant for A T . Thus, we only have to check
that Im(Cq∗ T ) is the least one. If q = p, then Cq∗ = C p is obviously the least one.
Otherwise, if q < p, there exists at least one root vector E_{m*_j}^{(j)} which does not belong to
Im(C_p^T). Without loss of generality, assume that the root vector E_{m*_1}^{(1)} ∉ Im(C_p^T). We
remove it from Im(C_q^{*T}), and let

C̃_q^{*T} = (E_1^{(1)}, . . . , E_{m*_1−1}^{(1)}; . . . ; E_1^{(r)}, . . . , E_{m*_r}^{(r)}).   (2.60)

We claim that Im(C̃_q^{*T}) does not admit any supplement which is invariant for A^T.
Otherwise, assume that Im(C̃_q^{*T}) admits a supplement W̃, which is invariant for
A^T. Then, there exist x̃ ∈ Im(C̃_q^{*T}) and ỹ ∈ W̃, such that

E_{m*_1}^{(1)} = x̃ + ỹ.   (2.61)

Noting that

A^T E_{m*_1}^{(1)} = λ1 E_{m*_1}^{(1)} + E_{m*_1−1}^{(1)},   (2.62)

we have

A^T x̃ + A^T ỹ = λ1(x̃ + ỹ) + E_{m*_1−1}^{(1)}.   (2.63)

It then follows that

(A^T − λ1)x̃ − E_{m*_1−1}^{(1)} + (A^T − λ1)ỹ = 0.   (2.64)

On the other hand, noting that

(A^T − λ1)x̃ − E_{m*_1−1}^{(1)} ∈ Im(C̃_q^{*T}) and (A^T − λ1)ỹ ∈ W̃,   (2.65)

it follows from (2.64) that

A^T x̃ = λ1 x̃ + E_{m*_1−1}^{(1)} and A^T ỹ = λ1 ỹ,   (2.66)

therefore,

A^T(x̃ − E_{m*_1}^{(1)}) = λ1(x̃ − E_{m*_1}^{(1)}).   (2.67)

Since (x̃ − E_{m*_1}^{(1)}) ∈ Span{E_1^{(1)}, . . . , E_{m*_1}^{(1)}} and E_1^{(1)} is the only eigenvector of A^T in
Span{E_1^{(1)}, . . . , E_{m*_1}^{(1)}}, there exists a ∈ R, such that x̃ − E_{m*_1}^{(1)} = aE_1^{(1)}, namely,
x̃ − aE_1^{(1)} = E_{m*_1}^{(1)}. Since x̃ and E_1^{(1)} belong to Im(C̃_q^{*T}), so does the vector x̃ − aE_1^{(1)}. But
because of (2.60), E_{m*_1}^{(1)} ∉ Im(C̃_q^{*T}); we thus get a contradiction. □

Proposition 2.20 Assume that the matrix A satisfies the condition of


C p -compatibility (2.43) and that Ker(C p ) is A-marked. Then, there exists a sub-
space Span{e1 , . . . , eq } such that
(a) K er (C p ) is contained in Span{e1 , . . . , eq }.
(b) A T admits an invariant subspace Span{E 1 , . . . , E q } which is bi-orthonormal
to Span{e1 , . . . , eq }.
(c) Span{e1 , . . . , eq } is the least one satisfying the previous two conditions.

Proof By Proposition 2.9, it suffices to show that Ker(C p ) has a least extension
Span{e1 , . . . , eq }, which is invariant for A and admits a supplement which is also
invariant for A.
Let Ker(C p ) = Span{e1 , . . . , e p }. Defining D Tp = (e1 , . . . , e p ), we have

Im(D Tp ) = Ker(C p ) and Ker(D p ) = Im(C Tp ). (2.68)

By Proposition 2.7, the condition of C p -compatibility (2.43) implies that


A T Im(C Tp ) ⊆ Im(C Tp ), namely, A T Ker(D p ) ⊆ Ker(D p ). Therefore, A T satisfies the
condition of D p -compatibility and Im(D Tp ) = Ker(C p ) is A-marked. Then, by Propo-
sition 2.19, D p has a least extension Dq∗ such that Im(Dq∗ T ) is invariant for A and
admits a supplement which is also invariant for A. In other words, noting (2.68),
Ker(C p ) has a least extension Span{e1 , . . . , eq }, which admits a supplement such
that both two subspaces are invariant for A. The proof is then complete. 

Proposition 2.21 Assume that the matrix A satisfies the condition of


C p -compatibility (2.43). Let {xl(k) }1kd,1lrk be a system of root vectors of the
matrix A, corresponding to the eigenvalues λk (1  k  d), such that for each
k(1  k  d) we have

Axl(k) = λk xl(k) + xl+1


(k)
, 1  l  rk . (2.69)

Define the following projected vectors by

x l(k) = C p xl(k) , 1  k  d, 1  l  r k , (2.70)

where d(1  d  d) and r k (1  r k  rk ) are given by (2.71) below, then the system
{x l(k) }1kd,1lr k forms a system of root vectors of the reduced matrix A p given by
(2.43). In particular, if A is similar to a real symmetric matrix, so is A p .

Proof Since Ker(C p ) is an invariant subspace of A, without loss of generality, we


may assume that there exist some integers d(1  d  d) and r k (1  r k  rk ), such
that {xl(k) }1lr k ,1kd forms a root system for the restriction of A to the invariant
subspace Ker(C p ). Then,

Ker(C p ) = Span{xl(k) : 1  k  d, 1  l  r k }. (2.71)



In particular, we have

d
(rk − r k ) = N − p. (2.72)
k=1

Noting that C Tp (C p C Tp )−1 C p is a projection from R N onto Im(C Tp ), we have

C Tp (C p C Tp )−1 C p x = x, ∀x ∈ Im(C Tp ). (2.73)

On the other hand, by R N = Im(C Tp ) ⊕ Ker(C p ) we can write

xl(k) = 
xl(k) + 
xl(k) with 
xl(k) ∈ Im(C Tp ), 
xl(k) ∈ Ker(C p ). (2.74)

It follows from (2.70) that

x l(k) = C p
xl(k) , 1  k  d, 1  l  r k . (2.75)

Then, noting (2.45)–(2.46) and (2.73), we have

A p x l(k) = C p AC Tp (C p C Tp )−1 C p
xl(k) = C p A
xl(k) . (2.76)

Since Ker(C p ) is invariant for A, A xl(k) ∈ Ker(C p ), then C p A xl(k) = 0. It follows


that
A p x l(k) = C p A(
xl(k) + 
xl(k) ) = C p Axl(k) . (2.77)

Then, using (2.69), it is easy to see that

A p x l(k) = C p (λk xl(k) + xl+1


(k)
) = λk x l(k) + x l+1
(k)
. (2.78)

Therefore, x (k) (k) (k)


1 , x 2 , . . . , x r k is a Jordan chain with length r k of the reduced matrix
A p , corresponding to the eigenvalue λk .
Since dim Ker(C p ) = p, the projected system {x l(k) }1kd,1lr k is of rank
(N − p). On the other hand, because of (2.72), the system {x l(k) }1lr k ,1kd con-
tains (N − p) vectors, therefore, forms a system of root vectors of the reduced matrix
A p . The proof is complete. 
Part I
Synchronization for a Coupled System of
Wave Equations with Dirichlet Boundary
Controls: Exact Boundary Synchronization

We consider the following coupled system of wave equations with Dirichlet boundary
controls:
⎧ U″ − ΔU + AU = 0  in (0, +∞) × Ω,
⎨ U = 0             on (0, +∞) × Γ0,   (I)
⎩ U = DH            on (0, +∞) × Γ1

with the initial condition

t = 0 :  U = Û0,  U′ = Û1  in Ω,   (I0)

where Ω ⊂ R^n is a bounded domain with smooth boundary Γ = Γ1 ∪ Γ0 such that
Γ̄1 ∩ Γ̄0 = ∅ and mes(Γ1) > 0; “ ′ ” stands for the time derivative; Δ = Σ_{k=1}^n ∂²/∂x_k² is the
Laplacian operator; U = (u^(1), · · · , u^(N))^T and H = (h^(1), · · · , h^(M))^T (M ≤ N)


denote the state variables and the boundary controls, respectively; the coupling
matrix A = (ai j ) is of order N and D as the boundary control matrix is a full
column-rank matrix of order N × M, both with constant elements.
In this part, the exact boundary synchronization and the exact boundary Synchro-
nization by groups for system (I) will be presented and discussed, while, correspond-
ingly, the approximate boundary synchronization and the approximate boundary
synchronization by groups for system (I) will be introduced and considered in the
next part (Part 2).
Chapter 3
Exact Boundary Controllability
and Non-exact Boundary Controllability

Since the exact boundary synchronization on a finite time interval is closely linked
with the exact boundary null controllability, we first consider the exact boundary
null controllability and the non-exact boundary null controllability for system (I) of
wave equations with Dirichlet boundary controls in this chapter.

3.1 Exact Boundary Controllability

Since the exact boundary synchronization on a finite time interval is closely linked
with the exact boundary null controllability, we first consider the exact boundary null
controllability and the non-exact boundary null controllability in this chapter.
Let Ω ⊂ R^n be a bounded domain with smooth boundary Γ = Γ1 ∪ Γ0 with
mes(Γ1) > 0, such that Γ̄1 ∩ Γ̄0 = ∅. Furthermore, we assume that there exists
x0 ∈ R^n such that, setting m = x − x0, we have the following multiplier geometrical condition (cf. [7, 26, 61, 62]):

(m, ν) > 0, ∀x ∈ Γ1;  (m, ν) ≤ 0, ∀x ∈ Γ0,   (3.1)

where ν is the unit outward normal vector and (·, ·) denotes the inner product in Rn .
Let
W = (w^(1), · · · , w^(M))^T,  H = (h^(1), · · · , h^(M))^T.   (3.2)

Consider the following coupled system of wave equations:


⎧ W″ − ΔW + AW = 0  in (0, +∞) × Ω,
⎨ W = 0             on (0, +∞) × Γ0,   (3.3)
⎩ W = H             on (0, +∞) × Γ1

with the initial condition

t = 0 :  W = Ŵ0,  W′ = Ŵ1  in Ω,   (3.4)

where the coupling matrix A = (a i j ) of order M has real constant elements.

Definition 3.1 System (3.3) is exactly null controllable if there exists a positive
constant T > 0, such that for any given initial data

(Ŵ0, Ŵ1) ∈ (L²(Ω))^M × (H⁻¹(Ω))^M,   (3.5)

one can find a boundary control H ∈ L 2 (0, T ; (L 2 (1 )) M ), such that problem (3.3)–
(3.4) admits a unique weak solution W = W (t, x) with

(W, W  ) ∈ C 0 ([0, T ]; (L 2 () × H −1 ()) M ), (3.6)

which satisfies the final null condition

t=T : W = 0, W  = 0 in . (3.7)

In other words, one can find a control H ∈ L²loc(0, +∞; (L²(Γ1))^M) with compact
support in [0, T ], such that the solution W = W (t, x) to the corresponding problem
(3.3)–(3.4) satisfies the following condition:

tT : W ≡ 0 in . (3.8)

Remark 3.2 More generally, system (3.3) will be called to be exactly controllable
if the final null condition (3.7) is replaced by the following inhomogeneous final
condition
t=T : W =W 0 , W  = W 1 in . (3.9)

However, it is well known that for a linear time-invertible system, the exact boundary
null controllability with (3.7) is equivalent to the exact boundary controllability
with (3.9) (cf. [61, 62, 73]). So, we will not distinguish these two notions of exact
controllability later.

For a single wave equation, by means of the Hilbert Uniqueness Method (HUM)
proposed by J.-L. Lions in [61], the exact boundary null controllability has been
studied in the literature (cf. [26, 62]), but a few were done for general coupled systems
of wave equations. If the coupling matrix A is symmetric and positive definite, then
the exact boundary null controllability of system (3.3) can be transformed to the case
of a single wave equation (cf. [61, 62]). However, in order to study the exact boundary
synchronization, we have to establish the exact boundary null controllability for the
coupled system (3.3) of wave equations with any given coupling matrix A. In this
chapter, we will use a result on the observability of compactly perturbed systems in

[69] to get the observability of the corresponding adjoint problem, and then the exact
boundary null controllability follows from the standard HUM.
Now let
Φ = (φ^(1), · · · , φ^(M))^T.   (3.10)

Consider the corresponding adjoint system:

⎧ Φ″ − ΔΦ + A^T Φ = 0  in (0, T) × Ω,   (3.11)
⎩ Φ = 0                on (0, T) × Γ,

A^T being the transpose of A, with the initial condition

t = 0 :  Φ = Φ̂0,  Φ′ = Φ̂1  in Ω.   (3.12)

It is well known (cf. [60, 64, 70]) that the adjoint problem (3.11)–(3.12) is well-posed
in the space V × H:

V = (H₀¹(Ω))^M,  H = (L²(Ω))^M.   (3.13)

Moreover, we will prove the following direct and inverse inequalities.


Theorem 3.3 Let T > 0 be suitably large. Then there exist positive constants c and
c̄ such that for any given initial data (Φ̂0, Φ̂1) ∈ V × H, the solution Φ to problem
(3.11)–(3.12) satisfies the following inequalities:

c ∫_0^T ∫_{Γ1} |∂νΦ|² dΓ dt ≤ ‖Φ̂0‖²_V + ‖Φ̂1‖²_H ≤ c̄ ∫_0^T ∫_{Γ1} |∂νΦ|² dΓ dt,   (3.14)

where ∂ν stands for the outward normal derivative on the boundary.


Before proving Theorem 3.3, we first give a unique continuation result as follows.
Proposition 3.4 Let B be a matrix of order M, and let Φ ∈ H²(Ω) be a solution to
the following problem:

⎧ ΔΦ = BΦ  in Ω,   (3.15)
⎩ Φ = 0    on Γ.

Assume furthermore that

∂νΦ = 0 on Γ1.   (3.16)

Then we have Φ ≡ 0.

Proof Let
Φ̃ = PΦ   (3.17)

and
B̃ = P B P⁻¹ = ⎛ b̃11  0    · · ·  0   ⎞
              ⎜ b̃21  b̃22  · · ·  0   ⎟ ,   (3.18)
              ⎜ · · ·                 ⎟
              ⎝ b̃M1  b̃M2  · · ·  b̃MM ⎠

where B̃ is a lower triangular matrix of complex entries. Then (3.15)–(3.16) can be
reduced to

⎧ Δφ̃^(k) = Σ_{p=1}^k b̃kp φ̃^(p)  in Ω,
⎨ φ̃^(k) = 0                     on Γ,   (3.19)
⎩ ∂νφ̃^(k) = 0                   on Γ1

for k = 1, · · · , M.
In particular, for k = 1 we have

⎧ Δφ̃^(1) = b̃11 φ̃^(1)  in Ω,
⎨ φ̃^(1) = 0            on Γ,   (3.20)
⎩ ∂νφ̃^(1) = 0          on Γ1.

Thanks to Carleman's unique continuation (cf. [14]), we get

φ̃^(1) ≡ 0.   (3.21)

Inserting (3.21) into the second set of (3.19) leads to

⎧ Δφ̃^(2) = b̃22 φ̃^(2)  in Ω,
⎨ φ̃^(2) = 0            on Γ,   (3.22)
⎩ ∂νφ̃^(2) = 0          on Γ1

and we can repeat the same procedure. Thus, by a simple induction, we get successively that

φ̃^(k) ≡ 0,  k = 1, · · · , M.   (3.23)

This yields that

Φ̃ ≡ 0 ⟹ Φ ≡ 0.   (3.24)

The proof is complete. 

Proof of Theorem 3.3. We rewrite system (3.11) as

⎛Φ ⎞′     ⎛Φ ⎞     ⎛Φ ⎞
⎝Φ′⎠  = A ⎝Φ′⎠ + B ⎝Φ′⎠ ,   (3.25)

where

A = ⎛0  I_M⎞ ;   B = ⎛ 0     0⎞ ,   (3.26)
    ⎝Δ   0 ⎠         ⎝−A^T   0⎠

and I M is the unit matrix of order M. It is easy to see that A is a skew-adjoint operator
with compact resolvent in V × H, and B is a compact operator in V × H. Therefore,
they can generate, respectively, C 0 groups SA (t) and SA+B (t) in the energy space
V × H.
Following a perturbation result in [69], in order to prove the observability inequal-
ities (3.14) for a system of this kind, it is sufficient to check the following assertions:
(i) The direct and inverse inequalities

c ∫_0^T ∫_{Γ1} |∂νΦ|² dΓ dt ≤ ‖Φ̂0‖²_V + ‖Φ̂1‖²_H ≤ c̄ ∫_0^T ∫_{Γ1} |∂νΦ|² dΓ dt   (3.27)

hold for the solution Φ = S_A(t)(Φ̂0, Φ̂1) to the decoupled problem (3.11)–(3.12)
with A = 0.
(ii) The system of root vectors of A + B forms a Riesz basis of subspaces in
V × H. That is to say, there exists a family of subspaces Vi × Hi (i  1) composed
of root vectors of A + B, such that for any given x ∈ V × H, there exists a unique
xi ∈ Vi × Hi for each i  1, such that
x = Σ_{i=1}^{+∞} xi,   c1‖x‖² ≤ Σ_{i=1}^{+∞} ‖xi‖² ≤ c2‖x‖²,   (3.28)

where c1 , c2 are positive constants.


(iii) If (Φ, Ψ) ∈ V × H and λ ∈ C, such that

(A + B)(Φ, Ψ) = λ(Φ, Ψ) and ∂νΦ = 0 on Γ1,   (3.29)

then (Φ, Ψ) ≡ 0.
For simplification of notation, we will still denote by V × H the complex Hilbert
space corresponding to V × H.
Since the assertion (i) is well known under the multiplier geometrical condition
(3.1), we only have to verify (ii) and (iii).
Verification of (ii). Let μi2 > 0 be an eigenvalue corresponding to an eigenvector
φi of − with homogeneous Dirichlet boundary condition:

⎧ −Δφi = μi² φi  in Ω,   (3.30)
⎩ φi = 0         on Γ.

Let
Vi × Hi = {(αφi , βφi ) : α, β ∈ C M }. (3.31)

Obviously, the subspaces Vi × Hi (i = 1, 2, · · · ) are mutually orthogonal and



V × H = ⊕_{i≥1} (Vi × Hi),   (3.32)

where ⊕ stands for the direct sum of subspaces. In particular, for any given x ∈
V × H, there exist xi ∈ Vi × Hi for i  1, such that
x = Σ_{i=1}^{+∞} xi,   ‖x‖² = Σ_{i=1}^{+∞} ‖xi‖².   (3.33)

On the other hand, for any given i  1, Vi × Hi is an invariant subspace of A + B


and of finite dimension. Then, the restriction of A + B to the subspace Vi × Hi is
a linear bounded operator, therefore its root vectors constitute a basis in the finite-
dimensional complex space Vi × Hi . This together with (3.32)–(3.33) implies that
the system of root vectors of A + B form a Riesz basis of subspaces in V × H.
Verification of (iii). Let (Φ, Ψ) ∈ V × H and λ ∈ C, such that (3.29) holds. We
have

Ψ = λΦ and ΔΦ − A^T Φ = λΨ,   (3.34)

namely,

⎧ ΔΦ = (λ²I + A^T)Φ  in Ω,   (3.35)
⎩ Φ = 0              on Γ.

It follows from the classic elliptic theory that Φ ∈ H²(Ω). Moreover, we have

∂νΦ = 0 on Γ1.   (3.36)

Thus, applying Proposition 3.4 to (3.35)–(3.36), we get Φ ≡ 0 and then Ψ ≡ 0. The
proof is complete. □
By a standard application of HUM, from Theorem 3.3 we get the following

Theorem 3.5 The coupled system (3.3) composed of M wave equations is exactly
null controllable in the space (L 2 ()) M × (H −1 ()) M by means of M boundary
controls.

Remark 3.6 In Theorem 3.5, we do not need any assumption on the coupling
matrix A.

Remark 3.7 Similar results on the exact boundary null controllability for a coupled
system of 1-D wave equations in the framework of classical solutions can be found
in [19, 32].

Let us denote by Uad the admissible set of all boundary controls H , which realize
the exact boundary null controllability of system (3.3). Since system (3.3) is exactly
null controllable, Uad is not empty. Moreover, we have the following

Theorem 3.8 Assume that system (3.3) is exactly null controllable in the space
(L²(Ω))^M × (H⁻¹(Ω))^M. Then for ε > 0 small enough, the values of H ∈ Uad on
(T − ε, T) × Γ1 can be arbitrarily chosen.

Proof First recall that there exists a positive constant T0 > 0 independent of the
initial data, such that for all T > T0 system (3.3) is exactly null controllable at the
time T .
Next let ε > 0 be such that T − ε > T0 and let

H̃ ∈ L²(T − ε, T; (L²(Γ1))^M)   (3.37)

be arbitrarily given. We solve the backward problem for system (3.3) to get a solution
W̃ on the time interval [T − ε, T] with the boundary function H = H̃ and the final
data

t = T :  W̃ = W̃′ = 0.   (3.38)

Since T − ε > T0, system (3.3) is still exactly controllable on the interval [0, T − ε],
then we can find a boundary control

Ĥ ∈ L²(0, T − ε; (L²(Γ1))^M),   (3.39)

such that the corresponding solution Ŵ satisfies the initial condition:

t = 0 :  Ŵ = Ŵ0,  Ŵ′ = Ŵ1   (3.40)

and the final condition:

t = T − ε :  Ŵ = W̃,  Ŵ′ = W̃′.   (3.41)

Thus, setting

H = ⎧ H̃,  t ∈ (T − ε, T),
    ⎩ Ĥ,  t ∈ (0, T − ε)   (3.42)

and

W = ⎧ W̃,  t ∈ (T − ε, T),
    ⎩ Ŵ,  t ∈ (0, T − ε),   (3.43)

we check easily that W is a weak solution to problem (3.3)–(3.4) and the boundary
control H realizes the exact boundary null controllability. The proof is then complete.


3.2 Non-exact Boundary Controllability

We have showed the exact boundary null controllability of the coupled system (3.3)
composed of M wave equations by means of M boundary controls. In this para-
graph, we will show that if the number of the boundary controls is fewer than M,
then we cannot realize the exact boundary null controllability for the coupled sys-

tem (3.3) composed of M wave equations for all initial data (Ŵ0, Ŵ1) in the space
(L 2 ()) M × (H −1 ()) M . For this purpose, we further investigate the exact bound-
ary null controllability.
Now we are interested in finding a control H⁰ ∈ Uad, which has the least norm
among all the others:

‖H⁰‖_{L²(0,T;(L²(Γ1))^M)} = inf_{H ∈ Uad} ‖H‖_{L²(0,T;(L²(Γ1))^M)}.   (3.44)

For any given H̃ ∈ L²(0, T; (L²(Γ1))^M), we solve the backward problem

⎧ V″ − ΔV + AV = 0  in (0, T) × Ω,
⎪ V = 0             on (0, T) × Γ0,
⎨ V = H̃             on (0, T) × Γ1,   (3.45)
⎩ t = T :  V = V′ = 0  in Ω

and define the linear map

R : H̃ → (V(0), V′(0)),   (3.46)

which is continuous from L 2 (0, T ; (L 2 (1 )) M ) into (L 2 ()) M × (H −1 ()) M


because of the well-posedness. Let N be the kernel of R. Problem (3.44) becomes

inf_{q ∈ N} ‖H̃ − q‖_{L²(0,T;(L²(Γ1))^M)}.   (3.47)

Let P denote the orthogonal projection from L 2 (0, T ; (L 2 (1 )) M ) to N , which is


a closed subspace. Then the boundary control H 0 = (I − P)H has the least norm.
Moreover, we have the following

Theorem 3.9 Assume that system (3.3) is exactly null controllable in (L 2 ()) M ×
(H −1 ()) M . Then there exists a positive constant c > 0, such that the control H 0
with the least norm given by (3.44) satisfies the following estimate:

‖H⁰‖_{L²(0,T;(L²(Γ1))^M)} ≤ c ‖(Ŵ0, Ŵ1)‖_{(L²(Ω))^M×(H⁻¹(Ω))^M}   (3.48)

for any given (Ŵ0, Ŵ1) ∈ (L²(Ω))^M × (H⁻¹(Ω))^M.

Proof Consider the linear map R ◦ (I − P) from the quotient space L 2 (0, T ;
(L 2 (1 )) M )/N into (L 2 ()) M × (H −1 ()) M .
First if R ◦ (I − P)H = 0, then (I − P)H ∈ N , thus H = P H ∈ N . Therefore
R ◦ (I − P) is an injective.
On the other hand, the exact boundary null controllability of system (3.3)
implies that R ◦ (I − P) is a surjection, therefore a bijection which is continu-
ous from L 2 (0, T ; (L 2 (1 )) M )/N into (L 2 ()) M × (H −1 ()) M . By Banach’s the-
orem of closed graph, the inverse of R ◦ (I − P) is also bounded from (L 2 ()) M ×

(H −1 ()) M into L 2 (0, T ; (L 2 (1 )) M )/N . This yields the inequality (3.48). The
proof is complete. 
In the case of partial lack of boundary controls, we have the following negative
result.
Theorem 3.10 Assume that the number of boundary controls is fewer than M.
Then, no matter how large T > 0 is, the coupled system (3.3) composed of M
wave equations is not exactly null controllable for all initial data (Ŵ0, Ŵ1) ∈
(L 2 ()) M × (H −1 ()) M .
Proof Without loss of generality, we assume that h^(1) ≡ 0. Let θ ∈ D(Ω). We choose
the special initial data as

Ŵ0 = (θ, 0, · · · , 0)^T,  Ŵ1 = 0.   (3.49)

If system (3.3) is exactly null controllable, following Theorem 3.9, the boundary
control H0 with the least norm satisfies the following estimate:

H 0 L 2 (0,T ;(L 2 (1 )) M )  c θ L 2 () . (3.50)

Because of the well-posedness (cf. [60, 64, 70]), there exists a constant c > 0, such
that

‖W‖_{L²(0,T;(L²(Ω))^M)} ≤ c ( ‖(Ŵ0, Ŵ1)‖_{(L²(Ω))^M×(H⁻¹(Ω))^M} + ‖H⁰‖_{L²(0,T;(L²(Γ1))^M)} ).   (3.51)

Then it follows that

W L 2 (0,T ;(L 2 ()) M )  c (1 + c) θ L 2 () . (3.52)

Now consider the first equation in problem (3.3)–(3.4) with h^(1) ≡ 0 as the following
backward problem:

⎧ w^(1)_tt − Δw^(1) = −Σ_{j=1}^M a1j w^(j)  in (0, T) × Ω,
⎨ w^(1) = 0                                 on (0, T) × Γ,   (3.53)
⎩ t = T :  w^(1) = 0, ∂t w^(1) = 0          in Ω

with the initial data

t =0: w(1) = θ, ∂t w (1) = 0 in . (3.54)

Once again by the well-posedness for the backward problem (3.53), there exists a
constant c > 0, such that

θ H01 ()  c W L 2 (0,T ;(L 2 ()) M ) . (3.55)



This, together with (3.52), gives a contradiction:

θ H01 ()  c c (1 + c) θ L 2 () , ∀θ ∈ D(). (3.56)

The proof is complete. 

In order to make the control problem more flexible, we introduce a matrix D of


order M with constant elements, and consider the following mixed problem for a
coupled system of wave equations:
⎧ 
⎨ W − W + AW = 0 in (0, T ) × ,
W =0 on (0, T ) × 0 , (3.57)

W =DH on (0, T ) × 1

with the initial condition

t =0: 0 , W  = W
W =W 1 in . (3.58)

Combining Theorems 3.5 and 3.10, we have the following result.

Theorem 3.11 Under the multiplier geometrical condition (3.1), the coupled system
(3.57) composed of M wave equations is exactly null controllable for any given initial
data (Ŵ0, Ŵ1) ∈ (L²(Ω))^M × (H⁻¹(Ω))^M if and only if the matrix D is of rank M.
Chapter 4
Exact Boundary Synchronization and
Non-exact Boundary Synchronization

In the case of partial lack of boundary controls, we consider the exact boundary syn-
chronization and the non-exact boundary synchronization in this chapter for system
(I) with Dirichlet boundary controls.

4.1 Definition

Let
U = (u^(1), · · · , u^(N))^T and H = (h^(1), · · · , h^(M))^T   (4.1)

with M  N . Consider the coupled system (I) with the initial condition (I0).
According to Theorem 3.11, system (I) is exactly null controllable if and only
if rank(D) = N , namely, M = N and D is invertible. In the case of partial lack of
boundary controls, we now give the following

Definition 4.1 System (I) is exactly synchronizable, if there exists a positive con-
   
stant T > 0, such that for any given initial data (Û0, Û1) ∈ (L²(Ω))^N × (H⁻¹(Ω))^N,
there exists a suitable boundary control H ∈ L²loc(0, +∞; (L²(Γ1))^M) with compact
support in [0, T ], such that the corresponding solution U = U (t, x) to problem (I)
and (I0) satisfies the following final condition

tT : u (1) ≡ u (2) ≡ · · · ≡ u (N ) := u, (4.2)

where u = u(t, x), being unknown a priori, is called the exactly synchronizable
state.

In the above definition, though the boundary control acts on the time interval
[0, T], the synchronization is required not only at the time t = T, but also for all


t  T , namely, after all boundary controls are eliminated. Thus, this kind of syn-
chronization is not a short-lived one, but exists once and for all, as is needed in
applications.
In fact, assuming that (4.2) is realized only at some moment T > 0 if we set
hereafter H ≡ 0 for t > T , then, generally speaking, the corresponding solution to
problem (I) and (I0) does not satisfy automatically the synchronization condition
(4.2) for t > T . This is different from the exact boundary null controllability, where
the solution vanishes with H ≡ 0 for t  T . To illustrate it, let us consider the
following system:
⎧ u″ − Δu = 0  in (0, +∞) × Ω,
⎪ v″ − Δv = u  in (0, +∞) × Ω,   (4.3)
⎨ u = 0        on (0, +∞) × Γ,
⎩ v = h        on (0, +∞) × Γ.

Since the first equation is separated from the second one, for any given initial data
u0, 
( u 1 ) we can first find a solution u. Once u is determined, we can find (cf. [85]) a
boundary control h such that the solution v to the second equation satisfies the final
conditions coming from the requirement of synchronization:

t=T : v = u, v  = u  . (4.4)

If we set h ≡ 0 for t > T , generally speaking, we cannot get v ≡ u for t  T . So,


in order to keep the synchronization for t  T , we have to maintain the boundary
control h in action for t  T . However, for the sake of applications, what we want is
to get the exact boundary synchronization by some boundary controls with compact
support.

Remark 4.2 If system (I) is exactly null controllable, then we have certainly the exact
boundary synchronization. This trivial situation should be excluded. Therefore, in
Definition 4.1 we should restrict ourselves to the case that the number of boundary
controls is fewer than N , namely, M = rank(D) < N , so that system (I) is not exactly
null controllable.

4.2 Condition of Compatibility

Theorem 4.3 Assume that system (I) is exactly synchronizable but not exactly null
controllable. Then the coupling matrix A = (ai j ) should satisfy the following con-
dition of compatibility (row-sum condition):


Σ_{p=1}^N akp := a,  k = 1, · · · , N,   (4.5)

where a is a constant independent of k = 1, · · · , N .

Remark 4.4 By Theorem 3.11, the rank condition

rank(D) < N (4.6)

implies the non-exact boundary null controllability.

Proof of Theorem 4.3. By synchronization (4.2), there exist a constant T > 0 and a
scalar function u such that



t ≥ T :  u″ − Δu + ( Σ_{p=1}^N akp ) u = 0  in Ω,  k = 1, · · · , N.   (4.7)

In particular, we have

t ≥ T :  ( Σ_{p=1}^N akp ) u = ( Σ_{p=1}^N alp ) u  in Ω,  k, l = 1, · · · , N.   (4.8)

On the other hand, since system (I) is not exactly null controllable, there exists at
0 , U
least an initial data (U 1 ) for which the corresponding synchronizable state u does
not identically vanish for t  T , whatever the boundary controls H would be chosen.
This yields the condition of compatibility (4.5). The proof is complete. 

Remark 4.5 In the study of synchronization for systems of ordinary differential


equations, the row-sum condition (4.5) is imposed on the system according to phys-
ical meanings as a reasonable sufficient condition (in the most cases one takes a = 0
there). However, as we have seen before, for the synchronization of systems of partial
differential equations on a finite time interval, the row-sum condition (4.5) is actually
a necessary condition, which makes the theory of synchronization more complete
for systems of partial differential equations.

Let the synchronization matrix C1 be defined by


⎛ ⎞
1 −1 0 ··· 0
⎜0 1 −1 ··· 0 ⎟
⎜ ⎟
C1 = ⎜ . .. .. .. .. ⎟ . (4.9)
⎝ .. . . . . ⎠
0 0 · · · 1 −1 (N −1)×N

C1 is an (N − 1) × N full row-rank matrix, and the exact boundary synchroniza-


tion (4.2) can be equivalently written as

tT : C1 U ≡ 0. (4.10)

Let
e1 = (1, · · · , 1)T . (4.11)

Then, the condition of compatibility (4.5) is equivalent to the fact that the vector e1
is an eigenvector of A, corresponding to the eigenvalue a:

Ae1 = ae1 . (4.12)

On the other hand, since Ker(C1 )=Span{e1 }, condition (4.12) means that Ker(C1 )
is a one-dimensional invariant subspace of A:

AKer(C1 ) ⊆ Ker(C1 ). (4.13)

Moreover, by Proposition 2.15 (in which we take p = 1), the condition (4.13) is also
equivalent to the existence of a unique matrix A1 of order (N − 1), such that

C 1 A = A1 C 1 . (4.14)

Such a matrix A1 = (a i j ) is called the reduced matrix of A by C1 , which can be


given explicitly by
A1 = C1 AC1+ , (4.15)

where C1+ denotes the Moore–Penrose generalized inverse of C1 :

C1+ = C1T (C1 C1T )−1 , (4.16)

in which C1T is the transpose of C1 . More precisely, the elements of A1 are given by


āij = Σ_{p=1}^j (aip − a_{i+1,p}) = Σ_{p=j+1}^N (a_{i+1,p} − aip),  i, j = 1, · · · , N − 1.   (4.17)

Remark 4.6 The synchronization matrix C1 given by (4.9) is not the unique choice. In
fact, noting that Ker(C1 ) = Span{e1 } with e1 = (1, · · · , 1)T , any given
(N − 1) × N full row-rank matrix C1 with the zero row sum, or equivalently, such
that Ker(C1 ) is an invariant subspace of A (namely, (4.13) holds), can be always
taken as a synchronization matrix. However, for fixing the idea, in what follows we
always use the synchronization matrix C1 given by (4.9).
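For a concrete coupling matrix satisfying the row-sum condition (4.5), the synchronization matrix C1 of (4.9) and the reduced matrix A1 = C1AC1^+ are straightforward to compute. The sketch below is an added numerical illustration (the 4 × 4 matrix and the helper name are ours); it checks the compatibility relation (4.14) and the entry formula (4.17).

```python
import numpy as np

def sync_matrix(N):
    """The (N-1) x N synchronization matrix C1 of (4.9)."""
    C1 = np.zeros((N - 1, N))
    for i in range(N - 1):
        C1[i, i], C1[i, i + 1] = 1.0, -1.0
    return C1

N = 4
A = np.array([[1., -1., 0., 2.],
              [0., 3., -2., 1.],
              [-1., 0., 2., 1.],
              [2., -1., 0., 1.]])               # every row sums to a = 2
C1 = sync_matrix(N)
A1 = C1 @ A @ C1.T @ np.linalg.inv(C1 @ C1.T)   # reduced matrix (4.15)-(4.16)
print(np.allclose(C1 @ A, A1 @ C1))             # True: relation (4.14)

# entry formula (4.17): a1[i,j] = sum_{p<=j} (a[i,p] - a[i+1,p])
A1_formula = np.array([[sum(A[i, p] - A[i + 1, p] for p in range(j + 1))
                        for j in range(N - 1)] for i in range(N - 1)])
print(np.allclose(A1, A1_formula))              # True
```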

4.3 Exact Boundary Synchronization and Non-exact


Boundary Synchronization

We first give a rank condition for the boundary control matrix D, which is neces-
sary for the exact boundary synchronization of system (I) in the space (L 2 ()) N ×
(H −1 ()) N .

Theorem 4.7 Assume that system (I) is exactly synchronizable. Then we necessarily
have
rank(C1 D) = N − 1. (4.18)

Proof If Ker(D T ) ∩ Im(C1T ) = {0}, then by Proposition 2.11, we have

rank(C1 D) = rank(D T C1T ) = rank(C1T ) = N − 1. (4.19)

Otherwise, there exists a unit vector E ∈ Ker(D T ) ∩ Im(C1T ). Assume that system
(I) is exactly synchronizable, then for any given θ ∈ D(), there exists a boundary
control H such that the solution U to system (I) with the following initial condition

t = 0 :  U = Eθ,  U′ = 0  in Ω   (4.20)

satisfies the final condition (4.10). Then, applying E to problem (I) and (I0) and
setting φ = (E, U), it is easy to see that

⎧ φ″ − Δφ = −(E, AU)  in (0, +∞) × Ω,
⎨ φ = 0               on (0, +∞) × Γ,   (4.21)
⎩ t = T :  φ ≡ 0      in Ω.

Moreover, by a similar procedure as in the proof of Theorem 3.9, the control H can
be chosen such that
H  L 2 (0,T ;(L 2 (1 )) N )  cθ L 2 () , (4.22)

where c > 0 is a positive constant. By the well-posedness of problem (I) and (I0),
there exists a constant c > 0, such that

U  L 2 (0,T ;(L 2 ()) N )  c θ L 2 () . (4.23)

Then, by the well-posedness for the backward problem given by (4.21), there exists
a constant c > 0, such that

θ H01 ()  c U  L 2 (0,T ;(L 2 ()) N )  c c θ L 2 () , ∀θ ∈ D(). (4.24)

We get thus a contradiction which achieves the proof. 

As an immediate consequence of Theorem 4.7, we have the following



Corollary 4.8 If
rank(C1 D) < N − 1, (4.25)

in particular if
rank(D) < N − 1, (4.26)

then, no matter how large T > 0 is, system (I) cannot be exactly synchronizable at
the time T .

Let
W1 = (w (1) , · · · , w (N −1) )T . (4.27)

Then, under the condition of compatibility (4.5) and noting (4.14), the original prob-
lem (I) and (I0) for the variable U can be reduced to the following self-closed problem
for the variable W1 = C1U:

⎧ W1″ − ΔW1 + A1W1 = 0  in (0, +∞) × Ω,
⎨ W1 = 0                on (0, +∞) × Γ0,   (4.28)
⎩ W1 = C1DH             on (0, +∞) × Γ1,

associated with the initial data

t = 0 :  W1 = C1Û0,  W1′ = C1Û1  in Ω,   (4.29)

where A1 is given by (4.15).

Proposition 4.9 Under the condition of compatibility (4.5), the exact boundary syn-
chronization of the original system (I) is equivalent to the exact boundary null con-
trollability of the reduced system (4.28).

Proof Clearly, the linear map

(U 1 ) → (C1 U
0 , U 0 , C1 U
1 ) (4.30)

is surjective from the space (L 2 ()) N × (H −1 ()) N onto the space (L 2 ()) N −1 ×
(H −1 ()) N −1 . Then the exact boundary synchronization of system (I) implies the
exact boundary null controllability of the reduced system (4.28). On the other hand,
the exact boundary synchronization of system (I) obviously follows from the exact
boundary null controllability of system (4.28). The proof is complete. 

By Theorem 3.11 and Proposition 4.9, we immediately get the following

Theorem 4.10 Under the multiplier geometrical condition (3.1) and the condition
of compatibility (4.5), system (I) is exactly synchronizable in the space (L 2 ()) N ×
(H −1 ()) N , provided that the rank condition (4.18) is satisfied.
Chapter 5
Exactly Synchronizable States

When system (I) possesses the exact boundary synchronization, the corresponding
exactly synchronizable states will be studied in this chapter.

5.1 Attainable Set of Exactly Synchronizable States

In the case that system (I) possesses the exact boundary synchronization at the time
T > 0, it is easy to see that for t  T , the exactly synchronizable state u = u(t, x)
defined by (4.2) satisfies the following wave equation with homogenous Dirichlet
boundary condition:

⎧ u″ − Δu + au = 0  in (T, +∞) × Ω,   (5.1)
⎩ u = 0             on (T, +∞) × Γ,

where a is given by (4.5). Hence, the evolution of the exactly synchronizable state
u = u(t, x) with respect to t is completely determined by the values of (u, u′) at the
time t = T:

t = T :  u = u0,  u′ = u1  in Ω.   (5.2)

Theorem 5.1 Assume that the coupling matrix A satisfies the condition of compati-
bility (4.5). Then the attainable set of the values (u, u  ) at the time t = T of the exactly
synchronizable state u = u(t, x) is actually the whole space L 2 () × H −1 () as
   
0 , U
the initial data (U 1 ) vary in the space L 2 () N × H −1 () N .

Proof For any given (û0, û1) ∈ L²(Ω) × H⁻¹(Ω), by solving the following backward problem

⎧ u″ − Δu + au = 0  in (0, T) × Ω,   (5.3)
⎩ u = 0             on (0, T) × Γ


with the final condition

t = T :  u = û0,  u′ = û1  in Ω,   (5.4)

we get the corresponding solution u = u(t, x). Then, under the condition of com-
patibility (4.5), the function

U (t, x) = u(t, x)e1 , (5.5)

in which e1 = (1, · · · , 1)T , is the solution to problem (I) and (I0) with the null control
H ≡ 0 and the initial condition

t =0: U = u(0, x)e1 , U  = u  (0, x)e1 . (5.6)

Therefore, by solving problem (I) and (I0) with the null boundary control and the
initial condition (5.6), we can reach any given exactly synchronizable state ( u0, 
u1)
at the time t = T . This fact shows that any given state ( u 1 ) ∈ L 2 () × H −1 ()
u0, 
can be expected to be a exactly synchronizable state. Consequently, the set of the
values (u(T ), u  (T )) of the exactly synchronizable state u = (t, x) at the time T is
actually the whole space L 2 () × H −1 () as the initial data (U 0 , U
1 ) vary in the
 2  N  −1 N
space L () × H () . The proof is complete. 

5.2 Determination of Exactly Synchronizable States

We now try to determine the exactly synchronizable state of system (I) for each given
initial data (U 0 , U1 ).
As shown in Sect. 4.2, the vector e1 = (1, · · · , 1)T is an eigenvector of A, corre-
sponding to the real eigenvalue a given by (4.5).
Let ε1, · · · , εr (resp. E1, · · · , Er) be a Jordan chain of length r of A (resp. of A^T),
such that

⎧ Aεl = aεl + εl+1,      1 ≤ l ≤ r,
⎨ A^T Ek = aEk + Ek−1,   1 ≤ k ≤ r,   (5.7)
⎩ (Ek, εl) = δkl,        1 ≤ k, l ≤ r,

where

εr = (1, · · · , 1)^T,  εr+1 = 0,  E0 = 0.   (5.8)

Clearly, εr = e1 is an eigenvector of A and, respectively, E1 is an eigenvector of A^T
associated with the same eigenvalue a.

Consider the projection P on the subspace Span{ε1, · · · , εr} as follows:

P = Σ_{k=1}^r εk ⊗ Ek,   (5.9)

where ⊗ stands for the tensor product such that

(εk ⊗ Ek)U = (Ek, U)εk,  ∀U ∈ R^N.   (5.10)

P can be represented by a matrix of order N . We can then decompose

R N = Im(P) ⊕ Ker(P), (5.11)

where ⊕ stands for the direct sum of subspaces. Moreover, we have

Im(P) = Span{ε1, · · · , εr},   (5.12)

Ker(P) = (Span{E1, · · · , Er})⊥   (5.13)

and
P A = A P. (5.14)
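When bi-orthonormal Jordan chains ε1, . . . , εr of A and E1, . . . , Er of A^T satisfying (5.7) are available numerically, the projection (5.9) is just a sum of outer products. The sketch below is an added illustration (the 3 × 3 matrix and the chains with r = 2 are example data chosen by us; one checks directly that they satisfy (5.7) with a = 1).

```python
import numpy as np

# rows of A sum to a = 1; Jordan chain of A: eps1 = (1,0,0), eps2 = (1,1,1),
# bi-orthonormal chain of A^T: E1 = (1,-1,0), E2 = (0,1,0)  (example data)
A = np.array([[2., -1., 0.],
              [1., 0., 0.],
              [1., -2., 2.]])
eps = [np.array([1., 0., 0.]), np.array([1., 1., 1.])]
E = [np.array([1., -1., 0.]), np.array([0., 1., 0.])]

# projection (5.9): P = sum_k eps_k (x) E_k, with (eps_k (x) E_k)U = (E_k, U) eps_k
P = sum(np.outer(eps_k, E_k) for eps_k, E_k in zip(eps, E))

print(np.allclose(P @ P, P))             # P is a projection
print(np.allclose(P @ A, A @ P))         # relation (5.14)
print(np.allclose(P @ eps[1], eps[1]))   # eps_r belongs to Im(P)
```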

Now let U = U (t, x) be the solution to problem (I) and (I0). We define

Uc := (I − P)U,
(5.15)
Us := PU.

If system (I) is exactly synchronizable, we have

t ≥ T :  U = uεr,   (5.16)

where u = u(t, x) is the exactly synchronizable state and εr = (1, · · · , 1)^T. Then,
noting (5.15)–(5.16), we have

t ≥ T :  Uc = u(I − P)εr = 0,  Us = uPεr = uεr.   (5.17)

Thus Uc and Us will be called the controllable part and the synchronizable part
of U , respectively.
Noting (5.14) and applying the projection P on problem (I) and (I0), we get
immediately

Proposition 5.2 The controllable part Uc is the solution to the following problem:

⎧ Uc″ − ΔUc + AUc = 0  in (0, +∞) × Ω,
⎪ Uc = 0               on (0, +∞) × Γ0,
⎨ Uc = (I − P)DH       on (0, +∞) × Γ1,   (5.18)
⎩ t = 0 :  Uc = (I − P)Û0,  Uc′ = (I − P)Û1  in Ω,

while the synchronizable part Us is the solution to the following problem:

⎧ Us″ − ΔUs + AUs = 0  in (0, +∞) × Ω,
⎪ Us = 0               on (0, +∞) × Γ0,
⎨ Us = PDH             on (0, +∞) × Γ1,   (5.19)
⎩ t = 0 :  Us = PÛ0,  Us′ = PÛ1  in Ω.

Remark 5.3 In fact, the boundary control H realizes the exact boundary null con-
0 , (I − P)U
trollability for Uc with the initial data ((I − P)U 1 ) ∈ Ker(P) × Ker(P)
on one hand, and the exact boundary synchronization for Us with the initial data
0 , P U
(P U 1 ) ∈ Im(P) × Im(P) on the other hand.

We define

D N −1 = {D ∈ M N ×(N −1) : rank(D) = rank(C1 D) = N − 1}. (5.20)

Proposition 5.4 Let a matrix D of order N × (N − 1) be defined by


 ⊥
Im(D) = Span{Er } . (5.21)

We have D ∈ D N −1 .

Proof Clearly, rank(D) = N − 1. On the other hand, since (r , Er ) = 1, we have


 ⊥
r ∈
/ Span{Er } = Im(D). Noting that Ker(C1 ) = Span{r }, we have Im(D) ∩
Ker(C1 ) = {0}. Then, by Proposition 2.11, it follows that rank(C1 D) = rank(D) =
N − 1. The proof is complete. 
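A boundary control matrix D with Im(D) = (Span{Er})⊥ can be obtained from an orthonormal basis of that orthogonal complement, for instance via the SVD. The following sketch is an added illustration (the helper name is ours, and Er = (0, 1, 0) is the root vector of A^T from the 3 × 3 example sketched after (5.14)); it also verifies rank(D) = rank(C1D) = N − 1, so that D ∈ D_{N−1}.

```python
import numpy as np

def orth_complement_basis(v):
    """Columns form an orthonormal basis of {v}^perp (via the SVD of v^T)."""
    _, _, Vt = np.linalg.svd(v.reshape(1, -1))
    return Vt[1:].T                 # drop the direction of v itself

C1 = np.array([[1., -1., 0.],
               [0., 1., -1.]])
Er = np.array([0., 1., 0.])         # root vector E_r of A^T (example data)
D = orth_complement_basis(Er)       # N x (N-1) matrix with Im(D) = {E_r}^perp

N = 3
print(np.linalg.matrix_rank(D) == N - 1)        # True
print(np.linalg.matrix_rank(C1 @ D) == N - 1)   # True, hence D belongs to D_{N-1}
```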

Theorem 5.5 When r = 1, we can take a boundary control matrix D ∈ D N −1 , such


that the synchronizable part Us is independent of boundary controls H . Inversely, if
the synchronizable part Us is independent of boundary controls H , then we neces-
sarily have r = 1.

Proof When r = 1, by Proposition 5.4, we can take a boundary control matrix D ∈


D N −1 as in (5.21). Noting (5.13), we have
 ⊥
Ker(P) = Span{E1 } = Im(D). (5.22)

Then P D = 0, therefore (5.19) becomes a problem with homogeneous Dirichlet


boundary condition. Consequently, the solution Us is independent of boundary con-
trols H .
Inversely, let H1 and H2 be two boundary controls which realize simultaneously
the exact boundary synchronization of system (I). If the corresponding solutions Us
to problem (5.19) are independent of the boundary controls H1 and H2 , then we have

P D(H1 − H2 ) = 0 on (0, T ) × 1 . (5.23)

By Theorem 3.8 and Proposition 4.9, the values of C1 D(H1 − H2 ) on (T − , T ) ×


1 can be arbitrarily chosen. Since C1 D is invertible, the values of (H1 − H2 ) on
(T − , T ) × 1 can be arbitrarily chosen. This yields that P D = 0. It follows that

Im(D) ⊆ Ker(P). (5.24)

Noting (5.13), we have dim Ker(P) = N − r and dim Im(D) = N − 1, then r = 1.


The proof is complete. 

Corollary 5.6 Assume that both Ker(C1 ) and Im(C1T ) are invariant subspaces of
A. Then there exists a boundary control matrix D ∈ D N −1 , such that system (I) is
exactly synchronizable and the synchronizable part Us is independent of boundary
controls H .

Proof Recall that Ker(C1 )=Span{e1 } with e1 = (1, · · · , 1)T . Since Im(C1T ) is an
 ⊥
invariant subspace of A, by Proposition 2.7, Im(C1T ) = Ker(C1 ) is an invariant
subspace of A T . Thus, e1 is also an eigenvector of A T , corresponding to the same
eigenvalue a given by (4.5). Then taking E 1 = e1 /N , we have (E 1 , e1 ) = 1 and then
r = 1. Thus, by Theorem 5.5 we can chose a boundary control matrix D ∈ D N −1 ,
such that the synchronizable part Us of system (I) is independent of boundary con-
trols H . The proof is complete. 

Remark 5.7 Under the condition r = 1, there exists an eigenvector E 1 of A T , such


that (E 1 , e1 ) = 1. Clearly, if A is symmetric or A T satisfies also the condition of
compatibility (4.5), then e1 is also an eigenvector of A T . Consequently, we can take
E 1 = e1 /N such that (E 1 , e1 ) = 1. However, this condition is not always satisfied
for any given matrix A. For example, let

A = ( 2  −1 ; 1  0 ).   (5.25)

We have

a = 1,  e1 = (1, 1)^T,  E1 = (1, −1)^T,   (5.26)

then
(E 1 , e1 ) = 0. (5.27)

In general, if (5.27) holds, then


E1 ∈ (Span{e1})⊥ = (Ker(C1))⊥ = Im(C1^T).   (5.28)

This means that (E 1 , U ) is just a linear combination of the components of the


vector C1 U , therefore it does not provide any new information for the synchronizable
part Us .
Remark 5.8 Let (c̃1, · · · , c̃N−1) be a basis of (Span{E1})⊥. Then

A(e1, c̃1, · · · , c̃N−1) = (e1, c̃1, · · · , c̃N−1) ⎛a   0 ⎞ ,   (5.29)
                                                 ⎝0  A22⎠

where A22 is a matrix of order (N − 1). Therefore, A is diagonalizable by blocks
according to the basis (e1, c̃1, · · · , c̃N−1).

We next discuss the general case r ≥ 1. Let us denote

φk = (Ek, U),  1 ≤ k ≤ r   (5.30)

and write

Us = Σ_{k=1}^r (Ek, U)εk = Σ_{k=1}^r φk εk.   (5.31)

Then (φ1, · · · , φr) are the coordinates of Us on the bi-orthonormal bases {ε1, · · · , εr}
and {E1, · · · , Er}.

Theorem 5.9 Let ε1, · · · , εr (resp. E1, · · · , Er) be a Jordan chain of A (resp. A^T)
corresponding to the eigenvalue a and εr = (1, · · · , 1)^T. Then the synchronizable
part Us = (φ1, · · · , φr) can be determined by the solution of the following problem
(1 ≤ k ≤ r):

⎧ φk″ − Δφk + aφk + φk−1 = 0  in (0, +∞) × Ω,
⎪ φk = 0                     on (0, +∞) × Γ0,
⎨ φk = hk                    on (0, +∞) × Γ1,   (5.32)
⎩ t = 0 :  φk = (Ek, Û0),  φk′ = (Ek, Û1)  in Ω,

where

φ0 = 0 and hk = (Ek, DH).   (5.33)

Moreover, the exactly synchronizable state is given by u = φr for t  T.



Proof First, for 1  k  r , we have

(Ek , U ) = (Ek , Us ) = φk , (5.34)


r
(Ek , P D H ) = (El , D H )(Ek , l ) = (Ek , D H ) (5.35)
l=1

and
(Ek , PU0 ) = (Ek , U0 ), (Ek , PU1 ) = (Ek , U1 ). (5.36)

Taking the inner product of (5.19) with Ek , we get (5.32)–(5.33).


On the other hand, noting (5.16), we have

tT : φk = (Ek , U ) = (Ek , ur ) = uδkr (5.37)

for 1  k  r. Thus, the exactly synchronizable state u is given by

tT : u = u(t, x) = φr (t, x). (5.38)

The proof is complete. 

In the special case r = 1, by Theorems 5.5 and 5.9, we have

Corollary 5.10 When r = 1, we can take D ∈ D N −1 , such that D T E1 = 0. Then


the exactly synchronizable state u is determined by u = φ for t  T , where φ is the
solution to the following problem with homogeneous Dirichlet boundary condition:
⎧ φ″ − Δφ + aφ = 0  in (0, +∞) × Ω,
⎨ φ = 0             on (0, +∞) × Γ,   (5.39)
⎩ t = 0 :  φ = (E1, Û0),  φ′ = (E1, Û1)  in Ω.

Inversely if the synchronizable part Us = (φ1 , · · · , φr ) is independent of boundary


controls H , then we have necessarily

r = 1 and D T E1 = 0. (5.40)

Consequently, the exactly synchronizable state u is given by u = φ for t  T , where


φ is the solution to problem (5.39). In particular, if

0 ) = (E1 , U
(E1 , U 1 ) = 0, (5.41)

0 , U
then system (I) is exactly null controllable for such initial data (U 1 ).

5.3 Approximation of Exactly Synchronizable States

The relation (5.37) shows that only the last component φr is synchronized, while
the others are steered to zero. However, in order to get φr , we have to solve the
whole problem (5.32)–(5.33) for (φ1 , · · · , φr ). Therefore, except in the case r = 1,
the exactly synchronizable state u depends on boundary controls which realize the
exact boundary synchronization, and then, generically speaking, one cannot uniquely
determine the exactly synchronizable state u. However, we have the following result.

Theorem 5.11 Assume that system (I) is exactly synchronizable by means of a


boundary control matrix D ∈ D N −1 . Let φ be the solution to the following homoge-
neous problem:
⎧ φ″ − Δφ + aφ = 0  in (0, +∞) × Ω,
⎨ φ = 0             on (0, +∞) × Γ,   (5.42)
⎩ t = 0 :  φ = (Er, Û0),  φ′ = (Er, Û1)  in Ω.

Assume furthermore that


D T Er = 0. (5.43)

Then there exists a positive constant cT > 0, depending on T , such that the exactly
synchronizable state u satisfies the following estimate:

‖(u, u′)(T) − (φ, φ′)(T)‖_{H₀¹(Ω)×L²(Ω)}   (5.44)
≤ cT ‖C1(Û0, Û1)‖_{(L²(Ω))^{N−1}×(H⁻¹(Ω))^{N−1}}.

Proof By Proposition 5.4, we can take a boundary control matrix D ∈ D N −1 , such


that (5.43) is satisfied. Then, considering the r th equation in (5.32), we get the
following problem with homogeneous Dirichlet boundary condition:
⎧ φr″ − Δφr + aφr = −φr−1  in (0, +∞) × Ω,
⎨ φr = 0                   on (0, +∞) × Γ,   (5.45)
⎩ t = 0 :  φr = (Er, Û0),  φr′ = (Er, Û1)  in Ω.

From (5.37) we have

tT : φr ≡ u, φr −1 ≡ 0. (5.46)

Noting that problems (5.42) and (5.45) have the same initial data and the same
homogeneous Dirichlet boundary condition, by well-posedness, there exists a posi-
tive constant c1 > 0, such that

‖(u, u′)(T) − (φ, φ′)(T)‖²_{H₀¹(Ω)×L²(Ω)} ≤ c1 ∫_0^T ‖φr−1(s)‖²_{L²(Ω)} ds.   (5.47)

Noting that the condition (Er−1, εr) = 0 implies that

Er−1 ∈ (Span{εr})⊥ = (Ker(C1))⊥ = Im(C1^T),   (5.48)

Er −1 is a linear combination of the columns of C1T . Therefore, there exists a positive


constant c2 > 0, such that

φr −1 (s) 2L 2 () = (Er −1 , U (s)) 2L 2 ()  c2 C1 U (s) 2(L 2 ()) N −1 . (5.49)

Recall that W = C1 U , due to the exact boundary null controllability of the reduced
system (4.28), there exists a positive constant cT > 0, such that
∫_0^T ‖C1U(s)‖²_{(L²(Ω))^{N−1}} ds ≤ cT ‖C1(Û0, Û1)‖²_{(L²(Ω))^{N−1}×(H⁻¹(Ω))^{N−1}}.   (5.50)

Finally, inserting (5.49)–(5.50) into (5.47), we get (5.44). The proof is


complete. 

Remark 5.12 When r > 1, since (E_1, ε_r) = 0, the exactly synchronizable state u of system (I) cannot be determined independently of the applied boundary controls H. Nevertheless, by Theorem 5.11, if C_1(Û_0, Û_1) is suitably small, then u is close to the solution to problem (5.42), the initial data of which are given by the weighted average of the original initial data (Û_0, Û_1) with the weight E_r (a root vector of A^T).
Chapter 6
Exact Boundary Synchronization
by Groups

The exact boundary synchronization by groups will be considered in this chapter for
system (I) with further lack of Dirichlet boundary controls.

6.1 Definition

Let
    U = (u^(1), ···, u^(N))^T  and  H = (h^(1), ···, h^(M))^T           (6.1)

with M ≤ N. Consider the coupled system (I) of wave equations with Dirichlet
boundary controls with the initial condition (I0).
It was shown in Chaps. 3 and 4 that system (I) is neither exactly null controllable
nor exactly synchronizable with fewer boundary controls (M < N or M < N − 1,
respectively). In order to consider the situation that the number of boundary controls
is further reduced, we will investigate the exact boundary synchronization by groups
for system (I).
Let p  1 be an integer and let

0 = n0 < n1 < n2 < · · · < n p = N (6.2)

be integers such that n_r − n_{r−1} ≥ 2 for all 1 ≤ r ≤ p.


We rearrange the components of U into p groups

(u (1) , · · · , u (n 1 ) ), (u (n 1 +1) , · · · , u (n 2 ) ), · · · , (u (n p−1 +1) , · · · , u (n p ) ). (6.3)

Definition 6.1 System (I) is exactly synchronizable by p-groups at the time T > 0 if, for any given initial data (Û_0, Û_1) ∈ (L²(Ω))^N × (H⁻¹(Ω))^N, there exists a boundary control H ∈ L²_loc(0, +∞; (L²(Γ₁))^M) with compact support in [0, T], such that

the corresponding solution U = U(t, x) to the mixed initial-boundary value problem (I) and (I0) satisfies the following final conditions:

             u^(1) ≡ ··· ≡ u^(n_1) := u_1,
    t ≥ T :  u^(n_1+1) ≡ ··· ≡ u^(n_2) := u_2,                          (6.4)
             ···
             u^(n_{p−1}+1) ≡ ··· ≡ u^(n_p) := u_p,

where u = (u_1, ···, u_p)^T is called the exactly synchronizable state by p-groups, which is a priori unknown.

Remark 6.2 The functions u_1, ···, u_p depend on the initial data and on the applied boundary controls. In the whole monograph, when we claim that u_1, ···, u_p are linearly independent, it means that there exist at least one initial datum and one applied boundary control such that the corresponding u_1, ···, u_p are linearly independent; while, when we claim that u_1, ···, u_p are linearly dependent, it means that they are linearly dependent for any given initial data and for any given applied boundary control.

Let S_r (cf. Remark 4.6) be the following (n_r − n_{r−1} − 1) × (n_r − n_{r−1}) matrix:

          ⎛ 1 −1  0  ···  0 ⎞
          ⎜ 0  1 −1  ···  0 ⎟
    S_r = ⎜ ⋮   ⋱   ⋱   ⋱  ⋮ ⎟                                          (6.5)
          ⎝ 0  0  ···  1 −1 ⎠

and let C_p be the following (N − p) × N full row-rank matrix of synchronization by p-groups:

          ⎛ S_1            ⎞
          ⎜     S_2        ⎟
    C_p = ⎜         ⋱      ⎟                                            (6.6)
          ⎝            S_p ⎠

The exact boundary synchronization by p-groups (6.4) is equivalent to

    t ≥ T :  C_p U ≡ 0.                                                 (6.7)

For r = 1, ···, p, setting

    (e_r)_i = 1 for n_{r−1} + 1 ≤ i ≤ n_r,   (e_r)_i = 0 otherwise,     (6.8)

it is clear that
    Ker(C_p) = Span{e_1, e_2, ···, e_p}                                 (6.9)

and the exact boundary synchronization by p-groups (6.4) becomes

    t ≥ T :  U = Σ_{r=1}^{p} u_r e_r.                                   (6.10)
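
As a purely illustrative numerical sketch (not part of the original text; the group sizes below are arbitrary choices), the matrices S_r and C_p of (6.5)–(6.6) and the vectors e_r of (6.8) can be built and the relation (6.9) checked as follows:

```python
import numpy as np
from scipy.linalg import block_diag

def S(k):
    """(k-1) x k matrix with rows (..., 1, -1, ...), cf. (6.5)."""
    return np.eye(k - 1, k) - np.eye(k - 1, k, k=1)

group_sizes = [2, 3, 2]                       # n_r - n_{r-1}; here N = 7 and p = 3
Cp = block_diag(*[S(k) for k in group_sizes])  # the (N - p) x N matrix C_p of (6.6)

N, p = Cp.shape[1], len(group_sizes)
bounds = np.cumsum([0] + group_sizes)
e = [((np.arange(N) >= bounds[r]) & (np.arange(N) < bounds[r + 1])).astype(float)
     for r in range(p)]                        # the vectors e_r of (6.8)

assert np.linalg.matrix_rank(Cp) == N - p               # C_p has full row rank
assert all(np.allclose(Cp @ er, 0) for er in e)          # each e_r lies in Ker(C_p), cf. (6.9)
```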

We first observe that the exact boundary synchronization by p-groups is achieved


not only at the time T by means of the action of boundary controls, but is also
maintained forever. Since the control is removed from the time T , the coupling matrix
A should satisfy some additional conditions in order to maintain the exact boundary
synchronization by p-groups. As in Chap. 4, we should derive the corresponding
condition of compatibility for the exact boundary synchronization by p-groups of
system (I).

6.2 A Basic Lemma

In what follows, we will prove the necessity of the condition of compatibility by


a repeated reduction procedure. For clarity, this section is entirely devoted to this
basic result.

Lemma 6.3 Suppose that C_p is a full row-rank matrix of order (N − p) × N, such that

    t ≥ T :  C_p U ≡ 0                                                  (6.11)

for any given solution U to the system of equations

    U″ − ΔU + AU = 0   in (0, +∞) × Ω.                                  (6.12)

Then we have either
    A Ker(C_p) ⊆ Ker(C_p),                                              (6.13)

or C_p has a full row-rank extension C̃_{p−1} of order (N − p + 1) × N, such that

    t ≥ T :  C̃_{p−1} U ≡ 0.                                             (6.14)

Proof Let e_r ∈ ℝ^N (r = 1, ···, p) be such that

    Ker(C_p) = Span{e_1, ···, e_p}.                                     (6.15)

Noting that (6.11) implies (6.10) and applying the matrix C_p to system (6.12), we get

    t ≥ T :  Σ_{r=1}^{p} u_r C_p A e_r = 0.                             (6.16)

If C_p A e_r = 0 (r = 1, ···, p), then we get the inclusion (6.13). Otherwise, noting that the C_p A e_r (r = 1, ···, p) are constant vectors, (6.16) is a nontrivial linear relation among u_1, ···, u_p, so that (up to a renumbering) there exist some real constant coefficients α_r (r = 1, ···, p − 1) such that

    u_p = Σ_{r=1}^{p−1} α_r u_r.                                        (6.17)

Then (6.10) becomes

    t ≥ T :  U = Σ_{r=1}^{p−1} u_r (e_r + α_r e_p).                     (6.18)

Setting
    ẽ_r = e_r + α_r e_p,   r = 1, ···, p − 1,                           (6.19)

we get
    t ≥ T :  U = Σ_{r=1}^{p−1} u_r ẽ_r.                                 (6.20)

We next construct an enlarged matrix C̃_{p−1} such that (6.14) holds. For this purpose, let c̃_{p+1} be a row vector defined by

    c̃_{p+1}^T = e_p/‖e_p‖² − Σ_{l=1}^{p−1} α_l e_l/‖e_l‖².              (6.21)

Noting (6.19) and the orthogonality of the set {e_1, ···, e_p}, it is easy to check that

    c̃_{p+1} ẽ_r = (e_p, e_r)/‖e_p‖² − Σ_{l=1}^{p−1} α_l (e_l, e_r)/‖e_l‖²
                  + α_r ( (e_p, e_p)/‖e_p‖² − Σ_{l=1}^{p−1} α_l (e_l, e_p)/‖e_l‖² )
                = −α_r + α_r = 0                                        (6.22)

for all r with 1 ≤ r ≤ p − 1. Then we define the enlarged matrix C̃_{p−1} by

    C̃_{p−1} = ⎛ C_p      ⎞ .                                            (6.23)
               ⎝ c̃_{p+1}  ⎠

Noting that ẽ_r ∈ Ker(C_p) for r = 1, ···, p − 1 and using (6.22), we have

    C̃_{p−1} ẽ_r = ⎛ C_p ẽ_r      ⎞ = ⎛ 0 ⎞ ,   r = 1, ···, p − 1,       (6.24)
                   ⎝ c̃_{p+1} ẽ_r  ⎠   ⎝ 0 ⎠

then it follows immediately from (6.20) that

    t ≥ T :  C̃_{p−1} U ≡ 0.                                             (6.25)

Finally, noting that

    c̃_{p+1}^T ∈ Ker(C_p) = {Im(C_p^T)}^⊥,                               (6.26)

we have c̃_{p+1}^T ∉ Im(C_p^T), then rank(C̃_{p−1}) = N − p + 1, namely, C̃_{p−1} is a full row-rank matrix of order (N − p + 1) × N. The proof is complete. □

6.3 Condition of C p -Compatibility

We will show that the exact boundary synchronization by p-groups of system (I)
requires at least (N − p) boundary controls. In particular, if rank(D) = N − p,
then we get the following condition of C p -compatibility:

AKer(C p ) ⊆ Ker(C p ), (6.27)

which is necessary for the exact boundary synchronization by p-groups of system


(I). Conversely, we will show in the next section that under the condition of C p -
compatibility (6.27), there exists a boundary control matrix D with M = N − p,
such that system (I) is exactly synchronizable by p-groups.
We first give the lower bound estimate on the rank of the boundary control matrix
D, which is necessary for the exact boundary synchronization by p-groups.

Theorem 6.4 Assume that system (I) is exactly synchronizable by p-groups. Then
we necessarily have
rank(C p D) = N − p. (6.28)

In particular, we have
    rank(D) ≥ N − p.                                                    (6.29)

Proof From (6.7), we have C_p U ≡ 0 for t ≥ T. If A Ker(C_p) ⊄ Ker(C_p), then by Lemma 6.3, we can construct a full row-rank (N − p + 1) × N matrix C̃_{p−1} such that C̃_{p−1} U ≡ 0 for t ≥ T. If A Ker(C̃_{p−1}) ⊄ Ker(C̃_{p−1}), once again by Lemma 6.3, we can construct a full row-rank (N − p + 2) × N matrix C̃_{p−2} such that C̃_{p−2} U ≡ 0 for t ≥ T, and so forth. The procedure should stop at some step r with 0 ≤ r ≤ p. So, we get an enlarged full row-rank (N − p + r) × N matrix C̃_{p−r} (when r = 0, we take C̃_p = C_p, and this special situation means that the condition of compatibility (6.27) holds) such that

    t ≥ T :  C̃_{p−r} U ≡ 0                                              (6.30)

and
    A Ker(C̃_{p−r}) ⊆ Ker(C̃_{p−r}).                                      (6.31)

Then, by Proposition 2.15, there exists a unique matrix A_{p−r} of order (N − p + r), such that

    C̃_{p−r} A = A_{p−r} C̃_{p−r}.                                        (6.32)

Applying C̃_{p−r} to problem (I) and (I0) and setting W = C̃_{p−r} U, we get the following reduced system:

    W″ − ΔW + A_{p−r} W = 0      in (0, +∞) × Ω,
    W = 0                        on (0, +∞) × Γ₀,                       (6.33)
    W = C̃_{p−r} D H              on (0, +∞) × Γ₁

with the initial condition

    t = 0 :  W = C̃_{p−r} Û_0,  W′ = C̃_{p−r} Û_1   in Ω.                 (6.34)

Moreover, (6.30) implies that

    t ≥ T :  W ≡ 0.                                                     (6.35)

On the other hand, since C̃_{p−r} is an (N − p + r) × N full row-rank matrix, the linear map

    (Û_0, Û_1) → (C̃_{p−r} Û_0, C̃_{p−r} Û_1)                             (6.36)

is surjective from the space (L²(Ω))^N × (H⁻¹(Ω))^N onto the space (L²(Ω))^{N−p+r} × (H⁻¹(Ω))^{N−p+r}. We thus get the exact boundary null controllability of the reduced system (6.33) in the space (L²(Ω))^{N−p+r} × (H⁻¹(Ω))^{N−p+r}. By Theorem 3.11, the rank of the corresponding boundary control matrix C̃_{p−r} D satisfies

    rank(C̃_{p−r} D) = N − p + r.                                        (6.37)

p−r D is of full row-rank, then the


Finally, noting that the (N − p + r ) × N matrix C
sub-matrix C p D composed of the first (N − p) rows of Cp−r D is of full row-rank,
we get thus (6.28). 

Theorem 6.5 Assume that system (I) is exactly synchronizable by p-groups under
the minimal rank condition

M = rank(D) = N − p. (6.38)

Then, we necessarily have the condition of C p -compatibility (6.27).



Proof Noting (6.37) and (6.38), it follows easily from

    rank(D) ≥ rank(C̃_{p−r} D)

that r = 0. This means that it is not necessary to carry out the extension in the proof of Theorem 6.4; we already have the condition of C_p-compatibility (6.27). □

Remark 6.6 The condition of C p -compatibility (6.27) is necessary for the exact
boundary synchronization by p-groups of system (I) only under the minimal rank
condition (6.38), namely, under the minimal number of boundary controls.

Remark 6.7 Noting (6.9), the condition of C_p-compatibility (6.27) can be equivalently written as

    A e_r = Σ_{s=1}^{p} α_{sr} e_s,   1 ≤ r ≤ p,                        (6.39)

where the α_{sr} are some constant coefficients. Moreover, because of the specific expression of e_r given in (6.8), the above expression (6.39) can be written in the form of a row-sum condition by blocks:

    Σ_{j=n_{s−1}+1}^{n_s} a_{ij} = α_{rs}                               (6.40)

for all 1 ≤ r, s ≤ p and n_{r−1} + 1 ≤ i ≤ n_r, which is a natural generalization of the row-sum condition (4.5) in the case p = 1.
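
The equivalence between (6.27), (6.39), and (6.40) can be checked numerically. The following is a minimal sketch (not from the book; the group sizes and the coefficients α_{rs} are arbitrary choices) that builds a coupling matrix satisfying the block row-sum condition and verifies A Ker(C_p) ⊆ Ker(C_p):

```python
import numpy as np
from scipy.linalg import block_diag

group_sizes = [2, 3, 2]; N = sum(group_sizes); p = len(group_sizes)
S = lambda k: np.eye(k - 1, k) - np.eye(k - 1, k, k=1)           # S_r of (6.5)
Cp = block_diag(*[S(k) for k in group_sizes])                    # C_p of (6.6)
bounds = np.cumsum([0] + group_sizes)
e = [((np.arange(N) >= bounds[r]) & (np.arange(N) < bounds[r + 1])).astype(float)
     for r in range(p)]

def is_Cp_compatible(A, tol=1e-12):
    # (6.27) holds iff A e_r ∈ Ker(C_p) for every r, i.e. C_p A e_r = 0
    return all(np.linalg.norm(Cp @ (A @ er)) < tol for er in e)

# example coupling matrix whose block row sums are the constants alpha_{rs} of (6.40)
rng = np.random.default_rng(0)
alpha = rng.normal(size=(p, p))
A = sum(alpha[r, s] * np.outer(e[r], e[s]) / group_sizes[s]
        for r in range(p) for s in range(p))
print(is_Cp_compatible(A))        # True
```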

Remark 6.8 By Proposition 2.15, the condition of C p -compatibility (6.27) is equiv-


alent to the existence of a unique matrix A p of order (N − p), such that

    C_p A = A_p C_p.                                                    (6.41)

A p is called the reduced matrix of A by C p .

6.4 Exact Boundary Synchronization by p-Groups

Theorem 6.4 shows that the exact boundary synchronization by p-groups of system
(I) requires at least (N − p) boundary controls, namely, we have the following

Corollary 6.9 If
rank(C p D) < N − p, (6.42)

or, in particular, if
rank(D) < N − p, (6.43)

then, no matter how large T > 0 is, system (I) cannot be exactly synchronizable by
p-groups at the time T .

The following result shows that the converse also holds.

Theorem 6.10 Assume that the coupling matrix A satisfies the condition of C p -
compatibility (6.27). Let M = N − p and let the N × (N − p) boundary control
matrix D satisfy the rank condition

rank(C p D) = N − p. (6.44)

Then, under the multiplier geometrical condition (3.1), system (I) is exactly synchro-
nizable by p-groups in the space (L 2 ()) N × (H −1 ()) N .

Proof By Remark 6.8, applying C_p to problem (I) and (I0) and setting W_p = C_p U, we get the following self-closed reduced system:

    W_p″ − ΔW_p + A_p W_p = 0    in (0, +∞) × Ω,
    W_p = 0                      on (0, +∞) × Γ₀,                       (6.45)
    W_p = C_p D H                on (0, +∞) × Γ₁

with
    t = 0 :  W_p = C_p Û_0,  W_p′ = C_p Û_1   in Ω.                     (6.46)

Then, by Theorem 3.11 and noting the rank condition (6.44), the reduced system (6.45) is exactly null controllable at the time T > 0 by means of controls H ∈ L²_loc(0, +∞; (L²(Γ₁))^{N−p}) with compact support in [0, T]. Thus, we have

    t ≥ T :  C_p U ≡ W_p ≡ 0.

The proof is then complete. □

Remark 6.11 The rank M of the boundary control matrix D represents the number of boundary controls applied to the original system (I), while the rank of the matrix C_p D represents the number of boundary controls effectively applied to the reduced system (6.45). Let us write

    D H = H_0 + H_1   with H_0 ∈ Ker(C_p) and H_1 ∈ Im(C_p^T).          (6.47)

The part H_0 disappears in the reduced system (6.45), and is useless for the exact boundary synchronization by p-groups of the original system (I). The exact boundary null controllability of the reduced system (6.45), and therefore the exact boundary synchronization by p-groups of the original system (I), is in fact realized only by the

part H1 . So, in order to minimize the number of boundary controls, we are interested
in the matrices D such that

Im(D) ∩ Ker(C p ) = {0}, (6.48)

or, by Proposition 2.11, such that

rank(C p D) = rank(D) = N − p. (6.49)

Proposition 6.12 Let D N − p be the set of N × (N − p) matrices satisfying (6.49),


namely,

D N − p = {D ∈ M N ×(N − p) : rank(D) = rank(C p D) = N − p}. (6.50)

Then, for any given boundary control matrix D ∈ D N − p , the subspaces Ker(D T )
and Ker(C p ) are bi-orthonormal. Moreover, we have

D N − p = {C Tp D1 + (e1 , · · · , e p )D0 }, (6.51)

where D1 is an invertible matrix of order (N − p), D0 is a matrix of order p × (N −


p), and the vectors e1 , · · · , e p are given by (6.8).

Proof Noting that {Ker(D T )}⊥ = Im(D), and Ker(D T ) and Ker(C p ) have the same
dimension p, by Propositions 2.4 and 2.5, to prove that Ker(D T ) and Ker(C p )
are bi-orthonormal, it suffices to show that Ker(C p ) ∩ Im(D) = {0}, which, by
Proposition 2.11, is equivalent to the condition rank(C p D) = rank(D).
Now we prove (6.51). Let D be a matrix of order N × (N − p). Noting that
Im(C Tp ) ⊕ Ker(C p ) = R N , there exist a matrix D0 of order p × (N − p) and a square
matrix D1 of order (N − p), such that

D = C Tp D1 + (e1 , · · · , e p )D0 , (6.52)

where e1 , · · · , e p are given by (6.8). Moreover, C p D = C p C Tp D1 is of rank (N − p),


if and only if D1 is invertible. On the other hand, let x ∈ R N − p , such that

Dx = C Tp D1 x + (e1 , · · · , e p )D0 x = 0. (6.53)

Since Im (C Tp ) ⊥ Ker(C p ), it follows that

C Tp D1 x = (e1 , · · · , e p )D0 x = 0. (6.54)

Then, noting that C Tp D1 is of full column-rank, we have x = 0. Hence, D is of full


column-rank (N − p). This proves (6.51). 
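
As an illustrative sketch of the parametrization (6.51) (not from the book; all concrete sizes and random entries are arbitrary choices), one can build a matrix D = C_p^T D_1 + (e_1, ···, e_p) D_0 and check the rank condition (6.49):

```python
import numpy as np
from scipy.linalg import block_diag

group_sizes = [2, 3, 2]; N = sum(group_sizes); p = len(group_sizes)
S = lambda k: np.eye(k - 1, k) - np.eye(k - 1, k, k=1)
Cp = block_diag(*[S(k) for k in group_sizes])                       # (N - p) x N
bounds = np.cumsum([0] + group_sizes)
E = np.column_stack([((np.arange(N) >= bounds[r]) & (np.arange(N) < bounds[r + 1])).astype(float)
                     for r in range(p)])                            # columns e_1, ..., e_p

rng = np.random.default_rng(1)
D1 = rng.normal(size=(N - p, N - p))          # generically invertible
D0 = rng.normal(size=(p, N - p))
D = Cp.T @ D1 + E @ D0                        # the form (6.51)

assert np.linalg.matrix_rank(D) == N - p               # D has full column rank
assert np.linalg.matrix_rank(Cp @ D) == N - p           # rank(C_p D) = N - p, i.e. (6.49)
```

Note that C_p D = C_p C_p^T D_1, so the rank condition reduces to the invertibility of D_1, in agreement with the proof above.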

Remark 6.13 Under the multiplier geometrical condition (3.1), for any given matrix
D ∈ D N − p , the condition of C p -compatibility (6.27) is necessary and sufficient for
the exact boundary synchronization by p-groups of system (I) by Theorem 6.10 and
Remark 6.6. In particular, if we take D = C Tp C p ∈ D N − p , we can really achieve the
exact boundary synchronization by p-groups for system (I) by means of (N − p)
boundary controls.
Chapter 7
Exactly Synchronizable States by
p-Groups

When system (I) possesses the exact boundary synchronization by p-groups, the cor-
responding exactly synchronizable states by p-groups will be studied in this chapter.

7.1 Introduction

Under the condition of C_p-compatibility (6.27), it is easy to see that for t ≥ T, the exactly synchronizable state by p-groups u = (u_1, ···, u_p)^T satisfies the following coupled system of wave equations with homogeneous Dirichlet boundary condition:

    u″ − Δu + Ãu = 0   in (T, +∞) × Ω,
    u = 0              on (T, +∞) × Γ,                                  (7.1)

where Ã = (α_{rs}) is given by (6.39). Hence, the evolution of the exactly synchronizable state by p-groups u = (u_1, ···, u_p)^T with respect to t is completely determined by the values of (u, u_t) at the time t = T:

    t = T :  u = û_0,  u′ = û_1.                                        (7.2)

As in Theorem 5.1 for the case p = 1, we can show that the attainable set of all possible values of (u, u′) at t = T is the whole space (L²(Ω))^p × (H⁻¹(Ω))^p, when the initial data (Û_0, Û_1) vary in the space (L²(Ω))^N × (H⁻¹(Ω))^N.
In this chapter, we will discuss the determination of the exactly synchronizable states by p-groups u = (u_1, ···, u_p)^T for each given initial data (Û_0, Û_1). Since there are infinitely many boundary controls which can realize the exact boundary synchronization by p-groups for system (I), the exactly synchronizable states by p-groups u = (u_1, ···, u_p)^T naturally depend on the applied boundary controls H. However, for some coupling matrices A, for example, the symmetric ones, the exactly synchronizable states by

p-groups u = (u 1 , · · · u p )T could be independent of applied boundary controls H .


In the general case, exactly synchronizable states by p-groups u = (u 1 , · · · u p )T
depend on applied boundary controls, however we can give an estimate on the dif-
ference between each exactly synchronizable state by p-groups u = (u 1 , · · · u p )T
and the solution to a problem which is independent of applied boundary controls (cf.
Theorems 7.1 and 7.2 below).

7.2 Determination of Exactly Synchronizable States by


p-Groups

Now we return to the determination of exactly synchronizable states by p-groups.


The case p = 1 was considered in Sect. 5.2.
We first consider the case that A T admits an invariant subspace Span{E 1 , · · · , E p },
which is bi-orthonormal to Span{e1 , · · · , e p }, namely, we have

(ei , E j ) = δi j , 1  i, j  p, (7.3)

where e1 , · · · , e p are given by (6.8)–(6.9).

Theorem 7.1 Assume that the matrix A satisfies the condition of C_p-compatibility (6.27). Assume furthermore that A^T admits an invariant subspace Span{E_1, ···, E_p}, which is bi-orthonormal to Ker(C_p) = Span{e_1, ···, e_p}. Then there exists a boundary control matrix D ∈ 𝒟_{N−p} (cf. (6.51)), such that the exactly synchronizable state by p-groups u = (u_1, ···, u_p)^T is uniquely determined by

    t ≥ T :  u = φ,                                                     (7.4)

where φ = (φ_1, ···, φ_p)^T is the solution to the following problem independent of the applied boundary controls H: for s = 1, 2, ···, p,

    φ_s″ − Δφ_s + Σ_{r=1}^{p} α_{sr} φ_r = 0         in (0, +∞) × Ω,
    φ_s = 0                                          on (0, +∞) × Γ,    (7.5)
    t = 0 :  φ_s = (E_s, Û_0),  φ_s′ = (E_s, Û_1)    in Ω,

where the α_{sr} are given by (6.39).

Proof Since the subspaces Span{E_1, ···, E_p} and Span{e_1, ···, e_p} are bi-orthonormal, taking

    D_1 = I_{N−p},   D_0 = −E^T C_p^T   with E = (E_1, ···, E_p)         (7.6)

in (6.51), we obtain a boundary control matrix D ∈ 𝒟_{N−p}, such that

    E_s ∈ Ker(D^T),   s = 1, ···, p.                                    (7.7)

On the other hand, since Span{E_1, ···, E_p} is invariant for A^T, and noting (6.39) and (7.3), it is easy to check that

    A^T E_s = Σ_{r=1}^{p} α_{sr} E_r,   s = 1, ···, p.                  (7.8)

Then, taking the inner product of E_s with problem (I) and (I0), and setting φ_s = (E_s, U), we get problem (7.5). Finally, by the exact boundary synchronization by p-groups (6.10) and the relationship (7.3), we get

    t ≥ T :  φ_s = (E_s, U) = Σ_{r=1}^{p} (E_s, e_r) u_r = u_s,   1 ≤ s ≤ p.   (7.9)

The proof is then complete. □

Theorem 7.2 Assume that the condition of C_p-compatibility (6.27) holds. Then for any given boundary control matrix D ∈ 𝒟_{N−p}, there exists a positive constant c_T independent of the initial data, but depending on T, such that each exactly synchronizable state by p-groups u = (u_1, ···, u_p)^T satisfies the following estimate:

    ‖(u, u′)(T) − (φ, φ′)(T)‖_{(H¹₀(Ω))^p×(L²(Ω))^p}
        ≤ c_T ‖C_p(Û_0, Û_1)‖_{(L²(Ω))^{N−p}×(H⁻¹(Ω))^{N−p}},           (7.10)

where φ = (φ_1, ···, φ_p)^T is the solution to problem (7.5), in which Span{E_1, ···, E_p} is bi-orthonormal to Span{e_1, ···, e_p}.

Proof By Proposition 6.12, the subspaces Ker(D^T) and Ker(C_p) are bi-orthonormal. Then we can choose E_1, ···, E_p ∈ Ker(D^T), such that Span{E_1, ···, E_p} and Span{e_1, ···, e_p} are bi-orthonormal.
Moreover, noting (6.39) and (7.3), by a direct computation, we get

    (A^T E_s − Σ_{r=1}^{p} α_{sr} E_r, e_k)
        = (E_s, A e_k) − Σ_{r=1}^{p} α_{sr}(E_r, e_k)
        = Σ_{l=1}^{p} α_{lk}(E_s, e_l) − α_{sk} = α_{sk} − α_{sk} = 0   (7.11)

for all s, k = 1, ···, p, and then

    A^T E_s − Σ_{r=1}^{p} α_{sr} E_r ∈ {Ker(C_p)}^⊥ = Im(C_p^T),   s = 1, ···, p.   (7.12)

Thus, there exists a vector R_s ∈ ℝ^{N−p}, such that

    A^T E_s − Σ_{r=1}^{p} α_{sr} E_r = −C_p^T R_s.                      (7.13)

Taking the inner product of E_s with problem (I) and (I0), and setting ψ_s = (E_s, U) (s = 1, ···, p), it is easy to see that

    ψ_s″ − Δψ_s + Σ_{r=1}^{p} α_{sr} ψ_r = (R_s, C_p U)    in (0, +∞) × Ω,
    ψ_s = 0                                                on (0, +∞) × Γ,   (7.14)
    t = 0 :  ψ_s = (E_s, Û_0),  ψ_s′ = (E_s, Û_1)          in Ω.

By the well-posedness of problems (7.5) and (7.14), there exists a constant c > 0 independent of the initial data, such that

    ‖(ψ, ψ′)(T) − (φ, φ′)(T)‖²_{(H¹₀(Ω))^p×(L²(Ω))^p} ≤ c ∫_0^T ‖C_p U(s)‖²_{(L²(Ω))^{N−p}} ds.   (7.15)

Since C_p U = W_p, the exact boundary null controllability of the reduced system (6.45) shows that there exists another positive constant c_T > 0 independent of the initial data, but depending on T, such that

    ∫_0^T ‖C_p U(s)‖²_{(L²(Ω))^{N−p}} ds ≤ c_T ‖C_p(Û_0, Û_1)‖²_{(L²(Ω))^{N−p}×(H⁻¹(Ω))^{N−p}}.   (7.16)

Finally, noting that

    t ≥ T :  ψ_s = (E_s, U) = Σ_{r=1}^{p} (E_s, e_r) u_r = u_s,   s = 1, ···, p,   (7.17)

and inserting (7.16)–(7.17) into (7.15), we get (7.10). The proof is complete. □

Remark 7.3 Since {e_1, ···, e_p} is an orthogonal system, we can take E_s = e_s/‖e_s‖² (s = 1, ···, p) as a specific choice. Accordingly, the corresponding boundary control matrix D tends to drive the initial data toward their average value.

Remark 7.4 Let Ker(C_p) be invariant for A and bi-orthonormal to W = Span{E_1, E_2, ···, E_p}. By Proposition 2.9, W^⊥ is a supplement of Ker(C_p) and is invariant for the matrix A if and only if W is invariant for the matrix A^T. Accordingly, the coupling matrix A can be diagonalized by blocks under the decomposition ℝ^N = Ker(C_p) ⊕ W^⊥.
In particular, if A^T satisfies the condition of C_p-compatibility (6.27), then Ker(C_p) is also invariant for A^T. By Proposition 2.7, Im(C_p^T) is an invariant subspace of A. Then, we can write

    A(e_1, ···, e_p, C_p^T) = (e_1, ···, e_p, C_p^T) ⎛ Ã  0 ⎞            (7.18)
                                                      ⎝ 0  Â ⎠ ,

where Ã = (α_{rs}) is given by (6.40) and Â is given by

    Â = (C_p C_p^T)⁻¹ C_p A C_p^T.                                       (7.19)

In that case, Theorem 7.1 shows that the exactly synchronizable state by p-groups is
independent of applied boundary controls. Otherwise, it depends on applied boundary
controls; however, as shown in Theorem 7.2, we can give an estimate on the difference
between each exactly synchronizable state by p-groups and the solution to a problem
independent of applied boundary controls.

7.3 Determination of Exactly Synchronizable States


by p-Groups (Continued)

We now consider the determination of exactly synchronizable states by p-groups in


the case that Ker(C p ) = Span{e1 , · · · , e p } is an invariant subspace of A, but A T does
not admit any invariant subspace which is bi-orthonormal to Ker(C p ). In this case, we
will extend the subspace Span{e1 , · · · , e p } to an invariant subspace Span{e1 , · · · , eq }
of A with q  p, so that A T admits an invariant subspace Span{E 1 , · · · , E q } that is
bi-orthonormal to Span{e1 , · · · , eq }.
When Ker(C p ) is A-marked, the procedure for obtaining the subspace
Span{e1 , · · · , eq } is given in Proposition 2.20.
When Ker(C_p) is not A-marked, let λ_j (1 ≤ j ≤ d) denote the eigenvalues of the restriction of A to Ker(C_p). We define

    Span{e_1, ···, e_q} = ⊕_{j=1}^{d} Ker(A − λ_j I)^{m_j}               (7.20)

and
    Span{E_1, ···, E_q} = ⊕_{j=1}^{d} Ker(A^T − λ_j I)^{m_j},            (7.21)

where m_j is an integer such that Ker(A^T − λ_j I)^{m_j} = Ker(A^T − λ_j I)^{m_j+1} and

    q = Σ_{j=1}^{d} dim Ker(A^T − λ_j I)^{m_j}.                          (7.22)

Clearly, the subspaces given by (7.20) and (7.21) satisfy the first two conditions in Proposition 2.20. However, since Ker(C_p) is not A-marked, the subspace Span{e_1, ···, e_q} constructed in this way is not a priori the least one.
In the above two cases, we can define the projection P onto the subspace Span{e_1, ···, e_q} as follows:

    P = Σ_{r=1}^{q} e_r ⊗ E_r,                                           (7.23)

where the tensor product ⊗ is defined by

    (e_r ⊗ E_r) U = (E_r, U) e_r,   ∀ U ∈ ℝ^N.                           (7.24)

Then we have
    Im(P) = Span{e_1, ···, e_q},                                         (7.25)
    Ker(P) = {Span{E_1, ···, E_q}}^⊥                                     (7.26)
and
    P A = A P.                                                           (7.27)
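
The following small numerical sketch (illustrative only, not from the book; the group sizes, the symmetric coupling matrix, and the simple choice E_s = e_s/‖e_s‖² of Remark 7.3 are all assumptions made for the example) shows the projection (7.23)–(7.24) and the commutation relation (7.27):

```python
import numpy as np

group_sizes = [2, 3, 2]; N = sum(group_sizes); p = len(group_sizes)
bounds = np.cumsum([0] + group_sizes)
e = [((np.arange(N) >= bounds[r]) & (np.arange(N) < bounds[r + 1])).astype(float) for r in range(p)]
E = [er / (er @ er) for er in e]                    # bi-orthonormal to {e_r}: (e_i, E_j) = delta_ij

P = sum(np.outer(er, Er) for er, Er in zip(e, E))   # (e_r ⊗ E_r)U = (E_r, U) e_r, cf. (7.24)

# a symmetric coupling matrix leaving Span{e_1, ..., e_p} invariant
alpha = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
A = sum(alpha[r, s] * np.outer(e[r], e[s]) for r in range(p) for s in range(p))

assert np.allclose(P @ P, P)          # P is a projection onto Span{e_1, ..., e_p}
assert np.allclose(P @ A, A @ P)      # the commutation relation (7.27)
```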

Let U = U(t, x) be the solution to problem (I) and (I0). We may define the synchronizable part U_s and the controllable part U_c by

    U_s = P U,   U_c = (I − P) U,                                        (7.28)

respectively. In fact, if system (I) is exactly synchronizable by p-groups, we have

    U ∈ Span{e_1, ···, e_p} ⊆ Span{e_1, ···, e_q} = Im(P)                (7.29)

for all t ≥ T, so that

    t ≥ T :  U_s = P U ≡ U,   U_c ≡ 0.                                   (7.30)

Moreover, noting (7.27), we have

    U_s″ − ΔU_s + A U_s = 0             in (0, +∞) × Ω,
    U_s = 0                             on (0, +∞) × Γ₀,
    U_s = P D H                         on (0, +∞) × Γ₁,                 (7.31)
    t = 0 :  U_s = P Û_0,  U_s′ = P Û_1   in Ω.

The condition p = q means exactly that the matrix A satisfies the condition of C p -
compatibility (6.27), and A T admits an invariant subspace, which is bi-orthonormal
to Ker(C p ). In this case, by Theorem 7.1, there exists a boundary control matrix D ∈
D N − p , such that the exactly synchronizable state by p-groups u = (u 1 , · · · , u p )T is
independent of applied boundary controls H . Conversely, we have the following

Theorem 7.5 Assume that the matrix A satisfies the condition of C p -compatibility
(6.27). Assume that system (I) is exactly synchronizable by p-groups. If the synchro-
nizable part Us is independent of applied boundary controls H , then A T admits an
invariant subspace which is bi-orthonormal to Ker(C p ).

Proof Let H_1 and H_2 be two boundary controls, which realize simultaneously the exact boundary synchronization by p-groups for system (I). If the corresponding solution U_s to problem (7.31) is independent of the applied boundary controls H_1 and H_2, then we have

    P D(H_1 − H_2) = 0   on (0, T) × Γ₁.                                 (7.32)

By Theorem 3.8, it is easy to see that the values of C_p D(H_1 − H_2) on (T − ε, T) × Γ₁ can be arbitrarily chosen for ε > 0 small enough. Since C_p D is invertible, the values of (H_1 − H_2) on (T − ε, T) × Γ₁ can be arbitrarily chosen, too. This yields that P D = 0, so that

    Im(D) ⊆ Ker(P).                                                      (7.33)

Noting (7.26), we have
    dim Ker(P) = N − q,                                                  (7.34)
however, by Theorem 6.4, we have
    dim Im(D) ≥ dim Im(C_p D) = rank(C_p D) = N − p.                     (7.35)
Then, it follows from (7.33) that p = q, and Span{E_1, ···, E_p} is the desired subspace. The proof is complete. □

7.4 Precise Consideration on the Exact Boundary


Synchronization by 2-Groups

The case p = 1 was considered in Chap. 5. The condition p = q = 1 is equivalent


to the existence of an eigenvector E 1 of A T , such that (E 1 , e1 ) = 1. In this section,
we give the precise consideration for the case p = 2.
Let m ≥ 2 be an integer such that N − m ≥ 2. We rewrite the exact boundary synchronization by 2-groups as

    t ≥ T :  u^(1) = ··· = u^(m) := u,   u^(m+1) = ··· = u^(N) := v.     (7.36)

Let
    C_2 = ⎛ S_m          ⎞                                               (7.37)
          ⎝      S_{N−m} ⎠
be the matrix of synchronization by 2-groups and let

    e_1 = (1, ···, 1, 0, ···, 0)^T,   e_2 = (0, ···, 0, 1, ···, 1)^T,    (7.38)

where e_1 has m entries equal to 1 followed by N − m zeros, and e_2 has m zeros followed by N − m entries equal to 1.

Clearly, we have
Ker(C2 ) = Span{e1 , e2 } (7.39)

and the synchronization condition by 2-groups (6.10) means

    t ≥ T :  U = u e_1 + v e_2.                                          (7.40)

(i) Assume that A admits two eigenvectors ε_r and ε̃_s, associated respectively with the eigenvalues λ and μ, contained in the invariant subspace Ker(C_2). Let ε_1, ···, ε_r, respectively ε̃_1, ···, ε̃_s, denote the corresponding Jordan chains of A:

    A ε_k = λ ε_k + ε_{k+1},   1 ≤ k ≤ r,   ε_{r+1} = 0,
    A ε̃_i = μ ε̃_i + ε̃_{i+1},   1 ≤ i ≤ s,   ε̃_{s+1} = 0.                 (7.41)

Accordingly, let E_1, ···, E_r and Ẽ_1, ···, Ẽ_s denote the corresponding Jordan chains of A^T:

    A^T E_k = λ E_k + E_{k−1},   1 ≤ k ≤ r,   E_0 = 0,
    A^T Ẽ_i = μ Ẽ_i + Ẽ_{i−1},   1 ≤ i ≤ s,   Ẽ_0 = 0.                    (7.42)

Moreover, for any 1 ≤ k, l ≤ r and 1 ≤ i, j ≤ s, we have

    (ε_k, E_l) = δ_{kl},   (ε̃_i, Ẽ_j) = δ_{ij},   (ε_k, Ẽ_i) = 0,   (ε̃_j, E_l) = 0.   (7.43)

Taking the inner product of problem (I) and (I0) with E_k and Ẽ_i, and setting φ_k = (E_k, U) and φ̃_i = (Ẽ_i, U), we get the subsystem

    φ_k″ − Δφ_k + λφ_k + φ_{k−1} = 0               in (0, +∞) × Ω,
    φ_k = 0                                        on (0, +∞) × Γ₀,
    φ_k = (E_k, D H)                               on (0, +∞) × Γ₁,      (7.44)
    t = 0 :  φ_k = (E_k, Û_0),  φ_k′ = (E_k, Û_1)  in Ω

for k = 1, ···, r, and the subsystem

    φ̃_i″ − Δφ̃_i + μφ̃_i + φ̃_{i−1} = 0               in (0, +∞) × Ω,
    φ̃_i = 0                                        on (0, +∞) × Γ₀,
    φ̃_i = (Ẽ_i, D H)                               on (0, +∞) × Γ₁,      (7.45)
    t = 0 :  φ̃_i = (Ẽ_i, Û_0),  φ̃_i′ = (Ẽ_i, Û_1)  in Ω

for i = 1, ···, s, respectively.
Once the solutions (φ_1, ···, φ_r) and (φ̃_1, ···, φ̃_s) are determined, we look for the corresponding exactly synchronizable state by 2-groups (u, v)^T.
Noting that ε_r, ε̃_s ∈ Span{e_1, e_2}, we can write

    e_1 = α ε_r + β ε̃_s,   e_2 = γ ε_r + δ ε̃_s                           (7.46)

with αδ − βγ ≠ 0. Then the exact boundary synchronization by 2-groups (7.40) gives

    t ≥ T :  U = (αu + γv) ε_r + (βu + δv) ε̃_s.                          (7.47)

Noting (7.43), it follows that

    φ_r = αu + γv,   φ̃_s = βu + δv.                                      (7.48)

By solving this linear system, we get the exactly synchronizable state by 2-groups (u, v).
In particular, if r = s = 1, we can choose a boundary control matrix D ∈ 𝒟_{N−2}, such that D^T E_1 = D^T Ẽ_1 = 0. Then the exactly synchronizable state by 2-groups (u, v)^T can be uniquely determined independently of the applied boundary controls. Otherwise, we have to solve the whole systems (7.44) and (7.45) to get φ_1, ···, φ_r and φ̃_1, ···, φ̃_s, even though we only need φ_r and φ̃_s.

(ii) Assume that A admits only one eigenvector ε_r in the invariant subspace Ker(C_2), and that Ker(C_2) is A-marked. Let ε_1, ···, ε_r denote a Jordan chain of A, and let E_1, ···, E_r denote the corresponding Jordan chain of A^T. We get again the subsystem (7.44). Since Ker(C_2) is A-marked, ε_{r−1}, ε_r ∈ Span{e_1, e_2}, so that we can write

    e_1 = α ε_r + β ε_{r−1},   e_2 = γ ε_r + δ ε_{r−1}                    (7.49)



with αδ − βγ ≠ 0. The exact boundary synchronization by 2-groups (7.40) becomes

    t ≥ T :  U = (αu + γv) ε_r + (βu + δv) ε_{r−1}.                       (7.50)

Noting the first formula in (7.43), it follows that

    φ_r = αu + γv,   φ_{r−1} = βu + δv,                                   (7.51)

which gives the exactly synchronizable state by 2-groups (u, v).


In particular, if r = 2, we can choose a boundary control matrix D ∈ D N −2 , such
that D T E1 = D T E2 = 0. Then the exactly synchronizable state by 2-groups (u, v)T
can be uniquely determined independently of applied boundary controls. Otherwise,
we have to solve the whole system (7.44) to get φ_1, ···, φ_r, even though we only
need φr −1 and φr .

Remark 7.6 The above consideration can be applied to the general case p ≥ 1 without any essential difficulty. Later, in Sect. 15.4, we will give the details in the case p = 3
for Neumann boundary controls.
Part II
Synchronization for a Coupled System of
Wave Equations with Dirichlet Boundary
Controls: Approximate Boundary
Synchronization

From the results given in Part I, we know that, roughly speaking, if the domain Ω
satisfies the multiplier geometrical condition, and the coupling matrix A satis-
fies the corresponding condition of compatibility in the case of various kinds of
synchronizations, then the exact boundary null controllability, the exact boundary
synchronization and the exact boundary synchronization by groups can be realized
for system (I) with Dirichlet boundary controls, respectively, provided that the con-
trol time T > 0 is large enough and there are enough boundary controls. However,
when the domain Ω does not satisfy the multiplier geometrical condition or there is a
lack of boundary controls, “what weakened controllability or synchronization can
be obtained” becomes a very interesting and practically important problem.
In this part, in order to answer this question, we will introduce the concept of
approximate boundary null controllability, approximate boundary synchronization
and approximate boundary synchronization by groups, and establish the correspond-
ing theory for a coupled system of wave equations with Dirichlet boundary controls.
Moreover, we will show that Kalman’s criterion of various kinds will play an impor-
tant role in the discussion.
Chapter 8
Approximate Boundary Null
Controllability

In this chapter we will define the approximate boundary null controllability for
system (I) and the D-observability for the adjoint problem, and show that these
two concepts are equivalent to each other. Moreover, the corresponding Kalman’s
criterion is introduced and studied.

8.1 Definition

Let
    U = (u^(1), ···, u^(N))^T  and  H = (h^(1), ···, h^(M))^T             (8.1)
with M ≤ N. Consider the coupled system (I) with the initial condition (I0).
Let
    H_0 = L²(Ω),   H_1 = H¹₀(Ω),   ℒ = L²_loc(0, +∞; L²(Γ₁)).             (8.2)
The dual of H_1 is denoted by H_{−1} = H⁻¹(Ω).


By Theorem 3.11, as M < N , system (I) is not exactly null controllable in the
space (H0 ) N × (H−1 ) N . So we look for some weakened controllability, for example,
the approximate boundary null controllability as follows.

Definition 8.1 System (I) is approximately null controllable at the time T > 0 if, for any given initial data (Û_0, Û_1) ∈ (H_0)^N × (H_{−1})^N, there exists a sequence {H_n} of boundary controls in ℒ^M with compact support in [0, T], such that the corresponding sequence {U_n} of solutions to problem (I) and (I0) satisfies the following condition:

    (U_n, U_n′) → (0, 0) in C⁰_loc([T, +∞); (H_0 × H_{−1})^N) as n → +∞.  (8.3)


Remark 8.2 Since H_n has a compact support in [0, T], the corresponding solution U_n satisfies the homogeneous Dirichlet boundary condition on Γ for t ≥ T. Hence, there exist positive constants c and ω such that

    ‖(U_n(t), U_n′(t))‖_{(H_0)^N×(H_{−1})^N} ≤ c e^{ω(t−T)} ‖(U_n(T), U_n′(T))‖_{(H_0)^N×(H_{−1})^N}   (8.4)

for all t ≥ T and for all n ≥ 0. Then the convergence

    (U_n(T), U_n′(T)) → (0, 0) in (H_0)^N × (H_{−1})^N as n → +∞          (8.5)

implies the convergence (8.3).

Remark 8.3 In Definition 8.1, the convergence (8.3) of the sequence {U_n} of solutions does not imply the convergence of the sequence {H_n} of boundary controls. We do not even know whether the sequence {(U_n, U_n′)} is bounded in C⁰([0, T]; (H_0 × H_{−1})^N). However, since {H_n} has a compact support in [0, T], the sequence {(U_n, U_n′)} converges uniformly to (0, 0) in C⁰_loc([T, +∞); (H_0 × H_{−1})^N) as n → +∞.

8.2 D-Observability for the Adjoint Problem

Let
    Φ = (φ^(1), ···, φ^(N))^T.                                            (8.6)
Consider the adjoint problem

    Φ″ − ΔΦ + A^T Φ = 0           in (0, +∞) × Ω,
    Φ = 0                         on (0, +∞) × Γ,                         (8.7)
    t = 0 :  Φ = Φ̂_0,  Φ′ = Φ̂_1   in Ω.

Definition 8.4 The adjoint problem (8.7) is D-observable on the interval [0, T] if the observation

    D^T ∂_ν Φ ≡ 0   on [0, T] × Γ₁                                        (8.8)

implies that (Φ̂_0, Φ̂_1) ≡ 0, hence Φ ≡ 0.

Let C be the set of all the initial states (V(0), V′(0)) given by the following backward problem:

    V″ − ΔV + AV = 0        in (0, T) × Ω,
    V = 0                   on (0, T) × Γ₀,
    V = D H                 on (0, T) × Γ₁,                               (8.9)
    t = T :  V = V′ = 0     in Ω

as the boundary control H varies in ℒ^M with compact support in [0, T].

Lemma 8.5 System (I) is approximately null controllable in (H_0)^N × (H_{−1})^N if and only if

    C̄ = (H_0)^N × (H_{−1})^N.                                            (8.10)

Proof Assume that (8.10) holds. Then for any given (Û_0, Û_1) ∈ (H_0)^N × (H_{−1})^N there exists a sequence {H_n} of boundary controls in ℒ^M, such that the corresponding sequence {V_n} of solutions to problem (8.9) satisfies

    (V_n(0), V_n′(0)) → (Û_0, Û_1) in (H_0)^N × (H_{−1})^N as n → +∞.     (8.11)

Let
    R : (Û_0, Û_1, H) → (U, U′)                                           (8.12)
be the resolution of problem (I) and (I0). R being linear, we have

    R(Û_0, Û_1, H_n) = R(Û_0 − V_n(0), Û_1 − V_n′(0), 0) + R(V_n(0), V_n′(0), H_n).   (8.13)

By the definition of V_n, we have

    R(V_n(0), V_n′(0), H_n)(T) = 0,                                       (8.14)

then
    R(Û_0, Û_1, H_n)(T) = R(Û_0 − V_n(0), Û_1 − V_n′(0), 0)(T).           (8.15)

By means of the well-posedness of problem (I) and (I0), there exists a positive constant c such that

    ‖R(Û_0, Û_1, H_n)(T)‖_{(H_0)^N×(H_{−1})^N} ≤ c ‖(Û_0 − V_n(0), Û_1 − V_n′(0))‖_{(H_0)^N×(H_{−1})^N}.   (8.16)

Then, it follows from (8.11) that

    ‖R(Û_0, Û_1, H_n)(T)‖_{(H_0)^N×(H_{−1})^N} → 0                        (8.17)

as n → +∞. This gives the approximate boundary null controllability of system (I).
Inversely, assume that system (I) is approximately null controllable. Then for any given (Û_0, Û_1) ∈ (H_0)^N × (H_{−1})^N, there exists a sequence {H_n} of boundary controls in ℒ^M with compact support in [0, T], such that the corresponding solution U_n of problem (I) and (I0) satisfies



    (U_n(T), U_n′(T)) = R(Û_0, Û_1, H_n)(T) → (0, 0) in (H_0)^N × (H_{−1})^N   (8.18)

as n → +∞. Taking the boundary control H = H_n, we solve the backward problem (8.9) and denote by V_n the corresponding solution. By linearity, we have

    R(Û_0, Û_1, H_n) − R(V_n(0), V_n′(0), H_n) = R(Û_0 − V_n(0), Û_1 − V_n′(0), 0).   (8.19)

Once again, by means of the well-posedness of problem (I) and (I0) regarded as a backward problem, and noting (8.18), we get

    ‖R(Û_0 − V_n(0), Û_1 − V_n′(0), 0)(0)‖_{(H_0)^N×(H_{−1})^N}
        ≤ c ‖(U_n(T) − V_n(T), U_n′(T) − V_n′(T))‖_{(H_0)^N×(H_{−1})^N}
        = c ‖(U_n(T), U_n′(T))‖_{(H_0)^N×(H_{−1})^N} → 0                  (8.20)

as n → +∞, which together with (8.19) implies that

    ‖(Û_0, Û_1) − (V_n(0), V_n′(0))‖_{(H_0)^N×(H_{−1})^N}
        = ‖R(Û_0, Û_1, H_n)(0) − R(V_n(0), V_n′(0), H_n)(0)‖_{(H_0)^N×(H_{−1})^N}
        ≤ c ‖R(Û_0 − V_n(0), Û_1 − V_n′(0), 0)(0)‖_{(H_0)^N×(H_{−1})^N} → 0   (8.21)

as n → +∞. This shows C̄ = (H_0)^N × (H_{−1})^N. The proof is complete. □

Theorem 8.6 System (I) is approximately null controllable at the time T > 0 if and
only if the adjoint problem (8.7) is D-observable on the interval [0, T ].

Proof Assume that system (I) is not approximately null controllable. Then, by Lemma 8.5, there exists a nontrivial element (−Φ̂_1, Φ̂_0) ∈ C^⊥. Here, the orthogonality is defined in the sense of duality, therefore (−Φ̂_1, Φ̂_0) ∈ (H_0)^N × (H_1)^N. Taking (Φ̂_0, Φ̂_1) as the initial data, we solve the adjoint problem (8.7) to get a solution Φ. Next, multiplying the backward problem (8.9) by Φ and integrating by parts, we get

    ∫_Ω (V(0), Φ̂_1) dx − ∫_Ω (V′(0), Φ̂_0) dx = ∫_0^T ∫_{Γ₁} (D H, ∂_ν Φ) dΓ dt.   (8.22)

Noting that (−Φ̂_1, Φ̂_0) ∈ C^⊥, it follows that

    ∫_0^T ∫_{Γ₁} (D H, ∂_ν Φ) dΓ dt = 0                                   (8.23)

for all H ∈ ℒ^M. This gives the observation (8.8) with Φ ≢ 0, which therefore contradicts the D-observability of the adjoint problem (8.7).
Inversely, assume that the adjoint problem (8.7) is not D-observable. Then there exists a nontrivial initial data (Φ̂_0, Φ̂_1) ∈ (H_1)^N × (H_0)^N, such that the corresponding solution Φ to the adjoint problem (8.7) satisfies the observation (8.8). Now for any given (Û_0, Û_1) ∈ C, by the definition of C, there exists a sequence {H_n} in ℒ^M with compact support in [0, T], such that the corresponding solution V_n to the backward problem (8.9) satisfies

    (V_n(0), V_n′(0)) → (Û_0, Û_1) in (H_0)^N × (H_{−1})^N as n → +∞.     (8.24)

Noting (8.8), the relation (8.22) becomes

    ∫_Ω (V_n(0), Φ̂_1) dx − ∫_Ω (V_n′(0), Φ̂_0) dx = 0.                     (8.25)

Passing to the limit as n → +∞, we get

    ⟨(Û_0, Û_1), (−Φ̂_1, Φ̂_0)⟩_{(H_0)^N×(H_{−1})^N; (H_0)^N×(H_1)^N} = 0   (8.26)

for all (Û_0, Û_1) ∈ C. In particular, we get (−Φ̂_1, Φ̂_0) ∈ C^⊥. Therefore C̄ ≠ (H_0)^N × (H_{−1})^N, and system (I) is not approximately null controllable. □


Corollary 8.7 If M = N , then system (I) is always approximately null controllable.

Proof Since M = N, D is invertible, then the observation (8.8) gives that

    ∂_ν Φ ≡ 0   on [0, T] × Γ₁.                                           (8.27)

By means of Holmgren's uniqueness theorem (cf. Theorem 8.2 in [62]), we deduce the D-observability of the adjoint problem (8.7). Then by means of Theorem 8.6, we get the approximate boundary null controllability of system (I). The proof is complete. □

Remark 8.8 Corollary 8.7 is a unique continuation result, which is not sufficient
for the exact boundary null controllability. By Theorem 3.11, under the multiplier
geometrical condition (3.1), system (I) is exactly null controllable by means of N
boundary controls, however, without the multiplier geometrical condition (3.1), gen-
erally speaking, we cannot conclude the exact boundary null controllability even
though we have applied N boundary controls for system (I) of N wave equations.
We will discuss later the approximate boundary null controllability with fewer than
N boundary controls.

8.3 Kalman’s Criterion. Total (Direct and Indirect) Controls

Theorem 8.9 Assume that the adjoint problem (8.7) is D-observable. Then, we have
necessarily the following Kalman’s criterion:

    rank(D, AD, ···, A^{N−1} D) = N.                                      (8.28)

Proof By (ii) of Proposition 2.12, it is easy to see that we only need to prove that Ker(D^T) does not contain a nontrivial subspace V which is invariant for A^T.
Let φ_n be the solution to the following eigenvalue problem with μ_n > 0:

    −Δφ_n = μ_n² φ_n   in Ω,      φ_n = 0   on Γ.                         (8.29)

Assume that A^T possesses a nontrivial invariant subspace V ⊆ Ker(D^T). For any fixed integer n > 0, we define

    W = {φ_n w :  w ∈ V}.                                                 (8.30)

Clearly, W is a finite-dimensional invariant subspace of −Δ + A^T. Therefore, we can solve the adjoint problem (8.7) in W and the corresponding solutions are given by Φ = φ_n w(t), where w(t) ∈ V satisfies

    w″ + (μ_n² I + A^T) w = 0,   0 < t < +∞,
    t = 0 :  w = ŵ_0 ∈ V,   w′ = ŵ_1 ∈ V.                                 (8.31)

Since w(t) ∈ V for all t ≥ 0, it follows that

    D^T ∂_ν Φ = ∂_ν φ_n D^T w(t) ≡ 0   on [0, T] × Γ₁.                     (8.32)

Clearly, Φ ≢ 0, then we get a contradiction. The proof is complete. □

Thus, by Theorem 8.6, if system (I) is approximately null controllable, then we


have necessarily the Kalman’s criterion (8.28).
For the exact boundary null controllability, by Theorem 3.11 the number M =
rank(D), namely, the number of boundary controls, should be equal to N , the
number of state variables. However, as we will see in what follows, the approx-
imate boundary null controllability of system (I) could be realized if the number
M = rank(D) is very small, even if M = rank(D) = 1. Nevertheless, Theorems 8.6
and 8.9 show that if system (I) is approximately null controllable, then the enlarged
matrix (D, AD, · · · , A N −1 D), composed of the coupling matrix A and the boundary
control matrix D, should be of full row-rank. That is to say, even if the rank of D
might be small, but because of the existence and influence of the coupling matrix
8.3 Kalman’s Criterion. Total (Direct and Indirect) Controls 87

A, in order to realize the approximate boundary null controllability, the rank of the
enlarged matrix (D, AD, · · · , A N −1 D) should be still equal to N , the number of state
variables. From this point of view, we may say that the rank M of D is the number of
“direct” boundary controls acting on 1 , and rank(D, AD, · · · , A N −1 D) denotes
the “total” number of direct and indirect controls, while the number of “indirect”
controls is given by the difference: rank(D, AD, · · · , A N −1 D) − rank(D), which
is equal to (N − M) in the case of approximate boundary null controllability. Differently from the exact boundary null controllability, in which only the number rank(D) of direct boundary controls is concerned and M = rank(D) should be equal to N, for the approximate boundary null controllability we should consider not only the number of direct boundary controls, but also the number of indirect controls, namely, the number of the total (direct and indirect) controls.
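
A minimal numerical sketch (illustrative only; the matrices below are arbitrary choices, not taken from the text) of the counting just described is the following, where the rank of the enlarged matrix (D, AD, ···, A^{N−1}D) gives the total number of controls and rank(D) the number of direct ones:

```python
import numpy as np

def kalman_rank(A, D):
    """rank of (D, AD, ..., A^{N-1} D), cf. (8.28)."""
    N = A.shape[0]
    blocks = [D]
    for _ in range(N - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

# cascade (Jordan) coupling of order 4 with a single boundary control acting
# on the last component: one direct control and three indirect ones
N = 4
A = np.eye(N, k=1)                       # nilpotent Jordan block J_N
D = np.zeros((N, 1)); D[-1, 0] = 1.0

print(np.linalg.matrix_rank(D))          # 1 : direct control
print(kalman_rank(A, D))                 # 4 = N, so Kalman's criterion (8.28) holds
```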
Remark 8.10 It is well known that Kalman’s criterion (8.28) is necessary and suffi-
cient for the exact controllability of systems of ODEs (cf. [23, 73]). But, the situation
is more complicated for hyperbolic distributed parameter systems. Because of the
finite speed of wave propagation, it is natural to consider Kalman’s criterion only
in the case that T > 0 is sufficiently large. However, the following theorem pre-
cisely shows that even on the infinite observation interval [0, +∞), the sufficiency
of Kalman’s criterion may fail. So, some algebraic assumptions on the coupling
matrix A should be required to guarantee the sufficiency of Kalman’s criterion.
Theorem 8.11 Let μ_n² and φ_n be defined by (8.29). Assume that the set

    {(m, n) :  μ_m ≠ μ_n,  ∂_ν φ_m = ∂_ν φ_n on Γ₁}                        (8.33)

is not empty. For any given pair (m, n) in this set, let

    ε = (μ_m² − μ_n²)/2.                                                  (8.34)

Then the adjoint system

    φ″ − Δφ + εψ = 0    in (0, +∞) × Ω,
    ψ″ − Δψ + εφ = 0    in (0, +∞) × Ω,                                   (8.35)
    φ = ψ = 0           on (0, +∞) × Γ

admits a nontrivial solution (φ, ψ) such that

    ∂_ν φ ≡ 0   on [0, +∞) × Γ₁,                                           (8.36)

hence the corresponding adjoint problem (8.7) is not D-observable with D = (1, 0)^T.
Proof Let
    φ_λ = φ_n − φ_m,   ψ_λ = φ_n + φ_m,   λ² = (μ_m² + μ_n²)/2.           (8.37)

We check easily that (φ_λ, ψ_λ) satisfies the following eigensystem:

    λ²φ_λ + Δφ_λ − εψ_λ = 0   in Ω,
    λ²ψ_λ + Δψ_λ − εφ_λ = 0   in Ω,                                        (8.38)
    φ_λ = ψ_λ = 0             on Γ.

Moreover, noting the definition (8.33), we have

    ∂_ν φ_λ ≡ 0   on Γ₁.                                                   (8.39)

Now, let
    φ = e^{iλt} φ_λ,   ψ = e^{iλt} ψ_λ.                                    (8.40)
It is easy to see that (φ, ψ) is a nontrivial solution to system (8.35) and satisfies the observation (8.36).
In order to complete the proof, we examine the following situations (there are many others!), in which the set defined by (8.33) is indeed not empty.
(i) Ω = (0, π) with Γ₁ = {0}. In this case, we have

    μ_n = n,   φ_n = (1/n) sin nx   and   φ_n′(0) = 1.                     (8.41)

So, (m, n) belongs to the set (8.33) for all m ≠ n.
(ii) Ω = (0, π) × (0, π) with Γ₁ = {0} × [0, π]. Setting

    μ_{m,n} = √(m² + n²),   φ_{m,n} = (1/m) sin mx sin ny,                 (8.42)

we have
    (∂/∂x) φ_{m,n}(0, y) = (∂/∂x) φ_{m′,n}(0, y) = sin ny,   0 ≤ y ≤ π.    (8.43)

So {m, n} and {m′, n} belong to the set (8.33) for all m ≠ m′ and n ≥ 1.
The proof is thus complete. □
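
The one-dimensional example (i) can also be checked symbolically. The following sketch (illustrative only; the particular pair m = 2, n = 5 is an arbitrary choice) verifies that φ = e^{iλt}(φ_n − φ_m), ψ = e^{iλt}(φ_n + φ_m) solve the coupled system (8.35) while the observed normal derivative of φ vanishes at x = 0:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
m, n = 2, 5                                        # any pair with m != n
phi_m, phi_n = sp.sin(m*x)/m, sp.sin(n*x)/n        # eigenfunctions of (8.29) on (0, pi)
eps = sp.Rational(m**2 - n**2, 2)                  # (8.34)
lam = sp.sqrt(sp.Rational(m**2 + n**2, 2))         # (8.37)

phi = sp.exp(sp.I*lam*t) * (phi_n - phi_m)
psi = sp.exp(sp.I*lam*t) * (phi_n + phi_m)

eq1 = sp.simplify(sp.expand(sp.diff(phi, t, 2) - sp.diff(phi, x, 2) + eps*psi))   # 1st eq. of (8.35)
eq2 = sp.simplify(sp.expand(sp.diff(psi, t, 2) - sp.diff(psi, x, 2) + eps*phi))   # 2nd eq. of (8.35)
obs = sp.simplify(sp.diff(phi, x).subs(x, 0))                                     # observation at x = 0

print(eq1, eq2, obs)       # 0 0 0 : a nontrivial solution invisible to the observation
```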

Remark 8.12 Note that

    D = ⎛ 1 ⎞ ,   A = ⎛ 0  ε ⎞ ,   (D, AD) = ⎛ 1  0 ⎞                      (8.44)
        ⎝ 0 ⎠         ⎝ ε  0 ⎠               ⎝ 0  ε ⎠

correspond to the adjoint system (8.35) with the observation (8.36). Since the matrices A and D satisfy well the corresponding Kalman's criterion (8.28), Theorem 8.11 shows that even in the infinite interval of observation, Kalman's criterion is not sufficient for the D-observability of the adjoint problem (8.35), at least in the two cases mentioned above. Therefore, in order to get the D-observability, some additional algebraic assumptions on A should be imposed.
8.4 Sufficiency of Kalman’s Criterion for T > 0 Large … 89

8.4 Sufficiency of Kalman’s Criterion for T > 0 Large


Enough for the Nilpotent System

A matrix A of order N is said to be nilpotent if there exists an integer k with 1 ≤ k ≤ N, such that A^k = 0. Then it is easy to see that A is nilpotent if and only if all the eigenvalues of A are equal to zero. Thus, under a suitable basis B, a nilpotent matrix A can be written in a diagonal form of Jordan blocks:

    B⁻¹ A B = ⎛ J_p           ⎞
              ⎜      J_q      ⎟ ,                                          (8.45)
              ⎝           ⋱   ⎠

where J_p is the following Jordan block of order p:

          ⎛ 0  1          ⎞
          ⎜    0  1       ⎟
    J_p = ⎜       ⋱  ⋱    ⎟ .                                              (8.46)
          ⎜          0  1 ⎟
          ⎝             0 ⎠

When A is a sole Jordan block, the so-called cascade matrix, it was shown in [2] that the observation of the last component of the adjoint variable in the adjoint problem (8.7) is sufficient for the corresponding D-observability. In this section, we will generalize this result to the nilpotent system.
We first consider a very special case.
Proposition 8.13 Let A = a I , where a is a real number. If Kalman’s criterion (8.28)
holds, then the adjoint problem (8.7) is D-observable, provided that T > 0 is large
enough.
Proof In this case, Kalman’s criterion (8.28) implies that M = N . Consequently, the
observation (8.8) implies that

    ∂_ν Φ ≡ 0   on [0, T] × Γ₁.                                            (8.47)

Thus, by the classical Holmgren's uniqueness theorem, we get Φ ≡ 0, provided that T > 0 is large enough. In this situation, we do not need any multiplier geometrical condition on the domain Ω. The proof is complete. □
Lemma 8.14 Assume that there exists an invertible matrix P such that P A = A P. Then the adjoint problem (8.7) is D-observable if and only if it is PD-observable.

Proof Let Φ̃ = P^{−T} Φ. Noting P A = A P, the new variable Φ̃ satisfies the same system as in problem (8.7). On the other hand, since

    D^T ∂_ν Φ = (P D)^T ∂_ν Φ̃   on Γ₁,                                     (8.48)

the D-observability on Φ is equivalent to the PD-observability on Φ̃. The proof is complete. □

Proposition 8.15 Let P be an invertible matrix. Define

    Ã = P A P⁻¹   and   D̃ = P D.

Then the matrices A and D satisfy Kalman's criterion (8.28) if and only if the matrices Ã and D̃ do so.

Proof It is sufficient to note that

    (D̃, Ã D̃, ···, Ã^{N−1} D̃) = P (D, AD, ···, A^{N−1} D)

and that P is invertible. □

Theorem 8.16 Assume that  ⊂ Rn satisfies the multiplier geometrical condition


(3.1). Let T > 0 be large enough. Assume that the coupling matrix A is nilpotent.
Then Kalman’s criterion (8.28) is sufficient for the D-observability of the adjoint
system (8.7).

Proof (i) Case where A is a Jordan block (the cascade matrix):

        ⎛ 0  1          ⎞
        ⎜    0  1       ⎟
    A = ⎜       ⋱  ⋱    ⎟ =: J_N.                                          (8.49)
        ⎜          0  1 ⎟
        ⎝             0 ⎠

Noting that E = (0, ···, 0, 1)^T is the only eigenvector of A^T, by Proposition 2.12 (ii), A and D satisfy Kalman's criterion (8.28) if and only if

    D^T E ≠ 0,                                                             (8.50)

namely, if and only if the last row of D is not a zero vector. We denote by d = (d_1, d_2, ···, d_N)^T a column of D with d_N ≠ 0 and let

        ⎛ d_N  d_{N−1}  ···   d_1     ⎞
        ⎜  0    d_N     ···   d_2     ⎟
    P = ⎜  ⋮         ⋱         ⋮      ⎟ .                                  (8.51)
        ⎜  0     0     d_N  d_{N−1}   ⎟
        ⎝  0     0      0    d_N      ⎠

Obviously, P is invertible. Noting that

    P = d_N I + d_{N−1} J_N + ··· + d_1 J_N^{N−1},                         (8.52)
8.4 Sufficiency of Kalman’s Criterion for T > 0 Large … 91

it is easy to see that P A = A P. On the other hand, under the multiplier geometrical condition (3.1) on Ω, it was shown in [2] that the adjoint system (8.7) with the coupling matrix (8.49) is D_0-observable with

    D_0 = (0, ···, 0, 1)^T.                                                (8.53)

Thus, by Lemma 8.14, the same system is PD_0-observable; then, noting that

    P D_0 = (d_1, ···, d_N)^T                                              (8.54)

is a sub-matrix of D, problem (8.7) must be D-observable.
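
The algebraic facts used in this step can be confirmed numerically. The following sketch (illustrative only; the order N = 5 and the column d are arbitrary choices) checks that the polynomial P of (8.52) commutes with J_N, is invertible whenever d_N ≠ 0, and satisfies P D_0 = d as in (8.54):

```python
import numpy as np

N = 5
J = np.eye(N, k=1)                                  # the cascade matrix J_N of (8.49)
d = np.array([0.3, -1.2, 0.0, 2.5, 0.7])            # a column of D with d_N != 0

# P = d_N I + d_{N-1} J + ... + d_1 J^{N-1}, cf. (8.52)
P = sum(d[N - 1 - k] * np.linalg.matrix_power(J, k) for k in range(N))

D0 = np.zeros(N); D0[-1] = 1.0                      # D_0 of (8.53)

assert np.allclose(P @ J, J @ P)                    # P commutes with A = J_N
assert abs(np.linalg.det(P)) > 0                    # invertible, since d_N = 0.7 != 0
assert np.allclose(P @ D0, d)                       # P D_0 = (d_1, ..., d_N)^T, cf. (8.54)
```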


(ii) Case that A is composed of two Jordan blocks of the same size:

Jp 0
A= , (8.55)
0 Jp

where J p is the Jordan block of order p.


First, let i be defined by

(i)
i = (0, · · · , 0, 1 , 0, · · · , 0)T (8.56)

for i = 1, · · · , 2 p. Consider the special boundary control matrix

D0 = ( p , 2 p ). (8.57)

Noting that in this situation the adjoint system (8.7) and the observation (8.8) are
entirely decoupled into two independent subsystems, each of them satisfies Kalman’s
criterion (8.28) with N = p. Then, by the consideration in the previous case, the
adjoint system (8.7) is D0 -observable.
Now, let us consider the general boundary control matrix of order 2p × M:

        ⎛ a_1   c_1   ··· ⎞
        ⎜  ⋮     ⋮        ⎟
        ⎜ a_p   c_p   ··· ⎟
    D = ⎜ b_1   d_1   ··· ⎟ .                                              (8.58)
        ⎜  ⋮     ⋮        ⎟
        ⎝ b_p   d_p   ··· ⎠

Noting that ℯ_p and ℯ_{2p} are the only two eigenvectors of A^T, associated with the same eigenvalue zero, and that for any given real numbers α and β with α² + β² ≠ 0, αℯ_p + βℯ_{2p} is also an eigenvector of A^T, by Proposition 2.12 (ii), Kalman's criterion (8.28) holds if and only if D^T ℯ_p and D^T ℯ_{2p}, namely, the row vectors

    (a_p, c_p, ······),   (b_p, d_p, ······),                              (8.59)

are linearly independent. Without loss of generality, we may assume that

    a_p d_p − b_p c_p ≠ 0.                                                 (8.60)

Let the matrix P of order 2p be defined by

        ⎛ a_p  a_{p−1}  ···  a_1    c_p  c_{p−1}  ···  c_1  ⎞
        ⎜  0    a_p     ···  a_2     0    c_p     ···  c_2  ⎟
        ⎜  ⋮         ⋱        ⋮      ⋮         ⋱        ⋮   ⎟
        ⎜  0     0      ···  a_p     0     0      ···  c_p  ⎟     ⎛ P_11  P_12 ⎞
    P = ⎜ b_p  b_{p−1}  ···  b_1    d_p  d_{p−1}  ···  d_1  ⎟  =  ⎝ P_21  P_22 ⎠ .   (8.61)
        ⎜  0    b_p     ···  b_2     0    d_p     ···  d_2  ⎟
        ⎜  ⋮         ⋱        ⋮      ⋮         ⋱        ⋮   ⎟
        ⎝  0     0      ···  b_p     0     0      ···  d_p  ⎠

Since P_11, P_12, P_21, and P_22 have the same structure as P in (8.52), we check easily that

    P A = ⎛ P_11 J_p  P_12 J_p ⎞ = ⎛ J_p P_11  J_p P_12 ⎞ = A P.           (8.62)
          ⎝ P_21 J_p  P_22 J_p ⎠   ⎝ J_p P_21  J_p P_22 ⎠

Moreover, P is invertible under condition (8.60). Since the adjoint system (8.7) is
D0 -observable, by Lemma 8.14, it is P D0 -observable, then, D-observable since P D0
is composed of the first two columns of D.
(iii) Case where A is composed of two Jordan blocks with different sizes:

    A = ⎛ J_p   0  ⎞                                                        (8.63)
        ⎝  0   J_q ⎠

with q < p. In this case, the adjoint system (8.7) is composed of the first subsystem

    i = 1, ···, p :   φ_i″ − Δφ_i + φ_{i−1} = 0   in (0, +∞) × Ω,           (8.64)
                      φ_i = 0                     on (0, +∞) × Γ

with φ_0 = 0, and of the second subsystem

    j = p − q + 1, ···, p :   ψ_j″ − Δψ_j + ψ_{j−1} = 0   in (0, +∞) × Ω,   (8.65)
                              ψ_j = 0                     on (0, +∞) × Γ

with ψ_{p−q} = 0. These two subsystems (8.64) and (8.65) are coupled together by the D-observations:
8.4 Sufficiency of Kalman’s Criterion for T > 0 Large … 93
    Σ_{i=1}^{p} a_i ∂_ν φ_i + Σ_{j=p−q+1}^{p} b_j ∂_ν ψ_j = 0   on (0, T) × Γ₁,
    Σ_{i=1}^{p} c_i ∂_ν φ_i + Σ_{j=p−q+1}^{p} d_j ∂_ν ψ_j = 0   on (0, T) × Γ₁,   (8.66)
    ············

In order to transfer the problem to the case p = q, we expand the second subsystem (8.65) from {p − q + 1, ···, p} to {1, ···, p}:

    j = 1, ···, p :   ψ_j″ − Δψ_j + ψ_{j−1} = 0   in (0, +∞) × Ω,            (8.67)
                      ψ_j = 0                     on (0, +∞) × Γ

with ψ_0 = 0, so that the two subsystems (8.64) and (8.67) have the same size.

Accordingly, the D-observations (8.66) can be extended to the D̃-observations:

    Σ_{i=1}^{p} a_i ∂_ν φ_i + Σ_{j=1}^{p} b_j ∂_ν ψ_j = 0   on (0, T) × Γ₁,
    Σ_{i=1}^{p} c_i ∂_ν φ_i + Σ_{j=1}^{p} d_j ∂_ν ψ_j = 0   on (0, T) × Γ₁,   (8.68)
    ············

with arbitrarily given b_j and d_j for j = 1, ···, p − q.


Let us write the matrix D of order (p + q) × M of the observations (8.66) as

        ⎛ a_1          c_1          ··· ⎞
        ⎜  ⋮            ⋮               ⎟
        ⎜ a_p          c_p          ··· ⎟
    D = ⎜ b_{p−q+1}    d_{p−q+1}    ··· ⎟ .                                  (8.69)
        ⎜  ⋮            ⋮               ⎟
        ⎝ b_p          d_p          ··· ⎠

Similarly to the case (ii), ℯ_p and ℯ_{p+q} are the only two eigenvectors of A^T, associated with the same eigenvalue zero, and for any given real numbers α and β with α² + β² ≠ 0, αℯ_p + βℯ_{p+q} is also an eigenvector of A^T. By Proposition 2.12 (ii), Kalman's criterion (8.28) holds if and only if D^T ℯ_p and D^T ℯ_{p+q}, namely, the row vectors

    (a_p, c_p, ······),   (b_p, d_p, ······),                                (8.70)

are linearly independent. Without loss of generality, we may assume that (8.60) holds.
Similarly, we write the matrix D̃ of order 2p × M of the extended observations (8.68) as

         ⎛ a_1          c_1          ··· ⎞
         ⎜  ⋮            ⋮               ⎟
         ⎜ a_p          c_p          ··· ⎟
         ⎜ b_1          d_1          ··· ⎟
    D̃ =  ⎜  ⋮            ⋮               ⎟ .                                  (8.71)
         ⎜ b_{p−q}      d_{p−q}          ⎟
         ⎜ b_{p−q+1}    d_{p−q+1}    ··· ⎟
         ⎜  ⋮            ⋮               ⎟
         ⎝ b_p          d_p          ··· ⎠

The matrix Ã of order 2p of the extended adjoint system, composed of (8.64) and (8.67), is the same as given in (8.55). Then, Ã and D̃ satisfy the corresponding Kalman's criterion if and only if the condition (8.60) holds. Then, by the conclusion of (ii), the extended adjoint system composed of (8.64) and (8.67) is D̃-observable in the space (H¹₀(Ω))^{2p} × (L²(Ω))^{2p}.
In particular, if we choose the specific initial data such that

    t = 0 :   ψ_1 = ··· = ψ_{p−q} = 0   and   ψ_1′ = ··· = ψ_{p−q}′ = 0      (8.72)

for the extended subsystem (8.67), then by well-posedness we have

    ψ_1 ≡ ··· ≡ ψ_{p−q} ≡ 0   in (0, +∞) × Ω.                                (8.73)

Thus, the extended adjoint system composed of (8.64) and (8.67) with the D̃-observations (8.68) reduces to the original adjoint system composed of (8.64) and (8.65) with the D-observations (8.66), hence we get that the original adjoint system is D-observable in the space (H¹₀(Ω))^{p+q} × (L²(Ω))^{p+q}.
The case where A is composed of several Jordan blocks can be handled similarly. Since any given nilpotent matrix can be decomposed into a diagonal form of Jordan blocks under a suitable basis, by Proposition 8.15, the previous conclusion remains valid for any given nilpotent matrix A. The proof is complete. □

Theorem 8.17 Assume that Ω ⊂ ℝⁿ satisfies the multiplier geometrical condition (3.1). Let T > 0 be large enough. Assume furthermore that the coupling matrix A admits one sole eigenvalue λ ≥ 0. Then the adjoint problem (8.7) is D-observable if and only if the matrix D satisfies Kalman's criterion (8.28).

Proof In fact, the operator −Δ + λI is still self-adjoint and coercive in L²(Ω), and A − λI is nilpotent. So, Theorem 8.16 remains true when −Δ is replaced by −Δ + λI. □
8.5 Sufficiency of Kalman’s Criterion for T > 0 Large … 95

8.5 Sufficiency of Kalman’s Criterion for T > 0 Large


Enough for 2 × 2 Systems

Theorem 8.18 Let  ⊂ Rn satisfy the multiplier geometrical condition (3.1).


Assume that the 2 × 2 matrix A has real eigenvalues λ  0 and μ  0 such that

|λ − μ|  0 , (8.74)

where 0 > 0 is small enough. Then the adjoint problem (8.7) is D-observable for
T > 0 large enough if and only if the matrix D satisfies Kalman’s criterion (8.28).

Proof By Theorem 8.9, we only need to prove the sufficiency.


If D is invertible, then the observation (8.8) implies that

∂ν  ≡ 0 on [0, T ] × 1 (8.75)

and the classical Holmgren’s uniqueness theorem, implies that  ≡ 0, provided


that T > 0 is large enough. Noting that in this case, we do not need the multiplier
geometrical condition (3.1) on . Thus, we only need to consider the case that D is
of rank one. Without loss of generality, we may consider the following three cases.
(a) If λ = μ with two linearly independent eigenvectors, then any nontrivial vector
is an eigenvector of A T . By (ii) of Lemma 2.12 with d = 0, Ker(D T ) is reduced to
{0}, then the matrix D is invertible, which gives a contradiction.
(b) If λ = μ with one eigenvector, then

λ1
A∼ . (8.76)

Since λ  0, it suffices to apply Theorem 8.17.


(c) If λ = μ, then λ+μ
0 0 λ−μ
A∼ 2 + 2 . (8.77)
0 λ+μ
2
λ−μ
2
0

Since λ + μ  0, the operator − + λ+μ 2


I is still coercive in H01 (), then it is
sufficient to consider the adjoint problem (8.7), in which − is replaced by − +
λ+μ
I , with the matrix
2
0 λ−μ
A = λ−μ 2 . (8.78)
2
0

Since  satisfies the multiplier geometrical condition (3.1), following a result in [1]
(cf. also [65] in the case of different speeds of wave propagation), the corresponding
adjoint problem (8.7) is D0 -observable with
96 8 Approximate Boundary Null Controllability

1
D0 = . (8.79)
0

On the other hand, since D is of rank one, we have



a 0 a b
D= , A= , (D, AD) = (8.80)
b 0 b a

with  = λ−μ
2
. So, Kalman’s criterion (8.28) is satisfied if and only if a 2 = b2 . Then
the matrix
ab
P= (8.81)
ba

is invertible and commutes with A. Thus, by Lemma 8.14, the adjoint problem (8.7)
is also P D0 -observable, but
a
P D0 = = D. (8.82)
b

The proof is complete. 


Remark 8.19 Theorem 8.11 shows that the condition “0 > 0 is small enough” in
Theorem 8.18 is actually necessary.
Proposition 8.20 Let  ⊂ Rn satisfy the multiplier geometrical condition (3.1) and
|| > 0 be small enough. Let T > 0 be large enough. Then the adjoint system (8.35)
is D-observable if and only if the matrix D satisfies the corresponding Kalman’s
criterion.
Proof Since  satisfies the multiplier geometrical condition (3.1), following a result
in [1], the adjoint system (8.35) is D0 -observable with

1
D0 = . (8.83)
0

Noting that
a 0 a b
D= , A= , (D, AD) = , (8.84)
b 0 b a

Kalman’s criterion (8.28) is satisfied if and only if a 2 = b2 . On the other hand, the
matrix
ab
P= (8.85)
ba

is invertible and commutes with A. Thus, by Lemma 8.14, the adjoint problem (8.35)
is also P D0 -observable with

a
P D0 = = D. (8.86)
b
8.5 Sufficiency of Kalman’s Criterion for T > 0 Large … 97

The proof is complete. 

Remark 8.21 Because of the equivalence between the approximate boundary null
controllability and the D-observability (cf. Theorem 8.6), in the cases discussed in
Theorems 8.16 and 8.18, Kalman’s criterion (8.28) is indeed necessary and sufficient
to the approximate boundary null controllability for the corresponding system (I). In
the one-space-dimensional case, some more general results can be obtained in this
direction, and the following two sections are devoted to this end.

8.6 The Unique Continuation for Nonharmonic Series

In this section, we will establish the unique continuation for nonharmonic series,
which will be used in the next section to prove the sufficiency of Kalman’s criterion
(8.28) to the D-observability for T > 0 large enough for diagonalizable systems in
one-space-dimensional case. The study is based on a generalized Ingham’s inequality
(cf. [28]).
Let Z∗ denote the set of all nonzero integers and {βn(l) }1lm,n∈Z∗ be a strictly
increasing real sequence:
(1) (m)
· · · β−1 < · · · < β−1 < β1(1) < · · · < β1(m) < · · · . (8.87)

We want to show that the following condition


m
(l)
an(l) eiβn t = 0 on [0, T ] (8.88)
n∈Z∗ l=1

with

m
|an(l) |2 < +∞ (8.89)
n∈Z∗ l=1

implies that
an(l) = 0, n ∈ Z∗ , 1  l  m, (8.90)

provided that T > 0 is large enough. If this is true, we say that the sequence
(l)
{eiβn t }1lm;n∈Z∗ is ω-linearly independent in L 2 (0, T ).

Theorem 8.22 Assume that (8.87) holds and that there exist positive constants c, s,
and γ such that
(l)
βn+1 − βn(l)  mγ, (8.91)

c
 βn(l+1) − βn(l)  γ (8.92)
|n|s
98 8 Approximate Boundary Null Controllability

for all 1  l  m and all n ∈ Z∗ with |n| large enough. Then the sequence
(l)
{eiβn t }1lm;n∈Z∗ is ω-linearly independent in L 2 (0, T ), provided that T > 2π D + ,
where D + is the upper density of the sequence {βn(l) }1lm;n∈Z∗ , defined by

N (R)
D + = lim sup , (8.93)
R→+∞ 2R

where N (R) denotes the number of {βn(l) } contained in the interval [−R, R].

Proof We first define the sequence of difference quotients as follows:

l 
 
l  ( p)
en(l) (t) = (βn( p) − βn(q) )−1 eiβn t (8.94)
p=1 q=1,q = p

for l = 1, · · · , m and n ∈ Z∗ . Since the sequence {βn(l) }1lm,n∈Z∗ can be asymptot-


ically close at the rate |n|1 s , the classical Ingham’s theorem does not work. We will
use a generalized Ingham-type theorem based on the divided differences, which tol-
erates asymptotically close frequencies {βn(l) }1lm,n∈Z∗ (cf. Theorem 9.4 in [28]).
Namely, under conditions (8.87) and (8.91)–(8.92), the sequence of difference quo-
tients {en(l) }1lm,n∈Z∗ is a Riesz sequence in L 2 (0, T ), provided that T > 2π D + .
(l, p)
Let the lower triangular matrix An = (an ) be defined by


l
an(1,1) = 1; an(l, p) = (βn( p) − βn(q) )−1 (8.95)
q=1,q = p

for 1 < l  m and 1  p  l. Since the diagonals an(l,l) are positive for all 1  l  m,
(l, p)
An is invertible. Then, setting A−1
n = Bn = (bn ), we write (8.94) as

(l) 
l
eiβn t = bn(l, p) en( p) (t), l = 1, · · · , m. (8.96)
p=1

Inserting (8.96) into (8.88), we get


m
an( p) en( p) (t) = 0 on [0, T ],
 (8.97)
n∈Z∗ p=1

where

m
an( p) =
 bn(l, p) an(l) . (8.98)
l= p

Assume for the time being that


8.6 The Unique Continuation for Nonharmonic Series 99


m
an( p) |2 < +∞.
| (8.99)
n∈Z∗ p=1

Then by means of the property of Riesz sequence, it follows from (8.97) and (8.99)
that
an( p) = 0, 1  p  m, n ∈ Z∗ ,
 (8.100)

which implies (8.90).


Now we return to the verification of (8.99). From (8.98), it is sufficient to show
that the matrix Bn is uniformly bounded for all n. From (8.92) and (8.95), we have

1 
l−1
bn(l,l) = = (βn(l) − βn(q) )  c1 γ l−1 , 1  l  m, (8.101)
an(l,l) q=1

where c1 is a positive constant independent of n. Since Bn is also a lower triangular


matrix, without loss of generality, we assume that γ < 1. Then, it follows that the
spectral radius ρ(Bn )  c1 . It is well known that for any given ˜ > 0, there exists a
vector norm in Rm , such that the subordinate matrix norm satisfies

Bn   (ρ(Bn ) + ˜)  c1 + 1, ∀n ∈ Z∗ . (8.102)

The proof is then complete. 

Corollary 8.23 For


δ1 < δ2 < · · · < δm , (8.103)

we define 
βn(l) = n 2 + δl , l = 1, 2, · · · , m, n  1,
(8.104)
(l)
β−n = −βn(l) , l = 1, 2, · · · , m, n  1.

(l)
Then, for || > 0 small enough, the sequence {eiβn t }1lm;n∈Z∗ is ω-linearly inde-
pendent in L 2 (0, T ), provided that

T > 2mπ. (8.105)

Proof First, for || > 0 small enough, the sequence {βn(l) }1lm;n∈Z∗ satisfies (8.87).
On the other hand, a straightforward computation gives that
(l)
βn+1 − βn(l) = O(1) (8.106)

and   
(δl+1 − δl )  
βn(l+1) − βn(l) = =O   (8.107)
n + δl+1  + n + δl 
2 2 n
100 8 Approximate Boundary Null Controllability

for |n| large enough. Then the sequence {βn(l) }1lm;n∈Z∗ satisfies all the require-
ments of Theorem 8.22 with s = 1 and D + = m. Consequently, the sequence
(l)
{eiβn t }1lm;n∈Z∗ is ω-linearly independent in L 2 (0, T ), provided that (8.105)
holds. 

8.7 Sufficiency of Kalman’s Criterion for T > 0 Large


Enough in the One-Space-Dimensional Case

In this section, under suitable conditions on the coupling matrix A and for || > 0
small enough, we will first establish the sufficiency of Kalman’s criterion (8.28) for
the following one-space-dimensional problem:
⎧ 
⎨  − x x + A T  = 0 in (0, +∞) × (0, π),
(t, 0) = (t, π) = 0 on (0, +∞), (8.108)
⎩ 1 in (0, π)
 0 ,  = 
t =0: =

with the observation at the end x = 0:

D T x (t, 0) ≡ 0 on [0, T ]. (8.109)

We will next give the optimal time of observation for the coupling matrix A, which
has distinct real eigenvalues.
First assume that A T is diagonalizable with the real eigenvalues:

δ1 < δ2 < · · · < δm (8.110)

and the corresponding eigenvectors w(l,μ) such that

A T w (l,μ) = δl w (l,μ) , 1  l  m, 1  μ  μl , (8.111)

in which

m
μl = N . (8.112)
l=1

Let
en = sin nx, n  1 (8.113)

be the eigenfunctions of − in H01 (0, π). Then en w (l,μ) is an eigenvector of


− + A T corresponding to the eigenvalue n 2 + δl . Furthermore, we define
{βn(l) }1lm;n∈Z∗ as in (8.104) and the corresponding eigenvectors of the system given
in (8.108) by
8.7 Sufficiency of Kalman’s Criterion for T > 0 Large Enough … 101
⎛ ⎞
en w (l,μ)
E n(l,μ) = ⎝ iβn(l) ⎠ , 1  l  m, 1  μ  μl , n ∈ Z∗ , (8.114)
en w (l,μ)

in which we define e−n = en for all n  1. Since the eigenfunctions en (n  1) are


(l,μ)
orthogonal in L 2 (0, π) as well as in H01 (0, π), the linear hull Span{E n }1lm,1μμl
of finite dimension N are mutually orthogonal for all n  1. On the other hand, the
system of eigenvectors (8.114) is complete in (H01 (0, π)) N × (L 2 (0, π)) N , therefore
it forms a Hilbert basis of subspaces, then a Riesz basis in (H01 (0, π)) N × (L 2 (0, π)) N
(cf. [15] for the basis of subspaces).
For any given initial data
 μl
m 
0

1 = αn(l,μ) E n(l,μ) , (8.115)
 ∗ n∈Z l=1 μ=1

the corresponding solution of problem (8.108) is given by


 μl
m 
 (l)
= αn(l,μ) eiβn t E n(l,μ) . (8.116)

n∈Z∗ l=1 μ=1

In particular, we have
μl

m  (l,μ)
αn (l)
= (l)
eiβn t en w (l,μ) , (8.117)
n∈Z∗ l=1 μ=1 iβn

and the observation (8.109) becomes


m 
μl

(l,μ)  (l)
w (l,μ) eiβn t ≡ 0 on [0, T ].
n
DT (8.118)
n∈Z∗ l=1 μ=1 iβn(l)

Theorem 8.24 Assume that A and D satisfy Kalman’s criterion (8.28). Assume fur-
thermore that A T is real diagonalizable with (8.110)–(8.111). Then problem (8.108)
is D-observable for || > 0 small enough, provided that

T > 2mπ. (8.119)

Proof Applying Corollary 8.23 to each line of (8.118), we get


μl

(l,μ) 
w (l,μ) = 0, 1  l  m, n ∈ Z∗ .
n
DT (8.120)
μ=1 iβn(l)
102 8 Approximate Boundary Null Controllability

By virtue of Proposition 2.12 (ii), because of Kalman’s criterion (8.28), Ker(D T )


does not contain any nontrivial invariant subspace of A T , then it follows that
μl
 (l,μ)
nαn
w (l,μ) = 0, 1  l  m, n ∈ Z∗ . (8.121)
μ=1 iβn(l)

Hence
αn(l,μ) = 0, 1  μ  μl , 1  l  m, n ∈ Z∗ . (8.122)

The proof is complete. 


We now improve the estimate (8.119) on the observation time in the case that the
eigenvalues of A are distinct.
Theorem 8.25 Under the assumptions of Theorem 8.24, assume furthermore that
A T possesses N distinct real eigenvalues:

δ1 < δ2 < · · · < δ N . (8.123)

Then problem (8.108) is D-observable for || > 0 small enough, provided that

T > 2π(N − rank(D) + 1). (8.124)

Proof Let w(1) , w (1) , · · · , w (N ) be the corresponding eigenvectors of A T . Accord-


ingly, (8.118) becomes


N
nαn(l) (l)
DT w (l) eiβn t ≡ 0 on [0, T ]. (8.125)
n∈Z l=1 iβn(l)

Then, setting r = rank(D), without loss of generality, we assume that D T w (1) , · · · ,


D T w (r ) are linearly independent. There exists an invertible matrix S of order N , such
that
S D T (w (1) , · · · , w (r ) ) = (e1 , · · · er ), (8.126)

where e1 , · · · er are the canonic basis vectors in R N . Since S is invertible, (8.125)


can be equivalently rewritten as

 
r
nα(l) (l) 
N
nαn(l) (l)

n
(l)
el eiβn t + S D T w (l) eiβn t
≡0 (8.127)
n∈Z∗ l=1 iβn l=r +1 iβn(l)

for all 0  t  T . Once again, we apply Corollary 8.23 to each equation of (8.127),
but this time, the upper density of the sequence {βn(1) , βn(l) }r +1lN ;n∈Z∗ is equal to
(N − r + 1). Therefore, we get again (8.122), provided that (8.124) holds. The proof
is complete. 
8.8 An Example 103

8.8 An Example

Let || > 0 be small enough. By Corollary 8.20, we know that the following system
⎧ 
⎨ φ − φ + ψ = 0 in (0, +∞) × ,
ψ  − ψ + φ = 0 in (0, +∞) × , (8.128)

φ=ψ=0 on (0, +∞) × 

is observable by means of the trace ∂ν φ|1 or ∂ν ψ|1 on all the interval [0, T ] for
T > 0 large enough.
We now consider the following example which is approximately null controllable,
but not exactly null controllable:
⎧ 

⎪ u − u + v = 0 in (0, +∞) × ,
⎨ 
v − v + u = 0 in (0, +∞) × ,
(8.129)

⎪ u=v=0 on (0, +∞) × 0

u = h, v = 0 on (0, +∞) × 1 ,

1
where N = 2, M = 1, D = and h is a boundary control.
0
First, by means of Theorem 3.10, system (8.129) is never exactly null controllable
in the space (H0 )2 × (H−1 )2 because of the lack of boundary controls.
On the other hand, since its adjoint problem (8.128) is D-observable via the obser-
vation of the trace ∂ν φ|1 , then, applying Theorem 8.6, system (8.129) is approxi-
mately null controllable in the space (L 2 ())2 × (H −1 ())2 via only one boundary
control h ∈ L 2 (0, T ; L 2 (1 )), provided that T is large enough.
Chapter 9
Approximate Boundary Synchronization

The approximate boundary synchronization is defined and studied in this chapter for
system (I) with Dirichlet boundary controls.

9.1 Definition

We now give the definition on the approximate boundary synchronization as


follows.

Definition 9.1 System (I) is approximately synchronizable at the time T > 0, if for
0 , U
any given initial data (U 1 ) ∈ (H0 ) N × (H−1 ) N , there exists a sequence {Hn } of
boundary controls in L with compact support in [0, T ], such that the corresponding
M

sequence {Un } of solutions to problem (I) and (I0) satisfies

u (k) (l)
n − un → 0 as n → +∞ (9.1)

for all 1  k, l  N in the space


0
Cloc ([T, +∞); H0 ) ∩ Cloc
1
([T, +∞); H−1 ). (9.2)

Let C1 be the synchronization matrix of order (N − 1) × N , defined by


⎛ ⎞
1 −1
⎜ 1 −1 ⎟
C1 = ⎜

⎟.
⎠ (9.3)
· ·
1 −1

© Springer Nature Switzerland AG 2019 105


T. Li and B. Rao, Boundary Synchronization for Hyperbolic Systems,
Progress in Nonlinear Differential Equations and Their Applications 94,
https://doi.org/10.1007/978-3-030-32849-8_9
106 9 Approximate Boundary Synchronization

Obviously, the approximate boundary synchronization (9.1) can be equivalently


rewritten as
C1 Un → 0 as n → +∞ (9.4)

in the space
0
Cloc ([T, +∞); (H0 ) N −1 ) ∩ Cloc
1
([T, +∞); (H−1 ) N −1 ). (9.5)

9.2 Condition of C1 -Compatibility

Theorem 9.2 Assume that system (I) is approximately synchronizable, but not
approximately null controllable. Then the coupling matrix A = (ai j ) should satisfy
the following condition of compatibility (the row-sum condition):


N
akp := a, k = 1, · · · , N , (9.6)
p=1

where a is a constant independent of k = 1, · · · , N .

Proof Let (U 0 , U
1 ) ∈ (H0 ) N × (H−1 ) N , and let {Hn } be a sequence of boundary
controls, which realize the approximate boundary synchronization of system (I). Let
{Un } be the sequence of the corresponding solutions. We have

Un − Un + AUn = 0 in (T, +∞) × . (9.7)

Noting Un = (u (1) (N ) T
n , · · · , u n ) , we have


N
u (k)
n − u (k)
n + akp u n( p) = 0 in (T, +∞) × , 1  k  N . (9.8)
p=1

Let wn(k) = u (k) (N )


n − un for 1  k  N − 1. It follows from (9.8) that

N −1



N
)
wn(k) − wn(k) + akp wn( p) + akp − a u (N
n = 0. (9.9)
p=1 p=1

If (9.6) fails, then, noting (9.1), it follows that




u (N
n
)
→ 0 in D (T, +∞) ×  as n → +∞. (9.10)

In order to avoid the technical details of the proof, here we claim (cf. Corollary 10.15
below) that the convergence (9.10) holds actually in the usual space
9.2 Condition of C1 -Compatibility 107

0
Cloc ([T, +∞); H0 ) ∩ Cloc
1
([T, +∞); H−1 ), (9.11)

which contradicts the non-approximate boundary null controllability. The proof is


complete. 

Remark 9.3 The condition of compatibility (9.6) is just the same as (4.5) for the
exact boundary synchronization.

The condition of compatibility (9.6) indicates that e1 = (1, · · · , 1)T is an eigen-


vector of the coupling matrix A, associated with the eigenvalue a given by (9.6).
Moreover, since Ker(C1 ) = Span{e1 }, this condition is equivalent to

AKer(C1 ) ⊆ Ker(C1 ), (9.12)

namely, Ker(C1 ) is an invariant subspace of A. We call (9.12) the condition of


C1 -compatibility, which is also equivalent to the existence of a unique matrix A1
of order (N − 1), such that
C 1 A = A1 C 1 , (9.13)

where the matrix A1 is called the reduced matrix of A by C1 .

9.3 Fundamental Properties

Under the condition of C1 -compatibility (9.12), let

W1 = (w (1) , · · · , w (N −1) )T = C1 U. (9.14)

The original problem (I) and (I0) for the variable U can be reduced to the following
self-closed problem for the variable W :

⎨ W1 − W1 + A1 W1 = 0 in (0, +∞) × ,
W =0 on (0, +∞) × 0 , (9.15)
⎩ 1
W1 = C 1 D H on (0, +∞) × 1

with the initial data

t =0: 0 , W1 = C1 U
W1 = C 1 U 1 in . (9.16)

Accordingly, let
1 = (ψ (1) , · · · , ψ (N −1) )T . (9.17)

Consider the adjoint problem of the reduced system (9.15)


108 9 Approximate Boundary Synchronization
⎧ T
⎨ 1 − 1 + A1 1 = 0 in (0, +∞) × ,
1 = 0 on (0, +∞) × , (9.18)
⎩ 0 , 1 = 
1 in .
t = 0 : 1 = 

In what follows, problem (9.18) will be called the reduced adjoint problem of
system (I).
From Definitions 8.1 and 9.1, we immediately get the following
Lemma 9.4 Assume that the coupling matrix A satisfies the condition of
C1 -compatibility (9.12). Then system (I) is approximately synchronizable at the
time T > 0 if and only if the reduced system (9.15) is approximately null controllable
at the time T > 0, or equivalently (cf. Theorem 8.6), the reduced adjoint problem
(9.18) is C1 D-observable on the time interval [0, T ].
Moreover, we have
Lemma 9.5 Under the condition of C1 -compatibility (9.12), assume that system (I)
is approximately synchronizable, then we necessarily have

rank(C1 D, C1 AD, · · · , C1 A N −1 D) = N − 1. (9.19)

Proof By Lemma 9.4 and Theorem 8.9, we have


N −2
rank(C1 D, A1 C1 D, · · · , A1 C1 D) = N − 1. (9.20)

Then, noting (9.13) and using Proposition 2.16, we get immediately (9.19). 

9.4 Properties Related to the Number of Total Controls

As we have already explained in Sect. 8.3, the rank of the enlarged matrix
(D, AD, · · · , A N −1 D) measures the number of total (direct and indirect) controls.
So, it is significant to determine the minimal number of total controls, which is nec-
essary to the approximate boundary synchronization of system (I), no matter whether
the condition of C1 -compatibility (9.12) is satisfied or not.
Theorem 9.6 Assume that system (I) is approximately synchronizable under the
action of a boundary control matrix D. Then we necessarily have

rank(D, AD, · · · , A N −1 D)  N − 1. (9.21)

In other words, at least (N − 1) total controls are needed in order to realize the
approximate boundary synchronization of system (I).
Proof First, assume that A satisfies the condition of C1 -compatibility (9.12). By
Lemma 9.5, we have (9.19). Next, assume that A does not satisfy the condition of
9.4 Properties Related to the Number of Total Controls 109

C
1 -compatibility
 (9.12). By Proposition 2.15, we have C1 Ae1 = 0. So, the matrix
C1
is of full column-rank. By (9.4), it is easy to see from problem (I) and (I0)
C1 A
with U = Un and H = Hn that

C1 Un → 0 and C1 AUn → 0 in (D ((T, +∞) × )) N −1 (9.22)

as n → +∞. Then we have

Un → 0 in (D ((T, +∞) × )) N as n → +∞. (9.23)

We claim that in this case we have

rank(D, AD, · · · , A N −1 D) = N . (9.24)

Otherwise, let d  1 be such that

rank(D, AD, · · · , A N −1 D) = N − d. (9.25)

By the assertion (ii) of Proposition 2.12, there exists a nontrivial subspace of A T ,


which is contained in Ker(D T ) and invariant for A T , therefore, there exists a nontrivial
vector E and a number λ ∈ C, such that

A T E 1 = λE 1 and D T E 1 = 0. (9.26)

Applying E to problem (I) and (I0) with U = Un and H = Hn , and noting φ =


(E, Un ), it follows that
⎧ 
⎨ φ − φ + λφ = 0 in (0, +∞) × ,
φ=0 on (0, +∞) × , (9.27)
⎩ 0 ), φ = (E 1 , U
1 ) in .
t = 0 : φ = (E 1 , U

Noting that φ is obviously independent of n if the initial data is chosen so that


0 ) = 0 or (E 1 , U
(E 1 , U 1 ) = 0, then φ ≡ 0 which contradicts (9.23).
Finally, combining (9.19) and (9.24), we get the rank condition (9.21). The proof
is then achieved. 
According to Theorem 9.6, it is natural to consider the approximate boundary
synchronization of system (I) with the minimal number (N − 1) of total controls. In
this case, the coupling matrix A should possess some fundamental properties related
to the synchronization matrix C1 .
Theorem 9.7 Assume that system (I) is approximately synchronizable under the
minimal rank condition

rank(D, AD, · · · , A N −1 D) = N − 1. (9.28)


110 9 Approximate Boundary Synchronization

Then we have the following assertions:


(i) The coupling matrix A satisfies the condition of C1 -compatibility (9.12).
(ii) There exists a scalar function u as the approximately synchronizable state,
such that
u (k)
n → u as n → +∞ (9.29)

for all 1  k  N in the space


0
Cloc ([T, +∞); H0 ) ∩ Cloc
1
([T, +∞); H−1 ). (9.30)

Moreover, the approximately synchronizable state u is independent of the sequence


{Hn } of applied controls.
(iii) The transpose A T of the coupling matrix A admits an eigenvector E 1 such
that (E 1 , e1 ) = 1, where e1 = (1, · · · , 1)T is the eigenvector of A, associated with
the eigenvalue a given by (9.6).

Proof (i) If A does not satisfy the condition of C1 -compatibility (9.13), then the
minimal rank condition (9.28) makes the rank condition (9.24) impossible.
(ii) By the assertion (ii) of Proposition 2.12 with d = 1, the rank condition (9.28)
implies the existence of one-dimensional invariant subspace of A T , contained in
Ker(D T ), therefore, there exist a vector E 1 ∈ Ker(D T ) and a number b ∈ C, such
that
E 1T D = 0 and A T E 1 = bE 1 . (9.31)

Applying E 1 to problem (I) and (I0) with U = Un and H = Hn , and setting φ =


(E 1 , Un ), it follows that
⎧ 
⎨ φ − φ + bφ = 0 in (0, +∞) × ,
φ=0 on (0, +∞) × , (9.32)
⎩ 0 ), φ = (E 1 , U
1 ) in .
t = 0 : φ = (E 1 , U

Clearly, φ is independent of n and of applied control Hn . Moreover, by (9.4) we have


     
C1 C 1 Un 0
U = → as n → +∞ (9.33)
E 1T n
(E 1 , Un ) φ

in the space
0
Cloc ([T, +∞); (H0 ) N ) ∩ Cloc
1
([T, +∞); (H−1 ) N ). (9.34)
 
C1
We will show that the matrix is invertible, then it follows that
E 1T
 −1  
C1 0
Un → =: U as n → +∞ (9.35)
E 1T φ
9.4 Properties Related to the Number of Total Controls 111

in the space
0
Cloc ([T, +∞); (H0 ) N ) ∩ Cloc
1
([T, +∞); (H−1 ) N ). (9.36)

Noting that
Ker(C1 ) = Span(e1 ) with e1 = (1, · · · , 1)T , (9.37)

it follows from (9.35) that there exists a scalar function u such that U = ue1 , thus,
we get (9.29). Since φ is independent of applied control Hn , so is u. Moreover,
noting (9.35), we have u ≡ 0 for all initial data (U 1 ) such that (E 1 , U
0 , U 0 ) ≡ 0 or
1 ) ≡ 0.
(E 1 , U  
C1
We now show that the matrix is invertible. In fact, assume that there exists
E 1T
a vector x ∈ C N , such that  
C1
xT = 0. (9.38)
E 1T

Then, applying x to (9.33), it is easy to see that


 
0
xT = 0. (9.39)
φ

0 , U
Clearly, φ ≡ 0 at least for an initial data (U 1 ). Then the last component of x must
be zero. We can thus write
 

x
x= with  x ∈ C N −1 . (9.40)
0

Then, it follows from (9.38) that 


x T C1 = 0. But C1 is of full row-rank, so 
x = 0,
therefore x = 0.
(iii) By (9.33) and noting U = ue1 , we get

φ = (E 1 , e1 )u. (9.41)

0 , U
Since φ ≡ 0 at least for an initial data (U 1 ), we get (E 1 , e1 ) = 0. Without loss
of generality, we may take (E 1 , e1 ) = 1. It follows that

a(E 1 , e1 ) = (E 1 , Ae1 ) = (A T E 1 , e1 ) = b(E 1 , e1 ). (9.42)

So b = a is a real number, and E 1 is a real eigenvector of A T , associated with the


eigenvalue a given by (9.6). The proof is then complete. 

Remark 9.8 What is surprising is the existence of the approximately synchronizable


state u under the minimal rank condition (9.28). Moreover, by Theorem 8.9, system
0 , U
(I) is not approximately null controllable. Then, at least for an initial data (U 1 ),
the approximately synchronizable state u ≡ 0. In this case, system (I) is called to be
112 9 Approximate Boundary Synchronization

approximately synchronizable in the pinning sense, while that originally given by


Definition 9.1 is in the consensus sense.
Remark 9.9 The rank condition (9.28) indicates that the number of total controls
is equal to (N − 1), but the state variable U of system (I) has N components,
so, there exists a direction E 1 , on which the projection (E 1 , Un ) of the solution
Un is independent of (N − 1) controls, therefore, (E 1 , Un ) converges in the space
0
Cloc ([0, +∞); H0 ) ∩ Cloc
1
([0, +∞); H−1 ) as n → +∞. Moreover, the necessity of
the condition of C1 -compatibility (9.12) becomes also a consequence of the minimal
value of the number of applied total controls.
Remark 9.10 By Lemma 9.5, under the condition of C1 -compatibility (9.12), for
the approximate boundary synchronization of system (I), we have (9.19), which
shows that the reduced system (9.15) is still submitted to (N − 1) total controls.
In other words, as we transform system (I) into system (9.15), only the number of
equations is reduced from N to (N − 1), but the number (N − 1) of the total controls
remains always unchanged, so that we get a reduced system of (N − 1) equations
still submitted to (N − 1) total controls, that is just what we want.
Remark 9.11 By Definition 2.3, the assertion (iii) means that the subspace Span{E 1 }
is bi-orthonormal to the subspace Span{e1 }. Then, by Proposition 2.5 we have

Span{e1 } ∩ (Span{E 1 })⊥ = {0}. (9.43)

Thus, it follows from Proposition 2.2 that Span{E 1 }⊥ is a supplement of Span{e1 }.


Moreover, since Span{E 1 } is an invariant subspace of A T , by Proposition 2.7,
Span{E 1 }⊥ is invariant for A. Therefore, A is diagonalizable by blocks according to
the decomposition Span{e1 } ⊕ Span{E 1 }⊥ .
Theorem 9.12 Let A satisfy the condition of C1 -compatibility (9.12). Assume that
A T admits an eigenvector E 1 such that (E 1 , e1 ) = 1 with e1 = (1, · · · , 1)T . Then
there exists a boundary control matrix D satisfying the minimal rank condition (9.28),
which realizes the approximate boundary synchronization of system (I).
Proof By Lemma 9.4, under the condition of C1 -compatibility (9.12), the approxi-
mate boundary synchronization of system (I) is equivalent to the C1 D-observability
of the reduced adjoint problem (9.18).
Let D be the following full column-rank matrix defined by

Ker(D T ) = Span{E 1 }. (9.44)

Since Span{E 1 } is the sole subspace contained in Ker(D T ) and invariant for A T ,
applying the assertion (ii) of Lemma 2.12 with d = 1, we get the rank condition
(9.28). On the other hand, the assumption (E 1 , e1 ) = 1 implies that Ker(C1 ) ∩
Im(D) = {0}, therefore, by Proposition 2.4 we have

rank(C1 D) = rank(D) = N − 1. (9.45)


9.4 Properties Related to the Number of Total Controls 113

Thus, the C1 D-observation of the reduced adjoint problem (9.18) becomes the com-
plete observation:
∂ν  ≡ 0 on [0, T ] × 1 , (9.46)

which implies well  ≡ 0 because of Holmgren’s uniqueness Theorem (cf. Theorem


8.2 in [62]), provided that T > 0 is large enough. The proof is then complete. 

Remark 9.13 Since the matrix D given by (9.44) is of rank (N − 1), Theorem 9.12
shows the approximate boundary synchronization of system (I) by means of (N − 1)
direct boundary controls. However, we are more interested in using fewer direct
boundary controls in practice. We will give later in Theorem 10.10 a matrix D with
the minimal rank, such that Kalman’s criteria (9.28) and (9.19) are simultaneously
satisfied. We have pointed out in Chap. 8 that Kalman’s criterion (9.19) is indeed
sufficient to the approximate boundary null controllability of the reduced system
(9.15), then to the approximate boundary synchronization of the original system (I),
for some special reduced systems such as the nilpotent system, 2 × 2 systems with
the small spectral gap condition (8.74) and some one-space-dimensional systems,
provided that T > 0 is large enough.

Remark 9.14 Let D1 be the set of all boundary control matrices D, which realize the
approximate boundary synchronization of system (I). Define the minimal number
N1 of total controls for the approximate boundary synchronization of system (I):

N1 = inf rank(D, AD, · · · , A N −1 D). (9.47)


D∈D1

Let Va denote the subspace of all the eigenvectors of A T , associated to the eigenvalue
a given by (9.6). By the results obtained in Theorems 9.6, 9.7 and 9.12, we have

N − 1, if A is C1 -compatible and (E 1 , e1 ) = 1,
N1 = (9.48)
N , if A is C1 -compatible but (E 1 , e1 ) = 0, ∀E 1 ∈ Va .

Example 9.15 Consider the following system:


⎧ 

⎪ u − u + v = 0 in (0, +∞) × ,
⎨ 
v − v − u + 2v = 0 in (0, +∞) × ,
(9.49)

⎪ u=v=0 on (0, +∞) × 0 ,

u = αh, v = βh on (0, +∞) × 1 .

Let   
0 1 α
A= , D= (9.50)
−1 2 β

and
C1 = (1, −1), Ker(C1 ) = Span{e1 } with e1 = (1, 1)T . (9.51)
114 9 Approximate Boundary Synchronization

Clearly, A satisfies the condition of C1 -compatibility with A1 = 1.


We point out that the only eigenvector E 1 = (1, −1)T of A T satisfies (E 1 , e1 ) = 0.
By Theorems 9.6 and 9.7, we have to use two (instead of one!) total controls to realize
the approximate boundary synchronization of system (9.49). More precisely, we have
 
α β
(D, AD) = , det(D, AD) = −(α − β)2 (9.52)
β 2β − α

and
(C1 D, C1 AD) = (α − β, α − β). (9.53)

It is easy to see that

rank(C1 D, C1 AD) = 1 ⇐⇒ rank(D, AD) = 2. (9.54)

By Corollary 8.7, the corresponding reduced system (9.15) (with A1 = 1):


⎧ 
⎨ w − w + w = 0 in (0, +∞) × ,
w=0 on (0, +∞) × 0 , (9.55)

w = (α − β)h on (0, +∞) × 1

is approximately null controllable as α = β, so the original system (9.49) is approx-


imately synchronizable, provided that α = β and T > 0 is large enough.
On the other hand, system (9.49) satisfies well the gap condition (8.74). By The-
orem 8.18, Kalman’s criterion rank(D, C D) = 2 is sufficient for its approximate
boundary null controllability, provided that T > 0 is large enough. So, when α = β,
we are required to use 2 total controls and then to get a better result that system (9.49)
is actually approximately null controllable under the action of the same boundary
controls.
Chapter 10
Approximate Boundary Synchronization
by p-Groups

The approximate boundary synchronization by p-groups is introduced and studied


in this chapter for system (I) with Dirichlet boundary controls.

10.1 Definition

In this chapter, we will consider the approximate boundary synchronization by


p-groups.
Let p  1 be an integer and let

0 = n0 < n1 < n2 < · · · < n p = N (10.1)

be integers such that n r − n r −1  2 for all 1  r  p. We divide the components of


U = (u (1) , · · · , u (N ) )T into p groups as

(u (1) , · · · , u (n 1 ) ), (u (n 1 +1) , · · · , u (n 2 ) ), · · · , (u (n p−1 +1) , · · · , u (n p ) ). (10.2)

Definition 10.1 System (I) is approximately synchronizable by p-groups at the


0 , U
time T > 0 if for any given initial data (U 1 ) ∈ (H0 ) N × (H−1 ) N , there exists
a sequence {Hn } of boundary controls in L with compact support in [0, T ], such
M

that the corresponding sequence {Un } of solutions to problem (I) and (I0) satisfies
the following condition:

u (k) (l)
n − u n → 0 in Cloc ([T, +∞); H0 ) ∩ Cloc ([T, +∞); H−1 )
0 1
(10.3)

as n → +∞ for all n r −1 + 1  k, l  n r and 1  r  p.

© Springer Nature Switzerland AG 2019 115


T. Li and B. Rao, Boundary Synchronization for Hyperbolic Systems,
Progress in Nonlinear Differential Equations and Their Applications 94,
https://doi.org/10.1007/978-3-030-32849-8_10
116 10 Approximate Boundary Synchronization by p-Groups

Let Sr be the following (n r − n r −1 − 1) × (n r − n r −1 ) full row-rank matrix:


⎛ ⎞
1 −1
⎜ 1 −1 ⎟
⎜ ⎟
Sr = ⎜ . . ⎟, 1  r  p (10.4)
⎝ .. .. ⎠
1 −1

and let C p be the following (N − p) × N matrix of synchronization by p-groups:


⎛ ⎞
S1
⎜ S2 ⎟
⎜ ⎟
Cp = ⎜ .. ⎟. (10.5)
⎝ . ⎠
Sp

Clearly, the approximate boundary synchronization by p-groups (10.3) can be


equivalently rewritten in the following form:

C p Un → 0 as n → +∞ (10.6)

in the space
0
Cloc ([T, +∞); (H0 ) N − p ) ∩ Cloc
1
([T, +∞); (H−1 ) N − p ). (10.7)

10.2 Fundamental Properties

For r = 1, · · · , p, setting

1, n r −1 + 1  i  n r ,
(er )i = (10.8)
0, otherwise,

it is clear that
Ker(C p ) = Span{e1 , e2 , · · · , e p }. (10.9)

We will say that A satisfies the condition of C p -compatibility if Ker(C p ) is an


invariant subspace of A:
AKer(C p ) ⊆ Ker(C p ), (10.10)

or equivalently, by Proposition 2.15, there exists a matrix A p of order (N − p), such


that
C p A = A pC p. (10.11)

A p is called the reduced matrix of A by C p .


10.2 Fundamental Properties 117

Under the condition of C p -compatibility, setting

W p = C pU (10.12)

and noting (10.11), from problem (I) and (I0) we get the following self-closed
reduced system:

⎪ 
⎨W p − W p + A p W p = 0 in (0, +∞) × ,
Wp = 0 on (0, +∞) × 0 , (10.13)


Wp = Cp D H on (0, +∞) × 1

with the initial data:

t =0: 0 , W p = C p U
W p = C pU 1 in . (10.14)

Obviously, we have the following


Lemma 10.2 Under the condition of C p -compatibility (10.10), system (I) is approx-
imately synchronizable by p-groups at the time T > 0 if and only if the reduced
system (10.13) is approximately null controllable at the time T > 0, or equivalently
(cf. Theorem 8.6) if and only if the adjoint problem of the reduced system (10.13)
⎧ T
⎨  p −  p + A p  p = 0 in (0, +∞) × ,
p = 0 on (0, +∞) × , (10.15)
⎩  p0 ,  p = 
 p1 in ,
t = 0 : p = 

called the reduced adjoint problem, is C p D-observable on the time interval [0, T ],
namely, the partial observation

(C p D)T ∂ν  p ≡ 0 on [0, T ] × 1 (10.16)

 p0 = 
implies   p1 ≡ 0, then  p ≡ 0.
Thus, we have
Lemma 10.3 Under the condition of C p -compatibility (10.10), assume that system
(I) is approximately synchronizable by p-groups, we necessarily have the following
Kalman’s criterion:

rank(C p D, C p AD, · · · , C p A N −1 D) = N − p. (10.17)

Proof From Lemma 10.2 and Theorem 8.9, we get


N − p−1
rank(C p D, A p C p D, · · · , A p C p A N −1 D) = N − p. (10.18)

Thus, noting (10.11), (10.17) follows from Proposition 2.16. 


118 10 Approximate Boundary Synchronization by p-Groups

10.3 Properties Related to the Number of Total Controls

The following result indicates the lower bound on the number of total controls nec-
essary to the approximate boundary synchronization by p-groups for system (I), no
matter whether the condition of C p -compatibility (10.10) is satisfied or not.
Theorem 10.4 Assume that system (I) is approximately synchronizable by p-groups.
Then we necessarily have

rank(D, AD, · · · , A N −1 D)  N − p. (10.19)

In other words, at least (N − p) total controls are needed to realize the approximate
boundary synchronization by p-groups for system (I).

Proof Keeping in mind that the condition of C p -compatibility (10.10) is not assumed
to be satisfied a priori, we define an (N − p̃) × N full row-rank matrix
p̃ (0 p̃ p) by
C

Tp̃ ) = Span(C Tp , A T C Tp , · · · , (A T ) N −1 C Tp ).
Im(C (10.20)

T ) ⊆ Im(C
By Cayley–Hamilton’s theorem, we have A T Im(C T ), or equivalently,
p̃ p̃

p̃ ) ⊆ Ker(C
AKer(C p̃ ). (10.21)

p̃ of order (N − p̃), such that


By Proposition 2.15, there exists a matrix A

C p̃ C
p̃ A = A p̃ . (10.22)

Then, applying C p to the equations in system (I) with U = Un and H = Hn , by


(10.6) it is easy to see that for any given integer l  0, we have

C p Al Un → 0 in (D ((T, +∞) × )) N − p as n → +∞, (10.23)

then
p̃ Un → 0 in (D ((T, +∞) × )) N − p̃
C as n → +∞. (10.24)

We claim that

rank(C p̃ C
p̃ D, A N − p̃−1 C
p̃ D, · · · , A p̃ D)  N − p̃. (10.25)

p̃ is of order (N − p̃), then by the assertion (i) of Proposition


Otherwise, noting that A
2.12 with d = 0, there exists a nontrivial invariant subspace of A T , contained in

p̃ D)T . Therefore, there exists a nonzero vector E ∈ C N − p̃ and a number λ ∈ C,
Ker(C
such that
Tp̃ E = λE and (C
A p̃ D)T E = 0. (10.26)
10.3 Properties Related to the Number of Total Controls 119

Applying CT E to problem (I) and (I0) with U = Un and H = Hn , and noting

p̃ Un ), it is easy to see that
φ = (E, C
⎧ 
⎨ φ − φ + λφ = 0 in (0, +∞) × ,
φ=0 on (0, +∞) × , (10.27)
⎩ p̃ U p̃ U
0 ), φ  = (E, C 1 ) in .
t = 0 : φ = (E, C

p̃ U
If the initial data are chosen such that (E, C 0 ) = 0 or (E, Cp̃ U
1 ) = 0, then φ ≡ 0.
Noting that φ is independent of n, this contradicts (10.24).
Finally, by Proposition 2.16, under the condition of C p̃ -compatibility (10.22), the
rank condition (10.25) yields

rank(D, AD, · · · , A N −1 D) (10.28)


p̃ D, C
rank(C p̃ AD, · · · , C
p̃ A N −1 D)
p̃ D, A
=rank(C p̃ C N − p̃−1 C
p̃ D, · · · , A p̃ D)  N − p̃,

which leads to (10.19) because of p̃  p. The proof is then complete. 

According to Theorem 10.4, it is natural to consider the approximate boundary


synchronization by p-groups for system (I) with the minimal number (N − p) of total
controls. In this case, the coupling matrix A should possess some basic properties
related to C p , the matrix of synchronization by p-groups.

Theorem 10.5 Assume that system (I) is approximately synchronizable by p-groups


under the minimal rank condition

rank(D, AD, · · · , A N −1 D) = N − p. (10.29)

Then we have the following assertions:


(i) There exist some linearly independent scalar functions u 1 , u 2 , · · · , u p such
that
u (k)
n → u r as n → +∞ (10.30)

for all nr −1 + 1  k  n r and 1  r  p in the space


0
Cloc ([T, +∞); H0 ) ∩ Cloc
1
([T, +∞); H−1 ). (10.31)

(ii) The coupling matrix A satisfies the condition of C p -compatibility (10.10).


(iii) A T admits an invariant subspace, which is contained in Ker(D T ) and bi-
orthonormal to Ker(C p ).

Proof (i) By the assertion (ii) of Proposition 2.12 with d = p, the rank condition
(10.29) guarantees the existence of an invariant subspace V of A T , with dimension
p and contained in Ker(D T ). Let {E 1 , · · · , E p } be a basis of V , such that
120 10 Approximate Boundary Synchronization by p-Groups


p
A T Er = αr s E s and D T Er = 0, 1  r  p. (10.32)
s=1

Applying Er to problem (I) and (I0) with U = Un and H = Hn , and setting


φr = (Er , Un ) for r = 1, · · · , p, we get
p
φr − φr + s=1 αr s φs = 0 in (0, +∞) × ,
(10.33)
φr = 0 on (0, +∞) × 

with the initial data

t =0: 0 ), φr = (Er , U


φr = (Er , U 1 ) in . (10.34)

Clearly, φ1 , · · · , φr are independent of n and of applied controls Hn . Moreover, by


(10.6) we get
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
Cp C p Un 0
⎜ E 1T ⎟ ⎜ (E 1 , Un ) ⎟ ⎜ φ1 ⎟
⎜ ⎟ ⎜ ⎟ ⎜ ⎟
⎜ .. ⎟ Un = ⎜ .. ⎟ → ⎜ .. ⎟ as n → +∞ (10.35)
⎝ . ⎠ ⎝ . ⎠ ⎝ . ⎠
E Tp (E p , Un ) φp

0
in the space Cloc ([T, +∞); (H0 ) N ) ∩ Cloc
1
([T, +∞); (H−1 ) N ).
We claim that the matrix ⎛ ⎞
Cp
⎜ E 1T ⎟
⎜ ⎟
⎜ .. ⎟ (10.36)
⎝ . ⎠
E Tp

is invertible. Then,
⎛ ⎞−1 ⎛ ⎞
Cp 0
⎜ E 1T ⎟ ⎜ φ1 ⎟
⎜ ⎟ ⎜ ⎟
Un → ⎜ .. ⎟ ⎜ . ⎟ := U (10.37)
⎝ . ⎠ ⎝ .. ⎠
E Tp φp

0
in Cloc ([T, +∞); (H0 ) N ) ∩ Cloc
1
([T, +∞); (H−1 ) N ) as n → +∞. Since C p U = 0
because of (10.35) and (10.37), noting (10.9), there exist u r (r = 1, · · · , p) such that


p
U= u r er . (10.38)
r =1

Then, by the expression of er given in (10.8), the convergence (10.30) follows from
(10.37).
10.3 Properties Related to the Number of Total Controls 121

Now we go back to show that the matrix (10.36) is indeed invertible. In fact,
assume that there exists a vector x ∈ R N , such that
⎛ ⎞
Cp
⎜ E 1T ⎟
⎜ ⎟
x T ⎜ .. ⎟ = 0. (10.39)
⎝ . ⎠
E Tp

Let  

x
x= x ∈ C N − p and 
with  x ∈ Cp. (10.40)

x

Applying x to (10.35), it is easy to see that


⎞ ⎛
φ1
⎜ ⎟
x T ⎝ ... ⎠ = 0.
tT :  (10.41)
φp

Now for r = 1, · · · , p, let us take the initial data (10.34) as

t =0: φr = θr , φr = 0 in . (10.42)

Then, it follows from problem (10.33) and (10.34) that

(θ1 , · · · , θ p ) → (φ1 (T ), · · · , φ p (T )) (10.43)

defines an isomorphism of (H0 ) p . Thus, as (θ1 , · · · , θ p ) vary in (H0 ) p , the state


variable (φ1 (T ), · · · , φ p (T )) will run through the space (H0 ) p . Thus, it follows from
(10.41) that x T = 0, then by (10.39) we get  x T C p = 0. But C p is of full row-rank,
we have x = 0, then x = 0.
(ii) Applying C p to the equations in system (I) with U = Un and H = Hn , and
passing to the limit as n → +∞, it follows easily from (10.38) that


p
tT : u r C p Aer = 0. (10.44)
r =1

On the other hand, noting (10.35) and (10.38), we have


p
tT : φr = (Er , es )u s , r = 1, · · · , p. (10.45)
s=1

Since the state variable (φ1 (T ), · · · , φ p (T )) of system (10.33) runs through the
space (H0 ) p as (θ1 , · · · , θ p ) given in the initial data (10.42) vary in (H0 ) p , so is the
122 10 Approximate Boundary Synchronization by p-Groups

approximately synchronizable state by p-groups (u 1 , · · · , u p )T . In particular, the


functions u 1 , · · · , u p are linearly independent, then

C p Aer = 0, 1  r  p, (10.46)

namely, AKer(C p ) ⊆ AKer(C p ). We get thus (10.10).


(iii) Noting that Span{E 1 , · · · , E p } and Ker(C p ) have the same dimension,
and that {Ker(C p )}⊥ = Im(C Tp ), by Propositions 2.4 and 2.5, in order to show
that Ker(C p ) is bi-orthonormal to Span{E 1 , · · · , E p }, it is sufficient to show that
Span{E 1 , · · · , E p } ∩ Im(C Tp ) = {0}.
Let E ∈ Span{E 1 , · · · , E p } ∩ Im(C Tp ) be a nonzero vector. There exist some coef-
ficients α1 , · · · , α p and a nonzero vector r ∈ R N − p , such that


p
E= αr Er and E = C Tp r. (10.47)
r =1

Let

p

p
φ = (E, Un ) = αr (Er , Un ) = αr φr , (10.48)
r =1 r =1

where (φ1 , · · · , φ p ) is the solution to problem (10.33) and (10.34) with the homoge-
neous boundary condition, therefore, independent of n. In particular, for any given
nonzero initial data (θ1 , · · · , θ p ), the corresponding solution φ ≡ 0. On the other
hand, we have
φ = (C Tp r, Un ) = (r, C p Un ). (10.49)

Then, the convergence (10.6) implies that φ goes to zero as n → +∞ in the space
0
Cloc ([T, +∞); H0 ) ∩ Cloc
1
([T, +∞); H−1 ). (10.50)

We get thus a contradiction, which confirms that Span{E 1 , · · · , E p } ∩ Im(C Tp ) =


{0}. The proof is then complete. 

Remark 10.6 Under the rank condition (10.29), the requirement (10.6) actually
implies the existence of the approximately synchronizable state by p-groups
(u 1 , · · · , u p )T , which are independent of applied boundary controls. In this case,
system (I) is approximately synchronizable by p-groups in the pinning sense, while
that originally given by Definition 10.1 is in the consensus sense.

Remark 10.7 The rank condition (10.29) indicates that the number of total controls
is equal to (N − p), but the state variable U of system (I) has N independent com-
ponents, so if system (I) is approximately synchronizable by p-groups, there should
exist p directions E 1 , · · · , E p , on which the projections (E 1 , Un ), · · · , (E p , Un ) of
the solution Un to problem (I) and (I0) are independent of the (N − p) controls (cf.
10.3 Properties Related to the Number of Total Controls 123

(10.33)), therefore, converge as n → +∞. Consequently, we get the necessity of the


condition of C p -compatibility (10.10) in this situation.

Remark 10.8 Noting that the rank condition (10.29) indicates that the reduced sys-
tem (10.13) of (N − p) equations is still submitted to (N − p) total controls, which
is necessary for the corresponding approximate boundary null controllability. In
other words, the number of total controls is not reduced. The procedure of reduction
from system (I) to the reduced system (10.13) reduces only the number of equations,
but not the number of total controls, so that we get a reduced system of (N − p)
equations submitted to (N − p) total controls, that is just what we want.

Remark 10.9 Since the invariant subspace Span{E 1 , · · · , E p } of A T is


bi-orthonormal to the invariant subspace Span{e1 , · · · , e p } of A, by Proposition 2.8,
the invariant subspace Span{E 1 , · · · , E p }⊥ of A is a supplement of Span{e1 , · · · , e p }.
Therefore, A is diagonalizable by blocks under the decomposition Span
{e1 , · · · , e p } ⊕ Span{E 1 , · · · , E p }⊥ .

Theorem 10.10 Assume that the coupling matrix A satisfies the condition of C p -
compatibility (10.10) and that Ker(C p ) admits a supplement which is also invariant
for A. Then there exists a boundary control matrix D which satisfies the minimal
rank condition (10.29), and realizes the approximate boundary synchronization by
p-groups for system (I) in the pinning sense.

Proof By Lemma 10.2, under the condition of C p -compatibility (10.10), the approx-
imate boundary synchronization by p-groups of system (I) is equivalent to the C p D-
observability of the reduced adjoint problem (10.15).
Let W ⊥ be a supplement of Ker(C p ), which is invariant for A. By Proposition
2.8, the subspace W is invariant for A T and bi-orthonormal to Ker(C p ).
Define the boundary control matrix D by

Ker(D T ) = W. (10.51)

Clearly, W is the only invariant subspace of A T , with the maximal dimension p and
contained in Ker(D T ). Then, by the assertion (ii) of Proposition 2.12 with d = p,
the rank condition (10.29) holds for this choice of D. On the other hand, the bi-
orthonormality of W with Ker(C p ) implies that Ker(C p ) ∩ Im(D) = {0}. Therefore,
by Proposition 2.11 we have

rank(D) = rank(C p D) = N − p. (10.52)

Then the C p D-observation (10.16) becomes the complete observation:

∂ν  ≡ 0 on [0, T ] × 1 , (10.53)

which implies well  ≡ 0 because of Holmgren’s uniqueness Theorem (cf. Theorem


8.2 in [62]). The proof is thus complete. 
124 10 Approximate Boundary Synchronization by p-Groups

Remark 10.11 The matrix D defined by (10.51) is of rank (N − p). So, we realize
the approximate boundary synchronization by p-groups for system (I) by means of
(N − p) direct boundary controls. But we are more interested in using fewer direct
boundary controls to realize the approximate boundary synchronization by p-groups
for system (I). We will give later in Proposition 11.14 a matrix D with the minimal
rank, such that the rank conditions (10.29) and (10.17) are simultaneously satisfied.
We point out that, by the results given in Chap. 8, Kalman’s criterion (10.17) is indeed
sufficient for the approximate boundary null controllability of the reduced system
(10.13), then to the approximate boundary synchronization by p-groups for system
(I), for some special reduced systems such as the nilpotent system, 2 × 2 systems
with the gap condition (8.7.4) and some one-space-dimensional systems.
Remark 10.12 Let D p be the set of all matrices D, which realize the approximate
boundary synchronization by p-groups for system (I). Define the minimal number
N p of total controls for the approximate boundary synchronization by p-groups of
system (I) by
N p = inf rank(D, AD, · · · , A N −1 D). (10.54)
D∈D p

Then, summarizing the results obtained in Theorems 10.4, 10.5, and 10.10, we have

Np = N − p (10.55)

if and only if Ker(C p ) is an invariant subspace of A, and A T admits an invariant


subspace which is bi-orthonormal to Ker(C p ), or equivalently, by Proposition 2.8 if
and only if Ker(C p ) admits a supplement V , and both Ker(C p ) and V are invariant
for A.
The following result matches the rank conditions (10.29) and (10.17) with a certain
algebraic property of A with respect to the matrix C p .
Proposition 10.13 Let C p be the matrix of synchronization by p-groups, given by
(10.5). Then the rank conditions (10.29) and (10.17) simultaneously hold for some
boundary control matrix D if and only if A T admits an invariant subspace W which
is bi-orthonormal to Ker(C p ).
Proof By the assertion (ii) of Proposition 2.12 with d = p, the rank condition (10.29)
implies the existence of an invariant subspace W of A T , with the dimension p and
contained in Ker(D T ). It is easy to see that

W ⊆ Ker(D, AD, · · · , A N −1 D)T (10.56)

and
dim(W ) = dim Ker(D, AD, · · · , A N −1 D)T = p, (10.57)

then we have
W = Ker(D, AD, · · · , A N −1 D)T . (10.58)
10.3 Properties Related to the Number of Total Controls 125

By Proposition 2.11, the rank conditions (10.29) and (10.17) imply that

Ker(C p ) ∩ W ⊥ = Ker(C p ) ∩ Im(D, AD, · · · , A N −1 D) = {0}. (10.59)

Since dim(W ) = dim Ker(C p ), by Propositions 2.4 and 2.5, W is bi-orthonormal to


Ker(C p ).
Conversely, assume that W is an invariant subspace of A T , and bi-orthonormal to
Ker(C p ), therefore with dimension p. Define a full column-rank matrix D of order
N × (N − p) by
Ker(D T ) = W. (10.60)

Clearly, W is an invariant subspace of A T , with dimension p and contained in


Ker(D T ). Moreover, the dimension of Ker(D T ) is equal to p. Then by the asser-
tion (ii) of Proposition 2.12 with d = p, the rank condition (10.29) holds. Keeping
in mind that (10.58) remains true in the present situation, it follows that

Ker(C p ) ∩ Im(D, AD, · · · , A N −1 D) = Ker(C p ) ∩ W ⊥ = {0}, (10.61)

which, by Proposition 2.12, implies the rank condition (10.17). The proof is then
complete. 

10.4 Necessity of the Condition of C p -Compatibility

We will continue to discuss the approximate boundary synchronization by p-groups


of system (I) and clarify the necessity of the condition of C p -compatibility.
In Definition 10.1, we should exclude some trivial situations. For this purpose,
we assume that each group is not approximately null controllable.
p̃ introduced by (10.20) satisfies well the
Let us recall that the extension matrix C

condition of C p̃ -compatibility (10.21). Moreover, the convergence

p̃ Un → 0 in (D ((T, +∞) × )) N − p̃ as n → +∞


C (10.62)

p̃ has only (N − p̃) rows, we have


implies (10.25), then, noting that C

p̃ D, A
rank(C p̃ C N − p̃−1 C
p̃ D, · · · , A p̃ D) = N − p̃. (10.63)

Based on these results, we will introduce a stronger version of the approximate


boundary synchronization by p-groups.

Theorem 10.14 Assume that system (I) is approximately synchronizable by p-


groups. Then, for any given initial data (U 1 ) in the space (H0 ) N × (H−1 ) N ,
0 , U
there exists a sequence of boundary controls {Hn } in L 2 (0, +∞; (L 2 (1 )) M ) with
compact support in [0, T ], such that the corresponding sequence {Un } of solutions
126 10 Approximate Boundary Synchronization by p-Groups

to problem (I) and (I0) satisfies

p̃ Un → 0 as n → +∞
C (10.64)

in the space
0
Cloc ([T, +∞); (H0 ) N − p̃ ) ∩ Cloc
1
([T, +∞); (H−1 ) N − p̃ ). (10.65)

Since the proof of this result is quite long, we will give it at the end of this section.

We first consider the special case p = 1.


Corollary 10.15 In the case p = 1, assume that A does not satisfy the condition of
C1 -compatibility. If system (I) is approximately synchronizable, then it is approxi-
mately null controllable. Consequently if system (I) is approximately synchronizable,
but not approximately null controllable, then we necessarily have the condition of
C1 -compatibility.
Proof Since p = 1, the extension matrix C̃ p̃ is of order N and of full row-rank,
therefore invertible. Then the convergence (10.64) gives the approximate boundary
null controllability. 
Now we consider the general case p  1.
Theorem 10.16 Let p̃ be the integer given by (10.20). If system (I) is approximately
synchronizable by p-groups, then it is approximately synchronizable by p̃-groups
under a suitable basis.
Proof Let
0 = ñ 0 < ñ 1 < ñ 2 < · · · < ñ p̃ (10.66)

be an arbitrarily given partition of the integer set {0, 1, · · · , N } such that


ñ r − ñ r −1  2 for all 1  r  p̃.
Denote by C p̃ the corresponding matrix of synchronization by p̃-groups with

Ker(C p̃ ) = {ẽ1 , · · · , ẽ p̃ }. (10.67)

p̃ have the same rank, there exists an invertible matrix P of order N ,
Since C p̃ and C
such that
p̃ = C p̃ P.
C (10.68)

Then, it is easy to see that


p̃ ) = Ker(C p̃ ).
PKer(C (10.69)

Now, setting
 = PU and A
U  = P A P −1 , (10.70)

problem (I) and (I0) can be written into the following form:
10.4 Necessity of the Condition of C p -Compatibility 127


⎨U − U
+ A
U = 0 in (0, +∞) × ,

U =0 on (0, +∞) × 0 , (10.71)

⎩
U = P DH on (0, +∞) × 1

with the initial data

t =0:  = PU
U  = P U
0 , U 1 in . (10.72)

By Theorem 10.14 and noting (10.68), we have

C p̃ U p̃ Un → 0
n = C as n → +∞ (10.73)

in the space
0
Cloc ([T, +∞); (H0 ) N − p̃ ) ∩ Cloc
1
([T, +∞); (H−1 ) N − p̃ ). (10.74)

In other words, under the basis P, system (10.71) is approximately synchronizable


by p̃-groups. 

As a direct consequence, we have

Corollary 10.17 Assume that system (I) is approximately synchronizable by p-


groups, but not approximately synchronizable by p̃-groups with p̃ < p under a basis.
Then, we necessarily have the condition of C p -compatibility (10.10).

Now we return to the proof of Theorem 10.14. We will perceive that the stronger
convergence (10.64) follows from the convergence (10.6) and the assumption that
A does not satisfy the condition of C p -compatibility (10.10). We point out that only
the rank condition (10.63) can not guarantee this convergence.
Let us first generalize Definition 8.1 to weaker initial data.

Definition 10.18 Let m  0 be an integer. System (I) is approximately null control-


lable in the space (H−2m ) N × (H−(2m+1) ) N at the time T > 0, if for any given initial
data (U 1 ) ∈ (H−2m ) N × (H−(2m+1) ) N , there exists a sequence {Hn } of bound-
0 , U
ary controls in L 2 (0, +∞; (L 2 (1 ) M ) with compact support in [0, T ], such that the
corresponding sequence {Un } of solutions to problem (I) and (I0) satisfies

Un → 0 as n → +∞ (10.75)

in the space
0
Cloc ([T, +∞); (H−2m ) N ) ∩ Cloc
1
([T, +∞); (H−(2m+1) ) N ). (10.76)

Correspondingly, we give
128 10 Approximate Boundary Synchronization by p-Groups

Definition 10.19 The adjoint problem (8.7) is D-observable in the space


0 ,
(H2m+1 ) N × (H2m ) N on the time interval [0, T ] if for ( 1 ) ∈ (H2m+1 ) N ×
0 ≡
(H2m ) , the observation (8.8) implies
N 1 ≡ 0, then ≡ 0.
Similarly to Theorem 8.6 for the case m = 0, we can establish the following
Proposition 10.20 System (I) is approximately null controllable in the space
(H−2m ) N × (H−(2m+1) ) N at the time T > 0 if and only if the adjoint problem (8.7)
is D-observable in the space (H2m+1 ) N × (H2m ) N on the time interval [0, T ].
Proposition 10.21 Let m  0 be an integer. System (I) is approximately null con-
trollable in the space (H−2m ) N × (H−(2m+1) ) N if and only if it is approximately null
controllable in the space (H0 ) N × (H−1 ) N .
Proof By Proposition 10.20, it is sufficient to show that adjoint problem (8.7) is
D-observable in the space (H2m+1 ) N × (H2m ) N if and only if it is D-observable in
the space (H1 ) N × (H0 ) N .
Assume that adjoint problem (8.7) is D-observable in the space (H2m+1 ) N ×
(H2m ) N , then the following expression
 T 
0 ,
( 1 ) 2F = |D T ∂ν |2 ddt (10.77)
0 1

defines a Hilbert norm in the space (H2m+1 ) N × (H2m ) N . Let F be the closure of
(H2m+1 ) N × (H2m ) N with respect to the F-norm.
By the hidden regularity obtained for problem (8.7) (cf. [62]), we have
 T 
0 ,
|D T ∂ν |2 ddt  c ( 1 ) 2 N
(H1 ) ×(H0 ) N , (10.78)
0 1

then
(H1 ) N × (H0 ) N ⊆ F. (10.79)

Since problem (8.7) is D-observable in F, therefore, it is still D-observable in its


subspace (H1 ) N × (H0 ) N . The converse is trivial. The proof is complete. 
Remark 10.22 Similar results on the exact boundary controllability can be found in
[12].
Proposition 10.23 Assume that system (I) is approximately synchronizable by p-
0 , U
groups. Then, for any given integer l  0 and any given initial data (U 1 ) ∈
(H0 ) N × (H−1 ) N , we have

C p Al Un → 0 in Cloc
0
([T, +∞); (H−2l ) N − p ) as n → +∞, (10.80)

respectively,

C p Al Un → 0 in Cloc
0
([T, +∞); (H−(2l+1) ) N − p ) as n → +∞. (10.81)
10.4 Necessity of the Condition of C p -Compatibility 129

Proof Note that Un satisfies the homogeneous system



Un − Un + AUn = 0 in (T, +∞) × ,
(10.82)
Un = 0 on (T, +∞) × .

Applying C p Al−1 to system (10.82), it follows that

C p Al Un Cloc
0
([T,+∞);(H−2l ) N − p ) (10.83)
 C p A Un Cloc
l−1
0
([T,+∞);(H−2l ) N − p )

+ C p A Un Cloc
l−1
0
([T,+∞);(H−2l ) N − p )

c C p Al−1 Un Cloc
0
([T,+∞);(H−2(l−1) ) N − p )

cl C p Un Cloc
0
([T,+∞);(H0 ) N − p ) ,

where c > 0 is a positive constant. Similar result can be shown for C p Al Un . The
proof is complete. 

We now give the proof of Theorem 10.14.

(i) Let $(\hat U_0, \hat U_1) \in (H_0)^N \times (H_{-1})^N$. Then, by (10.20) and noting (10.80)–(10.81) with $0 \leqslant l \leqslant N-1$, we get
$$C_{\tilde p}U_n \to 0 \quad\text{in } C^0_{\mathrm{loc}}([T, +\infty); (H_{-2(N-1)})^{N-\tilde p}) \quad\text{as } n \to +\infty, \tag{10.84}$$
respectively,
$$C_{\tilde p}U_n' \to 0 \quad\text{in } C^0_{\mathrm{loc}}([T, +\infty); (H_{-(2N-1)})^{N-\tilde p}) \quad\text{as } n \to +\infty. \tag{10.85}$$

(ii) Let $(\hat U_0, \hat U_1) \in (H_{-2(N-1)})^N \times (H_{-(2N-1)})^N$. By density, there exists a sequence $\{(\hat U_0^m, \hat U_1^m)\}_{m\geqslant 0}$ in $(H_0)^N \times (H_{-1})^N$, such that
$$(\hat U_0^m, \hat U_1^m) \to (\hat U_0, \hat U_1) \quad\text{in } (H_{-2(N-1)})^N \times (H_{-(2N-1)})^N \tag{10.86}$$
as $m \to +\infty$. For each fixed $m$, there exists a sequence of boundary controls $\{H_n^m\}_{n\geqslant 0}$, such that the corresponding sequence $\{U_n^m\}_{n\geqslant 0}$ of solutions to problem (I) and (I0) with the initial data $(\hat U_0^m, \hat U_1^m)$ satisfies
$$C_{\tilde p}U_n^m \to 0 \quad\text{in } C^0_{\mathrm{loc}}([T, +\infty); (H_{-2(N-1)})^{N-\tilde p}) \quad\text{as } n \to +\infty. \tag{10.87}$$

(iii) Let $R$ denote the resolution of problem (I) and (I0):
$$R:\ (\hat U_0, \hat U_1; H_n) \to (U_n, U_n'), \tag{10.88}$$
which is continuous from
$$(H_{-2(N-1)})^N \times (H_{-(2N-1)})^N \times \mathcal L_M \tag{10.89}$$
into
$$C^0_{\mathrm{loc}}([T, +\infty); (H_{-2(N-1)})^N) \cap C^1_{\mathrm{loc}}([T, +\infty); (H_{-(2N-1)})^N). \tag{10.90}$$
Now, for any given $(\hat U_0, \hat U_1) \in (H_{-2(N-1)})^N \times (H_{-(2N-1)})^N$, we write
$$R(\hat U_0, \hat U_1; H_n^m) = R(\hat U_0^m, \hat U_1^m; H_n^m) + R(\hat U_0 - \hat U_0^m, \hat U_1 - \hat U_1^m; 0). \tag{10.91}$$
By well-posedness, for all $0 \leqslant t \leqslant T \leqslant S$ we have
$$\|R(\hat U_0 - \hat U_0^m, \hat U_1 - \hat U_1^m; 0)(t)\| \leqslant c_S\, \|(\hat U_0 - \hat U_0^m, \hat U_1 - \hat U_1^m)\| \tag{10.92}$$
with respect to the norm of $(H_{-2(N-1)})^N \times (H_{-(2N-1)})^N$, where $c_S$ is a positive constant depending only on $S$. Then, noting (10.86) and (10.87), we can choose a diagonal subsequence $\{H_{n_k}^{m_k}\}_{k\geqslant 0}$ such that
$$C_{\tilde p}R(\hat U_0, \hat U_1; H_{n_k}^{m_k}) \to 0 \quad\text{in } C^0_{\mathrm{loc}}([T, +\infty); (H_{-2(N-1)})^{N-\tilde p}) \tag{10.93}$$
as $k \to +\infty$.
Hence, the reduced system (10.71) is approximately null controllable in the space $(H_{-2(N-1)})^{N-\tilde p} \times (H_{-(2N-1)})^{N-\tilde p}$, therefore, by Proposition 10.21, it is also approximately null controllable in the space $(H_0)^{N-\tilde p} \times (H_{-1})^{N-\tilde p}$. The proof is complete. □

10.5 Approximate Boundary Null Controllability

Let $d$ be a column vector of $D$ or, more generally, a linear combination of the column vectors of $D$, namely, $d \in \operatorname{Im}(D)$. If $d \in \operatorname{Ker}(C_p)$, then it is canceled in the product matrix $C_pD$, therefore it has no effect on the reduced system (10.13). However, the vectors in $\operatorname{Ker}(C_p)$ may play an essential role in the approximate boundary null controllability. More precisely, we have the following

Theorem 10.24 Let the coupling matrix $A$ satisfy the condition of $C_p$-compatibility (10.10). Assume that
$$e_1, \cdots, e_p \in \operatorname{Im}(D), \tag{10.94}$$
where $\operatorname{Ker}(C_p) = \operatorname{Span}\{e_1, \cdots, e_p\}$. If system (I) is approximately synchronizable by $p$-groups, then it is in fact approximately null controllable.

Proof To prove this theorem, by Theorem 8.6, it is sufficient to show that the adjoint
problem (8.7) is D-observable.

For $1 \leqslant r \leqslant p$, applying $e_r$ to the adjoint problem (8.7) and noting $\phi_r = (e_r, \Phi)$, it follows from the condition of $C_p$-compatibility (10.10) that
$$\begin{cases} \phi_r'' - \Delta\phi_r + \sum_{s=1}^p \beta_{rs}\phi_s = 0 &\text{in } (0, T)\times\Omega,\\ \phi_r = 0 &\text{on } (0, T)\times\Gamma, \end{cases} \tag{10.95}$$
where the coefficients $\beta_{rs}$ are given by
$$Ae_r = \sum_{s=1}^p \beta_{rs}e_s, \quad 1 \leqslant r \leqslant p. \tag{10.96}$$
Noting $e_r \in \operatorname{Im}(D)$, there exists $x_r \in \mathbb{R}^M$ such that $e_r = Dx_r$. Then the D-observation (8.8) gives
$$\partial_\nu\phi_r = (e_r, \partial_\nu\Phi) = (x_r, D^T\partial_\nu\Phi) \equiv 0 \quad\text{on } (0, T)\times\Gamma_1. \tag{10.97}$$
Hence, by Holmgren's uniqueness theorem, $\phi_r \equiv 0$ for all $1 \leqslant r \leqslant p$. Consequently, we have
$$\Phi \in \{\operatorname{Ker}(C_p)\}^\perp = \operatorname{Im}(C_p^T). \tag{10.98}$$
Hence, the solution of the adjoint problem (8.7) can be written as
$$\Phi = C_p^T\Psi, \tag{10.99}$$
and it follows from the adjoint problem (8.7) that
$$\begin{cases} C_p^T\Psi'' - \Delta C_p^T\Psi + A^TC_p^T\Psi = 0 &\text{in } (0, T)\times\Omega,\\ C_p^T\Psi = 0 &\text{on } (0, T)\times\Gamma. \end{cases} \tag{10.100}$$
Noting the condition of $C_p$-compatibility (10.11), it follows that
$$\begin{cases} C_p^T\Psi'' - \Delta C_p^T\Psi + C_p^TA_p^T\Psi = 0 &\text{in } (0, T)\times\Omega,\\ C_p^T\Psi = 0 &\text{on } (0, T)\times\Gamma. \end{cases} \tag{10.101}$$
Since the matrix $C_p^T$ is of full column-rank, we get the reduced adjoint problem (10.15). Accordingly, the D-observation gives
$$D^T\partial_\nu\Phi = D^TC_p^T\partial_\nu\Psi \equiv 0 \quad\text{on } (0, T)\times\Gamma_1. \tag{10.102}$$
By Lemma 10.2, the reduced adjoint problem (10.15) is $C_pD$-observable; it follows that $\Psi \equiv 0$. Then, from (10.99), we have $\Phi \equiv 0$. So the adjoint problem (8.7) is D-observable. The proof is thus complete. □
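The hypothesis (10.94) is a purely algebraic condition on $D$ and $\operatorname{Ker}(C_p)$ and is easy to test numerically. The following sketch is only an illustration with hypothetical data (the matrices and the helper `in_column_space` are not taken from the text): a vector belongs to $\operatorname{Im}(D)$ exactly when appending it to $D$ does not increase the rank.

```python
import numpy as np

def in_column_space(D, e, tol=1e-10):
    """Return True if the vector e lies in Im(D),
    i.e. appending e to D does not increase the rank."""
    return (np.linalg.matrix_rank(np.column_stack([D, e]), tol=tol)
            == np.linalg.matrix_rank(D, tol=tol))

# Hypothetical data: N = 4, p = 2, Ker(C_p) = Span{e1, e2}.
e1 = np.array([1., 1., 0., 0.])
e2 = np.array([0., 0., 1., 1.])
D  = np.column_stack([e1, e2, [1., 0., 0., 0.]])   # a sample 4 x 3 control matrix

# Condition (10.94): every basis vector of Ker(C_p) must belong to Im(D).
print(all(in_column_space(D, e) for e in (e1, e2)))   # True
```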

Remark 10.25 The extension matrix $C_{\tilde p}$ defined by (10.20) satisfies the condition of $C_{\tilde p}$-compatibility (10.21), and, by Theorem 10.14, the corresponding reduced system (similar to (10.13)) is approximately null controllable. Moreover, we have $\operatorname{Ker}(C_{\tilde p}) \subseteq \operatorname{Ker}(C_p)$. So, if $A$ does not satisfy the condition of $C_p$-compatibility (10.10), we can replace $C_p$ by $C_{\tilde p}$, and Theorem 10.24 remains valid in this case.
Chapter 11
Induced Approximate Boundary Synchronization

We introduce the induced approximate boundary synchronization for system (I) with
Dirichlet boundary controls and give some examples in this chapter.

11.1 Definition

We have precisely studied the approximate boundary synchronization by p-groups


for system (I) under the minimal rank condition (10.29). In this situation, the con-
vergence of the sequence {Un } of solutions to problem (I) and (I0) and the necessity
of the condition of C p -compatibility (10.10) are essentially the consequence of the
minimal number of applied total controls.
The objective of this chapter is to investigate the approximate boundary synchro-
nization by p-groups in the case that

N p > N − p, (11.1)

where N p is defined by (10.54).


By the condition of C p -compatibility (10.10), Ker(C p ) is invariant for A. How-
ever, by Remark 10.12, the situation (11.1) occurs if and only if Ker(C p ) does not
admit any supplement which is invariant for A. So, in order to determine the mini-
mal number N p of total controls, which are necessary for the approximate boundary
synchronization by p-groups of system (I), a natural thought is to extend the matrix
C p of synchronization by p-groups to the induced extension matrix Cq∗ given in
Definition 2.18. Thus, we can bring this situation to the framework of Chap. 10.
Accordingly, we introduce the following
Definition 11.1 System (I) is induced approximately synchronizable by $C_q^*$ at the time $T > 0$ if, for any given initial data $(\hat U_0, \hat U_1) \in (H_0)^N \times (H_{-1})^N$, there exists a
sequence $\{H_n\}$ of boundary controls in $\mathcal L_M$ with compact support in $[0, T]$, such that the corresponding sequence $\{U_n\}$ of solutions to problem (I) and (I0) satisfies
$$C_q^*U_n \to 0 \quad\text{as } n \to +\infty \tag{11.2}$$
in the space $C^0_{\mathrm{loc}}([T, +\infty); (H_0)^{N-q}) \cap C^1_{\mathrm{loc}}([T, +\infty); (H_{-1})^{N-q})$.
In general, the induced approximate boundary synchronization does not occur. To see this, let $E_1, \cdots, E_{m^*}$ be a Jordan chain of $A^T$ associated with an eigenvalue $\lambda$:
$$E_0 = 0, \quad A^TE_i = \lambda E_i + E_{i-1}, \quad i = 1, \cdots, m^*. \tag{11.3}$$
Let $m$ with $1 \leqslant m < m^*$ be an integer such that $E_1, \cdots, E_m \in \operatorname{Im}(C_p^T)$. Then, applying $E_1, \cdots, E_{m^*}$ to system (I) with $U = U_n$ and $H = H_n$, and setting $\phi_i = (E_i, U_n)$ $(i = 1, \cdots, m^*)$, we get
$$\begin{cases} \phi_{1,n}'' - \Delta\phi_{1,n} + \lambda\phi_{1,n} = 0 &\text{in } (0, +\infty)\times\Omega,\\ \quad\cdots\cdots\\ \phi_{m,n}'' - \Delta\phi_{m,n} + \lambda\phi_{m,n} + \phi_{m-1,n} = 0 &\text{in } (0, +\infty)\times\Omega,\\ \quad\cdots\cdots\\ \phi_{m^*,n}'' - \Delta\phi_{m^*,n} + \lambda\phi_{m^*,n} + \phi_{m^*-1,n} = 0 &\text{in } (0, +\infty)\times\Omega. \end{cases} \tag{11.4}$$
Since $E_1, \cdots, E_m \in \operatorname{Im}(C_p^T)$, there exist vectors $r_i \in \mathbb{R}^{N-p}$ for $i = 1, \cdots, m$, such that
$$E_i = C_p^Tr_i, \quad i = 1, \cdots, m. \tag{11.5}$$
By (10.6), for all $i = 1, \cdots, m$ we have
$$\phi_{i,n} = (r_i, C_pU_n) \to 0 \quad\text{as } n \to +\infty \tag{11.6}$$
in the space
$$C^0_{\mathrm{loc}}([T, +\infty); H_0) \cap C^1_{\mathrm{loc}}([T, +\infty); H_{-1}). \tag{11.7}$$
Thus, the first $m$ components $\phi_{1,n}, \cdots, \phi_{m,n}$ converge to zero as $n \to +\infty$. However, except in some specific situations (cf. Theorems 11.11 and 11.12 below), we do not know whether the remaining components $\phi_{m+1,n}, \cdots, \phi_{m^*,n}$ also converge to zero or not.
However, when system (I) is induced approximately synchronizable, it is not
only approximately synchronizable by p-groups, but also possesses some additional
properties, which are hidden in the extension matrix Cq∗ . That is to say, in this case,
when we realize the approximate boundary synchronization by p-groups for system
(I), we could additionally get some unexpected results, which would improve the
desired approximate boundary synchronization by p-groups!

Since Im(Cq∗ ) admits a supplement which is invariant for A T if system (I) is


induced approximately synchronizable, by Remark 10.12, we immediately perceive
that
N p = N − q, (11.8)

namely, (N − q) total controls are necessary to realize the approximate boundary


synchronization by p-groups for system (I). We will show the minimal number (11.8)
in the general case. This result improves the estimate (10.19) and deeply reveals
that the minimal number of total controls necessary to the approximate boundary
synchronization by p-groups depends not only on the number p of groups, but also
on the algebraic structure of the coupling matrix A with respect to the matrix C p of
synchronization by p-groups.

11.2 Preliminaries

In what follows, for better understanding the driving idea, we always assume that
Im(C Tp ) is A T -marked. In this framework, the procedure for obtaining the matrix Cq∗
is given in Chap. 2. In Remark 11.3 below, we will explain that the non-marked case
is senseless for the induced approximate synchronization.

Proposition 11.2 Assume that the coupling matrix $A$ satisfies the condition of $C_p$-compatibility (10.10). Let $D$ satisfy
$$\operatorname{rank}(C_pD, C_pAD, \cdots, C_pA^{N-1}D) = N - p. \tag{11.9}$$
Then we necessarily have
$$\operatorname{rank}(D, AD, \cdots, A^{N-1}D) \geqslant N - q, \tag{11.10}$$
where $q$ is given by (2.59).

Proof By the assertion (i) of Proposition 2.12, it is sufficient to show that the dimension of any invariant subspace $W$ of $A^T$, contained in $\operatorname{Ker}(D^T)$, does not exceed $q$.
By the condition of $C_p$-compatibility (10.10), there exists a matrix $A_p$ of order $(N-p)$, such that $C_pA = A_pC_p$. By Proposition 2.16, the rank condition (11.9) is equivalent to
$$\operatorname{rank}(C_pD, A_pC_pD, \cdots, A_p^{N-p-1}C_pD) = N - p, \tag{11.11}$$
which, by the assertion (ii) of Proposition 2.12, implies that $\operatorname{Ker}((C_pD)^T)$ does not contain any nontrivial invariant subspace of $A_p^T$.

Now let $W$ be any given invariant subspace of $A^T$ contained in $\operatorname{Im}(C_p^T)$. The projected subspace
$$\bar W = (C_pC_p^T)^{-1}C_pW = \{\bar x :\ C_p^T\bar x = x,\ \forall x \in W\} \tag{11.12}$$
is an invariant subspace of $A_p^T$. In particular, we have
$$\bar W \cap \operatorname{Ker}((C_pD)^T) = \{0\}. \tag{11.13}$$
For any given $x \in W$, there exists $\bar x \in \bar W$ such that $x = C_p^T\bar x$; then we have
$$D^Tx = D^TC_p^T\bar x = (C_pD)^T\bar x. \tag{11.14}$$
Thus we get
$$W \cap \operatorname{Ker}(D^T) = \{0\} \tag{11.15}$$
for any given invariant subspace $W$ of $A^T$ contained in $\operatorname{Im}(C_p^T)$.
Now let $W^*$ be an invariant subspace of $A^T$ contained in $\operatorname{Im}(C_q^{*T}) \cap \operatorname{Ker}(D^T)$. Since $W^* \cap \operatorname{Im}(C_p^T)$ is also an invariant subspace of $A^T$, contained in $\operatorname{Im}(C_p^T) \cap \operatorname{Ker}(D^T)$, we have $W^* \cap \operatorname{Im}(C_p^T) = \{0\}$ because of (11.15). Then it follows that
$$W^* \subseteq \operatorname{Im}(C_q^{*T}) \setminus \operatorname{Im}(C_p^T). \tag{11.16}$$
Since $\operatorname{Im}(C_p^T)$ is $A^T$-marked, by the construction, $\operatorname{Im}(C_q^{*T}) \setminus \operatorname{Im}(C_p^T)$ does not contain any eigenvector of $A^T$; it then follows that
$$W^* = \{0\}. \tag{11.17}$$
Finally, let $W$ be an invariant subspace of $A^T$ contained in $\operatorname{Ker}(D^T)$. Since $W \cap \operatorname{Im}(C_q^{*T})$ is an invariant subspace of $A^T$, contained in $\operatorname{Im}(C_q^{*T}) \cap \operatorname{Ker}(D^T)$, it follows from (11.17) that
$$W \cap \operatorname{Im}(C_q^{*T}) = \{0\}. \tag{11.18}$$
Then we get
$$\dim\operatorname{Im}(C_q^{*T}) + \dim(W) = N - q + \dim(W) \leqslant N, \tag{11.19}$$
which implies that $\dim(W) \leqslant q$. This achieves the proof. □

Remark 11.3 In the case that Im(C Tp ) is not A T -marked, similarly as in §7.3, we
can also construct the extension matrix Cq∗ . But the set Im(Cq∗T ) \ Im(C Tp ) possibly

contains eigenvectors of $A^T$, and then (11.17) will not be satisfied. So, instead of (11.10), we can only get a weaker estimate such as
$$\operatorname{rank}(D, AD, \cdots, A^{N-1}D) \geqslant N - q' \tag{11.20}$$
with $q' > q$. In the case of equality, we have
$$\operatorname{rank}(D, AD, \cdots, A^{N-1}D) = N - q' < N - q. \tag{11.21}$$
So, system (I) is never induced approximately synchronizable. This explains why we only study the $A^T$-marked case at the beginning of this section.

Proposition 11.4 Assume that the coupling matrix A satisfies the condition of C p -
compatibility (10.10). If

rank(D, AD, · · · , A N −1 D) = N − q, (11.22)

then we necessarily have the rank condition

rank(Cq∗ D, Cq∗ AD, · · · Cq∗ A N −1 D) = N − q. (11.23)

Proof First, by (ii) of Proposition 2.12, the rank condition (11.22) implies the exis-
tence of an invariant subspace W of A T contained in Ker(D T ) with dimension q. On
the other hand, noting (11.18), we have

dim Im(Cq∗T ) + dim(W ) = (N − q) + q = N , (11.24)

then Im(Cq∗T ) is a supplement of W , which is also invariant for A T .


Furthermore, by the definition of W , it is easy to see that

W ⊆ Ker(D, AD, · · · , A N −1 D)T , (11.25)

or equivalently,
Im(D, AD, · · · , A N −1 D) ⊆ W ⊥ , (11.26)

which together with the rank condition (11.22) imply

Im(D, AD, · · · , A N −1 D) = W ⊥ . (11.27)

But W ⊥ is a supplement of Ker(Cq∗ ), which is invariant for A, then

Im(D, AD, · · · , A N −1 D) ∩ Ker(Cq∗ ) = {0}. (11.28)



By Proposition 2.11, we get
$$\operatorname{rank}(C_q^*D, C_q^*AD, \cdots, C_q^*A^{N-1}D) = \operatorname{rank}(D, AD, \cdots, A^{N-1}D). \tag{11.29}$$
This achieves the proof. □

11.3 Induced Approximate Boundary Synchronization

We first give a lower bound estimate on the number of total controls which are
necessary for the approximate boundary synchronization by p-groups of system (I).
Proposition 11.5 Assume that the coupling matrix $A$ satisfies the condition of $C_p$-compatibility (10.10). If system (I) is approximately synchronizable by $p$-groups under the action of the boundary control matrix $D$, then we necessarily have
$$\operatorname{rank}(D, AD, \cdots, A^{N-1}D) \geqslant N - q, \tag{11.30}$$
where $q$ is given by (2.59).

Proof By Lemma 10.3, we have the following Kalman criterion:
$$\operatorname{rank}(C_pD, C_pAD, \cdots, C_pA^{N-1}D) = N - p. \tag{11.31}$$
Then the rank condition (11.30) follows from Proposition 11.2. □


Now we go back to the induced approximate boundary synchronization.

Theorem 11.6 Assume that the coupling matrix $A$ satisfies the condition of $C_p$-compatibility (10.10). If system (I) is induced approximately synchronizable with the minimal number of total controls (11.22), then there exist scalar functions $u_1^*, \cdots, u_q^*$ such that
$$U_n \to \sum_{s=1}^q u_s^*e_s^* \quad\text{as } n \to +\infty \tag{11.32}$$
in the space
$$C^0_{\mathrm{loc}}([T, +\infty); (H_0)^N) \cap C^1_{\mathrm{loc}}([T, +\infty); (H_{-1})^N), \tag{11.33}$$
where $\operatorname{Span}\{e_1^*, \cdots, e_q^*\} = \operatorname{Ker}(C_q^*)$.
In particular, system (I) is approximately synchronizable by $p$-groups in the pinning sense.
Proof The proof is similar to that of Theorem 10.5. Here, we only give its sketch as
follows.

By Proposition 2.12, the rank condition (11.22) guarantees the existence of a subspace $\operatorname{Span}\{E_1, \cdots, E_q\}$, which is invariant for $A^T$ and contained in $\operatorname{Ker}(D^T)$. Thus, it is easy to see that the projection of $U_n$ on $\operatorname{Span}\{E_1, \cdots, E_q\}$ is independent of the boundary controls $H_n$. Similarly as in Theorem 10.5, we can show that $\operatorname{Span}\{E_1, \cdots, E_q\}$ is a supplement of $\operatorname{Im}(C_q^{*T})$. We have
$$U_n = \sum_{r=1}^q u_r^*e_r^* + C_q^{*T}R_n, \tag{11.34}$$
where $R_n \in \mathbb{R}^{N-q}$. Since $\operatorname{Ker}(C_q^*) = \operatorname{Span}\{e_1^*, \cdots, e_q^*\}$, the induced approximate boundary synchronization (11.2) implies that $C_q^*C_q^{*T}R_n \to 0$. Noting that $C_q^*C_q^{*T}$ is invertible, we get $R_n \to 0$, which yields (11.32).
Since $\operatorname{Ker}(C_q^*) \subseteq \operatorname{Ker}(C_p)$, there exist coefficients $\alpha_{sr}$ such that
$$e_s^* = \sum_{r=1}^p \alpha_{sr}e_r, \quad s = 1, \cdots, q. \tag{11.35}$$
Then, setting
$$u_r = \sum_{s=1}^q \alpha_{sr}u_s^*, \quad r = 1, \cdots, p \tag{11.36}$$
in (11.32), we get
$$U_n \to \sum_{r=1}^p u_re_r \quad\text{as } n \to +\infty \tag{11.37}$$
in the space
$$C^0_{\mathrm{loc}}([T, +\infty); (H_0)^N) \cap C^1_{\mathrm{loc}}([T, +\infty); (H_{-1})^N). \tag{11.38}$$
Thus, system (I) is approximately synchronizable by $p$-groups in the pinning sense. □
Remark 11.7 Because of the minimal rank condition (11.22), the functions
u ∗1 , · · · , u q∗ in (11.32) are linearly independent, while, since p > q, the func-
tions u 1 , · · · , u p given in (11.36) are linearly dependent. The relationship among
u 1 , · · · , u p , which depends on the structure of Ker(Cq∗ ), will be illustrated in Exam-
ples 11.16 and 11.17 below.
Theorem 11.8 Assume that the coupling matrix A satisfies the condition of C p -
compatibility (10.10). There exists a boundary control matrix D such that the cor-
responding system (I) is induced approximately synchronizable with the minimal
number of total controls (11.22).
Proof Since $\operatorname{Ker}(C_q^*)$ is an invariant subspace of $A$, by Proposition 2.15, there exists a matrix $A_q^*$ of order $(N-q)$, such that $C_q^*A = A_q^*C_q^*$. Setting $W = C_q^*U$ in problem (I) and (I0), we get the following reduced system:
$$\begin{cases} W'' - \Delta W + A_q^*W = 0 &\text{in } (0, +\infty)\times\Omega,\\ W = 0 &\text{on } (0, +\infty)\times\Gamma_0,\\ W = C_q^*DH &\text{on } (0, +\infty)\times\Gamma_1 \end{cases} \tag{11.39}$$
with the initial data:
$$t = 0:\quad W = C_q^*\hat U_0,\ W' = C_q^*\hat U_1 \quad\text{in } \Omega. \tag{11.40}$$
Clearly, the induced approximate boundary synchronization by $C_q^*$ for system (I) is equivalent to the approximate boundary null controllability of the reduced system (11.39), hence equivalent to the $C_q^*D$-observability of the corresponding reduced adjoint problem
$$\begin{cases} \Psi'' - \Delta\Psi + A_q^{*T}\Psi = 0 &\text{in } (0, +\infty)\times\Omega,\\ \Psi = 0 &\text{on } (0, +\infty)\times\Gamma,\\ t = 0:\ \Psi = \hat\Psi_0,\ \Psi' = \hat\Psi_1 &\text{in } \Omega. \end{cases} \tag{11.41}$$
Let $W$ be a subspace which is invariant for $A^T$ and bi-orthonormal to $\operatorname{Ker}(C_q^*)$. Clearly, $W$ and $\operatorname{Ker}(C_q^*)$ have the same dimension $q$. Setting
$$\operatorname{Ker}(D^T) = W, \tag{11.42}$$
$W$ is the largest invariant subspace of $A^T$ contained in $\operatorname{Ker}(D^T)$. By the assertion (ii) of Proposition 2.12 with $d = q$, we have the rank condition (11.22).
On the other hand, since $W$ is bi-orthonormal to $\operatorname{Ker}(C_q^*)$, by Proposition 2.5, we have
$$\operatorname{Im}(D) \cap \operatorname{Ker}(C_q^*) = W^\perp \cap \operatorname{Ker}(C_q^*) = \{0\}. \tag{11.43}$$
Then by Proposition 2.11, we have
$$\operatorname{rank}(C_q^*D) = \operatorname{rank}(D) = N - q. \tag{11.44}$$
Therefore, the $C_q^*D$-observation becomes the complete observation
$$\partial_\nu\Psi \equiv 0 \quad\text{on } [0, T]\times\Gamma_1, \tag{11.45}$$
which, by Holmgren's uniqueness theorem (cf. Theorem 8.2 in [62]), implies the $C_q^*D$-observability of the reduced adjoint problem (11.41). We thus get the convergence (11.2). The proof is complete. □

As a direct consequence of Theorem 11.8, we have

Corollary 11.9 Let $q$ with $0 \leqslant q < p$ be given by (2.59). Then we have
$$N_p = N - q. \tag{11.46}$$

Remark 11.10 Noting that rank(D) = N − q for the boundary control matrix D
given by (11.42), Theorem 11.8 means that the approximate boundary synchroniza-
tion by p-groups can be realized by means of the minimal number (N − q) of direct
controls. However, in practice, we prefer to use a boundary control matrix D which
provides (N − q) total controls, and has the minimal rank(D). To this end, we will
replace the rank condition (11.44) by a weaker rank condition (11.23), which is only
necessary in general for the induced approximate boundary synchronization. The
following results give some confirmative answers under suitable additional condi-
tions.

Theorem 11.11 Let $\Omega \subset \mathbb{R}^n$ satisfy the multiplier geometrical condition (3.1). Assume that the coupling matrix $A$ satisfies the condition of $C_p$-compatibility (10.10) and is similar to a nilpotent matrix. If system (I) is approximately synchronizable by $p$-groups under the rank condition (11.22), then it is actually induced approximately synchronizable.

Proof As explained at the beginning of the proof of Theorem 11.8, the induced
approximate boundary synchronization of system (I) is equivalent to the Cq∗ D-
observability of the reduced adjoint problem (11.41).
First, by Proposition 11.4, we have the rank condition (11.23). So, by Proposition 2.16, we have
$$\operatorname{rank}(C_q^*D, A_q^*C_q^*D, \cdots, A_q^{*\,N-q-1}C_q^*D) = N - q. \tag{11.47}$$

On the other hand, the matrix A is similar to a nilpotent matrix. By Proposition


2.21, the projection of the system of root vectors of A provides the system of root
vectors of the reduced matrix Aq∗ . Thus, it is easy to see that Aq∗ is also similar to
a nilpotent matrix. Then by Theorem 8.16, the reduced adjoint problem (11.41) is
Cq∗ D-observable. The proof is thus complete. 

Theorem 11.12 Let the coupling matrix $A$ satisfy the condition of $C_p$-compatibility (10.10). Assume that system (I) is approximately synchronizable by $p$-groups by means of a boundary control matrix $D$. Let
$$\tilde D = (\tilde e_1, \cdots, \tilde e_{p-q}, D), \tag{11.48}$$
where $\tilde e_1, \cdots, \tilde e_{p-q} \in \operatorname{Ker}(C_p) \cap \operatorname{Im}(C_q^{*T})$. Then system (I) is induced approximately synchronizable under the action of the new boundary control matrix $\tilde D$.


Proof It is sufficient to show the $C_q^*\tilde D$-observability of the reduced adjoint problem (11.41). Since $C_q^{*T}$ is a bijection from $\mathbb{R}^{N-q}$ onto $\operatorname{Im}(C_q^{*T})$, we can set $\Phi = C_q^{*T}\Psi$. Then, noting $C_q^*A = A_q^*C_q^*$, it follows from system (11.41) that
$$\begin{cases} \Phi'' - \Delta\Phi + A^T\Phi = 0 &\text{in } (0, +\infty)\times\Omega,\\ \Phi = 0 &\text{on } (0, +\infty)\times\Gamma \end{cases} \tag{11.49}$$
with the initial data $(\hat\Phi_0, \hat\Phi_1) \in \operatorname{Im}(C_q^{*T}) \times \operatorname{Im}(C_q^{*T})$. Accordingly, the $C_q^*\tilde D$-observation
$$\tilde D^TC_q^{*T}\partial_\nu\Psi \equiv 0 \quad\text{on } (0, T)\times\Gamma_1 \tag{11.50}$$
becomes the $\tilde D$-observation
$$\tilde D^T\partial_\nu\Phi \equiv 0 \quad\text{on } (0, T)\times\Gamma_1. \tag{11.51}$$
Then, the $C_q^*\tilde D$-observability of (11.41) becomes the $\tilde D$-observability of system (11.49) with the initial data $(\hat\Phi_0, \hat\Phi_1) \in \operatorname{Im}(C_q^{*T}) \times \operatorname{Im}(C_q^{*T})$.
Now let $\bar C_p$ denote the restriction of $C_p$ to the subspace $\operatorname{Im}(C_q^{*T})$. We have
$$\operatorname{Im}(C_q^{*T}) = \operatorname{Im}(C_p^T) \oplus \operatorname{Ker}(\bar C_p). \tag{11.52}$$
Since the induced approximate boundary synchronization can be considered as the approximate boundary null controllability in the subspace $\operatorname{Im}(C_q^{*T}) \times \operatorname{Im}(C_q^{*T})$, applying Theorem 10.24, the proof is achieved. □

11.4 Minimal Number of Direct Controls

In this section, we first clarify the relation between the number of total controls and
that of direct controls. Then, we give an explicit construction for the boundary control
matrix D which has the minimal rank.

Proposition 11.13 Let $A$ be a matrix of order $N$ and let $D$ be an $N \times M$ matrix such that
$$\operatorname{rank}(D, AD, \cdots, A^{N-1}D) = N. \tag{11.53}$$
Then we have the following sharp lower bound estimate:
$$\operatorname{rank}(D) \geqslant \mu, \tag{11.54}$$
where
$$\mu = \max_{\lambda\in\operatorname{Sp}(A^T)} \dim\operatorname{Ker}(A^T - \lambda I) \tag{11.55}$$
is the largest geometric multiplicity of the eigenvalues of $A^T$.

Proof Let $\lambda$ be an eigenvalue of $A^T$ whose geometric multiplicity is equal to $\mu$, and let $V$ be the subspace composed of all the eigenvectors associated with the eigenvalue $\lambda$. By the assertion (ii) of Proposition 2.12, the rank condition (11.53) implies that there does not exist any nontrivial invariant subspace of $A^T$ contained in $\operatorname{Ker}(D^T)$. Therefore, we have
$$\dim\operatorname{Ker}(D^T) + \dim(V) \leqslant N, \tag{11.56}$$
namely,
$$\dim(V) \leqslant N - \dim\operatorname{Ker}(D^T) = \operatorname{rank}(D), \tag{11.57}$$
which yields the lower bound estimate (11.54).
In order to show the sharpness of (11.54), we will construct a matrix $D_0$ of rank $\mu$ such that $\operatorname{Ker}(D_0^T)$ does not contain any nontrivial invariant subspace of $A^T$; therefore, $D_0$ satisfies the rank condition (11.53) by Proposition 2.12.
Let $\lambda_1, \cdots, \lambda_d$ be the distinct eigenvalues of $A$, with geometric multiplicities $\mu_1, \cdots, \mu_d$, respectively. For any given $r = 1, \cdots, d$ and $k = 1, \cdots, \mu_r$, we denote by $(x_{r,p}^{(k)})$ the following Jordan chain of length $d_r^{(k)}$:
$$x_{r,d_r^{(k)}+1}^{(k)} = 0, \quad Ax_{r,p}^{(k)} = \lambda_rx_{r,p}^{(k)} + x_{r,p+1}^{(k)} \tag{11.58}$$
for $p = 1, \cdots, d_r^{(k)}$.
Accordingly, for any given $s = 1, \cdots, d$ and $l = 1, \cdots, \mu_s$, let $(y_{s,q}^{(l)})$ be the corresponding Jordan chain of length $d_s^{(l)}$:
$$y_{s,0}^{(l)} = 0, \quad A^Ty_{s,q}^{(l)} = \lambda_sy_{s,q}^{(l)} + y_{s,q-1}^{(l)}, \quad q = 1, \cdots, d_s^{(l)}. \tag{11.59}$$
We can choose
$$(x_{r,p}^{(k)}, y_{s,q}^{(l)}) = \delta_{kl}\delta_{rs}\delta_{pq}. \tag{11.60}$$
Then we define the matrix $D_0$ by
$$D_0 = (x_{1,1}^{(1)}, \cdots, x_{1,1}^{(\mu_1)}, \cdots, x_{d,1}^{(1)}, \cdots, x_{d,1}^{(\mu_d)})\,\bar D, \tag{11.61}$$
where $\bar D$ is a $(\mu_1 + \cdots + \mu_d) \times \mu$ matrix defined by
$$\bar D = \begin{pmatrix} I_{\mu_1} & 0\\ I_{\mu_2} & 0\\ \vdots & \vdots\\ I_{\mu_d} & 0 \end{pmatrix}, \tag{11.62}$$
in which, if $\mu_r = \mu$, then the $r$-th zero sub-matrix disappears. Clearly, $\operatorname{rank}(D_0) = \mu$.
Now, for any given $s = 1, \cdots, d$, let
$$y_s = \sum_{l=1}^{\mu_s} \alpha_ly_{s,1}^{(l)} \quad\text{with } (\alpha_1, \cdots, \alpha_{\mu_s}) \neq 0 \tag{11.63}$$
be an eigenvector of $A^T$ corresponding to the eigenvalue $\lambda_s$. Noting (11.60), it is easy to see that
$$y_s^TD_0 = (0, \cdots, 0, \alpha_1, \alpha_2, \cdots, \alpha_{\mu_s}, 0, \cdots, 0) \neq 0. \tag{11.64}$$
Thus, $\operatorname{Ker}(D_0^T)$ does not contain any eigenvector of $A^T$, and hence no nontrivial invariant subspace of $A^T$ either. By Proposition 2.12, the matrix $D_0$ satisfies the rank condition (11.53). The proof is complete. □
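The construction (11.61)–(11.62) is easy to reproduce numerically. The sketch below is only an illustration with hypothetical data (a diagonalizable $4\times4$ matrix whose eigenvalue $0$ has geometric multiplicity $2$, so that $\mu = 2$); the columns of $D_0$ mix one chain head per eigenvalue, and Kalman's criterion (11.53) then holds with $\operatorname{rank}(D_0) = \mu$.

```python
import numpy as np

def kalman_rank(A, D):
    """rank(D, AD, ..., A^{N-1} D)."""
    N = A.shape[0]
    return np.linalg.matrix_rank(
        np.hstack([np.linalg.matrix_power(A, k) @ D for k in range(N)]))

# Hypothetical example: eigenvalue 0 with geometric multiplicity 2,
# eigenvalues 1 and 2 simple, hence mu = 2.
P = np.array([[1., 0., 1., 0.],
              [0., 1., 1., 1.],
              [1., 1., 0., 1.],
              [0., 1., 1., 0.]])            # columns = eigenvectors x_{r,1}^{(k)}
A = P @ np.diag([0., 0., 1., 2.]) @ np.linalg.inv(P)

# (11.62): stacked (I_{mu_r}, 0) blocks; here mu_1 = 2, mu_2 = mu_3 = 1, mu = 2.
Dbar = np.array([[1., 0.],
                 [0., 1.],
                 [1., 0.],
                 [1., 0.]])
D0 = P @ Dbar                                # (11.61)

print(np.linalg.matrix_rank(D0), kalman_rank(A, D0))   # 2 4
```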
Proposition 11.14 There exists a boundary control matrix $D$ with the minimal rank $\mu^*$, such that the rank conditions (11.22) and (11.23) hold simultaneously.

Proof Since the coupling matrix $A$ always satisfies the condition of $C_q^*$-compatibility, there exists a matrix $A_q^*$ of order $(N-q)$, such that $C_q^*A = A_q^*C_q^*$. Then, by Proposition 2.16, the rank condition (11.23) is equivalent to
$$\operatorname{rank}(C_q^*D, A_q^*C_q^*D, \cdots, A_q^{*\,N-q-1}C_q^*D) = N - q. \tag{11.65}$$
Noting that $A_q^*$ is of order $(N-q)$, as in the proof of Proposition 11.13, there exists a matrix $D^*$ of order $(N-q)\times\mu^*$ with the minimal rank $\mu^*$, such that
$$\operatorname{rank}(D^*, A_q^*D^*, \cdots, A_q^{*\,N-q-1}D^*) = N - q, \tag{11.66}$$
where $\mu^*$ is the largest geometric multiplicity of the eigenvalues of the reduced matrix $A_q^{*T}$.
On the other hand, $A^T$ admits an invariant subspace $W$ which is bi-orthonormal to $\operatorname{Ker}(C_q^*)$. Then $\{\operatorname{Ker}(C_q^*)\}^\perp = \operatorname{Im}(C_q^{*T})$ is an invariant subspace of $A^T$, and $W^\perp = \operatorname{Span}\{e_{q+1}, \cdots, e_N\}$ is an invariant subspace of $A$ and bi-orthonormal to $\operatorname{Im}(C_q^{*T})$, namely, we have
$$C_q^*(e_{q+1}, \cdots, e_N) = I_{N-q}. \tag{11.67}$$
Then, from (11.66) we get that
$$D = (e_{q+1}, \cdots, e_N)D^* \tag{11.68}$$
satisfies (11.65), therefore (11.23) holds.
Finally, since $\operatorname{Im}(D) \subseteq \operatorname{Span}\{e_{q+1}, \cdots, e_N\}$, which is invariant for $A$, we have
$$\operatorname{Im}(A^kD) \subseteq \operatorname{Span}\{e_{q+1}, \cdots, e_N\}, \quad\forall k \geqslant 0. \tag{11.69}$$
Noting that $\operatorname{Span}\{e_{q+1}, \cdots, e_N\}$ is a supplement of $\operatorname{Ker}(C_q^*)$, it follows that
$$\operatorname{Ker}(C_q^*) \cap \operatorname{Im}(D, AD, \cdots, A^{N-1}D) = \{0\}, \tag{11.70}$$
which, thanks to Proposition 2.11 and noting (11.23), implies the equality (11.22) and the sharpness of the rank of $D$:
$$\operatorname{rank}(C_q^*D) = \operatorname{rank}(D) = \mu^*. \tag{11.71}$$
The proof is thus complete. □

11.5 Examples

Example 11.15 As shown in Proposition 11.13, a “good” coupling matrix $A$ should have distinct eigenvalues, or the geometric multiplicity of its eigenvalues should be as small as possible!
In particular, if all the eigenvalues $\lambda_i$ $(i = 1, \cdots, N)$ of $A$ are simple, we can take a boundary control matrix $D$ of rank one:
$$D = x, \tag{11.72}$$
where
$$x = \sum_{i=1}^N x_i \quad\text{with } Ax_i = \lambda_ix_i\ (i = 1, \cdots, N). \tag{11.73}$$
Let $y_j$ $(j = 1, \cdots, N)$ be the eigenvectors of $A^T$, such that $(x_i, y_j) = \delta_{ij}$. Then for all $j = 1, \cdots, N$, we have
$$D^Ty_j = x^Ty_j = \sum_{i=1}^N (x_i, y_j) = 1. \tag{11.74}$$
So, $\operatorname{Ker}(D^T)$ does not contain any eigenvector of $A^T$; by (ii) of Proposition 2.12 (with $d = 0$), $D$ satisfies Kalman's criterion (11.53).
If $A$ has one double eigenvalue:
$$\lambda_1 = \lambda_2 < \lambda_3 < \cdots < \lambda_N \quad\text{with } Ax_i = \lambda_ix_i\ (i = 1, \cdots, N), \tag{11.75}$$
we can take a boundary control matrix $D$ of rank two:
$$D = (x_1, x), \tag{11.76}$$
where
$$x = \sum_{i=2}^N x_i. \tag{11.77}$$
Let $y_j$ $(j = 1, \cdots, N)$ be the eigenvectors of $A^T$, such that $(x_i, y_j) = \delta_{ij}$. Then we have
$$D^Ty_1 = \begin{pmatrix} (x_1, y_1)\\ (x, y_1) \end{pmatrix} = \begin{pmatrix} 1\\ 0 \end{pmatrix}, \tag{11.78}$$
and for $j = 2, \cdots, N$, we have
$$D^Ty_j = \begin{pmatrix} (x_1, y_j)\\ (x, y_j) \end{pmatrix} = \begin{pmatrix} 0\\ 1 \end{pmatrix}. \tag{11.79}$$
So, $\operatorname{Ker}(D^T)$ does not contain any eigenvector of $A^T$. Once again, by (ii) of Proposition 2.12 (with $d = 0$), $D$ satisfies Kalman's criterion (11.53).
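Both constructions above can be tested numerically. The sketch below is a hypothetical illustration (the matrix $A$ is built from an assumed eigenvector basis, not taken from the text) of the rank-one choice (11.72)–(11.73); the rank-two choice (11.76) for a double eigenvalue can be checked in exactly the same way.

```python
import numpy as np

def kalman_rank(A, D):
    """rank(D, AD, ..., A^{N-1} D)."""
    N = A.shape[0]
    return np.linalg.matrix_rank(
        np.hstack([np.linalg.matrix_power(A, k) @ D for k in range(N)]))

# Hypothetical matrix with simple eigenvalues 1, 2, 3, 4.
P = np.triu(np.ones((4, 4)))             # columns x_1,...,x_4 = eigenvectors of A
A = P @ np.diag([1., 2., 3., 4.]) @ np.linalg.inv(P)

x = P.sum(axis=1, keepdims=True)         # D = x = x_1 + ... + x_4, a rank-one matrix
print(kalman_rank(A, x))                 # 4: Kalman's criterion (11.53) holds
```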

Example 11.16 Let $N = 4$, $M = 1$, $p = 2$,
$$A = \begin{pmatrix} 2 & -2 & -1 & 1\\ 1 & -1 & 0 & 0\\ 1 & -1 & -1 & 1\\ 0 & 0 & 0 & 0 \end{pmatrix} \tag{11.80}$$
and
$$C_2 = \begin{pmatrix} 1 & -1 & 0 & 0\\ 0 & 0 & 1 & -1 \end{pmatrix} \quad\text{with } e_1 = \begin{pmatrix}1\\1\\0\\0\end{pmatrix},\ e_2 = \begin{pmatrix}0\\0\\1\\1\end{pmatrix}. \tag{11.81}$$
First, noting that $Ae_1 = Ae_2 = 0$, we have the condition of $C_2$-compatibility: $A\operatorname{Ker}(C_2) \subseteq \operatorname{Ker}(C_2)$, and the corresponding reduced matrix
$$A_2 = \begin{pmatrix} 1 & -1\\ 1 & -1 \end{pmatrix} \tag{11.82}$$
is similar to the cascade matrix $\begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix}$ of order 2.

In order to determine the minimal number of total controls necessary to the approximate boundary synchronization by 2-groups, we exhibit the system of root vectors of the matrix $A^T$:
$$E_1^{(1)} = \begin{pmatrix}0\\0\\0\\1\end{pmatrix},\quad E_1^{(2)} = \begin{pmatrix}1\\-1\\-1\\1\end{pmatrix},\quad E_2^{(2)} = \begin{pmatrix}0\\0\\1\\-1\end{pmatrix},\quad E_3^{(2)} = \begin{pmatrix}0\\1\\-1\\0\end{pmatrix}. \tag{11.83}$$
Since $E_1^{(2)}, E_2^{(2)} \in \operatorname{Im}(C_2^T)$, $\operatorname{Im}(C_2^T)$ is $A^T$-marked. Then, the induced matrix $C_1^*$ can be chosen as
$$C_1^* = C_1 = \begin{pmatrix} 1 & -1 & 0 & 0\\ 0 & 1 & -1 & 0\\ 0 & 0 & 1 & -1 \end{pmatrix}. \tag{11.84}$$
This shows clearly that we have to use 3 (instead of 2!) total controls to realize the approximate boundary synchronization by 2-groups.
On the other hand, all the one-column matrices $D$ satisfying the rank conditions
$$\operatorname{rank}(C_2D, A_2C_2D) = 2 \quad\text{and}\quad \operatorname{rank}(D, AD, A^2D, A^3D) = 3 \tag{11.85}$$
are given either by
$$D = \begin{pmatrix} \alpha+\beta\\ \alpha\\ \beta\\ 1 \end{pmatrix}, \quad\forall\alpha, \beta \in \mathbb{R}, \tag{11.86}$$
or by
$$D = \begin{pmatrix} \gamma\\ \alpha\\ \beta\\ 0 \end{pmatrix}, \quad\forall\alpha, \beta, \gamma \in \mathbb{R} \text{ such that } \gamma \neq \alpha+\beta. \tag{11.87}$$
Since the above one-column matrix $D$ provides only 3 total controls, by Theorems 8.6 and 8.9, system (I) is not approximately boundary null controllable. However, the corresponding Kalman criterion $\operatorname{rank}(C_2D, A_2C_2D) = 2$ is sufficient for the approximate boundary null controllability of the corresponding reduced system (10.13) of cascade type, hence sufficient for the approximate boundary synchronization by 2-groups of the original system (I) in the consensus sense:
$$u_n^{(1)} - u_n^{(2)} \to 0 \quad\text{and}\quad u_n^{(3)} - u_n^{(4)} \to 0 \quad\text{as } n \to +\infty \tag{11.88}$$

in the space
$$C^0_{\mathrm{loc}}([T, +\infty); H_0) \cap C^1_{\mathrm{loc}}([T, +\infty); H_{-1}). \tag{11.89}$$
Furthermore, the reduced matrix $A_1^*$ given by
$$A_1^* = \begin{pmatrix} 1 & 0 & -1\\ 0 & 0 & 1\\ 1 & 0 & -1 \end{pmatrix} \tag{11.90}$$
is similar to a cascade matrix. Moreover, noting that $q = 1$ and $C_1^*A = A_1^*C_1^*$, by Proposition 11.4 we necessarily have the Kalman criterion
$$\operatorname{rank}(C_1^*D, A_1^*C_1^*D, A_1^{*2}C_1^*D, A_1^{*3}C_1^*D) = 3 \tag{11.91}$$
for the boundary control matrix $D$ in both cases (11.86) and (11.87). Since the rank condition (11.91) is sufficient for the approximate boundary null controllability of the corresponding reduced system (11.39), system (I) is induced approximately synchronizable. By Theorem 11.6, there exists a nontrivial scalar function $u^*$ such that
$$U_n \to u^*e_1^* \quad\text{as } n \to +\infty, \tag{11.92}$$
where $\operatorname{Ker}(C_1^*) = \operatorname{Span}\{e_1^*\}$ with $e_1^* = (1, 1, 1, 1)^T$, or equivalently,
$$u_n^{(1)} \to u^*,\quad u_n^{(2)} \to u^*,\quad u_n^{(3)} \to u^*,\quad u_n^{(4)} \to u^* \quad\text{as } n \to +\infty \tag{11.93}$$
in the space
$$C^0_{\mathrm{loc}}([T, +\infty); H_0) \cap C^1_{\mathrm{loc}}([T, +\infty); H_{-1}). \tag{11.94}$$
The induced approximate boundary synchronization (11.93) reveals the hidden information and makes clear the situation of the approximate boundary synchronization by 2-groups (11.88) in the consensus sense.
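The ranks claimed in this example are easy to confirm numerically. The sketch below is only an illustration (Python/NumPy; the helper `kalman_rank` and the particular values of $\alpha, \beta$ are ours, not from the text); by Proposition 2.16, the reduced Kalman matrices in (11.85) and (11.91) may equivalently be computed with $C A^k D$ in place of $A_2$ and $A_1^*$.

```python
import numpy as np

def kalman_rank(C, A, D, n):
    """rank(CD, CAD, ..., C A^{n-1} D)."""
    return np.linalg.matrix_rank(
        np.hstack([C @ np.linalg.matrix_power(A, k) @ D for k in range(n)]))

A  = np.array([[2., -2., -1., 1.],
               [1., -1.,  0., 0.],
               [1., -1., -1., 1.],
               [0.,  0.,  0., 0.]])
C2 = np.array([[1., -1., 0.,  0.],
               [0.,  0., 1., -1.]])
C1s = np.array([[1., -1.,  0.,  0.],      # C_1^* = C_1
                [0.,  1., -1.,  0.],
                [0.,  0.,  1., -1.]])

alpha, beta = 0.3, -1.2                   # any real values in (11.86)
D = np.array([[alpha + beta], [alpha], [beta], [1.]])

print(kalman_rank(C2,        A, D, 4))    # 2: first condition in (11.85)
print(kalman_rank(np.eye(4), A, D, 4))    # 3: only 3 total controls
print(kalman_rank(C1s,       A, D, 4))    # 3: criterion (11.91)
```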

Example 11.17 Let $N = 4$, $M = 1$, $p = 2$,
$$A = \begin{pmatrix} 0 & 0 & 1 & -1\\ 0 & 0 & -1 & 1\\ 1 & -1 & 0 & 0\\ 1 & -1 & 0 & 0 \end{pmatrix},\quad D = \begin{pmatrix}0\\0\\1\\-1\end{pmatrix} \tag{11.95}$$
and
$$C_2 = \begin{pmatrix} 1 & -1 & 0 & 0\\ 0 & 0 & 1 & -1 \end{pmatrix} \quad\text{with } e_1 = \begin{pmatrix}1\\1\\0\\0\end{pmatrix},\ e_2 = \begin{pmatrix}0\\0\\1\\1\end{pmatrix}. \tag{11.96}$$

First, the rank of Kalman's matrix
$$(D, AD, A^2D, A^3D) = \begin{pmatrix} 0 & 2 & 0 & 0\\ 0 & -2 & 0 & 0\\ 1 & 0 & 4 & 0\\ -1 & 0 & 4 & 0 \end{pmatrix} \tag{11.97}$$
is equal to 3. So, by Theorems 8.6 and 8.9, system (I) is not approximately null controllable under the action of the boundary control matrix $D$.
Next, noting $Ae_1 = Ae_2 = 0$, we have the condition of $C_2$-compatibility (10.10): $A\operatorname{Ker}(C_2) \subseteq \operatorname{Ker}(C_2)$. The reduced matrix is
$$A_2 = C_2AC_2^T(C_2C_2^T)^{-1} = \begin{pmatrix} 0 & 2\\ 0 & 0 \end{pmatrix}. \tag{11.98}$$
Moreover,
$$(C_2D, A_2C_2D) = \begin{pmatrix} 0 & 4\\ 2 & 0 \end{pmatrix}. \tag{11.99}$$
The reduced matrix $A_2$ is of cascade type, and then Kalman's criterion $\operatorname{rank}(C_2D, A_2C_2D) = 2$ is sufficient for the approximate boundary null controllability of the corresponding reduced system (10.13); hence, the original system (I) is approximately synchronizable by 2-groups in the consensus sense:
$$u_n^{(1)} - u_n^{(2)} \to 0 \quad\text{and}\quad u_n^{(3)} - u_n^{(4)} \to 0 \tag{11.100}$$
as $n \to +\infty$ in the space
$$C^0_{\mathrm{loc}}([T, +\infty); H_0) \cap C^1_{\mathrm{loc}}([T, +\infty); H_{-1}). \tag{11.101}$$
Now, we exhibit the system of root vectors of the matrix $A^T$:
$$E_1^{(1)} = \begin{pmatrix}1\\1\\0\\0\end{pmatrix},\quad E_1^{(2)} = \begin{pmatrix}0\\0\\2\\-2\end{pmatrix},\quad E_2^{(2)} = \begin{pmatrix}1\\-1\\0\\0\end{pmatrix},\quad E_3^{(2)} = \begin{pmatrix}0\\0\\0\\1\end{pmatrix}. \tag{11.102}$$
Since $E_1^{(2)}, E_2^{(2)} \in \operatorname{Im}(C_2^T)$, $\operatorname{Im}(C_2^T)$ is $A^T$-marked. Then, the induced matrix $C_1^*$ can be chosen as
$$C_1^* = \begin{pmatrix} 0 & 0 & 1 & -1\\ 1 & -1 & 0 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix}. \tag{11.103}$$

Furthermore, the reduced matrix $A_1^*$, determined by $C_1^*A = A_1^*C_1^*$ and given by
$$A_1^* = \begin{pmatrix} 0 & 0 & 0\\ 2 & 0 & 0\\ 0 & 1 & 0 \end{pmatrix}, \tag{11.104}$$
is of cascade type. Moreover, by Proposition 11.4, we necessarily have the following Kalman criterion:
$$\operatorname{rank}(C_1^*D, A_1^*C_1^*D, A_1^{*2}C_1^*D, A_1^{*3}C_1^*D) = 3,$$
which is sufficient for the approximate boundary null controllability of the corresponding reduced system (11.39). Then system (I) is induced approximately synchronizable. By Theorem 11.6, there exists a nontrivial scalar function $u^*$, such that
$$U_n \to u^*e_1^* \quad\text{as } n \to +\infty, \tag{11.105}$$
where $\operatorname{Ker}(C_1^*) = \operatorname{Span}\{e_1^*\}$ with $e_1^* = (1, 1, 0, 0)^T$, or equivalently,
$$u_n^{(1)} \to u^*,\quad u_n^{(2)} \to u^* \quad\text{and}\quad u_n^{(3)} \to 0,\quad u_n^{(4)} \to 0 \tag{11.106}$$
as $n \to +\infty$ in the space
$$C^0_{\mathrm{loc}}([T, +\infty); H_0) \cap C^1_{\mathrm{loc}}([T, +\infty); H_{-1}). \tag{11.107}$$
Once again, the induced approximate boundary synchronization (11.106) clarifies the approximate boundary synchronization by 2-groups (11.100) in the consensus sense.
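The same kind of numerical check applies here. The sketch below is only an illustration (Python/NumPy; the helper `kalman_rank` is ours, not from the text), reusing the text's matrices and computing the reduced matrix (11.98) directly from $C_2$; as in the previous example, the reduced Kalman criteria are evaluated with $C A^k D$.

```python
import numpy as np

def kalman_rank(C, A, D, n):
    return np.linalg.matrix_rank(
        np.hstack([C @ np.linalg.matrix_power(A, k) @ D for k in range(n)]))

A  = np.array([[0.,  0.,  1., -1.],
               [0.,  0., -1.,  1.],
               [1., -1.,  0.,  0.],
               [1., -1.,  0.,  0.]])
D  = np.array([[0.], [0.], [1.], [-1.]])
C2 = np.array([[1., -1., 0.,  0.],
               [0.,  0., 1., -1.]])
C1s = np.array([[0.,  0., 1., -1.],
                [1., -1., 0.,  0.],
                [0.,  0., 0.,  1.]])

A2 = C2 @ A @ C2.T @ np.linalg.inv(C2 @ C2.T)   # reduced matrix (11.98)
print(A2)                                       # [[0. 2.] [0. 0.]]
print(kalman_rank(np.eye(4), A, D, 4))          # 3: rank of the matrix (11.97)
print(kalman_rank(C2,        A, D, 4))          # 2: Kalman's criterion for (10.13)
print(kalman_rank(C1s,       A, D, 4))          # 3: Kalman's criterion for (11.39)
```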

Remark 11.18 The nature of the induced approximate boundary synchronization is determined by the structure of $\operatorname{Ker}(C_q^*)$. In Example 11.16, since $C_1^* = C_1$, the induced approximate boundary synchronization becomes the approximate boundary synchronization. This is purely a matter of chance. In fact, in Example 11.17, since $C_1^* \neq C_1$, the induced approximate boundary synchronization implies the approximate boundary synchronization for the first group and the approximate boundary null controllability for the second one, namely, the approximate boundary null controllability and synchronization by 2-groups, which can be treated in a way similar to the approximate boundary synchronization by groups (cf. Chap. 10).
Other examples can be constructed to illustrate more complicated situations. How-
ever, the next example shows that the induced approximate boundary synchronization
can fail in the general case.

Example 11.19 Let $N = 2$, $M = 1$, $p = 1$, and
$$A = \begin{pmatrix} 0 & \varepsilon\\ \varepsilon & 0 \end{pmatrix},\quad D = \begin{pmatrix}1\\0\end{pmatrix}, \tag{11.108}$$
where $\varepsilon \neq 0$ is a real number. We consider the following system:
$$\begin{cases} u'' - \Delta u + \varepsilon v = 0 &\text{in } (0, +\infty)\times\Omega,\\ v'' - \Delta v + \varepsilon u = 0 &\text{in } (0, +\infty)\times\Omega,\\ u = v = 0 &\text{on } (0, +\infty)\times\Gamma_0,\\ u = h,\ v = 0 &\text{on } (0, +\infty)\times\Gamma_1. \end{cases} \tag{11.109}$$
Setting $w = u - v$, we get the reduced system
$$\begin{cases} w'' - \Delta w - \varepsilon w = 0 &\text{in } (0, +\infty)\times\Omega,\\ w = 0 &\text{on } (0, +\infty)\times\Gamma_0,\\ w = h &\text{on } (0, +\infty)\times\Gamma_1, \end{cases} \tag{11.110}$$
which is approximately null controllable; therefore, system (11.109) is approximately synchronizable. Moreover, it is easy to see that Kalman's criterion $\operatorname{rank}(D, AD) = 2$ holds for all $\varepsilon \neq 0$, namely, the number of total controls is equal to $2 > N - p = 1$. However, as shown in Theorem 8.11, the adjoint system (8.35) is not D-observable for some values of $\varepsilon$. So, system (11.109) is not approximately null controllable in general.
This example shows that for the approximate boundary synchronization by $p$-groups, even if the number of total controls $\operatorname{rank}(D, AD, \cdots, A^{N-1}D)$ is bigger than $N - p$, we cannot always get more information from the point of view of the induced approximate boundary synchronization.
Part III
Synchronization for a Coupled System of
Wave Equations with Neumann Boundary
Controls: Exact Boundary Synchronization

We consider the following coupled system of wave equations with Neumann boundary controls:
$$\begin{cases} U'' - \Delta U + AU = 0 &\text{in } (0, +\infty)\times\Omega,\\ U = 0 &\text{on } (0, +\infty)\times\Gamma_0,\\ \partial_\nu U = DH &\text{on } (0, +\infty)\times\Gamma_1 \end{cases} \tag{II}$$
with the initial condition
$$t = 0:\quad U = \hat U_0,\ U' = \hat U_1 \quad\text{in } \Omega, \tag{II0}$$
where $\Omega \subset \mathbb{R}^n$ is a bounded domain with smooth boundary $\Gamma = \Gamma_1 \cup \Gamma_0$ such that $\bar\Gamma_1 \cap \bar\Gamma_0 = \emptyset$ and $\operatorname{mes}(\Gamma_1) > 0$; $'$ stands for the time derivative; $\Delta = \sum_{k=1}^n \frac{\partial^2}{\partial x_k^2}$ is the Laplacian operator; $\partial_\nu$ denotes the outward normal derivative on the boundary; $U = (u^{(1)}, \cdots, u^{(N)})^T$ and $H = (h^{(1)}, \cdots, h^{(M)})^T$ $(M \leqslant N)$ stand for the state variables and the boundary controls, respectively; the coupling matrix $A = (a_{ij})$ is of order $N$, and $D$, the boundary control matrix, is a full column-rank matrix of order $N \times M$, both with constant elements.
The exact boundary synchronization and the exact boundary synchronization by
groups for system (II) will be presented and discussed in this part, while, corre-
spondingly, the approximate boundary synchronization and the approximate bound-
ary synchronization by groups for system (II) will be introduced and considered in
the next part (Part IV).
Chapter 12
Exact Boundary Controllability
and Non-exact Boundary Controllability

In this chapter, we will consider the exact boundary controllability and the non-exact
boundary controllability for system (II) of wave equations with Neumann boundary
controls.

12.1 Introduction

In this part, we will consider system (II) of wave equations with Neumann boundary
controls, together with the initial data (II0).
Let us denote
$$H_0 = L^2(\Omega) \quad\text{and}\quad H_1 = H^1_{\Gamma_0}(\Omega), \tag{12.1}$$
where $H^1_{\Gamma_0}(\Omega)$ is the subspace of $H^1(\Omega)$ composed of the functions with null trace on $\Gamma_0$. We denote by $H_{-1}$ the dual space of $H_1$. When $\operatorname{mes}(\Gamma_0) = 0$, namely, $\Gamma_1 = \Gamma$, instead of (12.1) we denote
$$H_0 = \Big\{\phi\in L^2(\Omega):\ \int_\Omega\phi\,dx = 0\Big\},\qquad H_1 = \Big\{\phi\in H^1(\Omega):\ \int_\Omega\phi\,dx = 0\Big\}. \tag{12.2}$$
We will show the exact boundary controllability of system (II) for any given initial data $(\hat U_0, \hat U_1) \in (H_1)^N \times (H_0)^N$ via the HUM approach.
To this end, let
$$\Phi = (\phi^{(1)}, \cdots, \phi^{(N)})^T \tag{12.3}$$

 = (φ(1) , · · · , φ(N ) )T (12.3)

© Springer Nature Switzerland AG 2019 155


T. Li and B. Rao, Boundary Synchronization for Hyperbolic Systems,
Progress in Nonlinear Differential Equations and Their Applications 94,
https://doi.org/10.1007/978-3-030-32849-8_12
156 12 Exact Boundary Controllability and Non-exact Boundary Controllability

denote the adjoint variables. We consider the following adjoint problem:


⎧ 

⎪  −  + A T  = 0 in (0, +∞) × ,

=0 on (0, +∞) × 0 ,
(12.4)

⎪ ∂ ν = 0 on (0, +∞) × 1 ,
⎩ 1
0 ,  = 
t =0: = in .

We will establish the following

Theorem 12.1 There exist positive constants T > 0 and C > 0, independent of
initial data, such that the following observability inequality
 T 
0 , 
( 1 )2 N
(H0 ) ×(H−1 ) N  C ||2 ddt (12.5)
0 1

0 , 
holds for the solution  to the adjoint problem (12.4) with any given ( 1 )
belonging to a subspace F ⊂ (H0 ) × (H−1 ) (cf. (12.64) below).
N N

Recall that without any assumption on the coupling matrix A, the usual multiplier
method cannot be applied directly. The absorption of coupling lower terms is a
delicate issue even for a single wave equation (cf. [62] and [26]). In order to deal with
the lower order terms, we propose a method based on the compactness-uniqueness
argument that we formulate in the following

Lemma 12.2 Let $F$ be a Hilbert space endowed with the $p$-norm. Assume that
$$F = N \oplus L, \tag{12.6}$$
where $\oplus$ denotes the direct sum and $L$ is a finite co-dimensional closed subspace of $F$. Assume that there is another norm, the $q$-norm, in $F$, such that the projection from $F$ into $N$ is continuous with respect to the $q$-norm. Assume furthermore that
$$q(y) \leqslant p(y), \quad\forall y \in L. \tag{12.7}$$
Then there exists a positive constant $C > 0$ such that
$$q(z) \leqslant Cp(z), \quad\forall z \in F. \tag{12.8}$$

Following the above Lemma, we have to first show the observability inequality
for the initial data with higher frequencies in L. In order to extend this inequality to
the whole space F, it is sufficient to verify the continuity of the projection from F
into N for the q-norm. In many situations, it often occurs that the subspaces N and
L are mutually orthogonal with respect to the q-inner product, and this is true in the

present case. This new approach turns out to be particularly simple and efficient for
getting the observability of some distributed systems with lower order terms.
As for the problem with Dirichlet boundary controls in Chap. 3, we show the
exact boundary controllability and the non-exact boundary controllability for system
(II) with Neumann boundary controls in the case M = N or with fewer boundary
controls (M < N ) (cf. Theorems 12.14 and 12.15), respectively. Roughly speaking,
in the framework that all the components of initial data are in the same energy space,
a coupled system of wave equations with Dirichlet or Neumann boundary controls is
exactly controllable if and only if one applies the same number of boundary controls
as the number of state variables of wave equations.

12.2 Proof of Lemma 12.2

Assume that (12.8) fails, then there exists a sequence z n ∈ F, such that

q(z n ) = 1 and p(z n ) → 0 as n → +∞. (12.9)

Using (12.6), we write z n = xn + yn with xn ∈ N and yn ∈ L. Since the projection


from F into N is continuous with respect to the q-norm, there exists a positive
constant c > 0, such that

q(xn )  cq(z n ) = c, ∀n ≥ 1. (12.10)

Noting that N is of finite dimension, we may assume that there exists x ∈ N , such
that xn → x in N . Then, since the second relation of (12.9) means that z n → 0 in F,
we deduce that yn → −x in L for the p-norm. Therefore, we get x ∈ L ∩ N , which
leads to x = 0. Then, we have

q(xn ) → 0 and p(yn ) → 0 (12.11)

as n → +∞. Then using (12.7), we get a contradiction:

1 = q(z n )  q(xn ) + q(yn )  q(xn ) + p(yn ) → 0 (12.12)

as n → +∞. The proof is then complete.

Remark 12.3 Noting that L is not necessarily closed with respect to the weaker
q-norm, so, a priori, the projection z → x is not continuous with respect to the
q-norm (cf. [8]).

12.3 Observability Inequality

In order to give the proof of Theorem 12.1, we first start with some useful preliminary results. Assume that $\Omega \subset \mathbb{R}^n$ is a bounded domain with smooth boundary $\Gamma$. Let $\Gamma = \Gamma_1 \cup \Gamma_0$ be a partition of $\Gamma$ such that $\bar\Gamma_1 \cap \bar\Gamma_0 = \emptyset$. Throughout this chapter, we assume that $\Omega$ satisfies the usual multiplier geometrical condition (cf. [7, 26, 62]). More precisely, assume that there exists $x_0 \in \mathbb{R}^n$ such that, setting $m = x - x_0$, we have
$$(m, \nu) \leqslant 0,\ \forall x \in \Gamma_0;\qquad (m, \nu) > 0,\ \forall x \in \Gamma_1, \tag{12.13}$$
where $(\cdot, \cdot)$ denotes the inner product in $\mathbb{R}^n$.
We define the linear unbounded operator $-\Delta$ in $H_0$ by
$$D(-\Delta) = \{\phi \in H^2(\Omega):\ \phi|_{\Gamma_0} = 0,\ \partial_\nu\phi|_{\Gamma_1} = 0\}. \tag{12.14}$$
Clearly, $-\Delta$ is a densely defined self-adjoint and coercive operator with a compact resolvent in $H_0$. Then we can define the power operator $(-\Delta)^{s/2}$ for any given $s \in \mathbb{R}$ (cf. [64]). Moreover, the domain $H_s = D((-\Delta)^{s/2})$ endowed with the norm $\|\phi\|_s = \|(-\Delta)^{s/2}\phi\|_{H_0}$ is a Hilbert space, and its dual space with respect to the pivot space $H_0$ is $H_s' = H_{-s}$. In particular, we have
$$H_1 = D(\sqrt{-\Delta}) = \{\phi \in H^1(\Omega):\ \phi = 0 \text{ on } \Gamma_0\}. \tag{12.15}$$
Then we formulate the adjoint problem (12.4) into an abstract evolution problem in the space $(H_s)^N \times (H_{s-1})^N$ for any given $s \in \mathbb{R}$:
$$\Phi'' - \Delta\Phi + A^T\Phi = 0,\qquad t = 0:\ \Phi = \hat\Phi_0,\ \Phi' = \hat\Phi_1. \tag{12.16}$$
Moreover, we have the following result (cf. [60, 64, 70]).

Proposition 12.4 For any given initial data $(\hat\Phi_0, \hat\Phi_1) \in (H_s)^N \times (H_{s-1})^N$ with $s \in \mathbb{R}$, the adjoint problem (12.16) admits a unique weak solution $\Phi$ in the sense of $C^0$-semigroups, such that
$$\Phi \in C^0([0, +\infty); (H_s)^N) \cap C^1([0, +\infty); (H_{s-1})^N). \tag{12.17}$$
Now let $e_m$ be the normalized eigenfunction defined by
$$\begin{cases} -\Delta e_m = \mu_m^2e_m &\text{in } \Omega,\\ e_m = 0 &\text{on } \Gamma_0,\\ \partial_\nu e_m = 0 &\text{on } \Gamma_1, \end{cases} \tag{12.18}$$
where the positive sequence $\{\mu_m\}_{m\geqslant1}$ is increasing and such that $\mu_m \to +\infty$ as $m \to +\infty$.

For each $m \geqslant 1$, we define the subspace $Z_m$ by
$$Z_m = \{\alpha e_m:\ \alpha \in \mathbb{R}^N\}. \tag{12.19}$$
Since $A$ is a matrix with constant coefficients, for any given $m \geqslant 1$ the subspace $Z_m$ is invariant for $A^T$. Moreover, for any given integers $m, n$ with $m \neq n$ and any given vectors $\alpha, \beta \in \mathbb{R}^N$, we have
$$(\alpha e_m, \beta e_n)_{(H_s)^N} = (\alpha, \beta)_{\mathbb{R}^N}\big((-\Delta)^{s/2}e_m, (-\Delta)^{s/2}e_n\big)_{H_0} = (\alpha, \beta)_{\mathbb{R}^N}\mu_m^s\mu_n^s(e_m, e_n)_{H_0} = (\alpha, \beta)_{\mathbb{R}^N}\mu_m^s\mu_n^s\delta_{mn}. \tag{12.20}$$
Then, the subspaces $Z_m$ $(m \geqslant 1)$ are mutually orthogonal in the Hilbert space $(H_s)^N$ for any given $s \in \mathbb{R}$, and in particular, we have
$$\|\Phi\|_{(H_s)^N} = \frac{1}{\mu_m}\|\Phi\|_{(H_{s+1})^N}, \quad\forall\Phi \in Z_m. \tag{12.21}$$
Let $m_0 \geqslant 1$ be an integer. We denote by $\bigoplus_{m\geqslant m_0}(Z_m\times Z_m)$ the linear hull of the subspaces $Z_m\times Z_m$ for $m \geqslant m_0$. In other words, $\bigoplus_{m\geqslant m_0}(Z_m\times Z_m)$ is composed of all finite linear combinations of elements of $Z_m\times Z_m$ for $m \geqslant m_0$.

Proposition 12.5 Let $\Phi$ be the solution to the adjoint problem (12.16) with the initial data $(\hat\Phi_0, \hat\Phi_1) \in \bigoplus_{m\geqslant1}(Z_m\times Z_m)$, satisfying the additional condition
$$\Phi \equiv 0 \quad\text{on } [0, T]\times\Gamma_1 \tag{12.22}$$
for $T > 0$ large enough. Then we have $\hat\Phi_0 \equiv \hat\Phi_1 \equiv 0$.
Proof By Schur's theorem, we may assume that $A = (a_{ij})$ is an upper triangular matrix, so that problem (12.16) with the additional condition (12.22) can be rewritten, for $k = 1, \cdots, N$, as
$$\begin{cases} (\phi^{(k)})'' - \Delta\phi^{(k)} + \sum_{p=1}^k a_{pk}\phi^{(p)} = 0 &\text{in } (0, +\infty)\times\Omega,\\ \phi^{(k)} = 0 &\text{on } (0, +\infty)\times\Gamma,\\ \partial_\nu\phi^{(k)} = 0 &\text{on } (0, +\infty)\times\Gamma_1,\\ t = 0:\ \phi^{(k)} = \hat\phi_0^{(k)},\ (\phi^{(k)})' = \hat\phi_1^{(k)} &\text{in } \Omega, \end{cases} \tag{12.23}$$
where, corresponding to (12.3), we have
$$\hat\Phi_0 = (\hat\phi_0^{(1)}, \cdots, \hat\phi_0^{(N)})^T \quad\text{and}\quad \hat\Phi_1 = (\hat\phi_1^{(1)}, \cdots, \hat\phi_1^{(N)})^T. \tag{12.24}$$
Then, using Holmgren's uniqueness theorem (cf. Theorem 8.2 in [62]), there exists a positive constant $T > 0$ large enough and independent of the initial data $(\hat\phi_0^{(1)}, \hat\phi_1^{(1)})$, such that $\phi^{(1)} \equiv 0$. Then, we get successively $\phi^{(k)} \equiv 0$ for $k = 1, \cdots, N$. The proof is then complete. □

Proposition 12.6 Let $m_0 \geqslant 1$ be an integer and let $\Phi$ be the solution to the adjoint problem (12.16) with the initial data $(\hat\Phi_0, \hat\Phi_1) \in \bigoplus_{m\geqslant m_0}(Z_m\times Z_m)$. Define the energy by
$$E(t) = \frac12\int_\Omega\big(|\Phi'|^2 + |\nabla\Phi|^2\big)\,dx. \tag{12.25}$$
Let $\sigma$ denote the Euclidean norm of the matrix $A$. Then we have the following energy estimates:
$$e^{-\frac{\sigma t}{\mu_{m_0}}}E(0) \leqslant E(t) \leqslant e^{\frac{\sigma t}{\mu_{m_0}}}E(0), \quad t \geqslant 0 \tag{12.26}$$
for all $(\hat\Phi_0, \hat\Phi_1) \in \bigoplus_{m\geqslant m_0}(Z_m\times Z_m)$, where the sequence $(\mu_m)_{m\geqslant1}$ is defined by problem (12.18).

Proof First, a straightforward computation yields
$$E'(t) = -\int_\Omega(A^T\Phi, \Phi')\,dx. \tag{12.27}$$
Then, using (12.21) we get
$$\Big|\int_\Omega(A^T\Phi, \Phi')\,dx\Big| \leqslant \sigma\|\Phi\|_{(H_0)^N}\|\Phi'\|_{(H_0)^N} \leqslant \frac{\sigma}{\mu_{m_0}}\|\Phi\|_{(H_1)^N}\|\Phi'\|_{(H_0)^N} \leqslant \frac{\sigma}{\mu_{m_0}}E(t). \tag{12.28}$$
It then follows that
$$-\frac{\sigma}{\mu_{m_0}}E(t) \leqslant E'(t) \leqslant \frac{\sigma}{\mu_{m_0}}E(t). \tag{12.29}$$
Therefore, the function $E(t)e^{\frac{\sigma t}{\mu_{m_0}}}$ is increasing with respect to the variable $t$, while the function $E(t)e^{-\frac{\sigma t}{\mu_{m_0}}}$ is decreasing with respect to the variable $t$. Thus we get (12.26), and the proof is complete. □

Proposition 12.7 There exist an integer $m_0 \geqslant 1$ and positive constants $T > 0$ and $C > 0$, independent of the initial data, such that the following observability inequality
$$\|(\hat\Phi_0, \hat\Phi_1)\|^2_{(H_1)^N\times(H_0)^N} \leqslant C\int_0^T\!\!\int_{\Gamma_1} |\Phi'|^2\, d\Gamma\, dt \tag{12.30}$$
holds for all solutions $\Phi$ to the adjoint problem (12.16) with the initial data $(\hat\Phi_0, \hat\Phi_1) \in \bigoplus_{m\geqslant m_0}(Z_m\times Z_m)$.

Proof First we write the adjoint problem (12.16) as


⎧ (k)  

⎪ (φ ) − φ(k) + Np=1 a pk φ( p) = 0 in (0, +∞) × ,
⎨ (k)
φ =0 on (0, +∞) × 0 ,
(12.31)
⎪ ν φ(k) = 0
⎪ ∂ on (0, +∞) × 1 ,
⎩ (k) , (φ(k) ) = φ
(k)
t = 0 : φ(k) = φ 0 1 in 

for k = 1, 2, · · · , N . Then, multiplying the k-th equation of (12.31) by

M (k) := 2m · ∇φ(k) + (N − 1)φ(k) with m = x − x0 (12.32)

and integrating by parts, we get easily the following identities (cf. [26, 62]):
 
T
∂ν φ(k) M (k) + (m, ν)(|(φ(k) ) |2 − |∇φ(k) |2 ddt (12.33)
0 
 T  T 
= (φ(k) ) M (k) d x + |(φ(k) ) |2 + |∇φ(k) |2 d xdt
 0 0 
N  T
 
+ akp φ( p) M (k) d xdt, k = 1, · · · , N .
p=1 0 

Noting the multiplier geometrical condition (12.13), we have

∂ν φ(k) M (k) + (m, ν)(|(φ(k) ) |2 − |∇φ(k) |2 ) (12.34)


(k) 2
=(m, ν)|∂ν φ |  0 on (0, T ) × 0

and

∂ν φ(k) M (k) + (m, ν)(|(φ(k) ) |2 − |∇φ(k) |2 ) (12.35)


(k)  2 (k) 2
=(m, ν)(|(φ ) | − |∇φ | )
(m, ν)|φ(k) |2 on (0, T ) × 1 .

Then, it follows from (12.33) that


 
T (k)  2
|(φ ) | + |∇φ(k) |2 d xdt (12.36)
0 
 T   T
(k)  2
 (m, ν)|(φ ) | ddt − (φ(k) ) M (k) d x
0 1  0
N 
 T 
− akp φ( p) M (k) d xdt, k = 1, · · · , N .
p=1 0 

Taking the summation of (12.36) with respect to k = 1, · · · , N , we get


 T  T 
2 E(t)dt  (m, ν)| |2 ddt (12.37)
0 0 1
 T  T 
− ( , M)d x − (, AM)d xdt,
 0 0 

where M is the vector composed of M (k) (k = 1, · · · , N ) given by (12.32).


Next, we estimate the last two terms on the right-hand side of (12.37).
First, it follows from (12.32) that
$$\|M\|_{(H_0)^N} \leqslant 2R\sum_{k=1}^N\|\nabla\phi^{(k)}\|_{(H_0)^n} + (N-1)\|\Phi\|_{(H_0)^N} \leqslant \gamma\|\Phi\|_{(H_1)^N}, \tag{12.38}$$
where $R = \|m\|_\infty$ is the diameter of $\Omega$ and
$$\gamma = \sqrt{4R^2 + (N-1)^2}. \tag{12.39}$$
On the other hand, since $Z_m$ is invariant for $A^T$, for any given $(\hat\Phi_0, \hat\Phi_1) \in \bigoplus_{m\geqslant m_0}(Z_m\times Z_m)$, the corresponding solution $\Phi$ to the adjoint problem (12.16) belongs to $\bigoplus_{m\geqslant m_0}Z_m$ for any given $t \geqslant 0$. Thus, using (12.21) and (12.38), we have
$$\Big|\int_\Omega(\Phi, AM)\,dx\Big| \leqslant \sigma\|\Phi\|_{(H_0)^N}\|M\|_{(H_0)^N} \leqslant \gamma\sigma\|\Phi\|_{(H_0)^N}\|\Phi\|_{(H_1)^N} \leqslant \frac{2\gamma\sigma}{\mu_{m_0}}E(t). \tag{12.40}$$
Similarly, we have
$$\Big|\int_\Omega(\Phi', M)\,dx\Big| \leqslant \|\Phi'\|_{(H_0)^N}\|M\|_{(H_0)^N} \tag{12.41}$$
$$\leqslant \gamma\|\Phi'\|_{(H_0)^N}\|\Phi\|_{(H_1)^N} \leqslant \gamma E(t). \tag{12.42}$$
Thus, setting
$$T = \frac{\mu_{m_0}}{\sigma} \tag{12.43}$$
and noting (12.26), we get
$$\Big|\Big[\int_\Omega(\Phi', M)\,dx\Big]_0^T\Big| \leqslant \gamma\big(E(T) + E(0)\big) \leqslant \gamma(1+e)E(0). \tag{12.44}$$

Inserting (12.40) and (12.44) into (12.37) gives


 T  T 
2 E(t)dt  (m, ν)| |2 ddt (12.45)
0 0 1
 T
2σγ
+γ(1 + e)E(0) + E(t)dt.
μm 0 0

Thus, we have
 T  T 
E(t)dt  R | |2 ddt + γ(1 + e)E(0), (12.46)
0 0 1

provided that m 0 is so large that

μm 0  2σγ. (12.47)

Now, integrating the inequality on the left-hand side of (12.26) over [0, T ], we
get
 T
μm 0 −σT
1 − e μm0 E(0)  E(t)dt, (12.48)
σ 0

then, noting (12.26), we get


 T
−1
T (1 − e )E(0)  E(t)dt. (12.49)
0

Thus, it follows from (12.46) and (12.49) that


 T 
R
E(0)  | |2 ddt (12.50)
T (1 − e−1 ) − γ(1 + e) 0 1


0 , 
holds for any given ( 1 ) ∈ mm 0 (Z m × Z m ), provided that

γe(1 + e)
T > , (12.51)
e−1

which is guaranteed by the following choice:

2σγe(1 + e)
μm 0 > (12.52)
e−1

(cf. (12.43), (12.47), and (12.51)). The proof is then complete. 
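The choice of $m_0$ and $T$ in (12.43), (12.47), and (12.52) only requires knowing $\sigma$, $\gamma$, and the eigenvalues $\mu_m$. The sketch below is a purely hypothetical one-dimensional illustration (on $\Omega = (0,1)$ with Dirichlet condition at $x = 0$ and Neumann condition at $x = 1$, so that $\mu_m = (m - \tfrac12)\pi$; the values of $N$, $R$, and $\sigma$ are assumed), showing how the threshold (12.52) determines an admissible cutoff $m_0$ and the observation time $T$.

```python
import math

def mu(m):
    # Eigenvalues of (12.18) on the hypothetical interval (0, 1)
    # with Gamma_0 = {0} (Dirichlet) and Gamma_1 = {1} (Neumann).
    return (m - 0.5) * math.pi

N, R  = 3, 1.0                      # assumed number of equations and R = ||m||_inf
sigma = 2.5                         # assumed Euclidean norm of the coupling matrix A
gamma = math.sqrt(4 * R**2 + (N - 1)**2)                           # (12.39)
thresh = 2 * sigma * gamma * math.e * (1 + math.e) / (math.e - 1)  # (12.52)

m0 = next(m for m in range(1, 10**6) if mu(m) > thresh)
T  = mu(m0) / sigma                 # (12.43)
print(m0, round(T, 2))              # admissible cutoff m0 and observation time T
```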



Proposition 12.8 There exist an integer m 0  1 and positive constants T > 0 and
C > 0 independent of initial data, such that the following observability inequality
 T 
0 , 
( 1 )2 N
(H0 ) ×(H−1 ) N  C ||2 ddt (12.53)
0 1

solutions  to the adjoint problem (12.16) with the initial data


holds for all
0 , 
( 1 ) ∈ mm (Z m × Z m ).
0

Proof Noting that Ker(− + A T ) is of finite dimension, there exists an integer


m 0  1 so large that


Ker(− + A T ) Z m = {0}. (12.54)


mm 0

Let

(H0 ) N
W={ Zm } ⊆ (H0 ) N . (12.55)
mm 0


Since mm 0 Z m is an invariant subspace of (− + A T ), by Fredholm’s alternative,
(− + A T )−1 is an isomorphism from W onto its dual space W  . Moreover, we have

(− + A T )−1 2(H0 ) N ∼ 2(H−1 ) N , ∀ ∈ W. (12.56)



For any given ( 1 ) ∈
0 ,  mm 0 (Z m × Z m ), let

1 , 1 = 
0 = ( − A T )−1  0 . (12.57)

We have
1 2
0 2(H1 ) N + 1 2(H0 ) N ∼  2
(H−1 ) N + 0 (H0 ) N . (12.58)

Next let  be the solution to the adjoint problem (12.16) with the initial data (0 , 1 )
given by (12.57). We have

t =0: 0 ,   = 
 =  1 . (12.59)

By well-posedness, we get
  = . (12.60)

On the other hand, since the subspace mm 0 Z m is invariant for (− + A T ), we
have

(0 , 1 ) ∈ (Z m × Z m ). (12.61)
mm 0

Then, applying (12.30) to , we get


 T 
(0 , 1 )2(H1 ) N ×(H0 ) N C |  |2 ddt. (12.62)
0 1

Thus, using (12.58) and (12.60), we get immediately (12.53). The proof is
finished. 
We are now ready to give the proof of Theorem 12.1.
For any given $(\hat\Phi_0, \hat\Phi_1) \in \bigoplus_{m\geqslant1}(Z_m\times Z_m)$, define
$$p(\hat\Phi_0, \hat\Phi_1) = \int_0^T\!\!\int_{\Gamma_1}|\Phi|^2\,d\Gamma\,dt, \tag{12.63}$$
where $\Phi$ is the solution to the adjoint problem (12.16). By Proposition 12.5, for $T > 0$ large enough, $p(\cdot, \cdot)$ indeed defines a norm in $\bigoplus_{m\geqslant1}(Z_m\times Z_m)$. Then, we denote by $F$ the completion of $\bigoplus_{m\geqslant1}(Z_m\times Z_m)$ with respect to the $p$-norm. Clearly, $F$ is a Hilbert space.
We next write
$$F = N \oplus L \tag{12.64}$$
with
$$N = \bigoplus_{1\leqslant m<m_0}(Z_m\times Z_m), \qquad L = \overline{\bigoplus_{m\geqslant m_0}(Z_m\times Z_m)}^{\,p}. \tag{12.65}$$

Clearly, N is a finite-dimensional subspace and L is a closed subspace in F. In


particular, the observability inequality (12.53) can be extended to all the initial data
in the whole subspace L.
Now we introduce the $q$-norm by
$$q(\hat\Phi_0, \hat\Phi_1) = \|(\hat\Phi_0, \hat\Phi_1)\|_{(H_0)^N\times(H_{-1})^N}, \quad\forall(\hat\Phi_0, \hat\Phi_1) \in F. \tag{12.66}$$

By (12.20), the subspaces (Z m × Z m ) for all m  1 are mutually orthogonal in


(H0 ) N × (H−1 ) N , then the subspace N is an orthogonal complement of L in
(H0 ) N × (H−1 ) N . In particular, the projection from F into N is continuous with
respect to the q-norm.
On the other hand, (12.53) means that (12.7) holds. Then, applying Lemma 12.2,
we get the inequality (12.8), namely, (12.5) in the present situation. In particular, we
have
F ⊂ (H0 ) N × (H−1 ) N . (12.67)

The proof of Theorem 12.1 is now completed. 


Remark 12.9 Without any additional assumptions on the coupling matrix A, the
adjoint problem (12.16) is not conservative and the usual multiplier method cannot

be applied directly. However, since each subspace $Z_m$ is invariant for the matrix $A^T$, for any given initial data $(\hat\Phi_0, \hat\Phi_1) \in Z_m\times Z_m$, the corresponding solution $\Phi$ to the adjoint problem (12.16) still lies in the subspace $Z_m$. Then, because of the identity (12.21), the coupling term $\|A^T\Phi\|_{(H_0)^N}$ is negligible, being of order $\frac{1}{\mu_m}\|\Phi\|_{(H_1)^N}$. Therefore, we first expect the observability inequality (12.30) only for the initial data $(\hat\Phi_0, \hat\Phi_1)$ with higher frequencies, lying in the linear hull $\bigoplus_{m\geqslant m_0}(Z_m\times Z_m)$ with an integer $m_0 \geqslant 1$ large enough. We next extend it to the whole linear hull $\bigoplus_{m\geqslant1}(Z_m\times Z_m)$ by an argument of compact perturbation as shown in Lemma 12.2.

Remark 12.10 The compactness-uniqueness arguments are frequently used in the


study of the observability of distributed parameter systems. It turns out that this
method is particularly simple and efficient for dealing with some systems with lower
order terms. A natural formulation is to consider the problem as a compact per-
turbation of a skew-adjoint operator (cf. [27, 69]). This approach requests that the
eigensystem of the underlying system forms a Riesz basis in the energy space. How-
ever, since the Riesz basis is not stable even for the compact perturbation, this may
cause serious problems. By contrast, the method proposed here does not require any
spectral condition on the underlying system. In particular, instead of Riesz basis
property, we assume only that the projection from F into N is continuous with
respect to the q-norm. Moreover, it often occurs that the subspaces N and L are
mutually orthogonal with respect to the q-inner product, hence, the continuity of the
projection from F into N is much easier to be checked than the Riesz basis property.
Other approaches by energy method for systems with variable coefficients can be
found in [11, 82, 84] and the references therein.

12.4 Exact Boundary Controllability

We will first show the exact boundary null controllability of system (II) by a
standard application of the HUM method proposed by Lions in [61, 62].
Obviously, we have
Hs ⊂ H s (), s  0. (12.68)

On the other hand, by (12.67) and the trace embedding H s () → L 2 (1 ) for all
s > 1/2, noting (12.63), we get the following continuous embeddings:

(Hs ) N × (Hs−1 ) N ⊂ F ⊂ (H0 ) N × (H−1 ) N , s > 1/2. (12.69)

Multiplying the equations in system (II) by a solution  to the adjoint problem (12.4)
and integrating by parts, we get

(U  (t), (t))(H0 ) N − (U (t),  (t))(H0 ) N (12.70)


 t
1 , 
= (U 0 )(H0 ) N − (U 0 , 
1 )(H0 ) N + (D H (τ ), (τ ))ddτ .
0 1

Taking (H0 ) N as the pivot space and noting (12.69), (12.70) can be written as

(U  (t), −U (t)), ((t),  (t)) (12.71)


 t

= (U1 , −U0 ),(0 , 1 ) + (D H (τ ), (τ ))ddτ ,
0 1

where ·, · denotes the duality between the spaces (H−s ) N × (H1−s ) N and (Hs ) N ×
(Hs−1 ) N .

Definition 12.11 U is a weak solution to problem (II) and (II0), if

(U, U  ) ∈ C 0 ([0, T ]; (H1−s ) N × (H−s ) N ) (12.72)

0 , 
such that the variational Eq. (12.71) holds for any given ( 1 ) ∈ (Hs ) N ×
(Hs−1 ) with s > 1/2.
N

Proposition 12.12 For any given H ∈ L 2 (0, T ; (L 2 (1 )) M ) and any given
0 , U
(U 1 ) ∈ (H1−s ) N × (H−s ) N with s > 1/2, problem (II) and (II0) admits a unique
weak solution U . Moreover, the linear map

R: (U 1 , H ) → (U, U  )
0 , U (12.73)

is continuous with respect to the corresponding topologies.

Proof Define the linear form

L_t(Φ̂_0, Φ̂_1) = ⟨(Û_1, −Û_0), (Φ̂_0, Φ̂_1)⟩ + ∫_0^t ∫_{Γ_1} (DH(τ), Φ(τ)) dΓ dτ.  (12.74)

By the definition (12.63) of the p-norm and the continuous embedding (12.69), the linear form L_t is bounded on (H_s)^N × (H_{s−1})^N for any given t ≥ 0. Let S_t be the semigroup associated to the adjoint problem (12.16) on the Hilbert space (H_s)^N × (H_{s−1})^N, which is an isomorphism on (H_s)^N × (H_{s−1})^N. The composed linear form L_t ∘ S_t^{−1} is bounded on (H_s)^N × (H_{s−1})^N. Then, by the Riesz–Fréchet representation theorem, there exists a unique element (U′(t), −U(t)) ∈ (H_{−s})^N × (H_{1−s})^N, such that

L_t ∘ S_t^{−1}((Φ(t), Φ′(t))) = ⟨(U′(t), −U(t)), (Φ(t), Φ′(t))⟩  (12.75)

for any given (Φ̂_0, Φ̂_1) ∈ (H_s)^N × (H_{s−1})^N. Since

L_t ∘ S_t^{−1}((Φ(t), Φ′(t))) = L_t(Φ̂_0, Φ̂_1),  (12.76)

we get (12.71) for any given (Φ̂_0, Φ̂_1) ∈ (H_s)^N × (H_{s−1})^N. Moreover, we have

‖(U′(t), −U(t))‖_{(H_{−s})^N × (H_{1−s})^N} ≤ C_T ( ‖(Û_1, Û_0)‖_{(H_{−s})^N × (H_{1−s})^N} + ‖H‖_{L²(0,T;(L²(Γ_1))^M)} ).  (12.77)

Then, by a classical density argument, we get the regularity (12.72). The proof is thus complete. □

Definition 12.13 System (II) is exactly null controllable at the time T in the space (H_1)^N × (H_0)^N, if there exists a positive constant T > 0, such that for any given (Û_0, Û_1) ∈ (H_1)^N × (H_0)^N, there exists a boundary control H ∈ L²(0, T; (L²(Γ_1))^M), such that problem (II) and (II0) admits a unique weak solution U satisfying the final condition

t = T:  U = U′ = 0.  (12.78)

Theorem 12.14 Assume that M = N. Then there exists a positive constant T > 0, such that system (II) is exactly null controllable at the time T for any given initial data (Û_0, Û_1) ∈ (H_1)^N × (H_0)^N. Moreover, we have the continuous dependence

‖H‖_{L²(0,T;(L²(Γ_1))^N)} ≤ c ‖(Û_0, Û_1)‖_{(H_1)^N × (H_0)^N}.  (12.79)

Proof Let Φ be the solution to the adjoint problem (12.4) in (H_s)^N × (H_{s−1})^N with s > 1/2. Let

H = D^{−1} Φ|_{Γ_1}.  (12.80)

Because of the first inclusion in (12.69), we have H ∈ L²(0, T; (L²(Γ_1))^N). Then, by Proposition 12.12, the corresponding backward problem

U″ − ΔU + AU = 0   in (0, T) × Ω,
U = 0              on (0, T) × Γ_0,
∂_ν U = Φ          on (0, T) × Γ_1,       (12.81)
t = T:  U = U′ = 0   in Ω

admits a unique weak solution U with (12.72). Accordingly, we define the linear map

Λ(Φ̂_0, Φ̂_1) = (−U′(0), U(0)).  (12.82)

Clearly, Λ is a continuous map from (H_s)^N × (H_{s−1})^N into (H_{−s})^N × (H_{1−s})^N.
Next, noting (12.78), it follows from (12.71) that

⟨Λ(Φ̂_0, Φ̂_1), (Φ̃_0, Φ̃_1)⟩ = ∫_0^T ∫_{Γ_1} Φ(τ) Φ̃(τ) dΓ dτ,  (12.83)

where Φ̃ is the solution to the adjoint problem (12.16) with the initial data (Φ̃_0, Φ̃_1). It then follows that

⟨Λ(Φ̂_0, Φ̂_1), (Φ̃_0, Φ̃_1)⟩ ≤ ‖(Φ̂_0, Φ̂_1)‖_F ‖(Φ̃_0, Φ̃_1)‖_F.  (12.84)

By definition, (H_s)^N × (H_{s−1})^N is dense in F, then the linear form

(Φ̃_0, Φ̃_1) → ⟨Λ(Φ̂_0, Φ̂_1), (Φ̃_0, Φ̃_1)⟩  (12.85)

can be continuously extended to F, so that Λ(Φ̂_0, Φ̂_1) ∈ F′. Moreover, we have

‖Λ(Φ̂_0, Φ̂_1)‖_{F′} ≤ ‖(Φ̂_0, Φ̂_1)‖_F.  (12.86)

Then, Λ is a continuous linear map from F to F′. Therefore, the symmetric bilinear form

⟨Λ(Φ̂_0, Φ̂_1), (Φ̃_0, Φ̃_1)⟩,  (12.87)

where ⟨·, ·⟩ denotes the duality between the spaces (H_{−s})^N × (H_{1−s})^N and (H_s)^N × (H_{s−1})^N, is continuous and coercive in the product space F × F. By the Lax–Milgram lemma, Λ is an isomorphism from F onto F′. Then for any given (−Û_1, Û_0) ∈ F′, there exists an element (Φ̂_0, Φ̂_1) ∈ F, such that

Λ(Φ̂_0, Φ̂_1) = (−Û_1, Û_0).  (12.88)

This is precisely the exact boundary controllability of system (II) for any given initial data (Û_1, −Û_0) ∈ F′, in particular, for any given initial data (Û_1, −Û_0) ∈ (H_0)^N × (H_1)^N ⊂ F′, because of the second inclusion in (12.69).
Finally, from the definitions (12.80) and (12.63), we have

‖H‖_{L²(0,T;(L²(Γ_1))^N)} ≤ C ‖Φ‖_{L²(0,T;(L²(Γ_1))^N)} = C ‖(Φ̂_0, Φ̂_1)‖_F,  (12.89)

which together with (12.88) implies the continuous dependence:

‖H‖_{L²(0,T;(L²(Γ_1))^N)} ≤ C ‖Λ^{−1}‖_{L(F′, F)} ‖(Û_0, Û_1)‖_{(H_1)^N × (H_0)^N}.  (12.90)

The proof is thus complete. □

12.5 Non-exact Boundary Controllability

In the case of fewer boundary controls, we have the following negative result.

Theorem 12.15 Assume that M < N. Then system (II) is not exactly boundary controllable for all initial data (Û_0, Û_1) ∈ (H_1)^N × (H_0)^N.

Proof Since M < N, there exists a non-null vector e ∈ R^N, such that D^T e = 0. We choose a special initial data as

Û_0 = θ e,  Û_1 = 0,  (12.91)

where θ ∈ D(Ω) is arbitrarily given. If system (II) is exactly boundary controllable, then, noting that Theorem 3.9 is still valid here, there exists a boundary control H such that

‖H‖_{L²(0,T;(L²(Γ_1))^M)} ≤ C ‖θ‖_{H¹(Ω)}.  (12.92)

Then, by Proposition 12.12 we have

‖U‖_{L²(0,T;(H^{1−s}(Ω))^N)} ≤ C ‖θ‖_{H¹(Ω)},  ∀ s > 1/2.  (12.93)

Now, taking the inner product of e with problem (II) and (II0) and setting φ = (e, U), we get

φ″ − Δφ = −(e, AU)        in (0, T) × Ω,
φ = 0                      on (0, T) × Γ_0,
∂_ν φ = 0                  on (0, T) × Γ_1,       (12.94)
t = 0:  φ = θ,  φ′ = 0     in Ω,
t = T:  φ = 0,  φ′ = 0     in Ω.

Noting (12.68), by the well-posedness and (12.93), we get

‖θ‖_{H^{2−s}(Ω)} ≤ C ‖U‖_{L²(0,T;(H^{1−s}(Ω))^N)} ≤ C′ ‖θ‖_{H¹(Ω)}.  (12.95)

Choosing s such that 1/2 < s < 1, we have 2 − s > 1, which gives a contradiction. The proof is then complete. □

Remark 12.16 As shown in the proof of Theorem 12.14, a weaker regularity such as (U, U′) ∈ C⁰([0, T]; (H_0)^N × (H_{−1})^N) is sufficient to make sense of the value (U(0), U′(0)), and is therefore sufficient for proving the exact boundary controllability. At this stage, it is not necessary to pay much attention to the regularity of the weak solution with respect to the space variable. However, in order to establish the non-exact boundary controllability in Theorem 12.15, this regularity becomes indispensable for the argument of compact perturbation. In the case with Dirichlet boundary controls, the weak solution has the same regularity as the controllable initial data. This regularity yields the non-exact boundary controllability in the case of fewer boundary controls (cf. Chap. 3). However, for Neumann boundary controls, the direct inequality is much weaker than the inverse inequality. For example, in Proposition 12.12, we can get only (U, U′) ∈ C⁰([0, T]; (H_{1−s})^N × (H_{−s})^N) for any given s > 1/2, while the controllable initial data (Û_0, Û_1) lie in the space (H_1)^N × (H_0)^N. Even though this regularity is not sharp in general, it is already sufficient for the proof of the non-exact boundary controllability of system (II). Nevertheless, the gap of regularity between the solution and its initial data could cause some problems for the non-exact boundary controllability with coupled Robin boundary controls (cf. Part V and Part VI below).

Remark 12.17 As for the system with Dirichlet boundary controls discussed in Part I, we have shown in Theorems 12.14 and 12.15 that system (II) with Neumann boundary controls is exactly boundary controllable if and only if the boundary controls have the same number as the state variables or the wave equations. Of course, the non-exact boundary controllability is valid only in the framework that all the components of the initial data are in the same energy space. For example, the authors of [65] considered the exact boundary controllability by means of only one boundary control for a system of two wave equations with initial data of different levels of finite energy. More specifically, the exact boundary controllability by means of only one boundary control for a cascade system of N wave equations was established in [1, 2]. On the other hand, in contrast with the exact boundary controllability, the approximate boundary controllability is more flexible with respect to the number of boundary controls, and is closely related to the so-called Kalman's criterion on the rank of an enlarged matrix composed of the coupling matrix A and the boundary control matrix D (cf. Part IV below).
Chapter 13
Exact Boundary Synchronization and
Non-exact Boundary Synchronization

In the case of partial lack of boundary controls, we consider the exact boundary syn-
chronization and the non-exact boundary synchronization in this chapter for system
(II) with Neumann boundary controls.

13.1 Definition

Let

U = (u^{(1)}, · · · , u^{(N)})^T  and  H = (h^{(1)}, · · · , h^{(M)})^T  (13.1)

with M ≤ N. Consider the coupled system (II) with the initial condition (II0).
For simplifying the statement and without loss of generality, in what follows we always suppose that mes(Γ_0) ≠ 0 and denote

H_0 = L²(Ω),  H_1 = H¹_{Γ_0}(Ω),  L = L²_{loc}(0, +∞; L²(Γ_1)),  (13.2)

where H¹_{Γ_0}(Ω) is the subspace of H¹(Ω) composed of all the functions with null trace on Γ_0.
According to Theorem 12.14, when M = N, there exists a constant T > 0, such that system (II) is exactly null controllable at the time T for any given initial data (Û_0, Û_1) ∈ (H_1)^N × (H_0)^N.
On the other hand, according to Theorem 12.15, if there is a lack of boundary controls, namely, when M < N, then no matter how large T > 0 is, system (II) is not exactly null controllable at the time T for any given initial data (Û_0, Û_1) ∈ (H_1)^N × (H_0)^N.

Thus, it is necessary to discuss whether system (II) is controllable in some weaker


senses when there is a lack of boundary controls, namely, when M < N . Although
the results are similar to those for the coupled system of wave equations with Dirichlet


boundary controls, discussed in Part I, since the solution to a coupled system of wave
equations with Neumann boundary conditions has a relatively weaker regularity,
in order to realize the desired result, we need stronger function spaces, and the
corresponding adjoint problem is also different.
First, we give the following

Definition 13.1 System (II) is exactly synchronizable at the time T > 0 in the space (H_1)^N × (H_0)^N if for any given initial data (Û_0, Û_1) ∈ (H_1)^N × (H_0)^N, there exists a boundary control H ∈ L^M with compact support in [0, T], such that the weak solution U = U(t, x) to the mixed initial-boundary value problem (II) and (II0) satisfies

t ≥ T:  u^{(1)} ≡ · · · ≡ u^{(N)} := u,  (13.3)

where u = u(t, x), being unknown a priori, is called the corresponding exactly synchronizable state.

The above definition requires that system (II) maintains the exactly synchronizable
state even though the boundary control is canceled after the time T .
Let

C_1 = ⎛ 1  −1              ⎞
      ⎜     1  −1           ⎟
      ⎜         ⋱    ⋱     ⎟
      ⎝              1  −1  ⎠ _{(N−1)×N}      (13.4)

be the corresponding matrix of synchronization. C_1 is a full row-rank matrix, and Ker(C_1) = Span{e_1}, where e_1 = (1, · · · , 1)^T. Clearly, the exact boundary synchronization (13.3) can be equivalently written as

t ≥ T:  C_1 U(t, x) ≡ 0  in Ω.  (13.5)

13.2 Condition of C1 -Compatibility

We have

Theorem 13.2 Assume that M < N. If system (II) is exactly synchronizable in the space (H_1)^N × (H_0)^N, then the coupling matrix A = (a_ij) should satisfy the following condition of compatibility (the row-sum condition: the sum of the elements in each row is the same):

Σ_{j=1}^{N} a_ij := a,  (13.6)

where a is a constant independent of i = 1, · · · , N.

Proof By Theorem 12.15, since M < N, system (II) is not exactly null controllable, so there exists an initial data (Û_0, Û_1) ∈ (H_1)^N × (H_0)^N, such that for any given boundary control H, the corresponding exactly synchronizable state u(t, x) ≢ 0. Then, noting (13.3), the solution to problem (II) and (II0) corresponding to this initial data satisfies

u″ − Δu + (Σ_{j=1}^{N} a_ij) u = 0  in D′((T, +∞) × Ω)  (13.7)

for all i = 1, · · · , N. Then, we have

(Σ_{j=1}^{N} a_kj − Σ_{j=1}^{N} a_ij) u = 0  in D′((T, +∞) × Ω)  (13.8)

for i, k = 1, · · · , N. Since u ≢ 0, it follows that

Σ_{j=1}^{N} a_kj = Σ_{j=1}^{N} a_ij,  i, k = 1, · · · , N,  (13.9)

which is just the required condition of compatibility (13.6). □

By Proposition 2.15, it is easy to get

Lemma 13.3 The following properties are equivalent:


(i) The condition of compatibility (13.6) holds;
(ii) e = (1, · · · , 1)T is an eigenvector of A corresponding to the eigenvalue a
given by (13.6);
(iii) Ker(C1 ) is a one-dimensional invariant subspace of A:

AKer(C1 ) ⊆ Ker(C1 ); (13.10)

(iv) There exists a unique matrix A1 of order (N − 1), such that

C 1 A = A1 C 1 . (13.11)

A_1 = (ā_ij) is called the reduced matrix of A by C_1, where

ā_ij = Σ_{p=j+1}^{N} (a_{i+1,p} − a_{i,p}) = Σ_{p=1}^{j} (a_{i,p} − a_{i+1,p})  (13.12)

for i, j = 1, · · · , N − 1.
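To make the algebra in Lemma 13.3 concrete, the following short numerical sketch (ours, not part of the original text; it assumes NumPy and uses a randomly generated coupling matrix forced to satisfy the row-sum condition) builds C_1, computes the reduced matrix A_1 from formula (13.12), and verifies the identity C_1 A = A_1 C_1 of (13.11).

```python
import numpy as np

N = 5
rng = np.random.default_rng(0)

# A random coupling matrix modified so that every row has the same sum a = 1,
# i.e. the condition of C1-compatibility (13.6) holds.
A = rng.standard_normal((N, N))
A += (1.0 - A.sum(axis=1))[:, None] / N

# C1: (N-1) x N synchronization matrix, row i has 1 in column i, -1 in column i+1.
C1 = np.zeros((N - 1, N))
for i in range(N - 1):
    C1[i, i], C1[i, i + 1] = 1.0, -1.0

# Reduced matrix A1 by formula (13.12): a1[i, j] = sum_{p > j} (A[i+1, p] - A[i, p]).
A1 = np.zeros((N - 1, N - 1))
for i in range(N - 1):
    for j in range(N - 1):
        A1[i, j] = np.sum(A[i + 1, j + 1:] - A[i, j + 1:])

# The defining relation (13.11): C1 A = A1 C1.
assert np.allclose(C1 @ A, A1 @ C1)

# Ker(C1) is spanned by e1 = (1, ..., 1)^T, which is an eigenvector of A.
e1 = np.ones(N)
assert np.allclose(C1 @ e1, 0.0)
assert np.allclose(A @ e1, 1.0 * e1)   # eigenvalue a = 1 by construction
print("C1 A = A1 C1 verified")
```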

In what follows we will refer to the row-sum condition (13.6) as the condition of C_1-compatibility.

13.3 Exact Boundary Synchronization and Non-exact


Boundary Synchronization

We have
Theorem 13.4 Under the condition of C_1-compatibility (13.10), system (II) is exactly synchronizable at some time T > 0 in the space (H_1)^N × (H_0)^N if and only if rank(C_1 D) = N − 1. Moreover, we have the following continuous dependence:

‖H‖_{L²(0,T;(L²(Γ_1))^{N−1})} ≤ c ‖C_1(Û_0, Û_1)‖_{(H_1)^{N−1} × (H_0)^{N−1}},  (13.13)

where c is a positive constant.

Proof Under the condition of C_1-compatibility (13.10), let

W = C_1 U,  Ŵ_0 = C_1 Û_0,  Ŵ_1 = C_1 Û_1.  (13.14)

Noting (13.11), it is easy to see that the original problem (II) and (II0) for U can be reduced to the following self-closed system for W = (w^{(1)}, · · · , w^{(N−1)})^T:

W″ − ΔW + A_1 W = 0   in (0, +∞) × Ω,
W = 0                  on (0, +∞) × Γ_0,       (13.15)
∂_ν W = D̄ H            on (0, +∞) × Γ_1

with the initial condition

t = 0:  W = Ŵ_0,  W′ = Ŵ_1  in Ω,  (13.16)

where D̄ = C_1 D.
Noting that C_1 is a surjection from (H_1)^N × (H_0)^N onto (H_1)^{N−1} × (H_0)^{N−1}, we easily check that the exact boundary synchronization of system (II) for U is equivalent to the exact boundary null controllability of the reduced system (13.15) for W. Thus, by means of Theorems 12.14 and 12.15, the exact boundary synchronization of system (II) is equivalent to the rank condition rank(D̄) = rank(C_1 D) = N − 1. Moreover, the continuous dependence (13.13) comes directly from (12.79). □

13.4 Attainable Set of Exactly Synchronizable States

In the case that system (II) possesses the exact boundary synchronization at the time T > 0, under the condition of C_1-compatibility (13.6), it is easy to see that for t ≥ T, the exactly synchronizable state u = u(t, x) defined by (13.3) satisfies the following wave equation with homogeneous boundary conditions:

u″ − Δu + a u = 0   in (T, +∞) × Ω,
u = 0               on (T, +∞) × Γ_0,       (13.17)
∂_ν u = 0           on (T, +∞) × Γ_1,

where a is given by (13.6). Hence, the evolution of the exactly synchronizable state u = u(t, x) with respect to t is completely determined by the values of (u, u′) at the time t = T:

t = T:  u = u_0,  u′ = u_1  in Ω.  (13.18)

We have
Theorem 13.5 Under the condition of C_1-compatibility (13.6), the attainable set of the values (u, u′) at the time t = T of the exactly synchronizable state u = u(t, x) is the whole space H_1 × H_0 as the initial data (Û_0, Û_1) vary in the space (H_1)^N × (H_0)^N.
N

Proof For any given (ũ_0, ũ_1) ∈ H_1 × H_0, by solving the following backward problem

u″ − Δu + a u = 0   in (0, T) × Ω,
u = 0               on (0, T) × Γ_0,       (13.19)
∂_ν u = 0           on (0, T) × Γ_1

with the final condition

t = T:  u = ũ_0,  u′ = ũ_1  in Ω,  (13.20)

we get the corresponding solution u = u(t, x). Then, under the condition of C_1-compatibility (13.6), the function

U(t, x) = u(t, x) e_1  (13.21)

with e_1 = (1, · · · , 1)^T is the solution to problem (II) and (II0) with the null boundary control H ≡ 0 and the initial condition

t = 0:  U = u(0, x) e_1,  U′ = u′(0, x) e_1.  (13.22)

Therefore, by solving problem (II) and (II0) with the null boundary control and the initial condition (13.22), we can reach any given exactly synchronizable state (ũ_0, ũ_1) at the time t = T. This fact shows that any given state (ũ_0, ũ_1) ∈ H_1 × H_0 can be expected to be an exactly synchronizable state. Consequently, the set of the values (u(T), u′(T)) of the exactly synchronizable state u = u(t, x) at the time T is the whole space H_1 × H_0 as the initial data (Û_0, Û_1) vary in the space (H_1)^N × (H_0)^N. The proof is complete. □

The determination of the exactly synchronizable state u for each given initial data (Û_0, Û_1) will be considered in Chap. 15.
Chapter 14
Exact Boundary Synchronization
by p-Groups

The exact boundary synchronization by p-groups will be considered in this chapter


for system (II) with further lack of Neumann boundary controls.

14.1 Definition

When there is a further lack of boundary controls, we consider the exact boundary
synchronization by p-groups with p  1. This indicates that the components of U
are divided into p groups:

(u (1) , · · · , u (n 1 ) ), (u (n 1 +1) , · · · , u (n 2 ) ), · · · , (u (n p−1 +1) , · · · , u (n p ) ), (14.1)

where 0 = n 0 < n 1 < n 2 < · · · < n p = N are integers such that n r − n r −1  2 for
all 1  r  p.
Definition 14.1 System (II) is exactly synchronizable by p-groups at the time T > 0 in the space (H_1)^N × (H_0)^N if for any given initial data (Û_0, Û_1) ∈ (H_1)^N × (H_0)^N, there exists a boundary control H ∈ L^M with compact support in [0, T], such that the weak solution U = U(t, x) to problem (II) and (II0) satisfies

t ≥ T:  u^{(i)} = u_r,  n_{r−1} + 1 ≤ i ≤ n_r,  1 ≤ r ≤ p,  (14.2)

where u = (u_1, · · · , u_p)^T, being unknown a priori, is called the corresponding exactly synchronizable state by p-groups.
Let S_r be an (n_r − n_{r−1} − 1) × (n_r − n_{r−1}) full row-rank matrix:

S_r = ⎛ 1  −1              ⎞
      ⎜     1  −1           ⎟
      ⎜         ⋱    ⋱     ⎟ ,   1 ≤ r ≤ p,    (14.3)
      ⎝              1  −1  ⎠

and let C_p be the following (N − p) × N matrix of synchronization by p-groups:

C_p = ⎛ S_1                 ⎞
      ⎜      S_2             ⎟
      ⎜            ⋱        ⎟ .    (14.4)
      ⎝               S_p    ⎠

Obviously, we have

Ker(C_p) = Span{e_1, · · · , e_p},  (14.5)

where for 1 ≤ r ≤ p,

(e_r)_i = { 1,  n_{r−1} + 1 ≤ i ≤ n_r,
            0,  otherwise.               (14.6)

Thus, the exact boundary synchronization by p-groups (14.2) can be equivalently written as

t ≥ T:  C_p U ≡ 0  in Ω,  (14.7)

or equivalently,

t ≥ T:  U = Σ_{r=1}^{p} u_r e_r  in Ω.  (14.8)

14.2 Condition of C p -Compatibility

We have
Theorem 14.2 Assume that system (II) is exactly synchronizable by p-groups. Then
we necessarily have M  N − p. In particular, when M = N − p, the coupling
matrix A = (ai j ) should satisfy the following condition of C p -compatibility:

AKer(C p ) ⊆ Ker(C p ). (14.9)



Proof Noting that Lemma 6.3 is proved in a way independent of the applied boundary conditions, it can still be used in the case with Neumann boundary controls.
Since we have (14.7), if A Ker(C_p) ⊄ Ker(C_p), by Lemma 6.3, we can construct an enlarged full row-rank (N − p + 1) × N matrix C̃_{p−1} such that

t ≥ T:  C̃_{p−1} U ≡ 0  in Ω.

If A Ker(C̃_{p−1}) ⊄ Ker(C̃_{p−1}), still by Lemma 6.3, we can construct another enlarged full row-rank (N − p + 2) × N matrix C̃_{p−2} such that

t ≥ T:  C̃_{p−2} U ≡ 0  in Ω,

and so on. This procedure should stop at the r-th step with 0 ≤ r ≤ p. Thus, we get an enlarged full row-rank (N − p + r) × N matrix C̃_{p−r} such that

t ≥ T:  C̃_{p−r} U ≡ 0  in Ω  (14.10)

and

A Ker(C̃_{p−r}) ⊆ Ker(C̃_{p−r}).  (14.11)

Then, by Proposition 2.15, there exists a unique matrix Ã of order (N − p + r), such that

C̃_{p−r} A = Ã C̃_{p−r}.  (14.12)

Setting W = C̃_{p−r} U in problem (II) and (II0), we get the following reduced problem for W = (w^{(1)}, · · · , w^{(N−p+r)})^T:

W″ − ΔW + Ã W = 0   in (0, +∞) × Ω,
W = 0               on (0, +∞) × Γ_0,       (14.13)
∂_ν W = D̃ H         on (0, +∞) × Γ_1

with the initial condition

t = 0:  W = C̃_{p−r} Û_0,  W′ = C̃_{p−r} Û_1  in Ω,  (14.14)

where D̃ = C̃_{p−r} D. Moreover, by (14.10) we have

t ≥ T:  W ≡ 0.  (14.15)

Noting that C̃_{p−r} is an (N − p + r) × N full row-rank matrix, the linear mapping

(Û_0, Û_1) → (C̃_{p−r} Û_0, C̃_{p−r} Û_1)  (14.16)

is a surjection from (H_1)^N × (H_0)^N onto (H_1)^{N−p+r} × (H_0)^{N−p+r}, so system (14.13) is exactly null controllable at the time T in the space (H_1)^{N−p+r} × (H_0)^{N−p+r}. By Theorems 12.14 and 12.15, we necessarily have

rank(C̃_{p−r} D) = N − p + r,

then we get

M = rank(D) ≥ rank(C̃_{p−r} D) = N − p + r ≥ N − p.  (14.17)

In particular, when M = N − p, we have r = 0, namely, the condition of


C p -compatibility (14.9) holds. 

Remark 14.3 The condition of C_p-compatibility (14.9) is equivalent to the fact that there exist some constants α_rs (1 ≤ r, s ≤ p) such that

A e_r = Σ_{s=1}^{p} α_sr e_s,  1 ≤ r ≤ p,  (14.18)

or, noting (14.6), A satisfies the following row-sum condition by blocks:

Σ_{j=n_{s−1}+1}^{n_s} a_ij = α_rs,  n_{r−1} + 1 ≤ i ≤ n_r,  1 ≤ r, s ≤ p.  (14.19)

Especially, this condition of compatibility becomes the row-sum condition (13.6) when p = 1.
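As an illustration (ours, not from the original text), the following Python sketch builds the synchronization matrix C_p of (14.3)–(14.4) from prescribed group sizes and tests the condition of C_p-compatibility (14.9) by checking the equivalent block row-sum condition (14.19); the sample coupling matrix is an arbitrary choice used only for the test.

```python
import numpy as np

def build_Cp(group_sizes):
    """Assemble the (N - p) x N synchronization matrix C_p of (14.4) as a
    block diagonal of the matrices S_r defined in (14.3)."""
    N, p = sum(group_sizes), len(group_sizes)
    Cp = np.zeros((N - p, N))
    row, col = 0, 0
    for size in group_sizes:
        for i in range(size - 1):                 # S_r has (size - 1) rows
            Cp[row + i, col + i] = 1.0
            Cp[row + i, col + i + 1] = -1.0
        row += size - 1
        col += size
    return Cp

def is_Cp_compatible(A, group_sizes, tol=1e-12):
    """Check the block row-sum condition (14.19), i.e. A Ker(C_p) ⊆ Ker(C_p)."""
    bounds = np.cumsum([0] + list(group_sizes))
    for r in range(len(group_sizes)):
        rows = slice(bounds[r], bounds[r + 1])
        for s in range(len(group_sizes)):
            cols = slice(bounds[s], bounds[s + 1])
            if np.ptp(A[rows, cols].sum(axis=1)) > tol:
                return False                      # sums differ inside a block
    return True

# Example with p = 2 groups of sizes (2, 3), hence N = 5: a coupling matrix
# with constant blocks trivially satisfies (14.19).
sizes = (2, 3)
A = np.block([[1.0 * np.ones((2, 2)),  2.0 * np.ones((2, 3))],
              [0.5 * np.ones((3, 2)), -1.0 * np.ones((3, 3))]])
Cp = build_Cp(sizes)
print(is_Cp_compatible(A, sizes))                 # True

# Direct check of (14.9): A maps each e_r in Ker(C_p) into Ker(C_p).
e1 = np.r_[np.ones(2), np.zeros(3)]
e2 = np.r_[np.zeros(2), np.ones(3)]
print(np.allclose(Cp @ A @ e1, 0.0), np.allclose(Cp @ A @ e2, 0.0))  # True True
```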

14.3 Exact Boundary Synchronization by p-Groups and


Non-exact Boundary Synchronization by p-Groups

Theorem 14.4 Assume that the condition of C_p-compatibility (14.9) holds. Then system (II) is exactly synchronizable by p-groups if and only if rank(C_p D) = N − p. Moreover, we have the continuous dependence:

‖H‖_{L²(0,T;(L²(Γ_1))^{N−p})} ≤ c ‖C_p(Û_0, Û_1)‖_{(H_1)^{N−p} × (H_0)^{N−p}},  (14.20)

where c is a positive constant.

Proof Assume that the coupling matrix A = (a_ij) satisfies the condition of C_p-compatibility (14.9). By Proposition 2.15, there exists a unique matrix A_p of order (N − p), such that

C_p A = A_p C_p.  (14.21)

Setting

W = C_p U,  D̄ = C_p D,  (14.22)

we get the following reduced system for W = (w^{(1)}, · · · , w^{(N−p)})^T:

W″ − ΔW + A_p W = 0   in (0, +∞) × Ω,
W = 0                  on (0, +∞) × Γ_0,       (14.23)
∂_ν W = D̄ H            on (0, +∞) × Γ_1

with the initial condition

t = 0:  W = C_p Û_0,  W′ = C_p Û_1  in Ω.  (14.24)

Noting that C p is an (N − p) × N full row-rank matrix, it is easy to see that system


(II) is exactly synchronizable by p-groups if and only if the reduced system (14.23)
is exactly null controllable, or equivalently, by Theorem 12.14 and Theorem 12.15
if and only if rank(C p D) = N − p. Moreover, the continuous dependence (14.20)
comes from (12.79). 

14.4 Attainable Set of Exactly Synchronizable States


by p-Groups

Under the condition of C_p-compatibility (14.9), if system (II) is exactly synchronizable by p-groups at the time T > 0, then it is easy to see that for t ≥ T, the exactly synchronizable state by p-groups u = (u_1, · · · , u_p)^T satisfies the following coupled system of wave equations with homogeneous boundary conditions:

u″ − Δu + Ā u = 0   in (T, +∞) × Ω,
u = 0               on (T, +∞) × Γ_0,       (14.25)
∂_ν u = 0           on (T, +∞) × Γ_1,

where Ā = (α_rs) is given by (14.18). Hence, the evolution of the exactly synchronizable state by p-groups u = (u_1, · · · , u_p)^T with respect to t is completely determined by the values of (u, u′) at the time t = T:

t = T:  u = û_0,  u′ = û_1.  (14.26)

As in Theorem 13.5 for the case p = 1, we can show that the attainable set of all possible values of (u, u′) at t = T is the whole space (H_1)^p × (H_0)^p when the initial data (Û_0, Û_1) vary in the space (H_1)^N × (H_0)^N.
The determination of the exactly synchronizable state by p-groups u = (u_1, · · · , u_p)^T for each given initial data (Û_0, Û_1) will be considered in Chap. 15.
Chapter 15
Determination of Exactly Synchronizable
States by p-Groups

When system (II) possesses the exact boundary synchronization by p-groups, the cor-
responding exactly synchronizable states by p-groups will be studied in this chapter.

15.1 Introduction

Now, under the condition of C p -compatibility (14.9), we are going to discuss the
determination of exactly synchronizable states by p-groups u = (u 1 , · · · , u p )T with
p  1 for system (II). Generally speaking, exactly synchronizable states by p-groups
0 , U
should depend on the initial data (U 1 ) and applied boundary controls H . However,
when the coupling matrix A possesses some good properties, exactly synchroniz-
able states by p-groups are independent of applied boundary controls, and can be
determined entirely by the solution to a system of wave equations with homogeneous
boundary conditions for any given initial data (U 1 ).
0 , U
First, as in the case of Dirichlet boundary controls, we have the following

Theorem 15.1 Let U_ad denote the set of all boundary controls H = (h^{(1)}, · · · , h^{(N−p)})^T which can realize the exact boundary synchronization by p-groups at the time T > 0 for system (II). Assume that the condition of C_p-compatibility (14.9) holds. Then for ε > 0 small enough, the value of H ∈ U_ad on (0, ε) × Γ_1 can be arbitrarily chosen.

Proof Under the condition of C p -compatibility (14.9), the exact boundary synchro-
nization by p-groups of system (II) is equivalent to the exact boundary null con-
trollability of the reduced system (14.23). Let T0 > 0 be a number independent of
the initial data, such that the reduced system (14.23) is exactly null controllable at
any given time T with T > T_0. Therefore, taking an ε > 0 so small that T − ε > T_0,


system (II) is still exactly synchronizable by p-groups at the time T for the initial data given at t = ε.
For any given initial data (Û_0, Û_1) ∈ (H_1)^N × (H_0)^N, we arbitrarily give

H̃ ∈ (C_0^∞([0, ε] × Γ_1))^{N−p}  (15.1)

and solve problem (II) and (II0) on the time interval [0, ε] with H = H̃. Let (Ũ, Ũ′) denote the corresponding solution. We check easily that

(Ũ, Ũ′) ∈ C⁰([0, ε]; (H_1)^N × (H_0)^N).  (15.2)

By Theorem 14.4, we can find a boundary control

H̄ ∈ L²(ε, T; (L²(Γ_1))^{N−p})  (15.3)

which realizes the exact boundary synchronization by p-groups at the time t = T for system (II) with the initial condition

t = ε:  Ū = Ũ,  Ū′ = Ũ′.  (15.4)

Let

H = H̃ for t ∈ (0, ε), H = H̄ for t ∈ (ε, T);  U = Ũ for t ∈ (0, ε), U = Ū for t ∈ (ε, T).  (15.5)

It is easy to check that U is the solution to problem (II) and (II0) with the boundary control H. Then, system (II) is exactly synchronizable by p-groups still at the time T for the initial data (Û_0, Û_1) given at t = 0. □

15.2 Determination and Estimation of Exactly


Synchronizable States by p-Groups

Exactly synchronizable states by p-groups are closely related to the properties of the
coupling matrix A.
Let

D_{N−p} = {D ∈ M_{N×(N−p)}(R) : rank(D) = rank(C_p D) = N − p}.  (15.6)

By Proposition 6.12, D ∈ D N − p if and only if it can be expressed by

D = C_p^T D_1 + (e_1, · · · , e_p) D_0,  (15.7)



where e1 , · · · , e p are given by (14.6), D1 is an invertible matrix of order (N − p),


and D0 is a p × (N − p) matrix.
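For illustration (our sketch, not part of the original text), the following fragment builds a control matrix D of the form (15.7) for a small example with p = 2 groups and checks numerically that rank(D) = rank(C_p D) = N − p, i.e. that D belongs to the class D_{N−p} of (15.6); the particular choices of D_1 and D_0 are arbitrary test data.

```python
import numpy as np
from numpy.linalg import matrix_rank

# p = 2 groups of sizes (2, 3), N = 5; C_p written out explicitly (cf. (14.4)).
N, p = 5, 2
Cp = np.array([[1., -1.,  0.,  0.,  0.],
               [0.,  0.,  1., -1.,  0.],
               [0.,  0.,  0.,  1., -1.]])

# Basis of Ker(C_p): the block indicator vectors e_1, e_2 of (14.6).
e = np.zeros((N, p))
e[0:2, 0] = 1.0
e[2:5, 1] = 1.0

# Representation (15.7): D = C_p^T D_1 + (e_1, ..., e_p) D_0 with D_1 invertible.
D1 = np.diag([1.0, 2.0, 3.0])        # any invertible (N - p) x (N - p) matrix
D0 = np.ones((p, N - p))             # any p x (N - p) matrix
D = Cp.T @ D1 + e @ D0

# Membership in the class D_{N-p} defined in (15.6).
print(matrix_rank(D) == N - p, matrix_rank(Cp @ D) == N - p)   # True True
```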
We have

Theorem 15.2 Under the condition of C_p-compatibility (14.9), assume that A^T possesses an invariant subspace Span{E_1, · · · , E_p} which is bi-orthonormal to Ker(C_p) = Span{e_1, · · · , e_p}:

(E_r, e_s) = δ_rs,  1 ≤ r, s ≤ p.  (15.8)

Then there exists a boundary control matrix D ∈ D_{N−p}, such that each exactly synchronizable state by p-groups u = (u_1, · · · , u_p)^T is independent of applied boundary controls, and can be determined as follows:

t ≥ T:  u = ψ,  (15.9)

where ψ = (ψ_1, · · · , ψ_p)^T is the solution to the following problem with homogeneous boundary conditions for r = 1, · · · , p:

ψ_r″ − Δψ_r + Σ_{s=1}^{p} α_rs ψ_s = 0   in (0, +∞) × Ω,
ψ_r = 0                                  on (0, +∞) × Γ_0,
∂_ν ψ_r = 0                              on (0, +∞) × Γ_1,       (15.10)
t = 0:  ψ_r = (E_r, Û_0),  ψ_r′ = (E_r, Û_1)   in Ω,

where α_rs (1 ≤ r, s ≤ p) are given by (14.19).

Proof Noting that Span{E_1, · · · , E_p} is bi-orthonormal to Ker(C_p) = Span{e_1, · · · , e_p}, and taking

D_1 = I_{N−p},  D_0 = −E^T C_p^T  with  E = (E_1, · · · , E_p)  (15.11)

in (15.7), we get a boundary control matrix D ∈ D_{N−p}, and it is easy to see that

E_s ∈ Ker(D^T),  1 ≤ s ≤ p.  (15.12)

On the other hand, since Span{E_1, · · · , E_p} is an invariant subspace of A^T, noting (14.18) and the bi-orthonormality (15.8), we easily get

A^T E_s = Σ_{r=1}^{p} α_sr E_r,  1 ≤ s ≤ p.  (15.13)

Let ψ_s = (E_s, U) for s = 1, · · · , p. Taking the inner product with E_s on both sides of problem (II) and (II0), we get (15.10). Finally, for the exactly synchronizable state by p-groups u = (u_1, · · · , u_p)^T, by (14.8) we have

t ≥ T:  ψ_s = (E_s, U) = Σ_{r=1}^{p} (E_s, e_r) u_r = u_s,  1 ≤ s ≤ p.  (15.14)

The proof is complete. □


When A^T does not possess any invariant subspace Span{E_1, · · · , E_p} which is bi-orthonormal to Ker(C_p) = Span{e_1, · · · , e_p}, we can use the solution of problem (15.10) to give an estimate on each exactly synchronizable state by p-groups.
Theorem 15.3 Under the condition of C_p-compatibility (14.9), assume that there exists a subspace Span{E_1, · · · , E_p} that is bi-orthonormal to the subspace Span{e_1, · · · , e_p}. Then there exists a boundary control matrix D ∈ D_{N−p}, such that each exactly synchronizable state by p-groups u = (u_1, · · · , u_p)^T satisfies the following estimate:

‖(u, u′)(t) − (ψ, ψ′)(t)‖_{(H_{2−s})^p × (H_{1−s})^p} ≤ c ‖C_p(Û_0, Û_1)‖_{(H_1)^{N−p} × (H_0)^{N−p}}  (15.15)

for all t ≥ T, where ψ = (ψ_1, · · · , ψ_p)^T is the solution to problem (15.10), s > 1/2, and c is a positive constant independent of the initial data.
Proof Since Span{E 1 , · · · , E p } is bi-orthonormal to Span{e1 , · · · , e p }, still by
(15.11), there exists a boundary control matrix D ∈ D N − p , such that (15.12) holds.
Noting (14.18), it is easy to see that

A^T E_s − Σ_{r=1}^{p} α_sr E_r ∈ (Ker(C_p))^⊥ = Im(C_p^T),

so there exists a vector R_s ∈ R^{N−p}, such that

A^T E_s − Σ_{r=1}^{p} α_sr E_r = C_p^T R_s.  (15.16)

Thus, taking the inner product with E_s on both sides of problem (II) and (II0), and setting φ_s = (E_s, U) for s = 1, · · · , p, we have

φ_s″ − Δφ_s + Σ_{r=1}^{p} α_sr φ_r = −(R_s, C_p U)   in (0, +∞) × Ω,
φ_s = 0                                              on (0, +∞) × Γ_0,
∂_ν φ_s = 0                                          on (0, +∞) × Γ_1,       (15.17)
t = 0:  φ_s = (E_s, Û_0),  φ_s′ = (E_s, Û_1)         in Ω,

where α_sr (1 ≤ r, s ≤ p) are defined by (14.19), and U = U(t, x) ∈ C_loc(0, +∞; (H_{1−s})^N) ∩ C¹_loc(0, +∞; (H_{−s})^N) with s > 1/2 is the solution to problem (II) and (II0). Moreover, we have


t ≥ T:  φ_s = (E_s, U) = Σ_{r=1}^{p} (E_s, e_r) u_r = u_s  (15.18)

for s = 1, · · · , p. Noting that (15.10) and (15.17) possess the same initial data and the same boundary conditions, by the well-posedness for a system of wave equations with Neumann boundary conditions, we have (cf. Chap. 3 in [70])

‖(ψ, ψ′)(t) − (φ, φ′)(t)‖²_{(H_{2−s})^p × (H_{1−s})^p} ≤ c ∫_0^T ‖C_p U(τ)‖²_{(H_{1−s})^{N−p}} dτ,  ∀ 0 ≤ t ≤ T,  (15.19)

where c is a positive constant independent of the initial data. Noting that W = C_p U, by the well-posedness of the reduced problem (14.23), it is easy to get

∫_0^T ‖C_p U(τ)‖²_{(H_{1−s})^{N−p}} dτ ≤ c ( ‖C_p(Û_0, Û_1)‖²_{(H_1)^{N−p} × (H_0)^{N−p}} + ‖D̄H‖²_{L²(0,T;(L²(Γ_1))^{N−p})} ).  (15.20)

Moreover, by (14.20) we have

‖D̄H‖_{L²(0,T;(L²(Γ_1))^{N−p})} ≤ c ‖C_p(Û_0, Û_1)‖_{(H_1)^{N−p} × (H_0)^{N−p}}.  (15.21)

Substituting it into (15.20), we have

∫_0^T ‖C_p U(τ)‖²_{(H_{1−s})^{N−p}} dτ ≤ c ‖C_p(Û_0, Û_1)‖²_{(H_1)^{N−p} × (H_0)^{N−p}},  (15.22)

then, by (15.19) we get

‖(ψ, ψ′)(t) − (φ, φ′)(t)‖²_{(H_{2−s})^p × (H_{1−s})^p} ≤ c ‖C_p(Û_0, Û_1)‖²_{(H_1)^{N−p} × (H_0)^{N−p}}.  (15.23)

Finally, substituting (15.18) into (15.23), we get (15.15). □

Remark 15.4 Unlike the case with Dirichlet boundary controls, although the solution to problem (II) and (II0) with Neumann boundary controls possesses a weaker regularity, the solution to problem (15.10), which is used to estimate the exactly synchronizable state by p-groups, possesses a higher regularity than that of the original problem (II) and (II0) itself. This improved regularity makes it possible to approximate the exactly synchronizable state by p-groups by the solution to a relatively smoother problem.
If there does not exist any subspace Span{E_1, · · · , E_p} which is invariant for A^T and bi-orthonormal to Ker(C_p) = Span{e_1, · · · , e_p}, then, in order to express the exactly synchronizable state by p-groups of system (II), by the same procedure as in Sect. 7.3, we can always extend the subspace Span{e_1, · · · , e_p} to the minimal invariant subspace Span{e_1, · · · , e_q} of A with q ≥ p, so that A^T possesses an invariant subspace Span{E_1, · · · , E_q} which is bi-orthonormal to Span{e_1, · · · , e_q}.
Let P be the projection on the subspace Span{e_1, · · · , e_q} given by

P = Σ_{s=1}^{q} e_s ⊗ E_s,  (15.24)

in which the tensor product ⊗ is defined by

(e ⊗ E)U = (E, U) e = (E^T U) e,  ∀ U ∈ R^N.

P can be represented by a matrix of order N, such that

Im(P) = Span{e_1, · · · , e_q}  (15.25)

and

Ker(P) = Span{E_1, · · · , E_q}^⊥.  (15.26)

Moreover, it is easy to check that

P A = A P.  (15.27)

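As a finite-dimensional illustration (our own sketch, not from the text), the following fragment assembles the projection P of (15.24) from bi-orthonormal families {e_s} and {E_s} and verifies P² = P together with the commutation relation (15.27); the matrices are small random test data constructed so that Span{e_1, e_2} is invariant for A and the dual family is invariant for A^T.

```python
import numpy as np

rng = np.random.default_rng(1)
N, q = 4, 2

# Build A with an invariant subspace Span{e_1, e_2} and a bi-orthonormal
# A^T-invariant family {E_1, E_2}: A = Q diag(B1, B2) Q^{-1}.
Q = rng.standard_normal((N, N)) + N * np.eye(N)     # generically invertible
B = np.zeros((N, N))
B[:q, :q] = rng.standard_normal((q, q))
B[q:, q:] = rng.standard_normal((N - q, N - q))
A = Q @ B @ np.linalg.inv(Q)

e = Q[:, :q]                       # e_1, ..., e_q  (columns)
E = np.linalg.inv(Q).T[:, :q]      # E_1, ..., E_q  (columns), (E_r, e_s) = delta_rs

# Projection (15.24): P = sum_s e_s ⊗ E_s, i.e. P U = sum_s (E_s, U) e_s.
P = sum(np.outer(e[:, s], E[:, s]) for s in range(q))

assert np.allclose(E.T @ e, np.eye(q))     # bi-orthonormality (15.8)
assert np.allclose(P @ P, P)               # P is a projection
assert np.allclose(P @ A, A @ P)           # commutation (15.27)
assert np.allclose(P @ e, e)               # Im(P) contains Span{e_1, ..., e_q}
print("projection checks passed")
```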
Let U = U (t, x) be the solution to problem (II) and (II0). We define its synchroniz-
able part Us and controllable part Uc , respectively, as follows:

Us := PU, Uc := (I − P)U. (15.28)

If system (II) is exactly synchronizable by p-groups, then

U ∈ Span{e1 , · · · , e p } ⊆ Span{e1 , · · · , eq } = Im(P), (15.29)

hence we have

t ≥ T:  U_s = PU = U,  U_c = (I − P)U = 0.  (15.30)

Noting (15.27), multiplying P and (I − P) from the left on both sides of problem (II) and (II0), respectively, we see that the synchronizable part U_s of U satisfies the following problem:

U_s″ − ΔU_s + AU_s = 0   in (0, +∞) × Ω,
U_s = 0                   on (0, +∞) × Γ_0,
∂_ν U_s = P D H           on (0, +∞) × Γ_1,       (15.31)
t = 0:  U_s = P Û_0,  U_s′ = P Û_1   in Ω,

while the controllable part U_c of U satisfies the following problem:

U_c″ − ΔU_c + AU_c = 0   in (0, +∞) × Ω,
U_c = 0                   on (0, +∞) × Γ_0,
∂_ν U_c = (I − P) D H     on (0, +∞) × Γ_1,       (15.32)
t = 0:  U_c = (I − P) Û_0,  U_c′ = (I − P) Û_1   in Ω.

In fact, by (15.30), under the boundary control H, U_c with the initial data ((I − P)Û_0, (I − P)Û_1) ∈ Ker(P) × Ker(P) is exactly null controllable, while U_s with the initial data (PÛ_0, PÛ_1) ∈ Im(P) × Im(P) is exactly synchronizable.

Theorem 15.5 Assume that the condition of C p -compatibility (14.9) holds. If system
(II) is exactly synchronizable by p-groups, and the synchronizable part Us is inde-
pendent of applied boundary controls H for t  0, then A T possesses an invariant
subspace which is bi-orthonormal to Ker(C p ).

Proof It is sufficient to show that p = q. Then Span{E 1 , · · · , E p } will be the desired


subspace.
Suppose that the synchronizable part U_s is independent of applied boundary controls H for all t ≥ 0. Let H_1 and H_2 be two boundary controls which realize simultaneously the exact boundary synchronization by p-groups for system (II). It follows from (15.31) that

P D(H_1 − H_2) = 0  on (0, ε) × Γ_1.

By Theorem 15.1, the value of H_1 on (0, ε) × Γ_1 can be arbitrarily taken, then

P D = 0,

hence

Im(D) ⊆ Ker(P).

Noting (15.25), we have

dim Ker(P) = N − q and dim Im(D) = N − p,

then p = q. The proof is complete. 



Remark 15.6 In particular, for the initial data (Û_0, Û_1) such that PÛ_0 = PÛ_1 = 0, system (II) is exactly null controllable.

15.3 Determination of Exactly Synchronizable States

In the case of exact boundary synchronization, by the condition of C_1-compatibility (13.10), e_1 = (1, · · · , 1)^T is an eigenvector of A, corresponding to the eigenvalue a defined by (13.6). Let ε_1, · · · , ε_r and E_1, · · · , E_r with r ≥ 1 be the Jordan chains of A and A^T, respectively, corresponding to the eigenvalue a, such that Span{ε_1, · · · , ε_r} is bi-orthonormal to Span{E_1, · · · , E_r}. Thus, we have

A ε_l = a ε_l + ε_{l+1},    1 ≤ l ≤ r,
A^T E_k = a E_k + E_{k−1},  1 ≤ k ≤ r,       (15.33)
(E_k, ε_l) = δ_kl,          1 ≤ k, l ≤ r,

with

ε_r = e_1 = (1, · · · , 1)^T,  ε_{r+1} = 0,  E_0 = 0.  (15.34)

Let U = U(t, x) be the solution to problem (II) and (II0). If system (II) is exactly synchronizable, then

t ≥ T:  U = u ε_r,  (15.35)

where u = u(t, x) is the corresponding exactly synchronizable state. For the synchronizable part U_s and the controllable part U_c, we have

t ≥ T:  U_s = u ε_r,  U_c = 0.  (15.36)

Setting

ψ_k = (E_k, U),  1 ≤ k ≤ r,  (15.37)

and noting (15.24) and (15.28), similarly we have

U_s = Σ_{k=1}^{r} (E_k, U) ε_k = Σ_{k=1}^{r} ψ_k ε_k.  (15.38)

Thus, (ψ_1, · · · , ψ_r) can be regarded as the coordinates of U_s under the basis {ε_1, · · · , ε_r}.
Taking the inner product with E_k on both sides of (15.31) and noting (15.35), we have

t ≥ T:  ψ_k = (E_k, U) = u (E_k, ε_r) = u δ_kr,  1 ≤ k ≤ r,  (15.39)

and then we easily get the following

Theorem 15.7 Let ε_1, · · · , ε_r and E_1, · · · , E_r be the Jordan chains of A and A^T, respectively, corresponding to the eigenvalue a given by (13.6), in which ε_r = e_1 = (1, · · · , 1)^T. Then, the exactly synchronizable state u is determined by

t ≥ T:  u = ψ_r,  (15.40)

where the synchronizable part U_s = (ψ_1, · · · , ψ_r) is determined by the following system (1 ≤ k ≤ r):

ψ_k″ − Δψ_k + a ψ_k + ψ_{k−1} = 0   in (0, +∞) × Ω,
ψ_k = 0                             on (0, +∞) × Γ_0,
∂_ν ψ_k = h_k                       on (0, +∞) × Γ_1,       (15.41)
t = 0:  ψ_k = (E_k, Û_0),  ψ_k′ = (E_k, Û_1)   in Ω,

where
ψ_0 = 0,  h_k = E_k^T P D H.  (15.42)

Remark 15.8 When r = 1, A^T possesses an eigenvector E_1 such that

(E_1, ε_1) = 1.  (15.43)

By Theorem 15.2, we can choose a boundary control matrix D such that the exactly synchronizable state is independent of applied boundary controls H.
When r > 1, the exactly synchronizable state depends on applied boundary controls H. Moreover, in order to get the exactly synchronizable state, we must solve the whole coupled problem (15.41)–(15.42).

15.4 Determination of Exactly Synchronizable States


by 3-Groups

In this section, for an example, we will give the details on the determination of exactly
synchronizable states by 3-groups for system (II). Here, we always assume that the
condition of C3 -compatibility (14.9) is satisfied and that Ker(C3 ) is A-marked. All
the exactly synchronizable states by p-groups for general p  1 can be discussed in
a similar way.

By the exact boundary synchronization by 3-groups, when t ≥ T, we have

u^{(1)} ≡ · · · ≡ u^{(n_1)} := u_1,  (15.44)
u^{(n_1+1)} ≡ · · · ≡ u^{(n_2)} := u_2,  (15.45)
u^{(n_2+1)} ≡ · · · ≡ u^{(N)} := u_3.  (15.46)

Let

e_1 = (1, · · · , 1, 0, · · · , 0, 0, · · · , 0)^T,
e_2 = (0, · · · , 0, 1, · · · , 1, 0, · · · , 0)^T,      (15.47)
e_3 = (0, · · · , 0, 0, · · · , 0, 1, · · · , 1)^T,

where the three blocks of components have lengths n_1, n_2 − n_1, and N − n_2, respectively.

We have

Ker(C_3) = Span{e_1, e_2, e_3}  (15.48)

and the exact boundary synchronization by 3-groups given by (15.44)–(15.46) means that

t ≥ T:  U = u_1 e_1 + u_2 e_2 + u_3 e_3.  (15.49)

Since the invariant subspace Span{e_1, e_2, e_3} of A is of dimension 3, it may contain one, two, or three eigenvectors of A, and thus we can distinguish the following three cases.
(i) A admits three eigenvectors f_r, g_s, and h_t contained in Span{e_1, e_2, e_3}, corresponding to the eigenvalues λ, μ, and ν, respectively. Let f_1, · · · , f_r; g_1, · · · , g_s; and h_1, · · · , h_t be the Jordan chains corresponding to these eigenvectors of A:

A f_i = λ f_i + f_{i+1},  1 ≤ i ≤ r,  f_{r+1} = 0,
A g_j = μ g_j + g_{j+1},  1 ≤ j ≤ s,  g_{s+1} = 0,      (15.50)
A h_k = ν h_k + h_{k+1},  1 ≤ k ≤ t,  h_{t+1} = 0,

and let ξ_1, · · · , ξ_r; η_1, · · · , η_s; and ζ_1, · · · , ζ_t be the Jordan chains corresponding to the related eigenvectors of A^T:

A^T ξ_i = λ ξ_i + ξ_{i−1},  1 ≤ i ≤ r,  ξ_0 = 0,
A^T η_j = μ η_j + η_{j−1},  1 ≤ j ≤ s,  η_0 = 0,      (15.51)
A^T ζ_k = ν ζ_k + ζ_{k−1},  1 ≤ k ≤ t,  ζ_0 = 0,

such that

(f_i, ξ_l) = δ_il,  (g_j, η_m) = δ_jm,  (h_k, ζ_n) = δ_kn  (15.52)

and

(f_i, η_m) = (f_i, ζ_n) = (g_j, ξ_l) = (g_j, ζ_n) = (h_k, ξ_l) = (h_k, η_m) = 0  (15.53)

for all i, l = 1, · · · , r; j, m = 1, · · · , s; and k, n = 1, · · · , t.


Taking the inner product with ξi , η j , and ζk on both sides of problem (II) and
(II0), respectively, and denoting

φi = (U, ξi ), ψ j = (U, η j ), θk = (U, ζk ) (15.54)

for i = 1, · · · , r ; j = 1, · · · , s; and k = 1, · · · t, we get


⎧ 

⎪ φi − φi + λφi + φi−1 = 0 in (0, +∞) × ,

⎨φ = 0
i on (0, +∞) × 0 ,
(15.55)

⎪ ∂ ν φi = ξi D H
T
on (0, +∞) × 1 ,

⎩ 0 ), φi = (ξi , U
t = 0 : φi = (ξi , U 1 ) in ,

⎧ 

⎪ ψ j − ψ j + μψ j + ψ j−1 = 0 in (0, +∞) × ,

⎨ψ = 0
j on (0, +∞) × 0 ,
(15.56)

⎪ ∂ ν ψ j = η Tj D H on (0, +∞) × 1 ,

⎩ 0 ), ψ j = (η j , U
t = 0 : ψ j = (η j , U 1 ) in 

and
⎧ 

⎪ θk − θk + νθk + θk−1 = 0 in (0, +∞) × ,

⎨θ = 0
k on (0, +∞) × 0 ,
(15.57)

⎪ ∂ν θk = ζkT D H on (0, +∞) × 1 ,

⎩ 0 ), θk = (ζk , U
t = 0 : θk = (ζk , U 1 ) in 

with
φ0 = ψ0 = θ0 = 0. (15.58)

Solving the reduced problems (15.55)–(15.57), we get φ1 , · · · , φr ; ψ1 , · · · , ψs ; and


θ1 , · · · , θt . Noting that fr , gs , and h t are linearly independent eigenvectors and con-
tained in Span{e1 , e2 , e3 }, we have


⎨e1 = α1 fr + α2 gs + α3 h t ,
e2 = β1 fr + β2 gs + β3 h t , (15.59)


e3 = γ1 fr + γ2 gs + γ3 h t .

Then, noting (15.52)–(15.53), it follows from (15.49) that




⎨φr = α1 u 1 + β1 u 2 + γ1 u 3 ,
tT : ψs = α2 u 1 + β2 u 2 + γ2 u 3 , (15.60)


θt = α3 u 1 + β3 u 2 + γ3 u 3 .

Since e1 , e2 , e3 are linearly independent, the linear system (15.59) is invertible. Then,
the corresponding exactly synchronizable state by 3-groups u = (u 1 , u 2 , u 3 )T can
be uniquely determined by solving the linear system (15.60).
In particular, when r = s = t = 1, the invariant subspace Span{ξ1 , η1 , ζ1 } of A T
is bi-orthonormal to Ker(C3 ) = Span{e1 , e2 , e3 }. By Theorem 15.2, there exists a
boundary control matrix D ∈ D N −3 , such that the exactly synchronizable state by
3-groups u = (u 1 , u 2 , u 3 )T is independent of applied boundary controls.
(ii) A admits two eigenvectors fr and gs contained in Span{e1 , e2 , e3 }, correspond-
ing to eigenvalues λ and μ, respectively. Let f 1 , f 2 , · · · , fr and g1 , g2 , · · · , gs be the
Jordan chains corresponding to these eigenvectors of A:

A f i = λ f i + f i+1 , 1  i  r, fr +1 = 0,
(15.61)
Ag j = μg j + g j+1 , 1  j  s, gs+1 = 0

and let ξ1 , ξ2 , · · · , ξr and η1 , η2 , · · · , ηs be the Jordan chains corresponding to the


related eigenvectors of A T :

A T ξi = λξi + ξi−1 , 1  i  r, ξ0 = 0,
(15.62)
A T η j = μη j + η j−1 , 1  j  s, η0 = 0,

such that
( f i , ξl ) = δil , (g j , ηm ) = δ jm (15.63)

and
( f i , ηm ) = (g j , ξl ) = 0 (15.64)

for all i, l = 1, · · · , r and j, m = 1, · · · , s.


Taking the inner product with ξi and η j on both sides of problem (II) and (II0),
respectively, and denoting

φi = (U, ξi ), ψ j = (U, η j ), i = 1, · · · , r ; j = 1, · · · , s, (15.65)

we get again the reduced problems (15.55)–(15.56).


Since Span{e_1, e_2, e_3} is A-marked, either f_{r−1} or g_{s−1} lies in Span{e_1, e_2, e_3}. To fix ideas, we assume that f_{r−1} ∈ Span{e_1, e_2, e_3}. Then we have


⎨e1 = α1 fr + α2 fr −1 + α3 gs ,
e2 = β1 fr + β2 fr −1 + β3 gs , (15.66)


e3 = γ1 fr + γ2 fr −1 + γ3 gs .

Then, noting (15.63)–(15.64), it follows from (15.49) that




⎨φr = α1 u 1 + β1 u 2 + γ1 u 3 ,
tT : φr −1 = α2 u 1 + β2 u 2 + γ2 u 3 , (15.67)


ψs = α3 u 1 + β3 u 2 + γ3 u 3 .

Since e1 , e2 , e3 are linearly independent, the linear system (15.66) is invertible, then
the corresponding exactly synchronizable state by 3-groups u = (u 1 , u 2 , u 3 )T can
be determined by solving the linear system (15.67).
In particular, when r = 2, s = 1, the invariant subspace Span{ξ1 , ξ2 , η1 } of A T is
bi-orthonormal to Span{e1 , e2 , e3 }. By Theorem 15.2, there exists a boundary control
matrix D ∈ D N −3 , such that the corresponding exactly synchronizable state by 3-
groups u = (u 1 , u 2 , u 3 )T is independent of applied boundary controls.
(iii) A admits only one eigenvector fr contained in Span{e1 , e2 , e3 }, corresponding
to the eigenvalue λ. Let f 1 , f 2 , · · · , fr be the Jordan chain corresponding to this
eigenvector of A:

A f i = λ f i + f i+1 , 1  i  r, fr +1 = 0, (15.68)

and let ξ1 , ξ2 , · · · , ξr be the Jordan chain corresponding to the related eigenvector


of A T :
A T ξi = λξi + ξi−1 , 1  i  r, ξ0 = 0, (15.69)

such that
( f i , ξl ) = δil , i, l = 1, · · · , r. (15.70)

Taking the inner product with ξi on both sides of problem (II) and (II0), and
denoting
φi = (U, ξi ), i = 1, · · · r, (15.71)

we get again the reduced problem (15.55).



Since Span{e1 , e2 , e3 } is A-marked, fr −1 and fr −2 are necessarily contained in


Span{e1 , e2 , e3 }. We have


⎨e1 = α1 fr + α2 fr −1 + α3 fr −2 ,
e2 = β1 fr + β2 fr −1 + β3 fr −2 , (15.72)


e3 = γ1 fr + γ2 fr −1 + γ3 fr −2 .

Then, noting (15.70), it follows from (15.49) that




⎨φr = α1 u 1 + β1 u 2 + γ1 u 3 ,
tT : φr −1 = α2 u 1 + β2 u 2 + γ2 u 3 , (15.73)


φr −2 = α3 u 1 + β3 u 2 + γ3 u 3 .

Since e1 , e2 , e3 are linearly independent, the linear system (15.72) is invertible, then
the corresponding exactly synchronizable state by 3-groups u = (u 1 , u 2 , u 3 )T can
be uniquely determined by solving the linear system (15.73).
In particular, when r = 3, the invariant subspace {ξ1 , ξ2 , ξ3 } of A T is
bi-orthonormal to Span{e1 , e2 , e3 }. By Theorem 15.2, there exists a boundary con-
trol matrix D ∈ D N −3 , such that the exactly synchronizable state by 3-groups
u = (u 1 , u 2 , u 3 )T is independent of applied boundary controls.
Part IV
Synchronization for a Coupled System of
Wave Equations with Neumann Boundary
Controls: Approximate Boundary
Synchronization

In this part we will introduce the concept of approximate boundary null controlla-
bility, approximate boundary synchronization and approximate boundary synchro-
nization by groups, and establish the corresponding theory for system (II) of wave
equations with Neumann boundary controls in the case with fewer boundary con-
trols. Moreover, we will show that Kalman’s criterion of various kinds will play an
important role in the discussion.
Chapter 16
Approximate Boundary Null
Controllability

In this chapter, we will define the approximate boundary null controllability for
system (II) and the D-observability for the adjoint problem, and show that these
two concepts are equivalent to each other. Moreover, the corresponding Kalman’s
criterion is introduced and studied.

16.1 Definitions

For the coupled system (II) of wave equations with Neumann boundary controls, we
still use the notations given in Chaps. 12 and 13.

Definition 16.1 Let s > 1/2. System (II) is approximately null controllable at the time T > 0 if for any given initial data (Û_0, Û_1) ∈ (H_{1−s})^N × (H_{−s})^N, there exists a sequence {H_n} of boundary controls in L^M with compact support in [0, T], such that the sequence {U_n} of solutions to the corresponding problem (II) and (II0) satisfies

(U_n(T), U_n′(T)) → (0, 0)  in (H_{1−s})^N × (H_{−s})^N  as n → +∞,  (16.1)

or equivalently,

(U_n, U_n′) → (0, 0)  as n → +∞  (16.2)

in the space C_loc([T, +∞); (H_{1−s})^N) × C_loc([T, +∞); (H_{−s})^N).

Obviously, the exact boundary null controllability implies the approximate


boundary null controllability. However, since we cannot get the convergence of
the sequence {Hn } of boundary controls from Definition 16.1, generally speaking,
the approximate boundary null controllability does not lead to the exact boundary
null controllability.


Let
Φ = (φ^{(1)}, · · · , φ^{(N)})^T.

Consider the following adjoint problem:

Φ″ − ΔΦ + A^T Φ = 0   in (0, +∞) × Ω,
Φ = 0                  on (0, +∞) × Γ_0,
∂_ν Φ = 0              on (0, +∞) × Γ_1,       (16.3)
t = 0:  Φ = Φ̂_0,  Φ′ = Φ̂_1   in Ω,

where A^T is the transpose of A.


As in the case with Dirichlet boundary controls (cf. Sect. 8.2), we give the fol-
lowing.

Definition 16.2 For (Φ̂_0, Φ̂_1) ∈ (H_s)^N × (H_{s−1})^N (s > 1/2), the adjoint problem (16.3) is D-observable on [0, T] if

D^T Φ ≡ 0  on [0, T] × Γ_1  ⟹  (Φ̂_0, Φ̂_1) ≡ 0, hence Φ ≡ 0.  (16.4)

16.2 Equivalence Between the Approximate Boundary Null


Controllability and the D-Observability

In order to establish the equivalence between the approximate boundary null controllability of the original system (II) and the D-observability of the adjoint problem (16.3), let C be the set of all the initial states (V(0), V′(0)) of the following backward problem:

V″ − ΔV + AV = 0   in (0, T) × Ω,
V = 0              on (0, T) × Γ_0,
∂_ν V = D H        on (0, T) × Γ_1,       (16.5)
V(T) = 0,  V′(T) = 0   in Ω,

with all admissible boundary controls H ∈ L^M.


By Proposition 12.12, we have the following.
Lemma 16.3 For any given T > 0 and any given (V(T), V′(T)) ∈ (H_{1−s})^N × (H_{−s})^N with s > 1/2, and for any given boundary function H in L^M, the backward problem (16.5) (in which the null final condition is replaced by the given final data) admits a unique weak solution such that

V ∈ C⁰([0, T]; (H_{1−s})^N) ∩ C¹([0, T]; (H_{−s})^N).  (16.6)



Lemma 16.4 System (II) possesses the approximate boundary null controllability
if and only if

C = (H1−s ) N × (H−s ) N . (16.7)

Proof Assume that C = (H1−s ) N × (H−s ) N . By the definition of C, for any given
0 , U
(U 1 ) ∈ (H1−s ) N × (H−s ) N , there exists a sequence {Hn } of boundary controls
in L with compact support in [0, T ], such that the sequence {Vn } of solutions to
M

the corresponding backward problem (16.5) satisfies


 
0 , U
Vn (0), Vn (0) → (U 1 ) in (H1−s ) N × (H−s ) N as n → +∞. (16.8)

Recall that

R: (U 1 , H ) → (U, U  )
0 , U (16.9)

is the continuous linear mapping defined by (12.73). We have

R(U 1 , Hn )
0 , U (16.10)
0 − Vn (0), U
=R(U 1 − Vn (0), 0) + R(Vn (0), Vn (0), Hn ).

On the other hand, by the definition of Vn , we have

R(Vn (0), Vn (0), Hn )(T ) = 0. (16.11)

Therefore

R(U 1 , Hn )(T ) = R(U


0 , U 0 − Vn (0), U
1 − Vn (0), 0)(T ). (16.12)

By Proposition 12.12 and noting (16.8), we then get

R(U 1 , Hn )(T ) (H1−s ) N ×(H−s ) N


0 , U (16.13)
0 − Vn (0), U
c (U 1 − Vn (0)) (H1−s ) N ×(H−s ) N → 0

as n → +∞. Here and hereafter, c always denotes a positive constant. Thus, system
(II) is approximately null controllable.
Inversely, assume that system (II) is approximately null controllable. For any given
0 , U
(U 1 ) ∈ (H1−s ) N × (H−s ) N , there exists a sequence {Hn } of boundary controls
in L with compact support in [0, T ], such that the sequence {Un } of solutions to
M

the corresponding problem (II) and (II0) satisfies


 
0 , U
Un (T ), Un (T ) = R(U 1 , Hn )(T ) → (0, 0) (16.14)

in (H1−s ) N × (H−s ) N as n → +∞. Taking such Hn as the boundary control, we


solve the backward problem (16.5) and get the corresponding solution Vn . By the
linearity of the mapping R, we have

R(U 1 , Hn ) − R(Vn (0), Vn (0), Hn )


0 , U (16.15)
0 − Vn (0), U
=R(U 1 − Vn (0), 0).

By Lemma 16.3 and noting (16.14), we have

R(U 1 − Vn (0), 0)(0) (H1−s ) N ×(H−s ) N


0 − Vn (0), U (16.16)
c (Un (T ) − Vn (T ), Un (T ) − Vn (T ) (H1−s ) N ×(H−s ) N
=c (Un (T ), Un (T )) (H1−s ) N ×(H−s ) N → 0 as n → +∞.

Noting (16.15), we then get

(U 1 ) − (Vn (0), Vn (0)) (H1−s ) N ×(H−s ) N


0 , U (16.17)
= R(U 1 , Hn )(0) − R(Vn (0), Vn (0), Hn )(0) (H1−s ) N ×(H−s ) N
0 , U
0 − Vn (0), U
c R(U 1 − Vn (0), 0)(0) (H1−s ) N ×(H−s ) N → 0

as n → +∞, which shows that C = (H1−s ) N × (H−s ) N . 

Theorem 16.5 System (II) is approximately null controllable at the time T > 0 if
and only if the adjoint problem (16.3) is D-observable on [0, T ].

Proof Assume that system (II) is not approximately null controllable at the time
T > 0. By Lemma 16.4, there is a nontrivial vector (−1 , 0 ) ∈ C ⊥ . Here and
hereafter, the orthogonal complement space is always defined in the sense of duality.
Thus, (−1 , 0 ) ∈ (Hs−1 ) N × (Hs ) N . Taking (0 , 1 ) as the initial data, we solve
the adjoint problem (16.3) and get the solution  ≡ 0. Multiplying  on both sides
of the backward problem (16.5) and integrating by parts, we get

V (0), 1 (H1−s ) N ;(Hs−1 ) N − V  (0), 0 (H−s ) N ;(Hs ) N (16.18)


T
= (D H, )ddt.
0 1

The right-hand side of (16.18) is meaningful due to

 N  N 1
 ∈ C 0 ([0, T ]; Hs ) ⊂ L 2 (0, T ; L 2 (1 )) , s > . (16.19)
2

Noticing (V (0), V  (0)) ∈ C and (−1 , 0 ) ∈ C ⊥ , it is easy to see from (16.18) that
for any given H in L^M, we have

∫_0^T ∫_{Γ_1} (DH, Φ) dΓ dt = 0.

Then, it follows that

D T  ≡ 0 on [0, T ] × 1 . (16.20)

But  ≡ 0, which implies that the adjoint problem (16.3) is not D-observable on
[0, T ].
Inversely, assume that the adjoint problem (16.3) is not D-observable on [0, T ],
then there exists a nontrivial initial data (0 , 1 ) ∈ (Hs ) N × (Hs−1 ) N , such that the
solution  to the corresponding adjoint problem (16.3) satisfies (16.20). For any
given (U 1 ) ∈ C, there exists a sequence {Hn } of boundary controls in L M , such
0 , U
that the solution Vn to the corresponding backward problem (16.5) satisfies

0 , U
(Vn (0), Vn (0)) → (U 1 ) in (H1−s ) N × (H−s ) N (16.21)

as n → +∞. Similarly to (16.18), multiplying  on both sides of the backward


problem (16.5) and noting (16.20), we get

Vn (0), 1 (H1−s ) N ;(Hs−1 ) N − Vn (0), 0 (H−s ) N ;(Hs ) N = 0. (16.22)

Then, taking n → +∞ and noting (16.21), it follows from (16.22) that

(U 1 ), (−1 , 0 ) (H1−s ) N ×(H−s ) N ;(Hs−1 ) N ×(Hs ) N = 0


0 , U (16.23)

for all (U 1 ) ∈ C, which indicates that (−1 , 0 ) ∈ C ⊥ , thus C = (H1−s ) N ×


0 , U
(H−s ) N . 
Theorem 16.6 If for any given initial data (U 1 ) ∈ (H1 ) N × (H0 ) N , system (II)
0 , U
is approximately null controllable for some s (> 21 ), then for any given initial data
0 , U
(U 1 ) ∈ (H1−s ) N × (H−s ) N , system (II) possesses the same approximate bound-
ary null controllability, too.
0 , U
Proof For any given initial data (U 1 ) ∈ (H1−s ) N × (H−s ) N (s > 1 ), by the den-
2
sity of (H1 ) × (H0 ) in (H1−s ) × (H−s ) N , we can find a sequence {(U
N N N 0n , U
1n )}n∈N
in (H1 ) × (H0 ) , satisfying
N N

(U 1n ) → (U
0n , U 0 , U
1 ) in (H1−s ) N × (H−s ) N (16.24)

as n → +∞. By the assumption, for any fixed n  1, there exists a sequence {Hkn }k∈N
of boundary controls in L M with compact support in [0, T ], such that the sequence
of solutions {Ukn } to the corresponding problem (II) and (II0) satisfies

(Ukn (T ), (Ukn ) (T )) → (0, 0) in (H1−s ) N × (H−s ) N (16.25)



as k → +∞. For any given n  1, let kn be an integer such that

R(U 1n , Hkn )(T ) (H1−s ) N ×(H−s ) N


0n , U (16.26)
n

1
= (Uknn (T ), (Uknn ) (T )) (H1−s ) N ×(H−s ) N  .
2n
Thus, we get a sequence {kn } with kn → +∞ as n → +∞. For the sequence {Hknn }
of boundary controls in L M , we have

R(U 1n , Hkn )(T ) → 0 in (H1−s ) N × (H−s ) N


0n , U (16.27)
n

as n → +∞. Therefore, by the linearity of R, the combination of (16.24) and (16.27)


gives

R(U 1 , Hkn )
0 , U (16.28)
n

  n 
=R(U0 − U0 , U1 − U1n , 0) + R(U
0n , U
1n , Hkn ) → (0, 0)
n

in (H1−s ) N × (H−s ) N as n → +∞, which indicates that the sequence {Hknn } of


boundary controls realizes the approximate boundary null controllability for any
given initial data (U 1 ) ∈ (H1−s ) N × (H−s ) N .
0 , U 
Remark 16.7 Since (H1 ) N × (H0 ) N ⊂ (H1−s ) N × (H−s ) N (s > 21 ) if system (II)
is approximately null controllable for any given initial data (U 0 , U
1 ) ∈ (H1−s ) N ×
(H−s ) (s > 2 ), then it possesses the same approximate boundary null controllabil-
N 1

0 , U
ity for any given initial data (U 1 ) ∈ (H1 ) N × (H0 ) N , too.

Remark 16.8 Theorem 16.6 and Remark 16.7 indicate that for system (II), the
approximate boundary null controllability for the initial data in (H1−s ) N ×
(H−s ) N (s > 21 ) is equivalent to the approximate boundary null controllability for the
initial data in (H1 ) N × (H0 ) N with the same convergence space (H1−s ) N × (H−s ) N .
Remark 16.9 In a similar way, we can prove that if for any given initial data
0 , U
(U 1 ) ∈ (H1−s ) N × (H−s ) N (s > 1 ), system (II) is approximately null control-
2
0 , U
lable, then for any given initial data (U 1 ) ∈ (H1−s  ) N × (H−s  ) N (s  > s), system
(II) is still approximately null controllable.
Corollary 16.10 If M = N , then, no matter whether the multiplier geometrical con-
dition (12.3) is satisfied or not, system (II) is always approximately null controllable.
Proof Since M = N, the observations given by (16.4) become

Φ ≡ 0 on [0, T] × Γ₁.   (16.29)
By Holmgren’s uniqueness theorem (cf. Theorem 8.2 in [62]), we then get the D-
observability of the adjoint problem (16.3). Hence, by Theorem 16.5, we get the
approximate boundary null controllability of system (II). 

16.3 Kalman's Criterion. Total (Direct and Indirect) Controls

By Theorem 16.5, similarly to Theorem 8.9, we can give the following necessary
condition for the approximate boundary null controllability.
Theorem 16.11 If system (II) is approximately null controllable at the time T > 0,
then we have necessarily the following Kalman’s criterion:

rank(D, AD, · · ·, A^{N−1}D) = N.   (16.30)

By Theorem 16.11, as in the case with Dirichlet boundary controls (cf. Sect. 8.3),
for Neumann boundary controls, in Part IV, we should consider not only the num-
ber of direct boundary controls acting on Γ₁, which is equal to the rank of D,
but also the number of total controls, given by the rank of the enlarged matrix
(D, AD, · · ·, A^{N−1}D), composed of the coupling matrix A and the boundary con-
trol matrix D, which should be equal to N in the case of approximate boundary
null controllability. Therefore, the number of indirect controls should be equal to
rank(D, AD, · · ·, A^{N−1}D) − rank(D) = N − M.
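As a quick illustration (not part of the original text), the rank in Kalman's criterion (16.30) can be
checked numerically. The following minimal sketch, with A and D chosen arbitrarily for illustration,
computes rank(D, AD, · · ·, A^{N−1}D) and hence the numbers of direct and indirect controls.

```python
import numpy as np

def kalman_rank(A, D):
    """Rank of the enlarged matrix (D, AD, ..., A^{N-1} D)."""
    N = A.shape[0]
    blocks = [D]
    for _ in range(N - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

# Illustrative data (not from the book): N = 3 state variables, M = 1 direct control.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
D = np.array([[0.], [0.], [1.]])
print(kalman_rank(A, D))   # 3 = N, hence N - M = 2 indirect controls
```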
Generally speaking, just as for the approximate boundary null controllability
of a coupled system of wave equations with Dirichlet boundary controls, Kalman's
criterion is not sufficient in general. The reason is that Kalman's criterion does not
depend on T; hence, if it were sufficient, the approximate boundary null controllability of
the original system (II), or the D-observability of the adjoint problem (16.3), would
hold for arbitrarily small T; however, this is impossible since the wave propagates with
a finite speed.
First of all, we will give an example to show the insufficiency of Kalman's crite-
rion.

Theorem 16.12 Let μ_n² and e_n be defined by

−Δe_n = μ_n² e_n   in Ω,
e_n = 0            on Γ₀,   (16.31)
∂_ν e_n = 0        on Γ₁.

Assume that the set

Λ = {(m, n) :  μ_n ≠ μ_m,  e_m = e_n on Γ₁}   (16.32)

is not empty. For any given (m, n) ∈ Λ, setting

ε = (μ_m² − μ_n²)/2,   (16.33)

the adjoint problem

φ′′ − Δφ + εψ = 0   in (0, +∞) × Ω,
ψ′′ − Δψ + εφ = 0   in (0, +∞) × Ω,
φ = ψ = 0           on (0, +∞) × Γ₀,   (16.34)
∂_ν φ = ∂_ν ψ = 0   on (0, +∞) × Γ₁

admits a nontrivial solution (φ, ψ) ≢ (0, 0) with the observation on the infinite time
interval

φ ≡ 0 on [0, +∞) × Γ₁,   (16.35)

hence the corresponding D-observability of the adjoint problem (16.34) fails.


Proof Let

φ = e_n − e_m,  ψ = e_n + e_m,  λ² = (μ_m² + μ_n²)/2.   (16.36)

It is easy to check that (φ, ψ) satisfies the following system:

λ²φ + Δφ − εψ = 0   in (0, +∞) × Ω,
λ²ψ + Δψ − εφ = 0   in (0, +∞) × Ω,
φ = ψ = 0           on (0, +∞) × Γ₀,   (16.37)
∂_ν φ = ∂_ν ψ = 0   on (0, +∞) × Γ₁.

Moreover, noting the definition (16.32) of Λ, we have

φ = 0 on Γ₁.   (16.38)

Then, let

φ_λ = e^{iλt}φ,  ψ_λ = e^{iλt}ψ.   (16.39)

It is easy to see that (φ_λ, ψ_λ) is a nontrivial solution to the adjoint system (16.34),
which satisfies condition (16.35).   □
In order to illustrate the validity of the assumptions given in Theorem 16.12, we
may examine the following situations, in which the set Λ is indeed not empty.
(1) Ω = (0, π), Γ₁ = {π}. In this case, we have

μ_n = n + 1/2,  e_n = (−1)^n sin(n + 1/2)x,  e_n(π) = e_m(π) = 1.   (16.40)

Thus, (m, n) ∈ Λ for all m ≠ n.

(2) Ω = (0, π) × (0, π), Γ₁ = {π} × [0, π]. Let

μ_{m,n} = √((m + 1/2)² + n²),  e_{m,n} = (−1)^m sin(m + 1/2)x sin ny.   (16.41)

We have

e_{m,n}(π, y) = e_{m′,n}(π, y) = sin ny,  0 ≤ y ≤ π.   (16.42)

Thus, ({m, n}, {m′, n}) ∈ Λ for all m ≠ m′ and n ≥ 1.
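The algebra behind the proof of Theorem 16.12 is also easy to verify symbolically. The sketch below
(illustrative only; it uses the one-dimensional eigenfunctions of example (1) and an arbitrary pair
m ≠ n) checks the two identities behind (16.37).

```python
import sympy as sp

x = sp.symbols('x', real=True)
m, n = 2, 5                                    # any pair with m != n (illustrative)
mu = lambda k: k + sp.Rational(1, 2)
e  = lambda k: (-1)**k * sp.sin(mu(k) * x)     # eigenfunctions from (16.40)

eps  = (mu(m)**2 - mu(n)**2) / 2               # epsilon as in (16.33)
lam2 = (mu(m)**2 + mu(n)**2) / 2               # lambda^2 as in (16.36)
phi, psi = e(n) - e(m), e(n) + e(m)

# Both expressions simplify to 0, i.e. (phi, psi) satisfies (16.37):
print(sp.simplify(lam2*phi + sp.diff(phi, x, 2) - eps*psi))
print(sp.simplify(lam2*psi + sp.diff(psi, x, 2) - eps*phi))
```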

Remark 16.13 Theorem 16.12 implies that Kalman's criterion (16.30) is not suffi-
cient in general. As a matter of fact, for the adjoint system (16.34) which satisfies
the condition of observation (16.35), we have N = 2,

A = ( 0  ε )     D = ( 1 )     (D, AD) = ( 1  0 )
    ( ε  0 ),        ( 0 ),               ( 0  ε ),   (16.43)

and the corresponding Kalman's criterion (16.30) is satisfied. Theorem 16.12 shows
that Kalman's criterion cannot guarantee the D-observability of the adjoint system
(16.34) even if the observation is given on the infinite time interval [0, +∞).
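For a concrete check (not part of the original text), the rank condition in (16.43) can be verified
numerically; any nonzero value of ε will do.

```python
import numpy as np

eps = 0.1                        # any nonzero epsilon, cf. (16.33)
A = np.array([[0., eps],
              [eps, 0.]])
D = np.array([[1.], [0.]])
K = np.hstack([D, A @ D])        # (D, AD) as in (16.43)
print(np.linalg.matrix_rank(K))  # 2 = N, yet D-observability fails (Theorem 16.12)
```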

16.4 Sufficiency of Kalman's Criterion for T > 0 Large Enough in the One-Space-Dimensional Case

Similarly to the situation with Dirichlet boundary controls in the one-space-


dimensional case, we will show that Kalman’s criterion (16.30) is also sufficient
in certain cases for the approximate boundary null controllability of the original
system.
Now consider the following one-space-dimensional adjoint problem:

Φ′′ − Φ_xx + εA^TΦ = 0,        t > 0,  0 < x < π,
Φ(t, 0) = 0,                   t > 0,
∂_ν Φ(t, π) = 0,               t > 0,   (16.44)
t = 0 :  Φ = Φ̂₀,  Φ′ = Φ̂₁,    0 < x < π

with the observation on x = π:

D^TΦ(t, π) = 0 on [0, T],   (16.45)

where |ε| > 0 is small enough.
Assume that the N × N matrix A^T is diagonalizable with m (≤ N) distinct real
eigenvalues:

δ1 < δ2 < · · · < δm (16.46)

and the corresponding eigenvectors w^{(l,μ)}:

A^T w^{(l,μ)} = δ_l w^{(l,μ)},  1 ≤ l ≤ m,  1 ≤ μ ≤ μ_l,   (16.47)

we have

Σ_{l=1}^m μ_l = N.   (16.48)

Let

e_n = (−1)^n sin(n + 1/2)x,  n ≥ 0.   (16.49)

e_n is an eigenfunction of −Δ in H₁, satisfying

−Δe_n = μ_n² e_n   in 0 < x < π,
e_n = 0            on x = 0,   (16.50)
∂_ν e_n = 0        on x = π,

in which μ_n = n + 1/2. Thus, e_n w^{(l,μ)} is an eigenvector of (−Δ + εA^T), correspond-
ing to the eigenvalue (n + 1/2)² + εδ_l.
Define

β_n^{(l)} = √((n + 1/2)² + εδ_l),   l = 1, 2, · · ·, m,  n ≥ 0,
β_{−n}^{(l)} = −β_n^{(l)},          l = 1, 2, · · ·, m,  n ≥ 1.   (16.51)

The eigenvectors associated with system (16.44) are given by

E_n^{(l,μ)} = ( e_n w^{(l,μ)}/(iβ_n^{(l)}),  e_n w^{(l,μ)} )^T,  1 ≤ l ≤ m,  1 ≤ μ ≤ μ_l,  n ∈ Z,   (16.52)

where we define e_{−n} = e_n for all n ≥ 0. Similarly to Sect. 8.7, we can show that
{E_n^{(l,μ)}}_{1≤l≤m, 1≤μ≤μ_l, n∈Z} forms a Riesz basis in (H₁)^N × (H₀)^N. Then, for any given
initial data in (H₁)^N × (H₀)^N:

(Φ̂₀, Φ̂₁) = Σ_{n∈Z} Σ_{l=1}^m Σ_{μ=1}^{μ_l} α_n^{(l,μ)} E_n^{(l,μ)},   (16.53)

the solution to the adjoint problem (16.44) is given by


(Φ, Φ′) = Σ_{n∈Z} Σ_{l=1}^m Σ_{μ=1}^{μ_l} α_n^{(l,μ)} e^{iβ_n^{(l)} t} E_n^{(l,μ)}.   (16.54)

In particular, we have

Φ = Σ_{n∈Z} Σ_{l=1}^m Σ_{μ=1}^{μ_l} (α_n^{(l,μ)}/(iβ_n^{(l)})) e^{iβ_n^{(l)} t} e_n w^{(l,μ)},   (16.55)

and the condition of D-observation (16.45) leads to

Σ_{n∈Z} Σ_{l=1}^m Σ_{μ=1}^{μ_l} D^T w^{(l,μ)} (α_n^{(l,μ)}/(iβ_n^{(l)})) e^{iβ_n^{(l)} t} = 0 on [0, T].   (16.56)

Now let us examine the sequence {β_n^{(l)}}_{1≤l≤m; n∈Z} defined by (16.51):

· · · < β_{−1}^{(1)} < · · · < β_{−1}^{(m)} < β_0^{(1)} < · · · < β_0^{(m)} < β_1^{(1)} < · · · < β_1^{(m)} < · · ·.   (16.57)

First, for |ε| > 0 small enough, the sequence {β_n^{(l)}}_{1≤l≤m; n∈Z} is strictly increasing.
On the other hand, for |ε| > 0 small enough and for |n| > 0 large enough, similarly
to (8.106) and (8.107), we check easily that

β_{n+1}^{(l)} − β_n^{(l)} = O(1)   (16.58)

and

β_n^{(l+1)} − β_n^{(l)} = O(|ε|/n).   (16.59)

Thus, the sequence {β_n^{(l)}}_{1≤l≤m; n∈Z} satisfies all the assumptions given in Theorem
8.22 (in which s = 1). Moreover, by the definition given in (8.93), a computation shows
D⁺ = m. Then the sequence {e^{iβ_n^{(l)} t}}_{1≤l≤m; n∈Z} is ω-linearly independent in L²(0, T),
provided that T > 2mπ.
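As a numerical illustration (not from the original text), the gap behavior (16.58)–(16.59) is easy to
observe; the values of ε, m and δ_l below are arbitrary.

```python
import numpy as np

eps   = 0.01
delta = np.array([-1.0, 0.0, 2.0])               # illustrative: m = 3 distinct eigenvalues
beta  = lambda n, l: np.sqrt((n + 0.5)**2 + eps*delta[l])

for n in (10, 100, 1000):
    gap_n = beta(n + 1, 0) - beta(n, 0)          # (16.58): stays O(1), tends to 1
    gap_l = beta(n, 1) - beta(n, 0)              # (16.59): of size O(|eps|/n)
    print(n, round(gap_n, 4), round(gap_l * n / eps, 4))   # last column stays bounded
```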

Theorem 16.14 Assume that A and D satisfy Kalman's criterion (16.30). Assume
furthermore that A^T is diagonalizable with (16.46)–(16.47). Then the adjoint prob-
lem (16.44) is D-observable for |ε| > 0 small enough, provided that T > 2mπ.
Proof Since the sequence {e^{iβ_n^{(l)} t}}_{1≤l≤m; n∈Z} is ω-linearly independent in L²(0, T) as
T > 2mπ, it follows from (16.56) that

D^T Σ_{μ=1}^{μ_l} (α_n^{(l,μ)}/(iβ_n^{(l)})) w^{(l,μ)} = 0,  1 ≤ l ≤ m,  n ∈ Z.   (16.60)

Noting Kalman's criterion (16.30), by Proposition 2.12 (ii) with d = 0, Ker(D^T)
does not contain any nontrivial invariant subspace of A^T, then it follows that

Σ_{μ=1}^{μ_l} (α_n^{(l,μ)}/(iβ_n^{(l)})) w^{(l,μ)} = 0,  1 ≤ l ≤ m,  n ∈ Z.   (16.61)

Then, since the eigenvectors w^{(l,1)}, · · ·, w^{(l,μ_l)} are linearly independent, we get

α_n^{(l,μ)} = 0,  1 ≤ μ ≤ μ_l,  1 ≤ l ≤ m,  n ∈ Z,   (16.62)

hence Φ ≡ 0, namely, the adjoint problem (16.44) is D-observable.   □

Moreover, similarly to Theorem 8.25, we have

Theorem 16.15 Under the assumptions of Theorem 16.14, assume furthermore that
A^T possesses N distinct real eigenvalues:

δ₁ < δ₂ < · · · < δ_N.   (16.63)

Then the adjoint problem (16.44) is D-observable for |ε| > 0 small enough, provided
that T > 2π(N − M + 1), where M = rank(D).

16.5 Unique Continuation for a Cascade System of Two Wave Equations

Consider the following problem for the cascade system of two wave equations:

φ′′ − Δφ = 0           in (0, T) × Ω,
ψ′′ − Δψ + φ = 0       in (0, T) × Ω,
φ = ψ = 0              on (0, T) × Γ₀,   (16.64)
∂_ν φ = ∂_ν ψ = 0      on (0, T) × Γ₁,
t = 0 :  (φ, ψ, φ′, ψ′) = (φ₀, ψ₀, φ₁, ψ₁)   in Ω

with the Dirichlet observation

aφ + bψ = 0 on [0, T] × Γ₁,   (16.65)

where a and b are constants.
Let

A = ( 0  1 )     and     D = ( a )
    ( 0  0 )                  ( b ).   (16.66)

It is easy to see that Kalman's criterion (16.30) holds if and only if b ≠ 0. Moreover,
we have (cf. [2])

Theorem 16.16 Let b ≠ 0. If the solution (φ, ψ) to the adjoint problem (16.64) with
the initial data (φ₀, ψ₀, φ₁, ψ₁) ∈ H¹₀(Ω) × H¹₀(Ω) × L²(Ω) × L²(Ω) satisfies the
Dirichlet observation (16.65), then φ ≡ ψ ≡ 0, provided that T > 0 is large enough.
Theorem 16.16 can be generalized to the following system of n blocks of cascade
systems of two wave equations:

φ_j′′ − Δφ_j = 0           in (0, T) × Ω,
ψ_j′′ − Δψ_j + φ_j = 0     in (0, T) × Ω,
φ_j = ψ_j = 0              on (0, T) × Γ₀,   (16.67)
∂_ν φ_j = ∂_ν ψ_j = 0      on (0, T) × Γ₁

for j = 1, . . ., n with the Dirichlet observations

Σ_{j=1}^n (a_{ji}φ_j + b_{ji}ψ_j) = 0 on [0, T] × Γ₁,  i = 1, · · ·, n.   (16.68)

Let N = 2n and

Φ = (φ₁, ψ₁, · · ·, φ_n, ψ_n)^T,   (16.69)

and let A be the block-diagonal matrix of order N with n diagonal blocks

( 0  1 )
( 0  0 ),   (16.70)

and

D = ( a₁₁ a₁₂ · · · a₁ₙ )
    ( b₁₁ b₁₂ · · · b₁ₙ )
    (  ⋮   ⋮  · · ·  ⋮  )   (16.71)
    ( aₙ₁ aₙ₂ · · · aₙₙ )
    ( bₙ₁ bₙ₂ · · · bₙₙ ).

Setting (16.67) and (16.68) into the forms of (16.44) and (16.45), a straightforward
computation shows that Kalman's criterion (16.30) holds if and only if the matrix
B = (b_{ij}) is invertible. Moreover, we have the following
Corollary 16.17 Assume that the matrix B = (b_{ij}) is invertible. If the solution
{(φ_i, ψ_i)} to system (16.67) with (φ_{i0}, ψ_{i0}) ∈ H¹₀(Ω) × L²(Ω) for i = 1, . . ., n satisfies
the observations (16.68), then φ_i ≡ ψ_i ≡ 0 for i = 1, . . ., n, provided that T > 0 is
large enough.

Proof For i = 1, · · ·, n, multiplying the first equation of the j-th block of (16.67) by a_{ji} and the
second equation by b_{ji}, respectively, then summing up for j from 1
to n, we get

Σ_{j=1}^n (a_{ji}φ_j + b_{ji}ψ_j)′′ − Δ Σ_{j=1}^n (a_{ji}φ_j + b_{ji}ψ_j) + Σ_{j=1}^n b_{ji}φ_j = 0   (16.72)

for i = 1, · · ·, n.
Let

u_i = Σ_{j=1}^n b_{ji}φ_j,   v_i = Σ_{j=1}^n (a_{ji}φ_j + b_{ji}ψ_j),   i = 1, · · ·, n.   (16.73)

It follows from (16.72) that for each i = 1, · · ·, n, we have

u_i′′ − Δu_i = 0         in (0, T) × Ω,
v_i′′ − Δv_i + u_i = 0   in (0, T) × Ω,
u_i = v_i = 0            on (0, T) × Γ₀,   (16.74)
∂_ν u_i = ∂_ν v_i = 0    on (0, T) × Γ₁

with the Dirichlet observations

v_i ≡ 0 on [0, T] × Γ₁.   (16.75)

Applying Theorem 16.16 to each sub-system (16.74) with (16.75), it follows that

u_i ≡ v_i ≡ 0 in [0, T] × Ω,  i = 1, · · ·, n.   (16.76)

Finally, since the matrix B is invertible, it follows from (16.76) that

φ_i ≡ ψ_i ≡ 0 in [0, T] × Ω,  i = 1, · · ·, n.   (16.77)

The proof is thus complete.   □
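As a side check (not part of the original text), the claim made before Corollary 16.17 — that Kalman's
criterion (16.30) for the block system (16.69)–(16.71) holds exactly when B = (b_ij) is invertible —
can be tested numerically. The sketch below uses random coefficients, so B is invertible almost surely.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                          # number of cascade blocks, N = 2n
a = rng.standard_normal((n, n))
b = rng.standard_normal((n, n))                # invertible almost surely

# A: block-diagonal with n copies of [[0, 1], [0, 0]], cf. (16.70)
A = np.kron(np.eye(n), np.array([[0., 1.], [0., 0.]]))
# D: rows alternate the a-rows and the b-rows, cf. (16.71)
D = np.empty((2 * n, n))
D[0::2], D[1::2] = a, b

K = np.hstack([np.linalg.matrix_power(A, k) @ D for k in range(2 * n)])
print(np.linalg.matrix_rank(K))                # 2n = N when B is invertible
```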


Chapter 17
Approximate Boundary Synchronization

The approximate boundary synchronization is defined and studied in this chapter for
system (II) with Neumann boundary controls.

17.1 Definition

Definition 17.1 Let s > 1/2. System (II) is approximately synchronizable at the
time T > 0 if for any given initial data (Û₀, Û₁) ∈ (H_{1−s})^N × (H_{−s})^N, there exists
a sequence {H_n} of boundary controls in L_M with compact support in [0, T], such
that the sequence {U_n} of solutions to problem (II) and (II0) satisfies

u_n^{(k)} − u_n^{(l)} → 0 as n → +∞   (17.1)

for 1 ≤ k, l ≤ N in the space

C⁰_loc([T, +∞); H_{1−s}) ∩ C¹_loc([T, +∞); H_{−s}).   (17.2)

Let C1 be the synchronization matrix of order (N − 1) × N , defined by


⎛ ⎞
1 −1
⎜ 1 −1 ⎟
⎜ ⎟
C1 = ⎜ . . ⎟. (17.3)
⎝ .. .. ⎠
1 −1

C1 is a full row-rank matrix, and

Ker(C1 ) = Span{e1 } (17.4)



with
e1 = (1, · · · , 1)T . (17.5)

Obviously, the approximate boundary synchronization (17.1) can be equivalently


rewritten as
C1 Un → 0 as n → +∞ (17.6)

in the space

C⁰_loc([T, +∞); (H_{1−s})^{N−1}) ∩ C¹_loc([T, +∞); (H_{−s})^{N−1}).   (17.7)

17.2 Condition of C1 -Compatibility

Similarly to Theorem 9.2, we have


Theorem 17.2 Assume that system (II) is approximately synchronizable at the time
T > 0, but not approximately null controllable. Then the coupling matrix A = (a_{ij})
should satisfy the following row-sum condition:

Σ_{j=1}^N a_{ij} := a   (i = 1, · · ·, N),   (17.8)

where a is a constant independent of i = 1, · · ·, N. This condition is called the
condition of C₁-compatibility.
Remark 17.3 The condition of C1 -compatibility (17.8) for the approximate bound-
ary synchronization is just the same as that given in (13.6) for the exact boundary
synchronization. In particular, Lemma 13.3 remains valid in the present situation.

17.3 Fundamental Properties

Under the condition of C₁-compatibility (17.8), setting

W₁ = (w^{(1)}, · · ·, w^{(N−1)})^T = C₁U   (17.9)

and noting (13.11), the original problem (II) and (II0) for U can be transformed into the
following reduced system for W₁:

W₁′′ − ΔW₁ + Ā₁W₁ = 0   in (0, +∞) × Ω,
W₁ = 0                  on (0, +∞) × Γ₀,   (17.10)
∂_ν W₁ = C₁DH           on (0, +∞) × Γ₁

with the initial data

t = 0 :  W₁ = C₁Û₀,  W₁′ = C₁Û₁   in Ω.   (17.11)

Accordingly, let

Ψ₁ = (ψ^{(1)}, · · ·, ψ^{(N−1)})^T.   (17.12)

Consider the following adjoint problem of the reduced system (17.10):

Ψ₁′′ − ΔΨ₁ + Ā₁^TΨ₁ = 0   in (0, +∞) × Ω,
Ψ₁ = 0                    on (0, +∞) × Γ₀,
∂_ν Ψ₁ = 0                on (0, +∞) × Γ₁,   (17.13)
t = 0 :  Ψ₁ = Ψ̂₀,  Ψ₁′ = Ψ̂₁   in Ω,

which is called the reduced adjoint problem of system (II).


From Definition 16.1 and Definition 17.1, we immediately get
Lemma 17.4 Assume that the coupling matrix A satisfies the condition of
C1 -compatibility (17.8). Then system (II) is approximately synchronizable at the
time T > 0 if and only if the reduced system given by (17.10) is approximately null
controllable at the time T > 0, or equivalently if and only if the reduced adjoint
problem (17.13) is C1 D-observable on the time interval [0, T ] (cf. Definition 16.2).

Corollary 17.5 Assume that the condition of C1 -compatibility (17.8) holds. If


rank(C1 D) = N − 1, then system (II) is always approximately synchronizable.

Proof Since (C₁D)^T is an invertible matrix of order (N − 1), the condition of obser-
vation similarly given in Definition 16.2 becomes

Ψ₁ ≡ 0 on [0, T] × Γ₁.   (17.14)

Thus, by means of Holmgren's uniqueness theorem (cf. Theorem 8.2 in [62]), we
get the C₁D-observability of the reduced adjoint problem (17.13) on [0, T], then by
Lemma 17.4, the approximate boundary synchronization of system (II).   □

Similarly to Lemma 9.5, we have


Lemma 17.6 Under the condition of C1 -compatibility (17.8), if system (II) is
approximately synchronizable at T > 0, then we have the following criterion of
Kalman’s type:

rank(C₁D, C₁AD, · · ·, C₁A^{N−1}D) = N − 1.   (17.15)



17.4 Properties Related to the Number of Total Controls

Since the rank of the enlarged matrix (D, AD, · · · , A N −1 D) measures the number
of total (direct and indirect) controls, we now determine the minimal number of total
controls, which is necessary to the approximate boundary synchronization of system
(II), no matter whether the condition of C1 -compatibility (17.8) is satisfied or not.
Similarly to Theorem 9.6, we have
Theorem 17.7 Assume that system (II) is approximately synchronizable under the
action of a boundary control matrix D. Then we necessarily have

rank(D, AD, · · ·, A^{N−1}D) ≥ N − 1.   (17.16)

In other words, at least (N − 1) total controls are needed in order to realize the
approximate boundary synchronization of system (II).
When system (II) is approximately synchronizable under the minimal rank con-
dition
rank(D, AD, · · ·, A^{N−1}D) = N − 1,   (17.17)

the coupling matrix A should possess some fundamental properties related to the
synchronization matrix C1 . Similarly to Theorem 9.7, we have

Theorem 17.8 Assume that system (II) is approximately synchronizable under the
minimal rank condition (17.17). Then we have the following assertions:
(i) The coupling matrix A satisfies the condition of C1 -compatibility (17.8).
(ii) There exists a scalar function u as the approximately synchronizable state,
such that
u_n^{(k)} → u as n → +∞   (17.18)

for all 1 ≤ k ≤ N in the space

C⁰_loc([T, +∞); H_{1−s}) ∩ C¹_loc([T, +∞); H_{−s})   (17.19)

with s > 1/2. Moreover, the approximately synchronizable state u is independent of


the sequence {Hn } of applied boundary controls.
(iii) The transpose A T of the coupling matrix A admits an eigenvector E 1 such
that (E 1 , e1 ) = 1, where e1 = (1, · · · , 1)T is the eigenvector of A, associated with
the eigenvalue a given by (17.8).

Remark 17.9 Under the hypothesis of Theorem 17.8, system (II) is approximately
synchronizable in the pinning sense, while that originally given by Definition 17.1
is in the consensus sense.

On the other hand, similarly to Theorem 9.12, we have



Theorem 17.10 Let A satisfy the condition of C1 -compatibility (17.8). Assume that
A T admits an eigenvector E 1 such that (E 1 , e1 ) = 1 with e1 = (1, · · · , 1)T . Then
there exists a boundary control matrix D satisfying the minimal rank condition
(17.17), which realizes the approximate boundary synchronization of system (II).
Moreover, the approximately synchronizable state u is independent of applied bound-
ary controls.

17.5 An Example

In this subsection, we will examine the approximate boundary synchronization for
a coupled system of three wave equations, the reduced adjoint system of which is
given by (16.64).
For this purpose, we first look for all the matrices A of order 3, such that C₁A =
Ā₁C₁. More precisely, let

N = 3,   Ā₁ = ( 0 1 ),   C₁ = ( 1 −1  0 ).   (17.20)
              ( 0 0 )         ( 0  1 −1 )

For getting A, we solve the linear system:

C₁A = Ā₁C₁ = ( 0 1 −1 ).   (17.21)
             ( 0 0  0 )

Noting that the matrix

A₀ = ( 0 1 −1 )
     ( 0 0  0 )   (17.22)
     ( 0 0  0 )

satisfies (17.21), it is easy to see that

A = A₀ + ( α β γ )   =   ( α  β+1  γ−1 )
         ( α β γ )       ( α  β    γ   ),   (17.23)
         ( α β γ )       ( α  β    γ   )

where α, β, γ are arbitrarily given real numbers. By construction, A satisfies the
condition of C₁-compatibility (17.8) with the row-sum a = α + β + γ.
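A quick symbolic check (not part of the original text) confirms that every matrix of the form (17.23)
satisfies the compatibility relation (17.21):

```python
import sympy as sp

alpha, beta, gamma = sp.symbols('alpha beta gamma')
C1    = sp.Matrix([[1, -1, 0], [0, 1, -1]])
A1bar = sp.Matrix([[0, 1], [0, 0]])
A = sp.Matrix([[alpha, beta + 1, gamma - 1],
               [alpha, beta,     gamma    ],
               [alpha, beta,     gamma    ]])   # (17.23)
print(sp.simplify(C1*A - A1bar*C1))             # zero matrix, i.e. C1*A = A1bar*C1
```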

Proposition 17.11 In the case

α + β + γ ≠ 0   (17.24)

or

α + β + γ = 0 and α = 0,   (17.25)

λ = α + β + γ is an eigenvalue of A associated with the eigenvector e₁ = (1, 1, 1)^T,
and the corresponding eigenvector E₁ of A^T can be chosen such that (E₁, e₁) = 1.
While, in the case

α + β + γ = 0 but α ≠ 0,   (17.26)

λ = 0 is the only eigenvalue of A, and all the corresponding eigenvectors E₁ of A^T
necessarily satisfy (E₁, e₁) = 0.

Proof First, a straightforward computation gives

det(λI − A) = λ³ − (α + β + γ)λ².   (17.27)

(i) If α + β + γ ≠ 0, then λ = α + β + γ is a simple eigenvalue of A with the
eigenvector e₁. So, A^T admits an eigenvector E₁ such that (E₁, e₁) = 1. More pre-
cisely, the vector E₁ is given by

E₁ = (1/a²) ( aα, aβ + α, aγ − α )^T   with a = α + β + γ.   (17.28)

(ii) If α + β + γ = 0 and α = 0, then λ = 0 is the eigenvalue of A with multi-
plicity 3 and dim Ker(A) = 2. Moreover, the eigenvector E₁ of A^T given by

E₁ = (1/2) ( −2β, β + 1, β + 1 )^T   (17.29)

satisfies (E₁, e₁) = 1.
(iii) If α + β + γ = 0 and α ≠ 0, then λ = 0 is also the eigenvalue of A with
multiplicity 3 and dim Ker(A) = 1. Then, the eigenvector E₁ of A^T given by

E₁ = ( 0, 1, −1 )^T   (17.30)

necessarily satisfies (E₁, e₁) = 0. The proof is complete.   □
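The computations in this proof are easy to double-check with a computer algebra system; the sketch
below (illustrative only) verifies the characteristic polynomial (17.27) and the eigenvector E₁ of (17.28).

```python
import sympy as sp

alpha, beta, gamma, lam = sp.symbols('alpha beta gamma lambda')
a  = alpha + beta + gamma
A  = sp.Matrix([[alpha, beta + 1, gamma - 1],
                [alpha, beta,     gamma    ],
                [alpha, beta,     gamma    ]])
e1 = sp.Matrix([1, 1, 1])

# (17.27): the characteristic polynomial factors as lambda^2 (lambda - (alpha+beta+gamma))
print(sp.factor((lam*sp.eye(3) - A).det()))

# (17.28): A^T E1 = a E1 and (E1, e1) = 1 in the case a != 0
E1 = sp.Matrix([a*alpha, a*beta + alpha, a*gamma - alpha]) / a**2
print(sp.simplify(A.T*E1 - a*E1), sp.simplify(E1.dot(e1)))   # zero vector, 1
```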

Now let us consider the corresponding problem (II) and (II0) with the boundary
control matrix D = (d₁, d₂, d₃)^T and the coupling matrix A given by (17.23):

u′′ − Δu + αu + (β + 1)v + (γ − 1)w = 0   in (0, +∞) × Ω,
v′′ − Δv + αu + βv + γw = 0               in (0, +∞) × Ω,
w′′ − Δw + αu + βv + γw = 0               in (0, +∞) × Ω,   (17.31)
u = v = w = 0                             on (0, +∞) × Γ₀,
∂_ν u = d₁h,  ∂_ν v = d₂h,  ∂_ν w = d₃h   on (0, +∞) × Γ₁

with the initial data

t = 0 :  (u, v, w) = (u₀, v₀, w₀),  (u′, v′, w′) = (u₁, v₁, w₁)   in Ω.   (17.32)

Since A satisfies the condition of C₁-compatibility, the reduced system (17.10) with
Ā₁ given by (17.20) is written as

y′′ − Δy + z = 0                          in (0, +∞) × Ω,
z′′ − Δz = 0                              in (0, +∞) × Ω,
y = z = 0                                 on (0, +∞) × Γ₀,   (17.33)
∂_ν y = (d₁ − d₂)h,  ∂_ν z = (d₂ − d₃)h   on (0, +∞) × Γ₁,

and the corresponding reduced adjoint system (17.13) becomes

ψ₁′′ − Δψ₁ = 0          in (0, +∞) × Ω,
ψ₂′′ − Δψ₂ + ψ₁ = 0     in (0, +∞) × Ω,
ψ₁ = ψ₂ = 0             on (0, +∞) × Γ₀,   (17.34)
∂_ν ψ₁ = ∂_ν ψ₂ = 0     on (0, +∞) × Γ₁.

Moreover, the C₁D-observation becomes

(d₁ − d₂)ψ₁ + (d₂ − d₃)ψ₂ = 0 on [0, T] × Γ₁.   (17.35)

Theorem 17.12 Let Ω satisfy the usual multiplier geometrical condition. Then,
there exists a boundary control matrix D = (d₁, d₂, d₃)^T such that system (17.31) is
approximately synchronizable. Moreover, the approximately synchronizable state u
is independent of applied boundary controls in the two cases (17.24) and (17.25).

Proof By Lemma 17.4, the approximate boundary synchronization of system (17.31)
is equivalent to the C₁D-observability of the reduced adjoint system (17.34). By The-
orem 16.16, under the multiplier geometrical condition, the reduced adjoint system
(17.34), together with the C₁D-observation (17.35), has only the trivial solution
ψ₁ ≡ ψ₂ ≡ 0, provided that d₂ − d₃ ≠ 0 and that T > 0 is large enough. We get
thus the approximate boundary synchronization of system (17.31).
More precisely, in case (17.24), we can choose

d₁ = 1/a − γ/α,  d₂ = 0,  d₃ = 1,    if α ≠ 0,
d₁ = 0,  d₂ = 1,  d₃ = −β/γ,         if α = 0 and γ ≠ 0,   (17.36)
d₁ = 0,  d₂ = −γ/β,  d₃ = 1,         if α = 0 and β ≠ 0,

while, in case (17.25), we can choose

d₁ = 0,  d₂ = −1,  d₃ = 1,   (17.37)

such that E₁^T D = 0 in both cases, where E₁ is given by (17.28) and (17.29),
respectively. Then, applying E₁ to system (17.31), and setting φ = (E₁, U_n) with
U_n = (u_n, v_n, w_n)^T, it follows that

φ′′ − Δφ + aφ = 0    in (0, +∞) × Ω,
φ = 0                on (0, +∞) × Γ₀,
∂_ν φ = 0            on (0, +∞) × Γ₁,   (17.38)
t = 0 :  φ = (E₁, Û₀),  φ′ = (E₁, Û₁)   in Ω.

Since φ is independent of applied boundary controls, and (E₁, e₁) = 1, the conver-
gence

U_n → ue₁ as n → +∞   (17.39)

in the space

C⁰([T, +∞); (H₀)³) ∩ C¹([T, +∞); (H_{−1})³)   (17.40)

will imply that

t ≥ T :  φ = (E₁, U_n) → (E₁, e₁)u = u.   (17.41)

Therefore, the approximately synchronizable state u is indeed independent of applied
boundary controls h.   □
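For a concrete sanity check (not from the original text), one can verify numerically, for an arbitrary
choice of α, β, γ in case (17.24) with α ≠ 0, that the control matrix D built from (17.36) annihilates
E₁ and satisfies the minimal rank condition (17.42) of the remark below.

```python
import numpy as np

alpha, beta, gamma = 1.0, 2.0, -0.5           # illustrative values, case (17.24) with alpha != 0
a = alpha + beta + gamma
A = np.array([[alpha, beta + 1, gamma - 1],
              [alpha, beta,     gamma    ],
              [alpha, beta,     gamma    ]])
E1 = np.array([a*alpha, a*beta + alpha, a*gamma - alpha]) / a**2   # (17.28)
D  = np.array([[1/a - gamma/alpha], [0.], [1.]])                   # choice (17.36)

print(np.allclose(E1 @ D, 0.0))                # True: E1^T D = 0
K = np.hstack([D, A @ D, A @ A @ D])
print(np.linalg.matrix_rank(K))                # 2 = N - 1: minimal rank condition
```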

Remark 17.13 In the two cases (17.24) and (17.25), under the minimal rank condition

rank(D, AD, A²D) = 2,   (17.42)

the approximately synchronizable state u is independent of applied boundary con-
trols. While, in case (17.26), since A is similar to a Jordan block of order 3, we must have
the rank condition

rank(D, AD, A²D) = 3,   (17.43)

which is necessary for the approximate boundary null controllability of the nilpotent
system (17.31). We hope that it is, but we do not know up to now whether it is also sufficient.
Chapter 18
Approximate Boundary Synchronization
by p-Groups

The approximate boundary synchronization by p-groups is introduced and studied


in this chapter for system (II) with Neumann boundary controls.

18.1 Definition

When rank(D, AD, · · ·, A^{N−1}D), the number of total controls, is further reduced,
we can consider the approximate boundary synchronization by p-groups
(p ≥ 1). In the special case p = 1, it is just the approximate boundary synchro-
nization considered in Chap. 17.
Let p ≥ 1 be an integer and let

0 = n₀ < n₁ < n₂ < · · · < n_p = N   (18.1)

be integers such that n_r − n_{r−1} ≥ 2 for all 1 ≤ r ≤ p. The approximate boundary
synchronization by p-groups means that the components of U are divided into p
groups:

(u^{(1)}, · · ·, u^{(n₁)}),  (u^{(n₁+1)}, · · ·, u^{(n₂)}),  · · ·,  (u^{(n_{p−1}+1)}, · · ·, u^{(n_p)}),   (18.2)

and each group possesses the corresponding approximate boundary synchronization,


respectively.

Definition 18.1 Let s > 1/2. System (II) is approximately synchronizable by
p-groups at T > 0 if for any given initial data (Û₀, Û₁) ∈ (H_{1−s})^N × (H_{−s})^N, there
exists a sequence {H_n} of boundary controls in L_M with compact support in [0, T],
such that the corresponding sequence {U_n} of solutions to problem (II) and (II0)
satisfies

u_n^{(k)} − u_n^{(l)} → 0 in C⁰_loc([T, +∞); H_{1−s}) ∩ C¹_loc([T, +∞); H_{−s})   (18.3)

as n → +∞ for all n_{r−1} + 1 ≤ k, l ≤ n_r and 1 ≤ r ≤ p.

Let
⎛ ⎞
1 −1
⎜ 1 −1 ⎟
⎜ ⎟
Sr = ⎜ .. .. ⎟, 1  r  p (18.4)
⎝ . . ⎠
1 −1

be an (n_r − n_{r−1} − 1) × (n_r − n_{r−1}) matrix with full row-rank, and

C_p = ( S₁           )
      (    S₂        )
      (       ⋱      )   (18.5)
      (          S_p )

be the (N − p) × N matrix of synchronization by p-groups. Clearly,

Ker(C_p) = Span{e₁, · · ·, e_p},   (18.6)

where

(e_r)_i = 1 for n_{r−1} + 1 ≤ i ≤ n_r, and (e_r)_i = 0 otherwise,   (18.7)

for 1 ≤ r ≤ p. Thus, the approximate boundary synchronization by p-groups (18.3)


can be written as

C p Un → 0 as n → +∞ (18.8)

in the space

C⁰_loc([T, +∞); (H_{1−s})^{N−p}) ∩ C¹_loc([T, +∞); (H_{−s})^{N−p}).   (18.9)
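As an illustration (not part of the original text), the synchronization matrix C_p of (18.4)–(18.5) and
the kernel property (18.6)–(18.7) are straightforward to assemble and check numerically; the group
sizes below are arbitrary.

```python
import numpy as np

def C_p(ns):
    """Synchronization matrix by p groups for 0 = n_0 < n_1 < ... < n_p = N, cf. (18.4)-(18.5)."""
    blocks = []
    for n0, n1 in zip(ns[:-1], ns[1:]):
        k = n1 - n0
        S = np.eye(k - 1, k) - np.eye(k - 1, k, 1)   # rows (1, -1, 0, ...), cf. (18.4)
        blocks.append(S)
    N, P = ns[-1], len(ns) - 1
    C = np.zeros((N - P, N))                         # block-diagonal assembly of the S_r
    r, c = 0, 0
    for S in blocks:
        C[r:r + S.shape[0], c:c + S.shape[1]] = S
        r += S.shape[0]; c += S.shape[1]
    return C

ns = [0, 2, 5]                         # p = 2 groups: (u1, u2) and (u3, u4, u5)
C = C_p(ns)
e1 = np.array([1, 1, 0, 0, 0]); e2 = np.array([0, 0, 1, 1, 1])
print(C @ e1, C @ e2)                  # zero vectors: Ker(C_p) = Span{e_1, e_2}, cf. (18.6)-(18.7)
```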

18.2 Fundamental Properties

Similarly to Lemma 13.3, the coupling matrix A satisfies the condition of


C p -compatibility if Ker(C p ) is an invariant subspace of A:

AKer(C p ) ⊆ Ker(C p ), (18.10)



or equivalently, there exists a unique matrix Ā_p of order (N − p), such that

C_pA = Ā_pC_p.   (18.11)

Ā_p is called the reduced matrix of A by C_p.
Under the condition of C_p-compatibility (18.10), setting

W_p = C_pU = (w^{(1)}, · · ·, w^{(N−p)})^T   (18.12)

and noting (18.11), from problem (II) and (II0) for U we get the following reduced
system for W_p:

W_p′′ − ΔW_p + Ā_pW_p = 0   in (0, +∞) × Ω,
W_p = 0                     on (0, +∞) × Γ₀,   (18.13)
∂_ν W_p = C_pDH             on (0, +∞) × Γ₁

with the initial data

t = 0 :  W_p = C_pÛ₀,  W_p′ = C_pÛ₁   in Ω.   (18.14)

Accordingly, let

Ψ_p = (ψ^{(1)}, · · ·, ψ^{(N−p)})^T.   (18.15)

Consider the following reduced adjoint problem:

Ψ_p′′ − ΔΨ_p + Ā_p^TΨ_p = 0   in (0, +∞) × Ω,
Ψ_p = 0                       on (0, +∞) × Γ₀,
∂_ν Ψ_p = 0                   on (0, +∞) × Γ₁,   (18.16)
t = 0 :  Ψ_p = Ψ̂_{p0},  Ψ_p′ = Ψ̂_{p1}   in Ω.

By Definitions 16.1 and 18.1, and using Theorem 16.5, we get

Lemma 18.2 Assume that the coupling matrix A satisfies the condition of
C p -compatibility (18.10). Then system (II) is approximately synchronizable by p-
groups at the time T > 0 if and only if the reduced system (18.13) is approximately
null controllable at the time T > 0, or equivalently, the reduced adjoint problem
(18.16) is C p D-observable on the time interval [0, T ], namely, the partial observa-
tion

(C p D)T ∂ν  p ≡ 0 on [0, T ] × 1 (18.17)

 p0 = 
implies   p1 ≡ 0, then  p ≡ 0.

Thus, by Theorem 16.11 and noting (18.11), it is easy to get


Theorem 18.3 Under the condition of C p -compatibility (18.10), assume that system
(II) is approximately synchronizable by p-groups, we necessarily have the following
criterion of Kalman’s type:

rank(C_pD, C_pAD, · · ·, C_pA^{N−1}D) = N − p.   (18.18)

18.3 Properties Related to the Number of Total Controls

We now consider rank(D, AD, · · ·, A^{N−1}D), the number of total (direct and indi-
rect) controls, and determine the minimal number of total controls, which is necessary
to the approximate boundary synchronization by p-groups for system (II), no matter
whether the condition of C_p-compatibility (18.10) is satisfied or not.
Similarly to Theorem 10.4, we have
Theorem 18.4 Assume that system (II) is approximately synchronizable by p-groups
under the action of a boundary control matrix D. Then, we necessarily have

rank(D, AD, · · ·, A^{N−1}D) ≥ N − p.   (18.19)

In other words, at least (N − p) total controls are needed in order to realize the
approximate boundary synchronization by p-groups for system (II).
Based on Theorem 18.4, we consider the approximate boundary synchronization
by p-groups for system (II) under the minimal rank condition

rank(D, AD, · · ·, A^{N−1}D) = N − p.   (18.20)

In this case, the coupling matrix A should possess some fundamental properties
related to C p , the matrix of synchronization by p-groups.
Similarly to Theorem 10.5, we have

Theorem 18.5 Assume that system (II) is approximately synchronizable by p-groups


under the minimal rank condition (18.20). Then we have the following assertions:
(i) The coupling matrix A satisfies the condition of C p -compatibility (18.10).
(ii) There exist some linearly independent scalar functions u₁, · · ·, u_p such that

u_n^{(k)} → u_r as n → +∞   (18.21)

for all n_{r−1} + 1 ≤ k ≤ n_r and 1 ≤ r ≤ p in the space

C⁰_loc([T, +∞); H_{1−s}) ∩ C¹_loc([T, +∞); H_{−s})   (18.22)

with s > 1/2. Moreover, the approximately synchronizable state by p-groups


u = (u 1 , · · · , u p )T is independent of applied boundary controls Hn .
(iii) A T admits an invariant subspace Span{E 1 , · · · , E p }, which is contained in
Ker(D T ) and bi-orthonormal to Ker(C p )=Span{e1 , · · · , e p }.

Remark 18.6 Under hypothesis (18.20), system (II) is approximately synchronizable


by p-groups in the pinning sense, while that originally given by Definition 18.1 is
in the consensus sense.

Remark 18.7 Since the invariant subspace Span{E₁, · · ·, E_p} of A^T is
bi-orthonormal to the invariant subspace Span{e₁, · · ·, e_p} of A, by Proposition 2.8,
the invariant subspace Span{E₁, · · ·, E_p}^⊥ of A is a supplement of Span{e₁, · · ·, e_p}.

Moreover, similarly to Theorem 10.10, we have

Theorem 18.8 Let A satisfy the condition of C p -compatibility (18.10). Assume that
Ker(C p ) admits a supplement which is also invariant for A. Then there exists a
boundary control matrix D satisfying the minimal rank condition (18.20), which
realizes the approximate boundary synchronization by p-groups for system (II) in
the pinning sense.
Part V
Synchronization for a Coupled System of
Wave Equations with Coupled Robin
Boundary Controls: Exact Boundary
Synchronization

We consider the following coupled system of wave equations with coupled Robin
boundary controls:

U′′ − ΔU + AU = 0    in (0, +∞) × Ω,
U = 0                on (0, +∞) × Γ₀,   (III)
∂_ν U + BU = DH      on (0, +∞) × Γ₁

with the initial condition

t = 0 :  U = Û₀,  U′ = Û₁   in Ω,   (III0)

where Ω ⊂ R^n is a bounded domain with smooth boundary Γ = Γ₁ ∪ Γ₀ such that
Γ̄₁ ∩ Γ̄₀ = ∅ and mes(Γ₁) > 0; " ′ " stands for the time derivative; Δ = Σ_{k=1}^n ∂²/∂x_k² is
the Laplacian operator; U = (u^{(1)}, · · ·, u^{(N)})^T and H = (h^{(1)}, · · ·, h^{(M)})^T (M ≤
N) denote the state variables and the boundary controls, respectively; the internal
coupling matrix A = (a_{ij}) and the boundary coupling matrix B are of order N,
and D as the boundary control matrix is a full column-rank matrix of order N × M,
all with constant elements.
The exact boundary synchronization and the exact boundary synchronization by
groups for system (III) will be discussed in this part, while, the approximate boundary
synchronization and the approximate boundary synchronization by groups for system
(III) will be considered in the next part (Part VI).
Chapter 19
Preliminaries on Problem (III) and (III0)

In order to consider the exact boundary controllability and the exact boundary syn-
chronization of system (III), we first give some necessary results on problem (III) and
(III0) in this chapter.


19.1 Regularity of Solutions with Neumann Boundary


Conditions

Similarly to the problem of wave equations with Neumann boundary conditions, a


problem with Robin boundary conditions no longer enjoys the hidden regularity as
in the case with Dirichlet boundary conditions. As a result, the solution to problem
(III) and (III0) with coupled Robin boundary conditions is not smooth enough in
general for the proof of the non-exact boundary controllability of the system.
In order to overcome this difficulty, we should deeply study the regularity of
solutions to wave equations with Neumann boundary conditions. For this purpose,
consider the following second-order hyperbolic problem on a bounded domain Ω ⊂
R^n (n ≥ 2) with boundary Γ:

y′′ + A(x, ∂)y = f          in (0, T) × Ω = Q,
∂y/∂ν_A = g                 on (0, T) × Γ = Σ,   (19.1)
t = 0 :  y = y₀,  y′ = y₁   in Ω,


where

A(x, ∂) = − Σ_{i,j=1}^n a_{ij}(x) ∂²/∂x_i∂x_j + Σ_{i=1}^n b_i(x) ∂/∂x_i + c₀(x),   (19.2)

in which a_{ij}(x) with a_{ij}(x) = a_{ji}(x), b_i(x) and c₀(x) (i, j = 1, · · ·, n) are smooth
real coefficients, and the principal part of A(x, ∂) is supposed to be uniformly strongly
elliptic in Ω:

Σ_{i,j=1}^n a_{ij}(x)η_iη_j ≥ c Σ_{j=1}^n η_j²   (19.3)

for all x ∈ Ω and for all η = (η₁, · · ·, η_n) ∈ R^n, where c > 0 is a positive constant;
moreover, ∂y/∂ν_A is the outward normal derivative associated with A:

∂y/∂ν_A = Σ_{i=1}^n Σ_{j=1}^n a_{ij}(x) (∂y/∂x_i) ν_j,   (19.4)

ν = (ν₁, · · ·, ν_n)^T being the unit outward normal vector on the boundary Γ.


Define the operator A by

A = A(x, ∂),   D(A) = { y ∈ H²(Ω) :  ∂y/∂ν_A = 0 on Γ }.   (19.5)

We know that if −A is closed, and if −A is maximal dissipative, namely, if
−A is dissipative: Re(Au, u) ≥ 0, ∀u ∈ D(A), and it does not possess any proper
dissipative extension, then for any given α with 0 < α < 1, the fractional power A^α
can be defined in a natural way (cf. [6, 24, 70]), for example, we have

A^α u = (sin πα / π) ∫₀^{+∞} λ^{α−1} A(λ + A)^{−1} u dλ,   u ∈ D(A).   (19.6)

We can verify that −A given by (19.5) is closed, and there exists a constant c > 0
so large that −(cI + A) is maximal dissipative, thus we can define the fractional
powers of (cI + A). Since a suitable translation of the operator does not change the
regularity of the solution in [0, T ] with T < +∞, for any α with 0 < α < 1, the
fractional powers Aα can be well defined, moreover, we have

‖y‖_{D(A^α)} = ‖A^α y‖_{L²(Ω)}.   (19.7)

In [30] (cf. also [29, 31]), Lasiecka and Triggiani got the optimal regularity for the
solution to problem (19.1) by means of the theory of cosine operator. In particular,
more regularity results can be obtained when the domain is a parallelepiped. For
conciseness and clarity, we list only those results which are needed in what follows.

Let ε > 0 be an arbitrarily given small number. Here and hereafter, we always
assume that α and β are given, respectively, as follows:

α = 3/5 − ε, β = 3/5,   if Ω is a smooth bounded domain
                        and A(x, ∂) is defined by (19.2),
α = β = 2/3,            if Ω is a sphere and A(x, ∂) = −Δ,   (19.8)
α = β = 3/4 − ε,        if Ω is a parallelepiped and A(x, ∂) = −Δ.

Lemma 19.1 Assume that y₀ ≡ y₁ ≡ 0 and f ≡ 0. For any given g ∈ L²(0, T; L²(Γ)),
problem (19.1) admits a unique solution y such that

(y, y′) ∈ C⁰([0, T]; H^α(Ω) × H^{α−1}(Ω))   (19.9)

and

y|_Σ ∈ H^{2α−1}(Σ) = L²(0, T; H^{2α−1}(Γ)) ∩ H^{2α−1}(0, T; L²(Γ)),   (19.10)

where H^α(Ω) denotes the usual Sobolev space of order α and Σ = (0, T) × Γ.

Lemma 19.2 Assume that y₀ ≡ y₁ ≡ 0 and g ≡ 0. For any given f ∈ L²(0, T; L²(Ω)),
the unique solution y to problem (19.1) satisfies

(y, y′) ∈ C⁰([0, T]; H¹(Ω) × L²(Ω))   (19.11)

and

y|_Σ ∈ H^β(Σ).   (19.12)

Lemma 19.3 Assume that f ≡ 0 and g ≡ 0.
(1) If (y₀, y₁) ∈ H¹(Ω) × L²(Ω), then problem (19.1) admits a unique solution
y such that

(y, y′) ∈ C⁰([0, T]; H¹(Ω) × L²(Ω))   (19.13)

and

y|_Σ ∈ H^β(Σ).   (19.14)

(2) If (y₀, y₁) ∈ L²(Ω) × (H¹(Ω))′, where (H¹(Ω))′ denotes the dual space of
H¹(Ω) with respect to L²(Ω), then problem (19.1) admits a unique solution y such
that

(y, y′) ∈ C⁰([0, T]; L²(Ω) × (H¹(Ω))′)   (19.15)

and

y|_Σ ∈ H^{α−1}(Σ).   (19.16)

Remark 19.4 The regularities given in Lemmas 19.1, 19.2, and 19.3 remain true
when we replace the Neumann boundary condition on the whole boundary Γ by
the homogeneous Dirichlet boundary condition on Γ₀ and the Neumann boundary
condition on Γ₁ with Γ₀ ∪ Γ₁ = Γ and Γ̄₀ ∩ Γ̄₁ = ∅.

Remark 19.5 In the results mentioned above, the mappings from the given data to
the solution are all continuous with respect to the corresponding topologies.

19.2 Well-Posedness of a Coupled System of Wave Equations with Coupled Robin Boundary Conditions

We now prove the well-posedness of problem (III) and (III0). Throughout this part
and the next part, when mes(Γ₀) > 0, we define

H₀ = L²(Ω),   H₁ = H¹_{Γ₀}(Ω),   (19.17)

in which H¹_{Γ₀}(Ω) is the subspace of H¹(Ω), composed of all the functions with the
null trace on Γ₀, while, when mes(Γ₀) = 0, instead of (19.17), we define

H₀ = { u :  u ∈ L²(Ω),  ∫_Ω u dx = 0 },   H₁ = H¹(Ω) ∩ H₀.   (19.18)

For simplifying the statement, we often assume that mes(Γ₀) > 0; however, all the
conclusions are still valid when mes(Γ₀) = 0.
Let Φ = (φ^{(1)}, · · ·, φ^{(N)})^T. We first consider the following adjoint system:

Φ′′ − ΔΦ + A^TΦ = 0    in (0, +∞) × Ω,
Φ = 0                  on (0, +∞) × Γ₀,   (19.19)
∂_ν Φ + B^TΦ = 0       on (0, +∞) × Γ₁

with the initial data

t = 0 :  Φ = Φ̂₀,  Φ′ = Φ̂₁   in Ω,   (19.20)

where A^T and B^T denote the transposes of A and B, respectively.

Theorem 19.6 Assume that B is similar to a real symmetric matrix. Then for any
given (Φ̂₀, Φ̂₁) ∈ (H₁)^N × (H₀)^N, the adjoint problem (19.19), (19.20) admits a
unique weak solution

(Φ, Φ′) ∈ C⁰_loc([0, +∞); (H₁)^N × (H₀)^N)   (19.21)

in the sense of C₀-semigroup, where H₁ and H₀ are defined by (19.17).



Proof Without loss of generality, we assume that B is a real symmetric matrix.
We first formulate system (19.19) into the following variational form:

∫_Ω (Φ′′, Φ̃) dx + ∫_Ω ⟨∇Φ, ∇Φ̃⟩ dx + ∫_{Γ₁} (Φ, BΦ̃) dΓ + ∫_Ω (Φ, AΦ̃) dx = 0   (19.22)

for any given test function Φ̃ ∈ (H₁)^N, where (·, ·) denotes the inner product of R^N,
while ⟨·, ·⟩ denotes the inner product of M_{N×N}(R).
Recalling the following interpolation inequality [66]:

∫_{Γ₁} |φ|² dΓ ≤ c‖φ‖_{H¹(Ω)} ‖φ‖_{L²(Ω)},   ∀φ ∈ H¹(Ω),

we have

∫_{Γ₁} (Φ, BΦ) dΓ ≤ ‖B‖ ∫_{Γ₁} |Φ|² dΓ ≤ c‖B‖ ‖Φ‖_{(H₁)^N} ‖Φ‖_{(H₀)^N},

then it is easy to see that

∫_Ω ⟨∇Φ, ∇Φ⟩ dx + ∫_{Γ₁} (Φ, BΦ) dΓ + λ‖Φ‖²_{(H₀)^N} ≥ c‖Φ‖²_{(H₁)^N}

for some suitable constants λ > 0 and c > 0. Therefore, the bilinear symmetric form

∫_Ω ⟨∇Φ, ∇Φ̃⟩ dx + ∫_{Γ₁} (Φ, BΦ̃) dΓ

is coercive in (H₁)^N × (H₀)^N. Moreover, the nonsymmetric part in (19.22) satisfies

∫_Ω (Φ, AΦ̃) dx ≤ ‖A‖ ‖Φ‖_{(H₀)^N} ‖Φ̃‖_{(H₀)^N}.

By Theorem 1.1 of Chap. 8 in [60], the variational problem (19.22) with the initial
data (19.20) admits a unique solution Φ with the smoothness (19.21). The proof is
complete.   □

Remark 19.7 From now on, in order to guarantee the well-posedness of problem
(III) and (III0), we always assume that B is similar to a real symmetric matrix.

Definition 19.8 U is a weak solution to problem (III) and (III0) if

U ∈ C⁰_loc([0, +∞); (H₀)^N) ∩ C¹_loc([0, +∞); (H_{−1})^N),   (19.23)

where H_{−1} denotes the dual space of H₁, such that for any given (Φ̂₀, Φ̂₁) ∈ (H₁)^N ×
(H₀)^N and for all given t ≥ 0, we have

⟨(U′(t), −U(t)), (Φ(t), Φ′(t))⟩   (19.24)
= ⟨(Û₁, −Û₀), (Φ̂₀, Φ̂₁)⟩ + ∫₀^t ∫_{Γ₁} (DH(τ), Φ(τ)) dΓ dτ,

in which Φ(t) is the solution to the adjoint problem (19.19) and (19.20), and ⟨·, ·⟩
denotes the duality between the spaces (H_{−1})^N × (H₀)^N and (H₁)^N × (H₀)^N.

Theorem 19.9 Assume that B is similar to a real symmetric matrix. For any given
H ∈ L²_loc(0, +∞; (L²(Γ₁))^M) and (Û₀, Û₁) ∈ (H₀)^N × (H_{−1})^N, problem (III) and
(III0) admits a unique weak solution U. Moreover, the mapping

(Û₀, Û₁, H) → (U, U′)   (19.25)

is continuous with respect to the corresponding topologies.

Proof Let Φ be the solution to the adjoint problem (19.19) and (19.20). Define a
linear functional as follows:

L_t(Φ̂₀, Φ̂₁)   (19.26)
= ⟨(Û₁, −Û₀), (Φ̂₀, Φ̂₁)⟩ + ∫₀^t ∫_{Γ₁} (DH(τ), Φ(τ)) dΓ dτ.

Clearly, L_t is bounded in (H₁)^N × (H₀)^N. Let S_t be the semigroup in (H₁)^N ×
(H₀)^N, corresponding to the adjoint problem (19.19), (19.20). L_t ∘ S_t^{−1} is bounded
in (H₁)^N × (H₀)^N. Then, by the Riesz–Fréchet representation theorem, for any given
(Φ̂₀, Φ̂₁) ∈ (H₁)^N × (H₀)^N, there exists a unique (U′(t), −U(t)) ∈ (H_{−1})^N × (H₀)^N,
such that

L_t ∘ S_t^{−1}(Φ(t), Φ′(t)) = ⟨(U′(t), −U(t)), (Φ(t), Φ′(t))⟩.   (19.27)

By

L_t ∘ S_t^{−1}(Φ(t), Φ′(t)) = L_t(Φ̂₀, Φ̂₁)   (19.28)

for any given (Φ̂₀, Φ̂₁) ∈ (H₁)^N × (H₀)^N, (19.24) holds, then (U, U′) is the unique
weak solution to problem (III) and (III0). Moreover, we have

‖(U′(t), −U(t))‖_{(H_{−1})^N × (H₀)^N} = ‖L_t ∘ S_t^{−1}‖   (19.29)
≤ c(‖(Û₀, Û₁)‖_{(H₀)^N × (H_{−1})^N} + ‖H‖_{L²(0,T;(L²(Γ₁))^M)})

for all t ∈ [0, T].
At last, by a classic argument of density, we obtain the regularity desired by
(19.23).   □

19.3 Regularity of Solutions to Problem (III) and (III0)

We have

Theorem 19.10 Assume that B is similar to a real symmetric matrix. For any
given H ∈ L²(0, T; (L²(Γ₁))^M) and any given (Û₀, Û₁) ∈ (H₁)^N × (H₀)^N, the
weak solution U to problem (III) and (III0) satisfies

(U, U′) ∈ C⁰([0, T]; (H^α(Ω))^N × (H^{α−1}(Ω))^N)   (19.30)

and

U|_{Σ₁} ∈ (H^{2α−1}(Σ₁))^N,   (19.31)

where Σ₁ = (0, T) × Γ₁ and α is defined by (19.8). Moreover, the linear mapping

(Û₀, Û₁, H) → (U, U′)

is continuous with respect to the corresponding topologies.

Proof Noting (19.13) and (19.14) in Lemma 19.3, it is easy to see that we need only
to consider the case that Û₀ ≡ Û₁ ≡ 0. Assume that Ω is sufficiently smooth, for
example, with C³ boundary; then there exists a function h ∈ C²(Ω̄), such that

∇h = ν on Γ₁,   (19.32)

where ν is the unit outward normal vector on the boundary Γ₁ (cf. [62]). Let λ be
an eigenvalue of B^T and let e be the corresponding eigenvector:

B^Te = λe.

Defining

φ = (e, U),   (19.33)

we have

φ′′ − Δφ = −(e, AU)       in (0, T) × Ω,
φ = 0                     on (0, T) × Γ₀,
∂_ν φ + λφ = (e, DH)      on (0, T) × Γ₁,   (19.34)
t = 0 :  φ = 0,  φ′ = 0   in Ω.

Let

ψ = e^{λh}φ.   (19.35)

Problem (III) and (III0) can be rewritten to the following problem with Neumann
boundary conditions:
ψ′′ − Δψ + b(ψ) = −e^{λh}(e, AU)   in (0, T) × Ω,
ψ = 0                              on (0, T) × Γ₀,
∂_ν ψ = e^{λh}(e, DH)              on (0, T) × Γ₁,   (19.36)
t = 0 :  ψ = 0,  ψ′ = 0            in Ω,

where b(ψ) = 2λ∇h · ∇ψ + λ(Δh − λ|∇h|²)ψ is a first-order linear form of ψ with
smooth coefficients.
By Theorem 19.9, U ∈ C⁰([0, T]; (H₀)^N). By (19.11) in Lemma 19.2 and
Remark 19.4, the solution ψ to the following problem with homogeneous Neumann
boundary conditions:

ψ′′ − Δψ + b(ψ) = −e^{λh}(e, AU)   in (0, T) × Ω,
ψ = 0                              on (0, T) × Γ₀,
∂_ν ψ = 0                          on (0, T) × Γ₁,   (19.37)
t = 0 :  ψ = 0,  ψ′ = 0            in Ω

satisfies

(ψ, ψ′) ∈ C⁰([0, T]; H¹(Ω) × L²(Ω)).   (19.38)

Next, we consider the following problem with inhomogeneous Neumann bound-
ary conditions but without internal force terms:

ψ′′ − Δψ + b(ψ) = 0       in (0, T) × Ω,
ψ = 0                     on (0, T) × Γ₀,
∂_ν ψ = e^{λh}(e, DH)     on (0, T) × Γ₁,   (19.39)
t = 0 :  ψ = 0,  ψ′ = 0   in Ω.

By (19.9) and (19.10) in Lemma 19.1 and Remark 19.4, we have

(ψ, ψ′) ∈ C⁰([0, T]; H^α(Ω) × H^{α−1}(Ω))   (19.40)

and

ψ|_{Σ₁} ∈ H^{2α−1}(Σ₁),   (19.41)

where α is given by (19.8). Since this regularity result holds for all the eigenvectors
of B^T, and the eigenvectors of B^T constitute a basis of R^N, we get the
desired (19.30) and (19.31).   □
Chapter 20
Exact Boundary Controllability and
Non-exact Boundary Controllability


In this chapter, we will study the exact boundary controllability and the non-exact
boundary controllability for the coupled system (III) of wave equations with coupled
Robin boundary controls. We will prove that when the number of boundary controls
is equal to N, the number of state variables, system (III) is exactly controllable for
any given initial data (Û₀, Û₁) ∈ (H₁)^N × (H₀)^N, while, when M < N and Ω is a
parallelepiped, system (III) is not exactly boundary controllable in (H₁)^N × (H₀)^N.

20.1 Exact Boundary Controllability

Definition 20.1 System (III) is exactly null controllable in the space (H₁)^N ×
(H₀)^N, if there exists a positive constant T > 0, such that for any given (Û₀, Û₁) ∈
(H₁)^N × (H₀)^N, there exists a boundary control H ∈ L²(0, T; (L²(Γ₁))^M), such
that problem (III) and (III0) admits a unique weak solution U satisfying the final
condition

t = T :  U = U′ = 0.   (20.1)

Theorem 20.2 Assume that M = N. Assume furthermore that B is similar to a real
symmetric matrix. Then there exists a time T > 0, such that for any given initial data
(Û₀, Û₁) ∈ (H₁)^N × (H₀)^N, there exists a boundary control H ∈ L²(0, T; (L²(Γ₁))^N),
such that system (III) is exactly null controllable at the time T and the boundary con-
trol continuously depends on the initial data:

‖H‖_{L²(0,T;(L²(Γ₁))^N)} ≤ c‖(Û₀, Û₁)‖_{(H₁)^N × (H₀)^N},   (20.2)

where c > 0 is a positive constant.


© Springer Nature Switzerland AG 2019 239
T. Li and B. Rao, Boundary Synchronization for Hyperbolic Systems,
Progress in Nonlinear Differential Equations and Their Applications 94,
https://doi.org/10.1007/978-3-030-32849-8_20
240 20 Exact Boundary Controllability and Non-exact Boundary Controllability

Proof We first consider the corresponding problem (II) and (II0). By Theorem 12.14,
for any given initial data (Û₀, Û₁) ∈ (H₁)^N × (H₀)^N, there exists a boundary con-
trol H̃ ∈ L²_loc(0, +∞; (L²(Γ₁))^N) with compact support in [0, T], such that system
(II) with Neumann boundary controls is exactly controllable at the time T, and the
boundary control H̃ continuously depends on the initial data:

‖H̃‖_{L²(0,T;(L²(Γ₁))^N)} ≤ c₁‖(Û₀, Û₁)‖_{(H₁)^N × (H₀)^N},   (20.3)

where c₁ > 0 is a positive constant.
Noting that M = N, D is invertible and the boundary condition in system (II)

∂_ν U = DH̃ on (0, T) × Γ₁   (20.4)

can be rewritten as

∂_ν U + BU = D(H̃ + D^{−1}BU) := DH on (0, T) × Γ₁.   (20.5)

Thus, problem (II) and (II0) with (20.4) can be equivalently regarded as problem
(III) and (III0) with (20.5). In other words, the boundary control H given by

H = H̃ + D^{−1}BU on (0, T) × Γ₁,   (20.6)

where U is the solution to problem (II) and (II0) with (20.4), realizes the exact
boundary controllability of system (III).
It remains to check that H given in (20.6) belongs to the control space L²(0, T;
(L²(Γ₁))^N) with the continuous dependence (20.2). By the regularity result given in
Theorem 19.10 (in which we take B = 0), the trace U|_{Σ₁} ∈ (H^{2α−1}(Σ₁))^N, where α is
defined by (19.8). Since 2α − 1 > 0, we have H ∈ L²(0, T; (L²(Γ₁))^N). Moreover,
still by Theorem 19.9, we have

‖U‖_{(L²(Σ₁))^N} ≤ c₂(‖(Û₀, Û₁)‖_{(H₁)^N × (H₀)^N} + ‖H̃‖_{L²(0,T;(L²(Γ₁))^N)}),   (20.7)

where c₂ > 0 is another positive constant. Noting (20.6), (20.2) follows from (20.3)
and (20.7). The proof is complete.   □

20.2 Non-exact Boundary Controllability

Differently from the cases with Neumann boundary controls, the non-exact bound-
ary controllability for a coupled system with coupled Robin boundary controls in
a general domain is still an open problem. Fortunately, for some special domain,
the solutions to problem (III) and (III0) may possess higher regularity. In particu-
lar, when  is a parallelepiped, the optimal regularity of trace U |1 almost reaches
20.2 Non-exact Boundary Controllability 241

H 1/2 (1 ). This benefits a lot in the proof of the non-exact boundary controllability
with fewer boundary controls.

Lemma 20.3 Let L be a compact linear mapping from L²(Ω) to L²(0, T; L²(Ω)),
and let R be a compact linear mapping from L²(Ω) to L²(0, T; H^{1−α}(Γ₁)), where α
is defined by (19.8). Then we cannot find a T > 0, such that for any given θ ∈ L²(Ω),
the solution to the following problem:

w′′ − Δw = Lθ             in (0, T) × Ω,
w = 0                     on (0, T) × Γ₀,
∂_ν w = Rθ                on (0, T) × Γ₁,   (20.8)
t = 0 :  w = 0,  w′ = θ   in Ω

satisfies the final condition

w(T) = w′(T) = 0.   (20.9)

Proof For any given θ ∈ L²(Ω), let φ be the solution to the following problem:

φ′′ − Δφ = 0              in (0, T) × Ω,
φ = 0                     on (0, T) × Γ₀,
∂_ν φ = 0                 on (0, T) × Γ₁,   (20.10)
t = 0 :  φ = θ,  φ′ = 0   in Ω.

By (19.13) and (19.14) in Lemma 19.3, we have

‖φ‖_{L²(0,T;L²(Ω))} ≤ c‖θ‖_{L²(Ω)}   (20.11)

and

‖φ‖_{L²(0,T;H^{α−1}(Γ₁))} ≤ c‖θ‖_{L²(Ω)}.   (20.12)

Noting (20.9) and taking the inner product with φ on both sides of (20.8) and
integrating by parts, it is easy to get

‖θ‖²_{L²(Ω)} = ∫₀^T ∫_Ω Lθ φ dx dt + ∫₀^T ∫_{Γ₁} Rθ φ dΓ dt.   (20.13)

Noting (20.11)–(20.12), we then have

‖θ‖_{L²(Ω)} ≤ c(‖Lθ‖_{L²(0,T;L²(Ω))} + ‖Rθ‖_{L²(0,T;H^{1−α}(Γ₁))})   (20.14)

for all θ ∈ L²(Ω), which contradicts the compactness of L and R.   □

Remark 20.4 When Ω is a parallelepiped with Γ₀ = ∅, by Fourier analysis, the
solution to the adjoint problem (19.19) and (19.20) is sufficiently smooth. Therefore,
Lemma 20.3 is still valid.

Theorem 20.5 Assume that rank(D) = M < N and that B is similar to a real sym-
metric matrix. Assume furthermore that Ω ⊂ R^n is a parallelepiped with Γ₀ = ∅.
Then, no matter how large T > 0 is, system (III) is not exactly null controllable for
any given initial data (Û₀, Û₁) ∈ (H₁)^N × (H₀)^N.

Proof Since M < N, there exists an e ∈ R^N, such that D^Te = 0. Take the special
initial data

t = 0 :  U = 0,  U′ = eθ   (20.15)

for system (III). If the system is exactly controllable for any given θ ∈ L²(Ω) at the
time T > 0, then there exists a boundary control H ∈ L²(0, T; (L²(Γ₁))^M), such that
the corresponding solution satisfies

U(T) = U′(T) = 0.   (20.16)

Let

w = (e, U),   Lθ = −(e, AU),   Rθ = −(e, BU)|_{Γ₁}.   (20.17)

Noting that D^Te = 0, we see that w satisfies problem (20.8) and the final condition
(20.9).
By Lemma 20.3, in order to show Theorem 20.5, it suffices to show that the linear
mapping L is compact from L²(Ω) to L²(0, T; L²(Ω)), and R is compact from
L²(Ω) to L²(0, T; H^{1−α}(Γ₁)).
Assume that system (III) with the special initial data (20.15) is exactly null control-
lable. The linear mapping θ → H is continuous from L²(Ω) to L²(0, T; (L²(Γ₁))^M).
By Theorem 19.10, (θ, H) → (U, U′) is a continuous mapping from L²(Ω) ×
L²(0, T; (L²(Γ₁))^M) to C⁰([0, T]; (H^α(Ω))^N) ∩ C¹([0, T]; (H^{α−1}(Ω))^N). Besides,
by Lions' compact embedding theorem (cf. Theorem 5.1 in [63]), the embedding
L²(0, T; (H^α(Ω))^N) ∩ H¹(0, T; (H^{α−1}(Ω))^N) ⊂ L²(0, T; (L²(Ω))^N) is compact,
hence the linear mapping L is compact from L²(Ω) to L²(0, T; L²(Ω)).
On the other hand, by (19.31), H → U|_{Σ₁} is a continuous mapping from L²(0, T;
(L²(Γ₁))^M) to (H^{2α−1}(Σ₁))^N, then R : θ → −(e, BU)|_{Σ₁} is a continuous mapping
from L²(Ω) into H^{2α−1}(Σ₁). When Ω is a parallelepiped, α = 3/4 − ε, then 2α −
1 > 1 − α, hence H^{2α−1}(Σ₁) → H^{1−α}(Σ₁) is a compact embedding; therefore, R is
a compact mapping from L²(Ω) to H^{1−α}(Σ₁). The proof is complete.   □

Remark 20.6 We obtain the non-exact boundary controllability for system (III) with
coupled Robin boundary controls in a parallelepiped  when there is a lack of
boundary controls. The main idea is to use the compact perturbation theory which
has a higher requirement on the regularity of the solution to the original problem
with coupled Robin boundary condition. How to generalize this result to the general
domain is still an open problem.
Chapter 21
Exact Boundary Synchronization

Based on the results of the exact boundary controllability and the non-exact boundary
controllability, we study the exact boundary synchronization for system (III) with
coupled Robin boundary controls.

21.1 Exact Boundary Synchronization

Based on the results of the exact boundary controllability and the non-exact boundary
controllability, we continue to study the exact boundary synchronization for system
(III) under coupled Robin boundary controls. Let
⎛ ⎞
1 −1
⎜ 1 −1 ⎟
⎜ ⎟
C1 = ⎜ .. .. ⎟ (21.1)
⎝ . . ⎠
1 −1 (N −1)×N

be the corresponding full row-rank matrix of synchronization. We have

Ker(C1 ) = Span{e1 }, (21.2)

where e1 = (1, · · · , 1)T .

Definition 21.1 System (III) is exactly synchronizable at the time T > 0 if for any
given initial data (Û₀, Û₁) ∈ (H₁)^N × (H₀)^N, there exists a boundary control H ∈
L²_loc(0, +∞; (L²(Γ₁))^M) with compact support in [0, T], such that the corresponding
solution U = U(t, x) to the mixed problem (III) and (III0) satisfies

t ≥ T :  C₁U ≡ 0 in Ω.   (21.3)


Theorem 21.2 Let  ⊂ Rn be a parallelepiped. Assume that the coupled system


(III) of wave equations with coupled Robin boundary controls is exactly boundary
synchronizable. Then we have

rank(C1 D) = N − 1. (21.4)

Proof If Ker(D^T) ∩ Im(C₁^T) = {0}, then, by Proposition 2.11, we get

rank(C₁D) = rank(D^TC₁^T) = rank(C₁^T) = N − 1.   (21.5)

Otherwise, if Ker(D^T) ∩ Im(C₁^T) ≠ {0}, then there exists a vector E ≠ 0, such that

D^TC₁^TE = 0.   (21.6)

Let

w = (E, C₁U),   Lθ = −(E, C₁AU),   Rθ = −(E, C₁BU).   (21.7)

For any given initial data (Û₀, Û₁) = (0, θe₁), where θ ∈ L²(Ω), we can still get
problem (20.8) for w. By the exact boundary synchronization of system (III), for
this initial data, there exists a boundary control H ∈ L²(0, T; (L²(Γ₁))^M), such that
(21.3) holds, then the corresponding solution w satisfies (20.9). Similarly to the
proof of Theorem 20.5, when Ω is a parallelepiped, we can prove that L is a compact
linear mapping from L²(Ω) to L²(0, T; L²(Ω)), and R is a compact linear map-
ping from L²(Ω) to L²(0, T; H^{1−α}(Γ₁)). This contradicts Lemma 20.3. The proof is
complete.   □

21.2 Conditions of C1 -Compatibility

Remark 21.3 If rank(D) = N , by Theorem 20.2, system (III) is exactly null con-
trollable, then exactly synchronizable. In order to exclude this trivial situation, in
what follows, we always assume that rank(D) = N − 1.

Theorem 21.4 Let  ⊂ Rn be a parallelepiped. Assume that rank(D) = N − 1


and that B is similar to a real symmetric matrix. If the coupled system (III) of wave
equations with coupled Robin boundary controls is exactly boundary synchronizable,
then we have the following conditions of C1 -compatibility:

AK er (C1 ) ⊆ K er (C1 ) and B K er (C1 ) ⊆ K er (C1 ). (21.8)

Proof Let e1 = (1, · · · , 1)T . Then, we have

tT : U = ue1 , (21.9)

where u is the corresponding exactly synchronizable state.



Taking the inner product with C₁ on both sides of (III), we get

t ≥ T :  C₁Ae₁u = 0 in Ω   and   C₁Be₁u = 0 on Γ₁.   (21.10)

By Theorem 20.5, system (III) is not exactly boundary controllable when rank(D) =
N − 1, hence there exists at least one initial datum (Û₀, Û₁) such that

t ≥ T :  u ≢ 0 in Ω.   (21.11)

Therefore, we get C₁Ae₁ = 0. Noting that Ker(C₁) = Span{e₁}, we have

AKer(C₁) ⊆ Ker(C₁).   (21.12)

We next prove C₁Be₁ = 0. Otherwise, by the second formula in (21.10), we get
u|_{Γ₁} ≡ 0. By (21.9), it is easy to see that as t ≥ T, system (III) becomes

t ≥ T :  u′′ − Δu + au = 0   in Ω,
         u = 0               on Γ₀,   (21.13)
         ∂_ν u = u = 0       on Γ₁,

where a is defined by (17.8). Therefore, by Holmgren's uniqueness theorem (cf.
Theorem 8.2 in [62]), we have u ≡ 0 as t ≥ T. This contradicts the non-exact boundary
controllability of system (III). The proof is then complete.   □

Theorem 21.5 Assume that rank(D) = N − 1 and that B is similar to a real symmetric matrix. Assume furthermore that both A and B satisfy the conditions of C1-compatibility (21.8). Then there exists a boundary control matrix D with rank(C1D) = N − 1, such that system (III) is exactly synchronizable, and the boundary control H possesses the following continuous dependence:

‖H‖_{L²(0,T;(L²(Γ1))^{N−1})} ≤ c ‖C1(Û0, Û1)‖_{(H1)^{N−1}×(H0)^{N−1}}.    (21.14)

Proof Since both A and B satisfy the conditions of C1-compatibility (21.8), by Proposition 2.15, there exist matrices A1 and B1 of order (N − 1), such that

C1A = A1C1   and   C1B = B1C1.    (21.15)

Let

W = C1U   and   D1 = C1D.    (21.16)

W satisfies

⎧ W″ − ΔW + A1W = 0   in (0, T) × Ω,
⎨ W = 0               on (0, T) × Γ0,    (21.17)
⎩ ∂νW + B1W = D1H     on (0, T) × Γ1

with the initial data

t = 0 :   W = C1Û0,   W′ = C1Û1   in Ω.    (21.18)

Noting that C1 is a surjection from R^N to R^{N−1}, any given initial data (Û0, Û1) ∈ (H1)^N × (H0)^N corresponds to a unique initial data (C1Û0, C1Û1) of the reduced system (21.17). Then, it follows that the exact boundary synchronization for system (III) is equivalent to the exact boundary null controllability for the reduced system (21.17), and then the boundary control H, which realizes the exact boundary null controllability for the reduced system (21.17), is just the boundary control which realizes the exact boundary synchronization for system (III).
By Proposition 2.21, when B is similar to a real symmetric matrix, its reduced matrix B1 is also similar to a real symmetric matrix, which guarantees the well-posedness of the reduced problem (21.17) and (21.18).
Defining the boundary control matrix D by D = C1^T, then

D1 = C1D = C1C1^T    (21.19)

is an invertible matrix of order (N − 1). By Theorem 20.2, the reduced system (21.17) is exactly boundary controllable, then system (III) is exactly synchronizable. Moreover, by (20.2), we get (21.14).  □
Chapter 22
Determination of Exactly Synchronizable
States

When system (III) possesses the exact boundary synchronization, the corresponding
exactly synchronizable states will be studied in this chapter.

Although, under certain conditions, the exact boundary synchronization can be realized by fewer boundary controls, the exactly synchronizable states are a priori unknown; in general, they depend not only on the given initial data, but also on the applied boundary controls. In this chapter, we will discuss the determination and the estimate of exactly synchronizable states.

22.1 Determination of Exactly Synchronizable States

Theorem 22.1 Assume that both A and B satisfy the conditions of C1 -compatibility
(21.8) and that B is similar to a real symmetric matrix. Assume furthermore that A T
and B T have a common eigenvector E 1 ∈ R N , such that (E 1 , e1 ) = 1, where e1 =
(1, · · · , 1)T . Then there exists a boundary control matrix D with rank(D) = N − 1,
such that system (III) is exactly synchronizable, and the exactly synchronizable state
is independent of applied boundary controls.

Proof Taking the boundary control matrix D such that

Ker(D T ) = Span{E 1 }, (22.1)

it is evident that rank(D) = N − 1. Noticing that (E 1 , e1 ) = 1, we have

Ker(C1 ) ∩ Im(D) = Span(e1 ) ∩ {Span(E 1 )}⊥ = {0}. (22.2)


Thus, by Proposition 2.15, we have

rank(C1 D) = rank(D) = N − 1. (22.3)

Then, by Theorem 21.5, system (III) is exactly synchronizable.


Next, we prove that the exactly synchronizable state is independent of boundary
controls which realize the exact boundary synchronization. Noting that E 1 is a com-
mon eigenvector of A T and B T , and B is similar to a real symmetric matrix, there
exist λ ∈ C and μ ∈ R, such that

A T E 1 = λE 1 and B T E 1 = μE 1 . (22.4)

Noting (22.1), we have D^T E1 = 0, then φ = (E1, U) satisfies

⎧ φ″ − Δφ + λφ = 0                          in (0, +∞) × Ω,
⎪ φ = 0                                     on (0, +∞) × Γ0,
⎨ ∂νφ + μφ = 0                              on (0, +∞) × Γ1,    (22.5)
⎩ t = 0 :  φ = (E1, Û0),  φ′ = (E1, Û1)      in Ω.

Obviously, the solution φ to this problem is independent of boundary controls H. On the other hand, noting that

t ≥ T :   φ = (E1, U) = (E1, e1)u = u,    (22.6)

the exactly synchronizable state is determined by the solution to problem (22.5), and is independent of boundary controls H. The proof is complete.  □

Remark 22.2 By Proposition 6.12, a matrix D satisfying rank(C1D) = rank(D) = N − 1 has the following form:

D = C1^T D1 + e1D0,

where D1 is an invertible matrix of order (N − 1) and D0 is a matrix of order 1 × (N − 1). Furthermore, if (E1, e1) = 1 and D^T E1 = 0, then

E1^T C1^T D1 + D0 = 0,

and it follows that

D = (I − e1E1^T) C1^T D1.
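As a quick verification (a routine computation added here, not part of the original argument), the last formula is indeed consistent with D^T E1 = 0: since (E1, e1) = 1,

$$
D^TE_1=D_1^TC_1\,(I-E_1e_1^T)\,E_1=D_1^TC_1\bigl(E_1-(e_1,E_1)E_1\bigr)=0 .
$$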

Theorem 22.3 Assume that both A and B satisfy the conditions of C1 -compatibility
(21.8). Assume furthermore that B is similar to a real symmetric matrix. Assume
finally that system (III) is exactly synchronizable. Let E1 ≠ 0 be a vector in R^N,
such that the projection φ = (E 1 , U ) is independent of boundary controls H which

realize the exact boundary synchronization. Then E1 must be a common eigenvector of A^T and B^T, E1 ∈ Ker(D^T), and (E1, e1) ≠ 0.

Proof Taking (Û0, Û1) = (0, 0) in problem (III) and (III0), by Theorem 19.10, the linear mapping

F : H → (U, U′)

is continuous from L²(0, T; (L²(Γ1))^M) to C⁰([0, T]; (H^α(Ω))^N × (H^{α−1}(Ω))^N), where α is defined by (19.8). Let Û be the Fréchet derivative of U in the direction of Ĥ:

Û = F′(0)Ĥ.    (22.7)

By linearity, Û satisfies the same problem as U:

⎧ Û″ − ΔÛ + AÛ = 0       in (0, +∞) × Ω,
⎪ Û = 0                  on (0, +∞) × Γ0,
⎨ ∂νÛ + BÛ = DĤ          on (0, +∞) × Γ1,    (22.8)
⎩ t = 0 :  Û = Û′ = 0     in Ω.

Since the projection φ = (E1, U) is independent of boundary controls H which realize the exact boundary synchronization, we have

(E1, Û) ≡ 0,   ∀ Ĥ ∈ L²(0, T; (L²(Γ1))^M).    (22.9)

We first prove that E1 ∉ Im(C1^T). Otherwise, there exists a vector R ∈ R^{N−1}, such that E1 = C1^T R, then

(E1, Û) = (R, C1Û) = 0.    (22.10)

Noting that C1Û is the solution to the reduced system (21.17) with the zero initial data, by the exact boundary synchronization of system (III), the reduced system (21.17) has the exact boundary controllability, then the value of C1Û at the time T can be arbitrarily chosen, and as a result, from (22.10), we get

R = 0,

which contradicts the fact that E1 ≠ 0, then E1 ∉ Im(C1^T). Hence, noting (21.2), we can choose E1 such that

(E1, e1) = 1.

Thus, {E1, C1^T} constitutes a basis in R^N, and hence there exist a λ ∈ C and a vector Q ∈ R^{N−1}, such that

A^T E1 = λE1 + C1^T Q.    (22.11)

Then by (22.11), taking the inner product with E1 on both sides of the equations in (22.8), and noting (22.9), we get

0 = (AÛ, E1) = (Û, A^T E1) = (Û, C1^T Q) = (C1Û, Q).    (22.12)

Thus, by the exact boundary controllability for the reduced system (21.17), we can get Q = 0, then it follows from (22.11) that

A^T E1 = λE1.

On the other hand, by (22.10), taking the inner product with E1 on both sides of the boundary condition on Γ1 in (22.8), we get

(E1, BÛ) = (E1, DĤ)   on Γ1.    (22.13)

By Theorem 19.10, we have

‖(E1, DĤ)‖_{H^{2α−1}((0,T)×Γ1)} = ‖(E1, BÛ)‖_{H^{2α−1}((0,T)×Γ1)} ≤ c ‖Ĥ‖_{L²(0,T;(L²(Γ1))^M)},    (22.14)

here and hereafter, c denotes a positive constant.
We claim that D^T E1 = 0, namely, E1 ∈ Ker(D^T). Otherwise, taking Ĥ = D^T E1 v in (22.14), we get a contradiction:

‖v‖_{H^{2α−1}(Γ1)} ≤ c ‖v‖_{L²(0,T;L²(Γ1))},   ∀v ∈ H^{2α−1}(Γ1),    (22.15)

because of 2α − 1 > 0.
Thus, we have

(E1, BÛ) = 0   on Γ1.    (22.16)

Similarly, since {E1, C1^T} constitutes a basis in R^N, there exist a μ ∈ R and a vector P ∈ R^{N−1}, such that

B^T E1 = μE1 + C1^T P.    (22.17)

Substituting it into (22.16) and noting (22.9), we have

μ(E1, Û) + (C1^T P, Û) = (P, C1Û) = 0,    (22.18)

then by the exact boundary controllability for the reduced system (21.17), we get P = 0, and hence we have

B^T E1 = μE1.

Thus, E1 must be a common eigenvector of A^T and B^T. The proof is complete.  □



22.2 Estimation of Exactly Synchronizable States

Theorem 22.4 Assume that both A and B satisfy the conditions of C1-compatibility (21.8), namely, there exist real numbers a and b such that

Ae1 = ae1   and   Be1 = be1   with e1 = (1, · · · , 1)^T.    (22.19)

Assume furthermore that B is similar to a real symmetric matrix. Then there exists a boundary control matrix D such that system (III) is exactly synchronizable, and each exactly synchronizable state u satisfies the following estimate:

‖(u, u′)(T) − (φ, φ′)(T)‖_{H^{α+1}(Ω)×H^α(Ω)} ≤ c ‖C1(Û0, Û1)‖_{(H1)^{N−1}×(H0)^{N−1}},    (22.20)

where α is defined by (19.8), and φ is the solution to the following problem:

⎧ φ″ − Δφ + aφ = 0                          in (0, +∞) × Ω,
⎪ φ = 0                                     on (0, +∞) × Γ0,
⎨ ∂νφ + bφ = 0                              on (0, +∞) × Γ1,    (22.21)
⎩ t = 0 :  φ = (Û0, E1),  φ′ = (Û1, E1)      in Ω.

Proof We first show that there exists an eigenvector E1 ∈ R^N of B^T, associated with the eigenvalue b, satisfying (E1, e1) = 1.
Let P be a matrix such that PBP^{−1} is a real symmetric matrix. Setting

E1 = P^T P e1,    (22.22)

we have

B^T E1 = P^T (PBP^{−1})^T P e1 = P^T PBP^{−1}P e1 = P^T P B e1 = b P^T P e1 = bE1    (22.23)

and

(E1, e1) = (P^T P e1, e1) = ‖Pe1‖² > 0.    (22.24)

In particular, we can choose P such that (E1, e1) = 1.
Next, define the boundary control matrix D such that

Ker(D^T) = Span{E1}.    (22.25)

Noting that Ker(C1) = Span{e1} and (E1, e1) = 1, we have

Ker(C1) ∩ Im(D) = Span{e1} ∩ {Span{E1}}⊥ = {0},    (22.26)


then, by Proposition 2.11, we have

rank(C1D) = rank(D) = N − 1.    (22.27)

Thus, by Theorem 21.5, system (III) is exactly synchronizable. Taking the inner product with E1 on both sides of problem (III) and (III0) and denoting ψ = (E1, U), we get

⎧ ψ″ − Δψ + aψ = (aE1 − A^T E1, U)          in (0, +∞) × Ω,
⎪ ψ = 0                                     on (0, +∞) × Γ0,
⎨ ∂νψ + bψ = 0                              on (0, +∞) × Γ1,    (22.28)
⎩ t = 0 :  ψ = (Û0, E1),  ψ′ = (Û1, E1)      in Ω.

Since

(aE1 − A^T E1, e1) = (E1, ae1 − Ae1) = 0,    (22.29)

we have

aE1 − A^T E1 ∈ {Span{e1}}⊥ = Im(C1^T).    (22.30)

Therefore, there exists a vector R ∈ R^{N−1}, such that

aE1 − A^T E1 = C1^T R.    (22.31)

Let ϕ = ψ − φ, where φ is the solution to problem (22.21). By (22.21) and (22.28), we have

⎧ ϕ″ − Δϕ + aϕ = (R, C1U)     in (0, +∞) × Ω,
⎪ ϕ = 0                       on (0, +∞) × Γ0,
⎨ ∂νϕ + bϕ = 0                on (0, +∞) × Γ1,    (22.32)
⎩ t = 0 :  ϕ = ϕ′ = 0          in Ω.

Then, by the classical semigroup theory (cf. [70]), we have

‖(ϕ, ϕ′)(T)‖_{H^{α+1}(Ω)×H^α(Ω)} ≤ c1 ‖(R, C1U)‖_{L²(0,T;H^α(Ω))},    (22.33)

here and hereafter ci for i = 1, 2, · · · denote different positive constants.
Recall that W = C1U is the solution to the reduced problem (21.17)–(21.18). Noting (20.2), it follows from Theorem 19.10 that

‖C1U‖_{L²(0,T;(H^α(Ω))^{N−1})}    (22.34)
  ≤ c2 (‖C1(Û0, Û1)‖_{(H1)^{N−1}×(H0)^{N−1}} + ‖D1H‖_{L²(0,T;(L²(Γ1))^{N−1})})
  ≤ c3 ‖C1(Û0, Û1)‖_{(H1)^{N−1}×(H0)^{N−1}}.

Thus, we get

‖(ϕ, ϕ′)(T)‖_{H^{α+1}(Ω)×H^α(Ω)} ≤ c4 ‖C1(Û0, Û1)‖_{(H1)^{N−1}×(H0)^{N−1}}.    (22.35)

On the other hand, we have

t ≥ T :   ψ = (E1, U) = (E1, e1)u = u,    (22.36)

thus ϕ = u − φ for t ≥ T. Therefore, by (22.35) we get (22.20).  □


Chapter 23
Exact Boundary Synchronization
by p-Groups

The exact boundary synchronization by groups will be considered in this chapter for
system (III) with further lack of coupled Robin boundary controls.

23.1 Definition

When there is a further lack of boundary controls, similarly to the case with Dirichlet
or Neumann boundary controls, we consider the exact boundary synchronization
by p-groups for system (III).
Let the components of U be divided into p groups:

(u^(1), · · · , u^(n1)),  (u^(n1+1), · · · , u^(n2)),  · · · ,  (u^(n_{p−1}+1), · · · , u^(np)),    (23.1)

where 0 = n0 < n1 < n2 < · · · < np = N are integers such that nr − n_{r−1} ≥ 2 for all r = 1, · · · , p.

Definition 23.1 System (III) is exactly synchronizable by p-groups at the time T > 0 if for any given initial data (Û0, Û1) ∈ (H1)^N × (H0)^N, there exists a boundary control H ∈ L²_loc(0, +∞; (L²(Γ1))^M) with compact support in [0, T], such that the corresponding solution U = U(t, x) to problem (III) and (III0) satisfies

t ≥ T :   u^(i) = ur,   n_{r−1} + 1 ≤ i ≤ nr,  1 ≤ r ≤ p,    (23.2)

where u = (u1, · · · , up)^T, being unknown a priori, is called the corresponding exactly synchronizable state by p-groups.


Let Sr be an (n r − n r −1 − 1) × (n r − n r −1 ) full row-rank matrix:


⎛ ⎞
1 −1
⎜ 1 −1 ⎟
⎜ ⎟
Sr = ⎜ .. .. ⎟ ,   1 ≤ r ≤ p,    (23.3)
⎝ .. .. ⎠
1 −1

and let C p be the following (N − p) × N matrix of synchronization by p-groups:


⎛ ⎞
S1
⎜ S2 ⎟
⎜ ⎟
Cp = ⎜ .. ⎟. (23.4)
⎝ . ⎠
Sp

Evidently, we have

Ker(C p ) = Span{e1 , · · · , e p }, (23.5)

where for 1 ≤ r ≤ p,

(er)i = ⎧ 1,   n_{r−1} + 1 ≤ i ≤ nr,
        ⎩ 0,   otherwise.

Thus, the exact boundary synchronization by p-groups (23.2) can be equivalently written as

t ≥ T :   CpU ≡ 0,    (23.6)

or

t ≥ T :   U = Σ_{r=1}^{p} ur er.    (23.7)
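To fix ideas (an illustrative example added here, not from the text), take N = 4 and p = 2 with n1 = 2, n2 = 4. Then S1 = S2 = (1  −1) and

$$
C_2=\begin{pmatrix}1&-1&0&0\\0&0&1&-1\end{pmatrix},\qquad
\mathrm{Ker}(C_2)=\mathrm{Span}\{(1,1,0,0)^T,(0,0,1,1)^T\},
$$

so that C2U ≡ 0 means u^(1) ≡ u^(2) and u^(3) ≡ u^(4), i.e., the synchronization within each of the two groups.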

23.2 Exact Boundary Synchronization by p-Groups

Theorem 23.2 Let Ω ⊂ R^n be a parallelepiped. Assume that system (III) is exactly synchronizable by p-groups. Then, we have

rank(CpD) = N − p.    (23.8)

In particular, we have

rank(D) ≥ N − p.    (23.9)

Proof If Ker(D^T) ∩ Im(Cp^T) = {0}, by Proposition 2.11, we have

rank(CpD) = rank(D^T Cp^T) = rank(Cp^T) = N − p.    (23.10)

Next, we prove that it is impossible to have Ker(D^T) ∩ Im(Cp^T) ≠ {0}. Otherwise, there exists a vector E ≠ 0, such that

D^T Cp^T E = 0.    (23.11)

Let

w = (E, CpU),   Lθ = −(E, CpAU),   Rθ = −(E, CpBU).    (23.12)

We get again problem (20.8) for w. Besides, the exact boundary synchronization by p-groups for system (III) indicates that the final condition (20.9) holds. Similarly to the proof of Theorem 21.2, we get a contradiction to Lemma 20.3.  □

Theorem 23.3 Let Cp be the (N − p) × N matrix of synchronization by p-groups defined by (23.3)–(23.4). Assume that both A and B satisfy the following conditions of Cp-compatibility:

A Ker(Cp) ⊆ Ker(Cp),   B Ker(Cp) ⊆ Ker(Cp).    (23.13)

Then there exists a boundary control matrix D satisfying

rank(D) = rank(CpD) = N − p,    (23.14)

such that system (III) is exactly synchronizable by p-groups, and the corresponding boundary control H possesses the following continuous dependence:

‖H‖_{L²(0,T;(L²(Γ1))^{N−p})} ≤ c ‖Cp(Û0, Û1)‖_{(H1)^{N−p}×(H0)^{N−p}},    (23.15)

where c > 0 is a positive constant.

Proof Since both A and B satisfy the conditions of Cp-compatibility (23.13), by Proposition 2.15, there exist matrices Ap and Bp of order (N − p), such that

CpA = ApCp,   CpB = BpCp.    (23.16)

Let

W = CpU,   Dp = CpD.    (23.17)

We have

⎧ W″ − ΔW + ApW = 0   in (0, +∞) × Ω,
⎨ W = 0               on (0, +∞) × Γ0,    (23.18)
⎩ ∂νW + BpW = DpH     on (0, +∞) × Γ1

with the initial data:

t = 0 :   W = CpÛ0,   W′ = CpÛ1   in Ω.    (23.19)

Noting that Cp is a surjection from R^N to R^{N−p}, the exact boundary synchronization by p-groups for system (III) is equivalent to the exact boundary null controllability for the reduced system (23.18), and the boundary control H, which realizes the exact boundary null controllability for the reduced system (23.18), must be the boundary control which realizes the exact boundary synchronization by p-groups for system (III).
Let D be defined by

Ker(D^T) = Span{e1, · · · , ep} = Ker(Cp).    (23.20)

We have rank(D) = N − p, and

Ker(Cp) ∩ Im(D) = Ker(Cp) ∩ {Ker(Cp)}⊥ = {0}.    (23.21)

By Proposition 2.11, we get rank(CpD) = rank(D) = N − p, thus Dp is an invertible matrix of order (N − p). By Theorem 20.2, the reduced system (23.18) is exactly null controllable, then system (III) is exactly synchronizable by p-groups. By (20.2), we get (23.15).  □
Chapter 24
Necessity of the Conditions
of C p -Compatibility

In this chapter, we will discuss the necessity of the conditions of C p -compatibility for
system (III) with coupled Robin boundary controls. This problem is closely related
to the number of applied boundary controls.


24.1 Condition of Cp-Compatibility for the Internal Coupling Matrix

We first study the condition of C p -compatibility for the coupling matrix A.


Theorem 24.1 Let Ω ⊂ R^n be a parallelepiped. Assume that M = rank(D) = N − p. If system (III) is exactly synchronizable by p-groups, then the coupling matrix A = (aij) should satisfy the following condition of Cp-compatibility:

A Ker(Cp) ⊆ Ker(Cp).    (24.1)

Proof It suffices to prove that

CpAer = 0,   1 ≤ r ≤ p.    (24.2)

By (23.7), taking the inner product with Cp on both sides of the equations in system (III), we get

t ≥ T :   Σ_{r=1}^{p} CpAer ur = 0   in Ω.    (24.3)


If (24.2) does not hold, then, by (24.3), there exist constant coefficients αr (1 ≤ r ≤ p), not all equal to zero, such that

Σ_{r=1}^{p} αr ur = 0   in Ω.    (24.4)

Let

c_{p+1} = Σ_{r=1}^{p} αr er^T / ‖er‖².    (24.5)

Noting (er, es) = ‖er‖² δrs, we have

t ≥ T :   c_{p+1}U = Σ_{r=1}^{p} αr ur = 0   in Ω.    (24.6)

Let

C̃_{p−1} = ⎛ Cp      ⎞
           ⎝ c_{p+1} ⎠ .    (24.7)

Noting (23.5) and (24.5), it is easy to see that c_{p+1}^T ∉ Im(Cp^T), then rank(C̃_{p−1}) = N − p + 1. Since rank(D) = N − p, we have Ker(D^T) ∩ Im(C̃_{p−1}^T) ≠ {0}, then there exists a vector E ≠ 0, such that

D^T C̃_{p−1}^T E = 0.    (24.8)

Let

w = (E, C̃_{p−1}U),   Lθ = −(E, C̃_{p−1}AU),   Rθ = −(E, C̃_{p−1}BU).

We get again problem (20.8) for w. Noting (23.6) and (24.6), we have

t = T :   w(T) = (E, C̃_{p−1}U) = 0.    (24.9)

Similarly, we have w′(T) = 0, then (20.9) holds. Noting that Ω ⊂ R^n is a parallelepiped, similarly to the proof of Theorem 21.2, we can get a conclusion that contradicts Lemma 20.3.  □

Remark 24.2 The condition of Cp-compatibility (24.1) is equivalent to the fact that there exist constants αrs (1 ≤ r, s ≤ p) such that

Aer = Σ_{s=1}^{p} αsr es,   1 ≤ r ≤ p,    (24.10)

or A satisfies the following row-sum condition by blocks:

Σ_{j=n_{s−1}+1}^{n_s} aij = αrs,   n_{r−1} + 1 ≤ i ≤ nr,  1 ≤ r, s ≤ p.    (24.11)

In particular, when

αsr = 0,   1 ≤ r, s ≤ p,    (24.12)

we say that A satisfies the zero-sum condition by blocks. In this case, we have

Aer = 0,   1 ≤ r ≤ p.    (24.13)
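For instance (an illustrative example added here, not from the text), take N = 4, p = 2 and n1 = 2, n2 = 4, so that e1 = (1,1,0,0)^T and e2 = (0,0,1,1)^T. The matrix

$$
A=\begin{pmatrix}2&-1&1&0\\0&1&0&1\\1&1&3&0\\0&2&1&2\end{pmatrix}
$$

satisfies the row-sum condition by blocks (24.11): each of its 2 × 2 blocks has constant row sums, and indeed Ae1 = e1 + 2e2 and Ae2 = e1 + 3e2, so that A Ker(C2) ⊆ Ker(C2). It would satisfy the zero-sum condition by blocks only if all these block row sums vanished.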

24.2 Condition of Cp-Compatibility for the Boundary Coupling Matrix

Compared with the internal coupling matrix A, the study of the necessity of the condition of Cp-compatibility for the boundary coupling matrix B is more complicated. It concerns the regularity of the solution to the problem with coupled Robin boundary conditions.
Let

εi = (0, · · · , 1, · · · , 0)^T   (with 1 in the i-th position),   1 ≤ i ≤ N    (24.14)

be the canonical orthogonal basis of R^N, and let

Vr = Span{ε_{n_{r−1}+1}, · · · , ε_{n_r}},   1 ≤ r ≤ p.    (24.15)

Obviously, we have

er ∈ Vr,   1 ≤ r ≤ p.    (24.16)

In what follows, we will discuss the necessity of the condition of Cp-compatibility for the boundary coupling matrix B under the assumption that Aer ∈ Vr and Ber ∈ Vr (1 ≤ r ≤ p).

Theorem 24.3 Let Ω ⊂ R^n be a parallelepiped. Assume that M = rank(D) = N − p and

Aer ∈ Vr,   Ber ∈ Vr   (1 ≤ r ≤ p).    (24.17)

If system (III) is exactly synchronizable by p-groups, then the boundary coupling matrix B should satisfy the following condition of Cp-compatibility:

B Ker(Cp) ⊆ Ker(Cp).    (24.18)

Proof Noting (23.7), we have

⎧ Σ_{r=1}^{p} (ur″ er − Δur er + ur Aer) = 0    in (T, +∞) × Ω,
⎨                                                                  (24.19)
⎩ Σ_{r=1}^{p} (∂νur er + ur Ber) = 0            on (T, +∞) × Γ1.

Noting (24.16)–(24.17) and the fact that the subspaces Vr (1 ≤ r ≤ p) are orthogonal to each other, for 1 ≤ r ≤ p, we have

⎧ ur″ er − Δur er + ur Aer = 0    in (T, +∞) × Ω,
⎨                                                    (24.20)
⎩ ∂νur er + ur Ber = 0            on (T, +∞) × Γ1.

Taking the inner product with Cp on both sides of the boundary condition on Γ1 in (24.20), and noting (23.5), we get

ur CpBer ≡ 0   on (T, +∞) × Γ1,   1 ≤ r ≤ p.    (24.21)

We claim that CpBer = 0 (r = 1, · · · , p), which just means that B satisfies the condition of Cp-compatibility (24.18). Otherwise, there exists an r̄ (1 ≤ r̄ ≤ p) such that CpBer̄ ≠ 0, and consequently, we have

ur̄ ≡ 0   on (T, +∞) × Γ1.    (24.22)

Then, it follows from the boundary condition in system (24.20) that

∂νur̄ ≡ 0   on (T, +∞) × Γ1.    (24.23)

Hence, applying Holmgren's uniqueness theorem (cf. Theorem 8.2 in [62]) to (24.20) yields that

ur̄ ≡ 0   in (T, +∞) × Ω,    (24.24)

then it is easy to check that

t ≥ T :   er̄^T U ≡ 0   in Ω.    (24.25)



Let

C̃_{p−1} = ⎛ Cp    ⎞
           ⎝ er̄^T ⎠ .    (24.26)

Since er̄ ∉ Im(Cp^T), it is easy to show that rank(C̃_{p−1}) = N − p + 1. On the other hand, since dim Ker(D^T) = p, we have Ker(D^T) ∩ Im(C̃_{p−1}^T) ≠ {0}, then there exists a vector E ≠ 0, such that

D^T C̃_{p−1}^T E = 0.    (24.27)

Let

w = (E, C̃_{p−1}U),   Lθ = −(E, C̃_{p−1}AU),   Rθ = −(E, C̃_{p−1}BU).    (24.28)

We get again problem (20.8) for w, and (20.9) holds. Therefore, noting that Ω ⊂ R^n is a parallelepiped, similarly to the proof of Theorem 21.2, we can get a conclusion that contradicts Lemma 20.3.  □

Remark 24.4 Noting that condition (24.17) obviously holds for p = 1, the conditions of C1-compatibility (21.8) are always satisfied for both A and B in the case p = 1. In the next result, we will remove the restrictive conditions (24.17) in the case p = 2, and further study the necessity of the condition of C2-compatibility for B.

Theorem 24.5 Let Ω ⊂ R^n be a parallelepiped and let the matrix B be similar to a real symmetric matrix. Assume that the coupling matrix A satisfies the zero-sum condition by blocks (24.13). Assume furthermore that system (III) is exactly synchronizable by 2-groups with rank(D) = N − 2. Then the coupling matrix B necessarily satisfies the condition of C2-compatibility:

B Ker(C2) ⊆ Ker(C2).    (24.29)

Proof By the exact boundary synchronization by 2-groups, we have

t ≥ T :   U = e1u1 + e2u2   in Ω.    (24.30)

Noting (24.13), as t ≥ T, we have

AU = Ae1u1 + Ae2u2 = 0,

then it is easy to see that

⎧ U″ − ΔU = 0       in (T, +∞) × Ω,
⎨ U = 0             on (T, +∞) × Γ0,    (24.31)
⎩ ∂νU + BU = 0      on (T, +∞) × Γ1.

Let P be a matrix such that B̃ = PBP^{−1} is a real symmetric matrix. Denote

u = (u1, u2)^T.    (24.32)

Taking the inner product on both sides of (24.31) with P^T Pei for i = 1, 2, we get

⎧ Lu″ − ΔLu = 0      in (T, +∞) × Ω,
⎨ Lu = 0             on (T, +∞) × Γ0,    (24.33)
⎩ L∂νu + Λu = 0      on (T, +∞) × Γ1,

where the matrices L and Λ are given by

L = ((Pei, Pej))   and   Λ = ((B̃Pei, Pej)),   1 ≤ i, j ≤ 2.    (24.34)

Clearly, L is a symmetric and positive definite matrix and Λ is a symmetric matrix. Taking the inner product on both sides of (24.33) with L^{−1/2} and denoting w = L^{1/2}u, we get

⎧ w″ − Δw = 0        in (T, +∞) × Ω,
⎨ w = 0              on (T, +∞) × Γ0,    (24.35)
⎩ ∂νw + Λ̃w = 0      on (T, +∞) × Γ1,

where Λ̃ = L^{−1/2} Λ L^{−1/2} is also a symmetric matrix.
On the other hand, taking the inner product with C2 on both sides of the boundary condition on Γ1 in system (III) and noting (24.30), we get

t ≥ T :   C2Be1 u1 + C2Be2 u2 ≡ 0   on Γ1.    (24.36)

We claim that C2Be1 = C2Be2 = 0, namely, B satisfies the condition of C2-compatibility (24.29).
Otherwise, without loss of generality, we may assume that C2Be1 ≠ 0. Then it follows from (24.36) that there exists a nonzero vector D2 ∈ R², such that

t ≥ T :   D2^T u ≡ 0   on Γ1.    (24.37)

Denoting

D̃^T = D2^T L^{−1/2},

we then have

t ≥ T :   D̃^T w = 0   on Γ1,    (24.38)

in which w = L^{1/2}u. By Theorem 28.5, the following Kalman's criterion

rank(D̃, Λ̃D̃) = 2    (24.39)

is sufficient for the unique continuation of system (24.35) under the observation (24.38) on the infinite horizon [T, +∞). Since rank(D) = N − 2, by Theorem 20.5, system (III) is not exactly null controllable. So, the rank condition (24.39) does not hold. Thus, by Proposition 2.12 (i), there exists a vector E ≠ 0 in R², such that

Λ̃E = μE   and   D̃^T E = 0.    (24.40)

Noting (24.38) and the second formula of (24.40), we have both E and w|Γ1 ∈ Ker(D̃^T). Since dim Ker(D̃^T) = 1, there exists a constant α such that w = αE on Γ1. Therefore, noting the first formula of (24.40), we have

Λ̃w = Λ̃(αE) = μαE = μw   on Γ1.    (24.41)

Thus, (24.35) can be rewritten as

⎧ w″ − Δw = 0        in (T, +∞) × Ω,
⎨ w = 0              on (T, +∞) × Γ0,    (24.42)
⎩ ∂νw + μw = 0       on (T, +∞) × Γ1.

Let z = D̃^T w. Noting (24.38), it follows from (24.42) that

⎧ z″ − Δz = 0        in (T, +∞) × Ω,
⎨ z = 0              on (T, +∞) × Γ0,    (24.43)
⎩ ∂νz = z = 0        on (T, +∞) × Γ1.

Then, by Holmgren's uniqueness theorem, we have

t ≥ T :   z = D̃^T w = D2^T u ≡ 0   in Ω.    (24.44)

Let D2^T = (α1, α2). Define the following row vector:

c3 = α1 e1^T/‖e1‖² + α2 e2^T/‖e2‖².    (24.45)

Noting (e1, e2) = 0 and (24.44), we have

t ≥ T :   c3U = α1u1 + α2u2 = D2^T u ≡ 0   in Ω.    (24.46)

Let

C̃1 = ⎛ C2 ⎞
      ⎝ c3 ⎠ .    (24.47)

Since c3^T ∉ Im(C2^T), it is easy to see that rank(C̃1) = N − 1, and Ker(D^T) ∩ Im(C̃1^T) ≠ {0}, thus there exists a vector Ẽ ≠ 0, such that

D^T C̃1^T Ẽ = 0.    (24.48)

Let

w = (Ẽ, C̃1U),   Lθ = −(Ẽ, C̃1AU),   Rθ = −(Ẽ, C̃1BU).    (24.49)

We get again problem (20.8) for w, satisfying (20.9). Noting that Ω is a parallelepiped, similarly to the proof of Theorem 21.2, we get a contradiction to Lemma 20.3.  □
Chapter 25
Determination of Exactly Synchronizable
States by p-Groups

When system (III) possesses the exact boundary synchronization by p-groups, the cor-
responding exactly synchronizable states by p-groups will be studied in this chapter.

In general, exactly synchronizable states by p-groups depend not only on initial data,
but also on applied boundary controls. However, when the coupling matrices A and
B satisfy certain algebraic conditions, the exactly synchronizable state by p-groups
can be independent of applied boundary controls. In this chapter, we first discuss the case when the exactly synchronizable state by p-groups is independent of applied boundary controls, then we present the estimate on each exactly synchronizable state by p-groups in the general situation.

25.1 Determination of Exactly Synchronizable States by p-Groups

Theorem 25.1 Assume that both A and B satisfy the conditions of C p -compatibility
(23.13) and that B is similar to a real symmetric matrix. Assume furthermore that A T
and B T possess a common invariant subspace V , bi-orthonormal to Ker(C p ). Then
there exists a boundary control matrix D with rank(D) = rank(C p D) = N − p,
such that system (III) is exactly synchronizable by p-groups, and the exactly syn-
chronizable state by p-groups u = (u 1 , · · · , u p )T is independent of applied bound-
ary controls.

Proof Define the boundary control matrix D by

Ker(D T ) = V. (25.1)


Since V is bi-orthonormal to Ker(C p ), by Proposition 2.5, we have

Ker(C p ) ∩ Im(D) = Ker(C p ) ∩ V ⊥ = {0}, (25.2)

then, by Proposition 2.11, we have

rank(C p D) = rank(D) = N − p. (25.3)

Thus, by Theorem 23.3, system (III) is exactly synchronizable by p-groups.


By (25.2), noting Ker(C p ) = Span{e1 , · · · , e p }, we may write

V = Span{E 1 , · · · , E p } with (er , E s ) = δr s (r, s = 1, · · · , p). (25.4)

Since V is a common invariant subspace of A^T and B^T, there exist constants αrs and βrs (r, s = 1, · · · , p) such that

A^T Er = Σ_{s=1}^{p} αrs Es   and   B^T Er = Σ_{s=1}^{p} βrs Es.    (25.5)

For r = 1, · · · , p, let

φr = (Er, U).    (25.6)

By system (III) and noting (25.1), for r = 1, · · · , p, we have

⎧ φr″ − Δφr + Σ_{s=1}^{p} αrs φs = 0          in (0, +∞) × Ω,
⎪ φr = 0                                      on (0, +∞) × Γ0,
⎨ ∂νφr + Σ_{s=1}^{p} βrs φs = 0               on (0, +∞) × Γ1,    (25.7)
⎩ t = 0 :  φr = (Er, Û0),  φr′ = (Er, Û1)      in Ω.

On the other hand, for r = 1, · · · , p, we have

t ≥ T :   φr = (Er, U) = Σ_{s=1}^{p} (Er, es)us = Σ_{s=1}^{p} δrs us = ur.    (25.8)

Thus, the exactly synchronizable state by p-groups u = (u1, · · · , up)^T is entirely determined by the solution to problem (25.7), which is independent of applied boundary controls H.  □
The following result gives the counterpart of Theorem 25.1.
Theorem 25.2 Assume that both A and B satisfy the conditions of C p -compatibility
(23.13) and that B is similar to a real symmetric matrix. Assume furthermore that

system (III) is exactly synchronizable by p-groups. Let V = Span{E 1 , · · · , E p } be


a subspace of dimension p. If the projection functions

φr = (Er , U ), r = 1, · · · , p (25.9)

are independent of applied boundary controls H which realize the exact boundary
synchronization by p-groups, then V is a common invariant subspace of A T and B T ,
and bi-orthonormal to Ker(C p ).
Proof Let (Û0, Û1) = (0, 0). By Theorem 19.10, the linear mapping

F : H → (U, U′)

is continuous from L²(0, T; (L²(Γ1))^M) to C⁰([0, T]; (H^α(Ω))^N × (H^{α−1}(Ω))^N), where α is defined by (19.8).
Let F′ denote the Fréchet derivative of the application F. For any given Ĥ ∈ L²(0, T; (L²(Γ1))^M), we define

Û = F′(0)Ĥ.    (25.10)

By linearity, Û satisfies a system similar to that of U:

⎧ Û″ − ΔÛ + AÛ = 0       in (0, +∞) × Ω,
⎪ Û = 0                  on (0, +∞) × Γ0,
⎨ ∂νÛ + BÛ = DĤ          on (0, +∞) × Γ1,    (25.11)
⎩ t = 0 :  Û = Û′ = 0     in Ω.

Since the projection functions φr = (Er, U) (r = 1, · · · , p) are independent of applied boundary controls H, we have

(Er, Û) ≡ 0,   ∀ Ĥ ∈ L²(0, T; (L²(Γ1))^M),   r = 1, · · · , p.    (25.12)

First, we prove that Er ∈/ Im(C Tp ) for r = 1, · · · , p. Otherwise, there exist an r̄


N−p
and a vector Rr̄ ∈ R , such that Er̄ = C Tp Rr̄ , then we have

) = (Rr̄ , C p U
0 = (Er̄ , U ), ∀ H
 ∈ L 2 (0, T ; (L 2 (1 )) M ). (25.13)

 is the solution to the corresponding reduced problem (23.18) and (23.19),


Since C p U
noting the equivalence between the exact boundary synchronization by p-groups for
the original system and the exact boundary controllability for the reduced system,
from the exact boundary synchronization by p-groups for system (III), we know that
 at the time
the reduced system (23.18) is exactly controllable, then the value of C p U
T can be chosen arbitrarily, thus we get Rr̄ = 0, which contradicts Er̄
= 0. Then,
we have Er ∈ / Im(C Tp ) (r = 1, · · · , p). Thus, V ∩ {Ker(C p )}⊥ = V ∩ Im(C Tp ) =

{0}. Hence, by Propositions 2.4 and 2.5, V is bi-orthonormal to Ker(Cp), and then (V, Cp^T) constitutes a basis in R^N. Therefore, there exist constant coefficients αrs (r, s = 1, · · · , p) and vectors Pr ∈ R^{N−p} (r = 1, · · · , p), such that

A^T Er = Σ_{s=1}^{p} αrs Es + Cp^T Pr,   r = 1, · · · , p.    (25.14)

Taking the inner product with Er on both sides of the equations in (25.11) and noting (25.12), we get

0 = (AÛ, Er) = (Û, A^T Er) = (Û, Cp^T Pr) = (CpÛ, Pr)    (25.15)

for r = 1, · · · , p. Similarly, by the exact boundary controllability for the reduced system (23.18), we get Pr = 0 (r = 1, · · · , p), thus we have

A^T Er = Σ_{s=1}^{p} αrs Es,   r = 1, · · · , p,

which means that V is an invariant subspace of A^T.


On the other hand, noting (25.12) and taking the inner product with Er on both sides of the boundary condition on Γ1 in (25.11), we get

(Er, BÛ) = (Er, DĤ)   on Γ1,   r = 1, · · · , p.    (25.16)

By Theorem 19.10, for r = 1, · · · , p, we have

‖(Er, DĤ)‖_{H^{2α−1}(Γ1)} = ‖(Er, BÛ)‖_{H^{2α−1}(Γ1)} ≤ c ‖Ĥ‖_{L²(0,T;(L²(Γ1))^M)}.    (25.17)

We claim that D^T Er = 0 for r = 1, · · · , p. Otherwise, for r = 1, · · · , p, setting Ĥ = D^T Er v, it follows from (25.17) that

‖v‖_{H^{2α−1}(Γ1)} ≤ c ‖v‖_{L²(0,T;L²(Γ1))}.    (25.18)

Since 2α − 1 > 0, it contradicts the compactness of H^{2α−1}(Γ1) ↪ L²(Γ1). Thus, by (25.16) we have

(Er, BÛ) = 0   on (0, T) × Γ1,   r = 1, · · · , p.    (25.19)

Similarly, there exist constants βrs (r, s = 1, · · · , p) and vectors Qr ∈ R^{N−p} (r = 1, · · · , p), such that

B^T Er = Σ_{s=1}^{p} βrs Es + Cp^T Qr,   r = 1, · · · , p.    (25.20)

Substituting it into (25.19) and noting (25.12), we have

Σ_{s=1}^{p} βrs (Es, Û) + (Cp^T Qr, Û) = (Qr, CpÛ) = 0,   r = 1, · · · , p.    (25.21)

Similarly, by the exact boundary controllability for the reduced system (23.18), we get Qr = 0 (r = 1, · · · , p), then we have

B^T Er = Σ_{s=1}^{p} βrs Es,   r = 1, · · · , p,    (25.22)

which indicates that V is also an invariant subspace of B^T. The proof is complete.  □

25.2 Estimation of Exactly Synchronizable States by p-Groups

When A and B do not satisfy all the conditions mentioned in Theorem 25.1, exactly
synchronizable states by p-groups may depend on applied boundary controls. We
have the following

Theorem 25.3 Assume that both A and B satisfy the conditions of Cp-compatibility (23.13). Then there exists a boundary control matrix D such that system (III) is exactly synchronizable by p-groups, and each exactly synchronizable state by p-groups u = (u1, · · · , up)^T satisfies the following estimate:

‖(u, u′)(T) − (φ, φ′)(T)‖_{(H^{α+1}(Ω))^p×(H^α(Ω))^p} ≤ c ‖Cp(Û0, Û1)‖_{(H1)^{N−p}×(H0)^{N−p}},    (25.23)

where α is defined by (19.8), c is a positive constant, and φ = (φ1, · · · , φp)^T is the solution to the following problem (1 ≤ r ≤ p):

⎧ φr″ − Δφr + Σ_{s=1}^{p} αrs φs = 0          in (0, +∞) × Ω,
⎪ φr = 0                                      on (0, +∞) × Γ0,
⎨ ∂νφr + Σ_{s=1}^{p} βrs φs = 0               on (0, +∞) × Γ1,    (25.24)
⎩ t = 0 :  φr = (Er, Û0),  φr′ = (Er, Û1)      in Ω,

in which

Aer = Σ_{s=1}^{p} αsr es   and   Ber = Σ_{s=1}^{p} βsr es,   r = 1, · · · , p.    (25.25)

Proof We first show that there exists a subspace V which is invariant for B^T and bi-orthonormal to Ker(Cp).
Let B = P^{−1}ΛP, where P is an invertible matrix and Λ is a symmetric matrix. Let V = Span{E1, · · · , Ep}, in which

Er = P^T P er,   r = 1, · · · , p.    (25.26)

Noting (23.5) and the fact that Ker(Cp) is an invariant subspace of B, we get

B^T Er = P^T P Ber ∈ P^T P Ker(Cp) ⊆ V,   r = 1, · · · , p,    (25.27)

then V is invariant for B^T.
We next show that V⊥ ∩ Ker(Cp) = {0}. Then, noting that dim(V) = dim Ker(Cp) = p, by Propositions 2.4 and 2.5, V is bi-orthonormal to Ker(Cp). For this purpose, let a1, · · · , ap be coefficients such that

Σ_{r=1}^{p} ar er ∈ V⊥.    (25.28)

Then

(Σ_{r=1}^{p} ar er, Es) = (Σ_{r=1}^{p} ar Per, Pes) = 0,   s = 1, · · · , p.    (25.29)

It follows that

(Σ_{r=1}^{p} ar Per, Σ_{s=1}^{p} as Pes) = 0,    (25.30)

then a1 = · · · = ap = 0, namely, V⊥ ∩ Ker(Cp) = {0}.
Denoting

Ber = Σ_{s=1}^{p} βsr es,   r = 1, · · · , p,    (25.31)

a direct calculation yields that

B^T Er = Σ_{s=1}^{p} βrs Es,   r = 1, · · · , p.    (25.32)

Define the boundary control matrix D by

Ker(D^T) = V.    (25.33)

Noting (23.5), we have

Ker(Cp) ∩ Im(D) = Ker(Cp) ∩ {Ker(D^T)}⊥ = Ker(Cp) ∩ V⊥ = {0},    (25.34)

then, by Proposition 2.11, we have

rank(CpD) = rank(D) = N − p.    (25.35)

Therefore, by Theorem 23.3, system (III) is exactly synchronizable by p-groups.
Denoting ψr = (Er, U) (r = 1, · · · , p), we have

(Er, AU) = (A^T Er, U)    (25.36)
  = (Σ_{s=1}^{p} αrs Es + A^T Er − Σ_{s=1}^{p} αrs Es, U)
  = Σ_{s=1}^{p} αrs (Es, U) + (A^T Er − Σ_{s=1}^{p} αrs Es, U)
  = Σ_{s=1}^{p} αrs ψs + (A^T Er − Σ_{s=1}^{p} αrs Es, U).

By the assumption that V is bi-orthonormal to Ker(Cp), without loss of generality, we may assume that

(Er, es) = δrs   (r, s = 1, · · · , p).    (25.37)

Then, for any given t = 1, · · · , p, by the first formula of (25.25), we get

(A^T Er − Σ_{s=1}^{p} αrs Es, et) = (Er, Aet) − Σ_{s=1}^{p} αrs (Es, et)
  = Σ_{s=1}^{p} αst (Er, es) − αrt = αrt − αrt = 0,

hence

A^T Er − Σ_{s=1}^{p} αrs Es ∈ {Ker(Cp)}⊥ = Im(Cp^T),   r = 1, · · · , p.    (25.38)

Thus, there exist Rr ∈ R^{N−p} (r = 1, · · · , p) such that

A^T Er − Σ_{s=1}^{p} αrs Es = Cp^T Rr,   r = 1, · · · , p.    (25.39)

Taking the inner product on both sides of problem (III) and (III0) with Er, and noting (25.32)–(25.33), for r = 1, · · · , p, we have

⎧ ψr″ − Δψr + Σ_{s=1}^{p} αrs ψs = −(Rr, CpU)    in (0, +∞) × Ω,
⎪ ψr = 0                                         on (0, +∞) × Γ0,
⎨ ∂νψr + Σ_{s=1}^{p} βrs ψs = 0                  on (0, +∞) × Γ1,    (25.40)
⎩ t = 0 :  ψr = (Er, Û0),  ψr′ = (Er, Û1)         in Ω.

Similarly to the proof of Theorem 22.4, we get

‖(ψ, ψ′)(T) − (φ, φ′)(T)‖_{(H^{α+1}(Ω))^p×(H^α(Ω))^p} ≤ c ‖Cp(Û0, Û1)‖_{(H¹(Ω))^{N−p}×(L²(Ω))^{N−p}}.    (25.41)

On the other hand, noting (25.37), it is easy to see that

t ≥ T :   ψr = (Er, U) = Σ_{s=1}^{p} (Er, es)us = ur,   r = 1, · · · , p.    (25.42)

Substituting it into (25.41), we get (25.23).  □


Part VI
Synchronization for a Coupled System of
Wave Equations with Coupled Robin
Boundary Controls: Approximate
Boundary Synchronization

In this part, the approximate boundary synchronization and the approximate boundary synchronization by groups will be discussed, so that they can be realized by means of a substantially smaller number of boundary controls than the number of state variables for system (III) with coupled Robin boundary controls under suitable hypotheses.
Chapter 26
Some Algebraic Lemmas

In order to study the approximate boundary synchronization for system (III) with
coupled Robin boundary controls, some algebraic lemmas are given in this chapter.

Let A be a matrix of order N and D a full column-rank matrix of order N × M with M ≤ N. By Proposition 2.12, we know that the following Kalman's criterion:

rank(D, AD, · · · , A^{N−1}D) ≥ N − d    (26.1)

holds if and only if the dimension of any given subspace, contained in Ker(D^T) and invariant for A^T, does not exceed d. In particular, the equality holds if and only if the dimension of the largest subspace, contained in Ker(D^T) and invariant for A^T, is exactly equal to d.
Let A and B be two matrices of order N and D a full column-rank matrix of order N × M with M ≤ N. For any given nonnegative integers p, q, · · · , r, s ≥ 0, we define a matrix of order N × M by

R_{(p,q,··· ,r,s)} = A^p B^q · · · A^r B^s D.    (26.2)

We construct an enlarged matrix

R = (R_{(p,q,··· ,r,s)}, R_{(p′,q′,··· ,r′,s′)}, · · · )    (26.3)

by the matrices R_{(p,q,··· ,r,s)} for all possible (p, q, · · · , r, s), which, by Cayley–Hamilton's theorem, essentially constitute a finite set M with dim(M) ≤ MN.

Lemma 26.1 Ker(RT ) is the largest subspace of all the subspaces which are con-
tained in Ker(D T ) and invariant for A T and B T .

278 26 Some Algebraic Lemmas

Proof First, noting that Im(D) ⊆ Im(R), we have Ker(R^T) ⊆ Ker(D^T). We now show that Ker(R^T) is invariant for A^T and B^T. Let x ∈ Ker(R^T). We have

D^T (B^T)^s (A^T)^r · · · (B^T)^q (A^T)^p x = 0    (26.4)

for any given integers p, q, · · · , r, s ≥ 0. Then, it follows that A^T x ∈ Ker(R^T), namely, Ker(R^T) is invariant for A^T. Similarly, Ker(R^T) is invariant for B^T. Thus, the subspace Ker(R^T) is contained in Ker(D^T) and invariant for both A^T and B^T.
Now let V be another subspace, contained in Ker(D^T) and invariant for A^T and B^T. For any given y ∈ V, we have

A^T y ∈ V,   B^T y ∈ V   and   D^T y = 0.    (26.5)

Then, it is easy to see that

(B^T)^s (A^T)^r · · · (B^T)^q (A^T)^p y ∈ V    (26.6)

for any given integers p, q, · · · , r, s ≥ 0. Thus, by the first formula of (26.5), we have

D^T (B^T)^s (A^T)^r · · · (B^T)^q (A^T)^p y = 0    (26.7)

for any given integers p, q, · · · , r, s ≥ 0, namely, we have

V ⊆ Ker(R^T).    (26.8)

The proof is then complete.  □

By the rank–nullity theorem, we have rank(R) + dim Ker(R^T) = N. The following lemma is a dual version of Lemma 26.1.

Lemma 26.2 Let d ≥ 0 be an integer. Then
(i) the rank condition

rank(R) ≥ N − d    (26.9)

holds true if and only if the dimension of any given subspace, contained in Ker(D^T) and invariant for A^T and B^T, does not exceed d;
(ii) the rank condition

rank(R) = N − d    (26.10)

holds true if and only if the dimension of the largest subspace, contained in Ker(D^T) and invariant for A^T and B^T, is exactly equal to d.

Proof (i) Let V be a subspace which is contained in Ker(D^T) and invariant for A^T and B^T. By Lemma 26.1, we have

V ⊆ Ker(R^T).    (26.11)

Assume that (26.9) holds; it follows from (26.11) that

N − d ≤ rank(R) = N − dim Ker(R^T) ≤ N − dim(V),    (26.12)

namely,

dim(V) ≤ d.    (26.13)

Conversely, assume that (26.13) holds for any given subspace V which is contained in Ker(D^T) and invariant for A^T and B^T. In particular, by Lemma 26.1, we have dim Ker(R^T) ≤ d. Then, it follows that

rank(R) = N − dim Ker(R^T) ≥ N − d.    (26.14)

(ii) Noting that (26.10) can be written as

rank(R) ≥ N − d    (26.15)

and

rank(R) ≤ N − d.    (26.16)

By (i), the rank condition (26.15) means that dim(V) ≤ d for any given invariant subspace V of A^T and B^T, contained in Ker(D^T). We claim that there exists a subspace V0, which is contained in Ker(D^T) and invariant for A^T and B^T, such that dim(V0) = d. Otherwise, all the subspaces of this kind have a dimension less than or equal to (d − 1). By (i), we get

rank(R) ≥ N − d + 1,    (26.17)

which contradicts (26.16). It proves (ii). The proof is then complete.  □

Remark 26.3 In the special case that B = I, we can write

R = (D, AD, · · · , A^{N−1}D).    (26.18)

Then, by Lemma 26.2, we find again that Kalman's criterion (26.1) holds if and only if the dimension of any given subspace, contained in Ker(D^T) and invariant for A^T, does not exceed d. In particular, the equality holds if and only if the dimension of the largest subspace, contained in Ker(D^T) and invariant for A^T, is exactly equal to d.
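As a simple illustration (an example added here for the reader's convenience, not from the text), take N = 2, B = I,

$$
A=\begin{pmatrix}0&0\\1&0\end{pmatrix},\qquad
D=\begin{pmatrix}1\\0\end{pmatrix}:\qquad
(D,AD)=\begin{pmatrix}1&0\\0&1\end{pmatrix},
$$

so rank(D, AD) = 2 = N, and indeed the only subspace contained in Ker(D^T) = Span{(0, 1)^T} and invariant for A^T is {0}, since A^T(0, 1)^T = (1, 0)^T ∉ Ker(D^T), i.e., d = 0. Taking instead D = (0, 1)^T gives rank(D, AD) = 1 = N − 1, and Ker(D^T) = Span{(1, 0)^T} is itself invariant for A^T, so the largest such subspace has dimension d = 1.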
Chapter 27
Approximate Boundary Null
Controllability

In this chapter, we will define the approximate boundary null controllability for
system (III) and the D-observability for the adjoint problem, and show that these
two concepts are equivalent to each other.

Definition 27.1 For (Φ̂0, Φ̂1) ∈ (H1)^N × (H0)^N, the adjoint system (19.19) is D-observable on a finite interval [0, T] if the observation

D^T Φ ≡ 0   on [0, T] × Γ1    (27.1)

implies that Φ̂0 = Φ̂1 ≡ 0, then Φ ≡ 0.

Proposition 27.2 If the adjoint system (19.19) is D-observable, then we necessarily


have rank(R) = N . In particular, if M = N , namely, D is invertible, then system
(19.19) is D-observable.

Proof Otherwise, we have dim Ker(R^T) = d ≥ 1. Let Ker(R^T) = Span{E1, · · · , Ed}. By Lemma 26.1, Ker(R^T) is contained in Ker(D^T) and invariant for A^T and B^T, namely, we have

D^T Er = 0   for 1 ≤ r ≤ d    (27.2)

and there exist coefficients αrs and βrs such that

A^T Er = Σ_{s=1}^{d} αrs Es   and   B^T Er = Σ_{s=1}^{d} βrs Es   for 1 ≤ r ≤ d.    (27.3)

In what follows, we restrict system (19.19) on the subspace Ker(R^T) and look for a solution of the form

Φ = Σ_{s=1}^{d} φs Es,    (27.4)

which, because of (27.2), obviously satisfies the D-observation (27.1).



Inserting the function (27.4) into system (19.19) and noting (27.3), it is easy to see that for 1 ≤ r ≤ d, we have

⎧ φr″ − Δφr + Σ_{s=1}^{d} αsr φs = 0    in (0, +∞) × Ω,
⎨ φr = 0                               on (0, +∞) × Γ0,    (27.5)
⎩ ∂νφr + Σ_{s=1}^{d} βsr φs = 0         on (0, +∞) × Γ1.

For any nontrivial initial data

t = 0 :   φr = φ0r,   φr′ = φ1r   (1 ≤ r ≤ d),    (27.6)

we have Φ ≢ 0. This contradicts the D-observability of system (19.19).
Moreover, when D is invertible, the D-observation (27.1) implies that

∂νΦ ≡ Φ ≡ 0   on (0, T) × Γ1.    (27.7)

Then, Holmgren's uniqueness theorem (cf. Theorem 8.2 in [62]) implies that Φ ≡ 0, provided that T > 0 is large enough.  □
Definition 27.3 System (III) is approximately null controllable at the time T > 0 if for any given initial data (Û0, Û1) ∈ (H0)^N × (H−1)^N, there exists a sequence {Hn} of boundary controls in ℒ^M with compact support in [0, T], such that the sequence {Un} of solutions to problem (III) and (III0) satisfies

un^(k) → 0   in C⁰_loc([T, +∞); H0) ∩ C¹_loc([T, +∞); H−1)    (27.8)

for all 1 ≤ k ≤ N as n → +∞.


By a similar argument as in Chaps. 8 and 16, we can prove the following
Proposition 27.4 System (III) is approximately null controllable at the time T > 0
if and only if its adjoint system (19.19) is D-observable on the interval [0, T ].
Corollary 27.5 If system (III) is approximately null controllable, then we necessar-
ily have rank(R) = N . In particular, as M = N , namely, D is invertible, system (III)
is approximately null controllable.
Proof This corollary follows immediately from Propositions 27.2 and 27.4. However, here we prefer to give a direct proof from the point of view of control.
Suppose that dim Ker(R^T) = d ≥ 1. Let Ker(R^T) = Span{E1, · · · , Ed}. By Lemma 26.1, Ker(R^T) is contained in Ker(D^T) and invariant for both A^T and B^T, then we still have (27.2) and (27.3). Applying Er to problem (III) and (III0) and setting ur = (Er, U) for 1 ≤ r ≤ d, it follows that for 1 ≤ r ≤ d, we have

⎧ ur″ − Δur + Σ_{s=1}^{d} αrs us = 0    in (0, +∞) × Ω,
⎨ ur = 0                               on (0, +∞) × Γ0,    (27.9)
⎩ ∂νur + Σ_{s=1}^{d} βrs us = 0         on (0, +∞) × Γ1

with the initial condition

t = 0 :   ur = (Er, Û0),   ur′ = (Er, Û1)   in Ω.    (27.10)

Thus, the projections u1, · · · , ud of U on the subspace Ker(R^T) are independent of applied boundary controls H, therefore, uncontrollable. This contradicts the approximate boundary null controllability of system (III). The proof is then complete.  □
Chapter 28
Unique Continuation for Robin Problems

We consider the unique continuation for Robin problems in this chapter.

28.1 General Consideration

By Proposition 27.2, rank(R) = N is a necessary condition for the D-observability


of the adjoint system (19.19).

Proposition 28.1 Let

μ = sup_{α,β∈C} dim Ker ⎛ A^T − αI ⎞
                        ⎝ B^T − βI ⎠ .    (28.1)

Assume that

Ker(R^T) = {0}.    (28.2)

Then we have the following lower bound estimate:

rank(D) ≥ μ.    (28.3)

Proof Let α, β ∈ C, such that

V = Ker ⎛ A^T − αI ⎞
        ⎝ B^T − βI ⎠    (28.4)

is of dimension μ. It is easy to see that any given subspace W of V is still invariant for A^T and B^T, then by Lemma 26.1, condition (28.2) implies that Ker(D^T) ∩ V = {0}. Then, it follows that


dim Ker(D^T) + dim(V) ≤ N,    (28.5)

namely,

μ = dim(V) ≤ N − dim Ker(D^T) = rank(D).    (28.6)

The proof is complete.  □

In general, the condition dim Ker(R^T) = 0 does not imply rank(D) = N, so the D-observation (27.1) does not imply

Φ ≡ 0   on (0, T) × Γ1.    (28.7)

Therefore, the unique continuation for the adjoint system (19.19) with the D-observation (27.1) is not a standard application of Holmgren's uniqueness theorem (cf. [18, 62, 75]).
Let us consider the following adjoint system (19.19) with Φ = (φ, ψ)^T:

⎧ φ″ − Δφ + aφ + bψ = 0     in (0, +∞) × Ω,
⎪ ψ″ − Δψ + cφ + dψ = 0     in (0, +∞) × Ω,
⎨ φ = ψ = 0                 on (0, +∞) × Γ0,    (28.8)
⎪ ∂νφ + αφ = 0              on (0, +∞) × Γ1,
⎩ ∂νψ + βψ = 0              on (0, +∞) × Γ1

with the observation of rank one:

d1φ + d2ψ = 0   on [0, T] × Γ1,    (28.9)

where a, b, c, d; α, β; and d1, d2 are constants. In system (28.8), since the boundary coupling matrix B is always assumed to be similar to a real symmetric matrix, without loss of generality, we suppose that B is a diagonal matrix.

Proposition 28.2 The following results can be easily checked.
(a) Assume that A^T and B^T admit only one common eigenvector E. Then Ker(R^T) = {0} if and only if (E, D) ≠ 0.
(b) Assume that A^T and B^T admit only two common eigenvectors E1 and E2. Then Ker(R^T) = {0} if and only if (Ei, D) ≠ 0 for i = 1, 2.
(c) Assume that A^T and B^T have no common eigenvector. Then Ker(R^T) = {0} if and only if D ≠ 0.

The above conditions are only necessary for the unique continuation. We only know a few results about their sufficiency as follows.

28.2 Examples in Higher Dimensional Cases

The first example is a cascade system with Neumann boundary conditions:

⎧ φ″ − Δφ = 0         in (0, +∞) × Ω,
⎪ ψ″ − Δψ + φ = 0     in (0, +∞) × Ω,
⎨ φ = ψ = 0           on (0, +∞) × Γ0,    (28.10)
⎩ ∂νφ = ∂νψ = 0       on (0, +∞) × Γ1

with the initial data in H¹_{Γ0}(Ω) × H¹_{Γ0}(Ω) × L²(Ω) × L²(Ω).
Let

A^T = ⎛ 0 0 ⎞
      ⎝ 1 0 ⎠    (28.11)

denote the corresponding coupling matrix of system (28.10). Since (0, 1)^T is the only eigenvector of A^T, by Proposition 28.2, Ker(R^T) = {0} holds true if and only if d2 ≠ 0. The following result shows that it is also sufficient for the unique continuation.
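Indeed, the eigenvector claim can be checked directly (a routine verification added here): with A^T as in (28.11),

$$
A^T\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}0\\x\end{pmatrix}
=\lambda\begin{pmatrix}x\\y\end{pmatrix}
\;\Longrightarrow\;\lambda x=0,\;x=\lambda y\;\Longrightarrow\;\lambda=0,\;x=0,
$$

so every eigenvector of A^T is proportional to E = (0, 1)^T, and (E, D) = d2, in agreement with Proposition 28.2 (a).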

Theorem 28.3 ([3]) Assume that d2 ≠ 0. Let (φ, ψ) be a solution to system (28.10). Then the partial observation (28.9) implies that φ ≡ ψ ≡ 0, provided that T > 0 is large enough.

Remark 28.4 Let us consider the following slightly modified system:

⎧ φ″ − Δφ = 0          in (0, +∞) × Ω,
⎪ ψ″ − Δψ + φ = 0      in (0, +∞) × Ω,
⎨ φ = ψ = 0            on (0, +∞) × Γ0,    (28.12)
⎪ ∂νφ + αφ = 0         on (0, +∞) × Γ1,
⎩ ∂νψ + βψ = 0         on (0, +∞) × Γ1.

Since (0, 1)^T is the only common eigenvector of A^T and B^T, by Proposition 28.2, Ker(R^T) = {0} if and only if d2 ≠ 0. Unfortunately, the multiplier approach used in [3] is technically quite delicate, and we do not know up to now whether it can be adapted to get the unique continuation for system (28.12) with the partial observation (28.9).

The second example is a system of decoupled wave equations with Robin boundary conditions:

⎧ φ″ − Δφ = 0          in (0, +∞) × Ω,
⎪ ψ″ − Δψ = 0          in (0, +∞) × Ω,
⎨ φ = ψ = 0            on (0, +∞) × Γ0,    (28.13)
⎪ ∂νφ + αφ = 0         on (0, +∞) × Γ1,
⎩ ∂νψ + βψ = 0         on (0, +∞) × Γ1.

If α = β, then dim Ker(R^T) > 0 for any given D = (d1, d2)^T. So, we assume that α ≠ β; then (1, 0)^T and (0, 1)^T are the only common eigenvectors of A^T and B^T. By Proposition 28.2, Ker(R^T) = {0} holds if and only if d1d2 ≠ 0. Moreover, we replace the observation on the finite horizon (28.9) by the observation on the infinite horizon:

d1φ + d2ψ = 0   on (0, +∞) × Γ1.    (28.14)

The following result confirms that Ker(R^T) = {0}, or equivalently, rank(R) = 2, is sufficient for the unique continuation of system (28.13) with the observation on the infinite horizon (28.14).

Theorem 28.5 Assume that α > 0, β > 0, α ≠ β, and d1d2 ≠ 0. Let (φ, ψ) be a solution to system (28.13). Then the partial observation on the infinite horizon (28.14) implies that φ ≡ ψ ≡ 0.

Proof We first recall Green's formula

∫_Ω Δu v dx = − ∫_Ω ∇u · ∇v dx + ∫_Γ ∂νu v dΓ    (28.15)

and Rellich's identity (cf. [62]):

∫_Ω 2Δu (m · ∇u) dx = (n − 2) ∫_Ω |∇u|² dx + 2 ∫_Γ ∂νu (m · ∇u) dΓ − ∫_Γ (m, ν)|∇u|² dΓ,    (28.16)

where m = x − x0 and ν stands for the unit outward normal vector on the boundary Γ.
The energy of system (28.13) can be defined as

E(t) = (1/2) ∫_Ω (|φt|² + |∇φ|² + |ψt|² + |∇ψ|²) dx + (1/2) ∫_{Γ1} (α|φ|² + β|ψ|²) dΓ.    (28.17)

It is easy to check that E′(t) = 0, which yields the conservation of energy

E(t) = E(0),   ∀ t ≥ 0.    (28.18)
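For completeness, (28.18) can be checked directly (a routine verification added here): multiplying the two equations of (28.13) by φt and ψt respectively, integrating over Ω, and using Green's formula (28.15) together with φt = ψt = 0 on Γ0, one gets

$$
E'(t)=\int_{\Gamma_1}(\partial_\nu\phi+\alpha\phi)\,\phi_t\,d\Gamma
+\int_{\Gamma_1}(\partial_\nu\psi+\beta\psi)\,\psi_t\,d\Gamma=0
$$

by the Robin boundary conditions on Γ1.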

Now, taking the inner product with 2m · ∇φ on both sides of the first equation in (28.13) and integrating by parts, we get


[∫_Ω φt (m · ∇φ) dx]_0^T = ∫_0^T∫_Ω φt (m · ∇φt) dx dt + ∫_0^T∫_Ω Δφ (m · ∇φ) dx dt.    (28.19)

Then, using Green's formula in the first term and Rellich's identity in the second term on the right-hand side, it follows that

2[∫_Ω φt (m · ∇φ) dx]_0^T = ∫_0^T∫_Γ (m, ν)|φt|² dΓ dt − n ∫_0^T∫_Ω |φt|² dx dt + (n − 2) ∫_0^T∫_Ω |∇φ|² dx dt
  + 2 ∫_0^T∫_Γ ∂νφ (m · ∇φ) dΓ dt − ∫_0^T∫_Γ (m, ν)|∇φ|² dΓ dt.    (28.20)

Noting the conservation of energy (28.18) and using the Cauchy–Schwarz inequality on the left-hand side, it follows that

n ∫_0^T∫_Ω |φt|² dx dt + (2 − n) ∫_0^T∫_Ω |∇φ|² dx dt
  ≤ cE(0) + ∫_0^T∫_Γ (m, ν)|φt|² dΓ dt + ∫_0^T∫_Γ (2∂νφ (m · ∇φ) − (m, ν)|∇φ|²) dΓ dt,    (28.21)

here and hereafter, c denotes a positive constant independent of T.
Recall the usual multiplier geometrical condition:

(m, ν) ≤ 0, ∀x ∈ Γ0;   (m, ν) ≥ δ, ∀x ∈ Γ1,    (28.22)

where δ > 0 is a positive constant. Then on Γ1, we have

2∂νφ (m · ∇φ) − (m, ν)|∇φ|² ≤ 2‖m‖∞ |∂νφ| · |∇φ| − δ|∇φ|² ≤ (‖m‖²∞/δ) |∂νφ|² = (‖m‖²∞ α²/δ) |φ|².    (28.23)

While, noting that φ = 0 on Γ0, we have

∇φ = ∂νφ ν,    (28.24)

then, on Γ0,

2∂νφ (m · ∇φ) − (m, ν)|∇φ|² = (m, ν)|∂νφ|² ≤ 0.    (28.25)

Substituting (28.23) and (28.25) into (28.21), it is easy to get

n ∫_0^T∫_Ω |φt|² dx dt + (2 − n) ∫_0^T∫_Ω |∇φ|² dx dt    (28.26)
  ≤ cE(0) + ∫_0^T∫_{Γ1} (m, ν)|φt|² dΓ dt + (‖m‖²∞ α²/δ) ∫_0^T∫_{Γ1} |φ|² dΓ dt
  ≤ cE(0) + c ∫_0^T∫_{Γ1} (|φ|² + |φt|²) dΓ dt.

Next, taking the inner product with φ on both sides of the first equation of system (28.13) and integrating by parts, we have

[∫_Ω φt φ dx]_0^T = ∫_0^T∫_Ω |φt|² dx dt + ∫_0^T∫_{Γ1} φ ∂νφ dΓ dt − ∫_0^T∫_Ω |∇φ|² dx dt.    (28.27)

Using the conservation of energy (28.18), it follows that

− ∫_0^T∫_Ω |φt|² dx dt + ∫_0^T∫_Ω |∇φ|² dx dt ≤ cE(0).    (28.28)

By (28.26) + (n − 1) × (28.28), we get

∫_0^T∫_Ω (|φt|² + |∇φ|²) dx dt ≤ c ∫_0^T∫_{Γ1} (|φ|² + |φt|²) dΓ dt + cE(0).    (28.29)

Similarly, we have

∫_0^T∫_Ω (|ψt|² + |∇ψ|²) dx dt ≤ c ∫_0^T∫_{Γ1} (|ψ|² + |ψt|²) dΓ dt + cE(0).    (28.30)

Combining (28.29) and (28.30), we have

∫_0^T E(t) dt ≤ c ∫_0^T∫_{Γ1} (|φ|² + |ψ|² + |φt|² + |ψt|²) dΓ dt + cE(0).    (28.31)

Finally, taking the inner product with ψ on both sides of the first equation of (28.13) and integrating by parts, we get

[∫_Ω φt ψ dx]_0^T = ∫_0^T∫_Ω φt ψt dx dt − α ∫_0^T∫_{Γ1} φψ dΓ dt − ∫_0^T∫_Ω ∇φ · ∇ψ dx dt.    (28.32)

Similarly, taking the inner product with φ on both sides of the second equation of (28.13) and integrating by parts, we get

[∫_Ω ψt φ dx]_0^T = ∫_0^T∫_Ω φt ψt dx dt − β ∫_0^T∫_{Γ1} φψ dΓ dt − ∫_0^T∫_Ω ∇φ · ∇ψ dx dt.    (28.33)

Thus, we get

[∫_Ω (φt ψ − ψt φ) dx]_0^T = (α − β) ∫_0^T∫_{Γ1} φψ dΓ dt.    (28.34)

By the boundary observation (28.14), we have ψ = −(d1/d2)φ on Γ1; then, by the Cauchy–Schwarz inequality and noting (28.18), it follows from (28.34) that

∫_0^T∫_{Γ1} (|φ|² + |ψ|²) dΓ dt ≤ cE(0).    (28.35)

Because of the linearity, we also have

∫_0^T∫_{Γ1} (|φt|² + |ψt|²) dΓ dt ≤ c Ẽ(0),    (28.36)

where

Ẽ(t) = (1/2) ∫_Ω (|φtt|² + |∇φt|² + |ψtt|² + |∇ψt|²) dx + (1/2) ∫_{Γ1} (α|φt|² + β|ψt|²) dΓ.    (28.37)

Noting the conservation of energy (28.18), and substituting (28.35)–(28.36) into (28.31), we get

T E(0) ≤ c (E(0) + Ẽ(0)).    (28.38)

Taking T → +∞, we have E(0) = 0, then by (28.18), we get E(t) ≡ 0 for all t ≥ 0. The proof is complete.  □

28.3 One-Dimensional Case

In the one-space-dimensional case, Theorem 28.5 can be further improved to the case of observation on a finite horizon.

Theorem 28.6 Assume that α > 0, β > 0, α ≠ β, and d1d2 ≠ 0. Then the following one-dimensional system

⎧ φ″ − φxx = 0,   0 < x < 1,
⎪ ψ″ − ψxx = 0,   0 < x < 1,
⎨ φ(t, 0) = ψ(t, 0) = 0,    (28.39)
⎪ φx(t, 1) + αφ(t, 1) = 0,
⎩ ψx(t, 1) + βψ(t, 1) = 0

with the observation on the finite horizon

d1φ(t, 1) + d2ψ(t, 1) = 0,   ∀t ∈ [0, T]    (28.40)

only has the trivial solution, provided that T > 0 is large enough.

Proof Here we only give a sketch of the proof, which is similar to that of Theorem 8.24.
Consider the following eigenvalue problem:

⎧ λ²u + uxx = 0,   0 < x < 1,
⎪ λ²v + vxx = 0,   0 < x < 1,
⎨ u(0) = v(0) = 0,    (28.41)
⎪ ux(1) + αu(1) = 0,
⎩ vx(1) + βv(1) = 0.

Let

u = sin λx,   v = sin λx.    (28.42)

By the last two formulas in (28.41), we have

λ cos λ + α sin λ = 0,   λ cos λ + β sin λ = 0.    (28.43)


28.3 One-Dimensional Case 293

Rewrite the first formula of (28.43) as
$$\tan\lambda+\frac{\lambda}{\alpha}=0.\tag{28.44}$$
Noting (28.44), we have
$$\cos^2\lambda-\sin^2\lambda=\frac{1-\tan^2\lambda}{1+\tan^2\lambda}=\frac{\alpha^2-\lambda^2}{\alpha^2+\lambda^2}\tag{28.45}$$
and
$$\sin\lambda\cos\lambda=\frac{\tan\lambda}{1+\tan^2\lambda}=-\frac{\alpha\lambda}{\alpha^2+\lambda^2}.\tag{28.46}$$
Then, by
$$e^{2i\lambda}=\cos 2\lambda+i\sin 2\lambda=\cos^2\lambda-\sin^2\lambda+2i\sin\lambda\cos\lambda,\tag{28.47}$$
we have
$$e^{2i\lambda}=\frac{\alpha^2-\lambda^2}{\alpha^2+\lambda^2}-\frac{2\alpha\lambda i}{\alpha^2+\lambda^2}=-\frac{\lambda+\alpha i}{\lambda-\alpha i}.\tag{28.48}$$
The asymptotic expansion of $e^{2i\lambda}$ at λ = ∞ gives
$$e^{2i\lambda}=-1-\frac{2\alpha i}{\lambda}+\frac{O(1)}{\lambda^2}.\tag{28.49}$$
Taking the logarithm on both sides, we get
$$2i\lambda=(2n+1)\pi i+\ln\Bigl(1+\frac{2\alpha i}{\lambda}+\frac{O(1)}{\lambda^2}\Bigr)=(2n+1)\pi i+\frac{2\alpha i}{\lambda}+\frac{O(1)}{\lambda^2}\,i,$$
then
$$\lambda=\lambda_n^\alpha:=\Bigl(n+\frac12\Bigr)\pi+\frac{\alpha}{\lambda}+\frac{O(1)}{\lambda^2}.\tag{28.50}$$
Noting $\lambda_n^\alpha\sim n\pi$, we get
$$\lambda_n^\alpha=\Bigl(n+\frac12\Bigr)\pi+\frac{\alpha}{n\pi}+\frac{O(1)}{n^2}.\tag{28.51}$$
Similarly, by the second formula of (28.43), we have
$$\lambda=\lambda_n^\beta:=\Bigl(n+\frac12\Bigr)\pi+\frac{\beta}{n\pi}+\frac{O(1)}{n^2}.\tag{28.52}$$

Noting α ≠ β, it follows from (28.44) that
$$\lambda_m^\alpha\neq\lambda_n^\beta,\qquad\forall m,n\in\mathbb Z.\tag{28.53}$$
Besides, by the monotonicity of the function λ → tan λ + λ/α, we have
$$\lambda_m^\alpha\neq\lambda_n^\alpha,\quad\lambda_m^\beta\neq\lambda_n^\beta,\qquad\forall m,n\in\mathbb Z,\ m\neq n.\tag{28.54}$$

Without loss of generality, assuming that α > β > 0, we can arrange $\{\lambda_n^\alpha\}\cup\{\lambda_n^\beta\}$ into an increasing sequence
$$\cdots<\lambda_{-n}^\alpha<\lambda_{-n}^\beta<\cdots<\lambda_1^\beta<\lambda_1^\alpha<\cdots<\lambda_n^\beta<\lambda_n^\alpha<\cdots.\tag{28.55}$$
On the other hand, noting α > β and using the expressions (28.51)–(28.52), there exists a positive constant γ > 0, such that
$$\lambda_{n+1}^\alpha-\lambda_n^\alpha\geqslant 2\gamma,\qquad\lambda_{n+1}^\beta-\lambda_n^\beta\geqslant 2\gamma\tag{28.56}$$
and
$$\frac{1}{|n|}\lesssim\lambda_n^\alpha-\lambda_n^\beta\leqslant\gamma\tag{28.57}$$

for all n ∈ Z with |n| large enough.
So, the sequence $\{\lambda_n^\alpha\}\cup\{\lambda_n^\beta\}$ satisfies the conditions (8.87), (8.91), and (8.92) in Theorem 8.22 with c = 1, m = 2, and s = 1. Moreover, the upper density D⁺ defined in (8.93) for the sequence $\{\lambda_n^\alpha\}\cup\{\lambda_n^\beta\}$ is equal to 2.
β
Finally, we easily check that the system of eigenvectors $\{E_n^\alpha,E_n^\beta\}_{n\in\mathbb Z}$ with
$$E_n^\alpha=\begin{pmatrix}\dfrac{\sin\lambda_n^\alpha x}{\lambda_n^\alpha}\\[2mm]\sin\lambda_n^\alpha x\end{pmatrix},\qquad E_n^\beta=\begin{pmatrix}\dfrac{\sin\lambda_n^\beta x}{\lambda_n^\beta}\\[2mm]\sin\lambda_n^\beta x\end{pmatrix},\qquad n\in\mathbb Z\tag{28.58}$$
forms a Hilbert basis in (H¹(Ω))^N × (L²(Ω))^N. Then any given solution to system (28.39) can be represented by
$$\begin{pmatrix}\phi\\\phi'\end{pmatrix}=\sum_{n\in\mathbb Z}c_n^\alpha e^{i\lambda_n^\alpha t}E_n^\alpha,\qquad\begin{pmatrix}\psi\\\psi'\end{pmatrix}=\sum_{n\in\mathbb Z}c_n^\beta e^{i\lambda_n^\beta t}E_n^\beta.\tag{28.59}$$
By the boundary observation (28.40), we have
$$d_1\sum_{n\in\mathbb Z}c_n^\alpha e^{i\lambda_n^\alpha t}\frac{\sin\lambda_n^\alpha}{\lambda_n^\alpha}+d_2\sum_{n\in\mathbb Z}c_n^\beta e^{i\lambda_n^\beta t}\frac{\sin\lambda_n^\beta}{\lambda_n^\beta}=0.\tag{28.60}$$

Then, all the conditions of Theorem 8.22 are verified, so the sequence $\{e^{i\lambda_n^\alpha t},e^{i\lambda_n^\beta t}\}_{n\in\mathbb Z}$ is ω-linearly independent in L²(0, T), provided that T > 4π. It follows that
$$d_1c_n^\alpha\frac{\sin\lambda_n^\alpha}{\lambda_n^\alpha}=0,\qquad d_2c_n^\beta\frac{\sin\lambda_n^\beta}{\lambda_n^\beta}=0,\qquad\forall n\in\mathbb Z,\tag{28.61}$$
namely,
$$c_n^\alpha=0,\qquad c_n^\beta=0,\qquad\forall n\in\mathbb Z,\tag{28.62}$$
hence φ ≡ ψ ≡ 0. The proof is complete. □
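The spectral estimates (28.51)–(28.52) and the gap conditions (28.56)–(28.57) are easy to check numerically. The following Python sketch is not part of the proof; the values of α, β and the truncation order are arbitrary illustrative choices. It solves the characteristic equation λ cos λ + α sin λ = 0 on each branch and compares the roots with the asymptotic formula.

```python
import numpy as np
from scipy.optimize import brentq

def robin_roots(a, n_max):
    """Positive roots of lam*cos(lam) + a*sin(lam) = 0, one in each
    interval ((n - 1/2)*pi, (n + 1/2)*pi), n = 1, ..., n_max."""
    f = lambda lam: lam * np.cos(lam) + a * np.sin(lam)
    eps = 1e-9
    return np.array([brentq(f, (n - 0.5) * np.pi + eps, (n + 0.5) * np.pi - eps)
                     for n in range(1, n_max + 1)])

alpha, beta, n_max = 2.0, 1.0, 200            # illustrative: alpha > beta > 0, alpha != beta
lam_a, lam_b = robin_roots(alpha, n_max), robin_roots(beta, n_max)
n = np.arange(1, n_max + 1)

# (28.51)-(28.52): lam_n = (n + 1/2)*pi + a/(n*pi) + O(1/n^2)
err_a = lam_a - ((n + 0.5) * np.pi + alpha / (n * np.pi))
err_b = lam_b - ((n + 0.5) * np.pi + beta / (n * np.pi))
print(np.max(np.abs(err_a[50:]) * n[50:] ** 2))    # stays bounded: O(1/n^2) remainder
print(np.max(np.abs(err_b[50:]) * n[50:] ** 2))

# (28.56)-(28.57): uniform gap within each family, gap ~ 1/n between the two families
print(np.min(np.diff(lam_a)), np.min(np.diff(lam_b)))       # bounded away from zero
print(np.min((lam_a - lam_b) * n), np.max(lam_a - lam_b))   # comparable to (alpha-beta)/(n*pi)
```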


Chapter 29
Approximate Boundary Synchronization

The approximate boundary synchronization is defined and studied in this chapter for
system (III) with coupled Robin boundary controls.

29.1 Definitions

Definition 29.1 System (III) is approximately synchronizable at the time T > 0 if for any given initial data (Û_0, Û_1) ∈ (H_0)^N × (H_{−1})^N, there exists a sequence {H_n} of boundary controls in $\mathcal L^M$ with compact support in [0, T], such that the corresponding sequence {U_n} of solutions to problem (III) and (III0) satisfies
$$u_n^{(k)}-u_n^{(l)}\to 0\quad\text{in}\quad C^0_{\rm loc}([T,+\infty);H_0)\cap C^1_{\rm loc}([T,+\infty);H_{-1})\tag{29.1}$$
for all k, l with 1 ≤ k, l ≤ N as n → +∞.

Define the synchronization matrix C_1 of order (N − 1) × N by
$$C_1=\begin{pmatrix}1&-1&&&\\&1&-1&&\\&&\ddots&\ddots&\\&&&1&-1\end{pmatrix}.\tag{29.2}$$
Clearly,
$$\mathrm{Ker}(C_1)=\mathrm{Span}\{e_1\}\quad\text{with}\quad e_1=(1,\cdots,1)^T.\tag{29.3}$$


Then, the approximate boundary synchronization (29.1) can be equivalently rewritten as
$$C_1U_n\to 0\quad\text{as}\ n\to+\infty\tag{29.4}$$
in the space
$$C^0_{\rm loc}([T,+\infty);(H_0)^{N-1})\cap C^1_{\rm loc}([T,+\infty);(H_{-1})^{N-1}).\tag{29.5}$$
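For readers who wish to experiment numerically, the following NumPy sketch builds the synchronization matrix C_1 of (29.2) and checks (29.3); the size N is an arbitrary illustrative choice.

```python
import numpy as np

def C1(N):
    """The (N-1) x N matrix of (29.2): rows (0,...,0, 1, -1, 0,...,0)."""
    return np.eye(N - 1, N) - np.eye(N - 1, N, k=1)

N = 5
C = C1(N)
e1 = np.ones(N)

print(np.allclose(C @ e1, 0.0))              # e1 = (1,...,1)^T belongs to Ker(C1)
print(np.linalg.matrix_rank(C) == N - 1)     # full row rank, so Ker(C1) = Span{e1}

# C1 U collects the differences u^(k) - u^(k+1); it vanishes exactly when all the
# components of U coincide, which is the synchronization condition behind (29.4).
U = 3.7 * e1
print(np.allclose(C @ U, 0.0))
```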

Definition 29.2 The matrix A satisfies the condition of C_1-compatibility if there exists a unique matrix A_1 of order (N − 1), such that
$$C_1A=A_1C_1.\tag{29.6}$$
The matrix A_1 is called the reduced matrix of A by C_1.

Remark 29.3 The condition of C_1-compatibility (29.6) is equivalent to
$$A\,\mathrm{Ker}(C_1)\subseteq\mathrm{Ker}(C_1).\tag{29.7}$$
Then, noting (29.3), the vector e_1 = (1, · · · , 1)^T is an eigenvector of A, corresponding to the eigenvalue a given by
$$a=\sum_{j=1}^Na_{ij},\qquad i=1,\cdots,N,\tag{29.8}$$
where $\sum_{j=1}^Na_{ij}$ is independent of i = 1, · · · , N. Equation (29.8) is called the row-sum condition, which is also equivalent to the condition of C_1-compatibility (29.6) or (29.7).
Similarly, the matrix B satisfies the condition of C_1-compatibility if there exists a unique matrix B_1 of order (N − 1), such that
$$C_1B=B_1C_1,\tag{29.9}$$
which is equivalent to the fact that
$$B\,\mathrm{Ker}(C_1)\subseteq\mathrm{Ker}(C_1).\tag{29.10}$$
Moreover, the vector e_1 = (1, · · · , 1)^T is also an eigenvector of B, corresponding to the eigenvalue b given by
$$b=\sum_{j=1}^Nb_{ij},\qquad i=1,\cdots,N,\tag{29.11}$$
where the row-sum $\sum_{j=1}^Nb_{ij}$ is independent of i = 1, · · · , N.
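The equivalences collected in Remark 29.3 can be illustrated numerically. The NumPy sketch below uses an arbitrary illustrative matrix A satisfying the row-sum condition (29.8) and computes its reduced matrix A_1 through the C_1-analogue of formula (30.12) given later; the formula is used here only as a convenient way to produce A_1.

```python
import numpy as np

N = 4
C = np.eye(N - 1, N) - np.eye(N - 1, N, k=1)      # the matrix C1 of (29.2)
A = np.array([[ 2., -1.,  0., -1.],
              [-1.,  3., -1., -1.],
              [ 0., -1.,  2., -1.],
              [-1., -1., -1.,  3.]])              # every row sums to a = 0 (illustrative)
e1 = np.ones(N)

row_sums = A.sum(axis=1)
print(np.allclose(row_sums, row_sums[0]))         # the row-sum condition (29.8)
print(np.allclose(A @ e1, row_sums[0] * e1))      # hence e1 is an eigenvector of A

# Reduced matrix A1 with C1 A = A1 C1, computed by the C1-analogue of (30.12)
A1 = C @ A @ C.T @ np.linalg.inv(C @ C.T)
print(np.allclose(C @ A, A1 @ C))                 # the condition of C1-compatibility (29.6)
```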
29.2 Fundamental Properties

Theorem 29.4 Assume that system (III) is approximately synchronizable. Then we necessarily have rank(R) ≥ N − 1.

Proof Otherwise, let Ker(R^T) = Span{E_1, · · · , E_d} with d > 1. Noting that
$$\dim\mathrm{Im}(C_1^T)+\dim\mathrm{Ker}(R^T)=N-1+d>N,\tag{29.12}$$
there exists a unit vector E ∈ Im(C_1^T) ∩ Ker(R^T). Let E = C_1^T x with x ∈ R^{N−1}. The approximate boundary synchronization (29.4) implies that
$$(E,U_n)=(x,C_1U_n)\to 0\tag{29.13}$$
as n → +∞ in the space
$$C^0_{\rm loc}([T,+\infty);H_0)\cap C^1_{\rm loc}([T,+\infty);H_{-1}).$$
On the other hand, since E ∈ Ker(R^T), we have
$$E=\sum_{r=1}^d\alpha_rE_r,\tag{29.14}$$
where the coefficients α_1, · · · , α_d are not all zero. By Lemma 26.1, Ker(R^T) is contained in Ker(D^T) and invariant for both A^T and B^T; therefore, we still have (27.2) and (27.3). Thus, applying E_r to problem (III) and (III0) and setting u_r = (E_r, U_n) for 1 ≤ r ≤ d, we find again problem (27.9)–(27.10) with homogeneous boundary conditions. Noting that problem (27.9)–(27.10) is independent of n, it follows from (29.13) and (29.14) that
$$\sum_{r=1}^d\alpha_ru_r(T)\equiv\sum_{r=1}^d\alpha_ru_r'(T)\equiv 0.\tag{29.15}$$
Then, by well-posedness, it is easy to see that
$$\sum_{r=1}^d\alpha_r(E_r,\hat U_0)\equiv\sum_{r=1}^d\alpha_r(E_r,\hat U_1)\equiv 0\tag{29.16}$$
for any given initial data (Û_0, Û_1) ∈ (H_0)^N × (H_{−1})^N. This yields
$$\sum_{r=1}^d\alpha_rE_r=0.\tag{29.17}$$
Because of the linear independence of the vectors E_1, · · · , E_d, we get a contradiction: α_1 = · · · = α_d = 0. □
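The necessary rank condition of Theorem 29.4 can be tested numerically once the matrix R is formed. Its precise definition is given in (26.2)–(26.3), which is not reproduced in this excerpt; the Python sketch below assumes that R collects the blocks A^r B^s D (as suggested by the computation (30.20) in the next chapter), and all matrices are illustrative.

```python
import numpy as np
from itertools import product

def kalman_like_matrix(A, B, D, N):
    """Assumed realization of R: concatenation of the blocks A^r B^s D, 0 <= r, s <= N-1."""
    blocks = [np.linalg.matrix_power(A, r) @ np.linalg.matrix_power(B, s) @ D
              for r, s in product(range(N), repeat=2)]
    return np.hstack(blocks)

N = 3
A, B = np.eye(N), np.eye(N)                 # a fully decoupled system (illustrative)
D = np.array([[1.], [1.], [0.]])            # a single boundary control

R = kalman_like_matrix(A, B, D, N)
print(np.linalg.matrix_rank(R), N - 1)      # 1 < 2: such a D can never realize (29.1)
```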
Theorem 29.5 Assume that system (III) is approximately synchronizable under the minimal rank(R) = N − 1. Then, we have the following assertions:
(i) There exists a vector E_1 ∈ Ker(R^T), such that (E_1, e_1) = 1 with e_1 = (1, · · · , 1)^T.
(ii) For any given initial data (Û_0, Û_1) ∈ (H_0)^N × (H_{−1})^N, there exists a unique scalar function u such that
$$u_n^{(k)}\to u\quad\text{in}\quad C^0_{\rm loc}([T,+\infty);H_0)\cap C^1_{\rm loc}([T,+\infty);H_{-1})\tag{29.18}$$
for all 1 ≤ k ≤ N as n → +∞.
(iii) The matrices A and B satisfy the conditions of C_1-compatibility (29.6) and (29.9), respectively.
Proof (i) Noting that dim Ker(R^T) = 1, by Lemma 26.1, there exists a nonzero vector E_1 ∈ Ker(R^T), such that
$$D^TE_1=0,\quad A^TE_1=\alpha E_1\quad\text{and}\quad B^TE_1=\beta E_1.\tag{29.19}$$
We claim that E_1 ∉ Im(C_1^T). Otherwise, applying E_1 to problem (III) and (III0) with U = U_n and H = H_n, and setting u = (E_1, U_n), it follows that
$$\begin{cases}u''-\Delta u+\alpha u=0 & \text{in }(0,+\infty)\times\Omega,\\ u=0 & \text{on }(0,+\infty)\times\Gamma_0,\\ \partial_\nu u+\beta u=0 & \text{on }(0,+\infty)\times\Gamma_1\end{cases}\tag{29.20}$$
with the following initial data:
$$t=0:\quad u=(E_1,\hat U_0),\quad u'=(E_1,\hat U_1)\quad\text{in }\Omega.\tag{29.21}$$
Suppose that E_1 ∈ Im(C_1^T); then there exists a vector x ∈ R^{N−1}, such that E_1 = C_1^T x. Then, the approximate boundary synchronization (29.4) implies
$$(u(T),u'(T))=\bigl((x,C_1U_n(T)),(x,C_1U_n'(T))\bigr)\to(0,0)\tag{29.22}$$
in the space H_0 × H_{−1} as n → +∞. Then, since u is independent of n, we have
$$u(T)\equiv u'(T)\equiv 0.\tag{29.23}$$
Thus, because of the well-posedness of problem (29.20)–(29.21), it follows that
$$(E_1,\hat U_0)=(E_1,\hat U_1)=0\tag{29.24}$$
for any given initial data (Û_0, Û_1) ∈ (H_0)^N × (H_{−1})^N. This yields a contradiction E_1 = 0.
Since E_1 ∉ Im(C_1^T), noting that Im(C_1^T) = (Span{e_1})^⊥, we have (E_1, e_1) ≠ 0. Without loss of generality, we can take E_1 such that (E_1, e_1) = 1.
(ii) Since E_1 ∉ Im(C_1^T), the matrix $\begin{pmatrix}C_1\\E_1^T\end{pmatrix}$ is invertible. Moreover, we have
$$\begin{pmatrix}C_1\\E_1^T\end{pmatrix}e_1=\begin{pmatrix}0\\1\end{pmatrix}.\tag{29.25}$$

Noting (29.4), we have
$$\begin{pmatrix}C_1\\E_1^T\end{pmatrix}U_n=\begin{pmatrix}C_1U_n\\(E_1,U_n)\end{pmatrix}\to\begin{pmatrix}0\\u\end{pmatrix}=\begin{pmatrix}0\\1\end{pmatrix}u\tag{29.26}$$
as n → +∞ in the space
$$C^0_{\rm loc}([T,+\infty);(H_0)^N)\cap C^1_{\rm loc}([T,+\infty);(H_{-1})^N).\tag{29.27}$$
Then, noting (29.25), it follows that
$$U_n=\begin{pmatrix}C_1\\E_1^T\end{pmatrix}^{-1}\begin{pmatrix}C_1U_n\\E_1^TU_n\end{pmatrix}\to u\begin{pmatrix}C_1\\E_1^T\end{pmatrix}^{-1}\begin{pmatrix}0\\1\end{pmatrix}=ue_1\tag{29.28}$$
in the space (29.27), namely, (29.18) holds.


(iii) Applying C_1 to system (III) with U = U_n and H = H_n, and passing to the limit as n → +∞, it follows from (29.4) and (29.28) that
$$C_1Ae_1u=0\quad\text{in }[T,+\infty)\times\Omega\tag{29.29}$$
and
$$C_1Be_1u=0\quad\text{on }[T,+\infty)\times\Gamma_1.\tag{29.30}$$
We claim that at least for an initial data (Û_0, Û_1), we have
$$u\not\equiv 0\quad\text{on }[T,+\infty)\times\Gamma_1.\tag{29.31}$$
Otherwise, it follows from system (29.20) that
$$\partial_\nu u\equiv u\equiv 0\quad\text{on }[T,+\infty)\times\Gamma_1,\tag{29.32}$$
then, by Holmgren's uniqueness theorem (cf. Theorem 8.2 in [62]), we get u ≡ 0 for all the initial data (Û_0, Û_1), namely, system (III) is approximately null controllable under the condition dim Ker(R^T) = 1. This contradicts Corollary 27.5. Then, it follows from (29.29) and (29.30) that C_1Ae_1 = 0 and C_1Be_1 = 0, which give the conditions of C_1-compatibility for A and B, respectively. The proof is complete. □
Remark 29.6 Theorem 29.5 (ii) indicates that under the minimal rank condition
rank(D) = N − 1, the approximate boundary synchronization in the consensus
sense (29.1) is actually that in the pinning sense (29.18) with the approximately
synchronizable state u.
Assume that both A and B satisfy the corresponding conditions of C_1-compatibility, namely, there exist two matrices A_1 and B_1 such that C_1A = A_1C_1 and C_1B = B_1C_1, respectively. Setting W = C_1U in problem (III) and (III0), we get the following reduced system:
$$\begin{cases}W''-\Delta W+A_1W=0 & \text{in }(0,+\infty)\times\Omega,\\ W=0 & \text{on }(0,+\infty)\times\Gamma_0,\\ \partial_\nu W+B_1W=C_1DH & \text{on }(0,+\infty)\times\Gamma_1\end{cases}\tag{29.33}$$
with the initial condition
$$t=0:\quad W=C_1\hat U_0,\quad W'=C_1\hat U_1\quad\text{in }\Omega.\tag{29.34}$$
Since B is similar to a real symmetric matrix, by Proposition 2.21, so is its reduced matrix B_1. Then, by Theorem 19.9, the reduced problem (29.33)–(29.34) is well-posed in the space (H_0)^{N−1} × (H_{−1})^{N−1}.
Accordingly, consider the reduced adjoint system
$$\begin{cases}\Psi''-\Delta\Psi+A_1^T\Psi=0 & \text{in }(0,T)\times\Omega,\\ \Psi=0 & \text{on }(0,T)\times\Gamma_0,\\ \partial_\nu\Psi+B_1^T\Psi=0 & \text{on }(0,T)\times\Gamma_1\end{cases}\tag{29.35}$$
with the C_1D-observation
$$(C_1D)^T\Psi\equiv 0\quad\text{on }(0,T)\times\Gamma_1.\tag{29.36}$$

Obviously, we have
Proposition 29.7 Under the conditions of C1 -compatibility for A and B, system
(III) is approximately synchronizable if and only if the reduced system (29.33) is
approximately null controllable, or equivalently if and only if the reduced adjoint
system (29.35) is C1 D-observable.
Theorem 29.8 Assume that A and B satisfy the conditions of C1 -compatibility
(29.6) and (29.9), respectively. Assume furthermore that A T and B T admit a common
eigenvector E 1 such that (E 1 , e1 ) = 1 with e1 = (1, · · · , 1)T . Let D be defined by

Im(D) = (Span{E 1 })⊥ . (29.37)

Then system (III) is approximately synchronizable. Moreover, we have rank(R) = N − 1.
Proof Since (E_1, e_1) = 1, noting (29.37), we have e_1 ∉ Im(D) and Ker(C_1) ∩ Im(D) = {0}. Therefore, by Proposition 2.11, we have
$$\mathrm{rank}(C_1D)=\mathrm{rank}(D)=N-1.\tag{29.38}$$
Thus, the reduced adjoint system (29.35) is C_1D-observable because of Holmgren's uniqueness theorem. By Proposition 29.7, system (III) is approximately synchronizable.
Noting (29.37), we have E_1 ∈ Ker(D^T). Moreover, since E_1 is a common eigenvector of A^T and B^T, we have E_1 ∈ Ker(R^T), hence dim Ker(R^T) ≥ 1, namely, rank(R) ≤ N − 1. On the other hand, since rank(R) ≥ rank(D) = N − 1, we get rank(R) = N − 1. The proof is complete. □
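The construction of Theorem 29.8 can be reproduced numerically. In the sketch below, A = B is a symmetric matrix with zero row sums, so that E_1 = e_1/N is a common eigenvector of A^T and B^T with (E_1, e_1) = 1, and D is chosen with Im(D) = (Span{E_1})^⊥ as in (29.37); the Kalman-type matrix R is formed in the same assumed way as in the previous sketch. All matrices are illustrative.

```python
import numpy as np

N = 3
A = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])             # symmetric, zero row sums: A e1 = 0
B = A.copy()
e1 = np.ones(N)
E1 = e1 / N                                 # A^T E1 = B^T E1 = 0 and (E1, e1) = 1

D = np.array([[ 1.,  0.],
              [-1.,  1.],
              [ 0., -1.]])                  # columns orthogonal to E1: Im(D) = (Span{E1})^perp

C = np.eye(N - 1, N) - np.eye(N - 1, N, k=1)           # the matrix C1 of (29.2)
print(np.linalg.matrix_rank(C @ D) == N - 1)            # the rank identity (29.38)

R = np.hstack([np.linalg.matrix_power(A, r) @ np.linalg.matrix_power(B, s) @ D
               for r in range(N) for s in range(N)])    # same assumed form of R as above
print(np.linalg.matrix_rank(R) == N - 1)                # rank(R) = N - 1, as in Theorem 29.8
```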
Chapter 30
Approximate Boundary Synchronization
by p-Groups

The approximate boundary synchronization by p-groups is introduced and studied


in this chapter for system (III) with coupled Robin boundary controls.

30.1 Definitions

Let p ≥ 1 be an integer and let
$$0=n_0<n_1<n_2<\cdots<n_p=N\tag{30.1}$$
be integers such that n_r − n_{r−1} ≥ 2 for all 1 ≤ r ≤ p. We rearrange the components of the state variable U into p-groups:
$$(u^{(1)},\cdots,u^{(n_1)}),\ (u^{(n_1+1)},\cdots,u^{(n_2)}),\ \cdots,\ (u^{(n_{p-1}+1)},\cdots,u^{(n_p)}).\tag{30.2}$$

Definition 30.1 System (III) is approximately synchronizable by p-groups at the time T > 0 if for any given initial data (Û_0, Û_1) ∈ (H_0)^N × (H_{−1})^N, there exists a sequence {H_n} of boundary controls in $\mathcal L^M$ with compact support in [0, T], such that the corresponding sequence {U_n} of solutions to problem (III) and (III0) satisfies
$$u_n^{(k)}-u_n^{(l)}\to 0\quad\text{in}\quad C^0_{\rm loc}([T,+\infty);H_0)\cap C^1_{\rm loc}([T,+\infty);H_{-1})\tag{30.3}$$
for n_{r−1} + 1 ≤ k, l ≤ n_r and 1 ≤ r ≤ p as n → +∞.


Let S_r be the following (n_r − n_{r−1} − 1) × (n_r − n_{r−1}) matrix:
$$S_r=\begin{pmatrix}1&-1&0&\cdots&0\\0&1&-1&\cdots&0\\\vdots&\ddots&\ddots&\ddots&\vdots\\0&\cdots&0&1&-1\end{pmatrix}.\tag{30.4}$$
Let C_p be the following (N − p) × N full row-rank matrix of synchronization by p-groups:
$$C_p=\begin{pmatrix}S_1&&&\\&S_2&&\\&&\ddots&\\&&&S_p\end{pmatrix}.\tag{30.5}$$
For 1 ≤ r ≤ p, setting
$$(e_r)_i=\begin{cases}1, & n_{r-1}+1\leqslant i\leqslant n_r,\\ 0, & \text{otherwise},\end{cases}\tag{30.6}$$
it is clear that
$$\mathrm{Ker}(C_p)=\mathrm{Span}\{e_1,e_2,\cdots,e_p\}.\tag{30.7}$$

Moreover, the approximate boundary synchronization by p-groups (30.3) can be equivalently rewritten as
$$C_pU_n\to 0\quad\text{as}\ n\to+\infty\tag{30.8}$$
in the space
$$C^0_{\rm loc}([T,+\infty);(H_0)^{N-p})\cap C^1_{\rm loc}([T,+\infty);(H_{-1})^{N-p}).\tag{30.9}$$
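The matrix C_p of (30.4)–(30.5) is easy to assemble from the grouping (30.1); the following NumPy sketch does so and checks (30.7) for an arbitrary illustrative grouping.

```python
import numpy as np

def Cp(groups):
    """groups = (n_0, n_1, ..., n_p) as in (30.1); returns the (N - p) x N
    block-diagonal matrix (30.5) whose r-th block is the matrix S_r of (30.4)."""
    N, p = groups[-1], len(groups) - 1
    C = np.zeros((N - p, N))
    row = 0
    for r in range(p):
        for i in range(groups[r], groups[r + 1] - 1):
            C[row, i], C[row, i + 1] = 1.0, -1.0
            row += 1
    return C

groups = (0, 2, 5)                          # illustrative: p = 2 groups {1,2} and {3,4,5}
C = Cp(groups)
print(C.shape)                              # (N - p, N) = (3, 5)

# (30.6)-(30.7): the group indicators e_1, e_2 span Ker(C_p)
E = np.zeros((5, 2))
E[0:2, 0], E[2:5, 1] = 1.0, 1.0
print(np.allclose(C @ E, 0.0), np.linalg.matrix_rank(C) == 3)
```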

Definition 30.2 The matrix A satisfies the condition of C_p-compatibility if there exists a unique matrix A_p of order (N − p), such that
$$C_pA=A_pC_p.\tag{30.10}$$
The matrix A_p is called the reduced matrix of A by C_p.
Remark 30.3 By Proposition 2.15, the condition of C_p-compatibility (30.10) is equivalent to
$$A\,\mathrm{Ker}(C_p)\subseteq\mathrm{Ker}(C_p).\tag{30.11}$$
Moreover, the reduced matrix A_p is given by
$$A_p=C_pAC_p^T(C_pC_p^T)^{-1}.\tag{30.12}$$
Similarly, the matrix B satisfies the condition of C_p-compatibility if there exists a unique matrix B_p of order (N − p), such that
$$C_pB=B_pC_p,\tag{30.13}$$
which is equivalent to
$$B\,\mathrm{Ker}(C_p)\subseteq\mathrm{Ker}(C_p).\tag{30.14}$$
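When A and B satisfy the conditions of C_p-compatibility, the reduced matrices A_p and B_p can be computed directly from (30.12). The NumPy sketch below does this for illustrative matrices A, B that leave Ker(C_p) invariant, and verifies (30.10) and (30.13).

```python
import numpy as np

C = np.array([[1., -1., 0., 0.],
              [0., 0., 1., -1.]])                 # C_p for N = 4, p = 2, groups {1,2}, {3,4}
E = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.]]).T                # e_1, e_2 spanning Ker(C_p)
A = np.array([[ 2., -1., -1.,  0.],
              [-1.,  2.,  0., -1.],
              [-1.,  0.,  2., -1.],
              [ 0., -1., -1.,  2.]])              # illustrative, maps Ker(C_p) into Ker(C_p)
B = np.eye(4)

print(np.allclose(C @ A @ E, 0.0))                # the equivalent condition (30.11)

Ap = C @ A @ C.T @ np.linalg.inv(C @ C.T)         # formula (30.12)
Bp = C @ B @ C.T @ np.linalg.inv(C @ C.T)
print(np.allclose(C @ A, Ap @ C))                 # (30.10)
print(np.allclose(C @ B, Bp @ C))                 # (30.13)
```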

Assume that A and B satisfy the conditions of C_p-compatibility (30.10) and (30.13), respectively. Setting W = C_pU in problem (III) and (III0), we get the following reduced system:
$$\begin{cases}W''-\Delta W+A_pW=0 & \text{in }(0,T)\times\Omega,\\ W=0 & \text{on }(0,T)\times\Gamma_0,\\ \partial_\nu W+B_pW=C_pDH & \text{on }(0,T)\times\Gamma_1\end{cases}\tag{30.15}$$
with the initial condition
$$t=0:\quad W=C_p\hat U_0,\quad W'=C_p\hat U_1\quad\text{in }\Omega.\tag{30.16}$$
Since B is similar to a real symmetric matrix, by Proposition 2.21, the reduced matrix B_p is also similar to a real symmetric matrix. Then, by Theorem 19.19, the reduced problem (30.15)–(30.16) is well-posed in the space (H_0)^{N−p} × (H_{−1})^{N−p}.
Accordingly, consider the reduced adjoint system
$$\begin{cases}\Psi''-\Delta\Psi+A_p^T\Psi=0 & \text{in }(0,+\infty)\times\Omega,\\ \Psi=0 & \text{on }(0,+\infty)\times\Gamma_0,\\ \partial_\nu\Psi+B_p^T\Psi=0 & \text{on }(0,+\infty)\times\Gamma_1\end{cases}\tag{30.17}$$
together with the C_pD-observation
$$(C_pD)^T\Psi\equiv 0\quad\text{on }(0,T)\times\Gamma_1.\tag{30.18}$$

We have
Proposition 30.4 Assume that A and B satisfy the conditions of C p -compatibility
(30.10) and (30.13), respectively. Then system (III) is approximately synchroniz-
able by p-groups if and only if the reduced system (30.15) is approximately null
controllable, or equivalently if and only if the reduced adjoint system (30.17) is
C p D-observable.
Corollary 30.5 Under the conditions of C p -compatibility (30.10) and (30.13), if
system (III) is approximately synchronizable by p-groups, we necessarily have the
following rank condition:
rank(C p R) = N − p. (30.19)
Proof Let $\overline R$ be the matrix defined by (26.2)–(26.3) corresponding to the reduced matrices A_p, B_p and $\overline D=C_pD$. Noting (30.10) and (30.13), we have
$$A_p^rB_p^s\overline D=A_p^rB_p^sC_pD=C_pA^rB^sD,\tag{30.20}$$
then
$$\overline R=C_pR.\tag{30.21}$$
Under the assumption that system (III) is approximately synchronizable by p-groups, by Proposition 30.4, the reduced system (30.15) is approximately null controllable, then by Corollary 27.5, we have rank($\overline R$) = N − p, which together with (30.21) implies (30.19). □

30.2 Fundamental Properties

Proposition 30.6 Assume that system (III) is approximately synchronizable by p-groups. Then, we necessarily have rank(R) ≥ N − p.

Proof Assume that dim Ker(R^T) = d with d > p. Let Ker(R^T) = Span{E_1, · · · , E_d}. Since
$$\dim\mathrm{Ker}(R^T)+\dim\mathrm{Im}(C_p^T)=d+N-p>N,$$
we have Ker(R^T) ∩ Im(C_p^T) ≠ {0}. Hence, there exists a nonzero vector x ∈ R^{N−p} and coefficients β_1, · · · , β_d not all zero, such that
$$\sum_{r=1}^d\beta_rE_r=C_p^Tx.\tag{30.22}$$
Moreover, by Lemma 26.1, we still have (27.2) and (27.3). Then, applying E_r to problem (III) and (III0) with U = U_n and H = H_n and setting u_r = (E_r, U_n) for 1 ≤ r ≤ d, it follows that
$$\begin{cases}u_r''-\Delta u_r+\sum_{s=1}^d\alpha_{rs}u_s=0 & \text{in }(0,+\infty)\times\Omega,\\ u_r=0 & \text{on }(0,+\infty)\times\Gamma_0,\\ \partial_\nu u_r+\sum_{s=1}^d\beta_{rs}u_s=0 & \text{on }(0,+\infty)\times\Gamma_1\end{cases}\tag{30.23}$$
with the initial condition
$$t=0:\quad u_r=(E_r,\hat U_0),\quad u_r'=(E_r,\hat U_1)\quad\text{in }\Omega.\tag{30.24}$$
Noting (30.8), it follows from (30.22) that
$$\sum_{r=1}^d\beta_ru_r=(x,C_pU_n)\to 0\tag{30.25}$$
as n → +∞ in the space
$$C^0_{\rm loc}([T,+\infty);H_0)\cap C^1_{\rm loc}([T,+\infty);H_{-1}).$$
Since the functions u_1, · · · , u_d are independent of n and of the applied boundary controls H_n, it follows that
$$\sum_{r=1}^d\beta_ru_r(T)=\sum_{r=1}^d\beta_ru_r'(T)=0\quad\text{in }\Omega.\tag{30.26}$$
Then, it follows from the well-posedness of problem (30.23)–(30.24) that
$$\sum_{r=1}^d\beta_r(E_r,\hat U_0)=\sum_{r=1}^d\beta_r(E_r,\hat U_1)=0\tag{30.27}$$
for any given initial data (Û_0, Û_1) ∈ (H_0)^N × (H_{−1})^N. In particular, we get
$$\sum_{r=1}^d\beta_rE_r=0,\tag{30.28}$$
then a contradiction: β_1 = · · · = β_d = 0, because of the linear independence of the vectors E_1, · · · , E_d. The proof is achieved. □
Theorem 30.7 Let A and B satisfy the conditions of C_p-compatibility (30.10) and (30.13), respectively. Assume that A^T and B^T admit a common invariant subspace V, which is bi-orthonormal to Ker(C_p). Then, setting the boundary control matrix D by
$$\mathrm{Im}(D)=V^\perp,\tag{30.29}$$
system (III) is approximately synchronizable by p-groups. Moreover, we have rank(R) = N − p.
Proof Since V is bi-orthonormal to Ker(C_p), we have
$$\mathrm{Ker}(C_p)\cap V^\perp=\mathrm{Ker}(C_p)\cap\mathrm{Im}(D)=\{0\},\tag{30.30}$$
therefore, by Proposition 2.11, we have
$$\mathrm{rank}(C_pD)=\mathrm{rank}(D)=N-p.\tag{30.31}$$
Thus, the C_pD-observation (30.18) becomes the full observation
$$\Psi\equiv 0\quad\text{on }(0,T)\times\Gamma_1.\tag{30.32}$$
By Holmgren's uniqueness theorem (cf. Theorem 8.2 in [62]), the reduced adjoint system (30.17) is C_pD-observable and the reduced system (30.15) is approximately null controllable. Then, by Proposition 30.4, the original system (III) is approximately synchronizable by p-groups. Noting that Ker(D^T) = V, by Lemma 26.1, it is easy to see that rank(R) = N − p. The proof is then complete. □

Theorem 30.8 Assume that system (III) is approximately synchronizable by p-groups. Assume furthermore that rank(R) = N − p. Then, we have the following assertions:
(i) Ker(R^T) is bi-orthonormal to Ker(C_p).
(ii) For any given initial data (Û_0, Û_1) ∈ (H_0)^N × (H_{−1})^N, there exist unique scalar functions u_1, u_2, · · · , u_p such that
$$u_n^{(k)}\to u_r\quad\text{in}\quad C^0_{\rm loc}([T,+\infty);H_0)\cap C^1_{\rm loc}([T,+\infty);H_{-1})\tag{30.33}$$
for n_{r−1} + 1 ≤ k ≤ n_r and 1 ≤ r ≤ p as n → +∞.
(iii) The coupling matrices A and B satisfy the conditions of C_p-compatibility (30.10) and (30.13), respectively.

Proof (i) We claim that Ker(R^T) ∩ Im(C_p^T) = {0}. Then, noting that Ker(R^T) and Ker(C_p) have the same dimension p and
$$\mathrm{Ker}(R^T)\cap\{\mathrm{Ker}(C_p)\}^\perp=\mathrm{Ker}(R^T)\cap\mathrm{Im}(C_p^T)=\{0\},\tag{30.34}$$
by Proposition 2.5, Ker(R^T) and Ker(C_p) are bi-orthonormal. Let Ker(R^T) = Span{E_1, · · · , E_p} and Ker(C_p) = Span{e_1, · · · , e_p} such that
$$(E_r,e_s)=\delta_{rs},\qquad r,s=1,\cdots,p.\tag{30.35}$$
Now we return to check that Ker(R^T) ∩ Im(C_p^T) = {0}. If Ker(R^T) ∩ Im(C_p^T) ≠ {0}, then there exists a nonzero vector x ∈ R^{N−p} and some coefficients β_1, · · · , β_p not all zero, such that
$$\sum_{r=1}^p\beta_rE_r=C_p^Tx.\tag{30.36}$$
By Lemma 26.1, we still have (27.2) and (27.3) with d = p. For 1 ≤ r ≤ p, applying E_r to problem (III) and (III0) with U = U_n and H = H_n, and setting
$$u_r=(E_r,U),\tag{30.37}$$
it follows that
$$\begin{cases}u_r''-\Delta u_r+\sum_{s=1}^p\alpha_{rs}u_s=0 & \text{in }(0,+\infty)\times\Omega,\\ u_r=0 & \text{on }(0,+\infty)\times\Gamma_0,\\ \partial_\nu u_r+\sum_{s=1}^p\beta_{rs}u_s=0 & \text{on }(0,+\infty)\times\Gamma_1\end{cases}\tag{30.38}$$
with the initial condition
$$t=0:\quad u_r=(E_r,\hat U_0),\quad u_r'=(E_r,\hat U_1).\tag{30.39}$$
Noting (30.8), we have
$$\sum_{r=1}^p\beta_ru_r=(x,C_pU_n)\to 0\quad\text{as }n\to+\infty\tag{30.40}$$
in the space
$$C^0_{\rm loc}([T,+\infty);H_0)\cap C^1_{\rm loc}([T,+\infty);H_{-1}).\tag{30.41}$$
Since the functions u_1, · · · , u_p are independent of n and of the applied boundary controls, we have
$$\sum_{r=1}^p\beta_ru_r(T)\equiv\sum_{r=1}^p\beta_ru_r'(T)\equiv 0\quad\text{in }\Omega.\tag{30.42}$$
Then, it follows from the well-posedness of problem (30.38)–(30.39) that
$$\sum_{r=1}^p\beta_r(E_r,\hat U_0)=\sum_{r=1}^p\beta_r(E_r,\hat U_1)=0\quad\text{in }\Omega\tag{30.43}$$
for any given initial data (Û_0, Û_1) ∈ (H_0)^N × (H_{−1})^N. In particular, we get
$$\sum_{r=1}^p\beta_rE_r=0,\tag{30.44}$$
then, a contradiction: β_1 = · · · = β_p = 0, because of the linear independence of the vectors E_1, · · · , E_p.
(ii) Noting (30.8) and (30.37), we have
$$\begin{pmatrix}C_p\\E_1^T\\\vdots\\E_p^T\end{pmatrix}U_n=\begin{pmatrix}C_pU_n\\E_1^TU_n\\\vdots\\E_p^TU_n\end{pmatrix}\to\begin{pmatrix}0\\u_1\\\vdots\\u_p\end{pmatrix}\tag{30.45}$$
as n → +∞ in the space
$$C^0_{\rm loc}([T,+\infty);(H_0)^N)\cap C^1_{\rm loc}([T,+\infty);(H_{-1})^N),\tag{30.46}$$
where u_1, · · · , u_p are given by (30.37). Since Ker(R^T) ∩ Im(C_p^T) = {0}, the matrix $\begin{pmatrix}C_p\\E_1^T\\\vdots\\E_p^T\end{pmatrix}$ is invertible. Thus, it follows from (30.45) that there exists $\widetilde U$ such that
$$U_n\to\begin{pmatrix}C_p\\E_1^T\\\vdots\\E_p^T\end{pmatrix}^{-1}\begin{pmatrix}0\\u_1\\\vdots\\u_p\end{pmatrix}=:\widetilde U\tag{30.47}$$
as n → +∞ in the space (30.46). Moreover, (30.8) implies that
$$t\geqslant T:\quad C_p\widetilde U\equiv 0\quad\text{in }\Omega.$$
Noting (30.7), (30.35) and (30.37), it follows that
$$t\geqslant T:\quad\widetilde U=\sum_{r=1}^p(E_r,\widetilde U)e_r=\sum_{r=1}^pu_re_r\quad\text{in }\Omega.\tag{30.48}$$
Noting (30.6), we get then (30.33).
(iii) Applying C_p to system (III) with U = U_n and H = H_n, and passing to the limit as n → +∞, by (30.8), (30.47) and (30.48), it is easy to get that
$$\sum_{r=1}^pC_pAe_ru_r(T)\equiv 0\quad\text{in }\Omega\tag{30.49}$$
and
$$\sum_{r=1}^pC_pBe_ru_r(T)\equiv 0\quad\text{on }\Gamma_1.\tag{30.50}$$
On the other hand, because of the time-invertibility, system (30.38) defines an isomorphism from (H_1)^p × (H_0)^p onto (H_1)^p × (H_0)^p for all t ≥ 0. Then, it follows from (30.49) and (30.50) that
$$C_pAe_r=0\quad\text{and}\quad C_pBe_r=0\quad\text{for }1\leqslant r\leqslant p.\tag{30.51}$$
We get thus the conditions of C_p-compatibility for A and B, respectively. The proof is complete. □
Remark 30.9 In general, the convergence (30.8) of the sequence {C p Un } does
not imply the convergence of the sequence of solutions {Un }. In fact, we even
don’t know if the sequence {Un } is bounded. However, under the rank condition
rank(R) = N − p, the convergence (30.8) actually implies the convergence (30.33).
Moreover, the functions u 1 , · · · , u p , called the approximately synchronizable state
by p-groups, are independent of applied boundary controls. In this case, system (III)
is approximately synchronizable by p-groups in the pinning sense, while that orig-
inally given by Definition 30.1 is in the consensus sense.
Let D_p be the set of all the boundary control matrices D which realize the approximate boundary synchronization by p-groups for system (III). In order to show the dependence on D, we prefer to write R_D instead of R given in (26.3). Then, we may define the minimal rank as
$$N_p=\inf_{D\in D_p}\mathrm{rank}(R_D).\tag{30.52}$$
Noting that rank(R_D) = N − dim Ker(R_D^T), because of Proposition 30.6, we have
$$N_p\geqslant N-p.\tag{30.53}$$
Moreover, we have the following


Corollary 30.10 The equality
Np = N − p (30.54)

holds if and only if the coupling matrices A and B satisfy the conditions of C p -
compatibility (30.10) and (30.13), respectively, and A T and B T possess a common
invariant subspace, which is bi-orthonormal to Ker(C p ). Moreover, the approximate
boundary synchronization by p-groups is in the pinning sense.
Proof Assume that (30.54) holds. Then there exists a boundary control matrix
D ∈ D p , such that dim Ker(RTD ) = p. By Theorem 30.8, the coupling matrices
A and B satisfy the conditions of C p -compatibility (30.10) and (30.13), respectively,
and Ker(RTD ) which, by Lemma 26.1, is bi-orthonormal to Ker(C p ), is invariant for
both A T and B T . Moreover, the approximate boundary synchronization by p-groups
is in the pinning sense.
Conversely, let V be a subspace which is invariant for both A T and B T , and
bi-orthonormal to Ker(C p ). Noting that A and B satisfy the conditions of C p -
compatibility (30.10) and (30.13), respectively, by Theorem 30.7, there exists a
boundary control matrix D ∈ D p , such that dim Ker(RTD ) = p, which together with
(30.53) implies (30.54). 
Remark 30.11 When N_p > N − p, the situation is more complicated. We do not know whether the conditions of C_p-compatibility (30.10) and (30.13) are necessary, nor whether the approximate boundary synchronization by p-groups is in the pinning sense.
Chapter 31
Approximately Synchronizable States
by p-Groups

When system (III) is approximately synchronizable by p-groups, the corresponding


approximately synchronizable states by p-groups will be considered in this chapter.

In Theorem 30.8, we have shown that if system (III) is approximately synchro-


nizable by p-groups under the condition dim Ker(RT ) = p, then A and B satisfy
the corresponding conditions of C p -compatibility, and Ker(RT ) is bi-orthonormal
to Ker(C p ); moreover, the approximately synchronizable state by p-groups is inde-
pendent of applied boundary controls. The following is the counterpart.

Theorem 31.1 Let A and B satisfy the conditions of C p -compatibility (30.10) and
(30.13), respectively. Assume that system (III) is approximately synchronizable by
p-groups. If the projection of the solution U to problem (III) and (III0) on a subspace
V of dimension p is independent of applied boundary controls, then V = Ker(RT ).
Moreover, Ker(RT ) is bi-orthonormal to Ker(C p ).
Proof Fixing Û_0 = Û_1 = 0, by Theorem 19.9, the linear map
$$F:H\to U$$
is continuous, then infinitely differentiable, from $L^2_{\rm loc}(0,+\infty;(L^2(\Gamma_1))^M)$ to $C^0_{\rm loc}([0,+\infty);(H_0)^N)\cap C^1_{\rm loc}([0,+\infty);(H_{-1})^N)$. For any given boundary control Ĥ ∈ $L^2_{\rm loc}(0,+\infty;(L^2(\Gamma_1))^M)$, let Û be the Fréchet derivative of U on Ĥ:
$$\hat U=F'(0)\hat H.$$
Then, it follows from problem (III) and (III0) that
$$\begin{cases}\hat U''-\Delta\hat U+A\hat U=0 & \text{in }(0,+\infty)\times\Omega,\\ \hat U=0 & \text{on }(0,+\infty)\times\Gamma_0,\\ \partial_\nu\hat U+B\hat U=D\hat H & \text{on }(0,+\infty)\times\Gamma_1,\\ t=0:\ \hat U=\hat U'=0 & \text{in }\Omega.\end{cases}\tag{31.1}$$

Let V = Span{E_1, · · · , E_p}. Then, the independence of the projection of U on the subspace V, with respect to the boundary controls, implies that
$$(E_r,\hat U)\equiv 0\quad\text{in }(0,+\infty)\times\Omega\quad\text{for }1\leqslant r\leqslant p.\tag{31.2}$$
We first show that E_r ∉ Im(C_p^T) for any given r with 1 ≤ r ≤ p. Otherwise, there exists an r̄ with 1 ≤ r̄ ≤ p and a vector x_r̄ ∈ R^{N−p}, such that E_r̄ = C_p^T x_r̄. Then, it follows from (31.2) that
$$0=(E_{\bar r},\hat U)=(x_{\bar r},C_p\hat U).$$
Since W = C_pÛ is the solution to the reduced system (30.15) with H = Ĥ, which is approximately controllable, we get thus x_r̄ = 0, which contradicts E_r̄ ≠ 0. Thus, since dim Im(C_p^T) = N − p and dim(V) = p, we have V ⊕ Im(C_p^T) = R^N. Then, for any given r with 1 ≤ r ≤ p, there exists a vector y_r ∈ R^{N−p}, such that
$$A^TE_r=\sum_{s=1}^p\alpha_{rs}E_s+C_p^Ty_r.$$
Noting (31.2) and applying E_r to system (31.1), it follows that
$$0=(A\hat U,E_r)=(\hat U,A^TE_r)=(\hat U,C_p^Ty_r)=(C_p\hat U,y_r).$$
Once again, the approximate controllability of the reduced system (30.15) implies that y_r = 0 for 1 ≤ r ≤ p. Then, it follows that
$$A^TE_r=\sum_{s=1}^p\alpha_{rs}E_s,\qquad 1\leqslant r\leqslant p.$$
So, the subspace V is invariant for A^T.


By the sharp regularity given in [30] (cf. also [29, 31]) on the problem of Neumann, we have improved the regularity of the solution to problem (III) and (III0). In fact, setting
$$\alpha=\begin{cases}3/5-\epsilon, & \Omega\text{ is a bounded smooth domain},\\ 2/3, & \Omega\text{ is a sphere},\\ 3/4-\epsilon, & \Omega\text{ is a parallelepiped},\end{cases}\tag{31.3}$$
where ε > 0 is a sufficiently small number, by Theorem 19.10, the trace
$$U|_{\Gamma_1}\in\bigl(H^{2\alpha-1}_{\rm loc}((0,+\infty)\times\Gamma_1)\bigr)^N\tag{31.4}$$
with the corresponding continuous dependence with respect to (Û_0, Û_1, H).
Next, noting (31.2) and applying E_r (1 ≤ r ≤ p) to the boundary condition on Γ₁ in (31.1), we get
$$(D^TE_r,\hat H)=(E_r,B\hat U).$$
Since 2α − 1 > 0, $H^{2\alpha-1}_{\rm loc}((0,+\infty)\times\Gamma_1)$ is compactly embedded in $L^2_{\rm loc}((0,+\infty)\times\Gamma_1)$, we get then D^T E_r = 0 for all 1 ≤ r ≤ p, namely,
$$V\subseteq\mathrm{Ker}(D^T).\tag{31.5}$$
In particular, the subspace V is contained in Ker(D^T).
Moreover, for 1 ≤ r ≤ p, we have
$$(E_r,B\hat U)=0\quad\text{on }(0,+\infty)\times\Gamma_1.\tag{31.6}$$
Now, let x_r ∈ R^{N−p}, such that
$$B^TE_r=\sum_{s=1}^p\beta_{rs}E_s+C_p^Tx_r.\tag{31.7}$$
Noting (31.2) and inserting the expression (31.7) into (31.6), it follows that
$$(x_r,C_p\hat U)=0\quad\text{on }(0,+\infty)\times\Gamma_1.$$
Once again, because of the approximate boundary controllability of the reduced system (30.15), we deduce that x_r = 0 for 1 ≤ r ≤ p. Then, we get
$$B^TE_r=\sum_{s=1}^p\beta_{rs}E_s,\qquad 1\leqslant r\leqslant p.$$
So, the subspace V is also invariant for B^T.
Finally, since dim(V) = p, by Lemma 26.1 and Proposition 30.6, we have Ker(R^T) = V. Then, by assertion (i) of Theorem 30.8, Ker(R^T) is bi-orthonormal to Ker(C_p). This achieves the proof. □

Let d be a column vector of D contained in Ker(C_p). Then d will be canceled in the product matrix C_pD, and therefore it has no effect on the reduced system (30.15). However, the vectors in Ker(C_p) may play an important role for the approximate boundary controllability. More precisely, we have the following
Theorem 31.2 Let A and B satisfy the conditions of C p -compatibility (30.10) and
(30.13), respectively. Assume that system (III) is approximately synchronizable by
p-groups under the action of a boundary control matrix D. Assume furthermore that

e1 , · · · , e p ∈ Im(D), (31.8)
where e_1, · · · , e_p are given by (30.6). Then system (III) is actually approximately null controllable.
Proof By Proposition 27.4, it is sufficient to show that the adjoint system (19.19) is D-observable. For 1 ≤ r ≤ p, applying e_r to the adjoint system (19.19) and noting φ_r = (e_r, Φ), it follows that
$$\begin{cases}\phi_r''-\Delta\phi_r+\sum_{s=1}^p\alpha_{sr}\phi_s=0 & \text{in }(0,+\infty)\times\Omega,\\ \phi_r=0 & \text{on }(0,+\infty)\times\Gamma_0,\\ \partial_\nu\phi_r+\sum_{s=1}^p\beta_{sr}\phi_s=0 & \text{on }(0,+\infty)\times\Gamma_1,\end{cases}\tag{31.9}$$
where the constant coefficients α_{sr} and β_{sr} are given by
$$Ae_r=\sum_{s=1}^p\alpha_{sr}e_s\quad\text{and}\quad Be_r=\sum_{s=1}^p\beta_{sr}e_s,\qquad 1\leqslant r\leqslant p.\tag{31.10}$$
On the other hand, noting (31.8), the D-observation (27.1) implies that
$$\phi_r\equiv 0\quad\text{on }(0,T)\times\Gamma_1\tag{31.11}$$
for 1 ≤ r ≤ p. Then, by Holmgren's uniqueness theorem (cf. Theorem 8.2 in [62]), we get
$$\phi_r\equiv 0\quad\text{in }(0,+\infty)\times\Omega\tag{31.12}$$
for 1 ≤ r ≤ p. Thus, Φ ∈ Im(C_p^T), then we can write Φ = C_p^TΨ and the adjoint system (19.19) becomes
$$\begin{cases}C_p^T\Psi''-\Delta C_p^T\Psi+A^TC_p^T\Psi=0 & \text{in }(0,+\infty)\times\Omega,\\ C_p^T\Psi=0 & \text{on }(0,+\infty)\times\Gamma_0,\\ C_p^T\partial_\nu\Psi+B^TC_p^T\Psi=0 & \text{on }(0,+\infty)\times\Gamma_1.\end{cases}\tag{31.13}$$
Noting the conditions of C_p-compatibility (30.10) and (30.13), it follows that
$$\begin{cases}C_p^T(\Psi''-\Delta\Psi+A_p^T\Psi)=0 & \text{in }(0,+\infty)\times\Omega,\\ C_p^T\Psi=0 & \text{on }(0,+\infty)\times\Gamma_0,\\ C_p^T(\partial_\nu\Psi+B_p^T\Psi)=0 & \text{on }(0,+\infty)\times\Gamma_1.\end{cases}\tag{31.14}$$
Since the mapping C_p^T is injective, we find again the reduced adjoint system (30.17). Accordingly, the D-observation (27.1) implies that
$$D^T\Phi\equiv D^TC_p^T\Psi\equiv 0\quad\text{on }[0,T]\times\Gamma_1.\tag{31.15}$$
Since system (III) is approximately synchronizable by p-groups under the action of the boundary control matrix D, by Proposition 30.4, the reduced adjoint system (30.17) for Ψ is C_pD-observable, therefore, Ψ ≡ 0, then Φ ≡ 0. So, the adjoint system (19.19) is D-observable, then by Proposition 27.4, system (III) is approximately null controllable. □
Chapter 32
Closing Remarks

Some closing remarks including related literatures and prospects are given in this
chapter.

32.1 Related Literatures

The main contents of this monograph have been published or will be published in a
series of papers written by the authors with their collaborators (cf. [3, 35–56]).
For closing this monograph, let us comment on some related literatures. One
motivation for studying the synchronization consists of establishing a weak exact
boundary controllability in the case of fewer boundary controls. In order to realize
the exact boundary controllability, because of its uniform character with respect to
the state variables, the number of boundary controls should be equal to the degrees
of freedom of the considered system. However, when the components of initial data
are allowed to have different levels of energy, the exact boundary controllability
by means of only one boundary control for a system of two wave equations was
established in [65, 72], and for the cascade system of N wave equations in [2].
In [10], the authors established the controllability of two coupled wave equations
on a compact manifold with only one local distributed control. Moreover, both the
optimal time of controllability and the controllable spaces are given in the cases with
the same or different wave speeds.
The approximate boundary null controllability is more flexible with respect to the
number of applied boundary controls. In [37, 53, 55], for a coupled system of wave
equations with Dirichlet, Neumann or Robin boundary controls, some fundamental
algebraic properties on the coupling matrices are used to characterize the unique
continuation for the solution to the corresponding adjoint systems. Although these

criteria are only necessary in general, they open an important way to the research on
the unique continuation for a system of hyperbolic partial differential equations.
In contrast with hyperbolic systems, in [4] (see also [5, 13] and the references therein), it was shown that Kalman's criterion is sufficient for the exact boundary null controllability for systems of parabolic equations. Recently, in [76], the authors have established the minimal time for a control problem related to the exact synchronization for a linear parabolic system.
The average controllability proposed in [67, 86] gives another way to deal with
the controllability with fewer controls. The observability inequality is particularly
interesting for a trial on the decay rate of approximate controllability.

32.2 Prospects

The study of exact and approximate boundary synchronizations for systems of PDEs
has just begun, and there are still many problems worthy of further research.
1. We have discussed above a coupled system of wave equations with Dirich-
let, Neumann, or coupled Robin boundary controls. In the case of coupled Robin
boundary controls, since the solution has less trace regularity, we can only obtain a
relatively complete result when the domain Ω is a parallelepiped. It is still an open
problem to get a relevant result for more general domains. Moreover, besides the
original coupling matrix A, there is one more boundary coupling matrix B, which
brings more difficulties and opportunities to the study.
2. The above research is limited to the linear situation, and the method employed is
also linear. However, the study of synchronization should be extended to the nonlinear
situation. At present, in the one-space-dimensional case, for the coupled system
of quasilinear wave equations with various boundary controls, the exact boundary
synchronization has been achieved in the framework of classical solutions (cf. [20,
68]) and there is still larger developing space for more general research.
3. In addition to the study of exact boundary controllability of hyperbolic systems
on the whole space domain (cf. [32, 62, 73]), in the recent years, due to the demand of
applications, in the one-space-dimensional situation, the research on exact boundary
controllability of hyperbolic systems at a node has been developed, which is called the
exact boundary controllability of nodal profile (cf. [16, 33, 59]). Correspondingly,
the investigation on the approximate boundary controllability of nodal profile as well
as on the exact and approximate boundary synchronizations of nodal profile should
also be carried further.
4. We can also probe into similar problems on a complicated domain formed by
networks. The corresponding results in the one-space-dimensional case on the exact
boundary controllability and on the exact boundary controllability of nodal profile
can be found in [32, 59], etc., and the synchronization on a tree-like network for a
coupled system of wave equations should be carried out.
5. Some essentially new results can be obtained if we study the phenomena of
synchronization through coupling among individuals with possibly different motion
32.2 Prospects 323

laws (governing equations), whose nature is yet to be explored. The research on the
existence of the exactly synchronizable state for a coupled system of wave equations
with different wave speeds has been initiated.
6. The stability of the exactly synchronizable state or the approximately synchro-
nizable state should be studied systematically.
7. It is worthwhile to consider the generalized exact boundary synchronization (cf.
[58] in the 1-D case and [77–80] in higher dimensional case) and the generalized
approximate boundary synchronization.
8. The coupled system of first-order linear or quasilinear hyperbolic systems has
wider implications and applications than the coupled system of wave equations, and
to do the study of similar problems on it will be of great significance, though more
difficult (cf. [68]).
9. To do similar research on the coupled system of other linear or nonlinear
evolution equations (such as beam equations, plate equations, heat equations, etc.)
will reveal many new properties and characteristics, which is also quite meaningful.
10. To extend the concept of synchronization to the case of components with dif-
ferent time delays will be more challenging and may expose quite different features.
11. The study above focuses on the exact or approximate synchronization for a
coupled system of wave equations in finite time through boundary controls. It is
also worthwhile to study the asymptotic synchronization in the linear and nonlinear
situations without any boundary control when t → +∞ (cf. [56]). This should be a
meaningful extension of the research on the asymptotic stability of the solution to
the coupled system of wave equations.
12. To dig into more practical applications of related results will further enrich
the new field of study and lead to greater promotion and influence of the subject.
References

1. F. Alabau-Boussouira, A two-level energy method for indirect boundary observability and


controllability of weakly coupled hyperbolic systems. SIAM J. Control Optim. 42, 871–904
(2003)
2. F. Alabau-Boussouira, A hierarchic multi-level energy method for the control of bidiagonal
and mixed n-coupled cascade systems of PDE’s by a reduced number of controls. Adv. Differ.
Equ. 18, 1005–1072 (2013)
3. F. Alabau-Boussouira, T.T. Li, B.P. Rao, Indirect observation and control for a coupled cascade
system of wave equations with Neumann boundary conditions, to appear
4. F. Ammar Khodja, A. Benabdallah, C. Dupaix, Null-controllability of some reaction-diffusion
systems with one control force. J. Math. Anal. Appl. 320, 928–943 (2006)
5. F. Ammar Khodja, A. Benabdallah, M. González-Burgos, L. de Teresa, Recent results on the
controllability of linear coupled parabolic problems: a survey. Math. Control Relat. Fields 1,
267–306 (2011)
6. A.V. Balakrishnan, Fractional powers of closed operators and the semigroups generated by
them. Pac. J. Math. 10, 419–437 (1960)
7. C. Bardos, G. Lebeau, J. Rauch, Sharp sufficient conditions for the observation, control, and
stabilization of waves from the boundary. SIAM J. Control Optim. 30, 1024–1064 (1992)
8. H. Brezis, Functional Analysis, Sobolev Spaces and Partial Differential Equations (Springer,
New York, 2011)
9. R. Bru, L. Rodman, H. Schneider, Extensions of Jordan bases for invariant subspaces of a
matrix. Linear Algebra Appl. 150, 209–225 (1991)
10. B. Dehman, J. Le Rousseau, M. Léautaud, Controllability of two coupled wave equations on a
compact manifold. Arch. Ration. Mech. Anal. 211, 113–187 (2014)
11. Th. Duyckaerts, X. Zhang, E. Zuazua, On the optimality of the observability inequalities for
parabolic and hyperbolic systems with potentials. Ann. Inst. H. Poincaré Anal. Non Linéaire
25, 1–41 (2008)
12. S. Ervedoza, E. Zuazua, A systematic method for building smooth controls for smooth data.
Discret. Contin. Dyn. Syst. Ser. B 14, 1375–1401 (2010)
13. E. Fernández-Cara, M. González-Burgos, L. de Teresa, Boundary controllability of parabolic
coupled equations. J. Funct. Anal. 259, 1720–1758 (2010)
14. N. Garofalo, F.-H. Lin, Monotonicity properties of variational integrals, A p weights and unique
continuation. Indiana Univ. Math. J. 35, 245–268 (1986)

15. I.C. Gohberg, M.G. Krein, Introduction to the Theory of Linear Nonselfadjoint Operators
(AMS, Providence, 1969)
16. M. Gugat, M. Hertz, V. Schleper, Flow control in gas networks: exact controllability to a given
demand. Math. Methods Appl. Sci. 34, 745–757 (2011)
17. M.L.J. Hautus, Controllability and observability conditions for linear autonomous systems.
Indag. Math. (N.S.) 31, 443–446 (1969)
18. L. Hörmander, Linear Partial Differential Operators (Springer, Berlin, 1976)
19. L. Hu, F.Q. Ji, K. Wang, Exact boundary controllability and exact boundary observability for
a coupled system of quasilinear wave equations. Chin. Ann. Math. Ser. B 34, 479–490 (2013)
20. L. Hu, T.T. Li, P. Qu, Exact boundary synchronization for a coupled system of 1-D quasilinear
wave equations. ESAIM: Control Optim. Calc. Var. 22, 1136–1183 (2016)
21. L. Hu, T.T. Li, B.P. Rao, Exact boundary synchronization for a coupled system of 1-D wave
equations with coupled boundary conditions of dissipative type. Commun. Pure Appl. Anal.
13, 881–901 (2014)
22. Ch. Huygens, Oeuvres Complètes, vol. 15 (Swets & Zeitlinger B.V., Amsterdam, 1967)
23. R.E. Kalman, Contributions to the theory of optimal control. Bol. Soc. Mat. Mex. 5, 102–119
(1960)
24. T. Kato, Fractional powers of dissipative operators. J. Math. Soc. Jpn. 13(3), 246–274 (1961)
25. T. Kato, Perturbation Theory for Linear Operators. Grundlehren der Mathematischen Wis-
senschaften, vol. 132 (Springer, Berlin, 1976)
26. V. Komornik, Exact Controllability and Stabilization, the Multiplier Method (Masson, Paris,
1994)
27. V. Komornik, P. Loreti, Observability of compactly perturbed systems. J. Math. Anal. Appl.
243, 409–428 (2000)
28. V. Komornik, P. Loreti, Fourier Series in Control Theory. Springer Monographs in Mathematics
(Springer, New York, 2005)
29. I. Lasiecka, R. Triggiani, Trace regularity of the solutions of the wave equation with homo-
geneous Neumann boundary conditions and data supported away from the boundary. J. Math.
Anal. Appl. 141, 49–71 (1989)
30. I. Lasiecka, R. Triggiani, Sharp regularity for mixed second-order hyperbolic equations of
Neumann type, I. L2 non-homogeneous data. Ann. Mat. Pura Appl. 157, 285–367 (1990)
31. I. Lasiecka, R. Triggiani, Regularity theory of hyperbolic equations with non-homogeneous
Neumann boundary conditions II, general boundary data. J. Differ. Equ. 94, 112–164 (1991)
32. T.T. Li, Controllability and Observability for Quasilinear Hyperbolic Systems. AIMS on Ap-
plied Mathematics, vol. 3 (American Institute of Mathematical Sciences and Higher Education
Press, Springfield and Beijing, 2010)
33. T.T. Li, Exact boundary controllability of nodal profile for quasilinear hyperbolic systems.
Math. Methods Appl. Sci. 33, 2101–2106 (2010)
34. T.T. Li, Exact boundary synchronization for a coupled system of wave equations, in Differential
Geometry, Partial Differential Equations and Mathematical Physics, ed. by M. Ge, J. Hong,
T. Li, W. Zhang (World Scientific, Singapore, 2014), pp. 219–241
35. T.T. Li, From phenomena of synchronization to exact synchronization and approximate syn-
chronization for hyperbolic systems. Sci. China Math. 59, 1–18 (2016)
36. T.T. Li, X. Lu, B.P. Rao, Exact boundary synchronization for a coupled system of wave equa-
tions with Neumann boundary controls. Chin. Ann. Math. Ser. B 39, 233–252 (2018)
37. T.T. Li, X. Lu, B.P. Rao, Approximate boundary null controllability and approximate boundary
synchronization for a coupled system of wave equations with Neumann boundary controls, in
Contemporary Computational Mathematics–A Celebration of the 80th Birthday of Ian Sloan,
vol. II, ed. by J. Dich, F.Y. Kuo, H. Wozniakowski (Springer, Berlin, 2018), pp. 837–868
38. T.T. Li, X. Lu, B.P. Rao, Exact boundary controllability and exact boundary synchronization
for a coupled system of wave equations with coupled Robin boundary controls, to appear
39. T.T. Li, B.P. Rao, Asymptotic controllability for linear hyperbolic systems. Asymptot. Anal.
72, 169–187 (2011)
40. T.T. Li, B.P. Rao, Contrôlabilité asymptotique de systèmes hyperboliques linéaires. C. R. Acad.
Sci. Paris Ser. I 349, 663–668 (2011)
41. T.T. Li, B.P. Rao, Synchronisation exacte d’un système couplé d’équations des ondes par des
contrôles frontières de Dirichlet. C. R. Acad. Sci. Paris Ser. 1 350, 767–772 (2012)
42. T.T. Li, B.P. Rao, Exact synchronization for a coupled system of wave equations with Dirichlet
boundary controls. Chin. Ann. Math. 34B, 139–160 (2013)
43. T.T. Li, B.P. Rao, Contrôlabilité asymptotique et synchronisation asymptotique d’un système
couplé d'équations des ondes avec des contrôles frontières de Dirichlet. C. R. Acad. Sci. Paris
Ser. I 351, 687–693 (2013)
44. T.T. Li, B.P. Rao, Asymptotic controllability and asymptotic synchronization for a coupled
system of wave equations with Dirichlet boundary controls. Asymptot. Anal. 86, 199–226
(2014)
45. T.T. Li, B.P. Rao, Sur l’état de synchronisation exacte d’un système couplé d’équations des
ondes. C. R. Acad. Sci. Paris Ser. I 352, 823–829 (2014)
46. T.T. Li, B.P. Rao, On the exactly synchronizable state to a coupled system of wave equations.
Port. Math. 72 2–3, 83–100 (2015)
47. T.T. Li, B.P. Rao, Critères du type de Kálmán pour la contrôlabilité approchée et la synchro-
nisation approchée d’un système couplé d’équations des ondes. C. R. Acad. Sci. Paris Ser. 1
353, 63–68 (2015)
48. T.T. Li, B.P. Rao, A note on the exact synchronization by groups for a coupled system of wave
equations. Math. Methods Appl. Sci. 38, 241–246 (2015)
49. T.T. Li, B.P. Rao, Exact synchronization by groups for a coupled system of wave equations
with Dirichlet boundary control. J. Math. Pures Appl. 105, 86–101 (2016)
50. T.T. Li, B.P. Rao, Criteria of Kalman’s type to the approximate controllability and the approxi-
mate synchronization for a coupled system of wave equations with Dirichlet boundary controls.
SIAM J. Control Optim. 54, 49–72 (2016)
51. T.T. Li, B.P. Rao, Une nouvelle approche pour la synchronisation approchée d’un système
couplé d’équations des ondes: contrôles directs et indirects. C. R. Acad. Sci. Paris Ser. I 354,
1006–1012 (2016)
52. T.T. Li, B.P. Rao, Exact boundary controllability for a coupled system of wave equations with
Neumann boundary controls. Chin. Ann. Math. Ser. B 38, 473–488 (2017)
53. T.T. Li, B.P. Rao, On the approximate boundary synchronization for a coupled system of wave
equations: direct and indirect controls. ESIAM: Control Optim. Calc. Var. 24, 1675–1704
(2018)
54. T.T. Li, B.P. Rao, Kalman’s criterion on the uniqueness of continuation for the nilpotent system
of wave equations. C. R. Acad. Sci. Paris Ser. I 356, 1188–1194 (2018)
55. T.T. Li, B.P. Rao, Approximate boundary synchronization for a coupled system of wave equa-
tions with coupled Robin boundary conditions, to appear
56. T.T. Li, B.P. Rao, Unique continuation for elliptic operators and application to the asymptotic
synchronization of second order evolution equations, to appear
57. T.T. Li, B.P. Rao, L. Hu, Exact boundary synchronization for a coupled system of 1-D wave
equations. ESAIM: Control Optim. Calc. Var. 20, 339–361 (2014)
58. T.T. Li, B.P. Rao, Y.M. Wei, Generalized exact boundary synchronization for second order
evolution systems. Discret. Contin. Dyn. Syst. 34, 2893–2905 (2014)
59. T.T. Li, K. Wang, Q. Gu, Exact Boundary Controllability of Nodal Profile for Quasilinear
Hyperbolic Systems. Springer Briefs in Mathematics (Springer, Berlin, 2016)
60. J.-L. Lions, Equations Différentielles Opérationnelles et Problèmes aux Limites. Grundlehren,
vol. 111 (Springer, Berlin, 1961)
61. J.-L. Lions, Exact controllability, stabilization and perturbations for distributed systems. SIAM
Rev. 30, 1–68 (1988)
62. J.-L. Lions, Contrôlabilité exacte, Perturbations et Stabilisation de Systèmes Distribués, vol.
1 (Masson, Paris, 1988)
63. J.-L. Lions, Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires
(Dunod, Gauthier-Villars, Paris, 1969)
64. J.-L. Lions, E. Magenes, Problèmes aux Limites non Homogènes et Applications, vol. 1 (Dunod,
Paris, 1968)
65. Z.Y. Liu, B.P. Rao, A spectral approach to the indirect boundary control of a system of weakly
coupled wave equations. Discret. Contin. Dyn. Syst. 23, 399–414 (2009)
66. Z.Y. Liu, S.M. Zheng, Semigroups Associated with Dissipative Systems, vol. 398 (CRC Press,
Boca Raton, 1999)
67. Q. Lü, E. Zuazua, Averaged controllability for random evolution partial differential equations.
J. Math. Pures Appl. 105, 367–414 (2016)
68. X. Lu, Local exact boundary synchronization for a kind of first order quasilinear hyperbolic
systems. Chin. Ann. Math. Ser. B 40, 79–96 (2019)
69. M. Mehrenberger, Observability of coupled systems. Acta Math. Hung. 103, 321–348 (2004)
70. A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations
(Springer, New York, 1983)
71. A. Pikovsky, M. Rosenblum, J. Kurths, Synchronization: A Universal Concept in Nonlinear
Sciences (Cambridge University Press, Cambridge, 2001)
72. L. Rosier, L. de Teresa, Exact controllability of a cascade system of conservation equations.
C. R. Acad. Sci. Paris 1(349), 291–296 (2011)
73. D.L. Russell, Controllability and stabilization theory for linear partial differential equations:
recent progress and open questions. SIAM Rev. 20, 639–739 (1978)
74. S. Strogatz, SYNC: The Emerging Science of Spontaneous Order (THEIA, New York, 2003)
75. F. Trèves, Basic Linear Partial Differential Equations. Pure and Applied Mathematics, vol. 62
(Academic, New York, 1975)
76. L.J. Wang, Q.S. Yan, Minimal time control of exact synchronization for parabolic systems
(2018). arXiv:1803.00244vl
77. Y.Y. Wang, Generalized exact boundary synchronization for a coupled system of wave equations
with Dirichlet boundary controls, to appear in Chin. Ann. Math. Ser. B
78. Y.Y. Wang, On the generalized exact boundary synchronization for a coupled system of wave
equations, Math. Methods Appl. Sci. 42, 7011–7029 (2019)
79. Y.Y. Wang, Induced generalized exact boundary synchronization for a coupled system of wave
equations, to appear in Appl. Math. J. Chin. Univ
80. Y.Y. Wang, Determination of the generalized exact boundary synchronization matrix for a
coupled system of wave equations, to appear in Front. Math. China
81. N. Wiener, Cybernetics, or Control and Communication in the Animal and the Machine, 2nd
edn. (The M.I.T. Press/Wiley, Cambridge/New York, 1961)
82. P.F. Yao, On the observability inequalities for exact controllability of wave equations with
variable coefficients. SIAM J. Control Optim. 37, 1568–1599 (1999)
83. R.M. Young, An Introduction to Nonharmonic Fourier Series (Academic, San Diego, 2001)
84. X. Zhang, E. Zuazua, A sharp observability inequality for Kirchhoff plate systems with poten-
tials. Comput. Appl. Math. 25, 353–373 (2006)
85. E. Zuazua, Exact controllability for the semilinear wave equation. J. Math. Pures Appl. 69,
1–31 (1990)
86. E. Zuazua, Averaged control. Autom. J. IFAC 50, 3077–3087 (2014)
Index

A Approximately synchronizable state, 13,


Adjoint problem, 10, 12, 35, 84, 87, 89, 94, 218, 219, 302, 323
103, 108, 112, 113, 123, 130, 140, Approximately synchronizable state by p
156, 158, 167, 174, 202, 204, 208, groups, 122, 124, 185, 315
211, 213, 217, 225, 234, 236 Asymptotically synchronizable state, 3
Adjoint system, 4, 35, 87, 88, 90, 96, 151, Asymptotic stability, 323
209, 219, 234, 281, 282, 302, 303, Asymptotic synchronization, 2, 323
310, 318 Asymptotic synchronization in the consen-
Adjoint variables, 156 sus sense, 3
Asymptotic synchronization in the pinning
Admissible set of all boundary controls, 38
sense, 3
Approximate boundary controllability of Attainable set, 7, 49, 69, 177, 184
nodal profile, 322 Attainable set of exactly synchronizable
Approximate boundary null controllability, states, 49, 177
9–12, 16, 81, 86, 87, 97, 114, 124, Attainable set of exactly synchronizable
126, 130, 140, 148, 201, 202, 207, states by p groups, 183
209, 321 Average controllability, 322
Approximate boundary synchronization, 9,
12, 106, 112, 217, 218, 297, 305, 313,
322 B
Approximate boundary synchronization by Backward problem, 39, 40, 47, 49, 82, 168,
groups, 150 177, 204
Banach’s theorem of closed graph, 40
Approximate boundary synchronization by
Beam equations, 323
2-groups, 147
Bi-orthonormal, 20, 29, 67, 75, 112, 122,
Approximate boundary synchronization by 140, 187, 192, 196, 197, 227, 267,
p-groups, 13, 116, 119, 125, 133, 272, 309, 313
135, 138, 151, 223, 226, 305, 313 Bi-orthonormal basis, 54
Approximately null controllable, 10, 11, 81, Bi-orthonormality, 19, 123, 187
83, 84, 86, 103, 106, 114, 130, 132, Boundary control, 3, 5, 6, 9–11, 13, 15, 34,
204, 282, 301, 308, 310, 318 40, 44, 45, 47, 52, 57, 60, 63, 66, 70,
Approximately synchronizable, 12, 13, 106, 71, 75, 77, 83, 84, 87, 106, 123–125,
108, 110, 126, 215, 218, 219 133, 138, 157, 170, 174, 175, 179,
Approximately synchronizable by p groups, 181, 186–188, 197, 202, 204, 215,
15, 117, 119, 130, 134, 226, 305, 315, 219, 223, 243, 245, 247, 255, 258,
319 267, 283, 297, 305, 309, 319, 321


Boundary control matrix, 2, 6, 9, 13, 14, 47, 53, 56, 63, 66, 67, 70, 77, 86, 91, 108, 123, 138, 141, 145, 171, 187, 196, 197, 219, 220, 226, 247, 257, 267, 271, 309, 317
Boundary coupling matrix, 17, 261, 286, 322

C
C0 groups, 37
C1 D-observability, 112
C1 D-observable, 12, 108, 217
C1 D-observation, 113, 221, 302
Cp D-observability, 123
Cp D-observable, 117, 131, 225, 310, 319
Cp D-observation, 123, 307
Cq∗ D-observability, 141
Cq∗ D-observable, 141
Cq∗ D-observation, 140
Carleman’s unique continuation, 36
Cascade matrix, 89, 146
Cascade system, 11, 171, 212, 287, 321
Cauchy–Schwarz inequality, 289
Cayley–Hamilton’s theorem, 24, 26, 118, 277
Classical solutions, 4, 38, 322
Compactness-uniqueness argument, 156, 166
Compact operator, 37
Compact perturbation, 4, 166, 170, 242
Compact support, 3, 8, 10, 12, 13, 34, 43, 59, 66, 81, 83, 105, 115, 125, 127, 134, 174, 179, 201, 203, 240, 243, 282, 297
Condition of C1-compatibility, 6, 12, 107, 126, 176, 177, 192, 216, 218, 219, 226, 307
Condition of C2-compatibility, 146, 263
Condition of C3-compatibility, 193
Condition of Cp-compatibility, 8, 14, 25, 63, 68–71, 75, 117, 119, 127, 131, 133, 135, 138, 182, 187, 188, 225, 226, 259, 306
Condition of compatibility, 5, 45, 48, 49, 53, 61, 63, 107, 175, 182
Consensus sense, 2, 13, 14, 16, 112, 122, 147, 218, 227, 313
Conservation of energy, 288
Controllability, 5, 9, 81, 321
Controllable part, 51, 74, 190, 191
Cosine operator, 232
Coupled Robin boundary conditions, 2, 17, 231, 234, 261
Coupled Robin boundary controls, 17, 171, 240, 243, 244, 322
Coupled system of 1-D wave equations, 38
Coupled system of first-order linear or quasilinear hyperbolic systems, 323
Coupled system of quasilinear wave equations, 322
Coupled system of wave equations, 2, 4, 16, 33, 42, 69, 174, 183, 207, 234, 321, 322
Coupled system of wave equations with different wave speeds, 323
Coupling matrix, 2, 4, 5, 8, 11, 12, 17, 34, 44, 49, 61, 66, 73, 86, 90, 100, 108, 119, 135, 144, 145, 156, 165, 171, 180, 182, 186, 207, 216, 218, 220, 224, 259, 263, 287, 322
Criterion of Kalman’s type, 217, 226

D
D0-observable, 91, 95
D-observability, 10, 12, 82, 89, 97, 202, 208, 281, 285
D-observable, 10, 84, 85, 87, 91, 96, 101, 103, 128, 202, 205, 212, 281, 319
D-observation, 85, 92, 131, 211, 281, 282, 318
Diagonalizable by blocks, 22, 54, 112, 123
Diagonalizable systems, 97
Dimension equality, 19
Direct boundary controls, 113, 124
Direct inequality, 170
Dirichlet boundary conditions, 2, 17, 231
Dirichlet boundary controls, 16, 59, 157, 171, 173, 185, 189, 207, 209
Dirichlet observation, 212

E
Energy, 160, 171, 288
Energy estimates, 160
Energy space, 37, 157, 166, 171
Exact boundary controllability, 34, 128, 157, 166, 231, 239, 243, 249, 269, 321, 322
Exact boundary controllability of nodal profile, 322
Exact boundary null controllability, 4, 6, 10, 34, 40, 44, 48, 52, 57, 64, 66, 72, 86, 113, 166, 176, 201, 246, 258
Exact boundary synchronization, 3, 4, 7, 12, 14, 44, 47, 48, 52, 59, 75, 107, 174, 176, 179, 194, 216, 231, 243, 247, 322
Exact boundary synchronization by groups, 59
Exact boundary synchronization by 2-groups, 76, 263
Exact boundary synchronization by 3-groups, 194
Exact boundary synchronization by p-groups, 7, 61, 63, 66, 69, 75, 179, 185, 186, 258, 267
Exact controllability of systems of ODEs, 87
Exactly controllable, 157, 239, 242, 269
Exactly null controllable, 34, 41, 55, 59, 66, 168, 175, 183, 185, 192, 242, 244, 258
Exactly synchronizable, 5, 44, 47, 59, 174, 176, 177, 191, 193, 243, 246, 247, 252
Exactly synchronizable by 2-groups, 12, 53, 263
Exactly synchronizable by p-groups, 8, 64, 66, 74, 180, 183, 186, 191, 255, 256, 258, 262, 267, 271
Exactly synchronizable state, 3, 7, 51, 57, 174, 175, 177, 192, 244, 247
Exactly synchronizable state by 2-groups, 77
Exactly synchronizable state by 3-groups, 196
Exactly synchronizable state by p-groups, 8, 69, 179, 183, 186, 187, 193, 255, 267, 271
Existence of the exactly synchronizable state, 323

F
Fewer boundary controls, 5, 10, 59, 157, 169, 241, 247, 321
Finite-dimensional dynamical systems of ordinary differential equations, 2
Finite speed of wave propagation, 87
Fractional power, 232
Framework of classical solutions, 4, 38, 322
Fréchet derivative, 249, 315
Fredholm’s alternative, 164

G
Gap condition, 114, 124
Generalized approximate boundary synchronization, 323
Generalized exact boundary synchronization, 323
Generalized Ingham’s inequality (Ingham-type theorem), 97
Geometrical multiplicity, 143, 144

H
Hautus test, 24
Heat equations, 323
Hidden regularity, 128, 231
Higher dimensional case, 101, 287, 294, 323
Hilbert basis, 101, 294
Hilbert Uniqueness Method (HUM), 4, 34, 155, 166
Holmgren’s uniqueness theorem, 10, 85, 89, 113, 123, 131, 159, 206, 217, 245, 262, 282, 286, 301, 310, 318

I
Induced approximate boundary synchronization, 15, 133, 138, 148
Induced approximately synchronizable, 16, 133, 137, 148
Induced extension matrix, 15, 27, 133
Infinite-dimensional dynamical systems of partial differential equations (PDEs), 2
Ingham’s theorem, 98
Internal coupling matrix, 259, 261
Interpolation inequality, 235
Invariant subspace, 6, 22–24, 38, 46, 53, 70, 73, 76, 86, 102, 107, 110, 118, 135, 139, 143, 164, 187, 197, 212, 224, 227
Inverse inequality, 170

J
Jordan block, 89, 222
Jordan chain, 27, 50, 76, 134, 143, 192, 194
Jordan’s theorem, 28
Jordan subspaces, 28

K
Kalman’s criterion, 11, 14, 23, 86, 89, 96, 97, 102, 114, 117, 124, 138, 145, 171, 207, 211, 213, 277, 322
Kalman’s rank condition, 24

L
Lack of boundary controls, 5, 43, 103, 173, 242, 255
Largest geometrical multiplicity of the eigenvalues, 143
Lax–Milgram’s Lemma, 169
Linear parabolic system, 322
Lions’ compact embedding theorem, 242

M
Marked subspace, 26, 74, 77, 194
Matrix of synchronization by 2-groups, 76
Matrix of synchronization by p-groups, 8, 60, 116, 119, 180, 224, 256, 257, 306
Maximal dissipative, 232
Minimal number of total controls, 15, 108, 109, 119, 135, 139, 142, 147, 218
Minimal rank condition, 13–15, 65, 111, 112, 119, 133, 139, 222, 226, 302
Mixed initial-boundary value problem, 60, 174
Moore–Penrose (generalized) inverse, 25, 46
Multiplier approach, 287
Multiplier geometrical condition, 4, 9, 11, 33, 42, 48, 68, 85, 91, 95, 141, 158, 206, 221, 289

N
Networks, 322
Neumann, 2, 17, 316, 321
Neumann boundary condition, 174, 189, 231, 237, 287
Neumann boundary controls, 17, 78, 157, 170, 181, 189, 201, 207, 240, 255
Nilpotent, 89
Nilpotent matrix, 89, 141
Nilpotent system, 11, 89, 113
Non-approximate boundary null controllability, 107
Non-exact boundary controllability, 33, 39, 157, 169, 170, 240, 243, 245
Non-exact boundary null controllability, 33, 45
Non-exact boundary synchronization, 43, 47, 173, 176
Non-exact boundary synchronization by p-groups, 182
Nonharmonic series, 97
Number of direct boundary controls, 11, 87, 207
Number of indirect controls, 87
Number of the total (direct and indirect) controls, 11, 118, 133, 138, 142, 151, 207, 218, 223, 226

O
Observability, 37, 156, 166
Observability inequality, 4, 158, 160
Observability of compactly perturbed systems, 34
Observation, 85, 87, 89, 95, 101, 103, 113, 123, 128, 140, 206, 209, 213, 217, 225, 281, 307, 310
Observation of finite horizon, 292
Observation on the infinite horizon/time interval, 208, 265, 288
ω-linearly independent, 97, 98, 295
One-space-dimensional case, 4, 97, 100, 209, 292, 322
One-space-dimensional systems, 11, 113, 124
Orthogonal complement, 21, 166, 204

P
Parallelepiped, 232, 239–241, 244, 256, 260, 316, 322
Phenomenon of synchronization, 1
Pinning sense, 2, 13, 14, 16, 112, 122, 123, 138, 218, 302, 313
Plate equations, 323

R
Rank-nullity theorem, 23, 278
Real symmetric matrix, 29, 234, 236, 239, 242, 244, 251, 263, 267, 286, 302, 307
Reduced adjoint problem, 12, 108, 123, 131, 140, 217, 225
Reduced matrix, 6, 8, 14, 141, 144, 146, 246, 302, 306
Reduced matrix of A by C1, 6, 46, 107, 175, 298
Reduced matrix of A by Cp, 25, 65, 116, 225, 306
Reduced system, 6, 12, 16, 48, 57, 64, 66, 72, 107, 113, 117, 123, 132, 140, 147, 176, 183, 185, 216, 221, 246, 249, 269, 302, 307, 308, 310, 316
Rellich’s identity, 288
Riesz basis, 37, 101, 166, 210
Riesz–Fréchet’s representation theorem, 167, 236
Riesz sequence, 98
Robin boundary conditions, 2, 17, 231, 234, 261, 287
Row-sum condition, 5, 45, 65, 106, 174, 176, 182, 216, 298
Row-sum condition by blocks, 8, 44, 65, 182, 261

S
Schur’s theorem, 159
Semi-simple, 26
Skew-adjoint operator with compact, 37
Stability of the exactly synchronizable state, 323
State variables, 2–4, 7, 11, 15, 86, 87, 157, 171, 239, 321
Stronger version of the approximate boundary synchronization by p-groups, 125
Strongly marked subspace, 26
Supplement, 14, 15, 19, 73, 112, 123, 133, 137, 139, 145, 227
Synchronizable part, 52–54, 74, 190–193
Synchronizable state, 45, 122
Synchronization, 1, 2, 4–7, 44, 45, 321, 322
Synchronization for systems governed by ODEs, 7
Synchronization matrix, 6, 46, 105, 109, 215, 218, 297
Synchronization on a tree-like network, 322
System of root vectors, 29, 37, 38, 141, 147, 149
Systems described by ordinary differential equations (ODEs), 1
Systems of parabolic equations, 322

T
2 × 2 systems, 11, 95, 113, 124
Total (direct and indirect) controls, 13, 14, 86, 87, 108, 109, 118, 119, 133, 138, 141, 147, 207, 218, 226

U
Unique continuation, 35, 85, 97, 212, 265, 285–287
Upper density of a sequence, 98, 102

V
Variational form, 235
Variational problem, 235

W
Weak exact boundary controllability, 321

Z
Zero-sum condition by blocks, 261