
ECE4520/5520: Multivariable Control Systems I.


STABILITY

4.1: Vector norms and quadratic forms

■ Loosely speaking, a system is said to be stable if small perturbations
  to its initial state or input result in small changes to how the system's
  state or output evolves over time.

  • Sometimes, a system will be designed intentionally to be open-loop
    unstable (why?), and then require a controller to stabilize it.

■ Mathematically, we find that there are a number of ways to describe
  system stability properties, and we will discuss many of them here.

■ First, though, we need to review matrix norms: We need to be able to
  quantify gains that lead to “small” or “large” changes in a system's
  response.

NORM: A measure of length or gain.

■ Properties of a vector norm, where x, y are vectors and $\alpha$ is a scalar:

$$ \|x\| \geq 0 $$
$$ \|x\| = 0 \text{ iff } x = 0 $$
$$ \|\alpha x\| = |\alpha|\,\|x\| $$
$$ \|x + y\| \leq \|x\| + \|y\|. $$

Vector p-norms

■ The p-norm of a vector is defined as

$$ \|x\|_p = \lim_{\alpha\to p} \left( \sum_{k=1}^{n_x} |x_k|^\alpha \right)^{1/\alpha}. $$

■ One common example is the (Euclidean) 2-norm

$$ \|x\| = \|x\|_2 = \sqrt{x_1^2 + \cdots + x_n^2} = \sqrt{x^T x}. $$

  • $\|x\|$ measures the length of the vector (from the origin).

■ Another common vector norm is the $\infty$-norm

$$ \|x\|_\infty = \lim_{\alpha\to\infty} \left( \sum_{k=1}^{n_x} |x_k|^\alpha \right)^{1/\alpha} = \max_k |x_k|. $$

■ A third common vector norm is the 1-norm

$$ \|x\|_1 = \lim_{\alpha\to 1} \left( \sum_{k=1}^{n_x} |x_k|^\alpha \right)^{1/\alpha} = \sum_{k=1}^{n_x} |x_k|. $$
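■ As a quick numerical sanity check of these definitions, the following MATLAB
  sketch compares them against the built-in norm() function (plain MATLAB, no
  toolboxes assumed; the vector x is an arbitrary example):

    % Compare the p-norm formulas above against MATLAB's built-in norm().
    x = [3; -4; 12];

    n2   = sqrt(x'*x);      % 2-norm: sqrt(x'x) = 13
    n1   = sum(abs(x));     % 1-norm: sum of absolute values = 19
    ninf = max(abs(x));     % inf-norm: largest absolute entry = 12

    fprintf('2-norm:   %g vs %g\n', n2,   norm(x, 2));
    fprintf('1-norm:   %g vs %g\n', n1,   norm(x, 1));
    fprintf('inf-norm: %g vs %g\n', ninf, norm(x, Inf));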

Quadratic forms

■ The machinery required to understand matrix norms is a little more
  complicated.

■ We spend some time studying some relevant linear algebra first.

Eigenvalues and eigenvectors of symmetric matrices (i.e., $A = A^T$)

FACT: The eigenvalues of a symmetric matrix $A \in \mathbb{R}^{n\times n}$ are real.

■ To see this, suppose $Av = \lambda v$, $v \neq 0$, $v \in \mathbb{C}^n$.

  • Then, $(v^*)^T A v = (v^*)^T (Av) = \lambda (v^*)^T v = \lambda \sum_{i=1}^n |v_i|^2 = \lambda \|v\|_2^2$.

  • Also, $(v^*)^T A v = ((Av)^*)^T v = ((\lambda v)^*)^T v = \lambda^* \sum_{i=1}^n |v_i|^2 = \lambda^* \|v\|_2^2$.

■ So, $\lambda = \lambda^*$, which means that $\lambda \in \mathbb{R}$ (hence, we can assume $v \in \mathbb{R}^n$).

FACT: There is a set of orthonormal eigenvectors of A; i.e., $q_1, \ldots, q_n$
such that $A q_i = \lambda_i q_i$ and $q_i^T q_j = \delta_{ij}$. We'll show this for $\lambda_i$ distinct.

■ Suppose $v_1, \ldots, v_n$ is a set of linearly independent eigenvectors of A.

■ That is, suppose $A v_i = \lambda_i v_i$, $\|v_i\| = 1$.

■ Then, we have $v_i^T (A v_j) = \lambda_j v_i^T v_j = (A v_i)^T v_j = \lambda_i v_i^T v_j$.

■ This gives $(\lambda_i - \lambda_j)\, v_i^T v_j = 0$ for $i \neq j$ and $\lambda_i \neq \lambda_j$. Hence, $v_i^T v_j = 0$.

  • In this case, we can say that the eigenvectors are orthogonal.

  • In the general case ($\lambda_i$ not distinct) we must say that the
    eigenvectors can be chosen to be orthogonal.

■ Grouping the eigenvectors into an (orthogonal) matrix Q allows us to
  write $Q^{-1} A Q = Q^T A Q = \Lambda$.

■ Hence, we can express A as $A = Q \Lambda Q^T = \sum_{i=1}^n \lambda_i q_i q_i^T$.

Quadratic forms

■ A function $f: \mathbb{R}^n \to \mathbb{R}$ of the form

$$ f(x) = x^T A x = \sum_{i,j} A_{ij} x_i x_j $$

  is called a quadratic form.



■ In a quadratic form, we may assume A is symmetric: since the scalar
  $y = x^T A x$ satisfies $y = y^T$,

$$ y = x^T A x = x^T A^T x = x^T \bigl( (A + A^T)/2 \bigr) x, $$

  where $(A + A^T)/2$ is called the symmetric part of A.

■ Suppose that $A = A^T$ and $A = Q \Lambda Q^T$ with eigenvalues sorted so
  $\lambda_1 \geq \cdots \geq \lambda_n$. Then,

$$ x^T A x = x^T Q \Lambda Q^T x = (Q^T x)^T \Lambda (Q^T x) = \sum_{i=1}^n \lambda_i (q_i^T x)^2 $$
$$ \leq \lambda_1 \sum_{i=1}^n (q_i^T x)^2 = \lambda_1 (Q^T x)^T I (Q^T x) = \lambda_1 x^T x = \lambda_1 \|x\|^2. $$

■ That is, we have $x^T A x \leq \lambda_1 x^T x$.

■ By a similar argument, we can find that $x^T A x \geq \lambda_n \|x\|^2$, so we have

$$ \lambda_n x^T x \leq x^T A x \leq \lambda_1 x^T x. $$

■ Sometimes $\lambda_1$ is called $\lambda_{\max}$ and $\lambda_n$ is called $\lambda_{\min}$.

■ Note also that $q_1^T A q_1 = \lambda_1 \|q_1\|^2$ and $q_n^T A q_n = \lambda_n \|q_n\|^2$, so the
  inequalities are tight.
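■ A small numerical check of these bounds (a sketch; the symmetric matrix
  and test vector below are arbitrary examples, and eig() supplies the
  eigenvalues and eigenvectors used in the argument above):

    % Verify lambda_min*||x||^2 <= x'Ax <= lambda_max*||x||^2.
    rng(1);                              % repeatable example
    M = randn(4);  A = (M + M')/2;       % symmetric part of a random matrix
    [Q, lam] = eig(A, 'vector');         % eigenvectors Q, eigenvalues lam

    x = randn(4, 1);
    fprintf('%g <= %g <= %g\n', min(lam)*(x'*x), x'*A*x, max(lam)*(x'*x));

    % The bounds are tight in the eigenvector directions:
    [~, imax] = max(lam);  q1 = Q(:, imax);
    fprintf('q1''*A*q1 = %g, lambda_max = %g\n', q1'*A*q1, max(lam));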

4.2: Matrix gain

■ Suppose now that $A \in \mathbb{R}^{m\times n}$ (not necessarily square or symmetric).

■ For $x \in \mathbb{R}^n$, $\|Ax\|/\|x\|$ gives the amplification factor or gain of A in
  the direction x.

■ Since this gain varies with the direction of the input x, we might be
  interested in knowing:

  • What is the maximum gain and corresponding gain direction of A?

  • What is the minimum gain of A (and corresponding gain direction)?

  • How does the gain of A vary with direction?

■ The max. gain is called the matrix norm of A, and is denoted $\|A\|$.

■ In particular, the matrix p-norm is defined as

$$ \|A\|_p = \sup_{x\neq 0} \frac{\|Ax\|_p}{\|x\|_p}. $$

■ For $p = 2$, we have already developed the machinery to evaluate the
  norm. Consider first the norm squared:

$$ \sup_{x\neq 0} \frac{\|Ax\|_2^2}{\|x\|_2^2} = \sup_{x\neq 0} \frac{x^T A^T A x}{\|x\|_2^2} = \lambda_{\max}(A^T A), $$

  so we have $\|A\|_2 = \sqrt{\lambda_{\max}(A^T A)}$.

■ Similarly, the min. gain is given by $\inf_{x\neq 0} \|Ax\|_2/\|x\|_2 = \sqrt{\lambda_{\min}(A^T A)}$.

■ Note that $A^T A \in \mathbb{R}^{n\times n}$ is symmetric and $A^T A \geq 0$, so $\lambda_{\min}$ and $\lambda_{\max} \geq 0$.

  • The “max gain” input direction is $x = q_1$, the eigenvector of $A^T A$
    associated with $\lambda_{\max}$.

  • The “min gain” input direction is $x = q_n$, the eigenvector of $A^T A$
    associated with $\lambda_{\min}$.
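■ The following MATLAB sketch computes the 2-norm and the extreme-gain
  directions this way for an arbitrary example matrix (plain MATLAB; the
  built-in norm(A,2) is used only as a cross-check):

    % Matrix 2-norm and extreme-gain directions via the eigensystem of A'A.
    A = [2 0; 1 1; 0 3];                   % 3x2, neither square nor symmetric
    [Q, lam] = eig(A'*A, 'vector');        % A'A is symmetric, so lam is real and >= 0
    [lmax, imax] = max(lam);  [lmin, imin] = min(lam);

    fprintf('||A||_2  = %g (built-in: %g)\n', sqrt(lmax), norm(A, 2));
    fprintf('min gain = %g\n', sqrt(lmin));

    q1 = Q(:, imax);  qn = Q(:, imin);     % max- and min-gain input directions
    fprintf('gain along q1: %g, along qn: %g\n', norm(A*q1), norm(A*qn));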
■ For $p = 1$, it can be shown that

$$ \|A\|_1 = \max_{1\leq j\leq n} \sum_{i=1}^m |a_{ij}|, $$

  which is simply the maximum absolute column sum of the matrix.

■ For $p = \infty$, it can be shown that

$$ \|A\|_\infty = \max_{1\leq i\leq m} \sum_{j=1}^n |a_{ij}|, $$

  which is the maximum absolute row sum of the matrix.

Matrix 2-norms via SVD

■ Finding matrix 1- and $\infty$-norms is quite simple. Finding the 2-norm is
  more difficult because of the need to find eigenvalues and eigenvectors.

■ We can replace this step with one that is computationally simpler by
  using the singular value decomposition (SVD) of $A \in \mathbb{R}^{m\times n}$, where
  $\mathrm{rank}(A) = r$:

$$ A = U \Sigma V^T. $$

  • $U = [u_1, \ldots, u_r] \in \mathbb{R}^{m\times r}$, and $U^T U = I$, and the $u_i$ are the left or
    output singular vectors of A.

  • $V = [v_1, \ldots, v_r] \in \mathbb{R}^{n\times r}$, and $V^T V = I$, and the $v_i$ are the right or
    input singular vectors of A.

  • $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_r)$ where $\sigma_1 \geq \cdots \geq \sigma_r > 0$, and the $\sigma_i$ are the
    (nonzero) singular values of A.

■ The above is called a compact SVD. Most often, we compute a full
  SVD, where

  • $U = [u_1, \ldots, u_m] \in \mathbb{R}^{m\times m}$, and $U^T U = I$,

  • $V = [v_1, \ldots, v_n] \in \mathbb{R}^{n\times n}$, and $V^T V = I$,

  • The matrix $\Sigma \in \mathbb{R}^{m\times n}$ is “diagonal”

$$ \Sigma = \begin{bmatrix} \sigma_1 & & & 0 \\ & \ddots & & \vdots \\ & & \sigma_m & 0 \end{bmatrix} \quad\text{or}\quad \Sigma = \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n \end{bmatrix} \quad\text{or}\quad \Sigma = \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n \\ 0 & \cdots & 0 \end{bmatrix} $$

    when m < n, m = n, and m > n, respectively.

  • In this case, $\sigma_1 \geq \cdots \geq \sigma_r > 0$, and $\sigma_i = 0$ for $i > r$.

  • In MATLAB, svd.m and svds.m

■ We often write the full SVD as partitioned:

$$ A = \begin{bmatrix} U_1 & U_2 \end{bmatrix} \begin{bmatrix} \Sigma_1 & 0_{r\times(n-r)} \\ 0_{(m-r)\times r} & 0_{(m-r)\times(n-r)} \end{bmatrix} \begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix}, $$

  where $A = U_1 \Sigma_1 V_1^T$ is the compact SVD.

■ The matrices U and V form orthonormal bases for the four
  fundamental subspaces:

  • The first r columns of V form a basis for the row space of A;

    ◆ The row space of a matrix is the set of all possible linear
      combinations of its row vectors. It's the span of the input vectors
      x to y = Ax that result in nonzero y.

  • The last $n - r$ columns of V form a basis for the nullspace of A;

    ◆ The null space of a matrix is the set of all possible vectors x
      such that y = Ax = 0.

  • The first r columns of U form a basis for the column space of A;


    ◆ The column space is also the range of the matrix. It's the set of
      all possible outputs y generated as y = Ax.

  • The last $m - r$ columns of U form a basis for the nullspace of $A^T$.

    ◆ The nullspace of $A^T$ is the set of all outputs y that cannot be
      generated as y = Ax.

■ So, immediately we see some value to the SVD: It gives us the rank
  of the matrix, and these four bases, which can give us range space,
  nullspace, etc.

■ Also, the SVD is computationally much simpler than an eigensystem
  calculation, which makes finding matrix 2-norms far simpler:

$$ A^T A = (U \Sigma V^T)^T (U \Sigma V^T) = V \Sigma^2 V^T, $$

  so $\sigma_i = \sqrt{\lambda_i(A^T A)}$ and hence $\|A\|_2 = \sigma_1$.
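■ A short MATLAB sketch of these uses of the SVD (the matrix below is an
  arbitrary rank-1 example, and the rank tolerance is one common
  convention, not the only one):

    % Rank, 2-norm, and the four fundamental subspaces from one SVD call.
    A = [1 2 0; 2 4 0];                      % 2x3, rank 1
    [U, S, V] = svd(A);                      % full SVD
    s = diag(S);
    r = nnz(s > max(size(A)) * eps(max(s))); % numerical rank

    norm2     = s(1);                        % ||A||_2 = sigma_1
    colspace  = U(:, 1:r);                   % basis for the column space (range)
    leftnull  = U(:, r+1:end);               % basis for the nullspace of A'
    rowspace  = V(:, 1:r);                   % basis for the row space
    nullspace = V(:, r+1:end);               % basis for the nullspace of A

    fprintf('rank = %d, ||A||_2 = %g (norm(A) gives %g)\n', r, norm2, norm(A));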

Positive (semi)definite; negative (semi)definite

■ Finally, before continuing to study system stability, we consider some
  properties of the gain matrix in a quadratic form.

■ We say that symmetric matrix Q is:

  • Positive definite if $x^T Q x > 0$, $\forall x \neq 0$;

  • Positive semidefinite if $x^T Q x \geq 0$, $\forall x \neq 0$;

  • Negative definite if $x^T Q x < 0$, $\forall x \neq 0$;

  • Negative semidefinite if $x^T Q x \leq 0$, $\forall x \neq 0$.

■ For a positive-definite matrix Q,

$$ 0 < \lambda_{\min}[Q]\,\|x\|^2 \leq x^T Q x \leq \lambda_{\max}[Q]\,\|x\|^2, \quad \forall x \neq 0. $$
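■ Numerically, definiteness of a symmetric matrix is usually checked through
  its eigenvalues (or a Cholesky factorization). A minimal sketch using two
  arbitrary example matrices:

    % Definiteness tests for symmetric matrices.
    Q1 = [2 -1; -1 2];                       % positive definite (eigenvalues 1, 3)
    Q2 = [1 1; 1 1];                         % positive semidefinite (eigenvalues 0, 2)

    isposdef  = @(Q) all(eig((Q+Q')/2) >  0);
    ispossemi = @(Q) all(eig((Q+Q')/2) >= -10*eps);   % small tolerance for roundoff

    fprintf('Q1 positive definite?     %d\n', isposdef(Q1));
    fprintf('Q2 positive semidefinite? %d\n', ispossemi(Q2));

    % chol() succeeds (flag p == 0) only for positive-definite matrices:
    [~, p] = chol(Q1);
    fprintf('chol flag for Q1 (0 means positive definite): %d\n', p);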

4.3: Lyapunov stability

■ Consider a continuous-time LTV system

$$ \dot{x}(t) = A(t)x(t) + B(t)u(t) $$
$$ y(t) = C(t)x(t) + D(t)u(t). $$

■ The system is (marginally) stable in the sense of Lyapunov or
  internally stable if, for every $x(t_0) = x_0$, the homogeneous response

$$ x(t) = \Phi(t, t_0)\,x_0, \quad t \geq 0, $$

  is uniformly bounded (i.e., all trajectories can be bounded by a single
  constant).

  • The effect of the initial condition cannot grow unbounded over
    time, but can grow temporarily during a transient phase.

■ It is additionally asymptotically stable if, for every $x(t_0) = x_0$,

$$ x(t) \to 0 \text{ as } t \to \infty. $$

  • The effect of initial conditions eventually disappears completely.

■ It is additionally exponentially stable if there exist constants c, $\lambda > 0$
  such that for every initial condition $x(t_0) = x_0$,

$$ \|x(t)\| \leq c\,e^{-\lambda(t-t_0)}\,\|x_0\|, \quad t \geq 0. $$

■ Finally, it is unstable if it is not marginally stable.

  • The state may diverge over time, depending on the specific $x_0$.

■ Notice that B(t), C(t), and D(t) have no influence on Lyapunov
  stability. All that matters is A(t).

■ So, we often simply talk about the Lyapunov stability of the
  homogeneous system $\dot{x}(t) = A(t)x(t)$, $t \geq 0$.
Eigenvalue conditions for LTI systems

■ For LTI systems, it is relatively simple to transform these definitions of
  stability into tests for stability.

■ We know matrix-exponential properties and that $\Phi(t, t_0) = e^{A(t-t_0)}$.

■ Therefore, a continuous-time LTI system is:

  • (Marginally) stable iff all eigenvalues of A have negative or zero
    real parts and all Jordan blocks for eigenvalues having zero real
    parts are $1 \times 1$ in dimension.

  • Asymptotically stable iff all eigenvalues have negative real parts.

  • Exponentially stable iff all eigenvalues have negative real parts.

  • Unstable iff at least one eigenvalue has positive real part, or a Jordan
    block for an eigenvalue having zero real part is bigger than $1 \times 1$.

■ When all eigenvalues have negative real part, we can find c, $\lambda$ so that

$$ \|e^{At}\| \leq c\,e^{-\lambda t}. $$

■ Therefore, $\|x(t)\| = \|e^{A(t-t_0)} x_0\| \leq \|e^{A(t-t_0)}\|\,\|x_0\| \leq c\,e^{-\lambda(t-t_0)}\,\|x_0\|$.

  • Asymptotic and exponential stability are equivalent for LTI systems.

■ Note: This test does not work for LTV systems in general. It's possible
  for $\mathrm{Re}\{\lambda[A(t)]\} < 0$ for all time and still have an unstable system.
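■ For an LTI system, this eigenvalue test is a one-line check in MATLAB. A
  minimal sketch (the matrix A is an arbitrary stable example):

    % Classify continuous-time LTI stability from the eigenvalues of A.
    A = [0 1; -2 -3];                        % eigenvalues -1 and -2
    lam = eig(A);

    if all(real(lam) < 0)
        disp('asymptotically (and exponentially) stable');
    elseif any(real(lam) > 0)
        disp('unstable');
    else
        disp('at best marginally stable -- check Jordan blocks of imaginary-axis eigenvalues');
    end

    % The bound ||e^{At}|| <= c*exp(-lambda*t) describes the decay seen here:
    for t = 0:1:5
        fprintf('t = %g   ||expm(A*t)|| = %g\n', t, norm(expm(A*t)));
    end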

Lyapunov stability theorem

■ The eigenvalue test will often be sufficient for evaluating Lyapunov
  stability, but some extra properties can be helpful, especially when
  extending these ideas to the stability of nonlinear systems.

■ The Lyapunov stability theorem provides an alternative condition to
check whether a continuous-time homogeneous LTI system is
asymptotically stable.
■ The theorem states that the following five conditions are equivalent:

1. The system is asymptotically stable.


2. The system is exponentially stable.
3. All the eigenvalues of A have strictly negative real parts.
4. For every symmetric positive-definite matrix Q, there exists a
unique solution P to the following Lyapunov equation

$$ A^T P + PA = -Q. $$

Further, P is symmetric and positive-definite.


5. There exists a symmetric positive-definite matrix P for which the
following Lyapunov matrix inequality holds

$$ A^T P + PA < 0. $$

■ We have already proven the equivalence between 1, 2, and 3.


■ To prove the rest, we show that 2 implies 4, that 4 implies 5, and that
5 implies 2 (thus forming a complete circular set of implications).
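■ Condition 4 is easy to exercise numerically. A sketch, assuming the Control
  System Toolbox's lyap() is available (lyap(A,Q) solves AX + XA' = -Q, so we
  pass A' to match the convention used here); the integral check anticipates
  the formula proven in the next section:

    % Solve A'P + PA = -Q and verify that P = P' > 0 for a stable A.
    A = [0 1; -2 -3];                        % asymptotically stable example
    Q = eye(2);                              % any symmetric positive-definite Q
    P = lyap(A', Q);

    fprintf('residual ||A''P + PA + Q|| = %g\n', norm(A'*P + P*A + Q));
    fprintf('min eig(P) = %g (should be > 0)\n', min(eig((P+P')/2)));

    % Cross-check against P = int_0^inf expm(A't) Q expm(At) dt (tail negligible by t = 40):
    Pint = integral(@(t) expm(A'*t)*Q*expm(A*t), 0, 40, 'ArrayValued', true);
    fprintf('||P - Pint|| = %g\n', norm(P - Pint));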

4.4: Proof of the Lyapunov stability theorem

■ We start by showing that the unique solution to the Lyapunov
  equation $A^T P + PA = -Q$ is

$$ P = \int_0^\infty e^{A^T t} Q\, e^{At}\, dt. $$

■ We verify this in four steps. The first step is to show that the integral is
  well defined (finite).

  • This is true since the system is exponentially stable, so $e^{At}$ and
    hence $\|e^{A^T t} Q e^{At}\|$ converge to zero exponentially quickly as
    $t \to \infty$. Therefore, the integral is absolutely convergent.

■ Second, we show that this integral solves the Lyapunov equation via
  direct substitution:

$$ A^T P + PA = \int_0^\infty \bigl( A^T e^{A^T t} Q e^{At} + e^{A^T t} Q e^{At} A \bigr)\, dt. $$

  • But (noting that $e^{At} A = A e^{At}$ via the infinite-series expansion of $e^{At}$),

$$ \frac{d}{dt}\bigl( e^{A^T t} Q e^{At} \bigr) = A^T e^{A^T t} Q e^{At} + e^{A^T t} Q e^{At} A. $$

  • So,

$$ A^T P + PA = \int_0^\infty \frac{d}{dt}\bigl( e^{A^T t} Q e^{At} \bigr)\, dt = e^{A^T t} Q e^{At} \Big|_0^\infty = \lim_{t\to\infty}\bigl( e^{A^T t} Q e^{At} \bigr) - e^{A^T 0} Q e^{A 0}. $$

  • Because of asymptotic stability, the first term vanishes.

  • Because $e^{A^T 0} = I$, the second term becomes $-Q$.

  • So, the proposed solution solves the equation, but we don't yet
    know whether it is symmetric, positive-definite, and/or unique.
■ Third, to show that it is symmetric we compute

$$ P^T = \left( \int_0^\infty e^{A^T t} Q e^{At}\, dt \right)^T = \int_0^\infty \bigl( e^{At} \bigr)^T Q^T \bigl( e^{A^T t} \bigr)^T dt = \int_0^\infty e^{A^T t} Q e^{At}\, dt = P. $$

  • To show that it is positive-definite, we pick an arbitrary constant
    vector $z \neq 0$ and compute

$$ z^T P z = \int_0^\infty z^T \bigl( e^{A^T t} Q e^{At} \bigr) z\, dt = \int_0^\infty w(t)^T Q\, w(t)\, dt, $$

    which has positive-definite form since Q is positive definite.

  • Further, we see that $z^T P z = 0$ only when $w(t) = e^{At} z = 0$, which
    can only happen when $z = 0$ (since $e^{At}$ is nonsingular for finite time).

  • Therefore P is positive-definite.

■ Fourth, to see that no other matrix solves this equation, we proceed
  using a proof by contradiction.

  • That is, assume that there exists another solution $\bar{P}$ to the
    Lyapunov equation. If this were so, we would have both

$$ A^T P + PA = -Q $$
$$ A^T \bar{P} + \bar{P} A = -Q. $$

  • Then,

$$ A^T (P - \bar{P}) + (P - \bar{P}) A = 0. $$

  • Multiplying on the left by $e^{A^T t}$ and on the right by $e^{At}$,

$$ e^{A^T t} A^T (P - \bar{P}) e^{At} + e^{A^T t} (P - \bar{P}) A\, e^{At} = 0. $$

  • On the other hand,

$$ \frac{d}{dt}\bigl( e^{A^T t} (P - \bar{P}) e^{At} \bigr) = e^{A^T t} A^T (P - \bar{P}) e^{At} + e^{A^T t} (P - \bar{P}) e^{At} A = 0. $$

  • Therefore, $e^{A^T t} (P - \bar{P}) e^{At}$ must be constant for all time, which
    requires that $P = \bar{P}$ since $e^{At}$ is nonsingular.

■ We have now shown that condition 2 implies 4.

■ It immediately follows that 4 implies 5 if we select Q = I.

■ To show that condition 5 implies condition 2, let P be a symmetric
  positive-definite matrix that satisfies

$$ A^T P + PA < 0 $$

  and let $Q = -(A^T P + PA) > 0$.

  • Consider an arbitrary solution x(t) to the homogeneous system
    dynamics and define the scalar signal

$$ v(t) = x^T(t) P x(t) \geq 0. $$

  • This signal is a kind of weighted “energy of the state” for the
    homogeneous system.

  • For a stable homogeneous system, we would expect this energy to
    dissipate over time, which is exactly what we will show.

    ◆ This idea returns when talking about nonlinear stability, later.

  • Taking derivatives of v(t),

$$ \dot{v}(t) = \dot{x}^T(t) P x(t) + x^T(t) P \dot{x}(t). $$
  • But, since $\dot{x}(t) = A x(t)$,

$$ \dot{v}(t) = x^T(t) \bigl( A^T P + PA \bigr) x(t) = -x^T(t) Q x(t) \leq 0. $$

  • Therefore v(t) is a nonincreasing signal and we conclude that

$$ v(t) = x^T(t) P x(t) \leq v(0) = x^T(0) P x(0), \quad t \geq 0. $$

  • But since $v(t) = x^T(t) P x(t) \geq \lambda_{\min}[P]\,\|x(t)\|^2$, we conclude that

$$ \|x(t)\|_2^2 \leq \frac{x^T(t) P x(t)}{\lambda_{\min}[P]} = \frac{v(t)}{\lambda_{\min}[P]} \leq \frac{v(0)}{\lambda_{\min}[P]}, \quad t \geq 0. $$

  • This means that the system is stable. To show that it is
    exponentially stable, we combine our known results that

$$ x^T(t) Q x(t) \geq \lambda_{\min}[Q]\,\|x(t)\|_2^2 $$

    and that $v(t) = x^T(t) P x(t) \leq \lambda_{\max}[P]\,\|x(t)\|_2^2$ and conclude

$$ \dot{v}(t) = -x^T(t) Q x(t) \leq -\lambda_{\min}[Q]\,\|x(t)\|_2^2 \leq -\frac{\lambda_{\min}[Q]}{\lambda_{\max}[P]}\, v(t), $$

    or that

$$ \dot{v}(t) \leq \mu\, v(t), \quad \text{where } \mu = -\frac{\lambda_{\min}[Q]}{\lambda_{\max}[P]} < 0. $$

  • It can be shown (see text) that this implies v(t) converges at least
    as quickly as the exponential

$$ v(t) \leq e^{\mu (t-t_0)}\, v(t_0). $$

  • Since v(t) converges exponentially quickly, so too must $\|x(t)\|^2$,
    since we have already shown that

$$ \|x(t)\|_2^2 \leq \frac{v(t)}{\lambda_{\min}[P]}. $$

■ This finishes our proof of the Lyapunov stability theorem for LTI
  continuous-time systems.
Solving Lyapunov equations

■ Lyapunov equations of the form

$$ XA + BX = C $$

  crop up numerous times in control systems, for different reasons.

■ Sometimes we need to solve for X.

■ The equations can be solved by writing them out in terms of the entries $X_{ij}$.

■ Another way to do it uses the Kronecker product and vectorized
  matrices.

KRONECKER PRODUCT:

$$ A \otimes B = \begin{bmatrix} a_{11} B & a_{12} B & \cdots & a_{1n} B \\ \vdots & & & \vdots \\ a_{m1} B & a_{m2} B & \cdots & a_{mn} B \end{bmatrix}. $$

That is, the Kronecker product is a large matrix containing every entry of A
multiplied by the entire matrix B.

VECTORIZED MATRICES: We can convert a matrix into a column vector
which stacks up each column of the matrix. That is, if $A = [a_1, a_2, \ldots, a_n]$,

$$ \mathrm{vec}(A) = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}. $$

■ A general Kronecker-product rule is:

$$ \mathrm{vec}(AXB) = [B^T \otimes A]\, \mathrm{vec}(X). $$
■ Three specific special-case rules result:

$$ \mathrm{vec}(PMP^T) = [P \otimes P]\, \mathrm{vec}(M) \quad \text{(to be used later in course)} $$
$$ \mathrm{vec}(XA) = [A^T \otimes I]\, \mathrm{vec}(X) $$
$$ \mathrm{vec}(BX) = [I \otimes B]\, \mathrm{vec}(X) $$

  so, to solve the Lyapunov equation,

$$ XA + BX = C $$
$$ IXA + BXI = C $$
$$ \mathrm{vec}(IXA) + \mathrm{vec}(BXI) = \mathrm{vec}(C) $$
$$ [A^T \otimes I]\, \mathrm{vec}(X) + [I \otimes B]\, \mathrm{vec}(X) = \mathrm{vec}(C) $$
$$ \bigl\{ [A^T \otimes I] + [I \otimes B] \bigr\}\, \mathrm{vec}(X) = \mathrm{vec}(C) $$
$$ \mathrm{vec}(X) = \bigl\{ [A^T \otimes I] + [I \otimes B] \bigr\}^{-1} \mathrm{vec}(C). $$

  X is then found by un-vectorizing $\mathrm{vec}(X)$.

■ In MATLAB: lyap( )
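■ The Kronecker-product recipe above translates directly into code. A sketch
  using plain MATLAB (kron, reshape), with lyap() from the Control System
  Toolbox as a cross-check (its Sylvester form lyap(B,A,-C) solves
  BX + XA - C = 0, the same equation):

    % Solve XA + BX = C via vectorization, then compare against lyap().
    rng(2);  n = 3;
    A = randn(n);  B = randn(n);  C = randn(n);

    K    = kron(A.', eye(n)) + kron(eye(n), B);   % [A' (x) I] + [I (x) B]
    vecX = K \ C(:);                              % C(:) stacks the columns of C
    X    = reshape(vecX, n, n);                   % un-vectorize

    fprintf('Kronecker-solution residual: %g\n', norm(X*A + B*X - C));

    Xl = lyap(B, A, -C);                          % Sylvester solver
    fprintf('difference from lyap():      %g\n', norm(X - Xl));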

4.5: Discrete-time Lyapunov stability

■ Consider now the discrete-time LTV system

$$ x[k+1] = A[k]x[k] + B[k]u[k] $$
$$ y[k] = C[k]x[k] + D[k]u[k]. $$

■ The system is (marginally) stable in the Lyapunov sense or internally
  stable if for every $x[k_0] = x_0$ the homogeneous state response

$$ x[k] = \Phi[k, k_0]\,x_0, \quad k \geq k_0, $$

  is uniformly bounded.

■ It is asymptotically stable if, additionally, for every initial condition we
  have $x[k] \to 0$ as $k \to \infty$.

■ It is exponentially stable if, additionally, there exist constants c and
  $0 \leq \lambda < 1$ such that for every $x[k_0] = x_0$ we have

$$ \|x[k]\| \leq c\,\lambda^{k-k_0}\,\|x_0\|, \quad k \geq k_0. $$

■ It is unstable if it is not marginally stable in the Lyapunov sense.

■ Note again that the matrices B[k], C[k], and D[k] play no role in this
  definition, so we need consider only the homogeneous response
  when considering Lyapunov stability.

Eigenvalue conditions

■ A discrete-time LTI system is

  • Marginally stable iff all the eigenvalues of A have magnitude
    smaller than or equal to 1, and all the Jordan blocks corresponding
    to eigenvalues with magnitude equal to 1 are $1 \times 1$.

  • Asymptotically and exponentially stable iff all the eigenvalues of A
    have magnitude strictly less than 1, or

  • Unstable iff at least one eigenvalue of A has magnitude larger
    than 1, or magnitude equal to 1 but the corresponding Jordan
    block is larger than $1 \times 1$.

Lyapunov stability in discrete time

■ The following five conditions are equivalent for LTI systems:

  1. The system is asymptotically stable.

  2. The system is exponentially stable.

  3. All the eigenvalues of A have magnitude strictly smaller than 1.

  4. For every symmetric positive-definite matrix Q, there exists a
     unique solution P to the following discrete-time Lyapunov equation
     (aka Stein equation)

$$ A^T P A - P = -Q. $$

     Furthermore, P is positive-definite.

  5. There exists a symmetric positive-definite matrix P for which the
     following Lyapunov matrix inequality holds

$$ A^T P A - P < 0. $$

■ These conditions can be proven in a similar way to their
  continuous-time counterparts.

■ Now, however, we use the energy function

$$ v[k] = x^T[k] P x[k], \quad k \geq k_0, $$

  which evolves according to

$$ v[k+1] = x^T[k+1] P x[k+1] = x^T[k] A^T P A\, x[k]. $$
■ The discrete-time Lyapunov equation guarantees that

$$ v[k+1] = x^T[k] (P - Q) x[k] = v[k] - x^T[k] Q x[k], \quad k \geq k_0. $$

■ From this, we conclude that v[k] is nonincreasing, and with some
  work that it decreases to zero exponentially quickly.

Solving discrete-time Lyapunov equations

■ In discrete time, a Lyapunov equation takes on the form

$$ W - A W A^T = B B^T. $$

■ This may be converted to a standard Lyapunov equation. Let

$$ A_c = (A + I)^{-1} (A - I) $$

  and

$$ C_c = -2 (A + I)^{-1} B B^T (A^T + I)^{-1}. $$

■ Expand the continuous-time Lyapunov equation $A_c W + W A_c^T = C_c$
  using these definitions:

$$ (A + I)^{-1} (A - I) W + W (A^T - I)(A^T + I)^{-1} = -2 (A + I)^{-1} B B^T (A^T + I)^{-1}. $$

■ Replace $(A - I)$ with $(A + I - 2I)$ and $(A^T - I)$ with $(A^T + I - 2I)$.

■ Multiply through and simplify to get

$$ W - (A + I)^{-1} W - W (A^T + I)^{-1} = -(A + I)^{-1} B B^T (A^T + I)^{-1}. $$

■ Left multiply by $-(A + I)$ and right multiply by $(A^T + I)$ and simplify:

$$ W - A W A^T = B B^T. $$

■ So, with the above definitions of $A_c$ and $C_c$ based on A and B, we can
  create a continuous-time Lyapunov equation and solve it for W.
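■ A quick numerical check of this conversion (a sketch; dlyap() and lyap()
  are Control System Toolbox solvers, and the particular A and B below are
  arbitrary, with all eigenvalues of A inside the unit circle):

    % Solve W - A*W*A' = B*B' directly and via the bilinear conversion above.
    A = [0.5 0.2; -0.1 0.8];                 % eigenvalues 0.7 and 0.6
    B = [1; 0.5];

    W1 = dlyap(A, B*B');                     % solves A*W*A' - W + B*B' = 0

    Ac = (A + eye(2)) \ (A - eye(2));                      % (A+I)^{-1}(A-I)
    Cc = -2 * ((A + eye(2)) \ (B*B')) / (A' + eye(2));     % -2(A+I)^{-1}BB'(A'+I)^{-1}
    W2 = lyap(Ac, -Cc);                      % solves Ac*W + W*Ac' = Cc

    fprintf('||W1 - W2|| = %g\n', norm(W1 - W2));
    fprintf('residual ||W1 - A*W1*A'' - B*B''|| = %g\n', norm(W1 - A*W1*A' - B*B'));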

4.6: Stability of locally linearized systems

■ Now, consider a continuous-time homogeneous nonlinear system

$$ \dot{x}(t) = f(x(t)) $$

  with an equilibrium point at $x^{\mathrm{eq}}$. That is, $f(x^{\mathrm{eq}}) = 0$.

■ This gives rise to the locally linearized system

$$ \delta\dot{x}(t) = A\,\delta x(t) $$

  with $\delta x(t) = x(t) - x^{\mathrm{eq}}$ and $A = \partial f(x^{\mathrm{eq}})/\partial x$.

■ If the linearized system passes the Lyapunov stability tests, can we
  say anything about the stability of the original nonlinear system? Yes!

■ If f(x(t)) is twice differentiable and the linearized system is
  exponentially stable, then there exists a ball $B \subset \mathbb{R}^n$ around $x^{\mathrm{eq}}$ and
  constants c, $\lambda > 0$ such that for every solution x(t) to the nonlinear
  system that starts at $x_0 \in B$, we have

$$ \|x(t) - x^{\mathrm{eq}}\| \leq c\,e^{-\lambda(t-t_0)}\,\|x(t_0) - x^{\mathrm{eq}}\|, \quad t \geq t_0. $$

  • We call this nonlinear system locally exponentially stable.

■ To show this, because $f(\cdot)$ is twice differentiable, we know from
  Taylor's theorem that

$$ r(x(t)) = f(x(t)) - \bigl( f(x^{\mathrm{eq}}) + A\,\delta x(t) \bigr) = f(x(t)) - A\,\delta x(t) = O(\|\delta x(t)\|^2), $$

  which means that there exists a constant c and a ball $\bar{B}$ around $x^{\mathrm{eq}}$
  for which

$$ \|r(x(t))\| \leq c\,\|\delta x(t)\|^2, \quad \forall x(t) \in \bar{B}. $$

■ Since the linearized system is exponentially stable, there exists a
  positive-definite matrix P for which

$$ A^T P + PA = -I. $$

  [Figure: nested regions around $x^{\mathrm{eq}}$: the ball $\bar{B}$ from Taylor's theorem,
  the ellipsoid E inside $\bar{B}$, and the ball B inside E. Within E, the bound
  $\|\delta x(t)\| \leq 1/(4c\|P\|)$ of property 2 below holds.]

■ Inspired by our proof of the Lyapunov stability theorem, we define the
  scalar “energy” signal

$$ v(t) = \delta x^T(t) P\, \delta x(t), \quad t \geq 0. $$

■ We then compute its derivative along trajectories:

$$ \dot{v}(t) = f^T(x(t)) P\, \delta x(t) + \delta x^T(t) P f(x(t)) $$
$$ = \bigl( A\,\delta x(t) + r(x(t)) \bigr)^T P\, \delta x(t) + \delta x^T(t) P \bigl( A\,\delta x(t) + r(x(t)) \bigr) $$
$$ = \delta x^T(t) (A^T P + PA)\, \delta x(t) + 2\,\delta x^T(t) P\, r(x(t)) $$
$$ = -\|\delta x(t)\|_2^2 + 2\,\delta x^T(t) P\, r(x(t)) $$
$$ \leq -\|\delta x(t)\|_2^2 + 2\,\|P\|\,\|\delta x(t)\|\,\|r(x)\|. $$

■ To make the proof work, we would like to make sure that the RHS is
  negative. For example, it would work if we could show

$$ -\|\delta x(t)\|_2^2 + 2\,\|P\|\,\|\delta x(t)\|\,\|r(x)\| \leq -\tfrac{1}{2}\|\delta x(t)\|^2. $$

■ To do so, let $\varepsilon > 0$ be sufficiently small so that the ellipsoid

$$ E = \{ x(t) \in \mathbb{R}^n : (x(t) - x^{\mathrm{eq}})^T P (x(t) - x^{\mathrm{eq}}) \leq \varepsilon \} $$

  centered at $x^{\mathrm{eq}}$ satisfies the following two properties:

  1. E is fully contained inside the ball $\bar{B}$. When x(t) is inside this
     ellipsoid, we know $\|r(x(t))\| \leq c\|\delta x(t)\|^2$ and so $x(t) \in E$ implies

$$ \dot{v}(t) \leq -\|\delta x(t)\|^2 + 2c\|P\|\,\|\delta x(t)\|^3 = -\bigl( 1 - 2c\|P\|\,\|\delta x(t)\| \bigr)\|\delta x(t)\|^2. $$

  2. We further shrink $\varepsilon$ so that inside the ellipsoid E we have

$$ 1 - 2c\|P\|\,\|\delta x(t)\| \geq \tfrac{1}{2}, $$

     which implies that

$$ \|\delta x(t)\| \leq \frac{1}{4c\|P\|}. $$

■ For this choice of $\varepsilon$ we have that $x(t) \in E$ implies $\dot{v}(t) \leq -\tfrac{1}{2}\|\delta x(t)\|^2$.

■ We therefore conclude that $x(t) \in E$ implies that both

$$ v(t) \leq \varepsilon \quad\text{and}\quad \dot{v}(t) \leq 0. $$

  So, v(t) cannot increase above $\varepsilon$ and x(t) cannot exit E.

■ Therefore, if x(0) starts inside E, it cannot exit this set.

■ Moreover, since $\dot{v}(t) \leq -\|\delta x(t)\|^2/2$ and $\delta x^T(t) P\, \delta x(t) \leq \|P\|\,\|\delta x\|^2$,
  we can further conclude that if x(0) starts within E,

$$ \dot{v}(t) \leq -\frac{v(t)}{2\|P\|}, $$

  and therefore we can show that $\delta x(t) = x(t) - x^{\mathrm{eq}}$ decreases to zero
  exponentially quickly.

■ In conclusion, if the linearized system is stable, then there exists
  some operating regime around $x^{\mathrm{eq}}$ for which the nonlinear system is
  also stable: any ball B inside E.

■ It can also be shown that if the linearized system is unstable, there
  exist initial conditions arbitrarily close to $x^{\mathrm{eq}}$ for which the trajectories
  do not converge as $t \to \infty$.
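■ As a concrete illustration of this result, the sketch below linearizes a damped
  pendulum at its two equilibria and classifies each by the eigenvalues of the
  Jacobian. (The pendulum model, its parameter values, and the analytic
  Jacobian are assumptions introduced here for the example only.)

    % Damped pendulum: x1 = angle, x2 = angular rate.
    g = 9.81;  L = 1;  b = 0.5;                      % assumed example parameters
    f   = @(x) [x(2); -(g/L)*sin(x(1)) - b*x(2)];    % xdot = f(x)
    Jac = @(xe) [0, 1; -(g/L)*cos(xe(1)), -b];       % A = df/dx evaluated at x_eq

    fprintf('||f|| at equilibria: %g, %g\n', norm(f([0;0])), norm(f([pi;0])));

    A_down = Jac([0; 0]);    % hanging-down equilibrium
    A_up   = Jac([pi; 0]);   % inverted equilibrium

    fprintf('down: real(eig) = %s  -> locally exponentially stable\n', ...
            mat2str(real(eig(A_down)).', 3));
    fprintf('up:   real(eig) = %s  -> unstable\n', ...
            mat2str(real(eig(A_up)).', 3));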

4.7: Input–output stability: LTV case

■ We now consider a different measure of stability that expresses how
  the magnitude of the input affects the magnitude of the output in the
  absence of initial conditions.

■ We consider the continuous-time LTV system

$$ \dot{x}(t) = A(t)x(t) + B(t)u(t) $$
$$ y(t) = C(t)x(t) + D(t)u(t). $$

■ We have seen that the forced response of this system in the absence
  of initial conditions is

$$ y(t) = \int_0^t C(t)\Phi(t,\tau)B(\tau)u(\tau)\, d\tau + D(t)u(t). $$

■ This system is said to be (uniformly) bounded-input bounded-output
  (BIBO) stable if there exists a finite constant $\gamma$ such that, for every
  input u(t), its forced response satisfies

$$ \sup_{t\in[0,\infty)} \|y(t)\| \leq \gamma \sup_{t\in[0,\infty)} \|u(t)\|. $$

Time-domain conditions for BIBO stability

■ The following two statements are equivalent:

  1. An LTV system is uniformly BIBO stable.

  2. Every entry of the system's D(t) is uniformly bounded and

$$ \sup_{t\geq 0} \int_0^t |g_{ij}(t,\tau)|\, d\tau < \infty $$

     for every entry $g_{ij}(t,\tau)$ of $C(t)\Phi(t,\tau)B(\tau)$.

■ To prove this statement, we start with

$$ \|y(t)\| \leq \int_0^t \|C(t)\Phi(t,\tau)B(\tau)\|\,\|u(\tau)\|\, d\tau + \|D(t)\|\,\|u(t)\|. $$

■ We then define

$$ \mu = \sup_{t\in[0,\infty)} \|u(t)\|, \qquad \delta = \sup_{t\in[0,\infty)} \|D(t)\|. $$

■ Substituting, we can write

$$ \|y(t)\| \leq \left\{ \int_0^t \|C(t)\Phi(t,\tau)B(\tau)\|\, d\tau + \delta \right\} \mu. $$

■ Therefore, if

$$ \gamma = \sup_{t\geq 0} \int_0^t \|C(t)\Phi(t,\tau)B(\tau)\|\, d\tau + \delta $$

  is finite, we have proven our desired result.

■ To show this, we note that

$$ \|C(t)\Phi(t,\tau)B(\tau)\| \leq \sum_{i,j} |g_{ij}(t,\tau)| $$

  as a consequence of the triangle inequality, and therefore

$$ \int_0^t \|C(t)\Phi(t,\tau)B(\tau)\|\, d\tau \leq \sum_{i,j} \int_0^t |g_{ij}(t,\tau)|\, d\tau. $$

■ By our assumption that

$$ \sup_{t\geq 0} \int_0^t |g_{ij}(t,\tau)|\, d\tau < \infty, $$

  we conclude that

$$ \gamma = \sup_{t\geq 0} \int_0^t \|C(t)\Phi(t,\tau)B(\tau)\|\, d\tau + \delta \leq \sup_{t\geq 0} \sum_{i,j} \int_0^t |g_{ij}(t,\tau)|\, d\tau + \delta < \infty. $$

■ This shows that statement 2 implies statement 1.

■ To show the other direction, we show that if 2 is false then 1 must also
  be false. This requires two steps.

■ In the first step, we consider

$$ \sup_{t\geq 0} \int_0^t |g_{ij}(t,\tau)|\, d\tau < \infty $$

  but that some entry in D(t) is unbounded.

■ Choose an arbitrary time t = T (at which $|d_{ij}(T)|$ can be made as large
  as we like, since D(t) is unbounded), and choose the input

$$ u_T(\tau) = \begin{cases} 0, & 0 \leq \tau < T \\ e_j, & \tau \geq T \end{cases} $$

  where $e_j \in \mathbb{R}^k$ is the jth unit vector in the basis of $\mathbb{R}^k$.

■ Then, the forced response at time T is exactly

$$ y(T) = D(T) e_j. $$

■ Thus, we have found an input for which

$$ \sup_{t\in[0,\infty)} \|u_T(t)\| = 1 $$

  and

$$ \sup_{t\in[0,\infty)} \|y(t)\| \geq \|y(T)\| = \|D(T)e_j\| \geq |d_{ij}(T)|, $$

  where the last inequality results from the fact that the norm of the
  vector $D(T)e_j$ must be at least as large as the absolute value of its ith
  entry, which is precisely $d_{ij}(T)$.

■ As $d_{ij}(T)$ can be made arbitrarily large, we conclude that we can make
  $\sup_{t\in[0,\infty)} \|y(t)\|$ arbitrarily large using inputs $u_T(t)$ for which
  $\sup_{t\in[0,\infty)} \|u_T(t)\| = 1$.

■ This is not compatible with the existence of a finite gain $\gamma$ that
  satisfies the stability criterion. Therefore, $D(\cdot)$ must be bounded for
  BIBO stability.

■ Now suppose that 2 is false because $\int_0^t |g_{ij}(t,\tau)|\, d\tau$ is unbounded for
  some i and j. This will also violate BIBO stability.

■ To see this, pick an arbitrary time T and a “switching” input

$$ u_T(\tau) = \begin{cases} +e_j, & g_{ij}(T,\tau) \geq 0 \\ -e_j, & g_{ij}(T,\tau) < 0. \end{cases} $$

■ For this input, the forced response at time T is given by

$$ y(T) = \int_0^T C(T)\Phi(T,\tau)B(\tau)u_T(\tau)\, d\tau + D(T)u_T(T), $$

  and its ith entry is equal to

$$ \int_0^T |g_{ij}(T,\tau)|\, d\tau \pm d_{ij}(T). $$

■ We thus have found an input for which $\sup_{t\in[0,\infty)} \|u_T(t)\| = 1$ and

$$ \sup_{t\in[0,\infty)} \|y(t)\| \geq \|y(T)\| \geq \left| \int_0^T |g_{ij}(T,\tau)|\, d\tau \pm d_{ij}(T) \right|. $$

■ Since we have assumed that the integral of $|g_{ij}(t,\tau)|$ is unbounded,
  we also now conclude that we can make $\sup_{t\in[0,\infty)} \|y(t)\|$ arbitrarily
  large using inputs $u_T(t)$ for which $\sup_{t\in[0,\infty)} \|u_T(t)\| = 1$.

■ This is not compatible with the existence of a finite gain $\gamma$ that
  satisfies the stability criteria, which means that condition 2 must hold
  for the system to be BIBO stable.

4.8: Input–output stability: LTI case

■ For a time-invariant system

$$ \dot{x}(t) = Ax(t) + Bu(t) $$
$$ y(t) = Cx(t) + Du(t), $$

  we have that

$$ C\,\Phi(t,\tau)\,B = C\,e^{A(t-\tau)}\,B. $$

■ We can therefore rewrite our prior stability criterion as finite D plus

$$ \sup_{t\geq 0} \int_0^t |\bar{g}_{ij}(t-\tau)|\, d\tau < \infty, $$

  understanding that $\bar{g}_{ij}(t-\tau)$ denotes the ijth entry of $C e^{A(t-\tau)} B$.

■ Making a change of variable $\sigma = t - \tau$, we can restate

$$ \sup_{t\geq 0} \int_0^t |\bar{g}_{ij}(t-\tau)|\, d\tau = \sup_{t\geq 0} \int_0^t |\bar{g}_{ij}(\sigma)|\, d\sigma = \int_0^\infty |\bar{g}_{ij}(\sigma)|\, d\sigma. $$

■ Therefore, we can restate our prior stability observations as the
  equivalence between the following two statements:

  1. An LTI system is uniformly BIBO stable.

  2. D is finite and for every entry $\bar{g}_{ij}(\sigma)$ of $C e^{A\sigma} B$, we have

$$ \int_0^\infty |\bar{g}_{ij}(\sigma)|\, d\sigma < \infty. $$
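■ This integral can be estimated numerically from the impulse response. A
  sketch for an arbitrary stable example (impulse() is from the Control
  System Toolbox; since D = 0 here, the impulse response equals
  C*expm(A*sigma)*B):

    % Approximate int_0^inf |gbar(sigma)| dsigma for a SISO LTI example.
    A = [0 1; -2 -3];  B = [0; 1];  C = [1 0];  D = 0;
    sys = ss(A, B, C, D);

    t = (0:0.01:20)';                    % long enough for the response to decay
    gbar = impulse(sys, t);
    fprintf('approx. integral of |gbar|: %g (finite => BIBO stable)\n', ...
            trapz(t, abs(gbar)));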

Frequency-domain conditions for BIBO stability


■ For LTI systems, a stability test is often easier to do in the Laplace
  domain.

■ Taking Laplace transforms with respect to the temporal variable $\sigma$,

$$ \mathcal{L}[\bar{g}_{ij}(\sigma)] = \mathcal{L}[C e^{A\sigma} B] = C(sI - A)^{-1} B. $$
■ The ijth entry of this result will be a strictly proper rational function

$$ G_{ij}(s) = \frac{\alpha_0 s^q + \alpha_1 s^{q-1} + \cdots + \alpha_{q-1} s + \alpha_q}{(s - \lambda_1)^{m_1} (s - \lambda_2)^{m_2} \cdots (s - \lambda_k)^{m_k}}, $$

  where the $\lambda$ are the unique poles and the m are their multiplicities.

■ If we were to use partial-fraction expansion to invert $G_{ij}(s)$, we would
  find the intermediate result

$$ G_{ij}(s) = \frac{a_{11}}{s - \lambda_1} + \frac{a_{12}}{(s - \lambda_1)^2} + \cdots + \frac{a_{1m_1}}{(s - \lambda_1)^{m_1}} + \cdots + \frac{a_{k1}}{s - \lambda_k} + \frac{a_{k2}}{(s - \lambda_k)^2} + \cdots + \frac{a_{km_k}}{(s - \lambda_k)^{m_k}}. $$

■ The inverse Laplace transform is then given by

$$ \bar{g}_{ij}(t) = \mathcal{L}^{-1}[G_{ij}(s)] = a_{11} e^{\lambda_1 t} + a_{12} t e^{\lambda_1 t} + \cdots + a_{1m_1} t^{m_1-1} e^{\lambda_1 t} + \cdots + a_{k1} e^{\lambda_k t} + a_{k2} t e^{\lambda_k t} + \cdots + a_{km_k} t^{m_k-1} e^{\lambda_k t}. $$

■ We therefore conclude the following:

  1. If for all $G_{ij}(s)$, all the poles have strictly negative real parts, then
     $\bar{g}_{ij}(t)$ converges to zero exponentially quickly and the system is
     BIBO stable.

  2. If at least one of the $G_{ij}(s)$ has a pole with zero or positive real
     part, then $|\bar{g}_{ij}(t)|$ does not converge to zero and the system is not
     BIBO stable.

■ Noting that adding a constant finite D to $C(sI - A)^{-1} B$ does not change
  the poles, we can restate the BIBO stability relationship as the
  equivalence between:

  1. The system is uniformly BIBO stable.



  2. Every pole of every entry of the transfer function of the system has
     a strictly negative real part.

BIBO versus Lyapunov stability

■ Systems that are exponentially stable in the sense of Lyapunov are
  also BIBO stable.

■ However, the converse is not necessarily true. Consider the example

$$ \dot{x}(t) = \begin{bmatrix} 1 & 0 \\ 0 & -2 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t) $$
$$ y(t) = \begin{bmatrix} 1 & 1 \end{bmatrix} x(t). $$

■ For this system,

$$ e^{At} = \begin{bmatrix} e^t & 0 \\ 0 & e^{-2t} \end{bmatrix}, $$

  which is unbounded, and therefore the system is Lyapunov unstable.

■ However,

$$ C e^{At} B = \begin{bmatrix} 1 & 1 \end{bmatrix} \begin{bmatrix} e^t & 0 \\ 0 & e^{-2t} \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = e^{-2t}, $$

  and therefore the system is BIBO stable.

  • The root of this distinction between types of stability stems from
    the unstable part of the system being uncontrollable: the input does
    not excite it, so it never appears in the forced response.

  • We will study this more in the next chapter.
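■ This example is easy to reproduce numerically. A sketch, assuming the
  Control System Toolbox (minreal() strips the hidden unstable mode, so
  pole() of the reduced model returns the transfer-function poles):

    % Internally unstable, yet BIBO stable.
    A = [1 0; 0 -2];  B = [0; 1];  C = [1 1];  D = 0;
    sys = ss(A, B, C, D);

    disp(eig(A).')               % contains +1: not stable in the sense of Lyapunov
    sysm = minreal(sys);         % removes the hidden (uncontrollable) mode
    tf(sysm)                     % transfer function is 1/(s+2)
    disp(pole(sysm).')           % only pole is -2, so the system is BIBO stable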

Discrete-time BIBO stability


■ Without going into the details, we simply state the conditions for
discrete-time BIBO stability.
■ In the time domain, the following two conditions are equivalent for a
  discrete-time LTV or LTI system:

  1. The system is uniformly BIBO stable.

  2. Every entry of D[k] is uniformly bounded and

$$ \sup_{k\geq 0} \sum_{\tau=0}^{k-1} |g_{ij}[k,\tau]| < \infty $$

     for every entry $g_{ij}[k,\tau]$ of $C[k]\Phi[k,\tau]B[\tau]$.

■ For an LTI system, we also have the equivalence among:

  1. The system is uniformly BIBO stable.

  2. For every entry $\bar{g}_{ij}(\sigma)$ of $C A^{\sigma} B$, we have

$$ \sum_{\sigma=1}^{\infty} |\bar{g}_{ij}(\sigma)| < \infty. $$

  3. Every pole of every entry of the transfer function of the system has
     magnitude strictly less than 1.

Where to from here?

■ We have now seen how to determine whether a given system is


stable, using two principal definitions of stability and a variety of tests.
■ What if it isn’t stable? Can we stabilize it? Can we control it?
■ To see if we will be able to do so, we need to understand two
important concepts: controllability and observability.
■ We look at these next.
