
Ninevah University

College of Electronics Engineering

Department of Systems and Control Engineering

Advanced Control Systems


Lecture Notes

Prepared by: Mr. Abdullah I. Abdullah

September 2017

Course Objectives:

This is the second course on control systems. It covers linear systems and control
design techniques using state-space methods. Students will learn controllability,
observability, state-space realizations, and state feedback, and will learn how to
design and apply pole-placement controllers, LQR controllers, and state observers.

REFERENCES:

1. K. Ogata, Modern Control Engineering, Prentice Hall.

2. R. C. Dorf and R. H. Bishop, Modern Control Systems, 12th ed., Prentice Hall, 2011.

Contents

1 Linear Systems 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Review of System Modeling . . . . . . . . . . . . . . . . . . . . . 2
1.3 State-variable Modeling . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.1 The Concept of State . . . . . . . . . . . . . . . . . . . . 8
1.4 Simulation Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.1 State-variable models from Transfer Function . . . . . . . 10
1.4.2 Transfer Functions from State-variable Models . . . . . . 15
1.5 System Interconnections . . . . . . . . . . . . . . . . . . . . . . . 17
1.5.1 Series and Parallel Connections . . . . . . . . . . . . . . . 17
1.5.2 Feedback Connection . . . . . . . . . . . . . . . . . . . . . 17
1.6 Solution of State Equations . . . . . . . . . . . . . . . . . . . . . 18
1.6.1 Properties of the State-Transition Matrix . . . . . . . . . 22
1.7 Characteristic Equations . . . . . . . . . . . . . . . . . . . . . . . 24
1.7.1 Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.7.2 Eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.8 Similarity Transformation . . . . . . . . . . . . . . . . . . . . . . 27
1.8.1 Properties of the Similarity Transformation . . . . . . . . 29
1.8.2 Diagonal Canonical Form . . . . . . . . . . . . . . . . . . 29
1.8.3 Control Canonical Form . . . . . . . . . . . . . . . . . . . 31
1.8.4 Observer Canonical Form . . . . . . . . . . . . . . . . . . 33

2 Controllability and Observability 37


2.1 Motivation Examples . . . . . . . . . . . . . . . . . . . . . . . . 37
2.2 Controllability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.2.1 Controllability Tests . . . . . . . . . . . . . . . . . . . . . 40
2.3 Observability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.3.1 Observability Tests . . . . . . . . . . . . . . . . . . . . . . 42
2.4 Frequency Domain Tests . . . . . . . . . . . . . . . . . . . . . . . 43
2.5 Similarity Transformations Again . . . . . . . . . . . . . . . . . . 46

3 Stability 47
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.2 Stability Definitions . . . . . . . . . . . . . . . . . . . . . . . . . 48


4 Modern Control Design 51


4.1 State feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.2 Pole-Placement Design . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3 Missing Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.4 State Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.4.1 Estimator Design . . . . . . . . . . . . . . . . . . . . . . . 57
4.5 Closed-Loop System Characteristics . . . . . . . . . . . . . . . . 58
4.5.1 Controller-Estimator Transfer Function . . . . . . . . . . 60
4.6 Reduced-order State Estimators . . . . . . . . . . . . . . . . . . . 61
4.6.1 Example page 263 . . . . . . . . . . . . . . . . . . . . . . 65
4.7 Systems with Inputs . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.7.1 Full State Feedback . . . . . . . . . . . . . . . . . . . . . 65
Chapter 1
Linear Systems

1.1 Introduction
Control theory can be approached from a number of directions. The first
systematic methods for dealing with what is now called control theory began
to emerge in the 1930s. Transfer functions and frequency-domain techniques
dominated the so-called classical approach to control theory. In the late 1950s
and early 1960s a time-domain approach using state-variable descriptions started
to emerge. The 1980s saw great advances in control theory for the robust design
of systems with uncertainties in their dynamic characteristics; the concepts of
H∞ control and µ-synthesis were introduced.

For a number of years the state-variable approach was synonymous with
modern control theory. At present, the state-variable approach and the
various transfer-function-based methods are considered on an equal level and
nicely complement each other. This course is concerned with the analysis and
design of control systems from the state-variable point of view. The systems
studied are assumed to be linear time-invariant (LTI).

Linear time-invariant systems are usually described mathematically in one of
two domains: time domain and frequency domain. In the time domain, the system's
representation is in the form of a differential equation. The frequency-domain
approach usually results in a system representation in the form of a transfer
function. By use of the Laplace transform the transfer function can be derived
from the differential equation, and a differential-equation model can be derived
from the transfer function using the inverse Laplace transform. A transfer
function can be written only for the case in which the system model is a linear
time-invariant differential equation and the system initial conditions are ignored.

It is assumed that the student is familiar with obtaining the mathematical
models of various physical systems in the form of differential equations and
transfer functions. Knowledge of the laws of physics for translational mechanical,
rotational mechanical, and electrical systems is also assumed.


1.2 Review of System Modeling


In this section we briefly review obtaining mathematical models of physical
systems by means of several examples. By the term mathematical model
we mean the mathematical relationships that relate the output of a system to
its input. Such models can be constructed from knowledge of the physical
characteristics of the system, e.g., mass for a mechanical system or resistance for
an electrical system.

Example 1.1 Consider the simple mechanical system of Figure 1.1. Derive the equations
of motion for the system.

Figure 1.1: Simple mechanical system (mass M with displacement y(t), driven by force f(t), restrained by spring K and damper B).

Solution  We sum forces on the mass M. Three forces influence the motion
of the mass, namely, the applied force, the frictional force, and the spring force.
Hence we can write

    M d²y(t)/dt² = f(t) − B dy(t)/dt − K y(t)

Rearranging the terms gives

    M d²y(t)/dt² + B dy(t)/dt + K y(t) = f(t)        (1.1)

This is a second-order differential equation with constant coefficients. Note
that the order of the differential equation is the order of the highest derivative.
Systems described by such equations are called linear systems of the same order
as the differential equation.

A transfer function can be found for the system of Figure 1.1, with the
applied force f(t) as the input and the displacement of the mass y(t) as the
output. Taking the Laplace transform of the system equation (1.1) gives

    M s²Y(s) + B sY(s) + K Y(s) = (M s² + B s + K)Y(s) = F(s)

The initial conditions are ignored, since a transfer function is to be derived.
Thus the transfer function is given by

    G(s) = Y(s)/F(s) = 1/(M s² + B s + K)        (1.2)

Transfer functions represent the ratio of a system's frequency-domain output to
the frequency-domain input, assuming that the initial conditions on the system
are zero.
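Equation (1.2) is easy to sanity-check numerically. A minimal sketch with SciPy, assuming illustrative parameter values M = 1, B = 2, K = 5 (not taken from the notes); the unit-step force response should settle at the static deflection 1/K:

```python
import numpy as np
from scipy import signal

# G(s) = 1/(M s^2 + B s + K) from Equation (1.2).
# M, B, K below are illustrative assumptions, not values from the notes.
M, B, K = 1.0, 2.0, 5.0
G = signal.TransferFunction([1.0], [M, B, K])

# Displacement y(t) for a unit-step force f(t).
t, y = signal.step(G, N=500)

print(y[-1])   # settles near the static deflection 1/K = 0.2
```

With these values the damping ratio is B/(2√(MK)) ≈ 0.45, so the response overshoots before settling.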

Example 1.2  As a second example, consider the mechanical system shown in Figure 1.2.
This is a simplified model of a car suspension system of one wheel, with M1
the mass of the car, B the shock absorber, K1 the springs, M2 the mass of the
wheel, and K2 the elastance of the tire.

Figure 1.2: Simplified car suspension system.

Solution  Note that two equations must be written, since two independent
displacements exist; that is, knowing displacement x1(t) does not give us knowledge
of displacement x2(t).

As mass M2 moves in the direction of positive x2, the spring K2 will compress
and react with a force K2 x2 against the motion. The spring K1 will also react
with a force K1(x2 − x1) against the motion. Likewise, the damper will resist the
motion with the viscous friction force B(dx2/dt − dx1/dt). The free-body diagram
of the system in Figure 1.3 shows the two masses and all of the forces acting on
them. Applying Newton's law, ΣF = ma, we get

    M2 d²x2/dt² = f(t) − B(dx2/dt − dx1/dt) − K1(x2 − x1) − K2 x2        (1.3)

For mass M1, only the spring K1 and the damper provide forces, which are equal
and opposite to the forces seen in (1.3). Therefore,

    M1 d²x1/dt² = B(dx2/dt − dx1/dt) + K1(x2 − x1)        (1.4)

Figure 1.3: Free-body diagrams showing the masses in Figure 1.2 and the forces
that act on them (K1(x2 − x1) and B(dx2/dt − dx1/dt) act between M1 and M2;
f(t) and K2 x2 act on M2).

Rearranging (1.3) and (1.4) into a more convenient form,

    M1 d²x1/dt² + B(dx1/dt − dx2/dt) + K1(x1 − x2) = 0

    M2 d²x2/dt² + B(dx2/dt − dx1/dt) + K1(x2 − x1) + K2 x2 = f(t)

Taking the Laplace transform of these equations yields

    M1 s²X1(s) + B[sX1(s) − sX2(s)] + K1[X1(s) − X2(s)] = 0        (1.5)

    M2 s²X2(s) + B[sX2(s) − sX1(s)] + K1[X2(s) − X1(s)] + K2 X2(s) = F(s)        (1.6)

Suppose the transfer function is desired between F(s) and X1(s), that is, between
a force applied to the wheel and the resulting displacement of the car.
First we solve Equation (1.5) for X1(s),

    X1(s) = [(Bs + K1)/(M1 s² + Bs + K1)] X2(s) = G1(s)X2(s)

where

    G1(s) = (Bs + K1)/(M1 s² + Bs + K1)

Next we solve Equation (1.6) for X2(s),

    X2(s) = [1/(M2 s² + Bs + K1 + K2)] F(s) + [(Bs + K1)/(M2 s² + Bs + K1 + K2)] X1(s)
          = G2(s)F(s) + G3(s)X1(s)

where

    G2(s) = 1/(M2 s² + Bs + K1 + K2)

and

    G3(s) = (Bs + K1)/(M2 s² + Bs + K1 + K2)

To find the transfer function between F(s) and X1(s), we construct a block
diagram for this example from the system equations, as shown in Figure 1.4.

Figure 1.4: Model for Example 1.2.

From the block diagram, the transfer function is

    T(s) = X1(s)/F(s) = G1(s)G2(s)/(1 − G1(s)G3(s))

This expression may be evaluated to yield

    T(s) = (Bs + K1)/(M1 M2 s⁴ + B(M1 + M2)s³ + (K1 M2 + K1 M1 + K2 M1)s² + K2 Bs + K1 K2)
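The block-diagram algebra leading to T(s) is easy to get wrong, so it is worth verifying symbolically. A minimal check with SymPy (the script is ours, not part of the original notes):

```python
import sympy as sp

s, M1, M2, B, K1, K2 = sp.symbols('s M1 M2 B K1 K2', positive=True)

# Block transfer functions G1, G2, G3 from Example 1.2.
G1 = (B*s + K1) / (M1*s**2 + B*s + K1)
G2 = 1 / (M2*s**2 + B*s + K1 + K2)
G3 = (B*s + K1) / (M2*s**2 + B*s + K1 + K2)

# Loop formula T = G1 G2 / (1 - G1 G3) from the block diagram.
T = G1*G2 / (1 - G1*G3)

# Closed form quoted in the notes.
den = (M1*M2*s**4 + B*(M1 + M2)*s**3
       + (K1*M2 + K1*M1 + K2*M1)*s**2 + K2*B*s + K1*K2)
T_notes = (B*s + K1) / den

# The difference of the two rational functions should cancel to zero.
print(sp.simplify(T - T_notes))   # 0
```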

Example 1.3  Consider the circuit of Figure 1.5. In this circuit we consider v1(t) to be the
circuit input and v2(t) to be the circuit output. Write a set of equations such
that the solution of these equations will yield v2(t) as a function of v1(t).

Figure 1.5: Electrical circuit.

Solution  By Kirchhoff's voltage law, assuming zero initial condition on the
capacitor, we obtain

    R1 i(t) + R2 i(t) + (1/C) ∫₀ᵗ i(τ) dτ = v1(t)        (1.7)

    R2 i(t) + (1/C) ∫₀ᵗ i(τ) dτ = v2(t)        (1.8)

Hence, these two equations form the mathematical model of the circuit in Figure
1.5. We can also derive the transfer function of the circuit of Example 1.3. Taking
the Laplace transform of Equation (1.7) yields

    R1 I(s) + R2 I(s) + (1/sC) I(s) = V1(s)

We solve for I(s):

    I(s) = V1(s)/(R1 + R2 + 1/sC)

Next, the Laplace transform of (1.8) yields

    R2 I(s) + (1/sC) I(s) = V2(s)

We substitute the value of I(s) found earlier:

    V2(s) = [(R2 + 1/sC)/(R1 + R2 + 1/sC)] V1(s)

Rearranging this equation yields the transfer function G(s),

    G(s) = V2(s)/V1(s) = (R2 Cs + 1)/((R1 + R2)Cs + 1)

1.3 State-variable Modeling


In Section 1.2 two models of LTI systems were presented: linear differential
equations with constant coefficients and transfer functions. In this section we
consider a third type of model: the state-variable model. The set of equations
(1.3) and (1.4) is coupled, in the sense that the variables in one appear in the
other. This implies that they must be solved simultaneously, or else they must be
combined into a single, higher-order differential equation by substituting one into
the other. Finding the transfer function was a long and tedious exercise. Instead,
we prefer to write the dynamic equations of physical systems as state equations.
State equations are simply collections of first-order differential equations that
together represent exactly the same information as the original larger differential
equation. Of course, with an nth-order differential equation, we will need n first-
order equations.

The variables used to write these n first-order equations are called state
variables. The collection of state variables at any given time is known as the
state of the system, and the set of all values that can be taken on by the state
is known as the state space. The evolution of the state may be thought of as a
trajectory in n-dimensional space, representing the manner in which the state
variables change as a function of time.

To illustrate state-variable modeling we begin by giving an example. The
system model used to illustrate state variables is given in Figure 1.6. This is
the same model as in Example 1.1. The differential equation describing this system
was already determined in (1.1) as

    M d²y(t)/dt² + B dy(t)/dt + K y(t) = f(t)        (1.9)

Figure 1.6: Simple mechanical system (identical to Figure 1.1).

and the transfer function given by

    G(s) = Y(s)/F(s) = 1/(M s² + B s + K)        (1.10)

This equation gives a description of the position y(t) as a function of the force
f(t). Suppose that we also want information about the velocity. Using the state-
variable approach, we define the two state variables x1(t) and x2(t) as

    x1(t) = y(t)        (1.11)

and

    x2(t) = dy(t)/dt = dx1(t)/dt = ẋ1(t)        (1.12)

Thus x1(t) is the position of the mass and x2(t) is its velocity. Then from (1.9),
(1.11) and (1.12), we may write

    dx2(t)/dt = d²y(t)/dt² = ẋ2(t) = −(B/M)x2(t) − (K/M)x1(t) + (1/M)f(t)        (1.13)

The state-variable model is usually written in a specific format, which is given
by rearranging the equations as

    ẋ1(t) = x2(t)
    ẋ2(t) = −(K/M)x1(t) − (B/M)x2(t) + (1/M)f(t)
    y(t) = x1(t)

Usually state equations are written in a vector-matrix format as

    [ẋ1(t)]   [   0       1  ] [x1(t)]   [  0 ]
    [ẋ2(t)] = [−K/M   −B/M ] [x2(t)] + [1/M ] f(t)

    y(t) = [1  0] [x1(t)]
                  [x2(t)]

1.3.1 The Concept of State


The concept of state occupies a central position in modern control theory. It
is a complete summary of the status of the system at a particular point in time
and is defined as:

Definition  The state of a system at any time t0 is the amount of information
at t0 that, together with all inputs for t ≥ t0, uniquely determines the
behaviour of the system for all t ≥ t0.

Knowledge of the state at some initial time t0 , plus knowledge of the system
inputs after t0 , allows the determination of the state at a later time t1 . As
far as the state at t1 is concerned, it makes no difference how the initial state
was attained. Thus the state at t0 constitutes a complete history of the system
behaviour prior to t0 .
The most general state space representation of a LTI system is given by

ẋ(t) = Ax(t) + Bu(t) (1.14)


y(t) = Cx(t) + Du(t) (1.15)

where

x(t) = state vector = (n × 1) vector of the states of an nth -order system


A = (n × n) system matrix
B = (n × r) input matrix
u(t) = input vector = (r × 1) vector composed of the system input functions
y(t) = output vector = (p × 1) vector composed of the defined outputs
C = (p × n) output matrix
D = (p × r) matrix to represent direct coupling between input and output

Equation (1.14) is called the state equation, and Equation (1.15) is called the
output equation, together they are referred to as the state-variable equations.
Equations (1.14) and (1.15) are shown in block diagram form in Figure 1.7.
The heavier lines indicate that the signals are vectors, and the integrator symbol
really indicates n scalar integrators.

Figure 1.7: State space representation of CT linear system.



Example 1.4  Consider the system described by the coupled differential equations

    ÿ1 + k1 ẏ1 + k2 y1 = u1 + k3 u2
    ẏ2 + k4 y2 + k5 ẏ1 = k6 u1

where u1 and u2 are inputs, y1 and y2 are outputs, and ki, i = 1, …, 6, are system
parameters. Write a state-space representation for the differential equations.

Solution  To generate state equations, we will introduce the variables

    x1 = y1,    x2 = ẏ1 = ẋ1,    x3 = y2

From the system differential equations we write

ẋ2 = −k2 x1 − k1 x2 + u1 + k3 u2
ẋ3 = −k5 x2 − k4 x3 + k6 u1

We rewrite the differential equations in the following order:

ẋ1 = x2
ẋ2 = −k2 x1 − k1 x2 + u1 + k3 u2
ẋ3 = −k5 x2 − k4 x3 + k6 u1

with the output equations

y1 = x1
y2 = x3

These equations may be written in matrix form, ẋ = Ax + Bu and y = Cx, as

    [ẋ1]   [  0     1     0 ] [x1]   [ 0    0 ]
    [ẋ2] = [−k2   −k1     0 ] [x2] + [ 1   k3 ] [u1]
    [ẋ3]   [  0   −k5   −k4 ] [x3]   [ k6   0 ] [u2]

    [y1]   [1  0  0] [x1]
    [y2] = [0  0  1] [x2]
                     [x3]

1.4 Simulation Diagrams


In the previous section we presented examples of finding the state model of a
system directly from the system differential equations. The procedure in these
examples is very useful and is employed in many practical situations. However,
sometimes only a transfer function may be available to describe a system.

We obtain state models directly from a transfer function by means of a
simulation diagram. A simulation diagram is a type of block diagram or flow
graph that is constructed to have a given transfer function or to model a set of
differential equations. Given the transfer function, the differential equations, or
the state equations of a system, we can construct a simulation diagram of the
system.
Simulation diagrams are very useful in constructing either digital or analog
computer simulations of a system. The basic element of the simulation diagram
is the integrator, which can easily be constructed using electronic devices. Figure
1.8 shows the block diagram of an integrating device.

Figure 1.8: Integrating device.

In this figure

    y(t) = ∫ x(t) dt

and the Laplace transform of this equation yields

    Y(s) = (1/s) X(s)
Note that if the output of an integrator is labeled y(t), the input to the
integrator must be dy/dt. Two integrators are cascaded in Figure 1.9; if the
output of the second integrator is y(t), the input to this integrator must be
ẏ(t). (Note that we combine time-domain with s-domain representations only in
simulation diagrams.)

Figure 1.9: Cascaded integrating devices.

The input to the first integrator must be ÿ(t). We can use these two integrators
to construct a simulation diagram of the mechanical system of Figure 1.1. The
input to the cascaded integrators in Figure 1.9 is ÿ(t), and the equation that
ÿ(t) must satisfy for the mechanical system is obtained from (1.1) as

    ÿ(t) = −(B/M)ẏ(t) − (K/M)y(t) + (1/M)f(t)

Hence a summing junction and appropriate gains can be added to the block
diagram of Figure 1.9 to satisfy this equation, as shown in Figure 1.10.
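The cascaded-integrator structure maps directly onto numerical simulation: each integrator becomes a running sum. A rough forward-Euler sketch of the diagram in Figure 1.10, with illustrative values for M, B, K, the step size, and the step input (all our assumptions):

```python
# Forward-Euler simulation of the two cascaded integrators of Figure 1.10.
# M, B, K, dt, T and the unit-step input are illustrative assumptions.
M, B, K = 1.0, 2.0, 5.0
dt, T = 1e-4, 10.0

y, yd = 0.0, 0.0            # integrator outputs, zero initial conditions
f = 1.0                     # unit-step input force

for _ in range(int(T / dt)):
    ydd = (f - B*yd - K*y) / M   # summing junction plus gains
    y += dt * yd                 # second integrator: yd -> y
    yd += dt * ydd               # first integrator: ydd -> yd

print(y)   # settles near the static deflection f/K = 0.2
```

An analog computer does exactly this, except that the integration is continuous rather than in discrete steps.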

1.4.1 State-variable models from Transfer Function


A simulation diagram constructed from the system differential equations will
usually be unique. However, if the transfer function is used to construct the
simulation diagram, the simulation diagram can take many forms; that is, the
simulation diagram is not unique.

Figure 1.10: Simulation diagram for the system of Figure 1.1.

Next, we consider two common and useful forms of the simulation diagram,
namely, the control canonical form and the observer canonical form (the names
will become evident later in the course). The two different simulation diagrams
are derived from a general transfer function of the form

    G(s) = Y(s)/U(s) = (Σ_{i=0}^{m} b_i s^i)/(Σ_{i=0}^{n} a_i s^i)

         = (b0 + ⋯ + b_{m−1} s^{m−1} + b_m s^m)/(a0 + ⋯ + a_{n−1} s^{n−1} + s^n)        (1.16)

where

    m < n    and    a_n = 1

Control Canonical Form

Also called the phase-variable model. As an example, consider m = 2 and n = 3
in (1.16); therefore,

    Y(s) = [(b0 + b1 s + b2 s²)/(a0 + a1 s + a2 s² + s³)] U(s)

Divide numerator and denominator by sⁿ, in this example s³; hence,

    Y(s) = [(b0 s⁻³ + b1 s⁻² + b2 s⁻¹)/(a0 s⁻³ + a1 s⁻² + a2 s⁻¹ + 1)] U(s)

Set

    W(s) = U(s)/(a0 s⁻³ + a1 s⁻² + a2 s⁻¹ + 1)

This gives

    W(s) = U(s) − [a0 s⁻³ + a1 s⁻² + a2 s⁻¹] W(s)

and

    Y(s) = [b0 s⁻³ + b1 s⁻² + b2 s⁻¹] W(s)
12 CHAPTER 1. LINEAR SYSTEMS

Figure 1.11: Control canonical form (input U(s), internal signal W(s), output Y(s)).

A simulation diagram in the control canonical form, shown in Figure 1.11,
can now be drawn. Once a simulation diagram of a transfer function is constructed,
a state model of the system is easily obtained. The procedure is as follows:

1. Assign a state variable to the output of each integrator, starting from right
to left. (We could assign state variables from left to right to obtain what
we call the input feedforward canonical form.)

2. Write an equation for the input of each integrator and an equation for
each system output.

Following the procedure above, the state variables satisfy

    ẋ1 = x2
    ẋ2 = x3
    ẋ3 = −a0 x1 − a1 x2 − a2 x3 + u(t)

while the output is

    y(t) = b0 x1 + b1 x2 + b2 x3

In matrix form this yields the following state-variable model:

    ẋ = [  0     1     0 ] x + [0] u
        [  0     0     1 ]     [0]
        [−a0   −a1   −a2 ]     [1]

    y = [b0  b1  b2] x

Note the direct connection with the coefficients of the transfer function. The
bottom row of the A matrix contains the negatives of the coefficients of the
characteristic equation (i.e., the denominator of G(s)), starting on the left with
−a0 and ending on the right with −a2. Above the bottom row is a column of
zeros on the left and a 2 × 2 identity matrix on the right. The B matrix is
similarly very simple: all the elements are zero except for the bottom element,
which is the gain from the original system. The C matrix contains the coefficients
of the numerator of the transfer function, starting on the left with b0 and ending
on the right with b2. These equations are easily extended to the nth-order system.
It is important to note that state matrices are never unique, and each G(s) has
an infinite number of state models.
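The pattern just described can be automated. Below is a sketch of a helper that fills in the control canonical (A, B, C) from ascending-power numerator and denominator coefficients (the function name and the numeric cross-check are our choices), verified against Example 1.5's transfer function:

```python
import numpy as np

def control_canonical(num, den):
    """Control canonical (A, B, C) for a strictly proper G(s).

    num = [b0, ..., bm], den = [a0, ..., a_{n-1}, 1],
    both in ascending powers of s, as in Equation (1.16)."""
    n = len(den) - 1
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)           # shift structure: xdot_i = x_{i+1}
    A[-1, :] = -np.asarray(den[:-1])     # bottom row: -a0 ... -a_{n-1}
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = np.zeros((1, n)); C[0, :len(num)] = num
    return A, B, C

# G(s) = (5s^2 + 7s + 4)/(s^3 + 3s^2 + 6s + 2), as in Example 1.5.
A, B, C = control_canonical([4, 7, 5], [2, 6, 3, 1])

# Cross-check: C (sI - A)^{-1} B should equal G(s) at a test point.
s = 2.0
G_ss = (C @ np.linalg.solve(s*np.eye(3) - A, B))[0, 0]
G_tf = (5*s**2 + 7*s + 4) / (s**3 + 3*s**2 + 6*s + 2)
print(abs(G_ss - G_tf))   # ~0
```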

Observer Canonical Form


In addition to control canonical form, we can draw a simulation diagram called
the observer canonical form. To show how observer canonical form can be
derived, consider the transfer function in (1.16) with m = 2 and n = 3. Equation
(1.16) is written in the form

Y (s)[a0 + a1 s + a2 s2 + s3 ] = [b0 + b1 s + b2 s2 ]U (s)

Divide both sides by s3 to obtain

Y (s)[1 + a2 s−1 + a1 s−2 + a0 s−3 ] = [b2 s−1 + b1 s−2 + b0 s−3 ]U (s)

leading to

Y (s) = −[a2 s−1 + a1 s−2 + a0 s−3 ]Y (s) + [b2 s−1 + b1 s−2 + b0 s−3 ]U (s)

This relationship can be implemented by using a simulation diagram as shown


in Figure 1.12.

Figure 1.12: Observer canonical form (input U(s), output Y(s)).

The state equations are written as

    ẋ = [−a2   1   0] x + [b2] u
        [−a1   0   1]     [b1]
        [−a0   0   0]     [b0]

    y = [1  0  0] x

Remark  We say G(s) is strictly proper if m < n in (1.16). If G(s) is strictly
proper, we can write down a state-space representation at once by filling the
negated denominator coefficients into the bottom row of A if a control canonical
form is required, or into the leftmost column if an observer canonical form is
desired. If m = n, G(s) is proper, but not strictly proper. In this case we have
to divide the numerator of G(s) by the denominator. This leads to a feedthrough
term, i.e., the D matrix is not zero. If G(s) is not proper, no state-space
representation exists.
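A corresponding helper for the observer canonical form fills the negated denominator coefficients down the leftmost column of A, exactly as the Remark describes (the function name and test point are our choices; the transfer function is the one from Example 1.5):

```python
import numpy as np

def observer_canonical(num, den):
    """Observer canonical (A, B, C) for a strictly proper G(s);
    num, den in ascending powers of s with den monic, as in (1.16)."""
    n = len(den) - 1
    A = np.zeros((n, n))
    A[:, 0] = -np.asarray(den[-2::-1], dtype=float)  # -a_{n-1} ... -a0 down the left column
    A[:-1, 1:] = np.eye(n - 1)                       # superdiagonal of ones
    num_p = np.zeros(n)
    num_p[:len(num)] = num                           # pad b_m ... b_{n-1} with zeros
    B = np.asarray(num_p[::-1]).reshape(n, 1)        # b_{n-1} ... b0, top to bottom
    C = np.zeros((1, n)); C[0, 0] = 1.0
    return A, B, C

# Same G(s) as Example 1.5; both canonical forms must realize the same G(s).
A, B, C = observer_canonical([4, 7, 5], [2, 6, 3, 1])

s = 2.0
G_ss = (C @ np.linalg.solve(s*np.eye(3) - A, B))[0, 0]
print(abs(G_ss - (5*s**2 + 7*s + 4)/(s**3 + 3*s**2 + 6*s + 2)))   # ~0
```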

Example 1.5  Find the state and output equations for

    G(s) = (5s² + 7s + 4)/(s³ + 3s² + 6s + 2)

in control canonical form.

Solution  State equation:

    ẋ = [ 0    1    0] x + [0] u
        [ 0    0    1]     [0]
        [−2   −6   −3]     [1]

The output equation is

    y = [4  7  5] x
Example 1.6  Find the state and output equations for

    G(s) = 1/(2s² − s + 3)

Solution  State equation:

    ẋ = [   0      1 ] x + [  0 ] u
        [−3/2    1/2 ]     [1/2 ]

The output equation is

    y = [1  0] x
Example 1.7  Write a state-variable model for the following differential equation:

    2ÿ − ẏ + 3y = u̇ − 2u

Solution  If we attempt to use the definitions of state variables presented
earlier, as in Example 1.4, we require derivatives of u in the state equations.
According to the standard form of (1.14) and (1.15), this is not allowed. We
need to eliminate the derivatives of u. A useful formulation for the state variables
here is to obtain a transfer function and then use a simulation diagram to
obtain the state model. The transfer function of the system is

    Y(s) = [(s − 2)/(2s² − s + 3)] U(s)

The state model in control canonical form is given by

    ẋ = [   0      1 ] x + [0] u
        [−3/2    1/2 ]     [1]

    y = [−1  1/2] x

To show that the answer is correct, let us construct a simulation diagram. First
we express the transfer function in standard form:

    Y(s) = [((1/2)s⁻¹ − s⁻²)/(1 − (1/2)s⁻¹ + (3/2)s⁻²)] U(s)

and introduce an auxiliary signal W(s):

    Y(s) = ((1/2)s⁻¹ − s⁻²) W(s),    W(s) := [1/(1 − (1/2)s⁻¹ + (3/2)s⁻²)] U(s)

Therefore,

    W(s)[1 − (1/2)s⁻¹ + (3/2)s⁻²] = U(s)

and

    Y(s) = ((1/2)s⁻¹ − s⁻²) W(s)

Figure 1.13: Simulation diagram for Example 1.7 in control canonical form.

After we assign a state variable to the output of each integrator, from right to
left, we get

    ẋ1 = x2
    ẋ2 = −(3/2)x1 + (1/2)x2 + u
    y = −x1 + (1/2)x2
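As a numerical cross-check of Example 1.7, the state model should reproduce G(s) = (s − 2)/(2s² − s + 3) when C(sI − A)⁻¹B is evaluated; a small sketch (the test frequencies are arbitrary choices of ours):

```python
import numpy as np

# State model from Example 1.7 in control canonical form.
A = np.array([[0.0, 1.0], [-1.5, 0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[-1.0, 0.5]])

# Compare C (sI - A)^{-1} B with G(s) = (s - 2)/(2s^2 - s + 3)
# at a few test points.
for s in [1.0, 2.0, 5.0]:
    G_ss = (C @ np.linalg.solve(s*np.eye(2) - A, B))[0, 0]
    G_tf = (s - 2) / (2*s**2 - s + 3)
    print(abs(G_ss - G_tf) < 1e-9)   # True at each test point
```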

1.4.2 Transfer Functions from State-variable Models

It may reasonably be asked at this point how the state-variable description,
which is in the time domain, relates to the transfer function representation.
Consider a state-variable model initially at rest:

    ẋ(t) = Ax(t) + Bu(t),    x(0) = 0
    y(t) = Cx(t) + Du(t)

Taking Laplace transforms yields


sX(s) = AX(s) + BU(s)
Y(s) = CX(s) + DU(s)
The term sX(s) must be written as sIX(s), where I is the identity matrix.
This additional step is necessary, since the subtraction of the matrix A from
the scalar s is not defined. Then,
sX(s) − AX(s) = (sI − A)X(s) = BU(s)
or
X(s) = (sI − A)−1 BU(s)
Substituting in the output equation, we get
Y(s) = C(sI − A)−1 BU(s) + DU(s)
We conclude that the transfer function from U (s) to Y (s) is then
G(s) = C(sI − A)−1 B + D
Example 1.8  The state equations of a system are given by

    ẋ = [−2    0] x + [1] u
        [−3   −1]     [2]

    y = [3  1] x

Determine the transfer function for the system.

Solution  The transfer function is given by

    G(s) = C(sI − A)⁻¹B + D

First, we calculate (sI − A):

    sI − A = s[1  0] − [−2    0] = [s + 2     0  ]
              [0  1]   [−3   −1]   [  3    s + 1 ]

Therefore,

    det(sI − A) = (s + 2)(s + 1) = s² + 3s + 2

Then, letting det(sI − A) = ∆(s) for convenience, we have

    (sI − A)⁻¹ = adj(sI − A)/det(sI − A) = [(s + 1)/∆(s)        0       ]
                                           [  −3/∆(s)     (s + 2)/∆(s)  ]

and the transfer function is given by

    G(s) = [3  1] [(s + 1)/∆(s)        0       ] [1]
                  [  −3/∆(s)     (s + 2)/∆(s)  ] [2]

         = [3  1] [(s + 1)/∆(s) ] = (5s + 4)/(s² + 3s + 2)
                  [(2s + 1)/∆(s)]
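The matrix inversion in Example 1.8 can be repeated symbolically. A short SymPy check (our script, not part of the original notes):

```python
import sympy as sp

s = sp.symbols('s')

# Matrices from Example 1.8.
A = sp.Matrix([[-2, 0], [-3, -1]])
B = sp.Matrix([[1], [2]])
C = sp.Matrix([[3, 1]])

# G(s) = C (sI - A)^{-1} B  (D = 0 here).
G = sp.simplify((C * (s*sp.eye(2) - A).inv() * B)[0, 0])

# The difference from the quoted answer should cancel to zero.
print(sp.simplify(G - (5*s + 4)/(s**2 + 3*s + 2)))   # 0
```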

1.5 System Interconnections


Frequently, a system is made up of components connected together in some
topology. This raises the question: if we have state models for the components,
how can we assemble them into a state model for the overall system?

1.5.1 Series and Parallel Connections


Consider the two systems connected in series as shown in Figure 1.14.

Figure 1.14: Series connection.

The diagram stands for the equations

    ẋ1 = A1 x1 + B1 u
    y1 = C1 x1 + D1 u
    ẋ2 = A2 x2 + B2 y1
    y = C2 x2 + D2 y1

Let us take the overall state to be

    x = [x1]
        [x2]

Then

    ẋ = Ax + Bu,    y = Cx + Du

where

    A = [  A1      0 ],    B = [  B1  ]
        [B2 C1    A2 ]         [B2 D1 ]

    C = [D2 C1   C2],    D = D2 D1

Obtaining a state model of two systems connected in parallel as shown in Figure


1.15 is very similar to the series connection and is left for the student.

1.5.2 Feedback Connection


Consider the feedback connection shown in Figure 1.16. The state equations are
given by (assuming D2 = 0)

    ẋ1 = A1 x1 + B1 e = A1 x1 + B1 (r − C2 x2)
    ẋ2 = A2 x2 + B2 u = A2 x2 + B2 (C1 x1 + D1 (r − C2 x2))
    y = C2 x2
18 CHAPTER 1. LINEAR SYSTEMS

Figure 1.15: Parallel connection.

Figure 1.16: Feedback connection.

Taking

    x = [x1]
        [x2]

we get

    ẋ = Ax + Br,    y = Cx

where

    A = [  A1          −B1 C2     ],    B = [  B1  ]
        [B2 C1    A2 − B2 D1 C2   ]         [B2 D1 ]

    C = [0   C2]
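The block matrices of Section 1.5.1 can be assembled mechanically. Below is a sketch of a series-connection helper, checked on two illustrative first-order systems 1/(s + 1) and 1/(s + 2) (the helper name and numeric values are our choices):

```python
import numpy as np

def series(ss1, ss2):
    """Stack state models (A1,B1,C1,D1) -> (A2,B2,C2,D2) in series,
    following the block matrices of Section 1.5.1."""
    A1, B1, C1, D1 = ss1
    A2, B2, C2, D2 = ss2
    n1, n2 = A1.shape[0], A2.shape[0]
    A = np.block([[A1, np.zeros((n1, n2))],
                  [B2 @ C1, A2]])
    B = np.vstack([B1, B2 @ D1])
    C = np.hstack([D2 @ C1, C2])
    D = D2 @ D1
    return A, B, C, D

# Two first-order systems: G1 = 1/(s+1), G2 = 1/(s+2).
sys1 = (np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]]), np.array([[0.0]]))
sys2 = (np.array([[-2.0]]), np.array([[1.0]]), np.array([[1.0]]), np.array([[0.0]]))
A, B, C, D = series(sys1, sys2)

# The cascade should realize 1/((s+1)(s+2)); evaluate at s = 1.
s = 1.0
G = (C @ np.linalg.solve(s*np.eye(2) - A, B) + D)[0, 0]
print(abs(G - 1.0/6.0) < 1e-12)   # True
```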

1.6 Solution of State Equations


We have developed procedures for writing the state equations of a system,
given the system differential equations, the system transfer function, or the
simulation diagram. In this section we present two methods for finding the
solution of the state equations. The standard form of the state equation is
given by
ẋ(t) = Ax(t) + Bu(t)
This equation will now be solved using the Laplace transform. Taking Laplace
transforms
sX(s) − x(0) = AX(s) + BU(s)

We wish to solve this equation for X(s); to do this we rearrange the last equation:

    (sI − A)X(s) = x(0) + BU(s)

Pre-multiplying by (sI − A)−1 , we obtain

X(s) = (sI − A)−1 x(0) + (sI − A)−1 BU(s) (1.17)

and the state vector x(t) is the inverse Laplace transform of this equation.
Therefore,

    x(t) = e^{At} x(0) + ∫₀ᵗ e^{A(t−τ)} Bu(τ) dτ        (1.18)

If the initial time is t0, then

    x(t) = e^{A(t−t0)} x(t0) + ∫_{t0}^{t} e^{A(t−τ)} Bu(τ) dτ

The exponential matrix e^{At} is called the state transition matrix Φ(t) and is
defined as

    Φ(t) = L⁻¹{(sI − A)⁻¹} = e^{At}

so that

    Φ(s) = (sI − A)⁻¹

The exponential matrix e^{At} can also be represented by the power series

    Φ(t) = e^{At} = I + At + (1/2!)A²t² + (1/3!)A³t³ + ⋯        (1.19)

Equation (1.18) can be written as

    x(t) = Φ(t)x(0) + ∫₀ᵗ Φ(t − τ)Bu(τ) dτ        (1.20)

In Equation (1.20) the first term represents the response to a set of initial
conditions (the zero-input response), whilst the integral term represents the
response to a forcing function u(t) (the zero-state response). Similarly, the
output equation is given by

    y(t) = CΦ(t)x(0) + ∫₀ᵗ CΦ(t − τ)Bu(τ) dτ + Du(t)        (1.21)

Example 1.9  Use the infinite series in (1.19) to evaluate the transition matrix Φ(t) if

    A = [0  1]
        [0  0]

Solution  This is a good method only if A has a lot of zeros, since this
guarantees quick convergence of the infinite series. Clearly,

    A² = [0  0]
         [0  0]

and we stop here, since A² = 0 and any higher powers are zero. Therefore,

    e^{At} = I + At = [1  t]
                      [0  1]

The most common way of evaluating the transition matrix Φ(t) is with the
Laplace transform, as the next example demonstrates.

Example 1.10  Use the Laplace transform to find the transition matrix if A is the same as
in Example 1.9.

Solution  We first calculate the matrix (sI − A):

    sI − A = s[1  0] − [0  1] = [s  −1]
              [0  1]   [0  0]   [0   s]

The determinant of this matrix is

    det(sI − A) = s²

and the adjoint matrix is

    adj(sI − A) = [s  1]
                  [0  s]

Next we determine the inverse of the matrix (sI − A):

    (sI − A)⁻¹ = adj(sI − A)/det(sI − A) = (1/s²)[s  1] = [1/s   1/s²]
                                                 [0  s]   [ 0     1/s]

The state transition matrix is the inverse Laplace transform of this matrix:

    e^{At} = Φ(t) = [1  t]
                    [0  1]
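The same transition matrix can be obtained numerically with `scipy.linalg.expm`, which evaluates the matrix exponential of (1.19) in a numerically careful way; a quick check against the closed form (the value of t is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import expm

# A from Examples 1.9 and 1.10.
A = np.array([[0.0, 1.0], [0.0, 0.0]])

# Numerical matrix exponential vs the closed form Phi(t) = [[1, t], [0, 1]].
t = 2.5
Phi_num = expm(A * t)
Phi_exact = np.array([[1.0, t], [0.0, 1.0]])
print(np.allclose(Phi_num, Phi_exact))   # True
```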

Example 1.11  Consider the system described by the transfer function

    G(s) = 1/(s² + 3s + 2)

(a) Write down the state equation in observer canonical form.
(b) Evaluate the state transition matrix Φ(t).
(c) Find the zero-state response if a unit step is applied.

Solution  (a) The state equation in observer canonical form is given by

    ẋ = [−3  1] x + [0] u
        [−2  0]     [1]

(b) To find the state transition matrix, we first calculate the matrix (sI − A):

    sI − A = s[1  0] − [−3  1] = [s + 3   −1]
              [0  1]   [−2  0]   [  2      s]

The determinant of this matrix is

    det(sI − A) = s² + 3s + 2 = (s + 1)(s + 2)

and the adjoint matrix is

    adj(sI − A) = [ s       1  ]
                  [−2    s + 3 ]

Next we determine the inverse of the matrix (sI − A):

    (sI − A)⁻¹ = adj(sI − A)/det(sI − A)

               = [   s/((s + 1)(s + 2))         1/((s + 1)(s + 2))   ]
                 [ −2/((s + 1)(s + 2))     (s + 3)/((s + 1)(s + 2))  ]

               = [ −1/(s + 1) + 2/(s + 2)     1/(s + 1) − 1/(s + 2) ]
                 [ −2/(s + 1) + 2/(s + 2)    2/(s + 1) − 1/(s + 2)  ]

The state transition matrix is the inverse Laplace transform of this matrix:

    Φ(t) = [ −e^{−t} + 2e^{−2t}     e^{−t} − e^{−2t} ]
           [−2e^{−t} + 2e^{−2t}    2e^{−t} − e^{−2t} ]

(c) If a unit step is applied as an input, then U(s) = 1/s, and the second term
in (1.17) becomes

    (sI − A)⁻¹BU(s) = [   s/((s + 1)(s + 2))         1/((s + 1)(s + 2))   ] [0] (1/s)
                      [ −2/((s + 1)(s + 2))     (s + 3)/((s + 1)(s + 2))  ] [1]

                    = [      1/(s(s + 1)(s + 2))  ]
                      [ (s + 3)/(s(s + 1)(s + 2)) ]

                    = [ (1/2)/s − 1/(s + 1) + (1/2)/(s + 2) ]
                      [ (3/2)/s − 2/(s + 1) + (1/2)/(s + 2) ]

The inverse Laplace transform of this term is

    L⁻¹{(sI − A)⁻¹BU(s)} = [ 1/2 − e^{−t} + (1/2)e^{−2t}  ]
                           [ 3/2 − 2e^{−t} + (1/2)e^{−2t} ]
Alternatively, we could have evaluated the zero-state response with the convolution integral,
\[
\int_0^t \Phi(t-\tau)Bu(\tau)\,d\tau = \int_0^t \begin{bmatrix} -e^{-(t-\tau)} + 2e^{-2(t-\tau)} & e^{-(t-\tau)} - e^{-2(t-\tau)} \\ -2e^{-(t-\tau)} + 2e^{-2(t-\tau)} & 2e^{-(t-\tau)} - e^{-2(t-\tau)} \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix}d\tau
\]
\[
= \begin{bmatrix} \displaystyle\int_0^t \left(e^{-(t-\tau)} - e^{-2(t-\tau)}\right)d\tau \\[3mm] \displaystyle\int_0^t \left(2e^{-(t-\tau)} - e^{-2(t-\tau)}\right)d\tau \end{bmatrix}
= \begin{bmatrix} \left. e^{-t}e^{\tau} - \frac{1}{2}e^{-2t}e^{2\tau} \right|_0^t \\[3mm] \left. 2e^{-t}e^{\tau} - \frac{1}{2}e^{-2t}e^{2\tau} \right|_0^t \end{bmatrix}
= \begin{bmatrix} (1-e^{-t}) - \frac{1}{2}(1-e^{-2t}) \\[2mm] 2(1-e^{-t}) - \frac{1}{2}(1-e^{-2t}) \end{bmatrix}
\]
\[
= \begin{bmatrix} \dfrac{1}{2} - e^{-t} + \dfrac{1}{2}e^{-2t} \\[2mm] \dfrac{3}{2} - 2e^{-t} + \dfrac{1}{2}e^{-2t} \end{bmatrix}
\]
The result shows that the solution may be evaluated either by the Laplace transform or by the convolution integral. □
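For a step input, the convolution integral also has the compact closed form x(t) = A⁻¹(e^{At} − I)B whenever A is invertible, since the integrand reduces to e^{As}B. A numerical sketch comparing this with the expressions derived above:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-3.0, 1.0],
              [-2.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

def x_zero_state(t):
    # For u(t) = unit step: x(t) = ∫ e^{A(t-τ)} B dτ = A^{-1}(e^{At} - I)B
    return np.linalg.inv(A) @ (expm(A * t) - np.eye(2)) @ B

t = 1.5
x_closed = np.array([[0.5 - np.exp(-t) + 0.5 * np.exp(-2 * t)],
                     [1.5 - 2 * np.exp(-t) + 0.5 * np.exp(-2 * t)]])
print(np.allclose(x_zero_state(t), x_closed))  # True
```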

1.6.1 Properties of the State-Transition Matrix


Three properties of the state-transition matrix will now be derived.

1. Φ(0) = I. This result follows directly from (1.19) by setting t = 0. The state transition matrix from Example 1.11 is used to illustrate this property,
\[
\Phi(t) = \begin{bmatrix} -e^{-t} + 2e^{-2t} & e^{-t} - e^{-2t} \\ -2e^{-t} + 2e^{-2t} & 2e^{-t} - e^{-2t} \end{bmatrix} \tag{1.22}
\]
Now,
\[
\Phi(0) = \begin{bmatrix} -e^0 + 2e^0 & e^0 - e^0 \\ -2e^0 + 2e^0 & 2e^0 - e^0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I
\]

2. Φ(t₂ − t₁)Φ(t₁ − t₀) = Φ(t₂ − t₀). To prove this result, recall that Φ(t) = e^{At}; hence,
\[
\Phi(t_2 - t_1)\Phi(t_1 - t_0) = e^{A(t_2 - t_1)}e^{A(t_1 - t_0)} = e^{A(t_2 - t_0)} = \Phi(t_2 - t_0)
\]
To illustrate this property, consider the following:
\[
\Phi(t_2 - t_1)\Phi(t_1 - t_0) = \begin{bmatrix} -e^{-(t_2-t_1)} + 2e^{-2(t_2-t_1)} & e^{-(t_2-t_1)} - e^{-2(t_2-t_1)} \\ -2e^{-(t_2-t_1)} + 2e^{-2(t_2-t_1)} & 2e^{-(t_2-t_1)} - e^{-2(t_2-t_1)} \end{bmatrix}
\]
\[
\times \begin{bmatrix} -e^{-(t_1-t_0)} + 2e^{-2(t_1-t_0)} & e^{-(t_1-t_0)} - e^{-2(t_1-t_0)} \\ -2e^{-(t_1-t_0)} + 2e^{-2(t_1-t_0)} & 2e^{-(t_1-t_0)} - e^{-2(t_1-t_0)} \end{bmatrix}
\]
The (1,1) element of the product matrix is given by
\[
\begin{aligned}
(1,1)\text{ element} &= [-e^{-(t_2-t_1)} + 2e^{-2(t_2-t_1)}][-e^{-(t_1-t_0)} + 2e^{-2(t_1-t_0)}] \\
&\quad + [e^{-(t_2-t_1)} - e^{-2(t_2-t_1)}][-2e^{-(t_1-t_0)} + 2e^{-2(t_1-t_0)}] \\
&= [e^{-(t_2-t_0)} - 2e^{-(t_2+t_1-2t_0)} - 2e^{-(2t_2-t_1-t_0)} + 4e^{-2(t_2-t_0)}] \\
&\quad + [-2e^{-(t_2-t_0)} + 2e^{-(t_2+t_1-2t_0)} + 2e^{-(2t_2-t_1-t_0)} - 2e^{-2(t_2-t_0)}]
\end{aligned}
\]
Combining these terms yields
\[
(1,1)\text{ element} = -e^{-(t_2-t_0)} + 2e^{-2(t_2-t_0)}
\]
which is the (1,1) element of Φ(t₂ − t₀). The other three elements of the product matrix can be verified in a like manner.
The second property is based on time invariance. It implies that a state-transition process can be divided into a number of sequential transitions. Figure 1.17 illustrates that the transition from t = t₀ to t = t₂ is equal to the transition from t₀ to t₁ followed by the transition from t₁ to t₂.

Figure 1.17: Property of the state-transition matrix.

3. Φ⁻¹(t) = Φ(−t). To derive the third property, postmultiply both sides of Φ(t) = e^{At} by e^{−At}:
\[
\Phi(t)e^{-At} = e^{At}e^{-At} = I
\]
Premultiplying this equation by Φ⁻¹(t) gives
\[
e^{-At} = \Phi^{-1}(t)
\]
Thus
\[
\Phi(-t) = \Phi^{-1}(t) = e^{-At}
\]
To illustrate this property, consider again the state-transition matrix in (1.22). The property implies that Φ(t)Φ(−t) = I, and indeed
\[
\Phi(t)\Phi(-t) = \begin{bmatrix} -e^{-t} + 2e^{-2t} & e^{-t} - e^{-2t} \\ -2e^{-t} + 2e^{-2t} & 2e^{-t} - e^{-2t} \end{bmatrix}\begin{bmatrix} -e^{t} + 2e^{2t} & e^{t} - e^{2t} \\ -2e^{t} + 2e^{2t} & 2e^{t} - e^{2t} \end{bmatrix} = I
\]
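The three properties can be spot-checked numerically for the A matrix of Example 1.11 (a sketch using `scipy.linalg.expm`; the times t₀, t₁, t₂ are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-3.0, 1.0],
              [-2.0, 0.0]])
Phi = lambda t: expm(A * t)  # state-transition matrix e^{At}

t0, t1, t2 = 0.3, 1.1, 2.4
# Property 1: Phi(0) = I
assert np.allclose(Phi(0.0), np.eye(2))
# Property 2: Phi(t2 - t1) Phi(t1 - t0) = Phi(t2 - t0)
assert np.allclose(Phi(t2 - t1) @ Phi(t1 - t0), Phi(t2 - t0))
# Property 3: Phi(-t) = inverse of Phi(t)
assert np.allclose(Phi(-t1), np.linalg.inv(Phi(t1)))
print("all three transition-matrix properties verified")
```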

1.7 Characteristic Equations

Characteristic equations play an important role in the study of linear systems. They can be defined with respect to differential equations, transfer functions, or state equations.

Characteristic Equation from a Differential Equation

Consider a linear time-invariant system described by the differential equation
\[
\frac{d^n y(t)}{dt^n} + a_{n-1}\frac{d^{n-1}y(t)}{dt^{n-1}} + \cdots + a_1\frac{dy(t)}{dt} + a_0 y(t)
= b_m\frac{d^m u(t)}{dt^m} + b_{m-1}\frac{d^{m-1}u(t)}{dt^{m-1}} + \cdots + b_1\frac{du(t)}{dt} + b_0 u(t) \tag{1.23}
\]
where n > m. By defining the operator s as
\[
s^k = \frac{d^k}{dt^k}, \qquad k = 1, 2, \cdots, n
\]
Equation (1.23) is written
\[
(s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0)y(t) = (b_m s^m + b_{m-1}s^{m-1} + \cdots + b_1 s + b_0)u(t)
\]
The characteristic equation of the system is defined as
\[
s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0 = 0 \tag{1.24}
\]

Characteristic Equation from a Transfer Function

The transfer function of the system described by (1.23) is
\[
G(s) = \frac{b_m s^m + b_{m-1}s^{m-1} + \cdots + b_1 s + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0}
\]
The characteristic equation is obtained by equating the denominator polynomial of the transfer function to zero.

Characteristic Equation from a State Equation

Recall that
\[
G(s) = C(sI - A)^{-1}B + D
\]
which can be written as
\[
G(s) = C\,\frac{\operatorname{adj}(sI - A)}{|sI - A|}\,B + D = \frac{C[\operatorname{adj}(sI - A)]B + |sI - A|D}{|sI - A|}
\]
Setting the denominator of the transfer-function matrix G(s) to zero gives the characteristic equation
\[
|sI - A| = 0
\]
which is an alternative form of the characteristic equation, but leads to the same equation as (1.24).
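Numerically, the coefficients of |sI − A| can be obtained with `numpy.poly`, which should reproduce the denominator of the transfer function. A sketch using the companion-form A of Example 1.11:

```python
import numpy as np

# Companion-form A for G(s) = 1/(s^2 + 3s + 2) (Example 1.11)
A = np.array([[-3.0, 1.0],
              [-2.0, 0.0]])

coeffs = np.poly(A)  # coefficients of det(sI - A), highest power first
print(coeffs)        # ≈ [1, 3, 2], i.e. s^2 + 3s + 2
```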

1.7.1 Eigenvalues
The roots of the characteristic equation are often referred to as the eigenvalues of the matrix A.

Example 1.12 Find the eigenvalues of the matrix A given by
\[
A = \begin{bmatrix} 1 & -1 \\ 0 & -1 \end{bmatrix}
\]
Solution: The characteristic equation of A is
\[
|sI - A| = s^2 - 1 = 0
\]
Therefore, |sI − A| = 0 gives the eigenvalues λ₁ = 1 and λ₂ = −1. □

1.7.2 Eigenvectors
Any nonzero vector pᵢ which satisfies the matrix equation
\[
(\lambda_i I - A)p_i = 0 \tag{1.25}
\]
where λᵢ, i = 1, 2, …, n, denotes the ith eigenvalue of A, is called the eigenvector of A associated with the eigenvalue λᵢ. The procedure for determining eigenvectors can be divided into two possible cases, depending on the results of the eigenvalue calculations.
Case 1: All eigenvalues are distinct.
Case 2: Some eigenvalues are multiple roots of the characteristic equation.

Case 1: Distinct Eigenvalues

If A has distinct eigenvalues, the eigenvectors can be solved directly from (1.25).

Example 1.13 Consider that a state equation has the matrix A given as in Example 1.12. Find the eigenvectors.

Solution: The eigenvalues were determined in Example 1.12 as λ₁ = 1 and λ₂ = −1. Let the eigenvectors be written as
\[
p_1 = \begin{bmatrix} p_{11} \\ p_{21} \end{bmatrix} \qquad p_2 = \begin{bmatrix} p_{12} \\ p_{22} \end{bmatrix}
\]
Substituting λ₁ = 1 and p₁ into (1.25), we get
\[
\begin{bmatrix} 0 & 1 \\ 0 & 2 \end{bmatrix}\begin{bmatrix} p_{11} \\ p_{21} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
\]
Thus, p₂₁ = 0, and p₁₁ is arbitrary, which in this case can be set equal to 1. Similarly, for λ₂ = −1, (1.25) becomes
\[
\begin{bmatrix} -2 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} p_{12} \\ p_{22} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
\]
which leads to
\[
-2p_{12} + p_{22} = 0
\]
The last equation has two unknowns, which means that one can be set arbitrarily. Let p₁₂ = 1; then p₂₂ = 2. The eigenvectors are
\[
p_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \qquad p_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \quad\square
\]

Example 1.14 Find the eigenvectors of the matrix
\[
A = \begin{bmatrix} 1 & 3 \\ -6 & -5 \end{bmatrix}
\]
Solution: The characteristic equation is s² + 4s + 13 = 0. Thus A has eigenvalues −2 ± 3j. We next find the eigenvectors associated with these eigenvalues. We first evaluate
\[
\lambda I - A = \begin{bmatrix} \lambda - 1 & -3 \\ 6 & \lambda + 5 \end{bmatrix}_{\lambda_1 = -2+3j} = \begin{bmatrix} -3+3j & -3 \\ 6 & 3+3j \end{bmatrix}
\]
Then, to find the related eigenvector, we solve
\[
\begin{bmatrix} -3+3j & -3 \\ 6 & 3+3j \end{bmatrix}\begin{bmatrix} p_{11} \\ p_{21} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
\]
which results in the two equations
\[
(-3+3j)p_{11} - 3p_{21} = 0
\]
\[
6p_{11} + (3+3j)p_{21} = 0
\]
or
\[
p_{21} = -(1-j)p_{11}
\]
Thus, if we choose p₁₁ = 1,
\[
p_1 = \begin{bmatrix} 1 \\ -1+j \end{bmatrix}
\]
To find the other eigenvector we simply conjugate the one already found, p₁. Thus
\[
p_2 = \begin{bmatrix} 1 \\ -1-j \end{bmatrix} \quad\square
\]

Case 2: Repeated Eigenvalues

An eigenvalue with multiplicity 2 or higher is called a repeated eigenvalue. If A has repeated eigenvalues, not all eigenvectors can be found using (1.25). We then use a variation of (1.25) to find so-called generalized eigenvectors. The next example illustrates the process.

Example 1.15 Find the eigenvectors of the matrix
\[
A = \begin{bmatrix} 2 & -8 \\ 2 & -6 \end{bmatrix}
\]
Solution: The characteristic equation is s² + 4s + 4 = 0. Thus, A has only one eigenvalue, λ = −2, repeated twice. We first find the eigenvector for λ = −2 using (1.25):
\[
\begin{bmatrix} \lambda - 2 & 8 \\ -2 & \lambda + 6 \end{bmatrix}_{\lambda=-2}\begin{bmatrix} p_{11} \\ p_{21} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
\]
Making the substitution λ = −2 in the matrix yields
\[
\begin{bmatrix} -4 & 8 \\ -2 & 4 \end{bmatrix}\begin{bmatrix} p_{11} \\ p_{21} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
\]
resulting in the equations
\[
-4p_{11} + 8p_{21} = 0
\]
\[
-2p_{11} + 4p_{21} = 0
\]
Both equations tell us the same thing, that
\[
p_{11} = 2p_{21}
\]
Thus, one choice for the eigenvector is
\[
p_1 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}
\]
For the generalized eigenvector associated with the repeated eigenvalue, we use a variation of the equation used to find the eigenvector. That is, we write
\[
\begin{bmatrix} \lambda - 2 & 8 \\ -2 & \lambda + 6 \end{bmatrix}_{\lambda=-2}\begin{bmatrix} p_{12} \\ p_{22} \end{bmatrix} = -\begin{bmatrix} 2 \\ 1 \end{bmatrix}
\]
which yields the equations
\[
-4p_{12} + 8p_{22} = -2
\]
\[
-2p_{12} + 4p_{22} = -1
\]
Either of these equations yields
\[
p_{22} = \frac{2p_{12} - 1}{4}
\]
Choosing p₁₂ = 1/2 yields
\[
p_2 = \begin{bmatrix} 1/2 \\ 0 \end{bmatrix} \quad\square
\]
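The generalized-eigenvector equation, written as (A − λI)p₂ = p₁, involves a singular coefficient matrix, so `numpy.linalg.lstsq` can be used to pick one particular solution (a sketch; any solution differing by a multiple of p₁ is equally valid, so the numbers need not match the hand-chosen p₂):

```python
import numpy as np

A = np.array([[2.0, -8.0],
              [2.0, -6.0]])
lam = -2.0
p1 = np.array([2.0, 1.0])  # ordinary eigenvector found above

# Solve the singular system (A - λI) p2 = p1 in the least-squares sense.
M = A - lam * np.eye(2)
p2, *_ = np.linalg.lstsq(M, p1, rcond=None)

assert np.allclose(M @ p2, p1)  # p2 is a valid generalized eigenvector
print(p2)
```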

1.8 Similarity Transformation

In this chapter, procedures have been presented for finding a state-variable model from system differential equations, from system transfer functions, and from system simulation diagrams. In this section, a procedure is given for finding a different state model from a given state model. It will be shown that a system has an unlimited number of state models. However, while the internal characteristics are different, each state model for a system has the same input-output characteristics (the same transfer function).
The state model of an LTI single-input, single-output system is given by
\[
\dot{x}(t) = Ax(t) + Bu(t) \tag{1.26}
\]
\[
y(t) = Cx(t) + Du(t) \tag{1.27}
\]
where x(t) is the n × 1 state vector, and u(t) and y(t) are the scalar input and output, respectively. Consider the state model of Equations (1.26) and (1.27), and suppose that the state vector x(t) can be expressed as
\[
x(t) = Pv(t) \tag{1.28}
\]
where P is an n × n nonsingular matrix, called a transformation matrix, or simply, a transformation. We can write
\[
v(t) = P^{-1}x(t)
\]
Substituting (1.28) into the state equation in (1.26) yields
\[
P\dot{v}(t) = APv(t) + Bu(t)
\]
Premultiplying the above equation by P⁻¹ to solve for v̇(t) results in the state model for the state vector v(t):
\[
\dot{v}(t) = P^{-1}APv(t) + P^{-1}Bu(t) \tag{1.29}
\]
Using (1.28), we find that the output equation in (1.27) becomes
\[
y(t) = CPv(t) + Du(t) \tag{1.30}
\]
We now have the state equations expressed as a function of the state vector x(t) in (1.26) and (1.27), and as a function of the transformed state vector v(t) in (1.29) and (1.30).
The state equations as a function of v(t) can be expressed in the standard format as
\[
\dot{v}(t) = A_v v(t) + B_v u(t) \tag{1.31}
\]
and
\[
y(t) = C_v v(t) + D_v u(t) \tag{1.32}
\]
Comparing (1.29) with (1.31), we get
\[
A_v = P^{-1}AP \qquad B_v = P^{-1}B
\]
Similarly, comparing (1.30) with (1.32), we see that
\[
C_v = CP \qquad D_v = D
\]
The transformation just described is called a similarity transformation, since in the transformed system such properties as the characteristic equation, eigenvectors, eigenvalues, and transfer function are all preserved by the transformation.

1.8.1 Properties of the Similarity Transformation

1. The eigenvalues of A and Aᵥ are equal. Consider the determinant of (sI − Aᵥ),
\[
|sI - A_v| = |sI - P^{-1}AP| = |sP^{-1}IP - P^{-1}AP| = |P^{-1}(sI - A)P|
\]
Since the determinant of a product of matrices is equal to the product of the determinants of the matrices, the last equation becomes
\[
|sI - A_v| = |P^{-1}|\,|sI - A|\,|P| = |sI - A|
\]
Thus the characteristic equation is preserved, which naturally leads to the same eigenvalues.

2. The following transfer functions are equal:
\[
C_v(sI - A_v)^{-1}B_v + D_v = C(sI - A)^{-1}B + D
\]
This can easily be seen, since
\[
G_v(s) = C_v(sI - A_v)^{-1}B_v + D_v = CP(sI - P^{-1}AP)^{-1}P^{-1}B + D = CP\left[P^{-1}(sI - A)P\right]^{-1}P^{-1}B + D
\]
which simplifies to
\[
G_v(s) = C(sI - A)^{-1}B + D = G(s)
\]

Next we show how to choose the transformation matrix P to obtain a particular form. When carrying out analysis and design in the state-space representation, it is often advantageous to transform the equations into particular forms. We shall describe the diagonal canonical form, the control canonical form, and the observer canonical form. The transformation equations are given without proofs.

1.8.2 Diagonal Canonical Form

This form makes use of the eigenvectors. If A has distinct eigenvalues, there is a nonsingular transformation P that can be formed by using the eigenvectors of A as its columns; that is,
\[
P = \begin{bmatrix} p_1 & p_2 & p_3 & \cdots & p_n \end{bmatrix}
\]
where pᵢ, i = 1, 2, …, n, denotes the eigenvector associated with the eigenvalue λᵢ. The Aᵥ matrix is then a diagonal matrix,
\[
A_v = \begin{bmatrix} \lambda_1 & 0 & 0 & \cdots & 0 \\ 0 & \lambda_2 & 0 & \cdots & 0 \\ 0 & 0 & \lambda_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \lambda_n \end{bmatrix}
\]
where λ₁, λ₂, …, λₙ are the n distinct eigenvalues of A.

Example 1.16 Consider the matrix
\[
A = \begin{bmatrix} 1 & -1 \\ 0 & -1 \end{bmatrix}
\]
which has eigenvalues λ₁ = 1 and λ₂ = −1. The eigenvectors were determined in Example 1.13 to be
\[
p_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \qquad p_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}
\]
Thus,
\[
P = \begin{bmatrix} p_1 & p_2 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix}
\]
and the diagonal canonical form of A is
\[
A_v = P^{-1}AP = \frac{1}{2}\begin{bmatrix} 2 & -1 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & -1 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \quad\square
\]
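Diagonalization can be done mechanically: stack the eigenvectors as the columns of P and form P⁻¹AP. A sketch for the matrix above; `eig`'s normalized eigenvectors differ in scale from the hand-picked ones, but they yield the same diagonal Aᵥ:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [0.0, -1.0]])

eigvals, P = np.linalg.eig(A)    # columns of P are eigenvectors
Av = np.linalg.inv(P) @ A @ P    # similarity transformation

# Av is diagonal with the eigenvalues on the diagonal
print(np.round(Av, 10))
```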
In general, when the matrix A has repeated eigenvalues, it cannot be transformed into a diagonal matrix. However, there exists a similarity transformation such that the Aᵥ matrix is almost diagonal. The matrix Aᵥ is then called the Jordan canonical form. A typical Jordan canonical form is shown below:
\[
A_v = \begin{bmatrix} \lambda_1 & 1 & 0 & 0 & 0 \\ 0 & \lambda_1 & 1 & 0 & 0 \\ 0 & 0 & \lambda_1 & 0 & 0 \\ 0 & 0 & 0 & \lambda_2 & 0 \\ 0 & 0 & 0 & 0 & \lambda_3 \end{bmatrix}
\]
where it is assumed that A has an eigenvalue λ₁ repeated three times and distinct eigenvalues λ₂ and λ₃. The Jordan canonical form generally has the following properties:
1. The elements on the main diagonal are the eigenvalues.
2. All elements below the main diagonal are zero.
3. Some of the elements immediately above the repeated eigenvalues on the main diagonal are 1s.
Example 1.17 Consider the matrix
\[
A = \begin{bmatrix} 2 & -8 \\ 2 & -6 \end{bmatrix}
\]
A has only one eigenvalue, λ = −2, repeated twice. The eigenvector and generalized eigenvector were found in Example 1.15 to be
\[
p_1 = \begin{bmatrix} 2 \\ 1 \end{bmatrix} \qquad p_2 = \begin{bmatrix} 1/2 \\ 0 \end{bmatrix}
\]
Thus,
\[
P = \begin{bmatrix} p_1 & p_2 \end{bmatrix} = \begin{bmatrix} 2 & 1/2 \\ 1 & 0 \end{bmatrix}
\]
and the Jordan canonical form of A is
\[
A_v = P^{-1}AP = -2\begin{bmatrix} 0 & -1/2 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} 2 & -8 \\ 2 & -6 \end{bmatrix}\begin{bmatrix} 2 & 1/2 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ 0 & -2 \end{bmatrix} \quad\square
\]
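The hand-computed transformation can be verified directly (a sketch):

```python
import numpy as np

A = np.array([[2.0, -8.0],
              [2.0, -6.0]])
P = np.array([[2.0, 0.5],   # columns: eigenvector p1, generalized p2
              [1.0, 0.0]])

Av = np.linalg.inv(P) @ A @ P
print(np.round(Av, 10))  # Jordan block [[-2, 1], [0, -2]]
```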

1.8.3 Control Canonical Form

The transformation matrix P that transforms the state model into control canonical form is computed from the matrix
\[
\mathcal{C} = \begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix}
\]
If this matrix is nonsingular, it is invertible. Assume that Aᵥ and Bᵥ are in control canonical form. Now
\[
P^{-1}AP = A_v \implies P^{-1}A = A_vP^{-1}
\]
and
\[
P^{-1}B = B_v
\]
Let p₁, p₂, …, pₙ denote the rows of P⁻¹. Then Aᵥ P⁻¹ = P⁻¹A is given by
\[
\begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{bmatrix}\begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ \vdots \\ p_n \end{bmatrix} = \begin{bmatrix} p_1A \\ p_2A \\ p_3A \\ \vdots \\ p_nA \end{bmatrix}
\]
Therefore,
\[
p_2 = p_1A, \qquad p_3 = p_2A = p_1A^2, \qquad p_4 = p_3A = p_1A^3, \qquad \cdots, \qquad p_n = p_{n-1}A = p_1A^{n-1}
\]
Also, Bᵥ = P⁻¹B yields
\[
\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix} = \begin{bmatrix} p_1B \\ p_2B \\ \vdots \\ p_nB \end{bmatrix}
\]
which implies
\[
p_1B = 0, \qquad p_2B = p_1AB = 0, \qquad p_3B = p_1A^2B = 0, \qquad \cdots, \qquad p_nB = p_1A^{n-1}B = 1
\]
Therefore, in vector-matrix form, we have
\[
p_1\underbrace{\begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix}}_{=\,\mathcal{C}} = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}
\]
Hence,
\[
p_1 = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}\mathcal{C}^{-1}
\]
Having found p₁, we can now go back and construct all the rows of P⁻¹. Note that we are only interested in the last row of 𝒞⁻¹ to define p₁.

Example 1.18 Transform the following state equation


   
1 2 1 1
ẋ = 0 1 3 x + 0 u
1 1 1 1

to control canonical form.

 Solution We first need to construct the C matrix. Therefore,


 
1 2 10
C = B AB A2 B = 0 3
 
9
1 2 7

Next we need to find C −1 , hence,


 
−0.3333 −0.6667 1.3333
C −1 =  −1 0.3333 1 
0.3333 0 −0.3333

We are only interested in the last row of C −1 to define p1 ,


 
p1 = 0.3333 0 −0.3333

Next, we compute p2 and p3 as follows


 
p2 = p1 A = 0 0.3333 0
p3 = p1 A2 = 0
 
0.3333 1

Therefore,  
0.3333 0 −0.3333
−1
P =  0 0.3333 0 
0 0.3333 1
Hence, the system can be transformed into the control canonical form,
   
0.3333 0 −0.3333 1 2 1 3 −1 1
Av = P−1 AP =  0 0.3333 0  0 1 3 0 3 0
0 0.3333 1 1 1 1 0 −1 1

and   
0.3333 0 −0.3333 1
Bv = P−1 B =  0 0.3333 0  0
0 0.3333 1 1
1.8. SIMILARITY TRANSFORMATION 33

Thus, the control canonical form model is given by


   
0 1 0 0
v̇ = 0 0 1 v + 0 u
3 1 3 1

which could have been determined once the coefficients of the characteristic
equation are known; however the exercise is to show how the control canonical
transformation matrix P is obtained.
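The row-by-row construction of P⁻¹ from the last row of 𝒞⁻¹ translates directly into code. A sketch for the matrices of Example 1.18:

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 3.0],
              [1.0, 1.0, 1.0]])
B = np.array([[1.0],
              [0.0],
              [1.0]])
n = A.shape[0]

# Controllability matrix C = [B  AB  A^2 B]
Cm = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# p1 = last row of C^{-1}; subsequent rows are p1 A, p1 A^2, ...
p1 = np.linalg.inv(Cm)[-1]
P_inv = np.vstack([p1 @ np.linalg.matrix_power(A, k) for k in range(n)])

Av = P_inv @ A @ np.linalg.inv(P_inv)
Bv = P_inv @ B
print(np.round(Av, 6))  # companion (control canonical) form
print(np.round(Bv, 6))  # [0, 0, 1]^T
```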

1.8.4 Observer Canonical Form

An approach similar to the one used to derive the similarity transformation for the control canonical form is used to determine the transformation matrix for the observer canonical form. The transformation matrix P that transforms the state model into observer canonical form is computed from the matrix
\[
\mathcal{O} = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix}
\]
If this matrix is nonsingular, it is invertible. Assume that Aᵥ and Cᵥ are in observer canonical form. Now
\[
P^{-1}AP = A_v \implies AP = PA_v
\]
and
\[
CP = C_v
\]
Let p₁, p₂, …, pₙ denote the columns of P. Then AP = PAᵥ is given by
\[
\begin{bmatrix} p_1 & p_2 & p_3 & \cdots & p_n \end{bmatrix}\begin{bmatrix} -a_{n-1} & 1 & 0 & \cdots & 0 \\ \vdots & 0 & 1 & \cdots & 0 \\ -a_2 & \vdots & \vdots & \ddots & \vdots \\ -a_1 & 0 & 0 & \cdots & 1 \\ -a_0 & 0 & 0 & \cdots & 0 \end{bmatrix} = \begin{bmatrix} Ap_1 & Ap_2 & Ap_3 & \cdots & Ap_n \end{bmatrix}
\]
Therefore,
\[
p_1 = Ap_2 = A^{n-1}p_n, \qquad \cdots, \qquad p_{n-3} = Ap_{n-2} = A^3p_n, \qquad p_{n-2} = Ap_{n-1} = A^2p_n, \qquad p_{n-1} = Ap_n
\]
Also, Cᵥ = CP yields
\[
\begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix} = \begin{bmatrix} Cp_1 & Cp_2 & \cdots & Cp_n \end{bmatrix}
\]
which implies
\[
Cp_1 = CA^{n-1}p_n = 1, \qquad \cdots, \qquad Cp_{n-2} = CA^2p_n = 0, \qquad Cp_{n-1} = CAp_n = 0, \qquad Cp_n = 0
\]
Therefore, in vector-matrix form, we have
\[
\underbrace{\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix}}_{=\,\mathcal{O}}\, p_n = \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}
\]
Hence,
\[
p_n = \mathcal{O}^{-1}\begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}
\]
Having found pₙ, we can now go back and construct all the columns of P. Note that we are only interested in the last column of 𝒪⁻¹ to define pₙ.

Example 1.19 Transform the following state equation to observer canonical form:
\[
\dot{x} = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 3 \\ 1 & 1 & 1 \end{bmatrix}x + \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}u
\]
\[
y = \begin{bmatrix} 1 & 1 & 0 \end{bmatrix}x
\]
Solution: We first need to construct the 𝒪 matrix. Therefore,
\[
\mathcal{O} = \begin{bmatrix} C \\ CA \\ CA^2 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 3 & 4 \\ 5 & 9 & 14 \end{bmatrix}
\]
Next we need to find 𝒪⁻¹; hence,
\[
\mathcal{O}^{-1} = \begin{bmatrix} 0.5 & -1.1667 & 0.3333 \\ 0.5 & 1.1667 & -0.3333 \\ -0.5 & -0.3333 & 0.1667 \end{bmatrix}
\]
We are only interested in the last column of 𝒪⁻¹ to define p₃, in this case
\[
p_3 = \begin{bmatrix} 0.3333 \\ -0.3333 \\ 0.1667 \end{bmatrix}
\]
Next, we compute p₁ and p₂ as follows:
\[
p_1 = A^2p_3 = \begin{bmatrix} 0.3333 \\ 0.6667 \\ 0.1667 \end{bmatrix} \qquad p_2 = Ap_3 = \begin{bmatrix} -0.1667 \\ 0.1667 \\ 0.1667 \end{bmatrix}
\]
Therefore,
\[
P = \begin{bmatrix} 0.3333 & -0.1667 & 0.3333 \\ 0.6667 & 0.1667 & -0.3333 \\ 0.1667 & 0.1667 & 0.1667 \end{bmatrix}
\]
Hence, the system can be transformed into the observer canonical form,
\[
A_v = P^{-1}AP = \begin{bmatrix} 1 & 1 & 0 \\ -2 & 0 & 4 \\ 1 & -1 & 2 \end{bmatrix}\begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 3 \\ 1 & 1 & 1 \end{bmatrix}\begin{bmatrix} 0.3333 & -0.1667 & 0.3333 \\ 0.6667 & 0.1667 & -0.3333 \\ 0.1667 & 0.1667 & 0.1667 \end{bmatrix}
\]
and
\[
C_v = CP = \begin{bmatrix} 1 & 1 & 0 \end{bmatrix}\begin{bmatrix} 0.3333 & -0.1667 & 0.3333 \\ 0.6667 & 0.1667 & -0.3333 \\ 0.1667 & 0.1667 & 0.1667 \end{bmatrix}
\]
Thus, the observer canonical form model is given by
\[
\dot{v} = \begin{bmatrix} 3 & 1 & 0 \\ 1 & 0 & 1 \\ 3 & 0 & 0 \end{bmatrix}v + \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}u
\]
\[
y = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}v
\]
which could have been written down directly once the coefficients of the characteristic equation are known; the exercise, however, shows how the observer canonical transformation matrix P is obtained.
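Dually to the control canonical case, P for the observer canonical form is built column-by-column from the last column of 𝒪⁻¹. A sketch for Example 1.19:

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 3.0],
              [1.0, 1.0, 1.0]])
B = np.array([[1.0], [0.0], [1.0]])
C = np.array([[1.0, 1.0, 0.0]])
n = A.shape[0]

# Observability matrix O = [C; CA; CA^2]
Om = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# p_n = last column of O^{-1}; earlier columns are A p_n, A^2 p_n, ...
pn = np.linalg.inv(Om)[:, -1]
P = np.column_stack([np.linalg.matrix_power(A, n - 1 - k) @ pn
                     for k in range(n)])

Av = np.linalg.inv(P) @ A @ P
Bv = np.linalg.inv(P) @ B
Cv = C @ P
print(np.round(Av, 6))  # observer canonical form
print(np.round(Cv, 6))  # [1, 0, 0]
```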
Chapter 2
Controllability and Observability

Controllability and observability represent two major concepts of modern con-


trol system theory. These concepts were originally introduced by R. Kalman in
1960, and are particularly important for practical implementations. They can
be roughly defined as follows:
Controllability: In order to be able to do whatever we want with the given
dynamic system under control input, the system must be controllable.
Observability: In order to see what is going on inside the system under
observation, the system must be observable.
In this chapter we will investigate the controllability and observability prop-
erties of linear systems. We will see later in the course that the first stage of
the design of a linear controller is often the investigation of controllability and
observability.

2.1 Motivation Examples


It is common practice in control applications to design a control input u(t) that makes the output y(t) behave in a desired manner. If one focuses on input/output behavior without thinking about the system states, problems may be encountered. That is to say, if we restrict our attention to designing inputs that make the output behave in a desirable way, problems may arise: given a state-space model, it is possible that while the outputs are behaving nicely, some of the states x(t) are misbehaving badly. This is best illustrated with the following examples.

Example 2.1 Consider the system
\[
\dot{x} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}x + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u
\]
\[
y = \begin{bmatrix} 1 & -1 \end{bmatrix}x
\]
We compute the transition matrix
\[
\Phi(t) = \begin{bmatrix} \frac{1}{2}(e^t + e^{-t}) & \frac{1}{2}(e^t - e^{-t}) \\[1mm] \frac{1}{2}(e^t - e^{-t}) & \frac{1}{2}(e^t + e^{-t}) \end{bmatrix}
\]
and so, if we use the initial condition x(0) = 0 and the input u(t) is the unit step function, we get
\[
x(t) = \int_0^t \begin{bmatrix} \frac{1}{2}\left(e^{(t-\tau)} + e^{-(t-\tau)}\right) & \frac{1}{2}\left(e^{(t-\tau)} - e^{-(t-\tau)}\right) \\[1mm] \frac{1}{2}\left(e^{(t-\tau)} - e^{-(t-\tau)}\right) & \frac{1}{2}\left(e^{(t-\tau)} + e^{-(t-\tau)}\right) \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix}d\tau
= \begin{bmatrix} \frac{1}{2}(e^t + e^{-t}) - 1 \\[1mm] \frac{1}{2}(e^t - e^{-t}) \end{bmatrix}
\]
The output is
\[
y(t) = e^{-t} - 1
\]
which we plot in Figure 2.1.

Figure 2.1: Output response of the system to a step input.

Note that the output is behaving quite nicely. However, the states are blowing up to ∞ as t → ∞. This bad behavior is present in the state equation; however, when we compute the output, the bad behavior is cancelled by the output matrix C. We say this system is unobservable; the concept of observability will be discussed in detail later in this chapter. □

Example 2.2 Consider the system
\[
\dot{x} = \begin{bmatrix} 1 & 0 \\ 1 & -1 \end{bmatrix}x + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u
\]
\[
y = \begin{bmatrix} 0 & 1 \end{bmatrix}x
\]
We compute the transition matrix
\[
\Phi(t) = \begin{bmatrix} e^t & 0 \\ \frac{1}{2}(e^t - e^{-t}) & e^{-t} \end{bmatrix}
\]
and so, if we use the initial condition x(0) = 0 and the input u(t) is the unit step function, we get
\[
x(t) = \int_0^t \begin{bmatrix} e^{(t-\tau)} & 0 \\ \frac{1}{2}\left(e^{(t-\tau)} - e^{-(t-\tau)}\right) & e^{-(t-\tau)} \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix}d\tau = \begin{bmatrix} 0 \\ 1 - e^{-t} \end{bmatrix}
\]
The output is
\[
y(t) = 1 - e^{-t}
\]
Everything looks fine: the output is behaving nicely, and the states are not blowing up to ∞. Now let us change the initial condition to x(0) = (1, 0). We then compute
\[
x(t) = \begin{bmatrix} e^t \\ 1 + \frac{1}{2}(e^t - 3e^{-t}) \end{bmatrix}
\]
and
\[
y(t) = 1 + \frac{1}{2}(e^t - 3e^{-t})
\]
which we plot in Figure 2.2.

Figure 2.2: Output response of the system to a step input and non-zero initial conditions.

Note that the system is blowing up in both state and output. It is not hard to see what is happening here: we do not have the ability to influence the unstable dynamics of the system with our input. We say this system is uncontrollable, a concept that we discuss in detail next. □

2.2 Controllability
Controllability is a property of the coupling between the input and the state, and thus involves the matrices A and B.

Controllability: A linear system is said to be controllable at t₀ if it is possible to find some input function u(t) that, when applied to the system, will transfer the initial state x(t₀) to the origin at some finite time t₁, i.e., x(t₁) = 0.

Some authors define another kind of controllability involving the output y(t). The definition given above is referred to as state controllability. It is the most common definition, and is the only type used in this course, so the adjective "state" is omitted. If a system is not completely controllable, then for some initial states no input exists which can drive the system to the zero state. A trivial example of an uncontrollable system arises when the matrix B is zero, because then the input is disconnected from the state.

2.2.1 Controllability Tests

The most common test for controllability is that the controllability matrix 𝒞 (which is n × n for a single-input system), defined as
\[
\mathcal{C} = \begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix} \tag{2.1}
\]
contains n linearly independent row or column vectors, i.e., is of rank n (for a square 𝒞 this means the matrix is non-singular, i.e., its determinant is non-zero). Since only the matrices A and B are involved, we sometimes say the pair (A, B) is controllable.

Example 2.3 Is the following system completely controllable?
\[
\dot{x} = \begin{bmatrix} -2 & 0 \\ 3 & -5 \end{bmatrix}x + \begin{bmatrix} 1 \\ 0 \end{bmatrix}u
\]
\[
y = \begin{bmatrix} 1 & -1 \end{bmatrix}x
\]
Solution: From (2.1) the controllability matrix is 𝒞 = [B AB], where
\[
AB = \begin{bmatrix} -2 & 0 \\ 3 & -5 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} -2 \\ 3 \end{bmatrix}
\]
hence
\[
\mathcal{C} = \begin{bmatrix} B & AB \end{bmatrix} = \begin{bmatrix} 1 & -2 \\ 0 & 3 \end{bmatrix}
\]
Clearly the matrix is nonsingular, since it has a non-zero determinant. Therefore the system is controllable. □
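A quick numerical controllability check, using `numpy.linalg.matrix_rank` on the controllability matrix (a sketch):

```python
import numpy as np

A = np.array([[-2.0, 0.0],
              [3.0, -5.0]])
B = np.array([[1.0],
              [0.0]])
n = A.shape[0]

# Controllability matrix [B  AB]
Cm = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
print(Cm)                              # [[1, -2], [0, 3]]
print(np.linalg.matrix_rank(Cm) == n)  # True -> controllable
```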

Example 2.4 Is the following system completely controllable?
\[
\dot{x} = \begin{bmatrix} 0 & 1 & -2 \\ 3 & -4 & 5 \\ -6 & 7 & 8 \end{bmatrix}x + \begin{bmatrix} 0 & -1 \\ 2 & -3 \\ 4 & -5 \end{bmatrix}u
\]
Solution: From (2.1) the controllability matrix is 𝒞 = [B AB A²B]; hence
\[
\mathcal{C} = \begin{bmatrix} B & AB & A^2B \end{bmatrix} = \begin{bmatrix} 0 & -1 & -6 & 7 & \cdots \\ 2 & -3 & 12 & -16 & \cdots \\ 4 & -5 & 46 & -55 & \cdots \end{bmatrix}
\]
Since the first three columns are linearly independent, we can conclude that rank 𝒞 = 3. Hence there is no need to compute A²B, since it is well known from linear algebra that the row rank of a matrix is equal to its column rank. Thus, rank 𝒞 = 3 = n implies that the system under consideration is controllable. Alternatively, since 𝒞 is nonsquare, we could have formed the matrix 𝒞𝒞ᵀ, which is n × n; then if 𝒞𝒞ᵀ is nonsingular, 𝒞 has rank n. □
There are several alternate methods for testing controllability, and some of these may be more convenient to apply than the condition in (2.1).

Control Canonical Form

The pair (A, B) is completely controllable if A and B are in control canonical form or are transformable into control canonical form by a similarity transformation.

Diagonal Canonical Form

If A is diagonal with distinct eigenvalues, then the pair (A, B) is controllable if B does not have any row that is all zeros.

For a system in Jordan canonical form, for example
\[
A = \begin{bmatrix} \lambda_1 & 1 & 0 & 0 \\ 0 & \lambda_1 & 1 & 0 \\ 0 & 0 & \lambda_1 & 0 \\ 0 & 0 & 0 & \lambda_2 \end{bmatrix} \qquad B = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \\ b_{41} & b_{42} \end{bmatrix}
\]
controllability requires only that the rows of B corresponding to the last row of each Jordan block are not all zeros; the elements in the other rows of B corresponding to a Jordan block need not all be nonzero. Thus, the condition of controllability for the above (A, B) pair is that the rows (b₃₁, b₃₂) and (b₄₁, b₄₂) each contain at least one nonzero element.

Example 2.5 Is the following system completely controllable?
\[
\dot{x} = \begin{bmatrix} -2 & 1 \\ 0 & -2 \end{bmatrix}x + \begin{bmatrix} 1 \\ 0 \end{bmatrix}u
\]
Solution: The eigenvalues of A are λ₁ = λ₂ = −2. The system is already in Jordan canonical form and is uncontrollable, since the last row of B (the row corresponding to the last row of the Jordan block) is zero. □

2.3 Observability
Observability is a property of the coupling between the state and the output, and thus involves the matrices A and C.

Observability: A linear system is said to be observable at t₀ if, for an initial state x(t₀), there is a finite time t₁ such that knowledge of y(t) for t₀ < t ≤ t₁ is sufficient to determine x(t₀).

Observability is a major requirement in filtering and state-estimation problems. In many feedback control problems, the controller must use the output variables y rather than the state vector x in forming the feedback signals. If the system is observable, then y contains sufficient information about the internal states.

2.3.1 Observability Tests

The most common test for observability is that the observability matrix 𝒪 (which is n × n for a single-output system), defined as
\[
\mathcal{O} = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix} \tag{2.2}
\]
is of rank n (for a square 𝒪 this means the matrix is non-singular, i.e., its determinant is non-zero). Since only the matrices A and C are involved, we sometimes say the pair (A, C) is observable.

Example 2.6 Is the following system completely observable?
\[
\dot{x} = \begin{bmatrix} -2 & 0 \\ 3 & -5 \end{bmatrix}x + \begin{bmatrix} 1 \\ 0 \end{bmatrix}u
\]
\[
y = \begin{bmatrix} 1 & -1 \end{bmatrix}x
\]
Solution: From (2.2) the observability matrix is
\[
\mathcal{O} = \begin{bmatrix} C \\ CA \end{bmatrix}
\]
where
\[
CA = \begin{bmatrix} 1 & -1 \end{bmatrix}\begin{bmatrix} -2 & 0 \\ 3 & -5 \end{bmatrix} = \begin{bmatrix} -5 & 5 \end{bmatrix}
\]
hence
\[
\mathcal{O} = \begin{bmatrix} C \\ CA \end{bmatrix} = \begin{bmatrix} 1 & -1 \\ -5 & 5 \end{bmatrix}
\]
Clearly the matrix is singular, since it has a zero determinant. Equivalently, the row vectors are linearly dependent, since the second row is −5 times the first row. Therefore the system is unobservable. □
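The corresponding numerical observability check (a sketch, mirroring the controllability test earlier):

```python
import numpy as np

A = np.array([[-2.0, 0.0],
              [3.0, -5.0]])
C = np.array([[1.0, -1.0]])
n = A.shape[0]

# Observability matrix [C; CA]
Om = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
print(Om)                             # [[1, -1], [-5, 5]]
print(np.linalg.matrix_rank(Om) < n)  # True -> unobservable
```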

Just as with controllability, there are several alternate methods for testing observability.

Observer Canonical Form

The pair (A, C) is completely observable if A and C are in observer canonical form or are transformable into observer canonical form by a similarity transformation.

Diagonal Canonical Form

If A is diagonal with distinct eigenvalues, then the pair (A, C) is observable if C does not have any column that is all zeros.

For a system in Jordan canonical form, the pair (A, C) is completely observable if the columns of C corresponding to the first row of each Jordan block are not all zeros.

Example 2.7 Is the following system completely observable?
\[
\dot{x} = \begin{bmatrix} -2 & 0 \\ 0 & -1 \end{bmatrix}x + \begin{bmatrix} 3 \\ 1 \end{bmatrix}u
\]
\[
y = \begin{bmatrix} 1 & 0 \end{bmatrix}x
\]
Solution: The eigenvalues of A are λ₁ = −2 and λ₂ = −1. The system is already in diagonal canonical form and is unobservable, since the last column of C is zero. □

2.4 Frequency Domain Tests

Controllability and observability have been introduced in the state-space domain as pure time-domain concepts. It is interesting to point out that in the frequency domain there exists a very powerful and simple theorem that gives a single condition for both the controllability and the observability of a system. Let G(s) be the transfer function of a linear system,
\[
G(s) = C(sI - A)^{-1}B
\]
Note that G(s) is defined by a ratio of two polynomials containing the corresponding system poles and zeros. The following controllability-observability theorem is given without proof.

Theorem: If there are no pole-zero cancellations in the transfer function of a linear system, then the system is both controllable and observable. If a pole-zero cancellation occurs in G(s), then the system is either uncontrollable or unobservable, or both uncontrollable and unobservable.

The importance of this theorem is that if a linear system is modeled by a transfer function with no pole-zero cancellation, then we are assured that it is a controllable and observable system, no matter how the state model is derived. It should be noted that the cancellation of poles and zeros can occur only in the model of a system, not in the system itself. A physical system has characteristics, and a model of the system has poles and zeros that describe those characteristics in some approximate sense. Hence, in the physical system, a characteristic of one part of the system may approximately negate a different characteristic of another part of the system. If the characteristics approximately cancel, it is very difficult to control and/or estimate these characteristics.

Example 2.8 Consider a linear system represented by the following transfer function:
\[
G(s) = \frac{(s+3)}{(s+1)(s+2)(s+3)} = \frac{s+3}{s^3 + 6s^2 + 11s + 6}
\]
The above theorem indicates that any state model for this system is uncontrollable and/or unobservable. To get the complete answer we have to go to a state-space form and examine the controllability and observability matrices. One of the many possible state models of G(s) is as follows:
\[
\dot{x} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{bmatrix}x + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}u
\]
\[
y = \begin{bmatrix} 3 & 1 & 0 \end{bmatrix}x
\]
It is easy to show that the controllability and observability matrices are given by
\[
\mathcal{C} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & -6 \\ 1 & -6 & 25 \end{bmatrix} \qquad \mathcal{O} = \begin{bmatrix} 3 & 1 & 0 \\ 0 & 3 & 1 \\ -6 & -11 & -3 \end{bmatrix}
\]
Since
\[
\det\mathcal{C} = -1 \neq 0 \implies \operatorname{rank}\mathcal{C} = 3 = n
\]
and
\[
\det\mathcal{O} = 0 \implies \operatorname{rank}\mathcal{O} < 3 = n
\]
this system is controllable, but unobservable. □
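The rank computations in this example can be reproduced numerically (a sketch):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[3.0, 1.0, 0.0]])
n = 3

Cm = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
Om = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print(np.linalg.matrix_rank(Cm))  # 3 -> controllable
print(np.linalg.matrix_rank(Om))  # 2 -> unobservable
```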

Example 2.9 For the circuit shown in Figure 2.3 , formulate a state variable description using
vc and iL as the state variables, with source voltage vs as the input and vo as
the output. Then determine the conditions on the resistor that would make the
system uncontrollable and unobservable.

iR

vo

Figure 2.3: RLC circuit example.

 Solution First, we notice that


diL (t)
vc (t) = vs (t) − vo (t) = vs (t) − L
dt
so
diL (t) 1 1
= vs (t) − vc (t)
dt L L
2.4. FREQUENCY DOMAIN TESTS 45

which is one of the necessary state equations. Furthermore, for the capacitor
voltage,
dvc (t) 1
= ic (t)
dt C 
1 vc (t)
= iL (t) + iR (t) −
C R
 
1 vs (t) − vc (t) vc (t)
= iL (t) + −
C R R
iL (t) 2vc (t) vs (t)
= − +
C RC RC
Therefore, the state equations are:
" # " 2 1
#" # " 1
#
v̇c (t) − RC C vc (t) RC
= + vs (t)
i̇L (t) − L1 0 iL (t) 1
L
 
  vc (t)
vx (t) = −1 0 + vs (t)
iL (t)

Testing controllability,

C = \begin{bmatrix} B & AB \end{bmatrix} =
\begin{bmatrix} \frac{1}{RC} & -\frac{2}{R^2C^2} + \frac{1}{LC} \\ \frac{1}{L} & -\frac{1}{RLC} \end{bmatrix}

The nonsingularity of this matrix can be checked by computing the determinant:

\det C = \frac{1}{R^2LC^2} - \frac{1}{L^2C}
Clearly the condition under which the determinant is zero is

R = \sqrt{\frac{L}{C}}
Similarly, the observability matrix is:

O = \begin{bmatrix} -1 & 0 \\ \frac{2}{RC} & -\frac{1}{C} \end{bmatrix}

which obviously is always nonsingular. Hence, the system is always observable,
but becomes uncontrollable whenever R = \sqrt{L/C}.
If we were to calculate the transfer function between the output v_o(t) and
the input v_s(t) (using basic circuit analysis), we would obtain the following
transfer function:

\frac{v_o}{v_s} = \frac{s\left(s + \frac{1}{RC}\right)}{s^2 + \frac{2}{RC}s + \frac{1}{LC}}
If R = \sqrt{L/C} then the roots of the characteristic equation are given by

s_{1,2} = -\frac{1}{RC}

giving a system with repeated roots at s = -1/(RC). Thus, in pole-zero form, we
have

\frac{v_o}{v_s} = \frac{s\left(s + \frac{1}{RC}\right)}{\left(s + \frac{1}{RC}\right)\left(s + \frac{1}{RC}\right)} = \frac{s}{s + \frac{1}{RC}}
which is now a first order system. The energy storage elements, in conjunction
with this particular resistance value, are interacting in such a way that their
effects combine, giving an equivalent first-order system. In this case, the time
constants due to these two elements are the same, making them equivalent to a
single element. 
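The critical resistance can also be checked numerically. The sketch below (Python with NumPy, an assumed environment) rebuilds the state matrices derived above using normalized component values L = C = 1, for which √(L/C) = 1, and compares the rank of [B AB] at the critical resistance and at a non-critical one:

```python
import numpy as np

def rlc_matrices(R, L, C):
    # State matrices derived above, with x = [vc, iL] and input vs
    A = np.array([[-2.0 / (R * C), 1.0 / C],
                  [-1.0 / L,       0.0    ]])
    B = np.array([[1.0 / (R * C)], [1.0 / L]])
    return A, B

L_, C_ = 1.0, 1.0                  # normalized values (illustrative): sqrt(L/C) = 1
for R in (1.0, 2.0):               # critical resistance, then a non-critical one
    A, B = rlc_matrices(R, L_, C_)
    rank = np.linalg.matrix_rank(np.hstack([B, A @ B]))
    print(R, rank)                 # rank 1 at R = sqrt(L/C), rank 2 otherwise
```

At the critical value the columns B and AB become parallel, which is exactly the loss of controllability found analytically.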

2.5 Similarity Transformations Again

In this section we investigate the effects of similarity transformations on
controllability and observability. Consider a linear system with controllability
matrix

C = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}
Under a similarity transformation, we will have transformed matrices

A_v = P^{-1}AP \qquad B_v = P^{-1}B

Therefore, in this new basis, the controllability test will give

C_v = \begin{bmatrix} B_v & A_vB_v & A_v^2B_v & \cdots \end{bmatrix}
    = \begin{bmatrix} P^{-1}B & (P^{-1}AP)(P^{-1}B) & (P^{-1}AP)(P^{-1}AP)(P^{-1}B) & \cdots \end{bmatrix}
    = \begin{bmatrix} P^{-1}B & P^{-1}AB & P^{-1}A^2B & \cdots \end{bmatrix}
    = \begin{bmatrix} P^{-1}B & P^{-1}AB & \cdots & P^{-1}A^{n-1}B \end{bmatrix}
    = P^{-1}\begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}
    = P^{-1}C

Because P is nonsingular, rank C_v = rank C. An analogous argument shows that
rank O_v = rank O. Thus, a system that is controllable (observable) will remain
so after the application of a similarity transformation.
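This invariance is easy to verify numerically. The sketch below (Python with NumPy, an assumed environment) applies a random similarity transformation to the state model of Example 2.8 and checks both the identity C_v = P^{-1}C and the equality of ranks:

```python
import numpy as np

rng = np.random.default_rng(1)

# State model of Example 2.8
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
B = np.array([[0.0], [0.0], [1.0]])
n = A.shape[0]

P = rng.standard_normal((n, n))   # random P, almost surely nonsingular
Pinv = np.linalg.inv(P)
Av, Bv = Pinv @ A @ P, Pinv @ B   # transformed matrices

ctrb  = np.hstack([np.linalg.matrix_power(A,  k) @ B  for k in range(n)])
ctrbv = np.hstack([np.linalg.matrix_power(Av, k) @ Bv for k in range(n)])

print(np.allclose(ctrbv, Pinv @ ctrb))                            # Cv = P^{-1} C
print(np.linalg.matrix_rank(ctrb), np.linalg.matrix_rank(ctrbv))  # equal ranks
```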
Chapter 3
Stability

Stability is the most crucial issue in designing any control system. One of the
most common control problems is the design of a closed-loop system such that
its output follows its input as closely as possible. If the system is unstable,
such behavior cannot be guaranteed. An unstable system has at least one state
variable that blows up to infinity as time increases. This usually causes the
system to suffer serious damage: components may burn out, break down, or even
explode. For such reasons, our primary goal is to guarantee stability. Once
stability is achieved, one seeks to satisfy the other design requirements, such
as speed of response, settling time, steady-state error, etc.

3.1 Introduction
To help make the later mathematical treatment of stability more intuitive let
us begin with a general discussion of stability concepts and equilibrium points.
Consider the ball which is free to roll on the surface shown in Figure 3.1. The
ball could be made to rest at points A, E, F , and G and anywhere between
points B and D, such as at C. Each of these points is an equilibrium point of
the system.
In state space, an equilibrium point of a system is a point at which ẋ is zero
in the absence of all inputs and disturbances. Thus, if the system is placed in
that state, it will remain there.

Figure 3.1: Equilibrium points


A small perturbation away from points A or F will cause the ball to diverge
from these points. This behavior justifies labeling points A and F as unstable
equilibrium points. After small perturbations away from E and G, the ball will
eventually return to rest at these points. Thus E and G are labeled as stable
equilibrium points. If the ball is displaced slightly from point C, it will normally
stay at the new position. Points like C are sometimes said to be neutrally stable.
So far we have assumed small perturbations. If the ball were displaced
sufficiently far from point G, it would not return to that point; the system is
only locally stable. Stability therefore depends on the size of the original
perturbation and on the nature of any disturbances.
Stability deals with the following questions. If at time t0 the state is per-
turbed from its equilibrium point (such as the origin), does the state return to
that point, or remain close to it, or diverge from it? Whether an equilibrium
point is stable or not depends upon what is meant by remaining close. Such
a qualifying condition is the reason for the existence of a variety of stability
conditions.

3.2 Stability Definitions


Consider a linear time-invariant system described in state space as follows

ẋ = Ax + Bu,     x(0) = x_0 ≠ 0
y = Cx + Du

The stability considered here refers to the zero-input case, i.e., u(t) = 0.
Stability defined for u(t) = 0 is called internal stability and is sometimes
referred to as zero-input stability. Clearly the solution of the state equation
is then

x(t) = e^{At}x_0

Stable Equilibrium Point


We say the origin is a stable equilibrium point if :

for any given ε > 0 there exists a number δ > 0 such that if ‖x(0)‖ < δ,
then x(t) satisfies ‖x(t)‖ < ε for all t > 0,

where ‖·‖ stands for the Euclidean norm (also known as the 2-norm) of the
vector x(t), i.e.,

‖x(t)‖ = (x^T x)^{1/2} = (x_1^2 + x_2^2 + \cdots + x_n^2)^{1/2}

A simplified picture of the above definition is given in Figure 3.2. The
stability question here is: if x(0) is near the origin, does x(t) remain near
the origin? This definition of stability is sometimes called stability in the
sense of Lyapunov. If a system possesses this type of stability, then the state
can be kept within ε, in norm, of the origin by restricting the initial
perturbation to be less than δ, in norm. Note that δ ≤ ε.

Comment: Figure 3.2 is known as a phase portrait (or state trajectory); it is a
plot of ẋ versus x.

Figure 3.2: Illustration of stable trajectories

Asymptotic Stability
The origin is said to be an asymptotically stable equilibrium point if:

(a) it is stable, and

(b) there exists δ > 0 such that if ‖x(0)‖ < δ, then x(t) satisfies
    lim_{t→∞} ‖x(t)‖ = 0.
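For linear time-invariant systems these definitions reduce to a familiar eigenvalue condition: since x(t) = e^{At}x_0, the origin is asymptotically stable exactly when every eigenvalue of A has a negative real part. A minimal numerical sketch (Python with NumPy, an assumed environment; the example matrices are illustrative):

```python
import numpy as np

def asymptotically_stable(A):
    # x(t) = e^{At} x0 -> 0 for every x0 exactly when all
    # eigenvalues of A have negative real parts
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_stable   = np.array([[0.0, 1.0], [-2.0, -3.0]])  # eigenvalues -1 and -2
A_unstable = np.array([[0.0, 1.0], [-2.0,  1.0]])  # eigenvalues with Re(s) = 0.5
print(asymptotically_stable(A_stable), asymptotically_stable(A_unstable))

# Forward-Euler simulation of x' = Ax confirming the decay for the stable case
x, dt = np.array([1.0, 1.0]), 1e-3
for _ in range(10000):        # integrate out to t = 10
    x = x + dt * (A_stable @ x)
print(np.linalg.norm(x))      # far below the initial norm sqrt(2)
```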
Chapter 4
Modern Control Design

Classical design techniques are based on either the frequency response or the
root locus. More recently, new design techniques have been developed; these are
called modern control methods to differentiate them from the classical methods.
In this chapter we present a modern control design method known as pole
placement, or pole assignment. This method is similar to root-locus design, in
that poles of the closed-loop transfer function may be placed in desired
locations. Achieving suitable pole locations is one of the fundamental design
objectives, as this ensures a satisfactory transient response. Placing all poles
at desired locations requires that all state variables be measured. In many
applications, not all of the needed system variables are physically measurable,
or measuring them is too costly. In these cases, those system signals that
cannot be measured must be estimated from the ones that are measured. The
estimation of system variables will be discussed later in the chapter.

4.1 State feedback

Many design techniques in modern control theory are based on the state-feedback
configuration. The block diagram of a system with state-feedback control is
shown in Figure 4.1. The open-loop system, often called the plant, is described
in state-variable form as:

ẋ = Ax + Bu     (4.1)
y = Cx + Du     (4.2)

It will be assumed that the system is single-input single-output; thus the
state vector x is n × 1 while the plant input u and the output y are scalars.
The feedback gain matrix K is 1 × n and is assumed constant. The control system
input r is also a scalar.

The equations which describe the state feedback problem are (4.1), (4.2) and
the relation
u(t) = r(t) − Kx(t)
Combining gives
ẋ = [A − BK]x + Br


Figure 4.1: State variable feedback system.

and
y = [C − DK]x + Dr
With this setup in mind the question is: What changes in overall system char-
acteristics can be achieved by the choice of K? Stability of the state feedback
system depends on the eigenvalues of (A − BK). Controllability depends on the
pair ([A − BK], B). Observability depends on the pair ([A − BK], [C − DK]).

4.2 Pole-Placement Design


Classical design procedures are based on the transfer function of the system;
pole-placement design is based on the state model of the system. The state
model of the plant considered is as given in (4.1) and (4.2) with D = 0.
Initially, we will assume that r(t) = 0. A system of this type (input equal to
zero) is called a regulator control system. The purpose of such a system is to
maintain the system output y(t) at zero.
In general, in modern control design, the plant input u(t) is made a function
of the states, of the form

u(t) = f[x(t)]

This equation is called the control law. In pole-placement design, the control
law is specified as a linear function of the states, in the form

u(t) = -Kx(t) = -\begin{bmatrix} K_1 & K_2 & \cdots & K_n \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}

We will show that this control law allows all poles of the closed-loop system
to be placed in any desired locations. This rule can be expressed as

u(t) = -K_1 x_1(t) - K_2 x_2(t) - \cdots - K_n x_n(t)

The design objective is: specify desired root locations of the system
characteristic equation, and then calculate the gains K_i that yield these root
locations. The closed-loop system can be represented as shown in Figure 4.2.

Figure 4.2: Pole-placement design.

Pole Placement Procedure

The state equation of the plant is given by

ẋ = Ax + Bu     (4.3)

The control law is chosen to be

u(t) = -Kx(t)     (4.4)

with

K = \begin{bmatrix} K_1 & K_2 & \cdots & K_n \end{bmatrix}

and n is the order of the plant. Substitution of (4.4) into (4.3) yields

ẋ(t) = Ax(t) - BKx(t) = (A - BK)x(t) = A_f x(t)     (4.5)

where A_f = (A - BK) is the system matrix of the closed-loop system. The
characteristic equation of the closed-loop system is then

|sI - A_f| = |sI - A + BK| = 0     (4.6)

Suppose that the design specifications require the roots of the characteristic
equation to be at -λ_1, -λ_2, ..., -λ_n. The desired characteristic equation
for the system, denoted by α_c(s), is

α_c(s) = s^n + α_{n-1}s^{n-1} + \cdots + α_1 s + α_0
       = (s + λ_1)(s + λ_2) \cdots (s + λ_n) = 0     (4.7)

The pole-placement design procedure results in a gain vector K such that (4.6)
is equal to (4.7), that is,

|sI - A + BK| = α_c(s) = s^n + α_{n-1}s^{n-1} + \cdots + α_1 s + α_0     (4.8)

Example 4.1  Consider the system

ẋ = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u

y = \begin{bmatrix} 1 & 0 \end{bmatrix} x

Find the control law that places the closed-loop poles of the system so that
they are both at s = -2.

Solution  From equation (4.7) we find that

α_c(s) = (s + 2)^2 = s^2 + 4s + 4     (4.9)

Equation (4.6) tells us that

|sI - A + BK| = \left| \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix} -
\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} +
\begin{bmatrix} 0 \\ 1 \end{bmatrix}\begin{bmatrix} K_1 & K_2 \end{bmatrix} \right|

or

s^2 + K_2 s + 1 + K_1 = 0     (4.10)

Equating the coefficients of like powers of s in (4.10) and (4.9) yields the
system of equations

K_2 = 4
1 + K_1 = 4

therefore,

K_1 = 3
K_2 = 4

The control law is

K = \begin{bmatrix} K_1 & K_2 \end{bmatrix} = \begin{bmatrix} 3 & 4 \end{bmatrix} □
Calculating the gains by the technique illustrated in the above example becomes
rather tedious when the order of the system is higher than 3. However, if the
state-variable equations are in control canonical form, the algebra for finding
the gains is very simple.
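The hand computation of Example 4.1 is easy to check numerically. The sketch below (Python with NumPy, an assumed environment) verifies that K = [3 4] places both closed-loop poles at s = −2, and cross-checks the gain with Ackermann's formula for state feedback (the controller counterpart of the observer formula used later in this chapter):

```python
import numpy as np

# Plant of Example 4.1
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Gain found by coefficient matching: |sI - A + BK| = s^2 + K2 s + (1 + K1)
K = np.array([[3.0, 4.0]])
print(np.sort(np.linalg.eigvals(A - B @ K).real))  # both poles at s = -2

# Cross-check with Ackermann's formula: K = [0 1] C^{-1} alpha_c(A),
# where C = [B AB] and alpha_c(s) = (s + 2)^2
ctrb = np.hstack([B, A @ B])
alpha_c_A = A @ A + 4 * A + 4 * np.eye(2)
K_ack = np.array([[0.0, 1.0]]) @ np.linalg.inv(ctrb) @ alpha_c_A
print(K_ack)  # [[3. 4.]]
```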

4.3 Missing Notes


4.4 State Estimation

In the preceding section we introduced state feedback under the implicit
assumption that all state variables are available for feedback. This assumption
may not hold in practice because not all states may be physically measurable.
In this case, in order to apply state feedback, it becomes necessary to observe
or estimate the state variables. We use the circumflex over a variable to
denote an estimate of the variable. For example, x̂ is an estimate of x.
A full-order state observer estimates all of the system state variables. If,
however, some of the state variables are measured, it may only be necessary to
estimate the remaining few. This is referred to as a reduced-order state
observer.
Consider the state equations

ẋ = Ax + Bu
(4.11)
y = Cx

where A, B, and C are given and the input u(t) and output y(t) are available
to us. The state x, however, is not available to us. The problem is to estimate
x from u and y with the knowledge of A, B, and C. If we know A and B, we
can duplicate the original system as

x̂˙ = Ax̂ + Bu (4.12)

as shown in Figure 4.3. This duplication will be called an open-loop estimator.

Figure 4.3: A simple open-loop state estimator.

Since the estimator's initial state and dynamics will never exactly match those
of the system, this open-loop arrangement means that x and x̂ will gradually
diverge. Therefore, the open-loop estimator is, in general, not satisfactory.
Now we shall modify the estimator of Figure 4.3 to the one shown in Figure 4.4,
in which the estimated output ŷ = Cx̂ is subtracted from the actual output y of
(4.11). The difference is used, in a closed-loop sense, to modify the dynamics
of the observer so that the output error (y − ŷ) is minimized.
The open-loop estimator in (4.12) is now modified as

x̂˙ = Ax̂ + Bu + Ke (y − Cx̂)

which can be written as

x̂˙ = (A − Ke C)x̂ + Bu + Ke y (4.13)



Figure 4.4: Closed-loop state estimator.

where Ke is the observer gain matrix. We define the error e(t) to be

e(t) = x(t) − x̂(t) (4.14)

The main purpose of state observer design is to make e(t) → 0 when t → ∞.


The derivative of the error can be expressed as

ė(t) = ẋ(t) - x̂˙(t)     (4.15)

Substituting (4.11) and (4.13) into this equation yields

ė(t) = Ax + Bu − (A − Ke C)x̂ − Bu − Ke y (4.16)

Since y = Cx,

ė(t) = Ax − (A − Ke C)x̂ − Ke Cx = (A − Ke C)[x − x̂] (4.17)

or
ė = (A − Ke C)e (4.18)
We see from this equation that the error in the estimation of the states has
the same dynamics as the state estimator itself. This dynamic behaviour depends
on the eigenvalues of (A − K_e C). If (A − K_e C) is stable, then x̂ → x as
t → ∞. Furthermore, these eigenvalues should be chosen so that the observer
transient response is faster than that of the system itself (typically by a
factor of 5). In other words, the gain matrix K_e is chosen to make the
dynamics of the estimator faster than those of the system.

4.4.1 Estimator Design


We now consider the design of the state estimators. The equation of the esti-
mator is given by
x̂˙ = (A − Ke C)x̂ + Bu + Ke y (4.19)
The characteristic equation of the estimator is then

|sI − A + Ke C| = 0 (4.20)

Note that this characteristic equation is the same as that of the error in (4.18).
As mentioned earlier, one method of designing state estimators is to make the
estimator five times faster than the closed-loop system. Hence we choose a
characteristic equation for the estimator, denoted by αe (s), that reflects the
desired speed of response:

αe (s) = (s + µ1 )(s + µ2 ) · · · (s + µn )
= sn + αn−1 sn−1 + · · · + α1 s + α0 = 0 (4.21)

Then the gain matrix Ke is calculated to satisfy

|sI − A + Ke C| = αe (s) (4.22)

The problem of observer design is essentially the same as the regulator pole
placement problem, and similar techniques may be used.

Example 4.2  Consider the system

ẋ = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u

y = \begin{bmatrix} 1 & 0 \end{bmatrix} x

This is the same system used in Example 4.1. Compute the estimator gain matrix
that will place the estimator error poles at s = -10 (five times as fast as the
controller poles selected in Example 4.1).

Solution  We are asked to place the two estimator error poles at s = -10. The
corresponding characteristic equation is

α_e(s) = (s + 10)^2 = s^2 + 20s + 100     (4.23)

From (4.22), we get

|sI - A + K_e C| = \left| \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix} -
\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} +
\begin{bmatrix} K_{e1} \\ K_{e2} \end{bmatrix}
\begin{bmatrix} 1 & 0 \end{bmatrix} \right| = s^2 + 20s + 100

Thus

s^2 + K_{e1}s + K_{e2} + 1 = s^2 + 20s + 100

Comparing the coefficients in the above equation, we find that

K_e = \begin{bmatrix} K_{e1} \\ K_{e2} \end{bmatrix} = \begin{bmatrix} 20 \\ 99 \end{bmatrix}
Alternatively, Ackermann's formula may be used to calculate the observer gain
matrix as follows:

K_e = α_e(A) O^{-1} \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}

where

α_e(A) = A^n + α_{n-1}A^{n-1} + \cdots + α_1 A + α_0 I

For this example,

K_e = \begin{bmatrix} 99 & 20 \\ -20 & 99 \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}^{-1}
\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 20 \\ 99 \end{bmatrix} □
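The same gain can be reproduced numerically. The sketch below (Python with NumPy, an assumed environment) evaluates Ackermann's formula for this example and confirms that the estimator error poles land at s = −10:

```python
import numpy as np

# Plant of Example 4.2
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
C = np.array([[1.0, 0.0]])

# Ackermann's formula for the observer gain with alpha_e(s) = (s + 10)^2
obsv = np.vstack([C, C @ A])                    # observability matrix O
alpha_e_A = A @ A + 20 * A + 100 * np.eye(2)    # alpha_e evaluated at A
Ke = alpha_e_A @ np.linalg.inv(obsv) @ np.array([[0.0], [1.0]])
print(Ke.ravel())                               # [20. 99.]

# The error dynamics e' = (A - Ke C)e then have both poles at s = -10
print(np.linalg.eigvals(A - Ke @ C).real)       # both near -10
```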

4.5 Closed-Loop System Characteristics


In general, since we cannot measure all of the states of a system, it is
necessary to employ a state estimator for pole-placement design. The design
process has two steps: first we design the feedback gain matrix K to yield the
closed-loop characteristic equation α_c(s); next we design the state estimator
to yield the estimator characteristic equation α_e(s). In the implementation we
place the estimator within the closed loop, as shown in Figure 4.5.

Figure 4.5: Closed-loop system with state estimator.

We now derive the characteristic equation of the closed loop system of Figure
4.5. First the plant equations are
ẋ = Ax + Bu
(4.24)
y = Cx

with the control law implemented using observed state variables u(t) = −Kx̂(t).
If the difference between the actual and observed state variables is

e(t) = x(t) − x̂(t)

then
x̂(t) = x(t) − e(t) (4.25)
Then, from (4.24) and (4.25),

ẋ = Ax - BK(x - e) = (A - BK)x + BKe     (4.26)

The observer error equation from (4.18) is

ė = (A − Ke C)e (4.27)

Combining (4.26) and (4.27) gives

\begin{bmatrix} ẋ \\ ė \end{bmatrix} =
\begin{bmatrix} A - BK & BK \\ 0 & A - K_e C \end{bmatrix}
\begin{bmatrix} x \\ e \end{bmatrix}     (4.28)

Equation (4.28) describes the closed-loop dynamics of the observed state feed-
back control system and because the matrix is block triangular the characteristic
equation is

|sI − A + BK||sI − A + Ke C| = αc (s)αe (s) = 0 (4.29)

Equation (4.29) shows that the desired closed-loop poles for the control system
are not changed by the introduction of the state observer. The 2n roots of the
closed-loop characteristic equations are then the n roots of the pole-placement
design plus the n roots of the estimator. Since the observer is normally designed
to have a more rapid response than the control system with full order observed
state feedback, the pole-placement roots will dominate.
The closed-loop state equations, with the states chosen as the system states
plus the estimates of those states, will now be derived. From (4.24), and since
the control law is u = -Kx̂,

ẋ = Ax - BKx̂     (4.30)

and from (4.13),

x̂˙ = (A - K_e C)x̂ - BKx̂ + K_e Cx = (A - K_e C - BK)x̂ + K_e Cx

Thus the closed-loop state equations are given by

\begin{bmatrix} ẋ \\ x̂˙ \end{bmatrix} =
\begin{bmatrix} A & -BK \\ K_e C & A - K_e C - BK \end{bmatrix}
\begin{bmatrix} x \\ x̂ \end{bmatrix}     (4.31)

Example 4.3  Consider the system

ẋ = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u

y = \begin{bmatrix} 1 & 0 \end{bmatrix} x

In Example 4.1 the gain matrix required to place the poles at s = -2 was
calculated to be

K = \begin{bmatrix} 3 & 4 \end{bmatrix}

and the state-estimator gain matrix was calculated in Example 4.2 to be

K_e = \begin{bmatrix} 20 \\ 99 \end{bmatrix}

The estimator equations are, from (4.13),

x̂˙ = (A - K_e C)x̂ + Bu + K_e y = (A - K_e C - BK)x̂ + K_e y

since u = -Kx̂. The controller-estimator is then described by the equations

x̂˙ = \begin{bmatrix} -20 & 1 \\ -103 & -4 \end{bmatrix} x̂ +
\begin{bmatrix} 20 \\ 99 \end{bmatrix} y

u = \begin{bmatrix} -3 & -4 \end{bmatrix} x̂ □
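These matrices, together with the pole separation asserted by (4.29), can be verified numerically (Python with NumPy, an assumed environment):

```python
import numpy as np

A  = np.array([[0.0, 1.0], [-1.0, 0.0]])
B  = np.array([[0.0], [1.0]])
C  = np.array([[1.0, 0.0]])
K  = np.array([[3.0, 4.0]])      # controller gain from Example 4.1
Ke = np.array([[20.0], [99.0]])  # observer gain from Example 4.2

# Controller-estimator system matrix A - Ke C - B K
Ace = A - Ke @ C - B @ K
print(Ace)  # [[-20, 1], [-103, -4]], as in the text

# Closed-loop matrix of (4.28): block triangular, so its eigenvalues are
# the controller poles (-2, -2) together with the estimator poles (-10, -10)
closed = np.block([[A - B @ K,        B @ K      ],
                   [np.zeros((2, 2)), A - Ke @ C ]])
print(np.sort(np.linalg.eigvals(closed).real))
```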

4.5.1 Controller-Estimator Transfer Function

As shown in the example above, the controller-estimator equations are

x̂˙ = (A - K_e C - BK)x̂ + K_e y
u = -Kx̂

where y is the input and u is the output. Since these equations are in the
standard state-space form, we can calculate a transfer function with input Y(s)
and output U(s). Taking the Laplace transform of the controller-estimator
equations (with zero initial conditions) yields

sX̂(s) = (A - K_e C - BK)X̂(s) + K_e Y(s)
U(s) = -KX̂(s)

and rearranging and solving for U(s), we obtain the transfer function

U(s) = [-K(sI - (A - K_e C - BK))^{-1} K_e] Y(s)

For the last example, the controller-estimator transfer function is

\frac{U(s)}{Y(s)} = \frac{-(456s + 217)}{s^2 + 24s + 183}

The controller-estimator can thus be considered as a single transfer function
G_{ec}(s), where

G_{ec}(s) = K(sI - (A - K_e C - BK))^{-1} K_e

This configuration is illustrated in Figure 4.6, in which a summing junction
has been added to account for the negative sign. The system then appears to be
the standard single-loop unity-feedback system. The characteristic equation of
the closed-loop system can be expressed as

1 + G_{ec}(s)G_p(s) = 0     (4.32)



Figure 4.6: System with controller-estimator.

Example 4.4  To illustrate the characteristic equation of (4.29), we consider
the system of Example 4.3. For this system, the characteristic equation is

α_c(s)α_e(s) = (s^2 + 4s + 4)(s^2 + 20s + 100)
             = s^4 + 24s^3 + 184s^2 + 480s + 400 = 0

We can also calculate the characteristic equation from (4.32), where the system
is represented as shown in Figure 4.6:

1 + G_{ec}(s)G_p(s) = 1 + \left( \frac{456s + 217}{s^2 + 24s + 183} \right)
\left( \frac{1}{s^2 + 1} \right) = 0

Simplifying this equation yields

s^4 + 24s^3 + 184s^2 + 480s + 400 = 0

which is the same characteristic equation as just calculated. □
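Both routes to this characteristic polynomial can be checked with NumPy's polynomial helpers (an assumed environment; coefficient lists are in descending powers of s):

```python
import numpy as np

# alpha_c(s) alpha_e(s) = (s^2 + 4s + 4)(s^2 + 20s + 100)
prod = np.polymul([1, 4, 4], [1, 20, 100])
print(prod)  # [  1  24 184 480 400]

# Numerator of 1 + Gec(s) Gp(s): (s^2 + 24s + 183)(s^2 + 1) + (456s + 217)
closed = np.polyadd(np.polymul([1, 24, 183], [1, 0, 1]), [456, 217])
print(closed)  # the same polynomial
```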

4.6 Reduced-order State Estimators


The full-order state observer estimates all state variables, irrespective of
whether they are being measured. Since usually we will not want to estimate
any state we are measuring, we prefer to design an estimator that estimates only
those states that are not measured. This type of estimator is called a reduced-
order estimator. We develop design equations for such an estimator in this
section. We consider only the case of one measurement; hence we are measuring
only one state. It is assumed that the state variables are always chosen such
that the state measured is x1 (t). The output equation is therefore
 
y = x1 = Cx = 1 0 · · · 0 x (4.33)

Partition the state vector as

x = \begin{bmatrix} x_1 \\ x_e \end{bmatrix}

where x_e contains the state variables to be estimated. Next we partition the
state equations:

\begin{bmatrix} ẋ_1 \\ ẋ_e \end{bmatrix} =
\begin{bmatrix} a_{11} & A_{1e} \\ A_{e1} & A_{ee} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_e \end{bmatrix} +
\begin{bmatrix} b_1 \\ B_e \end{bmatrix} u     (4.34)

From this equation we write the equation of the states to be estimated,

ẋe = Ae1 x1 + Aee xe + Be u (4.35)

and the equation of the state that is measured

ẋ1 = a11 x1 + A1e xe + b1 u (4.36)

In these equations, xe is unknown and to be estimated, and x1 and u are known.


To derive the equations of the estimator, we manipulate (4.35) and (4.36) into
the same form as the plant equations for full state estimation. For full state
estimation,
ẋ = Ax + Bu (4.37)
and for the reduced-order estimator, from (4.35)

ẋe = Aee xe + [Ae1 x1 + Be u] (4.38)

For full state estimation


y = Cx (4.39)
and for reduced-order estimator, from (4.36)

[ẋ1 − a11 x1 − b1 u] = A1e xe (4.40)

Comparing (4.37) with (4.38) and (4.39) with (4.40), we see that the equations
are equivalent if we make the following substitutions

x ← xe
A ← Aee
Bu ← Ae1 x1 + Be u
y ← ẋ1 − a11 x1 − b1 u
C ← A1e

Making these substitutions into the equation for the full-order estimator,

x̂˙ = (A − Ke C)x̂ + Bu + Ke y

we obtain the equations of the reduced-order estimator

x̂˙ e = (Aee − Ke A1e )x̂e + Ae1 y + Be u + Ke [ẏ − a11 y − b1 u] (4.41)

where x1 has been replaced with y. The error dynamics are given by

ė = ẋe − x̂˙ e = (Aee − Ke A1e )e (4.42)

Hence the characteristic equation of the estimator and of the errors of estimation
are given by
αe (s) = |sI − Aee + Ke A1e | = 0 (4.43)
We then choose Ke to satisfy this equation, where we have chosen αe (s) to give
the estimator certain desired dynamics.

Ackermann's formula for full-order estimation is

K_e = α_e(A) \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}^{-1}
\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}

Making the substitutions indicated above yields the formula for the
reduced-order estimator:

K_e = α_e(A_{ee}) \begin{bmatrix} A_{1e} \\ A_{1e}A_{ee} \\ \vdots \\ A_{1e}A_{ee}^{n-2} \end{bmatrix}^{-1}
\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}

where

α_e(A_{ee}) = A_{ee}^{n-1} + α_{n-2}A_{ee}^{n-2} + \cdots + α_1 A_{ee} + α_0 I
Note that the order of the estimator is one less than that of the full-order
estimator.
The reduced-order estimator derived above requires the derivative of y(t), as
seen from (4.41). We wish to avoid the use of ẏ, since differentiation will
amplify any high-frequency noise in y. However, we can eliminate the need for
differentiation by a change of variables such that the calculation of ẏ is not
required. We introduce the change of variables x̂_{e1} = x̂_e - K_e y. Now, if
we use the state x̂_{e1} instead of x̂_e, the state equation for the observer
in (4.41) simplifies:

x̂˙_{e1} = x̂˙_e - K_e ẏ
        = (A_{ee} - K_e A_{1e})x̂_e + A_{e1}y + B_e u + K_e[ẏ - a_{11}y - b_1 u] - K_e ẏ
        = (A_{ee} - K_e A_{1e})[x̂_{e1} + K_e y] + A_{e1}y + B_e u - K_e a_{11}y - K_e b_1 u
        = (A_{ee} - K_e A_{1e})x̂_{e1}
          + (A_{e1} - K_e a_{11} + A_{ee}K_e - K_e A_{1e}K_e)y + (B_e - K_e b_1)u     (4.44)

This equation is solved for the variable x̂_{e1}, after which the estimated
state vector is computed from x̂_e = x̂_{e1} + K_e y. The control input to the
plant is

u = -K_1 y - K_e' x̂_e     (4.45)

where K_1 and K_e' denote the partitions of the feedback gain
K = [K_1  K_e'] conformable with x_1 and x_e (note that K_e' is the feedback
gain partition, not the observer gain).
An implementation of the closed-loop control system is shown in Figure 4.7.

Example 4.5  Consider the system

ẋ = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u

y = \begin{bmatrix} 1 & 0 \end{bmatrix} x

Design a reduced-order estimator that has a pole at s = -10.

Solution  Comparing the above state equations with the partitioned state
matrices in (4.34) yields

a_{11} = 0 \quad A_{1e} = 1 \quad b_1 = 0
A_{e1} = -1 \quad A_{ee} = 0 \quad B_e = 1

Figure 4.7: Implementation of a reduced-order state observer.

The first-order estimator characteristic equation is

α_e(s) = s + 10

From Ackermann's formula,

α_e(A_{ee}) = 0 + 10 = 10

and

K_e = α_e(A_{ee})[A_{1e}]^{-1}[1] = \frac{10}{1} = 10

Then from (4.44),

x̂˙_{e1} = (A_{ee} - K_e A_{1e})x̂_{e1} + (A_{e1} - K_e a_{11} + A_{ee}K_e - K_e A_{1e}K_e)y + (B_e - K_e b_1)u
        = [0 - (10)(1)]x̂_{e1} + [(-1) - (10)(0) + (0)(10) - (10)(1)(10)]y + [1 - (10)(0)]u
        = -10x̂_{e1} - 101y + u

The estimated value of x_2 is then x̂_e = x̂_{e1} + 10y. □
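The convergence of this reduced-order observer can be illustrated with a simple forward-Euler simulation (Python with NumPy, an assumed environment; the step size and initial conditions are illustrative):

```python
import numpy as np

# Plant of Example 4.5 with the reduced-order observer
#   x_e1' = -10 x_e1 - 101 y + u,   x2_hat = x_e1 + 10 y
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

dt, u = 1e-4, 0.0                 # Euler step and (zero) input
x = np.array([1.0, 1.0])          # true state; x2 is unknown to the observer
xe1 = 0.0 - 10.0 * x[0]           # observer state for the initial guess x2_hat = 0

for _ in range(50000):            # simulate 5 seconds
    y = x[0]                      # measured output
    x = x + dt * (A @ x)          # plant, u = 0
    xe1 = xe1 + dt * (-10.0 * xe1 - 101.0 * y + u)

x2_hat = xe1 + 10.0 * x[0]
print(abs(x2_hat - x[1]))         # estimation error has decayed to ~0
```

The estimation error obeys ė = −10e, so the initial error of 1 has decayed by roughly e^{−50} after five seconds.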

4.6.1 Example page 263

4.7 Systems with Inputs


In the preceding sections only regulator control systems were considered. A
regulator system has no input, and the purpose of the system is to return all
state variables to values of zero when the states have been perturbed. However,
many systems require that the system output track an input. For these cases,
the design equations for the regulator systems must be modified.

4.7.1 Full State Feedback


We first consider systems with full state feedback. For these systems, the plant
is described by
ẋ = Ax + Bu
with
u = −Kx
The plant input u is the only function that we can modify; hence the input
function r(t) must be added to this function. Therefore, the equation for u
becomes
u = −Kx + Kr r
The gain Kr can be determined to satisfy different design criteria. If we express
u as
u = −K1 x1 − K2 x2 − · · · − Kn xn + Kr r
we see that one method for choosing Kr is to combine the input r with a linear
combination of the states. For example, suppose that state x1 is the system
output that is to track the input r. One logical choice for Kr is then the gain
K1 , such that u becomes

u = K1 [r − x1 ] − K2 x2 − · · · − Kn xn
