
State Space Fundamentals

© G.M. Asher, R. Blasco Giménez

6th June 2011


Contents

1 Introductory Concepts 4
1.1 The Concept of State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 State Space Equations for Physical Systems . . . . . . . . . . . . . . . . . . . . . 6
1.3.1 Example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.2 Example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.3 Example 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.4 Example 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.5 Example 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2 Non-linear systems and linearization 11


2.1 Example of non-linear equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

3 State Space Equations from Transfer Functions 13


3.1 Method 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2 Method 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.3 Method 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.4 Method 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.4.1 x_i/u containing equal order numerator and denominator . . . . . . . . . 18
3.4.2 Systems with complex poles . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.4.3 System with repeated real poles . . . . . . . . . . . . . . . . . . . . . . . . 20
3.5 Change of coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.6 Transfer functions from state space equations . . . . . . . . . . . . . . . . . . . . 22
3.6.1 Example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

4 Transforming to canonical form 25


4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.2 Transforming to Diagonal or Modal Form . . . . . . . . . . . . . . . . . . . . . . 26
4.2.1 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.2.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.3 Summary of Eigenvalues, Eigenvectors and Canonical Transformations . . . . . . 30
4.4 Further Canonical Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.4.1 Systems with real repeated eigenvalues . . . . . . . . . . . . . . . . . . . . 31
4.4.2 Systems with complex eigenvalues . . . . . . . . . . . . . . . . . . . . . . 31
4.5 Advantage of the Modal Canonical Form . . . . . . . . . . . . . . . . . . . . . . . 33
4.6 Control and Observable Canonical Forms . . . . . . . . . . . . . . . . . . . . . . 33


5 Controllability and Observability 34


5.1 Controllability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.1.1 Via Modal Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.1.2 Via Controllability Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5.2 Observability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.2.1 Via Modal Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.2.2 Via the Observability Matrix . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.3 Example system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

6 Control and Observer Canonical Forms 39


6.1 Control Canonical Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.2 Observer Canonical Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

7 Multivariable State Feedback 43


7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
7.2 Single input systems. Design via Control Canonical Form . . . . . . . . . . . . . 44
7.3 Design via diagonal canonical form . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.3.1 Single input system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.3.2 Multiple Input System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.3.3 Repeated eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
7.3.4 Systems with complex eigenvalues . . . . . . . . . . . . . . . . . . . . . . 53
7.3.5 Computer algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

8 Reference Inputs and Integral Control 55


8.1 Reference Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
8.1.1 The single reference input . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
8.1.2 Position of reference input . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
8.1.3 Multi-Input Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
8.2 Integral Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

9 Full and Reduced Order Observers 64


9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
9.2 Full Order Observers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
9.2.1 Single Output Systems. Observer Design using Observer Canonical Form 66
9.3 Reduced Order State Observers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
9.4 Observers - Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
9.4.1 Closed loop eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
9.4.2 Reference Input and System Zeroes . . . . . . . . . . . . . . . . . . . . . . 72
9.4.3 Integral Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

10 Time Solution of the State Space Equation 75


10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
10.2 The continuous solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
10.3 Finding the Transition Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
10.3.1 Method 1. Inverse Laplace Transform . . . . . . . . . . . . . . . . . . . . 77
10.3.2 Method 2. Finding the Transition Matrix via Modal Transformation . . 77
10.4 The Discrete State Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
10.4.1 Approximate evaluation of E, F . . . . . . . . . . . . . . . . . . . . . . . 81
10.4.2 More accurate evaluation of E and F . . . . . . . . . . . . . . . . . . . . . 81

11 Discrete State Space Design for Digital Control Systems 83


11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
11.2 Discrete State Space Equations from Transfer Functions . . . . . . . . . . . . . . 84
11.3 Selection of Sample Interval and Closed Loop Eigenvalues . . . . . . . . . . . . . 84
11.4 Integral Control and Reduced Order Observers . . . . . . . . . . . . . . . . . . . 85
11.4.1 Integral Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
11.4.2 Reduced Order Observer . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Chapter 1

Introductory Concepts

1.1 The Concept of State

Figure 1.1: Basic system

By way of introduction, consider the system in fig. 1.1 and let us assume that, for say a step input, u(s) = 1/s, you wanted to determine the output time response x(t). Hopefully you remember how to do this (you must obtain the Inverse Laplace Transform of X(s)), i.e.

x(t) = \mathcal{L}^{-1}\left[ \frac{s+2}{s\,(s^3 + 2s^2 + 2s + 1)} \right]    (1.1)

the unique solution of x(t) being, of course, determined by initial conditions for x, dx/dt and d^2x/dt^2 at t = 0. To solve (1.1) you must first factorize the cubic polynomial and then apply partial fraction techniques to obtain:

x(t) = \mathcal{L}^{-1}\left[ \frac{a}{s} + \frac{b}{s+1} + \frac{cs+d}{s^2+s+1} \right]    (1.2)

and having solved for a, b, c and d you then look up the Inverse Laplace Transform of each term and, well done!, you've solved it. I shall not do it because it is boring.
The point is that the process is an analytic one and totally unsuitable for solution on a computer. As it so happens, all simulation (i.e. finding time responses) and nearly all control design is nowadays done with the aid of a computer. Firstly, computers don't like solving polynomials (the cubic of (1.1) had to be factorized) and secondly, the computer does not like working with mathematical operators - how indeed would "s" be represented on a computer?
Instead, we revert to the original differential equations defining the physical system. As far as the computer is concerned, we abandon the Laplace operator "s". We work in the time domain and not in the frequency domain (or s domain).
The third order system in fig. 1.1 is defined by the third order differential equation:

\dddot{x} + 2\ddot{x} + 2\dot{x} + x = \dot{u} + 2u    (1.3)

u is the forcing input and x is the solution variable. As it happens, computers don't like (1.3) either - they don't like higher order differentials. Instead, (1.3) can be reduced to three

first order differential equations:

\dot{x}_1 = a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + b_1 u
\dot{x}_2 = a_{21} x_1 + a_{22} x_2 + a_{23} x_3 + b_2 u    (1.4)
\dot{x}_3 = a_{31} x_1 + a_{32} x_2 + a_{33} x_3 + b_3 u

Don't worry how this is done; we'll look at the methods later. We notice a few salient features of (1.4). The derivatives of u have disappeared. This will be true for all physical systems. Second, and most important, whereas we originally had one n-th order equation with one solution variable, we now have n first order equations with n solution variables. Before, the solution variable was the output x, but now we have x_1, x_2, x_3, and they can't all be outputs of the system! They are not outputs. For an n-th order system, they are a set of n variables, which may or may not be physically real, which define the state of the system at any given point in time. They are termed STATE VARIABLES.
Given a physical system, then at any point in time, knowledge of the State Variables (at that point in time) together with knowledge of the forcing function u is sufficient to uniquely determine the system behaviour for all future time.
This statement can be seen from (1.4). Given u and x_1, x_2, x_3 at time t, then (1.4) calculates the slopes \dot{x}_1, \dot{x}_2, \dot{x}_3 at time t. Hence x_1, x_2, x_3 can be determined at t + Δt. This becomes a starting point and the process of calculating the slopes from the states continues. In principle, this is exactly what the computer does with (1.4). It calculates \dot{x}_1, \dot{x}_2 ... \dot{x}_n at time t and predicts x_1, x_2 ... x_n at t + Δt. Using numerical analysis, the error magnitude of this prediction can also be obtained from (1.4). The prediction can then be modified or corrected to keep numerical errors within given bounds.
Note the following:

The n states of an n-th order system are mutually independent. E.g. given a 3rd order system: if at time t, x_1 and x_2 are known, then x_3 cannot be derived from x_1 and x_2. If it could, x_3 would be "dependent" on x_1 and x_2, and the system would be only 2nd order.

The choice of state variables for a given system is not unique (see section 1.3). This means that a choice of state variables can be made purely for mathematical convenience. It is this fact which makes the state variable approach so powerful in control theory.

1.2 Definitions

Before considering the choice of state variables for real physical systems, we will first define all our terms:
State Vector: The state vector x is the set of states (x_1, x_2, ..., x_n)^T. It is an n × 1 column matrix.

State Space: (x_1, x_2, ..., x_n) can be said to form the axes of an n-dimensional cartesian space. At any point in time, the numeric values of x_1, x_2, ..., x_n thus form a vector in the space.

Input Vector: If a system has m inputs, u_1, u_2, ..., u_m, then these form the vector u = (u_1, u_2, ..., u_m)^T, an m × 1 matrix.

Output Vector: If a system has r outputs, v_1, v_2, ..., v_r, then these form the r × 1 column vector v = (v_1, v_2, ..., v_r)^T. Note that v is the set of measurements or variables of interest of the system.

State Space Eq.: Considering equation (1.4), it can be seen that the set of n first order equations can be written

\dot{x} = Ax + Bu    (1.5)

A is the n × n system matrix. B is the input matrix and is n × m. Since x defines the state of the system, it follows that the outputs or system measurements must be a function of x (some outputs may in fact be states). In general we have

v = Cx + Du    (1.6)

C is the output matrix and is r × n. D is the input-output matrix of order r × m. It is included for generality, but for most systems you will find that D is null, i.e. [0]. Eq. (1.5) is termed the state equation. Eq. (1.6) is the output equation.

LTI System: If all the plant components are linear and do not change with time, then the system is said to be linear, time-invariant, and the matrices A, B, C, D consist only of numbers.
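In code, equations (1.5) and (1.6) are just matrix-vector products. A minimal sketch with illustrative numbers (the matrices below are assumptions for demonstration, not a system from the notes):

```python
def matvec(M, v):
    # One row of M dotted with v per output entry.
    return [sum(m * vj for m, vj in zip(row, v)) for row in M]

def state_derivative(A, B, x, u):
    # x' = Ax + Bu  (eq. 1.5)
    return [ax + bu for ax, bu in zip(matvec(A, x), matvec(B, u))]

def output(C, D, x, u):
    # v = Cx + Du  (eq. 1.6); D is often null.
    return [cx + du for cx, du in zip(matvec(C, x), matvec(D, u))]

# Illustrative 2-state, 1-input, 2-output LTI system:
A = [[-1.0, 0.0], [0.0, -2.0]]
B = [[1.0], [0.0]]
C = [[1.0, 0.0], [0.0, 2.0]]
D = [[0.0], [1.0]]
v = output(C, D, x=[3.0, 4.0], u=[5.0])
```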

1.3 State Space Equations for Physical Systems


Before the state (space) equations (1.5) and (1.6) for a particular system can be derived, it is necessary to have some convention for the choice of state variables. These derive from the energy storage (or dynamic) elements within the system. Note that the state variables are associated with the plant dynamics, i.e. the energy interchange which causes the plant to move from one state to another.
A common convention consists of choosing the state variables as follows:
Dynamic elements for electrical systems:

i, current through each inductor L.

v, voltage across each capacitor C.

Dynamic elements for mechanical systems:

v, velocity of a lumped mass M.

x, position/extension of a spring K.

Those for other systems, e.g. hydraulic, magnetic circuit, thermal, chemical, economic, etc. can be found from "system modelling" text books. Note also that since flux φ = Li and charge q = Cv, flux and charge can also be used as state variables, since for linear C and L they are proportional to i and v respectively.

1.3.1 Example 1

Figure 1.2: Circuit with no dynamics

Figure 1.2. No inductive or capacitive elements. The circuit is not dynamic. No state variables. Order of the system = zero. u = V, the input. Note i = u/R, i.e., current (and everything else) is determined by the input alone.

Figure 1.3: Simple electric circuit

1.3.2 Example 2
Figure 1.3. Two energy storage elements. Order of the system = 2.

x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} i_L \\ v_c \end{pmatrix}, \quad u = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} E \\ I \end{pmatrix}

Let the "measurement" variables or variables of interest be v_r, i_c, i.e.

v = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} v_r \\ i_c \end{pmatrix}

We want expressions for \dot{x}, i.e. for di_L/dt and dv_c/dt. Applying Kirchhoff's current law to node A yields:

i_L = i_c + i_r - I = C \frac{dv_c}{dt} + \frac{v_c}{R_2} - I    (1.7)

Applying Kirchhoff's voltage law to mesh B yields:

E = v_r + v_L + v_c = i_L R_1 + L \frac{di_L}{dt} + v_c    (1.8)

hence

\frac{di_L}{dt} = -\frac{R_1}{L} i_L - \frac{1}{L} v_c + \frac{1}{L} E    (1.9)

\frac{dv_c}{dt} = \frac{1}{C} i_L - \frac{1}{C R_2} v_c + \frac{1}{C} I    (1.10)

i.e.

\dot{x} = \begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix} = \begin{pmatrix} -R_1/L & -1/L \\ 1/C & -1/(C R_2) \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1/L & 0 \\ 0 & 1/C \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}

since (x_1, x_2)^T = (i_L, v_c)^T and (u_1, u_2)^T = (E, I)^T. Hence

A = \begin{pmatrix} -R_1/L & -1/L \\ 1/C & -1/(C R_2) \end{pmatrix}, \quad B = \begin{pmatrix} 1/L & 0 \\ 0 & 1/C \end{pmatrix}

Finally, v_r = R_1 i_L, and i_c (the other output) is i_c = C \frac{dv_c}{dt} = i_L - \frac{1}{R_2} v_c + I from eq. (1.10). Hence:

v = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} v_r \\ i_c \end{pmatrix} = \begin{pmatrix} R_1 & 0 \\ 1 & -1/R_2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}

hence

C = \begin{pmatrix} R_1 & 0 \\ 1 & -1/R_2 \end{pmatrix}, \quad D = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}

1.3.3 Example 3

Figure 1.4: Electric system with coupled states

Figure 1.4. Take care here. There are 3 inductors and it is tempting to choose the states as x = (i_1, i_2, i_3)^T. But i_3 = i_1 - i_2, so that i_3 is not independent (see section 1.1). The system is only second order.

x = \begin{pmatrix} i_1 \\ i_2 \end{pmatrix}; \quad u = (E); \quad v = (v_{R2})

Applying the Kirchhoff Voltage Law to mesh A: E = R_1 i_1 + L_1 \frac{di_1}{dt} + L_2 \frac{di_2}{dt}

Applying the Kirchhoff Voltage Law to mesh B: L_2 \frac{di_2}{dt} = L_3 \frac{di_3}{dt} + R_2 i_3 = L_3 \frac{di_1}{dt} - L_3 \frac{di_2}{dt} + R_2 i_1 - R_2 i_2

hence

\frac{di_2}{dt} = \frac{L_3}{L_2+L_3} \frac{di_1}{dt} + \frac{R_2}{L_2+L_3} i_1 - \frac{R_2}{L_2+L_3} i_2

Substituting into the KVL mesh A equation yields:

E = R_1 i_1 + \left( L_1 + \frac{L_2 L_3}{L_2+L_3} \right) \frac{di_1}{dt} + \frac{L_2 R_2}{L_2+L_3} i_1 - \frac{L_2 R_2}{L_2+L_3} i_2

E = R_1 i_1 + \frac{L_1 L_2 + L_1 L_3 + L_2 L_3}{L_2+L_3} \frac{di_1}{dt} + \frac{L_2 R_2}{L_2+L_3} i_1 - \frac{L_2 R_2}{L_2+L_3} i_2

\frac{L_1 L_2 + L_1 L_3 + L_2 L_3}{L_2+L_3} \frac{di_1}{dt} = E - R_1 i_1 - \frac{L_2 R_2}{L_2+L_3} i_1 + \frac{L_2 R_2}{L_2+L_3} i_2

\frac{di_1}{dt} = -\frac{L_2 R_1 + L_2 R_2 + L_3 R_1}{L_0} i_1 + \frac{L_2 R_2}{L_0} i_2 + \frac{L_2+L_3}{L_0} E

where L_0 = L_1 L_2 + L_1 L_3 + L_2 L_3. Similarly, eliminating \frac{di_1}{dt} yields:

\frac{di_2}{dt} = \frac{L_3}{L_2+L_3} \left( -\frac{L_2 R_1 + L_2 R_2 + L_3 R_1}{L_0} i_1 + \frac{L_2 R_2}{L_0} i_2 + \frac{L_2+L_3}{L_0} E \right) + \frac{R_2}{L_2+L_3} i_1 - \frac{R_2}{L_2+L_3} i_2

\frac{di_2}{dt} = \frac{L_1 R_2 - L_3 R_1}{L_0} i_1 - \frac{L_1 R_2}{L_0} i_2 + \frac{L_3}{L_0} E

and v_{R2} = R_2 i_1 - R_2 i_2.

Hence

A = \begin{pmatrix} -\frac{L_2 R_1 + L_2 R_2 + L_3 R_1}{L_0} & \frac{L_2 R_2}{L_0} \\ \frac{L_1 R_2 - L_3 R_1}{L_0} & -\frac{L_1 R_2}{L_0} \end{pmatrix}; \quad B = \begin{pmatrix} \frac{L_2+L_3}{L_0} \\ \frac{L_3}{L_0} \end{pmatrix}

C = \begin{pmatrix} R_2 & -R_2 \end{pmatrix}

Note the existence of dependent variables associated with energy storage elements often leads to messy mathematics as the dependent variable is eliminated.

1.3.4 Example 4

Figure 1.5: Mechanical system

Now for a mechanical example (fig. 1.5), which electronic engineering students always find positively delightful. Follow the procedure:

1. Assign positions and velocities of masses relative to ground.

2. Assign positions and velocities of ends of other elements if different from those of masses.

We have now assigned y_2, v_2 (v_2 = \dot{y}_2) to the mass and y_1, v_1 (v_1 = \dot{y}_1) to the spring-damper connection. u = (F), the applied force as shown.
State variables are v_2 on the mass and (y_2 - y_1) = x', the spring elongation. Hence x = (v_2, x')^T.
Note the element characteristics are: F_m = M \frac{dv_2}{dt}; F_k = k x'; F_B = B v_1.
We need the equations for \dot{v}_2 and \dot{x}'. Newton's law on the mass yields:

M \dot{v}_2 = F - F_k = F - k x'

which is of the required form, since only states or inputs appear on the right-hand side.

\dot{x}' = \dot{y}_2 - \dot{y}_1 = v_2 - v_1 = v_2 - \frac{F_B}{B} = v_2 - \frac{F_k}{B} = v_2 - \frac{k}{B} x'

since, obviously, the spring force F_k must equal the damper force F_B. Hence:

\dot{x} = \begin{pmatrix} \dot{v}_2 \\ \dot{x}' \end{pmatrix} = \begin{pmatrix} 0 & -k/M \\ 1 & -k/B \end{pmatrix} \begin{pmatrix} v_2 \\ x' \end{pmatrix} + \begin{pmatrix} 1/M \\ 0 \end{pmatrix} (F)

Figure 1.6: Diagram of a DC motor

1.3.5 Example 5
Now for a mixed electromechanical system, a DC motor drive (fig. 1.6). R_a, L_a are the resistance and inductance of the armature. E is the back emf (electromotive force), E = k i_f ω, where ω is the shaft speed. The motor torque T = k i_f i_a is balanced by an inertial component J\dot{ω}, a friction component Bω and an applied load torque T_L, which may change in time but is assumed to be independent of ω. What are the state variables? The energy storage components are:

L_a → state variable i_a.

J → state variable ω.

L_f → state variable i_f, BUT if i_f is kept constant, then we can remove the dynamics associated with the motor field winding. i_f will just become a number.

There are no capacitors and no springs. Therefore:

x = \begin{pmatrix} i_a \\ ω \end{pmatrix}; \quad u = \begin{pmatrix} V_a \\ T_L \end{pmatrix}

There are two equations:

1. Electrical: V_a = R_a i_a + L_a \frac{di_a}{dt} + k i_f ω

2. Mechanical: T = k i_f i_a = J\dot{ω} + Bω + T_L

Hence:

\dot{x} = \begin{pmatrix} \dot{i}_a \\ \dot{ω} \end{pmatrix} = \begin{pmatrix} -R_a/L_a & -k i_f/L_a \\ k i_f/J & -B/J \end{pmatrix} \begin{pmatrix} i_a \\ ω \end{pmatrix} + \begin{pmatrix} 1/L_a & 0 \\ 0 & -1/J \end{pmatrix} \begin{pmatrix} V_a \\ T_L \end{pmatrix}

\dot{x} = Ax + Bu

Note the linearity of the system, given that i_f is just a number. If V_f were varied, we would have V_f = R_f i_f + L_f \frac{di_f}{dt}, and i_f would be a state. The state equations would then be non-linear on account of k i_f ω and k i_f i_a, i.e. two state variables multiplied together. Non-linearities are treated in the next chapter.
Chapter 2

Non-linear systems and linearization

2.1 Example of non-linear equations


In example 1.3.5, assume that i_f is changed by changing V_f, and further that L_f is non-linear (i.e. iron saturation), where L_f = L_f(i_f).
We have: x = (i_a, ω, i_f)^T; u = (V_a, T_L)^T

\dot{i}_a = -\frac{R_a}{L_a} i_a - \frac{k}{L_a} \,[L_f(i_f)\, i_f\, ω]\, + \frac{1}{L_a} V_a

\dot{ω} = \frac{k}{J} \,[L_f(i_f)\, i_f\, i_a]\, - \frac{B}{J} ω - \frac{1}{J} T_L

\dot{i}_f = -\frac{R_f}{L_f(i_f)} i_f + \frac{V_f}{L_f(i_f)}

which cannot be put into the form \dot{x} = Ax + Bu on account of the terms in the boxes (shown here in square brackets). We
write the equations in general form:

\dot{x}_1 = f_1(x_1, x_2, ..., u_1, u_2, ...) = f_1(x, u)

\dot{x}_2 = f_2(x_1, x_2, ..., u_1, u_2, ...) = f_2(x, u)

where f_1, f_2, f_3, ... are non-linear functions of x_1, x_2, x_3, ..., u_1, u_2, u_3, .... These equations are then written:

\dot{x} = f(x, u)    (2.1)

which is the general state space equation form.

2.2 Linearization
Solutions to (2.1) are obtained from general purpose differential equation solver routines, and there is no problem in solving a transition of a non-linear system. Deriving control laws for non-linear systems is, however, difficult. The technique pursued here is to assume that the system is going to be controlled about a steady state operating point such that any travel away from the operating point will be small. We may therefore consider only small changes (Δx) about the steady state or equilibrium point x_0. Considering the i-th equation:

\frac{d}{dt}(x_0 + Δx)_i = (\dot{x}_0 + Δ\dot{x})_i = f_i(x_0 + Δx,\, u_0 + Δu)
= f_i(x_0, u_0) + Δx_1 \left.\frac{\partial f_i}{\partial x_1}\right|_{x_0,u_0} + Δx_2 \left.\frac{\partial f_i}{\partial x_2}\right|_{x_0,u_0} + \cdots + Δu_1 \left.\frac{\partial f_i}{\partial u_1}\right|_{x_0,u_0} + ...

and of course at equilibrium \dot{x}_0 = 0 = f_i(x_0, u_0) for all i. Hence:


Δ\dot{x} = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{pmatrix}_{x_0,u_0} Δx + \begin{pmatrix} \frac{\partial f_1}{\partial u_1} & \frac{\partial f_1}{\partial u_2} & \cdots & \frac{\partial f_1}{\partial u_m} \\ \frac{\partial f_2}{\partial u_1} & \frac{\partial f_2}{\partial u_2} & \cdots & \frac{\partial f_2}{\partial u_m} \\ \vdots & \vdots & & \vdots \\ \frac{\partial f_n}{\partial u_1} & \frac{\partial f_n}{\partial u_2} & \cdots & \frac{\partial f_n}{\partial u_m} \end{pmatrix}_{x_0,u_0} Δu

or

Δ\dot{x} = A(x_0, u_0)\,Δx + B(x_0, u_0)\,Δu    (2.2)

where (x_0, u_0) is the equilibrium point satisfying \dot{x}_0 = 0 = f(x_0, u_0). Knowing x_0 and u_0, A and B are then derived from (2.2), which represents a linear set of equations for small motions about the equilibrium point.

2.2.1 Example
\dot{x}_1 = -x_1 + x_2^2 + u_1 u_2 = f_1(x, u)
\dot{x}_2 = -2x_2 + x_1 x_2 + u_2 = f_2(x, u)

Linearizing:

Δ\dot{x}_1 = -Δx_1 + 2x_{20}\,Δx_2 + u_{20}\,Δu_1 + u_{10}\,Δu_2
Δ\dot{x}_2 = x_{20}\,Δx_1 + (-2 + x_{10})\,Δx_2 + 0\cdot Δu_1 + 1\cdot Δu_2

Hence:

Δ\dot{x} = \begin{pmatrix} -1 & 2x_{20} \\ x_{20} & -2 + x_{10} \end{pmatrix} Δx + \begin{pmatrix} u_{20} & u_{10} \\ 0 & 1 \end{pmatrix} Δu

For the operating point defined by u_{10} = u_{20} = 0:

0 = -x_{10} + x_{20}^2
0 = -2x_{20} + x_{10} x_{20}    →    x_{10} = x_{20} = 0

Δ\dot{x} = \begin{pmatrix} -1 & 0 \\ 0 & -2 \end{pmatrix} Δx + \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} Δu

An alternative solution is x_{10} = 2, x_{20} = ±\sqrt{2}; in this case the linearised system would be:

Δ\dot{x} = \begin{pmatrix} -1 & ±2\sqrt{2} \\ ±\sqrt{2} & 0 \end{pmatrix} Δx + \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} Δu

Note the non-linear nature of the original system leads to three different equilibrium points for the inputs u_{10} = u_{20} = 0.
Now, for the operating point defined by u_{10} = 0, u_{20} = 1:

0 = -x_{10} + x_{20}^2
0 = -2x_{20} + x_{10} x_{20} + 1

then x_{10} = x_{20}^2 → x_{20}^3 - 2x_{20} + 1 = 0 → x_{10} = 1, x_{20} = 1

Δ\dot{x} = \begin{pmatrix} -1 & 2 \\ 1 & -1 \end{pmatrix} Δx + \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} Δu

Again, the remaining two operating point solutions would be: x_{20} = -\frac{1}{2} ± \frac{1}{2}\sqrt{5}, x_{10} = \left(-\frac{1}{2} ± \frac{1}{2}\sqrt{5}\right)^2
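The equilibrium points of this worked example can be verified directly by substituting them back into the non-linear equations (a quick numerical check, nothing more):

```python
import math

def f(x, u):
    # The example's non-linear equations.
    x1, x2 = x
    u1, u2 = u
    return [-x1 + x2 ** 2 + u1 * u2,
            -2.0 * x2 + x1 * x2 + u2]

r2 = math.sqrt(2.0)

# Three equilibria for u10 = u20 = 0:
for x_eq in ([0.0, 0.0], [2.0, r2], [2.0, -r2]):
    residual = f(x_eq, [0.0, 0.0])
    assert all(abs(ri) < 1e-12 for ri in residual)

# Equilibrium for u10 = 0, u20 = 1:
assert all(abs(ri) < 1e-12 for ri in f([1.0, 1.0], [0.0, 1.0]))
```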
Chapter 3

State Space Equations from Transfer Functions

Purpose: Given \frac{V(s)}{U(s)} = G(s), find \dot{x} = Ax + Bu, v = Cx + Du.
Note that G(s) relates v to u. There are no x's anywhere. In fact, the choice of x is rather arbitrary. This may seem annoying but, in fact, it is precisely the lack of uniqueness in x which gives the state space form its power.
Assume we have a simple G(s) with no zeros:

G(s) = \frac{V(s)}{U(s)} = \frac{1}{s^3 + 2s^2 + 2s + 1}

Regard s as \frac{d}{dt}:

(s^3 + 2s^2 + 2s + 1)\, V(s) = U(s)

\dddot{v} + 2\ddot{v} + 2\dot{v} + v = u

Choose

x = \begin{pmatrix} \ddot{v} \\ \dot{v} \\ v \end{pmatrix}

then

\dot{x}_1 = \dddot{v} = -2\ddot{v} - 2\dot{v} - v + u
\dot{x}_2 = x_1
\dot{x}_3 = x_2

\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{pmatrix} = \begin{pmatrix} -2 & -2 & -1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} u

v = \begin{pmatrix} 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}

G(s) with zeros:

G(s) = \frac{V(s)}{U(s)} = \frac{b_1 s^2 + b_2 s + b_3}{s^3 + 2s^2 + 2s + 1}


Figure 3.1: Transfer function G(s)

3.1 Method 1
Expressing G(s) as a product of two blocks, as shown in fig. 3.1, we have

\dddot{e} + a_1\ddot{e} + a_2\dot{e} + a_3 e = u

let

x = \begin{pmatrix} \ddot{e} \\ \dot{e} \\ e \end{pmatrix}

then from the above section we have:

\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{pmatrix} = \begin{pmatrix} -a_1 & -a_2 & -a_3 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} u    (3.1)

We also have:

v = b_1\ddot{e} + b_2\dot{e} + b_3 e = b_1 x_1 + b_2 x_2 + b_3 x_3

therefore:

v = \begin{pmatrix} b_1 & b_2 & b_3 \end{pmatrix} x

Note the structure. The negated coefficients of the denominator polynomial are on the first row of A, B = (1\; 0\; 0)^T, C = (b_1\; b_2\; b_3).
You can therefore write down the matrices A, B, C directly from the transfer function. Equation (3.1) is called the CONTROL CANONICAL FORM (CCF).
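Since the CCF can be written down by inspection, it can also be built mechanically from the coefficients. A small helper (the function name is mine, not from the notes):

```python
def control_canonical(a, b):
    # Build A, B, C of the CCF for
    #   G(s) = (b1 s^(n-1) + ... + bn) / (s^n + a1 s^(n-1) + ... + an),
    # with a = [a1, ..., an] and b = [b1, ..., bn].
    n = len(a)
    A = [[-ai for ai in a]]                            # negated denominator coefficients, first row
    for i in range(n - 1):                             # shifted identity below
        A.append([1.0 if j == i else 0.0 for j in range(n)])
    B = [[1.0]] + [[0.0] for _ in range(n - 1)]
    C = [list(b)]
    return A, B, C

# The text's example denominator s^3 + 2s^2 + 2s + 1 with an assumed numerator:
A, B, C = control_canonical([2.0, 2.0, 1.0], [1.0, 4.0, 3.0])
```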

3.2 Method 2
From G(s) = \frac{V(s)}{U(s)} = \frac{b_1 s^2 + b_2 s + b_3}{s^3 + 2s^2 + 2s + 1}, writing the denominator coefficients generally as a_1, a_2, a_3, we have

\dddot{v} + a_1\ddot{v} + a_2\dot{v} + a_3 v = b_1\ddot{u} + b_2\dot{u} + b_3 u

First of all, we separate the variables with and without derivatives, and we define the state x_3 as follows:

\dot{x}_3 = \dddot{v} + a_1\ddot{v} + a_2\dot{v} - b_1\ddot{u} - b_2\dot{u} = -a_3 v + b_3 u

therefore:

x_3 = \ddot{v} + a_1\dot{v} + a_2 v - b_1\dot{u} - b_2 u

Repeating the same process again:

\dot{x}_2 = \ddot{v} + a_1\dot{v} - b_1\dot{u} = x_3 - a_2 v + b_2 u

x_2 = \dot{v} + a_1 v - b_1 u

And once more:

\dot{x}_1 = \dot{v} = x_2 - a_1 v + b_1 u

x_1 = v

Therefore:

\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{pmatrix} = \begin{pmatrix} -a_1 & 1 & 0 \\ -a_2 & 0 & 1 \\ -a_3 & 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} + \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} u

v = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}

-a_1, -a_2, -a_3 appear down the first column instead of the first row. b_1, b_2, b_3 appear in B, and now C = (1\; 0\; 0).
This form of the state space equation is called the OBSERVER CANONICAL FORM (OCF).
The matrices A, B, C are different for methods 1 and 2 because the state variables chosen to represent the system are different.
There are many ways in which x can be chosen for a system, each way resulting in different A, B, C. The two forms above, the CCF and OCF, are very useful since the state space equations can be written down by inspection of G(s).
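Because the CCF and OCF realize the same G(s), their zero-initial-condition responses to the same input must coincide. A quick simulated check (forward Euler; the particular numerator coefficients are an illustrative assumption):

```python
def matvec(M, v):
    return [sum(m * vj for m, vj in zip(row, v)) for row in M]

def simulate(A, B, C, u, dt, n):
    # Forward-Euler step response of x' = Ax + Bu, v = Cx, from x = 0.
    x = [0.0] * len(A)
    for _ in range(n):
        dx = [ax + b[0] * u for ax, b in zip(matvec(A, x), B)]
        x = [xi + dt * di for xi, di in zip(x, dx)]
    return matvec(C, x)[0]

a1, a2, a3 = 2.0, 2.0, 1.0          # denominator from the text
b1, b2, b3 = 1.0, 1.0, 2.0          # assumed numerator

A_ccf = [[-a1, -a2, -a3], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
B_ccf = [[1.0], [0.0], [0.0]]
C_ccf = [[b1, b2, b3]]

A_ocf = [[-a1, 1.0, 0.0], [-a2, 0.0, 1.0], [-a3, 0.0, 0.0]]
B_ocf = [[b1], [b2], [b3]]
C_ocf = [[1.0, 0.0, 0.0]]

v_ccf = simulate(A_ccf, B_ccf, C_ccf, u=1.0, dt=1e-3, n=3000)
v_ocf = simulate(A_ocf, B_ocf, C_ocf, u=1.0, dt=1e-3, n=3000)
```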
It is seen, however, that the x vectors may not be physically measurable quantities. For the OCF, for example, x_1 = v, the output, but x_2 and x_3 are "artificial". If the system were electrical, x_2 might be a function of currents and voltages within the circuit.
It may be desired to select x which have a physical reality, in which case:

3.3 Method 3

Figure 3.2: Block diagram of an electromechanical system

Often physical control systems are made up of distinct subsystems, e.g. amplifier, motor, mechanical dynamics (fig. 3.2). θ, v and T (or current) are all measurable. They can be selected as states.
There will be n state variables associated with an n-th order block. In fig. 3.2, one state is associated with each of the 3 first order blocks. We can therefore assign a state to the output of each first order block.
Let x_1 = θ, x_2 = v, x_3 = T, u = V_m; therefore

X_1(s) = \frac{X_2(s)}{s}; \quad X_2(s) = \frac{2 X_3(s)}{s+1}; \quad X_3(s) = \frac{U(s)}{s+10}

i.e.:

\dot{x}_1 = x_2; \quad \dot{x}_2 = -x_2 + 2x_3; \quad \dot{x}_3 = -10x_3 + u

and

\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & -1 & 2 \\ 0 & 0 & -10 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} u

and the output is \begin{pmatrix} 1 & 0 & 0 \end{pmatrix} x.
Note that A, B are not in CCF or OCF. You could put the system into CCF or OCF by combining the blocks to yield G(s) = \frac{2}{s(s+1)(s+10)} and applying methods 1 or 2 above.
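The block-by-block realization and the CCF of the combined G(s) = 2/(s^3 + 11s^2 + 10s) describe the same system, so from rest they must give the same output. A forward-Euler sketch of that check (step size assumed small enough for this comparison):

```python
def matvec(M, v):
    return [sum(m * vj for m, vj in zip(row, v)) for row in M]

def simulate(A, B, C, u, dt, n):
    x = [0.0] * len(A)
    for _ in range(n):
        dx = [ax + b * u for ax, b in zip(matvec(A, x), B)]
        x = [xi + dt * di for xi, di in zip(x, dx)]
    return matvec(C, x)[0]

# Method 3: one state per first order block (x1 = output angle).
A_blk = [[0.0, 1.0, 0.0], [0.0, -1.0, 2.0], [0.0, 0.0, -10.0]]
B_blk = [0.0, 0.0, 1.0]
C_blk = [[1.0, 0.0, 0.0]]

# CCF of 2/(s^3 + 11 s^2 + 10 s).
A_ccf = [[-11.0, -10.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
B_ccf = [1.0, 0.0, 0.0]
C_ccf = [[0.0, 0.0, 2.0]]

theta_blk = simulate(A_blk, B_blk, C_blk, u=1.0, dt=1e-3, n=2000)
theta_ccf = simulate(A_ccf, B_ccf, C_ccf, u=1.0, dt=1e-3, n=2000)
```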

Figure 3.3: System with a 2nd order block with complex roots

What happens if a block is second order with complex roots (fig. 3.3)? There will be 2 states associated with the second order block; one can be θ, what of the other?
Let x_1 = θ, x_3 = T_m, \dot{x}_3 = -10x_3 + u as before.
Now apply Method 2 to the second order block (not Method 1, since that would not yield x_1 = θ). I.e.:

\ddot{x}_1 + 2\dot{x}_1 + 2x_1 = \dot{x}_3 + x_3

Let

\dot{x}_2 = \ddot{x}_1 + 2\dot{x}_1 - \dot{x}_3 = -2x_1 + x_3
x_2 = \dot{x}_1 + 2x_1 - x_3
\dot{x}_1 = -2x_1 + x_2 + x_3

giving

\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{pmatrix} = \begin{pmatrix} -2 & 1 & 1 \\ -2 & 0 & 1 \\ 0 & 0 & -10 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} u

and x_2 is, of course, an "artificial" state.
And finally...

3.4 Method 4
We saw that in the CCF and OCF, the A matrices had a special form containing the negative coefficients of the denominator polynomial of G(s). The equations are called canonical when there is something special about A. A particularly powerful canonical form is when the POLES of G(s) appear down the diagonal of A.
E.g., given the system in fig. 3.4, can we find or choose states x which will result in \dot{x} = Ax + Bu where

Figure 3.4: Block diagram of a third order system



A = \begin{pmatrix} 0 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -10 \end{pmatrix} ?

The answer is YES! But because this canonical form is so important, we will give it some special symbols.
When A contains the poles of G(s) down its diagonal we write it as Λ.
The choice of x giving Λ we write y.
The state space equations become \dot{y} = Λy + Qu.
This canonical form is called the diagonal canonical form, or MODAL canonical form, or just MODAL form.
Q is the input matrix (i.e. B) when the equations are in modal form.
Procedure: Start by assigning x_1, x_2, x_3 as in Method 3 above (see fig. 3.5)

Figure 3.5: Assignment of state variables x1 , x2 and x3

Then, carry out the partial fraction expansion of the transfer function of each state variable:

\frac{X_1(s)}{U(s)} = \frac{2}{s(s+1)(s+10)} = \frac{1/5}{s} + \frac{-2/9}{s+1} + \frac{1/45}{s+10}

\frac{X_2(s)}{U(s)} = \frac{2}{(s+1)(s+10)} = 0 + \frac{2/9}{s+1} + \frac{-2/9}{s+10}

\frac{X_3(s)}{U(s)} = \frac{1}{s+10} = 0 + 0 + \frac{1}{s+10}

The three columns of terms correspond to mode 1 (pole = 0), mode 2 (pole = -1) and mode 3 (pole = -10) respectively.
Now define our new states y which will put the system into modal form:

Y_1(s) = \frac{U(s)}{s}; \quad Y_2(s) = \frac{U(s)}{s+1}; \quad Y_3(s) = \frac{U(s)}{s+10}

therefore:

\dot{y}_1 = u; \quad \dot{y}_2 = -y_2 + u; \quad \dot{y}_3 = -10y_3 + u

and

\begin{pmatrix} \dot{y}_1 \\ \dot{y}_2 \\ \dot{y}_3 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -10 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} u

i.e. of the form \dot{y} = Λy + Qu. Taking x_1 as the output:

v = \begin{pmatrix} 1/5 & -2/9 & 1/45 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}

The states y are of course completely artificial. The relationship between the physical states (x) and the new states (y) is:

\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 1/5 & -2/9 & 1/45 \\ 0 & 2/9 & -2/9 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = Ψ y

where Ψ is the matrix of partial fraction coefficients. The states y can be obtained from x by using y = Tx, where T = Ψ^{-1}.
T is termed a Transformation Matrix, which transforms the physical states x into the artificial states y. y_i is termed the i-th mode of the system.
We can also draw what is termed the MODAL BLOCK DIAGRAM (fig. 3.6).

Figure 3.6: Modal block diagram

Note for a single input, single output (SISO) system with real non-repeated poles, Q will always be (1\; 1\; 1)^T if the partial fraction method is used.
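For real distinct poles, the same modal form drops out of a numerical eigendecomposition of any realization of G(s). A sketch of that check (numpy is assumed to be available):

```python
import numpy as np

# Method 3 realization of G(s) = 2/(s(s+1)(s+10)).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, -1.0, 2.0],
              [0.0, 0.0, -10.0]])

eigvals, V = np.linalg.eig(A)       # columns of V are eigenvectors
Lam = np.linalg.inv(V) @ A @ V      # T A T^{-1} with T = V^{-1}: should be diagonal Λ
```

The poles of G(s) reappear as the eigenvalues of A, whatever the chosen state variables; this is the coordinate-free fact that makes the modal form canonical.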
Unfortunately complications occur when:

1. an x_i/u term contains an equal order numerator and denominator.

2. complex poles exist.

3. repeated real roots exist.

3.4.1 x_i/u containing equal order numerator and denominator

Figure 3.7: Block diagram

If we apply the partial fraction expansion method to the system in fig. 3.7, we will have:

\frac{X_1(s)}{U(s)} = \frac{(s+p)(s+q)}{(s+α)(s+β)(s+γ)} = \frac{a_1}{s+α} + \frac{a_2}{s+β} + \frac{a_3}{s+γ}

\frac{X_2(s)}{U(s)} = \frac{(s+p)(s+q)}{(s+β)(s+γ)} = b_0 + \frac{b_1}{s+β} + \frac{b_2}{s+γ}

\frac{X_3(s)}{U(s)} = \frac{s+p}{s+γ} = c_0 + \frac{c_1}{s+γ}

where b_0 and c_0 are constants arising from the fact that X_2(s)/U(s) and X_3(s)/U(s) have numerator and denominator of equal order.

Defining Y_1(s) = \frac{U(s)}{s+α}, Y_2(s) = \frac{U(s)}{s+β}, Y_3(s) = \frac{U(s)}{s+γ}, we have

\dot{y} = \begin{pmatrix} -α & 0 & 0 \\ 0 & -β & 0 \\ 0 & 0 & -γ \end{pmatrix} y + \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} u

as before.
But now x = Ψy + Pu, where P is, in this case, \begin{pmatrix} 0 \\ b_0 \\ c_0 \end{pmatrix}, and y = Ψ^{-1}(x - Pu).
What this means is that x is not a true state vector (in fact, both x_2 and x_3 depend on time derivatives of u, which is not allowed). y is of course a true state vector, and so we have succeeded in putting the system into a canonical state space form. The expression y = Ψ^{-1}(x - Pu) is not problematic.

3.4.2 Systems with complex poles

Figure 3.8: System with complex poles

Figure 3.8 shows a system with complex poles where x_2 is "hidden" in the second order block.
Carry out the partial fraction expansion, noting s^2 + 2s + 10 = (s+1)^2 + 3^2:

\frac{X_1(s)}{U(s)} = \frac{10}{(s+2)(s^2+2s+10)} = \frac{1}{3}\cdot\frac{3}{(s+1)^2+3^2} - 1\cdot\frac{s+1}{(s+1)^2+3^2} + \frac{1}{s+2}

\frac{X_2(s)}{U(s)} = \frac{10}{s^2+2s+10} = \frac{10}{3}\cdot\frac{3}{(s+1)^2+3^2} + 0 + 0

\frac{X_3(s)}{U(s)} = \frac{1}{s+2} = 0 + 0 + \frac{1}{s+2}

The three columns of terms correspond to the e^{-t} \sin 3t mode, the e^{-t} \cos 3t mode and the e^{-2t} mode respectively.
There are three modes, the exponentially decaying "sin" and "cos" modes and the e^{-2t} mode.
We have much freedom to define x_2 how we like, so long as we ensure that it is independent from x_1 and x_3 (i.e. if x_2 can be expressed as k_1 x_1 + k_2 x_3, where k_1 and k_2 are constants, then x_2 is NOT independent of x_1 and x_3). Above, x_2 is related to x_1 and x_3 via functions of s and so it is OK. The actual choice of x_2 is made for easy mathematics, and it is common to put it in terms of a single e^{σt} \sin ωt or e^{σt} \cos ωt as above.
Defining:

Y_1(s) = \frac{3}{(s+1)^2+3^2} U(s); \quad Y_2(s) = \frac{s+1}{(s+1)^2+3^2} U(s); \quad Y_3(s) = \frac{1}{s+2} U(s)

(i.e. to make y_i the i-th mode of the system), then we have:

x = Ψ y = \begin{pmatrix} 1/3 & -1 & 1 \\ 10/3 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} y

Note Ψ must be non-singular, which it will be if x_1, x_2 and x_3 are all independent. From the above definitions of y_i we can immediately draw the modal block diagram (fig. 3.9).

Figure 3.9: Modal block diagram

and we are left with having to find $\Lambda$ and $Q$ in $\dot{y} = \Lambda y + Qu$.

Multiplying $Y_1(s)$ by $(s+1)$ yields:
$$Y_1(s)(s+1) = \frac{3(s+1)}{(s+1)^2+3^2}U(s) = 3Y_2(s)$$
i.e.
$$\dot{y}_1 = -y_1 + 3y_2 \tag{3.2}$$
Multiplying $Y_2(s)$ by $(s+1)$ yields:
$$Y_2(s)(s+1) = \frac{(s+1)^2}{(s+1)^2+3^2}U(s)$$
and
$$Y_2(s)(s+1) + 3Y_1(s) = \frac{(s+1)^2}{(s+1)^2+3^2}U(s) + \frac{3^2}{(s+1)^2+3^2}U(s) = U(s)$$
i.e.
$$\dot{y}_2 = -3y_1 - y_2 + u \tag{3.3}$$
and of course, we have:
$$\dot{y}_3 = -2y_3 + u \tag{3.4}$$
Combining (3.2), (3.3) and (3.4) yields:
$$\dot{y} = \begin{pmatrix} -1 & 3 & 0 \\ -3 & -1 & 0 \\ 0 & 0 & -2 \end{pmatrix} y + \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} u$$
i.e. the real part of the complex pole appears on the diagonal and the imaginary part off the diagonal.
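This real block-modal form can be checked numerically; the sketch below (assuming numpy is available) confirms that the eigenvalues of the real matrix above are still the original poles $-1 \pm j3$ and $-2$:

```python
import numpy as np

# Real modal form derived above: the sigma = -1, omega = 3 block plus the -2 pole.
Lam = np.array([[-1.0, 3.0, 0.0],
                [-3.0, -1.0, 0.0],
                [0.0, 0.0, -2.0]])

eigs = np.linalg.eigvals(Lam)
# Sorting by real part then imaginary part gives -2, -1-3j, -1+3j.
print(np.sort_complex(eigs))
```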

3.4.3 System with repeated real poles


Figure 3.10 shows a system with real repeated poles: $s^2 - 2s + 1 = (s-1)^2$. The partial fraction expansion yields:
$$\frac{X_1(s)}{U(s)} = \frac{1}{(s+3)(s^2-2s+1)} = \frac{1}{4}\cdot\frac{1}{(s-1)^2} - \frac{1}{16}\cdot\frac{1}{s-1} + \frac{1}{16}\cdot\frac{1}{s+3}$$
$$\frac{X_2(s)}{U(s)} = \frac{1}{s-1} = 0 + \frac{1}{s-1} + 0$$
$$\frac{X_3(s)}{U(s)} = \frac{1}{s+3} = 0 + 0 + \frac{1}{s+3}$$

Figure 3.10: System with repeated real poles

Defining:
$$Y_1(s) = \frac{1}{(s-1)^2}U(s); \quad Y_2(s) = \frac{1}{s-1}U(s); \quad Y_3(s) = \frac{1}{s+3}U(s)$$
Noting that $Y_1(s) = \frac{1}{s-1}Y_2(s)$, we can express the above definitions as:
$$\dot{y}_1 = y_1 + y_2; \quad \dot{y}_2 = y_2 + u; \quad \dot{y}_3 = -3y_3 + u$$
yielding
$$\dot{y} = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -3 \end{pmatrix} y + \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} u$$
with the modal block diagram shown in fig. 3.11.

Figure 3.11: Modal block diagram

And the corresponding modal matrix is:
$$x = \Phi y = \begin{pmatrix} \frac{1}{4} & -\frac{1}{16} & \frac{1}{16} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} y$$

3.5 Change of coordinates


All of this is very nice, but:

How can we have more than one state space representation for the same system?

If the state space representation is di¤erent for a di¤erent set of state variables:

– When I calculate one representation, do I have to do all the work again for another
representation?
– Are these state space representations “equivalent” in some sense?

How can I go the other way around, i.e. if I have the state space equations, how can I
obtain the Transfer Function(s) of the system?

The …rst question will be addressed in this section, and the second and third in the next
section.
Assume we know the State Space representation of a system using a particular set of State
Variables (x):
$$\dot{x} = Ax + Bu; \quad v = Cx + Du$$

Now we want to represent the system dynamics using a DIFFERENT set of state variables ($z$). We could try to obtain all the state equations again from the physical model. Alternatively, if we know the relationship between the "old" ($x$) and "new" ($z$) state variables, we can express the above equation directly in terms of the new state variables to obtain the new representation.

$$z = Tx \quad\rightarrow\quad x = T^{-1}z$$
Therefore, substituting $x$ in the state space equation, we have:
$$T^{-1}\dot{z} = AT^{-1}z + Bu$$
Solving for $\dot{z}$ yields:
$$\dot{z} = TAT^{-1}z + TBu$$
And applying the same definition to the output equation:
$$v = CT^{-1}z + Du$$
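A short numerical illustration (a sketch assuming numpy is available; the particular $A$, $B$, $C$ and $T$ below are arbitrary illustrative choices, not taken from the text) shows that the transformed system $(TAT^{-1}, TB, CT^{-1})$ has exactly the same eigenvalues as the original:

```python
import numpy as np

# Change of state coordinates z = T x turns (A, B, C) into (T A T^-1, T B, C T^-1).
A = np.array([[0.0, 1.0], [-6.0, -5.0]])   # poles at -2 and -3
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
T = np.array([[1.0, 1.0], [0.0, 2.0]])     # any non-singular T will do

Ti = np.linalg.inv(T)
Az, Bz, Cz = T @ A @ Ti, T @ B, C @ Ti

# The dynamics are unchanged: eigenvalues of A and of T A T^-1 coincide.
print(np.sort(np.linalg.eigvals(A).real), np.sort(np.linalg.eigvals(Az).real))
```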

3.6 Transfer functions from state space equations


How to go the other way around, i.e. obtain the transfer function from the state space equations.

x_ = Ax + Bu; v = Cx + Du
In order to obtain the transfer function of a system we apply the Laplace transform with zero initial conditions. Remember the transfer function of a system is $G(s) = \frac{V(s)}{U(s)}$, i.e. $V(s) = G(s)U(s)$.
$$sX(s) = AX(s) + BU(s) \;\rightarrow\; (sI - A)X(s) = BU(s) \;\rightarrow\; X(s) = (sI - A)^{-1}BU(s)$$
$$V(s) = \left[C(sI - A)^{-1}B + D\right]U(s)$$
$$V(s) = \left[C\,\frac{\operatorname{adj}(sI - A)}{|sI - A|}\,B + D\right]U(s)$$
$$V(s) = \frac{1}{|sI - A|}\left[C\operatorname{adj}(sI - A)B + D\,|sI - A|\right]U(s)$$
Let us take a closer look at the result. For SISO systems, we obtain a single transfer function,
but for MIMO systems we obtain a SET of Transfer Functions (this is good news).
The characteristic equation of ALL these transfer functions is given by $|sI - A| = 0$. Note $|sI - A|$ is a polynomial of degree equal to the number of states (interesting, isn't it?).

Remember the definition of EIGENVALUE of a matrix. The eigenvalues of a matrix $A$ are obtained by solving the equation:
$$|\lambda I - A| = 0$$
Therefore the poles of the system are equal to the eigenvalues of matrix $A$.
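This can be checked numerically; in the sketch below (numpy assumed, with an illustrative triangular $A$), the roots of $|sI - A|$ coincide with the eigenvalues of $A$:

```python
import numpy as np

# np.poly(A) returns the coefficients of the characteristic polynomial |sI - A|.
A = np.array([[-2.0, 2.0, 3.0],
              [0.0, -3.0, 4.0],
              [0.0, 0.0, -5.0]])

char_poly = np.poly(A)
roots = np.roots(char_poly)
print(np.sort(roots.real))              # -5, -3, -2: the poles
print(np.sort(np.linalg.eigvals(A)))    # identical: the eigenvalues of A
```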
Now you know how to obtain DIFFERENT state space representations of the SAME system. You also know how to obtain the transfer function from these representations: $V(s) = \left[C(sI - A)^{-1}B + D\right]U(s)$.
Does it mean that we will obtain DIFFERENT Transfer Functions if we use DIFFERENT
State Space representations as a starting point? The obvious answer is NO. The same system
always has the same transfer function. Let us prove it.
First of all, define the relationship between "new" and "old" state variables:
$$z = Tx \quad\rightarrow\quad x = T^{-1}z$$
Then calculate the transfer function of the "new" system:
$$\dot{z} = TAT^{-1}z + TBu; \qquad v = CT^{-1}z + Du$$
$$V(s) = \left[CT^{-1}\left(sI - TAT^{-1}\right)^{-1}TB + D\right]U(s)$$
$$V(s) = \left[C\left(T^{-1}\left(sI - TAT^{-1}\right)T\right)^{-1}B + D\right]U(s)$$
$$V(s) = \left[C\left(T^{-1}sIT - T^{-1}TAT^{-1}T\right)^{-1}B + D\right]U(s)$$
$$V(s) = \left[C\left(sI\,T^{-1}T - A\right)^{-1}B + D\right]U(s)$$
$$V(s) = \left[C\left(sI - A\right)^{-1}B + D\right]U(s)$$
Therefore, starting from the State Space equations based on the “new” variables (z), we
have proved that the obtained transfer function is exactly the same as the one obtained from
the State Space equations based on the “old” state variables (x).

3.6.1 Example 1
Obtain the transfer function of the following system:

$$\dot{x} = \begin{pmatrix} -2 & 2 & 3 \\ 0 & -3 & 4 \\ 0 & 0 & -5 \end{pmatrix} x + \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} u; \qquad v = \begin{pmatrix} 4 & 0 & -1 \end{pmatrix} x$$
The transfer function is obtained by applying $V(s) = \left[C(sI - A)^{-1}B + D\right]U(s)$:
$$V(s) = \begin{pmatrix} 4 & 0 & -1 \end{pmatrix}\left(sI - \begin{pmatrix} -2 & 2 & 3 \\ 0 & -3 & 4 \\ 0 & 0 & -5 \end{pmatrix}\right)^{-1}\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} U(s)$$
$$V(s) = \begin{pmatrix} 4 & 0 & -1 \end{pmatrix}\begin{pmatrix} s+2 & -2 & -3 \\ 0 & s+3 & -4 \\ 0 & 0 & s+5 \end{pmatrix}^{-1}\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} U(s)$$
$$V(s) = \frac{\begin{pmatrix} 4 & 0 & -1 \end{pmatrix}\begin{pmatrix} s^2+8s+15 & 2s+10 & 3s+17 \\ 0 & s^2+7s+10 & 4s+8 \\ 0 & 0 & s^2+5s+6 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}}{(s+2)(s+3)(s+5)}\,U(s)$$
$$V(s) = \frac{s^2 + 69s + 326}{(s+2)(s+3)(s+5)}\,U(s)$$
If we had two inputs and two outputs:
$$V(s) = \frac{\begin{pmatrix} 4 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} s^2+8s+15 & 2s+10 & 3s+17 \\ 0 & s^2+7s+10 & 4s+8 \\ 0 & 0 & s^2+5s+6 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 2 & 0 \\ 3 & 0 \end{pmatrix}}{(s+2)(s+3)(s+5)}\,U(s)$$
$$V(s) = \frac{1}{(s+2)(s+3)(s+5)}\begin{pmatrix} s^2+69s+326 & 4s^2+32s+60 \\ 2s^2+26s+44 & 0 \end{pmatrix} U(s)$$
Note the expression $V(s) = \left[C(sI - A)^{-1}B + D\right]U(s)$ gives all the possible transfer functions, i.e.:
$$V_1(s) = \frac{s^2+69s+326}{(s+2)(s+3)(s+5)}U_1(s) + \frac{4s^2+32s+60}{(s+2)(s+3)(s+5)}U_2(s)$$
$$V_2(s) = \frac{2s^2+26s+44}{(s+2)(s+3)(s+5)}U_1(s)$$
Chapter 4

Transforming to canonical form

4.1 Introduction
Given, say:

$$\frac{V(s)}{U(s)} = \frac{s+6}{(s+1)(s+2)(s-4)} = \frac{s+6}{s^3 - s^2 - 10s - 8}$$
you can already write down the state space equations in a number of forms, i.e.:

1. You may have chosen x ”physically”. You will get:

x_ = Ax + Bu; v = Cx

2. Control Canonical Form (also called companion form):
$$\dot{x}_c = A_c x_c + B_c u; \quad v = C_c x_c; \qquad A_c = \begin{pmatrix} 1 & 10 & 8 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$$
3. Observer Canonical Form:
$$\dot{x}_o = A_o x_o + B_o u; \quad v = C_o x_o; \qquad A_o = \begin{pmatrix} 1 & 1 & 0 \\ 10 & 0 & 1 \\ 8 & 0 & 0 \end{pmatrix}$$
4. Diagonal Canonical Form:
$$\dot{x}_d = A_d x_d + B_d u; \quad v = C_d x_d$$
$$\dot{y} = \Lambda y + Qu; \quad v = C_d y; \qquad \Lambda = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 4 \end{pmatrix}$$
where $A_c$, $A_o$, $\Lambda$ can be written down by inspection. The eigenvalues of $A$, $A_c$, $A_o$ and $\Lambda$ are the same ($\lambda_1 = -1$, $\lambda_2 = -2$, $\lambda_3 = 4$). The CCF and OCF are normally used for SISO systems. DCF or the Modal Form is generally used for MIMO systems.
It is often found however that when modelling physical systems it is easier to write down the state space equations than to write down the transfer function (that is certainly true for aircraft, missiles, motors, non-linear and MIMO systems, for example). Also, when modelling, the states of the systems are, in general, "physical", so that the resulting state space equations are not in canonical form.
The problem addressed is:
Given x_ = Ax + Bu; v = Cx transform directly to a canonical form without going to
transfer functions …rst (note for MIMO systems, going from transfer functions to canonical form
is either di¢ cult or impossible). We transform directly to a canonical form by trying to …nd a
matrix T such that xcanonical = T x.

4.2 Transforming to Diagonal or Modal Form


We wish to find a matrix $T$ such that
$$y = Tx \tag{4.1}$$
Given
$$\dot{x} = Ax + Bu; \quad v = Cx$$
and $y = Tx$ or $x = T^{-1}y$ (and hence $\dot{x} = T^{-1}\dot{y}$), then:
$$T\dot{x} = TAx + TBu; \quad v = Cx$$
$$\dot{y} = TAT^{-1}y + TBu; \quad v = CT^{-1}y \tag{4.2}$$
$$\dot{y} = \Lambda y + Qu$$
$$\Lambda = TAT^{-1}; \qquad Q = TB \tag{4.3}$$
where $\Lambda$ is the diagonal matrix of eigenvalues (poles).
Consider the eigenvalues of $A$. From the definition of eigenvectors we have:
$$A\xi_1 = \lambda_1\xi_1; \quad A\xi_2 = \lambda_2\xi_2; \quad \ldots; \quad A\xi_n = \lambda_n\xi_n$$
where $\xi_i$ is an $n \times 1$ column vector. If we formulate an $n \times n$ matrix $\Phi = \left(\xi_1, \xi_2, \ldots, \xi_n\right)$, then we can express the above equations as:
$$A\Phi = A\left(\xi_1, \xi_2, \ldots, \xi_n\right) = \left(\xi_1, \xi_2, \ldots, \xi_n\right)\begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix} \tag{4.4}$$
therefore
$$A\Phi = \Phi\Lambda \quad\rightarrow\quad \Lambda = \Phi^{-1}A\Phi \tag{4.5}$$
and comparing with (4.3) we see that
$$T = \Phi^{-1} \tag{4.6}$$
and $y$ and $x$ are thus related by:
$$y = \Phi^{-1}x \tag{4.7}$$
Therefore, we can write:
$$\dot{y} = \Lambda y + \Phi^{-1}Bu; \qquad v = C\Phi y \tag{4.8}$$

4.2.1 Notes
1. $\Phi$ is called the modal matrix of $A$. Its columns are the eigenvectors of $A$. Since each eigenvector is defined only as a RATIO of cofactors of $A$'s row elements, the actual numerical elements of $\Phi$ will not be unique.
2. Don't get the columns of $\Phi$ muddled. $\Phi = \left(\xi_1, \xi_2, \ldots, \xi_n\right)$ where $\xi_1$ is derived from $\lambda_1$, etc. If you care to work through $\Phi^{-1}A\Phi$ you will see that the result is $\begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix}$.
3. When $\dot{y} = \Lambda y + Qu$ was found for SINGLE INPUT SYSTEMS by the partial fraction expansion of $\frac{X(s)}{U(s)}$ (section 3.4), then:
(a) the states $x$ were related to $y$ by $x = \Phi y$, where $\Phi$ was the matrix of partial fraction coefficients. Thus the partial fraction coefficient matrix is a modal matrix.
(b) we obtained $Q = \begin{pmatrix} 1 & 1 & \cdots & 1 \end{pmatrix}^t$. This $Q$ is obtained only for the partial fraction expansion method. When finding $\dot{y} = \Lambda y + Qu$ from $\dot{x} = Ax + Bu$, then even for a SISO system you must find $\Phi$ from the eigenvectors of $A$ and formulate $Q$ as $\Phi^{-1}B$.
4. To find $Q$ demands the inversion of $\Phi$. Remember that $\Phi^{-1} = \frac{\operatorname{adj}(\Phi)}{|\Phi|}$. For a 2x2 matrix there is a quick formula which should be remembered: if $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ then $A^{-1} = \frac{1}{|A|}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$.
5. Complications arise in calculating $\Phi$ and $\Lambda$ for systems containing complex eigenvalues (poles) or real, repeated eigenvalues.
(a) Complex eigenvalues. $\Phi$ will be complex. $\Lambda$ contains diagonal entries $\sigma \pm j\omega$. You will not be asked to manipulate matrices with complex elements.
(b) Repeated real eigenvalues. If $\lambda_1 = \lambda_2$ then $\xi_1 = \xi_2$. Hence $\Phi$ will contain two identical columns and will thus be singular, i.e. $|\Phi| = 0$ and $\Phi^{-1}$ will not exist. To overcome this we create $\xi_2$ artificially to yield a matrix $\Phi$ such that $\Phi^{-1}A\Phi = \Lambda$ has a "1" on an off diagonal (see section 4.4). Such a $\Lambda$ is a special case of the diagonal canonical form and is termed the Jordan Canonical Form.

4.2.2 Example
Consider the system in (4.9). Obtain the corresponding modal canonical form using the Modal
Matrix method

$$\dot{x} = Ax + Bu; \quad v = Cx$$
$$A = \begin{pmatrix} 1 & 1 & 0 \\ 10 & 0 & 1 \\ 8 & 0 & 0 \end{pmatrix}; \quad B = \begin{pmatrix} 0 \\ 1 \\ 6 \end{pmatrix}; \quad C = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix} \tag{4.9}$$

First of all, we obtain the eigenvalues of matrix $A$:
$$|A - \lambda I| = 0$$
$$\left|\begin{pmatrix} 1 & 1 & 0 \\ 10 & 0 & 1 \\ 8 & 0 & 0 \end{pmatrix} - \lambda I\right| = \begin{vmatrix} 1-\lambda & 1 & 0 \\ 10 & -\lambda & 1 \\ 8 & 0 & -\lambda \end{vmatrix} = \lambda^2 - \lambda^3 + 10\lambda + 8$$
$$-\lambda^3 + \lambda^2 + 10\lambda + 8 = 0$$
The solutions are $\lambda_1 = -1$, $\lambda_2 = -2$, $\lambda_3 = 4$. Now that we have the eigenvalues, we can
proceed to calculate the corresponding eigenvectors:

$$(A - \lambda_1 I)\xi_1 = 0$$
$$\left(\begin{pmatrix} 1 & 1 & 0 \\ 10 & 0 & 1 \\ 8 & 0 & 0 \end{pmatrix} - \lambda_1 I\right)\begin{pmatrix} \xi_{11} \\ \xi_{12} \\ \xi_{13} \end{pmatrix} = 0$$
$$\begin{pmatrix} 2 & 1 & 0 \\ 10 & 1 & 1 \\ 8 & 0 & 1 \end{pmatrix}\begin{pmatrix} \xi_{11} \\ \xi_{12} \\ \xi_{13} \end{pmatrix} = 0 \quad\rightarrow\quad \begin{pmatrix} 2\xi_{11} + \xi_{12} \\ 10\xi_{11} + \xi_{12} + \xi_{13} \\ 8\xi_{11} + \xi_{13} \end{pmatrix} = 0$$
Let $\xi_{11} = 1$; therefore $2 + \xi_{12} = 0 \rightarrow \xi_{12} = -2$, and $8 + \xi_{13} = 0 \rightarrow \xi_{13} = -8$. Therefore, the eigenvector $\xi_1$ for eigenvalue $\lambda_1 = -1$ is
$$\xi_1 = \begin{pmatrix} 1 \\ -2 \\ -8 \end{pmatrix}$$
We repeat the same procedure for $\lambda_2 = -2$:
$$(A - \lambda_2 I)\xi_2 = 0$$
$$\begin{pmatrix} 3 & 1 & 0 \\ 10 & 2 & 1 \\ 8 & 0 & 2 \end{pmatrix}\begin{pmatrix} \xi_{21} \\ \xi_{22} \\ \xi_{23} \end{pmatrix} = 0 \quad\rightarrow\quad \begin{pmatrix} 3\xi_{21} + \xi_{22} \\ 10\xi_{21} + 2\xi_{22} + \xi_{23} \\ 8\xi_{21} + 2\xi_{23} \end{pmatrix} = 0$$
Let $\xi_{21} = 1$; therefore $3 + \xi_{22} = 0 \rightarrow \xi_{22} = -3$, and $8 + 2\xi_{23} = 0 \rightarrow \xi_{23} = -4$. Therefore, the eigenvector $\xi_2$ for eigenvalue $\lambda_2 = -2$ is
$$\xi_2 = \begin{pmatrix} 1 \\ -3 \\ -4 \end{pmatrix}$$

And finally, for $\lambda_3 = 4$ we have:
$$(A - \lambda_3 I)\xi_3 = 0$$
$$\begin{pmatrix} -3 & 1 & 0 \\ 10 & -4 & 1 \\ 8 & 0 & -4 \end{pmatrix}\begin{pmatrix} \xi_{31} \\ \xi_{32} \\ \xi_{33} \end{pmatrix} = 0 \quad\rightarrow\quad \begin{pmatrix} -3\xi_{31} + \xi_{32} \\ 10\xi_{31} - 4\xi_{32} + \xi_{33} \\ 8\xi_{31} - 4\xi_{33} \end{pmatrix} = 0$$
Let $\xi_{31} = 1$; therefore $-3 + \xi_{32} = 0 \rightarrow \xi_{32} = 3$, and $8 - 4\xi_{33} = 0 \rightarrow \xi_{33} = 2$. Therefore, the eigenvector $\xi_3$ for eigenvalue $\lambda_3 = 4$ is
$$\xi_3 = \begin{pmatrix} 1 \\ 3 \\ 2 \end{pmatrix}$$
The matrix $\Phi$ will be:
$$\Phi = \begin{pmatrix} 1 & 1 & 1 \\ -2 & -3 & 3 \\ -8 & -4 & 2 \end{pmatrix}; \qquad \Phi^{-1} = \begin{pmatrix} -\frac{1}{5} & \frac{1}{5} & -\frac{1}{5} \\ \frac{2}{3} & -\frac{1}{3} & \frac{1}{6} \\ \frac{8}{15} & \frac{2}{15} & \frac{1}{30} \end{pmatrix}$$
Therefore, the matrix $\Lambda$ will be:
$$\Lambda = \Phi^{-1}A\Phi = \begin{pmatrix} -\frac{1}{5} & \frac{1}{5} & -\frac{1}{5} \\ \frac{2}{3} & -\frac{1}{3} & \frac{1}{6} \\ \frac{8}{15} & \frac{2}{15} & \frac{1}{30} \end{pmatrix}\begin{pmatrix} 1 & 1 & 0 \\ 10 & 0 & 1 \\ 8 & 0 & 0 \end{pmatrix}\begin{pmatrix} 1 & 1 & 1 \\ -2 & -3 & 3 \\ -8 & -4 & 2 \end{pmatrix} = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 4 \end{pmatrix}$$
and the matrix $Q$:
$$Q = \Phi^{-1}B = \begin{pmatrix} -\frac{1}{5} & \frac{1}{5} & -\frac{1}{5} \\ \frac{2}{3} & -\frac{1}{3} & \frac{1}{6} \\ \frac{8}{15} & \frac{2}{15} & \frac{1}{30} \end{pmatrix}\begin{pmatrix} 0 \\ 1 \\ 6 \end{pmatrix} = \begin{pmatrix} -1 \\ \frac{2}{3} \\ \frac{1}{3} \end{pmatrix}$$
Hence, the diagonal state space equation will be:
$$\dot{y} = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 4 \end{pmatrix} y + \begin{pmatrix} -1 \\ \frac{2}{3} \\ \frac{1}{3} \end{pmatrix} u$$
and the output equation:
$$v = C\Phi y = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}\begin{pmatrix} 1 & 1 & 1 \\ -2 & -3 & 3 \\ -8 & -4 & 2 \end{pmatrix} y = \begin{pmatrix} 1 & 1 & 1 \end{pmatrix} y$$
Note that eigenvalues and eigenvectors can be calculated very easily with the help of computer
packages like Matlab.
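The same check can be written in a few lines of Python (a sketch assuming numpy; note numpy normalises its eigenvectors, so its $\Phi$ differs from ours by a column scaling, but $\Lambda$ and the eigenvalues are identical):

```python
import numpy as np

# Worked example: Lambda = Phi^-1 A Phi should be diagonal with -1, -2, 4.
A = np.array([[1.0, 1.0, 0.0],
              [10.0, 0.0, 1.0],
              [8.0, 0.0, 0.0]])
B = np.array([[0.0], [1.0], [6.0]])

eigvals, Phi = np.linalg.eig(A)       # columns of Phi are (normalised) eigenvectors
Lam = np.linalg.inv(Phi) @ A @ Phi
Q = np.linalg.inv(Phi) @ B

print(np.sort(eigvals.real))           # -2, -1, 4
print(np.round(Lam, 6))                # diagonal matrix of the eigenvalues
```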

4.3 Summary of Eigenvalues, Eigenvectors and Canonical Transformations
1. Eigenvalues and eigenvectors are properties of a square matrix.
2. An eigenvector $\xi_i$ of a matrix $A$ is a vector for which the transformation $z_i = A\xi_i$ lies in the same direction as $\xi_i$ (i.e. $z_i = \lambda_i\xi_i$).
3. The magnitude of an eigenvector is irrelevant; it is its direction which matters. An eigenvector can be normalised to give it unit magnitude by $\xi_{i(\text{normalised})} = \xi_i/|\xi_i|$, where $|\xi_i| = \sqrt{\xi_{1i}^2 + \xi_{2i}^2 + \cdots + \xi_{ni}^2}$.
4. An eigenvalue $\lambda_i$ of a matrix is a scalar and is equal to the ratio of the magnitudes of the vectors, $\lambda_i = |z_i|/|\xi_i|$, where $z_i = A\xi_i$.
5. For an $n \times n$ matrix $A$ there will be $n$ eigenvalues, each eigenvalue having a corresponding eigenvector.
6. The eigenvalues are found first from the characteristic equation $|A - \lambda I| = 0$ which, in expansion, is an $n$th order polynomial.
7. The corresponding eigenvector $\xi_i$ to eigenvalue $\lambda_i$ is then found from the equation $(A - \lambda_i I)\xi_i = 0$ by arbitrarily setting one element to 1 (generally $\xi_{i1}$).
8. The modal transformation matrix $\Phi$ is formed by assembling the eigenvectors $\xi_i$ as columns of a matrix, i.e. $\Phi = \left(\xi_1, \xi_2, \ldots, \xi_n\right)$.
9. The transformation $y = \Phi^{-1}x$ applied to the state equation $\dot{x} = Ax + Bu$ gives the canonical form $\dot{y} = \Lambda y + \Phi^{-1}Bu$, where $\Lambda = \Phi^{-1}A\Phi = \begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix}$ for a system with no repeated eigenvalues.
10. The alternative transformation $y = \Psi^t x$ also gives the canonical form $\dot{y} = \Lambda y + \Psi^t Bu$, where $\Psi = (\psi_1, \psi_2, \ldots, \psi_n)$ and $\psi_i$ is the $i$th eigenvector of $A^t$, provided there are no repeated eigenvalues.
11. If the eigenvectors $\xi_1, \xi_2, \ldots, \xi_n$ of $A$ and $(\psi_1, \psi_2, \ldots, \psi_n)$ of $A^t$ are all normalised, then $\Phi^{-1} = \Psi^t$. However, it is unnecessary to normalise for the derivation of the canonical form as in (9) and (10) above.
12. For a system with two or more repeated eigenvalues, only the transformation $y = \Phi^{-1}x$ is valid, giving $\dot{y} = \Lambda y + \Phi^{-1}Bu$ as before, but with
$$\Lambda = \Phi^{-1}A\Phi = \begin{pmatrix} \lambda_1 & 1 & & 0 \\ & \lambda_2 & & \\ & & \ddots & \\ 0 & & & \lambda_n \end{pmatrix}$$
13. For a system with complex eigenvalues $\lambda_{1,2} = \sigma \pm j\omega$, a further transformation $z = Sy$ is recommended, where
$$S = \begin{pmatrix} 1 & 1 & 0 & 0 \\ +j & -j & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}; \qquad S^{-1} = \begin{pmatrix} 1/2 & -j/2 & 0 & 0 \\ 1/2 & +j/2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
giving:
$$\dot{z} = S\Lambda S^{-1}z + S\Phi^{-1}Bu$$
where
$$S\Lambda S^{-1} = \begin{pmatrix} \sigma & \omega & 0 & 0 \\ -\omega & \sigma & 0 & 0 \\ 0 & 0 & \lambda_3 & 0 \\ 0 & 0 & 0 & \lambda_4 \end{pmatrix}$$
and $S\Phi^{-1}B$ is real.

4.4 Further Canonical Forms


4.4.1 Systems with real repeated eigenvalues
e.g.
$$\dot{x} = Ax + Bu; \qquad A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -2 & -5 & -4 \end{pmatrix}; \quad \lambda_1 = -1,\ \lambda_2 = -1,\ \lambda_3 = -2$$
$(A - \lambda_1 I)\xi_1 = 0$ gives $\xi_1 = \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}$; $(A - \lambda_3 I)\xi_3 = 0$ gives $\xi_3 = \begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix}$.
How is $\xi_2$ obtained? $(A - \lambda_2 I)\xi_2 = 0$ will give $\xi_2 = \xi_1$, and if $\Phi$ has two identical columns, then $|\Phi| = 0$ and $\Phi^{-1}$ does not exist.
Instead put $(A - \lambda_2 I)\xi_2 = \xi_1$, i.e.
$$\left(\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -2 & -5 & -4 \end{pmatrix} - \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}\right)\begin{pmatrix} \xi_{21} \\ \xi_{22} \\ \xi_{23} \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}$$
$$\begin{pmatrix} \xi_{21} + \xi_{22} \\ \xi_{22} + \xi_{23} \\ -2\xi_{21} - 5\xi_{22} - 3\xi_{23} \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}$$
Choose $\xi_{21} = 1$; then $1 + \xi_{22} = 1 \rightarrow \xi_{22} = 0$, and $\xi_{23} = -1$. Therefore $\xi_2 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}$, and then
$$\Phi = \begin{pmatrix} 1 & 1 & 1 \\ -1 & 0 & -2 \\ 1 & -1 & 4 \end{pmatrix}$$
Putting $y = \Phi^{-1}x = Tx$ such that $\dot{y} = \Lambda y + \Phi^{-1}Bu$ gives
$$\Lambda = \Phi^{-1}A\Phi = \begin{pmatrix} -2 & -5 & -2 \\ 2 & 3 & 1 \\ 1 & 2 & 1 \end{pmatrix}\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -2 & -5 & -4 \end{pmatrix}\begin{pmatrix} 1 & 1 & 1 \\ -1 & 0 & -2 \\ 1 & -1 & 4 \end{pmatrix} = \begin{pmatrix} -1 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -2 \end{pmatrix}$$
Besides the eigenvalues, note the extra 1 just above the lead diagonal.
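The generalised-eigenvector construction above can be verified numerically (a sketch assuming numpy is available):

```python
import numpy as np

# Jordan-form check for the repeated-eigenvalue example of section 4.4.1.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-2.0, -5.0, -4.0]])
xi1 = np.array([1.0, -1.0, 1.0])    # eigenvector for lambda = -1
xi2 = np.array([1.0, 0.0, -1.0])    # generalised vector: (A + I) xi2 = xi1
xi3 = np.array([1.0, -2.0, 4.0])    # eigenvector for lambda = -2

Phi = np.column_stack([xi1, xi2, xi3])
J = np.linalg.inv(Phi) @ A @ Phi
print(np.round(J, 6))   # [[-1, 1, 0], [0, -1, 0], [0, 0, -2]]
```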

4.4.2 Systems with complex eigenvalues


E.g. $\dot{x} = Ax + Bu$ with eigenvalues of $A$: $\lambda_1 = \sigma + j\omega$, $\lambda_2 = \sigma - j\omega$, and $\lambda_3, \ldots, \lambda_n$ all real and distinct. Then the eigenvectors $\xi_1, \xi_2$ will be complex, and hence so will $\Phi$.
Also, for $y = \Phi^{-1}x$, $\dot{y} = \Lambda y + \Phi^{-1}Bu$, where
$$\Lambda = \begin{pmatrix} \sigma + j\omega & & & 0 \\ & \sigma - j\omega & & \\ & & \lambda_3 & \\ 0 & & & \ddots \end{pmatrix}$$
and $\Phi^{-1}B$ is complex.
In reality, it is not practicable to attempt complex combinations of state variables $x$ to give $y$, and hence real transformation and system matrices are required.
This can be achieved by using a further transformation $z = Sy$, where
This can be achieved by using a further transformation z = Sy, where
$$S = \begin{pmatrix} 1 & 1 & 0 & 0 \\ +j & -j & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}; \qquad S^{-1} = \begin{pmatrix} 1/2 & -j/2 & 0 & 0 \\ 1/2 & +j/2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Then $\dot{z} = S\Lambda S^{-1}z + S\Phi^{-1}Bu$, where
$$S\Lambda S^{-1} = \begin{pmatrix} \sigma & \omega & 0 & 0 \\ -\omega & \sigma & 0 & 0 \\ 0 & 0 & \lambda_3 & 0 \\ 0 & 0 & 0 & \lambda_4 \end{pmatrix}$$
and $S\Phi^{-1}B$ is real.
E.g. let's consider:
$$A = \begin{pmatrix} -16 & -151 & -956 & -2380 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}$$
Its eigenvalues are $\lambda_1 = -2 + 8i$, $\lambda_2 = -2 - 8i$, $\lambda_3 = -5$ and $\lambda_4 = -7$. The corresponding eigenvectors are:
$$\xi_1 = \begin{pmatrix} 376 - 416i \\ -60 - 32i \\ -2 + 8i \\ 1 \end{pmatrix};\quad \xi_2 = \begin{pmatrix} 376 + 416i \\ -60 + 32i \\ -2 - 8i \\ 1 \end{pmatrix};\quad \xi_3 = \begin{pmatrix} -125 \\ 25 \\ -5 \\ 1 \end{pmatrix};\quad \xi_4 = \begin{pmatrix} -343 \\ 49 \\ -7 \\ 1 \end{pmatrix}$$
and the resulting modal matrix:
$$\Phi = \begin{pmatrix} 376 - 416i & 376 + 416i & -125 & -343 \\ -60 - 32i & -60 + 32i & 25 & 49 \\ -2 + 8i & -2 - 8i & -5 & -7 \\ 1 & 1 & 1 & 1 \end{pmatrix}$$
$$\Lambda = \Phi^{-1}A\Phi = \begin{pmatrix} -2 + 8i & 0 & 0 & 0 \\ 0 & -2 - 8i & 0 & 0 \\ 0 & 0 & -5 & 0 \\ 0 & 0 & 0 & -7 \end{pmatrix}$$
$$S\Lambda S^{-1} = \begin{pmatrix} 1 & 1 & 0 & 0 \\ +i & -i & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} -2 + 8i & 0 & 0 & 0 \\ 0 & -2 - 8i & 0 & 0 \\ 0 & 0 & -5 & 0 \\ 0 & 0 & 0 & -7 \end{pmatrix}S^{-1} = \begin{pmatrix} -2 & 8 & 0 & 0 \\ -8 & -2 & 0 & 0 \\ 0 & 0 & -5 & 0 \\ 0 & 0 & 0 & -7 \end{pmatrix}$$
In any case $S\Phi^{-1}$ (and hence $S\Phi^{-1}B$) is always real:
$$S\Phi^{-1} = \begin{pmatrix} 1 & 1 & 0 & 0 \\ +i & -i & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 376 - 416i & 376 + 416i & -125 & -343 \\ -60 - 32i & -60 + 32i & 25 & 49 \\ -2 + 8i & -2 - 8i & -5 & -7 \\ 1 & 1 & 1 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} -\frac{8}{6497} & -\frac{161}{6497} & -\frac{1060}{6497} & -\frac{2275}{6497} \\[2pt] -\frac{49}{51976} & -\frac{87}{25988} & \frac{3253}{51976} & \frac{7245}{25988} \\[2pt] \frac{1}{146} & \frac{11}{146} & \frac{48}{73} & \frac{238}{73} \\[2pt] -\frac{1}{178} & -\frac{9}{178} & -\frac{44}{89} & -\frac{170}{89} \end{pmatrix}$$
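The two key claims of this example — that $S\Lambda S^{-1}$ is the real block form and that $S\Phi^{-1}$ is real — can be verified numerically (a sketch assuming numpy; the eigenvectors of a companion matrix have the Vandermonde form $(\lambda^3, \lambda^2, \lambda, 1)^t$ used below):

```python
import numpy as np

# Real modal form for the 4x4 complex-eigenvalue example.
lam1 = -2 + 8j
xi = lambda lam: np.array([lam**3, lam**2, lam, 1.0])   # companion-matrix eigenvector
Phi = np.column_stack([xi(lam1), xi(lam1.conjugate()), xi(-5.0), xi(-7.0)])
Lam = np.diag([lam1, lam1.conjugate(), -5.0, -7.0])

S = np.array([[1, 1, 0, 0],
              [1j, -1j, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=complex)

block = S @ Lam @ np.linalg.inv(S)
print(np.round(block.real, 6))                        # [[-2,8],[-8,-2]] block, then -5, -7
print(np.abs((S @ np.linalg.inv(Phi)).imag).max())    # ~0: S Phi^-1 is real
```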

4.5 Advantage of the Modal Canonical Form


From $\dot{y} = \begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix} y + Qu$ it can be seen that the $n$-th order system has been transformed into $n$ DECOUPLED first order differential equations. A complex MIMO system expressed as a set of transfer functions is a mess and is not conducive to classical control techniques. The modal form "parallels" out the modes, as can be seen from the modal block diagram. If you don't like an eigenvalue $\lambda_i$ and you want to change it, you have the opportunity of applying feedback ($u = -k_i y_i$) to change $\lambda_i$ only; the $\lambda_j$, $j \neq i$, remain unaltered.
The control design is carried out in the modal domain to achieve a control law $u = -Ky$ (see section 7.3). Of course $y$ are not "physical" state variables and have to be transformed back by $y = Tx$, i.e. our control law will be $u = -KTx = -K\Phi^{-1}x$.

4.6 Control and Observable Canonical Forms


We have found a matrix $T$ ($T = \Phi^{-1}$) which will transform $\dot{x} = Ax + Bu$ into diagonal canonical form ($\dot{y} = \Lambda y + Qu$). We still have to find a matrix $T$ which will transform $\dot{x} = Ax + Bu$ into Control Canonical Form or Observer Canonical Form. First, however, we need the concepts of Controllability and Observability.
Chapter 5

Controllability and Observability

We have $\dot{x} = Ax + Bu$; $v = Cx$, which can be transformed to modal form using $y = \Phi^{-1}x$ or $y = \Psi^t x$ to yield:
$$\dot{y} = \Lambda y + Qu; \qquad v = C\Phi y$$

5.1 Controllability
5.1.1 Via Modal Form
Consider $\dot{y} = \Lambda y + Qu$, e.g.
$$\dot{y} = \begin{pmatrix} 2 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & -1 \end{pmatrix} y + \begin{pmatrix} 0 & 2 \\ 1 & 3 \\ 1 & 0 \end{pmatrix} u$$
The plant will respond with $e^{2t}$, $e^{-3t}$, $e^{-t}$ terms. Since $Q$ contains non-zero entries on all rows, the modes are functions of the inputs. What if row 1 of $Q$ was $\begin{pmatrix} 0 & 0 \end{pmatrix}$, i.e.
$$\dot{y} = \begin{pmatrix} 2 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & -1 \end{pmatrix} y + \begin{pmatrix} 0 & 0 \\ 1 & 3 \\ 1 & 0 \end{pmatrix} u$$
The mode $e^{2t}$ is not affected by the input and cannot be controlled. If the $i$th row of $Q$ is zero, the $i$th mode cannot be altered. If any mode cannot be altered, the system is said to be UNCONTROLLABLE.
A system is controllable if the state vector $x$ can be forced to reach any state $x_0$, provided that the adequate input is fed into the system.
If the uncontrollable mode is stable, it doesn't matter too much. If the uncontrollable mode is unstable, oh dear!!
For repeated and complex roots:
Repeated:
$$\dot{y} = \begin{pmatrix} -1 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix} y + \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} u$$
is OK, but:
$$\dot{y} = \begin{pmatrix} -1 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix} y + \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} u$$
is not OK.


Complex:
$$\dot{y} = \begin{pmatrix} -2 & 1 & 0 \\ -1 & -2 & 0 \\ 0 & 0 & -1 \end{pmatrix} y + \begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix} u$$
is OK, but:
$$\dot{y} = \begin{pmatrix} -2 & 1 & 0 \\ -1 & -2 & 0 \\ 0 & 0 & -1 \end{pmatrix} y + \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} u$$
is not OK.
If in doubt, draw the MODAL BLOCK DIAGRAM (section 3.4) and see if all modes can be traced back to an input. If so, then the system is controllable.

5.1.2 Via Controllability Matrix


Here you don't have to convert to canonical form to test whether a system is controllable. Given $\dot{x} = Ax + Bu$, we can formulate the controllability matrix $\mathcal{C}$:
$$\mathcal{C} = \begin{pmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{pmatrix}$$
where $n$ is the order of $A$. $\mathcal{C}$ is $n \times nr$, and for a single input system ($r = 1$), $\mathcal{C}$ is square, i.e. for $n = 2$, $\mathcal{C} = \begin{pmatrix} B & AB \end{pmatrix}$.
A system is controllable if the rank of $\mathcal{C}$ equals $n$, the order of the system. A system is uncontrollable if the rank of $\mathcal{C}$ is less than $n$.
Definition of rank: the rank of an $n \times m$ matrix is the order of the largest NON-SINGULAR square submatrix.
e.g.
$$A = \begin{pmatrix} -4 & 5 \\ 1 & 0 \end{pmatrix};\quad B = \begin{pmatrix} -5 \\ 1 \end{pmatrix};\quad \begin{pmatrix} B & AB \end{pmatrix} = \begin{pmatrix} -5 & 25 \\ 1 & -5 \end{pmatrix}$$
Submatrices of order 1 are $-5$, $25$, $1$, $-5$: ALL non-singular, therefore the rank is at least 1. The only submatrix of order 2 is $\begin{pmatrix} -5 & 25 \\ 1 & -5 \end{pmatrix}$, which is singular. Therefore the rank is 1, which is smaller than $n$, so the system is uncontrollable.
e.g. (third order system, $n = 3$):
$$A = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 1 & 1 \\ -2 & 1 & 1 \end{pmatrix};\quad B = \begin{pmatrix} 1 & 1 \\ 2 & 1 \\ 0 & 0 \end{pmatrix}$$
$$\mathcal{C} = \begin{pmatrix} B & AB & A^2B \end{pmatrix} = \begin{pmatrix} 1 & 1 & 5 & 3 & & \\ 2 & 1 & 2 & 1 & A^2B \\ 0 & 0 & 0 & -1 & & \end{pmatrix}$$
Can we find a $3 \times 3$ submatrix here with $|SM_3| \neq 0$? If we can, then rank $= 3$. Note
$$\begin{vmatrix} 1 & 5 & 3 \\ 1 & 2 & 1 \\ 0 & 0 & -1 \end{vmatrix} \neq 0$$
It is not necessary to calculate $A^2B$ since we have already found a non-singular submatrix of order $n$. Therefore the system is controllable.
Note for the single input case $\mathcal{C}$ is $n \times n$, i.e. square. The rank can only equal $n$ if the whole matrix $\mathcal{C}$ is non-singular, i.e. the system is controllable if $\mathcal{C}^{-1}$ exists.
Singularity is only really defined for square matrices. For an $n \times m$ matrix, $m > n$, you can regard the condition that rank $= n$ as a test for the "singularity" of the non-square matrix.
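The rank test is one line in practice; the sketch below (numpy assumed) applies it to the second-order example above:

```python
import numpy as np

# Rank test on the controllability matrix of the 2nd-order example.
A = np.array([[-4.0, 5.0], [1.0, 0.0]])
B = np.array([[-5.0], [1.0]])

ctrb = np.hstack([B, A @ B])
print(ctrb)                          # [[-5, 25], [1, -5]]
print(np.linalg.matrix_rank(ctrb))   # 1 < n = 2: uncontrollable
```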

5.2 Observability
5.2.1 Via Modal Form
Let's look at $v = C\Phi y$, e.g.
$$v = \begin{pmatrix} 0 & 2 & 1 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}$$
Let $y_1$, $y_2$, $y_3$ be the $e^{2t}$, $e^{-3t}$, $e^{-t}$ modes as before. Note the $y_1$ ($e^{2t}$) mode does not appear in either of the measurements $v_1$ or $v_2$. Therefore $y_1$ is said to be unobservable.
If the elements of the $i$th column of $C\Phi$ are all zero, then the $i$th mode does not appear in the measurements and the system is said to be unobservable.

5.2.2 Via the Observability Matrix


Given $\dot{x} = Ax + Bu$; $v = Cx$, we can formulate the observability matrix $\mathcal{O}$:
$$\mathcal{O} = \begin{pmatrix} C^T & A^TC^T & \left(A^T\right)^2C^T & \cdots & \left(A^T\right)^{n-1}C^T \end{pmatrix}^T$$
If the rank of $\mathcal{O}$ is $n$ the system is observable. If the rank of $\mathcal{O}$ is less than $n$ the system is unobservable.
Once again, if $v$ is scalar, i.e. the system has a single output, $\mathcal{O}$ is square and the rank test reduces to seeing if $\mathcal{O}^{-1}$ exists or not.

5.3 Example system

[Block diagram: input $U(s)$ into the forward block $G(s) = \frac{s-1}{s+4}$ giving $V(s)$, with feedback through $H(s) = \frac{1}{s-1}$.]

Figure 5.1: Example system block diagram

The classical technique applied to the block diagram in fig. 5.1 gives:
$$\frac{V(s)}{U(s)} = \frac{G(s)}{1 + G(s)H(s)} = \frac{\frac{s-1}{s+4}}{1 + \frac{1}{s+4}} = \frac{s-1}{s+5} = 1 - \frac{6}{s+5}$$
So the system is stable (pole at $s = -5$), controllable (feedback from $V(s)$ to $U(s)$ can change the pole at $s = -5$) and observable ($v(t)$ will contain the $e^{-5t}$ term). OR IS IT?
Expressing $G(s)$ as a strictly proper transfer function leads to the block diagram in fig. 5.2.
Putting the system into state space form:

[Block diagram: $G(s)$ rewritten as $1 - \frac{5}{s+4}$; the block $\frac{-5}{s+4}$ produces state $x_1(s)$, and the feedback block $H(s) = \frac{1}{s-1}$ produces state $x_2(s)$.]

Figure 5.2: Equivalent block diagram of the system in fig. 5.1

$$\dot{x}_1 + 4x_1 = -5(u - x_2)$$
$$\dot{x}_2 - x_2 = x_1 + (u - x_2)$$
$$v = x_1 + (u - x_2)$$
which yields:
$$\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix} = \begin{pmatrix} -4 & 5 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} -5 \\ 1 \end{pmatrix} u$$
$$v = \begin{pmatrix} 1 & -1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + (1)\,u$$
The eigenvalues of $A$ are given by $|A - \lambda I| = \begin{vmatrix} -4-\lambda & 5 \\ 1 & -\lambda \end{vmatrix} = (-4-\lambda)(-\lambda) - 5 = 0$, which gives the characteristic equation $\lambda^2 + 4\lambda - 5 = 0$, $(\lambda + 5)(\lambda - 1) = 0$, with eigenvalues $\lambda_1 = -5$, $\lambda_2 = 1$. Therefore the system is unstable. In classical theory the pole and zero at $s = 1$ cancelled and "didn't show up".
OK, the system is unstable, let's control it. Let's just check $\mathcal{C}$:
$$\mathcal{C} = \begin{pmatrix} B & AB \end{pmatrix} = \begin{pmatrix} -5 & 25 \\ 1 & -5 \end{pmatrix}$$
and $|\mathcal{C}| = 0$; therefore the system is uncontrollable, but which mode cannot be controlled? We can go to the modal canonical form to find out. For $\lambda_1 = -5$:
$$(A - \lambda_1 I) = \begin{pmatrix} -4 & 5 \\ 1 & 0 \end{pmatrix} + 5I = \begin{pmatrix} 1 & 5 \\ 1 & 5 \end{pmatrix} \quad\rightarrow\quad \xi_1 = \begin{pmatrix} -5 \\ 1 \end{pmatrix}$$
For $\lambda_2 = 1$:
$$\xi_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$
$$\Phi = \begin{pmatrix} -5 & 1 \\ 1 & 1 \end{pmatrix}; \qquad \Phi^{-1} = \begin{pmatrix} -\frac{1}{6} & \frac{1}{6} \\ \frac{1}{6} & \frac{5}{6} \end{pmatrix}$$

In modal canonical form:
$$\dot{y} = \begin{pmatrix} -5 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + \begin{pmatrix} -\frac{1}{6} & \frac{1}{6} \\ \frac{1}{6} & \frac{5}{6} \end{pmatrix}\begin{pmatrix} -5 \\ 1 \end{pmatrix} u$$
$$\dot{y} = \begin{pmatrix} -5 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \end{pmatrix} u$$
Oh dear, it's the unstable mode which is uncontrollable! Well, if we build the system, at least we would see it going unstable (the system trips and shuts down, etc). Let us see: $v = Cx + Du$; $v = C\Phi y + Du$, where
$$v = \begin{pmatrix} 1 & -1 \end{pmatrix}\begin{pmatrix} -5 & 1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + u$$
$$v = \begin{pmatrix} -6 & 0 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + u$$
and the $e^{+t}$ term doesn't show up in the measurements! The system, in short, is unstable, uncontrollable, unobservable and a mess!
Discussion: a system which is uncontrollable, etc. is usually so because the designer is an idiot. One way of being an idiot is to design a system with unstable pole-zero cancellation!
But if the rows of Q are very small, it shows that controlling that mode with that input is
a poor choice. E.g. using ailerons to control a plane’s altitude modes, etc.
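All three diagnoses of this example can be confirmed in a few lines (a sketch assuming numpy):

```python
import numpy as np

# Section 5.3 example: the pole-zero cancellation hides an unstable (+1)
# eigenvalue that is both uncontrollable and unobservable.
A = np.array([[-4.0, 5.0], [1.0, 0.0]])
B = np.array([[-5.0], [1.0]])
C = np.array([[1.0, -1.0]])

print(np.sort(np.linalg.eigvals(A)))                    # -5 and +1: unstable
print(np.linalg.matrix_rank(np.hstack([B, A @ B])))     # 1: uncontrollable
print(np.linalg.matrix_rank(np.vstack([C, C @ A])))     # 1: unobservable
```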
Chapter 6

Control and Observer Canonical


Forms

It will be seen later that control design becomes very simple if the state space equations are in CCF or OCF. We therefore need to be able to transform the state space equations to CCF and/or OCF in a similar way to transforming to JCF. Note that although CCF and OCF can be defined for multi-input multi-output systems, their main power comes in consideration of single input single output systems, so this is assumed.

6.1 Control Canonical Form


Given $\dot{x} = Ax + Bu$; $v = Cx + Du$, we wish to find a transformation $T$ such that $x_c = Tx$, $x_c$ being the CCF state vector.
$$T\dot{x} = TAx + TBu; \qquad v = Cx + Du$$
$$\dot{x}_c = TAT^{-1}x_c + TBu; \qquad v = CT^{-1}x_c + Du$$
$$\dot{x}_c = A_c x_c + B_c u; \qquad v = C_c x_c + Du$$
We can obtain $A_c$ and $B_c$ directly. If $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $A$, then the characteristic equation will be:
$$(s - \lambda_1)(s - \lambda_2)\ldots(s - \lambda_n) = 0$$
and therefore we can obtain the characteristic polynomial:
$$s^n + a_1 s^{n-1} + a_2 s^{n-2} + \cdots + a_n$$
therefore:
$$A_c = \begin{pmatrix} -a_1 & -a_2 & -a_3 & \cdots & -a_n \\ 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}; \qquad B_c = \begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$
Why do we need to find $T$ then? Because manipulations of systems in CCF will give results in terms of $x_c$. If our states are $x$ then we need to be able to go from $x$ to $x_c$, i.e. $x_c = Tx$. $T$ can be obtained from the controllability matrix.
Why do we need to …nd T then? Because manipulations of systems in CCF will give results
in terms of xc . If our states are x then we need to be able to go from x ! xc , i.e. xc = T x.
T can be obtained from the controllability matrix.


Let
$$\mathcal{C} = \begin{pmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{pmatrix}$$
which is $n \times n$ for single input systems.
$$\mathcal{C}_c = \begin{pmatrix} B_c & A_cB_c & A_c^2B_c & \cdots & A_c^{n-1}B_c \end{pmatrix} = \begin{pmatrix} TB & TAT^{-1}TB & \left(TAT^{-1}\right)^2TB & \cdots & \left(TAT^{-1}\right)^{n-1}TB \end{pmatrix}$$
therefore, since $\left(TAT^{-1}\right)^2T = TA^2T^{-1}T = TA^2$, etc.:
$$\mathcal{C}_c = T\begin{pmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{pmatrix} = T\mathcal{C}$$
Therefore:
$$\mathcal{C}_c = T\mathcal{C}; \qquad T = \mathcal{C}_c\mathcal{C}^{-1}$$

The procedure is:

1. Take A and B. Form C.

2. Find the characteristic equation of A.

3. Write down Ac and Bc .

4. Form Cc :

5. Calculate $T = \mathcal{C}_c\mathcal{C}^{-1}$.

6. Check with Bc = T B.

Example:
$$A = \begin{pmatrix} -2 & 1 \\ 0 & -3 \end{pmatrix};\quad B = \begin{pmatrix} 1 \\ 1 \end{pmatrix};\quad \mathcal{C} = \begin{pmatrix} 1 & -1 \\ 1 & -3 \end{pmatrix};\quad \mathcal{C}^{-1} = \begin{pmatrix} \frac{3}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} \end{pmatrix}$$
$$|A - sI| = \begin{vmatrix} -2-s & 1 \\ 0 & -3-s \end{vmatrix} = (-2-s)(-3-s) = s^2 + 5s + 6$$
$$A_c = \begin{pmatrix} -5 & -6 \\ 1 & 0 \end{pmatrix};\quad B_c = \begin{pmatrix} 1 \\ 0 \end{pmatrix};\quad \mathcal{C}_c = \begin{pmatrix} 1 & -5 \\ 0 & 1 \end{pmatrix}$$
$$T = \mathcal{C}_c\mathcal{C}^{-1} = \begin{pmatrix} 1 & -5 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} \frac{3}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} \end{pmatrix} = \begin{pmatrix} -1 & 2 \\ \frac{1}{2} & -\frac{1}{2} \end{pmatrix}$$
Check:
$$B_c = TB = \begin{pmatrix} -1 & 2 \\ \frac{1}{2} & -\frac{1}{2} \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$
which appears correct.
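The whole example can be verified numerically (a sketch assuming numpy), including the full similarity check $TAT^{-1} = A_c$:

```python
import numpy as np

# CCF example: T = Cc C^-1, and T A T^-1 should recover Ac.
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
Ac = np.array([[-5.0, -6.0], [1.0, 0.0]])
Bc = np.array([[1.0], [0.0]])

Ctrb = np.hstack([B, A @ B])
Ctrb_c = np.hstack([Bc, Ac @ Bc])
T = Ctrb_c @ np.linalg.inv(Ctrb)

print(np.round(T, 6))                            # [[-1, 2], [0.5, -0.5]]
print(np.round(T @ A @ np.linalg.inv(T), 6))     # recovers Ac
```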

6.2 Observer Canonical Form


Given $\dot{x} = Ax + Bu$; $v = Cx + Du$, we wish to find a transformation $T$ such that $x_o = Tx$, $x_o$ being the OCF state vector.
$$T\dot{x} = TAx + TBu; \qquad v = Cx + Du$$
$$\dot{x}_o = TAT^{-1}x_o + TBu; \qquad v = CT^{-1}x_o + Du$$
$$\dot{x}_o = A_o x_o + B_o u; \qquad v = C_o x_o + Du$$
We can obtain $A_o$ and $C_o$ directly. If $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $A$, then the characteristic equation will be:
$$(s - \lambda_1)(s - \lambda_2)\ldots(s - \lambda_n) = 0$$
and therefore we can obtain the characteristic polynomial:
$$s^n + a_1 s^{n-1} + a_2 s^{n-2} + \cdots + a_n$$
therefore:
$$A_o = \begin{pmatrix} -a_1 & 1 & 0 & \cdots & 0 \\ -a_2 & 0 & 1 & \cdots & 0 \\ -a_3 & 0 & 0 & \ddots & \vdots \\ \vdots & & & & 1 \\ -a_n & 0 & 0 & \cdots & 0 \end{pmatrix}; \qquad C_o = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \end{pmatrix}$$
Why do we need to find $T$ then? Because manipulations of systems in OCF will give results in terms of $x_o$. If our states are $x$ then we need to be able to go from $x$ to $x_o$, i.e. $x_o = Tx$. $T$ can be obtained from the observability matrix:
$$\mathcal{O} = \begin{pmatrix} C^T & A^TC^T & \left(A^T\right)^2C^T & \cdots & \left(A^T\right)^{n-1}C^T \end{pmatrix}^T$$
which is $n \times n$ for single output systems. Note this is the same as the row-stacked matrix:
$$\mathcal{O} = \begin{pmatrix} C \\ CA \\ CA^2 \\ \vdots \end{pmatrix} = P$$
Now, forming the corresponding matrix for the OCF system:
$$P_o = \begin{pmatrix} C_o \\ C_oA_o \\ C_oA_o^2 \\ \vdots \end{pmatrix} = \begin{pmatrix} CT^{-1} \\ CT^{-1}\left(TAT^{-1}\right) \\ CT^{-1}\left(TAT^{-1}\right)^2 \\ \vdots \end{pmatrix} = \begin{pmatrix} C \\ CA \\ CA^2 \\ \vdots \end{pmatrix}T^{-1} = PT^{-1}$$
since $CT^{-1}\left(TAT^{-1}\right)^2 = CA^2T^{-1}$, etc. Therefore:
$$P_o = PT^{-1}; \qquad T = P_o^{-1}P; \qquad T^{-1} = P^{-1}P_o$$
Procedure:
1. Take $A$, $C$.
2. Form $P = \begin{pmatrix} C \\ CA \\ CA^2 \\ \vdots \end{pmatrix}$
3. Find the characteristic equation of $A$.
4. Write down $A_o$ and $C_o$.
5. Form $P_o = \begin{pmatrix} C_o \\ C_oA_o \\ C_oA_o^2 \\ \vdots \end{pmatrix}$
6. Calculate $T = P_o^{-1}P$.
7. Check with $C = C_oT$.

Example:
$$A = \begin{pmatrix} -2 & 1 \\ 0 & -3 \end{pmatrix};\quad C = \begin{pmatrix} 2 & 3 \end{pmatrix};\quad P = \begin{pmatrix} 2 & 3 \\ -4 & -7 \end{pmatrix}$$
The characteristic equation is the same as before: $s^2 + 5s + 6 = 0$. Therefore:
$$A_o = \begin{pmatrix} -5 & 1 \\ -6 & 0 \end{pmatrix};\quad C_o = \begin{pmatrix} 1 & 0 \end{pmatrix};\quad P_o = \begin{pmatrix} 1 & 0 \\ -5 & 1 \end{pmatrix};\quad P_o^{-1} = \begin{pmatrix} 1 & 0 \\ 5 & 1 \end{pmatrix}$$
Therefore:
$$T = P_o^{-1}P = \begin{pmatrix} 1 & 0 \\ 5 & 1 \end{pmatrix}\begin{pmatrix} 2 & 3 \\ -4 & -7 \end{pmatrix} = \begin{pmatrix} 2 & 3 \\ 6 & 8 \end{pmatrix}$$
Check:
$$C_oT = \begin{pmatrix} 1 & 0 \end{pmatrix}\begin{pmatrix} 2 & 3 \\ 6 & 8 \end{pmatrix} = \begin{pmatrix} 2 & 3 \end{pmatrix}$$
Full check:
$$TAT^{-1} = \begin{pmatrix} 2 & 3 \\ 6 & 8 \end{pmatrix}\begin{pmatrix} -2 & 1 \\ 0 & -3 \end{pmatrix}\begin{pmatrix} 2 & 3 \\ 6 & 8 \end{pmatrix}^{-1} = \begin{pmatrix} -5 & 1 \\ -6 & 0 \end{pmatrix}$$
Chapter 7

Multivariable State Feedback

7.1 Introduction
Given a system $\dot{x} = Ax + Bu$; $v = Cx + Du$ (open loop), we can design a closed loop system with user-specified eigenvalues ($\lambda_c$) by feedback of the system states. We shall be considering the system in fig. 7.1.

Figure 7.1: State feedback

The feedback consists of:
$$\dot{x} = Ax + Bu; \qquad v = Cx \tag{7.1}$$
$$u = k_1 r - \begin{pmatrix} k_1 & k_2 \end{pmatrix}x$$
$$u = k_1 r - Kx \tag{7.2}$$
where $K = \begin{pmatrix} k_1 & k_2 \end{pmatrix}$. $K$ is an $r \times n$ matrix and is called the state feedback matrix connecting the states $x$ to the actuating inputs $u$.
The closed loop system is obtained by putting eq. (7.2) into eq. (7.1):
$$\dot{x} = Ax + B(k_1 r - Kx); \qquad v = Cx$$
$$\dot{x} = (A - BK)x + Bk_1 r; \qquad v = Cx \tag{7.3}$$
Eq. (7.3) is another state space equation. The closed loop poles are given by the eigenvalues of $(A - BK)$, i.e.
$$|A - BK - \lambda_c I| = 0 \tag{7.4}$$

The problem to solve is: given the desired closed loop poles $\lambda_{c1}, \lambda_{c2}, \ldots, \lambda_{cn}$, find the matrix $K$ that will place the eigenvalues of $(A - BK)$ at the desired positions.
The problems of reference input, closed loop gain and integral control do not a¤ect eq. (7.4)
and will be treated in section 8.

7.2 Single input systems. Design via Control Canonical Form


Consider the system:
$$\dot{x} = Ax + Bu; \qquad v = Cx$$
where $u$ is scalar (single input) and $x$ are "physical" states which we shall assume are measurable. Hence we could write $C = I$ and $v = x$, i.e. there is no difference between states and outputs.
We first convert the system to Control Canonical Form, i.e.:
$$\dot{x}_c = A_c x_c + B_c u; \qquad v = C_c x_c$$
where $x_c = Tx$ and $T = \mathcal{C}_c\mathcal{C}^{-1}$:
$$\dot{x}_c = TAT^{-1}x_c + TBu; \qquad v = CT^{-1}x_c$$
We note that:
$$A_c = \begin{pmatrix} -a_1 & -a_2 & -a_3 & \cdots & -a_n \\ 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}; \qquad B_c = \begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$
where $a_i$ is the coefficient of $s^{n-i}$ (or $\lambda^{n-i}$) of $a(s)$, the open loop characteristic polynomial.
We now carry out a control design in the canonical domain by the state feedback law:
$$u = -K_c x_c = -\begin{pmatrix} k_{c1} & k_{c2} & \cdots & k_{cn} \end{pmatrix}x_c \tag{7.5}$$
and therefore the closed loop canonical system matrix is:
$$A_c - B_cK_c = \begin{pmatrix} -a_1 & -a_2 & \cdots & -a_n \\ 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ & & \ddots & \vdots \\ 0 & 0 & 1 & 0 \end{pmatrix} - \begin{pmatrix} k_{c1} & k_{c2} & \cdots & k_{cn} \\ 0 & 0 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}$$
$$A_c - B_cK_c = \begin{pmatrix} -a_1 - k_{c1} & -a_2 - k_{c2} & \cdots & -a_n - k_{cn} \\ 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ & & \ddots & \vdots \\ 0 & 0 & 1 & 0 \end{pmatrix} \tag{7.6}$$

Therefore, a1 + kc1, a2 + kc2, …, an + kcn are the coefficients of the closed loop characteristic
polynomial α(s).
Letting the closed loop characteristic polynomial be:

    sⁿ + α1 s^(n−1) + α2 s^(n−2) + ⋯ + αn = (s − λc1)(s − λc2)⋯(s − λcn)   (7.7)

where λci are the desired closed loop eigenvalues (which may be repeated or complex), then
it is seen that αi = ai + kci, i.e.

    kci = αi − ai                                  (7.8)

It must be understood that the previous equation gives the elements of Kc which relate u to
xc (the canonical state variables). If these are not physical states (and they usually will not
be) you must transform back to the real states x.
We have:

    u = −Kc xc;   xc = T x

therefore
    u = −Kc T x = −Kx                              (7.9)
and
    K = Kc T                                       (7.10)
The procedure can be summarized:

1. Given A, B, C, find the open loop characteristic polynomial a(s) from |A − λI| = 0.

2. Select the closed loop poles λc. Form the closed loop characteristic polynomial α(s).

3. Calculate kci = αi − ai. This gives Kc.

4. Write down the open loop Ac, Bc and find the controllability matrices C and Cc.

5. Calculate T = Cc C⁻¹, and hence K = Kc T.

Example:

    A = [ −2  1 ]     B = [ 1 ]
        [  0 −3 ]         [ 1 ]

therefore the open loop poles are −2, −3. We wish to find the feedback law u = −Kx which
will place the closed loop poles (eigenvalues) at λ = −5 ± j5.
First of all, we find the open loop characteristic polynomial:

    a(s) = |sI − A| = (s + 2)(s + 3) = s² + 5s + 6

Then we can calculate the closed loop characteristic polynomial:

    α(s) = (s + 5 + j5)(s + 5 − j5) = s² + 10s + 50

Now we obtain the elements of matrix Kc:

    kc1 = α1 − a1 = 10 − 5 = 5
    kc2 = α2 − a2 = 50 − 6 = 44

Now we need to obtain K = Kc T. First of all, we need to obtain T = Cc C⁻¹.
To do that, we write the open loop matrices in control canonical form:

    Ac = [ −5 −6 ]     Bc = [ 1 ]
         [  1  0 ]          [ 0 ]

and we calculate the controllability matrices:

    C = [ B  AB ] = [ 1 −1 ]       C⁻¹ = [ 3/2 −1/2 ]
                    [ 1 −3 ]             [ 1/2 −1/2 ]

    Cc = [ Bc  Ac Bc ] = [ 1 −5 ]
                         [ 0  1 ]

    T = Cc C⁻¹ = [ 1 −5 ] [ 3/2 −1/2 ] = [ −1    2  ]
                 [ 0  1 ] [ 1/2 −1/2 ]   [ 1/2 −1/2 ]

therefore

    K = Kc T = [ 5  44 ] [ −1    2  ] = [ 17 −12 ]
                         [ 1/2 −1/2 ]

and the control law will be:

    u = −Kx = −[ 17 −12 ] [ x1 ]
                          [ x2 ]

Now we are going to check our design. First of all, we can obtain the closed loop state space
equation:

    ẋ = (A − BK)x + BK1 r;   v = Cx

Obviously, the closed loop poles will be the eigenvalues of (A − BK), i.e. we must solve the
following equation:

    |A − BK − λc I| = 0

    | [ −2  1 ] − [ 1 ] [ 17 −12 ] − λc I | = 0
      [  0 −3 ]   [ 1 ]

    | [ −2  1 ] − [ 17 −12 ] − λc I | = 0
      [  0 −3 ]   [ 17 −12 ]

    | [ −19  13 ] − λc I | = 0
      [ −17   9 ]

    | −19 − λc     13     |
    |  −17       9 − λc   | = 0

    λc² + 10λc + 50 = 0

and the solutions of the closed loop characteristic equation are: λc1 = −5 + j5, λc2 = −5 − j5.
Remember to take care with signs. Eq. (7.8) gives the coefficients kci where u = −Kc xc, i.e.
for negative feedback kci will be positive numbers. In the above example, negative feedback is
applied to x1 and positive feedback to x2.
Note also that the Control Canonical Form is only easily applicable to single input systems.
For multiple inputs, the design should be done in Diagonal Canonical Form or modal form.
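The five-step procedure, applied to this example, can be checked numerically; the following numpy sketch mirrors the hand calculation (variable names are our own):

```python
import numpy as np

A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
poles = [-5 + 5j, -5 - 5j]

a = np.poly(A)[1:]                    # open loop coefficients [a1, a2] = [5, 6]
alpha = np.poly(poles).real[1:]       # desired coefficients [10, 50]
Kc = (alpha - a).reshape(1, -1)       # kci = alpha_i - ai  ->  [5, 44]

# Controllability matrices of the plant and of the canonical form
C_plant = np.hstack([B, A @ B])
Ac = np.array([[-a[0], -a[1]], [1.0, 0.0]])
Bc = np.array([[1.0], [0.0]])
C_canon = np.hstack([Bc, Ac @ Bc])

T = C_canon @ np.linalg.inv(C_plant)  # T = Cc C^-1
K = Kc @ T                            # K = Kc T  ->  [17, -12]
print(K, np.linalg.eigvals(A - B @ K))
```

The eigenvalues of (A − BK) land at −5 ± j5, confirming the placement.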

7.3 Design via diagonal canonical form

This procedure is often called Modal control since, by separating out the modes of the system,
we can control the modes individually. Modal control is equally applicable to single input or
multi-input systems. The procedure will be presented first for single input systems.

7.3.1 Single input system

Consider ẋ = Ax + Bu; v = Cx, where A has real distinct eigenvalues.
First, transform to modal form:

    ẏ = Λy + Qu,   where Q = M⁻¹B (or Q = Mᵗ B for orthonormal eigenvectors), M being
the modal matrix, i.e.:

    ẏ = [ λ1          0 ]     [ q1 ]
        [     λ2        ] y + [ q2 ] u
        [         ⋱      ]     [ ⋮  ]
        [ 0          λn ]     [ qn ]
Then, we can draw the modal block diagram (for the closed loop system).

Figure 7.2: Modal block diagram for the closed loop system

and apply the feedback law u = −Kd y = −[ kd1 kd2 ⋯ kdn ] y. If λi is to remain
unchanged, then kdi = 0.
Then, we should obtain the characteristic equation of the closed loop system by applying
Mason's Rule:

    1 − Σ loops + Σ (products of pairs of non-touching loops) − etc. = 0

or by block algebra. Afterwards, we can obtain the values of kdi by comparing the charac-
teristic equation of the closed loop with the desired characteristic equation.

Example

    ẏ = [ 2 0  0 ]     [ 1 ]
        [ 0 3  0 ] y + [ 2 ] u
        [ 0 0 −4 ]     [ 1 ]

The modal block diagram will be:

Figure 7.3: Modal block diagram for the closed loop system

We have two unstable modes, y1 and y2. We should find a control law such that the closed loop
eigenvalues are λ1, λ2 = −1 ± j1; λ3 = −4. Note mode y3 can be left alone. The characteristic
equation is:

    1 + kd1/(s − 2) + 2kd2/(s − 3) = 0

i.e.: s² + (kd1 + 2kd2 − 5)s + (6 − 3kd1 − 4kd2) = 0
We want: (s + 1)² + 1² = 0, i.e.: s² + 2s + 2 = 0.
Therefore:

    kd1 + 2kd2 − 5 = 2
    6 − 3kd1 − 4kd2 = 2

yielding: kd1 = −10, kd2 = 8.5, Kd = [ −10 8.5 0 ] and u = −[ −10 8.5 0 ] y.


Once again, we require the control law in terms of the physical state variables x. Since y = Mᵗx
or y = M⁻¹x, we have:

    u = −Kd Mᵗ x   or   u = −Kd M⁻¹ x
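A quick numerical check of the modal design above (our own sketch, working directly in the modal coordinates):

```python
import numpy as np

Lam = np.diag([2.0, 3.0, -4.0])       # open loop modes (diagonal)
q = np.array([[1.0], [2.0], [1.0]])   # modal input column
Kd = np.array([[-10.0, 8.5, 0.0]])    # gains found above (kd3 = 0: mode left alone)

A_cl = Lam - q @ Kd                   # closed loop in modal coordinates
print(np.sort_complex(np.linalg.eigvals(A_cl)))   # eigenvalues: -4, -1 ± j1
```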

7.3.2 Multiple Input System

Once again assume real distinct eigenvalues, e.g.:

    ẏ = [ −a1   0    0  ]     [ 3 2 0 ]
        [  0   −a2   0  ] y + [ 1 0 1 ] u
        [  0    0   −a3 ]     [ 0 0 1 ]

The modal block diagram is shown in fig. 7.4.
Consider changing y1, y2 alone. y1 is dependent on u1, u2; y2 is dependent on u1, u3.
Therefore we have a choice of what input we wish to use:

1. Use u1 to control both y1, y2. The diagram then reduces to the one of fig. 7.5,
   and the analysis is as for the example in section 7.3.1 above.

2. Use u1 to control y1 and u3 to control y2. The resulting block diagram is shown
   in fig. 7.6.

Figure 7.4: Modal block diagram

Figure 7.5: Closed loop modal block diagram (feedback using only u1)

Figure 7.6: Closed loop modal block diagram (using u1 to control y1 and u3 to control y2)

The closed loop characteristic equations are:

    1 + 3kd1/(s + a1) = 0;    1 + kd2/(s + a2) = 0

and the analysis of each loop can be done separately.
Note: you should understand that if we required closed loop eigenvalues λ1, λ2 = −σ ± jω
and λ3 = −a3 (i.e. third mode unchanged), then (2) above is not possible, since feeding back y1
to u1 and y2 to u3 will yield λ1, λ2 (closed loop) as real. This should be obvious.
For multiple input systems the choice of feedback law is not unique, and thus the control law
is not unique for a desired pole placement. The choice of the control input will often depend
on the practicalities of the system. If the first row of Q is for example [ 36 0.003 17 ], then it
is fairly obvious that y1 is hardly a function of u2 and you would not use u2 to control y1.

Example
A system is defined by

    ẋ = Ax + Bu,   A = [ 1  4  2 ]     B = [ 2  1 ]
                       [ 1  1  1 ]         [ 1  1 ]
                       [ 2 −1  1 ]         [ 1 −2 ]

Find the controller matrix K to place the closed loop eigenvalues at −4, −4, −3.
Since we have two inputs, we are going to carry out the design in modal canonical form.
First of all, we have to express the system in modal canonical form.
Therefore, we have to obtain the eigenvalues of A, as well as the modal matrix M:

    |sI − A| = 0  →  s³ − 3s² − 4s = s(s + 1)(s − 4) = 0

therefore the open loop eigenvalues are 0, −1, 4 (i.e. the system is open loop unstable). The
modal matrix can be calculated as in the previous chapter:

    M = [  1    1  2 ]       M⁻¹ = [ −1/2  3/2 −1/2 ]
        [ 1/2   0  1 ]             [  1   −2    0   ]
        [ −3/2 −1  1 ]             [ 1/4  1/4  1/4  ]

Now we can express the system in modal canonical form:

    ẏ = M⁻¹AM y + M⁻¹B u

    Λ = M⁻¹AM = [ 0  0  0 ]
                [ 0 −1  0 ]
                [ 0  0  4 ]

    Q = M⁻¹B = [ −1/2  3/2 −1/2 ] [ 2  1 ]   [ 0  2 ]
               [  1   −2    0   ] [ 1  1 ] = [ 0 −1 ]
               [ 1/4  1/4  1/4  ] [ 1 −2 ]   [ 1  0 ]

From matrix Q we can conclude that modes y1 and y2 can only be controlled from u2,
whereas mode y3 can only be controlled from u1. The corresponding modal block diagram
is the one of fig. 7.7, which gives the following closed loop characteristic equations:

    1 + 2kd21/s − kd22/(s + 1) = 0;    s² + (1 + 2kd21 − kd22)s + 2kd21 = 0

Figure 7.7: Closed loop modal block diagram (using u1 to control y3 and u2 to control y1 , y2 )

    1 + kd13/(s − 4) = 0;    s − 4 + kd13 = 0

Modes y1 and y2 will be given closed loop eigenvalues of −4 and −3, and mode y3 will be changed
to have a closed loop eigenvalue equal to −4. Therefore, for modes y1 and y2, the desired closed
loop characteristic equation will be:

    (s + 3)(s + 4) = 0;    s² + 7s + 12 = 0

Comparing with

    s² + (1 + 2kd21 − kd22)s + 2kd21 = 0

and equating coefficients, we have:

    1 + 2kd21 − kd22 = 7
    2kd21 = 12

and kd21 = 6, kd22 = 6.


For mode y3 we have that the desired closed loop characteristic equation is:

    s + 4 = 0

therefore:

    −4 + kd13 = 4

and kd13 = 8. The feedback law will be:

    u = −Kd y = −[ 0 0 8 ] y
                 [ 6 6 0 ]

and expressed in the original state variables:

    u = −Kd M⁻¹ x = −[ 0 0 8 ] [ −1/2  3/2 −1/2 ]       [ 2  2  2 ]
                     [ 6 6 0 ] [  1   −2    0   ] x = − [ 3 −3 −3 ] x
                               [ 1/4  1/4  1/4  ]

We can check the result by obtaining the closed loop system:

    Acl = A − BK = [ 1  4  2 ]   [ 2  1 ]               [ −6  3  1 ]
                   [ 1  1  1 ] − [ 1  1 ] [ 2  2  2 ] = [ −4  2  2 ]
                   [ 2 −1  1 ]   [ 1 −2 ] [ 3 −3 −3 ]   [  6 −9 −7 ]

and the closed loop characteristic equation is:

    |sI − Acl| = 0  →  s³ + 11s² + 40s + 48 = 0

    s³ + 11s² + 40s + 48 = (s + 3)(s + 4)(s + 4) = 0
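The same check takes two lines in numpy (a sketch with the matrices above):

```python
import numpy as np

A = np.array([[1.0, 4.0, 2.0], [1.0, 1.0, 1.0], [2.0, -1.0, 1.0]])
B = np.array([[2.0, 1.0], [1.0, 1.0], [1.0, -2.0]])
K = np.array([[2.0, 2.0, 2.0], [3.0, -3.0, -3.0]])

A_cl = A - B @ K
print(np.poly(A_cl))                           # closed loop characteristic polynomial
print(np.sort(np.linalg.eigvals(A_cl).real))   # eigenvalues -4, -4, -3
```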

7.3.3 Repeated eigenvalues

Consider a multi-input system in which y1 corresponds to the repeated eigenvalue mode. For
example:

    ẏ = [ 1 1 0 ]     [ 0 0 ]
        [ 0 1 0 ] y + [ 2 1 ] u
        [ 0 0 3 ]     [ 0 1 ]

The canonical block diagram can be written down directly:

Figure 7.8: Canonical block diagram

It is obvious from the block diagram that the y1 and y2 modes can be controlled via u1 or u2.
This was not obvious from the canonical state equation, since the first row of Q was [ 0 0 ].
However y1 is related to u1 via the 1 entry on the off-diagonal.
We apply feedback to either u1 or u2 (or both if you wish) to stabilise the double mode
λ1,2. In the diagram u1 is chosen. Note that the double eigenvalue will be changed to two new
eigenvalues. We therefore need two feedback gains k1, k2.
The characteristic equation is:

    1 − 2k1/(s − 1)² − 2k2/(s − 1) = 0

(positive feedback). Therefore

    s² − (2 + 2k2)s + (1 − 2k1 + 2k2) = 0

    s² − (λ1 + λ2)s + λ1λ2 = 0

where λ1, λ2 are the two new eigenvalues. Equating coefficients we can obtain the control
law u = Kd y.
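As a concrete numerical instance (the target eigenvalues −1, −2 are our own choice): equating coefficients gives k1 = −3, k2 = −2.5, which can be checked directly:

```python
import numpy as np

J = np.array([[1.0, 1.0], [0.0, 1.0]])   # Jordan block for the repeated eigenvalue
b1 = np.array([[0.0], [2.0]])            # column of Q multiplying u1
k1, k2 = -3.0, -2.5                      # from s^2 - (2+2k2)s + (1-2k1+2k2) = (s+1)(s+2)

A_cl = J + b1 @ np.array([[k1, k2]])     # positive feedback u1 = k1*y1 + k2*y2
print(np.sort(np.linalg.eigvals(A_cl).real))   # -2, -1
```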

7.3.4 Systems with complex eigenvalues

For a single input system with complex eigenvalues, it was seen that the canonical block diagram
could be drawn directly from the definition of the modes obtained through partial fraction
expansion. The difficulty there was in obtaining the canonical state equation ẏ = Λy + Qu. For
multi-input systems we must obtain ẏ = Λy + Qu first from ẋ = Ax + Bu. The inverse problem
now exists, i.e. how to obtain the canonical block diagram from the canonical state equation.
Consider the multi-input system:

    ẏ = [  1 2  0 ]     [ 1 0 ]
        [ −2 1  0 ] y + [ 2 1 ] u
        [  0 0 −3 ]     [ 0 1 ]

with unstable eigenvalues at λ1, λ2 = 1 ± j2. To stabilise λ1, λ2 we only need to draw the
block diagram relating y1, y2 to u1, u2. We need it in transfer function form. Neglecting the
third row of Λ and Q we have:

    (sI − Λ)Y(s) = Q U(s)

i.e.

    [ s−1  −2  ] [ Y1 ]   [ 1 0 ] [ U1 ]
    [  2   s−1 ] [ Y2 ] = [ 2 1 ] [ U2 ]

therefore:

    [ Y1 ]          1          [ s−1   2  ] [ 1 0 ] [ U1 ]
    [ Y2 ] = —————————————————[ −2   s−1 ] [ 2 1 ] [ U2 ]
              (s − 1)² + 2²

    [ Y1 ]          1          [ s+3    2  ] [ U1 ]
    [ Y2 ] = —————————————————[ 2s−4  s−1 ] [ U2 ]
              (s − 1)² + 2²

and the canonical block diagram can be drawn. It is a bit messy. For feedback purposes we
note that both y1 and y2 are functions of u1; therefore we do not need to apply feedback to u2.
The block diagram relating y1, y2 to u1 is:

Figure 7.9: Canonical block diagram relating y1 ; y2 to u1

To change the eigenvalues to −1 ± j1 for instance, feedback gains k1, k2 are applied from
y1, y2 to u1 (note positive feedback, u1 = k1 y1 + k2 y2), as in fig. 7.10.

Figure 7.10: Canonical block diagram with feedback gains

The closed loop characteristic equation will be

    1 − k1(s + 3)/((s − 1)² + 2²) − k2(2s − 4)/((s − 1)² + 2²) = 0

    s² + (−k1 − 2 − 2k2)s + (5 − 3k1 + 4k2) = 0

We want

    s² + 2s + 2 = 0

Therefore, the equation system to solve is:

    −k1 − 2 − 2k2 = 2
    5 − 3k1 + 4k2 = 2

giving k1 = −1, k2 = −1.5, or

    u = −Kd y   with   Kd = [ 1 1.5 0 ]
                            [ 0  0  0 ]

and we transform back into real state variables via y = M⁻¹x to give u = −Kd M⁻¹ x.
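Numerically checking the complex-pair design above (our own sketch, in the modal coordinates of the unstable block):

```python
import numpy as np

Lam12 = np.array([[1.0, 2.0], [-2.0, 1.0]])  # modal block for eigenvalues 1 ± j2
q1 = np.array([[1.0], [2.0]])                # column of Q multiplying u1
k = np.array([[-1.0, -1.5]])                 # gains found above

A_cl = Lam12 + q1 @ k                        # positive feedback u1 = k1*y1 + k2*y2
print(np.sort_complex(np.linalg.eigvals(A_cl)))   # -1 ± j1
```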
Note that in the above procedure only one pair of unstable complex poles is being altered. This
resulted in Y(s) = (sI − Λ)⁻¹Q U(s) being of second order. What if you want to alter two
pairs of complex poles, e.g. λ1,2 = 1 ± j1 and λ3,4 = 1 ± j50? Since modes 1 and 2 are
decoupled from modes 3 and 4, you first do a second order calculation Y(s) = (sI − Λ)⁻¹Q U(s) for
y1, y2 and another second order calculation for y3, y4.
If the stabilisation of the four modes involves a common input you must however draw the
four modes in one block diagram. If the stabilisation of modes 1 and 2 involves different input(s)
from that of modes 3 and 4, then you can of course draw two separate block diagrams and obtain
k1, k2 independently of k3, k4.

7.3.5 Computer algorithms

All modern control techniques are procedural number crunching exercises involving matrices,
which are very conducive to computer implementation. Unlike the design by control canonical
form (single input), which is very straightforward to implement on a computer, formulating
modal block diagrams and applying Mason's Rule is basically a hand technique suitable for
example sheets and exams. It also promotes understanding. Computer algorithms for multiple
input systems are a little tricky on account of the control laws not being unique. However,
formal algorithmic procedures for multi-input systems do exist and control textbooks should be
consulted if you ever desire to write one.
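For reference, scipy ships one such algorithm. A minimal sketch on the plant of section 7.2, with (our own) target poles −8, −10:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
poles = [-8.0, -10.0]

# For a single input the gain is unique, so this reproduces the hand method exactly
K = place_poles(A, B, poles).gain_matrix      # [24, -11]
print(K)
print(np.sort(np.linalg.eigvals(A - B @ K).real))   # [-10, -8]
```

For multi-input systems, place_poles resolves the non-uniqueness discussed above by optimising the robustness of the eigenstructure.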
Chapter 8

Reference Inputs and Integral Control

8.1 Reference Inputs


Reference inputs do not affect the closed loop eigenvalues and hence do not appear in the design
of the feedback law u = −Kx. However, in general, the positioning of the reference input
does affect the closed loop zeros and hence the steady state gain and errors. Unfortunately,
state space analysis does not handle closed loop zeros very well and, as such, the general (i.e.
multi-input multi-output) design of state feedback systems for meeting gain and error coefficient
requirements is difficult and beyond the scope of this course. However, this is not so great a
drawback as may first appear since, in the great majority of cases, the selection of reference
inputs and their effects is normally quite obvious. Furthermore, the introduction of integral
compensating terms to increase the type of the system is usually quite straightforward. The
following subsections contain examples and guidelines to help get a feel for reference inputs.

8.1.1 The single reference input

What is often seen in text books is shown in fig. 8.1.

Figure 8.1: System with as many references as states

This looks as if each state is being compared with a reference input. This representation
is "generality run riot"! In a great many cases, however, only one state (which may also be
defined as an output) is actually being controlled, in which case only ONE reference input is
required. The following two examples illustrate this:

1. In many systems (generally mechanical) the states are derivatives of each other, e.g. x, ẋ, ẍ;
   θ, θ̇, θ̈, etc. If x (e.g. position) is to be controlled in steady state, the reference values of ẋ, ẍ,
   etc. must be zero. Consider, for example, the simple position control system in fig. 8.2.


Figure 8.2: Position control system

   Obviously r2 = 0. Note the signs to yield u = −Kx. k1 and k2 can be found by the
   techniques in chapter 7. For this example, it can be seen that fig. 8.2 reduces to:

Figure 8.3: Equivalent block diagram

   i.e. state feedback has produced a simple lead compensator. The system is Type 1 and x1
   will follow r1 with zero steady state error.
2. Let the states not be simple derivatives of each other, but let the states be taken as the
   outputs of transfer function blocks in cascade (fig. 8.4).
   Let x1 be controlled to x1 = 10. Thus r1 = 10. Therefore the steady state
   value of x2 is 30, since lim(s→0) 1/(s + 3) = 1/3. Thus r2 is redundant, as x2 will settle at 30
   anyway (specifying r2 will just alter the system errors as the system is type 0). It can be
   seen that if such a system consists of blocks in cascade, specifying the steady state value
   of one state (usually the output) will determine the steady state values of the others. This
   can be seen from the closed loop state space equation:

       ẋ = (A − BK)x + Bk1 r

   where r is scalar and Bk1 is n × 1. In steady state ẋ = 0, therefore:

       x = −(A − BK)⁻¹ Bk1 r

   and the entire x vector will be unique provided that (A − BK)⁻¹ exists. Figure 8.4 can
   be reduced to the block diagram in fig. 8.5 (r2 removed).
   The system is type 0 with a steady state DC gain of k1/(6 + k1 + 3k2).

Figure 8.4: More general system

Figure 8.5: Equivalent block diagram with r2 removed



To summarize, if there is one output or state to be controlled, then generally the system is
single input (u scalar) and one reference input is sufficient. This, of course, is consistent with
intuition and is, in a sense, labouring the obvious.

8.1.2 Position of reference input

For a single input (or reference) system, it will be shown that fig. 8.6 represents the most flexible
arrangement for the reference input position.

Figure 8.6: Reference position for a single input system

This is the single input version of fig. 7.1. K1 is now scalar and K2 is a 1 × n matrix. The
feedback law is:

    u = −Kx + K1 r                                 (8.1)
    K = K1 K2

i.e. the feedback matrix is split into two, with the reference input placed in between. This
allows for easy selection of the DC gain. Assume that v is equal to a single state, i.e.
C = [ 1 0 ⋯ 0 ].
We know that:

    V(s) = C(sI − A)⁻¹ B U(s)

i.e.

    V(s)/U(s) = OLTF = C adj(sI − A) B / |sI − A| = n(s)/p(s)     (8.2)

where n(s) and p(s) are the zero and pole polynomials of the open loop transfer function.
Note that C adj(sI − A) B is 1 × 1 (i.e. its single element is n(s)).
Now, taking the Laplace transform of the feedback law:

    U(s) = −K X(s) + K1 R(s)
         = −K(sI − A)⁻¹ B U(s) + K1 R(s)
         = −(q(s)/p(s)) U(s) + K1 R(s)

since K(sI − A)⁻¹B = K adj(sI − A) B / |sI − A| and K adj(sI − A) B is scalar (call it q(s)); hence

    U(s)(p(s) + q(s)) = p(s) K1 R(s)

i.e.

    U(s) = p(s) K1 R(s) / (p(s) + q(s))            (8.3)

Combining eq. 8.3 with 8.2, we obtain the closed loop transfer function as:

    V(s) = (n(s)/p(s)) U(s) = (n(s)/p(s)) · (p(s) K1 / (p(s) + q(s))) R(s) = n(s) K1 R(s) / (p(s) + q(s))

    V(s)/R(s) = n(s) K1 / pc(s)                    (8.4)

where pc(s) = p(s) + q(s) is the closed loop characteristic polynomial. This equation is very
important; it shows that:

1. The zeroes of the closed loop and open loop transfer functions are identical. For a single
input system, state feedback does not alter the plant zeroes. This is true no matter where
the reference input is placed.

2. The scalar K1 appears in the nominator. This allows for the selection of the closed loop
gain.

Example
An open loop system is given by

    G(s) = 2(s + 2) / ((s + 1)(s + 3))

Design, using state feedback, a control to achieve a system with poles at s = −8, s = −10
and a DC gain of unity.
We want a closed loop transfer function:

    Gc(s) = k′(s + 2) / ((s + 8)(s + 10))

with k′ = 40 for unity DC gain. From eq. 8.4 we see that K1 · 2(s + 2) = 40(s + 2), from
which K1 = 20.
The open loop characteristic polynomial is s² + 4s + 3.
The closed loop characteristic polynomial is s² + 18s + 80.
Then from eq. 7.8, k1 = 14, k2 = 77. And

    u = −Kx = −[ 14 77 ] x

Note we have used control canonical form. As such, x are probably not "physical states".
However, we shall assume they are here.
Since K = [ 14 77 ], we therefore have K2 = [ 14 77 ]/K1 = [ 0.7 3.85 ] and we have
the closed loop diagram in fig. 8.7.
Note that the system zero remains at −2. Indeed it has not been possible to change it.
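A numerical check of this example, working in the control canonical coordinates used above (our own sketch):

```python
import numpy as np

# Control canonical form of G(s) = (2s + 4)/(s^2 + 4s + 3)
A = np.array([[-4.0, -3.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[2.0, 4.0]])        # numerator coefficients: 2s + 4
K = np.array([[14.0, 77.0]])
K1 = 20.0

A_cl = A - B @ K                  # [[-18, -80], [1, 0]]
dc_gain = (C @ np.linalg.inv(-A_cl) @ B).item() * K1
print(np.sort(np.linalg.eigvals(A_cl).real), dc_gain)   # poles -10, -8 and unity DC gain
```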

Figure 8.7: Block diagram of the closed loop system

8.1.3 Multi-Input Systems

If there are n states (or outputs) to be controlled about non-zero values, then n reference inputs
will be required. Generally, such a system will have n inputs. The scheme of fig. 8.1 will
generally be acceptable and, if each state being controlled is the output of a Type 1 system,
then the outputs will follow the reference inputs with zero error. It is also possible to draw the
system as fig. 7.1, in which K1 will, in general, be a matrix and not a scalar. It can be shown
that when K1 is a matrix, state feedback does affect the system zeros, which will then affect the
DC gain (if the system is not Type 1) and also the system response (but not the stability). As
remarked earlier, it is difficult to design K1 to select closed loop zeros. The best approach is to
forget them and simulate the controlled system on a computer to see whether the response is
acceptable. If DC gains are a problem, the elements of K1 (and hence K2, since K is fixed) can
be modified accordingly.
There is an important class of systems in which it is possible (and desirable) to make each
controlled output dependent on one input only.
E.g. given:

    ẋ = [ −2  1 ] x + [  1  1/2 ] u
        [  1 −2 ]     [ 1/2  1  ]

state feedback can be applied to yield:

    ẋ = (A − B KD)x + B KD1 u′

where

    A − B KD = [ −2  0 ]        B KD1 = [ 1 0 ]
               [  0 −2 ]                [ 0 1 ]

The values of KD and KD1 can be easily derived if B is square and invertible:

    KD = B⁻¹ ( A + [ 2 0 ] ) = [  1  1/2 ]⁻¹ ( [ −2  1 ] + [ 2 0 ] ) = [ −2/3  4/3 ]
                   [ 0 2 ]     [ 1/2  1  ]    ( [  1 −2 ]   [ 0 2 ] )   [  4/3 −2/3 ]

    KD1 = B⁻¹ = [  1  1/2 ]⁻¹ = [  4/3 −2/3 ]
                [ 1/2  1  ]     [ −2/3  4/3 ]

    KD = KD1 KD2  →  KD2 = KD1⁻¹ KD = [ 0 1 ]
                                      [ 1 0 ]

The "closed loop" system is now termed "non-interacting" or "decoupled" for obvious reasons.
Further state feedback can now easily be applied to alter the eigenvalues. The general
scheme is shown in fig. 8.8.
When B is square, the design of decoupled systems via state feedback is quite easy. When
B is not square it is generally quite difficult. For the general theory, advanced texts should be
consulted.
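The decoupling computation above is easy to verify numerically (our own sketch):

```python
import numpy as np

A = np.array([[-2.0, 1.0], [1.0, -2.0]])
B = np.array([[1.0, 0.5], [0.5, 1.0]])
Ad = np.diag([-2.0, -2.0])            # desired decoupled closed loop matrix

KD = np.linalg.inv(B) @ (A - Ad)      # [[-2/3, 4/3], [4/3, -2/3]]
KD1 = np.linalg.inv(B)                # [[4/3, -2/3], [-2/3, 4/3]]
KD2 = B @ KD                          # KD1^-1 KD = [[0, 1], [1, 0]]

print(A - B @ KD)                     # diag(-2, -2): decoupled
print(B @ KD1)                        # identity
```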

Figure 8.8: Decoupling using state feedback

8.2 Integral Control

It has been seen that state feedback never increases the Type of the system. I.e. if the system
is Type 0, the state controlled system will still be Type 0 and will show a steady state error to a step
input. If we wish to increase the system type we must augment the state equation to include
extra "controller" states, as follows.
Recall that in classical compensation, integral compensation meant that the system input u
was a function of the integral of the error. In state feedback, the input must be a function of
states only. Therefore, we must make the integral of the error a state of the system, i.e.

    xI = ∫ (r − v) dt  →  ẋI = r − v

where xI is an extra state. For sign convention reasons we will define ẋI = v − r. Here we
are treating v and r as scalar (single controlled output). If we have p controlled outputs then
ẋI = v − r gives p extra states.
Let the system be given by:

    ẋ = Ax + Bu;   v = Cx

If the v outputs are to be "integral" controlled, we augment the state space equation with

    ẋI = v − r = Cx − r

therefore:

    [ ẋ  ]   [ A 0 ] [ x  ]   [ B ]     [  0 ]
    [ ẋI ] = [ C 0 ] [ xI ] + [ 0 ] u + [ −I ] r

i.e.

    ẋ′ = A′x′ + B′u + G′r

Matrices A′ and B′ will be used for control design. Now we will carry out state feedback to
alter the eigenvalues of A′. This will yield a feedback law:

    u = −Kx′ = −[ k1 k2 ⋯ kn kI1 kI2 ⋯ ] [ x1 x2 ⋯ xn xI1 xI2 ⋯ ]ᵗ

where kI1, kI2, etc. are the gains of the "integral compensators".
An example will make this obvious.

Example
Let

    ẋ = [ −2  1 ] x + [ 2 ] u;   v = [ 1 0 ] x
        [  0 −3 ]     [ 1 ]

The output v is required to follow a step reference with zero steady state error. The system
is Type 0. The closed loop system is to have eigenvalues of −5, −2, −3. Note three eigenvalues,
since we are going to augment the system with an integral controller, so increasing the order of
the system by one. Define ẋI = v − r = x1 − r. The augmented state equation is

    ẋ′ = [ ẋ1 ]   [ −2  1  0 ] [ x1 ]   [ 2 ]     [  0 ]
         [ ẋ2 ] = [  0 −3  0 ] [ x2 ] + [ 1 ] u + [  0 ] r
         [ ẋI ]   [  1  0  0 ] [ xI ]   [ 0 ]     [ −1 ]

and the control law will be: u = −Kx′ = −[ k1 k2 kI ] x′.
The control design is carried out in modal canonical form. The eigenvalues of A′ are given by:

    |A′ − λI| = 0;   λ(λ + 2)(λ + 3) = 0

Hence the open loop eigenvalues are 0, −2, −3. Therefore, we only need to change one
eigenvalue.
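Building the augmented system and checking its open loop eigenvalues (our own sketch):

```python
import numpy as np

A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[2.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Augment with the integral state x_I:  dx_I/dt = v - r = C x - r
A_aug = np.block([[A, np.zeros((2, 1))], [C, np.zeros((1, 1))]])
B_aug = np.vstack([B, [[0.0]]])
G_aug = np.vstack([np.zeros((2, 1)), [[-1.0]]])

print(np.sort(np.linalg.eigvals(A_aug).real))   # open loop: -3, -2, 0
```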
Now, we obtain the eigenvectors of A′ in order to find the modal matrix:

    ν1 = [ 0 ]    ν2 = [  2 ]    ν3 = [  3 ]    M = [ 0  2  3 ]
         [ 0 ]         [  0 ]         [ −3 ]        [ 0  0 −3 ]
         [ 1 ]         [ −1 ]         [ −1 ]        [ 1 −1 −1 ]

with M⁻¹ = [ 1/2  1/6   1 ]
           [ 1/2  1/2   0 ]
           [  0  −1/3   0 ]

so that, in modal canonical form,

    ẏ = [ 0  0  0 ]         [ 2 ]     [ 0  0  0 ]     [ 7/6  ]
        [ 0 −2  0 ] y + M⁻¹ [ 1 ] u = [ 0 −2  0 ] y + [ 3/2  ] u
        [ 0  0 −3 ]         [ 0 ]     [ 0  0 −3 ]     [ −1/3 ]

We change the first mode to λ = −5, i.e., with u = −k1 y1:

    1 + 7k1/(6s) = 0  →  s + 7k1/6 = 0

Equating coefficients with s + 5 = 0, we have

    7k1/6 = 5  →  k1 = 30/7

The control law is

    u = −[ 30/7  0  0 ] y = −[ 30/7  0  0 ] M⁻¹ x′ = −[ 15/7  5/7  30/7 ] [ x1 ]
                                                                          [ x2 ]
                                                                          [ xI ]

Hence, the controlled system is as shown in fig. 8.9.
Note the signs: at the "u summation" all terms are negative, on account of u = −Kx′. You can
see why we defined ẋI = v − r; i.e. the above is equivalent to fig. 8.10.

Figure 8.9: Control with integral action

Figure 8.10: Control with integral action

Note, if we wanted a Type II system for the above example, then:

    ẋI1 = v − r
    ẋI2 = xI1

then

    A′ = [ −2  1  0  0 ]     B′ = [ 2 ]
         [  0 −3  0  0 ]          [ 1 ]
         [  1  0  0  0 ]          [ 0 ]
         [  0  0  1  0 ]          [ 0 ]

    u = −[ k1 k2 k3 k4 ] x′

and the corresponding block diagram is shown in fig. 8.11.

Figure 8.11: Block diagram for double integral action

Chapter 9

Full and Reduced Order Observers

9.1 Introduction

You should now appreciate that state feedback is a powerful technique which can not only
stabilize complex multivariable interacting systems but is also able to position the closed loop poles
(eigenvalues) at any position that the designer requires. However, the main assumption with
feedback of all the system states is that all the system states are measurable quantities; other-
wise, the law u = −Kx would be practically useless.
We have said earlier that in nearly all cases it is possible to derive ẋ = Ax + Bu such that
the states are "physical" quantities. This is true, but the fact that all states may be physical
and measurable in principle does not mean that they are easily measurable in practice. There
are a number of scenarios:

1. A system (especially mechanical) may have states x, ẋ, ẍ; θ, θ̇, θ̈; etc., in which case a large
   number of potentiometers, tachos, accelerometers, etc. could be required.

2. Deriving ẋ, ẍ from x is a differential process and could be noisy.

3. The measurements may be just plain noisy anyway.

4. Transducers may be expensive.

5. States may be inaccessible. This is common in the nuclear and chemical industries, where
   states (e.g. heat or fluid flow) are in areas which are inconvenient to reach.

The reasons may be any one or a mixture of the above. The problem is solved by augmenting
the system controller u = −Kx with a State Estimator or State Observer which literally
estimates the states for use in the controller. If all the states are to be estimated, we have a
Full Observer. If a portion of the states is easily measurable, then estimation of the remainder
constitutes a Reduced Order Observer.
Observers work by comparing the real, measured output of the plant, v, with the output
of a modelled or simulated plant, v̂. The error between v and v̂ is then used to adjust the
states x̂ of the simulated plant. It is these simulated states, x̂, which are used for feedback,
i.e. u = −Kx̂. The simulated plant can be either an analogue model consisting of integrators,
amplifiers, etc. or a digital model on a microprocessor.
You can now see that the output vector, v, now comes into its own. Before, when we assumed
all states were available, v was rather superfluous, the C matrix being either the unit matrix
or a matrix consisting of the odd unity element. For observers, v are the actual measurements
which will be used to compare the real plant and the modelled plant performance.


9.2 Full Order Observers

This will be done first, as the theory is much simpler than for the reduced order case. The theory will
be done for multi-output measurements, v, which may or may not be states. Consider first an
"open loop observer":

Figure 9.1: Open loop observer

where x̂ and v̂ are the state and output estimates. u is the controller output and is obviously
a known quantity. Both plant and simulator are stimulated by u and as such x̂ will follow x
providing:

1. The model matrices A, B and C are accurate.

2. The initial state conditions x̂0 and x0 are the same.

Statement 2 is particularly far fetched. If xi cannot be measured, how do we know x̂i0? Even
if x̂0 were accurate, it can be seen that x̂ and x will drift apart through inaccuracies in A and
B.
What is done is to apply feedback to the model system in order that the error between x
and x̂ (which we denote as x̃ = x − x̂) reduces to zero. The resulting closed loop observer is
shown in fig. 9.2.

Figure 9.2: Closed loop observer



Note that it is not necessary to include the modelled B inside the model system feedback
loop, since Bu is a known quantity. The plant equations are:

    ẋ = Ax + Bu                                    (9.1)
    v = Cx + Du                                    (9.2)

D is not common, but is included for generality.
The observer equations are:

    x̂̇ = Ax̂ + Bu + L(v − v̂)                        (9.3)
    v̂ = Cx̂ + Du                                    (9.4)

Substituting (9.2) and (9.4) into (9.3) gives:

    x̂̇ = Ax̂ + Bu + LC(x − x̂)                       (9.5)

Now subtract (9.5) from (9.1):

    ẋ − x̂̇ = A(x − x̂) − LC(x − x̂)                  (9.6)

i.e.

    x̃̇ = (A − LC)x̃                                 (9.7)

Eq. (9.7) defines the Error Dynamics. It is an unforced equation and hence the error x̃ will
decay to zero according to the eigenvalues of (A − LC). The error will decay to zero no matter
what the initial error conditions x̃0 (and hence x̂0) are. The problem now is to choose L to give
acceptable eigenvalues for (A − LC). What should they be?
Remember that the final closed loop system will have the eigenvalues of (A − BK). Therefore
the dynamics of x will be made up of transient terms defined by these closed loop eigenvalues. Since
we want x̂ to follow x, any error between them must decay faster than any closed loop
transient. Rule:
The eigenvalues of (A − LC) should be approximately 10 times faster than those of (A − BK).
Generally the slowest eigenvalue of (A − LC) should be an order higher than the fast-
est "reasonably dominant" eigenvalue of (A − BK). E.g. let the eigenvalues of (A − BK) be
−4, −2 ± j2, −80. The −4 is reasonably dominant, the −80 is not. Therefore choose the eigen-
values of (A − LC) as −40 ± j40, −40, −50. Note the actual numbers, real or complex, are rather
arbitrary.
Now we have to find L.
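For a single-output system, L can also be computed in one shot by the dual of Ackermann's formula (a standard alternative to the canonical-form route developed below); a sketch using the plant of the example that follows, where the desired observer polynomial is α(s) = s² + 100s + 5000:

```python
import numpy as np

A = np.array([[-2.0, 1.0], [0.0, -3.0]])
C = np.array([[2.0, 3.0]])

# Dual Ackermann: L = alpha(A) O^-1 e_n, with O the observability matrix
O = np.vstack([C, C @ A])
alphaA = np.linalg.matrix_power(A, 2) + 100.0 * A + 5000.0 * np.eye(2)
L = alphaA @ np.linalg.inv(O) @ np.array([[0.0], [1.0]])

print(L.ravel())               # [7111, -4709]
print(np.poly(A - L @ C))      # error dynamics: s^2 + 100s + 5000
```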

9.2.1 Single Output Systems. Observer Design using Observer Canonical Form

This procedure is in fact the exact dual of the procedure in section 7.2. Consider the system:

    ẋ = Ax + Bu;   v = Cx

We first convert to observer canonical form:

    ẋo = Ao xo + Bo u;   v = Co xo                 (9.8)

where

    xo = Tx,   T = Po⁻¹ P,   Po = [ Co     ]    P = [ C   ]
                                  [ Co Ao  ]        [ CA  ]      (9.9)
                                  [ Co Ao² ]        [ CA² ]
                                  [   ⋮    ]        [  ⋮  ]

If v is scalar, C and Co will be 1 × n and P and Po will be square. Po⁻¹ will only exist if the
system is observable.
We note that Ao and Co can be written by inspection:

    Ao = [ −a1  1  0  ⋯  0 ]
         [ −a2  0  1  ⋯  0 ]
         [ −a3  0  0  ⋱  0 ]        Co = [ 1 0 ⋯ 0 ]
         [  ⋮            1 ]
         [ −an  0  0  ⋯  0 ]

where ai is the coefficient of s^(n−i) (or λ^(n−i)) of a(s), the open loop characteristic polynomial.
Since Co is 1 × n, it follows that Lo will be n × 1, i.e. Lo = [ lo1 lo2 ⋯ lon ]ᵗ. The suffix
"o" in loi denotes that the design is being done in the observer canonical domain. Therefore:

    Lo Co = [ lo1 ]               [ lo1 0 ⋯ 0 ]
            [ lo2 ] [ 1 0 ⋯ 0 ] = [ lo2 0 ⋯ 0 ]
            [  ⋮  ]               [  ⋮  ⋮   ⋮ ]
            [ lon ]               [ lon 0 ⋯ 0 ]

and Ao − Lo Co is therefore:

    Ao − Lo Co = [ −a1−lo1  1  0  ⋯  0 ]
                 [ −a2−lo2  0  1  ⋯  0 ]
                 [ −a3−lo3  0  0  ⋱  0 ]       (9.10)
                 [    ⋮               1 ]
                 [ −an−lon  0  0  ⋯  0 ]

which is still in observer canonical form. Therefore a1 + lo1, a2 + lo2, … are the coefficients of
s^(n−i) of α(s), the characteristic polynomial of (Ao − Lo Co), i.e.

    sⁿ + α1 s^(n−1) + α2 s^(n−2) + ⋯ + αn = (s − λo1)(s − λo2)⋯(s − λon)

where λoi are the observer eigenvalues (real or complex). Therefore, we have:

    loi = αi − ai                                  (9.11)

and

    Lo = [ lo1 lo2 ⋯ lon ]ᵗ                        (9.12)
Now, if we used ẋo = Ao xo + Bo u, v = Co xo as our model system, the matrix Lo would give
us an estimate of the states xo. Hence, the control law u = −Kx would become u = −K T⁻¹ xo.
Alternatively, we can attempt to get estimates of x directly. The observer equation in observer
canonical form is:

x̂̇o = Ao x̂o + Bo u + Lo (v − v̂)

Since x̂o = T x̂, we have:

T x̂̇ = Ao T x̂ + Bo u + Lo (v − v̂)

x̂̇ = T⁻¹ Ao T x̂ + T⁻¹ Bo u + T⁻¹ Lo (v − v̂)

x̂̇ = A x̂ + B u + L (v − v̂)

Figure 9.3: Observer and controller block diagram

and comparing terms we see that:

L = T⁻¹ Lo   (9.13)
The procedure for designing an observer is therefore:

1. Given A, B, C find the characteristic polynomial a(s) from |A − λI| = 0.

2. Calculate P = [ C ; CA ; CA² ; … ]. Write down Ao, Co. Calculate Po = [ Co ; Co Ao ; Co Ao² ; … ].

3. Select the observer eigenvalues (λoi). Form α(s).

4. Calculate loi using loi = αi − ai to form Lo.

5. Calculate T⁻¹ = P⁻¹ Po.

6. Calculate L = T⁻¹ Lo.

And the observer (with the controller) is shown in fig. 9.3.

Example  Let

A = [ −2  1 ; 0  −3 ];   B = [ 1 ; 1 ];   C = [ 2  3 ]

Design a controller u = −Kx to give closed loop eigenvalues of −5 ± j5. The states are not
accessible. Therefore we need to design an observer with observer eigenvalues of −50 ± j50. The
control design has already been done in section 7.1; K is [ 17  −12 ]. We are going to design the
observer following the steps above:
The open loop characteristic polynomial is: (s + 2)(s + 3) = s² + 5s + 6.
Now we obtain the matrix P = [ C ; CA ] = [ 2  3 ; −4  −7 ].
The system in observer canonical form is:

Ao = [ −5  1 ; −6  0 ];   Co = [ 1  0 ]

Therefore, the matrix Po = [ Co ; Co Ao ] = [ 1  0 ; −5  1 ].
The observer eigenvalues are λo = −50 ± j50, therefore α(s) = (s + 50)² + 50² = s² + 100s + 5000.

Therefore lo1 = 100 − 5 = 95, lo2 = 5000 − 6 = 4994, and Lo = [ 95 ; 4994 ].
Now we need the matrix T⁻¹ = P⁻¹ Po in order to obtain matrix L in the original coordinates:

T⁻¹ = P⁻¹ Po = [ 2  3 ; −4  −7 ]⁻¹ [ 1  0 ; −5  1 ] = [ −4  3/2 ; 3  −1 ]

and finally:

L = T⁻¹ Lo = [ −4  3/2 ; 3  −1 ] [ 95 ; 4994 ] = [ 7111 ; −4709 ]

To check the design, we just need to obtain the eigenvalues of (A − LC).

A − LC = [ −2  1 ; 0  −3 ] − [ 7111 ; −4709 ] [ 2  3 ] = [ −14224  −21332 ; 9418  14124 ]

|λI − (A − LC)| = λ² + 100λ + 5000

Therefore the design is correct.
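The six-step procedure and the example above can be sketched in code. The snippet below is an illustrative numpy script, not part of the original notes; the helper name `observer_gain` is mine.

```python
import numpy as np

def observer_gain(A, C, alpha):
    """Single-output observer gain L via the observer canonical form
    (steps 1-6 above).  `alpha` = [alpha1, ..., alphan], coefficients of
    the desired polynomial s^n + alpha1 s^(n-1) + ... + alphan."""
    n = A.shape[0]
    a = np.poly(A)[1:]                              # step 1: a1 ... an
    # step 2: P = [C; CA; ...] and the canonical pair (Ao, Co)
    P = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    Ao = np.zeros((n, n))
    Ao[:, 0] = -a
    Ao[:n - 1, 1:] = np.eye(n - 1)
    Co = np.eye(1, n)                               # [1 0 ... 0]
    Po = np.vstack([Co @ np.linalg.matrix_power(Ao, k) for k in range(n)])
    Lo = (np.asarray(alpha) - a).reshape(n, 1)      # step 4: loi = alphai - ai
    Tinv = np.linalg.solve(P, Po)                   # step 5: T^-1 = P^-1 Po
    return Tinv @ Lo                                # step 6: L = T^-1 Lo

A = np.array([[-2.0, 1.0], [0.0, -3.0]])
C = np.array([[2.0, 3.0]])
L = observer_gain(A, C, alpha=[100.0, 5000.0])      # alpha(s) = s^2 + 100s + 5000
print(L.ravel())                                    # [ 7111. -4709.]
print(np.linalg.eigvals(A - L @ C))                 # -50 ± j50
```

Running it reproduces the hand calculation, including the final eigenvalue check of (A − LC).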

9.3 Reduced Order State Observers

The full order observer estimates all the states whether you like it or not. If some of the states
are measurable you can either:

• use a full observer and so gain estimates of both measurable and non-measurable states.
You can choose whether to use the measured states or their estimates in the feedback law
(there is some sense in doing this if the measurements are noisy);

• design a reduced order observer in which only non-measurable states are estimated (observed).

The reduced order observer is a little trickier than the full order one. The procedural
implementation is however just as straightforward.
We split up the system into measurable states xm and non-measurable states xr (we'll let r
stand for reduced). We can write ẋ = Ax + Bu as:

[ ẋm ; ẋr ] = [ A11  A12 ; A21  A22 ] [ xm ; xr ] + [ B1 ; B2 ] u   (9.14)

and

v = [ C1  0 ] [ xm ; xr ]   (9.15)

C1 will generally be a unit matrix. We can write the state equation for xr as:

ẋr = A22 xr + (A21 xm + B2 u) =   (9.16)
   = A22 xr + ur   (9.17)

since xm is measurable and u is of course known. Therefore, we can view ur = A21 xm + B2 u
as an equivalent input which forces the state equation for the unknown states. What we need
now is an output equation for the unknown states. We have

ẋm = A11 xm + A12 xr + B1 u

therefore:

ẋm − A11 xm − B1 u = vr = A12 xr   (9.18)

where ẋm − A11 xm − B1 u is our equivalent output. True, it contains ẋm, which is not directly
measurable, but this will be seen not to be a problem. We can now write the observer equation
for x̂̇r in the same way as with the full order observer:

x̂̇r = A22 x̂r + ur + L (vr − v̂r)   (9.19)

i.e.

x̂̇r = A22 x̂r + (A21 xm + B2 u) + L (ẋm − A11 xm − B1 u − A12 x̂r)   (9.20)

(the bracketed terms being ur and vr − v̂r respectively), or

x̂̇r = A22 x̂r + (A21 xm + B2 u) + L (A12 xr − A12 x̂r)   (9.21)

and now we subtract eq. 9.21 from eq. 9.16 to yield:

ẋr − x̂̇r = A22 (xr − x̂r) − L A12 (xr − x̂r)

x̃̇r = (A22 − L A12) x̃r   (9.22)

which is the unforced dynamic equation for the errors in the unknown states and is analogous
to eq. 9.7. The L matrix can be found by the procedures developed in section 9.2.1 with the
matrix A22 acting as the system matrix A and A12 acting as the output matrix C.
The observer however remains to be implemented. The full observer was represented by eq.
9.3. The reduced order equivalent is 9.19 or 9.20. Collecting terms in that equation, we get:

x̂̇r = (A22 − L A12) x̂r + (A21 − L A11) xm + (B2 − L B1) u + L ẋm   (9.23)

Now xm and u are no problem since they are known. To get rid of the L ẋm term we define
a new state:

ẋc = x̂̇r − L ẋm

i.e.

xc = x̂r − L xm   (9.24)

Therefore

ẋc = (A22 − L A12) x̂r + (A21 − L A11) xm + (B2 − L B1) u   (9.25)

And the reduced order observer (eqs. 9.24 and 9.25) can be implemented as shown in fig.
9.4.
I'm afraid you will have to remember this block diagram (or 9.24 and 9.25) together with eq.
9.22. Armed with these you can now design reduced order observers.

Figure 9.4: Reduced order observer

Example  We'll take our old example again:

A = [ −2  1 ; 0  −3 ];   B = [ 1 ; 1 ];   C = [ 1  0 ]

Design a controller u = −Kx to give closed loop eigenvalues of −5 ± j5. The control design
has already been done in section 7.1; K is [ 17  −12 ]. The state x1 is measurable and x2 is
inaccessible.
We have A11 = −2, A12 = 1, A21 = 0, A22 = −3. Observer eigenvalues are given by those
of (A22 − L A12) = −3 − L × 1 = −3 − L. Let the observer pole be at s = −50. Therefore
|λI − (A22 − L A12)| = λ + 3 + L = 0, hence L = 47.
The resulting observer and control loop is shown in fig. 9.5.

Figure 9.5: Block diagram of the complete system (control and reduced order observer)
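For this first order reduced observer, eq. 9.22 pins down L directly; the arithmetic can be checked in a couple of lines (an illustrative sketch, variable names mine):

```python
import numpy as np

A = np.array([[-2.0, 1.0], [0.0, -3.0]])
A11, A12, A21, A22 = A[0, 0], A[0, 1], A[1, 0], A[1, 1]

# Scalar case of eq. 9.22: the observer eigenvalue is A22 - L*A12
pole = -50.0
L = (A22 - pole) / A12
print(L)   # 47.0
```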

9.4 Observers - Discussion

9.4.1 Closed loop eigenvalues

We have controlled the system using u = −Kx to give a system with user selected closed loop
eigenvalues. We have then designed an observer with fast "observer" eigenvalues. Question: has
the existence of the observer altered the principal (and designed) closed loop eigenvalues? The
answer is no. We shall prove this for the full observer case.
For an n-th order system, with states x, the observer has effectively added another n state
variables x̂. The closed loop system will be of order 2n. Since x̃ = x − x̂, i.e. a linear combination
of estimated and real states, we can therefore use x and x̃ as the 2n state variables for the closed
loop system (the eigenvalues of the closed loop system matrix will be the same as if we had used
x and x̂ as the 2n states).
The closed loop system is defined by:

ẋ = Ax + Bu;   u = −K x̂   (9.26)

and the error state equation:

x̃̇ = (A − LC) x̃   (9.27)

From x̃ = x − x̂, we have x̂ = x − x̃, therefore eq. 9.26 can be expressed as:

ẋ = Ax − BK x̂ = Ax − BK (x − x̃)

ẋ = (A − BK) x + BK x̃   (9.28)
x̃̇ = (A − LC) x̃

Eqs. 9.28 are the closed loop state equations and can be written (neglecting reference inputs):

[ ẋ ; x̃̇ ] = [ (A − BK)  BK ; 0  (A − LC) ] [ x ; x̃ ]   (9.29)

The eigenvalues of the closed loop are given by:

| λI − (A − BK)        −BK       |
|       0         λI − (A − LC)  |  = 0

and since the determinant is block triangular, this reduces to:

|λI − (A − BK)| |λI − (A − LC)| = 0   (9.30)

i.e. the eigenvalues of the closed loop system consist of the union of the eigenvalues of
(A − BK) and of (A − LC). This is extremely convenient as it means that:
The controller and observer can be designed independently. The existence of one
does not affect the eigenvalues of the other.
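The separation property of eq. 9.30 is easy to confirm numerically for the example of this chapter (an illustrative check, not from the notes; the gains K and L are those computed earlier):

```python
import numpy as np

A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[2.0, 3.0]])
K = np.array([[17.0, -12.0]])          # controller gain from section 7.1
L = np.array([[7111.0], [-4709.0]])    # observer gain from section 9.2.1

# Composite matrix of eq. 9.29 in the (x, x~) coordinates
Acl = np.block([[A - B @ K, B @ K],
                [np.zeros((2, 2)), A - L @ C]])

print(np.sort_complex(np.linalg.eigvals(Acl)))
# union of {-5 ± j5} (controller) and {-50 ± j50} (observer)
```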

9.4.2 Reference Input and System Zeroes

Consider a single input single output system with a full order observer, as shown in fig. 9.6.

Figure 9.6: Block diagram of a SISO system with state feedback control using a full order
observer

Let pc(s) be the characteristic polynomial of (A − BK), K = K1 K2; let pe(s) be the character-
istic polynomial of (A − LC).
Let n(s) be the polynomial of the plant open loop zeroes.
Then it can be shown that the closed loop transfer function is given by:

v(s)/r(s) = k1 n(s) pe(s) / (pc(s) pe(s)) = k1 n(s) / pc(s)   (9.31)

Therefore, for single input, single output systems, the observer and controller are placed in
the feedback path, with only the scalar gain K1 in the forward path. Then, we can conclude:

1. The closed loop zeros are the same as the plant open loop zeros.

2. The observer poles are cancelled by equal zeros. This means that the fast observer modes
are uncontrollable from r and unobservable at v. It does not mean that the system is only
of order n: eq. 9.30 still holds and the full system contains 2n eigenvalues.

3. Steady state gain adjustment via the parameter k1 can be carried out as in section 8.1.2.

In fact, statement 2 is true for fig. 9.6 even for a multi-input case, i.e. K1 is a matrix.
However, in this case, statements 1 and 3 do not hold.

Example  Given:

V(s)/U(s) = (s + 4) / ((s + 3)(s + 2))

design a controller to place the closed loop poles at −5 ± j5. Only v is measurable. The system
is to have unity DC gain.
The system is second order. Let x1 = v, and let x2 be such that x1 = [(s + 4)/(s + 2)] x2
(i.e. x2 = u/(s + 3)). This choice of x2 is rather arbitrary and is done because the state space
equation is thus:

ẋ = [ −2  1 ; 0  −3 ] x + [ 1 ; 1 ] u;   v = [ 1  0 ] [ x1 ; x2 ]

which we have already done! i.e.

u = −[ 17  −12 ] [ x1 ; x̂2 ]

where x̂2 is the output of a reduced order observer with L = 47 (see section 9.3).
From eq. 9.31 the closed loop transfer function is to be:

V(s)/R(s) = K1 (s + 4) / (s² + 10s + 50)

which has a steady state gain of 0.08 K1. Therefore K1 = 12.5 and K2 is a 1 × 2 matrix
K2 = [ 1.36  −0.96 ].

Figure 9.7: Use of a full order observer in a system with integral action

9.4.3 Integral Control

No problem at all here. An observer is used to estimate the states of the plant and not, obviously,
the augmented states created for error control, as shown in fig. 9.7.
Therefore the augmented state space equations are used to design the controller whilst the
observer design is based on (A − LC), where A is the non-augmented system matrix. In fact,
−kI xI can be viewed as a "reference input" to the plant/observer system. From eq. 9.31 there will
be no zeroes from kI xI to v. Note that K is not split into K1 and K2 as the integral control
will automatically ensure that the steady state gain between v and r is unity.
Chapter 10

Time Solution of the State Space Equation

10.1 Introduction

In this section we will solve the state space equation ẋ = Ax + Bu, i.e. given x(t₀), the initial
states at t = t₀, and u(t), we will solve for x(t).
Although we will concentrate on solving the plant state equation ẋ = Ax + Bu, the same
techniques can be used to solve for the closed loop system equation ẋ = (A − BK)x + BK1 r
since this is mathematically identical.
The aim of this section is twofold:

1. to enable you to simulate the transient waveforms of the uncontrolled and controlled plants
on a computer (for a linear time-invariant system). This is essential as it enables you to
test out your control designs and see how sensitive the design is to parameter variations
which generally exist in practice.

2. It provides a bridge between continuous state space systems and discrete time state space
systems.

The theory of the continuous solution will be done first. We will then "discretise" the theory
for implementation on a computer.

10.2 The continuous solution

Consider a first order ordinary differential equation: ẋ = ax + bu. If the initial condition x(t₀)
at t = t₀ is given and the input function u(t) is known for t ≥ t₀ then the solution is:

x(t) = x(t₀) e^{a(t−t₀)} + q(t−t₀) e^{a(t−t₀)}   (10.1)

The first term is known as the complementary function and the second is a particular integral.
The complementary function is the solution to the unforced equation ẋ = ax and depends only
on the initial conditions and the system modes (in this case e^{at}). The particular integral is the
solution of the forced system with zero initial conditions. It is a function of the input u(t) and
the system modes. q(t) is a function to be found.
If you don't understand eq. 10.1 don't worry, just accept it. For the vector equation ẋ =
Ax + Bu we will postulate, by analogy, a solution of the form:

x(t) = x_CF(t−t₀) + x_PI(t−t₀)

x(t) = Φ(t−t₀) x(t₀) + Φ(t−t₀) q(t−t₀)   (10.2)


where x(t₀) is the initial condition vector and is n × 1, and q(t−t₀) is also n × 1 and is our
input dependent function and is to be found. Therefore Φ(t−t₀) is an n × n matrix. It is called
the Transition Matrix. For a first order system Φ(t−t₀) = e^{a(t−t₀)}. For higher order
systems Φ(t−t₀) will contain elements which are time functions of the modes, i.e. elements like
e^{λ₁(t−t₀)}, e^{−t} sin t, etc. We write Φ(t−t₀) rather than just Φ to emphasize that it is a matrix
whose elements are time functions, time beginning at t = t₀ (normally t₀ = 0).
Now x_CF(t−t₀) = Φ(t−t₀) x(t₀), and x_CF must obey the unforced equation ẋ = Ax.
Therefore:

Φ̇(t−t₀) x(t₀) = A Φ(t−t₀) x(t₀)

Therefore:

Φ̇(t−t₀) = A Φ(t−t₀)   (10.3)

in other words, Φ itself obeys ẋ = Ax. In fact, Φ has the following properties:

1. Φ is called a Fundamental Matrix, i.e. it obeys ẋ = Ax.

2. Φ(t₀−t₀) = I

3. Φ(t₃−t₁) = Φ(t₃−t₂) Φ(t₂−t₁). This is known as the transition property.

4. Φ⁻¹(t−t₀) = Φ(t₀−t)

We will see in section 10.3 how to calculate the transition matrix. Now we are going to find
the unknown function q(t−t₀).
Since x_PI(t−t₀) = Φ(t−t₀) q(t−t₀) is the solution to ẋ = Ax + Bu, it must obey it:

Φ̇(t−t₀) q(t−t₀) + Φ(t−t₀) q̇(t−t₀) = A Φ(t−t₀) q(t−t₀) + Bu(t)   (10.4)

Substituting Φ̇(t−t₀) = A Φ(t−t₀) into the previous equation:

A Φ(t−t₀) q(t−t₀) + Φ(t−t₀) q̇(t−t₀) = A Φ(t−t₀) q(t−t₀) + Bu(t)

Φ(t−t₀) q̇(t−t₀) = Bu(t)

q̇(t−t₀) = Φ⁻¹(t−t₀) Bu(t)

q(t−t₀) = ∫_{t₀}^{t} Φ⁻¹(τ−t₀) Bu(τ) dτ

Applying property 4, Φ⁻¹(τ−t₀) = Φ(t₀−τ):

q(t−t₀) = ∫_{t₀}^{t} Φ(t₀−τ) Bu(τ) dτ

Therefore the particular integral will be:

x_PI(t−t₀) = Φ(t−t₀) q(t−t₀) = Φ(t−t₀) ∫_{t₀}^{t} Φ(t₀−τ) Bu(τ) dτ

Φ(t−t₀) can be taken inside the integral, as it is not a function of τ:

x_PI(t−t₀) = ∫_{t₀}^{t} Φ(t−t₀) Φ(t₀−τ) Bu(τ) dτ   (10.5)

Using the transition property we have:

x_PI(t−t₀) = ∫_{t₀}^{t} Φ(t−τ) Bu(τ) dτ

Therefore, the complete solution is

x(t) = x_CF(t−t₀) + x_PI(t−t₀)

x(t) = Φ(t−t₀) x(t₀) + ∫_{t₀}^{t} Φ(t−τ) Bu(τ) dτ   (10.6)

You must remember this equation and understand its structure; you will not be asked to
derive it.
Now all we have to do is find Φ(t−t₀).
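Eq. 10.6 can be evaluated numerically. The sketch below is mine (numpy only; a truncated power series stands in for Φ, and the convolution integral is approximated with the trapezoidal rule), checked against the first order case whose solution is known in closed form:

```python
import numpy as np

def mexp(M, terms=30):
    """Truncated power series for e^M (adequate for small ||M||)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def solve_state_eq(A, B, u, x0, t, t0=0.0, steps=2001):
    """x(t) = Phi(t-t0) x(t0) + integral_{t0}^{t} Phi(t-tau) B u(tau) dtau."""
    taus = np.linspace(t0, t, steps)
    vals = np.stack([mexp(A * (t - tau)) @ B @ u(tau) for tau in taus])
    dt = (t - t0) / (steps - 1)
    integral = dt * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1]))
    return mexp(A * (t - t0)) @ x0 + integral

# first order check: xdot = -x + u, unit step, x(0) = 0  ->  x(t) = 1 - e^{-t}
A, B = np.array([[-1.0]]), np.array([[1.0]])
x = solve_state_eq(A, B, lambda tau: np.array([1.0]), np.array([0.0]), t=1.0)
print(x[0], 1 - np.exp(-1.0))
```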

10.3 Finding the Transition Matrix

10.3.1 Method 1. Inverse Laplace Transform

Taking Laplace transforms of ẋ = Ax + Bu, including initial conditions:

s x(s) − x(t₀) = A x(s) + B u(s)

i.e.

x(s) = (sI − A)⁻¹ x(t₀) + (sI − A)⁻¹ B u(s)

and

x(t) = L⁻¹{(sI − A)⁻¹} x(t₀)  +  L⁻¹{(sI − A)⁻¹ B u(s)}
       (complementary function)   (particular integral)

Therefore, it can be seen that

Φ(t) = L⁻¹{(sI − A)⁻¹}

This looks neat until you try an example, e.g.

A = [ 0  1 ; −2  −3 ];   (sI − A)⁻¹ = 1/((s+1)(s+2)) [ s+3  1 ; −2  s ]

Φ(t) = L⁻¹{(sI − A)⁻¹} = [ L⁻¹{(s+3)/((s+1)(s+2))}   L⁻¹{1/((s+1)(s+2))}
                           L⁻¹{−2/((s+1)(s+2))}      L⁻¹{s/((s+1)(s+2))} ]

Note: to obtain Φ(t−t₀), just find the inverse Laplace transform and replace t in the
resulting expressions by (t−t₀). The example shows that for an n-th order system you have n³
partial fraction coefficients to find.

10.3.2 Method 2. Finding Φ via Modal Transformation

As a preliminary to method 2 consider: for a first order system we saw that Φ(t−t₀) = e^{a(t−t₀)}
or Φ(t) = e^{at}. Can we say that Φ(t) = e^{[A]t}, where A is an n × n matrix? Here the matrix
exponential is defined as follows:

e^{[A]t} = I + At + A²t²/2! + A³t³/3! + A⁴t⁴/4! + ⋯   (10.7)

and

d/dt{e^{[A]t}} = A + A²t/1! + A³t²/2! + A⁴t³/3! + ⋯   (10.8)

We can say that Φ(t) = e^{At} as long as e^{At} satisfies the properties of Φ in sec. 10.2. Well,
multiplying the first equation by A we obtain the second equation. Therefore d/dt{e^{At}} = A e^{[A]t},
i.e. the matrix exponential is a solution to the homogeneous equation. Also e^{At} = I for t = 0.
You can take it from me that properties 3 and 4 are satisfied as well, therefore:

Φ(t−t₀) = e^{(t−t₀)A}   (10.9)

Now let us use this expression to calculate Φ(t−t₀):
Let ẋ = Ax (unforced) be transformed to ẏ = Λy. The solution y(t) is (assuming t₀ = 0):

y(t) = Φmodal(t) y(0)

and from Φ(t) = e^{[A]t}:

y(t) = e^{[Λ]t} y(0)

However, y(t) = M⁻¹ x, therefore

x(t) = M e^{[Λ]t} M⁻¹ x(0)

and

Φ = M e^{[Λ]t} M⁻¹   (10.10)

Unlike e^{[A]t}, e^{[Λ]t} is easily obtained:

e^{[Λ]t} = I + Λt + Λ²t²/2! + Λ³t³/3! + Λ⁴t⁴/4! + ⋯

Since Λ is diagonal, every power Λᵏ is diagonal with entries λᵢᵏ, so the i-th diagonal entry of
the series is 1 + λᵢt + λᵢ²t²/2! + ⋯ = e^{λᵢt}. Therefore

e^{[Λ]t} = [ e^{λ₁t}   0     …   0
             0      e^{λ₂t}  …   0
             ⋮                   ⋮
             0         0     …  e^{λₙt} ]

Therefore e^{[Λ]t} can be written down by inspection and Φ found from Φ = M e^{[Λ]t} M⁻¹.
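Eq. 10.10 translates directly into code. The sketch below is illustrative (numpy, diagonalisable A assumed; helper name mine) and verifies two of the transition matrix properties from sec. 10.2:

```python
import numpy as np

def transition_matrix(A, t):
    """Phi(t) = M e^{Lambda t} M^{-1}; assumes A is diagonalisable."""
    lam, M = np.linalg.eig(A)
    return (M * np.exp(lam * t)) @ np.linalg.inv(M)   # M diag(e^{lam_i t}) M^{-1}

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Phi = transition_matrix(A, 0.5).real

# sanity checks: Phi(0) = I, and the transition property Phi(a+b) = Phi(a) Phi(b)
print(np.allclose(transition_matrix(A, 0.0).real, np.eye(2)))
print(np.allclose(transition_matrix(A, 0.8).real,
                  transition_matrix(A, 0.5).real @ transition_matrix(A, 0.3).real))
```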

Example  Find the complete solution of

ẋ = [ 0  1 ; −2  −3 ] x + [ 0 ; 1 ] u

for a step function u(t) = −2 (t > 0). Let x(0) = [ 1 ; 0 ].
First of all, we have to find Φ. For A = [ 0  1 ; −2  −3 ], λ₁ = −1, λ₂ = −2,
M = [ 1  1 ; −1  −2 ], M⁻¹ = [ 2  1 ; −1  −1 ].

Therefore:

Φ = [ 1  1 ; −1  −2 ] [ e^{−t}  0 ; 0  e^{−2t} ] [ 2  1 ; −1  −1 ] =
  = [ 2e^{−t} − e^{−2t}      e^{−t} − e^{−2t}
      −2e^{−t} + 2e^{−2t}   −e^{−t} + 2e^{−2t} ]

Now we can obtain the complementary function:

Φ(t) x(0) = [ 2e^{−t} − e^{−2t}      e^{−t} − e^{−2t}
              −2e^{−t} + 2e^{−2t}   −e^{−t} + 2e^{−2t} ] [ 1 ; 0 ] = [ 2e^{−t} − e^{−2t} ; −2e^{−t} + 2e^{−2t} ]

To obtain the solution, now we need to find a particular integral:

x_PI = ∫₀ᵗ Φ(t−τ) Bu(τ) dτ

Bu(τ) = [ 0 ; 1 ] (−2) = [ 0 ; −2 ], hence:

Φ(t−τ) Bu(τ) = [ −2e^{−(t−τ)} + 2e^{−2(t−τ)} ; 2e^{−(t−τ)} − 4e^{−2(t−τ)} ]

x_PI = ∫₀ᵗ [ −2e^{−(t−τ)} + 2e^{−2(t−τ)} ; 2e^{−(t−τ)} − 4e^{−2(t−τ)} ] dτ =
     = [ −2e^{−(t−τ)} + e^{−2(t−τ)} ; 2e^{−(t−τ)} − 2e^{−2(t−τ)} ]₀ᵗ =
     = [ −2 + 1 + 2e^{−t} − e^{−2t} ; 2 − 2 − 2e^{−t} + 2e^{−2t} ] = [ −1 + 2e^{−t} − e^{−2t} ; −2e^{−t} + 2e^{−2t} ]

Therefore, considering t₀ = 0, the total solution is:

x(t) = x_CF(t) + x_PI(t) =

[ x1 ; x2 ] = [ 2e^{−t} − e^{−2t} ; −2e^{−t} + 2e^{−2t} ] + [ −1 + 2e^{−t} − e^{−2t} ; −2e^{−t} + 2e^{−2t} ] =
            = [ 4e^{−t} − 2e^{−2t} − 1 ; −4e^{−t} + 4e^{−2t} ]
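As a check on the algebra, the closed-form answer can be compared with the constant-input form of eq. 10.6, x(t) = e^{At} x(0) + A⁻¹(e^{At} − I) B u, valid here since A is invertible and u is a step. A hedged numpy sketch (the series exponential `mexp` is my own stand-in for Φ):

```python
import numpy as np

def mexp(M, terms=30):
    """Truncated power series for e^M."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])
u = -2.0                                   # the step input of the example

t = 1.3
Phi = mexp(A * t)
x = Phi @ x0 + np.linalg.inv(A) @ (Phi - np.eye(2)) @ B * u

x_closed = np.array([[4*np.exp(-t) - 2*np.exp(-2*t) - 1],
                     [-4*np.exp(-t) + 4*np.exp(-2*t)]])
print(np.allclose(x, x_closed))            # True
```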

10.4 The Discrete State Equation

Solving ẋ = Ax + Bu by hand following the previous procedure is obviously time consuming
even for second order systems. In practice, the solution is always done on a computer. Let us
denote the solution time step as h. Then what computer solutions do in principle is this: let
x be known at t = kh, where k is the k-th time step. Let u be known at t = kh. Then the
gradients ẋ are known at t = kh since ẋ = Ax + Bu can be computed. If x and ẋ are known,
the computer can predict x at t = (k + 1)h.
Let x(k) be x at t = kh; u(k) be u at t = kh; x(k + 1) be x at t = (k + 1)h.
Figure 10.1: Temporal evolution of x1 and x2

Figure 10.2: State space trajectory

Then the prediction process to find x(k + 1) will be of the form:

x(k + 1) = E(h) x(k) + F(h) u(k)   (10.11)

where E is n × n and called the discrete system matrix. We write E(h) to denote that the
elements of E will be dependent on h. F is n × r and is the discrete input matrix. This equation
is called the Discrete State Space Equation. The problem is: given the continuous state
space matrices A, B, how do we get E, F?
Note that if v = Cx + Du, then obviously v(k) = Cx(k) + Du(k).

10.4.1 Approximate evaluation of E, F

We can approximate ẋ as:

(x(k + 1) − x(k)) / h ≈ ẋ = Ax(k) + Bu(k)

therefore

x(k + 1) − x(k) = Ah x(k) + Bh u(k)

and

x(k + 1) = (I + Ah) x(k) + Bh u(k)

E(h) = (I + Ah);   F(h) = Bh   (10.12)
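The forward-Euler approximation of eq. 10.12 in code (an illustrative sketch, helper name mine):

```python
import numpy as np

def euler_discretise(A, B, h):
    """First order approximation: E = I + Ah, F = Bh (eq. 10.12)."""
    return np.eye(A.shape[0]) + A * h, B * h

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
E, F = euler_discretise(A, B, 0.01)
print(E)   # I + 0.01*A
```

The approximation is only first order accurate in h, which motivates the transition-matrix method that follows.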

10.4.2 More accurate evaluation of E and F

We simply use the transition matrix and the solution to the continuous time state equation:

x(t) = Φ(t−t₀) x(t₀) + ∫_{t₀}^{t} Φ(t−τ) Bu(τ) dτ   (10.13)

Let t₀ = kh and t = (k + 1)h (now you see our obsession with retaining t₀). We see that
t − t₀ = h, so we can use eq. 10.13 as an excellent predictor formula from one time step to the
next. Therefore, we have:

x((k + 1)h) = Φ(h) x(kh) + ∫_{kh}^{(k+1)h} Φ((k + 1)h − τ) Bu(τ) dτ   (10.14)

We consider that the input is kept constant from t₀ = kh to t = (k + 1)h, i.e. u = u(kh)
from t₀ to t. Therefore, the term Bu can be taken outside the integral:

x((k + 1)h) = Φ(h) x(kh) + [ ∫_{kh}^{(k+1)h} Φ((k + 1)h − τ) dτ ] B u(kh)

Now we can tidy the integral with a change of variable ϑ = (k + 1)h − τ, giving dϑ = −dτ:

x((k + 1)h) = Φ(h) x(kh) + [ ∫_{h}^{0} Φ(ϑ) (−dϑ) ] B u(kh)

x((k + 1)h) = Φ(h) x(kh) + [ ∫₀ʰ Φ(ϑ) dϑ ] B u(kh)

And the "stepped" equation will be:

x(k + 1) = Φ(h) x(k) + [ ∫₀ʰ Φ(ϑ) dϑ ] B u(k)   (10.15)

It can be seen that

E = Φ(h);   F = [ ∫₀ʰ Φ(τ) dτ ] B   (10.16)

The computation of Φ(h) is easy:

Φ(h) = e^{Ah} = I + Ah + A²h²/2! + A³h³/3! + ⋯   (10.17)

This expression is computed until the term Aⁿhⁿ/n! becomes smaller than ε, a small value.
Usually 7 to 10 terms are sufficient.
The computation of ∫₀ʰ Φ(τ) dτ is also straightforward (we can integrate the above expression
term by term):

∫₀ʰ e^{Aτ} dτ = Ih + Ah²/2! + A²h³/3! + ⋯ = A⁻¹(Φ(h) − I)   (10.18)

Therefore, we only need to calculate the power series once. To avoid taking the inverse
of A, the power series of the integral term is usually calculated:

Ψ = ∫₀ʰ e^{Aτ} dτ = Ih + Ah²/2! + A²h³/3! + ⋯ = A⁻¹(Φ(h) − I)   (10.19)

From matrix Ψ, we can obtain E and F:

Ψ = A⁻¹(E − I)   (10.20)
E = AΨ + I
F = ΨB
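Eqs. 10.17-10.20 give a simple recipe: accumulate the series for Ψ once, then E = AΨ + I and F = ΨB. A sketch (numpy, illustrative; for this A the exact Φ(h) entry 2e^{−h} − e^{−2h} is known from the earlier example and is used as a check):

```python
import numpy as np

def discretise(A, B, h, terms=10):
    """E and F via the power series for Psi = integral_0^h e^{A tau} dtau."""
    n = A.shape[0]
    Psi = np.zeros((n, n))
    term = np.eye(n) * h                  # first term of the series: I h
    for k in range(1, terms + 1):
        Psi = Psi + term
        term = term @ A * h / (k + 1)     # next term: A^k h^{k+1} / (k+1)!
    E = A @ Psi + np.eye(n)               # eq. 10.20
    F = Psi @ B
    return E, F

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
E, F = discretise(A, B, h=0.01)
print(abs(E[0, 0] - (2*np.exp(-0.01) - np.exp(-0.02))))   # tiny (roundoff level)
```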

The computer solution via (10.17), (10.18), (10.16) and (10.11) is for linear, time invariant
A and B matrices (i.e. ones with numbers in). If the system is non-linear then either A and B
contain non-linear functions or else ẋ = Ax + Bu cannot be formulated at all. The solution of
ẋ = f(x, u) for non-linear systems demands integration routines based on methods like Runge-
Kutta or Adams'. Simple versions of these are quite easy to write and implement on a small
computer, but the methods are those of numerical analysis and are thus outside of this course.
Finally, it should be remembered that sophisticated integration routines are available standalone
or integrated in software packages such as Matlab.
Chapter 11

Discrete State Space Design for Digital Control Systems

11.1 Introduction

Consider the discrete time state space equation x(k + 1) = Ex(k) + Fu(k), where E and F are
given by eq. (10.16), which itself was derived from eq. (10.13). What the discrete equation does
is to define the system response over discrete time steps given that u is held constant over each
time step, i.e. u put through a sample and hold (zero order hold).
Therefore the matrices E and F correspond to the zero order hold equivalent of the continuous
time system. If we want to obtain the input output relationship (z domain transfer function)
we just have to apply the z transform to the following equations:

x(k + 1) = Ex(k) + Fu(k)   (11.1)
v(k) = Cx(k)

Taking the z transform we have

z X(z) = E X(z) + F U(z)

and solving for X(z):

(zI − E) X(z) = F U(z)
X(z) = (zI − E)⁻¹ F U(z)

then substituting X(z):

V(z) = C X(z) = C (zI − E)⁻¹ F U(z)

V(z) = C [ adj(zI − E) / |zI − E| ] F U(z)   (11.2)

Therefore, the eigenvalues of E are the Z plane poles of the discrete time transfer function
V(z)/U(z) obtained from the continuous system with a zero order hold.
The fact about the eigenvalues of E being Z plane poles is very elegant. The designer can
choose the closed loop Z plane poles, which will then be the closed loop eigenvalues of the matrix
(E − FK) where u(k) = −Kx(k). All the state variable and observer theory developed for
continuous systems can thus be applied to digital systems with A replaced by E and B replaced
by F (the matrices C, D are, of course, the same for continuous and discrete time). The
concepts of control, observer and modal canonical forms, transformation, control and observer
design, controllability and observability and all the procedures contained therein are identical.
There is no point in repeating them all. Although the procedures are identical, there are one
or two pitfalls lurking around. These will now be treated.


11.2 Discrete State Space Equations from Transfer Functions

There are two ways of obtaining discrete state space equations from transfer functions:

1. Obtain ẋ = Ax + Bu first, and then convert it to the discrete time equivalent (x(k + 1) =
Ex(k) + Fu(k)) using the modal matrix method. If this is done, x(k) will just be the
sampled version of x(t), i.e. the physicality of the discrete states is that of the continuous
states.

2. Obtain G(z) first, i.e. G(z) = (1 − z⁻¹) Z{G(s)/s}. Then G(z) can be put into control,
observer or modal canonical form via the procedures of section 3, methods 1, 2 and 4.
However, finding G(z) first is not recommended. First of all, it would only be useful for
single input single output systems, and, more important, the physicality of the states is
lost.

Thus obtaining x(k + 1) = Ex(k) + Fu(k) from ẋ = Ax + Bu is universally used. (In this
course, however, your starting point for digital control design will be the discrete state equation,
as the e^{Ah} terms must be done by computer.)

11.3 Selection of Sample Interval and Closed Loop Eigenvalues

In going from ẋ = Ax + Bu to x(k + 1) = Ex(k) + Fu(k) it must be remembered that E
and F are functions of the sampling interval T. The sampling rate ωs = 2π/T will be derived
from the choice of the dominant closed loop natural frequency ω₀ such that 5ω₀ < ωs < 10ω₀.
On the basis of observer decay rates being approximately 10 times faster than the closed loop
system decay rates, we can also define a sampling period for an observer model, To, such that
50ω₀ < 2π/To < 100ω₀. Unlike the continuous system, we can make the observer eigenvalues
the same as the closed loop control eigenvalues, as the observer dynamics will be 10 times faster
on account of To = T/10.
The procedure can be summarized as follows (starting with ẋ = Ax + Bu):

1. Choose the closed loop bandwidth and ω₀. Hence choose ωs (giving T).

2. Formulate x(k + 1) = E(T) x(k) + F(T) u(k).

3. For an n-th order system, n > 2, locate the closed loop pole pair on the ωs/ω₀ contour with
a good damping factor. Fix the remaining poles closer to the origin of the Z plane.

4. Formulate the closed loop characteristic polynomial pc(z) from the desired pole positions.

5. Use pc(z) to determine u(k) = −Kx(k) using either control or modal canonical forms.

6. If the use of an observer is necessary, fix To = T/10 and calculate x(k + 1) = E(To) x(k) +
F(To) u(k). Transform this to observer or modal canonical form. Carry out the observer
design with po(z) = pc(z) as calculated in (4) above.

Note that with the above observer design the microprocessor will be simulating the plant
with a time step To. Every 10 time steps it will calculate u(k) = −Kx(k).
Alternatively, you can run the controller and the observer with the same sampling period
(T). The observer eigenvalues now need to be 10 times faster than the control eigenvalues. You
would then choose ωs ≈ 40ω₀ (i.e. the plant dominant poles lying very near to the z = 1 point).
The dominant observer poles are then placed on the ωs/ω₀ ≈ 4 contour. This method has
the advantage that the same E and F matrices are used for both control and observer design.
Needless to say, your microprocessor/controller sample rate is always determined by the observer,
and for fast servo systems you may need very fast micros.

11.4 Integral Control and Reduced Order Observers

In our previous control/observer designs there were two instances where 1/s appeared in the
controller/observer. The first instance was when we had integral control (or an increase in system
type) and the second was the case of the reduced order observer. For digitally controlled systems,
the implementation of the 1/s term needs some remarks.

11.4.1 Integral Control

Consider the system in fig. 11.1.

Figure 11.1: Integral control for discrete time state space

Because of the z transform final value theorem {lim_{k→∞} f(k) = lim_{z→1} (z − 1) F(z)},
it can be shown that the term 1/(z − 1) is sufficient to increase the system type. Note that 1/(z − 1) is
not the z transform of 1/s: it is chosen as the simplest function of z which will do the trick.
The feedback law will be:

u(k) = −Kx(k) − kI xI(k)

XI(z) = [1/(z − 1)] [V(z) − R(z)]

and therefore

xI(k + 1) = xI(k) + v(k) − r(k) =
          = xI(k) + Cx(k) − r(k)

which is the discrete equivalent of ẋI = v − r. The augmented state equation is therefore

[ x(k + 1) ; xI(k + 1) ] = [ E  0 ; C  1 ] [ x(k) ; xI(k) ] + [ F ; 0 ] u(k) + [ 0 ; −1 ] r(k)   (11.3)

Note that for a multi input system, the 1 will become I in the system matrix. Control design
now proceeds as for the continuous case.
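Building the augmented matrices of eq. 11.3 is mechanical; the single-input, single-output sketch below is illustrative (the helper name and demo values are mine):

```python
import numpy as np

def augment_integral(E, F, C):
    """Augmented discrete system of eq. 11.3 (SISO case)."""
    n = E.shape[0]
    Ea = np.block([[E, np.zeros((n, 1))],
                   [C, np.ones((1, 1))]])          # integrator state row
    Fa = np.vstack([F, np.zeros((1, 1))])
    Ga = np.vstack([np.zeros((n, 1)), -np.ones((1, 1))])   # multiplies r(k)
    return Ea, Fa, Ga

E = np.array([[0.98, 0.01], [0.0, 0.97]])
F = np.array([[0.01], [0.01]])
C = np.array([[1.0, 0.0]])
Ea, Fa, Ga = augment_integral(E, F, C)
print(Ea)
```

Control design then places the eigenvalues of (Ea − Fa K) exactly as in the continuous case.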

11.4.2 Reduced Order Observer

Assume that the xr states are inaccessible. Proceeding as in section 9.3 we define:

[ xm(k + 1) ; xr(k + 1) ] = [ E11  E12 ; E21  E22 ] [ xm(k) ; xr(k) ] + [ F1 ; F2 ] u(k)   (11.4)

The reduced order discrete observer derivation proceeds analogously with eqs. (9.16) to (9.23),
with ẋ terms replaced by x(k + 1) terms. Eq. (9.23) becomes:

x̂r(k + 1) = (E22 − LE12) x̂r(k) + (E21 − LE11) xm(k) + (F2 − LF1) u(k) + L xm(k + 1)   (11.5)

To get rid of xm(k + 1) we define a new state xc such that:

xc(k + 1) = x̂r(k + 1) − L xm(k + 1)

xc(k) = x̂r(k) − L xm(k)   (11.6)

and

xc(k + 1) = (E22 − LE12) x̂r(k) + (E21 − LE11) xm(k) + (F2 − LF1) u(k)   (11.7)

Eqs. (11.6) and (11.7) in diagrammatic form:

Figure 11.2: Discrete reduced order observer

Once again, z⁻¹ is certainly not the z transform of 1/s. It is just the simplest way of
implementing the reduced order observer equation above. The observer in fig. 11.2 can easily be
implemented on a microprocessor.
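One update of the observer of fig. 11.2 (eqs. 11.6 and 11.7) is only a few lines of code. The sketch below is mine and assumes one measured and one estimated state (scalar partitions); the E, F values are a forward-Euler discretisation of the running example with h = 0.01, and L = 47 places the observer eigenvalue E22 − L E12 at 0.5:

```python
import numpy as np

def reduced_observer_step(xc, xm, u, L, E, F):
    """Eqs. 11.6/11.7 with the partitions of eq. 11.4 (all scalar here)."""
    (E11, E12), (E21, E22) = E
    F1, F2 = F[0, 0], F[1, 0]
    xr_hat = xc + L * xm                          # eq. 11.6: recover the estimate
    xc_next = ((E22 - L * E12) * xr_hat
               + (E21 - L * E11) * xm
               + (F2 - L * F1) * u)               # eq. 11.7: advance the state
    return xc_next, xr_hat

E = np.array([[0.98, 0.01], [0.00, 0.97]])
F = np.array([[0.01], [0.01]])
L = 47.0                                          # 0.97 - 47*0.01 = 0.5

x, xc = np.array([1.0, 1.0]), 0.0                 # plant state and observer state
for _ in range(60):                               # unforced plant, u = 0
    xc, xr_hat = reduced_observer_step(xc, x[0], 0.0, L, E, F)
    err = xr_hat - x[1]
    x = E @ x
print(abs(err))                                   # estimation error has decayed
```

Each step multiplies the estimation error by the observer eigenvalue (0.5 here), so the estimate locks on to the inaccessible state within a few tens of samples.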
