State Space 2
1 Introductory Concepts
1.1 The Concept of State
1.2 Definitions
1.3 State Space Equations for Physical Systems
1.3.1 Example 1
1.3.2 Example 2
1.3.3 Example 3
1.3.4 Example 4
1.3.5 Example 5
Chapter 1

Introductory Concepts
By way of introduction, consider the system in fig. 1.1 and let us assume that, for say a step input, u(s) = 1/s, you wanted to determine the output time response x(t). Hopefully you remember how to do this (you must obtain the Inverse Laplace Transform of X(s)), i.e.

x(t) = L^{-1} [ (1/s) (s + 2)/(s^3 + 2s^2 + 2s + 1) ]   (1.1)
the unique solution of x(t) being, of course, determined by the initial conditions for x, dx/dt and d^2x/dt^2 at t = 0. To solve (1.1) you must first factorize the cubic polynomial and then apply partial fraction techniques to obtain:
x(t) = L^{-1} [ a/s + b/(s + 1) + (cs + d)/(s^2 + s + 1) ]   (1.2)
and having solved for a, b, c and d you then look up the Inverse Laplace Transform of each term and, well done, you've solved it. I shall not do it because it is boring.
The point is that the process is an analytic one and totally unsuitable for solution on a computer. As it happens, all simulation (i.e. finding time responses) and nearly all control design is nowadays done with the aid of a computer. Firstly, computers don't like solving polynomials (the cubic of (1.1) had to be factorized) and secondly, the computer does not like working with mathematical operators - how indeed would "s" be represented on a computer?
Instead, we revert to the original differential equations defining the physical system. As far as the computer is concerned, we abandon the Laplace operator "s". We work in the time domain and not in the frequency domain (or s domain).
The third order system in fig. 1.1 is defined by the third order differential equation:

\dddot{x} + 2\ddot{x} + 2\dot{x} + x = \dot{u} + 2u   (1.3)

u is the forcing input and x is the solution variable. As it happens, computers don't like (1.3) either - they don't like higher order differentials. Instead, (1.3) can be reduced to a set of three first-order differential equations, (1.4).
Don't worry how this is done; we'll look at the methods later. We notice a few salient features of (1.4). The derivatives of u have disappeared. This will be true for all physical systems. Second, and most important, whereas we originally had one n-th order equation with one solution variable, we now have n first order equations with n solution variables. Before, the solution variable was the output x, but now we have x1, x2, x3, and they can't all be outputs of the system! They are not outputs. For an n-th order system, they are a set of n variables, which may or may not be physically real, which define the state of the system at any given point in time. They are termed STATE VARIABLES.
Given a physical system, then at any point in time, knowledge of the State Variables (at that point in time) together with knowledge of the forcing function u is sufficient to uniquely determine the system behaviour for all future time.
This statement can be seen from (1.4). Given u and x1, x2, x3 at time t, then (1.4) calculates the slopes \dot{x}_1, \dot{x}_2, \dot{x}_3 at time t. Hence x1, x2, x3 can be determined at t + Δt. This becomes a starting point and the process of calculating the slopes from the states continues. In principle, this is exactly what the computer does with (1.4). It calculates \dot{x}_1, \dot{x}_2, ..., \dot{x}_n at time t and predicts x1, x2, ..., xn at t + Δt. Using numerical analysis, the error magnitude of this prediction can also be obtained from (1.4). The prediction can then be modified or corrected to keep numerical errors within given bounds.
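The slope-then-predict loop described above is, in its simplest form, forward-Euler integration. A minimal sketch (the system, step size and input below are illustrative choices, not taken from the notes):

```python
import numpy as np

# Forward-Euler integration of x_dot = A x + B u: at each step the slopes
# are computed from the current states, exactly as described above.
def euler_sim(A, B, x0, u, dt, n_steps):
    x = np.array(x0, dtype=float)
    history = [x.copy()]
    for _ in range(n_steps):
        x = x + dt * (A @ x + B * u)   # predict x at t + dt from slopes at t
        history.append(x.copy())
    return np.array(history)

# First-order example x_dot = -x + u with constant u = 1:
# x(t) approaches 1 as t grows.
A = np.array([[-1.0]])
B = np.array([1.0])
traj = euler_sim(A, B, x0=[0.0], u=1.0, dt=0.01, n_steps=1000)
```

A practical solver would also estimate and correct the prediction error (the correction step mentioned above), which is what variable-step routines do.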
Note the following:

The n states of an n-th order system are mutually independent. E.g. given a 3rd order system, if at time t, x1 and x2 are known, then x3 cannot be derived from x1 and x2. If it could, x3 would be "dependent" on x1 and x2, and the system would be only 2nd order.

The choice of state variables for a given system is not unique (see section 1.3). This means that a choice of state variables can be made purely for mathematical convenience. It is this fact which makes the state variable approach so powerful in control theory.
1.2 Definitions
Before considering the choice of state variables for real physical systems, we will first define all our terms:

State Vector: The state vector x is the set of states (x1, x2, ..., xn)^T. It is an n × 1 column matrix.

State Space: (x1, x2, ..., xn) can be said to form the axes of an n-dimensional cartesian space. At any point in time, the numeric values of x1, x2, ..., xn thus form a vector in the space.

Input Vector: If a system has m inputs, u1, u2, ..., um, then these form the vector u = (u1, u2, ..., um)^T, an m × 1 matrix.

Output Vector: If a system has r outputs, v1, v2, ..., vr, then these form the r × 1 column vector v = (v1, v2, ..., vr)^T. Note that v is the set of measurements or variables of interest of the system.

State Space Eq.: Considering equation (1.4), it can be seen that the set of n first order equations can be written

\dot{x} = Ax + Bu   (1.5)
A is the n × n system matrix. B is the input matrix and is n × m. Since x defines the state of the system, it follows that the outputs or system measurements must be a function of x (some outputs may in fact be states). In general we have

v = Cx + Du   (1.6)

LTI System: If all the plant components are linear and do not change with time, then the system is said to be linear, time-invariant and the matrices A, B, C, D consist only of numbers.
1.3 State Space Equations for Physical Systems

The state variables are associated with the energy storage elements of the system: the current i of an inductance L, the voltage v of a capacitance C, the velocity v of a mass M, the position/extension x of a spring K.
Those for other systems, e.g. hydraulic, magnetic circuit, thermal, chemical, economic, etc., can be found from "system modelling" text books. Note also that since ψ = Li and q = Cv, flux and charge can also be used as state variables, since for linear C and L they are proportional to i and v respectively.
1.3.1 Example 1

Figure 1.2. No inductive or capacitive elements. The circuit is not dynamic. No state variables. Order of the system = zero. u = V, the input. Note i = u/R, i.e., the current (and everything else) is determined by the input alone.
1.3.2 Example 2

Figure 1.3. Two energy storage elements. Order of the system = 2.

x = (x1, x2)^T = (i_L, v_C)^T

u = (u1, u2)^T = (E, I)^T

Let the "measurement" variables or variables of interest be v_R and i_C, i.e.

v = (v1, v2)^T = (v_R, i_C)^T

We want expressions for \dot{x}, i.e. for d i_L/dt and d v_C/dt. Applying Kirchhoff's current law to node A yields:

i_L = i_C + i_R - I = C dv_C/dt + v_C/R2 - I   (1.7)

Applying Kirchhoff's voltage law to mesh B yields:

E = v_R + v_L + v_C = i_L R1 + L di_L/dt + v_C   (1.8)

hence

di_L/dt = -(R1/L) i_L - (1/L) v_C + (1/L) E   (1.9)

dv_C/dt = (1/C) i_L - (1/(C R2)) v_C + (1/C) I   (1.10)

i.e.

\dot{x} = (\dot{x}_1, \dot{x}_2)^T = [ -R1/L, -1/L ; 1/C, -1/(C R2) ] (x1, x2)^T + [ 1/L, 0 ; 0, 1/C ] (u1, u2)^T

since (x1, x2)^T = (i_L, v_C)^T and (u1, u2)^T = (E, I)^T. Hence

A = [ -R1/L, -1/L ; 1/C, -1/(C R2) ],   B = [ 1/L, 0 ; 0, 1/C ]

For the outputs, v_R = R1 i_L and i_C = i_L - v_C/R2 + I, i.e.

v = (v1, v2)^T = (v_R, i_C)^T = [ R1, 0 ; 1, -1/R2 ] (x1, x2)^T + [ 0, 0 ; 0, 1 ] (u1, u2)^T
hence

C = [ R1, 0 ; 1, -1/R2 ],   D = [ 0, 0 ; 0, 1 ]
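As a quick numeric check of Example 2, the four matrices can be assembled for a concrete set of component values (the values below are illustrative; the notes leave R1, R2, L, C symbolic):

```python
import numpy as np

# Example 2 matrices with illustrative component values.
R1, R2, L, C_ = 1.0, 2.0, 0.5, 0.1

A = np.array([[-R1 / L, -1 / L],
              [1 / C_, -1 / (C_ * R2)]])
B = np.array([[1 / L, 0],
              [0, 1 / C_]])
Cmat = np.array([[R1, 0],
                 [1, -1 / R2]])
D = np.array([[0, 0],
              [0, 1]])

# For positive R, L, C an R-L-C circuit is stable: both eigenvalues
# of A have negative real parts.
eigs = np.linalg.eigvals(A)
```

Checking the eigenvalues of A this way is a useful sanity test after any hand derivation of the state matrices.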
1.3.3 Example 3

Figure 1.4. Take care here. There are 3 inductors and it is tempting to choose the states as x = (i1, i2, i3)^T. But i3 = i1 - i2, so that i3 is not independent (see section 1.1). The system is only second order.

x = (i1, i2)^T,   u = (E),   v = (v_{R2})

Applying the Kirchhoff Voltage Law to mesh A:

E = R1 i1 + L1 di1/dt + L2 di2/dt

Applying the Kirchhoff Voltage Law to mesh B:

L2 di2/dt = L3 di3/dt + R2 i3 = L3 di1/dt - L3 di2/dt + R2 i1 - R2 i2

hence

di2/dt = (L3/(L2 + L3)) di1/dt + (R2/(L2 + L3)) i1 - (R2/(L2 + L3)) i2

Substituting into the KVL mesh A equation yields:

E = R1 i1 + [ L1 + L2 L3/(L2 + L3) ] di1/dt + (L2 R2/(L2 + L3)) i1 - (L2 R2/(L2 + L3)) i2

E = R1 i1 + [ (L1 L2 + L1 L3 + L2 L3)/(L2 + L3) ] di1/dt + (L2 R2/(L2 + L3)) i1 - (L2 R2/(L2 + L3)) i2

[ (L1 L2 + L1 L3 + L2 L3)/(L2 + L3) ] di1/dt = E - R1 i1 - (L2 R2/(L2 + L3)) i1 + (L2 R2/(L2 + L3)) i2

di1/dt = -[ (L2 R1 + L2 R2 + L3 R1)/L0 ] i1 + (L2 R2/L0) i2 + [ (L2 + L3)/L0 ] E

where L0 = L1 L2 + L1 L3 + L2 L3. Similarly, eliminating di1/dt yields:

di2/dt = (L3/(L2 + L3)) { -[ (L2 R1 + L2 R2 + L3 R1)/L0 ] i1 + (L2 R2/L0) i2 + [ (L2 + L3)/L0 ] E } + (R2/(L2 + L3)) i1 - (R2/(L2 + L3)) i2

di2/dt = [ (L1 R2 - L3 R1)/L0 ] i1 - (L1 R2/L0) i2 + (L3/L0) E

and v_{R2} = R2 i1 - R2 i2.
Hence

A = [ -(L2 R1 + L2 R2 + L3 R1)/L0, L2 R2/L0 ; (L1 R2 - L3 R1)/L0, -L1 R2/L0 ],   B = [ (L2 + L3)/L0 ; L3/L0 ]

C = ( R2, -R2 )

Note that the existence of dependent variables associated with energy storage elements often leads to messy mathematics as the dependent variable is eliminated.
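The messy elimination can be checked by letting the computer do it: write the two mesh equations as a linear system in the derivatives and solve. A sketch with illustrative component values (the matrix form of the mesh equations is my own arrangement of the two KVL equations above):

```python
import numpy as np

# Check the Example 3 elimination numerically with illustrative values.
R1, R2, L1, L2, L3 = 2.0, 3.0, 1.0, 2.0, 4.0
L0 = L1 * L2 + L1 * L3 + L2 * L3

# Closed-form A and B from the worked elimination above.
A = np.array([[-(L2 * R1 + L2 * R2 + L3 * R1) / L0, L2 * R2 / L0],
              [(L1 * R2 - L3 * R1) / L0, -L1 * R2 / L0]])
B = np.array([[(L2 + L3) / L0], [L3 / L0]])

# The raw mesh equations rearranged as M @ [di1/dt, di2/dt]^T = N @ [i1, i2]^T + P*E:
#   mesh A:  L1 di1 + L2 di2 = E - R1 i1
#   mesh B: -L3 di1 + (L2+L3) di2 = R2 (i1 - i2)
M = np.array([[L1, L2], [-L3, L2 + L3]])
N = np.array([[-R1, 0.0], [R2, -R2]])
P = np.array([[1.0], [0.0]])

A_check = np.linalg.solve(M, N)   # M^-1 N
B_check = np.linalg.solve(M, P)   # M^-1 P
```

Agreement between the closed-form matrices and the solved ones confirms the algebra; note det(M) = L1(L2 + L3) + L2 L3 = L0, which is why L0 appears in every denominator.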
1.3.4 Example 4

Now for a mechanical example (fig. 1.5), which electronic engineering students always find positively delightful. Follow the procedure:

1. Assign positions and velocities to the masses.
2. Assign positions and velocities of the ends of other elements if different from those of the masses.

We have now assigned y2, v2 (v2 = \dot{y}_2) to the mass and y1, v1 (v1 = \dot{y}_1) to the spring-damper connection. u = (F), the applied force as shown.

The state variables are v2 of the mass and (y2 - y1) = x', the spring elongation. Hence x = (v2, x')^T.

Note the element characteristics are: F_M = M dv2/dt, F_k = k x', F_B = B v1.

We need the equations for \dot{v}_2 and \dot{x}'. Newton's law on the mass yields:

M \dot{v}_2 = F - F_k = F - k x'

which is of the required form since only states or inputs appear on the right-hand side.

\dot{x}' = \dot{y}_2 - \dot{y}_1 = v2 - v1 = v2 - F_B/B = v2 - F_k/B = v2 - (k/B) x'

since, obviously, the spring force F_k must equal the damper force F_B. Hence:

\dot{x} = (\dot{v}_2, \dot{x}')^T = [ 0, -k/M ; 1, -k/B ] (v2, x')^T + ( 1/M, 0 )^T (F)
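A short simulation confirms the physics of Example 4: with a constant force, the steady state has \dot{v}_2 = 0, so k x' = F, and \dot{x}' = 0, so v2 = (k/B) x' = F/B. A sketch with illustrative values of M, k, B and F:

```python
import numpy as np

# Simulate the mass-spring-damper of Example 4 with illustrative values.
M_, k, B_, F = 1.0, 4.0, 2.0, 6.0

A = np.array([[0.0, -k / M_],
              [1.0, -k / B_]])
Bvec = np.array([1.0 / M_, 0.0])

x = np.zeros(2)               # x = (v2, x')
dt = 0.001
for _ in range(20000):        # forward Euler out to t = 20 s
    x = x + dt * (A @ x + Bvec * F)

# Expect x -> (F/B_, F/k) = (3.0, 1.5) after the transients die away.
```

The mass ends up moving at the constant velocity F/B while the spring carries the whole force, which matches the steady-state argument above.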
1.3.5 Example 5

Now for a mixed electromechanical system, a DC motor drive (fig. 1.6). Ra, La are the resistance and inductance of the armature. E is the back emf (electromotive force), E = k i_f ω, where ω is the shaft speed. The motor torque T = k i_f i_a is balanced by an inertial component J \dot{ω}, a friction component B ω and an applied load torque T_L, which may change in time but is assumed to be independent of ω. What are the state variables? The energy storage components are:

La → state variable i_a.
J → state variable ω.
L_f → state variable i_f, BUT if i_f is kept constant, then we can remove the dynamics associated with the motor field winding. i_f will just become a number.

x = (i_a, ω)^T,   u = (V_a, T_L)^T

There are two equations:

1. Electrical: V_a = Ra i_a + La d i_a/dt + k i_f ω
2. Mechanical: T = k i_f i_a = J \dot{ω} + B ω + T_L

Hence:

\dot{x} = (\dot{i}_a, \dot{ω})^T = [ -Ra/La, -k i_f/La ; k i_f/J, -B/J ] (i_a, ω)^T + [ 1/La, 0 ; 0, -1/J ] (V_a, T_L)^T

\dot{x} = Ax + Bu

Note the linearity of the system, given that i_f is just a number. If V_f were varied, we would have V_f = R_f i_f + L_f d i_f/dt, and i_f would be a state. The state equations would then be non-linear on account of the terms k i_f ω and k i_f i_a, i.e. two state variables multiplied together. Non-linearities are treated in the next chapter.
Chapter 2

Non-linear Systems and Linearization

For a general non-linear system the state equations take the form

\dot{x} = f(x, u)   (2.1)

which is the general state space equation form.
2.2 Linearization

The solution to (2.1) is obtained from general purpose differential equation solver routines, and there is no problem in computing a transition of a non-linear system. Deriving control laws for non-linear systems is, however, difficult. The technique pursued here is to assume that the system is going to be controlled about a steady state operating point such that any travel away from the operating point will be small. We may therefore consider only small changes (δx) about the steady state or equilibrium point x0. Considering the i-th equation:

d/dt (x0 + δx)_i = (\dot{x}_0 + δ\dot{x})_i = f_i(x0 + δx, u0 + δu)
= f_i(x0, u0) + δx1 ∂f_i/∂x1 |_{x0,u0} + δx2 ∂f_i/∂x2 |_{x0,u0} + ... + δu1 ∂f_i/∂u1 |_{x0,u0} + ...
2.2.1 Example

\dot{x}_1 = -x1 + x2^2 + u1 u2 = f1(x, u)
\dot{x}_2 = -2 x2 + x1 x2 + u2 = f2(x, u)

For the operating point defined by u10 = u20 = 0, the equilibrium condition f(x0, u0) = 0 gives:

0 = -x10 + x20^2
0 = -2 x20 + x10 x20

One solution is x10 = x20 = 0, for which the linearised system is

\dot{x} = [ -1, 0 ; 0, -2 ] x + [ 0, 0 ; 0, 1 ] u

An alternative solution is x10 = 2, x20 = ±√2; in the case x20 = √2 the linearised system would be:

\dot{x} = [ -1, 2√2 ; √2, 0 ] x + [ 0, 0 ; 0, 1 ] u

Note the non-linear nature of the original system leads to three different equilibrium points for the inputs u10 = u20 = 0.

Now, for the operating point defined by u10 = 0, u20 = 1:

0 = -x10 + x20^2
0 = -2 x20 + x10 x20 + 1

then x10 = x20^2 → x20^3 - 2 x20 + 1 = 0 → x10 = 1, x20 = 1, and

\dot{x} = [ -1, 2 ; 1, -1 ] x + [ 1, 0 ; 0, 1 ] u

Again, the remaining two operating point solutions would be: x20 = -1/2 ± (1/2)√5, x10 = ( -1/2 ± (1/2)√5 )^2.
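The partial derivatives of the worked example can be checked by finite differences, which is also how one would linearize a model too messy to differentiate by hand. A minimal sketch (the central-difference Jacobian routine is my own helper, not from the notes):

```python
import numpy as np

# The example system: f1 = -x1 + x2^2 + u1*u2,  f2 = -2*x2 + x1*x2 + u2.
def f(x, u):
    return np.array([-x[0] + x[1]**2 + u[0] * u[1],
                     -2 * x[1] + x[0] * x[1] + u[1]])

def jacobian_x(fun, x0, u0, eps=1e-6):
    # Central-difference estimate of df/dx at the operating point (x0, u0).
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        J[:, j] = (fun(x0 + dx, u0) - fun(x0 - dx, u0)) / (2 * eps)
    return J

u0 = np.array([0.0, 0.0])
A0 = jacobian_x(f, np.array([0.0, 0.0]), u0)        # expect [[-1, 0], [0, -2]]
A1 = jacobian_x(f, np.array([2.0, np.sqrt(2.0)]), u0)  # expect [[-1, 2*sqrt(2)], [sqrt(2), 0]]
```

Both numerical Jacobians reproduce the A matrices derived analytically above.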
Chapter 3

State Space Equations from Transfer Functions

Consider

G(s) = V(s)/U(s) = 1/(s^3 + 2s^2 + 2s + 1)

Regard s as d/dt:

\dddot{v} + 2\ddot{v} + 2\dot{v} + v = u

Choose

x = (\ddot{v}, \dot{v}, v)^T

then

\dot{x}_1 = \dddot{v} = -2\ddot{v} - 2\dot{v} - v + u
\dot{x}_2 = x1
\dot{x}_3 = x2

i.e.

(\dot{x}_1, \dot{x}_2, \dot{x}_3)^T = [ -2, -2, -1 ; 1, 0, 0 ; 0, 1, 0 ] (x1, x2, x3)^T + ( 1, 0, 0 )^T u

v = ( 0, 0, 1 ) (x1, x2, x3)^T

Now consider a G(s) with zeros:

G(s) = V(s)/U(s) = (b1 s^2 + b2 s + b3)/(s^3 + 2s^2 + 2s + 1)
3.1 Method 1

Expressing G(s) as a product of two blocks, as shown in fig. 3.1, we have

\dddot{e} + a1 \ddot{e} + a2 \dot{e} + a3 e = u

Let

x = (\ddot{e}, \dot{e}, e)^T

then from the above section we have:

(\dot{x}_1, \dot{x}_2, \dot{x}_3)^T = [ -a1, -a2, -a3 ; 1, 0, 0 ; 0, 1, 0 ] (x1, x2, x3)^T + ( 1, 0, 0 )^T u   (3.1)

We also have:

v = b1 \ddot{e} + b2 \dot{e} + b3 e = b1 x1 + b2 x2 + b3 x3

therefore:

v = ( b1, b2, b3 ) x

Note the structure. The negative coefficients of the denominator polynomial are on the first row of A, B = ( 1, 0, 0 )^T, C = ( b1, b2, b3 ).

You can therefore write down the matrices A, B, C directly from the transfer function. Equation (3.1) is called the CONTROL CANONICAL FORM (CCF).
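Because the CCF is written down by inspection, building it programmatically is a one-liner per matrix. A sketch (the `ccf` helper is mine, not from the notes), using the Chapter 1 system G(s) = (s + 2)/(s^3 + 2s^2 + 2s + 1):

```python
import numpy as np

# Control canonical form from the coefficients of
# G(s) = (b1 s^{n-1} + ... + bn)/(s^n + a1 s^{n-1} + ... + an).
def ccf(a, b):
    n = len(a)
    A = np.zeros((n, n))
    A[0, :] = -np.asarray(a, dtype=float)   # -a1 ... -an across the first row
    A[1:, :-1] = np.eye(n - 1)              # shifted identity below
    B = np.zeros((n, 1)); B[0, 0] = 1.0
    C = np.asarray(b, dtype=float).reshape(1, n)
    return A, B, C

A, B, C = ccf([2, 2, 1], [0, 1, 2])

# The eigenvalues of A must be the roots of the denominator polynomial.
key = lambda z: (round(z.real, 6), z.imag)
poles = sorted(np.linalg.eigvals(A).astype(complex), key=key)
roots = sorted(np.roots([1.0, 2.0, 2.0, 1.0]).astype(complex), key=key)
```

The eigenvalue/root agreement previews the result proved later in this chapter: the poles of G(s) are the eigenvalues of A.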
3.2 Method 2

From

G(s) = V(s)/U(s) = (b1 s^2 + b2 s + b3)/(s^3 + a1 s^2 + a2 s + a3)

we have

\dddot{v} + a1 \ddot{v} + a2 \dot{v} + a3 v = b1 \ddot{u} + b2 \dot{u} + b3 u

First of all, we separate the variables with and without derivatives, and we define the state x3 as follows:

\dot{x}_3 = \dddot{v} + a1 \ddot{v} + a2 \dot{v} - b1 \ddot{u} - b2 \dot{u} = -a3 v + b3 u

therefore:

x3 = \ddot{v} + a1 \dot{v} + a2 v - b1 \dot{u} - b2 u

Repeating the same process again:

\dot{x}_2 = \ddot{v} + a1 \dot{v} - b1 \dot{u} = x3 - a2 v + b2 u
x2 = \dot{v} + a1 v - b1 u
\dot{x}_1 = \dot{v} = x2 - a1 v + b1 u
x1 = v

Therefore:

(\dot{x}_1, \dot{x}_2, \dot{x}_3)^T = [ -a1, 1, 0 ; -a2, 0, 1 ; -a3, 0, 0 ] (x1, x2, x3)^T + ( b1, b2, b3 )^T u

v = ( 1, 0, 0 ) (x1, x2, x3)^T

-a1, -a2, -a3 appear down the first column instead of along the first row, b1, b2, b3 appear in B, and now C = ( 1, 0, 0 ).

This form of the state space equation is called the OBSERVER CANONICAL FORM (OCF).
The matrices A, B, C are different for methods 1 and 2 because the state variables chosen to represent the system are different.

There are many ways in which x can be chosen for a system, each way resulting in different A, B, C. The two forms above, the CCF and OCF, are very useful since the state space equations can be written down by inspection of G(s).

It is seen, however, that the x vectors may not be physically measurable quantities. For the OCF, for example, x1 = v, the output, but x2 and x3 are "artificial". If the system were electrical, x2 might be a function of currents and voltages within the circuit.
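That the CCF and OCF describe the same input-output behaviour can be checked numerically by evaluating G(s) = C (sI - A)^{-1} B at a few test values of s for both forms. A sketch with illustrative numerator coefficients:

```python
import numpy as np

# Same G(s), two canonical forms. Denominator s^3 + 2s^2 + 2s + 1;
# numerator coefficients b1, b2, b3 are illustrative.
a1, a2, a3 = 2.0, 2.0, 1.0
b1, b2, b3 = 1.0, 3.0, 5.0

A_ccf = np.array([[-a1, -a2, -a3], [1, 0, 0], [0, 1, 0]])
B_ccf = np.array([[1.0], [0.0], [0.0]])
C_ccf = np.array([[b1, b2, b3]])

A_ocf = np.array([[-a1, 1, 0], [-a2, 0, 1], [-a3, 0, 0]])
B_ocf = np.array([[b1], [b2], [b3]])
C_ocf = np.array([[1.0, 0.0, 0.0]])

def G(A, B, C, s):
    # Evaluate C (sI - A)^-1 B at a single complex frequency s.
    return (C @ np.linalg.inv(s * np.eye(3) - A) @ B)[0, 0]

samples = [1.0, 2.0 + 1.0j, 0.3j]
g_ccf = [G(A_ccf, B_ccf, C_ccf, s) for s in samples]
g_ocf = [G(A_ocf, B_ocf, C_ocf, s) for s in samples]
```

Identical values at every sample point illustrate the equivalence proved formally in section 3.6 below.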
It may be desired to select x which have a physical reality, in which case:

3.3 Method 3

Often physical control systems are made up of distinct subsystems, e.g. amplifier, motor, mechanical dynamics (fig. 3.2). θ, v and T (or current) are all measurable. They can be selected as states.

There will be n state variables associated with an n-th order block. In fig. 3.2, 3 states are associated with the 3 first order blocks. We can therefore assign a state to the output of each first order block.

Let x1 = θ, x2 = v, x3 = T, u = V_m, therefore

X1(s) = X2(s)/s,   X2(s) = 2 X3(s)/(s + 1),   X3(s) = U(s)/(s + 10)

i.e.:

\dot{x}_1 = x2,   \dot{x}_2 = -x2 + 2 x3,   \dot{x}_3 = -10 x3 + u

and

(\dot{x}_1, \dot{x}_2, \dot{x}_3)^T = [ 0, 1, 0 ; 0, -1, 2 ; 0, 0, -10 ] (x1, x2, x3)^T + ( 0, 0, 1 )^T u

and the output v = ( 1, 0, 0 ) x.

Note that A, B are not in CCF or OCF. You could put the system into CCF or OCF by combining the blocks to yield G(s) = 2/(s(s + 1)(s + 10)) and applying methods 1 or 2 above.
Figure 3.3: System with a 2nd order block with complex roots

What happens if a block is second order with complex roots (fig. 3.3)? There will be 2 states associated with the second order block; one can be θ, but what of the other?

Let x1 = θ, x3 = T_m, \dot{x}_3 = -10 x3 + u as before.

Now apply Method 2 to the second order block (not Method 1, since this will not yield x1 = θ). I.e.:

\ddot{x}_1 + 2\dot{x}_1 + 2 x1 = \dot{x}_3 + x3

Let

\dot{x}_2 = \ddot{x}_1 + 2\dot{x}_1 - \dot{x}_3 = -2 x1 + x3
x2 = \dot{x}_1 + 2 x1 - x3
\dot{x}_1 = -2 x1 + x2 + x3

giving

(\dot{x}_1, \dot{x}_2, \dot{x}_3)^T = [ -2, 1, 1 ; -2, 0, 1 ; 0, 0, -10 ] (x1, x2, x3)^T + ( 0, 0, 1 )^T u

and x2 is, of course, an "artificial" state.
And finally...

3.4 Method 4

We saw that in the CCF and OCF, the A matrices had a special form containing the negative coefficients of the denominator polynomial of G(s). The equations are called canonical when there is something special about A. A particularly powerful canonical form is when the POLES of G(s) appear down the diagonal of A.

E.g., given the system in fig. 3.4, can we find or choose states x which will result in \dot{x} = Ax + Bu where

A = [ 0, 0, 0 ; 0, -1, 0 ; 0, 0, -10 ] ?

The answer is YES! But because this canonical form is so important, we will give it some special symbols.

When A contains the poles of G(s) down its diagonal we write it as Λ.
The choice of x giving Λ we write y.
The state space equations become \dot{y} = Λy + Qu.

This canonical form is called the diagonal canonical form or MODAL canonical form or just MODAL form. Q is the input matrix (i.e. B) when the equations are in modal form.

Procedure: Start by assigning x1, x2, x3 as in Method 3 above (see fig. 3.5). Then carry out the partial fraction expansion of the transfer function of each state variable. Now define our new states y which will put the system into modal form:

x = Φy

where Φ is the matrix of partial fraction coefficients. The states y can be obtained from x by using y = Tx where T = Φ^{-1}.

T is termed a Transformation Matrix which transforms the physical states x into the artificial states y. y_i is termed the i-th mode of the system.

We can also draw what is termed the MODAL BLOCK DIAGRAM (fig. 3.6).

Note for a single input, single output (SISO) system with real non-repeated poles, Q will always be ( 1, 1, 1 )^T if the partial fraction method is used.

Unfortunately complications occur when:

1. an Xi(s)/U(s) term contains a numerator and denominator of equal order;
2. the system has complex or repeated poles.
3.4.1 Xi(s)/U(s) containing numerator and denominator of equal order

If we apply the partial fraction expansion method to the system in fig. 3.7, we will have:

X1(s)/U(s) = (s + p)(s + q)/[(s + α)(s + β)(s + γ)] = a1/(s + α) + a2/(s + β) + a3/(s + γ)

X2(s)/U(s) = (s + p)(s + q)/[(s + β)(s + γ)] = b0 + b1/(s + β) + b2/(s + γ)

X3(s)/U(s) = (s + p)/(s + γ) = c0 + c1/(s + γ)

where b0 and c0 are constants arising from the fact that X2(s)/U(s) and X3(s)/U(s) have numerator and denominator of equal order.
Defining Y1(s) = U(s)/(s + α), Y2(s) = U(s)/(s + β), Y3(s) = U(s)/(s + γ), we have

\dot{y} = [ -α, 0, 0 ; 0, -β, 0 ; 0, 0, -γ ] y + ( 1, 1, 1 )^T u

as before. But now x = Φy + Pu, where P is, in this case, ( 0, b0, c0 )^T, and y = Φ^{-1}(x - Pu).

What this means is that x is not a true state vector (in fact, both x2 and x3 depend on time derivatives of u, which is not allowed). y is of course a true state vector, and so we have succeeded in putting the system into a canonical state space form. The expression y = Φ^{-1}(x - Pu) is not problematic.
Figure 3.8 shows a system with complex poles where x2 is "hidden" in the second order block.

Carry out the partial fraction expansion, noting s^2 + 2s + 10 = (s + 1)^2 + 3^2:

X1(s)/U(s) = 10/[(s + 2)(s^2 + 2s + 10)] = (1/3) · 3/[(s + 1)^2 + 3^2] - 1 · (s + 1)/[(s + 1)^2 + 3^2] + 1/(s + 2)

X2(s)/U(s) = 10/(s^2 + 2s + 10) = (10/3) · 3/[(s + 1)^2 + 3^2] + 0 + 0

X3(s)/U(s) = 1/(s + 2) = 0 + 0 + 1/(s + 2)

The three columns of terms correspond to the e^{-t} sin 3t mode, the e^{-t} cos 3t mode and the e^{-2t} mode. There are three modes: the exponentially decaying "sin" and "cos" modes and the e^{-2t} mode.

We have much freedom to define x2 how we like, so long as we ensure that it is independent of x1 and x3 (i.e. if x2 can be expressed as k1 x1 + k2 x3, where k1 and k2 are constants, then x2 is NOT independent of x1 and x3). Above, x2 is related to x1 and x3 via functions of s, and so it is OK. The actual choice of x2 is made for easy mathematics, and it is common to put it in terms of a single e^{-σt} sin ωt or e^{-σt} cos ωt, as above.
Defining:

Y1(s) = 3/[(s + 1)^2 + 3^2] U(s),   Y2(s) = (s + 1)/[(s + 1)^2 + 3^2] U(s),   Y3(s) = 1/(s + 2) U(s)

then

Y1(s)(s + 1) = 3(s + 1)/[(s + 1)^2 + 3^2] U(s) = 3 Y2(s)

i.e.

\dot{y}_1 = -y1 + 3 y2   (3.2)

Multiplying Y2(s) by (s + 1) yields:

Y2(s)(s + 1) = (s + 1)^2/[(s + 1)^2 + 3^2] U(s)

and

Y2(s)(s + 1) + 3 Y1(s) = (s + 1)^2/[(s + 1)^2 + 3^2] U(s) + 3^2/[(s + 1)^2 + 3^2] U(s) = U(s)

i.e.

\dot{y}_2 = -3 y1 - y2 + u   (3.3)

and of course, we have:

\dot{y}_3 = -2 y3 + u   (3.4)

Combining (3.2), (3.3) and (3.4) yields:

\dot{y} = [ -1, 3, 0 ; -3, -1, 0 ; 0, 0, -2 ] y + ( 0, 1, 1 )^T u

i.e. the real part of the complex pole appears on the diagonal and the imaginary part in the off-diagonal terms.
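The claim that this real 2 × 2 block carries the complex pair can be verified in a couple of lines: the eigenvalues of the block [ -1, 3 ; -3, -1 ] should be -1 ± 3j, alongside the real pole -2.

```python
import numpy as np

# Real modal form derived above: complex pair -1 ± 3j as a 2x2 block,
# real pole -2 on the diagonal.
A = np.array([[-1.0, 3.0, 0.0],
              [-3.0, -1.0, 0.0],
              [0.0, 0.0, -2.0]])

key = lambda z: (round(z.real, 6), z.imag)
eigs = sorted(np.linalg.eigvals(A).astype(complex), key=key)
```

This real form avoids complex arithmetic in simulation while keeping the modal structure visible.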
For a repeated real pole the same approach applies. Defining:

Y1(s) = 1/(s - 1)^2 U(s),   Y2(s) = 1/(s - 1) U(s),   Y3(s) = 1/(s + 3) U(s)

and noting that Y1(s) = Y2(s)/(s - 1), we can express the above definitions as:

\dot{y}_1 = y1 + y2
\dot{y}_2 = y2 + u
\dot{y}_3 = -3 y3 + u

yielding

\dot{y} = [ 1, 1, 0 ; 0, 1, 0 ; 0, 0, -3 ] y + ( 0, 1, 1 )^T u

with the modal block diagram shown in fig. 3.11.
How can we have more than one state space representation for the same system?

If the state space representation is different for a different set of state variables:

- When I calculate one representation, do I have to do all the work again for another representation?
- Are these state space representations "equivalent" in some sense?

How can I go the other way around, i.e. if I have the state space equations, how can I obtain the Transfer Function(s) of the system?

The first question will be addressed in this section, and the second and third in the next section.
Assume we know the State Space representation of a system using a particular set of State Variables (x):

\dot{x} = Ax + Bu,   v = Cx + Du

Now we want to represent the system dynamics using a DIFFERENT set of State Variables (z). We could try to obtain all the state equations from the physical model again. Alternatively, we can express the above equations using the new state variables, provided we know the relationship between the "old" (x) and "new" (z) state variables:

z = Tx → x = T^{-1} z

Therefore, substituting x in the state space equation, we have:

T^{-1} \dot{z} = A T^{-1} z + Bu

Solving for \dot{z} yields:

\dot{z} = T A T^{-1} z + T B u

And applying the same substitution to the output equation:

v = C T^{-1} z + Du
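The transformation just derived is easy to exercise numerically: pick any invertible T, form the new matrices, and observe that the eigenvalues (and hence, as shown next, the poles) do not change. A sketch with an illustrative second-order system and an arbitrary T:

```python
import numpy as np

# Change of state variables z = T x gives (T A T^-1, T B, C T^-1).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

T = np.array([[1.0, 1.0], [0.0, 2.0]])   # any invertible T will do
Tinv = np.linalg.inv(T)

A_new = T @ A @ Tinv
B_new = T @ B
C_new = C @ Tinv

old = np.sort(np.linalg.eigvals(A).real)
new = np.sort(np.linalg.eigvals(A_new).real)
```

The eigenvalues of A here are -1 and -2, and A_new has exactly the same ones: similarity transformations preserve the characteristic polynomial.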
Starting from

\dot{x} = Ax + Bu,   v = Cx + Du

in order to obtain the Transfer Function of a system we apply the Laplace transform with zero initial conditions. Remember the transfer function of a system is G(s) = V(s)/U(s), i.e. V(s) = G(s)U(s).

sX(s) = AX(s) + BU(s) → (sI - A)X(s) = BU(s) → X(s) = (sI - A)^{-1} B U(s)

V(s) = [ C (sI - A)^{-1} B + D ] U(s)

V(s) = [ C adj(sI - A) B / |sI - A| + D ] U(s)

V(s) = (1/|sI - A|) [ C adj(sI - A) B + D |sI - A| ] U(s)

Let us take a closer look at the result. For SISO systems, we obtain a single transfer function, but for MIMO systems we obtain a SET of Transfer Functions (this is good news).

The characteristic equation of ALL these transfer functions is given by |sI - A| = 0. Note |sI - A| is a polynomial of degree equal to the number of states (interesting, isn't it?).
The condition |sI - A| = 0 defining the poles is exactly the eigenvalue condition for the matrix A:

|λI - A| = 0

Therefore the poles of the system are equal to the eigenvalues of the matrix A.
Now you know how to obtain DIFFERENT State Space representations of the SAME system. You also know how to obtain the Transfer Function from these representations: V(s) = [ C (sI - A)^{-1} B + D ] U(s).

Does it mean that we will obtain DIFFERENT Transfer Functions if we use DIFFERENT State Space representations as a starting point? The obvious answer is NO. The same system always has the same transfer function. Let us prove it.

First of all, define the relationship between "new" and "old" state variables:

z = Tx → x = T^{-1} z

\dot{z} = T A T^{-1} z + T B u,   v = C T^{-1} z + Du

V(s) = [ C T^{-1} (sI - T A T^{-1})^{-1} T B + D ] U(s)

V(s) = [ C ( T^{-1} (sI - T A T^{-1}) T )^{-1} B + D ] U(s)

V(s) = [ C ( T^{-1} s I T - T^{-1} T A T^{-1} T )^{-1} B + D ] U(s)

V(s) = [ C ( s T^{-1} T - A )^{-1} B + D ] U(s)

V(s) = [ C (sI - A)^{-1} B + D ] U(s)

Therefore, starting from the State Space equations based on the "new" variables (z), we have proved that the obtained transfer function is exactly the same as the one obtained from the State Space equations based on the "old" state variables (x).
3.6.1 Example 1

Obtain the transfer function of the following system:

\dot{x} = [ -2, 2, 3 ; 0, -3, 4 ; 0, 0, -5 ] x + ( 1, 2, 3 )^T u

v = ( 4, 0, -1 ) x

The transfer function is obtained by applying V(s) = [ C (sI - A)^{-1} B + D ] U(s):

V(s) = ( 4, 0, -1 ) [ s+2, -2, -3 ; 0, s+3, -4 ; 0, 0, s+5 ]^{-1} ( 1, 2, 3 )^T U(s)

Using (sI - A)^{-1} = adj(sI - A)/|sI - A|, with |sI - A| = (s+2)(s+3)(s+5) and

adj(sI - A) = [ s^2+8s+15, 2s+10, 3s+17 ; 0, s^2+7s+10, 4s+8 ; 0, 0, s^2+5s+6 ]

we get

V(s) = { ( 4, 0, -1 ) adj(sI - A) ( 1, 2, 3 )^T / [ (s+2)(s+3)(s+5) ] } U(s)

V(s) = [ (s^2 + 69s + 326) / ((s+2)(s+3)(s+5)) ] U(s)

If we had two inputs and two outputs, say C = [ 4, 0, -1 ; 0, 1, 0 ] and B = [ 1, 1 ; 2, 0 ; 3, 0 ], the same formula would give a 2 × 2 matrix of transfer functions:

V(s) = { [ 4, 0, -1 ; 0, 1, 0 ] adj(sI - A) [ 1, 1 ; 2, 0 ; 3, 0 ] / [ (s+2)(s+3)(s+5) ] } U(s)
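The adjugate algebra of Example 1 is easy to get wrong by hand, so it is worth checking numerically: C (sI - A)^{-1} B, evaluated at a few values of s, should agree with the closed-form result.

```python
import numpy as np

# The SISO system of Example 1.
A = np.array([[-2.0, 2.0, 3.0],
              [0.0, -3.0, 4.0],
              [0.0, 0.0, -5.0]])
B = np.array([[1.0], [2.0], [3.0]])
C = np.array([[4.0, 0.0, -1.0]])

def G(s):
    return (C @ np.linalg.inv(s * np.eye(3) - A) @ B)[0, 0]

def G_expected(s):
    # Closed-form answer derived above.
    return (s**2 + 69 * s + 326) / ((s + 2) * (s + 3) * (s + 5))

vals = [1.0, 2.0 + 1.0j, -1.0]
max_err = max(abs(G(s) - G_expected(s)) for s in vals)
```

Two rational functions of degree at most 3 that agree at enough points are identical, so a handful of sample values is a meaningful check here.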
Chapter 4

Transforming to Canonical Form

4.1 Introduction

Given, say:

\dot{x} = Ax + Bu,   v = Cx

\dot{x}_d = A_d x_d + B_d u,   v = C_d x_d

\dot{y} = Λy + Qu,   v = C_d y,   Λ = [ -1, 0, 0 ; 0, -2, 0 ; 0, 0, -4 ]

when the equations are derived from the physics of the plant, the states of the systems are, in general, "physical", so that the resulting state space equations are not in canonical form.
The problem addressed is: given \dot{x} = Ax + Bu, v = Cx, transform directly to a canonical form without going to transfer functions first (note for MIMO systems, going from transfer functions to canonical form is either difficult or impossible). We transform directly to a canonical form by trying to find a matrix T such that y = Tx. Then

T\dot{x} = TAx + TBu,   v = Cx

\dot{y} = T A T^{-1} y + T B u,   v = C T^{-1} y   (4.2)

\dot{y} = Λy + Qu

Λ = T A T^{-1},   Q = T B   (4.3)

where Λ is the diagonal matrix of eigenvalues (poles).
Consider the eigenvectors ξ_i of A, which satisfy:

A ξ_1 = λ_1 ξ_1
A ξ_2 = λ_2 ξ_2
...
A ξ_n = λ_n ξ_n

Collecting the eigenvectors into the columns of a matrix Φ = [ ξ_1, ξ_2, ..., ξ_n ], these n equations become

A Φ = Φ Λ → Λ = Φ^{-1} A Φ   (4.5)

and comparing with (4.3) we see that

T = Φ^{-1}   (4.6)

and y and x are thus related by:

y = Φ^{-1} x   (4.7)
\dot{y} = Λy + Φ^{-1} B u,   v = C Φ y   (4.8)
4.2.1 Notes

1. Φ is called the modal matrix of A. Its columns are the eigenvectors of A. Since the eigenvectors are fixed only as RATIOS of cofactors of A's row elements, the actual numerical elements of Φ will not be unique.

2. Don't get the columns of Φ muddled. Φ = [ ξ_1, ξ_2, ..., ξ_n ] where ξ_1 is derived from λ_1, etc. If you care to work through Φ^{-1} A Φ you will see that the result is diag(λ_1, ..., λ_n).

3. When \dot{y} = Λy + Qu was found for SINGLE INPUT SYSTEMS by the partial fraction expansion of X(s)/U(s) (section 3.4), then:

(a) the states x were related to y by x = Φy, where Φ was the matrix of partial fraction coefficients. Thus the partial fraction coefficient matrix is a modal matrix.

(b) we obtained Q = ( 1, 1, ..., 1 )^T. This Q is obtained only for the partial fraction expansion method. When finding \dot{y} = Λy + Qu from \dot{x} = Ax + Bu, then even for a SISO system you must find Φ from the eigenvectors of A and formulate Q as Φ^{-1} B.

4. To find Q demands the inversion of Φ. Remember that Φ^{-1} = adj(Φ)/|Φ|. For a 2 × 2 matrix there is a quick formula which should be remembered: if A = [ a, b ; c, d ] then A^{-1} = (1/|A|) [ d, -b ; -c, a ].

5. Complications arise in calculating Φ and Λ for systems containing complex eigenvalues (poles) or real, repeated eigenvalues.
4.2.2 Example

Consider the system in (4.9). Obtain the corresponding modal canonical form using the Modal Matrix method.

\dot{x} = Ax + Bu,   v = Cx

A = [ 1, 1, 0 ; 10, 0, 1 ; 8, 0, 0 ],   B = ( 0, 1, 6 )^T,   C = ( 1, 0, 0 )   (4.9)

First find the eigenvalues from |A - λI| = 0:

|A - λI| = | 1-λ, 1, 0 ; 10, -λ, 1 ; 8, 0, -λ | = -λ^3 + λ^2 + 10λ + 8

-λ^3 + λ^2 + 10λ + 8 = 0

The solutions are λ_1 = -1, λ_2 = -2, λ_3 = 4. Now that we have the eigenvalues, we can proceed to calculate the corresponding eigenvectors:

(A - λ_1 I) ξ_1 = 0

[ 2, 1, 0 ; 10, 1, 1 ; 8, 0, 1 ] ( ξ_11, ξ_12, ξ_13 )^T = 0

( 2ξ_11 + ξ_12, 10ξ_11 + ξ_12 + ξ_13, 8ξ_11 + ξ_13 )^T = 0

Let ξ_11 = 1, therefore 2 + ξ_12 = 0 → ξ_12 = -2, and 8 + ξ_13 = 0 → ξ_13 = -8. Therefore, the eigenvector ξ_1 for eigenvalue λ_1 = -1 is

ξ_1 = ( 1, -2, -8 )^T

We repeat the same procedure for λ_2 = -2:

(A - λ_2 I) ξ_2 = 0

[ 3, 1, 0 ; 10, 2, 1 ; 8, 0, 2 ] ( ξ_21, ξ_22, ξ_23 )^T = 0

( 3ξ_21 + ξ_22, 10ξ_21 + 2ξ_22 + ξ_23, 8ξ_21 + 2ξ_23 )^T = 0

Let ξ_21 = 1, therefore 3 + ξ_22 = 0 → ξ_22 = -3, and 8 + 2ξ_23 = 0 → ξ_23 = -4. Therefore, the eigenvector ξ_2 for eigenvalue λ_2 = -2 is

ξ_2 = ( 1, -3, -4 )^T

And for λ_3 = 4:

(A - λ_3 I) ξ_3 = 0

[ -3, 1, 0 ; 10, -4, 1 ; 8, 0, -4 ] ( ξ_31, ξ_32, ξ_33 )^T = 0

( -3ξ_31 + ξ_32, 10ξ_31 - 4ξ_32 + ξ_33, 8ξ_31 - 4ξ_33 )^T = 0

Let ξ_31 = 1, therefore -3 + ξ_32 = 0 → ξ_32 = 3, and 8 - 4ξ_33 = 0 → ξ_33 = 2. Therefore, the eigenvector ξ_3 for eigenvalue λ_3 = 4 is

ξ_3 = ( 1, 3, 2 )^T

The modal matrix Φ will be:

Φ = [ 1, 1, 1 ; -2, -3, 3 ; -8, -4, 2 ],   Φ^{-1} = [ -1/5, 1/5, -1/5 ; 2/3, -1/3, 1/6 ; 8/15, 2/15, 1/30 ]

Therefore, the matrix Λ will be:

Λ = Φ^{-1} A Φ = [ -1/5, 1/5, -1/5 ; 2/3, -1/3, 1/6 ; 8/15, 2/15, 1/30 ] [ 1, 1, 0 ; 10, 0, 1 ; 8, 0, 0 ] [ 1, 1, 1 ; -2, -3, 3 ; -8, -4, 2 ] = [ -1, 0, 0 ; 0, -2, 0 ; 0, 0, 4 ]
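The whole example can be verified in a few lines: compute the eigenvalues of A directly, and check that the modal matrix found above does diagonalize A.

```python
import numpy as np

# The system of Example 4.2.2 and its modal matrix as derived above.
A = np.array([[1.0, 1.0, 0.0],
              [10.0, 0.0, 1.0],
              [8.0, 0.0, 0.0]])
Phi = np.array([[1.0, 1.0, 1.0],
                [-2.0, -3.0, 3.0],
                [-8.0, -4.0, 2.0]])

Lam = np.linalg.inv(Phi) @ A @ Phi   # should be diag(-1, -2, 4)
```

Note `np.linalg.eig` would return the same eigenvectors only up to scaling and ordering (see Note 1: eigenvectors are ratios, not unique numbers), so checking Φ^{-1}AΦ is the more direct test.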
Further notes:

4. An eigenvalue λ_i of a matrix is a scalar and is equal to the ratio of the magnitudes of the vectors, |z_i|/|ξ_i| = |λ_i|, where z_i = A ξ_i.

9. The transformation y = Φ^{-1} x applied to the state equation \dot{x} = Ax + Bu gives the canonical form \dot{y} = Λy + Φ^{-1} B u, where Λ = Φ^{-1} A Φ = diag(λ_1, ..., λ_n) for a system with no repeated eigenvalues.

10. The alternative transformation y = Ψ^t x also gives the canonical form \dot{y} = Λy + Ψ^t B u, where Ψ = ( ψ_1, ψ_2, ..., ψ_n ) and ψ_i is the i-th eigenvector of A^t, provided there are no repeated eigenvalues.

13. For a system with complex eigenvalues λ_{1,2} = σ ± jω, a further transformation z = Sy is recommended, where

S = [ 1, 1, 0, 0 ; +j, -j, 0, 0 ; 0, 0, 1, 0 ; 0, 0, 0, 1 ],   S^{-1} = [ 1/2, -j/2, 0, 0 ; 1/2, +j/2, 0, 0 ; 0, 0, 1, 0 ; 0, 0, 0, 1 ]

giving:

\dot{z} = S Λ S^{-1} z + S Φ^{-1} B u

where

S Λ S^{-1} = [ σ, ω, 0, 0 ; -ω, σ, 0, 0 ; 0, 0, λ_3, 0 ; 0, 0, 0, λ_4 ]

and S Φ^{-1} B is real.
For a system with repeated eigenvalues, e.g.

A = [ 0, 1, 0 ; 0, 0, 1 ; -2, -5, -4 ]

which has eigenvalues -1, -1, -2, the modal matrix (with a generalized eigenvector in its second column) gives:

Λ = Φ^{-1} A Φ = [ -2, -5, -2 ; 2, 3, 1 ; 1, 2, 1 ] [ 0, 1, 0 ; 0, 0, 1 ; -2, -5, -4 ] [ 1, 1, 1 ; -1, 0, -2 ; 1, -1, 4 ] = [ -1, 1, 0 ; 0, -1, 0 ; 0, 0, -2 ]

Besides the eigenvalues, note the extra 1 just above the lead diagonal.
E.g. let's consider:

A = [ -16, -151, -956, -2380 ; 1, 0, 0, 0 ; 0, 1, 0, 0 ; 0, 0, 1, 0 ]

Its eigenvalues are λ_1 = -2 + 8i, λ_2 = -2 - 8i, λ_3 = -5 and λ_4 = -7.

The corresponding eigenvectors are:

ξ_1 = ( 376 - 416i, -60 - 32i, -2 + 8i, 1 )^T
ξ_2 = ( 376 + 416i, -60 + 32i, -2 - 8i, 1 )^T
ξ_3 = ( -125, 25, -5, 1 )^T
ξ_4 = ( -343, 49, -7, 1 )^T

and the resulting modal matrix:

Φ = [ 376-416i, 376+416i, -125, -343 ; -60-32i, -60+32i, 25, 49 ; -2+8i, -2-8i, -5, -7 ; 1, 1, 1, 1 ]

Λ = Φ^{-1} A Φ = [ -2+8i, 0, 0, 0 ; 0, -2-8i, 0, 0 ; 0, 0, -5, 0 ; 0, 0, 0, -7 ]
S Λ S^{-1} = [ 1, 1, 0, 0 ; +i, -i, 0, 0 ; 0, 0, 1, 0 ; 0, 0, 0, 1 ] [ -2+8i, 0, 0, 0 ; 0, -2-8i, 0, 0 ; 0, 0, -5, 0 ; 0, 0, 0, -7 ] [ 1, 1, 0, 0 ; +i, -i, 0, 0 ; 0, 0, 1, 0 ; 0, 0, 0, 1 ]^{-1} = [ -2, 8, 0, 0 ; -8, -2, 0, 0 ; 0, 0, -5, 0 ; 0, 0, 0, -7 ]
In any case S Φ^{-1} (and Φ S^{-1}) is always real: the first two rows of Φ^{-1} are complex conjugates of each other, and S combines them into their (real) sum and difference. Multiplying out

S Φ^{-1} = [ 1, 1, 0, 0 ; +i, -i, 0, 0 ; 0, 0, 1, 0 ; 0, 0, 0, 1 ] Φ^{-1}

for this example yields a purely real 4 × 4 matrix of rational entries.
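Rebuilding the 4th-order example numerically confirms both results at once. The modal matrix is constructed directly from the eigenvalues here (columns (λ^3, λ^2, λ, 1)^T, the eigenvector form for a companion matrix), since a library eigensolver would scale and order the eigenvectors differently:

```python
import numpy as np

# Companion-form A with eigenvalues -2 ± 8j, -5, -7.
A = np.array([[-16.0, -151.0, -956.0, -2380.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

lams = [-2 + 8j, -2 - 8j, -5.0 + 0j, -7.0 + 0j]
Phi = np.array([[l**3, l**2, l, 1] for l in lams]).T   # eigenvector columns

S = np.array([[1, 1, 0, 0],
              [1j, -1j, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

Lam = np.diag(lams)
A_real = S @ Lam @ np.linalg.inv(S)       # real block-diagonal form
M = S @ np.linalg.inv(Phi)                # S * Phi^-1 should be real
```

The imaginary parts of both A_real and M vanish to rounding error, and the 2 × 2 block of A_real is [ -2, 8 ; -8, -2 ], exactly as stated above.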
\dot{y} = Λy + Qu,   v = C Φ y

Chapter 5

Controllability and Observability

5.1 Controllability

5.1.1 Via Modal Form

Consider \dot{y} = Λy + Qu, e.g.

\dot{y} = [ 2, 0, 0 ; 0, -3, 0 ; 0, 0, -1 ] y + [ 0, 2 ; 1, 3 ; 1, 0 ] u
The plant will respond with e t , e 3t , e2t tems. Since Q constains non zero entries on all
rows, the modes are functions of the inputs. What if row 1 of Q was 0 0 , i.e.
0 1 0 1
2 0 0 0 0
y_ = @ 0 3 0 Ay + @ 1 3 Au
0 0 1 1 0
The mode $e^{2t}$ is not affected by the input and cannot be controlled. If the $i$th row of $Q$ is zero, the $i$th mode cannot be altered. If any mode cannot be altered, the system is said to be UNCONTROLLABLE.
A system is controllable if the state vector $x$ can be forced to reach any state $x_0$, provided that an adequate input is fed into the system.
If the uncontrollable mode is stable, it doesn't matter too much.
If the uncontrollable mode is unstable, oh dear!!
For repeated and complex roots:

Repeated:
$$\dot y=\begin{pmatrix}-1&1&0\\0&-1&0\\0&0&-1\end{pmatrix}y+\begin{pmatrix}0\\1\\1\end{pmatrix}u$$
is OK, but:
$$\dot y=\begin{pmatrix}-1&1&0\\0&-1&0\\0&0&-1\end{pmatrix}y+\begin{pmatrix}1\\0\\1\end{pmatrix}u$$
is not OK.

Controllability and Observability

Complex:
$$\dot y=\begin{pmatrix}-2&1&0\\-1&-2&0\\0&0&-1\end{pmatrix}y+\begin{pmatrix}0\\1\\2\end{pmatrix}u$$
is OK, but:
$$\dot y=\begin{pmatrix}-2&1&0\\-1&-2&0\\0&0&-1\end{pmatrix}y+\begin{pmatrix}0\\0\\1\end{pmatrix}u$$
is not OK.
If in doubt, draw the MODAL BLOCK DIAGRAM (section 3.4) and see if all modes can be
traced back to an input. If so, then the system is controllable.
5.1.2 Via the Controllability Matrix

Form the controllability matrix
$$\mathcal{C}=\begin{pmatrix}B&AB&A^2B&\cdots&A^{n-1}B\end{pmatrix}$$
where $n$ is the order of $A$. $\mathcal{C}$ is $n\times nr$ and for a single input system ($r=1$), $\mathcal{C}$ is square, i.e. for $n=2$, $\mathcal{C}=\begin{pmatrix}B&AB\end{pmatrix}$.
A system is controllable if the rank of C = n, the order of the system.
A system is uncontrollable if the rank of C < n.
Definition of rank: the rank of an $n\times m$ matrix is the order of the largest NON-SINGULAR square submatrix.
e.g.
$$A=\begin{pmatrix}4&5\\1&0\end{pmatrix};\qquad B=\begin{pmatrix}5\\1\end{pmatrix};\qquad \begin{pmatrix}B&AB\end{pmatrix}=\begin{pmatrix}5&25\\1&5\end{pmatrix}$$
Submatrices of order 1 are $5,25,1,5$: ALL non-singular, therefore the rank is at least 1. The only submatrix of order 2 is $\begin{pmatrix}5&25\\1&5\end{pmatrix}$, which is singular. Therefore the rank is 1, which is smaller than $n$, so the system is uncontrollable.
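The rank test above is exactly what you would do on a computer. A minimal sketch (my own, not part of the original notes, using NumPy):

```python
import numpy as np

# System from the example above
A = np.array([[4.0, 5.0],
              [1.0, 0.0]])
B = np.array([[5.0],
              [1.0]])

# Controllability matrix [B  AB] for n = 2
ctrb = np.hstack([B, A @ B])
print(ctrb)                          # [[ 5. 25.]
                                     #  [ 1.  5.]]
rank = np.linalg.matrix_rank(ctrb)
print(rank)                          # 1 -> rank < n = 2, uncontrollable
```

In practice `np.linalg.matrix_rank` (an SVD-based test) is far more reliable than hunting for non-singular submatrices by hand.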
e.g. (third order system, $n=3$, two inputs)
$$A=\begin{pmatrix}1&-2&1\\0&1&1\\2&1&1\end{pmatrix};\qquad B=\begin{pmatrix}1&1\\-2&1\\0&0\end{pmatrix}$$
$$\mathcal{C}=\begin{pmatrix}B&AB&A^2B\end{pmatrix}=\begin{pmatrix}1&1&5&-1&\cdot&\cdot\\-2&1&-2&1&\cdot&\cdot\\0&0&0&3&\cdot&\cdot\end{pmatrix}$$
Can we find a $3\times3$ submatrix here with $|SM_3|\neq0$? If we can, then rank = 3. Note
$$\begin{vmatrix}1&5&-1\\1&-2&1\\0&0&3\end{vmatrix}\neq0$$
It is not necessary to calculate $A^2B$ since we have already found a non-singular submatrix of order $=n$. Therefore the system is controllable.
Note for the single input case $\mathcal{C}$ is $n\times n$, i.e. square. The rank can only equal $n$ if the whole matrix $\mathcal{C}$ is non-singular, i.e. the system is controllable if $\mathcal{C}^{-1}$ exists.
Singularity is only really defined for square matrices. For an $n\times m$ matrix, $m>n$, you can regard the condition rank $=n$ as a test for the "singularity" of the non-square matrix.
5.2 Observability
5.2.1 Via Modal Form
Let's look at $v=\bar{C}y$, e.g.
$$v=\begin{pmatrix}0&2&1\\0&1&0\end{pmatrix}\begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix}$$
Let $y_1,y_2,y_3$ be the $e^{2t}$, $e^{-3t}$, $e^{-t}$ modes as before. Note the $y_1$ ($e^{2t}$) mode does not appear in either of the measurements $v_1$ or $v_2$. Therefore $y_1$ is said to be unobservable.
If the elements of the $i$th column of $\bar{C}$ are all zero, then the $i$th mode does not appear in the measurements and the system is said to be UNOBSERVABLE.
[Figure: feedback system from $U(s)$ to $V(s)$; the forward path $G(s)$ contains the factor $\frac{s-1}{s+4}$ and the feedback path is $H(s)=\frac{1}{s-1}$, whose output is $x_2(s)$.]
From the block diagram, taking $x_1$ and $x_2$ as the block outputs, the state equations are:
$$\dot x_1+4x_1=5(u-x_2)$$
$$\dot x_2=-x_1+u$$
$$v=x_1+x_2+u$$
which yields:
$$\begin{pmatrix}\dot x_1\\\dot x_2\end{pmatrix}=\begin{pmatrix}-4&-5\\-1&0\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}+\begin{pmatrix}5\\1\end{pmatrix}u$$
$$v=\begin{pmatrix}1&1\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}+(1)u$$
The eigenvalues of $A$ are given by
$$|A-\lambda I|=\begin{vmatrix}-4-\lambda&-5\\-1&-\lambda\end{vmatrix}=(-4-\lambda)(-\lambda)-5=0$$
which gives the characteristic equation $\lambda^2+4\lambda-5=0$, i.e. $(\lambda+5)(\lambda-1)=0$, with eigenvalues $\lambda_1=-5$, $\lambda_2=1$. Therefore the system is unstable. In classical theory the pole and zero at $s=1$ cancelled and "didn't show up".
OK, the system is unstable, let's control it. Let's just check $\mathcal{C}$:
$$\mathcal{C}=\begin{pmatrix}B&AB\end{pmatrix}=\begin{pmatrix}5&-25\\1&-5\end{pmatrix}$$
and $|\mathcal{C}|=0$, therefore the system is uncontrollable — but which mode cannot be controlled?
We can go to the modal canonical form to find out. For $\lambda_1=-5$:
$$(A-\lambda_1I)=\begin{pmatrix}-4&-5\\-1&0\end{pmatrix}+5I=\begin{pmatrix}1&-5\\-1&5\end{pmatrix},\qquad \xi_1=\begin{pmatrix}5\\1\end{pmatrix}$$
For $\lambda_2=1$:
$$\xi_2=\begin{pmatrix}1\\-1\end{pmatrix}$$
$$\Phi=\begin{pmatrix}5&1\\1&-1\end{pmatrix};\qquad \Phi^{-1}=\begin{pmatrix}\tfrac16&\tfrac16\\\tfrac16&-\tfrac56\end{pmatrix}$$
$$\dot y=\begin{pmatrix}-5&0\\0&1\end{pmatrix}y+\begin{pmatrix}1\\0\end{pmatrix}u$$
Oh dear, it's the unstable mode which is uncontrollable! Well, if we build the system, at least we would see it going unstable (the system trips and shuts down, etc). Let us see: $v=Cx+Du$, i.e. $v=\bar{C}y+Du$ where
$$v=\begin{pmatrix}1&1\end{pmatrix}\begin{pmatrix}5&1\\1&-1\end{pmatrix}\begin{pmatrix}y_1\\y_2\end{pmatrix}+u=\begin{pmatrix}6&0\end{pmatrix}\begin{pmatrix}y_1\\y_2\end{pmatrix}+u$$
and the $e^{t}$ term doesn't show up in the measurements! The system, in short, is unstable, uncontrollable, unobservable and a mess!
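The whole diagnosis can be reproduced numerically. A sketch of my own (not from the notes), using NumPy; the matrices are the realization derived above:

```python
import numpy as np

# Realization of the unstable pole-zero cancellation example
A = np.array([[-4.0, -5.0],
              [-1.0,  0.0]])
B = np.array([[5.0],
              [1.0]])
C = np.array([[1.0, 1.0]])

eigvals, Phi = np.linalg.eig(A)          # modes -5 (stable) and +1 (unstable)
ctrb = np.hstack([B, A @ B])             # [[5 -25], [1 -5]] -> singular

Q    = np.linalg.inv(Phi) @ B            # modal input matrix
Cbar = C @ Phi                           # modal output matrix
unstable = int(np.argmax(eigvals.real))  # index of the +1 mode

print(np.sort(eigvals.real))             # [-5.  1.]
print(np.linalg.matrix_rank(ctrb))       # 1 -> uncontrollable
# The unstable mode has a (numerically) zero row in Q and a zero
# column in Cbar: it is both uncontrollable and unobservable.
print(abs(Q[unstable, 0]) < 1e-9, abs(Cbar[0, unstable]) < 1e-9)
```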
Discussion: a system which is uncontrollable, etc. is usually so because the designer is an idiot. One way of being an idiot is to design a system with an unstable pole-zero cancellation!
But if the rows of Q are very small, it shows that controlling that mode with that input is
a poor choice. E.g. using ailerons to control a plane’s altitude modes, etc.
Chapter 6

Control and Observer Canonical Forms

It will be seen later that control design becomes very simple if the state space equations are in CCF or OCF. We therefore need to be able to transform the state space equations to CCF and/or OCF in a similar way to transforming to JCF. Note that although CCF and OCF can be defined for multi input multi output systems, their main power comes in consideration of single input single output systems, so this is assumed.
With $x_c=Tx$:
$$T\dot x=TAx+TBu;\qquad v=Cx+Du$$
$$\dot x_c=TAT^{-1}x_c+TBu;\qquad v=CT^{-1}x_c+Du$$
$$\dot x_c=A_cx_c+B_cu;\qquad v=C_cx_c+Du$$
We can obtain $A_c$ and $B_c$ directly. If $\lambda_1,\lambda_2,\ldots,\lambda_n$ are the eigenvalues of $A$, then the characteristic equation will be:
$$(s-\lambda_1)(s-\lambda_2)\ldots(s-\lambda_n)=0$$
and therefore we can obtain the characteristic polynomial:
$$s^n+a_1s^{n-1}+a_2s^{n-2}+\cdots+a_n$$
therefore:
$$A_c=\begin{pmatrix}-a_1&-a_2&-a_3&\cdots&-a_n\\1&0&0&\cdots&0\\0&1&0&\cdots&0\\\vdots&&\ddots&&\vdots\\0&0&\cdots&1&0\end{pmatrix};\qquad B_c=\begin{pmatrix}1\\0\\0\\\vdots\\0\end{pmatrix}$$
Why do we need to find $T$ then? Because manipulations of systems in CCF will give results in terms of $x_c$. If our states are $x$ then we need to be able to go from $x\to x_c$, i.e. $x_c=Tx$. $T$ can be obtained from the controllability matrix.
Control and Observer Canonical Forms
Let
$$\mathcal{C}=\begin{pmatrix}B&AB&A^2B&\cdots&A^{n-1}B\end{pmatrix}$$
$$\mathcal{C}_c=\begin{pmatrix}B_c&A_cB_c&A_c^2B_c&\cdots&A_c^{n-1}B_c\end{pmatrix}=\begin{pmatrix}TB&TAT^{-1}TB&(TAT^{-1})^2TB&\cdots&(TAT^{-1})^{n-1}TB\end{pmatrix}$$
and since $(TAT^{-1})^2T=TA^2$, etc., this becomes
$$\mathcal{C}_c=\begin{pmatrix}TB&TAB&TA^2B&\cdots&TA^{n-1}B\end{pmatrix}=T\mathcal{C}$$
Therefore:
$$\mathcal{C}_c=T\mathcal{C};\qquad T=\mathcal{C}_c\mathcal{C}^{-1};\qquad T^{-1}=\mathcal{C}\mathcal{C}_c^{-1}$$
The procedure is:
1. Take $A$, $B$.
2. Form $\mathcal{C}=\begin{pmatrix}B&AB&\cdots&A^{n-1}B\end{pmatrix}$.
3. Find the characteristic equation of $A$ and write down $A_c$, $B_c$.
4. Form $\mathcal{C}_c$.
5. Calculate $T=\mathcal{C}_c\mathcal{C}^{-1}$.
6. Check with $B_c=TB$.
Example:
$$A=\begin{pmatrix}-2&1\\0&-3\end{pmatrix};\quad B=\begin{pmatrix}1\\1\end{pmatrix};\quad \mathcal{C}=\begin{pmatrix}1&-1\\1&-3\end{pmatrix};\quad \mathcal{C}^{-1}=\begin{pmatrix}\tfrac32&-\tfrac12\\\tfrac12&-\tfrac12\end{pmatrix}$$
$$|A-sI|=\begin{vmatrix}-2-s&1\\0&-3-s\end{vmatrix}=(-2-s)(-3-s)=s^2+5s+6$$
$$A_c=\begin{pmatrix}-5&-6\\1&0\end{pmatrix};\qquad B_c=\begin{pmatrix}1\\0\end{pmatrix};\qquad \mathcal{C}_c=\begin{pmatrix}1&-5\\0&1\end{pmatrix}$$
$$T=\mathcal{C}_c\mathcal{C}^{-1}=\begin{pmatrix}1&-5\\0&1\end{pmatrix}\begin{pmatrix}\tfrac32&-\tfrac12\\\tfrac12&-\tfrac12\end{pmatrix}=\begin{pmatrix}-1&2\\\tfrac12&-\tfrac12\end{pmatrix}$$
check:
$$B_c=TB=\begin{pmatrix}-1&2\\\tfrac12&-\tfrac12\end{pmatrix}\begin{pmatrix}1\\1\end{pmatrix}=\begin{pmatrix}1\\0\end{pmatrix}$$
which is correct.
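The transformation can be checked numerically. A sketch of my own (not from the notes), using NumPy with the matrices of the example:

```python
import numpy as np

A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])

# Controllability matrices of the plant and of the CCF model
ctrb   = np.hstack([B, A @ B])                  # [[1 -1], [1 -3]]
Ac     = np.array([[-5.0, -6.0], [1.0, 0.0]])   # from s^2 + 5s + 6
Bc     = np.array([[1.0], [0.0]])
ctrb_c = np.hstack([Bc, Ac @ Bc])               # [[1 -5], [0  1]]

T = ctrb_c @ np.linalg.inv(ctrb)
print(T)                                        # [[-1.   2. ]
                                                #  [ 0.5 -0.5]]
# Checks: T B = Bc and T A T^-1 = Ac
print(T @ B)                                    # [[1.], [0.]]
print(T @ A @ np.linalg.inv(T))                 # equals Ac
```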
Similarly, for observer canonical form, with $x_o=Tx$:
$$T\dot x=TAx+TBu;\qquad v=Cx+Du$$
$$\dot x_o=TAT^{-1}x_o+TBu;\qquad v=CT^{-1}x_o+Du$$
$$\dot x_o=A_ox_o+B_ou;\qquad v=C_ox_o+Du$$
We can obtain $A_o$ and $C_o$ directly. If $\lambda_1,\lambda_2,\ldots,\lambda_n$ are the eigenvalues of $A$, then the characteristic equation will be:
$$(s-\lambda_1)(s-\lambda_2)\ldots(s-\lambda_n)=0$$
and therefore we can obtain the characteristic polynomial:
$$s^n+a_1s^{n-1}+a_2s^{n-2}+\cdots+a_n$$
therefore:
$$A_o=\begin{pmatrix}-a_1&1&0&\cdots&0\\-a_2&0&1&&\vdots\\-a_3&0&0&\ddots&0\\\vdots&&&&1\\-a_n&0&0&\cdots&0\end{pmatrix};\qquad C_o=\begin{pmatrix}1&0&\cdots&0\end{pmatrix}$$
Why do we need to find $T$ then? Because manipulations of systems in OCF will give results in terms of $x_o$. If our states are $x$ then we need to be able to go from $x\to x_o$, i.e. $x_o=Tx$. $T$ can be obtained from the observability matrix
$$\mathcal{O}=\begin{pmatrix}C^T&A^TC^T&(A^T)^2C^T&\cdots&(A^T)^{n-1}C^T\end{pmatrix}$$
which is $n\times n$ for single output systems. Note:
$$\mathcal{O}^T=\begin{pmatrix}C\\CA\\CA^2\\\vdots\end{pmatrix}=P$$
Now, since $A_o=TAT^{-1}$ and $C_o=CT^{-1}$, we have $C_oA_o^k=CT^{-1}(TAT^{-1})^k=CA^kT^{-1}$. Stacking these rows:
$$P_o=\begin{pmatrix}C_o\\C_oA_o\\C_oA_o^2\\\vdots\end{pmatrix}=\begin{pmatrix}C\\CA\\CA^2\\\vdots\end{pmatrix}T^{-1}=PT^{-1}$$
Therefore:
$$T=P_o^{-1}P;\qquad T^{-1}=P^{-1}P_o$$
Procedure:
1. Take $A$, $C$.
2. Form $P=\begin{pmatrix}C\\CA\\CA^2\\\vdots\end{pmatrix}$.
3. Find the characteristic equation of $A$.
4. Write down $A_o$ and $C_o$ by inspection.
5. Form $P_o$.
6. Calculate $T=P_o^{-1}P$.
7. Check with $C=C_oT$.
Example:
$$A=\begin{pmatrix}-2&1\\0&-3\end{pmatrix};\qquad C=\begin{pmatrix}2&3\end{pmatrix};\qquad P=\begin{pmatrix}2&3\\-4&-7\end{pmatrix}$$
The characteristic equation is the same as before: $s^2+5s+6=0$. Therefore:
$$A_o=\begin{pmatrix}-5&1\\-6&0\end{pmatrix};\quad C_o=\begin{pmatrix}1&0\end{pmatrix};\quad P_o=\begin{pmatrix}1&0\\-5&1\end{pmatrix};\quad P_o^{-1}=\begin{pmatrix}1&0\\5&1\end{pmatrix}$$
Therefore:
$$T=P_o^{-1}P=\begin{pmatrix}1&0\\5&1\end{pmatrix}\begin{pmatrix}2&3\\-4&-7\end{pmatrix}=\begin{pmatrix}2&3\\6&8\end{pmatrix}$$
Check:
$$C_oT=\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}2&3\\6&8\end{pmatrix}=\begin{pmatrix}2&3\end{pmatrix}$$
Full check:
$$TAT^{-1}=\begin{pmatrix}2&3\\6&8\end{pmatrix}\begin{pmatrix}-2&1\\0&-3\end{pmatrix}\begin{pmatrix}2&3\\6&8\end{pmatrix}^{-1}=\begin{pmatrix}-5&1\\-6&0\end{pmatrix}$$
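The same checks run in a few lines of NumPy (a sketch of my own, not from the notes):

```python
import numpy as np

A = np.array([[-2.0, 1.0], [0.0, -3.0]])
C = np.array([[2.0, 3.0]])

P  = np.vstack([C, C @ A])                    # [[ 2  3], [-4 -7]]
Ao = np.array([[-5.0, 1.0], [-6.0, 0.0]])     # from s^2 + 5s + 6
Co = np.array([[1.0, 0.0]])
Po = np.vstack([Co, Co @ Ao])                 # [[1 0], [-5 1]]

T = np.linalg.inv(Po) @ P
print(T)                                      # [[2. 3.], [6. 8.]]
# Checks: Co T = C and T A T^-1 = Ao
print(Co @ T)                                 # [[2. 3.]]
print(T @ A @ np.linalg.inv(T))               # equals Ao
```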
Chapter 7

Multivariable State Feedback

7.1 Introduction
Given a system $\dot x=Ax+Bu$, $v=Cx+Du$ (open loop), we can design a closed loop system with user specified eigenvalues ($\lambda_c$) by feedback of the system states. We shall be considering the system in fig. 7.1.
$$\dot x=Ax+Bu;\qquad v=Cx \tag{7.1}$$
$$u=k_1r-Kx \tag{7.2}$$
where $K$ is the $r\times n$ state feedback matrix connecting the states $x$ to the actuating inputs $u$.
The closed loop system is, by putting eq. (7.2) into eq. (7.1):
$$\dot x=Ax+B(k_1r-Kx);\qquad v=Cx$$
$$\dot x=(A-BK)x+Bk_1r;\qquad v=Cx \tag{7.3}$$
Eq. (7.3) is another state space equation. The closed loop poles are given by the eigenvalues of $(A-BK)$, i.e.
$$|A-BK-\lambda_cI|=0 \tag{7.4}$$
Multivariable State Feedback
The problem to solve is: given the desired closed loop poles $\lambda_{c1},\lambda_{c2},\ldots,\lambda_{cn}$, find the matrix $K$ that will place the eigenvalues of $(A-BK)$ at the desired positions.
The problems of reference input, closed loop gain and integral control do not affect eq. (7.4) and will be treated in section 8.
$$\dot x=Ax+Bu;\qquad v=Cx$$
where $u$ is scalar (single input) and $x$ are "physical" states which we shall assume are measurable. Hence we could write $C=I$ and $v=x$, i.e. there is no difference between states and outputs.
We first convert the system to Control Canonical Form, i.e.:
$$\dot x_c=A_cx_c+B_cu;\qquad v=C_cx_c$$
where $x_c=Tx$ and $T=\mathcal{C}_c\mathcal{C}^{-1}$:
$$\dot x_c=TAT^{-1}x_c+TBu;\qquad v=CT^{-1}x_c$$
We note that:
$$A_c=\begin{pmatrix}-a_1&-a_2&-a_3&\cdots&-a_n\\1&0&0&\cdots&0\\0&1&0&\cdots&0\\\vdots&&\ddots&&\vdots\\0&0&\cdots&1&0\end{pmatrix};\qquad B_c=\begin{pmatrix}1\\0\\0\\\vdots\\0\end{pmatrix}$$
where $a_i$ is the coefficient of $s^{n-i}$ (or $\lambda^{n-i}$) of $a(s)$, the open loop characteristic polynomial.
We now carry out a control design in the canonical domain by the state feedback law $u=-K_cx_c$, $K_c=\begin{pmatrix}k_{c1}&k_{c2}&\cdots&k_{cn}\end{pmatrix}$:
$$A_c-B_cK_c=\begin{pmatrix}-a_1&-a_2&\cdots&-a_n\\1&0&\cdots&0\\\vdots&\ddots&&\vdots\\0&\cdots&1&0\end{pmatrix}-\begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix}\begin{pmatrix}k_{c1}&k_{c2}&\cdots&k_{cn}\end{pmatrix}$$
$$=\begin{pmatrix}-a_1&-a_2&\cdots&-a_n\\1&0&\cdots&0\\\vdots&\ddots&&\vdots\\0&\cdots&1&0\end{pmatrix}-\begin{pmatrix}k_{c1}&k_{c2}&\cdots&k_{cn}\\0&0&\cdots&0\\\vdots&&&\vdots\\0&0&\cdots&0\end{pmatrix}$$
$$A_c-B_cK_c=\begin{pmatrix}-a_1-k_{c1}&-a_2-k_{c2}&\cdots&-a_n-k_{cn}\\1&0&\cdots&0\\0&1&\cdots&0\\\vdots&&\ddots&\vdots\\0&\cdots&1&0\end{pmatrix}\tag{7.6}$$
Therefore, $a_1+k_{c1},\,a_2+k_{c2},\,\ldots,\,a_n+k_{cn}$ are the coefficients of the closed loop characteristic polynomial $\alpha(s)$.
Letting the closed loop characteristic polynomial be:
$$s^n+\alpha_1s^{n-1}+\alpha_2s^{n-2}+\cdots+\alpha_n=(s-\lambda_{c1})(s-\lambda_{c2})\cdots(s-\lambda_{cn})\tag{7.7}$$
where $\lambda_{ci}$ are the desired closed loop eigenvalues (which may be repeated or complex), it is seen that $\alpha_i=a_i+k_{ci}$, i.e.
$$k_{ci}=\alpha_i-a_i\tag{7.8}$$
It must be understood that the previous equation gives the elements of $K_c$ which relate $u$ to $x_c$ (the canonical state variables). If these are not physical states (and they usually will not be) you must transform back to the real states $x$.
We have:
$$u=-K_cx_c;\qquad x_c=Tx$$
therefore
$$u=-K_cTx=-Kx\tag{7.9}$$
and
$$K=K_cT\tag{7.10}$$
The procedure can be summarized:
1. Find the open loop characteristic polynomial $a(s)$.
2. Select the closed loop poles $\lambda_c$. Form the closed loop characteristic polynomial $\alpha(s)$.
3. Calculate $k_{ci}=\alpha_i-a_i$.
4. Form $T=\mathcal{C}_c\mathcal{C}^{-1}$.
5. Calculate $K=K_cT$.
Example:
$$A=\begin{pmatrix}-2&1\\0&-3\end{pmatrix};\qquad B=\begin{pmatrix}1\\1\end{pmatrix}$$
therefore, the open loop poles are $-2,-3$. We wish to find the feedback law $u=-Kx$ which will place the closed loop poles (eigenvalues) at $\lambda=-5\pm j5$.
First of all, we find the open loop characteristic polynomial:
$$a(s)=|sI-A|=(s+2)(s+3)=s^2+5s+6$$
Then we can calculate the closed loop characteristic polynomial:
$$\alpha(s)=(s+5-j5)(s+5+j5)=s^2+10s+50$$
$$k_{c1}=\alpha_1-a_1=10-5=5$$
$$k_{c2}=\alpha_2-a_2=50-6=44$$
$$A_c=\begin{pmatrix}-5&-6\\1&0\end{pmatrix};\qquad B_c=\begin{pmatrix}1\\0\end{pmatrix}$$
and we calculate the controllability matrices:
$$\mathcal{C}=\begin{pmatrix}B&AB\end{pmatrix}=\begin{pmatrix}1&-1\\1&-3\end{pmatrix};\qquad \mathcal{C}^{-1}=\begin{pmatrix}\tfrac32&-\tfrac12\\\tfrac12&-\tfrac12\end{pmatrix}$$
$$\mathcal{C}_c=\begin{pmatrix}B_c&A_cB_c\end{pmatrix}=\begin{pmatrix}1&-5\\0&1\end{pmatrix}$$
$$T=\mathcal{C}_c\mathcal{C}^{-1}=\begin{pmatrix}1&-5\\0&1\end{pmatrix}\begin{pmatrix}\tfrac32&-\tfrac12\\\tfrac12&-\tfrac12\end{pmatrix}=\begin{pmatrix}-1&2\\\tfrac12&-\tfrac12\end{pmatrix}$$
therefore
$$K=K_cT=\begin{pmatrix}5&44\end{pmatrix}\begin{pmatrix}-1&2\\\tfrac12&-\tfrac12\end{pmatrix}=\begin{pmatrix}17&-12\end{pmatrix}$$
and the control law will be:
$$u=-Kx=-\begin{pmatrix}17&-12\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}$$
Now we are going to check our design. First of all, we can obtain the closed loop state space equation:
$$\dot x=(A-BK)x+Bk_1r;\qquad v=Cx$$
Obviously, the closed loop poles will be the eigenvalues of $(A-BK)$, i.e. we must solve:
$$|A-BK-\lambda_cI|=0$$
$$\left|\begin{pmatrix}-2&1\\0&-3\end{pmatrix}-\begin{pmatrix}1\\1\end{pmatrix}\begin{pmatrix}17&-12\end{pmatrix}-\lambda_cI\right|=0$$
$$\left|\begin{pmatrix}-19&13\\-17&9\end{pmatrix}-\lambda_cI\right|=0$$
$$\begin{vmatrix}-19-\lambda_c&13\\-17&9-\lambda_c\end{vmatrix}=0$$
$$\lambda_c^2+10\lambda_c+50=0$$
and the solutions of the closed loop characteristic equation are: $\lambda_{c1}=-5+j5$, $\lambda_{c2}=-5-j5$.
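The eigenvalue check is a one-liner numerically. A sketch of my own (not from the notes), using NumPy:

```python
import numpy as np

A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
K = np.array([[17.0, -12.0]])

Acl = A - B @ K
print(Acl)                                      # [[-19.  13.]
                                                #  [-17.   9.]]
print(np.sort_complex(np.linalg.eigvals(Acl)))  # [-5.-5.j -5.+5.j]
```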
Remember to take care with signs. Eq. (7.8) gives the elements $k_{ci}$ where $u=-K_cx_c$, i.e. for negative feedback, $k_{ci}$ will be positive numbers. In the above example, negative feedback is applied to $x_1$ and positive feedback to $x_2$ (since $K=(17\;\;-12)$).
Note also that the Control Canonical Form is only easily applicable for single input systems.
For multiple inputs, the design should be done in Diagonal Canonical Form or modal form.
$$\dot y=\begin{pmatrix}\lambda_1&&&0\\&\lambda_2&&\\&&\ddots&\\0&&&\lambda_n\end{pmatrix}y+\begin{pmatrix}q_1\\q_2\\\vdots\\q_n\end{pmatrix}u$$
Then, we can draw the modal block diagram (for the closed loop system)
Figure 7.2: Modal block diagram for the closed loop system
Example
$$\dot y=\begin{pmatrix}2&0&0\\0&3&0\\0&0&-4\end{pmatrix}y+\begin{pmatrix}1\\2\\1\end{pmatrix}u$$
The modal block diagram will be:
Figure 7.3: Modal block diagram for the closed loop system
We have two unstable modes, $y_1$ and $y_2$. We should find a control law such that the closed loop eigenvalues are $\lambda_1,\lambda_2=-1\pm j1$, $\lambda_3=-4$. Note mode $y_3$ can be left alone. With $u=-(k_{d1}y_1+k_{d2}y_2)$ the characteristic equation is:
$$1+\frac{k_{d1}}{s-2}+\frac{2k_{d2}}{s-3}=0$$
i.e.
$$s^2+(k_{d1}+2k_{d2}-5)s+(6-3k_{d1}-4k_{d2})=0$$
We want $(s+1)^2+1^2=0$, i.e. $s^2+2s+2=0$. Therefore:
$$k_{d1}+2k_{d2}-5=2$$
$$6-3k_{d1}-4k_{d2}=2$$
Since $y=\Phi^{-1}x$, we have:
$$u=-K_dy=-K_d\Phi^{-1}x$$
2. We could use $u_1$ to control $y_1$ and $u_3$ to control $y_2$. The resulting block diagram is shown in fig. 7.6.
Figure 7.5: Closed loop modal block diagram (feedback using only u1 )
Figure 7.6: Closed loop modal block diagram (using u1 to control y1 and u3 to control y2 )
Example

A system is defined by
$$\dot x=Ax+Bu;\qquad A=\begin{pmatrix}1&4&2\\1&1&1\\2&-1&1\end{pmatrix};\qquad B=\begin{pmatrix}2&1\\1&1\\1&-2\end{pmatrix}$$
Find the controller matrix $K$ to place the closed loop eigenvalues at $-4,-4,-3$.
Since we have two inputs, we are going to carry out the design in modal canonical form. First of all, we have to express the system in modal canonical form. Therefore, we have to obtain the eigenvalues of $A$, as well as the modal matrix $\Phi$.
$$\left|sI-\begin{pmatrix}1&4&2\\1&1&1\\2&-1&1\end{pmatrix}\right|=0\;\Rightarrow\;s^3-3s^2-4s=s(s+1)(s-4)=0$$
therefore, the open loop eigenvalues are $0,-1,4$ (i.e. the system is open loop unstable). The modal matrix can be calculated as in the previous chapter:
$$\Phi=\begin{pmatrix}-1&1&2\\-\tfrac12&0&1\\\tfrac32&-1&1\end{pmatrix};\qquad \Phi^{-1}=\begin{pmatrix}\tfrac12&-\tfrac32&\tfrac12\\1&-2&0\\\tfrac14&\tfrac14&\tfrac14\end{pmatrix}$$
Now, we can express the system in modal canonical form:
$$\dot y=\Phi^{-1}A\Phi\,y+\Phi^{-1}Bu$$
$$\Lambda=\Phi^{-1}A\Phi=\begin{pmatrix}0&0&0\\0&-1&0\\0&0&4\end{pmatrix}$$
$$Q=\Phi^{-1}B=\begin{pmatrix}\tfrac12&-\tfrac32&\tfrac12\\1&-2&0\\\tfrac14&\tfrac14&\tfrac14\end{pmatrix}\begin{pmatrix}2&1\\1&1\\1&-2\end{pmatrix}=\begin{pmatrix}0&-2\\0&-1\\1&0\end{pmatrix}$$
From matrix $Q$ we can conclude that modes $y_1$ and $y_2$ can only be controlled from $u_2$, whereas mode $y_3$ can only be controlled from $u_1$. The corresponding modal block diagram is shown in fig. 7.7. With feedback $u_2=-(k_{d21}y_1+k_{d22}y_2)$ it gives the following closed loop characteristic equation:
$$1-\frac{2k_{d21}}{s}-\frac{k_{d22}}{s+1}=0;\qquad s^2+(1-2k_{d21}-k_{d22})s-2k_{d21}=0$$
Figure 7.7: Closed loop modal block diagram (using u1 to control y3 and u2 to control y1 , y2 )
With $u_1=-k_{d13}y_3$:
$$1+\frac{k_{d13}}{s-4}=0;\qquad s-4+k_{d13}=0$$
Modes $y_1$ and $y_2$ will have closed loop eigenvalues of $-4$ and $-3$, and mode $y_3$ will be changed to have a closed loop eigenvalue equal to $-4$. Therefore, for modes $y_1$ and $y_2$, the desired closed loop characteristic equation will be:
$$(s+3)(s+4)=0;\qquad s^2+7s+12=0$$
Comparing with
$$s^2+(1-2k_{d21}-k_{d22})s-2k_{d21}=0$$
and equating coefficients, we have:
$$1-2k_{d21}-k_{d22}=7$$
$$-2k_{d21}=12$$
giving $k_{d21}=-6$, $k_{d22}=6$. For mode $y_3$, the desired characteristic equation is
$$s+4=0$$
therefore:
$$k_{d13}-4=4$$
and $k_{d13}=8$. The feedback law will be:
$$u=-K_dy=-\begin{pmatrix}0&0&8\\-6&6&0\end{pmatrix}y$$
and expressed in the original state variables:
$$u=-K_d\Phi^{-1}x=-\begin{pmatrix}0&0&8\\-6&6&0\end{pmatrix}\begin{pmatrix}\tfrac12&-\tfrac32&\tfrac12\\1&-2&0\\\tfrac14&\tfrac14&\tfrac14\end{pmatrix}x=-\begin{pmatrix}2&2&2\\3&-3&-3\end{pmatrix}x$$
As a check:
$$A_{cl}=A-BK=\begin{pmatrix}1&4&2\\1&1&1\\2&-1&1\end{pmatrix}-\begin{pmatrix}2&1\\1&1\\1&-2\end{pmatrix}\begin{pmatrix}2&2&2\\3&-3&-3\end{pmatrix}=\begin{pmatrix}-6&3&1\\-4&2&2\\6&-9&-7\end{pmatrix}$$
whose characteristic equation is
$$s^3+11s^2+40s+48=(s+3)(s+4)(s+4)=0$$
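The multivariable design can be verified numerically. A sketch of my own (not from the notes), using NumPy with the matrices of the example:

```python
import numpy as np

A = np.array([[1.0,  4.0, 2.0],
              [1.0,  1.0, 1.0],
              [2.0, -1.0, 1.0]])
B = np.array([[2.0,  1.0],
              [1.0,  1.0],
              [1.0, -2.0]])
K = np.array([[2.0,  2.0,  2.0],
              [3.0, -3.0, -3.0]])

# Open loop eigenvalues: 0, -1, 4 (unstable)
print(np.sort(np.linalg.eigvals(A).real))      # [-1.  0.  4.]

# Closed loop eigenvalues should be -4, -4, -3
Acl = A - B @ K
print(np.sort(np.linalg.eigvals(Acl).real))    # approx [-4. -4. -3.]
```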
It is obvious from the block diagram that the $y_1$ and $y_2$ modes can be controlled via $u_1$ or $u_2$. This was not obvious from the canonical state equation since the first row of $Q$ was $(0\;\;0)$. However $y_1$ is related to $u_1$ via the off-diagonal coupling entry.
We apply feedback to either $u_1$ or $u_2$ (or both if you wish) to stabilise the double mode $\lambda_{1,2}$. In the diagram $u_1$ is chosen. Note that the double eigenvalue will be changed to two new eigenvalues. We therefore need two feedback gains $k_1$, $k_2$.
The characteristic equation is:
$$1-\frac{2k_1}{(s-1)^2}-\frac{2k_2}{s-1}=0$$
(positive feedback). Therefore
where $\lambda_1,\lambda_2$ are the two new eigenvalues. Equating coefficients we can obtain the control law $u=-K_dy$.
$$\begin{pmatrix}s-1&-2\\2&s-1\end{pmatrix}\begin{pmatrix}y_1\\y_2\end{pmatrix}=\begin{pmatrix}1&0\\2&1\end{pmatrix}\begin{pmatrix}u_1\\u_2\end{pmatrix}$$
therefore:
$$\begin{pmatrix}y_1\\y_2\end{pmatrix}=\frac{1}{(s-1)^2+2^2}\begin{pmatrix}s-1&2\\-2&s-1\end{pmatrix}\begin{pmatrix}1&0\\2&1\end{pmatrix}\begin{pmatrix}u_1\\u_2\end{pmatrix}$$
$$\begin{pmatrix}y_1\\y_2\end{pmatrix}=\frac{1}{(s-1)^2+2^2}\begin{pmatrix}s+3&2\\2s-4&s-1\end{pmatrix}\begin{pmatrix}u_1\\u_2\end{pmatrix}$$
and the canonical block diagram can be drawn. It is a bit messy. For feedback purposes we note that both $y_1$ and $y_2$ are functions of $u_1$; therefore, we do not need to apply feedback to $u_2$. The block diagram relating $y_1$, $y_2$ to $u_1$ is:
We want
$$s^2+2s+2=0$$
Therefore, the equation system to solve is:
$$k_1-2+2k_2=2$$
$$5+3k_1-4k_2=2$$
giving $k_1=1$, $k_2=1.5$, or
$$u=\begin{pmatrix}u_1\\u_2\end{pmatrix}=-\begin{pmatrix}1&1.5&0\\0&0&0\end{pmatrix}\begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix}=-K_dy$$
and we transform back into real state variables via $y=\Phi^{-1}x$ to give $u=-K_d\Phi^{-1}x$.
Note in the above procedure only one pair of unstable complex poles is being altered. This resulted in the $Y(s)=(sI-\Lambda)^{-1}QU(s)$ calculation being of second order. What if you want to alter two sets of complex poles, e.g. $\lambda_{1,2}=1\pm j1$ and $\lambda_{3,4}=1\pm j50$? Since modes 1 and 2 are decoupled from modes 3 and 4, you first do a second order calculation $Y(s)=(sI-\Lambda)^{-1}QU(s)$ for $y_1,y_2$ and another second order calculation for $y_3,y_4$.
If the stabilisation of the four modes involves a common input you must however draw the four modes in one block diagram. If the stabilisation of modes 1 and 2 involves different input(s) from that of modes 3 and 4 then you can of course draw two separate block diagrams and obtain $k_1,k_2$ independently of $k_3,k_4$.
Chapter 8

Reference Inputs and Integral Control

This looks as if each state is being compared with a reference input. This representation is "generality run riot"! In a great many cases, however, only one state (which may also be defined as an output) is actually being controlled, in which case only ONE reference input is required. The following two examples illustrate this:
1. In many systems (generally mechanical) the states are derivatives of each other, e.g. $x,\dot x,\ddot x$, etc. If $x$ (e.g. position) is to be controlled in steady state, the reference values of $\dot x,\ddot x$, etc. must be zero. Consider, for example, the simple position control system in fig. 8.2.
Obviously $r_2=0$. Note the signs to yield $u=-Kx$. $k_1$ and $k_2$ can be found by the techniques in section 7. For this example, it can be seen that fig. 8.2 reduces to a loop in which state feedback has produced a simple lead compensator. The system is Type 1 and $x_1$ will follow $r_1$ with zero steady state error.
2. Let the states not be simple derivatives of each other, but let the states be taken as the outputs of transfer function blocks in cascade (fig. 8.4). Let $x_1$ be desired to be controlled at $x_1=10$. Thus $r_1=10$. Therefore the steady state value of $x_2$ is 30, since $\lim_{s\to0}\frac{1}{s+3}=\frac13$. Thus $r_2$ is redundant as $x_2$ will settle at 30 anyway (specifying $r_2$ will just alter the system errors, as the system is Type 0). It can be seen that if such a system consists of blocks in cascade, specifying the steady state value of one state (usually the output) will determine the steady state values of the others. This can be seen from the closed loop state space equation:
$$\dot x=(A-BK)x+BKr$$
To summarize, if there is one output or state to be controlled, then generally the systems are single input ($u$ scalar) and one reference input is sufficient. This, of course, is consistent with intuition and is, in a sense, labouring the obvious.
This is the single input version of fig. 7.1. $K_1$ is now a scalar and $K_2$ is a $1\times n$ matrix. The feedback law is:
$$u=-Kx+K_1r\tag{8.1}$$
$$K=K_1K_2$$
i.e. the feedback matrix is split into two, with the reference input placed in between. This allows for easy selection of the DC gain. Assume that $v$ is equal to a single state, i.e. $C=\begin{pmatrix}1&0&\cdots&0\end{pmatrix}$.
We know that:
$$V(s)=C(sI-A)^{-1}BU(s)$$
i.e.
$$V(s)=\frac{q(s)}{p(s)}U(s)\tag{8.2}$$
and the feedback gives
$$U(s)=\frac{p(s)K_1}{\alpha(s)}R(s)\tag{8.3}$$
where $\alpha(s)$ is the closed loop characteristic polynomial. Combining eq. 8.3 with 8.2, we obtain the closed loop transfer function:
$$V(s)=\frac{K_1q(s)}{\alpha(s)}R(s)\tag{8.4}$$
1. The zeroes of the closed loop and open loop transfer functions are identical. For a single input system, state feedback does not alter the plant zeroes. This is true no matter where the reference input is placed.
2. The scalar $K_1$ appears in the numerator. This allows for the selection of the closed loop gain.
Example

An open loop system is given by
$$G(s)=\frac{2(s+2)}{(s+1)(s+3)}$$
and the desired closed loop transfer function is
$$G_c(s)=\frac{k'(s+2)}{(s+8)(s+10)}$$
with $k'=40$ for unity DC gain. From eq. 8.4 we see that $K_1\cdot2(s+2)=40(s+2)$, from which $K_1=20$.
The open loop characteristic polynomial is $s^2+4s+3$.
The closed loop characteristic polynomial is $s^2+18s+80$.
Then from eq. 7.8, $k_1=14$, $k_2=77$, and
$$u=-Kx=-\begin{pmatrix}14&77\end{pmatrix}x$$
Note we have used control canonical form. As such the $x$ are probably not "physical states". However, we shall assume they are here.
Since $K=(14\;\;77)$, we therefore have $K_2=(14\;\;77)/K_1=(0.7\;\;3.85)$ and we have the closed loop diagram in fig. 8.7.
Note that the system zero remains at $-2$. Indeed it has not been possible to change it. Note that when $K_1$ is a matrix, state feedback does affect the system zeroes, which will then affect the DC gain (if the system is not Type 1) and also the system response (but not the stability). As remarked earlier, it is difficult to design $K_1$ to select closed loop zeroes. The best approach is to forget them and simulate the controlled system on a computer to see whether the response is acceptable. If DC gains are a problem, the elements of $K_1$ (and hence $K_2$, since $K$ is fixed) can be modified accordingly.
There is an important class of systems in which it is possible (and desirable) to make each controlled output dependent on one input only.
E.g. given:
$$\dot x=\begin{pmatrix}-2&1\\1&-2\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}+\begin{pmatrix}1&1/2\\1/2&1\end{pmatrix}\begin{pmatrix}u_1\\u_2\end{pmatrix}$$
State feedback can be applied to yield:
$$\dot x=(A-BK_D)x+BK_{D1}u'$$
where
$$(A-BK_D)=\begin{pmatrix}-2&0\\0&-2\end{pmatrix};\qquad BK_{D1}=\begin{pmatrix}1&0\\0&1\end{pmatrix}$$
The values of $K_D$ and $K_{D1}$ can be easily derived if $B$ is square and invertible:
$$K_D=B^{-1}\left(A+\begin{pmatrix}2&0\\0&2\end{pmatrix}\right)=\begin{pmatrix}1&1/2\\1/2&1\end{pmatrix}^{-1}\begin{pmatrix}0&1\\1&0\end{pmatrix}=\begin{pmatrix}-\tfrac23&\tfrac43\\\tfrac43&-\tfrac23\end{pmatrix}$$
$$K_{D1}=B^{-1}=\begin{pmatrix}1&1/2\\1/2&1\end{pmatrix}^{-1}=\begin{pmatrix}\tfrac43&-\tfrac23\\-\tfrac23&\tfrac43\end{pmatrix}$$
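The decoupling gains can be checked numerically. A sketch of my own (not from the notes), using NumPy:

```python
import numpy as np

A = np.array([[-2.0, 1.0], [1.0, -2.0]])
B = np.array([[1.0, 0.5], [0.5, 1.0]])

Binv = np.linalg.inv(B)
KD  = Binv @ (A + 2.0 * np.eye(2))   # places A - B*KD = -2I
KD1 = Binv                           # makes B*KD1 = I

print(A - B @ KD)                    # [[-2.  0.], [ 0. -2.]]
print(B @ KD1)                       # [[1. 0.], [0. 1.]]
print(KD)                            # approx [[-2/3, 4/3], [4/3, -2/3]]
```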
The "closed loop" system is now termed "non-interacting" or "decoupled" for obvious reasons. Further state feedback can now easily be applied to alter the eigenvalues. The general scheme is shown in fig. 8.8.
When $B$ is square, the design of decoupled systems via state feedback is quite easy. When $B$ is not square it is generally quite difficult. For the general theory, advanced texts should be consulted.
where $x_I$ is an extra state. For sign convention reasons we will define $\dot x_I=v-r$. Here we are treating $v$ and $r$ as scalars (single controlled output). If we have $p$ controlled outputs then $\dot x_I=v-r$ gives $p$ extra states.
Let the system be given by:
$$\dot x=Ax+Bu;\qquad v=Cx$$
If the $v$ outputs are to be "integral" controlled, we augment the state space equation with
$$\dot x_I=v-r=Cx-r$$
therefore:
$$\begin{pmatrix}\dot x\\\dot x_I\end{pmatrix}=\begin{pmatrix}A&0\\C&0\end{pmatrix}\begin{pmatrix}x\\x_I\end{pmatrix}+\begin{pmatrix}B\\0\end{pmatrix}u+\begin{pmatrix}0\\-I\end{pmatrix}r$$
i.e.
$$\dot x'=A'x'+B'u+G'r$$
Matrices $A'$ and $B'$ will be used for control design. Now we will carry out state feedback to alter the eigenvalues of $A'$. This will yield a feedback law:
$$u=-Kx'=-\begin{pmatrix}k_1&k_2&\cdots&k_n&k_{I1}&k_{I2}&\cdots\end{pmatrix}\begin{pmatrix}x_1\\x_2\\\vdots\\x_{I1}\\\vdots\end{pmatrix}$$
where $k_{I1},k_{I2}$, etc. are the gains of the "integral compensators".
An example will make this obvious.
Example

Let
$$\dot x=\begin{pmatrix}-2&1\\0&-3\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}+\begin{pmatrix}2\\-1\end{pmatrix}u;\qquad v=\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}$$
The output $v$ is required to follow a step reference with zero steady state error. The system is Type 0. The closed loop system is to have eigenvalues of $-5,-2,-3$. Note three eigenvalues, since we are going to augment the system with an integral controller, so increasing the order of the system by one. Define $\dot x_I=v-r=x_1-r$. The augmented state equation is
$$\dot x'=\begin{pmatrix}\dot x_1\\\dot x_2\\\dot x_I\end{pmatrix}=\begin{pmatrix}-2&1&0\\0&-3&0\\1&0&0\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_I\end{pmatrix}+\begin{pmatrix}2\\-1\\0\end{pmatrix}u+\begin{pmatrix}0\\0\\-1\end{pmatrix}r$$
and the control law will be $u=-Kx'=-\begin{pmatrix}k_1&k_2&k_I\end{pmatrix}x'$.
Control in modal canonical form. The eigenvalues of $A'$ are given by:
$$|A'-\lambda I|=0;\qquad \left|\begin{pmatrix}-2&1&0\\0&-3&0\\1&0&0\end{pmatrix}-\lambda I\right|=0;\qquad \lambda(\lambda+2)(\lambda+3)=0$$
Hence the open loop eigenvalues are $0,-2,-3$. Therefore, we only need to change one eigenvalue.
Now, we obtain the eigenvectors of $A'$ in order to find the modal matrix:
$$\xi_1=\begin{pmatrix}0\\0\\1\end{pmatrix};\quad\xi_2=\begin{pmatrix}-2\\0\\1\end{pmatrix};\quad\xi_3=\begin{pmatrix}-3\\3\\1\end{pmatrix};\qquad\Phi=\begin{pmatrix}0&-2&-3\\0&0&3\\1&1&1\end{pmatrix}$$
$$\dot y=\begin{pmatrix}0&0&0\\0&-2&0\\0&0&-3\end{pmatrix}y+\Phi^{-1}\begin{pmatrix}2\\-1\\0\end{pmatrix}u=\begin{pmatrix}0&0&0\\0&-2&0\\0&0&-3\end{pmatrix}y+\begin{pmatrix}\tfrac56\\-\tfrac12\\-\tfrac13\end{pmatrix}u$$
To change the first mode to $\lambda=-5$, apply $u=-k_1y_1$:
$$1+\frac{5k_1}{6s}=0\;\to\;s+\frac{5k_1}{6}=0$$
Equating coefficients with $s+5=0$, we have
$$\frac{5k_1}{6}=5\;\to\;k_1=6$$
The control law is
$$u=-\begin{pmatrix}6&0&0\end{pmatrix}y=-\begin{pmatrix}6&0&0\end{pmatrix}\begin{pmatrix}\tfrac12&\tfrac16&1\\-\tfrac12&-\tfrac12&0\\0&\tfrac13&0\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_I\end{pmatrix}=-\begin{pmatrix}3&1&6\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_I\end{pmatrix}$$
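The integral-control design can be checked numerically. A sketch of my own (not from the notes), using NumPy with the augmented matrices of the example:

```python
import numpy as np

# Augmented system: plant states x1, x2 plus integrator state x_I
Aa = np.array([[-2.0,  1.0, 0.0],
               [ 0.0, -3.0, 0.0],
               [ 1.0,  0.0, 0.0]])
Ba = np.array([[2.0], [-1.0], [0.0]])
K  = np.array([[3.0, 1.0, 6.0]])      # u = -K x'

Acl = Aa - Ba @ K
print(np.sort(np.linalg.eigvals(Acl).real))   # [-5. -3. -2.]
```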
$$\dot x_{I1}=v-r;\qquad \dot x_{I2}=x_{I1}$$
then
$$A'=\begin{pmatrix}-2&1&0&0\\0&-3&0&0\\1&0&0&0\\0&0&1&0\end{pmatrix};\qquad B'=\begin{pmatrix}2\\-1\\0\\0\end{pmatrix}$$
$$u=-\begin{pmatrix}k_1&k_2&k_3&k_4\end{pmatrix}x'$$
Chapter 9

Full and Reduced Order Observers

9.1 Introduction

You should now appreciate that state feedback is a powerful technique which can not only stabilize complex multivariable interacting systems but is able to position the closed loop poles (eigenvalues) at any position that the designer requires. However, the main assumption with feedback of all the system states is that all the system states are measurable quantities; otherwise, the law $u=-Kx$ would be practically useless.
We have said earlier that in nearly all cases it is possible to derive $\dot x=Ax+Bu$ such that the states are "physical" quantities. This is true, but the fact that all states may be physical and measurable in principle does not mean that they are easily measurable in practice. There are a number of scenarios:
2. Deriving $\dot x$, $\ddot x$ from $x$ is a differential process and could be noisy.
5. States may be inaccessible. This is common in the nuclear and chemical industries where states (e.g. heat or fluid flow) are in areas which are inconvenient to reach.
The reasons may be any one or a mixture of the above. The problem is solved by augmenting the system controller $u=-Kx$ with a state Estimator or State Observer which literally estimates the states for use in the controller. If all the states are to be estimated, we have a Full Observer. If a portion of the states is easily measurable, then estimation of the remainder constitutes a Reduced Order Observer.
Observers work by comparing the real, measured output of the plant, $v$, with the output of a modelled or Simulated plant, $\hat v$. The error between $v$ and $\hat v$ is then used to adjust the states $\hat x$ of the simulated plant. It is these simulated states, $\hat x$, which are used for feedback, i.e. $u=-K\hat x$. The simulated plant can be either an analogue model consisting of integrators, amplifiers, etc. or a digital model on a microprocessor.
You can now see that the output vector, $v$, now comes into its own. Before, when we assumed all states were available, $v$ was rather superfluous, the $C$ matrix being either the unit matrix or a matrix consisting of the odd unity element. For observers, $v$ are the actual measurements which will be used to compare the real plant and the modelled plant performance.
where $\hat x$ and $\hat v$ are the state and output estimates. $u$ is the controller output and is obviously a known quantity. Both plant and simulator are stimulated by $u$ and as such $\hat x$ will follow $x$ providing:
Statement 2 is particularly far fetched. If $x_i$ cannot be measured, how do we know $\hat x_{i0}$? Even if $\hat x_0$ were accurate, it can be seen that $\hat x$ and $x$ will drift apart through inaccuracies in $A$ and $B$.
What is done is to apply feedback to the model system in order that the error between $x$ and $\hat x$ (which we denote as $\tilde x=x-\hat x$) reduces to zero. The resulting closed loop observer is shown in fig. 9.2.
Note that it is not necessary to include the modelled $B$ inside the model system feedback loop, since $Bu$ is a known quantity. The plant equations are:
$$\dot x=Ax+Bu\tag{9.1}$$
$$v=Cx+Du\tag{9.2}$$
and the observer equations are:
$$\dot{\hat x}=A\hat x+Bu+L(v-\hat v)\tag{9.3}$$
$$\hat v=C\hat x+Du\tag{9.4}$$
$$\dot{\hat x}=A\hat x+Bu+LC(x-\hat x)\tag{9.5}$$
Subtracting (9.5) from (9.1):
$$\dot x-\dot{\hat x}=A(x-\hat x)-LC(x-\hat x)\tag{9.6}$$
i.e.
$$\dot{\tilde x}=(A-LC)\tilde x\tag{9.7}$$
Eq. (9.7) defines the Error Dynamics. It is an unforced equation and hence the error $\tilde x$ will decay to zero according to the eigenvalues of $(A-LC)$. The error will decay to zero no matter what the initial error conditions $\tilde x_0$ (and hence $\hat x_0$) are. The problem now is to choose $L$ to give acceptable eigenvalues for $(A-LC)$. What should they be?
Remember that the final closed loop system will have the eigenvalues of $(A-BK)$. Therefore the dynamics of $x$ will be made up of transient terms defined by these closed loop eigenvalues. Since we want $\hat x$ to follow $x$, any error between them must decay faster than any closed loop transient. Rule:
The eigenvalues of $(A-LC)$ should be approximately 10 times faster than those of $(A-BK)$. Generally the slowest eigenvalue of $(A-LC)$ should be an order higher than the fastest "reasonably dominant" eigenvalue of $(A-BK)$. E.g. let the eigenvalues of $(A-BK)$ be $-4$, $-2\pm j2$, $-80$. The $-4$ is reasonably dominant, the $-80$ is not. Therefore choose the eigenvalues of $(A-LC)$ as $-40\pm j40$, $-40$, $-50$. Note the actual numbers, real or complex, are rather arbitrary.
Now we have to find $L$. Let
$$\dot x=Ax+Bu;\qquad v=Cx$$
We first convert to observer canonical form:
$$\dot x_o=A_ox_o+B_ou;\qquad v=C_ox_o\tag{9.8}$$
where
$$x_o=Tx;\quad T=P_o^{-1}P;\quad P_o=\begin{pmatrix}C_o\\C_oA_o\\C_oA_o^2\\\vdots\end{pmatrix};\quad P=\begin{pmatrix}C\\CA\\CA^2\\\vdots\end{pmatrix}\tag{9.9}$$
If $v$ is scalar, $C$ and $C_o$ will be $1\times n$ and $P$ and $P_o$ will be square. $P_o^{-1}$ will only exist if the system is observable.
We note that $A_o$ and $C_o$ can be written by inspection:
$$A_o=\begin{pmatrix}-a_1&1&0&\cdots&0\\-a_2&0&1&&\vdots\\-a_3&0&0&\ddots&0\\\vdots&&&&1\\-a_n&0&0&\cdots&0\end{pmatrix};\qquad C_o=\begin{pmatrix}1&0&\cdots&0\end{pmatrix}$$
where $a_i$ is the coefficient of $s^{n-i}$ (or $\lambda^{n-i}$) of $a(s)$, the open loop characteristic polynomial.
Since $C_o$ is $1\times n$, it follows that $L_o$ will be $n\times1$, i.e. $L_o=\begin{pmatrix}l_{o1}&l_{o2}&\cdots&l_{on}\end{pmatrix}^T$. The suffix "o" in $l_{oi}$ denotes that the design is being done in the observer canonical domain. Therefore:
$$L_oC_o=\begin{pmatrix}l_{o1}\\l_{o2}\\\vdots\\l_{on}\end{pmatrix}\begin{pmatrix}1&0&\cdots&0\end{pmatrix}=\begin{pmatrix}l_{o1}&0&\cdots&0\\l_{o2}&0&\cdots&0\\\vdots&\vdots&&\vdots\\l_{on}&0&\cdots&0\end{pmatrix}$$
and $A_o-L_oC_o$ is therefore:
$$A_o-L_oC_o=\begin{pmatrix}-a_1-l_{o1}&1&0&\cdots&0\\-a_2-l_{o2}&0&1&&\vdots\\-a_3-l_{o3}&0&0&\ddots&0\\\vdots&&&&1\\-a_n-l_{on}&0&0&\cdots&0\end{pmatrix}\tag{9.10}$$
which is in observer canonical form. Therefore $a_1+l_{o1},a_2+l_{o2},\ldots$ are the coefficients of $\beta(s)$, the closed loop characteristic polynomial of $(A_o-L_oC_o)$, i.e.
$$s^n+\beta_1s^{n-1}+\beta_2s^{n-2}+\cdots+\beta_n=(s-\lambda_{o1})(s-\lambda_{o2})\cdots(s-\lambda_{on})$$
$$l_{oi}=\beta_i-a_i\tag{9.11}$$
and
$$L_o=\begin{pmatrix}l_{o1}&l_{o2}&\cdots&l_{on}\end{pmatrix}^T\tag{9.12}$$
Now, if we used $\dot x_o=A_ox_o+B_ou$, $v=C_ox_o$ as our model system, the matrix $L_o$ would give us an estimate of the states $x_o$. Hence, the control law $u=-Kx$ would become $u=-KT^{-1}x_o$. Alternatively, we can attempt to get estimates of $x$ directly. The observer equation in observer canonical form is:
$$\dot{\hat x}_o=A_o\hat x_o+B_ou+L_o(v-\hat v)$$
Since $\hat x_o=T\hat x$, we have:
$$T\dot{\hat x}=A_oT\hat x+B_ou+L_o(v-\hat v)$$
$$\dot{\hat x}=T^{-1}A_oT\hat x+T^{-1}B_ou+T^{-1}L_o(v-\hat v)$$
$$\dot{\hat x}=A\hat x+Bu+L(v-\hat v)$$
where
$$L=T^{-1}L_o\tag{9.13}$$
The procedure for designing an observer is therefore:
1. Find the open loop characteristic polynomial $a(s)$ and form $P$.
2. Write down $A_o$, $C_o$ by inspection and form $P_o$.
3. Choose the observer eigenvalues and form $\beta(s)$.
4. Calculate $L_o$ from $l_{oi}=\beta_i-a_i$.
5. Calculate $T^{-1}=P^{-1}P_o$.
6. Calculate $L=T^{-1}L_o$.
Example Let
2 1 1
A= ; B= ; C= 2 3
0 3 1
Design a controller u = Kx to give closed loop eigenvalues of 5 j5. The states are not
accessable.
Therefore we need to design an observer with observer eigenvalues of 50 j50. The control
design has already been done in section (7.1). K is 17 12 : We are going to design the
observer following the steps above:
The open loop characteristic polynomial is: (s + 2) (s + 3) = s2 + 5s + 6.
C 2 3
Now we obtain the matrix P = =
CA 4 7
The system in observer canonical form is:
5 1
Ao = ; Co = 1 0
6 0
Co 1 0
Therefore, the matrix Po = = .
Co Ao 5 1
The observer eigenvalues are o = 50 j50 therefore (s) = (s + 50)2 + 502 = s2 + 100s +
5000.
Therefore $l_{o1} = 100 - 5 = 95$, $l_{o2} = 5000 - 6 = 4994$, and
$$L_o = \begin{pmatrix} 95 \\ 4994 \end{pmatrix}$$
Now we need the matrix $T^{-1} = P^{-1}P_o$ in order to obtain matrix $L$ in the original coordinates:
$$T^{-1} = P^{-1}P_o = \begin{pmatrix} 2 & 3 \\ -4 & -7 \end{pmatrix}^{-1}\begin{pmatrix} 1 & 0 \\ -5 & 1 \end{pmatrix} = \begin{pmatrix} -4 & \tfrac{3}{2} \\ 3 & -1 \end{pmatrix}$$
and finally:
$$L = T^{-1}L_o = \begin{pmatrix} -4 & \tfrac{3}{2} \\ 3 & -1 \end{pmatrix}\begin{pmatrix} 95 \\ 4994 \end{pmatrix} = \begin{pmatrix} 7111 \\ -4709 \end{pmatrix}$$
To check the design, we just need to obtain the eigenvalues of $(A - LC)$:
$$|\lambda I - (A - LC)| = \begin{vmatrix} \lambda + 14224 & 21332 \\ -9418 & \lambda - 14124 \end{vmatrix} = \lambda^2 + 100\lambda + 5000$$
Therefore the design is correct.
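The same check can be carried out numerically. The sketch below (a Python environment with NumPy is assumed) uses the matrices and the gain $L$ of the example, and only verifies the eigenvalues of $(A - LC)$:

```python
import numpy as np

# Numerical check of the full order observer design above.
# A, C and L are the values from the worked example.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
C = np.array([[2.0, 3.0]])
L = np.array([[7111.0],
              [-4709.0]])

# The observer error dynamics are governed by (A - LC); its eigenvalues
# should sit at the chosen observer positions -50 +/- j50.
eigs = np.linalg.eigvals(A - L @ C)
print(np.sort_complex(eigs))
```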
We can either:
use a full observer and so gain estimates of both measurable and non-measurable states. You can choose whether to use the measured states or their estimates in the feedback law (there is some sense in doing this if the measurements are noisy); or
design a reduced order estimator in which only non-measurable states are estimated (observed).
The reduced order observer is a little trickier than the full order one. The procedural implementation is however just as straightforward.
We split up the system into measurable states $x_m$ and non-measurable states $x_r$ (we'll let $r$ stand for reduced). We can write $\dot{x} = Ax + Bu$ as:
$$\begin{pmatrix} \dot{x}_m \\ \dot{x}_r \end{pmatrix} = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}\begin{pmatrix} x_m \\ x_r \end{pmatrix} + \begin{pmatrix} B_1 \\ B_2 \end{pmatrix}u \quad (9.14)$$
and
$$v = \begin{pmatrix} C_1 & 0 \end{pmatrix}\begin{pmatrix} x_m \\ x_r \end{pmatrix} \quad (9.15)$$
$C_1$ will generally be a unit matrix. We can write the state equations for $x_r$ and $x_m$ as:
$$\dot{x}_r = A_{22}x_r + (A_{21}x_m + B_2 u) \quad (9.16)$$
$$\dot{x}_m = A_{11}x_m + A_{12}x_r + B_1 u$$
Treating $(A_{21}x_m + B_2 u)$ as a known input and $A_{12}x_r = \dot{x}_m - A_{11}x_m - B_1 u$ as a known measurement, we can build an observer for $x_r$ alone, therefore:
$$\dot{\hat{x}}_r = A_{22}\hat{x}_r + u_r + L(v_r - \hat{v}_r) \quad (9.19)$$
i.e.
$$\dot{\hat{x}}_r = A_{22}\hat{x}_r + \underbrace{(A_{21}x_m + B_2 u)}_{u_r} + L\,(\underbrace{\dot{x}_m - A_{11}x_m - B_1 u}_{v_r} - \underbrace{A_{12}\hat{x}_r}_{\hat{v}_r}) \quad (9.20)$$
or
$$\dot{\hat{x}}_r = A_{22}\hat{x}_r + (A_{21}x_m + B_2 u) + L(A_{12}x_r - A_{12}\hat{x}_r) \quad (9.21)$$
and now we subtract eq. 9.21 from eq. 9.16 to yield:
$$\dot{x}_r - \dot{\hat{x}}_r = A_{22}(x_r - \hat{x}_r) - LA_{12}(x_r - \hat{x}_r)$$
$$\dot{\tilde{x}}_r = (A_{22} - LA_{12})\tilde{x}_r \quad (9.22)$$
which is the unforced dynamic equation for the errors in the unknown states and is analogous
to eq 9.7. The L matrix can be found by the procedures developed in section 9.2.1 with the
matrix A22 acting as the system matrix A and A12 acting as the output matrix C.
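For computer-aided design, the gain can also be obtained by pole placement on the dual pair $(A_{22}^T, A_{12}^T)$, for example with SciPy's `place_poles`; this replaces the canonical-form procedure of section 9.2.1 by a numerical routine. The partition matrices and pole locations below are illustrative, not from a specific plant:

```python
import numpy as np
from scipy.signal import place_poles

# Sketch: find L so that the error dynamics eig(A22 - L A12) of eq. 9.22
# sit at chosen observer poles.  By duality this is pole placement on the
# transposed pair (A22^T, A12^T).
A22 = np.array([[0.0, 1.0],
                [-2.0, -3.0]])   # plays the role of the system matrix A
A12 = np.array([[1.0, 0.0]])     # plays the role of the output matrix C
poles = [-20.0, -25.0]           # illustrative observer pole choices

res = place_poles(A22.T, A12.T, poles)
L = res.gain_matrix.T            # transpose back: L multiplies the "output" A12 x_r

print(np.sort(np.linalg.eigvals(A22 - L @ A12).real))
```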
The observer however remains to be implemented. The full observer was represented by eq 9.3. The reduced order equivalent is 9.19 or 9.20. Collecting terms in that equation, we get:
$$\dot{\hat{x}}_r = (A_{22} - LA_{12})\hat{x}_r + (A_{21} - LA_{11})x_m + (B_2 - LB_1)u + L\dot{x}_m \quad (9.23)$$
Now $x_m$ and $u$ are no problem since they are known. To get rid of the $L\dot{x}_m$ term we define a new state:
$$\dot{x}_c = \dot{\hat{x}}_r - L\dot{x}_m$$
i.e.
$$x_c = \hat{x}_r - Lx_m \quad (9.24)$$
Therefore
$$\dot{x}_c = (A_{22} - LA_{12})\hat{x}_r + (A_{21} - LA_{11})x_m + (B_2 - LB_1)u \quad (9.25)$$
And the reduced order observer (eqs. 9.24 and 9.25) can be implemented as shown in fig. 9.4.
I'm afraid you will have to remember this block diagram (or 9.24 and 9.25) together with eq. 9.22. Armed with these you can now design reduced order observers.
Example Let
$$A = \begin{pmatrix} -2 & 1 \\ 0 & -3 \end{pmatrix}, \quad B = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad C = \begin{pmatrix} 1 & 0 \end{pmatrix}$$
Design a controller $u = -Kx$ to give closed loop eigenvalues of $-5 \pm j5$. The control design has already been done in section (7.1); $K$ is $\begin{pmatrix} 17 & -12 \end{pmatrix}$. The state $x_1$ is measurable and $x_2$ is inaccessible.
We have $A_{11} = -2$, $A_{12} = 1$, $A_{21} = 0$, $A_{22} = -3$. Observer eigenvalues are given by those of $(A_{22} - LA_{12}) = -3 - L \cdot 1 = -(3 + L)$. Let the observer pole be at $s = -50$. Therefore $|\lambda I - (A_{22} - LA_{12})| = \lambda + 3 + L = 0$, hence $L = 47$.
The resulting observer and control loop is shown in fig. 9.5.
Figure 9.5: Block diagram of the complete system (control and reduced order observer)
Has the existence of the observer altered the principal (and designed) closed loop eigenvalues? The answer is no. We shall prove this for the full observer case.
For an n-th order system, with states $x$, the observer has effectively added another n state variables $\hat{x}$. The closed loop system will be of order 2n. Since $\tilde{x} = x - \hat{x}$, i.e. a linear combination of estimated and real states, we can therefore use $x$ and $\tilde{x}$ as the 2n state variables for the closed loop system (the eigenvalues of the closed loop system matrix will be the same as if we had used $x$ and $\hat{x}$ as the 2n states).
The closed loop system is defined by:
$$\dot{x} = Ax + Bu, \quad u = -K\hat{x} \quad (9.26)$$
$$\dot{x} = Ax - BK\hat{x} = Ax - BK(x - \tilde{x})$$
$$\dot{x} = (A - BK)x + BK\tilde{x} \quad (9.28)$$
$$\dot{\tilde{x}} = (A - LC)\tilde{x}$$
Eq. 9.28 are the closed loop state equations and can be written (neglecting reference inputs):
$$\begin{pmatrix} \dot{x} \\ \dot{\tilde{x}} \end{pmatrix} = \begin{pmatrix} (A - BK) & BK \\ 0 & (A - LC) \end{pmatrix}\begin{pmatrix} x \\ \tilde{x} \end{pmatrix} \quad (9.29)$$
The eigenvalues of the closed loop are given by:
$$\begin{vmatrix} \lambda I - (A - BK) & -BK \\ 0 & \lambda I - (A - LC) \end{vmatrix} = 0$$
i.e. the eigenvalues of the closed loop system consist of the union of the eigenvalues of $(A - BK)$ and of $(A - LC)$. This is extremely convenient as it means that:
The controller and observer can be designed independently. Their existence
does not affect the eigenvalues of the other.
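This separation property is easy to verify numerically. The sketch below (Python with NumPy assumed) builds the composite matrix of eq. 9.29 from the matrices, $K$ and $L$ of the earlier example and compares its eigenvalues with the union of the separately designed sets:

```python
import numpy as np

# Separation property check: the eigenvalues of the composite 2n-state
# matrix of eq. 9.29 are the union of eig(A - BK) and eig(A - LC).
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[2.0, 3.0]])
K = np.array([[17.0, -12.0]])
L = np.array([[7111.0],
              [-4709.0]])

composite = np.block([[A - B @ K, B @ K],
                      [np.zeros((2, 2)), A - L @ C]])

union = np.concatenate([np.linalg.eigvals(A - B @ K),
                        np.linalg.eigvals(A - L @ C)])
print(np.sort_complex(np.linalg.eigvals(composite)))
print(np.sort_complex(union))        # the same four eigenvalues
```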
1. The closed loop zeros are the same as the plant open loop zeros.
Figure 9.6: Block diagram of a SISO system with state feedback control using a full order
observer
2. The observer poles are cancelled by equal zeros. This means that the fast observer modes are uncontrollable from r and unobservable at v. It does not mean that the system is only of order n: eq. 9.30 still holds and the full system contains 2n eigenvalues.
3. Steady state gain adjustment via the parameter $k_1$ can be carried out as in section 8.1.2.
In fact, statement 2 is true for fig. 9.6 even for a multi-input case, i.e. $K_1$ is a matrix. However, in this case, statements 1 and 3 do not hold.
Example Given:
$$\frac{V(s)}{U(s)} = \frac{(s+4)}{(s+3)(s+2)}$$
design a controller to place the closed loop poles at $-5 \pm j5$. Only $v$ is measurable. The system is to have unity DC gain.
The system is second order. Let $x_1 = v$ and let $x_2 = \frac{1}{(s+3)}U$. This choice of $x_2$ is rather arbitrary and is made because the state space equation is thus:
$$\dot{x} = \begin{pmatrix} -2 & 1 \\ 0 & -3 \end{pmatrix}x + \begin{pmatrix} 1 \\ 1 \end{pmatrix}u, \quad v = \begin{pmatrix} 1 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$
$$u = -\begin{pmatrix} 17 & -12 \end{pmatrix}\begin{pmatrix} x_1 \\ \hat{x}_2 \end{pmatrix}$$
where $\hat{x}_2$ is the output of a reduced order observer with $L = 47$ (see section 9.3).
From eq. 9.31 the closed loop transfer function is to be:
$$\frac{V(s)}{R(s)} = \frac{K_1(s+4)}{s^2 + 10s + 50}$$
which has a steady state gain of $0.08K_1$. Therefore $K_1 = 12.5$ and $K_2$ is a $1 \times 2$ matrix, $K_2 = \begin{pmatrix} 1.36 & 0.96 \end{pmatrix}$.
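The $K_1$ calculation can be checked directly by evaluating the target closed loop transfer function at $s = 0$ (a trivial sketch; the polynomial is the one stated above):

```python
import numpy as np

# DC gain of V(s)/R(s) = K1 (s + 4) / (s^2 + 10 s + 50): at s = 0 the
# gain is 4 K1 / 50 = 0.08 K1, so unity DC gain needs K1 = 50/4 = 12.5.
def dc_gain(K1):
    s = 0.0
    return K1 * (s + 4.0) / (s**2 + 10.0 * s + 50.0)

print(dc_gain(1.0))    # 0.08
print(dc_gain(12.5))   # 1.0
```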
Figure 9.7: Use of a full order observer in a system with integral action
Chapter 10
Time Solution of the State Space Equation
10.1 Introduction
In this section we will solve the state space equation $\dot{x} = Ax + Bu$, i.e. given $x(t_0)$, the initial states at $t = t_0$, and $u(t)$, we will solve for $x(t)$.
Although we will concentrate on solving the plant state equation $\dot{x} = Ax + Bu$, the same techniques can be used to solve for the closed loop system equation $\dot{x} = (A - BK)x + BK_1 r$ since this is mathematically identical.
The aim of this section is twofold:
1. To enable you to simulate the transient waveforms of the uncontrolled and controlled plants on a computer (for a linear time-invariant system). This is essential as it enables you to test out your control designs and see how sensitive the design is to parameter variations, which generally exist in practice.
2. It provides a bridge between continuous state space systems and discrete time state space
systems.
The theory of the continuous solution will be done …rst. We will then “discretise”the theory
for implementation on a computer.
For a first order system $\dot{x} = ax + bu$ the solution can be written:
$$x(t) = e^{a(t-t_0)}x(t_0) + e^{a(t-t_0)}q(t - t_0) \quad (10.1)$$
The first term is known as the complementary function and the second is a particular integral. The complementary function is the solution to the unforced equation $\dot{x} = ax$ and depends only on the initial conditions and the system modes (in this case $e^{at}$). The particular integral is the solution of the forced system with zero initial conditions. It is a function of the input $u(t)$ and the system modes. $q(t)$ is a function to be found.
If you don't understand eq 10.1 don't worry, just accept it. For the vector equation $\dot{x} = Ax + Bu$ we will postulate, by analogy, a solution of the form:
$$x(t) = x_{CF}(t - t_0) + x_{PI}(t - t_0) = \Phi(t - t_0)\,x(t_0) + \Phi(t - t_0)\,q(t - t_0) \quad (10.2)$$
Time Solution of the State Space Equation 76
where $x(t_0)$ is the initial condition vector and is $n \times 1$, $q(t - t_0)$ is also $n \times 1$ and is our input dependent function and is to be found. Therefore $\Phi(t - t_0)$ is an $n \times n$ matrix. It is called the Transition Matrix. For a first order system $\Phi(t - t_0) = e^{a(t-t_0)}$. For higher order systems $\Phi(t - t_0)$ will contain elements which are time functions of the modes, i.e. elements like $e^{\lambda_1(t-t_0)}$, $e^{-t}\sin t$, etc. We write $\Phi(t - t_0)$ rather than just $\Phi$ to emphasize that it is a matrix whose elements are time functions, time beginning at $t = t_0$ (normally $t_0 = 0$).
Now $x_{CF}(t - t_0) = \Phi(t - t_0)x(t_0)$, and $x_{CF}$ must obey the unforced equation $\dot{x} = Ax$. Therefore:
$$\dot{\Phi}(t - t_0)x(t_0) = A\Phi(t - t_0)x(t_0)$$
Therefore:
$$\dot{\Phi}(t - t_0) = A\Phi(t - t_0) \quad (10.3)$$
in other words, $\Phi$ itself obeys $\dot{x} = Ax$. In fact, $\Phi$ has the following properties:
1. $\dot{\Phi}(t - t_0) = A\,\Phi(t - t_0)$
2. $\Phi(t_0 - t_0) = I$
3. $\Phi(t_2 - t_1)\,\Phi(t_1 - t_0) = \Phi(t_2 - t_0)$
4. $\Phi^{-1}(t - t_0) = \Phi(t_0 - t)$
We will see in section 10.3 how to calculate the transition matrix. Now we are going to find the unknown function $q(t - t_0)$:
Since $x_{PI}(t - t_0) = \Phi(t - t_0)q(t - t_0)$ is the solution to $\dot{x} = Ax + Bu$, it must obey it:
$$\dot{\Phi}(t - t_0)q(t - t_0) + \Phi(t - t_0)\dot{q}(t - t_0) = A\Phi(t - t_0)q(t - t_0) + Bu(t) \quad (10.4)$$
$$A\Phi(t - t_0)q(t - t_0) + \Phi(t - t_0)\dot{q}(t - t_0) = A\Phi(t - t_0)q(t - t_0) + Bu(t)$$
$$\Phi(t - t_0)\dot{q}(t - t_0) = Bu(t)$$
$$\dot{q}(t - t_0) = \Phi^{-1}(t - t_0)Bu(t)$$
$$q(t - t_0) = \int_{t_0}^{t} \Phi^{-1}(\tau - t_0)Bu(\tau)\,d\tau$$
$$x(t) = x_{CF}(t - t_0) + x_{PI}(t - t_0)$$
$$x(t) = \Phi(t - t_0)x(t_0) + \int_{t_0}^{t}\Phi(t - \tau)Bu(\tau)\,d\tau \quad (10.6)$$
You must remember this equation and understand its structure, you will not be asked to
derive it.
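Eq. 10.6 can be evaluated numerically as a sketch (Python with NumPy/SciPy assumed; $\Phi$ is computed with `scipy.linalg.expm` and the convolution integral by the trapezoidal rule). The plant and input here are those of the worked example later in the chapter:

```python
import numpy as np
from scipy.linalg import expm

# Sketch of eq. 10.6:
#   x(t) = Phi(t - t0) x(t0) + integral_{t0}^{t} Phi(t - tau) B u(tau) dtau
# with t0 = 0, Phi computed by the matrix exponential.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
x0 = np.array([[1.0],
               [0.0]])

def u(t):
    return -2.0                      # step input u(t) = -2 for t > 0

def x_of_t(t, n=500):
    taus = np.linspace(0.0, t, n + 1)
    vals = [expm(A * (t - tau)) @ B * u(tau) for tau in taus]
    dt = t / n
    x_pi = sum((vals[i] + vals[i + 1]) * (dt / 2.0) for i in range(n))
    return expm(A * t) @ x0 + x_pi   # complementary function + particular integral

print(x_of_t(1.0).ravel())
```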
Now all we have to do is find $\Phi(t - t_0)$:
$$x(s) = (sI - A)^{-1}x(t_0) + (sI - A)^{-1}Bu(s)$$
and
$$x(t) = \underbrace{\mathcal{L}^{-1}\left\{(sI - A)^{-1}\right\}x(t_0)}_{\text{complem. func.}} + \underbrace{\mathcal{L}^{-1}\left\{(sI - A)^{-1}Bu(s)\right\}}_{\text{part. integral}}$$
Therefore, it can be seen that
$$\Phi(t) = \mathcal{L}^{-1}\left\{(sI - A)^{-1}\right\}$$
This looks neat until you try an example, e.g.
$$A = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix}, \quad (sI - A)^{-1} = \frac{1}{(s+1)(s+2)}\begin{pmatrix} s+3 & 1 \\ -2 & s \end{pmatrix}$$
$$\Phi(t) = \mathcal{L}^{-1}\left\{(sI - A)^{-1}\right\} = \begin{pmatrix} \mathcal{L}^{-1}\left\{\frac{s+3}{(s+1)(s+2)}\right\} & \mathcal{L}^{-1}\left\{\frac{1}{(s+1)(s+2)}\right\} \\ \mathcal{L}^{-1}\left\{\frac{-2}{(s+1)(s+2)}\right\} & \mathcal{L}^{-1}\left\{\frac{s}{(s+1)(s+2)}\right\} \end{pmatrix}$$
Note: to obtain $\Phi(t - t_0)$, just find the inverse Laplace transform and replace $t$ in the resulting expressions by $(t - t_0)$. The example shows that for an n-th order system you have $n^3$ partial fraction coefficients to find.
$$e^{At} = I + At + \frac{A^2t^2}{2!} + \frac{A^3t^3}{3!} + \frac{A^4t^4}{4!} + \cdots \quad (10.7)$$
and
$$\frac{d}{dt}\left\{e^{At}\right\} = A + \frac{A^2t}{1!} + \frac{A^3t^2}{2!} + \frac{A^4t^3}{3!} + \cdots \quad (10.8)$$
We can say that $\Phi(t) = e^{At}$ as long as $e^{At}$ satisfies the properties of $\Phi$ in sec. 10.1. Well, multiplying the first equation by $A$ we obtain the second equation. Therefore $\frac{d}{dt}\left\{e^{At}\right\} = Ae^{At}$, i.e. the matrix exponential is a solution to the homogeneous equation. Also $e^{At} = I$ for $t = 0$. You can take it from me that properties 3 and 4 are satisfied as well, therefore:
$$\Phi(t - t_0) = e^{A(t-t_0)} \quad (10.9)$$
Now let us use this expression to calculate $\Phi(t - t_0)$:
Let the unforced equation $\dot{x} = Ax$ be transformed to $\dot{y} = \Lambda y$, where $\Lambda = V^{-1}AV$ is the diagonal eigenvalue matrix and $V$ is the matrix of eigenvectors of $A$. The solution $y(t)$ is (assuming $t_0 = 0$):
$$y(t) = e^{\Lambda t}y(0)$$
However, $y(t) = V^{-1}x$, therefore
$$x(t) = Ve^{\Lambda t}V^{-1}x(0)$$
and
$$\Phi = Ve^{\Lambda t}V^{-1} \quad (10.10)$$
Example
Find the complete solution of $\dot{x} = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix}x + \begin{pmatrix} 0 \\ 1 \end{pmatrix}u$ for a step function $u(t) = -2$ $(t > 0)$. Let $x(0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$.
First of all, we have to find $\Phi$. $A = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix}$, $\lambda_1 = -1$, $\lambda_2 = -2$, $V = \begin{pmatrix} 1 & 1 \\ -1 & -2 \end{pmatrix}$, $V^{-1} = \begin{pmatrix} 2 & 1 \\ -1 & -1 \end{pmatrix}$.
Therefore:
$$\Phi = \begin{pmatrix} 1 & 1 \\ -1 & -2 \end{pmatrix}\begin{pmatrix} e^{-t} & 0 \\ 0 & e^{-2t} \end{pmatrix}\begin{pmatrix} 2 & 1 \\ -1 & -1 \end{pmatrix} = \begin{pmatrix} 2e^{-t} - e^{-2t} & e^{-t} - e^{-2t} \\ -2e^{-t} + 2e^{-2t} & -e^{-t} + 2e^{-2t} \end{pmatrix}$$
$$\Phi(t)\,x(0) = \begin{pmatrix} 2e^{-t} - e^{-2t} & e^{-t} - e^{-2t} \\ -2e^{-t} + 2e^{-2t} & -e^{-t} + 2e^{-2t} \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 2e^{-t} - e^{-2t} \\ -2e^{-t} + 2e^{-2t} \end{pmatrix}$$
$$Bu(\tau) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}(-2) = \begin{pmatrix} 0 \\ -2 \end{pmatrix}$$
hence:
$$\Phi(t-\tau)Bu(\tau) = \begin{pmatrix} 2e^{-(t-\tau)} - e^{-2(t-\tau)} & e^{-(t-\tau)} - e^{-2(t-\tau)} \\ -2e^{-(t-\tau)} + 2e^{-2(t-\tau)} & -e^{-(t-\tau)} + 2e^{-2(t-\tau)} \end{pmatrix}\begin{pmatrix} 0 \\ -2 \end{pmatrix} = \begin{pmatrix} -2e^{-(t-\tau)} + 2e^{-2(t-\tau)} \\ 2e^{-(t-\tau)} - 4e^{-2(t-\tau)} \end{pmatrix}$$
$$x_{PI} = \int_0^t \begin{pmatrix} -2e^{-(t-\tau)} + 2e^{-2(t-\tau)} \\ 2e^{-(t-\tau)} - 4e^{-2(t-\tau)} \end{pmatrix} d\tau = \left[\begin{matrix} -2e^{-(t-\tau)} + e^{-2(t-\tau)} \\ 2e^{-(t-\tau)} - 2e^{-2(t-\tau)} \end{matrix}\right]_0^t = \begin{pmatrix} -2 + 1 + 2e^{-t} - e^{-2t} \\ 2 - 2 - 2e^{-t} + 2e^{-2t} \end{pmatrix} = \begin{pmatrix} -1 + 2e^{-t} - e^{-2t} \\ -2e^{-t} + 2e^{-2t} \end{pmatrix}$$
The complete solution is therefore $x(t) = \Phi(t)x(0) + x_{PI}(t) = \begin{pmatrix} -1 + 4e^{-t} - 2e^{-2t} \\ -4e^{-t} + 4e^{-2t} \end{pmatrix}$.
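The closed form terms derived above can be cross-checked against SciPy's matrix exponential (a sketch; for a constant input the particular integral equals $A^{-1}(e^{At} - I)Bu$ exactly):

```python
import numpy as np
from scipy.linalg import expm

# Check of the worked example: x(t) = Phi(t) x(0) + xPI(t) for u = -2.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
x0 = np.array([[1.0],
               [0.0]])

for t in [0.5, 1.0, 3.0]:
    # closed form CF and PI as derived in the text
    e1, e2 = np.exp(-t), np.exp(-2 * t)
    x_cf = np.array([[2 * e1 - e2], [-2 * e1 + 2 * e2]])
    x_pi = np.array([[-1 + 2 * e1 - e2], [-2 * e1 + 2 * e2]])
    # same result via the exact formula for a constant input u = -2:
    x_num = expm(A * t) @ x0 + np.linalg.inv(A) @ (expm(A * t) - np.eye(2)) @ B * (-2.0)
    assert np.allclose(x_cf + x_pi, x_num)
print("closed form solution confirmed")
```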
[Figure: time responses of the states against $t$ (s), and a phase plane plot of $x_2$ against $x_1$.]
$$x(k+1) = E\,x(k) + F\,u(k) \quad (10.11)$$
where $E$ is $n \times n$ and called the discrete system matrix. We can write $E(h)$ to denote that the elements of $E$ will be dependent on $h$. $F$ is $n \times r$ and is the discrete input matrix. This equation is called the Discrete State Space Equation. The problem is: given the continuous state space matrices $A, B$, how do we get $E, F$?
Note that if $v = Cx + Du$, then obviously $v(k) = Cx(k) + Du(k)$.
Using Euler's approximation for the derivative:
$$\dot{x} \approx \frac{x(k+1) - x(k)}{h} \approx Ax(k) + Bu(k)$$
therefore
$$x(k+1) - x(k) = Ah\,x(k) + Bh\,u(k)$$
and
$$x(k+1) = (I + Ah)\,x(k) + Bh\,u(k)$$
$$E(h) = (I + Ah), \quad F(h) = Bh \quad (10.12)$$
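As a sketch (Python with NumPy/SciPy, and an illustrative plant rather than a specific design), the crude Euler values of eq. 10.12 can be compared with the exact zero order hold values $E = e^{Ah}$, $F = A^{-1}(e^{Ah} - I)B$:

```python
import numpy as np
from scipy.linalg import expm

# Euler discretisation E = I + Ah, F = Bh versus the exact values.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
h = 0.01

E_euler = np.eye(2) + A * h
F_euler = B * h
E_exact = expm(A * h)
F_exact = np.linalg.inv(A) @ (E_exact - np.eye(2)) @ B

print(np.abs(E_euler - E_exact).max())  # discrepancy of order h^2
print(np.abs(F_euler - F_exact).max())
```

For small $h$ the discrepancy shrinks like $h^2$, which is why the Euler values are only acceptable for very fast sampling.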
Let $t_0 = kh$ and $t = (k+1)h$ (now you see our obsession with retaining $t_0$). We see that $t - t_0 = h$, so we can use eq. 10.13 as an excellent predictor formula from one time step to the next. Therefore, we have:
$$x((k+1)h) = \Phi(h)\,x(kh) + \int_{kh}^{(k+1)h}\Phi((k+1)h - \tau)\,Bu(\tau)\,d\tau \quad (10.14)$$
$$\Phi(h) = e^{Ah} = I + Ah + \frac{A^2h^2}{2!} + \frac{A^3h^3}{3!} + \cdots \quad (10.17)$$
This expression is computed until the last added term satisfies $\left\|\frac{A^n h^n}{n!}\right\| < \varepsilon$, a small value. Usually 7 to 10 terms are sufficient.
The computation of $\int_0^h \Phi(\tau)\,d\tau$ is also straightforward (we can integrate the above expression term by term):
$$\int_0^h e^{A\tau}d\tau = Ih + \frac{Ah^2}{2!} + \frac{A^2h^3}{3!} + \cdots = A^{-1}\left(\Phi(h) - I\right) \quad (10.18)$$
Therefore, we only need to calculate the power series once. In order to avoid taking the inverse of $A$, the power series of the integral term is usually calculated directly:
$$\int_0^h e^{A\tau}d\tau = Ih + \frac{Ah^2}{2!} + \frac{A^2h^3}{3!} + \cdots \quad (10.19)$$
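The series computations of eqs. 10.17 and 10.19 can be sketched as follows (Python with NumPy assumed; the tolerance $\varepsilon$ and the term cap are illustrative choices, and the result is checked against SciPy's matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

# Truncated power series for Phi(h) (eq. 10.17) and for the integral term
# (eq. 10.19), both built from the same running term A^k h^k / k!.
def discretise(A, B, h, eps=1e-12, max_terms=50):
    n = A.shape[0]
    Phi = np.eye(n)
    Integral = np.eye(n) * h             # the Ih term of eq. 10.19
    term = np.eye(n)                     # running term A^k h^k / k!
    for k in range(1, max_terms):
        term = term @ A * h / k          # next term of the Phi series
        Phi += term
        Integral += term * h / (k + 1)   # corresponding term A^k h^(k+1)/(k+1)!
        if np.abs(term).max() < eps:
            break
    return Phi, Integral @ B             # E = Phi(h), F = (integral of Phi) B

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
h = 0.1
E, F = discretise(A, B, h)
print(np.allclose(E, expm(A * h)))       # True
```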
The computer solution via (10.17), (10.18), (10.16) and (10.11) is for linear, time invariant $A$ and $B$ matrices (i.e. ones with numbers in). If the system is non-linear then either $A$ and $B$ contain non-linear functions or else $\dot{x} = Ax + Bu$ cannot be formulated at all. The solution of $\dot{x} = f(x, u)$ for non-linear systems demands integration routines based on methods like Runge-Kutta or Adams'. Simple versions of these are quite easy to write and implement on a small computer but the methods are those of numerical analysis and are thus outside of this course. Finally it should be remembered that sophisticated integration routines are available as such or integrated in software packages such as Matlab.
Chapter 11
Discrete State Space Design for Digital Control Systems
11.1 Introduction
Consider the discrete time state space equation $x(k+1) = Ex(k) + Fu(k)$ where $E$ and $F$ are given by eq (10.16), which itself was derived from eq. (10.13). What the discrete equation does is to define the system response over discrete time steps, given that $u$ is held constant over each time step, i.e. $u$ put through a sample and hold (zero order hold).
Therefore the matrices $E$ and $F$ correspond to the zero order hold equivalent of the continuous time system. If we want to obtain the input output relationship (z domain transfer function) we just have to apply the z transform to the following equations:
Discrete State Space Design for Digital Control Systems 84
2. Obtain $G(z)$ first, i.e. $G(z) = (1 - z^{-1})\,\mathcal{Z}\left\{\frac{1}{s}G(s)\right\}$. Then $G(z)$ can be put into control, observer or modal canonical form via the procedures of section 3, methods 1, 2 and 4. However, finding $G(z)$ first is not recommended. First of all, it would only be useful for single input single output systems and, more important, the physicality of the states is lost.
3. For an n-th order system, $n > 2$, locate the closed loop pole pair on an $\omega_s/\omega_0$ contour with a good damping factor. Fix the remaining poles closer to the origin of the z plane.
4. Formulate the closed loop characteristic polynomial $p_c(z)$ from the desired pole positions.
5. Use $p_c(z)$ to determine $u(k) = -Kx(k)$ using either control or modal canonical forms.
Note that with the above observer design the microprocessor will be simulating the plant with a time step $T_0$. Every 10 time steps it will calculate $u(k) = -Kx(k)$.
Alternatively, you can run the controller and the observer with the same sampling period ($T$). The observer eigenvalues now need to be 10 times faster than the control eigenvalues. You would then choose $\omega_s \approx 40\omega_0$ (i.e. the plant dominant poles lying very near to the $z = 1$ point). The dominant observer poles are then placed on the $\omega_s/\omega_0 = 4$ contour. This method has the advantage that the same $E$ and $F$ matrices are used for both control and observer design. Needless to say, your microprocessor/controller sample rate is always determined by the observer and for fast servo systems you may need very fast micros.
Because of the z transform final value theorem $\left\{\lim_{k \to \infty} f(k) = \lim_{z \to 1}(z-1)F(z)\right\}$, it can be shown that the term $\frac{1}{z-1}$ is sufficient to increase the system type. Note that $\frac{1}{z-1}$ is not the z transform of $\frac{1}{s}$: it is chosen as the simplest function of z which will do the trick.
The feedback law will be:
$$u(k) = -Kx(k) - k_I x_I(k)$$
$$X_I(z) = \frac{1}{z-1}\left[V(z) - R(z)\right]$$
and therefore
The reduced order discrete observer derivation proceeds analogously with eqs (9.16) to (9.23) with $\dot{x}$ terms replaced by $x(k+1)$ terms. Eq. (9.23) becomes:
$$\hat{x}_r(k+1) = (E_{22} - LE_{12})\hat{x}_r(k) + (E_{21} - LE_{11})x_m(k) + (F_2 - LF_1)u(k) + Lx_m(k+1) \quad (11.5)$$
$$x_c(k+1) = \hat{x}_r(k+1) - Lx_m(k+1)$$
$$x_c(k) = \hat{x}_r(k) - Lx_m(k) \quad (11.6)$$
and
$$x_c(k+1) = (E_{22} - LE_{12})\hat{x}_r(k) + (E_{21} - LE_{11})x_m(k) + (F_2 - LF_1)u(k) \quad (11.7)$$
Once again, $z^{-1}$ is certainly not the z transform of $\frac{1}{s}$. It is just the simplest way of implementing the reduced order observer equation above. The observer in fig. 11.2 can easily be implemented on a microprocessor.
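As a sketch of how eqs. 11.6 and 11.7 run on a computer, the following Python loop (NumPy/SciPy assumed) simulates a discretised plant together with the reduced order observer recursion. The plant partition and the gain $L$ here are illustrative choices, not a worked design from the text:

```python
import numpy as np
from scipy.linalg import expm

# Discrete reduced order observer, eqs. 11.6/11.7:
#   xc(k+1)   = (E22 - L E12) xr_hat(k) + (E21 - L E11) xm(k) + (F2 - L F1) u(k)
#   xr_hat(k) = xc(k) + L xm(k)
# Scalar partitions: one measured state xm, one estimated state xr.
h = 0.01
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
B = np.array([[1.0],
              [1.0]])
E = expm(A * h)                             # exact zero order hold matrices
F = np.linalg.inv(A) @ (E - np.eye(2)) @ B
E11, E12, E21, E22 = E[0, 0], E[0, 1], E[1, 0], E[1, 1]
F1, F2 = F[0, 0], F[1, 0]
L = 0.5                                     # illustrative gain; |E22 - L*E12| < 1

x = np.array([1.0, -1.0])                   # true states [xm, xr]
xc = 0.0                                    # observer state
for k in range(2000):
    u = 0.0
    xm = x[0]
    xr_hat = xc + L * xm                    # eq. 11.6 rearranged
    xc = (E22 - L * E12) * xr_hat + (E21 - L * E11) * xm + (F2 - L * F1) * u
    x = E @ x + (F * u).ravel()             # plant step

print(abs((xc + L * x[0]) - x[1]))          # estimation error, decays to ~0
```

The error obeys the discrete analogue of eq. 9.22, $e(k+1) = (E_{22} - LE_{12})e(k)$, so any $L$ keeping $|E_{22} - LE_{12}| < 1$ gives a convergent estimate.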