Solution Part1.Ps
Kemin Zhou
January 9, 1998
Preface
This solutions manual contains two parts. The first part contains some intuitive derivations of H2 and H∞ control. The derivations given here are not strictly rigorous, but I feel that they are helpful (at least to me) in understanding H2 and H∞ control theory. The second part contains the solutions to problems in the book. Most problems are solved in detail, but not all of them. It should also be noted that many problems do not have unique solutions, so the manual should only be used as a reference. It is also possible that there are errors in the manual. I would very much appreciate your comments and suggestions.

Kemin Zhou
Contents
Preface

Part I

1 Understanding H2/LQG Control
2 Understanding H∞ Control

Part II  Solutions Manual

1 Introduction
2 Linear Algebra
3 Linear Dynamical Systems
4 H2 and H∞ Spaces
5 Internal Stability
6 Performance Specifications and Limitations
7 Balanced Model Reduction
8 Model Uncertainty and Robustness
9 Linear Fractional Transformation
10 Structured Singular Value
11 Controller Parameterization
12 Riccati Equations
13 H2 Optimal Control
14 H∞ Control
15 Controller Reduction
16 H∞ Loop Shaping
17 Gap Metric and ν-Gap Metric
Part I
Chapter 1
Understanding H2/LQG Control
We present a natural and intuitive approach to H2/LQG control theory from the state feedback and state estimation point of view.
Consider the system described by

$$\dot{x} = Ax + B_1 w + B_2 u \qquad (1.1)$$
$$z = C_1 x + D_{12} u \qquad (1.2)$$
$$y = C_2 x + D_{21} w \qquad (1.3)$$

and assume that

(i) $(A, B_2)$ is stabilizable;

(ii) $(C_2, A)$ is detectable;

(iii) $\begin{bmatrix} A - j\omega I & B_2 \\ C_1 & D_{12} \end{bmatrix}$ has full column rank for all $\omega$;

(iv) $\begin{bmatrix} A - j\omega I & B_1 \\ C_2 & D_{21} \end{bmatrix}$ has full row rank for all $\omega$.

Let $T_{zw}$ denote the transfer matrix from $w$ to $z$, with the $H_2$ norm

$$\|T_{zw}\|_2 := \sqrt{\frac{1}{2\pi}\int_{-\infty}^{\infty} \mathrm{trace}\left[T_{zw}(j\omega)^* T_{zw}(j\omega)\right] d\omega}.$$

LQG Control Problem: Assume $w(t)$ is a zero mean, unit variance, white Gaussian noise,
$$E\{w(t)\} = 0, \qquad E\{w(t)w(\tau)^*\} = I\,\delta(t - \tau).$$
Find a control law $u = K(s)y$ that minimizes
$$J = E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T \|z\|^2\, dt\right\}.$$
Define
$$R_1 := D_{12}^* D_{12}, \qquad R_2 := D_{21} D_{21}^*,$$
$$A_x := A - B_2 R_1^{-1} D_{12}^* C_1, \qquad A_y := A - B_1 D_{21}^* R_2^{-1} C_2.$$
Then assumptions (iii) and (iv) guarantee that the following algebraic Riccati equations have stabilizing solutions $X_2 \geq 0$ and $Y_2 \geq 0$, respectively:
$$A_x^* X_2 + X_2 A_x - X_2 B_2 R_1^{-1} B_2^* X_2 + C_1^*(I - D_{12}R_1^{-1}D_{12}^*)C_1 = 0$$
$$A_y Y_2 + Y_2 A_y^* - Y_2 C_2^* R_2^{-1} C_2 Y_2 + B_1(I - D_{21}^* R_2^{-1} D_{21})B_1^* = 0.$$
Define
$$F_2 := -R_1^{-1}(B_2^* X_2 + D_{12}^* C_1), \qquad L_2 := -(Y_2 C_2^* + B_1 D_{21}^*)R_2^{-1}.$$
It is well known that the $H_2$ and LQG problems are equivalent and that the optimal controller is given by
$$K_2(s) := \begin{bmatrix} A + B_2 F_2 + L_2 C_2 & -L_2 \\ F_2 & 0 \end{bmatrix}$$
with
$$\min J = \min \|T_{zw}\|_2^2 = \mathrm{trace}\,(B_1^* X_2 B_1) + \mathrm{trace}\,(F_2 Y_2 F_2^* R_1).$$
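Numerically, $X_2$, $Y_2$ and the gains $F_2$, $L_2$ can be obtained from a standard Riccati solver. The sketch below is illustrative only (the plant data is made up); it uses SciPy's `solve_continuous_are`, whose cross-term argument `s` carries the $C_1^* D_{12}$ and $B_1 D_{21}^*$ terms appearing above.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative plant (hypothetical data): xdot = A x + B1 w + B2 u,
# z = C1 x + D12 u, y = C2 x + D21 w.
A   = np.array([[0.0, 1.0], [-1.0, -2.0]])
B1  = np.array([[1.0, 0.0], [1.0, 0.0]])
B2  = np.array([[0.0], [1.0]])
C1  = np.array([[1.0, 0.0], [0.0, 0.0]])
D12 = np.array([[0.0], [1.0]])
C2  = np.array([[1.0, 0.0]])
D21 = np.array([[0.0, 1.0]])

R1 = D12.T @ D12            # control weight
R2 = D21 @ D21.T            # measurement noise weight

# X2: control ARE with cross term S = C1* D12.
X2 = solve_continuous_are(A, B2, C1.T @ C1, R1, s=C1.T @ D12)
F2 = -np.linalg.solve(R1, B2.T @ X2 + D12.T @ C1)

# Y2: dual (filtering) ARE with cross term S = B1 D21*.
Y2 = solve_continuous_are(A.T, C2.T, B1 @ B1.T, R2, s=B1 @ D21.T)
L2 = -(Y2 @ C2.T + B1 @ D21.T) @ np.linalg.inv(R2)

# Optimal cost: trace(B1* X2 B1) + trace(F2 Y2 F2* R1).
Jmin = np.trace(B1.T @ X2 @ B1) + np.trace(F2 @ Y2 @ F2.T @ R1)

# Separation structure: A + B2 F2 and A + L2 C2 are both stable.
print(np.linalg.eigvals(A + B2 @ F2).real.max(),
      np.linalg.eigvals(A + L2 @ C2).real.max())
```

The two printed numbers are the largest closed-loop real parts of the regulator and the estimator; both should be negative for any data satisfying assumptions (i)-(iv).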
In this section, we shall look at the H2 problem from a time domain point of view, which will lead to a simple proof of the result. The following lemma gives a time domain characterization of the H2 norm of a stable transfer matrix.
Lemma 1.1 Let $w = w_0\delta(t)$, where $w_0$ is a random vector with $E\{w_0\} = 0$ and $E\{w_0 w_0^*\} = I$. Then
$$\|T_{zw}\|_2^2 = E\left\{\int_0^\infty \|z(t)\|^2\, dt\right\}.$$
Proof. Denote the impulse response of the system by $g(t) = Ce^{At}B$. It is easy to see that
$$\|T_{zw}\|_2^2 = \int_0^\infty \mathrm{trace}\left[g(t)^* g(t)\right] dt = \mathrm{trace}\,(B^* Q B)$$
where
$$Q = \int_0^\infty e^{A^* t} C^* C e^{At}\, dt$$
satisfies the Lyapunov equation
$$A^* Q + QA + C^* C = 0.$$
Next, note that $z(t) = Ce^{At}Bw_0$ for $t > 0$, so
$$E\left\{\int_0^\infty \|z(t)\|^2\, dt\right\} = E\left\{\int_0^\infty w_0^* B^* e^{A^* t} C^* C e^{At} B w_0\, dt\right\} = \mathrm{trace}\left(B^* Q B\, E\{w_0 w_0^*\}\right) = \mathrm{trace}\,(B^* Q B) = \|T_{zw}\|_2^2. \qquad \Box$$
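The proof suggests a direct way to compute the $H_2$ norm: solve the Lyapunov equation for $Q$ and take $\mathrm{trace}(B^*QB)$. A minimal sketch (the example system is made up), with a time-domain cross-check against $\int_0^\infty \mathrm{trace}[g(t)^* g(t)]\,dt$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

# Stable example system G(s) = C (sI - A)^{-1} B (illustrative data).
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Observability Gramian: A* Q + Q A + C* C = 0.
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# ||G||_2^2 = trace(B* Q B).
h2_sq = np.trace(B.T @ Q @ B)

# Cross-check: trapezoidal integral of ||g(t)||_F^2 with g(t) = C e^{At} B,
# truncated where e^{At} has decayed to negligible size.
ts = np.linspace(0.0, 20.0, 4001)
g2 = np.array([np.sum((C @ expm(A * t) @ B) ** 2) for t in ts])
h2_sq_time = np.sum(0.5 * (g2[1:] + g2[:-1]) * np.diff(ts))

print(h2_sq, h2_sq_time)  # the two values should agree closely
```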
In view of the above lemma, the $H_2$ control problem can be regarded as the problem of finding a controller $K(s)$ for the system described by equations (1.1)-(1.3), with $w = w_0\delta(t)$ and $x(0_-) = 0$, that minimizes
$$J_1 := E\left\{\int_0^\infty \|z\|^2\, dt\right\}.$$
Now we are ready to consider the output feedback $H_2$ problem. We shall need the following fact.

Lemma 1.2 Suppose $K(s)$ is strictly proper. Then $x(0_+) = B_1 w_0$.

Proof. Let $K(s) = \begin{bmatrix} \hat{A} & \hat{B} \\ \hat{C} & 0 \end{bmatrix}$, i.e., $\dot{\hat{x}} = \hat{A}\hat{x} + \hat{B}y$, $u = \hat{C}\hat{x}$. Then the closed-loop system becomes
$$\begin{bmatrix} \dot{x} \\ \dot{\hat{x}} \end{bmatrix} = \begin{bmatrix} A & B_2\hat{C} \\ \hat{B}C_2 & \hat{A} \end{bmatrix}\begin{bmatrix} x \\ \hat{x} \end{bmatrix} + \begin{bmatrix} B_1 \\ \hat{B}D_{21} \end{bmatrix} w =: \tilde{A}\tilde{x} + \tilde{B}w$$
and $\tilde{x}(t) = e^{\tilde{A}t}\tilde{B}w_0$ for $t > 0$; in particular, $x(0_+) = B_1 w_0$. $\Box$

Suppose that there exists an output feedback controller such that the closed-loop system is stable. Then $x(\infty) = 0$. Note that
$$J_1 = E\left\{\int_0^\infty \|z(t)\|^2\, dt\right\} = E\left\{\int_0^\infty \left[\|z\|^2 + \frac{d}{dt}(x^* X_2 x)\right] dt\right\} + E\left\{x(0_+)^* X_2\, x(0_+)\right\}$$
$$= E\left\{\int_0^\infty \left[\|z\|^2 + 2x^* X_2 \dot{x}\right] dt\right\} + \mathrm{trace}\,(B_1^* X_2 B_1)$$
$$= E\left\{\int_0^\infty \left[\|C_1 x + D_{12}u\|^2 + 2x^* X_2(Ax + B_1 w + B_2 u)\right] dt\right\} + \mathrm{trace}\,(B_1^* X_2 B_1)$$
(note that $w(t) = 0$ for $t > 0$). Using the $X_2$ Riccati equation and completing the square with respect to $u$, we get
$$J_1 = \mathrm{trace}\,(B_1^* X_2 B_1) + E\left\{\int_0^\infty (u - F_2 x)^* R_1 (u - F_2 x)\, dt\right\}.$$
Hence $J_1$ is minimized by
$$u = F_2 x$$
if the full state is available for feedback. Since the full state is not available for feedback, we have to implement the control law using the estimated state:
$$u = F_2\hat{x} \qquad (1.4)$$
where $\hat{x}$ is the estimate of $x$. A standard observer can be constructed from the system equations (1.1) and (1.3) as
$$\dot{\hat{x}} = A\hat{x} + B_2 u + L(C_2\hat{x} - y) \qquad (1.5)$$
where $L$ is the observer gain, to be determined such that $A + LC_2$ is stable and $J_1$ is minimized. Let $e := x - \hat{x}$. Then
$$\dot{e} = (A + LC_2)e + (B_1 + LD_{21})w =: A_L e + B_L w, \qquad u - F_2 x = -F_2 e.$$
Since $w = w_0\delta(t)$, we have $e(t) = e^{A_L t}B_L w_0$ for $t > 0$, and hence
$$J_2 := E\left\{\int_0^\infty (u - F_2 x)^* R_1 (u - F_2 x)\, dt\right\} = E\left\{\int_0^\infty e^* F_2^* R_1 F_2 e\, dt\right\}$$
$$= E\left\{\int_0^\infty w_0^* B_L^* e^{A_L^* t} F_2^* R_1 F_2\, e^{A_L t} B_L w_0\, dt\right\} = \mathrm{trace}\left\{F_2^* R_1 F_2 \int_0^\infty e^{A_L t} B_L B_L^* e^{A_L^* t}\, dt\right\} = \mathrm{trace}\,\{F_2^* R_1 F_2 Y\}$$
where
$$Y := \int_0^\infty e^{A_L t} B_L B_L^* e^{A_L^* t}\, dt$$
satisfies
$$(A + LC_2)Y + Y(A + LC_2)^* + (B_1 + LD_{21})(B_1 + LD_{21})^* = 0.$$
Subtracting the equation of $Y_2$ from the above equation gives
$$(A + LC_2)(Y - Y_2) + (Y - Y_2)(A + LC_2)^* + (L - L_2)R_2(L - L_2)^* = 0.$$
It is then clear that $Y \geq Y_2$, with equality if $L = L_2$.
Hence $J_2$ is minimized by choosing
$$L = L_2,$$
i.e., the optimal output feedback controller is
$$K_2(s) := \begin{bmatrix} A + B_2 F_2 + L_2 C_2 & -L_2 \\ F_2 & 0 \end{bmatrix}.$$
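The comparison argument ($Y \geq Y_2$, with equality at $L = L_2$) is easy to spot-check numerically: solve the error-variance Lyapunov equation for the optimal gain and for a perturbed gain, and compare. A sketch with made-up plant data:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Illustrative plant data (made up for this sketch).
A   = np.array([[0.0, 1.0], [-1.0, -2.0]])
B1  = np.array([[1.0, 0.0], [1.0, 0.0]])
C2  = np.array([[1.0, 0.0]])
D21 = np.array([[0.0, 1.0]])
R2  = D21 @ D21.T

# Optimal filter: Y2 from the dual ARE, L2 = -(Y2 C2* + B1 D21*) R2^{-1}.
Y2 = solve_continuous_are(A.T, C2.T, B1 @ B1.T, R2, s=B1 @ D21.T)
L2 = -(Y2 @ C2.T + B1 @ D21.T) @ np.linalg.inv(R2)

def error_gramian(L):
    """Solve (A + L C2) Y + Y (A + L C2)* + (B1 + L D21)(B1 + L D21)* = 0."""
    AL = A + L @ C2
    BL = B1 + L @ D21
    return solve_continuous_lyapunov(AL, -BL @ BL.T)

Y_opt  = error_gramian(L2)                               # equals Y2
Y_pert = error_gramian(L2 + np.array([[0.5], [-0.3]]))   # some other stable gain

print(np.linalg.norm(Y_opt - Y2))               # ~ 0
print(np.min(np.linalg.eigvalsh(Y_pert - Y2)))  # >= 0, i.e. Y >= Y2
```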
Next, consider the LQG setup directly: assume $w(t)$ is a zero mean, unit variance, white Gaussian noise,
$$E\{w(t)\} = 0, \qquad E\{w(t)w(\tau)^*\} = I\,\delta(t - \tau).$$
In this case, we have the following relationship.

Lemma 1.3 Let $z = T_{zw}w$ with $T_{zw}$ stable and $T_{zw}(s) = C(sI - A)^{-1}B$. Then
$$\|T_{zw}\|_2^2 = E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T \|z(t)\|^2\, dt\right\}.$$

Proof. The output is
$$z(t) = \int_0^t Ce^{A(t-\tau)}Bw(\tau)\, d\tau.$$
Hence
$$E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T \|z(t)\|^2\, dt\right\} = E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T\!\!\int_0^t\!\!\int_0^t w^*(\tau)B^* e^{A^*(t-\tau)}C^* C e^{A(t-s)}Bw(s)\, d\tau\, ds\, dt\right\}$$
$$= \lim_{T\to\infty}\frac{1}{T}\int_0^T\!\!\int_0^t\!\!\int_0^t \mathrm{trace}\left\{B^* e^{A^*(t-\tau)}C^* C e^{A(t-s)}B\, E\{w(s)w^*(\tau)\}\right\} d\tau\, ds\, dt$$
$$= \lim_{T\to\infty}\frac{1}{T}\int_0^T\!\!\int_0^t\!\!\int_0^t \mathrm{trace}\left\{B^* e^{A^*(t-\tau)}C^* C e^{A(t-s)}B\right\}\delta(\tau - s)\, d\tau\, ds\, dt$$
$$= \lim_{T\to\infty}\frac{1}{T}\int_0^T\!\!\int_0^t \mathrm{trace}\left\{B^* e^{A^*(t-s)}C^* C e^{A(t-s)}B\right\} ds\, dt = \mathrm{trace}\,(B^* Q B) = \|T_{zw}\|_2^2,$$
since the inner integral converges to $\mathrm{trace}\,(B^* Q B)$ as $t \to \infty$ and the time average inherits this limit. $\Box$
Now consider the LQG control problem and suppose that there exists an output feedback controller such that the closed-loop system is stable. We shall need the following fact.

Lemma 1.4 Suppose that $K(s)$ is a strictly proper stabilizing controller. Then
$$E\{x(t)w^*(t)\} = B_1/2.$$

Proof. Let $K(s) = \begin{bmatrix} \hat{A} & \hat{B} \\ \hat{C} & 0 \end{bmatrix}$, so that, as in the proof of Lemma 1.2, the closed-loop system is $\dot{\tilde{x}} = \tilde{A}\tilde{x} + \tilde{B}w$ with $\tilde{x} = \begin{bmatrix} x \\ \hat{x} \end{bmatrix}$ and $\tilde{B} = \begin{bmatrix} B_1 \\ \hat{B}D_{21} \end{bmatrix}$. Then
$$\tilde{x}(t) = \int_0^t e^{\tilde{A}(t-\tau)}\tilde{B}w(\tau)\, d\tau$$
and
$$E\{\tilde{x}(t)w^*(t)\} = \int_0^t e^{\tilde{A}(t-\tau)}\tilde{B}\,\delta(t - \tau)\, d\tau = \tilde{B}/2,$$
since the delta function sits at the endpoint of the integration interval and contributes half its weight. The first block row gives $E\{x(t)w^*(t)\} = B_1/2$. $\Box$
Since the closed-loop system is stable, the boundary term $E\{x(T)^* X_2 x(T)\}/T$ vanishes as $T \to \infty$, so
$$J := E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T \|z(t)\|^2\, dt\right\} = E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T \left[\|z\|^2 + \frac{d}{dt}(x^* X_2 x)\right] dt\right\}$$
$$= E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T \left[\|C_1 x + D_{12}u\|^2 + 2x^* X_2(Ax + B_1 w + B_2 u)\right] dt\right\}.$$
Using Lemma 1.4 (so that $E\{2x^* X_2 B_1 w\} = \mathrm{trace}\,(B_1^* X_2 B_1)$), the $X_2$ Riccati equation, and completing the square with respect to $u$, we get
$$J = \mathrm{trace}\,(B_1^* X_2 B_1) + E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T (u - F_2 x)^* R_1 (u - F_2 x)\, dt\right\}.$$
Hence $J$ is minimized by
$$u = F_2 x$$
if the full state is available for feedback. Since the full state is not available for feedback, we will have to implement the control law using the estimated state:
$$u = F_2\hat{x} \qquad (1.6)$$
where $\hat{x}$ is the estimate of $x$. A standard observer can be constructed from the system equations as
$$\dot{\hat{x}} = A\hat{x} + B_2 u + L(C_2\hat{x} - y) \qquad (1.7)$$
where $L$ is the observer gain, to be determined such that $A + LC_2$ is stable and $J$ is minimized. Let $e := x - \hat{x}$. Then
$$\dot{e} = (A + LC_2)e + (B_1 + LD_{21})w =: A_L e + B_L w, \qquad u - F_2 x = -F_2 e$$
and
$$e(t) = \int_0^t e^{A_L(t-\tau)}B_L w(\tau)\, d\tau.$$
Thus
$$J_3 := E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T (u - F_2 x)^* R_1 (u - F_2 x)\, dt\right\} = E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T e^* F_2^* R_1 F_2 e\, dt\right\}.$$
Proceeding exactly as in the proof of Lemma 1.3, the white noise correlation collapses the double integral, and
$$J_3 = \mathrm{trace}\,\{F_2^* R_1 F_2 Y\}, \qquad Y = \int_0^\infty e^{A_L t} B_L B_L^* e^{A_L^* t}\, dt.$$
$Y$ satisfies the Lyapunov equation
$$(A + LC_2)Y + Y(A + LC_2)^* + (B_1 + LD_{21})(B_1 + LD_{21})^* = 0.$$
Subtracting the equation of $Y_2$ from the above equation gives
$$(A + LC_2)(Y - Y_2) + (Y - Y_2)(A + LC_2)^* + (L - L_2)R_2(L - L_2)^* = 0.$$
It is then clear that $Y \geq Y_2$, with equality if $L = L_2$. Hence $J_3$ is minimized if $L = L_2$.
Chapter 2
Understanding H∞ Control
We give an intuitive derivation of the H∞ controller.
[Figure: the standard feedback configuration, with plant $G$ mapping $(w, u)$ to $(z, y)$ and controller $K$ mapping $y$ to $u$.]

We use the same realization (1.1)-(1.3) of $G$ as in Chapter 1, together with the simplifying assumptions
$$D_{12}^*\begin{bmatrix} C_1 & D_{12} \end{bmatrix} = \begin{bmatrix} 0 & I \end{bmatrix}, \qquad \begin{bmatrix} B_1 \\ D_{21} \end{bmatrix} D_{21}^* = \begin{bmatrix} 0 \\ I \end{bmatrix}.$$
Theorem 2.1 There exists an admissible controller such that $\|T_{zw}\|_\infty < \gamma$ if and only if the following three conditions hold:
(i) There exists a stabilizing solution $X_\infty = X_\infty^* \geq 0$ to
$$X_\infty A + A^* X_\infty + X_\infty(B_1 B_1^*/\gamma^2 - B_2 B_2^*)X_\infty + C_1^* C_1 = 0; \qquad (2.1)$$

(ii) there exists a stabilizing solution $Y_\infty = Y_\infty^* \geq 0$ to
$$A Y_\infty + Y_\infty A^* + Y_\infty(C_1^* C_1/\gamma^2 - C_2^* C_2)Y_\infty + B_1 B_1^* = 0; \qquad (2.2)$$

(iii) $\rho(X_\infty Y_\infty) < \gamma^2$.

Moreover, when these conditions hold, one central controller is
$$K_{sub}(s) := \begin{bmatrix} \hat{A}_\infty & -Z_\infty L_\infty \\ F_\infty & 0 \end{bmatrix}$$
where
$$\hat{A}_\infty := A + B_1 B_1^* X_\infty/\gamma^2 + B_2 F_\infty + Z_\infty L_\infty C_2,$$
$$F_\infty := -B_2^* X_\infty, \qquad L_\infty := -Y_\infty C_2^*, \qquad Z_\infty := (I - Y_\infty X_\infty/\gamma^2)^{-1}.$$
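The Riccati equations (2.1)-(2.2) are indefinite and not in standard LQR form, but their stabilizing solutions can still be read off the stable invariant subspace of the associated Hamiltonian matrices. A sketch (plant data and $\gamma$ made up; `ric` is a hypothetical helper name):

```python
import numpy as np
from scipy.linalg import schur

def ric(A, R, Q):
    """Stabilizing solution of X A + A* X + X R X + Q = 0, taken from the
    stable invariant subspace of the Hamiltonian [[A, R], [-Q, -A*]]."""
    n = A.shape[0]
    H = np.block([[A, R], [-Q, -A.T]])
    # Real Schur form with left-half-plane eigenvalues ordered first.
    T, Z, sdim = schur(H, sort='lhp')
    assert sdim == n, "no n-dimensional stable invariant subspace"
    return np.linalg.solve(Z[:n, :n].T, Z[n:, :n].T).T  # X = Z21 Z11^{-1}

# Illustrative data satisfying the simplifying assumptions of this chapter.
A  = np.array([[-1.0]])
B1 = np.array([[1.0, 0.0]])
B2 = np.array([[1.0]])
C1 = np.array([[1.0], [0.0]])
C2 = np.array([[1.0]])
gam = 2.0

# (2.1) and (2.2).
Xinf = ric(A, B1 @ B1.T / gam**2 - B2 @ B2.T, C1.T @ C1)
Yinf = ric(A.T, C1.T @ C1 / gam**2 - C2.T @ C2, B1 @ B1.T)

# Condition (iii) and the central controller.
assert np.max(np.abs(np.linalg.eigvals(Xinf @ Yinf))) < gam**2
Finf = -B2.T @ Xinf
Linf = -Yinf @ C2.T
Zinf = np.linalg.inv(np.eye(1) - Yinf @ Xinf / gam**2)
Ahat = A + B1 @ B1.T @ Xinf / gam**2 + B2 @ Finf + Zinf @ Linf @ C2
# Controller: xhat' = Ahat xhat - Zinf Linf y,  u = Finf xhat.
print(Xinf, Yinf, np.linalg.eigvals(Ahat))
```

The same `ric` helper works for any state dimension; the scalar example just keeps the numbers easy to inspect.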
Most existing derivations and proofs of the H∞ control results given in Theorem 2.1 are mathematically quite complex. Some algebraic derivations (such as the one given in the book) are simple, but they provide no insight into the theory for control engineers. Here we shall present an intuitive but nonrigorous derivation of the H∞ results using only basic system-theoretic concepts such as state feedback and state estimation. In fact, we shall construct intuitively the output feedback H∞ central controller by combining an H∞ state feedback and an observer.

A key fact we shall use is the so-called bounded real lemma, which states that, for a stable system $z = G(s)w$ with state space realization $G(s) = C(sI - A)^{-1}B \in H_\infty$, $\|G\|_\infty < \gamma$ is essentially equivalent to
$$\int_0^\infty \left(\|z\|^2 - \gamma^2\|w\|^2\right) dt < 0, \qquad \forall\, w \neq 0.$$
Moreover, in this case there exists an $X = X^* \geq 0$ such that
$$XA + A^* X + XBB^* X/\gamma^2 + C^* C = 0$$
and $A + BB^* X/\gamma^2$ is stable. Dually, there is a $Y = Y^* \geq 0$ such that
$$YA^* + AY + YC^* CY/\gamma^2 + BB^* = 0$$
and $A + YC^* C/\gamma^2$ is stable.
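The bounded real lemma has a well-known computational counterpart: for a stable $G = C(sI - A)^{-1}B$, $\|G\|_\infty < \gamma$ exactly when the Hamiltonian matrix built from the Riccati equation above has no imaginary-axis eigenvalues. A quick sketch on a made-up example:

```python
import numpy as np

# Stable example: G(s) = 1 / ((s + 1)(s + 2)), so ||G||_inf = |G(0)| = 0.5.
A = np.array([[-1.0, 0.0], [1.0, -2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

def norm_below(gam):
    """True iff ||G||_inf < gam (Hamiltonian imaginary-axis eigenvalue test)."""
    H = np.block([[A, B @ B.T / gam**2], [-C.T @ C, -A.T]])
    return not np.any(np.abs(np.linalg.eigvals(H).real) < 1e-7)

# Brute-force cross-check of the norm on a frequency grid.
w = np.linspace(0.0, 20.0, 2001)
gmax = max(abs((C @ np.linalg.solve(1j * wk * np.eye(2) - A, B))[0, 0])
           for wk in w)

print(norm_below(0.6), norm_below(0.4), gmax)
```

For this example the test reports that the norm is below 0.6 but not below 0.4, consistent with the gridded peak of about 0.5.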
$$\dot{x} = Ax + B_1 w + B_2 u, \qquad z = C_1 x + D_{12}u, \qquad y = C_2 x + D_{21}w.$$
We shall first consider state feedback $u = Fx$. Then the closed-loop system becomes
$$\dot{x} = (A + B_2 F)x + B_1 w, \qquad z = (C_1 + D_{12}F)x.$$
By the bounded real lemma, $\|T_{zw}\|_\infty < \gamma$ implies that there exists an $X = X^* \geq 0$ such that
$$X(A + B_2 F) + (A + B_2 F)^* X + XB_1 B_1^* X/\gamma^2 + (C_1 + D_{12}F)^*(C_1 + D_{12}F) = 0$$
which is equivalent, by completing the square with respect to $F$, to
$$XA + A^* X + XB_1 B_1^* X/\gamma^2 - XB_2 B_2^* X + C_1^* C_1 + (F + B_2^* X)^*(F + B_2^* X) = 0.$$
Intuition suggests that we can take
$$F = -B_2^* X$$
which gives
$$XA + A^* X + XB_1 B_1^* X/\gamma^2 - XB_2 B_2^* X + C_1^* C_1 = 0.$$
This is exactly the $X_\infty$ Riccati equation (2.1) under the preceding simplified conditions. Hence we can take $F = F_\infty$ and $X = X_\infty$.
Next, suppose that there is an output feedback stabilizing controller such that $\|T_{zw}\|_\infty < \gamma$. Then $x(\infty) = 0$ because the closed-loop system is stable, and $x(0) = 0$, so
$$\int_0^\infty \left(\|z\|^2 - \gamma^2\|w\|^2\right) dt = \int_0^\infty \left(\|z\|^2 - \gamma^2\|w\|^2 + \frac{d}{dt}(x^* X_\infty x)\right) dt.$$
Substituting $\dot{x} = Ax + B_1 w + B_2 u$ and $z = C_1 x + D_{12}u$ into the above integral, using the $X_\infty$ equation, and finally completing the squares with respect to $u$ and $w$, we get
$$\int_0^\infty \left(\|z\|^2 - \gamma^2\|w\|^2\right) dt = \int_0^\infty \left(\|v\|^2 - \gamma^2\|r\|^2\right) dt$$
where $v = u + B_2^* X_\infty x = u - F_\infty x$ and $r = w - B_1^* X_\infty x/\gamma^2$. Substituting $w = r + B_1^* X_\infty x/\gamma^2$ into the system equations, we have the new system equations
$$\dot{x} = (A + B_1 B_1^* X_\infty/\gamma^2)x + B_1 r + B_2 u$$
$$v = -F_\infty x + u$$
$$y = C_2 x + D_{21}r.$$
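The step above rests on a pointwise completion-of-squares identity: given the $X_\infty$ equation and the simplifying assumptions, $\|z\|^2 - \gamma^2\|w\|^2 + \frac{d}{dt}(x^* X_\infty x) = \|v\|^2 - \gamma^2\|r\|^2$ for every $x$, $u$, $w$. A quick numerical spot check on a made-up scalar plant (here $X$ has a closed form):

```python
import numpy as np

# Scalar-state plant: A = -1, B2 = 1, z = [x; u] (i.e. C1 = [1; 0], D12 = [0; 1]),
# two disturbance channels with B1 = [1, 0], gamma = 2.
A, B2, g = -1.0, 1.0, 2.0
B1 = np.array([1.0, 0.0])
# Stabilizing solution of X A + A X + X (B1 B1^T/g^2 - B2^2) X + 1 = 0,
# i.e. -2X - 0.75 X^2 + 1 = 0.
X = (np.sqrt(7.0) - 2.0) / 1.5

rng = np.random.default_rng(0)
for _ in range(100):
    x, u = rng.normal(size=2)
    w = rng.normal(size=2)
    z2 = x**2 + u**2                      # ||z||^2, using D12* C1 = 0, D12* D12 = 1
    xdot = A * x + B1 @ w + B2 * u
    lhs = z2 - g**2 * (w @ w) + 2 * x * X * xdot
    v = u + B2 * X * x                    # v = u - Finf x
    r = w - B1 * X * x / g**2             # shifted (worst-case) disturbance
    rhs = v**2 - g**2 * (r @ r)
    assert abs(lhs - rhs) < 1e-9
print("completed-squares identity verified")
```

Integrating both sides over $[0, \infty)$ and using $x(0) = x(\infty) = 0$ kills the $\frac{d}{dt}(x^* X_\infty x)$ term, which is exactly the equality in the text.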
Hence $\|T_{zw}\|_\infty < \gamma$ is equivalent to $\|T_{vr}\|_\infty < \gamma$, or
$$\int_0^\infty \left(\|u - F_\infty x\|^2 - \gamma^2\|r\|^2\right) dt < 0.$$
Obviously, this also suggests intuitively that the state feedback control can be $u = F_\infty x$ and that a worst state feedback disturbance would be $w = B_1^* X_\infty x/\gamma^2$. Since the full state is not available for feedback, we have to implement the control law using the estimated state:
$$u = F_\infty\hat{x}$$
where $\hat{x}$ is the estimate of $x$. A standard observer can be constructed from the new system equations as
$$\dot{\hat{x}} = (A + B_1 B_1^* X_\infty/\gamma^2)\hat{x} + B_2 u + L(C_2\hat{x} - y)$$
where $L$ is the observer gain to be determined. Let $e := x - \hat{x}$. Then
$$\dot{e} = (A + B_1 B_1^* X_\infty/\gamma^2 + LC_2)e + (B_1 + LD_{21})r, \qquad v = -F_\infty e.$$
Since it is assumed that $\|T_{vr}\|_\infty < \gamma$, it follows from the dual version of the bounded real lemma that there exists a $Y \geq 0$ such that
$$Y(A + B_1 B_1^* X_\infty/\gamma^2 + LC_2)^* + (A + B_1 B_1^* X_\infty/\gamma^2 + LC_2)Y + YF_\infty^* F_\infty Y/\gamma^2 + (B_1 + LD_{21})(B_1 + LD_{21})^* = 0.$$
The above equation can be written as
$$Y(A + B_1 B_1^* X_\infty/\gamma^2)^* + (A + B_1 B_1^* X_\infty/\gamma^2)Y + YF_\infty^* F_\infty Y/\gamma^2 + B_1 B_1^* - YC_2^* C_2 Y + (L + YC_2^*)(L + YC_2^*)^* = 0.$$
Again, intuition suggests that we can take
$$L = -YC_2^*$$
which gives
$$Y(A + B_1 B_1^* X_\infty/\gamma^2)^* + (A + B_1 B_1^* X_\infty/\gamma^2)Y + YF_\infty^* F_\infty Y/\gamma^2 - YC_2^* C_2 Y + B_1 B_1^* = 0.$$
It is easy to verify that
$$Y = Y_\infty(I - X_\infty Y_\infty/\gamma^2)^{-1}$$
where $Y_\infty$ is as given in Theorem 2.1. Since $Y \geq 0$, we must have
$$\rho(X_\infty Y_\infty) < \gamma^2.$$
With $L = -YC_2^* = Z_\infty L_\infty$, the observer-based controller becomes
$$\dot{\hat{x}} = \left(A + B_1 B_1^* X_\infty/\gamma^2 + B_2 F_\infty + Z_\infty L_\infty C_2\right)\hat{x} - Z_\infty L_\infty y = \hat{A}_\infty\hat{x} - Z_\infty L_\infty y, \qquad u = F_\infty\hat{x},$$
which is exactly the H∞ central controller given in Theorem 2.1. We can see that the H∞ central controller is obtained by combining a state feedback with a state estimate under the worst state feedback disturbance.
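As a sanity check, one can close the loop with the central controller and confirm internal stability and $\|T_{zw}\|_\infty < \gamma$. The sketch below does this for a made-up scalar plant whose Riccati solutions have a closed form, estimating the norm on a frequency grid:

```python
import numpy as np

# Made-up scalar plant satisfying the chapter's simplifying assumptions:
# xdot = -x + w1 + u,  z = [x; u],  y = x + w2,  gamma = 2.
A, B2, C2, g = -1.0, 1.0, 1.0, 2.0

# Closed-form stabilizing solutions of (2.1)-(2.2) for this plant (X = Y here).
X = Y = (np.sqrt(7.0) - 2.0) / 1.5
assert abs(X * Y) < g**2                 # condition (iii)

F = -B2 * X                              # Finf
ZL = (-Y * C2) / (1.0 - Y * X / g**2)    # Zinf * Linf
Ahat = A + X / g**2 + B2 * F + ZL * C2   # B1 B1* Xinf / g^2 = X / 4 here

# Closed loop: plant state x, controller state q, disturbance w = [w1; w2].
Acl = np.array([[A, F], [-ZL, Ahat]])
Bcl = np.array([[1.0, 0.0], [0.0, -ZL]])
Ccl = np.array([[1.0, 0.0], [0.0, F]])

assert np.max(np.linalg.eigvals(Acl).real) < 0   # internal stability

# ||Tzw||_inf estimated on a frequency grid: should be below gamma.
hinf = max(np.linalg.norm(Ccl @ np.linalg.solve(1j * wk * np.eye(2) - Acl, Bcl), 2)
           for wk in np.linspace(0.0, 100.0, 10001))
print(hinf)
```

For this data the estimated norm comes out well below $\gamma = 2$, as Theorem 2.1 guarantees whenever conditions (i)-(iii) hold.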
Part II
Solutions Manual