Chapter 3
Main goal: introduce the key concepts needed to deal with current issues in the control of complex systems
Course overview
– Introduction
– Chapter 1 : Structure of controlled systems
– Chapter 2 : State space representation, controllability, observability
– Chapter 3 : State feedback control
– Chapter 4 : Observers and estimated state feedback control
– Chapter 5 : Performance and robustness analysis of a control law
Chapter 3 overview
1. Aim of controllers
a. Interest of a closed-loop structure
b. Stability, performance, robustness
2. Pole placement state feedback control
a. State feedback control principle
b. Calculation of the state feedback matrix
c. Additional gain calculation for steady-state purposes
3. Example: magnetic levitation system
4. LQ control
5. Example: lateral motion of an aircraft
6. Additional integral action in the control structure
7. Example: inverted pendulum
8. Conclusion
Interest of a closed-loop structure
– Open-loop structure: the controller drives the system without any measurement of the output
– Closed-loop structure: the measured output is fed back and compared to the setpoint before the controller acts
[Figure (electronics.stackexchange.com): step response of a second-order closed-loop system, showing the peak time tm and the 5% settling time tr, together with the curve of ω0·tm versus the damping factor ξ. For a damping factor ξ, the peak time is
    tm = π / ( ω0 √(1 − ξ²) ) ]
Qualities of a controlled system
– Rapidity issue: some examples
[Figure and table: normalized step responses of six example loops (1)–(6), plotted against ωc·t (normalized time scale). The accompanying table gives each open-loop transfer function (e.g. (4): (1 + 2s) / [ s (1 + s)(1 + 0.5 s) ] and (5): 0.3 (1 − s) / [ s (1 + s)(1 + 0.5 s) ]) together with its phase margin and normalized time-response figures.]
Qualities of a controlled system
– Steady-state accuracy
▪ Objective: no steady-state error between the output y(t) and the setpoint e(t), despite the presence of disturbances b(t)
[Block diagram: Controller → System, with the measurement ym(t) fed back; time plot showing e(t), y(t) and the residual error ε.]
▪ Compromise: stability vs rapidity & accuracy
Handout
State feedback control 3.1
▪ State feedback control law (continuous-time case) – assuming the state is available at any time:
    u(t) = −K x(t) + kc e(t)
[Block diagram: e(t) → kc → + → u(t) → System → y(t), with the state x(t) fed back through the gain K.]
▪ Resulting closed-loop state space representation:
    ẋ(t) = (A − BK) x(t) + B kc e(t)
    y(t) = C x(t) + D u(t)
▪ State feedback control law (discrete-time case):
    u[k] = −K x[k] + kc e[k]
▪ Resulting closed-loop state space representation:
    x[k+1] = (F − GK) x[k] + G kc e[k]
    y[k] = H x[k] + J u[k]
▪ If the system is not fully controllable, the dynamics of the non-controllable part (the x2(t) sub-vector) cannot be modified by the control law:
    ẋ2(t) = A22 x2(t)
Pole placement state feedback control
– Step #1
▪ Determine K in order to impose the eigenvalues of the matrix A − BK (i.e. the poles of the controlled system) at specified values:
    Open-loop form: ẋ(t) = A x(t) + B u(t) ; y(t) = C x(t) + D u(t)
    Closed-loop form: ẋ(t) = (A − BK) x(t) + B kc e(t) ; y(t) = C x(t) + D u(t), with u(t) = −K x(t) + kc e(t)
▪ The imposed eigenvalues of A − BK are the roots of the characteristic polynomial of the (A − BK) matrix.
▪ Solve the characteristic equation to derive K; the solution is unique if the system is fully controllable:
    det[ λI − (A − BK) ] = (λ − λ1)(λ − λ2) ⋯ (λ − λn)
    (similar approach in discrete-time)
Pole placement state feedback control
– Some approaches to determine the K matrix
▪ Objective: det[ λI − (A − BK) ] = (λ − λ1) ⋯ (λ − λn), with λi the chosen eigenvalues
    (λ − λ1) ⋯ (λ − λn) = λⁿ + α1 λⁿ⁻¹ + ⋯ + αn−1 λ + αn
▪ Matlab: help place
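As a complement to the Matlab `place` routine mentioned above, the same pole-placement step can be sketched in Python with `scipy.signal.place_poles`; the second-order system below is purely illustrative (its values are not from the course).

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative second-order system (hypothetical values, not from the course)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

# Desired closed-loop eigenvalues
poles = [-5.0, -6.0]

# Solve det[lambda*I - (A - B K)] = (lambda + 5)(lambda + 6) for K
K = place_poles(A, B, poles).gain_matrix

# The eigenvalues of A - B K are the imposed poles
print(np.sort(np.linalg.eigvals(A - B @ K).real))
```

For a fully controllable SISO system the gain identified from the characteristic polynomial is unique, so the routine returns the same K as a hand calculation (here K = [28, 8]).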
Pole placement state feedback control
– Step #2
▪ Determine kc in order to ensure a zero steady-state error for a step setpoint (SISO case):
    dim(y) = dim(u) = 1
▪ To ensure y = e in steady state, the kc gain is derived from the relation:
    kc = −1 / [ C (A − BK)⁻¹ B ]
Handout
Pole placement state feedback control 3.1.2
[Figures: desired closed-loop pole locations in the complex plane (Im(λ) versus Re(λ)).]
Pole placement state feedback control
– A simple example
    G(s) = 100 / [ s (1 + s/100)(1 + s/1000) ]
– Specifications
▪ Overshoot of the closed-loop step response less than 10%
    → damping factor ξ = 0.7 of the equivalent 2nd-order system ω0² / ( s² + 2 ξ ω0 s + ω0² )
▪ Peak time of the closed-loop step response of tm = 0.01 s
    → natural frequency ω0 = 300 rad/s of the equivalent 2nd-order system
Pole placement state feedback control
– Transfer function
    G(s) = 100 / [ s (1 + s/100)(1 + s/1000) ] = 10⁷ / ( s³ + 1100 s² + 10⁵ s )
– State space representation (control canonical form)
    ẋ(t) = [ 0 1 0 ; 0 0 1 ; 0 −10⁵ −1100 ] x(t) + [ 0 ; 0 ; 1 ] u(t)
    y(t) = [ 10⁷ 0 0 ] x(t)
– Controllability – Observability
    QG = [ 0 0 1 ; 0 1 −1100 ; 1 −1100 1110000 ] , rank(QG) = 3
    QO = [ 10⁷ 0 0 ; 0 10⁷ 0 ; 0 0 10⁷ ] , rank(QO) = 3
Pole placement state feedback control
– State feedback: pole placement strategy
    ξ = 0.7 and ω0 = 300 rad/s → 2 complex conjugate poles; the fastest open-loop pole is kept (ω1 = 1000 rad/s)
    det[ λI − (A − BK) ] = (λ + 1000)(λ² + 420 λ + 90000)
    ⇒ k1 = 90 000 000 ; k2 = 410 000 ; k3 = 320
– Determination of kc:
    kc = −1 / [ C (A − BK)⁻¹ B ] = 9
Pole placement state feedback control
– Matlab code
clear all
close all
% State-space representation
A=[0 1 0;0 0 1;0 -10^5 -1100]
B=[0;0;1]
C=[10^7 0 0]
% Pole placement and steady-state gain (completing the script with the design steps above)
p=[-1000 roots([1 420 90000])'];
K=place(A,B,p)
kc=-1/(C*inv(A-B*K)*B)
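The Matlab design above can be cross-checked with an equivalent Python sketch using `scipy.signal.place_poles`; the system and poles are exactly those of the worked example, so the gains of the previous slide should be recovered.

```python
import numpy as np
from scipy.signal import place_poles

# Control canonical form of G(s) = 1e7 / (s^3 + 1100 s^2 + 1e5 s) (from the slides)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -1e5, -1100.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1e7, 0.0, 0.0]])

# Imposed poles: (lambda + 1000)(lambda^2 + 420 lambda + 90000)
poles = np.concatenate(([-1000.0], np.roots([1.0, 420.0, 90000.0])))

K = place_poles(A, B, poles).gain_matrix              # = [9e7, 4.1e5, 320] as on the slide
kc = (-1.0 / (C @ np.linalg.inv(A - B @ K) @ B)).item()  # = 9 as on the slide

print(K, kc)
```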
Pole placement state feedback control:
application to a magnetic levitation system
– Physical equations
    L dI(t)/dt + R I(t) = U(t)
    m d²z(t)/dt² = c I(t)² / ( x0 − z(t) )² − m g
[Figure: electromagnet of resistance R and inductance L driven by the voltage U; it exerts the magnetic force Fm on the suspended ball of weight P, at distance x0 − z from the pole piece.]
– Numerical values: R = 10 Ω ; L = 0.01 H ; g = 10 m s⁻² ; i0 = 0.5 A ; x0 = 0.0125 m
– Equilibrium at z = 0:
    c i0² / x0² − m g = 0 ; U0 = R i0
▪ State (deviations around the equilibrium): x(t) = [ i1(t) ; z(t) ; ż(t) ]
▪ Linearized state space representation:
    d/dt [ i1(t) ; z(t) ; ż(t) ] = [ −1000 0 0 ; 0 0 1 ; 20 1600 0 ] [ i1(t) ; z(t) ; ż(t) ] + [ 100 ; 0 ; 0 ] u1(t)
    Vz(t) = [ 0 4000 0 ] [ i1(t) ; z(t) ; ż(t) ]      (position sensor gain: 4 000 V m⁻¹)
Pole placement state feedback control:
application to a magnetic levitation system
– Open-loop analysis
▪ Eigenvalues: −1000, −40, +40 → unstable system
▪ Observability:
    [ C ; CA ; CA² ] = [ 0 4·10³ 0 ; 0 0 4·10³ ; 8·10⁴ 6.4·10⁶ 0 ] of rank 3 → OK
Pole placement state feedback control:
application to a magnetic levitation system
– State feedback control
▪ State feedback control law:
    u1(t) = −[ k1 k2 k3 ] [ i1(t) ; z(t) ; ż(t) ] + kc e(t)
▪ Imposed characteristic polynomial:
    det[ λI − (A − BK) ] = ( λ + 1000 ) ( λ² + 2·0.6·130 λ + 130² )
▪ Calculation of K using the “place” routine in Matlab:
    k1 = 1.560 ; k2 = 9.375·10³ ; k3 = 87.25
Pole placement state feedback control:
application to a magnetic levitation system
– Calculation of kc: the closed loop reads
    d/dt [ i1(t) ; z(t) ; ż(t) ] = [ −1000 − 100 k1  −100 k2  −100 k3 ; 0 0 1 ; 20 1600 0 ] [ i1(t) ; z(t) ; ż(t) ] + [ 100 kc ; 0 ; 0 ] e(t)
    Vz(t) = [ 0 4000 0 ] [ i1(t) ; z(t) ; ż(t) ]
▪ In steady state (all derivatives set to zero):
    ż = 0 ; i1 = −80 z
    Vz = 4000 z = −4000 kc / ( 800 + 80 k1 − k2 ) · e
    ⇒ kc = −( 800 + 80 k1 − k2 ) / 4000 = 2.1125
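As for the previous example, the maglev gains can be cross-checked numerically; the model and imposed poles below are exactly those of the slides, so the computation should reproduce k1, k2, k3 and kc.

```python
import numpy as np
from scipy.signal import place_poles

# Linearized magnetic levitation model (from the slides)
A = np.array([[-1000.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [20.0, 1600.0, 0.0]])
B = np.array([[100.0], [0.0], [0.0]])
C = np.array([[0.0, 4000.0, 0.0]])

# Imposed poles: (lambda + 1000)(lambda^2 + 2*0.6*130 lambda + 130^2)
poles = np.concatenate(([-1000.0], np.roots([1.0, 156.0, 16900.0])))

K = place_poles(A, B, poles).gain_matrix                 # approx [1.560, 9.375e3, 87.25]
kc = (-1.0 / (C @ np.linalg.inv(A - B @ K) @ B)).item()  # approx 2.1125

print(K, kc)
```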
Pole placement state feedback control:
application to a magnetic levitation system
– Closed-loop system: the state feedback u1 = −k1 i1 − k2 z − k3 ż + kc e is applied as a correction around the equilibrium values (i0, U0) of the nonlinear system
[Figures: block diagram of the closed loop around the nonlinear system; step responses Vz (V) and U (V) of the controlled nonlinear system.]
Pole placement state feedback control:
application to a magnetic levitation system
– Matlab code
clear all
close all
% State-space representation
A=[-1000 0 0;0 0 1;20 1600 0]
B=[100;0;0]
C=[0 4000 0]
% Pole placement and steady-state gain (completing the script with the design steps above)
p=[-1000 roots([1 156 16900])'];
K=place(A,B,p)
kc=-1/(C*inv(A-B*K)*B)
Pole placement state feedback control
– Special case: system not fully controllable
▪ With dim(x1) = rank(QG) = q ≤ n, the state equations can be decomposed as
    [ ẋ1(t) ; ẋ2(t) ] = [ A11 A12 ; 0 A22 ] [ x1(t) ; x2(t) ] + [ B1 ; 0 ] u(t)
    x1: controllable part of the state ; x2: non-controllable part of the state
▪ State feedback law:
    u(t) = −[ K1 K2 ] x(t) + e(t)
▪ Eigenvalues of the closed loop:
    det( λI − (A11 − B1 K1) ) · det( λI − A22 ) = 0
Pole placement state feedback control
– Special case: constant setpoint and constant & measurable disturbance (continuous-time case, SISO case)
▪ Two gains must be determined, kc and kd
▪ Open-loop system:
    ẋ(t) = A x(t) + B u(t) + B′ d(t)
    y(t) = C x(t)
▪ State feedback law:
    u(t) = −K x(t) + kc e(t) + kd d(t)
▪ Resulting gains:
    kc = −[ C (A − BK)⁻¹ B ]⁻¹
    kd = −[ C (A − BK)⁻¹ B ]⁻¹ [ C (A − BK)⁻¹ B′ ]
Pole placement state feedback control
– Summary
▪ State feedback control with pole placement:
    u(t) = −K x(t) + kc e(t) + kd d(t)
▪ Design steps (SISO case)
– Assumptions:
    ▪ Fully controllable system (or at least stabilizable)
    ▪ All states are measured
    ▪ Any constant disturbance is assumed to be measured
– Choice of the desired closed-loop eigenvalues
– Calculation of the state feedback gain K that provides the expected eigenvalues
– Determination of the kc and kd gains to cancel the steady-state error, in order to track a constant setpoint and reject a constant disturbance
LQ control
– State feedback control
▪ How to choose the eigenvalues λi of the closed-loop system?
– Determine the dominant poles and impose/keep the other poles if stable and faster
– Choice of the eigenvectors in the MIMO case
➔ Possible solution: Linear Quadratic (LQ) control
▪ Principle: design of an optimal control providing the best compromise between the evolution of the states and the control values.
LQ control
– Need for a performance criterion to improve:
▪ the settling time
▪ the peak time
▪ the overshoot
▪ the steady-state error
▪ …
– Typical form of the criterion:
    J = ∫₀⁺∞ ( x(t)ᵀ Q x(t) + u(t)ᵀ R u(t) ) dt
    min J, with Q = Qᵀ ≥ 0 and R = Rᵀ > 0
▪ Open-loop representation: ẋ(t) = A x(t) + B u(t), with initial condition x(0)
▪ State feedback law: u(t) = −K x(t)
– Lyapunov stability
▪ If V is a Lyapunov function:
    – V(0) = 0
    – V(x) > 0, ∀x ∈ D, x ≠ 0
    – V̇(x) = (∂V/∂x)(x) f(x) ≤ 0
then the equilibrium point x = 0 is Lyapunov stable.
If V̇(x) = (∂V/∂x)(x) f(x) < 0, ∀x ∈ D, x ≠ 0, then asymptotic stability is ensured.
Handout
LQ control formulation 3.2
– Simple example
▪ Open-loop system: ẋ(t) = −x(t) + u(t), with A = −1 ; B = 1
▪ LQ control:
– Cost function:
    J = ∫₀⁺∞ ( x(t)ᵀ Q x(t) + u(t)ᵀ R u(t) ) dt ; Q = Qᵀ ≥ 0 ; R = Rᵀ > 0 ; Q = Hᵀ H
  leading here to:
    J = ∫₀⁺∞ ( q x²(t) + r u²(t) ) dt ; q ≥ 0 ; r > 0 ; H = √q of full rank
– (A, B) is controllable and (A, H) is observable
– Control given by:
    u(t) = −R⁻¹ Bᵀ P x(t)
  with P = p > 0 the unique solution of the algebraic Riccati equation P A + Aᵀ P − P B R⁻¹ Bᵀ P + Q = 0
Handout
LQ control formulation 3.2
– Simple example
▪ Solution of the Riccati equation:
    P A + Aᵀ P − P B R⁻¹ Bᵀ P + Q = 0 → −2p − p²/r + q = 0 → p² + 2 r p − q r = 0
    → p = r ( −1 + √(1 + q/r) )   (positive root)
  thus
    u(t) = −R⁻¹ Bᵀ P x(t) = −( −1 + √(1 + q/r) ) x(t)
  and
    ẋ(t) = −x(t) + u(t) = −√(1 + q/r) · x(t)
    x(t) = e^(−√(1 + q/r) t) x(0)
    u(t) = ( 1 − √(1 + q/r) ) e^(−√(1 + q/r) t) x(0)
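The closed-form root p = r(−1 + √(1 + q/r)) can be checked against a numerical Riccati solver; the weights q = 3, r = 1 below are illustrative choices (for which the closed form gives p = 1).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Scalar example from the slides: xdot = -x + u, cost integral(q x^2 + r u^2)
q, r = 3.0, 1.0
A, B = np.array([[-1.0]]), np.array([[1.0]])
Q, R = np.array([[q]]), np.array([[r]])

# Positive solution of PA + A'P - P B R^-1 B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)
p_closed_form = r * (-1.0 + np.sqrt(1.0 + q / r))  # analytic root from the slides
K = np.linalg.inv(R) @ B.T @ P                     # u = -K x

print(P.item(), p_closed_form, K.item())
```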
LQ control formulation
– Tuning of the weighting terms: some simple considerations
▪ Normalization of variables
▪ Limitation of the number of coefficients: Diagonal weighting matrices
▪ /!\ Only the ratio "Q / R" matters: multiplying Q and R by the same value does not change the optimal control law.
▪ Idea: start from a tuning with all values equal to 1, then adjust the weighting coefficients by a trial-and-error approach until the specifications are fulfilled.
LQ control formulation
– Infinite horizon LQ control with a criterion on the output
    ẋ(t) = A x(t) + B u(t)
    y(t) = C x(t)
    J = ∫₀⁺∞ ( y(t)ᵀ Q̃ y(t) + u(t)ᵀ R u(t) ) dt, with Q = Cᵀ Q̃ C
    i.e. J = ∫₀⁺∞ ( x(t)ᵀ Q x(t) + u(t)ᵀ R u(t) ) dt
– Infinite horizon LQ control – discrete-time case
▪ Open-loop representation: x[k+1] = F x[k] + G u[k]
▪ Criterion:
    J = Σₖ₌₀⁺∞ ( x[k]ᵀ Q x[k] + u[k]ᵀ R u[k] ) ; Q = Qᵀ ≥ 0 ; R = Rᵀ > 0
▪ Resulting LQ control law:
    u[k] = −K x[k], with K = ( R + Gᵀ P G )⁻¹ Gᵀ P F
    P = Q + Fᵀ P F − Fᵀ P G ( R + Gᵀ P G )⁻¹ Gᵀ P F   (algebraic Riccati equation in discrete-time)
    Matlab: help dlqr ; help idare
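The discrete-time design (what Matlab's dlqr/idare solve) can be sketched with `scipy.linalg.solve_discrete_are`; the sampled double integrator below is an illustrative model, not one from the course.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative discretized double integrator (Ts = 0.1 s, hypothetical values)
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
G = np.array([[0.005],
              [0.1]])
Q = np.eye(2)          # state weighting, Q = Q' >= 0
R = np.array([[1.0]])  # control weighting, R = R' > 0

# Discrete-time algebraic Riccati equation
P = solve_discrete_are(F, G, Q, R)
K = np.linalg.inv(R + G.T @ P @ G) @ G.T @ P @ F

# The LQ closed loop F - G K is Schur stable (all eigenvalues inside the unit circle)
print(np.abs(np.linalg.eigvals(F - G @ K)).max() < 1.0)  # True
```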
LQ control example
– Example: lateral motion of an aircraft
    d/dt [ β(t) ; p(t) ; r(t) ; Φ(t) ] = [ −0.75 0.006 −1 0.037 ; −12.9 −0.75 0.387 0 ; 4.31 0.024 −0.17 0 ; 0 1 0 0 ] [ β(t) ; p(t) ; r(t) ; Φ(t) ]
                                         + [ 0.0012 0.0092 ; 6.05 0.952 ; −0.416 −1.76 ; 0 0 ] [ δa(t) ; δd(t) ]
    β: sideslip angle (of the velocity vector) ; p: roll velocity ; r: yaw rate ; Φ: roll angle
    δa: aileron steering angle ; δd: steering angle from stabilizer
– Criterion:
    J = ∫₀⁺∞ ( q1 β(t)² + q2 Φ(t)² + r1 δa(t)² + r2 δd(t)² ) dt
LQ control example
– Example: lateral motion of an aircraft
[Figure: response to an initial condition β = 0.1 rad]
▪ Tuning #1: q1 = q2 = 1 ; r1 = r2 = 1
▪ Tuning #2: q1 = 1 ; q2 = 10 ; r1 = 3 ; r2 = 1
LQ control example
– Example: lateral motion of an aircraft
[Figure: response to an initial condition Φ = 1 rad]
▪ Tuning #1: q1 = q2 = 1 ; r1 = r2 = 1
▪ Tuning #2: q1 = 1 ; q2 = 10 ; r1 = 3 ; r2 = 1
LQ control
– Constant setpoint and constant & measurable disturbance (continuous case, MIMO)
▪ System in open loop:
    ẋ(t) = A x(t) + B u(t) + B′ d(t)
    y(t) = C x(t)
▪ Control law: u(t) = −K x(t) + M e(t) + N d(t)
▪ Gains:
    M = −[ C (A − BK)⁻¹ B ]⁻¹
    N = −[ C (A − BK)⁻¹ B ]⁻¹ [ C (A − BK)⁻¹ B′ ]
▪ Unique solution if dim(u) = dim(y) and [ A B ; C 0 ] is invertible
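The M and N formulas can be exercised numerically; the SISO plant and disturbance input matrix B′ below are illustrative values (not from the course), and the final line checks that the steady-state output equals the setpoint despite the disturbance.

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative SISO system with a measured disturbance input B' (hypothetical values)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Bp = np.array([[1.0], [0.0]])   # disturbance input matrix B'
C = np.array([[1.0, 0.0]])

K = place_poles(A, B, [-4.0, -5.0]).gain_matrix
Acl = A - B @ K

# Gains from the slides: M tracks the setpoint, N cancels the measured disturbance
M = -np.linalg.inv(C @ np.linalg.inv(Acl) @ B)
N = -np.linalg.inv(C @ np.linalg.inv(Acl) @ B) @ (C @ np.linalg.inv(Acl) @ Bp)

# Steady-state check: 0 = Acl x + B (M e + N d) + B' d must give y = e
e, d = 1.0, 0.5
x_ss = -np.linalg.inv(Acl) @ (B @ (M * e + N * d) + Bp * d)
print((C @ x_ss).item())   # equals the setpoint e
```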
Pole placement state feedback control
– From previous developments
▪ With the system
    ẋ(t) = A x(t) + B u(t) ; y(t) = C x(t)
    dim(x) = n ; dim(u) = m ; dim(y) = l ; l = m
▪ Assuming
– that (A, B) is at least stabilizable: a state feedback gain K can be calculated such that the closed loop is stable
– that det[ A B ; C 0 ] ≠ 0: the following matrices can be calculated
    M = −[ C (A − BK)⁻¹ B ]⁻¹
    N = −[ C (A − BK)⁻¹ B ]⁻¹ [ C (A − BK)⁻¹ B′ ]
▪ The control u(t) = −K x(t) + M e(t) + N d(t)
– guarantees stability of the closed-loop system
– assures that y(t) = e(t) in steady state for any constant setpoint and measured constant disturbance
▪ But …
Pole placement state feedback control
– … Limitations of this structure
▪ Tracking the setpoint and rejecting the disturbance requires accurate knowledge of the model of the system (A, B, C, D), which is never available in practice
➔ model uncertainties result in an imperfect behavior in steady state
– Proposal of a strategy that performs better: state feedback control with integral action
➔ an integral action is well known as the key ingredient to cancel steady-state errors for constant setpoints and disturbances
➔ it does not require an accurate knowledge of the system model
➔ disturbances do not need to be measured
Handout
State feedback control with integral action 3.2.5
– Objective: y(t) must reach e(t) = ec constant, despite a constant disturbance d(t) = d̄
– Augmented system:
    xa = [ x ; q ] ; da = [ d̄ ; ec ] ; Aa = [ A 0 ; C 0 ] ; Ba = [ B ; 0 ] ; B′a = [ B′ 0 ; 0 −I ] ; Ca = [ C 0 ]
State feedback control with integral action
– With the augmented system
    ẋa(t) = Aa xa(t) + Ba u(t) + B′a da(t)
    y(t) = Ca xa(t)
▪ To apply the Kalman criterion to (Aa, Ba), note that the controllability matrix is given by:
    QGa = [ B AB ⋯ A^(n+l−1) B ; 0 CB CAB ⋯ C A^(n+l−2) B ] = [ A B ; C 0 ] · [ 0 B AB ⋯ A^(n+l−2) B ; Im 0 0 ⋯ 0 ]
▪ K = [ K1 K2 ] can be calculated such that, with u(t) = −K xa(t) = −K1 x(t) − K2 q(t), the closed-loop system is asymptotically stable
➔ But: what is the equilibrium point of the system?
State feedback control with integral action
– At equilibrium:
    0 = ( Aa − Ba K ) x̄a + B′a da
    ȳ = Ca x̄a, with x̄a = [ x̄ ; q̄ ] and da = [ d̄ ; ec ]
– Leading to:
    x̄a = −( Aa − Ba K )⁻¹ B′a da
    ȳ = C x̄ = ec
– Thus:
▪ asymptotically stable closed-loop system
▪ exact tracking of a constant setpoint
▪ even with non-measured constant disturbances
State feedback control with integral action
– Pole placement state feedback control with integral action
▪ Check the conditions:
    m ≥ l ; rank[ A B ; C 0 ] = n + l ; (A, B) controllable
▪ Introduce (Aa, Ba) and determine K placing the closed-loop eigenvalues at any desired (stable) locations
    q(t) = ∫₀ᵗ ( y(τ) − e(τ) ) dτ ; q̇(t) = y(t) − e → 0 ⇒ y(t) → e
▪ To summarize:
    u(t) = −K xa(t) = −K1 x(t) − K2 q(t)
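The augmented-system construction above can be sketched in Python; the second-order plant is an illustrative choice (not from the course), for which rank[A B; C 0] = n + l holds, so the augmented pair is controllable and the poles can be placed freely.

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative SISO plant (hypothetical values)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n, l = 2, 1

# Augmented state xa = (x, q), with qdot = y - e (integral of the tracking error)
Aa = np.block([[A, np.zeros((n, l))],
               [C, np.zeros((l, l))]])
Ba = np.vstack([B, np.zeros((l, 1))])

# Pole placement on the augmented pair (Aa, Ba)
K = place_poles(Aa, Ba, [-3.0, -4.0, -5.0]).gain_matrix
K1, K2 = K[:, :n], K[:, n:]   # u = -K1 x - K2 q

# The closed loop is asymptotically stable
Acl = Aa - Ba @ K
print(np.linalg.eigvals(Acl).real.max() < 0)  # True
```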
State feedback control with integral action
– LQ control with integral action
▪ Introduce (Aa, Ba) and apply the standard LQ design procedure:
    Qa = Qaᵀ = [ Q1 0 ; 0 Q2 ] ≥ 0 ; R = Rᵀ > 0 ; Qa = Haᵀ Ha
▪ Check the conditions:
    (Aa, Ba) stabilizable ; (Aa, Ha) detectable
    m ≥ l ; rank[ A B ; C 0 ] = n + l ; (A, B) stabilizable
State feedback control with integral action
– Setpoint feedforward
▪ Objective: improve the system response by introducing an anticipation of the control value at equilibrium
▪ Control law:
    u(t) = −K1 x(t) − K2 q(t) + Kc ec(t)
▪ In steady state:
    [ 0 ; ec ] = [ A B ; C 0 ] [ xc ; uc ]
    uc = [ 0 I ] [ xc ; uc ] = [ 0 I ] [ A B ; C 0 ]⁻¹ [ 0 ; I ] ec ⇒ uc = Kc ec
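The feedforward gain Kc follows directly from the block-matrix relation above; the plant values below are again illustrative (not from the course), and the last line checks that the equilibrium reached with uc = Kc ec indeed gives y = ec.

```python
import numpy as np

# Illustrative SISO plant (hypothetical values); n = 2 states, l = m = 1
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Equilibrium (xc, uc) such that y = ec: solve [A B; C 0] [xc; uc] = [0; ec]
S = np.block([[A, B], [C, np.zeros((1, 1))]])
sel_u = np.array([[0.0, 0.0, 1.0]])          # picks uc out of (xc, uc)
Kc = sel_u @ np.linalg.inv(S) @ np.array([[0.0], [0.0], [1.0]])

# Check: the equilibrium obtained for a given setpoint ec gives y = ec
ec = 2.0
xc_uc = np.linalg.inv(S) @ np.array([[0.0], [0.0], [ec]])
print(Kc.item(), (C @ xc_uc[:2]).item())
```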
State feedback control with integral action
– Example: inverted pendulum on a cart
▪ Linearized model, with state x(t) = [ z(t) ; θ(t) ; ż(t) ; θ̇(t) ]:
    A = [ 0 0 1 0 ; 0 0 0 1 ; 0 1 0 0 ; 0 22 0 0 ] , B = [ 0 ; 0 ; 1 ; 2 ]
    C = [ 1 0 0 0 ] , D = 0