Chapter 3


Control Theory

Chapter 3: State feedback control


Didier DUMUR
didier.dumur@centralesupelec.fr
Current issues and main parts of the course

– Current industrial issues
  ▪ Predictive control / optimal control
  ▪ Interconnected, MIMO, large-scale systems
  ▪ Constraints, robustness, resilience
  ▪ Sensorless control
  ▪ Distributed, hierarchical control
  ▪ Data-driven control

– Main parts of the course
  ▪ State space representation, controllability, observability (time-domain approach: deals with nonlinear MIMO systems)
  ▪ Linear quadratic control (introduction to optimal control; takes constraints and uncertainties into account)
  ▪ State feedback control / observers – Kalman filters (main concept required for sensorless control)
  ▪ Performance and robustness analysis
  ▪ Industrial conferences to conclude the course

Main goal: introduce the key concepts needed to deal with current issues in the control of complex systems
Course overview

– Introduction
– Chapter 1: Structure of controlled systems
– Chapter 2: State space representation, controllability, observability
– Chapter 3: State feedback control
– Chapter 4: Observers and estimated state feedback control
– Chapter 5: Performance and robustness analysis of a control law
Chapter 3 overview

1. Aim of controllers
   a. Interest of a closed-loop structure
   b. Stability, performance, robustness
2. Pole placement state feedback control
   a. State feedback control principle
   b. Calculation of the state feedback matrix
   c. Additional gain calculation for steady-state purposes
3. Example: magnetic levitation system
4. LQ control
5. Example: lateral motion of an aircraft
6. Additional integral action in the control structure
7. Example: inverted pendulum
8. Conclusion
Interest of a closed-loop structure

– Open-loop structure: the controller acts on the system without any measurement of the output
– Closed-loop structure: the controller uses the measured output, compared with the setpoint, to compute the control signal
  (figure: electronics.stackexchange.com)

… but what is really expected from the controller?
Qualities of a controlled system (Handout 1.3)

– A feedback system must fulfill three qualities:
  ▪ Stability: stabilize an unstable system, or improve the damping of a system
  ▪ Rapidity: guarantee a dynamic behavior which ensures a suitable settling time (e.g. avoid too slow a behavior)
  ▪ Accuracy (steady-state or transient): decrease or cancel errors with respect to desired setpoints

A feedback system ensures the compromise between stability on the one hand and rapidity-accuracy on the other hand.
(figure: step response s(t) to a setpoint e(t) of a stable system with good damping; from P. Albertos, I. Mareels, Feedback and Control for Everyone)
Qualities of a controlled system

– Second-order system: explicit relations (ω0, ξ) → (D, tm)

      G(s) = ω0² / ( s² + 2 ξ ω0 s + ω0² )

  ▪ Overshoot:      D = (ymax − y∞) / y∞ = exp( −π ξ / √(1 − ξ²) )
  ▪ Settling time:  tr = time after which the response stays within ±5% of y∞
  ▪ Peak time:      tm = π / ( ω0 √(1 − ξ²) ),  with the rough design rule ω0 tm ≈ 3

(figure: step response of a second-order linear time-invariant system, showing the overshoot D%, the ±5% band around the steady-state value K, the peak time tm and the settling time tr)

Effect of unstable zeros: undershoot
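As a small numerical sketch of the two relations above (illustrative Python, not part of the original slides; the chosen values ξ = 0.7, ω0 = 300 rad/s are those used later in the chapter):

```python
import math

def overshoot(xi):
    # Relative overshoot D = exp(-pi*xi / sqrt(1 - xi^2)) of a second-order step response
    return math.exp(-math.pi * xi / math.sqrt(1.0 - xi**2))

def peak_time(xi, w0):
    # Peak time tm = pi / (w0 * sqrt(1 - xi^2))
    return math.pi / (w0 * math.sqrt(1.0 - xi**2))

D = overshoot(0.7)         # about 0.046, i.e. roughly 4.6 % overshoot
tm = peak_time(0.7, 300.0) # about 0.0147 s
```

A damping of 0.7 thus comfortably satisfies an overshoot specification of 10%.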


Qualities of a controlled system

– Stability issue: consider the eigenvalues of the evolution matrix of the state space representation
– Rapidity issue: for a second-order system, explicit relations link (ω0, ξ) and (D, tm):

      D = e^( −π ξ / √(1 − ξ²) )        ω0 tm ≈ 3

(figure: overshoot D% of the closed-loop step response of a second-order system, decreasing from 100% at ξ = 0 to 0% at ξ = 1, as a function of the damping ξ of the closed-loop poles)
Qualities of a controlled system

– Rapidity issue: some examples (closed-loop step responses, normalized time scale ωc t)

      Open-loop transfer function                  ΔG     Δφ    ωc     ωc tm
  (1)  1 / ( s (1 + s)(1 + 0.5 s) )                3.0    33°   0.75   3.06
  (2)  100 / ( (s² + 0.4 s + 1)(s² + s + 100) )    20.1   32°   1.36   3.07
  (3)  5 / ( (1 + 0.2 s)(1 + 0.1 s)² )             1.8    22°   8.40   3.34
  (4)  (1 + 2 s) / ( s (1 + s)(1 + 0.5 s) )        ∞      71°   1.41   2.67
  (5)  0.3 (1 − s) / ( s (1 + s)(1 + 0.5 s) )      2.5    49°   0.30   2.56
  (6)  −5 (1 + 3 s) / ( (1 − 12 s)(1 + 0.3 s)² )   0.2    55°   1.16   3.34
Qualities of a controlled system

– Steady-state accuracy
  ▪ Objective: no steady-state error between the output y(t) and the setpoint e(t), despite the presence of disturbances b(t)

(block diagram: Controller → u(t) → System → y(t), with disturbance b(t) entering the system and measurement ym(t) fed back to the controller; figure: setpoint e(t) and response y(t) converging to a zero error)

Compromise: Stability vs Rapidity & Accuracy
State feedback control (Handout 3.1)

– Principle of state feedback control
  ▪ Starting from the open-loop state space representation, linear time-invariant continuous-time case:

      x'(t) = A x(t) + B u(t)
      y(t)  = C x(t) + D u(t)        dim(x) = n ; dim(u) = m

  ▪ State feedback control law, assuming the state is available at any time:

      u(t) = −K x(t) + kc e(t)

  ▪ Resulting in the closed-loop state space representation:

      x'(t) = (A − B K) x(t) + B kc e(t)
      y(t)  = C x(t) + D u(t)

(block diagram: e(t) → kc → summing junction → u(t) → System → y(t), with K x(t) subtracted at the summing junction)

More information: Brian Douglas, https://engineeringmedia.com/videos
https://www.youtube.com/watch?time_continue=2&v=FXSpHy8LvmY&feature=emb_logo
State feedback control (Handout 3.1)

– Principle of state feedback control
  ▪ Starting from the open-loop state space representation, linear time-invariant discrete-time case:

      x[k+1] = F x[k] + G u[k]
      y[k]   = H x[k] + J u[k]       dim(x) = n ; dim(u) = m

  ▪ State feedback control law, assuming the state is available at any time:

      u[k] = −K x[k] + kc e[k]

  ▪ Resulting in the closed-loop state space representation:

      x[k+1] = (F − G K) x[k] + G kc e[k]
      y[k]   = H x[k] + J u[k]
Pole placement state feedback control

– Main idea: impose the eigenvalues of the closed-loop matrix at specified values
– Motivation
  ▪ Response to an initial condition, continuous-time case:

      x(t) = e^{(A − B K)(t − t0)} x(t0) = Σ_{i=1..n} αi e^{λi (t − t0)},   λi = eigenvalues of A − B K
      (for non-multiple eigenvalues)

  ▪ Response to an initial condition, discrete-time case:

      x[k] = (F − G K)^{k − k0} x[k0] = Σ_{i=1..n} αi λi^{k − k0},   λi = eigenvalues of F − G K
      (for non-multiple eigenvalues)

➔ Selecting specific eigenvalues enables shaping the system response
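The modal expansion above can be checked numerically; the sketch below (illustrative Python with an arbitrary 2x2 stable matrix, not taken from the slides) compares the matrix-exponential response with the sum of exponential modes:

```python
import numpy as np
from scipy.linalg import expm

# Closed-loop matrix with eigenvalues -1 and -2 (illustrative values)
Acl = np.array([[0.0, 1.0], [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])
t = 0.5

# Direct computation through the matrix exponential
x_direct = expm(Acl * t) @ x0

# Modal decomposition x(t) = sum_i alpha_i * exp(lambda_i * t) * v_i
lam, V = np.linalg.eig(Acl)
alpha = np.linalg.solve(V, x0)          # coordinates of x0 in the eigenbasis
x_modal = (V @ (alpha * np.exp(lam * t))).real
```

Both computations agree, which is exactly why placing the eigenvalues λi shapes the transient.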
Pole placement state feedback control

– Theoretical justification
  ▪ Wonham theorem (continuous- and discrete-time cases): there exists a matrix K for any choice of the eigenvalues if and only if the system is fully controllable
  ▪ A non fully controllable system can be structured, by means of a change of basis, under the following canonical form (continuous-time case):

      ( x1'(t) )   ( A11  A12 ) ( x1(t) )   ( B1 )
      (        ) = (          ) (       ) + (    ) u(t)
      ( x2'(t) )   (  0   A22 ) ( x2(t) )   (  0 )

      y(t) = ( C1  C2 ) ( x1(t) ; x2(t) ) + D u(t)

  ▪ In this case, the dynamics of the non-controllable part (the x2(t) vector) cannot be modified through the action of the control law:

      x2'(t) = A22 x2(t)
Pole placement state feedback control

– Step #1
  ▪ Determine K in order to impose the eigenvalues of the matrix A − BK (i.e. the poles of the controlled system) at specified values:

      Open-loop form                 Control law                     Closed-loop form
      x'(t) = A x(t) + B u(t)        u(t) = −K x(t) + kc e(t)        x'(t) = (A − BK) x(t) + B kc e(t)
      y(t)  = C x(t) + D u(t)                                        y(t)  = C x(t) + D u(t)

  ▪ The imposed eigenvalues of the matrix A − BK (i.e. the poles of the controlled system) are the roots of the characteristic polynomial of the (A − BK) matrix.
  ▪ Solve the characteristic equation to derive K; a solution exists for any pole choice if the system is fully controllable (and is unique in the single-input case):

      det[ λ I − (A − BK) ] = ∏_{i=1..n} (λ − λi)

  (similar approach in discrete-time)
Pole placement state feedback control

– Some approaches to determine the K matrix

  ▪ Objective:  det[ λ I − (A − B K) ] = ∏_{i=1..n} (λ − λi) ;  λi chosen eigenvalues     (Matlab: help place)

  ▪ Ackermann formula (if dim(u) = 1)
    – (λ − λ1)⋯(λ − λn) = λⁿ + α1 λ^{n−1} + … + α_{n−1} λ + αn
    – P(A) = Aⁿ + α1 A^{n−1} + … + α_{n−1} A + αn In
    – K = ( 0  0  …  0  1 ) [ B  AB  A²B  …  A^{n−1}B ]⁻¹ P(A)

  ▪ Considering the eigenvectors (any dim(u))
    – Find vi, wi such that (A − λi In) vi − B wi = 0,  i = 1, …, n
    – Leading to K = [ w1 … wn ] [ v1 … vn ]⁻¹

  (same approach in discrete-time, replacing A and B by F and G)
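Alongside the Matlab `place` routine named above, the same computation can be sketched in Python with SciPy's `place_poles` (an illustrative alternative, not part of the original slides; the system is the example treated a few slides further on):

```python
import numpy as np
from scipy.signal import place_poles

# Control-canonical model of G(s) = 1e7 / (s^3 + 1100 s^2 + 1e5 s)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -1e5, -1100.0]])
B = np.array([[0.0], [0.0], [1.0]])

# Desired closed-loop poles: -1000 plus a complex pair with xi = 0.7, w0 = 300 rad/s
xi, w0 = 0.7, 300.0
poles = np.array([-1000.0,
                  complex(-xi * w0,  w0 * np.sqrt(1 - xi**2)),
                  complex(-xi * w0, -w0 * np.sqrt(1 - xi**2))])

K = place_poles(A, B, poles).gain_matrix   # 1x3 state-feedback gain
```

In control canonical form the gain can be read off the coefficients directly; here K is approximately (9e7, 4.1e5, 320), as found later in the chapter.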
Pole placement state feedback control

– Step #2
  ▪ Determine kc in order to ensure a zero steady-state error for a step setpoint (SISO case, dim(y) = dim(u) = 1):

      x'(t) = (A − BK) x(t) + B kc e(t)      in steady state:   0 = (A − BK) x∞ + B kc e   ⇒   x∞ = −(A − BK)⁻¹ B kc e
      y(t)  = C x(t)                                            y∞ = C x∞

  ▪ To ensure y∞ = e in steady state, the kc gain is derived from:

      y∞ = −C (A − BK)⁻¹ B kc e  and  y∞ = e   ⇒   kc = −1 / [ C (A − BK)⁻¹ B ]

  ▪ Similar approach in discrete-time, resulting in:

      kc = 1 / [ H (I − F + G K)⁻¹ G ]

Important: the steady-state error is non-zero in case of a constant disturbance
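The kc formula above is a one-liner once K is known; the sketch below (illustrative Python, mirroring the Matlab line `kc = -1/(C*inv(A-B*K)*B)` given later in the slides) applies it to the chapter's canonical-form example:

```python
import numpy as np

def static_gain(A, B, C, K):
    # kc = -1 / ( C (A - B K)^{-1} B ), SISO continuous-time case
    return -1.0 / (C @ np.linalg.solve(A - B @ K, B)).item()

# Canonical form of G(s) = 1e7 / (s^3 + 1100 s^2 + 1e5 s) from this chapter
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -1e5, -1100.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1e7, 0.0, 0.0]])
K = np.array([[9.0e7, 4.1e5, 320.0]])  # pole-placement gain of the example

kc = static_gain(A, B, C, K)           # the slides find kc = 9 for this system
```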
Pole placement state feedback control

– Some rules for the choice of the eigenvalues (continuous-time case)
  ▪ real parts strictly negative (stability)
  ▪ real or complex-conjugate values (matrices with real coefficients)
  ▪ moving the eigenvalues to the left in the complex plane increases the rapidity of the response of the closed-loop system, and its damping
  ▪ choosing eigenvalues of (A − BK) far from those of A leads to large gains in the K matrix, and therefore a high magnitude of the control signal u(t)

– The discrete-time case can be deduced through the change λi → e^{λi T} (T: sampling period)
Pole placement state feedback control (Handout 3.1.2)

– Choice of the eigenvalues – examples

(figures, two slides: example locations of closed-loop eigenvalues in the complex plane, Im(λ) vs Re(λ))
Pole placement state feedback control

– A simple example

      G(s) = 100 / [ s (1 + s/100)(1 + s/1000) ]

– Specifications
  ▪ Overshoot of the closed-loop step response less than 10%
    ⇒ damping factor ξ = 0.7 of the equivalent 2nd-order system ω0² / (s² + 2 ξ ω0 s + ω0²)
  ▪ Peak time of the closed-loop step response tm = 0.01 s
    ⇒ natural frequency ω0 = 300 rad/s of the equivalent 2nd-order system
Pole placement state feedback control

– Transfer function

      G(s) = 100 / [ s (1 + s/100)(1 + s/1000) ] = 10⁷ / ( s³ + 1100 s² + 10⁵ s )

– State space representation (control canonical form)

      x'(t) = ( 0    1      0   ) x(t) + ( 0 ) u(t)
              ( 0    0      1   )        ( 0 )
              ( 0  −10⁵  −1100  )        ( 1 )

      y(t) = ( 10⁷  0  0 ) x(t)

– Controllability – Observability

      QG = ( 0     0        1   )                  Q0 = ( 10⁷   0    0  )
           ( 0     1     −1100  )  rank(QG) = 3         (  0   10⁷   0  )  rank(Q0) = 3
           ( 1  −1100  1110000  )                       (  0    0   10⁷ )
Pole placement state feedback control

– State feedback: pole placement strategy

      ξ = 0.7, ω0 = 300 rad/s  ⇒  2 complex-conjugate poles; the fastest open-loop pole is kept (ω1 = 1000 rad/s)

– Determination of K:  det[ λ I − (A − BK) ] = (λ + ω1)(λ² + 2 ξ ω0 λ + ω0²), i.e.

      det[ λ I − (A − BK) ] = (λ + 1000)(λ² + 420 λ + 90000)
      ⇒  k1 = 9·10⁷ ;  k2 = 410 000 ;  k3 = 320

– Determination of kc:  kc = −1 / [ C (A − BK)⁻¹ B ]  ⇒  kc = 9
Pole placement state feedback control

– Matlab code

clear all
close all

% State-space representation
A = [0 1 0; 0 0 1; 0 -10^5 -1100];
B = [0; 0; 1];
C = [10^7 0 0];

eig(A)                    % eigenvalues in open loop

Qc = ctrb(A,B); rank(Qc)  % controllability
Qo = obsv(A,C); rank(Qo)  % observability

xsi = 0.7; w0 = 300;
p1 = -xsi*w0 + i*sqrt(1-xsi^2)*w0;
p2 = -xsi*w0 - i*sqrt(1-xsi^2)*w0;
P = [-1000; p1; p2]       % closed-loop eigenvalues
K = place(A,B,P)          % state feedback vector
kc = -1/(C*inv(A-B*K)*B)  % additional gain
Pole placement state feedback control:
application to a magnetic levitation system

– Physical equations (coil of resistance R and inductance L; ball at position z below the magnet)

      L dI(t)/dt + R I(t) = U(t)
      m d²z(t)/dt² = c I(t) / ( x0 − z(t) )² − m g

– Numerical values: R = 10 Ω ; L = 0.01 H ; g = 10 m s⁻² ; i0 = 0.5 A ; x0 = 0.0125 m ; γ = 4000 V m⁻¹

  ▪ equilibrium at z = 0:   c i0 / x0² − m g = 0  ⇒  I∞ = m g x0² / c = i0 ;  U∞ = R I∞ = R i0
  ▪ linearization around the equilibrium:   I(t) = i0 + i1(t) ;  U(t) = U∞ + u1(t)
Pole placement state feedback control:
application to a magnetic levitation system

– Model
  ▪ Linearized model around z = 0:

      L di1(t)/dt + R i1(t) = u1(t)
      d²z(t)/dt² = (g/i0) i1(t) + (2g/x0) z(t)
      measurement:  Vz(t) = γ z(t)

  ▪ State:  x(t) = ( i1(t), z(t), z'(t) )ᵀ

      d/dt ( i1 )   ( −1000    0    0 ) ( i1 )   ( 100 )
           ( z  ) = (    0     0    1 ) ( z  ) + (  0  ) u1(t)
           ( z' )   (   20  1600    0 ) ( z' )   (  0  )

      Vz(t) = ( 0  4000  0 ) ( i1 ; z ; z' )

  (numerical values: R = 10 Ω ; L = 0.01 H ; g = 10 m s⁻² ; i0 = 0.5 A ; x0 = 0.0125 m ; γ = 4000 V m⁻¹)
Pole placement state feedback control:
application to a magnetic levitation system

– Open-loop analysis
  ▪ eigenvalues: −1000, −40, +40  →  unstable system

  ▪ Controllability:

      [ B  AB  A²B ] = ( 100   −10⁵     10⁸   )
                       (  0      0     2·10³  )   of rank 3 → OK
                       (  0    2·10³  −2·10⁶  )

  ▪ Observability:

      ( C   )   ( 0      4·10³     0    )
      ( CA  ) = ( 0        0     4·10³  )   of rank 3 → OK
      ( CA² )   ( 8·10⁴  6.4·10⁶   0    )
Pole placement state feedback control:
application to a magnetic levitation system

– State feedback control
  ▪ System in open loop:

      d/dt ( i1 )   ( −1000    0    0 ) ( i1 )   ( 100 )
           ( z  ) = (    0     0    1 ) ( z  ) + (  0  ) u1(t)
           ( z' )   (   20  1600    0 ) ( z' )   (  0  )

  ▪ State feedback control:  u1(t) = −( k1  k2  k3 ) ( i1(t) ; z(t) ; z'(t) ) + kc e(t)

  ▪ Closed-loop system:

      d/dt ( i1 )   ( −1000 − 100 k1  −100 k2  −100 k3 ) ( i1 )   ( 100 kc )
           ( z  ) = (        0           0        1    ) ( z  ) + (   0    ) e(t)
           ( z' )   (       20         1600       0    ) ( z' )   (   0    )

      Vz(t) = ( 0  4000  0 ) ( i1 ; z ; z' )
Pole placement state feedback control:
application to a magnetic levitation system

– State feedback control
  ▪ Objective:  D ≤ 10% ;  tm ≤ 0.03 s
  ▪ Eigenvalues of A: −1000, −40, +40
  ▪ Chosen eigenvalues: −1000, plus 2 complex-conjugate eigenvalues with ξ = 0.6, ω0 = 130 rad/s:

      det[ λ I − (A − B K) ] = (λ + 1000)(λ² + 2·0.6·130 λ + 130²)

  ▪ Calculation of K using the "place" routine in Matlab:

      k1 = 1.560 ;  k2 = 9.375·10³ ;  k3 = 87.25
Pole placement state feedback control:
application to a magnetic levitation system

– State feedback control
  ▪ Calculation of kc : in steady state, the closed-loop equations give

      ( 0 )   ( −1000 − 100 k1  −100 k2  −100 k3 ) ( i1∞ )   ( 100 kc )
      ( 0 ) = (        0           0        1    ) ( z∞  ) + (   0    ) e
      ( 0 )   (       20         1600       0    ) ( z'∞ )   (   0    )

      ⇒  z'∞ = 0 ;  i1∞ = −80 z∞ ;  Vz∞ = 4000 z∞ = −4000 kc e / (800 + 80 k1 − k2)

  ▪ Imposing Vz∞ = e:

      kc = −(800 + 80 k1 − k2) / 4000 = 2.1125
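The whole maglev design (pole placement plus kc) can be reproduced numerically; the sketch below is an illustrative Python version of the Matlab code given on the next slide, using SciPy instead of the Control System Toolbox:

```python
import numpy as np
from scipy.signal import place_poles

# Linearized maglev model (states: i1, z, dz/dt)
A = np.array([[-1000.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [20.0, 1600.0, 0.0]])
B = np.array([[100.0], [0.0], [0.0]])
C = np.array([[0.0, 4000.0, 0.0]])

# Closed-loop poles: keep -1000, add a pair with xi = 0.6, w0 = 130 rad/s
xi, w0 = 0.6, 130.0
poles = np.array([-1000.0,
                  complex(-xi * w0,  w0 * np.sqrt(1 - xi**2)),
                  complex(-xi * w0, -w0 * np.sqrt(1 - xi**2))])

K = place_poles(A, B, poles).gain_matrix                   # ~ (1.56, 9375, 87.25)
kc = (-1.0 / (C @ np.linalg.solve(A - B @ K, B))).item()   # ~ 2.1125
```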
Pole placement state feedback control:
application to a magnetic levitation system

– Closed-loop system (block diagram: e → kc → summing junction, with state feedback gains k1 on i1, k2 on z, k3 on dz/dt; the control u1 is added to the equilibrium voltage U∞ and applied to the nonlinear system)

– Step response of the nonlinear system (figures: output Vz (V) and control signal U (V) versus time)
Pole placement state feedback control:
application to a magnetic levitation system

– Matlab code

clear all
close all

% State-space representation
A = [-1000 0 0; 0 0 1; 20 1600 0];
B = [100; 0; 0];
C = [0 4000 0];

eig(A)                    % eigenvalues in open loop

Qc = ctrb(A,B); rank(Qc)  % controllability
Qo = obsv(A,C); rank(Qo)  % observability

xsi = 0.6; w0 = 130;
p1 = -xsi*w0 + i*sqrt(1-xsi^2)*w0;
p2 = -xsi*w0 - i*sqrt(1-xsi^2)*w0;
P = [-1000; p1; p2]       % closed-loop eigenvalues
K = place(A,B,P)          % state feedback vector
kc = -1/(C*inv(A-B*K)*B)  % additional gain
Pole placement state feedback control (Handout 3.1.4)

– State feedback control of partially controllable systems (SISO case)
  ▪ Assume a stabilizable system, with open-loop canonical state space representation

      ( x1'(t) )   ( A11  A12 ) ( x1(t) )   ( B1 )
      (        ) = (          ) (       ) + (    ) u(t)         dim(x1) = rank(QC) = q < n
      ( x2'(t) )   (  0   A22 ) ( x2(t) )   (  0 )

      x1: controllable part of the state ; x2: non-controllable part of the state

  ▪ State feedback law:  u(t) = −( K1  K2 ) x(t) + e(t)

  ▪ Leading to the closed-loop representation

      x'(t) = ( A11 − B1 K1   A12 − B1 K2 ) x(t) + B e(t),   with x(t) = ( x1(t) ; x2(t) )
              (      0            A22     )

  ▪ Eigenvalues of the closed loop:

      det( λ I − (A11 − B1 K1) ) · det( λ I − A22 ) = 0
Pole placement state feedback control

– Special case: constant setpoint and constant & measurable disturbance (continuous-time, SISO case: dim(y) = dim(u) = dim(d) = 1)
  ▪ Two gains must be determined, kc and kd
  ▪ Open-loop system:

      x'(t) = A x(t) + B u(t) + B′ d(t)
      y(t)  = C x(t)

  ▪ State feedback law:  u(t) = −K x(t) + kc e(t) + kd d(t)
  ▪ Leading to the closed loop:

      x'(t) = (A − BK) x(t) + B kc e(t) + (B kd + B′) d(t)
      y(t)  = C x(t)

  ▪ Requiring y(t) = e(t) in steady state (i.e. for e(t) = ec constant and d(t) = d constant) gives:

      kc = −[ C (A − BK)⁻¹ B ]⁻¹
      kd = −[ C (A − BK)⁻¹ B ]⁻¹ [ C (A − BK)⁻¹ B′ ]
Pole placement state feedback control

– Summary
  ▪ State feedback control with pole placement:

      u(t) = −K x(t) + kc e(t) + kd d(t)

  ▪ Design steps (SISO case)
    – Assumptions:
      ▪ fully controllable system (at least stabilizable)
      ▪ all states are measured
      ▪ in case of a constant disturbance, it is assumed to be measured
    – Choice of the desired closed-loop eigenvalues
    – Calculation of the state feedback gain K providing the expected eigenvalues
    – Determination of the kc and kd gains to cancel the steady-state error, in order to track a constant setpoint and to reject a constant disturbance
LQ control

– State feedback control
  ▪ How to choose the eigenvalues λi of the closed-loop system?
    – Determine the dominant poles and impose/keep the other poles if stable and faster
    – Choice of the eigenvectors in the MIMO case

➔ Possible solution: Linear Quadratic (LQ) control
  ▪ Principle: design of an optimal control providing the best compromise between the evolution of the states and the control values.
LQ control

– Need for a performance criterion (cost function) J which aims to improve:
  ▪ the settling time
  ▪ the peak time
  ▪ the overshoot
  ▪ the steady-state error
  ▪ …

– Typical form of the criterion:

      J = ∫₀^{+∞} ( x(t)ᵀ Q x(t) + u(t)ᵀ R u(t) ) dt

      min J   with   Q = Qᵀ ⪰ 0  and  R = Rᵀ ≻ 0

  ▪ Open-loop representation:  x'(t) = A x(t) + B u(t),  with initial condition x(0)
  ▪ State feedback law:  u(t) = −K x(t)

➔ Find K which minimizes the criterion J
LQ control: mathematical tools (Handout 2.2.2)

– Positive definite matrix
  ▪ A symmetric matrix P = Pᵀ is positive definite (written P = Pᵀ ≻ 0) if for every non-zero vector x ∈ ℝⁿ\{0}:  xᵀ P x > 0

– Lyapunov equation
  ▪ A matrix A ∈ ℝ^{n×n} is Hurwitz iff for all Q = Qᵀ ≻ 0 there exists P = Pᵀ ≻ 0 such that:

      Aᵀ P + P A + Q = 0

– Lyapunov function
  ▪ Consider V : D → ℝ with D a convex open set including the origin x = 0
    – V is positive definite if V(0) = 0 and V(x) > 0 for all x ≠ 0
    – V is a Lyapunov function if V is positive definite, continuously differentiable, and such that its time derivative along the system trajectories x' = f(x) verifies:

      V'(x) = (∂V/∂x)(x) · f(x) ≤ 0   for all x
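The Lyapunov equation above can be solved numerically; the sketch below (illustrative Python with an arbitrary Hurwitz matrix, not from the slides) uses SciPy and checks that the solution P is indeed positive definite:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A Hurwitz matrix (eigenvalues -1 and -2) and a positive definite Q
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.eye(2)

# scipy solves  M X + X M^T = RHS ; taking M = A^T and RHS = -Q
# yields the Lyapunov equation  A^T P + P A + Q = 0
P = solve_continuous_lyapunov(A.T, -Q)

residual = A.T @ P + P @ A + Q
```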
LQ control: mathematical tools (Handout 2.2.2)

– Lyapunov stability
  ▪ If V is a Lyapunov function:
    – V(0) = 0
    – V(x) > 0,  for all x ∈ D, x ≠ 0
    – V'(x) = (∂V/∂x)(x) · f(x) ≤ 0
    then the equilibrium point x = 0 is Lyapunov stable

  ▪ If V'(x) = (∂V/∂x)(x) · f(x) < 0 for all x ∈ D, x ≠ 0, then asymptotic stability is ensured
LQ control formulation (Handout 3.2)

– Infinite horizon LQ control
  ▪ Open-loop representation:  x'(t) = A x(t) + B u(t)
  ▪ Objective: find u(t) which, for all x(0), minimizes the cost function     (Matlab: K = lqr(A,B,Q,R))

      J = ∫₀^{+∞} ( x(t)ᵀ Q x(t) + u(t)ᵀ R u(t) ) dt ;   Q = Qᵀ ⪰ 0 ;  R = Rᵀ ≻ 0

  ▪ Existence of a solution: (A, B) stabilizable and (H, A) detectable, with H a full-rank rectangular matrix such that Q = Hᵀ H
  ▪ Solution:  u(t) = −K x(t) ;  K = R⁻¹ Bᵀ P   → state feedback form!

      P = Pᵀ, the unique solution ⪰ 0 of the algebraic Riccati equation:
      P A + Aᵀ P − P B R⁻¹ Bᵀ P + Q = 0          (Matlab: help icare)

  – In all cases the controlled system is stable: for every x(0), x(t) → 0 as t → +∞
  – If (H, A) is observable, P is positive definite

More information: Anthony Rossiter, Optimal control
https://www.youtube.com/watch?v=23UJQBy-7ME
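Beside the Matlab `lqr`/`icare` routines named above, SciPy can solve the same Riccati equation; the sketch below is an illustrative Python example on a double integrator (a system chosen here for illustration, not from the slides):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator (illustrative system)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.0])   # Q = H^T H with H = (1 0): (H, A) is observable
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)   # positive solution of the Riccati equation
K = np.linalg.inv(R) @ B.T @ P         # optimal state feedback gain

cl_poles = np.linalg.eigvals(A - B @ K)   # stable by construction
```

For this classical case the optimal gain works out to K = (1, √2), placing the closed-loop poles at damping 1/√2.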
LQ control formulation (Handout 3.2)

– Infinite horizon LQ control
  ▪ Proof: consider the Lyapunov function

      V(x) = xᵀ P x ;  P solution of the Riccati equation

  ▪ Writing the derivative of V, the cost function is shown to be minimal for

      u(t) = −K x(t) ;  K = R⁻¹ Bᵀ P    and equal to  J = x(0)ᵀ P x(0)

– Extension to systems with random disturbances
  ▪ System:  x'(t) = A x(t) + B u(t) + v(t),  with v a white noise vector: E[v(t)] = 0 ;  E[v(t) v(τ)ᵀ] = V δ(t − τ)
  ▪ Cost function:

      J = lim_{T→+∞} E[ (1/T) ∫₀^T ( x(t)ᵀ Q x(t) + u(t)ᵀ R u(t) ) dt ] ;   Q = Qᵀ ⪰ 0 ;  R = Rᵀ ≻ 0

  ▪ Solution: similar assumptions and solution as in the deterministic case
LQ control formulation (Handout 3.2)

– Simple example
  ▪ Open-loop system:  x'(t) = −x(t) + u(t),  with A = −1 ; B = 1
  ▪ LQ control:
    – Cost function:

      J = ∫₀^{+∞} ( x(t)ᵀ Q x(t) + u(t)ᵀ R u(t) ) dt ;   Q = Qᵀ ⪰ 0 ;  R = Rᵀ ≻ 0 ;  Q = Hᵀ H

      leading here to:

      J = ∫₀^{+∞} ( q x²(t) + r u²(t) ) dt ;   q > 0 ;  r > 0 ;  H = √q ≠ 0 full rank

    – (A, B) is controllable and (A, H) is observable
    – Control given by:  u(t) = −R⁻¹ Bᵀ P x(t)
      with P = p > 0 the unique solution of the algebraic Riccati equation P A + Aᵀ P − P B R⁻¹ Bᵀ P + Q = 0
LQ control formulation (Handout 3.2)

– Simple example
  ▪ Solution of the Riccati equation:

      P A + Aᵀ P − P B R⁻¹ Bᵀ P + Q = 0   ⇔   −2p − p²/r + q = 0   ⇔   p² + 2 r p − q r = 0

      ⇒   p = r ( −1 + √(1 + q/r) )    (positive root)

      thus  u(t) = −R⁻¹ Bᵀ P x(t) = −( −1 + √(1 + q/r) ) x(t)
      and   x'(t) = −x(t) + u(t) = −√(1 + q/r) x(t)

      x(t) = e^{ −√(1 + q/r) t } x(0)
      u(t) = ( 1 − √(1 + q/r) ) e^{ −√(1 + q/r) t } x(0)
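The closed-form solution above can be cross-checked against a numerical Riccati solver (an illustrative Python sketch; the weights q = 4, r = 1 are arbitrary choices, not from the slides):

```python
import math
import numpy as np
from scipy.linalg import solve_continuous_are

a, b = -1.0, 1.0     # system x' = -x + u
q, r = 4.0, 1.0      # illustrative weights

# Analytic positive root of p^2 + 2 r p - q r = 0
p_analytic = r * (-1.0 + math.sqrt(1.0 + q / r))

# Numerical solution of the same algebraic Riccati equation
P = solve_continuous_are(np.array([[a]]), np.array([[b]]),
                         np.array([[q]]), np.array([[r]]))
p_numeric = P.item()

# Closed-loop dynamics x' = -sqrt(1 + q/r) x
cl_pole = a - b * (p_numeric / r)
```

With q/r = 4, the closed-loop pole is −√5, consistent with the expression −√(1 + q/r).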
LQ control formulation

– Tuning of the weighting terms: some simple considerations
  ▪ Normalization of variables
  ▪ Limitation of the number of coefficients: diagonal weighting matrices

      J = ∫₀^{+∞} ( q1 x1(t)² + … + qn xn(t)² + r1 u1(t)² + … + rm um(t)² ) dt

  ▪ Iterations from an initial choice
    – increasing the coefficients of Q improves rapidity but increases the control values
    – increasing the coefficients of R smooths the control action but decreases rapidity
  ▪ /!\ Only the ratio "Q / R" matters: multiplying Q and R by the same value does not change the optimal control law.
  ▪ Idea: start from a tuning with all values equal to 1, then adjust the weighting coefficients by a trial-and-error approach until the specifications are fulfilled
LQ control formulation

– Infinite horizon LQ control with a criterion on the output

      x'(t) = A x(t) + B u(t)        J = ∫₀^{+∞} ( y(t)ᵀ Q̄ y(t) + u(t)ᵀ R u(t) ) dt,  with Q = Cᵀ Q̄ C
      y(t)  = C x(t)
                                     i.e.  J = ∫₀^{+∞} ( x(t)ᵀ Q x(t) + u(t)ᵀ R u(t) ) dt

– Infinite horizon LQ control – discrete-time case
  ▪ Open-loop representation:  x[k+1] = F x[k] + G u[k]     (Matlab: help dlqr)
  ▪ Criterion:

      J = Σ_{k=0}^{+∞} ( x[k]ᵀ Q x[k] + u[k]ᵀ R u[k] ) ;   Q = Qᵀ ⪰ 0 ;  R = Rᵀ ≻ 0

  ▪ Resulting LQ control law:  u[k] = −K x[k],  with     (Matlab: help idare)

      K = ( R + Gᵀ P G )⁻¹ Gᵀ P F
      P = Q + Fᵀ P F − Fᵀ P G ( R + Gᵀ P G )⁻¹ Gᵀ P F     (algebraic Riccati equation in discrete time)
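The discrete-time gain and Riccati equation above translate directly into SciPy; the sketch below (illustrative Python on a discretized double integrator with T = 0.1 s, values chosen here for illustration) verifies both the Riccati residual and closed-loop stability:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Discretized double integrator, sampling period T = 0.1 s (illustrative)
F = np.array([[1.0, 0.1], [0.0, 1.0]])
G = np.array([[0.005], [0.1]])
Q = np.diag([1.0, 0.0])
R = np.array([[1.0]])

P = solve_discrete_are(F, G, Q, R)
K = np.linalg.inv(R + G.T @ P @ G) @ (G.T @ P @ F)

# The discrete Riccati equation should hold and F - G K must be Schur stable
residual = (Q + F.T @ P @ F
            - F.T @ P @ G @ np.linalg.inv(R + G.T @ P @ G) @ G.T @ P @ F - P)
spectral_radius = np.max(np.abs(np.linalg.eigvals(F - G @ K)))
```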
LQ control example

– Example: lateral motion of an aircraft

      ( β'(t) )   ( −0.75   0.006  −1      0.037 ) ( β(t) )   (  0.0012   0.0092 )
      ( p'(t) ) = ( −12.9  −0.75    0.387  0     ) ( p(t) ) + (  6.05     0.952  ) ( δa(t) )
      ( r'(t) )   (  4.31   0.024  −0.17   0     ) ( r(t) )   ( −0.416   −1.76   ) ( δd(t) )
      ( Φ'(t) )   (  0      1       0      0     ) ( Φ(t) )   (  0        0      )

      β(t): sideslip angle ; p(t): roll velocity ; r(t): yaw rate ; Φ(t): roll angle
      δa(t): steering angle ; δd(t): steering angle from stabilizer

– Open-loop eigenvalues:
      −0.766            roll mode
      −0.0062           spiral mode
      −0.447 ± j 2.074  Dutch roll mode

– Objectives
  ▪ Settling time of 3 s for initial conditions on β or Φ
  ▪ Correctly damped responses
  ▪ "Reasonable" control magnitudes

– Criterion

      J = ∫₀^{+∞} ( q1 β(t)² + q2 Φ(t)² + r1 δa(t)² + r2 δd(t)² ) dt
LQ control example

– Example: lateral motion of an aircraft, response to an initial condition β = 0.1 rad
  ▪ Tuning #1:  q1 = q2 = 1 ;  r1 = r2 = 1

      K = ( −1.372   0.453   0.481   0.983 )
          (  0.405  −0.115  −0.649  −0.020 )

  ▪ Tuning #2:  q1 = 1 ;  q2 = 10 ;  r1 = 3 ;  r2 = 1

      K = ( −1.745   0.654   0.456   1.802 )
          (  0.990  −0.179  −1.186   0.354 )
LQ control example

– Example: lateral motion of an aircraft, response to an initial condition Φ = 1 rad
  ▪ Tuning #1:  q1 = q2 = 1 ;  r1 = r2 = 1

      K = ( −1.372   0.453   0.481   0.983 )
          (  0.405  −0.115  −0.649  −0.020 )

  ▪ Tuning #2:  q1 = 1 ;  q2 = 10 ;  r1 = 3 ;  r2 = 1

      K = ( −1.745   0.654   0.456   1.802 )
          (  0.990  −0.179  −1.186   0.354 )
LQ control

– Constant setpoint and constant & measurable disturbance (continuous-time case, MIMO)
  ▪ System in open loop:

      x'(t) = A x(t) + B u(t) + B′ d(t)
      y(t)  = C x(t)

  ▪ Control:  u(t) = −K x(t) + M e(t) + N d(t)
  ▪ Closed loop:

      x'(t) = (A − BK) x(t) + B M e(t) + (B N + B′) d(t)
      y(t)  = C x(t)

  ▪ Imposing y(t) = e(t) (setpoint) in steady state (i.e. for e(t) = ec constant and d(t) = d constant):

      M = −[ C (A − BK)⁻¹ B ]⁻¹
      N = −[ C (A − BK)⁻¹ B ]⁻¹ [ C (A − BK)⁻¹ B′ ]

      Unique solution if dim(u) = dim(y) and the matrix ( A  B ; C  0 ) is invertible
Pole placement state feedback control

– From previous developments
  ▪ With the system

      x'(t) = A x(t) + B u(t)
      y(t)  = C x(t)            dim(x) = n ; dim(u) = m ; dim(y) = l ; l = m

  ▪ Assuming
    – that (A, B) is at least stabilizable, a state feedback gain K can be calculated such that the closed loop is stable
    – that det( A  B ; C  0 ) ≠ 0, the following matrices are calculated:

      M = −[ C (A − BK)⁻¹ B ]⁻¹
      N = −[ C (A − BK)⁻¹ B ]⁻¹ [ C (A − BK)⁻¹ B′ ]

  ▪ The control  u(t) = −K x(t) + M e(t) + N d(t)
    – guarantees stability of the closed-loop system
    – ensures that y(t) = e(t) in steady state for any constant setpoint and measured constant disturbance
  ▪ But …
Pole placement state feedback control

– … Limitations of this structure
  ▪ Tracking of the setpoint and disturbance rejection require accurate knowledge of the model of the system (A, B, C, D), which is never available in practice
    ➔ model uncertainties result in an imperfect behavior in steady state
  ▪ Constant disturbances need to be measured, which is rarely the case

– Proposal of a strategy that performs better: state feedback control with integral action
  ➔ an integral action is well known as the key ingredient to cancel steady-state errors for constant setpoints and disturbances
  ➔ it does not require an accurate knowledge of the system model
  ➔ disturbances do not need to be measured
State feedback control with integral action (Handout 3.2.5)

– Open-loop representation:

      x'(t) = A x(t) + B u(t) + B′ d(t)
      y(t)  = C x(t)

– Objective: y(t) must reach e(t) = ec constant despite a constant disturbance d(t) = d

– Explicit integral action: additional state variable

      q(t) = ∫₀^t ( y(τ) − e(τ) ) dτ   ⇒   q'(t) = y(t) − e(t)

– Augmented system:

      ( x'(t) )   ( A  0 ) ( x(t) )   ( B )          ( B′   0 ) ( d(t) )
      (       ) = (      ) (      ) + (   ) u(t)  +  (        ) (      )
      ( q'(t) )   ( C  0 ) ( q(t) )   ( 0 )          ( 0   −I ) ( e(t) )

      y(t) = ( C  0 ) ( x(t) ; q(t) )

  i.e.   xa'(t) = Aa xa(t) + Ba u(t) + Ba′ da(t) ;   y(t) = Ca xa(t)

      xa = ( x ; q ) ;  da = ( d ; e ) ;  Aa = ( A  0 ; C  0 ) ;  Ba = ( B ; 0 ) ;  Ba′ = ( B′  0 ; 0  −I ) ;  Ca = ( C  0 )
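Constructing (Aa, Ba) is a simple block assembly; the sketch below (illustrative Python, reusing the maglev model from earlier in the chapter) builds the augmented pair and checks that it is controllable:

```python
import numpy as np

def augment_with_integrator(A, B, C):
    # Augmented state xa = (x, q) with q' = y - e: builds (Aa, Ba)
    n = A.shape[0]
    l = C.shape[0]
    m = B.shape[1]
    Aa = np.block([[A, np.zeros((n, l))],
                   [C, np.zeros((l, l))]])
    Ba = np.vstack([B, np.zeros((l, m))])
    return Aa, Ba

# Maglev model from earlier in the chapter
A = np.array([[-1000.0, 0.0, 0.0], [0.0, 0.0, 1.0], [20.0, 1600.0, 0.0]])
B = np.array([[100.0], [0.0], [0.0]])
C = np.array([[0.0, 4000.0, 0.0]])

Aa, Ba = augment_with_integrator(A, B, C)

# Rank conditions: [A B; C 0] of rank n + l, and (Aa, Ba) controllable
M = np.block([[A, B], [C, np.zeros((1, 1))]])
ctrb = np.hstack([np.linalg.matrix_power(Aa, k) @ Ba for k in range(4)])
```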
State feedback control with integral action

– With the augmented system

      xa'(t) = Aa xa(t) + Ba u(t) + Ba′ da(t) ;   y(t) = Ca xa(t)

  ▪ To apply the Kalman criterion to (Aa, Ba), note that the controllability matrix is given by:

      QG = ( B  AB  A²B  …  A^{n+l−1}B  )  =  ( A  B ) ( 0   B  AB  …  A^{n+l−2}B )
           ( 0  CB  CAB  …  CA^{n+l−2}B )     ( C  0 ) ( Im  0   0  …      0      )

  ▪ Thus (Aa, Ba) is controllable if:

      m ≥ l ;   rank ( A  B ; C  0 ) = n + l ;   (A, B) controllable

  ▪ K = ( K1  K2 ) can then be calculated such that, with u(t) = −K xa(t) = −K1 x(t) − K2 q(t), the closed-loop system is asymptotically stable
  ➔ But: which equilibrium point for the system?
State feedback control with integral action

– At equilibrium:

      0  = (Aa − Ba K) xa∞ + Ba′ da     with  xa = ( x ; q ) ;  da = ( d ; e )
      y∞ = Ca xa∞

– Leading to:

      xa∞ = −(Aa − Ba K)⁻¹ Ba′ da
      y∞  = C x∞ = e

– Thus:
  ▪ Asymptotically stable closed-loop system
  ▪ Exact tracking of a constant setpoint
  ▪ Even with non-measured constant disturbances
State feedback control with integral action

– Pole placement state feedback control with integral action
  ▪ Check the conditions:

      m ≥ l ;   rank ( A  B ; C  0 ) = n + l ;   (A, B) controllable

  ▪ Introduce (Aa, Ba) and determine K such that the closed-loop system is asymptotically stable, for any desired location of the eigenvalues:

      q'(t) = y(t) − e → 0   ⇒   y(t) → e

To summarize:

      u(t) = −K xa(t) = −K1 x(t) − K2 q(t),   with  q(t) = ∫₀^t ( y(τ) − e(τ) ) dτ
State feedback control with integral action

– LQ control with integral action
  ▪ Introduce (Aa, Ba) and apply the standard LQ design procedure:

      J = ∫₀^{+∞} ( x(t)ᵀ Q1 x(t) + q(t)ᵀ Q2 q(t) + u(t)ᵀ R u(t) ) dt

      Qa = Qaᵀ = ( Q1  0 ; 0  Q2 ) ⪰ 0 ;   R = Rᵀ ≻ 0 ;   Qa = Haᵀ Ha

  ▪ Check the conditions:

      (Aa, Ba) stabilizable ;  (Aa, Ha) detectable

      i.e.  m ≥ l ;   rank ( A  B ; C  0 ) = n + l ;   (A, B) stabilizable
State feedback control with integral action

– Setpoint feedforward
  ▪ Objective: improve the system response by introducing an anticipation of the control value at equilibrium
  ▪ Control law:

      u(t) = −K1 x(t) − K2 q(t) + Kc ec(t)

  ▪ In steady state:

      ( 0  )   ( A  B ) ( xc )
      (    ) = (      ) (    )
      ( ec )   ( C  0 ) ( uc )

      uc = ( 0  I ) ( xc ; uc ) = ( 0  I ) ( A  B ; C  0 )⁻¹ ( 0 ; I ) ec   ⇒   uc = Kc ec
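The feedforward gain Kc is a single linear solve; the sketch below (illustrative Python, applying the formula to the maglev model from earlier in the chapter, an arbitrary choice for demonstration) computes it:

```python
import numpy as np

def feedforward_gain(A, B, C):
    # Solve (0; ec) = [A B; C 0] (xc; uc) for uc:
    # Kc = (0 I) [A B; C 0]^{-1} (0; I)
    n = A.shape[0]
    l = C.shape[0]
    m = B.shape[1]
    M = np.block([[A, B], [C, np.zeros((l, m))]])
    sol = np.linalg.solve(M, np.vstack([np.zeros((n, l)), np.eye(l)]))
    return sol[n:, :]   # the uc rows of the solution

# Maglev model from earlier in the chapter
A = np.array([[-1000.0, 0.0, 0.0], [0.0, 0.0, 1.0], [20.0, 1600.0, 0.0]])
B = np.array([[100.0], [0.0], [0.0]])
C = np.array([[0.0, 4000.0, 0.0]])

Kc = feedforward_gain(A, B, C)   # equilibrium control per unit setpoint
```

For this model the equilibrium requires i1 = −80 z (as found in the kc derivation above), which leads to Kc = −0.2.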
State feedback control with integral action

– Example: inverted pendulum on a cart
  ▪ Linearized model, with state x(t) = ( z(t), θ(t), z'(t), θ'(t) )ᵀ:

      A = ( 0   0  1  0 )        B = ( 0 )
          ( 0   0  0  1 )            ( 0 )
          ( 0   1  0  0 )            ( 1 )
          ( 0  22  0  0 )            ( 2 )

      C = ( 1  0  0  0 ) ,  D = 0

  ▪ Tuning and resulting controller (the fifth weight q5 and gain apply to the integral state):

      q1 = 20 ;  q2 = 250 ;  q3 = q4 = 1 ;  q5 = 100 ;  r = 1

      K = ( −16   63   −11.9   13.5   −10 )

(figure: closed-loop response versus time)
State feedback control with integral action

– Example: inverted pendulum on a cart
  ▪ Responses for different initial conditions (figures versus time):
    ❑ nonlinear model
    ❑ linear model


Conclusion

– Highlights of the chapter
  ▪ State feedback control: a systematic approach (any number of states, continuous/discrete-time case, SISO/MIMO case, …)
  ▪ Design parameters:
    – pole placement → specify the eigenvalues of the closed-loop system
    – LQ → specify the weighting matrices
  ▪ The transient behavior is obtained by the state feedback, and the steady-state regime is provided through the integral action or additional gains
  ▪ In general, not all state variables are available (they cannot all be measured):
    – partial state feedback control: enables selecting a number of eigenvalues equal to the number of measured states, but the other ones cannot be specified
    – an additional observer is needed to reconstruct the state (next chapter)
