Analysis:
A closed-loop system is given; determine its characteristics or behavior.
Design:
Desired system characteristics or behavior are specified; configure or synthesize the closed-loop system.

[Figure: control-system components — the input variable drives the plant, and a sensor provides the measurement of the output variable.]
Stefan Simrock, “Tutorial on Control Theory”, ICALEPCS, Grenoble, France, Oct. 10-14, 2011
1. Introduction
Definition:
A closed-loop system is a system in which certain forces (we call these inputs) are determined, at least in part, by certain responses of the system (we call these outputs).

[Figure: a system block with inputs entering and outputs leaving.]
1. Introduction
Definitions:
The system for measurement of a variable (or signal) is called a sensor.
The plant of a control system is the part of the system to be controlled.
The compensator (or controller, or simply filter) provides satisfactory characteristics for the total system.
Goal:
Maintain stable gradient and phase.
Solution:
Feedback for gradient amplitude and phase.

[Figure: RF feedback loop — phase and amplitude controllers drive the klystron and cavity; a phase detector and the gradient set point close the loop.]
1. Introduction
Model:
Mathematical description of the input-output relations of the components, combined with a block diagram.

[Figure: block diagram — the reference input and error feed the controller and klystron (amplifier), which drives the cavity (plant); a gradient detector (monitoring transducer) closes the loop.]
1. Introduction
RF control model using “transfer functions”:

[Figure: controller M(s), klystron, cavity, and gradient detector in a feedback loop.]

A transfer function of a linear system is defined as the ratio of the Laplace transform of the output to the Laplace transform of the input, with all initial conditions zero.
Input-Output Relations: input $V_1$, output $V_2$.

[Circuit: $V_1$ drives $R_1$ in series with $R_2$ and $C$; $V_2$ is taken across $R_2$ and $C$.]

Differential equations:
$$R_1\,i(t) + R_2\,i(t) + \frac{1}{C}\int_0^t i(\tau)\,d\tau = V_1(t)$$
$$R_2\,i(t) + \frac{1}{C}\int_0^t i(\tau)\,d\tau = V_2(t)$$
Laplace transform:
$$R_1 I(s) + R_2 I(s) + \frac{1}{sC} I(s) = V_1(s)$$
$$R_2 I(s) + \frac{1}{sC} I(s) = V_2(s)$$
Transfer function:
$$G(s) = \frac{V_2(s)}{V_1(s)} = \frac{R_2 C s + 1}{(R_1 + R_2)\,C s + 1}$$
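As a quick sanity check, the transfer function above can be verified numerically by eliminating $I(s)$ from the two Laplace-domain equations; the component values below are illustrative, not from the text.

```python
# Numerically check G(s) = (R2*C*s + 1) / ((R1 + R2)*C*s + 1)
# by eliminating I(s) from the two Laplace-domain mesh equations.
R1, R2, C = 1.0e3, 2.2e3, 1.0e-6  # example component values (assumed)

def G_formula(s):
    return (R2 * C * s + 1) / ((R1 + R2) * C * s + 1)

def G_elimination(s):
    # V1 = (R1 + R2 + 1/(s*C)) * I  and  V2 = (R2 + 1/(s*C)) * I,
    # so V2/V1 is independent of I(s).
    Z1 = R1 + R2 + 1 / (s * C)
    Z2 = R2 + 1 / (s * C)
    return Z2 / Z1

s = 1j * 2 * 3.141592653589793 * 50.0  # s = j*omega at 50 Hz
assert abs(G_formula(s) - G_elimination(s)) < 1e-12
```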
1. Introduction
Example 3: Circuit with operational amplifier

[Circuit: inverting amplifier — the input resistor $R_1$ carries $i_1$ to the inverting input; $R_2$ in series with $C$ forms the feedback branch from $V_o$.]

$$V_i(s) = R_1 I_1(s), \qquad V_o(s) = -\left(R_2 + \frac{1}{sC}\right) I_1(s)$$
$$G(s) = \frac{V_o(s)}{V_i(s)} = -\frac{R_2 C s + 1}{R_1 C s}$$
It is convenient to derive a transfer function for a circuit with a single operational amplifier that contains an input and a feedback impedance:

[Circuit: generalized inverting amplifier with input impedance $Z_i(s)$ and feedback impedance $Z_f(s)$.]

$$V_i(s) = Z_i(s)\,I(s), \qquad V_o(s) = -Z_f(s)\,I(s) \quad\Rightarrow\quad G(s) = \frac{V_o(s)}{V_i(s)} = -\frac{Z_f(s)}{Z_i(s)}$$
Model of Dynamic System
We will study the following dynamic system:
Parameters:
k : spring constant
γ : damping constant
u(t) : force
Quantity of interest:
y(t) : displacement from equilibrium (mass m = 1)
Differential equation (Newton's second law, with m = 1):
$$\ddot y(t) = -k\,y(t) - \gamma\,\dot y(t) + u(t)$$
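A minimal simulation of this equation of motion, with illustrative values for k and γ (not from the text), shows the displacement settling at u/k for a constant force:

```python
# Minimal RK4 simulation of  y'' = -k*y - gamma*y' + u  (m = 1).
# k, gamma, and the step force are illustrative values, not from the text.
k, gamma = 4.0, 0.5

def deriv(y, v, u):
    # Returns (y', v') for the mass-spring-damper state (y, v).
    return v, -k * y - gamma * v + u

def simulate(u=1.0, t_end=30.0, dt=1e-3):
    y, v, t = 0.0, 0.0, 0.0
    while t < t_end:
        k1y, k1v = deriv(y, v, u)
        k2y, k2v = deriv(y + dt/2*k1y, v + dt/2*k1v, u)
        k3y, k3v = deriv(y + dt/2*k2y, v + dt/2*k2v, u)
        k4y, k4v = deriv(y + dt*k3y, v + dt*k3v, u)
        y += dt/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += dt
    return y

# For a constant force u the displacement settles at u/k (spring balances force).
assert abs(simulate(u=1.0) - 1.0 / k) < 1e-3
```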
2.1 Linear Ordinary Differential Equation (LODE)
General form of a LODE:
$$y^{(n)}(t) + a_{n-1} y^{(n-1)}(t) + \dots + a_1 \dot y(t) + a_0 y(t) = b_m u^{(m)}(t) + \dots + b_1 \dot u(t) + b_0 u(t)$$
with m, n positive integers, m ≤ n, and real coefficients $a_0, a_1, \dots, a_{n-1}, b_0, \dots, b_m$.
2.1 Linear Ordinary Differential Equation (LODE)
Most important for control system/feedback design:
$$y^{(n)}(t) + a_{n-1} y^{(n-1)}(t) + \dots + a_1 \dot y(t) + a_0 y(t) = b_m u^{(m)}(t) + \dots + b_1 \dot u(t) + b_0 u(t)$$
In general, any linear time-invariant system described by a LODE can be realized/simulated/easily visualized in a block diagram (here n = 2, m = 2):

[Figure: control-canonical form — u(t) enters an adder and passes through two integrators producing the states x2 and x1; the feedback gains a1 and a0 close the loop, and the feedforward gains b2, b1, b0 sum to y(t).]

Very useful to visualize the interaction between variables!
What are x1 and x2? More explanation later; for now, please simply accept them!
2.2 State Space Equation
Any system which can be represented by a LODE can also be represented in state-space form (a matrix differential equation).
What do we have to do? Let's go back to our first example (Newton's law):
$$\ddot y(t) + \gamma \dot y(t) + k\,y(t) = u(t)$$
Step 1: Deduce a set of first-order differential equations in the variables $x_1(t)$ (position $y(t)$) and $x_2(t)$ (velocity $\dot y(t)$):
$$\dot x_1(t) = \dot y(t) = x_2(t)$$
$$\begin{pmatrix}\dot x_1(t)\\ \dot x_2(t)\end{pmatrix} = \begin{pmatrix}0 & 1\\ -k & -\gamma\end{pmatrix}\begin{pmatrix}x_1(t)\\ x_2(t)\end{pmatrix} + \begin{pmatrix}0\\ 1\end{pmatrix} u(t)$$
$$\dot x(t) = A\,x(t) + B\,u(t) \qquad \text{(state equation)}$$
$$y(t) = \begin{pmatrix}1 & 0\end{pmatrix}\begin{pmatrix}x_1(t)\\ x_2(t)\end{pmatrix}$$
$$y(t) = C\,x(t) + D\,u(t) \qquad \text{(measurement equation)}$$
Definition:
The system state x of a system at any time $t_0$ is the “amount of information” that, together with all inputs for $t \ge t_0$, uniquely determines the behaviour of the system for all $t \ge t_0$.
2.2 State Space Equation
The linear time-invariant (LTI) analog system is described via the standard form of the state space equation.

[Figure: block diagram of the standard form — two integrators in series produce x2 and x1, with feedback gains a0 and a1 and feedforward gains b0, b1, b2 summing to y(t).]
2.2 State Space Equation
Now: solution of the state space equation in the time domain. Out of the hat… et voilà:
$$x(t) = \Phi(t)\,x(0) + \int_0^t \Phi(\tau)\,B\,u(t-\tau)\,d\tau$$
Example:
$$\dot x_1(t) = \dot y(t) = x_2(t)$$
$$\dot x_2(t) + 4 x_2(t) + 3 x_1(t) = 2 u(t) \quad\Rightarrow\quad \dot x_2(t) = -3 x_1(t) - 4 x_2(t) + 2 u(t)$$
Define the system state $x(t) = \big(x_1(t),\, x_2(t)\big)^T$. Then it follows:
$$\dot x(t) = \begin{pmatrix}0 & 1\\ -3 & -4\end{pmatrix} x(t) + \begin{pmatrix}0\\ 2\end{pmatrix} u(t), \qquad y(t) = \begin{pmatrix}1 & 0\end{pmatrix} x(t)$$
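The solution formula can be evaluated directly for this example. The sketch below (matrix exponential via eigendecomposition, assumed unit-step input) checks the step response against the steady state $-A^{-1}B$:

```python
import numpy as np

# Step response of  x' = A x + B u, y = [1 0] x  via the matrix exponential.
A = np.array([[0.0, 1.0], [-3.0, -4.0]])
B = np.array([0.0, 2.0])
C = np.array([1.0, 0.0])

def expm_eig(M):
    # Matrix exponential via eigendecomposition (M is diagonalizable here).
    w, V = np.linalg.eig(M)
    return ((V * np.exp(w)) @ np.linalg.inv(V)).real

def step_response(t):
    # For constant u = 1 and x(0) = 0:  x(t) = A^{-1} (e^{At} - I) B.
    x = np.linalg.solve(A, (expm_eig(A * t) - np.eye(2)) @ B)
    return C @ x

# Steady state: x_ss = -A^{-1} B, so y settles at 2/3 for this system.
assert abs(step_response(10.0) - 2.0 / 3.0) < 1e-3
```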
2.3 Cavity Model

[Figure: RF system — the generator $I_g$ feeds through a circulator and coupler (1:m) into the cavity, modeled as a parallel $R_0$–L–C resonator driven by the transformed generator current $I'_g$ and the beam current $I_b$; $R_{ext}$ represents the external load.]

Equivalent circuit equation:
$$C\,\ddot U + \frac{1}{R_L}\,\dot U + \frac{1}{L}\,U = \dot I'_g + \dot I_b$$
$$\omega_{1/2} := \frac{1}{2 R_L C} = \frac{\omega_0}{2 Q_L}$$
$$\ddot U + 2\omega_{1/2}\,\dot U + \omega_0^2\,U = 2 R_L \omega_{1/2}\left(\frac{2}{m}\,\dot I_g + \dot I_b\right)$$
2.3 Cavity Model
Only the envelope of the RF (real and imaginary part) is of interest:
$$U(t) = \big(U_r(t) + i\,U_i(t)\big)\,e^{i\omega_{HF} t}$$
$$I_g(t) = \big(I_{gr}(t) + i\,I_{gi}(t)\big)\,e^{i\omega_{HF} t}$$
$$I_b(t) = \big(I_{b\omega r}(t) + i\,I_{b\omega i}(t)\big)\,e^{i\omega_{HF} t} = 2\big(I_{b0r}(t) + i\,I_{b0i}(t)\big)\,e^{i\omega_{HF} t}$$
State and input vectors:
$$\vec x(t) = \begin{pmatrix}U_r(t)\\ U_i(t)\end{pmatrix}, \qquad \vec u(t) = \begin{pmatrix}\frac{1}{m} I_{gr}(t) + I_{b0r}(t)\\ \frac{1}{m} I_{gi}(t) + I_{b0i}(t)\end{pmatrix}$$
General form:
$$\dot{\vec x}(t) = A\,\vec x(t) + B\,\vec u(t)$$
2.3 Cavity Model
Solution:
$$\vec x(t) = \Phi(t)\,\vec x(0) + \int_0^t \Phi(t-t')\,B\,\vec u(t')\,dt'$$
Special case (constant drive):
$$\vec u(t) = \begin{pmatrix}\frac{1}{m} I_{gr}(t) + I_{b0r}(t)\\ \frac{1}{m} I_{gi}(t) + I_{b0i}(t)\end{pmatrix} =: \begin{pmatrix}I_r\\ I_i\end{pmatrix}$$
$$\begin{pmatrix}U_r(t)\\ U_i(t)\end{pmatrix} = \frac{\omega_{HF}}{Q}\,\frac{1}{\omega_{1/2}^2 + \Delta\omega^2}\begin{pmatrix}\omega_{1/2} & -\Delta\omega\\ \Delta\omega & \omega_{1/2}\end{pmatrix}\left(\mathbf 1 - e^{-\omega_{1/2} t}\begin{pmatrix}\cos\Delta\omega t & -\sin\Delta\omega t\\ \sin\Delta\omega t & \cos\Delta\omega t\end{pmatrix}\right)\begin{pmatrix}I_r\\ I_i\end{pmatrix}$$
2.3 Cavity Model

[Figure: Simulink models — (a) a harmonic oscillator built from two integrators with feedback gains 2 and 4, driven by a step; (b) the equivalent State-Space block x′ = Ax + Bu, y = Cx + Du driven by a step; (c) the cavity state-space model fed by a step and loaded data, with scopes on the outputs.]
2.3 Cavity Model

[Figure: detailed Simulink realization of the cavity envelope equations — two coupled integrator chains for the real and imaginary components, with gains k and ω_{1/2} (w12) and the detuning Δω (dw) cross-coupling the two channels.]
2.4 Mason's Rule
Mason's rule is a simple formula for reducing block diagrams. It works for continuous and discrete systems. In its most general form it is messy, but for the special case when all paths touch:
$$H(s) = \frac{\sum (\text{forward path gains})}{1 - \sum (\text{loop path gains})}$$
Two paths are said to touch if they have a component in common, e.g. an adder.

[Figure: signal-flow diagram with nodes 1–11 — forward branches H1, H2, H3 and H5 from U to Y, and feedback branches H4 and a unity loop.]

Forward paths:
F1: 1 - 10 - 11 - 5 - 6, with G(F1) = H5 H3
F2: 1 - 2 - 3 - 4 - 5 - 6, with G(F2) = H1 H2 H3
Loop paths:
L1: 3 - 4 - 5 - 8 - 9, with G(L1) = H2 H4
L2: 5 - 6 - 7, with G(L2) = H3
Check: all paths touch (they contain the adder between nodes 4 and 5).
By Mason's rule:
$$H = \frac{G(F_1) + G(F_2)}{1 - G(L_1) - G(L_2)} = \frac{H_5 H_3 + H_1 H_2 H_3}{1 - H_2 H_4 - H_3} = \frac{H_3\,(H_5 + H_1 H_2)}{1 - H_2 H_4 - H_3}$$
2.5 Transfer Function G(s)
Continuous-time state space model:
$$\dot x(t) = A\,x(t) + B\,u(t) \qquad \text{(state equation)}$$
$$y(t) = C\,x(t) + D\,u(t) \qquad \text{(measurement equation)}$$
Laplace transform:
$$X(s) = (sI - A)^{-1} x(0) + (sI - A)^{-1} B\,U(s) = \Phi(s)\,x(0) + \Phi(s)\,B\,U(s)$$
$$Y(s) = C\,X(s) + D\,U(s) = C (sI - A)^{-1} x(0) + \big[C (sI - A)^{-1} B + D\big]\,U(s)$$
$$G(s) = C (sI - A)^{-1} B + D = C\,\Phi(s)\,B + D$$
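A short numerical check of $G(s) = C(sI-A)^{-1}B + D$, using the example system from Section 2.2 (whose transfer function works out to $2/((s+1)(s+3))$):

```python
import numpy as np

# Evaluate G(s) = C (sI - A)^{-1} B + D and compare with the known
# rational form 2 / ((s + 1)(s + 3)) for the Section 2.2 example.
A = np.array([[0.0, 1.0], [-3.0, -4.0]])
B = np.array([[0.0], [2.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def G(s):
    # Uses a linear solve instead of an explicit matrix inverse.
    return (C @ np.linalg.solve(s * np.eye(2) - A, B) + D)[0, 0]

for s in (1.0, 2.0 + 1.0j, 10.0j):
    assert abs(G(s) - 2.0 / ((s + 1.0) * (s + 3.0))) < 1e-12
```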
2.5 Transfer Function
Transfer function of the TESLA cavity including the 8π/9 mode:
$$H_{cont}(s) \approx H_{cav}(s) = H_{\pi}(s) + H_{\frac{8\pi}{9}}(s)$$
with the 8π/9-mode contribution (same structure as the π mode, opposite sign):
$$H_{\frac{8\pi}{9}}(s) = -\frac{(\omega_{1/2})_{\frac{8\pi}{9}}}{\Delta\omega_{\frac{8\pi}{9}}^2 + \big(s + (\omega_{1/2})_{\frac{8\pi}{9}}\big)^2}\begin{pmatrix} s + (\omega_{1/2})_{\frac{8\pi}{9}} & -\Delta\omega_{\frac{8\pi}{9}} \\ \Delta\omega_{\frac{8\pi}{9}} & s + (\omega_{1/2})_{\frac{8\pi}{9}} \end{pmatrix}$$
2.5 Transfer Function of a Closed Loop System

[Figure: feedback loop — R(s) minus the measurement M(s)·Y(s) gives E(s), which passes through the controller H_c(s) to U(s) and the plant G(s) to Y(s).]

With L(s) = H_c(s) G(s) the transfer function of the open loop system (controller plus plant), the closed-loop response is Y(s) = T(s) R(s).
Disturbances are system influences we do not control; we want to minimize their impact on the system:
$$C(s) = \frac{G_c(s)\,G_p(s)}{1 + G_c(s)\,G_p(s)\,H(s)}\,R(s) + \frac{G_d(s)}{1 + G_c(s)\,G_p(s)\,H(s)}\,D(s) = T(s)\,R(s) + T_d(s)\,D(s)$$

[Figure: the same loop with a disturbance D(s) entering through G_d(s) between the controller G_c(s) and the plant G_p(s); output C(s).]

To reject disturbances, make $T_d(s)\,D(s)$ small!
Definition:
If the input never exceeds $M_1$ and the output never exceeds $M_2$, then we have BIBO stability!
2.6 Stability
Example: $Y(s) = G(s)\,U(s)$ with the integrator $G(s) = \frac{1}{s}$.
Case 1:
$$u(t) = \delta(t),\quad U(s) = 1 \quad\Rightarrow\quad y(t) = \mathcal{L}^{-1}[Y(s)] = \mathcal{L}^{-1}\!\left[\frac{1}{s}\right] = 1$$
Case 2:
$$u(t) = 1,\quad U(s) = \frac{1}{s} \quad\Rightarrow\quad y(t) = \mathcal{L}^{-1}[Y(s)] = \mathcal{L}^{-1}\!\left[\frac{1}{s^2}\right] = t$$
The bounded step input produces an unbounded ramp output, so the integrator is not BIBO stable.
2.6 Stability
$$Y(s) = G(s)\,U(s)$$
$$|y(t)| = \left|\int_0^t g(\tau)\,u(t-\tau)\,d\tau\right| \le \int_0^t |g(\tau)|\,|u(t-\tau)|\,d\tau \le M_1 \int_0^\infty |g(\tau)|\,d\tau \le M_2$$
2.7 Poles and Zeroes
$$G(s) = C\,\Phi(s)\,B + D = C\,\frac{\mathrm{adj}(sI - A)}{\chi(s)}\,B + D$$
The entries of the transfer function G(s) are rational functions in the complex variable s:
$$g_{ij}(s) = \alpha\,\frac{\prod_{k=1}^{m}(s - z_k)}{\prod_{l=1}^{n}(s - p_l)} = \frac{N_{ij}(s)}{D_{ij}(s)}$$
with zeroes $z_k$, poles $p_l$, a real constant $\alpha$, and $m \le n$ (we assume common factors have already been canceled!).
Since the numerator N(s) and denominator D(s) are polynomials with real coefficients, poles and zeroes must be real numbers or must arise as complex conjugate pairs!
2.7 Poles and Zeroes
Stability directly from state-space. Recall:
$$H(s) = C (sI - A)^{-1} B + D$$
The poles of H(s) are eigenvalues of A, so stability can be read off the eigenvalues of A.
2.8 Stability Criteria
A system is BIBO stable if, for every bounded input, the output remains bounded with increasing time.
For an LTI system, this definition requires that all poles of the closed-loop transfer function (all roots of the system characteristic equation) lie in the left half of the complex plane.
While the first criterion proves whether a feedback system is stable or unstable, the second method also provides information about the settling time (damping term).
2.8 Poles and Zeroes
Pole locations tell us about the impulse response, i.e. also about stability:

[Figure: pole map in the s-plane, Re(s) = σ — poles far in the left half-plane give fast decay without oscillation; poles on the imaginary axis give sustained oscillation (no growth, no decay); complex poles in the left half-plane give damped (medium) oscillation; poles in the right half-plane give growth, fast for poles far to the right.]
2.8 Poles and Zeroes
In general, a complex root must have a corresponding conjugate root (N(s) and D(s) are polynomials with real coefficients).
2.8 Bode Diagram

[Figure: Bode diagram — magnitude (dB) and phase vs ω; the gain margin Gm is read at the phase-crossover frequency ω2 (phase = −180°), the phase margin φm at the gain-crossover frequency ω1 (0 dB).]

The closed loop is stable if the phase of the OPEN LOOP at its unity-gain crossover frequency is larger than −180 degrees.
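The margins can be computed numerically. The sketch below scans the frequency response of an assumed open loop $L(s) = K/(s(s+1)(s+2))$ (not a system from the text) and reads off the gain and phase margins:

```python
import math

# Numerical gain/phase margins for an assumed open loop L(s) = K / (s (s+1) (s+2)).
K = 1.0

def L(w):
    s = 1j * w
    return K / (s * (s + 1.0) * (s + 2.0))

# Gain crossover: |L(jw)| = 1; scan a log grid for it, then read the phase margin.
ws = [10 ** (k / 2000.0) for k in range(-6000, 6000)]
w_gc = min(ws, key=lambda w: abs(abs(L(w)) - 1.0))
phase_margin = 180.0 + math.degrees(math.atan2(L(w_gc).imag, L(w_gc).real))

# Phase crossover (phase = -180 deg) is at w = sqrt(2) for this L;
# the gain margin is 1/|L| there, i.e. 6 for K = 1.
gain_margin = 1.0 / abs(L(math.sqrt(2.0)))

assert phase_margin > 0          # stable: phase above -180 deg at crossover
assert abs(gain_margin - 6.0) < 1e-6
```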
2.8 Root Locus Analysis
Definition: A root locus of a system is a plot of the roots of the system characteristic equation (the poles of the closed-loop transfer function) while some parameter of the system (usually the feedback gain) is varied.
$$K\,H(s) = \frac{K}{(s - p_1)(s - p_2)(s - p_3)}$$

[Figure: feedback loop with gain K and plant H(s); open-loop poles p1, p2, p3 marked on the real axis.]

$$G_{CL}(s) = \frac{K\,H(s)}{1 + K\,H(s)}, \qquad \text{roots at } 1 + K\,H(s) = 0.$$

[Figure: root-locus examples — (a) a single pole $\frac{1}{s-p_1}$; (b) two poles $\frac{1}{(s-p_1)(s-p_2)}$; (c) and (d) two poles with one zero $\frac{s-z_1}{(s-p_1)(s-p_2)}$ for different pole/zero orderings.]
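A root locus can be computed directly by sweeping K and finding the roots of $1 + KH(s) = 0$; the pole locations below are assumed for illustration:

```python
import numpy as np

# Closed-loop roots of 1 + K H(s) = 0 for H(s) = 1 / ((s-p1)(s-p2)(s-p3)):
# these are the roots of (s-p1)(s-p2)(s-p3) + K (assumed pole values).
p = [-1.0, -2.0, -3.0]

def closed_loop_roots(K):
    den = np.poly(p)          # coefficients of (s-p1)(s-p2)(s-p3)
    den[-1] += K              # ... + K
    return np.roots(den)

# At K = 0 the locus starts at the open-loop poles.
r0 = np.sort_complex(closed_loop_roots(0.0))
assert np.allclose(r0, sorted(p))

# For large enough K two branches cross into the right half-plane (instability).
assert max(r.real for r in closed_loop_roots(100.0)) > 0.0
```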
2.8 Root Locus Examples (Cnt'd)

[Figure: root-locus examples — (e) and (f) three real poles $\frac{1}{(s-p_1)(s-p_2)(s-p_3)}$ with branches breaking away into the complex plane; (g) and (h) three poles, in (h) with an additional zero $\frac{s-z_1}{(s-p_1)(s-p_2)(s-p_3)}$ pulling the locus to the left.]
3. Feedback
The idea:
Suppose we have a system or “plant” that we want to control.

[Figure: “open loop” — the input r drives the plant directly; “closed loop” — the plant output is fed back and subtracted from r, and the resulting error plus a feedback signal U_feedback drives the plant.]
3. Feedback
Open-loop gain:
$$G_{O.L.}(s) = \frac{y}{u} = G(s)$$
Closed-loop gain (feedback H(s) from y back to the input summing junction):
$$G_{C.L.}(s) = \frac{G(s)}{1 + G(s)\,H(s)}$$
Proof:
$$y = G\,(u - u_{fb}) = G u - G H y \;\Rightarrow\; y + G H y = G u \;\Rightarrow\; \frac{y}{u} = \frac{G}{1 + G H}$$
3.1 Feedback - Example 1
Consider the S.H.O. with feedback proportional to x, i.e.
$$\ddot x + \gamma \dot x + \omega_n^2 x = u + u_{fb}, \qquad u_{fb}(t) = -\alpha\,x(t)$$
so that
$$\ddot x + \gamma \dot x + (\omega_n^2 + \alpha)\,x = u$$

[Figure: block diagram of the S.H.O. with proportional feedback; Bode magnitude plot — the DC gain drops from $1/\omega_n^2$ to $1/(\omega_n^2+\alpha)$ and the resonance moves from $\omega_n$ to $\sqrt{\omega_n^2+\alpha}$.]

DC response (s = 0): $1/(\omega_n^2 + \alpha)$.
So the effect of the proportional feedback in this case is to increase the bandwidth of the system (and reduce the gain slightly, but this can easily be compensated by adding a constant gain in front…).
3.1 Feedback - Example 2
In the S.H.O. suppose we use integral feedback:
$$u_{fb}(t) = -\alpha \int_0^t x(\tau)\,d\tau$$
i.e.
$$\ddot x + \gamma \dot x + \omega_n^2 x = u - \alpha \int_0^t x(\tau)\,d\tau$$

[Figure: block diagram of the S.H.O. with an extra integrator α/s in the feedback path.]

$$G_{C.L.}(s) = \frac{\dfrac{1}{s^2 + \gamma s + \omega_n^2}}{1 + \dfrac{\alpha}{s\,(s^2 + \gamma s + \omega_n^2)}} = \frac{s}{s\,(s^2 + \gamma s + \omega_n^2) + \alpha}$$
Observe that:
1. $G_{C.L.}(0) = 0$.
2. For large s (and hence for large ω):
$$G_{C.L.}(s) \approx \frac{1}{s^2 + \gamma s + \omega_n^2} \approx G_{O.L.}(s)$$

[Figure: Bode magnitude plot — $|G_{C.L.}(i\omega)|$ follows $|G_{O.L.}(i\omega)|$ at high frequency but rolls off to zero at DC.]

So integral feedback has killed the DC gain, i.e. the system rejects constant disturbances.
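Both observations can be checked numerically; γ, $\omega_n^2$, and α below are illustrative values, not from the text:

```python
# Frequency response of the integral-feedback loop:
# G_cl(s) = s / (s (s^2 + gamma s + omega_n^2) + alpha).
gamma, wn2, alpha = 0.2, 4.0, 1.0   # illustrative values (assumed)

def G_ol(s):
    return 1.0 / (s * s + gamma * s + wn2)

def G_cl(s):
    return s / (s * (s * s + gamma * s + wn2) + alpha)

# 1. DC gain is killed: |G_cl(j omega)| -> 0 as omega -> 0.
assert abs(G_cl(1j * 1e-6)) < 1e-5
# 2. At high frequency the feedback has no effect: G_cl ~ G_ol.
s = 1j * 100.0
assert abs(G_cl(s) - G_ol(s)) / abs(G_ol(s)) < 1e-3
```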
3.1 Feedback - Example 3
Suppose we now apply differential feedback to the S.H.O., i.e.
$$u_{fb}(t) = -\alpha\,\dot x(t)$$

[Figure: block diagram of the S.H.O. with a feedback branch αs acting on x, i.e. $\alpha \dot x$.]

Now we have
$$\ddot x + (\gamma + \alpha)\,\dot x + \omega_n^2 x = u$$
3.1 Feedback - Example 3
Now
$$G_{C.L.}(s) = \frac{1}{s^2 + (\gamma + \alpha)\,s + \omega_n^2}$$

[Figure: Bode magnitude plot — $|G_{C.L.}(i\omega)|$ matches $|G_{O.L.}(i\omega)|$ except near $\omega_n$, where the resonance peak is reduced.]

So the effect of differential feedback here is to “flatten the resonance”, i.e. damping is increased.
3.1 PID Controller
(1) The latter three examples of feedback can all be combined to form a P.I.D. controller (proportional-integral-differential):
$$K_p + K_D\,s + K_I/s$$

[Figure: loop with the P.I.D. controller driving the S.H.O.; the output x = y is fed back to the input summing junction.]

(2) In the example above the S.H.O. was a very simple system, and it was clear what the physical interpretation of P. or I. or D. did. But for large complex systems this is not obvious.
3.1 PID Controller
For example, if you are so smart, let's see you do this with your P.I.D. controller:
damp this mode, but leave the other two modes undamped, just as they are.
This could turn out to be a tweaking nightmare that'll get you nowhere fast!
3.2 Full State Control
Suppose we have the system
$$\dot x(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t)$$
Since the state vector x(t) contains all current information about the system, the most general feedback makes use of all the state information:
$$u_{fb} = -k_1 x_1 - \dots - k_n x_n = -K\,x$$
where $K = [k_1 \dots k_n]$ (a row matrix). Proportional feedback, for example, is the special case $K = [k_p\; 0 \dots 0]$.
Then we can move the eigenvalues of $A - BK$ anywhere we want using full state feedback.
Proof:
Given any system as a LODE or in state space, it can be written in control-canonical form:
$$A^{O.L.} = \begin{pmatrix}0 & 1 & \dots & 0\\ \vdots & & \ddots & \vdots\\ 0 & \dots & & 1\\ -a_0 & \dots & & -a_{n-1}\end{pmatrix}, \qquad B = \begin{pmatrix}0\\ \vdots\\ 0\\ 1\end{pmatrix}, \qquad y = \begin{pmatrix}b_0 & \dots & b_{n-1}\end{pmatrix} x$$
where
$$G^{O.L.} = C\,(sI - A^{O.L.})^{-1} B = \frac{b_{n-1} s^{n-1} + \dots + b_0}{s^n + a_{n-1} s^{n-1} + \dots + a_0}$$
3.2 Full State Control
i.e. the last row of $A^{O.L.}$ gives the coefficients of the denominator:
$$a^{O.L.}(s) = \det\big(sI - A^{O.L.}\big) = s^n + a_{n-1} s^{n-1} + \dots + a_0$$
Now
$$A^{C.L.} = A^{O.L.} - BK = \begin{pmatrix}0 & 1 & \dots & 0\\ \vdots & & \ddots & \vdots\\ 0 & \dots & & 1\\ -(a_0 + k_0) & \dots & & -(a_{n-1} + k_{n-1})\end{pmatrix}$$
$$a^{C.L.}(s) = \det\big(sI - A^{C.L.}\big) = s^n + (a_{n-1} + k_{n-1})\,s^{n-1} + \dots + (a_0 + k_0)$$
Using $u = -Kx$ we have direct control over every closed-loop denominator coefficient and can place the roots anywhere we want in the s-plane.
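The coefficient-matching argument above translates directly into a pole-placement sketch; the system and desired poles below are assumed for illustration:

```python
import numpy as np

# Pole placement in control-canonical form: with u = -K x every closed-loop
# denominator coefficient is (a_i + k_i), so K is read off directly.
# Open loop (assumed): s^2 + 1 (undamped), i.e. a0 = 1, a1 = 0.
a = np.array([1.0, 0.0])                  # [a0, a1]
A = np.array([[0.0, 1.0], [-a[0], -a[1]]])
B = np.array([[0.0], [1.0]])

# Desired closed-loop poles at -1 +/- 1j  =>  s^2 + 2 s + 2.
desired = np.array([2.0, 2.0])            # [a0*, a1*]
K = (desired - a).reshape(1, 2)           # k_i = a_i* - a_i

eigs = np.linalg.eigvals(A - B @ K)
assert np.allclose(np.sort_complex(eigs), [-1 - 1j, -1 + 1j])
```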
3.2 Full State Control
Example: detailed block diagram of the S.H.O. with full-state feedback.

[Figure: S.H.O. block diagram with feedback gains $k_2$ on $\dot x$ and $k_1$ on $x$, summed and subtracted at the input.]

Of course this assumes we have access to the $\dot x$ state, which we actually don't have in practice.
3.2 Full State Control
With full state feedback we have (assume D = 0):

[Figure: state-space loop — u plus $u_{fb} = -Kx$ enters B, passes through the integrator to x and through C to y; A feeds x back to the integrator input.]

$$\dot x = A x + B\,(u + u_{fb}) = A x + B u - B K x$$
$$\dot x = (A - BK)\,x + B u, \qquad u_{fb} = -Kx, \qquad y = Cx$$
With full state feedback, we get the new closed-loop matrix
$$A^{C.L.} = A^{O.L.} - BK$$
and all stability information is now given by the eigenvalues of the new A matrix.
3.3 Controllability and Observability
The linear time-invariant system
$$\dot x = Ax + Bu, \qquad y = Cx$$
is said to be controllable if it is possible to find some input u(t) that will transfer the initial state x(0) to the origin of state-space, $x(t_0) = 0$, with $t_0$ finite:
$$0 = \phi(t_0)\,x(0) + \int_0^{t_0} \phi(\tau)\,B\,u(t_0 - \tau)\,d\tau$$
It can be shown that this condition is satisfied if the controllability matrix
$$C_M = \begin{pmatrix}B & AB & A^2B & \dots & A^{n-1}B\end{pmatrix}$$
is invertible. This is equivalent to the matrix $C_M$ having full rank (rank n for an n-th order differential equation).
3.3 Controllability and Observability
Observable:
The linear time-invariant system is said to be observable if the initial conditions x(0) can be determined from the output function y(t), $0 \le t \le t_1$, where $t_1$ is finite, with
$$y(t) = Cx = C\,\phi(t)\,x_0 + C \int_0^t \phi(\tau)\,B\,u(t - \tau)\,d\tau$$
The system is observable if this equation can be solved for x(0). It can be shown that the system is observable if the matrix
$$O_M = \begin{pmatrix}C\\ CA\\ \vdots\\ CA^{n-1}\end{pmatrix}$$
is invertible. This is equivalent to the matrix $O_M$ having full rank (rank n for an n-th order differential equation).
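Both rank tests can be sketched in a few lines; the oscillator parameters below are illustrative, not from the text:

```python
import numpy as np

# Controllability and observability rank tests for an oscillator example
# (assumed k = 2, gamma = 0.5; position measured).
k, gamma = 2.0, 0.5
A = np.array([[0.0, 1.0], [-k, -gamma]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# C_M = [B  AB  ...  A^{n-1}B],  O_M = [C; CA; ...; CA^{n-1}]
CM = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
OM = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

assert np.linalg.matrix_rank(CM) == n   # controllable
assert np.linalg.matrix_rank(OM) == n   # observable
```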
4. Discrete Systems
Where do discrete systems arise?

[Figure: computer controller — the digitized input u(k) passes through a DAC (zero-order hold) to the continuous system h(t), producing $y_c(t)$, which is sampled by an ADC into the digitized output y(k).]

Assume the DAC and ADC are clocked at sampling period T.
4. Discrete Systems
Then u(t) is given by
$$u_c(t) \equiv u(k), \quad kT \le t < (k+1)T; \qquad y(k) \equiv y_c(kT), \quad k = 0, 1, 2, \dots$$
$$\dot x_c(t) = A\,x_c(t) + B\,u_c(t), \quad x_c(0) = x_0; \qquad y_c(t) = C\,x_c(t) + D\,u_c(t)$$
Can we obtain a direct relationship between u(k) and y(k)? I.e. we want the equivalent discrete system:

[Figure: DAC → h(t) → ADC collapses to a single discrete system h(k) mapping u(k) to y(k).]
4. Discrete Systems
Yes! We can obtain the equivalent discrete system.
Recall
$$x_c(t) = e^{At}\,x_c(0) + \int_0^t e^{A\tau}\,B\,u_c(t - \tau)\,d\tau$$
From this,
$$x_c(kT + T) = e^{AT}\,x_c(kT) + \int_0^T e^{A\tau}\,B\,u_c(kT + T - \tau)\,d\tau = e^{AT}\,x_c(kT) + \left(\int_0^T e^{A\tau}\,d\tau\right) B\,u(k)$$
where the last step uses the zero-order hold ($u_c$ is constant over the sampling interval). So we have an exact discrete-time equivalent to the continuous system at the sample times t = kT — no numerical approximation! (Compare with the first-order expansion $x(k+1) = x(k) + \dot x(k)\,T + O(T^2)$.)
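This exact discretization can be computed with the standard augmented-matrix trick; the example system and sampling period below are assumed:

```python
import numpy as np

# Exact zero-order-hold discretization: x(kT+T) = Ad x(kT) + Bd u(k) with
# Ad = e^{AT}, Bd = (int_0^T e^{A tau} dtau) B, via the augmented-matrix trick:
# expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]].  (Example system assumed.)
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
T = 0.1

def expm_ss(M, squarings=20):
    # Scaling-and-squaring matrix exponential with a truncated series core.
    Ms = M / 2.0 ** squarings
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for i in range(1, 12):
        term = term @ Ms / i
        out = out + term
    for _ in range(squarings):
        out = out @ out
    return out

n, m = A.shape[0], B.shape[1]
Maug = np.zeros((n + m, n + m))
Maug[:n, :n], Maug[:n, n:] = A, B
E = expm_ss(Maug * T)
Ad, Bd = E[:n, :n], E[:n, n:]

# One discrete step must match a finely integrated continuous step (u = 1).
x = np.zeros((n, 1))
dt = T / 10000
for _ in range(10000):
    x = x + dt * (A @ x + B * 1.0)   # Euler on the held input
x_d = Ad @ np.zeros((n, 1)) + Bd * 1.0
assert np.allclose(x, x_d, atol=1e-4)
```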
4.1 Linear Ordinary Difference Equation
A linear ordinary difference equation looks similar to a LODE:
$$y(k+n) + a_{n-1}\,y(k+n-1) + \dots + a_1\,y(k+1) + a_0\,y(k) = b_m\,u(k+m) + \dots + b_1\,u(k+1) + b_0\,u(k)$$
with $n \ge m$ and zero initial values $y(n-1) = \dots = y(1) = y(0) = 0$. The z-transform gives
$$\big(z^n + a_{n-1} z^{n-1} + \dots + a_1 z + a_0\big)\,Y(z) = \big(b_m z^m + \dots + b_1 z + b_0\big)\,U(z)$$
$$Y(z) = \frac{b_m z^m + \dots + b_1 z + b_0}{z^n + a_{n-1} z^{n-1} + \dots + a_1 z + a_0}\,U(z) = G(z)\,U(z)$$
Once again: if $U(z) = 1$ (i.e. $u(k) = \delta(k)$), then $Y(z) = G(z)$.
The transfer function of the system is the z-transform of its pulse response!
4.1 z-Transform of the Discrete State Space Equation
$$x(k+1) = A_d\,x(k) + B_d\,u(k), \qquad y(k) = C\,x(k) + D\,u(k)$$
$$Y(z) = G(z)\,U(z) \quad\text{with}\quad G(z) = C\,(zI - A_d)^{-1} B_d + D$$
Exactly like for the continuous systems!
4.2 Frequency Domain / z-Transform
For analyzing discrete-time systems we use the z-transform (the analogue of the Laplace transform for time-continuous systems).
It converts a linear ordinary difference equation into algebraic equations, making it easier to find a solution of the system, and it gives the frequency response for free!
In the same spirit as the Laplace transform:
$$F(z) = \mathcal{Z}[f(k)] = \sum_{k=0}^{\infty} f(k)\,z^{-k}$$
with z a complex variable, $\omega = 2\pi f\,T$, $f$ the frequency in Hz, and T the time between two samples ($1/T$ the sampling frequency).
Note: if $f(k) = 0$ for $k = -1, -2, \dots$, then $\tilde F(\omega) = F\big(z = e^{i\omega}\big)$.
4.3 Stability (z-Domain)
A discrete LTI system is BIBO stable if
$$|u(k)| < M\ \forall k \;\Rightarrow\; |y(k)| < K\ \forall k$$
$$|y(k)| = \left|\sum_{i=0}^{k} u(k-i)\,h(i)\right| \le \sum_{i=0}^{k} |u(k-i)|\,|h(i)| \le M \sum_{i=0}^{\infty} |h(i)|$$
$$\therefore\ \sum_{i=0}^{\infty} |h(i)| < \infty \;\Rightarrow\; \text{BIBO stable.}$$
$$H(z) = \alpha\,\frac{\prod_{i=1}^{k}(z - z_i)}{\prod_{i=1}^{n}(z - p_i)} = \sum_{i=1}^{n} \beta_i\,T_i(z)$$

[Figure: pole map in the z-plane — poles inside the unit circle give damping (the deeper inside, the faster), poles on the circle give constant amplitude, poles outside give growth; complex poles add oscillation.]
4.3 Stability (z-Domain)
In general:
A complex pole pair gives oscillatory growth/damping.
A real pole gives exponential growth/decay, but may look oscillatory too (e.g. $r^n$ with $r < 0$ alternates in sign).
The farther inside the unit circle the poles are, the faster the damping and the higher the stability margin;
i.e. $|p_i| < 1$ for all poles ⇒ the system is stable.
4.3 Stability (z-Domain)
Stability directly from state space:
$$H(z) = \frac{b(z)}{a(z)} = C\,(zI - A_d)^{-1} B_d = \frac{C\,\mathrm{adj}(zI - A_d)\,B_d}{\det(zI - A_d)}$$
$$b(z) = C\,\mathrm{adj}(zI - A_d)\,B_d, \qquad a(z) = \det(zI - A_d)$$
4.4 Discrete Cavity Model
Converting the transfer function from the continuous cavity model to the discrete model:
$$H(s) = \frac{\omega_{1/2}}{\Delta\omega^2 + (s + \omega_{1/2})^2}\begin{pmatrix} s + \omega_{1/2} & -\Delta\omega\\ \Delta\omega & s + \omega_{1/2} \end{pmatrix}$$
$$H(z) = \left(1 - \frac{1}{z}\right)\mathcal{Z}\!\left\{\frac{H(s)}{s}\right\} = \frac{z-1}{z}\cdot\mathcal{Z}\!\left\{\mathcal{L}^{-1}\!\left[\frac{H(s)}{s}\right]\Big|_{t = kT_s}\right\}$$
4.5 Linear Quadratic Regulator
Given:
$$x(k+1) = A\,x(k) + B\,u(k), \qquad z(k) = C\,x(k)$$
Suppose the system is unstable or almost unstable. We want to find $u_{fb}(k)$ which will bring x(k) to zero, quickly, from any initial condition.

[Figure: plant {A,B,C} with state x; the feedback law $u_{fb}(k) = ?$ is to be designed.]
4.5 Trade-Off

[Figure: two regulator responses — (a) a small gain K gives slow decay of z but “cheap” control ($u_{fb}$ small); (b) a large gain K gives fast decay of z but “expensive” control ($u_{fb}$ large).]
4.5 Quadratic Forms
$$x = \begin{pmatrix}x_1\\ x_2\end{pmatrix} \in \mathbb{R}^2$$
$$f(x) = f(x_1, x_2) = a x_1^2 + b x_1 x_2 + c x_1 + d x_2^2 = \begin{pmatrix}x_1 & x_2\end{pmatrix}\underbrace{\begin{pmatrix}a & \tfrac{b}{2}\\ \tfrac{b}{2} & d\end{pmatrix}}_{Q}\begin{pmatrix}x_1\\ x_2\end{pmatrix} + \underbrace{\begin{pmatrix}c & 0\end{pmatrix}}_{P^T}\begin{pmatrix}x_1\\ x_2\end{pmatrix}$$
$$f(x) = x^T Q x + P^T x + e$$
(quadratic part, linear part, constant).
4.5 Quadratic Cost for the Regulator
What do we mean by “good” damping and “cheap” control? We now define this precisely. Consider:
$$J \equiv \sum_{i=0}^{\infty}\big\{ x_i^T Q x_i + u_i^T R u_i \big\}, \qquad Q \ge 0,\; R > 0$$
The first term penalizes large state excursions, the second penalizes large control.
4.5 LQR Problem Statement
(Linear quadratic regulator)
Given: xi + 1 = Ax i + Bu i ; x 0 given:
u i = − K opt xi
(
K opt = R + B T SB ) −1
B T SA
S = AT SA + Q − A T AB R + B T SB ( ) −1
B T SA
(2) Equation A.R.E. is matrix quadratic equation. Looks pretty intimidating but
Computer can solve in a second.
(3) No tweaking ! Just specify {A,B,C,D} and Q and R, press return button, LQR
Routine Spits out Kopt- done
(Of course picking Q and R is tricky sometimes but that‘s another story).
(Of course that doesn‘t mean its “best“ in the absolute sense .-)
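A sketch of solving the A.R.E. by simple fixed-point iteration; the system below is an assumed example with one unstable eigenvalue, and dedicated solvers exist, but the iteration shows the structure:

```python
import numpy as np

# Solve the discrete A.R.E. by fixed-point iteration and form
# K_opt = (R + B^T S B)^{-1} B^T S A  (assumed example system).
A = np.array([[1.1, 0.3], [0.0, 0.9]])   # one eigenvalue outside the unit circle
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

S = np.eye(2)
for _ in range(500):
    BSB = R + B.T @ S @ B
    S = A.T @ S @ A + Q - A.T @ S @ B @ np.linalg.solve(BSB, B.T @ S @ A)

K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)

# The regulated system x(k+1) = (A - B K) x(k) must be stable.
assert max(abs(np.linalg.eigvals(A - B @ K))) < 1.0
```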
4.5 LQR Problem Statement - Remarks
(5) As we vary the Q/R ratio we get a whole family of $K_{lqr}$'s, i.e. we can trade off state excursion (damping) vs actuator effort (control):
$$J_z = \sum_{i=0}^{\infty} x_i^T Q x_i = \sum x_i^T C^T \rho\, C x_i = \rho \sum z_i^T z_i, \qquad J_u = \sum_{i=0}^{\infty} u_i^T R u_i$$

[Figure: trade-off curve — state excursion $J_z$ vs actuator effort $J_u$; the optimal family of controllers traces the boundary of the achievable region.]
4.6 Optimal Linear Estimation
How can we obtain “good” estimates of the velocity state from just observing the position state?
Furthermore, the sensors may be noisy and the plant itself may be subject to outside disturbances (process noise), i.e. we are looking for this:

[Figure: plant {A,B,C} driven by process noise, with output Cx(k); an estimator is to reconstruct the state x.]
4.6 Problem Statement
$$x(k+1) = A\,x(k) + B\,w(k)$$
$$z(k) = C\,x(k)$$
$$y(k) = C\,x(k) + v(k)$$

[Figure: plant {A,B,C} driven by process noise w(k); the output z(k) plus sensor noise v(k) gives the measurement y(k), which feeds the estimator producing $\hat x(k|k-1)$.]

We require
$$E\left[\big\|x(k) - \hat x(k|k-1)\big\|_2^2\right] = \text{minimal}$$
Fact from statistics: $\hat x(k|k-1) = E\big[x(k)\,\big|\,y_0, \dots, y_{k-1}\big]$.
4.6 Kalman Filter
The Kalman filter is an efficient algorithm that computes the new $\hat x_{i+1|i}$ (the linear-least-mean-square estimate) of the system state vector $x_{i+1}$, given $\{y_0, \dots, y_i\}$, by updating the old estimate $\hat x_{i|i-1}$ and its old error covariance
$$p_{i|i-1} = E\left[\big\|\tilde x_{i|i-1}\big\|^2\right]$$
using the new measurement $y_i$.
The Kalman filter produces $\hat x_{i+1|i}$ from $\hat x_{i|i-1}$ (rather than $\hat x_{i|i}$) because it “tracks” the system “dynamics”: by the time we compute $\hat x_{i|i}$ from $\hat x_{i|i-1}$, the system state has changed from $x_i$ to $x_{i+1} = A x_i + B w_i$.
4.6 Kalman Filter
The Kalman filter algorithm can be divided into a measurement update and a time update:

[Figure: the measurement update takes $\hat x_{i|i-1}$, $p_{i|i-1}$ and $y_i$ to $\hat x_{i|i}$, $p_{i|i}$; the time update then produces $\hat x_{i+1|i}$, $p_{i+1|i}$.]

Measurement update (M.U.):
$$\hat x_{i|i} = \hat x_{i|i-1} + p_{i|i-1} C^T \big(C p_{i|i-1} C^T + V\big)^{-1}\big(y_i - C \hat x_{i|i-1}\big)$$
$$p_{i|i} = p_{i|i-1} - p_{i|i-1} C^T \big(C p_{i|i-1} C^T + V\big)^{-1} C p_{i|i-1}$$
Time update (T.U.):
$$\hat x_{i+1|i} = A\,\hat x_{i|i}, \qquad p_{i+1|i} = A\,p_{i|i}\,A^T + B W B^T$$
With initial conditions $\hat x_{0|-1} = 0$ and $p_{0|-1} = X_0$.
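The M.U./T.U. equations reduce to a few lines in the scalar case. The sketch below estimates a constant state from noisy measurements; all numerical values are assumed for illustration:

```python
import random

# Scalar Kalman filter (A = C = 1, B = 0): estimate a constant state from
# noisy measurements, using the M.U./T.U. equations above.
random.seed(0)
x_true, V, W = 5.0, 1.0, 0.0   # constant state, measurement noise variance V

x_hat, p = 0.0, 100.0          # x(0|-1) = 0, p(0|-1) = X0 (large initial doubt)
for _ in range(200):
    y = x_true + random.gauss(0.0, V ** 0.5)
    # Measurement update
    gain = p / (p + V)
    x_hat = x_hat + gain * (y - x_hat)
    p = p - p * p / (p + V)
    # Time update (A = 1, no process noise here)
    p = p + W

assert abs(x_hat - x_true) < 0.5   # ~200 averaged samples: std ~ 1/sqrt(200)
assert p < 0.01                    # covariance shrinks toward zero
```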
4.6 Kalman Filter
By plugging the M.U. equations into the T.U. equations, one can do both steps at once:
$$\hat x_{i+1|i} = A \hat x_{i|i} = A \hat x_{i|i-1} + A\,p_{i|i-1} C^T \big(C p_{i|i-1} C^T + V\big)^{-1}\big(y_i - C \hat x_{i|i-1}\big)$$
$$\hat x_{i+1|i} = A \hat x_{i|i-1} + L_i\big(y_i - C \hat x_{i|i-1}\big), \qquad L_i \equiv A\,p_{i|i-1} C^T \big(C p_{i|i-1} C^T + V\big)^{-1}$$
$$p_{i+1|i} = A p_{i|i} A^T + B W B^T = A\left[p_{i|i-1} - p_{i|i-1} C^T \big(C p_{i|i-1} C^T + V\big)^{-1} C p_{i|i-1}\right] A^T + B W B^T$$
$$p_{i+1|i} = A p_{i|i-1} A^T + B W B^T - A p_{i|i-1} C^T \big(C p_{i|i-1} C^T + V\big)^{-1} C p_{i|i-1} A^T$$

[Figure: estimator block diagram — the innovation $e_i = y_i - \hat y_{i|i-1}$ is scaled by the time-varying gain $L_i$ and added to $A \hat x_{i|i-1}$; a unit delay $z^{-1}$ and C close the loop.]
4.6 Picture of Kalman Filter
Plant equations:
$$x_{i+1} = A x_i + B w_i, \qquad y_i = C x_i + v_i$$
Kalman filter:
$$\hat x_{i+1|i} = A \hat x_{i|i-1} + L_i\big(y_i - \hat y_{i|i-1}\big), \qquad \hat y_{i|i-1} = C \hat x_{i|i-1}$$
If v = w = 0, the Kalman filter can estimate the state precisely in a finite number of steps.
4.6 Kalman Filter
Remarks:
(1) Since $y_i = C x_i + v_i$ and $\hat y_{i|i-1} = C \hat x_{i|i-1}$, we can write the estimator equation as
$$\hat x_{i+1|i} = A \hat x_{i|i-1} + L_i\big(C x_i + v_i - C \hat x_{i|i-1}\big) = (A - L_i C)\,\hat x_{i|i-1} + L_i\,(C x_i + v_i)$$
(2) In practice, the Riccati equation reaches steady state in a few steps. People often run with the steady-state Kalman filter, i.e.
$$L_{ss} = A p_{ss} C^T \big(C p_{ss} C^T + V\big)^{-1}$$
where
$$p_{ss} = A p_{ss} A^T + B W B^T - A p_{ss} C^T \big(C p_{ss} C^T + V\big)^{-1} C p_{ss} A^T$$
4.7 LQG Problem
Given:
$$x_{k+1} = A x_k + B u_k + B_w w_k$$
$$z_k = C x_k$$
$$y_k = C x_k + v_k$$
$$\langle w_i, w_j \rangle = W \delta_{ij}, \qquad \langle v_i, v_j \rangle = V \delta_{ij}, \qquad \langle w_i, v_j \rangle = 0$$
with $w_k$, $v_k$ both Gaussian.
For Gaussian noise, the Kalman filter gives the absolute best estimate.
4.7 LQG Problem
The separation principle states that the LQG optimal controller is obtained by:
(1) using a Kalman filter to obtain a least-squares optimal estimate of the plant state, and
(2) feeding back that estimate through the optimal LQR gain as if it were the exact state.
4.7 Picture of LQG Regulator

[Figure: the plant $x_{k+1} = A x_k + B u_k + B_w w_k$ with output $z_k = C x_k$ and measurement $y_k = z_k + v_k$; a Kalman-filter estimator (unit delay $z^{-1}$, C, innovation gain L) produces $\hat x_{k|k-1}$, which is fed back through the LQR gain K as $u_k = -K \hat x_{k|k-1}$.]
4.7 LQG Regulator
x = A x + ( − Bu ) + B w
k +1 k k w k
Plant z =Cx
k k
y =Cx +v
k k k
x̂ = A x̂ + Bu + L y − C x̂
LQG Controller k + 1k k k −1 k k k k − 1
u = − K x̂
k k k −1
−1 −1
k = − R + BT SB
+ S = AT SA + Q − AT SB R + BT SB BT SA
−1
T
L = APC V + CPC T +P
−1
T T T
= APA + BWB − APC V + CPC T CPC T
92
4.7 Problem Statement (in English)
Want a controller which takes as input noisy measurements, y, and produces as output a
Feedback signal ,u, which will minimize excursions of the regulated plant outputs (if no pole
-zero cancellation, then this is equivalent to minimizing state excursions.)
Also want to achieve “regulation” with as little actuator effort ,u, as possible.
(1). Q and R are weigthing matrices that allow trading off of rms u and rms x.
T
(2) if Q = C ρ C; ρ > 0 then trade off rms z VS rms u
(3). In the stochastic LQR case, the only difference is that now we don’t have complete state
information yi = Cxi + vi we have only noisy observations
94
4.7 Problem Statement
(5) We can let Q/R ratio vary and we’ll obtain family of LQG controllers. Can
Plot rms z vs rms u for each one
Trade-Off curves
rms Z
ACHIEVABLE
LQG, Q/R=0.01
Zrms(1)
X other
LQG, Q/R=100
Zrms(2)