8 Feedback Linearization
Key points
• Feedback linearization refers to techniques for transforming original system models into equivalent models of a simpler form.
• It is completely different from conventional (Jacobian) linearization, because feedback linearization is achieved by exact state transformation and feedback, rather than by linear approximations of the dynamics.
• Input-output and input-state linearization
• Internal dynamics, zero dynamics, linearized zero dynamics
• Jacobi’s identity, the theorem of Frobenius
• MIMO feedback linearization is also possible.
Feedback linearization is an approach to nonlinear control design that has attracted a great deal of research in recent years. The central idea is to algebraically transform nonlinear system dynamics into (fully or partly) linear ones, so that linear control techniques can be applied.
Control of Nonlinear Dynamic Systems: Theory and Applications
J. K. Hedrick and A. Girard © 2005
The basic idea of simplifying the form of a system by choosing a different state
representation is not completely unfamiliar; rather it is similar to the choice of reference
frames or coordinate systems in mechanics.
References: Sastry, Slotine and Li, Isidori, Nijmeijer and van der Schaft
Terminology
Feedback Linearization
A “catch-all” term which refers to control techniques where the input is used to linearize
all or part of the system’s differential equations.
Input/Output Linearization
A control technique where the output y of the dynamic system is differentiated until the physical input u appears in the r-th derivative of y. Then u is chosen to yield a transfer function from the "synthetic input" v to the output y of:

$$\frac{Y(s)}{V(s)} = \frac{1}{s^r}$$

If r, the relative degree, is less than n, the order of the system, then there will be internal dynamics. If r = n, then I/O and I/S linearizations are the same.
Input/State Linearization
A control technique where some new output ynew = hnew(x) is chosen so that with respect
to ynew, the relative degree of the system is n. Then the design procedure using this new
output ynew is the same as for I/O linearization.
SISO Systems
$$\dot{x} = f(x) + g(x)u, \qquad y = h(x)$$

Differentiate the output until the input appears:

$$y = h(x) = L_f^0 h$$

$$\dot{y} = \frac{\partial h}{\partial x}\,\dot{x} = L_f^1 h + L_g(h)\,u = L_f^1 h \quad \text{with } L_g h = 0$$

$$\ddot{y} = L_f^2 h + L_g(L_f^1 h)\,u = L_f^2 h \quad \text{with } L_g(L_f^1 h) = 0$$

$$\vdots$$

$$y^{(r)} = L_f^r h + L_g(L_f^{r-1} h)\,u = v \quad \text{with } L_g(L_f^{r-1} h) \neq 0$$
That is, r is the smallest integer for which the coefficient of u is non-zero over the space
where we want to control the system.
Let's set:

$$\alpha(x) = L_f^r(h), \qquad \beta(x) = L_g\!\left(L_f^{r-1}(h)\right)$$

[Block diagram: the synthetic input v drives a chain of r integrators (each 1/s), whose output is y.]

We have an r-integrator linear system, of the form $\dfrac{Y(s)}{V(s)} = \dfrac{1}{s^r}$. We can now design a controller for this system, using any linear controller design method. We have $v = \alpha + \beta u$, so the controller that is implemented is obtained through:

$$u = \frac{1}{\beta(x)}\left[-\alpha(x) + v\right]$$

For example, a pole-placement choice of v gives the closed-loop dynamics:

$$\Rightarrow\; y^{(r)} + c_{r-1}\,y^{(r-1)} + \dots + c_0\,y = 0$$
For tracking, with integral action:

$$v = -\sum_{k=0}^{r-1} c_k L_f^k(h) + K_c (y_d - y) + \frac{1}{\tau}\int_0^t (y_d - y)\,d\tau$$
Here the first term in the expression is the standard feedback linearization term, and the
second term is tuned online for robustness.
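As a concrete numerical sketch of this design (my own example, not from the notes): take the pendulum-like system $\dot{x}_1 = x_2$, $\dot{x}_2 = -\sin x_1 + u$ with $y = x_1$, so r = 2, $\alpha(x) = -\sin x_1$ and $\beta(x) = 1$. Then $u = (1/\beta)(-\alpha + v) = \sin x_1 + v$, and a pole-placing v drives y to a constant reference:

```python
import math

# Hypothetical example (not from the notes): x1' = x2, x2' = -sin(x1) + u,
# y = x1 has relative degree r = 2 with alpha(x) = -sin(x1), beta(x) = 1.
yd = 1.0                 # constant reference for y
x1, x2 = 0.0, 0.0
dt = 1e-3
for _ in range(int(10.0 / dt)):
    v = 4.0 * (yd - x1) - 4.0 * x2      # places both closed-loop poles at s = -2
    u = math.sin(x1) + v                # u = (1/beta) * (-alpha + v)
    dx1, dx2 = x2, -math.sin(x1) + u    # plant dynamics (cancellation is exact)
    x1 += dt * dx1
    x2 += dt * dx2
```

After 10 s the output has settled at the reference; the gains and step size are arbitrary choices for illustration.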
Internal Dynamics
$$z \equiv \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_r \end{bmatrix} \qquad \text{where} \qquad \begin{aligned} z_1 &\equiv y = L_f^0 h \\ z_2 &= \dot{y} = L_f^1 h \\ &\;\vdots \\ z_r &= y^{(r-1)} = L_f^{r-1} h \end{aligned}$$
So we can write:
$$\dot{z} = Az + Bv, \qquad y = \begin{bmatrix} 1 & 0 & \dots & 0 & 0 \end{bmatrix} z$$

where

$$A = \begin{bmatrix} 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 0 & 1 \\ 0 & 0 & \dots & 0 & 0 \end{bmatrix} \qquad \text{and} \qquad B = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$$
We define:
$$x = \begin{bmatrix} z \\ \xi \end{bmatrix} \qquad \text{where } z \text{ is } r \times 1 \text{ and } \xi \text{ is } (n-r) \times 1 \;\; (z \in \mathbb{R}^r,\; \xi \in \mathbb{R}^{n-r}).$$

The normal forms theorem tells us that there exists a ξ such that:

$$\dot{\xi} = \psi(z, \xi)$$
So we have:
$$\dot{z} = Az + Bv$$
$$\dot{\xi} = \psi(z, \xi)$$
The ξ equation represents the "internal dynamics"; these states are not observable from the output, because z does not depend on ξ at all ⇒ "internal", and hard to analyze! The full internal dynamics are difficult to analyze, so oftentimes, to make our lives easier, we analyze the so-called "zero dynamics":
$$\dot{\xi} = \psi(0, \xi)$$

Linearizing about the origin,

$$J = \left.\frac{\partial \psi}{\partial \xi}\right|_{0}$$

and we look at the eigenvalues of J.
If these are well behaved, perhaps the nonlinear dynamics might be well-behaved. If
these are not well behaved, the control may not be acceptable!
For the linear system

$$\dot{x} = Ax + Bu, \qquad y = Cx$$

we have:

$$H(s) = \frac{Y(s)}{U(s)} = C(sI - A)^{-1}B$$
The eigenvalues of the zero dynamics are the zeroes of H(s). Therefore, if H(s) is non-minimum phase (it has zeroes in the right-half plane), then the zero dynamics are unstable.
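This can be checked numerically. The sketch below uses a hypothetical SISO example (A, B, C are my own choices giving $H(s) = (s-1)/(s^2+3s+2)$, which has a right-half-plane zero); the numerator is recovered via the matrix determinant lemma, $\det(sI - A + BC) = \det(sI - A)\,(1 + C(sI-A)^{-1}B)$:

```python
import numpy as np

# Hypothetical example: H(s) = C(sI - A)^{-1} B = (s - 1)/(s^2 + 3s + 2)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[-1.0, 1.0]])

# For a SISO system, det(sI - A + BC) = det(sI - A) * (1 + C (sI - A)^{-1} B),
# so the numerator polynomial of H(s) is np.poly(A - B C) - np.poly(A).
num = np.poly(A - B @ C) - np.poly(A)
zeros = np.roots(num)     # the transmission zeros of H(s)
```

Here `zeros` contains the single zero at s = +1: the system is non-minimum phase, so its zero dynamics are unstable.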
By analogy, for nonlinear systems: if $\dot{\xi} = \psi(0, \xi)$ is unstable, then the system

$$\dot{x} = f(x) + g(x)u, \qquad y = h(x)$$

is said to be non-minimum phase, and input/output linearization will generally yield an internally unstable closed loop.
Input/Output Linearization
$$\dot{x} = f(x) + g(x)u, \qquad y = h(x)$$

o Procedure

a) Differentiate the output, $\dot{y}, \ddot{y}, \dots$, until the input appears:

$$y^{(r)} = \alpha(x) + \beta(x)u$$

b) Choose u in terms of the synthetic input v:

$$u = \frac{1}{\beta(x)}\left[-\alpha(x) + v\right]$$

c) Then the system has the form:

$$\frac{Y(s)}{V(s)} = \frac{1}{s^r}$$
o Example
$$\dot{x}_1 = x_2 + x_1^3 + u, \qquad \dot{x}_2 = -u, \qquad y = h(x) = x_1$$

Follow the steps:

a) Differentiate: $\dot{y} = \dot{x}_1 = x_2 + x_1^3 + u$, so u appears in the first derivative and r = 1.

b) Choose the control:

$$\Rightarrow u = -x_2 - x_1^3 + v$$

c) Choose a control law for the r-integrator system, for example proportional control:

$$\dot{x}_1 = v = -K_p x_1$$

The internal dynamics are then:

$$\dot{x}_2 = -u = -(-x_1^3 - x_2 + v) = -(-x_1^3 - x_2 - K_p x_1) = x_1^3 + K_p x_1 + x_2$$

As $x_1 \to 0$, the internal state obeys $\dot{x}_2 \approx x_2$, which is unstable.
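A quick numerical sketch of this failure (the gain, initial conditions, and horizon are my own choices): simulating the closed loop shows the output $y = x_1$ converging while the internal state $x_2$ diverges:

```python
# Closed loop for the example: x1' = x2 + x1^3 + u, x2' = -u,
# with u = -x2 - x1^3 + v and v = -Kp*x1.
Kp = 1.0
x1, x2 = 0.5, 0.5
dt = 1e-3
for _ in range(int(5.0 / dt)):
    u = -x2 - x1**3 - Kp * x1
    dx1 = x2 + x1**3 + u        # = -Kp*x1: the output decays
    dx2 = -u                    # = x1^3 + Kp*x1 + x2: the internal state grows
    x1 += dt * dx1
    x2 += dt * dx2
```

After 5 s, x1 has essentially reached zero while x2 has grown by roughly a factor of e^5.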
There are two possible approaches when faced with this problem:
Input/State Linearization

$$\dot{x} = f(x) + g(x)u$$

Question: does there exist a transformation φ(x) such that the transformed system is linear?
We seek new coordinates

$$z \equiv \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_n \end{bmatrix}$$

such that the transformed dynamics are in the chain-of-integrators form with

$$A = \begin{bmatrix} 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 0 & 1 \\ 0 & 0 & \dots & 0 & 0 \end{bmatrix} \qquad \text{and} \qquad B = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$$

with the correspondence

$$z \equiv \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_n \end{bmatrix} \Leftrightarrow x \equiv \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$
Question: does there exist an output y=z1(x) such that y has relative degree n?
Let $z_2 \equiv L_f(z_1)$, requiring $L_g(z_1) = 0$; continuing the chain requires $L_g(L_f(z_1)) = 0$, and so on:

$$\dot{z}_1 = z_2, \qquad \dot{z}_2 = z_3, \qquad \dots, \qquad \dot{z}_n = L_f^n(z_1) + L_g\!\left(L_f^{n-1}(z_1)\right)u \equiv v$$

This works if

$$L_g\!\left(L_f^k(z_1)\right) = 0 \;\text{ for } k = 0, \dots, n-2, \qquad \text{and} \qquad L_g\!\left(L_f^{n-1}(z_1)\right) \neq 0$$

so that

$$z \equiv \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_n \end{bmatrix} = \begin{bmatrix} L_f^0(z_1) \\ L_f^1(z_1) \\ \vdots \\ L_f^{n-1}(z_1) \end{bmatrix}$$
⇒ Is there a test for the existence of such a $z_1$, directly in terms of f and g of the original system $\dot{x} = f(x) + g(x)u$?
Jacobi’s Identity
Remember:
$$ad_f^i g \equiv \left[f,\; ad_f^{i-1} g\right], \qquad ad_f g = [f, g]$$

For k = 0:

$$L_g(L_f^0(z_1)) = 0 \;\Rightarrow\; L_g(z_1) = 0$$

$$\frac{\partial z_1}{\partial x_1}g_1 + \frac{\partial z_1}{\partial x_2}g_2 + \dots + \frac{\partial z_1}{\partial x_n}g_n = 0 \quad \text{(first order)}$$

For k = 1:

$$L_g(L_f^1(z_1)) = 0$$

$$L_g\!\left[\frac{\partial z_1}{\partial x_1}f_1 + \frac{\partial z_1}{\partial x_2}f_2 + \dots + \frac{\partial z_1}{\partial x_n}f_n\right] = 0$$

$$\nabla\!\left[\frac{\partial z_1}{\partial x_1}f_1 + \frac{\partial z_1}{\partial x_2}f_2 + \dots + \frac{\partial z_1}{\partial x_n}f_n\right]\cdot g = 0$$
$$\Rightarrow \frac{\partial^2 z_1}{\partial x_1^2} + \dots \quad \Rightarrow \text{a second-order PDE (the gradient appears twice)}$$
Things get messy, but by repeated use of Jacobi's identity (see Slotine and Li), we have the equivalence:

$$L_g\!\left(L_f^k(z_1)\right) = 0 \text{ for } k = 0, \dots, n-2 \quad \Longleftrightarrow \quad L_{ad_f^k g}(z_1) = 0 \text{ for } k = 0, \dots, n-2$$

that is,

$$\nabla z_1 \cdot \left[g,\; ad_f g,\; \dots,\; ad_f^{n-2} g\right] = 0$$

Evaluating the second half, condition by condition:

$$\nabla z_1 \cdot g = 0 \;\Rightarrow\; \frac{\partial z_1}{\partial x_1}g_1 + \frac{\partial z_1}{\partial x_2}g_2 + \dots + \frac{\partial z_1}{\partial x_n}g_n = 0$$

$$\nabla z_1 \cdot ad_f g = 0 \;\Rightarrow\; \frac{\partial z_1}{\partial x_1}(\dots) + \frac{\partial z_1}{\partial x_2}(\dots) + \dots + \frac{\partial z_1}{\partial x_n}(\dots) = 0$$

and so on. Each of these is a first-order PDE in $z_1$.
Theorem of Frobenius:

A solution to the set of partial differential equations $L_{ad_f^k g}(z_1) = 0$ for $k \in [0, n-2]$ exists if and only if:

a) $\left[g,\; ad_f g,\; \dots,\; ad_f^{n-1} g\right]$ has rank n

b) $\left[g,\; ad_f g,\; \dots,\; ad_f^{n-2} g\right]$ is involutive
♦
Definition of "involutive":

$$[f_i, f_j] = \sum_{k=1}^{m} \alpha_{ijk}(x)\,f_k(x) \qquad \forall (i, j) \in N^2$$
♦
i.e. when you take Lie brackets you don’t generate new vectors.
$$\dot{x}_1 = x_2 + x_1^3 + u, \qquad \dot{x}_2 = -u$$

Question: does there exist a scalar $z_1(x_1, x_2)$ such that the relative degree is 2?

$$f = \begin{bmatrix} x_2 + x_1^3 \\ 0 \end{bmatrix}, \qquad g = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$
$$\left(g,\; [f, g]\right) = \begin{bmatrix} 1 & 1 - 3x_1^2 \\ -1 & 0 \end{bmatrix}$$

The determinant is $1 - 3x_1^2$, which vanishes at $x_1 = \pm\sqrt{3}/3$: those points look dangerous. Away from them, solve:

$$\nabla z_1 \cdot g = 0 \;\Rightarrow\; \begin{bmatrix} \dfrac{\partial z_1}{\partial x_1} & \dfrac{\partial z_1}{\partial x_2} \end{bmatrix}\begin{bmatrix} 1 \\ -1 \end{bmatrix} = 0$$

$$\Rightarrow \frac{\partial z_1}{\partial x_1} = \frac{\partial z_1}{\partial x_2} \;\Rightarrow\; z_1 = x_1 + x_2$$
$$z_1 = x_1 + x_2, \qquad \dot{z}_1 = \dot{x}_1 + \dot{x}_2 = x_2 + x_1^3 \equiv z_2$$

$$\ddot{z}_1 = \dot{z}_2 = \dot{x}_2 + 3x_1^2\dot{x}_1 = 3x_1^2(x_2 + x_1^3) + (3x_1^2 - 1)\,u \quad (u \text{ appears: good!})$$

Define $\ddot{z}_1 = v$.
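These conditions can be checked symbolically. A sketch using SymPy (assuming it is available), applied to this example:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2 + x1**3, 0])
g = sp.Matrix([1, -1])

def bracket(a, b):
    # Lie bracket [a, b] = (db/dx) a - (da/dx) b
    return b.jacobian(x) * a - a.jacobian(x) * b

def lie(scalar, vec):
    # Lie derivative of a scalar field along a vector field
    return (sp.Matrix([scalar]).jacobian(x) * vec)[0]

ad_fg = bracket(f, g)                       # ad_f g = [f, g]
det = sp.simplify(g.row_join(ad_fg).det())  # rank test: zero only at x1 = +/- sqrt(3)/3

z1 = x1 + x2
Lg_z1 = sp.simplify(lie(z1, g))             # 0: relative degree exceeds 1
LgLf_z1 = sp.simplify(lie(lie(z1, f), g))   # 3*x1**2 - 1: nonzero away from x1 = +/- sqrt(3)/3
```

This confirms the determinant $1 - 3x_1^2$ and that $z_1 = x_1 + x_2$ has relative degree 2 wherever $3x_1^2 - 1 \neq 0$.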
$$\Rightarrow z_1 \to z_{1d}, \qquad z_1 = x_1 + x_2, \qquad z_{1d} = x_{1d} + x_{2d}$$

[Figure: sketch of the resulting response in the (x1, x2) plane.]
Consider a "square" system, where the number of inputs is equal to the number of outputs, m:

$$\dot{x} = f(x) + \sum_{i=1}^{m} g_i(x)\,u_i, \qquad y = \left[h_1, \dots, h_m\right]^T$$

$$\dot{y}_k = L_f(h_k) + \sum_{i=1}^{m} L_{g_i}(h_k)\,u_i$$
Let $r_k$, the relative degree of the k-th output, be defined as the smallest integer such that,
for some i,

$$L_{g_i}\!\left(L_f^{r_k - 1}(h_k)\right) \neq 0$$
Let:

$$y^r \equiv \begin{bmatrix} \dfrac{d^{r_1} y_1}{dt^{r_1}} \\ \vdots \\ \dfrac{d^{r_m} y_m}{dt^{r_m}} \end{bmatrix} \quad \text{(an } m \times 1 \text{ vector)}, \qquad l(x) = \begin{bmatrix} L_f^{r_1}(h_1) \\ \vdots \\ L_f^{r_m}(h_m) \end{bmatrix}$$

Then we have:

$$\frac{d^{r_k} y_k}{dt^{r_k}} = v_k, \quad k = 1, \dots, m, \qquad \text{so} \quad y^r \Leftrightarrow v$$
$$u = J^{-1}(x)\left(v - l(x)\right)$$

where $J(x)$ is the decoupling matrix whose entries are the coefficients of the inputs, $J_{ki} = L_{g_i}\!\left(L_f^{r_k - 1}(h_k)\right)$.

Problems:

Internal Dynamics
The linearized part of the system has dimension equal to the total relative degree:

$$r_T = \sum_{k=1}^{m} r_k$$
For each output i, define a chain of states:

$$z_1^i = h_i, \qquad \dot{z}_1^i = z_2^i, \qquad \dots, \qquad \dot{z}_{r_i}^i = L_f^{r_i}(h_i) + \sum_{k=1}^{m} L_{g_k}\!\left(L_f^{r_i - 1}(h_i)\right)u_k \equiv v_i$$

Stacking the chains,

$$x = \begin{bmatrix} z_1^1 \\ z_2^1 \\ \vdots \\ z_{r_1}^1 \\ z_1^2 \\ \vdots \end{bmatrix} \;\Rightarrow\; x = \begin{bmatrix} z_T \\ \xi_T \end{bmatrix} \qquad \text{where } z_T \text{ is } r_T \times 1 \text{ and } \xi_T \text{ is } (n - r_T) \times 1.$$
Can we get a ξ that is not directly a function of the controls (as in the SISO case)? NO!
$$\dot{z} = Az + Bv \qquad \text{and} \qquad y^r \equiv l(x) + J(x)\,u \equiv v$$

$$y^r \equiv 0 \;\Rightarrow\; u^* = -J^{-1}(x)\,l(x)$$

The output is held identically at zero by applying the control $u^*$ at all times; the dynamics that remain are the zero dynamics.
Example: the kinematic car.

[Figure: top view of a car in the (x1, x2) plane, with heading ψ and front-wheel velocity u1.]
Basically, ψ is the yaw angle of the vehicle, and x1 and x2 are the Cartesian locations of
the wheels. u1 is the velocity of the front wheels, in the direction that they are pointing,
and u2 is the steering velocity.
$$x = \begin{bmatrix} x_1 \\ x_2 \\ \psi \end{bmatrix}, \qquad \begin{bmatrix} \dot{y}_1 \\ \dot{y}_2 \end{bmatrix} = \begin{bmatrix} \cos\psi & 0 \\ \sin\psi & 0 \end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \end{bmatrix}$$

$$J(x) = \begin{bmatrix} \cos\psi & 0 \\ \sin\psi & 0 \end{bmatrix} \text{ is clearly singular (it has rank 1).}$$
The remedy is to add an integrator on $u_1$, defining a new state $x_3 = u_1$ and a new input $u_3 = \dot{u}_1$:

$$\dot{x}_1 = x_3\cos\psi, \qquad \dot{x}_2 = x_3\sin\psi, \qquad \dot{x}_3 = \dot{u}_1 = u_3, \qquad \dot{\psi} = u_2$$
$$f = \begin{bmatrix} x_3\cos\psi \\ x_3\sin\psi \\ 0 \\ 0 \end{bmatrix}, \qquad g_1 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \qquad g_2 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$$
Take $y_1 \equiv x_1$ and $y_2 \equiv x_2$. Differentiating each output twice, the inputs appear, and the new J(x) matrix

$$J(x) = \begin{bmatrix} \cos\psi & -u_1\sin\psi \\ \sin\psi & u_1\cos\psi \end{bmatrix}$$

is non-singular for $u_1 \neq 0$ (as long as the axle is moving):

$$\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = J(x)\begin{bmatrix} u_3 \\ u_2 \end{bmatrix}, \qquad \ddot{y}_1 = v_1, \quad \ddot{y}_2 = v_2$$
Let:

$$\begin{bmatrix} u_3 \\ u_2 \end{bmatrix} = J^{-1}(x)\begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$$
and u&1 = u 3 ⇒ we have a dynamic feedback controller (the controller has dynamics, not
just gains, in it).
♦
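A numerical sketch of the dynamically extended controller (the reference trajectory, gains, and initial conditions are my own choices): the car tracks the line $y_{1d} = y_{2d} = 0.5t$, and the extra state $x_3 = u_1$ stays away from zero, keeping J invertible:

```python
import math

# Extended car: x1' = x3 cos(psi), x2' = x3 sin(psi), x3' = u3, psi' = u2,
# with [y1''; y2''] = J(x) [u3; u2],
#   J = [[cos psi, -x3 sin psi], [sin psi, x3 cos psi]],  det J = x3.
Kp, Kd = 4.0, 4.0
x1, x2, x3, psi = 0.0, 0.0, 0.5, 0.0    # x3 != 0, so J starts invertible
dt, T, t = 1e-3, 10.0, 0.0
for _ in range(int(T / dt)):
    # PD synthetic inputs tracking y1d = y2d = 0.5*t (ydot_d = 0.5, yddot_d = 0)
    v1 = Kp * (0.5 * t - x1) + Kd * (0.5 - x3 * math.cos(psi))
    v2 = Kp * (0.5 * t - x2) + Kd * (0.5 - x3 * math.sin(psi))
    # [u3; u2] = J^{-1} [v1; v2], with J^{-1} = [[cos, sin], [-sin/x3, cos/x3]]
    u3 = math.cos(psi) * v1 + math.sin(psi) * v2
    u2 = (-math.sin(psi) * v1 + math.cos(psi) * v2) / x3
    x1 += dt * x3 * math.cos(psi)
    x2 += dt * x3 * math.sin(psi)
    x3 += dt * u3
    psi += dt * u2
    t += dt
```

At t = 10 s the position is close to (5, 5) and the speed x3 has settled near the reference speed $\sqrt{0.5}$; the dynamic extension is what makes the inversion possible at every step.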
o In general terms
$$\dot{x} = f(x) + g(x)u, \qquad y = h(x)$$

nth-order system, r = relative degree < n.

a) Differentiate:

$$\dot{y} = \frac{\partial h}{\partial x}\,\dot{x} = \frac{\partial h}{\partial x}\left(f(x) + g(x)u\right) = L_f h + L_g h\,u, \qquad \text{and } L_g h = 0 \text{ if } r > 1$$
$$\Rightarrow \dot{y} = L_f h$$

$$\ddot{y} = \frac{\partial}{\partial t}\left[L_f h\right] = \frac{\partial (L_f h)}{\partial x}\,\dot{x} = \frac{\partial (L_f h)}{\partial x}\left[f(x) + g(x)u\right] = L_f^2 h + L_g L_f h\,u$$

$$\Rightarrow \ddot{y} = L_f^2 h \quad (\text{since } L_g L_f h = 0 \text{ when } r > 2)$$

Continuing,

$$y^{(r-1)} = L_f^{r-1} h$$

$$y^{(r)} = \frac{\partial \left(L_f^{r-1} h\right)}{\partial x}\left[f(x) + g(x)u\right] = L_f^r h + L_g L_f^{r-1} h\,u, \qquad \text{where } L_g L_f^{r-1} h \neq 0$$
b) Choose u in terms of v. Let:

$$u = \frac{1}{L_g L_f^{r-1} h}\left(-L_f^r h + v\right)$$
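The procedure above can be mechanized symbolically. A sketch with SymPy (assuming it is available), applied to the earlier example $\dot{x}_1 = x_2 + x_1^3 + u$, $\dot{x}_2 = -u$, $y = x_1$:

```python
import sympy as sp

x1, x2, v = sp.symbols('x1 x2 v')
x = sp.Matrix([x1, x2])

# The example system from these notes: x1' = x2 + x1^3 + u, x2' = -u, y = x1
f = sp.Matrix([x2 + x1**3, 0])
g = sp.Matrix([1, -1])
h = x1

def lie(scalar, vec):
    # Lie derivative of a scalar field along a vector field
    return (sp.Matrix([scalar]).jacobian(x) * vec)[0]

# a) differentiate y until u appears: r, alpha = L_f^r h, beta = L_g(L_f^{r-1} h)
Lk, k = h, 0                     # Lk holds L_f^k h
while sp.simplify(lie(Lk, g)) == 0:
    Lk = lie(Lk, f)
    k += 1
r = k + 1
beta = sp.simplify(lie(Lk, g))   # L_g(L_f^{r-1} h)
alpha = sp.simplify(lie(Lk, f))  # L_f^r h

# b) linearizing control u = (1/beta) * (-alpha + v)
u_lin = sp.expand((-alpha + v) / beta)
```

For this system the loop exits immediately: r = 1, β = 1, α = x₂ + x₁³, and `u_lin` reproduces the control u = −x₂ − x₁³ + v chosen in the example.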
x& = f ( x) + g ( x)u
y = h(x)
We have an nth order system where y is the natural output, with relative degree r.
The differential equations governing the new states have some convenient properties.
Example:
The points in 2-space are usually expressed in the natural basis $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$.

So when we write "x", we mean the point in 2-space reached from the origin by forming:
$$x = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

Changing basis,

$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} x = \begin{bmatrix} t_1 & t_2 \end{bmatrix} x'$$

where t1 and t2 are the eigenvectors of A and x' represents the coordinates in the new basis.
We seek some nonlinear transformation so that the new state, x', is governed by differential equations such that the first r − 1 states are a string of integrators (derivatives of each other), the differential equation for the r-th state has the form

$$\dot{z}_r = \alpha(x) + \beta(x)u$$

and the n − r internal dynamics states are decoupled from u (this is a matter of convenience).
$$x' = T(x) = \begin{bmatrix} T_1(x) \\ T_2(x) \\ \vdots \\ T_n(x) \end{bmatrix}$$

$$\dot{x}' = \begin{bmatrix} \dot{z}_1 \\ \vdots \\ \dot{z}_{r-1} \\ \dot{z}_r \\ \dot{\xi}_1 \\ \vdots \\ \dot{\xi}_{n-r} \end{bmatrix} = \begin{bmatrix} z_2 \\ \vdots \\ z_r \\ \alpha(x) + \beta(x)u \\ \Phi_1(z, \xi) \\ \vdots \\ \Phi_{n-r}(z, \xi) \end{bmatrix}$$
We know how to choose $T_1(x)$ through $T_r(x)$: they are just $y, \dot{y}, \ddot{y}, \dots$, etc.

[Sketch: a one-to-one ("OK") transformation versus a non-invertible ("Not OK") one.]

The transformation must be a proper diffeomorphism:

o T(x) is continuous
o T⁻¹(x') is continuous
o Also $\dfrac{\partial T}{\partial x}$ and $\dfrac{\partial T^{-1}}{\partial x'}$ must exist and be continuous.
$$x' = \begin{bmatrix} z \\ \xi \end{bmatrix} = \begin{bmatrix} T_1(x) \\ T_2(x) \end{bmatrix}$$

For the example system $\dot{x}_1 = a\sin x_2$, $\dot{x}_2 = -x_1^2 + u$, choose $T_2(x_1, x_2)$ to satisfy the above conditions. Let's start with condition 2: u does not appear in the equation for $\dot{\xi}$.

$$\dot{\xi} = \frac{\partial T_2(x_1, x_2)}{\partial x_1}\dot{x}_1 + \frac{\partial T_2(x_1, x_2)}{\partial x_2}\dot{x}_2 = \begin{bmatrix} \dfrac{\partial T_2}{\partial x_1} & \dfrac{\partial T_2}{\partial x_2} \end{bmatrix}\left(\begin{bmatrix} a\sin x_2 \\ -x_1^2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u\right)$$
We are only concerned with the second term. To eliminate the dependence on u, we must have:

$$\frac{\partial T_2(x_1, x_2)}{\partial x_2} = 0 \quad\Rightarrow\quad T_2(x_1, x_2) = T_2(x_1)$$

One choice that satisfies this:

$$x' = \begin{bmatrix} z \\ \xi \end{bmatrix} = \begin{bmatrix} T_1(x) \\ T_2(x) \end{bmatrix} = \begin{bmatrix} x_2 \\ x_1 \end{bmatrix}$$
What about $T_2(x_1) = x_1^2$? NO: it violates the one-to-one (invertibility) requirement in the conditions for a proper diffeomorphism.