Nonlinear System Analysis and Control
Lecture notes by
Xiaoming Hu
2021
8. Semiglobal stabilization 63
9. Stabilization using small inputs 64
Chapter 9. Tracking and Regulation 67
1. Steady state response of a nonlinear system 67
2. Output regulation 69
3. Trajectory tracking for nonholonomic systems 70
Chapter 10. Switched Systems 75
1. Common quadratic Lyapunov function 75
2. Controllability of switched linear systems 77
Appendix A. Some Geometric Concepts 79
CHAPTER 1
ẋ = f (x, u)
y = h(x, u)
(1.1.1) Ṙ = RS(ω)
where
(1.1.2)  S(\omega) = \begin{pmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{pmatrix}.
Thus,
(1.1.3)  \begin{pmatrix} \dot\phi \\ \dot\theta \\ \dot\psi \end{pmatrix} = \begin{pmatrix} 1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi\sec\theta & \cos\phi\sec\theta \end{pmatrix} \begin{pmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \end{pmatrix}
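The skew-symmetric structure of (1.1.2) can be checked numerically: since S(ω) + S(ω)ᵀ = 0, the flow of Ṙ = RS(ω) preserves the orthogonality of R. A minimal numpy sketch (the angular velocity value and step size are arbitrary):

```python
import numpy as np

def S(w):
    # Skew-symmetric matrix of (1.1.2); S(w) @ v equals the cross product w x v.
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def integrate_R(w, T=2.0, steps=20000):
    # Forward-Euler integration of Rdot = R S(w) starting from R(0) = I.
    R = np.eye(3)
    dt = T / steps
    for _ in range(steps):
        R = R + dt * R @ S(w)
    return R

w = np.array([0.3, -0.2, 0.5])
R = integrate_R(w)
# Since S is skew-symmetric, d/dt (R^T R) = R^T (S + S^T) R = 0,
# so R stays (numerically) orthogonal.
print(np.linalg.norm(R.T @ R - np.eye(3)))
```

The drift from the identity is only the O(dt) discretization error of the Euler scheme.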
where x and y are Cartesian coordinates of the middle point on the rear
axle, θ is orientation angle, v is the longitudinal velocity measured at that
point, l is the distance of the two axles, and φ is the steering angle.
In this example we briefly review the nonholonomic constraints on such
a system. The nonholonomic constraints are basically due to two facts
about a car: it cannot move in the direction orthogonal to the front wheels
or in the direction orthogonal to the rear wheels, and its range of steering
is limited. (Naturally, if dynamical factors such as friction forces acting on
the tires are considered, then the situation is much more complicated.)
Figure 1.1. The geometry of a car, with position (x, y),
orientation θ and steering angle φ.
That the velocity orthogonal to the rear wheels should be zero implies
(1.1.5) ẋ sin(θ) − ẏ cos(θ) = 0.
That the velocity orthogonal to the front wheels should be zero implies
(1.1.6)  \frac{d}{dt}\big(x + l\cos(\theta)\big)\sin(\theta + \phi) - \frac{d}{dt}\big(y + l\sin(\theta)\big)\cos(\theta + \phi) = 0.
One can easily verify that the state equations defined by
ẋ = v cos(θ)
(1.1.7) ẏ = v sin(θ)
θ̇ = (v/l) tan(φ)
satisfy the nonholonomic constraints.
An alternative way to derive the third equation is to use the facts:
l θ̇ = v_f sin φ,  v_f cos φ = v.
Thus we have the equation.
One can simplify the model a bit by defining the right-hand side of the
third equation as a new control ω:
θ̇ = ω.
In this way we obtain the so-called Unicycle Model.
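The claim that (1.1.7) satisfies the rear-axle constraint (1.1.5) can be checked along a simulated trajectory. A small sketch (the axle distance l and the inputs are arbitrary values):

```python
import numpy as np

def car_rhs(state, v, phi, l=2.5):
    # Kinematic car model (1.1.7); l = 2.5 is an assumed axle distance.
    x, y, theta = state
    return np.array([v * np.cos(theta),
                     v * np.sin(theta),
                     (v / l) * np.tan(phi)])

state = np.array([0.0, 0.0, 0.0])
dt, v, phi = 0.01, 1.0, 0.1
for _ in range(500):
    xdot, ydot, _ = car_rhs(state, v, phi)
    # Constraint (1.1.5): xdot*sin(theta) - ydot*cos(theta) = 0 at every point.
    assert abs(xdot * np.sin(state[2]) - ydot * np.cos(state[2])) < 1e-12
    state = state + dt * car_rhs(state, v, phi)
print(state)
```

The constraint holds identically, not just numerically: v cos θ sin θ − v sin θ cos θ = 0 for any v and θ.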
x0 is said to be an equilibrium if
f (x0 , t) = 0 ∀t ≥ 0.
Ω : |t − t₀| ≤ a, |x − x₀| ≤ b.
Let
M = \max_{(t,x)\in\Omega} |f(x, t)|
and
α = \min\big(a, \tfrac{b}{M}\big).
Then,
Uniqueness of solution
Example 1.4.
ẋ = x^{1/3}
x(0) = 0
For any 0 ≤ c ≤ 1,
\phi_c(t) = \begin{cases} 0, & 0 \le t \le c \\ \left(\frac{2(t-c)}{3}\right)^{3/2}, & c < t \le 1 \end{cases}
is a solution.
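The non-uniqueness can be verified symbolically: on t > c the nonzero branch of φ_c must satisfy ẋ = x^{1/3}. A sympy sketch, substituting u = t − c > 0:

```python
import sympy as sp

# On t > c write u = t - c > 0, so phi_c(u) = (2u/3)**(3/2).
u = sp.symbols('u', positive=True)
phi = (sp.Rational(2, 3) * u) ** sp.Rational(3, 2)
# phi must satisfy d(phi)/du = phi**(1/3); the residual should vanish.
residual = sp.simplify(sp.diff(phi, u) - phi ** sp.Rational(1, 3))
print(residual)
```

Since this works for every c in [0, 1], the initial value problem has a continuum of solutions; x^{1/3} is not Lipschitz at 0, so the uniqueness theorem does not apply.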
\lim_{t \to \delta_{max}} |x(x_0, t, t_0)| = \infty
then,
y(t) \le c(t) + \beta(t) \int_a^t \alpha(s) c(s) \exp\Big(\int_s^t \alpha(r)\beta(r)\,dr\Big) ds, \quad a \le t \le b.
Figure 1.1. Phase portraits of a node, a focus, a saddle and a center.
CHAPTER 2
Periodic Solutions
1. Periodic solutions
Consider
(2.1.1) ẋ = f (x),
where x ∈ Rⁿ and f is Lipschitz. A solution x(x₀, t) is called periodic with
period T if for all t ≥ 0
x(x₀, t + T ) = x(x₀, t).
A periodic solution is represented by a closed phase curve (maybe just
an equilibrium), which is also called a cycle. A periodic solution is called
isolated if there exists a neighborhood of it that does not contain any other
periodic solution.
An isolated periodic solution is called a limit cycle.
Remark: A linear system does not have any limit cycle.
Example 2.1. The index of a node, saddle, and focus (or center) is
respectively +1, −1, +1.
ẋ = f (x).
Then, I_f (C) = 1.
This fact can be seen from the fact that the vector field is always tangent
to C. It is very useful for the understanding of limit cycles.
ẋ1 = f1 (x1 , x2 )
(2.3.1)
ẋ2 = f2 (x1 , x2 )
Example 2.2.
ẋ₁ = x₂
ẋ₂ = −x₁ + (1 − x₁² − x₂²)x₂
Denote S = {(x₁, x₂) : x₁² + x₂² ≤ 1}. Then, if initialized in S, γ⁺ ⊂ S.
And x₁² + x₂² = 1 is a limit cycle.
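A quick numerical experiment (a hand-rolled RK4 integrator; step size and horizon are arbitrary) illustrates that trajectories starting inside S approach the circle x₁² + x₂² = 1:

```python
import numpy as np

def rhs(p):
    # Example 2.2: x1dot = x2, x2dot = -x1 + (1 - x1^2 - x2^2) x2
    x1, x2 = p
    return np.array([x2, -x1 + (1 - x1**2 - x2**2) * x2])

def rk4_step(p, dt):
    k1 = rhs(p)
    k2 = rhs(p + dt / 2 * k1)
    k3 = rhs(p + dt / 2 * k2)
    k4 = rhs(p + dt * k3)
    return p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

p = np.array([0.1, 0.0])      # start well inside S
for _ in range(6000):         # integrate to t = 60
    p = rk4_step(p, 0.01)
print(np.hypot(p[0], p[1]))   # radius approaches 1
```

The radial dynamics explain why: with r² = x₁² + x₂², one gets d(r²)/dt = 2(1 − r²)x₂², which vanishes on the unit circle and is nonnegative inside it.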
Lyapunov Stability
f (x0 , t) = 0, t ≥ 0.
If the time t does not appear explicitly on the right-hand side of (3.1.1),
we have
(3.1.3) ẋ = f (x), x ∈ Rⁿ,
which is called an autonomous system. Autonomous systems are the most
important ones in engineering applications. Because of autonomy we have
x(x₀, t + c, t₀ + c) = x(x₀, t, t₀),
which implies that the solution does not depend on a particular initial time.
Thus we can rewrite the solution as x(x₀, t − t₀) or simply assume t₀ = 0.
First, we consider the Lyapunov stability.
Definition 3.1. Consider system (3.1.1) and assume x = 0 is an equi-
librium.
(1) x = 0 is stable, if for any ε > 0 and any t₀ ≥ 0, there exists a
δ(ε, t₀) > 0, such that
(3.1.4) ‖x₀‖ < δ(ε, t₀) ⇒ ‖x(x₀, t, t₀)‖ < ε, ∀t ≥ t₀ ≥ 0.
(2) x = 0 is uniformly stable, if it is stable, and in (3.1.4) the δ is
independent of t₀, i.e.,
δ(ε, t₀) = δ(ε).
(3) x = 0 is unstable, if it is not stable.
Next, we consider the convergence of the solutions.
Definition 3.2. Consider system (3.1.1) and assume x = 0 is an equi-
librium.
(1) x = 0 is attractive, if for each t₀ ≥ 0 there exists an η(t₀) > 0 such
that
(3.1.5) ‖x₀‖ < η(t₀) ⇒ \lim_{t\to+\infty} x(x₀, t + t₀, t₀) = 0.
(3.1.8)  ẋ = −\frac{x}{1 + t}.
The solution is
(3.1.9)  x = \frac{1 + t_0}{1 + t_0 + t} x_0.
From (3.1.9) it is clear that the system is asymptotically stable. But
it is not uniformly asymptotically stable because it is not uniformly
attractive.
If the solution of a system satisfies an exponential relation as (3.1.7),
the system is said to be exponentially stable. We give a rigorous definition.
Definition 3.4. System (3.1.1) is said to be exponentially stable at x₀ =
0, if there exist positive numbers a > 0 and b > 0 and a neighborhood N₀ of
the origin, such that
(3.1.10) ‖x(x₀, t, t₀)‖ ≤ a‖x₀‖e^{−b(t−t₀)}, t ≥ t₀ ≥ 0, x₀ ∈ N₀.
V̇ = −xT Qx < 0.
Obviously, A(t) has both eigenvalues −1 for every t. But the solution to
the system is x(t) = Ψ(t)Ψ⁻¹(t₀)x(t₀), where
\Psi(t) = \begin{pmatrix} e^{t}(\cos t + \frac12\sin t) & e^{-3t}(\cos t - \frac12\sin t) \\ e^{t}(\sin t - \frac12\cos t) & e^{-3t}(\sin t + \frac12\cos t) \end{pmatrix}.
ẋ = A(t)x
is exponentially stable.
Example 4.1.
Suppose x = 0 is the equilibrium of interest, for V (t, x), we define the total
derivative, V̇ as:
V̇(t, x) = \frac{\partial V}{\partial t}(t, x) + \frac{\partial V}{\partial x} f(x, t).
V̇ ≤ 0 ∀t ≥ 0 in a neighborhood of 0.
Example 4.2.
ẋ₁ = a(t) x₂^{2n+1}
ẋ₂ = −a(t) x₁^{2n+1}
Let V = ½(x₁^{2n+2} + x₂^{2n+2}), giving V̇ = 0, and the system is thus
uniformly stable.
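The computation V̇ = 0 can be confirmed with sympy: a(t) enters both equations with opposite signs, so it cancels. Here a and n are kept symbolic:

```python
import sympy as sp

x1, x2, a = sp.symbols('x1 x2 a')
n = sp.symbols('n', positive=True, integer=True)
# Example 4.2 vector field (a stands for a(t); it cancels in Vdot).
f1 = a * x2 ** (2 * n + 1)
f2 = -a * x1 ** (2 * n + 1)
V = sp.Rational(1, 2) * (x1 ** (2 * n + 2) + x2 ** (2 * n + 2))
# Total derivative of V along the flow.
Vdot = sp.expand(sp.diff(V, x1) * f1 + sp.diff(V, x2) * f2)
print(Vdot)
```

Since V̇ vanishes identically and V is positive definite (and independent of t), uniform stability follows.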
ω̇₁ = \frac{J_2 - J_3}{J_1} ω₂ω₃ + u₁
ω̇₂ = \frac{J_3 - J_1}{J_2} ω₃ω₁ + u₂
ω̇₃ = \frac{J_1 - J_2}{J_3} ω₁ω₂ + u₃
but |x| < 1 is not in D(0). The next theorem deals with exponential
stability.
Theorem 4.4. Suppose there exists a lpdf V (t, x) which is bounded by
Proof:
Since −b‖x‖² ≤ −V (t, x), we have
V̇(t, x) ≤ −c‖x‖² ≤ −\frac{c}{b} V(t, x),
and
V(t, x(t)) ≤ V(t₀, x₀) e^{−\frac{c}{b}(t−t₀)}.
Now, we have
‖x(t)‖² ≤ \frac{1}{a} V(t, x(t)) ≤ \frac{1}{a} V(t₀, x₀) e^{−\frac{c}{b}(t−t₀)} ≤ \frac{b}{a} ‖x₀‖² e^{−\frac{c}{b}(t−t₀)},
so
‖x(t)‖ ≤ \sqrt{\frac{b}{a}}\, ‖x₀‖ e^{−\frac{c}{2b}(t−t₀)}, ∀t ≥ t₀.
V̇ = −x₂ f (x₂).
where λ_min denotes the smallest eigenvalue of the matrix. Then there exist a
positive definite matrix P ∈ R^{n×n}, matrices Q ∈ R^{m×n} and W ∈ R^{m×m},
and ε > 0 such that
AᵀP + P A = −εP − QᵀQ
BᵀP + WᵀQ = C
WᵀW = D + Dᵀ.
V̇ = −x₁⁴ − x₂⁴.
Example 4.7. As an illustration of the necessity of radially unbounded-
ness, consider
ẋ₁ = −x₁³ + x₂²x₁³
ẋ₂ = −x₂
Let V = \frac{x_1^2}{1 + x_1^2} + 2x₂², giving
V̇ ≤ −\frac{2x_1^4}{(1 + x_1^2)^2} − 2x₂².
However, the system is not globally asymptotically stable. (Solve for x₂ and
insert it into the equation for ẋ₁.)
V̇ (x) ≤ −γ(‖x‖), ∀x ∈ B_r
Theorem 4.11. Suppose x = 0 is exponentially stable, then there exists
a C 1 function V(x) and positive constants a, b and c > 0 such that
therefore
|xᵀf(x)| ≤ L‖x‖²,
or
−L‖x‖² ≤ xᵀf(x) ≤ L‖x‖².
So
\frac{d}{dt}‖x‖² = 2xᵀf(x) ≥ −2L‖x‖²
⟹
‖x(t)‖² ≥ ‖x₀‖² e^{−2Lt}.
So
V(x₀) ≥ \frac{1}{2L} ‖x₀‖².
Now
V̇(x) = \frac{dV(x(x_0,t))}{dt} = \frac{d}{dt}\int_0^\infty \|x(x(x_0,t),\varsigma)\|^2 d\varsigma = \frac{d}{dt}\int_0^\infty \|x(x_0,t+\varsigma)\|^2 d\varsigma.
Put r = ς + t:
\frac{d}{dt}\int_t^\infty \|x(x_0,r)\|^2 dr = -\|x(x_0,t)\|^2 = -\|x\|^2.
3.2. Converse theorem to global asymptotically stability.
Theorem 4.12. Suppose x = 0 is globally asymptotically stable; then
there exists a C¹ function V(x) and α, β and γ of class K∞ such that
α(‖x‖) ≤ V(x) ≤ β(‖x‖), ∀x ∈ Rⁿ
V̇(x) ≤ −γ(‖x‖), ∀x ∈ Rⁿ.
Remark 4.1. In general we cannot show the global version of Theorem
4.11, unless one assumes that f(x) satisfies a linear growth condition, i.e.,
‖\frac{\partial f(x)}{\partial x}‖ < K, ∀x ∈ Rⁿ.
Proof:
if: By the principle of stability in the first approximation.
only if: Suppose x = 0 is exponentially stable; then by Theorem 4.11,
∃V(x) such that
α‖x‖² ≤ V(x) ≤ β‖x‖², ∀x ∈ B_r
V̇(x) ≤ −γ‖x‖², ∀x ∈ B_r
‖\frac{\partial V}{\partial x}‖ ≤ μ‖x‖, ∀x ∈ B_r.
Rewrite the equation as
ẋ = Ax + F(x),
where
F(x) = f(x) − \frac{\partial f}{\partial x}\Big|_{x=0} x = f(x) − Ax.
Note that
F(x) = O(‖x‖²);
then
V̇ = \frac{\partial V(x)}{\partial x}(Ax + F(x)) = \frac{\partial V(x)}{\partial x} Ax + O(‖x‖³).
Let
V(x) = P₁x + xᵀP₂x + O(‖x‖³).
V(x) ≥ 0 ⟹ P₁ = 0, P₂ > 0
⟹
\frac{\partial V(x)}{\partial x} Ax = (2xᵀP₂ + O(‖x‖²))Ax = xᵀ(P₂A + AᵀP₂)x + O(‖x‖³).
V̇ ≤ −γ‖x‖² ⟹ P₂A + AᵀP₂ is negative definite ⟹ A is stable.
CHAPTER 5
Suppose φ(t) is a periodic solution of the system. Then the invariant set
γ = {φ(t)|0 ≤ t < T } is called an orbit.
Φ(t, 0) = K(t)e^{Bt},
where K(t) = K(t + T), K(0) = I, and B = \frac{1}{T} \ln Φ(T, 0).
ż = Az + f (z, y), z ∈ Rᵖ
(5.2.2)
ẏ = By + g(z, y), y ∈ Rᵐ
Theorem 5.2. There exists an invariant set defined by y = h(z), ‖z‖ < δ,
with h ∈ C², h(0) = 0 and h′(0) = 0. This invariant set is called a center manifold.
On the center manifold the dynamics of the system is governed by
(5.2.3) ẇ = Aw + f (w, h(w))
The center manifold is the same as in the previous example and the flow on the
center manifold is governed by
3. Singular perturbation
Consider
ẋ = f (x, z, ε, t), x ∈ Rⁿ,
(5.3.1) εż = g(x, z, ε, t), z ∈ Rⁿ,
If (5.3.1) is in standard form, then for each root, φi , we have the following
reduced model:
(5.3.3) x̄˙ = f (x̄, φ(x̄, t), 0, t).
Since we only study one of the reduced models at a time, we drop the
subscript i for simplicity.
Example 5.3. Consider a system with high gain amplifier
(Block diagram: a high-gain amplifier K in a negative-feedback loop with two integrators producing z and x.)
- - K -
where N = tan(·). Or
ẋ = z
ż = −kx − z − k tan(z) + ku
Set ε = \frac{1}{k} ⇒
ẋ = z
εż = −x − εz − tan(z) + u
set ε = 0 ⇒
z̄ = tan−1 (ū − x̄)
x̄˙ = tan−1 (ū − x̄)
\frac{d\hat z}{d\tau} = g(x_0, \hat z(\tau) + \bar z(t_0), 0, t_0)
with ẑ(0) = z0 −z̄(t0 ), and x0 , t0 fixed parameters. ẑ(τ ) is used as a boundary
layer correction for a possible uniform approximation of z(t):
(5.3.6)  z(t) = \bar z(t) + \hat z\Big(\frac{t - t_0}{\varepsilon}\Big) + O(\varepsilon).
Clearly, z̄(t) is the slow transient of z(t), and ẑ(τ ) the fast transient.
Theorem 5.5. (Tikhonov)
Suppose
i. the equilibrium ẑ = 0 of the boundary layer system is asymptotically
stable uniformly in x₀ and t₀, and z₀ − z̄(t₀) belongs to its domain
of attraction,
ii.
Re λ\Big(\frac{\partial g}{\partial z}(\bar x(t), \bar z(t), 0, t)\Big) ≤ −c < 0, ∀t ∈ [t₀, T],
then (5.3.5) and (5.3.6) are valid for all t ∈ [t0 , T ], while (5.3.4) holds for
all t ∈ [t1 , T ], whereas the “thickness of the boundary layer” t1 − t0 can be
made arbitrarily small by choosing sufficiently small ε.
CHAPTER 6
[f, k] ∈ Δ(x).
[b, Ax] = Ab
Let us compute the strong accessibility distribution Rc (x) and check the ac-
cessibility of the system. In this case,
f(x) := \begin{pmatrix} \alpha x_2 x_3 \\ \beta x_3 x_1 \\ \gamma x_1 x_2 \end{pmatrix}, \quad g_1(x) := \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad g_2(x) := \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.
where α := (a2 − a3 )/a1 , β = (a3 − a1 )/a2 and γ = (a1 − a2 )/a3 .
Step 1.: R0 (x) = span {g1 (x), g2 (x)} = span {e2 , e3 }.
Step 2.: Lie brackets are computed as follows:
[f, g_1] = \frac{\partial e_2}{\partial x} f(x) - \frac{\partial f}{\partial x} e_2 = -\begin{pmatrix} \alpha x_3 \\ 0 \\ \gamma x_1 \end{pmatrix} =: g_3(x)
[f, g_2] = \frac{\partial e_3}{\partial x} f(x) - \frac{\partial f}{\partial x} e_3 = -\begin{pmatrix} \alpha x_2 \\ \beta x_1 \\ 0 \end{pmatrix} =: g_4(x)
[g_1, g_2] = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
Thus,
R1 (x) = span {gi (x), i = 1, . . . , 4} .
Step 3.: If α = 0 (i.e. a₂ = a₃), then R₁(x) = R₀(x). So, R_c(x) =
R₀(x) = span {e₂, e₃}. If α ≠ 0, then R₁(x) ≠ R₀(x) and dim R₁(x) =
2 < 3 for x₂ = x₃ = 0. Hence, go back to Step 2.
Step 2-2.:
R2 (x) = R1 (x) + span {[f, gi ] , [gi , gj ] , i, j = 1, 2, 3, 4}
Since
[g_1, g_4] = \frac{\partial g_4}{\partial x} e_2 - \frac{\partial e_2}{\partial x} g_4(x) = \begin{pmatrix} -\alpha \\ 0 \\ 0 \end{pmatrix}, \quad (\alpha \ne 0),
R2 (x) = R3 (whole space).
Step 3-2: Since dim R2 (x) = 3 for any x, Rc (x) = R3 .
Therefore, if a₂ ≠ a₃, then the system is locally strongly accessible from
any point in R³.
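The bracket computations above can be reproduced with sympy, using the coordinate formula [f, g] = (∂g/∂x)f − (∂f/∂x)g:

```python
import sympy as sp

x1, x2, x3, al, be, ga = sp.symbols('x1 x2 x3 alpha beta gamma')
x = sp.Matrix([x1, x2, x3])
f = sp.Matrix([al * x2 * x3, be * x3 * x1, ga * x1 * x2])
g1 = sp.Matrix([0, 1, 0])   # e2
g2 = sp.Matrix([0, 0, 1])   # e3

def lie_bracket(f, g, x):
    # [f, g] = (dg/dx) f - (df/dx) g in local coordinates
    return g.jacobian(x) * f - f.jacobian(x) * g

g3 = lie_bracket(f, g1, x)   # -(alpha x3, 0, gamma x1)^T
g4 = lie_bracket(f, g2, x)   # -(alpha x2, beta x1, 0)^T
print(g3.T, g4.T, lie_bracket(g1, g4, x).T)
```

The second-level bracket [g₁, g₄] = (−α, 0, 0)ᵀ fills the missing direction whenever α ≠ 0, which is exactly the accessibility condition a₂ ≠ a₃.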
(Figure: perspective projection through the focal point; the optical axis, focal length f, world coordinates (x₁, x₂, x₃) and image coordinates (y₁, y₂).)
where ω is the angular velocity measured in the camera frame, and v is
the translational velocity. If we consider ω as given and v to be regulated,
we then have a control system.
Now let us consider the modeling of a motion sensor, which detects the
gradient of lateral motion (infra-red radiation). For a given sensor i located
where f (x0 ) = 0.
Definition 7.1. The system (7.1.1) is said to be equivalent to a linear
system (locally around an equilibrium point x0 ), iff there exists a (local)
coordinate chart (U, z) (z(x0 ) = 0) such that the system (7.1.1) is expressed
as
(7.1.2)  ż = Az + \sum_{i=1}^{m} b_i u_i := Az + Bu, \quad z ∈ U, \; u ∈ R^m,
For notational ease, we still use f and gᵢ for their new forms under the
coordinates z, i.e., F_*^{-1}(f) and F_*^{-1}(gᵢ). Moreover, set n̄ᵢ = n₁ + n₂ + · · · + nᵢ,
and denote
δᵢ = (0, · · · , 0, 1, 0, · · · , 0)ᵀ ∈ Rⁿ,
with the 1 in the i-th position. Then
gᵢ = δ_{n̄_{i-1}+1},
(7.1.5)
ad_f^k gᵢ = δ_{n̄_{i-1}+k+1}, i = 1, · · · , m, k = 1, · · · , nᵢ − 1.
Denote
f = (f_1^1, · · · , f_{n_1}^1, · · · , f_1^m, · · · , f_{n_m}^m)ᵀ.
Using (7.1.5), a straightforward computation shows that
\frac{\partial f_t^s}{\partial z_j^i} = \begin{cases} 1, & s = i \text{ and } t = j + 1, \\ 0, & \text{otherwise}; \end{cases} \quad s, i = 1, \cdots, m; \; t = 1, \cdots, n_i; \; j = 1, \cdots, n_i - 1.
Proof. (Necessity) Let f be any vector field and h any smooth function.
Then for any diffeomorphism T the Lie derivative is invariant in the sense
that
L_{T_*(f)}((T^{-1})^*(h)) = (T^{-1})^*(L_f(h)).
Now the necessity of i), ii), and iii) is obvious because they are correct for
a linear system.
(Sufficiency) We need a formula, which itself is useful.
(7.1.11)  L_{ad_f^k g} h(x) = \sum_{i=0}^{k} (-1)^i \binom{k}{i} L_f^{k-i} L_g L_f^i h(x),
where
\binom{k}{i} = \frac{k!}{i!(k-i)!}.
It can be proved by induction. It is trivially true for k = 0. Assume it
is true for k = p; then
L_{ad_f^{p+1} g} h(x) = L_f L_{ad_f^p g} h(x) - L_{ad_f^p g} L_f h(x).
Using (7.1.11) on the two terms in the above and noticing the identity
\binom{p}{q} + \binom{p}{q+1} = \binom{p+1}{q+1},
Using (7.1.11) and the condition iii), the claim (7.1.12) is obvious.
Since (7.1.12) is equivalent to
\big\langle dL_f^k h(x), [ad_f^s g_i, ad_f^t g_j] \big\rangle = 0,
According to the Theorem 7.1, the system (7.1.10) can be locally expressed
as
ż = Az + Bu
y = h(z).
Now under the z coordinates the vector fields
{X₁, · · · , Xₙ} = \{g_1, · · · , ad_f^{n_1-1} g_1, · · · , g_m, · · · , ad_f^{n_m-1} g_m\}.
Then Xᵢ = δᵢ = \frac{\partial}{\partial z_i}, i = 1, · · · , n. It is easy to see from iii) that
We consider its local linear equivalence at the origin. It is easy to see that
f = \begin{pmatrix} x_1 \\ \frac{x_1}{\cos(x_2)} \end{pmatrix}, \quad g = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \text{and} \quad ad_f\, g = \begin{pmatrix} 1 \\ \frac{1}{\cos(x_2)} \end{pmatrix}.
L_g h = 0,  L_g L_f^k h = 1, k ≥ 1.
Hence
L_g L_f^s L_g L_f^t h = 0, s ≥ 0, t ≥ 0.
Condition iii) follows. That is, the system (7.1.13) is equivalent to a linear
system around the origin.
Next, we look for its linear form. Let
X_1 = g = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad X_2 = ad_f\, g = \begin{pmatrix} 1 \\ \frac{1}{\cos(x_2)} \end{pmatrix},
F(z_1, z_2) = e^{X_1 z_1} \circ e^{X_2 z_2}(0).
First, we solve e^{X_2 z_2}(0), that is, we solve
\frac{dx_1}{dz_2} = 1
\frac{dx_2}{dz_2} = \frac{1}{\cos(x_2)},
with F −1 as
z1 = x1 − sin(x2 )
z2 = sin(x2 ).
Finally, the system (7.1.13) can be expressed under coordinate z as
ż₁ = u
ż₂ = z₁ + z₂
y = z₂.
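The coordinate change can be verified symbolically. Assuming the system (7.1.13) is ẋ = f(x) + g(x)u with the f and g displayed above (so that ẋ₁ = x₁ + u and ẋ₂ = x₁/cos(x₂), an assumption consistent with the final linear form), differentiating z₁ = x₁ − sin(x₂) and z₂ = sin(x₂) should give ż₁ = u and ż₂ = z₁ + z₂:

```python
import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')
# Assumed dynamics xdot = f(x) + g(x) u with f, g as in the example.
f = sp.Matrix([x1, x1 / sp.cos(x2)])
g = sp.Matrix([1, 0])
xdot = f + g * u

z1 = x1 - sp.sin(x2)
z2 = sp.sin(x2)
# Chain rule: zdot = (dz/dx) xdot
z1dot = (sp.Matrix([z1]).jacobian([x1, x2]) * xdot)[0]
z2dot = (sp.Matrix([z2]).jacobian([x1, x2]) * xdot)[0]
print(sp.simplify(z1dot), sp.simplify(z2dot - (z1 + z2)))
```

Both residuals vanish, confirming that in the z coordinates the system reads ż₁ = u, ż₂ = z₁ + z₂.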
Δᵢ = span \{ad_f^s g_j : j = 1, · · · , m; \; s ≤ i − 1\}, \quad i = 1, 2, · · · .
The following theorem has basic importance.
Theorem 7.2. The system (7.1.1) is linearizable about x0 , iff there is a
neighborhood U , such that
i) Δi (x), x ∈ U , i = 1, · · · , n are non-singular and involutive;
ii) dim(Δn )(x) = n, x ∈ U .
Proof. (Necessary) An obvious fact is that the distributions Δi , i =
1, · · · , n are feedback invariant. That is, let
f˜ = f + gα, g̃ = gβ,
then
Δᵢ = span \{ad_{\tilde f}^s \tilde g_j : j = 1, · · · , m; \; s ≤ i − 1\}.
So Δij is spanned by the vector fields in the first i − 1 blocks and the first j
vector fields in the i-th block of (7.2.3), that is,
\{\Delta_1, \cdots, \Delta_{k_s}\} = \{\Delta_1^1, \cdots, \Delta_{d_1}^1, \cdots, \Delta_1^s, \cdots, \Delta_{d_s}^s\}.
The perpendicular co-distribution of Δi is denoted by Ωks +1−i . Then we
have a sequence of co-distributions as
Ω 1 , · · · , Ω ks ,
where the first i-th element is the perpendicular co-distribution of the last
i-th element in the sequence of {Δt }. Corresponding to {Δij }, the co-
distribution sequence is re-indexed as
\{\Omega_1, \cdots, \Omega_{k_s}\} = \{\Omega_1^1, \cdots, \Omega_{d_s}^1, \Omega_1^2, \cdots, \Omega_{d_{s-1}}^2, \cdots, \Omega_1^s, \cdots, \Omega_{d_1}^s\}.
Then
\Omega_j^i = \big(\Delta_{d_{s+1-i}+1-j}^{s+1-i}\big)^{\perp}.
dL_f^k z_j, \quad j = 1, · · · , m_s; \; k = 1, · · · , k_s − 1.
Now assume
\sum_{i=1}^{k_s-1} \sum_{j=1}^{m_s} c_j^i \, dL_f^i z_j = 0.
Hence c_{k_s-1}^1 = · · · = c_{k_s-1}^{m_s} = 0. Then we consider
\Big\langle \sum_{i=1}^{k_s-2} \sum_{j=1}^{m_s} c_j^i \, dL_f^i z_j, \; ad_f g_1, · · · , ad_f g_m \Big\rangle = 0.
This procedure shows that all c_j^i = 0, that is, the dL_f^i z_j are linearly independent,
i = 1, · · · , k_s − 1, j = 1, · · · , m_s.
In fact, the above also shows that
(7.2.6) dL_f^i z_j ∈ \Delta_{k_s-1-i}^{\perp}, \quad j = 1, · · · , m_s.
Moreover, the first t columns of the co-vector fields in (7.2.7) span the co-
distributions Ωt , t = 1, · · · , km .
Denote r₁ = d₁ + · · · + d_s, r₂ = d₁ + · · · + d_{s−1}, · · · , r_s = d₁; then we can
get n linearly independent functions, grouped as
z_1^1 = \begin{pmatrix} z_1 \\ \vdots \\ z_{m_s} \end{pmatrix}, \quad z_2^1 = L_f z_1^1, \; \cdots, \; z_{r_1}^1 = L_f^{r_1-1} z_1^1
z_1^2 = \begin{pmatrix} z_{m_s+1} \\ \vdots \\ z_{m_{s-1}} \end{pmatrix}, \quad z_2^2 = L_f z_1^2, \; \cdots, \; z_{r_2}^2 = L_f^{r_2-1} z_1^2
\vdots
z_1^s = \begin{pmatrix} z_{m_2+1} \\ \vdots \\ z_{m_1} \end{pmatrix}, \quad z_2^s = L_f z_1^s, \; \cdots, \; z_{r_s}^s = L_f^{r_s-1} z_1^s
Using this set of functions as a local coordinate frame, we have
ż_1^i = z_2^i
\vdots
(7.2.8)
ż_{r_i-1}^i = z_{r_i}^i
ż_{r_i}^i = L_f^{r_i} z_1^i + L_g L_f^{r_i-1} z_1^i \, u, \quad i = 1, \cdots, s.
Set
D = \begin{pmatrix} L_g L_f^{r_1-1} z_1^1 \\ L_g L_f^{r_2-1} z_1^2 \\ \vdots \\ L_g L_f^{r_s-1} z_1^s \end{pmatrix}, \quad F = \begin{pmatrix} L_f^{r_1} z_1^1 \\ L_f^{r_2} z_1^2 \\ \vdots \\ L_f^{r_s} z_1^s \end{pmatrix}.
We claim that D is non-singular. Define an n × m matrix E as
E = \Big\langle \begin{pmatrix} dz_1^1 \\ \vdots \\ dL_f^{r_1-1} z_1^1 \\ \vdots \\ dz_1^s \\ \vdots \\ dL_f^{r_s-1} z_1^s \end{pmatrix}, \; g \Big\rangle.
Then rank(E) = m. This is because the first n forms are linearly indepen-
dent, and as a convention g1 , · · · , gm are linearly independent. Now the only
m non-zero rows of E form D, so rank(D) = m.
Using the controls as
u = −D −1 F + D −1 v,
the system (7.2.8) becomes
ż_1^i = z_2^i
\vdots
(7.2.9)
ż_{r_i-1}^i = z_{r_i}^i
ż_{r_i}^i = v_i, \quad i = 1, \cdots, s,
Feedback Stabilization
ẋ = f (x) + g(x)u
y = h(x)
where x ∈ Rn , u ∈ Rm and y ∈ Rm .
Relative degree:
The system is said to have relative degree {r1 , ..., rm } at x0 if
and
A(x_0) = \begin{bmatrix} L_{g_1}L_f^{r_1-1}h_1(x_0) & \cdots & L_{g_m}L_f^{r_1-1}h_1(x_0) \\ L_{g_1}L_f^{r_2-1}h_2(x_0) & \cdots & L_{g_m}L_f^{r_2-1}h_2(x_0) \\ \cdots & \cdots & \cdots \\ L_{g_1}L_f^{r_m-1}h_m(x_0) & \cdots & L_{g_m}L_f^{r_m-1}h_m(x_0) \end{bmatrix}
is nonsingular.
Lemma 8.1. Suppose the system has relative degree {r1 , ..., rm } at x0 ,
then the row vectors
dh1 (x0 ), dLf h1 (x0 ), · · · , dLfr1 −1 h1 (x0 ), · · · , dhm (x0 ), dLf hm (x0 ), · · · , dLfrm −1 hm (x0 )
Proposition 8.2. Suppose the system has relative degree {r1 , ..., rm }
at x₀, then in N(x₀) it can be transformed into the following form by a
coordinate change:
ż = f₀(z, ξ) + p(z, ξ)u
ξ̇_1^i = ξ_2^i
\vdots
ξ̇_{r_i-1}^i = ξ_{r_i}^i
ξ̇_{r_i}^i = b_i(z, ξ) + \sum_{j=1}^{m} a_{ij}(z, ξ)u_j
y_i = ξ_1^i, \quad i = 1, ..., m.
If in addition
span{g₁, · · · , g_m}
is involutive, then one can find z(x) such that p(z, ξ) = 0.
2. Feedback stabilizability
Consider a SISO system
(8.2.1) ẋ = f (x) + g(x)u
(8.2.2) y = h(x)
where x ∈ N(0) ⊂ Rⁿ, f ∈ C¹, g ∈ C¹, f(0) = 0, and h(0) = 0.
Remark 8.2. Although in this section our focus is on SISO systems, all
the results discussed in the rest of this section also apply to MIMO systems.
The system (8.2.1) is said to be locally stable if the origin (0) of the
system is asymptotically stable and the domain of attraction is not
necessarily the whole Rⁿ space.
Let
A = \frac{\partial f(0)}{\partial x}, \quad b = g(0).
Fact: if the pair (A, b) is stabilizable, then (8.2.1) is locally stabilizable.
The above result is sometimes called the Brockett theorem [?]. The
theoretical foundation for this was also given by a Russian mathematician
Krasnoselskii [?].
s^r + a_1 s^{r-1} + \cdots + a_r
becomes a Hurwitz polynomial (i.e., all the roots are in the open left half-plane).
f̃ = f + gα,  g̃ = gβ,
τᵢ = (−1)^{i-1} ad_{\tilde f}^{i-1} \tilde g(x), \quad 1 ≤ i ≤ r.
If τi s are complete, then the system can be globally transformed by coordinate
change into
ż = f0 (z, ξ1 , · · · , ξr )
ξ̇1 = ξ2
..
.
ξ˙r−1 = ξr
ξ˙r = b(z, ξ) + a(z, ξ)u
y = ξ1
such that
ż = f₀(z, v(z))
is globally asymptotically stable, then the system (8.5.1) is globally
asymptotically stabilizable by a feedback control.
6. Passivity approach
Consider the system (8.4.1).
Definition 8.1. The system is said to be passive if there exists a positive
semidefinite function V (x) (storage function), such that for all u ∈ U
yᵀu ≥ \frac{\partial V}{\partial x}(f + gu).
It is said to be lossless if the equality holds and strictly passive if
yᵀu ≥ \frac{\partial V}{\partial x}(f + gu) + S(x),
where S(x) is positive definite.
7. Artstein-Sontag’s theorem
We consider again (8.4.1).
Definition 8.2. A positive definite, radially unbounded and differen-
tiable function V(x) is called a control Lyapunov function if for all x ≠ 0,
L_g V(x) = 0 ⇒ L_f V(x) < 0.
for each ε > 0 there exists a δ(ε) > 0 such that for each x ∈ B_δ \ {0}, there
exists u_x ∈ B_ε such that
L_f V(x) + L_g V(x)u_x < 0.
One can use the condition in the theorem to show it is continuous at 0. With
this control, we have
V̇ = −\sqrt{(L_f V(x))^2 + (L_g V(x))^4} < 0
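The closed-loop decrease above is what Sontag's universal formula delivers; a scalar sketch on a hypothetical system ẋ = x² + u with CLF V = x²/2 (both chosen here only for illustration) checks the identity numerically:

```python
import numpy as np

# Hypothetical example: xdot = x**2 + u, V = x**2/2,
# so LfV = x**3 and LgV = x.
def sontag_u(x):
    # Sontag's universal formula: u = -(LfV + sqrt(LfV^2 + LgV^4)) / LgV
    LfV, LgV = x**3, x
    if LgV == 0.0:
        return 0.0
    return -(LfV + np.sqrt(LfV**2 + LgV**4)) / LgV

for x in [-2.0, -0.5, 0.3, 1.7]:
    LfV, LgV = x**3, x
    Vdot = LfV + LgV * sontag_u(x)
    # The closed loop satisfies Vdot = -sqrt(LfV^2 + LgV^4) < 0.
    assert abs(Vdot + np.sqrt(LfV**2 + LgV**4)) < 1e-9
    assert Vdot < 0
print("ok")
```

The identity follows by direct substitution, so V decreases strictly away from the origin.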
8. Semiglobal stabilization
We say a nonlinear system is semiglobally stabilizable if for any bounded
set of initial data S, there exists a feedback control such that the closed-loop
system is asymptotically stable and the domain of attraction contains S.
Theorem 8.11. If
ż = f0 (z, 0)
is globally exponentially stable, then the same high gain control
u = −(k a_{r-1} ξ_r + \cdots + k^r a_0 ξ_1)
also stabilizes the system semiglobally.
Remark 8.3. In this result, we do not need to cancel a(z, ξ1 ) in the
control.
Theorem 8.14. Let 0 < ε < \frac{1}{4}. Then there exists a feedback law of the
form
u = −\sum_{i=1}^{n} ε^i \, sat(h_i(x_1, · · · , x_n))
∀ x0 ∈ N (x∗ ), then
xss (t) = x(t, x∗ , u∗ (·))
is called the steady state response to u∗ (·).
2. Output regulation
Consider
(9.2.1) ẋ = f (x) + g(x)u + p(x)w
(9.2.2) ẇ = s(w)
(9.2.3) e = h(x, w)
where the first equation is the plant with f (0) = 0, the second equation
is an exosystem as we defined before and e is the tracking error. Here w
represents both the signals to be tracked and disturbances to be rejected.
Full information output regulation problem
Find, if possible, u = α(x, w), such that
1. x = 0 of
ẋ = f (x) + g(x)α(x, 0)
is exponentially stable;
2. the solution to
ẋ = f (x) + g(x)α(x, w) + p(x)w
ẇ = s(w)
satisfies
lim e(x(t), w(t)) = 0
t→∞
for all initial data in some neighborhood of the origin.
Let us first consider the linear case:
ẋ = Ax + Bu + P w
ẇ = Sw
e = Cx + Qw
Proposition 9.3. Suppose the pair (A, B) is stabilizable and no eigen-
value of S is in the open left half-plane; then the full information output
regulation problem is solvable if and only if there exist matrices Π and Γ
which solve the linear matrix equations
AΠ + BΓ + P = ΠS
CΠ + Q = 0.
The feedback control then is
u = K(x − Πw) + Γw
where A + BK is Hurwitz.
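In the linear case the regulator equations are linear in (Π, Γ) and can be solved by vectorization with Kronecker products. A numpy sketch on a hypothetical plant tracking a sinusoidal exosystem (all matrices below are made-up example data):

```python
import numpy as np

# Hypothetical data: second-order plant, harmonic-oscillator exosystem,
# tracking error e = x1 - w1.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
S = np.array([[0.0, 1.0], [-1.0, 0.0]])
P = np.zeros((2, 2))
Q = np.array([[-1.0, 0.0]])

n, q = A.shape[0], S.shape[0]
m = B.shape[1]
Iq = np.eye(q)
# Vectorize  A Pi + B Gamma - Pi S = -P  and  C Pi = -Q
# using vec(A Pi) = (I (x) A) vec(Pi), vec(Pi S) = (S^T (x) I) vec(Pi).
top = np.hstack([np.kron(Iq, A) - np.kron(S.T, np.eye(n)), np.kron(Iq, B)])
bot = np.hstack([np.kron(Iq, C), np.zeros((C.shape[0] * q, m * q))])
M = np.vstack([top, bot])
rhs = np.concatenate([-P.flatten(order='F'), -Q.flatten(order='F')])
sol = np.linalg.lstsq(M, rhs, rcond=None)[0]
Pi = sol[:n * q].reshape((n, q), order='F')
Gamma = sol[n * q:].reshape((m, q), order='F')
print(np.linalg.norm(A @ Pi + B @ Gamma + P - Pi @ S),
      np.linalg.norm(C @ Pi + Q))
```

Both residuals are at machine precision, so u = K(x − Πw) + Γw with any stabilizing K solves the regulation problem for this example.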
Now consider (9.2.1). Suppose:
H1: w = 0 is a stable equilibrium of the exosystem and
\frac{\partial s}{\partial w}\Big|_{w=0}
has all eigenvalues on the imaginary axis.
H2: The pair (f (x), g(x)) has a stabilizable linear approximation at x = 0.
Theorem 9.4. Suppose H1 and H2 are satisfied. The full information
output regulation problem is solvable if and only if there exist π(w), c(w)
with π(0) = 0, c(0) = 0, both defined in some neighborhood of the origin,
satisfying the equations
\frac{\partial\pi}{\partial w} s(w) = f(\pi(w)) + g(\pi(w))c(w) + p(\pi(w))w
h(\pi(w), w) = 0.
The feedback control can be designed as
α(x, w) = K(x − π(w)) + c(w)
where K stabilizes the linearization of ẋ = f (x) + g(x)u in (9.2.1).
where x and y are Cartesian coordinates of the middle point on the rear
axle, θ is orientation angle, v is the longitudinal velocity measured at that
point, l is the distance of the two axles, and φ is the steering angle.
That the velocity orthogonal to the front wheels should be zero implies
(9.3.2)  \frac{d}{dt}\big(x + l\cos(\theta)\big)\sin(\theta + \phi) - \frac{d}{dt}\big(y + l\sin(\theta)\big)\cos(\theta + \phi) = 0.
One can easily verify that the state equations defined by
ẋ = v cos(θ)
(9.3.3) ẏ = v sin(θ)
θ̇ = (v/l) tan φ
ẋ = v cos(θ)
(9.3.4) ẏ = v sin(θ)
θ̇ = ω
xd (t) = p(t)
(9.3.5)
yd (t) = q(t), 0 ≤ t ≤ T.
ẋ_d = v_d cos(θ_d)
(9.3.6)
ẏ_d = v_d sin(θ_d).
This gives
\omega_d(t) = \dot\theta_d(t) = \frac{\ddot q(t)\dot p(t) - \ddot p(t)\dot q(t)}{v_d(t)^2}.
In this way we obtain an open-loop control v_d(t), ω_d(t) for the trajectory
tracking. In fact this is the unique control that gives exact tracking.
However, the above controller is not robust at all with respect to distur-
bances and measurement errors. Thus we present a feedback controller that
does the tracking only approximately, but robustly.
The idea is that we choose an off-the-axis point as our new reference
point:
x_L = x + L cos θ
(9.3.7) y_L = y + L sin θ.
This leads to
\begin{pmatrix} \dot x_L \\ \dot y_L \end{pmatrix} = \underbrace{\begin{pmatrix} \cos\theta & -L\sin\theta \\ \sin\theta & L\cos\theta \end{pmatrix}}_{A} \begin{pmatrix} v \\ \omega \end{pmatrix}.
It is well known that in this way one can feedback linearize the dynamics,
since the matrix A is always nonsingular. Thus,
(9.3.8)  \begin{pmatrix} v \\ \omega \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\frac{1}{L}\sin\theta & \frac{1}{L}\cos\theta \end{pmatrix} \begin{pmatrix} \dot x_L \\ \dot y_L \end{pmatrix}.
where Δφ(t) is the relative angle to the reference point on the trajectory
measured by the vehicle, ρ(t) is the distance from (x, y) on the vehicle to
the reference point (xd (t), yd (t)).
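A simulation sketch of the off-axis-point scheme: the exact feedback law is not specified above, so a simple feedforward-plus-proportional command for (ẋ_L, ẏ_L) is assumed here and converted to (v, ω) through (9.3.8); the gains, L, and the circular reference are arbitrary choices:

```python
import numpy as np

L, k, dt = 0.2, 2.0, 0.001
state = np.array([0.5, -0.5, 0.0])          # (x, y, theta), off the reference
for i in range(10000):                       # t in [0, 10]
    t = i * dt
    p, q = np.cos(t), np.sin(t)              # reference (9.3.5): unit circle
    pd, qd = -np.sin(t), np.cos(t)           # reference velocity
    x, y, th = state
    xL, yL = x + L * np.cos(th), y + L * np.sin(th)   # off-axis point (9.3.7)
    # Assumed commanded velocity: feedforward + proportional correction.
    xLdot = pd + k * (p - xL)
    yLdot = qd + k * (q - yL)
    # (9.3.8): recover (v, w) from the commanded (xLdot, yLdot).
    v = np.cos(th) * xLdot + np.sin(th) * yLdot
    w = (-np.sin(th) * xLdot + np.cos(th) * yLdot) / L
    state = state + dt * np.array([v * np.cos(th), v * np.sin(th), w])
err = np.hypot(xL - p, yL - q)
print(err)
```

By construction the off-axis point obeys ẋ_L = ṗ + k(p − x_L), so its tracking error decays like e^{−kt}; the point (x, y) itself only tracks up to the offset L, which is the approximate-but-robust trade-off discussed above.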
Remark 9.1. It seems that the shorter L is, the better the tracking accu-
racy becomes. However, both simulations and experiments have shown that
Switched Systems
This section considers system (10.1.3) only. It is well known that even
if each switching mode of (10.1.3) is stable, under certain switching laws
the system (10.1.3) can be unstable, and vice versa. The following example
is from the literature.
Example 10.1. Consider the following system
(10.1.4) ẋ = Aσ(x,t) x, x ∈ R2 ,
The two switching modes are stable. But if the switching law is as follows:
in quadrants 1 and 4, σ(x, t) = 1 and in quadrants 2 and 3, σ(x, t) = 2,
then the system is unstable.
Taking −Ai , i = 1, 2 for switching modes, it is easy to see that the
same switching law makes the switched system stable, though each mode is
unstable.
One way to assure the stability under arbitrary switching laws is a com-
mon Lyapunov function. For switched linear system (10.1.3) we consider a
common quadratic Lyapunov function (QLF).
(10.1.5) P A + AT P < 0.
If we can prove that P_{i,k+1} = P_{k+1} A_i + A_i^T P_{k+1} < 0, we are done. Since
Manifold:
Suppose N is an open set in Rn . The set M is defined as
M = {x ∈ N : λi (x) = 0, i = 1, ..., n − m}
where λi are smooth functions.
If
rank \begin{pmatrix} \frac{\partial\lambda_1}{\partial x} \\ \vdots \\ \frac{\partial\lambda_{n-m}}{\partial x} \end{pmatrix} = n − m, \quad ∀x ∈ M,
then M is a (hyper)surface (which is a smooth manifold of dimension m).
That is,
L_b λ = \frac{\partial\lambda(x)}{\partial x} b = \Big(\sum_{i=1}^{n} b_i \frac{\partial}{\partial x_i}\Big)\lambda(x).
So we can also write a tangent vector b to Rn in the operator form:
b = \sum_{i=1}^{n} b_i \frac{\partial}{\partial x_i}.
We see from this that
\Big\{ \frac{\partial}{\partial x_i} \Big\}, \quad i = 1, ..., n
is a basis for the tangent space to Rn .
Cotangent vector and cotangent space:
Note that frequently the set of local coordinates (φ1 , ..., φn ) is represented as
an n-vector x = col(x1 , ..., xn ).
Differentials
Let N and M be smooth manifolds. Let F : N → M be a smooth
mapping. The differential of F at p ∈ N is the linear map
F∗ : Tp N → TF (p) M
\sigma_*\Big(\frac{d}{dt}\Big)_t = \sum_{i=1}^{n} \frac{d\sigma_i(t)}{dt} \Big(\frac{\partial}{\partial\phi_i}\Big)_{\sigma(t)}
Furthermore, we have
L_f \lambda(p) = \lim_{h\to 0} \frac{\lambda(\Phi(h, p)) - \lambda(p)}{h}.
Conventionally we denote
Lf Lg λ = Lf (Lg λ)
and
L_f^n λ = L_f(L_f^{n-1} λ)
For any two vector fields f and g on M , we define a new vector field,
denoted by [f, g], called the Lie bracket of the two vector fields, according
to the rule:
[f, g](λ) = (Lf Lg λ)(p) − (Lg Lf λ)(p)
In local coordinates the expression of [f, g] is given as
[f, g] = \frac{\partial g}{\partial x} f - \frac{\partial f}{\partial x} g
Lemma A.1. The collection of all (smooth) vector fields on M (denoted
as V (M )) with the product [·, ·] is a Lie algebra, i.e., [·, ·] has the following
properties:
1) it is skew commutative:
[f, g] = −[g, f ]
2) it is bilinear over R:
[a1 f1 + a2 f2 , g] = a1 [f1 , g] + a2 [f2 , g]
3) it satisfies the Jacobi identity:
[f, [g, h]] + [g, [h, f ]] + [h, [f, g]] = 0
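Properties 1)–3) can be checked symbolically for concrete vector fields on R² (the three fields below are arbitrary examples):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])

def bracket(f, g):
    # [f, g] = (dg/dx) f - (df/dx) g, in local coordinates
    return (g.jacobian(X) * f - f.jacobian(X) * g).applyfunc(sp.expand)

f = sp.Matrix([x2, -sp.sin(x1)])
g = sp.Matrix([x1 * x2, x2 ** 2])
h = sp.Matrix([sp.cos(x2), x1])

# Skew commutativity: [f, g] + [g, f] = 0
skew = (bracket(f, g) + bracket(g, f)).applyfunc(sp.expand)
# Jacobi identity: [f,[g,h]] + [g,[h,f]] + [h,[f,g]] = 0
jacobi = (bracket(f, bracket(g, h)) + bracket(g, bracket(h, f))
          + bracket(h, bracket(f, g))).applyfunc(sp.simplify)
print(skew.T, jacobi.T)
```

Both results are the zero vector field; the Jacobi identity cancels term by term because mixed partial derivatives commute.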
Two vector fields f and g are called commuting if Φ_t^f ∘ Φ_s^g = Φ_s^g ∘ Φ_t^f.
Fact: An exact form is closed. In the finite dimensional space case, the
converse is also true.
Distributions
dim(D(p)) = d
where d is a constant.
A distribution D is called involutive if [f, g] ∈ D whenever f, g are vector
fields in D.
A manifold P is called an integral manifold of a distribution D if for
each p ∈ P
Tp P = D(p)