Course Lecture8
In robust control design of nonlinear uncertain systems, stability theory plays an important role. For any given control system, it is crucial to have a stable system, since an unstable control system is useless. Lyapunov's stability criterion, introduced in the late 19th century by the Russian mathematician Aleksandr Mikhailovich Lyapunov, is a general and useful procedure for studying the stability of nonlinear systems. Lyapunov stability theory includes two methods: Lyapunov's first method and Lyapunov's direct method. Lyapunov's first method simply linearizes the system (lowest-order approximation) around a given point, and it can only achieve local stability results with small stability regions. Lyapunov's direct method is the most important tool for design and analysis of nonlinear systems. It applies directly to nonlinear systems, without any need for linearization, and can thus achieve global stability results. The basic concept behind Lyapunov's direct method is that if the total energy of a system (electrical or mechanical, linear or nonlinear) is continuously dissipating, then the system will eventually reach an equilibrium point and remain at that point. Hence, Lyapunov's direct method involves two steps: first, find an appropriate scalar function, referred to as a Lyapunov function; second, evaluate its first-order time derivative along the trajectories of the system. If the Lyapunov function is decreasing along the system trajectories as time increases (i.e., its derivative is negative), then the system energy is dissipating and the system will eventually settle down. The definitions below give a more formal statement of admissible choices of a Lyapunov function candidate.
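As a concrete illustration of the energy view above, the following minimal sketch (a damped pendulum with unit constants, an example not taken from the text) integrates the dynamics and evaluates the total energy along the trajectory:

```python
import math

# Damped pendulum: theta'' = -sin(theta) - b*theta'.
# Total energy V = 0.5*w^2 + (1 - cos(theta)) acts as a Lyapunov function:
# it should decrease along trajectories until the pendulum settles.
def energy_along_trajectory(theta0=1.0, w0=0.0, b=0.5, dt=1e-3, steps=20000):
    V = lambda th, w: 0.5 * w * w + (1.0 - math.cos(th))
    theta, w = theta0, w0
    v0 = V(theta, w)
    for _ in range(steps):
        theta += dt * w                        # semi-implicit Euler step
        w += dt * (-math.sin(theta) - b * w)
    return v0, V(theta, w)

v_start, v_end = energy_along_trajectory()     # v_end is far below v_start
```

The monotone decay of V is exactly the dissipation argument of the direct method: no closed-form solution of the differential equation is needed, only the sign of the derivative of V.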
Autonomous systems: the nonlinear system
ẋ = f(x, t)
is said to be autonomous if f does not depend explicitly on time, i.e., if the system can be written
ẋ = f(x).
Otherwise, the system is called non-autonomous.
Equilibrium point: A state xe is an equilibrium point (state) of the system if, once x(t) equals xe, it remains equal to xe for all future time. Mathematically, this means that xe satisfies
0 = f(xe).
In this dissertation, we are mainly interested in the stability of equilibrium points.
Stability and instability: The equilibrium point xe = 0 is said to be stable if, for any ε > 0, there exists δ > 0 such that if ‖x(0)‖ < δ, then ‖x(t)‖ < ε for all t ≥ 0. Otherwise, the equilibrium point is unstable.
Asymptotic stability: An equilibrium point 0 is asymptotically stable if it is stable and if, in addition, there exists some δ > 0 such that ‖x(0)‖ < δ implies that x(t) → 0 as t → ∞.
Exponential stability: An equilibrium point 0 is exponentially stable if there exist two strictly positive numbers α and λ such that
‖x(t)‖ ≤ α‖x(0)‖e^(−λt), ∀t > 0,
in some ball Br around the origin.
Lyapunov's first method:
1. The equilibrium point of the nonlinear system is asymptotically stable if the linearized system is strictly stable.
2. The equilibrium point of the nonlinear system is unstable if the linearized system is strictly unstable.
3. If the linearized system is marginally stable, one cannot conclude anything from the linear approximation (the equilibrium point may be stable, unstable, or asymptotically stable for the nonlinear system).
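The three cases above can be probed numerically. A minimal sketch (the system ẋ = −x + x³ is an illustrative choice, not from the text): its linearization at the origin is ẋ = −x, strictly stable, so the first method predicts local asymptotic stability, and the simulation also shows that the guarantee is only local:

```python
# Lyapunov's first method on xdot = -x + x**3: the linearization at the
# origin is xdot = -x (strictly stable), so the origin is locally
# asymptotically stable; initial conditions beyond |x| = 1 still escape.
def simulate(x0, dt=1e-3, steps=10000):
    x = x0
    for _ in range(steps):
        x += dt * (-x + x ** 3)
        if abs(x) > 10.0:          # trajectory has clearly diverged
            break
    return x

near = simulate(0.5)               # inside the region of attraction
far = simulate(1.5)                # outside: linearization says nothing here
```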
Lyapunov function: If a function V(x) is positive definite and has continuous partial derivatives in a ball Br, and if its time derivative along any state trajectory of the system ẋ = f(x) is negative semi-definite, i.e., V̇(x) ≤ 0, then V(x) is said to be a Lyapunov function.
Global stability: Assume that there exists a scalar function V of the state x, with continuous first-order derivatives, such that V(x) is positive definite, V̇(x) is negative definite, and V(x) → ∞ as ‖x‖ → ∞; then the equilibrium at the origin is globally asymptotically stable.
For the linear time-invariant system ẋ = Ax + Bu, a Lyapunov function and a stabilizing control can be constructed as follows: given positive definite matrices Q and R, there is a unique positive definite matrix P satisfying the algebraic Riccati equation AᵀP + PA − PBR⁻¹BᵀP + Q = 0; then the Lyapunov function is V(x) = xᵀPx and the stabilizing control is u(x) = −R⁻¹BᵀPx.
This example shows that, for LTI systems, control design and the search for a Lyapunov function are integrated and can be done systematically, and that Lyapunov functions for linear systems can always be chosen to be quadratic functions. We shall use the above result in chapter three to investigate robust control design for linear and certain nonlinear uncertain systems. Moreover, one of the main objectives of this book is to develop systematic procedures for designing controls and searching for Lyapunov functions for general nonlinear uncertain systems, though the resulting solution is not as complete as the one above for LTI systems.
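For a scalar plant the Riccati construction above can be carried out by hand, which makes a compact sanity check; the following sketch (with illustrative values a = b = q = r = 1) solves the scalar algebraic Riccati equation and verifies the closed loop:

```python
import math

# Scalar LQR: for xdot = a*x + b*u, the Riccati equation
#   a*p + p*a - p*b*(1/r)*b*p + q = 0
# is a quadratic in p; take the positive root, then u = -(1/r)*b*p*x.
def lqr_scalar(a, b, q, r):
    qa, qb, qc = b * b / r, -2.0 * a, -q       # qa*p^2 + qb*p + qc = 0
    p = (-qb + math.sqrt(qb * qb - 4 * qa * qc)) / (2 * qa)
    return p, -(b / r) * p                     # P > 0 and feedback gain k

p, k = lqr_scalar(a=1.0, b=1.0, q=1.0, r=1.0)
closed_loop = 1.0 + k            # a + b*k must be negative (stable)
residual = 2.0 * p - p * p + 1.0 # Riccati equation residual, should be 0
```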
Example: Consider the scalar system given by
ẋ = u + a,
where a is an uncertain (time-varying) parameter satisfying |a| < 1. Under the standard linear feedback control law u = −kx, the derivative of the Lyapunov function V = 0.5x² is given by
V̇ = −kx(x − a/k).
Because of the uncertainty in a, V̇ is only negative definite outside the ball B(0, |a|/k) ⊂ B(0, 1/k). Hence, the system is not asymptotically stable; instead, for constant a the solution is given by
x(t) = e^(−kt)x(0) + (a/k)(1 − e^(−kt)) → a/k
as t → ∞. So, the solution is globally uniformly ultimately bounded (GUUB) with respect to 1/k for the class of uncertainties denoted by a. Furthermore, the bound of GUUB stability tends to the origin as k → ∞.
The implications of the example are twofold. First, if V̇ is negative definite outside some hyper-ball in the state space, a GUUB stability result can be concluded. Second, while larger control energy makes the GUUB bound on the state smaller, no control of finite energy achieves asymptotic stability. Both observations can be extended to general nonlinear systems.
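Both implications can be seen in a short simulation of the example, ẋ = −kx + a(t); the sketch below (with a(t) = sin 3t as one admissible uncertainty and illustrative gains) shows the ultimate bound shrinking as k grows:

```python
import math

# GUUB example: xdot = -k*x + a(t) with |a| < 1. The state is driven into
# a ball of radius on the order of 1/k; a larger gain k shrinks the ball
# but never achieves convergence to the origin for all admissible a(t).
def ultimate_bound(k, x0=2.0, dt=1e-3, steps=40000, tail=3000):
    x, peak = x0, 0.0
    for i in range(steps):
        a = math.sin(3.0 * i * dt)   # one admissible uncertainty, |a| < 1
        x += dt * (-k * x + a)
        if i >= steps - tail:        # record the bound after transients die
            peak = max(peak, abs(x))
    return peak

b_small, b_large = ultimate_bound(5.0), ultimate_bound(50.0)
```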
The next section addresses robotic manipulator systems, which are widely studied in the area of robust control. Some of the theories developed here are applied to robotic manipulator systems. A brief general discussion of robotic systems is presented below.
Robotic Manipulators
M(q)q̈ + Vm(q, q̇)q̇ + N(q, q̇) = τ,  (1)
where
N(q, q̇) = G(q) + F(q̇) + ΔF(q, t),
M(q) ∈ ℝ^(n×n) is the inertia matrix, Vm(q, q̇) ∈ ℝ^(n×n) is a matrix containing the centripetal and Coriolis terms, G(q) ∈ ℝⁿ is the gravity vector, F(q̇) ∈ ℝⁿ is the friction vector, ΔF(q, t) ∈ ℝⁿ is a vector representing lumped uncertainties, q(t) ∈ ℝⁿ is the joint variable vector, and τ ∈ ℝⁿ is the input torque vector.
vector. There are three widely used properties of the robot dynamic equation above. These
properties will be used in chapter 6, or whenever a robotic system is under study, during the
stability analysis of the robust controller.
Property 1 (skew-symmetry):
xᵀ[(1/2)Ṁ(q) − Vm(q, q̇)]x = 0, for all x ∈ ℝⁿ.
In other words, the matrix (1/2)Ṁ(q) − Vm(q, q̇) is skew-symmetric.
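The property can be checked numerically. The sketch below assumes the standard two-link planar-arm expressions for Ṁ(q) and Vm(q, q̇), with a lumped inertia constant a3 whose value is illustrative:

```python
import math, random

# Skew-symmetry check on a two-link planar arm: the quadratic form
# x' * (0.5*Mdot - Vm) * x must vanish for every x, q2, and velocity.
A3 = 0.5   # lumped inertia constant (illustrative value)

def M_dot(q2, dq2):
    # time derivative of the inertia matrix M(q)
    s = math.sin(q2)
    return [[-2 * A3 * s * dq2, -A3 * s * dq2], [-A3 * s * dq2, 0.0]]

def V_m(q2, dq1, dq2):
    # centripetal/Coriolis matrix
    s = math.sin(q2)
    return [[-A3 * s * dq2, -A3 * s * (dq1 + dq2)], [A3 * s * dq1, 0.0]]

random.seed(0)
q2, dq1, dq2 = random.random(), random.random(), random.random()
x = [random.random(), random.random()]
Md, V = M_dot(q2, dq2), V_m(q2, dq1, dq2)
S = [[0.5 * Md[i][j] - V[i][j] for j in range(2)] for i in range(2)]
quad = sum(x[i] * S[i][j] * x[j] for i in range(2) for j in range(2))
```

Any choice of joint angle, velocities, and vector x gives a zero quadratic form, which is precisely the skew-symmetry statement.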
Position Control
This design technique is used to drive the link(s) of a robotic system to a specific desired position; accuracy is important especially in industrial and medical robotic systems. Two main types of robust control design schemes have been proposed: one utilizes the so-called Min-Max control, and the other uses a saturation-type controller. The Min-Max controller is inherently discontinuous and yields global exponential stability, while the saturation controller is continuous but yields global uniform ultimate boundedness. Position control simply drives the robotic link(s) to a final desired position with a very small error, which is referred to as set-point tracking.
Force Control
Many control design schemes have been developed for robotic systems in free space, that is, for a robot arm not in contact with any surface. However, most industrial robots, used for welding, grinding, polishing, etc., require contact with objects or surfaces. Hence, the
robot arm motion is constrained depending on the direction of the arm movement. This fact
motivated researchers to investigate the constrained motion case and develop position/force
controllers. Among these controllers are hybrid position/force control, impedance control,
and reduced order methods. The disadvantage of the hybrid position/force control is that
it requires exact knowledge of the robot manipulator and thus, the analysis is limited to
the uncertainty free systems. An adaptive control design scheme was developed for hybrid position/force robots with uncertainty which is based on the joint-space robot model
formulation.
Impedance Control
Impedance control is based on the idea that the robust controller should be utilized to regulate the dynamic behavior between the robot arm end-effector motion and the force exerted
on the surface, rather than considering the motion and force control problems separately.
The name impedance emanates from the idea of using an Ohm's-law-type relationship between motion and force. Like the previous types of controllers, the impedance controller has been studied extensively. A robust impedance controller was developed to ensure stability in the presence of uncertainties. An adaptive impedance controller was also developed that takes care
of parametric uncertainty.
Industrial Robots
Nowadays, adaptive control is widely utilized in industrial robots because of the inexpensive computer power that has become available. Moreover, these
robots are being utilized to their full potential in terms of the speed and precision of their
movements. It is possible to use a dynamic model of the manipulator as the heart of a sophisticated control algorithm running on a powerful control computer. This dynamic model allows the control algorithm to determine how to drive the manipulator's actuators in order to compensate for the complicated effects of inertia, centripetal, Coriolis, gravity, and friction
forces when the robot is in motion. The result is that the manipulator can be made to
follow a desired trajectory through space with smaller tracking errors. Adaptive control, as
other types of controllers, has its advantages and disadvantages. Adaptive control cannot be utilized for systems with fast time-varying uncertainties or parameters, because one cannot predict the nature of the uncertainty and the adaptive algorithm may not be able to adapt fast enough to the time-varying parameters. On the other hand, a robust
controller, used mostly in this dissertation, can stabilize nonlinear systems with arbitrarily fast time-varying uncertainties or parameters. Moreover, we shall introduce robust control design techniques for robotic systems with arbitrarily fast time-varying uncertainties, since robust control design requires only known bounding functions of the uncertainties. This dissertation focuses on nonlinear robust control design schemes.
Many primary results on nonlinear uncertain systems under matching conditions have been developed in the last 15 years. Gutman introduced a discontinuous min-max control which yields asymptotic stability for nonlinear systems under the matching condition. Because of its discontinuous behavior, this controller is physically poorly behaved, since all physical systems have a finite bandwidth while discontinuous control requires infinite bandwidth. Later, Corless and Leitmann introduced a class of continuous state feedback controllers guaranteeing uniform ultimate boundedness under the matching conditions. The mathematical model of nonlinear uncertain systems under matching conditions is established through the following definition.
Definition: Consider the following nonlinear uncertain system
ẋ = f(x, t) + Δf(x, t) + B(x, t)u + ΔB(x, t)u,  (2)
where Δf(x, t) and ΔB(x, t) are the unknown parts of f(x, t) and B(x, t), respectively. The system is said to satisfy the matching conditions (MCs) if the uncertainties can be decomposed as
Δf(x, t) = B(x, t)f′(x, t),  (3)
ΔB(x, t) = B(x, t)B′(x, t), with ‖B′(x, t)‖ < 1,  (4)
in which the uncertainty enters the system through the same channel as control input u. The reason behind the inequality in (4) is twofold. First, the system is not stabilizable for the case B′(x, t) = −1; moreover, if ‖B′(x, t)‖ > 1, then the sign of the term 1 + B′(x, t) is uncertain and hence any control input may cause the state to grow out of bound. Second, the inequality ensures that there is no singularity in the control design by guaranteeing that the term 1 + B′(x, t) is invertible.
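In ℝⁿ the matching condition (3) has a simple geometric meaning: Δf must lie in the range of B. A small sketch (using B = (0, 1)ᵀ as in a double integrator; the numbers are illustrative):

```python
# Matched vs. unmatched uncertainty for B = (0, 1): Delta_f satisfies the
# matching condition iff Delta_f = B * f_prime for some scalar f_prime,
# i.e., iff it enters through the same channel as the control input.
def is_matched(delta_f, B=(0.0, 1.0), tol=1e-12):
    # project delta_f onto B and check the residual
    scale = sum(d * b for d, b in zip(delta_f, B)) / sum(b * b for b in B)
    residual = [d - scale * b for d, b in zip(delta_f, B)]
    return max(abs(r) for r in residual) < tol

matched = is_matched((0.0, 0.7))      # can be dominated/cancelled by u
unmatched = is_matched((0.7, 0.0))    # enters where u has no direct effect
```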
Remark: If the uncertainty were known, one could easily choose a control input to cancel its effect and achieve stability. But since physical dynamical systems contain uncertainties which are unknown, one replaces those uncertainties by their bounding functions, which are chosen depending on the structure of the system, and then a robust control design scheme can be adopted. We shall investigate system stability through Lyapunov's direct method.
3.1
We shall assume that the origin (x = 0) is globally asymptotically stable for the uncontrolled nominal system
ẋ = f(x, t).  (5)
Furthermore, suppose that there exists a Lyapunov function for system (5), i.e., there exists a continuously differentiable function V(x, t) that satisfies the following inequalities for all (x, t) ∈ ℝⁿ × [0, ∞):
γ₁(‖x‖) ≤ V(x, t) ≤ γ₂(‖x‖),
∂V/∂t + (∂V/∂x) f(x, t) ≤ −γ₃(‖x‖),  (6)
where the γᵢ are class K functions. To demonstrate the stability of system (2), choose the input control u(x, t) to be of the form
u(x, t) = − [μ(x, t) / (‖μ(x, t)‖ + ε(t))] ρ(x, t),
where ε(t) > 0, an L₁ function, is chosen freely by the designer, ρ(x, t) is a known bounding function satisfying ‖f′(x, t)‖ ≤ ρ(x, t), and
μ(x, t) = Bᵀ(x, t) [∂V(x, t)/∂x]ᵀ ρ(x, t).  (7)
Along the trajectories of system (2),
V̇ = ∂V/∂t + (∂V/∂x)[f(x, t) + Bf′ + B(1 + B′)u]
 ≤ −γ₃(‖x‖) + (∂V/∂x)[Bf′ + B(1 + B′)u]
 ≤ −γ₃(‖x‖) + ‖(∂V/∂x)B(x, t)‖ρ(x, t) + (∂V/∂x)B(1 + B′)u
 ≤ −γ₃(‖x‖) + ‖μ(x, t)‖ − ‖μ(x, t)‖²(1 + B′) / (‖μ(x, t)‖ + ε(t))
 ≤ −γ₃(‖x‖) + ‖μ(x, t)‖ − ‖μ(x, t)‖² / (‖μ(x, t)‖ + ε(t))
 = −γ₃(‖x‖) + ‖μ(x, t)‖ε(t) / (‖μ(x, t)‖ + ε(t))
 ≤ −γ₃(‖x‖) + ε(t),  (8)
where the second-to-last inequality assumes μᵀ(1 + B′)μ ≥ ‖μ‖² (otherwise ρ is enlarged accordingly).
The following results are deduced from robust control design under matching conditions.
1. If ε(t) is constant, say ε(t) = ε₁, then the system is globally uniformly ultimately bounded, with an ultimate bound given by a class K function of ε₁, over an infinite time horizon.
2. If ε(t) is an exponentially decaying function, say ε(t) = e^(−at) for some a > 0, then the system is globally exponentially stable.
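A single-state sketch of the whole design (assuming B = 1, f(x) = −x, f′ = d(t) with ρ = 1, V = 0.5x², and the exponentially decaying ε(t) = e^(−t); all values are illustrative):

```python
import math

# Robust control under matching conditions, scalar case:
#   xdot = -x + d(t) + u,  |d| <= rho = 1.
# With V = 0.5*x**2: mu = x*rho and u = -mu*rho/(|mu| + eps(t)), eps = e^-t.
def simulate(x0=3.0, rho=1.0, dt=1e-3, steps=15000):
    x = x0
    for i in range(steps):
        t = i * dt
        d = math.cos(5.0 * t)                  # admissible uncertainty
        mu = x * rho
        u = -mu * rho / (abs(mu) + math.exp(-t))
        x += dt * (-x + d + u)
    return abs(x)

final = simulate()        # driven into a small neighborhood of the origin
```

Note that u is continuous (a saturation-type law), yet as ε(t) decays it approaches the discontinuous min-max control, which is what buys the stronger convergence in case 2.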
In summary, one can apply the above systematic design scheme to systems satisfying the matching conditions. The mechanical dynamics of a rigid-link robotic manipulator, for instance, are an example of a physical system satisfying the matching conditions. However, there are many uncertain nonlinear systems that do not satisfy the matching conditions. The next section introduces a robust control design scheme for systems satisfying the so-called equivalently matched uncertainty.
3.2
Although it would be ideal if robust control could be designed to stabilize all uncertain systems of the form (2), the following examples show that not all uncertain systems are stabilizable.
Example: Consider the second-order system
ẋ₁ = x₂ + φ(x₁, x₂),
ẋ₂ = u,
in which the uncertainty φ(·) is bounded as |φ(x₁, x₂)| ≤ 2 + x₁² + x₂². One can easily see that the system is not stabilizable for every admissible uncertainty, since one possibility for the additive uncertainty within the given bounding function is φ₁(x₁, x₂) = −x₂ + x₁.
The system is not stabilizable because an uncertainty within its bound can change the structure of the system so that part of the system dynamics becomes unstable and decoupled from the rest of the system and from the control input.
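This collapse can be exhibited numerically; the sketch below plugs in the destabilizing admissible uncertainty and an arbitrary illustrative control law (u = −x₂; any other choice gives the same x₁ behavior):

```python
# With phi1(x1, x2) = -x2 + x1, the first equation becomes x1dot = x1,
# decoupled from x2 and from u, so x1 grows like exp(t) for any control.
def simulate(x1=0.1, x2=0.0, dt=1e-3, steps=5000):
    for _ in range(steps):
        u = -x2                        # stand-in control; x1 ignores it
        phi = -x2 + x1                 # admissible: |phi| <= 2 + x1^2 + x2^2
        x1, x2 = x1 + dt * (x2 + phi), x2 + dt * u
    return x1

x1_final = simulate()                  # has grown by roughly exp(5)
```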
Example: Consider the scalar system
ẋ = x + [1 + Δ(x)]u,
where the uncertainty is bounded as |Δ(x)| ≤ C for some C ≥ 1. The system is not stabilizable since Δ(x) could be −1, in which case the system is not affected by any control. The uncertainty Δ(x) may also be such that the sign of 1 + Δ(x) is uncertain because C > 1, and therefore any control introduced may have an adverse effect, since it may cause the state to grow out of bound more quickly. In fact, whenever there is a large multiplicative uncertainty associated with the control input, no control is the best choice, and the uncertain system becomes unstabilizable if any control is needed. It is worth noting that the first subsystem in the preceding example reduces to this example if φ(x₁, x₂) = x₁ + Δ′(x₁)x₂.
Example: Consider the scalar system
ẋ = φ(x) + u²,
where the uncertainty is bounded as |φ(x)| ≤ 1. The system is not stabilizable since, no matter what choice is made for u, the control contribution to ẋ is always unidirectional (non-negative). In fact, any scalar uncertain system is not stabilizable if the designer cannot make ẋ both positive and negative at will through selecting u (specifically, through choosing a robust control to dominate all possible uncertainties).
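The unidirectionality is easy to see by minimizing the right-hand side over the control; with the admissible worst case φ = 1 (the candidate values below are illustrative):

```python
# For xdot = phi(x) + u**2 with phi = 1 (admissible, since |phi| <= 1),
# the right-hand side is 1 + u**2 >= 1 for every u: xdot can never be
# made negative, so x increases regardless of the control.
def min_xdot(u_candidates=(-10.0, -1.0, 0.0, 1.0, 10.0)):
    phi = 1.0
    return min(phi + u * u for u in u_candidates)

lower = min_xdot()     # the minimum, attained at u = 0
```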
Example: Consider the system
ẋ₁ = λ₁₁x₁ + x₂ + λ₁₃x₃,
ẋ₂ = λ₂₁x₁ + λ₂₂x₂ + x₃,
ẋ₃ = u,
where the uncertain terms λᵢⱼ are independent but bounded by constants Cᵢⱼ > 0. The system is not stabilizable for many sets of constants Cᵢⱼ. To see this conclusion, consider the simplest case, in which the uncertainties are time-invariant and state-independent. In this case, the controllability matrix C = [B  AB  A²B] is
C = [ 0    λ₁₃    λ₁₁λ₁₃ + 1
      0    1      λ₂₁λ₁₃ + λ₂₂
      1    0      0 ].
The zero z of the transfer function (from u to x₁) and the determinant of the controllability matrix are, respectively,
z = −1/λ₁₃ + λ₂₂,
and
det(C) = λ₁₃²λ₂₁ + λ₁₃λ₂₂ − λ₁₁λ₁₃ − 1.
If det(C) = 0, the system becomes uncontrollable due to a pole-zero cancellation, and the cancellation may occur in the right half of the s-plane. Uncontrollability due to an unstable pole-zero cancellation implies that the system cannot be stabilized. For the system under consideration, the presence of the uncertainty λ₁₃, potentially of large size, implies that this kind of unstabilizability may arise unless certain size limitations, in terms of the bound of λ₁₃, are imposed on the maximum magnitudes of λ₁₁, λ₂₁ and λ₂₂. Relationships between the bounding functions of the uncertainties can be found through robust control design to guarantee both stabilizability and robust stability. There are many other uncertain systems in which an unstable, uncontrollable pole-zero cancellation may occur.
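The algebra above can be verified numerically: fix λ₁₁, λ₁₃, λ₂₂ (the values below are illustrative), solve det(C) = 0 for λ₂₁, and check that the resulting zero z also cancels a pole, here an unstable one:

```python
# det(C) = lam13^2*lam21 + lam13*lam22 - lam11*lam13 - 1 and
# z = -1/lam13 + lam22; det(C) = 0 exactly when z also satisfies the pole
# polynomial (z - lam11)*(z - lam22) - lam21 = 0.
def det_C(l11, l13, l21, l22):
    return l13 * l13 * l21 + l13 * l22 - l11 * l13 - 1.0

l11, l13, l22 = 0.5, 2.0, 2.0
l21 = (1.0 + l11 * l13 - l13 * l22) / (l13 * l13)   # makes det(C) = 0
z = -1.0 / l13 + l22                                # z = 1.5 > 0: unstable
pole_poly_at_z = (z - l11) * (z - l22) - l21        # vanishes: cancellation
```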
Although the dynamics of the above examples are simple, they show the existence of unstabilizable systems and, more importantly, provide intuitive explanations of what may cause systems to be unstabilizable. Specifically, there are two categories: loss of controllability, and a control contribution to the differential equation that is either of unknown sign or only unidirectional (as shown in the second and third examples). In the first and last examples, the two systems have an isolated subsystem or a pole-zero cancellation and therefore are uncontrollable. As a result of the above examples, it is crucial to identify stabilizable uncertain systems and to design robust controls for those systems. The goal of robust control theory is to identify the class of all stabilizable uncertain systems and to provide stabilizing controls that guarantee the desired performance.
The ultimate objective of robust control theory for nonlinear uncertain systems is twofold. First, if necessary, determine the least requirements, called structural conditions, on the system (either in terms of system structure or location of uncertainty) such that it can be stabilized or controlled. Second, find procedures under which a robust control u can be systematically designed. The key issue in the design is the search for Lyapunov functions and their associated robust controllers (which may be different for achieving various types of performance).
The backstepping design procedure can be seen from the following simple example.
Example: Consider the second-order system:
ẋ₁ = x₂,
ẋ₂ = u.
This system is linear and consists of two cascaded integrators. A linear stabilizing control
can be designed by solving a simple Lyapunov equation. The Riccati equation can be used to
design robust control if there are linearly bounded uncertainties. However, those procedures
do not apply to nonlinear systems since they depend on linear matrix equations. Here, we
present an intuitive design that can be extended later to nonlinear systems.
From the second equation, we see that u can drive x₂ anywhere. In the first equation, if x₂ were a control variable, an obvious stabilizing control would be x₂ = −x₁. Since x₂ is not a control but a state variable, the equation x₂ = −x₁ does not make any sense. To distinguish the state variable x₂ and the control designed for x₂ from the actual control u, let us call the control designed for x₂ the fictitious control and denote it by xd₂ = −x₁. Although the fictitious control is not implementable, we can rewrite the first equation as
ẋ₁ = −x₁ + (x₂ + x₁) = −x₁ + (x₂ − xd₂).
This simple manipulation reveals intuitively that stabilization of the first equation may be achieved if we can make x₂ − xd₂ = x₂ + x₁ converge to zero. Hence, the fictitious control xd₂ can be viewed as the desired trajectory for the state variable x₂. Recall that, in the second equation, control u can be designed to drive x₂ anywhere. The problem of making x₂ track xd₂ is equivalent to making the new, translated state variable z₂ = x₂ − xd₂ converge to zero (that is, a stabilization problem). The dynamics of z₂ can be found as follows:
ż₂ = ẋ₂ − ẋd₂ = ẋ₂ + ẋ₁ = u + x₂.
Obviously, the control u = −x₂ − z₂ = −x₂ − (x₂ + x₁) guarantees asymptotic stability of z₂. Once z₂ = x₂ + x₁ converges to zero, x₁ will approach zero by the design of xd₂ in the first equation.
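The design can be confirmed in simulation; a minimal sketch with illustrative initial conditions:

```python
# Backstepping example: x1dot = x2, x2dot = u, with fictitious control
# xd2 = -x1, z2 = x2 + x1, and u = -x2 - z2 = -2*x2 - x1, which yields
# z2dot = -z2 and x1dot = -x1 + z2: both states decay to zero.
def simulate(x1=1.0, x2=1.0, dt=1e-3, steps=15000):
    for _ in range(steps):
        u = -2.0 * x2 - x1
        x1, x2 = x1 + dt * x2, x2 + dt * u
    return x1, x2

x1_final, x2_final = simulate()      # both near zero after 15 s
```

The closed loop is ẍ₁ + 2ẋ₁ + x₁ = 0, a critically damped linear system, which is why the convergence is smooth and monotone after a short transient.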