
EML5311 Lyapunov Stability & Robust Control Design

Lyapunov Stability criterion

In robust control design for nonlinear uncertain systems, stability theory plays an important role. For any control system, stability is crucial, since an unstable control system is useless. The Lyapunov¹ stability criterion is a general and useful procedure for studying the stability of nonlinear systems. Lyapunov stability theory includes two methods: Lyapunov's first method and Lyapunov's direct method. Lyapunov's first method is a technique that uses linearization (lowest-order approximation) of the system around a given point, and it can only establish local stability results with small stability regions. Lyapunov's direct method is the most important tool for the design and analysis of nonlinear systems. It applies directly to nonlinear systems without the need for linearization and can thus establish global stability. The basic concept behind Lyapunov's direct method is that if the total energy of a system (electrical or mechanical, linear or nonlinear) is continuously dissipating, then the system will eventually reach an equilibrium point and remain at that point. Hence, Lyapunov's direct method involves two steps: first, find an appropriate scalar function, referred to as a Lyapunov function; second, evaluate its first-order time derivative along the trajectories of the system. If the Lyapunov function is decreasing along the system trajectories as time increases, then the system energy is dissipating and the system will eventually settle down. The definitions below give a more formal statement of admissible choices of Lyapunov function candidates.
Autonomous systems: the nonlinear system
ẋ = f(x, u, t)
is said to be autonomous if f does not depend explicitly on time, i.e., if the system can be written
ẋ = f(x).
Otherwise, the system is called non-autonomous.
Equilibrium point: A state xe is an equilibrium point (state) of the system if, once x(t) = xe, the state remains equal to xe for all subsequent time. Mathematically, this means that xe satisfies
0 = f(xe).

¹ Theory introduced in the late 19th century by the Russian mathematician Alexandr Mikhailovich Lyapunov.
In this paper, we are mainly interested in stability of equilibrium points.
Stability and instability: The equilibrium point xe = 0 is said to be stable if, for any ε > 0, there exists δ > 0 such that if ||x(0)|| < δ, then ||x(t)|| < ε for all t ≥ 0. Otherwise, the equilibrium point is unstable.
Asymptotic stability: An equilibrium point 0 is asymptotically stable if it is stable, and if in addition there exists some r > 0 such that ||x(0)|| < r implies that x(t) → 0 as t → ∞.
Exponential stability: An equilibrium point 0 is exponentially stable if there exist two strictly positive numbers α and λ such that
||x(t)|| ≤ α ||x(0)|| e^(−λt),   ∀ t > 0,
in some ball Br in the neighborhood of the origin.
Lyapunov's first method:
1. The equilibrium point of the nonlinear system is asymptotically stable if the linearized system is strictly stable.
2. The equilibrium point of the nonlinear system is unstable if the linearized system is strictly unstable.
3. If the linearized system is marginally stable, one cannot conclude anything from the linear approximation (the equilibrium point may be stable, unstable, or asymptotically stable for the nonlinear system).
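As an illustration of the first method (an added example, not from the text), one can linearize a damped pendulum at its two equilibria and inspect the eigenvalues of the Jacobian; the parameter values below are arbitrary:

```python
import numpy as np

# Damped pendulum: th' = w, w' = -(g/l) sin(th) - c*w.
# Lyapunov's first method: examine eigenvalues of the Jacobian
# at the equilibria (0, 0) and (pi, 0).
g, l, c = 9.81, 1.0, 0.5

def jacobian(theta):
    # Jacobian of the right-hand side evaluated at (theta, 0)
    return np.array([[0.0, 1.0],
                     [-(g / l) * np.cos(theta), -c]])

A_down = jacobian(0.0)     # hanging-down equilibrium
A_up = jacobian(np.pi)     # inverted equilibrium

down_stable = np.all(np.linalg.eigvals(A_down).real < 0)
up_unstable = np.any(np.linalg.eigvals(A_up).real > 0)
print(down_stable, up_unstable)  # True True
```

The strictly stable linearization at the downward equilibrium implies local asymptotic stability of the nonlinear pendulum there; the eigenvalue in the right half-plane at the inverted equilibrium implies instability.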
Lyapunov function: If a function V(x) is positive definite and has continuous partial derivatives in a ball Br, and if its time derivative along any state trajectory of the system ẋ = f(x) is negative semi-definite, i.e., V̇(x) ≤ 0, then V(x) is said to be a Lyapunov function.
Global stability: Assume that there exists a scalar function V of the state x, with continuous first-order derivatives, such that
V(x) is positive definite,
V̇(x) is negative definite,
V(x) → ∞ as ||x|| → ∞;
then the equilibrium at the origin is globally asymptotically stable.
Stability of uniform ultimate boundedness: A solution x(t), x(t0) = x0, is said to be uniformly ultimately bounded (UUB) in a hyperball B(0, R) centered at the origin and of radius R if, whenever ||x0|| < δ for some δ > 0, there exists a non-negative constant T(x0, B) < ∞, independent of t0, such that x(t) ∈ B for all t ≥ t0 + T(x0, B).
Example: Lyapunov function for LTI systems. Consider the linear system
ẋ = Ax,
where x ∈ ℝⁿ is the state and A ∈ ℝ^{n×n} is the system matrix. Propose a quadratic Lyapunov function candidate
V(x) = xᵀPx,
where P is a positive definite matrix to be determined. Taking the time derivative yields
V̇(x) = ẋᵀPx + xᵀPẋ = xᵀ(AᵀP + PA)x = −xᵀQx,
where Q is defined through the algebraic Lyapunov equation AᵀP + PA = −Q. Therefore, the system is stable if Q is positive definite or positive semi-definite.
A Lyapunov function successful for stability analysis can be found not by randomly choosing
P but only by determining P from the Lyapunov equation for any given positive definite
Q. It has been shown that, given a positive definite Q, the system is stable if and only if
the unique solution of Lyapunov equation is also positive definite. That is, this backward
procedure is necessary and sufficient for both existence of Lyapunov function and analyzing
stability. As will be shown later, this systematic way of generating Lyapunov functions for
linear systems also applies to many nonlinear (uncertain) systems, for example, the class of
feedback linearizable nonlinear systems, the class of nonlinear systems with a linear part,
etc.
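This backward procedure can be sketched numerically with SciPy (an illustrative addition; the matrix A below is an arbitrary stable example, not from the text):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Backward procedure: pick Q > 0, solve A^T P + P A = -Q for P,
# and check that the unique solution P is positive definite.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues -1 and -2 (Hurwitz)
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
# so pass a = A^T and q = -Q to get A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

eigP = np.linalg.eigvalsh(P)
print(P)
print(np.all(eigP > 0))  # True: P > 0 confirms asymptotic stability
```

Had A not been Hurwitz, the solution P of the same equation would fail to be positive definite, which is exactly the necessary-and-sufficient test described above.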
For control design, consider the system
ẋ = Ax + Bu,
where B ∈ ℝ^{n×m} is the input matrix and u ∈ ℝᵐ is the input. If the pair (A, B) is controllable, control design and the search for a Lyapunov function are done through the backward procedure as follows: given positive definite matrices Q and R, there is a unique positive definite matrix P satisfying the algebraic Riccati equation AᵀP + PA − PBR⁻¹BᵀP + Q = 0; then the Lyapunov function is V(x) = xᵀPx and the stabilizing control is u(x) = −R⁻¹BᵀPx.
The example shows that control design and the search for a Lyapunov function are integrated and can be done systematically for LTI systems, and that Lyapunov functions for linear systems can always be chosen to be quadratic. We shall use the above result in chapter three to investigate robust control design for linear and certain nonlinear uncertain systems. Moreover, one of the main objectives of this book is to develop systematic procedures for designing control and searching for Lyapunov functions for general nonlinear uncertain systems, though the resulting solution is not as complete as the one above for LTI systems.
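A hedged sketch of this Riccati-based design with SciPy follows; A, B, Q, and R are arbitrary illustrative choices, not from the text:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Backward design procedure for x' = Ax + Bu.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # unstable open loop (eigenvalues +1, -1)
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.eye(1)

# Unique positive definite solution of A^T P + P A - P B R^-1 B^T P + Q = 0
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)    # stabilizing gain: u = -Kx

closed_loop = A - B @ K
print(np.all(np.linalg.eigvals(closed_loop).real < 0))  # True
```

Here V(x) = xᵀPx serves simultaneously as the Lyapunov function certifying stability of the closed loop A − BK.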
Example: Consider the scalar system given by
ẋ = u + a,
where a is an uncertain (time-varying) parameter satisfying |a| < 1. Under the standard linear feedback control law u = −kx, the derivative of the Lyapunov function V = 0.5x² is given by
V̇ = −kx (x − a/k).
Because of the uncertainty in a, V̇ is only negative definite outside the ball B(0, |a|/k) ⊂ B(0, 1/k). Hence, the system is not asymptotically stable, but the solution is given by
x(t) = e^(−kt) x0 + (a/k)(1 − e^(−kt)) → a/k
as t → ∞.

So, the solution is globally uniformly ultimately bounded (GUUB) with respect to 1/k for the class of uncertainty denoted by a. Furthermore, the bound of GUUB stability tends to the origin as k → ∞.
The implications of the example are twofold. First, if V̇ is negative definite outside some hyperball in the state space, a GUUB stability result is concluded. Second, while larger control energy makes the GUUB bound on the state smaller, no control of finite energy achieves asymptotic stability. Both observations extend to general nonlinear systems.
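The GUUB behavior of this example can be checked with a simple Euler simulation (an illustrative sketch; the step size, horizon, and the near-worst-case value a = 1 are assumptions):

```python
import numpy as np

# Euler simulation of x' = -k x + a, i.e., x' = u + a with u = -k x.
def simulate(k, a=1.0, x0=5.0, dt=1e-3, T=20.0):
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (-k * x + a)
    return x

x_final_k2 = simulate(k=2.0)
x_final_k20 = simulate(k=20.0)
print(x_final_k2, x_final_k20)   # approach a/k = 0.5 and 0.05
```

Raising the gain k shrinks the ultimate bound a/k toward the origin, exactly as the analysis above predicts, but no finite gain removes the residual offset.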
The next section addresses robotic manipulator systems, which are widely used in the area of robust control. Some of the theories developed here are applied to robotic manipulator systems. A brief general discussion of robotic systems is presented below.

Robotic Manipulators

A robot is a reprogrammable multifunctional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks. A robot arm is classified as having either rigid or flexible links. A rigid link can be either revolute (rotary) or linear (prismatic); a prismatic link allows linear relative motion between two links, see figure (??). In the chapters to come, all robot manipulator systems discussed are of the revolute type.
In the case of a system as complicated as a robot, it is not practical to assume that the parameters in the dynamic model of the robotic system are known precisely. There will always be inexact cancellation of the nonlinearities in the system due to uncertainties. In such cases we use robust control, simplifying the equations of motion as much as possible by ignoring certain terms in the equations. One use of robotic systems is in environmental waste management, in which accuracy is important, especially accuracy in positioning the end-effector of the manipulator. Requirements such as safety, motion compliance control, and the operating environment can be fulfilled by using a low-level robot controller in which the end-effector arm is moved quickly yet accurately while maintaining a high degree of robustness.
Since we are interested in robotic manipulator systems, as we shall present in chapter 6, let us formulate the dynamical model of a rigid-link robot manipulator. The rigid-link robot is described by

τ = M(q)q̈ + Vm(q, q̇)q̇ + N(q, q̇),        (1)

where
N(q, q̇) = G(q) + F(q̇, t),
M(q) ∈ ℝ^{n×n} is the inertia matrix, Vm(q, q̇) ∈ ℝ^{n×n} is a matrix containing the centripetal and Coriolis terms, G(q) ∈ ℝⁿ is the gravity vector, F(q̇, t) ∈ ℝⁿ is a vector representing friction and lumped uncertainties, q(t) ∈ ℝⁿ is the joint variable vector, and τ ∈ ℝⁿ is the input torque vector.
vector. There are three widely used properties of the robot dynamic equation above. These
properties will be used in chapter 6, or whenever a robotic system is under study, during the
stability analysis of the robust controller.
Property 1
The inertia matrix M(q) is symmetric and positive definite. Hence,
m1 ≤ ||M(q)|| ≤ m2(q),
where m1 is a positive constant and m2(q) is a strictly positive definite function. Moreover, m1 and m2(q) are chosen in such a way that the maximum possible parameter variation of M(q) is taken into account.
Note: For the case that the robotic system is purely revolute, m2(q) = m2 is a positive constant.
Property 2
The matrices M(q) and Vm(q, q̇) satisfy the following equation:
xᵀ [ (1/2)Ṁ(q) − Vm(q, q̇) ] x = 0,   ∀ x ∈ ℝⁿ.
In other words, the matrix (1/2)Ṁ(q) − Vm(q, q̇) is skew-symmetric.
Property 3
The centripetal/Coriolis term Vm(q, q̇) is bounded as
||Vm(q, q̇)|| ≤ a1 ||q̇||,
and the friction and gravity terms are bounded as
||G(q) + Fd q̇ + Fs(q̇)|| ≤ a2 + a3 ||q̇||,
where the ai are known constants.
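Property 2 can be verified numerically for a standard two-link planar arm (an added illustration; a Spong-style model is assumed, and the mass and length values are arbitrary):

```python
import numpy as np

# Two-link planar arm: only the term h = -m2*l1*lc2*sin(q2) makes
# the inertia matrix configuration-dependent.
m2, l1, lc2 = 1.5, 1.0, 0.4

def check_skew(q2, dq1, dq2):
    h = -m2 * l1 * lc2 * np.sin(q2)
    # Time derivative of the inertia matrix M(q)
    Mdot = np.array([[2 * h * dq2, h * dq2],
                     [h * dq2, 0.0]])
    # Centripetal/Coriolis matrix Vm(q, qdot)
    Vm = np.array([[h * dq2, h * (dq1 + dq2)],
                   [-h * dq1, 0.0]])
    S = 0.5 * Mdot - Vm
    return np.allclose(S + S.T, 0.0)   # skew-symmetric iff S + S^T = 0

results = [check_skew(q2, dq1, dq2)
           for q2, dq1, dq2 in [(0.3, 1.0, -2.0), (1.2, -0.5, 0.7)]]
print(all(results))  # True
```

The check holds at every configuration and joint velocity, which is what makes Property 2 so useful: the cross term xᵀ[(1/2)Ṁ − Vm]x vanishes identically in Lyapunov derivatives.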


After introducing the properties used in the analysis of robotic systems, let us briefly discuss a variety of robust control designs for robotic systems.

Position Control
This design technique is used to position the link(s) of a robotic system at a specific position (desired location) in applications in which accuracy is important, especially industrial and medical robotic systems. Two main types of robust control design schemes have been proposed: one utilizes the so-called min-max control and the other uses a saturation-type controller. The min-max controller is naturally discontinuous and yields global exponential stability, while the saturation controller is continuous but yields global uniform ultimate boundedness. Position control simply drives the robotic link(s) to a final desired position with a very small error, which is referred to as set-point tracking.

Force Control
Many control design schemes have been developed for robotic systems in free space, that is, with the robot arm not in contact with any surface. However, most industrial robots, used for welding, grinding, polishing, etc., require contact with objects or surfaces. Hence, the robot arm motion is constrained depending on the direction of the arm movement. This fact motivated researchers to investigate the constrained-motion case and develop position/force controllers. Among these controllers are hybrid position/force control, impedance control, and reduced-order methods. The disadvantage of hybrid position/force control is that it requires exact knowledge of the robot manipulator, and thus the analysis is limited to uncertainty-free systems. An adaptive control design scheme, based on the joint-space robot model formulation, was developed for hybrid position/force control of robots with uncertainty.

Impedance Control
Impedance control is based on the idea that the robust controller should be utilized to regulate the dynamic behavior between the robot arm end-effector motion and the force exerted on the surface, rather than considering the motion and force control problems separately. The name impedance emanates from the idea of using an Ohm's-law-type relationship between motion and force. Like the previous types of controllers, impedance control has been extensively studied. A robust impedance controller was developed to ensure stability in the presence of uncertainties. An adaptive impedance controller that handles parametric uncertainty was also developed.

Industrial Robots
Nowadays, adaptive control is widely utilized in industrial robots because of the inexpensive computer power that has become available. Moreover, these robots are being utilized to their full potential in terms of the speed and precision of their movements. With a powerful control computer, it is possible to use a dynamic model of the manipulator as the heart of a sophisticated control algorithm. This dynamic model allows the control algorithm to determine how to drive the manipulator's actuators in order to compensate for the complicated effects of inertia, centripetal, Coriolis, gravity, and friction forces when the robot is in motion. The result is that the manipulator can be made to follow a desired trajectory through space with smaller tracking errors. Adaptive control, like other types of controllers, has its advantages and disadvantages. Adaptive control cannot be utilized for systems with fast time-varying uncertainties or parameters, because one cannot predict the nature of the uncertainty and the adaptive algorithm may not be able to adapt fast enough to the time-varying parameters. On the other hand, a robust controller, used mostly in this dissertation, can stabilize nonlinear systems with arbitrarily fast time-varying uncertainties or parameters. Moreover, we shall introduce robust control design techniques for robotic systems with arbitrarily fast time-varying uncertainties, since robust control design requires only known bounding functions of the uncertainties. This dissertation focuses on nonlinear robust control design schemes.

Robust control design under Matching Conditions

Many primary results for nonlinear uncertain systems under matching conditions have been developed in the last 15 years. Gutman introduced a discontinuous min-max control which yields asymptotic stability for nonlinear systems under the matching condition. Because of its discontinuous behavior, the controller is physically poorly behaved, since all physical systems have finite bandwidth while a discontinuous control requires infinite bandwidth. Later, Corless and Leitmann introduced a class of continuous state feedback controllers guaranteeing uniform ultimate boundedness under the matching conditions. The mathematical model of nonlinear uncertain systems under matching conditions is established through the following definition.
Definition: Consider the following nonlinear uncertain system

ẋ = f(x, t) + Δf(x, t) + B(x, t)u + ΔB(x, t)u,        (2)

where Δf(x, t) and ΔB(x, t) are the unknown parts of f(x, t) and B(x, t), respectively. The system is said to satisfy the matching conditions (MCs) if the uncertainties can be decomposed as
Δf(x, t) = B(x, t) Δf′(x, t),    ΔB(x, t) = B(x, t) ΔB′(x, t),
and if there exists a positive constant ρ̄ such that

||ΔB′(x, t)|| ≤ ρ̄ < 1.        (3)

Therefore, the system can be rewritten as

ẋ = f(x, t) + B(x, t) [Δf′(x, t) + (1 + ΔB′(x, t)) u(x, t)],        (4)

in which the uncertainty enters the system through the same channel as the control input u. The reason behind inequality (3) is twofold. First, the system is not stabilizable in the case ΔB′(x, t) = −1; moreover, if ||ΔB′(x, t)|| > 1, then the sign of the term 1 + ΔB′(x, t) is uncertain and hence any control input may cause the state to grow out of bound. Second, the inequality ensures that there is no singularity in the control design, by guaranteeing that the term 1 + ΔB′(x, t) is invertible.
Remark: If the uncertainty were known, one could easily choose a control input to cancel its effect and achieve stability. But since physical dynamical systems contain uncertainties which are unknown, one replaces those uncertainties by bounding functions, chosen depending on the structure of the system, and then a robust control design scheme can be adopted. We shall investigate system stability through Lyapunov's direct method.

3.1 Lyapunov Stability in Robust Control Design

The nominal model of system (4) is given by

ẋ = f(x, t) + B(x, t)u(x, t).        (5)

We shall assume that the origin (x = 0) is globally asymptotically stable for the uncontrolled system ẋ = f(x, t). Furthermore, suppose that there exists a Lyapunov function for system (5), i.e., there exists a continuously differentiable function V(x, t) that satisfies the following inequalities for all (x, t) ∈ ℝⁿ × [0, ∞):

∂V/∂t + (∂V/∂x) f(x, t) ≤ −γ(||x||),
γ1(||x||) ≤ V(x, t) ≤ γ2(||x||),        (6)

where γ and the γi are class K functions. To demonstrate the stability of system (4), choose the control input u(x, t) to be of the form

u(x, t) = − [μ(x, t) / (||μ(x, t)|| + ε(t))] ρ(x, t),        (7)

where ε(t) > 0, an L1 function, is chosen freely by the designer,
μ(x, t) = Bᵀ(x, t) (∂V/∂x)ᵀ(x, t) ρ(x, t),
and ρ(x, t) is a bounding function of the uncertainty satisfying ||Δf′(x, t)|| ≤ ρ(x, t).
Differentiating V(x, t) under the robust control (7) yields

V̇ = ∂V/∂t + (∂V/∂x) [f(x, t) + B Δf′ + B (1 + ΔB′) u]
  ≤ −γ(||x||) + (∂V/∂x) [B Δf′ + B (1 + ΔB′) u]
  ≤ −γ(||x||) + ||(∂V/∂x) B|| ρ(x, t) + (∂V/∂x) B (1 + ΔB′) u
  ≤ −γ(||x||) + ||μ(x, t)|| − ||μ(x, t)||² (1 + ΔB′) / (||μ(x, t)|| + ε(t))
  ≤ −γ(||x||) + ||μ(x, t)|| − ||μ(x, t)||² / (||μ(x, t)|| + ε(t))
  = −γ(||x||) + ||μ(x, t)|| ε(t) / (||μ(x, t)|| + ε(t))
  ≤ −γ(||x||) + ε(t).        (8)

The following results are deduced from robust control design under matching conditions.
1. If ε(t) is a constant, say ε(t) = ε1, then the system is globally uniformly ultimately bounded, with an ultimate bound given by a class K function of ε1 over an infinite time horizon.
2. If ε(t) is an exponentially decaying function, say ε(t) = ε1 e^(−at) for some a > 0, then the system is globally exponentially stable.
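A minimal simulation sketch of the smoothed control on a scalar example (an added illustration, not from the text): nominal f(x) = −x with V = 0.5x², B = 1, ρ = 1, ΔB′ = 0, and the worst-case uncertainty Δf′ = 1; the ε(t) choices are illustrative:

```python
import numpy as np

# Scalar instance: x' = -x + dfp + u, with V = 0.5 x^2 so that
# mu = (dV/dx) * B * rho = x when rho = 1.
def simulate(eps_fn, x0=3.0, dt=1e-3, T=15.0):
    x = x0
    for i in range(int(T / dt)):
        mu = x                                   # mu(x) = x for rho = 1
        u = -mu / (abs(mu) + eps_fn(i * dt))     # smoothed min-max control
        x += dt * (-x + 1.0 + u)                 # worst-case dfp = +1
    return x

x_const = simulate(lambda t: 0.1)         # constant eps: ultimate bound
x_decay = simulate(lambda t: np.exp(-t))  # decaying eps
print(abs(x_const), abs(x_decay))
```

With constant ε the state settles into a small residual set (result 1), while the decaying ε drives the state essentially to the origin (result 2).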
In summary, one can apply the above systematic design scheme to systems satisfying the matching conditions. The mechanical dynamics of a rigid-link robotic manipulator, for instance, are an example of a physical system satisfying the matching conditions. However, there are many uncertain nonlinear systems that do not satisfy the matching conditions. The next section introduces a robust control design scheme for systems satisfying the so-called equivalently matched uncertainty.

3.2 Examples of Unstabilizable Uncertain Systems

Although it would be ideal if robust control could be designed to stabilize all uncertain systems in the form of (??), the following examples show that not all uncertain systems are stabilizable.
Example: Consider the second-order system
ẋ1 = x2 + Δφ(x1, x2),    ẋ2 = u,
in which the uncertainty Δφ(·) is bounded as |Δφ(x1, x2)| ≤ 2 + x1² + x2². One can easily see that the system with any admissible uncertainty is not stabilizable, since one possibility for the additive uncertainty Δφ1(x1, x2) within the given bounding function is −x2 + x1.
The system is not stabilizable since an uncertainty within its bound can change the structure of the system such that part of the system dynamics becomes unstable and decoupled from the rest of the system and from the control input.
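This decoupling can be seen numerically: with Δφ = −x2 + x1, the x1-equation reduces to ẋ1 = x1 for any feedback (the gains below are arbitrary illustrative choices):

```python
# With dphi = -x2 + x1 (admissible: |x1| + |x2| <= 2 + x1^2 + x2^2),
# the x1 dynamics become x1' = x2 + dphi = x1, independent of u.
def simulate(k1, k2, x0=(1.0, 0.0), dt=1e-3, T=5.0):
    x1, x2 = x0
    for _ in range(int(T / dt)):
        u = -k1 * x1 - k2 * x2         # any linear feedback
        dphi = -x2 + x1                # adversarial admissible uncertainty
        x1 += dt * (x2 + dphi)         # = dt * x1: u never enters
        x2 += dt * u
    return x1

print(simulate(10.0, 10.0))   # grows like e^t regardless of the gains
```

However aggressive the feedback, x1 grows exponentially, confirming that the unstable mode is decoupled from the control channel.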
Example: Consider the scalar system
ẋ = x + [1 + Δ(x)]u,
where the uncertainty is bounded as |Δ(x)| ≤ C for some C ≥ 1. The system is not stabilizable since Δ(x) could be −1, in which case the system is not subject to any control. The uncertainty Δ(x) may also be such that the sign of 1 + Δ(x) is uncertain (because C > 1), and therefore any control introduced may have an adverse effect, since it may cause the state to grow out of bound more quickly. In fact, whenever there is a large multiplicative uncertainty associated with the control input, no control is the best choice, and the uncertain system becomes unstabilizable if any control is needed. It is worth noting that the first subsystem in the preceding second-order example becomes this example if Δφ(x1, x2) = x1 + Δ′(x1)x2.
Example: Consider the scalar system
ẋ = Δφ(x) + u²,
where the uncertainty is bounded as |Δφ(x)| ≤ 1. The system is not stabilizable since, no matter what choice is made for u, the control contribution to ẋ is always unidirectional (positive). In fact, any scalar uncertain system is not stabilizable if the designer cannot make ẋ both positive and negative at will through selecting u (specifically, through choosing robust control to dominate all possible uncertainties).
Example: Consider the system
ẋ1 = λ11 x1 + x2 + λ13 x3,
ẋ2 = λ21 x1 + λ22 x2 + x3,
ẋ3 = u,
where the uncertain terms λij are independent but bounded by constants Cij > 0. The system is not stabilizable for many sets of constants Cij. To see this conclusion, consider the simplest case, in which the uncertainties are time-invariant and state-independent. In this case, the transfer function between u and x1 is

X1(s)/U(s) = [λ13(s − λ22) + 1] / [s(s − λ11)(s − λ22) − λ21 s],

and the controllability matrix is

    C = [ 0   λ13   λ11 λ13 + 1
          0    1    λ21 λ13 + λ22
          1    0    0            ].

The zero z of the transfer function and the determinant of the controllability matrix are, respectively,

z = λ22 − 1/λ13,   and   det(C) = λ13² λ21 + λ13 λ22 − λ11 λ13 − 1.

If det(C) = 0, the system becomes uncontrollable due to a pole-zero cancellation, and the cancellation may occur in the right half of the s-plane. Uncontrollability due to an unstable pole-zero cancellation implies that the system cannot be stabilized. For the system under consideration, the presence of the uncertainty λ13, of potentially large size, implies that this kind of loss of stabilizability may arise unless certain size limitations, in terms of the bound of λ13, are imposed on the maximum magnitudes of λ11, λ21, and λ22. Relationships between the bounding functions of the uncertainties can be found through robust control design to guarantee both stabilizability and robust stability. There are many other uncertain systems in which unstable, uncontrollable pole-zero cancellation may occur.
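A numerical instance confirms the rank loss when det(C) = 0 (the λ values below are illustrative choices, not from the text):

```python
import numpy as np

# Choose the lambdas so that det(C) = l13^2*l21 + l13*l22 - l11*l13 - 1 = 0,
# e.g. l11 = l22 = 0 and l13 = l21 = 1.
l11, l13, l21, l22 = 0.0, 1.0, 1.0, 0.0

A = np.array([[l11, 1.0, l13],
              [l21, l22, 1.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])

# Controllability matrix C = [B, AB, A^2 B]
C = np.hstack([B, A @ B, A @ A @ B])
print(np.linalg.matrix_rank(C))   # 2: rank deficient, uncontrollable
```

The rank drop from 3 to 2 is the numerical counterpart of the pole-zero cancellation discussed above.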
Although the dynamics of the above examples are simple, they show the existence of unstabilizable systems and, more importantly, provide intuitive explanations of what may cause systems to be unstabilizable. Specifically, there are two categories: loss of controllability, and a control contribution to the differential equation that is either of unknown sign or only unidirectional (as shown in the second and third examples). In the first and last examples, the two systems have an isolated subsystem or a pole-zero cancellation and are therefore uncontrollable. As a result of the above examples, it is crucial to identify stabilizable uncertain systems and to design robust control for those systems. The aim of robust control theory is to identify the class of all stabilizable uncertain systems and to provide stabilizing controls that guarantee the desired performance.
The ultimate objective of robust control theory for nonlinear uncertain systems is twofold. First, if necessary, determine the least requirements, called structural conditions, on the system (either in terms of system structure or the location of the uncertainty) such that it can be stabilized or controlled. Second, find procedures by which robust control u can be systematically designed. The key issue in the design is the search for Lyapunov functions and their associated robust controllers (which may differ for achieving various types of performance).

Back-Stepping Design Procedure

The backstepping design procedure can be seen from the following simple example.
Example: Consider the second-order system:
ẋ1 = x2,    ẋ2 = u.

This system is linear and consists of two cascaded integrators. A linear stabilizing control
can be designed by solving a simple Lyapunov equation. The Riccati equation can be used to
design robust control if there are linearly bounded uncertainties. However, those procedures
do not apply to nonlinear systems since they depend on linear matrix equations. Here, we
plan to start an intuitive design that can be extended later to nonlinear systems.
From the second equation, we see that u can drive x2 anywhere. For the first equation, if x2 were a control variable, an obvious stabilizing control would be x2 = −x1. Since x2 is not a control but a state variable, the equation x2 = −x1 does not make sense as a control law. To distinguish the state variable x2 and the control designed for x2 from the actual control u, let us call the control designed for x2 a fictitious control and denote it by x2d = −x1. Although the fictitious control is not implementable, we can rewrite the first equation as
ẋ1 = −x1 + (x2 + x1) = −x1 + (x2 − x2d).
This simple manipulation reveals intuitively that stabilization of the first equation may be achieved if we can make x2 − x2d = x2 + x1 converge to zero. Hence, the fictitious control x2d can be viewed as the desired trajectory for the state variable x2. Recall that, in the second equation, the control u can be designed to drive x2 anywhere. The problem of making x2 track x2d is equivalent to making the new, translated state variable z2 = x2 − x2d converge to zero (that is, a stabilization problem). The dynamics of z2 are found as follows:
ż2 = ẋ2 − ẋ2d = ẋ2 + ẋ1 = u + x2.
Obviously, the control u = −x2 − z2 = −x2 − (x2 + x1) guarantees asymptotic stability of z2. Once z2 = x2 + x1 converges to zero, x1 approaches zero by the design of x2d, since ẋ1 = −x1 + z2 (which is stable when z2 = 0), and consequently x2 goes to zero. Therefore, the overall system is asymptotically stable.
This intuitive argument of stability can be verified by a simple Lyapunov proof. Choosing the Lyapunov function V = x1² + z2², one can easily show that the control u = −x2 − (x1 + x2) yields global asymptotic stability. In fact, this Lyapunov function is the sum of Lyapunov functions for the subsystems in the states x1 and z2.
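The backstepping example can be checked with a short Euler simulation (an illustrative sketch; the initial condition and step size are arbitrary):

```python
# Backstepping control u = -x2 - (x1 + x2) for x1' = x2, x2' = u.
def simulate(x0=(2.0, -1.0), dt=1e-3, T=15.0):
    x1, x2 = x0
    for _ in range(int(T / dt)):
        z2 = x1 + x2              # translated state z2 = x2 - x2d
        u = -x2 - z2              # makes z2' = -z2
        x1 += dt * x2
        x2 += dt * u
    return x1, x2

x1f, x2f = simulate()
print(abs(x1f) < 1e-3 and abs(x2f) < 1e-3)  # True
```

The closed loop is ẋ1 = x2, ẋ2 = −x1 − 2x2, with a double pole at s = −1, so both states decay to zero as the Lyapunov argument predicts.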
The control in this example is designed by working sequentially through the two integrators. In the process, a fictitious control is designed and a state transformation is performed in which the fictitious control is differentiated. Such a design is called a recursive design since, by the transformation, the design of the fictitious control is embedded into the actual control design. The design is also called backstepping or backward recursive design because the direction in which the sequential design proceeds is opposite to the direction of signal flow in the system, that is, the direction in which physical information flows within the system. This approach, which obviously works systematically for multiple-integrator systems, was recognized in the sixties, but applications of its extensions to nonlinear control, adaptive control, and robust control have been developed only in the past several years. Mathematically, the design procedure can be generalized and applied to nonlinear systems for the following reasons. First, by introducing a fictitious control variable for a given subsystem, its dynamics locally satisfy the matching conditions with respect to the fictitious control and can therefore be compensated. Second, the state transformation makes the difference between the dynamics of the fictitious control and its corresponding state variable equivalently matched, so it too can be compensated. Finally, sub-Lyapunov functions can easily be found for all subsystems, since they are of first order, and the overall Lyapunov function is simply the sum of the sub-Lyapunov functions, from which stability of the overall system can be concluded.
