FINITE TIME CONTROLLERS*
SIAM J. CONTROL AND OPTIMIZATION, Vol. 24, No. 4, July 1986
© 1986 Society for Industrial and Applied Mathematics
V. T. HAIMO†
Abstract. Continuous finite time differential equations are introduced as fast accurate controllers for
dynamical systems. These have qualities superior to controllers which are currently in use in such applications
as robotics. The structure of the phase portrait for scalar second order finite time systems is determined.
This characterization is used to develop a class of second order finite time systems which can be used as
controllers.
Key words. control systems, nonlinear stability, finite time control, ordinary differential equations
* Received by the editors January 30, 1985, and in revised form June 17, 1985. This work was supported
in part by the National Science Foundation under grant numbers ECS-81-21428 and ECS-84-03923, and by
the Office of Naval Research under the Joint Services Electronics Program, contract number N00014-75-C-0648.
† Graduate School of Business Administration, Harvard University, Boston, Massachusetts 02163.
We suppose throughout that we are given a differential equation
$$\dot{x} = f(x), \qquad x \in \mathbb{R}^n,$$
and that $x(t, x_0)$ denotes a solution which passes through $x_0$ at $t = 0$. We will often
call solutions trajectories. When $x(t, x_0)$ is regarded as a map from $\mathbb{R}^{n+1}$ to $\mathbb{R}^n$ it
will be called the flow of $\dot{x} = f$. A set is invariant with respect to the flow if all solutions
intersecting the set are contained in it.
An equilibrium point is a point $x$ such that $f(x) = 0$.
An equilibrium point is asymptotically stable if
(i) for any $\varepsilon > 0$ there exists a $\delta > 0$ so that $\|x_0\| < \delta$ implies
$\|x(t, x_0)\| < \varepsilon$ for all $t \geq 0$,
and
(ii) there exists a neighborhood $U$ of $0$ so that all trajectories which enter $U$
converge to the origin.
Here we have used the double bar to denote the Euclidean norm, as we shall
continue to do throughout the paper.
In studying stability, Lyapunov theory is very useful. The idea is to find a function
which, when restricted to a trajectory, is a strictly decreasing function of time. If the
restricted function has a unique minimum at the origin, then the trajectory
must converge to that equilibrium.
More formally: suppose there exists $v(x)$ so that $v(0)$ is the unique minimum of
$v$ in a neighborhood of $x = 0$. Suppose also that $v$ is $C^1$ and that
$\dot{v} = \langle \operatorname{grad} v, f(x) \rangle < 0$
except at $0$, where it vanishes. (Here $\operatorname{grad} v$ denotes the gradient of $v$ with respect to
the standard Riemannian metric on $\mathbb{R}^n$.) Such a function will be called a Lyapunov
function for $\dot{x} = f(x)$. Since we are only interested in studying local stability properties
of ordinary differential equations, we need only define such a function in a neighbor-
hood of zero. A Lyapunov function is positive definite if it is positive except at zero
where it vanishes.
3. Finite time systems. It is appropriate to limit discussion to differential equations
with an isolated equilibrium point at the origin, and no other equilibria, because we
are interested, for example, in the behavior of a robot arm in the neighborhood of a
set point. This set point is modeled as an isolated equilibrium and, as we are studying
local behavior, other equilibria are not of concern. We will call differential equations
with the properties that the origin is asymptotically stable, and all solutions which
converge to zero do so in finite time, finite time differential equations. Unless otherwise
specified, all right-hand sides of differential equations will be $C^1$ everywhere except at
zero, where they will be assumed to be continuous and to have an isolated equilibrium.
One notices immediately that finite time differential equations cannot be Lipschitz
at the origin. As all solutions reach zero in finite time, there is nonuniqueness of
solutions through zero in backwards time. This, of course, violates the uniqueness
condition for solutions of Lipschitz differential equations.
In one dimension necessary and sufficient conditions for the finite time property
may be found easily. We have
FACT 1. $\dot{x} = r(x)$, $r(0) = 0$, $x \in \mathbb{R}$, is finite time iff
(i) $x\,r(x) \leq 0$ and equals $0$ only at $x = 0$, for $x$ in a neighborhood of $0$, and
(ii) $\int_0^p dx/r(x)$ is finite for all $p$ in $\mathbb{R}$.
Here (i) determines the asymptotic stability of the origin and (ii) determines the finite
time property.
The proof is left to the reader.
Let $\mathrm{sig}^a z = (\operatorname{sgn} z)|z|^a$ for $z$ and $a$ in $\mathbb{R}$.
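Fact 1 can be illustrated numerically. The following Python sketch (not part of the paper's argument) estimates the travel time $\int_\varepsilon^p dx/(-r(x))$ for the finite time right-hand side $r(x) = -\mathrm{sig}^{1/2} x$, where the exact time from $p = 1$ to the origin is $2$, and for the merely asymptotically stable $r(x) = -x$, whose times grow like $-\ln \varepsilon$:

```python
import math

def sig(a, z):
    """The paper's signed power: sig^a z = (sgn z)|z|^a."""
    return math.copysign(abs(z) ** a, z) if z else 0.0

def travel_time(r, p, eps, n=100_000):
    """Midpoint-rule estimate of the time for xdot = r(x) to move from
    x = p down to x = eps > 0, i.e. the integral of dx / (-r(x))."""
    h = (p - eps) / n
    return sum(h / -r(eps + (k + 0.5) * h) for k in range(n))

for eps in (1e-2, 1e-4, 1e-6):
    t_ft = travel_time(lambda x: -sig(0.5, x), 1.0, eps)  # stays near 2
    t_as = travel_time(lambda x: -x, 1.0, eps)            # grows like -ln(eps)
    print(f"eps={eps:<6g}  sig^(1/2) time: {t_ft:.3f}   linear time: {t_as:.2f}")
```

The first column of times stays bounded as $\varepsilon \to 0$, in accordance with Fact 1(ii); the second diverges.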
PROPOSITION 1. Suppose there exists a positive definite Lyapunov function $v$ for $\dot{x} = g(x)$ with $\dot{v} \leq r(v)$, where $\dot{z} = r(z)$ is a finite time scalar differential equation. Then $\dot{x} = g$ is finite time.
Proof. Along a trajectory, $\dot{v} \leq r(v) < 0$, so $dv/r(v) \geq dt$ and, integrating,
$$\int_{v(p)}^{0} \frac{dv}{r(v)} \geq T,$$
where the trajectory of $\dot{x} = g$ with initial condition $x(0) = p$ reaches the origin at $T$,
for $T \leq \infty$. By Fact 1 the integral on the left is finite, so this time to origin is finite and $\dot{x} = g$ is finite time.
Example 2. The system
$$\dot{x} = \begin{pmatrix} -1 & \tfrac{1}{2} \\ \tfrac{1}{2} & -1 \end{pmatrix} \begin{pmatrix} \mathrm{sig}^{1/2}\, x_1 \\ \mathrm{sig}^{1/2}\, x_2 \end{pmatrix}$$
is finite time.
Proof. Let $v = x_1^2 + x_2^2$. Then
$$\dot{v} = -2\left(x_1\, \mathrm{sig}^{1/2} x_1 + x_2\, \mathrm{sig}^{1/2} x_2\right) + \left(x_1\, \mathrm{sig}^{1/2} x_2 + x_2\, \mathrm{sig}^{1/2} x_1\right).$$
One may show this is negative definite as follows. Since the right-hand side is homogeneous, it suffices to examine $\dot{v}$ on a circle
$$C = \{(x_1, x_2)\colon x_1^2 + x_2^2 = c\}.$$
By symmetry one easily sees that $\dot{v}$ achieves its maximum on the set $C$ when $x_1 = x_2$.
But $\dot{v} < 0$ when $x_1 = x_2$, and so $\dot{v} < 0$ when $x \neq 0$.
We need to show that $\dot{v} \leq r(v)$ where $r$ is the right-hand side of a finite time
differential equation. Letting $r(z) = -z^{4/5}$, we note that $\dot{v} < r(v)$ because $h(v) = v^{4/5} + \dot{v}$ is largest when $x_1 = x_2$, and there it is negative. Thus by Proposition 1 the system is
finite time.
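The negative definiteness claim can be spot-checked numerically. The sketch below (an illustration, not a proof, and assuming the two-by-two system $\dot{x}_1 = -\mathrm{sig}^{1/2} x_1 + \tfrac{1}{2}\mathrm{sig}^{1/2} x_2$, $\dot{x}_2 = \tfrac{1}{2}\mathrm{sig}^{1/2} x_1 - \mathrm{sig}^{1/2} x_2$) uses the fact that $\dot{v}$ is homogeneous of degree $3/2$, so negativity on the unit circle implies negative definiteness:

```python
import math

def sig(a, z):
    """sig^a z = (sgn z)|z|^a."""
    return math.copysign(abs(z) ** a, z) if z else 0.0

def vdot(x1, x2):
    """d/dt (x1^2 + x2^2) along x1' = -sig^(1/2)x1 + (1/2)sig^(1/2)x2,
    x2' = (1/2)sig^(1/2)x1 - sig^(1/2)x2."""
    s1, s2 = sig(0.5, x1), sig(0.5, x2)
    return 2 * (x1 * (-s1 + 0.5 * s2) + x2 * (0.5 * s1 - s2))

# vdot is homogeneous of degree 3/2, so its sign pattern is determined on
# the unit circle; sample it finely and take the maximum.
worst = max(vdot(math.cos(2 * math.pi * k / 100_000),
                 math.sin(2 * math.pi * k / 100_000)) for k in range(100_000))
print("max of vdot on the unit circle:", worst)
```

On the unit circle $v = 1$, so a maximum below $-1$ also confirms $\dot{v} < -v^{4/5}$ there, the comparison used with Proposition 1.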
4. Second order systems. Systems of particular interest in many control theoretic
situations are second order systems, and one may ask whether it is possible to generate
continuous finite time controllers for second order systems.
Second order systems may of course be represented as first order systems with a
special structure. One notices immediately that second order systems have at least one
Lipschitz component since if
$$\ddot{x} = w(x, \dot{x}),$$
then letting $x_1 = x$ and $x_2 = \dot{x}$,
$$\dot{x}_1 = x_2$$
and
$$\dot{x}_2 = w(x_1, x_2).$$
The following theorem describes the behavior of finite time systems which have
at least one Lipschitz component.
THEOREM 1. Suppose that $\dot{x} = g(x)$ is finite time, with $x \in \mathbb{R}^n$, $g(0) = 0$, and $g$ in
$C^1$ on $\mathbb{R}^n \setminus \{0\}$, and that $g_i(x)$ is Lipschitz at $x = 0$, for some $i$. If $x(t)$ is a solution which
reaches zero at time $T$, then
$$\lim_{t \to T} \frac{x_i(t)}{\|x(t)\|} = 0.$$
Proof. Suppose $x(t, p_0)$ is a solution of $\dot{x} = g$ with $x(0, p_0) = p_0$ and $x(T, p_0) = 0$.
By the mean value theorem, there is some $q$ in $[0, T]$ so that
$$0 = x_i(T, p_0) = x_i(0, p_0) + T g_i(x(q, p_0)),$$
or
$$\frac{g_i(x(q, p_0))}{x_i(0, p_0)} = -\frac{1}{T}.$$
$T$ may be considered to be a function of the initial condition $p$, where $T(p)$ is
the time to origin for the trajectory beginning at $p$. We then have
$$\frac{g_i(x(q(p), p))}{x_i(0, p)} = -\frac{1}{T(p)}.$$
One may take the limit as $p \to 0$ along the trajectory through the point $p_0$.
Since $x_i(t, p)$ is a smooth function of $t$ and vanishes at $t = T(p)$, then
$$\lim_{p \to 0} \frac{x_i(q, p)}{x_i(0, p)} = 1.$$
Thus
$$\lim_{p \to 0} \frac{g_i(x(q, p))}{x_i(q, p)} = \lim_{p \to 0} \frac{g_i(x(q, p))}{x_i(0, p)} \cdot \frac{x_i(0, p)}{x_i(q, p)} = \lim_{p \to 0} -\frac{1}{T(p)}$$
and so
$$\lim_{p \to 0} \frac{g_i(x(q, p))}{x_i(q, p)} = -\infty.$$
$g_i$ is Lipschitz, so $g_i/\|x\|$ is bounded. Thus
$$\lim_{p \to 0} \frac{g_i(x(q, p))}{\|x(q, p)\|} \cdot \frac{\|x(q, p)\|}{x_i(q, p)} = -\infty,$$
which implies that
$$\lim_{p \to 0} \frac{x_i(q, p)}{\|x(q, p)\|} = 0.$$
This result tells us that the trajectories of a second order finite time system converge
in the state space along the hyperplane $x_1 = 0$ (where $x_1$ denotes the position of the
system as above), since such a system must have at least one non-Lipschitz component.
Theorem 1 implies that for the system to reach zero in finite time, trajectories must
enter the region where the non-Lipschitz terms dominate.
We restrict our search for second order finite time systems to scalar problems. By
Theorem 1 we know that trajectories of finite time systems in the $(x, \dot{x})$ plane converge
tangent to the line $x = 0$. This tells us (among other things) that finite time trajectories
do not spiral around the origin infinitely often as they approach it.
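The tangency can be seen in simulation. The sketch below (an illustration only, using the system of Example 3 below, $\ddot{x} = -\mathrm{sig}^{0.4} x - \mathrm{sig}^{0.5} \dot{x}$, with an arbitrary step size) records $|x|/\|(x, \dot{x})\|$ the first time the state norm drops through each of several levels; the ratio shrinks toward $0$ as the trajectory descends, consistent with convergence tangent to $x = 0$:

```python
import math

def sig(a, z):
    """sig^a z = (sgn z)|z|^a."""
    return math.copysign(abs(z) ** a, z) if z else 0.0

# Forward-Euler run of x'' = -sig^(0.4) x - sig^(0.5) x' from (x, x') = (1, 0),
# sampling |x| / ||(x, x')|| at the first passage below each norm level.
x, v, dt = 1.0, 0.0, 1e-4
levels, ratios = [1e-1, 1e-2, 1e-3, 1e-4, 1e-5], []
for _ in range(2_000_000):          # hard cap on the number of steps
    if not levels:
        break
    n = math.hypot(x, v)
    if n < levels[0]:
        ratios.append(abs(x) / n)
        levels.pop(0)
    x, v = x + dt * v, v + dt * (-sig(0.4, x) - sig(0.5, v))
print("|x|/norm at levels 1e-1 .. 1e-5:", [f"{r:.4f}" for r in ratios])
```

Near the origin the trajectory slides along the invariant curve $\dot{x} = q(x)$, where $|\dot{x}|$ dominates $|x|$, so the final ratio is small.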
In trying to generate examples of differential equations with certain asymptotic
behavior, one may frequently exploit the fact that there are contours which may only
be crossed in certain directions, or not at all (as in the case of contours which are
invariant with respect to the flow), in order to trap the trajectory in some region. This
is of course the heart of Lyapunov theory. If one wishes to show that a second order
system is finite time, one could search for a contour that prevents trajectories from
spiraling around the origin. It seems natural to search for a contour which is itself
invariant. This idea lies at the core of the next two theorems.
THEOREM 2. Consider the scalar differential equation $\ddot{x} = g(x, \dot{x})$ with $g(0, 0) = 0$.
Let $g$ be in $C^1$ except at the origin, where it is only assumed to be continuous. Suppose
that the origin is asymptotically stable. Then all trajectories which reach zero do so in
finite time if and only if
(i) there exists a solution $q$ to the scalar differential equation
$$q(z)\frac{dq}{dz} = g(z, q(z)), \qquad q(0) = 0,$$
such that $\dot{z} = q(z)$ is finite time, and
(ii) for every solution $p$ of this equation, $\dot{z} = p(z)$ is finite time.
Proof.
Let $(x(t), \dot{x}(t))$ be a solution of $\ddot{x} = g(x, \dot{x})$ with $x(T) = \dot{x}(T) = 0$ for $T \leq \infty$. By
Lemma 2 there is a function $h$ such that $\dot{x}(t) = h(x(t))$ for sufficiently large $t$. We thus
have
$$(1) \qquad \ddot{x} = g(x, h(x)) = h(x)\frac{dh}{dx}.$$
By assumption, the components of the solution to $\ddot{x} = g(x, \dot{x})$, $x$ and $\dot{x}$, reach zero
in finite time. Thus $\dot{x} = h(x)$ must be a finite time first order differential equation, and
so condition (i) is satisfied. Condition (ii) also holds since for every solution $p(z)$ to
$$p(z)\frac{dp}{dz} = g(z, p(z)), \qquad p(0) = 0,$$
one obtains a trajectory $(x, \dot{x})$ which satisfies $\dot{x} = p(x)$. Once again, since $x$ and $\dot{x}$ reach
zero in finite time, $\dot{x} = p(x)$ must be a finite time differential equation.
One should note that associated to each second order scalar finite time equation
$\ddot{x} = g$ there is a first order discontinuous differential equation (described in condition
(i) of Theorem 2). This equation has nonunique solutions through the initial condition
$q(0) = 0$, and these solutions are nonzero for $z \neq 0$. This distinguishes them from the
classical examples of nonunique solutions of such equations as $\dot{x} = \mathrm{sig}^{1/2} x$, with
$x(0) = 0$.
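The classical nonuniqueness is easy to verify directly. In the sketch below (an illustration, not part of the paper), both $x(t) \equiv 0$ and $x(t) = t^2/4$ satisfy $\dot{x} = \mathrm{sig}^{1/2} x$ with $x(0) = 0$:

```python
import math

def sig(a, z):
    """sig^a z = (sgn z)|z|^a."""
    return math.copysign(abs(z) ** a, z) if z else 0.0

# x(t) = t^2/4 has derivative t/2, and sig^(1/2)(t^2/4) = t/2 for t >= 0,
# so it solves x' = sig^(1/2) x; x(t) = 0 trivially solves it as well.
residuals = [abs(t / 2 - sig(0.5, t * t / 4)) for t in (0.0, 0.25, 1.0, 3.0)]
print("max residual of x(t) = t^2/4:", max(residuals))
print("residual of the zero solution:", abs(0.0 - sig(0.5, 0.0)))
```

Two distinct solutions through the same initial condition confirm that the right-hand side cannot be Lipschitz at the origin.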
Theorem 2 provides a qualitative description of the phase portraits of second
order finite time scalar systems. It can also be used to generate examples as we shall
see shortly.
The problem of determining the stability properties of an equilibrium point of a
non-Lipschitz differential equation can be difficult. In Theorem 2, asymptotic stability
of the origin is assumed. In the next theorem a special structure of g gives a Lyapunov
function associated with g.
THEOREM 3. Let $\ddot{x} = g(x, \dot{x}) = f(x) + d(\dot{x})$, where $f(0) = d(0) = 0$, $g$ is $C^1$ except at
zero where it is continuous, $(\dot{x}, g) = (0, 0)$ only at $(x, \dot{x}) = (0, 0)$, and $f$ is monotone
decreasing. Then $\ddot{x} = g$ is finite time if and only if
(i) there exists a solution $q(z)$ to the first order differential equation on $\mathbb{R}$
$$(2) \qquad q(z)\frac{dq}{dz} = g(z, q(z)), \qquad q(0) = 0,$$
such that $\dot{z} = q(z)$ is finite time.
Proof. Necessity is a tautology because the assumption that $\ddot{x} = g$ is finite time includes
the asymptotic stability of the origin. Thus we are back in the situation of Theorem 2.
We would like to find a solution to the equation of condition (i) for the system
$\ddot{x} = -\mathrm{sig}^a x - \mathrm{sig}^b \dot{x}$, $a, b > 0$, namely
$$(3) \qquad q(z)\frac{dq}{dz} = -\mathrm{sig}^a z - \mathrm{sig}^b q(z), \qquad q(0) = 0.$$
The first thing that one
notices about it, however, is that it is discontinuous when $q = 0$. Thus the existence
theorems for solutions of continuous differential equations are not applicable. We will
generate some continuous differential equations from (3).
Consider $p(x) = \frac{1}{2} q(x)^2$. This gives
$$\frac{dp}{dx} = q(x)\frac{dq}{dx} = -\mathrm{sig}^a x - \mathrm{sig}^b q(x).$$
Since the origin is asymptotically stable, $q(x) < 0$ for $x > 0$, so there $\mathrm{sig}^b q(x) = -\eta\, p(x)^{b/2}$ with $\eta = 2^{b/2}$, and $p$ satisfies
$$(6) \qquad \frac{dp}{dx} = -\mathrm{sig}^a x + \eta\, p(x)^{b/2}, \qquad p(0) = 0,$$
for $x > 0$, and similarly, since $q(x) > 0$ for $x < 0$,
$$(7) \qquad \frac{dp}{dx} = -\mathrm{sig}^a x - \eta\, p(x)^{b/2}, \qquad p(0) = 0,$$
for $x < 0$.
Suppose (6) has a positive solution $p(x)$ for $x > 0$. Near zero, either (i) the $p^{b/2}$ term dominates the $x^a$ term, or (ii) the two terms are of the same order. In case (i), $p(x)$ is of order $x^{2/(2-b)}$ near $0$; since the $p^{b/2}$ term dominates the $x^a$ term, we must have $a > b/(2-b)$.
In case (ii) we have that
$$\lim_{x \to 0} \frac{x^a}{p(x)^{b/2}} = c \quad \text{for some } c > 0, \quad \text{so that } p(x) \text{ is of order } x^{2a/b} \text{ near } 0.$$
In this case we have $2a/b - 1 \geq a$ since the $x^a$ term dominates near $0$. But $2a/b - 1 \geq a$
implies $a \geq b/(2-b)$. In order for there to be a negative solution $p(x)$ to (6) for $x > 0$,
one may show similarly that it is necessary that $a \leq b/(2-b)$.
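The exponent arithmetic in case (ii), that $2a/b - 1 \geq a$ is equivalent to $a \geq b/(2-b)$, can be spot-checked with exact rational arithmetic (an illustrative sketch, not part of the paper):

```python
from fractions import Fraction as F

# 2a/b - 1 >= a  <=>  2a - b >= ab  <=>  a(2 - b) >= b  <=>  a >= b/(2-b),
# valid since b > 0 and 2 - b > 0; verified here on a grid of rationals
# with a in (0, 3) and b in (0, 1).
mismatches = [(a, b)
              for i in range(1, 61) for j in range(1, 20)
              for a, b in [(F(i, 20), F(j, 20))]
              if (2 * a / b - 1 >= a) != (a >= b / (2 - b))]
print("mismatches on the grid:", mismatches)
```

Using `fractions.Fraction` avoids floating-point ties at the boundary cases such as $a = 1/4$, $b = 2/5$, where the two sides are exactly equal.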
We know that there is a solution $p(x)$, with $p(0) = 0$, to (6) for $x > 0$, by the
existence theorem for solutions of continuous differential equations on open sets. Thus
if $a > b/(2-b)$ this solution must be positive.
One may show by the same method that (7) may have a positive solution for $x < 0$
if and only if $a \geq b/(2-b)$, and must have a positive solution if the inequality is strict.
Thus if $a > b/(2-b)$, then (3) has a solution $q$. It is also clear from the above
analysis that if $a > b/(2-b)$ then
$$(8) \qquad \lim_{x \to 0} \frac{q(x)}{-\mathrm{sig}^z x} = 1 \quad \text{for } z = 1/(2-b).$$
We claim that $\dot{x} = q(x)$ is finite time if $b < 1$. Define $h(x)$ by the equation
$q(x) = h(x) - \mathrm{sig}^z x$. Then
$$\lim_{x \to 0} \frac{h(x)}{\mathrm{sig}^z x} = 0.$$
Let $v = x^2$. Along $\dot{x} = q(x)$,
$$\dot{v} = 2x\,q(x) = 2x\,h(x) - 2x\,\mathrm{sig}^z x,$$
and
$$\lim_{x \to 0} \frac{\dot{v}}{v^{(1+z)/2}} = -2, \quad \text{so that } \dot{v} \leq -v^{(1+z)/2} \text{ near zero}.$$
Thus by Proposition 1, $\dot{x} = q$ is finite time if $z < 1$, and $z < 1$ holds exactly when $b < 1$. (Since
the time to origin function for $\dot{x} = q$ is unbounded at zero if $b \geq 1$, it follows that
the equation $\dot{x} = q$ is finite time iff $b < 1$.)
It only remains to show that all solutions of (3) are finite time. By the above
analysis any solution $q$ of (3) satisfies (8). If $b < 1$ then all such solutions are finite time.
Note that we have also shown that if $a < b/(2-b)$ or if $b \geq 1$ then $\ddot{x} = g$ is not
finite time. We have, for instance,
Example 3. $\ddot{x} = -\mathrm{sig}^{0.4} x - \mathrm{sig}^{0.5} \dot{x}$ is a finite time differential equation.
The phase portrait of this differential equation is displayed in Fig. 1.
It should be noted that this figure also displays the nonunique solutions of the
scalar first order differential equation $q(x)(dq/dx) = -\mathrm{sig}^{0.4} x - \mathrm{sig}^{0.5} q(x)$, with $q(0) = 0$,
where the $x$-axis is horizontal and the $q(x)$ axis is vertical.
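The finite time property of Example 3 is visible in simulation. In the forward-Euler sketch below (step size, horizon, and initial condition are arbitrary illustrative choices), by $t = 20$ the finite time system has collapsed to the integrator's numerical noise floor, while a comparable linear controller $\ddot{x} = -x - \dot{x}$ has only decayed exponentially:

```python
import math

def sig(a, z):
    """sig^a z = (sgn z)|z|^a."""
    return math.copysign(abs(z) ** a, z) if z else 0.0

def norm_at(accel, t_end=20.0, dt=1e-4, x=1.0, v=0.0):
    """State norm ||(x, x')|| at t_end under forward-Euler integration
    of x'' = accel(x, x') from (x, x') = (1, 0)."""
    for _ in range(int(t_end / dt)):
        x, v = x + dt * v, v + dt * accel(x, v)
    return math.hypot(x, v)

n_ft = norm_at(lambda x, v: -sig(0.4, x) - sig(0.5, v))  # Example 3
n_lin = norm_at(lambda x, v: -x - v)                     # linear comparison
print(f"norm at t = 20:  finite time {n_ft:.2e}   linear {n_lin:.2e}")
```

Here $a = 0.4 > b/(2-b) = 1/3$ and $b = 0.5 < 1$, so the hypotheses established above hold and the true settling time is finite.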
A more cumbersome proof along the lines of the proof of Corollary 1 may be
used to show that the following holds.
COROLLARY 2. Let $g = -\mathrm{sig}^a x - \mathrm{sig}^b \dot{x} + f(x) + d(\dot{x})$ where $a > 0$, $1 > b > 0$, $a > b/(2-b)$, $f(0) = d(0) = 0$, $O(f) > O(|x|^a)$ and $O(d) > O(|\dot{x}|^b)$; then $\ddot{x} = g$ is a finite time
equation.
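As an illustration of Corollary 2 (a sketch only; the particular perturbations are hypothetical choices satisfying the order conditions), take $f(x) = -x^3$ and $d(\dot{x}) = -\dot{x}^3$, which are of higher order than $|x|^{0.4}$ and $|\dot{x}|^{0.5}$ near zero. The perturbed system settles like Example 3:

```python
import math

def sig(a, z):
    """sig^a z = (sgn z)|z|^a."""
    return math.copysign(abs(z) ** a, z) if z else 0.0

def norm_at(accel, t_end=20.0, dt=1e-4, x=1.0, v=0.0):
    """State norm ||(x, x')|| at t_end under forward-Euler integration."""
    for _ in range(int(t_end / dt)):
        x, v = x + dt * v, v + dt * accel(x, v)
    return math.hypot(x, v)

# Corollary 2 with a = 0.4, b = 0.5 and the higher order terms f(x) = -x^3,
# d(x') = -x'^3 (sample choices; any f, d with O(f) > O(|x|^a) and
# O(d) > O(|x'|^b) qualify).
n = norm_at(lambda x, v: -sig(0.4, x) - sig(0.5, v) - x**3 - v**3)
print(f"perturbed system, norm at t = 20: {n:.2e}")
```

The higher order terms are negligible near the origin, where the non-Lipschitz terms dominate and force finite time convergence.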
The class of examples described in Corollary 2 is quite large. Theorem 2 describes
the phase portrait of second order finite time scalar systems, and also enables us to
identify a class of these systems. These may in turn be used as finite time controllers.
For instance, the block diagram implementation for controlling a link of a robot arm
is displayed in Fig. 2. In this scheme $I$ is the moment of inertia of the link.
Acknowledgments. I would like to thank Roger Brockett for many helpful discussions on this material. John Baillieul also offered some interesting insights.