
SIAM J. CONTROL AND OPTIMIZATION
Vol. 24, No. 4, July 1986
© 1986 Society for Industrial and Applied Mathematics
013

FINITE TIME CONTROLLERS*



V. T. HAIMO†
Abstract. Continuous finite time differential equations are introduced as fast accurate controllers for
dynamical systems. These have qualities superior to controllers which are currently in use in such applications
as robotics. The structure of the phase portrait for scalar second order finite time systems is determined.
This characterization is used to develop a class of second order finite time systems which can be used as
controllers.

Key words. control systems, nonlinear stability, finite time control, ordinary differential equations

1. Introduction. A standard problem in system theory is to develop controllers
which drive a system to a given position as fast as possible. Consider

    ẋ = f(x) + u(t)g(x),   x in R^n,

where f models the natural dynamics of the system and g the effect of the control u.
An example would be the positioning of a robotic manipulator at a set point in space.
There is a considerable body of research on linear feedback input for multidimensional
systems, that is, feedback control laws of the form
    u(t)g(x) = Kx,   K: R^n → R^n.
This research has been concerned with finding K so that certain performance criteria
are met by the feedback system (see e.g., [1] and [2]). Linear feedback may be quite
good from the point of view of accuracy of tracking and placement. It has the
disadvantage, however, that solutions of the feedback system are exponential functions
of time if f is smooth, since the system behaves linearly in a neighborhood of the set
point. Thus convergence can never occur in finite time. Whether or not this presents
a serious practical problem will depend on the application.
One may ask whether it is possible to control a system to equilibrium in finite
time using a bounded control. A standard textbook solution to this problem is to use
a bang-bang control strategy (see, for example, [3]). Such controls optimize the time
to reach equilibrium for trajectories of

    ẋ = Ax + Bu,   x in R^n, u in R, |u| ≤ 1.


It turns out that the optimal control u(x) is discontinuous, and switches from u = 1
to u = −1 on specific contours in x space.
The implementation of such a discontinuous control strategy leads to decreased
response time. There may also be unwanted side effects, such as vibrations introduced
by repeated overshooting of the switching contour, caused by errors in the
implementation of the discontinuous control. We are thus led to rephrase
the question posed above. Can one control a system to equilibrium using a bounded
and continuous control law? We will develop such finite time controllers.

* Received by the editors January 30, 1985, and in revised form June 17, 1985. This work was supported
in part by the National Science Foundation under grant numbers ECF-81-21428 and EFS-84-03923, and by
the Office of Naval Research, under the Joint Services Electronics Program contract number N0014-75-C-0648.
† Graduate School of Business Administration, Harvard University, Boston, Massachusetts 0213.

2. Definitions. We will discuss qualitative properties of differential equations.


Some vocabulary which arises in this context is herewith defined. Suppose that

    ẋ = f(x),   x in R^n,

and that x(t, x₀) denotes a solution which passes through x₀ at t = 0. We will often
call solutions trajectories. When x(t, x₀) is regarded as a map from R^{n+1} to R^n it
will be called the flow of ẋ = f. A set is invariant with respect to the flow if all solutions
intersecting the set are contained in it.
An equilibrium point is a point x such that f(x) = 0.
An equilibrium point is asymptotically stable if
(i) for any ε > 0 there exists a δ > 0 so that ‖x₀‖ < δ implies
‖x(t, x₀)‖ < ε for all t ≥ 0,
and
(ii) there exists a neighborhood of 0, U, so that all trajectories which enter U
converge to the origin.
Here we have used the double bar to denote the Euclidean norm, as we shall
continue to do throughout the paper.
In studying stability Lyapunov theory is very useful. The idea is to find a function
which when restricted to a trajectory is a strictly decreasing function of time. If the
restricted function has a unique minimum which is at the origin, then the trajectory
must converge to that equilibrium.
More formally: suppose there exists v(x) so that v(0) is the unique minimum of
v in a neighborhood of x = 0. Suppose also that v is C¹ and v̇ = ⟨grad v, f(x)⟩ < 0
except at 0, where it vanishes. (Here grad v denotes the gradient of v with respect to
the standard Riemannian metric on Eⁿ.) Such a function will be called a Lyapunov
function for ẋ = f(x). Since we are only interested in studying local stability properties
of ordinary differential equations, we need only define such a function in a neighbor-
hood of zero. A Lyapunov function is positive definite if it is positive except at zero
where it vanishes.
3. Finite time systems. It is appropriate to limit discussion to differential equations
with an isolated equilibrium point at the origin, and no other equilibria, because we
are interested, for example, in the behavior of a robot arm in the neighborhood of a
set point. This set point is modeled as an isolated equilibrium and, as we are studying
local behavior, other equilibria are not of concern. We will call differential equations
with the properties that the origin is asymptotically stable, and all solutions which
converge to zero do so in finite time, finite time differential equations. Unless otherwise
specified, all right-hand sides of differential equations will be C¹ everywhere except at
zero, where they will be assumed to be continuous, and to have an isolated equilibrium.
One notices immediately that finite time differential equations cannot be Lipschitz
at the origin. As all solutions reach zero in finite time, there is nonuniqueness of
solutions through zero in backwards time. This, of course, violates the uniqueness
condition for solutions of Lipschitz differential equations.
In one dimension, necessary and sufficient conditions for the finite time property
may be found easily. We have
FACT 1. ẋ = r(x), r(0) = 0, x in R, is finite time iff
(i) xr(x) ≤ 0 and equals 0 only at x = 0, for x in a neighborhood of 0, and
(ii) |∫₀^p dx/r(x)| < ∞ for all p in R.
Here (i) determines the asymptotic stability of the origin and (ii) determines the finite
time property.

The proof is left to the reader.
Let sig^a z = (sgn z)|z|^a for z and a in R.

Example 1. ẋ = −sig^{1/2} x is a finite time equation.
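Example 1 can be checked numerically. The sketch below (a plain forward-Euler integration, with `sig` implemented as defined above; step size and horizon are our choices, not from the paper) integrates ẋ = −sig^{1/2} x from x(0) = 1. Separating variables gives d(2√x)/dt = −1, so the exact solution reaches zero at T = 2√x₀ = 2, whereas the linear comparison ẋ = −x is still of order e^{−t} at that time.

```python
import math

def sig(z, a):
    """sig^a z = (sgn z)|z|^a, as defined in the text."""
    return math.copysign(abs(z) ** a, z) if z != 0 else 0.0

def simulate(x0, t_end, dt=1e-4):
    """Forward-Euler integration of xdot = -sig^{1/2} x."""
    x, t = x0, 0.0
    while t < t_end:
        x -= dt * sig(x, 0.5)
        t += dt
    return x

# Exact time to the origin from x0 = 1 is T = 2*sqrt(1) = 2.
x_finite = simulate(1.0, 2.05)      # essentially zero just after T
x_linear = math.exp(-2.05)          # xdot = -x: still about 0.13
```

The contrast illustrates the point of the paper: the non-Lipschitz right-hand side extinguishes the state exactly, while the Lipschitz one only decays exponentially.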


One may use Lyapunov theory to extend Fact 1 to the multidimensional case.

PROPOSITION 1. Consider ẋ = g(x) with x in R^n and g continuous. If v is a positive
definite Lyapunov function for ẋ = g, and if v̇ ≤ r(v), where ż = r(z), z in R, is a finite
time equation, then ẋ = g is also finite time.
Proof. v̇ ≤ r(v) implies that dv/r(v) ≥ dt, since by Fact 1 r(v) < 0 for v > 0. We
then have
    ∞ > |∫₀^{v(p)} dv/r(v)| (again by Fact 1), and
    ∫_{v(p)}^0 dv/r(v) ≥ ∫₀^T dt = T,
where the trajectory of ẋ = g with initial condition x(0) = p reaches the origin at T,
for T ≤ ∞. Since this time to origin is finite, ẋ = g is finite time.
Example 2. The system

    ẋ₁ = −x₁ + x₂ − sig^{1/2} x₁,
    ẋ₂ = −x₁ − x₂ − sig^{1/2} x₂

is finite time.
Proof. Let v = x₁² + x₂². Then

    v̇ = −2‖x‖² − 2(x₁ sig^{1/2} x₁ + x₂ sig^{1/2} x₂) = −2‖x‖² − 2(|x₁|^{3/2} + |x₂|^{3/2}),

which is negative definite. We need to show that v̇ ≤ r(v) where r is the right-hand side
of a finite time differential equation. Letting r(z) = −z^{4/5}, we note that v̇ ≤ r(v) near
the origin, because |x₁|^{3/2} + |x₂|^{3/2} ≥ ‖x‖^{3/2} = v^{3/4} ≥ v^{4/5} for v ≤ 1. Thus by
Proposition 1 the system is finite time.
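A numerical sanity check is straightforward. The sketch below assumes the Example 2 system to be ẋ₁ = −x₁ + x₂ − sig^{1/2} x₁, ẋ₂ = −x₁ − x₂ − sig^{1/2} x₂ (this reading of the system, and the step size and horizon, are our choices): along forward-Euler trajectories, v = x₁² + x₂² should decay and the state should be numerically zero well before t = 6.

```python
import math

def sig(z, a):
    return math.copysign(abs(z) ** a, z) if z != 0 else 0.0

# Assumed reading of Example 2: linear part [[-1, 1], [-1, -1]]
# plus the non-Lipschitz terms -sig^{1/2} x_i on the diagonal.
def rhs(x1, x2):
    return (-x1 + x2 - sig(x1, 0.5), -x1 - x2 - sig(x2, 0.5))

x1, x2, dt = 1.0, -1.0, 1e-4
v0 = x1 ** 2 + x2 ** 2
for _ in range(60000):              # integrate to t = 6 by forward Euler
    d1, d2 = rhs(x1, x2)
    x1, x2 = x1 + dt * d1, x2 + dt * d2
v_final = x1 ** 2 + x2 ** 2
```

The comparison differential equation v̇ ≤ −v^{4/5} bounds the arrival time by 5·v(0)^{1/5} ≈ 5.7, so a horizon of 6 suffices.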
4. Second order systems. Systems of particular interest in many control theoretic
situations are second order systems, and one may ask whether it is possible to generate
continuous finite time controllers for second order systems.
Second order systems may of course be represented as first order systems with a
special structure. One notices immediately that second order systems have at least one
Lipschitz component, since if
    ẍ = w(x, ẋ),
then letting x₁ = x and x₂ = ẋ,
    ẋ₁ = x₂
and
    ẋ₂ = w(x₁, x₂).
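The reduction just described is mechanical; a minimal sketch (the function names are ours):

```python
def second_to_first(w):
    """Turn the scalar second order equation xddot = w(x, xdot) into the
    equivalent first order planar system in (x1, x2) = (x, xdot)."""
    def rhs(x1, x2):
        # x1dot = x2 is the Lipschitz component; x2dot carries w.
        return x2, w(x1, x2)
    return rhs

rhs = second_to_first(lambda x, xdot: -x - xdot)   # e.g. a damped linear spring
```

Whatever w is, the first component of the planar vector field is the identity in x₂, which is Lipschitz; this is the observation the next theorem exploits.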
The following theorem describes the behavior of finite time systems which have
at least one Lipschitz component.
THEOREM 1. Suppose that ẋ = g(x) is finite time, with x in R^n, g(0) = 0, and g in
C¹ on R^n − {0}, and that g_i(x) is Lipschitz at x = 0, for some i. If x(t) is a solution which
reaches zero at T < ∞ then

    lim_{t→T} x_i(t)/‖x(t)‖ = 0.

Proof. Suppose x(t, p₀) is a solution of ẋ = g with x(0, p₀) = p₀ and x(T, p₀) = 0.
By the mean value theorem, there is some q in [0, T] so that
    0 = x_i(T, p₀) = x_i(0, p₀) + T g_i(x(q, p₀)),
or
    g_i(x(q, p₀))/x_i(0, p₀) = −1/T.
T may be considered to be a function of the initial condition p, where T(p) is
the time to origin for the trajectory beginning at p. We then have
    g_i(x(q(p), p))/x_i(0, p) = −1/T(p).
One may take the limit as p → 0 along the trajectory through the point p₀.
Since x_i(t, p) is a smooth function of t and vanishes at t = T(p),
    lim_{p→0} x_i(q, p)/x_i(0, p) = 1.
Thus
    lim_{p→0} [g_i(x(q, p))/x_i(q, p)] · [x_i(q, p)/x_i(0, p)] = lim_{p→0} −1/T(p) = −∞,
and so
    lim_{p→0} g_i(x(q, p))/x_i(q, p) = −∞.
g_i is Lipschitz, so g_i/‖x‖ is bounded. Thus
    lim_{p→0} [g_i(x(q, p))/‖x(q, p)‖] · [‖x(q, p)‖/x_i(q, p)] = −∞,
which implies that
    lim_{p→0} x_i(q, p)/‖x(q, p)‖ = 0.
This result tells us that the trajectories of a second order finite time system converge
in the state space tangent to the hyperplane x₁ = 0 (where x₁ denotes the position of the
system as above), since such a system must have at least one non-Lipschitz component.
Theorem 1 implies that for the system to reach zero in finite time, trajectories must
enter the region where the non-Lipschitz terms dominate.
We restrict our search for second order finite time systems to scalar problems. By
Theorem 1 we know that trajectories of finite time systems in the (x, ẋ) plane converge
tangent to the line x = 0. This tells us (among other things) that finite time trajectories
do not spiral around the origin infinitely often as they approach it.
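This tangency can be observed numerically. For ẍ = −sig^{.4} x − sig^{.5} ẋ (the system of Example 3 in § 5, shown there to be finite time), the sketch below Euler-integrates the planar system and records |x|/‖(x, ẋ)‖ just before the state norm drops below 10⁻⁴; near arrival the ratio should be small, i.e. the trajectory approaches the origin tangent to the line x = 0. Step size, horizon, and thresholds are our choices.

```python
import math

def sig(z, a):
    return math.copysign(abs(z) ** a, z) if z != 0 else 0.0

x, xdot, dt, t = 1.0, 0.0, 1e-3, 0.0
ratio_at_arrival, reached = None, False
while t < 200.0:
    norm = math.hypot(x, xdot)
    if norm < 1e-4:
        reached = True          # state has entered a small ball at the origin
        break
    ratio_at_arrival = abs(x) / norm   # |x| / ||(x, xdot)||, last value kept
    x, xdot = x + dt * xdot, xdot + dt * (-sig(x, 0.4) - sig(xdot, 0.5))
    t += dt
```

Far from the origin the ratio starts at 1; by the time the ball of radius 10⁻⁴ is entered it has collapsed, consistent with Theorem 1.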
In trying to generate examples of differential equations with certain asymptotic
behavior one may frequently exploit the fact that there are contours which may only

be crossed in certain directions, or not at all (as in the case of contours which are
invariant with respect to the flow) in order to trap the trajectory into some region. This

is of course the heart of Lyapunov theory. If one wishes to show that a second order
system is finite time, one could search for a contour that prevented trajectories from
spiraling around the origin. It seems natural to search for a contour which is itself
invariant. This idea lies at the core of the next two theorems.
THEOREM 2. Consider the scalar differential equation ẍ = g(x, ẋ) with g(0, 0) = 0.
Let g be in C¹ except at the origin, where it is only assumed to be continuous. Suppose
that the origin is asymptotically stable. Then all trajectories which reach zero do so in
finite time if and only if
(i) there exists a solution q to the scalar differential equation
    q(z) dq/dz = g(z, q(z)),   q(0) = 0,
such that ż = q(z) is a finite time scalar differential equation, and
(ii) every solution p to
    p(z) dp/dz = g(z, p(z)),   p(0) = 0,
is such that ż = p(z) is a finite time differential equation.
Proof. Note that the analysis may be restricted to a sufficiently small neighborhood
of the origin, N, such that all solutions with initial conditions in N converge to zero.
To prove sufficiency, two Lemmas are required. The structure of the proof is
Lemma 1 --> Lemma 2 --> sufficiency.
LEMMA 1. Suppose (x(t), ẋ(t)) is a solution of ẍ = g(x, ẋ) with x(T) = ẋ(T) = 0
for T ≤ ∞. Then there is an S, with 0 < S < T, such that for S < t < T, x(t)ẋ(t) < 0.
Proof. If x(t₁)ẋ(t₁) > 0 for some t₁ ≥ 0, then there is a t₂ > t₁ such that x(t₂)ẋ(t₂) <
0. Otherwise |x(t)| is always increasing for t > t₁ and thus x(t) cannot converge to zero.
Suppose there is no S such that x(t)ẋ(t) < 0 for S < t < T. This implies that if
x(t)ẋ(t) < 0 for t ≥ 0 there is an s > t so that x(s)ẋ(s) = 0. This in turn implies that
x(s) = 0, for the following reason: one may show fairly easily that xg(x, 0) < 0 for all
nonzero x, since the origin is the unique equilibrium solution (in N) of ẍ = g, and
since the origin is asymptotically stable. Thus the vector field (ẋ, g(x, ẋ)) points into
the region xẋ < 0 in (x, ẋ) space along the line ẋ = 0, and so trajectories leaving this
region must exit through the line x = 0.
There is then a sequence of times {t_i} (with i = 1, …, ∞ and lim t_i = T) with x(t_i) = 0
and ẋ(t_i) ≠ 0. Note that ẋ(t_i)ẋ(t_{i+1}) < 0. Thus
    (ẋ(t_i) − q(x(t_i)))(ẋ(t_{i+1}) − q(x(t_{i+1}))) = ẋ(t_i)ẋ(t_{i+1}) < 0.
(The equality holds since q(0) = 0 and x(t_i) = 0 for all i.)
This shows that the function on R², H(x, ẋ) = ẋ − q(x) (here we are regarding x
and ẋ as variables in R² rather than as functions of t), passes through zero along the
trajectory (x(t), ẋ(t)). By assumption (i), however, the contour H(x, ẋ) = 0 in R² is
invariant with respect to the flow. Thus H(x, ẋ) cannot change sign when it is evaluated
along the trajectory (x(t), ẋ(t)). This contradiction shows that Lemma 1 holds.
LEMMA 2. Suppose (x(t), ẋ(t)) is a solution of ẍ = g(x, ẋ) with x(T) = ẋ(T) = 0
for T ≤ ∞. There is an r in R⁺, and a function h which is continuous and smooth away
from 0, such that for r < t ≤ T,
    ẋ(t) = h(x(t)),   h(0) = 0.

Proof. By Lemma 1 there is an S satisfying 0 < S < T so that for S < t < T,
x(t)ẋ(t) < 0. Clearly x and ẋ do not change sign for such t. If x(t) is positive, then
it decreases as a function of t, and increases if it is negative. Thus no value of x is
reached twice along the trajectory (x(t), ẋ(t)) for t > S. This implies that ẋ(t) may be
expressed as a function of x(t): ẋ(t) = h(x(t)) for t > S. h(x) is smooth, except perhaps
at x = 0, because ẋ = h(x), so ẍ = (dh/dx)ẋ, or dh/dx = ẍ/ẋ, and ẍ(t)/ẋ(t) is continuous
(as a function of x and ẋ, not t) except perhaps at ẋ = 0; and ẋ = 0 on the trajectory only
when x = 0.
h is continuous at zero: lim_{x→0} h(x) = 0. Lemma 2 is proved. We show that it implies
sufficiency.

Let (x(t), ẋ(t)) be a solution of ẍ = g(x, ẋ) with x(T) = ẋ(T) = 0 for T ≤ ∞. By
Lemma 2 there is a function h such that ẋ(t) = h(x(t)) for sufficiently large t. We thus
have
(1)    ẍ = g(x, h(x)) = h(x) dh/dx
except at x = 0. But by L'Hospital's rule
    lim_{x→0} h(x)/x = lim_{x→0} g(x, h(x))/h(x) = dh/dx at x = 0,
showing that (1) is satisfied for x = 0 as well. Thus h satisfies the conditions of assumption
(ii), implying that ẋ = h(x) must be finite time. Under these conditions x reaches zero
in finite time, and since h(0) = 0, so does ẋ.
As we started with an arbitrary trajectory, all solutions must reach zero in finite time.
We prove necessity. Suppose ẍ = g(x, ẋ) is finite time. We know by Theorem 1
that lim_{t→T} (ẋ/x) = ±∞ for a solution (x, ẋ) with x(T) = ẋ(T) = 0.
Consider the function xẋ. If there is a sequence of times {t_i}, i = 1, …, ∞, with
lim t_i = T, where (xẋ)(t_i) = 0 for all i, then there is a sequence of times {s_j}, j = 1, …, ∞,
so that ẋ(s_j) = 0 for all j. Suppose this were not true. Then there would be some I so
that for i > I,
    x(t_i)ẋ(t_i) = 0 ⟹ x(t_i) = 0.
If x(t_i) = 0 and x(t_{i+1}) = 0 then
    ẋ(t_i)ẋ(t_{i+1}) < 0,
which indicates that ẋ = 0 at some point between t_i and t_{i+1}. This contradicts our
assumption.
Such a sequence {s_j} contradicts the fact that lim (ẋ/x) = ±∞ as t → T. Thus there
is an S < T so that for S < t < T, x(t)ẋ(t) < 0. If for instance x < 0 then x is strictly
increasing, and so there is a functional relationship between x and ẋ:
    ẋ(t) = q(x(t)),   with q(0) = 0.
One may show that q is smooth except perhaps at x = 0, so q clearly satisfies the
relationship
    q(x) dq/dx = g(x, q(x))
and also satisfies it in the limit as x goes to zero.


By assumption, the components x and ẋ of the solution to ẍ = g(x, ẋ) reach zero
in finite time. Thus ż = q(z) must be a finite time first order differential equation and
so condition (i) is satisfied. Condition (ii) also holds since for every solution p(z) to
    p(z) dp/dz = g(z, p(z)),   p(0) = 0,
one obtains a trajectory (x, ẋ) which satisfies ẋ = p(x). Once again, since x and ẋ reach
zero in finite time, ż = p(z) must be a finite time differential equation.
One should note that associated to each second order scalar finite time equation
ẍ = g there is a first order discontinuous differential equation (described in condition
(i) of Theorem 2). This equation has nonunique solutions through the initial condition
q(0) = 0, and these solutions are nonzero for z ≠ 0. This distinguishes them from the
classical examples of nonunique solutions of such equations as ẋ = sig^{1/2} x, with
x(0) = 0.
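The classical nonuniqueness mentioned here is easy to exhibit in closed form: for ẋ = sig^{1/2} x with x(0) = 0, both x ≡ 0 and x(t) = (t/2)² solve the equation forward in time. A quick residual check (the sample points are arbitrary):

```python
def x_trivial(t):
    return 0.0

def x_branch(t):
    # d/dt (t/2)^2 = t/2 = sqrt((t/2)^2), i.e. sig^{1/2} of x_branch(t), for t >= 0
    return (t / 2.0) ** 2

def residual(t, h=1e-6):
    """Central-difference derivative of x_branch minus sig^{1/2} x_branch."""
    deriv = (x_branch(t + h) - x_branch(t - h)) / (2 * h)
    return deriv - x_branch(t) ** 0.5
```

Both branches agree at t = 0 yet differ for t > 0, which is exactly the backward-time nonuniqueness that finite time equations must exhibit at the origin.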
Theorem 2 provides a qualitative description of the phase portraits of second
order finite time scalar systems. It can also be used to generate examples as we shall
see shortly.
The problem of determining the stability properties of an equilibrium point of a
non-Lipschitz differential equation can be difficult. In Theorem 2, asymptotic stability
of the origin is assumed. In the next theorem a special structure of g gives a Lyapunov
function associated with g.
THEOREM 3. Let ẍ = g(x, ẋ) = f(x) + d(ẋ), where f(0) = d(0) = 0, g is C¹ except at
zero where it is continuous, (ẋ, g) = (0, 0) only at (x, ẋ) = (0, 0), and f is monotone
decreasing. Then ẍ = g is finite time if and only if
(i) there exists a solution q(z) to the first order differential equation on R
(2)    q(z) dq/dz = g(z, q(z)),   q(0) = 0,
such that ż = q(z) is a finite time differential equation, and
(ii) any solution h to (2) is such that ż = h(z) is a finite time equation.
Proof. (Sufficiency). We need only prove that the origin is asymptotically stable,
and then apply Theorem 2. This will be accomplished by using a Lyapunov function.
Consider the function defined in a neighborhood of the origin

    v(x, ẋ) = (1/2)ẋ² − ∫₀^x f(z) dz.

This is positive definite since f is decreasing and f(0) = 0. Along trajectories,
    v̇ = ẋ(ẍ − f(x)) = ẋ d(ẋ).
We will show that this is negative for sufficiently small (x, ẋ) except when ẋ = 0.
By condition (i) there is a solution q(z) to
    q(z) dq/dz = f(z) + d(q(z)),
with q(0) = 0. ż = q(z) is finite time, so by Fact 1, zq(z) < 0 for small nonzero z; since
q(0) = 0, dq/dz is negative for small z > 0. If z > 0 then
q(z)(dq/dz) > 0 and so f(z) + d(q(z)) > 0. This implies that d(q(z)) > 0. (The last
inequality holds because f(z) < 0 for z > 0.) Since q(z) < 0 for z > 0, d(y) > 0 for y < 0.
Similarly one may show that d(y) < 0 if y > 0. Thus v̇ = ẋ d(ẋ) < 0 for ẋ ≠ 0.
We note that all trajectories are bounded as t → ∞, because if v(x₀) = c (c in R⁺)
then v(x(t, x₀)) ≤ c. Thus we may use LaSalle's theorem [4] to conclude that all trajectories
converge to the origin.
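The energy argument above can be spot-checked numerically for a member of the class treated in the next section, say f(x) = −sig^{.4} x and d(ẋ) = −sig^{.5} ẋ: then v(x, ẋ) = ẋ²/2 + |x|^{1.4}/1.4 and v̇ = ẋ d(ẋ) ≤ 0. The sketch below Euler-integrates and confirms v never rises above its initial value beyond discretization error (parameters are our choices).

```python
import math

def sig(z, a):
    return math.copysign(abs(z) ** a, z) if z != 0 else 0.0

def v(x, xdot):
    # v(x, xdot) = xdot^2/2 - integral_0^x f, with f(z) = -sig^{0.4} z
    return 0.5 * xdot ** 2 + abs(x) ** 1.4 / 1.4

x, xdot, dt = 1.0, 0.0, 1e-3
v0 = v_max = v(x, xdot)
for _ in range(40000):              # integrate to t = 40 by forward Euler
    x, xdot = x + dt * xdot, xdot + dt * (-sig(x, 0.4) - sig(xdot, 0.5))
    v_max = max(v_max, v(x, xdot))  # track any (spurious) energy increase
v_final = v(x, xdot)
```

Since v is a decreasing function of time along trajectories and vanishes only at the origin, v_final should be numerically zero once the state has arrived.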

Necessity is a tautology, because the assumption that ẍ = g is finite time includes
the asymptotic stability of the origin. Thus we are back in the situation of Theorem 2.

5. Examples of second order systems. With Theorem 3 we may generate a class of
finite time second order systems. These are presented in the following corollary.
COROLLARY 1. Let ẍ = −sig^a x − sig^b ẋ with a > 0, b > 0. If (A) b < 1 and (B)
a > b/(2 − b), then ẍ = g is finite time.
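Conditions (A) and (B) are easy to encode as a quick check (the function name is ours):

```python
def is_finite_time_gain_pair(a, b):
    """Corollary 1: xddot = -sig^a x - sig^b xdot is finite time when
    b < 1 (condition A) and a > b/(2 - b) (condition B), with a, b > 0."""
    return a > 0 and 0 < b < 1 and a > b / (2.0 - b)
```

For instance the pair (a, b) = (.4, .5) of Example 3 below qualifies, since .4 > .5/1.5 = 1/3, while (1/3, .5) sits exactly on the boundary and fails.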
Proof. −sig^a x is monotone decreasing. In order to apply Theorem 3, we must
prove that
(i) there exists a finite time solution q to
    q dq/dx = −sig^a x − sig^b q(x),   with q(0) = 0,
and
(ii) that all such solutions are finite time.
Suppose that
(3)    q dq/dx = −sig^a x − sig^b q(x),   q(0) = 0.
We would like to find a solution to this differential equation. The first thing that one
notices about it, however, is that it is discontinuous when q = 0. Thus the existence
theorems for solutions of continuous differential equations are not applicable. We will
generate some continuous differential equations from (3).
Consider p(x) = (1/2)q(x)². This gives
    dp/dx = q(x) dq/dx = −sig^a x − sig^b q(x).
Suppose we knew that xq(x) < 0 for x ≠ 0. If x ≥ 0 and p ≥ 0 then
(4)    dp/dx = −sig^a x + r p(x)^{b/2},   p(0) = 0
(where r = 2^{b/2}), and if x ≤ 0, p ≥ 0, then
(5)    dp/dx = −sig^a x − r p(x)^{b/2},   p(0) = 0.
Clearly if each of these equations has a positive solution p, for x ≥ 0, x ≤ 0
respectively, and if xq(x) < 0 for nonzero x, then there is a solution to (3) for all x.
We first prove that xq(x) < 0 for nonzero x. Suppose that x > 0 and q > 0. Then
    q dq/dx < 0, so dq/dx < 0.
If q(0) = 0, however, this means we have a function which vanishes at zero and is positive
for x > 0, where it has a negative derivative. This contradiction leads us to conclude
that any solution q of (3) is such that xq(x) < 0 for x > 0. One can show similarly that
xq(x) ≤ 0 for all x and only vanishes at the origin.
Both differential equations (4) and (5) are defined on closed sets. In order to use
existence theorems for continuous differential equations on open sets to show that
these equations have solutions, we must extend their domains of definition. This may
be done as follows.

Consider the differential equations
(6)    dp/dx = −sig^a x + r|p(x)|^{b/2},   p(0) = 0,
and
(7)    dp/dx = −sig^a x − r|p(x)|^{b/2},   p(0) = 0.
These differential equations are defined for all real x and p, and reduce to (4) and (5)
when p > 0. If (6) has a solution p which is positive for x > 0, and if (7) has a solution
p which is positive for x < 0, then (4) and (5) have the required solutions.
Consider (6). If there is a positive solution p for x > 0, with p(0) = 0, then dp/dx > 0
for x > 0 if x is sufficiently small. This implies that either
(i) lim_{x→0⁺} dp/dx = lim_{x→0⁺} r p(x)^{b/2} (the p^{b/2} term dominates), or
(ii) lim_{x→0⁺} x^a = lim_{x→0⁺} r p(x)^{b/2} (the two terms balance).
In case (i), integrating dp/dx ≈ r p^{b/2} shows that p(x) behaves like x^{2/(2−b)}
near 0. Since the p^{b/2} term dominates the x^a term, we must have a > b/(2 − b).
In case (ii), p(x) behaves like x^{2a/b} near 0. In this case we have 2a/b − 1 ≥ a,
since the x^a term dominates dp/dx near 0. But 2a/b − 1 ≥ a
implies a ≥ b/(2 − b). In order for there to be a negative solution p(x) to (6) for x > 0,
one may show similarly that it is necessary that a ≤ b/(2 − b).
We know that there is a solution p(x), with p(0) = 0, to (6) for x > 0, by the
existence theorem for solutions of continuous differential equations on open sets. Thus
if a > b/(2 − b) this solution must be positive.
One may show by the same method that (7) may have a positive solution for x < 0
if and only if a ≥ b/(2 − b), and must have a positive solution if the inequality is strict.
Thus if assumption (B) holds, then (3) has a solution q. It is also clear from the above
analysis that if assumption (B) holds then
(8)    lim_{x→0} q(x)/(−sig^z x) = 1   for z = 1/(2 − b).
We claim that ẋ = q(x) is finite time if b < 1. Define h(x) by the equation q(x) =
h(x) − sig^z x. Then
    lim_{x→0} h(x)/sig^z x = 0.

Consider the function v(x)=lxl-z/1-z, which is positive definite if z < 1. Then


t =(grad v, q)= (sig x)(h(x)-sig x)
Downloaded 10/04/19 to 137.110.60.64. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php

and
lim t =-1-<-v /2 near zero.
x-0

Thus by Proposition 1 =q is finite time if z < 1, and z < 1 implies that b < 1. (Since
the time to origin function v, for q, is unbounded at zero if b-> 1 it follows that
the equation =q is finite time iff b < 1.)
It only remains to show that all solutions of (3) are finite time. By the above
analysis any solution of (3), q, satisfies (8). If b < 1 then all such solutions are finite time.
Note that we have also shown that if a < b/(2-b) or if b= > 1 then 5/=g is not
finite time. We have for instance
Example 3. ẍ = −sig^{.4} x − sig^{.5} ẋ is a finite time differential equation.
The phase portrait of this differential equation is displayed in Fig. 1.
It should be noted that this figure also displays the nonunique solutions of the
scalar first order differential equation q(x)(dq/dx) = −sig^{.4} x − sig^{.5} q(x), with q(0) = 0,
where the x-axis is horizontal and the q(x) axis is vertical.
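Example 3 can also be contrasted with plain linear feedback numerically. Below, both ẍ = −sig^{.4} x − sig^{.5} ẋ and the linear law ẍ = −x − ẋ are Euler-integrated from (x, ẋ) = (1, 0), recording the first time each state norm falls below 10⁻⁵; the finite time controller should arrive first, while the linear loop only decays exponentially. Gains, step size, and threshold are our choices, not from the paper.

```python
import math

def sig(z, a):
    return math.copysign(abs(z) ** a, z) if z != 0 else 0.0

def settle_time(accel, dt=1e-3, tol=1e-5, t_max=100.0):
    """First time the Euler trajectory from (1, 0) enters the tol-ball."""
    x, xdot, t = 1.0, 0.0, 0.0
    while t < t_max:
        if math.hypot(x, xdot) < tol:
            return t
        x, xdot = x + dt * xdot, xdot + dt * accel(x, xdot)
        t += dt
    return math.inf

t_finite = settle_time(lambda x, xd: -sig(x, 0.4) - sig(xd, 0.5))
t_linear = settle_time(lambda x, xd: -x - xd)
```

This is the practical content of the introduction: near the set point the linear loop crawls exponentially, while the non-Lipschitz terms of the finite time controller dominate and extinguish the error.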
A more cumbersome proof along the lines of the proof of Corollary 1 may be
used to show that the following holds.
COROLLARY 2. Let ẍ = −sig^a x − sig^b ẋ + f(x) + d(ẋ), where a > 0, 1 > b > 0, a >
b/(2 − b), f(0) = d(0) = 0, O(f) > O(|x|^a) and O(d) > O(|ẋ|^b); then ẍ = g is a finite time
equation.
The class of examples described in Corollary 2 is quite large. Theorem 2 describes
the phase portrait of second order, finite time scalar systems, and also enables us to
identify a class of these systems. These may in turn be used as finite time controllers.
For instance, the block diagram implementation for controlling a link of a robot arm
is displayed in Fig. 2. In this scheme I is the moment of inertia of the link.

FIG. 2. Second order servo-system with finite time feedback.

Acknowledgments. I would like to thank Roger Brockett for many helpful dis-
cussions on this material. John Baillieul also offered some interesting insights.

REFERENCES
[1] R. W. BROCKETT AND C. I. BYRNES, Multivariable Nyquist criterion, root locus, and pole placement by
output feedback, IEEE Trans. Automat. Control, AC-26 (1981), pp. 271-284.
[2] S. R. PECK, Combinatorics of Schubert calculus and inverse eigenvalue problems, Ph.D. Thesis, Harvard
Univ., Cambridge, MA, 1984.
[3] A. E. BRYSON AND Y. C. HO, Applied Optimal Control, John Wiley, New York, 1975.
[4] J. P. LASALLE, Some extensions of Lyapunov's second method, IEEE Trans. Circuit Theory, CT-7 (1960),
pp. 520-527.
