
"Optimization is essentially the art, science and mathematics of choosing the best among a

given set of finite or infinite alternatives. A new impetus to the development of optimization
began with dramatic development of linear programming in the late 1940's. Convexity also
brought in a complete new feature to optimization"

Optimization Theory - A Modern Face of Applied Mathematics


By Joydeep Dutta

1. Optimization Theory: A Historical Rundown


Optimization is essentially the art, science and mathematics of choosing the best among a given set of finite or infinite alternatives. Though optimization is currently an interdisciplinary subject cutting through the boundaries of mathematics, economics, engineering, the natural sciences, and many other fields of human endeavour, it had its roots in antiquity. In fact the oldest story that goes with the history of optimization concerns an ancient princess named Dido. She was fleeing from the persecution of her brother, and a piece of land on the Mediterranean coast caught her fancy. She made a deal with the local leader: she requested him to cut a bull's hide into thin strips, tie them up, and enclose as much land as one can with it. In modern-day language the problem is as follows: among all closed curves of a given length, find the one that encloses maximum area. This is called the Isoperimetric problem, and it is now mentioned in a regular fashion in any course on the Calculus of Variations. However, most problems of antiquity came from geometry, and since there were no general methods to solve such problems, each one of them was solved by a very different approach. An interesting collection of such problems can be found in the book titled Stories about Maxima and Minima by V. M. Tikhomirov.

The beginning of the modern methods of optimization can be traced back to the growth of the Calculus of Variations. In 1696 Johann Bernoulli proposed the famous Brachistochrone problem (Figure 1): find the curve along which a particle moving from one point to another in a vertical plane does so in minimum time.

[Figure 1: Brachistochrone curve.]

This problem precisely gave rise to the Calculus of Variations and still continues to test all our modern methods. Though this problem even attracted the genius of Isaac Newton, it was Euler who first gave a framework to attack such problems. A typical calculus of variations problem consists of minimizing an integral under some given boundary conditions. In fact it was Joseph L. Lagrange who in 1757 first gave a general framework for tackling problems of the calculus of variations. His method, popularly known as the Lagrangian multiplier rule, still continues to be used and remains one of the central topics in optimization theory. However, the method for solving simple unconstrained problems on the real line was known to Fermat much before derivatives were introduced in the mathematical literature. In today's language, if $x \in \mathbb{R}^n$ is a local minimum of $f : \mathbb{R}^n \to \mathbb{R}$, then $\nabla f(x) = 0$. It was Leibniz who first gave a more detailed characterization of real-valued functions in terms of their derivatives, involving both first-order and second-order conditions.
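As a quick illustration of Fermat's rule, here is a minimal sketch (the function and its minimizer below are illustrative choices, not from the article):

```python
# Fermat's rule: at a local minimum, the gradient vanishes.
import numpy as np

def f(p):
    x, y = p
    return (x - 1.0) ** 2 + (y + 2.0) ** 2   # minimum at (1, -2)

def grad_f(p):
    x, y = p
    return np.array([2.0 * (x - 1.0), 2.0 * (y + 2.0)])

minimizer = np.array([1.0, -2.0])
print(grad_f(minimizer))   # -> [0. 0.]: the gradient vanishes at the minimum
```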
However, the calculus of variations continued to steal the limelight, and continued to do so till the nineteenth century. From the modern point of view the problems of the Calculus of Variations can be thought of as infinite-dimensional optimization problems and can be solved using modern techniques. Its methods remain important, however, since in those days, without any knowledge of infinite-dimensional problems, Euler and Lagrange could develop a general framework for solving them. Even David Hilbert mentioned the importance of the subject in his famous 1900 lecture at the International Congress of Mathematicians. In fact the calculus of variations later gave rise to the modern subject of optimal control, which has a large number of applications in engineering and space science.

However, during the early years of the 20th century there was hardly any development of optimization theory. A new impetus to the development of optimization began with the dramatic development of linear programming in the late 1940's. A linear programming problem consists of linear functions as objective and constraints. It became apparent that linear programming has tremendous applications in economics, military operations, business, engineering, etc. It was the development of the simplex method for solving linear programming problems that became the turning point of the subject; George B. Dantzig, who formulated this method, became an icon, and linear programming became an important part of applied mathematics. However, in the early 1950's it was observed that there are important application problems which involve nonlinear functions as well as constraints represented by inequalities. Thus, in order to tackle inequality constraints, some new mathematics was developed, and the first optimality condition in this direction was given by Fritz John (1948) and later refined by Kuhn and Tucker (1951). The Kuhn-Tucker conditions turned out to be very practical and were thus used to develop algorithms for inequality constrained minimization problems. However, it was later found that inequality constraints had also been studied by W. Karush way back in 1939, and thus the conditions are called the Karush-Kuhn-Tucker conditions, or simply the KKT conditions. It is important to note that Lagrange had only studied equality constraints. Thus the KKT conditions can be thought of as the Lagrangian multiplier rule for inequality constraints.
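To see the KKT conditions at work, consider a standard one-dimensional worked example (not from the article): minimize $f(x) = x^2$ subject to $g(x) = 1 - x \le 0$. The conditions ask for a multiplier $\lambda \ge 0$ satisfying $f'(x) + \lambda g'(x) = 2x - \lambda = 0$ together with the complementarity condition $\lambda g(x) = 0$. Taking $\lambda = 0$ would force $x = 0$, which is infeasible, so the constraint must be active: $x = 1$ with $\lambda = 2 \ge 0$, and indeed $x = 1$ is the minimizer.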
2. Convexity: A Crucial Matter in Optimization Theory

A KKT point need not be a minimum of a given program. However, the sufficiency of the KKT conditions is guaranteed when the objective and constraint functions are assumed to be convex. This is precisely where convexity enters the larger picture of optimization and goes on to play a central role. Modern optimization is by and large based on convexity and notions relying on it. A function $f : C \subseteq \mathbb{R}^n \to \mathbb{R}$ is said to be convex over the convex set $C$ if for any $x, y \in C$ and $\lambda \in [0, 1]$ we have

$f(x + \lambda(y - x)) \le \lambda f(y) + (1 - \lambda) f(x)$.

When $f$ is differentiable, convexity has the following equivalent characterization:

$f(y) - f(x) \ge \langle \nabla f(x), y - x \rangle$ for all $x, y \in C$.

It is clear from the above expression that every critical point (a point with $\nabla f(x) = 0$) of a convex function is a global minimum. In fact, even without differentiability one can easily show that every local minimum of a convex function is global. This simple feature makes convexity such an attractive thing in optimization theory.
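The gradient inequality is easy to spot-check numerically; here is a minimal sketch (the function $f(x) = \|x\|^2$ and the random sampling are illustrative choices):

```python
# Spot-check of the gradient inequality f(y) - f(x) >= <grad f(x), y - x>
# for the convex function f(x) = ||x||^2 on R^2.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return float(x @ x)

def grad_f(x):
    return 2.0 * x

for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    # here f(y) - f(x) - <grad f(x), y - x> equals ||y - x||^2 >= 0
    assert f(y) - f(x) >= grad_f(x) @ (y - x) - 1e-12
print("gradient inequality held on 1000 random pairs")
```

For this particular $f$ the gap between the two sides is exactly $\|y - x\|^2$, which is why the inequality holds with room to spare.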
Convexity also brought in a completely new feature to optimization: that of nonsmoothness, or non-differentiability. In fact there are very simple convex functions which are not differentiable at the point of minimum. The function $f(x) = |x|$ is the prototype example, shown in Figure 2.

[Figure 2: A plot of $f(x) = |x|$.]
Observe that this function attains its minimum at $x = 0$, yet it does not have a derivative there. Thus the notion of the subgradient was developed in order to imitate the notion of the derivative at points of non-differentiability. For a real-valued convex function $f$ defined over an open convex set, at any point $x$ there exists a vector $\xi$, called a subgradient of $f$ at $x$, which satisfies the following inequality:

$f(y) - f(x) \ge \langle \xi, y - x \rangle$ for all $y$.

Let us note that the subgradient need not be unique (loss of differentiability at a point corresponds to the fact that more than one tangent plane can pass through the same point). The collection of all subgradients at a given point is called the subdifferential of $f$ at $x$ and is denoted $\partial f(x)$. Further, a point $x$ is a global minimum if and only if $0 \in \partial f(x)$. This can indeed be thought of as the generalization of Fermat's rule from the differentiable case. In fact, consider the following simple convex problem:

minimize $f(x)$ subject to $g(x) \le 0$,

where $f$ and $g$ are real-valued convex functions on $\mathbb{R}^n$. Let $x_0$ be a minimum of the above problem, and assume also that there exists $\hat{x}$ such that $g(\hat{x}) < 0$. Then one can show that there exists $\lambda \ge 0$ such that

$0 \in \partial f(x_0) + \lambda \partial g(x_0)$ and $\lambda g(x_0) = 0$.

The above conditions can be thought of as KKT type conditions for a convex problem in a nonsmooth, or general, setting.
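For the prototype function $f(x) = |x|$ the subdifferential can be written down explicitly, and the generalized Fermat rule $0 \in \partial f(x)$ confirms that $x = 0$ is the global minimum. A minimal sketch (encoding the interval as a pair of endpoints is an illustrative choice):

```python
# Subdifferential of f(x) = |x|: {-1} for x < 0, {+1} for x > 0,
# and the whole interval [-1, 1] at the kink x = 0.
def subdiff_abs(x):
    if x < 0.0:
        return (-1.0, -1.0)
    if x > 0.0:
        return (1.0, 1.0)
    return (-1.0, 1.0)   # every xi with |xi| <= 1 satisfies |y| >= xi * y

lo, hi = subdiff_abs(0.0)
print(lo <= 0.0 <= hi)   # -> True: 0 lies in the subdifferential at x = 0,
                         # so x = 0 is the global minimum
```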
Thus, with a powerful tool like the subdifferential, convexity became a very amenable object even in the nonsmooth setting. In fact Tucker coined the term Convex Analysis in order to bring out the importance of convexity, and the publication of the classic monograph Convex Analysis (Princeton, 1970) by R. T. Rockafellar established convex analysis as an important branch of modern mathematics. It is important to note that linear programming problems are a subclass of convex programming problems. As Rockafellar has recently pointed out, the great watershed in optimization is not really between linearity and nonlinearity but between convexity and nonconvexity. Since then convexity has continued to play a central role in optimization theory. In fact, so deep-rooted is the notion of convexity that nonconvex problems are usually tackled by devising suitable convex approximations.

3. Smoothness of Functions

Though convexity has such power, it was observed that many application problems are devoid of convexity and are non-differentiable as well. This led Francis Clarke to study locally Lipschitz nonsmooth functions in detail in the early 1970's. He developed the notion of the generalized gradient, or the Clarke subdifferential, and used it to develop KKT type conditions for mathematical programming problems as well as necessary conditions for optimal control problems. This new approach to tackling nonsmoothness in optimization gave rise to the subjects of Nonsmooth Analysis and Nonsmooth Optimization. However, Clarke's approach depended heavily on the tools of convex analysis. A complete move away from convexity was championed by Mordukhovich, who developed the now famous sequential nonsmooth analysis. Mordukhovich succeeded in developing a robust subdifferential for lower-semicontinuous functions which is nonconvex in general and is contained in the Clarke subdifferential. Motivated by optimal control, Mordukhovich introduced the notion of the Extremal Principle, which turned out to have far-reaching consequences and has applications to areas that go beyond traditional optimization.

4. Numerical Methods for Optimization

Meanwhile, on the other side, during the mid 1960's and 1970's there was tremendous growth in the development of algorithms for solving nonlinear programming problems. It first began with attempts to develop efficient and implementable algorithms to solve the unconstrained problem

minimize $f(x)$, $x \in \mathbb{R}^n$.

The most efficient methods developed for solving this problem used the principle of line search, i.e. one moves from the current iterate $x$ to the next one, denoted $x_+$, in the following manner:

$x_+ = x + td$,

where $d$ is called the descent direction, satisfying $\langle \nabla f(x), d \rangle < 0$, and $t > 0$ is called the steplength.
This leads to the fact that $f(x_+) < f(x)$ for sufficiently small $t > 0$, and thus by moving along $d$ one is able to improve the objective function. Various methods of this kind have been developed using the line search philosophy.
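A minimal line-search sketch (the Armijo backtracking rule, the parameter values and the test function are illustrative choices, not prescribed by the article):

```python
# Steepest descent with backtracking (Armijo) line search.
import numpy as np

def descent(f, grad, x, alpha=0.3, beta=0.5, tol=1e-8, max_iter=1000):
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:    # approximate stationarity: grad f ~ 0
            break
        d = -g                         # descent direction: <grad f(x), d> < 0
        t = 1.0
        # shrink the steplength until sufficient decrease is obtained
        while f(x + t * d) > f(x) + alpha * t * (g @ d):
            t *= beta
        x = x + t * d                  # the update x_+ = x + t d
    return x

f = lambda v: (v[0] - 3.0) ** 2 + 10.0 * v[1] ** 2
grad = lambda v: np.array([2.0 * (v[0] - 3.0), 20.0 * v[1]])
print(descent(f, grad, np.array([0.0, 1.0])))   # -> approximately [3, 0]
```

Since $\langle \nabla f(x), d \rangle < 0$, the backtracking loop always terminates at a steplength that actually decreases the objective.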
Further, finding a minimum amounts to finding a root of the equation $\nabla f(x) = 0$. Thus various approaches like the Newton method or quasi-Newton methods used in equation solving can be applied to optimization. The Newton method has the following scheme of iteration:

$x_+ = x - [\nabla^2 f(x)]^{-1} \nabla f(x)$.

Major contributions in this area have been made by Davidon, Powell, Fletcher, Broyden, Goldfarb, and Shanno.
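A minimal sketch of the Newton iteration (the test function is an illustrative choice; the step is computed by a linear solve rather than an explicit matrix inverse):

```python
# Newton's method for minimization: x_+ = x - [hess f(x)]^{-1} grad f(x).
import numpy as np

def newton(grad, hess, x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)   # Newton step via a linear solve
    return x

# Illustrative test function f(x, y) = x^4 + y^2, minimized at the origin.
grad = lambda v: np.array([4.0 * v[0] ** 3, 2.0 * v[1]])
hess = lambda v: np.array([[12.0 * v[0] ** 2, 0.0], [0.0, 2.0]])
print(newton(grad, hess, np.array([1.0, 1.0])))   # -> approximately [0, 0]
```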
In solving constrained problems, the penalty method, the barrier method and the sequential quadratic programming scheme are important classical approaches.

It is important to note that the simplex method for linear programming, which has been effective in solving many practical problems, is not a polynomial-time algorithm. Thus there have been efforts to develop a polynomial-time algorithm for linear programming which would be, if not better, at least as efficient as the simplex method in practice. The major breakthrough came in 1984 when Karmarkar announced his now famous algorithm. In Karmarkar's method, in contrast to the simplex method, one moves through the interior of the feasible region towards a vertex. Karmarkar's work gave rise to the now important interior point methods of convex optimization. Interior point methods remain one of the most fertile fields of modern optimization research, and their growth has also led to the growth of the modern areas of semidefinite programming and conic programming.
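The flavour of barrier-based interior point methods can be conveyed by a tiny sketch (an illustrative one-dimensional problem of my choosing; Karmarkar's actual algorithm is projective and far more involved):

```python
# Log-barrier idea: minimize f(x) = x subject to x >= 1 by minimizing
# the barrier function B(x) = x - mu * log(x - 1) for decreasing mu.
# Setting B'(x) = 1 - mu / (x - 1) = 0 gives the barrier minimizer in
# closed form: x(mu) = 1 + mu, which stays strictly interior and tends
# to the true solution x* = 1 as mu -> 0.
for mu in [1.0, 0.1, 0.01, 0.001]:
    x_mu = 1.0 + mu
    print(f"mu = {mu:6.3f}  ->  x(mu) = {x_mu:.3f}")
```

The curve $\mu \mapsto x(\mu)$ is the central path; practical interior point methods follow it approximately while driving $\mu$ to zero.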
Apart from these, the important areas associated with modern optimization include semi-infinite programming, vector optimization, global optimization, bi-level programming, mathematical programming under equilibrium constraints and Abstract Convexity, just to name a few.
5. My Interests in IITK

My research initially centered around nonsmooth optimization: trying to understand and design subdifferentials for nonconvex functions and applying them to develop necessary optimality conditions. Relations of subdifferentials to generalized convex functions have also been one of the areas I am interested in. Later, during my post-doctoral days, I worked on the notion of a convexity factor, which is a very general object and possesses many generic properties of most of the well-known subdifferentials. I also got interested in an emerging area of optimization called Abstract Convexity during my postdoctoral days, and it remains my major interest. Abstract Convexity has grown out of generalizing a well-known global property of convex functions, namely that every convex function is the upper envelope of the family of affine functions majorized by it (i.e. pointwise smaller than itself). The sub-area of Abstract Convexity where I have worked most is called Monotonic Analysis, wherein one studies functions which are increasing and have some additional properties, like positive homogeneity or convexity along rays. This area has important applications to global optimization. In this area I collaborate with Professor A. Rubinov (Ballarat, Australia) and Professor J. E. Martinez-Legaz (Barcelona, Spain). Apart from this, I am interested in studying various theoretical aspects of vector optimization. Recently my interest has also shifted to bi-level programming and mathematical programming with equilibrium constraints. I plan to study optimality conditions in this particular area, since the most well-known constraint qualifications fail. Further, I plan to have a major project in the area of interior point method theory, and I intend to study how to increase the speed of interior point algorithms for conic programs by using suitable self-concordant barriers. In fact there have been a few encouraging recent results in this direction from some leading researchers. I also plan to develop software with a user-friendly front end for interior point methods in convex programming, and to build a library of such programs. Apart from the computational aspects, there are also interesting theoretical aspects that can be studied in interior point theory, for example the detailed study of self-scaled cones and the development of faster path-following algorithms.

References
1. C. R. Bector, S. Chandra and J. Dutta, Principles of Optimization
Theory, Narosa Publishers, New Delhi, 2004.
2. D. P. Bertsekas, Convex Analysis and Optimization, Athena Scientific
Publication, 2003.
3. D. P. Bertsekas, Nonlinear Programming, Athena Scientific
Publication, 1999.
4. J. Borwein and A. S. Lewis, Convex Analysis and Nonlinear
Optimization, Springer, 2000.
5. F. H. Clarke, Optimization and Nonsmooth Analysis, Wiley
Interscience, 1983.
6. B. D. Craven, Control and Optimization, Chapman and Hall, 1995.
7. S. Dempe, Foundations of Bi-level Programming, Kluwer Academic
Publishers, 2002.
8. J. Jahn, Vector Optimization, Springer, 2004.
9. J. Renegar, A Mathematical View of Interior-Point Methods in Convex
Optimization, SIAM, 2001.
10. R. T. Rockafellar, Convex Analysis, Princeton University Press, 1970.
11. R. T. Rockafellar and R. J. B. Wets, Variational Analysis, Springer,
1998.
12. A. Rubinov, Abstract Convexity and Global Optimization, Kluwer
Academic Publishers, 2000.

About the author: Joydeep Dutta received his Ph.D. in Mathematics in 1998
from I.I.T. Kharagpur with specialization in Nonsmooth Optimization.
Subsequently he did his post-doctoral studies at the Indian Statistical Institute,
New Delhi, and the Autonomous University of Barcelona, Spain. His main
research interests are in Abstract Convexity, Nonsmooth Optimization and
Vector Optimization. He is currently an Assistant Professor in the Department
of Mathematics at I.I.T. Kanpur.

Books Written by IITK Faculty

Engineering Optimization: Theory and Practice by Singiresu S. Rao


Publisher: New Age Intl. Pvt. Ltd., Delhi (Reprint, 2003)
This book provides practical, up-to-date and comprehensive coverage of new
and classical optimization techniques currently in use throughout a wide range
of industries. Designed to serve as both a daily working resource and a
graduate-level text, it gives in-depth coverage of linear and non-linear
programming, dynamic programming, integer programming and stochastic
programming techniques.

Network Flows by Ravindra K. Ahuja, Thomas L. Magnanti, James B. Orlin


Publisher: Prentice-Hall, 1993
The book presents a comprehensive introduction to network flows that brings
together the classic and the contemporary aspects of the field, and provides an
integrative view of theory, algorithms, and applications. "This book contains a lot of
great algorithms for network flow theory and it also contains many of the great
applications, which are very useful in practice." (Tiravat Assavapokee, Georgia)

Linear Programming and Network Models by S. K. Gupta


Publisher: Affiliated East-West Publishers. Delhi, 1984
The book presents in an elementary but rigorous manner the most important
aspects of linear programming essential to a beginner in operations research. This
book is currently out of print.

Linear Programming for Schools by S. K. Gupta


Publisher: Arya Book Depot, New Delhi, 1968
The book presents linear programming methods for school students. This book is currently out of
print.

more books on Page 6

