Scatter Search and Local NLP Solvers: A Multistart Framework For Global Optimization
This article may be used only for the purposes of research, teaching, and/or private study. Commercial use or systematic downloading (by robots or other automatic processes) is prohibited without explicit Publisher approval, unless otherwise noted. For more information, contact permissions@informs.org.

The Publisher does not warrant or guarantee the article's accuracy, completeness, merchantability, fitness for a particular purpose, or non-infringement. Descriptions of, or references to, products or publications, or inclusion of an advertisement in this article, neither constitutes nor implies a guarantee, endorsement, or support of claims made about that product, publication, or service.
INFORMS is the largest professional society in the world for professionals in the fields of operations research, management
science, and analytics.
For more information on INFORMS, its publications, membership, or meetings visit http://www.informs.org
INFORMS Journal on Computing
Vol. 19, No. 3, Summer 2007, pp. 328–340
ISSN 1091-9856 | EISSN 1526-5528 | 07 | 1903 | 0328
DOI 10.1287/ijoc.1060.0175
© 2007 INFORMS
Zsolt Ugray
Management Information Systems Department, Utah State University, Logan, Utah 84322,
zsolt.ugray@usu.edu
Leon Lasdon
Department of Information, Risk, and Operations Management, McCombs School of Business,
The University of Texas at Austin, Austin, Texas 78712, lasdon@mail.utexas.edu
John Plummer
Department of Computer Information Systems and Quantitative Methods, Texas State University,
San Marcos, Texas 78666, jp05@txstate.edu
Fred Glover
University of Colorado, Boulder, Colorado 80309, fred.glover@colorado.edu
James Kelly
OptTek Systems, Inc., Boulder, Colorado 80302, kelly@opttek.com
Rafael Martí
Departamento de Estadística e Investigación Operativa, University of Valencia,
46100 Burjassot, Valencia, Spain, rafael.marti@uv.es
The algorithm described here, called OptQuest/NLP or OQNLP, is a heuristic designed to find global optima for pure and mixed integer nonlinear problems with many constraints and variables, where all problem functions are differentiable with respect to the continuous variables. It uses OptQuest, a commercial implementation of scatter search developed by OptTek Systems, Inc., to provide starting points for any gradient-based local solver for nonlinear programming (NLP) problems. This solver seeks a local solution from a subset of these points, holding discrete variables fixed. The procedure is motivated by our desire to combine the superior accuracy and feasibility-seeking behavior of gradient-based local NLP solvers with the global optimization abilities of OptQuest. Computational results include 155 smooth NLP and mixed integer nonlinear program (MINLP) problems due to Floudas et al. (1999), most with both linear and nonlinear constraints, coded in the GAMS modeling language. Some are quite large for global optimization, with over 100 variables and 100 constraints. Global solutions to almost all problems are found in a small number of local solver calls, often one or two.

Key words: global optimization; multistart heuristic; mixed integer nonlinear programming; scatter search; gradient methods

History: Accepted by Michel Gendreau, Area Editor for Heuristic Search and Learning; received August 2002; revised January 2005, September 2005; accepted January 2006. Published online in Articles in Advance July 20, 2007.
where x is an n-dimensional vector of continuous decision variables, y is a p-dimensional vector of discrete decision variables, and the vectors gl, gu, l, and u contain upper and lower bounds for the nonlinear and linear constraints, respectively. The matrices A1 and A2 are m2 by n and m2 by p, respectively, and contain the coefficients of any linear constraints. The set S is defined by simple bounds on x, and we assume that it is closed and bounded, i.e., that each component of x has a finite upper and lower bound. This is required by the OptQuest scatter-search procedure. The set Y is assumed to be finite, and is often the set of all p-dimensional binary or integer vectors y that satisfy finite bounds. The objective function f and the m1-dimensional vector of constraint functions G are assumed to have continuous first partial derivatives at all points in S × Y. This is necessary so that a gradient-based local NLP solver can be applied to the relaxed NLP sub-problems formed from (1)–(4) by allowing the y variables to be continuous.

2. Multistart Algorithms for Global Optimization

In this section, which reviews past work on multistart algorithms, we focus on unconstrained problems where there are no discrete variables, since to the best of our knowledge multistart algorithms have been investigated theoretically only in this context. These problems have the form of (1)–(4) with no y variables and no constraints except the bounds x ∈ S in (4). All global minima of f are assumed to occur in the interior of S. By multistart we mean any algorithm that attempts to find a global solution by starting a local NLP solver, denoted by L, from multiple starting points in S. The most basic multistart method generates uniformly distributed points in S, and starts L from each of these. This converges to a global solution with probability one as the number of points approaches infinity; in fact, the best of the starting points converges as well. However, this procedure is very inefficient because the same local solution is located many times.

A convergent procedure that largely overcomes this difficulty is called multilevel single linkage (MLSL) (Rinnooy Kan and Timmer 1987a, b). MLSL uses a simple rule to exclude some potential starting points. A uniformly distributed sample of Ns points in S is generated, and the objective f is evaluated at each point. The points are sorted according to their f values, and the qNs best points are retained, where q is an algorithm parameter between 0 and 1. L is started from each point of this reduced sample, except if there is another sample point within a certain critical distance that has a lower f value. L is also not started from sample points that are too near the boundary of S, or too close to a previously discovered local minimum. Then, Ns additional uniformly distributed points are generated, and the procedure is applied to the union of these points and those retained from previous iterations. The critical distance referred to above decreases each time a new set of sample points is added. The authors show that, if the sampling continues indefinitely, each local minimum of f will be located, but the total number of local searches is finite with probability one. They also develop Bayesian stopping rules, which incorporate assumptions about the costs and potential benefits of further function evaluations, to determine when to stop the procedure.

When the critical distance decreases, a point from which L was previously not started may become a starting point in the next cycle. Hence all sample points generated must be saved. This also makes the choice of the sample size Ns important, since too small a sample leads to many revised decisions, while too large a sample will cause L to be started many times. Random-linkage (RL) multistart algorithms introduced by Locatelli and Schoen (1999) retain the good convergence properties of MLSL, and do not require that past starting decisions be revised. Uniformly distributed points are generated one at a time, and L is started from each point with a probability given by a nondecreasing function φ(d), where d is the distance from the current sample point to the closest of the previous sample points with a better function value. Assumptions on this function that give RL methods the same theoretical properties as MLSL are derived in the above reference.

Recently, Fylstra et al. have implemented a version of MLSL that can solve constrained problems (http://www.solver.com/xlspremsolv3.htm). Limited to problems with no discrete variables y, it uses the L1 exact penalty function, defined as

    P1(x, w) = f(x) + Σ_{i=1}^{m} w_i · viol(g_i(x)),    (5)

where the w_i are nonnegative penalty weights, m = m1 + m2, and the vector g has been extended to include the linear constraints (4). The function viol(g_i(x)) is the absolute amount by which the ith constraint is violated at the point x. It is well known (Nash and Sofer 1996) that if x* is a local optimum of (1)–(4), u* is a corresponding optimal multiplier vector, the second-order sufficiency conditions are satisfied at (x*, u*), and

    w_i > abs(u_i*),    (6)

then x* is a local unconstrained minimum of P1. If (1)–(4) has several local minima, and each w_i is larger than the maximum of all absolute multipliers for constraint i over all these optima, then P1 has a local minimum at each of these local constrained minima. Even
though P1 is not a differentiable function of x, MLSL can be applied to it, and when a randomly generated trial point satisfies the MLSL criterion to be a starting point, any local solver for the smooth NLP problem can be started from that point. The local solver need not make any reference to the exact penalty function P1, whose only role is to provide function values to MLSL. We will use P1 in the same way in our OQNLP algorithm. We are not aware of any theoretical investigations of this extended MLSL procedure, so it must currently be regarded as a heuristic.

3. The OQNLP Algorithm

3.1. The Global Phase–Scatter Search
Scatter search (ScS) is a population-based metaheuristic algorithm devised to intelligently perform a search on the problem domain (Glover 1998). It operates on a set of solutions called the reference set or population. Elements of the population are maintained and updated from iteration to iteration. ScS differs from other population-based evolutionary heuristics like genetic algorithms (GAs) mainly in its emphasis on generating new elements of the population mostly by deterministic combinations of previous members of the population, as opposed to the more extensive use of randomization. ScS was founded on strategies that were proposed as augmentations to GAs more than a decade after their debut in ScS. It embodies principles and strategies that are still not emulated by other evolutionary methods and prove to be advantageous for solving a variety of complex optimization problems. For the most recent and complete description of ScS, see Laguna and Martí (2003). Also see the Online Supplement to this paper on the journal's website.

3.2. The Local Phase–Gradient-Based NLP Solvers
Many papers and texts discuss gradient-based NLP solvers, e.g., Nash and Sofer (1996), Nocedal and Wright (1999), and Edgar et al. (2001). These solve problems of the form (1)–(4), but with no discrete y variables. They require a starting point as input, and use values and gradients of the problem functions to generate a sequence of points that, under fairly general smoothness and regularity conditions, converge to a local optimum. The main classes of algorithms in widespread use are successive quadratic programming (SQP) and generalized reduced gradient (GRG); see Edgar et al. (2001, chapter 8). The algorithm implemented in the widely used MINOS solver (Murtagh and Saunders 1982) is similar to SQP. If there are nonlinear constraints, SQP and MINOS generate a sequence of points that usually violate the nonlinear constraints, with the violations decreasing to within a specified feasibility tolerance as the sequence converges to a local optimum. GRG algorithms have a simplex-like phase 1–phase 2 structure. Phase 1 begins with the given starting point and, if it is not feasible, attempts to find a feasible point by minimizing the sum of constraint violations. If this effort terminates with some constraints violated, the problem is assumed to be infeasible. However, this local optimum of the phase 1 objective may not be global, so a feasible point may exist. If a feasible point is found, phase 2 uses it as its starting point, and proceeds to minimize the true objective. Both phases consist of a sequence of line searches, each of which produces a feasible point with an objective value not worse (and usually better) than its predecessor.

Several good commercially available implementations of GRG and SQP solvers exist; see Nash (1998) for a review. As with any numerical-analysis software, a local NLP solver can fail to find a local solution from a specified starting point. The problem may be too badly conditioned, badly scaled, or too large for the solver, causing it to terminate at a point (feasible or infeasible) that is not locally optimal. While the reliability of the best current NLP solvers is quite high, these difficulties occurred in our computational testing, and we discuss this in more detail later.

Let L be a local NLP solver capable of solving (1)–(4), and assume that L converges to a local optimum for any starting point x0 ∈ S. Let L(x0) be the locally optimal solution found by L starting from x0, and let xi*, i = 1, 2, ..., nloc, be all the local optima of the problem. The basin of attraction of the ith local optimum relative to L, denoted by B(xi*), is the set of all starting points in S from which the sequence of points generated by L converges to xi*: B(xi*) = {x0 ∈ S : L(x0) = xi*}.

One measure of difficulty of a global optimization problem with unique global solution x1* is the volume of B(x1*) divided by the volume of the rectangle S, the relative volume of B(x1*). The problem is trivial if this relative volume is 1, as it is for convex programs, and problem difficulty increases as this relative volume approaches zero.

3.3. Comparing Heuristic Search Methods and Gradient-Based NLP Solvers
For smooth problems, the relative advantages of a heuristic search method like ScS over a gradient-based NLP solver are its ability to locate an approximation to a good local solution (often the global optimum), and the fact that it can handle discrete variables. Gradient-based NLP solvers converge to the nearest local solution, and have no facilities for discrete variables, unless they are embedded in a rounding heuristic or branch-and-bound method. Relative disadvantages of heuristic search methods are
their limited accuracy, and their weak abilities to deal with equality constraints (more generally, narrow feasible regions). They find it difficult to satisfy many nonlinear constraints to high accuracy, but this is a strength of gradient-based NLP solvers. Search methods also require an excessive number of iterations to find approximations to local or global optima accurate

STAGE 2: MAIN ITERATIVE LOOP
WHILE (Stage 2 iterations < Stage 2 iteration limit)
DO {
    Get (trial solution from OptQuest);
    Evaluate (objective and nonlinear constraint values at trial solution);
    Put (trial solution, objective and constraint values to OptQuest database);
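The stage-2 pseudocode above (truncated in this excerpt) can be rendered in Python roughly as follows. This is a minimal sketch, not the OQNLP implementation: the OptQuest calls are simulated by uniform random sampling, and the "database" is a plain list, whereas in OQNLP trial solutions come from scatter search and the database drives the population update.

```python
# Sketch of the stage-2 main iterative loop, with the OptQuest
# interface simulated (hypothetical stand-ins, not the real API).
import random

def stage2_main_loop(f, lower, upper, iteration_limit, seed=0):
    rng = random.Random(seed)
    database = []                       # stand-in for the OptQuest database
    for _ in range(iteration_limit):
        # Get (trial solution from OptQuest) -- simulated here
        trial = [rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        # Evaluate (objective and nonlinear constraint values at trial solution)
        obj = f(trial)
        # Put (trial solution, objective and constraint values to OptQuest database)
        database.append((trial, obj))
    return database

# Example: 1,000 stage-2 iterations on a two-variable sphere function.
db = stage2_main_loop(lambda x: sum(v * v for v in x), [-10, -10], [10, 10], 1000)
best_obj = min(obj for _, obj in db)
print(len(db))   # -> 1000
```

In the real algorithm, the points stored this way are then screened by the distance and merit filters described below before any local solver call is made.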
diverse, in the sense that they are not too close to any previously found local solution. Its goal is to prevent L from starting more than once within the basin of attraction of any local optimum, so it plays the same role as the rule in the MLSL algorithm of Section 2. Each local solution found is stored, ordered by its objective value, as is the Euclidean distance between it and the starting point that led to it. If a local solution is located more than once, the maximum of these distances, maxdist, is updated and stored. For each trial point t, if the distance between t and any local solution already found is less than distfactor · maxdist, L is not started from the point, and we obtain the next trial solution from OptQuest.

This distance filter implicitly assumes that the attraction basins are spherical, with radii at least maxdist. The default value of distfactor is 0.75, and it can be set to any positive value. As distfactor approaches zero, the filtering effect vanishes, as would be appropriate if there were many closely spaced local solutions. As it increases, the filtering effect increases until eventually L is never started in stage 2. Its default value is chosen empirically to achieve a reasonable compromise between these extremes.

The merit filter helps insure that the L starting points have high quality, by not starting from candidate points whose exact penalty function value P1 in (5) is greater than a threshold. This threshold is set initially to the P1 value of the best candidate point found in the first stage of the algorithm. If trial points are rejected by this test for more than waitcycle consecutive iterations, the threshold is increased by the updating rule:

    threshold ← threshold + threshfactor · (1.0 + abs(threshold)),

where the default value of threshfactor is 0.2 and that for waitcycle is 20. The threshfactor value is selected to provide a significant but not-too-large increase in threshold, while the waitcycle value insures that these increases happen often enough but not too often. Using the default values of 200 stage 1 and 800 stage 2 candidate points, threshold can be increased in stage 2 at most 40 times. The additive 1.0 term is included so that threshold increases by at least threshfactor when its current value is near zero. When a trial point is accepted by the merit filter, threshold is decreased by setting it to the P1 value of that point.

The combined effect of these two filters is that L is started at only a few percent of the OptQuest trial points, yet global optimal solutions are found for a very high percentage of the test problems. Some insight is gained by examining Figure 1, which shows the stationary point at the origin and the six local minima of the two-variable six-hump camelback function (Dixon and Szegő 1975) as dark squares, labeled with their objective value. The ten points from which OQNLP starts the local solver are shown as nine white diamonds, plus the origin. The local minima occur in pairs with equal objective value, located symmetrically about the origin. There were 144 trial points generated in the initial OptQuest iterations stage, and these, plus the ten points in the initial population, are shown in Figure D in the Online Supplement. The best of these 154 points is the population point (0, 0), so this becomes the first starting point for the local solver. This happens to be a stationary point of F, so it satisfies the optimality test (that the norm of the gradient of the objective be less than the optimality tolerance), and the local solver terminates there. The next local solver start is at iteration 201, and this locates the global optimum at (0.0898, −0.7127), which is located twice. The other global optimum at (−0.0898, 0.7127) is found first at iteration 268, and is located six times. The limit on total OQNLP iterations in this run was 1,000. L was started at only nine of the 846 OptQuest trial points generated in the main iterative loop of stage 2. All but two of the starting points are in the basin of attraction of one of the two global optima. This is mainly due to the merit filter. In particular, the threshold values are always less than 1.6071, so no starts are ever made in the basin of attraction of the two local optima with this objective value. The merit filter alone rejected 498 points, the distance filter alone 57, and both rejected 281.

Figure 1  Local Optima and Ten L Starting Points for Six-Hump Camelback Function

Figure 2 illustrates the dynamics of the merit-filtering process for iterations 155 to 407 of this problem, displaying the objective values for the trial points as white diamonds, and the threshold values as dark lines. All objective values greater than 2.0 are set to 2.0.

The initial threshold value is zero, and it is raised twice to a level of 0.44 at iteration 201, where the
Figure 2  Objective and Threshold Values for Six-Hump Camelback Function for Iterations 155 to 407
trial-point objective value of −0.29 falls below it. L is then started and locates the global optimum at (0.0898, −0.7127), and the threshold is reset to −0.29. This cycle then repeats. Nine of the ten L starts are made in the 252 iterations shown in the graph. In this span, there are 12 points where the merit filter allows a start and the threshold is decreased, but L is not started at three of these because the distance filter rejects them.

Figure 3 shows the same information for iterations 408 to 1,000. There is only one L start in this span. This is not due to a lack of high-quality trial points: there are more good points than previously, many with values near or equal to −1.0310 (the global minimum is −1.0316), and the merit threshold is usually −1.0310 as well. Every time this threshold is raised, the merit filter accepts one of the next trial points, but 51 of the 52 accepted points are too near one of the two global optima, and they are rejected by the distance filter.

This simple example illustrates a number of important points:

1. Setting the bounds on the continuous or discrete variables to be too large in magnitude is likely to slow the OQNLP algorithm (or any search algorithm) and may lead to a poorer final solution. In the above example, if the variable bounds had been (−2, 2) rather than (−10, 10), the trial points generated by the initial population would have had much lower objective values. OptQuest can overcome this when the initial population is updated.

2. L found a highly accurate approximation to the global solution of this unconstrained problem at its
Figure 3  Objective and Threshold Values for Six-Hump Camelback Function for Iterations 408 to 1,000
second call. OptQuest alone would have taken many more iterations to achieve this accuracy.

3. The best trial point generated by the initial population may not have as good an objective value as those generated from the second or succeeding ones, especially if the variable bounds are too large. Using the best first-generation point as the initial L start-

In mode (2), OptQuest presents candidate y vectors only to L, which are fixed while L finds corresponding (locally) optimal x values. The starting values for x can be chosen to minimize computational effort. We are experimenting with an option that obtains all possible trial points for the current population, sorts them in terms of their distance from each other, and calls L in that sorted order, starting each call of L from
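The division of labor in mode (2) can be sketched on a toy problem. This is an illustration only; the problem, the dictionaries, and the closed-form "local solve" are hypothetical and stand in for OptQuest proposing y and the solver L optimizing x with y held fixed.

```python
# Sketch of MINLP mode (2): the search layer proposes only the
# discrete vector y; the local solver optimizes the continuous x
# with y fixed. Toy problem (hypothetical, not from the test set):
#   minimize (x - c[y])^2 + b[y]  over  x in R, y in {0, 1, 2}.
c = {0: 3.0, 1: -1.0, 2: 0.5}
b = {0: 2.0, 1: 0.0, 2: 1.0}

def local_solve(y):
    """Stand-in for L with y fixed: here the continuous minimum
    is available in closed form at x = c[y]."""
    x = c[y]
    return x, (x - c[y]) ** 2 + b[y]

# The "global phase": try each candidate y, keep the best local result.
best = min((local_solve(y) + (y,) for y in (0, 1, 2)), key=lambda t: t[1])
x_star, f_star, y_star = best
print(y_star, x_star, f_star)   # -> 1 -1.0 0.0
```

In OQNLP itself the inner problem is a full NLP solved by a GRG or SQP code rather than a closed-form expression, and the candidate y vectors come from scatter search rather than enumeration.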
functions. GAMS identifies all linear terms in each function, and supplies their coefficients separately, thus identifying all linear constraints. This enables us to invoke the OptQuest option that maps each trial point into a point that satisfies the linear constraints. The derivative information supplied by GAMS significantly enhances the performance of the local solver, since only nonconstant derivatives are re-evaluated, and these are always available to full machine precision. As mentioned earlier, this GAMS version can call any GAMS NLP solver.

For our computational experiments we used the large set of global optimization test problems coded in GAMS from Floudas et al. (1999). Table 1 shows the characteristics of 142 individual and two groups of problems. Most problems arise from chemical engineering, but some are from other sources. Most are small, but a few have over 100 variables and a comparable number of constraints, and 13 have both continuous and discrete variables. Almost all of the problems without discrete variables have local solutions distinct from the global solution, and the majority of problems have constraints. Sometimes all constraints are linear, as with the concave quadratic programs of series EX2_1_x, but many problems have nonlinear constraints, and these are often the source of the nonconvexities. The best known objective value and (in most cases) the corresponding variable values are provided in Floudas et al. (1999). The symbol N in the rows for the series EX8_6_1 and EX8_6_2 is the number of particles in a cluster whose equilibrium configuration is sought via potential-energy minimization. (For more details on these problems see Section 4.2.)

4.1. Continuous Variables–The Base Case
This section describes results obtained when OQNLP is applied to 128 of the problems in the Floudas et al. (1999) test set with no discrete variables. A few problems for which no GAMS NLP solver can find a feasible solution in 800 solver calls are omitted. Computations were performed on a DELL OptiPlex PC with a 1.2 GHz Pentium IV processor and 261 Mbytes of RAM, running under Windows 2000.

The options and main algorithm parameters used are shown in Table 2 (see Section 3.4 for definitions). The filter parameter values (waitcycle, threshfactor, distfactor) correspond to fairly tight filters, and these must be loosened to solve some problems. The OptQuest "use linear constraints" option, which projects trial points onto the linear constraints, is not used because it is very time-consuming for larger problems. SNOPT, an SQP implementation, was used for the largest problems because many calls to the GRG solvers CONOPT (Drud 1994) and LSGRG2 terminate infeasible on these problems. The 8_3_x problems include many pooling constraints, which have bilinear terms. In almost all these terminations, the GRG solvers find a local minimum of the phase 1 objective. SNOPT has no phase 1, and never terminates infeasible.
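The phase-1 failure mode just described (a GRG solver stalling at a non-global local minimum of the constraint-violation objective and declaring the problem infeasible) can be reproduced on a one-variable toy. The constraint below is hypothetical, chosen only so that the violation has a strictly positive local minimum; it is not one of the pooling constraints in the test set.

```python
# Phase-1 traps: minimizing constraint violation can stall at a
# point that is infeasible even though feasible points exist.
# Illustrative equality constraint q(x) = 0 with q(x) = x^3 - 3x + 3,
# which has a real root near x = -2.1038, but |q| also has a
# positive local minimum (value 1) at x = 1.
def q(x):
    return x**3 - 3.0 * x + 3.0

def phase1(x, lr=1e-3, iters=20000):
    """Crude gradient descent on the violation |q(x)|."""
    for _ in range(iters):
        grad = (3.0 * x * x - 3.0) * (1.0 if q(x) > 0 else -1.0)
        x -= lr * grad
    return x

x_end = phase1(3.0)                               # started to the right of the trap
print(round(x_end, 2), round(abs(q(x_end)), 2))   # stalls near x = 1 with violation 1.0
print(round(abs(q(-2.1038)), 3))                  # yet a near-feasible point exists
```

A phase-1-based solver started at x = 3 would report "infeasible" here, which is why OQNLP benefits from trying many starting points, and why an SQP code without a phase 1, such as SNOPT in the experiments above, can behave differently on the same problem.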
Table 2  Base-Case OQNLP and OptQuest/GRG Parameters and Options Used

    Total iterations = 1,000
    Stage 1 iterations = 200
    Waitcycle = 20
    Threshfactor = 0.2
    Distfactor = 0.75

Computational effort needed to achieve these results is quite low, and increases slowly with problem size, except for the geometric mean solution time for the largest problems. The best OQNLP value is also found very early: in the first solver call in 80 of the 118 solved problems, and in the second call in 19 more. This shows that, for these test problems, stage 1 of OQNLP is very effective in finding a point in the
Table 3  Results and Effort Statistics for 128 Continuous-Variable Floudas et al. (1999) Problems

Variable range | Problems | Variables | Constraints | Iterations to best | Solver calls to best | Total solver calls | Locals found | Function calls to best | Total function calls | Time to best | Total time | Failed | First L call | Second L call
1 to 4     |  32 |   2.5 |  1.7 | 206.3 | 1.1 |  7.5 |  1.9 |  2,639 |  21,580 | 0.1 |  0.5 | 1 | 27 | 3
4 to 7     |  31 |   5.5 |  5.7 | 214.4 | 1.3 |  5.8 |  1.9 |  3,815 |  47,667 | 0.2 |  0.6 | 0 | 22 | 6
8 to 12    |  21 |   9.4 |  8.1 | 238.2 | 1.5 | 13.2 |  3.0 |  5,756 | 196,980 | 0.1 |  0.8 | 3 | 10 | 4
13 to 20   |  18 |  15.9 | 11.7 | 303.5 | 2.4 |  7.4 |  2.6 |  9,682 |  52,119 | 0.3 |  0.7 | 4 |  8 | 1
22 to 78   |  13 |  37.9 | 27.6 | 259.6 | 2.1 | 14.1 |  3.1 | 15,624 | 230,779 | 0.6 |  2.5 | 1 |  5 | 3
110 to 141 |  13 | 116.4 | 80.1 | 305.0 | 2.7 | 23.7 | 22.7 |     NA |      NA | 6.6 | 64.1 | 0 |  7 | 2
Total/avg. | 128 |       |      | 251.3 | 1.8 | 10.5 |  3.5 |  5,379 |  71,909 | 0.4 |  1.7 | 9 | 80 | 19
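The effort statistics in Table 3 (and in the later tables) are geometric means over each problem group; for skewed effort measures such as solver calls or run times, the geometric mean is far less distorted by a few hard instances than the arithmetic mean. A minimal sketch (the call counts below are hypothetical):

```python
import math

# Geometric mean as used for the effort statistics in Table 3:
# the exponential of the average logarithm.
def geomean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

calls = [1, 10, 100]                 # hypothetical solver-call counts
print(round(geomean(calls), 1))      # -> 10.0
print(sum(calls) / len(calls))       # arithmetic mean: 37.0, dominated by the outlier
```

Note that a geometric mean requires strictly positive values, which is why zero-valued measures (e.g., a count of zero solver calls) cannot be aggregated this way without adjustment.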
Table 4  Solving Nine Failed Problems with Looser Filters and Boundary Parameter = 1

Problem name | Variables | Constraints | Iterations to best | Solver calls to best | Total solver calls | Base-case solver calls | Locals found | Function calls to best | Total function calls | Time to best | Total time | Gap
Table 5  Base-Case and Loosened Filter Parameter Values

Parameter                 | Base-case value | Looser value
Waitcycle                 | 20              | 10
Threshold_factor          | 0.2             | 1.0
Distance_factor           | 0.75            | 0.1
Boundary search parameter | 0.5             | 1.0

Comparing the "total solver calls" and "base-case solver calls" columns shows that the new parameters represent a substantial loosening of both filters. The looser filters result in many more solver calls in all but problem 9_2_5, and the geometric mean solver calls is 48.7 with the loose filters vs. 10.5 with the tighter ones. The behavior of 9_2_5 is surprising (six solver calls with loose filters versus 45 with tighter ones), but the run with looser filters finds the global minimum at iteration 344, and after that its merit thresholds and set of local solutions differ from those of the base-case run.

Table 6 shows the geometric performance means and totals obtained from solving the 14 concave QP problems with base-case parameters, with and without the OptQuest "use linear constraints" option, which maps each trial point into a nearest point feasible for the linear constraints. Since these are linearly constrained problems, invoking this option guarantees that all trial points are feasible. Clearly, this option helps: there are roughly twice as many solver calls on average when using it, and only two failures, vs. seven in the base case. The gaps for the two unsolved problems (2_1_7_5 and 2_1_9) are between 1% and 3.5% in both cases. However, this option increases run times here by about a factor of 30, so it is currently off by default.

4.2. The Lennard-Jones and Morse Energy-Minimization Problems
The Floudas et al. (1999) set of test problems includes two GAMS models that choose the locations of a cluster of N particles to minimize the potential energy of the cluster, using two different potential-energy functions, called Lennard-Jones and Morse. The decision variables are the (x, y, z) coordinates of each particle. Particle 1 is located at the origin, and three position components of particles 2 and 3 are fixed, so each family of problems has 3N − 6 variables. These problems have many local minima, and their number increases rapidly with problem size, so they constitute a good test for global optimization algorithms.

Results of applying OQNLP to 14 of these problems, using 200 stage 1 and 1,000 total iterations, are shown in Tables 7 and 8. Each problem set was solved with LSGRG2 and CONOPT. These results use CONOPT for the Lennard-Jones problems and LSGRG2 for the Morse, because they provide slightly better results, illustrating the value of being able to call several solvers. Because of the many distinct local minima, the number of local minima found is equal to the number of solver calls for the three largest Lennard-Jones problems and for all the Morse problems.

The Lennard-Jones problems are the more difficult of the two. The three largest problems have gaps of roughly 1% to 2%, using the looser filter parameters in Table 5. The default filter parameters led to positive gaps for the last three problems totaling 7.8%, while this sum in Table 7 is 3.8%. The objective approaches infinity as the distance between any two particles approaches zero, so its unconstrained minimization for N = 20 leads to about 50,000 domain violations
Table 6    Solving Concave QP Problems With and Without "Use Linear Constraints"

        Iterations   Solver calls   Total          Locals   Fcn calls   Total       Time      Total
Case    to best      to best        solver calls   found    to best     fcn calls   to best   time    Failed
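The "use linear constraints" option discussed above maps a trial point to a nearest point satisfying the linear constraints, which amounts to a least-distance projection onto the feasible polyhedron. A minimal sketch of such a projection; the constraint data here are illustrative, not taken from the test problems:

```python
# Sketch of the "use linear constraints" idea: project a trial point x0
# onto {x : A x <= b, lb <= x <= ub} by solving the least-distance QP
# min ||x - x0||^2.  The problem data below are invented for illustration.
import numpy as np
from scipy.optimize import minimize

def project_onto_linear_constraints(x0, A, b, lb, ub):
    """Return the point with A x <= b, lb <= x <= ub nearest to x0."""
    res = minimize(
        lambda x: np.sum((x - x0) ** 2),            # squared Euclidean distance
        x0=np.clip(x0, lb, ub),                     # start inside the bounds
        jac=lambda x: 2.0 * (x - x0),
        bounds=list(zip(lb, ub)),
        constraints=[{"type": "ineq", "fun": lambda x: b - A @ x}],
        method="SLSQP",
    )
    return res.x

# Example: project (1, 1) onto {x + y <= 1, 0 <= x, y <= 1}.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x = project_onto_linear_constraints(np.array([1.0, 1.0]), A, b,
                                    [0.0, 0.0], [1.0, 1.0])
# x is approximately (0.5, 0.5), the nearest feasible point
```

Any QP solver would do here; SLSQP is used only because it handles the bound and linear constraints in one call.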
Table 7 Solving Six Lennard-Jones Problems Using CONOPT and Loose Filters
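The Lennard-Jones objective sums a pair potential that is singular at zero separation, which motivates the lower bounds of 0.1 imposed on all N(N-1)/2 distinct pairwise distances. A small sketch using one standard 12-6 form of the potential; the coordinates and the helper name are invented for illustration:

```python
# Illustrative Lennard-Jones cluster objective: the 12-6 pair potential
# v(r) = r**-12 - 2*r**-6 summed over all distinct pairs.  It blows up as
# r -> 0, hence the added pairwise-distance bounds r_ij >= 0.1.
import itertools
import numpy as np

def lennard_jones(points):
    """Total 12-6 pair potential of a set of 3-D points (one common form)."""
    total = 0.0
    for i, j in itertools.combinations(range(len(points)), 2):
        r = np.linalg.norm(points[i] - points[j])
        total += r ** -12 - 2.0 * r ** -6     # singular as r -> 0
    return total

n = 20                                        # particles, as in the N = 20 run
n_constraints = n * (n - 1) // 2              # one bound r_ij >= 0.1 per pair
# n_constraints is 190 for N = 20
```

Two particles at unit separation sit exactly at this form's pair minimum, with value -1.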
(either divide by zero or integer power overflow), and this number grows rapidly with N. Hence we added constraints lower bounding this distance by 0.1 for all distinct pairs of points, and the number of these constraints is shown in the table. None are active at the best solution found.

Table 8 shows the Morse potential results using the LSGRG2 solver and the default OQNLP parameters shown in Table 2. The objective here has no singularities, so there are no difficulties with domain violations, and the only constraints are variable bounds. All problems are solved to very small gaps except the largest (144 variables), which has a gap of 0.125%. The number of solver calls is much smaller than for the Lennard-Jones problems, because the filters are much tighter. Each call leads to a different local optimum. The largest problem is solved to a gap less than 10^-4% with 5,000 total and 1,000 stage 1 iterations and the same filter parameters. This run terminated because the 3,000-second time limit was exceeded, took 4,083 iterations, and found 210 distinct local optima in 210 solver calls, compared to only 25 in the base case.

Table 8    Solving Eight Morse Problems Using LSGRG2 and Default Parameters

                           Solver calls   Total          Locals   Time      Total
Problem name   Variables   to best        solver calls   found    to best   time     Gap

EX8_6_2_5            9         1               5            5       0.23      0.61   0.0000
EX8_6_2_10          24         1              15           15       0.57      4.44   0.0000
EX8_6_2_15          39         1               6            6       1.41      6.43   0.0000
EX8_6_2_20          54         2              43           43       4.20     51.20   0.0000
EX8_6_2_25          69         4              20           20      13.44     58.38   0.0000
EX8_6_2_30          84        17              43           43      68.56    160.19   0.0000
EX8_6_2_40         114         7              33           33      66.29    273.91   0.0000
EX8_6_2_50         144        20              25           25     337.20    403.96   0.1251

4.3. Problems with Discrete Variables

There are 11 MINLP problems in the Floudas et al. (1999) test set, with the total number of variables ranging from three to 29 and the number of binary variables ranging from one to eight. Two of these, EX12_2_3 and EX12_2_4, had been reformulated so that all binaries appeared linearly, and we restored them to their original state where the binaries appear nonlinearly. OQNLP allows such representations, while the other GAMS MINLP solvers do not. The final test set contains 13 problems. These are far too small to yield meaningful inferences about the power of OQNLP on problems of practical size, but allow preliminary testing of the two MINLP modes described in Section 3.5. The geometric means of some measures of computational outcomes and effort for both modes are shown in Table 9, using the LSGRG2 NLP solver.

The first table row is for mode (2), where OptQuest manipulates only the discrete variables. Each NLP problem was cold started from the same initial point in these runs, so the number of function calls could be reduced substantially by warm starts. All runs are terminated by OptQuest after the small number of possible binary-variable combinations have been completely enumerated. The optimal solution is found on average about midway through the solution process, but we expect that this will occur earlier as the number of discrete variables increases. The OptQuest logic requires that at least one population of binary solutions be evaluated before any learning can occur, and the average number of solver calls to find the best solution here is about equal to the population size of ten.

The last three rows of Table 9 show results for mode (1), where OptQuest manipulates both binary and continuous variables. In rows 2 and 3, we do
Table 9

Discrete        Options and          Iterations   L calls   Total     Fcn calls   Total       Time      Total
var mode        parameters           to best      to best   L calls   to best     fcn calls   to best   time   Failures

Discrete only   Default                   10.2      10.2      20.1     2,065.8      5,794.1    0.8       1.5       0
All             Default                  261.7       4.0      32.1     1,065.7     25,022.2    0.3       0.8       7
All             1,000 stage 1,
                5,000 total            1,551.9       1.2      89.1    10,196.8    224,025.7    0.8       3.7       1
All             Default, use
                linear constraints       272.4       3.6     115.8     1,178.0     88,983.0    9.9      26.1       0
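Mode (2) above, in which OptQuest manipulates only the discrete variables and the runs end once every binary combination has been tried, can be sketched on a toy problem. The two-variable problem and all names below are invented, and plain enumeration stands in for OptQuest's population-based search:

```python
# Toy sketch of MINLP mode (2): enumerate the binary combinations and
# solve the continuous subproblem for each, keeping the best result.
# Invented example: minimize (x - 1)^2 + 3*y1 + y2, 0 <= x <= 2,
# with binaries y1, y2.
import itertools
from scipy.optimize import minimize_scalar

def solve_continuous(y):
    """Local NLP solve in x for fixed binaries y (plays the NLP-solver role)."""
    y1, y2 = y
    res = minimize_scalar(lambda x: (x - 1.0) ** 2 + 3.0 * y1 + y2,
                          bounds=(0.0, 2.0), method="bounded")
    return res.fun, res.x

# One local solve per binary combination, as in the enumerated runs.
best = min((solve_continuous(y) + (y,)
            for y in itertools.product((0, 1), repeat=2)),
           key=lambda t: t[0])
# best = (objective, x, y); the optimum here is near x = 1 with y = (0, 0)
```

With more binaries the combination count grows as 2^b, which is why complete enumeration is viable only on test problems this small.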
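Throughout these experiments, whether a trial point triggers a local solver call is governed in part by a merit filter: a point qualifies only if its penalty value beats a threshold, and the threshold is loosened after enough consecutive rejections. A rough sketch of such a filter; the class name, the defaults, the tightening step, and the exact loosening rule are assumptions reconstructed from the description in this section, not OQNLP's actual code:

```python
# Rough sketch of a merit filter for deciding when to start a local solve.
# The loosening rule new_threshold = threshfactor * (1 + old_threshold) is
# reconstructed from the text; parameter names follow the paper's usage.
class MeritFilter:
    def __init__(self, threshold, threshfactor=1.2, waitcycle=20):
        self.threshold = threshold
        self.threshfactor = threshfactor
        self.waitcycle = waitcycle
        self.rejections = 0                  # consecutive rejected trials

    def accept(self, penalty_value):
        """True if this trial point should trigger a local solver call."""
        if penalty_value < self.threshold:
            self.threshold = penalty_value   # tighten to the accepted value
            self.rejections = 0
            return True
        self.rejections += 1
        if self.rejections >= self.waitcycle:
            # Loosen so that future near-misses can get through.
            self.threshold = self.threshfactor * (1.0 + self.threshold)
            self.rejections = 0
        return False
```

Tighter settings (smaller threshfactor, larger waitcycle) yield fewer solver calls, matching the trade-off seen between the Lennard-Jones and Morse runs above.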
Table 10    Average Solver Calls vs. Number of Binary Variables

Binaries           1      3      4      5       6       8
Problems           1      3      2      1       4       2
Discretes only     2      7     12.5   27      52.5    81
All                5.2   35.7   17.5   10.9   195.5   551.5

not require trial points to satisfy linear constraints,

here are so small. More extensive testing is needed, which should clarify the relative merits of the two MINLP modes discussed in Section 4.3. If OptQuest manipulates only the discrete variables, then all trial points generated by the current population may be generated at once, and the solver calls at these points may be done in any order. The points can be sorted by increasing distance from their nearest neighbor, and

Future research includes enhancing the filter logic. As described above, the filters needed to be loosened to solve nine of the Floudas et al. (1999) test problems, and this loosening could be done automatically. The merit filter parameter threshfactor (new threshold = threshfactor (1 + old threshold)) could be calculated dynamically. Each time a penalty value is above the threshold, calculate the value of threshfactor that

References
Floudas, C. A., P. M. Pardalos, C. S. Adjiman, W. R. Esposito, Z. Gumus, S. T. Harding, J. L. Klepeis, C. A. Meyer, C. A. Schweiger. 1999. Handbook of Test Problems for Local and Global Optimization. Kluwer Academic Publishers, Boston, MA.
Glover, F. 1998. A template for scatter search and path relinking. J.-K. Hao, E. Lutton, E. Ronald, M. Schoenauer, D. Snyers, eds. Artificial Evolution, Lecture Notes in Computer Science, Vol. 1363. Springer Verlag, New York, 13-54.
Laguna, M., R. Marti. 2002. The OptQuest callable library. S. Voss,