REVIEW Optimization Theory
1. Introduction
Optimization is a term used to refer to a branch of computational science concerned with
finding the best solution to a problem. The word best means the solution is at least as good as every other
candidate solution or set of candidate solutions.
Optimization is needed everywhere in our lives. E.g., telephone networks need optimization to find the most
cost-effective and least crowded route to carry calls; a travel agent needs an effective route to take a passenger
from source to destination in the least time and at the least cost; etc.
Many optimization methods have been developed to solve different types of problems. It is necessary to identify the
characteristics of a problem before we select the best approach to solving it.
b. Types of variables
• Continuous problems need continuous-valued variables, e.g. xj ∈ ℝ, j = 1, …, n.
A continuous-valued random variable takes on a range of real values, e.g. from 0 to ∞.
Examples of continuous(-valued) random variables:
• The time when a particular arrival occurs.
• The time between consecutive arrivals.
Properties of continuous random variables: from the Fundamental Theorem of Calculus, if a continuous
random variable X has density f and distribution function F, we have

F(x) = ∫_{-∞}^{x} f(t) dt

In particular,

∫_{-∞}^{∞} f(t) dt = 1

More generally,

P(a < X ≤ b) = ∫_{a}^{b} f(t) dt = F(b) − F(a)
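These properties can be checked numerically. The sketch below uses an exponential density f(t) = λe^(−λt) with rate λ = 2 (an illustrative choice, not from the text) and a simple midpoint-rule integrator: the density integrates to 1 over its support, and the integral over (a, b] matches F(b) − F(a).

```python
import math

lam = 2.0  # rate of the exponential density (illustrative assumption)

def f(t):
    """Exponential density: lam * exp(-lam * t) for t >= 0, else 0."""
    return lam * math.exp(-lam * t) if t >= 0 else 0.0

def integrate(a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

total = integrate(0.0, 50.0)               # effectively the whole support
p_ab = integrate(0.5, 1.5)                 # P(0.5 < X <= 1.5)
F = lambda x: 1.0 - math.exp(-lam * x)     # closed-form CDF, for comparison
print(round(total, 3), round(p_ab, 4), round(F(1.5) - F(0.5), 4))  # 1.0 0.3181 0.3181
```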
d. Constraints used
• Only boundary constraints used: an unconstrained problem
• Constrained problem: has additional equality and/or inequality constraints
Mode
The most common value obtained in a set of observations. For example, for the data set (3, 7, 3, 9, 9, 3, 5, 1,
8, 5), the unique mode is 3. Similarly, for the data set (2, 4, 9, 6, 4, 6, 6, 2, 8, 2), there are two modes:
2 and 6. A distribution with a single mode is said to be unimodal. A distribution with more than one mode is
said to be bimodal, trimodal, etc., or in general, multimodal. The mode of a set of data is implemented in
Mathematica as Mode[data].
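A minimal sketch of computing the (possibly multiple) modes of a data set, reusing the two example data sets above:

```python
from collections import Counter

def modes(data):
    """Return all modes (the most frequent values) of a data set, sorted."""
    counts = Counter(data)
    top = max(counts.values())
    return sorted(v for v, c in counts.items() if c == top)

print(modes([3, 7, 3, 9, 9, 3, 5, 1, 8, 5]))  # [3]     (unimodal)
print(modes([2, 4, 9, 6, 4, 6, 6, 2, 8, 2]))  # [2, 6]  (bimodal)
```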
Figure 1: An overview of objective functions. A: unimodal. B: essentially unimodal but with parasitic local
extrema. C: fundamentally multimodal, small number of local extrema. D: significant null-space effects. E:
fundamentally multimodal, huge number of local extrema. F: lacking any useful structure, brute force
probably required.
The global maximum is the point at the top.
f. Number of optimization criteria
• Quantity to be optimized using only one objective function: uni-objective (single-objective).
• More than one sub-objective function that must be simultaneously optimized: multi-objective. The
multi-objective problem can be written as:

minimize μ(x) = [μ1(x), μ2(x), …, μk(x)]
subject to g(x) ≤ 0, h(x) = 0,

where μi is the i-th objective function, g and h are the inequality and equality constraints, respectively,
and x is the vector of optimization or decision variables. The solution to the above problem is a set of
Pareto points. Pareto solutions are those for which improvement in one objective can only occur with
the worsening of at least one other objective. Thus, instead of a unique solution to the problem (which
is typically the case in traditional mathematical programming), the solution to a multi objective
problem is a (possibly infinite) set of Pareto points.
A design point in objective space μ* is termed Pareto optimal if there does not exist another feasible
design point that is at least as good in every objective and strictly better in at least one.
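The Pareto definitions above translate directly into code. The following sketch (for minimization; the sample objective vectors are illustrative) filters a set of points down to its Pareto-optimal subset:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
print(pareto_front(pts))  # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
```

Note that (3.0, 4.0) is removed because (2.0, 3.0) improves both objectives: improving one objective of a Pareto point necessarily worsens another.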
An optimal solution is searched for by optimization algorithms/methods by iteratively transforming a current
candidate solution into a new, hopefully better, solution. Optimization methods divide into two classes: local
search and global search. Local search can become stuck in a local optimum instead of the global optimum: it uses
only local information about the search space surrounding the current solution to produce a new solution (it
explores the neighborhood of the current solution). On the other hand, global search uses more information about
the search space to locate the global optimum (it explores the entire search space). Based on the problem
characteristics, optimization methods are grouped as: unconstrained methods, constrained methods, multi-objective
optimization methods, multi-solution methods, and dynamic methods.
A problem will be called solved to optimality if the variables reach a convergence value. The general condition for
convergence is defined as follows: if x(t) denotes the point found at time step t, then the sequence x(0), x(1),
x(2), …, converges to the global optimum, x*, if:

lim_{t→∞} ‖x(t) − x*‖ = 0
3. Unconstrained Optimization
Unconstrained optimization places no restrictions on the values that can be assigned to the variables of the
problem. The feasible space is simply the entire search space. The general unconstrained optimization problem is
defined as:

minimize f(x), x = (x1, x2, …, xn), x ∈ ℝ^n
This descent approach is suitable for simple or discrete problems. The choice of a good neighborhood structure is
generally important for the effectiveness of the process. The main weakness of the descent algorithm is its
inability to escape from local minima. This is symbolically illustrated in the figure below: all solutions in the
neighborhood V(x) are worse than x although, further away, there exists a global minimum of F that cannot
be reached under the descent rule.
Trapped in a local minimum
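The trap can be reproduced with a tiny sketch. The one-dimensional test function below is an illustrative choice with a shallow local minimum near x ≈ 1.1 and the global minimum near x ≈ −1.26; greedy descent started at x = 2 stops in the shallow basin, while a start at x = −2 reaches the global one:

```python
def descent(f, x0, step=0.05, iters=10_000):
    """Greedy descent over the neighborhood V(x) = {x - step, x + step}.
    Stops as soon as no neighbor improves f, i.e. at a local minimum."""
    x = x0
    for _ in range(iters):
        best = min((x - step, x + step), key=f)
        if f(best) >= f(x):
            break              # every neighbor is worse: trapped
        x = best
    return x

# Illustrative multimodal function: local minimum near 1.1, global near -1.26.
f = lambda x: x**4 - 3 * x**2 + x

x_trap = descent(f, x0=2.0)    # stops near 1.15 (trapped)
x_glob = descent(f, x0=-2.0)   # stops near -1.3 (global basin)
print(round(x_trap, 2), round(x_glob, 2))
```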
b. Beam Search
Beam search is a heuristic method for solving optimization problems. It is an adaptation of the branch and
bound method in which only some nodes are evaluated. Beam search is like breadth-first search in that it
progresses level by level without backtracking. But unlike breadth-first search, beam search only moves
downward from the best β promising nodes (instead of all nodes) at each level, where β is called the beam width.
The other nodes are simply ignored.
Different nodes at the same level represent different partial schedules. If the local evaluation is a function of
the partial schedule (as in the case of the lower-bound-based local evaluation function to minimize makespan),
the values of the local evaluation function obtained when expanding one beam node cannot legitimately be compared
with the values of the local evaluation functions obtained when expanding another beam node at the same level.
Therefore, nodes in each of the parallel beams are evaluated separately and only one node is selected for each.
It is a fast, approximate branch and bound method that operates in a limited search space to find good
solutions to optimization problems. It searches a limited number of solution paths in parallel and progresses
level by level without backtracking.
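The level-by-level pruning described above can be sketched as follows; the layered cost structure is an illustrative toy (one node chosen per level), not a scheduling problem:

```python
import heapq

def beam_search(layers, beta):
    """Level-by-level beam search: expand every surviving partial path,
    then keep only the `beta` cheapest ones; no backtracking."""
    beam = [(0.0, [])]                      # (partial cost, partial path)
    for level in layers:
        candidates = [(cost + c, path + [i])
                      for cost, path in beam
                      for i, c in enumerate(level)]
        beam = heapq.nsmallest(beta, candidates)  # prune to the beam width
    return beam[0]                          # cheapest complete path kept

# Toy layered costs: a path picks one node (by index) per level.
layers = [[4, 1, 7], [2, 6, 3], [5, 0, 9]]
cost, path = beam_search(layers, beta=2)
print(cost, path)  # 3.0 [1, 0, 1]
```

With β equal to the level width this degenerates to breadth-first search; with β = 1 it is purely greedy, and intermediate widths trade computation for solution quality.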
c. Tabu Search
Maintains information about how recently a search point has been visited.
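A minimal sketch of that recency memory, using a fixed-length tabu list over an integer search space (the test function, tenure, and iteration budget are illustrative assumptions):

```python
from collections import deque

def tabu_search(f, x0, step=1, tenure=5, iters=50):
    """Minimal tabu search on an integer line: always move to the best
    non-tabu neighbor, even uphill, and remember recently visited points
    in a fixed-length tabu list (the recency memory)."""
    x, best = x0, x0
    tabu = deque([x0], maxlen=tenure)
    for _ in range(iters):
        candidates = [n for n in (x - step, x + step) if n not in tabu]
        if not candidates:
            break
        x = min(candidates, key=f)   # may be worse than f(x): escapes minima
        tabu.append(x)
        if f(x) < f(best):
            best = x
    return best

# Illustrative function: local minimum at x = 8 (value 1), global at x = 0.
f = lambda x: abs(x) if abs(x - 8) > 2 else abs(x - 8) + 1
print(tabu_search(f, x0=8))  # 0
```

Because the best non-tabu neighbor is accepted even when it is worse, the search climbs out of the local minimum at x = 8 and reaches the global minimum at x = 0, which plain descent could not do.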
e. Leapfrog Algorithm
LF is an optimization approach based on the physical problem of the motion of a particle of unit mass in an
n-dimensional conservative force field.
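The leapfrog update that gives the method its name interleaves half-step velocities with full-step positions. A minimal one-dimensional sketch, assuming a harmonic conservative force F(x) = −x and unit mass (both illustrative choices, not from the text):

```python
def leapfrog(x, v_half, dt, force, steps):
    """Unit-mass leapfrog: positions live at integer time steps and
    velocities at half steps, so the two 'leap' over each other and are
    never available at the same instant."""
    for _ in range(steps):
        x = x + v_half * dt              # x(t+dt) from v(t+dt/2)
        v_half = v_half + force(x) * dt  # v(t+3dt/2) from the force at x(t+dt)
    return x, v_half

# Harmonic force field F(x) = -x; the energy should stay near its initial 0.5.
x, v = leapfrog(x=1.0, v_half=0.0, dt=0.01, force=lambda x: -x, steps=1000)
energy = 0.5 * x * x + 0.5 * v * v
print(round(energy, 2))  # 0.5
```

The staggering is what keeps the scheme stable over long runs, but it is also why velocity and position are never known at the same time, as noted in the comparison list below.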
The algorithms for solving unconstrained problems have various advantages and disadvantages. The advantages and
disadvantages of local search, beam search, tabu search, simulated annealing, and the leapfrog algorithm are
compared in the list below:
Local search:
LS consists of running a local search procedure many times, applied to perturbations of previously seen local optima.
Advantages of local search methods are that [7]:
1. in practice they are found to be the best performing algorithms for a large number of problems,
2. they can examine an enormous number of possible solutions in short computation time,
3. they are often more easily adapted to variants of problems and, thus, are more flexible, and
4. they are typically easier to understand and implement than exact methods.
Beam search:
Advantages :
1. potentially reducing the computation, and hence the time, of a search [11]
2. the memory consumption of the search is far less than its underlying search methods [12]
Disadvantages [10] :
1. the search may not result in an optimal goal and may not even reach a goal at all,
2. beam search has the potential to be incomplete.
Despite these disadvantages, beam search has found success in the practical areas of speech recognition,
vision, planning, and machine learning [10].
Tabu search
Disadvantages [6]:
1. Its use in continuous search spaces has not been common, owing to the difficulty of performing
neighborhood movements in a continuous search space.
2. The extra computational cost associated with the local search.
3. The extension of multi-objective tabu search to continuous search spaces, while feasible, may become
impractical because of the discretization of the search space that is required.
The open problem of tabu search is how to keep diversity, so that points not necessarily within the
neighborhood of a candidate solution can be generated.
Leapfrog algorithm:
The advantage of this algorithm is that the velocities are explicitly calculated; the disadvantage is that
they are not calculated at the same time as the positions [5].
4. Constrained Optimization
Many real-world problems have constraints that must be satisfied while they are solved. A constraint puts a
restriction on the search space, specifying regions of the space that are infeasible. There are three types of
constraints. These are:
a. Boundary constraints
Define the borders of the search space. Upper and lower bounds on each dimension of the search space define the
hypercube in which a solution must be found.
b. Equality constraints
Specify that a function of the variables of the problem must be equal to a constant.
c. Inequality constraints
Specify that a function of the variables must be less than or equal to (or greater than or equal to) a constant.
Constraints can be linear or nonlinear. Here is a definition of a constrained optimization problem:

minimize f(x), x = (x1, x2, …, xn)
subject to gm(x) ≤ 0, m = 1, …, ng
hm(x) = 0, m = 1, …, nh
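One widely used constraint-handling idea is a penalty method: fold constraint violations into the objective so that an unconstrained method can be applied. The problem below (minimize (x − 3)² subject to x ≤ 1) and the penalty weight are illustrative assumptions:

```python
def penalized(f, g, r):
    """Unconstrained surrogate: f(x) plus a quadratic penalty for
    violating the inequality constraint g(x) <= 0."""
    return lambda x: f(x) + r * max(0.0, g(x)) ** 2

f = lambda x: (x - 3.0) ** 2        # objective (illustrative)
g = lambda x: x - 1.0               # constraint x <= 1 (illustrative)

# Crude grid minimization of the penalized objective over [-2, 4].
F = penalized(f, g, r=1e4)
xs = [i / 1000.0 for i in range(-2000, 4001)]
x_best = min(xs, key=F)
print(round(x_best, 3))  # 1.0 -- the constrained optimum x* = 1
```

The unconstrained minimizer of f alone would be x = 3, which is infeasible; the penalty pushes the minimizer of the surrogate to the constraint boundary.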
Research on constraint handling methods is extensive in the evolutionary computation (EC) and swarm
intelligence (SI) paradigms. Based on these research efforts, constraint handling methods have been categorized
into a number of classes.
The Lagrangian associated with the problem is

L(x, λ, γ) = f(x) + Σm λm gm(x) + Σm γm hm(x)

The dual problem associated with the primal problem in the equation above is then defined as:

Maximize L(x, λ, γ)
Subject to λm ≥ 0, m = 1, …, ng
5. Conclusion
Optimization is a term used to refer to a branch of computational science concerned with
finding the best solution to a problem. The basic ingredients of every optimization problem are an objective
function, a set of unknowns or variables (x), and a set of constraints.
The characteristics of the problem tell us which method can best be applied to it. The classifying
characteristics are: the number of variables (univariate and multivariate); the types of variables
(continuous problems, integer or discrete optimization problems needing integer variables, mixed-integer problems
needing both continuous and integer variables); the degree of nonlinearity of the objective function (linear,
quadratic, and nonlinear problems); the constraints used (boundary constraints only, constrained problems); the number
of optima (unimodal, multimodal); and the number of optimization criteria (single-objective, multi-objective).
There are several possible future works, e.g. implementing the algorithms for unconstrained problems in a program
(such as MATLAB) and then benchmarking them to compare which method is best (on some single-objective problems).
REFERENCES:
1. Engelbrecht, A.P., 2005, Fundamentals of Computational Swarm Intelligence, John Wiley & Sons,
England
2. I. Sabuncuoglu, M. Bayiz, Job shop scheduling with beam search, European Journal of Operational
Research 118 (1999) 390–412
3. M. Pirlot, General local search methods, European Journal of Operational Research 92 (1996) 493-511
4. H. Akeb et al., A beam search algorithm for the circular packing problem, Computers & Operations
Research 36 (2009) 1513–1528
5. F. Ricca, B. Simeone, Local search algorithms for political districting, European Journal of
Operational Research 189 (2008) 1409–1426
6. Anonymous, Theory of molecular dynamics simulations, Retrieved on April 2, 2010 from
http://www.ch.embnet.org/MD_tutorial/pages/ MD.Part1.html#Leap-frog,
7. Carlos A. Coello Coello, Gary B. Lamont, David A. Van Veldhuizen, (2007), Evolutionary Algorithms
for solving multi-objective problems, Second Edition, Springer
8. Irina Dumitrescu and Thomas Stützle, Combinations of Local Search and Exact Algorithms, in
Applications of Evolutionary Computing, ISSN: 0302-9743 (Print) 1611-3349 (Online), Volume
2611/2003
9. Anonymous, Simulated Annealing, Retrieved on : April 4, 2010, from
http://win.ua.ac.be/~verdonk/courses/opt/2005/sa.pdf,
10. Zhang, W. (1999). State-space search: Algorithms, complexity, extensions, and applications. Springer:
New York.
11. Xu, Y., Fern, A. (2007). On learning linear ranking functions for beam search. Retrieved on March 8,
2009, from http://www.machinelearning.org/proceedings/icml2007/papers/168.pdf
12. Furcy, D., Koenig, S. Limited discrepancy beam search. Retrieved on March 8, 2009, from
http://www.ijcai.org/papers/0596.pd