
Define Optimization.

State the general optimization problem


In mathematics and computer science, an optimization problem is the problem of finding the best solution from all
feasible solutions. Optimization problems can be divided into two categories depending on whether the variables are
continuous or discrete. An optimization problem with discrete variables is known as a combinatorial optimization
problem. In a combinatorial optimization problem, we are looking for an object such as an integer, permutation or graph
from a finite (or possibly countably infinite) set.
Optimization problems are common in many disciplines and various domains. In optimization problems, we have to find
solutions which are optimal or near-optimal with respect to some goals. Usually, we are not able to solve problems in
one step, but we follow some process which guides us through problem solving. Often, the solution process is separated
into different steps which are executed one after the other. Commonly used steps are recognizing and defining
problems, constructing and solving models, and evaluating and implementing solutions.

EXAMPLE 1: A farmer has 2400 ft of fencing and wants to fence off a rectangular field that borders a straight river. He
needs no fence along the river. What are the dimensions of the field that has the largest area?
The next step is to create a corresponding mathematical model:
Maximize: A = xy
Constraint: 2x + y = 2400
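A quick check of this model: substituting the constraint into the objective and setting the derivative to zero recovers the answer. The snippet below is a minimal sketch assuming SymPy is available; the variable names follow the model above.

```python
import sympy as sp  # assumes SymPy is available

x = sp.symbols('x', positive=True)
A = x * (2400 - 2 * x)                  # substitute y = 2400 - 2x into A = x*y
x_opt = sp.solve(sp.diff(A, x), x)[0]   # dA/dx = 2400 - 4x = 0
print(x_opt, 2400 - 2 * x_opt, A.subs(x, x_opt))   # x = 600, y = 1200, A = 720000
```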
EXAMPLE 2: We need to enclose a field with a rectangular fence. We have 500 ft of fencing material and a building is on
one side of the field, so that side won't need any fencing. Determine the dimensions of the field that will enclose the largest
area.
Solution: We first draw a picture that illustrates the general case:
The next step is to create a corresponding mathematical model:
Maximize: A = xy
Constraint: x + 2y = 500
EXAMPLE 3: A printer needs to make a poster that will have a total area of 200 in² and will have 1 in margins on the
sides, a 2 in margin on the top and a 1.5 in margin on the bottom. What dimensions of the poster will give the largest
printed area?
Solution: We first draw a picture. Then we create a corresponding mathematical model:
Maximize: A = (w - 2)(h - 3.5)
Constraint: wh = 200
EXAMPLE 4: Determine the area of the largest rectangle that can be inscribed in a circle of radius 4.
Solution: We first draw a picture:
The next step is to create a corresponding mathematical model:
Maximize: A = (2x)(2y) = 4xy
Constraint: x² + y² = 16
Define Fibonacci series. How is it used in optimization?
In mathematics, the Fibonacci numbers are the numbers in the following integer sequence:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ....

By definition, the first two Fibonacci numbers are 0 and 1, and each subsequent number is the sum of the previous two.
Some sources omit the initial 0, instead beginning the sequence with two 1s.
In mathematical terms, the sequence Fn of Fibonacci numbers is defined by the recurrence relation.

Fn = Fn-1 + Fn-2

with seed values

F0 = 0; F1 = 1

Fibonacci numbers are used in the analysis of financial markets, in strategies such as Fibonacci retracement, and are
used in computer algorithms such as the Fibonacci search technique and the Fibonacci heap data structure. The simple
recursion of Fibonacci numbers has also inspired a family of recursive graphs called Fibonacci cubes for interconnecting
parallel and distributed systems. They also appear in biological settings, such as branching in trees, arrangement of
leaves on a stem, the fruit sprouts of a pineapple, the flowering of an artichoke, an uncurling fern and the arrangement of a
pine cone.
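Since the Fibonacci search technique is mentioned above, here is a minimal sketch of how Fibonacci ratios can drive an interval-reduction (elimination) search for the minimum of a unimodal function. The function name, the choice of n and the test function are illustrative assumptions, not a standard library API.

```python
def fibonacci_search(f, a, b, n=20):
    """Shrink [a, b] using Fibonacci ratios; f is assumed unimodal on [a, b]."""
    fib = [1, 1]
    while len(fib) < n + 2:
        fib.append(fib[-1] + fib[-2])
    x1 = a + (fib[n - 1] / fib[n + 1]) * (b - a)
    x2 = a + (fib[n] / fib[n + 1]) * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(n - 1):
        if f1 < f2:                       # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + (fib[n - k - 2] / fib[n - k]) * (b - a)
            f1 = f(x1)
        else:                             # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + (fib[n - k - 1] / fib[n - k]) * (b - a)
            f2 = f(x2)
    return (a + b) / 2

print(fibonacci_search(lambda x: (x - 2) ** 2, 0.0, 5.0))   # close to 2
```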

Define Lagrange function and the conditions suitable for this.
A function that is used in the solution of problems on a conditional extremum of functions of several variables and
functionals. By means of a Lagrange function one can write down necessary conditions for optimality in problems on a
conditional extremum. One does not need to express some variables in terms of others or to take into account the fact
that not all variables are independent. The necessary conditions obtained by means of a Lagrange function form a closed
system of relations, among the solutions of which the required optimal solution of the problem on a conditional
extremum is to be found. A Lagrange function is used in both theoretical questions of linear and non-linear
programming and in the construction of certain computational methods.
Suppose, for example, that one has the following problem on a conditional extremum of a function of several variables:
Find a maximum or minimum of the function

f(x1, ..., xn)    (1)

under the conditions

gi(x1, ..., xn) = 0,  i = 1, ..., m    (2)

The function defined by the expression

L(x, λ) = f(x1, ..., xn) + Σi λi gi(x1, ..., xn)    (3)

is called the Lagrange function, and the numbers λi are called Lagrange multipliers.
A Lagrange function is used in problems of non-linear programming that differ from the classical problems on a
conditional extremum in the presence of constraints of inequality type, as well as of constraints of equality type: Find a
minimum or maximum of

f(x1, ..., xn)    (6)

under the conditions

gi(x1, ..., xn) ≤ 0,  i = 1, ..., k    (7)

gi(x1, ..., xn) = 0,  i = k + 1, ..., m    (8)
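As an illustration of (1)-(3), the sketch below forms the Lagrange function for the fencing problem of Example 1 (maximise xy subject to 2x + y = 2400) and solves the stationarity conditions symbolically. It assumes SymPy is available.

```python
import sympy as sp  # assumes SymPy is available

x, y, lam = sp.symbols('x y lam', real=True)
f = x * y                        # objective (area), as in Example 1
g = 2 * x + y - 2400             # equality constraint g(x, y) = 0

L = f - lam * g                  # Lagrange function
stationary = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)
print(stationary)                # [{x: 600, y: 1200, lam: 600}]
```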
Classify integer programming problems
Integer programming problems are usually classified as: (i) pure (all-integer) programming problems, in which every decision variable must take an integer value; (ii) mixed-integer programming problems, in which only some of the variables are restricted to integer values; and (iii) zero-one (binary) programming problems, in which the integer variables may take only the values 0 and 1.

What is LP relaxation problem and its significance
In mathematics, the linear programming relaxation of a 0-1 integer program is the problem that arises by replacing the
constraint that each variable must be 0 or 1 by a weaker constraint, that each variable belongs to the interval [0, 1].
That is, for each constraint of the form

xi ∈ {0, 1}

of the original integer program, one instead uses a pair of linear constraints

0 ≤ xi ≤ 1.
The resulting relaxation is a linear program, hence the name. This relaxation technique transforms an NP-hard
optimization problem (integer programming) into a related problem that is solvable in polynomial time (linear
programming); the solution to the relaxed linear program can be used to gain information about the solution to the
original integer program.
LP relaxation
For any IP we can generate an LP (called the LP relaxation) from the IP by taking the same objective function and same
constraints but with the requirement that variables are integer replaced by appropriate continuous constraints
e.g. xi = 0 or 1 can be replaced by the two continuous constraints xi >= 0 and xi <= 1.
We can then solve this LP relaxation of the original IP.
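A small illustration, assuming SciPy is available: the hypothetical 0-1 knapsack below is relaxed by replacing x_i in {0, 1} with 0 <= x_i <= 1, and the optimal value of the relaxed LP is an upper bound on the integer optimum.

```python
from scipy.optimize import linprog  # assumes SciPy is available

# Hypothetical 0-1 knapsack: maximise 10*x1 + 6*x2 + 4*x3
# subject to 5*x1 + 4*x2 + 3*x3 <= 8 and each x_i in {0, 1}.
c = [-10, -6, -4]                   # linprog minimises, so negate to maximise
A_ub = [[5, 4, 3]]
b_ub = [8]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3)   # the LP relaxation
print(res.x, -res.fun)              # fractional optimum 14.5 bounds the IP optimum (14)
```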
The Principle of optimality
To use dynamic programming the problem must observe the principle of optimality: whatever the initial state is, the
remaining decisions must be optimal with regard to the state following from the first decision. Combinatorial problems
may have this property but may use too much memory/time to be solved efficiently.
Statement of Principle of Optimality
"An optimal policy has the property that whatver the initial state and initial decision are the remaining decision must
constitute an optimal policy with regard to the state resulting from the first decision.
If an optimal state P results in a state Q , this initial state to this final state, the portion of the
0____________ P_________________ Q____________ 100
Figure 1: Optimal policy
original must be optimum. That is every part of optimal policy is optimal.

What are the requirements of dynamic programming?
Dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems. It is
applicable to problems exhibiting the properties of overlapping subproblems which are only slightly smaller [1] and
optimal substructure (described below). When applicable, the method takes far less time than naive methods.
The key idea behind dynamic programming is quite simple. In general, to solve a given problem, we need to solve
different parts of the problem (subproblems), then combine the solutions of the subproblems to reach an overall
solution. Often, many of these subproblems are really the same. The dynamic programming approach seeks to solve
each subproblem only once, thus reducing the number of computations: once the solution to a given subproblem has
been computed, it is stored or "memo-ized": the next time the same solution is needed, it is simply looked up. This
approach is especially useful when the number of repeating subproblems grows exponentially as a function of the size of
the input.
DYNAMIC PROGRAMMING REQUIREMENTS
Requirements:
a) Optimal Substructure
b) Overlapping subproblem
Steps:
1) Characterize the structure of an optimal solution
2) Formulate a recursive solution
3) Compute the value of an opt. solution bottom-up. (get value rather than the structure)
4) Construct an optimal solution (structure) from computed information.
Memoization: Top-down, compute and store first time, reuse subsequent times.
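A minimal memoization sketch: the naive recursive Fibonacci computation becomes fast once every subproblem result is stored the first time it is computed and simply looked up afterwards.

```python
from functools import lru_cache

@lru_cache(maxsize=None)     # top-down memoization: each fib(k) is computed once
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(60))               # answered from stored subproblem results, not ~2**60 calls
```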
Examples
Maximum Value Contiguous Subsequence. Given a sequence of n real numbers A(1) ... A(n), determine a contiguous
subsequence A(i) ... A(j) for which the sum of elements in the subsequence is maximized.
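A possible DP sketch for this problem (Kadane's algorithm): the best sum of a contiguous subsequence ending at position i depends only on the best sum ending at position i-1.

```python
def max_contiguous_sum(a):
    """Kadane's DP: track the best subsequence sum ending at the current position."""
    best_ending_here = best_so_far = a[0]
    for x in a[1:]:
        best_ending_here = max(x, best_ending_here + x)
        best_so_far = max(best_so_far, best_ending_here)
    return best_so_far

print(max_contiguous_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))   # 6, from 4 - 1 + 2 + 1
```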
Making Change. You are given n types of coin denominations of values v(1) < v(2) < ... < v(n) (all integers). Assume v(1)
= 1, so you can always make change for any amount of money C. Give an algorithm which makes change for an amount
of money C with as few coins as possible.
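A bottom-up DP sketch for the change-making problem; the denominations and the amount below are illustrative, and a 1-unit coin is assumed so that every amount is reachable.

```python
def min_coins(C, denominations):
    """best[a] = fewest coins needed to make amount a (assumes a 1-unit coin exists)."""
    INF = float('inf')
    best = [0] + [INF] * C
    for a in range(1, C + 1):
        best[a] = 1 + min(best[a - v] for v in denominations if v <= a)
    return best[C]

print(min_coins(11, [1, 2, 5]))   # 3 coins: 5 + 5 + 1
```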
Longest Increasing Subsequence. Given a sequence of n real numbers A(1) ... A(n), determine a subsequence (not
necessarily contiguous) of maximum length in which the values in the subsequence form a strictly increasing sequence.
Box Stacking. You are given a set of n types of rectangular 3-D boxes, where the i^th box has height h(i), width w(i)
and depth d(i) (all real numbers). You want to create a stack of boxes which is as tall as possible, but you can only stack a
box on top of another box if the dimensions of the 2-D base of the lower box are each strictly larger than those of the 2-
D base of the higher box. Of course, you can rotate a box so that any side functions as its base. It is also allowable to use
multiple instances of the same type of box.
Building Bridges. Consider a 2-D map with a horizontal river passing through its center. There are n cities on the
southern bank with x-coordinates a(1) ... a(n) and n cities on the northern bank with x-coordinates b(1) ... b(n). You want
to connect as many north-south pairs of cities as possible with bridges such that no two bridges cross. When connecting
cities, you can only connect city i on the northern bank to city i on the southern bank.
Integer Knapsack Problem (Duplicate Items Forbidden). This is the same as the integer knapsack problem in which duplicates
are allowed, except that here it is forbidden to use more than one instance of each type of item (the 0-1 knapsack problem).
Balanced Partition. You have a set of n integers each in the range 0 ... K. Partition these integers into two subsets such
that you minimize |S1 - S2|, where S1 and S2 denote the sums of the elements in each of the two subsets.
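A possible DP sketch for Balanced Partition using the set of reachable subset sums; it assumes the integers are non-negative, as stated.

```python
def balanced_partition(nums):
    """Return the minimum |S1 - S2| over all two-way partitions of nums."""
    total = sum(nums)
    reachable = {0}                                # subset sums achievable so far
    for v in nums:
        reachable |= {s + v for s in reachable}
    best = min(reachable, key=lambda s: abs(total - 2 * s))   # closest to total / 2
    return abs(total - 2 * best)

print(balanced_partition([3, 1, 4, 2, 2, 1]))      # 1 for this data
```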
How to solve a maximization as a minimization problem?
Any maximization problem can be converted into an equivalent minimization problem (and vice versa) by negating the objective function: max f(x) = -min (-f(x)), with the constraints left unchanged; both problems have the same optimal point x*.
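A small illustration, assuming SciPy is available: to maximise f we minimise -f and negate the value returned.

```python
from scipy.optimize import minimize_scalar  # assumes SciPy is available

f = lambda x: -(x - 3) ** 2 + 5             # concave function to maximise
res = minimize_scalar(lambda x: -f(x))      # minimise the negated objective
print(res.x, -res.fun)                      # maximiser x = 3, maximum value 5
```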

What is the difference between elimination and interpolation methods?
The efficiency of an elimination method is measured in terms of the ratio of the final and initial intervals of uncertainty,
Ln/L0; such methods shrink the interval of uncertainty using only comparisons of function values at selected points.
Interpolation methods were developed for one-dimensional searches within multivariable optimization techniques; they
approximate the objective by a low-order polynomial through sampled points and are generally more efficient than the
Fibonacci-type (elimination) approaches.
What are the roles of exploratory and pattern moves in the Hooke-Jeeves method?
The exploratory move is used to explore the local behaviour of the objective function, and the pattern move is used to
take advantage of the pattern direction.
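A minimal Hooke-Jeeves sketch showing the two kinds of moves, assuming NumPy is available; the initial step size, shrink factor and tolerance are illustrative choices rather than prescribed values.

```python
import numpy as np  # assumes NumPy is available

def exploratory_move(f, base, step):
    """Probe +/- step along each coordinate, keeping any change that lowers f."""
    x, fx = base.copy(), f(base)
    for i in range(len(x)):
        for delta in (step, -step):
            trial = x.copy()
            trial[i] += delta
            ft = f(trial)
            if ft < fx:
                x, fx = trial, ft
                break
    return x, fx

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    base = np.asarray(x0, dtype=float)
    f_base = f(base)
    for _ in range(max_iter):
        new, f_new = exploratory_move(f, base, step)
        if f_new < f_base:
            pattern = new + (new - base)          # pattern move along the improving direction
            base, f_base = new, f_new
            cand, f_cand = exploratory_move(f, pattern, step)
            if f_cand < f_base:
                base, f_base = cand, f_cand
        elif step < tol:
            break
        else:
            step *= shrink                        # no improvement: reduce the step size
    return base, f_base

print(hooke_jeeves(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [5.0, 5.0]))  # near (1, -2)
```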

Why is the sequential linear programming method called the cutting plane method?
In sequential linear programming the nonlinear objective and constraints are linearized about the current point and the
resulting LP is solved. Each new linearized constraint acts as a hyperplane that "cuts off" part of the current feasible
region that cannot contain the optimum; the successive linearizations thus build up the solution by adding cutting planes,
which is why the method is also known as the cutting plane method.

Applications of dynamic programming
Longest Common Subsequence.
Assembly Line Scheduling Problem.
Weighted Interval Scheduling.
Optimal Stopping Problems.
Optimal Binary Search Tree.
Flow Shop Scheduling.
Minimum Weighted Triangulation.
The Manhattan Tourist Problem.
Travelling Salesman Problem.
Shortest Path Problems
(a) Single Source Shortest Path.
(b) Single Destination Shortest Path.
(c) Single Pair Shortest Path.
(d) All Pairs Shortest Path.
Applications of zero one programming
For numerically controlled (NC) machine tools
For automated assembly of PC boards
In new project selection and resource planning

How is initial basic solution obtained in network simplex method?
The method in a nutshell is this. You start with a basic feasible solution of an LP in standard form (usually the one where
all the slack variables are equal to the corresponding right hand sides and all other variables are zero) and replace one
basic variable with one which is currently non-basic to get a new basic solution (since n-m variables remain zero). This is
done in a fashion which ensures that the new basic solution is feasible and its objective value is at least as much as that
of the previous BFS. This is repeated until it is clear that the current BFS can't be improved for a better objective value.
In this way the optimal solution is achieved.
It is clear that one factor is crucial to the method: which variable should replace which. The variable which is replaced is
called the leaving variable and the variable which replaces it is known as the entering variable. The design of the simplex
method is such that the process of choosing these two variables allows two things to happen: firstly, the new
objective value is an improvement on (or at least equals) the current one, and secondly the new solution is feasible.
Let us now explain the method through an example. Consider our old chemical company problem in standard form:
Maximize z = 5x1 + 4x2
subject to
6x1 + 4x2 + s1 = 24,
x1 + 2x2 + s2 = 6,
-x1 + x2 + s3 = 1,
x2 + s4 = 2,
x1, x2, s1, s2, s3, s4 >= 0.
Now an immediate BFS is obtained by putting the non-basic variables x1 and x2 equal to zero. (Clearly the solution thus
obtained, s1 = 24, s2 = 6, s3 = 1, s4 = 2, is feasible for the original problem, as the right hand sides are all non-negative.)
If we consider our objective function z = 5x1 + 4x2, then it is evident that an increase in x1 or x2 will increase our
objective value. (Note that currently both are zero, being non-basic.) A unit increase in x1 will give a 5-fold increase in the
objective value while a unit increase in x2 will give a 4-fold increase. It is logical that we elect to make x1 the entering
variable in the next iteration.
In the tabular form of the simplex method the objective function is usually represented as z - 5x1 - 4x2 = 0.
Also the table contains the system of constraints along with the BFS that is obtained. Only the coefficients are written as
is usual when handling linear systems.
Basic | z | x1 | x2 | s1 | s2 | s3 | s4 | BFS
z     | 1 | -5 | -4 |  0 |  0 |  0 |  0 |   0
s1    | 0 |  6 |  4 |  1 |  0 |  0 |  0 |  24
s2    | 0 |  1 |  2 |  0 |  1 |  0 |  0 |   6
s3    | 0 | -1 |  1 |  0 |  0 |  1 |  0 |   1
s4    | 0 |  0 |  1 |  0 |  0 |  0 |  1 |   2
Now for the next iteration we have to decide the entering and the leaving variables. The entering variable is x1, as we
discussed. In fact, due to our realignment of the objective function, the variable with the most negative coefficient in the
z-row of the simplex table will always be the entering variable for the next iteration. This is known as the optimality
condition. What about the leaving variable? We have to account for the fact that our next basic solution must be feasible,
so our leaving variable must be chosen with this in mind.
To decide the leaving variable we apply what is sometimes called the feasibility condition. That is as follows: we compute
the quotients of the solution coordinates (24, 6, 1 and 2) with the constraint coefficients of the entering variable
(6, 1, -1 and 0). The following ratios are obtained: 24/6 = 4, 6/1 = 6, 1/-1 = -1 and 2/0 = undefined. Ignoring the negative
and the undefined ratios, we select the minimum of the remaining two ratios, which is 4, obtained by dividing 24 (the
current value of s1) by 6. Since the minimum involved s1's current value, we take the leaving variable to be s1.
What is the justification behind this procedure? It is this: the ratios actually represent the intercepts made by the
constraints on the x1 axis (see the graphical representation of the feasible region).
Since currently both x1 and x2 are 0, we are considering the BFS corresponding to the origin. Now, in the next iteration,
according to the simplex method we should get a new BFS, i.e. move to a new corner point of the feasible region. We can
induce an increase in the value of only one variable at a time by making it an entering variable, and since x1 is our
entering variable our plan is to increase the value of x1. From the graph we can see that the value of x1 can be increased
to 4, at the point (4, 0), which is the smallest non-negative intercept with the x1 axis. An increase beyond that point is
infeasible. Also, at (4, 0) the slack variable s1 assumes a zero value, as the first constraint is satisfied as an equality
there, and so s1 becomes the leaving variable.
Now the problem is to determine the new solution. Although any procedure of solving a linear system can be applied at
this stage, usually Gauss Jordan elimination is applied. It is based on a result in linear algebra that the elementary row
transformations on a system [A|b] to [H|c] do not alter the solutions of the system. According to it the columns
corresponding to the basic-variables in the table are given the shape of an identity matrix. (Readers familiar with linear
algebra will recognize that it means that the matrix formed with the basis variable columns is transformed into reduced
row echelon form.) The solution can then be simply read off from the right most solution column (as n-m of the
variables are put to zero and the rest, including z, have coefficient 1 in one constraint each). Since z is also a variable,
its row is treated as one among the constraints comprising the linear system.
The entering variable column is called the pivot column and the leaving variable row is called the pivot row. The
intersection of the pivot column and the pivot row is called the pivot element. In our example the s1 row is the pivot row
and the x1 column is the pivot column.
The computations needed to produce the new basic solution are:
Replace the leaving variable in the 'Basic' column with the entering variable.
New pivot row = Current pivot row ÷ Pivot element
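The worked example can be checked numerically. The sketch below, assuming SciPy is available, solves the same chemical company LP directly and recovers the optimum that the simplex iterations eventually reach.

```python
from scipy.optimize import linprog  # assumes SciPy is available

c = [-5, -4]                        # maximise z = 5*x1 + 4*x2, so negate for linprog
A_ub = [[6, 4], [1, 2], [-1, 1], [0, 1]]
b_ub = [24, 6, 1, 2]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, -res.fun)              # optimum x1 = 3, x2 = 1.5, z = 21
```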
Entering and leaving arcs
The Network Simplex Method is an adaptation of the bounded variable primal simplex algorithm, specifically for the MCF
problem. The basis is represented as a rooted spanning tree of the underlying network, in which variables are
represented by arcs, and the simplex multipliers by node potentials. At each iteration, an entering variable is selected by
some pricing strategy, based on the dual multipliers (node potentials), and forms a cycle with the arcs of the tree. The
leaving variable is the arc of the cycle with the least augmenting flow. The substitution of entering for leaving arc, and
the reconstruction of the tree is called a pivot. When no non-basic arc remains eligible to enter, the optimal solution has
been reached.
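A tiny min-cost-flow instance, assuming the networkx library is available: nx.network_simplex returns the optimal cost and the arc flows once no non-basic arc remains eligible to enter.

```python
import networkx as nx  # assumes networkx is available

G = nx.DiGraph()
G.add_node('s', demand=-4)                        # supply 4 units at s
G.add_node('t', demand=4)                         # demand 4 units at t
G.add_edge('s', 'a', weight=2, capacity=3)
G.add_edge('s', 'b', weight=4, capacity=3)
G.add_edge('a', 't', weight=1, capacity=3)
G.add_edge('b', 't', weight=1, capacity=3)
flow_cost, flow_dict = nx.network_simplex(G)
print(flow_cost, flow_dict)                       # cost 14: 3 units via a, 1 unit via b
```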

What is a neural network? How is it used in optimization?
The term neural network was traditionally used to refer to a network or circuit of biological neurons [1]. The modern
usage of the term often refers to artificial neural networks, which are composed of artificial neurons or nodes. Thus the
term has two distinct usages:
1. Biological neural networks are made up of real biological neurons that are connected or functionally related in a
nervous system. In the field of neuroscience, they are often identified as groups of neurons that perform a
specific physiological function in laboratory analysis.
2. Artificial neural networks are composed of interconnecting artificial neurons (programming constructs that
mimic the properties of biological neurons). Artificial neural networks may either be used to gain an
understanding of biological neural networks, or for solving artificial intelligence problems without necessarily
creating a model of a real biological system. The real, biological nervous system is highly complex: artificial
neural network algorithms attempt to abstract this complexity and focus on what may hypothetically matter
most from an information processing point of view. Good performance (e.g. as measured by good predictive
ability, low generalization error), or performance mimicking animal or human error patterns, can then be used
as one source of evidence towards supporting the hypothesis that the abstraction really captured something
important from the point of view of information processing in the brain. Another incentive for these
abstractions is to reduce the amount of computation required to simulate artificial neural networks, so as to
allow one to experiment with larger networks and train them on larger data sets.

Neural Network For Optimization
An artificial neural network is an information or signal processing system composed of a large number of simple
processing elements, called artificial neurons or simply nodes, which are interconnected by direct links called
connections and which cooperate to perform parallel distributed processing in order to solve a desired computational
task. The potential benefits of neural networks extend beyond the high computation rates provided by massive
parallelism. The neural network models are specified by the net topology, node characteristics, and training or learning
rules. These rules specify an initial set of weights and indicate how weights should be adapted during use to improve
performance. Roughly speaking, these computations fall into two categories: natural problems and optimization
problems. Natural problems, such as pattern recognition, are typically implemented on a feed-forward neural network.
Optimization problems are typically implemented on a feedback network. These networks interconnect the neurons
with a feedback path. A typical feedback neural network is the Hopfield neural network [Hop85]. Figure 4 shows the
circuit structure of the neuron and its functional structure. This differential equation describes neuron j:

Cj duj/dt = Σi Tji vi - uj/Rj + Ij,   vj = g(uj)    (1)

where j = 1, 2, ..., n and g(·) is the sigmoid activation function. It is shown in [Hop85] how to choose the values of the
synaptic weights Tji so that (1) represents the dynamics corresponding to a given energy function. If the energy function
corresponds to an optimization objective, the initialization of the uj's to an initial
configuration will result in an equilibration which settles to a local minimum of the objective function. One famous
example using the neural networks is the Traveling Salesman Problem (TSP) [Wil88], in which a salesman is supposed to
tour a number of cities (visiting each exactly once, then returning to where he started) and desires to minimize the total
distance of the tour. The intercity distances are given as the input, and the desired output is the shortest (or
near-shortest) tour. The operation of the feedback network implies a descent on the energy surface. In order to implement a
solution to the TSP, or any other optimization problem on a feedback network, the energy function is used as a medium.
By designing the network so that the minimum of the energy function coincides with a minimum-length tour, the
network becomes a computer that searches for the minimum tour. This example stirred great excitement since it
suggested that the TSP could be solved approximately, using an analog circuit in microseconds.
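The energy-descent idea can be sketched with a discrete Hopfield-style network; this is a simplified +/-1 illustration of the principle, not the continuous TSP network of [Hop85]. NumPy is assumed.

```python
import numpy as np  # assumes NumPy is available

def energy(W, b, s):
    return -0.5 * s @ W @ s - b @ s

def hopfield_descent(W, b, s, sweeps=20):
    """Asynchronous +/-1 updates; with symmetric W and zero diagonal, energy never increases."""
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1.0 if W[i] @ s + b[i] >= 0 else -1.0
    return s

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 6)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
b = rng.standard_normal(6)
s0 = rng.choice([-1.0, 1.0], size=6)
s1 = hopfield_descent(W, b, s0.copy())
print(energy(W, b, s0), '->', energy(W, b, s1))   # the energy only goes down
```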
Hybrid algorithm
Hybrid algorithms exploit the good properties of different methods by applying them to problems they can efficiently
solve. For example, search is efficient when the problem has many solutions, while inference is efficient in proving
unsatisfiability of overconstrained problems.
Examples of hybrid algorithms

Hybrid algorithms for image processing in the medical field
Hybrid algorithms for the timetabling problem
Hybrid algorithms for the pick-up and delivery vehicle routing problem
Hybrid algorithms for video-based eye tracking for safety purposes

What is a pheromone matrix?
A chemical secreted by an animal, especially an insect, that influences the behavior or development of others of
the same species, often functioning as an attractant of the opposite sex.



In ACO for bin packing, the pheromone matrix works on item sizes, not on the items themselves.
There can be several items of size i or j, but there are fewer item sizes than there are items, so the pheromone matrix stays small.
The pheromone matrix encodes good packing patterns, i.e. combinations of sizes that pack well together.

In ACO-based image edge detection we use artificial ants that move on an image I and thereby form a pheromone matrix
whose entries represent the edge information. The movements of the ants are controlled by the intensity values of the
image pixels. One ant is randomly selected from the A ants in the nth step, and it will consecutively move on the image
for L steps.
The pheromone matrix is updated twice: a local update is performed after the movement of each ant within each
construction step, and a global update is performed after all the ants have moved in each construction step. Each update
is carried out according to its corresponding update equation.
Updating the pheromone matrix
The pheromone matrix can be updated by:
Method 1 (update by origin): an ant only updates the pheromone trail matrices in its own colony. The algorithm using
method 1 is called UnsortBicriterion (single criterion).
Method 2 (update by region): the sequence of solutions along the non-dominated front is split into NC parts of equal
size. Ants that have found solutions in the ith part update the pheromone trails in colony i, i ∈ [1, NC]. The aim is to
explicitly guide the ant colonies to search in different regions of the Pareto front, each of them in one region. The
algorithm using method 2 is called BicriterionMC.

Another approach to the update is the heterogeneous colony approach, in which all the ants in the non-dominated front
of the current generation are allowed to update; this is appropriate when there are not too few ants in the colony.
A further approach is the multi-colony approach, in which the single-colony approach is run several times and the global
non-dominated front is determined from all the iterations.
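A generic sketch of a pheromone-matrix update (evaporation followed by deposits proportional to solution quality); the evaporation rate rho, the constant Q, and the example paths and costs are assumed values.

```python
import numpy as np  # assumes NumPy is available

def update_pheromone(tau, ant_paths, path_costs, rho=0.1, Q=1.0):
    """Evaporate all trails, then let each ant deposit pheromone on the arcs it used."""
    tau = (1.0 - rho) * tau                       # evaporation
    for path, cost in zip(ant_paths, path_costs):
        for i, j in zip(path, path[1:]):
            tau[i, j] += Q / cost                 # cheaper (better) paths deposit more
    return tau

tau = np.ones((4, 4))
tau = update_pheromone(tau, ant_paths=[[0, 2, 3], [0, 1, 3]], path_costs=[5.0, 8.0])
print(tau)
```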

Convex function
In mathematics, a real-valued function defined on an interval is called convex (or convex downward or
concave upward) if the graph of the function lies below the line segment joining any two points of the graph;
or more generally, any two points in a vector space. Equivalently, a function is convex if its epigraph (the set of
points on or above the graph of the function) is a convex set. Well-known examples of convex functions are the
quadratic function x^2 and the exponential function e^x for any real number x.
Convex functions play an important role in many areas of mathematics. They are especially important in the
study of optimization problems where they are distinguished by a number of convenient properties. For
instance, a (strictly) convex function on an open set has no more than one minimum. Even in infinite-
dimensional spaces, under suitable additional hypotheses, convex functions continue to satisfy such properties
and, as a result, they are the most well-understood functionals in the calculus of variations.
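The defining inequality can be checked numerically on sample points; the sketch below tests f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y) at random points (a sampled check for illustration, not a proof).

```python
import math, random

def looks_convex(f, lo, hi, trials=10000):
    for _ in range(trials):
        x, y, t = random.uniform(lo, hi), random.uniform(lo, hi), random.random()
        if f(t * x + (1 - t) * y) > t * f(x) + (1 - t) * f(y) + 1e-9:
            return False                     # a single violation disproves convexity
    return True

print(looks_convex(lambda x: x * x, -10, 10))   # True: the quadratic is convex
print(looks_convex(math.exp, -5, 5))            # True: exp is convex
print(looks_convex(math.sin, 0, math.pi))       # False: sin is concave on (0, pi)
```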



Pattern search methods
The pattern search method of Hooke and Jeeves is a sequential technique, each step of which consists of two kinds of
moves: the exploratory move, to find the local behaviour of the objective function, and the pattern move, to take
advantage of the pattern direction.

Dynamic programming
The dynamic programming technique represents a multistage decision problem as a sequence of single stage
problems. Thus an N-variable problem is represented as a sequence of N single-variable problems that are solved
successively.

Integer programming
In many optimization situations it is entirely appropriate and possible to have fractional values in the solution. For
example, it is possible to use a plate thickness of 2.60 mm in a construction, 3.25 hours of labour time for a project, or a
1.74 nitrate solution for a chemical composition. When, instead, all the variables are constrained to take only integer
values, the optimization problem is known as an integer programming problem.

Augmenting path
An augmenting path is a path of positive residual capacity from the source to the sink; the maximum-flow method
repeatedly finds such a path and adds its flow to the current flow. It can be shown that the flow through a network is
optimal if and only if it admits no augmenting path.

In matching problems, an augmenting path is an alternating path between two free vertices. Augmentation: given an
augmenting path, change its unmatched edges to matched and vice versa, increasing the size of the matching by one.
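A compact sketch of the flow version of this idea (Edmonds-Karp style): repeatedly find a path of positive residual capacity by BFS and augment along it until none remains. The dict-of-dicts capacity format is an illustrative choice.

```python
from collections import deque

def max_flow(capacity, s, t):
    """capacity: dict of dicts of residual capacities; it is modified in place."""
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:          # BFS for a path of positive residual capacity
            u = queue.popleft()
            for v, c in capacity[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                           # no augmenting path left: the flow is optimal
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        bottleneck = min(capacity[u][v] for u, v in path)
        for u, v in path:                         # augment and add reverse (residual) capacity
            capacity[u][v] -= bottleneck
            capacity[v][u] = capacity[v].get(u, 0) + bottleneck
        flow += bottleneck

cap = {'s': {'a': 3, 'b': 2}, 'a': {'t': 2}, 'b': {'t': 3}, 't': {}}
print(max_flow(cap, 's', 't'))                    # 4
```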


P & NP type problems
Any P type problem can be solved in "polynomial time." (A polynomial is a mathematical expression consisting of a sum
of terms, each term including a variable or variables raised to a power and multiplied by a coefficient.) The solution time
of a P type problem is polynomial in the number of bits that it takes to describe the instance of the problem at hand. An
example of a P type problem is finding the way from point A to point B on a map. An NP type problem may require vastly
more time to solve than it takes to describe the problem. An example of an NP type problem is breaking a 128-bit digital cipher.

What is NP?
NP is the set of all decision problems (questions with a yes-or-no answer) for which the 'yes'-answers can
be verified in polynomial time (O(n^k), where n is the problem size and k is a constant) by a
deterministic Turing machine. Polynomial time is sometimes used as the definition of fast or quickly.
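A small illustration of polynomial-time verification using subset-sum, a classic NP problem: checking a claimed 'yes' certificate is just a sum and a comparison, even though finding such a certificate may be hard.

```python
def verify_subset_sum(numbers, target, certificate_indices):
    """The certificate is a claimed subset of indices; verification is linear time."""
    return sum(numbers[i] for i in certificate_indices) == target

nums = [3, 34, 4, 12, 5, 2]
print(verify_subset_sum(nums, 9, [2, 4]))   # True: 4 + 5 = 9, verified without any search
```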
What is P?
P is the set of all decision problems which can be solved in polynomial time by a deterministic Turing
machine. Since such a problem can be solved in polynomial time, its 'yes'-answers can also be verified in
polynomial time. Therefore P is a subset of NP.
What is NP-Complete?
A problem x that is in NP is also in NP-Complete if and only if every other problem in NP can be quickly
(i.e. in polynomial time) transformed into x. In other words:
x is in NP, and
Every problem in NP is reducible to x
So what makes NP-Complete so interesting is that if any one of the NP-Complete problems were to be
solved quickly, then all NP problems could be solved quickly.
NP-Complete comes from:
Nondeterministic Polynomial
Complete - Solve one, Solve them all
There are more NP-Complete problems than provably intractable problems.
What is NP-Hard?
NP-Hard problems are those that are at least as hard as the hardest problems in NP. Note that NP-Complete
problems are also NP-hard. However, not all NP-hard problems are in NP (or are even decision problems),
despite having 'NP' as a prefix. That is, the NP in NP-hard does not mean 'non-deterministic polynomial
time'. Yes, this is confusing, but the usage is entrenched and unlikely to change.

Tabu search and aspiration criteria
Tabu search is a metaheuristic local search algorithm that can be used for solving combinatorial optimization
problems (problems where an optimal ordering and selection of options is desired), for example the traveling
salesman problem.
The solutions admitted to the new neighborhood, N*(x), are determined through the use of memory
structures. Using these memory structures, the search progresses by iteratively moving from the current solution
x to an improved solution in N*(x).
These memory structures form what is known as the tabu list, a set of rules and banned solutions used to filter
which solutions will be admitted to the neighborhood to be explored by the search. In its simplest form,
a tabu list is a short-term set of the solutions that have been visited in the recent past (less than n iterations ago,
where n is the number of previous solutions to be stored; n is also called the tabu tenure).
The memory structures used in tabu search can be divided into three categories
Short-term: The list of solutions recently considered. If a potential solution appears on this list, it cannot be
revisited until it reaches an expiration point.
Intermediate-term: A list of rules intended to bias the search towards promising areas of the search space.
Long-term: Rules that promote diversity in the search process (i.e. regarding resets when the search becomes
stuck in a plateau or a suboptimal dead-end).
The simplest and most commonly used aspiration criterion, found in almost all tabu search implementations, allows a tabu
move when it results in a solution with an objective value better than that of the current best-known solution (since the
new solution has obviously not been previously visited).
The most commonly used stopping criteria in tabu search are
after a fixed number of iterations (or a fixed amount of CPU time);
after some number of consecutive iterations without an improvement in the objective function value (the criterion used
in most implementations);
when the objective function reaches a pre-specified threshold value.
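A minimal tabu-search sketch on bit vectors: a short-term tabu list of recently flipped bits, plus the standard aspiration criterion that overrides tabu status when a move beats the best solution found so far. The tenure, iteration count and toy objective are illustrative assumptions.

```python
import random

def tabu_search(f, n_bits=8, tenure=5, iterations=200):
    current = [random.randint(0, 1) for _ in range(n_bits)]
    best, best_val = current[:], f(current)
    tabu = {}                                    # bit index -> iteration until which it is tabu
    for it in range(iterations):
        candidates = []
        for i in range(n_bits):                  # neighbourhood: flip one bit
            neigh = current[:]; neigh[i] ^= 1
            val = f(neigh)
            if tabu.get(i, -1) < it or val < best_val:   # aspiration: tabu move allowed if it beats the best
                candidates.append((val, i, neigh))
        if not candidates:
            continue
        val, i, current = min(candidates)
        tabu[i] = it + tenure                    # the flipped bit becomes tabu for `tenure` iterations
        if val < best_val:
            best, best_val = current[:], val
    return best, best_val

target = [1, 0, 1, 1, 0, 0, 1, 0]                # toy objective: distance to a hidden pattern
print(tabu_search(lambda s: sum(a != b for a, b in zip(s, target))))
```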

Definition of fuzzy
Fuzzy: not clear, distinct, or precise; blurred.
Definition of fuzzy logic
A form of knowledge representation suitable for notions that cannot be defined precisely, but which
depend upon their contexts.
Fuzzy logic provides a more efficient and resourceful way to design control systems.
Some Examples
Temperature controller (see the membership-function sketch below)
Anti-lock Braking System (ABS)
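A minimal sketch of a fuzzy membership function of the kind a temperature controller might use; the breakpoints defining "warm" are assumed values.

```python
def triangular(x, a, b, c):
    """Triangular membership: rises from a to b, falls from b to c, zero outside."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

for t in (15, 20, 25, 30):
    print(t, "degC -> membership in 'warm':", triangular(t, 15, 22.5, 30))
```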
