Downhill Simplex Search


DOWNHILL SIMPLEX SEARCH

• A derivative-free method for multidimensional function optimization.

• Compared to derivative-based optimization methods, it is not very efficient.

• However, the method is conceptually simple and has a clear geometrical interpretation.

• Proposed by John Nelder and Roger Mead in 1965.

• Also called the amoeba method or the Nelder-Mead method.


CORE OF THE ALGORITHM
• For a function of N variables, the algorithm starts with N+1 points in an N-dimensional
space, which define an initial simplex.
• The downhill simplex search repeatedly replaces the point with the highest function
value in the simplex with a better point.
• Take a random initial starting point P0; the other N points can then be taken as

  Pi = P0 + λ ei,   i = 1, …, N,

where the ei are N unit vectors and λ is a constant setting the size of the initial simplex (see the sketch after this list).


• We write yi for the function value at Pi, and define

  yh = max(yi), attained at the highest point Ph, and
  yl = min(yi), attained at the lowest point Pl.
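As a concrete illustration of this initialization, here is a minimal NumPy sketch; the helper name initial_simplex and the default step size lam are assumptions for illustration, not part of the original notes.

```python
import numpy as np

def initial_simplex(p0, lam=1.0):
    """Build the N+1 vertices P0, P0 + lam*e1, ..., P0 + lam*eN."""
    p0 = np.asarray(p0, dtype=float)
    n = p0.size
    # Each extra vertex steps a distance lam along one coordinate axis ei.
    return np.vstack([p0] + [p0 + lam * np.eye(n)[i] for i in range(n)])
```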
HOW THE ALGORITHM WORKS

• Calculate the function value at each of the vertices of the simplex.

• Sort the points in order of their function values.

• Calculate the centroid P̄ (average) of the N points other than Ph.

• Generate a reflection point P* of Ph through the centroid: P* = (1 + α) P̄ − α Ph, where α > 0 is the reflection coefficient.

• Calculate the function value y* at this point. Depending on y*, there are four cases:

• (1) Reflection away from Ph: if yl ≤ y* and y* is below the highest of the remaining values, accept P* in place of Ph.

• (2) Reflection and expansion away from Ph: if y* < yl (a new minimum), try the expanded point P** = γ P* + (1 − γ) P̄ with γ > 1, and replace Ph by whichever of P* and P** is better.

• (3) Contraction along the line connecting Ph and P̄: if y* would still be the highest value, keep the better of Ph and P*, form P** = β Ph + (1 − β) P̄ with 0 < β < 1, and accept it if it improves on the current worst value.

• (4) Shrinkage towards Pl: if even the contracted point is no better, replace every Pi with (Pi + Pl)/2.

These moves are combined in the sketch below.
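Putting the four cases together, the following is one way the loop might be implemented in NumPy. It is a minimal sketch, not a reference implementation: the names (downhill_simplex, lam, tol, max_iter) are assumptions, and α = 1, β = 0.5, γ = 2 are the commonly used default coefficients.

```python
import numpy as np

def downhill_simplex(f, p0, lam=1.0, alpha=1.0, beta=0.5, gamma=2.0,
                     tol=1e-8, max_iter=500):
    """Minimize f: R^N -> R starting from p0 using the Nelder-Mead moves."""
    p0 = np.asarray(p0, dtype=float)
    n = p0.size
    # Initial simplex: P0 plus one step of size lam along each coordinate axis.
    simplex = np.vstack([p0] + [p0 + lam * np.eye(n)[i] for i in range(n)])
    values = np.apply_along_axis(f, 1, simplex)

    for _ in range(max_iter):
        # Order the vertices so values[0] is the lowest (Pl) and values[-1] the highest (Ph).
        order = np.argsort(values)
        simplex, values = simplex[order], values[order]
        if values[-1] - values[0] < tol:
            break

        centroid = simplex[:-1].mean(axis=0)   # average of all points except Ph

        # (1) Reflect Ph through the centroid.
        p_star = (1 + alpha) * centroid - alpha * simplex[-1]
        y_star = f(p_star)

        if y_star < values[0]:
            # (2) New minimum: try expanding further in the same direction.
            p_exp = gamma * p_star + (1 - gamma) * centroid
            y_exp = f(p_exp)
            if y_exp < y_star:
                simplex[-1], values[-1] = p_exp, y_exp
            else:
                simplex[-1], values[-1] = p_star, y_star
        elif y_star < values[-2]:
            # Reflected point beats at least the second-worst vertex: accept it.
            simplex[-1], values[-1] = p_star, y_star
        else:
            # (3) Contraction: keep the better of Ph and P*, then pull it towards the centroid.
            if y_star < values[-1]:
                simplex[-1], values[-1] = p_star, y_star
            p_con = beta * simplex[-1] + (1 - beta) * centroid
            y_con = f(p_con)
            if y_con < values[-1]:
                simplex[-1], values[-1] = p_con, y_con
            else:
                # (4) Shrink the whole simplex towards the best point Pl.
                simplex = 0.5 * (simplex + simplex[0])
                values = np.apply_along_axis(f, 1, simplex)

    best = int(np.argmin(values))
    return simplex[best], values[best]

# Example (illustrative): minimize the Rosenbrock function from (-1.2, 1.0).
rosenbrock = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
x_best, y_best = downhill_simplex(rosenbrock, [-1.2, 1.0])
```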
