Note 7 - Numerical Optimization


In20 S2 MA1024

7 Numerical Optimization
7.1 Introduction
• Optimization is the term often used for minimizing or maximizing a function.

• Geometrically, a maximum or minimum occurs either at a turning point or at an end point
of the interval on which the function is considered.

• Mathematically, the derivative of the function is zero at a turning point. Moreover, the
second derivative, $f''(x)$, indicates whether the optimum is a minimum or a maximum: if
$f''(x) < 0$, the point is a maximum; if $f''(x) > 0$, the point is a minimum.
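
• For example, for $f(x) = x^2 - 4x$, setting $f'(x) = 2x - 4 = 0$ gives $x = 2$, and $f''(2) = 2 > 0$, so $x = 2$ is a minimum.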

Figure 7.1

• In a one-dimensional optimization problem, we need to find the $x$ at which the derivative
$f'(x)$ is equal to zero.

• In engineering, the quantity that we wish to optimize, $f(x)$, is called the merit function or
objective function, and the quantities that we are free to adjust, $x$, are known as the design
variables.

• There are two main types of optimization:

– Constrained optimization: restrictions or constraints are placed on the design variables.
– Unconstrained optimization: no restrictions are placed on the design variables.

• Sometimes both local and global optima occur, as shown in the figure. Such cases are
called multimodal. If a function has a single optimum (i.e. one maximum or minimum),
it is unimodal.

• In almost all instances, we will be interested in finding the absolute highest or lowest
value of a function. Thus, we must take care that we do not mistake a local result for the
global optimum.


Figure 7.2

7.2 One-Dimensional Unconstrained Optimization


7.2.1 Golden Section Search
• For simplicity, we will focus on the problem of finding a maximum of a one-dimensional
function.

• The method starts with two initial guesses, $x_l$ and $x_u$, that bracket one extremum of $f(x)$.

• Choose two interior points $x_1$ and $x_2$ according to the golden ratio¹:

$$x_1 = x_l + d \tag{7.2}$$
$$x_2 = x_u - d \tag{7.3}$$

where
$$d = \frac{\sqrt{5} - 1}{2}\,(x_u - x_l) \tag{7.4}$$
• If $f(x_1) > f(x_2)$, then the domain of $x$ to the left of $x_2$, from $x_l$ to $x_2$, can be eliminated
because it does not contain the maximum. For this case, $x_2$ becomes the new $x_l$ for the
next round.

Figure 7.3

• If $f(x_2) > f(x_1)$, then the domain of $x$ to the right of $x_1$, from $x_1$ to $x_u$, is eliminated.
In this case, $x_1$ becomes the new $x_u$ for the next round.

¹ Two quantities are in the golden ratio if their ratio is the same as the ratio of their sum to the larger of the
two quantities, i.e. if $a > b > 0$,
$$\frac{a+b}{a} = \frac{a}{b} \overset{\text{def}}{=} \varphi = \frac{1 + \sqrt{5}}{2} \tag{7.1}$$


• A key advantage of this method is that we do not have to recalculate all the function values
for the next iteration.

• For example, in the case $f(x_1) > f(x_2)$, the old $x_1$ becomes the new $x_2$, and therefore the
$f(x_1)$ of the previous step is the same as the $f(x_2)$ of the current step.

Figure 7.4

• For this case, $x_1$ has to be determined as before:

$$\text{new } x_1 = x_l + \frac{\sqrt{5} - 1}{2}\,(x_u - x_l) \tag{7.5}$$

• A similar approach would be used for the alternate case.

• Repeat the steps until the approximate solution is sufficiently close to the true solution.

• Suppose the optimum function value lies in the interval $(x_2, x_u)$.

• Then the true optimum could be at the far right or the far left of this interval, as shown in the figure.

Figure 7.5

• Thus, the maximum distance from the estimate would be, for case (a),

$$x_1 - x_2 = (2R - 1)(x_u - x_l) = 0.236\,(x_u - x_l) \tag{7.6}$$

and, for case (b),

$$x_u - x_1 = (1 - R)(x_u - x_l) = 0.382\,(x_u - x_l) \tag{7.7}$$

where $R = \dfrac{\sqrt{5} - 1}{2}$.
• Case (b) represents the maximum error.

• Therefore, we may use the following criterion to terminate the iterations, stopping once it
falls below a specified tolerance:

$$\text{max error} = (1 - R)\,\frac{x_u - x_l}{x_{opt}} \tag{7.8}$$
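
• The procedure above can be collected into a short program. The following Python function is a minimal sketch (not part of the original notes; the name golden_max, its parameters, and the exact stopping rule are illustrative assumptions) that maximizes $f$ on $[x_l, x_u]$ using equations (7.2)–(7.8):

```python
from math import sqrt

def golden_max(f, xl, xu, tol=1e-4, max_iter=100):
    """Golden-section search for the maximum of f on [xl, xu] (illustrative sketch)."""
    R = (sqrt(5) - 1) / 2                               # golden ratio factor, eq. (7.4)
    x1, x2 = xl + R * (xu - xl), xu - R * (xu - xl)     # interior points, eqs. (7.2)-(7.3)
    f1, f2 = f(x1), f(x2)
    for _ in range(max_iter):
        if f1 > f2:                    # maximum lies in [x2, xu]: discard [xl, x2]
            xl = x2
            x2, f2 = x1, f1            # old x1 is reused as the new x2
            x1 = xl + R * (xu - xl)
            f1 = f(x1)
        else:                          # maximum lies in [xl, x1]: discard [x1, xu]
            xu = x1
            x1, f1 = x2, f2            # old x2 is reused as the new x1
            x2 = xu - R * (xu - xl)
            f2 = f(x2)
        xopt, fopt = (x1, f1) if f1 > f2 else (x2, f2)
        # terminate when the maximum relative error, eq. (7.8), is small enough
        if xopt != 0 and (1 - R) * abs((xu - xl) / xopt) < tol:
            break
    return xopt, fopt
```

Note that each pass of the loop requires only one new function evaluation, which is the main attraction of the method.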

Example 7.1 Use the golden-section search to find the maximum of
$$f(x) = 2\sin(x) - \frac{x^2}{10}$$
within the interval $x_l = 0$ and $x_u = 4$.

Figure 7.6

Solution

• The two interior points are

$$R = \frac{\sqrt{5} - 1}{2} = 0.618 \tag{7.9}$$
$$d = \frac{\sqrt{5} - 1}{2}\,(4 - 0) = 2.472 \tag{7.10}$$
$$x_1 = 0 + 2.472 = 2.472\,, \qquad f(x_1) = 0.63 \tag{7.11}$$
$$x_2 = 4 - 2.472 = 1.528\,, \qquad f(x_2) = 1.765 \tag{7.12}$$


• Since $f(x_2) > f(x_1)$, the maximum is in the interval defined by $x_l$, $x_2$, and $x_1$. That is,
the maximum lies in the interval $[x_l, x_1] = [0, 2.472]$.

• In this step,

$$\text{max error} = (1 - R)\,\frac{x_u - x_l}{x_{opt}} = (1 - 0.618)\,\frac{(4 - 0)}{1.528} = 1 \tag{7.13}$$

• In the second step, the two interior points are

$$\text{new } x_1 = \text{former } x_2 = 1.528\,, \qquad f(x_1) = 1.765 \tag{7.14}$$
$$d = \frac{\sqrt{5} - 1}{2}\,(2.472 - 0) = 1.528 \tag{7.15}$$
$$\text{new } x_2 = 2.472 - 1.528 = 0.944\,, \qquad f(x_2) = 1.531 \tag{7.16}$$

• Since $f(x_1) > f(x_2)$, the maximum is in the interval defined by $x_2$, $x_1$, and $x_u$. That is,
the maximum lies in the interval $[x_2, x_u] = [0.944, 2.472]$.

• In this step,

$$\text{max error} = (1 - R)\,\frac{x_u - x_l}{x_{opt}} = (1 - 0.618)\,\frac{(2.472 - 0)}{1.528} = 0.618 \tag{7.17}$$

• After 21 iterations, the maximum occurs at $x = 1.427527495583624$ with a function value
of $1.775725652504815$, to an accuracy of $10^{-4}$.
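
• As a quick independent check (assuming SciPy is available; this snippet is not part of the original notes), a bounded scalar minimizer applied to $-f(x)$ should land on essentially the same optimum:

```python
from math import sin
from scipy.optimize import minimize_scalar

# Maximize f(x) = 2 sin(x) - x^2/10 on [0, 4] by minimizing -f(x).
res = minimize_scalar(lambda x: -(2 * sin(x) - x**2 / 10),
                      bounds=(0, 4), method="bounded")
print(res.x, -res.fun)   # roughly 1.4276 and 1.7757
```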

7.2.2 Newton’s Method


• Recall that Newton's method to find a root $x$ of a function, such that $f(x) = 0$, is

$$x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)} \tag{7.18}$$

• The optimum occurs at the $x$ where $f'(x) = 0$.

• Thus, Newton's method to find the optimum of a function can be derived as

$$x_{i+1} = x_i - \frac{f'(x_i)}{f''(x_i)} \tag{7.19}$$
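
• As a minimal sketch (not part of the original notes; the function name and parameters are illustrative assumptions), equation (7.19) translates directly into a short iteration, given $f'$ and $f''$:

```python
def newton_optimum(fprime, fsecond, x0, tol=1e-4, max_iter=50):
    """Locate a stationary point of f from f' and f'' via eq. (7.19) (illustrative sketch)."""
    x = x0
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)   # f'(x_i) / f''(x_i)
        x -= step                       # eq. (7.19)
        if abs(step) < tol:             # stop once the update is small
            break
    return x
```

• Whether the resulting point is a maximum or a minimum must still be checked from the sign of $f''(x)$, as in Section 7.1.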

Example 7.2 Use Newton's method to find the maximum of
$$f(x) = 2\sin(x) - \frac{x^2}{10}$$
within the interval $x_l = 0$ and $x_u = 4$, with an initial guess of $x_0 = 2.5$.


Solution

• The first and second derivatives of the function are

$$f'(x) = 2\cos(x) - \frac{x}{5}\,, \qquad f''(x) = -2\sin(x) - \frac{1}{5}$$

• Newton's method gives

$$x_{i+1} = x_i - \frac{2\cos(x_i) - \dfrac{x_i}{5}}{-2\sin(x_i) - \dfrac{1}{5}}$$

$$x_1 = x_0 - \frac{2\cos(x_0) - \dfrac{x_0}{5}}{-2\sin(x_0) - \dfrac{1}{5}} = 2.5 - \frac{2\cos(2.5) - \dfrac{2.5}{5}}{-2\sin(2.5) - \dfrac{1}{5}} = 0.99508$$

$$x_2 = 0.99508 - \frac{2\cos(0.99508) - \dfrac{0.99508}{5}}{-2\sin(0.99508) - \dfrac{1}{5}} = 1.46901$$

• After 3 iterations, the maximum occurs at $x = 1.427527495583624$ with a function value
of $1.775725652504815$, to an accuracy of $10^{-4}$.
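
• The first few iterates of Example 7.2 can be reproduced with a few lines of Python (a self-contained check, not part of the original notes):

```python
from math import sin, cos

def fprime(x):
    return 2 * cos(x) - x / 5        # f'(x)

def fsecond(x):
    return -2 * sin(x) - 1 / 5       # f''(x)

x = 2.5                              # initial guess
for i in range(4):
    x -= fprime(x) / fsecond(x)      # eq. (7.19)
    print(i + 1, x)
# prints roughly 0.99508, 1.46901, 1.42764, 1.42755
```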


