Lec5: fminunc


Finding the minimum of a function of several variables using “fminunc”

fminunc

Find minimum of unconstrained multivariable function

Nonlinear programming solver. Gradient-based.

Finds the minimum of a problem specified by

min f(x)

where f(x) is a function that returns a scalar.

Note: MATLAB provides both gradient-free solvers (fminsearch) and gradient-based solvers (fminunc).
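As a minimal sketch (this comparison is not on the slides, and the quadratic test function is assumed for illustration), both solvers are called the same way:

fun = @(x) (x(1)-1)^2 + (x(2)+2)^2; % assumed test function
x0 = [0,0];
xs = fminsearch(fun,x0) % gradient-free (Nelder-Mead simplex)
xu = fminunc(fun,x0)    % gradient-based (quasi-newton by default)

Both calls converge near [1,-2], the analytic minimizer.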

x = fminunc(fun,x0)
x = fminunc(fun,x0,options)

x = fminunc(fun,x0)

starts at the point x0 and attempts to find a local minimum x of the
function described in fun. The point x0 can be a scalar, vector, or matrix.

fminunc uses either the 'quasi-newton' (default) or the 'trust-region'
algorithm.

The 'trust-region' algorithm requires you to provide the gradient;
otherwise, fminunc uses the 'quasi-newton' algorithm.
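A minimal sketch of selecting the algorithm explicitly (the objective myfun here is a hypothetical example; the option names come from optimoptions). First write a file myfun.m that returns both the value and the gradient, as the 'trust-region' algorithm requires:

function [f,g] = myfun(x)
f = x(1)^2 + 3*x(2)^2; % example objective (assumed)
g = [2*x(1); 6*x(2)];  % its analytic gradient
end

Then select the algorithm and tell fminunc that the gradient is supplied:

options = optimoptions(@fminunc,'Algorithm','trust-region', ...
    'SpecifyObjectiveGradient',true);
[x,fval] = fminunc(@myfun,[1,1],options)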

Example: A polynomial function is given as

f(x) = 3x1^2 + 2x1x2 + x2^2 - 4x1 + 5x2

Find the minimum value of this function.

Solution:
fun = @(x)3*x(1)^2 + 2*x(1)*x(2) + x(2)^2 - 4*x(1) + 5*x(2);

Call fminunc to find a minimum of fun near [1,1].

x0 = [1,1];
[x,fval] = fminunc(fun,x0)

x =
2.2500 -4.7500

fval =

-16.3750
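As a quick check (this verification is not on the slide), the minimizer of the quadratic solves the linear system obtained by setting the gradient to zero:

% grad f = [6*x1 + 2*x2 - 4; 2*x1 + 2*x2 + 5] = 0
A = [6 2; 2 2];
b = [4; -5];
xstar = (A\b)' % returns 2.2500 -4.7500, matching fminunc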

fminunc with gradient information

fminunc can converge faster and more reliably when you provide derivatives.

Example: Rosenbrock function

f(x1, x2) = (1 - x1)^2 + 100(x2 - x1^2)^2

Find the minimum value of this function using fminunc supplied
with gradient information.

Solution:

∇f(x) = [∂f/∂x1, ∂f/∂x2]

∇f(x) = [-400(x2 - x1^2)x1 - 2(1 - x1), 200(x2 - x1^2)]
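A minimal sketch of how this gradient can be supplied (the starting point x0 = [-1,2] is an assumption; the two timed runs below presumably compare fminunc without and with the gradient). Write a file rosenbrock.m that returns the value and the gradient:

function [f,g] = rosenbrock(x)
f = (1 - x(1))^2 + 100*(x(2) - x(1)^2)^2;      % Rosenbrock function
g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1)); % df/dx1
    200*(x(2) - x(1)^2)];                      % df/dx2
end

Then run fminunc with the gradient option set, timing the call:

options = optimoptions(@fminunc,'Algorithm','trust-region', ...
    'SpecifyObjectiveGradient',true);
x0 = [-1,2]; % assumed starting point
tic
[xmin,fmin] = fminunc(@rosenbrock,x0,options)
toc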

xmin =

1.0000 1.0000

fmin =

1.2262e-10

Elapsed time is 0.409549 seconds.

xmin =

1.0000 1.0000

fmin =

1.9886e-17

Elapsed time is 0.361601 seconds.


Example: Find the minimum value of the following function using fminunc

f(x, y) = x e^(-(x^2 + y^2)) + (x^2 + y^2)/20

Solution:
fun = @(x)x(1)*exp(-(x(1)^2 + x(2)^2)) + (x(1)^2 + x(2)^2)/20;

x0 = [1,2];
[x,fval] = fminunc(fun,x0)

The function can also be written in terms of (x,y), but fminunc passes a single
vector argument, so the two-variable form must be wrapped before it is passed in:

fun = @(x,y) x*exp(-(x^2 + y^2)) + (x^2 + y^2)/20;
funv = @(x) fun(x(1),x(2)); % adapt to fminunc's single vector argument

x0 = [1,2];
[x,fval] = fminunc(funv,x0)

x =

-0.6691 0.0000

fval =

-0.4052
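To see why the solver lands here, a quick surface plot helps (this visualization is a sketch, not from the slides):

[X,Y] = meshgrid(-2:0.1:2, -2:0.1:2); % grid around the origin
Z = X.*exp(-(X.^2 + Y.^2)) + (X.^2 + Y.^2)/20;
surf(X,Y,Z)
xlabel('x'), ylabel('y'), zlabel('f(x,y)')

The exponential term creates a dip near x = -0.67, y = 0, while the quadratic term keeps the function growing away from the origin.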

Examine the Solution Process

fun = @(x)x(1)*exp(-(x(1)^2 + x(2)^2)) + (x(1)^2 + x(2)^2)/20;

options = optimoptions(@fminunc,'Display','iter','Algorithm','quasi-newton');

x0 = [1,2];
[x,fval,exitflag,output] = fminunc(fun,x0,options)

First-order
Iteration Func-count f(x) Step-size optimality
0 3 0.256738 0.173
1 6 0.222149 1 0.131
2 9 0.15717 1 0.158
3 18 -0.227902 0.438133 0.386
4 21 -0.299271 1 0.46
5 30 -0.404028 0.102071 0.0458
6 33 -0.404868 1 0.0296
7 36 -0.405236 1 0.00119
8 39 -0.405237 1 0.000252
9 42 -0.405237 1 7.97e-07

Local minimum found.

Optimization completed because the size of the gradient is less than
the value of the optimality tolerance.
x = 1×2

-0.6691 0.0000

fval = -0.4052

exitflag = 1
output = struct with fields:
iterations: 9
funcCount: 42
stepsize: 2.9343e-04
lssteplength: 1
firstorderopt: 7.9721e-07
algorithm: 'quasi-newton'
message: '...'

The exit flag 1 shows that the solution is a local optimum.

The output structure shows the number of iterations, number of function evaluations,
and other information.

The iterative display also shows the number of iterations and function evaluations.
Example: Find the minimum value of the following function using fminunc

f(x) = e^(x1) (4x1^2 + 2x2^2 + 4x1x2 + 2x2 + 1)

Solution: Step 1: Write a file objfun.m.

function f = objfun(x)
f = exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);

Step 2: Set options.

options = optimoptions(@fminunc,'Algorithm','quasi-newton');

Step 3: Invoke fminunc using the options.

x0 = [-1,1]; % Starting guess
[x,fval,exitflag,output] = fminunc(@objfun,x0,options);

This produces the following output:
x,fval,exitflag,output.firstorderopt

x =
0.5000 -1.0000

fval =
3.6609e-16

exitflag =
1

ans =
7.3704e-08
The exit flag tells whether the algorithm converged. exitflag = 1 means a local minimum
was found (the gradient magnitude is below the optimality tolerance).
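As a quick sanity check (this verification is not on the slide): at x = [0.5, -1] the bracketed polynomial is 4(0.5)^2 + 2(-1)^2 + 4(0.5)(-1) + 2(-1) + 1 = 1 + 2 - 2 - 2 + 1 = 0, so f(x) = e^0.5 * 0 = 0, consistent with fval = 3.6609e-16 up to rounding error.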

