Matlab Tutorial 2


2. Optimization Toolbox

In this session we will learn how to use the Optimization Toolbox. The toolbox includes many optimization methods, but we will cover only two representative ones; consult the manual for the others. By the end of this session you should be able to:
1. Solve a given optimization problem.
2. Display the results in various ways.

2.1 Unconstrained Optimization


The function we will learn first is fminunc, which solves unconstrained nonlinear optimization problems. It finds a local minimum near the given initial point.
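Because fminunc only finds a local minimum, the answer depends on the starting point X0. A minimal sketch (not from the tutorial), using the built-in cos as the objective:

```matlab
% fminunc returns a LOCAL minimum near the start point X0.
% cos(x) has minima at odd multiples of pi, so different starting
% points converge to different minimizers.
x1 = fminunc(@cos, 3);   % converges near pi   (about 3.1416)
x2 = fminunc(@cos, 9);   % converges near 3*pi (about 9.4248)
```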

>> help fminunc


FMINUNC finds the minimum of a function of several variables.
X=FMINUNC(FUN,X0) starts at X0 and attempts to find a local minimizer X
of the function FUN. FUN accepts input X and returns a scalar function
value F evaluated at X. X0 can be a scalar, vector or matrix.

X=FMINUNC(FUN,X0,OPTIONS) minimizes with the default optimization
parameters replaced by values in the structure OPTIONS, an argument
created with the OPTIMSET function. See OPTIMSET for details. Used
options are Display, TolX, TolFun, DerivativeCheck, Diagnostics,
FunValCheck, GradObj, HessPattern, Hessian, HessMult, HessUpdate,
InitialHessType, InitialHessMatrix, MaxFunEvals, MaxIter,
DiffMinChange and DiffMaxChange, LargeScale, MaxPCGIter,
PrecondBandWidth, TolPCG, TypicalX. Use the GradObj option to specify
that FUN also returns a second output argument G that is the partial
derivatives of the function df/dX, at the point X. Use the Hessian
option to specify that FUN also returns a third output argument H that
is the 2nd partial derivatives of the function (the Hessian) at the
point X. The Hessian is only used by the large-scale method, not the
line-search method.

[X,FVAL]=FMINUNC(FUN,X0,...) returns the value of the objective
function FUN at the solution X.

[X,FVAL,EXITFLAG]=FMINUNC(FUN,X0,...) returns an EXITFLAG that describes
the exit condition of FMINUNC. Possible values of EXITFLAG and the
corresponding exit conditions are

1 FMINUNC converged to a solution X.
2 Change in X smaller than the specified tolerance.
3 Change in the objective function value smaller than the specified
tolerance (only occurs in the large-scale method).
0 Maximum number of function evaluations or iterations reached.
-1 Algorithm terminated by the output function.
-2 Line search cannot sufficiently decrease the objective function along
the current search direction (only occurs in the medium-scale method).

[X,FVAL,EXITFLAG,OUTPUT]=FMINUNC(FUN,X0,...) returns a structure OUTPUT
with the number of iterations taken in OUTPUT.iterations, the number
of function evaluations in OUTPUT.funcCount, the algorithm used in
OUTPUT.algorithm, the number of CG iterations (if used) in
OUTPUT.cgiterations, the first-order optimality (if used) in
OUTPUT.firstorderopt, and the exit message in OUTPUT.message.

[X,FVAL,EXITFLAG,OUTPUT,GRAD]=FMINUNC(FUN,X0,...) returns the value
of the gradient of FUN at the solution X.

[X,FVAL,EXITFLAG,OUTPUT,GRAD,HESSIAN]=FMINUNC(FUN,X0,...) returns the
value of the Hessian of the objective function FUN at the solution X.

Examples
FUN can be specified using @:
X = fminunc(@myfun,2)

where MYFUN is a MATLAB function such as:

function F = myfun(x)

F = sin(x) + 3;

To minimize this function with the gradient provided, modify
the MYFUN so the gradient is the second output argument:
function [f,g]= myfun(x)
f = sin(x) + 3;
g = cos(x);
and indicate the gradient value is available by creating an options
structure with OPTIONS.GradObj set to 'on' (using OPTIMSET):
options = optimset('GradObj','on');
x = fminunc('myfun',4,options);

FUN can also be an anonymous function:
x = fminunc(@(x) 5*x(1)^2 + x(2)^2,[5;1])

If FUN is parameterized, you can use anonymous functions to capture the problem-
dependent parameters. Suppose you want to minimize the objective given in the
function MYFUN, which is parameterized by its second argument A. Here MYFUN is
an M-file function such as

function [f,g] = myfun(x,a)

f = a*x(1)^2 + 2*x(1)*x(2) + x(2)^2; % function
g = [2*a*x(1) + 2*x(2) % gradient
2*x(1) + 2*x(2)];

To optimize for a specific value of A, first assign the value to A. Then
create a one-argument anonymous function that captures that value of A
and calls MYFUN with two arguments. Finally, pass this anonymous function
to FMINUNC:

a = 3; % define parameter first
options = optimset('GradObj','on'); % indicate gradient is provided
x = fminunc(@(x) myfun(x,a),[1;1],options)

See also optimset, fminsearch, fminbnd, fmincon, @, inline.

Reference page in Help browser
doc fminunc

- Create the following objective function as an m-file (name: myfunc.m) and save it in the current (pwd) directory. For this function the optimal solution is x = [1 1] and the optimal value is 0.

function fval=myfunc(x)
fval = (x(1)-1)^4+(x(2)-1)^2;
return

Then type the following in the Command Window and press Enter; fminunc solves the optimization problem

min f(x) = (x(1)-1)^4 + (x(2)-1)^2

as follows.

>> [x fval]=fminunc(@myfunc,[2 0])
Warning: Gradient must be provided for trust-region method;
using line-search method instead.
> In fminunc at 241
Optimization terminated: relative infinity-norm of gradient less than options.TolFun.

x=
0.9955 1.0000
fval =
4.0847e-010
>>
The optimal solution is x, and the optimal value there is fval.
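The help text above also documents the EXITFLAG and OUTPUT return values; a minimal sketch (not part of the original run) of retrieving these diagnostics for the same problem:

```matlab
% Retrieve diagnostic outputs for the same minimization.
[x, fval, exitflag, output] = fminunc(@myfunc, [2 0]);
exitflag            % positive value indicates convergence
output.iterations   % number of iterations taken
output.funcCount    % number of function evaluations
output.algorithm    % algorithm that was used
```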

With the optimset function you can set the various parameters fminunc uses when solving the optimization. The following shows what is available.
>> help optimset
OPTIMSET Create/alter OPTIM OPTIONS structure.
OPTIONS = OPTIMSET('PARAM1',VALUE1,'PARAM2',VALUE2,...) creates an
optimization options structure OPTIONS in which the named parameters have
the specified values. Any unspecified parameters are set to [] (parameters
with value [] indicate to use the default value for that parameter when
OPTIONS is passed to the optimization function). It is sufficient to type
only the leading characters that uniquely identify the parameter. Case is
ignored for parameter names.
NOTE: For values that are strings, correct case and the complete string
are required; if an invalid string is provided, the default is used.

OPTIONS = OPTIMSET(OLDOPTS,'PARAM1',VALUE1,...) creates a copy of OLDOPTS
with the named parameters altered with the specified values.

OPTIONS = OPTIMSET(OLDOPTS,NEWOPTS) combines an existing options structure
OLDOPTS with a new options structure NEWOPTS. Any parameters in NEWOPTS
with non-empty values overwrite the corresponding old parameters in
OLDOPTS.

OPTIMSET with no input arguments and no output arguments displays all
parameter names and their possible values, with defaults shown in {}
when the default is the same for all functions that use that option -- use
OPTIMSET(OPTIMFUNCTION) to see options for a specific function.

OPTIONS = OPTIMSET (with no input arguments) creates an options structure
OPTIONS where all the fields are set to [].

OPTIONS = OPTIMSET(OPTIMFUNCTION) creates an options structure with all
the parameter names and default values relevant to the optimization
function named in OPTIMFUNCTION. For example,
optimset('fminbnd')
or
optimset(@fminbnd)
returns an options structure containing all the parameter names and
default values relevant to the function 'fminbnd'.

OPTIMSET PARAMETERS for MATLAB
Display - Level of display [ off | iter | notify | final ]
MaxFunEvals - Maximum number of function evaluations allowed [ positive integer ]
MaxIter - Maximum number of iterations allowed [ positive scalar ]
TolFun - Termination tolerance on the function value [ positive scalar ]
TolX - Termination tolerance on X [ positive scalar ]
FunValCheck - Check for invalid values, such as NaN or complex, from
user-supplied functions [ {off} | on ]
OutputFcn - Name of installable output function [ function ]
This output function is called by the solver after each
iteration.

Note: To see OPTIMSET parameters for the OPTIMIZATION TOOLBOX
(if you have the Optimization Toolbox installed), type
help optimoptions

Examples
To create options with the default options for fzero
options = optimset('fzero');
To create an options structure with TolFun equal to 1e-3
options = optimset('TolFun',1e-3);
To change the Display value of options to 'iter'
options = optimset(options,'Display','iter');

See also optimget, fzero, fminbnd, fminsearch, lsqnonneg.

Reference page in Help browser
doc optimset

- In the optimization example above, let us change the termination conditions as follows.

>> option = optimset('TolX',10^(-10),'TolFun',10^(-10));
>> [x fval]=fminunc(@myfunc,[2 0],option)
Warning: Gradient must be provided for trust-region method;
using line-search method instead.
> In fminunc at 241
Optimization terminated: relative infinity-norm of gradient less than options.TolFun.

x=
0.9997 1.0000
fval =
5.5791e-015
>>
Because the termination tolerances are much smaller, a more accurate result is obtained.

- In the optimization example above, let us change how the results are displayed, as follows.

>> option = optimset('Display','iter');
>> [x fval]=fminunc(@myfunc,[2 0],option)
Warning: Gradient must be provided for trust-region method;
using line-search method instead.
> In fminunc at 241
Gradient's
Iteration Func-count f(x) Step-size infinity-norm
0 3 2 4
1 6 0.25 0.25 1
2 9 0.0334958 1 0.366
3 12 2.69034e-005 1 0.00149
4 18 2.24196e-005 10 0.00134
5 21 9.65663e-006 1 0.00232
6 24 3.07446e-006 1 0.00112
7 27 8.78677e-007 1 0.000115
8 30 3.36336e-007 1 0.000212
9 33 1.14228e-007 1 0.000155
10 36 3.47925e-008 1 3.66e-005
11 39 1.16455e-008 1 1.61e-005
12 42 3.98655e-009 1 1.97e-005
13 45 1.26663e-009 1 7.82e-006
14 48 4.0847e-010 1 3.63e-007
Optimization terminated: relative infinity-norm of gradient less than options.TolFun.

x=
0.9955 1.0000
fval =
4.0847e-010

>>
The results are now shown at every iteration.
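The OPTIMSET help above also lists an OutputFcn parameter, a function the solver calls after each iteration. A minimal sketch of such a function (the name my_outfcn is hypothetical, not from the tutorial):

```matlab
function stop = my_outfcn(x, optimValues, state)
% Minimal output function: print the objective value at each iterate.
stop = false;   % returning true would stop the solver early
if strcmp(state, 'iter')
    fprintf('iter %d: f(x) = %g\n', optimValues.iteration, optimValues.fval);
end
```

It is registered with `options = optimset('OutputFcn', @my_outfcn);` before calling the solver.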

- If the user supplies the gradient of the objective function, the Optimization Toolbox finds the optimum more efficiently. To provide the gradient, modify the objective function as follows.

function [fval grad]=myfunc(x)
fval = (x(1)-1)^4+(x(2)-1)^2;
grad = [4*(x(1)-1)^3; 2*(x(2)-1)];
return

Then type the following in the Command Window and press Enter to obtain the results below.

>> options = optimset('GradObj','on','Display','iter');
>> [x fval]=fminunc(@myfunc,[2 0],options)

Norm of First-order
Iteration f(x) step optimality CG-iterations
0 2 4
1 0.197531 1.05409 1.19 1
2 0.0390184 0.222222 0.351 1
3 0.00770735 0.148148 0.104 1
4 0.00152244 0.0987654 0.0308 1
5 0.000300729 0.0658436 0.00913 1
6 5.94032e-005 0.0438957 0.00271 1
7 1.1734e-005 0.0292638 0.000802 1
8 2.31782e-006 0.0195092 0.000238 1
9 4.57842e-007 0.0130061 7.04e-005 1
10 9.04381e-008 0.00867077 2.09e-005 1
Optimization terminated: Relative function value changing by less than
OPTIONS.TolFun.

x=
1.0173 1.0000
fval =
9.0438e-008
>>

- To use additional variables in the objective function besides the independent variable, do the following: add the desired variables to the objective function and append their values to the end of the fminunc argument list.

function [fval grad]=myfunc(x,a,b)
fval = (x(1)-1)^a+(x(2)-1)^b;
grad = [a*(x(1)-1)^(a-1); b*(x(2)-1)^(b-1)];
return

>> options = optimset('GradObj','on','Display','iter');
>> [x fval]=fminunc(@myfunc,[2 0],options,4,2)

Norm of First-order
Iteration f(x) step optimality CG-iterations
0 2 4
1 0.197531 1.05409 1.19 1
2 0.0390184 0.222222 0.351 1
3 0.00770735 0.148148 0.104 1
4 0.00152244 0.0987654 0.0308 1
5 0.000300729 0.0658436 0.00913 1
6 5.94032e-005 0.0438957 0.00271 1
7 1.1734e-005 0.0292638 0.000802 1
8 2.31782e-006 0.0195092 0.000238 1
9 4.57842e-007 0.0130061 7.04e-005 1
10 9.04381e-008 0.00867077 2.09e-005 1
Optimization terminated: Relative function value changing by less than
OPTIONS.TolFun.

x=
1.0173 1.0000
fval =
9.0438e-008
>>
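Passing extra arguments after the options structure, as above, is an older calling style; the fminunc help text earlier recommends capturing parameters with an anonymous function instead. A sketch of the same call in that style:

```matlab
% Capture the parameters a and b in an anonymous function instead of
% appending them to the fminunc argument list.
a = 4; b = 2;
options = optimset('GradObj','on','Display','iter');
[x, fval] = fminunc(@(x) myfunc(x,a,b), [2 0], options);
```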

2.2 Constrained Optimization
The function we will learn next is fmincon, which solves constrained nonlinear optimization problems. It finds a local minimum near the initial point within the region that satisfies the constraints.

>> help fmincon


FMINCON finds a constrained minimum of a function of several variables.
FMINCON attempts to solve problems of the form:
min F(X) subject to: A*X <= B, Aeq*X = Beq (linear constraints)
X C(X) <= 0, Ceq(X) = 0 (nonlinear constraints)
LB <= X <= UB

X=FMINCON(FUN,X0,A,B) starts at X0 and finds a minimum X to the function
FUN, subject to the linear inequalities A*X <= B. FUN accepts input X and
returns a scalar function value F evaluated at X. X0 may be a scalar,
vector, or matrix.

X=FMINCON(FUN,X0,A,B,Aeq,Beq) minimizes FUN subject to the linear equalities
Aeq*X = Beq as well as A*X <= B. (Set A=[] and B=[] if no inequalities exist.)

X=FMINCON(FUN,X0,A,B,Aeq,Beq,LB,UB) defines a set of lower and upper
bounds on the design variables, X, so that a solution is found in
the range LB <= X <= UB. Use empty matrices for LB and UB
if no bounds exist. Set LB(i) = -Inf if X(i) is unbounded below;
set UB(i) = Inf if X(i) is unbounded above.

X=FMINCON(FUN,X0,A,B,Aeq,Beq,LB,UB,NONLCON) subjects the minimization to the
constraints defined in NONLCON. The function NONLCON accepts X and returns
the vectors C and Ceq, representing the nonlinear inequalities and equalities
respectively. FMINCON minimizes FUN such that C(X)<=0 and Ceq(X)=0.
(Set LB=[] and/or UB=[] if no bounds exist.)

X=FMINCON(FUN,X0,A,B,Aeq,Beq,LB,UB,NONLCON,OPTIONS) minimizes with the
default optimization parameters replaced by values in the structure
OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET
for details. Used options are Display, TolX, TolFun, TolCon,
DerivativeCheck, Diagnostics, FunValCheck, GradObj, GradConstr,
Hessian, MaxFunEvals, MaxIter, DiffMinChange and DiffMaxChange,
LargeScale, MaxPCGIter, PrecondBandWidth, TolPCG, TypicalX, Hessian,
HessMult, HessPattern. Use the GradObj option to specify that FUN also
returns a second output argument G that is the partial derivatives of
the function df/dX, at the point X. Use the Hessian option to specify
that FUN also returns a third output argument H that is the 2nd
partial derivatives of the function (the Hessian) at the point X. The
Hessian is only used by the large-scale method, not the line-search
method. Use the GradConstr option to specify that NONLCON also returns
third and fourth output arguments GC and GCeq, where GC is the partial
derivatives of the constraint vector of inequalities C, and GCeq is the
partial derivatives of the constraint vector of equalities Ceq. Use
OPTIONS = [] as a place holder if no options are set.

[X,FVAL]=FMINCON(FUN,X0,...) returns the value of the objective
function FUN at the solution X.

[X,FVAL,EXITFLAG]=FMINCON(FUN,X0,...) returns an EXITFLAG that describes the
exit condition of FMINCON. Possible values of EXITFLAG and the corresponding
exit conditions are

1 First order optimality conditions satisfied to the specified tolerance.
2 Change in X less than the specified tolerance.
3 Change in the objective function value less than the specified tolerance.
4 Magnitude of search direction smaller than the specified tolerance and
constraint violation less than options.TolCon.
5 Magnitude of directional derivative less than the specified tolerance
and constraint violation less than options.TolCon.
0 Maximum number of function evaluations or iterations reached.
-1 Optimization terminated by the output function.
-2 No feasible point found.

[X,FVAL,EXITFLAG,OUTPUT]=FMINCON(FUN,X0,...) returns a structure
OUTPUT with the number of iterations taken in OUTPUT.iterations, the number
of function evaluations in OUTPUT.funcCount, the algorithm used in
OUTPUT.algorithm, the number of CG iterations (if used) in OUTPUT.cgiterations,
the first-order optimality (if used) in OUTPUT.firstorderopt, and the exit
message in OUTPUT.message.

[X,FVAL,EXITFLAG,OUTPUT,LAMBDA]=FMINCON(FUN,X0,...) returns the Lagrange
multipliers at the solution X: LAMBDA.lower for LB, LAMBDA.upper for UB,
LAMBDA.ineqlin is
for the linear inequalities, LAMBDA.eqlin is for the linear equalities,
LAMBDA.ineqnonlin is for the nonlinear inequalities, and LAMBDA.eqnonlin
is for the nonlinear equalities.

[X,FVAL,EXITFLAG,OUTPUT,LAMBDA,GRAD]=FMINCON(FUN,X0,...) returns the
value of the gradient of FUN at the solution X.

[X,FVAL,EXITFLAG,OUTPUT,LAMBDA,GRAD,HESSIAN]=FMINCON(FUN,X0,...) returns
the value of the HESSIAN of FUN at the solution X.

Examples
FUN can be specified using @:
X = fmincon(@humps,...)
In this case, F = humps(X) returns the scalar function value F of the
HUMPS function evaluated at X.

FUN can also be an anonymous function:
X = fmincon(@(x) 3*sin(x(1))+exp(x(2)),[1;1],[],[],[],[],[0 0])
returns X = [0;0].

If FUN or NONLCON are parameterized, you can use anonymous functions to capture
the problem-dependent parameters. Suppose you want to minimize the objective
given in the function MYFUN, subject to the nonlinear constraint NONLCON, where
these two functions are parameterized by their second argument A and B,
respectively.

Here MYFUN and MYCON are M-file functions such as

function f = myfun(x,a)
f = x(1)^2 + a*x(2)^2;

and

function [c,ceq] = mycon(x,b)
c = b/x(1) - x(2);
ceq = [];

To optimize for specific values of A and B, first assign the values to these
two parameters. Then create two one-argument anonymous functions that capture
the values of A and B, and call MYFUN and MYCON with two arguments. Finally,
pass these anonymous functions to FMINCON:

a = 2; b = 1.5; % define parameters first
x = fmincon(@(x)myfun(x,a),[1;2],[],[],[],[],[],[],@(x)mycon(x,b))

See also optimset, fminunc, fminbnd, fminsearch, @, function_handle.

Reference page in Help browser
doc fmincon

>>

- Let us solve the following constrained minimization problem:

min f(x) = (x(1)-1)^4 + (x(2)-1)^2
subject to 0.3 <= x(1) <= 0.9, 1.5 <= x(2) <= 2.9

This is a case where bounds are specified for the variables we seek. The m-file stays the same, but fmincon must be invoked from the Command Window as follows.

function [fval grad]=myfunc(x)
fval = (x(1)-1)^4+(x(2)-1)^2;
grad = [4*(x(1)-1)^3; 2*(x(2)-1)];
return

>> options = optimset('GradObj','on','Display','iter');
>> [x fval]=fmincon(@myfunc,[2 0],[],[],[],[],[0.3 1.5],[0.9 2.9],[],options)

Norm of First-order
Iteration f(x) step optimality CG-iterations
0 1.4656 1.68
1 0.583368 0.554641 0.391 1
2 0.323028 0.407754 0.0741 1
3 0.2581 0.266541 0.00689 1
4 0.250619 0.147868 0.000721 1
5 0.250253 0.112756 0.00021 1
6 0.250147 0.0996705 5.34e-005 1
7 0.250109 0.0786136 9.17e-006 1
8 0.250101 0.0436172 5.13e-007 1
Optimization terminated: first-order optimality less than OPTIONS.TolFun,
and no negative/zero curvature detected in trust region model.

x=
0.8999 1.5000
fval =
0.2501
>>
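The fmincon help above describes a LAMBDA output holding the Lagrange multipliers; a minimal sketch (not part of the original run) of using it to see which bounds are active at the solution:

```matlab
% Inspect Lagrange multipliers for the bound-constrained problem.
options = optimset('GradObj','on');
[x, fval, exitflag, output, lambda] = fmincon(@myfunc, [2 0], ...
    [], [], [], [], [0.3 1.5], [0.9 2.9], [], options);
lambda.upper   % nonzero entries mark active upper bounds
lambda.lower   % nonzero entries mark active lower bounds
```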

- Let us solve the following constrained minimization problem:

min f(x) = (x(1)-1)^4 + (x(2)-1)^2
subject to 0.3 <= x(1) <= 0.9, 0.5 <= x(2) <= 2.9, x(1)^2 + x(2)^2 <= 3

This problem has bound constraints and a nonlinear constraint. To solve it, first write the objective function as the following m-file (name: myfunc.m).

function [fval grad]=myfunc(x)
fval = (x(1)-1)^4+(x(2)-1)^2;
grad = [4*(x(1)-1)^3; 2*(x(2)-1)];

Next, write the nonlinear constraint as the following m-file (name: mynonlinear.m).

function [c ceq]=mynonlinear(x)
c = x(1)^2+x(2)^2-3;
ceq=[];

Then run the following in the Command Window.

>> options = optimset('GradObj','on','Display','iter');
>> [x fval]=fmincon(@myfunc,[0.5 1.5],[],[],[],[],[0.3 0.5],[0.9 2.9],@mynonlinear,options)
Warning: Large-scale (trust region) method does not currently solve this type of
problem, switching to medium-scale (line search).
> In fmincon at 260

max Directional First-order
Iter F-count f(x) constraint Step-size derivative optimality Procedure
0 3 0.3125 -0.2
1 7 0.2501 0 1 0.998 1
2 11 0.000218208 0 1 0.0111 0.0544
3 15 0.0001 0 1 0 0
Optimization terminated: first-order optimality measure less
than options.TolFun and maximum constraint violation is less
than options.TolCon.
Active inequalities (to within options.TolCon = 1e-006):
lower upper ineqlin ineqnonlin
1
x=
0.9000 1.0000
fval =
1.0000e-004
>>

- Let us solve the same constrained minimization problem using only nonlinear constraints:

min f(x) = (x(1)-1)^4 + (x(2)-1)^2
subject to 0.3 <= x(1) <= 0.9, 0.5 <= x(2) <= 2.9 (same bounds as above)
x(1)^2 + x(2)^2 <= 3, x(1) = x(2)

All of the constraints above can be expressed as nonlinear constraints. Write the objective function as the following m-file (name: myfunc.m).

function [fval grad]=myfunc(x)
fval = (x(1)-1)^4+(x(2)-1)^2;
grad = [4*(x(1)-1)^3; 2*(x(2)-1)];

Next, write the nonlinear constraints as the following m-file (name: mynonlinear2.m).

function [c ceq]=mynonlinear2(x)
c(1,1) = x(1)-0.9;
c(2,1) = 0.3-x(1);
c(3,1) = 0.5-x(2);
c(4,1) = x(2)-2.9;
c(5,1) = x(1)^2+x(2)^2-3;
ceq(1,1)=x(1)-x(2);

Then run the following in the Command Window.

>> [x fval]=fmincon(@myfunc,[0.5 1.5],[],[],[],[],[],[],@mynonlinear2,options)

Warning: Large-scale (trust region) method does not currently solve this type of
problem, switching to medium-scale (line search).
> In fmincon at 260

max Directional First-order
Iter F-count f(x) constraint Step-size derivative optimality Procedure
0 3 0.3125 1 Infeasible start point
1 7 0.0664062 2.22e-016 1 0.359 0.75
2 11 0.0101 0 1 -0.0306 0.423
3 15 0.0101 0 1 0 0.4
Optimization terminated: first-order optimality measure less than options.TolFun
and maximum constraint violation is less than options.TolCon.
Active inequalities (to within options.TolCon = 1e-006):
lower upper ineqlin ineqnonlin
1

x=
0.9000 0.9000
fval =

0.0101
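As a sanity check (a sketch, not part of the original run), the constraint functions can be evaluated at the returned solution to confirm feasibility:

```matlab
% Evaluate the nonlinear constraints at the solution returned above.
[c, ceq] = mynonlinear2(x);
max(c)      % should be <= options.TolCon (about 1e-6)
abs(ceq)    % equality residual, should be near zero
```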

2.3 Solving Nonlinear Equations

The fsolve function finds arguments that make a nonlinear function zero.
>> help fsolve
FSOLVE solves systems of nonlinear equations of several variables.

FSOLVE attempts to solve equations of the form:

F(X)=0 where F and X may be vectors or matrices.

X=FSOLVE(FUN,X0) starts at the matrix X0 and tries to solve the
equations in FUN. FUN accepts input X and returns a vector (matrix) of
equation values F evaluated at X.

X=FSOLVE(FUN,X0,OPTIONS) minimizes with the default optimization
parameters replaced by values in the structure OPTIONS, an argument
created with the OPTIMSET function. See OPTIMSET for details. Used
options are Display, TolX, TolFun, DerivativeCheck, Diagnostics,
FunValCheck, Jacobian, JacobMult, JacobPattern, LineSearchType,
LevenbergMarquardt, MaxFunEvals, MaxIter, DiffMinChange and
DiffMaxChange, LargeScale, MaxPCGIter, PrecondBandWidth, TolPCG,
TypicalX. Use the Jacobian option to specify that FUN also returns a
second output argument J that is the Jacobian matrix at the point X.
If FUN returns a vector F of m components when X has length n, then J
is an m-by-n matrix where J(i,j) is the partial derivative of F(i)
with respect to x(j). (Note that the Jacobian J is the transpose of the
gradient of F.)

[X,FVAL]=FSOLVE(FUN,X0,...) returns the value of the equations FUN at X.

[X,FVAL,EXITFLAG]=FSOLVE(FUN,X0,...) returns an EXITFLAG that describes the
exit condition of FSOLVE. Possible values of EXITFLAG and the corresponding
exit conditions are

1 FSOLVE converged to a solution X.
2 Change in X smaller than the specified tolerance.
3 Change in the residual smaller than the specified tolerance.
4 Magnitude of search direction smaller than the specified tolerance.
0 Maximum number of function evaluations or iterations reached.
-1 Algorithm terminated by the output function.
-2 Algorithm seems to be converging to a point that is not a root.
-3 Trust region radius became too small.
-4 Line search cannot sufficiently decrease the residual along the current
search direction.

[X,FVAL,EXITFLAG,OUTPUT]=FSOLVE(FUN,X0,...) returns a structure OUTPUT
with the number of iterations taken in OUTPUT.iterations, the number of
function evaluations in OUTPUT.funcCount, the algorithm used in
OUTPUT.algorithm,
the number of CG iterations (if used) in OUTPUT.cgiterations, the first-order
optimality (if used) in OUTPUT.firstorderopt, and the exit message in
OUTPUT.message.

[X,FVAL,EXITFLAG,OUTPUT,JACOB]=FSOLVE(FUN,X0,...) returns the
Jacobian of FUN at X.

Examples
FUN can be specified using @:
x = fsolve(@myfun,[2 3 4],optimset('Display','iter'))

where MYFUN is a MATLAB function such as:

function F = myfun(x)
F = sin(x);

FUN can also be an anonymous function:

x = fsolve(@(x) sin(3*x),[1 4],optimset('Display','off'))

If FUN is parameterized, you can use anonymous functions to capture the
problem-dependent parameters. Suppose you want to solve the system of
nonlinear equations given in the function MYFUN, which is parameterized
by its second argument A. Here MYFUN is an M-file function such as

function F = myfun(x,a)
F = [ 2*x(1) - x(2) - exp(a*x(1))
-x(1) + 2*x(2) - exp(a*x(2))];

To solve the system of equations for a specific value of A, first assign the
value to A. Then create a one-argument anonymous function that captures
that value of A and calls MYFUN with two arguments. Finally, pass this anonymous
function to FSOLVE:

a = -1; % define parameter first
x = fsolve(@(x) myfun(x,a),[-5;-5])

See also optimset, lsqnonlin, @, inline.

Reference page in Help browser
doc fsolve
>>

- Let us use fsolve to solve the following system:

x(1)^2 + x(2)^2 = 2
x(1)*x(2) + x(1) + 2*x(2) = 4

First, create the following m-file (name: test.m).

function [f]=test(x)
f(1,1)=x(1)^2+x(2)^2-2;
f(2,1)=x(1)*x(2)+x(1)+2*x(2)-4;
return

Then run the following to obtain the root.

>> [x fval] = fsolve(@test,[2 0])
Optimization terminated: first-order optimality is less than options.TolFun.
x=
1.0000 1.0000
fval =
1.0e-007 *
0.1893
-0.0908
>>
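As a quick check (a sketch, not part of the original run), the residual of the system can be evaluated at the returned root:

```matlab
% Verify the root: the residual norm should be near zero.
r = test(x);
norm(r)    % small value, on the order of the solver tolerance
```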
