
Roots and Optimization

ROOTS: OPEN METHODS


The open methods require only a single starting value, or two starting values that
do not necessarily bracket the root. They sometimes diverge, or move away from
the true root, as the computation progresses. However, when the open methods
converge, they usually do so much more quickly than the bracketing methods.

Figure: Graphical depiction of the fundamental difference between the (a) bracketing and (b) and (c) open methods. The open method can either (b) diverge or (c) converge rapidly, depending on the shape of the function and the value of the initial guess.

SIMPLE FIXED-POINT ITERATION:


The formula to predict the root using fixed-point iteration (also called one-point iteration or successive
substitution) is developed by rearranging f(x) = 0 so that x is isolated on one side:

f(x) = 0  →  f(x) + x = 0 + x  →  x = g(x)

which is applied iteratively as x_i+1 = g(x_i).


The approximate percent relative error for each iteration is

εa = |(x_i+1 − x_i) / x_i+1| × 100%

Solution:

The true percent relative error for each iteration is roughly proportional (for this case, by
a factor of about 0.5 to 0.6) to the error from the previous iteration. This property, called
linear convergence, is characteristic of fixed-point iteration.
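To make the procedure concrete, here is a minimal MATLAB sketch of fixed-point iteration applied to x = e^−x (the example used below); the initial guess, tolerance, and variable names are illustrative choices, not prescribed by the method:

% Fixed-point iteration for f(x) = exp(-x) - x = 0, rearranged as x = g(x) = exp(-x)
g = @(x) exp(-x);
x = 0;                                   % illustrative initial guess
es = 1e-4;                               % stopping criterion (percent)
for i = 1:50
    xold = x;
    x = g(xold);                         % x_i+1 = g(x_i)
    ea = abs((x - xold)/x)*100;          % approximate percent relative error
    fprintf('%2d  %10.6f  %10.6f\n', i, x, ea)
    if ea < es, break, end
end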

The concepts of convergence and divergence can be depicted graphically:

Fig. 6.2a is a graph of the function f(x) = e^−x − x.

An alternative graphical approach is to separate the equation into two component parts
(Fig. 6.2b):

y1 = f1(x) = x   and   y2 = f2(x) = e^−x


Figure: Two alternative graphical methods for determining the root of f(x) = e^−x − x: (a) the root at the point where the function crosses the x axis; (b) the root at the intersection of the component functions.

Figure: Illustration of the convergence and divergence of fixed-point iteration. "Cobweb plots" depicting convergence (a and b) and divergence (c and d). Graphs (a) and (c) are called monotone patterns, whereas (b) and (d) are called oscillating or spiral patterns.

The error for any iteration is

E_t,i+1 = g′(ξ) E_t,i

where ξ lies between the true root and the current estimate x_i.

Note: convergence occurs when

|g′(x)| < 1.
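As a quick worked check for the example above: with g(x) = e^−x, g′(x) = −e^−x, so |g′(x)| = e^−x < 1 for all x > 0. Near the root (x ≈ 0.567), |g′(x)| ≈ 0.57, consistent with the convergence criterion and with the error ratio of about 0.5 to 0.6 observed in the iterations.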


NEWTON-RAPHSON:

The Newton-Raphson method projects the tangent at the current guess down to the x axis to obtain a new estimate of the root:

x_i+1 = x_i − f(x_i) / f′(x_i)

Solution:

The error is roughly proportional to the square of the previous error. The number of significant figures of accuracy approximately doubles with each iteration. This behavior is called quadratic convergence.
A Slowly Converging Function with Newton-Raphson:

Notice how the first guess (x = 0.5) is in a region where the slope is near zero. The inset shows how a near-zero slope initially shoots the solution far from the root. Thereafter, the solution very slowly converges on the root.
Four cases where the Newton-Raphson method exhibits poor convergence:

- an inflection point (i.e., f″(x) = 0) occurs in the vicinity of a root
- a tendency to oscillate around a local maximum or minimum
- a tendency to move away from the area of interest because near-zero slopes are encountered
- a zero slope [f′(x) = 0] is a real disaster because it causes division by zero in the Newton-Raphson formula
MATLAB M-file:
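The M-file itself is not reproduced here, but a minimal sketch of a Newton-Raphson function is given below; the function name, argument order, and defaults are illustrative assumptions rather than the exact listing:

function [root,ea,iter] = newtraph(func,dfunc,xr,es,maxit)
% newtraph: Newton-Raphson root location (illustrative sketch)
%   func = f(x), dfunc = f'(x) (anonymous functions), xr = initial guess
%   es = desired percent relative error, maxit = maximum iterations
if nargin < 4 || isempty(es),    es = 1e-4;  end
if nargin < 5 || isempty(maxit), maxit = 50; end
iter = 0;  ea = 100;
while true
    xrold = xr;
    xr = xr - func(xr)/dfunc(xr);        % Newton-Raphson formula
    iter = iter + 1;
    if xr ~= 0, ea = abs((xr - xrold)/xr)*100; end
    if ea <= es || iter >= maxit, break, end
end
root = xr;

For example, [root,ea,iter] = newtraph(@(x) exp(-x)-x, @(x) -exp(-x)-1, 0) locates the root of e^−x − x.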
SECANT METHODS
A potential problem in implementing the Newton-Raphson method is the evaluation of the derivative.
There are certain functions whose derivatives may be difficult or inconvenient to evaluate.
For these cases, the derivative can be approximated by a backward finite difference:

f′(x_i) ≅ (f(x_i−1) − f(x_i)) / (x_i−1 − x_i)

which, substituted into the Newton-Raphson formula, gives the secant method:

x_i+1 = x_i − f(x_i)(x_i−1 − x_i) / (f(x_i−1) − f(x_i))

An alternative approach involves a fractional perturbation δ of the independent variable to estimate the derivative:

f′(x_i) ≅ (f(x_i + δx_i) − f(x_i)) / (δx_i)

which yields the modified secant method:

x_i+1 = x_i − δx_i f(x_i) / (f(x_i + δx_i) − f(x_i))


Please check using MATLAB…!
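As one way to carry out that check, here is a short sketch of the modified secant method in MATLAB, using e^−x − x = 0 as an illustrative test function and δ = 10^−6:

% Modified secant method with fractional perturbation delta
f = @(x) exp(-x) - x;                    % illustrative test function
x = 1;  delta = 1e-6;                    % initial guess and perturbation fraction
for i = 1:50
    xold = x;
    x = x - delta*x*f(x)/(f(x + delta*x) - f(x));   % modified secant formula
    ea = abs((x - xold)/x)*100;          % approximate percent relative error
    if ea < 1e-6, break, end
end
fprintf('root = %.8f after %d iterations\n', x, i)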

If δ is too small, the method can be swamped by roundoff error caused by subtractive cancellation in
the denominator of the modified secant equation. If it is too big, the technique can become inefficient
and even divergent. However, if δ is chosen correctly, the modified secant method provides a nice
alternative for cases where evaluating the derivative is difficult and developing two initial guesses is
inconvenient.

BRENT’S METHOD:
Brent’s root-location method (developed by Richard Brent) is a clever algorithm that combines the
speed of the open methods with the reliability of bracketing: it applies a speedy open method
wherever possible, but reverts to a reliable bracketing method if necessary.

The bracketing technique is the bisection method, whereas the two open methods are the secant method and inverse quadratic interpolation.

Inverse Quadratic Interpolation:

Two parabolas fit to three points. The parabola written as a function of x, y = f(x), has complex roots and hence does not intersect the x axis. In contrast, if the variables are reversed and the parabola is developed as x = f(y), the function does intersect the x axis. For the data points (1, 2), (2, 1), and (4, 5):

Please plot the two functions (i.e., y = f(x) and x = f(y)) using MATLAB…!
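A possible MATLAB sketch for that plot, using polyfit to construct the two parabolas through the three data points (the axis ranges are arbitrary choices):

% Fit and plot the two parabolas through (1,2), (2,1), and (4,5)
xd = [1 2 4];  yd = [2 1 5];
p_yx = polyfit(xd, yd, 2);               % y as a function of x: no real roots
p_xy = polyfit(yd, xd, 2);               % x as a function of y: crosses the x axis
xx = linspace(0, 5);  yy = linspace(-2, 6);
plot(xx, polyval(p_yx, xx), '-', polyval(p_xy, yy), yy, '--', xd, yd, 'o')
xlabel('x'), ylabel('y')
legend('y = f(x)', 'x = f(y)', 'data points')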

Brent’s Method Algorithm:

The general idea behind Brent’s root-finding method is to use one of the quick open methods
whenever possible. In the event that these generate an unacceptable result (i.e., a root
estimate that falls outside the bracket), the algorithm reverts to the more conservative
bisection method. This process is then repeated until the root is located to within an
acceptable tolerance. As might be expected, bisection typically dominates at first but as the
root is approached, the technique shifts to the faster open methods.

fzerosimp (developed by Cleve Moler) is a simplified version of the fzero function, the
professional root-location function employed in MATLAB.
MATLAB FUNCTION: fzero
Syntax:

[x,fx] = fzero(function,x0,options)

where x and fx are the location and value of the root, function is the function being evaluated, x0 is the initial guess (a single value or a two-element bracketing vector), and options is a data structure created by the optimset function.

The parameters commonly used with the fzero function:

The fzero function works as follows. If a single initial guess is passed, it first performs a search to identify
a sign change. This search starts at the single initial guess and then takes increasingly bigger steps in both
the positive and negative directions until a sign change is detected. Thereafter, the fast methods (secant
and inverse quadratic interpolation) are used unless an unacceptable result occurs (e.g., the root estimate
falls outside the bracket). If an unacceptable result happens, bisection is implemented until an acceptable
root is obtained with one of the fast methods. As might be expected, bisection typically dominates at first
but as the root is approached, the technique shifts to the faster methods.
If the optimset function is used to set a loose ("low") maximum tolerance, a less accurate estimate of the root
results:
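For instance (a hedged sketch; the test function x^10 − 1 and the tolerance value are illustrative, not taken from the original example):

>> options = optimset('display','iter','tolx',1e-3);   % show iterations, loose tolerance
>> [x,fx] = fzero(@(x) x^10 - 1, 0.5, options)

With the loose tolerance the returned x is correspondingly less accurate; omitting the options argument (or tightening 'tolx') gives an estimate much closer to the true root at x = 1.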

MATLAB Function: roots

The roots function determines all the roots, both real and complex, of a polynomial.

Polynomials are a special type of nonlinear algebraic equation of the general form:

fn(x) = a1 x^n + a2 x^(n−1) + · · · + an x + an+1

where n is the order of the polynomial and the a's are constant coefficients.

Example:

Polynomials are entered into MATLAB as a row vector of coefficients, ordered from the highest power down to the constant term.

For example, to create the polynomial with roots 0.5 and −1, i.e., (x − 0.5)(x + 1) = x^2 + 0.5x − 0.5:

>> c = [1 0.5 -0.5]

Or, equivalently, by multiplying the two factors with convolution:

>> c = conv([1 -0.5],[1 1])
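Passing the coefficient vector to roots then recovers both roots; a quick check (display formatting varies with MATLAB version):

>> r = roots(c)        % returns the two roots, 0.5 and -1
>> polyval(c, r)       % verification: both values should be (numerically) zero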


The resistance to flow in pipes and conduits is parameterized by a dimensionless number called the friction
factor. For turbulent flow, the Colebrook equation provides a means to calculate the friction factor f:

0 = 1/√f + 2.0 log10( ε/(3.7D) + 2.51/(Re √f) )

where ε = the roughness (m), D = diameter (m), and Re = the Reynolds number:

Re = ρVD/μ

where ρ = the fluid’s density (kg/m3), V = its velocity (m/s), and μ = dynamic viscosity (N · s/m2).

The Reynolds number also serves as the criterion for whether the flow is turbulent (Re > 4000).
Bisection:

False position: similar results

Newton-Raphson:

Using the Swamee-Jain equation to provide the initial guess:


Newton-Raphson:

Using MATLAB’s fzero function:
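A hedged sketch of this approach is shown below; the fluid and pipe parameters (air at roughly room conditions flowing through a small, nearly smooth pipe) are illustrative assumptions, not values taken from the original case study:

% Colebrook friction factor located with fzero (illustrative parameter values)
rho = 1.23;  mu = 1.79e-5;               % density (kg/m^3), dynamic viscosity (N*s/m^2)
D = 0.005;  V = 40;  rough = 1.5e-6;     % diameter (m), velocity (m/s), roughness (m)
Re = rho*V*D/mu;                         % Reynolds number (turbulent, since Re > 4000)
colebrook = @(f) 1/sqrt(f) + 2.0*log10(rough/(3.7*D) + 2.51/(Re*sqrt(f)));
f0 = 0.25/log10(rough/(3.7*D) + 5.74/Re^0.9)^2;   % Swamee-Jain initial guess
f = fzero(colebrook, f0)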

Using simple fixed-point iteration:

Fixed-point iteration yields predictions with relative errors of less than 0.008% in six or fewer iterations!
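A corresponding sketch of the fixed-point version, with the Colebrook equation rearranged as f = g(f) (same illustrative parameters as above):

% Simple fixed-point iteration for the Colebrook equation
rough = 1.5e-6;  D = 0.005;  Re = 1.23*40*0.005/1.79e-5;        % illustrative values
g = @(f) 1/(-2.0*log10(rough/(3.7*D) + 2.51/(Re*sqrt(f))))^2;   % f = g(f)
f = 0.03;                                % initial guess
for i = 1:10
    fold = f;
    f = g(fold);
    ea = abs((f - fold)/f)*100;          % approximate percent relative error
    if ea < 0.008, break, end
end
fprintf('f = %.6f after %d iterations\n', f, i)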

The take-home message from this case study is that even great, professionally developed
software like MATLAB is not always foolproof. Further, there is usually no single method
that works best for all problems. Sophisticated users understand the strengths and
weaknesses of the available numerical techniques. In addition, they understand enough
of the underlying theory so that they can effectively deal with situations where a
method breaks down.
