
Universidade de Brasília

Faculty of Technology
Department of Mechanical Engineering
Graduate Program in Mechanical Sciences
DPG1119 | Numerical Methods in Mechanical Sciences
Prof. Dr. Taygoara Felamingo de Oliveira
February 7, 2022

Assignment 2: Numerical solution of nonlinear equations


Author: Gabriel Barbosa de Araujo

Consider the equation $f(x) = 2\cosh(x/4) - x = 0$ and the function $g(x) = 2\cosh(x/4)$, such that a possible fixed-point iterator is given by

$$x_{k+1} = g(x_k)$$

1. Plot (in the same frame) the curves $y = g(x)$ and $y = x$ for $x \in [0, 10]$. There are two roots, $x_1$ and $x_2$, such that $2 \le x_1 \le 4$ and $8 \le x_2 \le 10$.

Figure 1: Functions $y = g(x)$ and $y = x$. The intersection points represent the roots of $f(x)$.

2. Use the bisection method to find $x_1$ and $x_2$, up to an absolute tolerance $atol = 10^{-8}$, starting at the intervals $x \in [2, 4]$ and $x \in [8, 10]$. How many iterations are needed for each case? Discuss your results.

Results: Bisection Method

    Interval               [2, 4]       [8, 10]
    Numerical solution     2.3576       8.5072
    Absolute error         7.4506e-9    7.4506e-9
    Number of iterations   28           28

By the Intermediate Value Theorem, it was possible to guarantee the existence of a root of $f(x)$ in both intervals, because the product of the function values at the endpoints, $f(a) \cdot f(b)$, is negative. It is interesting to observe that on both intervals the method converged with the same number of iterations and the same precision. That can be explained by looking at the equation that determines the number of iterations needed for convergence of the Bisection Method:

$$n = \log_2\!\left(\frac{b - a}{2\,atol}\right) \qquad \text{(a)}$$

The expression above shows that, if the method converges, it does so in a number of iterations that depends only on the size of the interval $[a, b]$ containing the root and on the absolute tolerance defined at the beginning of the problem. Therefore, since the intervals [2, 4] and [8, 10] have the same length and the same tolerance is used, it makes sense that both solutions converged in the same number of iterations and with the same absolute error.
Another point worth noticing is that the number of iterations is quite high for a problem that provides a lot of information about the function in a relatively small interval. However, by the IVT it is possible to be sure that the method converges to some root even before computing the first iteration, as sketched below.
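
For reference, the loop below is a minimal Python sketch of the bisection scheme described above. It is not the original program: the names and the stopping test on the bracket width are assumptions, chosen so that the iteration count matches the 28 iterations reported for both intervals.

```python
import math

def f(x):
    # Function from the assignment: f(x) = 2*cosh(x/4) - x
    return 2.0 * math.cosh(x / 4.0) - x

def bisection(a, b, atol=1e-8):
    """Halve [a, b] until its width drops below atol (assumed stopping test)."""
    if f(a) * f(b) >= 0:
        raise ValueError("IVT hypothesis fails: f must change sign on [a, b]")
    n = 0
    while (b - a) > atol:
        c = 0.5 * (a + b)
        if f(a) * f(c) <= 0:
            b = c  # the sign change, hence a root, lies in the left half
        else:
            a = c  # otherwise it lies in the right half
        n += 1
    return 0.5 * (a + b), n

for a, b in [(2.0, 4.0), (8.0, 10.0)]:
    root, n = bisection(a, b)
    print(f"[{a:g}, {b:g}]: root ~ {root:.4f} in {n} iterations")
```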

3. There are other methods based on successive divisions of an initial interval. The regula falsi method is an "interval division" algorithm that often converges faster than bisection. Study the regula falsi method and apply it to find the roots $x_1$ and $x_2$ in the same initial intervals given in the previous item. Compare the number of iterations necessary to reach the same $atol$ using regula falsi and bisection. For which situations does bisection converge faster than regula falsi?

Results: Regula Falsi Method vs. Bisection

    Interval               [2, 4]                      [8, 10]
    Method                 Bisection    Regula Falsi   Bisection    Regula Falsi
    Numerical solution     2.3576       2.3576         8.5072       8.5072
    Absolute error         7.4506e-9    4.8910e-9      7.4506e-9    4.7004e-9
    Number of iterations   28           7              28           16

On the interval [2, 4], one can notice a much faster convergence using the Regula Falsi Method. That can be explained by the fact that the magnitude of the second derivative $f''(x)$ is small there; that is, the curve $f(x)$ does not have a sharp concavity close to the root, which allows a greater reduction of the iterated intervals. Conversely, if there were a pronounced concavity in the vicinity of the root, the convergence of this method would be much slower. Another factor that contributes to a slower convergence of the Regula Falsi Method is when the root is close to one of the interval endpoints and the first derivative evaluated at the root and at that endpoint (say, $a$) are close, that is, $f'(a) \approx f'(x^*)$. When these phenomena are strongly accentuated, the Bisection Method converges faster.
On the interval [8, 10], by contrast, the Regula Falsi Method took more than double the number of iterations to converge. After four iterations (see Figure 2), the absolute error decreased only slightly. Analyzing the graph of $f(x)$ there, we see something similar to what was described above, because the derivative at the root is close to the one at the endpoint $a = 8$. Since the sign of $f$ at the new estimates never matches that of $f(b)$, the endpoint $b = 10$ remains fixed while the other endpoint approaches the analytical root only slowly.
Figure 2: Iterations of the Regula Falsi Method in the interval [8, 10].
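
A corresponding sketch of the regula falsi update is shown below. It replaces the midpoint by the x-intercept of the chord through $(a, f(a))$ and $(b, f(b))$; the stopping test on successive estimates is an assumption, since the original code is not reproduced here.

```python
import math

def f(x):
    return 2.0 * math.cosh(x / 4.0) - x

def regula_falsi(a, b, atol=1e-8, max_iter=200):
    """False position: keep a sign-changing bracket, cut it at the chord's root."""
    fa, fb = f(a), f(b)
    c_old = a
    for n in range(1, max_iter + 1):
        c = b - fb * (b - a) / (fb - fa)  # x-intercept of the chord
        fc = f(c)
        if abs(c - c_old) < atol:         # assumed stopping test
            return c, n
        if fa * fc < 0:
            b, fb = c, fc                 # root lies in [a, c]
        else:
            a, fa = c, fc                 # root lies in [c, b]
        c_old = c
    raise RuntimeError("no convergence within max_iter iterations")
```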

4. Use the fixed-point iterator $x_{k+1} = g(x_k)$ to seek the roots in the given intervals. Discuss your results in light of the Fixed-Point Theorem.

Results: Fixed Point Method

    Interval               [2, 4]       [8, 10]
    Numerical solution     2.3576       ---
    Absolute error         7.4506e-9    ---
    Number of iterations   28           ---

Using the fixed-point iterator above, the solution converged slowly on the first interval, taking the same number of iterations as the Bisection Method for an initial guess of $x_0 = 4$, and 16 iterations for $x_0 = 2$. On the interval [8, 10], the method diverges for all initial values $x_0$ located in [8.5072, 10]. For initial guesses in [8, 8.5072) the method converges, but toward the root at 2.3576. A sketch of this loop follows.
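
Below is a minimal sketch of the fixed-point loop, again with assumed names and stopping test; the divergence guard is mine, added because $g$ grows without bound for guesses beyond the second root.

```python
import math

def fixed_point(x0, atol=1e-8, max_iter=200):
    """Iterate x <- g(x) = 2*cosh(x/4) until successive iterates agree to atol."""
    g = lambda x: 2.0 * math.cosh(x / 4.0)
    x = x0
    for n in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < atol:
            return x_new, n
        if x_new > 1e6:  # crude divergence guard (cosh would soon overflow)
            raise RuntimeError(f"iteration diverges from x0 = {x0}")
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")
```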

Figure 3: g(x) (in blue) with the endpoints of [2, 4] highlighted.

Figure 4: g(x) (in blue) with the endpoints of [8, 10] highlighted.
By the Fixed-Point Theorem, it is possible to guarantee the existence of a fixed point only in the interval [2, 4], because there the function is bounded between $g(2) = 2.255$ and $g(4) = 3.086$, thus satisfying the condition $a \le g(x) \le b$ for every $x \in [2, 4]$. On the other interval this is not possible, because $g(x)$ takes values between $g(8) = 7.524$ and $g(10) = 12.265$, which fall outside [8, 10].
On the first interval it is also possible to guarantee that the fixed point is unique, because the maximum absolute value of the first derivative there is $|g'(4)| = 0.5876 < 1$. That value is directly linked to the convergence rate, which can be calculated through the following expression (defining $\rho = \max_x |g'(x)|$):

$$r = -\log_{10} \rho \;\Rightarrow\; r = -\log_{10} 0.5876 \simeq 0.2310$$

The Bisection Method shows $r \simeq 0.301$; as these values are relatively close, linear convergence with many iterations would be expected, which indeed occurred, at least for $x_0 = 4$.
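
Both hypotheses of the theorem, the self-mapping condition and the contraction bound on $|g'|$, can be checked numerically by sampling. A small, purely indicative sketch (the sampling density is an arbitrary choice):

```python
import numpy as np

g  = lambda x: 2.0 * np.cosh(x / 4.0)
dg = lambda x: 0.5 * np.sinh(x / 4.0)  # g'(x)

for a, b in [(2.0, 4.0), (8.0, 10.0)]:
    xs = np.linspace(a, b, 1001)
    self_mapping = bool(((g(xs) >= a) & (g(xs) <= b)).all())  # a <= g(x) <= b ?
    rho = float(np.abs(dg(xs)).max())                         # max |g'| on [a, b]
    rate = -np.log10(rho) if rho < 1.0 else float("nan")      # r = -log10(rho)
    print(f"[{a:g}, {b:g}]: self-mapping: {self_mapping}, "
          f"max|g'| = {rho:.4f}, r = {rate:.4f}")
```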

5. Implement the Newton-Raphson and Secant methods for the same problem and use them to find the roots.

• Newton-Raphson Method iterator for this problem:

$$x_{k+1} = x_k - \frac{2\cosh(x_k/4) - x_k}{0.5\sinh(x_k/4) - 1}$$

• Secant Method iterator:

$$x_{k+1} = x_k - \frac{\left(2\cosh(x_k/4) - x_k\right)\left(x_k - x_{k-1}\right)}{\left(2\cosh(x_k/4) - x_k\right) - \left(2\cosh(x_{k-1}/4) - x_{k-1}\right)}$$

Results: Newton-Raphson Method vs. Secant

    Interval               [2, 4]                    [8, 10]
    Method                 Newton       Secant       Newton       Secant
    Numerical solution     2.3576       2.3576       8.5072       8.5072
    Absolute error         9.8349e-10   5.7323e-12   3.5527e-15   7.2831e-14
    Number of iterations   5            7            5            8

The initial guess for the Newton-Raphson Method was the midpoint of each interval ($x_0 = 3$ for [2, 4] and $x_0 = 9$ for [8, 10]), while for the Secant Method I chose the endpoints as starting points. Both methods showed faster convergence than the Bisection, Regula Falsi (for the second interval) and Fixed Point Methods. The high precision reached with only a few iterations was also remarkable.
Unlike the Fixed Point Method, the convergence of the Newton-Raphson Method does not depend on the choice of a function $g(x)$, which is an advantage. However, it requires knowledge of the first derivative $f'(x)$ and of the function's local behavior. For example, a local minimum where $f'(x) = 0$ is a critical region where this method may not converge. Therefore, there is a strong dependence on the initial guess as well.
The Secant Method is a variation of the Newton-Raphson Method (also referred to as a quasi-Newton method), and therefore usually shows a similar number of iterations and similar precision, as seen in the results. One of the differences is the convergence rate: according to Ascher (2011), Newton-Raphson converges quadratically, while the Secant Method converges superlinearly.
6. When $f''(x)$ is available, it is possible to derive an improved Newton-Raphson method. To do that, expand $f(x_k - \delta x_k)$ in a Taylor series around $x_k$, keeping the second-order term. Isolating $\delta x_k$ and wisely choosing between the two options (hint: the value of $f'$ is the key to this choice), a new method can be designed by setting $x_{k+1} = x_k - \delta x_k$. Derive the improved NR method and give it a try on the present problem.

Expanding $f(x_k - \delta x_k)$ in a Taylor series around $x_k$, keeping the second-order term:

$$f(x_k - \delta x_k) = f(x_k) - f'(x_k)\,\delta x_k + \frac{1}{2} f''(x_k)\,\delta x_k^2$$

Setting this expression to zero gives a quadratic equation in $\delta x_k$:

$$\frac{1}{2} f''(x_k)\,\delta x_k^2 - f'(x_k)\,\delta x_k + f(x_k) = 0$$
Applying the quadratic formula (Bhaskara's formula) to this 2nd-degree polynomial:

$$\delta x_k = \frac{f'(x_k) \pm \sqrt{f'^2(x_k) - 2 f''(x_k)\, f(x_k)}}{f''(x_k)}$$

which yields the two candidates

$$\delta x_k^{(1)} = \frac{f'(x_k) - \sqrt{f'^2(x_k) - 2 f''(x_k)\, f(x_k)}}{f''(x_k)}, \qquad
\delta x_k^{(2)} = \frac{f'(x_k) + \sqrt{f'^2(x_k) - 2 f''(x_k)\, f(x_k)}}{f''(x_k)}$$
As $x_k$ tends to the solution $x^*$, $f(x_k) \to 0$ and the square root tends to $|f'(x_k)|$. If the second solution were chosen (written here for $f'(x_k) > 0$), the increment would tend to the finite value $2 f'(x_k)/f''(x_k)$ instead of vanishing, and the iteration would keep overshooting the root. The better choice is therefore $\delta x_k^{(1)}$, in which the square root carries the same sign as $f'(x_k)$, so that $\delta x_k \to 0$ near the solution; this is the role of the hint about $f'$, since for $f'(x_k) < 0$ the roles of the two candidates are exchanged. The iterative equation for an improved Newton-Raphson Method is then:

$$x_{k+1} = x_k - \frac{f'(x_k) - \sqrt{f'^2(x_k) - 2 f''(x_k)\, f(x_k)}}{f''(x_k)}$$

Results: Improved Newton-Raphson Method

    Interval               [2, 4]       [8, 10]
    Numerical solution     2.3576       2.3576
    Absolute error         2.9009e-9    1.4067e-11
    Number of iterations   4            5

The results show that the improved Newton-Raphson Method with the second-order term converges over the whole interval [0, 10]. However, for initial guesses inside the interval [8, 10] the method always converged toward the first root, located between $x_a = 2$ and $x_b = 4$. As with the first-order Newton-Raphson Method, this one also depends strongly on the nature of the function in the vicinity of the roots, since a distant initial guess is likely to cause rapid divergence. Locally, however, convergence is fully met.
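
A sketch of the derived iterator follows. Here the sign in front of the square root is made to follow the sign of $f'(x_k)$, per the derivation above; this branch choice is my reading of the hint, and a fixed-sign implementation would have a different basin of attraction, which may explain the behavior reported above for guesses in [8, 10].

```python
import math

def improved_newton(x0, atol=1e-8, max_iter=50):
    """Newton-Raphson with the second-order Taylor term kept."""
    x = x0
    for n in range(1, max_iter + 1):
        fx  = 2.0 * math.cosh(x / 4.0) - x     # f(x_k)
        dfx = 0.5 * math.sinh(x / 4.0) - 1.0   # f'(x_k)
        d2f = 0.125 * math.cosh(x / 4.0)       # f''(x_k)
        disc = dfx * dfx - 2.0 * d2f * fx
        if disc < 0.0:
            raise RuntimeError("quadratic model has no real root at this step")
        # The square root takes the sign of f'(x_k) so the step vanishes near x*
        dx = (dfx - math.copysign(math.sqrt(disc), dfx)) / d2f
        x -= dx
        if abs(dx) < atol:
            return x, n
    raise RuntimeError("no convergence within max_iter iterations")
```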

7. Compare the convergence rate for all numerical methods in this assignment.

The convergence rate for the bisection method was already calculated in item 4, giving $r \simeq 0.301$. For the Fixed Point Method, we have $r = 0.231$. Figures (5) and (6) below show the convergence rates for the numerical methods used in this assignment. The per-iteration values for the Bisection Method and for the Fixed Point Method were close to those calculated above for both intervals. And indeed, the most efficient methods were the Newton-Raphson Method and the improved Newton-Raphson Method, both showing quadratic convergence behavior. Newton-Raphson without the 2nd-order term showed better results overall, as it was able to solve the equation for both intervals with very few iterations. As presented in the literature (Ascher, 2011), the Secant and Regula Falsi convergence rates show superlinear behavior, starting with a higher value and then slowly decreasing.
It is interesting to notice that the most stable and robust method, Bisection, was indeed one of the slowest, while the quickest ones were more sensitive to the initial guess and to the behavior of the derivatives of $f$ near $x^*$.

Figure 5: Convergence rate for all methods used to find the root in [2, 4].

Figure 6: Convergence rate for all methods used to find the root in [8, 10].
