NUMERICAL METHODS COMPILATION - Honculada - Angel


NUMERICAL METHODS

HONCULADA, ANGEL M.
ME 613 - MEngg-ME CITU

a compilation of different
Numerical Methods
as course requirement of
ME 613 (Numerical Methods)
in the Master of Engineering program
at Cebu Institute of Technology - University
Cebu City, Philippines.
© 2017

Engr. ANGEL M. HONCULADA


Student

Engr. ANNALOU P. VILLANUEVA, MSECE


Professor
INTRODUCTION

Numerical algorithms are almost as old as human civilization. The Rhind Papyrus (c. 1650
BC) of ancient Egypt describes a rootfinding method for solving a simple equation.
Archimedes of Syracuse (287-212 BC) created much new mathematics, including the “method
of exhaustion” for calculating lengths, areas, and volumes of geometric figures. When used as a
method to find approximations, it is in much the spirit of modern numerical integration; and it
was an important precursor to the development of the calculus by Isaac Newton and Gottfried
Leibnitz.

A major impetus to developing numerical procedures was the invention of the calculus by
Newton and Leibnitz, as this led to accurate mathematical models for physical reality, first in
the physical sciences and eventually in the other sciences, engineering, medicine, and business.
These mathematical models cannot usually be solved explicitly, and numerical methods to
obtain approximate solutions are needed. Another important aspect of the development
of numerical methods was the creation of logarithms by Napier (1614) and others, giving
a much simpler manner of carrying out the arithmetic operations of multiplication,
division, and exponentiation. Newton created a number of numerical methods for solving
a variety of problems, and his name is attached today to generalizations of his original
ideas. Of special note is his work on rootfinding and polynomial interpolation. Following
Newton, many of the giants of mathematics of the 18th and 19th centuries made major
contributions to the numerical solution of mathematical problems. Foremost among these
are Leonhard Euler (1707-1783), Joseph-Louis Lagrange (1736-1813), and Karl Friedrich
Gauss (1777-1855). Up to the late 1800’s, it appears that most mathematicians were quite
broad in their interests, and many of them were interested in and contributed to numerical
analysis. (K. Atkinson)

Numerical methods are techniques by which mathematical problems are formulated so that
they can be solved with arithmetic operations. Although there are many kinds of numerical
methods, they have one common characteristic: they invariably involve large numbers of
tedious arithmetic calculations. It is little wonder that with the development of fast, efficient
digital computers, the role of numerical methods in engineering problem solving has increased
dramatically in recent years. (S. Chapra / R. Canale)

With the growth in importance of using computers to carry out numerical procedures in solving
mathematical models of the world, an area known as scientific computing or computational
science has taken shape during the 1980s and 1990s. This area looks at the use of numerical
analysis from a computer science perspective. It is concerned with using the most powerful
tools of numerical analysis, computer graphics, symbolic mathematical computations, and
graphical user interfaces to make it easier for a user to set up, solve, and interpret complicated
mathematical models of the real world. (K. Atkinson)
CONTENTS

I. FIXED-POINT ITERATION

II. NEWTON-RAPHSON METHOD (ROOTS)

III. BISECTION METHOD

IV. SECANT METHOD

V. FALSE POSITION METHOD

VI. TAYLOR SERIES

VII. JACOBI METHOD

VIII. GAUSS-SEIDEL METHOD

IX. NEWTON-RAPHSON METHOD (SYSTEMS OF EQUATIONS)

X. GAUSS ELIMINATION

XI. GAUSS-JORDAN ELIMINATION

XII. LU DECOMPOSITION

XIII. LAGRANGE APPROXIMATION

XIV. LEAST SQUARE METHOD (LINEAR)

XV. LEAST SQUARE METHOD (POLYNOMIAL)

XVI. NON-LINEAR CURVE FITTING

XVII. COMPOSITE TRAPEZOIDAL RULE

XVIII. COMPOSITE SIMPSON’S RULE

XIX. EULER’S METHOD

XX. HEUN’S METHOD


FIXED-POINT ITERATION
DEFINITION / OBJECTIVE:
Fixed-point iteration is a method for finding a root of a given equation. It is also known as
one-point iteration or successive substitution. There are infinitely many ways to introduce an
equivalent fixed-point problem for a given equation.
ALGORITHM:
- This iteration first reformulates the given equation into an equivalent fixed-point problem, a
formula that predicts the root, by rearranging f(x) = 0 so that x = g(x).

- Given an initial guess of the root, xi, the iterative formula for a new estimate is: xi+1 = g(xi).

- The resulting value is denoted by xi+1; the process is then repeated, this time substituting xi+1
into the right side, until convergence occurs or until the iteration is terminated.

- The error is then computed as [(new - old) / new] in terms of xi+1.
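A minimal Python sketch of this iteration, for illustration only (it is not part of the original compilation; the tolerance, iteration cap, and the choice of g are assumptions):

def fixed_point(g, x0, tol=1e-9, max_iter=100):
    """Iterate x_{i+1} = g(x_i) until the relative error drops below tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        err = abs((x_new - x) / x_new)   # (new - old) / new
        x = x_new
        if err < tol:
            break
    return x

# g3(x) = -1 / (x - 6) from the worked example below, starting guess x0 = 1
root = fixed_point(lambda x: -1 / (x - 6), 1.0)
print(root)   # approaches 0.171572875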
EXAMPLE:
Determine the root of the equation f(x) = x² - 6x + 1 = 0
SOLUTION:
POSSIBLE SOLUTION:
g1(x) = (x² + 1) / 6
g2(x) = (6x - 1)^0.5
g3(x) = -1 / (x - 6)

INITIAL VALUE: 1
i xi+1 g1(x) E g2(x) E g3(x) E
0 x₁ 0.333333333 2.236067977 0.2
1 x₂ 0.185185185 80 3.523692362 36.54190697 0.172413793 16
2 x₃ 0.172382259 7.427055703 4.488001134 21.48637539 0.171597633 0.475624257
3 x₄ 0.171619274 0.444579986 5.091955106 11.86094457 0.171573604 0.014005112
4 x₅ 0.171575529 0.025495865 5.436150351 6.331599065 0.171572897 0.000412276
5 x₆ 0.171573027 0.001458363 5.622890903 3.321077276 0.171572876 1.21363E-05
6 x₇ 0.171572884 8.34059E-05 5.721655828 1.726159846 0.171572875 3.57258E-07
7 x₈ 0.171572876 4.77007E-06 5.773208377 0.892961865 0.171572875 1.05167E-08
8 x₉ 0.171572875 2.72805E-07 5.799935367 0.460815305 0.171572875 3.09582E-10
9 x₁₀ 0.171572875 1.5602E-08 5.81374339 0.237506572 0.171572875 9.10773E-12
10 x₁₁ 0.171572875 8.92298E-10 5.820864226 0.122332973 0.171572875 2.75011E-13
11 x₁₂ 0.171572875 5.10227E-11 5.824533059 0.062989306 0.171572875 0
12 x₁₃ 0.171572875 2.94424E-12 5.826422432 0.032427665 0.171572875 0
13 x₁₄ 0.171572875 1.45594E-13 5.827395181 0.016692685 0.171572875 0
14 x₁₅ 0.171572875 1.61771E-14 5.82789594 0.00859245 0.171572875 0
15 x₁₆ 0.171572875 0 5.828153707 0.004422804 0.171572875 0

INITIAL VALUE: 2
i xi+1 g1(x) E g2(x) E g3(x) E
0 x₁ 0.833333333 3.31662479 0.25
1 x₂ 0.282407407 195.0819672 4.347384126 23.70987486 0.173913043 43.75
2 x₃ 0.179958991 56.92875717 5.00842338 13.1985498 0.171641791 1.323251418
3 x₄ 0.172064206 4.588278067 5.389855312 7.076849179 0.171574904 0.038984184
4 x₅ 0.171601015 0.269923344 5.598136464 3.720544392 0.171572935 0.001147614
5 x₆ 0.171574485 0.015462935 5.708661733 1.936097711 0.171572877 3.37826E-05
6 x₇ 0.171572967 0.000884425 5.76645215 1.002183238 0.171572875 9.94468E-07
7 x₈ 0.171572881 5.05814E-05 5.796439675 0.517343854 0.171572875 2.92744E-08
8 x₉ 0.171572876 2.8928E-06 5.811939267 0.266685388 0.171572875 8.6174E-10
9 x₁₀ 0.171572875 1.65442E-07 5.81993433 0.137373758 0.171572875 2.53819E-11
10 x₁₁ 0.171572875 9.46176E-09 5.824054085 0.070736888 0.171572875 7.44148E-13
11 x₁₂ 0.171572875 5.41141E-10 5.826175805 0.03641703 0.171572875 1.61771E-14
12 x₁₃ 0.171572875 3.09469E-11 5.827268213 0.018746494 0.171572875 0
13 x₁₄ 0.171572875 1.77948E-12 5.827830581 0.009649692 0.171572875 0
14 x₁₅ 0.171572875 9.70628E-14 5.828120064 0.004967014 0.171572875 0
15 x₁₆ 0.171572875 0 5.828269073 0.002556651 0.171572875 0

INITIAL VALUE: 3
i xi+1 g1(x) E g2(x) E g3(x) E
0 x₁ 1.666666667 4.123105626 0.333333333
1 x₂ 0.62962963 164.7058824 4.872230881 15.3754055 0.176470588 88.88888889
2 x₃ 0.232738912 170.5304519 5.313509695 8.304846328 0.171717172 2.76816609
3 x₄ 0.175694567 32.46790492 5.557072806 4.382938989 0.171577123 0.081624324
4 x₅ 0.17181143 2.260115466 5.687041132 2.285341759 0.171573 0.002402915
5 x₆ 0.171586528 0.131072187 5.755193028 1.184180894 0.171572879 7.07353E-05
6 x₇ 0.171573656 0.007502216 5.790609482 0.611618752 0.171572875 2.08225E-06
7 x₈ 0.17157292 0.000429079 5.808929066 0.315369387 0.171572875 6.12958E-08
8 x₉ 0.171572878 2.45395E-05 5.818382455 0.162474528 0.171572875 1.80438E-09
9 x₁₀ 0.171572875 1.40344E-06 5.823254651 0.083667918 0.171572875 5.31257E-11
10 x₁₁ 0.171572875 8.02639E-08 5.825764148 0.043075839 0.171572875 1.553E-12
11 x₁₂ 0.171572875 4.59036E-09 5.82705628 0.022174694 0.171572875 4.85314E-14
12 x₁₃ 0.171572875 2.62523E-10 5.827721483 0.011414458 0.171572875 0
13 x₁₄ 0.171572875 1.50286E-11 5.828063906 0.005875428 0.171572875 0
14 x₁₅ 0.171572875 8.41211E-13 5.828240166 0.003024243 0.171572875 0
15 x₁₆ 0.171572875 6.47085E-14 5.828330893 0.001556648 0.171572875 0
16 x₁₇ 0.171572875 0 5.828377592 0.000801239 0.171572875 0

Based on the results of the iterations, the root of the given function is x = 0.171572875

GRAPH:

f(x) = x² - 6x + 1

x = 0.171572875
NEWTON-RAPHSON METHOD (ROOTS)
DEFINITION / OBJECTIVE:
Newton’s method (also known as the Newton–Raphson method), named after Isaac Newton and
Joseph Raphson, is a method for finding successively better approximations to the roots (or
zeroes) of a real-valued function. It is one example of a root-finding algorithm.

ALGORITHM:
- Solve for the first derivative f’(x) of the given equation f(x) = 0.

- Given an initial guess of the root, xi, the iterative formula for a new estimate is:
xi+1 = xi - f(xi) / f’(xi)

- The resulting value is denoted by xi+1; the process is then repeated, this time substituting xi+1
into the right side, until convergence occurs or until the iteration is terminated.

- The error is then computed as [(new - old) / new] in terms of xi+1.
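A minimal Python sketch of this update rule (illustrative only, not part of the original text; the derivative is supplied analytically, as in the worked example below):

def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x_{i+1} = x_i - f(x_i) / f'(x_i) until the step becomes negligible."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol * max(1.0, abs(x_new)):
            return x_new
        x = x_new
    return x

f  = lambda x: x**3 - x**2 + x - 2
df = lambda x: 3*x**2 - 2*x + 1
print(newton_raphson(f, df, 1.0))   # about 1.353209964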

EXAMPLE:
Determine the root of the equation f(x) = x³ - x² + x - 2 = 0
SOLUTION:
solving for: f’(x) = 3x² - 2x + 1

INITIAL VALUE: 1
i xᵢ₊₁ f(xᵢ) f'(xᵢ) E
0 x₁ 1 -1 2
1 x₂ 1.5 0.625 4.75 33.33333333
2 x₃ 1.368421053 0.058317539 3.880886427 9.615384615
3 x₄ 1.353394192 0.000697796 3.788239134 1.110309221
4 x₅ 1.353209992 1.03825E-07 3.787111861 0.013612115
5 x₆ 1.353209964 0 3.787111693 2.02595E-06
6 x₇ 1.353209964 0 3.787111693 0
7 x₈ 1.353209964 0 3.787111693 0
8 x₉ 1.353209964 0 3.787111693 0
9 x₁₀ 1.353209964 0 3.787111693 0
10 x₁₁ 1.353209964 0 3.787111693 0

Based on the results of the iterations, the root of the given function is x = 1.353209964

GRAPH:

f(x) = x³ - x² + x - 2

x = 1.353209964
BISECTION METHOD
DEFINITION / OBJECTIVE:
The bisection method is the simplest among all the numerical schemes for solving transcendental
equations. This scheme is based on the intermediate value theorem for continuous functions. The
algorithm can be defined as follows. Suppose we need a root for f(x) = 0 and we have an error
tolerance of ε (the absolute error in calculating the root must be less than ε).
ALGORITHM:
- Consider initial values of two numbers xl and xu at which f has different signs.

- Define xr = (xl + xu) / 2. If |xu - xr| ≤ ε, accept xr as the root and stop. If f(xl)·f(xr) ≤ 0, set xr
as the new xu; otherwise, set xr as the new xl. Then repeat the process for the next iteration.

- The error is then computed as [(new - old) / new] in terms of xr.
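A minimal Python sketch of the bracketing loop described above (illustrative; the tolerance and iteration cap are assumptions):

from math import exp

def bisection(f, xl, xu, tol=1e-6, max_iter=100):
    """Halve the bracket [xl, xu], on which f changes sign, until it is narrower than tol."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    for _ in range(max_iter):
        xr = (xl + xu) / 2
        if abs(xu - xr) <= tol:
            break
        if f(xl) * f(xr) <= 0:
            xu = xr          # the sign change lies in [xl, xr]
        else:
            xl = xr          # the sign change lies in [xr, xu]
    return xr

print(bisection(lambda x: 0.5*exp(x) - 5*x + 2, 0.0, 1.0))   # about 0.5783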
EXAMPLE:
Consider finding the root of f(x) = 0.5e^x - 5x + 2 on the interval [0, 1]
SOLUTION:
initial value: xl = 0; xu = 1

i xl xu xr f(xl) f(xu) f(xr) E


1 0 1 0.5 2.5 -1.64086 0.324361 1
2 0.5 1 0.75 0.324361 -1.64086 -0.6915 0.5
3 0.5 0.75 0.625 0.324361 -0.6915 -0.19088 0.25
4 0.5 0.625 0.5625 0.324361 -0.19088 0.065027 0.125
5 0.5625 0.625 0.59375 0.065027 -0.19088 -0.06337 0.0625
6 0.5625 0.59375 0.578125 0.065027 -0.06337 0.000721 0.03125
7 0.578125 0.59375 0.585938 0.000721 -0.06337 -0.03135 0.015625
8 0.578125 0.585938 0.582031 0.000721 -0.03135 -0.01532 0.007813
9 0.578125 0.582031 0.580078 0.000721 -0.01532 -0.0073 0.003906
10 0.578125 0.580078 0.579102 0.000721 -0.0073 -0.00329 0.001953
11 0.578125 0.579102 0.578613 0.000721 -0.00329 -0.00128 0.000977
12 0.578125 0.578613 0.578369 0.000721 -0.00128 -0.00028 0.000488
13 0.578125 0.578369 0.578247 0.000721 -0.00028 0.00022 0.000244
14 0.578247 0.578369 0.578308 0.00022 -0.00028 -3.1E-05 0.000122
15 0.578247 0.578308 0.578278 0.00022 -3.1E-05 9.45E-05 6.1E-05
16 0.578278 0.578308 0.578293 9.45E-05 -3.1E-05 3.18E-05 3.05E-05
17 0.578293 0.578308 0.5783 3.18E-05 -3.1E-05 4.16E-07 1.53E-05
18 0.5783 0.578308 0.578304 4.16E-07 -3.1E-05 -1.5E-05 7.63E-06
19 0.5783 0.578304 0.578302 4.16E-07 -1.5E-05 -7.4E-06 3.81E-06
20 0.5783 0.578302 0.578301 4.16E-07 -7.4E-06 -3.5E-06 1.91E-06

Based on the results of the iterations, the root of the given function is x = 0.578301

GRAPH:

f(x) = 0.5e^x - 5x + 2

x = 0.578301
SECANT METHOD

DEFINITION / OBJECTIVE:
In numerical analysis, the secant method is a root-finding algorithm that uses a succession of
roots of secant lines to better approximate a root of a function f(x). The secant method can be
thought of as a finite difference approximation of Newton’s method.

ALGORITHM:
- Consider initial values of two numbers x0 and x1.

- Define xi+1 = xi - [f(xi) · (xi-1 - xi)] / [f(xi-1) - f(xi)]. Then repeat the process for the next iteration.

- The error is then computed as [(new - old) / new] in terms of xi+1.
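A minimal Python sketch of the secant update (illustrative; the tolerance and iteration cap are assumptions):

from math import sin, exp

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Iterate x_{i+1} = x_i - f(x_i)*(x_{i-1} - x_i) / (f(x_{i-1}) - f(x_i))."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x0 - x1) / (f0 - f1)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

print(secant(lambda x: 3*x + sin(x) - exp(x), 0.0, 1.0))   # about 0.360422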

EXAMPLE:
Find the root of 3x + sin x - e^x = 0
SOLUTION:
initial value: x₀ = 0; x₁ = 1
i xᵢ₊₁ f(x) E
0 x₀ 0 -1
1 x₁ 1 1.123189
2 x₂ 0.47099 0.265159 112.3189
3 x₃ 0.307508 -0.13482 53.16313
4 x₄ 0.362613 0.005479 15.19657
5 x₅ 0.360461 9.95E-05 0.596946
6 x₆ 0.360422 -7.8E-08 0.011045
7 x₇ 0.360422 1.11E-12 8.65E-06
8 x₈ 0.360422 0 1.23E-10
9 x₉ 0.360422 0 0

Based on the results of the iterations, the root of the given function is x = 0.360422

GRAPH:

f(x) = 3x + sin x - e^x

x = 0.360422
FALSE POSITION METHOD

DEFINITION / OBJECTIVE:
The false position method or regula falsi method is a term for problem-solving methods in
arithmetic, algebra, and calculus. In simple terms, these methods begin by attempting to evaluate a
problem using test (“false”) values for the variables, and then adjust the values accordingly.

ALGORITHM:
- Consider initial values of two numbers xl and xu that bracket the root.

- Define xr = xu - [f(xu) · (xl - xu)] / [f(xl) - f(xu)]. Replace whichever bound has the same sign
of f as xr, then repeat the process for the next iteration.

- The error is then computed as [(new - old) / new] in terms of xr.
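A minimal Python sketch of the regula falsi loop (illustrative; the stopping rule and iteration cap are assumptions):

def false_position(f, xl, xu, tol=1e-12, max_iter=100):
    """Place xr at the x-intercept of the line through (xl, f(xl)) and (xu, f(xu))."""
    xr = xl
    for _ in range(max_iter):
        xr_old = xr
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))
        if xr != 0 and abs((xr - xr_old) / xr) < tol:
            break
        if f(xl) * f(xr) <= 0:
            xu = xr          # keep the sign change in [xl, xr]
        else:
            xl = xr          # keep the sign change in [xr, xu]
    return xr

f = lambda x: x**3 - 0.165*x**2 + 3.993e-4
print(false_position(f, 0.0, 0.11))   # about 0.06238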
EXAMPLE:
Consider finding the root of f(x) = x³ - 0.165x² + 3.993×10⁻⁴ on the interval [0, 0.11]

SOLUTION:
initial value: xl = 0; xu = 0.11

i xl xu xr f(xl) f(xu) f(xr) E


1 0 0.11 0.066 0.0003993 -0.0002662 -3.2E-05
2 0 0.066 0.06111 0.0003993 -3.194E-05 1.13E-05 8
3 0.061111 0.066 0.06239 1.132E-05 -3.194E-05 -1.1E-07 2.050265
4 0.061111 0.06239 0.06238 1.132E-05 -1.131E-07 -3.3E-10 0.020292
5 0.061111 0.062378 0.06238 1.132E-05 -3.347E-10 -9.9E-13 6E-05
6 0.061111 0.062378 0.06238 1.132E-05 -9.896E-13 -2.9E-15 1.77E-07
7 0.061111 0.062378 0.06238 1.132E-05 -2.926E-15 -8.6E-18 5.25E-10
8 0.061111 0.062378 0.06238 1.132E-05 -8.565E-18 0 1.52E-12

Based on the results of the iterations, the root of the given function is x = 0.06238

GRAPH:

f(x) = x³ - 0.165x² + 3.993×10⁻⁴

x = 0.06238
TAYLOR SERIES
DEFINITION / OBJECTIVE:
The Taylor series method is one of the earliest analytic-numeric algorithms for approximate solution
of initial value problems for ordinary differential equations. It provides a means to predict a function
value at one point in terms of the function value and its derivatives at another point.

ALGORITHM:
- Consider the expansion point xi and the evaluation point xi+1, with step size h = xi+1 - xi.

- Determine the derivatives up to the nth order. Then evaluate the sum Σ f^(n)(xi)·h^n/n! for
n = 0, 1, 2, ...

- The error is then computed as [(new - old) / new] in terms of the running sum.
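A minimal Python sketch of the term-by-term sum (illustrative; the derivative values are taken from the worked example below):

from math import factorial

def taylor_value(derivs_at_xi, h):
    """Sum f^(n)(xi) * h^n / n!; derivs_at_xi[n] is the nth derivative at xi."""
    total = 0.0
    for n, dn in enumerate(derivs_at_xi):
        total += dn * h**n / factorial(n)
    return total

# Derivatives of f(x) = x⁵ + 3x⁴ - x² + 8 evaluated at xi = 1
derivs = [11, 15, 54, 132, 192, 120]
print(taylor_value(derivs, h=2))   # 485.0, i.e. f(3)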

EXAMPLE:
Obtain the Taylor series value f(3) for f(x) = x⁵ + 3x⁴ - x² + 8 expanded around the point x = 1
SOLUTION:
xi = 1; xi+1 = 3
h = xi+1 - xi = 3 - 1 = 2

f(1) = (1)⁵ + 3(1)⁴ - (1)² + 8 = 11
f’(1) = 5(1)⁴ + 12(1)³ - 2(1) = 15
f”(1) = 20(1)³ + 36(1)² - 2 = 54
f”’(1) = 60(1)² + 72(1) = 132
fIV(1) = 120(1) + 72 = 192
fV(1) = 120

f(x) = f(xi) + f’(xi)(h) + f”(xi)(h²)/2! + f”’(xi)(h³)/3! + fIV(xi)(h⁴)/4! + fV(xi)(h⁵)/5!

f(3) = 11 + 15(2) + 54(4)/2 + 132(8)/6 + 192(16)/24 + 120(32)/120
f(3) = 485

n f^(n)(xi) f^(n)(xi)·h^n/n! SUM E


0 11 11 11
1 15 30 41 73.17073171
2 54 108 149 72.48322148
3 132 176 325 54.15384615
4 192 128 453 28.25607064
5 120 32 485 6.597938144

Based on the computed terms, the value of the given function is f(3) = 485

GRAPH:

(3, 485)

f(x) = x⁵ + 3x⁴ - x² + 8
JACOBI METHOD
DEFINITION / OBJECTIVE:
The first iterative technique is called the Jacobi method, after Carl Gustav Jacob Jacobi (1804-
1851). This method solves systems of equations iteratively, using the previous values of all the
unknowns in each equation.
ALGORITHM:
- This iteration first reformulates each equation of the given system to isolate one of the
unknowns, giving an update formula g(x) for each.

- Given an initial guess, xk, the iterative formula for a new estimate is: xk+1 = g(xk).

- The resulting values are denoted by xk+1; the process is then repeated, this time substituting
the new values into the right-hand sides, until convergence occurs or until the iteration is terminated.

- The error is then computed as [(new - old) / new] in terms of xk+1.

EXAMPLE:
Find the values of x, y & z for the following system of equations:
4x + y + 3z = 17 ; x + 5y + z = 14 ; 2x - y + 8z = 12

SOLUTION:
Rearranging the equations to get the values of x, y, & z:
x = (-y - 3z + 17) / 4 ; y = (-x - z + 14) / 5 ; z = (-2x + y + 12) / 8
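A minimal Python sketch of the Jacobi sweep behind the table below (illustrative only; the tolerance and iteration cap are assumptions, and the update functions are the rearranged formulas above):

def jacobi(updates, x0, tol=1e-9, max_iter=200):
    """Jacobi iteration: every component is updated from the previous iterate."""
    x = list(x0)
    for _ in range(max_iter):
        x_new = [u(x) for u in updates]
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

updates = [
    lambda v: (-v[1] - 3*v[2] + 17) / 4,   # x from the first equation
    lambda v: (-v[0] - v[2] + 14) / 5,     # y from the second equation
    lambda v: (-2*v[0] + v[1] + 12) / 8,   # z from the third equation
]
print(jacobi(updates, [0.0, 0.0, 0.0]))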

xk yk zk xk+1 yk+1 zk+1


0 0 0 4.25 2.8 1.5
4.25 2.8 1.5 2.425 1.65 1.31875
2.425 1.65 1.31875 2.8484375 2.05125 1.403125
2.8484375 2.05125 1.403125 2.68484375 1.9496875 1.400351563
2.68484375 1.9496875 1.400351563 2.712314453 1.982960938 1.408105469
2.712314453 1.982960938 1.408105469 2.698180664 1.975916016 1.408830811
2.698180664 1.975916016 1.408830811 2.699397888 1.978597705 1.409716919
2.699397888 1.978597705 1.409716919 2.698062885 1.978177039 1.409899977
2.698062885 1.978177039 1.409899977 2.698030758 1.978407428 1.410014269
2.698030758 1.978407428 1.410014269 2.697887441 1.978390995 1.410047084
2.697887441 1.978390995 1.410047084 2.697866939 1.978413095 1.410062944
2.697866939 1.978413095 1.410062944 2.697849518 1.978414023 1.41006827
2.697849518 1.978414023 1.41006827 2.697845292 1.978416442 1.410070563
2.697845292 1.978416442 1.410070563 2.697842967 1.978416829 1.410071394
2.697842967 1.978416829 1.410071394 2.697842247 1.978417128 1.410071733
2.697842247 1.978417128 1.410071733 2.697841918 1.978417204 1.41007186
2.697841918 1.978417204 1.41007186 2.697841804 1.978417244 1.410071911
2.697841804 1.978417244 1.410071911 2.697841756 1.978417257 1.41007193
2.697841756 1.978417257 1.41007193 2.697841738 1.978417263 1.410071938
2.697841738 1.978417263 1.410071938 2.697841731 1.978417265 1.410071941
2.697841731 1.978417265 1.410071941 2.697841728 1.978417266 1.410071942
2.697841728 1.978417266 1.410071942 2.697841727 1.978417266 1.410071942
2.697841727 1.978417266 1.410071942 2.697841727 1.978417266 1.410071942
2.697841727 1.978417266 1.410071942 2.697841727 1.978417266 1.410071942

GAUSS-SEIDEL METHOD
DEFINITION / OBJECTIVE:
Gauss-Seidel is the most commonly used iterative method for solving systems of equations. It is
closely related to the Jacobi method except that it uses the solved value of each earlier unknown as it
iterates for the succeeding unknowns.
ALGORITHM:
- This iteration first reformulates each equation of the given system to isolate one of the
unknowns, giving an update formula g(x) for each.

- Given an initial guess, xk, the iterative formula for a new estimate is: xk+1 = g(xk). Then, in
solving for the next unknown yk+1, use the solved value xk+1 instead of the previous value, and do
the same for each succeeding unknown.

- The resulting values are denoted by xk+1; the process is then repeated, substituting the new
values into the right-hand sides, until convergence occurs or until the iteration is terminated.

- The error is then computed as [(new - old) / new] in terms of xk+1.

EXAMPLE:
Find the values of x, y & z for the following system of equations:
4x + y + 3z = 17 ; x + 5y + z = 14 ; 2x - y + 8z = 12

SOLUTION:
Rearranging the equations to get the values of x, y, & z:
x = (-y - 3z + 17) / 4 ; y = (-x - z + 14) / 5 ; z = (-2x + y + 12) / 8
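A minimal Python sketch of the Gauss-Seidel sweep (illustrative; same assumptions as the Jacobi sketch, but each freshly computed value is reused at once within the sweep):

def gauss_seidel(updates, x0, tol=1e-9, max_iter=200):
    """Gauss-Seidel iteration: new values are used immediately within a sweep."""
    x = list(x0)
    for _ in range(max_iter):
        x_old = list(x)
        for i, u in enumerate(updates):
            x[i] = u(x)                     # x already holds the fresh values
        if max(abs(a - b) for a, b in zip(x, x_old)) < tol:
            break
    return x

updates = [
    lambda v: (-v[1] - 3*v[2] + 17) / 4,   # x
    lambda v: (-v[0] - v[2] + 14) / 5,     # y (uses the just-updated x)
    lambda v: (-2*v[0] + v[1] + 12) / 8,   # z (uses the just-updated x and y)
]
print(gauss_seidel(updates, [0.0, 0.0, 0.0]))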

xk yk zk xk+1 yk+1 zk+1


0 0 0 4.25 1.95 1.2125
4.25 1.95 1.2125 2.853125 1.986875 1.39171875
2.853125 1.986875 1.39171875 2.709492188 1.979757813 1.408783203
2.709492188 1.979757813 1.408783203 2.698473145 1.97854873 1.410009448
2.698473145 1.97854873 1.410009448 2.697855731 1.978426964 1.410071404
2.697855731 1.978426964 1.410071404 2.697839706 1.978417778 1.410072259
2.697839706 1.978417778 1.410072259 2.697841361 1.978417276 1.410071989
2.697841361 1.978417276 1.410071989 2.697841689 1.978417264 1.410071947
2.697841689 1.978417264 1.410071947 2.697841724 1.978417266 1.410071943
2.697841724 1.978417266 1.410071943 2.697841726 1.978417266 1.410071942
2.697841726 1.978417266 1.410071942 2.697841727 1.978417266 1.410071942
2.697841727 1.978417266 1.410071942 2.697841727 1.978417266 1.410071942
2.697841727 1.978417266 1.410071942 2.697841727 1.978417266 1.410071942
2.697841727 1.978417266 1.410071942 2.697841727 1.978417266 1.410071942
2.697841727 1.978417266 1.410071942 2.697841727 1.978417266 1.410071942
2.697841727 1.978417266 1.410071942 2.697841727 1.978417266 1.410071942
2.697841727 1.978417266 1.410071942 2.697841727 1.978417266 1.410071942

NEWTON-RAPHSON METHOD (SYSTEMS OF EQUATIONS)
DEFINITION / OBJECTIVE:
The Newton-Raphson method uses matrix determinants in determining the solution of a given
system of equations.
ALGORITHM:
- Determine the partial derivative of each given function with respect to each unknown. Form
these into a square matrix and solve for its determinant.

- Then form another matrix in which the column of the unknown to be solved is replaced by the
negative value of the function, -f(x); after which solve for its determinant.

- The solution is obtained as the quotient of the determinant of the replaced matrix and the
determinant of the first matrix (the matrix of partial derivatives with respect to each unknown).
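A minimal Python sketch of the determinant-ratio solve described above (illustrative; written for the 3×3 case of the worked example below):

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def solve_by_determinants(J, rhs):
    """Replace each column of J by the right-hand side (-f) and divide determinants."""
    D = det3(J)
    sol = []
    for col in range(3):
        M = [row[:] for row in J]
        for r in range(3):
            M[r][col] = rhs[r]
        sol.append(det3(M) / D)
    return sol

J   = [[2, -3, -2], [3, -4, 3], [4, -5, 4]]   # partial derivatives (coefficients)
rhs = [10, 8, 10]                             # -f1, -f2, -f3
print(solve_by_determinants(J, rhs))          # [1.0, -2.0, -1.0]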

EXAMPLE:
Solve the following system of equations:
f1 = 2x - 3y - 2z - 10 ; f2 = 3x - 4y + 3z - 8 ; f3 = 4x - 5y + 4z - 10

SOLUTION:
partial derivatives of the given functions with respect to each variable (in matrix form):

2 -3 -2

3 -4 3 = -4 (determinant)

4 -5 4

from the given equations: -f1 = 10; -f2 = 8; -f3 = 10


solve for x:
10  -3  -2
 8  -4   3   = -4 (determinant)
10  -5   4
x = -4 / -4 = 1

solve for y:
2  10  -2
3   8   3   = 8 (determinant)
4  10   4
y = 8 / -4 = -2

solve for z:
2  -3  10
3  -4   8   = 4 (determinant)
4  -5  10
z = 4 / -4 = -1

GAUSS ELIMINATION
DEFINITION / OBJECTIVE:
Gauss elimination, or what is called Gaussian elimination, is a method used to solve systems of
equations.
ALGORITHM:
- Form an Augmented matrix with the coefficients and constant of the given equations.

- Forward elimination: by row operations, make the entries below the main diagonal of the matrix
zero. After which, back substitution: substitute the solved values back into the reduced equations,
starting from the last row.
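A minimal Python sketch of forward elimination followed by back substitution (illustrative; no pivoting, so it assumes the pivots stay nonzero, as in the worked example below):

def gauss_eliminate(A, b):
    """Forward elimination to upper-triangular form, then back substitution."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]    # augmented matrix
    for k in range(n - 1):                              # zero out below pivot k
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                      # back substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

A = [[1, 2, 4], [3, 2, -1], [5, -6, 4]]
b = [-3, 4, 1]
print(gauss_eliminate(A, b))   # [1.0, 0.0, -1.0]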
EXAMPLE:
Solve the following system of equations:
x + 2y + 4z = -3 ; 3x + 2y - z = 4 ; 5x - 6y + 4z = 1

SOLUTION:
Augmented matrix:
1   2    4     -3
3   2   -1      4      R2 - 3R1
5  -6    4      1      R3 - 5R1

1   2    4     -3
0  -4  -13     13      R2 / -4
0 -16  -16     16

1   2    4     -3
0   1    3.25  -3.25
0 -16  -16     16      R3 + 16R2

1   2    4     -3
0   1    3.25  -3.25
0   0   36    -36

Back substitution:
36z = -36
z = -1

y + 3.25z = -3.25
y = -3.25 - (3.25)(-1)
y = 0

x + 2y + 4z = -3
x = -3 - (2)(0) - (4)(-1)
x = 1

GAUSS-JORDAN
DEFINITION / OBJECTIVE:
The Gauss-Jordan method is a modified form of Gaussian elimination that is likewise used to
determine the solution of any given system of equations.
ALGORITHM:
- Form an Augmented matrix with the coefficients and constant of the given equations.

- Elimination: by row operations, make the main diagonal of the matrix equal to one and the
entries below and above it zeros. After which, the remaining values in the last column are the
corresponding values of the unknowns.
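A minimal Python sketch of the Gauss-Jordan reduction (illustrative; no pivoting, assumes nonzero pivots):

def gauss_jordan(A, b):
    """Reduce the augmented matrix to the identity; the last column is the solution."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        pivot = M[k][k]
        M[k] = [v / pivot for v in M[k]]                # scale the pivot row to 1
        for i in range(n):
            if i != k:
                factor = M[i][k]
                M[i] = [M[i][j] - factor * M[k][j] for j in range(n + 1)]
    return [M[i][n] for i in range(n)]

A = [[1, 1, 1], [2, 5, 3], [3, -1, -2]]
b = [2, 1, -1]
print(gauss_jordan(A, b))   # [1.0, -2.0, 3.0]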
EXAMPLE:
Solve the following system of equations:
x + y + z = 2 ; 2x + 5y + 3z = 1 ; 3x - y - 2z = -1

SOLUTION:
Augmented matrix:
1   1   1     2
2   5   3     1      R2 - 2R1
3  -1  -2    -1      R3 - 3R1

1   1   1     2
0   3   1    -3      R2 / 3
0  -4  -5    -7

1   1    1     2     R1 - R2
0   1   1/3   -1
0  -4   -5    -7     R3 + 4R2

1   0   2/3     3
0   1   1/3    -1
0   0  -11/3  -11    -3/11 R3

1   0   2/3    3     R1 - 2/3 R3
0   1   1/3   -1     R2 - 1/3 R3
0   0   1      3

1   0   0    1   = x
0   1   0   -2   = y
0   0   1    3   = z

Substitution:
(1)(x) + (0)(y) + (0)(z) = 1
x = 1

(0)(x) + (1)(y) + (0)(z) = -2
y = -2

(0)(x) + (0)(y) + (1)(z) = 3
z = 3

LU DECOMPOSITION
DEFINITION / OBJECTIVE:
Suppose we have the system of equations AX = B. The motivation for an LU decomposition is
based on the observation that systems of equations involving triangular coefficient matrices are
easier to deal with. Indeed, the whole point of Gaussian Elimination is to replace the coefficient
matrix with one that is triangular. The LU decomposition is another approach designed to exploit
triangular systems.
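A minimal Python sketch of a Doolittle-style LU factorization and the two triangular solves (illustrative only, not part of the original text; no pivoting, which is adequate for the worked example below):

def lu_decompose(A):
    """Factor A = L U with unit diagonal in L."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):      # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):  # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, c):
    """Forward solve L z = c, then back solve U x = z."""
    n = len(c)
    z = [0.0] * n
    for i in range(n):
        z[i] = c[i] - sum(L[i][k] * z[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[1, 1, 1], [4, 3, -1], [3, 5, 3]]
L, U = lu_decompose(A)
print(lu_solve(L, U, [1, 6, 4]))   # [1.0, 0.5, -0.5]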

EXAMPLE:
Solve the following system of equations:
x + y + z = 1 ; 4x + 3y - z = 6 ; 3x + 5y + 3z = 4

SOLUTION:
     1  1  1         x          1
A =  4  3 -1     X = y      C = 6     such that AX = C
     3  5  3         z          4

by elimination, R2 - 4R1 & R3 - 3R1, then R3 - (-2)R2:

1  1  1        1  1   1
0 -1 -5        0 -1  -5
0  2  0        0  0 -10

     1  0  0         1  1   1         a
L =  4  1  0     U = 0 -1  -5     Z = b
     3 -2  1         0  0 -10         c

such that LZ = C:

1  0  0     a     1
4  1  0     b  =  6
3 -2  1     c     4

by substitution, we have a = 1 , 4a + b = 6 , 3a - 2b + c = 4 :
a = 1
b = 2
c = 5

solve UX = Z:

1  1   1     x     1
0 -1  -5     y  =  2
0  0 -10     z     5

by substitution, we get x + y + z = 1 , -y - 5z = 2 , -10z = 5 :
x = 1
y = 0.5
z = -0.5

LAGRANGE APPROXIMATION
DEFINITION / OBJECTIVE:
In numerical analysis, Lagrange polynomials are used for polynomial interpolation. For a given set
of distinct points xj and numbers yj, the Lagrange polynomial is the polynomial of lowest degree
that assumes at each point xj the corresponding value yj (i.e. the functions coincide at each point).
The interpolating polynomial of the least degree is unique, however, and since it can be arrived
at through multiple methods, referring to “the Lagrange polynomial” is perhaps not as correct as
referring to “the Lagrange form” of that unique polynomial.

ALGORITHM:
For n+1 data points (x0, y0), ..., (xn, yn), the Lagrange form of the interpolating polynomial is
y(x) = Σ (i = 0 to n) Li(x)·y(xi), where Li(x) = Π (j ≠ i) (x - xj) / (xi - xj).
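A minimal Python sketch of the Lagrange form (illustrative; the three points used here are the ones chosen in the solution below):

def lagrange_interpolate(xs, ys, x):
    """Evaluate y(x) = sum over i of L_i(x) * y_i."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)
        total += Li * yi
    return total

xs = [2.00, 4.25, 5.25]          # second-order (quadratic) interpolation
ys = [7.2, 7.1, 6.0]
print(lagrange_interpolate(xs, ys, 4.0))   # about 7.27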
EXAMPLE:
A robot arm with a rapid laser scanner is doing a quick quality check on holes drilled in a
rectangular plate. The centers of the holes in the plate describe the path the arm needs to
take, and the hole centers are located on a Cartesian coordinate system (with the origin at the
bottom left corner of the plate) given by the specifications in Table 1.

Table 1 The coordinates of the holes on the plate.


x (in.)    y (in.)
2.00 7.2
4.25 7.1
5.25 6.0
7.81 5.0
9.20 3.5
10.60 5.0

If the laser is traversing from x = 2.00 to x = 4.25 to x = 5.25 in a quadratic path, what is the value
of y at x = 4.0 using a second order Lagrange polynomial? Find the absolute relative approximate
error for the second order polynomial approximation.

SOLUTION:
For second order Lagrange polynomial interpolation (also called quadratic interpolation), we
choose the value of y given by

y(x) = Σ (i = 0 to 2) Li(x)·y(xi) = L0(x)·y(x0) + L1(x)·y(x1) + L2(x)·y(x2)

Since we want to find the value of y at x = 4.0, using the three points as x0 = 2.00; x1 = 4.25,
and x2 = 5.25 , then

x0 = 2.00 y0 = 7.2
x1 = 4.25 y1 = 7.1
x2 = 5.25 y2 = 6.0

LEAST SQUARE - LINEAR
DEFINITION / OBJECTIVE:
The method of least squares gives a way to find the best estimate, assuming that the errors (i.e.
the differences from the true value) are random and unbiased. It calculates the line of best fit by
minimising the sum of the squares of the vertical distances of the points to the line.

ALGORITHM:
In getting the equation that best fits a given data set of points, consider the equation of a
linear function y = Ax + B.

Then obtain the values of A & B from the normal equations:
A·Σx² + B·Σx = Σxy
A·Σx + B·n = Σy
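A minimal Python sketch that assembles and solves these two normal equations (illustrative; the closed-form expressions below are equivalent to eliminating B):

def linear_least_squares(points):
    """Fit y = A*x + B from A*Sxx + B*Sx = Sxy and A*Sx + B*n = Sy."""
    n = len(points)
    sx  = sum(x for x, _ in points)
    sy  = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    A = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    B = (sy - A * sx) / n
    return A, B

pts = [(-2, -4.2), (-1, -2.85), (0, 0.1), (1, 2.1), (2, 4), (3, 5.9)]
print(linear_least_squares(pts))   # about (2.087, -0.202)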

EXAMPLE:
Find the linear equation that fits the given points
(-2, -4.2), (-1, -2.85), (0, 0.1), (1, 2.1), (2, 4), (3, 5.9)

SOLUTION:

19A + 3B = 39.05
3A + 6B = 5.05

by elimination or Cramer’s rule:


A = 2.087; B = - 0.202

so the equation of the line that best fits is y = 2.087x - 0.202

GRAPH:

LEAST SQUARE - POLYNOMIAL
DEFINITION / OBJECTIVE:
Like the least squares method that estimates the best-fit line for a given data set, this time the goal
is to obtain a polynomial equation (e.g. quadratic) that fits the given data.
ALGORITHM:
In getting the equation that best fits a given data set of points, consider the equation of a
quadratic function y = Ax² + Bx + C.

Then obtain the values of A, B & C from the normal equations:
A·Σx⁴ + B·Σx³ + C·Σx² = Σx²y
A·Σx³ + B·Σx² + C·Σx = Σxy
A·Σx² + B·Σx + C·n = Σy
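A minimal Python sketch that builds these three normal equations and solves them by elimination (illustrative; the data points are those of the example below):

def quadratic_least_squares(points):
    """Fit y = A*x^2 + B*x + C from the 3x3 normal equations above."""
    n = len(points)
    s = lambda p: sum(x**p for x, _ in points)                 # power sums of x
    sy   = sum(y for _, y in points)
    sxy  = sum(x * y for x, y in points)
    sxxy = sum(x * x * y for x, y in points)
    M = [[s(4), s(3), s(2), sxxy],                             # augmented rows
         [s(3), s(2), s(1), sxy],
         [s(2), s(1), n,    sy]]
    for k in range(2):                                         # forward elimination
        for i in range(k + 1, 3):
            f = M[i][k] / M[k][k]
            M[i] = [M[i][j] - f * M[k][j] for j in range(4)]
    C = M[2][3] / M[2][2]
    B = (M[1][3] - M[1][2] * C) / M[1][1]
    A = (M[0][3] - M[0][1] * B - M[0][2] * C) / M[0][0]
    return A, B, C

pts = [(-2, 4.2), (-1, 0.85), (0, 0.1), (1, 1), (2, 4), (3, 8.5)]
print(quadratic_least_squares(pts))   # about (0.969, -0.059, 0.07)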

EXAMPLE:
Find the polynomial equation that fits the given points
(-2, 4.2), (-1, .85), (0, 0.1), (1, 1), (2, 4), (3, 8.5)

SOLUTION:

115A + 27B + 19C = 111.15


27A + 19B + 3C = 25.25
19A + 3B + 6C = 18.65

by elimination or Cramer’s rule:


A = 0.969; B = - 0.059; C = 0.07

so the equation of the parabola (quadratic) that best fits is


y = 0.969x2 - 0.059x + 0.07
GRAPH:

NON-LINEAR CURVE FITTING
DEFINITION / OBJECTIVE:
Suppose that we wish to fit a function y = f(x) to data for which a linear function is clearly not
appropriate. We generally know this because we see a definite non-linear pattern in the scatterplot
(or in a residual plot) or because the science behind the relationship tells us that a non-linear
relationship might be more appropriate.

Of course we cannot simply search for an arbitrary function f. We could fit the data exactly with a
polynomial of sufficiently high degree, but such a polynomial is unlikely to be a useful model. Therefore,
to fit a non-linear function to data, we generally constrain the function we are looking for to be in
some small class of functions. Usually this class is defined by a small number of parameters. This
is what we did in the linear case - we limit ourselves to a two-parameter (A, B) family of functions.

ALGORITHM:
In getting an exponential equation (or plotting an approximate curve) that best fits the given data,
we consider the following linearization:

y = Ce^(Ax)   so that   ln(y) = Ax + ln(C)

X = x; Y = ln(y); B = ln(C)
Y = AX + B
C = e^B
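A minimal Python sketch of the linearized fit (illustrative; it reuses the linear least-squares formulas on the transformed data):

from math import log, exp

def exponential_fit(points):
    """Fit y = C*e^(A*x) via Y = ln(y) = A*x + B with B = ln(C)."""
    n = len(points)
    X = [x for x, _ in points]
    Y = [log(y) for _, y in points]                  # transform the data
    sx, sy = sum(X), sum(Y)
    sxx = sum(x * x for x in X)
    sxy = sum(x * y for x, y in zip(X, Y))
    A = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    B = (sy - A * sx) / n
    return A, exp(B)                                 # C = e^B

pts = [(0.5, 900), (0.9, 1000), (1.0, 1400), (1.2, 1880),
       (1.8, 2800), (2.1, 3700), (3.0, 4800)]
print(exponential_fit(pts))   # about (0.726, 665)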

EXAMPLE:
Find the exponential equation that fits the given points
(0.5, 900), (0.9, 1000), (1.0, 1400), (1.2, 1880),
(1.8, 2800), (2.1, 3700) , (3.0, 4800)

SOLUTION:

20.15A + 10.5B = 82.879
10.5A + 7B = 53.123

by elimination or Cramer’s rule:


A = 0.726; B = 6.5; C = 665.142

so the exponential equation that best fits is

y = 665.142e^(0.726x)

GRAPH:

COMPOSITE TRAPEZOIDAL RULE
DEFINITION / OBJECTIVE:
The composite trapezoidal rule is a method for approximating a definite integral by evaluating the
integrand at n points. Let [a,b] be the interval of integration with a partition a=x0<x1<…<xn=b.

ALGORITHM:
Using the following formula:

J = (h/2)[f(a) + f(b)] + h·Σ f(xk),   k = 1, ..., n-1

where h = (b - a)/n
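A minimal Python sketch of the composite rule (illustrative; it simply evaluates the formula above for a supplied f, a, b, and n):

def composite_trapezoid(f, a, b, n):
    """J = (h/2)[f(a) + f(b)] + h * sum of f at the n-1 interior points."""
    h = (b - a) / n
    interior = sum(f(a + k * h) for k in range(1, n))
    return (h / 2) * (f(a) + f(b)) + h * interior

f = lambda x: x ** (-1.0 / 3.0)
print(composite_trapezoid(f, 0.25, 4.0, 20))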

EXAMPLE:
Find the trapezoidal approximation of the given
definite integral for n = 20: f(x) = x^(-1/3) dx from
0.25 to 4, where the step size is h = 0.1875

SOLUTION:

SUMMATION of f(xn) = 18.12403608

J = 3.606

COMPOSITE SIMPSON’S RULE
DEFINITION / OBJECTIVE:
Simpson’s rule is also known as parabolic rule.
A basic approximation formula for definite integrals which states that the integral of a real-valued
function ƒ on an interval [a,b] is approximated by h[ƒ(a) + 4ƒ(a + h) + ƒ(b)]/3, where h = (b-a)/2; this
is the area under a parabola which coincides with the graph of ƒ at the abscissas a, a + h, and b.

A method of approximating a definite integral over an interval which is equivalent to dividing the
interval into equal subintervals and applying the formula in the first definition to each subinterval.

ALGORITHM:
Using the following formula:

S(f,h) = (h/3)[f(a) + f(b)] + (2h/3)·Σ f(x2k) + (4h/3)·Σ f(x2k-1)

where h = (b - a)/n, n is even, the first sum runs over the even-indexed interior points and the
second over the odd-indexed points
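A minimal Python sketch of the composite rule (illustrative; n must be even):

def composite_simpson(f, a, b, n):
    """(h/3)[f(a) + f(b)] + (4h/3)*(odd-indexed points) + (2h/3)*(even-indexed points)."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    odd  = sum(f(a + k * h) for k in range(1, n, 2))   # x1, x3, ...
    even = sum(f(a + k * h) for k in range(2, n, 2))   # x2, x4, ...
    return (h / 3) * (f(a) + f(b)) + (4 * h / 3) * odd + (2 * h / 3) * even

f = lambda x: x ** (-1.0 / 3.0)
print(composite_simpson(f, 0.25, 4.0, 20))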
EXAMPLE:
Find the approximation of the given definite integral using Simpson’s rule:
for n = 20: f(x) = x^(-1/3) dx from 0.25 to 4

SOLUTION:
a = .25 f(a) = 1.587
b = 4 f(b) = 0.630
h = 0.1875

S = 4.092

EULER’S METHOD
DEFINITION / OBJECTIVE:
The Euler method (also called the forward Euler method) is a first-order numerical procedure for solving
ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit method
for numerical integration of ordinary differential equations and is the simplest Runge–Kutta method.
The Euler method is named after Leonhard Euler, who treated it in his book Institutionum calculi
integralis (published 1768–70).

The Euler method is a first-order method, which means that the local error (error per step) is
proportional to the square of the step size, and the global error (error at a given time) is proportional
to the step size. The Euler method often serves as the basis to construct more complex methods,
e.g., predictor–corrector method.

ALGORITHM:
Euler’s method is defined by the formula:
yk+1 = yk + h·f(xk, yk)
xk+1 = xk + h
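A minimal Python sketch of the forward Euler step applied to the example below (illustrative; the step size and number of steps are assumptions):

def euler(f, x0, y0, h, steps):
    """Advance y_{k+1} = y_k + h*f(x_k, y_k) and x_{k+1} = x_k + h."""
    x, y = x0, y0
    history = [(x, y)]
    for _ in range(steps):
        y = y + h * f(x, y)
        x = x + h
        history.append((x, y))
    return history

f = lambda x, y: y + x**2            # y' = y + x^2 from the example
for x, y in euler(f, 0.0, 1.0, 0.2, 5):
    print(round(x, 1), round(y, 6))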

EXAMPLE:
Solve y’ = y + x² where y’ = f(x,y); subject to y(0) = 1.
The actual solution is y = ce^x - x² - 2x - 2, where c is a constant determined by the initial
condition. Since we have chosen y(0) = 1, c = 3.

SOLUTION:
y’=f(x,y) h = 0.2
by defining a small Δx, called h. Then
yn+1 = yn + h f(xn, yn)
with initial condition y0=y(0)

GRAPH:

HEUN’S METHOD
DEFINITION / OBJECTIVE:
In mathematics and computational science, Heun’s method may refer to the improved or modified
Euler’s method (that is, the explicit trapezoidal rule), or a similar two-stage Runge–Kutta method.
It is named after Karl Heun and is a numerical procedure for solving ordinary differential equations
(ODEs) with a given initial value.

ALGORITHM:
Heun’s method is defined by the formula:

yk+1 = yk + h/2 [f(tk,yk) + f(tk+1, pk+1)]


tk+1 = tk + h
pk+1 = yk + hf(tk,yk)

where h is the step size from k to k+1
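A minimal Python sketch of the predictor-corrector step above (illustrative; the step size and number of steps are assumptions):

def heun(f, t0, y0, h, steps):
    """Predict p = y + h*f(t, y), then correct with the averaged slope."""
    t, y = t0, y0
    history = [(t, y)]
    for _ in range(steps):
        p = y + h * f(t, y)                         # Euler predictor
        y = y + (h / 2) * (f(t, y) + f(t + h, p))   # trapezoidal corrector
        t = t + h
        history.append((t, y))
    return history

f = lambda t, y: y + t**2            # y' = y + x^2 from the example
for t, y in heun(f, 0.0, 1.0, 0.2, 5):
    print(round(t, 1), round(y, 6))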

EXAMPLE:
Solve y’ = y + x² where y’ = f(x,y); subject to y(0) = 1.
The actual solution is y = ce^x - x² - 2x - 2, where c is a constant determined by the initial
condition. Since we have chosen y(0) = 1, c = 3.

SOLUTION:
y’ = f(x,y), h = 0.2
by defining a small Δx, called h, then
pn+1 = yn + h·f(xn, yn)
yn+1 = yn + (h/2)[f(xn, yn) + f(xn+1, pn+1)]
with initial condition y0 = y(0)

GRAPH:

