
NATIONAL INSTITUTE OF TECHNOLOGY, AGARTALA

SUBJECT:- NUMERICAL METHODS ASSIGNMENT


SUBMITTED BY:- RATNADEEP ROY
BRANCH:- CHEMICAL ENGINEERING
ENROLL NO:- 17UCH007
REG NO:- 178172
DATE OF SUBMISSION:- 28.03.2019
Bisection method:-
Let f(x) = 0 be the given equation. Let x0 and x1 be two real values of x, at P and Q respectively, such that f(x1) is positive and f(x0) is negative, or vice versa. Then there is a root of the equation f(x) = 0 between x0 and x1. Now, this interval [x0, x1] is divided into two sub-intervals [x0, x2] and [x2, x1], where x2 = (x0 + x1)/2.
If f(x0) and f(x2) are of opposite signs, then the interval [x0, x2] is divided into [x0, x3] and [x3, x2], where x3 = (x0 + x2)/2. However, if f(x0) and f(x2) are of the same sign, then f(x1) and f(x2) will be of opposite signs and the interval [x2, x1] is divided into [x2, x3] and [x3, x1], where x3 = (x1 + x2)/2.
This process is continued until the desired accuracy is obtained.
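A minimal Python sketch of this bisection procedure (the test function, bracket, and tolerance below are illustrative assumptions, not part of the original assignment):

def bisection(f, a, b, tol=1e-6, max_iter=100):
    """Halve the bracketing interval [a, b] until it is shorter than tol."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2.0          # midpoint of the current bracket
        if f(a) * f(c) < 0:        # the root lies in [a, c]
            b = c
        else:                      # the root lies in [c, b]
            a = c
        if abs(b - a) < tol:
            break
    return (a + b) / 2.0

# Example: root of x^3 - x - 2 = 0 between 1 and 2 (approximately 1.5214)
print(bisection(lambda x: x**3 - x - 2, 1.0, 2.0))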
Regula Falsi Method:-

The convergence of the bisection method is very slow. It depends only on the choice of the end points of the interval [a, b]. The function f(x) does not have any role in finding the point c (which is just the mid-point of a and b); it is used only to decide the next smaller interval, [a, c] or [c, b]. A better approximation to c can be obtained by taking the straight line L joining the points (a, f(a)) and (b, f(b)) and letting it intersect the x-axis. To obtain the value of c, we can equate the two expressions for the slope m of the line L.

m = (f(b) − f(a)) / (b − a) = (0 − f(b)) / (c − b)
=> (c − b)(f(b) − f(a)) = −(b − a) f(b)
=> c = b − f(b)(b − a) / (f(b) − f(a))
Now the next smaller interval which brackets the root can be obtained by checking the sign of f(a) * f(c):
if f(a) * f(c) < 0 then b = c,
if f(a) * f(c) > 0 then a = c,
if f(a) * f(c) = 0 then c is the root.
Selecting c by the above expression is called the Regula-Falsi method or false position method.
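A short Python sketch of this false-position update (the sample function and bracket are assumptions for illustration):

def regula_falsi(f, a, b, tol=1e-6, max_iter=100):
    """Find a root of f in [a, b] by repeatedly taking the chord's x-intercept as c."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = b - f(b) * (b - a) / (f(b) - f(a))   # x-intercept of the line through (a, f(a)) and (b, f(b))
        fc = f(c)
        if abs(fc) < tol:
            break
        if f(a) * fc < 0:   # root lies in [a, c]
            b = c
        else:               # root lies in [c, b]
            a = c
    return c

# Example: same cubic as in the bisection sketch
print(regula_falsi(lambda x: x**3 - x - 2, 1.0, 2.0))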
Newton Raphson Method:-
Let f(x) = 0 be the given equation and x0 be an approximate root of the equation f(x) = 0. If x1 = x0 + h is the exact root, then f(x1) = 0, that is, f(x0 + h) = 0.
Expanding f(x0 + h) by Taylor series,
f(x1) = f(x0 + h) = f(x0) + h f'(x0) + (h^2/2!) f''(x0) + ... = 0
Since h is small, neglecting h^2 and higher powers of h,
f(x0) + h f'(x0) = 0
h = − f(x0) / f'(x0)
x1 = x0 + h = x0 − f(x0) / f'(x0)
Similarly, starting with x1, a still better approximation x2 is obtained:
x2 = x1 − f(x1) / f'(x1)
In general,
xn+1 = xn − f(xn) / f'(xn)
This equation is known as the Newton-Raphson formula.
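A minimal Python sketch of the Newton-Raphson iteration (the example function, its derivative, and the starting guess are assumptions for illustration):

def newton_raphson(f, df, x0, tol=1e-8, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until the correction h is smaller than tol."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) vanished; pick a different starting point")
        h = -f(x) / dfx        # Newton correction
        x = x + h
        if abs(h) < tol:
            break
    return x

# Example: root of x^3 - x - 2 = 0 starting from x0 = 1.5
print(newton_raphson(lambda x: x**3 - x - 2, lambda x: 3 * x**2 - 1, 1.5))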
Secant method:-
Although the Newton-Raphson method is very powerful for solving non-linear equations, evaluating the derivative of the function is the major difficulty of this method. To overcome this deficiency, the secant method starts the iteration with two starting points and approximates the derivative of the function by the slope of the line passing through these points. The secant method is shown in Fig. 1. As illustrated in Fig. 1, the new guess for the root of the function f(x) can be found as follows:
xn+1 = xn − f(xn)(xn − xn−1) / (f(xn) − f(xn−1))
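A brief Python sketch of this secant iteration (the two starting points and the test function are illustrative assumptions):

def secant(f, x0, x1, tol=1e-8, max_iter=50):
    """Replace f'(x) in Newton's formula with the slope through the last two iterates."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            raise ZeroDivisionError("the secant line is flat; iteration cannot proceed")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # x-intercept of the secant line
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol:
            break
    return x1

# Example: root of x^3 - x - 2 = 0 from the starting points 1 and 2
print(secant(lambda x: x**3 - x - 2, 1.0, 2.0))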
Newton Raphson Method for Multivariable:-

The above method can be generalized to the multi-variate case to solve n simultaneous algebraic equations F(x) = 0.

The Newton-Raphson formula for the multi-variable problem is
x(k+1) = x(k) − [J(x(k))]^(-1) F(x(k)),
where J(x) is the Jacobian matrix of partial derivatives of F evaluated at x(k).
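A compact Python/NumPy sketch of this multivariable iteration for an illustrative 2x2 system (the system, Jacobian, and starting guess are assumptions for demonstration):

import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 by repeatedly solving J(x) dx = -F(x) and updating x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))   # Newton step from the linearized system
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example system: x^2 + y^2 = 4 and x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4, v[0] * v[1] - 1])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
print(newton_system(F, J, [2.0, 0.5]))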
Gauss Jacobi Method:-
In numerical linear algebra, the Jacobi method is an iterative algorithm for determining the solutions of a diagonally
dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in.
The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation
method of matrix diagonalization. The method is named after Carl Gustav Jacob Jacobi.

a11 x + a12 y + a13 z = b1
a21 x + a22 y + a23 z = b2
a31 x + a32 y + a33 z = b3

where a11, a22, a33 are large compared with the other coefficients in the corresponding rows and satisfy the condition of convergence (diagonal dominance):
|a11| > |a12| + |a13|, |a22| > |a21| + |a23|, |a33| > |a31| + |a32|

Rewriting the equations for x, y, and z respectively,
x = (1/a11)(b1 − a12 y − a13 z)
y = (1/a22)(b2 − a21 x − a23 z)
z = (1/a33)(b3 − a31 x − a32 y)
Starting from an initial guess, each Jacobi iteration substitutes the values of x, y, and z from the previous iteration into the right-hand sides to obtain the new values, and the sweeps are repeated until the values converge.
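A small Python sketch of the Jacobi iteration for a diagonally dominant 3x3 system (the coefficients below are illustrative assumptions):

import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=200):
    """Jacobi iteration: every new component uses only values from the previous sweep."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, float)
    for _ in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]      # sum of the off-diagonal terms
            x_new[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: a diagonally dominant system
A = [[10, 2, 1], [1, 12, 2], [2, 1, 15]]
b = [13, 15, 18]
print(jacobi(A, b))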
Gauss Seidel Method:-
The method is applicable to systems of equations in which the leading diagonal elements of the coefficient matrix are dominant in their respective rows. Consider the system of equations:
a11 x + a12 y + a13 z = b1
a21 x + a22 y + a23 z = b2
a31 x + a32 y + a33 z = b3
where a11, a22, a33 are large compared with the other coefficients in the corresponding rows and satisfy the same condition of convergence (diagonal dominance) as in the Jacobi method.
Rewriting the equations for x, y, and z respectively,
x = (1/a11)(b1 − a12 y − a13 z)
y = (1/a22)(b2 − a21 x − a23 z)
z = (1/a33)(b3 − a31 x − a32 y)
Unlike the Jacobi method, each newly computed value of x, y, or z is used immediately in the remaining equations of the same sweep, which usually speeds up convergence.
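A matching Python sketch of the Gauss-Seidel sweep on the same illustrative system; the only change from the Jacobi sketch is that updated components are reused immediately:

import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=200):
    """Gauss-Seidel iteration: each component update reuses values computed in this sweep."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]      # uses already-updated entries of x
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

A = [[10, 2, 1], [1, 12, 2], [2, 1, 15]]
b = [13, 15, 18]
print(gauss_seidel(A, b))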
LU Decomposition:-
Newton’s Forward Interpolation:-
Interpolation is the technique of estimating the value of a function for any intermediate value of the
independent variable, while the process of computing the value of the function outside the given range is
called extrapolation.
Forward Differences: The differences y1 − y0, y2 − y1, y3 − y2, ..., yn − yn−1, when denoted by Δy0, Δy1, Δy2, ..., Δyn−1 respectively, are called the first forward differences. Thus the first forward differences are:
Δy0 = y1 − y0, Δy1 = y2 − y1, ..., Δyn−1 = yn − yn−1
For equally spaced points x0, x0 + h, x0 + 2h, ..., Newton's forward interpolation formula is
y(x) = y0 + p Δy0 + (p(p − 1)/2!) Δ²y0 + (p(p − 1)(p − 2)/3!) Δ³y0 + ..., where p = (x − x0)/h.
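A small Python sketch of Newton's forward interpolation built from a forward-difference table (the sample data points are illustrative assumptions):

from math import factorial

def newton_forward(xs, ys, x):
    """Interpolate at x from a forward-difference table on equally spaced xs."""
    n = len(ys)
    h = xs[1] - xs[0]
    # Build the difference table; table[k][0] is the k-th forward difference at y0.
    table = [list(ys)]
    for k in range(1, n):
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    p = (x - xs[0]) / h
    result, term = ys[0], 1.0
    for k in range(1, n):
        term *= p - (k - 1)                    # accumulate p(p-1)...(p-k+1)
        result += term * table[k][0] / factorial(k)
    return result

# Example: y = x^2 tabulated at x = 0, 1, 2, 3; interpolate at x = 1.5
print(newton_forward([0, 1, 2, 3], [0, 1, 4, 9], 1.5))   # expected 2.25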
Newton’s Backward Interpolation:-
Backward Differences: The differences y1 − y0, y2 − y1, ..., yn − yn−1, when denoted by ∇y1, ∇y2, ..., ∇yn respectively, are called the first backward differences. Thus the first backward differences are:
∇y1 = y1 − y0, ∇y2 = y2 − y1, ..., ∇yn = yn − yn−1
For equally spaced points, Newton's backward interpolation formula is
y(x) = yn + p ∇yn + (p(p + 1)/2!) ∇²yn + (p(p + 1)(p + 2)/3!) ∇³yn + ..., where p = (x − xn)/h.
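A matching Python sketch for the backward formula, evaluated from the last tabular point (same illustrative data as above):

from math import factorial

def newton_backward(xs, ys, x):
    """Interpolate at x using backward differences anchored at the last point yn."""
    n = len(ys)
    h = xs[1] - xs[0]
    table = [list(ys)]
    for k in range(1, n):
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    p = (x - xs[-1]) / h
    result, term = ys[-1], 1.0
    for k in range(1, n):
        term *= p + (k - 1)                    # accumulate p(p+1)...(p+k-1)
        result += term * table[k][-1] / factorial(k)   # k-th backward difference at yn
    return result

print(newton_backward([0, 1, 2, 3], [0, 1, 4, 9], 2.5))  # expected 6.25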
Newton’s divided difference:-
Let us assume that the function f(x) is linear. Then the ratio
(f(xi) − f(xj)) / (xi − xj),
where xi and xj are any two tabular points, is independent of xi and xj. This ratio is called the first divided difference of f(x) relative to xi and xj and is denoted by f[xi, xj]. That is,
f[xi, xj] = (f(xi) − f(xj)) / (xi − xj)

Since f(x) is linear, for any x,
(f(x) − f(x0)) / (x − x0) = f[x0, x1]
The second divided difference is defined in terms of the first divided differences as
f[x0, x1, x2] = (f[x1, x2] − f[x0, x1]) / (x2 − x0)

The kth degree polynomial approximation to f(x) can be written as
f(x) = f[x0] + (x − x0) f[x0, x1] + (x − x0)(x − x1) f[x0, x1, x2]
+ ... + (x − x0)(x − x1) ... (x − xk−1) f[x0, x1, ..., xk].
This formula is called Newton's divided difference formula. Once we have the divided differences of the function f relative to the tabular points, we can use the above formula to compute f(x) at any non-tabular point.
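A Python sketch of Newton's divided-difference interpolation; unlike the forward and backward formulas, the data points need not be equally spaced (the sample values are illustrative):

def divided_difference_poly(xs, ys, x):
    """Evaluate Newton's divided-difference polynomial through (xs, ys) at x."""
    n = len(xs)
    # coef[k] ends up holding f[x0, x1, ..., xk] after the in-place table update.
    coef = list(ys)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    # Evaluate f[x0] + (x - x0) f[x0, x1] + ... with Horner-like nesting.
    result = coef[-1]
    for i in range(n - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

# Example: points from y = x^3 at unevenly spaced x; interpolate at x = 2.5
print(divided_difference_poly([0, 1, 2, 4], [0, 1, 8, 64], 2.5))   # expected 15.625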
Simpson 1/3rd rule:-
