Practical


UNIT 1

Theory and algorithm of Bisection Method

For any continuous function f(x):

• Find two points a and b such that a < b and f(a) · f(b) < 0.
• Find the midpoint of a and b, say t.
• t is the root of the given function if f(t) = 0; otherwise follow the next step.
• Divide the interval [a, b]:

1. If f(t) · f(a) < 0, there exists a root between t and a.

2. Else if f(t) · f(b) < 0, there exists a root between t and b.

• Repeat the above steps until f(t) = 0 or the interval is smaller than the chosen tolerance.

The bisection method is an approximation method that finds a root of the given equation by repeatedly halving the interval that brackets the root, stopping once the interval is sufficiently small.
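Because each iteration halves the bracketing interval, the error after n iterations is at most (b − a)/2^n, so guaranteeing a tolerance ϵ takes about n ≥ log₂((b − a)/ϵ) iterations. For instance, for (a, b) = (1, 2) and a stopping tolerance of 0.05, this gives ⌈log₂(20)⌉ = 5 iterations, which matches the five hand iterations worked out below.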

Input: Function f, interval endpoints a, b; tolerance ϵ
Output: Approximation of the root
while |b − a| > ϵ do
    c ← (a + b)/2 ;              // Calculate midpoint
    f(a) ← Evaluate f(a);
    f(c) ← Evaluate f(c);
    if f(a) · f(c) < 0 then
        b ← c ;                  // Root lies in the left subinterval
    else
        a ← c ;                  // Root lies in the right subinterval
    end
end
return (a + b)/2

Algorithm 1: Bisection Method

Analytical Solution of Bisection Method

f(x) = x² − 2,   initial guess of the interval: (a, b) = (1, 2)

• 1st Iteration: f(1) = −1 and f(2) = 2   { ∵ f(1)·f(2) < 0 }

c = (1 + 2)/2 = 1.5,   f(1.5) = 0.25

Since f(1)·f(1.5) < 0, the new interval becomes (1, 1.5)   {∵ here b = c}

error = |1.5 − 1| = 0.5


Figure 1: Bisection method

• 2nd Iteration: f(1) = −1 and f(1.5) = 0.25   { ∵ f(1)·f(1.5) < 0 }

c = (1 + 1.5)/2 = 1.25,   f(1.25) = −0.4375

Since f(1.25)·f(1.5) < 0, the new interval becomes (1.25, 1.5)   {∵ here a = c}

error = |1.5 − 1.25| = 0.25

• 3rd Iteration: f(1.25) = −0.4375 and f(1.5) = 0.25   { ∵ f(1.25)·f(1.5) < 0 }

c = (1.25 + 1.5)/2 = 1.375,   f(1.375) = −0.109375

Since f(1.375)·f(1.5) < 0, the new interval becomes (1.375, 1.5)   {∵ here a = c}

error = |1.5 − 1.375| = 0.125


• 4th Iteration: f(1.375) = −0.109375 and f(1.5) = 0.25   { ∵ f(1.375)·f(1.5) < 0 }

c = (1.375 + 1.5)/2 = 1.4375,   f(1.4375) = 0.06640625

Since f(1.375)·f(1.4375) < 0, the new interval becomes (1.375, 1.4375)   {∵ here b = c}

error = |1.4375 − 1.375| = 0.0625

• 5th Iteration: f(1.375) = −0.109375 and f(1.4375) = 0.06640625   { ∵ f(1.375)·f(1.4375) < 0 }

c = (1.375 + 1.4375)/2 = 1.40625,   f(1.40625) = −0.0224609375

Since f(1.40625)·f(1.4375) < 0, the new interval becomes (1.40625, 1.4375)   {∵ here a = c}

error = |1.40625 − 1.4375| = 0.03125
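As a quick cross-check, the following minimal sketch (a plain Python loop, separate from the full program below) reproduces the five hand iterations above for f(x) = x² − 2 on (1, 2):

def f(x):
    return x**2 - 2

a, b = 1.0, 2.0
for k in range(1, 6):
    c = (a + b) / 2                  # midpoint
    if f(a) * f(c) < 0:
        b = c                        # root lies in the left subinterval
    else:
        a = c                        # root lies in the right subinterval
    print(k, c, f(c), abs(b - a))    # iteration, midpoint, f(c), interval width

Running it prints midpoints 1.5, 1.25, 1.375, 1.4375, 1.40625 with interval widths 0.5, 0.25, 0.125, 0.0625, 0.03125, matching the table above.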

Code and Output of Bisection Method

Below is the Python code to solve the equation f(x) = tan(x) − x


import math
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return math.tan(x) - x

def bisection(a, b, tole):
    if f(a) * f(b) > 0:
        print("bisection is not possible")
        return None
    if f(a) == 0:
        print(a, "is the root")
        return a
    if f(b) == 0:
        print(b, "is the root")
        return b
    c = (a + b) / 2
    while abs(a - b) > tole:
        c = (a + b) / 2
        if f(c) == 0:
            print(c, "is the root")
            break
        if f(a) * f(c) < 0:
            b = c                      # root lies in the left subinterval
        else:
            a = c                      # root lies in the right subinterval
        error = np.abs(a - b)
        print("The root is in the interval (", a, ",", b, ") and error is", error * 100 / c, "%")
    return c

a = float(input("Enter the lower limit "))
b = float(input("Enter the upper limit "))
tole = float(input("enter the tolerance value = "))
r = bisection(a, b, tole)
print("ans is ", r)

x = np.linspace(-10, 10, 500)
fv = np.vectorize(f)                   # vectorize so f can be applied to a NumPy array
plt.plot(x, fv(x), color='blue', label='f(x)')
plt.legend()
plt.xlabel("X-Axis")
plt.ylabel("Y-Axis")
plt.grid()
plt.show()

Figure 2: tan(x) − x

Below is the Python code to solve the equation f(x) = x² − 2


import math
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve

def f(x):
    return x**2 - 2

def bisection(a, b, tole):
    if f(a) * f(b) > 0:
        print("bisection is not possible")
        return None
    if f(a) == 0:
        print(a, "is the root")
        return a
    if f(b) == 0:
        print(b, "is the root")
        return b
    c = (a + b) / 2
    while np.abs(a - b) > tole:
        c = (a + b) / 2
        if f(c) == 0:
            print(c, "is the root")
            break
        if f(a) * f(c) < 0:
            b = c                      # root lies in the left subinterval
        else:
            a = c                      # root lies in the right subinterval
        error = np.abs(a - b)
        print("The root is in the interval (", a, ",", b, ") and error is", error * 100 / c, "%")
    return c

a = float(input("Enter the lower limit "))
b = float(input("Enter the upper limit "))
tole = float(input("enter the tolerance value = "))
r = bisection(a, b, tole)
print("Root is ", r)

short = fsolve(f, [-1.5, 1.5])         # fsolve finds a root near each starting guess
print("fsolve gives all the roots of the equation = ", short)
x = np.linspace(-2, 2, 100)
plt.plot(x, f(x), color='blue', label='f(x)')
plt.legend()
plt.xlabel("X-Axis")
plt.ylabel("Y-Axis")
plt.grid()
plt.show()

Figure 3: x² − 2

Theory and algorithm of Newton-Raphson Method

The Newton-Raphson method is an iterative root-finding technique that utilizes tangent lines to approximate the root of a
function. By iteratively refining the initial guess, the method converges rapidly to the root. The key idea is to move along the
tangent line of the function’s curve, which brings us closer and closer to the root.
Let’s consider a function f (x) and its derivative f ′ (x). Starting with an initial guess x0 , we can construct a tangent line at the
point (x0 , f (x0 )). The equation of this tangent line is given by:

y = f (x0 ) + f ′ (x0 )(x − x0 )

where y represents the function value and x represents the variable.


To find the x-intercept of the tangent line, we set y = 0, which yields:

0 = f (x0 ) + f ′ (x0 )(x − x0 )

Solving for x, we get:


x = x0 − f(x0)/f′(x0)
This formula provides us with an improved estimate x1 for the root.
We can then repeat this process iteratively, using the updated estimate x1 as the new starting point. The general formula for
the Newton-Raphson iteration is:
xn+1 = xn − f(xn)/f′(xn)
where xn represents the nth estimate of the root.
The pseudocode for the Newton-Raphson method is as follows:
Input: Function f, derivative function f′, initial guess x0, tolerance ϵ
Output: Approximation of the root
x ← x0;
while |f(x)| > ϵ do
    f′(x) ← Evaluate f′(x);
    Δx ← −f(x)/f′(x) ;       // Calculate the update
    x ← x + Δx ;             // Update the estimate
end
return x

Algorithm 2: Newton-Raphson Method
Analytical Solution of Newton-Raphson Method

f(x) = x³ − 20,   f′(x) = 3x²,   initial guess: x1 = 3


• 1st Iteration: x2 = x1 − f(x1)/f′(x1)

x2 = 3 − 7/27 = 2.74

error = |2.74 − 3|/3 · 100 = 8.67%

Figure 4: Newton Raphson

• 2nd Iteration: x3 = x2 − f(x2)/f′(x2)

x3 = 2.74 − 0.57/22.52 = 2.72

error = |2.74 − 2.72|/2.74 · 100 ≈ 0.73%

• 3rd Iteration: x4 = x3 − f(x3)/f′(x3)

x4 = 2.72 − 0.12/22.19 = 2.715

error = |2.72 − 2.715|/2.72 · 100 ≈ 0.18%

• 4th Iteration: x5 = x4 − f(x4)/f′(x4)

x5 = 2.715 − 0.01/22.11 = 2.7146

error = |2.715 − 2.7146|/2.715 · 100 ≈ 0.014%

• 5th Iteration: x6 = x5 − f(x5)/f′(x5)

x6 = 2.7146 − 0.004/22.10 = 2.7145

error = |2.7146 − 2.7145|/2.7146 · 100 ≈ 0.004%
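For comparison, a minimal sketch that repeats the update x_{n+1} = xn − f(xn)/f′(xn) in plain Python (small differences from the hand values above come from rounding; the iterates approach 20^(1/3) ≈ 2.7144):

x = 3.0
for k in range(5):
    x_new = x - (x**3 - 20) / (3 * x**2)      # Newton-Raphson update
    err = abs(x_new - x) / abs(x) * 100       # relative error in percent
    x = x_new
    print(k + 1, round(x, 4), round(err, 4))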

Code and Output of Newton-Raphson Method

Below is the Python code to solve the equation f(x) = x³ − 20


import math

x0 = float(input("enter the value of initial guess = "))
tol = float(input("enter the value of allowed error = "))

def newton(x0, tol):
    def fun(x):
        return x**3 - 20
    def funp(x):
        return 3 * x**2
    count = 0
    while True:
        x1 = x0 - fun(x0) / funp(x0)       # Newton-Raphson update
        err = abs((x1 - x0) / x0) * 100    # relative error in percent
        x0 = x1
        count += 1
        if err < tol:
            break
    print(x1, "is the closest root after", count, "iterations with relative error", err, "%")

newton(x0, tol)

Figure 5: x³ − 20

Below is the Python code to solve the equation f(x) = tan(x) − x


import math

x0 = float(input("enter the value of initial guess = "))
tol = float(input("enter the value of allowed error = "))

def newton(x0, tol):
    def fun(x):
        return math.tan(x) - x
    def funp(x):
        return (1 / math.cos(x)**2) - 1    # d/dx(tan x - x) = sec^2(x) - 1
    count = 0
    while True:
        x1 = x0 - fun(x0) / funp(x0)       # Newton-Raphson update
        err = abs((x1 - x0) / x0) * 100    # relative error in percent
        x0 = x1
        count += 1
        if err < tol:
            break
    print(x1, "is the closest root after", count, "iterations with relative error", err, "%")

newton(x0, tol)

Figure 6: tan(x) − x

Theory and algorithm of Secant Method

The secant method is a numerical technique for finding the root of a


function without explicitly calculating the derivative. It is based on the idea
of approximating the derivative using the slope of a secant line drawn through
two points on the function’s curve. The secant method utilizes a recurrence
relation to iteratively update the estimate of the root.
Let’s consider two initial guesses, x0 and x1 , which are close to the true
root. We can draw a secant line through the points (x0 , f (x0 )) and (x1 , f (x1 )).
The equation of the secant line can be expressed as:
y = f(x0) + [(f(x1) − f(x0))/(x1 − x0)]·(x − x0)
where y represents the function value and x represents the variable.
To find the x-intercept of the secant line, we set y = 0, which yields:
0 = f(x0) + [(f(x1) − f(x0))/(x1 − x0)]·(x − x0)
Solving for x, we get the updated estimate for the root:
x = x1 − f(x1)·(x1 − x0)/(f(x1) − f(x0))
This formula provides us with an improved estimate x2 for the root.
We can then repeat this process iteratively, using the updated estimates
x1 and x2 as the new points on the secant line. The general formula for the
secant method iteration is:
xn+1 = xn − f(xn)·(xn − xn−1)/(f(xn) − f(xn−1))
where xn represents the nth estimate of the root.
The pseudocode for the secant method is as follows:
Input: Function f, initial guesses x0 and x1, tolerance ϵ
Output: Approximation of the root
xprev ← x0;
x ← x1;
while |f(x)| > ϵ do
    Δx ← −f(x)·(x − xprev)/(f(x) − f(xprev)) ;   // Calculate the update
    xprev ← x;
    x ← x + Δx ;                                  // Update the estimate
end
return x

Algorithm 3: Secant Method

Analytical Solution of Secant Method

f(x) = x³ − 20,   initial guesses: x1 = 4, x2 = 5.5


• 1st Iteration: x3 = x2 − f(x2)·(x2 − x1)/(f(x2) − f(x1))

x3 = 5.5 − f(5.5)·(5.5 − 4)/(f(5.5) − f(4)) = 5.5 − 219.5625/102.375 = 3.35

error = |3.35 − 5.5|/3.35 ≈ 0.64

• 2nd Iteration: x4 = x3 − f(x3)·(x3 − x2)/(f(x3) − f(x2))

x4 = 3.35 − f(3.35)·(3.35 − 5.5)/(f(3.35) − f(5.5)) = 3.35 − (−37.8)/(−128.7) = 3.05

Figure 7: Secant method

error = |3.35 − 3.05|/3.05 ≈ 0.1

• 3rd Iteration: x5 = x4 − f(x4)·(x4 − x3)/(f(x4) − f(x3))

x5 = 3.05 − f(3.05)·(3.05 − 3.35)/(f(3.05) − f(3.35)) = 3.05 − (−2.511)/(−9.22) = 2.77

error = |3.05 − 2.77|/2.77 ≈ 0.1
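A minimal sketch of the same secant steps in plain Python (the hand values above are rounded; the iterates approach 20^(1/3) ≈ 2.7144):

def f(x):
    return x**3 - 20

x1, x2 = 4.0, 5.5
for k in range(1, 6):
    x3 = x2 - f(x2) * (x2 - x1) / (f(x2) - f(x1))   # secant update
    x1, x2 = x2, x3
    print(k, round(x3, 4))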
Code and Output of Secant Method

Below is the Python code to solve the equation f(x) = tan(x) − x


import math

x1 = float(input("enter the value of lower limit = "))
x2 = float(input("enter the value of upper limit = "))
tol = float(input("enter the acceptable maximum error = "))

def f(x):
    return math.tan(x) - x

def secant(x1, x2, tol):
    error = abs(x2 - x1) / abs(x2)
    count = 0
    x3 = x2
    while error > tol:
        x3 = x2 - f(x2) * (x2 - x1) / (f(x2) - f(x1))   # secant update
        x1, x2 = x2, x3
        error = abs(x1 - x2) / abs(x2)
        count += 1
    print("The root is", x3, "with possible error of", error, "after", count, "iterations")

secant(x1, x2, tol)

Figure 8

Below is the Python code to solve the equation f(x) = x³ − 20


import math

x1 = float(input("enter the value of lower limit = "))
x2 = float(input("enter the value of upper limit = "))
tol = float(input("enter the acceptable maximum error = "))

def f(x):
    return x**3 - 20

def secant(x1, x2, tol):
    error = abs(x2 - x1) / abs(x2)
    count = 0
    x3 = x2
    while error > tol:
        x3 = x2 - f(x2) * (x2 - x1) / (f(x2) - f(x1))   # secant update
        x1, x2 = x2, x3
        error = abs(x1 - x2) / abs(x2)
        count += 1
    print("The root is", x3, "with possible error of", error, "after", count, "iterations")

secant(x1, x2, tol)

Figure 9
UNIT 2

Theory of Least Square Fitting


Linear least squares fitting is a technique used to find the best-fit line that minimizes the sum of the squared residuals between the observed data points and the predicted values. Given a set of data points {(xi, yi)}, the goal is to find the line of the form y = mx + c that best fits the data.

Linear curve:

y = mx + c

The slope and intercept are given by the least-squares normal equations:

m = (n Σxy − (Σx)(Σy)) / (n Σx² − (Σx)²)

c = ȳ − m · x̄

Theory of Linear Least square fitting

The best-fit line is then given by y = mx + c, where m and c are the values obtained from solving the normal equations.
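As a cross-check, NumPy's np.polyfit with degree 1 solves the same least-squares problem, so it should reproduce m and c from the normal equations; a minimal sketch using the data from the program below:

import numpy as np

x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([1.5, 2.5, 3.5, 4.5, 13.6, 9.9])

n = len(x)
m = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
c = np.mean(y) - m * np.mean(x)

m_np, c_np = np.polyfit(x, y, 1)   # degree-1 least-squares fit: returns slope, intercept
print(m, c)                        # from the normal equations
print(m_np, c_np)                  # should agree with the above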

Code and output of Linear Least Square Fitting

# considering the equation to be y = mx + c

import math
import matplotlib.pyplot as plt
import numpy as np

x = np.array([i for i in range(1, 7)])
y = np.array([1.5, 2.5, 3.5, 4.5, 13.6, 9.9])

xy = x * y
xx = x * x
n = len(x)

sxy = sum(xy)
sxx = sum(xx)
sx = sum(x)
sy = sum(y)

# m = {n.sum(x.y) - sum(x).sum(y)} / {n.sum(x^2) - sum(x).sum(x)}
m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
c = (sy - m * sx) / n

z = []
for i in x:
    z.append(m * i + c)            # fitted value m*x + c at each data point

print("Data points : ")
for i, j in zip(x, y):
    print(f'({i},{j})', end='')
print()
print("Fitted points : ")
for i, j in zip(x, z):
    print(f'({i},{j})', end='')

plt.scatter(x, y, color='black', label='Data Points')
plt.plot(x, y, color='blue', label='Data Points')
plt.plot(x, z, color='red', label='Fitted Line')
plt.xlabel("X axis")
plt.ylabel("Y axis")
plt.title("Linear Least Square Fitting")
plt.legend()
plt.grid()
plt.show()

Figure 10: Linear Function

Theory of least square fitting of a power function

By taking the logarithm of both sides of the power function, you can transform it into a linear equation that can be fitted using linear regression. Here's how you can do it (a cross-check sketch follows this list):

• Start with the power function: y = a·x^b
• Take the natural logarithm (loge) of both sides of the equation: loge(y) = loge(a·x^b).
• Apply the properties of logarithms: loge(y) = loge(a) + b·loge(x)
• Now fit the curve the same as linear regression Y = MX + C, where M and C are the slope and intercept:

Y = loge(y), X = loge(x), M = b, C = loge(a)
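A minimal sketch of this transformation using np.polyfit on (log x, log y), with the same illustrative data as the program below (which roughly follows y = 2x³):

import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2, 16, 54, 128, 251], dtype=float)

b, log_a = np.polyfit(np.log(x), np.log(y), 1)  # slope = b, intercept = log(a)
a = np.exp(log_a)
print(a, b)   # close to a = 2, b = 3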

Code and output of least square fitting of a power function

# considering the power function to be y = a*x**b
# log(y) = log(a) + b*log(x)
# comparing with the linear equation Y = mX + c:
#   Y = log(y), X = log(x), m = b, c = log(a)

import math
import matplotlib.pyplot as plt
import numpy as np

x = np.array([i for i in range(1, 6)])
y = np.array([2, 16, 54, 128, 251])

X = np.log(x)
Y = np.log(y)

X_mean = np.mean(X)
Y_mean = np.mean(Y)
n = len(x)

sxy = np.sum(X * Y) - n * (X_mean * Y_mean)
sxx = np.sum(X * X) - n * (X_mean * X_mean)

# m = b
m = sxy / sxx
c = Y_mean - m * X_mean
a = math.e**c                       # a = e**c, since c = log(a)

print("Value of the slope = ", m)
print("Value of the intercept = ", c)
print()
print("The value of a in the function ", a)
b = m
print("The value of b in the function ", b)

z = []
for i in x:
    z.append(a * i**b)              # fitted value a*x**b (note: (a*i)**b would be wrong)

print("Data points : ")
for i, j in zip(x, y):
    print(f'({i},{j})', end='')
print()
print("Fitted points : ")
for i, j in zip(x, z):
    print(f'({i},{j})')

plt.scatter(x, y, color='black', label='Data Points')
plt.plot(x, y, color='blue', label='Data Points')
plt.plot(x, z, color='red', label='Fitted Line')
plt.xlabel("X axis")
plt.ylabel("Y axis")
plt.title("Power-law Least Square Fitting")
plt.legend()
plt.grid()
plt.show()

Figure 11: Power Function

Theory of least square fitting of an exponential function

By taking the logarithm of both sides of the exponential equation, you can transform it into a linear equation that can be fitted using linear regression. Here's how you can do it (a cross-check sketch follows this list):

• Start with the exponential equation: y = a·e^(bx)
• Take the natural logarithm (loge) of both sides of the equation: loge(y) = loge(a·e^(bx)).
• Apply the properties of logarithms: loge(y) = loge(a) + b·x
• Now fit the curve the same as linear regression Y = MX + C {M and C are the slope and intercept}:

Y = loge(y), X = x, M = b, C = loge(a)
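A minimal sketch of the same idea using np.polyfit on (x, log y), with the data from the program below:

import numpy as np

x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([2, 5, 11, 29, 78, 94], dtype=float)

b, log_a = np.polyfit(x, np.log(y), 1)  # slope = b, intercept = log(a)
a = np.exp(log_a)
print(a, b)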

Code and output of least square fitting of an exponential function

# considering the exponential equation to be y = a*e**(b*x)
# log(y) = log(a) + b*x
# comparing with the linear equation Y = mX + c:
#   Y = log(y), X = x, m = b, c = log(a)

import math
import matplotlib.pyplot as plt
import numpy as np

x = np.array([i for i in range(1, 7)])
y = np.array([2, 5, 11, 29, 78, 94])

X = x
Y = np.log(y)
X_mean = np.mean(X)
Y_mean = np.mean(Y)
n = len(x)

sxy = np.sum(X * Y) - n * (X_mean * Y_mean)
sxx = np.sum(X * X) - n * (X_mean * X_mean)

# m = b
m = sxy / sxx
c = Y_mean - m * X_mean
a = math.e**c                       # a = e**c, since c = log(a)

print("Value of the slope = ", m)
print("Value of the intercept = ", c)
print()
print("The value of a in the function ", a)
b = m
print("The value of b in the function ", b)

z = []
for i in x:
    z.append(a * math.e**(b * i))   # fitted value a*e**(b*x)

print("Data points : ")
for i, j in zip(x, y):
    print(f'({i},{j})', end='')
print()
print("Fitted points : ")
for i, j in zip(x, z):
    print(f'({i},{j})')

plt.scatter(x, y, color='black', label='Data Points')
plt.plot(x, y, color='blue', label='Data Points')
plt.plot(x, z, color='red', label='Fitted Curve')
plt.scatter(x, z, color='black')
plt.xlabel("X axis")
plt.ylabel("Y axis")
plt.title("Exponential Least Square Fitting")
plt.legend()
plt.grid()
plt.show()

Figure 12: Exponential function
UNIT 3

Theory of Taylor’s Expansion


TAYLOR SERIES: The Taylor series is a way to represent a function as an infinite sum of terms, where each term is obtained from the derivatives of the function evaluated at a specific point. It provides an approximation of the function around that point. The general Taylor series expansion of a function f(x) around a point a is given by:

f(x) = f(a) + (f′(a)/1!)(x − a) + (f″(a)/2!)(x − a)² + (f‴(a)/3!)(x − a)³ + ...

The above Taylor series expansion is given for a real-valued function f(x), where f′(a), f″(a), f‴(a), etc., denote the derivatives of the function at the point a. If the value of the point a is zero, then the Taylor series is also called the Maclaurin series.
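For a symbolic cross-check of the expansions derived below, SymPy's series function (assuming SymPy is installed) produces the same Maclaurin polynomials:

import sympy as sp

x = sp.symbols('x')
print(sp.series(sp.log(1 + x), x, 0, 6))  # x - x**2/2 + x**3/3 - x**4/4 + x**5/5 + O(x**6)
print(sp.series(sp.sin(x), x, 0, 8))      # x - x**3/6 + x**5/120 - x**7/5040 + O(x**8)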

Maclaurin series of log(1+x)

Let's calculate the derivatives of log(1+x):

f(x) = log(1+x)
f′(x) = 1/(1+x)
f″(x) = −1/(1+x)²
f‴(x) = 2/(1+x)³

Evaluating these derivatives at x = 0 gives:

f(0) = log(1+0) = 0
f′(0) = 1/(1+0) = 1
f″(0) = −1/(1+0)² = −1
f‴(0) = 2/(1+0)³ = 2

Using the Maclaurin series formula, we have:

f(x) = f(0) + (f′(0)/1!)x + (f″(0)/2!)x² + (f‴(0)/3!)x³ + ...

The Maclaurin series expansion of log(1+x) is given by:

log(1+x) = x − x²/2 + x³/3 − x⁴/4 + x⁵/5 − ... = Σ_{n=1}^{∞} (−1)^{n−1} x^n/n

Code and output of Taylor’s expansion of log(1+x)

import math
import numpy as np
import matplotlib.pyplot as plt

n = int(input("enter the number of terms in series = "))
t = int(input("enter the multiple of x = "))

def log(x, n, t):
    y_ = 0
    for i in range(n):
        y_ += ((-1)**i) * ((t * x)**(i + 1)) / (i + 1)   # i-th term of the series for log(1+tx)
    return y_

x = np.arange(0, 1, 0.05)
o = log(x, n, t)
y = np.log(1 + t * x)              # exact value of log(1+tx) for comparison

error = o - y
print(error)
plt.ylim([-1, 1])
plt.plot(x, y, "red", label='log(1+tx)')
plt.plot(x, o, "x", label="taylor")
plt.plot(x, error, linestyle="dashed", label="error")
plt.xlabel("x axis")
plt.ylabel("y axis")
plt.title("Maclaurin series of log(1+tx)")
plt.legend()
plt.grid()
plt.show()

Maclaurin sine series

To derive the Maclaurin series formula for sin(x), we need to find the derivatives of sin(x) and evaluate them at x = 0.

f(x) = sin(x)
f′(x) = cos(x)
f″(x) = −sin(x)
f‴(x) = −cos(x)

Evaluating these derivatives at x = 0 gives:

f(0) = 0
f′(0) = 1
f″(0) = 0
f‴(0) = −1

Substituting these values into the Maclaurin series formula,

f(x) = f(0) + (f′(0)/1!)x + (f″(0)/2!)x² + (f‴(0)/3!)x³ + ...

The Maclaurin series expansion of sin(x) is given by:

sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + x⁹/9! − ... = Σ_{n=0}^{∞} (−1)^n x^{2n+1}/(2n+1)!

Code and output of Taylor’s expansion of sin(x)

import math
import numpy as np
import matplotlib.pyplot as plt

n = int(input("enter the number of terms in series = "))
t = int(input("enter the multiple of sine = "))

def sine(x, n, t):
    y_ = 0
    for i in range(n):
        y_ += ((-1)**i) * (t * x**(2*i + 1)) / math.factorial(2*i + 1)   # i-th term of t*sin(x)
    return y_

x = np.arange(-2*np.pi, 2*np.pi, 0.1)
o = sine(x, n, t)
y = t * np.sin(x)                  # exact value of t*sin(x) for comparison

error = o - y
for i in range(4):
    z = [sine(r, i, t) for r in x]
    plt.plot(x, z, label=f"terms = {i}")

plt.plot(x, y, "red", label='t*sin(x)')
plt.plot(x, o, label="taylor")
plt.plot(x, error, linestyle="dashed", label="error")
plt.xlabel("x axis")
plt.ylabel("y axis")
plt.ylim([-2, 2])
plt.title("Maclaurin Sine series")
plt.legend()
plt.grid()
plt.show()

Figure 14: sin(x)

Maclaurin cosine series

Following a similar process as above, we can derive the Maclaurin series formula for cos(x).

f(x) = cos(x)
f′(x) = −sin(x)
f″(x) = −cos(x)
f‴(x) = sin(x)

Evaluating these derivatives at x = 0 gives:

f(0) = 1
f′(0) = 0
f″(0) = −1
f‴(0) = 0

Using the Maclaurin series formula, we have:

f(x) = f(0) + (f′(0)/1!)x + (f″(0)/2!)x² + (f‴(0)/3!)x³ + ...

The Maclaurin series expansion of cos(x) is given by:

cos(x) = 1 − x²/2! + x⁴/4! − x⁶/6! + x⁸/8! − ... = Σ_{n=0}^{∞} (−1)^n x^{2n}/(2n)!
Code and output of Taylor’s expansion of cosine series

import math
import numpy as np
import matplotlib.pyplot as plt

n = int(input("enter the number of terms in series = "))

def cosine(x, n):
    y_ = 0
    for i in range(n):
        y_ += ((-1)**i) * (x**(2*i)) / math.factorial(2*i)   # i-th term of cos(x)
    return y_

x = np.arange(-2*np.pi, 2*np.pi, 0.1)
o = cosine(x, n)
y = np.cos(x)

error = o - y
for i in range(1, 5):
    z = [cosine(r, i) for r in x]
    plt.plot(x, z, label=f"terms = {i}")

plt.ylim([-2, 2])
plt.plot(x, y, "red", label='cos(x)')
plt.plot(x, o, "x", label="taylor")
plt.plot(x, error, linestyle="dashed", label="error")
plt.xlabel("x axis")
plt.ylabel("y axis")
plt.title("Maclaurin Cosine series")
plt.legend()
plt.grid()
plt.show()

Figure 15: cos(x)

Maclaurin series of e^x

Let's start by calculating the derivatives of e^x:

f(x) = e^x
f′(x) = e^x
f″(x) = e^x
f‴(x) = e^x

Evaluating these derivatives at x = 0 gives:

f(0) = 1
f′(0) = 1
f″(0) = 1
f‴(0) = 1

Using the Maclaurin series formula, we have:

f(x) = f(0) + (f′(0)/1!)x + (f″(0)/2!)x² + (f‴(0)/3!)x³ + ...

The Maclaurin series expansion of e^x is given by:

e^x = 1 + x + x²/2! + x³/3! + x⁴/4! + ... = Σ_{n=0}^{∞} x^n/n!

Code and output of Taylor’s expansion of e^x

import math
import numpy as np
import matplotlib.pyplot as plt

n = int(input("enter the number of terms in series = "))

def exponent(x, n):
    y_ = 0
    for i in range(n):
        y_ += (x**i) / math.factorial(i)   # i-th term x^i / i!
    return y_

x = np.arange(-2, 2, 0.05)
o = exponent(x, n)
y = np.exp(x)

error = o - y

plt.plot(x, y, "red", label='exp(x)')
plt.plot(x, o, "x", label="Taylor")
plt.plot(x, error, "*", label="error")
plt.xlabel("x axis")
plt.ylabel("y axis")
plt.title("Maclaurin series of exp(x)")
plt.legend()
plt.grid()
plt.show()

Theory of Legendre polynomials

The Legendre polynomials, denoted by Pn(x), are a family of orthogonal polynomials that arise in various branches of mathematics, especially in the field of approximation theory and mathematical physics. They are defined by Rodrigues' formula:

Pn(x) = (1/(2^n n!)) · d^n/dx^n [(x² − 1)^n]

where n is a non-negative integer and d^n/dx^n represents the n-th derivative with respect to x.

The Legendre polynomials satisfy the orthogonality condition:

∫_{−1}^{1} Pm(x) Pn(x) dx = (2/(2n+1)) δmn

where δmn is the Kronecker delta, defined as δmn = 1 if m = n and δmn = 0 if m ≠ n.

The Legendre polynomials also satisfy the following recurrence relation, known as Bonnet's recursion formula:

(n + 1) Pn+1(x) = (2n + 1) x Pn(x) − n Pn−1(x)

where P−1(x) = 0 and P0(x) = 1.

Legendre polynomials have numerous applications, including solving partial differential equations, expressing functions in terms of orthogonal basis functions, numerical integration, and approximating functions using polynomial interpolation.
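As a numerical sanity check, SciPy's eval_legendre (assuming SciPy is available) can be used to verify Bonnet's recursion at sample points:

import numpy as np
from scipy.special import eval_legendre

x = np.linspace(-1, 1, 5)
for n in range(1, 5):
    lhs = (n + 1) * eval_legendre(n + 1, x)
    rhs = (2 * n + 1) * x * eval_legendre(n, x) - n * eval_legendre(n - 1, x)
    print(n, np.allclose(lhs, rhs))   # True for each n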

Code and output of Legendre polynomials

import math
import numpy as np
import matplotlib.pyplot as plt

n= int(input("enter the value of n = "))


def legendre_polynomial(x,n):
if n==0:
return 1
if n==1:
return x
elif n>1 :
prev = 1
pcur = x
pnext =0
for i in range (2,n+1): Figure 17: Legendre polynomials
pnext = ((2*i+1)*x*pcur - (i+1)*prev)/i
prev = pcur
pcur= pnext
return pcur

x= np.arange(-1,1,0.01)
legendre_polynomial = np.vectorize(legendre_polynomial)
plt.plot(x,legendre_polynomial(x,n))
for i in range (n):
plt.plot(x,legendre_polynomial(x,i) , label = f"Legendre polynomials for {i} term . ")

plt.xlabel('X')
plt.ylabel('Y')
plt.title("Legendre Polynomials ")
plt.legend(fontsize =10)
plt.grid()
plt.show()
UNIT 4

Theory of Lagrange Interpolation


Given a set of n + 1 distinct data points (x0, y0), (x1, y1), ..., (xn, yn), where xi ≠ xj for i ≠ j, the Lagrange interpolation polynomial is defined as:

P(x) = Σ_{i=0}^{n} yi Li(x)

where Li(x) is the i-th Lagrange basis polynomial, given by:

Li(x) = Π_{j=0, j≠i}^{n} (x − xj)/(xi − xj)

The Lagrange basis polynomials have the property that Li(xi) = 1 and Li(xj) = 0 for j ≠ i. Therefore, the Lagrange interpolation polynomial P(x) passes through all the given data points.

The interpolation error E(x) is defined as the difference between the actual function f(x) and the interpolation polynomial P(x):

E(x) = f(x) − P(x)

where f(x) is the true function that we wish to approximate.
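As a quick check of the interpolation property, SciPy's lagrange helper (assuming SciPy is available) builds P(x) and reproduces the data points exactly; the x and y values here are illustrative:

import numpy as np
from scipy.interpolate import lagrange

x_pts = np.array([0.0, 1.0, 2.0, 3.0])
y_pts = np.array([1.0, 2.0, 0.0, 5.0])   # illustrative values
P = lagrange(x_pts, y_pts)               # Lagrange interpolation polynomial
print(np.allclose(P(x_pts), y_pts))      # True: P passes through every data point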

Code and output of Lagrange interpolation

import math
import numpy as np
import matplotlib.pyplot as plt

n = int(input("Enter the number of points available: "))

x = []
y = []
for i in range(1, n + 1):
    x1 = float(input(f"Enter the value of x{i}: "))
    y1 = float(input(f"Enter the value of y{i}: "))
    x.append(x1)
    y.append(y1)

x_arr = np.array(x)
y_arr = np.array(y)

def f(x_find):
    # Lagrange interpolation: sum of y_i * L_i(x_find)
    sum1 = 0
    for i in range(n):
        term = 1
        for j in range(n):
            if i != j:
                term *= (x_find - x_arr[j]) / (x_arr[i] - x_arr[j])
        term = term * y_arr[i]
        sum1 += term
    return sum1

x_find = float(input("Enter the value of x where you want to find y: "))
s = f(x_find)
print("The value of y at x =", x_find, "is:", s)

x_arr1 = np.arange(-10, 10, 0.1)
y_arr1 = f(x_arr1)
plt.xlabel('x')
plt.ylabel("Interpolated polynomial")
plt.grid()
plt.plot(x_arr1, y_arr1, "black")
plt.show()

Figure 18: Lagrange interpolation
