Practical
• Find two points, say a and b, such that a < b and f(a) · f(b) < 0.
• Find the midpoint of a and b, say t = (a + b)/2.
• If f(t) = 0, then t is the root of the given function; otherwise follow the next step.
• Divide the interval [a, b]: if f(a) · f(t) < 0, the root lies in [a, t], so set b = t; otherwise the root lies in [t, b], so set a = t. Repeat until the interval is smaller than the tolerance.
For f(x) = x² − 2 on the interval [1, 2]:
c = (1 + 2)/2 = 1.5, f(1.5) = 0.25; since f(1) · f(1.5) < 0, the new interval is (1, 1.5) {∵ here b = c}
c = (1 + 1.5)/2 = 1.25, f(1.25) = −0.4375; since f(1.25) · f(1.5) < 0, the new interval is (1.25, 1.5) {∵ here a = c}
c = (1.25 + 1.5)/2 = 1.375, f(1.375) = −0.109375; since f(1.375) · f(1.5) < 0, the new interval is (1.375, 1.5) {∵ here a = c}
c = (1.375 + 1.5)/2 = 1.4375, f(1.4375) = 0.06640625; since f(1.375) · f(1.4375) < 0, the new interval is (1.375, 1.4375) {∵ here b = c}
c = (1.375 + 1.4375)/2 = 1.40625, f(1.40625) = −0.0224609375; since f(1.40625) · f(1.4375) < 0, the new interval is (1.40625, 1.4375) {∵ here a = c}
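The hand computation above can be checked with a short Python sketch (a minimal version assuming f(x) = x² − 2 on [1, 2], as in the iterations):

```python
# Minimal bisection sketch for f(x) = x**2 - 2 on [1, 2],
# reproducing the midpoints computed by hand above.
def f(x):
    return x**2 - 2

a, b = 1.0, 2.0
midpoints = []
for _ in range(5):
    c = (a + b) / 2
    midpoints.append(c)
    if f(a) * f(c) < 0:
        b = c   # root lies in [a, c]
    else:
        a = c   # root lies in [c, b]

print(midpoints)  # [1.5, 1.25, 1.375, 1.4375, 1.40625]
```

The printed midpoints match the five hand iterations exactly, since halving preserves exact binary fractions.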
import math
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return math.tan(x) - x

def bisection(a, b, tole):
    if f(a) * f(b) > 0:
        print("bisection is not possible")
        return None
    if f(a) == 0:
        print(a, "is the root")
        return a
    if f(b) == 0:
        print(b, "is the root")
        return b
    c = (a + b) / 2
    while abs(a - b) > tole:
        c = (a + b) / 2
        if f(c) == 0:
            print(c, "is the root")
            break
        elif f(a) * f(c) < 0:
            b = c   # root lies in [a, c]
        else:
            a = c   # root lies in [c, b]
    error = abs(a - b)
    print("The root is in the interval (", a, ",", b, ") and error is", error * 100 / c, "%")
    return c
a = float(input("Enter the lower limit "))
b = float(input("Enter the upper limit "))
tole = float(input("Enter the tolerance value "))
r = bisection(a, b, tole)
print("Root is", r)
x = np.linspace(-10, 10, 500)
f = np.vectorize(f)
plt.plot(x, f(x), color='blue', label='f(x)')
plt.legend()
plt.xlabel("X-Axis")
plt.ylabel("Y-Axis")
plt.grid()
plt.show()
Figure 2: tan(x) − x
def f(x):
    return x**2 - 2

def bisection(a, b, tole):
    if f(a) * f(b) > 0:
        print("bisection is not possible")
        return None
    if f(a) == 0:
        print(a, "is the root")
        return a
    if f(b) == 0:
        print(b, "is the root")
        return b
    c = (a + b) / 2
    while abs(a - b) > tole:
        c = (a + b) / 2
        if f(c) == 0:
            print(c, "is the root")
            break
        elif f(a) * f(c) < 0:
            b = c   # root lies in [a, c]
        else:
            a = c   # root lies in [c, b]
    error = abs(a - b)
    print("The root is in the interval (", a, ",", b, ") and error is", error * 100 / c, "%")
    return c
a = float(input("Enter the lower limit "))
b = float(input("Enter the upper limit "))
tole = float(input("Enter the tolerance value "))
r = bisection(a, b, tole)
print("Root is", r)
roots = fsolve(f, [-1.5, 1.5])   # requires: from scipy.optimize import fsolve
print("fsolve gives all the roots of the equation =", roots)
x = np.linspace(-2, 2, 100)
plt.plot(x, f(x), color='blue', label='f(x)')
plt.legend()
plt.xlabel("X-Axis")
plt.ylabel("Y-Axis")
plt.grid()
plt.show()
Figure 3: x² − 2
The Newton-Raphson method is an iterative root-finding technique that utilizes tangent lines to approximate the root of a
function. By iteratively refining the initial guess, the method converges rapidly to the root. The key idea is to move along the
tangent line of the function’s curve, which brings us closer and closer to the root.
Let’s consider a function f(x) and its derivative f′(x). Starting with an initial guess x0, we construct the tangent line at the point (x0, f(x0)). The equation of this tangent line is y = f(x0) + f′(x0)(x − x0); setting y = 0 gives the next approximation x1 = x0 − f(x0)/f′(x0), and in general x(n+1) = xn − f(xn)/f′(xn).
For f(x) = x³ − 20 with initial guess x1 = 3:
• 1st Iteration : x2 = x1 − f(x1)/f′(x1) = 3 − 7/27 = 2.74
error = |2.74 − 3|/3 · 100 = 8.67%
Figure 4: Newton Raphson
• 2nd Iteration : x3 = x2 − f(x2)/f′(x2)
x3 = 2.74 − 0.57/22.52 = 2.72
error = |2.74 − 2.72|/2.74 · 100 = 0.4%
• 3rd Iteration : x4 = x3 − f(x3)/f′(x3)
x4 = 2.72 − 0.12/22.19 = 2.715
error = |2.72 − 2.715|/2.72 · 100 = 0.18%
• 4th Iteration : x5 = x4 − f(x4)/f′(x4)
x5 = 2.715 − 0.01/22.11 = 2.7146
error = |2.715 − 2.7146|/2.715 · 100 = 0.014%
• 5th Iteration : x6 = x5 − f(x5)/f′(x5)
x6 = 2.7146 − 0.004/22.10 = 2.7145
error = |2.7146 − 2.7145|/2.7146 · 100 = 0.0066%
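The iterations above can be reproduced programmatically; a minimal sketch (assuming f(x) = x³ − 20 and the same initial guess x = 3):

```python
# Newton-Raphson sketch for f(x) = x**3 - 20, starting from x = 3,
# as in the hand iterations above.
def f(x):
    return x**3 - 20

def fp(x):
    return 3 * x**2   # derivative of f

x = 3.0
for _ in range(5):
    x = x - f(x) / fp(x)   # move along the tangent line

print(round(x, 4))  # → 2.7144
```

Five iterations already agree with 20^(1/3) ≈ 2.71442 to four decimal places, illustrating the method's rapid (quadratic) convergence.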
def newton(x0, tol):
    def fun(x):
        return x**3 - 20
    def funp(x):
        return 3 * x**2
    count = 0
    while True:
        x1 = x0 - fun(x0) / funp(x0)
        err = abs((x1 - x0) / x0) * 100
        x0 = x1
        count += 1
        if err < tol:
            break
    print(x1, "is the closest root after", count, "iterations, with relative error", err, "%")

x0 = float(input("Enter the initial guess "))
tol = float(input("Enter the tolerance value "))
newton(x0, tol)
Figure 5: x3 − 20
import math

def newton(x0, tol):
    def fun(x):
        return math.tan(x) - x
    def funp(x):
        return 1 / math.cos(x)**2 - 1
    count = 0
    while True:
        x1 = x0 - fun(x0) / funp(x0)
        err = abs((x1 - x0) / x0) * 100
        x0 = x1
        count += 1
        if err < tol:
            break
    print(x1, "is the closest root after", count, "iterations, with relative error", err, "%")

x0 = float(input("Enter the initial guess "))
tol = float(input("Enter the tolerance value "))
newton(x0, tol)
Figure 6: tan(x) − x
x3 = 3.35
error = |3.35 − 5.5|/3.35 = 0.64
• 2nd Iteration : x4 = x3 − f(x3) · (x3 − x2)/(f(x3) − f(x2))
x4 = 3.05
error = |3.35 − 3.05|/3.35 = 0.09
• 3rd Iteration : x5 = x4 − f(x4) · (x4 − x3)/(f(x4) − f(x3))
x5 = 2.77
error = |3.05 − 2.77|/3.05 = 0.1
Code and Output of Secant Method
import math

def f(x):
    return math.tan(x) - x

def secant(x1, x2, tol):
    count = 0
    x3 = x2
    error = abs(x2 - x1) / abs(x2)
    while error > tol:
        x3 = x2 - f(x2) * (x2 - x1) / (f(x2) - f(x1))
        x1, x2 = x2, x3
        error = abs(x1 - x2) / abs(x2)
        count += 1
    print("The root is", x3, "with possible error of", error, "after", count, "iterations")

x1 = float(input("Enter the first guess "))
x2 = float(input("Enter the second guess "))
tol = float(input("Enter the tolerance value "))
secant(x1, x2, tol)
Figure 8
def f(x):
    return x**3 - 20

def secant(x1, x2, tol):
    count = 0
    x3 = x2
    error = abs(x2 - x1) / abs(x2)
    while error > tol:
        x3 = x2 - f(x2) * (x2 - x1) / (f(x2) - f(x1))
        x1, x2 = x2, x3
        error = abs(x1 - x2) / abs(x2)
        count += 1
    print("The root is", x3, "with possible error of", error, "after", count, "iterations")

x1 = float(input("Enter the first guess "))
x2 = float(input("Enter the second guess "))
tol = float(input("Enter the tolerance value "))
secant(x1, x2, tol)
Figure 9
UNIT 2
Intercept(c) = ȳ − m · x̄
The best-fit line is then given by y = mx + c, where m and c are the values obtained from solving the normal equations.
# x and y are the arrays of data points (read in earlier)
xy = x * y
xx = x * x
n = len(x)
sxy = sum(xy)
sxx = sum(xx)
sx = sum(x)
sy = sum(y)
m = (n * sxy - sx * sy) / (n * sxx - sx**2)   # slope from the normal equations
c = (sy - m * sx) / n                          # intercept c = ȳ − m · x̄
z = []
for i in x:
    z.append(m * i + c)   # fitted values on the best-fit line
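As a cross-check, the slope and intercept from the normal equations should agree with NumPy's built-in least-squares fit; a sketch with assumed sample data:

```python
import numpy as np

# Assumed sample data for illustration only
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])

n = len(x)
# Normal-equation solution for the best-fit line y = m*x + c
m = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x * x) - np.sum(x)**2)
c = (np.sum(y) - m * np.sum(x)) / n

# np.polyfit solves the same least-squares problem directly
m_np, c_np = np.polyfit(x, y, 1)
print(m, c)
```

Both routes give identical coefficients up to floating-point rounding.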
By taking the logarithm of both sides of the power function, you can transform it into a linear equation that can be fitted using linear regression. Here's how:
• Start with the power-law equation: y = ax^b
• Take the natural logarithm of both sides: loge(y) = loge(ax^b)
• Apply the properties of logarithms: loge(y) = loge(a) + b · loge(x)
• Now fit the curve exactly as in linear regression, Y = MX + C, where M and C are the slope and intercept:
Y = loge(y), X = loge(x), M = b, C = loge(a)
X = np.log(x)
Y = np.log(y)
X_mean = np.mean(X)
Y_mean = np.mean(Y)
n = len(x)
M = np.sum((X - X_mean) * (Y - Y_mean)) / np.sum((X - X_mean)**2)   # slope M = b
C = Y_mean - M * X_mean                                             # intercept C = loge(a)
b = M
a = np.exp(C)
z = []
for i in x:
    z.append(a * i**b)   # fitted values y = a·x^b (note: a*(i**b), not (a*i)**b)
xx = np.linspace(1,6,100)
yfit = a*np.power(xx,b)
print("Data points : " )
for i , j in zip(x,y):
print(f'({i},{j})', end = '')
print()
print("Fitted points : ")
for i , j in zip(x,z):
print(f'({i},{j})')
plt.scatter(x, y, color='black', label='Data Points')
plt.plot(xx, yfit, color='red', label='Fitted Curve')
plt.xlabel("X axis ")
plt.ylabel("Y axis")
plt.title("Exponential least Square Fitting")
plt.legend()
plt.grid()
plt.show()
By taking the logarithm of both sides of the exponential equation, you can transform it into a linear equation that can be fitted using linear regression. Here's how:
• Start with the exponential equation: y = ae^(bx)
• Take the natural logarithm of both sides: loge(y) = loge(ae^(bx))
• Apply the properties of logarithms: loge(y) = loge(a) + b · x
• Now fit the curve exactly as in linear regression, Y = MX + C, where M and C are the slope and intercept:
Y = loge(y), X = x, M = b, C = loge(a)
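A minimal sketch of this log-transform fit (the sample data and the names `a_fit`, `b_fit` are illustrative assumptions, not from the original):

```python
import numpy as np

# Assumed sample data generated from y = 2 * exp(0.5 * x) for illustration
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * np.exp(0.5 * x)

# Linearize: log(y) = log(a) + b*x, then fit a straight line
Y = np.log(y)
b_fit, C = np.polyfit(x, Y, 1)   # slope M = b, intercept C = log(a)
a_fit = np.exp(C)
print(a_fit, b_fit)  # ≈ 2.0, 0.5
```

Because the sample data are exactly exponential, the fit recovers a = 2 and b = 0.5 to machine precision.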
Taylor’s Expansion
For f(x) = log(1 + x):
f(0) = log(1 + 0) = 0
f′(0) = 1/(1 + 0) = 1
f″(0) = −1/(1 + 0)² = −1
f‴(0) = 2/(1 + 0)³ = 2
Using the Maclaurin series formula, we have:
log(1 + x) = x − x²/2 + x³/3 − x⁴/4 + …
import math
import numpy as np
import matplotlib.pyplot as plt

def log_series(x, n):
    # partial sum of the Maclaurin series of log(1+x)
    y_ = 0
    for i in range(1, n + 1):
        y_ += ((-1)**(i + 1)) * (x**i) / i
    return y_

x = np.arange(-0.9, 1.0, 0.01)   # the series converges for -1 < x <= 1
n = 5
o = log_series(x, n)
y = np.log(1 + x)
error = o - y
print(error)
plt.ylim([-1, 1])
plt.plot(x, y, "red", label='log(1+x)')
plt.plot(x, o, "x", label="Taylor")
plt.plot(x, error, linestyle="dashed", label="error")
plt.xlabel("x axis")
plt.ylabel("y axis")
plt.title("Maclaurin series of log(1+x)")
plt.legend()
plt.grid()
plt.show()
To derive the Maclaurin series formula for sin(x), we need to find the derivatives
of sin(x) and evaluate them at x = 0.
f(x) = sin(x)
f’(x) = cos(x)
f”(x) = -sin(x)
f”’(x) = -cos(x)
Evaluating these derivatives at x = 0 gives:
f(0) = 0
f’(0) = 1
f”(0) = 0
f”’(0) = -1
Substituting these values into the Maclaurin series formula gives sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + …
import math
import numpy as np
import matplotlib.pyplot as plt

def sine(x, n, t):
    y_ = 0
    for i in range(n):
        y_ += ((-1)**i) * (t * x**(2*i + 1)) / math.factorial(2*i + 1)
    return y_

x = np.arange(-2*np.pi, 2*np.pi, 0.1)
n, t = 10, 1   # number of terms and scale factor (assumed values)
o = sine(x, n, t)
y = np.sin(x)
error = o - y
for i in range(4):
    z = [sine(r, i, t) for r in x]
    plt.plot(x, z, label=f"terms = {i}")
plt.plot(x, y, "red", label='sin(x)')
plt.plot(x, o, label="Taylor")
plt.plot(x, error, linestyle="dashed", label="error")
plt.xlabel("x axis")
plt.ylabel("y axis")
plt.ylim([-2, 2])
plt.title("Maclaurin sine series")
plt.legend()
plt.grid()
plt.show()

Figure 14: sin(x)
Following a similar process as above, we can derive the Maclaurin series formula
for cos(x).
f(x) = cos(x)
f’(x) = -sin(x)
f”(x) = -cos(x)
f”’(x) = sin(x)
Evaluating these derivatives at x = 0 gives:
f(0) = 1
f’(0) = 0
f”(0) = -1
f”’(0) = 0
Using the Maclaurin series formula, we have cos(x) = 1 − x²/2! + x⁴/4! − x⁶/6! + …
import math
import numpy as np
import matplotlib.pyplot as plt

def cosine(x, n):
    y_ = 0
    for i in range(n):
        y_ += ((-1)**i) * (x**(2*i)) / math.factorial(2*i)
    return y_

x = np.arange(-2*np.pi, 2*np.pi, 0.1)
n = 10   # number of terms (assumed value)
o = cosine(x, n)
y = np.cos(x)
error = o - y
for i in range(1, 5):
    z = [cosine(r, i) for r in x]
    plt.plot(x, z, label=f"terms = {i}")
plt.ylim([-2, 2])
plt.plot(x, y, "red", label='cos(x)')
plt.plot(x, o, "x", label="Taylor")
plt.plot(x, error, linestyle="dashed", label="error")
plt.xlabel("x axis")
plt.ylabel("y axis")
plt.title("Maclaurin cosine series")
plt.legend()
plt.grid()
plt.show()

Figure 15: cos(x)
Maclaurin series of e^x
import math
import numpy as np
import matplotlib.pyplot as plt

def exponent(x, n):
    y_ = 0
    for i in range(n):
        y_ += (x**i) / math.factorial(i)
    return y_

x = np.arange(-2, 2, 0.05)
n = 10   # number of terms (assumed value)
o = exponent(x, n)
y = np.exp(x)
error = o - y
plt.plot(x, y, "red", label='exp(x)')
plt.plot(x, o, "x", label="Taylor")
plt.plot(x, error, "*", label="error")
plt.xlabel("x axis")
plt.ylabel("y axis")
plt.title("Maclaurin series of exp(x)")
plt.legend()
plt.grid()
plt.show()
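A quick numeric check (re-using the same partial sum) shows the truncation error at x = 1 shrinking as terms are added:

```python
import math

def exponent(x, n):
    # partial sum of the Maclaurin series of exp(x)
    return sum(x**i / math.factorial(i) for i in range(n))

# Error of the truncated series at x = 1 against the exact value e
errs = [abs(exponent(1.0, n) - math.e) for n in (2, 4, 8, 12)]
print(errs)  # strictly decreasing
```

With 12 terms the error is already below 10⁻⁸, reflecting the factorial decay of the series coefficients.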
Legendre Polynomials
The Legendre polynomials, denoted by Pn (x), are a family of orthogonal poly-
nomials that arise in various branches of mathematics, especially in the field of ap-
proximation theory and mathematical physics. They are defined by the Rodrigues’
formula:
Pn(x) = 1/(2ⁿ n!) · dⁿ/dxⁿ (x² − 1)ⁿ
where n is a non-negative integer and dⁿ/dxⁿ represents the nth derivative with respect to x.
The Legendre polynomials satisfy the orthogonality condition:
∫₋₁¹ Pm(x) Pn(x) dx = (2/(2n + 1)) δmn
where δmn is the Kronecker delta, defined as δmn = 1 if m = n and δmn = 0 if
m ̸= n.
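This orthogonality condition can be verified numerically; a sketch using `numpy.polynomial.legendre` and a composite trapezoidal rule:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# Evaluate P_2 and P_3 on a fine grid over [-1, 1]
x = np.linspace(-1, 1, 20001)
P2 = Legendre.basis(2)(x)
P3 = Legendre.basis(3)(x)

def trap(vals, grid):
    # composite trapezoidal rule
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid)))

I_23 = trap(P2 * P3, x)   # m != n: should be 0
I_22 = trap(P2 * P2, x)   # m == n: should be 2/(2*2+1) = 0.4
print(I_23, I_22)
```

The computed integrals match 0 and 2/(2n+1) to well within the quadrature error of the grid.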
The Legendre polynomials also satisfy the following recurrence relation, known as Bonnet's recursion formula:
(n + 1) P(n+1)(x) = (2n + 1) x Pn(x) − n P(n−1)(x)
import numpy as np
import matplotlib.pyplot as plt

def legendre_polynomial(x, n):
    # evaluate P_n(x) via Bonnet's recursion
    if n == 0:
        return 1.0
    if n == 1:
        return x
    return ((2*n - 1) * x * legendre_polynomial(x, n - 1)
            - (n - 1) * legendre_polynomial(x, n - 2)) / n

x = np.arange(-1, 1, 0.01)
n = 5   # number of polynomials to plot (assumed value)
legendre_polynomial = np.vectorize(legendre_polynomial)
for i in range(n):
    plt.plot(x, legendre_polynomial(x, i), label=f"Legendre polynomial for n = {i}")
plt.xlabel('X')
plt.ylabel('Y')
plt.title("Legendre Polynomials")
plt.legend(fontsize=10)
plt.grid()
plt.show()
UNIT 4
Lagrange Interpolation
The Lagrange basis polynomials have the property that Li(xi) = 1 and Li(xj) = 0 for j ≠ i. Therefore, the Lagrange interpolation polynomial P(x) passes through all the given data points.
The interpolation error E(x) is defined as the difference between the actual function f(x) and the interpolation polynomial P(x): E(x) = f(x) − P(x).
Figure 18: Legendre polynomials

import numpy as np

# x and y are the lists of data points (read in earlier)
x_arr = np.array(x)
y_arr = np.array(y)
n = len(x_arr)

def f(x_find):
    # evaluate the Lagrange interpolation polynomial at x_find
    sum1 = 0
    for i in range(n):
        term = 1
        for j in range(n):
            if i != j:
                term *= (x_find - x_arr[j]) / (x_arr[i] - x_arr[j])
        term = term * y_arr[i]
        sum1 += term
    return sum1
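As a sanity check, a degree-2 Lagrange interpolant through three points of y = x² reproduces the parabola exactly; a self-contained sketch (the sample points are illustrative assumptions):

```python
import numpy as np

# Three samples of the parabola y = x**2 (assumed data for illustration)
x_arr = np.array([0.0, 1.0, 2.0])
y_arr = x_arr**2
n = len(x_arr)

def lagrange(x_find):
    # sum of Lagrange basis polynomials weighted by the y values
    total = 0.0
    for i in range(n):
        term = y_arr[i]
        for j in range(n):
            if i != j:
                term *= (x_find - x_arr[j]) / (x_arr[i] - x_arr[j])
        total += term
    return total

print(lagrange(1.5))  # 2.25 — exact, since the interpolant and f have the same degree
```

Between the nodes the interpolant matches f(x) = x² with zero error, consistent with E(x) = f(x) − P(x) vanishing when f is itself a polynomial of degree ≤ n − 1.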