
University Institute of Engineering and Technology

Panjab University Chandigarh

DAA Practical File

Submitted to: Ms. Amanpreet Kaur

Submitted by:
Name: Sumit
Roll no: UE218102
Class: B.E. (IT), 3rd year
Sem: 5th
Practical 1:

Details of platform, language and operating system

Operating system - macOS Ventura 13.2.1 (22D68)

Specifications -
Processor: Apple M1

System Type: 64-bit operating system, ARM-based (Apple silicon) processor

RAM: 8.00 GB

Language – Python
Python is a dynamic, interpreted (bytecode-compiled) language. There are no type
declarations of variables, parameters, functions, or methods in source code. This keeps
the code short and flexible, at the cost of compile-time type checking of the source
code.
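
A minimal illustration of this dynamic typing:

x = 42              # x currently refers to an int
x = "forty-two"     # the same name can later refer to a str; no declaration needed
print(type(x))      # <class 'str'>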

Features of Python:
1) Easy to Learn and Use
2) Expressive Language
3) Interpreted Language
4) Cross-platform Language
5) Free and Open Source
6) Object-Oriented Language
7) Extensible
8) Large Standard Library
9) GUI Programming Support
10) Integrated
11) Embeddable
12) Dynamic Memory Allocation

Platform – Jupyter Notebook

The Jupyter Notebook is an open source web application that you can use to create
and share documents that contain live code, equations, visualizations, and text. Jupyter Notebook
is maintained by the people at Project Jupyter.

Jupyter Notebooks are a spin-off of the IPython project, which used to have an IPython
Notebook project of its own. The name Jupyter comes from the core programming languages it
supports: Julia, Python, and R. Jupyter ships with the IPython kernel, which lets you write
your programs in Python, but there are currently over 100 other kernels that you can also use.

Practical 2:
AIM:
Write a program to sort a given set of elements using bubble sort and determine the time
required to sort them. Repeat the experiment for different values of n (the number of
elements in the list to be sorted) and plot a graph of time versus n.
1) Random Number Generator:
import random
random.shuffle(average)

2) Time of Execution:
import time
starttime=time.time()
bubblesort(best)
endtime=time.time()-starttime
bestcase.append(endtime)
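
Note that time.time() has limited resolution for very short runs; a minimal alternative sketch using time.perf_counter (reusing the bubblesort and bestcase names from the snippet above) would be:

import time

start = time.perf_counter()          # monotonic, high-resolution timer
bubblesort(best)
elapsed = time.perf_counter() - start
bestcase.append(elapsed)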

3) Plotting the outcome:


import matplotlib.pyplot as plt
plt.figure(figsize=(7,7))

plt.plot(inputs,bestcase,color='g')
plt.annotate('Worstcase input', xy =(1500, 500))
plt.annotate('Bestcase input', xy =(1600, 10))
plt.annotate('Averagecase input', xy =(1600, 250))
plt.plot(inputs,worstcase,color='r')
plt.plot(inputs,averagecase,color='y')
plt.ylabel('Time(ms)')
plt.xlabel('No. of inputs')
plt.show()

BUBBLE SORT

import matplotlib.pyplot as plt


import random
import timeit
import time
import numpy as np

def bubblesort(arr):
    n = len(arr)
    for i in range(n-1):
        swapped = False
        for j in range(0, n-i-1):
            if arr[j] > arr[j + 1]:
                swapped = True
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
        # if no swaps happened in a full pass, the list is already sorted
        if not swapped:
            return

bestcase=[]
worstcase=[]
averagecase=[]
inputs=list(range(1000,2001,100))
for i in inputs:
    best=list(range(0,i))
    worst=best.copy()
    worst.reverse()
    average=best.copy()
    random.shuffle(average)

    starttime=time.time()
    bubblesort(best)
    endtime=time.time()-starttime
    bestcase.append(endtime)

    starttime2=time.time()
    bubblesort(average)
    endtime2=time.time()-starttime2
    averagecase.append(endtime2)

    starttime3=time.time()
    bubblesort(worst)
    endtime3=time.time()-starttime3
    worstcase.append(endtime3)

# convert seconds to milliseconds
bestcase=np.array(bestcase)*1000
worstcase=np.array(worstcase)*1000
averagecase=np.array(averagecase)*1000

Plotting graph with matplotlib

plt.figure(figsize=(7,7))

plt.plot(inputs,bestcase,color='g')
plt.annotate('Worstcase input', xy =(1500, 500))
plt.annotate('Bestcase input', xy =(1600, 10))
plt.annotate('Averagecase input', xy =(1600, 250))
plt.plot(inputs,worstcase,color='r')
plt.plot(inputs,averagecase,color='y')

plt.ylabel('Time(ms)')
plt.xlabel('No. of inputs')
plt.show()
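
For reference, bubble sort performs at most n(n-1)/2 comparisons, and with the early-exit flag the best case (already sorted input) finishes after a single pass of n-1 comparisons. For n = 2000 the worst case is 2000·1999/2 = 1,999,000 comparisons, so the worst-case curve should grow roughly quadratically while the best-case curve stays near zero.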

Output

[Plot: bubble sort running time (ms) vs. number of inputs for the best, average, and worst cases]
Practical 3:
AIM:
Write a program to search for a given element using the following algorithms and determine
the time required for the search. Repeat the experiment for different values of n (the number
of elements in the list to be searched) and plot a graph of time versus n.
Performance measure of:
1. Linear search
2. Binary search
1) Random Number Generator:
import random
random.shuffle(elements)

2) Time of Execution:
import time
starttime=time.time()
linearsearch(elements,bestvalue)
endtime=time.time()-starttime
bestcase.append(endtime)

3) Plotting the outcome:


import matplotlib.pyplot as plt
plt.figure(figsize=(7,7))

plt.plot(inputs,bestcase,color='g')
plt.annotate('Worstcase input', xy =(1500, 500))
plt.annotate('Bestcase input', xy =(1600, 10))
plt.annotate('Averagecase input', xy =(1600, 250))
plt.plot(inputs,worstcase,color='r')
plt.plot(inputs,averagecase,color='y')
plt.ylabel('Time(ms)')

plt.xlabel('No. of inputs')
plt.show()
Linear Search

import matplotlib.pyplot as plt


import random
import time
import numpy as np
import pandas as pd

def linearsearch(elements,value):
    for i,n in enumerate(elements):
        if n==value:
            return i
    return -1

bestcase=[]
worstcase=[]
averagecase=[]
inputs=list(range(10,10011,100))

for i in inputs:
    elements=list(range(0,i))
    random.shuffle(elements)
    bestvalue=elements[0]                 # found at the first position
    worstvalue=-1                         # not present: full scan
    averagevalue=random.choice(elements)

    starttime=time.time()
    linearsearch(elements,bestvalue)
    endtime=time.time()-starttime
    bestcase.append(endtime)

    starttime2=time.time()
    linearsearch(elements,averagevalue)
    endtime2=time.time()-starttime2
    averagecase.append(endtime2)

    starttime3=time.time()
    linearsearch(elements,worstvalue)
    endtime3=time.time()-starttime3
    worstcase.append(endtime3)

# convert seconds to milliseconds
bestcase=np.array(bestcase)*1000
worstcase=np.array(worstcase)*1000
averagecase=np.array(averagecase)*1000

inputr=inputs[::10]
dicti={'No. of inputs':inputr,
       'time(in ms) bestcase':bestcase[::10],
       'time(in ms) worstcase':worstcase[::10],
       'time(in ms) averagecase':averagecase[::10]}
df = pd.DataFrame(dicti)
df.to_csv('linearsearch.csv')

inputss=np.array(inputs)
plt.figure(figsize=(7,7))

plt.plot(inputss,bestcase,color='g')
plt.annotate('Worstcase input', xy =(5500, 0.6))
plt.annotate('Bestcase input', xy =(6300, 0.3))

plt.annotate('Averagecase input', xy =(6000, 0.03))
plt.plot(inputss,worstcase,color='r')
plt.plot(inputss,averagecase,color='y')
plt.ylabel('Time(ms)')
plt.xlabel('No. of inputs')
plt.show()
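
For reference, linear search is O(n): the best case (target at index 0) takes a single comparison, the worst case (target absent, here the value -1) scans all n elements, and a randomly chosen target needs about n/2 comparisons on average, so the average-case curve should sit roughly halfway between the other two.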

Output

[Plot: linear search running time (ms) vs. number of inputs for the best, average, and worst cases]
Binary Search:

import matplotlib.pyplot as plt


import random
import time
import numpy as np
import math
import pandas as pd

def binarysearch(array, x, low, high):

    while low <= high:

        mid = low + math.floor((high - low)/2)

        if array[mid] == x:
            return mid
        elif array[mid] < x:
            low = mid + 1
        else:
            high = mid - 1

    return -1

bestcase=[]
worstcase=[]
averagecase=[]
inputs=list(range(10,50001,100))

for i in inputs:
    elements=list(range(0,i))               # binary search needs a sorted list, so no shuffle
    bestvalue=elements[math.ceil(i/2)-1]    # middle element: found on the first probe
    worstvalue=i+5                          # not present: maximum number of halvings
    averagevalue=random.choice(elements)

    starttime=time.time()
    binarysearch(elements,bestvalue,0,len(elements)-1)
    endtime=time.time()-starttime
    bestcase.append(endtime)

    starttime2=time.time()
    binarysearch(elements,averagevalue,0,len(elements)-1)
    endtime2=time.time()-starttime2
    averagecase.append(endtime2)

    starttime3=time.time()
    binarysearch(elements,worstvalue,0,len(elements)-1)
    endtime3=time.time()-starttime3
    worstcase.append(endtime3)

# convert seconds to milliseconds
bestcase=np.array(bestcase)*1000
worstcase=np.array(worstcase)*1000
averagecase=np.array(averagecase)*1000

inputr=inputs[::50]
dicti={'No. of inputs':inputr,
       'time(in ms) bestcase':bestcase[::50],
       'time(in ms) worstcase':worstcase[::50],
       'time(in ms) averagecase':averagecase[::50]}
df = pd.DataFrame(dicti)
df.to_csv('binarysearch.csv')

inputss=np.array(inputs)
plt.figure(figsize=(7,7))

plt.plot(inputss,bestcase,color='g')
# plt.annotate('Worstcase input', xy =(5500, 0.6))
# plt.annotate('Bestcase input', xy =(6300, 0.3))
# plt.annotate('Averagecase input', xy =(6000, 0.03))
plt.plot(inputss,worstcase,color='r')
plt.plot(inputss,averagecase,color='y')
plt.ylabel('Time(ms)')
plt.xlabel('No. of inputs')
plt.show()
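
Binary search halves the search interval on every probe, so all cases are O(log n): even for the largest list here (about 50,000 elements) the unsuccessful worst-case search needs at most ceil(log2 50,000) = 16 probes, which is why all three curves should stay essentially flat and close together.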

Output

[Plot: binary search running time (ms) vs. number of inputs for the best, average, and worst cases]
Practical 4:
AIM:
Write a program to sort a given set of elements using the following algorithms and determine
the time required to sort them. Repeat the experiment for different values of n (the number
of elements in the list to be sorted) and plot a graph of time versus n.
Performance measure of:
1. Merge sort
2. Heap sort
3. Selection sort
4. Radix sort

1) Random Number Generator:
import random
random.shuffle(average)

2) Time of Execution:
import time
starttime=time.time()
mergeSort(best)
endtime=time.time()-starttime
bestcase.append(endtime)

3) Plotting the outcome:


import matplotlib.pyplot as plt
plt.figure(figsize=(7,7))

plt.plot(inputs,bestcase,color='g')
plt.annotate('Worstcase input', xy =(1500, 500))
plt.annotate('Bestcase input', xy =(1600, 10))

plt.annotate('Averagecase input', xy =(1600, 250))
plt.plot(inputs,worstcase,color='r')
plt.plot(inputs,averagecase,color='y')
plt.ylabel('Time(ms)')
plt.xlabel('No. of inputs')
plt.show()
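
Before timing, it can be useful to sanity-check each implementation against Python's built-in sorted(); a minimal sketch (assuming the mergeSort, heap_sort, selectionSort and radixSort functions defined later in this practical):

import random

data = random.sample(range(10000), 1000)   # distinct non-negative ints (radix sort needs non-negative keys)
expected = sorted(data)

arr = data.copy(); mergeSort(arr);               assert arr == expected
arr = data.copy(); assert heap_sort(arr) == expected   # heap_sort returns a new list
arr = data.copy(); selectionSort(arr, len(arr)); assert arr == expected
arr = data.copy(); radixSort(arr);               assert arr == expected
print("all four sorts agree with sorted()")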

Merge sort

import matplotlib.pyplot as plt


import random
import time
import numpy as np
import math
import pandas as pd

def mergeSort(arr):
    if len(arr) > 1:

        mid = math.floor(len(arr)/2)

        L = arr[:mid]
        R = arr[mid:]

        mergeSort(L)
        mergeSort(R)

        i = j = k = 0
        while i < len(L) and j < len(R):
            if L[i] <= R[j]:
                arr[k] = L[i]
                i += 1
            else:
                arr[k] = R[j]
                j += 1
            k += 1

        # Checking if any element was left
        while i < len(L):
            arr[k] = L[i]
            i += 1
            k += 1

        while j < len(R):
            arr[k] = R[j]
            j += 1
            k += 1

bestcase=[]
worstcase=[]
averagecase=[]
inputs=list(range(1000,2001,100))
for i in inputs:
    best=list(range(0,i))
    worst=best.copy()
    worst.reverse()
    average=best.copy()
    random.shuffle(average)

    starttime=time.time()
    mergeSort(best)
    endtime=time.time()-starttime
    bestcase.append(endtime)

    starttime2=time.time()
    mergeSort(average)
    endtime2=time.time()-starttime2
    averagecase.append(endtime2)

    starttime3=time.time()
    mergeSort(worst)
    endtime3=time.time()-starttime3
    worstcase.append(endtime3)

# convert seconds to milliseconds
bestcase_array=np.array(bestcase)*1000
worstcase_array=np.array(worstcase)*1000
averagecase_array=np.array(averagecase)*1000

dicti={'No. of inputs':inputs,
       'time(in ms) bestcase':bestcase_array,
       'time(in ms) worstcase':worstcase_array,
       'time(in ms) averagecase':averagecase_array}
df = pd.DataFrame(dicti)
df.to_csv('mergesort.csv')

plt.figure(figsize=(7,7))

plt.plot(inputs,bestcase_array,color='g')
plt.annotate('Worstcase input', xy =(1500, 12))
plt.annotate('Bestcase input', xy =(1600, 9))
plt.annotate('Averagecase input', xy =(1600, 3))
plt.plot(inputs,worstcase_array,color='r')

plt.plot(inputs,averagecase_array,color='y')
plt.ylabel('Time(ms)')
plt.xlabel('No. of inputs')
plt.show()
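
Merge sort always splits the list in half and merges the halves in linear time, so its running time satisfies T(n) = 2T(n/2) + O(n), i.e. O(n log n), regardless of the initial order; the three curves should therefore lie much closer together than for bubble sort.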

Output

[Plot: merge sort running time (ms) vs. number of inputs for the best, average, and worst cases]
Heap sort:

import matplotlib.pyplot as plt


import random
import time
import numpy as np
import math
import pandas as pd
import heapq

def heap_sort(arr):
    # heapify is O(n); each heappop is O(log n), so the whole sort is O(n log n)
    heapq.heapify(arr)
    result = []
    while arr:
        result.append(heapq.heappop(arr))
    return result

bestcase=[]
worstcase=[]
averagecase=[]
inputs=list(range(1000,2001,100))
for i in inputs:
    best=list(range(0,i))
    worst=best.copy()
    worst.reverse()
    average=best.copy()
    random.shuffle(average)

    starttime=time.time()
    heap_sort(best)
    endtime=time.time()-starttime
    bestcase.append(endtime)

    starttime2=time.time()
    heap_sort(average)
    endtime2=time.time()-starttime2
    averagecase.append(endtime2)

    starttime3=time.time()
    heap_sort(worst)
    endtime3=time.time()-starttime3
    worstcase.append(endtime3)

# convert seconds to milliseconds
bestcase_array=np.array(bestcase)*1000
worstcase_array=np.array(worstcase)*1000
averagecase_array=np.array(averagecase)*1000

dicti={'No. of inputs':inputs,
       'time(in ms) bestcase':bestcase_array,
       'time(in ms) worstcase':worstcase_array,
       'time(in ms) averagecase':averagecase_array}
df = pd.DataFrame(dicti)
df.to_csv('heapsort.csv')

plt.figure(figsize=(7,7))

plt.plot(inputs,bestcase_array,color='g')
plt.annotate('Worstcase input', xy =(1500, 3))
plt.annotate('Bestcase input', xy =(1600, 2))
plt.annotate('Averagecase input', xy =(1600, 1))
plt.plot(inputs,worstcase_array,color='r')
plt.plot(inputs,averagecase_array,color='y')

plt.ylabel('Time(ms)')
plt.xlabel('No. of inputs')
plt.show()
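
heapq.heapify builds the heap in O(n) and each of the n heappop calls is O(log n), so heap_sort runs in O(n log n) for every input order. Note that heap_sort returns a new sorted list and empties the list passed in; this does not affect the timing loop above because fresh lists are built for each value of n.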

Output

[Plot: heap sort running time (ms) vs. number of inputs for the best, average, and worst cases]
Selection sort:

import matplotlib.pyplot as plt


import random
import time
import numpy as np
import math
import pandas as pd

def selectionSort(array, size):

    for ind in range(size):
        min_index = ind

        for j in range(ind + 1, size):
            # select the minimum element in every iteration
            if array[j] < array[min_index]:
                min_index = j

        # swapping the elements to sort the array
        (array[ind], array[min_index]) = (array[min_index], array[ind])

bestcase=[]
worstcase=[]
averagecase=[]
inputs=list(range(1000,2001,100))
for i in inputs:
    best=list(range(0,i))
    worst=best.copy()
    worst.reverse()
    average=best.copy()
    random.shuffle(average)

    starttime=time.time()
    selectionSort(best,len(best))
    endtime=time.time()-starttime
    bestcase.append(endtime)

    starttime2=time.time()
    selectionSort(average,len(average))
    endtime2=time.time()-starttime2
    averagecase.append(endtime2)

    starttime3=time.time()
    selectionSort(worst,len(worst))
    endtime3=time.time()-starttime3
    worstcase.append(endtime3)

# convert seconds to milliseconds
bestcase_array=np.array(bestcase)*1000
worstcase_array=np.array(worstcase)*1000
averagecase_array=np.array(averagecase)*1000

dicti={'No. of inputs':inputs,
       'time(in ms) bestcase':bestcase_array,
       'time(in ms) worstcase':worstcase_array,
       'time(in ms) averagecase':averagecase_array}
df = pd.DataFrame(dicti)
df.to_csv('selectionsort.csv')

plt.figure(figsize=(7,7))

plt.plot(inputs,bestcase_array,color='g')

plt.annotate('Worstcase input', xy =(1500, 3))
plt.annotate('Bestcase input', xy =(1600, 2))
plt.annotate('Averagecase input', xy =(1600, 1))
plt.plot(inputs,worstcase_array,color='r')
plt.plot(inputs,averagecase_array,color='y')
plt.ylabel('Time(ms)')
plt.xlabel('No. of inputs')
plt.show()
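
Selection sort scans the whole unsorted suffix to find each minimum, so it performs exactly n(n-1)/2 comparisons regardless of the input order (1,999,000 comparisons for n = 2000); the best-, average- and worst-case curves should therefore almost coincide.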

Output

[Plot: selection sort running time (ms) vs. number of inputs for the best, average, and worst cases]
Radix sort:

import matplotlib.pyplot as plt


import random
import time
import numpy as np
import math
import pandas as pd

def countingSort(arr, exp1):
    # stable counting sort on the digit selected by exp1 (1, 10, 100, ...)
    n = len(arr)
    output = [0] * n
    count = [0] * 10

    for i in range(0, n):
        index = arr[i] // exp1
        count[index % 10] += 1

    for i in range(1, 10):
        count[i] += count[i-1]

    i = n-1
    while i >= 0:
        index = arr[i] // exp1
        output[count[index % 10] - 1] = arr[i]
        count[index % 10] -= 1
        i -= 1

    for i in range(0, len(arr)):
        arr[i] = output[i]

def radixSort(arr):
    # sort by each decimal digit, least significant first
    max1 = max(arr)
    exp = 1
    while max1 // exp > 0:
        countingSort(arr, exp)
        exp *= 10

bestcase=[]
worstcase=[]
averagecase=[]
inputs=list(range(1000,2001,100))

for i in inputs:
    best=list(range(0,i))
    worst=best.copy()
    worst.reverse()
    average=best.copy()
    random.shuffle(average)

    starttime=time.time()
    radixSort(best)
    endtime=time.time()-starttime
    bestcase.append(endtime)

    starttime2=time.time()
    radixSort(average)
    endtime2=time.time()-starttime2
    averagecase.append(endtime2)

    starttime3=time.time()
    radixSort(worst)
    endtime3=time.time()-starttime3
    worstcase.append(endtime3)

# convert seconds to milliseconds
bestcase_array=np.array(bestcase)*1000
worstcase_array=np.array(worstcase)*1000
averagecase_array=np.array(averagecase)*1000

dicti={'No. of inputs':inputs,
       'time(in ms) bestcase':bestcase_array,
       'time(in ms) worstcase':worstcase_array,
       'time(in ms) averagecase':averagecase_array}
df = pd.DataFrame(dicti)
df.to_csv('radixsort.csv')

plt.figure(figsize=(7,7))

plt.plot(inputs,bestcase_array,color='g')
plt.annotate('Worstcase input', xy =(1500, 14))
plt.annotate('Bestcase input', xy =(1600, 5))
plt.annotate('Averagecase input', xy =(1600, 10))
plt.plot(inputs,worstcase_array,color='r')
plt.plot(inputs,averagecase_array,color='y')
plt.ylabel('Time(ms)')
plt.xlabel('No. of inputs')
plt.show()
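
Radix sort runs in O(d·(n + k)) time, where d is the number of decimal digits and k = 10 is the radix; the values here are below 2000, so each list needs at most d = 4 counting-sort passes and the running time is essentially independent of the initial order.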

Output

[Plot: radix sort running time (ms) vs. number of inputs for the best, average, and worst cases]
PRACTICAL – 5
AIM: Write a program to find optimal solution for Fractional Knapsack
problem
The basic idea of the greedy approach is to compute the ratio value/weight for each item
and sort the items by this ratio in decreasing order. Then repeatedly take the item with the
highest ratio, adding as much of it as the remaining capacity allows (the whole item or a
fraction of it). This always yields the maximum profit because, at each step, it adds the item
that gives the largest possible profit per unit of weight.
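
For the instance solved below (weights [6, 7, 3], values [110, 120, 2], capacity 10), the value/weight ratios are 110/6 ≈ 18.3, 120/7 ≈ 17.1 and 2/3 ≈ 0.7. The first item is taken whole (profit 110, remaining capacity 4), then 4/7 of the second item adds int(120 · 4/7) = 68, for the printed total of 178; the third item is never reached.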

CODE:

class Solution:
    def solve(self, weights, values, capacity):
        profit = 0
        # consider items in decreasing order of value/weight ratio
        for pair in sorted(zip(weights, values), key=lambda x: -x[1]/x[0]):
            if not capacity:
                break
            if pair[0] > capacity:
                # take only the fraction that still fits
                profit += int(pair[1] / (pair[0] / capacity))
                capacity = 0
            else:
                profit += pair[1]
                capacity -= pair[0]
        return int(profit)

ob = Solution()
weights = [6, 7, 3]
values = [110, 120, 2]
capacity = 10
print(ob.solve(weights, values, capacity))

OUTPUT:

178

AIM: Write a program to find a minimum spanning tree using Prim’s Algorithm


The algorithm starts with an empty spanning tree. The idea is to maintain two sets of
vertices: the first set contains the vertices already included in the MST, and the other set
contains the vertices not yet included. At every step, it considers all the edges that connect
the two sets and picks the minimum-weight edge among them. After picking the edge, it
moves the edge's other endpoint into the set of MST vertices.
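
On the 5-vertex graph used in the code below, starting from vertex 0, the algorithm picks the edges 0-1 (weight 2), 1-2 (3), 1-4 (5) and 0-3 (6), which is exactly the edge list printed in the output, for a total MST weight of 2 + 3 + 5 + 6 = 16.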

CODE:
import sys

class Graph():
    def __init__(self, vertices):
        self.V = vertices
        self.graph = [[0 for column in range(vertices)]
                      for row in range(vertices)]

    def printMST(self, parent):
        print("Edge \tWeight")
        for i in range(1, self.V):
            print(parent[i], "-", i, "\t", self.graph[i][parent[i]])

    def minKey(self, key, mstSet):
        # find the vertex not yet in the MST with the smallest key
        min = sys.maxsize
        min_index = -1
        for v in range(self.V):
            if key[v] < min and mstSet[v] == False:
                min = key[v]
                min_index = v
        return min_index

    def primMST(self):
        key = [sys.maxsize] * self.V
        parent = [None] * self.V
        key[0] = 0
        mstSet = [False] * self.V

        parent[0] = -1

        for count in range(self.V):
            u = self.minKey(key, mstSet)
            mstSet[u] = True

            # update keys of vertices adjacent to u that are not yet in the MST
            for v in range(self.V):
                if self.graph[u][v] > 0 and mstSet[v] == False \
                        and key[v] > self.graph[u][v]:
                    key[v] = self.graph[u][v]
                    parent[v] = u

        self.printMST(parent)

if __name__ == '__main__':
    g = Graph(5)
    g.graph = [[0, 2, 0, 6, 0],
               [2, 0, 3, 8, 5],
               [0, 3, 0, 0, 7],
               [6, 8, 0, 0, 9],
               [0, 5, 7, 9, 0]]
    g.primMST()

OUTPUT:

Edge    Weight
0 - 1   2
1 - 2   3
0 - 3   6
1 - 4   5

PRACTICAL – 6

AIM: Write a program to find a minimum spanning tree using Kruskal’s Algorithm
Kruskal’s algorithm sorts all edges of the given graph in increasing order of weight. It then
keeps adding edges (and their endpoints) to the MST as long as the newly added edge does not
form a cycle. It considers the minimum-weight edge first and the maximum-weight edge last,
so it makes a locally optimal choice at each step in order to find the globally optimal
solution. Hence this is a greedy algorithm.
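
On the 4-vertex graph used in the code below, the edges sorted by weight are 2-3 (4), 0-3 (5), 0-2 (6), 0-1 (10), 1-3 (15). Edges 2-3 and 0-3 are accepted, 0-2 is rejected because vertices 0, 2 and 3 are already connected, and 0-1 is accepted, giving an MST of total weight 4 + 5 + 10 = 19.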

CODE:

class Graph:
    def __init__(self, vertices):
        self.V = vertices
        self.graph = []

    def addEdge(self, u, v, w):
        self.graph.append([u, v, w])

    def find(self, parent, i):
        # find the root of i's set, with path compression
        if parent[i] != i:
            parent[i] = self.find(parent, parent[i])
        return parent[i]

    def union(self, parent, rank, x, y):
        # union by rank
        if rank[x] < rank[y]:
            parent[x] = y
        elif rank[x] > rank[y]:
            parent[y] = x
        else:
            parent[y] = x
            rank[x] += 1

    def KruskalMST(self):
        result = []
        i = 0
        e = 0
        # consider edges in increasing order of weight
        self.graph = sorted(self.graph, key=lambda item: item[2])
        parent = []
        rank = []
        for node in range(self.V):
            parent.append(node)
            rank.append(0)
        while e < self.V - 1:
            u, v, w = self.graph[i]
            i = i + 1
            x = self.find(parent, u)
            y = self.find(parent, v)
            # accept the edge only if it does not create a cycle
            if x != y:
                e = e + 1
                result.append([u, v, w])
                self.union(parent, rank, x, y)
        minimumCost = 0
        print("Edges in the constructed MST")
        for u, v, weight in result:
            minimumCost += weight
            print("%d -- %d == %d" % (u, v, weight))
        print("Minimum Spanning Tree", minimumCost)

if __name__ == '__main__':
    g = Graph(4)
    g.addEdge(0, 1, 10)
    g.addEdge(0, 2, 6)
    g.addEdge(0, 3, 5)
    g.addEdge(1, 3, 15)
    g.addEdge(2, 3, 4)
    g.KruskalMST()

OUTPUT:
Edges in the constructed MST
2 -- 3 == 4
0 -- 3 == 5
0 -- 1 == 10
Minimum Spanning Tree 19

AIM: Write a program to implement Dijkstra’s Algorithm


The idea is to generate a SPT (shortest-path tree) with the given source as the root. Maintain
two sets of vertices:
• one set contains vertices already included in the shortest-path tree,
• the other set contains vertices not yet included in the shortest-path tree.
At every step of the algorithm, pick the vertex from the second set that has the minimum
distance from the source. The implementation below stores the graph as an adjacency list and
uses a min-heap (priority queue) to pick that vertex efficiently.
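
With the adjacency-list and min-heap implementation, Dijkstra’s algorithm runs in O((V + E) log V) time. As a sanity check on the printed distances for the graph below: vertex 7 is reached directly through edge 0-7 of weight 8, while vertex 8 is cheapest via 0-1-2-8 with cost 4 + 8 + 2 = 14 (better than 0-7-8, which costs 8 + 7 = 15).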

CODE:

import heapq

class Graph:
    def __init__(self, V: int):
        self.V = V
        self.adj = [[] for _ in range(V)]   # adjacency list: adj[u] = list of (v, weight)

    def addEdge(self, u: int, v: int, w: int):
        self.adj[u].append((v, w))
        self.adj[v].append((u, w))

    def shortestPath(self, src: int):
        pq = []
        heapq.heappush(pq, (0, src))
        dist = [float('inf')] * self.V
        dist[src] = 0

        while pq:
            d, u = heapq.heappop(pq)
            # relax all edges leaving u
            for v, weight in self.adj[u]:
                if dist[v] > dist[u] + weight:
                    dist[v] = dist[u] + weight
                    heapq.heappush(pq, (dist[v], v))

        print("Vertex \t Distance from Source")
        for i in range(self.V):
            print(f"{i} \t\t {dist[i]}")

if __name__ == "__main__":
    V = 9
    g = Graph(V)
    g.addEdge(0, 1, 4)
    g.addEdge(0, 7, 8)
    g.addEdge(1, 2, 8)
    g.addEdge(1, 7, 11)
    g.addEdge(2, 3, 7)
    g.addEdge(2, 8, 2)
    g.addEdge(2, 5, 4)
    g.addEdge(3, 4, 9)
    g.addEdge(3, 5, 14)
    g.addEdge(4, 5, 10)
    g.addEdge(5, 6, 2)
    g.addEdge(6, 7, 1)
    g.addEdge(6, 8, 6)
    g.addEdge(7, 8, 7)
    g.shortestPath(0)

OUTPUT:

Vertex   Distance from Source
0        0
1        4
2        12
3        19
4        21
5        11
6        9
7        8
8        14

