University School of Automation & Robotics

Analysis and Design of Algorithms
(ARM-254)

Submitted to: Mr. Amrit Pal Singh (Assistant Professor)
Submitted by: Uday Bari
Enrollment No.: 07819011921
Batch: AI&DS B2A

LIST OF PRACTICALS
ANALYSIS AND DESIGN OF ALGORITHMS (ARM – 254)

(Columns in the record: Lab Sheet | S.No. | Topic | Signature & Marks)

Lab Sheet 1
1. Revise pseudocode for sorting an array (int, float, or char type) using the following sorting techniques:
   ● Selection sort
   ● Bubble sort
   ● Merge sort (Recursive)
   ● Quick sort (Recursive)
   A. Plot the complexity chart for n = 10 to 100.
   B. Analyze their complexities in best case, average case and worst case.
2. Revise pseudocode for searching within an array (int, float, or char type) using the following searching techniques:
   ● Linear Search
   ● Binary Search
   A. Plot the complexity chart for n = 10 to 100.
   B. Analyze their complexities in best case, average case and worst case.
3. You have been given two sorted lists of size M and N. It is desired to find the Kth smallest element out of the M+N elements of both lists. Propose and implement an efficient algorithm to accomplish the task. Further, propose and implement an efficient algorithm to accomplish the task considering that elements in both lists are unsorted.
4. You are given a list of n-1 integers and these integers are in the range of 1 to n. There are no duplicates in the list. One of the integers is missing from the list. Write efficient code to find the missing integer.
5. You have been given a sorted array ARR (of size M, where M is very large) of two elements, 0 and 1. It is desired to compute the count of 0s in the array ARR. Propose and implement an efficient algorithm to accomplish the task.

Lab Sheet 2
6. Let there be an array of N random elements. We need to sort this array in ascending order. If N is very large (i.e. N = 1,00,000) then Quicksort may be considered the fastest algorithm to sort this array. However, we can further optimize its performance by hybridizing it with insertion sort. Therefore, if n is small (i.e. n <= 10) then we apply insertion sort to the array, otherwise Quick Sort is applied. Implement the above-discussed hybridized Quick Sort and compare the running time of normal Quick Sort and hybridized Quick Sort. Run each type of sorting 10 times on a random set of inputs and compare the average time returned by these algorithms.

Lab Sheet 3
7. Implement Strassen's multiplication method (using Divide and Conquer Strategy) and the naive multiplication method. Compare these methods in terms of time taken using the nXn matrix where n = 3, 4, 5, 6, 7 and 8 (compare in bar graph).
8. Implement the multiplication of two N-bit numbers (using Divide and Conquer Strategy) and the naive multiplication method. Compare these methods in terms of time taken using N-bit numbers where N = 4, 8, 16, 32 and 64.
9. Maximum Value Contiguous Subsequence: Given a sequence of n numbers A(1) ... A(n), give an algorithm for finding a contiguous subsequence A(i) ... A(j) for which the sum of elements in the subsequence is maximum. Example: {-2, 11, -4, 13, -5, 2} → 20 and {1, -3, 4, -2, -1, 6} → 7.
10. Implement the algorithm (Algo_1) presented below and discuss which task this algorithm performs. Also, analyze the time complexity and space complexity of the given algorithm. Further, implement the algorithm with the following modification: replace m = ⌈2n/3⌉ with m = ⌊2n/3⌋, and compare the tasks performed by the given algorithm and the modified algorithm.

Lab Sheet 4
11. Implement LCS algorithm for A[1 .. n] and B[1 .. l] sequences.
12. Given an array A[1 .. n] of integers, compute the length of a longest increasing subsequence. A sequence B[1 .. l] is increasing if B[i] > B[i − 1] for every index i ≥ 2. For example, given the array ⟨3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4, 6, 2, 7⟩.
13. Given an array A[1 .. n] of integers, compute the length of a longest alternating subsequence. A sequence B[1 .. l] is alternating if B[i] < B[i − 1] for every even index i ≥ 2, and B[i] > B[i − 1] for every odd index i ≥ 3. For example, given the array ⟨3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4, 6, 2, 7⟩.
14. Given an array A[1 .. n], compute the length of a longest palindrome subsequence of A. Recall that a sequence B[1 .. l] is a palindrome if B[i] = B[l − i + 1] for every index i.
15. Given an array A[1 .. n] of integers, compute the length of a longest convex subsequence of A. A sequence B[1 .. l] is convex if B[i] − B[i − 1] > B[i − 1] − B[i − 2] for every index i ≥ 3.

Lab Sheet 5
16. Implement MCM algorithm for the given n matrices <M1 x M2 x ... x Mn> where the size of matrix Mi is d(i-1) x d(i).
17. Implement OBST for given n keys (K1, K2, ..., Kn) whose pi and qi (dummy keys) are given.
18. Implement 0/1 Knapsack problem using dynamic programming.

Lab Sheet 6
19. Wap to Implement breadth first search algorithm for given graph G.
20. Wap to Implement depth first search algorithm for given graph G.
21. Wap to Implement topological sorting.
22. Wap to find the strongly connected components in a Graph.

Lab Sheet 7
23. Wap to Implement Prim's algorithm for given graph G.
24. Wap to Implement Kruskal's algorithm for given graph G.
25. Wap to Implement Dijkstra's algorithm to find single source shortest path.
LAB SHEET 1
Enrollment No.: 07819011921
Name: Uday Bari
Batch: AI_DS B2A

Q1. Revise pseudocode for sorting an array (int, float, or char type) using the following sorting techniques:

● Selection sort

● Bubble sort

● Merge sort (Recursive)

● Quick sort (Recursive)

A. Plot the complexity chart for n=10 to 100.

B. Analyse their complexities in best case, average case and worst case.

Source Code:

#labsheet1
#07819011921
#udaybari AI_DS B2A

import random
import time
import matplotlib.pyplot as plt
print("07819011921 udaybari")
#SELECTION SORT
def selsort(array):
    n=len(array)
    for i in range(n):
        min_idx=i
        for j in range(i+1,n):
            if array[j]<array[min_idx]:
                min_idx=j
        array[i],array[min_idx]=array[min_idx],array[i]

matrix1=[]
x1=[]
for i in range(1000,5000,100):
    li=range(99999)
    amount=i
    x1.append(i)
    arr=[random.choice(li) for _ in range(amount)]
    start=time.time()
    selsort(arr)
    end=time.time()
    final=(end-start)
    matrix1.append(final)
plt.plot(x1,matrix1,label="SELECTION SORT",color="y")
plt.legend()
#INSERTION SORT
def insort(arr):
    size=len(arr)
    for i in range(1,size):
        key = arr[i]
        j = i-1
        while j >= 0 and key < arr[j]:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key

matrix2=[]
x2=[]
for i in range(1000,5000,100):
    li=range(99999)
    amount=i
    x2.append(i)
    arr=[random.choice(li) for _ in range(amount)]
    start=time.time()
    insort(arr)
    end=time.time()
    final=(end-start)
    matrix2.append(final)

plt.plot(x2,matrix2,label="INSERTION SORT",color="g")
plt.legend()

#BUBBLE SORT
def bubsort(arr):
    size=len(arr)
    for i in range(size-1):
        for j in range(size-i-1):
            if arr[j]>arr[j+1]:
                arr[j],arr[j+1]=arr[j+1],arr[j]

matrix3=[]
x3=[]
for i in range(1000,5000,100):
    li=range(99999)
    amount=i
    x3.append(i)
    arr=[random.choice(li) for _ in range(amount)]
    start=time.time()
    bubsort(arr)
    end=time.time()
    final=end-start
    matrix3.append(final)

plt.plot(x3,matrix3,label="BUBBLE SORT",color="r")
plt.legend()

#QUICK SORT
def partition(array,low,high):
    pivot=array[high]
    i=low-1
    for j in range(low, high):
        if array[j] <= pivot:
            i=i+1
            array[i], array[j] = array[j], array[i]
    array[i + 1], array[high] = array[high], array[i + 1]
    return i+1

def quick(array,low,high):
    if low<high:
        pi=partition(array,low,high)
        quick(array,low,pi-1)
        quick(array,pi+1,high)

x,y=[],[]
l=range(99999)
for j in range(1000,5000,100):
    amount=j                       # grow the input size with j, matching the other sorts
    arr=[random.choice(l) for _ in range(amount)]
    start=time.time()
    quick(arr,0,amount-1)
    end=time.time()
    y.append(end-start)
    x.append(j)
plt.plot(x,y,label="QUICK SORT",color="purple")
plt.legend()

#MERGE SORT
def merge(arr, l, m, r):
    n1 = m - l + 1
    n2 = r - m
    L = [0] * (n1)
    R = [0] * (n2)
    for i in range(0, n1):
        L[i] = arr[l + i]
    for j in range(0, n2):
        R[j] = arr[m + 1 + j]
    i=0
    j=0
    k=l
    while i < n1 and j < n2:
        if L[i] <= R[j]:
            arr[k] = L[i]
            i += 1
        else:
            arr[k] = R[j]
            j += 1
        k += 1
    while i < n1:
        arr[k] = L[i]
        i += 1
        k += 1
    while j < n2:
        arr[k] = R[j]
        j += 1
        k += 1

def mergeSort(arr, l, r):
    if l < r:
        m = (l + r) // 2
        mergeSort(arr, l, m)
        mergeSort(arr, m + 1, r)
        merge(arr, l, m, r)

matrix4=[]
x4=[]
for i in range(1000,5000,100):
    li=range(99999)
    amount=i
    x4.append(i)
    arr=[random.choice(li) for _ in range(amount)]
    start=time.time()
    mergeSort(arr,0,amount-1)
    end=time.time()
    matrix4.append(end-start)

plt.plot(x4,matrix4,label="MERGE SORT",color="b")
plt.legend()
plt.xlabel("Input size (n)")
plt.ylabel("Time (s)")
plt.show()
Analysis : -
The time complexities of the sorting algorithms in the code are as follows:

1. Selection Sort: O(n^2) in the best, average and worst case, where n is the number of elements in the array, since the scan for the minimum is performed regardless of input order.
2. Insertion Sort: O(n) in the best case (already sorted input), and O(n^2) in the average and worst case.
3. Bubble Sort: O(n^2) in the average and worst case; with an early-exit "no swaps" check it can reach O(n) in the best case.
4. Quick Sort: O(n log n) in the best and average case; O(n^2) in the worst case (for example, an already sorted array with a last-element pivot).
5. Merge Sort: O(n log n) in the best, average and worst case.
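For reference, the O(n log n) bounds can be read off the standard recurrences: merge sort satisfies T(n) = 2T(n/2) + O(n), which solves to O(n log n); quicksort with balanced partitions satisfies the same recurrence, while its worst case T(n) = T(n-1) + O(n) sums to O(n^2).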


Q2. Revise pseudocode for searching within an array (int, float, or char type) using the following searching techniques:

● Linear Search
● Binary Search

A. Plot the complexity chart for n=10 to 100.

B. Analyse their complexities in best case, average case and worst case

Source Code:

import timeit
import matplotlib.pyplot as plt
import random
print ("07819011921 uday bari")
def linearSearch(arr, x):
    for i in range(len(arr)):
        if arr[i] == x:
            return i
    return -1

def binarySearch(arr, x):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] < x:
            low = mid + 1
        elif arr[mid] > x:
            high = mid - 1
        else:
            return mid
    return -1

linear_search_times = []
binary_search_times = []

for n in range(10, 105):
    arr = sorted([random.randint(1, n) for i in range(n)])
    x = random.randint(1, n)

    linear_search_time = timeit.timeit(lambda: linearSearch(arr, x), number=1000)
    binary_search_time = timeit.timeit(lambda: binarySearch(arr, x), number=1000)

    linear_search_times.append(linear_search_time)
    binary_search_times.append(binary_search_time)

plt.plot(range(10, 105), linear_search_times, label='Linear Search')


plt.plot(range(10, 105), binary_search_times, label='Binary Search')
plt.xlabel('Size of Array (n)')
plt.ylabel('Execution Time (in seconds)')
plt.title('Complexity Chart for Linear Search and Binary Search')
plt.legend()
plt.show()
Analysis : -
The time complexities of the search algorithms in the code are as follows:

Linear Search: O(n) in the average and worst case, where n is the number of elements in the array, and O(1) in the best case (the key is the first element). In the code, linear search is performed 1000 times for each array size ranging from 10 to 104.

Binary Search: O(log n) in the average and worst case, where n is the number of elements in the sorted array, and O(1) in the best case (the key is at the middle index). In the code, binary search is performed 1000 times for each array size ranging from 10 to 104; note that the array is sorted before performing the binary search.
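As a brief justification of the logarithmic bound: each comparison in binarySearch halves the remaining interval, so the work satisfies T(n) = T(n/2) + O(1), which solves to O(log n).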

Q3. You have been given two sorted lists of size M and N. It is desired to find the
Kth smallest element out of M+N elements of both lists. Propose and implement an
efficient algorithm to accomplish the task. Further, propose and implement an
efficient algorithm to accomplish the task considering that elements in both lists are
unsorted.
Source Code:

def find_kth_sorted(a, b, k):
    m, n = len(a), len(b)
    i, j = 0, 0

    # walk the two sorted lists in merge order, counting how many
    # elements have been consumed in total (i + j)
    while i < m and j < n:
        if a[i] < b[j]:
            if i + j + 1 == k:
                return a[i]
            i += 1
        else:
            if i + j + 1 == k:
                return b[j]
            j += 1

    while i < m:
        if i + j + 1 == k:
            return a[i]
        i += 1

    while j < n:
        if i + j + 1 == k:
            return b[j]
        j += 1

a = [3, 5, 7, 8, 10, 13, 15, 25, 49, 50, 69]
b = [1, 2, 5, 6, 10, 19, 35, 45, 64, 75, 87]
k = 11
print(find_kth_sorted(a, b, k))
Analysis : -
The time complexity of the find_kth_sorted function in the code is O(m + n),

where m and n are the lengths of arrays a and b, respectively.
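For the second part of the task (both lists unsorted), one simple approach, sketched below rather than taken from the submitted listing, is to scan both lists once while keeping a max-heap of the k smallest values seen so far; this runs in O((M+N) log k) time. The sample lists and k are illustrative values.

import heapq

def find_kth_unsorted(a, b, k):
    # max-heap (stored as negated values) holding the k smallest elements seen so far
    heap = []
    for x in a + b:
        if len(heap) < k:
            heapq.heappush(heap, -x)
        elif -heap[0] > x:
            heapq.heapreplace(heap, -x)   # drop the current largest of the k, keep x
    return -heap[0]

print(find_kth_unsorted([7, 3, 15, 8], [2, 20, 5], 4))   # 4th smallest of the combined lists: 7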

Q4. You are given a list of n-1 integers and these integers are in the range of 1 to n. There are no duplicates in the list. One of the integers is missing from the list. Write efficient code to find the missing integer.

Source Code:
print("07819011921 udaybari")
def missing_num(arr,n):
sum = (n * (n+1))/2
sum2 = 0
for i in arr:
sum2 += i
return sum - sum2

array = [1,2,3,4,5,7]
print(missing_num(array,len(array) + 1))
Analysis : -

The time complexity of the missing_num function in the code is O(n), where n is the
length of the array.
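As a worked instance of the formula: for the example array [1, 2, 3, 4, 5, 7] we have n = 7, so the expected sum is 7·8/2 = 28, the actual sum is 22, and the missing integer is 28 − 22 = 6.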

Q5 You have been given a sorted array ARR (of size M, where M is very large) of two
elements, 0 and 1. It is desired to compute the count of 0s in the array ARR. Propose and
implement an efficient algorithm to accomplish the task.

Source Code:
print("07819011921 udaybari")
def BinarySearch(array):
    s = 0
    e = len(array) - 1
    soln = e + 1                  # index of the first 1, which equals the count of 0s
    while s <= e:
        mid = s + (e - s) // 2
        if array[mid] == 1:
            soln = mid
            e = mid - 1
        else:
            s = mid + 1
    return soln

arr = [0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1]
print(BinarySearch(arr))

Analysis : -

The time complexity of the BinarySearch function in the code is O(log n), where n is the length of the array. The function returns the index of the first 1; since the array is sorted, this index equals the number of 0s in the array.
LAB SHEET 2
Enrollment No.: 07819011921
Name: Uday Bari
Batch: AI_DS B2A

Q) Let there be an array of N random elements. We need to sort this array in ascending order. If N is very large (i.e. N = 1,00,000) then Quicksort may be considered the fastest algorithm to sort this array. However, we can further optimize its performance by hybridizing it with insertion sort. Therefore, if n is small (i.e. n <= 10) then we apply insertion sort to the array, otherwise Quick Sort is applied. Implement the above-discussed hybridized Quick Sort and compare the running time of normal Quick Sort and hybridized Quick Sort. Run each type of sorting 10 times on a random set of inputs and compare the average time returned by these algorithms.

Source code:
import time
import random
import numpy as np
import matplotlib.pyplot as plt

def insertion_sort(arr, low, n):
    for i in range(low + 1, n + 1):
        val = arr[i]
        j = i
        while j > low and arr[j-1] > val:
            arr[j] = arr[j-1]
            j -= 1
        arr[j] = val

def partition(arr, low, high):
    pivot = arr[high]
    i = j = low
    for i in range(low, high):
        if arr[i] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            j += 1
    arr[j], arr[high] = arr[high], arr[j]
    return j

def quick_sort(arr, low, high):
    if low < high:
        pivot = partition(arr, low, high)
        quick_sort(arr, low, pivot - 1)
        quick_sort(arr, pivot + 1, high)
    return arr

def condition_sort(arr, low, high):
    while low < high:
        if high - low + 1 < 10:
            insertion_sort(arr, low, high)
            break
        else:
            pivot = partition(arr, low, high)
            if pivot - low < high - pivot:
                condition_sort(arr, low, pivot - 1)
                low = pivot + 1
            else:
                condition_sort(arr, pivot + 1, high)
                high = pivot - 1

l = range(10000)
amount = 0

print("udaybari 07819011921")
x = []
y = []

for j in range(10, 3000, 100):
    amount += j
    arr = [random.choice(l) for _ in range(amount)]
    start_time = time.time()
    condition_sort(arr, 0, amount - 1)
    end_time = time.time()
    result = end_time - start_time
    final = result * 100
    y.append(final)
    x.append(j)

plt.plot(x, y)
plt.show()
Analysis : -
The time complexity of the hybridized quick sort ranges from O(n log n) to O(n^2), depending on the input and the pivot selection; switching to insertion sort for subarrays of fewer than 10 elements reduces the constant overhead without changing the asymptotic bound.

The hybrid's auxiliary stack space is O(log n), because it recurses only into the smaller partition, whereas plain quick sort can use O(n) stack space in the worst case.
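The question also asks for a comparison of normal Quick Sort against the hybridized version, averaged over 10 runs; this is not timed in the listing above. A minimal sketch using the functions defined there is given below (the input size of 100000 elements is an assumption taken from the problem statement).

import random
import sys
import time

sys.setrecursionlimit(100000)        # guard against deep recursion in plain quick_sort

runs, n = 10, 100000
plain_total = hybrid_total = 0.0
for _ in range(runs):
    data = [random.randrange(1000000) for _ in range(n)]
    a, b = data[:], data[:]          # identical inputs for both sorts

    t0 = time.time()
    quick_sort(a, 0, n - 1)
    plain_total += time.time() - t0

    t0 = time.time()
    condition_sort(b, 0, n - 1)
    hybrid_total += time.time() - t0

print("Average time, normal Quick Sort :", plain_total / runs)
print("Average time, hybridized sort   :", hybrid_total / runs)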

LAB SHEET 3

Name: Uday Bari
Enrollment No.: 07819011921
Batch: AI_DS B2A

Q1. Implement Strassen’s multiplication method (using Divide and Conquer Strategy) and the naive multiplication method. Compare these methods in terms of time taken using the nXn matrix where n = 3, 4, 5, 6, 7 and 8 (compare in bar graph).

Source Code:
import numpy as np
import time

def pad_matrix(matrix):
    n = matrix.shape[0]
    next_power_of_2 = 2**int(np.ceil(np.log2(n)))
    padded_matrix = np.zeros((next_power_of_2, next_power_of_2))
    padded_matrix[:n, :n] = matrix
    return padded_matrix

def unpad_matrix(matrix, original_size):
    return matrix[:original_size, :original_size]

def naive_mult(a, b):
    n = a.shape[0]
    c = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i, j] += a[i, k] * b[k, j]
    return c

def strassen_mult(a, b):
    n = a.shape[0]
    if n <= 64:
        return naive_mult(a, b)
    else:
        original_size = n            # remember the size before padding
        a = pad_matrix(a)
        b = pad_matrix(b)
        n = a.shape[0]

        a11 = a[:n//2, :n//2]
        a12 = a[:n//2, n//2:]
        a21 = a[n//2:, :n//2]
        a22 = a[n//2:, n//2:]
        b11 = b[:n//2, :n//2]
        b12 = b[:n//2, n//2:]
        b21 = b[n//2:, :n//2]
        b22 = b[n//2:, n//2:]

        p1 = strassen_mult(a11+a22, b11+b22)
        p2 = strassen_mult(a21+a22, b11)
        p3 = strassen_mult(a11, b12-b22)
        p4 = strassen_mult(a22, b21-b11)
        p5 = strassen_mult(a11+a12, b22)
        p6 = strassen_mult(a21-a11, b11+b12)
        p7 = strassen_mult(a12-a22, b21+b22)

        c11 = p1 + p4 - p5 + p7
        c12 = p3 + p5
        c21 = p2 + p4
        c22 = p1 - p2 + p3 + p6

        c = np.zeros((n, n))
        c[:n//2, :n//2] = c11
        c[:n//2, n//2:] = c12
        c[n//2:, :n//2] = c21
        c[n//2:, n//2:] = c22

        c = unpad_matrix(c, original_size)
        return c

ns = [3, 4, 5, 6, 7, 8]
naive_times = []
strassen_times = []

for n in ns:
    A = np.random.randint(10, size=(n, n))
    B = np.random.randint(10, size=(n, n))

    start_time = time.time()
    C1 = naive_mult(A, B)
    naive_times.append(time.time() - start_time)

    start_time = time.time()
    C2 = strassen_mult(A, B)
    strassen_times.append(time.time() - start_time)

    assert np.array_equal(C1, C2)

import matplotlib.pyplot as plt

X_axis = np.arange(len(ns))

plt.bar(X_axis - 0.2, naive_times, 0.4, label='Naive multiplication')
plt.bar(X_axis + 0.2, strassen_times, 0.4, label='Strassen multiplication')
plt.xticks(X_axis, ns)
plt.xlabel('n')
plt.ylabel('Time taken (s)')
plt.title('Comparison of Naive and Strassen Methods for Matrix Multiplication')
plt.legend()
plt.show()
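Analysis : -
In theory, the naive triple loop costs O(n^3), while Strassen's method performs 7 half-size multiplications plus O(n^2) additions, giving the recurrence T(n) = 7T(n/2) + O(n^2), which solves to O(n^log2 7) ≈ O(n^2.81). For the sizes tested here (n = 3 to 8), the n <= 64 cutoff in strassen_mult means the naive routine is invoked in both cases, so the measured times are expected to be nearly identical.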
Q2. Implement the multiplication of two N-bit numbers (using Divide and Conquer Strategy) and the naive multiplication method. Compare these methods in terms of time taken using N-bit numbers where N = 4, 8, 16, 32 and 64.

Source Code:

def karatsuba(x, y):
    if x < 10 or y < 10:
        return x * y

    m = max(len(str(x)), len(str(y)))
    m2 = m // 2

    high1, low1 = x // 10**m2, x % 10**m2
    high2, low2 = y // 10**m2, y % 10**m2

    z0 = karatsuba(low1, low2)
    z1 = karatsuba((low1 + high1), (low2 + high2))
    z2 = karatsuba(high1, high2)

    return (z2 * 10**(2*m2)) + ((z1 - z2 - z0) * 10**(m2)) + z0

def naive_multiply(x, y):
    result = 0
    while y:
        result += x * (y % 10)
        y //= 10
        x *= 10
    return result

import timeit

for n in [4, 8, 16, 32, 64]:
    x = int('9' * n)
    y = int('8' * n)

    karatsuba_time = timeit.timeit(lambda: karatsuba(x, y), number=1000)
    naive_time = timeit.timeit(lambda: naive_multiply(x, y), number=1000)

    print(f"N = {n}")
    print(f"Karatsuba time: {karatsuba_time:.6f} seconds")
    print(f"Naive multiplication time: {naive_time:.6f} seconds\n")
Analysis : -

The time complexity of the karatsuba function turns out to be O(n^log2(3))


The purpose of this code is to compare the performance of Karatsuba multiplication with the naive
multiplication algorithm for large numbers. It demonstrates that Karatsuba multiplication is more
efficient in terms of execution time, especially for larger input sizes.
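The exponent comes from the recurrence T(n) = 3T(n/2) + O(n): Karatsuba replaces the four half-size multiplications of the schoolbook split with three, so the recurrence solves to O(n^log2 3) ≈ O(n^1.585), compared with O(n^2) for digit-by-digit multiplication.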

Q3. Maximum Value Contiguous Subsequence: Given a sequence of n numbers A(1) ... A(n), give an algorithm for finding a contiguous subsequence A(i) ... A(j) for which the sum of elements in the subsequence is maximum. Example: {-2, 11, -4, 13, -5, 2} → 20 and {1, -3, 4, -2, -1, 6} → 7.

Source Code:
def max_contig_sum(seq):
    n = len(seq)
    max_ending_here = max_so_far = seq[0]
    start = end = 0

    for i in range(1, n):
        if seq[i] > max_ending_here + seq[i]:
            start = i
            max_ending_here = seq[i]
        else:
            max_ending_here += seq[i]
        if max_ending_here > max_so_far:
            end = i
            max_so_far = max_ending_here

    return seq[start:end+1], max_so_far

a = [-2, 11, -4, 13, -5, 2]
b = [1, -3, 4, -2, -1, 6]
c = [3, -2, 9, 7, 5, -8, 10, -3]

print(max_contig_sum(a))
print(max_contig_sum(b))
print(max_contig_sum(c))

Analysis : -

The time complexity of the max_contig_sum function is O(n), where n is the length of the input sequence seq.
Q4. Implement the algorithm (Algo_1) presented below and discuss which task this algorithm performs. Also, analyse the time complexity and space complexity of the given algorithm. Further, implement the algorithm with the following modification: replace m = ⌈2n/3⌉ with m = ⌊2n/3⌋, and compare the tasks performed by the given algorithm and the modified algorithm.

Algo_1(A[0 ... n−1])
{
    if n = 2 and A[0] > A[1]
        swap A[0] ↔ A[1]
    else if n > 2
        m = ⌈2n/3⌉
        Algo_1(A[0 .. m − 1])
        Algo_1(A[n − m .. n − 1])
        Algo_1(A[0 .. m − 1])
}
Source Code:
import time
import numpy as np
import matplotlib.pyplot as plt

def algo_1_original(A):
    n = len(A)
    if n == 2 and A[0] > A[1]:
        A[0], A[1] = A[1], A[0]
    elif n > 2:
        m = -(-2 * n // 3)            # m = ceil(2n/3), as in Algo_1
        first = A[:m]                  # recurse on copies, then write them back
        algo_1_original(first)
        A[:m] = first
        last = A[n - m:]
        algo_1_original(last)
        A[n - m:] = last
        first = A[:m]
        algo_1_original(first)
        A[:m] = first
    return A

def algo_1_modified(A):
    n = len(A)
    if n == 2 and A[0] > A[1]:
        A[0], A[1] = A[1], A[0]
    elif n > 2:
        m = (2 * n) // 3               # m = floor(2n/3), the modified rule
        first = A[:m]
        algo_1_modified(first)
        A[:m] = first
        last = A[n - m:]
        algo_1_modified(last)
        A[n - m:] = last
        first = A[:m]
        algo_1_modified(first)
        A[:m] = first
    return A

A = [5, 7, 10, 22, 3, 9, 14, 1, 11]
print(algo_1_original(A))

B = [5, 7, 10, 22, 3, 9, 14, 1, 11]
print(algo_1_modified(B))

n_values = [3, 4, 5, 6, 7, 8]
times_original = []
times_modified = []

for n in n_values:
    A = list(range(n, 0, -1))
    B = list(range(n, 0, -1))

    start = time.time()
    algo_1_original(A)
    end = time.time()
    times_original.append(end - start)

    start = time.time()
    algo_1_modified(B)
    end = time.time()
    times_modified.append(end - start)

X_axis = np.arange(len(n_values))

plt.bar(X_axis - 0.2, times_original, 0.4, label='Original')
plt.bar(X_axis + 0.2, times_modified, 0.4, label='Modified')
plt.xticks(X_axis, n_values)
plt.xlabel("Array size (n)")
plt.ylabel("Execution time (seconds)")
plt.legend()
plt.show()
Analysis : -
Algo_1 is a Stooge-sort-style routine: it sorts the array A in ascending order by recursively sorting the first two-thirds, the last two-thirds, and the first two-thirds again. Its running time satisfies T(n) = 3T(2n/3) + O(1), which gives roughly O(n^2.71); the space complexity here is O(n) because each call copies slices (the recursion depth itself is O(log n)). With m = ⌊2n/3⌋ the two recursive pieces no longer overlap enough for some sizes (for example n = 4), so the modified algorithm is not guaranteed to sort the array.


LAB SHEET 4
Dynamic Programming

Enrollment No.: 07819011921
Name: Uday Bari
Batch: AI_DS B2A
Q1) Implement LCS algorithm for A[1 .. n] and B[1 .. l] sequences.

The LCS (Longest Common Subsequence) algorithm is a dynamic programming


algorithm that is used to find the longest subsequence that is common to two given
sequences A[1 .. n] and B[1 .. l]. The algorithm works by comparing the characters of
the two sequences and finding the longest common subsequence between them.
The dynamic programming approach involves building a matrix of size (n+1)x(l+1) to
store the lengths of LCS of all sub-sequences of the two given sequences. The matrix is
initialized with 0's and then iteratively filled in with the LCS of the sub-sequences.
The final answer can be obtained by reading the value of LCS[n][l] from the matrix.
The time complexity of the LCS algorithm is O(nl), where n and l are the lengths of the
two sequences. This is because we are filling a matrix of size (n+1)x(l+1), which takes
O(nl) time, and then reading the final value from the matrix, which takes O(1) time.
The space complexity of the algorithm is also O(nl) since we are storing a matrix of
size (n+1)x(l+1).
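The table is filled with the standard recurrence (stated here for reference): LCS[i][j] = LCS[i-1][j-1] + 1 if A[i] = B[j], and LCS[i][j] = max(LCS[i-1][j], LCS[i][j-1]) otherwise, with LCS[i][0] = LCS[0][j] = 0.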

Source Code:

print('uday bari 07819011921')

def lcs(s1, s2, m, n):
    Lcs_table = [[0 for j in range(n + 1)] for i in range(m + 1)]

    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                Lcs_table[i][j] = Lcs_table[i - 1][j - 1] + 1
            else:
                Lcs_table[i][j] = max(Lcs_table[i - 1][j], Lcs_table[i][j - 1])

    index = Lcs_table[m][n]
    lcs = [''] * (index + 1)

    i, j = m, n
    while i > 0 and j > 0:
        if s1[i - 1] == s2[j - 1]:
            lcs[index - 1] = s1[i - 1]
            i -= 1
            j -= 1
            index -= 1
        elif Lcs_table[i - 1][j] > Lcs_table[i][j - 1]:
            i -= 1
        else:
            j -= 1

    print("String 1: " + s1 + "\nString 2: " + s2 + "\nLCS: " + "".join(lcs))

S1 = input("Please enter String1 = ")
S2 = input("Please enter String2 = ")
m = len(S1)
n = len(S2)
lcs(S1, S2, m, n)
Analysis : -

the time complexity is O(m * n).


the space complexity is O(m * n)

Q2) Given an array A[1 .. n] of integers, compute the length of a longest increasing subsequence. A sequence B[1 .. l] is increasing if B[i] > B[i − 1] for every index i ≥ 2. For example, given the array
⟨3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4, 6, 2, 7⟩
your algorithm should return the integer 6, because 1, 4, 5, 6, 8, 9 is a longest increasing subsequence (one of many).

Source Code:
import numpy as np
print('uday bari 07819011921')

def longest_increasing_subsequence(A):
    n = len(A)
    L = [1] * n
    for i in range(1, n):
        for j in range(i):
            if A[i] > A[j] and L[i] < L[j] + 1:
                L[i] = L[j] + 1
    max_len = 0
    for i in range(n):
        max_len = max(max_len, L[i])
    return max_len

A = np.random.randint(15, size=(20))
print(A)
print(longest_increasing_subsequence(A))
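As a quick sanity check, the function can also be run on the array from the problem statement, which should report 6:

print(longest_increasing_subsequence(
    [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4, 6, 2, 7]))  # expected: 6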
The time complexity of the longest_increasing_subsequence function is O(n^2), where n is the length of the input array A. This is because the function uses a nested loop structure to compare each pair of elements in A, resulting in O(n^2) iterations; the space complexity is O(n) for the table L.
Q3.) Given an array A[1 .. n] of integers, compute the length of a longest alternating subsequence. A sequence B[1 .. l] is alternating if B[i] < B[i − 1] for every even index i ≥ 2, and B[i] > B[i − 1] for every odd index i ≥ 3. For example, given the array
⟨3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4, 6, 2, 7⟩,
your algorithm should return the integer 17, because 3, 1, 4, 1, 5, 2, 6, 5, 8, 7, 9, 3, 8, 4, 6, 2, 7 is a longest alternating subsequence (one of many).

Source Code:
def longest_alternating_subsequence(A):
    n = len(A)
    inc = [1] * n
    dec = [1] * n

    for i in range(n):
        for j in range(i):
            if A[j] < A[i]:
                inc[i] = max(inc[i], dec[j] + 1)
            elif A[j] > A[i]:
                dec[i] = max(dec[i], inc[j] + 1)

    maxLen = max(max(inc), max(dec))
    return maxLen

A = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4, 6, 2, 7]
result = longest_alternating_subsequence(A)
print(result)
Complexity Analysis:
● Time Complexity: The algorithm uses nested loops to iterate through the array, resulting in a time complexity of O(n^2), where n is the length of the input array A.
● Space Complexity: The algorithm uses two additional arrays, inc and dec, of size n to store intermediate results. Hence, the space complexity is O(n).
Q4.) Given an array A[1 .. n], compute the length of a longest palindrome
subsequence of A. Recall that a sequence B[1 .. l] is a palindrome if B[i] =
B[l− i + 1] for every index i.

Source Code:
def longest_palindrome_subsequence(A):
    n = len(A)
    dp = [[0] * n for _ in range(n)]

    for i in range(n-1, -1, -1):
        dp[i][i] = 1
        for j in range(i+1, n):
            if A[i] == A[j]:
                dp[i][j] = dp[i+1][j-1] + 2
            else:
                dp[i][j] = max(dp[i+1][j], dp[i][j-1])

    return dp[0][n-1]

# Example
A = [1, 2, 3, 4, 3, 2, 1]
result = longest_palindrome_subsequence(A)
print(result)
Complexity Analysis:
● Time Complexity: The algorithm uses nested loops to iterate through the array and fill the dp array. Hence, the time complexity is O(n^2), where n is the length of the input array A.
● Space Complexity: The algorithm uses a 2D dp array of size n x n to store intermediate results. Therefore, the space complexity is O(n^2).
Q5.) Given an array A[1 .. n] of integers, compute the length of a longest
convex subsequence of A. A sequence B[1 .. l] is convex if B[i] − B[i − 1] >
B[i − 1] − B[i − 2] for every index i ≥ 3.

Source Code:
def longest_convex_subsequence(A):
    n = len(A)
    if n <= 2:
        return n

    # dp[j][i]: length of the longest convex subsequence ending with A[j], A[i]
    dp = [[2] * n for _ in range(n)]
    maxLen = 2

    for i in range(n):
        for j in range(i):
            for k in range(j):
                # the last three chosen elements A[k], A[j], A[i] must satisfy
                # A[i] - A[j] > A[j] - A[k]
                if A[i] - A[j] > A[j] - A[k]:
                    dp[j][i] = max(dp[j][i], dp[k][j] + 1)
            maxLen = max(maxLen, dp[j][i])

    return maxLen

# Example
A = [3, 1, 4, 1, 5, 12, 2, 6, 5, 3, 5, 8, 9, 15, 9, 3, 2, 16, 8, 4, 6, 2, 4]
result = longest_convex_subsequence(A)
print(result)
Complexity Analysis:
● Time Complexity: The program uses three nested loops over pairs of previous elements to update the dp table. Hence, the time complexity is O(n^3), where n is the length of the input array A.
● Space Complexity: The program uses a 2D dp table of size n x n to store intermediate results. Therefore, the space complexity is O(n^2).
LAB SHEET 5

Enrollment No.: 07819011921
Name: Uday Bari
Batch: AI_DS B2A

Q1. Implement MCM (Matrix Chain Multiplication) algorithm for the given n matrices <M1 x M2 x ... x Mn> where the size of matrix Mi is d(i-1) x d(i).

Source Code:

def matrix_chain_multiplication(dims):
    n = len(dims) - 1
    dp = [[float('inf')] * n for _ in range(n)]

    for i in range(n):
        dp[i][i] = 0

    for chain_len in range(2, n + 1):
        for i in range(n - chain_len + 1):
            j = i + chain_len - 1
            for k in range(i, j):
                cost = dp[i][k] + dp[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                if cost < dp[i][j]:
                    dp[i][j] = cost

    return dp[0][n - 1]

matrix_dimensions = [10, 30, 5, 60]
minimum_scalar_multiplications = matrix_chain_multiplication(matrix_dimensions)
print("Minimum scalar multiplications:", minimum_scalar_multiplications)
Complexity Analysis:

the time complexity of the code is O(n^3), where n is the number of matrices
the space complexity of the code is O(n^2).
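For reference, the table follows the standard recurrence dp[i][j] = min over i ≤ k < j of ( dp[i][k] + dp[k+1][j] + d(i−1)·d(k)·d(j) ), with dp[i][i] = 0; in the code's 0-indexed dims list this product is dims[i]·dims[k+1]·dims[j+1]. The three nested loops over chain length, i and k are what give the O(n^3) bound.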

Q2. Implement OBST (Optimal Binary Search Tree) for given n keys (K1, K2, ..., Kn) whose pi and qi (dummy-key probabilities) are given.

Source Code:

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

keys = ["K1", "K2", "K3", "K4", "K5"]
p = [0.15, 0.1, 0.05, 0.1, 0.2]
q = [0.05, 0.1, 0.05, 0.05, 0.05, 0.1]

def optimal_bst(keys, p, q):
    n = len(keys)
    cost = [[0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 2)]

    for i in range(1, n + 2):
        cost[i][i - 1] = q[i - 1]

    for l in range(1, n + 1):
        for i in range(1, n - l + 2):
            j = i + l - 1
            cost[i][j] = float('inf')
            for r in range(i, j + 1):
                c = cost[i][r - 1] + cost[r + 1][j] + sum(p[i - 1:j]) + sum(q[i - 1:j + 1])
                if c < cost[i][j]:
                    cost[i][j] = c
                    root[i][j] = r

    return cost[1][n], root

def construct_obst(keys, root, i, j):
    if i > j:
        return None
    k = root[i][j]
    if k == 0:
        return None
    node = Node(keys[k - 1])
    node.left = construct_obst(keys, root, i, k - 1)
    node.right = construct_obst(keys, root, k + 1, j)
    return node

cost, root = optimal_bst(keys, p, q)
obst = construct_obst(keys, root, 1, len(keys))
print("Minimum cost:", cost)

def inorder(node):
    if node:
        inorder(node.left)
        print(node.key)
        inorder(node.right)

print("Inorder traversal of the OBST:")
inorder(obst)
Complexity Analysis:

As written, recomputing the probability sums inside the innermost loop makes optimal_bst O(n^4); precomputing the prefix sums w(i, j) would bring it down to the standard O(n^3).
The space complexity is O(n^2) for the cost and root tables.
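For reference, the table follows the standard recurrence: cost(i, j) = q(i−1) when j = i − 1, and cost(i, j) = min over i ≤ r ≤ j of ( cost(i, r−1) + cost(r+1, j) + w(i, j) ) otherwise, where w(i, j) = p(i) + ... + p(j) + q(i−1) + ... + q(j) is the total probability of the keys and dummy keys in the subtree.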

Q3. Implement 0/1 Knapsack problem using dynamic programming.

Source Code:
def knapsack(weights, values, capacity):
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]

    for i in range(1, n + 1):
        for w in range(1, capacity + 1):
            # If the current item's weight is less than or equal to the current capacity
            if weights[i - 1] <= w:
                # Choose the maximum value between including the current item or excluding it
                dp[i][w] = max(values[i - 1] + dp[i - 1][w - weights[i - 1]], dp[i - 1][w])
            else:
                # Exclude the current item
                dp[i][w] = dp[i - 1][w]

    # Retrieve the selected items
    selected_items = []
    i, j = n, capacity
    while i > 0 and j > 0:
        if dp[i][j] != dp[i - 1][j]:
            selected_items.append(i - 1)
            j -= weights[i - 1]
        i -= 1

    # Return the maximum value and the selected items
    return dp[n][capacity], selected_items[::-1]

# Example usage
weights = [2, 3, 4, 5]
values = [3, 4, 5, 6]
capacity = 7

max_value, selected_items = knapsack(weights, values, capacity)

print("Maximum Value:", max_value)
print("Selected Items:", selected_items)
Complexity Analysis:
The time complexity of the knapsack function is O(n * capacity), where n is the number of items and capacity is the maximum capacity of the knapsack.
The space complexity is O(n * capacity) for the dp table; the selected_items list adds a further O(n).


LAB SHEET 6

Enrollment No.: 07819011921
Name: Uday Bari
Batch: AI_DS B2A

Q1 Wap to Implement breadth first search algorithm for given graph G.

Source Code:
from collections import defaultdict, deque

class Graph:
    def __init__(self):
        self.graph = defaultdict(list)

    def addEdge(self, u, v):
        self.graph[u].append(v)

    def BFS(self, start):
        visited = set()
        queue = deque()

        visited.add(start)
        queue.append(start)

        while queue:
            node = queue.popleft()
            print(node, end=" ")

            for neighbor in self.graph[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)

# Example usage
g = Graph()
g.addEdge(0, 1)
g.addEdge(0, 2)
g.addEdge(1, 2)
g.addEdge(2, 0)
g.addEdge(2, 3)
g.addEdge(3, 3)

start_node = 2
print("BFS traversal starting from node", start_node)
g.BFS(start_node)

Complexity Analysis:

With an adjacency-list representation, the time complexity of BFS is O(V + E), since each vertex is enqueued once and each edge is examined once.
The space complexity is O(V + E) for the adjacency list, the visited set and the queue.
Q2 Wap to Implement depth first search algorithm for given graph G.

Source Code:

from collections import defaultdict

class Graph:
    def __init__(self):
        self.graph = defaultdict(list)

    def addEdge(self, u, v):
        self.graph[u].append(v)

    def DFS(self, start):
        visited = set()
        self._DFSUtil(start, visited)

    def _DFSUtil(self, node, visited):
        visited.add(node)
        print(node, end=" ")

        for neighbor in self.graph[node]:
            if neighbor not in visited:
                self._DFSUtil(neighbor, visited)

# Example usage
g = Graph()
g.addEdge(0, 1)
g.addEdge(0, 2)
g.addEdge(1, 2)
g.addEdge(2, 0)
g.addEdge(2, 3)
g.addEdge(3, 3)

start_node = 2
print("DFS traversal starting from node", start_node)
g.DFS(start_node)
Complexity Analysis:

With an adjacency-list representation, the time complexity of DFS is O(V + E).
The space complexity is O(V) for the visited set and the recursion stack.

Q3 Wap to Implement topological sorting.

Source Code:
from collections import defaultdict

class Graph:
    def __init__(self, num_vertices):
        self.graph = defaultdict(list)
        self.num_vertices = num_vertices

    def addEdge(self, u, v):
        self.graph[u].append(v)

    def topologicalSort(self):
        visited = [False] * self.num_vertices
        stack = []

        for vertex in range(self.num_vertices):
            if not visited[vertex]:
                self._topologicalSortUtil(vertex, visited, stack)

        return stack[::-1]

    def _topologicalSortUtil(self, vertex, visited, stack):
        visited[vertex] = True

        for neighbor in self.graph[vertex]:
            if not visited[neighbor]:
                self._topologicalSortUtil(neighbor, visited, stack)

        stack.append(vertex)

# Example usage
g = Graph(6)
g.addEdge(5, 2)
g.addEdge(5, 0)
g.addEdge(4, 0)
g.addEdge(4, 1)
g.addEdge(2, 3)
g.addEdge(3, 1)

print("Topological Sort Order:")
topological_order = g.topologicalSort()
for vertex in topological_order:
    print(vertex, end=" ")
Complexity Analysis:

The DFS-based topological sort runs in O(V + E) time with an adjacency-list representation.
The space complexity is O(V) for the visited array, the output stack and the recursion depth.

Q4 Wap to find the strongly connected components in a Graph.

Source Code:

from collections import defaultdict

class Graph:
    def __init__(self, num_vertices):
        self.graph = defaultdict(list)
        self.num_vertices = num_vertices
        self.time = 0

    def addEdge(self, u, v):
        self.graph[u].append(v)

    def SCC(self):
        low = [-1] * self.num_vertices
        disc = [-1] * self.num_vertices
        stackMember = [False] * self.num_vertices
        stack = []
        result = []
        for vertex in range(self.num_vertices):
            if disc[vertex] == -1:
                self._SCCUtil(vertex, low, disc, stackMember, stack, result)

        return result

    def _SCCUtil(self, vertex, low, disc, stackMember, stack, result):
        disc[vertex] = self.time
        low[vertex] = self.time
        self.time += 1
        stack.append(vertex)
        stackMember[vertex] = True

        for neighbor in self.graph[vertex]:
            if disc[neighbor] == -1:
                self._SCCUtil(neighbor, low, disc, stackMember, stack, result)
                low[vertex] = min(low[vertex], low[neighbor])
            elif stackMember[neighbor]:
                low[vertex] = min(low[vertex], disc[neighbor])

        if low[vertex] == disc[vertex]:
            scc = []
            while True:
                node = stack.pop()
                stackMember[node] = False
                scc.append(node)
                if node == vertex:
                    break
            result.append(scc)

# Example usage
g = Graph(8)
g.addEdge(0, 1)
g.addEdge(1, 2)
g.addEdge(2, 0)
g.addEdge(1, 3)
g.addEdge(3, 4)
g.addEdge(4, 5)
g.addEdge(5, 3)
g.addEdge(5, 6)
g.addEdge(6, 7)
g.addEdge(7, 6)

print("Strongly Connected Components:")
sccs = g.SCC()
for scc in sccs:
    print(scc)
Complexity Analysis:

the overall time complexity is O(V + E).


the overall space complexity is O(V).

LAB SHEET 7

Enrollment No.: 07819011921
Name: Uday Bari
Batch: AI_DS B2A

Q1 Wap to Implement Prim's algorithm for given graph G.

Source Code:
import heapq
from collections import defaultdict

class Graph:
    def __init__(self):
        self.graph = defaultdict(list)

    def addEdge(self, u, v, weight):
        self.graph[u].append((v, weight))
        self.graph[v].append((u, weight))

    def primMST(self):
        visited = set()
        min_heap = []
        MST = []

        start_node = list(self.graph.keys())[0]
        visited.add(start_node)

        for neighbor, weight in self.graph[start_node]:
            heapq.heappush(min_heap, (weight, start_node, neighbor))

        while min_heap:
            weight, u, v = heapq.heappop(min_heap)

            if v not in visited:
                visited.add(v)
                MST.append((u, v, weight))

                for neighbor, edge_weight in self.graph[v]:
                    if neighbor not in visited:
                        heapq.heappush(min_heap, (edge_weight, v, neighbor))

        return MST

# Example usage
g = Graph()
g.addEdge('A', 'B', 4)
g.addEdge('A', 'C', 1)
g.addEdge('B', 'C', 3)
g.addEdge('B', 'D', 2)
g.addEdge('C', 'D', 5)
g.addEdge('D', 'E', 6)

print("Minimum Spanning Tree (MST) edges:")
mst_edges = g.primMST()
for edge in mst_edges:
    print(edge[0], "--", edge[1], ":", edge[2])
Complexity Analysis:

The overall time complexity of this heap-based Prim's algorithm is O(E log E).
The overall space complexity is O(V + E).

Q2 Wap to Implement Kruskal's algorithm for given graph G.

Source Code:
class Graph:
    def __init__(self, num_vertices):
        self.num_vertices = num_vertices
        self.edges = []

    def addEdge(self, u, v, weight):
        self.edges.append((u, v, weight))

    def find(self, parent, i):
        if parent[i] == i:
            return i
        return self.find(parent, parent[i])

    def union(self, parent, rank, x, y):
        root_x = self.find(parent, x)
        root_y = self.find(parent, y)
        if rank[root_x] < rank[root_y]:
            parent[root_x] = root_y
        elif rank[root_x] > rank[root_y]:
            parent[root_y] = root_x
        else:
            parent[root_y] = root_x
            rank[root_x] += 1

    def kruskalMST(self):
        result = []
        self.edges.sort(key=lambda x: x[2])
        parent = []
        rank = []

        for vertex in range(self.num_vertices):
            parent.append(vertex)
            rank.append(0)

        i = 0
        e = 0

        while e < self.num_vertices - 1:
            u, v, weight = self.edges[i]
            i += 1
            x = self.find(parent, u)
            y = self.find(parent, v)

            if x != y:
                e += 1
                result.append((u, v, weight))
                self.union(parent, rank, x, y)

        return result

# Example usage
g = Graph(6)
g.addEdge(0, 1, 4)
g.addEdge(0, 2, 1)
g.addEdge(1, 2, 3)
g.addEdge(1, 3, 2)
g.addEdge(2, 3, 5)
g.addEdge(3, 4, 6)
g.addEdge(4, 5, 7)

print("Minimum Spanning Tree (MST) edges:")
mst_edges = g.kruskalMST()
for edge in mst_edges:
    print(edge[0], "--", edge[1], ":", edge[2])
Complexity Analysis:

the overall time complexity of Kruskal's algorithm is dominated by the sorting step and is O(E log E).
the overall space complexity is O(E + V).

Q3 Wap to Implement Dijkstra's algorithm to find single source shortest path.

Source Code:

import heapq
from collections import defaultdict

class Graph:
    def __init__(self):
        self.graph = defaultdict(list)

    def addEdge(self, u, v, weight):
        self.graph[u].append((v, weight))
        self.graph[v].append((u, weight))

    def dijkstra(self, source):
        distances = {vertex: float('inf') for vertex in self.graph}
        distances[source] = 0

        min_heap = [(0, source)]
        visited = set()

        while min_heap:
            dist, vertex = heapq.heappop(min_heap)

            if vertex in visited:
                continue

            visited.add(vertex)

            for neighbor, weight in self.graph[vertex]:
                new_dist = dist + weight

                if new_dist < distances[neighbor]:
                    distances[neighbor] = new_dist
                    heapq.heappush(min_heap, (new_dist, neighbor))

        return distances

# Example usage
g = Graph()
g.addEdge('A', 'B', 4)
g.addEdge('A', 'C', 1)
g.addEdge('B', 'C', 3)
g.addEdge('B', 'D', 2)
g.addEdge('C', 'D', 5)
g.addEdge('D', 'E', 6)

source_vertex = 'A'
distances = g.dijkstra(source_vertex)
print("Shortest distances from the source vertex", source_vertex)
for vertex, distance in distances.items():
    print("Vertex:", vertex, "Distance:", distance)
Complexity Analysis:

the time complexity of Dijkstra's algorithm is dominated by the heap operations and is O((V + E) log V),
where E is the number of edges.
the space complexity is O(E + V).
