
DAA CAE QB

Q1. What are the asymptotic (mathematical) notations?

Asymptotic notations are mathematical tools that allow you to analyze an algorithm's running time by describing its behavior as the input size grows. Using asymptotic analysis you can compare space and time complexity: it compares two algorithms based on how their performance changes as the input size is increased or decreased.

There are mainly three asymptotic notations:

Big-O Notation (O-notation)

Omega Notation (Ω-notation)

Theta Notation (Θ-notation)

1. Theta Notation (Θ-Notation):

Theta notation bounds a function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm. Theta (average case): you add the running times for each possible input and take the average.

2. Big-O Notation (O-notation):

Big-O notation represents the upper bound of the running time of an algorithm; therefore, it gives the worst-case complexity of an algorithm. It is the most widely used notation for asymptotic analysis: it specifies the upper bound of a function, i.e., the highest possible growth of the running time for a given input. Big-O (worst case) is defined by the condition under which an algorithm takes the longest possible time to complete execution, that is, the maximum time required by the algorithm.

3. Omega Notation (Ω-Notation):

Omega notation represents the lower bound of the running time of an algorithm; thus, it provides the best-case complexity of an algorithm. The best-case execution time serves as a lower bound on the algorithm's time complexity: it is defined by the condition under which an algorithm completes execution in the shortest possible amount of time.
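As a small illustration (a sketch added here, not part of the original answer), linear search shows all three notations at once: one comparison in the best case, n in the worst case.

def linear_search(arr, target):
    # Best case: target at index 0 -> 1 comparison -> Omega(1)
    # Worst case: target absent -> n comparisons -> O(n)
    # Average case over uniformly random positions -> Theta(n)
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

print(linear_search([4, 2, 7, 1], 7))  # prints 2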

Q2. Elaborate the concept of time and space complexity.

Space Complexity: The space complexity of an algorithm represents the amount of memory space needed by the algorithm over its life cycle. The space needed by an algorithm is equal to the sum of the following two components:

A fixed part: the space required to store certain data and variables (simple variables and constants, program size, etc.) that is independent of the size of the problem.

A variable part: the space required by variables whose size depends entirely on the size of the problem, for example recursion stack space and dynamically allocated memory.

Efficient use of memory is essential, especially in resource-constrained environments. Analyzing space complexity aids in selecting algorithms that optimize memory usage, contributing to the overall efficiency and scalability of the algorithm.

In conclusion, a thorough understanding of time and space complexity is crucial for designing, analyzing, and selecting algorithms that meet performance requirements in diverse computing environments. It enables developers and engineers to make informed choices, striking a balance between computational efficiency and resource utilization.

Time Complexity: The time complexity of an algorithm represents the amount of time required by the algorithm to run to completion. Time requirements can be expressed as a numerical function t(N), where t(N) can be measured as the number of steps, provided each step takes constant time.

For example, adding two n-bit integers takes n steps. Consequently, the total computational time is t(n) = c*n, where c is the time consumed by the addition of two bits. Here we observe that t(n) grows linearly as the input size increases.

Time complexity is a fundamental concept in the analysis of algorithms, representing computational efficiency with respect to the input size. It quantifies the amount of time an algorithm takes to complete as a function of the input size. Big-O notation is commonly used to express time complexity, providing an upper bound on the growth rate of an algorithm's running time.

Understanding time complexity is crucial for evaluating and comparing algorithms. In practice, we strive to choose algorithms with lower time complexities to ensure efficient performance, especially when dealing with large datasets.

Q3. Define the algorithm and the properties of algorithms.

An algorithm is a set of commands that must be followed for a computer to perform calculations or other problem-solving operations.

According to its formal definition, an algorithm is a finite set of instructions carried out in a specific order to perform a particular task.

It is not the entire program or code; it is the core logic for a problem, represented as an informal description, a flowchart, or pseudocode.

Characteristics:

1) Input : Zero or more quantities are externally supplied.

2) Output : At least one quantity is produced.

3) Definiteness : Each instruction of the algorithm should be clear and unambiguous.

4) Finiteness : The process should be terminated after a finite number of steps.

5) Effectiveness : Every instruction must be basic enough to be carried out, in principle, by a person using paper and pencil.

Properties:

Non-ambiguity: Each step in an algorithm should be unambiguous, meaning each instruction should be clear and precise and should not carry any conflicting meaning. This property also indicates the effectiveness of the algorithm.

Range of Input : The range of input should be specified. Algorithms are normally input driven, and if the range of input is not specified the algorithm can run indefinitely.

Multiplicity : The same algorithm can be represented in several different ways: we can write the sequence of instructions in plain English or in the form of pseudocode. Similarly, several different algorithms can be written to solve the same problem.

Speed : An algorithm is written using some specified ideas, but it should be efficient and should produce the output quickly.

Finiteness : The algorithm should be finite, meaning it should terminate after performing the required operations.
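As an illustration (a small sketch added here, not part of the original answer), the following routine exhibits the characteristics above:

def find_max(numbers):
    # Input: one externally supplied quantity (a non-empty list)
    maximum = numbers[0]
    for value in numbers:        # Definiteness: every step is unambiguous
        if value > maximum:      # Effectiveness: each step is basic enough to do by hand
            maximum = value
    return maximum               # Output: at least one quantity is produced

print(find_max([3, 9, 4]))       # Finiteness: the loop stops after len(numbers) steps; prints 9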

Q4. Consider the following C code fragment (the declarations of n, i, t1 and t2, mentioned in the analysis below, are restored here):

int n, i;
// initialize the first two terms and the next (3rd) term
int t1 = 0, t2 = 1;
int nextTerm = t1 + t2;

// get the number of terms from the user
printf("Enter the number of terms: ");
scanf("%d", &n);

// print the first two terms t1 and t2
printf("Fibonacci Series: %d, %d, ", t1, t2);

// print 3rd to nth terms
for (i = 3; i <= n; ++i) {
    printf("%d, ", nextTerm);
    t1 = t2;
    t2 = nextTerm;
    nextTerm = t1 + t2;
}

Give the time complexity equation of the above given code.

The time complexity of the given code can be analyzed as follows:

1. The code initializes variables `t1`, `t2`, `nextTerm`, `n`, and `i` which takes
constant time. Let's assume this time complexity is O(1).

2. The code involves a loop that iterates from `i = 3` to `n`. Inside the loop, the
following operations are performed:

- Printing a value (constant time operation)

- Updating values of `t1`, `t2`, and `nextTerm` (constant time operations)

Since the loop runs from 3 to `n`, it will run `n - 2` times. Therefore, the time
complexity of the loop is O(n).

3. After the loop, there is no significant additional operation in terms of time complexity. Combining these factors, the overall time complexity of the given code is O(1) (for the initialization) + O(n) (for the loop) = O(n).

So, the time complexity equation for the given code is T(n) = O(n), where n is the
number of terms input by the user.

Q5. Analyse the equations of the best case, worst case and average case with the help of a graph.
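(The answer is missing in the source; the following is a brief sketch.) For an algorithm with running time t(I) on an input I of size n:

Best case:    B(n) = min { t(I) : I is an input of size n }
Worst case:   W(n) = max { t(I) : I is an input of size n }
Average case: A(n) = sum over inputs I of size n of p(I) * t(I), where p(I) is the probability of I

Plotted against n, the worst-case curve lies on top, the best-case curve at the bottom, and the average-case curve in between. For example, linear search has B(n) = 1, A(n) ≈ n/2 and W(n) = n comparisons.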

Q6. Consider the following Python code:

sum = 0
for i in range(1, 101):
    sum = sum + i
print(sum)

Give the time complexity equation of the above given code

The provided code calculates the sum of numbers from 1 to 100 using a loop. The
time complexity of this code can be analyzed as follows:

1. Initializing the variable `sum` takes constant time. Let's assume this time
complexity is O(1).

2. The loop iterates from `i = 1` to `i = 100`. Inside the loop, the following
operations are performed:

- Addition (`sum = sum + i`) takes constant time.

- Printing a value (constant time operation)

Since the loop runs 100 times (from 1 to 100), the time complexity of the loop is
O(100), which simplifies to O(1) when considering big O notation.

3. After the loop, there are no significant additional operations in terms of time
complexity.

Combining all these factors, the overall time complexity of the given code can be
approximated as O(1) (for the initialization) + O(1) (for the loop) = O(1).

So, the time complexity equation for the given code is T(n) = O(1), where `n`
represents the number of iterations (which is 100 in this case).

Q7. Illustrate the concept of space complexity and the equations for example algorithms.

Space complexity refers to the amount of memory space required by an algorithm to solve a problem, as a function of the input size. It measures how much memory an algorithm uses to store temporary data, variables, and other data structures during its execution. Space complexity is crucial because it helps assess the efficiency of an algorithm in terms of memory usage.

The space complexity equation for an algorithm can be expressed using Big O
notation, similar to time complexity. It provides an upper bound on the amount of
memory space the algorithm uses relative to the size of the input.

Example 1: Constant Space Complexity (O(1))

def constant_space(n):
    a = 5
    b = 10
    return a + b

# Space complexity: O(1)

Example 2: Linear Space Complexity (O(n))

def linear_space(n):
    data = [0] * n  # Creating an array of size n
    for i in range(n):
        data[i] = i
    return data

# Space complexity: O(n)

Example 3: Quadratic Space Complexity (O(n^2))

def quadratic_space(n):
    matrix = [[0] * n for _ in range(n)]  # Creating a 2D matrix of size n x n
    for i in range(n):
        for j in range(n):
            matrix[i][j] = i * j
    return matrix

# Space complexity: O(n^2)

Example 4: Recursive Space Complexity

def recursive_space(n):
    if n <= 0:
        return
    recursive_space(n - 1)

# Space complexity: O(n) due to the recursive call stack

The space complexity of an algorithm depends on factors like the data structures
used, the number of variables, and the depth of recursion. The most significant
contributor to space complexity is often the function call stack when dealing with
recursion.

When analyzing space complexity, it's important to consider temporary space used
by variables, the input size, and any additional data structures created during the
algorithm's execution. Similar to time complexity, space complexity helps us
choose the most efficient algorithm for a given problem based on the available
memory resources.

Explain Binary Search with pseudo code

Binary search is a search technique that works efficiently on sorted lists. Hence, to search for an element in some list using the binary search technique, we must first ensure that the list is sorted.

Binary search follows the divide and conquer approach in which the list is divided
into two halves, and the item is compared with the middle element of the list. If the
match is found then, the location of the middle element is returned. Otherwise, we
search into either of the halves depending upon the result produced through the
match.

Best Case Complexity - In binary search, the best case occurs when the element to be searched is found in the first comparison, i.e., when the first middle element itself is the element to be searched. The best-case time complexity of binary search is O(1).

Average Case Complexity - The average-case time complexity of binary search is O(log n).

Worst Case Complexity - In binary search, the worst case occurs when we have to keep reducing the search space until it has only one element. The worst-case time complexity of binary search is O(log n).

Binary_Search(a, lower_bound, upper_bound, val)
// 'a' is the given array, 'lower_bound' is the index of the first array element,
// 'upper_bound' is the index of the last array element, 'val' is the value to search

Step 1: set beg = lower_bound, end = upper_bound, pos = -1
Step 2: repeat steps 3 and 4 while beg <= end
Step 3:     set mid = (beg + end) / 2
Step 4:     if a[mid] = val
                set pos = mid
                print pos
                go to step 6
            else if a[mid] > val
                set end = mid - 1
            else
                set beg = mid + 1
            [end of if]
        [end of loop]
Step 5: if pos = -1
            print "value is not present in the array"
        [end of if]
Step 6: exit
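A minimal runnable Python version of the same pseudocode (an illustrative sketch, not part of the original answer):

def binary_search(a, val):
    beg, end = 0, len(a) - 1
    while beg <= end:
        mid = (beg + end) // 2
        if a[mid] == val:
            return mid           # found: return the index
        elif a[mid] > val:
            end = mid - 1        # search the left half
        else:
            beg = mid + 1        # search the right half
    return -1                    # value is not present

print(binary_search([11, 14, 25, 30, 40, 41, 52], 41))  # prints 5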

Explain Quick Sort with pseudo code

QuickSort is a sorting algorithm based on the Divide and Conquer algorithm that
picks an element as a pivot and partitions the given array around the picked pivot
by placing the pivot in its correct position in the sorted array.

The key process in quickSort is partition(). The goal of partition() is to place the pivot (any element can be chosen as the pivot) at its correct position in the sorted array, putting all smaller elements to the left of the pivot and all greater elements to the right.

Partition is done recursively on each side of the pivot after the pivot is placed in its
correct position and this finally sorts the array.

function quickSort(arr, low, high):
    if low < high:
        pivotIndex = partition(arr, low, high)
        quickSort(arr, low, pivotIndex - 1)   # Recursively sort the left subarray
        quickSort(arr, pivotIndex + 1, high)  # Recursively sort the right subarray

function partition(arr, low, high):
    pivot = arr[high]
    i = low - 1
    for j from low to high - 1:
        if arr[j] <= pivot:
            i = i + 1
            swap(arr[i], arr[j])
    swap(arr[i + 1], arr[high])
    return i + 1
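As a worked illustration of partition() (added here for clarity): on [10, 80, 30, 90, 40, 50, 70] with pivot 70 (the last element), elements <= 70 are swapped leftward as j scans the array, and the final swap places the pivot at index 4, yielding [10, 30, 40, 50, 70, 90, 80]. partition() returns 4, and quickSort then recurses on [10, 30, 40, 50] and [90, 80].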

Explain Merge Sort with pseudo code

Merge sort is similar to the quick sort algorithm in that it uses the divide and conquer approach to sort the elements. It is one of the most popular and efficient sorting algorithms. It divides the given list into two equal halves, calls itself on the two halves, and then merges the two sorted halves; a merge() function performs the merging.

The sub-lists are divided again and again into halves until they cannot be divided further. Then we combine pairs of one-element lists into two-element lists, sorting them in the process. The sorted two-element pairs are merged into four-element lists, and so on, until we get the fully sorted list.

MergeSort(arr):
    if length of arr <= 1:
        return arr
    mid = length of arr // 2
    left_half = MergeSort(arr[0:mid])
    right_half = MergeSort(arr[mid:end])
    return Merge(left_half, right_half)

Merge(left, right):
    merged_arr = empty array
    while left is not empty and right is not empty:
        if left[0] <= right[0]:
            append left[0] to merged_arr
            remove first element from left
        else:
            append right[0] to merged_arr
            remove first element from right
    append remaining elements of left and right to merged_arr
    return merged_arr
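Because the list is halved at every level and merging one level costs O(n), merge sort's recurrence is T(n) = 2T(n/2) + O(n), which solves to O(n log n) in the best, average, and worst cases.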

Q8. What do you understand by the recursion concept?

Recursion is the process of repeating items in a self-similar way. In programming languages, if a program allows you to call a function inside the same function, then it is called a recursive call of the function.

void recursion() {
    recursion(); /* function calls itself */
}

int main() {
    recursion();
    return 0;
}

The C programming language supports recursion, i.e., a function to call itself. But
while using recursion, programmers need to be careful to define an exit condition
from the function, otherwise it will go into an infinite loop.

Recursive functions are very useful to solve many mathematical problems, such as
calculating the factorial of a number, generating Fibonacci series, etc.
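For instance, a recursive factorial with a proper exit condition (a small sketch in Python, added for illustration):

def factorial(n):
    # Exit condition: prevents infinite recursion
    if n <= 1:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # prints 120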

Q10 Elaborate the concept of master method in recursion.

The Master Method is a specific technique used for analyzing the time complexity
of divide-and-conquer algorithms that follow a certain recurrence relation. It
provides a convenient way to determine the time complexity of such algorithms
without having to go through detailed analysis using methods like recurrence trees
or substitution.

The recurrence relation that the Master Method is applicable to has the following
form:

T(n) = aT(n/b) + f(n)

Where:

T(n) is the time complexity of the algorithm for input size n.

a is the number of subproblems that each have a size of n/b (where b > 1).

f(n) is the time complexity of the work done outside of the recursive calls
(combine, partition, etc.).

n/b represents the size of each subproblem relative to the original problem size.

The Master Method provides a way to determine the time complexity of the
algorithm based on the values of a, b, and the function f(n).

There are three cases that the Master Method covers:

Case 1: If f(n) = O(n^c) where c < log_b(a):
    The time complexity is Θ(n^log_b(a)).

Case 2: If f(n) = Θ(n^c log^k n) where c = log_b(a) (k >= 0):
    The time complexity is Θ(n^c log^(k+1) n).

Case 3: If f(n) = Ω(n^c) where c > log_b(a) (and f(n) satisfies the regularity condition):
    The time complexity is Θ(f(n)).

The Master Method is particularly useful when you can express the time
complexity of the work done outside the recursive calls using known functions like
polynomial or logarithmic functions. It simplifies the analysis and allows you to
quickly determine the time complexity of the algorithm without explicitly
constructing a recurrence tree or performing substitution.
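For example, merge sort's recurrence T(n) = 2T(n/2) + n has a = 2 and b = 2, so log_b(a) = 1 and f(n) = n = Θ(n^1 log^0 n); Case 2 applies with k = 0, giving T(n) = Θ(n log n).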

Q11. Analyse the use of divide and conquer concept in analysing the
algorithms.

Divide and Conquer is an algorithmic pattern. In this design method, we take a problem on a large input, break the input into smaller pieces, solve the problem on each of the small pieces, and then merge the piecewise solutions into a global solution. This mechanism of solving the problem is called the Divide & Conquer strategy.

A Divide and Conquer algorithm solves a problem using the following three steps.

Divide: Break the original problem into a set of subproblems.

Conquer: Solve every subproblem individually, recursively.

Combine: Put together the solutions of the subproblems to get the solution to the
whole problem.

Examples: The specific computer algorithms are based on the Divide & Conquer
approach:

Maximum and Minimum Problem

Binary Search

Sorting (merge sort, quick sort)

Tower of Hanoi.

Q12 Explain the binary search with the help of divide and conquer strategy.

Binary Search is one of the fastest searching algorithms.

It is used for finding the location of an element in a linear array.

It works on the principle of divide and conquer technique.

Binary Search Algorithm can be applied only on sorted arrays, so the elements must be arranged in:

Either ascending order, if the elements are numbers.

Or dictionary order, if the elements are strings.

To apply binary search on an unsorted array,

First, sort the array using some sorting technique.

Then, use binary search algorithm.

There is a linear array ‘a’ of size ‘n’.

Binary search algorithm is being used to search an element ‘item’ in this linear
array.

If search ends in success, it sets loc to the index of the element otherwise it sets loc
to -1.

Variables beg and end keep track of the indices of the first and last elements of the array or subarray in which the element is being searched at that instant.

Variable mid keeps track of the index of the middle element of that array or sub
array in which the element is being searched at that instant.

The binary search algorithm searches for an element by comparing it with the middle-most element of the array. Then the following three cases are possible:

Case-01 : If the element being searched is found to be the middle most element, its
index is returned.

Case-02 : If the element being searched is found to be greater than the middle most
element, then its search is further continued in the right sub array of the middle
most element.

Case-03 : If the element being searched is smaller than the middle-most element, the search continues in the left subarray of the middle-most element. This iteration keeps repeating on the subarrays until the desired element is found or the size of the subarray reduces to zero.

Q13 a1=[11, 14, 25, 30, 40, 41, 52, 57, 70] Consider the above array and solve
the problem by using the divide and conquer strategy.

Below are the steps of the divide and conquer strategy applied to searching for a target value in the array a1 = [11, 14, 25, 30, 40, 41, 52, 57, 70].

Problem Statement: Given the sorted array a1 and a target value, find whether the
target value exists in the array and if it does, find its index.

Solution using Divide and Conquer:

Divide: Divide the array into two halves.

Conquer: Compare the middle element with the target value.

Combine: Depending on the comparison, continue searching in either the left or right half of the array.

Here are the steps:

Step 1: Divide

With beg = 0 and end = 8, the middle index is mid = (0 + 8) / 2 = 4, splitting the array around the middle element 40:

Left Half: [11, 14, 25, 30]

Middle: 40

Right Half: [41, 52, 57, 70]

Step 2: Conquer

Compare the middle element 40 with the target value (let's say the target value is 41). Since 41 is greater than 40, we focus on the right half and set beg = 5, end = 8.

Step 3: Divide and Conquer

Repeat the process for the right half. Now mid = (5 + 8) / 2 = 6, so the middle element is 52:

Left Half: [41]

Middle: 52

Right Half: [57, 70]

Step 4: Conquer

Compare the middle element 52 with the target value 41. Since 41 is smaller than 52, we focus on the left half and set beg = 5, end = 5.

Step 5: Conquer

Now mid = 5 and the middle element is 41, which equals the target value. The search ends in success: 41 is found at index 5.

This process demonstrates how the divide and conquer strategy is applied to search for a target value in a sorted array: the steps repeatedly divide the array into halves and narrow down the search based on comparisons with the middle element.

Q14. arr[] = {38, 27, 43, 10}. Consider the above array and solve the problem by using the merge sort (divide and conquer) strategy.

Certainly, let's walk through the steps of the Merge Sort algorithm for the given
array arr[] = {38, 27, 43, 10}:

Step 1: Divide

Divide the array into two halves:

Left Half: {38, 27}

Right Half: {43, 10}

Step 2: Conquer (Recursive Sorting)

Recursively sort the left and right halves:

For the left half:

Divide: {38}, {27}

Conquer: {27, 38}

For the right half:

Divide: {43}, {10}

Conquer: {10, 43}

Step 3: Combine

Merge the sorted halves back together:

Merge: {10, 27, 38, 43}

The final sorted array is {10, 27, 38, 43}.

Here's a summary of the steps:

Divide: Split the array into smaller halves.

Left Half: {38, 27}

Right Half: {43, 10}

Conquer: Recursively sort the halves.

Left Half: {27, 38}

Right Half: {10, 43}

Combine: Merge the sorted halves.

Merged Array: {10, 27, 38, 43}

The sorted array is {10, 27, 38, 43}.

Q15 How to find minimum and maximum element in array using divide and
conquer? Give the examples.

Finding the minimum and maximum elements in an array using the divide and
conquer strategy can be done by recursively dividing the array into smaller
subproblems and then combining the results to get the overall minimum and
maximum. Here's how you can do it:

Algorithm:

Base Case: If the array contains only one element, return that element as both the
minimum and maximum.

Divide: Divide the array into two halves.

Conquer: Recursively find the minimum and maximum in both halves.

Combine: Compare the minimum and maximum values from the two halves to
determine the minimum and maximum for the entire array.

Here's a Python implementation of this algorithm:

def find_min_max(arr, low, high):
    # Base case: if only one element
    if low == high:
        return arr[low], arr[low]
    # If there are two elements
    if high - low == 1:
        return (arr[low], arr[high]) if arr[low] < arr[high] else (arr[high], arr[low])
    mid = (low + high) // 2
    left_min, left_max = find_min_max(arr, low, mid)
    right_min, right_max = find_min_max(arr, mid + 1, high)
    # Combine: the overall minimum/maximum of the two halves
    return min(left_min, right_min), max(left_max, right_max)

# Example array
arr = [14, 8, 23, 40, 12, 42, 31, 6]

# Call the function to find min and max
min_val, max_val = find_min_max(arr, 0, len(arr) - 1)
print("Minimum:", min_val)
print("Maximum:", max_val)

Example:

For the example array arr = [14, 8, 23, 40, 12, 42, 31, 6], the code will output:

Minimum: 6

Maximum: 42

In this example, the divide and conquer strategy is used to recursively find the
minimum and maximum elements in the array. The algorithm effectively breaks
down the problem into smaller subproblems and combines the results to achieve
the desired outcome.

Q16 T (n) = 8 T apply master theorem on it. Solve the equation by master
theorem

Q17 T (n) = 2 Solve the equation by master theorem.
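(Both recurrences above are incomplete in the source. As an illustration of the method, assume hypothetically T(n) = 8T(n/2) + n^2 for Q16 and T(n) = 2T(n/2) + n for Q17. For Q16: a = 8, b = 2, log_2(8) = 3 and f(n) = n^2 with 2 < 3, so Case 1 gives T(n) = Θ(n^3). For Q17: a = 2, b = 2, log_2(2) = 1 and f(n) = n = Θ(n^1), so Case 2 gives T(n) = Θ(n log n).)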

Q18. List the different methods for solving recurrence relations.

1. Substitution Method:

The substitution method is an approach where you guess the solution to a recurrence relation and then use mathematical induction to prove its correctness. You start by assuming a solution and then prove that it satisfies the recurrence relation. This method is often used for solving recurrences that are challenging to solve directly.

2. Iteration Method:

The iteration method involves expanding the recurrence relation through iterations,
essentially "unrolling" the recurrence into a sequence of equations. This helps you
observe patterns and make conjectures about the solution. The iteration method is
useful when the recurrence relation is simple and follows a clear pattern.
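For instance, iterating T(n) = T(n-1) + c gives T(n) = T(n-2) + 2c = T(n-3) + 3c = ... = T(0) + c*n, so T(n) = O(n).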

3. Recurrence Tree Method:

The recurrence tree method is a graphical approach. It involves representing the recurrence relation as a tree, where each level of the tree corresponds to a recursive call and the branching represents the multiple subproblems generated. The total work done at each level is summed up to determine the overall time complexity. This method is particularly useful when analyzing recursive algorithms with varying subproblem sizes.

4. Master Method:

The master method is a specific technique used to analyze the time complexity of
divide-and-conquer algorithms with a particular form of recurrence relation: `T(n)
= aT(n/b) + f(n)`. It provides a direct formula to determine the time complexity
based on the values of `a`, `b`, and `f(n)`. The master method is a quick and
efficient way to analyze the time complexity of certain recursive algorithms
without going through the detailed process of recurrence tree or substitution
methods.

These methods provide different ways to analyze and solve recurrence relations
that arise in the context of recursive algorithms. The choice of method depends on
the nature of the recurrence relation, its complexity, and the specific form of the
algorithm being analyzed.

Q19 Differentiate between tree and master methods of recursion.

Applicability:

Tree Method: Applicable to algorithms with varying subproblem sizes.

Master Method: Applicable to divide-and-conquer algorithms with a specific recurrence relation.

Representation:

Tree Method: Represents the recursive calls as a tree structure.

Master Method: Provides a direct formula to determine the time complexity.

Focus:

Tree Method: Focuses on understanding the structure of the recursive calls and their impact on the overall time complexity.

Master Method: Focuses on providing a general framework to solve recurrence relations of a specific form.

Usage:

Tree Method: Useful when subproblems are not of equal size and when the
recurrence relation isn't in the required form for the Master Method.

Master Method: Useful when analyzing divide-and-conquer algorithms with a suitable recurrence relation.

Q20 Elaborate the concept of tower of Hanoi by using recursion method.

The Tower of Hanoi is a classic mathematical puzzle that involves moving a stack
of disks from one peg to another peg, using a third peg as an intermediate, while
following specific rules. The puzzle is commonly used to demonstrate the concept
of recursion. The rules of the Tower of Hanoi are as follows:

Only one disk can be moved at a time.

Each move involves taking the top disk from one stack and placing it on top of
another stack.

No disk can be placed on top of a smaller disk.

The goal is to move the entire stack of disks from the source peg to the target peg,
using the auxiliary peg as an intermediate.

Recursive Solution:

The Tower of Hanoi problem can be elegantly solved using a recursive approach.
The key idea is to break down the problem into smaller subproblems that are
essentially the same as the original problem but with fewer disks. Here's how the
recursive solution works:

Base Case: If there's only one disk to move, simply move it from the source peg to
the target peg.

Recursive Case: For moving n disks from source to target, you can think of it as
moving the top n-1 disks from the source to the auxiliary peg, then moving the
bottom disk (the largest one) to the target peg, and finally moving the n-1 disks
from the auxiliary peg to the target peg.

Python implementation of the Tower of Hanoi problem using recursion:

def tower_of_hanoi(n, source, auxiliary, target):
    if n == 1:
        print(f"Move disk 1 from {source} to {target}")
        return
    tower_of_hanoi(n - 1, source, target, auxiliary)
    print(f"Move disk {n} from {source} to {target}")
    tower_of_hanoi(n - 1, auxiliary, source, target)

# Number of disks
num_disks = 3

# Call the function to solve Tower of Hanoi
tower_of_hanoi(num_disks, 'A', 'B', 'C')

Example:

For num_disks = 3, the program will output:

Move disk 1 from A to C

Move disk 2 from A to B

Move disk 1 from C to B

Move disk 3 from A to C

Move disk 1 from B to A

Move disk 2 from B to C

Move disk 1 from A to C

The recursive approach elegantly breaks down the Tower of Hanoi problem into
smaller subproblems, allowing you to move a sequence of disks from one peg to
another while following the rules of the puzzle.
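The recursion makes M(n) = 2M(n-1) + 1 moves, which solves to M(n) = 2^n - 1; for n = 3 this is 7 moves, matching the seven lines of output above.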

UNIT 3 and 4
1. Discuss the basic strategy of the greedy method.

The greedy method is one of the strategies, like divide and conquer, used to solve problems. This method is used to solve optimization problems: problems that demand minimum or maximum results. The greedy method is the simplest and most straightforward approach. Its main feature is that each decision is taken on the basis of the currently available information, without worrying about the effect of the current decision in the future.

This technique is used to determine a feasible solution that may or may not be optimal. In many problems it does not produce an optimal solution, though it gives an approximate (near-optimal) solution in a reasonable time. A feasible solution is a subset that satisfies the given criteria; the optimal solution is the best and most favourable solution in that subset.

All greedy algorithms follow a basic structure (see the sketch after this list):

1. Declare an empty result.
2. Make a greedy choice to select; if the choice is feasible, add it to the result.
3. Return the result.
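As a minimal sketch of this structure (added for illustration, assuming US-style coin denominations), greedy coin change repeatedly makes the locally best choice, i.e., the largest coin that still fits:

def greedy_coin_change(amount, coins=(25, 10, 5, 1)):
    result = []                      # 1. declare an empty result
    for coin in sorted(coins, reverse=True):
        while amount >= coin:        # 2. greedy, feasible choice: largest coin that fits
            result.append(coin)
            amount -= coin
    return result                    # 3. return the result

print(greedy_coin_change(63))  # [25, 25, 10, 1, 1, 1]

For these denominations the greedy choice happens to be optimal; for other coin systems it may only be near-optimal, matching the caveat above.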

Greedy approach is used to solve many problems, such as

• Finding the shortest path between two vertices using Dijkstra's algorithm.
• Finding the minimal spanning tree in a graph using Prim's/Kruskal's algorithm, etc.

Examples

Most networking algorithms use the greedy approach. Here is a list of a few of them:

• Travelling Salesman Problem
• Prim's Minimal Spanning Tree Algorithm
• Kruskal's Minimal Spanning Tree Algorithm
• Dijkstra's Shortest Path Algorithm
• Graph - Map Coloring
• Knapsack Problem
• Job Scheduling Problem

2. Explain the application of the greedy method to the job sequencing with deadlines problem.

Algorithm

• Find the maximum deadline value from the input set of jobs.
• Once the maximum deadline is decided, arrange the jobs in descending order of their profits.
• Select the jobs with the highest profits whose time slots do not exceed the maximum deadline.
• The selected set of jobs is the output.

The job sequencing with deadlines problem uses the greedy approach, so at each step we pick the best option among the many available. In this approach we focus on the current stage, decide the output, and don't think about the future.

In the job sequencing with deadlines problem, the objective is to find a sequence of jobs that are completed within their deadlines and give maximum profit. Consider a set of n given jobs associated with deadlines, where profit is earned if a job is completed by its deadline. These jobs need to be ordered so that maximum profit is earned.

It may happen that not all of the given jobs can be completed within their deadlines. Assume that the deadline of the i-th job Ji is di and the profit received from job Ji is pi. The optimal solution of the job sequencing with deadlines algorithm is then a feasible solution with maximum profit.

Points to Remember for Job Sequencing with Deadlines

• Each job has a deadline di and can be processed only within its deadline; only one job can be processed at a time.
• Only one CPU is available for processing all jobs.
• The CPU takes one unit of time to process any job.
• All jobs arrive at the same time.

Job Sequencing with Deadlines Example

Let us consider a given job sequencing problem, as shown in the table below. We have to find the sequence of jobs that are completed within their deadlines and give the maximum profit. Each job is associated with a deadline and profit as given below:

job       J1    J2    J3    J4    J5
deadline   2     1     1     2     3
profit    40   100    20    60    20

The given jobs are sorted as per their profit in descending order to solve
this problem. Hence, the jobs are ordered after sorting, as shown in the
following table.

job       J2    J4    J1    J5    J3
deadline   1     2     2     3     1
profit   100    60    40    20    20

From the given set of jobs, first, we select J2, as it should be completed
within its deadline and contributes maximum profit.

• Next, J4 is selected, as it gives more profit than J1.
• J1 cannot be selected in the next slot as its deadline is over. Hence J5 is selected, as it executes within its deadline.
• Job J3 is discarded as it cannot be executed within its deadline.

Therefore, the sequence of jobs (J2, J4, J5) is executed within their deadlines and gives the maximum profit.

The total profit of the sequence is 100 + 60 + 20 = 180.
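A runnable sketch of this greedy procedure (added for illustration; slot assignment uses the standard latest-free-slot rule):

def job_sequencing(jobs):
    # jobs: list of (name, deadline, profit); greedy: sort by profit, descending
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)
    max_deadline = max(j[1] for j in jobs)
    slots = [None] * max_deadline            # one unit-time slot per deadline
    for name, deadline, profit in jobs:
        # place the job in the latest free slot on or before its deadline
        for t in range(deadline - 1, -1, -1):
            if slots[t] is None:
                slots[t] = name
                break
    return [s for s in slots if s is not None]

jobs = [("J1", 2, 40), ("J2", 1, 100), ("J3", 1, 20), ("J4", 2, 60), ("J5", 3, 20)]
print(job_sequencing(jobs))  # ['J2', 'J4', 'J5'] -> profit 100 + 60 + 20 = 180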

3. Difference between greedy and dynamic method.

Producing decision sequences: Greedy produces only one decision sequence; Dynamic produces many decision sequences.

Results: Greedy is fast; Dynamic is slow.

Approach: Greedy follows a top-down approach; Dynamic follows a bottom-up approach.

Solution: Greedy may give different (possibly non-optimal) solutions; Dynamic gives the optimal solution.

Solution evaluation: Greedy evaluates a choice based on how well it performs in the current situation; Dynamic based on how well it performs in the future.

Decisions made: Greedy decides based on the current state of the program; Dynamic based on the future state of the program.

Usage situation: Greedy is used when the solution is known; Dynamic when the solution is not known.

Solution to every problem: Available with Greedy; not available with Dynamic.

Example: Fractional knapsack (Greedy); 0/1 knapsack problem (Dynamic).

4.Solve the following problem with minimum cost spanning trees.

A Minimum Spanning Tree (MST) is a subset of the edges of a connected, weighted, undirected graph that connects all the vertices together with the minimum possible total edge weight.
To derive an MST, Prim's algorithm or Kruskal's algorithm can be used.
Properties
• A spanning tree does not have any cycle.
• Any vertex can be reached from any other vertex.
• One graph may have more than one spanning tree.
• If there are n vertices then the spanning tree has (n - 1) edges.

In this context, if each edge of the graph is associated with a weight and there exists more than one spanning tree, we need to find the minimum spanning tree of the graph.

Using Kruskal Algorithm:
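(The worked figure from the original is not reproduced here.) A generic Kruskal sketch in Python, assuming the graph is given as a weighted edge list:

def kruskal(num_vertices, edges):
    # edges: list of (weight, u, v); returns the MST edge list
    parent = list(range(num_vertices))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):     # greedy: cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                  # skip edges that would form a cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

print(kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]))
# [(0, 1, 1), (1, 3, 2), (1, 2, 3)]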

Using Prim’s algorithm

5. Solve the single-source shortest path problem.
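(The worked figure is likewise not reproduced.) A generic sketch of Dijkstra's algorithm, assuming an adjacency-list graph with non-negative weights:

import heapq

def dijkstra(graph, source):
    # graph: {u: [(v, w), ...]}; returns shortest distances from source
    dist = {u: float('inf') for u in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                  # stale queue entry
        for v, w in graph[u]:
            if d + w < dist[v]:       # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {'A': [('B', 2), ('C', 5)], 'B': [('C', 1)], 'C': []}
print(dijkstra(graph, 'A'))  # {'A': 0, 'B': 2, 'C': 3}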

6. Explain the strategy of dynamic programming in details.

Dynamic programming is a technique that breaks a problem into subproblems and saves their results for future use so that we do not need to compute a result again. That the subproblems are optimized in order to optimize the overall solution is known as the optimal substructure property. The main use of dynamic programming is to solve optimization problems, i.e., problems where we are trying to find the minimum or the maximum solution.

How does the dynamic programming approach work?

The following are the steps that the dynamic programming follows:

o It breaks down the complex problem into simpler subproblems.
o It finds the optimal solutions to these subproblems.
o It stores the results of the subproblems (memoization).
o It reuses them so that the same subproblem is not calculated more than once.
o Finally, it calculates the result of the complex problem.

Approaches of dynamic programming

There are two approaches to dynamic programming:

o Top-down approach
o Bottom-up approach

Top-down approach

The top-down approach follows the memoization technique, while the bottom-up approach follows the tabulation method. Here memoization is equal to the sum of recursion and caching: recursion means the function calling itself, while caching means storing the intermediate results.

Advantages

o It is very easy to understand and implement.
o It solves the subproblems only when required.
o It is easy to debug.

Disadvantages

It uses the recursion technique, which occupies more memory in the call stack. Sometimes when the recursion is too deep, a stack overflow condition will occur.

It occupies more memory, which degrades the overall performance.

Bottom-Up approach

The bottom-up approach is another technique that can be used to implement dynamic programming. It uses the tabulation technique. It solves the same kind of problems but removes the recursion; with no recursion there is no stack overflow issue and no overhead of recursive function calls. In the tabulation technique, we solve the subproblems and store their results in a table.
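A small sketch contrasting the two approaches on Fibonacci (illustrative, not from the original QB):

from functools import lru_cache

@lru_cache(maxsize=None)          # top-down: recursion + caching (memoization)
def fib_memo(n):
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):                   # bottom-up: tabulation, no recursion
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(10), fib_tab(10))  # 55 55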

7. Solve the problem of multistage graphs.

8. Solve the problem on traveling salesman problem.

     A    B    C    D
A    0   20   42   35
B   20    0   30   34
C   42   30    0   12
D   35   34   12    0
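(The worked solution is missing in the source; as an added check, the tours can be enumerated directly.) The three distinct tours from A are: A→B→C→D→A = 20 + 30 + 12 + 35 = 97; A→B→D→C→A = 20 + 34 + 12 + 42 = 108; A→C→B→D→A = 42 + 30 + 34 + 35 = 141. The minimum-cost tour is therefore A→B→C→D→A (or its reverse) with cost 97.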

10. Solve the all-pairs shortest path problem.
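(The worked instance is not reproduced.) A generic sketch of Floyd-Warshall, the standard dynamic-programming solution to all-pairs shortest paths:

def floyd_warshall(dist):
    # dist: n x n matrix, dist[i][j] = edge weight or float('inf')
    n = len(dist)
    for k in range(n):                # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = float('inf')
print(floyd_warshall([[0, 3, INF], [INF, 0, 1], [2, INF, 0]]))
# [[0, 3, 4], [3, 0, 1], [2, 5, 0]]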

UNIT 5

1) Differentiate between traversal and searching techniques.
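(The original answer is not reproduced here; in brief:) Traversal visits every node of a graph or tree exactly once (e.g., BFS, DFS, inorder), whereas searching looks for a particular element and can stop as soon as that element is found (e.g., linear search, binary search).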

2) Explain BFS with example

Breadth First Search (BFS) can find the shortest path and minimum spanning tree for unweighted graphs. In an unweighted graph, the shortest path has the least number of edges, and BFS always reaches a vertex from a source using the minimum number of edges. In unweighted graphs any spanning tree is a minimum spanning tree, and either BFS or DFS can be used to find a spanning tree.

There are many ways to traverse a graph, but among them BFS is the most commonly used approach. It is an algorithm for searching all the vertices of a tree or graph data structure, usually implemented iteratively with a queue. BFS puts every vertex of the graph into one of two categories, visited and non-visited. It selects a single node in the graph and, after that, visits all the nodes adjacent to the selected node, level by level.
3) Write down the sequence of a graph traversal using DFS.

Depth First Search (DFS), or depth-first traversal, is a recursive algorithm for searching all the vertices of a graph or tree data structure. Traversal means visiting all the nodes of a graph.

Depth First Search Algorithm


A standard DFS implementation puts each vertex of the graph into one of two
categories:

Visited
Not Visited
The purpose of the algorithm is to mark each vertex as visited while avoiding
cycles.

The DFS algorithm works as follows:

1. Start by putting any one of the graph's vertices on top of a stack.
2. Take the top item of the stack and add it to the visited list.
3. Create a list of that vertex's adjacent nodes. Add the ones which aren't in the visited list to the top of the stack.
4. Keep repeating steps 2 and 3 until the stack is empty.
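A minimal stack-based DFS sketch in Python (added for illustration):

def dfs(graph, start):
    # graph: adjacency list {u: [v, ...]}; returns vertices in DFS order
    visited = []
    stack = [start]
    while stack:
        u = stack.pop()               # LIFO: go deep before going wide
        if u not in visited:
            visited.append(u)
            # push the neighbours that are not yet visited
            stack.extend(v for v in graph[u] if v not in visited)
    return visited

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(dfs(graph, 0))  # [0, 2, 3, 1]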

4) Explain backtracking with example

Backtracking is a problem-solving algorithmic technique that involves finding a solution incrementally by trying different options and undoing them if they lead to a dead end. It is commonly used in situations where you need to explore multiple possibilities to solve a problem, like searching for a path in a maze or solving puzzles like Sudoku. When a dead end is reached, the algorithm backtracks to the previous decision point and explores a different path until a solution is found or all possibilities have been exhausted.

void FIND_SOLUTIONS(parameters):
    if (valid solution):
        store the solution
        return
    for (all choices):
        if (valid choice):
            APPLY(choice)
            FIND_SOLUTIONS(parameters)
            BACKTRACK(remove choice)
    return

Applications of Backtracking:

• Creating smart bots to play board games such as chess.
• Solving mazes and puzzles such as the N-Queen problem.
• Network routing and congestion control.
• Decryption.
• Text justification.
Example — see the DFS example above.

5) What do you understand by the NP / P Hamiltonian problem?

NP-Complete:
The term "NP-complete" refers to a class of decision problems in computational complexity theory. A problem is NP-complete if it belongs to the class NP (nondeterministic polynomial time) and has the property that any other problem in NP can be reduced to it in polynomial time. In simpler terms, solving any NP-complete problem efficiently would imply an efficient solution for all problems in NP. The concept was introduced by Stephen Cook in 1971.

P and the Hamiltonian Path Problem:
The class P contains the problems that can be solved in polynomial time. Specifically, if there were an algorithm that could determine in polynomial time whether a Hamiltonian path exists in a given graph, then the Hamiltonian path problem would be in P.

6) Explain the concept of the Hamiltonian problem.

The Hamiltonian Path Problem is a classic problem in graph theory. It involves finding a Hamiltonian path in a given graph. A Hamiltonian path is a simple path that visits every vertex of the graph exactly once. If such a path exists, the graph is said to have a Hamiltonian path; otherwise, it does not.

Key points:

1. Graph Representation:
- The problem is defined on a graph, which consists of vertices (nodes) and
edges (connections between nodes).

2. Hamiltonian Path:
- A Hamiltonian path is a way to traverse the entire graph by visiting each
vertex exactly once.

3. Objective:
- The goal is to determine whether there exists a Hamiltonian path in the
given graph.

4. Complexity:
- The Hamiltonian Path Problem is NP-complete, meaning that it is computationally challenging. No known polynomial-time algorithm exists to solve it for all cases.

5. Applications:
- The problem has practical applications in various fields, including network design, optimization, and logistics.

6. Algorithms:
- Solving the Hamiltonian Path Problem often involves algorithmic approaches such as backtracking or dynamic programming. These algorithms explore different paths in the graph to check for the existence of a Hamiltonian path.

In summary, the Hamiltonian Path Problem revolves around finding a specific kind of path in a graph, and its solution has implications in various real-world scenarios. The challenge lies in the computational complexity of solving the problem for arbitrary graphs.
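A small backtracking sketch (added for illustration; it only tries paths starting from a fixed vertex):

def hamiltonian_path(graph, path):
    # graph: adjacency list {u: [v, ...]}; path: vertices chosen so far
    if len(path) == len(graph):
        return path                       # every vertex visited exactly once
    for v in graph[path[-1]]:
        if v not in path:                 # try extending the path with v
            result = hamiltonian_path(graph, path + [v])
            if result:
                return result             # found: propagate the solution upward
    return None                           # dead end: backtrack

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(hamiltonian_path(graph, [0]))  # [0, 1, 2, 3]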

Graph Colouring
Graph coloring can be described as the process of assigning colors to the vertices of a graph so that the same color is not used for two adjacent vertices. We can also call graph coloring vertex coloring. In graph coloring, we have to ensure that the graph does not contain any edge whose end vertices are colored with the same color. Such a graph is known as a properly colored graph.

Example of Graph coloring

(The figure from the original, showing a properly colored example graph, is not reproduced here.) The example illustrates the following points:

o The same color cannot be used to color two adjacent vertices.
o Hence, we can call it a properly colored graph.

Applications of Graph coloring

There are various applications of graph coloring. Some of their important applications
are described as follows:

o Assignment
o Map coloring
o Scheduling the tasks
o Sudoku
o Prepare time table
o Conflict resolution

A graph coloring is an assignment of labels, called colors, to the vertices of a graph such that no two adjacent vertices share the same color. The chromatic number χ(G) of a graph G is the minimal number of colors for which such an assignment is possible.
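A simple greedy coloring sketch (added for illustration; greedy coloring is fast but does not always achieve the chromatic number χ(G)):

def greedy_coloring(graph):
    # graph: adjacency list {u: [v, ...]}; returns {vertex: color index}
    colors = {}
    for u in graph:
        used = {colors[v] for v in graph[u] if v in colors}
        # assign the smallest color not used by any already-colored neighbour
        c = 0
        while c in used:
            c += 1
        colors[u] = c
    return colors

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [2]}
print(greedy_coloring(graph))  # {0: 0, 1: 1, 2: 2, 3: 0}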

8 Queens Problem

The eight queens problem is the problem of placing eight queens on an 8×8
chessboard such that none of them attack one another (no two are in the same row,
column, or diagonal). More generally, the n queens problem places n queens on an
n×n chessboard
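(The pseudocode that the explanation below refers to is missing from the source; the following Python sketch reconstructs the algorithm along the lines the explanation describes.)

def is_safe(board, row, col):
    # board[c] = row of the queen placed in column c
    for c in range(col):
        if board[c] == row or abs(board[c] - row) == abs(c - col):
            return False              # same row, diagonal, or anti-diagonal
    return True

def solve(board, col, n=8):
    if col == n:
        print(board)                  # all queens placed safely: print the board
        return True
    for row in range(n):
        if is_safe(board, row, col):
            board[col] = row          # place a queen, then move to the next column
            if solve(board, col + 1, n):
                return True
            board[col] = -1           # backtrack: remove the queen, try another row
    return False

solve([-1] * 8, 0)  # prints one solution, e.g. [0, 4, 7, 5, 2, 6, 1, 3]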

Explanation:

This pseudocode uses a backtracking algorithm to find a solution to the 8 queens problem, which consists of placing 8 queens on a chessboard in such a way that no two queens threaten each other.
The algorithm starts by placing a queen in the first column, then it proceeds to the next column and places a queen in the first safe row of that column.
If the algorithm reaches the 8th column and all queens are placed in a safe position, it prints the board and returns true.
If the algorithm is unable to place a queen in a safe position in a certain column, it backtracks to the previous column and tries a different row.
The "isSafe" function checks whether it is safe to place a queen in a certain row and column by checking whether there are any queens in the same row, diagonal, or anti-diagonal.
It's worth noticing that this is just high-level pseudocode, and it might need to be adapted depending on the specific implementation and the language you are using.

