DAA All1
Theta notation bounds a function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm. Theta (Average Case): you add the running times for each possible input combination and take the average.
Big-O notation represents the upper bound of the running time of an algorithm. Therefore, it gives the worst-case complexity of an algorithm. It is the most widely used notation for asymptotic analysis. It specifies the upper bound of a function, i.e., the highest possible growth rate (big-O) for a given input. Big-O (Worst Case): it is defined as the condition that allows an algorithm to complete statement execution in the longest amount of time possible, i.e., the maximum time required by an algorithm, or the worst-case time complexity.
Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best-case complexity of an algorithm. The execution time serves as a lower bound on the algorithm's time complexity. It is defined as the condition that allows an algorithm to complete statement execution in the shortest amount of time.
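To make the three notations concrete, here is an illustrative sketch (the example is mine, not from the notes) using linear search, whose best case is Omega(1), worst case is O(n), and average case is Theta(n):

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if absent."""
    for i, value in enumerate(arr):
        if value == target:
            return i        # best case: target at index 0, one comparison
    return -1               # worst case: target absent, n comparisons

arr = [7, 3, 9, 1, 5]
print(linear_search(arr, 7))   # → 0 (best case: found immediately)
print(linear_search(arr, 5))   # → 4 (last element)
print(linear_search(arr, 2))   # → -1 (worst case: not found)
```

On average, assuming the target is equally likely to be at any position, about n/2 comparisons are made, which is Theta(n).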
A fixed part is the space required to store certain data and variables (i.e. simple variables and constants, program size, etc.) that is independent of the size of the problem.
For example, in the case of addition of two n-bit integers, n steps are taken. Consequently, the total computational time is t(n) = c*n, where c is the time consumed for the addition of two bits. Here, we observe that t(n) grows linearly as the input size increases.
Time Complexity:
Q3] Define the algorithm and properties of algorithms.
Characteristics:
Properties:
Range of Input : The range of input should be specified. This is because the algorithm is normally input driven, and if the range of input is not specified, the algorithm can go into an infinite state.
Multiplicity : The same algorithm can be represented in several different ways. That means we can write the sequence of instructions in simple English, or we can write it in the form of pseudo code. Similarly, for solving the same problem we can write several different algorithms.
Speed : The algorithm is written using some specified ideas. But such an algorithm should be efficient and should produce the output quickly.
Finiteness : The algorithm should be finite. That means after performing the required operations it should terminate.
scanf("%d", &n);
for (i = 3; i <= n; i++) {
    nextTerm = t1 + t2;
    t1 = t2;
    t2 = nextTerm;
}
1. The code initializes variables `t1`, `t2`, `nextTerm`, `n`, and `i` which takes
constant time. Let's assume this time complexity is O(1).
2. The code involves a loop that iterates from `i = 3` to `n`. Inside the loop, the
following operations are performed:
Since the loop runs from 3 to `n`, it will run `n - 2` times. Therefore, the time
complexity of the loop is O(n).
So, the time complexity equation for the given code is T(n) = O(n), where n is the
number of terms input by the user.
Q5. Analyse the equations of the best case, worst case and average case using the given graph.
Q6]
sum = 0
for i in range(1, 101):
    sum = sum + i
print(sum)
The provided code calculates the sum of numbers from 1 to 100 using a loop. The
time complexity of this code can be analyzed as follows:
1. Initializing the variable `sum` takes constant time. Let's assume this time
complexity is O(1).
2. The loop iterates from `i = 1` to `i = 100`. Inside the loop, the following
operations are performed:
Since the loop runs 100 times (from 1 to 100), the time complexity of the loop is
O(100), which simplifies to O(1) when considering big O notation.
3. After the loop, there are no significant additional operations in terms of time
complexity.
Combining all these factors, the overall time complexity of the given code can be
approximated as O(1) (for the initialization) + O(1) (for the loop) = O(1).
So, the time complexity equation for the given code is T(n) = O(1), where `n`
represents the number of iterations (which is 100 in this case).
Q7. Illustrate the concept of space complexity and the equations for the algorithms.
The space complexity equation for an algorithm can be expressed using Big O
notation, similar to time complexity. It provides an upper bound on the amount of
memory space the algorithm uses relative to the size of the input.
def constant_space(n):
    a = 5
    b = 10
    return a + b

def linear_space(n):
    data = [0] * n
    for i in range(n):
        data[i] = i
    return data

def quadratic_space(n):
    matrix = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            matrix[i][j] = i * j
    return matrix
Example 4: Recursive Space Complexity
def recursive_space(n):
if n <= 0:
return
recursive_space(n - 1)
The space complexity of an algorithm depends on factors like the data structures
used, the number of variables, and the depth of recursion. The most significant
contributor to space complexity is often the function call stack when dealing with
recursion.
When analyzing space complexity, it's important to consider temporary space used
by variables, the input size, and any additional data structures created during the
algorithm's execution. Similar to time complexity, space complexity helps us
choose the most efficient algorithm for a given problem based on the available
memory resources.
Explain Binary Search with pseudo code
Binary search is a search technique that works efficiently on sorted lists. Hence, to search for an element in some list using the binary search technique, we must ensure that the list is sorted.
Binary search follows the divide and conquer approach in which the list is divided into two halves, and the item is compared with the middle element of the list. If a match is found, the location of the middle element is returned. Otherwise, we search in one of the two halves depending on the result of the comparison.
Best Case Complexity - In binary search, the best case occurs when the element to search is found in the first comparison, i.e., when the middle element itself is the element to be searched. The best-case time complexity of binary search is O(1).
Average Case Complexity - The average case time complexity of binary search is O(log n).
Worst Case Complexity - In binary search, the worst case occurs when we have to keep reducing the search space until it has only one element. The worst-case time complexity of binary search is O(log n).
Step 1: [INITIALIZE] set beg = lower_bound, end = upper_bound, pos = -1; here 'beg' is the index of the first array element, 'end' is the index of the last array element, 'val' is the value to search
Step 2: repeat steps 3 and 4 while beg <= end
Step 3: set mid = (beg + end)/2
Step 4: if a[mid] = val
set pos = mid
print pos
go to step 6
else if a[mid] > val
set end = mid - 1
else
set beg = mid + 1
[end of if]
[end of loop]
Step 5: if pos = -1
print "value is not present in the array"
[end of if]
Step 6: exit
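The step-wise procedure above can be sketched as a runnable Python function (an illustrative iterative version, with names of my choosing):

```python
def binary_search(a, val):
    """Return the index of val in sorted list a, or -1 if not present."""
    beg, end = 0, len(a) - 1
    while beg <= end:
        mid = (beg + end) // 2
        if a[mid] == val:
            return mid          # match found at the middle element
        elif a[mid] > val:
            end = mid - 1       # continue in the left half
        else:
            beg = mid + 1       # continue in the right half
    return -1                   # search space exhausted

a = [11, 14, 25, 30, 40, 41, 52, 57, 70]
print(binary_search(a, 40))  # → 4
print(binary_search(a, 12))  # → -1
```

Each iteration halves the search space, which is why the worst case is O(log n).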
Explain Quick Sort with pseudo code
QuickSort is a sorting algorithm based on the Divide and Conquer algorithm that
picks an element as a pivot and partitions the given array around the picked pivot
by placing the pivot in its correct position in the sorted array.
The key process in quickSort is partition(). The target of partition() is to place the pivot (any element can be chosen to be the pivot) at its correct position in the sorted array, put all smaller elements to the left of the pivot, and all greater elements to the right of the pivot.
Partitioning is done recursively on each side of the pivot after the pivot is placed in its correct position, and this finally sorts the array.
quickSort(arr, low, high):
    if low < high:
        pivotIndex = partition(arr, low, high)
        quickSort(arr, low, pivotIndex - 1)    # Recursively sort the left subarray
        quickSort(arr, pivotIndex + 1, high)   # Recursively sort the right subarray

partition(arr, low, high):
    pivot = arr[high]
    i = low - 1
    for j = low to high - 1:
        if arr[j] < pivot:
            i = i + 1
            swap(arr[i], arr[j])
    swap(arr[i + 1], arr[high])
    return i + 1
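As a runnable sketch in Python (using the Lomuto partition scheme with the last element as pivot, consistent with the pseudocode; names are mine):

```python
def quick_sort(arr, low, high):
    """Sort arr[low..high] in place by partitioning around a pivot."""
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)     # sort elements left of the pivot
        quick_sort(arr, p + 1, high)    # sort elements right of the pivot

def partition(arr, low, high):
    pivot = arr[high]                   # last element chosen as pivot
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]   # place pivot in its spot
    return i + 1

data = [10, 80, 30, 90, 40, 50, 70]
quick_sort(data, 0, len(data) - 1)
print(data)  # → [10, 30, 40, 50, 70, 80, 90]
```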
Explain Merge Sort with pseudo code
Merge sort is similar to the quick sort algorithm in that it uses the divide and conquer approach to sort the elements. It is one of the most popular and efficient sorting algorithms. It divides the given list into two equal halves, calls itself for the two halves, and then merges the two sorted halves. We have to define the merge() function to perform the merging.
The sub-lists are divided again and again into halves until the list cannot be divided further. Then we combine the pairs of one-element lists into two-element lists, sorting them in the process. The sorted two-element pairs are merged into four-element lists, and so on, until we get the sorted list.
MergeSort(arr):
    if length(arr) <= 1:
        return arr
    mid = length(arr) / 2
    left_half = MergeSort(arr[0:mid])
    right_half = MergeSort(arr[mid:end])
    return Merge(left_half, right_half)

Merge(left, right):
    merged_arr = empty list
    while left and right are both non-empty:
        if first(left) <= first(right):
            move first(left) to merged_arr
        else:
            move first(right) to merged_arr
    append any remaining elements of left and right to merged_arr
    return merged_arr
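The pseudocode above can be sketched as runnable Python (an illustrative version that returns a new sorted list rather than sorting in place):

```python
def merge_sort(arr):
    """Return a new sorted list using the divide and conquer approach."""
    if len(arr) <= 1:
        return arr                       # a list of 0 or 1 elements is sorted
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])         # sort the left half
    right = merge_sort(arr[mid:])        # sort the right half
    return merge(left, right)

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])              # append any leftovers
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 10]))  # → [10, 27, 38, 43]
```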
Q8.What do you understand from the recursion concept.
void recursion() {
    recursion();    /* function calls itself */
}

int main() {
    recursion();
    return 0;
}
The C programming language supports recursion, i.e., a function to call itself. But
while using recursion, programmers need to be careful to define an exit condition
from the function, otherwise it will go into an infinite loop.
Recursive functions are very useful to solve many mathematical problems, such as
calculating the factorial of a number, generating Fibonacci series, etc.
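For instance, a terminating recursive function (with an explicit exit condition, unlike the infinite example above) might look like this illustrative factorial sketch:

```python
def factorial(n):
    """Compute n! recursively; n == 0 is the exit condition."""
    if n == 0:
        return 1                    # base case stops the recursion
    return n * factorial(n - 1)     # recursive case reduces the problem

print(factorial(5))  # → 120
```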
Q10 Elaborate the concept of master method in recursion.
The Master Method is a specific technique used for analyzing the time complexity
of divide-and-conquer algorithms that follow a certain recurrence relation. It
provides a convenient way to determine the time complexity of such algorithms
without having to go through detailed analysis using methods like recurrence trees
or substitution.
The recurrence relation that the Master Method is applicable to has the following form:
T(n) = aT(n/b) + f(n)
Where:
a is the number of subproblems that each have a size of n/b (where b > 1).
f(n) is the time complexity of the work done outside of the recursive calls
(combine, partition, etc.).
n/b represents the size of each subproblem relative to the original problem size.
The Master Method provides a way to determine the time complexity of the
algorithm based on the values of a, b, and the function f(n).
The Master Method is particularly useful when you can express the time
complexity of the work done outside the recursive calls using known functions like
polynomial or logarithmic functions. It simplifies the analysis and allows you to
quickly determine the time complexity of the algorithm without explicitly
constructing a recurrence tree or performing substitution.
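As a worked illustration (a standard textbook example, not taken from these notes), consider T(n) = 2T(n/2) + n:

```
T(n) = 2T(n/2) + n            a = 2, b = 2, f(n) = n
n^(log_b a) = n^(log_2 2) = n^1 = n
f(n) = n matches n^(log_b a)  →  Case 2 of the Master Method
Therefore T(n) = Θ(n log n)
```

This is the recurrence of merge sort, and the result agrees with its known O(n log n) running time.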
Q11. Analyse the use of divide and conquer concept in analysing the
algorithms.
Divide and Conquer algorithm solves a problem using the following three steps.
Divide: Break the given problem into subproblems of the same type.
Conquer: Recursively solve these subproblems.
Combine: Put together the solutions of the subproblems to get the solution to the whole problem.
Examples: The following computer algorithms are based on the Divide & Conquer approach:
Binary Search
Tower of Hanoi.
Q12 Explain the binary search with the help of divide and conquer strategy.
Binary Search Algorithm can be applied only on sorted arrays. So, the elements must be arranged in sorted order.
The binary search algorithm is used to search for an element 'item' in this linear array.
If the search ends in success, it sets loc to the index of the element; otherwise it sets loc to -1.
Variables beg and end keep track of the indices of the first and last elements of the array or subarray in which the element is being searched at that instant.
Variable mid keeps track of the index of the middle element of that array or subarray in which the element is being searched at that instant.
Case-01 : If the element being searched is found to be the middle-most element, its index is returned.
Case-02 : If the element being searched is greater than the middle-most element, the search continues in the right subarray of the middle-most element.
Case-03 : If the element being searched is smaller than the middle-most element, the search continues in the left subarray of the middle-most element. This iteration keeps repeating on the subarrays until the desired element is found or the size of the subarray reduces to zero.
Q13 a1=[11, 14, 25, 30, 40, 41, 52, 57, 70] Consider the above array and solve
the problem by using the divide and conquer strategy.
Let's walk through the steps of the divide and conquer strategy to solve the problem of searching for a target value in the array a1=[11, 14, 25, 30, 40, 41, 52, 57, 70].
Problem Statement: Given the sorted array a1 and a target value, find whether the
target value exists in the array and if it does, find its index.
Step 1: Divide
Split the array at its middle element. For a1 = [11, 14, 25, 30, 40, 41, 52, 57, 70], the middle element is 40 (index 4).
Step 2: Conquer
Compare the middle element of the array with the target value (let's say the target value is 41). Since 41 is greater than 40, we focus on the right half [41, 52, 57, 70].
Step 3: Divide
The middle element of the right half is 52.
Step 4: Conquer
Since 41 is smaller than 52, we focus on the left half [41].
Step 5: Conquer
Compare the only remaining element (41) with the target value. They match, so the target value is found at index 5 of the original array.
This process demonstrates how the divide and conquer strategy is applied to search for a target value in a sorted array. The steps involve repeatedly dividing the array into halves and narrowing down the search based on comparisons with the middle elements.
Q14 arr[] = {38, 27, 43, 10} Consider the above array and solve the problem by using merge sort.
Certainly, let's walk through the steps of the Merge Sort algorithm for the given array arr[] = {38, 27, 43, 10}:
Step 1: Divide
Split the array into halves: {38, 27} and {43, 10}, and then into single elements: {38}, {27}, {43}, {10}.
Step 2: Conquer
Merge the single-element lists in sorted order: {27, 38} and {10, 43}.
Step 3: Combine
Merge: {10, 27, 38, 43}
Q15 How to find minimum and maximum element in array using divide and
conquer? Give the examples.
Finding the minimum and maximum elements in an array using the divide and
conquer strategy can be done by recursively dividing the array into smaller
subproblems and then combining the results to get the overall minimum and
maximum. Here's how you can do it:
Algorithm:
Base Case: If the array contains only one element, return that element as both the
minimum and maximum.
Combine: Compare the minimum and maximum values from the two halves to
determine the minimum and maximum for the entire array.
def find_min_max(arr, low, high):
    if low == high:                     # one element: it is both min and max
        return arr[low], arr[low]
    if high - low == 1:                 # two elements: one comparison decides
        return min(arr[low], arr[high]), max(arr[low], arr[high])
    mid = (low + high) // 2
    min1, max1 = find_min_max(arr, low, mid)
    min2, max2 = find_min_max(arr, mid + 1, high)
    return min(min1, min2), max(max1, max2)

# Example array
arr = [14, 8, 23, 40, 12, 42, 31, 6]
min_val, max_val = find_min_max(arr, 0, len(arr) - 1)
print("Minimum:", min_val)
print("Maximum:", max_val)
Example:
For the example array arr = [14, 8, 23, 40, 12, 42, 31, 6], the code will output:
Minimum: 6
Maximum: 42
In this example, the divide and conquer strategy is used to recursively find the
minimum and maximum elements in the array. The algorithm effectively breaks
down the problem into smaller subproblems and combines the results to achieve
the desired outcome.
Q16 T(n) = 8T(…): apply the master theorem on it. Solve the equation by the master theorem.
1. Substitution Method:
The substitution method involves guessing the form of the solution and then using mathematical induction to verify that the guess satisfies the recurrence relation.
2. Iteration Method:
The iteration method involves expanding the recurrence relation through iterations,
essentially "unrolling" the recurrence into a sequence of equations. This helps you
observe patterns and make conjectures about the solution. The iteration method is
useful when the recurrence relation is simple and follows a clear pattern.
3. Recursion Tree Method:
The recursion tree method involves representing the recurrence relation as a tree, where each level of the tree corresponds to a recursive
call and the branching represents the multiple subproblems generated. The total
work done at each level is summed up to determine the overall time complexity.
This method is particularly useful when analyzing recursive algorithms with
varying subproblem sizes.
4. Master Method:
The master method is a specific technique used to analyze the time complexity of
divide-and-conquer algorithms with a particular form of recurrence relation: `T(n)
= aT(n/b) + f(n)`. It provides a direct formula to determine the time complexity
based on the values of `a`, `b`, and `f(n)`. The master method is a quick and
efficient way to analyze the time complexity of certain recursive algorithms
without going through the detailed process of recurrence tree or substitution
methods.
These methods provide different ways to analyze and solve recurrence relations
that arise in the context of recursive algorithms. The choice of method depends on
the nature of the recurrence relation, its complexity, and the specific form of the
algorithm being analyzed.
Applicability:
Representation:
Focus:
Tree Method: Focuses on understanding the structure of the recursive calls and their impact on the overall time complexity.
Usage:
Tree Method: Useful when subproblems are not of equal size and when the
recurrence relation isn't in the required form for the Master Method.
Q20 Elaborate the concept of the Tower of Hanoi by using the recursion method.
The Tower of Hanoi is a classic mathematical puzzle that involves moving a stack
of disks from one peg to another peg, using a third peg as an intermediate, while
following specific rules. The puzzle is commonly used to demonstrate the concept
of recursion. The rules of the Tower of Hanoi are as follows:
Only one disk can be moved at a time.
Each move involves taking the top disk from one stack and placing it on top of another stack.
A larger disk may never be placed on top of a smaller disk.
The goal is to move the entire stack of disks from the source peg to the target peg,
using the auxiliary peg as an intermediate.
Recursive Solution:
The Tower of Hanoi problem can be elegantly solved using a recursive approach.
The key idea is to break down the problem into smaller subproblems that are
essentially the same as the original problem but with fewer disks. Here's how the
recursive solution works:
Base Case: If there's only one disk to move, simply move it from the source peg to
the target peg.
Recursive Case: For moving n disks from source to target, you can think of it as
moving the top n-1 disks from the source to the auxiliary peg, then moving the
bottom disk (the largest one) to the target peg, and finally moving the n-1 disks
from the auxiliary peg to the target peg.
def tower_of_hanoi(n, source, auxiliary, target):
    if n == 1:
        print(f"Move disk 1 from {source} to {target}")
        return
    tower_of_hanoi(n - 1, source, target, auxiliary)   # move n-1 disks aside
    print(f"Move disk {n} from {source} to {target}")  # move the largest disk
    tower_of_hanoi(n - 1, auxiliary, source, target)   # move n-1 disks on top

# Number of disks
num_disks = 3
tower_of_hanoi(num_disks, 'A', 'B', 'C')
Example:
The recursive approach elegantly breaks down the Tower of Hanoi problem into
smaller subproblems, allowing you to move a sequence of disks from one peg to
another while following the rules of the puzzle.
UNIT 3 and 4
1.Discuss the basic strategy of greedy method.
Greedy Method is one of the strategies, like divide and conquer, used to
solve problems.
This method is used to solve optimization problems.
An optimization problem is a problem that demands minimum
or maximum results.
The Greedy method is the simplest and most straightforward approach.
The main feature of this approach is that the decision is taken on the
basis of the currently available information.
Examples
• Finding the shortest path between two vertices using Dijkstra's algorithm.
• Finding the minimal spanning tree in a graph using Prim's/Kruskal's algorithm, etc.
2. Explain the application to job sequencing with deadline problem.
Algorithm
• Find the maximum deadline value from the input set of jobs.
• Once the deadline is decided, arrange the jobs in descending order of their profits.
• Select the jobs with the highest profits, with their time periods not exceeding the maximum deadline.
• The selected set of jobs is the output.
It may happen that all the given jobs cannot be completed within their deadlines. Assume that the deadline of the ith job Ji is di and the profit received from job Ji is pi. Hence, the optimal solution of the job sequencing with deadlines algorithm is a feasible solution with maximum profit.
• Each job has a deadline di and it can be processed within its deadline; only one job can be processed at a time.
• Only one CPU is available for processing all jobs.
• The CPU can take only one unit of time for processing any job.
• All jobs arrive at the same time.
Job Sequencing with Deadlines Example
Job      J1   J2   J3   J4   J5
Deadline  2    1    1    2    3
Profit   40  100   20   60   20
The given jobs are sorted as per their profit in descending order to solve
this problem. Hence, the jobs are ordered after sorting, as shown in the
following table.
Job      J2   J4   J1   J5   J3
Deadline  1    2    2    3    1
Profit  100   60   40   20   20
From the given set of jobs, we first select J2, as it can be completed within its deadline and contributes the maximum profit.
Therefore, the sequence of jobs (J2, J4, J5) is executed within their deadlines and gives the maximum profit (100 + 60 + 20 = 180).
3. Difference between greedy and dynamic method.
Greedy Dynamic
4. Solve the following problem with minimum cost spanning trees.
If there are n vertices, then the spanning tree should have (n-1) edges. In this context, if each edge of the graph is associated with a weight and there exists more than one spanning tree, we need to find the minimum spanning tree of the graph.
Using Prim’s algorithm
5. Solve the single source shortest path problem.
6. Explain the strategy of dynamic programming in detail.
The following are the steps that the dynamic programming follows:
o Top-down approach
o Bottom-up approach
Top-down approach
It uses the memoization technique, which is equal to the sum of recursion and caching. Recursion means calling the
function itself, while caching means storing the intermediate results.
Advantages
Disadvantages
It uses the recursion technique, which occupies more memory in the call stack. Sometimes when the recursion is too deep, a stack overflow condition will occur.
Bottom-Up approach
The bottom-up approach is also one of the techniques which can be used to implement dynamic programming. It uses the tabulation technique to implement the dynamic programming approach. It solves the same kind of problems but it removes the recursion. If we remove the recursion, there is no stack overflow issue and no overhead of recursive functions. In this tabulation technique, we solve the problems and store the results in a matrix.
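Both approaches can be illustrated with Fibonacci numbers (an example of my choosing, not from the notes):

```python
from functools import lru_cache

@lru_cache(maxsize=None)        # top-down: recursion + caching (memoization)
def fib_memo(n):
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):                 # bottom-up: tabulation, no recursion
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(10), fib_tab(10))  # → 55 55
```

The memoized version still builds a call stack of depth n, while the tabulated version uses a simple loop over a table, illustrating the stack-overflow trade-off described above.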
7. Solve the problem of multistage graphs.
8. Solve the problem on traveling salesman problem.
     A    B    C    D
A    0   20   42   35
B   20    0   30   34
C   42   30    0   12
D   35   34   12    0
10.Solve the all pair shortest path.
UNIT 5
2) Explain BFS with example
Breadth First Search (BFS) can find the shortest path and minimum spanning tree for unweighted graphs. In an unweighted graph, the shortest path has the least number of edges, and BFS always reaches a vertex from a source using the minimum number of edges. Any spanning tree is a minimum spanning tree in unweighted graphs, and either BFS or DFS can be used to find a spanning tree.
There are many ways to traverse a graph, but among them, BFS is the most commonly used approach. It is an algorithm for searching all the vertices of a tree or graph data structure, visiting them level by level with the help of a queue. BFS puts every vertex of the graph into two categories - visited and non-visited. It selects a single node in a graph and, after that, visits all the nodes adjacent to the selected node.
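The traversal just described can be sketched in Python (an illustrative example; the graph is hypothetical):

```python
from collections import deque

def bfs(graph, start):
    """Visit vertices level by level using a FIFO queue; return visit order."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:   # each vertex is enqueued only once
                visited.add(neighbour)
                queue.append(neighbour)
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A'))  # → ['A', 'B', 'C', 'D']
```

Because the queue releases vertices in arrival order, vertices one edge away from the source are visited before vertices two edges away, which is why BFS finds shortest paths in unweighted graphs.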
3) Write down the sequence of a graph using DFS
Depth First Search or depth first traversal is a recursive algorithm for searching all the vertices of a graph or tree data structure. Traversal means visiting all the nodes of a graph. DFS puts every vertex into one of two categories:
Visited
Not Visited
The purpose of the algorithm is to mark each vertex as visited while avoiding
cycles.
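A minimal recursive sketch (illustrative names, hypothetical graph):

```python
def dfs(graph, node, visited=None):
    """Recursive depth-first traversal; returns vertices in visit order."""
    if visited is None:
        visited = []
    visited.append(node)                  # mark the vertex as visited
    for neighbour in graph[node]:
        if neighbour not in visited:      # avoid revisiting (handles cycles)
            dfs(graph, neighbour, visited)
    return visited

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(dfs(graph, 'A'))  # → ['A', 'B', 'D', 'C']
```

Note how DFS goes as deep as possible (A → B → D) before backtracking to visit C, in contrast to the level-by-level order of BFS.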
4) Explain backtracking with example
FIND_SOLUTIONS(parameters):
    if (valid solution):
        store the solution
        return
    for (all choices):
        if (valid choice):
            APPLY(choice)
            FIND_SOLUTIONS(parameters)
            REMOVE(choice)    # backtrack: undo the choice
Applications of Backtracking
5) What do you understand by the NP/P Hamiltonian problem
NP-Complete:
The term "NP-complete" refers to a class of decision problems in computational complexity theory. A problem is NP-complete if it belongs to the class NP (nondeterministic polynomial time) and has the property that any other problem in NP can be reduced to it in polynomial time. In simpler terms, solving any NP-complete problem efficiently would imply an efficient solution for all problems in NP. The concept was introduced by Stephen Cook in 1971.
P Hamiltonian Path Problem:
The "P Hamiltonian Path Problem" refers to the class of problems that can
be solved in polynomial time. Specifically, if there is an algorithm that can
determine whether a Hamiltonian path exists in a given graph in polynomial
time, then the Hamiltonian path problem is said to be in P.
6) Explain the concept of the Hamiltonian problem
Key points:
1. Graph Representation:
- The problem is defined on a graph, which consists of vertices (nodes) and
edges (connections between nodes).
2. Hamiltonian Path:
- A Hamiltonian path is a way to traverse the entire graph by visiting each
vertex exactly once.
3. Objective:
- The goal is to determine whether there exists a Hamiltonian path in the
given graph.
4. Complexity:
- The Hamiltonian Path Problem is NP-complete, meaning that it is computationally challenging. No known polynomial-time algorithm exists to solve it for all cases.
5. Applications:
- The problem has practical applications in various fields, including network design, optimization, and logistics.
6. Algorithms:
- Solving the Hamiltonian Path Problem often involves algorithmic approaches such as backtracking or dynamic programming. These algorithms explore different paths in the graph to check for the existence of a Hamiltonian path.
Graph Colouring
Graph coloring can be described as a process of assigning colors to the vertices of a graph. In this, the same color should not be used to fill two adjacent vertices. We can also call graph coloring Vertex Coloring. In graph coloring, we have to take care that the graph must not contain any edge whose end vertices are colored with the same color. Such a graph is known as a properly colored graph.
A properly colored graph satisfies the following points:
o The same color cannot be used to color two adjacent vertices.
o Hence, we can call it a properly colored graph.
There are various applications of graph coloring. Some of their important applications
are described as follows:
o Assignment
o Map coloring
o Scheduling the tasks
o Sudoku
o Prepare time table
o Conflict resolution
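A coloring can be produced programmatically with a simple greedy sketch (an illustrative example of my own; greedy coloring does not always use the minimum number of colors, but it always produces a proper coloring):

```python
def greedy_coloring(graph):
    """Assign each vertex the smallest color index not used by its neighbours."""
    colors = {}
    for vertex in graph:
        used = {colors[n] for n in graph[vertex] if n in colors}
        color = 0
        while color in used:          # smallest color absent among neighbours
            color += 1
        colors[vertex] = color
    return colors

# Triangle A-B-C plus a pendant vertex D attached to C
graph = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B', 'D'], 'D': ['C']}
print(greedy_coloring(graph))  # → {'A': 0, 'B': 1, 'C': 2, 'D': 0}
```

In the result, every edge joins two differently colored vertices, so the graph is properly colored.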
8 Queens Problem
The eight queens problem is the problem of placing eight queens on an 8×8
chessboard such that none of them attack one another (no two are in the same row,
column, or diagonal). More generally, the n queens problem places n queens on an
n×n chessboard.
Explanation:
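A backtracking sketch for the n queens problem (an illustrative Python implementation; function and variable names are mine, and it counts the valid placements rather than printing boards):

```python
def solve_n_queens(n):
    """Count placements of n non-attacking queens on an n x n board."""
    count = 0
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        nonlocal count
        if row == n:
            count += 1                   # all rows filled: one valid solution
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                 # this square is attacked, try next
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            place(row + 1)               # recurse into the next row
            cols.discard(col); diag1.discard(row - col); diag2.discard(row + col)  # backtrack

    place(0)
    return count

print(solve_n_queens(8))  # → 92
```

Placing one queen per row and tracking occupied columns and diagonals in sets makes each validity check O(1); the recursion undoes each choice on the way back, which is the essence of backtracking.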