
Ques 1. What is the Master Method?

Ans. The Master Method (also called the Master Theorem) gives asymptotic
bounds for divide-and-conquer recurrences of the form

T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1,

in which a is the number of subproblems, n/b is the size of each subproblem,
and f(n) is the cost of dividing the problem and combining the results.
Comparing f(n) with n^(log_b a) gives three cases:

1. If f(n) = O(n^(log_b a − ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and a·f(n/b) ≤ c·f(n) for
some constant c < 1, then T(n) = Θ(f(n)).
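For example, for T(n) = 9T(n/3) + n we have a = 9 and b = 3, so
n^(log_3 9) = n². Since f(n) = n = O(n^(2 − ε)), case 1 applies and
T(n) = Θ(n²).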

Ques 2. What is the difference between Dynamic Programming and the Divide and
Conquer mechanism?
Ans. Dynamic Programming (DP) and Divide and Conquer (D&C) are both
algorithmic paradigms that solve problems by breaking them down into
subproblems. However, there are key differences in their mechanisms and
approaches:
Divide and Conquer:
Subproblem Independence:
D&C breaks down a problem into smaller subproblems.
The subproblems are solved independently.
The solutions of subproblems are combined to solve the original problem.
Recursion:
D&C typically involves recursive algorithms.
The problem is divided into smaller instances of the same problem until base
cases are reached.
Overlapping Subproblems:
D&C may not handle overlapping subproblems efficiently.
If the same subproblem is solved multiple times, D&C might not optimize by
storing and reusing previous solutions.
Example Algorithms:
QuickSort and MergeSort are classic examples of Divide and Conquer
algorithms.
Binary search is another example.
Dynamic Programming:
Optimal Substructure:
DP requires the problem to have optimal substructure.
The optimal solution to the problem can be constructed from optimal
solutions of its subproblems.
Memoization/Tabulation:
DP optimizes by storing solutions to subproblems and reusing them when
needed.
Memoization involves storing solutions in a table for future reference.
Tabulation involves building a table of solutions bottom-up.
Bottom-Up Approach:
DP often uses a bottom-up approach to solve the problem.
It starts from the simplest subproblems and builds solutions for larger
subproblems.
Overlapping Subproblems:
DP explicitly addresses overlapping subproblems.
The solutions to subproblems are stored in a data structure (like a table) and
reused when needed.
Example Algorithms:
Fibonacci series calculation using memoization or tabulation.
Shortest Path algorithms like Floyd-Warshall.
Longest Common Subsequence (LCS) problem.
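To make the memoization/tabulation distinction concrete, here is a minimal
Python sketch of both styles applied to the Fibonacci numbers:

# Top-down DP (memoization): cache the result of each recursive call.
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up DP (tabulation): fill a table from the smallest subproblems up.
def fib_tab(n):
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(30), fib_tab(30))  # both print 832040

Because each subproblem is solved only once, both versions run in O(n) time,
whereas the naive divide-and-conquer recursion takes exponential time.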

Ques 3. What do you mean by Approximation Algorithms? Give two examples.


Ans. Approximation algorithms are algorithms designed to find near-optimal
solutions to optimization problems, especially problems known to be NP-hard.
These algorithms provide solutions that are provably close to the optimal
solution but may not guarantee the absolute optimum. The goal is to
efficiently find a solution that is "good enough" within a reasonable
(polynomial) amount of time.

Here are two examples of approximation algorithms:

• 2-Approximation for the Vertex Cover Problem:


• Problem: Given an undirected graph, find a minimum-sized vertex cover (a set
of vertices that touches every edge).
• Approximation Algorithm: Repeatedly pick any edge (u, v) that is not yet
covered, add both u and v to the cover, and remove all edges incident to u or v.
• Approximation Ratio: The chosen edges form a matching, and any vertex cover
must contain at least one endpoint of every matched edge, so the solution is at
most twice the size of the optimal solution. The approximation ratio is 2.
(The simpler highest-degree greedy heuristic does not achieve ratio 2; its
ratio grows as Θ(log n).)
• Christofides Algorithm for the Metric Traveling Salesman Problem (TSP):
• Problem: In the TSP, a salesman must visit each city exactly once and return to
the starting city, minimizing the total distance traveled.
• Approximation Algorithm: The Christofides algorithm builds a minimum
spanning tree, adds a minimum-weight perfect matching on its odd-degree
vertices, and shortcuts the resulting Eulerian tour into a Hamiltonian tour.
• Approximation Ratio: Christofides algorithm guarantees a solution that is at
most 3/2 times the length of the optimal solution. In other words, the
approximation ratio is 1.5.
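A minimal Python sketch of the matching-based 2-approximation described
above:

def vertex_cover_2approx(edges):
    """Matching-based 2-approximation for vertex cover.
    edges: iterable of (u, v) pairs of an undirected graph."""
    cover = set()
    for u, v in edges:
        # Only uncovered edges force a choice; take both endpoints.
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Example: the path 1-2-3-4 has optimal cover {2, 3}; this returns a cover
# of size at most 4.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))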
Ques 4. What do you mean by Lower Bounds?
Ans. In the context of algorithm analysis and computational complexity, a lower bound
is a theoretical limit on the efficiency of an algorithm or a problem-solving approach.
Lower bounds help establish a baseline for the minimum amount of resources (such as
time or space) required to solve a particular problem. They are crucial for understanding
the inherent difficulty or complexity of a problem and can be used to assess the
optimality of algorithms.

There are different types of lower bounds:

• Time Complexity Lower Bounds:
• These bounds describe the minimum amount of time required to solve a
problem. For example, if a problem has a proven lower bound of Ω(f(n)), it means
that any algorithm solving that problem must take at least Ω(f(n)) time in the
worst case.
• Space Complexity Lower Bounds:
• These bounds focus on the minimum amount of memory or space required to
solve a problem. If a problem has a lower bound of Ω(g(n)), it means that any
algorithm solving that problem must use at least Ω(g(n)) space in the worst
case.
• Communication Complexity Lower Bounds:
• In distributed computing or parallel computing, communication complexity lower
bounds describe the minimum amount of communication needed between
different components or processors.
• Decision Tree Lower Bounds:
• Decision trees are used to model the computation of algorithms. Lower bounds
on decision trees represent the minimum number of comparisons (or other basic
operations) needed to solve a problem.
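As a classic example of a decision tree lower bound: any comparison-based
sorting algorithm must distinguish all n! possible orderings of its input, so
its decision tree needs at least n! leaves. A binary tree with n! leaves has
height at least log2(n!) = Θ(n log n), so every comparison sort requires
Ω(n log2 n) comparisons in the worst case.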

Ques 5. What is the Greedy Method?


Ans. The Greedy Method is an algorithmic paradigm that follows the problem-solving
heuristic of making the locally optimal choice at each stage with the hope of finding a
global optimum. In other words, at each step, the algorithm makes the best possible
decision based on the information available at that moment, without considering the
consequences of that decision on future steps.

Key characteristics of greedy algorithms include:

• Greedy Choice Property:
• At each step, the algorithm selects the best option available at that particular
moment, without considering the overall problem.
• Optimal Substructure:
• The solution to the problem can be constructed by combining optimal solutions
to its subproblems.
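As an illustration, here is a minimal Python sketch of a classic greedy
algorithm, activity selection (pick the maximum number of non-overlapping
intervals by always taking the one that finishes earliest):

def activity_selection(intervals):
    """Greedy activity selection. intervals: list of (start, finish) pairs."""
    chosen = []
    last_finish = float("-inf")
    # Greedy choice: consider activities in order of earliest finish time.
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:  # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# Picks (1, 3), (4, 6), (7, 9): a maximum-size compatible set.
print(activity_selection([(1, 3), (2, 5), (4, 6), (5, 8), (7, 9)]))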

Ques 6. State Cook's Theorem.


Ans. Cook's Theorem, also known as the Cook-Levin Theorem, was proved by
Stephen Cook in 1971. It is a cornerstone of the theory of computational
complexity and the foundation for the concept of NP-completeness.

Cook's Theorem states:

Every problem in NP (nondeterministic polynomial time) is polynomial-time
reducible to the Boolean satisfiability problem (SAT). Equivalently, SAT is
NP-complete.

Here's a bit more explanation:

• NP (Nondeterministic Polynomial Time): NP is a complexity class that includes
decision problems for which a proposed solution can be verified quickly (in
polynomial time). However, finding a solution may not be as easy.
• Boolean Satisfiability Problem (SAT): SAT is the problem of determining
whether there exists an assignment of truth values to variables in a propositional
logic formula such that the formula evaluates to true.
Ques 7. Write down the algorithm for Binary Search.

Ans.
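Binary search finds a target value in a sorted array by repeatedly halving
the search interval. A standard iterative sketch in Python:

def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2      # middle of the current interval
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1            # target can only lie in the right half
        else:
            high = mid - 1           # target can only lie in the left half
    return -1

print(binary_search([2, 5, 7, 11, 13, 17], 11))  # 3

Each iteration halves the interval, so the worst-case running time is
O(log2 n).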
Ques 8. Give a recurrence for the merge sort algorithm and solve it.

Ans. Merge sort splits the input into two halves, sorts each half recursively,
and merges the two sorted halves in linear time, which gives the recurrence

T(n) = 2T(n/2) + Θ(n)

By the Master Method with a = 2 and b = 2: n^(log_2 2) = n and f(n) = Θ(n), so
case 2 applies and T(n) = Θ(n log n).

Ques 9. What are the constraints required for a Backtracking method?

Ans. Backtracking is a general algorithm for finding all (or some) solutions to
computational problems, particularly constraint satisfaction problems. When using a
backtracking approach, certain constraints and components are essential for the method
to work effectively. Here are the key constraints and requirements for a Backtracking
method:

• Decision Space:
• The problem should be decomposable into a set of decisions. Each decision
represents a choice or an option that contributes to the solution.
• Feasibility Function:
• There should be a feasibility function that checks whether a partial solution can
be extended to a complete solution. This function helps in pruning the search
space when a partial solution cannot lead to a valid solution.
• Objective Function (optional):
• For optimization problems, an objective function is used to evaluate the quality
of a solution. The backtracking algorithm may aim to find the optimal solution,
and the objective function guides the search.
• Partial Solution:
• The algorithm incrementally builds a partial solution by making decisions and
backtracks when it determines that the partial solution cannot be extended to a
valid solution.
• Backtracking Mechanism:
• The algorithm needs a mechanism to backtrack when it encounters a dead-end
or determines that the current path will not lead to a valid solution. This typically
involves undoing the last decision and exploring other options.
• Proper Ordering of Decisions:
• Choosing a good order in which to make decisions can prune infeasible
branches earlier and dramatically shrink the search space (see the sketch
below).
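As an illustration, here is a minimal Python sketch of backtracking on the
N-Queens problem, showing the decision space (one queen per row), the
feasibility check, and the backtracking step:

def solve_n_queens(n):
    """Return one placement of n non-attacking queens as a list of column
    indices (one per row), or None if no solution exists."""
    placement = []

    def is_feasible(col):
        row = len(placement)
        for r, c in enumerate(placement):
            # Same column or same diagonal: partial solution is infeasible.
            if c == col or abs(c - col) == abs(r - row):
                return False
        return True

    def backtrack():
        if len(placement) == n:        # all decisions made: solution found
            return True
        for col in range(n):           # try each option for the current row
            if is_feasible(col):
                placement.append(col)  # extend the partial solution
                if backtrack():
                    return True
                placement.pop()        # dead end: undo the last decision
        return False

    return placement if backtrack() else None

print(solve_n_queens(8))  # e.g. [0, 4, 7, 5, 2, 6, 1, 3]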

Ques 10. Order the following time complexities in increasing order:

1, log2 n, n log2 n, n, n², 2^n, 3^n


Ans. In increasing order of growth:

1 < log2 n < n < n log2 n < n² < 2^n < 3^n

Part B

Ques 1.
Answer.

Ques 2. What is the use of the prefix function in the KMP string matching
algorithm? Explain with an example.

Ans. The prefix function (also called the failure function) of a pattern P
gives, for each position i, the length of the longest proper prefix of
P[0..i] that is also a suffix of P[0..i]. After a mismatch, KMP uses this
table to shift the pattern so that the longest matched prefix is reused
instead of restarting the comparison; the text pointer never moves backward,
so matching runs in O(n + m) time for a text of length n and a pattern of
length m.

Example: for the pattern "ababaca" the prefix function is
pi = [0, 0, 1, 2, 3, 0, 1]. If a mismatch occurs after matching "ababa"
(5 characters), pi[4] = 3 says that the last three matched characters "aba"
are also a prefix of the pattern, so matching resumes from position 3 of the
pattern rather than from the beginning.
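A minimal Python sketch of computing the prefix function:

def prefix_function(pattern):
    """pi[i] = length of the longest proper prefix of pattern[:i+1]
    that is also a suffix of pattern[:i+1]."""
    pi = [0] * len(pattern)
    k = 0  # length of the currently matched prefix
    for i in range(1, len(pattern)):
        # On a mismatch, fall back along previously computed borders.
        while k > 0 and pattern[k] != pattern[i]:
            k = pi[k - 1]
        if pattern[k] == pattern[i]:
            k += 1
        pi[i] = k
    return pi

print(prefix_function("ababaca"))  # [0, 0, 1, 2, 3, 0, 1]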

Ques 3. Explain the vertex cover and set cover problems.

Ans. Vertex Cover: Given an undirected graph G = (V, E), a vertex cover is a
subset C of V such that every edge has at least one endpoint in C. The
optimization problem asks for a minimum-size vertex cover; its decision
version is NP-complete, and the matching-based algorithm from Part A gives a
2-approximation.

Set Cover: Given a universe U and a family S of subsets of U whose union is
U, find the smallest subfamily of S whose union is still U. Set cover is
NP-hard; the greedy algorithm that repeatedly picks the set covering the most
uncovered elements achieves an approximation ratio of about ln n. Vertex
cover is the special case of set cover in which the elements are the edges
and each vertex contributes the set of its incident edges.
Ques 4. Write short notes on the following: (a) Quadratic Assignment Problem

(b) Boyer-Moore Algorithm

Ans. (a) Quadratic Assignment Problem (QAP): Given n facilities and n
locations, a flow matrix F (flow between each pair of facilities) and a
distance matrix D (distance between each pair of locations), find an
assignment (permutation) of facilities to locations that minimizes the total
cost, the sum over all pairs of flow multiplied by distance. QAP is NP-hard
and is among the hardest combinatorial optimization problems in practice;
facility layout and circuit placement are typical applications.

(b) Boyer-Moore Algorithm: A string matching algorithm that compares the
pattern against the text from right to left and uses two precomputed
heuristics, the bad-character rule and the good-suffix rule, to shift the
pattern by more than one position after a mismatch. Because mismatches are
usually detected early in the right-to-left scan, Boyer-Moore often skips
large portions of the text and is sublinear on average, making it one of the
fastest practical string matching algorithms.
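As a minimal illustration of the bad-character idea, here is a Python sketch
of the simplified Boyer-Moore-Horspool variant (the full Boyer-Moore
algorithm adds the good-suffix rule):

def horspool_search(text, pattern):
    """Return the index of the first occurrence of pattern in text,
    or -1 if it does not occur (bad-character rule only)."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return 0 if m == 0 else -1
    # Shift table: distance from each character's last occurrence
    # (excluding the final position) to the end of the pattern.
    shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    i = m - 1  # text index aligned with the last pattern character
    while i < n:
        k = 0
        while k < m and pattern[m - 1 - k] == text[i - k]:
            k += 1  # compare right to left
        if k == m:
            return i - m + 1
        i += shift.get(text[i], m)  # bad-character shift (default: m)
    return -1

print(horspool_search("here is a simple example", "example"))  # 17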

Ques 5. Explain the Las Vegas and Monte Carlo algorithms with examples.
Ans. Las Vegas Algorithm:
A Las Vegas algorithm is a randomized algorithm that always produces the
correct result; only its running time is probabilistic. The key characteristic
is that whenever it terminates, the answer is correct, but the time to
terminate depends on the algorithm's random choices, so it may be faster or
slower on different runs of the same input.

Example - Randomized Quicksort:

The Quicksort algorithm can be turned into a Las Vegas algorithm by randomly
choosing a pivot during the partitioning step. This ensures that the expected running
time is good, but the actual running time may vary depending on the specific random
choices made.
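A minimal Python sketch of randomized quicksort (the output is always
correct; only the running time is random):

import random

def quicksort(arr):
    """Randomized quicksort: always sorts correctly, but the running time
    depends on the random pivot choices (expected O(n log n))."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)  # the Las Vegas ingredient: a random pivot
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)

print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]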

Monte Carlo Algorithm:

A Monte Carlo algorithm is a randomized algorithm that may produce incorrect results
with some small probability. The key characteristic is that it has a probabilistic element,
and its correctness is not guaranteed. However, it is designed in such a way that the
probability of producing an incorrect result is very low.

Example - Primality Testing:

The Miller-Rabin primality test is a Monte Carlo algorithm for determining whether a
given number is likely to be a prime number. It may occasionally produce false positives
(indicating a composite number as prime), but the probability of this happening can be
made arbitrarily small by adjusting the number of random tests.
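A minimal Python sketch of the Miller-Rabin test (a "composite" verdict is
always correct; a "probably prime" verdict is wrong with probability at most
4^(-k) for k rounds):

import random

def is_probably_prime(n, k=20):
    """Miller-Rabin primality test with k random rounds."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # a^d mod n
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True  # probably prime

print(is_probably_prime(97), is_probably_prime(91))  # True False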

Ques 6. Solve the following recurrence relations and find their complexities
using the master method:

1. T(n) = 2T(√n) + log2 n


2. T(n) = … + n²
Ans. 1. For T(n) = 2T(√n) + log2 n, substitute m = log2 n (so n = 2^m and
√n = 2^(m/2)) and let S(m) = T(2^m). The recurrence becomes S(m) = 2S(m/2) + m.
By the Master Method (a = 2, b = 2, f(m) = m = Θ(m^(log_2 2))), case 2 gives
S(m) = Θ(m log m). Substituting back, T(n) = Θ(log n · log log n).
Ques 7. Define the terms P, NP, NP-complete and NP-hard problems.

Ans. P (Polynomial Time):
• Definition: P is the class of decision problems (problems with a yes/no answer)
that can be solved by a deterministic Turing machine in polynomial time.
• Characteristics: P problems are considered "efficiently solvable." The running
time of an algorithm for a P problem is bounded by a polynomial function in the
size of the input.
• Example: Determining whether a given graph is acyclic (acyclic graph testing) is
in P.
NP (Nondeterministic Polynomial Time):
• Definition: NP is the class of decision problems for which a given solution can be
verified by a deterministic Turing machine in polynomial time.
• Characteristics: While the solution may be hard to find, if someone presents a
potential solution, it can be checked efficiently. NP stands for "nondeterministic
polynomial time."
• Example: The decision version of the traveling salesman problem (is there a
tour of length at most k?) is in NP.
NP-Complete (Nondeterministic Polynomial Time Complete):
• Definition: A problem is NP-complete if it is in NP, and every problem in NP can
be reduced to it in polynomial time. In other words, it is one of the most
challenging problems in NP.
• Characteristics: If you can solve one NP-complete problem efficiently, you can
solve all problems in NP efficiently.
• Example: The Boolean satisfiability problem (SAT) is NP-complete.
NP-Hard (Nondeterministic Polynomial Time Hard):
• Definition: A problem is NP-hard if every problem in NP can be reduced to it in
polynomial time, but it may not be in NP itself. NP-hard problems are at least as
hard as the hardest problems in NP.
• Characteristics: While solving an NP-hard problem does not guarantee an
efficient solution for all NP problems, it is at least as hard as any problem in NP.
• Example: The halting problem is NP-hard but not in NP.
In summary, P represents efficiently solvable problems, NP includes problems that can
be efficiently checked, NP-complete problems are among the hardest in NP, and NP-
hard problems are at least as hard as NP-complete problems but may not be in NP. The
question of whether P = NP remains one of the most important open problems in
computer science.
