
Q.Write short note on memoization.

Memoization is a technique used in computer science to improve the performance of algorithms by storing the results of previous computations so they can be reused instead of recomputed. This can be particularly useful for advanced data structures, which often involve complex calculations.

One example of memoization in advanced data structures is the use of memo tables to implement dynamic programming algorithms. Dynamic programming algorithms solve problems by breaking them down into smaller subproblems and then recursively solving those subproblems; memo tables store the results of previously solved subproblems so that they do not have to be recomputed. Another example is the use of hash tables in self-organizing trees, which are trees that automatically restructure themselves to improve their performance. Here, hash tables store the mapping between keys and nodes in the tree, allowing the tree to quickly find the node corresponding to a given key.

Memoization can be a powerful technique for improving the performance of advanced data structures. However, it is important to use it carefully, as it can also lead to increased memory usage. Classic examples include computing Fibonacci numbers and the 0/1 knapsack problem.
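
To make the Fibonacci example concrete, here is a minimal sketch in Python (the function name and use of lru_cache as the memo table are illustrative choices, not from the original notes):

    from functools import lru_cache

    @lru_cache(maxsize=None)  # memo table: each fib(n) is computed only once
    def fib(n):
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(40))  # 102334155, computed in linear rather than exponential time

Without the cache, the naive recursion recomputes the same subproblems exponentially many times; the memo table reduces this to one computation per distinct argument.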

Q.Explain Longest common subsequence with example.

The longest common subsequence (LCS) problem is to find the longest subsequence that is
common to all the given sequences. A subsequence is a sequence that can be derived from
another sequence by deleting some (or none) of the elements without changing the order of
the remaining elements.

For example, an LCS of the strings "ABCD" and "ACBD" is "ABD": it is a subsequence of both strings, and no common subsequence is longer ("ACD", of the same length, is another valid answer).

There are many different ways to solve the LCS problem. One common approach is to use a
dynamic programming algorithm. The dynamic programming algorithm works by building a
table that stores the length of the LCS of all prefixes of the given sequences. Once the table is
built, the length of the LCS of the entire sequences can be found by looking up the
corresponding entry in the table.
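
As a sketch, the tabular solution in Python might look like this (function and variable names are illustrative):

    def lcs_length(X, Y):
        m, n = len(X), len(Y)
        # dp[i][j] = length of the LCS of the prefixes X[:i] and Y[:j]
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if X[i - 1] == Y[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1   # matching character extends the LCS
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop a character from one string
        return dp[m][n]

    print(lcs_length("ABCD", "ACBD"))  # 3, e.g. "ABD"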

The longest common subsequence problem has a number of applications, including:

Text comparison and editing
DNA sequencing
Bioinformatics
Compiler optimization

Q.Define P class.

P is a complexity class that contains all decision problems that can be solved by a
deterministic Turing machine in polynomial time. A decision problem is a problem that has a
yes or no answer. A deterministic Turing machine is a type of Turing machine that always
takes the same path for a given input. Polynomial time means that the time it takes the Turing
machine to solve the problem grows as a polynomial function of the size of the input.

P is considered to be the class of problems that are efficiently solvable. This is because
polynomial-time algorithms are relatively fast, even for large inputs.

Some examples of problems in P include:

Sorting a list of numbers
Searching for an element in a sorted list
Finding the shortest path between two points in a graph
Checking if a number is prime

Q.Define all-pairs shortest path algorithms in detail.

The "All-Pairs Shortest Path" (APSP) problem is a classic problem in graph theory and
computer science. It involves finding the shortest paths between all pairs of vertices in a
weighted directed graph.

Floyd-Warshall Algorithm:

The Floyd-Warshall algorithm is a dynamic programming approach to the APSP problem. It works for both positive and negative edge weights, as long as there are no negative cycles. The algorithm has a time complexity of O(V^3) and is suitable for dense graphs, where the number of edges is close to the maximum possible (V^2). It does not produce correct results for graphs that contain negative cycles, although such a cycle can be detected by checking for a negative entry on the diagonal of the resulting distance matrix.
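
A minimal sketch in Python, assuming the graph is given as an adjacency matrix with math.inf marking absent edges (an illustrative representation, not the only possible one):

    import math

    def floyd_warshall(dist):
        # dist[i][j] = weight of edge i -> j, math.inf if there is no edge
        n = len(dist)
        d = [row[:] for row in dist]           # work on a copy
        for k in range(n):                     # allow vertex k as an intermediate stop
            for i in range(n):
                for j in range(n):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
        return d

    INF = math.inf
    graph = [[0, 3, INF],
             [INF, 0, 1],
             [2, INF, 0]]
    print(floyd_warshall(graph))  # shortest distance between every pair of vertices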

Johnson's Algorithm:

Johnson's algorithm is an alternative to Floyd-Warshall that works well on graphs with negative edge weights, provided there are no negative cycles (the Bellman-Ford phase detects and reports a negative cycle if one exists). It first eliminates negative weights by reweighting the graph using the Bellman-Ford algorithm, and then applies Dijkstra's algorithm from each vertex to find the shortest paths. Johnson's algorithm has a time complexity of O(V^2 log V + VE) using Fibonacci heaps, making it more efficient than Floyd-Warshall for sparse graphs with negative weights.
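
A compact sketch in Python, assuming the graph is a dict mapping every vertex to a list of (neighbor, weight) pairs with comparable vertex labels (e.g., strings); a binary heap stands in for the Fibonacci heap:

    import heapq

    def johnson(graph):
        # Phase 1: Bellman-Ford potentials h(v), equivalent to adding a virtual
        # source connected to every vertex by a zero-weight edge.
        h = {v: 0 for v in graph}
        for _ in range(len(graph) - 1):
            for u in graph:
                for v, w in graph[u]:
                    if h[u] + w < h[v]:
                        h[v] = h[u] + w
        for u in graph:  # one extra pass: any improvement means a negative cycle
            for v, w in graph[u]:
                if h[u] + w < h[v]:
                    raise ValueError("graph contains a negative cycle")

        # Phase 2: Dijkstra from each vertex on the reweighted graph,
        # where w'(u, v) = w + h(u) - h(v) >= 0.
        def dijkstra(src):
            dist = {v: float('inf') for v in graph}
            dist[src] = 0
            pq = [(0, src)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist[u]:
                    continue  # stale queue entry
                for v, w in graph[u]:
                    nd = d + w + h[u] - h[v]
                    if nd < dist[v]:
                        dist[v] = nd
                        heapq.heappush(pq, (nd, v))
            # undo the reweighting to recover the true distances
            return {v: d - h[src] + h[v] for v, d in dist.items() if d < float('inf')}

        return {u: dijkstra(u) for u in graph}

    g = {'a': [('b', -2)], 'b': [('c', 3)], 'c': [('a', 4)]}
    print(johnson(g))  # true shortest distances, e.g. a -> b is -2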

Viterbi's Algorithm:

The Viterbi algorithm, as used here, is a specialized algorithm for shortest paths in weighted directed acyclic graphs (DAGs). A single-source run takes only linear time, O(V + E); repeating it from every vertex solves the APSP problem in O(V(V + E)) time. The algorithm is not applicable to general graphs but is useful in specific scenarios, especially in applications like dynamic programming and bioinformatics.

Start by topologically sorting the vertices of the DAG.
Initialize an array to store the shortest path distances.
Iterate through the vertices in topological order and update the shortest path distances based on the outgoing edges from each vertex.

The Viterbi algorithm is efficient and works well in situations where the graph structure
matches the constraints of a DAG.
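
A single-source sketch in Python (the dict-of-edge-lists representation and names are illustrative; run it once per vertex to obtain all pairs):

    def dag_shortest_paths(graph, source):
        # Topological sort via depth-first search.
        order, visited = [], set()
        def dfs(u):
            visited.add(u)
            for v, _ in graph[u]:
                if v not in visited:
                    dfs(v)
            order.append(u)
        for u in graph:
            if u not in visited:
                dfs(u)
        order.reverse()

        # Relax outgoing edges in topological order.
        dist = {v: float('inf') for v in graph}
        dist[source] = 0
        for u in order:
            if dist[u] < float('inf'):
                for v, w in graph[u]:
                    dist[v] = min(dist[v], dist[u] + w)
        return dist

    dag = {'a': [('b', 2), ('c', 5)], 'b': [('c', 1)], 'c': []}
    print(dag_shortest_paths(dag, 'a'))  # {'a': 0, 'b': 2, 'c': 3}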

Q.Explain in detail NP, NP-hard, and NP-complete problems.

NP (Nondeterministic Polynomial Time):

Definition: NP is a class of decision problems. A decision problem is a type of problem where the answer is "yes" or "no." NP includes problems for which a proposed solution can be verified in polynomial time.
Verification: In NP, if you are given a solution to a problem, you can efficiently verify
whether the solution is correct. This means that if someone claims a solution is
correct, you can check it in polynomial time.
Non-determinism: The term "nondeterministic" in NP refers to the idea that you can
guess a solution (in polynomial time) and then efficiently check its correctness. You
don't have to find the solution deterministically, just verify it.
Example: The classic example of an NP problem is the decision version of the Traveling Salesman Problem (TSP). Given a list of cities, the distances between them, and a bound k, the problem is to decide whether there is a route of length at most k that visits each city exactly once and returns to the starting city. While finding such a route is challenging, verifying that a proposed route meets the bound is relatively easy and can be done in polynomial time.

NP-Hard (Nondeterministic Polynomial-Time Hard):

Definition: NP-hard is a class of problems that are at least as hard as the hardest
problems in NP. In other words, NP-hard problems are at least as challenging as the
most difficult problems in NP but may not necessarily be decision problems.
Hardness: An NP-hard problem doesn't necessarily have to be verifiable in polynomial
time. These problems are often optimization problems where the goal is to find the
best solution (e.g., maximizing or minimizing an objective) among a set of possible
solutions.
Example: The Traveling Salesman Problem, as mentioned earlier, is also NP-hard. This
means that not only is it hard to find the optimal solution (as an optimization
problem), but it is also at least as difficult as the hardest problems in NP.

NP-Complete (Nondeterministic Polynomial-Time Complete):

Definition: NP-complete is a special class of problems within NP that are both in NP and NP-hard. These are the "hardest" problems in NP, meaning if you can solve any NP-complete problem efficiently, you can solve all problems in NP efficiently.
Completeness: An NP-complete problem is a problem for which a proposed solution
can be verified in polynomial time, and if you can solve one NP-complete problem in
polynomial time, you can solve any problem in NP in polynomial time.
Example: The Cook-Levin Theorem demonstrated the existence of the first known NP-
complete problem, called the Boolean Satisfiability Problem (SAT). In SAT, you're given
a Boolean formula, and the problem is to determine if there is an assignment of truth
values to variables that makes the formula evaluate to true. SAT is NP-complete, and if
you can efficiently solve SAT, you can efficiently solve any problem in NP.

Q.Explain in detail the Rabin-Karp string matching algorithm.

The Rabin-Karp string matching algorithm is a widely used method for finding occurrences of
a substring within a longer text. It's particularly efficient for searching for a fixed-length
pattern (substring) within a text. The algorithm uses hashing to compare substrings in the text
to the pattern, allowing for quick identification of potential matches.

Here are the detailed steps of the Rabin-Karp string matching algorithm:

1. Precompute the hash value of the pattern using a suitable hash function (e.g., a polynomial rolling hash).
2. Compute the hash value of the first substring of the text that has the same length as the pattern.
3. Compare the hash value of the current substring with the hash value of the pattern. If they match, proceed to a character-by-character comparison to confirm the match.
4. Slide the window one character to the right, updating the hash of the new substring in constant time using the rolling property.
5. Repeat steps 3 and 4 until the end of the text is reached.
6. Record the positions where a match is found.
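
A minimal sketch in Python using a polynomial rolling hash (the base and modulus are illustrative choices):

    def rabin_karp(text, pattern, base=256, mod=10**9 + 7):
        n, m = len(text), len(pattern)
        if m == 0 or m > n:
            return []
        high = pow(base, m - 1, mod)          # weight of the window's leading character
        p_hash = t_hash = 0
        for i in range(m):                    # hash the pattern and the first window
            p_hash = (p_hash * base + ord(pattern[i])) % mod
            t_hash = (t_hash * base + ord(text[i])) % mod
        matches = []
        for i in range(n - m + 1):
            # a hash match is confirmed character by character to rule out collisions
            if t_hash == p_hash and text[i:i + m] == pattern:
                matches.append(i)
            if i < n - m:                     # roll: drop text[i], append text[i + m]
                t_hash = ((t_hash - ord(text[i]) * high) * base + ord(text[i + m])) % mod
        return matches

    print(rabin_karp("abracadabra", "abra"))  # [0, 7]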

Advantages:

The Rabin-Karp algorithm is efficient for pattern matching, especially when you need to
find all occurrences of a fixed-length pattern in a long text.
It has an average-case time complexity of O(n + m), where "n" is the length of the text,
and "m" is the length of the pattern.
The rolling hash function minimizes the number of character comparisons, making it
faster for many practical scenarios.

Limitations:

The Rabin-Karp algorithm is sensitive to hash collisions: when two different substrings have the same hash value, a spurious hash match occurs and must be ruled out by direct comparison. Collisions are rare with a good hash function, but frequent collisions degrade the running time toward the O(nm) worst case.
The rolling hash function's choice and implementation are crucial for efficiency and
avoiding hash collisions.

Q.Four objects are given. The associated profits are p = {1, 2, 5, 6} and the weights are W = {2, 3, 4, 5}. Fill a bag of capacity M = 8 with these objects, using a dynamic programming approach, to obtain the maximum profit.

To solve this knapsack problem using dynamic programming, we can create a 2D table to
store the maximum profit achievable for different combinations of objects and weight
capacities. The goal is to fill the knapsack with objects to maximize the total profit without
exceeding the weight capacity. Here's a step-by-step solution:

Given:
Objects: A = {1, 2, 3, 4}
Corresponding Profits: p = {1, 2, 5, 6}
Corresponding Weights: W = {2, 3, 4, 5}
Knapsack Capacity: M = 8

Step 1: Initialize the data and the DP table, where dp[i][j] is the maximum profit achievable using the first i objects with capacity j:

    p = [1, 2, 5, 6]   # profits
    W = [2, 3, 4, 5]   # weights
    M = 8              # knapsack capacity
    dp = [[0] * (M + 1) for _ in range(5)]   # dp[i][j] = 0 for all i and j

Step 2: Fill the DP table:

    for i in range(1, 5):
        for j in range(M + 1):
            if W[i - 1] <= j:
                # either skip object i, or take it and add its profit
                dp[i][j] = max(dp[i - 1][j], dp[i - 1][j - W[i - 1]] + p[i - 1])
            else:
                dp[i][j] = dp[i - 1][j]

The completed DP table (rows i = objects considered, columns j = capacity):

i\j   0  1  2  3  4  5  6  7  8
0     0  0  0  0  0  0  0  0  0
1     0  0  1  1  1  1  1  1  1
2     0  0  1  2  2  3  3  3  3
3     0  0  1  2  5  5  6  7  7
4     0  0  1  2  5  6  6  7  8

Step 3: Backtracking to find the selected objects:

The maximum profit achieved is dp[4][8] = 8. Backtracking from that cell: dp[4][8] = 8 differs from dp[3][8] = 7, so object 4 is selected (weight 5, profit 6), leaving capacity 3; dp[3][3] = dp[2][3], so object 3 is skipped; dp[2][3] = 2 differs from dp[1][3] = 1, so object 2 is selected (weight 3, profit 2); object 1 is skipped. The selected objects are therefore {2, 4}, with total weight 3 + 5 = 8 and total profit 8.
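
A short sketch of the backtracking step in Python, continuing from the table built above:

    # Walk the table from dp[4][M] upward; a change between rows means the object was taken.
    selected, j = [], M
    for i in range(4, 0, -1):
        if dp[i][j] != dp[i - 1][j]:
            selected.append(i)
            j -= W[i - 1]
    print(sorted(selected))   # [2, 4] -> weight 3 + 5 = 8, profit 2 + 6 = 8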
