1. Find the time complexity of binary search when the search element is the last one.

The time complexity of binary search is O(log n), where n is the number of elements in the sorted
array.

When the search element is the last one, binary search still takes O(log n) time because it
performs the same halving steps regardless of the search element's position.

The algorithm first compares the search element with the middle element of the array. If the search
element is greater than the middle element, it narrows the search range to the right half of the array.
Otherwise, it narrows the search range to the left half of the array. This process is repeated until the
search element is found or the search range becomes empty.
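The steps above can be sketched as follows (the array and target are illustrative); even when the target is the last element, the loop runs only about log2(n) times:

```python
def binary_search(arr, target):
    """Iterative binary search on a sorted list; returns index of target or -1."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1   # narrow to the right half
        else:
            hi = mid - 1   # narrow to the left half
    return -1

data = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
print(binary_search(data, 91))  # last element -> index 9
```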

2. Write the time efficiency and drawbacks of merge sort.

Merge sort has a time efficiency of O(n log n) in the best, average, and worst cases.

One of the drawbacks of merge sort is that it requires additional memory space to store the
temporary arrays used in the merging process. This can be a problem when sorting very large
arrays, as the additional memory usage can be significant. Another drawback is that it may
not be the best choice for small arrays due to the overhead of the recursive calls and copying
of arrays; insertion sort or other simpler algorithms may be faster for small arrays.
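A minimal sketch of merge sort; the temporary lists built in the merge step are the O(n) extra memory mentioned above:

```python
def merge_sort(arr):
    """Recursive merge sort: O(n log n) time, O(n) extra space."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge step: these temporary lists are the extra memory cost.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # -> [1, 2, 5, 5, 6, 9]
```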

3. Mention the problems that can be solved using the divide and conquer method.

Sorting

Searching

Matrix Multiplication

Maximum Subarray

Closest Pair of Points

Polynomial Multiplication

5. State multistage graph. Give an example.

A multistage graph is a directed graph in which the nodes are divided into multiple
stages, and the edges are directed from one stage to the next. Each stage represents
a particular phase or step of a process, and the graph represents the sequence of
steps involved in completing the process.

An example of a multistage graph is the network of cities and transportation routes for
a package delivery company. The graph can be divided into multiple stages, with each
stage representing the different types of transportation used in delivering the package.
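As a sketch, a tiny hypothetical multistage graph and the stage-by-stage dynamic program that finds the cheapest source-to-sink path (the node names and edge costs are made up for illustration):

```python
# Hypothetical 4-stage graph: edges[u] maps node u to {successor: cost}.
edges = {
    's': {'a': 2, 'b': 5},   # stage 1 -> stage 2
    'a': {'c': 4, 'd': 1},   # stage 2 -> stage 3
    'b': {'c': 1, 'd': 3},
    'c': {'t': 3},           # stage 3 -> stage 4 (sink)
    'd': {'t': 6},
}

def min_cost(u, memo=None):
    """Cheapest cost from node u to the sink 't', stage by stage."""
    if memo is None:
        memo = {}
    if u == 't':
        return 0
    if u not in memo:
        memo[u] = min(w + min_cost(v, memo) for v, w in edges[u].items())
    return memo[u]

print(min_cost('s'))  # cheapest s -> t path cost
```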

6. Write the principle of optimality.

The principle of optimality is a fundamental principle in the field of dynamic programming. It
states that an optimal solution to a problem can be obtained by breaking the problem down
into smaller subproblems, and then recursively finding optimal solutions to those subproblems.

More specifically, the principle of optimality can be defined as follows:

"An optimal solution to a problem contains within it optimal solutions to subproblems."

In other words, if a problem can be divided into smaller subproblems, then the optimal solution
to the larger problem can be obtained by finding the optimal solutions to the smaller
subproblems, and then combining them in some way.

7. Name the algorithm used to find all pairs shortest path.

The algorithm used to find all pairs shortest path is called the Floyd-Warshall algorithm. The
Floyd-Warshall algorithm is a dynamic programming algorithm that solves the all-pairs
shortest path problem for a weighted, directed graph with positive or negative edge weights
(but without negative weight cycles). The algorithm finds the shortest path between every pair
of vertices in the graph by considering all possible paths through intermediate vertices. The
time complexity of the Floyd-Warshall algorithm is O(n^3), where n is the number of vertices
in the graph.
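The algorithm can be sketched in a few lines; the triple loop over intermediate vertex k, source i, and destination j is where the O(n^3) cost comes from (the example matrix is illustrative):

```python
INF = float('inf')

def floyd_warshall(dist):
    """In-place all-pairs shortest paths on an n x n distance matrix."""
    n = len(dist)
    for k in range(n):              # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# 4-vertex example; dist[i][j] is the edge weight, INF if no direct edge.
d = [[0,   3,   INF, 7],
     [8,   0,   2,   INF],
     [5,   INF, 0,   1],
     [2,   INF, INF, 0]]
floyd_warshall(d)
print(d[0][2])  # shortest 0 -> 2 path goes through vertex 1: 3 + 2 = 5
```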

8. Write the time complexity of recursion and DP in knapsack problems.

Recursion: The time complexity of the recursive implementation of the knapsack problem is
exponential, O(2^n), where n is the number of items in the knapsack. This is because the
recursive algorithm considers all possible subsets of items, which leads to a large number of
recursive calls.

Dynamic Programming: The time complexity of the dynamic programming implementation of
the knapsack problem is O(nW), where n is the number of items and W is the maximum weight
that the knapsack can hold. This is because the dynamic programming algorithm creates a table
of size (n+1) x (W+1) and fills it with values using a bottom-up approach, which takes O(nW)
time.

Memoization: The time complexity of the memoization implementation of the knapsack
problem is also O(nW). Memoization is a top-down approach, similar to recursion, but with the
added optimization of storing the results of previously computed subproblems in a table to
avoid redundant computation. Therefore, the time complexity is the same as the dynamic
programming implementation.
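The bottom-up O(nW) approach can be sketched as follows (the item weights, values, and capacity are illustrative):

```python
def knapsack(weights, values, W):
    """Bottom-up 0/1 knapsack: fills an (n+1) x (W+1) table in O(nW) time."""
    n = len(weights)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]          # option 1: skip item i
            if weights[i - 1] <= w:          # option 2: take item i if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][W]

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # best value with capacity 7 -> 9
```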

9. List the applications that, like the knapsack problem, make use of dynamic
programming.

Longest common subsequence

Matrix chain multiplication

Shortest path problem

Sequence alignment

Coin changing problem

Optimal binary search trees


Edit distance
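The first application listed, longest common subsequence, can be sketched with the standard DP table (the sample strings are illustrative):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of strings a and b, O(mn) DP."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1       # characters match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # -> 4 (e.g. "BCAB")
```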

10. Which data structure is suitable for the divide and conquer strategy?

Arrays are the natural data structure for the divide and conquer strategy, since they can be
split into subarrays cheaply. Quicksort, for example, uses divide and conquer to sort a
collection of elements: it chooses a pivot element, partitions the array into two subarrays,
recursively sorts the subarrays, and then combines the sorted subarrays to form the final
sorted array.
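A minimal sketch of those steps (this version builds new lists for clarity; in-place partitioning is the usual optimization):

```python
def quicksort(arr):
    """Divide and conquer: partition around a pivot, recurse on both halves."""
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]      # elements below the pivot
    mid = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]     # elements above the pivot
    return quicksort(left) + mid + quicksort(right)

print(quicksort([3, 6, 1, 8, 2, 9, 4]))  # -> [1, 2, 3, 4, 6, 8, 9]
```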

11. List the techniques of dynamic programming.

The common techniques used in dynamic programming include memoization, tabulation, state
space reduction, bitmasking, divide and conquer, and greedy algorithms. These techniques can
be used in combination to achieve the desired performance and correctness for a specific
problem.

12. Write down the optimization technique used for Warshall's algorithm. State the rules and
assumptions implied behind it.

The optimization technique used for Warshall's algorithm is loop unrolling, which reduces the
number of iterations required by updating multiple elements at once. Loop unrolling assumes
that the loop body can be executed independently of the loop variable and that it is relatively
small compared to the overhead of loop control instructions. While loop unrolling can improve
performance, its effectiveness may depend on the specific characteristics of the program being
optimized.

13. Define optimal binary search tree.

An optimal binary search tree minimizes the expected search time for a set of keys by
arranging the keys in a way that reduces the average number of comparisons required to search
for a key.
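A minimal sketch of the classic O(n^3) DP that computes this minimum expected cost; the access frequencies are illustrative, and only successful searches are counted:

```python
def obst_cost(freq):
    """Minimum expected search cost for keys with access frequencies freq.

    cost[i][j] = cheapest tree over keys i..j, trying every key r as root.
    """
    n = len(freq)
    cost = [[0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = freq[i]                 # single-key tree
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            total = sum(freq[i:j + 1])       # every key drops one level deeper
            cost[i][j] = total + min(
                (cost[i][r - 1] if r > i else 0) +
                (cost[r + 1][j] if r < j else 0)
                for r in range(i, j + 1))    # try each key r as the root
    return cost[0][n - 1]

print(obst_cost([34, 8, 50]))  # -> 142
```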

14. How is the dynamic programming approach used to solve binomial coefficient problems?

To solve binomial coefficient problems using dynamic programming, we start by initializing a
table of size (n+1) x (k+1) with all values set to zero. We then fill in the table row by row
using Pascal's rule, C(i,j) = C(i-1,j-1) + C(i-1,j), with base cases C(i,0) = C(i,i) = 1.
Specifically, we compute C(i,j) by adding the values of C(i-1,j-1) and C(i-1,j), which have
already been computed and stored in the table.
At the end of this process, the value of C(n,k) will be stored in the bottom-right corner of the
table. The time complexity of this algorithm is O(nk), which is much faster than the
exponential time complexity of a brute-force approach.
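The table-filling process can be sketched as follows:

```python
def binomial(n, k):
    """Pascal's-rule DP: C(i,j) = C(i-1,j-1) + C(i-1,j); O(nk) time."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                          # base cases
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(binomial(10, 3))  # -> 120
```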

15. List the variants of the knapsack problem.

0/1 Knapsack Problem

Fractional Knapsack Problem

Bounded Knapsack Problem

Multiple Knapsack Problem

Unbounded Knapsack Problem

Subset Sum Problem

Knapsack with Profits and Losses
