
GHRCEM, Wagholi, Pune

Department of AI, TY AI A, B, C Question Bank for CAE 2


Subject – DAA
Sr. No.  Question                                                                          Unit  BL  CO
01       Explain the Greedy method in detail.                                              3     2   3
02       State minimum cost spanning trees.                                                3     1   3
03       Explain the two types of MST algorithms.                                          3     2   3
04       Solve the job sequencing with deadlines problem.                                  3     3   3
05       Explain Dijkstra's algorithm in detail.                                           3     2   3
06       Differentiate between Prim's and Kruskal's algorithms.                            3     2   3
07       Find the MST using Prim's algorithm.                                              3     3   3
08       Find the MST using Kruskal's algorithm.                                           3     3   3
09       Calculate the single source shortest path.                                        3     3   3
10       Explain the terms: optimal solution, feasible solution, local optimal,            3     2   3
         global optimal.
11       Explain dynamic programming in detail.                                            4     2   4
12       State the multistage graph. Find the path in the graph below.                     4     3   4
13       Find the all-pairs shortest path in the diagram below.                            4     3   4
14       Explain the optimal binary search tree in detail.                                 4     2   4
15       What is the travelling salesman problem? Explain the methods used                 4     3   4
         for solving it.
16       Compare the Greedy method and dynamic programming with an example.                4     2   4
17       Find the optimal binary search tree.                                              4     3   4
18       Solve the following travelling salesman problem.                                  4     3   4


Mr. Girish Patil / Mrs. Anjali Gaur/ Mrs. Pranita Mokal Prof. Rachna Sabale

Subject In-charge
10 Explain the terms: Optimal solution, feasible solution, local optimal, global optimal

Optimal solution:

An optimal solution, in the context of optimization problems, refers to the best possible
solution among all feasible solutions according to the criteria or objective being optimized.
This could involve maximizing profits, minimizing costs, maximizing efficiency, or achieving
any other specified goal.

The term "optimal" implies that this solution is the most favorable or advantageous outcome
within the given constraints and parameters of the problem. It is the solution that either
maximizes or minimizes the objective function (depending on whether it's a maximization or
minimization problem) while still satisfying all constraints.

For example, in a transportation optimization problem where the objective is to minimize transportation costs while ensuring that all goods reach their destinations on time, the optimal solution would be the arrangement of routes, vehicles, and schedules that achieves the lowest total cost while meeting delivery deadlines and capacity constraints.

Feasible solution:

A feasible solution, in the context of optimization and mathematical modeling, refers to a solution that satisfies all the constraints of a given problem. Feasible solutions are valid and acceptable within the defined boundaries or conditions of the problem. These constraints could include limitations on resources, capacity, time, budget, or any other relevant factors.

Feasible solutions are crucial in optimization because they represent practical and realistic
outcomes that can be implemented or considered viable. However, a feasible solution may not
necessarily be the best or optimal solution. It simply meets the requirements set by the
problem without violating any constraints.

For example, in a production optimization problem where the goal is to maximize output
while adhering to constraints such as limited raw materials, labor availability, and production
capacity, a feasible solution would be a production plan that utilizes resources within the
specified limits. This plan may not be the most efficient or cost-effective, but it is feasible
because it doesn't exceed the available resources or violate any production constraints.

Local Optimal:

A local optimal solution is a solution that is optimal within a specific neighborhood or region
of the search space. It is the best solution among nearby feasible solutions but may not be the
globally optimal solution. Local optimality does not guarantee global optimality.
Global Optimal:

A global optimal solution is the best possible solution across the entire feasible region of the
optimization problem. It is superior to all other feasible solutions, including local optimal
solutions. Finding the global optimal solution is often the main goal in optimization problems,
especially in situations where small improvements in the objective function are crucial.
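The distinction can be illustrated with a small hill-climbing sketch: a greedy search that only moves to a better neighbor stops at a local optimum, which may or may not be the global one. The array of values and the start positions below are hypothetical.

```python
def hill_climb(values, start):
    """Greedy ascent on a 1-D array: move to a better neighbor until stuck.
    The index it stops at is a local optimum, not necessarily the global one."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(values)]
        best = max(neighbors, key=lambda j: values[j])
        if values[best] <= values[i]:
            return i                       # no better neighbor: local optimum
        i = best

values = [1, 5, 2, 3, 9, 4]   # local maximum 5 at index 1, global maximum 9 at index 4
print(values[hill_climb(values, 0)])   # 5  (stuck at the local optimum)
print(values[hill_climb(values, 5)])   # 9  (this start reaches the global optimum)
```

Starting from index 0, the climb stops at the local maximum 5; starting from index 5, it reaches the global maximum 9. Which optimum a local search finds depends on where it starts.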

11 Explain Dynamic programming in detail

Dynamic Programming

Dynamic programming is a technique that breaks a problem into sub-problems and saves their results for future use, so that we do not need to compute a result again. The property that optimal solutions to the subproblems yield an optimal overall solution is known as the optimal substructure property. The main use of dynamic programming is to solve optimization problems, i.e., problems in which we are trying to find the minimum or the maximum solution. Dynamic programming is guaranteed to find an optimal solution of a problem if one exists.

The definition of dynamic programming says that it is a technique for solving a complex problem by first breaking it into a collection of simpler subproblems, solving each subproblem just once, and then storing their solutions to avoid repetitive computations.

How does the dynamic programming approach work?

The following are the steps that the dynamic programming follows:

It breaks down the complex problem into simpler subproblems.

It finds the optimal solution to these sub-problems.

It stores the results of the subproblems. The process of storing the results of subproblems is known as memoization.

It reuses the stored results so that the same sub-problem is not calculated more than once.

Finally, calculate the result of the complex problem.

Approaches of dynamic programming

There are two approaches to dynamic programming:

Top-down approach

Bottom-up approach
Top-down approach

The top-down approach follows the memoization technique, while the bottom-up approach follows the tabulation method. Here memoization is equal to recursion plus caching. Recursion means calling the function itself, while caching means storing the intermediate results.
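As a sketch, the top-down (memoized) approach applied to the standard Fibonacci example might look like this; the function name and cache layout are illustrative:

```python
def fib_top_down(n, cache=None):
    """Top-down DP: recursion plus caching (memoization)."""
    if cache is None:
        cache = {}
    if n in cache:                 # reuse a stored result
        return cache[n]
    if n < 2:                      # base cases: fib(0) = 0, fib(1) = 1
        result = n
    else:                          # recursion: the function calls itself
        result = fib_top_down(n - 1, cache) + fib_top_down(n - 2, cache)
    cache[n] = result              # caching: store the intermediate result
    return result

print(fib_top_down(10))  # 55
```

Without the cache, the plain recursion recomputes the same sub-problems exponentially many times; with it, each sub-problem is solved exactly once.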

Advantages

It is very easy to understand and implement.

It solves the subproblems only when it is required.

It is easy to debug.

Disadvantages

It uses recursion, which occupies more memory in the call stack. When the recursion is too deep, a stack overflow condition can occur.

It occupies more memory, which degrades the overall performance.

Bottom-Up approach

The bottom-up approach is also one of the techniques which can be used to implement the
dynamic programming. It uses the tabulation technique to implement the dynamic
programming approach. It solves the same kind of problems but it removes the recursion. If
we remove the recursion, there is no stack overflow issue and no overhead of the recursive
functions. In this tabulation technique, we solve the problems and store the results in a
matrix.

The bottom-up approach avoids recursion, thus saving memory space. A bottom-up algorithm starts from the beginning, whereas a recursive algorithm starts from the end and works backward. In the bottom-up approach, we start from the base cases and build up to the final answer. As we know, the base cases in the Fibonacci series are 0 and 1. Since the bottom-up approach starts from the base cases, we will start from 0 and 1.
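A minimal bottom-up (tabulated) version of the same Fibonacci computation, filling a table from the base cases upward:

```python
def fib_bottom_up(n):
    """Bottom-up DP: fill a table starting from the base cases 0 and 1."""
    if n < 2:
        return n
    table = [0] * (n + 1)          # tabulation: one slot per subproblem
    table[1] = 1                   # base cases: table[0] = 0, table[1] = 1
    for i in range(2, n + 1):      # iterative, so no stack-overflow risk
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(10))  # 55
```

Both approaches produce the same answer; the bottom-up version trades the call stack for an explicit table.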

14 Explain Optimal binary search tree in detail

As we know, in a binary search tree the nodes in the left subtree have a lesser value than the root node, and the nodes in the right subtree have a greater value than the root node. We know the key value of each node in the tree, and we also know the frequency of each node, i.e., how often that node is searched. The frequency and key value together determine the overall cost of searching a node. The cost of searching is a very important factor in various applications, so the overall cost of searching should be as low as possible. The time required to search a node in an unbalanced BST is more than in a balanced binary search tree, as a balanced binary search tree contains fewer levels. One way to reduce the cost of searching is to construct what is known as an optimal binary search tree.

Let's understand through an example.

If the keys are 10, 20, 30, then five structurally different BSTs can be built from them, and each arrangement has a different expected search cost depending on how frequently each key is searched.
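A sketch of the classic O(n³) cost computation for an optimal BST over these keys; the search frequencies below are assumed for illustration, not taken from the question:

```python
def optimal_bst_cost(freq):
    """Minimum expected search cost of a BST over keys with given frequencies.
    cost[i][j] = minimum cost of a BST built from keys i..j (0-indexed)."""
    n = len(freq)
    prefix = [0] * (n + 1)                        # prefix sums: freq[i..j] in O(1)
    for i in range(n):
        prefix[i + 1] = prefix[i] + freq[i]
    cost = [[0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = freq[i]                      # single-key tree
    for length in range(2, n + 1):                # grow interval length
        for i in range(n - length + 1):
            j = i + length - 1
            total = prefix[j + 1] - prefix[i]     # every key moves one level deeper
            cost[i][j] = total + min(
                (cost[i][r - 1] if r > i else 0) +
                (cost[r + 1][j] if r < j else 0)
                for r in range(i, j + 1)          # try each key as the root
            )
    return cost[0][n - 1]

# keys 10, 20, 30 with assumed frequencies 3, 2, 5
print(optimal_bst_cost([3, 2, 5]))  # 17
```

With these frequencies the optimal tree puts 30 (the most frequent key) at the root, giving cost 5·1 + 3·2 + 2·3 = 17, which is cheaper than the balanced tree rooted at 20 (cost 18).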

15 What is the traveling salesman problem? Explain the methods used for solving it.

The traveling salesman problem (TSP) is an algorithmic problem tasked with finding the shortest route through a set of points or locations that must each be visited. In the problem statement, the points are the cities a salesperson might visit. The salesman's goal is to keep both the travel costs and the distance traveled as low as possible.

Focused on optimization, TSP is often used in computer science to find the most efficient
route for data to travel between various nodes. Applications include identifying network or
hardware optimization methods.

1. Nearest Neighbor (NN)

Nearest neighbor algorithm

The Nearest Neighbor method is probably the most basic TSP heuristic. The algorithm states that the driver must always visit the nearest unvisited destination or closest city next. Once all the cities in the loop are covered, the driver heads back to the starting point.

Solving TSP with this heuristic requires the user to choose a city at random, then repeatedly move on to the closest unvisited city. Once all the cities on the map are covered, you must return to the city you started from.
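A minimal sketch of the heuristic, assuming a symmetric distance matrix for four hypothetical cities:

```python
def nearest_neighbor_tour(dist, start=0):
    """Nearest Neighbor heuristic: from the current city, always move to the
    closest unvisited city, then return to the starting point."""
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        current = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[current][c])  # greedy choice
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)                # head back to the starting point
    return tour

# hypothetical symmetric distance matrix for 4 cities
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(nearest_neighbor_tour(dist))   # [0, 1, 3, 2, 0]
```

The heuristic is fast (O(n²)) but gives no guarantee of optimality; on some inputs the greedy choice leads to a noticeably longer tour than the best one.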

2. The Branch and Bound Algorithm

The Branch and Bound Algorithm for traveling salesman problem


The Branch and Bound method follows the technique of breaking one problem into several smaller sub-problems, so it solves a series of problems. Each of these sub-problems may have multiple solutions, and the solution chosen for one sub-problem may affect the solutions of subsequent sub-problems. A bound on the best achievable cost is used to prune branches that cannot improve on the best tour found so far.
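A minimal branch-and-bound sketch using the simplest possible bound: a partial tour is pruned as soon as its cost reaches that of the best complete tour found so far. Real implementations use tighter lower bounds; the distance matrix here is hypothetical.

```python
def branch_and_bound_tsp(dist):
    """Depth-first search over partial tours, pruning any branch whose
    cost already matches or exceeds the best complete tour (the bound)."""
    n = len(dist)
    best = {"cost": float("inf"), "tour": None}

    def search(tour, cost):
        if cost >= best["cost"]:          # bound: prune this branch
            return
        if len(tour) == n:                # leaf: close the tour
            total = cost + dist[tour[-1]][tour[0]]
            if total < best["cost"]:
                best["cost"], best["tour"] = total, tour + [tour[0]]
            return
        for city in range(n):             # branch: try each unvisited city
            if city not in tour:
                search(tour + [city], cost + dist[tour[-1]][city])

    search([0], 0)
    return best["cost"], best["tour"]

# hypothetical symmetric distance matrix for 4 cities
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(branch_and_bound_tsp(dist))   # (80, [0, 1, 3, 2, 0])
```

Unlike the nearest-neighbor heuristic, branch and bound returns a provably optimal tour; the pruning only reduces how much of the search tree must be explored.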

3. The Brute Force Algorithm

Brute Force Algorithm to solve traveling salesman problem

The brute force approach considers every possible permutation of the routes. First, calculate the total number of routes, then draw and list all the possible routes. The distance of each route must be calculated, and the shortest route will be the optimal solution.
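A brute-force sketch that enumerates every permutation; this is feasible only for small n, since there are (n-1)! distinct tours from a fixed starting city. The distance matrix is hypothetical.

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Check every permutation of the cities and keep the cheapest tour."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):        # fix city 0 as the start
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# hypothetical symmetric distance matrix for 4 cities
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(brute_force_tsp(dist))   # (80, (0, 1, 3, 2, 0))
```

On this instance the brute-force optimum (80) happens to match the nearest-neighbor tour, but in general the heuristic can be worse.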

16 Compare Greedy Method and Dynamic Programming with example

Dynamic Programming:

1. Dynamic programming is used to obtain the optimal solution.
2. We make a choice at each step, but the choice may depend on the solutions to sub-problems.
3. Less efficient as compared to the greedy approach.
4. Example: 0/1 Knapsack.
5. It is guaranteed that dynamic programming will generate an optimal solution, using the Principle of Optimality.

Greedy Method:

1. The greedy method is also used to get an optimal solution.
2. We make whatever choice seems best at the moment and then solve the sub-problems arising after the choice is made.
3. More efficient as compared to dynamic programming.
4. Example: Fractional Knapsack.
5. In the greedy method, there is no such guarantee of getting an optimal solution.
