Daa Unit Iii


Design and Analysis of Algorithm [410241]

Ms. Rupali S. Shishupal


Asst. Professor
Dept. of Computer Engineering,
Sinhgad Institute of Technology, Lonavala

rss.sit@sinhgad.edu

Sinhgad Institutes
Vision and Mission of Institute

VISION

We are committed to producing not only good engineers but good human beings as well.

MISSION
• We believe in and work for the holistic development of
students and teachers.
• We strive to achieve this by imbibing a unique value system,
transparent work culture, excellent academic and physical
environment conducive to learning, creativity and technology
transfer.

Vision and Mission of the Department

VISION
We strive to produce globally competent computer professionals
enriched with innovative skills and good moral values with
societal concerns.

MISSION
• M1: To provide broad-based education and contemporary
knowledge by adopting modern teaching-learning methods.
• M2: To inculcate a spirit of innovation in students through
industrial interactions.
• M3: To develop individual’s potential to its fullest extent so
that they can emerge as gifted leaders in their fields

Course Objectives

• To develop problem solving abilities using mathematical theories.
• To apply algorithmic strategies while solving problems.
• To analyze performance of different algorithmic strategies in
terms of time and space.
• To develop time and space efficient algorithms.
• To study algorithmic examples in distributed and concurrent environments.
• To understand multithreaded and distributed algorithms.

Course Outcomes

On completion of the course, students will be able to–


CO1: Formulate the problem
CO2: Analyze the asymptotic performance of algorithms
CO3: Decide and apply algorithmic strategies to solve given
problem
CO4: Find optimal solution by applying various methods
CO5: Analyze and apply scheduling and sorting algorithms.
CO6: Solve problems for multi-core or distributed or
concurrent environments

Unit 3: Greedy and Dynamic Programming Algorithmic Strategies

Greedy strategy: Principle, control abstraction, time analysis of control abstraction, knapsack problem, scheduling algorithms - job scheduling and activity selection problem.
Dynamic Programming: Principle, control abstraction, time analysis of control abstraction, binomial coefficients, OBST, 0/1 knapsack, chain matrix multiplication.

Greedy Method

• The Greedy method is the most straightforward design technique and can be applied to a wide variety of problems.
• The algorithm works in steps: in each step it selects the best available option until all options are exhausted.
• Most of these problems have n inputs and require us to obtain a subset that satisfies some constraints.
• Any subset that satisfies these constraints is called a feasible solution.
• A feasible solution that either minimizes or maximizes a given objective function is called an optimal solution.

Greedy Algorithm

• The Greedy method suggests that one can devise an algorithm that works in stages, considering one input at a time.
• At each stage, a decision is made regarding whether a particular input belongs to an optimal solution or not.
• This is done by considering the inputs in an order determined by some selection procedure.
• If the inclusion of the next input into the partially constructed optimal solution results in a sub-optimal or infeasible solution, then that input is not added to the partial solution; otherwise, it is added. The selection procedure itself is based on some optimization measure.

Control Abstraction for Greedy Method

Select selects an input from a[] and removes it. The selected input’s
value is assigned to x.
Feasible is a Boolean-valued function that determines whether x can be
included into the solution vector or not.
Union combines x with the solution and updates the objective function.
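The control abstraction can be sketched as a generic driver. Select, Feasible, and Union are hooks supplied by the caller; the names follow the slide, but the Python signatures and the coin-change illustration are my own (hypothetical) choices:

```python
def greedy(a, select, feasible, union):
    """Generic greedy control abstraction: repeatedly Select the best
    remaining input, test feasibility, and Union it into the solution."""
    solution = []
    a = list(a)                      # work on a copy of a[]
    while a:
        x = select(a)                # pick the best input by some measure
        a.remove(x)                  # Select also removes it from a[]
        if feasible(solution, x):    # can x join the solution vector?
            solution = union(solution, x)
    return solution

# Illustration: greedy coin change with canonical denominations.
coins = [25, 10, 5, 1]
amount = 63

def select(a):            # largest remaining denomination
    return max(a)

def feasible(sol, x):     # x fits if it does not overshoot the target
    return sum(sol) + x <= amount

def union(sol, x):        # keep reusing a denomination while it fits
    while sum(sol) + x <= amount:
        sol = sol + [x]
    return sol

print(greedy(coins, select, feasible, union))  # [25, 25, 10, 1, 1, 1]
```

Note how the driver itself never changes; only the three hooks encode the particular problem.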

Types of Greedy Problems

• Subset Paradigm
• To solve a problem (or find the optimal/best solution), the greedy approach generates a subset by selecting one or more of the available choices. Examples include the knapsack problem and job sequencing with deadlines. In both problems the greedy method creates a subset of items or jobs that satisfies all the constraints.
• Ordering Paradigm
• Here, the greedy approach generates some arrangement/order to get the best solution. Examples include: Minimum Spanning Tree.

Applications

• Fractional knapsack algorithm
• Optimal Storage on tapes
• Job sequencing with deadline
• Single source shortest path
– Dijkstra's SSSP algorithm
• Activity Selection Problem
• Minimum Cost Spanning Tree

An Activity-Selection Problem

Let S = {1, 2, . . . , n} be the set of activities that compete for a resource. Each activity i has a start time si and a finish time fi with si ≤ fi; if selected, i takes place during the interval [si, fi). No two activities can share the resource at any point in time. We say that activities i and j are compatible if their time intervals are disjoint.

The activity-selection problem is the problem of selecting the largest set of mutually compatible activities.

Activity Selection

• You are given a list of programs to run on a single processor.
• Each program has a start time and a finish time.
• However, the processor can run only one program at any given time, and there is no preemption (i.e., once a program is running, it must be completed).

Greedy Activity Selection Algorithm

In this algorithm the activities are first sorted according to their finish times, from earliest to latest, with ties broken arbitrarily. The activities are then greedily selected by going down the list and picking any activity that is compatible with the current selection.

What is the running time of
this method?

Well, it depends on which sorting algorithm you use.

The sorting part can be as small as O(n log n) and the other
part is O(n), so the total is O(n log n).

Activity-selection problem: Example

➢ Consider the set of activities given in ascending order of finish time. Find a maximum-size subset of mutually compatible activities.

Activity      A1  A2  A3  A4  A5  A6  A7  A8  A9  A10
Start time     1   0   2   4   3   4   7   5   8   10
Finish time    4   5   5   6   7   7   9  10  11   12

➢ Assume that all the activities are arranged in ascending order of their finish times.
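The greedy selection on this instance can be checked with a short script (a sketch; the activity numbering follows the table above):

```python
# Activities from the table, already in ascending order of finish time.
start  = [1, 0, 2, 4, 3, 4, 7, 5, 8, 10]
finish = [4, 5, 5, 6, 7, 7, 9, 10, 11, 12]

def activity_selection(start, finish):
    """Greedy rule: always take the next activity whose start time is no
    earlier than the finish time of the last selected activity."""
    selected = [1]                        # activity A1 (1-based index)
    last_finish = finish[0]
    for i in range(1, len(start)):
        if start[i] >= last_finish:       # compatible with the selection
            selected.append(i + 1)
            last_finish = finish[i]
    return selected

print(activity_selection(start, finish))  # [1, 4, 7, 10] -> A1, A4, A7, A10
```

So the maximum-size compatible subset found greedily is {A1, A4, A7, A10}, of size 4.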

Knapsack Problem

• Suppose we are given n objects and a knapsack (bag).
• Object i has weight Wi, and the knapsack has capacity M.
• If a fraction Xi of object i is placed into the knapsack, then a profit of PiXi is earned.
• The objective is to obtain a filling of the knapsack that maximizes the total profit earned.

• Maximize Σ Pi·Xi, 1 ≤ i ≤ n (A)
• Subject to Σ Wi·Xi ≤ M, 1 ≤ i ≤ n (B)
• And 0 ≤ Xi ≤ 1, 1 ≤ i ≤ n (C)
• The profits and weights are positive numbers.
• Here, a feasible solution is any set (X1, X2, …, Xn) satisfying rules (B) and (C).
• An optimal solution is a feasible solution for which rule (A) is maximized.

Here, n = 3, M = 20, (P1, P2, P3) = (25, 24, 15) and (W1, W2, W3) = (18, 15, 10).
Different feasible solutions are:

   (X1, X2, X3)        ΣWiXi   ΣPiXi
1. (1/2, 1/3, 1/4)     16.5    24.25
2. (1, 2/15, 0)        20      28.2
3. (0, 2/3, 1)         20      31
4. (0, 1, 1/2)         20      31.5
5. (1/2, 2/3, 1/10)    20      30
6. (1, 0, 2/10)        20      28

• Of these six feasible solutions, solution 4 yields the maximum profit. Therefore solution 4 is optimal for the given problem instance.
• Consideration 1 - In case the sum of all the weights is ≤ M, then Xi = 1, 1 ≤ i ≤ n is an optimal solution.
• Consideration 2 - Otherwise, all optimal solutions will fill the knapsack exactly.
The knapsack algorithm

The greedy algorithm:


• Step 1: Sort pi/wi into nonincreasing order: p[i]/w[i] ≥ p[i+1]/w[i+1].
• Step 2: Put the objects into the knapsack according to the sorted sequence, taking as much of each object as the remaining capacity allows.

e. g.
n = 3, M = 20, (p1, p2, p3) = (25, 24, 15), (w1, w2, w3) = (18, 15, 10)
Sol: p1/w1 = 25/18 = 1.39, p2/w2 = 24/15 = 1.6, p3/w3 = 15/10 = 1.5
After sorting: p2/w2 = 24/15 = 1.6, p3/w3 = 15/10 = 1.5,
p1/w1 = 25/18 = 1.39
Optimal solution: x2 = 1, x3 = 1/2, x1 = 0,
total profit = 24 + 7.5 = 31.5
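The two steps can be turned into a short routine. Run on the instance above, it reproduces the optimal profit 31.5 (a sketch in Python, not the deck's own code):

```python
def fractional_knapsack(profits, weights, capacity):
    """Greedy fractional knapsack: sort by profit/weight ratio
    (nonincreasing), then fill the sack, splitting the last object."""
    n = len(profits)
    order = sorted(range(n), key=lambda i: profits[i] / weights[i],
                   reverse=True)
    x = [0.0] * n                            # solution vector
    total = 0.0
    remaining = capacity
    for i in order:
        if weights[i] <= remaining:          # object fits entirely
            x[i] = 1.0
            total += profits[i]
            remaining -= weights[i]
        else:                                # take the fitting fraction
            x[i] = remaining / weights[i]
            total += profits[i] * x[i]
            break
    return x, total

x, total = fractional_knapsack([25, 24, 15], [18, 15, 10], 20)
print(x, total)  # [0.0, 1.0, 0.5] 31.5
```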

Knapsack Problem: Algorithm

Knapsack-Greedy (w[ ], p[ ], M)
begin
    for i = 1 to n do x[i] = 0   // initialize
    weight = 0, profit = 0
    while (weight < M) and objects remain do
    begin
        i = next object with highest profit/weight ratio
        if (weight + w[i] <= M) then
        begin
            x[i] = 1
            weight = weight + w[i]
            profit = profit + p[i]
        end
        else
        begin
            x[i] = (M – weight)/w[i]
            profit = profit + p[i] * x[i]
            weight = M
        end
    end
    return x[ ]   // solution
end
Knapsack Problem

• Q. Find feasible solutions for the following knapsack instance:
• Let n = 5, M = 100,
• w = {10, 20, 30, 40, 50},
• P = {20, 30, 66, 40, 60}
• Ans:
1) Increasing order of weight
2) Decreasing order of profit
3) Decreasing order of profit per weight ratio

Example problem
• n = 7, M = 15, (p1, p2, …, p7) = (10, 5, 15, 7, 6, 18, 3), (w1, w2, …, w7) = (2, 3, 5, 7, 1, 4, 1)
Sol: Calculate pi/wi:
p1/w1 = 5, p2/w2 = 1.66, p3/w3 = 3, p4/w4 = 1, p5/w5 = 6, p6/w6 = 4.5, p7/w7 = 3
• Sort pi/wi into nonincreasing order: x5, x1, x6, x3, x7, x2, x4
• Remaining capacity: 15 − 1 = 14, − 2 = 12, − 4 = 8, − 5 = 3, − 1 = 2
• Weight = 1 + 2 + 4 + 5 + 1 + (2/3)·3 = 15
• Profit = 6 + 10 + 18 + 15 + 3 + (2/3)·5 = 55.33
• The solution vector is (1, 2/3, 1, 0, 1, 1, 1)


Job sequencing with deadline

• The job scheduling algorithm schedules jobs on a single processor so as to maximize the profit earned.

• The greedy approach to job scheduling states that, "Given n jobs, each with a deadline and a profit, the jobs should be scheduled in such a way that maximum profit is earned, with every selected job completed within the maximum deadline."

Job sequencing with deadlines (cont..)

• Feasible solution
– A subset J of jobs such that each job in this subset can be
completed by its deadline.
– The value of a feasible solution is the sum of the profits of
the jobs in J.
• Optimal solution
– A feasible solution having a set of jobs completed within
their deadline giving maximum profit.

Job Scheduling Algorithm

• A set of jobs with deadlines and profits is taken as input by the job scheduling algorithm, and a scheduled subset of jobs with maximum profit is obtained as the final output.

Algorithm:
1. Find the maximum deadline value from the input set of jobs.
2. Once the maximum deadline is known, arrange the jobs in descending order of their profits.
3. Select the jobs with the highest profits whose time periods do not exceed the maximum deadline.
4. The selected set of jobs is the output.

Job sequencing with deadlines

Algorithm GreedyJob (d, J, n)
begin
    J = {1}
    for i = 2 to n do
        if (all jobs in J ∪ {i} can be completed by their deadlines)
        then J = J ∪ {i}
    end for
end
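A common concrete realization of GreedyJob assumes each job takes one unit of time and places each job, in nonincreasing profit order, into the latest free slot on or before its deadline. The sketch below follows that standard unit-time formulation; the instance (profits and deadlines for jobs J1..J5) is my own hypothetical example:

```python
def job_sequencing(profits, deadlines):
    """Greedy job sequencing with deadlines, each job taking one unit of
    time: take jobs by nonincreasing profit, placing each in the latest
    free slot at or before its deadline."""
    n = len(profits)
    max_d = max(deadlines)
    slot = [None] * (max_d + 1)          # slot[t] = job run in (t-1, t]
    order = sorted(range(n), key=lambda i: profits[i], reverse=True)
    total = 0
    for i in order:
        for t in range(min(deadlines[i], max_d), 0, -1):
            if slot[t] is None:          # latest free slot found
                slot[t] = i + 1          # store 1-based job number
                total += profits[i]
                break
    return [j for j in slot[1:] if j is not None], total

# Hypothetical instance: jobs J1..J5.
schedule, profit = job_sequencing([20, 15, 10, 5, 1], [2, 2, 1, 3, 3])
print(schedule, profit)  # [2, 1, 4] 40
```

Here J1 and J2 fill slots 2 and 1, J3 finds no free slot by its deadline, and J4 takes slot 3, for a total profit of 40.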

Job Scheduling Algorithm

• Example:
Consider the following tasks with their deadlines and profits.
Schedule the tasks in such a way that they produce maximum
profit after being executed −

S. No. 1 2 3 4 5
Jobs J1 J2 J3 J4 J5
Deadlines 2 2 1 3 4
Profits 20 60 40 100 80

Job Scheduling Algorithm

• Step 1
Find the maximum deadline value, dm, from the deadlines given.
dm = 4
• Step 2
Arrange the jobs in descending order of their profits.

S. No. 1 2 3 4 5
Jobs J4 J5 J2 J3 J1
Deadlines 3 4 2 1 2
Profits 100 80 60 40 20
The maximum deadline, dm, is 4. Therefore, all tasks must finish by time 4.

Job Scheduling Algorithm

Choose the job with highest profit, J4. It takes up 3 parts of the
maximum deadline.
Therefore, the next job must have the time period 1.
Total Profit = 100.
• Step 3
S. No. 1 2 3 4 5
Jobs J4 J5 J2 J3 J1
Deadlines 3 4 2 1 2
Profits 100 80 60 40 20

The next job with highest profit is J5. But the time taken by J5 is
4, which exceeds the deadline by 3. Therefore, it cannot be
added to the output set.
Job Scheduling Algorithm

• Step 4
S. No. 1 2 3 4 5
Jobs J4 J5 J2 J3 J1
Deadlines 3 4 2 1 2
Profits 100 80 60 40 20

The next job with the highest profit is J2. The time taken by J2 is 2, which also exceeds the remaining time by 1. Therefore, it cannot be added to the output set.

Job Scheduling Algorithm

• Step 5
S. No. 1 2 3 4 5
Jobs J4 J5 J2 J3 J1
Deadlines 3 4 2 1 2
Profits 100 80 60 40 20

The next job with the highest profit is J3. The time taken by J3 is 1,
which does not exceed the given deadline. Therefore, J3 is added
to the output set.
Total Profit: 100 + 40 = 140

Job Scheduling Algorithm

• Step 6
S. No. 1 2 3 4 5
Jobs J4 J5 J2 J3 J1
Deadlines 3 4 2 1 2
Profits 100 80 60 40 20

• Since the maximum deadline is met, the algorithm comes to an end. The output set of jobs scheduled within the deadline is {J4, J3}, with a maximum profit of 140.

Analysis of Job Sequence Algorithm

• The time complexity of this algorithm can be measured using two parameters:
– the total number of jobs n
– the number of jobs k included in the solution J
• Time complexity of the selection loop = O(kn)
• If k = n, the time complexity is O(n²).
• Sorting the jobs by profit takes O(n log n) time (O(n²) with a simple sort). Hence, the total time complexity is O(n²).

DYNAMIC PROGRAMMING

Dynamic Programming

• Dynamic programming is a technique that breaks a problem into sub-problems and saves their results for future use, so that we do not need to compute a result again.
• The property that an optimal overall solution is built from optimal solutions to its subproblems is known as the optimal substructure property.
• The main use of dynamic programming is to solve optimization problems.
• Here, optimization problems mean problems in which we are trying to find the minimum or the maximum solution.
• Dynamic programming guarantees finding an optimal solution to a problem if a solution exists.

How does the dynamic programming approach work?

Dynamic programming follows these steps:
• It breaks down the complex problem into simpler subproblems.
• It finds the optimal solutions to these sub-problems.
• It stores the results of the subproblems; this process of storing results is known as memoization.
• It reuses the stored results so that the same sub-problem is not calculated more than once.
• Finally, it computes the result of the complex problem.

Approaches of dynamic programming

There are two approaches to dynamic programming:
➢ Top-down approach
➢ Bottom-up approach

• The top-down approach uses the memoization technique, while the bottom-up approach uses the tabulation method.
• Here, memoization equals recursion plus caching: recursion means the function calls itself, while caching means storing the intermediate results.
• The tabulation method solves the same kind of problems but removes the recursion. With no recursion there is no stack-overflow issue and no overhead of recursive calls. In the tabulation technique, we solve the subproblems and store the results in a matrix (table).
Dynamic Programming

• Dynamic programming is mainly an optimization over plain recursion.
• For example, if we write a simple recursive solution for Fibonacci numbers, we get exponential time complexity; if we optimize it by storing the solutions of subproblems, the time complexity reduces to linear.
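The Fibonacci example can be sketched directly: the plain recursion recomputes subproblems, while caching each result once makes the computation linear.

```python
from functools import lru_cache

def fib_naive(n):
    """Plain recursion: recomputes the same subproblems, exponential time."""
    if n <= 2:
        return 1
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down DP (memoization): each subproblem is solved once, O(n)."""
    if n <= 2:
        return 1
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(30))  # 832040
```

Both functions return the same values; only the memoized one remains fast as n grows.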

Dynamic Programming: Control abstraction

function solve(p)
begin
    if known(p) then
        return lookup(p)
    else
        x = compute(p)
        save(p, x)
        return x
end

function compute(p)
begin
    if trivial(p) then
        return trivial_sol(p)
    else
        divide p into subproblems p1, p2, ..., pn
        S1 = solve(p1), ..., Sn = solve(pn)
        return combine(S1, S2, ..., Sn)
    end if
end
Binomial Coefficient

• Definition: A binomial coefficient C(n, k) can be defined as the coefficient of x^k in the expansion of (1 + x)^n.
• For example, upon expansion (1 + x)^2 becomes 1 + 2x + x^2.
• Here,
C(2, 0) = 1
C(2, 1) = 2
C(2, 2) = 1

Binomial Coefficient

• Another definition: A binomial coefficient C(n, k) also gives the number of ways, disregarding order, in which k objects can be chosen from among n objects; more formally, the number of k-element subsets (or k-combinations) of an n-element set.
• Mathematically it is defined as
C(n, k) = n! / (k! (n − k)!)

• For example, say we have n = 2 objects to choose from. Then,
- C(2, 0) = No. of ways of choosing 0 objects out of 2 = 1
- C(2, 1) = No. of ways of choosing 1 object out of 2 = 2
- C(2, 2) = No. of ways of choosing 2 objects out of 2 = 1

Binomial Coefficient

The Problem
Write a function that takes two parameters n and k and returns
the value of Binomial Coefficient C(n, k).
• For example,
• Given n = 4 and k = 2, the function should return 6.
• Given n = 5 and k = 2, the function should return 10.

Binomial Coefficient

Optimal Substructure:
The value of C(n, k) can be recursively calculated using the
following standard formula for Binomial Coefficients.

C(n, k) = C(n-1, k-1) + C(n-1, k)


C(n, 0) = C(n, n) = 1

Binomial Coefficient

Optimal Substructure:
Let's work it out for n = 4 and k = 2, using
C(n, k) = C(n-1, k-1) + C(n-1, k) and C(n, 0) = C(n, n) = 1:

C(4, 2) = C(3, 1) + C(3, 2)
C(3, 1) = C(2, 0) + C(2, 1) = 1 + (C(1, 0) + C(1, 1)) = 1 + (1 + 1) = 3
C(3, 2) = C(2, 1) + C(2, 2) = (C(1, 0) + C(1, 1)) + 1 = (1 + 1) + 1 = 3

Answer: C(4, 2) = 3 + 3 = 6
Binomial Coefficient

Overlapping Subproblems:
It should be noted that the above function computes the same
subproblems again and again. See the following recursion tree
for n = 5 and k = 2. The function C(3, 1) is called two times. For
large values of n, there will be many common subproblems.

Binomial Coefficient

Overlapping Subproblems:
• Since the same subproblems are called again, this problem
has the Overlapping Subproblems property.
• So the Binomial Coefficient problem has both properties of a
dynamic programming problem.
• Like other typical Dynamic Programming(DP) problems, re-
computations of the same subproblems can be avoided by
constructing a temporary 2D-array C[][] in a bottom-up
manner.

Binomial Coefficient

Dynamic Programming
The recurrence used for dynamic programming is stated as

C(i, j) = C(i-1, j-1) + C(i-1, j), with C(i, 0) = C(i, i) = 1

In the table, index i indicates the row and index j indicates the column.

Binomial Coefficient using Dynamic Programming

We know
C(i, j) = C(i-1, j-1) + C(i-1, j), with C(i, 0) = C(i, i) = 1

In the table, index i indicates the row and index j indicates the column.
This tabular representation of binomial coefficients is also known as Pascal's triangle.
Binomial Coefficient using Dynamic Programming
Algorithm:

BINOMIAL_DC (n, k)
// n is the total number of items
// k is the number of items to be selected from n

if k == 0 or k == n then
    return 1
else
    return BINOMIAL_DC(n – 1, k – 1) + BINOMIAL_DC(n – 1, k)
end

Time Complexity: O(n·k) when the subproblem results are stored in a table C[][] bottom-up; the plain recursion above is exponential.


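A bottom-up tabulation of the recurrence (filling Pascal's triangle row by row) gives the O(n·k) version. A sketch in Python (my own code, not from the deck):

```python
def binomial(n, k):
    """Bottom-up DP for C(n, k) using
    C(i, j) = C(i-1, j-1) + C(i-1, j), with C(i, 0) = C(i, i) = 1."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                          # base cases
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(binomial(4, 2), binomial(5, 2))  # 6 10
```

Each table entry is computed once from the two entries above it, so the running time is O(n·k).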
Optimal Binary Search Tree

• Given a sorted array keys[0..n-1] of search keys and an array freq[0..n-1] of frequency counts, where freq[i] is the number of searches for keys[i].
• Construct a binary search tree of all keys such that the total cost of all the searches is as small as possible.
• The cost of a BST node is the level of that node multiplied by its frequency.
• The level of the root is 1.

Example 1

• Input: keys[] = {10, 12}, freq[] = {34, 50}
• There are two possible BSTs:

Tree I:   10        Tree II:    12
            \                  /
             12              10

The frequencies of searches for 10 and 12 are 34 and 50, respectively.
The cost of tree I is 34*1 + 50*2 = 134.
The cost of tree II is 50*1 + 34*2 = 118.

OBST

• With n nodes, there exist (2n)! / ((n + 1)! · n!) different binary search trees. An exhaustive search for the optimal binary search tree takes a huge amount of time.
• The goal is to construct a tree that minimizes the total search cost. Such a tree is called an optimal binary search tree.

Tabular method (Dynamic programming)

• The optimal BST problem has both properties of a dynamic programming problem. As in other typical dynamic programming (DP) problems, recomputation of the same subproblems can be avoided by constructing a temporary array cost[][] in a bottom-up manner.
• Formulas:
w(i, j) = p(j) + q(j) + w(i, j-1)
c(i, j) = min { c(i, k-1) + c(k,j) } + w(i, j)
i<k≤j

r(i, j) = value of k that minimizes the above equation.


Initial values
• c(i, i) = 0
• w(i, i) = q(i)
• r(i, i) = 0
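The w/c/r recurrences above can be filled bottom-up over increasing subproblem size. The sketch below uses 1-based keys with p[1..n] and q[0..n], matching the worked example that follows (the Python indexing choices are my own):

```python
def obst(p, q):
    """Optimal BST tables via
    w(i,j) = p(j) + q(j) + w(i,j-1),
    c(i,j) = min over i<k<=j of { c(i,k-1) + c(k,j) } + w(i,j),
    with c(i,i) = 0, w(i,i) = q(i), r(i,i) = 0."""
    n = len(p) - 1                          # p[0] unused; keys are 1..n
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                      # initial values
    for size in range(1, n + 1):            # subproblem size n = j - i
        for i in range(n - size + 1):
            j = i + size
            w[i][j] = p[j] + q[j] + w[i][j - 1]
            best, best_k = None, None
            for k in range(i + 1, j + 1):   # i < k <= j
                cost = c[i][k - 1] + c[k][j]
                if best is None or cost < best:
                    best, best_k = cost, k
            c[i][j] = best + w[i][j]
            r[i][j] = best_k                # k giving the minimum cost
    return w, c, r

# Instance from the example: keys {do, if, int, while}.
p = [0, 3, 3, 1, 1]        # p[1..4]
q = [2, 3, 1, 1, 1]        # q[0..4]
w, c, r = obst(p, q)
print(c[0][4], r[0][4])    # 32 2 -> minimum cost 32, root key a2 = "if"
```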
Question

• Let n = 4 and
{a1, a2 , a3 , a4} = {do, if ,int , while}
Let p(1:4) = {3, 3, 1, 1}
And q(0:4) = {2, 3, 1, 1, 1}
The p’s and q’s have been multiplied by 16 for convenience. Find
the OBST.

0 1 2 3 4
Key do if int while
pi 3 3 1 1
qi 2 3 1 1 1

        i=0            i=1           i=2           i=3          i=4
n=0   W00=2          W11=3         W22=1         W33=1        W44=1
      C00=0          C11=0         C22=0         C33=0        C44=0
      r00=0          r11=0         r22=0         r33=0        r44=0
n=1   W01=3+3+2=8    W12=3+1+3=7   W23=1+1+1=3   W34=1+1+1=3
      C01=           C12=          C23=          C34=
      r01=           r12=          r23=          r34=
n=2   W02=3+1+8=12   W13=1+1+7=9   W24=1+1+3=5
      C02=           C13=          C24=
      r02=           r13=          r24=
n=3   W03=1+1+12=14  W14=1+1+9=11
      C03=           C14=
      r03=           r14=
n=4   W04=1+1+14=16
      C04=
      r04=

using w(i, j) = p(j) + q(j) + w(i, j-1)
        i=0       i=1       i=2       i=3     i=4
n=0   W00=2     W11=3     W22=1     W33=1   W44=1
      C00=0     C11=0     C22=0     C33=0   C44=0
      r00=0     r11=0     r22=0     r33=0   r44=0
n=1   W01=8     W12=7     W23=3     W34=3
      C01=8     C12=      C23=      C34=
      r01=1     r12=      r23=      r34=
n=2   W02=12    W13=9     W24=5
      C02=      C13=      C24=
      r02=      r13=      r24=
n=3   W03=14    W14=11
      C03=      C14=
      r03=      r14=
n=4   W04=16
      C04=
      r04=
C(i, j) = min {c(i,k-1)+c(k,j)} + w(i,j)
i<k<=j
C(0,1)=min {c(0,1-1)+c(1,1)} + w(0,1)
0<k<=1
Here K can be 1 only
Therefore
C(0,1) = {c(0,0)+c(1,1)} + w(0,1)
= 0+0+8 =8
Similarly,
C(1,2) = 7, C(2,3) = 3, C(3,4) = 3
As k can take only one value (k = j) in each of these cases,
r(0,1) = 1, r(1,2) = 2, r(2,3) = 3, r(3,4) = 4

        i=0       i=1       i=2       i=3     i=4
n=0   W00=2     W11=3     W22=1     W33=1   W44=1
      C00=0     C11=0     C22=0     C33=0   C44=0
      r00=0     r11=0     r22=0     r33=0   r44=0
n=1   W01=8     W12=7     W23=3     W34=3
      C01=8     C12=7     C23=3     C34=3
      r01=1     r12=2     r23=3     r34=4
n=2   W02=12    W13=9     W24=5
      C02=19    C13=12    C24=
      r02=1     r13=2     r24=
n=3   W03=14    W14=11
      C03=      C14=
      r03=      r14=
n=4   W04=16
      C04=
      r04=
C(i, j) = min {c(i,k-1)+c(k,j)} + w(i,j)
i<k<=j
K=1,2
C(0,2)=min{c(0,0)+c(1,2), //k=1
c(0,1)+c(2,2)} //k=2
+w(0,2)
=min{0+7,8+0}+12
=7+12=19 k=1
K=2,3
C(1,3)=min{c(1,1)+c(2,3), //k=2
c(1,2)+c(3,3)} //k=3
+w(1,3)=min{(0+3),(7+0)}+9=3+9=12 k=2

C(i, j) = min {c(i,k-1)+c(k,j)} + w(i,j)
i<k<=j
C(0 , 2) =min {c(0,k-1)+c(k,2)} + w(0,2)
0<k<=2
Here K can have values 1 or 2 so,
C(0 , 2)
=min {c(0,1-1)+c(1,2), c(0,2-1)+c(2,2)} + w(0,2)
=min{c(0,0)+c(1,2),c(0,1)+c(2,2)}+ w(0,2)
=min{0+7,8+0}+12
=min{7,8}+12
=7+12 (min value is by k=1 so, r(0,2)=1)
=19
so c(0,2) = 19 and r(0,2)=1
        i=0       i=1       i=2       i=3     i=4
n=0   W00=2     W11=3     W22=1     W33=1   W44=1
      C00=0     C11=0     C22=0     C33=0   C44=0
      r00=0     r11=0     r22=0     r33=0   r44=0
n=1   W01=8     W12=7     W23=3     W34=3
      C01=8     C12=7     C23=3     C34=3
      r01=1     r12=2     r23=3     r34=4
n=2   W02=12    W13=9     W24=5
      C02=19    C13=12    C24=
      r02=1     r13=2     r24=
n=3   W03=14    W14=11
      C03=      C14=
      r03=      r14=
n=4   W04=16
      C04=
      r04=
C(i, j) = min {c(i,k-1)+c(k,j)} + w(i,j)
i<k<=j
K=1,2,3
C(0,3)=min{C(0,0)+C(1,3), //k=1
C(0,1)+C(2,3), //k=2
C(0,2)+C(3,3)} //k=3
+W(0,3)
=min{(0+12),(8+3),(19+0)}+14
=11+14=25 K=2

Similarly, using the same formula we can find
C(1,3) = 12, r(1,3) = 2
C(2,4) = 8,  r(2,4) = 3
C(0,3) = 25, r(0,3) = 2
C(1,4) = 19, r(1,4) = 2
C(0,4) = 32, r(0,4) = 2
The r(i, j) value is the k value that gives the minimum cost.
Optimal search tree for the example

r(0,4) = 2, so the root is key a2 = "if" (k = 2).
Left subtree (keys a1..a1): r(0,1) = 1, so its root is a1 = "do".
Right subtree (keys a3..a4): r(2,4) = 3, so its root is a3 = "int", whose right child is a4 = "while" (r(3,4) = 4).

        if
       /  \
     do    int
              \
              while

Minimum cost of the BST = 32
0/1 Knapsack Problem

• Given N items where each item has some weight and profit
associated with it and also given a bag with capacity W, [i.e.,
the bag can hold at most W weight in it].
• The task is to put the items into the bag such that the sum of
profits associated with them is the maximum possible.

Note: The constraint here is that we must either put an item completely into the bag or not put it in at all (it is not possible to put part of an item into the bag).

0/1 Knapsack Problem

• Input: N = 3, W = 4, profit[] = {1, 2, 3}, weight[] = {4, 5, 1}
• Explanation:
• There are two items which have weight less than or equal to
4.
• If we select the item with weight 4, the possible profit is 1.
• And if we select the item with weight 1, the possible profit is
3.
• So the maximum possible profit is 3. Note that we cannot put
both the items with weight 4 and 1 together as the capacity of
the bag is 4.
• Output: x = {0, 0, 1}

0/1 Knapsack Problem using Dynamic programming
approach
Consider-
Knapsack weight capacity = w
Number of items each having some weight and value = n
0/1 knapsack problem is solved using dynamic programming in
the following steps-
Step 1:
Draw a table say ‘T’ with (n+1) number of rows and (w+1)
number of columns. Fill all the boxes of 0th row and 0th column
with 0.

0/1 Knapsack Problem using Dynamic programming
approach
Step 2:
Start filling the table row wise top to bottom from left to right.
Use the following formula-
T(i, j) = max { T(i-1, j), value_i + T(i-1, j – weight_i) }

Here, T(i, j) = the maximum value of the selected items if we can take items 1 to i under a weight restriction of j.

0/1 Knapsack Problem using Dynamic programming
approach
Step 3:
To identify the items that must be put into the knapsack to
obtain that maximum profit,
• Consider the last column of the table.
• Start scanning the entries from bottom to top.
• On encountering an entry whose value is not same as the
value stored in the entry immediately above it, mark the row
label of that entry.
• After all the entries are scanned, the marked labels represent
the items that must be put into the knapsack.
Time Complexity: O(N * W). where ‘N’ is the number of
elements and ‘W’ is capacity.

0/1 Knapsack Problem using Dynamic programming
approach

Consider a problem instance with the following weights and profits:
• Weights: {2, 3, 4, 5}
• Profits: {3, 4, 5, 6}
• The weight of the knapsack W: 5 kg
• The number of items is n: 4

0/1 Knapsack Problem using Dynamic programming
approach
Solution-
Step-01:
Draw a table say ‘T’ with (n+1) = 4 + 1 = 5 number of rows and
(w+1) = 5 + 1 = 6 number of columns.
Fill all the boxes of 0th row and 0th column with 0.

0/1 Knapsack Problem using Dynamic programming approach

Step-02:
Start filling the table row-wise, top to bottom, from left to right, using the formula
T(i, j) = max { T(i-1, j), value_i + T(i-1, j – weight_i) }

Finding T(1, 1):
We have (value)_1 = 3, (weight)_1 = 2.
Substituting the values, we get
T(1,1) = max { T(1-1, 1), 3 + T(1-1, 1-2) }
T(1,1) = max { T(0,1), 3 + T(0,-1) }
T(1,1) = T(0,1)   { ignore T(0,-1) }
T(1,1) = 0

Item  Weight  Value
 1      2       3
 2      3       4
 3      4       5
 4      5       6
0/1 Knapsack Problem using Dynamic programming approach

Step-02 (cont.):
Finding T(1, 2):
T(1,2) = max { T(1-1, 2), 3 + T(1-1, 2-2) }
T(1,2) = max { T(0,2), 3 + T(0,0) }
T(1,2) = max { 0, 3+0 }
T(1,2) = 3
0/1 Knapsack Problem using Dynamic programming approach

Finding T(1, 3):
T(1,3) = max { T(1-1, 3), 3 + T(1-1, 3-2) }
T(1,3) = max { 0, 3+0 } = 3

      j=0  j=1  j=2  j=3  j=4  j=5
i=0    0    0    0    0    0    0
i=1    0    0    3    3
i=2    0
i=3    0
i=4    0
0/1 Knapsack Problem using Dynamic programming approach

Finding T(1, 4):
T(1,4) = max { T(1-1, 4), 3 + T(1-1, 4-2) }
T(1,4) = max { 0, 3+0 } = 3

      j=0  j=1  j=2  j=3  j=4  j=5
i=0    0    0    0    0    0    0
i=1    0    0    3    3    3
i=2    0
i=3    0
i=4    0
0/1 Knapsack Problem using Dynamic programming approach

Similarly, compute all the entries.
After all the entries are computed and filled in, we get the following table:

      j=0  j=1  j=2  j=3  j=4  j=5
i=0    0    0    0    0    0    0
i=1    0    0    3    3    3    3
i=2    0    0    3    4    4    7
i=3    0    0    3    4    5    7
i=4    0    0    3    4    5    7

The last entry represents the maximum possible value that can be put into the knapsack.
So, the maximum possible value that can be put into the knapsack = 7.
0/1 Knapsack Problem using Dynamic programming approach

Identifying Items To Be Put Into Knapsack-

Following Step 3 (scanning the last column from bottom to top):
• We mark the rows labelled "1" and "2".
• Thus, the items that must be put into the knapsack to obtain the maximum value 7 are
Item-1 and Item-2
X = {1, 1, 0, 0}
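The table filling and the traceback can be sketched together in one routine (indices follow the worked example above):

```python
def knapsack_01(weights, values, W):
    """Bottom-up 0/1 knapsack:
    T(i, j) = max { T(i-1, j), value_i + T(i-1, j - weight_i) },
    followed by a bottom-to-top traceback to recover the chosen items."""
    n = len(weights)
    T = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            T[i][j] = T[i - 1][j]                      # skip item i
            if weights[i - 1] <= j:                    # or take item i
                T[i][j] = max(T[i][j],
                              values[i - 1] + T[i - 1][j - weights[i - 1]])
    # Traceback: scan the last column bottom to top; a change in value
    # means the item in that row was taken.
    items, j = [], W
    for i in range(n, 0, -1):
        if T[i][j] != T[i - 1][j]:
            items.append(i)                            # 1-based item number
            j -= weights[i - 1]
    return T[n][W], sorted(items)

best, items = knapsack_01([2, 3, 4, 5], [3, 4, 5, 6], 5)
print(best, items)  # 7 [1, 2]
```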

Chain Matrix multiplication

• Given N matrices of varying dimensions, which are multiplication-compatible, find an order in which to multiply them such that the number of unit operations is minimized.
• In short, we need to find the most efficient way to multiply the matrices.

Chain Matrix multiplication

Example:
• Given three matrices ‘A’, ‘B’, and ‘C’ having dimensions 10 x
20, 20 x 15, and 15 x 30, respectively. We need to multiply A,
B, and C. Now there are two ways in which we can multiply
them.

1. A * (B * C)
2. (A * B) * C

Chain Matrix multiplication

Example:
1. A * (B * C):
The dimensions are A: 10 x 20, B: 20 x 15, C: 15 x 30.
The cost of multiplying B(20 x 15) and C(15 x 30) is BC(20 x 30) = 20 * 15 * 30 = 9000.
Now, multiplying A(10 x 20) and BC(20 x 30), the multiplication cost is ABC(10 x 30) = 10 * 20 * 30 = 6000.
Therefore, the total multiplication cost = 9000 + 6000 = 15000.

Chain Matrix multiplication

Example:
2. (A * B) * C :
The dimension of matrix A: 10 x 20, B: 20 x 15, C: 15 x 30.
The cost of multiplication of A(10 x 20) and B(20 x 15) is AB(10 x
15) = 10 * 20 * 15 = 3000.

Then we multiply matrices AB(10 x 15) and C(15 x 30), which costs 10 * 15 * 30 = 4500.
The total multiplication cost = 3000 + 4500 = 7500.

Chain Matrix multiplication

Let us consider the example of four matrices A, B, C, D.

There are five ways to fully parenthesize a product of four matrices. We need to choose the most optimal among them. We choose one of the N - 1 positions at which to put the outermost parenthesis; after adding the parenthesis, the problem divides into two smaller subproblems. As a result, we can use a recursive approach to solve these smaller subproblems.

Optimal Substructure:
First, we can place the parenthesis at all possible places. If the length of the chain of matrices is ‘N’, then the number of places where we can put the outermost parenthesis is ‘N - 1’. For example, if we have four matrices, i.e., N = 4, we can split at three places at the outermost level: (A) * (BCD), (AB) * (CD), and (ABC) * (D). Thus we divide the problem into similar problems of smaller size.
Overlapping Subproblems:
In the figure of the five orderings, we see that the subproblem (B * C) repeats twice, (C * D) repeats twice, and (A * B) repeats twice. If even such a small problem has so many repeating subproblems, then Dynamic Programming will be of great help here.


Dynamic Programming:
Given matrices and their corresponding dimensions are:
A: 5 × 10, B: 10 × 15, C: 15 × 20, D: 20 × 25

Find the count of parenthesizations of the 4 matrices, i.e. n = 4.

Using the formula P(n) = Σ P(k) · P(n - k), summed over k = 1 to n - 1, with P(1) = 1,


P(4) = P(1)·P(3) + P(2)·P(2) + P(3)·P(1) = 2 + 1 + 2 = 5
Among these 5 combinations of parenthesis, the matrix chain multiplication algorithm must find the lowest-cost one.
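The count P(n) of full parenthesizations satisfies the recurrence above (these are the Catalan numbers) and can be sketched as:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n):
    """Number of ways to fully parenthesize a chain of n matrices."""
    if n == 1:
        return 1
    # Split the chain after matrix k; each side is parenthesized independently.
    return sum(P(k) * P(n - k) for k in range(1, n))

print([P(n) for n in range(1, 6)])  # [1, 1, 2, 5, 14]
```

For n = 4 this reproduces the value 5 computed above.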
Step 1:
- Create a cost table, where all the cost values calculated from
the different combinations of parenthesis are stored.
- Create another table to store the k values obtained at the
minimum cost of each combination.

Step 2:
Applying the dynamic programming recurrence
cost[i, j] = 0 if i = j, and otherwise
cost[i, j] = min over i ≤ k < j of ( cost[i, k] + cost[k+1, j] + p(i-1) · p(k) · p(j) ),
find the costs of the various parenthesizations.

The recurrence is applied only to the upper-triangular entries of the cost table, since i < j always (the diagonal entries cost[i, i] are 0).
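As a worked sketch, with the dimension vector p = (5, 10, 15, 20, 25) (matrix i has dimensions p(i-1) × p(i)), the first diagonal (chains of length 2) works out to:

cost[1, 2] = 5 · 10 · 15 = 750
cost[2, 3] = 10 · 15 · 20 = 3000
cost[3, 4] = 15 · 20 · 25 = 7500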

Find the values of [1, 3] and [2, 4] in this step. The cost table is always filled diagonal by diagonal.
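Worked out with p = (5, 10, 15, 20, 25), these two entries are:

cost[1, 3] = min( cost[1, 1] + cost[2, 3] + 5·10·20 , cost[1, 2] + cost[3, 3] + 5·15·20 )
           = min( 0 + 3000 + 1000 , 750 + 0 + 1500 ) = 2250, achieved at k = 2
cost[2, 4] = min( cost[2, 2] + cost[3, 4] + 10·15·25 , cost[2, 3] + cost[4, 4] + 10·20·25 )
           = min( 0 + 7500 + 3750 , 3000 + 0 + 5000 ) = 8000, achieved at k = 3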

Now compute the final element of the cost table, [1, 4], to find the lowest-cost parenthesization.
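Worked out with p = (5, 10, 15, 20, 25):

cost[1, 4] = min over k = 1, 2, 3 of:
  k = 1: cost[1, 1] + cost[2, 4] + 5·10·25 = 0 + 8000 + 1250 = 9250
  k = 2: cost[1, 2] + cost[3, 4] + 5·15·25 = 750 + 7500 + 1875 = 10125
  k = 3: cost[1, 3] + cost[4, 4] + 5·20·25 = 2250 + 0 + 2500 = 4750
so the minimum cost is 4750, achieved at k = 3.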

Step 3:
Now that all the values in the cost table are computed, the final step is to parenthesize the sequence of matrices. For that, the k table needs to be constructed, storing the value of ‘k’ that achieved the minimum cost for each entry.

Step 4: Parenthesization
Based on the lowest cost values from the cost table and their
corresponding k values, let us add parenthesis on the sequence
of matrices.

The lowest cost value at [1, 4] is achieved when k = 3; therefore, the first split is placed after the third matrix:
(ABC)(D)


The lowest cost value at [1, 3] is achieved when k = 2; therefore, the next split is placed after the second matrix, giving (AB)(C). Each remaining subchain now contains at most two matrices, so there is no further choice of split to make.
Since the sequence cannot be parenthesized further, the final solution of the matrix chain multiplication is ((AB)C)(D).

Time Complexity: O(N³)
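The whole procedure (cost table, k table, and final parenthesization) can be sketched in Python; the helper names matrix_chain_order and parenthesize are our own, not from the slides:

```python
import sys

def matrix_chain_order(p):
    """Matrix chain multiplication DP.
    p: dimension vector; matrix i (1-based) has dimensions p[i-1] x p[i]."""
    n = len(p) - 1
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    kbest = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length, diagonal by diagonal
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = sys.maxsize
            for k in range(i, j):             # candidate split position
                c = cost[i][k] + cost[k + 1][j] + p[i - 1] * p[k] * p[j]
                if c < cost[i][j]:
                    cost[i][j], kbest[i][j] = c, k
    return cost, kbest

def parenthesize(kbest, i, j, names="ABCD"):
    """Rebuild the optimal parenthesization from the k table."""
    if i == j:
        return names[i - 1]
    k = kbest[i][j]
    return "(" + parenthesize(kbest, i, k, names) + parenthesize(kbest, k + 1, j, names) + ")"

cost, kbest = matrix_chain_order([5, 10, 15, 20, 25])
print(cost[1][4], parenthesize(kbest, 1, 4))  # 4750 (((AB)C)D)
```

On the slides' instance this reports the optimal cost 4750 and the ordering ((AB)C)D derived above.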


Difference between Greedy and Dynamic
Programming
Dynamic Programming:
1. Dynamic Programming is used to obtain the optimal solution.
2. At each step we make a choice, but the choice may depend on the solutions to subproblems.
3. Generally less efficient than the greedy approach.
4. Example: 0/1 Knapsack.
5. Dynamic Programming is guaranteed to generate an optimal solution, by the Principle of Optimality.

Greedy Method:
1. The Greedy Method is also used to get an optimal solution.
2. We make whatever choice seems best at the moment and then solve the subproblems arising after the choice is made.
3. Generally more efficient than dynamic programming.
4. Example: Fractional Knapsack.
5. In the Greedy Method, there is no such guarantee of obtaining an optimal solution.
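A small illustration of point 5: on the hypothetical 0/1 instance below (our own numbers, not from the slides), the greedy value-per-weight choice is suboptimal, while dynamic programming finds the optimum.

```python
def greedy_01(items, W):
    """Ratio-greedy applied (incorrectly) to the 0/1 knapsack."""
    total = 0
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if w <= W:            # take the whole item if it fits; no fractions allowed
            W -= w
            total += v
    return total

def dp_01(items, W):
    """Standard 0/1 knapsack DP over capacities (1-D table)."""
    best = [0] * (W + 1)
    for w, v in items:
        for c in range(W, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[W]

items = [(6, 13), (5, 10), (5, 10)]   # (weight, value); best ratio first tempts greedy
print(greedy_01(items, 10), dp_01(items, 10))  # 13 20
```

Greedy grabs the weight-6 item and can fit nothing else; DP correctly picks the two weight-5 items.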

Thank You
