
Design and Analysis of Algorithms 21CS42

Module 4: Dynamic Programming



Outline
• Introduction to Dynamic Programming
  – General method with Examples
  – Multistage Graphs
• Transitive Closure: Warshall's Algorithm
• All Pairs Shortest Paths: Floyd's Algorithm
• Optimal Binary Search Trees
• Knapsack problem
• Bellman-Ford Algorithm
• Travelling Sales Person problem
• Reliability design
Dynamic Programming
• Dynamic programming is a technique for solving problems with overlapping subproblems.
  – The subproblems arise from a recurrence relating a given problem's solution to the solutions of its smaller subproblems.
  – DP solves each of the smaller subproblems only once and records the results in a table, from which a solution to the original problem can then be obtained.
• Dynamic programming can be used when the solution to a problem can be viewed as the result of a sequence of decisions.

Example-1: Knapsack Problem

Example-2: File merging problem

Principle of Optimality

Difference between greedy and DP?


Multistage Graphs
• A multistage graph G = (V, E) is a directed graph in which the vertices are partitioned into k ≥ 2 disjoint sets Vi.
• Vertex s is the source, and t is the sink.
• Let c(i, j) be the cost of edge (i, j). The cost of a path from s to t is the sum of the costs of the edges on the path. The multistage graph problem is to find a minimum-cost path from s to t.
• Each set Vi defines a stage in the graph.
• Because of the constraints on E, every path from s to t starts in stage 1, goes to stage 2, then to stage 3, and so on, and eventually terminates in stage k.


DP Solution using forward approach

Let cost(i, j) be the cost of node j at stage i to reach the terminal node t. For the forward approach,

cost(i, j) = min { c(j, l) + cost(i+1, l) } over all vertices l in stage i+1 with (j, l) ∈ E,

with cost(k, t) = 0 for the sink t in stage k.


Cost(ith stage, jth vertex)

Stage 5
cost(5,12) = 0

Stage 4
cost(4,9)  = c(9,12)  = 4
cost(4,10) = c(10,12) = 2
cost(4,11) = c(11,12) = 5
Stage 3
cost(3,6) = min { c(6,9) + cost(4,9), c(6,10) + cost(4,10) } = min { 6+4, 5+2 } = 7    d[6] = 10
cost(3,7) = min { c(7,9) + cost(4,9), c(7,10) + cost(4,10) } = min { 4+4, 3+2 } = 5    d[7] = 10
cost(3,8) = min { c(8,10) + cost(4,10), c(8,11) + cost(4,11) } = min { 5+2, 6+5 } = 7   d[8] = 10


Stage 2
cost(2,2) = min { c(2,6) + cost(3,6), c(2,7) + cost(3,7), c(2,8) + cost(3,8) } = min { 4+7, 2+5, 1+7 } = 7    d[2] = 7
cost(2,3) = min { c(3,6) + cost(3,6), c(3,7) + cost(3,7) } = min { 2+7, 7+5 } = 9    d[3] = 6
cost(2,4) = min { c(4,8) + cost(3,8) } = min { 11+7 } = 18    d[4] = 8
cost(2,5) = min { c(5,7) + cost(3,7), c(5,8) + cost(3,8) } = min { 11+5, 8+7 } = 15    d[5] = 8


Stage 1
cost(1,1) = min { c(1,2) + cost(2,2), c(1,3) + cost(2,3), c(1,4) + cost(2,4), c(1,5) + cost(2,5) }
          = min { 9+7, 7+9, 3+18, 2+15 } = 16    d[1] = 2 or 3

Decisions are taken from source to sink, so this is called the forward approach.
d(1,1) = 2  or  d(1,1) = 3
d(2,2) = 7  or  d(2,3) = 6
d(3,7) = 10 or  d(3,6) = 10
d(4,10) = 12 or d(4,10) = 12
So, optimal path = 1 -> 2 -> 7 -> 10 -> 12 (or) 1 -> 3 -> 6 -> 10 -> 12
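The trace above can be sketched in Python. The edge costs are transcribed from the c(i, j) values used in the stage computations and are assumed to form the complete edge set of the example graph:

```python
# Forward-approach DP on the 5-stage example traced above.
c = {
    (1, 2): 9, (1, 3): 7, (1, 4): 3, (1, 5): 2,
    (2, 6): 4, (2, 7): 2, (2, 8): 1,
    (3, 6): 2, (3, 7): 7,
    (4, 8): 11,
    (5, 7): 11, (5, 8): 8,
    (6, 9): 6, (6, 10): 5,
    (7, 9): 4, (7, 10): 3,
    (8, 10): 5, (8, 11): 6,
    (9, 12): 4, (10, 12): 2, (11, 12): 5,
}
n, t = 12, 12
cost = {t: 0}   # cost[j] = minimum cost from vertex j to the sink t
d = {}          # d[j]    = next vertex on a cheapest path from j
for j in range(n - 1, 0, -1):   # the relations are solved backwards
    cost[j], d[j] = min((c[(u, v)] + cost[v], v) for (u, v) in c if u == j)
# Recover one optimal path by following the decisions from the source.
path, v = [1], 1
while v != t:
    v = d[v]
    path.append(v)
print(cost[1], path)   # → 16 [1, 2, 7, 10, 12]
```

Ties (such as d[1] = 2 or 3) are broken toward the lower-numbered vertex here, which yields the first of the two optimal paths.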
Principle of optimality w.r.t. Multistage graph

Principle of Optimality:
Suppose that in solving a problem, we have to make a sequence of decisions D1, D2, ..., Dn. If this sequence is optimal, then the last k decisions, 1 ≤ k ≤ n, must also be optimal.

E.g. the shortest-path problem:
If i, i1, i2, ..., j is a shortest path from i to j, then i1, i2, ..., j must be a shortest path from i1 to j.


Multistage Graph Pseudocode Corresponding to the Forward Approach

The path array P for the example (P[j] holds the jth vertex on the optimal path):
  j    : 1  2  3   4   5
  P[j] : 1  2  7  10  12


Multistage Graph Pseudocode Corresponding to the Backward Approach

Analysis

Θ( |V| + |E| )


Multistage Graph
Forward Approach and Backward Approach:
• Note that if the recurrence relations are formulated using the forward approach, then the relations are solved backwards, i.e., beginning with the last decision.
• On the other hand, if the relations are formulated using the backward approach, they are solved forwards.

To solve a problem by using dynamic programming:
• Find out the recurrence relations.
• Represent the problem by a multistage graph.
Transitive Closure

Definition: The transitive closure of a directed graph with n vertices can be defined as the n × n boolean matrix T = {tij}, in which the element in the ith row and the jth column is 1 if there exists a nontrivial path (i.e., a directed path of positive length) from the ith vertex to the jth vertex; otherwise, tij is 0.


Transitive Closure
• We can generate the transitive closure of a digraph with the help of depth-first search or breadth-first search, performing a traversal starting from each of the n vertices.
• Since this method traverses the same digraph several times, we can use a better algorithm called Warshall's algorithm.
• Warshall's algorithm constructs the transitive closure through a series of n × n boolean matrices R(0), . . . , R(k−1), R(k), . . . , R(n).


Warshall's Algorithm
• The element in the ith row and jth column of matrix R(k) is equal to 1 if and only if
  – there exists a directed path of positive length from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k.
• This means that there exists a path from the ith vertex vi to the jth vertex vj with each intermediate vertex numbered not higher than k:
  vi, a list of intermediate vertices each numbered not higher than k, vj


Warshall's Algorithm
Two situations regarding this path are possible.
1. In the first, the list of its intermediate vertices does not contain the kth vertex; then the path already exists with intermediate vertices numbered not higher than k − 1, i.e., R(k−1)[i, j] = 1.
2. The second possibility is that the path does contain the kth vertex vk among the intermediate vertices; then it splits into a path from vi to vk and a path from vk to vj, each with intermediate vertices numbered not higher than k − 1.
Algorithm
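The two situations above give the recurrence R(k)[i, j] = R(k−1)[i, j] or (R(k−1)[i, k] and R(k−1)[k, j]), which a minimal Python sketch can implement directly. The 4-vertex digraph in the usage example is illustrative (the slide's own example figure is not reproduced in the text):

```python
def warshall(adj):
    """Transitive closure by Warshall's algorithm.

    adj is an n x n 0/1 adjacency matrix; returns T where T[i][j] = 1
    iff there is a directed path of positive length from i to j.
    """
    n = len(adj)
    r = [row[:] for row in adj]        # R(0) is the adjacency matrix
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

# Illustrative digraph: a->b, b->d, d->a, d->c (vertices 0..3)
adj = [[0, 1, 0, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 0],
       [1, 0, 1, 0]]
print(warshall(adj))
```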



Analysis
• Its time efficiency is Θ(n^3).
• We can make the algorithm run faster by treating matrix rows as bit strings and employing the bitwise or operation available in most modern programming languages.
• Space efficiency
  – Although separate matrices for recording intermediate results of the algorithm are used, that can be avoided: the computation can be carried out in place in a single matrix.


All Pairs Shortest Paths
• Problem definition: Given a weighted connected graph (undirected or directed), the all-pairs shortest paths problem asks to find the distances (i.e., the lengths of the shortest paths) from each vertex to all other vertices.

• Applications:
  – Communications, transportation networks, and operations research.
  – Pre-computing distances for motion planning in computer games.


All Pairs Shortest Paths

(a) Digraph. (b) Its weight matrix. (c) Its distance matrix.

We can generate the distance matrix with an algorithm that is very similar to Warshall's algorithm. It is called Floyd's algorithm.


Floyd's Algorithm
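The algorithm's slide is an image, so here is a minimal Python sketch; the weight matrix in the usage example is an illustrative 4-vertex instance, not necessarily the one pictured:

```python
def floyd(w):
    """All-pairs shortest distances by Floyd's algorithm.

    w is an n x n weight matrix with float('inf') for missing edges
    and 0 on the diagonal; returns the distance matrix D.
    """
    n = len(w)
    d = [row[:] for row in w]          # D(0) is the weight matrix
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float('inf')
# Illustrative digraph: a->c (3), b->a (2), c->b (7), c->d (1), d->a (6)
W = [[0,   INF, 3,   INF],
     [2,   0,   INF, INF],
     [INF, 7,   0,   1],
     [6,   INF, INF, 0]]
D = floyd(W)
print(D)
```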


Analysis
• Its time efficiency is Θ(n^3), similar to Warshall's algorithm.


Example



Optimal Binary Search Tree
• A binary search tree is one of the most important data structures in computer science.
• One of its principal applications is to implement a dictionary, a set of elements with the operations of searching, insertion, and deletion.
• The binary search tree for which the average number of comparisons in a search is the smallest is called an optimal binary search tree.
• If the probabilities of searching for the elements of a set are known, an optimal binary search tree can be constructed.


Optimal Binary Search Tree
Example
• Consider four keys A, B, C, and D to be searched for with probabilities 0.1, 0.2, 0.4, and 0.3, respectively.
• 14 different binary search trees are possible; two are shown here.
• The average number of comparisons in a successful search is
  0.1*1 + 0.2*2 + 0.4*3 + 0.3*4 = 2.9 in the first of these trees, and
  0.1*2 + 0.2*1 + 0.4*2 + 0.3*3 = 2.1 in the second.
• Neither of these two trees is, in fact, optimal.
Optimal BST
• For our tiny example, we could find the optimal tree by generating all 14 binary search trees with these keys.
• As a general algorithm, this exhaustive-search approach is unrealistic:
  – the total number of binary search trees with n keys is equal to the nth Catalan number, which grows to infinity as fast as 4^n / n^1.5.


Optimal BST
Problem definition:
• Given a sorted array a1, . . . , an of search keys and an array p1, . . . , pn of probabilities, where pi is the probability of a search for ai.
• Construct a binary search tree of all the keys such that the average number of comparisons made in a successful search is the smallest.


Optimal BST – Solution using DP
• Let a1, . . . , an be distinct keys ordered from the smallest to the largest and let p1, . . . , pn be the probabilities of searching for them.
• Let C(i, j) be the smallest average number of comparisons made in a successful search in a binary search tree T(i, j) made up of keys ai, . . . , aj, where i, j are some integer indices, 1 ≤ i ≤ j ≤ n.
• We are interested just in C(1, n).


Optimal BST – Solution using DP
• To derive a recurrence, we consider all possible ways to choose a root ak among the keys ai, . . . , aj. This gives

  C(i, j) = min over i ≤ k ≤ j of { C(i, k−1) + C(k+1, j) } + Σ ps for s = i to j,

  with C(i, i−1) = 0 and C(i, i) = pi.
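A bottom-up Python sketch of this computation, filling the cost table C and root table R for the example keys A, B, C, D with probabilities 0.1, 0.2, 0.4, 0.3 (it uses the standard recurrence C(i, j) = min over i ≤ k ≤ j of {C(i, k−1) + C(k+1, j)} + Σ ps, s = i..j):

```python
def optimal_bst(p):
    """Cost table C and root table R for keys 1..n with probabilities p[1..n]."""
    n = len(p) - 1                        # p[0] is a dummy so keys are 1-based
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i]                    # single-key tree
        R[i][i] = i
    for d in range(1, n):                 # d = j - i, growing subtree sizes
        for i in range(1, n - d + 1):
            j = i + d
            best = min((C[i][k - 1] + C[k + 1][j], k) for k in range(i, j + 1))
            C[i][j] = best[0] + sum(p[i:j + 1])
            R[i][j] = best[1]             # root of the optimal tree for ai..aj
    return C, R

p = [0.0, 0.1, 0.2, 0.4, 0.3]             # keys A, B, C, D from the example
C, R = optimal_bst(p)
print(round(C[1][4], 1), R[1][4])          # → 1.7 3
```

The worked example that follows traces exactly these table entries by hand.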


Example

Keys A, B, C, D with probabilities p1 = 0.1, p2 = 0.2, p3 = 0.4, p4 = 0.3.

C(1,2) = min { k=1: C(1,0) + C(2,2) = 0 + 0.2 = 0.2,
               k=2: C(1,1) + C(3,2) = 0.1 + 0 = 0.1 } + 0.3 = 0.1 + 0.3 = 0.4    (root k = 2)


Example

C(2,3) = min { k=2: C(2,1) + C(3,3) = 0 + 0.4 = 0.4,
               k=3: C(2,2) + C(4,3) = 0.2 + 0 = 0.2 } + 0.6 = 0.2 + 0.6 = 0.8    (root k = 3)

C(3,4) = min { k=3: C(3,2) + C(4,4) = 0 + 0.3 = 0.3,
               k=4: C(3,3) + C(5,4) = 0.4 + 0 = 0.4 } + 0.7 = 0.3 + 0.7 = 1.0    (root k = 3)


Example

C(1,3) = min { k=1: C(1,0) + C(2,3) = 0 + 0.8 = 0.8,
               k=2: C(1,1) + C(3,3) = 0.1 + 0.4 = 0.5,
               k=3: C(1,2) + C(4,3) = 0.4 + 0 = 0.4 } + 0.7 = 0.4 + 0.7 = 1.1    (root k = 3)

C(2,4) = min { k=2: C(2,1) + C(3,4) = 0 + 1.0 = 1.0,
               k=3: C(2,2) + C(4,4) = 0.2 + 0.3 = 0.5,
               k=4: C(2,3) + C(5,4) = 0.8 + 0 = 0.8 } + 0.9 = 0.5 + 0.9 = 1.4    (root k = 3)

C(1,4) = min { k=1: C(1,0) + C(2,4) = 0 + 1.4 = 1.4,
               k=2: C(1,1) + C(3,4) = 0.1 + 1.0 = 1.1,
               k=3: C(1,2) + C(4,4) = 0.4 + 0.3 = 0.7,
               k=4: C(1,3) + C(5,4) = 1.1 + 0 = 1.1 } + 1.0 = 0.7 + 1.0 = 1.7    (root k = 3)
Example

How to construct the optimal binary search tree?


How to Construct the Optimal BST
▪ Thus, the average number of key comparisons in the optimal tree is equal to 1.7.
▪ Since R[1, 4] = 3, the root of the optimal tree contains the third key, i.e., C.
▪ Its left subtree is made up of keys A and B, and its right subtree contains just key D (why?).
▪ To find the specific structure of these subtrees, we find their roots by consulting the root table again as follows.
▪ Since R[1, 2] = 2, the root of the optimal tree containing A and B is B, with A being its left child (and the root of the one-node tree: R[1, 1] = 1).
▪ Since R[4, 4] = 4, the root of this one-node optimal tree is its only key D.


Analysis
• The algorithm requires O(n^2) time and O(n^2) storage.
• Therefore, as n increases it will run out of storage even before it runs out of time.
• The storage needed can be reduced by almost half by implementing the two-dimensional arrays as one-dimensional arrays.


Home Work
• Find the optimal binary search tree for the
keys A, B, C, D, E with search probabilities
0.1, 0.1, 0.2, 0.2, 0.4 respectively.



Knapsack problem using DP
• Given n items of known weights w1, . . . , wn and values v1, . . . , vn and a knapsack of capacity W, find the most valuable subset of the items that fits into the knapsack.
• To design a DP algorithm, we need to derive a recurrence relation that expresses a solution in terms of its smaller subinstances.
• Let us consider an instance defined by the first i items, 1 ≤ i ≤ n, with weights w1, . . . , wi, values v1, . . . , vi, and knapsack capacity j, 1 ≤ j ≤ W.


Knapsack problem using DP
• Let F(i, j) be the value of an optimal solution to this instance.
• We can divide all the subsets of the first i items that fit the knapsack of capacity j into two categories:
  – those that do not include the ith item,
  – and those that do.
• Among the subsets that do not include the ith item, the value of an optimal subset is, by definition, F(i − 1, j).


Knapsack problem using DP
Let i represent the ith object in question, to be selected or not, with weight wi and profit pi, and let j be the remaining capacity. F[i, j] is the profit obtained by considering the first i objects with capacity j.
The various cases to be considered are:
Case 1: If there are no objects (i.e., i = 0) or the capacity of the knapsack itself is 0 (i.e., j = 0), then the profit will be 0.
Therefore, F[i, j] = 0 if i = 0 or j = 0. ---- (1)
Case 2: Suppose the weight of the ith object, wi, is greater than the remaining capacity j; in such a case the ith object cannot be selected. So the profit obtained is
F[i, j] = F[i − 1, j] if wi > j. ---- (2)


Knapsack problem using DP
Case 3: Suppose wi is less than or equal to the remaining capacity j; in such a case we have two alternatives.
a) We reject the ith object; the profit obtained is F[i, j] = F[i − 1, j].
b) We select the ith object; the profit obtained is F[i, j] = F[i − 1, j − wi] + pi.
Whichever of these situations yields the maximum profit has to be considered, so we take the maximum of the two.
Therefore, the profit obtained if wi ≤ j is
F[i, j] = max { F[i − 1, j], F[i − 1, j − wi] + pi } if wi ≤ j. ---- (3)
Therefore, the final recurrence relation is

          | 0                                           if i = 0 or j = 0
F[i, j] = | F[i − 1, j]                                 if wi > j
          | max { F[i − 1, j], F[i − 1, j − wi] + pi }  if wi ≤ j


Knapsack problem using DP
• Our goal is to find F(n, W), the maximal value of a subset of the n given items that fits into the knapsack of capacity W, and an optimal subset itself.


Algorithm
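A Python sketch of the bottom-up algorithm and backtrace. The instance in the usage example is an assumption: the table image is not reproduced in the text, but weights (2, 1, 3, 2), values ($12, $10, $20, $15) and W = 5 are consistent with the backtrace discussed on a later slide (V[4, 5] = $37, items {1, 2, 4}):

```python
def knapsack(w, v, W):
    """Bottom-up 0/1 knapsack; F[i][j] = best value using first i items, capacity j."""
    n = len(w)
    F = [[0] * (W + 1) for _ in range(n + 1)]   # row 0 / column 0 are zeros
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if w[i - 1] > j:                    # case 2: item i does not fit
                F[i][j] = F[i - 1][j]
            else:                               # case 3: skip it or take it
                F[i][j] = max(F[i - 1][j], F[i - 1][j - w[i - 1]] + v[i - 1])
    return F

# Assumed instance (see lead-in): weights, values, capacity
w, v, W = [2, 1, 3, 2], [12, 10, 20, 15], 5
F = knapsack(w, v, W)
# Backtrace: item i is in the solution iff F[i][j] != F[i-1][j]
items, j = [], W
for i in range(len(w), 0, -1):
    if F[i][j] != F[i - 1][j]:
        items.append(i)
        j -= w[i - 1]
print(F[len(w)][W], sorted(items))              # → 37 [1, 2, 4]
```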



Method for filling the matrix
• For F[i, j], first move left wi columns; if moving left goes out of the matrix, then copy the value in the previous row.
• If moving left stays within the matrix, check whether wi > j; if so, copy the previous-row value.
• If not, go to the previous row after moving left wi columns, add the profit of the ith object to that value, compare the result with the previous-row value in the same column (i.e., row i−1, column j), and write the maximum of the two.


Example-1

Find the composition of an optimal subset by backtracing the computations


Thus, the maximal value is V[4, 5] = $37. We can find the composition of an optimal subset by tracing back the computations of this entry in the table.
Since V[4, 5] ≠ V[3, 5], item 4 was included in an optimal solution along with an optimal subset for filling 5 − 2 = 3 remaining units of the knapsack capacity.
The latter is represented by element V[3, 3].
Since V[3, 3] = V[2, 3], item 3 is not a part of an optimal subset.
Since V[2, 3] ≠ V[1, 3], item 2 is a part of an optimal selection, which leaves element V[1, 3 − 1] to specify its remaining composition.
Similarly, since V[1, 2] ≠ V[0, 2], item 1 is the final part of the optimal solution {item 1, item 2, item 4}.


Analysis
• The classic dynamic programming approach works bottom-up: it fills a table with solutions to all smaller subproblems, each of which is solved only once.
• Drawback: some unnecessary subproblems are also solved.
• The time efficiency and space efficiency of this algorithm are both in Θ(nW).
• The time needed to find the composition of an optimal solution is in O(n).


Discussion
• The direct top-down approach to finding a solution to
such a recurrence leads to an algorithm that solves
common subproblems more than once and hence is very
inefficient.
• Since this drawback is not present in the top-down
approach, it is natural to try to combine the strengths of
the top-down and bottom-up approaches.
• The goal is to get a method that solves only subproblems
that are necessary and does so only once. Such a method
exists; it is based on using memory functions.



Algorithm MFKnapsack(i, j)
• Implements the memory function method for the knapsack problem.
• Input: A nonnegative integer i indicating the number of the first items being considered and a nonnegative integer j indicating the knapsack capacity.
• Output: The value of an optimal feasible subset of the first i items.
• Note: Uses as global variables the input arrays Weights[1..n], Values[1..n], and table F[0..n, 0..W] whose entries are initialized with −1's except for row 0 and column 0, which are initialized with 0's.


Algorithm MFKnapsack(i, j )
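A Python sketch of the memory-function method just described; the instance reuses the same assumed weights and values as the bottom-up example (weights (2, 1, 3, 2), values (12, 10, 20, 15), W = 5):

```python
# Globals, as in the algorithm's note: Weights, Values (1-based), and
# table F initialized with -1 except for row 0 and column 0.
Weights = [0, 2, 1, 3, 2]                  # index 0 unused
Values = [0, 12, 10, 20, 15]
W = 5
F = [[0 if i == 0 or j == 0 else -1 for j in range(W + 1)]
     for i in range(len(Weights))]

def mf_knapsack(i, j):
    """Value of an optimal subset of the first i items with capacity j."""
    if F[i][j] < 0:                        # not computed yet
        if j < Weights[i]:
            value = mf_knapsack(i - 1, j)
        else:
            value = max(mf_knapsack(i - 1, j),
                        Values[i] + mf_knapsack(i - 1, j - Weights[i]))
        F[i][j] = value                    # record the result once
    return F[i][j]

print(mf_knapsack(4, 5))                   # → 37
```

Unlike the bottom-up version, only the table entries actually reached by the recursion are computed.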



Example



Single Source Shortest Paths with Negative Weights
Problem definition
• Single-source shortest paths: given a graph and a source vertex s in the graph, find the shortest paths from s to all vertices in the given graph.
• The graph may contain negative-weight edges.
• Dijkstra's algorithm (which runs in O((V + E) log V) with a binary heap) doesn't work for graphs with negative-weight edges.
• Bellman-Ford works in O(VE).


Bellman-Ford Algorithm
• Like other dynamic programming problems, the algorithm calculates shortest paths in a bottom-up manner.
• It first calculates the shortest distances for the shortest paths which have at most one edge in the path.
• Then, it calculates the shortest paths with at most 2 edges, and so on.
• Iteration i finds all shortest paths that use at most i edges.


k = 1
dist^1(1) = 0
dist^1(2) = 6
dist^1(3) = 5
dist^1(4) = 5
dist^1(5) = ∞
dist^1(6) = ∞
dist^1(7) = ∞


For every vertex (except the source), look at its incoming edges (except those from the source):

k = 2
dist^2(1) = 0
dist^2(2) = min { dist^1(2), dist^1(3) + cost(3,2) } = min { 6, 5 − 2 } = 3
dist^2(3) = min { dist^1(3), dist^1(4) + cost(4,3) } = min { 5, 5 − 2 } = 3
dist^2(4) = 5
dist^2(5) = min { dist^1(5), dist^1(2) + cost(2,5), dist^1(3) + cost(3,5) } = min { ∞, 6 − 1, 5 + 1 } = 5
dist^2(6) = min { dist^1(6), dist^1(4) + cost(4,6) } = min { ∞, 5 − 1 } = 4
dist^2(7) = min { dist^1(7), dist^1(5) + cost(5,7), dist^1(6) + cost(6,7) } = min { ∞, ∞ + 3, ∞ + 3 } = ∞


k = 3
dist^3(1) = 0
dist^3(2) = min { dist^2(2), dist^2(3) + cost(3,2) } = min { 3, 3 − 2 } = 1
dist^3(3) = min { dist^2(3), dist^2(4) + cost(4,3) } = min { 3, 5 − 2 } = 3
dist^3(4) = 5
dist^3(5) = min { dist^2(5), dist^2(2) + cost(2,5), dist^2(3) + cost(3,5) } = min { 5, 3 − 1, 3 + 1 } = 2
dist^3(6) = min { dist^2(6), dist^2(4) + cost(4,6) } = min { 4, 5 − 1 } = 4
dist^3(7) = min { dist^2(7), dist^2(5) + cost(5,7), dist^2(6) + cost(6,7) } = min { ∞, 5 + 3, 4 + 3 } = 7


k = 4
dist^4(1) = 0
dist^4(2) = min { dist^3(2), dist^3(3) + cost(3,2) } = min { 1, 3 − 2 } = 1
dist^4(3) = min { dist^3(3), dist^3(4) + cost(4,3) } = min { 3, 5 − 2 } = 3
dist^4(4) = 5
dist^4(5) = min { dist^3(5), dist^3(2) + cost(2,5), dist^3(3) + cost(3,5) } = min { 2, 1 − 1, 3 + 1 } = 0
dist^4(6) = min { dist^3(6), dist^3(4) + cost(4,6) } = min { 4, 5 − 1 } = 4
dist^4(7) = min { dist^3(7), dist^3(5) + cost(5,7), dist^3(6) + cost(6,7) } = min { 7, 2 + 3, 4 + 3 } = 5


k = 5
dist^5(1) = 0
dist^5(2) = min { dist^4(2), dist^4(3) + cost(3,2) } = min { 1, 3 − 2 } = 1
dist^5(3) = min { dist^4(3), dist^4(4) + cost(4,3) } = min { 3, 5 − 2 } = 3
dist^5(4) = 5
dist^5(5) = min { dist^4(5), dist^4(2) + cost(2,5), dist^4(3) + cost(3,5) } = min { 0, 1 − 1, 3 + 1 } = 0
dist^5(6) = min { dist^4(6), dist^4(4) + cost(4,6) } = min { 4, 5 − 1 } = 4
dist^5(7) = min { dist^4(7), dist^4(5) + cost(5,7), dist^4(6) + cost(6,7) } = min { 5, 0 + 3, 4 + 3 } = 3


k = 6
dist^6(1) = 0
dist^6(2) = min { dist^5(2), dist^5(3) + cost(3,2) } = min { 1, 3 − 2 } = 1
dist^6(3) = min { dist^5(3), dist^5(4) + cost(4,3) } = min { 3, 5 − 2 } = 3
dist^6(4) = 5
dist^6(5) = min { dist^5(5), dist^5(2) + cost(2,5), dist^5(3) + cost(3,5) } = min { 0, 1 − 1, 3 + 1 } = 0
dist^6(6) = min { dist^5(6), dist^5(4) + cost(4,6) } = min { 4, 5 − 1 } = 4
dist^6(7) = min { dist^5(7), dist^5(5) + cost(5,7), dist^5(6) + cost(6,7) } = min { 3, 0 + 3, 4 + 3 } = 3
Algorithm
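A Python sketch of the relaxation scheme traced above, written as the common in-place variant (a single dist array rather than one array per k). The edge list is reconstructed from the cost(u, v) terms used in the k = 1..6 tables and is assumed to be complete:

```python
import math

# Edges (u, v, cost) reconstructed from the iterations above
edges = [(1, 2, 6), (1, 3, 5), (1, 4, 5),
         (2, 5, -1), (3, 2, -2), (3, 5, 1),
         (4, 3, -2), (4, 6, -1),
         (5, 7, 3), (6, 7, 3)]
n, source = 7, 1

dist = {v: math.inf for v in range(1, n + 1)}
dist[source] = 0
for _ in range(n - 1):                 # passes k = 1 .. n-1
    for u, v, c in edges:
        if dist[u] + c < dist[v]:      # relax edge (u, v)
            dist[v] = dist[u] + c
print(dist)   # {1: 0, 2: 1, 3: 3, 4: 5, 5: 0, 6: 4, 7: 3}
```

A further pass that still relaxes some edge would indicate a negative-weight cycle reachable from the source.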



Analysis
• Bellman-Ford works in O(VE)



Example-2 (Home Work)
Find the shortest paths from S to all other vertices using the Bellman-Ford algorithm.

Example-2


Travelling Salesman Problem
Problem Statement
• The travelling salesman problem consists of a salesman and a set of cities.
• The salesman has to visit each one of the cities exactly once, starting from a certain one (e.g., the hometown) and returning to the same city.
• The challenge of the problem is that the travelling salesman wants to minimize the total length of the trip.
• It is assumed that all cities are connected to all other cities.
Travelling Salesman Problem
Naive Solution:
1. Consider city 1 as the starting and ending point.
2. Generate all (n−1)! permutations of cities.
3. Calculate the cost of every permutation and keep track of the minimum-cost permutation.
4. Return the permutation with minimum cost.

Time Complexity: O(n!)


TSP using DP
• Assume the tour to be a simple path that starts and ends at vertex 1.
• Every tour consists of
  – an edge <1, k> for some k ∈ V − {1}, and
  – a path from vertex k to vertex 1.
• The path from k to 1 goes through each vertex in V − {1, k} exactly once.
• If the tour is optimal, then the path from k to 1 must be a shortest path going through all vertices in V − {1, k}.
• Let g(i, S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1.
• g(1, V − {1}) is the length of an optimal salesperson tour.


TSP using DP
• From the principle of optimality:
  g(1, V − {1}) is the length of a shortest path starting at vertex 1, going through all vertices in V − {1} (entering them via some k ∈ V − {1}), and terminating at vertex 1.
• Generalizing:
  g(i, S) is the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1.
• Example:
  g(1, {2,3,4}) = min { k=2: c12 + g(2, {3,4}),
                        k=3: c13 + g(3, {2,4}),
                        k=4: c14 + g(4, {2,3}) }


Example-1
Case 1: S = Ø, so |S| = 0:
g(i, Ø) = ci1, where 1 ≤ i ≤ n and i ≠ 1.

Case 2: |S| = 1, where i ≠ 1, 1 ∉ S, i ∉ S:
g(i, S) = min over j ∈ S of { cij + g(j, S − {j}) },
computed for i = 2, i = 3, and i = 4.

Case 3: |S| = 2, computed with the same relation.


Example-1

Since |S| ≤ n − 2, i.e., |S| ≤ 2, we do not compute g(i, S) for |S| = 3; that case is handled by the final relation:
g(1, V − {1}) = min over 2 ≤ k ≤ n of { c1k + g(k, V − {1, k}) }
g(1, {1,2,3,4} − {1}) = min over k = 2, 3, 4 of { c1k + g(k, {1,2,3,4} − {1, k}) }

The optimal path can be obtained by following the minimizing choices:
g(1, {2,3,4}) = c12 + g(2, {3,4})
             -> c24 + g(4, {3})
             -> c43 + g(3, Ø)

Optimal Path = 1 -> 2 -> 4 -> 3 -> 1
Optimal Cost = 10 + 10 + 9 + 6 = 35
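The g(i, S) recurrence can be sketched with memoization in Python. The cost matrix is not shown in the extracted text; the classic 4-city instance below is an assumption consistent with the traced tour 1 -> 2 -> 4 -> 3 -> 1 of cost 35:

```python
from functools import lru_cache

# Assumed cost matrix (0-based: city 1 of the slides is index 0)
C = [[0, 10, 15, 20],
     [5,  0,  9, 10],
     [6, 13,  0, 12],
     [8,  8,  9,  0]]
n = len(C)

@lru_cache(maxsize=None)
def g(i, S):
    """Shortest path length from i through every city in frozenset S, ending at city 0."""
    if not S:
        return C[i][0]                       # case |S| = 0: return home directly
    return min(C[i][j] + g(j, S - {j}) for j in S)

tour_cost = g(0, frozenset(range(1, n)))
print(tour_cost)    # → 35
```

Each of the Θ(n 2^n) (vertex, subset) states is computed once in O(n) time, giving the Θ(n^2 2^n) bound discussed in the Analysis.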
Example-2 (Home Work)

Complete the exercise.

Analysis

The DP solution computes g(k, S) for Θ(n 2^n) (vertex, subset) pairs, each taking O(n) time, so the total time is Θ(n^2 2^n) with Θ(n 2^n) space.


Reliability Design

• If ri is the reliability of the device at stage i, then the reliability of the whole system of n stages in series is the product

  r1 · r2 · · · rn = ∏ ri, i = 1 .. n


Reliability Design

• Let ri be the reliability of the device at stage i.
• If stage i has mi devices of type i in parallel, then the reliability of stage i is

  φi(mi) = 1 − (1 − ri)^mi
Example
• Design a 3-stage system with device types A, B, C. Their costs are 30, 15, 20 and their reliabilities are 0.9, 0.8, 0.5, respectively. The budget available is 105. Design a system with the highest reliability.

  Device        A     B     C
  Cost          30    15    20
  Reliability   0.9   0.8   0.5

Maximum number of devices that can be purchased (guaranteeing the purchase of one unit of each of the other devices), with budget = 105:
  u1 = floor((105 − (15 + 20)) / 30) = 2
  u2 = floor((105 − (30 + 20)) / 15) = 3
  u3 = floor((105 − (30 + 15)) / 20) = 2
Example (budget = 105, φi(mi) = 1 − (1 − ri)^mi)

  Devices in parallel   A: reliability, cost   B: reliability, cost   C: reliability, cost
  1                     0.9,   30              0.8,   15              0.5,   20
  2                     0.99,  60              0.96,  30              0.75,  40
  3                     -                      0.992, 45              0.875, 60

• S^i: the set of all tuples (f, x) generated from the various sequences m1, m2, . . . , mi over the first i stages, where f is reliability and x is cost.
• S^i_j: i is the stage number, j is the number of devices used at stage i.
• Example:
  – S^1_1 means 1 device of type A at stage 1.
  – S^1_2 means 2 devices of type A at stage 1.


S^1_1 = { (0.9, 30) },  S^1_2 = { (0.99, 60) }
Combining: S^1 = { (0.9, 30), (0.99, 60) }

S^2_1 = { (0.9 × 0.8, 45), (0.99 × 0.8, 75) } = { (0.72, 45), (0.792, 75) }
S^2_2 = { (0.9 × 0.96, 60) } = { (0.864, 60) }     (0.99 × 0.96 = 0.9504 at cost 90 is dropped; see Note 1)
S^2_3 = { (0.9 × 0.992, 75) } = { (0.8928, 75) }   (0.99 × 0.992 = 0.982 at cost 105 is dropped; see Note 1)

Combining: { (0.72, 45), (0.792, 75), (0.864, 60), (0.8928, 75) }
After removing dominated tuples (Note 2): S^2 = { (0.72, 45), (0.864, 60), (0.8928, 75) }

Note 1: a configuration is dropped when it would not leave adequate funds to complete the system.
Note 2: for the same cost, retain only the tuple with the highest reliability; also drop any tuple that is both costlier and less reliable than another.

S^3_1 = { (0.36, 65), (0.432, 80), (0.4464, 95) }
S^3_2 = { (0.54, 85), (0.648, 100) }   (0.8928 × 0.75 at cost 115 exceeds the budget)
S^3_3 = { (0.63, 105) }                (the remaining combinations exceed the budget)

Combining and removing dominated tuples:
S^3 = { (0.36, 65), (0.432, 80), (0.54, 85), (0.648, 100) }

Best reliability is 0.648 with cost 100: A with 1 unit, B with 2 units, and C with 2 units in parallel
(0.9 × 0.96 × 0.75 = 0.648, cost 30 + 30 + 40 = 100).
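The tuple construction can be checked with a small brute-force sketch in Python. This is a direct enumeration over m1 ≤ u1, m2 ≤ u2, m3 ≤ u3 under the budget, not the S^i dominance procedure itself:

```python
from itertools import product

# Devices from the example: (cost, reliability) for A, B, C; budget 105
devices = [(30, 0.9), (15, 0.8), (20, 0.5)]
budget = 105
u = [2, 3, 2]                               # upper bounds computed above

best = (0.0, 0, None)                       # (reliability, cost, (m1, m2, m3))
for m in product(*(range(1, ui + 1) for ui in u)):
    cost = sum(mi * ci for mi, (ci, _) in zip(m, devices))
    if cost > budget:
        continue
    rel = 1.0
    for mi, (_, ri) in zip(m, devices):
        rel *= 1 - (1 - ri) ** mi           # stage reliability φ_i(m_i)
    if rel > best[0]:
        best = (rel, cost, m)

rel, cost, m = best
print(round(rel, 3), cost, m)   # → 0.648 100 (1, 2, 2)
```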


Assignment-4
1. Write the multistage graph algorithm using the backward approach.
2. Define the transitive closure of a directed graph. Find the transitive closure matrix for the graph whose adjacency matrix is given.
3. Apply the bottom-up dynamic programming algorithm to the following instance of the knapsack problem. Knapsack capacity = 10.


Assignment-4
4. For the given graph, obtain the optimal cost tour using dynamic programming.
5. Apply Floyd's algorithm to find the all-pairs shortest paths for the graph given below.
