
Design and Analysis of Algorithms
Divide and Conquer

Juliet Moso
Department of Computer Science
Divide-and-Conquer

The best-known algorithm design strategy:

1. Divide an instance of the problem into two or more smaller instances.

2. Solve the smaller instances recursively.

3. Obtain the solution to the original (larger) instance by combining these solutions.



Divide-and-Conquer Technique

[Diagram] A problem of size n (an instance) is divided into subproblem 1 of size n/2 and subproblem 2 of size n/2; a solution to subproblem 1 and a solution to subproblem 2 are then combined into a solution to the original problem.

It generally leads to a recursive algorithm!



Basic Steps in divide and conquer

The divide-and-conquer paradigm involves three steps at each level of the recursion:
Divide the problem into a number of subproblems.
Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner.
Combine the solutions to the subproblems into the solution for the original problem.



Merge sort algorithm

The merge sort algorithm closely follows the divide-and-conquer paradigm. Intuitively, it operates as follows.
To sort an array A[p . . r]:
Divide: Divide the n-element sequence to be sorted into two subsequences of n/2 elements each.
Conquer: Sort the two subsequences recursively using merge sort.
Combine: Merge the two sorted subsequences to produce the sorted answer.



Merge Sort
Alg.: MERGE-SORT(A, p, r)
    if p < r                        // check for base case
        q ← ⌊(p + r)/2⌋             // divide
        MERGE-SORT(A, p, q)         // conquer
        MERGE-SORT(A, q + 1, r)     // conquer
        MERGE(A, p, q, r)           // combine

Initial call: MERGE-SORT(A, 1, n)

Example array (indices 1 to 8, so p = 1 and r = 8):  5 2 4 7 1 3 2 6
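A minimal runnable sketch of the same idea in Python (a list-based version rather than the in-place pseudocode above; names are my own):

def merge_sort(a):
    """Sort a list with divide and conquer; returns a new sorted list."""
    if len(a) <= 1:                        # base case: 0 or 1 element is already sorted
        return a
    mid = len(a) // 2                      # divide
    left = merge_sort(a[:mid])             # conquer the left half
    right = merge_sort(a[mid:])            # conquer the right half
    out, i, j = [], 0, 0                   # combine: merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))   # [1, 2, 2, 3, 4, 5, 6, 7]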



Example – n Power of 2 (Divide)

Starting array (indices 1 to 8), first split at q = 4:
5 2 4 7 1 3 2 6
→  5 2 4 7  |  1 3 2 6
→  5 2  |  4 7  |  1 3  |  2 6
→  5  |  2  |  4  |  7  |  1  |  3  |  2  |  6



Example – n Power of 2 (Conquer and Merge)

5  |  2  |  4  |  7  |  1  |  3  |  2  |  6
→  2 5  |  4 7  |  1 3  |  2 6
→  2 4 5 7  |  1 2 3 6
→  1 2 2 3 4 5 6 7



Example – n Not a Power of 2 (Divide)

Starting array (indices 1 to 11), first split at q = 6:
4 7 2 6 1 4 7 3 5 2 6
→  4 7 2 6 1 4  |  7 3 5 2 6            (next splits at q = 3 and q = 9)
→  4 7 2  |  6 1 4  |  7 3 5  |  2 6
→  4 7  |  2  |  6 1  |  4  |  7 3  |  5  |  2  |  6
→  4  |  7  |  6  |  1  |  7  |  3      (the remaining two-element pieces are split into single elements)



Example – n Not a Power of 2 (Conquer and Merge)

4  |  7  |  6  |  1  |  7  |  3
→  4 7  |  2  |  1 6  |  4  |  3 7  |  5  |  2  |  6
→  2 4 7  |  1 4 6  |  3 5 7  |  2 6
→  1 2 4 4 6 7  |  2 3 5 6 7
→  1 2 2 3 4 4 5 6 6 7 7



Merging

Example (indices 1 to 8): A = 2 4 5 7 1 2 3 6, with p = 1, q = 4, r = 8.

Input: Array A and indices p, q, r such that
    p ≤ q < r
    Subarrays A[p . . q] and A[q + 1 . . r] are sorted
Output: One single sorted subarray A[p . . r]



Merge - Pseudocode
Alg.: MERGE(A, p, q, r)
1. Compute n1 = q − p + 1 and n2 = r − q
2. Copy the first n1 elements into L[1 . . n1 + 1] and the next n2 elements into R[1 . . n2 + 1]
3. L[n1 + 1] ← ∞;  R[n2 + 1] ← ∞
4. i ← 1;  j ← 1
5. for k ← p to r
6.     do if L[i] ≤ R[j]
7.            then A[k] ← L[i]
8.                 i ← i + 1
9.            else A[k] ← R[j]
10.                j ← j + 1

Example: A = 2 4 5 7 1 2 3 6 with p = 1, q = 4, r = 8, so L = 2 4 5 7 ∞ and R = 1 2 3 6 ∞.
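The same MERGE procedure as a runnable Python sketch (0-based indices and math.inf as the ∞ sentinel; the function name is my own):

import math

def merge(A, p, q, r):
    """Merge the sorted runs A[p..q] and A[q+1..r] in place (0-based, inclusive)."""
    L = A[p:q + 1] + [math.inf]            # left run plus sentinel
    R = A[q + 1:r + 1] + [math.inf]        # right run plus sentinel
    i = j = 0
    for k in range(p, r + 1):              # refill A[p..r], always taking the smaller head
        if L[i] <= R[j]:
            A[k] = L[i]; i += 1
        else:
            A[k] = R[j]; j += 1

A = [2, 4, 5, 7, 1, 2, 3, 6]
merge(A, 0, 3, 7)
print(A)                                   # [1, 2, 2, 3, 4, 5, 6, 7]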
Analyzing Divide-and-Conquer Algorithms

The recurrence is based on the three steps of the paradigm:
T(n) – running time on a problem of size n
Divide the problem into a subproblems, each of size n/b: takes D(n) time
Conquer (solve) the subproblems: aT(n/b)
Combine the solutions: C(n)

T(n) = Θ(1)                        if n ≤ c
T(n) = aT(n/b) + D(n) + C(n)       otherwise
MERGE-SORT Running Time

Divide: Compute the middle of the subarray, which takes constant time: D(n) = Θ(1)
Conquer: recursively solve 2 subproblems, each of size n/2 ⇒ 2T(n/2)
Combine: MERGE on an n-element subarray takes Θ(n) time ⇒ C(n) = Θ(n)
The recurrence for the worst-case running time T(n) of merge sort:
T(n) = Θ(1)              if n = 1
T(n) = 2T(n/2) + Θ(n)    if n > 1
The solution of this recurrence is Θ(n log n).
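A quick justification of that bound, written as a recursion-tree sketch (assuming the Θ(n) divide-and-combine cost is cn for some constant c):

\[
T(n) = cn + 2\cdot c\frac{n}{2} + 4\cdot c\frac{n}{4} + \cdots
     = \underbrace{cn + cn + \cdots + cn}_{\log_2 n + 1 \text{ levels}}
     = cn(\log_2 n + 1) = \Theta(n \log n).
\]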
Reading Assignment

Using an example, explain how selection sort works, giving its time analysis.



CCS 3102: Dynamic programming

Longest Common Subsequence and the 0-1 knapsack problem
Dynamic Programming
Dynamic programming is a method for solving complex real-world problems, such as:
Traveling salesman problem
Fibonacci sequence
Stagecoach problem
Knapsack problem
Applicable when subproblems are not independent
Subproblems share subsubproblems
A divide and conquer approach would repeatedly solve the common
subproblems
Dynamic programming solves every subproblem just once and stores the
answer in a table
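Fibonacci is the smallest of the examples above; a tiny sketch (my own, not from the slides) of "solve every subproblem just once and store the answer in a table":

from functools import lru_cache

@lru_cache(maxsize=None)          # the table of already-computed answers
def fib(n):
    """n-th Fibonacci number; each subproblem is computed only once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))                    # 102334155, in O(n) recursive calls instead of O(2^n)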
Steps to Dynamic Programming
Every problem is divided into stages
Each stage requires a decision
Decisions are made to determine the state of the next stage
The solution procedure is to find an optimal solution at each
stage for every possible state
This solution procedure often starts at the last stage and works its way back toward the first stage
Dynamic Programming
Used for optimization problems
A set of choices must be made to get an optimal solution
Find a solution with the optimal value (minimum or
maximum)
There may be many solutions that lead to an optimal value
Our goal: find an optimal solution
It is used when the solution can be recursively described in terms of solutions to subproblems (optimal substructure)
Four Steps of Problem Solving with DP
1. Characterize the structure of an optimal solution: Assume you have an optimal solution and show how it must decompose. Sometimes it is useful to write a brute-force solution, observe its redundancies, and characterize a more refined solution.
2. Recursively define the value of an optimal solution: Write a
recursive cost function that reflects the above structure.
3. Compute the value of an optimal solution: Write code to compute
the recursive values, solving smaller problems first to avoid redundant
computation
4. Construct an optimal solution from the computed information:
Augment the code as needed to record the structure of the solution.
Longest Common Subsequence (LCS)
Application: comparison of two DNA strings
Ex: X = {A B C B D A B}, Y = {B D C A B A}
A Longest Common Subsequence of X and Y is B C B A:
X = A B C B D A B
Y = B D C A B A
A brute-force algorithm would compare each subsequence of X with the symbols in Y.
If |X| = m and |Y| = n, then there are 2^m subsequences of X; we must compare each with Y (n comparisons each).
So the running time of the brute-force algorithm is O(n·2^m).
Notice that the LCS problem has optimal substructure: solutions of
subproblems are parts of the final solution.
Subproblems: “find LCS of pairs of prefixes of X and Y”
LCS Algorithm
First we’ll find the length of LCS. Later we’ll modify the
algorithm to find LCS itself.
Define Xi, Yj to be the prefixes of X and Y of length i and j
respectively
Define c[i, j] to be the length of LCS of Xi and Yj
Then the length of LCS of X and Y will be c[m, n]
c[i, j] = c[i−1, j−1] + 1                    if x[i] = y[j]
c[i, j] = max(c[i, j−1], c[i−1, j])          otherwise
LCS recursive solution
c[i, j] = c[i−1, j−1] + 1                    if x[i] = y[j]
c[i, j] = max(c[i, j−1], c[i−1, j])          otherwise
We start with i = j = 0 (empty substrings of x and y)
Since X0 and Y0 are empty strings, their LCS is always
empty (i.e. c[0,0] = 0)
LCS of empty string and any other string is empty, so for
every i and j: c[0, j] = c[i,0] = 0
LCS recursive solution
c[i, j] = c[i−1, j−1] + 1                    if x[i] = y[j]
c[i, j] = max(c[i, j−1], c[i−1, j])          otherwise

When we calculate c[i,j], we consider two cases:
First case: x[i] = y[j]: one more symbol in strings X and Y matches, so the length of the LCS of Xi and Yj equals the length of the LCS of the smaller strings Xi−1 and Yj−1, plus 1.
Second case: x[i] ≠ y[j]: as the symbols don't match, our solution is not improved, and the length of LCS(Xi, Yj) is the same as before (i.e. the maximum of LCS(Xi, Yj−1) and LCS(Xi−1, Yj)).
LCS Length Algorithm
LCS-Length(X, Y)
1. m = length(X) // get the # of symbols in X
2. n = length(Y) // get the # of symbols in Y
3. for i = 1 to m c[i,0] = 0 // special case: Y0
4. for j = 1 to n c[0,j] = 0 // special case: X0
5. for i = 1 to m // for all Xi
6. for j = 1 to n // for all Yj
7. if ( Xi == Yj )
8. c[i,j] = c[i-1,j-1] + 1
9. else c[i,j] = max( c[i-1,j], c[i,j-1] )
10. return c
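A direct Python transcription of LCS-Length (a sketch using 0-based lists; c has m+1 rows and n+1 columns):

def lcs_length(X, Y):
    """Return the full c table; c[i][j] is the LCS length of X[:i] and Y[:j]."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]   # row 0 and column 0 stay 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:            # symbols match: extend the diagonal value
                c[i][j] = c[i - 1][j - 1] + 1
            else:                               # no match: best of the two neighbours
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c

c = lcs_length("ABCB", "BDCAB")
print(c[4][5])                                  # 3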
LCS Example
We’ll see how the LCS algorithm works on the following example:
X = ABCB
Y = BDCAB

What is the Longest Common Subsequence of X and Y?

LCS(X, Y) = BCB:
X = A B C B
Y = B D C A B
LCS Example (0)    X = ABCB, Y = BDCAB

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi
  1  A
  2  B
  3  C
  4  B

X = ABCB; m = |X| = 4
Y = BDCAB; n = |Y| = 5
Allocate array c[0..m, 0..n]
LCS Example (1)    X = ABCB, Y = BDCAB

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi    0   0   0   0   0   0
  1  A     0
  2  B     0
  3  C     0
  4  B     0

for i = 1 to m c[i,0] = 0
for j = 1 to n c[0,j] = 0
LCS Example (2)    X = ABCB, Y = BDCAB

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi    0   0   0   0   0   0
  1  A     0   0
  2  B     0
  3  C     0
  4  B     0

if ( Xi == Yj )
c[i,j] = c[i-1,j-1] + 1
else c[i,j] = max( c[i-1,j], c[i,j-1] )
LCS Example (3)    X = ABCB, Y = BDCAB

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi    0   0   0   0   0   0
  1  A     0   0   0   0
  2  B     0
  3  C     0
  4  B     0

if ( Xi == Yj )
c[i,j] = c[i-1,j-1] + 1
else c[i,j] = max( c[i-1,j], c[i,j-1] )
LCS Example (4)    X = ABCB, Y = BDCAB

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi    0   0   0   0   0   0
  1  A     0   0   0   0   1
  2  B     0
  3  C     0
  4  B     0

if ( Xi == Yj )
c[i,j] = c[i-1,j-1] + 1
else c[i,j] = max( c[i-1,j], c[i,j-1] )
LCS Example (5)    X = ABCB, Y = BDCAB

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi    0   0   0   0   0   0
  1  A     0   0   0   0   1   1
  2  B     0
  3  C     0
  4  B     0

if ( Xi == Yj )
c[i,j] = c[i-1,j-1] + 1
else c[i,j] = max( c[i-1,j], c[i,j-1] )
LCS Example (6)    X = ABCB, Y = BDCAB

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi    0   0   0   0   0   0
  1  A     0   0   0   0   1   1
  2  B     0   1
  3  C     0
  4  B     0

if ( Xi == Yj )
c[i,j] = c[i-1,j-1] + 1
else c[i,j] = max( c[i-1,j], c[i,j-1] )
LCS Example (7)    X = ABCB, Y = BDCAB

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi    0   0   0   0   0   0
  1  A     0   0   0   0   1   1
  2  B     0   1   1   1   1
  3  C     0
  4  B     0

if ( Xi == Yj )
c[i,j] = c[i-1,j-1] + 1
else c[i,j] = max( c[i-1,j], c[i,j-1] )
LCS Example (8)    X = ABCB, Y = BDCAB

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi    0   0   0   0   0   0
  1  A     0   0   0   0   1   1
  2  B     0   1   1   1   1   2
  3  C     0
  4  B     0

if ( Xi == Yj )
c[i,j] = c[i-1,j-1] + 1
else c[i,j] = max( c[i-1,j], c[i,j-1] )
LCS Example (10)    X = ABCB, Y = BDCAB

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi    0   0   0   0   0   0
  1  A     0   0   0   0   1   1
  2  B     0   1   1   1   1   2
  3  C     0   1   1
  4  B     0

if ( Xi == Yj )
c[i,j] = c[i-1,j-1] + 1
else c[i,j] = max( c[i-1,j], c[i,j-1] )
LCS Example (11)    X = ABCB, Y = BDCAB

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi    0   0   0   0   0   0
  1  A     0   0   0   0   1   1
  2  B     0   1   1   1   1   2
  3  C     0   1   1   2
  4  B     0

if ( Xi == Yj )
c[i,j] = c[i-1,j-1] + 1
else c[i,j] = max( c[i-1,j], c[i,j-1] )
LCS Example (12)    X = ABCB, Y = BDCAB

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi    0   0   0   0   0   0
  1  A     0   0   0   0   1   1
  2  B     0   1   1   1   1   2
  3  C     0   1   1   2   2   2
  4  B     0

if ( Xi == Yj )
c[i,j] = c[i-1,j-1] + 1
else c[i,j] = max( c[i-1,j], c[i,j-1] )
LCS Example (13)    X = ABCB, Y = BDCAB

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi    0   0   0   0   0   0
  1  A     0   0   0   0   1   1
  2  B     0   1   1   1   1   2
  3  C     0   1   1   2   2   2
  4  B     0   1

if ( Xi == Yj )
c[i,j] = c[i-1,j-1] + 1
else c[i,j] = max( c[i-1,j], c[i,j-1] )
LCS Example (14)    X = ABCB, Y = BDCAB

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi    0   0   0   0   0   0
  1  A     0   0   0   0   1   1
  2  B     0   1   1   1   1   2
  3  C     0   1   1   2   2   2
  4  B     0   1   1   2   2

if ( Xi == Yj )
c[i,j] = c[i-1,j-1] + 1
else c[i,j] = max( c[i-1,j], c[i,j-1] )
LCS Example (15)    X = ABCB, Y = BDCAB

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi    0   0   0   0   0   0
  1  A     0   0   0   0   1   1
  2  B     0   1   1   1   1   2
  3  C     0   1   1   2   2   2
  4  B     0   1   1   2   2   3
if ( Xi == Yj )
c[i,j] = c[i-1,j-1] + 1
else c[i,j] = max( c[i-1,j], c[i,j-1] )
LCS Algorithm Running Time
The LCS algorithm calculates the value of every entry of the array c[0..m, 0..n].
So what is the running time? O(m·n), since each c[i,j] is calculated in constant time and there are Θ(m·n) entries in the array.
How to find actual LCS
So far, we have just found the length of the LCS, but not the LCS itself.
We want to modify this algorithm to make it output the Longest Common Subsequence of X and Y.
Each c[i,j] depends on c[i-1,j] and c[i,j-1], or on c[i-1,j-1].
For each c[i,j] we can say how it was acquired. For example, in the 2 × 2 block

    2  2
    2  3

the bottom-right entry was obtained as c[i,j] = c[i-1,j-1] + 1 = 2 + 1 = 3.
How to find actual LCS - continued
Remember that
c[i, j] = c[i−1, j−1] + 1                    if x[i] = y[j]
c[i, j] = max(c[i, j−1], c[i−1, j])          otherwise
So we can start from c[m,n] and go backwards
Whenever c[i,j] = c[i-1, j-1]+1, remember x[i]
(because x[i] is a part of LCS)
When i=0 or j=0 (i.e. we reached the beginning),
output remembered letters in reverse order
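The backward walk in Python (a sketch; it builds the c table the same way as lcs_length above and then follows the matches back from c[m][n]):

def lcs(X, Y):
    """Return one Longest Common Subsequence of X and Y."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    out, i, j = [], m, n                   # start from c[m][n] and go backwards
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:           # a match: this symbol belongs to the LCS
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:   # otherwise move towards the larger neighbour
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))          # letters were remembered in reverse order

print(lcs("ABCB", "BDCAB"))                # BCB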
Additional Information
c[i, j] = 0                                  if i = 0 or j = 0
c[i, j] = c[i−1, j−1] + 1                    if x[i] = y[j]
c[i, j] = max(c[i, j−1], c[i−1, j])          if x[i] ≠ y[j]

A matrix b[i, j] can record, for each subproblem [i, j], which choice was made to obtain the optimal value:
If x[i] = y[j], then b[i, j] = “↖” (the value came from c[i−1, j−1])
Else, if c[i−1, j] ≥ c[i, j−1], then b[i, j] = “↑” (the value came from c[i−1, j])
Else b[i, j] = “←” (the value came from c[i, j−1])
Finding LCS

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi    0   0   0   0   0   0
  1  A     0   0   0   0   1   1
  2  B     0   1   1   1   1   2
  3  C     0   1   1   2   2   2
  4  B     0   1   1   2   2   3
Finding LCS (2)

      j    0   1   2   3   4   5
  i       Yj   B   D   C   A   B
  0  Xi    0   0   0   0   0   0
  1  A     0   0   0   0   1   1
  2  B     0   1   1   1   1   2
  3  C     0   1   1   2   2   2
  4  B     0   1   1   2   2   3
LCS (reversed order): B C B
LCS (straight order): B C B
(this string turned out to be a palindrome)
Knapsack problem
There are two versions of the problem:
(1) The “0-1 knapsack problem”
    Items are indivisible; you either take an item or not.
    Solved with dynamic programming.
(2) The “Fractional knapsack problem”
    Items are divisible: you can take any fraction of an item.
    Solved with a greedy algorithm.
The 0-1 knapsack problem
A thief breaks into a house, carrying a knapsack...
He can carry up to 25 kilograms of loot
He has to choose which of N items to steal
Each item has some weight and some value
“0-1” because each item is stolen (1) or not stolen (0)
He has to select the items to steal in order to maximize the
value of his loot, but cannot exceed 25 kilograms
A greedy algorithm does not find an optimal solution
A dynamic programming algorithm works well
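A small numeric illustration (my own numbers, not from the slides) of why taking items greedily by value/weight ratio can fail in the 0-1 setting:

# Capacity 10; items as (weight, value). Greedy by ratio grabs (7, 10) first and then
# nothing else fits, for value 10; the optimal 0-1 choice is the two (5, 6) items, value 12.
items = [(7, 10), (5, 6), (5, 6)]
W = 10

greedy_value, room = 0, W
for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
    if w <= room:                          # greedy: take the whole item if it still fits
        greedy_value += v
        room -= w

print(greedy_value)                        # 10, while the optimum is 12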
0-1 Knapsack Algorithm
for w = 0 to W
V[0,w] = 0
for i = 1 to n
V[i,0] = 0
for i = 1 to n
for w = 0 to W
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
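A runnable Python version of this table-filling algorithm (a sketch; function and variable names are my own):

def knapsack_01(weights, values, W):
    """V[i][w] = best value using only the first i items with capacity w."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]      # row 0 and column 0 are all 0
    for i in range(1, n + 1):
        wi, bi = weights[i - 1], values[i - 1]
        for w in range(W + 1):
            if wi <= w and bi + V[i - 1][w - wi] > V[i - 1][w]:
                V[i][w] = bi + V[i - 1][w - wi]    # item i is part of the solution
            else:
                V[i][w] = V[i - 1][w]              # item i is left out
    return V

V = knapsack_01([2, 3, 4, 5], [3, 4, 5, 6], 5)
print(V[4][5])                                     # 7 (items 1 and 2: weights 2+3, values 3+4)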
Example
Given some items, pack the knapsack to get the maximum
total value.
Each item has some weight and some value.
Let’s run our algorithm on the following data:
n = 4 (# of elements)
W = 5 (max weight)
Item #   Weight   Value (benefit)
   1        2         3
   2        3         4
   3        4         5
   4        5         6
Example (2)

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1
 2
 3
 4

for w = 0 to W
    V[0,w] = 0
Example (3)

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0
 2     0
 3     0
 4     0

for i = 1 to n
    V[i,0] = 0
Example (4)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 1, bi = 3, wi = 2, w = 1, w − wi = −1

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0
 2     0
 3     0
 4     0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (5)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 1, bi = 3, wi = 2, w = 2, w − wi = 0

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3
 2     0
 3     0
 4     0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (6)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 1, bi = 3, wi = 2, w = 3, w − wi = 1

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3
 2     0
 3     0
 4     0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (7)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 1, bi = 3, wi = 2, w = 4, w − wi = 2

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3
 2     0
 3     0
 4     0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (8)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 1, bi = 3, wi = 2, w = 5, w − wi = 3

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0
 3     0
 4     0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (9)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 2, bi = 4, wi = 3, w = 1, w − wi = −2

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0
 3     0
 4     0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (10)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 2, bi = 4, wi = 3, w = 2, w − wi = −1

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0   3
 3     0
 4     0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (11)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 2, bi = 4, wi = 3, w = 3, w − wi = 0

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0   3   4
 3     0
 4     0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (12)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 2, bi = 4, wi = 3, w = 4, w − wi = 1

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0   3   4   4
 3     0
 4     0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (13)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 2, bi = 4, wi = 3, w = 5, w − wi = 2

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0   3   4   4   7
 3     0
 4     0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (14)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 3, bi = 5, wi = 4, w = 1..3

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0   3   4   4   7
 3     0   0   3   4
 4     0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (15)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 3, bi = 5, wi = 4, w = 4, w − wi = 0

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0   3   4   4   7
 3     0   0   3   4   5
 4     0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (16)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 3, bi = 5, wi = 4, w = 5, w − wi = 1

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0   3   4   4   7
 3     0   0   3   4   5   7
 4     0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (17)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 4, bi = 6, wi = 5, w = 1..4

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0   3   4   4   7
 3     0   0   3   4   5   7
 4     0   0   3   4   5
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (18)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 4, bi = 6, wi = 5, w = 5, w − wi = 0

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0   3   4   4   7
 3     0   0   3   4   5   7
 4     0   0   3   4   5   7
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
How to find actual Knapsack Items
All of the information we need is in the table.
V[n,W] is the maximal value of items that can be placed in
the Knapsack.
Let i = n and k = W
if V[i,k] ≠ V[i−1,k] then
    mark the ith item as in the knapsack
    i = i−1, k = k − wi
else
    i = i−1    // assume the ith item is not in the knapsack
               // (could it be in the optimally packed knapsack?)
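A sketch of that backward walk in Python, run on the table from the worked example (the function name and the literal table are mine):

def knapsack_items(weights, V, W):
    """Walk the filled table V backwards and report which items are packed."""
    chosen = []
    i, k = len(weights), W
    while i > 0 and k > 0:
        if V[i][k] != V[i - 1][k]:     # the value changed, so item i is in the knapsack
            chosen.append(i)           # items are numbered 1..n as on the slides
            k -= weights[i - 1]
        i -= 1                         # move on to item i-1 either way
    return sorted(chosen)

# The filled table from the example: items (2,3), (3,4), (4,5), (5,6) and W = 5.
V = [[0, 0, 0, 0, 0, 0],
     [0, 0, 3, 3, 3, 3],
     [0, 0, 3, 4, 4, 7],
     [0, 0, 3, 4, 5, 7],
     [0, 0, 3, 4, 5, 7]]
print(knapsack_items([2, 3, 4, 5], V, 5))   # [1, 2]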
Finding the Items
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 4, k = 5, bi = 6, wi = 5, V[i,k] = 7, V[i−1,k] = 7

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0   3   4   4   7
 3     0   0   3   4   5   7
 4     0   0   3   4   5   7
i=n, k=W
while i,k > 0
if V[i,k] ≠ V[i−1,k] then
mark the ith item as in the knapsack
i = i−1, k = k-wi
else
i = i−1
Finding the Items (2)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 4, k = 5, bi = 6, wi = 5, V[i,k] = 7, V[i−1,k] = 7

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0   3   4   4   7
 3     0   0   3   4   5   7
 4     0   0   3   4   5   7
i=n, k=W
while i,k > 0
if V[i,k] ≠ V[i−1,k] then
mark the ith item as in the knapsack
i = i−1, k = k-wi
else
i = i−1
Finding the Items (3)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 3, k = 5, bi = 5, wi = 4, V[i,k] = 7, V[i−1,k] = 7

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0   3   4   4   7
 3     0   0   3   4   5   7
 4     0   0   3   4   5   7
i=n, k=W
while i,k > 0
if V[i,k] ≠ V[i−1,k] then
mark the ith item as in the knapsack
i = i−1, k = k-wi
else
i = i−1
Finding the Items (4)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 2, k = 5, bi = 4, wi = 3, V[i,k] = 7, V[i−1,k] = 3, k − wi = 2

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0   3   4   4   7
 3     0   0   3   4   5   7
 4     0   0   3   4   5   7
i=n, k=W
while i,k > 0
if V[i,k] ≠ V[i−1,k] then
mark the ith item as in the knapsack
i = i−1, k = k-wi
else
i = i−1
Finding the Items (5)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 1, k = 2, bi = 3, wi = 2, V[i,k] = 3, V[i−1,k] = 0, k − wi = 0

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0   3   4   4   7
 3     0   0   3   4   5   7
 4     0   0   3   4   5   7
i=n, k=W
while i,k > 0
if V[i,k] ≠ V[i−1,k] then
mark the ith item as in the knapsack
i = i−1, k = k-wi
else
i = i−1
Finding the Items (6)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
i = 0, k = 0.  The optimal knapsack should contain items {1, 2}.

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0   3   4   4   7
 3     0   0   3   4   5   7
 4     0   0   3   4   5   7

i=n, k=W
while i,k > 0
if V[i,k] ≠ V[i−1,k] then
mark the ith item as in the knapsack
i = i−1, k = k-wi
else
i = i−1
Finding the Items (7)
Items (weight, value):  1: (2, 3)   2: (3, 4)   3: (4, 5)   4: (5, 6)
The optimal knapsack should contain items {1, 2}.

i\W    0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0   0   3   3   3   3
 2     0   0   3   4   4   7
 3     0   0   3   4   5   7
 4     0   0   3   4   5   7

i=n, k=W
while i,k > 0
if V[i,k] ≠ V[i−1,k] then
mark the ith item as in the knapsack
i = i−1, k = k-wi
else
i = i−1
Summary
Dynamic Programming applies when the problem has these characteristics:
Recursive Decomposition
The problem has recursive structure: it breaks down into smaller problems
of the same type. This characteristic is shared with divide and conquer.
Overlapping Subproblems
The subproblems solved by a recursive solution overlap (same subproblems
are revisited more than once). This means we can save time by preventing
the redundant computations.
Optimal Substructure
Any optimal solution involves making a choice that leaves one or more
subproblems to solve, and the solutions to the subproblems used within the
optimal solution must themselves be optimal. This means that optimized
recursive solutions can be used to construct optimized larger solutions.
Summary
Dynamic programming can be approached top-down or bottom-up:
Top-Down with memoization:
Write a recursive procedure to solve the problem, computing subproblems
as needed. Each time a sub-problem is encountered, see whether you have
stored it in a table, and if not, solve it and store the solution.
Bottom-Up:
Order the subproblems such that "smaller" problems are solved first, so
their solutions are available in the table before "larger" problems need
them. (This ordering need not be based on literal size.)
Both have the same asymptotic running time. The top-down procedure has
the overhead of recursion, but computes only the subproblems that are
actually needed.
Bottom-up is used the most in practice.
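A small top-down sketch of the earlier LCS computation, to contrast with the bottom-up table used on the slides (names are my own):

from functools import lru_cache

def lcs_length_topdown(X, Y):
    """LCS length via recursion plus a memo table of already-solved subproblems."""
    @lru_cache(maxsize=None)               # one stored answer per (i, j) pair
    def c(i, j):
        if i == 0 or j == 0:               # an empty prefix has an empty LCS
            return 0
        if X[i - 1] == Y[j - 1]:
            return c(i - 1, j - 1) + 1
        return max(c(i, j - 1), c(i - 1, j))
    return c(len(X), len(Y))

print(lcs_length_topdown("ABCB", "BDCAB"))   # 3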
Summary
Dynamic programming solutions can often be quite complex and tricky.
Dynamic programming is used for optimization problems, especially ones that would otherwise take exponential time.
Only problems that satisfy the principle of optimality are suitable for dynamic programming solutions.
Since exponential time is unacceptable for all but the smallest problems, dynamic programming is sometimes essential.
