Chapter 4
Advanced Algorithm Design and Analysis Techniques
Yonas Y.
Algorithm Analysis and Design
School of Electrical and Computer Engineering
Outline
1 Dynamic Programming
Rod cutting problem
Elements of dynamic programming
Longest common subsequence
Optimal binary search trees
2 Greedy Algorithms
An activity-selection problem
Elements of the greedy strategy
Huffman Codes
3 Amortized Analysis
Aggregate analysis
Dynamic Programming
Rod cutting
Sterling Enterprises buys long steel rods and cuts them into shorter
rods, which it then sells.
Each cut is free.
The management of Sterling Enterprises wants to know the
best way to cut up the rods.
We assume that we know, for i = 1, 2, . . ., the price p_i in birr
that Sterling Enterprises charges for a rod of length i inches.
Rod lengths are always an integral number of inches.
length i  | 1 | 2 | 3 | 4 | 5  | 6  | 7  | 8  | 9  | 10
price p_i | 1 | 5 | 8 | 9 | 10 | 17 | 17 | 20 | 24 | 30

Table: 4.1 A sample price table for rods. Each rod of length i inches
earns the company p_i birr of revenue.
If an optimal solution cuts the rod into k pieces, then
n = i_1 + i_2 + . . . + i_k
Once we make the first cut, we may consider the two pieces
as independent instances of the rod-cutting problem.
The rod is cut into a first piece of length i cut off the left-hand
end, and then a right-hand remainder of length n - i.
Only the remainder, and not the first piece, may be further
divided.
Algorithm 1 CUT-ROD(p, n)
1: if n == 0 then
2: return 0
3: end if
4: q = −∞
5: for i = 1 to n do
6: q = max (q, p[i] + CUT-ROD(p, n - i))
7: end for
8: return q
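As an illustration (not part of the original slides), the pseudocode above translates directly into Python; `p` is assumed to be a 0-indexed price list with `p[0] = 0`:

```python
def cut_rod(p, n):
    """Naive recursive rod cutting: return the maximum revenue for a rod
    of length n, where p[i] is the price of a rod of length i (p[0] = 0)."""
    if n == 0:
        return 0
    q = float("-inf")
    for i in range(1, n + 1):
        # best revenue over all choices of the first piece, of length i
        q = max(q, p[i] + cut_rod(p, n - i))
    return q

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]  # Table 4.1
print(cut_rod(prices, 4))  # 10: cut into two pieces of length 2 (5 + 5)
```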
Figure: 4.2 The recursion tree showing recursive calls resulting from a call
CUT-ROD(p, n) for n = 4.
T(n) = 1 + Σ_{j=0}^{n-1} T(j)
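Counting the initial call plus one recursive call for each j, this recurrence has an exact exponential solution; a short induction with base case T(0) = 1 gives:

```latex
T(n) = 1 + \sum_{j=0}^{n-1} T(j)
     = 1 + \sum_{j=0}^{n-1} 2^j \qquad \text{(induction hypothesis } T(j) = 2^j\text{)}
     = 1 + (2^n - 1)
     = 2^n .
```

So plain CUT-ROD makes 2^n calls in total, which is why memoization helps so much.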
Algorithm 2 MEMOIZED-CUT-ROD(p, n)
1: let r[0 .. n] be a new array
2: for i = 0 to n do
3: r[i] = −∞
4: end for
5: return MEMOIZED-CUT-ROD-AUX(p, n, r)
Algorithm 3 MEMOIZED-CUT-ROD-AUX(p, n, r)
1: if r[n] ≥ 0 then
2: return r[n]
3: end if
4: if n == 0 then
5: q = 0
6: else
7: q = −∞
8: for i = 1 to n do
9: q = max (q, p[i] + MEMOIZED-CUT-ROD-AUX(p, n-i, r))
10: end for
11: end if
12: r[n] = q
13: return q
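A Python sketch of the memoized version (illustrative, not from the slides). Unknown entries hold −∞, so the test `r[n] >= 0` detects a previously computed value:

```python
def memoized_cut_rod(p, n):
    """Top-down rod cutting with results memoized in the auxiliary array r."""
    r = [float("-inf")] * (n + 1)
    return memoized_cut_rod_aux(p, n, r)

def memoized_cut_rod_aux(p, n, r):
    if r[n] >= 0:                  # already computed?
        return r[n]
    if n == 0:
        q = 0
    else:
        q = float("-inf")
        for i in range(1, n + 1):
            q = max(q, p[i] + memoized_cut_rod_aux(p, n - i, r))
    r[n] = q                       # remember the result
    return q

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]  # Table 4.1
print(memoized_cut_rod(prices, 10))  # 30
```

Each subproblem size is solved once, so the running time drops to Θ(n²).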
Algorithm 4 BOTTOM-UP-CUT-ROD(p, n)
1: let r[0 .. n] be a new array
2: r[0] = 0
3: for j = 1 to n do
4: q = −∞
5: for i = 1 to j do
6: q = max (q, p[i] + r[j-i])
7: end for
8: r[j] = q
9: end for
10: return r[n]
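The bottom-up version in Python (an illustrative translation): subproblems are solved in order of increasing size j, so r[j − i] is always ready when needed, and the double loop gives Θ(n²) time:

```python
def bottom_up_cut_rod(p, n):
    """Iterative rod cutting: solve subproblems of increasing size j."""
    r = [0] * (n + 1)              # r[0] = 0: a rod of length 0 earns nothing
    for j in range(1, n + 1):
        q = float("-inf")
        for i in range(1, j + 1):  # try every first-piece length i
            q = max(q, p[i] + r[j - i])
        r[j] = q
    return r[n]

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]  # Table 4.1
print(bottom_up_cut_rod(prices, 10))  # 30
```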
Subproblem graphs
Figure 4.3 shows the subproblem graph for the rod-cutting problem
with n = 4.
Figure: 4.3 The subproblem graph for the rod-cutting problem with n = 4.
Reconstructing a solution
We can record not only the optimal value computed for each
subproblem, but also a choice that led to the optimal value.
Algorithm 5 EXTENDED-BOTTOM-UP-CUT-ROD(p, n)
1: let r[0 .. n] and s[1 .. n] be new arrays
2: r[0] = 0
3: for j = 1 to n do
4: q = −∞
5: for i = 1 to j do
6: if q < p[i] + r[j-i] then
7: q = p[i] + r[j-i]
8: s[j] = i
9: end if
10: end for
11: r[j] = q
12: end for
13: return r and s
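A Python sketch of the extended procedure together with the reconstruction step (both illustrative; the helper name `print_cut_rod_solution` follows the standard formulation, though here it returns the pieces instead of printing them):

```python
def extended_bottom_up_cut_rod(p, n):
    """Like BOTTOM-UP-CUT-ROD, but s[j] records the optimal first-piece size."""
    r = [0] * (n + 1)
    s = [0] * (n + 1)
    for j in range(1, n + 1):
        q = float("-inf")
        for i in range(1, j + 1):
            if q < p[i] + r[j - i]:
                q = p[i] + r[j - i]
                s[j] = i           # remember the choice that achieved q
        r[j] = q
    return r, s

def print_cut_rod_solution(p, n):
    """Reconstruct the list of piece lengths in an optimal cut."""
    _, s = extended_bottom_up_cut_rod(p, n)
    pieces = []
    while n > 0:
        pieces.append(s[n])        # take the recorded first piece
        n -= s[n]                  # and recurse on the remainder
    return pieces

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]  # Table 4.1
print(print_cut_rod_solution(prices, 7))  # [1, 6]: revenue 1 + 17 = 18
```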
i    | 0 | 1 | 2 | 3 | 4  | 5  | 6  | 7  | 8  | 9  | 10
r[i] | 0 | 1 | 5 | 8 | 10 | 13 | 17 | 18 | 22 | 25 | 30
s[i] | 0 | 1 | 2 | 3 | 2  | 2  | 6  | 1  | 2  | 3  | 10
Optimal substructure
Overlapping subproblems
Optimal substructure
Suppose that you are given this last choice that leads to an
optimal solution.
Expand it as necessary.
First make a choice that looks best, then solve the resulting
subproblem.
Subtleties
V is a set of vertices.
E is a set of edges.
NO!
Subpath q ⤳ r is q → r.
Longest simple path q ⤳ r is q → s → t → r.
Subpath r ⤳ t is r → t.
Longest simple path r ⤳ t is r → q → s → t.
Combining the two longest simple paths would give
q → s → t → r → q → s → t,
which is not a simple path.
Overlapping subproblems
Lookup in table.
If answer is there, use it.
Else, compute answer, then store it.
Example 1:
Brute-force approach
Time: Θ(n · 2^m).
There are 2^m subsequences of X to check, and each check against Y
takes O(n) time.
Optimal substructure
Notation: ith prefix
X_i = prefix ⟨x_1, ..., x_i⟩
Y_i = prefix ⟨y_1, ..., y_i⟩
Theorem:
A recursive solution
Algorithm 6 LCS-LENGTH(X, Y)
1: m = X.length
2: n = Y.length
3: let b[1..m, 1..n] and c[0..m, 0..n] be new tables
4: for i = 1 to m do
5: c[i, 0] = 0
6: end for
7: for j = 1 to n do
8: c[0, j] = 0
9: end for
10: for i = 1 to m do
11: for j = 1 to n do
12: if xi == yj then
13: c[i, j] = c[i-1, j-1] + 1
14: b[i, j] = ” ↖ ”
15: else if c[i-1, j] ≥ c[i, j-1] then
16: c[i, j] = c[i-1, j]
17: b[i, j] = ” ↑ ”
18: else c[i, j] = c[i, j-1]
19: b[i, j] = ” ← ”
20: end if
21: end for
22: end for
23: return c and b
The running time of the procedure is Θ(mn), since each table entry
takes Θ(1) time to compute.
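A concrete Python version of LCS-LENGTH (illustrative; strings are 0-indexed, and the arrows are stored as the strings "diag", "up", "left"):

```python
def lcs_length(X, Y):
    """Fill c[i][j] = length of an LCS of X[:i] and Y[:j]; b records choices."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[None] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:           # x_i == y_j
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "diag"               # "↖"
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "up"                 # "↑"
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "left"               # "←"
    return c, b

c, b = lcs_length("ABCBDAB", "BDCABA")
print(c[7][6])  # 4: an LCS such as "BCBA" has length 4
```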
Constructing an LCS
The b table returned by LCS-LENGTH enables us to quickly
construct an LCS of X = ⟨x_1, ..., x_m⟩ and Y = ⟨y_1, ..., y_n⟩.
Algorithm 7 PRINT-LCS(b, X, i, j)
1: if i == 0 or j == 0 then
2: return
3: end if
4: if b[i, j] == ” ↖ ” then
5: PRINT-LCS(b, X, i-1, j-1)
6: print xi
7: else if b[i, j] == ” ↑ ” then
8: PRINT-LCS(b, X, i-1, j)
9: else
10: PRINT-LCS(b, X, i, j-1)
11: end if
The procedure takes time O(m + n), since it decrements at least
one of i and j in each recursive call.
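A self-contained Python sketch of the reconstruction (illustrative; it rebuilds the b table and collects characters into a list rather than printing them one by one):

```python
def lcs_length(X, Y):
    """Tables for the LCS DP: c[i][j] = LCS length of X[:i] and Y[:j]."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[None] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j], b[i][j] = c[i - 1][j - 1] + 1, "diag"
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j], b[i][j] = c[i - 1][j], "up"
            else:
                c[i][j], b[i][j] = c[i][j - 1], "left"
    return c, b

def print_lcs(b, X, i, j, out):
    """Follow the arrows in b back from (i, j), appending LCS characters to out."""
    if i == 0 or j == 0:
        return
    if b[i][j] == "diag":
        print_lcs(b, X, i - 1, j - 1, out)
        out.append(X[i - 1])       # x_i is part of the LCS
    elif b[i][j] == "up":
        print_lcs(b, X, i - 1, j, out)
    else:
        print_lcs(b, X, i, j - 1, out)

X, Y = "ABCBDAB", "BDCABA"
c, b = lcs_length(X, Y)
out = []
print_lcs(b, X, len(X), len(Y), out)
print("".join(out))  # an LCS of X and Y
```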
Because we will search the tree for each individual word in the
text, we want the total time spent searching to be as low as
possible.
i   | 0    | 1    | 2    | 3    | 4    | 5
p_i |      | 0.15 | 0.10 | 0.05 | 0.10 | 0.20
q_i | 0.05 | 0.10 | 0.05 | 0.05 | 0.05 | 0.10
Σ_{i=1}^{n} p_i + Σ_{i=0}^{n} q_i = 1
Let us assume that the actual cost of a search equals the number
of nodes examined, i.e., the depth of the node found by the search
in T, plus 1.
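With this cost measure, the expected search cost of a tree T (the standard formulation, with k_i the keys searched with probability p_i and d_i the dummy leaves reached with probability q_i) is:

```latex
\mathrm{E}[\text{search cost in } T]
  = \sum_{i=1}^{n} \bigl(\mathrm{depth}_T(k_i) + 1\bigr)\, p_i
  + \sum_{i=0}^{n} \bigl(\mathrm{depth}_T(d_i) + 1\bigr)\, q_i
  = 1 + \sum_{i=1}^{n} \mathrm{depth}_T(k_i)\, p_i
      + \sum_{i=0}^{n} \mathrm{depth}_T(d_i)\, q_i ,
```

where the second equality uses the fact that the probabilities sum to 1.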
From figure 4.5, we can calculate the expected search cost node by
node:
Observations:
Greedy Algorithms
An activity-selection problem
i   | 1 | 2 | 3 | 4 | 5 | 6  | 7  | 8  | 9
s_i | 1 | 2 | 4 | 1 | 5 | 8  | 9  | 11 | 13
f_i | 3 | 5 | 7 | 8 | 9 | 10 | 11 | 14 | 16
Thus,
A_{ij} = A_{ik} ∪ {a_k} ∪ A_{kj}
If we did not know that an optimal solution for the set Sij
includes activity ak , we would have to examine all activities in
Sij to find which one to choose, so that
c[i, j] = 0                                           if S_{ij} = ∅
c[i, j] = max_{a_k ∈ S_{ij}} { c[i, k] + c[k, j] + 1 }   if S_{ij} ≠ ∅
Each subproblem is S_{m_i, n+1}, i.e., the last activities to finish,
and the subproblems chosen have increasing finish times.
Algorithm 8 RECURSIVE-ACTIVITY-SELECTOR(s, f, k, n)
1: m = k + 1
2: while m ≤ n and s[m] < f[k] do . find the first activity in S_k to finish
3: m = m + 1
4: end while
5: if m ≤ n then
6: return {a_m} ∪ RECURSIVE-ACTIVITY-SELECTOR(s, f, m, n)
7: else return ∅
8: end if
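An illustrative Python version of the recursive selector, returning activity indices. The arrays are 1-indexed via dummy entries at index 0, with f[0] = 0 so the initial call can use k = 0; activities are assumed sorted by finish time:

```python
def recursive_activity_selector(s, f, k, n):
    """Indices of a maximum-size set of mutually compatible activities in S_k."""
    m = k + 1
    while m <= n and s[m] < f[k]:  # find the first activity in S_k to finish
        m += 1
    if m <= n:
        return [m] + recursive_activity_selector(s, f, m, n)
    return []

# start/finish times from the table below (dummy 0 entries for 1-indexing)
s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [0, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
print(recursive_activity_selector(s, f, 0, 11))  # [1, 4, 8, 11]
```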
i   | 1 | 2 | 3 | 4 | 5 | 6 | 7  | 8  | 9  | 10 | 11
s_i | 1 | 3 | 0 | 5 | 3 | 5 | 6  | 8  | 8  | 2  | 12
f_i | 4 | 5 | 6 | 7 | 9 | 9 | 10 | 11 | 12 | 14 | 16
Algorithm 9 GREEDY-ACTIVITY-SELECTOR(s, f)
1: n = s.length
2: A = {a1 }
3: k=1
4: for m = 2 to n do
5: if s[m] ≥ f[k] then
6: A = A ∪ {am }
7: k=m
8: end if
9: end for
10: return A
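The iterative greedy selector in Python (illustrative; same conventions as above: 1-indexed arrays via dummy index 0, activities sorted by finish time):

```python
def greedy_activity_selector(s, f):
    """Iteratively pick each activity compatible with the last one chosen."""
    n = len(s) - 1
    A = [1]                        # a_1 has the earliest finish time
    k = 1
    for m in range(2, n + 1):
        if s[m] >= f[k]:           # a_m starts after the last chosen one finishes
            A.append(m)
            k = m
    return A

s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [0, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
print(greedy_activity_selector(s, f))  # [1, 4, 8, 11]
```

One pass over the activities, so the selection itself takes Θ(n) time once the input is sorted by finish time.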
Greedy-choice property
Optimal substructure
Greedy-choice property
Greedy:
Optimal substructure
Just show that optimal solution to subproblem and greedy choice
⇒ optimal solution to problem.
n items.
Item i is worth vi birr, weighs wi pounds.
Have to either take an item or not take it; can't take a fraction of it.
Greedy doesn’t work for the 0-1 knapsack problem. Might get
empty space, which lowers the average value per pound of the
items taken.
Figure: 4.7 An example showing that the greedy strategy does not work
for the 0-1 knapsack problem.
Huffman Codes
Huffman codes compress data very effectively.
                         | a   | b   | c   | d   | e    | f
Frequency (in thousands) | 45  | 13  | 12  | 16  | 9    | 5
Fixed-length codeword    | 000 | 001 | 010 | 011 | 100  | 101
Variable-length codeword | 0   | 101 | 100 | 111 | 1101 | 1100
1 Fixed-length code:
3 bits to represent 6 characters =⇒ 300,000 bits to code
the entire file.
2 Variable-length code:
Frequent characters =⇒ short codewords.
From the table above, the code requires
(45·1 + 13·3 + 12·3 + 16·3 + 9·4 + 5·4)·1,000 = 224,000 bits
to represent the file, a savings of approximately 25%.
Prefix Codes
A binary tree whose leaves are the given characters provides one
representation.
0 means "go to the left child" and 1 means "go to the right
child."
Algorithm 10 HUFFMAN(C)
1: n = |C|
2: Q = C
3: for i = 1 to n-1 do
4: allocate a new node z
5: z.left = x = EXTRACT-MIN(Q)
6: z.right = y = EXTRACT-MIN(Q)
7: z.freq = x.freq + y.freq
8: INSERT(Q, z)
9: end for
10: return EXTRACT-MIN(Q) . return the root of the tree
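A Python sketch of HUFFMAN using `heapq` as the min-priority queue Q (illustrative; a running counter breaks frequency ties so the heap never tries to compare two trees directly):

```python
import heapq
import itertools

def huffman(freqs):
    """Build a Huffman tree; freqs maps character -> frequency.
    Returns a dict mapping each character to its codeword."""
    tie = itertools.count()
    # heap entries: (freq, tiebreak, tree); a tree is a char or a (left, right) pair
    Q = [(f, next(tie), ch) for ch, f in freqs.items()]
    heapq.heapify(Q)
    for _ in range(len(freqs) - 1):              # n - 1 merges
        fx, _, x = heapq.heappop(Q)              # x = EXTRACT-MIN(Q)
        fy, _, y = heapq.heappop(Q)              # y = EXTRACT-MIN(Q)
        heapq.heappush(Q, (fx + fy, next(tie), (x, y)))  # z.freq = x.freq + y.freq
    root = Q[0][2]
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")          # 0 = go to the left child
            walk(node[1], prefix + "1")          # 1 = go to the right child
        else:
            codes[node] = prefix or "0"          # single-character edge case
    walk(root, "")
    return codes

freqs = {"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}  # in thousands
codes = huffman(freqs)
print(sum(freqs[ch] * len(code) for ch, code in codes.items()))  # 224 (thousand bits)
```

The exact codewords can differ from the table above depending on how ties and left/right children are ordered, but the total encoded length (224,000 bits here) is the same for every optimal code.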
Figure: 4.9 The steps of Huffman's algorithm for the frequencies given in
the table above.
Amortized Analysis
Average in this context does not mean that we’re averaging over a
distribution of inputs.
No probability is involved.
We’re talking about average cost in the worst case.
Aggregate analysis
Stack operations
Algorithm 11 MULTIPOP(S, k)
1: while not STACK-EMPTY(S) and k > 0 do
2: POP(S)
3: k = k-1
4: end while
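An illustrative Python version with a minimal stack so PUSH, POP, and MULTIPOP can be exercised (the class and method names are our own, not from the slides):

```python
class Stack:
    """Minimal list-backed stack."""
    def __init__(self):
        self.items = []
    def push(self, x):
        self.items.append(x)
    def pop(self):
        return self.items.pop()
    def empty(self):
        return not self.items

def multipop(S, k):
    """Pop up to k objects, stopping early if the stack empties."""
    while not S.empty() and k > 0:
        S.pop()
        k -= 1

S = Stack()
for x in range(5):
    S.push(x)
multipop(S, 3)
print(S.items)  # [0, 1]: three of the five objects were popped
```

A single MULTIPOP can cost Θ(n) in the worst case, yet aggregate analysis shows a sequence of n operations still costs only O(n) in total.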
Have n operations.
Observation
Each object can be popped only once per time that it’s
pushed.
Have ≤ n PUSHes ⇒ ≤ n POPs, including those in
MULTIPOP.
Therefore, total cost = O(n).
Average over the n operations ⇒ O(1) per operation on
average.
Algorithm 12 INCREMENT(A)
1: i=0
2: while i < A.length and A[i]==1 do
3: A[i]=0
4: i = i+1
5: end while
6: if i < A.length then
7: A[i]=1
8: end if
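The binary-counter increment in Python (illustrative; A[0] is the least-significant bit, matching the pseudocode):

```python
def increment(A):
    """Add 1 to the binary counter stored in the bit list A."""
    i = 0
    while i < len(A) and A[i] == 1:
        A[i] = 0                   # carry: flip trailing 1s to 0
        i += 1
    if i < len(A):
        A[i] = 1                   # flip the lowest 0 to 1

A = [0] * 8
for _ in range(11):
    increment(A)
print(A)  # 11 in binary, least-significant bit first
```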
Observation
Not every bit flips every time.
bit   | flips how often | times in n INCREMENTs
0     | every time      | n
1     | 1/2 the time    | ⌊n/2⌋
2     | 1/4 the time    | ⌊n/4⌋
...   |                 |
i     | 1/2^i the time  | ⌊n/2^i⌋
...   |                 |
i ≥ k | never           | 0

Therefore, total # of flips = Σ_{i=0}^{k−1} ⌊n/2^i⌋ < n Σ_{i=0}^{∞} 1/2^i = 2n.
As a result, n INCREMENTs cost O(n).
Average cost per operation = O(1).
End of Chapter 4
Questions?