Unit 3: Dynamic Programming
Dynamic Programming
• Dynamic Programming is a general algorithm design technique for solving problems defined by or formulated as recurrences with overlapping subinstances.
• Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. (“Programming” in this context refers to a tabular method, not to writing computer code.)
Steps
• When developing a dynamic-programming algorithm, we follow a sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from computed information.
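As a concrete sketch of steps 2 and 3, here is a bottom-up computation of the Fibonacci numbers, the running example used later in these slides (the code itself is illustrative, not from the slides):

```c
/* Step 2: f(n) = f(n-1) + f(n-2), with f(0) = 0, f(1) = 1.
   Step 3: compute bottom-up, filling a table from the smallest
   subproblems upward so each value is computed exactly once. */
long fib_bottom_up(int n) {
    if (n < 2) return n;
    long table[n + 1];          /* table[i] will hold f(i) */
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];
    return table[n];
}
```

The loop visits each subproblem once, so the whole computation is O(n) time and O(n) space.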
Principle of optimality
• The principle of optimality states that an optimal sequence of decisions has the property that, whatever the initial state and decision are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.
• Many decision sequences may be generated, but a sequence containing suboptimal subsequences cannot itself be optimal.
Exactly the same as divide-and-conquer … but store the solutions to subproblems for possible reuse.
A good idea if many of the subproblems are the same as one another.
There might be O(2^n) nodes in this tree, but only e.g. O(n^3) different nodes.
Fibonacci series
• 0, 1, 1, 2, 3, 5, 8, 13, 21, …
• f(0) = 0.
• f(1) = 1.
• f(N) = f(N-1) + f(N-2) if N ≥ 2.

[Figure: recursion tree for f(7) — f(7) calls f(6) and f(5); those call f(5), f(4), f(4), f(3); and so on down to the base cases, with many nodes repeated.]

int f(int n) {
    if (n < 2)
        return n;
    else
        return f(n-1) + f(n-2);
}

f(n) takes exponential time to compute.
Proof: f(n) takes more than twice as long as f(n-2), which therefore takes more than twice as long as f(n-4), …
Can’t you do it faster?
• Either way, you should tabulate results that you’ll need later
• Mixing forward and backward is possible (future topic)
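The backward-chaining (memoized) variant can be sketched as follows; the name fmemo follows the slides’ diagrams, while the table layout and size bound are assumptions of this sketch:

```c
#define MAXN 100            /* assumed upper bound on n for this sketch */
static long memo[MAXN];
static int  known[MAXN];    /* known[n] != 0 once memo[n] has been filled */

/* fmemo: look in the memo table first; recurse only on a miss,
   so each value f(n) is computed at most once. */
long fmemo(int n) {
    if (n < 2) return n;
    if (!known[n]) {
        memo[n] = fmemo(n - 1) + fmemo(n - 2);
        known[n] = 1;
    }
    return memo[n];
}
```

Because every subproblem is solved at most once, the exponential recursion tree collapses to O(n) total work.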
[Figure: memoized call graph — f(7) consults fmemo(6) and fmemo(5); f(6) consults fmemo(5) and fmemo(4); f(5) consults fmemo(4) and fmemo(3); and so on, each box computed only once.]

600.325/425 Declarative Methods - J. Eisner
How to analyze runtime of backward chaining
• fmemo(…) is fast. Why? It just looks in the memo table & decides whether to call another box.
• So only O(1) work within each box (for Fibonacci).

[Figure: the same call graph of fmemo boxes for f(7), annotated with the work done per box.]

• Caveat: it is tempting to try to divide up the work this way: how many calls to fmemo(n), and how long does each one take? But it is hard to figure out how many, and the first one is slower than the rest!
This is just the data collection; for that we can use the following formula:
V[i, w] = max{ V[i-1, w], V[i-1, w - w[i]] + p[i] }
• The sequence of decisions is as follows:

[Figure: decision tree over x1, x2, x3, x4 with capacity W = 8 — e.g. choosing x1 = 1 leaves capacity 8 - 6 = 2, while choosing x1 = 0 leaves 8.]
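The recurrence V[i, w] = max{ V[i-1, w], V[i-1, w - w[i]] + p[i] } can be filled in bottom-up as sketched below; the specific weights and profits in the test are illustrative assumptions, not values from the slides:

```c
/* 0/1 knapsack via the recurrence
   V[i][w] = max(V[i-1][w], V[i-1][w - wt[i]] + p[i]).
   Items are 1-indexed to match the formula; row 0 is the
   "no items considered yet" base case. */
#define MAX_W 64            /* assumed capacity bound for this sketch */

int knapsack(int n, int W, const int wt[], const int p[]) {
    int V[n + 1][MAX_W + 1];
    for (int w = 0; w <= W; w++)
        V[0][w] = 0;                         /* no items: value 0 */
    for (int i = 1; i <= n; i++) {
        for (int w = 0; w <= W; w++) {
            V[i][w] = V[i - 1][w];           /* decision x_i = 0: skip item i */
            if (wt[i] <= w) {                /* decision x_i = 1: take item i */
                int take = V[i - 1][w - wt[i]] + p[i];
                if (take > V[i][w]) V[i][w] = take;
            }
        }
    }
    return V[n][W];
}
```

Each table cell records the best value using only the first i items within capacity w, so V[n][W] is the answer to the full problem.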