
Dynamic Programming

• Dynamic Programming is an algorithm design method that can be used when
the solution to a problem may be viewed as the result of a sequence of
decisions.

Dynamic Programming History
• Bellman. Pioneered the systematic study of dynamic
programming in the 1950s.

• Etymology.
– Dynamic programming = planning over time.
– Secretary of Defense was hostile to mathematical research.
– Bellman sought an impressive name to avoid confrontation.
• "it's impossible to use dynamic in a pejorative sense"
• "something not even a Congressman could object to"
Reference: Bellman, R. E., Eye of the Hurricane: An Autobiography.

Dynamic Programming
• Dynamic Programming is an algorithm design technique for optimization
problems: often minimizing or maximizing.
• Like divide and conquer, DP solves problems by combining solutions to
subproblems.
• Unlike divide and conquer, the subproblems are not independent.
– Subproblems may share subsubproblems,
– However, the solution to one subproblem does not affect the solutions to
other subproblems of the same problem. (More on this later.)
• DP reduces computation by (see the Fibonacci sketch below):
– Solving subproblems in a bottom-up fashion.
– Storing the solution to each subproblem the first time it is solved.
– Looking up the stored solution when the subproblem is encountered again.
• Key: determine the structure of optimal solutions
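
A minimal sketch of this store-and-look-up idea on Fibonacci numbers, the
standard toy example of overlapping subproblems (the example and function
names are illustrations added here, not from the slides; Python assumed):

from functools import lru_cache

# Naive recursion: revisits the same subproblems over and over,
# so the running time is exponential in n.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Top-down with caching (memoization): each subproblem is solved once,
# stored, and looked up on every later visit -- O(n) additions.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up table: solve subproblems in increasing order of n.
def fib_table(n):
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

All three agree (e.g. each returns 55 for n = 10); they differ only in how
much recomputation they do.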
Steps in Dynamic Programming
1. Characterize structure of an optimal solution.
2. Define value of optimal solution recursively.
3. Compute optimal solution values either top-down with caching or
bottom-up in a table.
4. Construct an optimal solution from computed values.
Elements of Dynamic Programming
• Optimal Substructure
– An optimal solution to a problem contains within it an
optimal solution to subproblems
– Optimal solution to the entire problem is built in a
bottom-up manner from optimal solutions to subproblems
• Overlapping Subproblems
– If a recursive algorithm revisits the same subproblems
over and over, then the problem has overlapping
subproblems

Dynamic Programming
• Used for optimization problems
– A set of choices must be made to get an optimal solution
– Find a solution with the optimal value (minimum or
maximum)
– There may be many solutions that lead to an optimal
value
– Our goal: find an optimal solution

Dynamic Programming Algorithm
1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal
solution
3. Compute the value of an optimal solution in a
bottom-up fashion
4. Construct an optimal solution from computed
information (not always necessary)
The shortest path

• To find a shortest path in a multi-stage graph


[Figure: a multistage graph S → A → B → T with parallel edges: S–A with
weights 3, 1, 5; A–B with weights 2, 4, 6; B–T with weights 7, 5.]

• Apply the greedy method (take the cheapest edge at each stage):
the shortest path from S to T is 1 + 2 + 5 = 8.
The shortest path in multistage graphs
• e.g. [Figure: a multistage graph with edges S→A = 1, S→B = 2, S→C = 5;
A→D = 4, A→E = 11; B→D = 9, B→E = 5, B→F = 16; C→F = 2; D→T = 18,
E→T = 13, F→T = 2.]
• The greedy method cannot be applied to this case: it chooses
(S, A, D, T) with cost 1 + 4 + 18 = 23.
• The real shortest path is
(S, C, F, T) with cost 5 + 2 + 2 = 9 (see the sketch below).
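
The greedy failure can be sketched in a few lines of Python. The GRAPH
dictionary below is a hypothetical encoding of the example graph, introduced
here for illustration:

# Adjacency lists for the example graph: GRAPH[u] = [(neighbor, weight), ...]
GRAPH = {
    'S': [('A', 1), ('B', 2), ('C', 5)],
    'A': [('D', 4), ('E', 11)],
    'B': [('D', 9), ('E', 5), ('F', 16)],
    'C': [('F', 2)],
    'D': [('T', 18)],
    'E': [('T', 13)],
    'F': [('T', 2)],
}

def greedy_path(graph, node='S', target='T'):
    """Always take the cheapest outgoing edge -- not optimal in general."""
    path, cost = [node], 0
    while node != target:
        node, w = min(graph[node], key=lambda e: e[1])
        path.append(node)
        cost += w
    return path, cost

# greedy_path(GRAPH) -> (['S', 'A', 'D', 'T'], 23), far from the optimum 9.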
Dynamic programming approach
• Dynamic programming approach (forward approach):

[Figure: the same multistage graph, decomposed at S into the subproblems
d(A, T), d(B, T), d(C, T).]
• d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)}


• d(A, T) = min{4 + d(D, T), 11 + d(E, T)}
= min{4 + 18, 11 + 13} = 22.
• d(B, T) = min{9 + d(D, T), 5 + d(E, T), 16 + d(F, T)}
= min{9 + 18, 5 + 13, 16 + 2} = 18.
• d(C, T) = min{2 + d(F, T)} = 2 + 2 = 4.


• d(S, T) = min{1 + d(A, T), 2 + d(B, T), 5 + d(C, T)}
= min{1 + 22, 2 + 18, 5 + 4} = 9.
• The above way of reasoning is called
backward reasoning (sketched in code below).

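
A sketch of this backward reasoning as memoized recursion, reusing the
hypothetical GRAPH dictionary from the greedy sketch above:

from functools import lru_cache

@lru_cache(maxsize=None)
def d(node):
    """Cost of a shortest path from node to T (solved once, then cached)."""
    if node == 'T':
        return 0
    return min(w + d(v) for v, w in GRAPH[node])

def best_path(node='S'):
    """Recover an optimal path by following the minimizing choices."""
    path = [node]
    while node != 'T':
        node = min(GRAPH[node], key=lambda e: e[1] + d(e[0]))[0]
        path.append(node)
    return path

# d('S') -> 9 and best_path() -> ['S', 'C', 'F', 'T'], matching the slides.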
Backward approach
(forward reasoning)
[Figure: the same multistage graph.]
• d(S, A) = 1, d(S, B) = 2, d(S, C) = 5
• d(S, D) = min{d(S, A) + d(A, D), d(S, B) + d(B, D)}
= min{1 + 4, 2 + 9} = 5
• d(S, E) = min{d(S, A) + d(A, E), d(S, B) + d(B, E)}
= min{1 + 11, 2 + 5} = 7
• d(S, F) = min{d(S, B) + d(B, F), d(S, C) + d(C, F)}
= min{2 + 16, 5 + 2} = 7
• d(S, T) = min{d(S, D) + d(D, T), d(S, E) + d(E, T), d(S, F) + d(F, T)}
= min{5 + 18, 7 + 13, 7 + 2}
= 9 (see the bottom-up sketch below).
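
The same answer computed bottom-up by forward reasoning, relaxing edges
stage by stage (a sketch: the stage order is hard-coded for this example,
and GRAPH is the hypothetical dictionary defined earlier):

import math

# dist[v] will hold d(S, v); initialize all distances to infinity.
dist = {v: math.inf for v in 'SABCDEFT'}
dist['S'] = 0
for u in 'SABCDEF':  # topological (stage-by-stage) order of the graph
    for v, w in GRAPH[u]:
        dist[v] = min(dist[v], dist[u] + w)

# dist['D'] = 5, dist['E'] = 7, dist['F'] = 7, dist['T'] = 9,
# matching the values computed on the slides.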
Principle of optimality
• Principle of optimality: Suppose that in solving a problem, we have to
make a sequence of decisions D1, D2, …, Dn. If this sequence is optimal,
then the last k decisions, 1 ≤ k ≤ n, must be optimal.

• e.g. the shortest path problem


If i, i1, i2, …, j is a shortest path from i to j, then i1, i2, …, j must be a
shortest path from i1 to j.

• In summary, if a problem can be described by a multistage graph, then it
can be solved by dynamic programming.

Dynamic Programming vs. Memoization

• Advantages of dynamic programming vs. memoized algorithms
– No overhead for recursion, less overhead for maintaining the
table
– The regular pattern of table accesses may be used to reduce
time or space requirements
• Advantages of memoized algorithms vs. dynamic
programming
– Some subproblems do not need to be solved (demonstrated below)
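
A small demonstration of that last point, reusing the hypothetical GRAPH
dictionary with a hand-rolled memo table (the vertex 'X' is made up, added
only to create a subproblem that is never needed):

# Hypothetical vertex that no path from S reaches.
GRAPH['X'] = [('T', 1)]

memo = {}
def d_top_down(node):
    """Top-down with caching: solves only the subproblems it actually needs."""
    if node == 'T':
        return 0
    if node not in memo:
        memo[node] = min(w + d_top_down(v) for v, w in GRAPH[node])
    return memo[node]

d_top_down('S')   # -> 9
'X' in memo       # -> False: the unreachable subproblem was never solved;
                  # a bottom-up table would have filled in an entry for it.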

Dynamic Programming Applications
• Areas.
– Bioinformatics.
– Control theory.
– Information theory.
– Operations research.
– Computer science: theory, graphics, AI, systems, ….

• Some famous dynamic programming algorithms.


– Viterbi for hidden Markov models.
– Unix diff for comparing two files.
– Smith-Waterman for sequence alignment.
– Bellman-Ford for shortest path routing in networks.
– Cocke-Kasami-Younger for parsing context-free grammars.

