
Design and Analysis of Algorithm

Tanveer Ahmed Siddiqui

Department of Computer Science


COMSATS University, Islamabad
Recap and Today Covered

Algorithm Design and Analysis Process

 Understand the problem
 Decide on: algorithm design techniques, etc.
 Design an algorithm
 Prove correctness
 Analyze efficiency, etc.
 Code the algorithm
Objectives

 How to design algorithms using Dynamic Programming

Reading Material

Read Chapter 8
Dynamic Programming


The Phases of Human Progress

"Those who cannot remember the past
are condemned to repeat it."

(George Santayana, 1905)

Today Covered
 Motivation
 Fibonacci Sequence
 Computing Binomial Coefficients
 Problem Analysis of the above examples
  Divide and conquer approach
  Time Complexity
  Dynamic Algorithm
  Time Complexity
 Optimization problems
 Steps in Development of Dynamic Algorithms
 Why dynamic programming in optimization problems?
 Generalization and Applications
Recall Divide & Conquer

 Five pillars of Divide and Conquer algorithms:
  How to divide into sub-problems?
   Divide according to position
   Divide according to value
  When to stop dividing?
  Conquer through recursion
  How to recombine sub-results?
  Analysis using Master Theorem or Recurrence Relations


Food for thought

 True or false? Every algorithm that contains a divide step and a conquer step is a divide-and-conquer algorithm.
 Answer: No
 A dynamic programming algorithm contains a divide step and a conquer step, and yet may not be a divide-and-conquer algorithm.


(Fibonacci – Revisited)

 Write a recursive algorithm for computing the Fibonacci numbers Fn, which are defined as follows:
  F0 = 0
  F1 = 1
  Fn = Fn-1 + Fn-2

ALGORITHM FIB(n)
// Input: A positive number n
// Output: nth Fibonacci term
1. if n < 2 then
2.   return n
3. else return FIB(n-1) + FIB(n-2)

 What is its Time Complexity?

T(n) = 1               if n < 2
T(n) = T(n-1) + T(n-2) if n ≥ 2

 The resulting procedure is very slow. Can you state the reason?
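The pseudocode above translates directly to Python. As a sketch, the call-counting helper below is my own addition (not part of the slide) to make the exponential blow-up concrete:

```python
def fib(n):
    """Naive recursive Fibonacci, mirroring ALGORITHM FIB(n)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def fib_calls(n):
    """Total number of calls FIB makes for input n (illustration only)."""
    if n < 2:
        return 1
    return 1 + fib_calls(n - 1) + fib_calls(n - 2)

print(fib(10))        # 55
print(fib_calls(10))  # 177 calls already for n = 10
```

The call count grows like the Fibonacci numbers themselves, i.e. exponentially in n.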
Recursion? No, thanks
 Actually, recursion is a very bad idea here:
  The resulting (recursive) algorithm is so slow that it would require exponential time.
 Why is the resulting procedure so slow?
  Because it re-solves the same sub-problems many times (the same sub-problems over and over again).

ALGORITHM FIB(n)
// Input: A positive number n
// Output: nth Fibonacci term
1. if n < 2 then
2.   return n
3. else return FIB(n-1) + FIB(n-2)
Recursion? No, thanks
 How can we make it fast? Can we reduce the complexity from exponential to polynomial?
 Yes (how?)
 Recall the George Santayana quote:
  "Those who cannot remember the past are condemned to repeat it."
 So, to avoid repetition, we must remember.
 We can make it fast by remembering the result of each sub-problem once it is solved.
What is Dynamic Programming

 Dynamic Programming is mainly an optimization over plain recursion.
  Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using Dynamic Programming.
 The idea is simply to store the results of sub-problems, so that we do not have to recompute them when needed later.
 How do we store the results of sub-problems?
  There are two different ways to store the values so that the values of a sub-problem can be reused.


DP Key Idea #1

Not every sub-problem is new.

Save time: retain the solutions to sub-problems already solved.
How to avoid unnecessary repetition
 We can avoid these unnecessary repetitions by writing down the results of recursive calls and looking them up again if we need them later.
 This process is called memoization. Here is the algorithm with memoization.

ALGORITHM MEMOFIB(n)
// Input: A positive number n
// Output: nth Fibonacci term
1. if n < 2 then
2.   return n
3. if F[n] is undefined then
4.   F[n] ← MEMOFIB(n-1) + MEMOFIB(n-2)
5. return F[n]
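As a sketch, MEMOFIB can be rendered in Python with a dictionary standing in for the array F[]; the dictionary and the default argument are implementation choices of this sketch, not part of the pseudocode:

```python
def memo_fib(n, F=None):
    """Memoized Fibonacci, mirroring ALGORITHM MEMOFIB(n)."""
    if F is None:
        F = {}                       # table of already-solved sub-problems
    if n < 2:
        return n
    if n not in F:                   # "if F[n] is undefined"
        F[n] = memo_fib(n - 1, F) + memo_fib(n - 2, F)
    return F[n]

print(memo_fib(50))  # 12586269025 -- instant, unlike the naive version
```

Each value F[n] is computed exactly once, so the total work is linear in n.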


How to avoid unnecessary repetition: MEMOIZATION

If we trace through the recursive calls to MEMOFIB, we find that the array F[] gets filled from the bottom up: first F[2], then F[3], and so on, up to F[n].

F(0) = 0
F(1) = 1
F(2) = 1 + 0 = 1
…
F(n-2) = F(n-3) + F(n-4)
F(n-1) = F(n-2) + F(n-3)
F(n) = F(n-1) + F(n-2)

0 | 1 | 1 | ... | F(n-2) | F(n-1) | F(n)

Efficiency: What if we solve it iteratively?
- time: n
- space: n
Iterative Algorithm
 Can we replace recursion with a simple for-loop that just fills up the array F[] in that order (bottom-up fashion)?
 Yes (how?)

ALGORITHM ITERFIB(n)
// Input: A positive number n
// Output: nth Fibonacci term
1. F[0] ← 0
2. F[1] ← 1
3. for i ← 2 to n do
4.   F[i] ← F[i-1] + F[i-2]
5. return F[n]
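A direct Python rendering of ITERFIB; the n < 2 guard is added in this sketch so the table indexing stays safe for n = 0:

```python
def iter_fib(n):
    """Bottom-up Fibonacci, mirroring ALGORITHM ITERFIB(n)."""
    if n < 2:
        return n
    F = [0] * (n + 1)        # F[0] = 0
    F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]
    return F[n]

print(iter_fib(10))  # 55
```

Since each step only reads F[i-1] and F[i-2], the table can be shrunk to two variables, reducing the space from n to a constant.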
A new design technique built from the idea of divide and conquer

 So what have we done?
  We take the "conquer" part of the divide-and-conquer algorithm and replace its recursive calls with table lookups
  Instead of returning a value, we record it in a table entry
  We use the base case of divide-and-conquer to fill in the start of the table
  We devise a "look-up template"
  We devise for-loops that fill the table using the "look-up template"


Procedure

 Take the "conquer" part of the divide-and-conquer algorithm and replace its recursive calls with table lookups
 Instead of returning a value, record it in a table entry
 Use the base case of divide-and-conquer to fill in the start of the table
 Devise a "look-up template"
 Devise for-loops that fill the table using the "look-up template"


Example 2



Computing Binomial Coefficients

 A binomial coefficient, denoted C(n, r), is the number of combinations of r elements from an n-element set (0 ≤ r ≤ n). Design a divide-and-conquer (recursive) algorithm for computing C(n, r).
 To choose r things out of n, either:
  Choose the first item. Then we must choose the remaining r − 1 items from the other n − 1 items: C(n-1, r-1); or
  Don't choose the first item. Then we must choose the r items from the other n − 1 items: C(n-1, r).
 Therefore, we have
  C(n, r) = C(n-1, r-1) + C(n-1, r), with C(n, 0) = C(n, n) = 1
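The recurrence above gives this divide-and-conquer sketch in Python; tracing it for C(6, 4) shows why the next slide calls it exponential:

```python
def binom_rec(n, r):
    """Divide-and-conquer C(n, r): choose or skip the first item."""
    if r == 0 or r == n:
        return 1
    return binom_rec(n - 1, r - 1) + binom_rec(n - 1, r)

print(binom_rec(6, 4))  # 15
```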


Computing Binomial Coefficients

Time Complexity:

 Why Exponential?
 Can we reduce this complexity from
exponential to polynomial?



Problem analysis of Binomial Coefficient algorithm
 Why exponential?
 Let us analyze why the time complexity for computing C(n, r) is exponential.
 Suppose we want to compute C(6, 4).
  (Figure: the recursion tree shows repeated computation of the same sub-problems.)
 Can we have a better algorithm?
  Pascal's Triangle.
Pascal Triangle
 Pascal Triangle
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
 How do we fill this table?
 To build the triangle, start with "1" at the top, then continue placing numbers below it in a triangular pattern. Each number is just the two numbers above it added together (except for the edges, which are all "1").
 (Here, the highlighted entries show that 1 + 3 = 4.)


Pascal Triangle
 Pascal Triangle
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1

 Record the values of the binomial coefficients in a table of n+1 rows and r+1 columns, numbered from 0 to n and 0 to r, respectively.
 What is the look-up template for this example?


Look-up Template

 Initialization
 Why? Because C(n, 0) = C(n, n) = 1


Look-up Template
 How do we fill the remaining table?


Look-up Template

 Fill in the columns from left to right, and fill in each column from top to bottom.
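Putting the look-up template together, here is a Python sketch that fills the (n+1) × (r+1) table bottom-up, edges first, each inner cell from the two cells above it. This sketch fills row by row; the column-by-column order on the slide works equally well, since each entry only needs the previous row:

```python
def binom_dp(n, r):
    """Bottom-up C(n, r) using a Pascal's-triangle table."""
    C = [[0] * (r + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, r) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                          # edges are all 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][r]

print(binom_dp(6, 4))  # 15, in time proportional to n*r
```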


Procedure

 Take the "conquer" part of the divide-and-conquer algorithm and replace its recursive calls with table lookups
 Instead of returning a value, record it in a table entry
 Use the base case of divide-and-conquer to fill in the start of the table
 Devise a "look-up template"
 Devise for-loops that fill the table using the "look-up template"


This new design technique is known as
Dynamic Programming


What is Dynamic Programming

 Dynamic Programming is an algorithmic paradigm that solves a given complex problem (usually an optimization problem) by breaking it into sub-problems and storing the results of the sub-problems to avoid computing the same results again.
 Dynamic Programming is mainly an optimization over plain recursion.
  Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using Dynamic Programming.


Optimization Problems

 If a problem has only one correct solution, then optimization is not required.
  For example, there is only one sorted sequence containing a given set of numbers.
 Optimization problems have many solutions.
  We want to compute an optimal solution, e.g. one with minimal cost or maximal gain.
  There could be many solutions having the optimal value.
 Dynamic programming is a very effective technique for such problems.
 The development of dynamic programming algorithms can be broken into a sequence of steps, shown next.


Steps in Development of Dynamic Algorithms

1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution in a bottom-up fashion
4. Construct an optimal solution from the computed information

Note: Steps 1-3 form the basis of a dynamic programming solution to a problem. Step 4 can be omitted if only the value of an optimal solution is required.


Why Dynamic Programming?

 Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to sub-problems.
 Divide and conquer algorithms:
• Partition the problem into independent sub-problems
• Solve the sub-problems recursively, and
• Combine their solutions to solve the original problem
 In contrast, dynamic programming is applicable when the sub-problems are not independent.
 Dynamic programming is typically applied to optimization problems.
Divide & Conquer vs Dynamic Programming
 What does dynamic programming have in
common with divide-and conquer?
 Both divide problems into sub problems
 What is a principal difference between the two
techniques?
Divide and conquer                        | Dynamic programming
It is a top-down approach                 | It is a bottom-up approach
Sub-problems are independent              | Sub-problems are dependent
Does not use memoization (does not        | Uses memoization (stores results
store results for reuse)                  | for reuse)


Difference between DnC & DP

Divide and conquer          | Dynamic programming
It is a top-down approach   | It is a bottom-up approach


Difference between DnC & DP
Divide and conquer             | Dynamic programming
Sub-problems are independent   | Sub-problems are dependent

Divide-and-Conquer sub-problems: independent sub-problems, so plain recursion gives efficient algorithms.
Dynamic Programming sub-problems: spectacularly redundant sub-problems, so plain recursion gives exponential algorithms; storing the results avoids this.
Time Complexity in Dynamic Algorithms

 Time complexity:
  If there are a polynomial number of sub-problems, and
  each sub-problem can be computed in polynomial time,
  then the solution of the whole problem can be found in polynomial time.

Remark:
Greedy also applies a top-down strategy, but usually on one sub-problem, so that the order of computation is clear.


A new design technique built from the idea of divide and conquer

 Perspective #1: Divide & Conquer with a Memory Table
  Solve the sub-problems one by one, smallest first, possibly by recursion
  Store the solutions to sub-problems in a table and reuse them to solve larger sub-problems
  Until the top (original) problem instance is solved


DP: Food for thought

 When do we use Dynamic Programming?
  Dynamic Programming is mainly used when solutions of the same sub-problems are needed again and again.
 When is Dynamic Programming not useful?
  Dynamic Programming is not useful when there are no common (overlapping) sub-problems, because there is no point storing the solutions if they are not needed again.
  For example, Binary Search doesn't have common sub-problems. In contrast, the recursive program for Fibonacci numbers seen earlier has many sub-problems that are solved again and again.


History



Birth of DP(CHOICE OF THE NAME DYNAMIC PROGRAMMING)

 Ahsan: What is your program for joining our Murree trip?
 Qasim: Sorry yar, I have a quiz, so I am busy with my preparation.
 What does the word "program" mean here in the above conversation?
  a) Plan   b) Computer coding


Birth of DP(CHOICE OF THE NAME DYNAMIC PROGRAMMING)

 The name was coined in 1957 by Richard Bellman to describe a common type of control problem.
 Actually, the name originally described the problem more than the technique of solution.
 The sense in which "programming" is meant is "a series of choices," like the programming of a radio station.
 The word "dynamic" conveys the idea that choices may depend on the current state, rather than being decided ahead of time.


Birth of DP

 So, in this original sense, a radio show in which listeners call in requests might be said to be "dynamically programmed", to contrast it with the more usual format where the selections are decided before the show begins.

In Bellman's own words (Bellman 1984), here is the origin of the name:
"In the first place I was interested in planning, in decision making, in thinking. But planning is not a good word for various reasons. I decided therefore to use the word, 'programming'. I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying – I thought, let's kill two birds with one stone."
Thus, the word "programming" in the name of this technique stands for "planning" and does not refer to computer programming.
Rules of Dynamic Programming
 OPTIMAL SUB-STRUCTURE:
  An optimal solution to a problem contains optimal solutions to sub-problems.
 OVERLAPPING SUB-PROBLEMS:
  A recursive solution contains a "small" number of distinct sub-problems, repeated many times.
 BOTTOM-UP FASHION:
  Compute the solution in a bottom-up fashion in the final step.



Algorithmic Paradigms?

 Divide-and-conquer: Break up a problem into independent sub-problems, solve each sub-problem, and combine the solutions to the sub-problems to form a solution to the original problem.
  Example: merge-sort
 Greedy algorithm: Build up a solution incrementally, by optimizing some local criterion.
  Example: Kruskal's algorithm for MST
 Dynamic programming: Break up a problem into a series of overlapping sub-problems, and build up solutions to larger and larger sub-problems.
  "Overlapping" means that sub-problems share sub-problems.
  The idea: solve every sub-problem only once and store the answer for use when it reappears.


Summary



Dynamic Programming
 Dynamic Programming is a fancy name for divide-and-conquer with a table.
 "Programming" here means "planning". It refers to a tabular method, not to writing computer code.
 Invented by the American mathematician Richard Bellman in the 1950s to solve optimization problems.
 Main idea:
  Solve several smaller (overlapping) sub-problems.
  Record solutions in a table so that each sub-problem is solved only once.
  The final state of the table will be (or contain) the solution.
 Dynamic programming vs. divide-and-conquer


Why Dynamic Programming?
 Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to sub-problems.
 Divide and conquer algorithms:
• Partition the problem into independent sub-problems
• Solve the sub-problems recursively, and
• Combine their solutions to solve the original problem
• Food for thought: When is divide and conquer not a good choice?
• When the sub-problems are not independent
 In contrast, dynamic programming is applicable when the sub-problems are not independent.
Conclusion (What We Have Learned Today)
 Motivation
 Fibonacci Sequence
 Computing Binomial Coefficients
 Problem Analysis of the above examples
  Divide and conquer approach
  Time Complexity
  Dynamic Algorithm
  Time Complexity
 Optimization problems
 Steps in Development of Dynamic Algorithms
 Why dynamic programming in optimization problems?
 Generalization and Applications
