
Design Methods

We have discussed examples of the following algorithm design principles:

• Dynamic Programming Paradigm
• Greedy Paradigm
• Divide-and-Conquer Paradigm

Dynamic Programming
The development of a dynamic programming algorithm can be
subdivided into the following steps (a short sketch follows the list):
1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution in a bottom-up
fashion
4. Construct an optimal solution from computed information
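As a minimal sketch of steps 2-4 (assuming Python; the concrete values, weights, and capacity below are the ones used in the knapsack examples later in these slides), a bottom-up solver for the 0-1 knapsack problem might look like this:

```python
def knapsack_01(values, weights, W):
    """Bottom-up 0-1 knapsack, illustrating the four DP steps.

    Step 2 (recursive definition): best[i][c] is the maximum value
    obtainable from the first i items with capacity c:
        best[i][c] = max(best[i-1][c],                                  # skip item i
                         values[i-1] + best[i-1][c - weights[i-1]])     # take it, if it fits
    """
    n = len(values)
    # Step 3: compute the values bottom-up in a table.
    best = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(W + 1):
            best[i][c] = best[i - 1][c]                       # skip item i
            if weights[i - 1] <= c:                           # take item i if it fits
                best[i][c] = max(best[i][c],
                                 values[i - 1] + best[i - 1][c - weights[i - 1]])
    # Step 4: construct an optimal solution from the computed information.
    chosen, c = [], W
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:      # item i-1 was taken
            chosen.append(i - 1)
            c -= weights[i - 1]
    return best[n][W], sorted(chosen)

# Items worth $60, $100, $120 weighing 10, 20, 30 pounds; capacity 50 pounds.
print(knapsack_01([60, 100, 120], [10, 20, 30], 50))   # (220, [1, 2])
```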

Optimal Substructure
A problem exhibits optimal substructure if and only if an
optimal solution to the problem contains within it optimal
solutions to subproblems.

Whenever a problem exhibits optimal substructure, it is an
indication that a dynamic programming or greedy strategy
might apply.

Overlapping Subproblems
A second indication that dynamic programming might
be applicable is that the space of subproblems must be
small, meaning that a recursive algorithm for the
problem solves the same subproblems over and over.

Typically, the total number of distinct subproblems is
polynomial in the input size.

Overlapping Subproblems
When a recursive algorithm revisits the same problem over and
over again, we say that the optimization problem has
overlapping subproblems.

Here two subproblems are called overlapping if and only if
they are really the same subproblem, occurring as a
subproblem of different problems.
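A small hypothetical experiment makes this concrete (the item weights and values below are made up for illustration): plain recursion on a 0-1 knapsack instance solves the same (item index, remaining capacity) subproblem repeatedly, even though there are at most n·(W+1) distinct subproblems:

```python
from collections import Counter

values  = [2, 3, 4, 4, 5, 6, 7, 8, 9, 10]   # made-up instance, for illustration only
weights = [1, 2, 2, 3, 3, 4, 4, 5, 5, 6]
W = 15

visits = Counter()   # how many times each (i, cap) subproblem is solved

def best(i, cap):
    """Plain recursion, no memoization: value achievable with items i.. and capacity cap."""
    visits[(i, cap)] += 1
    if i == len(values) or cap == 0:
        return 0
    result = best(i + 1, cap)                              # skip item i
    if weights[i] <= cap:                                  # take item i if it fits
        result = max(result, values[i] + best(i + 1, cap - weights[i]))
    return result

best(0, W)
repeats = sum(c - 1 for c in visits.values())
print(len(visits), "distinct subproblems;", repeats, "redundant recursive calls")
```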

Note
If a recursive algorithm solving the problem always creates
new subproblems, this is an indication that divide-and-conquer
methods rather than dynamic programming might apply.

Greedy Algorithms
The development of a greedy algorithm can be separated into the
following steps:
1. Cast the optimization problem as one in which we make a choice
and are left with one subproblem to solve.
2. Prove that there is always an optimal solution to the original
problem that makes the greedy choice, so that the greedy choice is
always safe.
3. Demonstrate that, having made the greedy choice, what remains is
a subproblem with the property that if we combine an optimal
solution to the subproblem with the greedy choice that we have
made, we arrive at an optimal solution to the original problem.
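For the fractional knapsack problem treated later in these slides, for example, the greedy choice is to take as much as possible of an item with the greatest value per pound; what is left is the same kind of problem with a smaller capacity and one fewer item, and combining an optimal solution to that smaller problem with the greedy choice gives an optimal solution to the original problem.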

Greedy-Choice Property

The greedy-choice property is that a globally optimal solution
can be arrived at by making a locally optimal (greedy) choice.

Optimal Substructure
A problem exhibits optimal substructure if and only if an
optimal solution to the problem contains within it optimal
solutions to subproblems.

Divide-and-Conquer
A divide and conquer method can be used for problems that can
be solved by recursively breaking them down into two or more
sub-problems of the same (or related) type, until these become
simple enough to be solved directly. The solutions to the sub-
problems are then combined to give a solution to the original
problem.

This approach is particularly successful when the number of
subproblems remains small in each step and combining the
solutions is easily done.
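As a brief, hedged illustration (merge sort is not discussed elsewhere in these slides), the classic divide-and-conquer shape has two subproblems per step and an easy linear-time combine:

```python
def merge_sort(a):
    """Classic divide-and-conquer: divide, solve the halves recursively, combine."""
    if len(a) <= 1:                       # simple enough to solve directly
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])            # conquer the two subproblems
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0               # combine: merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))        # [1, 2, 5, 7, 9]
```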

Greedy Algorithms

• Similar to dynamic programming, but a simpler approach
Also used for optimization problems
• Idea: when we have a choice to make, make the one that looks
best right now
Make a locally optimal choice in the hope of getting a globally
optimal solution
• Greedy algorithms don’t always yield an optimal solution
• A greedy algorithm makes the choice that looks best at the moment
in the hope of getting an optimal solution
The Knapsack Problem
• The famous knapsack problem:
A thief breaks into a museum. Fabulous paintings,
sculptures, and jewels are everywhere. The thief has a good
eye for the value of these objects, and knows that each will
fetch hundreds or thousands of dollars on the clandestine art
collector’s market. But, the thief has only brought a single
knapsack to the scene of the robbery, and can take away
only what he can carry. What items should the thief take to
maximize the haul?
Fractional Knapsack Problem

• Knapsack capacity: W
• There are n items: the i-th item has value vi and weight wi
• Goal:
find xi such that 0 ≤ xi ≤ 1 for all i = 1, 2, ..., n,

Σ wi xi ≤ W, and

Σ xi vi is maximized
Fractional Knapsack - Example

• E.g.: knapsack capacity 50 pounds
Item 1: 10 pounds, $60 ($6/pound)
Item 2: 20 pounds, $100 ($5/pound)
Item 3: 30 pounds, $120 ($4/pound)
Taking all of Item 1, all of Item 2, and 20 pounds of Item 3
(worth $80) fills the knapsack for a total of $240.

Fractional Knapsack Problem
• Greedy strategy 1:
Pick the item with the maximum value
• E.g.:
W = 1
w1 = 100, v1 = 2
w2 = 1, v2 = 1
Taking from the item with the maximum value (item 1), only 1 of its
100 pounds fits, so the total value taken = v1/w1 = 2/100
This is smaller than what the thief can take by choosing the other
item: total value (choose item 2) = v2/w2 = 1
Fractional Knapsack Problem
Greedy strategy 2:
• Pick the item with the maximum value per pound vi/wi
• If the supply of that item is exhausted and the thief can carry
more, take as much as possible of the item with the next greatest
value per pound
• It is good to order items based on their value per pound

v1/w1 ≥ v2/w2 ≥ ... ≥ vn/wn
Fractional Knapsack Problem
Alg.: Fractional-Knapsack(W, v[n], w[n])
1. while w > 0 and there are items remaining
2.     pick item i with maximum vi/wi
3.     xi ← min(1, w/wi)
4.     remove item i from the list
5.     w ← w – xi wi

• w – the amount of space remaining in the knapsack (initially w = W)

• Running time: Θ(n) if the items are already ordered by vi/wi;
otherwise Θ(n lg n)
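A minimal Python sketch of this pseudocode (assuming the items are given as parallel lists; sorting by value per pound up front gives the Θ(n lg n) case), checked against the earlier example:

```python
def fractional_knapsack(W, values, weights):
    """Greedy fractional knapsack: repeatedly take the item with the best value per pound."""
    # Order item indices by value per pound, best first.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    remaining = W                        # w in the pseudocode: space left in the knapsack
    total = 0.0
    fractions = [0.0] * len(values)      # x_i: fraction of item i taken
    for i in order:
        if remaining <= 0:
            break
        fractions[i] = min(1.0, remaining / weights[i])
        total += fractions[i] * values[i]
        remaining -= fractions[i] * weights[i]
    return total, fractions

# Earlier example: items worth $60, $100, $120 weighing 10, 20, 30 pounds, W = 50.
# Expect $240: all of items 1 and 2, plus 20 of the 30 pounds of item 3.
print(fractional_knapsack(50, [60, 100, 120], [10, 20, 30]))
```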
The Knapsack Problem
• More formally, the 0-1 knapsack problem:
• The thief must choose among n items, where the i-th item is
worth vi dollars and weighs wi pounds
• Carrying at most W pounds, maximize value
• Note: assume vi, wi, and W are all integers
“0-1” because each item must be taken or left in its entirety
• A variation, the fractional knapsack problem:
The thief can take fractions of items
Think of items in the 0-1 problem as gold ingots, and in the
fractional problem as buckets of gold dust
The Knapsack Problem
And Optimal Substructure
• Both variations exhibit optimal substructure
• To show this for the 0-1 problem, consider the most valuable
load weighing at most W pounds

• If we remove item j from the load, what do we know about the
remaining load?

• A: the remainder must be the most valuable load weighing at
most W – wj that the thief could take from the museum,
excluding item j
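This observation is what the dynamic programming solution to the 0-1 problem is built on. With c[i, w] denoting the best value achievable from the first i items under capacity w (a common formulation, not stated explicitly on these slides):

c[i, w] = max(c[i-1, w], vi + c[i-1, w - wi]) if wi ≤ w, and c[i, w] = c[i-1, w] otherwise, with c[0, w] = 0.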
Solving The Knapsack Problem
• The optimal solution to the fractional knapsack problem can
be found with a greedy algorithm
How?
• The optimal solution to the 0-1 problem cannot be found with
the same greedy strategy

• Greedy strategy: take in order of dollars/pound

Example: 3 items weighing 10, 20, and 30 pounds; the knapsack
can hold 50 pounds

Suppose item 2 is worth $100. Assign values to the other
items so that the greedy strategy will fail.
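One possible assignment (reusing the dollar amounts from the fractional example; this choice is illustrative, not the only one): items worth $60, $100, $120. Greedy by dollars per pound takes items 1 and 2 for $160 and then cannot fit item 3, while the optimal 0-1 load is items 2 and 3 for $220. A small brute-force check in Python:

```python
from itertools import combinations

values, weights, W = [60, 100, 120], [10, 20, 30], 50

# Greedy by dollars per pound, but each item must be taken whole (0-1 rule).
greedy_value, room = 0, W
for i in sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True):
    if weights[i] <= room:
        greedy_value += values[i]
        room -= weights[i]

# Exhaustive search over all subsets gives the true optimum.
optimum = max(sum(values[i] for i in s)
              for r in range(len(values) + 1)
              for s in combinations(range(len(values)), r)
              if sum(weights[i] for i in s) <= W)

print(greedy_value, optimum)   # 160 220: the greedy strategy is not optimal here
```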
0-1 Knapsack problem:
a picture

Item   Weight wi   Benefit value bi
 1         2             3
 2         3             4
 3         4             5
 4         5             8
 5         9            10

This is a knapsack with max weight W = 20
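As a quick check (a hedged sketch, not part of the slides), exhaustive search over the 32 subsets of this instance gives a best achievable benefit of 26, taking the items of weight 2, 4, 5, and 9:

```python
from itertools import combinations

weights  = [2, 3, 4, 5, 9]
benefits = [3, 4, 5, 8, 10]
W = 20

best = max((sum(benefits[i] for i in s), s)
           for r in range(len(weights) + 1)
           for s in combinations(range(len(weights)), r)
           if sum(weights[i] for i in s) <= W)

print(best)   # (26, (0, 2, 3, 4)): weights 2 + 4 + 5 + 9 = 20, benefit 26
```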
