
The Greedy method

Dr. Dwiti Krishna Bebarta


Outline

The Greedy method


– The general method
– Minimum cost spanning trees
– Single-source shortest paths
– Knapsack problem
– Optimal merge patterns
Greedy algorithms build a solution part by part, choosing the next part so that it gives an immediate benefit. This approach never reconsiders the choices made previously. It is mainly used to solve optimization problems.
Components of Greedy Algorithm
• A candidate set − A solution is created from this set.
• A selection function − Used to choose the best candidate
to be added to the solution.
• A feasibility function − Used to determine whether a
candidate can be used to contribute to the solution.
• An objective function − Used to assign a value to a
solution or a partial solution.
• A solution function − Used to indicate whether a complete
solution has been reached.
General Method
• Greedy method control abstraction for the subset paradigm
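The subset-paradigm control abstraction can be sketched in Python as follows. The parameters `select` and `feasible` are hypothetical stand-ins for the selection and feasibility functions listed above; this is a sketch of the pattern, not the textbook's own code.

```python
def greedy(candidates, select, feasible):
    """Subset-paradigm greedy control abstraction (sketch).

    candidates: the candidate set the solution is built from
    select:     picks the best remaining candidate (selection function)
    feasible:   tests whether a partial solution can still be extended
    """
    solution = []
    remaining = list(candidates)
    while remaining:
        x = select(remaining)          # greedy choice
        remaining.remove(x)
        if feasible(solution + [x]):   # keep x only if still feasible
            solution.append(x)
    return solution

# Illustrative use: pick numbers, largest first, while their sum stays <= 10
result = greedy([7, 5, 4, 2],
                select=max,
                feasible=lambda s: sum(s) <= 10)
print(result)  # [7, 2]
```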
Minimum-cost Spanning Trees
• Definition 4.1: Let G=(V, E) be an undirected connected graph. A subgraph t=(V, E’) of G is a spanning tree of G iff t is a tree.

• Example 4.5

Spanning trees

• Applications
– Obtaining an independent set of circuit equations for an electric network
– Network design (e.g., telephone or cable networks)
– Computer network routing protocols
Minimum-cost Spanning Trees
• Example of MCST
– Finding a spanning tree of G with minimum cost

[Figure: (a) a connected graph on vertices 1–7 with edge weights 28, 10, 14, 16, 24, 25, 18, 12, 22; (b) its minimum-cost spanning tree]
Prim’s Algorithm

[Figure: stages (a)–(f) of Prim's algorithm on the graph above, growing the tree one edge at a time: (1,6) weight 10, (6,5) weight 25, (5,4) weight 22, (4,3) weight 12, (3,2) weight 16, (2,7) weight 14]
Prim’s Algorithm
GENERIC-MST(G, w)
1. A = ∅
2. while A does not form a spanning tree
3.     find an edge (u, v) that is safe for A
4.     A = A ∪ {(u, v)}
5. return A

• Initialization: After line 1, the set A trivially satisfies the loop invariant.
• Maintenance: The loop in lines 2–4 maintains the invariant by
adding only safe edges.
• Termination: All edges added to A are in a minimum spanning
tree, and so the set A returned in line 5 must be a minimum
spanning tree.
MST-PRIM(G, w, r)
1   for each u ∈ G.V
2       u.cost = ∞
3       u.parent = NIL
4   r.cost = 0
5   Q = G.V
6   mincost = 0
7   while Q ≠ ∅
8       u = EXTRACT-MIN(Q)
9       mincost = mincost + u.cost
10      for each v ∈ G.Adj[u]
11          if v ∈ Q and w(u, v) < v.cost
12              v.parent = u
13              v.cost = w(u, v)
Data structure used for the minimum edge weight — Time complexity:
Adjacency matrix, linear search: O(|V|²)
Adjacency list and binary heap: O(|E| log |V|)
Adjacency list and Fibonacci heap: O(|E| + |V| log |V|)
Worked example: MST-PRIM on the graph with vertices {a, b, c, d, e, f, g, h, i}, root r = a, and edge weights
w(a,b)=4, w(a,h)=8, w(b,c)=8, w(b,h)=11, w(c,d)=7, w(c,f)=4, w(c,i)=2, w(d,e)=9, w(d,f)=14, w(e,f)=10, w(f,g)=2, w(g,h)=1, w(g,i)=6, w(h,i)=7.

At every extraction, each adjacent edge (u, v) is examined: if v ∈ Q and w(u, v) < v.cost, then v.parent = u and v.cost = w(u, v).

Step 1: a is extracted. Edges ab and ah are considered since they are adjacent to node a:
ab: 4 < ∞ is true, so b.cost = 4, b.parent = a; ah: 8 < ∞ is true, so h.cost = 8, h.parent = a.
Costs (a b h i c g d f e): 0 4 8 ∞ ∞ ∞ ∞ ∞ ∞

Step 2: b is extracted.
ba: not considered since a is not an element of Q; bh: 11 < 8 is false; bc: 8 < ∞ is true, so c.cost = 8, c.parent = b.
Costs: 0 4 8 ∞ 8 ∞ ∞ ∞ ∞

Step 3: c is extracted.
cb: b ∉ Q, not considered; ci: 2 < ∞ is true, so i.cost = 2, i.parent = c; cf: 4 < ∞ is true, so f.cost = 4, f.parent = c; cd: 7 < ∞ is true, so d.cost = 7, d.parent = c.
Costs: 0 4 8 2 8 ∞ 7 4 ∞

Step 4: i is extracted.
ic: c ∉ Q, not considered; ih: 7 < 8 is true, so h.cost = 7, h.parent = i; ig: 6 < ∞ is true, so g.cost = 6, g.parent = i.
Costs: 0 4 7 2 8 6 7 4 ∞

Step 5: f is extracted.
fc: c ∉ Q, not considered; fg: 2 < 6 is true, so g.cost = 2, g.parent = f; fd: 14 < 7 is false; fe: 10 < ∞ is true, so e.cost = 10, e.parent = f.
Costs: 0 4 7 2 8 2 7 4 10

Step 6: g is extracted.
gi: i ∉ Q and gf: f ∉ Q, not considered; gh: 1 < 7 is true, so h.cost = 1, h.parent = g.
Costs: 0 4 1 2 8 2 7 4 10

Step 7: h is extracted. All of its neighbours (a, b, i, g) are no longer in Q, so nothing is updated.

Step 8: d is extracted.
dc: c ∉ Q and df: f ∉ Q, not considered; de: 9 < 10 is true, so e.cost = 9, e.parent = d.
Costs: 0 4 1 2 8 2 7 4 9

Step 9: e is extracted. Q is now empty and the algorithm terminates.

Final table (parent and cost updates per node; the last value is final):
Node  Parent (v)  Cost (v)
a     NIL         0
b     a           4
c     b           8
d     c           7
e     f, d        10, 9
f     c           4
g     i, f        6, 2
h     a, i, g     8, 7, 1
i     c           2

The MST edges are (a,b), (b,c), (c,i), (c,f), (f,g), (g,h), (c,d), (d,e), with total cost 4+8+2+4+2+1+7+9 = 37.
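As a sketch (not the textbook's own code), MST-PRIM can be implemented in Python with a binary heap, using lazy deletion of stale heap entries in place of an explicit decrease-key. The graph is the one traced above.

```python
import heapq

def prim_mst(graph, root):
    """Prim's algorithm with a binary heap (lazy deletion).
    graph: dict vertex -> list of (neighbor, weight).
    Returns (total MST cost, parent dict)."""
    parent = {root: None}
    best = {root: 0}          # best known edge weight into each vertex
    visited = set()
    total = 0
    pq = [(0, root)]
    while pq:
        cost, u = heapq.heappop(pq)
        if u in visited:
            continue          # stale entry, skip
        visited.add(u)
        total += cost
        for v, w in graph[u]:
            if v not in visited and w < best.get(v, float('inf')):
                best[v] = w
                parent[v] = u
                heapq.heappush(pq, (w, v))
    return total, parent

# The graph traced above
edges = [('a','b',4), ('a','h',8), ('b','h',11), ('b','c',8),
         ('c','d',7), ('c','f',4), ('c','i',2), ('d','e',9),
         ('d','f',14), ('e','f',10), ('f','g',2), ('g','h',1),
         ('g','i',6), ('h','i',7)]
graph = {}
for u, v, w in edges:
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

total, parent = prim_mst(graph, 'a')
print(total)  # 37, matching the trace
```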
Disjoint-set operations
Disjoint-Set
• A disjoint-set data structure maintains a collection S = {S1, S2, …, Sk} of distinct dynamic sets.
• Each set is identified by one of its members, called the representative.
Disjoint-set operations:
– MAKE-SET(x): create a new set {x} with representative x.
– UNION(x, y): x and y are elements of two sets; remove these sets and add their union, choosing a representative for it.
– FIND-SET(x): return the representative of the set containing x.
Disjoint-set representations using linked lists
Linked-List Implementation
• Each set is a linked list with head and tail pointers; each node contains a value, a next-node pointer, and a back-to-representative pointer.
Example:
• MAKE-SET costs O(1): just create a single-element list.
• FIND-SET costs O(1): just return the back-to-representative pointer.
UNION(x, y): append one list to the other and update every back-to-representative pointer in the appended list, which costs time proportional to the appended list's length.
Applications
• Disjoint-set data structures model the partitioning of a set, for example to keep track of the connected components of an undirected graph. This model can then be used to determine whether two vertices belong to the same component, or whether adding an edge between them would create a cycle.
• This data structure is used by the Boost Graph Library to implement its Incremental Connected Components functionality. It is also used for implementing Kruskal's algorithm to find the minimum spanning tree of a graph.
• Note that the implementation as disjoint-set forests doesn't allow deletion of edges; even without path compression or the rank heuristic this is not easy, although more complex schemes have been designed that can deal with this type of incremental update.
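A common realization is the disjoint-set forest with union by rank and path compression, sketched here in Python. The class layout is illustrative, not from the slides.

```python
class DisjointSet:
    """Disjoint-set forest with union by rank and path compression."""
    def __init__(self):
        self.parent = {}
        self.rank = {}

    def make_set(self, x):
        self.parent[x] = x      # x is its own representative
        self.rank[x] = 0

    def find_set(self, x):
        if self.parent[x] != x:
            # path compression: point x directly at the representative
            self.parent[x] = self.find_set(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find_set(x), self.find_set(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:   # union by rank:
            rx, ry = ry, rx                 # attach shorter tree under taller
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

ds = DisjointSet()
for v in 'abcd':
    ds.make_set(v)
ds.union('a', 'b')
ds.union('c', 'd')
print(ds.find_set('a') == ds.find_set('b'))  # True: same component
print(ds.find_set('b') == ds.find_set('c'))  # False: different components
```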
Kruskal's Algorithm
MST-Kruskal(G, w)
1. sort the edges E in non-decreasing order by weight
2. A = ∅
3. for each vertex v ∈ V[G]
4.     do MAKE-SET(v)
5. for each edge (u, v) ∈ E, taken in non-decreasing order by weight
6.     do if FIND-SET(u) ≠ FIND-SET(v)
7.         then A = A ∪ {(u, v)}
8.             UNION(u, v)
9. return A
It uses a disjoint-set data structure to maintain several disjoint sets of elements. Each set contains the vertices in one tree of the current forest. The operation FIND-SET(u) returns a representative element from the set that contains u. Thus, we can determine whether two vertices u and v belong to the same tree by testing whether FIND-SET(u) equals FIND-SET(v). To combine trees, Kruskal's algorithm calls the UNION procedure.
Kruskal's Algorithm: Time Analysis

MST-Kruskal(G, w)
1. sort the edges E in non-decreasing order by weight    O(E lg E)
2. A = ∅                                                 Θ(1)
3. for each vertex v ∈ V[G]                              Θ(V)
4.     do MAKE-SET(v)
5. for each edge (u, v) ∈ E, taken in                    O(E)
   non-decreasing order by weight
6.     do if FIND-SET(u) ≠ FIND-SET(v)
7.         then A = A ∪ {(u, v)}
8.             UNION(u, v)
9. return A
Sorting dominates the runtime, so T(V, E) = O(E lg E).
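A possible Python rendering of MST-Kruskal, using a simple disjoint-set forest inline (a sketch, on the same graph as the Prim example):

```python
def kruskal_mst(vertices, edges):
    """Kruskal's algorithm. edges: list of (weight, u, v).
    Returns (total cost, list of MST edges)."""
    parent = {v: v for v in vertices}      # MAKE-SET for every vertex

    def find(x):                           # FIND-SET with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, cost = [], 0
    for w, u, v in sorted(edges):          # non-decreasing order by weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # u and v are in different trees
            parent[ru] = rv                # UNION
            mst.append((u, v))
            cost += w
    return cost, mst

# Same graph as the Prim example above
edges = [(4,'a','b'), (8,'a','h'), (11,'b','h'), (8,'b','c'),
         (7,'c','d'), (4,'c','f'), (2,'c','i'), (9,'d','e'),
         (14,'d','f'), (10,'e','f'), (2,'f','g'), (1,'g','h'),
         (6,'g','i'), (7,'h','i')]
cost, mst = kruskal_mst('abcdefghi', edges)
print(cost)  # 37, same MST cost as Prim's algorithm finds
```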
Example:

[Figure: a graph on vertices a, b, c, d, f, u, v with edge weights 2, 3, 4, 5, 6, 8, 10, 14, 15, for tracing Kruskal's algorithm]
Single-Source Shortest Paths Problem

• Shortest paths: given a weighted graph and two vertices u and v, we want to find a path of minimum total weight between u and v. The length of a path is the sum of the weights of its edges.
Applications
• Internet packet routing
• Flight reservations
• Driving directions
Single-Source Shortest Paths Problem

• Is there any path from A to B?


• If more than one path exists, then find the shortest path
from A to B
• Find all paths from 1 to 2.
Single-Source Shortest Paths Problem (cont.)
Shortest-path problems
• Single source, single destination.
• Single source, all destinations.
• All pairs (every vertex is a source and a destination).
The algorithm will compute a shortest-path tree.
– Similar to a BFS tree.
Algorithms to be discussed
• Bellman-Ford algorithm: finds the shortest path from the source to all vertices reachable from it.
• Dijkstra's algorithm.
Relaxation Algorithm
The algorithms keep track of distance[v] and parent[v].
Initialize the given graph as follows:
Procedure Initialize(G, s)
    for each v ∈ V[G] do
        distance[v] := ∞
        parent[v] := NIL
    distance[s] := 0
These values are changed/updated when an edge (u, v) is relaxed:
Procedure Relax(u, v, w)
    if (distance[u] + w(u, v) < distance[v]) then
        distance[v] := distance[u] + w(u, v);
        parent[v] := u
Bellman-Ford algorithm

• Can handle negative-weight edges, and will “detect” reachable negative-weight cycles.

Initialize(G, s)                        Θ(V)
for i := 1 to |V[G]| – 1 do             Θ(V)
    for each (u, v) in E[G] do          Θ(E)   }  Θ(VE)
        Relax(u, v, w)
for each (u, v) in E[G] do              Θ(E)
    if (d[u] + w(u, v) < d[v]) then
        return false
return true

The Bellman-Ford algorithm runs in time O(VE).
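The pseudocode above can be sketched in Python with an edge-list representation; `None` is returned when a reachable negative-weight cycle is detected.

```python
def bellman_ford(vertices, edges, s):
    """Bellman-Ford. edges: list of (u, v, w) directed edges.
    Returns (distance, parent), or None on a reachable negative cycle."""
    INF = float('inf')
    distance = {v: INF for v in vertices}
    parent = {v: None for v in vertices}
    distance[s] = 0
    for _ in range(len(vertices) - 1):       # |V| - 1 passes
        for u, v, w in edges:                # relax every edge
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
                parent[v] = u
    for u, v, w in edges:                    # one extra pass detects cycles
        if distance[u] + w < distance[v]:
            return None
    return distance, parent

# Graph of the first trace below: A->B=3, A->C=4, C->B=-2, B->D=5
dist, par = bellman_ford('ABCD',
                         [('A','B',3), ('A','C',4), ('C','B',-2), ('B','D',5)],
                         'A')
print(dist)  # {'A': 0, 'B': 2, 'C': 4, 'D': 7}
```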
Example: directed edges A→B (w=3), A→C (w=4), C→B (w=–2), B→D (w=5); source A.
Condition: distance[u] + w(u, v) < distance[v]
Initialized: (A, B, C, D) = (0, ∞, ∞, ∞)

Iteration 1: processing all edges
A→B: distance(A) + w(A,B) = 0+3 < ∞ is true, so B is updated to 3, parent(B)=A → (0, 3, ∞, ∞)
A→C: distance(A) + w(A,C) = 0+4 < ∞ is true, so C is updated to 4, parent(C)=A → (0, 3, 4, ∞)
C→B: distance(C) + w(C,B) = 4+(–2) = 2 < 3 is true, so B is updated to 2, parent(B)=C → (0, 2, 4, ∞)
B→D: distance(B) + w(B,D) = 2+5 = 7 < ∞ is true, so D is updated to 7, parent(D)=B → (0, 2, 4, 7)
Iteration 2: processing all edges again
A→B: 0+3 < 2 is false, so no update (parent(B)=C remains)
A→C: 0+4 < 4 is false, no update
C→B: 4+(–2) < 2 is false, no update
B→D: 2+5 < 7 is false, no update
After iteration 1 all distances and the corresponding parent nodes are already final; iteration 2 makes no updates, so any further iterations would simply execute without any change.
Second example: source z, with directed edges z→u (6), z→x (7), u→v (5), u→x (8), u→y (–4), v→u (–2), x→v (–3), x→y (9), y→z (2), y→v (7). In each iteration the edges are processed in the order zu, zx, uv, vu, xy, yz, xv, ux, uy, yv, testing distance[u] + w(u, v) < distance[v].

Iteration 1 (T = edge relaxed, F = no update):
zu T (u = 6, parent z); zx T (x = 7, parent z); uv T (v = 11, parent u); vu F; xy T (y = 16, parent x); yz F; xv T (v = 4, parent x); ux F; uy T (y = 2, parent u); yv F.
After iteration 1: u = 6, v = 4, x = 7, y = 2, z = 0.

Iteration 2:
vu T (4+(–2) = 2 < 6, so u = 2, parent v); uy T (2+(–4) = –2 < 2, so y = –2, parent u); all other edges F.
After iteration 2: u = 2, v = 4, x = 7, y = –2, z = 0.

Iteration 3: all edges test false, so there are no updates hereafter.

Solution of the given graph (shortest-path tree marked in red in the slides):
Node  Parent updates  Distance updates
u     z, v            6, 2
v     u, x            11, 4
x     z               7
y     x, u            16, 2, –2
z     NIL             0

Paths and distances:
From z to u: z->x->v->u = 2
From z to v: z->x->v = 4
From z to x: z->x = 7
From z to y: z->x->v->u->y = –2
Single-source Shortest Paths
Greedy algorithm: Dijkstra's algorithm
• Assumes no negative-weight edges.
• Maintains a set S of vertices whose final shortest-path distances are known; S grows outward from s.
• Repeatedly selects the vertex u in V–S with the minimum distance estimate (the greedy choice).
• Stores V–S in a min-priority queue Q.
Dijkstra’s algorithm
Initialize(G, s);                 Θ(V)
S := ∅;                           Θ(1)
Q := V[G];                        Θ(V) to build the min-priority queue
while Q ≠ ∅ do                    Θ(V) iterations
    u := Extract-Min(Q);          Θ(log V) each
    S := S ∪ {u};                 Θ(1)
    for each v ∈ Adj[u] do        Θ(E) relaxations over the whole run
        Relax(u, v, w)            Θ(log V) each (decrease-key)

With an adjacency list and a binary heap:

T(V, E) = Θ(V) + Θ(1) + Θ(V) + Θ(V log V) + Θ(E log V)
        = Θ((V + E) log V), i.e. O(E log V) when the graph is connected.
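A Python sketch of the algorithm with a binary heap, using lazy deletion of stale entries in place of decrease-key; the graph is the one used in the trace that follows.

```python
import heapq

def dijkstra(graph, s):
    """Dijkstra with a binary heap (lazy deletion).
    graph: dict u -> list of (v, w); assumes no negative-weight edges.
    Returns (distance, parent)."""
    distance = {s: 0}
    parent = {s: None}
    done = set()                            # the set S
    pq = [(0, s)]                           # the queue Q
    while pq:
        d, u = heapq.heappop(pq)            # Extract-Min(Q)
        if u in done:
            continue                        # stale heap entry, skip
        done.add(u)
        for v, w in graph.get(u, []):       # relax each outgoing edge
            if d + w < distance.get(v, float('inf')):
                distance[v] = d + w
                parent[v] = u
                heapq.heappush(pq, (d + w, v))
    return distance, parent

# The graph traced in the following slides
graph = {'s': [('t',10), ('y',5)], 't': [('x',1), ('y',2)],
         'x': [('z',4)], 'y': [('t',3), ('x',9), ('z',2)],
         'z': [('s',7), ('x',6)]}
dist, par = dijkstra(graph, 's')
print(dist)  # s=0, t=8, x=9, y=5, z=7
```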
Worked example: vertices {s, t, x, y, z}, source s, directed edges
s→t (10), s→y (5), t→x (1), t→y (2), x→z (4), y→t (3), y→x (9), y→z (2), z→s (7), z→x (6).
On each extraction, every adjacent edge is relaxed:
if (distance[u] + w(u, v) < distance[v]) then distance[v] := distance[u] + w(u, v); parent[v] := u

Step 1: s is extracted. Edges st and sy are considered since they are adjacent to node s:
t = 0+10 = 10 (parent s); y = 0+5 = 5 (parent s).
Distances (s t x y z): 0 10 ∞ 5 ∞

Step 2: y is extracted (minimum estimate 5). Adjacent edges yt, yx, yz:
t: 5+3 = 8 < 10, so t = 8 (parent y); x: 5+9 = 14 < ∞, so x = 14 (parent y); z: 5+2 = 7 < ∞, so z = 7 (parent y).
Distances: 0 8 14 5 7

Step 3: z is extracted. Adjacent edges zs, zx:
s ∉ Q, not considered; x: 7+6 = 13 < 14, so x = 13 (parent z).
Distances: 0 8 13 5 7

Step 4: t is extracted. Adjacent edges tx, ty:
x: 8+1 = 9 < 13, so x = 9 (parent t); y ∉ Q, not considered.
Distances: 0 8 9 5 7

Step 5: x is extracted. Adjacent edge xz: 9+4 = 13 < 7 is false, no update. Q is now empty.

Final table (parent and distance updates per node; the last value is final):
Node  Parent (v)  Distance (v)
s     NIL         0
t     s, y        10, 8
x     y, z, t     14, 13, 9
y     s           5
z     y           7
Knapsack Problem
• Problem definition (fractional)
– Given n objects and a knapsack, where object i has a weight wi and the knapsack has a capacity m
– If a fraction xi of object i is placed into the knapsack, a profit pi·xi is earned
– The objective is to obtain a filling of the knapsack maximizing the total profit
• Problem formulation

maximize Σ_{1 ≤ i ≤ n} pi·xi              (4.1)

subject to Σ_{1 ≤ i ≤ n} wi·xi ≤ m        (4.2)

and 0 ≤ xi ≤ 1, 1 ≤ i ≤ n                 (4.3)

• A feasible solution is any set (x1, …, xn) satisfying (4.2) and (4.3)
• An optimal solution is a feasible solution for which (4.1) is maximized
Knapsack Problem

– n=3, m=20, (p1, p2, p3) = (25, 24, 15), (w1, w2, w3) = (18, 15, 10)

     (x1, x2, x3)       Σ wi·xi            Σ pi·xi
(1)  (1/2, 1/3, 1/4)    9+5+2.5 = 16.5     12.5+8+3.75 = 24.25
(2)  (1, 2/15, 0)       18+2+0 = 20        25+3.2+0 = 28.2
(3)  (0, 2/3, 1)        0+10+10 = 20       0+16+15 = 31
(4)  (0, 1, 1/2)        0+15+5 = 20        0+24+7.5 = 31.5

• Of the feasible solutions (1)–(4), solution (4) is the optimal one.

• The knapsack problem fits the subset paradigm
There are basically three approaches to solve the problem:
1. The first approach is to select the items based on maximum profit.
2. The second approach is to select the items based on minimum weight.
3. The third approach is to select by the ratio profit/weight.
• EXAMPLE
Objects: 1 2 3 4 5; W (capacity of the knapsack) = 14; n (number of items) = 5
Profit (P): 10 15 7 8 9
Weight (w): 3 5 2 2 4

1. Select the items in order of maximum profit:
Object  Profit  Weight  Remaining capacity
2       15      5       14-5 = 9
1       10      3       9-3 = 6
5       9       4       6-4 = 2
4       8       2       2-2 = 0
3       7       2       not selected
Total profit: 15+10+9+8 = 42
Sol: (1, 1, 0, 1, 1)
2. Select the items in order of minimum weight:
Object  Profit  Weight  Remaining capacity
4       8       2       14-2 = 12
3       7       2       12-2 = 10
1       10      3       10-3 = 7
5       9       4       7-4 = 3
2       15      5       3-5·(3/5) = 0 (fraction 3/5)
Total profit: 8+7+10+9+15·(3/5) = 43
Sol: (1, 3/5, 1, 1, 1)
3. Select the items by P/W ratio in decreasing order:
Object  Profit  Weight  P/W ratio
1       10      3       3.33
2       15      5       3
3       7       2       3.5
4       8       2       4
5       9       4       2.25

Selection:
Object  Profit  Weight  Remaining capacity
4       8       2       14-2 = 12
3       7       2       12-2 = 10
1       10      3       10-3 = 7
2       15      5       7-5 = 2
5       9       4       2-4·(1/2) = 0 (fraction 1/2)
Total profit: 8+7+10+15+9·(1/2) = 44.5 (optimal)
Sol: (1, 1, 1, 1, 1/2)
Knapsack Problem

Observations about the greedy method

• Generally w1+w2+…+wn > m; but if Σ wi ≤ m, then xi = 1 for 1 ≤ i ≤ n is an optimal solution (each whole wi is taken, not a fraction).
• If Σ wi > m, then not all xi can be 1; at least one xi satisfies 0 ≤ xi < 1.
• Every optimal solution fills the knapsack exactly, since a fractional amount can always be added until the total weight equals m.
• The greedy strategy that succeeds uses the ratio of profit to weight (pi/wi) as the optimization function.
– It yields solution (4) of Example 4.1, which is optimal.
Knapsack Problem
First method:
• Consider the objects in non-increasing order of profits.
• Example: (p1, p2, p3) = (25, 24, 15); (w1, w2, w3) = (18, 15, 10); m = 20.
• The profit is maximum for the first item, so x1 = 1; remaining capacity 20-18 = 2.
• x2 = 2/15, because 2/15 of w2 = 2.
• The solution is (x1, x2, x3) = (1, 2/15, 0) and the profit is 25 + 24 × 2/15 = 25 + 48/15 = 25 + 3.2 = 28.2.
• This is solution (2), which is feasible but not optimal.
Knapsack Problem
Second method:
• Use the capacity as slowly as possible, i.e. consider the objects in non-decreasing order of weights.
• Example: (p1, p2, p3) = (25, 24, 15); (w1, w2, w3) = (18, 15, 10); m = 20.
• First take the third item: x3 = 1; remaining capacity is 20 – 10 = 10.
• Now x2 = 10/15 = 2/3 and x1 = 0.
• (0, 2/3, 1) is the solution, with profit
• Σ pi·xi = 0 + 24×(2/3) + 15×1 = 16 + 15 = 31,
which is feasible but not optimal.
Knapsack Problem
Third method:
• Achieve a balance between the rate at which the profit increases and the rate at which the capacity is used.
• This is obtained by including next the object with the maximum profit per unit of capacity.
• Objects are included in non-increasing order of the ratio pi/wi.
• Example: (p1, p2, p3) = (25, 24, 15); (w1, w2, w3) = (18, 15, 10)
• (p1/w1, p2/w2, p3/w3) = (25/18, 24/15, 15/10) = (1.38, 1.6, 1.5)
• With m = 20:
• x2 = 1, remaining capacity m = 20-15 = 5
• x3 = (remaining capacity)/w3 = 5/10 = 1/2 and x1 = 0
• (0, 1, 1/2) is the solution, with profit Σ pi·xi = 31.5, which is also optimal
Knapsack Problem
– Time complexity
• Sorting: O(n log n) using fast sorting algorithm like merge sort
• Greedy-Fractional-Knapsack: O(n)
• So, total time is O(n log n)

Item Weight Price


1 5 30
2 10 40
3 15 45
4 22 77
5 25 90
Calculate Price/Weight
Item Weight Price Price/Weight
1 5 30 6
2 10 40 4
3 15 45 3
4 22 77 3.5
5 25 90 3.6

After sorting
Item Weight Price Price/Weight
1 5 30 6
2 10 40 4
3 25 90 3.6
4 22 77 3.5
5 15 45 3
Now apply Greedy-Fractional-Knapsack with W = 60:

Item  Weight  Price  Price/Weight  xi     Weight taken  Remaining capacity
1     5       30     6             1      5             60-5 = 55
2     10      40     4             1      10            55-10 = 45
3     25      90     3.6           1      25            45-25 = 20
4     22      77     3.5           20/22  20            20-20 = 0
5     15      45     3             0      --            --
Total weight: 60

Total profit = 30×1 + 40×1 + 90×1 + 77×(20/22) = 230


Knapsack Problem
Algorithm: Greedy-Fractional-Knapsack (p[1..n], w[1..n], W)
1. for i = 1 to n do
2.     ratio[i] = p[i]/w[i]
3. sort items by ratio[ ] in non-increasing order using an efficient sorting method
   // from here on, assume p[1..n] and w[1..n] are in order of pi/wi
4. for i = 1 to n do
5.     x[i] = 0
6. weight = 0, profit = 0
7. for i = 1 to n do
8.     if weight + w[i] ≤ W then
9.         x[i] = 1
10.        weight = weight + w[i]
11.        profit = profit + p[i]
12.    else
13.        x[i] = (W - weight) / w[i]
14.        profit = profit + p[i] * x[i]
15.        weight = W
16.    if (weight = W) then
17.        break
18. return x and profit
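A minimal Python rendering of the algorithm above, sorting by the profit/weight ratio via a sort key rather than a separate ratio array:

```python
def fractional_knapsack(profits, weights, capacity):
    """Greedy fractional knapsack: take items in non-increasing p/w order.
    Returns (x, total_profit), where x[i] is the fraction of item i taken."""
    n = len(profits)
    # indices sorted by profit/weight ratio, largest first
    order = sorted(range(n), key=lambda i: profits[i] / weights[i],
                   reverse=True)
    x = [0.0] * n
    weight = profit = 0.0
    for i in order:
        if weight + weights[i] <= capacity:
            x[i] = 1.0                                 # whole item fits
            weight += weights[i]
            profit += profits[i]
        else:
            x[i] = (capacity - weight) / weights[i]    # take a fraction
            profit += profits[i] * x[i]
            weight = capacity
            break                                      # knapsack is full
    return x, profit

# Example 4.1: n=3, m=20, p=(25,24,15), w=(18,15,10)
x, profit = fractional_knapsack([25, 24, 15], [18, 15, 10], 20)
print(x, profit)  # [0.0, 1.0, 0.5] 31.5
```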
Knapsack Problem

– Time complexity
• Calculating the P/W ratios: O(n)
• Sorting: O(n log n) using a fast sorting algorithm such as merge sort
• Greedy-Fractional-Knapsack selection loop: O(n)
• So the total time is O(n) + O(n log n) + O(n), i.e.
T(n) = O(n log n)
Optimal Merge Patterns
• Problem
– Given n sorted files, find an optimal way (i.e., requiring the fewest
comparisons or record moves) to pairwise merge them into one
sorted file
– It fits ordering paradigm
• Example 4.9
– Three sorted files (x1,x2,x3) with lengths (30, 20, 10)
– Solution 1: merging x1 and x2 (50 record moves), merging the result
with x3 (60 moves)  total 110 moves
– Solution 2: merging x2 and x3 (30 moves), merging the result with x1
(60 moves)  total 90 moves
– The solution 2 is better
Optimal Merge Patterns
• A greedy method (for 2-way merge problem)
– At each step, merge the two smallest files
– e.g., five files with lengths (20,30,10,5,30) (Figure 4.11)
[Figure 4.11: binary merge tree]
  z1 = x4 + x3 = 5 + 10 = 15
  z2 = z1 + x1 = 15 + 20 = 35
  z3 = x5 + x2 = 30 + 30 = 60
  z4 = z2 + z3 = 35 + 60 = 95
  Total record moves = 15 + 35 + 60 + 95 = 205
– Total number of record moves = weighted external path length
  Σ_{i=1}^{n} d_i · q_i
  where q_i is the length of file x_i and d_i is its distance from the root in the merge tree
– The optimal 2-way merge pattern = a binary merge tree with minimum weighted external path length
Optimal Merge Patterns

• Solve with Input: n = 6, size = {2, 3, 4, 5, 6, 7}


• Time Complexity
– If list is kept in non-decreasing order: O(n2)
– If list is represented in a min-heap: O(n log n)
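The merge-the-two-smallest rule maps directly onto a min-heap; a sketch in Python:

```python
import heapq

def optimal_merge_cost(sizes):
    """Total record moves for an optimal 2-way merge pattern:
    repeatedly merge the two smallest files (Huffman-style greedy)."""
    heap = list(sizes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)      # the two smallest files
        b = heapq.heappop(heap)
        total += a + b               # cost of merging them
        heapq.heappush(heap, a + b)  # merged file goes back in
    return total

print(optimal_merge_cost([20, 30, 10, 5, 30]))  # 205 (Figure 4.11)
print(optimal_merge_cost([2, 3, 4, 5, 6, 7]))   # 68 (the exercise above)
```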
