
MODULE – 3

CHAPTER – 9: GREEDY METHOD


Algorithm Greedy(a, n)
// Apply the greedy technique to find a feasible solution for the given problem
// Input: a[1, …, n] contains the n inputs.
// Output: a feasible solution.
solution ← Ø
for i ← 1 to n do
    x ← Select(a)
    if Feasible(solution, x) then
        solution ← Union(solution, x)
return solution
On each step—and this is the central point of this technique—the choice made must be:

 feasible, i.e., it has to satisfy the problem’s constraints locally.
 optimal, i.e., it has to be the best local choice among all feasible choices available on that step.
 irrevocable, i.e., once made, it cannot be changed on subsequent steps of the algorithm.

Coin change Problem:


Consider the change-making problem faced, at least subconsciously, by millions of cashiers all over the world: give change for a specific amount n with the least number of coins of the denominations d1 > d2 > ... > dm used in that locale.

For example:

The widely used coin denominations in the United States are d1 = 25 (quarter), d2 = 10 (dime), d3 = 5 (nickel), and d4 = 1 (penny). How would you give change with coins of these denominations of, say, 48 cents?

Ans: If you came up with the answer 1 quarter, 2 dimes, and 3 pennies, you followed—consciously or not—a logical strategy of making a sequence of best choices among the currently available alternatives.

 Indeed, in the first step, you could have given one coin of any of the four denominations. “Greedy” thinking leads to giving one quarter because it reduces the remaining amount the most, namely, to 23 cents.
 In the second step, you had the same coins at your disposal, but you could not give a quarter, because it would have violated the problem’s constraints.
 So, your best selection in this step was one dime, reducing the remaining amount to 13 cents. Giving one more dime left you with 3 cents to be given with three pennies.

The approach applied in the opening paragraph to the change-making problem is called greedy. The greedy approach suggests constructing a solution through a sequence of steps, each expanding a partially constructed solution obtained so far, until a complete solution to the problem is reached.
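As a quick illustration of the greedy template above, the following is a minimal Python sketch of the cashier's strategy. The function name greedy_change and the list-based output are our own illustrative choices, not part of the text; it assumes the denominations are given in decreasing order, as in the problem statement, and the strategy is optimal for the U.S. system but not for every possible set of denominations.

def greedy_change(amount, denominations):
    """Greedy change-making: repeatedly take the largest coin that still fits.

    denominations must be listed in decreasing order (d1 > d2 > ... > dm).
    """
    coins = []
    for d in denominations:
        count, amount = divmod(amount, d)   # how many coins of this denomination fit
        coins.extend([d] * count)
    return coins

# Example: 48 cents with U.S. coins -> [25, 10, 10, 1, 1, 1]
print(greedy_change(48, [25, 10, 5, 1]))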
9.1 Prim’s Algorithm
DEFINITION: A spanning tree of an undirected connected graph is its connected acyclic subgraph (i.e., a tree) that contains all the vertices of the graph. If such a graph has weights assigned to its edges:

 A minimum spanning tree is its spanning tree of the smallest weight, where the weight of a
tree is defined as the sum of the weights on all its edges.
 The minimum spanning tree problem is the problem of finding a minimum spanning tree
for a given weighted connected graph.

If we were to try constructing a minimum spanning tree by exhaustive search, we would face two serious obstacles.

 First, the number of spanning trees grows exponentially with the graph size (at least for dense graphs).
 Second, generating all spanning trees for a given graph is not easy; in fact, it is more difficult than finding a minimum spanning tree for a weighted graph by using one of several efficient algorithms available for this problem.
 In this section, we outline Prim’s algorithm, which goes back to at least 1957.

Prim’s algorithm constructs a minimum spanning tree through a sequence of expanding subtrees.

 The initial subtree in such a sequence consists of a single vertex selected arbitrarily from the set V of the graph’s vertices.
 On each iteration, the algorithm expands the current tree in the greedy manner by simply attaching to it the nearest vertex not in that tree.
 The algorithm stops after all the graph’s vertices have been included in the tree being constructed.
 Since the algorithm expands a tree by exactly one vertex on each of its iterations, the total number of such iterations is n − 1, where n is the number of vertices in the graph.
 The tree generated by the algorithm is obtained as the set of edges used for the tree expansions.

Pseudocode of Prim’s Algorithm:

ALGORITHM Prim(G)
//Prim’s algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = ⟨V, E⟩
//Output: ET, the set of edges composing a minimum spanning tree of G.

VT ← {v0} //the set of tree vertices can be initialized with any vertex
ET ← ∅
for i ← 1 to |V| − 1 do
    find a minimum-weight edge e∗ = (v∗, u∗) among all the edges (v, u)
    such that v is in VT and u is in V − VT
    VT ← VT ∪ {u∗}
    ET ← ET ∪ {e∗}
return ET
Explanation of Algorithm:

The nature of Prim’s algorithm makes it necessary to provide each vertex not in the current tree with the information about the shortest edge connecting the vertex to a tree vertex. We can provide such information by attaching two labels to a vertex:
 The name of the nearest tree vertex and the length (the weight) of the corresponding edge. Vertices that are not adjacent to any of the tree vertices can be given the ∞ label indicating their “infinite” distance to the tree vertices and a null label for the name of the nearest tree vertex.
 With such labels, finding the next vertex to be added to the current tree T = ⟨VT, ET⟩ becomes a simple task of finding a vertex with the smallest distance label in the set V − VT. Ties can be broken arbitrarily.
After we have identified a vertex u∗ to be added to the tree, we need to perform two operations:
 Move u∗ from the set V − VT to the set of tree vertices VT.
 For each remaining vertex u in V − VT that is connected to u∗ by a shorter edge than the u’s current distance label, update its labels by u∗ and the weight of the edge between u∗ and u, respectively.
Figure 9.3 demonstrates the application of Prim’s algorithm to a specific graph. Does Prim’s algorithm always yield a minimum spanning tree?
 The answer to this question is yes. Let us prove by induction that each of the subtrees Ti, i = 0, ..., n − 1, generated by Prim’s algorithm is a part (i.e., a subgraph) of some minimum spanning tree. (This immediately implies, of course, that the last tree in the sequence, Tn−1, is a minimum spanning tree itself because it contains all n vertices of the graph.)
 The basis of the induction is trivial, since T0 consists of a single vertex and hence must be a part of any minimum spanning tree. For the inductive step, let us assume that Ti−1 is part of some minimum spanning tree T.
 We need to prove that Ti, generated from Ti−1 by Prim’s algorithm, is also a part of a minimum spanning tree. We prove this by contradiction by assuming that no minimum spanning tree of the graph can contain Ti. Let ei = (v, u) be the minimum weight edge from a vertex in Ti−1 to a vertex not in Ti−1 used by Prim’s algorithm to expand Ti−1 to Ti.
 By our assumption, ei cannot belong to any minimum spanning tree, including T. Therefore, if we add ei to T, a cycle must be formed (Figure 9.4).
In addition to edge ei = (v, u), this cycle must contain another edge (v’, u’) connecting a vertex v’ ∈ Ti−1 to a vertex u’ that is not in Ti−1. (It is possible that v’ coincides with v or u’ coincides with u, but not both.)
 If we now delete the edge (v’, u’) from this cycle, we will obtain another spanning tree of the entire graph whose weight is less than or equal to the weight of T, since the weight of ei is less than or equal to the weight of (v’, u’).

Hence, this spanning tree is a minimum spanning tree, which contradicts the assumption that no minimum spanning tree contains Ti. This completes the correctness proof of Prim’s algorithm.

How efficient is Prim’s algorithm? The answer depends on the data structures chosen for the graph itself and for the priority queue of the set V − VT whose vertex priorities are the distances to the nearest tree vertices. In particular, if a graph is represented by its weight matrix and the priority queue is implemented as an unordered array, the algorithm’s running time will be in Θ(|V|²). Indeed, on each of the |V| − 1 iterations, the array implementing the priority queue is traversed to find and delete the minimum and then to update, if necessary, the priorities of the remaining vertices.
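The following is a minimal Python sketch of Prim's algorithm; the function name prim_mst and the adjacency-dictionary representation are our own assumptions, not part of the text. It uses a binary heap as the priority queue instead of the unordered array discussed above, so its running time is O(|E| log |V|) rather than Θ(|V|²).

import heapq

def prim_mst(graph, start):
    """Prim's algorithm sketch: graph is {vertex: {neighbor: weight, ...}, ...}.

    Returns the list of tree edges ET as (tree_vertex, new_vertex, weight) triples.
    """
    visited = {start}
    tree_edges = []
    # fringe entries: (weight, tree_vertex, fringe_vertex)
    fringe = [(w, start, u) for u, w in graph[start].items()]
    heapq.heapify(fringe)
    while fringe and len(visited) < len(graph):
        w, v, u = heapq.heappop(fringe)       # minimum-weight edge to the fringe
        if u in visited:
            continue                          # stale entry; u was attached earlier
        visited.add(u)
        tree_edges.append((v, u, w))
        for x, wx in graph[u].items():
            if x not in visited:
                heapq.heappush(fringe, (wx, u, x))
    return tree_edges

# Example (hypothetical graph, not the one in Figure 9.3):
g = {'a': {'b': 3, 'c': 1}, 'b': {'a': 3, 'c': 2}, 'c': {'a': 1, 'b': 2}}
print(prim_mst(g, 'a'))   # [('a', 'c', 1), ('c', 'b', 2)]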

Example:

FIGURE 9.3: Application of Prim’s algorithm. The parenthesized labels of a vertex in the middle column indicate the nearest tree vertex and edge weight; selected vertices and edges are shown in bold.
Kruskal’s Algorithm:

Kruskal’s algorithm is named after Joseph Kruskal, who discovered it when he was a second-year graduate student [Kru56]. “Kruskal’s algorithm looks at a minimum spanning tree of a weighted connected graph G = ⟨V, E⟩ as an acyclic subgraph with |V| − 1 edges for which the sum of the edge weights is the smallest.”

Consequently, the algorithm constructs a minimum spanning tree as an expanding sequence of subgraphs that are always acyclic but are not necessarily connected on the intermediate stages of the algorithm. The algorithm begins by sorting the graph’s edges in nondecreasing order of their weights. Then, starting with the empty subgraph, it scans this sorted list, adding the next edge on the list to the current subgraph if such an inclusion does not create a cycle and simply skipping the edge otherwise.

ALGORITHM Kruskal(G)
//Kruskal’s algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = ⟨V, E⟩
//Output: ET, the set of edges composing a minimum spanning tree of G
sort E in nondecreasing order of the edge weights w(ei1) ≤ ... ≤ w(ei|E|)
ET ← ∅; ecounter ← 0 //initialize the set of tree edges and its size
k ← 0 //initialize the number of processed edges
while ecounter < |V| − 1 do
    k ← k + 1
    if ET ∪ {eik} is acyclic
        ET ← ET ∪ {eik}; ecounter ← ecounter + 1
return ET
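Below is a minimal Python sketch of Kruskal's algorithm; the function name kruskal_mst, the edge-list representation, and the union-find helper are our own choices, not from the text. A union-find (disjoint-set) structure is one common way to implement the acyclicity test that appears in the pseudocode.

def kruskal_mst(num_vertices, edges):
    """Kruskal's algorithm sketch: edges is a list of (weight, u, v) tuples
    with vertices numbered 0 .. num_vertices-1."""
    parent = list(range(num_vertices))

    def find(x):                      # find the root of x's component
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree_edges = []
    for w, u, v in sorted(edges):     # nondecreasing order of edge weights
        ru, rv = find(u), find(v)
        if ru != rv:                  # different components: no cycle is formed
            parent[ru] = rv           # union the two components
            tree_edges.append((u, v, w))
            if len(tree_edges) == num_vertices - 1:
                break
    return tree_edges

# Example (hypothetical graph): 4 vertices, weighted edges (weight, u, v)
es = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)]
print(kruskal_mst(4, es))   # [(0, 1, 1), (2, 3, 2), (1, 2, 3)]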
Example:
Dijkstra’s Algorithm:

This algorithm is applicable to undirected and directed graphs with nonnegative weights only. Since in most applications this condition is satisfied, the limitation has not impaired the popularity of Dijkstra’s algorithm.

In this section, we consider the single-source shortest-paths problem: for a given vertex called the source in a weighted connected graph, find shortest paths to all its other vertices.

Dijkstra’s algorithm finds the shortest paths to a graph’s vertices in order of their distance from a given source. First, it finds the shortest path from the source to a vertex nearest to it, then to a second nearest, and so on. In general, before its ith iteration commences, the algorithm has already identified the shortest paths to i − 1 other vertices nearest to the source.

These vertices, the source, and the edges of the shortest paths leading to them from the source form a subtree Ti of the given graph (Figure 9.10). Since all the edge weights are nonnegative, the next vertex nearest to the source can be found among the vertices adjacent to the vertices of Ti. The set of vertices adjacent to the vertices in Ti can be referred to as “fringe vertices”; they are the candidates from which Dijkstra’s algorithm selects the next vertex nearest to the source.

To identify the ith nearest vertex, the algorithm computes, for every fringe vertex u, the sum of the distance to the nearest tree vertex v (given by the weight of the edge (v, u)) and the length dv of the shortest path from the source to v (previously determined by the algorithm) and then selects the vertex with the smallest such sum. The fact that it suffices to compare the lengths of such special paths is the central insight of Dijkstra’s algorithm.

To facilitate the algorithm’s operations, we label each vertex with two labels. The numeric label d indicates the length of the shortest path from the source to this vertex found by the algorithm so far; when a vertex is added to the tree, d indicates the length of the shortest path from the source to that vertex. The other label indicates the name of the next-to-last vertex on such a path, i.e., the parent of the vertex in the tree being constructed. (It can be left unspecified for the source s and vertices that are adjacent to none of the current tree vertices.) With such labelling, finding the next nearest vertex u∗ becomes a simple task of finding a fringe vertex with the smallest d value. Ties can be broken arbitrarily.
After we have identified a vertex u∗ to be added to the tree, we need to perform two operations:

 Move u∗ from the fringe to the set of tree vertices.
 For each remaining fringe vertex u that is connected to u∗ by an edge of weight w(u∗, u) such that du∗ + w(u∗, u) < du, update the labels of u by u∗ and du∗ + w(u∗, u), respectively.
ALGORITHM Dijkstra(G, s)
//Dijkstra’s algorithm for single-source shortest paths
//Input: A weighted connected graph G = ⟨V, E⟩ with nonnegative weights and its vertex s
//Output: The length dv of a shortest path from s to v and its penultimate vertex pv for every vertex v in V

Initialize(Q) //initialize priority queue to empty
for every vertex v in V
    dv ← ∞; pv ← null
    Insert(Q, v, dv) //initialize vertex priority in the priority queue
ds ← 0; Decrease(Q, s, ds) //update priority of s with ds
VT ← ∅
for i ← 0 to |V| − 1 do
    u∗ ← DeleteMin(Q) //delete the minimum priority element
    VT ← VT ∪ {u∗}
    for every vertex u in V − VT that is adjacent to u∗ do
        if du∗ + w(u∗, u) < du
            du ← du∗ + w(u∗, u); pu ← u∗
            Decrease(Q, u, du)
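Here is a minimal Python sketch of Dijkstra's algorithm; the function name dijkstra and the adjacency-dictionary representation are our own assumptions. Since Python's heapq module has no Decrease operation, the sketch pushes updated entries onto the heap and simply skips stale ones.

import heapq

def dijkstra(graph, source):
    """Dijkstra sketch: graph is {vertex: {neighbor: weight, ...}, ...} with
    nonnegative weights. Returns (d, p): d[v] is the shortest-path length
    from source to v and p[v] is the penultimate vertex on that path."""
    d = {v: float('inf') for v in graph}
    p = {v: None for v in graph}
    d[source] = 0
    heap = [(0, source)]
    visited = set()
    while heap:
        dist_u, u = heapq.heappop(heap)       # vertex with the smallest d value
        if u in visited:
            continue                          # stale entry
        visited.add(u)
        for v, w in graph[u].items():
            if dist_u + w < d[v]:             # relax the edge (u, v)
                d[v] = dist_u + w
                p[v] = u
                heapq.heappush(heap, (d[v], v))
    return d, p

# Example (hypothetical graph):
g = {'a': {'b': 3, 'd': 7}, 'b': {'a': 3, 'c': 4, 'd': 2},
     'c': {'b': 4, 'd': 5}, 'd': {'a': 7, 'b': 2, 'c': 5}}
print(dijkstra(g, 'a')[0])   # {'a': 0, 'b': 3, 'c': 7, 'd': 5}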

Example:
Huffman Tree And Code

Codeword: to encode a text that comprises symbols from some n-symbol alphabet, we assign to each of the text’s symbols some sequence of bits called the codeword.

Variable-length encoding, which assigns codewords of different lengths to different symbols, introduces a problem that fixed-length encoding does not have. Namely, how can we tell how many bits of an encoded text represent the first (or, more generally, the ith) symbol? To avoid this complication, we can limit ourselves to the so-called prefix-free (or simply prefix) codes.

Huffman’s algorithm
Step 1: Initialize n “one-node” trees and label them with the symbols of the alphabet given. Record the frequency of each symbol in its tree’s root to indicate the tree’s weight. (More generally, the weight of a tree will be equal to the sum of the frequencies in the tree’s leaves.)
Step 2: Repeat the following operation until a single tree is obtained. Find two trees with the smallest weight (ties can be broken arbitrarily, but see Problem 2 in this section’s exercises). Make them the left and right subtree of a new tree and record the sum of their weights in the root of the new tree as its weight.

NOTE: A tree constructed by the above algorithm is called a Huffman tree

EXAMPLE: Consider the five-symbol alphabet {A, B, C, D, _} with the following occurrence
frequencies in a text made up of these symbols:

Symbol: A B C D _
Frequency: 0.35 0.1 0.2 0.2 0.15

Solution:
NOTE:
 Steps used to solve the Huffman tree and code:
 Step 1: arrange the frequency values in ascending order.
 Step 2: add the two smallest values.
 Step 3: arrange the resulting values in ascending order and repeat Step 2; continue this process until only a single value (the root) remains.
 Step 4: create a tree for each pair combined, from the beginning, as shown in the diagram below.
 Step 5: connect all the leaves to the root of the tree, where the left side (L) holds the smaller value and the right side (R) holds the larger value (L < R < Root).
 Step 6: To find the codeword (as shown in the last tree structure):
o We assign 0 to every left-side branch and 1 to every right-side branch and read the values from top to bottom.
Symbol: A B C D _
Frequency: 0.35 0.1 0.2 0.2 0.15
Codeword: 11 100 00 01 101

Note: to find the binary code representing a given word from the Huffman tree, we concatenate the codewords of its symbols.
Example: Here “DAD” is encoded as 011101.
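The following is a minimal Python sketch of Huffman's algorithm applied to the five-symbol example above; the function name huffman_codes and the tuple-based tree representation are our own choices. With the tie-breaking used here it reproduces the codewords listed above, but in general tied weights may lead to different, equally optimal codes.

import heapq
from itertools import count

def huffman_codes(frequencies):
    """Huffman's algorithm sketch: frequencies is {symbol: weight, ...}.

    Repeatedly merges the two lightest trees; the left branch gets bit 0
    and the right branch gets bit 1."""
    tiebreak = count()
    # heap entries: (weight, tiebreaker, tree); a tree is a symbol or a (left, right) pair
    heap = [(w, next(tiebreak), s) for s, w in frequencies.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)      # two smallest-weight trees
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next(tiebreak), (t1, t2)))
    _, _, root = heap[0]

    codes = {}
    def assign(tree, prefix):
        if isinstance(tree, tuple):          # internal node: recurse into children
            assign(tree[0], prefix + '0')
            assign(tree[1], prefix + '1')
        else:
            codes[tree] = prefix or '0'      # single-symbol alphabet edge case
    assign(root, '')
    return codes

freqs = {'A': 0.35, 'B': 0.1, 'C': 0.2, 'D': 0.2, '_': 0.15}
codes = huffman_codes(freqs)
print(codes)                                  # A=11, B=100, C=00, D=01, _=101
print(''.join(codes[s] for s in 'DAD'))       # 011101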
Job Sequencing with Deadline:

We are given a set of n jobs. Associated with job i is an integer deadline di > 0 and a profit
pi > 0.
 For any job i the profit pi is earned if the job is completed by its deadline.
 To complete a job, one has to process the job on a machine for one unit of time. Only
one machine is available for processing jobs.
 A feasible solution for this problem is a subset J of jobs such that each job in this
subset can be completed by its deadline.
 The value of a feasible solution J is the sum of the profits of the jobs in J, or ∑i∈J pi.
 An optimal solution is a feasible solution with maximum value. Here again, since the
problem involves the identification of a subset, it fits the subset paradigm.

Algorithm JobSequence(d, J, n)
// Implement the job sequencing algorithm to find the optimal solution.
/* Input: d[i] >= 1, where 1 <= i <= n, are the deadlines, n >= 1. The jobs are ordered such that
p[1] >= p[2] >= … >= p[n]. */
/* Output: J[i] is the ith job in the optimal solution, 1 <= i <= k; also, at termination,
d[J[i]] <= d[J[i+1]], 1 <= i < k. */

d[0] ← J[0] ← 0 // initialize
J[1] ← 1; // include job 1
k ← 1;
for i ← 2 to n do
    // Consider jobs in nonincreasing order of p[i]. Find
    // position for i and check feasibility of insertion.
    r ← k;
    while ((d[J[r]] > d[i]) and (d[J[r]] ≠ r)) do r ← r − 1
    if ((d[J[r]] <= d[i]) and (d[i] > r)) then
        // Insert i into J[].
        for q ← k to (r + 1) step −1 do J[q + 1] ← J[q];
        J[r + 1] ← i; k ← k + 1;
return k;
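Below is a minimal Python sketch of the greedy job-sequencing strategy; the function name job_sequence and the slot array are our own choices, not from the text. It places each job, in nonincreasing order of profit, into the latest free unit-time slot on or before its deadline, which is an equivalent way of performing the feasibility check in the pseudocode above.

def job_sequence(jobs):
    """Greedy job sequencing sketch: jobs is a list of (name, deadline, profit).

    Returns (schedule in time order, total profit)."""
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)   # by profit, descending
    max_deadline = max(d for _, d, _ in jobs)
    slots = [None] * (max_deadline + 1)                     # slots[1..max_deadline]
    total = 0
    for name, deadline, profit in jobs:
        for t in range(min(deadline, max_deadline), 0, -1): # latest free slot first
            if slots[t] is None:
                slots[t] = name
                total += profit
                break                                        # job scheduled
    schedule = [name for name in slots[1:] if name is not None]
    return schedule, total

# Applied to the example below (J1..J5), this prints (['J3', 'J2', 'J4', 'J5'], 280).
jobs = [('J1', 2, 20), ('J2', 2, 60), ('J3', 1, 40), ('J4', 3, 100), ('J5', 4, 80)]
print(job_sequence(jobs))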
Example:
Consider the following tasks with their deadlines and profits. Schedule the tasks in such a way that they produce the maximum profit after being executed:

S. No. 1 2 3 4 5

Jobs J1 J2 J3 J4 J5

Deadlines 2 2 1 3 4

Profits 20 60 40 100 80

Step 1
Find the maximum deadline value, dm, from the deadlines given.
dm = 4.
Step 2
Arrange the jobs in descending order of their profits.

S. No. 1 2 3 4 5

Jobs J4 J5 J2 J3 J1

Deadlines 3 4 2 1 2

Profits 100 80 60 40 20

The maximum deadline, dm, is 4. Therefore, four unit-time slots (1, 2, 3 and 4) are available, and every scheduled job must be completed by its deadline.
Choose the job with the highest profit, J4 (deadline 3). It is placed in the latest free slot on or before its deadline, slot 3.
Total Profit = 100.
Step 3
The next job with the highest profit is J5 (deadline 4). Slot 4 is free, so J5 is scheduled there.
Total Profit = 100 + 80 = 180.
Step 4
The next job with the highest profit is J2 (deadline 2). Slot 2 is free, so J2 is scheduled there.
Total Profit = 180 + 60 = 240.
Step 5
The next job with the highest profit is J3 (deadline 1). Slot 1 is free, so J3 is scheduled there.
Total Profit = 240 + 40 = 280.
Step 6
The remaining job, J1 (deadline 2), cannot be scheduled because slots 1 and 2 are already occupied, so it is rejected. All slots are now filled, and the algorithm comes to an end. The jobs scheduled within their deadlines, in time order, are {J3, J2, J4, J5} with the maximum profit of 280.

***************************************************************************

IMPORTANT QUESTIONS

1. Explain Greedy Method and write the algorithm.


2. Explain the Job Sequencing algorithm and find the optimal solution for the following list of jobs and their deadlines.
Problem Statement: Jobs J = {j1, …., j4}, deadlines D = {d1, …., d4}, profits P = {100, 20, 50, 85} and D = {2, 1, 4, 3} respectively.
3. Explain Prim’s algorithm and, using Prim’s principles, find the minimum spanning tree of the following graph.

4. Explain Kruskal’s algorithm and, using Kruskal’s principles, find the minimum spanning tree of the following graph.
5. Explain Dijkstra’s algorithm and, using Dijkstra’s principles, find the shortest paths from the given source vertex in the following graph.

6. Explain the Coin Change problem with example
