
UNIT – III

Dynamic Programming: General Method, Multi Stage Graph, All Pairs Shortest Paths, Single
Source Shortest Paths-general Weights, Optimal Binary Search Trees, String Editing, 0/1
Knapsack, Traveling Salesman Problem.
Back Tracking: General Method, 8-queen problem, Sum of Subsets, Graph Coloring, Hamiltonian
Cycles.

Dynamic Programming
Dynamic programming is a technique that breaks a problem into sub-problems and saves their
results so that they need not be computed again. The property that optimal solutions to the
subproblems combine to give an optimal overall solution is known as the optimal substructure
property. The main use of dynamic programming is to solve optimization problems, that is,
problems in which we try to find the minimum or the maximum solution. Dynamic programming
guarantees finding the optimal solution of a problem if a solution exists.
The definition of dynamic programming says that it is a technique for solving a complex problem
by first breaking it into a collection of simpler subproblems, solving each subproblem just once,
and then storing their solutions to avoid repetitive computations.
The steps in a dynamic programming solution are:
• Break the complex problem down into simpler subproblems.
• Find the optimal solution to these sub-problems.
• Store the results of the subproblems; this process is known as memoization.
• Reuse the stored results so that the same sub-problem is not calculated more than once.
• Finally, compute the result of the complex problem.
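The memoization step above can be sketched with a short cached recursion; Fibonacci is used here purely as an illustration (it is not an example from these notes):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each subproblem fib(k) is solved once and its result is cached,
    # so later calls reuse the stored value instead of recomputing it.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

Without the cache this recursion takes exponential time; with it, each of the n subproblems is evaluated once.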

General Method:
The brute-force approach is to enumerate all decision sequences and then pick out the best; its
time and space requirements may be prohibitive.
In dynamic programming an optimal sequence of decisions is obtained by using the principle of
optimality.
In the greedy method only one decision sequence is ever generated. In dynamic programming,
many decision sequences may be generated. However, sequences containing suboptimal
subsequences cannot be optimal and so will not be generated.

Multi Stage Graph:


↪ A multistage graph G = (V, E) is a directed graph in which the vertices are partitioned into
k≥2 disjoint sets Vi, 1≤i ≤ k
↪ In addition, if <u, v> is an edge in E then u ∈ Vi and v ∈ Vi+1 for some i, 1 ≤ i < k
↪ The sets V1 and Vk are such that |V1| = |Vk| = 1
↪ The multistage graph problem is to find a minimum cost path from s in V1 to t in Vk
↪ Each set Vi defines a stage in the graph
Greedy VS Dynamic Programming:

The Greedy method: S →A→ D→ T = 1+4+18 = 23.


The Dynamic Programming method: S→ C→ F→ T = 5+2+2 = 9
Dynamic Programming approach to solve above multi stage graph:
• d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)}
• d(A, T) = min{4+d(D, T), 11+d(E, T)} = min{4+18, 11+13} = 22.
• d(B, T) = min{9+d(D, T), 5+d(E, T), 16+d(F, T)} = min{9+18, 5+13, 16+2} = 18.
• d(C, T) = min{ 2+d(F, T) } = 2+2 = 4
Therefore, d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)} = min{1+22, 2+18, 5+4} = 9.

Dynamic Programming Formulation for k stage Graph


• Every s to t path is the result of a sequence of k-2 decisions. The ith decision involves
determining which vertex in Vi+1, 1 ≤ i ≤ k-2, is to be on the path.
• Let p(i, j) be a minimum-cost path from vertex j in Vi to vertex t.
• Let cost(i, j) be the cost of this path. Then, using the forward approach, we obtain
o cost(i, j) = min { c(j, l) + cost(i+1, l) : l ∈ Vi+1 and (j, l) ∈ E }
• cost(k-1, j) = c(j, t) if (j, t) ∈ E, and cost(k-1, j) = ∞ if (j, t) ∉ E
Problem:
Solve the following multi stage graph

cost(4,9) = 4
cost(4,10) = 2
cost(4,11) = 5
cost(3,6) = min {6+cost(4,9), 5+cost(4,10)} = 7
cost(3,7) = min {4+cost(4,9), 3+cost(4,10)} = 5
cost(3,8) = min {5+cost(4,10), 6+cost(4,11)} = 7
cost(2,2) = min {4+cost(3,6), 2+cost(3,7), 1+ cost(3,8)} = 7
cost(2,3) = 9
cost(2,4) = 18
cost(2,5) = 15
cost(1,1) = min {9 + cost(2, 2), 7 + cost(2, 3), 3 + cost(2,4), 2 + cost(2,5)} = 16
d(3,6) = 10 ; d(3,7) = 10 ; d(3,8) = 10 ;
d(2,2) = 7; d(2,3) = 6 ; d(2,4) = 8 ; d(2,5) = 8 ;
d(1,1) = 2

Algorithm:
Algorithm FGraph(G,k,n,p)
{
cost[n] := 0.0;
for j : = n-1 to 1 step -1 do
{ / / Compute cost[j].
Let r be a vertex such that (j, r) is an edge of G and c[j, r] + cost[r] is
minimum;
cost[j] := c[j, r] + cost[r];
d[j] := r;
}
/ / Find a minimum-cost path.
p[1] := 1; p[k] := n;
for j := 2 to k-1 do p[j] := d[p[j -1]];
}
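A runnable sketch of the forward approach is given below. The original figure for the worked example is not reproduced in these notes, so the edge list here is reconstructed from the cost(i, j) computations above; treat it as an illustrative reconstruction rather than the original graph:

```python
INF = float('inf')

def fgraph(n, edges):
    """Forward approach: cost[j] = min over edges (j, r) of c(j, r) + cost[r].
    edges maps (j, r) -> c(j, r); vertices 1..n are numbered in stage order."""
    cost = [INF] * (n + 1)
    d = [0] * (n + 1)
    cost[n] = 0
    for j in range(n - 1, 0, -1):
        for (u, r), c in edges.items():
            if u == j and c + cost[r] < cost[j]:
                cost[j] = c + cost[r]
                d[j] = r
    path = [1]                      # trace a minimum-cost path from 1 to n
    while path[-1] != n:
        path.append(d[path[-1]])
    return cost[1], path

# Edge costs read off the worked example (5 stages, 12 vertices).
edges = {(1, 2): 9, (1, 3): 7, (1, 4): 3, (1, 5): 2,
         (2, 6): 4, (2, 7): 2, (2, 8): 1, (3, 6): 2, (3, 7): 7,
         (4, 8): 11, (5, 7): 11, (5, 8): 8,
         (6, 9): 6, (6, 10): 5, (7, 9): 4, (7, 10): 3, (8, 10): 5, (8, 11): 6,
         (9, 12): 4, (10, 12): 2, (11, 12): 5}

print(fgraph(12, edges))  # (16, [1, 2, 7, 10, 12])
```

Note there is a second minimum-cost path of the same length (through vertices 3 and 6); the sketch returns the one found first.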
Backward Approach
bcost(i,j) = min {bcost(i - 1,l) +c(l,j)} l ∈ Vi-1 (l , j) ∈ E
bcost(3,6) = min{bcost(2,2) + c(2, 6), bcost(2,3) + c(3,6)} = min {9+ 4, 7+ 2} = 9
bcost(3,7) = 11
bcost(3,8) = 10
bcost(4,9) = 15
bcost( 4,10) = 14
bcost(4,11) = 16
bcost(5,12) = 16

Algorithm:
Algorithm BGraph(G,k,n,p)
{
bcost[1] := 0.0;
for j : = 2 to n do
{ / / Compute bcost[j].
Let r be a vertex such that <r, j> is an edge of G and c[r, j] + bcost[r] is
minimum;
bcost[j] := c[r, j] + bcost[r];
d[j] := r;
}
/ / Find a minimum-cost path.
p[1] := 1; p[k] := n;
for j := k-1 to 2 step -1 do p[j] := d[p[j + 1]];
}
All Pairs Shortest Path:
↪ The all-pairs shortest-path problem is to determine a matrix A such that A(i,j) is the length of
a shortest path from i to j.
↪ G shouldn’t have cycles with negative length. If we allow G to contain a cycle of negative
length, then the shortest path between any two vertices on this cycle has length -∞.
↪ Let A^k(i, j) represent the length of a shortest path from i to j going through
no vertex of index greater than k, so A^0(i, j) = cost(i, j)
A(i, j) = min { min { A^(k-1)(i, k) + A^(k-1)(k, j) : 1 ≤ k ≤ n }, cost(i, j) }

(The cost matrix A^0 and the successive matrices A^1 through A^4 of the worked example are not reproduced here.)
Algorithm:
Algorithm AllPaths(cost, A, n)
{
for i := 1 to n do
for j := 1 to n do
A[i,j] := cost[i,j];
for k := 1 to n do
for i := 1 to n do
for j := 1 to n do
A[i,j] :=min{A[i, j], A[i, k] +A[k,j]};
}
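The algorithm above (commonly known as Floyd–Warshall) can be sketched directly in Python. The matrices of the original example did not survive in this copy, so the 3-vertex cost matrix below is illustrative only:

```python
INF = float('inf')

def all_pairs(cost):
    """Floyd-Warshall: after iteration k, A[i][j] is the length of a shortest
    i-to-j path using only intermediate vertices of index <= k."""
    n = len(cost)
    A = [row[:] for row in cost]        # A^0 is the cost matrix itself
    for k in range(n):
        for i in range(n):
            for j in range(n):
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

# A hypothetical 3-vertex digraph (INF marks a missing edge).
cost = [[0,   4, 11],
        [6,   0,  2],
        [3, INF,  0]]
print(all_pairs(cost))  # [[0, 4, 6], [5, 0, 2], [3, 7, 0]]
```

The triple loop gives the O(n^3) time bound; the matrix is updated in place, which is safe because row k and column k do not change during iteration k.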

Single Source Shortest Paths-general Weights:

★ When negative edge lengths are permitted, we require that the graph have no cycles of
negative length. This is necessary to ensure that shortest paths consist of a finite number of
edges.

★ For example, in the above graph, the length of the shortest path from vertex 1 to vertex 3 is -∞
★ When there are no cycles of negative length, there is a shortest path between any two vertices
of an n-vertex graph that has at most n-1 edges on it.
★ If the shortest path from v to u with at most k edges has no more than k-1 edges, then
dist^k[u] = dist^(k-1)[u].
★ If the shortest path from v to u with at most k edges has exactly k edges, then it is made up of
a shortest path from v to some vertex i followed by the edge (i, u). The path from v to i has k-1
edges, and its length is dist^(k-1)[i].
★ These observations result in the following recurrence for dist:
dist^k[u] = min { dist^(k-1)[u], min_i { dist^(k-1)[i] + cost[i, u] } }
Example:
Algorithm:
Algorithm BellmanFord(v, cost, dist, n)
{
for i := 1 to n do // Initialize dist.
dist[i] := cost[v, i];
for k := 2 to n-1 do
for each u such that u ≠ v and u has at least one incoming edge do
for each (i, u) in the graph do
if dist[u] > dist[i] + cost[i,u] then
dist[u] = dist[i] + cost[i,u];
}
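A Python sketch of the recurrence follows. The graph below is a hypothetical instance with negative edge weights but no negative-length cycle (the original example figure is not reproduced):

```python
INF = float('inf')

def bellman_ford(v, edges, n):
    """dist^k[u] = length of a shortest v-to-u path using at most k edges.
    edges is a list of (i, u, cost) triples; vertices are 0..n-1."""
    dist = [INF] * n
    dist[v] = 0
    for (i, u, c) in edges:            # initialize dist with the direct edges
        if i == v:
            dist[u] = min(dist[u], c)
    for _ in range(2, n):              # k = 2 .. n-1 relaxation passes
        for (i, u, c) in edges:
            if dist[i] + c < dist[u]:
                dist[u] = dist[i] + c
    return dist

# Hypothetical 7-vertex graph; edge (2, 1, -2) etc. have negative cost.
edges = [(0, 1, 6), (0, 2, 5), (0, 3, 5), (1, 4, -1),
         (2, 1, -2), (2, 4, 1), (3, 2, -2), (3, 5, -1),
         (4, 6, 3), (5, 6, 3)]
print(bellman_ford(0, edges, 7))  # [0, 1, 3, 5, 0, 4, 3]
```

Because no negative cycle exists, n-1 passes suffice: after pass k every shortest path with at most k edges has been found.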

Optimal Binary Search Trees:


An Optimal Binary Search Tree (OBST), also known as a Weighted Binary Search Tree, is a
binary search tree that minimizes the expected search cost. In a binary search tree, the search
cost is the number of comparisons required to search for a given key.
In an OBST, each node is assigned a weight that represents the probability of its key being
searched for. The sum of all the weights in the tree is 1.0. The expected search cost of the tree is
the sum, over all nodes, of each node's weight multiplied by its level.
Cost of Optimal Binary Search Trees
★ If a binary search tree represents n identifiers, then there will be exactly n internal nodes and
n + 1 external nodes

Using the above formula, find which of the following trees is optimal


{a1, a2, a3} = {do, if, stop}
P(1, 2, 3) = (5, 1, 2) Q(0, 1, 2, 3) = (2, 1, 1, 2)
cost(Tree a) = Σ P(i) * level(ai) + Σ Q(i) * (level(Ei) - 1)
= (5*3 + 1*2 + 2*1) + (2*3 + 1*3 + 1*2 + 2*1) = 32
cost(Tree b) = (5*2 + 1*1 + 2*2) + (2*2 + 1*2 + 1*2 + 2*2) = 27
cost(Tree c) = (5*1 + 1*2 + 2*3) + (2*1 + 1*2 + 1*3 + 2*3) = 26
Therefore, Tree C is optimal.

Basic Notations
★ A binary search tree T
■ All identifiers in the left subtree Tleft are less than the root identifier
■ All identifiers in the right subtree Tright are greater than the root identifier
■ The left and right subtrees of T are also BST
★ a1< a2< … < an
★ Ti,j : OBST for ai+1,…,aj
★ Ci,j : cost for Ti,j
★ Ri,j : root of Ti,j
★ Weight of Ti,j : Wi,j = Qi + Σ (Qk + Pk ), k = i+1 … j

Dynamic Programming Solution


★ To obtain a OBST using Dynamic programming we need to take a sequence of decisions
regarding the construction of tree.
★ First decision is which of ai is to be the root.
★ If we choose ak as the root, then the internal nodes a1, ..., ak-1 and the external nodes for classes
E0, E1, ..., Ek-1 will lie in the left subtree L of the root.
★ The remaining nodes will be in the right subtree R.
Algorithm:
procedure OBST(P, Q, n)
real P(1:n), Q(0:n), C(0:n, 0:n), W(0:n, 0:n)
integer R(0:n, 0:n)
for i ← 0 to n - 1 do
(W(i, i), R(i, i), C(i, i)) ← (Q(i), 0, 0)
(W(i, i+1), R(i, i+1), C(i, i+1)) ← (Q(i) + Q(i+1) + P(i+1), i+1, Q(i) + Q(i+1) + P(i+1))
(W(n, n), R(n, n), C(n, n)) ← (Q(n), 0, 0)
for m ← 2 to n do
for i ← 0 to n - m do
j←i+m
W(i,j) ← W(i,j - 1) + P(j) + Q(j)
k ← a value of l, i < l ≤ j, that minimizes C(i, l-1) + C(l, j)
C(i, j) ← W(i, j) + C(i, k-1) + C(k, j)
R(i,j) ← k
end OBST
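A Python sketch of the procedure, run on the example above with P = (5, 1, 2) and Q = (2, 1, 1, 2). Note that the DP examines all five possible tree shapes for n = 3, so it can report a cost lower than the three trees compared earlier:

```python
def obst(p, q):
    """p[1..n]: weights of successful searches (p[0] is a dummy entry);
    q[0..n]: weights of unsuccessful searches.
    Returns (C[0][n], R) where R is the root table."""
    n = len(p) - 1
    W = [[0] * (n + 1) for _ in range(n + 1)]
    C = [[0] * (n + 1) for _ in range(n + 1)]
    R = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        W[i][i] = q[i]                      # empty tree T(i, i)
    for m in range(1, n + 1):               # m = j - i, the subtree size
        for i in range(n - m + 1):
            j = i + m
            W[i][j] = W[i][j - 1] + p[j] + q[j]
            # choose the root k in (i, j] minimizing C(i, k-1) + C(k, j)
            k = min(range(i + 1, j + 1), key=lambda l: C[i][l - 1] + C[l][j])
            R[i][j] = k
            C[i][j] = W[i][j] + C[i][k - 1] + C[k][j]
    return C[0][n], R

cost, R = obst([0, 5, 1, 2], [2, 1, 1, 2])
print(cost, R[0][3])  # 24 1 -> optimal cost 24 with a1 ('do') as the root
```

The resulting tree (do at the root, stop as its right child, if below stop) costs 24, slightly less than the best of the three trees drawn above, because only three of the five shapes were compared there.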

String Editing
★ Given two strings X = x1,x2,...,xn and Y = y1,y2,...,ym, where xi, 1 ≤ i ≤ n, and yj, 1 ≤ j ≤ m, are
members of a finite set of symbols known as the alphabet.
★ For transforming X into Y, using a sequence of edit operations on X, the permissible edit
operations are
○ Insert
○ Delete
○ Change (a symbol of X into another)
★ There is a cost associated with performing each operation
★ The problem of string editing is to identify a minimum-cost sequence of edit operations that
will transform X into Y.
★ Let D(xi) be the cost of deleting the symbol xi from X, I(yj) be the cost of inserting the symbol
yj into X, and C(xi, yj) be the cost of changing the symbol xi of X into yj.
★ Consider the sequences X = a,a,b,a,b and Y = b,a,b,b.
★ Let the cost associated with each insertion and deletion be 1 (for any symbol), and cost of
changing any symbol to any other symbol be 2.
★ One possible way of transforming X into Y is to delete each xi and insert each yj.
○ The total cost of this edit sequence is 9.
★ Another possible edit sequence is delete x1 and x2 and insert y4 at the end of string X.
○ The total cost is only 3.
Dynamic Programming Solution
• Define cost(i,j) to be the minimum cost of any edit sequence for transforming x1, x2 , ...,
xi into y1, y2, …, yj (for 0 ≤ i ≤ n and 0 ≤ j ≤ m).
• Compute cost(i, j) for each i and j. Then cost(n, m) is the cost of an optimal edit
sequence.
• For i = j = 0, cost(i,j) = 0, since the two sequences are empty, they are identical.
• If j = 0 and i > 0, we can transform X into Y by a sequence of deletes. Thus, cost(i,
0)=cost(i-1, 0)+D(xi) Similarly, if i=0 and j>0, we get cost(0,j) = cost(0, j-1)+I(yj).
• if i>0 and j>0, X can be transformed into Y in one of three ways:
1. Transform x1,x2 ,..., xi-1 into y1, y2 ,..., yj using a minimum-cost edit sequence
and then delete xi. The corresponding cost is cost(i-1,j) + D(xi).
2. Transform x1,x2 ,..., xi-1 into y1, y2 ,..., yj-1 using a minimum-cost edit
sequence and then change the symbol xi to yj. The corresponding cost is
cost(i-1,j-1) + C(xi,yj).
3. Transform x1,x2 ,..., xi into y1, y2 ,..., yj-1 using a minimum-cost edit sequence
and then insert yj. The corresponding cost is cost(i,j-1) + I(yj)
Recurrence Equation for cost(i, j)
For i > 0 and j > 0:
cost(i, j) = min { cost(i-1, j) + D(xi), cost(i-1, j-1) + C(xi, yj), cost(i, j-1) + I(yj) }
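The recurrence can be sketched directly. As in the example above, deletion and insertion cost 1 and a change costs 2 (taken as 0 when the symbols already match):

```python
def edit_cost(X, Y, D=lambda x: 1, I=lambda y: 1, C=lambda x, y: 2):
    """cost[i][j] = minimum cost of transforming X[:i] into Y[:j]."""
    n, m = len(X), len(Y)
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):                  # j = 0: a sequence of deletes
        cost[i][0] = cost[i - 1][0] + D(X[i - 1])
    for j in range(1, m + 1):                  # i = 0: a sequence of inserts
        cost[0][j] = cost[0][j - 1] + I(Y[j - 1])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            change = 0 if X[i - 1] == Y[j - 1] else C(X[i - 1], Y[j - 1])
            cost[i][j] = min(cost[i - 1][j] + D(X[i - 1]),   # delete x_i
                             cost[i - 1][j - 1] + change,    # change x_i to y_j
                             cost[i][j - 1] + I(Y[j - 1]))   # insert y_j
    return cost[n][m]

print(edit_cost("aabab", "babb"))  # 3, matching the example above
```

The table has (n+1)(m+1) entries and each is filled in constant time, giving O(nm) time.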
0/1 Knapsack:
Statement:
➔ Similar to the fractional knapsack problem, but we may not take a fraction of an object. We are
given ‘N’ objects with weights Wi and profits Pi, where i varies from 1 to N, and a knapsack with
capacity ‘M’.
➔ Maximize Σ Xi Pi, i = 1 to n, subject to the constraint Σ Xi Wi ≤ M
➔ Each Xi is required to be 0 or 1: if the object is selected it is 1, and if the object is rejected it
is 0. That is why it is called the 0/1 knapsack problem.
Dynamic Programming Solution
To solve the problem by dynamic programming, we need to make a sequence of decisions on the
variables x1, x2, x3, ..., xn. Let us assume that the decisions on xi are made in the order xn,
xn-1, ..., x1. After taking the decision on xn, we may be in one of two possible states:
1) The capacity remaining in the knapsack is M and no profit has been earned, i.e., Xn = 0
2) The capacity remaining in the knapsack is M - Wn and a profit of Pn has been earned. We obtain
Fn(M) = max { Fn-1(M), Fn-1(M - Wn) + Pn }
➔ To solve this, let the ordered set Si contain pairs (P, W), i.e., profit and weight
pairs. Initially S0 contains only (0, 0).
➔ We compute Si+1 from Si by first computing Si1 = { (P, W) | (P - pi+1, W - wi+1) ∈ Si }
➔ Then Si+1 can be computed by merging the pairs in Si and Si1 together.
➔ Let Si represent the possible states resulting from the decision sequences for (x1,x2,…,xi). To
obtain Si+1 the possibilities for Xi+1 are Xi+1 = 0 (or) 1, when Xi+1 = 0 the resulting states are
same as for Si. when Xi+1 = 1 the resulting states are obtained by adding (pi+1, wi+1) to each
state in Si. call this as Si1
➔ If Si+1 contains two pairs (pj,wj) and (pk,wk) with the property that pj <= pk and wj >= wk
then the pair ( pj,wj) can be discarded using purging rule also called as dominance rule. i.e.,
(pk,wk) dominates (pj,wj).
➔ Si+1 can be computed by merging and purging the states in Si and Si1 together.
➔ We purge all pairs (P, W) with W > M. Then the result Fn(M) is given by the P value of the last
pair in Sn.
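The set-merging method above can be sketched as follows. The instance P = (1, 2, 5), W = (2, 3, 4), M = 6 is assumed here for illustration, since this section gives no numeric example:

```python
def knapsack_pairs(P, W, M):
    """Builds S^0 .. S^n as lists of (profit, weight) pairs sorted by weight,
    merging S^i with S^i_1 and applying the purging (dominance) rule."""
    S = [(0, 0)]
    for p, w in zip(P, W):
        # S^i_1: add the current object to every feasible pair in S^i
        S1 = [(sp + p, sw + w) for (sp, sw) in S if sw + w <= M]
        merged = sorted(S + S1, key=lambda t: (t[1], -t[0]))
        S = []
        for pair in merged:
            # purging rule: a pair is kept only if its profit strictly
            # exceeds that of the last kept (lighter or equal) pair
            if not S or pair[0] > S[-1][0]:
                S.append(pair)
    return S[-1][0]                 # P value of the last pair = F_n(M)

print(knapsack_pairs([1, 2, 5], [2, 3, 4], 6))  # 6
```

Here S3 ends as {(0,0), (1,2), (2,3), (5,4), (6,6)}: the pair (3,5) is purged because (5,4) dominates it, and F3(6) = 6.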
Traveling Salesman Problem:
Definitions
★ Let G = (V, E) be a directed graph with edge costs cij
★ A tour of G is a directed simple cycle that includes every vertex in V. (We regard a tour to be a
simple path that starts and ends at vertex 1)
★ The cost of a tour is the sum of the costs of the edges on the tour.
★ The traveling salesperson problem is to find a tour of minimum cost.
Dynamic Programming Formulation
★ Let g(i,S) be the length of a shortest path starting at vertex i, going through all vertices in S,
and terminating at vertex 1.
★ The function g(1, V — {1}) is the length of an optimal salesperson tour.
From the principle of optimality it follows that
g(1, V - {1}) = min { c1k + g(k, V - {1, k}) : 2 ≤ k ≤ n }
Generalizing the above function, we get
g(i, S) = min { cij + g(j, S - {j}) : j ∈ S }, with g(i, ∅) = ci1
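The g(i, S) recursion (the Held–Karp formulation) can be sketched with memoization, representing S as a bitmask. The 4-vertex cost matrix below is a hypothetical instance:

```python
from functools import lru_cache

def tsp(c):
    """g(i, S) = min over j in S of c[i][j] + g(j, S - {j}).
    Vertices are 0-indexed; S is a bitmask over vertices 1..n-1."""
    n = len(c)
    FULL = (1 << n) - 2                 # bitmask containing vertices 1..n-1

    @lru_cache(maxsize=None)
    def g(i, S):
        if S == 0:
            return c[i][0]              # close the tour back at the start
        return min(c[i][j] + g(j, S & ~(1 << j))
                   for j in range(1, n) if S & (1 << j))

    return g(0, FULL)

# A hypothetical 4-vertex cost matrix.
c = [[0, 10, 15, 20],
     [5,  0,  9, 10],
     [6, 13,  0, 12],
     [8,  8,  9,  0]]
print(tsp(c))  # 35
```

The cache holds O(n * 2^n) states and each is evaluated in O(n) time, which is far better than the (n-1)! cost of enumerating all tours, though still exponential.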
Back Tracking
General Method

★ Problems which deal with searching for a set of solutions or which ask for an optimal solution
satisfying some constraints can be solved using the backtracking formulation
★ The desired solution is expressible as an n-tuple (x1, x2... , xn), where the xi are chosen from
some finite set Si.
★ Solution to the problem is finding one vector that maximizes (or minimizes or satisfies) a
criterion function P(x1, x2... , xn).
★ All possible solutions required to satisfy a set of constraints:
1. Explicit Constraint: Explicit constraints are rules that restrict each xi to take on values only
from a given set. Ex: xn= 0 or 1.
2. Implicit Constraint: Implicit constraints are rules that determine which of the tuples in the
solution space satisfy the criterion function.
Solution Space: All tuples that satisfy the explicit constraints define a possible solution space for
a particular instance I of the problem.
Problem State: Each node in the tree organization defines a problem state.
State Space Tree: If we represent solution space in the form of a tree then the tree is referred as
the state space tree.
Answer States: Those solution states S for which the path from the root to S defines a tuple that
is a member of the set of solutions (i.e., it satisfies the implicit constraints) of the problem.
Live node: A node which has been generated but not all of whose children have yet been
generated is called a live node.
E-node: The live node whose children are currently being generated is called the E-node (node
being expanded).
Dead node: A generated node that is either not to be expanded further or one for which all of
its children have been generated.
Bounding function: It will be used to kill live nodes without generating all their children

8-Queen Problem:
Given an 8×8 chess board and 8 queens, the objective of this problem is to arrange the 8 queens
on the chess board such that no two queens are in attacking positions, i.e., no two queens should
be in the same row, column, or diagonal.
The solution tuple for the 8-queens problem can be represented as (x1, x2, ..., x8). The explicit
constraint is that each xi should take on values from the set {1, 2, ..., 8},
and the implicit constraints are
i. No two queens should be in the same row
ii. No two queens should be in the same column
iii. No two queens should be in the same diagonal
Note: here xi represents the column number in which the queen of row ‘i’ is placed.
Solutions to the 4-queens problem are (3, 1, 4, 2) or (2, 4, 1, 3).
One of the solutions to the 8-queens problem is (4, 6, 8, 2, 7, 1, 3, 5).
➔ That no two queens are in the same row is guaranteed by the definition of the answer tuple
itself, since each xi places the ith queen in row i.
➔ That no two queens are in the same column is achieved by placing distinct values in the
answer tuple, i.e., xi ≠ xj for all i ≠ j.
➔ No two queens must be in same diagonal. Let (u, v) and (x, y) be positions of placing a queen,
they are in same diagonal if and only if │u-x│=│v-y│.
Algorithm:(Use n=8 for 8 queen problem)
Algorithm Nqueens(k, n) {
for i:=1 to n do
{
if(place (k, i))then
{
x[k]:=i;
if(k=n)then
write (x[1:n]);
else
Nqueens(k+1, n);
}
}
}
Place(k, i) is an algorithm that returns a Boolean value, i.e., true if the kth queen can be placed in
column i.
Algorithm place(k, i)
{
for j:=1 to k-1 do
{
if ((x[j] = i) or (Abs(x[j] - i) = Abs(j - k))) then
return false;
}
return true;
}
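The two pseudocode routines above translate directly into Python; this sketch collects all solutions rather than only printing them:

```python
def nqueens(n):
    """Backtracking over the tuple (x1..xn); x[k] is the column of the
    queen in row k. place() checks the column and diagonal constraints."""
    solutions = []
    x = [0] * (n + 1)                  # 1-indexed, matching the pseudocode

    def place(k, i):
        for j in range(1, k):
            # same column, or same diagonal: |x[j] - i| = |j - k|
            if x[j] == i or abs(x[j] - i) == abs(j - k):
                return False
        return True

    def solve(k):
        for i in range(1, n + 1):
            if place(k, i):
                x[k] = i
                if k == n:
                    solutions.append(tuple(x[1:]))
                else:
                    solve(k + 1)

    solve(1)
    return solutions

print(nqueens(4))  # [(2, 4, 1, 3), (3, 1, 4, 2)]
```

For n = 8 this enumeration finds all 92 solutions; the bounding function place() prunes every subtree whose partial placement is already in conflict.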

Sum of Subsets
We are given ‘n’ positive numbers called weights and we have to find all combinations of these
numbers whose sum is M. This is called the sum of subsets problem.
If we consider a backtracking procedure using the fixed tuple size strategy, the element X(i) of
the solution vector is either 1 or 0 depending on whether the weight W(i) is included or not.
Constraints
In the state space tree of the solution, for a node at level i, the left child corresponds to X(i)=1
and right child to X(i)=0.
Explicit constraints:
xi = 0 or 1 depending on whether wi is included or not.
Implicit constraints:
The chosen weights must sum to M, i.e., Σ wi xi = M.
Example
★ Given n=6, M=30 and W(1…6) = (5,10,12,13,15,18). We have to generate all possible
combinations of subsets whose sum is equal to the given value M=30.
★ In the state space tree of the solution, each rectangular node lists the values of s, k, r, where s
is the sum of the weights included so far, ‘k’ is the index of the next weight to be considered, and
‘r’ is the sum of the weights remaining in the original set.
The State Space Tree:

Statespace Tree Construction for Sum of Subsets Problem


1st solution is A → 1 1 0 0 1 0
2nd solution is B → 1 0 1 1 0 0
3rd solution is C → 0 0 1 0 0 1
★ In the state space tree, edges from level ‘i’ nodes to ‘i+1’ nodes are labeled with the values of
Xi, which is either 0 or 1.
★ The left sub tree of the root defines all subsets containing W1.
★ The right sub tree of the root defines all subsets that do not include W1.
★ If S + W(k) = M then print the subset, because the sum is the required output.
★ If the above condition is not satisfied, then check S + W(k) + W(k+1) <= M. If so, generate the
left sub tree. It means W(k) can be included, so the sum is incremented and we check for the
next k.
★ After generating the left sub tree, generate the right sub tree. For this, check that
S + r - W(k) >= M and S + W(k+1) <= M, because W(k) is omitted and W(k+1) must still fit.
Algorithm:
Algorithm SumOfSub(s, k, r)
{
X[k] := 1; // generate the left child
if (s + W[k] = m) then write(X[1:k]); // subset found
else if (s + W[k] + W[k+1] <= m) then
SumOfSub(s + W[k], k+1, r - W[k]);
// generate the right child and evaluate Bk
if ((s + r - W[k] >= m) and (s + W[k+1] <= m)) then
{
X[k] := 0;
SumOfSub(s, k+1, r - W[k]);
}
}
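The algorithm can be sketched in Python and run on the example above (n = 6, M = 30, weights sorted in nondecreasing order, as the bounding function requires):

```python
def sum_of_subsets(w, m):
    """Backtracking with bounding. w must be sorted nondecreasingly;
    s = sum included so far, r = sum of the remaining weights."""
    x, out = [0] * len(w), []

    def solve(s, k, r):
        x[k] = 1                              # left child: include w[k]
        if s + w[k] == m:
            out.append([w[i] for i in range(k + 1) if x[i]])
        elif k + 1 < len(w) and s + w[k] + w[k + 1] <= m:
            solve(s + w[k], k + 1, r - w[k])
        # right child: exclude w[k], but only if the remaining weights can
        # still reach m and the next weight does not overshoot it
        if k + 1 < len(w) and s + r - w[k] >= m and s + w[k + 1] <= m:
            x[k] = 0
            solve(s, k + 1, r - w[k])

    solve(0, 0, sum(w))
    return out

print(sum_of_subsets([5, 10, 12, 13, 15, 18], 30))
# [[5, 10, 15], [5, 12, 13], [12, 18]]
```

The three subsets printed correspond exactly to the solution vectors A, B, and C listed above.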

Graph Coloring:
Definition
★ Let ‘G’ be a graph and ‘m’ be a given positive integer. The m-colorability decision problem asks
whether the nodes of ‘G’ can be colored in such a way that no two adjacent nodes have the same
color, yet only ‘m’ colors are used.
★ The smallest integer ‘m’ for which the graph G can be so colored is referred to as the
chromatic number of the graph.
★ A graph is said to be planar iff it can be drawn in a plane in such a way that no two edges cross
each other. Given a map, we can convert it into a planar graph: consider each region as a node,
and if two regions are adjacent then join the corresponding nodes by an edge.
Example Planar map:

Steps to color the graph


★ First create the adjacency matrix G(1:n, 1:n) for the graph: if there is an edge between i and j
then G(i, j) = 1, otherwise G(i, j) = 0.
★ The colors are represented by the integers 1, 2, ..., m and the solutions are stored in the
array X(1), X(2), ..., X(n), where X(index) is the color assigned to node index.
★ The formula used to try the next color is X(k) = (X(k) + 1) mod (m + 1)
★ After assigning a color to node ‘k’, we check whether any adjacent node has got the same
value; if so, we assign the next color.
★ Repeat the procedure until all possible combinations of colors are found.
★ The test used to detect an adjacent node with the same color is:
if ((G(k, j) = 1) and (X(k) = X(j)))
Example:
State space tree for given graph

Algorithm:

Algorithm mcoloring(k)
{
repeat
{
Nextvalue(k); // Assign to X[k] a legal color.
If (X[k]=0) then return; // No new color possible.
If (k=n) then write(x[1:n]);
else mcoloring(k+1);
} until(false);
}
Algorithm Nextvalue(k)
{
repeat
{
X[k] = (X[k]+1)mod(m+1);
if(X[k]=0) then return;
for j=1 to n do
{
if((G[k,j] <> 0)and(X[k] = X[j]))
then break;
}
if(j=n+1) then return; //new color found.
} until(false); //otherwise try to find another color.
}
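The mcoloring/Nextvalue pair above can be sketched in Python. The 4-cycle used below is a hypothetical instance (it is 2-colorable, so m = 2 yields exactly two colorings):

```python
def mcoloring(G, m):
    """G is an n x n adjacency matrix; x[k] holds the color (1..m) of
    vertex k. Mirrors the mcoloring/Nextvalue pseudocode above."""
    n = len(G)
    x = [0] * n
    out = []

    def solve(k):
        while True:
            next_value(k)
            if x[k] == 0:
                return                 # no new color possible: backtrack
            if k == n - 1:
                out.append(tuple(x))   # all n vertices legally colored
            else:
                solve(k + 1)

    def next_value(k):
        while True:
            x[k] = (x[k] + 1) % (m + 1)
            if x[k] == 0:
                return                 # colors exhausted
            # keep this color only if no adjacent vertex already has it
            if all(not (G[k][j] and x[k] == x[j]) for j in range(n)):
                return

    solve(0)
    return out

C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(len(mcoloring(C4, 2)), len(mcoloring(C4, 3)))  # 2 18
```

The counts agree with the chromatic polynomial of a 4-cycle, (m-1)^4 + (m-1): 2 colorings for m = 2 and 18 for m = 3.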
Hamiltonian Cycles:
Definition:
★ Let G=(V,E) be a connected graph with ‘n’ vertices. A HAMILTONIAN CYCLE is a round trip
path along ‘n’ edges of G that visits every vertex once and returns to its starting position.
★ If the Hamiltonian cycle begins at some vertex V1 of G and the vertices of G are visited
in the order V1, V2, ..., Vn+1, then the edges (Vi, Vi+1) are in E, 1 <= i <= n, and the Vi are
distinct except for V1 and Vn+1, which are equal.
Example:

1. Define a solution vector X(X1……..Xn) where Xi represents the ith visited vertex of the
proposed cycle.
2. Create a cost adjacency matrix for the given graph.
3. The solution array is initialized to all zeros except X(1) = 1, because the cycle should start at
vertex ‘1’.
4. Now we have to find the second vertex to be visited in the cycle.
5. The vertex from 1 to n are included in the cycle one by one by checking 2 Conditions,
1. There should be a path from previous visited vertex to current vertex.
2. The current vertex must be distinct and should not have been visited earlier.
6. When these two conditions are satisfied the current vertex is included in the Cycle; else the
next vertex is tried.
7. When the nth vertex is visited we have to check whether there is any path from the nth vertex
to the first vertex. If there is no such path, go back one step and try the next value for the
previously visited node.
8. Repeat the above steps to generate possible Hamiltonian cycle.
Algorithm:
Algorithm Hamiltonian(k)
{
repeat
{
// generate values for X[k].
Nextvalue(k); // Assign a legal next value to X[k]
if (X[k]=0) then return;
if (k=n) then
Write(x[1:n]);
else Hamiltonian(k+1);
} until(false);
}
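The steps above can be sketched in Python; the recursion below folds the Nextvalue checks (conditions 1 and 2 of step 5) into the loop. The 5-vertex adjacency matrix is a hypothetical instance:

```python
def hamiltonian(G):
    """G is an n x n adjacency matrix. x[k] is the k-th vertex on the cycle;
    x[0] is fixed at vertex 0, as step 3 above fixes X(1) = 1."""
    n = len(G)
    x = [-1] * n
    x[0] = 0
    cycles = []

    def solve(k):
        for v in range(1, n):
            # condition 1: edge from the previously visited vertex to v
            # condition 2: v is distinct from every vertex already on the path
            if G[x[k - 1]][v] and v not in x[:k]:
                x[k] = v
                if k == n - 1:
                    if G[v][0]:                    # edge back to the start?
                        cycles.append([u + 1 for u in x] + [1])
                else:
                    solve(k + 1)
                x[k] = -1                          # undo, try the next vertex

    solve(1)
    return cycles

# A hypothetical 5-vertex graph.
G = [[0, 1, 0, 1, 0],
     [1, 0, 1, 1, 1],
     [0, 1, 0, 0, 1],
     [1, 1, 0, 0, 1],
     [0, 1, 1, 1, 0]]
print(hamiltonian(G))  # [[1, 2, 3, 5, 4, 1], [1, 4, 5, 3, 2, 1]]
```

The two cycles found are the same tour traversed in opposite directions, which is expected: fixing the starting vertex removes rotations but not reflections.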

***
