
UNIT 2

A Jeyanthi ASP/CSE MNMJEC 1


Solving Problems by Searching
• The searching algorithms are divided into two categories
• 1. Uninformed Search Algorithms (Blind Search)
• 2. Informed Search Algorithms (Heuristic Search)

• There are six Uninformed Search Algorithms


• 1. Breadth First Search
• 2. Uniform-cost search
• 3. Depth-first search
• 4. Depth-limited search
• 5. Iterative deepening depth-first search
• 6. Bidirectional Search

• There are three Informed Search Algorithms


• 1. Best First Search
• 2. Greedy Search
• 3. A* Search
Blind search Vs Heuristic search

Blind search (UNINFORMED): the strategies have no additional information about states beyond that provided in the problem definition. Less effective as a search method.

Heuristic search: uses problem-specific knowledge beyond the definition of the problem itself. More effective as a search method.

UNINFORMED Search
Only the problem definition is given.
The search technique uses the information in the problem definition to generate new sets of states
from the initial state, and can only distinguish a goal state from a non-goal state.



Breadth-first search
(level by level expansion)
• Breadth-first search is a simple strategy in which the root node is expanded
first, then all the successors of the root node are expanded next, then their
successors, and so on.
• The nodes in level 1 are expanded before nodes in level 2 and so on…
• In general, all the nodes at a given depth are expanded before any nodes at the
next level are expanded.
• Implementation
• Calling GRAPH-SEARCH(Problem, FIFO-QUEUE()) results in a breadth-first search, using a FIFO queue for the frontier.
• It ensures that the nodes visited first are expanded first.
• The FIFO queue keeps nodes visited earlier at the front of the queue and puts newly generated successors at the end, which means that shallow nodes are expanded before deeper nodes.
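The FIFO-queue scheme can be sketched in Python. This is a minimal illustration, not the textbook's pseudocode; the example tree and its node names are invented:

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Level-by-level expansion using a FIFO queue as the frontier."""
    frontier = deque([[start]])          # queue of paths; shallowest first
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path                  # shallowest path to the goal
        for child in successors.get(node, []):
            if child not in explored:    # skip already-generated states
                explored.add(child)
                frontier.append(path + [child])
    return None

# A small hypothetical tree: A expands to B and C, and so on
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
print(breadth_first_search("A", "F", tree))  # -> ['A', 'C', 'F']
```

Because the frontier is a FIFO queue, B and C are expanded before any of their children, exactly the level-by-level order described above.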



First, node A is inserted into the queue. If it does not
contain the goal state, it is expanded, so B and C are
added. If B does not contain the goal state, it is
expanded, and so on.
• Time and space complexity:

• Time complexity
• Consider search tree where every node has b successors.
• The root node generates ‘b’ nodes at the first level
• each of which generates b more nodes, for a total of b^2 at the second level.
• Each of these generates b more nodes, yielding b^3 nodes at the third level, and so
on.
• Now suppose that the solution is the last node at depth d.
• Then the total number of nodes generated is
• = 1 +b + b^2 + . . . . . . . + b^d
• = O(b^d)
• The space complexity
• For breadth-first graph search in particular, every node generated remains in
memory. There will be O(b^(d-1)) nodes in the explored set and O(b^d) nodes in the
frontier,
• so the space complexity is O(b^d)



• Completeness: Yes
• Optimality: Yes, but only if all step costs are equal; otherwise not optimal
• Advantage: Guaranteed to find a solution at the shallowest depth level
• Disadvantage: Suitable only for small problem instances (i.e. the number of
levels and the branching factor should be small)




Uniform-cost search
• Breadth-first search is optimal only when all step costs are equal, because it always
expands the shallowest unexpanded node.
• By a simple extension, we get an algorithm called uniform-cost search that is optimal
with any step costs.
• In UCS the root node is expanded first; of all successors of the root node, the successor with the
lowest path cost is expanded, and so on. This process continues until a goal state is reached.
• How it is possible?
• In UCS priority queue is used as frontier
• In UCS newly generated nodes are put in the priority queue in the order of path cost g(n).
• Implementation
• Calling GRAPH-SEARCH(Problem, PRIORITY-QUEUE()) results in a uniform-cost search, using a
priority queue as the frontier.
• Two significant differences from breadth-first search, in addition to the ordering of the
queue by path cost:
• 1. In UCS, the goal test is applied to a node when it is selected for expansion rather than
when it is first generated.
• The reason is that the first goal node that is generated may lie on a suboptimal path.
• 2. The second difference is that a test is added in case a better path is found to a node
currently on the frontier.
• The goal is to get from Sibiu to Bucharest.
• The successors of Sibiu are Rimnicu Vilcea and Fagaras, with costs 80 and 99,
respectively.
• The least-cost node, Rimnicu Vilcea, is expanded next, adding Pitesti with cost 80 +
97=177.
• The least-cost node is now Fagaras, so it is expanded, adding Bucharest with cost
99+211=310. Now a goal node has been generated,
• but uniform-cost search keeps going, choosing Pitesti for expansion and adding a
second path to Bucharest with cost 80+97+101= 278.
• Now the algorithm checks to see if this new path is better than the old one; it is,
so the old one is discarded.
• Bucharest, now with g-cost 278, is selected for expansion and the solution is
returned.
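The Sibiu-to-Bucharest trace above can be reproduced with a short sketch (an illustration, assuming the graph is encoded as a dictionary of (successor, step-cost) lists; only the Romania fragment used in the trace is included):

```python
import heapq

def uniform_cost_search(start, goal, graph):
    """Expand the frontier node with the lowest path cost g(n);
    the goal test is applied at expansion, not at generation."""
    frontier = [(0, start, [start])]     # priority queue ordered by g
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for child, step in graph.get(node, []):
            new_g = g + step
            if child not in best_g or new_g < best_g[child]:
                best_g[child] = new_g    # a better path to child was found
                heapq.heappush(frontier, (new_g, child, path + [child]))
    return None

romania = {
    "Sibiu": [("Rimnicu Vilcea", 80), ("Fagaras", 99)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Fagaras": [("Bucharest", 211)],
    "Pitesti": [("Bucharest", 101)],
}
print(uniform_cost_search("Sibiu", "Bucharest", romania))
# -> (278, ['Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

The 310-cost path through Fagaras is generated first, but the search keeps going and the 278-cost path through Pitesti is the one selected for expansion, matching the trace above.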
• Disadvantages of UCS:
• Uniform-cost search is not complete if any step cost equals zero; in that case it can get into an
infinite loop.
• Therefore UCS is complete only if every step cost exceeds some small positive constant ε.
• Uniform-cost search complexity is not characterized in terms of b and d, but by the cost of
the optimal solution C∗.
• Then the algorithm's worst-case time and space complexity is O(b^(1+⌊C*/ε⌋)),

• which can be much greater than O(b^d)


• This is because uniform-cost search can explore large trees of small steps before exploring
paths involving large steps.
• When all step costs are the same, uniform-cost search is similar to breadth-first search,
except that the latter stops as soon as it generates a goal, whereas uniform-cost search
examines all the nodes at the goal’s depth to see if one has a lower cost; thus uniform-cost
search does strictly more work by expanding nodes at depth d unnecessarily.



• Depth-first search (Expand the deepest unexpanded node)
• Depth-first search always expands the deepest node in the current
fringe of the search tree.
• First the root node is expanded and its successor nodes are generated;
• of those successors, one node is expanded, then one of its successors, and so on
down to the deepest level of the tree (where nodes have no successors).
• If a dead end is reached, the search backtracks to the previous node
that still has unexpanded successors.
• Implementation
• DFS can be implemented by GRAPH-SEARCH with a last-in-first-out
(LIFO) queue, also known as a stack.
• A LIFO queue means that the most recently generated node is
chosen for expansion.
• This ensures that the deepest unexpanded node is expanded first.
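A minimal Python sketch of the LIFO-stack scheme (the example tree and node names are invented):

```python
def depth_first_search(start, goal, successors):
    """Graph-search DFS: a LIFO stack means the most recently
    generated (deepest) node is always expanded first."""
    stack = [[start]]                    # stack of paths
    explored = set()
    while stack:
        path = stack.pop()               # most recently generated path
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in successors.get(node, []):
            if child not in explored:
                stack.append(path + [child])
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
print(depth_first_search("A", "E", tree))  # -> ['A', 'B', 'E']
```

Note that the whole C subtree is explored before the search backtracks to B's children, illustrating the deepest-first order.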



• The properties of depth-first search depend strongly on whether the graph-search
or tree-search version is used.
• COMPLETENESS
• The graph-search version
• is complete in finite state spaces as it avoids repeated states and redundant paths,
• In infinite state spaces,search is not complete (if an infinite non-goal path is
encountered).
• The tree-search version
• May be incomplete even in finite state spaces, as it cannot avoid repeated states
• OPTIMALITY
• For similar reasons, both versions are non optimal.
• For example, in Figure 3.16, depth-first search will explore the entire left subtree
even if node C is a goal node. If node J were also a goal node, then depth-first
search would return it as a solution instead of C, which would be a better solution;
hence, depth-first search is not optimal.



• SPACE COMPLEXITY
• The graph-search version
• For a graph search, there is no advantage over breadth-first search,
• but a depth-first tree search needs to store only a single path from the root to a leaf
node, along with the remaining unexpanded sibling nodes for each node on the
path.
• Once a node has been expanded, it can be removed from memory as soon as all its
descendants have been fully explored. (See Figure 3.16.)
• For a state space with branching factor b and maximum depth m, depth-first tree
search requires storage of only O(bm) nodes.
• TIME COMPLEXITY
• The time complexity of depth-first graph search is bounded by the size of the state
space (which may be infinite, of course).
• A depth-first tree search, on the other hand, may generate all of the O(b^m) nodes
in the search tree, where m is the maximum depth of any node;
• this can be much greater than the size of the state space. Note that m itself can be
much larger than d (the depth of the shallowest solution) and is infinite if the tree is
unbounded.
• Backtracking search
A variant of depth-first search called backtracking search uses still less memory.
• In backtracking, only one successor is generated at a time rather than all
successors; each partially expanded node remembers which successor to generate
next.
• In this way, the space complexity is only O(m), rather than O(bm).
• Backtracking search facilitates yet another memory-saving (and time-saving)
trick: the idea of generating a successor by modifying the current state description
directly rather than copying it first.
• This reduces the memory requirements to just one state description and O(m)
actions.



• Depth-limited search
• The failure of depth-first search in infinite state spaces can be alleviated
by supplying depth-first search with a predetermined depth limit l.
• That is, nodes at depth l are treated as if they have
no successors. This approach is called depth-limited search.
• The depth limit solves the infinite-path problem.
• Drawbacks:
• It is incomplete, if we choose l< d, that is, the shallowest goal is
beyond the depth limit. (This is likely when d is unknown.)
• It is, non optimal if we choose l> d.
• Its time complexity is O(b^l) and its space complexity is O(bl).
• Depth-first search can be viewed as a special case of depth-limited
search with l=∞.



• Implementation:
• DLS can be implemented as a simple modification
to the general tree- or graph-search algorithm.
• Alternatively, it can be implemented as a simple
recursive algorithm as shown in Figure 3.17.
• Depth Limit search can terminate with two kinds of
failure:
• the standard failure value indicates no solution;
• the cutoff value indicates no solution within the
depth limit.
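The recursive algorithm with its two failure values can be sketched as follows (a simplified illustration of the Figure 3.17 idea; the example tree is invented):

```python
def depth_limited_search(node, goal, successors, limit):
    """Recursive DLS: returns a path on success, None for the
    standard failure (no solution), or "cutoff" (no solution
    within the depth limit)."""
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"                  # depth limit reached
    cutoff_occurred = False
    for child in successors.get(node, []):
        result = depth_limited_search(child, goal, successors, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [node] + result       # solution found below this node
    return "cutoff" if cutoff_occurred else None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
print(depth_limited_search("A", "F", tree, 2))  # -> ['A', 'C', 'F']
print(depth_limited_search("A", "F", tree, 1))  # -> cutoff
```

With l = 1 the goal lies beyond the limit, so the search reports "cutoff" rather than the standard failure, which is what lets a caller distinguish "no solution" from "limit too small".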



Iterative deepening depth-first search
• It gradually increases the limit—first 0, then 1, then 2, and so on—until a goal is
found.
• This will occur when the depth limit reaches d, the depth of the shallowest goal
node.
• Iterative deepening combines the benefits of depth-first and breadth-first
search.
• Advantages:
• Like depth-first search, its memory requirements are modest: O(bd) to be
precise.
• Like breadth-first search, it is complete when the branching factor is finite and
• optimal when all step costs are the same.
• Figure 3.19 shows four iterations of ITERATIVE-DEEPENING-SEARCH on a binary
search tree, where the solution is found on the fourth iteration.
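The limit-increasing loop can be sketched as below (a self-contained illustration wrapping a recursive depth-limited search; the example tree is invented):

```python
def iterative_deepening_search(start, goal, successors):
    """Run depth-limited search with limit 0, 1, 2, ... until a
    solution appears (or the finite tree is exhausted)."""
    def dls(node, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return "cutoff"
        cutoff = False
        for child in successors.get(node, []):
            result = dls(child, limit - 1)
            if result == "cutoff":
                cutoff = True
            elif result is not None:
                return [node] + result
        return "cutoff" if cutoff else None

    limit = 0
    while True:
        result = dls(start, limit)
        if result != "cutoff":
            return result                # a path, or None if no goal exists
        limit += 1                       # deepen and start over

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
print(iterative_deepening_search("A", "F", tree))  # -> ['A', 'C', 'F']
```

The "cutoff" value is what drives the loop: as long as some branch was truncated, a deeper iteration may still succeed; once no branch is cut off, the search can safely stop.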





• Iterative deepening search may seem wasteful because states are
generated multiple times, but this is not too costly.
• The reason is that in a search tree, most of the nodes are in the
bottom level rather than the upper levels,
• so it does not matter much that nodes in the upper levels are
generated multiple times.
• In an iterative deepening search, the nodes on the bottom level
(depth d) are generated once,
• those on the next-to-bottom level are generated twice, and so on,
up to the children of the root, which are generated d times.
• So the total number of nodes generated in the worst case is
• N(IDS) = (d)b + (d−1)b^2 + . . . + (1)b^d
• which gives a time complexity of O(b^d), asymptotically the same as
breadth-first search.
• There is some extra cost for generating the upper levels multiple
times, but it is not large.
• For example, if b = 10 and d = 5, the numbers are
• N(IDS) = 50 + 400 + 3, 000 + 20, 000 + 100, 000 = 123, 450
• N(BFS) = 10 + 100 + 1, 000 + 10, 000 + 100, 000 = 111, 110 .
• In general, iterative deepening is the preferred uninformed search
method when the search space is large and the depth of the solution
is not known.



Bidirectional search
• The idea behind bidirectional search is to run two
simultaneous searches
• one forwards from the initial state
• and the other backwards from the goal;
• The algorithm stops when the two searches meet in the middle
(Figure 3.20).
• The forward search starts from the initial state, generates
successors, and searches for the goal;
• the backward search starts from the goal state and searches
backward.
• The goal test checks whether the frontiers of the two
searches intersect; if they do, a solution has been found.
• For example, if a problem has solution depth d=6, and each direction runs breadth-
first search one node at a time,
• then in the worst case the two searches meet when they have generated all of the
nodes at depth 3.
• When breadth-first search is used in both directions,
• the space complexity is O(b^(d/2)),
• and the time complexity is O(b^(d/2)).
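A sketch of the two-frontier idea (assuming an undirected graph encoded as an adjacency dictionary; the chain graph used below is invented):

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Run two breadth-first searches, forward from the start and
    backward from the goal, stopping when the frontiers intersect.
    Assumes an undirected graph (backward steps use the same edges)."""
    if start == goal:
        return [start]
    fwd = {start: [start]}               # node -> path start..node
    bwd = {goal: [goal]}                 # node -> path goal..node
    fq, bq = deque([start]), deque([goal])
    while fq or bq:
        for q, this, other, forward in ((fq, fwd, bwd, True),
                                        (bq, bwd, fwd, False)):
            if not q:
                continue
            node = q.popleft()
            for nb in neighbors.get(node, []):
                if nb in other:          # frontiers meet: join the two paths
                    if forward:
                        return this[node] + [nb] + other[nb][-2::-1]
                    return other[nb] + [node] + this[node][-2::-1]
                if nb not in this:
                    this[nb] = this[node] + [nb]
                    q.append(nb)
    return None

chain = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
         "D": ["C", "E"], "E": ["D", "F"], "F": ["E"]}
print(bidirectional_search("A", "F", chain))
# -> ['A', 'B', 'C', 'D', 'E', 'F']
```

On this depth-5 chain the two searches meet near the middle, so each side only has to reach roughly half the solution depth, which is where the O(b^(d/2)) bound comes from.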



INFORMED (HEURISTIC) SEARCH STRATEGIES
• an informed search strategy uses problem-specific knowledge in
addition to the problem definition
• So can find solutions more efficiently than can an uninformed
strategy.
• BEST-FIRST SEARCH
• The general approach for informed search is best-first search.
• This algorithm uses an evaluation function, f(n).
• Nodes are chosen for expansion based on the evaluation function
f(n);
• usually the nodes with the lowest f(n) are chosen for expansion.
• Variants of best-first search: Based on evaluation function f(n)
• 1.Greedy BFS :f(n)=h(n)
• 2.A* Search: f(n)=g(n)+h(n)
Heuristic function h(n)
• It gives additional knowledge about the problem to the
algorithm.
• Example: in the shortest-path problem, SLD (straight-line distance)
is used as h(n).
SLD is estimated from experience.
SLD gives the estimated cost of the cheapest path.
SLD is never more than the actual cost.



Greedy best-first search
• evaluates nodes by using heuristic function;
f(n) = h(n).
• It expands the node with lowest f(n)
• It ignores the actual path cost between nodes
• Hence may not be optimal
• also incomplete even in a finite state space,
much like depth-first search
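A sketch of greedy best-first search; the S-to-G graph and the heuristic values are invented, and chosen so that the heuristic misleads the search onto a suboptimal path:

```python
import heapq

def greedy_best_first(start, goal, successors, h):
    """Best-first search with f(n) = h(n): always expand the node
    that appears closest to the goal, ignoring path cost so far."""
    frontier = [(h(start), start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for child, _step in successors.get(node, []):  # step cost ignored
            if child not in visited:
                visited.add(child)
                heapq.heappush(frontier, (h(child), child, path + [child]))
    return None

# Hypothetical graph and heuristic values, not from the slides
graph = {"S": [("A", 1), ("B", 2)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 6, "A": 4, "B": 5, "G": 0}.get
print(greedy_best_first("S", "G", graph, h))  # -> ['S', 'A', 'G']
```

Here the search returns S-A-G with cost 6 even though S-B-G costs only 3, illustrating why ignoring the actual path cost sacrifices optimality.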



• The worst-case time and space complexity for
the tree version is O(b^m), where m is the
maximum depth of the search space.
• With a good heuristic function, however, the
complexity can be reduced substantially.
• The amount of the reduction depends on the
particular problem and on the quality of the
heuristic.



1)Find shortest path from S to G using Greedy BFS





A* Search
• The most widely known form of best-first search is called A∗ search.
• It avoids expanding paths that are already expensive.
• It evaluates nodes by combining g(n) and h(n):
• f(n) = g(n) + h(n)
• where g(n) is the path cost traversed to reach node n from the root,
• h(n) is the estimated cost from node n to the goal, and
• f(n) is the estimated cost of the cheapest solution through n.
• The algorithm is identical to UNIFORM-COST-SEARCH except that A∗ uses g +
h instead of g.
• At each step the node with the lowest f(n) is chosen for expansion.
• A* is both complete and optimal if h(n) meets some conditions.
• A∗ is optimally efficient for any given consistent heuristic.
• Optimally efficient means that to reach the solution, A* generates fewer nodes than
any other optimal algorithm.
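A sketch of the f = g + h evaluation on an invented S-to-G graph with a made-up admissible heuristic (the numbers are illustrative, not from the slides):

```python
import heapq

def a_star(start, goal, successors, h):
    """A* search: expand the frontier node with the lowest
    f(n) = g(n) + h(n); goal test on expansion, as in UCS."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for child, step in successors.get(node, []):
            new_g = g + step
            if child not in best_g or new_g < best_g[child]:
                best_g[child] = new_g    # better path to child found
                heapq.heappush(frontier,
                               (new_g + h(child), new_g, child, path + [child]))
    return None

# Hypothetical graph; h never overestimates, so it is admissible
graph = {"S": [("A", 1), ("B", 2)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 3, "A": 4, "B": 1, "G": 0}.get
print(a_star("S", "G", graph, h))  # -> (3, ['S', 'B', 'G'])
```

Because g(n) is included, A* recovers the optimal path S-B-G with cost 3, unlike a search on h(n) alone.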



1)Find shortest path from S to G using A*



Conditions for optimality: Admissibility and consistency
• condition 1: h(n) be an admissible heuristic.
• An admissible heuristic is one that never overestimates the cost to reach the goal.
• Because g(n) is the actual cost to reach n along the current path, and f(n)=g(n) +
h(n), we have as an immediate consequence that f(n) never overestimates the true
cost of a solution along the current path through n.
• Straight-line distance is admissible because the shortest path between any two
points is a straight line
• Condition 2: h(n) must be consistent (sometimes called monotonicity)
• A heuristic h(n) is consistent if, for every node n and every successor n’ of n
generated by any action a, the estimated cost of reaching the goal from n is no
greater than the step cost of getting to n’ plus the estimated cost of reaching the
goal from n’ :
• h(n) ≤ c(n, a, n’) + h(n’) .
• This is a form of the general triangle inequality
• A∗ has the following properties:
• the tree-search version of A∗ is optimal if h(n) is admissible,
• while the graph-search version is optimal if h(n) is consistent.
Limitations of A*
• Time complexity is high.
• Space complexity is also high, since it keeps all the generated nodes
in memory.
• Therefore it is suitable only for small-scale problems.
• To reduce the memory requirement of A*, memory-bounded
heuristic search algorithms like IDA* and RBFS can be used.
• Advantage of A*:
• A* is optimally efficient.



IDA*
• By applying iterative deepening to A*, IDA* is obtained.
• The main difference between IDA∗ and standard iterative
deepening is that
• in standard iterative deepening the depth limit is used as the
cutoff,
• while in IDA*, the f-cost f(n) is used as the cutoff rather than the depth;
• at each iteration, the cutoff value is the smallest f(n) of any
node that exceeded the cutoff on the previous iteration.
• It is complete and optimal, but can be more costly than A* because
nodes are regenerated across iterations.
• IDA∗ is practical for many problems with unit step costs and
avoids the substantial overhead associated with keeping a
sorted queue of nodes.



Recursive Best First Search(RBFS)
• it uses the f-limit variable to keep track of the f-value of the
best alternative path available from any ancestor of the
current node.
• If the current node exceeds this limit, the recursion unwinds
back to the alternative path.
• As the recursion unwinds, RBFS replaces the f-value of each
node along the path with a backed-up value(the best f-value
of its children).
• In this way, RBFS remembers the f-value of the best leaf in
the forgotten subtree and can therefore decide whether it’s
worth reexpanding the subtree at some later time.
• Figure 3.27 shows how RBFS reaches Bucharest.
• RBFS is somewhat more efficient than IDA∗, but still suffers
from excessive node regeneration.
• IDA∗ and RBFS suffer from using too little memory
• Between iterations, IDA∗ retains only a single number: the current f-cost limit.
• RBFS retains more information in memory, but it uses only linear space: even if more
memory were available, RBFS has no way to make use of it.
• Therefore, to use all available memory, two algorithms are used:
• 1. MA∗ (memory-bounded A∗)
• 2. SMA∗ (simplified MA∗)

• SMA∗— is simpler
• SMA∗ proceeds just like A∗, expanding the best leaf until memory is full.
• At this point, it cannot add a new node to the search tree without dropping an old
one.
• SMA∗ always drops the worst leaf node—the one with the highest f-value.
• Like RBFS, SMA∗ then backs up the value of the forgotten node to its parent. In this
way, the ancestor of a forgotten subtree knows the quality of the best path in that
subtree.
• With this information, SMA∗ regenerates the subtree only when all other paths have
been shown to look worse than the path it has forgotten.
• Another way of saying this is that, if all the descendants of a node n are forgotten,
then we will not know which way to go from n, but we will still have an idea of how
worthwhile it is to go anywhere from n.



Local Search Algorithms And Optimization Problems
• For many problems, both reaching the goal state and the path taken to reach it are important.

• Example: in the shortest-path problem, the goal state and the path followed are both important.
• But for some problems only the goal state is important, and the path used to reach it is irrelevant.

• Example: in the 8-queens problem, only the final board configuration (8 queens placed in a non-attacking manner) is important,
• and the order in which the queens are added to the board is not important.
• In such problems, a local search algorithm can be used.

• Local Search Algorithm

• It starts with a single current state and moves only to neighbouring states;
• the path followed by the search is not retained in memory.

• Advantages of local search algorithms:

• They need only a small amount of memory, since the path followed by the search is not retained.
• They can solve continuous-space problems, which cannot be solved by systematic algorithms like A*, BFS, and DFS.
• They can solve optimization problems, such as:

• Integrated - circuit design


• Factory - floor layout
• Job-shop scheduling
• Automatic programming
• Vehicle routing
• Telecommunications network optimization
• The local search problem is explained with the state-space
landscape.
• A landscape has:
• Location - defines the state
• Elevation - defines the value of the objective function or
heuristic cost function
• If elevation corresponds to cost, then the global minimum needs
to be reached.
• If elevation corresponds to an objective function, then the
global maximum needs to be reached.
• Global maximum: the state in the state space where the objective
function reaches its highest peak
• Local maximum: a peak that is higher than each of its
neighboring states, but lower than the global maximum
• Plateau or shoulder: an area of the state-space
landscape where the evaluation function is flat
• Ridges: ridges result in a sequence of local maxima that is
very difficult for greedy algorithms to navigate



• local search algorithms are:
• 1. Hill climbing search (Greedy Local Search)
• 2. Simulated annealing
• 3. Local beam search
• 4. Genetic Algorithm (GA)



Hill Climbing Search
• Is the basic local search.
• At each step the current node is replaced by the best neighbor (the neighbor
with the highest objective value).
• The hill-climbing search algorithm is simply a loop that continually moves
in the direction of increasing value.
• It terminates when it reaches a "peak" where no neighbor has a higher
value.
• The algorithm does not maintain a search tree;
• it records only the current state and its objective function value.
• Hill climbing often gets stuck for the following reasons:
• Local maxima: hill climbing that reaches a local maximum cannot
escape it
• (other local search algorithms can overcome this)
• Plateau: hill climbing that reaches a plateau cannot make progress
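The loop described above can be sketched on an invented one-dimensional objective (a minimal illustration; the objective and integer neighbor function are made up):

```python
def hill_climbing(start, neighbors, value):
    """Steepest-ascent hill climbing: keep moving to the best
    neighbor until no neighbor improves the objective value."""
    current = start
    while True:
        candidates = neighbors(current)
        if not candidates:
            return current
        best = max(candidates, key=value)
        if value(best) <= value(current):
            return current               # a peak (possibly only a local maximum)
        current = best

def value(x):
    return -(x - 3) ** 2                 # hypothetical objective, single peak at x = 3

def int_neighbors(x):
    return [x - 1, x + 1]

print(hill_climbing(0, int_neighbors, value))  # -> 3
```

Only the current state and its value are kept; with a single peak the loop always reaches x = 3, but on a multi-peaked objective it would stop at whichever local maximum it climbs first.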



• Variants of hill-climbing
• Stochastic hill climbing - it does not examine all neighbors; it chooses
among the uphill moves at random; the probability of selection
can vary with the steepness of the uphill move.
• First-choice hill climbing - a variant of stochastic hill climbing; it generates
successors randomly until one is generated that is better than the current state.
• The hill-climbing algorithms described so far are incomplete: they often fail to
find a goal when one exists because they can get stuck on local maxima.
• Random-restart hill climbing - used whenever the search gets stuck at a
local maximum or plateau.
• It conducts a series of hill-climbing searches
from randomly generated initial states, stopping
when a goal is found.



Simulated annealing search
• An algorithm which combines hill climbing with random moves to achieve both
efficiency and completeness.
• In metallurgy, annealing is the process of heating a material to a high
temperature and then gradually cooling it, allowing it to settle into a
low-energy state.
• When the search is stuck at a local maximum, simulated annealing escapes by
allowing some "bad" moves,
• but it gradually decreases their size and frequency.
• Instead of picking the best move, it picks a random move;
• if the move improves the situation, it is accepted.
• Otherwise it accepts the move with some probability < 1.
• SA uses a control parameter T (temperature).
• T starts with a high value and is gradually reduced toward 0.
• SA requires an annealing schedule that determines the value of the temperature
as a function of time.
• When T is high, bad moves have a high probability of acceptance;
• when T is low, bad moves have a low probability of acceptance.
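The acceptance rule can be sketched as follows (a simplified illustration; the objective, neighbor function, and cooling schedule are all invented):

```python
import math
import random

def simulated_annealing(start, neighbors, value, schedule):
    """Pick a random move; always accept improvements, and accept a
    worsening move with probability e^(dE/T).  T falls toward 0 over
    time, so bad moves become rarer as the search proceeds."""
    current = start
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:
            return current               # annealing finished
        nxt = random.choice(neighbors(current))
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt                # accept the move
        t += 1

def value(x):
    return -(x - 3) ** 2                 # hypothetical objective, peak at x = 3

def int_neighbors(x):
    return [x - 1, x + 1]

schedule = lambda t: 0 if t >= 200 else 2 * 0.95 ** t
random.seed(0)
result = simulated_annealing(0, int_neighbors, value, schedule)
```

Early on (T near 2) downhill moves are accepted fairly often; by the end of the schedule e^(dE/T) is essentially zero for any bad move, so the walk behaves like hill climbing.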
• Local beam search
• It uses K states and generates successors for K
states in parallel instead of one state and its
successors in sequence.
• The useful information is passed among the K
parallel threads.
• The sequence of steps to perform local beam
search is given below:



• Steps:
• Start with K randomly generated states.
• At each iteration, all the successors of all K states are
generated.
• If any one is a goal state, stop; else select the K best
successors from the complete list and repeat.
• This search will suffer from lack of diversity among K
states.
• Therefore a variant named as stochastic beam search
selects K successors at random, with the probability of
choosing a given successor being an increasing
function of its value.
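The steps above can be sketched as below (an illustration; the objective, goal test, and starting states are invented):

```python
import heapq

def local_beam_search(states, successors, value, is_goal, max_rounds=100):
    """Keep the k best successors of the current k states each round;
    stop as soon as any generated state satisfies the goal test."""
    k = len(states)
    for _ in range(max_rounds):
        # all successors of all k states, generated in parallel conceptually
        pool = [s for state in states for s in successors(state)]
        if not pool:
            return None
        for s in pool:
            if is_goal(s):
                return s
        states = heapq.nlargest(k, pool, key=value)   # k best survive
    return None

result = local_beam_search([0, 1],
                           lambda x: [x - 1, x + 1],
                           lambda x: -(x - 7) ** 2,
                           lambda x: x == 7)
print(result)  # -> 7
```

Unlike k independent hill climbs, the k survivors are picked from the combined pool, so useful information flows between the parallel threads: weak states are abandoned in favor of successors of stronger ones.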



• Genetic Algorithms (GA)
• A genetic algorithm (or GA) is a variant of stochastic beam
search in which successor states are generated by combining
two parent states, rather than by modifying a single state.
• GA begins with a set of k randomly generated states, called
the population.
• Each state, or individual, is represented as a string of
digits.
• For example, in the 8-queens problem, each state can be
represented as 8 digits, each in the range 1 to 8, giving the row
position of the queen in each column.



• Initial population: K randomly generated states.
• Fitness function: each state is rated by a fitness function that measures
how close the state is to the goal state;
• it returns higher values for better states.
• Selection: based on the fitness function, pairs are chosen at random
for reproduction.
• In the example, one state is chosen twice (selection probability 29%) and
another state is not chosen at all (selection probability 14%).
• Crossover: a crossover point is randomly chosen from the positions
in the string.
• For the first pair the crossover point is chosen after 3 digits, and after
5 digits for the second pair.
• During the crossover operation, the first child of the first pair gets its first three digits
from the first parent and the remaining digits from the second parent;
• similarly, the second child of the first pair gets its first 3 digits
from the second parent and the remaining digits from the first
parent.
• Mutation: applied to each position of the
state with a small independent probability.
• In the example, one digit was mutated in the
first, third, and fourth offspring.
• Therefore this algorithm takes a set of states and
applies the selection, crossover, and mutation
operations, returning a single state with a high
fitness value.
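The selection/crossover/mutation cycle can be sketched for the 8-queens representation described above (a simplified illustration; the population size, rates, and the +1 weight smoothing are arbitrary choices, not from the slides):

```python
import random

def fitness(state):
    """Number of non-attacking queen pairs; 28 means a solution."""
    n = len(state)
    attacks = sum(1 for i in range(n) for j in range(i + 1, n)
                  if state[i] == state[j] or abs(state[i] - state[j]) == j - i)
    return 28 - attacks

def genetic_algorithm(population, generations=30, mutation_rate=0.1):
    """Each generation: fitness-weighted selection of parents, single-point
    crossover, per-digit mutation.  Returns the best individual ever seen."""
    best = max(population, key=fitness)
    for _ in range(generations):
        weights = [fitness(p) + 1 for p in population]   # +1 avoids zero weights
        new_pop = []
        for _ in range(len(population)):
            x, y = random.choices(population, weights=weights, k=2)
            point = random.randint(1, len(x) - 1)        # crossover point
            child = x[:point] + y[point:]
            child = [random.randint(1, 8) if random.random() < mutation_rate
                     else gene for gene in child]        # mutation
            new_pop.append(child)
        population = new_pop
        best = max(population + [best], key=fitness)
    return best

random.seed(0)
pop = [[random.randint(1, 8) for _ in range(8)] for _ in range(20)]
best = genetic_algorithm(pop)
```

Tracking the best individual across generations makes the sketch monotone: the returned state is never worse than the best member of the initial population.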



Game Playing or
Adversarial search problems
• have competitive environments, in which the
agents’ goals are in conflict
• Ex: consider a game with two players, Max and Min.
• Max moves first, choosing the move of highest value, and the players take turns moving until the
game is over.
• Min moves as the opponent and tries to minimize the Max player's score, until
the game is over.
• At the end of the game (goal state or time), points are awarded to the winner.
• A game can be defined as a search problem with the following components:
• Initial state - includes the initial board position and identifies the player to
move.
• Successor function - returns a list of (move, state) pairs, each indicating a legal
move and the resulting state.
• Terminal test - determines when the game is over.
• Utility function (also called an objective function or payoff function) - which
• gives a numeric value for the terminal states. In chess, the outcome is a win, loss, or
draw, with values +1, -1, or 0.
• The initial state and the legal moves for each side
define the game tree for the game.(Prev Slide)
• Example : Tic – Tac – Toe (Noughts and Crosses)
• From the initial state, MAX has nine possible moves.
• Play alternates between MAX and MIN.
• MAX places an X and MIN places an O, until we
reach leaf nodes corresponding to terminal states such
that one player has three in a row or all the squares are
filled.
• Initial State : Initial Board Position
• Successor Function :
• Max placing X’s in the empty square
• Min placing O’s in the empty square
• Goal state: there are three different types of goal state, any one of which may be reached.
• i) If the O's are placed continuously in one column, one row, or a diagonal,
it is a goal state of the Min player. (Won by Min player)
• ii) If the X's are placed continuously in one column, one row, or a diagonal,
it is a goal state of the Max player. (Won by Max player)
• iii) If all nine squares are filled by X or O and there is no win
for either player, the game is a draw.
• Some terminal states: won by Max, won by Min, draw.

• Utility function
• Win = 1, Draw = 0, Loss = -1
• Utility value of terminal state indicates the utility value from the point of view of
MAX



Optimal strategies
• Given a game tree, the optimal strategy determines the optimal action
by examining the minimax value of each node, which we write as
MINIMAX-VALUE(n).

• Even a simple game like tic-tac-toe is too complex for us to draw the
entire game tree, so we will switch to the trivial game in Figure 5.2.



The possible moves for MAX at the root node are labeled a1, a2, and a3.
The possible replies to a1 for MIN are b1, b2,b3, and so on.

The terminal nodes show the utility values for MAX;


The other nodes are labeled with their minimax values.
MAX'S best move at the root is a1, because it leads to the successor with the highest
minimax value
MIN'S best reply is b1, because it leads to the successor with the lowest minimax
value.
The root node is a MAX node; its successors have minimax values 3, 2, and 2; so it
has a minimax value of 3.
The first MIN node, labeled B, has three successors with values 3, 12, and 8, so its
minimax value is 3
The minimax algorithm
• The minimax algorithm computes the optimal action,
or the minimax decision, from the current state.
• It uses a simple recursive depth-first search of the game
tree.
• The recursion proceeds all the way down to the
leaves of the tree, and then the minimax values are
backed up as the recursion unwinds.

• Steps to find the optimal action in any state using MINIMAX:
• 1. Generate the whole game tree, all the way down to the terminal
states.
• 2. Apply the utility function to each terminal state to get its value.
• 3. Using the MINIMAX algorithm, compute the minimax values of all nodes
from the utility values of the terminal states.
• 4. The algorithm returns the optimal action (the minimax decision) for the
current state.
• Assumption: the opponent plays perfectly to minimize the
MAX player's score.
• MINIMAX algorithm trace
• The algorithm first recurses down to the three bottom-left nodes
and uses the UTILITY function on them to discover that their
values are 3, 12, and 8 respectively.
• Then it takes the minimum of these values, 3, and returns it as the
backed-up value of node B.
• A similar process gives the backed-up values of 2 for C and 2 for D.
• Finally, we take the maximum of 3, 2, and 2 to get the backed-up
value of 3 for the root node.
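The trace above can be sketched in a few lines of Python (an illustrative sketch, not the textbook's MINIMAX-VALUE pseudocode). The two-ply tree is hard-coded as nested lists, with integers as terminal utilities.

```python
# Minimax on a hard-coded game tree: a leaf is an integer utility,
# an inner node is a list of child nodes.
def minimax(node, is_max):
    if isinstance(node, int):          # terminal state: return its utility
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# Root is MAX; B, C, D are the MIN nodes with the leaf utilities above.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))             # backed-up value of the root: 3
```

The values 3, 2, and 2 are backed up from B, C, and D exactly as in the trace, and the root takes their maximum.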
• NPTEL videos
• https://www.youtube.com/watch?v=TlVgUWdUpwc

• Complexity : If the maximum depth of the
tree is m and there are b legal moves at each
point, then the time complexity of the
minimax algorithm is O(b^m).
• The algorithm is a depth-first search, so
the space requirement is linear in
m and b.
• Completeness : If the tree is finite, then it is
complete.
• Optimality : It is optimal when played against
an optimal opponent

• ALPHA - BETA PRUNING
• The problem with minimax search is that, to compute the optimal decision, the
minimax values of all nodes must be computed
• So the number of states to be examined is exponential in the depth of the
tree
• To overcome this, the ALPHA - BETA PRUNING algorithm can be used
• Pruning - the process of eliminating branches that cannot influence the final
decision
• Idea: compute the correct minimax decision without looking at every node
in the game tree
• The two parameters of the pruning technique are:
• Alpha (α) : the value of the best (i.e., highest-value) choice found so far along the path for MAX
• Beta (β) : the value of the best (i.e., lowest-value) choice found so far along the path for MIN

• Alpha - Beta pruning : the alpha and beta values are applied to a minimax
tree
• Alpha - Beta pruning returns the same move as minimax, but prunes away
branches that cannot possibly influence the final decision
Consider again the two-ply game tree from Figure 5.2

Two types of cut off
• Alpha–beta search updates the values of α and β as
it goes along and prunes the remaining branches at
a node (i.e., terminates the recursive call) as soon
as the value of the current node is known to be
worse than the current α or β value for MAX or
MIN, respectively
• 1. α cut-off
• At a MIN node, if β <= α of a MAX ancestor, then the
remaining leaves are cut off. This is called an α cut-off.
• 2. β cut-off
• At a MAX node, if α >= β of a MIN ancestor, then the
remaining leaves are cut off. This is called a β cut-off.
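The two cut-offs can be seen in a short Python sketch (my own illustration of the idea, not the textbook's ALPHA-BETA-SEARCH pseudocode). On the two-ply tree used earlier, the second and third leaves of node C are never examined.

```python
import math

# Alpha-beta pruning: leaves are integers, inner nodes are lists of children.
def alphabeta(node, alpha, beta, is_max):
    if isinstance(node, int):
        return node
    if is_max:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:        # cut-off: a MIN ancestor won't allow this
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:            # cut-off: a MAX ancestor won't allow this
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, -math.inf, math.inf, True))   # same answer as minimax: 3
```

After node B returns 3, alpha at the root is 3; when C's first leaf yields 2, beta at C becomes 2 <= 3 and the remaining leaves of C are pruned.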
Effectiveness of Alpha – Beta Pruning
• With good move ordering, the Alpha - Beta pruning algorithm needs to examine
only O(b^(d/2)) nodes to pick the best move,
instead of O(b^d) with the minimax algorithm; that is,
the effective branching factor becomes √b instead of b.

Assignment 2
Question 1: Evaluate using
a) minimax algorithm b) alpha-beta pruning

• Assignment 2: Question 2: Evaluate using
a) minimax algorithm b) alpha-beta pruning

• Multiplayer games
• Let us examine how to extend the minimax idea to multiplayer
games
• First, we need to replace the single value for each node with a
vector of values.
• For example, in a three-player game with players A, B, and C, a
vector vA, vB, vC is associated with each node.
• For terminal states, this vector gives the utility of the state from
each player’s viewpoint.
• Now we have to consider nonterminal states. Consider the node
marked X in the game tree shown in Figure 5.4.
• In that state, player C chooses what to do. The two choices lead to
terminal states with utility vectors vA =1, vB =2, vC =6 and vA =4, vB
=2, vC =3.
• Since 6 is bigger than 3, C should choose the first move. This means
that if state X is reached, subsequent play will lead to a terminal
state with utilities vA =1, vB =2, vC =6.
• Hence, the backed-up value of X is this vector.

Constraint satisfaction problem
• A constraint satisfaction problem consists of three
components, X,D, and C:
• X is a set of variables, {X1, . . . ,Xn}.
• D is a set of values or domains, {D1, . . . ,Dn}, for
each variable.
• C is a set of constraints
• A solution is an assignment of a value from Di to each
variable Xi such that every constraint is satisfied

Some examples for CSPs are:
• The n-queens problem
• A crossword puzzle
• A map coloring problem
• The Boolean satisfiability problem
• A cryptarithmetic problem

• Types of variables
• 1. Discrete variables 2.Continuous variable
• Discrete variables can have
•  Finite Domains
•  Infinite domains

• Finite domains
• The simplest kind of CSP involves variables that are discrete and have finite domains.
• Map coloring problems are of this kind. The 8-queens problem can also be viewed as a finite-
domain CSP, where the variables Q1, Q2, ….., Q8 are the positions of the queens in columns 1,….,8
and each variable has the domain {1,2,3,4,5,6,7,8}.
• If the maximum domain size of any variable in a CSP is d, then the number of possible
complete assignments is O(d^n) - that is, exponential in the number of variables n.
• Finite domain CSPs include Boolean CSPs, whose variables can be either true or false.

• Infinite domains
• Discrete variables can also have infinite domains - for example, the set of integers or the
set of strings. With infinite domains, it is no longer possible to describe constraints by
enumerating all allowed combinations of values. For example, if Job1, which takes five
days, must precede Job3, then we would need a constraint language of algebraic
inequalities, such as
• Startjob1 + 5 <= Startjob3.
Continuous variables
• CSPs with continuous domains are very common in the
real world. For example, in the operations research field, the
scheduling of experiments on the Hubble Space Telescope
requires very precise timing of observations; the start
and finish of each observation and maneuver are
continuous-valued variables that must obey a variety of
astronomical, precedence, and power constraints.
• The best known category of continuous-domain CSPs
is that of linear programming problems, where the
constraints must be linear inequalities forming a
convex region. Linear programming problems can be
solved in time polynomial in the number of variables.

Types of constraints :
• 1. Unary constraints - restrict a single variable.
• Example : SA ≠ green
• 2. Binary constraints - relate pairs of variables.
• Example : SA ≠ WA
• 3. Higher-order constraints involve 3 or more
variables.
• Example for Constraint Satisfaction Problem :
• The map coloring problem: the task is to color each region
red, green, or blue in such a way that no neighboring
regions have the same color.
• The map-coloring problem can be represented as a constraint
graph, where the nodes are the variables and the edges are
the binary constraints.

• To formulate this as CSP, we define
• the variables are
{ WA, NT, Q, NSW, V, SA, and T}.
• The domain of each variable
is {red, green, blue}.
• The constraints
{WA ≠ NT, SA ≠ NT, WA ≠SA,…….}
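This formulation can be written out directly as Python data (an illustrative encoding of the variables, domains, and binary constraints; the helper name `satisfies` is my own, not from the slides):

```python
# Map-coloring CSP for Australia, as plain data structures.
variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}
# Binary inequality constraints: neighbouring regions must differ.
neighbours = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
              ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"),
              ("NSW", "V")]

def satisfies(assignment):
    """True if no constraint is violated by a (possibly partial) assignment."""
    return all(assignment[a] != assignment[b]
               for a, b in neighbours
               if a in assignment and b in assignment)

print(satisfies({"WA": "red", "NT": "green", "SA": "blue"}))   # True
```

A solution is then any complete assignment for which `satisfies` returns True.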

• Example 2: cryptarithmetic puzzles. Each letter stands for
a distinct digit.
• The aim is to find a substitution of digits for letters such
that the resulting sum is arithmetically correct, with the
added restriction that no leading zeros are allowed.

• We define the 3 CSP components:
• The variables are
{F, T, U, W, R, O, X1, X2, X3}.
• The domain of each variable
is {0, 1, …, 9}.
• The constraints are:
• C1: ALLDIFF(F, T, U, W, R, O)
• C2: O + O = R + 10 · X1
• C3: X1 + W + W = U + 10 · X2
• C4: X2 + T + T = O + 10 · X3
• C5: X3 = F
• Where X1, X2, and X3 are auxiliary variables representing the digit
(0 or 1) carried over into the next column.
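These constraints can be checked by brute force over digit assignments (a naive exhaustive search for illustration only, not an efficient CSP solver; note that `itertools.permutations` already enforces the ALLDIFF constraint by producing distinct digits):

```python
from itertools import permutations

# Solve TWO + TWO = FOUR by trying every assignment of distinct digits
# to F, T, U, W, R, O (the carries X1..X3 are implicit in the arithmetic).
def solve_two_two_four():
    for perm in permutations(range(10), 6):
        a = dict(zip("FTUWRO", perm))
        if a["T"] == 0 or a["F"] == 0:          # no leading zeros
            continue
        two = 100 * a["T"] + 10 * a["W"] + a["O"]
        four = 1000 * a["F"] + 100 * a["O"] + 10 * a["U"] + a["R"]
        if two + two == four:
            return a
    return None

sol = solve_two_two_four()
print(sol)   # one valid assignment (several exist, e.g. 734 + 734 = 1468)
```

This checks up to 10·9·8·7·6·5 = 151,200 assignments, which is instant here but shows why CSP solvers prune rather than enumerate.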

Backtracking Search for CSPs
• Backtracking search is used for CSPs; it chooses a
value for one variable at a time and backtracks when a
variable has no legal values left to assign.
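A minimal version of this search for the map-coloring problem might look like the following (a sketch of the idea, not the textbook's BACKTRACKING-SEARCH pseudocode; the variable ordering here is simply "first unassigned"):

```python
# Backtracking search: assign one variable at a time; undo the assignment
# (backtrack) when no value is consistent with the constraints.
def backtrack(assignment, variables, domains, neighbours):
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)  # first unassigned
    for value in domains[var]:
        if all(assignment.get(n) != value for n in neighbours[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbours)
            if result is not None:
                return result
            del assignment[var]      # no solution below this choice: undo it
    return None

variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbours = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
              "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
              "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": []}
print(backtrack({}, variables, domains, neighbours))
```

The heuristics discussed next (MRV, degree, least-constraining value) plug into the two choice points: which `var` to pick and in what order to try its values.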

Part of search tree generated by
simple backtracking for the map
coloring problem

Backtracking example

Improving backtracking efficiency
• General-purpose methods can give huge gains
in speed:
– Which variable should be assigned next, and in
what order should its values be tried?
(variable/value ordering)
– Can we detect inevitable failure early?
(by propagating constraints)

Variable/value ordering
• It is a method to choose the next unassigned variable
and its value, using 3 heuristics:
• 1. Minimum remaining values (MRV)
heuristic
• 2. Degree heuristic
• 3. Least-constraining-value heuristic

Minimum remaining values (MRV) heuristic
• It chooses the variable with the fewest remaining legal values.

• After assigning WA = R and NT = G, there is only one possible value for SA, whereas Q has
2 possible values and the remaining variables have 3 possible values.
• The idea is to choose SA rather than Q (i.e., choose the variable with the fewest legal
values).
• This heuristic picks the variable most likely to fail soon; if a variable has no legal
values left, the failure is detected immediately, avoiding pointless search through other
variables
• and allowing the tree to be pruned.
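Sketched in Python (the helper name `select_mrv` and the domain snapshot are illustrative assumptions chosen to match the example above):

```python
# MRV: among the unassigned variables, pick the one whose current
# domain has the fewest remaining legal values.
def select_mrv(domains, assignment):
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

# Domain snapshot after WA = R, NT = G in the map-coloring example:
domains = {"SA": {"blue"}, "Q": {"blue", "red"},
           "NSW": {"red", "green", "blue"}}
print(select_mrv(domains, {}))   # SA: only one legal value left
```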

Degree heuristic or most constraining
variable heuristic
• It is used at the beginning, when none of the variables is assigned
• It chooses the variable involved in the largest number of constraints
• Here SA has 5 constraints, and the other variables WA, NT, Q, NSW, V have
2, 3, 3, 3, 2 constraints respectively
• If SA is assigned first, the other variables can be assigned values with
no backtracking

Least-constraining-value heuristic
• Given a variable, it prefers the value that constrains the neighboring
variables least
– the one that rules out the fewest values in the remaining variables
• This leaves maximal flexibility for a solution.
• For example, after assigning WA = R and NT = G, choosing blue for Q is a
bad choice, because it eliminates the last legal value left for Q's
neighbor SA
• This heuristic therefore prefers red instead of blue for Q

• So far our search algorithms consider the constraints
on one variable at a time
• Instead of considering one variable at a time, we
propagate constraints earlier to reduce the
search space
• This is called propagating constraints
• 3 methods
• 1) Forward checking
• 2) Constraint propagation (arc consistency)
• 3) Intelligent backtracking
Forward checking
– It is the simplest form of propagation
– Whenever you assign a value to a variable, forward checking keeps track of the
remaining legal values for the unassigned variables
– If there is any inconsistency (some variable has no legal values left), the
search terminates and backtracks
– The figure shows the progress of map coloring with forward checking
– Whenever a variable X is assigned, the forward-checking process looks at each
unassigned variable Y that is connected to X and deletes the value of X from
Y's domain
– There are a few important points to notice about this example.
– First WA = R is assigned; forward checking then deletes R from the domains of the
neighboring variables NT and SA
– After assigning Q = G, G is deleted from the domains of the neighboring variables NT,
SA, and NSW
– After assigning V = B, B is deleted from the domains of the neighboring variables
NSW and SA, which leaves SA with no legal value
– The algorithm terminates this branch and backtracks at this point
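A sketch of the deletion step in Python (my own illustration; `forward_check` is a hypothetical helper that a full solver would call after every assignment):

```python
# Forward checking: after assigning var = value, delete value from each
# unassigned neighbour's domain; fail if some domain becomes empty.
def forward_check(var, value, domains, neighbours, assignment):
    for n in neighbours[var]:
        if n not in assignment:
            domains[n].discard(value)
            if not domains[n]:       # a variable has no legal values left
                return False         # terminate this branch and backtrack
    return True

domains = {"WA": {"red"}, "NT": {"red", "green", "blue"},
           "SA": {"red", "green", "blue"}}
neighbours = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
ok = forward_check("WA", "red", domains, neighbours, {"WA": "red"})
print(ok, sorted(domains["NT"]))     # True ['blue', 'green']
```

In the slide's example, the third call (after V = B) is the one that returns False, because SA's domain becomes empty.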
Constraint propagation
• Forward checking does not detect all inconsistencies,
• because it does not look far enough ahead; it detects an inconsistency only at a
later stage
• Constraint propagation propagates the implications of a constraint on one
variable onto the other variables
• Constraint propagation is substantially stronger than forward
checking and can still be fast.
• Methods
• 1. Arc consistency
• 2. Node consistency
• 3. Path consistency
• 4. K-consistency

Arc consistency
• Idea: whenever the domain of a variable is revised, the arcs pointing
to it need further revision, until no more inconsistency remains
• AC is a fast method of constraint propagation
• A network is arc-consistent if every variable is arc-consistent with
every other variable.
• Given the current domains of X and Y, an arc from X to Y is consistent
iff for every value v1 of X there is some value v2 of Y that is
consistent with v1

A Jeyanthi ASP/CSE MNMJEC 135


Arc consistency example
• 1. Assign WA = R.
• 2. R is deleted from its neighbors NT and SA, by considering the arcs
from WA to NT and WA to SA.
• 3. Assign Q = G; considering the arcs from Q to NT, Q to SA,
and Q to NSW, G is deleted from NT, SA, and NSW.
• 4. The algorithm goes back to the recently revised variables (NT, SA, NSW)
to check if there is any inconsistency with their neighbors.
• 5. When NT is checked against its neighbors, NT is inconsistent with SA,
so the algorithm backtracks at this point.


Algorithm for arc consistency: AC-3
• It uses a queue to keep track of the arcs that
need to be checked for inconsistency.
• Each arc (Xi, Xj) in turn is removed from the
queue and checked.
• If any values need to be deleted from the
domain of Xi, then every arc (Xk, Xi) pointing to
Xi must be reinserted on the queue for
checking.
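A compact AC-3 sketch for inequality (≠) constraints follows (an assumed implementation consistent with the description above, not the textbook pseudocode verbatim):

```python
from collections import deque

def revise(domains, xi, xj):
    """Remove values of xi with no supporting value in xj (constraint xi != xj)."""
    removed = {v for v in domains[xi] if all(v == w for w in domains[xj])}
    domains[xi] -= removed
    return bool(removed)

def ac3(domains, neighbours):
    queue = deque((xi, xj) for xi in neighbours for xj in neighbours[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj):
            if not domains[xi]:
                return False                 # inconsistency detected
            for xk in neighbours[xi]:
                if xk != xj:
                    queue.append((xk, xi))   # re-check arcs pointing to xi
    return True

domains = {"WA": {"red"}, "NT": {"red", "green"}, "SA": {"red"}}
neighbours = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
print(ac3(domains, neighbours))   # False: WA and SA are neighbours both forced to red
```

On a consistent network, AC-3 instead returns True with the domains reduced to their arc-consistent subsets.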
Node consistency
• A single variable (corresponding to a node in the CSP
network) is node-consistent if all the values in the
variable's domain satisfy the variable's unary
constraints.
• For example, in the variant of the Australia map-coloring
problem where South Australians dislike green, the variable SA
starts with the domain {red, green, blue}, and we can
make it node-consistent by eliminating green, leaving
SA with the reduced domain {red, blue}.
• We say that a network is node-consistent if every
variable in the network is node-consistent
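As a one-step sketch (illustrative only; the unary constraint is encoded as a predicate):

```python
# Node consistency: filter each variable's domain by its unary constraints.
domains = {"SA": {"red", "green", "blue"}}
unary = {"SA": lambda v: v != "green"}   # "South Australians dislike green"

for var, constraint in unary.items():
    domains[var] = {v for v in domains[var] if constraint(v)}

print(domains["SA"] == {"red", "blue"})   # True
```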

Path consistency
• A two-variable set {Xi, Xj} is path-consistent
with respect to a third variable Xm if, for every
assignment {Xi = a, Xj = b} consistent with the
constraints on {Xi, Xj}, there is an assignment
to Xm that satisfies the constraints on {Xi, Xm}
and {Xm, Xj}.
• This is called path consistency because one
can think of it as looking at a path from Xi to Xj
with Xm in the middle.

k-consistency
• A CSP is k-consistent if, for any set of k − 1
variables and for any consistent assignment to
those variables, a consistent value can always be
assigned to any kth variable.
• 1-consistency says that, given the empty set, we
can make any set of one variable consistent: this
is what we called node consistency.
• 2-consistency is the same as arc consistency.
• For binary constraint networks,
3-consistency is the same as path consistency.
Intelligent backtracking
• The previous BACKTRACKING-SEARCH algorithm has
a very simple policy for what to do when a branch of
the search fails: back up to the preceding variable and
try a different value for it. This is called chronological
backtracking, because the most recent decision point
is revisited.
• In this subsection, we consider better possibilities.
• Intelligent backtracking: here, if the search fails, the
algorithm backtracks to the decision point that caused the
conflict, rather than to the most recent decision point.
