Chapter 3 State Space Search


Structures and Strategies for State Space Search

3.0 Introduction
3.1 Graph Theory
3.2 Strategies for State Space Search
3.3 Using the State Space to Represent Reasoning with the Predicate Calculus
3.4 Epilogue and References
3.5 Exercises

George F Luger

ARTIFICIAL INTELLIGENCE 6th edition


Structures and Strategies for Complex Problem Solving

Why Search?
• To achieve goals or to maximize our utility, we need to predict
what the results of our future actions will be.

• There are many possible sequences of actions, each with its own
utility.

• We want to find, or search for, the best one.

Solving problems by searching
Questions that need to be answered include:

-Is the problem solver guaranteed to find a solution?
-Will the problem solver always terminate, or can it become caught in an
infinite loop?
-When a solution is found, is it guaranteed to be optimal?
-What is the complexity of the search process in terms of time and
memory usage?
-How can the interpreter most effectively reduce search complexity?
-How can an interpreter be designed to make the most effective use of a
representation language?
Search Overview
 Watch this video on search algorithms:
http://videolectures.net/aaai2010_thayer_bis/

Leonhard Euler (pronounced “Oiler”)
1707 - 1783



Figure 3.1: The city of Königsberg.

• Write the rules more formally:
  – Start on any land mass
  – Cross each bridge once and only once
• Can you do it?
Figure 3.2: Graph of the Königsberg bridge system.

rb: river bank
i: island

The graph preserves the essential structure of the bridge system.

• The question: is there a walk around the city that crosses each bridge
exactly once?
• Although the residents had failed to find such a walk and doubted that it
was possible, no one had proved its impossibility.
• Devising a form of graph theory, Euler created an alternative
representation for the map, presented in Figure 3.2.
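Euler's observation can be checked mechanically: an undirected graph has a walk that crosses every edge exactly once only if zero or exactly two vertices have odd degree. A minimal Python sketch (the vertex names rb1, rb2, i1, i2 are simply labels chosen here for the two river banks and two islands):

from collections import Counter

# The seven bridges of Königsberg as edges between the two
# river banks (rb1, rb2) and the two islands (i1, i2).
bridges = [("rb1", "i1"), ("rb1", "i1"), ("rb1", "i2"),
           ("rb2", "i1"), ("rb2", "i1"), ("rb2", "i2"),
           ("i1", "i2")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = [v for v, d in degree.items() if d % 2 == 1]
print(odd)                     # all four land masses have odd degree
print(len(odd) in (0, 2))      # False: no walk crosses every bridge exactly once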
Figure 3.3: A labeled directed graph.

Figure 3.4: A rooted tree, exemplifying family relationships.

Graph vs Tree

Graph in real life          Tree in real life

Check whether a given graph is a tree or not
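As a small aside on the "is this graph a tree?" question, a hedged Python sketch: an undirected graph is a tree exactly when it is connected and has |V| − 1 edges. The adjacency-list encoding is an assumption made for this example.

from collections import deque

def is_tree(adj):
    """adj: dict mapping each node to a list of its neighbours (undirected)."""
    nodes = list(adj)
    if not nodes:
        return True
    # A tree on n nodes has exactly n - 1 edges...
    n_edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    if n_edges != len(nodes) - 1:
        return False
    # ...and is connected: a traversal from any node must reach every node.
    seen = {nodes[0]}
    queue = deque([nodes[0]])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(nodes)

print(is_tree({"a": ["b", "c"], "b": ["a"], "c": ["a"]}))             # True
print(is_tree({"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}))   # False (cycle)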
The State Space Representation of Problems

In the state space representation of a problem, the nodes of a graph
correspond to partial problem solution states and the arcs correspond to
steps in a problem-solving process. One or more initial states,
corresponding to the given information in a problem instance, form the
root of the graph. The graph also defines one or more goal conditions,
which are solutions to a problem instance. State space search
characterizes problem solving as the process of finding a solution path
from the start state to a goal.
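The definition above can be made concrete with a small, hedged sketch of what a state space problem specification might look like in Python: a start state, a successor function defining the arcs, and a goal condition (the class and names are illustrative, not from the text):

class StateSpaceProblem:
    """Minimal sketch of a state space problem: states, arcs, goal condition."""

    def __init__(self, start, successors, is_goal):
        self.start = start            # initial state: the root of the graph
        self.successors = successors  # state -> iterable of next states (arcs)
        self.is_goal = is_goal        # state -> bool (goal condition)

# Tiny illustrative instance: count up from 0 to 4 by adding 1 or 2.
counting = StateSpaceProblem(
    start=0,
    successors=lambda s: [s + 1, s + 2],
    is_goal=lambda s: s == 4,
)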
Example: Eight Puzzle
• States:
  – Description of the eight tiles and the location of the blank tile
• Successor function:
  – Generates the legal states from trying the four actions
    {Left, Right, Up, Down}
• Goal test:
  – Checks whether the state matches the goal configuration
• Path cost:
  – Each step costs 1

Start state:        Goal state:
  7 2 4               1 2 3
  5 _ 6               4 5 6
  8 3 1               7 8 _
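A hedged sketch of the eight-puzzle successor function, representing a state as a tuple of nine entries in row-major order with 0 for the blank (this encoding, and the convention that the action names describe the direction the blank moves, are assumptions made for the example):

def eight_puzzle_successors(state):
    """state: tuple of 9 ints, row-major, 0 = blank. Returns (action, state) pairs."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    moves = {"Up": (-1, 0), "Down": (1, 0), "Left": (0, -1), "Right": (0, 1)}
    result = []
    for action, (dr, dc) in moves.items():
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:            # the move must stay on the board
            swap = 3 * r + c
            nxt = list(state)
            nxt[blank], nxt[swap] = nxt[swap], nxt[blank]
            result.append((action, tuple(nxt)))
    return result

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)              # the start state shown above
goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)              # the goal configuration
print(eight_puzzle_successors(start))            # four legal moves from the centre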
The figure shows one way in which possible solution paths may be
generated and compared.
How to reduce search complexity?

Example: Eight Queens
• Place eight queens on a chess board such that no queen can attack
  another queen
• No path cost, because only the final state counts!
• Incremental formulations
• Complete-state formulations

(Board figure: eight mutually non-attacking queens.)
Example: Eight Queens (incremental formulation)
• States:
  – Any arrangement of 0 to 8 queens on the board
• Initial state:
  – No queens on the board
• Successor function:
  – Add a queen to an empty square
• Goal test:
  – 8 queens on the board and none are attacked
• 64 × 63 × … × 57 ≈ 1.8 × 10^14 possible sequences
  – Ouch!
Example: Eight Queens (improved formulation)
• States:
  – Arrangements of n queens, one per column in the leftmost n columns,
    with no queen attacking another
• Successor function:
  – Add a queen to any square in the leftmost empty column such that it
    is not attacked by any other queen
• Only 2,057 sequences to investigate
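A hedged sketch of the improved successor function: a state is a tuple of queen rows, one per filled column from the left, and a successor places a non-attacked queen in the next column (the representation is an assumption made for the example). A depth-first search over this function explores only the roughly 2,057 sequences mentioned above.

def queens_successors(state, n=8):
    """state: tuple of queen rows for the leftmost len(state) columns."""
    col = len(state)                     # the leftmost empty column
    if col == n:
        return []
    successors = []
    for row in range(n):
        attacked = any(
            row == r or abs(row - r) == abs(col - c)   # same row or same diagonal
            for c, r in enumerate(state)
        )
        if not attacked:
            successors.append(state + (row,))
    return successors

def is_goal(state, n=8):
    return len(state) == n               # eight non-attacking queens placed

print(queens_successors(()))             # 8 ways to place the first queen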
Other Toy Examples
• Another Example: Jug Fill
• Another Example: Black White Marbles
• Another Example: Row Boat Problem
• Another Example: Sliding Blocks
• Another Example: Triangle Tee

Strategies for state space search
• A state space may be searched in two directions:

• Problem → Goal (data-driven)

• Goal → Problem (goal-driven)
Goal Driven Vs Data Driven
Goal-driven: a theorem that is to be proved, diagnosing a mechanical
problem in an automobile, and finding an exit from a maze.
Data-driven: configuration problems, and an expert system that helps a
human classify plants by species, genus, etc.
Best?
• Both strategies search the same state space graph
• The order and actual number of states searched can differ
• The preferred strategy is determined by properties of the problem,
  including:
  – the complexity of the rules
  – the "shape" of the state space
  – the nature of the problem data

Both directions yield Exponential Complexity

Reading Task

Read Section 3.2 (pages 93–96) of Book B
for the considerations involved in choosing between
data-driven and goal-driven search.

Uninformed Search Strategies

Uninformed strategies use only the information available in the problem
definition. Also known as blind searching.

• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search



Breadth-First Search

(Figures: a sequence of frames showing breadth-first expansion of a
search tree, level by level.)
Breadth-first search
• Expand shallowest unexpanded node
• Fringe: nodes waiting in a queue to be explored
• Implementation:
  – fringe is a first-in-first-out (FIFO) queue, i.e.,
    new successors go at the end of the queue.

Is A a goal state?

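A minimal sketch of this loop in Python, with the fringe as a FIFO queue as described above (tree-style search: it does not check for repeated states, matching the example; the names are illustrative):

from collections import deque

def breadth_first_search(start, successors, is_goal):
    """Expand the shallowest unexpanded node first; the fringe is a FIFO queue."""
    fringe = deque([[start]])            # queue of paths, shallowest first
    while fringe:
        path = fringe.popleft()          # take from the front of the queue
        state = path[-1]
        if is_goal(state):
            return path                  # solution path from start to goal
        for nxt in successors(state):
            fringe.append(path + [nxt])  # new successors go at the end
    return None                          # no solution found

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": [], "E": [], "F": [], "G": []}
print(breadth_first_search("A", lambda s: graph[s], lambda s: s == "E"))
# -> ['A', 'B', 'E']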
Breadth-first search
• Expand shallowest unexpanded node
• Implementation:
  – fringe is a FIFO queue, i.e., new successors go
    at the end

Expand:
fringe = [B,C]

Is B a goal state?

Breadth-first search
• Expand shallowest unexpanded node
• Implementation:
  – fringe is a FIFO queue, i.e., new successors go
    at the end
Expand:
fringe=[C,D,E]

Is C a goal state?

Breadth-first search
• Expand shallowest unexpanded node
• Implementation:
  – fringe is a FIFO queue, i.e., new successors go
    at the end

Expand:
fringe=[D,E,F,G]

Is D a goal state?

Example: BFS
Properties of breadth-first search
• Complete? Yes, it always reaches the goal (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d + (b^(d+1) − b) = O(b^(d+1))
  (this is the number of nodes we generate)
• Space? O(b^(d+1)) (keeps every node in memory,
  either in the fringe or on a path to the fringe).
• Optimal? Yes (if we guarantee that deeper solutions are less
  optimal, e.g., step cost = 1).

• Space is the bigger problem (more than time)

Note: in the newer edition, space and time complexity are O(b^d) because
we postpone the expansion.
Lessons From Breadth First Search
• The memory requirements are a bigger
problem for breadth-first search than is
execution time

• Exponential-complexity search problems
cannot be solved by uninformed methods for
any but the smallest instances

Uniform-Cost Search
• Same idea as the algorithm for breadth-first
search… but:
- Expand the least-cost unexpanded node
- The fringe is a queue ordered by path cost (a priority queue)
- Equivalent to regular breadth-first search if all step
costs are equal

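A hedged sketch of uniform-cost search: the same loop as breadth-first search, but the fringe is kept ordered by path cost g using a priority queue, and the successor function is assumed (for this example) to return (next state, step cost) pairs:

import heapq

def uniform_cost_search(start, successors, is_goal):
    """Expand the least-cost unexpanded node; the fringe is ordered by path cost."""
    fringe = [(0, start, [start])]           # (path cost g, state, path)
    best_cost = {start: 0}                   # cheapest cost found so far per state
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if is_goal(state):
            return g, path
        for nxt, step_cost in successors(state):
            new_g = g + step_cost
            if new_g < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_g
                heapq.heappush(fringe, (new_g, nxt, path + [nxt]))
    return None

With all step costs equal to 1, this expands nodes in the same order as breadth-first search.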
Depth-First Search
• Recall from Data Structures the basic
algorithm for a depth-first search on a
graph or tree

• Expand the deepest unexpanded node

• Unexplored successors are placed on a
stack until fully explored
Depth-First Search

(Figures: a sequence of frames showing depth-first expansion of a search
tree, following one branch down before backtracking.)
Depth-first search
• Expand deepest unexpanded node
• Implementation:
  – fringe = last-in-first-out (LIFO) queue, i.e., put
    successors at the front
Is A a goal state?

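A minimal sketch of the same loop with a LIFO fringe (a stack), which gives depth-first behaviour; successors are pushed so that the leftmost child is expanded before its siblings, matching the order in the example:

def depth_first_search(start, successors, is_goal):
    """Expand the deepest unexpanded node first; the fringe is a LIFO stack."""
    fringe = [[start]]                   # stack of paths
    while fringe:
        path = fringe.pop()              # take from the top of the stack
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in reversed(successors(state)):
            fringe.append(path + [nxt])  # successors go on top, so the leftmost
                                         # child is the next node expanded
    return None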
Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front

queue=[B,C]

Is B a goal state?

Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front

queue=[D,E,C]

Is D = goal state?

Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front

queue=[H,I,E,C]

Is H = goal state?

Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front

queue=[I,E,C]

Is I = goal state?

Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front

queue=[E,C]

Is E = goal state?

Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front

queue=[J,K,C]

Is J = goal state?

Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front

queue=[K,C]

Is K = goal state?

Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front

queue=[C]

Is C = goal state?

Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front

queue=[F,G]

Is F = goal state?

Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front

queue=[L,M,G]

Is L = goal state?

Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front

queue=[M,G]

Is M = goal state?

Properties of depth-first search

• Complete? No: fails in infinite-depth spaces
  – Can modify to avoid repeated states along the current path
• Time? O(b^m), with m = maximum depth
  – terrible if m is much larger than d
  – but if solutions are dense, may be much faster than breadth-first
• Space? O(b·m), i.e., linear space! (we only need to remember a single
  path plus the expanded but unexplored nodes)
• Optimal? No (it may find a non-optimal goal first)
Iterative deepening search
• To avoid the infinite-depth problem of DFS, we can
decide to search only to depth L, i.e., we don't expand beyond depth L.
→ Depth-Limited Search

• What if the solution is deeper than L? → Increase L iteratively.
→ Iterative Deepening Search

• As we shall see, this inherits the memory advantage of depth-first
search, and is better in terms of time complexity than breadth-first search.

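A hedged sketch of depth-limited search and the iterative-deepening loop around it (a recursive formulation; the names and the default limit are illustrative):

def depth_limited_search(state, successors, is_goal, limit, path=None):
    """Depth-first search that does not expand nodes beyond depth `limit`."""
    path = path or [state]
    if is_goal(state):
        return path
    if limit == 0:
        return None                      # cutoff: do not expand further
    for nxt in successors(state):
        found = depth_limited_search(nxt, successors, is_goal,
                                     limit - 1, path + [nxt])
        if found is not None:
            return found
    return None

def iterative_deepening_search(start, successors, is_goal, max_limit=50):
    """Run depth-limited search with L = 0, 1, 2, ... until a goal is found."""
    for limit in range(max_limit + 1):
        found = depth_limited_search(start, successors, is_goal, limit)
        if found is not None:
            return found
    return None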
Iterative deepening search, L = 0, 1, 2, 3

(Figures: repeated depth-limited searches of the same tree with
increasing depth limit L.)
Iterative deepening search
• Number of nodes generated in a depth-limited search to
depth d with branching factor b:
  N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d

• Number of nodes generated in an iterative deepening
search to depth d with branching factor b:
  N_IDS = (d+1)·b^0 + d·b^1 + (d-1)·b^2 + … + 3·b^(d-2) + 2·b^(d-1) + 1·b^d
        = O(b^d), compared with O(b^(d+1)) for BFS

• For b = 10, d = 5:
  – N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
  – N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
  – N_BFS = 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,100

Note: BFS can also be adapted to be O(b^d) by waiting to expand a node
until all nodes at depth d are checked.
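These sums are easy to check mechanically; a quick sketch for b = 10, d = 5 (the N_BFS line assumes the older convention in which the b^(d+1) − b nodes at the next level are also generated):

b, d = 10, 5
n_dls = sum(b**i for i in range(d + 1))                        # 111,111
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))          # 123,456
n_bfs = sum(b**i for i in range(1, d + 1)) + (b**(d + 1) - b)  # 1,111,100
print(n_dls, n_ids, n_bfs)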
Properties of iterative deepening search

• Complete? Yes
• Time? O(b^d)
• Space? O(b·d)
• Optimal? Yes, if step cost = 1 or an increasing
function of depth.

Bidirectional Search
• Idea
– simultaneously search forward from S and backwards
from G
– stop when both “meet in the middle”
– need to keep track of the intersection of 2 open sets
of nodes
• What does searching backwards from G mean?
– need a way to specify the predecessors of G
  • this can be difficult,
  • e.g., predecessors of checkmate in chess?
– which goal to take if there are multiple goal states?
– where to start if there is only a goal test and no explicit goal state?
Bi-Directional Search
Complexity: time and space complexity are O(b^(d/2)).

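For example, with b = 10 and d = 6, two searches meeting in the middle each explore on the order of 10^3 nodes, roughly 2,000 in total, compared with on the order of 10^6 nodes for a single breadth-first search all the way to depth 6.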
Graph Search vs Tree Search

(Figure: a small state space over states S, B, C, and the corresponding
search tree, in which states are repeated.)

• Graph search: optimal but memory-inefficient
  – never generate a state generated before
    • must keep track of all generated states (uses a lot of memory)
    • e.g., for the 8-puzzle problem we have 9! = 362,880 states
    • approximation for DFS/DLS: only avoid states held in its (limited)
      memory, i.e., avoid looping paths
  – Graph search is optimal for BFS and UCS (do you understand why?)
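A hedged sketch of the difference: the breadth-first loop from before with an explored set added, so that no state is ever generated twice; the set of all generated states is exactly the extra memory that graph search pays for.

from collections import deque

def breadth_first_graph_search(start, successors, is_goal):
    """BFS that never re-generates a previously seen state (graph search)."""
    fringe = deque([[start]])
    explored = {start}                   # every state generated so far
    while fringe:
        path = fringe.popleft()
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in explored:      # skip repeated states
                explored.add(nxt)
                fringe.append(path + [nxt])
    return None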
Summary of algorithms

(Table: comparison of the uninformed search strategies on completeness,
time, space, and optimality. Annotations on the slide note that the third
edition gives O(b^d) here, that a strategy may not even be complete if the
step cost is not increasing with depth, and that iterative deepening is
the preferred uninformed search strategy.)
Summary
• Problem formulation usually requires abstracting away
real-world details to define a state space that can
feasibly be explored

• Variety of uninformed search strategies

• Iterative deepening search uses only linear space and
not much more time than other uninformed algorithms
http://www.cs.rmit.edu.au/AI-Search/Product/
http://aima.cs.berkeley.edu/demos.html (for more demos)

Exercise
2. Consider the graph below:

(Figure: a graph with nodes A, B, C, D, E, F.)

a) [2pt] Draw the first 3 levels of the full search tree with root node given by A.
   Use graph search, i.e., avoid repeated states.
b) [2pt] Give an order in which we visit nodes if we search the tree breadth-first.
c) [2pt] Express time and space complexity for general breadth-first search in terms
   of the branching factor, b, and the depth of the goal state, d.
d) [2pt] If the step cost for a search problem is not constant, is breadth-first search
   always optimal? Is BFS graph search optimal?
e) [2pt] Now assume constant step cost.
   Is BFS tree search optimal? Is BFS graph search optimal?
Next time

Questions?

