Problem Solving by Searching


Artificial Intelligence

Lecture 2
Problem Solving by Searching
20-10-2022
Outline

 Revision of Search Problems

 Uninformed Search
  Revision of BFS and DFS
  Uniform Cost Search

 Informed Search
  Heuristics
  Greedy Search
  A* Search
Search Problem
A search problem consists of
1. Initial state
2. State space
3. Goal test
4. Actions
5. Transition model
6. Action cost
 A solution is a sequence of actions that leads from
the initial state to a goal state.
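
As an illustration only, these six components can be bundled into a small interface; the class and attribute names below are my own, not from the lecture:

class SearchProblem:
    """Bundles the components of a search problem listed above (illustrative sketch)."""
    def __init__(self, initial_state, goal_states, actions, transition, step_cost):
        self.initial_state = initial_state   # 1. initial state
        # 2. the state space is implicit: every state reachable from the initial state
        self.goal_states = goal_states       # 3. goal test data (a set of goal states)
        self.actions = actions               # 4. actions(state) -> iterable of actions
        self.transition = transition         # 5. transition(state, action) -> next state
        self.step_cost = step_cost           # 6. step_cost(state, action, next_state) -> number

    def goal_test(self, state):
        """True when the state satisfies the goal."""
        return state in self.goal_states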
Example: The 8-puzzle

 states?
 actions?
 goal test?
Example: The 8-puzzle

 states? locations of the tiles (any configuration of the tiles can be a start state)

 actions? move blank left, move blank right, move blank up, move blank down

 goal test? state matches the given goal configuration


State Space Graphs vs. Search Trees

[Figure: a state space graph over states S, a, b, c, d, e, f, h, p, q, r, G and the corresponding search tree rooted at S]

 Each NODE in the search tree is an entire PATH in the state space graph.
 We construct both on demand – and we construct as little as possible.
State Space Graphs vs. Search Trees

 Consider this 4-state graph: how big is its search tree (from S)?

[Figure: a 4-state graph from S to G and its search tree]

 Important: lots of repeated structure in the search tree!

Depth-first search
 Expand deepest unexpanded node
 Implementation:
  fringe = LIFO stack, i.e., put successors at front

[Figure: step-by-step DFS expansion of the example search tree, one slide per expansion]
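
A minimal Python sketch of tree-search DFS, for concreteness. The `goal_test` and `successors` arguments, and the tiny example graph, are illustrative assumptions rather than code from the lecture:

def depth_first_search(start, goal_test, successors):
    """DFS sketch: the fringe is a LIFO stack of paths (no cycle checking)."""
    fringe = [[start]]                 # stack of paths; the last element is the deepest
    while fringe:
        path = fringe.pop()            # LIFO: expand the most recently added path
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):  # successors go on top of the stack
            fringe.append(path + [nxt])
    return None                        # no solution found

# Tiny illustrative graph (not the one from the slides):
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': [], 'G': []}
print(depth_first_search('S', lambda s: s == 'G', lambda s: graph[s]))  # ['S', 'A', 'G']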
Breadth-first search
 Expand shallowest unexpanded node
 Implementation:
  fringe is a FIFO queue, i.e., new successors go at end

[Figure: step-by-step BFS expansion of the example search tree, one slide per expansion]
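
The matching BFS sketch, again with an assumed `successors(state)` interface; only the fringe discipline changes:

from collections import deque

def breadth_first_search(start, goal_test, successors):
    """BFS sketch: the fringe is a FIFO queue of paths."""
    fringe = deque([[start]])          # queue of paths; the leftmost is the shallowest
    while fringe:
        path = fringe.popleft()        # FIFO: expand the oldest (shallowest) path
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):  # new successors go at the end of the queue
            fringe.append(path + [nxt])
    return None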
BFS vs DFS

[Figure: example search tree used for the comparison]

 Assume the goal node is e:
  DFS needs only 4 steps to reach e.
  BFS requires 6 steps.
  DFS is better for deeper nodes.

 Assume the goal node is c:
  BFS needs only 4 steps.
  With DFS, 8 comparisons are needed.

 Which approach is best depends on the position of the goal node.
BFS vs DFS (Cont.)
 If the branching factor b (the number of branches per node) is large, DFS is better suited and BFS performs worse (its fringe grows much faster).
 The efficiency of search depends on the structure of the tree, the search method used, and the branching factor.
Cost-Sensitive Search

[Figure: a weighted state space graph from START to GOAL, with a different cost on each edge]

 A different kind of problem, where the cost between states is important.
 It is desired to find the shortest (least-cost) path.

 BFS finds the shortest path in terms of number of actions. It does not find the least-cost path.
 A similar algorithm, uniform cost search, can be used to find the least-cost path.
Uniform Cost Search

 Strategy: expand the cheapest node first.
 Fringe is a priority queue (priority: cumulative cost). See the sketch below.

[Figure: weighted state space graph from S to G and the corresponding search tree, with the cumulative path cost written at each tree node and cost contours drawn over the tree]
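
A minimal UCS sketch under the assumption that `successors(state)` yields (next_state, step_cost) pairs; this interface and the tie-breaking counter are illustrative choices, not from the lecture:

import heapq

def uniform_cost_search(start, goal_test, successors):
    """UCS sketch: the fringe is a priority queue keyed by cumulative cost g."""
    counter = 0                                  # tie-breaker so the heap never compares paths
    fringe = [(0, counter, [start])]             # (cumulative cost, tie, path)
    while fringe:
        cost, _, path = heapq.heappop(fringe)    # cheapest path so far
        state = path[-1]
        if goal_test(state):                     # goal test on dequeue gives the least-cost path
            return cost, path
        for nxt, step in successors(state):
            counter += 1
            heapq.heappush(fringe, (cost + step, counter, path + [nxt]))
    return None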
Uniform Cost Search (UCS) Properties

[Figure: cost contours c1 ≤ c2 ≤ c3 expanding outward from the start, with branching factor b]

 What nodes does UCS expand?
  Processes all nodes with cost less than the cheapest solution!
  If that solution costs C* and arcs cost at least ε, then the "effective depth" is roughly C*/ε "tiers".
  Takes time O(b^(C*/ε)) (exponential in effective depth).

 How much space does the fringe take?
  Has roughly the last tier, so O(b^(C*/ε)).

 Is it complete?
  Assuming the best solution has a finite cost and the minimum arc cost is positive, yes!

 Is it optimal?
  Yes!
Uniform Cost Issues

 Remember: UCS explores increasing cost contours c1 ≤ c2 ≤ c3 …

 The good: UCS is complete and optimal!

 The bad:
  Explores options in every "direction"
  No information about goal location
Implementation note

 All the uninformed search algorithms are the same in implementation except for their fringe strategies.
  Conceptually, all fringes are priority queues (i.e., collections of nodes with attached priorities).
  In practice, DFS and BFS replace the priority queue with a stack and a FIFO queue, respectively.
  With DFS priority, the deepest node is chosen.
  With BFS priority, the shallowest node is chosen.
  With UCS priority, the node with the lowest cumulative cost is chosen.
 Can even code one implementation that takes a variable queuing object, as sketched below.
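
A sketch of that idea: one tree-search routine whose behaviour is fixed entirely by the priority assigned to each path. The `successors(state)` interface yielding (next_state, step_cost) pairs and the priority signatures are assumptions made here for illustration:

import heapq

def generic_search(start, goal_test, successors, priority):
    """One implementation for DFS, BFS and UCS: only `priority(depth, cost)` changes."""
    counter = 0                                   # tie-breaker; also keeps the heap from comparing paths
    fringe = [(priority(1, 0), counter, 0, [start])]
    while fringe:
        _, _, cost, path = heapq.heappop(fringe)
        state = path[-1]
        if goal_test(state):
            return path
        for nxt, step in successors(state):
            counter += 1
            new_path = path + [nxt]
            heapq.heappush(fringe, (priority(len(new_path), cost + step), counter, cost + step, new_path))
    return None

# Fringe strategies (illustrative):
dfs_priority = lambda depth, cost: -depth   # deepest node first (sibling order differs from a plain stack)
bfs_priority = lambda depth, cost: depth    # shallowest node first
ucs_priority = lambda depth, cost: cost     # lowest cumulative cost first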
Search Heuristics

▪ A heuristic is:
▪ A function that estimates how close a state is to a
goal
▪ Designed for a particular search problem
▪ Examples: Manhattan distance, Euclidean distance
for pathing
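
Small sketches of the two example heuristics for pathing on a grid; the (x, y) coordinates and the goal below are hypothetical inputs:

import math

def manhattan_distance(state, goal):
    """Sum of horizontal and vertical offsets between two (x, y) positions."""
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

def euclidean_distance(state, goal):
    """Straight-line distance between two (x, y) positions."""
    (x1, y1), (x2, y2) = state, goal
    return math.hypot(x1 - x2, y1 - y2)

print(manhattan_distance((2, 3), (5, 7)))   # 7
print(euclidean_distance((2, 3), (5, 7)))   # 5.0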
Example: Heuristic Function

[Figure: example heuristic values h(x) shown for each state]
Greedy Search

 Expand the node that seems closest…

 What can go wrong?
  There is a lower-cost path, through Rimnicu Vilcea.
Greedy Search

 Strategy: expand the node that you think is closest to a goal state.
  Heuristic: estimate of distance to the nearest goal for each state.

 A common case:
  Best-first takes you straight to the (wrong) goal.

 Worst case: like a badly-guided DFS.

[Figure: greedy expansion on an example tree with branching factor b]
Combining UCS and Greedy

 Uniform-cost orders by path cost, or backward cost g(n).
 Greedy orders by goal proximity, or forward cost h(n).
 A* Search orders by the sum: f(n) = g(n) + h(n).

[Figure: state space graph and search tree from S to G, annotated with g (path cost so far) and h (heuristic) at each node]

 A* search path: S → a → d → G
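
A minimal A* sketch that orders the fringe by f(n) = g(n) + h(n). As before, the `successors(state)` interface yielding (next_state, step_cost) pairs and the heuristic argument `h` are assumed:

import heapq

def a_star_search(start, goal_test, successors, h):
    """A* sketch: fringe ordered by f = g + h; the goal test is applied on dequeue."""
    counter = 0                                   # tie-breaker for equal f values
    fringe = [(h(start), counter, 0, [start])]    # (f, tie, g, path)
    while fringe:
        f, _, g, path = heapq.heappop(fringe)
        state = path[-1]
        if goal_test(state):                      # only stop when a goal is dequeued (next slide)
            return g, path
        for nxt, step in successors(state):
            counter += 1
            g2 = g + step
            heapq.heappush(fringe, (g2 + h(nxt), counter, g2, path + [nxt]))
    return None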
When should A* terminate?

 Should we stop when we enqueue a goal?

[Figure: graph with edges S–A (cost 2), A–G (cost 2), S–B (cost 2), B–G (cost 3); h(S) = 3, h(A) = 2, h(B) = 1, h(G) = 0]

 Fringe:
  S: g = 0, h = 3
  S → A: g = 2, h = 2, f = 4
  S → B: g = 2, h = 1, f = 3
  S → B → G: f = 5

 No: only stop when we dequeue a goal.
  If we declare success when S → B → G is enqueued, we get the path with length 5, even though there is another path (S → A → G) with length 4.
  At that point both A and G are in the queue: G has f = 5 but A has f = 4, so A is dequeued first and the cheaper path is found.
Is A* Optimal?

[Figure: counterexample graph with edges S → A (cost 1), A → G (cost 3), and S → G (cost 5); h(S) = 7, h(A) = 6, h(G) = 0]

 Remember: h is the estimated cost (distance) to the goal.
 Here h is poorly chosen: the heuristic says A is still 6 away from G, but it is actually only 3 away.

 What went wrong?
  Actual bad goal cost < estimated good goal cost.
  We need estimates to be less than actual costs!
  This is called an admissible heuristic.
Admissible Heuristics

 A heuristic h is admissible (optimistic) if:
  h(n) ≤ h*(n) for every node n,
 where h*(n) is the true cost to a nearest goal.

 Example: the straight-line distance from node n to the goal.

 Coming up with admissible heuristics is most of what's involved in using A* in practice.
Admissible heuristics

E.g., for the 8-puzzle (each move has cost 1):
 h1(n) = number of misplaced tiles
 h2(n) = total Manhattan distance
  (i.e., h2 is the sum of the distances of the tiles from their goal positions)

[Figure: example start state S and the goal state of the 8-puzzle]

 h1(S) = ? 8
 h2(S) = ? 3+1+2+2+2+3+3+2 = 18
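
A sketch of both heuristics, assuming a state is encoded as a tuple of nine entries read row by row with 0 standing for the blank (this encoding is my assumption, not the lecture's):

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 = blank

def h1(state, goal=GOAL):
    """Number of misplaced numbered tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Total Manhattan distance of the numbered tiles from their goal squares."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)                       # goal square of this tile
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

# The example from the next slide (start 5 _ 8 / 4 2 1 / 7 3 6):
N = (5, 0, 8, 4, 2, 1, 7, 3, 6)
print(h1(N), h2(N))   # 6 13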
Example

   start        Goal
   5   8        1 2 3
   4 2 1        4 5 6
   7 3 6        7 8

 h1(N) = number of misplaced numbered tiles = 6
 h2(N) = sum of the (Manhattan) distances of every numbered tile to its goal position
       = 2 + 3 + 0 + 1 + 3 + 0 + 3 + 1 = 13
Question

 Trace the A* Search algorithm, using the total Manhattan Distance heuristic, to find the shortest path from the start state to the goal state (a programmatic version is sketched below):

   start state     Goal state
   1 2 3           1 2 3
   7 4 5           4 5 6
   8 6             7 8
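A self-contained sketch that runs the requested A* search with the Manhattan-distance heuristic. The exact position of the blank in the start state is not recoverable from the extracted grid above, so the start tuple below (blank in the bottom-right corner) is an assumption; the duplicate-state check (`best_g`) is a small refinement over the plain tree search in the slides:

import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 = blank, read row by row

def manhattan(state):
    """Total Manhattan distance of the numbered tiles to their goal squares."""
    return sum(abs(i // 3 - GOAL.index(t) // 3) + abs(i % 3 - GOAL.index(t) % 3)
               for i, t in enumerate(state) if t != 0)

def neighbours(state):
    """States reachable by sliding the blank up, down, left or right."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def a_star(start):
    """A* with f = g + h, unit step costs, and the goal test applied on dequeue."""
    counter = 0
    fringe = [(manhattan(start), counter, 0, [start])]
    best_g = {start: 0}                            # cheapest known cost per state
    while fringe:
        _, _, g, path = heapq.heappop(fringe)
        state = path[-1]
        if state == GOAL:
            return path
        for nxt in neighbours(state):
            if g + 1 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g + 1
                counter += 1
                heapq.heappush(fringe, (g + 1 + manhattan(nxt), counter, g + 1, path + [nxt]))
    return None

start = (1, 2, 3, 7, 4, 5, 8, 6, 0)   # assumed start state (blank bottom-right)
solution = a_star(start)
print(len(solution) - 1, "moves")
for s in solution:
    print(s[0:3], s[3:6], s[6:9])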
Optimality of A* Tree Search

Assume:

 A is an optimal goal node

 B is a suboptimal goal node

 h is admissible

Claim:
 A will exit the fringe before B
Optimality of A* Tree Search: Blocking

Proof:
 Imagine B is on the fringe.
 Some ancestor n of A is on the fringe, too (maybe A itself!).
 Claim: n will be expanded before B.
1. f(n) ≤ f(A):
◼ f(n) = g(n) + h(n) -- definition of f-cost
◼ f(n) ≤ g(A) -- admissibility of h (n is on the optimal path to A, so g(n) + h(n) ≤ g(n) + h*(n) ≤ g(A))
◼ g(A) = f(A) -- h = 0 at a goal
Optimality of A* Tree Search: Blocking

Proof:
 Imagine B is on the fringe.
 Some ancestor n of A is on the fringe, too (maybe A itself!).
 Claim: n will be expanded before B.
1. f(n) ≤ f(A)
2. f(A) < f(B):
◼ g(A) < g(B) -- A is optimal, B is suboptimal
◼ f(A) < f(B) -- h = 0 at a goal
Optimality of A* Tree Search: Blocking

Proof:
 Imagine B is on the fringe.
 Some ancestor n of A is on the fringe, too (maybe A itself!).
 Claim: n will be expanded before B.
1. f(n) ≤ f(A)
2. f(A) < f(B)
3. f(n) ≤ f(A) < f(B), so n expands before B.
 All ancestors of A expand before B.
 A expands before B.
 A* tree search is optimal.
