
CM20252/CM50263 – Artificial Intelligence

Dr Rob Wortham

Heuristic Search

Note: This lecture is being recorded using University of Bath Panopto


22/01/2024 Artificial Intelligence - Lecture 5 1
Today’s Lecture

• What is a heuristic?
• Searching with heuristics
• Greedy best-first
• A*
• Choosing heuristics – relaxed problems
• Reading
• Labs

22/01/2024 Artificial Intelligence - Lecture 5 2


Informed search

• Heuristics
• Strategies derived from previous experiences with similar
problems.
• Depend on using readily accessible, though loosely applicable,
information to improve problem solving.

• Heuristics in Search
• h(n) = estimated cost of the cheapest path from the state at node
n to a goal state.
• h(n) = 0 if n is a goal state

22/01/2024 Artificial Intelligence - Lecture 5 3
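As a concrete illustration (not from the slides), a heuristic can be implemented simply as a function from a state to an estimated remaining cost; the coordinates below are invented placeholders standing in for, e.g., map positions in a route-finding problem:

```python
import math

# Made-up (x, y) coordinates for a few states; in a route-finding problem
# these would be the map coordinates of the cities.
COORDS = {"A": (0, 0), "B": (4, 3), "G": (9, 3)}
GOAL = "G"

def h(state):
    """Straight-line (Euclidean) distance from state to the goal: 0 at the goal itself."""
    (x1, y1), (x2, y2) = COORDS[state], COORDS[GOAL]
    return math.hypot(x2 - x1, y2 - y1)

print(h("A"))   # ~9.49: estimated cost from A to the goal
print(h(GOAL))  # 0.0, as required for a goal state
```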


Informed search - example

[Figure: the example problem used in the following slides; part of the figure is marked "let's ignore this part".]

22/01/2024 Artificial Intelligence - Lecture 5 4


Informed search - example

Greedy best-first graph search using straight-line distance as a heuristic.

[Figure: search tree and frontier. Arad is expanded first, adding Timisoara, Sibiu and Zerind to the frontier; then Sibiu, adding Rimnicu Vilcea, Arad, Fagaras and Oradea; then Fagaras, adding Sibiu and Bucharest. The best-first solution found is not the optimal solution.]
22/01/2024 Artificial Intelligence - Lecture 5 5
Greedy Best-First search – another problem

[Figure: the initial state and goal state of another example problem.]


22/01/2024 Artificial Intelligence - Lecture 5 6
Greedy Best-First search – Key Facts
• Complete
• Only when performing graph search over a finite state space.
• No, otherwise (it can get into loops).

• Optimal
• No.

• Time and space complexity
• Requires space to store the frontier
• Graph search also requires storage of the Explored set
• O(b^m), where m is the maximum depth of the search space – but a good heuristic can reduce it substantially.

22/01/2024 Artificial Intelligence - Lecture 5 7
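A minimal sketch (my own, not from the slides) of greedy best-first graph search: the frontier is a priority queue ordered by h(n) alone, and an explored set prevents loops. The graph and heuristic values below are made-up toy data, not the Romania map:

```python
import heapq

def greedy_best_first(start, goal, neighbours, h):
    """Greedy best-first graph search: always expand the frontier node with the lowest h(n)."""
    frontier = [(h(start), start, [start])]      # priority queue keyed on h(n)
    explored = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt, _cost in neighbours(state):     # step costs are ignored by greedy search
            if nxt not in explored:
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None                                  # no solution found

# Toy example (made-up graph, step costs and heuristic values):
graph = {"A": [("B", 2), ("C", 5)], "B": [("G", 6)], "C": [("G", 1)], "G": []}
h_est = {"A": 2, "B": 0, "C": 1, "G": 0}
print(greedy_best_first("A", "G", lambda s: graph[s], lambda s: h_est[s]))
# ['A', 'B', 'G']: greedy follows the low-h node B and returns a path of cost 8,
# even though the cheaper path A-C-G (cost 6) exists.
```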


An improvement on Greedy best-first – A*
• Idea: let's also take into account the cost of the path so far – g(n)
• Then, expand the node that has the lowest g(n) + h(n)

• g(n) – path cost from start node to node n (not an estimate)


• h(n) – estimated cost of cheapest path to goal state (same as greedy)

• A* search f(n) = g(n) + h(n)

I --- g(n) ---> n --- h(n) ---> G

g(n): known cost from the initial state I to n. h(n): estimated cost from n to the goal state G.

22/01/2024 Artificial Intelligence - Lecture 5 8
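A sketch (again my own, not from the slides) of A* graph search: the same priority-queue loop as the greedy sketch above, except that the queue is ordered by f(n) = g(n) + h(n), with g(n) the cost accumulated so far. The toy graph and heuristic are the same made-up values as before; this h is consistent, so the result is optimal:

```python
import heapq

def a_star(start, goal, neighbours, h):
    """A* graph search: expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]   # entries are (f, g, state, path)
    best_g = {start: 0}                          # cheapest g found so far for each state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g                       # optimal when h is admissible/consistent
        for nxt, cost in neighbours(state):
            g2 = g + cost                        # g(n') = g(n) + step cost (not an estimate)
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Same made-up toy example as in the greedy sketch:
graph = {"A": [("B", 2), ("C", 5)], "B": [("G", 6)], "C": [("G", 1)], "G": []}
h_est = {"A": 2, "B": 0, "C": 1, "G": 0}
print(a_star("A", "G", lambda s: graph[s], lambda s: h_est[s]))
# (['A', 'C', 'G'], 6): unlike greedy best-first, A* returns the cheapest path.
```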


Informed search – A*

A* graph search using straight-line distance as a heuristic.

[Figure: search tree and frontier, with the path cost g(n) shown at each node. Arad (0) is expanded first, adding Timisoara (118), Sibiu (140) and Zerind (75) to the frontier; then Sibiu, adding Rimnicu Vilcea (220), Arad (280), Fagaras (239) and Oradea (291); then Rimnicu Vilcea, adding Sibiu (300), Craiova (366) and Pitesti (317); and so on until the optimal solution is found.]
22/01/2024 Artificial Intelligence - Lecture 5 9
A* Search – Key facts
• Complete
• Yes.

• Optimal
• Yes. Tree search version is optimal with an admissible heuristic (see next slides).
• Graph search version is optimal with a consistent heuristic (see next slides).

• A* is optimally efficient for any given consistent heuristic. No other optimal
algorithm is guaranteed to expand fewer nodes than A*.

• Time and space complexity
• For most problems, the number of states A* visits is still exponential in the length of
the solution. It usually runs out of space long before it runs out of time (dependent on b).
• O(b^m) – but generally much lower

22/01/2024 Artificial Intelligence - Lecture 5 10


A* Search – Admissible heuristic

• An admissible heuristic never overestimates the cost to reach the goal.

• Example: Straight-line distance is admissible because the shortest path between any two
points is a straight line.

22/01/2024 Artificial Intelligence - Lecture 5 11


A* Search – Consistent heuristic
• Consistency
• The estimate h(n) from a node n to the goal must be less than or
equal to the actual cost c of reaching any successor nʹ plus the
estimate h(nʹ) from that successor to the goal.

• h(n) ≤ c(n, a, nʹ) + h(nʹ)

[Figure: triangle with nodes n, nʹ and the goal G; c is the step cost from n to nʹ, and h(n), h(nʹ) are the heuristic estimates from each node to G.]

• Consistency is required only for A* graph search.
• Every consistent heuristic is also admissible.

22/01/2024 Artificial Intelligence - Lecture 5 12
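As an illustration (not part of the slides), consistency can be checked mechanically on a small explicit graph by testing the inequality on every edge; the edge costs and h values below are made up:

```python
def is_consistent(h, edges):
    """Return True if h(n) <= c(n, n') + h(n') holds for every directed edge (n, n', cost)."""
    return all(h[n] <= cost + h[n2] for n, n2, cost in edges)

# Made-up example: edges (n, n', step cost) and heuristic values h.
edges = [("A", "B", 2), ("B", "G", 3), ("A", "G", 6)]
h = {"A": 4, "B": 3, "G": 0}
print(is_consistent(h, edges))   # True – this h is consistent (and therefore admissible)
```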


A* search
A* (Hart, Nilsson & Raphael, 1968)

Why did this take so long!

[Timeline: 1845 – Babbage discussed programming a computer to play chess; 1950 – the Turing Test; 1955 – the term "Artificial Intelligence" is coined; 1968 – A*.]
22/01/2024 Artificial Intelligence - Lecture 5 13
Heuristic functions – The 8 Puzzle

[Figure: example 8-puzzle board configurations.]

Q. What is the Branching factor b?

A. ≈ 3 (the blank has 2, 3 or 4 possible moves depending on its position)

22/01/2024 Artificial Intelligence - Lecture 5 14


Heuristic functions – The 8 Puzzle #2

Solution depth ≅ 22 steps (just accept this as true for now)


Branching factor ≈ 3

Exhaustive tree search to depth 22 would examine roughly 3^22 ≈ 3.1 × 10^10 states.

Graph search is much more efficient: only 181,440 distinct states are reachable.

But graph search for the 15-puzzle: about 10^13 distinct states

We need a good heuristic……..

22/01/2024 Artificial Intelligence - Lecture 5 15
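These counts are easy to verify (the 181,440 figure is 9!/2, since only half of the tile permutations are reachable; likewise 16!/2 for the 15-puzzle):

```python
import math

print(3 ** 22)                  # 31,381,059,609 ≈ 3.1 × 10^10 states for tree search to depth 22
print(math.factorial(9) // 2)   # 181,440 distinct reachable 8-puzzle states
print(math.factorial(16) // 2)  # ≈ 1.0 × 10^13 distinct reachable 15-puzzle states
```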


Heuristics for the 8 Puzzle
Q. What good heuristics could we use?

    7 2 4        1 2 3
    5 8 6   ?    4 5 6
    1 3 _        7 8 _

h1 = the number of misplaced tiles = 6

h2 = total distance of tiles from their goal positions (Manhattan distance)
   = 2 + 1 + 2 + 1 + 3 + 3 = 12

Are these good heuristics?
22/01/2024 Artificial Intelligence - Lecture 5 16
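A short sketch (not from the slides) of the two heuristics, with an 8-puzzle state represented as a tuple of 9 entries and 0 standing for the blank; the blank is excluded from both counts:

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 represents the blank

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for tile, goal in zip(state, GOAL) if tile != 0 and tile != goal)

def h2(state):
    """Total Manhattan distance of each tile from its goal position."""
    pos = {tile: divmod(i, 3) for i, tile in enumerate(state)}        # current (row, col)
    goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(GOAL)}    # goal (row, col)
    return sum(abs(pos[t][0] - goal_pos[t][0]) + abs(pos[t][1] - goal_pos[t][1])
               for t in range(1, 9))

start = (7, 2, 4, 5, 8, 6, 1, 3, 0)  # the state shown on the previous slide
print(h1(start), h2(start))          # 6 12
```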
Heuristics for the 8 Puzzle #2
Performance – we need to test to find out!

Run 1200 random 8-puzzle problems (Russell & Norvig, 2010, p.104).

[Table: comparison of IDS, A*(h1) and A*(h2) for different values of d, the length of the optimal solution. IDS = Iterative Deepening Search.]

Is h2 always better than h1? Essentially, yes, because h2 dominates h1: h2(n) ≥ h1(n) for every node n.
22/01/2024 Artificial Intelligence - Lecture 5 17
Heuristic functions – More Generally
• What about other problems?
• Is there a general way to generate good heuristic functions?

• Remember
• h1 = the number of misplaced tiles
• h2 = total distance of tiles from their goal positions

• Heuristic functions h1 and h2 are accurate path lengths for simplified versions of the problem.
• If we could move a tile anywhere, then h1 gives the shortest solution.
• If we could move a tile to any adjacent square, then h2 gives the shortest solution.

• A problem with fewer restrictions on the actions is called a relaxed problem.

• The cost of an optimal solution to a relaxed problem is a lower bound on the cost of an optimal
solution to the real problem.

22/01/2024 Artificial Intelligence - Lecture 5 18


Heuristic functions – More Generally #2
• There is often no single “clearly best” heuristic.

• If a collection of admissible heuristics h1 … hm is available for a problem, and none
dominates any of the others, which should we choose?

• There is no need to choose!

• Define h(n) = max{h1(n), …, hm(n)}

• h is admissible and dominates all its component heuristics.

22/01/2024 Artificial Intelligence - Lecture 5 19
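A minimal sketch of the composite heuristic; the commented usage reuses the 8-puzzle heuristics h1 and h2 sketched earlier:

```python
def h_max(state, heuristics):
    """Composite heuristic: the pointwise maximum of admissible heuristics is
    itself admissible and dominates each of its components."""
    return max(h(state) for h in heuristics)

# Example, reusing h1 and h2 from the earlier 8-puzzle sketch:
# start = (7, 2, 4, 5, 8, 6, 1, 3, 0)
# print(h_max(start, [h1, h2]))   # 12 – the larger of h1 = 6 and h2 = 12
```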


Reading
• Essential Reading from Russell & Norvig, Chapter 3:

• Critical sections in Chapter 3:

• 3.5.2 Heuristic search strategies: A*

• 3.6 Heuristic functions (until 3.6.3)

22/01/2024 Artificial Intelligence - Lecture 5 20


Labs This Week
• Introduction to Python, week 2 of 2

• If you are already a Python programmer, please work through the introductory
Jupyter tutorial anyway to check for gaps.

• Consolidate your skills by helping others.

• If you have completed the Python intro: try Missionaries & Cannibals.

• Materials are available on Moodle. Lab TAs will assist.

22/01/2024 Artificial Intelligence - Lecture 5 21
Next Lecture – Local Search
• Local search – when all we need to
find is the goal state

• Example: 8 Queens

22/01/2024 Artificial Intelligence - Lecture 5 22


Today’s Lecture

• What is a heuristic?
• Searching with heuristics
• Greedy best-first
• A*
• Choosing heuristics – relaxed problems
• Reading
• Labs

Questions ?

22/01/2024 Artificial Intelligence - Lecture 5 23
