
ARTIFICIAL INTELLIGENCE

Dr. Nidhi Kushwaha


Department of Computer Science and Engineering

Indian Institute of Information Technology, Ranchi



OVERVIEW
 Informed Search Strategies (Heuristic)

 Best-first search
 Greedy best-first search
 A* search
GRAPH SEARCH ALGORITHM
HEURISTICS
 Heuristic = rule of thumb.
 Heuristics are criteria, methods, or principles for deciding which among several alternative courses of action promises to be the most effective in achieving some goal.
 A heuristic is a problem-solving method that uses shortcuts to produce good-enough solutions in limited time. It is a technique for reaching quick decisions, but it does not guarantee an optimal solution every time.
 Heuristics can be used to identify the most promising
search path.
HEURISTIC FUNCTION
 Example:
 We want a path from Ranchi to Prayagraj.
 A heuristic for this problem may be the straight-line distance to Prayagraj:
 h(n) = EuclideanDistance(n, Prayagraj), e.g. h(Ranchi) = EuclideanDistance(Ranchi, Prayagraj)

 Heuristic search is goal-oriented search: a value is associated with every state.
 The value is computed by applying the function to the state, to estimate the distance of that state from the goal state.
 Such a function is called a heuristic function, and the value it calculates is termed the heuristic value (a small sketch follows).
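A minimal sketch of such a heuristic function in Python. The (x, y) coordinates below are made up purely for illustration; real straight-line distances would come from the cities' actual coordinates.

```python
import math

# Hypothetical planar coordinates, for illustration only.
COORDS = {
    "Ranchi": (0.0, 0.0),
    "Prayagraj": (370.0, 250.0),
}

def sld_heuristic(state, goal="Prayagraj"):
    """Heuristic value of a state = straight-line (Euclidean) distance to the goal."""
    (x1, y1), (x2, y2) = COORDS[state], COORDS[goal]
    return math.hypot(x2 - x1, y2 - y1)

print(sld_heuristic("Ranchi"))      # estimated cost from Ranchi to Prayagraj
print(sld_heuristic("Prayagraj"))   # 0.0 at the goal
```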
HEURISTIC FUNCTION
 Example 1: Heuristic for the 8-puzzle problem
h(n) = number of tiles out of place

    Initial state        Goal state
    2 8 3                1 2 3
    1 6 4                8   4
    7   5                7 6 5

For the state shown, h(n) = 4 (tiles 1, 2, 6, and 8 are out of place).
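A small sketch of this heuristic in Python. The flat-tuple board encoding, with 0 for the blank, is an assumption made for illustration.

```python
# Boards are flat 9-tuples read row by row; 0 denotes the blank.
INITIAL = (2, 8, 3,
           1, 6, 4,
           7, 0, 5)

GOAL = (1, 2, 3,
        8, 0, 4,
        7, 6, 5)

def misplaced_tiles(state, goal=GOAL):
    """h(n) = number of tiles (blank excluded) that are not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

print(misplaced_tiles(INITIAL))   # 4 -> tiles 1, 2, 6 and 8 are out of place
```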
HEURISTIC FUNCTION
 Example 2: Heuristic for the 8-puzzle problem
h(n) = sum of the distances of the tiles from their goal positions (Manhattan distance)

    Initial state        Goal state
    2 8 3                1 2 3
    1 6 4                8   4
    7   5                7 6 5

For the state shown, h(n) = 5.
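A corresponding sketch for this heuristic, using the same board encoding (redefined here so the snippet runs on its own):

```python
INITIAL = (2, 8, 3, 1, 6, 4, 7, 0, 5)   # 0 denotes the blank
GOAL    = (1, 2, 3, 8, 0, 4, 7, 6, 5)

def manhattan_distance(state, goal=GOAL):
    """h(n) = sum over tiles 1..8 of |row - goal_row| + |col - goal_col|."""
    total = 0
    for tile in range(1, 9):                     # the blank is not counted
        r, c = divmod(state.index(tile), 3)
        gr, gc = divmod(goal.index(tile), 3)
        total += abs(r - gr) + abs(c - gc)
    return total

print(manhattan_distance(INITIAL))   # 5 for the state shown above
```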
INFORMED SEARCH STRATEGIES (HEURISTIC)

 Informed search strategies can tell whether one (non-goal) state is more promising than another.
 They use problem-specific knowledge beyond the definition of the problem itself.
 The general approach is called Best-first search.
 Best-first search is an instance of the general Tree-Search or Graph-Search algorithm in which a node is selected for expansion based on an evaluation function f(n).
 A heuristic function associates a value with every state in a given problem space.
 The evaluation function f(n) is constructed as a cost estimate, so the node with the lowest evaluation is expanded first.
 The implementation of best-first graph search is identical to that of uniform-cost search, except that 'f' instead of 'g' is used to order the priority queue (see the sketch below).
 Uniform-cost search is based on the function g(n), the path cost from the start state to n.
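The following is a minimal sketch of this idea: a graph search whose frontier is a priority queue ordered by an evaluation function f. The function and argument names are assumptions for illustration, not pseudocode taken from the slides.

```python
import heapq
from itertools import count

def best_first_search(start, goal_test, successors, f):
    """Generic best-first graph search: always expand the frontier node with the
    lowest value of the evaluation function f.  `successors(state)` yields
    (next_state, step_cost) pairs; `f(state, g)` also sees the path cost g so far."""
    tie = count()                         # tie-breaker so states never get compared
    frontier = [(f(start, 0), next(tie), 0, start, [start])]
    explored = set()
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in successors(state):
            if nxt not in explored:
                g2 = g + cost
                heapq.heappush(frontier, (f(nxt, g2), next(tie), g2, nxt, path + [nxt]))
    return None, float("inf")
```

Uniform-cost search is the special case f(n) = g(n); greedy best-first search uses f(n) = h(n); A*, introduced later, uses f(n) = g(n) + h(n).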
GREEDY BEST-FIRST SEARCH


 It serves as a combination of BFS and DFS.
 A node is selected for expansion based on the evaluation function f(n).
 It tries to expand the node that appears closest to the goal node, on the grounds that this is likely to lead to a solution quickly.
 Thus, it evaluates nodes using just the heuristic function, that is, f(n) = h(n).
 It can be implemented with a priority queue (see the sketch after this list).
 It is not optimal but is often efficient.
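A small, self-contained sketch of greedy best-first search. The graph, step costs, and heuristic values below are made up purely for illustration.

```python
import heapq

GRAPH = {                      # state -> list of (neighbour, step_cost)
    "S": [("A", 2), ("B", 3)],
    "A": [("C", 4)],
    "B": [("C", 1), ("G", 7)],
    "C": [("G", 2)],
    "G": [],
}
H = {"S": 6, "A": 4, "B": 5, "C": 2, "G": 0}   # h(goal) = 0

def greedy_best_first(start, goal):
    frontier = [(H[start], start, [start])]     # frontier ordered by h(n) only
    explored = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt, _cost in GRAPH[state]:         # step costs are ignored by greedy search
            if nxt not in explored:
                heapq.heappush(frontier, (H[nxt], nxt, path + [nxt]))
    return None

print(greedy_best_first("S", "G"))   # ['S', 'A', 'C', 'G']
```

Here greedy search follows the lower heuristic values through A and C and returns S-A-C-G (cost 8), even though S-B-C-G (cost 6) is cheaper, which illustrates why it is not optimal.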



GREEDY BEST-FIRST SEARCH


 Informed search methods may have access to a heuristic function h(n) that estimates the cost of a solution from 'n'.
 The generic best-first search algorithm selects a node for expansion according to an evaluation function.
EXAMPLE

 Greedy best-first search expands the node with minimal h(n). It is not optimal but is often efficient.
 If n is a goal, then h(n) = 0.
PROPERTIES OF GREEDY BEST-FIRST SEARCH
 Complete: no in general; with tree search it can get stuck in loops (graph search is complete in finite spaces).
 Optimal: no.
 Time and space: O(b^m) in the worst case, where b is the branching factor and m is the maximum depth of the search space; a good heuristic can reduce this substantially.
EXAMPLE 2: 8-PUZZLE USING BEST-FIRST SEARCH

    Initial state        Final state
    2 8 3                1 2 3
    1 6 4                8   4
    7   5                7 6 5

Heuristic function: count the number of tiles out of place in a state compared with the goal state.
[Figure: best-first search tree for the 8-puzzle. The start state (h = 4) generates three successors with h = 5, 3, and 5; the open node with the lowest h is expanded next, and the process repeats until the goal state (h = 0) is reached.]
A* SEARCH ALGORITHM

 A* evaluates nodes by combining g(n), the cost to reach the node, and h(n), the estimated cost from the node to the goal: f(n) = g(n) + h(n).
 The node with the lowest f(n) is expanded first.
 A* is complete and optimal when h(n) is admissible, i.e., it never overestimates the true cost of reaching the goal.
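A minimal sketch of A* graph search under the definitions above. The helper names, the tiny example graph, and its heuristic values are assumptions for illustration; they are not the examples from the slides that follow.

```python
import heapq
from itertools import count

def a_star(start, goal, successors, h):
    """A* graph search: expand the frontier node with the lowest f(n) = g(n) + h(n).
    `successors(state)` yields (next_state, step_cost) pairs; `h(state)` estimates
    the remaining cost to the goal and should be admissible for optimality."""
    tie = count()                                           # tie-breaker for the heap
    frontier = [(h(start), next(tie), 0, start, [start])]   # (f, tie, g, state, path)
    best_g = {start: 0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), next(tie), g2, nxt, path + [nxt]))
    return None, float("inf")

# Illustrative data only:
GRAPH = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)], "B": [("G", 3)], "G": []}
H = {"S": 5, "A": 4, "B": 2, "G": 0}   # admissible estimates of the remaining cost
print(a_star("S", "G", lambda s: GRAPH[s], lambda s: H[s]))   # (['S', 'A', 'B', 'G'], 6)
```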
A* SEARCH (EXAMPLE-1, BEST PATH FINDING)

Initial node = A
Final/goal node = J

[Figure: search graph over nodes A-J with edge costs, shown in the original slide.]

Straight-line distance (h) from every node to J:

    Node   h
    A      10
    B      8
    C      5
    D      7
    E      3
    F      6
    G      5
    H      3
    I      1
    J      0
A* SEARCH (EXAMPLE-2, BEST PATH FINDING)

Initial node = I
Final/goal node = g

[Figure: search graph over nodes I, D, A, B, C, E, g with edge costs, shown in the original slide.]

Straight-line distance (h) from every node to g:

    Node   h
    I      5
    D      4.5
    A      4
    B      2
    C      4
    E      2
    g      0
A* SEARCH (EXAMPLE-3, 8-PUZZLE PROBLEM)

    Initial state        Goal state
    2 8 3                1 2 3
    1 6 4                8   4
    7   5                7 6 5
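A sketch that combines A* (f = g + h) with the misplaced-tiles heuristic for the instance shown above. The board encoding and helper names are illustrative assumptions, not the exact procedure from the slides.

```python
import heapq
from itertools import count

INITIAL = (2, 8, 3, 1, 6, 4, 7, 0, 5)   # 0 denotes the blank
GOAL    = (1, 2, 3, 8, 0, 4, 7, 6, 5)

def h(state):
    """Misplaced-tiles heuristic (admissible)."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def neighbours(state):
    """States reachable by sliding one tile into the blank; each move costs 1."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            board = list(state)
            board[i], board[j] = board[j], board[i]
            yield tuple(board)

def a_star_8_puzzle(start):
    tie = count()
    frontier = [(h(start), next(tie), 0, start, [start])]   # (f, tie, g, state, path)
    best_g = {start: 0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path                                     # states from start to goal
        for nxt in neighbours(state):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), next(tie), g + 1, nxt, path + [nxt]))
    return None

solution = a_star_8_puzzle(INITIAL)
print(len(solution) - 1)   # 5 moves for this instance
```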
SEARCHING WITH PARTIAL INFORMATION

 Incompleteness: knowledge of states or actions is incomplete.
 – The agent cannot know which state it is in.
 – The agent cannot calculate exactly which state results from any sequence of actions.
 If the environment is not fully observable or deterministic, then the following types of problems occur:
 Kinds of incompleteness:
 – Sensorless problems
 – Contingency problems
 – Exploration problems
1. Sensorless problems
If the agent has no sensors, then it cannot know its current state, and hence it must find an action sequence that reaches the goal state regardless of its initial state.

Example: the vacuum world has 8 states
– Three actions: Left, Right, Suck
– Goal: clean up all the dirt, i.e. end in state 7 or 8
– Original task environment: observable, deterministic
What if the agent is sensorless and only knows the effects of its actions?

 Belief state space: a belief state is a set of states that represents the agent's current belief about the possible physical states it might be in.

 A solution is a path that leads to a belief state all of whose elements are goal states (see the sketch below).
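A small sketch of these ideas for the sensorless vacuum world. The state encoding (location, dirt in A, dirt in B) and the transition model are assumptions chosen for illustration; they show how an action maps one belief state to another and how a conformant plan can be checked.

```python
from itertools import product

def result(state, action):
    """Deterministic vacuum-world transition model (illustrative assumption)."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        return (loc, False if loc == "A" else dirt_a, False if loc == "B" else dirt_b)
    return state

def update_belief(belief, action):
    """Applying an action maps a belief state (set of possible states) to a new one."""
    return frozenset(result(s, action) for s in belief)

def is_goal(belief):
    """A belief state is a goal if every state in it has no dirt anywhere."""
    return all(not dirt_a and not dirt_b for _, dirt_a, dirt_b in belief)

# With no sensors, the agent starts believing it could be in any of the 8 states.
belief = frozenset(product(("A", "B"), (True, False), (True, False)))
for action in ["Right", "Suck", "Left", "Suck"]:   # a conformant plan
    belief = update_belief(belief, action)
print(is_goal(belief))   # True: the plan works regardless of the initial state
```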
2. Contingency problems
This is when the environment is partially observable or when actions are uncertain. After each action the agent needs to verify what effects that action has caused.
Rather than planning for every possible contingency after an action, it is usually better to start acting and see which contingencies do arise.
This is called interleaving of search and execution.

 A problem is called adversarial if the uncertainty is caused by the actions of another agent.
CONTINGENCY PROBLEM
 Exact prediction is impossible.
 The state is unknown in advance; it may depend on the outcome of actions and on changes in the environment.
 Accessibility of the world: some essential information may be obtained through sensors only at execution time.
 Consequences of an action may not be known at planning time.
 Goal: instead of single action sequences, there are trees of actions.
 Contingency: a branching point in the tree of actions.
 Agent design: different from the previous two cases; the agent must act on incomplete plans.
 Search and execution phases are interleaved.

 Example: vacuum world, where the effect of a Suck action is random.
There is no action sequence that can be calculated at planning time and is guaranteed to reach the goal state.
Limitation: this approach cannot deal with situations in which the environment or the effects of actions are unknown.
3. Exploration problems
This can be considered an extreme case of contingency problems: when the states and actions of the environment are unknown, the agent must act to discover them.
Thank You!!
