
Artificial Intelligence

Unit IV

Problem Solving
In problem solving, we often have to search through many possibilities.
We may know all the actions our robot can perform, but we still have to consider many sequences of actions to find one that achieves the goal.
We may know all the legal moves in a chess game, but we must consider many possibilities to find a good move.

Effectiveness of Search
The effectiveness of a search can be measured in at least three ways:
Does it find a solution at all?
Is it an optimal solution (one with low cost)?
What is the search cost, i.e. the time and memory required to find a solution?

Problem Solving Agent


A Problem Solving Agent is a kind of Goal-Based Agent that decides what to do by finding sequences of actions that lead to a desired goal.
Examples:
(i) Find the shortest path between two cities in a given country.
(ii) Find the shortest tour for a salesman who must visit each city once and then return to the starting city.

Problem Solving Agents


Intelligent agents can solve problems by searching a state-space
State-space Model
the agent's model of the world
usually a set of discrete states
e.g., in driving, the states in the model could be towns/cities

Goal State(s)
a goal is defined as a desirable state for an agent
there may be many states which satisfy the goal test
e.g., drive to a town with a ski-resort

or just one state which satisfies the goal


e.g., drive to Mammoth

Operators (actions, successor function)


operators are legal actions which the agent can take to move from one
state to another

State Space Model


Initial State: The state the agent knows itself to be in (e.g. the initial chess board, the starting point).
Actions/Operators: A set of actions that move the problem from one state to another (e.g. a chess move, a robot action).
Neighbourhood: The set of all possible states reachable from a given state.
State Space: The set of all states reachable from the initial state by any sequence of actions.
Path: Any sequence of actions leading from one state to another.
Goal Test: A test applied to a single state to determine whether it is a goal state (e.g. a winning chess position, the target location).
Path Cost: A function that assigns a cost to each path.

Eight Queen Problem

Tic-Tac-Toe


Uninformed Search
Also known as BLIND SEARCH.
Uninformed search has no information about the number of steps or the path cost from the current state to the goal.
It can only distinguish a goal state from a non-goal state.
There is no bias toward the desired goal.

Uninformed Search

Breadth-First-Search
Depth-First-Search
Uniform-Cost Search
Iterative Deepening Search
Bi-Directional Search


Breadth First Search


Expand the root node.
Then expand all children of the root node.
Then expand all grandchildren, and so on, level by level.
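
A minimal breadth-first search sketch in Python; the example graph, node names, and goal test are illustrative assumptions, not taken from the slides:

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Expand nodes level by level using a FIFO queue (the frontier)."""
    frontier = deque([[start]])      # queue of paths, shallowest first
    explored = {start}
    while frontier:
        path = frontier.popleft()    # take the shallowest path
        node = path[-1]
        if node == goal:             # goal test
            return path
        for child in successors(node):
            if child not in explored:
                explored.add(child)
                frontier.append(path + [child])
    return None                      # no solution

# Illustrative graph (assumed for the example, not from the slides)
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E', 'G'],
         'C': [], 'D': [], 'E': [], 'G': []}
print(breadth_first_search('S', 'G', lambda n: graph[n]))  # ['S', 'B', 'G']
```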

Breadth First Search

(Worked example slides: step-by-step breadth-first expansion of a search tree with start node S and goal node G; figures omitted.)

Time Complexity
Assume (worst case) that there is one goal leaf at the right-hand side of the tree at depth d.
BFS will then generate
b + b^2 + ... + b^d + (b^(d+1) - b)
= O(b^(d+1)) nodes.

(Figure: example search tree with levels d = 0, 1, 2 and the goal G at the deepest level; figure omitted.)
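
As an illustrative check with assumed numbers (not from the slides): for b = 10 and d = 2, this count is 10 + 10^2 + (10^3 - 10) = 1,100 generated nodes; the b^(d+1) term dominates, which is why the complexity is written as O(b^(d+1)).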

Space Complexity
How many nodes can be in the queue (worst case)?
At depth d there can be b^(d+1) unexpanded nodes in the queue, so the space complexity is also O(b^(d+1)).


Depth-First-Search
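
For contrast with the breadth-first sketch above, a minimal depth-first search sketch under the same assumptions (recursive, same illustrative graph):

```python
def depth_first_search(node, goal, successors, path=None, visited=None):
    """Expand the deepest unexpanded node first; backtrack on dead ends."""
    path = (path or []) + [node]
    visited = visited or set()
    visited.add(node)
    if node == goal:                       # goal test
        return path
    for child in successors(node):
        if child not in visited:           # avoid revisiting states
            result = depth_first_search(child, goal, successors, path, visited)
            if result is not None:
                return result
    return None                            # dead end: backtrack

graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E', 'G'],
         'C': [], 'D': [], 'E': [], 'G': []}   # same assumed example graph
print(depth_first_search('S', 'G', lambda n: graph[n]))  # ['S', 'B', 'G']
```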

BFS vs DFS

Iterative Deepening Search
It is similar to depth-first search, but it finds the best depth limit automatically.
It tries all possible depth limits in turn: first depth 0, then depth 1, then depth 2, and so on.
It has the linear memory requirement of depth-first search.
It is guaranteed to find a goal node of minimal depth.
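
A minimal iterative deepening sketch: depth-limited DFS repeated with increasing limits (the successor interface is the same assumption as in the earlier sketches):

```python
def depth_limited_search(node, goal, successors, limit, path=()):
    """DFS that refuses to go deeper than `limit` edges."""
    path = path + (node,)
    if node == goal:
        return list(path)
    if limit == 0:
        return None                        # cut off at this depth
    for child in successors(node):
        if child not in path:              # avoid cycles along the current path
            result = depth_limited_search(child, goal, successors, limit - 1, path)
            if result is not None:
                return result
    return None

def iterative_deepening_search(start, goal, successors, max_depth=50):
    """Try depth limits 0, 1, 2, ... until a goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, goal, successors, limit)
        if result is not None:
            return result                  # goal found at minimal depth
    return None
```

With the same assumed graph interface as the BFS sketch, this finds a shallowest goal while only ever storing the current path.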


Bidirectional Search
Run two simultaneous searches:
one forward from the initial state and one backward from the goal, hoping that the two searches meet in the middle.
For a problem with branching factor b and a solution at depth d, each search only has to proceed to about depth d/2 before the solution is found.
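
A minimal bidirectional breadth-first sketch, assuming an undirected graph so the backward search can use the same neighbour function (all names below are illustrative assumptions):

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Two BFS frontiers, one from each end, until they meet in the middle."""
    if start == goal:
        return [start]
    fwd_parents, bwd_parents = {start: None}, {goal: None}   # for path reconstruction
    fwd_frontier, bwd_frontier = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        for _ in range(len(frontier)):          # expand one BFS level
            node = frontier.popleft()
            for nxt in neighbors(node):
                if nxt not in parents:
                    parents[nxt] = node
                    if nxt in other_parents:    # the two searches have met
                        return nxt
                    frontier.append(nxt)
        return None

    while fwd_frontier and bwd_frontier:
        meet = (expand(fwd_frontier, fwd_parents, bwd_parents)
                or expand(bwd_frontier, bwd_parents, fwd_parents))
        if meet:
            path = []
            node = meet
            while node is not None:             # walk back to the start
                path.append(node)
                node = fwd_parents[node]
            path.reverse()
            node = bwd_parents[meet]
            while node is not None:             # walk forward to the goal
                path.append(node)
                node = bwd_parents[node]
            return path
    return None
```

Each frontier only needs to reach about depth d/2, which is where the O(b^(d/2)) bound on the next slide comes from.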

Bidirectional Search
The time and space complexity of bidirectional search is O(b^(d/2)).
Complete : Yes
Optimal : Yes

Informed Search Strategies
Here we see how information about
the state space can prevent
algorithms from blundering about in
the dark.
Also known as Heuristic Search.

Informed Search Strategies

Hill-Climbing
Best-First Search
Greedy Best-First Search
A* Search
IDA* Search
Local-Beam Search


Heuristic Function
The word Heuristic is derived from the
Greek verb heuriskein, meaning to
find or to discover.
A heuristic function at a node n is an
estimate of the optimum cost from the
current node to a goal.
Denoted by h(n).
h(n)= Estimated cost of the cheapest
path from node n to goal node.

Heuristic Function
Some search strategies combine the heuristic estimate with the cost already incurred:
Evaluation function = cost from the start state to the current state + estimated distance to the goal.

8-Puzzle
In this case only the 3, 8 and 1 tiles are misplaced, by 2, 3 and 3 squares respectively, so the heuristic function evaluates to 8.
In other words, the heuristic is telling us that it thinks a solution is available in just 8 more moves.
h(n) = 8.
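
The count above (2 + 3 + 3 = 8) is the sum of per-tile displacements, i.e. a Manhattan-distance heuristic. A minimal sketch, assuming the board is a 9-element tuple read row by row with 0 for the blank (the example states below are assumed, not the exact board from the slides):

```python
def manhattan_distance(state, goal):
    """h(n): sum over tiles of |row difference| + |column difference|."""
    total = 0
    for tile in range(1, 9):                 # ignore the blank (0)
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

# Illustrative states (assumed, not the board shown on the slides)
goal    = (1, 2, 3, 8, 0, 4, 7, 6, 5)
current = (2, 8, 3, 1, 6, 4, 7, 0, 5)
print(manhattan_distance(current, goal))     # -> 5 for these assumed boards
```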

Greedy Best-First Search


Expand the node that appears to be closest to the goal node.
The most promising node is selected using a heuristic evaluation function: f(n) = h(n).
Usually it is difficult to compute the exact distance to the goal, so a heuristic function is used to estimate the cost from n to the goal.
Example: hSLD(n) = straight-line distance from n to Bucharest.
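
A minimal greedy best-first sketch in Python, ordering the frontier purely by h(n); the priority-queue approach and the successor/heuristic interfaces are assumptions for illustration, not from the slides:

```python
import heapq

def greedy_best_first_search(start, goal, successors, h):
    """Always expand the frontier node with the smallest heuristic value h(n)."""
    frontier = [(h(start), [start])]          # priority queue ordered by h only
    explored = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in successors(node):
            if child not in explored:
                heapq.heappush(frontier, (h(child), path + [child]))
    return None
```

Because the ordering ignores the cost already paid, g(n), this can return suboptimal paths; A*, described next, fixes that by using f(n) = g(n) + h(n).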


A*
A* includes in its evaluation function the cost from the start node to the current node, in addition to the estimated cost from the current node to the goal.
Evaluation function: f(n) = g(n) + h(n)
where
g(n) = cost so far to reach n,
h(n) = estimated cost from n to the goal,
f(n) = estimated total cost of the path from the start node n0 through n to the goal.
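
A minimal A* sketch following f(n) = g(n) + h(n); the weighted-graph successor interface and the bookkeeping names are assumptions for illustration:

```python
import heapq

def a_star_search(start, goal, successors, h):
    """successors(n) yields (child, step_cost) pairs; h(n) estimates cost to goal."""
    frontier = [(h(start), 0, [start])]             # entries are (f, g, path)
    best_g = {start: 0}                             # cheapest known g per node
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, g                          # solution path and its cost
        if g > best_g.get(node, float('inf')):
            continue                                # stale queue entry, skip
        for child, cost in successors(node):
            new_g = g + cost
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g + h(child), new_g, path + [child]))
    return None, float('inf')
```

The best_g map (an assumed name) discards stale queue entries so each node is only expanded with its cheapest known g.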


Optimality of A*


A* and Depth First

Iterative Deepening A*
Like iterative deepening depth-first search, but the depth bound is replaced by an f-limit.
Start with f-limit = f(start).
Prune any node with f(node) > f-limit.
The next f-limit is the minimum f-cost of any node that was pruned.
If the search does not succeed, determine the lowest f-cost among the nodes that were visited but not expanded, use it as the new cut-off value, and do another depth-first search.
Repeat this procedure until a goal node is found.
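
A minimal IDA* sketch following the steps above, reusing the same assumed successor/heuristic interface as the A* sketch:

```python
def ida_star(start, goal, successors, h):
    """Iteratively deepen on an f-limit instead of a depth limit."""

    def dfs(path, g, f_limit):
        node = path[-1]
        f = g + h(node)
        if f > f_limit:
            return f, None                 # pruned: report the f that exceeded the limit
        if node == goal:
            return f, list(path)
        next_limit = float('inf')
        for child, cost in successors(node):
            if child not in path:          # avoid cycles along the current path
                t, found = dfs(path + [child], g + cost, f_limit)
                if found is not None:
                    return t, found
                next_limit = min(next_limit, t)
        return next_limit, None            # smallest f among pruned descendants

    f_limit = h(start)                     # start with f-limit = f(start)
    while True:
        f_limit, solution = dfs([start], 0, f_limit)
        if solution is not None:
            return solution
        if f_limit == float('inf'):
            return None                    # no node was pruned: no solution exists
```

Each iteration repeats earlier work, but like iterative deepening it only ever stores the current path, which is its main memory advantage over A*.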


Questions from Previous Exams of MIT
Difference between DFS and BFS, with examples.
Describe various informed search strategies.
What is Iterative Deepening? Write the various steps of IDA*.
What do you mean by heuristic search techniques? How are they different from uninformed search techniques?
Explain the A* algorithm in detail. Also discuss the limitations of this algorithm.
Explain Breadth-First Search with a suitable example.
Discuss the IDA* algorithm in detail.
Explain A* in brief. Give its algorithmic complexity.
Explain the AO* algorithm with a suitable example.
