
Artificial Intelligence

Problem Solving as Searching – Informed Search
Searching Strategies

In Today's Lecture
Learning Outcomes

Uninformed Search Strategies / Types

 Searching for the correct solution or goal among alternatives
 Implement search algorithms
 Do we need to find the solution or approximate it?
 Is the algorithm guaranteed to terminate?
 If a solution is found, is it the optimal solution?
 What is the complexity of the search process?
 How can we reduce search complexity?

Introduction

Problem Solving Agent

 Simple reflex agents base their actions on a direct mapping from states to actions
 Inefficient when these mappings are very large
 Goal-based agents succeed by considering future actions and the desirability of their outcomes
 Problem-solving agents, a type of goal-based agent, decide what to do by finding sequences of actions that lead to desirable states
 Like a checkmate state in chess
 An AI agent maximizes its performance measure, which is simplified if the agent can adopt a goal and aim at achieving it
Problem Solving as Search

 Define the problem through:
(a) Goal formulation
(b) Problem formulation

 Solve the problem as a 2-stage process:
(a) Search: looking for a sequence of actions that reaches the goal
(b) Execute the solution found

Problem Solving Agent

 Initial State and Goal State
 Goal Formulation
 First step in problem solving
 Identify the goals of an agent
 Problem Formulation
 Problem of deciding what actions and states to consider, given a goal
 We must eliminate all irrelevant details in the solution
 Search
 Process for finding a sequence of actions that leads to the goal
 Solution/Execute
 A correct or rational sequence of actions

Problem Solving as Search

Problem Formulation
1. Initial state: the state in which the agent starts
2. States: all states reachable from the initial state by any sequence of actions (state space)
3. Actions: possible actions available to the agent at a state s; Actions(s) returns the set of actions that can be executed in state s (action space)
4. Transition model: a description of what each action does: Result(s, a)
5. Goal test: determines whether a given state is a goal state
6. Path cost: function that assigns a numeric cost to a path w.r.t. the performance measure

Examples: 8 Queens

 States: all arrangements of 0 to 8 queens on the board
 Initial state: no queens on the board
 Actions: add a queen to any empty square
 Transition model: the updated board
 Goal test: 8 queens on the board with none attacked
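The goal test above can be written directly; this is a minimal sketch (the function names `attacks` and `goal_test` are illustrative, not from the slides):

```python
from itertools import combinations

def attacks(q1, q2):
    """Two queens, each given as (row, col), attack each other if they
    share a row, a column, or a diagonal."""
    r1, c1 = q1
    r2, c2 = q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def goal_test(queens):
    """Goal: 8 queens placed and no pair attacks each other."""
    return len(queens) == 8 and not any(
        attacks(a, b) for a, b in combinations(queens, 2))
```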

Examples: 8 Puzzle
http://www.permadi.com/java/puzzle8/

Start State and Goal State (figure)

Examples

 States: location of each of the 8 tiles and the blank in the 3x3 grid
 Initial state: any state
 Actions: move the blank Left, Right, Up, or Down
 Transition model: given a state and an action, returns the resulting state
 Goal test: does the state match the goal state?
 Path cost: total number of moves; each move costs 1
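The 8-puzzle transition model can be sketched as follows, assuming a state is a 9-tuple in row-major order with 0 for the blank (the `successors` name and tuple encoding are illustrative choices, not fixed by the slides):

```python
def successors(state):
    """Transition model for the 8-puzzle: returns (action, resulting_state)
    pairs, where the action names the direction the blank moves."""
    moves = {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}
    blank = state.index(0)
    result = []
    for action, delta in moves.items():
        target = blank + delta
        if target < 0 or target > 8:
            continue
        # forbid wrapping around a row edge on Left/Right moves
        if delta in (-1, 1) and target // 3 != blank // 3:
            continue
        new = list(state)
        new[blank], new[target] = new[target], new[blank]
        result.append((action, tuple(new)))
    return result
```

A blank in the center has four legal moves; a blank in a corner has only two.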

8 Tile Puzzle (figure)

Examples of Search Agents: Romania

(map of Romania with road distances in km)
Examples of Search Agents

 States: In(City), where City ∈ {Arad, Zerind, Bucharest, ...}
 Initial state: In(Arad)
 Actions: Go(Sibiu), etc.
 Transition model: Result(In(Arad), Go(Zerind)) = In(Zerind)
 Goal test: In(Bucharest)
 Path cost: path length in kilometers
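The path-cost function for the Romania example can be sketched over a small fragment of the road map (only a few roads from the standard textbook map are included here; `roads` and `path_cost` are illustrative names):

```python
# Fragment of the Romania road map; distances in km, as in the
# standard textbook example.  This is not the complete map.
roads = {
    ('Arad', 'Zerind'): 75, ('Arad', 'Sibiu'): 140,
    ('Arad', 'Timisoara'): 118, ('Sibiu', 'Fagaras'): 99,
    ('Sibiu', 'Rimnicu Vilcea'): 80, ('Fagaras', 'Bucharest'): 211,
    ('Rimnicu Vilcea', 'Pitesti'): 97, ('Pitesti', 'Bucharest'): 101,
}

def path_cost(path):
    """Sum of step costs along a path such as ['Arad', 'Sibiu', 'Fagaras']."""
    total = 0
    for a, b in zip(path, path[1:]):
        # roads are undirected, so look the pair up in either order
        total += roads.get((a, b)) or roads[(b, a)]
    return total
```

Comparing the two Arad-to-Bucharest routes shows why path cost matters: the route through Rimnicu Vilcea and Pitesti is shorter than the one through Fagaras even though it has more steps.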
State Space vs. Search Space

 State space: the set of physical configurations
 Search space: an abstract configuration represented by a search tree or graph of possible solutions

 Search tree:
 Root: the initial state
 Branches: actions
 Nodes: results of actions; a node has a parent, children, depth, path cost, and an associated state
 Expand: a function that, given a node, creates all its child nodes
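The node bookkeeping listed above can be sketched as a small class (the `Node`/`expand` names and the `successors` callback signature are assumptions for illustration):

```python
class Node:
    """A search-tree node: a state plus parent, action, depth, path cost."""
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.depth = 0 if parent is None else parent.depth + 1
        self.path_cost = (step_cost if parent is None
                          else parent.path_cost + step_cost)

def expand(node, successors):
    """Create all child nodes; successors(state) yields
    (action, next_state, step_cost) triples."""
    return [Node(s, node, a, c) for a, s, c in successors(node.state)]
```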
“When solving problems, dig at the roots instead of just
hacking at the leaves”
Anthony J. D'Angelo
The College Blue Book

Search and Trees...

The order of node expansion is determined by the search strategy.
Strategies Evaluated According to:

 Completeness: does the strategy always find a solution if one exists?
 Optimality: does the strategy find the optimal (lowest-cost) solution?
 Time complexity: how long does it take to find a solution? (number of nodes generated)
 Space complexity: how much memory is needed to perform the search? (maximum number of nodes in memory)
Strategies Evaluated According to:

Cost variables
 Time: number of nodes generated
 Space: maximum number of nodes stored in memory
 Branching factor b: maximum number of successors of any node
 Depth d: depth of the least-cost solution, i.e. the shallowest goal node
 Path length m: maximum length of any path in the state space

Searching the Search Space

Uninformed / Blind Search
 Breadth-first
 Uniform-cost
 Depth-first
 Iterative deepening depth-first
 Bidirectional
 Branch and bound

Informed / Heuristic Search
 Greedy best-first search
 Hill climbing
 A* search
 Beam search

Uninformed Search

 Uses no domain knowledge!
 Has access only to the problem definition/goal
 We are not informed about the quality of our choices
 Also called blind search

Informed Search

 Has an idea of the goal
 We are informed about the quality of our choices

Breadth First Search

 Strategy in which the root node is expanded first, then all the successors of the root node, then their successors, and so on.

 Implementation:
Fringe = First In First Out (FIFO) queue, i.e., put successors at the back/end of the queue
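The FIFO-fringe idea can be sketched as a short function; the `successors` callback (state to list of neighbour states) is an assumed interface, not part of the slides:

```python
from collections import deque

def bfs(start, goal, successors):
    """Breadth-first search with a FIFO fringe; returns a path or None."""
    fringe = deque([[start]])          # queue of paths
    visited = {start}
    while fringe:
        path = fringe.popleft()        # FIFO: shallowest node first
        state = path[-1]
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in visited:     # duplicate check keeps the tree finite
                visited.add(nxt)
                fringe.append(path + [nxt])
    return None
```

On the small tree used in the trace below (A expands to B and C, and so on), this finds the shallowest goal first.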

Breadth First Search: trace

 Expand A: fringe = [B, C]. Is B a goal state?
 Expand B: fringe = [C, D, E]. Is C a goal state?
 Expand C: fringe = [D, E, F, G]. Is D a goal state?

BFS

• Time complexity
– assume (worst case) that there is 1 goal leaf at the rightmost position at depth d
– so BFS will expand all nodes:
1 + b + b^2 + ... + b^d = O(b^d)

• Space complexity
– how many nodes can be in the queue (worst case)?
– at depth d-1 there are b^d unexpanded nodes in the queue = O(b^d)
Examples of Time and Memory Requirements: Breadth-First Search
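The requirements can be illustrated with a few lines of arithmetic; this sketch assumes a branching factor of b = 10 (a common choice in textbook versions of this table, not a value fixed by the slides):

```python
def bfs_nodes(b, d):
    """Total nodes generated by BFS to depth d: 1 + b + b^2 + ... + b^d."""
    return sum(b ** i for i in range(d + 1))

# With b = 10 the node count grows by roughly a factor of 10 per level,
# so both time and memory quickly become prohibitive.
for depth in (2, 4, 6):
    print(depth, bfs_nodes(10, depth))
```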

Breadth-First Search – Properties

 Solution length: optimal
 Expand each node once (can check for duplicates)
 Search time: O(b^d)
 Memory required: O(b^d)
 Drawback: requires exponential space
Depth First Search

 Expand the deepest unexpanded node

 Implementation:
Fringe = Last In First Out (LIFO) stack, i.e., put successors at the front

Is 'A' a goal state?
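The LIFO-fringe version differs from BFS only in which end of the fringe is popped; this sketch adds a depth bound (as the later properties slide notes, DFS is incomplete without one), and the `successors` callback is again an assumed interface:

```python
def dfs(start, goal, successors, limit=50):
    """Depth-first search with a LIFO fringe (a list used as a stack)."""
    fringe = [[start]]                 # stack of paths
    while fringe:
        path = fringe.pop()            # LIFO: deepest node first
        state = path[-1]
        if state == goal:
            return path
        if len(path) <= limit:
            # push in reverse so the leftmost successor is expanded first
            for nxt in reversed(successors(state)):
                if nxt not in path:    # avoid cycles along the current path
                    fringe.append(path + [nxt])
    return None
```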
Depth First Search: trace

 queue = [B, C]. Is B a goal state?
 queue = [D, E, C]. Is D a goal state?
 queue = [H, I, E, C]. Is H a goal state?
 queue = [I, E, C]. Is I a goal state?
 queue = [E, C]. Is E a goal state?
 queue = [J, K, C]. Is J a goal state?
 queue = [K, C]. Is K a goal state?
 queue = [C]. Is C a goal state?
 queue = [F, G]. Is F a goal state?
 queue = [L, M, G]. Is L a goal state?
 queue = [M, G]. Is M a goal state?

DFS Properties

 Time complexity (d is the deepest path)
• assume (worst case) that there is 1 goal leaf at the rightmost position
• so DFS will expand all nodes:
1 + b + b^2 + ... + b^d = O(b^d)

 Space complexity
• how many nodes can be on the stack (worst case)?
• at each depth l < d we keep b-1 nodes
• at depth d we have b nodes
• total = (d-1)*(b-1) + b = O(bd)
Depth First Search - Properties

 Non-optimal solution path
 Incomplete unless there is a depth bound
 Exponential time
 Linear space
Comparison: BFS vs. DFS

 Same worst-case time complexity, but in the worst case BFS is always better than DFS
 Sometimes, on average, DFS is better if there are many goals, no loops, and no infinite paths
 BFS is much worse memory-wise: DFS is linear space, while BFS may store the whole search space
 In general:
 BFS is better if the goal is not deep, or if there are infinite paths, many loops, or a small search space
 DFS is better if there are many goals and not many loops
 DFS is much better in terms of memory

Iterative Deepening Depth First Search

 Finds the best depth limit
 It does this by gradually increasing the limit (first 0, then 1, then 2, and so on) until a goal is found
 Combines the benefits of both BFS and DFS
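The "increase the limit until a goal is found" loop can be sketched directly; `depth_limited` and `iterative_deepening` are illustrative names, and the `successors` callback is the same assumed interface as before:

```python
def depth_limited(state, goal, successors, limit):
    """DFS that refuses to descend below `limit`; returns a path or None."""
    if state == goal:
        return [state]
    if limit == 0:
        return None
    for nxt in successors(state):
        found = depth_limited(nxt, goal, successors, limit - 1)
        if found is not None:
            return [state] + found
    return None

def iterative_deepening(start, goal, successors, max_limit=50):
    """Run depth-limited DFS with limit 0, 1, 2, ... until a goal is found."""
    for limit in range(max_limit + 1):
        result = depth_limited(start, goal, successors, limit)
        if result is not None:
            return result
    return None
```

Because the shallowest limit that succeeds is tried first, the returned path is as shallow as BFS would find, while memory use stays linear like DFS.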
Iterative Deepening Search

Traces for depth limits L = 0, 1, 2, 3 (figures)
Bidirectional Search

 Idea: simultaneously search forward from S and backwards from G
 Stop when the two searches "meet in the middle"
 Need to keep track of the intersection of the 2 open sets of nodes
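The meet-in-the-middle check can be sketched as two breadth-first frontiers advanced in alternation; this minimal version only reports whether the frontiers meet (a full implementation would also reconstruct the joined path), and it assumes an undirected graph given as an adjacency dict:

```python
from collections import deque

def bidirectional(start, goal, neighbours):
    """Search forward from start and backward from goal; return True
    once the two frontiers intersect."""
    if start == goal:
        return True
    front, back = {start}, {goal}
    f_queue, b_queue = deque([start]), deque([goal])
    while f_queue and b_queue:
        # advance each frontier by one expansion, checking for overlap
        for queue, seen, other in ((f_queue, front, back),
                                   (b_queue, back, front)):
            if not queue:
                return False
            state = queue.popleft()
            for nxt in neighbours.get(state, []):
                if nxt in other:       # the two searches meet
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False
```

Each frontier only needs to reach roughly half the solution depth, which is the source of the technique's savings.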

Bi-Directional Search (figure)

THANKS!
Any questions?
You can find me at
Facebook/babaryaqoob92
Twitter/byaqoobkhan

AND THAT IS FAREWELL TO WEEK 3 – 4 
