AI - Module 2 - Week 3


Subject Name: Artificial Intelligence (AI)

Unit No: 2. Uninformed Search Strategies


Index – Module 2 - Problem Solving

Lecture 7: Formulating Problem, Example Problems
Lecture 8: Uninformed Search Methods: BFS, Depth Limited Search
Lecture 9: Depth First Iterative Deepening (DFID)
Lecture 10: Informed Search
Unit No: 2. Unit name: Intelligent Agents

Lecture No: 7
Formulating Problems, Components of a Problem

A problem can be defined formally by five components:
• Initial State
• Actions
• Transition model
• Goal Test
• Path Cost
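As a concrete sketch, these five components can be bundled into a single Python object. The names below (SearchProblem, actions, result, goal_test, step_cost) are illustrative, not part of the slides:

from dataclasses import dataclass
from typing import Any, Callable, Iterable

@dataclass
class SearchProblem:
    """A search problem bundled as its five components (illustrative sketch)."""
    initial_state: Any
    actions: Callable[[Any], Iterable[Any]]        # actions applicable in a state
    result: Callable[[Any, Any], Any]              # transition model: result(s, a) -> s'
    goal_test: Callable[[Any], bool]               # is this a goal state?
    step_cost: Callable[[Any, Any, Any], float]    # c(s, a, s') >= 0

    def path_cost(self, path):
        """Additive path cost for a path given as a list of (s, a, s') triples."""
        return sum(self.step_cost(s, a, s2) for (s, a, s2) in path)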

Example: Traveling in Romania

Problem: On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest. Find a short route to drive to Bucharest.

Formulate problem:
  states: various cities
  actions: drive between cities

Formulate goal:
  be in Bucharest

Formulate solution:
  sequence of cities (e.g., Arad, Sibiu, Fagaras, Bucharest)
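A minimal sketch of this formulation in Python, using only a small subset of the textbook road map (distances in km are the usual values for this example):

# A subset of the Romania road map; the full problem has 20 cities.
road_map = {
    "Arad":           {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu":          {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101},
}

initial_state = "Arad"

def actions(city):
    """Drivable roads out of a city."""
    return list(road_map.get(city, {}))

def result(city, neighbour):
    """Driving to a neighbouring city puts the agent in that city."""
    return neighbour

def goal_test(city):
    return city == "Bucharest"

def step_cost(city, action, next_city):
    return road_map[city][next_city]

With these pieces, the solution Arad, Sibiu, Fagaras, Bucharest has path cost 140 + 99 + 211 = 450.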
• Deterministic, fully observable environment =⇒ single-state problem
  Agent knows exactly which state it will be in; solution is a sequence of actions

• Non-observable environment =⇒ conformant problem
  Agent may be in any of a number of states; solution, if any, is a sequence of actions

• Nondeterministic and/or partially observable environment =⇒ contingency problem
  Percepts provide new information about the current state; solution is a tree or policy; often interleave search and execution
State-Space Problem Formulation

 A problem is defined by four items:


 initial state e.g., "at Arad"
 actions or successor function
 S(x) = set of action–state pairs
   e.g., S(Arad) = {⟨Arad → Zerind, Zerind⟩, … }
 goal test (or set of goal states)
   e.g., x = "at Bucharest", Checkmate(x)
 path cost (additive)
 e.g., sum of distances, number of actions executed, etc.
 c(x,a,y) is the step cost, assumed to be ≥ 0
 A solution is a sequence of actions leading from the initial state to a goal
state

Example: 8-queens problem
State-Space Problem Formulation

 States: any arrangement of n ≤ 8 queens, or arrangements of n ≤ 8 queens in the leftmost n columns, 1 per column, such that no queen attacks any other
 Initial state: no queens on the board
 Actions: add a queen to any empty square, or add a queen to the leftmost empty column such that it is not attacked by any other queen
 Goal test: 8 queens on the board, none attacked
 Path cost: 1 per move
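A small illustrative sketch of the incremental (leftmost-column) formulation, representing a state as a tuple of row indices, one per already-filled column; the helper names are hypothetical:

def attacks(col1, row1, col2, row2):
    """True if queens at (col1, row1) and (col2, row2) attack each other."""
    return (row1 == row2                                  # same row
            or abs(row1 - row2) == abs(col1 - col2))      # same diagonal

def safe_rows(state):
    """Actions for the leftmost-empty-column formulation.

    state is a tuple of row indices, one per already-placed column;
    the returned rows are the non-attacked squares of the next column.
    """
    next_col = len(state)
    return [r for r in range(8)
            if not any(attacks(c, q, next_col, r) for c, q in enumerate(state))]

def goal_test(state):
    return len(state) == 8   # 8 queens placed, none attacked by construction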

Restricted form of general agent

Formulate problems as search over a space of states (possibly) reachable by sequences of actions.
Define an initial state and a class of goal states.
The solution of the original problem is extracted from the goal state or from the path to it.
Restricted form of general agent

function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
    static: seq, an action sequence, initially empty
            state, some description of the current world state
            goal, a goal, initially null
            problem, a problem formulation

    state ← UPDATE-STATE(state, percept)
    if seq is empty then
        goal ← FORMULATE-GOAL(state)
        problem ← FORMULATE-PROBLEM(state, goal)
        seq ← SEARCH(problem)
    action ← RECOMMENDATION(seq, state)
    seq ← REMAINDER(seq, state)
    return action

Note: this is offline problem solving; the solution is executed "eyes closed."
Online problem solving involves acting without complete knowledge.
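Read as ordinary Python, the same agent might look like the sketch below. It assumes the helper functions (update_state, formulate_goal, formulate_problem, search) are supplied from elsewhere, and treats RECOMMENDATION/REMAINDER simply as "first action of the sequence" and "rest of the sequence":

class SimpleProblemSolvingAgent:
    """Rough Python rendering of SIMPLE-PROBLEM-SOLVING-AGENT (offline solver)."""

    def __init__(self, update_state, formulate_goal, formulate_problem, search):
        self.seq = []            # action sequence, initially empty
        self.state = None        # current world state description
        self.update_state = update_state
        self.formulate_goal = formulate_goal
        self.formulate_problem = formulate_problem
        self.search = search

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        if not self.seq:
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.seq = self.search(problem) or []
        if not self.seq:
            return None                                    # no solution found
        action, self.seq = self.seq[0], self.seq[1:]       # RECOMMENDATION / REMAINDER
        return action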



Example: Vacuum World

Single-state problem: initial state = 5, goal states = {7, 8}
(States 1–8 are the eight configurations of the two-cell vacuum world shown in the figure.)

Solution? [Right, Suck]


Example: Vacuum World

Conformant problem: initial state = {1, 2, 3, 4, 5, 6, 7, 8}
Right =⇒ {2, 4, 6, 8}, Left =⇒ {1, 3, 5, 7}, Suck =⇒ {4, 5, 7, 8}

Solution? [Right, Suck, Left, Suck]


Example: Vacuum World

Contingency problem: initial state = 5
Suck occasionally fails. Local sensing: dirt, location.

Solution? [Right, if dirt then Suck]


We start by considering the simpler cases in which the environment
is fully observable, static and deterministic

In such environments the following holds for an agent A:


• A’s world is representable by a discrete set of states

• A’s actions are representable by a discrete set of operators

• the next world state is completely determined by the current state and A's actions

• the world's state transitions are caused exclusively by A's actions
Single-State Problem Formulation

Formally, a problem is defined by four components:
• An initial state (e.g., In(Arad))
• A successor function S returning sets of action–state pairs (e.g., S(Arad) = {⟨GoTo(Zerind), In(Zerind)⟩, . . .})
• A goal test, explicit (e.g., x = In(Bucharest)) or implicit (e.g., NoDirt(x))
• A path cost (e.g., sum of distances, number of actions executed, . . .), usually additive and given as c(x, a, y), the step cost from x to y by action a, assumed to be ≥ 0

A solution is a sequence of actions leading from the initial state to a goal state
Selecting a State Space

Since the real world is absurdly complex, the state space must be abstracted for problem solving.
• Abstract state = set of real states

• (Abstract) action = complex combination of real actions; e.g., GoTo(Zerind) from Arad represents a complex set of possible routes, detours, rest stops, etc.

• For guaranteed realizability, any real state corresponding to In(Arad) must get to some real state corresponding to In(Zerind)

• Each abstract action should be "easier" than the original problem!

• (Abstract) solution = set of real paths that are solutions in the real world
Example: Vacuum World State-Space Graph

(Figure: the state-space graph of the two-cell vacuum world, with edges labeled L, R, S for the actions Left, Right, Suck.)

States? ⟨dirt flag, robot location⟩ (ignore dirt amount)
Actions? Left, Right, Suck, NoOp
Goal test? ¬dirty
Path cost? 1 per action (0 for NoOp)
Formulating the Problem as a Labeled Graph

In the graph:
• each node represents a possible state
• a node is designated as the initial state
• one or more nodes represent goal states, states in which the agent's goal is considered accomplished
• each edge represents a state transition caused by a specific agent action
• associated to each edge is the cost of performing that transition
How do we reach a goal state?

(Figure: a small labeled graph with initial state S, goal states F and G, and edge costs between 2 and 7.)

There may be several possible ways, or none! Factors to consider:
• cost of finding a path
• cost of traversing a path
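One simple way to hold such a labeled graph in memory is a dictionary mapping each state to its outgoing edges and their costs. The sketch below reuses the node names of the small graph above, but the edge costs are placeholders since the figure is not fully recoverable:

# Labeled graph as an adjacency map: state -> {successor: step cost}.
graph = {
    "S": {"A": 4, "D": 5},
    "A": {"B": 4, "D": 5},
    "B": {"C": 4, "E": 5},
    "C": {},
    "D": {"E": 2},
    "E": {"B": 5, "F": 4, "G": 3},
    "F": {},
    "G": {},
}
initial_state = "S"
goal_states = {"F", "G"}

def path_cost(path):
    """Cost of traversing a path given as a list of states, e.g. ["S", "D", "E", "G"]."""
    return sum(graph[a][b] for a, b in zip(path, path[1:]))

For example, with the placeholder costs above, path_cost(["S", "D", "E", "G"]) = 5 + 2 + 3 = 10.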
Problem Solving as Search

Search space: the set of states reachable from an initial state S0 via a (possibly empty, finite, or infinite) sequence of state transitions.

To achieve the problem's goal:

1. search the space for an (ideally optimal) sequence of transitions starting from S0 and leading to a goal state
2. execute (in order) the actions associated to each transition in the identified sequence

For contingency problems, the two steps above need to be interleaved.


Example: The 8-Puzzle

Problem: go from state S to state G.

    S:  2 8 3        G:  1 2 3
        1 6 4            8   4
        7   5            7 6 5

(Figure: the first few levels of the 8-puzzle search tree generated from S by the moves L, R, U, D.)
Example: The 8-Puzzle

States: configurations of tiles
Operators: move one tile Up/Down/Left/Right

Note:
• There are 9! = 362,880 possible states: all permutations of {0, 1, 2, 3, 4, 5, 6, 7, 8}, where 0 is the empty space
• Not all states are directly reachable from a given state

How can an artificial agent represent the states and the state space
for this problem?
1. Choose an appropriate data structure to represent the world states

2. Define each operator as a precondition/effects pair, where the
   • precondition holds exactly in the states the operator is applicable to
   • effects describe how a state changes into a successor state by the application of the operator

3. Specify an initial state

4. Provide a description of the goal, to check if a reached state is a goal state
Formulating the 8-Puzzle Problem

States: each represented by a 3 × 3 array of numbers in [0 . . . 8], where value 0 is for the empty cell

    2 8 3                    2 8 3
    1 6 4    becomes   A =   1 6 4
    7   5                    7 0 5

Operators: 24 operators of the form OP_{r,c,d}, where r, c ∈ {1, 2, 3} and d ∈ {L, R, U, D}
If the empty space is at position (r, c), OP_{r,c,d} moves it in direction d
Example: OP_{3,2,L}

    2 8 3    OP_{3,2,L}    2 8 3
    1 6 4       =⇒         1 6 4
    7 0 5                  0 7 5
Example: OP_{3,2,R}

    2 8 3    OP_{3,2,R}    2 8 3
    1 6 4       =⇒         1 6 4
    7 0 5                  7 5 0

Preconditions: A[3, 2] = 0
Effects: A[3, 2] ← A[3, 3],  A[3, 3] ← 0

We have 24 operators in this problem formulation . . . 20 too many!
A Better Formulation

States: each represented by a pair (A, (i, j)) where:
• A is a 3 × 3 array of numbers in [0 . . . 8]
• (i, j) is the position of the empty space (0) in the array

    2 8 3                        2 8 3
    1 6 4    becomes   (         1 6 4    , (3, 2) )
    7   5                        7 0 5

Operators: 4 operators of the form OP_d, where d ∈ {L, R, U, D}
OP_d moves the empty space in the direction d
Example: OP_L

    (   2 8 3                        2 8 3
        1 6 4    , (3, 2) )   =⇒   ( 1 6 4    , (3, 1) )
        7 0 5                        0 7 5

Let (r0, c0) be the position of 0 in A

Preconditions: c0 > 1
Effects: A[r0, c0] ← A[r0, c0 − 1],  A[r0, c0 − 1] ← 0,  (r0, c0) ← (r0, c0 − 1)
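The OP_d operators translate directly into a precondition check plus an effect. A minimal Python sketch (0-indexed positions, state = (A, (r0, c0)) with A a list of lists; names are illustrative):

# Directions map to the displacement of the empty cell (row delta, column delta).
MOVES = {"L": (0, -1), "R": (0, 1), "U": (-1, 0), "D": (1, 0)}

def apply_op(state, d):
    """Apply OP_d to state = (A, (r0, c0)); return the successor state or None.

    Precondition: the empty cell can move in direction d without leaving the board.
    Effect: swap the empty cell with the tile it moves onto.
    """
    A, (r0, c0) = state
    dr, dc = MOVES[d]
    r1, c1 = r0 + dr, c0 + dc
    if not (0 <= r1 < 3 and 0 <= c1 < 3):      # precondition violated
        return None
    B = [row[:] for row in A]                  # copy, so the old state is kept
    B[r0][c0], B[r1][c1] = B[r1][c1], 0        # the tile slides into the old blank
    return (B, (r1, c1))

# e.g. apply_op(([[2, 8, 3], [1, 6, 4], [7, 0, 5]], (2, 1)), "L")
# returns ([[2, 8, 3], [1, 6, 4], [0, 7, 5]], (2, 0)), matching the OP_L example above.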
The Water Jugs Problem

(Figure: a 3-gallon jug and a 4-gallon jug.)

Goal: get exactly 2 gallons of water into the 4-gallon jug.

States: determined by the amount of water in each jug

State representation: two real-valued variables, J3 and J4, indicating the amount of water in the two jugs, with the constraints:
    0 ≤ J3 ≤ 3,    0 ≤ J4 ≤ 4

Initial state description: J3 = 0, J4 = 0
Goal state description (non-exhaustive): J4 = 2
The Water Jugs Problem: Operators

E4: empty jug4 on the ground
    precond: J4 > 0                    effect: J′4 = 0
P4-3: pour water from jug4 into jug3 until jug3 is full
    precond: J3 < 3, J4 ≥ 3 − J3       effect: J′3 = 3, J′4 = J4 − (3 − J3)
P3-4: pour water from jug3 into jug4 until jug4 is full
    precond: J4 < 4, J3 ≥ 4 − J4       effect: J′4 = 4, J′3 = J3 − (4 − J4)
E3-4: pour water from jug3 into jug4 until jug3 is empty
    precond: J3 > 0, J3 + J4 < 4       effect: J′4 = J3 + J4, J′3 = 0
. . .
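A sketch of these operators in Python, with a state written as the pair (J3, J4); only the four operators listed above are encoded, and the remaining ones (the fill operators F3, F4, etc.) would be added in the same way:

# State: (j3, j4) = gallons currently in the 3-gallon and 4-gallon jugs.
OPERATORS = [
    ("E4",   lambda j3, j4: j4 > 0,                  lambda j3, j4: (j3, 0)),
    ("P4-3", lambda j3, j4: j3 < 3 and j4 >= 3 - j3, lambda j3, j4: (3, j4 - (3 - j3))),
    ("P3-4", lambda j3, j4: j4 < 4 and j3 >= 4 - j4, lambda j3, j4: (j3 - (4 - j4), 4)),
    ("E3-4", lambda j3, j4: j3 > 0 and j3 + j4 < 4,  lambda j3, j4: (0, j3 + j4)),
]

def successors(state):
    """All (operator name, next state) pairs applicable in the given state."""
    j3, j4 = state
    return [(name, effect(j3, j4))
            for name, precond, effect in OPERATORS if precond(j3, j4)]

def goal_test(state):
    return state[1] == 2          # exactly 2 gallons in the 4-gallon jug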
Problem Search Graph

(Figure: the search graph for the water jugs problem. From the initial state J3 = 0, J4 = 0, operators such as F3, F4, P4-3, P3-4, E3-4 and E4 lead to states like (J3 = 3, J4 = 0), (J3 = 0, J4 = 4), (J3 = 3, J4 = 1), (J3 = 0, J4 = 2), . . .)
Real-World Search Problems

• Route finding (computer networks, airline travel planning systems, . . .)
• Travelling salesman optimization problem (package delivery, automatic drills, . . .)
• Layout problems (VLSI layout, furniture layout, packaging, . . .)
• Assembly sequencing (assembly of electric motors, . . .)
• Task scheduling (manufacturing, timetables, . . .)
• . . .
Typically, a problem's solution is a description of how to reach a goal state from the initial state.
Examples:
• n-puzzle
• route-finding problem
• assembly sequencing

Occasionally, a problem's solution is simply a description of the goal state itself.
Examples:
• 8-queens problem
• scheduling problems
• layout problems
Lecture No: 8
Uninformed Search Methods: Depth Limited Search

Search Algorithms

The uninformed search algorithms are also called blind search algorithms.

The search algorithm produces the search tree without using any domain knowledge; it is brute force in nature. These algorithms have no background information about how to approach the goal.
Search Strategies

 A search strategy is defined by picking the order of node


expansion
 Strategies are evaluated along the following dimensions:
 completeness: does it always find a solution if one exists?
 time complexity: number of nodes generated
 space complexity: maximum number of nodes in memory
 optimality: does it always find a least-cost solution?
 Time and space complexity are measured in terms of
 b: maximum branching factor of the search tree
 d: depth of the least-cost solution
 m: maximum depth of the state space (may be ∞)

Uninformed Search

Uninformed strategies use only the information available


in the problem definition
 Breadth-first search
 Uniform-cost search
 Depth-first search
 Depth-limited search
 Iterative deepening search

Breadth-first search

• Breadth-first search is a simple strategy in which the root node is expanded first,
then all the successors of the root node are expanded next,
then their successors, and so on.

In general, all the nodes at a given depth in the search tree are expanded before any
nodes at the next level are expanded. Breadth-first search is an instance of the
general graph-search algorithm in which the shallowest unexpanded node is chosen for expansion.

This is achieved very simply by using a FIFO queue for the frontier. Thus, new
nodes (which are always deeper than their parents) go to the back of the queue, and
old nodes, which are shallower than the new nodes, get expanded first.

There is one slight tweak on the general graph-search algorithm, which is that the
goal test is applied to each node when it is generated rather than when it is selected
for expansion.
• This decision is explained below, where we discuss time complexity.

• Note also that the algorithm, following the general template for graph search, discards
any new path to a state already in the frontier or explored set; it is easy to see that any
such path must be at least as deep as the one already found.

• Thus, breadth-first search always has the shallowest path to every node on the frontier.
Breadth-first Search:

•Breadth-first search is the most common search strategy for traversing a tree or
graph. This algorithm searches breadthwise in a tree or graph, so it is called
breadth-first search.
•The BFS algorithm starts searching from the root node of the tree and expands all
successor nodes at the current level before moving to the nodes of the next level.
•The breadth-first search algorithm is an example of a general graph-search
algorithm.
•Breadth-first search is implemented using a FIFO queue data structure.
• The Breadth First Search (BFS) algorithm is used to search a graph data structure for a
node that meets a set of criteria.
• It starts at the root of the graph and visits all nodes at the current depth level before moving
on to the nodes at the next depth level.
How does BFS work?
Starting from the root, all the nodes at a particular level are visited first and then the nodes
of the next level are traversed till all the nodes are visited.
To do this a queue is used. All the adjacent unvisited nodes of the current level are pushed
into the queue and the nodes of the current level are marked visited and popped from the
queue.
Illustration:
Let us understand the working of the algorithm with the help of the following example.
Step 1: Initially the queue and visited array are empty.
Step 2: Push node 0 into the queue and mark it visited.

Step 3: Remove node 0 from the front of the queue, visit its unvisited
neighbours, and push them into the queue.
Step 4: Remove node 1 from the front of the queue, visit its unvisited
neighbours, and push them into the queue.

Step 5: Remove node 2 from the front of the queue, visit its unvisited neighbours,
and push them into the queue.
Step 6: Remove node 3 from the front of the queue, visit its unvisited neighbours,
and push them into the queue.
Since every neighbour of node 3 has already been visited, move on to the node
at the front of the queue.
Step 7: Remove node 4 from the front of the queue, visit its unvisited neighbours,
and push them into the queue.
Since every neighbour of node 4 has already been visited, move on to the next node
at the front of the queue.
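Putting the queue-based procedure above into code, a minimal BFS sketch over an adjacency-list graph (a dictionary mapping each node to its neighbours) might look like this; graph, start and goal are illustrative parameters:

from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search; returns a shallowest path from start to goal, or None."""
    if start == goal:
        return [start]
    frontier = deque([start])               # FIFO queue
    parent = {start: None}                  # also serves as the visited set
    while frontier:
        node = frontier.popleft()           # shallowest unexpanded node
        for neighbour in graph.get(node, []):
            if neighbour not in parent:     # discard repeated states
                parent[neighbour] = node
                if neighbour == goal:       # goal test when generated, not when expanded
                    path = [neighbour]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                frontier.append(neighbour)
    return None

For example, bfs({0: [1, 2], 1: [3], 2: [3], 3: []}, 0, 3) returns the shallowest path [0, 1, 3].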
Breadth-first search

Breadth-first search on a simple binary tree. At each stage, the


node to be expanded next is indicated by a marker.
Advantages:
•BFS will provide a solution if any solution exists.
•If there is more than one solution for a given problem, BFS will provide the
minimal solution, i.e., the one requiring the least number of steps.
Disadvantages:
•It requires a lot of memory, since each level of the tree must be saved in memory
in order to expand the next level.
•BFS needs a lot of time if the solution is far away from the root node.
Example:
In the tree structure below, we show the traversal of the tree using the BFS
algorithm from the root node S to the goal node K. The BFS algorithm traverses in
layers, so it will follow the path shown by the dotted arrow, and the
traversed path will be:
S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of BFS can be obtained from the number of
nodes traversed by BFS until the shallowest goal node, where d = depth of the
shallowest solution and b = the branching factor:
T(b) = 1 + b + b² + b³ + . . . + b^d = O(b^d)
Space Complexity: The space complexity of BFS is given by the memory size of the
frontier, which is O(b^d).
Completeness: BFS is complete: if the shallowest goal node is at some finite depth,
then BFS will find a solution.
Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth
of the node.
Depth-first Search
•Depth-first search is a recursive algorithm for traversing a tree or
graph data structure.
•It is called depth-first search because it starts from the root node
and follows each path to its greatest depth node before moving to the
next path.
•DFS uses a stack data structure for its implementation.
•The process of the DFS algorithm is similar to that of the BFS algorithm.
• Advantages:
• DFS requires very little memory, as it only needs to store the stack of nodes on the
path from the root node to the current node.
• It can take less time to reach the goal node than BFS (if it happens to traverse
the right path).
• Disadvantages:
• There is the possibility that many states keep re-occurring, and there is no guarantee of
finding the solution.
• The DFS algorithm goes deep down in its search and may sometimes enter an infinite
loop.
Example:

In the search tree below, we show the flow of depth-first search; it will
follow the order:
Root node ---> Left node ---> Right node.
It will start searching from the root node S and traverse A, then B, then D and E.
After traversing E it will backtrack, as E has no other successor and the
goal node has not yet been found. After backtracking it will traverse node C and then G,
where it terminates because it has found the goal node.
• Completeness: The DFS algorithm is complete within a finite state space,
as it will expand every node within a bounded search tree.
• Time Complexity: The time complexity of DFS is equivalent to the number of nodes
traversed by the algorithm. It is given by:
• T(b) = 1 + b + b² + b³ + . . . + b^m = O(b^m)
• where m = the maximum depth of any node, which can be much larger
than d (the depth of the shallowest solution).
• Space Complexity: The DFS algorithm needs to store only a single path from the
root node, hence the space complexity of DFS is equivalent to the size of the
fringe set, which is O(bm).
• Optimality: The DFS algorithm is non-optimal, as it may take a large
number of steps or incur a high cost to reach the goal node.
Why Depth Limited Search

DFS with a depth-limit L

 Standard DFS, but tree is not explored below some depth-limit L

 Solves problem of infinitely deep paths with no solutions


 But will be incomplete if solution is below depth-limit

 Depth-limit L can be selected based on problem knowledge


 E.g. diameter of state-space:
 E.g. max number of steps between 2 cities
 But typically not known ahead of time in practice

• How does DFS work?
• Depth-first search is an algorithm for traversing or searching tree or graph
data structures. The algorithm starts at the root node (selecting some arbitrary
node as the root node in the case of a graph) and explores as far as possible
along each branch before backtracking.
• Let us understand the working of Depth First Search with the help of the
following illustration
Step 1: Initially the stack and visited array are empty.

Step 2: Visit 0 and put its adjacent nodes which are not visited yet onto the stack.
Step 3: Node 1 is now at the top of the stack, so visit node 1, pop it from the stack,
and put all of its unvisited adjacent nodes onto the stack.

Step 4: Node 2 is now at the top of the stack, so visit node 2, pop it from the stack, and put
all of its unvisited adjacent nodes (i.e., 3 and 4) onto the stack.
Step 5: Node 4 is now at the top of the stack, so visit node 4, pop it from the
stack, and put all of its unvisited adjacent nodes onto the stack.

Step 6: Node 3 is now at the top of the stack, so visit node 3, pop it from the stack, and
put all of its unvisited adjacent nodes onto the stack.
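The stack-based procedure above corresponds to a sketch like the following (illustrative only; it stores whole partial paths on the stack so that the solution path can be returned directly):

def dfs(graph, start, goal):
    """Depth-first search with an explicit stack; returns a path to goal, or None."""
    stack = [[start]]                       # LIFO stack of partial paths
    visited = set()
    while stack:
        path = stack.pop()                  # deepest partial path first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push unvisited neighbours; the last one pushed is explored first.
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                stack.append(path + [neighbour])
    return None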
Depth-First Search with a depth-limit, L = 5

(Figures: depth-first search expanding an example tree under the depth limit L = 5.)

Depth Limited Search algorithm
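Since the algorithm figure is not reproduced here, the sketch below gives one common way to write depth-limited search: a recursive DFS that returns the special value "cutoff" when the depth limit was reached, so the caller can distinguish "no solution" from "not searched deep enough". Names and details are illustrative:

def depth_limited_search(graph, node, goal, limit, path=None):
    """Recursive DLS; returns a path, None (failure), or "cutoff" (limit reached)."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return "cutoff"
    cutoff_occurred = False
    for child in graph.get(node, []):
        if child in path:                       # avoid cycling along the current path
            continue
        result = depth_limited_search(graph, child, goal, limit - 1, path)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return result
    return "cutoff" if cutoff_occurred else None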
Performance Comparison

 Completeness: no in general (only if the solution lies within the depth limit l); infinite paths are avoided
 Optimality: no
 Time complexity: O(b^l)
 Space complexity: O(bl)

Unit No: 3. Unit name: Problem Solving

Lecture No: 09
Depth First Iterative Deepening (DFID)
What is IDS?

• Iterative Deepening Search (IDS) is a search algorithm that combines the benefits of
Depth First Search (DFS) and Breadth First Search (BFS).

• The graph is explored using DFS, but the depth limit is steadily increased until the
target is located.

• In other words, IDS repeatedly runs DFS, raising the depth limit each time, until
the desired result is obtained.

• Iterative deepening is a method that makes sure the search is complete (i.e., it
discovers a solution if one exists) and efficient (i.e., it finds the shortest path to
the goal).
1. When the graph has no cycle: this case is simple. We can run DFS multiple
times with different depth limits.
2. When the graph has cycles: this is interesting, as there is no visited flag in
IDDFS.
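Reusing the depth-limited sketch given earlier, iterative deepening simply wraps it in a loop that raises the limit until a result other than "cutoff" comes back; max_depth is an illustrative safety bound:

def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Run depth-limited search with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result != "cutoff":       # either a path was found or the space is exhausted
            return result
    return None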
Performance Comparison

 Completeness: yes
 Optimality: yes (if all step costs are identical)
 Time complexity: O(b^d)
 Space complexity: O(bd)

In general, iterative deepening is the preferred uninformed search method


when the search space is large and the depth of the solution is not known.

Time Complexity: Suppose we have a tree with branching factor b (number
of children of each node) and depth d, i.e., on the order of b^d nodes. In an iterative
deepening search, the nodes on the bottom level are expanded once, those on the
next-to-bottom level are expanded twice, and so on, up to the root of the search
tree, which is expanded d+1 times. So the total number of expansions in an
iterative deepening search is

(d)b + (d−1)b² + . . . + 3b^(d−2) + 2b^(d−1) + b^d

That is,
Summation[(d + 1 − i) b^i], from i = 1 to i = d,
which is O(b^d).
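As a quick check of the formula, for b = 10 and d = 5 the series gives 123,450 expansions, of the same order as the roughly 111,110 nodes breadth-first search generates to depth 5. A tiny sketch:

def ids_expansions(b, d):
    """Total node expansions for iterative deepening: sum of (d + 1 - i) * b**i."""
    return sum((d + 1 - i) * b**i for i in range(1, d + 1))

def bfs_generations(b, d):
    """Nodes generated by breadth-first search down to depth d."""
    return sum(b**i for i in range(1, d + 1))

print(ids_expansions(10, 5))   # 123450
print(bfs_generations(10, 5))  # 111110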
Comparison of Uninformed Search Algorithms

Example of Depth First Iterative Deepening Search

Thank You
