
Module 2

Problem Solving
3.1 Problem Solving Agents

• Reflex agents are the simplest agents because they map states directly to actions. Unfortunately, these agents fail to operate in environments where the mapping is too large to store and to learn.

• A goal-based agent, on the other hand, considers future actions and their desired outcomes. Here we discuss one type of goal-based agent known as a problem-solving agent, which uses an atomic representation, with no internal state structure visible to the problem-solving algorithms.

• Problem-solving agent: The problem-solving agent performs precisely by defining problems and their solutions.
• According to psychology, "problem solving refers to a state where we wish to reach a definite goal from a present state or condition."
• According to computer science, problem solving is a part of artificial intelligence that encompasses a number of techniques, such as algorithms and heuristics, to solve a problem.
Therefore, a problem-solving agent is a goal-driven agent that focuses on satisfying its goal.

Steps performed by Problem-solving agent

• Goal Formulation: This is the first and simplest step in problem solving. It organizes the steps/sequence required to formulate one goal out of multiple goals, as well as the actions needed to achieve that goal. Goal formulation is based on the current situation and the agent's performance measure (discussed below).

• Problem Formulation: This is the most important step of problem solving; it decides what actions should be taken to achieve the formulated goal.

There are following five components involved in problem formulation:


1. Initial State: It is the starting state or initial step of the agent towards its goal.
2. Actions: It is the description of the possible actions available to the agent.
3. Transition Model: It describes what each action does.
4. Goal Test: It determines if the given state is a goal state.
5. Path cost: It assigns a numeric cost to each path leading toward the goal.

The problem-solving agent selects a cost function that reflects its performance measure.
Remember, an optimal solution has the lowest path cost among all solutions.
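
The five components above map naturally onto code. Below is a minimal illustrative Python sketch; the class and method names are assumptions for this module, not a standard library API:

    class Problem:
        """Abstract formulation of a search problem (illustrative sketch)."""

        def __init__(self, initial_state, goal_state=None):
            self.initial_state = initial_state      # 1. Initial state
            self.goal_state = goal_state

        def actions(self, state):
            """2. Actions: the actions available in the given state."""
            raise NotImplementedError

        def result(self, state, action):
            """3. Transition model: the state produced by applying action."""
            raise NotImplementedError

        def goal_test(self, state):
            """4. Goal test: is the given state a goal state?"""
            return state == self.goal_state

        def step_cost(self, state, action, next_state):
            """5. Path cost: cost of one step; a path sums its step costs."""
            return 1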


• Search: It identifies the best possible sequence of actions to reach the goal state from the current state. It takes a problem as input and returns a solution as output.

• Solution: It is the action sequence chosen from the candidates found by search, ideally the optimal (lowest-cost) one.

• Execution: It executes the chosen solution to reach the goal state from the current state.

Example Problems: Basically, there are two types of problems:

1. Toy Problem: A concise and exact description of a problem, used by researchers to compare the performance of algorithms.

2. Real-world Problem: A problem from the real world that requires a solution. Unlike a toy problem, it does not have a single agreed-upon description, but we can give a general formulation of the problem.

Agent Design

Environment Assumptions
• Static: formulating and solving the problem is done without paying attention to any changes that might occur in the environment.
• Observable: the initial state is known and the environment is observable.
• Discrete: alternative courses of action can be enumerated.
• Deterministic: solutions to problems are single sequences of actions, so they cannot handle any unexpected events, and solutions are executed without paying attention to the percepts.

Well-defined problems and solutions

A problem can be defined formally by four components:

1. Initial state that the agent starts in


– e.g. In(Arad)
2. A description of the possible actions available to the agent
– Successor function – returns a set of <action,successor> pairs
– e.g. {<Go(Sibiu),In(Sibiu)>, <Go(Timisoara),In(Timisoara)>, <Go(Zerind),
In(Zerind)>}
– Initial state and the successor function define the state space ( a graph in which the
nodes are states and the arcs between nodes are actions). A path in state space is a
sequence of states connected by a sequence of actions
3. Goal test determines whether a given state is a goal state
– e.g.{In(Bucharest)}
4. Path cost function that assigns a numeric cost to each path.
The cost of a path can be described as the sum of the costs of the individual actions along the path – the step costs.
– e.g. time to go to Bucharest
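
As an illustration, the Arad-to-Bucharest route-finding problem can be written as an instance of the Problem sketch above. The road map below is only a small illustrative fragment of the Romania map, with approximate distances:

    class RouteProblem(Problem):
        """Route finding on a road map; states are city names (sketch)."""

        def __init__(self, initial, goal, road_map):
            super().__init__(initial, goal)
            self.road_map = road_map

        def actions(self, state):
            # Each action Go(city) is represented simply by the target city name.
            return list(self.road_map[state])

        def result(self, state, action):
            return action                      # driving to a city puts us In(city)

        def step_cost(self, state, action, next_state):
            return self.road_map[state][next_state]

    # Illustrative fragment of the Romania road map (distances in km).
    ROMANIA = {
        "Arad":           {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
        "Sibiu":          {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
        "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
        "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
        "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101},
        "Zerind":         {"Arad": 75},
        "Timisoara":      {"Arad": 118},
        "Bucharest":      {"Fagaras": 211, "Pitesti": 101},
    }

    romania_route = RouteProblem("Arad", "Bucharest", ROMANIA)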


3.2 Example Problems

1. Vacuum world state space graph

2. 8-Puzzle Problem


3. 8-queens problem

4. Route finding problem

• States: each is represented by a location (e.g. an airport) and the current time.
• Initial state: specified by the problem.
• Successor function: returns the states resulting from taking any scheduled flight, leaving later than the current time plus the within-airport transit time, from the current airport to another.
• Goal test: are we at the destination by some pre-specified time?
• Path cost: monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, etc.
• Route finding algorithms are used in a variety of applications, such as routing in computer
networks, military operations planning, airline travel planning systems.

3.3 Searching for Solutions


Searching through the state space generates a search tree (maybe a search graph).
It is important to distinguish between the state space and the search tree.
 state space: states + actions
 search tree: nodes + actions

There are many ways to represent nodes, but we will assume that a node is a data structure with five components:
1. State: the state in the state space to which the node corresponds.
2. Parent-node: the node in the search tree that generated this node.

3. Action: the action that was applied to the parent to generate the node.
4.Path-cost: the cost, traditionally denoted by g(n), of the path from the initial state to the node, as
indicated by the parent pointers.
5. Depth: the number of steps along the path from the initial state.
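
A possible Python rendering of this node structure, again as an illustrative sketch building on the hypothetical Problem class above:

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class Node:
        """A search-tree node with the five components listed above."""
        state: Any                        # 1. State the node corresponds to
        parent: Optional["Node"] = None   # 2. Parent node that generated it
        action: Any = None                # 3. Action applied to the parent
        path_cost: float = 0.0            # 4. g(n): cost of the path so far
        depth: int = 0                    # 5. Steps from the initial state

    def child_node(problem, parent, action):
        """Expand: build the child reached from parent by applying action."""
        state = problem.result(parent.state, action)
        cost = parent.path_cost + problem.step_cost(parent.state, action, state)
        return Node(state, parent=parent, action=action,
                    path_cost=cost, depth=parent.depth + 1)

    def solution_path(node):
        """Recover the action sequence by following parent pointers."""
        actions = []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))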

3.3.1 Tree search algorithms

An informal description of the general tree-search algorithm:

Example: finding a route from Arad to Bucharest


1. Initial State: e.g. "At Arad".
2. Successor Function: a set of action–state pairs,
e.g. S(Arad) = {(Arad → Zerind, Zerind), …}.
3. Goal Test: e.g. x = "At Bucharest".
4. Path Cost: sum of the distances travelled.

State vs Node

• A state is (a representation of) a physical configuration.

• A node is a data structure constituting part of a search tree; it includes parent, children, depth, and path cost.
• States do not have parents, children, depth, or path cost.

Fringe: The collection of nodes that have been generated but not yet expanded.
• Each element of the fringe is a leaf node, a node with no successors yet.

Search strategy: A function that selects the next node to be expanded from fringe
• We assume that the collection of nodes is implemented as a queue.

The operations on the queue are:


• Make-Queue(queue)
• Empty?(queue)
• First(queue)
• Remove-First(queue)
• Insert(element, queue)
• Insert-All(elements, queue)

Measuring Problem-Solving Performance

A search strategy is defined by picking the order of node expansion

• Strategies are evaluated along the following dimensions:


1. Completeness: does it always find a solution if one exists?
2. Time complexity: number of nodes generated
3. Space complexity: maximum number of nodes in memory

4. Optimality: does it always find a least-cost solution?


• Time and space complexity are measured in terms of
– b: maximum branching factor of the search tree
– d: depth of the least-cost solution
– m: maximum depth of the state space (may be ∞)

The General Search Tree Algorithm
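
The general tree-search procedure can be sketched in Python as follows. This is an illustrative sketch using the hypothetical Problem and Node classes defined earlier; the order in which nodes are removed from the fringe determines the search strategy:

    from collections import deque

    def tree_search(problem, lifo=False):
        """General tree search (sketch). A FIFO fringe gives breadth-first
        search, a LIFO fringe gives depth-first search; other orderings
        give other strategies."""
        fringe = deque([Node(problem.initial_state)])    # generated, not yet expanded
        while fringe:
            node = fringe.pop() if lifo else fringe.popleft()   # pick node to expand
            if problem.goal_test(node.state):
                return node                                     # solution found
            for action in problem.actions(node.state):          # expand the node
                fringe.append(child_node(problem, node, action))
        return None                                             # fringe empty: failure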

Types of Search
1. Uninformed (Blind) Search
2. Informed (Heuristic) Search

3.4 UNINFORMED SEARCH STRATEGIES

Uninformed search (blind search) strategies use only the information available in the problem
definition.

Various types of search strategies


1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
6. Bidirectional Search

1. Breadth-first search.
The root node is expanded first, then all the successors of the root node, and their successors and so
on.
In general, all the nodes are expanded at a given depth in the search tree before any nodes at the
next level are expanded. Expand shallowest unexpanded node.
• Implementation:
 fringe is a FIFO queue,
 the nodes that are visited first will be expanded first


 All newly generated successors will be put at the end of the queue
 Shallow nodes are expanded before deeper nodes
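
In the tree-search sketch given earlier, breadth-first search corresponds to the default FIFO fringe. A short usage sketch on the hypothetical Romania fragment defined above:

    goal = tree_search(romania_route)        # FIFO fringe = breadth-first search
    if goal is not None:
        print(solution_path(goal))           # ['Sibiu', 'Fagaras', 'Bucharest'] on this fragment
        print("path cost:", goal.path_cost)  # 450: shallowest route, not the cheapest one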

Example

Properties of Breadth-First Search


1. Complete: Yes, if b (the maximum branching factor) is finite.
2. Time: 1 + b + b^2 + … + b^d + b(b^d − 1) = O(b^(d+1)), i.e. exponential in d.
3. Space: O(b^(d+1)); keeps every node in memory.
4. Optimal: Yes (if cost is 1 per step); not optimal in general.

The memory requirements are a bigger problem for breadth-first search than the execution time.


Exponential-complexity search problems cannot be solved by uninformed methods for any but the smallest instances.

Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, BFS will find the minimal solution, i.e. the one requiring the fewest steps.

Disadvantages:
 It requires lots of memory since each level of the tree must be saved into memory to expand
the next level.

 BFS needs lots of time if the solution is far away from the root node.

2. Uniform-cost search
Uniform-cost search can be used if the cost of travelling from one node to another is available.
Breadth-first search finds the shallowest goal, but that is not always the optimal solution.

Uniform-cost search always expands the lowest-cost node on the fringe (the collection of nodes that are waiting to be expanded). The first solution found is guaranteed to be the cheapest one, because a cheaper one would have been expanded earlier and so would have been found first.
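
A sketch of uniform-cost search using a priority queue keyed on the path cost g(n) (Python's heapq module), again built on the earlier hypothetical Node/Problem sketches:

    import heapq
    import itertools

    def uniform_cost_search(problem):
        """Expand the fringe node with the lowest path cost g(n) first (sketch)."""
        counter = itertools.count()              # tie-breaker so heapq never compares Nodes
        start = Node(problem.initial_state)
        fringe = [(start.path_cost, next(counter), start)]
        while fringe:
            _, _, node = heapq.heappop(fringe)   # cheapest node generated so far
            if problem.goal_test(node.state):
                return node                      # first goal popped is the cheapest
            for action in problem.actions(node.state):
                child = child_node(problem, node, action)
                heapq.heappush(fringe, (child.path_cost, next(counter), child))
        return None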

Properties of Uniform-cost search


1. Complete: Yes, if the step cost is at least some small positive constant ε (step cost ≥ ε).
2. Time: complexity cannot be characterised easily in terms of b and d. Let C* be the cost of the optimal solution; then time is O(b^⌈C*/ε⌉).
3. Space: O(b^⌈C*/ε⌉).
4. Optimal: Yes; nodes are expanded in increasing order of path cost.

Advantages:

 Uniform cost search is optimal because at every state the path with the least cost is chosen.
Disadvantages:


• It does not care about the number of steps involved in the search, only about the path cost, because of which the algorithm may get stuck in an infinite loop.

3. Depth-First Search

It always expands the deepest node in the fringe. After reaching the deepest level, it backs up to the next deepest node that still has unexplored successors. It can be implemented by TREE-SEARCH with a last-in-first-out (LIFO) queue, i.e. a stack.
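
Depth-first search corresponds to the earlier tree-search sketch with a LIFO fringe, i.e. tree_search(problem, lifo=True). An equivalent recursive formulation, again only a sketch:

    def depth_first_search(problem, node=None):
        """Recursive DFS over the search tree (sketch; may not terminate on
        infinite or cyclic state spaces, as noted in the properties below)."""
        if node is None:
            node = Node(problem.initial_state)
        if problem.goal_test(node.state):
            return node
        for action in problem.actions(node.state):
            result = depth_first_search(problem, child_node(problem, node, action))
            if result is not None:
                return result
        return None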

Example:


Properties:
1. Complete: No; it fails in infinite-depth spaces and in spaces with loops. Modify to avoid repeated states along the path; then it is complete in finite spaces.
2. Time: O(b^m). Not great if m is much larger than d, but if solutions are dense this may be faster than breadth-first search.
3. Space: O(bm), i.e. linear space.
4. Optimal: No.

Advantages:

• DFS requires much less memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.

• It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).
Disadvantages:

• There is the possibility that many states keep reoccurring, and there is no guarantee of finding a solution.

• The DFS algorithm searches deep down and may sometimes go into an infinite loop.

4. Depth-Limited Search

A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit. Depth-limited search overcomes the drawback of infinite paths in depth-first search. In this algorithm, a node at the depth limit is treated as if it has no further successor nodes.

Depth-limited search can terminate with two kinds of failure:


1. Standard failure value: it indicates that the problem does not have any solution.
2. Cutoff failure value: it indicates that there is no solution for the problem within the given depth limit.

Depth Limited Search Algorithm
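
A recursive sketch of depth-limited search that distinguishes the two failure values described above ("standard failure" vs. "cutoff"), built on the earlier hypothetical Node/Problem sketches:

    def depth_limited_search(problem, limit, node=None):
        """DFS with a depth limit; returns a goal Node, 'cutoff', or None (sketch)."""
        if node is None:
            node = Node(problem.initial_state)
        if problem.goal_test(node.state):
            return node
        if node.depth == limit:
            return "cutoff"                      # limit reached: successors not explored
        cutoff_occurred = False
        for action in problem.actions(node.state):
            result = depth_limited_search(problem, limit,
                                          child_node(problem, node, action))
            if result == "cutoff":
                cutoff_occurred = True
            elif result is not None:
                return result                    # goal found below this node
        return "cutoff" if cutoff_occurred else None   # None = standard failure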

Example

The problem of unbounded trees can be alleviated by supplying DFS with a depth limit l.
Properties:
Unfortunately, the depth limit introduces an additional source of incompleteness if l < d.
It is nonoptimal if l > d.
1. Time complexity: O(b^l)

2. Space complexity: O(bl)

Advantages:
• Depth-limited search is memory efficient.
Disadvantages:
 Depth-limited search also has a disadvantage of incompleteness.
 It may not be optimal if the problem has more than one solution.

5. Iterative Deepening Search


• The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.
• It performs depth-first search up to a certain "depth limit", and keeps increasing the depth limit after each iteration until the goal node is found.
• It combines the benefits of breadth-first search's completeness and depth-first search's memory efficiency.
• Iterative deepening is a useful uninformed search strategy when the search space is large and the depth of the goal node is unknown.

Algorithm:
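
A sketch of iterative deepening built directly on the depth-limited search sketch above: keep raising the limit until the result is no longer a cutoff.

    import itertools

    def iterative_deepening_search(problem):
        """Repeated depth-limited search with limits 0, 1, 2, ... (sketch)."""
        for limit in itertools.count():              # limits 0, 1, 2, ...
            result = depth_limited_search(problem, limit)
            if result != "cutoff":
                return result                        # goal Node, or None (no solution)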

Example


Properties:
1. Complete: Yes
2. Time: O(b^d)
3. Space: O(bd)
4. Optimal: Yes, if step cost = 1; can be modified to explore a uniform-cost tree.

Advantages: It combines the benefits of the BFS and DFS algorithms in terms of fast search and memory efficiency.
Disadvantages: The main drawback of IDDFS is that it repeats all the work of the previous phase.

6. Bidirectional search
• The bidirectional search algorithm runs two simultaneous searches, one from the initial state (the forward search) and the other from the goal node (the backward search), to find the goal.
• Bidirectional search replaces one single search graph with two smaller subgraphs: one starts the search from the initial vertex and the other starts from the goal vertex. The search stops when the two graphs intersect each other.
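
A simplified sketch of bidirectional search using two breadth-first frontiers that expand alternately until they meet. It assumes actions are reversible (true for the illustrative route-finding problem, since roads run both ways, but not for every problem), and it returns only the meeting state rather than the full path:

    from collections import deque

    def bidirectional_search(problem):
        """Alternate BFS from the initial state and from the goal state (sketch).
        Returns a state where the two frontiers meet, or None."""
        if problem.goal_test(problem.initial_state):
            return problem.initial_state
        frontiers = [deque([problem.initial_state]), deque([problem.goal_state])]
        visited = [{problem.initial_state}, {problem.goal_state}]
        side = 0
        while frontiers[0] and frontiers[1]:
            state = frontiers[side].popleft()
            for action in problem.actions(state):
                nxt = problem.result(state, action)
                if nxt in visited[1 - side]:          # frontiers intersect: paths meet
                    return nxt
                if nxt not in visited[side]:
                    visited[side].add(nxt)
                    frontiers[side].append(nxt)
            side = 1 - side                           # expand the other frontier next
        return None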
Properties:
Whether the algorithm is complete/optimal depends on the search strategies in both searches.


1. Time complexity: O(b^(d/2)) (assuming BFS is used). Checking a node for membership in the other search tree can be done in constant time.
2. Space complexity: O(b^(d/2)) (assuming BFS is used). At least one of the search trees must be kept in memory for membership checking.
3. Optimal: Bidirectional search is optimal.

Advantages:
 Bidirectional search is fast.

 Bidirectional search requires less memory


Disadvantages:

 Implementation of the bidirectional search tree is difficult.

 In bidirectional search, one should know the goal state in advance.

Comparison of Uninformed Search Strategies
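
Summarising the properties discussed above (b = branching factor, d = depth of the shallowest solution, m = maximum depth of the state space, l = depth limit, C* = cost of the optimal solution, ε = minimum step cost):

    Criterion   BFS             Uniform-cost    DFS       Depth-limited   Iterative deepening   Bidirectional
    Complete?   Yes (b finite)  Yes (cost >= ε) No        No              Yes                   Yes (with BFS)
    Time        O(b^(d+1))      O(b^⌈C*/ε⌉)     O(b^m)    O(b^l)          O(b^d)                O(b^(d/2))
    Space       O(b^(d+1))      O(b^⌈C*/ε⌉)     O(bm)     O(bl)           O(bd)                 O(b^(d/2))
    Optimal?    Yes*            Yes             No        No              Yes*                  Yes (with BFS)

    * optimal when all step costs are equal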
