Module 2.
An agent that plans a sequence of actions forming a path to a goal state is called a problem-solving agent, and the computational process it undertakes is called search.
Problem-solving agents:
The agent, currently in Arad, is on a touring vacation in Romania with diverse goals: taking
in sights, improving Romanian, enjoying nightlife, and avoiding hangovers. The challenge
is navigating from Arad to Bucharest with only three roads visible—leading to Sibiu,
Timisoara, and Zerind. Fortunately, the agent has access to a map (figure 3.1) for better
decision-making.
Figure 3.1 is a simplified road map of part of Romania, with road distances in miles.
With this information, the agent can follow this four-phase problem-solving process:
1. Goal formulation:
The agent adopts the goal of reaching Bucharest. This narrows things down by limiting the objectives and hence the actions that need to be considered.
2. Problem formulation:
The agent creates an abstract model of the environment, defining states (cities) and actions
(traveling between adjacent cities). The model represents the world's relevant part, where the
only changing state is the current city.
3. Search:
• Before real-world actions, the agent simulates sequences in its model, searching for a
solution to reach the goal.
• Potential sequences may involve traveling from Arad to Sibiu, to Fagaras, and finally to
Bucharest. The agent may need to explore multiple sequences before finding one that
reaches the goal.
Solution: Once a solution is found, a fixed sequence of actions is established (e.g., Arad to
Sibiu to Fagaras to Bucharest). This represents an open-loop system, where the agent can
execute actions without constant observation if the environment is fully observable,
deterministic, and known.
4. Execution:
The agent can now execute the actions in the solution, progressing from one city to another.
In an open-loop system, it can temporarily ignore percepts, trusting the predetermined
solution to lead to the goal. However, caution is advised under uncertainty or in
nondeterministic environments, where a closed-loop approach may be necessary.
Search problems and solutions
Problem definition:
• States (state space): A set of possible states that the environment can be in.
• Initial state: where the agent begins (e.g., Arad).
• Goal states: one or more desired outcome states.
• Actions: the moves available to the agent in a given state.
Result(s, a) returns the state that results from doing action a in state s,
e.g., Result(Arad, ToZerind) = Zerind.
• Action cost function: denoted by Action-Cost(s, a, s′) when we are programming or
doing math; it gives the numeric cost of applying action a in state s to reach state s′.
(Assigns a numeric cost to each action based on the state and the resulting state.)
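Taken together, the components above (initial state, goal states, actions, result, action cost) can be sketched as a minimal Python problem class. The class name and the road fragment below are illustrative: only a small subset of the Figure 3.1 map is included.

```python
# A minimal search-problem interface, following the components listed above.
# ROADS is a small fragment of the Romania map (Figure 3.1).
ROADS = {  # (city, neighbor) -> road distance
    ("Arad", "Sibiu"): 140, ("Arad", "Timisoara"): 118,
    ("Arad", "Zerind"): 75, ("Sibiu", "Fagaras"): 99,
    ("Fagaras", "Bucharest"): 211,
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial, self.goal = initial, goal

    def actions(self, s):
        """Moves available in state s: drive to any adjacent city."""
        return [b for (a, b) in ROADS if a == s] + [a for (a, b) in ROADS if b == s]

    def result(self, s, a):
        """Result(s, a): the state reached by doing action a in state s."""
        return a  # each action is named by its destination city

    def action_cost(self, s, a, s2):
        """Action-Cost(s, a, s2): road distance between s and s2."""
        return ROADS.get((s, s2)) or ROADS.get((s2, s))

    def is_goal(self, s):
        return s == self.goal

problem = RouteProblem("Arad", "Bucharest")
```

For example, `problem.result("Arad", "Zerind")` returns `"Zerind"`, mirroring Result(Arad, ToZerind) = Zerind above.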
Graph representation:
The state space can be shown as a graph, where states are vertices and actions are directed
edges.
The map of Romania shown in Figure 3.1 is such a graph, where each road indicates two
actions, one in each direction
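As a quick sketch of this graph view, each undirected road can be stored as two directed edges in an adjacency list. The road fragment below is a small illustrative subset of Figure 3.1:

```python
# The state-space graph: states are vertices, actions are directed edges.
# Each undirected road in the map yields two directed edges.
roads = [("Arad", "Sibiu"), ("Arad", "Timisoara"), ("Arad", "Zerind"),
         ("Sibiu", "Fagaras"), ("Fagaras", "Bucharest")]

graph = {}
for a, b in roads:
    graph.setdefault(a, []).append(b)  # edge a -> b
    graph.setdefault(b, []).append(a)  # edge b -> a

# From Arad three actions are available, matching the three visible roads.
print(graph["Arad"])
```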
Formulating problems
Abstraction in problem-solving:
Example: while our problem of reaching Bucharest is abstract, a real cross-country trip
involves numerous details like travel companions, radio programs, road conditions, and
more. These are excluded from our model to focus on the route-finding problem.
Level of abstraction:
If actions were too detailed (e.g., "move the right foot forward a centimeter"), the agent might
struggle to navigate even a parking lot.
Now consider a solution to the abstract problem: for example the path from Arad to
Sibiu to Rimnicu Vilcea to Pitesti to Bucharest. This abstract solution corresponds to a large
number of more detailed paths. For example, we could drive with the radio on between
Sibiu and Rimnicu Vilcea, and then switch it off for the rest of the trip.
Validity of the abstraction:
A sufficient condition is that for every detailed state that is "in Arad," there is a detailed path
to some state that is "in Sibiu," and so on.
The abstraction is useful if carrying out each of the actions in the solution is easier than the
original problem; in our case, the action "drive from Arad to Sibiu" can be carried out
without further search or planning by a driver with average skill.
The choice of a good abstraction thus involves removing as much detail as possible while
retaining validity and ensuring that the abstract actions are easy to carry out. Were it not for
the ability to construct useful abstractions, intelligent agents would be completely swamped
by the real world.
EXAMPLE PROBLEMS
Standardized Problems:
- These are specific problems designed to showcase or practice problem-solving
methods.
- They have clear and exact descriptions, making them easy to define and
understand.
- They are often used as benchmarks in research to compare how well different
problem-solving algorithms perform.
- Example: a math problem in a textbook that requires a specific method to solve.
Real-World Problems:
- These are practical problems that people encounter in the real world, like
navigating a robot.
- Solutions to real-world problems are used in everyday life, and the problems are
unique and not standardized.
- The formulation of these problems is specific and can vary; for example,
different robots have different sensors producing different types of data.
- Example: teaching a robot to navigate a room by avoiding obstacles, where the
specific challenges depend on the robot's unique sensors and the environment
it's in.
STANDARDIZED PROBLEMS:
Definition:
A grid world problem involves a two-dimensional grid of square cells where an
agent can move. The agent can move horizontally, vertically, and sometimes
diagonally, avoiding obstacles and interacting with objects.
Example 1: The vacuum world problem can be considered a grid world
problem. In this case, the states represent the arrangement of the agent and dirt in
the cells.
• States: Describe which objects are in which cells (e.g., agent and dirt). In the simple
two-cell version, the agent can be in either of two cells, and each cell either
contains dirt or not, so there are 2 × 2 × 2 = 8 states, as shown in the figure above. In
general, a vacuum environment with n cells has n · 2^n states.
• Initial State: Any state can be the starting point.
• Actions: Movements like Suck, move Left, and move Right.
• Transition Model: Describes the effects of actions, like Suck removing dirt and
movements obeying grid constraints.
• Goal States: States where every cell is clean.
• Action Cost: Each action has a cost of 1.
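The components above can be sketched for the two-cell version. The cell names "A" and "B" are illustrative; a state is (agent location, dirt in A, dirt in B), giving the 8 states noted above:

```python
from itertools import product

# Two-cell vacuum world: a state is (agent_location, dirt_in_A, dirt_in_B).
STATES = list(product(("A", "B"), (True, False), (True, False)))  # 2*2*2 = 8

def result(state, action):
    """Transition model: Suck removes dirt from the current cell; Left/Right
    move the agent, obeying the grid boundary (moving off the edge does nothing)."""
    loc, dirt_a, dirt_b = state
    if action == "Suck":
        return (loc, False if loc == "A" else dirt_a,
                     False if loc == "B" else dirt_b)
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)

def is_goal(state):
    """Goal states: every cell is clean."""
    return not state[1] and not state[2]
```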
8-Puzzle Problem:
The 8-puzzle consists of eight numbered, movable tiles set in a 3×3 frame. One
cell of the frame is always empty, making it possible to move an adjacent
numbered tile into the empty cell. Such a puzzle is illustrated in the following
diagram. The program is to change the initial configuration into the goal configuration.
A Solution:
A move transforms one problem state into another. The 8-puzzle is conveniently
interpreted as having the following four moves: move the empty space (blank) left,
move the blank up, move the blank right, and move the blank down.
These moves are modeled by production rules that operate on the state descriptions
in the appropriate manner.
The rules each have preconditions that must be satisfied by a state description in order
for them to be applicable to that state description. Thus the precondition for the rule
associated with "move blank up" is derived from the requirement that the blank space
must not already be in the top row.
The problem goal condition forms the basis for the termination condition of the
production system. The control strategy repeatedly applies rules to state
descriptions until a description of a goal state is produced. It also keeps track of the
rules that have been applied so that it can compose them into a sequence
representing the problem solution. A solution to the 8-puzzle problem is given
in the following figure.
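The production rules just described can be sketched in Python. A state is represented here as a tuple of nine tiles read row by row (0 standing for the blank); the precondition check mirrors, for example, the "move blank up" rule requiring that the blank not already be in the top row:

```python
# 8-puzzle production rules. A state is a tuple of 9 tiles (0 = the blank),
# read row by row in the 3x3 frame. Each move shifts the blank by an offset.
MOVES = {"Up": -3, "Down": +3, "Left": -1, "Right": +1}

def applicable(state, move):
    """Precondition check for a rule, e.g. 'move blank up' requires that
    the blank not already be in the top row."""
    i = state.index(0)
    if move == "Up":    return i >= 3       # not in the top row
    if move == "Down":  return i <= 5       # not in the bottom row
    if move == "Left":  return i % 3 != 0   # not in the left column
    if move == "Right": return i % 3 != 2   # not in the right column

def apply_move(state, move):
    """Effect of a rule (assumes the precondition holds): swap the blank
    with the adjacent tile in the given direction."""
    i = state.index(0)
    j = i + MOVES[move]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)
```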
8 Queens problem
• Fringe: the leaves of the partial search tree, i.e., the candidates for expansion.
• Search tree: the data structure used to search the state space.
Which search algorithm one should use will generally depend on the problem domain.
Optimality – Is the solution found guaranteed to be the best (or lowest-cost) solution
if there exists more than one solution?
Time Complexity – The upper bound on the time required to find a solution, as a
function of the complexity of the problem.
The General Problem Solver (GPS) was the first useful AI program,
written by Simon, Shaw, and Newell in 1959. As the name implies, it was
intended to solve nearly any problem.
Newell and Simon defined each problem as a space. At one end of the
space is the starting point; on the other side is the goal. The problem-solving
procedure itself is conceived as a set of operations to cross that space, to get from
the starting point to the goal state, one step at a time.
In the General Problem Solver, the program tests various actions (which
Newell and Simon called operators) to see which will take it closer to the goal
state. An operator is any activity that changes the state of the system. The General
Problem Solver always chooses the operation that appears to bring it closer to its
goal.
This formulation reduces the 8-queens state space from 1.8 × 10^14 to just 2,057 states,
and solutions are easy to find. On the other hand, for 100 queens the reduction is
from roughly 10^400 states to about 10^52 states: a big improvement, but not
enough to make the problem tractable. A complete-state formulation, paired with a
simple algorithm, solves even the million-queens problem with ease.
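As a sketch of the reduced formulation (queens are added one row at a time, and only placements in which no queen attacks another are considered), a small backtracking counter reproduces the well-known solution counts, e.g. 92 complete solutions for n = 8:

```python
def count_solutions(n):
    """Count complete n-queens placements, adding one queen per row and
    pruning any placement where two queens attack each other."""
    def place(cols):  # cols[r] = column of the queen in row r
        row = len(cols)
        if row == n:
            return 1  # all n queens placed without attacks
        return sum(place(cols + (c,)) for c in range(n)
                   # no shared column and no shared diagonal:
                   if all(c != pc and abs(c - pc) != row - pr
                          for pr, pc in enumerate(cols)))
    return place(())

print(count_solutions(8))  # -> 92
```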
Now, there's a more complex problem devised by Donald Knuth in 1964. He
suggested that by using a sequence of mathematical operations like square root,
floor, and factorial, starting from the number 4, you can reach any positive integer.
For example, you can get from 4 to 5 by applying these operations in a specific
order. Knuth proposed that this sequence of operations could reach any positive
whole number.
In simpler terms, Knuth came up with a puzzle involving math operations to reach
any number from a starting point of 4, showcasing how complex problems can be
abstracted and solved using a sequence of mathematical steps.
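Knuth's example can be checked directly. One known chain from 4 to 5 applies factorial twice, then square root five times, then floor (the exact expression is floor of the fifth-nested square root of (4!)!):

```python
import math

# Knuth's "four" problem: starting from 4, the operations factorial,
# square root, and floor can reach any positive integer. For example:
# 5 = floor(sqrt(sqrt(sqrt(sqrt(sqrt((4!)!))))))
x = math.factorial(math.factorial(4))   # (4!)! = 24!
for _ in range(5):                      # take the square root five times
    x = math.sqrt(x)
answer = math.floor(x)
print(answer)  # -> 5
```

Here 24! ≈ 6.2 × 10^23, and its 32nd root is about 5.54, so the floor is 5.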
The route-finding problem involves finding the best way to get from one place to
another, considering the paths and connections available. Algorithms for this are
used in various applications. For example:
1. Route-finding Problems:
- Finding the best route from one place to another, considering various factors
like time, cost, and modes of transportation. Algorithms for this are used in
applications like navigation apps, websites providing directions, and travel planning.
6. Robot Navigation:
- Unlike fixed routes, robots can move in a continuous space with many possible
actions. Robot navigation involves finding paths in a complex environment. Real
robots deal with errors in sensor readings and motor controls.
7. Assembly Sequencing:
- This is about determining the order in which parts should be assembled.
Choosing the wrong order can create problems, and checking the feasibility of
each step is a complex task. It is closely related to robot navigation.
8. Protein Design:
- In this context, the goal is to find a sequence of amino acids that will fold
into a three-dimensional protein with specific properties to treat a disease. It's a
challenging search problem.
Consider the airline travel problems that must be solved by a travel-planning web
site:
• States: Each state represents a location (e.g., an airport) and the current
time. Additional details are recorded about previous flight segments, fare
bases, and whether they are domestic or international.
• Actions: Choose any flight from the current location, in any seat class,
departing after the current time, allowing enough time for within-airport
transfers if needed.
• Transition Model: Taking a flight updates the state with the new destination
(the flight's destination) and the new time (the flight's arrival time).
• Goal State: The destination city. Sometimes the goal can be more specific,
like arriving at the destination on a nonstop flight.
Breadth-first search is a simple strategy in which the root node is expanded first, then all the
successors of the root node are expanded next, then their successors, and so on. In general, all
the nodes are expanded at a given depth in the search tree before any nodes at the next level are
expanded.
Breadth-first search is an instance of the general graph-search algorithm in which the shallowest
unexpanded node is chosen for expansion. This is achieved very simply by using a FIFO queue
for the frontier. Thus, new nodes (which are always deeper than their parents) go to the back of
the queue, and old nodes, which are shallower than the new nodes, get expanded first.
Thus we can say that Breadth-first search (BFS) is a graph traversal algorithm that explores all
the vertices of a graph level by level. It starts at the root node and visits all its neighbors before
moving on to their neighbors. This process continues until all nodes are visited or a specific node
is reached. BFS is often used to find the shortest path in an unweighted graph and is implemented
using a queue data structure to maintain the order of exploration.
ALGORITHM:
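The BFS procedure just described can be sketched in Python with a FIFO queue for the frontier. The example graph at the end is a hypothetical one, loosely modeled on the illustration that follows:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search with a FIFO queue as the frontier.
    Returns the list of nodes on a shortest (fewest-edges) path, or None."""
    frontier = deque([start])       # FIFO queue: new nodes go to the back
    parent = {start: None}          # also serves as the reached set
    while frontier:
        node = frontier.popleft()   # shallowest unexpanded node
        if node == goal:            # goal test on expansion
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for child in graph.get(node, []):
            if child not in parent:     # skip already-reached states
                parent[child] = node
                frontier.append(child)
    return None

# Hypothetical graph for illustration:
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["D", "G"], "D": ["C", "F"]}
print(bfs(graph, "A", "G"))  # -> ['A', 'C', 'G']
```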
BFS illustrated:
Step 1: Initially fringe or frontier or open list contains only one node corresponding to the source
state A.
Figure 1
FRINGE: A
Step 2: A is removed from fringe. The node is expanded, and its children B and C are
generated. They are placed at the back of fringe.
Figure 2
FRINGE: B C
Step 3: Node B is removed from fringe and is expanded. Its children D, E are generated and put
at the back of fringe.
Figure 3
FRINGE: C D E
Step 4: Node C is removed from fringe and is expanded. Its children D and G are added to the
back of fringe.
Figure 4
FRINGE: D E D G
Step 5: Node D is removed from the fringe. Its children C and F are generated and added to the
back of fringe.
Figure 5
FRINGE: E D G C F
Figure 6
FRINGE: D G C F
Figure 7
FRINGE: G C F B F
Disadvantages:
• Requires the generation and storage of a tree whose size is exponential in the
depth of the shallowest goal node.
• The breadth-first search algorithm cannot be effectively used unless the search
space is quite small.
Advantages:
2. *Completeness:* BFS will find a solution if one exists, making it a complete search
algorithm. It explores all possible nodes at the current depth level before moving on to
the next level, ensuring that it covers the entire graph.
5. *Shallowest solution first:* Because BFS expands every node at the current level before
moving deeper, the first solution it finds is a shallowest one; in an unweighted graph this
is a shortest path.
These advantages make BFS a suitable choice for various applications, particularly when finding
the shortest path or exploring a graph systematically is essential.
UNIFORM-COST SEARCH
Uniform-cost search is an optimal algorithm that expands nodes with the lowest path cost in a
search space. This is achieved by maintaining a priority queue ordered by the cumulative path
cost. The algorithm is similar to breadth-first search when step costs are equal, as it explores the
shallowest unexpanded nodes first. The key components of the algorithm include:
1. Initialization: Start with a node representing the initial state and a path cost of 0.
2. Frontier: Use a priority queue ordered by path cost to store the frontier.
3. Exploration: Loop through the frontier, selecting the node with the lowest path cost for
expansion.
4. Goal Test: Apply the goal test when a node is selected for expansion, not when it is
generated.
5. Node Expansion: Generate child nodes for the selected node based on available actions.
Add these nodes to the frontier if they are not in the explored set or the frontier. Replace
the frontier node with the new child if it has a higher path cost.
6. Optimality: Uniform-cost search is optimal because it always expands nodes with the
lowest path cost, ensuring the first goal node selected is on the optimal solution path.
8. Complexity: The worst-case time and space complexity of uniform-cost search is
O(b^(1 + ⌊C*/ε⌋)), where b is the branching factor, C* is the cost of the optimal
solution, and ε is a lower bound on the cost of each action. This complexity can be much
greater than b^d (d being the depth of the shallowest goal node), especially when step
costs vary widely.
Uniform-cost search is guided by path costs rather than depths, making it suitable for scenarios
where the cost of actions varies. Its optimality ensures that the first goal node expanded is part
of the optimal solution path. However, it may get stuck in an infinite loop if there is a path with
an infinite sequence of zero-cost actions.
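The steps above can be sketched with a priority queue ordered by cumulative path cost. The test graph is a fragment of the Romania map; on it the optimal Arad-to-Bucharest cost is 418 via Rimnicu Vilcea and Pitesti, cheaper than the 450 via Fagaras:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Uniform-cost search: the frontier is a priority queue ordered by
    cumulative path cost; the goal test is applied when a node is selected
    for expansion. `graph` maps a state to (neighbor, step_cost) pairs."""
    frontier = [(0, start, [start])]     # (path_cost, state, path)
    best = {}                            # cheapest known cost per state
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:                # first expansion is optimal
            return cost, path
        if best.get(state, float("inf")) < cost:
            continue                     # stale entry: a cheaper path exists
        for nbr, step in graph.get(state, []):
            new_cost = cost + step
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost     # replace any costlier frontier entry
                heapq.heappush(frontier, (new_cost, nbr, path + [nbr]))
    return None

# Fragment of the Romania map (Figure 3.1), directed toward Bucharest:
romania = {
    "Arad": [("Sibiu", 140)],
    "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Bucharest", 211)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Pitesti": [("Bucharest", 101)],
}
print(uniform_cost_search(romania, "Arad", "Bucharest"))
```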
Depth-First Search (DFS) is a graph traversal algorithm that explores as far as possible along
each branch before backtracking. Starting from a specific node, it visits an adjacent unvisited
node, and continues this process until it reaches a dead end. Then, it backtracks to the most
recent unexplored branch and repeats the process. DFS can be implemented using recursion or
a stack data structure. It's often used in solving problems related to connectivity, path finding,
and cycle detection in graphs.
Depth-first search always expands the deepest node in the current frontier of the search tree.
The search proceeds immediately to the deepest level of the search tree, where the nodes have
no successors. As those nodes are expanded, they are dropped from the frontier, so the
search "backs up" to the next deepest node that still has unexplored successors.
• Algorithm:
1. Create a variable called NODE-LIST and set it to initial state
DFS illustrated (on a state-space graph):
Step 1:
Figure 1
FRINGE: A
Step 2:
A is removed from the fringe. A is expanded and its children B and C are put in front of
fringe.
Figure 2
FRINGE: B C
Step 3:
Node B is removed from fringe, and its children D and E are pushed in front of fringe.
Figure 3
FRINGE: D E C
Step 4: Node D is removed from fringe. Its children C and F are pushed in front of fringe.
Figure 4
FRINGE: C F E C
Step 5: Node C is removed from fringe. Its child G is pushed in front of fringe.
Figure 5
FRINGE: G F E C
Step 6:
Figure 6
FRINGE: G F E C
Note that the time taken by the algorithm is related to the maximum depth of the search
tree. If the search tree has infinite depth, the algorithm may not terminate. This can
happen if the search space is infinite, or if the search space contains cycles. The latter
case can be handled by checking for cycles in the algorithm. Thus depth-first search
is not complete.
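The DFS strategy above can be sketched with an explicit LIFO stack; new children go in front of the fringe, and a visited set handles cycles. The example graph is a hypothetical one modeled loosely on the illustration:

```python
def dfs(graph, start, goal):
    """Depth-first search with an explicit stack (LIFO): children are pushed
    onto the front of the fringe, so the deepest node is expanded next.
    Visited-state checking handles cycles, at the cost of extra memory."""
    stack = [(start, [start])]
    visited = set()
    while stack:
        node, path = stack.pop()         # deepest node in the frontier
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # reversed so the first-listed child is expanded first
        for child in reversed(graph.get(node, [])):
            if child not in visited:
                stack.append((child, path + [child]))
    return None

# Hypothetical graph for illustration:
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["G"], "D": ["C", "F"]}
print(dfs(graph, "A", "G"))  # -> ['A', 'B', 'D', 'C', 'G']
```

Note how DFS dives through B and D before reaching G, whereas BFS would have found the shallower path through C first.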
ITERATIVE DEEPENING SEARCH (IDS)
Description:
It is a search strategy resulting when you combine BFS and DFS, thus
combining the advantages of each strategy, taking the completeness and
optimality of BFS and the modest memory requirements of DFS.
IDS works by searching for the best search depth d: it starts with depth
limit 0 and performs a depth-limited DFS; if the search fails, it increases the
depth limit by 1 and tries again with depth 1, and so on – first d = 0, then 1, then 2, and
so on – until a depth d is reached at which a goal is found.
Algorithm:
Performance Measure:
o Completeness: like BFS, IDS is complete when the branching factor b is finite.
o Optimality: like BFS, IDS is optimal when all steps have the same cost.
Time Complexity:
o One may find it wasteful to generate nodes multiple times, but it is actually
not that costly compared to BFS, because most of the generated nodes are
in the deepest level reached. Consider searching a binary tree with the depth
limit reaching 4: the nodes generated in the last level = 2^4 = 16, while the
nodes generated in all the levels before the last = 2^0 + 2^1 + 2^2 + 2^3 = 15.
o Imagine this scenario: we are performing IDS and the depth limit reached
depth d. From the way IDS expands nodes, nodes at depth d are generated
once, nodes at depth d−1 are generated 2 times, nodes at depth d−2 are
generated 3 times, and so on, until depth 1, which is generated d times. The
total number of generated nodes in the worst case is:
N(IDS) = (d)b + (d−1)b^2 + ... + (1)b^d, which is O(b^d).
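The scheme described above can be sketched as a depth-limited DFS wrapped in a loop over increasing limits. The example graph at the end is a hypothetical one for illustration:

```python
def depth_limited_dfs(graph, node, goal, limit, path):
    """Recursive DFS that gives up when the depth limit is exhausted."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        if child not in path:  # avoid cycles along the current path
            found = depth_limited_dfs(graph, child, goal, limit - 1,
                                      path + [child])
            if found:
                return found
    return None

def ids(graph, start, goal, max_depth=50):
    """Iterative deepening: run depth-limited DFS with limits 0, 1, 2, ...
    until a goal is found, combining DFS's modest memory use with BFS's
    completeness and (unit-cost) optimality."""
    for limit in range(max_depth + 1):
        result = depth_limited_dfs(graph, start, goal, limit, [start])
        if result:
            return result
    return None

# Hypothetical graph for illustration:
graph = {"A": ["B", "C"], "B": ["D"], "C": ["G"]}
print(ids(graph, "A", "G"))  # -> ['A', 'C', 'G']
```

Because the limit grows one level at a time, IDS finds the shallowest goal (here via C) even though each individual pass is a depth-first search.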
Weblinks:
i. https://www.youtube.com/watch?v=7QcoJjSVT38
ii. https://mhesham.wordpress.com/tag/iterative-deepening-depthfirst-search
Conclusion:
We can conclude that IDS is a hybrid search strategy between BFS and DFS inheriting
their advantages.
IDS uses far less memory than BFS while remaining complete, at the modest cost of
regenerating shallow nodes.
It is said that "IDS is the preferred uninformed search method when there is a large
search space and the depth of the solution is not known."