
MODULE -2

When the correct action to take is not immediately obvious, an agent may need to plan ahead: to consider a sequence of actions that forms a path to a goal state. Such an agent is called a problem-solving agent, and the computational process it undertakes is called search.

Problem-solving agents:

The agent, currently in Arad, is on a touring vacation in Romania with diverse goals: taking
in sights, improving Romanian, enjoying nightlife, and avoiding hangovers. The challenge
is navigating from Arad to Bucharest with only three roads visible—leading to Sibiu,
Timisoara, and Zerind. Fortunately, the agent has access to a map (figure 3.1) for better
decision-making.

Figure 3.1 is a simplified road map of part of Romania, with road distances in miles.

With this information, the agent can follow this four-phase problem-solving process:

1. Goal formulation:
The agent's primary goal is reaching Bucharest, which helps narrow down considerations and
actions by limiting the objectives and hence the actions to be considered.
2. Problem formulation:
The agent creates an abstract model of the environment, defining states (cities) and actions
(traveling between adjacent cities). The model represents the world's relevant part, where the
only changing state is the current city.

3. Search:

• Before acting in the real world, the agent simulates sequences of actions in its model, searching for a solution that reaches the goal.
• Potential sequences may involve traveling from Arad to Sibiu, Fagaras, and finally Bucharest. The agent may need to explore multiple sequences before finding a viable solution (i.e., until it finds a sequence of actions that reaches the goal).

Solution: Once a solution is found, a fixed sequence of actions is established (e.g., Arad to
Sibiu to Fagaras to Bucharest). This represents an open-loop system, where the agent can
execute actions without constant observation if the environment is fully observable,
deterministic, and known.

4. Execution:

The agent can now execute the actions in the solution, progressing from one city to another. In an open-loop system, it can temporarily ignore percepts, trusting the predetermined solution to lead to the goal. However, caution is advised in case of uncertainties or non-deterministic environments, where a closed-loop approach may be necessary.
Search problems and solutions

Problem definition:

• States (state space): a set of possible states that the environment can be in.
• Initial state: where the agent begins (e.g., Arad).
• Goal states: desired outcomes; there can be one or more.
• Actions: the moves available to the agent in a given state,
  e.g., Actions(Arad) = {ToSibiu, ToTimisoara, ToZerind}.

• Transition model: describes what each action does. Result(s, a) returns the state that results from doing action a in state s, e.g., Result(Arad, ToZerind) = Zerind.

• Action cost function: denoted Action-Cost(s, a, s′) when we are programming or doing math; it gives the numeric cost of applying action a in state s to reach state s′ (i.e., it assigns a numeric cost to an action based on the state and the resulting state).

Path and solution:

Path: a sequence of actions, forming a route.


Solution: a path from the initial state to a goal state.
Optimal solution: the most efficient path in terms of total action cost

Graph representation:

The state space can be shown as a graph, where states are vertices and actions are directed
edges.

(e.g., the Romania map in Figure 3.1).

The map of Romania shown in Figure 3.1 is such a graph, where each road indicates two
actions, one in each direction
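The components above can be collected into a small sketch. The fragment below is only an illustration (city names and road distances follow Figure 3.1; the function names mirror the text's Actions, Result and Action-Cost, not any particular library):

```python
# A minimal sketch of the formal search-problem components, using a small
# fragment of the Romania map. Distances are the road costs from Figure 3.1.
ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
}

def actions(state):
    """Available moves in a given state, e.g. actions('Arad')."""
    return [f"to {city}" for city in ROADS.get(state, {})]

def result(state, action):
    """Transition model: Result(s, a) -> the neighbouring city."""
    return action.removeprefix("to ")

def action_cost(s, a, s2):
    """Action-Cost(s, a, s') -> numeric cost of the move."""
    return ROADS[s][s2]

print(actions("Arad"))                             # ['to Sibiu', 'to Timisoara', 'to Zerind']
print(result("Arad", "to Zerind"))                 # Zerind
print(action_cost("Arad", "to Zerind", "Zerind"))  # 75
```

The dictionary plays the role of the graph: each road gives rise to two actions, one in each direction, by listing each city in its neighbour's entry.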
Formulating problems

Abstraction in problem-solving:

Abstraction involves simplifying a problem representation by removing irrelevant details.

Example: while our problem of reaching Bucharest is abstract, a real cross-country trip
involves numerous details like travel companions, radio programs, road conditions, and
more. These are excluded from our model to focus on the route-finding problem.

Level of abstraction:

Importance: a good problem formulation maintains the right level of detail.

If actions were too detailed (e.g., "move the right foot forward a centimeter"), the agent might
struggle to navigate even a parking lot.

Now consider a solution to the abstract problem: for example the path from Arad to
Sibiu to Rimnicu Vilcea to Pitesti to Bucharest. This abstract solution corresponds to a large
number of more detailed paths. For example, we could drive with the radio on between
Sibiu and Rimnicu Vilcea, and then switch it off for the rest of the trip.

Abstraction is valid if it can be translated into a detailed solution.

A sufficient condition is that for every detailed state that is "in Arad", there is a detailed path to some state that is "in Sibiu", and so on.
The abstraction is useful if carrying out each of the actions in the solution is easier than the
original problem; in our case, the action "drive from Arad to Sibiu" can be carried out
without further search or planning by a driver with average skill.

The choice of a good abstraction thus involves removing as much detail as possible while
retaining validity and ensuring that the abstract actions are easy to carry out. Were it not for
the ability to construct useful abstractions, intelligent agents would be completely swamped
by the real world.

EXAMPLE PROBLEMS
Standardized Problems:
- These are specific problems designed to showcase or practice problem-solving
methods.
- They have clear and exact descriptions, making them easy to define and
understand.
- They are often used as benchmarks in research to compare how well different problem-solving algorithms perform.
- Example: a math problem in a textbook that requires a specific method to solve.

Real-World Problems:
- These are practical problems that people encounter in the real world, like
navigating a robot.
- Solutions to real-world problems are used in everyday life, and the problems are
unique and not standardized.
- The formulation of these problems is specific and can vary, for example, because different robots have different sensors producing different types of data.
- Example: teaching a robot to navigate a room by avoiding obstacles, where the specific challenges depend on the robot's unique sensors and the environment it's in.

STANDARDIZED PROBLEMS:

Definition:
A grid world problem involves a two-dimensional grid of square cells where an
agent can move. The agent can move horizontally, vertically, and sometimes
diagonally, avoiding obstacles and interacting with objects.
Example 1: The vacuum world problem can be considered a grid world problem. In this case, the states represent the arrangement of the agent and dirt in the cells.

• States: Describe which objects are in which cells (e.g., agent and dirt). In the simple two-cell version, the agent can be in either of two cells, and each cell either contains dirt or not, so there are 2 × 2 × 2 = 8 states, as shown in the figure above. In general, a vacuum environment with n cells has n · 2^n states.
• Initial State: Any state can be the starting point.
• Actions: Movements like Suck, move Left, and move Right.
• Transition Model: Describes the effects of actions, like Suck removing dirt and
movements obeying grid constraints.
• Goal States: States where every cell is clean.
• Action Cost: Each action has a cost of 1.
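The two-cell formulation above can be sketched directly; the state encoding below (agent location plus one dirt flag per cell) is an illustrative choice, not the only possible one:

```python
from itertools import product

# States of the two-cell vacuum world: (agent_location, dirt_left, dirt_right).
# 2 agent locations x 2 x 2 dirt combinations = 8 states.
STATES = list(product(["Left", "Right"], [True, False], [True, False]))
assert len(STATES) == 8

def result(state, action):
    """Transition model for the actions Suck, Left and Right."""
    loc, dirt_l, dirt_r = state
    if action == "Suck":       # removes dirt from the agent's current cell
        return (loc, False, dirt_r) if loc == "Left" else (loc, dirt_l, False)
    if action == "Left":
        return ("Left", dirt_l, dirt_r)
    if action == "Right":
        return ("Right", dirt_l, dirt_r)

def is_goal(state):
    return not state[1] and not state[2]   # every cell clean

s = ("Left", True, True)
s = result(result(result(s, "Suck"), "Right"), "Suck")
print(is_goal(s))   # True
```

Each action has unit cost, so a shortest action sequence is also a cheapest one in this formulation.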
8 Puzzle Problem:

The 8-puzzle consists of eight numbered, movable tiles set in a 3×3 frame. One cell of the frame is always empty, making it possible to move an adjacent numbered tile into the empty cell. Such a puzzle is illustrated in the following diagram. The program is to change the initial configuration into the goal configuration. A solution to the problem is an appropriate sequence of moves, such as "move tile 5 to the right, move tile 7 to the left, move tile 6 down," and so on.

Solution:

To solve a problem using a production system, we must specify the global database, the rules, and the control strategy. For the 8-puzzle problem, these three components correspond to the problem states, the moves, and the goal. In this problem each tile configuration is a state. The set of all configurations is the space of problem states, or the problem space; there are only 9! = 362,880 different configurations of the 8 tiles and the blank space. Once the problem states have been conceptually identified, we must construct a computer representation, or description, of them. This description is then used as the database of the production system. For the 8-puzzle, a straightforward description is a 3×3 array (matrix) of numbers. The initial global database is this description of the initial problem state. Virtually any kind of data structure can be used to describe states.

A move transforms one problem state into another state. The 8-puzzle is conveniently interpreted as having the following four moves: move the empty space (blank) to the left, move the blank up, move the blank to the right, and move the blank down. These moves are modeled by production rules that operate on the state descriptions in the appropriate manner.

The rules each have preconditions that must be satisfied by a state description in order for them to be applicable to that state description. Thus the precondition for the rule associated with "move blank up" is derived from the requirement that the blank space must not already be in the top row.
The problem goal condition forms the basis for the termination condition of the production system. The control strategy repeatedly applies rules to state descriptions until a description of a goal state is produced. It also keeps track of the rules that have been applied, so that it can compose them into a sequence representing the problem solution. A solution to the 8-puzzle problem is given in the following figure.
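One such production rule can be sketched as follows. The 3×3 tuple-of-tuples encoding (with 0 for the blank) is an illustrative choice; the precondition check mirrors the "move blank up" rule described above:

```python
# A sketch of one production rule for the 8-puzzle: "move blank up".
# The state is a 3x3 tuple-of-tuples; 0 denotes the blank.

def blank_pos(state):
    for r in range(3):
        for c in range(3):
            if state[r][c] == 0:
                return r, c

def move_blank_up(state):
    """Precondition: the blank must not already be in the top row."""
    r, c = blank_pos(state)
    if r == 0:
        return None                     # rule not applicable
    grid = [list(row) for row in state]
    grid[r][c], grid[r - 1][c] = grid[r - 1][c], grid[r][c]
    return tuple(map(tuple, grid))

start = ((1, 2, 3),
         (4, 0, 5),
         (6, 7, 8))
print(move_blank_up(start))   # ((1, 0, 3), (4, 2, 5), (6, 7, 8))
```

The other three rules (blank left, right, down) follow the same pattern with their own preconditions; the control strategy would try each applicable rule in turn.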
8 Queens problem

1. Initial state: any arrangement of 0 to 8 queens on the board.
2. Operators: add a queen to any square.
3. Goal test: 8 queens on the board, none attacked.
4. Path cost: not applicable, or zero (because only the final state counts; search cost might be of interest).
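The goal test above can be sketched in code. The board encoding (one queen per column, `board[col]` giving that queen's row) is an illustrative assumption:

```python
# A sketch of the 8-queens goal test: board is a tuple where board[col]
# is the row of the queen in that column (one queen per column).

def attacks(board):
    n = len(board)
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            if board[c1] == board[c2]:                 # same row
                return True
            if abs(board[c1] - board[c2]) == c2 - c1:  # same diagonal
                return True
    return False

def is_goal(board):
    return len(board) == 8 and not attacks(board)

print(is_goal((0, 4, 7, 5, 2, 6, 1, 3)))   # True  (a known solution)
print(is_goal((0, 1, 7, 5, 2, 6, 1, 3)))   # False (queens attack each other)
```

Note that this encoding already rules out two queens in the same column, which is one way the improved incremental formulation shrinks the state space.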

State Spaces versus Search Trees:


State Space:
– Set of valid states for a problem
– Linked by operators
– e.g., 20 valid states (cities) in the Romanian travel problem

Search Tree:
– Root node = initial state
– Child nodes = states that can be visited from the parent
– Note that the depth of the tree can be infinite, e.g., via repeated states

Partial search tree:
– Portion of the tree that has been expanded so far

Fringe:
– Leaves of the partial search tree, candidates for expansion

Search trees are the data structure used to search the state space.

Properties of Search Algorithms

Which search algorithm one should use will generally depend on the problem domain.

There are four important factors to consider:

Completeness – Is a solution guaranteed to be found if at least one solution exists?

Optimality – Is the solution found guaranteed to be the best (lowest-cost) solution if there exists more than one solution?

Time Complexity – The upper bound on the time required to find a solution, as a function of the complexity of the problem.

Space Complexity – The upper bound on the storage space (memory) required at any point during the search, as a function of the complexity of the problem.

General Problem Solver:

The General Problem Solver (GPS) was the first useful AI program,
written by Simon, Shaw, and Newell in 1959. As the name implies, it was
intended to solve nearly any problem.
Newell and Simon defined each problem as a space. At one end of the
space is the starting point; on the other side is the goal. The problem-solving
procedure itself is conceived as a set of operations to cross that space, to get from
the starting point to the goal state, one step at a time.

In the General Problem Solver, the program tests various actions (which Newell and Simon called operators) to see which will take it closer to the goal state. An operator is any activity that changes the state of the system. The General Problem Solver always chooses the operation that appears to bring it closer to its goal.

This formulation reduces the 8-queens state space from 1.8 × 10^14 to just 2,057 states, and solutions are easy to find. On the other hand, for 100 queens the reduction is from roughly 10^400 states to about 10^52 states: a big improvement, but not enough to make the problem tractable. A complete-state formulation, with a suitable simple algorithm, solves even the million-queens problem with ease.
Now, there's a more complex problem devised by Donald Knuth in 1964. He
suggested that by using a sequence of mathematical operations like square root,
floor, and factorial, starting from the number 4, you can reach any positive integer.
For example, you can get from 4 to 5 by applying these operations in a specific
order. Knuth proposed that this sequence of operations could reach any positive
whole number.

In simpler terms, Knuth came up with a puzzle involving math operations to reach
any number from a starting point of 4, showcasing how complex problems can be
abstracted and solved using a sequence of mathematical steps.

The problem definition is very simple:

• States: Positive numbers.


• Initial state: 4.
• Actions: Apply factorial, square root, or floor operation (factorial for integers
only).
• Transition model: As given by the mathematical definitions of the operations.
• Goal test: State is the desired positive integer.

To our knowledge there is no bound on how large a number might be constructed in the process of reaching a given target; for example, the number 620,448,401,733,239,439,360,000 (that is, (4!)!) is generated in the expression for 5, so the state space for this problem is infinite. Such state spaces arise frequently in tasks involving the generation of mathematical expressions, circuits, proofs, programs, and other recursively defined objects.
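The expression for 5 mentioned above can be checked directly. Applying factorial twice to 4 gives (4!)! = 24!, and each `math.isqrt` call performs "square root followed by floor," two of the puzzle's allowed operations:

```python
from math import factorial, isqrt

# Knuth's "4" puzzle: starting from 4, reach a target by applying
# factorial, square root and floor. For the target 5:
n = factorial(factorial(4))   # (4!)! = 24! = 620,448,401,733,239,439,360,000
for _ in range(5):
    n = isqrt(n)              # floor(sqrt(n)), applied five times
print(n)                      # 5
```

This is just a verification of one known expression, not a search procedure; a full solver would have to search the infinite state space described above.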

REAL WORLD PROBLEMS

Real-world search problems involve finding the best way to get from one place (or state) to another, considering the paths and connections available. Algorithms for this are used in various applications. For example:

1. Route-finding Problems:
- Imagine you want to find the best route from one place to another, considering
various factors like time, cost, and modes of transportation. Algorithms for this
are used in applications like navigation apps, websites providing directions, and
travel planning.

2. Airline Travel Planning:


- When planning air travel, the problem involves states (current location and
time), actions (flights), and the goal of reaching the final destination. The path
cost includes factors like cost, waiting time, flight duration, and seat quality.
Contingency plans are needed for unexpected issues.
3. Touring Problems:
- Similar to route-finding but with a twist. Imagine needing to visit multiple
cities with specific rules, starting and ending in a particular city. The challenge
includes keeping track of visited cities, and the goal is to visit all cities efficiently.

4. Traveling Salesperson Problem (TSP):


- In TSP, a salesperson needs to visit each city exactly once, aiming to find
the shortest tour. It's a complex problem used not just in sales but also in planning
movements for various tasks, like circuit-board drills.

5. VLSI Layout Problem:


- In the context of designing computer chips, the problem is to position millions
of components to minimize space, circuit delays, and maximize manufacturing
yield. It involves two steps: cell layout and channel routing.

6. Robot Navigation:
- Unlike fixed routes, robots can move in a continuous space with many possible
actions. Robot navigation involves finding paths in a complex environment. Real
robots deal with errors in sensor readings and motor controls.

7. Assembly Sequencing:
- This is about determining the order in which parts should be assembled.
Choosing the wrong order can create problems, and checking the feasibility of
each step is a complex task. It is closely related to robot navigation.

8. Protein Design:
- In this context, the goal is to find a sequence of amino acids that will fold
into a three-dimensional protein with specific properties to treat a disease. It's a
challenging search problem.

Consider the airline travel problems that must be solved by a travel-planning web
site:
• States: Each state represents a location (e.g., an airport) and the current
time. Additional details are recorded about previous flight segments, fare
bases, and whether they are domestic or international.

• Initial State: The starting point is the user's home airport.

• Actions: Choose any flight from the current location, in any seat class,
departing after the current time, allowing enough time for within-airport
transfers if needed.
• Transition Model: Taking a flight updates the state with the new destination
(the flight's destination) and the new time (the flight's arrival time).

• Goal State: The destination city. Sometimes the goal can be more specific, like arriving at the destination on a nonstop flight.

• Action Cost: The cost is a combination of monetary cost, waiting time,


flight time, customs and immigration procedures, seat quality, time of day,
type of airplane, frequent-flyer reward points, and more.
BREADTH FIRST SEARCH

Breadth-first search is a simple strategy in which the root node is expanded first, then all the
successors of the root node are expanded next, then their successors, and so on. In general, all
the nodes are expanded at a given depth in the search tree before any nodes at the next level are
expanded.
Breadth-first search is an instance of the general graph-search algorithm in which the shallowest
unexpanded node is chosen for expansion. This is achieved very simply by using a FIFO queue
for the frontier. Thus, new nodes (which are always deeper than their parents) go to the back of
the queue, and old nodes, which are shallower than the new nodes, get expanded first.
Thus we can say that Breadth-first search (BFS) is a graph traversal algorithm that explores all
the vertices of a graph level by level. It starts at the root node and visits all its neighbors before
moving on to their neighbors. This process continues until all nodes are visited or a specific node
is reached. BFS is often used to find the shortest path in an unweighted graph and is implemented
using a queue data structure to maintain the order of exploration.
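The strategy described above can be sketched as follows. The graph's edges are reconstructed from the step-by-step illustration below (the original figures are not shown), so treat them as an assumed example:

```python
from collections import deque

# A minimal BFS sketch on an unweighted graph: a FIFO queue holds the
# frontier, and a parent map lets us reconstruct the path at the end.
def bfs(graph, start, goal):
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        node = frontier.popleft()          # shallowest unexpanded node
        if node == goal:
            path = []
            while node is not None:        # follow parent pointers back
                path.append(node)
                node = parent[node]
            return path[::-1]
        for child in graph.get(node, []):
            if child not in parent:        # skip already-reached states
                parent[child] = node
                frontier.append(child)
    return None

# Edges inferred from the illustrated example (A is the source, G the goal).
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["D", "G"], "D": ["C", "F"]}
print(bfs(graph, "A", "G"))   # ['A', 'C', 'G']
```

The returned path A C G agrees with the result of the illustrated trace.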

BFS illustrated:

Step 1: Initially fringe or frontier or open list contains only one node corresponding to the source
state A.

Figure 1
FRINGE: A

Step 2: A is removed from fringe. The node is expanded, and its children B and C are generated. They are placed at the back of fringe.

Figure 2
FRINGE: B C
Step 3: Node B is removed from fringe and is expanded. Its children D, E are generated and put at the back of fringe.

Figure 3
FRINGE: C D E

Step 4: Node C is removed from fringe and is expanded. Its children D and G are added to the
back of fringe.

Figure 4
FRINGE: D E D G

Step 5: Node D is removed from the fringe. Its children C and F are generated and added to the
back of fringe.

Figure 5
FRINGE: E D G C F

Step 6: Node E is removed from fringe. It has no children.

Figure 6
FRINGE: D G C F

Step 7: D is expanded; B and F are put in OPEN.

Figure 7
FRINGE: G C F B F

Step 8: G is selected for expansion. It is found to be a goal node, so the algorithm returns the path A C G by following the parent pointers of the node corresponding to G. The algorithm terminates.
Breadth first search is:

• One of the simplest search strategies


• Complete: if there is a solution, BFS is guaranteed to find it.
• If there are multiple solutions, then a minimal (shallowest) solution will be found.
• The algorithm is optimal (i.e., admissible) if all operators have the same cost; otherwise, breadth-first search finds a solution with the shortest path length.
• Time complexity: O(b^d)
• Space complexity: O(b^d)
• Optimality: yes

where b is the branching factor (maximum number of successors of any node) and d is the depth of the shallowest goal node.

Disadvantages:
• Requires the generation and storage of a tree whose size is exponential in the depth of the shallowest goal node.
• The breadth-first search algorithm cannot be effectively used unless the search space is quite small.

ADVANTAGES OF BREADTH FIRST SEARCH:

Breadth-First Search (BFS) has several advantages,


1. *Shortest Path:* BFS guarantees that the first time a node is encountered during traversal, it has been reached by the shortest path from the starting point in an unweighted graph. This property is valuable in applications where finding the shortest path is crucial.

2. *Completeness:* BFS will find a solution if one exists, making it a complete search
algorithm. It explores all possible nodes at the current depth level before moving on to
the next level, ensuring that it covers the entire graph.

3. *Optimality in Unweighted Graphs:* In an unweighted graph, BFS is optimal for


finding the shortest path. Since all edges have the same weight, the first time a node is
encountered during traversal, it's guaranteed to be along the shortest path.

4. *Level-by-Level Exploration:* BFS traverses a graph level by level, which can be


beneficial in certain scenarios. It's useful for applications like network broadcasting or
web crawling, where exploring nearby nodes first is advantageous.

5. *Memory Efficiency:* While BFS uses a queue data structure, it often requires less
memory compared to DFS in scenarios where the branching factor is high, as it explores
all nodes at the current level before moving deeper.
These advantages make BFS a suitable choice for various applications, particularly when finding
the shortest path or exploring a graph systematically is essential.

UNIFORM-COST SEARCH

Uniform-cost search is an optimal algorithm that expands nodes with the lowest path cost in a
search space. This is achieved by maintaining a priority queue ordered by the cumulative path
cost. The algorithm is similar to breadth-first search when step costs are equal, as it explores the
shallowest unexpanded nodes first. The key components of the algorithm include:
1. Initialization: Start with a node representing the initial state and a path cost of 0.

2. Frontier: Use a priority queue ordered by path cost to store the frontier.

3. Exploration: Loop through the frontier, selecting the node with the lowest path cost for
expansion.

4. Goal Test: Apply the goal test when a node is selected for expansion, not when it is
generated.

5. Node Expansion: Generate child nodes for the selected node based on available actions.
Add these nodes to the frontier if they are not in the explored set or the frontier. Replace
the frontier node with the new child if it has a higher path cost.
6. Optimality: Uniform-cost search is optimal because it always expands nodes with the
lowest path cost, ensuring the first goal node selected is on the optimal solution path.

7. Completeness: Completeness is guaranteed as long as the cost of every step exceeds a


small positive constant.

8. Complexity: The worst-case time and space complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)), where b is the branching factor, d is the depth of the shallowest goal node, C* is the cost of the optimal solution, and ε is a lower bound on the cost of each action. This complexity can be greater than O(b^d), especially when step costs vary widely.

Uniform-cost search is guided by path costs rather than depths, making it suitable for scenarios
where the cost of actions varies. Its optimality ensures that the first goal node expanded is part
of the optimal solution path. However, it may get stuck in an infinite loop if there is a path with
an infinite sequence of zero-cost actions.

In the example of Sibiu to Bucharest using uniform-cost search:

• Successors of Sibiu: the successors of Sibiu are Rimnicu Vilcea and Fagaras, with costs 80 and 99, respectively.
• Node Expansion: Rimnicu Vilcea, being the least-cost node, is expanded first. Pitesti is added with a cost of 80 + 97 = 177.
• Next Expansion: Fagaras becomes the least-cost node and is expanded, adding Bucharest with a cost of 99 + 211 = 310.
• Goal Node Generated: a goal node has now been generated, but uniform-cost search continues.
• Additional Path: Pitesti is chosen for expansion, adding a second path to Bucharest with a cost of 80 + 97 + 101 = 278.
• Path Comparison: the algorithm checks whether the new path is better than the old one. Since 278 is less than the previous cost of 310, the old path is discarded.
• Optimal Solution: Bucharest, now with a g-cost of 278, is selected for expansion, and the optimal solution is returned.
• Optimality of Uniform-Cost Search: uniform-cost search is provably optimal. When a node is expanded, the optimal path to that node has been found. This is because step costs are nonnegative, and paths never get shorter as nodes are added.
• Completeness: completeness is guaranteed as long as the cost of every step exceeds some small positive constant ε.
• Complexity: the worst-case time and space complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)), where b is the branching factor and C* is the cost of the optimal solution. This can be greater than O(b^d), especially when the search explores large trees of small steps before reaching larger steps.
• Focus on Path Costs: uniform-cost search is guided by path costs rather than depths. It may examine all nodes at a goal's depth to find the lowest cost, which can result in additional work compared to breadth-first search, which stops as soon as a goal is generated.
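The Sibiu-to-Bucharest example above can be reproduced with a short sketch; the priority queue is ordered by the cumulative path cost g, and the goal test is applied on expansion, not generation:

```python
import heapq

# A uniform-cost search sketch: the frontier is a priority queue keyed
# on the cumulative path cost g.
def ucs(graph, start, goal):
    frontier = [(0, start, [start])]       # (g-cost, state, path)
    best = {}                              # cheapest g found so far per state
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path                 # goal test on expansion => optimal
        if node in best and best[node] <= g:
            continue                       # a cheaper path was already expanded
        best[node] = g
        for child, cost in graph.get(node, {}).items():
            heapq.heappush(frontier, (g + cost, child, path + [child]))
    return None

# The road costs used in the worked example.
graph = {
    "Sibiu": {"Rimnicu Vilcea": 80, "Fagaras": 99},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Fagaras": {"Bucharest": 211},
    "Pitesti": {"Bucharest": 101},
}
print(ucs(graph, "Sibiu", "Bucharest"))
# (278, ['Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

As in the trace above, the 310-cost path through Fagaras is generated first but the 278-cost path through Pitesti is the one expanded at the goal.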
DEPTH FIRST SEARCH

Depth-First Search (DFS) is a graph traversal algorithm that explores as far as possible along
each branch before backtracking. Starting from a specific node, it visits an adjacent unvisited
node, and continues this process until it reaches a dead end. Then, it backtracks to the most
recent unexplored branch and repeats the process. DFS can be implemented using recursion or
a stack data structure. It's often used in solving problems related to connectivity, path finding,
and cycle detection in graphs.
Depth-first search always expands the deepest node in the current frontier of the search tree.
The search proceeds immediately to the deepest level of the search tree, where the nodes have
no successors. As those nodes are expanded, they are dropped from the frontier, so the search "backs up" to the next deepest node that still has unexplored successors.

Depth-First Search:


We may sometimes search the goal along the largest depth of the tree, and move
up only when further traversal along the depth is not possible. We then attempt
to find alternative offspring of the parent of the node (state) last visited. If we
visit the nodes of a tree using the above principles to search the goal, the
traversal made is called depth first traversal and consequently the search
strategy is called depth first search.

• Algorithm:
1. Create a variable called NODE-LIST and set it to initial state

2. Until a goal state is found or NODE-LIST is empty do


a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty, quit.
b. For each way that each rule can match the state described in E do:
   i. Apply the rule to generate a new state
   ii. If the new state is a goal state, quit and return this state
   iii. Otherwise, add the new state to the front of NODE-LIST

DFS illustrated:

A State Space

Graph Step 1:

Initially fringe contains only the node for A.

Figure 1

FRINGE: A
Step 2:
A is removed from the fringe. A is expanded and its children B and C are put in front of
fringe.

Figure 2
FRINGE: B C

Step 3:

Node B is removed from fringe, and its children D and E are pushed in front of fringe.

Figure 3
FRINGE: D E C
Step 4:

Node D is removed from fringe. C and F are pushed in front of fringe.

Figure 4

FRINGE: C F E C

Step 5: Node C is removed from fringe. Its child G is pushed in front of fringe.

Figure 5

FRINGE: G F E C
Step 6:

Node G is expanded and found to be a goal node.

Figure 6

FRINGE: G F E C

The solution path A-B-D-C-G is returned and the algorithm terminates.

Properties of depth-first search:

1. The algorithm takes exponential time: if N is the maximum depth of a node in the search space, in the worst case the algorithm takes time O(b^N).
2. The space taken is linear in the depth of the search tree: O(bN).

Note that the time taken by the algorithm is related to the maximum depth of the search tree. If the search tree has infinite depth, the algorithm may not terminate. This can happen if the search space is infinite; it can also happen if the search space contains cycles. The latter case can be handled by checking for cycles in the algorithm. Thus depth-first search is not complete.
TREE-SEARCH & the GRAPH-SEARCH ALGORITHM

Exhaustive searches - Iterative Deepening DFS

Description:

It is a search strategy resulting when you combine BFS and DFS, thus
combining the advantages of each strategy, taking the completeness and
optimality of BFS and the modest memory requirements of DFS.

IDS works by looking for the best search depth d: it starts with depth limit 0 and performs a depth-limited DFS; if the search fails, it increases the depth limit by 1 and tries again with depth 1, and so on – first d = 0, then 1, then 2 – until a depth d is reached at which a goal is found.
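The loop just described can be sketched as repeated depth-limited DFS (the example graph and the `max_depth` cap are illustrative assumptions):

```python
# An iterative-deepening sketch: depth-limited DFS, repeated with the
# limit raised by one each round until the goal is found.
def depth_limited(graph, node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None                        # cutoff reached
    for child in graph.get(node, []):
        if child not in path:              # avoid cycles along this path
            found = depth_limited(graph, child, goal, limit - 1, path + [child])
            if found:
                return found
    return None

def ids(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):     # d = 0, then 1, then 2, ...
        result = depth_limited(graph, start, goal, limit, [start])
        if result:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["G"]}
print(ids(graph, "A", "G"))   # ['A', 'C', 'G']
```

Because each round is a plain DFS, only the current path is stored, which is where the modest memory requirement comes from.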
Performance Measure:

o Completeness: IDS, like BFS, is complete when the branching factor b is finite.

o Optimality: IDS, like BFS, is optimal when all steps have the same cost.

Time Complexity:

o One may find it wasteful to generate nodes multiple times, but in fact it is not that costly compared to BFS, because most of the generated nodes are always in the deepest level reached. Consider searching a binary tree with a depth limit of 4: the nodes generated at the last level number 2^4 = 16, while the nodes generated at all levels before the last number 2^0 + 2^1 + 2^2 + 2^3 = 15.

o Imagine we are performing IDS and the depth limit has reached depth d. From the way IDS expands nodes, nodes at depth d are generated once, nodes at depth d-1 are generated 2 times, nodes at depth d-2 are generated 3 times, and so on, until depth 1, which is generated d times. The total number of generated nodes in the worst case is:

N(IDS) = (d)b + (d-1)b^2 + (d-2)b^3 + ... + (2)b^(d-1) + (1)b^d = O(b^d)

o If this search were done with BFS, the total number of generated nodes in the worst case would be:

N(BFS) = b + b^2 + b^3 + b^4 + ... + b^d + (b^(d+1) - b) = O(b^(d+1))

o If we consider realistic numbers, b = 10 and d = 5, the numbers of generated nodes in BFS and IDS will be:

N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450

N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,100

BFS generates roughly 9 times as many nodes as IDS.
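The worked numbers above can be checked with a couple of lines of arithmetic, using the two formulas for N(IDS) and N(BFS):

```python
# Checking the worked numbers for b = 10, d = 5: under IDS, nodes at
# depth k are generated (d - k + 1) times; under the naive BFS count,
# each level is generated once plus the extra b^(d+1) - b children.
b, d = 10, 5
n_ids = sum((d - k + 1) * b**k for k in range(1, d + 1))
n_bfs = sum(b**k for k in range(1, d + 1)) + (b**(d + 1) - b)
print(n_ids, n_bfs, round(n_bfs / n_ids, 1))   # 123450 1111100 9.0
```

The ratio confirms the roughly 9x factor quoted above.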


Space Complexity:

IDS is like DFS in its space complexity, taking O(b·d) of memory.

Weblinks:
i. https://www.youtube.com/watch?v=7QcoJjSVT38

ii. https://mhesham.wordpress.com/tag/iterative-deepening-depthfirst-search

Conclusion:

We can conclude that IDS is a hybrid search strategy between BFS and DFS, inheriting the advantages of each. In practice it can outperform both plain BFS and plain DFS.
It is said that "IDS is the preferred uninformed search method when there is a large search space and the depth of the solution is not known."
