PROBLEM SOLVING
Problem solving is fundamental to many AI-based applications: it is the process of generating solutions from observed data. A problem is characterized by:
A set of goals
A set of objects and
A set of operations
These could be ill-defined and may evolve during problem solving. To develop a computer
system that is capable of solving problems it is necessary to perform 4 activities:
Define the problem precisely including detailed specifications and what constitutes an
acceptable solution.
Analyse the problem thoroughly, for some features may have a dominant effect on the
chosen method of solution.
Isolate and represent the background knowledge needed in the solution of the problem.
Choose the best problem-solving technique(s) and apply them to the problem.
[Figure: an 8-puzzle start state, to be transformed into the goal state]
Production rules for the water jug problem (X = gallons in the 4-gallon jug, Y = gallons in the 3-gallon jug):
3.  (X,Y) → (X-D,Y)        if X > 0                 Pour some water out of the 4-gallon jug
4.  (X,Y) → (X,Y-D)        if Y > 0                 Pour some water out of the 3-gallon jug
7.  (X,Y) → (4, Y-(4-X))   if X+Y >= 4 and Y > 0    Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full
8.  (X,Y) → (X-(3-Y), 3)   if X+Y >= 3 and X > 0    Pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full
9.  (X,Y) → (X+Y, 0)       if X+Y <= 4 and Y > 0    Pour all the water from the 3-gallon jug into the 4-gallon jug
10. (X,Y) → (0, X+Y)       if X+Y <= 3 and X > 0    Pour all the water from the 4-gallon jug into the 3-gallon jug
11. (0,2) → (2,0)                                   Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug
12. (2,Y) → (0,Y)                                   Empty the 2 gallons in the 4-gallon jug on the ground
Please note that the speed with which the problem is solved depends upon the control
structure used to select the next rule to apply. Two possible solutions are shown
below:
Trace of steps involved in solving the water jug problem
First solution
1  Initial state                                               (0,0)
2  R2  {Fill the 3-gallon jug}                                 (0,3)
3  R9  {Pour all water from the 3- to the 4-gallon jug}        (3,0)
4  R2  {Fill the 3-gallon jug}                                 (3,3)
5  R7  {Pour from the 3- to the 4-gallon jug until it is full} (4,2)
6  R5  {Empty the 4-gallon jug}                                (0,2)
7  R9  {Pour all water from the 3- to the 4-gallon jug}        (2,0)
Goal state
Second solution
1  Initial state                                               (0,0)
2  R1  {Fill the 4-gallon jug}                                 (4,0)
3  R8  {Pour from the 4- to the 3-gallon jug until it is full} (1,3)
4  R6  {Empty the 3-gallon jug}                                (1,0)
5  R10 {Pour all water from the 4- to the 3-gallon jug}        (0,1)
6  R1  {Fill the 4-gallon jug}                                 (4,1)
7  R8  {Pour from the 4- to the 3-gallon jug until it is full} (2,3)
8  R6  {Empty the 3-gallon jug}                                (2,0)
Goal state
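The rule-driven search above can be sketched in code. The following is a minimal sketch assuming a breadth-first control strategy; the function names are illustrative, and the nondeterministic "pour some water out" rules (3 and 4) and the special-case rules (11 and 12) are omitted because this control strategy does not need them:

```python
from collections import deque

def successors(state):
    """Apply the water jug production rules to (x, y), where x is the
    4-gallon jug and y is the 3-gallon jug."""
    x, y = state
    results = []
    if x < 4: results.append((4, y))                            # fill the 4-gallon jug
    if y < 3: results.append((x, 3))                            # fill the 3-gallon jug
    if x > 0: results.append((0, y))                            # empty the 4-gallon jug
    if y > 0: results.append((x, 0))                            # empty the 3-gallon jug
    if x + y >= 4 and y > 0: results.append((4, y - (4 - x)))   # pour 3 -> 4 until full
    if x + y >= 3 and x > 0: results.append((x - (3 - y), 3))   # pour 4 -> 3 until full
    if x + y <= 4 and y > 0: results.append((x + y, 0))         # pour all of 3 into 4
    if x + y <= 3 and x > 0: results.append((0, x + y))         # pour all of 4 into 3
    return results

def solve(start=(0, 0), goal_x=2):
    """Breadth-first control strategy over the rules; returns a path of states
    ending with exactly goal_x gallons in the 4-gallon jug."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == goal_x:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```

Because the control strategy is breadth-first, `solve()` returns one of the two six-step solutions traced above.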
The state space graph for the missionaries and cannibals problem is shown below. [Figure omitted]
In the Towers of Hanoi problem, our goal (the solution) is to move all disks to the last peg, as shown. As in many state spaces, there are potential transitions that are not legal. For example, we can only move a disk that has no disk above it. Further, we can't move a large disk onto a smaller disk (though we can move any disk to an empty peg). The space of possible operators is therefore constrained to legal
moves. The state space can also be constrained to moves that have not yet been performed for a
given subtree. For example, if we move a small disk from Peg A to Peg C, moving the same disk
back to Peg A could be defined as an invalid transition. Not doing so would result in loops and
an infinitely deep tree.
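The legality constraints above can be encoded directly in a successor generator. The following is a minimal sketch (the state representation and function name are our own, not from the text): each peg is a tuple of disk sizes, bottom to top.

```python
def hanoi_moves(state):
    """Generate the legal successor states for Towers of Hanoi.
    state: a tuple of three tuples, each listing disk sizes bottom-to-top."""
    moves = []
    for src in range(3):
        if not state[src]:
            continue                       # nothing to move from an empty peg
        disk = state[src][-1]              # only the top disk may move
        for dst in range(3):
            if dst == src:
                continue
            if state[dst] and state[dst][-1] < disk:
                continue                   # cannot place a larger disk on a smaller one
            nxt = [list(p) for p in state]
            nxt[src].pop()
            nxt[dst].append(disk)
            moves.append(tuple(tuple(p) for p in nxt))
    return moves
```

From the initial three-disk state `((3, 2, 1), (), ())` this yields exactly two moves (the small disk to Peg B or Peg C), matching the discussion above.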
Consider our initial position shown above. The only disk that may move is the small disk at the
top of Peg A. For this disk, only two legal moves are possible: from Peg A to Peg B or to Peg C.
Suppose we move it to Peg C. From this new state, there are three potential moves:
1. Move the small disk from Peg C to Peg B.
2. Move the small disk from Peg C to Peg A.
3. Move the medium disk from Peg A to Peg B.
The first move (small disk from Peg C to Peg B), while legal, is not a useful
move, as we just moved this disk to Peg C (an empty peg). Moving it a second
time serves no purpose, since that move could have been made during the prior
transition, so there's no value in doing it now (a heuristic). The second
move is also not useful (another heuristic), because it is the reverse of the
move we just made and merely returns the search to the previous state.
Search strategies
A strategy is defined by picking the order of node expansion. Strategies are evaluated based on:
completeness—does it always find a solution if one exists?
time complexity—number of nodes generated/expanded
space complexity—maximum number of nodes in memory
optimality—does it always find a least-cost solution?
There are two major types of search strategy: uninformed and informed search.
1. Uninformed Search: sometimes called blind, exhaustive or brute-force search. These methods
do not use any specific knowledge about the problem to guide the search and therefore may
not be very efficient: they search through all possible candidates for the solution in the
search space, checking whether each candidate satisfies the problem's statement. The search
techniques in this strategy include:
Breadth-first search (BFS)
Depth-first search (DFS)
Depth-limited search (DLS)
Depth-first search with iterative deepening (DFSID)
Uniform cost search (UCS)
Bi-directional search
2. Informed Search: sometimes called heuristic or intelligent search. These methods use
information about the problem to guide the search, usually an estimate of the distance to a
goal state, and are therefore more efficient; however, such guidance may not always be
available, and heuristics are specific to the problem. The methods in this strategy include:
Best-First Search
o Greedy best-first Search
o A* Search
Breadth-first search is a simple strategy in which the root node is expanded first, then all
successors of the root node are expanded next, then their successors, and so on. In general, all the
nodes at a given depth in the search tree are expanded before any nodes at the next level are
expanded.
Advantages
Breadth-first search is an exhaustive search algorithm. It is simple to implement, and it can
be applied to any search problem.
Compared to depth-first search, BFS does not suffer from any potential infinite-path
problem, which may cause the search to never terminate, whereas depth-first search keeps
going deeper and deeper.
Breadth-first search performs well if the search space is small.
It performs best if the goal state lies in the upper left-hand side of the tree.
If there is more than one solution, breadth-first search finds the minimal one, i.e. the one
that requires the smallest number of steps.
Disadvantages
Breadth-first search performs relatively poorly compared to depth-first search if the goal
state lies near the bottom of the tree.
Memory utilization is poor in breadth-first search, so we can say that breadth-first search
needs more memory than DFS.
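As a sketch, breadth-first search can be implemented with a FIFO queue of paths. This is a minimal, generic version assuming hashable states; the function and parameter names are illustrative:

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """Expand the shallowest unexpanded node first.
    Returns the path to the first goal found (shortest in number of steps)."""
    frontier = deque([[start]])     # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if goal_test(node):
            return path
        for child in successors(node):
            if child not in explored:
                explored.add(child)
                frontier.append(path + [child])
    return None
```

On the small graph `A -> {B, C}`, `B -> D`, `C -> D`, searching for `D` from `A` returns a shortest path such as `['A', 'B', 'D']`.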
Depth First Search (DFS)
Advantages:
Low storage requirement: linear with tree depth.
Easily programmed: function call stack does most of the work of maintaining state of the
search.
Disadvantages:
May find a sub-optimal solution (one that is deeper or more costly than the best solution).
Incomplete: without a depth bound, it may not find a solution even if one exists.
The drawback of depth-first search is that it can make a wrong choice and get stuck going
down a very long (or even infinite) path when a different choice would lead to a solution near
the root of the search tree.
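A recursive sketch of depth-first search, in which (as noted above) the function call stack does most of the work of maintaining the state of the search. The names are illustrative, and a visited set is assumed so that finite graphs with cycles do not loop forever:

```python
def depth_first_search(node, goal_test, successors, visited=None):
    """Expand the deepest node first by recursing before trying siblings.
    Returns a path to a goal, not necessarily the shortest one."""
    if visited is None:
        visited = set()
    visited.add(node)
    if goal_test(node):
        return [node]
    for child in successors(node):
        if child not in visited:
            result = depth_first_search(child, goal_test, successors, visited)
            if result is not None:
                return [node] + result
    return None
```

Note that the visited set guards against revisiting states, but without a depth bound the search can still wander arbitrarily deep in an infinite state space, which is exactly the drawback described above.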
Depth Limited Search (DLS)
Depth-limited search is depth-first search with a predetermined depth bound: nodes at the limit are treated as if they have no successors.
Disadvantages
Depth-limited search is complete (provided the limit is at least the depth of the shallowest solution) but not optimal.
If we choose a depth limit that is too small, depth-limited search is not even complete.
The time and space complexity of depth-limited search are similar to those of depth-first search.
Depth First Search with Iterative Deepening performs depth-first search to a bounded depth d,
starting with d = 1 and increasing it by 1 on each iteration. It was created as an attempt to
combine the ability of BFS to always find an optimal (shortest) solution with the lower memory
overhead of DFS, so we can say it combines the best features of breadth-first and depth-first
search. It performs a DFS to depth one, then starts over, executing a complete DFS to depth two,
and continues to run depth-first searches to successively greater depths until a solution is
found.
Advantages:
Finds an optimal solution (shortest number of steps).
Disadvantages:
Wasted search: each iteration repeats all of the computation of the previous iterations
before reaching the goal depth.
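The iterative-deepening scheme just described can be sketched as a loop over depth-limited searches. This is a minimal illustration (names are our own), starting at limit 0 and raising the limit by one each round:

```python
def depth_limited(node, goal_test, successors, limit):
    """Depth-first search that refuses to expand nodes below the depth limit."""
    if goal_test(node):
        return [node]
    if limit == 0:
        return None                 # cutoff: treat this node as having no successors
    for child in successors(node):
        result = depth_limited(child, goal_test, successors, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening(start, goal_test, successors, max_depth=50):
    """Run depth-limited search with limits 0, 1, 2, ... until a goal is found.
    The shallowest goal is found first, as in BFS, but with DFS-like memory use."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal_test, successors, limit)
        if result is not None:
            return result
    return None
```

Each call to `depth_limited` redoes the work of the previous iteration, which is the wasted computation noted above; the trade-off is that only the current path is kept in memory.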
Bi-directional search runs two simultaneous searches: one forward from the initial state and
one backward from the goal, stopping when the two searches meet in the middle. For example, if a
problem has solution depth d = 6, and each direction runs breadth-first search one node at a
time, then in the worst case the two searches meet when each has expanded all but one of the
nodes at depth 3. For b = 10, this means a total of 22,200 node generations, compared with
11,111,100 for a standard breadth-first search. This algorithm is complete and optimal if both
searches are breadth-first; other combinations may sacrifice completeness, optimality, or both.
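The meet-in-the-middle idea can be sketched as two interleaved breadth-first frontiers. This is a minimal version for an undirected graph (so the same neighbor function serves both directions); the names and representation are illustrative:

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """BFS simultaneously from start (forward) and goal (backward),
    expanding one layer per side and stopping when the frontiers meet."""
    if start == goal:
        return [start]
    fwd = {start: [start]}              # node -> path from start to node
    bwd = {goal: [goal]}                # node -> path from node to goal
    frontiers = [deque([start]), deque([goal])]
    while frontiers[0] and frontiers[1]:
        for side in (0, 1):
            q = frontiers[side]
            for _ in range(len(q)):     # expand exactly one layer
                node = q.popleft()
                for nb in neighbors(node):
                    if side == 0:
                        if nb in bwd:                       # frontiers meet
                            return fwd[node] + bwd[nb]
                        if nb not in fwd:
                            fwd[nb] = fwd[node] + [nb]
                            q.append(nb)
                    else:
                        if nb in fwd:                       # frontiers meet
                            return fwd[nb] + bwd[node]
                        if nb not in bwd:
                            bwd[nb] = [nb] + bwd[node]
                            q.append(nb)
    return None
```

On a chain A-B-C-D-E, each side only explores about half the chain before the frontiers touch, illustrating why the node count is far lower than a single breadth-first search.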
Heuristic or informed search exploits additional knowledge about the problem to direct the
search toward more promising paths. A heuristic function, h(n), provides an estimate of the cost
of the path from a given node n to the closest goal state; h(n) must be zero if n is a goal state.
Example:
Straight-line distance from current location to the goal location in a road navigation problem.
Best-First Search
Best-first search is essentially breadth-first search, but with the nodes re-ordered by their
heuristic value. Best-first retains a record of every state that has been visited, together with
the heuristic value of that state. The best state visited so far is retrieved and the search
continues from there. This makes best-first search appear to jump around the search tree, like a
random search, but of course best-first search is not random. The memory requirements for
best-first search are worse than those of hill-climbing but not as bad as those of breadth-first
search, because breadth-first search does not use a heuristic to avoid obviously worse states.
There are two types of best-first search:
a) Greedy Best-First Search
When we are examining node n, h(n) gives us the estimated cost of the cheapest path from
n's state to the goal state. Of course, the better the estimate h() gives, the better and faster
we will find a solution to our problem. Greedy search has behavior similar to depth-first
search; its advantages are delivered via the use of a quality heuristic function to direct the
search.
Disadvantages
Like depth-first search, greedy search is not complete.
Greedy search is not guaranteed to find the solution with the shortest path, i.e. it is not optimal.
It is possible for greedy search to proceed down an infinitely long branch without finding a
solution even when one exists, i.e. it is liable to get stuck in loops.
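Greedy best-first search can be sketched as a priority queue ordered purely by h(n). This is a minimal illustration with our own function names; note that the path cost so far plays no role in the ordering:

```python
import heapq

def greedy_best_first(start, goal_test, successors, h):
    """Always expand the node whose heuristic estimate h(n) is smallest,
    ignoring the cost already paid to reach it."""
    frontier = [(h(start), start, [start])]     # min-heap ordered by h only
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if goal_test(node):
            return path
        for child in successors(node):
            if child not in visited:
                visited.add(child)
                heapq.heappush(frontier, (h(child), child, path + [child]))
    return None
```

Because only h(n) drives the ordering, a misleadingly low heuristic on a long branch pulls the search down that branch, which is why greedy search is neither complete nor optimal in general.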
b) A* Search
A* search is one of the best-known forms of best-first search. The A* algorithm combines the
greedy search algorithm's efficiency with the uniform cost search's optimality and
completeness. Unlike greedy search, A*'s heuristic evaluation also takes into account the
existing cost from the starting point to the current node. This searching technique avoids
expanding paths that are already expensive and expands the most promising paths first.
In A* the evaluation function is computed from the two measures g(n) and h(n):
f(n) = g(n) + h(n), where
g(n) is the cost of the path found so far from the start node to node n
h(n) is the estimated cost of the cheapest path from n to the goal
f(n) is then the estimated cost of the cheapest path from the start node to a goal node that
passes through node n
This combination of strategies turns out to provide A* with both completeness(does it always
find a solution if one exists?) and optimality(does it always find a least-cost solution?).
Advantages
A* is complete and optimal, provided h(n) is admissible (it never overestimates the true
cost to the goal).
Disadvantages
The drawback of A* search is that, as it needs to maintain a list of unexpanded nodes, it can
require large amounts of memory.
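A* can be sketched as a priority queue ordered by f(n) = g(n) + h(n). The following is a minimal version (names are illustrative) where `successors(n)` yields `(child, step_cost)` pairs:

```python
import heapq

def a_star(start, goal_test, successors, h):
    """A* search: order expansion by f(n) = g(n) + h(n).
    Returns (path, cost) of the first goal popped from the frontier."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                          # cheapest known cost to each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if goal_test(node):
            return path, g
        for child, cost in successors(node):
            g2 = g + cost
            if g2 < best_g.get(child, float('inf')):   # found a cheaper route
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h(child), g2, child, path + [child]))
    return None, float('inf')
```

On a weighted graph where the direct edge to the goal is expensive, A* correctly prefers the cheaper multi-step route, whereas a purely greedy ordering by h(n) could not guarantee that.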
Hill-Climbing Search
The basic idea of hill-climbing search is that it simply evaluates the objective function for all
states that are neighbors of the current state, and takes the neighbor state with the best
objective-function value as the new current state. If there is more than one next-best state,
one is picked randomly.
Hill-climbing search is sometimes called greedy search, because a step is taken after only
considering the immediate neighbors. No time is spent considering possible future states.
Hill-climbing is easy to formulate and implement and often finds pretty good states quickly. But,
it has the following problems:
it gets stuck on local optima (hills for maximizing searches, valleys for minimizing searches),
it may get stuck on a ridge, if no single action can advance the search along the ridge,
it may get stuck wandering on a plateau where all neighboring states have equal value.
Common variations include:
allowing sideways moves (when on a plateau),
stochastic hill-climbing: choose the next state with probability related to the increase in
the value of the objective function,
first-choice hill-climbing: generate neighbors by random choice of available actions and keep
the first state that has a better value,
random-restart hill-climbing: conduct multiple hill-climbing searches from multiple,
randomly generated initial states.
Only this last one, with random-restarts, is complete. In the limit, all states will be tried as
starting states so the goal, or best state, will eventually be found.
Advantages
1. Acceptable for simple problems.
Disadvantages
1. Local maxima: peaks that are not the highest point in the space.
2. Plateaus: the space has a broad flat region that gives the search algorithm no direction
(random walk).
3. Ridges: the orientation of the high region, compared to the set of available moves, makes
it impossible to climb up; however, two moves executed serially may increase the height.
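The basic hill-climbing loop and the random-restart variation can be sketched as follows. This is a toy sketch, assuming integer states on a 1-D objective; all names are illustrative:

```python
import random

def hill_climb(state, neighbors, value):
    """Move to the best-valued neighbor; stop when no neighbor improves
    on the current state (a local optimum)."""
    while True:
        best = max(neighbors(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state               # no improving neighbor: local optimum
        state = best

def random_restart_hill_climb(random_state, neighbors, value, restarts=20):
    """Run hill-climbing from several randomly generated initial states
    and keep the best local optimum found."""
    results = (hill_climb(random_state(), neighbors, value)
               for _ in range(restarts))
    return max(results, key=value)
```

For a unimodal objective such as value(x) = -(x - 3)^2 on 0..10, any start climbs to the single peak at x = 3; the restarts only matter when the landscape has multiple local optima.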