COLLEGE OF ENGINEERING
DEPARTMENT OF COMPUTER SCIENCE AND
ENGINEERING
ARTIFICIAL INTELLIGENCE
Chapter 2
Solving Problems by Searching
By: Bereket M.
Problem-Solving Agents
● In the realm of artificial intelligence, problem-solving agents are intelligent
entities designed to perceive their environment, interpret information, and
strategically take actions to achieve predefined goals.
● Problem Space: Agents navigate through a problem space, where states
represent different configurations, and actions lead to transitions between
states.
Problem-Solving Agents
● Reflex agents may face challenges in environments where mappings are
impractical to store or take too long to learn. Unlike reflex agents, goal-based
agents look beyond immediate actions, evaluating actions based on their
contribution to achieving predefined goals.
Problem-Solving Agents
Problem Space:
The problem space represents all possible states and transitions between states
that the agent might encounter while searching for a solution. It is a conceptual
framework that helps model the structure of a problem.
State Representation:
States in the problem space represent different configurations or situations. At each point during the resolution of a
problem, those elements have specific descriptors (how to select them?) and relations. Effective state
representation is crucial for the agent to understand the current situation and make informed decisions.
Adaptability:
Problem-solving agents should be adaptive to changes in the environment. As the agent encounters new
information or obstacles, it needs to adjust its strategies and actions to achieve its goals.
State modification: successor function
● The successor function is a function that generates the set of successor
states reachable from a given state in a problem space. It defines the possible
actions that can be taken from the current state and the resulting states.
● A successor function is a description of possible actions, a set of operators. It
is a transformation function on a state representation, which converts it into
another state.
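As a concrete sketch, here is a successor function for the familiar two-cell vacuum world. The state encoding and names are illustrative assumptions, not taken from the slides:

```python
def successors(state):
    """Return {action: resulting_state} for a vacuum-world state.

    state = (loc, dirt), where loc is 0 (left cell) or 1 (right cell)
    and dirt[i] is True if cell i is dirty.
    """
    loc, dirt = state
    result = {}
    if loc == 1:
        result["Left"] = (0, dirt)        # move to the left cell
    if loc == 0:
        result["Right"] = (1, dirt)       # move to the right cell
    if dirt[loc]:
        # Suck: clean the current cell, leaving the other unchanged.
        cleaned = tuple(False if i == loc else d for i, d in enumerate(dirt))
        result["Suck"] = (loc, cleaned)
    return result
```

Applying the function to a state yields the set of reachable successor states, exactly as the definition above describes.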
State Space
The state space is the set of all states reachable
from the initial state.
Toy Problems
1. 8-puzzle
2. 8-queens / n-queens
3. Cryptarithmetic
4. Vacuum world
Real-World Problems
5. Traveling salesperson
6. VLSI layout
7. Robot navigation
Example: 8-puzzle
● State space: configuration of the eight
tiles on the board
● Initial state: any configuration
● Goal state: tiles in a specific order
● Operators or actions: “blank moves”
○ Condition: the move is within the board
○ Transformation: blank moves Left, Right,
Up, or Down
● Solution: optimal sequence of
operators
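The "blank moves" operator above can be sketched in Python. The 9-tuple state encoding (0 marking the blank) is an illustrative assumption:

```python
def blank_moves(state):
    """Yield (action, new_state) pairs for every legal blank move."""
    i = state.index(0)                  # position of the blank, 0..8
    row, col = divmod(i, 3)
    candidates = {"Up": (row > 0, -3), "Down": (row < 2, 3),
                  "Left": (col > 0, -1), "Right": (col < 2, 1)}
    for action, (legal, delta) in candidates.items():
        if legal:                       # condition: move stays on the board
            j = i + delta
            s = list(state)
            s[i], s[j] = s[j], s[i]     # transformation: slide tile into blank
            yield action, tuple(s)
```

With the blank in the top-left corner, only Down and Right are generated, matching the "move is within the board" condition.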
Example: n queens (n = 4)
● State space: configurations from 0 to n queens on
the board with only one queen per row and column
● Initial state: configuration without queens on the
board
● Goal state: configuration with n queens such that
no queen attacks any other
● Operators or actions: place a queen on the board
○ Condition: the new queen is not attacked by any other
already placed
○ Transformation: place a new queen in a particular square
of the board
● Solution: one solution (cost is not considered)
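The condition and transformation for the n-queens operator can be sketched as two small functions. The encoding (a partial state is a tuple of column indices, one per already-filled row) is an assumption for illustration:

```python
def not_attacked(state, col):
    """Condition: a queen placed in the next row at `col` is safe."""
    row = len(state)
    return all(c != col and abs(c - col) != abs(r - row)
               for r, c in enumerate(state))

def place_queen(state, col):
    """Transformation: extend the partial placement by one queen."""
    return state + (col,)
```

Because every queen sits in its own row by construction, only column and diagonal attacks need to be checked.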
Searching for Solutions
● In the realm of problem-solving and search algorithms, solutions are
encapsulated as action sequences, and uncovering optimal or near-optimal
solutions involves meticulous exploration of potential action sequences.
● The exploration process unfolds within a structured framework known as a
search tree.
● The search tree begins its growth with the initial state at the root. Each
branching signifies a distinct action, and nodes within branches represent
states in the problem's state space.
Searching for Solutions
● This hierarchical representation elegantly captures the progression from one
state to another, offering a visual and conceptual framework for understanding
the evolution of possible solutions.
● The search tree serves as a navigational guide for search algorithms,
delineating the landscape of potential actions and states.
● This structured approach aids in the efficient exploration of the problem
space, providing a systematic methodology for problem-solving agents to
navigate and refine their strategies.
Searching for Solution
Exploration proceeds by checking each
possible action; this yields further
states to explore until the goal is
reached.
● Starting state: Arda
● Check whether it is the goal state
● Expand: this generates new states
● Action: move to the next node
● Repeat
Infrastructure for Search algorithms
Search algorithms require a data structure to keep track of the search tree that is
being constructed. For each node n of the tree, we have a structure that contains
four components:
● n.STATE: the state in the state space to which the node corresponds;
● n.PARENT: the node in the search tree that generated this node;
● n.ACTION: the action that was applied to the parent to generate the node;
● n.PATH-COST: the cost of the path from the initial state to the node, as
indicated by the parent pointers.
Infrastructure for Search algorithms
function CHILD-NODE(problem, parent, action) returns a node
return a node with
STATE = problem.RESULT(parent.STATE, action),
PARENT = parent, ACTION = action,
PATH-COST = parent.PATH-COST + problem.STEP-COST(parent.STATE, action)
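The CHILD-NODE pseudocode above can be transcribed into Python as a sketch, assuming a problem object that exposes result(state, action) and step_cost(state, action) (method names are illustrative):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """The four-component node structure from the slide."""
    state: Any
    parent: Optional["Node"] = None
    action: Any = None
    path_cost: float = 0.0

def child_node(problem, parent, action):
    """Build the child reached by applying `action` in `parent.state`."""
    state = problem.result(parent.state, action)
    cost = parent.path_cost + problem.step_cost(parent.state, action)
    return Node(state, parent, action, cost)
```

Following parent pointers from any node back to the root reconstructs the action sequence, i.e. the solution path.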
Search Strategies
● A strategy is defined by picking the order of node expansion
● Strategies are evaluated along the following dimensions:
○ Completeness – does it always find a solution if one exists?
○ Time complexity – number of nodes generated/expanded
○ Space complexity – maximum nodes in memory
○ Optimality – does it always find a least-cost solution?
● Time and space complexity are measured in terms of:
○ b – maximum branching factor of the search tree (may be infinite)
○ d – depth of the least-cost solution
○ m – maximum depth of the state space (may be infinite)
Uninformed Search Strategies
● Uninformed search strategies, commonly known as blind search strategies,
are algorithms in artificial intelligence that explore a problem space without
utilizing domain-specific knowledge or heuristics.
● These strategies aim to traverse the search space solely based on its
structure, without additional information about state characteristics or the
likelihood of reaching the goal.
Uninformed Search Strategies
● Uninformed strategies use only the information available in the problem
definition
○ Breadth-first search
○ Uniform-cost search
○ Depth-first search
○ Depth-limited search
○ Iterative deepening search
Breadth-first search
● Breadth-First Search (BFS) is an uninformed search strategy that explores the
search space level by level, expanding all nodes at the current level before
moving on to the next level.
● BFS systematically visits nodes in breadth, ensuring all nodes at the current
level are explored before moving to the next level.
● Example: Consider a map where each node represents a city, and edges
between nodes represent possible routes. BFS would explore all cities at a
certain distance from the starting city before moving to cities farther away.
Breadth-first search
Application:
BFS is commonly applied in scenarios where the goal is to
find the shortest path or explore neighboring nodes
uniformly.
Advantages:
Guarantees the shortest path to the goal in unweighted
graphs.
Well-suited for scenarios where the depth of the solution
is not known in advance.
Challenges:
Memory-intensive, especially in scenarios with a large
branching factor.
Use Case:
BFS can be employed in network routing, social network
analysis, and puzzle-solving scenarios where exploring all
possibilities at a certain depth is essential.
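A compact BFS over an explicit graph (adjacency dict) can be sketched as follows; this is an illustrative implementation, not the textbook pseudocode verbatim:

```python
from collections import deque

def bfs(graph, start, goal):
    """Return a path with the fewest edges from start to goal, or None."""
    if start == goal:
        return [start]
    frontier = deque([start])       # FIFO queue -> level-by-level expansion
    parent = {start: None}          # doubles as the explored set
    while frontier:
        node = frontier.popleft()
        for nbr in graph.get(node, []):
            if nbr not in parent:
                parent[nbr] = node
                if nbr == goal:     # goal test when the node is generated
                    path = [nbr]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                frontier.append(nbr)
    return None
```

The FIFO frontier is what guarantees that all nodes at one depth are expanded before any node at the next.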
Depth-first search
● Depth-First Search (DFS) is an uninformed search strategy that explores the
search space by going as deeply as possible along each branch before
backtracking.
● DFS traverses a path until it reaches the deepest node, then backtracks to
explore alternative paths. This process continues until the goal is found.
● Example: In a maze-solving scenario, DFS might explore a path as far as
possible before backtracking to try alternative paths.
Depth-first search
Application:
● DFS is suitable for scenarios where the goal is to reach a
solution quickly, and the depth of the solution is not a
primary concern.
Advantages:
● Memory-efficient as it does not need to store all nodes at
each level.
● Well-suited for scenarios with a limited branching factor.
Challenges:
● May not find the shortest path to the goal.
● Prone to getting stuck in infinite loops in the presence of
cycles.
Use Case:
● DFS can be applied in puzzles, game strategies, and
scenarios where the goal is to explore deeply to find a
solution.
Uniform-cost search
● Uniform-Cost Search (UCS) is an uninformed search strategy that selects
nodes for expansion based on the cost of reaching them from the initial state,
always expanding the node with the lowest accumulated cost.
● UCS prioritizes nodes with lower accumulated costs, ensuring that the path
chosen has the minimum cost among the explored paths.
● As the search moves deeper into the tree, the path costs from each layer accumulate.
● Example: In a navigation problem where the cost is the distance traveled,
UCS would prioritize paths with lower distances.
Uniform-cost search
Application:
● UCS is valuable in scenarios where the cost of reaching the goal
is a critical factor.
Advantages:
● Guarantees finding the lowest-cost path to the goal.
● Suitable for scenarios where the cost of actions varies.
Challenges:
● May lead to increased exploration in scenarios with
varying action costs.
Use Case:
● UCS is commonly used in navigation systems, resource
allocation, and scenarios where the cost of actions plays a
crucial role.
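UCS can be sketched with a priority queue ordered by accumulated path cost; the weighted-graph encoding {node: [(neighbour, step_cost), ...]} is an illustrative assumption:

```python
import heapq

def ucs(graph, start, goal):
    """Return (total_cost, path) of a cheapest path, or None."""
    frontier = [(0, start, [start])]    # min-heap ordered by path cost
    best = {}                           # cheapest cost seen per node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:                # goal test on expansion -> optimal
            return cost, path
        if node in best and best[node] <= cost:
            continue                    # a cheaper route was already expanded
        best[node] = cost
        for nbr, step in graph.get(node, []):
            heapq.heappush(frontier, (cost + step, nbr, path + [nbr]))
    return None
```

Note that the goal test happens when a node is popped, not when it is generated; testing earlier could return a suboptimal path.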
Depth-limited search
● Depth-Limited Search (DLS) is a variant of DFS with a depth limit, preventing
infinite exploration in case of cycles by limiting the depth of the search.
● DLS explores paths in a similar fashion to DFS but imposes a depth limit,
restricting the exploration depth to prevent potential infinite loops.
● Example: DLS might be used in a chess game, limiting the depth of possible
moves to a certain number of moves ahead.
Depth-limited search
Application:
● DLS is applied in scenarios where the depth of the solution is
known or should be limited to avoid excessive exploration.
Advantages:
● Prevents infinite exploration in cyclic environments.
● Efficient for scenarios where an exact depth of solution is
desired.
Challenges:
● May not find a solution even if one exists within the depth
limit.
Use Case:
● DLS is commonly used in game strategies, puzzle-solving,
and scenarios where limiting exploration depth is crucial.
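A recursive depth-limited search sketch is below. Following the textbook convention, it distinguishes failure (no solution anywhere) from a cutoff (the limit was hit, so a solution may exist deeper); the names are illustrative:

```python
def dls(graph, node, goal, limit, path=None):
    """Return a path, None for failure, or "cutoff" if the limit was hit."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return "cutoff"                 # cannot explore past this depth
    cutoff = False
    for nbr in graph.get(node, []):
        result = dls(graph, nbr, goal, limit - 1, path)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return result
    return "cutoff" if cutoff else None
```

The three-way return value is what lets a caller tell "no solution exists" apart from "no solution within this depth limit".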
Iterative deepening search
● Iterative Deepening Depth-First Search (IDDFS) is a search strategy that
combines the benefits of BFS and DFS by performing DFS with increasing
depth limits in successive iterations until the solution is found.
● IDDFS starts with a depth limit of 1, gradually increasing the depth limit in
subsequent iterations until a solution is discovered.
● Example: IDDFS could be applied to puzzle-solving, gradually increasing the
depth of exploration until a solution is found.
Iterative deepening search
Application:
● IDDFS is applied in scenarios where the optimal depth of the
solution is unknown, and the advantages of both BFS and DFS
are desired.
Advantages:
● Guarantees finding the shallowest solution.
● Retains memory efficiency compared to BFS.
Challenges:
● May explore some paths multiple times, leading to
redundancy.
Use Case:
● IDDFS is commonly used in scenarios where the optimal
solution depth is unknown or to balance memory efficiency
with exploration depth.
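Iterative deepening can be sketched as repeated depth-limited DFS with a growing limit; the adjacency-dict encoding and the max_depth cap are illustrative assumptions:

```python
def iddfs(graph, start, goal, max_depth=50):
    """Return the shallowest path from start to goal, or None."""
    def dls(node, limit, path):
        if node == goal:
            return path + [node]
        if limit == 0:
            return None
        for nbr in graph.get(node, []):
            found = dls(nbr, limit - 1, path + [node])
            if found:
                return found
        return None

    # Re-run DFS with limits 0, 1, 2, ... so the first hit is shallowest.
    for limit in range(max_depth + 1):
        found = dls(start, limit, [])
        if found:
            return found
    return None
```

Re-expanding shallow nodes on every iteration is the redundancy mentioned above, but since most nodes sit in the deepest level, the overhead is modest.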
Avoiding Repeated States
● Avoiding repeated states is a critical concept in search algorithms, particularly
in state-space search, aiming to enhance efficiency and prevent unnecessary
redundancy in exploration.
● In state-space search problems, the search algorithm explores a graph or tree
where nodes represent states, and edges represent transitions between
states.
Avoiding Repeated States
Challenge:
● Without precautions, a search algorithm may encounter the same state multiple times due to the branching structure of the
problem space or cycles in state transitions.
Mechanism:
● Search algorithms incorporate mechanisms to avoid revisiting states, typically by maintaining a data structure (e.g., set or
hash table) to track explored states.
Data Structure:
● The data structure stores unique identifiers or representations of states, preventing redundant exploration.
Purpose:
● The primary purpose is to reduce redundant exploration, enhancing computational efficiency and preventing infinite loops.
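The mechanism described above fits in a small graph-search skeleton: an explored set (here a Python set, so states must be hashable) filters out repeated states before they re-enter the frontier. The function names are illustrative:

```python
from collections import deque

def graph_search(start, goal_test, successors):
    """BFS-style graph search; returns (goal_state, expansion_count)."""
    frontier = deque([start])
    explored = {start}              # every state ever generated
    expansions = 0
    while frontier:
        state = frontier.popleft()
        expansions += 1
        if goal_test(state):
            return state, expansions
        for s in successors(state):
            if s not in explored:   # skip repeated states
                explored.add(s)
                frontier.append(s)
    return None, expansions
```

On a state space with cycles (such as the modular counter below), the explored set is what keeps the search finite.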
Searching with Partial Information
● Searching with partial information refers to the scenario where an intelligent
agent lacks complete knowledge about the current state or the future states in
the environment it is navigating. In these situations, the agent may have to
make decisions and take actions without having access to all relevant
information. This concept is particularly relevant in real-world applications
where uncertainty and incomplete knowledge are inherent.
Searching with Partial Information
● The environment is only partially observable if the agent cannot directly
perceive or obtain complete information about the current state. The agent
may have limited or incomplete sensor data.
● To handle partial information, the agent often maintains a belief state, which is
a probability distribution over possible states given the available observations.
The belief state represents the agent's subjective view of the world.
● Searching with partial information introduces challenges such as the need for
robust decision-making, handling uncertainty, and updating the belief state
accurately based on new observations.
Informed Search and Exploration
● Informed (Heuristic) search strategies incorporate domain-specific knowledge,
represented by heuristics, to guide the search process more effectively.
Heuristics are rules or guidelines that estimate the cost or desirability of
reaching a goal from a given state.
● There are different types of informed search algorithms
○ Best-First Search
○ A* Search
○ Memory-bounded heuristic search
Greedy best-first search
● Greedy best-first search tries to expand the node that is closest to the goal,
on the grounds that this is likely to lead to a solution quickly. Thus, it
evaluates nodes using just the heuristic function: f(n) = h(n).
● It prioritizes nodes that appear to be the most promising in terms of reaching
the goal quickly, without considering the actual cost incurred so far.
Greedy best-first search
1. Initialization:
○ The algorithm starts with the initial state as the current node and initializes
an open list to store nodes that need to be expanded.
2. Evaluation Function:
○ Greedy Best-First Search uses an evaluation function that depends only on
the heuristic estimate of the cost from the current node to the goal.
3. Node Expansion:
○ At each iteration, the algorithm selects the node from the open list with the
lowest heuristic estimate h(n). This node is then expanded, and its successors
are generated.
4. Successor Evaluation:
○ For each successor, the algorithm calculates and stores its heuristic
estimate. These successors are added to the open list.
5. Goal Test:
○ If the expanded node is the goal state, the algorithm terminates, and the
solution is reconstructed by backtracking from the goal to the initial state.
6. Repeat:
○ Steps 3 to 5 are repeated until the goal state is reached or until the open list
is empty, indicating that no more nodes can be expanded.
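Steps 1 to 6 above can be sketched as follows, with the open list as a heap ordered by h(n) alone; the graph and heuristic encodings are illustrative assumptions:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand nodes in order of heuristic estimate h(n) only."""
    frontier = [(h(start), start, [start])]   # open list: (h, node, path)
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:                      # goal test
            return path
        if node in explored:
            continue
        explored.add(node)
        for nbr in graph.get(node, []):       # node expansion
            if nbr not in explored:
                heapq.heappush(frontier, (h(nbr), nbr, path + [nbr]))
    return None
```

In the test graph below, greedy search follows the smaller h-values through C even though another route exists, illustrating that it ignores accumulated cost.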
A* Search
● A* is an informed search algorithm that combines the principles of Best-First
Search and Uniform-Cost Search. It is widely used in artificial intelligence and
robotics for pathfinding, graph traversal, and optimization problems.
● A* combines the advantages of Best-First Search, which uses heuristic
estimates, and Uniform-Cost Search, which considers the actual cost of
reaching a node.
● A* is particularly effective when there is a need for finding the shortest path
from a start node to a goal node in a graph or state space.
A* Search
1. Initialization:
○ The algorithm starts with the initial state as the current node and initializes an open list to store nodes that need to be expanded. Each node
in the open list is associated with a cost value.
2. Evaluation Function:
○ A* uses an evaluation function f(n) = g(n) + h(n) that considers both the actual cost g(n) to reach the current node from the start node and a heuristic
estimate h(n) of the cost from the current node to the goal
3. Node Expansion:
○ At each iteration, the algorithm selects the node from the open list with the lowest f-value. This node is then expanded, and its successors are
generated.
4. Successor Evaluation:
○ For each successor, the algorithm calculates the actual cost to reach the successor from the start node and the heuristic estimate. The total
cost for each successor is then computed.
5. Updating Costs and Adding to Open List:
○ If the successor is not in the open list or has a lower total cost than the current recorded cost, the algorithm updates the cost values and
adds the successor to the open list.
6. Goal Test:
○ If the expanded node is the goal state, the algorithm terminates, and the solution is reconstructed by backtracking from the goal to the initial
state.
7. Repeat:
○ Steps 3 to 6 are repeated until the goal state is reached or until the open list is empty, indicating that no more nodes can be expanded.
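The steps above can be sketched compactly with a heap ordered by f(n) = g(n) + h(n); the weighted-graph encoding and the assumption that h is admissible are illustrative:

```python
import heapq

def astar(graph, h, start, goal):
    """Return (cost, path) of an optimal path, assuming admissible h."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {}                                  # cheapest g found per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:                         # goal test on expansion
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                             # step 5: keep only cheaper
        best_g[node] = g
        for nbr, step in graph.get(node, []):
            g2 = g + step
            heapq.heappush(frontier, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None
```

Keeping the best-known g per node plays the role of step 5 (updating costs) without an explicit decrease-key operation on the heap.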
Memory-bounded heuristic search
● Memory-bounded heuristic search refers to a category of search algorithms
designed to operate within specified memory constraints. These algorithms
are tailored to limit the amount of memory they use during the search process
while still incorporating heuristic information to guide the exploration of the
solution space. The goal is to find a balance between efficient memory usage
and effective heuristic-guided search.
Memory-bounded heuristic search
Iterative-deepening A*:
● The main difference between IDA∗ and standard iterative deepening is that the cutoff used is
the f-cost (g +h) rather than the depth; at each iteration, the cutoff value is the smallest f-cost
of any node that exceeded the cutoff on the previous iteration
● The evaluation function in IDA* is f(n) = g(n) + h(n), where:
g(n) = the actual cost from the initial node to the current node.
h(n) = the heuristic estimate of the cost from the current node to the goal state,
based on an approximation drawn from the problem's characteristics.
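A sketch of IDA* is below: depth-first search cut off on f = g + h, with the next bound taken as the smallest f-value that exceeded the current one, exactly as described above. The graph encoding is an illustrative assumption:

```python
def ida_star(graph, h, start, goal):
    """IDA* over {node: [(nbr, step_cost), ...]}; returns a path or None."""
    def search(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f, None              # report the exceeding f-value
        if node == goal:
            return f, path
        minimum = float("inf")
        for nbr, step in graph.get(node, []):
            if nbr in path:             # avoid cycles on the current path
                continue
            t, found = search(nbr, g + step, bound, path + [nbr])
            if found:
                return t, found
            minimum = min(minimum, t)   # smallest f that broke the cutoff
        return minimum, None

    bound = h(start)                    # initial cutoff
    while True:
        bound, found = search(start, 0, bound, [start])
        if found:
            return found
        if bound == float("inf"):
            return None                 # no f-value exceeded: exhausted
```

Because it only stores the current path, IDA* keeps A*'s optimality (given an admissible h) within linear memory.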
Example: map-coloring CSP
A complete assignment such as
{WA=red, NT=green, Q=red, NSW=green, V=red,
SA=blue, T=green}
satisfies every constraint of the Australia map-coloring problem.
Backtracking
● The term backtracking search is used for a depth-first search that chooses
values for one variable at a time and backtracks when a variable has no legal
values left to assign.
● It repeatedly chooses an unassigned variable, and then tries all values in the
domain of that variable in turn, trying to find a solution.
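The procedure above can be sketched for a map-coloring-style CSP; the input encoding (a neighbors dict as the constraint graph) is an illustrative assumption:

```python
def backtracking(variables, domains, neighbors, assignment=None):
    """Depth-first assignment of one variable at a time, with backtracking."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment               # every variable assigned: solution
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Legal iff no already-assigned neighbour holds the same value.
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtracking(variables, domains, neighbors, assignment)
            if result:
                return result
            del assignment[var]         # no legal values deeper: backtrack
    return None                         # all values tried and failed
```

On a triangle of mutually-adjacent variables, three colors suffice but two do not, which the tests below confirm.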
Local search for CSPs
● Use complete-state representation
○ Initial state = all variables assigned values
○ Successor states = change 1 (or more) values
● For CSPs
○ allow states with unsatisfied constraints; operators reassign variable values
○ hill-climbing with n-queens is an example
● Variable selection: randomly select any variable
● Value selection: min-conflicts heuristic
○ Select new value that results in a minimum number of conflicts with the other variables
Local search for CSPs
● In the context of local search for CSPs, the algorithm iteratively explores the
solution space, making local changes to improve the current assignment of
values to variables. The focus is on improving the current solution rather than
exhaustively searching the entire space.
Local search for CSPs
● Current State:
○ Start with an initial assignment of values to variables. This assignment may or may not satisfy all constraints.
● Objective Function:
○ Define an objective function that measures the quality of the current assignment. The objective function reflects how well the current
assignment satisfies the constraints.
● Local Changes:
○ Make small, local changes to the current assignment. These changes involve modifying the values of one or more variables while
attempting to improve the overall solution.
● Feasibility:
○ Ensure that the local changes maintain the feasibility of the assignment; that is, the constraints are not violated by the changes.
● Evaluation:
○ Evaluate the new assignment using the objective function. If the objective function indicates an improvement, keep the new
assignment. Otherwise, revert to the previous assignment.
● Termination Criteria:
○ Repeat steps 3-5 until a satisfactory solution is found or a specified termination criterion is met (e.g., a maximum number of iterations).
● Output:
○ The final assignment is considered a solution to the CSP.
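The min-conflicts heuristic from the previous slide, applied to n-queens in the complete-state formulation (state[i] is the column of the queen in row i), can be sketched as follows; parameter names and the step cap are illustrative:

```python
import random

def conflicts(state, row, col):
    """Number of queens attacking a queen at (row, col)."""
    return sum(1 for r, c in enumerate(state) if r != row and
               (c == col or abs(c - col) == abs(r - row)))

def min_conflicts(n, max_steps=10000, seed=0):
    """Local search: repeatedly repair one conflicted queen."""
    rng = random.Random(seed)
    state = [rng.randrange(n) for _ in range(n)]   # full random assignment
    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(state, r, state[r])]
        if not conflicted:
            return state                 # no queen attacked: solution
        row = rng.choice(conflicted)     # variable selection: random
        # Value selection: column minimizing conflicts for that row.
        state[row] = min(range(n), key=lambda c: conflicts(state, row, c))
    return None                          # may fail within the step budget
```

Note the local-search flavor: the algorithm never backtracks; it just keeps improving (or sideways-moving) the current complete assignment.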
Tree-Structured CSPs
● In a tree-structured CSP, the constraints are hierarchically arranged, and the
solution space can be explored systematically following the structure of the
tree. This type of organization simplifies the representation and solution of the
problem.
Tree-Structured CSPs
● The hierarchical structure allows for the decomposition of the problem into
smaller subproblems. Each level of the tree may represent a different aspect
or perspective of the problem, and solving the entire problem involves solving
the subproblems bottom-up or top-down.
● Theorem:
○ if the constraint graph has no loops, the CSP can be solved in O(n d^2) time
○ linear in the number of variables!
● Compare with the general CSP, where the worst case is O(d^n)
Non-Tree CSPs
● In a non-tree CSP, the relationships among constraints are not strictly
hierarchical, and the solution space may involve more complex dependencies
and interactions among variables.
● Unlike tree-structured CSPs, non-tree CSPs may contain loops or cycles in
the constraint graph. In a constraint graph, nodes represent variables, and
edges represent constraints. The presence of cycles implies circular
relationships among variables.
Non-Tree CSPs
● The general idea is to convert the constraint graph to a tree (e.g., by removing nodes via cutset conditioning, or by collapsing nodes together via tree decomposition)