DEFENCE UNIVERSITY

COLLEGE OF ENGINEERING
DEPARTMENT OF COMPUTER SCIENCE AND
ENGINEERING

ARTIFICIAL INTELLIGENCE
Chapter 2
Solving Problem by searching
By: Bereket M.
Problem-Solving Agents
● In the realm of artificial intelligence, problem-solving agents are intelligent
entities designed to perceive their environment, interpret information, and
strategically take actions to achieve predefined goals.
● Problem Space: Agents navigate through a problem space, where states
represent different configurations, and actions lead to transitions between
states.
Problem-Solving Agents
● Reflex agents may face challenges in environments where mappings are
impractical to store or take too long to learn. Unlike reflex agents, goal-based
agents look beyond immediate actions, evaluating actions based on their
contribution to achieving predefined goals.
Problem-Solving Agents
Problem Space:

The problem space represents all possible states and transitions between states
that the agent might encounter while searching for a solution. It is a conceptual
framework that helps model the structure of a problem.

Goals and Objectives:

Problem-solving agents operate with specific goals or objectives in mind. These
goals define the desired states that the agent aims to achieve.

Search Algorithms:

Problem-solving often involves exploring the problem space systematically.


Problem-Solving Agents
State Representation:

States in the problem space represent different configurations or situations. At each point during the resolution of a
problem, these elements have specific descriptors (how should they be selected?) and relations. Effective state
representation is crucial for the agent to understand the current situation and make informed decisions.

Adaptability:

Problem-solving agents should be adaptive to changes in the environment. As the agent encounters new
information or obstacles, it needs to adjust its strategies and actions to achieve its goals.
State modification: successor function
● The successor function is a function that generates the set of successor
states reachable from a given state in a problem space. It defines the possible
actions that can be taken from the current state and the resulting states.
● A successor function is a description of possible actions: a set of operators. It
is a transformation function on a state representation that converts it into
another state.
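
For illustration, a minimal Python sketch of a successor function over an explicitly defined state space (the graph, state names, and actions here are hypothetical):

# A hypothetical state space given as an explicit graph:
# each state maps to {action: resulting state}.
STATE_SPACE = {
    "A": {"go-B": "B", "go-C": "C"},
    "B": {"go-D": "D"},
    "C": {"go-D": "D"},
    "D": {},
}

def successors(state):
    # Return the (action, successor state) pairs reachable from state.
    return list(STATE_SPACE[state].items())

print(successors("A"))   # [('go-B', 'B'), ('go-C', 'C')]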
State Space
The state space is the set of all states reachable from the initial state.

It forms a graph or map in which the nodes are states and the arcs between nodes are actions.

A path in the state space is a sequence of states connected by a sequence of actions.

The solution of the problem is part of the map formed by the state space.
Problem solution
● A solution in the state space is a path from the initial state to a goal state or,
sometimes, just a goal state.
● Path/solution cost: function that assigns a numeric cost to each path, the cost
of applying the operators to the states
● Solution quality is measured by the path cost function, and an optimal solution
has the lowest path cost among all solutions.
Problem description
Components:
● State space (explicitly or implicitly defined)
● Initial state
● Goal state (or the conditions it has to fulfill)
● Available actions (operators to change state)
● Restrictions (e.g., cost)
● Elements of the domain which are relevant to the problem (e.g.,
incomplete knowledge of the starting point)
Example Problems
Toy Problems:

1. 8-puzzle
2. 8-queens/n-queens
3. Cryptarithmetic
4. Vacuum world

Real-World Problems:

5. Traveling salesperson
6. VLSI layout
7. Robot navigation
Example: 8-puzzle
● State space: configuration of the eight
tiles on the board
● Initial state: any configuration
● Goal state: tiles in a specific order
● Operators or actions: “blank moves”
○ Condition: the move is within the board
○ Transformation: blank moves Left, Right,
Up, or Down
● Solution: optimal sequence of
operators
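
A possible Python sketch of the 8-puzzle "blank moves" successor function, assuming a state is a 9-tuple in row-major order with 0 marking the blank (this representation is an assumption, not from the slides):

def blank_moves(state):
    # Generate (action, successor) pairs by sliding the blank (0).
    i = state.index(0)                 # position of the blank
    row, col = divmod(i, 3)
    deltas = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}
    for action, delta in deltas.items():
        # Condition: the move must stay within the board.
        if (action == "Left" and col == 0) or (action == "Right" and col == 2):
            continue
        if (action == "Up" and row == 0) or (action == "Down" and row == 2):
            continue
        j = i + delta
        s = list(state)
        s[i], s[j] = s[j], s[i]        # transformation: swap blank with a tile
        yield action, tuple(s)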
Example: n queens (n = 4)
● State space: configurations from 0 to n queens on
the board with only one queen per row and column
● Initial state: configuration without queens on the
board
● Goal state: configuration with n queens such that
no queen attacks any other
● Operators or actions: place a queen on the board
○ Condition: the new queen is not attacked by any other
already placed
○ Transformation: place a new queen in a particular square
of the board
● Solution: one solution (cost is not considered)
Searching for Solutions
● In the realm of problem-solving and search algorithms, solutions are
encapsulated as action sequences, and uncovering optimal or near-optimal
solutions involves meticulous exploration of potential action sequences.
● The exploration process unfolds within a structured framework known as a
search tree.
● The search tree begins its growth with the initial state at the root. Each
branching signifies a distinct action, and nodes within branches represent
states in the problem's state space.
Searching for Solutions
● This hierarchical representation elegantly captures the progression from one
state to another, offering a visual and conceptual framework for understanding
the evolution of possible solutions.
● The search tree serves as a navigational guide for search algorithms,
delineating the landscape of potential actions and states.
● This structured approach aids in the efficient exploration of the problem
space, providing a systematic methodology for problem-solving agents to
navigate and refine their strategies.
Searching for Solutions
Exploration proceeds by checking each
available action; each action yields further
states to explore until the goal is
reached.
● Starting state: Arda
● Check whether it is the goal state
● Expand: this generates new states
● Action: move to the next node
● Repeat
Infrastructure for Search algorithms
Search algorithms require a data structure to keep track of the search tree that is
being constructed. For each node n of the tree, we have a structure that contains
four components:
● n.STATE: the state in the state space to which the node corresponds;
● n.PARENT: the node in the search tree that generated this node;
● n.ACTION: the action that was applied to the parent to generate the node;
● n.PATH-COST: the cost of the path from the initial state to the node, as
indicated by the parent pointers.
Infrastructure for Search algorithms
function CHILD-NODE(problem, parent, action) returns a node
  return a node with
    STATE = problem.RESULT(parent.STATE, action),
    PARENT = parent, ACTION = action,
    PATH-COST = parent.PATH-COST + problem.STEP-COST(parent.STATE, action)
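
A possible Python rendering of this node structure and CHILD-NODE, assuming the problem object exposes result and step_cost methods mirroring the pseudocode:

class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state            # n.STATE
        self.parent = parent          # n.PARENT
        self.action = action          # n.ACTION
        self.path_cost = path_cost    # n.PATH-COST

def child_node(problem, parent, action):
    state = problem.result(parent.state, action)
    cost = parent.path_cost + problem.step_cost(parent.state, action)
    return Node(state, parent, action, cost)

def solution(node):
    # Follow parent pointers back to recover the action sequence.
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))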
Search Strategies
● A strategy is defined by picking the order of node expansion
● Strategies are evaluated along the following dimensions:
○ Completeness – does it always find a solution if one exists?
○ Time complexity – number of nodes generated/expanded
○ Space complexity – maximum nodes in memory
○ Optimality – does it always find a least-cost solution?
● Time and space complexity are measured in terms of:
○ b – maximum branching factor of the search tree (may be infinite)
○ d – depth of the least-cost solution
○ m – maximum depth of the state space (may be infinite)
Uninformed Search Strategies
● Uninformed search strategies, commonly known as blind search strategies,
are algorithms in artificial intelligence that explore a problem space without
utilizing domain-specific knowledge or heuristics.
● These strategies aim to traverse the search space solely based on its
structure, without additional information about state characteristics or the
likelihood of reaching the goal.
Uninformed Search Strategies
● Uninformed strategies use only the information available in the problem
definition
○ Breadth-first search
○ Uniform-cost search
○ Depth-first search
○ Depth-limited search
○ Iterative deepening search
Breadth-first search
● Breadth-First Search (BFS) is an uninformed search strategy that explores the
search space level by level, expanding all nodes at the current level before
moving on to the next level.
● BFS systematically visits nodes in breadth, ensuring all nodes at the current
level are explored before moving to the next level.
● Example: Consider a map where each node represents a city, and edges
between nodes represent possible routes. BFS would explore all cities at a
certain distance from the starting city before moving to cities farther away.
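
A minimal Python sketch of BFS over a graph; the neighbors(state) interface is an assumption for illustration:

from collections import deque

def bfs(start, goal, neighbors):
    frontier = deque([start])          # FIFO queue: shallowest nodes first
    parents = {start: None}            # doubles as the explored set
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []                  # reconstruct the path via parent links
            while state is not None:
                path.append(state)
                state = parents[state]
            return list(reversed(path))
        for s in neighbors(state):
            if s not in parents:       # avoid revisiting states
                parents[s] = state
                frontier.append(s)
    return None                        # no solution found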
Breadth-first search
Application:
BFS is commonly applied in scenarios where the goal is to
find the shortest path or explore neighboring nodes
uniformly.
Advantages:
Guarantees the shortest path to the goal in unweighted
graphs.
Well-suited for scenarios where the depth of the solution
is not known in advance.
Challenges:
Memory-intensive, especially in scenarios with a large
branching factor.
Use Case:
BFS can be employed in network routing, social network
analysis, and puzzle-solving scenarios where exploring all
possibilities at a certain depth is essential.
Depth-first search
● Depth-First Search (DFS) is an uninformed search strategy that explores the
search space by going as deeply as possible along each branch before
backtracking.
● DFS traverses a path until it reaches the deepest node, then backtracks to
explore alternative paths. This process continues until the goal is found.
● Example: In a maze-solving scenario, DFS might explore a path as far as
possible before backtracking to try alternative paths.
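
A corresponding DFS sketch under the same assumed neighbors(state) interface; an explicit stack replaces recursion:

def dfs(start, goal, neighbors):
    stack = [(start, [start])]         # LIFO stack: deepest nodes first
    visited = {start}
    while stack:
        state, path = stack.pop()
        if state == goal:
            return path
        for s in neighbors(state):
            if s not in visited:       # guard against cycles
                visited.add(s)
                stack.append((s, path + [s]))
    return None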
Depth-first search
Application:
● DFS is suitable for scenarios where the goal is to reach a
solution quickly, and the depth of the solution is not a
primary concern.
Advantages:
● Memory-efficient as it does not need to store all nodes at
each level.
● Well-suited for scenarios with a limited branching factor.
Challenges:
● May not find the shortest path to the goal.
● Prone to getting stuck in infinite loops in the presence of
cycles.
Use Case:
● DFS can be applied in puzzles, game strategies, and
scenarios where the goal is to explore deeply to find a
solution.
Uniform-cost search
● Uniform-Cost Search (UCS) is an uninformed search strategy that selects
nodes for expansion based on the cost of reaching them from the initial state,
always expanding the node with the lowest accumulated cost.
● UCS prioritizes nodes with lower accumulated costs, ensuring that the path
chosen has the minimum cost among the explored paths.
● As the search moves deeper into the tree, the costs from each layer accumulate.
● Example: In a navigation problem where the cost is the distance traveled,
UCS would prioritize paths with lower distances.
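
A UCS sketch with a priority queue; here neighbors(state) is assumed to yield (successor, step_cost) pairs:

import heapq, itertools

def ucs(start, goal, neighbors):
    counter = itertools.count()        # tie-breaker so states are never compared
    frontier = [(0, next(counter), start, [start])]
    best = {start: 0}                  # cheapest known cost to each state
    while frontier:
        cost, _, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path          # lowest accumulated cost wins
        if cost > best.get(state, float("inf")):
            continue                   # stale entry superseded by a cheaper one
        for s, step in neighbors(state):
            new_cost = cost + step
            if new_cost < best.get(s, float("inf")):
                best[s] = new_cost
                heapq.heappush(frontier, (new_cost, next(counter), s, path + [s]))
    return None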
Uniform-cost search
Application:
● UCS is valuable in scenarios where the cost of reaching the goal
is a critical factor.
Advantages:
● Guarantees finding the lowest-cost path to the goal.
● Suitable for scenarios where the cost of actions varies.
Challenges:
● May lead to increased exploration in scenarios with
G1
varying action costs.
Use Case: G1
● UCS is commonly used in navigation systems, resource
allocation, and scenarios where the cost of actions plays a
crucial role.
Depth-limited search
● Depth-Limited Search (DLS) is a variant of DFS with a depth limit, preventing
infinite exploration in case of cycles by limiting the depth of the search.
● DLS explores paths in a similar fashion to DFS but imposes a depth limit,
restricting the exploration depth to prevent potential infinite loops.
● Example: DLS might be used in a chess game, limiting the depth of possible
moves to a certain number of moves ahead.
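
A recursive DLS sketch, distinguishing a genuine failure from a depth cutoff (interfaces as in the earlier sketches):

def dls(state, goal, neighbors, limit):
    if state == goal:
        return [state]
    if limit == 0:
        return "cutoff"                # depth limit reached
    cutoff = False
    for s in neighbors(state):
        result = dls(s, goal, neighbors, limit - 1)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return [state] + result
    return "cutoff" if cutoff else None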
Depth-limited search
Application:
● DLS is applied in scenarios where the depth of the solution is
known or should be limited to avoid excessive exploration.
Advantages:
● Prevents infinite exploration in cyclic environments.
● Efficient for scenarios where an exact depth of solution is
desired.
Challenges:
● May fail to find an existing solution when it lies beyond the depth
limit.
Use Case:
● DLS is commonly used in game strategies, puzzle-solving,
and scenarios where limiting exploration depth is crucial.
Iterative deepening search
● Iterative Deepening Depth-First Search (IDDFS) is a search strategy that
combines the benefits of BFS and DFS by performing DFS with increasing
depth limits in successive iterations until the solution is found.
● IDDFS starts with a depth limit of 1, gradually increasing the depth limit in
subsequent iterations until a solution is discovered.
● Example: IDDFS could be applied to puzzle-solving, gradually increasing the
depth of exploration until a solution is found.
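
IDDFS then reduces to a loop over increasing limits, reusing the dls sketch shown earlier:

def iddfs(start, goal, neighbors, max_depth=50):
    for limit in range(max_depth + 1):
        result = dls(start, goal, neighbors, limit)
        if result != "cutoff":
            return result              # a path, or None if the space was exhausted
    return None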
Iterative deepening search
Application:
● IDDFS is applied in scenarios where the optimal depth of the
solution is unknown, and the advantages of both BFS and DFS
are desired.
Advantages:
● Guarantees finding the shallowest solution.
● Retains memory efficiency compared to BFS.
Challenges:
● May explore some paths multiple times, leading to
redundancy.
Use Case:
● IDDFS is commonly used in scenarios where the optimal
solution depth is unknown or to balance memory efficiency
with exploration depth.
Avoiding Repeated States
● Avoiding repeated states is a critical concept in search algorithms, particularly
in state-space search, aiming to enhance efficiency and prevent unnecessary
redundancy in exploration.
● In state-space search problems, the search algorithm explores a graph or tree
where nodes represent states, and edges represent transitions between
states.
Avoiding Repeated States
Challenge:

● Without precautions, a search algorithm may encounter the same state multiple times due to the branching structure of the
problem space or cycles in state transitions.

Mechanism:

● Search algorithms incorporate mechanisms to avoid revisiting states, typically by maintaining a data structure (e.g., set or
hash table) to track explored states.

Data Structure:

● The data structure stores unique identifiers or representations of states, preventing redundant exploration.

Purpose:

● The primary purpose is to reduce redundant exploration, enhancing computational efficiency and preventing infinite loops.
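
A sketch of this graph-search pattern; the explored set assumes states are hashable:

def graph_search(start, goal, neighbors):
    frontier = [start]
    explored = set()                   # hash-based record of visited states
    while frontier:
        state = frontier.pop()
        if state == goal:
            return True
        explored.add(state)
        for s in neighbors(state):
            if s not in explored and s not in frontier:
                frontier.append(s)     # each state is enqueued at most once
    return False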
Searching with Partial Information
● Searching with partial information refers to the scenario where an intelligent
agent lacks complete knowledge about the current state or the future states in
the environment it is navigating. In these situations, the agent may have to
make decisions and take actions without having access to all relevant
information. This concept is particularly relevant in real-world applications
where uncertainty and incomplete knowledge are inherent.
Searching with Partial Information
● The environment is only partially observable if the agent cannot directly
perceive or obtain complete information about the current state. The agent
may have limited or incomplete sensor data.
● To handle partial information, the agent often maintains a belief state, which is
a probability distribution over possible states given the available observations.
The belief state represents the agent's subjective view of the world.
● Searching with partial information introduces challenges such as the need for
robust decision-making, handling uncertainty, and updating the belief state
accurately based on new observations.
Informed Search and Exploration
● Informed (Heuristic) search strategies incorporate domain-specific knowledge,
represented by heuristics, to guide the search process more effectively.
Heuristics are rules or guidelines that estimate the cost or desirability of
reaching a goal from a given state.
● There are different types of informed search algorithms
○ Best-First Search
○ A* Search
○ Memory-bounded heuristic search
Greedy best-first search
● Greedy best-first search tries to expand the node that is closest to the goal,
on the grounds that this is likely to lead to a solution quickly. Thus, it
evaluates nodes by using just the heuristic function
● It prioritizes nodes that appear to be the most promising in terms of reaching
the goal quickly, without considering the actual cost incurred so far
Greedy best-first search
1. Initialization:
○ The algorithm starts with the initial state as the current node and initializes
an open list to store nodes that need to be expanded.
2. Evaluation Function:
○ Greedy Best-First Search uses an evaluation function that depends only on
the heuristic estimate of the cost from the current node to the goal.
3. Node Expansion:
○ At each iteration, the algorithm selects the node from the open list with the
lowest heuristic estimate. This node is then expanded, and its successors
are generated.
4. Successor Evaluation:
○ For each successor, the algorithm calculates and stores its heuristic
estimate. These successors are added to the open list.
5. Goal Test:
○ If the expanded node is the goal state, the algorithm terminates, and the
solution is reconstructed by backtracking from the goal to the initial state.
6. Repeat:
○ Steps 3 to 5 are repeated until the goal state is reached or until the open list
is empty, indicating that no more nodes can be expanded.
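
The loop above might be sketched in Python as follows, with an assumed heuristic h(state):

import heapq, itertools

def greedy_best_first(start, goal, neighbors, h):
    counter = itertools.count()        # tie-breaker for equal h-values
    open_list = [(h(start), next(counter), start, [start])]
    visited = {start}
    while open_list:                   # step 6: repeat until the open list is empty
        _, _, state, path = heapq.heappop(open_list)   # step 3: lowest h first
        if state == goal:              # step 5: goal test
            return path
        for s in neighbors(state):     # steps 3-4: expand and evaluate successors
            if s not in visited:
                visited.add(s)
                heapq.heappush(open_list, (h(s), next(counter), s, path + [s]))
    return None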
A* Search
● A* is an informed search algorithm that combines the principles of Best-First
Search and Uniform-Cost Search. It is widely used in artificial intelligence and
robotics for pathfinding, graph traversal, and optimization problems.
● A* combines the advantages of Best-First Search, which uses heuristic
estimates, and Uniform-Cost Search, which considers the actual cost of
reaching a node.
● A* is particularly effective when there is a need for finding the shortest path
from a start node to a goal node in a graph or state space.
A* Search
1. Initialization:
○ The algorithm starts with the initial state as the current node and initializes an open list to store nodes that need to be expanded. Each node
in the open list is associated with a cost value.
2. Evaluation Function:
○ A* uses an evaluation function f(n) = g(n) + h(n) that considers both the cost g(n) to reach the current node from the start node and a
heuristic estimate h(n) of the cost from the current node to the goal.
3. Node Expansion:
○ At each iteration, the algorithm selects the node from the open list with the lowest f-value. This node is then expanded, and its successors
are generated.
4. Successor Evaluation:
○ For each successor, the algorithm calculates the actual cost to reach the successor from the start node and the heuristic estimate. The total
cost for each successor is then computed.
5. Updating Costs and Adding to Open List:
○ If the successor is not in the open list or has a lower total cost than the current recorded cost, the algorithm updates the cost values and
adds the successor to the open list.
6. Goal Test:
○ If the expanded node is the goal state, the algorithm terminates, and the solution is reconstructed by backtracking from the goal to the initial
state.
7. Repeat:
○ Steps 3 to 6 are repeated until the goal state is reached or until the open list is empty, indicating that no more nodes can be expanded.
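
A compact A* sketch along the lines of the steps above; neighbors(state) is assumed to yield (successor, step_cost) pairs:

import heapq, itertools

def a_star(start, goal, neighbors, h):
    counter = itertools.count()
    open_list = [(h(start), next(counter), 0, start, [start])]   # (f, tie, g, state, path)
    best_g = {start: 0}
    while open_list:
        f, _, g, state, path = heapq.heappop(open_list)  # lowest f = g + h first
        if state == goal:
            return g, path
        for s, step in neighbors(state):
            g2 = g + step                          # actual cost from the start
            if g2 < best_g.get(s, float("inf")):   # keep only cheaper routes
                best_g[s] = g2
                heapq.heappush(open_list,
                               (g2 + h(s), next(counter), g2, s, path + [s]))
    return None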
Memory-bounded heuristic search
● Memory-bounded heuristic search refers to a category of search algorithms
designed to operate within specified memory constraints. These algorithms
are tailored to limit the amount of memory they use during the search process
while still incorporating heuristic information to guide the exploration of the
solution space. The goal is to find a balance between efficient memory usage
and effective heuristic-guided search.
Memory-bounded heuristic search
Iterative-deepening A*:
● The main difference between IDA∗ and standard iterative deepening is that the cutoff used is
the f-cost (g +h) rather than the depth; at each iteration, the cutoff value is the smallest f-cost
of any node that exceeded the cutoff on the previous iteration
● The evaluation function in IDA* is:

f(n) = g(n) + h(n)

where
f(n) = the total cost evaluation function (actual cost + estimated cost),
g(n) = the actual cost from the initial node to the current node,
h(n) = the heuristic estimate of the cost from the current node to the goal state,
based on an approximation derived from the problem's characteristics.
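
A recursive IDA* sketch built on this evaluation function (interfaces as in the earlier sketches; cycle avoidance via the current path is an implementation choice of this sketch):

def ida_star(start, goal, neighbors, h):
    def search(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                   # prune; report the f-cost that exceeded the bound
        if node == goal:
            return "FOUND"
        minimum = float("inf")         # smallest f-cost among pruned successors
        for s, step in neighbors(node):
            if s not in path:          # avoid cycles along the current path
                path.append(s)
                t = search(path, g + step, bound)
                if t == "FOUND":
                    return "FOUND"
                minimum = min(minimum, t)
                path.pop()
        return minimum

    bound, path = h(start), [start]
    while True:
        t = search(path, 0, bound)
        if t == "FOUND":
            return path
        if t == float("inf"):
            return None                # no solution
        bound = t                      # next cutoff: smallest f that exceeded the old one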
Iterative-deepening A*:

● Take the root node as the current node, i.e., 2.
● Threshold = current node's value (2). So explore its children.
● 4 > threshold and 5 > threshold, so this iteration is over and the pruned values are 4 and 5.
Iterative-deepening A*:
● Among the pruned values, the least is 4, so threshold = 4.
● Current node = 2, and 2 < threshold, so explore its
children one by one.
● The first child is 4, so set current node = 4, which equals
the threshold; explore its children, i.e., 5 and 4. Since
5 > threshold, prune it, and explore the second child of
node 4, i.e., 4; set current node = 4 = threshold and
explore its children, i.e., 8 and 7. Both 8 and 7 > threshold,
so prune them. At the end of this iteration, the pruned
values are 5, 8, and 7.
● Similarly, explore the second child of the root node, i.e.,
5, as the current node; 5 > threshold, so prune it.
● So the pruned values are 5, 8, and 7.
Iterative-deepening A*:
● Among the pruned values, the least is 5, so threshold = 5.
● Current node = root node = 2, and 2 < threshold, so
explore its children one by one.
● The first child is 4, so set current node = 4 < threshold
and explore its children, i.e., 5 and 4. Since 5 = threshold,
explore its children too, i.e., 7 and 8; both > threshold, so
prune them. Then explore the second child of node 4, i.e.,
4, so set current node = 4 < threshold and explore its
children, i.e., 8 and 7. Both 8 and 7 > threshold, so prune
them. At the end of this, the pruned values are 7 and 8.
● Similarly, explore the second child of the root node, i.e.,
5, as the current node; 5 = threshold, so explore its
children too, i.e., 6 and 6; both > threshold, so prune
them.
● So the pruned values are 7, 8, and 6.
Recursive best first search
● Recursive Best-First Search (RBFS) is a memory-bounded variant of the
Best-First Search algorithm. It was designed to address the issue of high
memory consumption in Best-First Search by using a recursive backtracking
mechanism. RBFS maintains a limited amount of memory while still effectively
exploring the search space guided by heuristic estimates.
Recursive best first search
● Initialization:
○ RBFS begins with an initial state and initializes the search with a limited memory space.
● Node Expansion:
○ The algorithm selects a node for expansion based on its heuristic estimate. If the memory limit is not exceeded, the
selected node is expanded, and its successors are generated.
● Successor Evaluation:
○ For each successor, RBFS computes the heuristic estimate and the actual cost, forming an evaluation function. These
successors are added to the set of nodes eligible for expansion.
● Recursive Backtracking:
○ If the memory limit is exceeded during the expansion, RBFS applies a recursive backtracking mechanism. It backtracks to
the parent of the node being expanded and stores the second-best evaluation value.
● Limited Memory Usage:
○ RBFS maintains limited memory by storing only a subset of nodes in the memory space. The second-best evaluation
value is used to guide the search in a more focused manner.
● Goal Test:
○ If the goal state is reached during the expansion, the search terminates, and the solution path is reconstructed.
● Repeat:
○ The process continues with the limited memory space and the recursive backtracking mechanism until the goal is found or
the search space is exhausted.
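
A hedged RBFS sketch following these steps (assumes a tree-shaped space and the same neighbors/h interfaces as the earlier sketches):

import math

def rbfs(node, g, f_node, goal, neighbors, h, f_limit):
    # Returns (solution path or None, revised f-value for this node).
    if node == goal:
        return [node], f_node
    succs = []
    for s, step in neighbors(node):
        g2 = g + step
        succs.append([max(g2 + h(s), f_node), g2, s])   # child f at least parent f
    if not succs:
        return None, math.inf
    while True:
        succs.sort(key=lambda t: t[0])                  # best (lowest f) first
        best = succs[0]
        if best[0] > f_limit:
            return None, best[0]       # back up the best value beyond the limit
        alternative = succs[1][0] if len(succs) > 1 else math.inf
        result, best[0] = rbfs(best[2], best[1], best[0], goal, neighbors, h,
                               min(f_limit, alternative))  # second-best guides recursion
        if result is not None:
            return [node] + result, best[0]

# Entry point: rbfs(start, 0, h(start), goal, neighbors, h, math.inf)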
Heuristic Functions
● The 8-puzzle was one of the earliest heuristic search problems. The object of
the puzzle is to slide the tiles horizontally or vertically into the empty space
until the configuration matches the goal configuration
Heuristic Functions
● The average solution cost for a randomly generated 8-puzzle instance is
about 22 steps.
● The branching factor is about 3 (When the empty tile is in the middle, four
moves are possible; when it is in a corner, two; and when it is along an edge,
three.)
● An exhaustive tree search to depth 22 would therefore examine about 3^22 ≈ 3.1 × 10^10 states
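
Two standard 8-puzzle heuristics are h1, the number of misplaced tiles, and h2, the sum of Manhattan distances of the tiles from their goal squares. A sketch, assuming states are 9-tuples with 0 as the blank and a hypothetical goal order:

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)     # hypothetical goal configuration

def h1(state):
    # Number of misplaced tiles (the blank is not counted).
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def h2(state):
    # Sum of Manhattan distances of each tile from its goal square.
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            g = GOAL.index(tile)
            total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total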
Local Search Algorithms and Optimization Problems
● The search algorithms that we have seen so far are designed to explore
search spaces systematically. This systematicity is achieved by keeping one
or more paths in memory and by recording which alternatives have been
explored at each point along the path. When a goal is found, the path to that
goal also constitutes a solution to the problem. In many problems, however,
the path to the goal is irrelevant.
● This property holds for many important applications such as integrated-circuit
design, factory-floor layout, job-shop scheduling, automatic programming,
telecommunications network optimization, vehicle routing, and portfolio
management.
Local Search Algorithms and Optimization Problems
● Local search algorithms operate using a single current node (rather than
multiple paths) and generally move only to neighbors of that node. Typically,
the paths followed by the search are not retained.
● They have two key advantages:
○ They use very little memory—usually a constant amount
○ They can often find reasonable solutions in large or infinite (continuous) state spaces
● In addition to finding goals, local search algorithms are useful for solving pure
optimization problems, in which the aim is to find the best state according to
an objective function
Local Search Algorithms and Optimization Problems
● Local search algorithms
○ Hill climbing
○ Simulated annealing
○ Local beam search
○ Genetic algorithms
Hill climbing
● Hill-Climbing is a local search algorithm that makes incremental steps toward
the direction of increasing elevation (or, in the context of optimization
problems, toward increasing objective value).
● It explores the solution space by iteratively moving from the current state to a
neighboring state that improves the objective function, stopping when it
reaches a peak (or local optimum) where no neighboring state has a higher
objective value.
● Problems in Hill Climbing Algorithm
○ Local Maximum: a local maximum is a peak state in the landscape that is better than each
of its neighboring states, but lower than some other state elsewhere in the landscape (the
global maximum).
Hill climbing
● Initialization:
○ Start with an initial solution (state) to the problem.
● Evaluation:
○ Evaluate the current solution by computing its objective function value.
● Local Move:
○ Generate neighboring solutions by making small changes to the current solution. These changes are
often referred to as "local moves" or "perturbations."
● Selection of Neighbor:
○ Select the neighbor with the highest objective function value (if maximizing) or the lowest value (if
minimizing). This neighbor becomes the new current solution.
● Repeat:
○ Repeat steps 2 to 4 until reaching a solution where no neighboring state has a higher (or lower,
depending on the problem) objective function value.
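
A minimal steepest-ascent hill-climbing sketch following these steps (maximization; the neighbors and objective interfaces are assumptions):

def hill_climbing(start, neighbors, objective):
    current = start
    while True:
        candidates = list(neighbors(current))
        if not candidates:
            return current
        best = max(candidates, key=objective)    # steepest ascent
        if objective(best) <= objective(current):
            return current             # peak reached (possibly only a local maximum)
        current = best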
Simulated Annealing
● A variant of Hill-Climbing that allows occasional moves to worse solutions,
helping to escape local optima.
● It combines hill climbing with a random walk, which helps the search escape
local optima and reach the global maximum.
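
One possible rendering, with a geometric cooling schedule chosen here for illustration:

import math, random

def simulated_annealing(start, neighbors, objective, t0=1.0, cooling=0.995, t_min=1e-3):
    current, t = start, t0
    while t > t_min:
        nxt = random.choice(list(neighbors(current)))  # assumes at least one neighbor
        delta = objective(nxt) - objective(current)
        # Always accept improvements; accept worse moves with probability exp(delta/T).
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling                   # cooling schedule (an assumption of this sketch)
    return current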
Genetic algorithms
● A genetic algorithm (or GA) is a variant of stochastic beam search in which
successor states are generated by combining two parent states rather than by
modifying a single state.
● Offspring are generated by combining (and mutating) the current states and are
weighted by their objective (fitness) values; iterating this process moves the
population toward the optimal goal.
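
A schematic GA sketch; the fitness, crossover, and mutate callables are assumed interfaces (fitness values are assumed positive for the weighted selection):

import random

def genetic_algorithm(population, fitness, crossover, mutate, generations=100):
    for _ in range(generations):
        weights = [fitness(p) for p in population]   # fitness-proportionate selection
        new_population = []
        for _ in range(len(population)):
            a, b = random.choices(population, weights=weights, k=2)
            child = crossover(a, b)    # combine two parent states
            if random.random() < 0.1:  # small mutation rate (an assumption)
                child = mutate(child)
            new_population.append(child)
        population = new_population
    return max(population, key=fitness)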
Local Search in Continuous Space
● Local search in continuous space involves applying optimization algorithms to
problems where the solution space is continuous rather than discrete. In this
context, the solutions are represented as vectors or points in a continuous
domain, and the goal is to find the optimal solution that maximizes or
minimizes an objective function.
● Continuous optimization problems involve finding the best solution from an
infinite set of possibilities. These problems often arise in engineering, physics,
machine learning, and various scientific domains.
Local Search in Continuous Space
● Solutions in continuous space are typically represented as vectors of real
numbers. Each component of the vector corresponds to a variable, and the
combination of variable values defines a specific solution.
● The optimization problem is defined by an objective function that maps a
solution in the continuous space to a real number, indicating the quality of the
solution. The goal is to find the solution that maximizes or minimizes this
objective function.
● Local search algorithms for continuous optimization move from the current
solution to a neighboring solution in an iterative manner. The search focuses
on exploring the local region around the current solution.
Constraint Satisfaction Problems
● Constraint Satisfaction Problems (CSPs) are a class of problems in artificial
intelligence and computer science where the goal is to find a solution that
satisfies a set of constraints. In a CSP, a problem is defined by a set of
variables, each with a domain of possible values, and a set of constraints that
specify relationships among the variables. The objective is to find values for
the variables that satisfy all the given constraints.
Constraint Satisfaction Problems
● What is a CSP?
○ Finite set of variables V1, V2, …, Vn
■ Nonempty domain of possible values for each variable
DV1, DV2, … DVn
○ Finite set of constraints C1, C2, …, Cm
■ Each constraint Ci limits the values that variables can take,
● e.g., V1 ≠ V2
● A constraint satisfaction problem consists of three components, X, D, and C:
○ X is a set of variables, {X1,...,Xn}.
○ D is a set of domains, {D1,...,Dn}, one for each variable.
○ C is a set of constraints that specify allowable combinations of values.
Constraint Satisfaction Problems
● Variables:
○ Variables represent the unknowns or decision variables in the problem. Each variable has a
domain, which is a set of possible values it can take.
● Domains:
○ The domain of a variable is the set of values it can legally take. The domains are defined
based on the nature of the problem and the constraints.
● Constraints:
○ Constraints are restrictions on the possible combinations of values that the variables can
take. They define relationships or rules that must be satisfied for a solution to be valid.
● Solution:
○ A solution to a CSP is an assignment of values to all variables such that every constraint is
satisfied. The goal is to find a solution that meets all the criteria.
Constraint Satisfaction Problems
Varieties of constraints
● Unary constraints involve a single variable.
○ e.g. SA != green
● Binary constraints involve pairs of variables.
○ e.g. SA != WA
● Higher-order constraints involve 3 or more variables.
○ Professors A, B,and C cannot be on a committee together
○ Can always be represented by multiple binary constraints
● Preference (soft constraints)
○ e.g., red is better than green; often represented by a cost for each variable
assignment
○ a combination of optimization with CSPs
Constraint Satisfaction Problems
Variations on the CSP formalism
● One variation concerns the nature of the variables in the problem. In many CSPs, variables take values
from discrete sets, meaning that the possible values are distinct and separate.
● Finite domains:
○ The domains of these variables are finite, indicating that there is a limited and countable
set of potential values.
● Infinite domains:
○ The domain can contain unboundedly many elements; it may be discrete or continuous.
■ E.g., D = {"apple", "banana", "cherry", ...}
Constraint Satisfaction Problems
● Examples of Applications:
○ Scheduling the time of observations on the Hubble Space Telescope
○ Airline schedules
○ Cryptography
○ Computer vision -> image interpretation
Constraint Satisfaction Problems
CSP example: map coloring
● Variables: WA, NT, Q, NSW, V, SA, T
● Domains: Di={red,green,blue}
● Constraints:adjacent regions must
have different colors.
{SA != WA, SA != NT, SA != Q, SA != NSW , SA !=
V, WA != NT, NT != Q, Q != NSW , NSW != V }
Constraint Satisfaction Problems
Solution to the problem

{WA=red,NT=green,Q=red,NSW=green,V=red,
SA=blue,T=green}
Backtracking
● The term backtracking search is used for a depth-first search that chooses
values for one variable at a time and backtracks when a variable has no legal
values left to assign.
● It repeatedly chooses an unassigned variable, and then tries all values in the
domain of that variable in turn, trying to find a solution.
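
A sketch of backtracking search for CSPs; the constraints(var, value, assignment) consistency check is an assumed interface (for the map-coloring example it would test that value differs from every assigned neighbor):

def backtracking_search(variables, domains, constraints, assignment=None):
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment              # every variable has a consistent value
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if constraints(var, value, assignment):
            assignment[var] = value
            result = backtracking_search(variables, domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]        # backtrack: undo and try the next value
    return None                        # no legal value left for var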
Local search for CSPs
● Use complete-state representation
○ Initial state = all variables assigned values
○ Successor states = change 1 (or more) values
● For CSPs
○ allow states with unsatisfied constraints; operators reassign variable values
○ hill-climbing with n-queens is an example
● Variable selection: randomly select any variable
● Value selection: min-conflicts heuristic
○ Select new value that results in a minimum number of conflicts with the other variables
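
A min-conflicts sketch; conflicts(var, value, assignment) is an assumed callable counting the constraints violated if var took value:

import random

def min_conflicts(variables, domains, conflicts, max_steps=10_000):
    # Complete-state representation: start with all variables assigned at random.
    assignment = {v: random.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if conflicts(v, assignment[v], assignment) > 0]
        if not conflicted:
            return assignment          # all constraints satisfied
        var = random.choice(conflicted)
        # Min-conflicts heuristic: reassign the value with the fewest conflicts.
        assignment[var] = min(domains[var],
                              key=lambda val: conflicts(var, val, assignment))
    return None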
Local search for CSPs
● In the context of local search for CSPs, the algorithm iteratively explores the
solution space, making local changes to improve the current assignment of
values to variables. The focus is on improving the current solution rather than
exhaustively searching the entire space.
Local search for CSPs
● Current State:
○ Start with an initial assignment of values to variables. This assignment may or may not satisfy all constraints.
● Objective Function:
○ Define an objective function that measures the quality of the current assignment. The objective function reflects how well the current
assignment satisfies the constraints.
● Local Changes:
○ Make small, local changes to the current assignment. These changes involve modifying the values of one or more variables while
attempting to improve the overall solution.
● Feasibility:
○ Ensure that the local changes maintain the feasibility of the assignment; that is, the constraints are not violated by the changes.
● Evaluation:
○ Evaluate the new assignment using the objective function. If the objective function indicates an improvement, keep the new
assignment. Otherwise, revert to the previous assignment.
● Termination Criteria:
○ Repeat steps 3-5 until a satisfactory solution is found or a specified termination criterion is met (e.g., a maximum number of iterations).
● Output:
○ The final assignment is considered a solution to the CSP.
Tree-Structured CSPs
● In a tree-structured CSP, the constraints are hierarchically arranged, and the
solution space can be explored systematically following the structure of the
tree. This type of organization simplifies the representation and solution of the
problem.
Tree-Structured CSPs
● The hierarchical structure allows for the decomposition of the problem into
smaller subproblems. Each level of the tree may represent a different aspect
or perspective of the problem, and solving the entire problem involves solving
the subproblems bottom-up or top-down.
● Theorem:
○ if a constraint graph has no loops then the CSP can be solved in O(nd^2) time
○ linear in the number of variables!
● Compare with the general CSP, where the worst case is O(d^n)
non-Tree CSPs
● In a non-tree CSP, the relationships among constraints are not strictly
hierarchical, and the solution space may involve more complex dependencies
and interactions among variables.
● Unlike tree-structured CSPs, non-tree CSPs may contain loops or cycles in
the constraint graph. In a constraint graph, nodes represent variables, and
edges represent constraints. The presence of cycles implies circular
relationships among variables.
non-Tree CSPs
● General idea is to convert the graph to a tree

Two general approaches:

1. Assign values to specific variables (Cycle Cutset method)
2. Construct a tree-decomposition of the graph
a. Connected subproblems (subgraphs) form a tree structure
Cycle Cutset method
● In this method, the most constrained variable is assigned a value and removed
from the graph, and the remaining variables are given a domain that does not
contain the selected value.
● By this method the graph can be converted to a tree.
● In the following image, by assigning SA (the variable with the most
constraints) the color Red:
○ D = {Green, Blue}
Adversarial Search
Game
● It is a field within artificial intelligence that focuses on developing strategies for
games where two or more players compete against each other.
● Mathematical game theory, a branch of economics, views any multiagent
environment as a game, provided that the impact of each agent on the others
is “significant,” regardless of whether the agents are cooperative or
competitive.
● Adversarial search algorithms aim to find optimal or near-optimal strategies
for decision-making in such competitive environments.
Game
Types of game problems
● Adversarial games: the win of one player is a loss of the other
● Cooperative games: players have common interests and utility function
● A spectrum of game problems in between the two
Game
● These are different components for adversarial search
○ S0: The initial state, which specifies how the game is set up at the start.
○ PLAYER(s): Defines which player has the move in a state.
○ ACTIONS(s): Returns the set of legal moves in a state.
○ RESULT(s, a): The transition model, which defines the result of a move
○ TERMINAL-TEST(s): A terminal test, which is true when the game is over and false
otherwise. States where the game has ended are called terminal states.
○ UTILITY(s, p): A utility function (also called an objective function or payoff function)
Game
● Search objective:
○ find the sequence of player’s decisions (moves) maximizing its utility (payoff)
○ Consider the opponent’s moves and their utility
○ Find the contingent strategy for MAX assuming an infallible MIN opponent.
Minimax algorithm
● One solution to such competitive or adversarial problems is the Minimax
algorithm.
● The Minimax algorithm recursively explores the game tree, alternating
between minimizing and maximizing players. At each level of the tree:
○ Maximizing Player (Max): Chooses the move that maximizes the score.
○ Minimizing Player (Min): Chooses the move that minimizes the score.
● By convention, the root of the game tree represents the MAX player.
● It is assumed that each player aims to make the best move for itself, and
therefore the worst move for its opponent, in order to win the game.
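
A minimal minimax sketch over the game components listed earlier (the actions/result/terminal/utility callables mirror ACTIONS, RESULT, TERMINAL-TEST, and UTILITY):

def minimax(state, is_max, actions, result, terminal, utility):
    if terminal(state):
        return utility(state)          # payoff at a terminal state
    values = [minimax(result(state, a), not is_max,
                      actions, result, terminal, utility)
              for a in actions(state)]
    return max(values) if is_max else min(values)   # MAX maximizes, MIN minimizes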
Minimax
● In the following Minimax
example:
○ X is the MAX player
○ O is the MIN player
● X chooses the option that
maximizes the score
● O chooses the option that
minimizes the score
● The opponent is rational and
always optimizes its behavior
Alpha-Beta Pruning
● The problem with minimax search is that the number of game states it has to
examine is exponential in the depth of the tree.
● Alpha-Beta Pruning is an optimization technique applied to the Minimax
algorithm in adversarial search, specifically in two-player, zero-sum games.
The primary goal of Alpha-Beta Pruning is to reduce the number of nodes that
need to be evaluated in the game tree
Alpha-Beta Pruning
● Alpha-Beta Pruning introduces two parameters, alpha and beta, to keep track
of the minimum score guaranteed for the maximizing player (Max) and the
maximum score guaranteed for the minimizing player (Min) at a given level.
● During the search, if a Max node's value is greater than or equal to beta, or a
Min node's value is less than or equal to alpha, pruning occurs, and the
remaining branches at that node are not explored.
Alpha-Beta Pruning
● Rules of Thumb
○ α is the highest max found so far
○ β is the lowest min value found so far
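
A sketch of minimax with alpha-beta cutoffs, using the same assumed game interfaces as the minimax sketch above:

import math

def alphabeta(state, is_max, actions, result, terminal, utility,
              alpha=-math.inf, beta=math.inf):
    if terminal(state):
        return utility(state)
    if is_max:
        value = -math.inf
        for a in actions(state):
            value = max(value, alphabeta(result(state, a), False, actions, result,
                                         terminal, utility, alpha, beta))
            alpha = max(alpha, value)  # highest MAX value found so far
            if value >= beta:
                return value           # beta cutoff: MIN will avoid this branch
    else:
        value = math.inf
        for a in actions(state):
            value = min(value, alphabeta(result(state, a), True, actions, result,
                                         terminal, utility, alpha, beta))
            beta = min(beta, value)    # lowest MIN value found so far
            if value <= alpha:
                return value           # alpha cutoff: MAX will avoid this branch
    return value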
Alpha-Beta Pruning
● Traverse the search tree in depth-first order
● At each Max node n, alpha(n) = maximum value found so far
○ Starts with -infinity and only increases.
○ Increases if a child of n returns a value greater than the current alpha.
○ Serves as a tentative lower bound on the final payoff.
● At each Min node n, beta(n) = minimum value found so far
○ Starts with infinity and only decreases.
○ Decreases if a child of n returns a value less than the current beta.
○ Serves as a tentative upper bound on the final payoff.
● beta(n) for a MAX node n: the smallest beta value of its MIN ancestors.
● alpha(n) for a MIN node n: the greatest alpha value of its MAX ancestors.
