Unit 2 PSM
Problem solving in artificial intelligence (AI) involves using techniques such as:
Efficient algorithms
Heuristics
Root cause analysis
Searching algorithms
A central goal of AI is to solve problems in much the same way that humans do.
Here are some problem solving methods in AI:
Heuristic search
Uses a rule of thumb to increase the chances of success
Heuristic approach
Focuses on experimentation and test procedures to understand a problem and create a
solution
State space search
Involves defining the search space, choosing the start and goal states, and then finding a path
from the start state to the goal state through the search space
Constraint satisfaction problems
Mathematical questions defined as a set of objects whose state must satisfy a number of
constraints or limitations.
Other problem solving methods in AI include:
Uninformed vs Informed Search
Local Search
Stochastic Hill Climbing
Simulated Annealing
Tabu Search
SEARCH STRATEGIES
In artificial intelligence (AI), search strategies are techniques and algorithms used to explore
and navigate a problem space in order to find a solution to a given problem. Search strategies
are commonly employed in tasks like pathfinding, puzzle-solving, game playing, and
optimization. There are various search algorithms, each with its own characteristics and
applications. Here are some of the commonly used search strategies in AI:
1. Depth-First Search (DFS): DFS explores a tree or graph by following one branch as deeply
as possible before backtracking. It is implemented using a stack data structure and is often
used in games and puzzle-solving.
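As a minimal sketch (using a small hypothetical adjacency-list graph), DFS with an explicit stack might look like:

```python
def dfs(graph, start):
    visited, stack = [], [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.append(node)
            # Push neighbours in reverse so earlier-listed ones are explored first.
            stack.extend(reversed(graph.get(node, [])))
    return visited

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(dfs(graph, "A"))  # ['A', 'B', 'D', 'C', 'E']
```

Note how the search goes deep (A, B, D) before backtracking to C, which is exactly the depth-first behaviour described above.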
2. Breadth-First Search (BFS): BFS explores a tree or graph level by level, visiting all nodes
at the current level before moving on to the next level. It is implemented using a queue data
structure and is commonly used in finding the shortest path in unweighted graphs.
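A minimal sketch of BFS with a queue, finding the shortest path in a small hypothetical unweighted graph:

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    # The queue holds whole paths; the first path to reach the goal
    # is shortest because the graph is unweighted.
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}
print(bfs_shortest_path(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```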
3. Uniform Cost Search (UCS): UCS is a variant of BFS that takes into account the cost
associated with each path. It expands the path with the lowest accumulated cost first and is
used for finding the shortest path in weighted graphs.
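A minimal UCS sketch using a priority queue ordered by accumulated cost (the weighted graph is illustrative):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    # Frontier entries: (accumulated cost, node, path so far).
    frontier = [(0, start, [start])]
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbour, step_cost in graph.get(node, []):
            if neighbour not in explored:
                heapq.heappush(frontier, (cost + step_cost, neighbour, path + [neighbour]))
    return None

graph = {
    "A": [("B", 1), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
print(uniform_cost_search(graph, "A", "D"))  # (3, ['A', 'B', 'C', 'D'])
```

The direct-looking edge B→D (cost 4) is avoided because the detour through C accumulates less total cost, which is the behaviour that distinguishes UCS from plain BFS.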
4. A* Search: A* combines the accumulated path cost used by UCS with a heuristic function
that estimates the cost of reaching the goal from a given state. It selects the path with the lowest
estimated total cost, making it efficient for pathfinding and optimization problems.
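A minimal A* sketch: f(n) = g(n) + h(n), where g is the accumulated cost and h the heuristic estimate. The graph and heuristic values below are illustrative (chosen to be admissible, i.e. never overestimating):

```python
import heapq

def a_star(graph, start, goal, h):
    # Frontier entries: (f = g + h, g, node, path so far).
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbour, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier, (new_g + h(neighbour), new_g, neighbour, path + [neighbour]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)], "G": []}
h_values = {"S": 4, "A": 3, "B": 1, "G": 0}
print(a_star(graph, "S", "G", h_values.get))  # (4, ['S', 'A', 'B', 'G'])
```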
5. Greedy Best-First Search: Greedy Best-First Search selects the path that appears to be
closest to the goal according to a heuristic, without considering the actual path cost. It can be
very efficient but may not always find the optimal solution.
6. Dijkstra's Algorithm: Dijkstra's algorithm is used to find the shortest path in a weighted
graph. It explores paths in order of their actual costs and is guaranteed to find the shortest
path in a non-negative weighted graph.
7. Bidirectional Search: Bidirectional search starts from both the initial and goal states and
searches simultaneously until the two searches meet in the middle. It can be more efficient
than traditional searches, especially in large search spaces.
8. Depth-Limited Search: Depth-Limited Search is a modified version of DFS that sets a
maximum depth limit. It helps avoid infinite loops in cases where a solution may not exist
within a reasonable depth.
9. Iterative Deepening Depth-First Search (IDDFS): IDDFS combines the advantages of
DFS and BFS by repeatedly applying DFS with increasing depth limits until a solution is
found. It ensures optimality and is suitable for large state spaces.
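IDDFS can be sketched as a depth-limited DFS wrapped in a loop over increasing limits (the graph is illustrative):

```python
def depth_limited_dfs(graph, node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None
    for neighbour in graph.get(node, []):
        if neighbour not in path:  # avoid cycles along the current path
            result = depth_limited_dfs(graph, neighbour, goal, limit - 1, path + [neighbour])
            if result is not None:
                return result
    return None

def iddfs(graph, start, goal, max_depth=20):
    # Repeat DFS with growing depth limits: 0, 1, 2, ...
    for limit in range(max_depth + 1):
        result = depth_limited_dfs(graph, start, goal, limit, [start])
        if result is not None:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": ["F"], "E": ["F"], "F": []}
print(iddfs(graph, "A", "F"))  # ['A', 'B', 'D', 'F']
```

Because each deepening round restarts from the root, the first solution found is at the shallowest possible depth, which is where the optimality guarantee (for unit step costs) comes from.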
10. Beam Search: Beam search is a variant of BFS that only keeps a fixed number of the best
nodes at each level, discarding the rest. It is used in game playing and optimization problems.
11. Simulated Annealing: Simulated annealing is a probabilistic search algorithm inspired by
the annealing process in metallurgy. It is used for optimization problems and aims to find the
global optimum through random exploration and probabilistic acceptance of worse solutions.
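A minimal simulated-annealing sketch on a toy 1-D objective (the objective, neighbour move, and cooling schedule are all illustrative choices):

```python
import math
import random

def simulated_annealing(cost, start, neighbour_fn, temp=10.0, cooling=0.95, steps=500):
    # Accept worse moves with probability exp(-delta / T); T shrinks each step,
    # so the search gradually turns from random exploration into greedy descent.
    current = start
    for _ in range(steps):
        candidate = neighbour_fn(current)
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        temp *= cooling
    return current

# Toy objective: minimise (x - 3)^2; neighbours are small random steps.
random.seed(0)
result = simulated_annealing(
    cost=lambda x: (x - 3) ** 2,
    start=-10.0,
    neighbour_fn=lambda x: x + random.uniform(-1, 1),
)
print(round(result, 2))  # a value near 3.0
```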
12. Genetic Algorithms: Genetic algorithms are inspired by the process of natural selection.
They use populations of solutions and evolve them over generations by applying genetic
operators like mutation and crossover. Genetic algorithms are used in optimization and
machine learning.
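A minimal genetic-algorithm sketch on the classic "OneMax" toy problem (evolve bit strings toward all ones); population size, mutation rate, and other parameters are illustrative:

```python
import random

def one_max_ga(length=20, pop_size=30, generations=60, mutation_rate=0.02):
    random.seed(1)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    fitness = sum  # fitness = number of ones in the bit string

    for _ in range(generations):
        def select():
            # Tournament selection: pick the fitter of two random individuals.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        next_pop = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            cut = random.randrange(1, length)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]                   # bit-flip mutation
            next_pop.append(child)
        pop = next_pop

    return max(pop, key=fitness)

best = one_max_ga()
print(sum(best), "ones out of", len(best))
```

Crossover recombines good partial solutions from two parents, while mutation keeps the population from converging prematurely.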
These search strategies can be adapted and combined to suit the specific requirements and
characteristics of the problem at hand. The choice of a particular search strategy depends on
factors such as the problem space, search space, available computational resources, and
desired solution quality.
UNINFORMED SEARCH
Uninformed search, also known as blind search, is a category of search algorithms in artificial
intelligence (AI) that explore a problem space without any knowledge of the location of the
goal or the structure of the problem. Uninformed search algorithms rely on systematic
exploration, visiting states or nodes in the search space based on simple rules, often without
any heuristic information.
Here are some commonly used uninformed search algorithms in AI:
1. Breadth-First Search (BFS): BFS explores a search tree or graph by systematically
expanding all the nodes at the current level before moving on to the next level. It is
complete, meaning it will find a solution if one exists, and it is guaranteed to find the
shortest path in an unweighted graph.
2. Depth-First Search (DFS): DFS explores a search tree or graph by visiting a node
and then recursively visiting its children as deeply as possible before backtracking.
DFS is often implemented using a stack data structure and is memory-efficient
compared to BFS, but it may not find the optimal solution.
3. Depth-Limited Search (DLS): DLS is a modification of DFS where a depth limit is
imposed. It is useful for preventing infinite loops and for limiting the depth of
exploration. DLS is often used in conjunction with iterative deepening to ensure
completeness and optimality.
4. Iterative Deepening Depth-First Search (IDDFS): IDDFS is a combination of DLS
and DFS. It repeatedly applies DLS with increasing depth limits until a solution is
found. IDDFS is both complete (guaranteed to find a solution) and optimal
(guaranteed to find the shortest path in unweighted graphs).
5. Uniform Cost Search (UCS): UCS is a variant of BFS that considers the actual cost
of reaching a state. It explores paths in order of their accumulated cost, making it
useful for finding the shortest path in weighted graphs.
6. Bidirectional Search: Bidirectional search starts from both the initial and goal states
and searches simultaneously until the two searches meet in the middle. It can be more
efficient than traditional searches in large search spaces.
Uninformed search algorithms are used in situations where little or no information about the
problem space is available. While most of these algorithms are complete (guaranteed to find a
solution if one exists), they may not always be the most efficient, especially in complex search
spaces. In cases where additional information is available, heuristic search algorithms, such
as A* search, are often preferred because they can make more informed decisions about
which paths to explore. Uninformed search is valuable for baseline exploration and as a basis
for developing more advanced search strategies in AI.
INFORMED SEARCH
Informed search in artificial intelligence (AI) is a search algorithm that uses problem-
specific knowledge to make search more directed and efficient. Informed search provides
the AI with guidance on how and where to look for the solution to the problem.
Informed search uses heuristics to evaluate which nodes or states to expand next during
search. A heuristic is a function that finds the most promising path. It takes the current state
of the agent as its input and produces an estimate of how close the agent is to the goal.
Informed search algorithms, like A* Search, use heuristics to estimate the cost to reach the
goal, potentially speeding up the search and finding more efficient solutions.
An example of informed search in AI is searching for a place you want to visit on Google
Maps. The current location and the destination are given to the search algorithm, which
calculates the distance, the travel time, and real-time traffic updates for that particular
route.
HEURISTIC SEARCH
Heuristic search is a searching technique in artificial intelligence (AI) that looks for a
reasonable solution to a problem rather than an exhaustively verified optimal one. It ranks
the available options at each branch of the search and chooses the most promising one.
Heuristic search is also known as informed search or heuristic control strategy. It uses the
idea of heuristic, which is a function that finds the most promising path.
Heuristic search is useful when no exact algorithm is known for a problem. It is also effective
in large search spaces where exhaustive search is impractical.
Related search techniques include:
Hill climbing: A simple optimization algorithm used to find the best possible solution for a
given problem
Breadth First Search (BFS): An example of uninformed search
Depth First Search (DFS): An algorithm that explores as far as possible along each branch
before backtracking
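The hill-climbing idea above can be sketched as a minimal steepest-ascent loop (the objective and step size are illustrative):

```python
def hill_climb(objective, start, step=0.1, max_iters=1000):
    current = start
    for _ in range(max_iters):
        neighbours = [current - step, current + step]
        best = max(neighbours, key=objective)
        if objective(best) <= objective(current):
            return current  # local maximum: no neighbour improves
        current = best
    return current

# Maximise f(x) = -(x - 2)^2: the peak is at x = 2.
peak = hill_climb(lambda x: -(x - 2) ** 2, start=0.0)
print(round(peak, 2))  # 2.0
```

Note that the loop stops at the first point where no neighbour improves, so on a multi-peaked objective it can get stuck at a local maximum; that limitation is what motivates variants like stochastic hill climbing and simulated annealing.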
OPTIMIZATION PROBLEM
An optimization problem in artificial intelligence (AI) is finding a set of inputs to an
objective function that results in a maximum or minimum function evaluation. It is a
challenging problem that underlies many machine learning algorithms, such as fitting logistic
regression models and training artificial neural networks.
Optimization problems can be classified according to the nature of the constraints, such as
linear, nonlinear, or convex.
Some optimization algorithms include:
Ant colony optimization
Based on the ideas of ant foraging by pheromone communication to form paths
Genetic algorithm
A conventional evolutionary algorithm motivated by Darwinian evolutionary ideas
Nature-inspired search algorithms
Such as the runner-root algorithm (RRA), which is inspired by the function of runners and
roots of plants in nature
When identifying AI optimization problems and objectives, you should also consider the
ethical, social, and environmental implications of the problem and the solutions. For example,
if you are optimizing an AI system for facial recognition, you should be aware of the
potential biases, privacy issues, and legal regulations involved.
CONSTRAINT SATISFACTION PROBLEMS
A constraint satisfaction problem (CSP) in AI consists of three components:
A set of variables
A domain for each variable
A set of constraints that outline the possible combinations of values for these variables
Many combinatorial problems in operational research, such as scheduling and timetabling,
can be formulated as CSPs.
Constraint propagation in a CSP typically involves:
1. Communicating the domain reduction of a decision variable to all of the constraints that are
stated over this variable
2. Using the constraints to reduce the domain of possible values for each variable
3. Inferring new constraints from the existing ones
4. Applying filtering to the constraints in the CSP instance at hand
5. Explicitly forbidding values or combinations of values for some variables of a problem
because a given subset of its constraints cannot be satisfied otherwise
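The three CSP components (variables, domains, constraints) can be sketched in a minimal backtracking solver; the map-colouring instance below is a hypothetical example:

```python
def solve_csp(variables, domains, constraints, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # Keep this value only if every constraint is still satisfiable.
        if all(check(assignment) for check in constraints):
            result = solve_csp(variables, domains, constraints, assignment)
            if result:
                return result
        del assignment[var]  # backtrack
    return None

# Hypothetical instance: colour three regions so that neighbours differ.
variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
def different(a, b):
    # A constraint is satisfied while either variable is still unassigned.
    return lambda asg: a not in asg or b not in asg or asg[a] != asg[b]
constraints = [different("WA", "NT"), different("WA", "SA"), different("NT", "SA")]
print(solve_csp(variables, domains, constraints))
# {'WA': 'red', 'NT': 'green', 'SA': 'blue'}
```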
GAME PLAYING
Game playing was one of the first tasks undertaken in artificial intelligence (AI), and there
are two main approaches to it in AI.
In AI, optimal decision making in games means making the best possible decision in a given
situation while considering various factors. These factors include:
Game mechanics
Resources available
Potential risks and rewards
Other players' activities
The optimal strategy in a game can be determined from the minimax value of each node
in a game tree. In the minimax algorithm, MAX prefers to move to a state of maximum
value, while MIN prefers a state of minimum value.
The optimal solution becomes a contingent strategy: it specifies MAX's move in the
initial state, then MAX's moves in the states resulting from every possible response by MIN.
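The minimax computation described above can be sketched on a small hand-built game tree, where internal nodes are lists of children and leaves are numeric utilities (the values are illustrative):

```python
def minimax(node, maximizing):
    if not isinstance(node, list):
        return node  # leaf: return its utility
    values = [minimax(child, not maximizing) for child in node]
    # MAX takes the maximum child value; MIN takes the minimum.
    return max(values) if maximizing else min(values)

# MAX chooses among three MIN nodes; MIN picks the smallest leaf below each.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, maximizing=True))  # 3
```

MIN reduces the three subtrees to 3, 2, and 2, and MAX then picks the largest of those, so the minimax value of the root is 3.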
ALPHA-BETA PRUNING
When a more formal approach to decision making is needed, the game tree can be analyzed
directly with alpha-beta pruning. Consider a simplified tic-tac-toe game tree in which X (the
maximizing player) is at the root and O (the minimizing player) responds at the next level;
pruned nodes are marked with dashes (-).
In this example:
X is the root node, and it is X's turn to play.
X explores its first child, O, and assigns an alpha value of -∞ to X.
O explores its first child, X, and assigns a beta value of ∞ to O.
X evaluates its second child, which is a terminal state with a score of 0 (draw).
O determines that X's second child is not better than its first child (X's alpha value), so
it prunes the remaining branches below O.
This process continues, and pruning occurs when O's beta value becomes less than or
equal to X's alpha value at nodes with dashes (-).
The result of alpha-beta pruning in this example is that many branches of the game tree are
pruned, significantly reducing the number of nodes that need to be evaluated. Alpha-beta
pruning ensures that the Minimax algorithm explores the most promising branches and
discards unproductive ones while maintaining the same optimal result as a full Minimax
search.
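A minimal alpha-beta sketch on a hand-built tree (the same kind of illustrative tree a plain minimax would search); branches are cut as soon as beta <= alpha:

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):
        return node  # leaf: return its utility
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if beta <= alpha:
                break  # prune remaining siblings
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if beta <= alpha:
            break  # prune remaining siblings
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, maximizing=True))  # 3, same as full minimax
```

On this tree the second MIN node is abandoned after its first leaf (2), since MIN can already force a value no better than MAX's current alpha of 3; the result is identical to full minimax but fewer leaves are evaluated.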
STOCHASTIC GAMES
Stochastic games, also known as stochastic dynamic games or sequential games of chance,
are a class of games in the field of game theory and artificial intelligence where uncertainty,
randomness, and sequential decision-making play a significant role. Unlike deterministic
games, where the outcome of each move is known with certainty, stochastic games involve a
level of randomness or uncertainty in the decision-making process. Stochastic games are used
to model a wide range of situations, from economics to biology to AI.
Here are some key concepts related to stochastic games:
1. Players: Stochastic games involve two or more players, each making decisions in a
sequential manner. Players may have different objectives or preferences.
2. States: The game progresses through a sequence of states. Each state represents a
snapshot of the game at a specific point in time. The transition from one state to
another depends on the players' actions and the random elements in the game.
3. Actions: In each state, players have a set of possible actions they can take. These
actions influence the game's progression.
4. Transitions: The game transitions from one state to another based on the actions of
the players and the probabilistic elements, such as random events or chance outcomes.
5. Rewards or Payoffs: At each state, players receive rewards or payoffs based on the
actions they have taken. The rewards can be deterministic or stochastic, depending on
the game's design.
6. Discount Factor: A discount factor is often used to model time preferences or
uncertainties about the future. It determines the relative importance of immediate
rewards versus future rewards.
7. Objective Function: Players typically have an objective function that defines their
goals or preferences. The goal can be to maximize expected rewards, minimize
expected costs, or achieve other specific objectives.
8. Strategies: Players choose strategies to maximize their objectives. Strategies are
plans or policies that specify the actions to take in different states or situations.
9. Equilibria: Stochastic games can have various equilibria, including Nash equilibria,
subgame perfect equilibria, and Markov perfect equilibria. These represent stable
points where no player can unilaterally improve their situation.
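Several of the concepts above (states, actions, probabilistic transitions, rewards, and a discount factor) can be illustrated in the one-player special case of a stochastic game, a Markov decision process. A minimal value-iteration sketch, with all states, probabilities, and rewards being illustrative numbers, might look like:

```python
def value_iteration(states, actions, P, R, gamma=0.9, sweeps=100):
    # V[s] converges to the best achievable expected discounted reward from s.
    V = {s: 0.0 for s in states}
    for _ in range(sweeps):
        V = {
            s: max(
                sum(p * (R[s][a] + gamma * V[s2]) for s2, p in P[s][a].items())
                for a in actions
            )
            for s in states
        }
    return V

states = ["sunny", "rainy"]
actions = ["go", "stay"]
P = {  # P[state][action] = {next_state: probability}
    "sunny": {"go": {"sunny": 0.8, "rainy": 0.2}, "stay": {"sunny": 1.0}},
    "rainy": {"go": {"sunny": 0.5, "rainy": 0.5}, "stay": {"rainy": 1.0}},
}
R = {  # immediate reward for taking an action in a state
    "sunny": {"go": 5.0, "stay": 1.0},
    "rainy": {"go": 0.0, "stay": -1.0},
}
values = value_iteration(states, actions, P, R)
print({s: round(v, 1) for s, v in values.items()})  # {'sunny': 37.7, 'rainy': 30.8}
```

The discount factor gamma plays exactly the role described in point 6: it weighs immediate rewards against the expected value of future states reached through the probabilistic transitions.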
Examples of stochastic games include:
Repeated Games: Games played over multiple rounds where the outcomes of
previous rounds can influence future rounds. For example, repeated prisoner's
dilemma games.
Stochastic Control Problems: Problems where a player must make sequential
decisions under uncertainty. Examples include financial portfolio optimization and
inventory management.
Game-Theoretic Models in Economics: Stochastic games are used to model
economic interactions involving strategic decision-making under uncertainty.
Reinforcement Learning: In reinforcement learning, agents learn to make sequential
decisions in stochastic environments, where they receive rewards that are subject to
uncertainty.
Stochastic games provide a powerful framework for modeling complex, dynamic systems
where uncertainty is a central feature. Analyzing and solving stochastic games often involves
advanced mathematical techniques, including dynamic programming, Markov decision
processes, and game-theoretic concepts. These games are essential in understanding decision-
making under uncertainty and have applications in various fields, including economics,
operations research, and AI.