
ARTIFICIAL INTELLIGENCE

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems.

A machine that could organize your cupboard as you like, or serve every member of a house a customized cup of coffee, makes your day easier.

These are the products of artificial intelligence


WHY USE THE TERM ARTIFICIAL INTELLIGENCE?
These machines are artificially endowed with human-like intelligence to perform tasks as we do.

This intelligence is built using complex algorithms and mathematical functions. In spite of variations in lighting, landscape, and dimensions of the field, the AI robot must perform as expected.

The ability to react appropriately to new situations is called generalized learning.

Consider a robot at a crossroad: one path is paved and the other one is rocky.

The robot should determine which path to take based on the circumstances. This portrays the robot's reasoning ability.
The robot encounters a stream that it cannot swim across. Using the plank provided as an input, the robot is able to cross the stream.
AI provides machines with the capability to adapt, reason, and provide solutions.

Weak AI, also called narrow AI, focuses solely on one task. AlphaGo is a maestro of the game Go, but you can't expect it to be remotely good at chess.

Avengers is an ideal example of a strong AI because it is self-aware and eventually develops emotions. This makes the AI's responses unpredictable.
Algorithms that incorporate intelligence into machines by automatically learning from data.

Algorithms that mimic the human brain to incorporate intelligence into machines.

AI is the ability of a machine to imitate human intelligence.
AI Techniques

Search: Search provides a way of solving problems for which no more direct approach is available, as well as a framework into which any direct techniques that are available can be embedded.

Use of Knowledge: This technique provides a way of solving complex problems by exploiting the structures of the objects that are involved.

Abstraction: Abstraction finds a way of separating important features and variations from the unimportant ones that would otherwise confuse any process.
Problem solving in AI
 Defining the problem
 Analyzing the problem
 Identification of solutions
 Choosing the solution
 Implementation
Water Jug Problem
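The slides do not specify jug sizes, so the sketch below assumes the classic variant: a 4-litre jug, a 3-litre jug, and a goal of measuring exactly 2 litres (the capacities and target are assumptions). It searches the space of states (x, y) breadth-first, so the first solution found uses the fewest fill, empty, and pour operations.

    from collections import deque

    CAP = (4, 3)   # assumed jug capacities
    GOAL = 2       # assumed target amount in the big jug

    def successors(state):
        x, y = state
        yield (CAP[0], y)                # fill the big jug
        yield (x, CAP[1])                # fill the small jug
        yield (0, y)                     # empty the big jug
        yield (x, 0)                     # empty the small jug
        pour = min(x, CAP[1] - y)        # pour big -> small
        yield (x - pour, y + pour)
        pour = min(y, CAP[0] - x)        # pour small -> big
        yield (x + pour, y - pour)

    def solve():
        frontier = deque([((0, 0), [(0, 0)])])
        seen = {(0, 0)}
        while frontier:
            state, path = frontier.popleft()
            if state[0] == GOAL:
                return path
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))

    print(solve())   # one shortest sequence of (big, small) states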
Tower of Hanoi
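A minimal recursive sketch: to move n discs, move the top n - 1 aside, move the largest disc, then move the n - 1 back on top. The peg names are arbitrary placeholders; a tower of n discs always takes 2**n - 1 moves.

    def hanoi(n, source="A", target="C", spare="B"):
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)          # clear the top n-1 discs
        print(f"move disc {n}: {source} -> {target}")
        hanoi(n - 1, spare, target, source)          # stack them back on top

    hanoi(3)   # prints the 7 moves for three discs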
Travelling Salesman Problem
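A brute-force sketch on a made-up 4-city distance matrix. Enumerating all (n - 1)! tours is only feasible for tiny n, which is why TSP is the standard example of a problem whose good solutions are relative rather than absolute (see problem characteristic 4 below).

    from itertools import permutations

    # toy symmetric distances; all values are invented for illustration
    dist = {("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
            ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30}

    def d(a, b):
        return dist[(a, b)] if (a, b) in dist else dist[(b, a)]

    def tsp(cities, start="A"):
        rest = [c for c in cities if c != start]
        best = None
        for perm in permutations(rest):           # all (n-1)! orderings
            tour = (start,) + perm + (start,)
            cost = sum(d(a, b) for a, b in zip(tour, tour[1:]))
            if best is None or cost < best[0]:
                best = (cost, tour)
        return best

    print(tsp(["A", "B", "C", "D"]))   # (80, ('A', 'B', 'D', 'C', 'A'))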
Broad Categorizations of AI problems
 Structured Problem
 Well structured – Yield a right answer (1+2=3)
 Ill structured – Do not yield a particular answer
(Which economic system allows for the most human flourishing?)

 Unstructured Problem
 Very hard to formulate the problem
 Ambiguous in nature
Broad Categorizations of AI problems
 Linear Problem
 Have a clear solution
 All kinds of classification problems
 Non-linear Problem
 The relationship between input and output is non-linear
 Further decisions can't be taken as in linear problems
AI Models
• Semiotic Models
- Based on sign processes / signification and communication.
- A specific code gives meaning to each sign, based on the sounds or letters that humans use to form words or movements.
AI Models
• Statistical Models
- Refers to the representation and formulation of relationships through statistical techniques.
- A statistical model employs probabilistic approaches and is typically a collection of probability density functions and distribution functions.
Data acquisition and learning aspects in AI
Problem Solving

Goal-based agent or problem-solving agent

Goal formulation and problem formulation
Problem types
 Deterministic or observable (single-state)
 Non-observable (multiple-state)
 Non-deterministic or partially observable
 Unknown state space
Deterministic or observable
(Single-state problems)
• Each state is fully observable and it goes to one definite state after any
action.
• Here, the goal state is reachable in one single action or sequence of actions.
• Deterministic environments ignore uncertainty.
Ex- Vacuum cleaner with sensor

Non-observable(Multiple-state problems)
or conformant problems
• The problem-solving agent does not have any information about the state.
• The solution may or may not be reached.
Ex- In the case of the vacuum cleaner, the goal is a clean floor. The action is to suck if there is dirt. So, in the non-observable condition, as there is no sensor, it will have to suck the dirt irrespective of whether it is towards the right or the left. Here, the solution space is the set of states specifying its movement across the floor.
Non-deterministic
(partially observable) problem
• The effect of action is not clear.
• Percepts provide new information about the current state.

Ex- If we take the vacuum cleaner, and now assume that a sensor is attached to it, then it will suck if there is dirt. Movement of the cleaner will be based on its current percept.
Unknown state space problems

• Typically exploration problems


• States and the impact of actions are not known.

Ex- Online search that involves acting without complete knowledge of the next state, or scheduling without a map.
Problem Characteristics
1. Is the problem decomposable ?

2. Can Solution steps be ignored or undone ?

3. Is the Universe Predictable?

4. Is a good solution absolute or relative ?

5. Is the solution a state or a path?

6. What is the role of knowledge?

7. Does the task require interaction with a person ?


1. Is the problem decomposable ?
• In this case, the problem is divided into smaller problems.
• The smaller problems are solved independently.
• Finally, the result is merged to get the final result.

2. Can Solution steps be ignored or undone ?
• In the Theorem Proving problem, a lemma that has been proved can be ignored
for the next steps.
• Such problems are called Ignorable problems.
• In the 8-Puzzle, moves can be undone and backtracked.
• Such problems are called Recoverable problems.

3. Is the Universe Predictable?
• In playing Bridge, we cannot know exactly where all the cards are or what the other players will do on their turns.
• Uncertain outcome!

• For certain-outcome problems, planning can be used to generate a sequence of operators that is guaranteed to lead to a solution.

• For uncertain-outcome problems, a sequence of generated operators can only have a good probability of leading to a solution. Plan revision is made as the plan is carried out and the necessary feedback is provided.
4. Is a good solution to the problem absolute or relative?
• The facts can be represented using a formal language called "Predicate Logic".
• There may be "n" different solutions. If one solution is found, there is no need to go back and see if some other path might also lead to a solution.

Absolute solution:
Ex- P -> Q
If marks = 92 -> Grade = O
Relative solution:
Ex- TSP (explore all possible solutions)
5. Is the solution a state or a path?
● To solve a problem of finding an interpretation, we need to produce only the interpretation itself. A record of the processing by which the interpretation was arrived at is NOT required.
Ex- The President ate food with a fork.

● In contrast, for some problems we must produce the final state along with the path to that state, i.e. the sequence of operations that produces the final state.
6. What is the role of knowledge ?
• Now consider the problem of scanning daily newspapers to decide which are
supporting the Democrats and which are supporting the Republicans in some
upcoming election. Again assuming unlimited computing power, how much
knowledge would be required by a computer trying to solve this problem? This time
the answer is a great deal.
• It would have to know such things as:
 The names of the candidates in each party.
 The fact that if the major thing you want to see done is have taxes lowered, you
are probably supporting the Republicans.
 The fact that if the major thing you want to see done is improved education for
minority students, you are probably supporting the Democrats.
 The fact that if you are opposed to big government, you are probably supporting
the Republicans.
7. Does the task require interaction with a person ?

• The solitary problem, in which there is no intermediate communication and no demand for an explanation of the reasoning process.
• The conversational problem, in which intermediate communication is used to provide either additional assistance to the computer or additional information to the user.
Problem, Problem space and Search
Problem:
• A problem is a specific task or challenge that requires finding a solution or making a decision.
• In artificial intelligence, problems can vary in complexity and scope, ranging from simple tasks
like arithmetic calculations to complex challenges such as image recognition, natural language
processing, game playing, and optimization.
• Each problem has a defined set of initial states, possible actions or moves, and a goal state that
needs to be reached or achieved.
• For example, in a game of chess, the problem is to find a sequence of moves that leads to checkmate, while in route planning, the problem is to find the shortest path between two locations on a map.
Problem, Problem space and search
Problem Space:
• The problem space is the set of all possible states, actions, and transitions that can be encountered
while attempting to solve a specific problem. It represents the entire landscape of potential solutions
and paths from the initial state to the goal state.
• In other words, the problem space defines all the possible configurations or arrangements of elements
involved in the problem and the set of valid moves or actions that can be taken at each state. Each
state in the problem space represents a specific configuration, and each action represents a possible
move or step from one state to another.
• For example, in the problem of route planning, the problem space includes all possible locations on the map as states and all valid roads or paths between them as actions.
Problem, Problem space and search
Search:
• Search is the process of exploring the problem space to find a sequence of actions or moves that lead to the goal
state or a satisfactory solution. In AI, search algorithms are used to systematically navigate through the problem
space and discover paths or solutions that satisfy the problem’s constraints and objectives.
• The search process involves starting from the initial state and exploring possible actions to generate new states.
These states are then evaluated based on certain criteria (e.g., distance to the goal, cost, or utility) to determine
the most promising states to explore further. The process continues iteratively until the goal state is reached or a
satisfactory solution is found.
• There are various search algorithms used in AI, such as depth-first search, breadth-first search, A* search, and
heuristic search. Each algorithm has its strengths and weaknesses, and the choice of search algorithm depends
on the problem’s characteristics, size of the problem space, and the resources available.

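To make the last bullet concrete, here is a minimal A* sketch; the graph, step costs, and heuristic values are invented toy data. A* always expands the node with the lowest f = g + h and returns an optimal path as long as h never overestimates the remaining cost.

    import heapq

    graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)],
             "B": [("G", 1)], "G": []}
    h = {"S": 3, "A": 2, "B": 1, "G": 0}      # admissible estimates

    def astar(start, goal):
        frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
        best_g = {start: 0}
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return g, path
            for nbr, cost in graph[node]:
                g2 = g + cost
                if g2 < best_g.get(nbr, float("inf")):
                    best_g[nbr] = g2
                    heapq.heappush(frontier,
                                   (g2 + h[nbr], g2, nbr, path + [nbr]))

    print(astar("S", "G"))   # (4, ['S', 'A', 'B', 'G'])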
The 8-Puzzle (Sliding Block Puzzle)

States: A state description specifies the location of each of the eight tiles
and the blank in one of the nine squares.

Initial state: Any state can be designated as the initial state.

Successor function: This generates the legal states that result from trying
the four actions (blank moves Left, Right, Up, or Down).

Goal test: This checks whether the state matches the goal configuration
(Other goal configurations are possible.)

Path cost: Each step costs 1, so the path cost is the number of steps in the
path.
The 8-Puzzle (Sliding Block Puzzle) - Solution

• h = +1 for every tile in its correct position


• Solution of this problem is “movement of tiles” in order to reach
goal state.
• The transition function or legal move is any one tile movement
by one space in any direction.
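A minimal sketch of this formulation: a state is a 9-tuple with 0 standing for the blank, the successor function generates the legal blank moves, and h counts the tiles already in their correct position, as on the slide.

    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

    def successors(state):
        i = state.index(0)                  # where the blank is
        r, c = divmod(i, 3)
        for dr, dc in ((0, -1), (0, 1), (-1, 0), (1, 0)):   # L, R, U, D
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < 3 and 0 <= c2 < 3:
                j = r2 * 3 + c2
                s = list(state)
                s[i], s[j] = s[j], s[i]     # slide the neighbouring tile
                yield tuple(s)

    def h(state):
        # the slide's heuristic: +1 per tile already in place
        return sum(a == b and a != 0 for a, b in zip(state, GOAL))

    start = (1, 2, 3, 4, 5, 6, 7, 0, 8)
    print(h(start))                 # 7 tiles already placed
    print(list(successors(start)))  # the three states one move away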
4-Queen Problem
Toy Problem 7: Missionaries and Cannibals
Let a missionary be denoted by 'M' and a cannibal by 'C'.
The production rules are described below:

All or some of these production rules will have to be used in a particular sequence to find the solution of the problem.
Rules applied and their sequence in the Missionaries and Cannibals problem
Formalization of the M&C Problem

State space: triples (x, y, z) with 0 ≤ x, y, z ≤ 3, where x, y, and z represent the number of missionaries, cannibals, and boats currently on the original bank.
Initial State: (3,3,1)
Successor function: From each state, either bring one missionary, one cannibal, two missionaries, two cannibals, or one of each type to the other bank.
Note: Not all states are attainable (e.g., (0,0,1)), and some are illegal.
Goal State: (0,0,0)
Path Costs: 1 unit per crossing
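A breadth-first sketch of this exact formalization: a state is (missionaries, cannibals, boat) on the original bank, the five listed crossings are the successor function, and the usual legality rule (missionaries never outnumbered on either bank) prunes illegal states.

    from collections import deque

    MOVES = [(1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]   # people in the boat

    def legal(m, c):
        if not (0 <= m <= 3 and 0 <= c <= 3):
            return False
        # on each bank, missionaries present must not be outnumbered
        return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

    def solve():
        start, goal = (3, 3, 1), (0, 0, 0)
        frontier, seen = deque([(start, [start])]), {start}
        while frontier:
            (m, c, b), path = frontier.popleft()
            if (m, c, b) == goal:
                return path
            for dm, dc in MOVES:
                # boat on the original bank carries people away, else back
                m2, c2 = (m - dm, c - dc) if b else (m + dm, c + dc)
                nxt = (m2, c2, 1 - b)
                if legal(m2, c2) and nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))

    print(solve())   # a shortest plan: 12 states, i.e. 11 crossings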
Agents in Artificial Intelligence

What is an Agent?
• An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators.
• An agent runs in a cycle of perceiving, thinking, and acting.
Agents in Artificial Intelligence

Human-Agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and the vocal tract, which work as actuators.
Robotic Agent: A robotic agent can have cameras, infrared range finders, and NLP for sensors, and various motors for actuators.
Agents in Artificial Intelligence
Sensor: Sensor is a device which detects the change in the environment and sends the information to
other electronic devices. An agent observes its environment through sensors.
Actuators: Actuators are the components of machines that convert energy into motion. The actuators are only responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and display screens.
Intelligent Agents
An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from the environment to achieve its goals. A thermostat is an example of an intelligent agent.
Following are the four main rules for an AI agent:
Rule 1: An AI agent must have the ability to perceive the environment.
Rule 2: The observation must be used to make decisions.
Rule 3: Decisions should result in an action.
Rule 4: The action taken by an AI agent must be a rational action.
Characteristics of an AI agent
Autonomy: An AI virtual agent is capable of performing tasks independently without requiring
constant human intervention or input.
Perception: The agent senses and interprets the environment it operates in through various sensors, such as cameras or microphones.
Reactivity: An AI agent can assess the environment and respond accordingly to achieve its goals.
Reasoning and decision-making: AI agents are intelligent tools that can analyze data and make
decisions to achieve goals. They use reasoning techniques and algorithms to process information and
take appropriate actions.
Learning: They can learn and enhance their performance through machine, deep, and reinforcement
learning elements and techniques.
Communication: AI agents can communicate with other agents or humans using different methods,
like understanding and responding to natural language, recognizing speech, and exchanging messages
through text.
Goal-oriented: They are designed to achieve specific goals, which can be pre-defined or learned
through interactions with the environment.

Rational Agent
• A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.
• A rational agent is said to perform the right things. AI is about creating rational agents for game theory and decision theory in various real-world scenarios.
• For an AI agent, rational action is most important because in reinforcement learning, the agent gets a positive reward for each best possible action and a negative reward for each wrong action.
Rationality
The rationality of an agent is measured by its performance measure. Rationality can
be judged on the basis of following points:
• The performance measure, which defines the success criterion.
• The agent's prior knowledge of its environment.
• The best possible actions that an agent can perform.
• The sequence of percepts.
Rationality differs from omniscience because an omniscient agent knows the actual outcome of its actions and acts accordingly, which is not possible in reality.
Structure of an AI Agent
• The task of AI is to design an agent program which implements the agent
function. The structure of an intelligent agent is a combination of architecture and
agent program.
Agent = Architecture + Agent program
• Following are the main three terms involved in the structure of an AI agent:
Architecture: Architecture is machinery that an AI agent executes on.
Agent Function: The agent function maps a percept sequence to an action:
f : P* → A
Agent program: The agent program is an implementation of the agent function. An agent program executes on the physical architecture to produce the function f.
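A tiny sketch of these three pieces in code: the class stands in for the architecture, program() implements the agent function f mapping the percept sequence P* to an action, and the percept and action names are made-up placeholders.

    class Agent:
        def __init__(self):
            self.percepts = []               # the percept sequence P*

        def program(self, percept):          # implements f : P* -> A
            self.percepts.append(percept)
            return "suck" if percept == "dirty" else "move"

    agent = Agent()
    for p in ["clean", "dirty", "clean"]:
        print(p, "->", agent.program(p))     # dirty -> suck, otherwise move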
Types of Agents in Artificial Intelligence
There are five different types of intelligent agents used in AI.
• Simple Reflex Agents
• Model-based Agents
• Goal-based agents
• Utility-based agents
• Learning agents

Simple Reflex Agent
• A simple reflex agent is an AI system that follows pre-defined rules to make
decisions. It only responds to the current situation without considering the past or
future ramifications.
• A simple reflex agent is suitable for environments with stable rules and
straightforward actions, as its behavior is purely reactive and responsive to
immediate environmental changes.

Example: A thermostat in a heating system.
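The thermostat example as a single condition-action rule; the 20-degree setpoint is an assumed value. Note that the rule looks only at the current percept, with no memory of past readings or prediction of future ones.

    def thermostat_agent(temperature):
        # condition-action rule on the current percept only
        if temperature < 20:        # assumed setpoint
            return "heat_on"
        return "heat_off"

    print(thermostat_agent(17))     # heat_on
    print(thermostat_agent(22))     # heat_off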
Model-based Reflex Agent
A model-based reflex agent performs actions based on the current percept and an internal state representing the unobservable parts of the world.
It updates its internal state based on two factors:
• How the world evolves independently of the agent
• How the agent's actions affect the world
A cautionary model-based reflex agent is a variant that also considers the possible consequences of its actions before executing them.
Model-based Reflex Agent
The model-based reflex agent operates in four stages:
Sense: It perceives the current state of the world with its sensors.
Model: It constructs an internal model of the world from what it sees.
Reason: It uses its model of the world to decide how to act based on a set of predefined rules or
heuristics.
Act: The agent carries out the action that it has chosen.

Example
A vacuum cleaner that uses sensors to detect dirt and obstacles and moves and cleans based on a model.

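A minimal sketch of the sense-model-reason-act loop for the vacuum example; the square names are placeholders. Unlike the simple reflex agent above, it keeps an internal model of the squares it has seen and their state.

    class ModelBasedVacuum:
        def __init__(self):
            self.world = {}                    # internal model of the world

        def step(self, location, dirty):
            self.world[location] = dirty       # Sense + Model: record percept
            if self.world[location]:           # Reason: rule "dirty -> suck"
                self.world[location] = False   # our action will clean it
                return "suck"                  # Act
            return "move"

    v = ModelBasedVacuum()
    print(v.step("A", True))    # suck
    print(v.step("B", False))   # move
    print(v.world)              # {'A': False, 'B': False}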
Goal-based Agents
• Goal-based agents are AI agents that use information from their environment to achieve specific
goals. They employ search algorithms to find the most efficient path towards their objectives
within a given environment.
• These agents are also known as rule-based agents, as they follow predefined rules to accomplish
their goals and take specific actions based on certain conditions.
• Goal-based agents are easy to design and can handle complex tasks. They can be used in various
applications like robotics, computer vision, and natural language processing.
• Unlike basic models, a goal-based agent can determine the optimal course of decision-making and
action-taking processes depending on its desired outcome or goal.

Goal-based Agents
The working pattern of the goal-based agent can be divided into five steps:
Perception: The agent perceives its environment using sensors or other input
devices to collect information about its surroundings.
Reasoning: The agent analyzes the information collected and decides on the best
course of action to achieve its goal.
Action: The agent takes actions to achieve its goal, such as moving or manipulating
objects in the environment.
Evaluation: After taking action, the agent evaluates its progress towards the goal and
adjusts its actions, if necessary.
Goal Completion: Once the agent has achieved its goal, it either stops working or
begins working on a new goal.
Example
A chess-playing AI whose goal is winning the game.

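A toy grid-world sketch of this perceive-reason-act pattern (a full chess example would be far longer): the agent predicts the outcome of each action and picks one that brings it closer to its goal. The goal cell and action set are invented.

    GOAL = (2, 2)
    ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

    def distance(p):
        return abs(GOAL[0] - p[0]) + abs(GOAL[1] - p[1])

    def goal_based_agent(position):
        for name, (dx, dy) in ACTIONS.items():
            nxt = (position[0] + dx, position[1] + dy)
            if distance(nxt) < distance(position):   # predicted progress
                return name
        return "stop"                                # goal already reached

    print(goal_based_agent((0, 2)))   # right
    print(goal_based_agent((2, 2)))   # stop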
Utility-based Agents
• Utility-based agents are AI agents that make decisions based on maximizing a
utility function or value. They choose the action with the highest expected utility,
which measures how good the outcome is.
• This helps them deal with complex and uncertain situations more flexibly and
adaptively. Utility-based agents are often used in applications where they have to
compare and select among multiple options, such as resource allocation,
scheduling, and game-playing.

Utility-based Agents
• A utility-based agent aims to choose actions that lead to a high utility state. To
achieve this, it needs to model its environment, which can be simple or complex.
• Then, it evaluates the expected utility of each possible outcome based on the
probability distribution and the utility function.
• Finally, it selects the action with the highest expected utility and repeats this process at each time step.

Example
A delivery drone that delivers packages to customers efficiently while optimizing factors like delivery time,
energy consumption, and customer satisfaction.

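A minimal sketch of utility-based selection for the drone example: each action has possible outcomes with probabilities and utilities, and the agent picks the action with the highest expected utility. All numbers are invented for illustration.

    actions = {
        # action: list of (probability, utility) outcomes
        "route_highway": [(0.7, 10), (0.3, -5)],   # fast but may be congested
        "route_backroad": [(0.9, 6), (0.1, 0)],    # slower but reliable
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    best = max(actions, key=lambda a: expected_utility(actions[a]))
    print(best, expected_utility(actions[best]))   # route_highway 5.5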
Learning Agents
An AI learning agent is a software agent that can learn from past experiences and
improve its performance. It initially acts with basic knowledge and adapts
automatically through machine learning.
The learning agent comprises four main components:
Learning Element: It is responsible for learning and making improvements based on
the experiences it gains from its environment.
Critic: It provides feedback to the learning element by comparing the agent's performance to a predefined standard.
Performance Element: It selects and executes external actions based on the
information from the learning element and the critic.
Problem Generator: It suggests actions to create new and informative experiences
for the learning element to improve its performance.

Learning Agents
AI learning agents follow a cycle of observing, learning, and acting based on feedback. They interact
with their environment, learn from feedback, and modify their behavior for future interactions.
Observation: The learning agent observes its environment through sensors or other inputs.
Learning: The agent analyzes data using algorithms and statistical models, learning from feedback on
its actions and performance.
Action: Based on what it has learned, the agent acts in its environment to decide how to behave.
Feedback: The agent receives feedback about its actions and performance through rewards, penalties, or environmental cues.
Adaptation: Using feedback, the agent changes its behavior and decision-making processes,
updating its knowledge and adapting to its environment.

Example
AutoGPT analyzes the pros and cons of the top ten smartphones by exploring various websites and sources. It
evaluates the authenticity of websites using a sub-agent program. Finally, it generates a detailed report
summarizing the findings and listing the pros and cons of the top ten smartphone companies.

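A minimal sketch of the observe-act-feedback-adapt cycle as a two-armed bandit learner (far simpler than AutoGPT): the agent keeps a running value estimate per action and gradually prefers the action whose feedback has been best. The reward probabilities are invented.

    import random

    values = {"A": 0.0, "B": 0.0}        # learned value estimates
    counts = {"A": 0, "B": 0}
    true_reward = {"A": 0.3, "B": 0.7}   # hidden from the agent

    for step in range(500):
        # act: mostly exploit the best-known action, sometimes explore
        if random.random() < 0.1:
            action = random.choice(list(values))
        else:
            action = max(values, key=values.get)
        # feedback from the environment
        reward = 1.0 if random.random() < true_reward[action] else 0.0
        # adapt: update the running average for that action
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]

    print(max(values, key=values.get))   # almost always "B"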
PEAS Representation
PEAS is a model on which an AI agent works. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four words:
P: Performance measure
E: Environment
A: Actuators
S: Sensors

PEAS for self-driving cars

Performance: Safety, time, legal drive, comfort
Environment: Roads, other vehicles, road signs, pedestrians
Actuators: Steering, accelerator, brake, signal, horn
Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar
Example of Agents with their PEAS representation

Medical Diagnosis
• Performance measure: Healthy patient, minimized cost
• Environment: Patient, hospital, staff
• Actuators: Tests, treatments
• Sensors: Keyboard (entry of symptoms)

Vacuum Cleaner
• Performance measure: Cleanness, efficiency, battery life, security
• Environment: Room, table, wood floor, carpet, various obstacles
• Actuators: Wheels, brushes, vacuum extractor
• Sensors: Camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor

Part-picking Robot
• Performance measure: Percentage of parts in correct bins
• Environment: Conveyor belt with parts, bins
• Actuators: Jointed arms, hand
• Sensors: Camera, joint angle sensors
Types of Environments in AI

1. Fully Observable vs Partially Observable
2. Deterministic vs Stochastic
3. Competitive vs Collaborative
4. Single-agent vs Multi-agent
5. Static vs Dynamic
6. Discrete vs Continuous
7. Episodic vs Sequential
8. Known vs Unknown
Fully Observable vs Partially Observable
• When an agent's sensors can sense or access the complete state of the environment at each point in time, the environment is said to be fully observable; otherwise it is partially observable.
• Maintaining a fully observable environment is easy as there is no need to keep track of the history of the surroundings.
• An environment is called unobservable when the agent has no sensors at all.
Examples:
• Chess – the board is fully observable, and so are the opponent's moves.
• Driving – the environment is partially observable because what's around the corner is not known.
Deterministic vs Stochastic
• When the agent's current state and chosen action completely determine the next state of the environment, the environment is said to be deterministic.
• A stochastic environment is random in nature; the next state is not unique and cannot be completely determined by the agent.
Examples:
• Chess – there are only a few possible moves for a piece in the current state, and these moves can be determined.
• Self-driving cars – the actions of a self-driving car are not unique; they vary from time to time.
Competitive vs Collaborative
• An agent is said to be in a competitive environment when it competes against another agent to
optimize the output.
• The game of chess is competitive as the agents compete with each other to win the game which is
the output.
• An agent is said to be in a collaborative environment when multiple agents cooperate to produce
the desired output.
• When multiple self-driving cars are found on the roads, they cooperate with each other to avoid
collisions and reach their destination which is the output desired.

Single-agent vs Multi-agent
• An environment consisting of only one agent is said to be a single-agent environment.
• A person left alone in a maze is an example of a single-agent system.
• An environment involving more than one agent is a multi-agent environment.
• The game of football is multi-agent as it involves 11 players in each team.

Dynamic vs Static
• An environment that keeps constantly changing itself when the agent is up with some action is said
to be dynamic.
• A roller coaster ride is dynamic as it is set in motion and the environment keeps changing every
instant.
• An idle environment with no change in its state is called a static environment.
• An empty house is static as there's no change in the surroundings when an agent enters.
Discrete vs Continuous
• If an environment consists of a finite number of actions that can be deliberated in the environment
to obtain the output, it is said to be a discrete environment.
• The game of chess is discrete as it has only a finite number of moves. The number of moves might
vary with every game, but still, it’s finite.
• An environment in which the actions performed cannot be numbered, i.e. is not discrete, is said to be continuous.
• Self-driving cars are an example of continuous environments as their actions, such as driving and parking, cannot be numbered.
Episodic vs Sequential
• In an Episodic task environment, each of the agent’s actions is divided into atomic incidents or
episodes. There is no dependency between current and previous incidents. In each incident, an
agent receives input from the environment and then performs the corresponding action.
• Example: Consider a pick-and-place robot, which is used to detect defective parts from conveyor belts. Here, each time the robot (agent) makes a decision on the current part; there is no dependency between current and previous decisions.
• In a sequential environment, previous decisions can affect all future decisions. The next action of the agent depends on what action it has taken previously and what action it is supposed to take in the future.
• Example: Checkers – where the previous move can affect all the following moves.
Known vs Unknown

• In a known environment, the output for all probable actions is given.
• Obviously, in the case of an unknown environment, for an agent to make a decision, it has to gain knowledge about how the environment works.
Constraint Satisfaction Problems (CSP)

• Finding a solution that meets a set of constraints is the goal of constraint satisfaction problems (CSPs), a class of AI problems.
• The aim of a constraint satisfaction problem is to find values for a group of variables that satisfy a set of restrictions or rules.
• CSPs are frequently employed in AI for tasks including resource allocation, planning, scheduling, and decision-making.
Basic components in (CSP)
Variables: Variables are the things that need to be determined. Variables in a CSP are the objects that must have values assigned to them in order to satisfy a particular set of constraints. Boolean, integer, and categorical variables are just a few examples of the various types of variables. For instance, variables could stand for the puzzle cells that need to be filled with numbers in a Sudoku puzzle.
Domains: The range of potential values that a variable can take is represented by its domain. Depending on the problem, a domain may be finite or infinite. For instance, in Sudoku, the set of numbers from 1 to 9 can serve as the domain of a variable representing a puzzle cell.
Constraints: The rules that govern how variables relate to one another are known as constraints. Constraints in a CSP restrict the ranges of possible values for variables. Unary constraints, binary constraints, and higher-order constraints are a few examples of the various sorts of constraints. For instance, in a Sudoku problem, the constraints might be that each row, column, and 3×3 box can contain only one instance of each number from 1 to 9.
Constraint Satisfaction Problems (CSP) algorithms
• The backtracking algorithm is a depth-first search algorithm that methodically
investigates the search space of potential solutions up until a solution is discovered
that satisfies all the restrictions.
• The method begins by choosing a variable and giving it a value before repeatedly
attempting to give values to the other variables.
• The method returns to the prior variable and tries a different value if at any time a
variable cannot be given a value that fulfills the requirements.
• Once all assignments have been tried or a solution that satisfies all constraints has
been discovered, the algorithm ends.
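A minimal backtracking sketch on the 4-queen problem from the earlier slide, treated as a CSP: the variables are columns, the domains are rows 0-3, and the constraints forbid shared rows and diagonals.

    N = 4

    def consistent(assignment, col, row):
        for c, r in enumerate(assignment):
            if r == row or abs(r - row) == abs(c - col):
                return False                  # row or diagonal clash
        return True

    def backtrack(assignment=()):
        col = len(assignment)
        if col == N:                          # every variable assigned
            return assignment
        for row in range(N):                  # try each value in the domain
            if consistent(assignment, col, row):
                result = backtrack(assignment + (row,))
                if result is not None:
                    return result
        return None                           # dead end: backtrack

    print(backtrack())   # (1, 3, 0, 2): the row of the queen in each column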
Constraint Satisfaction Problems (CSP) algorithms
• The forward-checking algorithm is a variation of the backtracking algorithm that
condenses the search space using a type of local consistency.
• For each unassigned variable, the method keeps a list of remaining values and
applies local constraints to eliminate inconsistent values from these sets.
• The algorithm examines a variable’s neighbors after it is given a value to see
whether any of its remaining values become inconsistent and removes them from
the sets if they do.
• The algorithm backtracks if, after forward checking, a variable has no remaining values.
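The same 4-queens CSP with forward checking added: after each assignment, rows that have become inconsistent are pruned from every unassigned column's domain, and the search backtracks as soon as any domain empties.

    N = 4

    def prune(domains, col, row):
        new = {}
        for c, dom in domains.items():
            keep = {r for r in dom
                    if r != row and abs(r - row) != abs(c - col)}
            if not keep:
                return None          # a domain emptied: fail early
            new[c] = keep
        return new

    def fc(assignment, domains):
        col = len(assignment)
        if col == N:
            return assignment
        for row in sorted(domains[col]):
            rest = {c: d for c, d in domains.items() if c > col}
            pruned = prune(rest, col, row)
            if pruned is not None:
                result = fc(assignment + (row,), {col: {row}, **pruned})
                if result is not None:
                    return result
        return None

    print(fc((), {c: set(range(N)) for c in range(N)}))   # (1, 3, 0, 2)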
Constraint Satisfaction Problems (CSP) algorithms
• Constraint propagation algorithms are a class of algorithms that use local consistency and inference to condense the search space. These algorithms operate by propagating constraints between variables and removing inconsistent values from the variable domains using the information obtained.
