
19UCSPEX01

Foundations of AI

Course Outcomes:
PX601.1: Understand intelligent agent frameworks to build AI applications.
PX601.2: Apply appropriate search algorithms for problem solving.
PX601.3: Analyze knowledge representation to solve Artificial Intelligence tasks.
PX601.4: Design software agents to solve a given real-time scenario.
PX601.5: Build NLP applications using Artificial Intelligence.
UNIT I - INTRODUCTION
Introduction - Definition - Future of
Artificial Intelligence - Characteristics of
Intelligent Agents–Typical Intelligent
Agents -AI Applications - Problem
Solving Approach to Typical AI Problems.
UNIT II - PROBLEM SOLVING METHODS
Problem solving Methods - Search Strategies-
Uninformed - Informed - Heuristics - Local
Search Algorithms and Optimization Problems –
Searching with Partial Observations - Constraint
Satisfaction Problems - Constraint Propagation -
Backtracking Search

UNIT III-KNOWLEDGE REPRESENTATION
First Order Predicate Logic–Prolog Programming –
Unification–Forward Chaining-Backward Chaining
–Resolution–Knowledge Representation–
Ontological Engineering-Categories and Objects –
Events – Mental Events and Mental Objects –
Reasoning Systems for Categories – Reasoning
with Default Information

UNIT IV- SOFTWARE AGENTS
Architecture for Intelligent Agents – Agent
communication – Negotiation and Bargaining –
Argumentation among Agents – Trust and
Reputation in Multi-agent systems.
UNIT V- APPLICATIONS
AI applications – Language Models – Information Retrieval – Information
Extraction – Natural Language Processing – Machine Translation – Speech
Recognition – Robot – Hardware – Perception – Planning – Moving.
TEXT BOOKS:
1.Stuart Russell and Peter Norvig, “Artificial
Intelligence – A Modern Approach”, Fourth
Edition, Pearson Education, 2021.

2.Dan W. Patterson, “Introduction to Artificial
Intelligence and Expert Systems”, Pearson
Education, 2007.

REFERENCES:
1.Kevin Knight, Elaine Rich, and Nair B.,
“Artificial Intelligence”, McGraw Hill, 2008.
2.Patrick H. Winston, "Artificial Intelligence",
Third Edition, Pearson Education, 2006.
3. Mehryar Mohri, Afshin Rostamizadeh, Ameet
Talwalkar, “Foundations of Machine Learning”,
MIT Press, 2012.
4. M. Tim Jones, “Artificial Intelligence: A
Systems Approach(Computer Science)”, Jones
and Bartlett Publishers, Inc.; First Edition, 2008.

Artificial Intelligence History
• 1943: Warren McCulloch and Walter Pitts: a model of
artificial Boolean neurons to perform computations.
• First steps toward connectionist computation and learning
(Hebbian learning).
• Marvin Minsky and Dean Edmonds (1951) constructed the
first neural network computer.
• 1950: Alan Turing’s “Computing Machinery and
Intelligence”
• First complete vision of AI.
• Idea of Genetic Algorithms
Artificial Intelligence
• The traditional problems of AI research include reasoning, knowledge
representation, planning, learning, natural language processing, perception,
and the ability to move and manipulate objects.

• The current problems of AI are mostly related to Machine Learning (ML). ML
refers to a set of algorithms that improve automatically through experience
by the use of data. Within ML, an important category is Deep Learning (DL),
which utilizes so-called multi-layered neural networks.
• AI is defined as an intelligent entity created by humans,
capable of performing tasks intelligently without being
explicitly instructed, and capable of thinking and acting
rationally and humanely.
Artificial Intelligence: Definition
• Views of AI fall into four categories:

The textbook advocates the "acting rationally" view.
Thinking Humanly
• “The exciting new effort to make computers think . . .
machines with minds, in the full and literal sense.”
(Haugeland, 1985)
• “[The automation of] activities that we associate
with human thinking, activities such as decision-
making, problem solving, learning” (Bellman,
1978)
Thinking Rationally
• “The study of mental faculties through the
use of computational models.” (Charniak and
McDermott, 1985)
• “The study of the computations that make
it possible to perceive, reason, and act.”
(Winston, 1992)
Acting Humanly
• “The art of creating machines that perform
functions that require intelligence when
performed by people.” (Kurzweil,1990)
• “The study of how to make computers do
things at which, at the moment, people are
better.” (Rich and Knight, 1991)
Acting Rationally
• “Computational Intelligence is the study
of the design of intelligent agents.” (Poole
et al., 1998)
• “AI . . . is concerned with intelligent behavior
in artifacts.” (Nilsson, 1998)
Thinking Humanly
• Cognitive science: modeling the processes of
human thought.
• Through a set of experiments and computational
models, trying to build good explanations of what
we do when we solve a particular task.
• Relevance to AI: to solve a problem that humans (or
other living beings) are capable of, it's good to know
how we go about solving it.
• Early approaches tried to solve every problem exactly
the way a human would. We now know that this is not
always the best approach.
Acting Humanly
• How do you distinguish intelligent behavior from
intelligence?
• Turing test, by A. Turing, 1950: determining if a
program qualifies as artificially intelligent by
subjecting it to an interrogation along with a
human counterpart.

• Alan Turing publishes "Computing Machinery
and Intelligence" in which he proposed a test.
• The test can check the machine's ability to
exhibit intelligent behavior equivalent to
human intelligence, called a Turing test.

• The program passes the test if a human judge
cannot distinguish between the answers of the
program and the answers of the human subject.
• It hasn't been passed yet.

Thinking Rationally
• It refers to the ability of AI systems to make
logical and reasoned decisions based on available
information and objectives.
• Rational AI systems are capable of logical
reasoning, which involves deriving conclusions
from premises using established rules of
inference.
• AI systems often involve decision-making
processes where they choose actions or make
predictions based on available data and
predefined objectives.
• Rational thinking in AI may involve optimization,
where the AI system aims to find the best solution
or set of solutions to a given problem.

Acting Rationally
• It refers to making decisions or taking actions
that are expected to maximize the achievement of
a given set of goals or objectives.
• Rational behavior involves selecting the most
appropriate course of action based on available
information and reasoning.

• Many AI applications adopt the intelligent agent
approach.
• An agent is an entity capable of generating action.
• In AI a rational agent must be autonomous, capable
of perceiving its environment, adaptable, with a
given goal.
• Most often the agents are small pieces of code with
a specific proficiency. The problem is solved by
combining the skills of several agents.
Types of Artificial Intelligence
Types of AI based on capabilities and scope:
• Artificial Narrow Intelligence (ANI)
• Artificial General Intelligence (AGI)
• Artificial Super Intelligence (ASI)
Artificial Narrow Intelligence (ANI)
• This is the most common form of AI that you’d find in
the market now. These Artificial Intelligence systems
are designed to solve one single problem and would be
able to execute a single task really well.
• By definition, they have narrow capabilities, like
recommending a product for an e-commerce user or
predicting the weather.
Artificial General Intelligence (AGI)
• Also known as Strong AI or Full AI.
• AGI refers to AI systems that possess the ability
to understand, learn, and apply knowledge across
a wide range of tasks at a level comparable to
human intelligence.
• An AGI system would have the capability to
perform any intellectual task that a human being
can do, demonstrating adaptability and
understanding in various domains.
• AGI is still a theoretical concept.

Artificial Super Intelligence (ASI)
• An Artificial Super Intelligence (ASI) system would be
able to surpass all human capabilities, including
decision making, rational reasoning, and even things
like making better art and building emotional
relationships.
Types of AI
• Traditional methods were prominent in the earlier
stages of AI development, and they are still used in
specific applications where the problem can be
precisely defined and encoded in rule-based systems.
• Modern methods have been developed and are actively
used in various applications; the field of AI is
dynamic, and new methods continue to emerge.
• Traditional methods
• Expert Systems: They use a knowledge base of rules
and facts to provide advice or make decisions.
• Rule-Based Systems: Rule-based systems involve
the use of explicit rules and logic to make decisions.
• Knowledge Representation: Knowledge
representation is the process of encoding information
about the world in a form that a computer system can
utilize to solve complex tasks.

• Search Algorithms: Search algorithms are used
in problem-solving tasks where the solution
space needs to be explored systematically.
• Planning and Scheduling Systems: Planning
systems involve generating a sequence of
actions to achieve a specific goal.
• Inference Engines: Inference engines are
components of rule-based systems that apply
logical rules to the available knowledge base to
derive new information or make decisions.
• Natural Language Processing (NLP): Traditional
NLP methods involve rule-based approaches to
understand and process human language.
• Constraint Satisfaction Problems: Constraint
satisfaction problems involve finding a solution that
satisfies a set of constraints.
• Symbolic Reasoning: Symbolic reasoning involves
manipulating symbols and logical operations to
represent and solve problems.
Modern method
• Deep Learning: Deep learning relies on neural
networks with many layers (deep neural
networks) to model complex patterns and
representations in data.
• Transfer Learning: Transfer learning involves
pre-training a model on a large dataset and then
fine-tuning it for a specific task with a smaller
dataset
• Reinforcement Learning: Reinforcement learning
involves training agents to make sequences of
decisions by interacting with an environment.
• Generative Adversarial Networks (GANs):
GANs consist of a generator and a discriminator
trained in tandem through adversarial training.
• Natural Language Processing (NLP): NLP
techniques, especially transformer-based models
like BERT (Bidirectional Encoder Representations
from Transformers) and GPT (Generative Pre-trained
Transformer), have shown significant advancements in
language understanding, generation, and translation.
• Meta-Learning: Meta-learning, or learning to learn,
involves training models to quickly adapt to new tasks
with minimal data. This approach is particularly useful in
scenarios where data is scarce.
• Federated Learning: Federated learning allows models to
be trained across decentralized devices or servers without
exchanging raw data. It is beneficial for
privacy-preserving machine learning.
Generative AI
• Generative AI refers to a class of artificial
intelligence (AI) systems that have the ability to
generate new content or data that is similar to,
but not identical to, the input data they were
trained on.
• These systems are often designed to produce
realistic and contextually relevant outputs in various
domains, such as images, text, audio, and more.
• Generative AI is a subset of AI that focuses on the
creative aspect of machine learning.
• Generative models are at the core of generative AI.
These models aim to learn the underlying patterns
and structures present in a given dataset and then
generate new data points that resemble the
training data.
Advantages of Artificial Intelligence (AI)
• Reduction in human error
• Available 24×7
• Helps in repetitive work
• Digital assistance
• Faster decisions
• Rational Decision Maker
• Medical applications
• Improves Security
• Efficient Communication
Future of Artificial Intelligence
• AI for Healthcare:
• AI is expected to have a significant impact on
healthcare, with applications ranging from
personalized medicine and drug discovery to
medical imaging analysis and predictive analytics.
• AI may play a crucial role in addressing healthcare
challenges and improving patient outcomes.
• AI in Robotics:
• AI-powered robots are expected to become more
sophisticated and capable, leading to
advancements in fields such as manufacturing,
healthcare, and autonomous systems.
Collaborative robots (cobots) and socially
intelligent robots may become more prevalent.

• AI-driven Creativity:
• Generative AI models are becoming increasingly
capable of producing creative content, such as
art, music, and literature.
• The future may see AI systems collaborating
with humans in creative endeavors and
contributing to various forms of artistic
expression.

• AI in Cybersecurity:
• AI is anticipated to play a crucial role in
enhancing cybersecurity by identifying and
responding to threats in real time. AI-driven
solutions may help anticipate and counteract
cyberattacks more effectively.

Increasing role of remote and virtual care in
healthcare
• Remote and virtual care have become increasing
trends in healthcare today.
• Thanks to the advancements in technology, it has
become possible for healthcare providers to offer
remote care services and eliminate the need for
patients to physically visit a healthcare facility.
• This trend has been given a boost by the ongoing
COVID-19 pandemic, which has triggered the
need for contactless healthcare services.
• Artificial Intelligence (AI) and Machine Learning
(ML) have been instrumental in the development
of remote and virtual care in healthcare.
• These tools have been utilized by healthcare
providers to develop numerous healthcare apps,
chatbots, and other tools that enable patients to
access healthcare services remotely.
• AI and ML have also been used to analyze data
received from remote patient monitoring devices
and wearable technology used by patients,
resulting in improved patient outcomes.
• The increasing role of remote and virtual care drives
the development of AI and ML healthcare tools to
address the demand for better healthcare services.
4. Agents and Environments
• An agent is anything that can be viewed as
perceiving its environment through sensors and
acting upon that environment through actuators.
• A human agent has eyes, ears, and other organs for
sensors and hands, legs, vocal tract, and so on for
actuators.
• A robotic agent might have cameras and infrared range
finders for sensors and various motors for actuators.
• A software agent receives keystrokes, file contents, and
network packets as sensory inputs and acts on the
environment by displaying on the screen, writing files,
and sending network packets.
• Percept – refers to the agent's perceptual inputs at any
given instant.
• Percept sequence - An agent’s percept sequence is the
complete history of everything the agent has ever
perceived.
• Agent function - An agent’s behavior is described by the
agent function that maps any given percept sequence to
an action.
• Agent program - the agent function for an artificial agent
will be implemented by an agent program.
Concept of rationality

• An agent should act as a rational agent. A rational agent
is one that does the right thing; that is, the right actions
will cause the agent to be most successful in the environment.
Rationality
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of
success.
• The agent's prior knowledge of the environment.
• The actions that the agent can perform.
• The agent's percept sequence to date.
• This leads to a definition of a rational agent (ideal rational
agent):
• “For each possible percept sequence, a rational agent
should select an action that is expected to maximize its
performance measure, given the evidence provided by the
percept sequence and whatever built-in knowledge the
agent has.” That is, the task of a rational agent is to
maximize the performance measure based on its percept sequence.
Omniscience, learning, and autonomy
• An omniscient agent knows the actual outcome of its actions
and can act accordingly; but omniscience is impossible in
reality.
• A rational agent should not only gather information, but also
learn as much as possible from what it perceives.
• The agent's initial configuration could reflect some prior
knowledge of the environment, but as the agent gains
experience this may be modified and augmented.
• Successful agents split the task of computing the agent function
into three different periods:
➢ When the agent is being designed, some of the computation is
done by its designers;
➢ When it is deliberating on its next action, the agent does more
computation; and
➢ As it learns from experience, it does even more computation to
decide how to modify its behavior.
• A rational agent should be autonomous – it should learn what it can to
compensate for partial or incorrect prior knowledge.
6. Nature of environments
• A task environment specification includes the
performance measure, the external environment, the
actuators, and the sensors.
• In designing an agent, the first step must always be to
specify the task environment as fully as possible.
• Task environments are specified as a PAGE (Percept,
Action, Goal, Environment) or PEAS (Performance,
Environment, Actuators, Sensors) description; both mean
the same.
• The following table describes some of the agent types and the basic
PEAS description.
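The PEAS table itself is not reproduced here, but as an illustration, the standard automated taxi-driver example can be written out as a simple data structure; the specific entries below are a sketch, not an exhaustive list:

```python
# PEAS description for an automated taxi driver (illustrative entries).
taxi_peas = {
    "Performance": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators": ["steering", "accelerator", "brake", "signal", "horn", "display"],
    "Sensors": ["cameras", "sonar", "speedometer", "GPS", "odometer", "keyboard"],
}
```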
Properties of task environment
• Fully observable vs Partially observable
• Static vs Dynamic
• Discrete vs Continuous
• Deterministic vs Stochastic
• Single-agent vs Multi-agent
• Episodic vs Sequential
Fully observable vs Partially Observable:
• If an agent's sensors give it access to the complete state of
the environment at each point in time, then we say that
the task environment is fully observable.
• Fully observable environments are convenient because
the agent need not maintain any internal state to keep
track of the world.
• An environment might be partially observable because of
noisy and inaccurate sensors or because parts of the state
are simply missing from the sensor data.
Deterministic vs Stochastic:
• If an agent's current state and selected action can
completely determine the next state of the environment,
then such environment is called a deterministic
environment.
• A stochastic environment is random in nature and cannot
be determined completely by an agent.
• In a deterministic, fully observable environment, the agent
does not need to worry about uncertainty.
Static vs Dynamic:
• If the environment can change itself while an agent is
deliberating then such environment is called a dynamic
environment else it is called a static environment.
• Static environments are easy to deal with because an agent
does not need to keep looking at the world while
deciding on an action.
• However, for dynamic environments, agents need to keep
looking at the world before each action.
• Taxi driving is an example of a dynamic environment
whereas Crossword puzzles are an example of a static
environment.
Discrete vs Continuous:
• If in an environment there are a finite number of percepts
and actions that can be performed within it, then such an
environment is called a discrete environment else it is
called continuous environment.
• A chess game comes under discrete environment as there
is a finite number of moves that can be performed.
• A self-driving car is an example of a continuous
environment.
Single-agent vs Multi-agent
• If only one agent is involved in an environment, and
operating by itself then such an environment is called
single agent environment.
• However, if multiple agents are operating in an
environment, then such an environment is called a
multi-agent environment.
• The agent design problems in the multi-agent
environment are different from single agent environment.
Episodic vs Sequential:
• In an episodic environment, the agent's experience is
divided into atomic episodes.
• Each episode consists of its own percepts and actions, and
does not depend on previous episodes.
• In sequential environments, the current decision could
affect all future decisions, e.g., chess and taxi driving.
• Episodic environments are much simpler than sequential
environments because the agent does not need to think
ahead.
Task Environment    Observable   Deterministic   Episodic     Static    Discrete     Agents
Crossword Puzzle    Fully        Deterministic   Sequential   Static    Discrete     Single
Chess with clock    Fully        Strategic       Sequential   Static    Discrete     Multi
Poker               Partially    Strategic       Sequential   Static    Discrete     Multi
Backgammon          Fully        Stochastic      Sequential   Static    Discrete     Multi
Taxi driving        Partially    Stochastic      Sequential   Dynamic   Continuous   Multi

STRUCTURE OF AGENTS
• An intelligent agent is a combination of Agent Program and
Architecture.
Intelligent Agent = Agent Program + Architecture
• Agent Program is a function that implements the agent mapping from
percepts to actions.
• There exists a variety of basic agent program designs, reflecting the
kind of information made explicit and used in the decision process.
• The designs vary in efficiency, compactness, and flexibility. The
appropriate design of the agent program depends on the nature of the
environment.
• Architecture is a computing device used to run the agent program.
function TABLE-DRIVEN-AGENT(percept) returns an action
  persistent: percepts, a sequence, initially empty
              table, a table of actions, indexed by percept
              sequences, initially fully specified
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
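The pseudocode above can be transcribed into Python as a minimal sketch; the table entries here are illustrative, for a toy two-square vacuum world, and a full table would need one entry per possible percept sequence:

```python
def make_table_driven_agent(table):
    """Return an agent function that looks up the full percept sequence."""
    percepts = []  # the percept sequence, initially empty

    def agent(percept):
        percepts.append(percept)           # append percept to percepts
        return table.get(tuple(percepts))  # action <- LOOKUP(percepts, table)

    return agent

# Illustrative table for the two-square vacuum world
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
```

Note that the table is indexed by the entire percept sequence, which is why the table-driven approach does not scale: its size grows exponentially with the length of the percept history.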
Types of agent
To perform the mapping task four types of agent programs
are there. They are:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
CHARACTERISTICS OF INTELLIGENT
AGENTS
a. Capability to work on their own (autonomy)
b. Exhibition of goal-oriented behaviour
c. Transportable over networks (mobility)
d. Dedication to a single repetitive task
e. Ability to interact with humans, systems, and other agents
f. Inclusion of a knowledge base
g. Ability to learn
Simple reflex agents
• The simplest kind of agent is the simple reflex agent.
• These agents select actions on the basis of the current
percept, ignoring the rest of the percept history.
• For example, the vacuum cleaner agent is a simple reflex agent.


function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
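The same agent, transcribed directly into Python:

```python
def reflex_vacuum_agent(location, status):
    # Condition-action rules for the two-square vacuum world (squares A and B)
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"
```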
• Condition-action rules allow the agent to make the
connection from percept to action.
• Condition-action rule: if condition then action
• Rectangle : to denote the current internal state of
the agents decision process.
• Oval : to represent the background information in
the process.
• A condition-action rule is a rule that maps a state (i.e., a
condition) to an action.
• If the condition is true, then the action is taken; otherwise
it is not. This agent function only succeeds when the
environment is fully observable.
• For simple reflex agents operating in partially observable
environments, infinite loops are often unavoidable.
• It may be possible to escape from infinite loops if the
agent can randomize its actions.
function SIMPLE-REFLEX-AGENT(percept) returns an action
  persistent: rules, a set of condition–action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action
• INTERPRET-INPUT – generates an abstracted
description of the current state from the percept.
• RULE-MATCH – returns the first rule in the set
of rules that matches the given state description.
• RULE.ACTION – the action of the selected rule is
executed for the given percept.
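A generic Python sketch of this scheme; the rule representation (a condition predicate paired with an action string) is an illustrative choice, not the only one:

```python
def interpret_input(percept):
    # Generate an abstracted state description from the percept
    # (here the percept already serves as the state: a (location, status) pair)
    return percept

def rule_match(state, rules):
    # Return the action of the first rule whose condition holds in the state
    for condition, action in rules:
        if condition(state):
            return action
    return None

def simple_reflex_agent(percept, rules):
    state = interpret_input(percept)
    return rule_match(state, rules)

# Condition-action rules for the two-square vacuum world
vacuum_rules = [
    (lambda s: s[1] == "Dirty", "Suck"),
    (lambda s: s[0] == "A", "Right"),
    (lambda s: s[0] == "B", "Left"),
]
```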
Model-based reflex agents (agents that keep track of the world)
• The most effective way to handle partial observability is
for the agent to keep track of the part of the world it
can't see now.
• That is, the agent combines the current percept with the
old internal state to generate an updated description of
the current state.
• The current percept is combined with the old internal state
to derive the new current state, and the state description is
updated accordingly.
• This update requires two kinds of knowledge in the
agent program.
• First, we need some information about how the world
evolves independently of the agent.
• Second, we need some information about how the agent's
own actions affect the world.
• This knowledge, whether implemented in simple
Boolean circuits or in complete scientific theories, is
called a model of the world.
• An agent that uses such a model is called a model-based agent.
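A Python sketch of a model-based reflex agent; the `update_state` function stands in for the model (how the world evolves and how the agent's actions affect it), and its form here is an assumption for illustration:

```python
class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = {}                   # internal description of the world
        self.last_action = None
        self.update_state = update_state  # the model of the world
        self.rules = rules                # condition-action rules

    def act(self, percept):
        # Combine the old internal state, last action, and current percept
        self.state = self.update_state(self.state, self.last_action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action

# Illustrative model for the two-square vacuum world: remember the last
# observed status of each square and the agent's current location.
def vacuum_update(state, action, percept):
    location, status = percept
    new_state = dict(state)
    new_state[location] = status
    new_state["at"] = location
    return new_state

vacuum_rules = [
    (lambda s: s.get(s["at"]) == "Dirty", "Suck"),
    (lambda s: s["at"] == "A", "Right"),
    (lambda s: True, "Left"),
]
```

Unlike the simple reflex agent, the internal state here persists across percepts, so the agent can remember which squares it has already observed to be clean.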
Goal-based agents
• An agent knows the description of the current state and also
needs some sort of goal information that describes
situations that are desirable.
• The action that matches the current state is selected
depending on the goal state.
• The goal-based agent is more flexible when there is more
than one destination: after reaching one destination, a new
destination can be specified and the goal-based agent comes
up with new behavior.
• Search and planning are the subfields of AI devoted to
finding action sequences that achieve the agent's goals.
• Although the goal-based agent appears less efficient, it is
more flexible because the knowledge that supports its
decisions is represented explicitly and can be modified.
• The goal-based agent's behavior can easily be changed,
for example to go to a different location.
Utility-based agents (utility refers to "the quality of being useful")
• An agent generates a goal state with high-quality behavior
(utility); that is, if more than one sequence exists to reach
the goal state, then the sequence that is more reliable, safer,
quicker, and cheaper than the others is selected.
• A utility function maps a state (or sequence of states) onto
a real number, which describes the associated degree of
happiness.
• The utility function can be used for two different cases:
➢ First, when there are conflicting goals, only some of
which can be achieved (for e.g., speed and safety), the
utility function specifies the appropriate tradeoff.
➢ Second, when the agent aims for several goals, none of
which can be achieved with certainty, the likelihood of
success can be weighed against the importance of the goals.
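A toy illustration of the tradeoff idea: among candidate action sequences that all reach the goal, a utility function with hypothetical weights for speed and safety picks the best one. The candidates and weights below are invented for illustration:

```python
# Hypothetical candidate routes: (name, travel_time_minutes, safety_score)
candidates = [
    ("highway", 30, 0.70),
    ("side_roads", 45, 0.95),
]

def utility(candidate):
    # Illustrative tradeoff: penalize travel time, reward safety
    _, time_cost, safety = candidate
    return -0.01 * time_cost + safety

best = max(candidates, key=utility)
```

Changing the weights changes the chosen route, which is exactly the point: the utility function makes the speed/safety tradeoff explicit.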
Learning agents
• Learning allows the agent to operate in initially unknown
environments and to become more competent than its
initial knowledge alone would allow.
• A learning agent can be divided into four conceptual
components:
1. Learning element – responsible for making improvements.
It uses feedback from the critic on how the agent is doing
and determines how the performance element should be
modified to do better in the future.
2. Performance element – responsible for selecting external
actions; it is equivalent to the agent itself: it takes in
percepts and decides on actions.
3. Critic – tells the learning element how well the agent is
doing with respect to a fixed performance standard.
4. Problem generator – responsible for suggesting actions
that will lead to new and informative experiences.
Problem solving agents
• Intelligent agents are supposed to maximize their performance
measure.
• Goals help organize behavior: the agent can adopt a goal and
aim at satisfying it.
A problem can be defined formally by five components:
• INITIAL STATE: The state the agent starts in.
• ACTIONS: A description of the possible actions available to the
agent.
• SUCCESSOR FUNCTION: A description of what each action
does. Given a state x, SUCCESSOR-FN(x) returns a set of
<action, successor> ordered pairs. The initial state and successor
function together define the state space.
• GOAL TEST: Determines whether a given state is a goal state.
• PATH COST: Assigns a numeric cost to each path. The step cost
of taking action a from state x to state y is denoted by c(x, a, y).
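The five components map naturally onto a small container type; the field names follow the text, and the implementation is a sketch:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Problem:
    initial_state: Any
    actions: Callable[[Any], list]               # ACTIONS(s): actions applicable in s
    result: Callable[[Any, Any], Any]            # RESULT(s, a): successor state
    goal_test: Callable[[Any], bool]             # GOAL TEST: is s a goal state?
    step_cost: Callable[[Any, Any, Any], float]  # c(x, a, y): cost of one step
```

Any concrete problem (route finding, the 8-puzzle, the vacuum world) can then be expressed by filling in these five slots.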
Example: A simplified road map of part of Romania
• INITIAL STATE: The initial state for the Romania problem (Fig 2.1)
might be described as In(Arad).
• ACTIONS: Given a particular state s, ACTIONS(s) returns the set of
actions that can be executed in s; each of these actions is applicable
in s. For example, from the state In(Arad), the applicable actions are
{Go(Sibiu), Go(Timisoara), Go(Zerind)}.
• TRANSITION MODEL (SUCCESSOR FUNCTION): Specified by a
function RESULT(s, a) that returns the state that results from doing
action a in state s. The term successor refers to any state reachable
from a given state by a single action. For example,
RESULT(In(Arad), Go(Zerind)) = In(Zerind).
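The Arad fragment of the map can be encoded as a dictionary (road distances in km, as on the textbook map); the Go(City) string convention for actions is an illustrative choice:

```python
# Neighbors of Arad on the Romania map, with road distances in km
romania = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
}

def actions(state):
    # ACTIONS(s): one Go(...) action per neighboring city
    return [f"Go({city})" for city in romania[state]]

def result(state, action):
    # RESULT(s, Go(City)) = City
    return action[len("Go("):-1]
```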


Example problem
• Toy problems
• Illustrate/test various problem-solving methods
• Concise, exact description
• Can be used to compare performance
• Examples: 8-puzzle, 8-queens problem,
Cryptarithmetic, Vacuum world, Missionaries
and cannibals, simple route finding
• Real-world problem
• More difficult
• No single, agreed-upon specification (state,
successor function, edge cost)
• Examples: Route finding, VLSI layout, Robot
navigation, Assembly sequencing
Toy Problems
The vacuum world
• The world has only two locations
• Each location may or may not contain dirt
• The agent may be in one location or the other
• 8 possible world states
• Three possible actions: Left, Right, Suck
• Goal: clean up all the dirt
• States: one of the 8 states given above
• Operators: move left, move right, suck
• Goal test: no dirt left in any square
• Path cost: each action costs one
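The formulation above, written out in Python; the state representation (agent location plus the set of dirty squares) is one natural choice among several:

```python
# State: (agent_location, frozenset of dirty squares).
# 2 locations x 4 dirt patterns = 8 possible world states.
ACTIONS = ["Left", "Right", "Suck"]

def vacuum_result(state, action):
    location, dirty = state
    if action == "Suck":
        return (location, dirty - {location})  # clean the current square
    if action == "Right":
        return ("B", dirty)
    if action == "Left":
        return ("A", dirty)

def goal_test(state):
    _, dirty = state
    return len(dirty) == 0  # no dirt left in any square

def path_cost(cost_so_far, state, action, next_state):
    return cost_so_far + 1  # each action costs one
```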
Real world problem
• Route finding
• Defined in terms of locations and transitions along links
between them
• Applications: routing in computer networks, automated
travel advisory systems, airline travel planning systems
• Touring and traveling salesperson problems
• “Visit every city on the map at least once and end in
Bucharest”
• Needs information about the visited cities
• Goal: Find the shortest tour that visits all cities
• NP-hard, but a lot of effort has been spent on
improving the capabilities of TSP algorithms
• Applications: planning movements of automatic
circuit board drills
• VLSI layout
• Place cells on a chip so they don’t overlap and there
is room for connecting wires to be placed between
the cells
• Robot navigation
• Generalization of the route finding problem
• No discrete set of routes
• Robot can move in a continuous space
• Infinite set of possible actions and states
• Assembly sequencing
• Automatic assembly of complex objects
• The problem is to find an order in which to
assemble the parts of some object
Consider the airline travel problems that must be solved
by a travel-planning Web site:
• States: Each state obviously includes a location (e.g.,
an airport) and the current time. Furthermore, because
the cost of an action (a flight segment) may depend on
previous segments, their fare bases, and their status as
domestic or international, the state must record extra
information about these “historical” aspects.
• Initial state: The user’s home airport.
• Actions: Take any flight from the current
location, in any seat class, leaving after the current
time, leaving enough time for within-airport
transfer if needed.
• Transition model: The state resulting from
taking a flight will have the flight’s destination as
the new location and the flight’s arrival time as
the new time.
• Goal state: A destination city. Sometimes the
goal can be more complex, such as “arrive at the
destination on a nonstop flight.”
• Action cost: A combination of monetary cost,
waiting time, flight time, customs and
immigration procedures, seat quality, time of day,
type of airplane, frequent-flyer reward points, and
so on.
