DISTANCE EDUCATION

Course: ARTIFICIAL INTELLIGENCE


Intelligent Agents
Chapter 02
Objectives
• An agent is something that perceives and acts in an environment.
• The agent function for an agent specifies the action taken by the agent in
response to any percept sequence.
• The performance measure evaluates the behavior of the agent in an
environment.
• A rational agent acts so as to maximize the expected value of the performance
measure, given the percept sequence it has seen so far.
• A task environment specification includes the performance measure, the
external environment, the actuators, and the sensors. In designing an agent,
the first step must always be to specify the task environment as fully as
possible.
Objectives
• Task environments can be fully or partially observable, single-agent or
multiagent, deterministic or nondeterministic, episodic or sequential, static
or dynamic, discrete or continuous, and known or unknown.
• The agent program implements the agent function.
• Simple reflex agents respond directly to percepts, whereas model-based
reflex agents maintain internal state to track aspects of the world that are not
evident in the current percept. Goal-based agents act to achieve their goals,
and utility-based agents try to maximize their own expected “happiness.”
• All agents can improve their performance through learning.
Contents
1. Introduction
2. Agents and Environments
3. The Concept of Rationality
4. The Nature of Environments
5. The Structure of Agents
1. Introduction
Introduction
• Chapter 1 identified the concept of rational agents as central to
our approach to artificial intelligence.
• In this chapter, we will see that the concept of rationality can be
applied to a wide variety of agents operating in any imaginable
environment.
• This concept will be used to develop a small set of design
principles for building successful agents—systems that can
reasonably be called intelligent.
• We begin by examining agents, environments, and the coupling
between them.
• The observation that some agents behave better than others leads
naturally to the idea of a rational agent—one that behaves as well
as possible.
Introduction
• How well an agent can behave depends on the nature of the
environment; some environments are more difficult than others.
• We give a crude categorization of environments and show how
properties of an environment influence the design of suitable
agents for that environment.
• We describe a number of basic “skeleton” agent designs, which we
flesh out in the rest of the book.
2. Agents and Environments
Agent concept
• An agent is anything that can perceive its environment through
sensors and act upon that environment through actuators.
• An agent runs in a cycle of perceiving, thinking, and acting.

Figure 2.1 Agents interact with environments through sensors and actuators.
Terminology
• Sensors: devices that detect changes in the environment and send
the information to other electronic devices.
o An agent observes its environment through sensors.
• Actuators: components of machines that convert energy into motion.
o Actuators are responsible only for moving and controlling a system.
• Effectors: devices that affect the environment.
o e.g. legs, wheels, arms, fingers, wings, fins, and display screens.

Figure 2.1 Agents interact with environments through sensors and actuators.
Example Forms of Agents
• Human agent has eyes, ears, and other organs for sensors and
hands, legs, vocal tract, and so on for actuators.
• Robotic agent might have cameras and infrared range finders for
sensors and various motors for actuators.
• Software agent receives file contents, network packets, and
human input (keyboard/mouse/touchscreen/voice) as sensory
inputs and acts on the environment by writing files, sending
network packets, and displaying information or generating
sounds.
• For example, AI-based smart assistants like Cortana (Windows),
Siri (Apple), and Alexa (Amazon).
Environment concept
• The environment could be everything—the entire universe!
• In practice, it is just that part of the universe whose state we care
about when designing the agent: the part that affects what the
agent perceives and that is affected by the agent’s actions.
Percept concept
• The term percept refers to the content an agent’s sensors are
perceiving.
• An agent’s percept sequence is the complete history of everything
the agent has ever perceived.

Figure 2.1 Agents interact with environments through sensors and actuators.
Action concept
• In general, an agent’s choice of action at any given instant can
depend on its built-in knowledge and on the entire percept
sequence observed to date, but not on anything it hasn’t perceived.
• By specifying the agent’s choice of action for every possible
percept sequence, we have said more or less everything there is
to say about the agent.
• Mathematically speaking, we say that an agent’s behavior is
described by the agent function that maps any given percept
sequence to an action.

Figure 2.1 Agents interact with environments through sensors and actuators.
Agent program
• A table for the agent function can be constructed by trying out all
possible percept sequences and recording which actions the agent
does in response.
• The table is an external characterization of the agent.
• Internally, the agent function for an artificial agent will be
implemented by an agent program.
• It is important to keep these two ideas distinct.
• The agent function is an abstract mathematical description; the
agent program is a concrete implementation, running within some
physical system.
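To make the distinction concrete, here is a minimal Python sketch of a table-driven agent program; the percept format and table entries are illustrative assumptions, not part of the slides:

```python
# A table-driven agent program: the lookup table maps entire
# percept sequences to actions. The entries are illustrative.
percepts = []  # the percept sequence observed so far

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    # ... one entry for every possible percept sequence
}

def table_driven_agent(percept):
    """Append the new percept and look up the action for the whole
    sequence. This implements the agent function directly, which is
    why the table is unbounded in general."""
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")
```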
Example: the vacuum-cleaner world
• A robotic vacuum-cleaning agent in a world consisting
of squares that can be either dirty or clean.
• Figure 2.2 shows a configuration with just two squares, A and B.
• The vacuum agent perceives which square it is in and whether
there is dirt in the square.
• The agent starts in square A.
• The available actions are to move to the right, move to the left, suck up the
dirt, or do nothing.
• One very simple agent function is the following: if the current square is dirty,
then suck; otherwise, move to the other square.
• A partial tabulation of this agent function is shown in Figure 2.3 and an agent
program that implements it appears in Figure 2.8.

Figure 2.2 A vacuum-cleaner world with just two locations.
Example: the vacuum-cleaner world

Figure 2.3 Partial tabulation of a simple agent function for the vacuum-cleaner
world shown in Figure 2.2. The agent cleans the current square if it is dirty;
otherwise, it moves to the other square. Note that the table is of unbounded
size unless there is a restriction on the length of possible percept sequences.
Example: the vacuum-cleaner world
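This slide corresponds to the agent program of Figure 2.8. A minimal Python sketch of that program, assuming percepts arrive as (location, status) pairs:

```python
def reflex_vacuum_agent(percept):
    """Agent program for the two-square vacuum world: suck if the
    current square is dirty; otherwise move to the other square."""
    location, status = percept  # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
```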
3. The Concept of
Rationality
The Concept of Rationality
• A rational agent is one that does the right thing.
• Obviously, doing the right thing is better than doing the wrong
thing, but what does it mean to do the right thing?
Performance measures
• The notion of “the right thing” in AI is called consequentialism: we
evaluate an agent’s behavior by its consequences.
• When an agent is plunked down in an environment, it generates a
sequence of actions according to the percepts it receives.
• This sequence of actions causes the environment to go through a
sequence of states.
• If the sequence is desirable, then the agent has performed well.
• This notion of desirability is captured by a performance measure
that evaluates any given sequence of environment states.
Performance measures
• Humans have desires and preferences of their own, so the notion of
rationality as applied to humans has to do with their success in
choosing actions that produce sequences of environment states that
are desirable from their point of view.
• Machines, on the other hand, do not have desires and preferences of
their own; the performance measure is in the mind of the designer
of the machine, or in the mind of the users the machine is designed
for.
• Some agent designs have an explicit representation of the
performance measure, while in other designs the performance
measure is entirely implicit—the agent may do the right thing, but it
doesn’t know why.
Performance measures
• Suppose the performance measure for the vacuum-cleaner agent is the
amount of dirt cleaned up in a single eight-hour shift.
• With a rational agent, what you ask for is what you get: a rational agent
can maximize this performance measure by cleaning up the dirt, then
dumping it all on the floor, then cleaning it up again, and so on.
• A more suitable performance measure would reward the agent for
having a clean floor.
• For example, one point could be awarded for each clean square at each
time step (perhaps with a penalty for electricity consumed and noise
generated).
• As a general rule, it is better to design performance measures according
to what one actually wants to be achieved in the environment, rather
than according to how one thinks the agent should behave.
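A minimal sketch of the “one point per clean square per time step” measure; the penalty weights for electricity and noise are assumptions:

```python
def performance(history, energy_cost=0.01, noise_cost=0.005):
    """Score a run of the vacuum world. `history` is a list of
    (clean_squares, energy_used, noise_made) tuples, one per time
    step; the penalty weights are illustrative."""
    score = 0.0
    for clean_squares, energy, noise in history:
        score += clean_squares         # reward the clean state itself
        score -= energy_cost * energy  # penalize electricity consumed
        score -= noise_cost * noise    # penalize noise generated
    return score

# Example: both squares clean for 3 steps, with modest energy use.
print(performance([(2, 1.0, 0.2)] * 3))  # -> 5.967
```

Note that this measure rewards having a clean floor rather than the act of cleaning, which closes the dump-and-reclean loophole described above.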
Performance measures
• The notion of “clean floor” in the preceding paragraph is based on
average cleanliness over time.
• Yet the same average cleanliness can be achieved by two different
agents, one of which does a mediocre job all the time while the
other cleans energetically but takes long breaks.
• Deep philosophical question with far-reaching implications:
o Which is better—a reckless life of highs and lows, or a safe but humdrum
existence?
o Which is better—an economy where everyone lives in moderate poverty,
or one in which some live in plenty while others are very poor?
Rationality
• The rationality of an agent is measured by its performance
measure.
• Rationality can be judged on the basis of the following points:
o The performance measure that defines the criterion of success.
o The agent’s prior knowledge of its environment.
o The actions that the agent can perform.
o The percept sequence observed to date.
Definition of a rational agent
This leads to a definition of a rational agent:
For each possible percept sequence, a rational agent should select an action
that is expected to maximize its performance measure, given the evidence
provided by the percept sequence and whatever built-in knowledge the agent
has.
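As a hedged formalization (the notation here is an assumption, not from the slides): for percept sequence $P$, built-in knowledge $K$, and performance measure $U$ over environment state sequences $s$, a rational agent chooses

$$a^* = \operatorname*{argmax}_{a \in A} \; \mathbb{E}\left[\, U(s) \mid P, K, a \,\right].$$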
Example: the vacuum-cleaner
• Is the simple vacuum-cleaner agent with the agent function tabulated
in Figure 2.3 a rational agent?
• First, we need to say what the performance measure is, what is
known about the environment, and what sensors and actuators
the agent has. Let us assume the following:
• Performance measure awards one point for each clean square at each
time step, over a “lifetime” of 1000 time steps.
• The “geography” of the environment is known a priori (Figure 2.2) but
the dirt distribution and the initial location of the agent are not. Clean
squares stay clean and sucking cleans the current square.
Example: the vacuum-cleaner
• The Right and Left actions move the agent one square except when
this would take the agent outside the environment, in which case the
agent remains where it is.
• The only available actions are Right, Left, and Suck.
• The agent correctly perceives its location and whether that location
contains dirt.
• Under these circumstances the agent is indeed rational; its
expected performance is at least as good as any other agent’s.
4. The Nature of Environments
Specifying the task environment
• Specifying a task environment means specifying the performance
measure, the environment, and the agent’s actuators and sensors.
• All of these are grouped under the heading of the task environment.
• For the acronymically minded, we call this the PEAS (Performance,
Environment, Actuators, Sensors) description.
• In designing an agent, the first step must always be to specify the
task environment as fully as possible.
PEAS for Self-driving Cars
• PEAS representation of a self-driving car:
o Performance: Safety, time, legal drive,
comfort
o Environment: Roads, other vehicles, road
signs, pedestrians
o Actuators: Steering, accelerator, brake,
signal, horn
o Sensors: Camera, GPS, speedometer,
odometer, accelerometer, sonar.
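Since a PEAS description is structured data, it can be written down directly; a minimal sketch, with field names that are assumptions:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A PEAS task-environment specification."""
    performance: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer",
             "accelerometer", "sonar"],
)
```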
Properties of task environments
• Fully Observable vs Partially Observable:
o When an agent’s sensors can perceive the complete state of the
environment at each point in time, the environment is said to be fully
observable; otherwise, it is partially observable.
o A fully observable environment is convenient because the agent need
not keep track of the history of its surroundings.
o When the agent has no sensors at all, the environment is unobservable.
• Examples:
o Chess – the board and the opponent’s moves are both fully observable.
o Driving – the environment is partially observable because what’s around
the corner is not known.
Properties of task environments
• Deterministic vs nondeterministic
o A deterministic environment is one in which an agent’s present state and
chosen action totally determine the upcoming state of the environment.
o Otherwise, it is nondeterministic.
• Examples:
o Chess – In its current state, a piece has just a few alternative moves, and
these moves can be determined.
o Self-Driving Cars – The activities of self-driving cars are not consistent;
they change over time. Taxi driving is clearly nondeterministic in this
sense.
• Stochastic:
o Used by some as a synonym for “nondeterministic.”
Competitive vs Collaborative
• When an agent competes with another agent to optimize output, it
is said to be in a competitive environment.
• When numerous agents work together to generate the required
result, the agent is said to be in a collaborative environment.
• Examples:
o Chess – the agents compete with each other to win the game which is the
output.
o Self-Driving Cars – When numerous self-driving cars are located on the
road, they work together to prevent crashes and arrive at their
destination, which is the intended result.
Single-agent vs Multi-agent
• A single-agent environment has only one agent.
• A multi-agent environment has more than one agent.
• Examples:
o A person left alone in a maze is an example of a single-agent environment.
o Football is a multi-agent game since each team has 11 players.
Dynamic vs Static
• A dynamic environment is one that changes frequently when the
agent is doing some action.
• A static environment is one that does not change its state.
• Examples:
o A roller coaster ride is dynamic since it is in motion and the environment
changes all the time.
o An empty house is static because nothing changes when an agent arrives.
Discrete vs Continuous
• When there are a finite number of percepts and actions that may be
performed in an environment, that environment is referred to as a
discrete environment.
• An environment in which the actions performed cannot be counted is
referred to as continuous.
• Examples:
o Chess is a discrete game since it has a finite number of moves.
o Self-driving cars operate in a continuous environment since their
activities, such as driving, parking, and so on, cannot be counted.
Episodic vs Sequential
• Each of the agent’s activities in an Episodic task environment is broken
into atomic events or episodes.
• There is no link between the present and past events.
• Example:
o Consider the Pick and Place robot, which is used to detect damaged components
from conveyor belts. There is no dependency between past and present
decisions.
• Previous decisions in a Sequential environment can influence all future
decisions.
• The agent’s next action is determined by the actions it has taken
before and the actions it expects to take in the future.
• Example:
o Checkers - A game in which the previous move affects all following movements.
Known vs Unknown
• In a known environment, the results of all actions are known to the
agent; in an unknown environment, the agent must learn how the
environment works in order to act well.
5. The Structure of Agents
Intelligent Agent Structure
• Agent’s structure can be viewed as:
o Agent = Architecture + Agent Program
• The three main terms involved in the structure:
o Architecture: the machinery that an AI agent executes on.
o Agent Function: is used to map a percept to an action: f: P* → A
o Agent program: is an implementation of agent function.
• An agent program executes on the physical architecture to produce
function f.
Types of Agent
• Agents can be grouped into five classes based on their degree of
intelligence and capability.
Simple Reflex Agents
• They choose actions only based on the
current percept.
• They are rational only when the correct decision can be made on
the basis of the current percept alone, i.e., when the environment
is fully observable.
• Condition-Action Rule:
o is a rule that maps a state (condition) to
an action
o if condition then take action
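A minimal sketch of a simple reflex agent driven by condition-action rules; the rule table and percept format are assumptions:

```python
# Condition-action rules: "if condition then take action".
rules = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    """Select an action using only the current percept."""
    return rules.get(percept, "NoOp")

print(simple_reflex_agent(("B", "Dirty")))  # -> Suck
```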
Model Based Reflex Agents

• Use a model of the world to choose their actions
• Maintain an internal state
• Model − knowledge about “how things happen in the world”
• Internal State − a representation of unobserved aspects of the
current state, based on the percept history
• Updating the state requires information about:
o How the world evolves
o How the agent’s actions affect the world
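A minimal sketch of that update loop; the state representation and helper functions are assumptions for illustration:

```python
state = {"last_percept": None}  # the agent's internal model of the world
last_action = None

def update_state(state, action, percept):
    """Fold the new percept (and the effect of our last action) into
    the internal state. A real model would also encode how the world
    evolves on its own."""
    new_state = dict(state)
    new_state["last_percept"] = percept
    new_state["last_action"] = action
    return new_state

def match_rule(state):
    """Pick an action from the (illustrative) vacuum-world rules."""
    location, status = state["last_percept"]
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def model_based_reflex_agent(percept):
    global state, last_action
    state = update_state(state, last_action, percept)
    last_action = match_rule(state)
    return last_action
```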
Goal Based Agents
• Choose their actions in order to achieve
goals
• The goal-based approach is more flexible
than a reflex agent since the knowledge
supporting a decision is explicitly
modeled, thereby allowing for
modifications
• Goal is the description of desirable
situations
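A minimal sketch of goal-based action selection via one-step lookahead in the vacuum world; the transition model `result` and the goal test are assumptions:

```python
def result(state, action):
    """Hypothetical transition model: predict the next state."""
    squares, loc = state
    squares = dict(squares)
    if action == "Suck":
        squares[loc] = "Clean"
    elif action == "Right":
        loc = "B"
    elif action == "Left":
        loc = "A"
    return squares, loc

def goal_based_agent(state, actions=("Suck", "Right", "Left")):
    """Choose an action whose predicted outcome satisfies the goal
    (all squares clean); a real agent would search more deeply."""
    for action in actions:
        squares, _ = result(state, action)
        if all(v == "Clean" for v in squares.values()):
            return action
    return actions[1]  # no one-step win: move on and keep looking

print(goal_based_agent(({"A": "Dirty", "B": "Clean"}, "A")))  # -> Suck
```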
Utility Based Agents
• Choose actions based on a preference
(utility) for each state
• Goals are inadequate when:
o There are conflicting goals, of which only a
few can be achieved.
o Goals have some uncertainty of being
achieved, and you need to weigh the
likelihood of success against the
importance of a goal.
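A minimal sketch of utility-based selection, maximizing expected utility over actions; the outcome distributions are assumptions:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def utility_based_agent(action_outcomes):
    """Pick the action with the highest expected utility."""
    return max(action_outcomes,
               key=lambda a: expected_utility(action_outcomes[a]))

# Example: weighing likelihood of success against importance.
action_outcomes = {
    "risky": [(0.5, 10.0), (0.5, -5.0)],  # expected utility 2.5
    "safe":  [(1.0, 2.0)],                # expected utility 2.0
}
print(utility_based_agent(action_outcomes))  # -> risky
```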
Learning Agents
• Can learn from its past experiences
• Start to act with basic knowledge and
then able to act and adapt automatically
through learning
• Have four major components which
enable them to learn from their
experience:
o Critic: evaluates how well the agent
performs against a set performance
benchmark.
Learning Agents
o Learning Elements: takes input from the
Critic and helps Agent improve
performance by learning from the
environment.
o Performance Element: decides on the
action to be taken to improve the
performance.
o Problem Generator: takes input from
other components and suggests actions
resulting in a better experience.
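A minimal skeleton wiring the four components together; every name here is an assumption for illustration:

```python
class LearningAgent:
    """Skeleton learning agent: the critic scores behavior against a
    performance benchmark, the learning element improves the
    performance element, and the problem generator proposes
    exploratory actions."""

    def __init__(self, performance_element, critic, learner, problem_generator):
        self.performance_element = performance_element  # picks actions
        self.critic = critic                            # scores outcomes
        self.learner = learner                          # improves the policy
        self.problem_generator = problem_generator      # suggests experiments

    def step(self, percept):
        feedback = self.critic(percept)
        self.learner(self.performance_element, feedback)
        # Occasionally try something new for the sake of learning.
        exploratory = self.problem_generator(percept)
        return exploratory or self.performance_element(percept)
```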
Rules of AI Agents
• Four rules that an agent has to follow to be termed an AI agent:
o Rule 1: An AI agent must have the ability to perceive the environment.
o Rule 2: The observation must be used to make decisions.
o Rule 3: The decision should result in an action.
o Rule 4: The action taken by an AI agent must be a rational action.
Turing Test
• In 1950, Alan Turing introduced a
test to check whether a machine can
think like a human or not; this test is
known as the Turing Test.
• In this test, Turing proposed that a
computer can be said to be
intelligent if it can mimic human
responses under specific conditions.
Turing Test
• The questions and answers can be like:
o Interrogator: Are you a computer?
o Player: No
o Interrogator: Multiply two large numbers
such as (256896489*456725896)
o Player: (pauses for a long time, then gives
a wrong answer)
• If the interrogator cannot tell which
respondent is the machine and which is
the human, then:
o The computer passes the test successfully,
and
o The machine is said to be intelligent and can
think like a human.
Features Required To Pass The Turing Test
• Natural language processing: NLP is required to communicate
with Interrogator in general human language like English.
• Knowledge representation: To store and retrieve information
during the test.
• Automated reasoning: To use the previously stored information
for answering the questions.
• Machine learning: To adapt to new changes and detect
generalized patterns.
• Vision (For total Turing test): To recognize the interrogator’s
actions and other objects during the test.
• Motor Control (For total Turing test): To act upon objects if
requested.
