
Artificial Intelligence

Intelligent Agents

Instructor: Saeid Abrishami
Agents
• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
• Human agent:
▫ eyes, ears, and other organs for sensors;
▫ hands, legs, mouth, and other body parts for actuators
• Robotic agent:
▫ cameras and infrared range finders for sensors;
▫ various motors for actuators
Agents and Environments

• The agent function maps from percept histories to actions:
▫ f : P* → A
• The agent program runs on the physical architecture to produce f
• agent = architecture + program
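In Python terms, the agent function is just a mapping from a percept sequence to an action. The sketch below is illustrative only; the type aliases and the trivial example function are names chosen here, not taken from the slides.

from typing import Callable, Sequence

Percept = tuple   # e.g. (location, status) in the vacuum world
Action = str      # e.g. "Right" or "Suck"

# The agent function f : P* -> A maps a complete percept history to an action.
AgentFunction = Callable[[Sequence[Percept]], Action]

def always_right(percept_history: Sequence[Percept]) -> Action:
    """A trivial agent function: ignore the history and always move Right."""
    return "Right"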

The vacuum-cleaner world

• Environment: squares A and B
• Percepts: [location and contents], e.g. [A, Dirty]
• Actions: Left, Right, Suck, and NoOp

The vacuum-cleaner world

Percept sequence            Action
[A, Clean]                  Right
[A, Dirty]                  Suck
[B, Clean]                  Left
[B, Dirty]                  Suck
[A, Clean], [A, Clean]      Right
[A, Clean], [A, Dirty]      Suck
…                           …
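A partial tabulation like this can be written down directly as a lookup table keyed by percept sequences; the sketch below is a minimal Python version, and the name AGENT_TABLE is an assumption made here.

# Partial agent-function table for the vacuum world:
# key = percept sequence (tuple of percepts), value = action.
AGENT_TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
    # ... one entry per possible percept sequence
}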

The vacuum-cleaner world

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status == Dirty then return Suck
  else if location == A then return Right
  else if location == B then return Left

What is the right function? Can it be implemented in a small agent program?
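A direct Python rendering of this pseudocode might look as follows; it is only a sketch, and the function name is chosen here rather than taken from the slides.

def reflex_vacuum_agent(percept):
    """Reflex vacuum agent: decide from the current percept [location, status] only."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

# Example: reflex_vacuum_agent(("A", "Dirty"))  -> "Suck"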
Rational agents
• A rational agent is one that does the right thing.
▫ Every entry in the table is filled out correctly.
• What is the right thing?
▫ Approximation: the most successful agent.
▫ Measure of success?
• Performance measure: an objective criterion for success of an agent's behavior
▫ E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the time taken, the electricity consumed, the noise generated, etc.
• Define the performance measure according to what is wanted in the environment, rather than according to how the agent should behave.
Rational agents
• What is rational at a given time depends on four things:
▫ Performance measure,
▫ Prior environment knowledge,
▫ Actions the agent can perform,
▫ Percept sequence to date (sensors).
• DEF: A rational agent chooses whichever action maximizes the expected value of the performance measure, given the percept sequence to date and prior environment knowledge.
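This definition amounts to an expected-value maximization. The sketch below assumes a hypothetical outcome model prob(outcome, percepts, action) and a performance function over outcomes, neither of which is specified in the slides.

def rational_action(actions, outcomes, prob, performance, percepts):
    """Choose the action with the highest expected value of the performance measure.

    prob(outcome, percepts, action): probability of an outcome, given the percept
    sequence to date and prior environment knowledge (an assumed model).
    performance(outcome): the performance measure assigned to that outcome.
    """
    def expected_value(action):
        return sum(prob(o, percepts, action) * performance(o) for o in outcomes)

    return max(actions, key=expected_value)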

Rational agents
• Rationality ≠ omniscience
▫ An omniscient agent knows the actual outcome of its actions.
• Rationality ≠ perfection
▫ Rationality maximizes expected performance, while perfection maximizes actual performance.

Rational agents
• The proposed definition requires:
▫ Information gathering / exploration
 To maximize future rewards
▫ Learning from percepts
 Extending prior knowledge
▫ Agent autonomy
 To compensate for incorrect prior knowledge
 An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt)

Environments
• To design a rational agent we must specify its task environment.
• PEAS description of the environment:
▫ Performance
▫ Environment
▫ Actuators
▫ Sensors

Environments
• Example: fully automated taxi
▫ PEAS description of the environment:
 Performance: safety, destination, profits, legality, comfort, …
 Environment: streets/freeways, other traffic, pedestrians, weather, …
 Actuators: steering, accelerator, brake, horn, speaker/display, …
 Sensors: video, sonar, speedometer, engine sensors, keyboard, GPS, …
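If it helps to keep the PEAS specification alongside the agent code, it can be stored as a small record; the sketch below is illustrative only, with the class and field names chosen here.

from dataclasses import dataclass, field

@dataclass
class PEAS:
    """PEAS description of a task environment."""
    performance: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

automated_taxi = PEAS(
    performance=["safety", "destination", "profits", "legality", "comfort"],
    environment=["streets/freeways", "other traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerator", "brake", "horn", "speaker/display"],
    sensors=["video", "sonar", "speedometer", "engine sensors", "keyboard", "GPS"],
)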

Environment Types
• Fully observable (vs. partially observable):
▫ An agent's sensors give it access to the complete state of the environment at each point in time.
• Deterministic (vs. stochastic):
▫ The next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, the environment is strategic.)
• Episodic (vs. sequential):
▫ The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.
Environment Types
• Static (vs. dynamic):
▫ The environment is unchanged while an agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.)
• Discrete (vs. continuous):
▫ A limited number of distinct, clearly defined percepts and actions.
• Single agent (vs. multiagent):
▫ An agent operating by itself in an environment.

Environment Types
                   Chess with a clock   Chess without a clock   Taxi driving
Fully observable   Yes                  Yes                     No
Deterministic      Strategic            Strategic               No
Episodic           No                   No                      No
Static             Semi                 Yes                     No
Discrete           Yes                  Yes                     No
Single agent       No                   No                      No
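These properties can also be recorded programmatically so that an agent design can be matched against its task environment; a minimal sketch, with the class and field names assumed here rather than taken from the slides:

from dataclasses import dataclass

@dataclass(frozen=True)
class TaskEnvironment:
    """Environment properties that largely determine the agent design."""
    fully_observable: bool
    deterministic: str   # "yes", "strategic", or "no"
    episodic: bool
    static: str          # "yes", "semi", or "no"
    discrete: bool
    single_agent: bool

chess_with_clock = TaskEnvironment(True, "strategic", False, "semi", True, False)
taxi_driving     = TaskEnvironment(False, "no", False, "no", False, False)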

• The environment type largely determines the agent design.
• The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.
Agent Types
• How does the inside of the agent work?
▫ Agent = architecture + program
• All agents have the same skeleton:
▫ Input = current percepts
▫ Output = action
▫ Program = manipulates input to produce output (a sketch follows below)
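A minimal Python rendering of this shared skeleton; the class and method names are assumptions made here, not taken from the slides.

class Agent:
    """Shared agent skeleton: input = current percept, output = action."""

    def __init__(self, program):
        self.program = program      # the agent program: percept -> action
        self.percepts = []          # percept history, kept by the architecture

    def step(self, percept):
        """The architecture feeds in the current percept and returns an action."""
        self.percepts.append(percept)
        return self.program(percept)

# Example: an agent running the reflex vacuum program sketched earlier.
# vacuum_agent = Agent(reflex_vacuum_agent)
# vacuum_agent.step(("A", "Dirty"))   # -> "Suck"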

Table-lookup agent

function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences

  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
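A Python sketch of this program, reusing a dictionary such as AGENT_TABLE from the vacuum-world example above; the helper name is chosen here, not given in the slides.

def make_table_driven_agent(table):
    """Return an agent program that looks up the full percept history in a table."""
    percepts = []                               # the percept sequence, initially empty

    def program(percept):
        percepts.append(percept)                # append percept to the end of percepts
        return table.get(tuple(percepts))       # action <- LOOKUP(percepts, table)

    return program

# Example:
# agent = make_table_driven_agent(AGENT_TABLE)
# agent(("A", "Clean"))   # -> "Right"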

Table-lookup agent
• Drawbacks:
▫ Huge table
▫ Takes a long time to build the table
▫ No autonomy
▫ Even with learning, it would take a long time to learn the table entries

Agent Types
• Four basic kinds of agent programs will be discussed:
▫ Simple reflex agents
▫ Model-based reflex agents
▫ Goal-based agents
▫ Utility-based agents
• All of these can be turned into learning agents.

Simple Reflex Agent

Simple Reflex Agents
function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules

  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action

Will only work if the environment is fully observable
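A Python sketch of a simple reflex agent for the vacuum world, with the condition-action rules written as (condition, action) pairs; all names are chosen here rather than taken from the slides.

def make_simple_reflex_agent(rules, interpret_input):
    """Return a program that acts on the current percept only, via condition-action rules."""

    def program(percept):
        state = interpret_input(percept)          # INTERPRET-INPUT
        for condition, action in rules:           # RULE-MATCH
            if condition(state):
                return action                     # RULE-ACTION
        return "NoOp"

    return program

# Vacuum-world rules: suck if dirty, otherwise move to the other square.
vacuum_rules = [
    (lambda s: s["status"] == "Dirty", "Suck"),
    (lambda s: s["location"] == "A", "Right"),
    (lambda s: s["location"] == "B", "Left"),
]
agent = make_simple_reflex_agent(
    vacuum_rules, lambda p: {"location": p[0], "status": p[1]}
)
# agent(("B", "Clean"))   # -> "Left"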

Model-based reflex agents

Model-based reflex agents
function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: rules, a set of condition-action rules
          state, a description of the current world state
          action, the most recent action, initially none

  state ← UPDATE-STATE(state, action, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
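A corresponding Python sketch, where update_state stands in for the agent's world model; again, all names are assumptions made here, not taken from the slides.

def make_model_based_reflex_agent(rules, update_state, initial_state=None):
    """Return an agent program that maintains internal state across percepts."""
    state, last_action = initial_state, None

    def program(percept):
        nonlocal state, last_action
        # UPDATE-STATE: fold the latest percept and most recent action into the model.
        state = update_state(state, last_action, percept)
        for condition, action in rules:          # RULE-MATCH
            if condition(state):
                last_action = action             # remember the most recent action
                return action                    # RULE-ACTION
        last_action = "NoOp"
        return last_action

    return program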

Goal-based Agents

Utility-based Agents

Learning Agents

