
COMP3411 Artificial Intelligence
What is an Agent?

Intelligent Agents

An agent is anything that can be viewed as:
- perceiving its environment through sensors
- acting upon that environment through effectors

A human agent has eyes, ears and other sensing organs, and uses hands, feet, mouth etc. as effectors.

How Agents Should Act

A rational agent is one that does the right thing; that is, it acts so as to be as successful as possible. A performance measure determines how successful an agent is.

Performance Measures
Asking the agent itself about its success may result in the "sour grapes" syndrome. An outside measure may be more objective. Creating a sensible performance measure is often rather difficult.

Examples of Performance Measures

- Vacuum cleaner: how much dirt removed?
- Call centre: how many calls handled?
- Teaching: how many students passed?

Rationality versus Omniscience

- An omniscient agent knows everything, including the outcome of any action.
- A rational agent works with reasonable expectations.

What is rational is determined by:

- the performance measure,
- the perceptions of the agent so far (the percept history),
- the agent's knowledge,
- the actions the agent can perform.

An Ideal Rational Agent

An ideal rational agent should do whatever action is expected to maximise its performance measure, on the basis of the evidence provided by the percept sequence and the built-in knowledge of the agent. This includes information gathering as a rational activity.

Mapping Percept Sequences to Actions - Lookup Table

Percept x    Action z
1.0          1.000
1.1          1.048
1.2          1.095
1.3          1.140
1.4          1.183
1.5          1.224
1.6          1.264
1.7          1.303
1.8          1.341
1.9          1.378
...

Mapping Percept Sequences to Actions - Function

The same percept-to-action mapping can be computed by a function instead of a lookup table:

function SQRT(x)
    z := 1.0
    repeat until |z*z - x| < 0.001
        z := z - (z*z - x) / (2*z)
    end
    return z
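As a rough check of the same idea, here is the slide's Newton iteration written as runnable Python (the function name sqrt and the printed example are illustrative, not part of the original slide):

def sqrt(x):
    """Approximate the square root of x by Newton's method."""
    z = 1.0                            # initial guess
    while abs(z * z - x) >= 0.001:     # same tolerance as the slide
        z = z - (z * z - x) / (2 * z)  # Newton step for f(z) = z^2 - x
    return z

print(round(sqrt(1.4), 3))  # 1.183, matching the table entry for x = 1.4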

Autonomy
We call an agent autonomous to the extent that its behaviour is determined by its own experience. That is, if we pre-program an agent to do everything it needs to do, without it learning from its own experience, we would ascribe little or no autonomy to it.

Agent Types -- (Example 1)

Medical diagnosis system
- Percepts: symptoms, findings, patient's answers
- Actions: questions, tests, treatments
- Goals: healthy patients, minimise costs
- Environment: patient, hospital

Agent Types -- (Example 2)

Satellite image analysis system
- Percepts: pixels of varying colour and intensity
- Actions: print a categorisation of the scene
- Goals: correct categorisation
- Environment: images from satellite

Agent Types -- (Example 3)

Part-picking robot
- Percepts: pixels of varying colour and intensity
- Actions: pick up parts and sort into bins
- Goals: place parts in correct bin
- Environment: conveyor belt with parts

Agent Types -- (Example 4)

Refinery controller
- Percepts: temperature, pressure readings
- Actions: open, close valves; adjust temperature
- Goals: maximise purity, yield, safety
- Environment: refinery

Basic Agent Program

function SKELETON-AGENT(percept) returns action
    static: memory              // the agent's memory of the world
    memory ← UPDATE-MEMORY(memory, percept)
    action ← CHOOSE-BEST-ACTION(memory)
    memory ← UPDATE-MEMORY(memory, action)
    return action
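A minimal Python sketch of this skeleton follows; update_memory and choose_best_action are hypothetical stubs standing in for UPDATE-MEMORY and CHOOSE-BEST-ACTION:

def update_memory(memory, event):
    # Hypothetical stub: just record every percept and action seen so far.
    return memory + [event]

def choose_best_action(memory):
    # Hypothetical stub: a real agent would pick the action expected to
    # maximise its performance measure given its memory.
    return "noop" if not memory else "respond-to-" + str(memory[-1])

def skeleton_agent(percept, memory):
    memory = update_memory(memory, percept)  # fold the percept into memory
    action = choose_best_action(memory)      # decide what to do
    memory = update_memory(memory, action)   # remember the chosen action
    return action, memory

action, memory = skeleton_agent("dirt-ahead", [])
print(action)  # respond-to-dirt-ahead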

Basic Agent Program (Table-Driven)

function TABLE-DRIVEN-AGENT(percept) returns action
    static: percepts            // a sequence, initially empty
            table               // a table of actions, indexed by percept sequences
    append percept to end of percepts
    action ← LOOKUP(percepts, table)
    return action
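A toy Python rendering (the table entries are invented for a two-square vacuum world; a real table would need one entry for every possible percept sequence, which is why this design scales so badly):

# Toy table: maps complete percept sequences to actions.
table = {
    ("dirty",): "suck",
    ("clean",): "move-right",
    ("clean", "dirty"): "suck",
}
percepts = []  # the percept sequence, initially empty

def table_driven_agent(percept):
    percepts.append(percept)                   # append percept to sequence
    return table.get(tuple(percepts), "noop")  # LOOKUP(percepts, table)

print(table_driven_agent("clean"))  # move-right
print(table_driven_agent("dirty"))  # suck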

Simple Reflex Agent Program

function SIMPLE-REFLEX-AGENT(percept) returns action
    static: rules               // a set of condition-action rules
    rule ← RULE-MATCH(percept, rules)
    action ← RULE-ACTION(rule)
    return action
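A Python sketch for the standard two-square vacuum world (the condition-action rules below are illustrative):

# Condition-action rules: (condition over the current percept, action).
rules = [
    (lambda loc, status: status == "dirty", "suck"),
    (lambda loc, status: loc == "A", "move-right"),
    (lambda loc, status: loc == "B", "move-left"),
]

def simple_reflex_agent(percept):
    location, status = percept
    for condition, action in rules:  # RULE-MATCH on the current percept only
        if condition(location, status):
            return action            # RULE-ACTION
    return "noop"

print(simple_reflex_agent(("A", "dirty")))  # suck
print(simple_reflex_agent(("A", "clean")))  # move-right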

Model-Based Reflex Agent

function MODEL-BASED-REFLEX-AGENT(percept) returns action
    static: state               // a description of the current world state
            rules               // a set of condition-action rules
    state ← UPDATE-STATE(state, percept)
    rule ← RULE-MATCH(state, rules)
    action ← RULE-ACTION(rule)
    state ← UPDATE-STATE(state, action)
    return action
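A sketch of the same structure in Python, again for the two-square vacuum world; the internal state records which squares the agent believes are clean, so it can stop when its model says there is nothing left to do (the state representation and rules are invented for illustration):

state = {"A": "unknown", "B": "unknown"}  # the agent's model of the world

def model_based_reflex_agent(percept):
    location, status = percept
    state[location] = status                       # UPDATE-STATE from percept
    if all(s == "clean" for s in state.values()):
        return "noop"                              # model: nothing left to do
    action = "suck" if status == "dirty" else (
        "move-right" if location == "A" else "move-left")
    if action == "suck":
        state[location] = "clean"                  # UPDATE-STATE from action
    return action

print(model_based_reflex_agent(("A", "dirty")))  # suck
print(model_based_reflex_agent(("A", "clean")))  # move-right
print(model_based_reflex_agent(("B", "clean")))  # noop: both squares clean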

Goal-Based Agent

function AGENT-WITH-EXPLICIT-GOAL(percept, goal) returns action
    static: state               // a description of the current world state
            rules               // a set of condition-action rules
    state ← UPDATE-STATE(state, percept)
    rule ← RULE-MATCH(state, rules, goal)
    action ← RULE-ACTION(rule, goal)
    state ← UPDATE-STATE(state, action)
    return action
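A compact Python sketch of the idea: the goal is an explicit input, and the agent uses a one-step model of its actions to pick whichever predicted outcome lies closest to the goal (the 1-D grid model is invented for illustration):

def goal_based_agent(position, goal):
    actions = {"move-left": -1, "move-right": +1, "stay": 0}
    # One-step lookahead: predict the outcome of each action...
    predicted = {a: position + delta for a, delta in actions.items()}
    # ...and choose the action whose outcome is nearest the goal.
    return min(predicted, key=lambda a: abs(predicted[a] - goal))

print(goal_based_agent(position=2, goal=5))  # move-right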

Utility-Based Agent

function UTILITY-BASED-AGENT(percept, utility-function) returns action
    static: state               // a description of the current world state
            rules               // a set of condition-action rules
    state ← UPDATE-STATE(state, percept)
    action ← UTILITY-MAXIMISER(state, rules, utility-function)
    state ← UPDATE-STATE(state, action)
    return action
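The utility-based version replaces the binary goal test with a numeric utility over predicted outcomes. A sketch in Python, reusing the invented 1-D model above, with an invented utility function that peaks at position 5:

def utility_based_agent(position, utility_function):
    actions = {"move-left": -1, "move-right": +1, "stay": 0}
    # Score each action by the utility of the state it is predicted to reach.
    return max(actions, key=lambda a: utility_function(position + actions[a]))

# Illustrative utility: "happiness" decreases with distance from position 5.
print(utility_based_agent(2, lambda pos: -abs(pos - 5)))  # move-right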

Properties of Environments

accessible vs. inaccessible
Are all relevant aspects of the environment known to the agent?

deterministic vs. non-deterministic
Is the outcome of an action in a given situation always the same?

episodic vs. sequential
Does the set-up for the task remain the same over time, with each episode independent of the agent's earlier actions?

Properties of Environments (contd)

static vs. dynamic
Does the environment change over time without the agent taking an action?

discrete vs. continuous
Does the environment present itself in discrete terms?

Properties of Environments - Examples

Environment                  Accessible  Deterministic  Episodic  Static  Discrete
Chess w/o clock              Yes         Yes            No        Yes     Yes
Chess with clock             Yes         Yes            No        Semi    Yes
Poker                        No          No             No        Yes     Yes
Taxi driving                 No          No             No        No      No
Medical diagnosis            No          No             No        No      No
Image analysis               Yes         Yes            Yes       Semi    Yes
Interactive English tutor    No          No             No        No      Yes

Summary
An agent is something that perceives and acts. An ideal agent is one that always takes the action that is expected to maximise its performance measure. An agent is autonomous to the extent that its action choices depend on its own experience.

Summary (contd)
An agent program maps from a percept to an action while updating an internal state. Reflex agents respond immediately to percepts. Goal-based agents act so as to achieve their goal(s). Utility-based agents try to maximise their own "happiness" (utility).

Summary (contd)
The type of environment has a substantial influence on what makes a successful agent design. The most challenging environments are inaccessible, non-deterministic, non-episodic, dynamic and continuous.
