
Artificial Intelligence

COMP-241, Level-6
Mohammad Fahim Akhtar, Dr. Mohammad Hasan
Department of Computer Science
Jazan University, KSA

Chapter 2: Intelligent Agents

In which we discuss the nature of agents, perfect or otherwise, the diversity of environments,
and the resulting menagerie of agent types.

An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators.

• A human agent has eyes, ears, and other organs for sensors, and hands, legs, mouth, and
other body parts for actuators.
• A robotic agent might have cameras and infrared range finders for sensors and various
motors for actuators.
• A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and
sending network packets.
• We will make the general assumption that every agent can perceive its own actions.

• An agent’s choice of action at any given instant can depend on the entire percept
sequence observed to date.
• An agent’s behavior is described by the agent function that maps any given percept
sequence to an action.
• Intelligent agents are supposed to maximize their performance measure.
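The agent-function idea above can be sketched as a table-driven agent: a lookup table maps each complete percept sequence observed so far to an action. The table entries and percept names below are invented purely for illustration.

```python
# A minimal, table-driven realization of the agent function: it maps the
# entire percept sequence observed to date to an action via a lookup table.

def make_table_driven_agent(table):
    percepts = []  # the full percept sequence observed so far

    def agent(percept):
        percepts.append(percept)
        # The agent function maps the whole history, not just the latest percept.
        return table.get(tuple(percepts), "NoOp")

    return agent

# Hypothetical table for a two-percept history:
table = {
    ("A-dirty",): "Suck",
    ("A-dirty", "A-clean"): "Right",
}
agent = make_table_driven_agent(table)
print(agent("A-dirty"))   # Suck
print(agent("A-clean"))   # Right
```

Such tables grow impossibly large for realistic percept sequences, which is why the chapter goes on to more compact agent programs.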

Developed BY: Mohammad Fahim Akhtar Version 1.4 Page 1 of 10


The following diagram shows how agents interact with environments through sensors and
actuators:

Good Behavior: The concept of Rationality

Rational Agent

An agent should strive to "do the right thing" based on what it can perceive and the actions it
can perform. The right action is the one that will cause the agent to be most successful.

For each possible percept sequence, a rational agent should select an action that is expected to
maximize its performance measure, given the evidence provided by the percept sequence and
whatever built-in knowledge the agent has.

Performance Measure

The criterion for success of an agent’s behavior. When an agent is plunked down in an
environment, it generates a sequence of actions according to the percepts it receives.

As a general rule, it is better to design performance measures according to what one actually
wants in the environment, rather than according to how one thinks the agent should behave.
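As a toy illustration of this point, the measure below scores the vacuum world by the number of clean squares at each time step (what we actually want), rather than a proxy such as the amount of dirt sucked up (how we might think the agent should behave). The state history is invented for illustration.

```python
# A hypothetical run of a two-square vacuum world, as a sequence of states.
history = [
    {"A": "dirty", "B": "dirty"},
    {"A": "clean", "B": "dirty"},
    {"A": "clean", "B": "clean"},
    {"A": "clean", "B": "clean"},
]

def clean_squares_per_step(history):
    # Reward what we actually want -- clean floors -- at every time step,
    # rather than "amount of dirt sucked up", which an agent could game
    # by dumping dirt back out and cleaning it again.
    return sum(list(state.values()).count("clean") for state in history)

print(clean_squares_per_step(history))  # 0 + 1 + 2 + 2 = 5
```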



Vacuum Cleaner Agent



Demonstration program in Lisp:
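The Lisp listing itself does not appear here; the following is a minimal Python sketch of the same reflex vacuum agent for the two-square world (locations A and B) used in the AIMA text.

```python
def reflex_vacuum_agent(location, status):
    # Condition-action rules for the two-square vacuum world:
    # suck if the current square is dirty, otherwise move to the other square.
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(reflex_vacuum_agent("A", "Dirty"))  # Suck
print(reflex_vacuum_agent("A", "Clean"))  # Right
print(reflex_vacuum_agent("B", "Clean"))  # Left
```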

The nature of environments

PEAS – Performance measure, Environment, Actuators, Sensors
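A PEAS description can be written down as a simple record. The example below is for the automated taxi driver; the field contents follow the AIMA discussion but are illustrative rather than exhaustive.

```python
# PEAS description of the automated taxi driver, as a simple record.
peas_taxi = {
    "Performance": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "signal", "horn", "display"],
    "Sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer", "keyboard"],
}

for component, items in peas_taxi.items():
    print(f"{component}: {', '.join(items)}")
```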



The structure of agents

Agent = architecture + program



Types of Intelligent Agents

1. Simple Reflex Agents

This is the simplest type of agent architecture possible. The underlying concept is simple and
involves little intelligence. For each condition that the agent can observe through its sensors,
based on the changes in the environment in which it is operating, there is a specific action (or
actions) defined by the agent program. So, for each observation that it receives from its sensors,
the agent checks the condition-action rules, finds the appropriate condition, and then performs
the relevant action defined in that rule using its actuators. This is useful only when the
environment is fully observable and the agent program contains all the condition-action rules
possible for each observation, which is rarely achievable in real-world scenarios and is limited
to toy simulation-based problems. The figure below shows the concept of a simple reflex agent.

Figure – Simple reflex agent

In the above case, the agent program contains a look-up table which has been constructed prior
to the agent becoming functional in the specific environment. The look-up table should consist of
all possible mappings from percept sequences to their respective actions. Thus, based on the
input that the agent receives via its sensors (about the current state of the environment), the
agent accesses this look-up table, retrieves the action mapped to that percept sequence, and
tells its actuators to perform that action. This process is not very effective in a scenario where
the environment is constantly changing while the agent is acting, because the agent is acting on
a percept sequence that it acquired prior to the rapid change in the environment, and therefore
the performed action might not suit the environment’s state after the change.
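The condition-action cycle described above can be sketched as follows; the rules, the percept format, and the `interpret_input` hook are hypothetical, chosen to mirror the vacuum world.

```python
# A simple reflex agent: interpret the current percept only (no history),
# then fire the first condition-action rule that matches.

def simple_reflex_agent(rules, interpret_input):
    def agent(percept):
        state = interpret_input(percept)   # state derived from this percept alone
        for condition, action in rules:    # find the first matching rule
            if condition(state):
                return action
        return "NoOp"                      # no rule matched
    return agent

# Hypothetical rules for the two-square vacuum world.
rules = [
    (lambda s: s["status"] == "Dirty", "Suck"),
    (lambda s: s["location"] == "A", "Right"),
    (lambda s: s["location"] == "B", "Left"),
]
agent = simple_reflex_agent(rules, interpret_input=lambda p: p)
print(agent({"location": "A", "status": "Dirty"}))  # Suck
```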



2. Model-Based Reflex Agents

This is an improved version of the first type of agent, with the capability of choosing an action
based on how the environment evolves or changes from its current state. As in all agent types,
model-based reflex agents acquire percepts about the environment through their sensors. These
percepts give the agent an understanding of what the environment is like at that moment, limited
to the facts its sensors can provide. The agent then updates an internal state from the percept
history, which yields some unobserved facts about the current state of the environment. To
update the internal state, the agent needs information about how the world (environment)
evolves independently of the agent’s actions, and about how the agent’s own actions affect the
environment. This incorporated knowledge of how the environment evolves is known as a model
of the world, which explains why this agent type is called model-based.

Figure – Model-based reflex agent



The above diagram shows the architecture of a model-based reflex agent. Once the current
percept is received by the agent through its sensors, the previously stored internal state,
combined with the new percept, determines the revised description of the current state.
Therefore, the agent function updates its internal state every time it receives a new percept.
Then the updated state is matched against the look-up table of condition-action rules; the
matching entry determines what action needs to be performed, and the actuators are told to
do so.
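A minimal sketch of this architecture, assuming a toy vacuum-style world: the internal state is folded together with each new percept before the rules are consulted. All names and rules below are invented for illustration.

```python
# A model-based reflex agent: it maintains an internal state, updated from
# the percept history via a (toy) model of the world.

def model_based_reflex_agent(update_state, rules):
    state = {}

    def agent(percept):
        nonlocal state
        state = update_state(state, percept)  # fold the new percept into the model
        for condition, action in rules:       # then consult the rules as before
            if condition(state):
                return action
        return "NoOp"
    return agent

# Toy model: remember the last observed status of each square.
def update_state(state, percept):
    new = dict(state)
    new[percept["location"]] = percept["status"]
    return new

rules = [
    (lambda s: "Dirty" in s.values(), "Suck"),
    (lambda s: len(s) < 2, "Right"),   # some square has not been observed yet
    (lambda s: True, "NoOp"),          # everything observed clean
]
agent = model_based_reflex_agent(update_state, rules)
print(agent({"location": "A", "status": "Dirty"}))  # Suck
```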

3. Goal-Based Agents

This agent is designed to perform actions in order to reach a certain goal. In any agent, the main
criterion is to achieve a certain objective function, which can in layman’s terms be referred to as
a goal. In this agent type, goal information is therefore defined so that the agent can determine
which action or actions, out of the available set, should be performed to reach the goal
effectively. For example, if we are designing an automated taxi driver, the destination of the
passenger (which would be fed in as a goal to reach) provides useful insight for selecting the
roads that lead to that destination. The difference from the first two agent types is that there is
no hard-wired condition-action rule set; the actions are based purely on the internal state and
the goals defined. This can sometimes be less effective, when the agent does not explicitly know
what to do at a certain time, but it is more flexible, since the agent can modify its actions to
reach the goal based on the knowledge it gathers about state changes in the environment.

Figure – Goal-based agent
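A minimal sketch of this idea, assuming a toy one-dimensional world and a single step of lookahead (a real goal-based agent would typically search or plan over many steps): each available action is simulated with a model, and an action whose predicted outcome satisfies the goal is chosen.

```python
# A goal-based agent: instead of matching condition-action rules, simulate
# each action with a model and pick one whose predicted result meets the goal.

def goal_based_agent(actions, predict, goal_test):
    def agent(state):
        for action in actions:
            if goal_test(predict(state, action)):  # would this action reach the goal?
                return action
        return "NoOp"                              # no single action reaches it
    return agent

# Toy world: positions on a line; the goal is to stand at position 3.
predict = lambda state, action: state + (1 if action == "Right" else -1)
agent = goal_based_agent(["Right", "Left"], predict, goal_test=lambda s: s == 3)
print(agent(2))  # Right
print(agent(4))  # Left
```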



4. Utility-Based Agents

Goals by themselves are not adequate to produce effective behavior in agents. If we consider the
automated taxi driver agent, just reaching the destination would not be enough; passengers also
require qualities such as safety, reaching the destination in less time, cost effectiveness, and so
on. To combine goals with these desired qualities, a concept called a utility function is used.
Based on a comparison between different states of the world, a utility value is assigned to each
state: the utility function maps a state (or a sequence of states) to a numeric representation of
satisfaction. The ultimate objective of this type of agent is therefore to maximize the utility
value derived from the state of the world. The following diagram depicts the architecture of a
utility-based agent.

Figure – Utility-based agent
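A minimal sketch of this idea: each candidate action's predicted next state is scored by the utility function, and the agent picks the action with the highest score. The world model and utility below are invented toys, not the book's formulation.

```python
# A utility-based agent: score the predicted outcome of every action with a
# utility function and choose the action that maximizes it.

def utility_based_agent(actions, predict, utility):
    def agent(state):
        return max(actions, key=lambda a: utility(predict(state, a)))
    return agent

# Toy taxi: utility rewards being close to the destination at position 10.
predict = lambda pos, action: pos + (1 if action == "Forward" else 0)
utility = lambda pos: -abs(10 - pos)

agent = utility_based_agent(["Forward", "Wait"], predict, utility)
print(agent(7))  # Forward
```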



Exercises:
Q1. Define the following terms with suitable example:

a) Agent
b) Intelligent Agent
c) Rational Agent
d) Performance
e) Environment
f) Actuators
g) Sensors

Q2. Write the PEAS description of the following agent:

a) Taxi Driver
b) Medical Diagnosis System
c) Satellite Image Analysis System
d) Part Picking Robot
e) Refinery Controller
f) Interactive English Tutor
g) ATM (Automated Teller Machine)
h) BANK
i) Robot Soccer Player
j) Internet Book shopping agent
k) Autonomous Mars rover
l) Mathematician’s theorem proving assistant

Q3. Explain the function of simple reflex agent with suitable neat diagram.

Q4. Write the function of reflex vacuum agent.

Q5. Write the algorithm of simple reflex agent.

Q6. Write the algorithm of model based reflex agent.

Q7. Explain the function of model based reflex agent with suitable neat diagram.

Q8. Explain the function of goal based agent with suitable neat diagram.

Q9. Explain the function of utility based agent with suitable neat diagram.

References:
Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach, Second Edition.
