Artificial Intelligence: COMP-241, Level-6
Mohammad Fahim Akhtar, Dr. Mohammad Hasan
Department of Computer Science
Jazan University, KSA
In which we discuss the nature of agents, perfect or otherwise, the diversity of environments,
and the resulting menagerie of agent types.
An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators.
A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth, and
other body parts for actuators.
A robotic agent might have camera and infrared range finders for sensors and various
motors for actuators.
A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and
sending network packets.
We will make the general assumption that every agent can perceive its own actions.
An agent's choice of action at any given instant can depend on the entire percept
sequence observed to date.
An agent’s behavior is described by the agent function that maps any given percept
sequence to an action.
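The agent function above can be made concrete as a table-driven program, one of the simplest ways to realize the mapping from percept sequences to actions. This is only a minimal sketch; the table entries and percept names below are illustrative, not taken from any particular system.

```python
def table_driven_agent_program(table):
    """Return an agent program backed by a lookup table keyed by the
    full percept sequence (as a tuple) observed so far."""
    percepts = []  # the percept sequence observed to date

    def program(percept):
        percepts.append(percept)
        # The agent function: percept sequence -> action.
        return table.get(tuple(percepts))

    return program

# Hypothetical table for a toy two-square world (locations A and B).
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = table_driven_agent_program(table)
print(agent(("A", "Clean")))                  # Right
print(agent(("B", "Dirty")))                  # Suck
```

The table grows exponentially with the length of the percept sequence, which is why the architectures discussed later compute actions instead of looking them up.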
Intelligent agents are supposed to maximize their performance measure.
Rational Agent
An agent should strive to "do the right thing" based on what it can perceive and the actions it
can perform. The right action is the one that will cause the agent to be most successful.
For each possible percept sequence, a rational agent should select an action that is expected to
maximize its performance measure, given the evidence provided by the percept sequence and
whatever built-in knowledge the agent has.
Performance Measure
The criterion for success of an agent’s behavior. When an agent is plunked down in an
environment, it generates a sequence of actions according to the percepts it receives.
As a general rule, it is better to design performance measures according to what one actually
wants in the environment, rather than according to how one thinks the agent should behave.
This is the simplest type of agent architecture. The underlying concept is very simple and
involves little intelligence. For each condition that the agent can observe through its sensors,
reflecting changes in the environment in which it operates, a specific action (or set of actions) is
defined by an agent program. So, for each observation it receives from its sensors, the agent
checks its condition-action rules, finds the matching condition, and then performs the relevant
action defined in that rule using its actuators. This is useful only when the environment is fully
observable and the agent program contains condition-action rules for every possible observation,
which is rarely achievable in real-world scenarios and is largely limited to toy, simulation-based
problems. The figure below shows the concept of a simple reflex agent.
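A simple reflex agent can be sketched as a handful of condition-action rules. The example below uses the two-square vacuum world (locations A and B) as a toy, fully observable environment; the percept format is an assumption for illustration.

```python
def simple_reflex_vacuum_agent(percept):
    """Condition-action rules for the two-square vacuum world.

    The action depends only on the current percept, never on history.
    """
    location, status = percept
    if status == "Dirty":       # rule: dirty square -> clean it
        return "Suck"
    if location == "A":         # rule: A is clean -> move to B
        return "Right"
    return "Left"               # rule: B is clean -> move to A

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # Left
```

Note that the agent keeps no state at all: identical percepts always produce identical actions, which is exactly the limitation the next architecture addresses.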
This is an improved version of the first agent type, capable of choosing an action based on how
the environment evolves or changes from its current state. As in all agent types, a model-based
reflex agent acquires percepts about the environment through its sensors. These percepts tell the
agent what the environment is like at that moment, limited to the facts its sensors can capture.
The agent then updates an internal state built from the percept history, which yields some
unobserved facts about the current state of the environment. To update the internal state, the
agent needs information about how the world (environment) evolves independently of the
agent's actions, and about how the agent's own actions affect the environment. This knowledge
of how the environment evolves is known as a model of the world, which explains the name
model-based for this agent type.
Figure: Model-based reflex agent
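The internal-state idea can be sketched for the same toy vacuum world: the agent remembers the last known status of each square and uses its model of how its own actions change the world (sucking makes a square clean). The class and attribute names are illustrative assumptions.

```python
class ModelBasedVacuumAgent:
    """Keeps an internal state (a model of both squares) updated
    from the percept history and from the effects of its own actions."""

    def __init__(self):
        # Internal state: last known status of each square (None = unknown).
        self.model = {"A": None, "B": None}

    def program(self, percept):
        location, status = percept
        self.model[location] = status          # update state from the percept
        if self.model["A"] == self.model["B"] == "Clean":
            return "NoOp"                      # world believed fully clean
        if status == "Dirty":
            self.model[location] = "Clean"     # model: sucking cleans the square
            return "Suck"
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.program(("A", "Dirty")))   # Suck
print(agent.program(("B", "Clean")))   # NoOp
```

Unlike the simple reflex agent, this one can stop acting once its model says the whole world is clean, even though no single percept ever shows both squares at once.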
This agent is designed to perform actions in order to reach a certain goal. In any agent, the main
criterion is to achieve a certain objective function, which in layman's terms can be referred to as
a goal. In this agent type, goal information is defined so that the agent can determine which
suitable action or actions to perform, out of the available set of actions, in order to reach the goal
effectively. For example, if we are designing an automated taxi driver, the passenger's
destination (fed in as the goal to reach) gives the agent useful insight for selecting the roads
leading to that destination. The difference from the first two agent types is that there is no
hard-wired condition-action rule set; actions are based purely on the internal state and the goals
defined. This can sometimes be less effective, when the agent does not explicitly know what to
do at a certain time, but it is more flexible: based on the knowledge it gathers about state changes
in the environment, it can modify its actions to reach the goal.
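The taxi example can be sketched as goal-directed search: instead of condition-action rules, the agent picks actions (roads) by finding any path that reaches the goal. The road map below is a made-up toy graph, and breadth-first search is just one simple way to realize the idea.

```python
from collections import deque

# Hypothetical road map for the taxi example; edges are two-way.
ROADS = {
    "Airport": ["Downtown", "Suburb"],
    "Downtown": ["Airport", "Mall"],
    "Suburb": ["Airport", "Mall"],
    "Mall": ["Downtown", "Suburb"],
}

def goal_based_route(start, goal):
    """Choose a sequence of locations by searching for the goal,
    rather than consulting fixed condition-action rules."""
    frontier = deque([[start]])   # paths waiting to be extended
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:      # goal test drives action selection
            return path
        for nxt in ROADS[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                   # goal unreachable

print(goal_based_route("Airport", "Mall"))   # ['Airport', 'Downtown', 'Mall']
```

Changing the goal changes the whole behavior without touching any rules, which is the flexibility the text describes.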
Goals by themselves are not adequate to produce effective behavior in agents. If we consider the
automated taxi driver agent, just reaching the destination is not enough; passengers also require
additional qualities such as safety, reaching the destination in less time, cost-effectiveness, and
so on. To combine goals with these desired qualities, a concept called a utility function is used.
Based on a comparison between different states of the world, a utility value is assigned to each
state: the utility function maps a state (or a sequence of states) to a numeric representation of
satisfaction. The ultimate objective of this type of agent is therefore to maximize the utility value
derived from the state of the world. The following diagram depicts the architecture of a
utility-based agent.
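The utility idea can be sketched by scoring candidate routes on the qualities the text mentions (time, safety, cost) and picking the route with the highest score. The route attributes and the weights below are arbitrary assumptions, chosen only to illustrate how a single number can trade the qualities off against each other.

```python
# Hypothetical candidate routes with attributes the passenger cares about.
routes = {
    "highway":    {"time_min": 20, "safety": 0.7, "cost": 12.0},
    "back_roads": {"time_min": 35, "safety": 0.9, "cost": 6.0},
}

def utility(attrs, w_time=-0.1, w_safety=10.0, w_cost=-0.5):
    """Map a state description to a single number expressing satisfaction.

    Negative weights penalize time and cost; safety is rewarded.
    """
    return (w_time * attrs["time_min"]
            + w_safety * attrs["safety"]
            + w_cost * attrs["cost"])

# The agent acts to maximize utility over the candidate states.
best = max(routes, key=lambda name: utility(routes[name]))
print(best)   # back_roads
```

With these particular weights the safer, cheaper route wins despite taking longer; shifting the weights (say, a passenger in a hurry) changes the chosen action without changing the goal itself.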
a) Agent
b) Intelligent Agent
c) Rational Agent
d) Performance
e) Environment
f) Actuators
g) Sensors
a) Taxi Driver
b) Medical Diagnosis System
c) Satellite Image Analysis System
d) Part Picking Robot
e) Refinery Controller
f) Interactive English Tutor
g) ATM (Automated Teller Machine)
h) BANK
i) Robot Soccer Player
j) Internet Book shopping agent
k) Autonomous Mars rover
l) Mathematician’s theorem proving assistant
Q3. Explain the function of simple reflex agent with suitable neat diagram.
Q7. Explain the function of model based reflex agent with suitable neat diagram.
Q8. Explain the function of goal based agent with suitable neat diagram.
Q9. Explain the function of utility based agent with suitable neat diagram.
References:
Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach, Second Edition.