
Dr. Fazeel Abid
Assistant Professor
University of Lahore
Agents in Artificial Intelligence - AI Systems

An AI system is composed of an agent and its environment.


• Agents act in their environment.
• The environment may contain other agents.
• An agent is anything that can be viewed as:
 Perceiving its environment through sensors
 Acting upon that environment through actuators
Examples of Agents
• A Human agent has sensory organs such as eyes, ears, nose, tongue, and skin as its
sensors, and other organs such as hands, legs, and mouth as its actuators.
• A Robotic agent uses cameras and infrared range finders as sensors to record information
from the environment, and uses motors as actuators to deliver output.
• A Software agent uses keystrokes and audio commands as sensory input and the display
screen as an actuator.
Agents in Artificial Intelligence - Agent Terminology

Performance Measure of Agent


 The criterion that determines how successful an agent is.
Behavior of Agent
 The action that an agent performs after any given sequence of percepts.
Percept
 The agent’s perceptual input at a given instant.
Percept Sequence
 The complete history of everything the agent has perceived to date.
Agent Function
 A map from the percept sequence to an action.
Artificial Intelligence is often defined as the study of rational agents.
A rational agent can be anything that makes decisions: a person, firm, machine, or
software.
It carries out the action with the best outcome after considering past and current percepts.
Agents in Artificial Intelligence - Rationality
• Rationality is the state of being reasonable, sensible, and having good judgment.
• It concerns the actions and results expected, given the agent’s perceptions.
What is an Ideal Rational Agent?
• An ideal rational agent is capable of taking the expected actions to maximize its
performance measure, on the basis of:
Its percept sequence
Its built-in knowledge base
• A rational agent always performs the right action, where the right action is the one
that causes the agent to be most successful for the given percept sequence.
• Examples of Rational Agents
1)……………….?
2)……………….?
Agents in Artificial Intelligence – Elements of Rational
Agents
• A rational agent is characterized by the following elements:
Performance Measure:
Environment:
Actuators:
Sensors:
• Together, these are known as PEAS.
• Class Activity:
• Mapping Previous Two Examples Into PEAS.
Automated Taxi Driver
Vacuum Cleaner
Agents in Artificial Intelligence
Example of rational actions performed by intelligent agents

Automated Taxi Driver
• Performance Measure: safe, fast, legal, comfortable trip; maximize profits.
• Environment: roads, other traffic, customers.
• Actuators: steering wheel, accelerator, brake, signal, horn.
• Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard.

Vacuum Cleaner
• Performance Measure: cleanliness, efficiency (distance traveled to clean), battery life, security.
• Environment: room, table, wood floor, carpet, various obstacles.
• Actuators: wheels, different brushes, vacuum extractor.
• Sensors: camera, dirt detection sensor, cliff sensor, bump sensors, infrared wall sensors.
Agents in Artificial Intelligence – Structure & Types
An Agent’s structure can be viewed as:
Agent = Architecture + Agent Program
Architecture = The machinery that an agent executes on.
Agent Program = An implementation of an agent function.
Types of Agents
• Agents can be grouped into five classes based on their degree of perceived
intelligence and capability:
Simple Reflex Agents
Model-Based Reflex Agents
Goal-Based Agents
Utility-Based Agents
Learning Agent
Agents in Artificial Intelligence - Simple Reflex Agents
• The most basic form of agent; acts only on the current state.
• Low intelligence capability; has no ability to store
past states.
• Responds to events based on pre-defined rules which are
pre-programmed.
• Performs well only when the environment is fully
observable.
• These agents are helpful only in a limited number of cases,
such as a smart thermostat.
• Simple reflex agents hold a static table from which they
fetch the pre-defined rules for performing an action.
 Condition-Action Rule − A rule that maps a state (condition) to
an action: if the condition is true, the action is taken; otherwise it is not.
• Pseudocode for a simple reflex agent.
Problems with simple reflex agents?
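The slide’s pseudocode is not reproduced in the text; a minimal runnable sketch in Python, using the classic two-square vacuum world (the rule table and percept format here are illustrative assumptions, not part of the slides):

```python
# Static condition-action rule table: (location, status) -> action.
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    """Map the CURRENT percept straight to an action; no history is kept."""
    location, status = percept
    return RULES[(location, status)]

print(simple_reflex_agent(("A", "Dirty")))  # Suck
print(simple_reflex_agent(("B", "Clean")))  # Left
```

Note how the agent consults only the current percept: if the environment were only partially observable (say, the dirt sensor sometimes failed), the rule table would have no way to compensate, which is exactly the problem the question above hints at.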
Agents in Artificial Intelligence - Model Based Reflex Agents
• Holds an internal state based on the percept history.
 The internal state helps the agent handle a partially
observable environment.
• Considers both the internal state and the current percept to
choose an action, and updates the internal state at each step.
Updating the internal state requires two kinds of
knowledge:
 How the world evolves independently of the agent.
 How the agent’s actions affect the environment.
• This knowledge is embedded in the agent’s program and
helps the agent understand how the world works.
• The implementation of this knowledge is called the model of the
world, and an agent that uses this model to decide what action
to take is called a model-based agent.
• The pseudocode for it.
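The referenced pseudocode is likewise not in the text; the sketch below shows the same vacuum world handled by an agent that remembers what it has already seen (the state representation and rules are illustrative assumptions):

```python
class ModelBasedReflexAgent:
    """Keeps an internal state built from the percept history."""

    def __init__(self):
        # Initially nothing is known about either square.
        self.state = {"A": "Unknown", "B": "Unknown", "location": "A"}

    def update_state(self, percept):
        """Fold the new percept into the internal model of the world."""
        location, status = percept
        self.state["location"] = location
        self.state[location] = status

    def rule_match(self):
        """Choose an action from the internal state, not just the percept."""
        loc = self.state["location"]
        if self.state[loc] == "Dirty":
            return "Suck"
        other = "B" if loc == "A" else "A"
        if self.state[other] != "Clean":          # unexplored or dirty
            return "Right" if loc == "A" else "Left"
        return "NoOp"   # both squares known clean: stored history pays off

    def act(self, percept):
        self.update_state(percept)
        return self.rule_match()

agent = ModelBasedReflexAgent()
print(agent.act(("A", "Clean")))  # Right (B still unknown)
print(agent.act(("B", "Clean")))  # NoOp  (both squares known clean)
```

Unlike the simple reflex agent, this one can stop working once its model says everything is clean, even though no single percept tells it that.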
Agents in Artificial Intelligence - Goal Based Agents
• For some tasks, it is not enough to know how the
world works.
• It is also desirable to have goal information that
describes desirable situations.
• A goal-based agent combines a model-based agent’s
model with a goal.
• The action taken by these agents depends on the
distance from their goal (the desired situation).
• Actions are intended to reduce the distance between
the current state and the desired state.
• To reach goals, these agents use search and planning algorithms.
• One drawback of goal-based agents is that they do not
always select the most optimized path to reach the
final goal.
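Goal-based agents typically reach the desired state with a search algorithm; a minimal breadth-first planner over a toy state graph (the room graph and goal are illustrative assumptions):

```python
from collections import deque

def bfs_plan(graph, start, goal):
    """Return a list of states from start to goal, or None if unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

rooms = {"Hall": ["Kitchen", "Study"], "Kitchen": ["Pantry"], "Study": ["Pantry"]}
print(bfs_plan(rooms, "Hall", "Pantry"))  # ['Hall', 'Kitchen', 'Pantry']
```

This also illustrates the drawback noted above: BFS treats every step as equally good, so if the Kitchen route were far more expensive than the Study route, the agent would still pick it; it reaches the goal but not necessarily in the optimal way.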
Agents in Artificial Intelligence - Utility Based Agents

• When there are multiple possible
alternatives, utility-based agents are used
to decide which one is best.
• The action taken by these agents depends on
the end objective, captured by a utility
function; hence the name Utility Agent.
• They perform a cost-benefit analysis of each
solution and select the one that can
achieve the goal at minimum cost.
• Simply put, these agents try not only to
achieve their goal but to achieve it more
cheaply, faster, and more safely: this is what we
call "utility".
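The cost-benefit analysis above can be sketched as scoring each alternative with a utility function and picking the maximum (the candidate plans and the weights are illustrative assumptions):

```python
def utility(plan):
    """Higher is better: reward reaching the goal, penalize time and risk."""
    return 10 * plan["reaches_goal"] - 1 * plan["time"] - 5 * plan["risk"]

plans = [
    {"name": "fast",  "reaches_goal": 1, "time": 2, "risk": 1.0},
    {"name": "safe",  "reaches_goal": 1, "time": 5, "risk": 0.1},
    {"name": "cheap", "reaches_goal": 0, "time": 1, "risk": 0.0},
]

best = max(plans, key=utility)
print(best["name"])  # safe
```

Where a goal-based agent only distinguishes goal vs. not-goal, the utility function trades off speed against risk, so the slightly slower but much safer plan wins.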
Agents in Artificial Intelligence - Learning Agent
• A learning agent can learn from its past experiences; it has
learning capabilities.
• It starts acting with basic knowledge and then adapts
automatically through learning.
A learning agent has four main conceptual components, which
are:
Learning element:
 Responsible for making improvements by learning from the
environment.
Critic:
 The learning element takes feedback from the critic, which describes how
well the agent is doing relative to a fixed performance standard.
Performance element:
 Responsible for selecting external actions.
Problem generator:
 Responsible for suggesting actions that will lead to new and
informative experiences.
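The four components can be sketched as a tiny agent that tunes a cleaning threshold from critic feedback (the dirt scale, thresholds, and update rule are all illustrative assumptions, not part of the slides):

```python
class LearningAgent:
    def __init__(self):
        self.threshold = 5            # basic starting knowledge (dirt scale 0-10)

    def performance_element(self, dirt_level):
        """Select the external action from current knowledge."""
        return "Suck" if dirt_level > self.threshold else "Skip"

    def critic(self, action, dirt_level):
        """Performance standard: any dirt above level 3 should be sucked."""
        correct = (action == "Suck") == (dirt_level > 3)
        return 1 if correct else -1

    def learning_element(self, feedback):
        """Improve: become more eager to clean when the critic penalizes us."""
        if feedback < 0:
            self.threshold -= 1

    def problem_generator(self):
        """Suggest a borderline case worth probing for new experience."""
        return 4

agent = LearningAgent()
level = agent.problem_generator()             # 4: near the current threshold
action = agent.performance_element(level)     # "Skip", since 4 <= 5
agent.learning_element(agent.critic(action, level))
print(agent.threshold)  # 4: the agent learned to clean lighter dirt
```

One cycle is shown: the problem generator proposes an informative situation, the performance element acts, the critic scores the act against the standard, and the learning element adjusts the knowledge.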
Agents in Artificial Intelligence - Properties of Environment
• Discrete / Continuous 
 If there are a limited number of distinct, clearly defined, states of the environment, the environment is discrete (For example, chess);
otherwise it is continuous (For example, driving).
• Observable / Partially Observable
 If it is possible to determine the complete state of the environment at each time point from the percepts it is observable; otherwise it is
only partially observable.
• Static / Dynamic
 If the environment does not change while an agent is acting, then it is static; otherwise it is dynamic.
• Single agent / Multiple agents 
 The environment may contain other agents which may be of the same or different kind as that of the agent.
• Accessible / Inaccessible 
 If the agent’s sensory apparatus can have access to the complete state of the environment, then the environment is accessible to that
agent.
• Deterministic / Non-deterministic 
 If the next state of the environment is completely determined by the current state and the actions of the agent, then the environment is
deterministic; otherwise it is non-deterministic.
• Episodic / Non-episodic
 In an episodic environment, each episode consists of the agent perceiving and then acting. The quality of its action depends just on the
episode itself. Subsequent episodes do not depend on the actions in the previous episodes. Episodic environments are much simpler
because the agent does not need to think ahead.
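One way to make these dichotomies concrete is to tag familiar task environments with their properties. The sketch below follows the usual textbook classifications for chess and taxi driving; the dictionary layout and property names are my own assumptions:

```python
# Property tags for two example task environments.
ENVIRONMENTS = {
    "chess":        {"discrete": True,  "observable": True,
                     "static": True,   "deterministic": True,  "episodic": False},
    "taxi_driving": {"discrete": False, "observable": False,
                     "static": False,  "deterministic": False, "episodic": False},
}

def describe(name):
    """Render an environment's properties as a readable summary line."""
    props = ENVIRONMENTS[name]
    return ", ".join(k if v else f"not {k}" for k, v in props.items())

print("chess:", describe("chess"))
print("taxi_driving:", describe("taxi_driving"))
```

Chess lands on the "easy" side of every dichotomy except episodic (moves depend on the whole game so far), while taxi driving lands on the hard side of all of them, which is why it recurs as the canonical difficult example.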
What’s Next?
1. The key comparison of all agent types
2. Examples of each agent
3. Quiz
