Class 2 - Agents


Machine Intelligence

Preet Kanwal
Associate Professor
Department of Computer Science & Engineering

Acknowledgements:
Dr. Rajanikanth K (Former Principal, MSRIT; PhD, IISc; Academic Advisor, PESU)
Prof. K Srinivas (Associate Professor, PESU)
Teaching Assistant: Nishtha V (Sem VII)
Unit 1 - Intelligent Agents and their Types

Agents in Artificial Intelligence

● An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

● An agent runs in a cycle of perceiving, thinking, and acting.

● Agent = Architecture (hardware: sensors, actuators) + Software System (algorithms, data structures)
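As a minimal illustrative sketch (not part of the original slides; the environment, percept format, and action names are invented for the example), the perceive-think-act cycle can be written as a simple loop in Python:

# Minimal sketch of the perceive-think-act cycle.
# EchoEnvironment, think(), and the action names are illustrative assumptions.

class EchoEnvironment:
    """Toy environment: the percept is a step counter; any action is accepted."""
    def __init__(self):
        self.t = 0

    def percept(self):          # sensors: read the current state
        self.t += 1
        return self.t

    def apply(self, action):    # actuators: act upon the environment
        print(f"t={self.t}: executing {action}")

def think(percept):
    """Agent program: map the current percept to an action."""
    return "act-even" if percept % 2 == 0 else "act-odd"

env = EchoEnvironment()
for _ in range(3):              # the agent runs in a cycle of...
    p = env.percept()           # ...perceiving,
    a = think(p)                # ...thinking,
    env.apply(a)                # ...and acting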

1. Human agent - Sensors: eyes, ears, and other organs. Actuators: hands, legs, mouth, and other body parts.
2. Robotic agent - Sensors: cameras and infrared range finders. Actuators: various motors.
3. Software agent - Sensors: keystrokes, file contents, received network packets. Actuators: displays on the screen, files, sent network packets.
Agents

● Inputs (percepts) from the environment are received by the intelligent agent through sensors.
● The agent uses artificial intelligence to make decisions from the acquired information/observations.
● Actions are then triggered through actuators. Future decisions are influenced by the percept history and past actions.
Intelligent Agents vs. Rational Agents

INTELLIGENT AGENTS

● An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals.
● An intelligent agent may learn from the environment to achieve its goals.

RATIONAL AGENTS

● A rational agent is one that does the right thing.
● A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure, given all possible actions.

Every intelligent agent, as per our view, is a rational agent (since we are focusing on the last quadrant - acting rationally).
PEAS - Characterizing an Intelligent Agent

PEAS is a model for characterizing an AI agent: when we define an AI agent or rational agent, we can group its properties under the PEAS representation model. PEAS stands for Performance measure, Environment, Actuators, Sensors.

● Environment: The environment is the surrounding of the agent at every instant. It keeps changing with time if the agent is in motion.
● Sensors: Sensors are the receptive parts of the agent which take in the input for the agent.
● Actuators: Actuators are the parts of the agent that deliver the output of an action to the environment.
● Performance measure: The performance measure is the unit that defines the success of an agent. It varies from agent to agent, based on their different percepts.
PEAS

What is PEAS for a self-driving car?

● Performance: Safety, time, lawful driving, comfort
● Environment: Roads, other vehicles, road signs, pedestrians
● Actuators: Steering, accelerator, brake, signal, horn
● Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar
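One convenient way to record a PEAS description is as plain data. The sketch below is purely illustrative (the PEAS class and its field names are assumptions for this example, not a standard API):

from dataclasses import dataclass

# Illustrative record of a PEAS description as data.
@dataclass
class PEAS:
    performance: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

self_driving_car = PEAS(
    performance=["safety", "time", "lawful driving", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer", "accelerometer", "sonar"],
)
print(self_driving_car.sensors)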
Example of PEAS

What is PEAS for a Vacuum Cleaner?


● Performance: Cleanliness, efficiency, battery life, security
● Environment: Room, table, wood floor, carpet, various obstacles
● Actuators: Wheels, brushes, vacuum extractor
● Sensors: Camera, dirt-detection sensor, cliff sensor (low-level obstacle detection), bump/touch sensor, infrared sensor (to detect motion)
Example of PEAS

What is PEAS for a Part-Picking Robot?
Assume bin picking as the application.

● Performance: Percentage of parts placed in the correct bins
● Environment: Conveyor belt with parts, bins
● Actuators: Jointed arm, hand
● Sensors: Camera, other sensors
Example of PEAS

More Examples of Agents with their PEAS representation

Medical diagnosis agent:
● Performance measure: Healthy patient, minimized cost
● Environment: Patient, hospital staff
● Actuators: Tests, treatments
● Sensors: Keyboard (entry of symptoms)
Terminologies

Let us understand a few Terminologies.



Agent:
An agent is an entity that perceives its environment and acts upon that environment.
Agent Function, Agent Program

AGENT FUNCTION (abstract mathematical description):
Input: the entire percept history
Output: the behaviour of the agent (an action for every possible percept sequence)

AGENT PROGRAM (concrete implementation):
Input: the current percept only
Output: returns an action to the actuators
(Note: if the agent's actions need to depend on the entire percept sequence, the agent will have to remember the percepts.)

The agent program runs on the physical architecture to produce/implement the agent function.
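The distinction can be made concrete with a small sketch in the spirit of the classic table-driven agent: the agent function is (conceptually) a lookup table over entire percept sequences, while the agent program receives one percept at a time and must remember the history itself. The table contents below are made up for illustration:

# Sketch of an agent program implementing an agent function that is given
# as a lookup table over percept sequences (table-driven agent idea).
# The percepts, actions, and table contents are invented examples.

percepts = []  # the agent program must remember the percept history itself

table = {
    ("A-dirty",): "suck",
    ("A-dirty", "A-clean"): "move-right",
}

def table_driven_agent(percept):
    """Receives only the current percept; appends it to the remembered
    history and looks up the action for the entire percept sequence."""
    percepts.append(percept)
    return table.get(tuple(percepts), "no-op")

print(table_driven_agent("A-dirty"))   # -> suck
print(table_driven_agent("A-clean"))   # -> move-right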
Terminologies

State:
A state is a configuration of the agent and its environment.

Example: each of the following diagrams represents a different state.
(Diagrams: state x, state y, state z)


Terminologies

Initial State:
The initial state is the state in which the agent begins.

Example: let's say the following state is the initial state:
(Diagram: state x)
Terminologies

Actions:
● Actions are the choices that can be made in a state.
● Each edge leaving a node s corresponds to a possible action a ∈ ACTIONS(s) that could be performed in state s.
● ACTIONS(s) returns the set of actions that can be executed in state s.
● Each edge is labeled with the action and its cost.
● The action leads deterministically to the successor state represented by the child node.
● In summary, each root-to-leaf path represents a possible action sequence, and the sum of the costs of the edges is the cost of that path. The goal is to find the root-to-leaf path that ends in a valid end state. A small sketch of this follows below.
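A tiny illustrative sketch of these definitions (the states, actions, and costs below are invented for the example):

# Tiny illustrative transition model: ACTIONS(s), the deterministic
# successor RESULT(s, a), and per-edge costs. All values are made up.

ACTIONS = {"s0": ["a", "b"], "s1": ["c"], "s2": [], "s3": []}
RESULT  = {("s0", "a"): "s1", ("s0", "b"): "s2", ("s1", "c"): "s3"}
COST    = {("s0", "a"): 1, ("s0", "b"): 4, ("s1", "c"): 2}

def path_cost(start, actions):
    """Follow an action sequence from `start`; return the final state and
    the sum of the edge costs along the root-to-leaf path."""
    s, total = start, 0
    for a in actions:
        assert a in ACTIONS[s], f"{a} not applicable in {s}"
        total += COST[(s, a)]
        s = RESULT[(s, a)]
    return s, total

print(path_cost("s0", ["a", "c"]))  # -> ('s3', 3)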
Types of Environments

From an agent's view, an environment can be characterized along five dimensions: observability, determinism, episodicity, dynamism, and continuity.
Observability

FULLY OBSERVABLE:
● If an agent's sensors give it access to the complete state of the (relevant) environment needed to choose an action, the environment is fully observable.
● You can determine the state of the system at all times.

Examples: the 8-puzzle, the word-block problem, Sudoku, etc. - the state is completely visible at any point in time.
PARTIALLY OBSERVABLE:
● A partially observable system is one in which the entire state of the system is not fully visible to an external sensor.
● The observer may utilise a memory system in order to add information to the observer's understanding of the system.

Example: driving - the environment is partially observable because what's around the corner is not known.
Determinism

DETERMINISTIC:
● If an agent's current state and selected action completely determine the next state of the environment, then the environment is called deterministic.
● For example, if we move a pawn in chess from A2 to A3, that move always works; there is no uncertainty in the outcome of that move.

STOCHASTIC:
● A stochastic environment is random in nature and cannot be determined completely by the agent.
● In a poker game, for example, when a card is dealt there is a certain amount of randomness involved in which card will be drawn.
● A self-driving car also operates in a stochastic (and partially observable) environment.
Episodicity

EPISODIC:
• In an episodic environment, the agent's experience is divided into atomic episodes. Each episode consists of the agent perceiving and then performing a single action, and the choice of action in each episode depends only on the episode itself.
• Example: a spam filter considers each incoming mail independently (episodic). An agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions.

SEQUENTIAL:
• An environment where the next state depends on the current action; the current decision could affect all future decisions.
• Example: chess and taxi driving are sequential.
Important Note

Many environments are episodic at higher levels than the agent's individual actions.

For example, a chess tournament consists of a sequence of games; each game is an episode because (by and large) the contribution of the moves in one game to the agent's overall performance is not affected by the moves in its previous games.

On the other hand, decision making within a single game is certainly sequential.
Dynamism

STATIC ENVIRONMENT:
• An idle environment with no change in its state is called a static environment. It does not change from one state to the next while the agent is considering its course of action. The agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time.
• Examples: snakes and ladders, chess, crossword puzzles.

DYNAMIC ENVIRONMENT:
• If the environment can change while the agent is deliberating, then we say the environment is dynamic for that agent. An environment that keeps changing while the agent is carrying out some action is said to be dynamic.
• Examples: taxi driving, playing tennis, playing cricket.
Continuity

DISCRETE ENVIRONMENT:
• If an environment consists of a finite number of actions that can be deliberated in the environment to obtain the output, it is said to be a discrete environment.
• Example: the game of chess is discrete as it has only a finite number of moves. The number of moves might vary with every game, but it is still finite.

CONTINUOUS ENVIRONMENT:
• An environment in which the possible actions cannot be enumerated, i.e. is not discrete, is said to be continuous.
• Example: self-driving cars operate in a continuous environment, as actions such as driving and parking involve continuously varying quantities that cannot be enumerated.
Analysis Time

Tennis match:

● Stochastic or deterministic: Stochastic
● Episodic or sequential: Sequential
● Dynamic or static: Dynamic
● Discrete or continuous: Continuous
● Fully observable or partially observable: Partially observable

Note that the answers are not always cut and dried and depend on how the task environment is defined.
Analysis Time

Crossword puzzle:

● Stochastic or deterministic: Deterministic
● Episodic or sequential: Sequential
● Dynamic or static: Static
● Discrete or continuous: Discrete
● Fully observable or partially observable: Fully observable

Note that the answers are not always cut and dried and depend on how the task environment is defined.
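Such classifications can also be recorded as plain data. A small illustrative sketch (the TaskEnvironment structure and its field names are assumptions, not a standard representation):

from dataclasses import dataclass

# Illustrative record of the five environment dimensions for the two examples.
@dataclass
class TaskEnvironment:
    observable: str        # "fully" or "partially"
    deterministic: bool    # False means stochastic
    episodic: bool         # False means sequential
    static: bool           # False means dynamic
    discrete: bool         # False means continuous

tennis_match     = TaskEnvironment("partially", False, False, False, False)
crossword_puzzle = TaskEnvironment("fully",     True,  False, True,  True)
print(tennis_match, crossword_puzzle, sep="\n")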
Classes of Intelligent Agents

Based on how they act, intelligent agents are commonly grouped into five classes: simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents.
Simple Reflex Agents

• The simplest kind of agents.
• These agents take decisions on the basis of the current percept and ignore the rest of the percept history.
• They work on condition-action rules, which map the current state directly to an action.
• They succeed when the environment is fully observable. A minimal sketch follows below.
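Here is a minimal sketch in the spirit of the classic two-location vacuum-world reflex agent (the percept format (location, status) and the action names are assumptions for this example):

# Simple reflex agent for a two-location vacuum world.
# The percept is assumed to be a (location, status) pair.

def reflex_vacuum_agent(percept):
    location, status = percept
    # Condition-action rules: only the CURRENT percept is consulted.
    if status == "dirty":
        return "suck"
    if location == "A":
        return "move-right"
    return "move-left"

print(reflex_vacuum_agent(("A", "dirty")))  # -> suck
print(reflex_vacuum_agent(("A", "clean")))  # -> move-right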
Applications of Simple Reflex Agents

• Metal detector - it doesn't matter whether a previous input detected metal; the agent alerts based only on the current percept.

• If a Mars lander needed to collect a rock found in a specific place, it would collect it. If it were a simple reflex agent, then if it found the same kind of rock in a different place it would still pick it up, as it doesn't take into account that it has already picked one up.

• This is useful when a quick automated response is needed. We call these reflex actions.
Model-Based Reflex Agent
• The model-based agent can work in a partially observable environment and track the situation.
• These agents have a model - "knowledge of the world" - and based on the model they perform actions.
• The agent uses the percept history to help reveal the currently unobservable aspects of the environment. A sketch follows below.
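A small illustrative sketch, assuming a driving percept that carries the speed of the car ahead and its brake light; the internal state infers the unobservable fact "the car ahead is slowing" from the percept history. All names and rules here are invented:

# Sketch of a model-based reflex agent: an internal state is maintained so
# that unobservable aspects (is the car ahead actually slowing?) can be
# inferred from the percept history. All names and rules are illustrative.

state = {"prev_speed_ahead": None}

def update_state(state, percept):
    """Model: infer whether the car ahead is slowing from successive speeds."""
    speed_ahead = percept["speed_ahead"]
    slowing = (state["prev_speed_ahead"] is not None
               and speed_ahead < state["prev_speed_ahead"])
    state["prev_speed_ahead"] = speed_ahead
    state["car_ahead_slowing"] = slowing
    return state

def model_based_agent(percept):
    global state
    state = update_state(state, percept)
    if percept["brake_light_ahead"] and not state["car_ahead_slowing"]:
        return "stop"
    if state["car_ahead_slowing"]:
        return "slow-down-or-change-lane"
    return "keep-going"

print(model_based_agent({"speed_ahead": 60, "brake_light_ahead": False}))
print(model_based_agent({"speed_ahead": 50, "brake_light_ahead": True}))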
Applications of Model-Based Reflex Agents

• Self-driving cars - if the car in front of our agent shows a red (brake) light, pure condition-action rules would make our agent stop immediately. But what if the car in front is only trying to slow down? A model-based reflex agent would remember this and, instead of bringing its speed to zero, slow down or try to change lanes.

• For changing lanes, the agent needs to keep track of where the other cars are.

Model-Based Reflex Agents vs. Simple Reflex Agents

What is the difference between model-based reflex agents and simple reflex agents?

• A simple reflex agent selects actions based on the agent's current perception of the world, not on past percepts.

• A model-based reflex agent is made to deal with partial observability: it keeps track of the part of the world it can see now. It does this by maintaining an internal state that depends on what it has seen before, so it holds information on the unobserved aspects of the current state.

• The former bases its analysis only on the current state, while the latter takes account of past events.
Goal-Based Agent

• Goal-based agents further expand on the capabilities of model-based agents by using "goal" information: descriptions of situations that are desirable.
• This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state.
• Search and planning are the subfields of artificial intelligence devoted to finding action sequences that achieve the agent's goals.
Goal-Based Agent - Pseudocode

function MODEL-GOAL-BASED-AGENT(percept) returns an action

persistent: state, the agent's current conception of the world state
            model, a description of how the next state results from the current state and action
            goals, a set of goals the agent needs to accomplish
            action, the most recent action, initially null

state ← UPDATE-STATE(state, action, percept, model)
action ← BEST-ACTION(goals, state)
return action
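A direct Python transcription of the pseudocode above can look as follows; UPDATE-STATE and BEST-ACTION are left as placeholder hooks supplied by the caller, and the toy instantiation at the end is invented for illustration:

# Transcription of MODEL-GOAL-BASED-AGENT. update_state and best_action
# are placeholder hooks, not fixed APIs; the toy world below is made up.

def make_goal_based_agent(model, goals, update_state, best_action):
    state, action = {}, None  # "persistent" across calls
    def agent(percept):
        nonlocal state, action
        state = update_state(state, action, percept, model)
        action = best_action(goals, state)
        return action
    return agent

# Toy instantiation: the percept is a position on a line; the goal is 3.
agent = make_goal_based_agent(
    model=None,
    goals={3},
    update_state=lambda s, a, p, m: {"pos": p},
    best_action=lambda g, s: "right" if s["pos"] < min(g) else "stop",
)
print(agent(0))  # -> right
print(agent(3))  # -> stop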
Applications of Goal-Based Agents

• A simple example is the shopping list: our goal is to pick up everything on that list. This makes it easier to decide between milk and orange juice when you can only afford one. As milk is a goal on our shopping list and the orange juice is not, we choose the milk.

• Google's Waymo driverless cars are good examples of a goal-based agent when they are programmed with an end destination, or goal, in mind. The car will then "think" and make the right decisions in order to deliver the passengers where they intended to go.
Utility-Based Agents

• These agents are similar to goal-based agents but provide an extra component of utility measurement, which makes them different by providing a measure of success in a given state.
• Utility-based agents act based not only on goals but also on the best way to achieve the goal.
• A utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action. A sketch follows below.
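A minimal sketch of the idea (the routes, the numbers, and the utility weights below are invented for illustration):

# Sketch of a utility-based choice: instead of a binary goal test, each
# candidate gets a utility score and the agent picks the maximum.
# The routes and the time/toll trade-off weights are made up.

def utility(route):
    # Higher is better: trade off travel time against tolls.
    return -route["minutes"] - 0.5 * route["toll"]

routes = [
    {"name": "expressway",  "minutes": 150, "toll": 80},
    {"name": "old highway", "minutes": 200, "toll": 0},
]

best = max(routes, key=utility)
print(best["name"])  # -> expressway  (utility -190 vs. -200)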
Applications of Utility-Based Agents

• A utility-based agent is like the goal-based agent but with a measure of "how happy" an action would make it, rather than the goal-based agent's binary feedback ['happy', 'unhappy']. This kind of agent provides the best solution. An example is a route recommendation system, which finds the 'best' route to reach a destination.

• Let's say you want to travel from Bangalore to Mysore: the goal-based agent will get you there. Mysore is the goal, and this agent will map the right path to get you there. But if you're traveling from Bangalore to Mysore and encounter a closed road, the utility-based agent will kick into gear and analyze other routes to get you there, selecting the best option for maximum utility.

In this regard, the utility-based agent is a step above the goal-based agent.
Learning Agents

• A learning agent in AI is the type of agent which can learn from its past experiences; that is, it has learning capabilities.

A learning agent has four main conceptual components:

• Learning element: responsible for making improvements by learning from the environment.

• Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.

• Performance element: responsible for selecting external actions.

• Problem generator: responsible for suggesting actions that will lead to new and informative experiences.

A skeleton wiring these components together is sketched below.
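The following skeleton is purely illustrative (the class, method names, and toy learning rule are assumptions, not a fixed design); it only shows how the four components connect:

# Skeleton wiring the four components of a learning agent together.
# Every method body here is an illustrative stub.

class LearningAgent:
    def __init__(self):
        self.knowledge = {}                  # what the learning element improves

    def performance_element(self, percept):  # selects the external action
        return self.knowledge.get(percept, "default-action")

    def critic(self, percept, action):       # feedback vs. a performance standard
        return 1.0 if action != "default-action" else -1.0

    def learning_element(self, percept, action, feedback):
        if feedback < 0:                      # improve future behaviour
            self.knowledge[percept] = "improved-action"

    def problem_generator(self):              # suggests exploratory actions
        return "try-something-new"

    def step(self, percept):
        action = self.performance_element(percept)
        self.learning_element(percept, action, self.critic(percept, action))
        return action

agent = LearningAgent()
print(agent.step("p1"))  # -> default-action (then the agent learns)
print(agent.step("p1"))  # -> improved-action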
Applications of Learning Agents

• Learning agents are able to learn, analyze their performance, and look for new ways to improve it. Any agent designed and expected to be successful in an uncertain environment is considered to be a learning agent.
• A human is an example of a learning agent. For example, a human can learn to ride a bicycle, even though no human is born with this skill.
• Search engines (Google)
• Computer vision
• Recognition of gestures, etc.
QUIZ!

What could possibly be the environment of a Satellite Image Analysis System?

1. Computers in space and earth

2. Image categorization techniques

3. Statistical data on image pixel intensity value and histograms

4. All of the above


Answer: 4. All of the above
QUIZ!

Which is used to improve the agent's performance?

1. Perceiving

2. Learning

3. Observing

4. None of the mentioned


Answer: 2. Learning
QUIZ!

What is the rule of a simple reflex agent?

1. Simple-Action rule

2. Condition-Action rule

3. Simple and Condition-Action rule

4. None of the Above


Answer: 2. Condition-Action rule
QUIZ!

Which agent deals with happy and unhappy states?

1. Simple reflex agent

2. Model based agent

3. Learning agent

4. Utility based agent


Answer: 4. Utility based agent
QUIZ!

Which of the following does not represent a Goal-based agent?

1. Reaching the goal in minimal amount of time

2. Reaching the goal in minimal cost

3. Reaching the initial state again after reaching the goal state

4. None of the above


Answer: 3. Reaching the initial state again after reaching the goal state
THANK YOU

Preet Kanwal
Department of Computer Science & Engineering
preetkanwal@pes.edu
