Class 2 - Agents
Preet Kanwal
Associate Professor
Department of Computer Science & Engineering
INTELLIGENT AGENTS
In our view, every intelligent agent is a rational agent, since we are focusing on the last quadrant of the definitions of AI: acting rationally.
PEAS - Characterizing an Intelligent Agent
PEAS is the model used to characterize an AI agent. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model.
PEAS stands for Performance measure, Environment, Actuators, Sensors.
● Performance measure: the criteria that determine how successful the agent's behaviour is.
● Environment: the surroundings in which the agent operates.
● Actuators: the components through which the agent acts on the environment.
● Sensors: the components through which the agent perceives the environment.
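As a sketch of how a PEAS description can be written down concretely, here is a small Python structure populated with the classic two-location vacuum-cleaner world; the field values are illustrative assumptions, not part of the slides.

```python
from dataclasses import dataclass, field

# A minimal sketch of a PEAS description as a data structure. The example
# values describe the classic vacuum-cleaner world and are illustrative
# assumptions, not taken from the slides.

@dataclass
class PEAS:
    performance: list[str] = field(default_factory=list)
    environment: list[str] = field(default_factory=list)
    actuators: list[str] = field(default_factory=list)
    sensors: list[str] = field(default_factory=list)

vacuum_agent = PEAS(
    performance=["cleanliness", "efficiency", "battery life"],
    environment=["room A", "room B", "dirt", "obstacles"],
    actuators=["wheels", "brushes", "vacuum pump"],
    sensors=["camera", "dirt sensor", "bump sensor"],
)
print(vacuum_agent.actuators)   # ['wheels', 'brushes', 'vacuum pump']
```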
Example of PEAS: An Automated Taxi Driver
● Performance measure: a safe, fast, legal, comfortable trip; maximizing profits.
● Environment: roads, other traffic, pedestrians, customers.
● Actuators: steering wheel, accelerator, brake, signal, horn.
● Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors.
Terminologies
Agent: an entity that perceives its environment through sensors and acts upon it through actuators.
State: a configuration of the agent and its environment.
Initial State: the state in which the agent begins.
Actions:
● Actions are the choices that can be made in a state.
● Each edge leaving a node s corresponds to a possible action a ∈ ACTIONS(s) that could be performed in state s.
● ACTIONS(s) returns the set of actions that can be executed in state s.
● Each edge is labeled with the action and its cost.
● The action leads deterministically to the successor state represented by the child node.
● In summary, each root-to-leaf path represents a possible action sequence, and the sum of the costs of its edges is the cost of that path. The goal is to find the root-to-leaf path that ends in a valid end state (see the sketch below).
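To make these terms concrete, here is a small runnable sketch: a toy route-finding problem in which ACTIONS(s), the deterministic successor, edge costs, and root-to-leaf paths all appear explicitly. The graph, action names, and costs are illustrative assumptions.

```python
# Toy route-finding problem; the graph and costs are invented for illustration.
GRAPH = {
    "Bangalore": [("drive-Mysore", "Mysore", 3), ("drive-Tumkur", "Tumkur", 1)],
    "Tumkur":    [("drive-Mysore", "Mysore", 4)],
    "Mysore":    [],   # goal state: no outgoing edges
}

def actions(s):
    """ACTIONS(s): the set of actions executable in state s."""
    return [a for a, _, _ in GRAPH[s]]

def result(s, a):
    """Deterministic successor: the child state reached by taking a in s."""
    return next(nxt for act, nxt, _ in GRAPH[s] if act == a)

def cost(s, a):
    """Cost labeling the edge for action a leaving state s."""
    return next(c for act, _, c in GRAPH[s] if act == a)

def paths(s, path=(), total=0):
    """Enumerate root-to-leaf action sequences with their summed edge costs."""
    if not GRAPH[s]:                      # leaf: no actions remain
        yield path, total, s
        return
    for a in actions(s):
        yield from paths(result(s, a), path + (a,), total + cost(s, a))

# The cheapest root-to-leaf path ending in a valid end state is the solution.
best = min((p for p in paths("Bangalore") if p[2] == "Mysore"),
           key=lambda p: p[1])
print(best)   # (('drive-Mysore',), 3, 'Mysore')
```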
Types of Environments
From the agent's view, an environment can be classified along several dimensions: observability, determinism, episodicity, dynamism, and continuity.

Observability
FULLY OBSERVABLE
● If an agent's sensors give it access to the complete state of the (relevant) environment needed to choose an action, the environment is fully observable.
● The agent can determine the state of the system at all times.
● Example: the 8-puzzle, word-block problems, Sudoku, etc.; the state is completely visible at any point in time.
PARTIALLY OBSERVABLE
● A partially observable environment is one in which the entire state of the system is not fully visible to the agent's sensors.
● The agent may use a memory system to add information to its understanding of the system.
● Example: driving; the environment is partially observable because what is around the corner is not known.
Determinism
DETERMINISTIC
● If an agent's current state and selected action completely determine the next state of the environment, the environment is called deterministic.
● For example, moving a pawn in chess from A2 to A3 always works; there is no uncertainty in the outcome of that move.
STOCHASTIC
● A stochastic environment is random in nature and cannot be determined completely by the agent.
● In a poker game, for example, there is a certain amount of randomness in which card will be dealt.
● A self-driving car operates in a stochastic (and partially observable) environment.
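A short sketch of the contrast: the deterministic transition below always returns the same successor for a given state and action, while the stochastic one also depends on chance. The states, actions, and deck are toy assumptions.

```python
import random

# Toy sketch of deterministic vs stochastic transitions; the board,
# actions, and card deck are illustrative assumptions.

def deterministic_step(square: str, move: str) -> str:
    # Chess-like: the same (state, action) pair always yields the same state.
    board = {("A2", "advance"): "A3", ("A3", "advance"): "A4"}
    return board[(square, move)]

def stochastic_step(hand: tuple, action: str) -> tuple:
    # Poker-like: the outcome also depends on which card happens to be dealt.
    deck = ["A♠", "K♥", "7♦", "2♣"]
    return hand + (random.choice(deck),) if action == "draw" else hand

print(deterministic_step("A2", "advance"))   # always "A3"
print(stochastic_step(("Q♠",), "draw"))      # varies from run to run
```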
Episodicity
EPISODIC
• In an episodic environment, the agent's experience is divided into atomic episodes. Each episode consists of the agent perceiving and then performing a single action, and the choice of action in each episode depends only on the episode itself.
• Example: a spam filter considers each incoming mail independently (episodic). An agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions.
SEQUENTIAL
• An environment where the next state depends on the current action; the current decision could affect all future decisions.
• Example: chess and taxi driving are sequential.
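The sketch below contrasts the two: the spam filter's decision uses only the current episode, while the driver's decision depends on state carried over from earlier actions. The percepts and decision rules are toy assumptions, not from the slides.

```python
def episodic_spam_filter(mail: str) -> str:
    # Each decision depends only on the current episode (this one mail).
    return "spam" if "lottery" in mail.lower() else "ham"

def sequential_driver():
    speed = 0
    def act(percept: str) -> str:
        # Each decision depends on state accumulated from earlier actions.
        nonlocal speed
        speed = max(0, speed - 10) if percept == "red light" else speed + 10
        return f"speed={speed}"
    return act

print([episodic_spam_filter(m) for m in ["Win the LOTTERY!", "Meeting at 10"]])
drive = sequential_driver()
print([drive(p) for p in ["clear", "clear", "red light"]])
# ['spam', 'ham'] and ['speed=10', 'speed=20', 'speed=10']
```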
Important Note
Many environments are episodic at a higher level than the agent's individual actions. A chess tournament, for example, consists of a sequence of games; each game is an episode. On the other hand, decision making within a single game is certainly sequential.
Dynamism
STATIC ENVIRONMENT
• An environment that does not change its state while the agent is deliberating is called static. Since it does not change from one state to the next while the agent is considering its course of action, the agent need not keep looking at the world while deciding on an action, nor worry about the passage of time.
• Example: snakes and ladders, chess, crossword puzzles.
DYNAMIC ENVIRONMENT
• If the environment can change while the agent is deliberating, we say the environment is dynamic for that agent.
• Example: taxi driving, playing tennis, playing cricket.
Continuity
DISCRETE ENVIRONMENT
• If the environment offers a finite number of actions that can be deliberated to obtain the output, it is said to be discrete.
• Example: chess is discrete, as it has only a finite number of moves. The number of moves might vary with every game, but it is still finite.
CONTINUOUS ENVIRONMENT
• An environment in which the possible actions cannot be enumerated, i.e., is not discrete, is said to be continuous.
• Example: self-driving cars operate in a continuous environment: actions such as steering and accelerating range over continuous values and cannot be enumerated.
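A short sketch of the distinction: a discrete action space is a finite, enumerable set, while a continuous action ranges over real values. The moves and the angle limits below are illustrative assumptions.

```python
# Discrete: a finite, enumerable set of actions, as in chess.
chess_like_actions = ["e2-e4", "d2-d4", "Ng1-f3"]

def continuous_steering(angle_deg: float) -> float:
    # Continuous: a steering angle ranges over a continuum, so the possible
    # actions cannot be enumerated; we can only clamp to physical limits.
    return max(-30.0, min(30.0, angle_deg))

print(len(chess_like_actions))       # a discrete environment has countable moves
print(continuous_steering(12.345))   # any real-valued angle is a distinct action
```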
Analysis Time
Classify the following task environments along the dimensions above:
• A tennis match
• A crossword puzzle
Note that the answers are not always cut and dried and depend on how the task environment is defined.
Classes of Intelligent Agents
Agents can be grouped into five classes: simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents.
Simple Reflex Agents
• The simplest kind of agent.
• These agents take decisions on the basis of the current percept alone, ignoring the rest of the percept history.
• They work on condition-action rules, which map the current state directly to an action.
• They succeed only when the environment is fully observable.
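As a sketch, here is a condition-action-rule agent for the classic two-location vacuum world; the world itself and the rules are illustrative assumptions in the spirit of the slide.

```python
def simple_reflex_vacuum(percept):
    """Maps the current percept directly to an action; no percept history."""
    location, status = percept
    if status == "Dirty":          # condition-action rule 1
        return "Suck"
    return "Right" if location == "A" else "Left"   # rules 2 and 3

print(simple_reflex_vacuum(("A", "Dirty")))   # Suck
print(simple_reflex_vacuum(("A", "Clean")))   # Right
```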
Applications of Simple Reflex Agents
• Self-driving cars: if the car in front of our agent shows a red brake light, a condition-action rule would make our agent stop immediately. But what if that car is only trying to slow down? A model-based reflex agent would remember that, instead of bringing its speed to zero, it should slow down, or else try to change lanes.
• To change lanes, the agent needs to keep track of where the other cars are.
Model-Based Reflex Agents vs Simple Reflex Agents
• Simple reflex agents base their analysis only on the current percept, while model-based reflex agents also take account of past events through an internal model of the world.
Goal-Based Agent

function GOAL-BASED-AGENT(percept) returns an action
    persistent: state, what the agent currently sees as the world state
                model, a description of how the next state results from the current state and action
                goals, a set of goals the agent needs to accomplish
                action, the action that most recently occurred, initially null

    state ← UPDATE-STATE(state, action, percept, model)
    action ← BEST-ACTION(goals, state)
    return action
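Below is a hedged Python translation of this skeleton, specialized to a toy grid world so that it actually runs; the grid, the percept format, and the greedy BEST-ACTION are assumptions for illustration, not the slides' method.

```python
# Toy grid world: actions and their (dx, dy) effects.
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

class GoalBasedAgent:
    def __init__(self, goal):
        self.state = None     # what the agent currently sees as the world state
        self.action = None    # the most recent action, initially null
        self.goal = goal      # the goal the agent needs to accomplish

    def update_state(self, percept):
        # UPDATE-STATE: here the percept is the exact position (fully observable).
        return percept

    def best_action(self, state):
        # BEST-ACTION: greedily pick the action whose successor is nearest the goal.
        def dist(p):
            return abs(p[0] - self.goal[0]) + abs(p[1] - self.goal[1])
        return min(ACTIONS, key=lambda a: dist((state[0] + ACTIONS[a][0],
                                                state[1] + ACTIONS[a][1])))

    def __call__(self, percept):
        self.state = self.update_state(percept)
        self.action = self.best_action(self.state)
        return self.action

agent = GoalBasedAgent(goal=(2, 2))
pos = (0, 0)
for _ in range(4):
    a = agent(pos)
    dx, dy = ACTIONS[a]
    pos = (pos[0] + dx, pos[1] + dy)
print(pos)   # (2, 2): the agent has reached its goal
```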
Applications of Goal Based Agents
• Let's say you want to travel from Bangalore to Mysore: a goal-based agent will get you there. Mysore is the goal, and this agent will map a path to reach it. But if you're traveling from Bangalore to Mysore and encounter a closed road, a utility-based agent will kick into gear, analyze the other routes that get you there, and select the best option for maximum utility.
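A small sketch of the distinction: the goal-based choice only needs some open route to Mysore, while the utility-based choice ranks the feasible routes. The route names, travel times, and the utility function are invented for illustration.

```python
# Hypothetical routes from Bangalore to Mysore; values are assumptions.
routes = {
    "NH-275 via Ramanagara": {"time_hr": 3.0, "open": False},  # closed road
    "SH-17 via Kanakapura":  {"time_hr": 3.5, "open": True},
    "Via Channapatna":       {"time_hr": 3.2, "open": True},
}

# A goal-based agent only needs *a* route that reaches the goal.
goal_route = next(name for name, r in routes.items() if r["open"])

# A utility-based agent ranks all feasible routes and picks the best one.
def utility(route):
    return -route["time_hr"] if route["open"] else float("-inf")

best_route = max(routes, key=lambda name: utility(routes[name]))
print(goal_route, "|", best_route)
# SH-17 via Kanakapura | Via Channapatna
```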
Learning Agents
• Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
• Problem generator: this component is responsible for suggesting actions that will lead to new and informative experiences.
Applications of Learning Agents
• Learning agents are able to learn, analyze their own performance, and look for new ways to improve it. Any agent designed to succeed in an uncertain environment is considered a learning agent.
• A human is an example of a learning agent: a human can learn to ride a bicycle, even though, at birth, no human possesses this skill.
• Search engines (e.g., Google)
• Computer vision
• Gesture recognition, etc.
QUIZ!
1. Perceiving
2. Learning
3. Observing

1. Simple-Action rule
2. Condition-Action rule

3. Learning agent

3. Reaching the initial state again after reaching the goal state
Preet Kanwal
Department of Computer Science & Engineering
preetkanwal@pes.edu