
AI agents

All the slides contain running sentences for easy understanding.
AI currently encompasses a huge variety of subfields, ranging from
general learning to specific tasks such as playing chess, proving
mathematical theorems, writing poetry, driving a car on a crowded
street, and diagnosing diseases.

Can we combine Darwinian evolution + AI?


AI definitions
• Thinking Humanly: "the exciting new effort to make computers
think… machines with minds, in the full and literal sense." (Haugeland, 1985)
• Thinking Rationally: "the study of mental faculties through the use
of computational models." (McDermott, 1985)
• Acting Humanly: "the study of how to make computers do things at
which, at the moment, people are better." (Knight, 1991)
• Acting Rationally: "Computational intelligence is the study of the
design of intelligent systems." (Poole et al., 1998)
Potted history of AI
Geoffrey Everest Hinton

AIMA slide (1998)


Agent and Environment
• Agent: anything that can be viewed as perceiving its
environment through sensors and acting upon that
environment through actuators.
"Abstractly, an agent is a function from percept histories to actions."
Agent interaction with environment
What does an agent look like?
• Human agent: eyes, ears and other organs for sensors; hands,
legs, vocal tract and so on for actuators.
• Robotic agent: camera and infrared range finder for sensors and
various motors for actuators.
• Software agent: receives keystrokes, file contents and network
packets as sensory inputs and acts on the environment by
displaying on the screen, writing files and sending network
packets.
Intelligent Agent Design
• Percepts ??
• Action??
• Goals??
• Environment ??
A Robot Vacuum Cleaner

Percepts; Action; Goals; Environment


Percepts and Percept Sequences
• A percept is a complete set of readings from all of the agent’s sensors
at an instant in time
• Robot vacuum cleaner: this will consist of its location and whether the floor is
clean or dirty
• Example percept: [A, dirty]
• A percept sequence is a complete ordered list of the percepts that the
agent received since time began
• Example: [ [A, dirty], [A, dirty], [A, clean], [B, dirty], … ]

Agent's behaviour: the agent function maps any given percept
sequence to an action.
Agent Function
• An agent function is a theoretical device which maps from any
possible percept sequence to an action.
• Intelligent agent design specification
• Example: designing an automated taxi:
• Percepts: video, accelerometers, gauges, engine sensors, GPS, …
• Actions: steer, accelerate, brake, horn, speak/display
• Goals: safety, reach destination, maximize profit, passenger comfort
• Environment: urban streets, freeways, traffic, pedestrians, weather
If an agent relies on the prior knowledge of its designer rather than
on its own percepts, we say that the agent lacks autonomy.

E.g., an RL agent modelled as a Markov decision process.


Given an agent to experiment with, we can, in principle, construct the table
by trying out all possible percept sequences and recording which action the
agent performs in response.
This table is an external characterization of the agent.
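The table-based characterization above can be sketched as a table-driven agent program. A minimal Python sketch, where the percepts and table entries are illustrative (a real table would cover every possible percept sequence):

```python
# A minimal sketch of a table-driven agent: the table maps each percept
# sequence observed so far to an action. Percepts and actions below are
# illustrative, not from the slides.
percept_sequence = []

table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "move-right",
    (("A", "dirty"), ("A", "clean")): "move-right",
}

def table_driven_agent(percept):
    """Append the new percept and look up the whole sequence in the table."""
    percept_sequence.append(percept)
    return table.get(tuple(percept_sequence))
```

Note that the table grows exponentially with the length of the percept sequence, which is why this is only a theoretical device.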

The agent function is an abstract mathematical description; the agent
program is a concrete implementation, running on some physical
system.
• Example: vacuum cleaner
• This particular world has two locations: squares A and B.
• The VC perceives which square it is in and whether there is dirt in the square.
• It can choose to move left, move right, suck up the dirt, or do nothing.
• Agent function: if the current square is dirty, then suck; otherwise move to
the other square.
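The vacuum agent function above can be written directly as a short program (a sketch; the percept format `(location, status)` is assumed):

```python
def reflex_vacuum_agent(percept):
    """If the current square is dirty, suck; otherwise move to the
    other square. Percept format (location, status) is assumed."""
    location, status = percept
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"
```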

Tabulation of agent function


Automated taxi driver
• What is the Performance Measure to which we would like our
automated driver to aspire?
• Getting to the correct destination
• Minimizing fuel consumption and wear and tear.
• Minimizing the trip time or cost
• Minimizing violations of traffic laws and disturbance to other drivers
• Maximizing safety and passenger comfort
• Maximizing profits
• What is the driving environment that the taxi will face?
• The taxi driver needs to deal with a variety of roads, ranging from rural lanes
and urban alleys to 12-lane freeways.
• The taxi must also interact with potential and actual passengers.
• Actuators for an automated taxi include those available to a human
driver: control over the engine through the accelerator, and control
over steering and braking.
• Sensors for the taxi will include one or more controllable video
cameras so that it can see the road; it might augment these with
infrared or sonar sensors to detect distances to other cars and
obstacles.
Specifying the task environment

Performance measure, Environment, Actuators, and Sensors (PEAS)


Structure of Agents
• So far we have understood an agent by describing its behaviour: the action
that is performed after any given sequence of percepts.
• The job of AI is to design an agent program that implements the agent
function: the mapping from percepts to actions.
• We assume this program will run on some sort of computing device
with physical sensors and actuators; we call this the architecture.
• Agent = architecture + program
Agent programs
• The program we choose has to be one that is appropriate for the
architecture.
• If the program is going to recommend actions like Walk, the
architecture had better have legs.
• The architecture might be an ordinary PC, or it might be a robotic
platform with several onboard cameras and other sensors.
• The agent program takes the current percept as input from the sensors
and returns an action to the actuators.
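The percept-in/action-out coupling of program and architecture can be sketched as a sense-act loop. A minimal illustration, where the toy environment class and its `percept()`/`execute()` methods are assumptions for the example, not from the slides:

```python
class TwoSquareVacuumWorld:
    """Toy two-square environment (an illustrative stand-in)."""
    def __init__(self):
        self.location = "A"
        self.dirt = {"A": True, "B": True}

    def percept(self):
        """Sensors: current location and whether it is dirty."""
        return (self.location, "dirty" if self.dirt[self.location] else "clean")

    def execute(self, action):
        """Actuators: apply the agent's chosen action to the world."""
        if action == "suck":
            self.dirt[self.location] = False
        elif action == "right":
            self.location = "B"
        elif action == "left":
            self.location = "A"

def reflex_agent(percept):
    """Suck if dirty, otherwise move to the other square."""
    location, status = percept
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

def run(agent_program, env, steps):
    """Sense-act loop: percept in from sensors, action out to actuators."""
    for _ in range(steps):
        env.execute(agent_program(env.percept()))
```

Running `run(reflex_agent, TwoSquareVacuumWorld(), 4)` cleans both squares.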

Robotic arms and their applications in pharmacy


Types of agents
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
Simple reflex agents
• The simplest kind of agent.
• Selects actions on the basis of the current percept, ignoring the rest of
the percept history.
• E.g., the vacuum agent is a simple reflex agent:
• its decision is based only on the current location and on whether that location
contains dirt.
• Simple reflex behaviours occur even in more complex environments.
• E.g., if the car in front brakes and its brake lights come on:
• processing is done on the visual input to establish the condition "the car in
front is braking",
• which triggers the agent program to the action "initiate braking".
• This connection is called a condition-action rule:
• If car-in-front-is-braking then initiate-braking.
• Code: specific to the vacuum cleaner.
• A more flexible approach is to first build a general-purpose interpreter for
condition-action rules and then create rule sets for specific task
environments.
• Rectangles denote the current internal state of the agent's decision
process.
• Ovals represent the background information used in the process.
• The INTERPRET-INPUT function generates an abstract description of the
current state from the percept.
• RULE-MATCH returns the first rule in the set of rules that matches the
given state description.
• In real systems these are implemented as a collection of logic gates
forming a Boolean circuit.
• It is not always possible to tell from a single image whether the car is
braking; in the worst case, a simple reflex agent would never brake at all.
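The INTERPRET-INPUT / RULE-MATCH scheme can be sketched as a general-purpose rule interpreter. The rule contents and state keys below are illustrative assumptions:

```python
# Sketch of a general-purpose condition-action-rule interpreter.
# Rules are (condition, action) pairs; contents are illustrative.
RULES = [
    (lambda state: state.get("car-in-front-is-braking"), "initiate-braking"),
    (lambda state: True, "keep-driving"),  # illustrative default rule
]

def interpret_input(percept):
    """INTERPRET-INPUT: build an abstract state description from the
    percept (stubbed here: the percept already is the description)."""
    return percept

def rule_match(state, rules):
    """RULE-MATCH: return the action of the first rule whose condition
    matches the given state description."""
    for condition, action in rules:
        if condition(state):
            return action

def simple_reflex_agent(percept):
    return rule_match(interpret_input(percept), RULES)
```

New task environments then only need a new rule set, not a new interpreter.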
Model-based reflex agents
• One way to handle partial observability is for the agent to keep track of
the part of the world it cannot see now.
• I.e., the agent should maintain some sort of internal state based on the percept
history, thereby reflecting at least some of the unobserved aspects of the
current state.
• Braking-problem scenario: the internal state need not be too extensive, just the
previous frame from the camera, allowing the agent to detect when the two red
lights at the edge of the vehicle go on or off simultaneously.

16-02-2024 30
• Updating the internal state requires two kinds of knowledge to be encoded
in the agent program.
• First, we need information about how the world evolves independently of the agent.
• E.g., an overtaking car generally will be closer behind than it was a moment ago.
• Second, we need some information about how the agent's own actions affect the
world.
• E.g., when the agent turns the steering wheel clockwise, the car turns to the right.
• This knowledge about "how the world works", whether implemented in simple
Boolean circuits or in complete scientific theories, is called a model of the world.
• An agent that uses such a model is called a model-based agent.

• The update based on the current percept is combined with the old internal
state to form the agent's best guess.
• E.g., an automated taxi may not be able to see around
the large truck that has stopped in front of it and
can only guess about what may be causing the
hold-up.
• Thus uncertainty about the current state may be
unavoidable, but the agent still has to make a
decision.

• UPDATE-STATE: responsible for creating the new internal-state
description.
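For the braking example, UPDATE-STATE can be sketched as follows. The state layout (previous-frame brake lights) is an illustrative assumption, following the slide's idea that the internal state is just the previous camera frame:

```python
def update_state(state, percept):
    """UPDATE-STATE sketch for the braking example: keep the previous
    frame's brake-light reading so the agent can detect the lights
    switching on. State/percept keys are illustrative assumptions."""
    new_state = {
        "prev_brake_lights": state.get("brake_lights", False),
        "brake_lights": percept["brake_lights"],
    }
    # Braking is inferred when the lights go on between two frames.
    new_state["car_in_front_is_braking"] = (
        new_state["brake_lights"] and not new_state["prev_brake_lights"]
    )
    return new_state
```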

• Case Study - https://www.ibm.com/thought-leadership/institute-
business-value/en-us/technology/automation-and-robotics
• This is part of your assignment
• Team size : max 4
• Note : team participation is compulsory.
Goal-based agents
• Knowing the current state of the environment is not always enough to
decide what to do.
• E.g., at a road junction, the taxi can turn left, turn right, or go straight on.
The correct decision depends on where the taxi is trying to get to.
• In other words, as well as a current state description, the agent needs
some sort of goal information that describes situations that are
desirable.

• Sometimes goal-based action selection is straightforward, e.g., when
goal satisfaction results immediately from a single action. Sometimes
it is trickier, for example when the agent has to consider long
sequences of twists and turns in order to find a way to achieve the
goal.
• Search and planning are the subfields of AI devoted to finding action
sequences that achieve the agent's goals.
• The difference between this approach and condition-action rules is that it
involves consideration of the future, i.e., "what will happen if I do such
and such?"
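The "what will happen if I do such and such?" idea can be sketched as a one-step lookahead at the road junction. The transition model and destination names below are hypothetical illustrations:

```python
# One-step lookahead: predict the result of each action and pick the
# one that satisfies the goal. The transition model is illustrative.
TRANSITIONS = {
    ("junction", "turn-left"): "airport",
    ("junction", "turn-right"): "station",
    ("junction", "go-straight"): "city-centre",
}

def result(state, action):
    """Predicted outcome of taking `action` in `state`."""
    return TRANSITIONS.get((state, action), state)

def goal_based_agent(state, goal, actions):
    """Return the first action whose predicted result satisfies the goal."""
    for action in actions:
        if result(state, action) == goal:
            return action

ACTIONS = ["turn-left", "turn-right", "go-straight"]
```

Search and planning generalize this lookahead to long action sequences rather than a single step.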
Utility-based agents
How the components of agent programs work
• So far we have described agent programs in high-level terms.
• We can place representations along an axis of increasing complexity and
expressive power:
• Atomic
• Factored
• Structured
• Atomic representation
• Each state of the world is indivisible: it has no internal structure.
• It acts as a single atom of knowledge: a "black box" whose only discernible
property is that of being identical to or different from another black box.
• HMMs and Markov decision processes all work with atomic representations.
Factored representation
• Splits up each state into a fixed set of variables, each of which can
have a value.
• Two atomic states have nothing in common: they are just different
black boxes.
• Two different factored states can share some attributes and not others.
• Advantage: this makes it easier to work out how to turn one state into another.
• Factored representations can also represent uncertainty.
• Many areas of AI are based on factored representations, including:
• constraint satisfaction algorithms, propositional logic,
Bayesian networks, and machine learning algorithms.
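The contrast between atomic and factored states can be illustrated with variable/value maps. The attribute names below are illustrative assumptions:

```python
# Factored states as variable/value maps: unlike atomic "black boxes",
# two factored states can share some attributes and differ in others.
state_a = {"location": "A", "fuel": 0.5, "lane": 2}
state_b = {"location": "B", "fuel": 0.5, "lane": 2}

def shared_attributes(s1, s2):
    """Attributes on which two factored states agree; this comparison
    is impossible for atomic states, which are opaque."""
    return {k for k in s1 if k in s2 and s1[k] == s2[k]}
```

Here the two states differ only in `location`, which is exactly the kind of information that makes it easier to work out how to turn one state into another.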
Structured representation
• For many purposes, we need to understand the world as having things in it
that are related to each other, not just variables with values.
• E.g., in a structured representation, objects such as cows and trucks
and their various and varying relationships can be described explicitly.
• Structured representations underlie relational databases, first-order
logic, and first-order probability models.
• In general, the axis along which atomic, factored and structured
representations lie is the axis of increasing expressiveness.
