MODULE 3
Agents & Environments
An agent is anything that can perceive its environment through sensors and act upon
that environment through effectors.
• A human agent has sensory organs such as eyes, ears, nose, tongue, and skin for
sensors, and other organs such as hands, legs, and mouth for effectors.
• A robotic agent has cameras and infrared range finders for sensors, and
various motors and actuators for effectors.
• A software agent has encoded bit strings as its programs and actions.
Agent Terminology
Rationality
Rationality is the state of being reasonable, sensible, and having good judgment.
Rationality is concerned with the actions and results expected given what the
agent has perceived. Performing actions with the aim of obtaining useful information is
an important part of rationality.
An ideal rational agent is one that is capable of performing the expected actions to
maximize its performance measure, on the basis of its percept sequence and its
built-in knowledge.
Learning Activity 1
1. In a robotic agent, what do sensors and effectors mean? Give
examples.
2. When can you say that a system is an ideal rational agent?
Simple Reflex Agent
A simple reflex agent is the most basic of the intelligent agents. It
performs actions based only on the current situation. When something happens in the
environment of a simple reflex agent, the agent quickly scans its knowledge
base for how to respond to the situation at hand based on predetermined
rules.
Simple reflex agents are just that: simple. They cannot compute complex
equations or solve complicated problems. They work only in environments that
are fully observable in the current percept, ignoring any percept history. If you
have a smart light bulb, for example, set to turn on at 6 p.m. every night, the
light bulb will not recognize that the days are longer in summer and the lamp
is not needed until much later. It will continue to turn the lamp on at 6 p.m.
because that is the rule it follows. Simple reflex agents are built on the
condition-action rule.
Condition-Action Rule − It is a rule that maps a state (condition) to an action.
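The condition-action rule can be sketched in code using the smart light bulb from the paragraph above. This is a minimal illustration, not any real device's API; the function and percept names are invented for the example.

```python
# A simple reflex agent: the smart light bulb described above.
# It maps the current percept directly to an action via a condition-action
# rule, ignoring all percept history (which is why it cannot adapt to
# longer summer days).

def simple_reflex_light(percept):
    """Return an action based only on the current percept."""
    hour = percept["hour"]     # the only thing this agent observes
    if hour >= 18:             # condition: it is 6 p.m. or later
        return "turn_on"       # action mapped to that condition
    return "turn_off"

print(simple_reflex_light({"hour": 18}))  # turn_on
print(simple_reflex_light({"hour": 12}))  # turn_off
```

Note that the rule fires on the current percept alone: no matter what happened yesterday, 6 p.m. always triggers the same action.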
Learning Activity 2
Give an example of a simple reflex agent that can be deployed in your house.
Explain its concept.
Model-Based Reflex Agent
A model-based reflex agent is one that uses its percept history and its internal
memory to make decisions about an internal "model" of the world around it. Internal
memory allows these agents to store some of their navigation history, and then
use that history to reason about their current environment, even when everything
they need to know cannot be directly observed. What's a percept, you ask? A
percept is the agent's perceptual input at a given instant.
Model-based reflex agents use a model of the world to choose their actions, and
they maintain an internal state.
Model − knowledge about "how things happen in the world."
Internal State − a representation of the unobserved aspects of the current
state, based on the percept history.
Updating the state requires information about how the world evolves independently
of the agent, and about how the agent's own actions affect the world.
With Waymo's self-driving car, for example, the model-based agent uses GPS to
understand its location and to anticipate the behavior of nearby drivers. You and I
take for granted that, when the brake lights of the car ahead of us come on, the
driver has hit the brakes and so the car in front of us is going to slow down. But
there is no reason to associate a red light with the deceleration of a vehicle
unless you are used to seeing those two things happen at the same time. So the
Waymo car can learn that it needs to hit the brakes by drawing on its percept
history: it learns to associate red brake lights just ahead with the need to slow
itself down. Another common task arises when the Waymo car decides it needs to
change lanes. Should it just shimmy over as though it were the only car on the
road? No: it uses the same processes to estimate whether any other cars might be
in the path of its intended lane change, to avoid causing a collision.
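The brake-light example can be sketched as a tiny model-based agent. The class, percept keys, and the model itself are invented for illustration; a real self-driving stack is vastly more complex.

```python
# A model-based reflex agent: it keeps an internal state built from the
# percept history and uses a simple "model" of the world (brake lights
# ahead usually mean the lead car is slowing) to decide what to do, even
# when the current percept alone says nothing.

class ModelBasedAgent:
    def __init__(self):
        # Internal state: the agent's guess about unobserved aspects of
        # the world, maintained across percepts.
        self.state = {"lead_car_braking": False}

    def update_state(self, percept):
        # Model: how percepts relate to what is happening in the world.
        if percept.get("brake_lights_ahead"):
            self.state["lead_car_braking"] = True
        elif percept.get("lead_car_speed_steady"):
            self.state["lead_car_braking"] = False

    def act(self, percept):
        self.update_state(percept)
        # Decide using the internal state, not just the raw percept.
        return "brake" if self.state["lead_car_braking"] else "maintain_speed"

agent = ModelBasedAgent()
print(agent.act({"brake_lights_ahead": True}))     # brake
print(agent.act({}))                               # still brake: state persists
print(agent.act({"lead_car_speed_steady": True}))  # maintain_speed
```

The middle call is the key contrast with a simple reflex agent: the current percept is empty, yet the agent still brakes, because its internal state remembers what it perceived earlier.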
Learning Activity 3
Give an example of a model-based reflex agent that could be deployed at Medina
College. Explain its concept.
Goal-Based Agent
In life, to get things done, we set goals for ourselves to achieve; these goals push
us to make the right decisions when we need to. A simple example is the
shopping list: our goal is to pick up everything on that list. This makes it easier to
decide if you need to choose between milk and orange juice because you can only
afford one: milk is on our shopping list and the orange juice is not, so we
choose the milk.
So an intelligent agent needs a set of goals describing desirable situations.
The agent can combine these goals with a set of actions and their predicted
outcomes to see which action(s) achieve its goal(s).
Although the goal-based agent does a lot more work than the reflex agent,
this makes it much more flexible, because the knowledge used for decision making
is represented explicitly and can be modified. For example, if our Mars lander
needed to get up a hill, the agent could update its knowledge of how much power to
put into the wheels to reach certain speeds, and all relevant behaviors would
then automatically follow from the new knowledge about moving. In a reflex agent,
by contrast, many condition-action rules would have to be rewritten.
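The shopping-list decision above can be sketched as a goal-based agent. The function and data are illustrative assumptions; the point is that the goals are an explicit, modifiable data structure rather than hard-coded rules.

```python
# A goal-based agent: it evaluates candidate actions against an explicit
# set of goals. Changing the goal set (the shopping list) changes the
# agent's behavior without rewriting any condition-action rules.

def choose_purchase(options, goals, budget):
    """Pick the first affordable option that achieves a goal, if any."""
    for item, price in options:
        if item in goals and price <= budget:
            return item
    return None  # no action achieves a goal within budget

goals = {"milk", "bread"}                      # the shopping list
options = [("orange juice", 3), ("milk", 2)]   # (item, price) choices
print(choose_purchase(options, goals, budget=3))  # milk
```

Swapping in a different `goals` set changes the decision immediately, which is the flexibility the paragraph above describes.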
Learning Activity 4
Based on your example of a model-based reflex agent (Learning Activity 3), how can
you improve it so that it qualifies as a goal-based agent?
Utility-Based Agent
Agents that choose actions based on a preference (utility) for each state are called
utility-based agents. When there are multiple possible alternatives, a utility-based
agent is used to decide which one is best. Sometimes achieving the desired goal is not
enough: we may look for a quicker, safer, or cheaper trip to reach a destination. The
agent's happiness should be taken into consideration; utility describes how "happy"
the agent is. Because of the uncertainty in the world, a utility agent chooses the
action that maximizes the expected utility. A utility function maps a state onto a
real number that describes the associated degree of happiness.
For example, picture our Mars lander on the surface of Mars with an obstacle in its
way. In a goal-based agent it is uncertain which path the agent will take, and some
paths are clearly less efficient than others; in a utility-based agent, the best path
yields the highest output from the utility function, and that path will be chosen.
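The path choice above can be sketched with a toy utility function. The weights and path data are invented for illustration; only the pattern (score every alternative, pick the maximum) comes from the text.

```python
# A utility-based agent: each candidate path is mapped to a real number
# by a utility function, and the agent picks the path that maximizes it.

def utility(path):
    # Shorter and safer paths score higher. The weight on risk (10) is
    # an arbitrary assumption for this sketch.
    return -path["length"] - 10 * path["risk"]

def choose_path(paths):
    # Choose the alternative with the highest utility.
    return max(paths, key=utility)

paths = [
    {"name": "left",  "length": 12, "risk": 0.1},   # utility: -13.0
    {"name": "right", "length": 9,  "risk": 0.5},   # utility: -14.0
]
print(choose_path(paths)["name"])  # left
```

Note the contrast with a goal-based agent: both paths reach the goal, but the utility function breaks the tie by quantifying how desirable each outcome is.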
Learning Activity 5
Give an example of Utility based agent that can be deployed in Ipil, Zamboanga
Sibugay. Explain the concept.
Lesson 4 The Nature of Environments
Turing Test
The success of a system's intelligent behavior can be measured with the Turing Test.
Two people and the machine to be evaluated participate in the test. One of the two
people plays the role of the tester. Each participant sits in a different room. The
tester does not know which respondent is the machine and which is the human. The
tester interrogates both by typing questions and sending them, and receives typed
responses in return.
The test aims at fooling the tester: if the tester fails to distinguish the machine's
responses from the human's, then the machine is said to be intelligent.
Properties of Environment
Learning Activity 6
Draw an illustration of the scenario described in the Turing Test.