
Artificial Intelligence

Module No. 01
Introduction to Intelligent Systems and Intelligent Agents

• Introduction to AI, AI Problems and AI techniques, Solving problems by searching, Problem Formulation, State Space Representation
• Structure of Intelligent Agents, Types of Agents, Agent Environments, PEAS representation for an Agent
What is AI?
Acting humanly: The Turing Test approach
The computer passes the test if a human interrogator, after posing some written questions,
cannot tell whether the written responses come from a person or from a machine.

A computer program that pretended to be a 13-year-old Ukrainian boy named Eugene
Goostman passed a Turing test at the Royal Society in London on 6 June 2014 by convincing
33 percent of the judges that it was human during a five-minute typed conversation.
Acting humanly: The Turing Test approach
The computer would need to possess the following capabilities:
• Natural language processing to enable it to communicate successfully in English.
• Knowledge representation to store what it knows or hears.
• Automated reasoning to use the stored information to answer questions and to draw new conclusions.
• Machine learning to adapt to new circumstances and to detect and extrapolate patterns.
Acting humanly: The Turing Test approach
• The Total Turing Test includes a video signal so that the interrogator can
test the subject's perceptual abilities.

To pass the Total Turing Test, the computer will also need:

• Computer vision to perceive objects.
• Robotics to manipulate objects and move about.
Thinking humanly: The cognitive modeling
approach
• Get inside the actual workings of human minds: cognitive modeling.
Cognition: mental processes such as attention, memory, producing and understanding
language, reasoning, problem solving, and decision making.

There are two ways to do this:

• Introspection: trying to catch our own thoughts as they go by.
• Psychological experiments.

The field of cognitive science brings together computer models from AI and
experimental techniques from psychology to try to construct precise and testable
theories of the workings of the human mind.
Thinking rationally: The "laws of thought"
approach
• The laws of thought were supposed to govern the operation of the mind;
their study initiated the field called logic. The Greek philosopher
Aristotle was one of the first to attempt to codify "right thinking."

• Syllogisms provided patterns for argument structures that always
yielded correct conclusions when given correct premises. For example:
"Socrates is a man; all men are mortal;
therefore, Socrates is mortal."
Acting rationally: The rational agent
approach
• A RATIONAL AGENT is one that acts so as to achieve the best outcome or,
when there is uncertainty, the best expected outcome.

Rationality of an agent depends mainly on the following four parameters:

• The degree of success, determined by the performance measure.
• The percept sequence that the agent has perceived to date.
• Prior knowledge of the environment gained by the agent to date.
• The actions that the agent can perform in the environment.
AI in Human Lives
• Eliza
• Hala
• Sophia
• Google DeepMind
• Vera
• Google Map
• Google Keyboard
• Google Assistant
• Google Translate
• Siri
• Cortana
• Alexa
• Google Home
Intelligent agent
An agent is defined as anything that sees (perceives) and acts in an
environment.
Agent terminology

• Performance measure of agent: the criteria determining the success of
an agent.
• Behaviour/action of agent: the action performed by an agent after any
specified sequence of percepts.
• Percept: an agent's perceptual inputs at a specified instance.
• Percept sequence: the history of everything that an agent has
perceived to date.
• Agent function: a map from the percept sequence to an action.
Agent function: a = F(p)
where p is the current percept, a is the action carried out, and F is the agent
function.
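The mapping a = F(p) can be sketched as a plain function from the percept sequence to an action; the percept and action values below are hypothetical placeholders, not part of the slides:

```python
# A minimal sketch of the agent function a = F(p): a map from the
# percept sequence seen so far to an action. The percept and action
# names here are illustrative only.

def F(percept_sequence):
    """Return an action for the given percept sequence."""
    p = percept_sequence[-1]            # the current percept
    return "act" if p == "stimulus" else "wait"

history = ["noise", "stimulus"]
a = F(history)                          # the action carried out
```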
Environment of problem
The vacuum agent perceives which square it is in and whether there is dirt
in the square. It may choose to move left, move right, suck up the dirt, or do
nothing. For this task, one very simple agent function is the following: if the
current square is dirty, then suck; otherwise, move to the other square. Hence,
we can write:
Percepts: location and status, e.g., [A, Dirty]
Actions: left, right, suck and no-op (do nothing)
Vacuum cleaner problem

Performance measure of vacuum cleaner agent: all the rooms are well
cleaned.

Behaviour/action of agent: left, right, suck and no-op (doing nothing).

Percept: location and status, for example, [A, Dirty].

Agent function: mapping of percept sequence to an action.
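The agent function described above ("if dirty, suck; otherwise move to the other square") can be sketched as follows; the string names for percepts and actions are one possible encoding:

```python
# A sketch of the vacuum-cleaner agent function described above.
# Percepts are (location, status) pairs such as ("A", "Dirty");
# actions are "Left", "Right", "Suck", and "NoOp".

def vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"                                 # clean the current square
    return "Right" if location == "A" else "Left"     # move to the other square

action = vacuum_agent(("A", "Dirty"))                 # "Suck"
```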
Structure of agent

The structure of an agent can be represented as:

Agent = Architecture + Agent Program

where,

Architecture: the machinery that the agent executes on.

Agent program: an implementation of the agent function.
Types of agents
1. Table driven agent
2. Simple reflex agent
3. Model based agent
4. Goal based agent
5. Utility based agent
6. Learning agent
1. Table driven agent
• It forms a look-up table between percepts and actions.
• The look-up table is indexed by percept.
• Simplest way to design an agent.
• The table size becomes impracticable for certain applications.
• The complete table needs to be stored in memory.
• Building the table is difficult and takes a long time.
• No intelligence.
• E.g., an automated taxi driver: a camera producing 50 MB/s of image
data yields 50 × 60 × 60 MB of percepts for one hour of driving.
1. Table driven agent
function TABLE-DRIVEN-AGENT(percept) returns an action
    static: percepts, a sequence, initially empty
            table, a table of actions, indexed by percept sequences

    append percept to the end of percepts
    action <-- LOOKUP(percepts, table)
    return action
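The pseudocode above can be made runnable as follows, using the two-square vacuum world as an illustrative table. Note that a real table is indexed by whole percept sequences, which is why it grows so fast:

```python
# A runnable version of the TABLE-DRIVEN-AGENT pseudocode, with a
# hypothetical lookup table for the two-square vacuum world. Only
# length-1 percept sequences are listed here.

percepts = []                          # percept sequence, initially empty

table = {                              # indexed by percept sequences
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
}

def table_driven_agent(percept):
    percepts.append(percept)           # append percept to the sequence
    return table.get(tuple(percepts), "NoOp")

action = table_driven_agent(("A", "Dirty"))   # "Suck"
```

Because the table must cover every possible percept sequence, a second call falls off the (tiny) table, illustrating why complete tables are impracticable.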
2. Simple reflex agent

The simple reflex agent is the simplest kind of agent. These agents select an action based on the
current percept, ignoring the rest of the percept history.
The percept-to-action mappings are known as condition-action rules (also called situation-action
rules, productions, or if-then rules). A rule can be represented as follows:

if {set of percepts} then {set of actions}

For example,

if it is raining then put up umbrella

or

if car in front is braking then initiate braking
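The condition-action rules above can be sketched as a dictionary from percept to action; the rule set and string encodings are illustrative only:

```python
# Condition-action rules as a simple reflex agent: the chosen action
# depends only on the current percept, never on the percept history.
# The rule strings here are illustrative placeholders.

rules = {
    "it is raining": "put up umbrella",
    "car in front is braking": "initiate braking",
}

def simple_reflex_agent(percept):
    # act on the current percept only; fall back to doing nothing
    return rules.get(percept, "do nothing")

action = simple_reflex_agent("it is raining")   # "put up umbrella"
```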
2. Simple reflex agent
Limitations
• The intelligence level of these agents is very limited.
• They work only in a fully observable environment.
• They do not hold any knowledge or information about non-perceptual
parts of the state.
• Because the knowledge base is static, it is usually too big to generate
and store.
• If any change in the environment happens, the collection of rules
must be updated.
3. Model based agent
• A model-based agent is a reflex agent with an internal state. One problem
with simple reflex agents is that their actions depend only on the current
data provided by their sensors. If a reflex agent could keep track of its
past states (that is, maintain a representation or "model" of its history
in the world), it could reason about how the world evolves and handle
situations that the current percept alone does not reveal.
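A minimal sketch of this idea in the vacuum world: the agent remembers the status of squares it has seen, so it can stop once its model says everything is clean. All names are illustrative:

```python
# Sketch of a model-based reflex agent: an internal state is built up
# from past percepts and consulted when choosing the next action.

class ModelBasedVacuumAgent:
    def __init__(self):
        self.state = {}                          # internal model: square -> status

    def act(self, percept):
        location, status = percept
        self.state[location] = status            # update the model from the percept
        if status == "Dirty":
            return "Suck"
        other = "B" if location == "A" else "A"
        if self.state.get(other) == "Clean":     # model says all squares are clean
            return "NoOp"
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
```

A simple reflex agent would keep shuttling between clean squares forever; the internal state is what lets this agent choose "NoOp".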
4. Goal based agent

• Even with the expanded knowledge of the current state of the world
given by an agent's internal state, the agent may still not have
enough information to decide what to do. The proper action for the
agent will regularly depend upon its goals. Thus, it must be provided
with some goal information.
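The idea can be sketched as an agent that predicts the result of each action with a toy world model and picks one that satisfies the goal; the number-line world below is purely illustrative:

```python
# Sketch of a goal-based agent: it considers what each action would do
# (via a hypothetical world model) and picks an action whose predicted
# result matches the goal.

def result(state, action):
    """Toy model of how the world evolves under each action."""
    return state + {"left": -1, "right": +1, "stay": 0}[action]

def goal_based_agent(state, goal):
    for action in ("left", "right", "stay"):
        if result(state, action) == goal:        # this action achieves the goal
            return action
    return "right" if goal > state else "left"   # otherwise move toward the goal

action = goal_based_agent(2, 3)                  # "right"
```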
5. Utility based agent

Goals alone are insufficient to produce high-quality behavior.
Frequently, there are numerous sequences of actions that can bring
about the same goal. Given proper criteria, it might be possible to
pick the 'best' sequence of actions from among the many that all
result in the goal being achieved.

A utility-based agent can be described as having a utility function
that maps a state, or sequence of states, to a real number that
represents its utility or usefulness.
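A minimal sketch, reusing the toy number-line world: the utility function scores predicted outcomes, and the agent picks the highest-scoring action. Both the model and the utility function are illustrative assumptions:

```python
# Sketch of a utility-based agent: a utility function maps states to
# real numbers, and the agent chooses the action whose predicted
# outcome scores highest.

def utility(state):
    return -abs(state - 3)                 # states near 3 are most useful

def result(state, action):
    """Toy model of how the world evolves under each action."""
    return state + {"left": -1, "right": +1, "stay": 0}[action]

def utility_based_agent(state):
    actions = ("left", "right", "stay")
    return max(actions, key=lambda a: utility(result(state, a)))

action = utility_based_agent(1)            # "right": moves toward the best state
```

Unlike a goal-based agent, this agent can rank outcomes that all fall short of the goal and still pick the least bad one.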
6. Learning agent
By actively exploring and experimenting with their environment, the
most powerful agents are able to learn. A learning agent can be
divided into four conceptual components: the Critic, the Learning
Element, the Performance Element, and the Problem Generator.
PEAS for Agent
PEAS for vacuum cleaner
Performance measure: cleanness, efficiency (distance traveled to clean),
battery life, security

Environment: room, table, wood floor, carpet, different obstacles

Actuators: wheels, different brushes, vacuum extractor

Sensors: camera, dirt detection sensor, cliff sensor, bump sensors,
infrared wall sensors
Environment and its properties

1. Accessible (fully observable) and inaccessible (partially observable)
environments
2. Deterministic and nondeterministic (stochastic) environments
3. Episodic and non-episodic (sequential) environments
4. Static versus dynamic environments
5. Discrete versus continuous environments
6. Single-agent versus multiagent environments
Environment and its properties
Task environment    Fully observable?  Deterministic?  Episodic?  Static?  Discrete?  Single agent?
Solitaire           No                 Yes             Yes        Yes      Yes        Yes
Backgammon          Yes                No              No         Yes      Yes        No
Taxi driving        No                 No              No         No       No         No
Internet shopping   No                 No              No         No       Yes       No
Medical diagnosis   No                 No              No         No       No         Yes
Crossword puzzle    Yes                Yes             Yes        Yes      Yes        Yes
English tutor       No                 No              No         No       Yes       No
Image analysis      Yes                Yes             Yes        No       No         Yes
Thank you
