
Artificial Intelligence

What is AI and Agents?


Week 2
Dr. Javed Anjum Sheikh
Mr. Salman Masih
Mr. Sameer
Mr. Adnan
Ms. Rabia Zafar
What is AI?
Class Exercise #0, part B:
– Hopefully you have received the AI definition from
another student in the course.
– Break into groups of 4 people
– In your groups:
• Start by giving a quick introduction: name, year, etc.
• On the additional blank card each group has been given, write
each group member's name and email
• Each person has a card; read the card to the group, and the
group should decide the category:

#2: Think like humans #3: Think rationally


#1: Act like humans #4: Act rationally
What is AI?

Views of AI fall into four different perspectives


--- two dimensions:

1) Thinking versus Acting


2) Human versus Rational (which is “easier”?)

The four quadrants:

– Thought/Reasoning × Human-like Intelligence: 2. Thinking Humanly ("modeling thought / the brain")
– Thought/Reasoning × "Ideal" Intelligence / Pure Rationality: 3. Thinking Rationally
– Behavior/Actions × Human-like Intelligence: 1. Acting Humanly ("behaviorism", "mimics behavior")
– Behavior/Actions × "Ideal" Intelligence / Pure Rationality: 4. Acting Rationally

Which quadrant is closest to a "real" human? Which is furthest?
Different AI Perspectives
1. Systems that act like humans (human acting)
2. Systems that think like humans (human thinking)
3. Systems that think rationally / optimally (rational thinking)
4. Systems that act rationally (rational acting)

Note: A system may be able to act like a human without thinking
like a human! It could easily "fool" us into thinking it was human!
1. Acting Humanly
Behaviorist approach:
not interested in how the results are produced, just in how similar
they are to human results.
Exemplified by the Turing Test (Alan Turing, 1950).

[Quadrant diagram: 1. Acting Humanly (Behavior/Actions × Human-like Intelligence) highlighted; associated with the Turing Test.]
Universality of Computation
Turing's mathematical formulation of the notions of computation
and computability (1936).
(23 June 2012: Turing Centenary.)

The Turing machine is an abstract model of a digital computer,
rich enough to capture any computational process.
A universal Turing machine takes a machine description + input:
a universal information-processing model of a computer.
(Contrast a vending machine: a fixed, single-purpose device.)

von Neumann architecture (1947)
The architecture of modern computers: data and program are stored
in the computer's memory. (Inspired by Turing's model.)
Acting humanly: Turing Test
Turing (1950), "Computing Machinery and Intelligence"
"Can machines think?" → "Can machines behave intelligently?"
– Operational test for intelligent behavior: the Imitation Game.
An AI system passes if the interrogator cannot tell which one is
the machine (interaction via written questions).
No computer vision, robotics, or physical presence required!

Turing predicted that by 2000, a machine might have a 30% chance of
fooling a lay person for 5 minutes. Achieved. (Siri!)

But, by scientific consensus, we are still several decades away
from truly passing the Turing test (as the test was intended).
#2: Think Like Humans

How the computer performs its functions does matter:
comparison of the traces of the reasoning steps.
Exemplified by:
• General Problem Solver (Newell and Simon)
• Cognitive science → testable theories of the workings of the human mind
• Neural networks
• Reinforcement learning

But:
• some early research conflated good algorithm performance with
being human-like (and vice versa)
• do we want to duplicate human imperfections?
2. Thinking Humanly

[Quadrant diagram: 2. Thinking Humanly (Thought/Reasoning × Human-like Intelligence) highlighted: cognitive modeling.]
Thinking humanly:
modeling cognitive processes
Requires scientific theories of the internal activities of the brain.

1) Cognitive science (top-down): computer models + experimental
techniques from psychology
→ predicting and testing the behavior of human subjects

2) Cognitive neuroscience (bottom-up)
→ direct identification from neurological data

Distinct disciplines, but especially 2) has become very active.
Connection to AI: neural nets. (Large Google / MSR / Facebook
AI lab efforts.)
Wonderful (little) book:
The Sciences of the Artificial
by Herb Simon

One of the founders of AI; Nobel Prize in economics. How to build
decision-making machines operating in complex environments. Theory
of information processing systems. First to move computers from
"number crunchers" (fancy calculators) to "symbolic processing."
Another absolute classic:
The Computer and the Brain
by John von Neumann.

Renowned mathematician and the father of modern computing.
#3: Thinking rationally

Exemplified by the "laws of thought."
Aristotle: what are correct arguments/thought processes?
Several Greek schools developed various forms of logic: notation
and rules of derivation for thoughts.
A direct line runs through mathematics and philosophy to modern AI.

Problems:
1. Not easy to translate an informal real-world problem into
formal terms (problem formulation is difficult).
2. While we may be able to solve the problem in principle
(i.e., it is decidable), in practice we may not get the answer
in a reasonable amount of time (computationally intractable).
3. Thinking Rationally

[Quadrant diagram: 3. Thinking Rationally (Thought/Reasoning × "Ideal" Intelligence) highlighted: formalizing the "Laws of Thought."]
Thinking rationally:
formalizing the "laws of thought”
Long and rich history!
Logic: Making the right inferences!
Remarkably effective in science, math, and engineering.

Several Greek schools developed various forms of logic:


notation and rules of derivation for thoughts.
Aristotle: what are correct arguments/thought processes?
(characterization of “right thinking”).
Syllogisms (Aristotle):
All men are mortal
Socrates is a man
--------------------------
Therefore, Socrates is mortal
Can we mechanize it? (strip interpretation)
Use: legal cases, diplomacy, ethics etc. (?)
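The syllogism above can indeed be mechanized. A minimal sketch in Python, using a toy forward-chaining loop over ground facts; the fact/rule encoding here is purely illustrative, not any standard library:

```python
# Toy forward chaining: apply rules (premises -> conclusion) until
# no new facts can be derived. Facts are plain strings.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires when all its premises are known facts.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Ground instance of the syllogism: man(x) -> mortal(x), for Socrates.
facts = {"man(Socrates)"}
rules = [({"man(Socrates)"}, "mortal(Socrates)")]
derived = forward_chain(facts, rules)
print(derived)  # contains "mortal(Socrates)"
```

Stripping interpretation in this way is exactly what lets the derivation be checked mechanically: the loop never looks at what "man" or "mortal" mean.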
More contemporary logicians (e.g. Boole, Frege, and Tarski).
Ambition: Developing the “language of thought.”
Direct line through mathematics and philosophy to modern AI.

Key notion:
Inference derives new information from stored facts.
Axioms can be very compact: e.g., much of mathematics can be
derived from the logical axioms of set theory (Zermelo–Fraenkel
with the axiom of choice).
Also: Gödel's incompleteness.
Limitations:

• Not all intelligent behavior is mediated by logical deliberation
(much appears not to be).

• (Logical) representation of the knowledge underlying intelligence
is quite non-trivial. Studied in the area of "knowledge
representation." Also brings in probabilistic representations,
e.g., Bayesian networks.

• What is the purpose of thinking?

• What thoughts should I have?

#4: Acting Rationally

Rational behavior: do the right thing.
Always make the best decision given what is available (knowledge,
time, resources).
Perfect knowledge, unlimited resources → logical reasoning (#3).
Imperfect knowledge, limited resources → (limited) rationality.
• Connection to economics, operations research, and control theory.
• But this view ignores the role of consciousness, emotions, and
fear of dying in intelligence.
4. Acting Rationally

[Quadrant diagram: 4. Acting Rationally (Behavior/Actions × "Ideal" Intelligence) highlighted, where many current AI advances lie.]
Rational agents
• An agent is an entity that perceives and acts in the world,
i.e., an "autonomous system": a physical robot (e.g., a
self-driving car) or a software robot (e.g., an electronic
trading system).

This course is about designing rational agents.

• For any given class of environments and tasks, we seek the agent
(or class of agents) with the best performance.

• Caveat: computational limitations may make perfect rationality
unachievable
→ design the best program for the given machine resources.
Building Intelligent Machines

I. Building exact models of human cognition:
the view from psychology, cognitive science, and neuroscience.

II. Developing methods to match or exceed human performance in
certain domains, possibly by very different means:
the main focus of current AI.

But I) often provides inspiration for II). Also, neural nets
blur the separation.
Building Rational Agents
PEAS Description to Specify Task Environments

To design a rational agent we need to specify a task environment:
a problem specification for which the agent is a solution.
PEAS: to specify a task environment
P: Performance measure
E: Environment
A: Actuators
S: Sensors

PEAS: Specifying an automated
taxi driver
Performance measure:
?
Environment:
?
Actuators:
?
Sensors:
?

PEAS: Specifying an automated
taxi driver

Performance measure:
safe, fast, legal, comfortable, maximize profits
Environment:
roads, other traffic, pedestrians, customers
Actuators:
steering, accelerator, brake, signal, horn
Sensors:
cameras, sonar, speedometer, GPS
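The completed PEAS description above maps naturally onto a simple data structure. A minimal sketch; the class and field names are illustrative, not a standard API:

```python
# Capturing a PEAS task-environment description as a dataclass.
# Field values restate the automated-taxi example from the slides.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable",
                         "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS"],
)
print(taxi.sensors)
```

Writing the specification down as data makes it easy to compare task environments side by side, as the next two examples do.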

PEAS: Specifying a part picking robot

Performance measure: Percentage of parts in correct bins


Environment: Conveyor belt with parts, bins
Actuators: Jointed arm and hand
Sensors: Camera, joint angle sensors

PEAS: Specifying an interactive English tutor

Performance measure: Maximize student's score on test


Environment: Set of students
Actuators: Screen display (exercises, suggestions,
corrections)
Sensors: Keyboard

Environment types/Properties of Task Environments
Fully observable (vs. partially observable):
The agent's sensors give it access to the complete state of
the environment at each point in time.
– e.g., a taxi agent doesn't have sensors to see what
other drivers are doing/thinking.

Deterministic (vs. stochastic):
The next state of the environment is completely determined by the
current state and the action executed by the agent. (If the
environment is deterministic except for the actions of other
agents, then the environment is strategic.)
• The vacuum world is deterministic, while taxi driving is
stochastic, as one cannot exactly predict the behaviour of traffic.
Environment types
Episodic (vs. sequential):
The agent's experience is divided into atomic "episodes"
(each episode consists of the agent perceiving and then
performing a single action), and the choice of action in
each episode depends only on the episode itself.
• E.g., an agent sorting defective parts on an assembly line is
episodic, while a taxi-driving agent or a chess-playing agent
is sequential.

Static (vs. dynamic):
The environment is unchanged while an agent is deliberating.
– (The environment is semidynamic if the environment itself does
not change with the passage of time but the agent's performance
score does.)
• Taxi driving is dynamic; a crossword-puzzle solver is static.
Environment types – cont’d

Discrete (vs. continuous):
A limited number of distinct, clearly defined percepts and actions.
• e.g., a chess game has a finite number of states
• Taxi driving is a continuous-state and continuous-time problem.

Single agent (vs. multiagent):
An agent operating by itself in an environment.
• An agent solving a crossword puzzle is in a single-agent
environment.
• An agent playing chess is in a two-agent environment.
Examples

• The environment type largely determines the agent design.
• The real world is (of course) partially observable, stochastic,
sequential, dynamic, continuous, and multi-agent.
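The properties above can be tabulated for the example tasks discussed in this section. An illustrative sketch; the labels and groupings are informal readings of the slides, not a standard classification API:

```python
# Environment-property table for three example tasks. "observable"
# is "full"/"partial"; the other entries are booleans.
env_properties = {
    "taxi driving": {"observable": "partial", "deterministic": False,
                     "episodic": False, "static": False,
                     "discrete": False, "single_agent": False},
    "crossword":    {"observable": "full", "deterministic": True,
                     "episodic": False, "static": True,
                     "discrete": True, "single_agent": True},
    "part sorting": {"observable": "full", "deterministic": True,
                     "episodic": True, "static": True,
                     "discrete": True, "single_agent": True},
}
print(env_properties["taxi driving"]["static"])
```

Reading a row of this table is exactly the exercise of choosing an agent design: taxi driving sits at the hard end on every dimension.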
Agent types

Four basic types in order of increasing generality:


Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents

– All of these can be turned into learning agents


Simple reflex agents
Information comes from sensors (percepts)
and updates the agent's current view of the world.
• These agents select actions on the basis of the current percept,
triggering actions through the effectors.
– Condition–action rules let the agent map percepts to actions:
if car-in-front-is-braking then brake
if light-becomes-green then move-forward
if intersection-has-stop-sign then stop
if dirty then suck
Simple reflex agents

Characteristics
• Such agents have limited intelligence.
• Efficient.
• No internal representation for reasoning or inference.
• No strategic planning or learning.
• Not good for multiple, opposing goals.
• Work only if the correct decision can be made on the basis of
the current percept.
Simple reflex agents
function SIMPLE-REFLEX-AGENT(percept) returns an action
static: rules, a set of condition–action rules
state ← INTERPRET-INPUT(percept)
rule ← RULE-MATCH(state, rules)
action ← RULE-ACTION[rule]
return action

Will only work if the environment is fully observable;
otherwise infinite loops may occur.
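The pseudocode above can be made runnable. A minimal Python sketch for the vacuum world; the percept format and rule encoding are assumptions for illustration:

```python
# Runnable version of SIMPLE-REFLEX-AGENT for a toy vacuum world.
# Percept is (location, status); names mirror the pseudocode.

RULES = {"Dirty": "Suck", "Clean": "MoveRight"}  # condition-action rules

def interpret_input(percept):
    # The relevant state here is just the dirt status.
    location, status = percept
    return status

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    action = RULES[state]   # RULE-MATCH and RULE-ACTION combined
    return action

print(simple_reflex_agent(("A", "Dirty")))   # -> Suck
```

Note the agent keeps no memory at all: the same percept always produces the same action, which is why partial observability can trap it in loops.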
Model based reflex agents

These agents keep track of the part of the world they can't see,
to tackle partially observable environments.
To update its state the agent needs two kinds of knowledge:
1. how the world evolves independently of the agent;
e.g., an overtaking car gets closer with time.
2. how the world is affected by the agent's actions;
e.g., if I turn left, what was to my right is now behind me.
Thus a model-based agent works as follows:
– information comes from sensors (percepts)
– based on this, the agent updates its current state of the world
– based on the state of the world and knowledge (memory), it
triggers actions through the effectors
E.g., for a taxi-driving agent: know about the other car's location
in an overtaking scenario; when the agent turns the steering wheel
clockwise, the car turns right.
Turing Test
Interrogator interacts with a computer and a person via a
teletype.
Computer passes the Turing test if interrogator cannot
determine which is which.
Loebner contest: Modern version of Turing Test, held
annually, with a $100,000 prize.
http://www.loebner.net/Prizef/loebner-prize.html
– Participants include a set of humans and a set of computers
and a set of judges.
– Scoring: Rank from least human to most human.
– Highest median rank wins $2000.
– If better than a human, win $100,000. (Nobody yet…)
AI Characterizations

Discipline that systematizes and automates intellectual tasks
to create machines that:

#2: Think like humans #3: Think rationally


#1: Act like humans #4: Act rationally
Model based reflex agents

function REFLEX-AGENT-WITH-STATE(percept) returns an action
static: rules, a set of condition–action rules
state, a description of the current world state
action, the most recent action
state ← UPDATE-STATE(state, action, percept)
rule ← RULE-MATCH(state, rules)
action ← RULE-ACTION[rule]
return action
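The REFLEX-AGENT-WITH-STATE pseudocode above can likewise be made runnable. In this toy sketch the world model simply assumes the world is unchanged when the sensor reports nothing; the model and rules are illustrative assumptions:

```python
# Runnable version of REFLEX-AGENT-WITH-STATE: the agent carries
# internal state so it can act even when the current percept is missing.

class ModelBasedReflexAgent:
    def __init__(self, rules):
        self.rules = rules   # condition-action rules
        self.state = None    # description of the current world state
        self.action = None   # the most recent action

    def update_state(self, percept):
        # Toy world model: if the sensor gives nothing, assume the
        # world has not changed since the last percept.
        if percept is not None:
            self.state = percept
        return self.state

    def __call__(self, percept):
        state = self.update_state(percept)
        self.action = self.rules.get(state, "NoOp")
        return self.action

agent = ModelBasedReflexAgent({"Dirty": "Suck", "Clean": "MoveRight"})
print(agent("Dirty"))   # -> Suck
print(agent(None))      # sensor blackout: the model fills the gap
```

The second call is exactly what the simple reflex agent cannot do: with no percept it would have no basis for action, whereas the stored state carries it through.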

Goal based agents
The current state of the environment is not always enough.
– e.g., at a road junction, the taxi can turn left, turn right,
or go straight.
• The correct decision in such cases depends on where the taxi
is trying to get to.
Major difference: the future is taken into account.
Combining goal information with knowledge of its actions, the
agent can choose those actions that will achieve the goal.
Goal-based agents are much more flexible in responding to a
changing environment and in accepting different goals.
Such agents work as follows:
– information comes from sensors (percepts)
– the agent updates its current state of the world
– based on the state of the world, knowledge (memory), and
goals/intentions, it chooses actions and carries them out
through the effectors.
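Goal-based action selection can be sketched as: among the applicable actions, pick one whose predicted outcome matches the goal. The junction scenario and transition model below are toy assumptions for illustration:

```python
# Toy goal-based agent for the road-junction example: it consults a
# transition model to predict where each action leads, then picks
# the action whose predicted result is the goal.

def predict(state, action):
    transitions = {
        ("junction", "turn_left"): "market",
        ("junction", "turn_right"): "airport",
        ("junction", "go_straight"): "suburb",
    }
    return transitions.get((state, action))

def goal_based_agent(state, goal, actions):
    for action in actions:
        if predict(state, action) == goal:
            return action
    return None  # no action achieves the goal from here

action = goal_based_agent("junction", "airport",
                          ["turn_left", "turn_right", "go_straight"])
print(action)  # -> turn_right
```

Changing the goal changes the chosen action with no change to the rules, which is the flexibility the slide describes.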
Utility based agents
Goals alone are not always enough to generate quality behaviour.
– e.g., different action sequences can take the taxi agent to the
destination (thereby achieving the "goal"), but some may be
quicker, safer, or more economical.
A general performance measure is required to compare different
world states.
A utility function maps a state (or sequence of states) to a real
number; it lets the agent take rational decisions and specify
trade-offs when:
• goals conflict, like speed and safety
• there are several goals, none of which can be achieved with
certainty.
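A utility function for the taxi example might trade off speed against safety. The route data and weights below are made up for illustration; a real agent would maximize expected utility over uncertain outcomes:

```python
# Toy utility function trading off travel time against accident risk.
# Higher utility is better, so both costs enter with a minus sign.

routes = {
    "highway":  {"time_min": 20, "risk": 0.30},
    "backroad": {"time_min": 35, "risk": 0.05},
}

def utility(route, time_weight=1.0, risk_weight=100.0):
    r = routes[route]
    return -time_weight * r["time_min"] - risk_weight * r["risk"]

best = max(routes, key=utility)
print(best)  # -> backroad
```

Note the weights encode the trade-off explicitly: a driver in a hurry would raise `time_weight` and might get the highway instead, which a pure goal test ("reach the destination") could never express.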
Learning agents
A learning agent can be divided into four conceptual components:
– Learning element
• Responsible for making improvements to the performance element
– uses feedback from the critic to determine how the performance
element should be modified to do better
– Performance element
• Responsible for taking external actions
• Selects actions based on percepts
– Critic
• Tells the learning element how well the agent is doing w.r.t.
fixed performance standards
– Problem generator
• Responsible for suggesting actions that will lead to new and
informative experiences.
For a taxi-driver agent:
– The performance element consists of the collection of knowledge
and procedures for selecting driving actions.

– The critic observes the world and passes information to the
learning element, e.g., the reactions of other drivers when the
agent takes a quick left turn from the top lane.
– The learning element can then formulate a rule marking that a
"bad action."

– The problem generator identifies areas of behaviour that could
be improved and suggests experiments, such as trying the brakes
on different road conditions.

– The learning element can also make changes to the "knowledge"
components: observing pairs of successive states lets the agent
learn how the world evolves (e.g., what happens when the brakes
are applied hard on a wet road).
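The four components can be wired together in a minimal loop. Everything below is a toy structural sketch of the component interfaces, not the actual taxi agent:

```python
# Structural sketch of a learning agent: performance element acts,
# critic grades the action against a fixed standard, learning element
# updates the rules, problem generator suggests what to try next.

class LearningAgent:
    def __init__(self):
        self.rules = {}                      # performance element's knowledge
        self.standard = {"Dirty": "Suck"}    # critic's fixed standard

    def performance_element(self, percept):
        return self.rules.get(percept, "NoOp")

    def critic(self, percept, action):
        expected = self.standard.get(percept)
        return expected is None or action == expected

    def learning_element(self, percept, action, ok):
        if not ok:                           # improve the failing rule
            self.rules[percept] = self.standard[percept]

    def problem_generator(self):
        return "Dirty"                       # an informative situation to try

agent = LearningAgent()
percept = agent.problem_generator()
action = agent.performance_element(percept)  # "NoOp": no rule yet
ok = agent.critic(percept, action)           # critic flags it as bad
agent.learning_element(percept, action, ok)  # rule gets learned
print(agent.performance_element("Dirty"))    # -> Suck
```

One pass through the loop turns a wrong default into a correct rule, which is the improvement cycle the slide describes in prose.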
Summary
An agent is something that perceives and acts in an
environment. The agent function specifies the
action taken by the agent in response to any percept
sequence.
• The performance measure evaluates the
behaviour of the agent in the environment. A
rational agent acts to maximise the expected value
of the performance measure.
• Task environments can be fully or partially
observable, deterministic or stochastic, episodic or
sequential, static or dynamic, discrete or continuous,
and single-agent or multiagent
Summary
Simple reflex agents respond directly to percepts,
whereas model-based reflex agents maintain internal
state to track aspects of the world that are not evident in
the current percept.
Goal-based agents act to achieve their goals, and
utility-based agents try to maximize their own
expected “happiness”.
All agents can improve their performance
through learning.
