
UNIT – I
Introduction to Intelligent Systems

 Introduction, History, Foundations and Mathematical treatments
 Problem solving with AI, AI models, Learning aspects in AI
 What is an intelligent agent, Rational agents, Environment types, Types of agents
Introduction

 What is intelligence?
 “The capacity to learn and solve problems”
 “The computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.”
 The ability to think and act rationally.

What is Intelligence?
Intelligence is a property of the mind that includes many related mental abilities, such as the capabilities to
 think
 learn
 reason
 plan
 solve problems
What is AI?

Artificial intelligence (AI) is the intelligence exhibited by machines or software.
It is an academic field of study concerned with how to create computers and computer software that are capable of intelligent behavior.
Definitions of AI

A field that focuses on developing techniques to enable computer systems to perform activities that are considered intelligent (in humans and other animals). [Dyer]

The science and engineering of making intelligent machines, especially intelligent computer programs. [McCarthy]
Definitions of AI

The design and study of computer programs that behave intelligently. [Dean, Allen, & Aloimonos]

The study of [rational] agents that exist in an environment and perceive and act. [Russell & Norvig]
Definitions of AI

Artificial intelligence is the branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy.
Artificial intelligence includes the following areas of specialization:
 Game playing
 Expert systems
 Natural language
 Neural networks
 Robotics
Artificial intelligence

 Game playing: programming computers to play games against human opponents.
 Expert systems: programming computers to make decisions in real-life situations (for example, some expert systems help doctors diagnose diseases based on symptoms).
 Natural language: programming computers to understand natural human languages.
 Neural networks: systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains.
 Robotics: deals with the design, construction, operation, and application of robots, as well as the computer systems for their control, sensory feedback, and information processing.
History of Artificial Intelligence
The Beginnings of A.I.
Although the computer provided the technology necessary for AI, it was not until the early 1950s that the link between human intelligence and machines was really observed.
Norbert Wiener was one of the first Americans to make observations on the principle of feedback theory.
The most familiar example of feedback theory is the thermostat: it controls the temperature of an environment by gathering the actual temperature of the house, comparing it to the desired temperature, and responding by turning the heat up or down.
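The thermostat's feedback loop can be sketched in a few lines of code (a minimal illustration only; the function name and tolerance value are invented for this example):

```python
# A minimal sketch of Wiener-style feedback control: measure, compare
# to the target, and issue a corrective action that closes the loop.

def thermostat_step(actual_temp, desired_temp, tolerance=0.5):
    """Compare the measured temperature to the target and
    return a corrective action, closing the feedback loop."""
    error = desired_temp - actual_temp
    if error > tolerance:
        return "heat_on"    # too cold: turn the heat up
    elif error < -tolerance:
        return "heat_off"   # too warm: turn the heat down
    return "hold"           # within tolerance: do nothing

print(thermostat_step(18.0, 21.0))  # heat_on
print(thermostat_step(23.0, 21.0))  # heat_off
```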
Alan Turing

In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines with true intelligence.
He noted that "intelligence" is difficult to define and devised his famous Turing Test.
If a machine could carry on a conversation (over a teletype) that was indistinguishable from a conversation with a human being, then the machine could be called "intelligent."
The Turing Test was the first serious proposal in the philosophy of artificial intelligence.
Gaming in A.I. History

In 1951, using the Ferranti Mark I machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess. Arthur Samuel's checkers program, developed in the mid-1950s and early 1960s, eventually achieved sufficient skill to challenge a world champion. Game AI would continue to be used as a measure of progress in AI throughout its history.
Allen Newell & Herbert Simon

In late 1955, Newell and Simon developed the Logic Theorist.
The program, representing each problem as a tree model, would attempt to solve it by selecting the branch that would most likely result in the correct conclusion.
John McCarthy

In 1956 John McCarthy, regarded as the father of AI, organized a conference to draw on the talent and expertise of others interested in machine intelligence for a month of brainstorming.
From that point on, because of McCarthy, the field would be known as artificial intelligence.
Knowledge Expansion
In the seven years after the conference, AI began to pick up momentum. Although the field was still undefined, ideas formed at the conference were re-examined and built upon. Further research went into creating systems that could efficiently solve problems by limiting the search (such as the Logic Theorist), and into making systems that could learn by themselves.

In 1957, the first version of a new program, the General Problem Solver (GPS), was tested. The GPS was an extension of Wiener's feedback principle, and was capable of solving a wider range of common-sense problems.
Knowledge Expansion (Cont.)

A couple of years after the GPS, IBM contracted a team to research artificial intelligence.
In 1958 McCarthy announced his new development: the LISP language, which is still used today and was long the language of choice among AI developers.
From Lab to Life

 Other fields of AI also made their way into the marketplace during the 1980s.
 By 1985 over a hundred companies offered machine vision systems in the US.
A.I. Timeline
AI

In today's generation, Hollywood movies are mostly about androids, humanoids, and robots.
Video game artificial intelligence is a programming area that tries to make the computer act in a similar way to human intelligence.
A rule-based system is used whereby information and rules are entered into a database, and when the video game AI is faced with a situation, it finds appropriate information and acts accordingly.
Understanding of AI
 AI techniques and ideas seem to be harder to understand than most things in computer science.
 AI shows best on complex problems for which general principles don't help much, though there are a few useful general principles.
Understanding of AI
 Artificial intelligence is also difficult to understand because of its content: the boundaries of AI are not well defined.
 Often it means advanced software engineering: sophisticated software techniques for hard problems that can't be solved in any easy way.
 AI programs, like people, are usually not perfect, and even make mistakes.
Understanding of AI
 Understanding AI also requires an understanding of related terms such as intelligence, knowledge, reasoning, thought, learning, and a number of other computer-related terms.
AI Applications

 Autonomous planning & scheduling: autonomous rovers, telescope scheduling, analysis of data
 Medicine: image-guided surgery, image analysis and enhancement
 Transportation: autonomous vehicle control, pedestrian detection
 Games
 Robotic toys
Advantages of Artificial Intelligence

 more powerful and more useful computers
 new and improved interfaces
 solving new problems
 better handling of information
 relieves information overload
 conversion of information into knowledge
Disadvantages of Artificial Intelligence

 increased costs
 difficulty with software development - slow and expensive
 few experienced programmers
 few practical products have reached the market as yet.
Four main approaches to AI
Systems that act like humans
Systems that think like humans
Systems that think rationally
Systems that act rationally
Approach #1: Acting Humanly
AI is: “The art of creating machines that
perform functions that require intelligence
when performed by people” (Kurzweil)
The overall behavior of the system should be human-like.
It could be achieved by observation.
Ultimately to be tested by the Turing Test.
Acting humanly: Turing test

 Turing (1950), "Computing Machinery and Intelligence"
 "Can machines think?" → "Can machines behave intelligently?"
 Operational test for intelligent behavior: the Imitation Game


The Turing Test
In a Turing test, the interrogator must determine which respondent is the computer and which is the human.
Approach #2: Thinking Humanly

AI is: "The automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning…" (Bellman)
The goal is to build systems that function internally in some way similar to the human mind.
Most of the time it is a black box where we are not clear about our own thought process.
One has to know the functioning of the brain and its mechanism for processing information.
Approach #3: Thinking rationally

AI is: "The study of the computations that make it possible to perceive, reason, and act" (Winston)
An approach firmly grounded in logic, i.e., how can knowledge be represented logically, and how can a system draw deductions?
Such systems rely on logic rather than humans to measure correctness.
For thinking rationally or logically, logic formulas and theories are used for synthesizing outcomes.
For example, given that John is a human and all humans are mortal, one can conclude logically that John is mortal.
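The syllogism above can be mechanized with a toy forward-chaining loop (a simplified sketch, not a full logic engine; the fact and rule encoding is invented for illustration):

```python
# Derive "John is mortal" from "John is a human" and the rule
# "if human(X) then mortal(X)" by repeatedly applying rules to facts.

facts = {("human", "John")}
rules = [(("human",), ("mortal",))]  # if human(X) then mortal(X)

def forward_chain(facts, rules):
    """Apply each rule to every matching fact until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for pred, subject in list(derived):
                if (pred,) == premises and (conclusion[0], subject) not in derived:
                    derived.add((conclusion[0], subject))
                    changed = True
    return derived

print(("mortal", "John") in forward_chain(facts, rules))  # True
```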
Approach #4: Acting rationally

AI is: "The branch of computer science that is concerned with the automation of intelligent behavior" (Luger and Stubblefield)
An agent is something that perceives and acts.
Rational behavior means doing the right thing.
Even if the method is illogical, the observed behavior must be rational.
Foundations and Mathematical treatments
 It is based on:
 Mathematics
 Neuroscience
 Control Theory
 Linguistics
Foundations - Mathematics
 More formal logical methods
 Boolean logic
 Fuzzy logic
 Uncertainty
 The basis for most modern approaches to handling uncertainty in AI applications:
 Probability theory
 Computations
 Modal and temporal logics
Foundations - Neuroscience
How does the brain work?
 Early studies (1824) relied on injured and abnormal people to understand what parts of the brain do
 More recent studies use accurate sensors to correlate brain activity to human thought
 By monitoring individual neurons, monkeys can now control a computer mouse using thought alone
Foundations – Control Theory
 Machines can modify their behavior in response to the environment (sense/action loop)
 Water-flow regulator, steam engine governor, thermostat
 The theory of stable feedback systems (1894)
 Build systems that transition from initial state to goal state with minimum energy
 In 1950, control theory could only describe linear systems, and AI largely rose as a response to this shortcoming
Foundations - Linguistics
Speech demonstrates so much of human intelligence
 Analysis of human language reveals thought taking place in ways not understood in other settings
 Children can create sentences they have never heard before
 Language and thought are believed to be tightly intertwined
What is an agent?

"An over-used term" (Patti Maes, MIT Labs, 1996)
Many different definitions exist …
Agent Definition (1)

American Heritage Dictionary:
agent: "… one that acts or has the power or authority to act… or represent another"

(Cartoon caption: "I can relax, my agents will do all the jobs on my behalf")
Agent Definition (2)

"…agents are software entities that carry out some set of operations on behalf of a user or another program…" [IBM]
Agent Definition (3)
Agent Definition (4)

"An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators." [Russell & Norvig]
Agent

Agent: anything that can be viewed as…
 perceiving its environment through sensors
 acting upon its environment through effectors

Examples:
 Human
 Web search agent
 Chess player
Agents
Agents related Terms

Percept:
The agent's perceptual input at any given instant.
Percept sequence:
The complete history of everything the agent has ever perceived.
Agent function (or policy):
Maps any percept sequence to an action (determines agent behavior); an abstract mathematical description.
Agent program:
Implements the agent function, running on the agent's architecture.

An agent together with its environment is called a world.


Agent

An agent perceives its environment through sensors
 the complete set of inputs at a given time is called a percept
 the current percept, or a sequence of percepts, may influence the actions of an agent

It can change the environment through actuators
 an operation involving an actuator is called an action
 actions can be grouped into action sequences
Agent function and program

• The agent function maps from percept histories to actions:
  f: P* → A

• The agent program runs on the physical architecture to produce f

  agent = architecture + program
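As an illustration of f: P* → A, a table-driven agent program stores the mapping explicitly (the names and percept values here are hypothetical, and the table-driven form is only feasible for tiny percept spaces):

```python
# A table-driven agent: the agent function f is written out as a
# literal mapping from whole percept sequences to actions.

percepts = []  # the percept history accumulated so far

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
}

def table_driven_agent(percept):
    """Append the percept to the history and look the whole history up."""
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")

print(table_driven_agent(("A", "Dirty")))  # Suck
```

Only length-one histories appear in this toy table, which is exactly the point: for any realistic agent the table would need an entry per possible percept sequence, which grows far too fast to enumerate.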
Examples of Agent

 Human agent
 eyes, ears, skin, taste buds, etc. for sensors
 hands, fingers, legs, mouth, etc. for actuators
 powered by muscles

 Robot
 camera, infrared, bumper, etc. for sensors
 grippers, wheels, lights, speakers, etc. for actuators
 often powered by motors

 Software agent
 functions as sensors
 information provided as input to functions in the form of
encoded bit strings or symbols
 functions as actuators
 results deliver the output
Vacuum-cleaner world

 This world has just two locations: square A and square B.
 The vacuum agent perceives which square it is in and whether there is dirt in the square.
 One very simple agent function: if the current square is dirty, then suck; otherwise move to the other square.
Vacuum-cleaner world

 Percepts: location and contents, e.g., [A, Dirty]
 Actions: Left, Right, Suck, NoOp
 Agent's function → look-up table
 For many agents this is a very large table
Vacuum-cleaner world

Agent Program

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
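The pseudocode above translates almost line for line into a runnable sketch (Python is used here purely for illustration):

```python
# Direct rendering of REFLEX-VACUUM-AGENT: the action depends only
# on the current percept [location, status], never on history.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
```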
Rational agents
• Rationality
 – Do the actions that cause the agent to be most successful.
 – Rational agent: one that does the right thing.

• Rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure.
Rational Agent

Rationality depends on 4 things:
1. Performance measure of success
2. Agent's prior knowledge of the environment
3. Actions the agent can perform
4. Agent's percept sequence to date
What intelligent agents are ?
 “An intelligent agent is one that is capable of flexible
autonomous action in order to meet its design
objectives, where flexible means three things:
 reactivity: agents are able to perceive their
environment, and respond in a timely fashion to changes
that occur in it in order to satisfy its design objectives;
 pro-activeness: intelligent agents are able to exhibit
goal-directed behavior by taking the initiative in order to
satisfy its design objectives;
 social ability: intelligent agents are capable of
interacting with other agents (and possibly humans) in
order to satisfy its design objectives”;
Agent Characterisation
 An agent is responsible for satisfying specific goals.
There can be different types of goals such as achieving a
specific status (defined either exactly or approximately),
keeping certain status, optimizing a given function (e.g.,
utility), etc.
(Diagram: an agent's state comprises beliefs, knowledge, and goals.)

 The state of an agent includes the state of its internal environment + the state of its knowledge and beliefs about its external environment.
Types of goals:
 Goal I: achieving an exactly defined status.
 Goal II: achieving a constrained status (e.g., constraint: "the smallest is on top").
 Goal III: continuously keeping an unstable status.
 Goal IV: maximizing utility (e.g., goal: the basket filled with mushrooms that can be sold for the maximum possible price).
Situatedness

An agent is situated in an environment, which consists of the objects and other agents it is possible to interact with.
An agent has an identity that distinguishes it from the other agents of its environment.
Situated in an environment, which can be:

 Accessible / partially accessible / inaccessible (with respect to the agent's percepts);
 Deterministic / nondeterministic (the current state can or cannot fully determine the next one);
 Static / dynamic (with respect to time).
PEAS Description Template

Used for high-level characterization of agents.

Performance Measures: How well does the agent solve the task at hand? How is this measured?
Environment: Important aspects of the surroundings beyond the control of the agent.
Actuators: Determine the actions the agent can perform.
Sensors: Provide information about the current state of the environment.

Task Environments
• PEAS: Performance measure, Environment, Actuators, Sensors
• Consider, e.g., the task of designing an automated taxi
 – Performance measure: safe, fast, legal, comfortable trip, maximize profits
 – Environment: roads, other traffic, pedestrians, customers
 – Actuators: steering wheel, accelerator, brake, signal, horn
 – Sensors: cameras, speedometer, GPS, odometer, engine sensors, keyboard
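A PEAS description can also be recorded as a small data structure, which makes it easy to compare agent designs side by side (an illustrative sketch; the class and field names are invented for this example):

```python
# Recording a PEAS description as a dataclass so that several agent
# designs can be listed and compared uniformly.
from dataclasses import dataclass

@dataclass
class PEAS:
    agent: str
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    agent="Automated taxi",
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)
print(taxi.agent, len(taxi.sensors))  # Automated taxi 6
```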
PEAS
Agent: Part-picking robot
Performance measure: percentage of parts in correct bins
Environment: conveyor belt with parts, bins
Actuators: jointed arm and hand
Sensors: camera, joint angle sensors
PEAS
Agent: Interactive English tutor
Performance measure: maximize student's score on test
Environment: set of students
Actuators: screen display (exercises, suggestions, corrections)
Sensors: keyboard
Environment types
• Fully observable (vs. partially observable)
• Deterministic (vs. stochastic)
• Episodic (vs. sequential)
• Static (vs. dynamic)
• Discrete (vs. continuous)
• Single agent (vs. multiagent)
Fully observable (vs. partially observable)
Is everything an agent requires to choose its actions available to it via its sensors? Perfect or full information.
 If so, the environment is fully observable.
 If not, parts of the environment are inaccessible, and the agent must make informed guesses about the world.
In decision theory: perfect information vs. imperfect information.

Cross Word  Poker      Backgammon  Part picking robot  Image analysis
Fully       Partially  Fully       Partially           Fully
Deterministic (vs. stochastic)
Does the change in world state depend only on the current state and the agent's action?
Non-deterministic environments:
 have aspects beyond the control of the agent
 utility functions have to guess at changes in the world

Cross Word     Taxi driver  Part picking robot  Image analysis
Deterministic  Stochastic   Stochastic          Deterministic
Episodic (vs. sequential):
Is the choice of current action dependent only on the episode itself? Then the environment is episodic.
In non-episodic (sequential) environments the agent has to plan ahead: the current choice will affect future actions.

Cross Word  Poker       Backgammon  Taxi driver  Image analysis
Sequential  Sequential  Sequential  Sequential   Episodic
Static (vs. dynamic):
Static: environments don't change while the agent is deliberating over what to do.
Dynamic: environments do change
 so the agent should/could consult the world when choosing actions
 alternatively: anticipate the change during deliberation OR make the decision very fast
Semidynamic: the environment itself does not change with the passage of time, but the agent's performance score does.

Cross Word  Poker   Backgammon  Taxi driver  Part picking robot
Static      Static  Static      Dynamic      Dynamic
Discrete (vs. continuous)
 A limited number of distinct, clearly defined percepts and actions (discrete) vs. a range of values (continuous)

Cross Word  Poker     Backgammon  Taxi driver  Part picking robot
Discrete    Discrete  Discrete    Conti        Conti
Single agent (vs. multiagent):
 An agent operating by itself in an environment, or many agents working together.

Cross Word  Poker  Backgammon  Taxi driver  Part picking robot
Single      Multi  Multi       Multi        Single
Summary

                    Observable  Deterministic  Episodic    Static   Discrete  Agents
Cross Word          Fully       Deterministic  Sequential  Static   Discrete  Single
Poker               Partially   Stochastic     Sequential  Static   Discrete  Multi
Backgammon          Fully       Stochastic     Sequential  Static   Discrete  Multi
Taxi driver         Partially   Stochastic     Sequential  Dynamic  Conti     Multi
Part picking robot  Partially   Stochastic     Episodic    Dynamic  Conti     Single
Image analysis      Fully       Deterministic  Episodic    Semi     Conti     Single


Environments Properties

Determine to a large degree the interaction between the "outside world" and the agent
 the "outside world" is not necessarily the "real world" as we perceive it
 it may be a real or virtual environment the agent lives in
In many cases, environments are implemented within computers
 they may or may not have a close correspondence to the "real world"
Environment Properties

 fully observable vs. partially observable
 sensors capture all relevant information from the environment
 deterministic vs. stochastic (non-deterministic)
 changes in the environment are predictable
 episodic vs. sequential (non-episodic)
 independent perceiving-acting episodes
 static vs. dynamic
 no changes while the agent is “thinking”
 discrete vs. continuous
 limited number of distinct percepts/actions
 single vs. multiple agents
 interaction and collaboration among agents
 competitive, cooperative

Agents
Agent types
Four basic types of agent:
 Simple reflex agents
 Reflex agents with state/model
 Goal-based agents
 Utility-based agents
All of these can be turned into learning agents.
Simple reflex agents

 Simple but very limited intelligence.
 Action does not depend on percept history, only on the current percept.
 Therefore no memory requirements.
 Infinite loops
  Suppose the vacuum cleaner cannot observe its location. What do you do given percept = clean? Always moving Left (or always Right) can produce an infinite loop.
  Possible solution: randomize the action [condition-action rule]
 Chess – openings, endings
  Lookup table (not a good idea in general): 35^100 entries required for the entire game
Simple Reflex Agent

function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules

  state  <- INTERPRET-INPUT(percept)
  rule   <- RULE-MATCH(state, rules)
  action <- RULE-ACTION[rule]
  return action
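A runnable sketch of this scheme, using the vacuum world for the rule set (here `interpret_input` and the `rules` dictionary are illustrative stand-ins for INTERPRET-INPUT and the RULE-MATCH/RULE-ACTION pair):

```python
# Generic simple-reflex loop: abstract the percept into a state
# description, then look up the matching condition-action rule.

rules = {"dirty": "Suck", "at_A": "Right", "at_B": "Left"}

def interpret_input(percept):
    """Reduce the raw percept to an abstract state description."""
    location, status = percept
    if status == "Dirty":
        return "dirty"
    return "at_A" if location == "A" else "at_B"

def simple_reflex_agent(percept):
    state = interpret_input(percept)   # state <- INTERPRET-INPUT(percept)
    action = rules[state]              # rule match + rule action in one lookup
    return action

print(simple_reflex_agent(("B", "Clean")))  # Left
```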
Simple Reflex Agent
• Recall the agent function that maps from percept histories to actions:
  f: P* → A
• An agent program can implement an agent function by maintaining an internal state.
• The internal state can contain information about the state of the external environment.
Model-based reflex agents

 Know how the world evolves
  Overtaking car gets closer from behind
 Know how the agent's actions affect the world
  Wheel turned clockwise takes you right
 Model-based agents update their internal state
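A minimal sketch of the idea in code, assuming the two-square vacuum world: the agent keeps a model of both squares, updated from each percept, so it can act sensibly even though a single percept only reveals the current square (the model representation is invented for this example):

```python
# Model-based reflex agent for the vacuum world: internal state tracks
# the believed status of both squares, so the agent can stop (NoOp)
# once its model says the whole world is clean.

model = {"A": "Unknown", "B": "Unknown"}

def model_based_agent(percept):
    location, status = percept
    model[location] = status          # update internal state from the percept
    if status == "Dirty":
        return "Suck"
    other = "B" if location == "A" else "A"
    if model[other] == "Clean":       # model says everything is clean
        return "NoOp"
    return "Right" if location == "A" else "Left"

print(model_based_agent(("A", "Clean")))  # Right (B's status still unknown)
print(model_based_agent(("B", "Clean")))  # NoOp  (model now knows A is clean)
```

A simple reflex agent in the same situation would shuttle between the squares forever; the internal model is what lets this one stop.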


Goal-based agents

• Is knowing the current state and environment enough?
 – Taxi can go left, right, or straight
• Have a goal (describes situations that are desirable)
 – A destination to get to
• Uses knowledge about a goal to guide its actions
 – E.g., search, planning
• A reflex agent brakes when it sees brake lights; a goal-based agent reasons:
 – Brake light -> car in front is stopping -> I should stop -> I should use the brake
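Goal-directed behavior via search can be sketched with a tiny breadth-first planner (the road map below is invented for illustration):

```python
# A goal-based agent chooses actions by planning a path to its goal,
# rather than reacting to the current percept alone.
from collections import deque

roads = {           # made-up road map: intersection -> reachable neighbors
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def plan_route(start, goal):
    """Breadth-first search from start to goal; returns a list of stops."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in roads[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

print(plan_route("A", "D"))  # ['A', 'B', 'D']
```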
Utility-based agents
Goals are not always enough
 Many action sequences get the taxi to the destination
 Consider other things: how fast, how safe, …
A utility function maps a state onto a real number which describes the associated degree of "happiness", "goodness", "success".
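A utility-based choice can be sketched as follows: several routes all reach the goal, and a utility function picks the best trade-off of speed and safety (the routes and weights are invented for illustration):

```python
# Utility-based selection: every candidate satisfies the goal, so the
# agent ranks them by a real-valued utility and takes the maximum.

routes = [
    {"name": "highway",   "minutes": 20, "safety": 0.7},
    {"name": "back road", "minutes": 35, "safety": 0.95},
    {"name": "downtown",  "minutes": 30, "safety": 0.8},
]

def utility(route):
    """Map a state (route outcome) to a real number: higher is better."""
    return 100 * route["safety"] - route["minutes"]

best = max(routes, key=utility)
print(best["name"])  # back road
```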
Learning Agent

It is divided into four components:
1. Learning element
2. Performance element
3. Critic
4. Problem generator
Learning agents
 Performance element: what was previously the whole agent
  Input: sensors
  Output: actions
 Learning element: modifies the performance element.
Learning agents
 Critic: tells how the agent is doing
  Input: checkmate?
  Fixed standard of performance
 Problem generator
  Tries to solve the problem differently instead of optimizing.
  Suggests exploring new actions -> new problems.
