Module - 1 Introduction To Artificial Intelligence


MODULE – 1

Introduction to Artificial Intelligence


What is AI, what is an AI technique &
which problems need AI attention?
Introduction
• Artificial intelligence, or AI, has many definitions associated with it
• AI currently encompasses a huge variety of subfields, ranging from the general (learning and perception) to the specific, such as playing chess, proving mathematical theorems, driving a car on a crowded street, and diagnosing diseases. AI is relevant to any intellectual task; it is truly a universal field
• AI, in the simplest terms, is a way of solving difficult problems. Here "difficult" does not mean difficult because of the logical requirements, but difficult because of the design of the computer itself
• For example, it is very easy for a four-year-old child to see a few samples of cars and then classify a vehicle as a car, but can a normal computer program do that?
• Many of us have observed that when we meet a friend after years and find him changed in many ways, we are still able to recognize him
• Researchers are working on programs that can recognize simple photos and take decisions based on them. As long as the face is largely similar to a face they have already seen, these programs do not have any issues. The problem starts when there is a significant difference between two photos of the same person, perhaps taken at different ages or against different backgrounds
• Researchers find it difficult to write such programs for an obvious reason: no algorithm is known for solving these problems. Humans can solve them because their minds are better equipped for such tasks
• The goal of AI, or Artificial Intelligence, is to write computer programs that can mimic the problem-solving capabilities of the human brain
• Elaine Rich, in her book "Artificial Intelligence", puts it as "AI is the study of how to make computers do things at which, at the moment, people are better"
• One more author puts it as "AI is about writing intelligent programs"
• One more definition is "AI is about building entities that can understand, perceive, predict and manipulate like humans do"
• The last definition is a little more interesting, as it also talks about entities that can act like humans and not merely programs. Robots have so far been confined to science fiction; AI is the study of methods for bringing them into the real world
What is AI?

– Programs that behave externally like humans
– Programs that operate internally as humans do
– Computational systems that behave intelligently
– Rational behaviour
What is AI?
• Thinking humanly
• Acting humanly
• Thinking rationally
• Acting rationally
Turing Test

• Human beings are intelligent
• To be called intelligent, a machine must produce responses that are indistinguishable from those of a human

Alan Turing
Does AI have applications?
• Autonomous planning and scheduling of tasks
aboard a spacecraft
• Beating Garry Kasparov in a chess match
• Steering a driver-less car
• Understanding language
• Robotic assistants in surgery
• Monitoring trade in the stock market to see if
insider trading is going on
A rich history
• Philosophy
• Mathematics
• Economics
• Neuroscience
• Psychology
• Control Theory
• John McCarthy coined the term in the 1950s
Academic Disciplines important to AI
• Philosophy: logic, methods of reasoning, mind as a physical system, foundations of learning, language, rationality
• Mathematics: formal representation and proof, algorithms, computation, (un)decidability, (in)tractability, probability
• Economics: utility, decision theory
• Neuroscience: neurons as information processing units
• Psychology/Cognitive Science: how people behave, perceive, process information, represent knowledge
• Computer engineering: building fast computers
• Control theory: design systems that maximize an objective function over time
• Linguistics: knowledge representation, grammar

State of the art
• Deep Blue defeated the reigning world chess champion Garry Kasparov in 1997
• Proved a mathematical conjecture (Robbins conjecture) unsolved for decades
• No Hands Across America (driving autonomously 98% of the time from Pittsburgh to San Diego)
• During the 1991 Gulf War, US forces deployed an AI logistics planning and scheduling program that involved up to 50,000 vehicles, cargo, and people
• NASA's on-board autonomous planning program controlled the scheduling of operations for a spacecraft
• Proverb solves crossword puzzles better than most humans
• Best vehicle in the DARPA challenge made it 7 miles into the desert

AI APPROACHES
TURING TEST
INTELLIGENT AGENTS
• The concept of rationality can be applied to a wide variety of agents operating in any imaginable environment
• In this course, the concept of rationality is used to develop a small set of design principles for building successful agents, that is, systems that can reasonably be called intelligent
• The observation that some agents behave better than others leads naturally to the idea of a rational agent: one that behaves as well as possible
• How well an agent can behave depends on the nature of the environment: some environments are more difficult than others
• We give a crude categorization of environments and show how the properties of an environment influence the design of suitable agents for that environment
AGENTS AND ENVIRONMENT
• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors
• A human agent has eyes, ears and other organs for sensors and hands, legs, mouth and other body parts for effectors
• A robotic agent substitutes cameras and infrared range finders for the sensors and various motors for the effectors
• A software agent has encoded bit strings as its percepts and actions. A generic agent is diagrammed in Figure 2.1
• A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets
• The term percept is used to refer to the agent's perceptual inputs at any given instant
• An agent's percept sequence is the complete history of everything the agent has ever perceived
• An agent's choice of action at any given instant can depend on the entire percept sequence observed to date, but not on anything it hasn't perceived
• By specifying the agent's choice of action for every possible percept sequence, we can say more or less everything about the agent
• Mathematically speaking, we can say that an agent's behavior is described by the agent function that maps any given percept sequence to an action
• Given an agent to experiment with, we can, in principle, construct a table by trying out all possible percept sequences and recording which actions the agent does in response
• Internally, the agent function for an artificial agent will be implemented by an agent program
• Two distinct ideas should be kept in mind
• The agent function is an abstract mathematical description; the agent program is a concrete implementation, running within some physical system
• To illustrate these ideas, we can use a very simple example: the vacuum-cleaner world shown in Figure 2.2
• This world is so simple that we can describe everything that happens; it's a made-up world, so we can invent many variations
• This particular world has two locations: squares A and B
• The vacuum agent perceives which square it is in and whether there is dirt in the square
• It can choose to move left, move right, suck up the dirt, or do nothing
• One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square
• A partial tabulation of this agent function is shown in Figure 2.3, and an agent program that implements it appears in Figure 2.8
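
As an illustration only (not the textbook's code), the first few entries of such a tabulation, and the lookup it supports, can be sketched in Python; the dictionary below shows just a handful of percept sequences, and the names used are hypothetical:

```python
# Sketch of a partial agent-function table for the two-square vacuum world.
# Keys are percept sequences, i.e. tuples of (location, status) percepts;
# values are actions. Only a few illustrative entries are shown.
partial_agent_function = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
    # ... the full table would continue for every possible percept sequence
}

def agent_function(percept_sequence):
    """Return the tabulated action for a percept sequence (default: NoOp)."""
    return partial_agent_function.get(tuple(percept_sequence), "NoOp")
```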
GOOD BEHAVIOUR: THE CONCEPT OF RATIONALITY
• A rational agent is one that does the right thing; conceptually speaking, every entry in the table for the agent function is filled out correctly
• When an agent is plunked down in an
environment, it generates a sequence of actions
according to the percepts it receives. This
sequence of actions causes the environment to
go through a sequence of states. If the sequence
is desirable, then the agent has performed well.
• This notion of desirability is captured by a
performance measure that evaluates any given
sequence of environment states.
• Obviously, there is not one fixed performance
measure for all tasks and agents; typically, a designer
will devise one appropriate to the circumstances. This
is not as easy as it sounds.
• Consider, for example, the vacuum-cleaner agent
from the preceding section.
• We might propose to measure performance by the
amount of dirt cleaned up in a single eight-hour shift.
• With a rational agent, of course, what you ask for is
what you get.
• A rational agent can maximize this performance
measure by cleaning up the dirt, then dumping it all
on the floor, then cleaning it up again, and so on.
• A more suitable performance measure would reward the agent for having a clean floor
• For example, one point could be awarded for each clean square at each time step (perhaps with a penalty for electricity consumed and noise generated)
• As a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave
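
As a rough sketch (not from the textbook), this kind of performance measure can be written down directly; here environment states are assumed to be dictionaries mapping square names to "Clean" or "Dirty", and a small optional penalty per movement is included:

```python
def performance(state_history, actions, move_penalty=0.0):
    """Award one point per clean square at each time step, minus an
    optional penalty for every Left/Right movement the agent made."""
    score = 0.0
    for state in state_history:                       # one state per time step
        score += sum(1 for status in state.values() if status == "Clean")
    score -= move_penalty * sum(1 for a in actions if a in ("Left", "Right"))
    return score

# Example: two time steps, both squares clean at the second step.
# performance([{"A": "Clean", "B": "Dirty"}, {"A": "Clean", "B": "Clean"}],
#             ["Right", "Suck"], move_penalty=1.0)  ->  2.0
```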
Rational agent
• A rational agent is one that acts to achieve the best
expected outcome
• Goals are application-dependent and are expressed in
terms of the utility of outcomes
• Being rational means maximizing your expected utility
• In practice, utility optimization is subject to the agent’s
computational constraints (bounded rationality or
bounded optimality)
• This definition of rationality only concerns the
decisions/actions that are made, not the cognitive process
behind them
RATIONALITY
• Consider the same vacuum-cleaner agent that cleans if there is dirt and moves to the other square if not: this is the agent function tabulated in Figure 2.3
• To consider this a rational agent, we first need to say what the performance measure is, what the environment is, and what sensors and actuators the agent has
• Let us assume the following:
• The performance measure awards one point for each clean square at each time step, over a "lifetime" of 1000 time steps
• The geography is known a priori (Figure 2.2), but the dirt distribution and the initial location of the agent are not. Clean squares stay clean and sucking cleans the current square
• The Left and Right actions move the agent left and right, except when this would take the agent outside the environment, in which case the agent remains where it is
• The only available actions are Left, Right and Suck
• The agent correctly perceives its location and whether that location contains dirt
• We claim that under these circumstances the agent is indeed rational: its expected performance is at least as high as any other agent's
• The same agent would be irrational under different
circumstances
• For example, once all the dirt is cleaned up, the agent
will oscillate needlessly back and forth; if the
performance measure includes a penalty of one point
for each movement left or right, the agent will fare
poorly.
• A better agent for this case would do nothing once it
is sure that all the squares are clean.
• If clean squares can become dirty again, the agent
should occasionally check and re-clean them if
needed
• If the geography of the environment is unknown, the
agent will need to explore it rather than stick to
squares A and B
Omniscience, Learning and Autonomy
• An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality
• Rationality maximizes expected performance, while perfection maximizes actual performance
• Doing actions in order to modify future percepts, called information gathering, is an important part of rationality
• An example of information gathering is provided by the exploration that must be undertaken by a vacuum-cleaning agent in an initially unknown environment
• Our definition requires a rational agent not only to gather information but also to learn as much as possible from what it perceives
THE NATURE OF ENVIRONMENTS
• After defining the term "rationality", we can build rational agents
• Task environments are essentially the problems to which rational agents are the solutions
SPECIFYING THE TASK ENVIRONMENT
• In the discussion of the rationality of the simple vacuum-cleaner
agent, we had to specify the performance measure, the
environment, and the agent’s actuators and sensors.
• We group all these under the heading of the task environment. For
the acronymically minded, we call this the PEAS (Performance,
Environment, Actuators, Sensors) description.
• In designing an agent, the first step must always be to specify the
task environment as fully as possible.

• The vacuum world was a simple example; let us consider a more complex problem: an automated taxi driver.
• Figure 2.4 summarizes the PEAS description for the taxi's task environment.

Figure 2.4 PEAS description of the task environment for an automated taxi:
Agent Type: Taxi driver
Performance Measure: Safe, fast, legal, comfortable trip; maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering, accelerator, brake, signal, horn, display
Sensors: Cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard
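
For illustration, the same PEAS description can be recorded as a simple data structure; the Python sketch below (the field names are our own, not the textbook's) just restates the entries of Figure 2.4:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Performance, Environment, Actuators, Sensors for one agent type."""
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi_driver = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip",
                         "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "accelerometer", "engine sensors", "keyboard"],
)
```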
• First, what is the performance measure to which we would like our automated driver to aspire?
• Desirable qualities include getting to the correct destination; minimizing fuel consumption and wear and tear; minimizing the trip time or cost; minimizing violations of traffic laws and disturbances to other drivers; maximizing safety and passenger comfort; maximizing profits.
• Obviously, some of these goals conflict, so tradeoffs will be required.
• Next, what is the driving environment that the taxi will face?
Any taxi driver must deal with a variety of roads, ranging from
rural lanes and urban alleys to 12-lane freeways.
• The roads contain other traffic, pedestrians, stray animals, road works, police cars, puddles, and potholes. The taxi must also interact with potential and actual passengers. There are also some optional choices, such as the range of driving conditions the taxi is expected to handle.
• The actuators for an automated taxi include those available to
a human driver: control over the engine through the
accelerator and control over steering and braking.
• In addition, it will need output to a display screen or voice
synthesizer to talk back to the passengers, and perhaps some
way to communicate with other vehicles, politely or otherwise.
• The basic sensors for the taxi will include one or more
controllable video cameras so that it can see the road; it might
augment these with infrared or sonar sensors to detect
distances to other cars and obstacles.
• To avoid speeding tickets, the taxi should have a speedometer,
and to control the vehicle properly, especially on curves, it
should have an accelerometer.
• To determine the mechanical state of the vehicle, it will need
the usual array of engine, fuel, and electrical system sensors.
Like many human drivers, it might want a global positioning
system (GPS) so that it doesn’t get lost.
• Finally, it will need a keyboard or microphone for the
passenger to request a destination.
Properties of task environments
Static vs. Dynamic
• If the environment can change while an agent is deliberating, then we can say that the environment is dynamic for that agent; otherwise it is static
• Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time
• Dynamic environments, on the other hand, are continuously asking the agent what it wants to do; if it hasn't decided yet, that counts as deciding to do nothing
• If the environment itself doesn't change with the passage of time but the agent's performance score does, then we can say that the environment is semi-dynamic
• Taxi driving is dynamic
• Chess, when played with a clock, is semi-dynamic
• Crossword puzzles are static
THE STRUCTURE OF AGENTS
• The job of AI is to design an agent program that implements the agent function: the mapping from percepts to actions
• We assume this program will run on some sort of computing device with physical sensors and actuators; we call this the architecture

agent = architecture + program

• The program we choose has to be one that is appropriate for the architecture
• The architecture might be just an ordinary PC, or it might be a robotic car with several onboard computers, cameras and sensors
• The architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the actuators as they are generated
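
A minimal Python sketch of this division of labour, assuming a hypothetical architecture object with sense() and actuate() methods (names invented here, not from the textbook):

```python
def run(architecture, agent_program, steps=1000):
    """Let the architecture drive the agent program: read a percept from
    the sensors, run the program on it, and pass the chosen action on to
    the actuators."""
    for _ in range(steps):
        percept = architecture.sense()    # percepts made available by sensors
        action = agent_program(percept)   # the program picks an action
        architecture.actuate(action)      # actuators carry the action out
```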
Agent Programs
• The agent program takes the current percept as input from the sensors and returns an action to the actuators
• The difference between the agent program and the agent function is that the agent program takes the current percept as input, while the agent function takes the entire percept history
• The agent program takes just the current percept as input because nothing more is available from the environment; if the agent's actions need to depend on the entire percept sequence, the agent will have to remember the percepts
• The agent programs can be described in a simple pseudocode language, and the online code repository contains implementations in real programming languages
• For example, given below in Figure 2.7 is a trivial agent program that keeps track of the percept sequence and then uses it to index into a table of actions to decide what to do
• The table, an example of which is given for the vacuum world in Figure 2.3, represents explicitly the agent function that the agent program embodies
• To build a rational agent in this way, we as designers must construct a table that contains the appropriate action for every possible percept sequence
• Let P be the set of possible percepts and T be the lifetime of the agent (the total number of percepts it will receive)
• The lookup table will contain $\sum_{t=1}^{T} |P|^t$ entries
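
Figure 2.7 gives this table-driven agent program in pseudocode; a rough Python rendering of the same idea, plus a helper that evaluates the table-size sum above, might look like this (the table is assumed to be keyed by percept-sequence tuples):

```python
def make_table_driven_agent(table):
    """TABLE-DRIVEN-AGENT (sketch after Figure 2.7): remember every percept
    received so far and use the whole sequence to index the action table."""
    percepts = []                                  # percept sequence so far
    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")  # default if not tabulated
    return program

def lookup_table_size(num_percepts, lifetime):
    """Number of table entries: sum over t = 1..T of |P|**t."""
    return sum(num_percepts ** t for t in range(1, lifetime + 1))
```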
SIMPLE REFLEX AGENTS
• The simplest kind of agent is the simple reflex
agent
• These agents select actions on the basis of the current percept, ignoring the rest of the percept history
• For example, the vacuum agent whose agent function is tabulated in Figure 2.3 is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt
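
A minimal Python rendering of that reflex behaviour (after the pseudocode of Figure 2.8, assuming percepts of the form (location, status)):

```python
def reflex_vacuum_agent(percept):
    """Choose an action from the current percept only, ignoring history."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"
```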
REFERENCES
• Slides taken from the textbook: Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall Series, Third Edition, 2015.
