
ARTIFICIAL INTELLIGENCE
UNIT 1
Definition of AI
Artificial
• Produced by human art or effort, rather than originating naturally.
Intelligence
• It is the ability to acquire knowledge and use it.
So AI was defined as:
• AI is the study of ideas that enable computers to be intelligent.
• AI is the part of computer science concerned with the design of computer systems
that exhibit human intelligence.
From the above two definitions, we can see that AI has two major
roles:
• Study the intelligent part concerned with humans.
• Represent those actions using computers.
Four Approaches in Artificial Intelligence

             HUMAN                    RATIONAL
THOUGHT      Systems that think       Systems that think
             like humans              rationally
BEHAVIOUR    Systems that act         Systems that act
             like humans              rationally
Systems that act like humans
Turing Test Approach

• You enter a room which has a computer terminal. You have a fixed period of
time to type what you want into the terminal, and study the replies. At the
other end of the line is either a human being or a computer system.
• If it is a computer system, and at the end of the period you cannot reliably
determine whether it is a system or a human, then the system is deemed to
be intelligent.
Systems that act like humans

The Turing Test approach


• A human questioner cannot tell whether a computer or a human is answering his
questions, via teletype (remote communication).
• The computer must behave intelligently.
• Intelligent behavior: achieving human-level performance in all cognitive tasks.
Systems that act like humans
• These cognitive tasks include:
• Natural language processing
• for communication with humans
• Knowledge representation
• to store information effectively & efficiently
• Automated reasoning
• to retrieve & answer questions using the stored information
• Machine learning
• to adapt to new circumstances
The Total Turing Test
• Includes two more issues:
• Computer vision
• to perceive objects (seeing)
• Robotics
• to move objects (acting)
Systems that think like humans
Cognitive Modeling approach
• Humans as observed from ‘inside’
• How do we know how humans think?
• Introspection vs. psychological experiments
• Cognitive Science
• “The exciting new effort to make computers think …
machines with minds in the full and literal sense”
(Haugeland)
• “[The automation of] activities that we associate with human
thinking, activities such as decision-making, problem
solving, learning …” (Bellman)
Systems that think ‘rationally’
“laws of thought” approach
• Humans are not always ‘rational’
• Rational - defined in terms of logic
• Logic can’t express everything (e.g. uncertainty)
• Logical approach is often not feasible in terms of computation
time (needs ‘guidance’)
• “The study of mental faculties through the use of computational
models” (Charniak and McDermott)
• “The study of the computations that make it possible to perceive,
reason and act” (Winston)
Systems that act rationally
“Rational agent” approach
• Rational behavior: doing the right thing
• The right thing: that which is expected to maximize goal achievement,
given the available information
• Agent : Something that acts
• Rational agent : One that acts so as to achieve the best outcome or,
when there is uncertainty, the best expected outcome
• Logic is only part of a rational agent, not all of rationality
• Sometimes logic cannot derive a correct conclusion
• In such cases, specific (domain) human knowledge or information is used
• Thus the rational-agent approach covers a more general range of problem situations
• It compensates for incorrectly reasoned conclusions
The Foundation of AI
• Philosophy
• The study of human intelligence began here, long before any formal expression existed
• Philosophy initiated the idea of the mind as a machine and of its internal operations
The Foundation of AI
Mathematics
• It formalizes the three main areas of AI: computation, logic, and probability
• Computation leads to the analysis of which problems can be computed (complexity theory)
• Probability contributes the “degree of belief” used to handle uncertainty in AI
• Decision theory combines probability theory and utility theory (bias)
The Foundation of AI
• Psychology
• How do humans think and act?
• The study of human reasoning and acting
• Provides reasoning models for AI
• Strengthened the idea that humans and other animals can be regarded as
information-processing machines
The Foundation of AI
• Computer Engineering
• How to build an efficient computer?
• Provides the artifact that makes AI application possible
• The power of computers makes computation on large and difficult problems
more feasible
• AI has also contributed its own work to computer science, including: time-
sharing, the linked list data type, OOP, etc.
The Foundation of AI
• Control theory and Cybernetics
• How can artifacts operate under their own control?
• Such artifacts adjust their actions to perform better in their environment over time
• based on an objective function and feedback from the environment
• Not limited to linear systems; extends to problems such as language, vision,
and planning
The Foundation of AI
• Linguistics
• For understanding natural languages
• Different approaches have been adopted from linguistic work:
• Formal languages
• Syntactic and semantic analysis
• Knowledge representation
History of AI
AI Applications
• Autonomous Planning & Scheduling:
• Autonomous rovers
• Telescope scheduling
• Analysis of data
• Medicine:
• Image-guided surgery
• Image analysis and enhancement
• Transportation:
• Autonomous vehicle control
• Pedestrian detection
• Games
• Robotic toys
• Other application areas:
• Bioinformatics: gene expression data analysis, prediction of protein structure
• Text classification, document sorting: web pages, e-mails, articles in the news
• Video, image classification
• Music composition, picture drawing
• Natural Language Processing
• Perception
Intelligent Agent
• The concept of rationality can be applied to a wide variety of agents
operating in any imaginable environment.
• We use this concept to develop a small set of design principles for
building successful agents: systems that can reasonably be called
intelligent.
Agents and Environments
• An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators.
• The term percept refers to the agent’s perceptual inputs at any given instant. An
agent’s percept sequence is the complete history of everything the agent has ever
perceived.
• Mathematically speaking, we say that an agent’s behavior is described by the
agent function that maps any given percept sequence to an action.
• The agent function for an artificial agent will be implemented by an agent
program.
• It is important to keep these two ideas distinct. The agent function is an abstract
mathematical description; the agent program is a concrete implementation,
running within some physical system.
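To make the distinction concrete, here is a minimal Python sketch (illustrative, not from the original text) of a table-driven agent: the agent function is the abstract mapping encoded in the table, while the agent program is the code that looks up the current percept sequence.

def make_table_driven_agent(table):
    percepts = []  # the percept sequence observed so far

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))  # action for this exact history

    return program

# Usage with the two-location vacuum world (locations A and B):
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))  # -> Suck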
Example: Vacuum Cleaner
The Concept of Rationality
• A rational agent is one that does the right thing—conceptually
speaking, every entry in the table for the agent function is filled out
correctly.
• When an agent is plunked down in an environment, it generates a
sequence of actions according to the percepts it receives. This
sequence of actions causes the environment to go through a sequence
of states. If the sequence is desirable, then the agent has performed
well. This notion of desirability is captured by a performance
measure that evaluates any given sequence of environment states.
Rationality
• What is rational at any given time depends on four things:
1. The performance measure that defines the criterion of success.
2. The agent’s prior knowledge of the environment.
3. The actions that the agent can perform.
4. The agent’s percept sequence to date.
This leads to a definition of a rational agent:
For each possible percept sequence, a rational agent should select an action
that is expected to maximize its performance measure, given the evidence
provided by the percept sequence and whatever built-in knowledge the
agent has.
Omniscience, learning and autonomy
• We need to be careful to distinguish between rationality and
omniscience. An omniscient agent knows the actual outcome of its
actions and can act accordingly; but omniscience is impossible in
reality.
• Information gathering is an important part of rationality. A rational
agent should not only gather information but also learn as much as
possible from what it perceives. The agent’s initial configuration could
reflect some prior knowledge of the environment, but as the agent
gains experience this may be modified and augmented.
• A rational agent should be autonomous—it should learn what it can to
compensate for partial or incorrect prior knowledge. If an agent relies
on the prior knowledge of its designer rather than on its own percepts,
we say that the agent lacks autonomy.
The Nature of Environments
• We are almost ready to build rational agents. First, however, we must
think about task environments, which are essentially the “problems” to
which rational agents are the “solutions.”
• We begin by showing how to specify a task environment, illustrating
the process with a number of examples.
• PEAS (Performance, Environment, Actuators, Sensors)
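As an illustration, the standard automated-taxi PEAS description can be recorded as a simple Python structure; the entries below follow the usual textbook example.

# PEAS description for an automated taxi driver, kept in a plain dictionary
taxi_peas = {
    "Performance": ["safe", "fast", "legal", "comfortable trip",
                    "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "signal", "horn"],
    "Sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer",
                    "engine sensors", "keyboard"],
}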
Properties of task environments
• Fully observable vs. partially observable
• Single agent vs. multiagent
• Deterministic vs. stochastic
• Episodic vs. sequential
• Static vs. dynamic
• Discrete vs. continuous
• Known vs. unknown
• Fully observable vs. partially observable: If an agent’s sensors give
it access to the complete state of the environment at each point in time,
then we say that the task environment is fully observable.
• Single agent vs. multiagent: An agent solving a crossword puzzle by
itself is clearly in a single-agent environment, whereas an agent
playing chess is in a two-agent environment. Chess is a competitive
multiagent environment. In the taxi-driving environment, on the other
hand, avoiding collisions maximizes the performance measure of all
agents, so it is a partially cooperative multiagent environment. It is
also partially competitive because, for example, only one car can
occupy a parking space.
• Deterministic vs. stochastic: If the next state of the environment is
completely determined by the current state and the action executed by
the agent, then we say the environment is deterministic; otherwise, it is
stochastic.
• Static vs. dynamic: If the environment can change while an agent is
deliberating, then we say the environment is dynamic for that agent;
otherwise, it is static.
• Discrete vs. continuous: The discrete/continuous distinction applies
to the state of the environment, to the way time is handled, and to the
percepts and actions of the agent. For example, the chess environment
has a finite number of distinct states (excluding the clock). Chess also
has a discrete set of percepts and actions. Taxi driving is a
continuous-state and continuous-time problem.
• Known vs. unknown: In a known environment, the outcomes for all
actions are given. Obviously, if the environment is unknown, the agent
will have to learn how it works in order to make good decisions.
The Structure of Agents
• The job of AI is to design an agent program that implements the agent
function— the mapping from percepts to actions.
• This program will run on some sort of computing device with physical
sensors and actuators—we call this the architecture:
agent = architecture + program
Agent programs :
• The agent programs that we design all have the same skeleton: They
take the current percept as input from the sensors and return an action to
the actuators.
• The agent program takes just the current percept as input because
nothing more is available from the environment; if the agent’s actions
need to depend on the entire percept sequence, the agent will have to
remember the percepts.
Four kinds of agent programs
•Simple reflex agents
•Model-based reflex agents
•Goal-based agents
•Utility-based agents
Simple reflex agents
• The simplest kind of agent is the simple reflex agent. These agents
select actions on the basis of the current percept, ignoring the rest of
the percept history.
• For example, the vacuum agent is a simple reflex agent, because its
decision is based only on the current location and on whether that
location contains dirt. An agent program for this agent is sketched below.
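A minimal Python rendering of that reflex vacuum agent program, assuming the percept is a (location, status) pair:

def reflex_vacuum_agent(percept):
    # decision uses only the current percept, never the percept history
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck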
• Simple reflex behaviors occur even in more complex
environments. Imagine yourself as the driver of the
automated taxi. If the car in front brakes and its brake lights
come on, then you should notice this and initiate braking.
• In other words, some processing is done on the visual input
to establish the condition we call “The car in front is
braking.” Then, this triggers some established connection in
the agent program to the action “initiate braking.” We call
such a connection a condition–action rule, written as
if car-in-front-is-braking then initiate-braking.
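Such a rule translates directly into code. In this illustrative sketch the percept is assumed to be a dictionary with a brake_lights_on flag; the function names are hypothetical placeholders for real perception and actuation routines.

def car_in_front_is_braking(percept):
    # hypothetical perception test; a real system would derive this
    # from visual processing of the camera image
    return percept.get("brake_lights_on", False)

def brake_rule(percept):
    # condition-action rule: if car-in-front-is-braking then initiate-braking
    if car_in_front_is_braking(percept):
        return "initiate-braking"
    return "no-op"

print(brake_rule({"brake_lights_on": True}))  # -> initiate-braking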
Model-based reflex agents
• The most effective way to handle partial observability is for the agent
to keep track of the part of the world it can’t see now. That is, the
agent should maintain some sort of internal state that depends on the
percept history and thereby reflects at least some of the unobserved
aspects of the current state.
• Updating this internal state information as time goes by requires two
kinds of knowledge to be encoded in the agent program. First, we need
some information about how the world evolves independently of the
agent.
• Second, we need some information about how the agent’s own actions
affect the world.
• An agent that uses such a model is called a model-based agent.
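A sketch of this structure in Python (illustrative; the update_state and rules functions are assumed to be supplied by the designer). The program tracks an internal state across percepts and matches rules against the tracked state rather than the raw percept.

def make_model_based_agent(update_state, rules, initial_state):
    state = initial_state
    last_action = None

    def program(percept):
        nonlocal state, last_action
        # fold the new percept and last action into the internal world model
        state = update_state(state, last_action, percept)
        # then match condition-action rules against the tracked state
        last_action = rules(state)
        return last_action

    return program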
Goal-based agents
• Knowing something about the current state of the
environment is not always enough to decide what to do.
• For example, at a road junction, the taxi can turn left, turn
right, or go straight on. The correct decision depends on
where the taxi is trying to get to. In other words, as well as a
current state description, the agent needs some sort of goal
information that describes situations that are desirable—for
example, being at the passenger’s destination. The agent
program can combine this with the model to choose actions
that achieve the goal.
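A minimal, illustrative sketch of goal-based action selection, assuming a model given by actions(state) and result(state, action) and a goal test is_goal(state):

def goal_based_action(state, actions, result, is_goal):
    # pick the first action whose predicted outcome satisfies the goal
    for action in actions(state):
        if is_goal(result(state, action)):
            return action
    return None  # no single action reaches the goal; search would be needed

A real goal-based agent generally needs to search over whole action sequences, which is exactly what the problem-solving agents described later do.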
Utility-based agents
• Goals alone are not enough to generate high-quality behavior in most
environments. For example, many action sequences will get the taxi to its
destination but some are quicker, safer, more reliable, or cheaper than others.
• Goals just provide a crude binary distinction between “happy” and “unhappy”
states. A more general performance measure should allow a comparison of
different world states according to exactly how happy they would make the
agent. Because “happy” does not sound very scientific, economists and
computer scientists use the term utility instead.
• We have already seen that a performance measure assigns a score to any given
sequence of environment states, so it can easily distinguish between more and
less desirable ways of getting to the taxi’s destination.
• An agent’s utility function is essentially an internalization of the performance
measure. If the internal utility function and the external performance measure
are in agreement, then an agent that chooses actions to maximize its utility will
be rational according to the external performance measure.
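The same sketch becomes utility-based by replacing the binary goal test with a real-valued utility function (again illustrative, with assumed helper functions):

def utility_based_action(state, actions, result, utility):
    # rank predicted successor states by utility and act to maximize it
    return max(actions(state), key=lambda a: utility(result(state, a)))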
Problem-solving agents
• It is a goal-based agent.
• Problem solving begins with precise definitions of problems and their
solutions; several examples below illustrate these definitions.
• It is typically performed by searching through an internally modelled
space of world states.
• A problem can be defined formally by five components:
1. Initial State
2. Actions
3. Transition model
4. Goal test
5. Path cost
• Initial state : In(Arad)
• Actions : From the state In(Arad), the applicable actions are
{Go(Sibiu), Go(Timisoara), Go(Zerind)}.
• Transition model : A description of what each action does,
specified by a function RESULT(s, a).
RESULT(In(Arad),Go(Zerind)) = In(Zerind)
• Goal test : {In(Bucharest)}
• Path cost : Assigns a numeric cost to each path. Here,
distance between the cities.
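A hedged Python sketch of these five components for the route-finding problem; the map fragment covers only the distances used in the usual Romania example (Arad to Sibiu 140, Timisoara 118, Zerind 75).

class RouteProblem:
    def __init__(self, initial, goal, road_map):
        self.initial = initial
        self.goal = goal
        self.road_map = road_map          # {city: {neighbor: distance}}

    def actions(self, state):
        return [f"Go({c})" for c in self.road_map[state]]

    def result(self, state, action):      # transition model
        return action[3:-1]               # "Go(Zerind)" -> "Zerind"

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action):   # path cost sums these
        return self.road_map[state][self.result(state, action)]

road_map = {"Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75}}
p = RouteProblem("Arad", "Bucharest", road_map)
print(p.actions("Arad"))  # ['Go(Sibiu)', 'Go(Timisoara)', 'Go(Zerind)']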
Example Problems
VACUUM WORLD :
• States: The state is determined by both the agent location and the dirt
locations. The agent is in one of two locations, each of which might or
might not contain dirt. Thus, there are 2 × 2^2 = 8 possible world states. A
larger environment with n locations has n · 2^n states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left,
Right, and Suck.
• Transition model: The actions have their expected effects, except that
moving Left in the leftmost square, moving Right in the rightmost square,
and Sucking in a clean square have no effect.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the
path.
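A quick illustrative check of the state count in Python:

from itertools import product

# enumerate two-location vacuum-world states: (agent location, dirt flags)
locations = ["A", "B"]
states = [(loc, dirt) for loc in locations
          for dirt in product([True, False], repeat=len(locations))]
print(len(states))  # 8, matching 2 × 2^2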
8-Puzzle Problem
• States: A state description specifies the location of each of the eight
tiles and the blank in one of the nine squares.
• Initial state: Any state can be designated as the initial state.
• Actions: The simplest formulation defines the actions as movements
of the blank space Left, Right, Up, or Down.
• Transition model: Given a state and action, this returns the resulting
state; for example, if we apply Left to the start state in Figure, the
resulting state has the 5 and the blank switched.
• Goal test: This checks whether the state matches the goal
configuration shown in Figure (Other goal configurations are possible.)
• Path cost: Each step costs 1, so the path cost is the number of steps in
the path.
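An illustrative transition model for this formulation, representing a state as a tuple of nine entries read row by row, with 0 for the blank:

MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

def result(state, action):
    blank = state.index(0)
    row, col = divmod(blank, 3)
    # moves that would push the blank off the board leave the state unchanged
    if ((action == "Left" and col == 0) or (action == "Right" and col == 2)
            or (action == "Up" and row == 0) or (action == "Down" and row == 2)):
        return state
    target = blank + MOVES[action]
    s = list(state)
    s[blank], s[target] = s[target], s[blank]
    return tuple(s)

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # a typical start configuration
print(result(start, "Left"))           # the 5 and the blank switch places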
8 – Queens Problem
• States: Arrangements of n queens (0 ≤ n ≤ 8), one per column in the
leftmost n columns, with no queen attacking another.
• Initial state: No queens on the board.
• Actions: Add a queen to any square in the leftmost empty column
such that it is not attacked by any other queen.
• Transition model: Returns the board with a queen added to the
specified square.
• Goal test: 8 queens are on the board, none attacked.
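An illustrative Python sketch of this incremental formulation, representing a state as a tuple of queen rows, one per filled column from the left:

def attacks(rows, new_row):
    # does a queen at (new_row, next column) clash with any placed queen?
    col = len(rows)
    return any(r == new_row or abs(r - new_row) == col - c
               for c, r in enumerate(rows))

def actions(rows):
    # rows of the next column that are not attacked by any placed queen
    return [r for r in range(8) if not attacks(rows, r)]

def goal_test(rows):
    return len(rows) == 8  # 8 queens placed, none attacking another

print(actions(()))  # column 0: all 8 rows are still available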
