
12/16/2018

INDIAN INSTITUTE OF TECHNOLOGY ROORKEE

Introduction to Artificial Intelligence

Dr. Partha Pratim Roy
Department of Computer Science and Engineering
Slides by Stuart Russell and Peter Norvig

The ability to solve problems

• The 8-queens problem can be defined as follows: place 8 queens on an
  8-by-8 chess board such that none of the queens attacks any of the others.
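As an illustration, the 8-queens problem can be solved with a small backtracking search (a sketch added here, not part of the original slides):

```python
def solve_queens(n=8):
    """Place n queens so that no two attack each other (backtracking)."""
    def safe(cols, col):
        # cols[r] is the column of the queen in row r; check row len(cols).
        row = len(cols)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def extend(cols):
        if len(cols) == n:           # all rows filled: a solution
            return cols
        for col in range(n):
            if safe(cols, col):
                result = extend(cols + [col])
                if result:
                    return result
        return None                  # dead end: backtrack

    return extend([])

print(solve_queens())  # one column index per row
```

The search places one queen per row and backtracks whenever a placement would be attacked, which is far cheaper than enumerating all 8^8 placements.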

Acting humanly: Turing Test

• An interrogator asks questions without seeing or hearing the respondents; one is a human, the other a machine.
• Identity is determined only through questions and answers.
• If the interrogator cannot distinguish the human from the computer, the machine has passed the test!

Thinking humanly: cognitive modeling

• Cognitive-science approach
  – Try to get "inside" our minds
  – E.g., conduct experiments with people to try to "reverse-engineer" how we reason, learn, remember, and predict
• Problems
  – Humans don't behave rationally
  – The brain's hardware is very different from a computer program
  – One has to know the functioning of the brain and its mechanisms for processing information

Thinking rationally: "laws of thought"

• Represent facts about the world via logic
• Use logical inference as a basis for reasoning about these facts
  – "Amit is an IITian; all IITians are intelligent; therefore Amit is intelligent"
• Can be a very useful approach to AI
  – E.g., theorem provers

Acting rationally: rational agent

• Rational behavior: doing the right thing
• Decision theory
  – Set of states of the world
  – Set of possible actions an agent can take
  – Utility = gain to an agent for each action/state pair
  – An agent acts rationally if it selects the action that maximizes its "utility"
• The right thing: that which is expected to maximize goal achievement, given the available information


Rational agents

• An agent is an entity that perceives and acts
• Abstractly, an agent is a function from percept histories to actions:
    [f: P* → A]
• For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance

Design methodology and goals

               Human                       Rational
  Think        Think like humans           Think rationally =>
               ("cognitive science")       formalize inference process
                                           ("laws of thought")
  Act          Act like humans             Act rationally
               (Turing Test)
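The abstract mapping f: P* → A can be sketched in Python as an agent that records its percept history and maps it to an action (the class and names below are illustrative, not from the slides):

```python
class Agent:
    """An agent maps the sequence of percepts seen so far to an action."""
    def __init__(self, program):
        self.percepts = []        # percept history, P*
        self.program = program    # the agent function f: P* -> A

    def act(self, percept):
        self.percepts.append(percept)
        return self.program(self.percepts)

# A trivial program that only inspects the latest percept:
agent = Agent(lambda p: 'Suck' if p[-1][1] == 'Dirty' else 'Right')
print(agent.act(('A', 'Dirty')))  # -> Suck
```

Most practical agent programs, like this one, consult only part of the history; the full mapping from histories is the mathematical idealization.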

How to design an intelligent agent?

• Definition: An agent perceives its environment via sensors and acts in that environment with its effectors. Hence, an agent gets percepts one at a time, and maps this percept sequence to actions (one action at a time)
• Properties:
  – Operating under autonomous control
  – Interacts with other agents plus the environment
  – Persisting over a prolonged time period
  – Goal oriented

Examples of agents

• Human agent
  – eyes, ears, skin, taste buds, etc. for sensors
  – hands, fingers, legs, mouth, etc. for actuators
• Robot
  – camera, infrared, etc. for sensors
  – grippers, wheels, lights, speakers, etc. for actuators

Agents and environments

• Agents interact with the environment through sensors and effectors

Vacuum-cleaner world

• Two locations: A and B
• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp


A vacuum-cleaner agent

function Reflex-Vacuum-Agent([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left

• What is the right function?
• Can it be implemented in a small agent program?

Concept of Rationality

• Rational agent
  – One that does the right thing
  – Every entry in the table for the agent function is correct (rational)
• What is correct?
  – The actions that cause the agent to be most successful
• So we need ways to measure success
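The Reflex-Vacuum-Agent pseudocode above translates directly into a few lines of Python (a minimal sketch):

```python
def reflex_vacuum_agent(percept):
    """Return an action for a (location, status) percept, as in the slide."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:                      # location == 'B'
        return 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))  # -> Suck
print(reflex_vacuum_agent(('A', 'Clean')))  # -> Right
```

So yes: for this tiny world, the agent function fits in a very small agent program.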

Performance measure

• An objective function that determines
  – How successfully the agent does, e.g., 90% or 30%
• An agent, based on its percepts, produces an action sequence:
  – If the sequence is desirable, the agent is said to be performing well
• There is no universal performance measure for all agents

• A general rule:
  – Design performance measures according to what one actually wants in the environment
  – Rather than according to how one thinks the agent should behave
• In the vacuum-cleaner world
  – We want the floor clean, no matter how the agent behaves
  – We don't restrict how the agent behaves

Rationality

• What is rational at any given time depends on four things:
  – The performance measure defining the criterion of success
  – The agent's prior knowledge of the environment
  – The actions that the agent can perform
  – The agent's percept sequence up to now

Rational agent

• For each possible percept sequence, a rational agent should select an action expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has
• E.g., an exam: maximize marks, based on the questions on the paper and your knowledge


Example of a rational agent

• Performance measure
  – Awards one point for each clean square, at each time step, over 10000 time steps
• Prior knowledge about the environment
  – The geography of the environment: only two squares
  – The effect of the actions
• Actions that the agent can perform
  – Left, Right, Suck and NoOp
• Percept sequence
  – Where is the agent?
  – Does the location contain dirt?
• Under these circumstances, the agent is rational.

Task environments

• Task environments are the problems, while the rational agents are the solutions
• Specifying the task environment: a PEAS description, as fully as possible
  – Performance
  – Environment
  – Actuators
  – Sensors
• In designing an agent, the first step must always be to specify the task environment as fully as possible
• We use an automated taxi driver as an example

• Performance measure
  – How can we judge the automated driver? Which factors are considered?
  – Getting to the correct destination
  – Minimizing fuel consumption
  – Minimizing the trip time and/or cost
  – Minimizing violations of traffic laws
  – Maximizing safety and comfort, etc.

Task environments

• Environment
  – A taxi must deal with a variety of roads
  – Traffic lights, other vehicles, pedestrians, stray animals, road works, police cars, etc.
  – Interaction with the customer
• Actuators (for outputs)
  – Control over the accelerator, steering, gear shifting and braking
  – A display to communicate with the customers
• Sensors (for inputs)
  – Detect other vehicles and road situations
  – GPS (Global Positioning System) to know where the taxi is
  – Many more devices are necessary


Examples of agents in different applications

Agent type            Percepts             Actions               Goals                Environment
Medical diagnosis     Symptoms, findings,  Questions, tests,     Healthy patients,    Patient, hospital
system                patient's answers    treatments            minimize costs
Satellite image       Pixels of varying    Print a               Correct              Images from
analysis system       intensity, color     categorization of     categorization       orbiting satellite
                                           scene
Part-picking robot    Pixels of varying    Pick up parts and     Place parts in       Conveyor belts
                      intensity            sort into bins        correct bins         with parts
Refinery controller   Temperature,         Open, close valves;   Maximize purity,     Refinery
                      pressure readings    adjust temperature    yield, safety
Interactive English   Typed words          Print exercises,      Maximize student's   Set of students
tutor                                      suggestions,          score on test
                                           corrections

Examples of how the agent function can be implemented
(increasingly sophisticated, from 1 to 6):
1. Table-driven agent
2. Simple reflex agent
3. Reflex agent with internal state
4. Agent with explicit goals
5. Utility-based agent
6. Learning agent

1. Table-driven agent

• An agent based on a pre-specified lookup table: it keeps track of the percept sequence and just looks up the best action
• Uses a percept-sequence / action table in memory to find the next action; implemented as a (large) lookup table
• Problems
  – Huge number of possible percepts (consider an automated taxi with a camera as the sensor) => the lookup table would be huge
  – Takes a long time to build the table
  – Not adaptive to changes in the environment; requires the entire table to be updated if changes occur
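A table-driven agent for the two-square vacuum world can be sketched as follows (the table entries here are illustrative; a real table needs one entry per possible percept sequence, which is what makes the approach explode):

```python
# Illustrative lookup table keyed on the *entire* percept sequence.
table = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('B', 'Dirty'),): 'Suck',
    (('B', 'Clean'),): 'Left',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
    # ... one entry per possible percept sequence (grows exponentially)
}

percepts = []  # the agent's memory of everything it has perceived

def table_driven_agent(percept):
    """Append the percept and look up the action for the whole sequence."""
    percepts.append(percept)
    return table.get(tuple(percepts), 'NoOp')  # NoOp if not tabulated

print(table_driven_agent(('A', 'Clean')))  # -> Right
print(table_driven_agent(('B', 'Dirty')))  # -> Suck
```

Even in this two-square world the table must cover every history, which illustrates why the approach does not scale.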

2. Simple reflex agent

• The agent has no memory of past world states or percepts, so actions depend solely on the current percept
• The action becomes a "reflex"


A simple reflex agent in nature

• Percepts: (size, motion)
• Rules:
  (1) If small moving object, then activate SNAP
  (2) If large moving object, then activate AVOID and inhibit SNAP
  ELSE (not moving), then NOOP (needed for completeness)
• Action: SNAP or AVOID or NOOP

3. Reflex agents with state

• Act by stimulus-response to the current state of the environment
• Each reactive agent is simple and interacts with others in a basic way
• Benefits: robustness, fast response time
• Challenges: scalability; how intelligent can it be?
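The three condition-action rules above fit in a handful of lines (a sketch; the percept encoding is an assumption):

```python
def frog_agent(percept):
    """Condition-action rules from the slide: (size, moving) -> action."""
    size, moving = percept
    if not moving:
        return 'NOOP'        # else-branch, needed for completeness
    if size == 'small':
        return 'SNAP'        # rule (1): small moving object
    return 'AVOID'           # rule (2): large moving object inhibits SNAP

print(frog_agent(('small', True)))   # -> SNAP
print(frog_agent(('large', True)))   # -> AVOID
print(frog_agent(('small', False)))  # -> NOOP
```

Note that rule ordering implements the inhibition: the large-object check fires before SNAP could.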

Reflex agents with state

4. Goal-based agents

5. Utility-based agents

6. Learning agents

Summary: learning agents

(These slides contain agent-architecture diagrams only; the figures are not reproduced here.)

Solving problems by searching


Problem solving agent

• The problem is simplified if an agent can adopt a goal and aim at satisfying it.
1. Goal formulation: a set of one or more (desirable) world states (e.g., checkmate in chess).
2. Problem formulation: what actions and states to consider, given a goal and an initial state.
3. Search for a solution: given the problem, search for a solution, i.e., a sequence of actions to achieve the goal starting from the initial state.
4. Execution of the solution.

Example: path finding problem

• Formulate goal:
  – Be in Bucharest (Romania)
• Formulate problem:
  – Action: drive between a pair of connected cities (direct road)
  – State: be in a city (20 world states)
• Find solution:
  – A sequence of cities leading from the start to the goal state, e.g., Arad, Sibiu, Fagaras, Bucharest
• Execution:
  – Drive from Arad to Bucharest according to the solution

Formulate, Search, Execute

• The problem of looking for such a sequence is called search.
• A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
• Execution: carry out the action sequence.
• Environment: fully observable (map), deterministic, and the agent knows the effects of each action. Is this really the case?
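The Arad-to-Bucharest search can be made concrete with a breadth-first search over a fragment of the road map (only some of the 20 cities are included; the sketch is illustrative):

```python
from collections import deque

# A fragment of the Romania road map (undirected roads).
roads = {
    'Arad': ['Zerind', 'Sibiu', 'Timisoara'],
    'Sibiu': ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
    'Fagaras': ['Sibiu', 'Bucharest'],
    'Rimnicu Vilcea': ['Sibiu', 'Pitesti'],
    'Pitesti': ['Rimnicu Vilcea', 'Bucharest'],
    'Bucharest': ['Fagaras', 'Pitesti'],
    'Zerind': ['Arad'], 'Oradea': ['Sibiu'], 'Timisoara': ['Arad'],
}

def search(start, goal):
    """Return a sequence of cities from start to goal (breadth-first)."""
    frontier = deque([[start]])   # each frontier entry is a whole path
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for city in roads[path[-1]]:
            if city not in visited:
                visited.add(city)
                frontier.append(path + [city])
    return None

print(search('Arad', 'Bucharest'))  # -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

The returned city sequence is exactly the solution named on the slide; executing it means driving that route.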

States

• A problem is defined by its elements and their relations.
• A state is a representation of those elements at a given moment.
• Two special states are defined:
  – Initial state (starting point)
  – Final state (goal state)

State space

• The state space is the set of all states reachable from the initial state.
• It forms a graph (or map) in which the nodes are states and the arcs between nodes are actions.
• A path in the state space is a sequence of states connected by a sequence of actions.


Missionaries and Cannibals
(also known as "goats and cabbage", "wolves and sheep", etc.)

• Goal: transport the missionaries and cannibals to the right bank of the river.
• Constraint: whenever cannibals outnumber missionaries, the missionaries get eaten.
• A state description that allows us to describe our state and goal: (ML, CL, B)
  – ML: number of missionaries on the left bank
  – CL: number of cannibals on the left bank
  – B: location of the boat (L or R)
• Initial state: (3,3,L)    Goal: (0,0,R)

Graph formulation of the problem

• Nodes: all possible states.
• Edges: an edge from state u to state v if v is reachable from u (by an action of the agent).
• What are the edges for the missionaries and cannibals problem?
• The problem is now to find a path from (3,3,L) to (0,0,R).
• In general, paths have costs associated with them, so the problem is to find the lowest-cost path from the initial state to the goal.

Stating a problem as a search problem

• State space S (nodes)
• Successor function: the states you can move to by taking an action (edge) from the current state
• Initial state
• Goal state: is state x a goal?
• Cost
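The path from (3,3,L) to (0,0,R) can be found by a short breadth-first search over this graph (a sketch; it assumes the usual rule that the boat carries one or two people):

```python
from collections import deque

# Possible boat loads per crossing: (missionaries, cannibals).
moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]

def legal(m, c):
    """Counts in range, and no bank has its missionaries outnumbered."""
    return 0 <= m <= 3 and 0 <= c <= 3 and \
           (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    m, c, b = state                     # missionaries/cannibals on left, boat
    sign = -1 if b == 'L' else 1        # boat leaves the bank it is on
    for dm, dc in moves:
        nm, nc = m + sign * dm, c + sign * dc
        if legal(nm, nc):
            yield (nm, nc, 'R' if b == 'L' else 'L')

def solve(start=(3, 3, 'L'), goal=(0, 0, 'R')):
    frontier, visited = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for s in successors(path[-1]):
            if s not in visited:
                visited.add(s)
                frontier.append(path + [s])

print(solve())  # shortest path: 11 crossings, i.e. 12 states
```

Here every path has unit cost per crossing, so breadth-first search already returns the lowest-cost path.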

Missionaries and Cannibals: the (partially expanded) search graph

• Actions (operators), e.g.:
  – CCR: transport two cannibals to the right bank
  – MCL: transport a missionary and a cannibal to the left bank
• How many actions are there in total?
• Why is there no MMR edge from this state? (The partially expanded graph figure is not reproduced here.)


Repeated states
(figure not reproduced)

Uninformed search strategies

• Uninformed: while searching you have no clue whether one non-goal state is better than any other. Your search is blind.
• Various blind strategies:
  – Breadth-first search
  – Uniform-cost search
  – Depth-first search
  – Iterative deepening search
  – etc.

Search strategies: evaluation criteria

• A search strategy is defined by picking the order of node expansion.
• Strategies are evaluated along the following dimensions:
  – Completeness: does it always find a solution if one exists?
  – Time complexity: number of nodes generated (does not include the time to perform actions)
  – Space complexity: maximum number of nodes in memory
  – Optimality: does it always find a least-cost solution?
• Time and space complexity are measured in terms of
  – b: maximum branching factor of the search tree
  – d: depth of the least-cost solution
  – m: maximum depth of the state space (may be ∞)

Breadth-first

• All the nodes reachable from the current node are explored first
  – Achieved by the TREE-SEARCH method by appending newly generated nodes at the end of the search queue

function BREADTH-FIRST-SEARCH(problem) returns solution
  return TREE-SEARCH(problem, FIFO-QUEUE())

Breadth-first search: snapshot

(Figures: a complete binary tree of nodes 1-31 expanded level by level; legend: Initial, Visited, Fringe, Current, Visible, Goal. The goal test is positive for node 24, and a solution is found in 24 steps. Fringe: [25, 26, 27, 28, 29, 30, 31].)


Breadth-first (properties)

  Time complexity: b^d
  Space complexity: b^d (keeps every node in memory — the bigger issue)
  Completeness: yes (for finite b)
  Optimality: yes (for non-negative path costs)
  (b: branching factor, d: depth of the least-cost solution)

Depth-first

• Continues exploring newly generated nodes
  – Achieved by the TREE-SEARCH method by appending newly generated nodes at the beginning of the search queue
• Utilizes a Last-In, First-Out (LIFO) queue, i.e., a stack

function DEPTH-FIRST-SEARCH(problem) returns solution
  return TREE-SEARCH(problem, LIFO-QUEUE())

  Time complexity: b^m
  Space complexity: b·m
  Completeness: no (for infinite branch length)
  Optimality: no
  (b: branching factor, m: maximum path length)
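The only difference between the two pseudocode functions is the queue discipline. A minimal TREE-SEARCH sketch makes this concrete on a toy binary tree (names and the toy problem are illustrative, not from the slides):

```python
from collections import deque

def tree_search(successors, start, is_goal, lifo=False):
    """Generic tree search: FIFO queue -> breadth-first, LIFO -> depth-first."""
    frontier = deque([start])
    order = []                          # expansion order, for illustration
    while frontier:
        node = frontier.pop() if lifo else frontier.popleft()
        order.append(node)
        if is_goal(node):
            return order
        frontier.extend(successors(node))
    return order

# Toy problem: complete binary tree over nodes 1..7, goal is node 6.
children = lambda n: [2 * n, 2 * n + 1] if n <= 3 else []

print(tree_search(children, 1, lambda n: n == 6))             # level by level
print(tree_search(children, 1, lambda n: n == 6, lifo=True))  # one branch first
```

Swapping `popleft` for `pop` is the entire difference between the two strategies, which is why the slides express both as TREE-SEARCH with a different queue.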

Depth-first snapshot

(Figure: the same tree of nodes 1-31; legend: Initial, Visited, Fringe, Current, Visible, Goal. Fringe: [3] + [22, 23].)

Depth-first vs. breadth-first

• Depth-first goes off into one branch until it reaches a leaf node
  – Not good if the goal is on another branch
  – Neither is optimal
  – Uses much less space than breadth-first
    • Far fewer visited nodes to keep track of
    • Smaller fringe
• Breadth-first is more careful by checking all alternatives
  – Complete and optimal (under most circumstances)
  – Very memory-intensive

Thank You
