Week 4 (Blocks World and Rejecting Symbolic AI)


1 Has Symbolic AI failed?

This unit has four main sections:

 More on planning
 Strong and weak AI
 Failure of Symbolic AI
 Rejecting symbols
2 Symbolic AI in the world

Blocks world
The real world is an incredibly complex and chaotic place,
and considering all of its fine details can obscure how
planning (and other tasks) is done.
One answer is to eliminate the messy details by
constructing a very simple world in which the planner can
operate. Attention can then be focused on the core
problem: the construction of the plan.
3 Symbolic AI in the world

Blocks world
 One such simplified world has played a leading part in the
development of AI systems. It is usually known as Blocks World.
 Blocks World was used as an environment for early natural
language understanding systems and robots.
 Blocks World is closely linked with the problem of planning and
with the early planning system, STRIPS.
4 Symbolic AI in the world

Blocks world
 Blocks World is a tiny ‘world’ comprising an (infinitely
large) flat table on which sit a set of children’s
building blocks.
 The blocks can be moved around and stacked on top
of one another by a single robot hand.
 The hand can only hold one block at a time.
 Blocks World is most often simulated inside a
computer, so all blocks are presumed to be perfectly
regular and the movements of the arm infinitely precise.
5 Symbolic AI in the world
 Planning in Blocks World means deciding the steps
required to move blocks from an initial configuration
(the start state) to another configuration (the goal
state).
On(B,C) ^ OnTable(C) ^ OnTable(A) ^ HandEmpty
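The state formula above can be made concrete in code. A minimal sketch (Python is my choice here, and the variable names are illustrative, not from the slides): each ground fact becomes a tuple, and a state is simply the set of facts that currently hold.

```python
# A Blocks World state as a set of ground facts.
# Each fact is a tuple: (predicate, arg1, arg2, ...).
start = {
    ("On", "B", "C"),   # B sits on C
    ("OnTable", "C"),   # C sits on the table
    ("OnTable", "A"),   # A sits on the table
    ("Clear", "A"),     # nothing on top of A
    ("Clear", "B"),     # nothing on top of B
    ("HandEmpty",),     # the robot hand holds nothing
}

# A condition holds in a state exactly when its fact is a member of the set.
assert ("On", "B", "C") in start
assert ("HandEmpty",) in start
```

Conjunction ("^" in the formula) falls out for free: a conjunctive description is just a subset test against the state.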
6 Symbolic AI in the world

Blocks world
 The robot hand manipulates the world by picking up
blocks and moving them around.
 A block x may only be picked up if both of the
following are satisfied:
 The robot hand is empty (HandEmpty).
 There is no block sitting on top of the selected
block (Clear(x)).
7 Symbolic AI in the world

Blocks world
 The hand can execute simple commands:
 PickUp(A) picks up Block A, provided that the block is
clear and the hand is empty;
 PutDown(A) places Block A on the table, provided that
the hand is holding the block;
 Stack(A,B) places Block A on top of Block B, provided
the hand is holding A and the top face of B is clear;
 UnStack(A,B) removes Block A from Block B, provided
that the hand is empty and the top of A is clear.
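The four commands can be sketched as guarded state transitions over the fact-set representation. This is a hypothetical Python sketch; the exact add/delete effects follow the standard STRIPS Blocks World formulation rather than text from the slides.

```python
def pick_up(state, x):
    # PickUp(x): hand empty, x clear, x on the table.
    assert ("HandEmpty",) in state and ("Clear", x) in state and ("OnTable", x) in state
    return (state - {("HandEmpty",), ("Clear", x), ("OnTable", x)}) | {("Holding", x)}

def put_down(state, x):
    # PutDown(x): hand holding x; x ends up clear on the table.
    assert ("Holding", x) in state
    return (state - {("Holding", x)}) | {("HandEmpty",), ("Clear", x), ("OnTable", x)}

def stack(state, x, y):
    # Stack(x, y): holding x, top of y clear; x ends up clear on top of y.
    assert ("Holding", x) in state and ("Clear", y) in state
    return (state - {("Holding", x), ("Clear", y)}) | {("HandEmpty",), ("Clear", x), ("On", x, y)}

def unstack(state, x, y):
    # UnStack(x, y): hand empty, x clear, x on y; y's top becomes clear.
    assert ("HandEmpty",) in state and ("Clear", x) in state and ("On", x, y) in state
    return (state - {("HandEmpty",), ("Clear", x), ("On", x, y)}) | {("Holding", x), ("Clear", y)}

# Two moves: take B off C, then stack B on A.
s = {("On", "B", "C"), ("OnTable", "C"), ("OnTable", "A"),
     ("Clear", "A"), ("Clear", "B"), ("HandEmpty",)}
s = stack(unstack(s, "B", "C"), "B", "A")
# Now On(B,A) holds and the hand is empty again.
```

Each command deletes exactly the facts it invalidates and adds exactly the facts it establishes, which is the pattern the STRIPS operators on the later slides formalize.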
8 Symbolic AI in the world

 State-description predicates   Process/command
 ----------------------------   ---------------
 On(x,y)                        PickUp(x)
 OnTable(x)                     PutDown(x)
 HandEmpty()                    Stack(x,y)
 Clear(x)                       UnStack(x,y)

(PickUp and PutDown are the commands that move blocks from and to the table.)
9 Symbolic AI in the world

Planning in the Blocks world
 Describe the initial state and the goal state of the
following:

Planning in the Blocks world: divide the problem.


From the initial state we want to end up with Block A
on the table, Block C on the table and Block B on
top of Block A
11 Symbolic AI in the world

Planning in the Blocks world:


 The planner knows what actions it can perform, and
the consequences of those actions.
 Actions are expressed as operators. Each operator
has four parts: its name, a set of preconditions, an
add list and a delete list.
 The world changes with the execution of the
operator, by specifying which facts are added to and
deleted from the world state.
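The four-part operator description can be written out directly. A sketch in Python (the class and field names are mine, not the slides'), with Stack(B,A) as the example operator:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str                  # e.g. "Stack(B,A)"
    preconditions: frozenset   # facts that must hold before execution
    add_list: frozenset        # facts the operator makes true
    delete_list: frozenset     # facts the operator makes false

    def applicable(self, state):
        # Every precondition must already be in the state.
        return self.preconditions <= state

    def apply(self, state):
        # Remove the delete list, then add the add list.
        assert self.applicable(state)
        return (state - self.delete_list) | self.add_list

# Stack(B,A) written out with all four parts:
stack_b_a = Operator(
    name="Stack(B,A)",
    preconditions=frozenset({("Holding", "B"), ("Clear", "A")}),
    add_list=frozenset({("On", "B", "A"), ("Clear", "B"), ("HandEmpty",)}),
    delete_list=frozenset({("Holding", "B"), ("Clear", "A")}),
)
```

The delete list is what makes STRIPS tractable: rather than reasoning about everything an action might change, the planner assumes any fact not explicitly deleted stays true.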
12 Symbolic AI in the world

Planning in the Blocks world
13 Symbolic AI in the world
Planning using means-end analysis (STRIPS):
 First, the goal conditions are added to the agenda.
 Planning then proceeds by popping the first condition from the
agenda and, if it is not already true, finding an operator that can
achieve it.
 The operator's action is then pushed on the agenda, as is each of
the operator's precondition terms.
 Achieving each of these preconditions requires its own sub-plan.
 The process continues until the only things left on the agenda are
actions.
 If these are performed, in sequence, the goals will be achieved.
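The agenda procedure above can be sketched as a tiny goal-stack planner. This is a naive Python sketch under strong simplifying assumptions: operators are fully ground, the first operator whose add list mentions the goal is taken, and nothing guards against one sub-plan undoing another (the clobbering problem discussed on a later slide).

```python
def plan(state, goals, operators):
    """Each operator is a (name, preconditions, add_list, delete_list) tuple."""
    state, agenda, actions = set(state), list(goals), []
    while agenda:
        item = agenda.pop(0)                  # pop the topmost agenda entry
        if item[0] == "DO":                   # an action: execute it
            name, pre, add, delete = item[1]
            state = (state - set(delete)) | set(add)
            actions.append(name)
        elif item in state:                   # goal already true: discard it
            continue
        else:                                 # push an achieving operator
            op = next(o for o in operators if item in o[2])
            agenda = list(op[1]) + [("DO", op)] + agenda
    return actions

# The slides' worked example, with only the two operators it needs:
ops = [
    ("Stack(B,A)",
     [("Holding", "B"), ("Clear", "A")],                  # preconditions
     [("On", "B", "A"), ("Clear", "B"), ("HandEmpty",)],  # add list
     [("Holding", "B"), ("Clear", "A")]),                 # delete list
    ("UnStack(B,C)",
     [("HandEmpty",), ("Clear", "B"), ("On", "B", "C")],
     [("Holding", "B"), ("Clear", "C")],
     [("HandEmpty",), ("Clear", "B"), ("On", "B", "C")]),
]
start = {("On", "B", "C"), ("OnTable", "C"), ("OnTable", "A"),
         ("Clear", "A"), ("Clear", "B"), ("HandEmpty",)}
goals = [("OnTable", "A"), ("On", "B", "A"), ("OnTable", "C")]
plan(start, goals, ops)   # → ['UnStack(B,C)', 'Stack(B,A)']
```

The returned action sequence matches the trace worked through on the next three slides: already-true goals are discarded, On(B,A) triggers Stack(B,A), whose Holding(B) precondition triggers UnStack(B,C).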
14 Symbolic AI in the world

STRIPS starts with the three goal conditions being added to the agenda:
 OnTable(A)
 On(B,A)
 OnTable(C)
The topmost element, OnTable(A), is already true, so there is nothing to be done
to achieve it; it is popped from the agenda and discarded.
The second term is not already true, so the system finds the Stack operator to
achieve it. Stack(B,A) is pushed onto the agenda, and the operator's
preconditions (Clear(A) and Holding(B)) are pushed on top of it:
 Clear(A)
 Holding(B)
 Stack(B,A)
 OnTable(C)
15 Symbolic AI in the world

The process begins again.
Clear(A) is already true, so that goal is discarded
without action. Holding(B) will become true after
an UnStack(B,C) operation, so that operator is
pushed on the stack together with its preconditions.
At this stage the agenda is:
 Clear(B)
 On(B,C)
 UnStack(B,C)
 Stack(B,A)
 OnTable(C)
16 Symbolic AI in the world

Agenda:
 Clear(B)
 On(B,C)
 UnStack(B,C)
 Stack(B,A)
 OnTable(C)

 The top two goals in the stack are true, so are
popped from the agenda.
 The two operations (UnStack(B,C) and Stack(B,A))
are performed, in that order.
 The final goal (OnTable(C)) is already true and so
is removed.
 As the agenda is empty, all the goals have been
achieved and the planning has succeeded.
17 Symbolic AI in the world

Example

18 Symbolic AI in the world

 Goal state: On(A,B) and On(B,C) and OnTable(C).
 STRIPS is not always successful: the sub-plans'
goals are achieved, but the overall plan is not
(the Sussman anomaly).
 The cause of the problem is the implementation
order and the dependencies between sub-plans.
19 Symbolic AI in the world

Planning using means-end analysis STRIPS: partial-order
planning systems
 The technical term for when completing one sub-plan
undoes the achievements of another is clobbering.
 Solution: partial-order planning systems. The planner in
this case commits itself to ensuring that the operations
for each sub-plan occur in order, but they can be
preceded, followed or interleaved with steps from other
sub-plans.
 Once all the actions for each sub-plan have been
described, the planner attempts to combine the actions in
such a way as to minimize clobbering.
20 Symbolic AI in the world
Learning, Adaptation and Heuristics
 One characteristic that we would surely associate with an
intelligent individual, natural or artificial, is the ability to learn
from its environment, whether this means widening the range
of tasks it can perform or performing the same tasks better.
 If we really want to understand the nature of intelligence, we
have to understand learning.
 Another reason for investigating learning is to make the
development of intelligent systems easier.
 Rather than equipping a system with all the knowledge it
needs, we can develop a system that begins with adequate
behavior, but learns to become more competent.
 The ability to learn is also the ability to adapt to changing
circumstances, a vital feature of any system.
21 Symbolic AI in the world
Learning, Adaptation and Heuristics
 In Symbolic AI systems, behavior is governed by the
processes defined for that system.
 If a system is to learn, it must alter these, by either
modifying existing processes or adding new ones.
 Many existing learning systems have the task of
classification: the system is presented with a set of
examples and learns to classify these into different
categories.
 The learning can be either supervised (where the correct
classifications are known to the learner) or unsupervised
(where the learner has to work out the classifications for
itself).
22 Symbolic AI in the world

Learning, Adaptation and Heuristics
Other approaches to automated learning include:
 speed-up learning: a system remembers situations it has
been in before and the actions it took then. When it
encounters a similar situation later, it decides on an action
by remembering what it did last time, rather than
determining it from first principles all over again;
 inductive programming: a learning system is presented
with the inputs and desired outputs of a program or
procedure. The system has to derive the program that
satisfies these constraints.
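Speed-up learning, as described above, is essentially caching of past decisions. A toy Python sketch (all names here are illustrative, not from the slides): the expensive first-principles reasoning runs once per situation, and repeat encounters reuse the remembered action.

```python
memo = {}          # situation -> action remembered from earlier episodes
calls = {"n": 0}   # counts how often first-principles reasoning runs

def solve_from_first_principles(situation):
    calls["n"] += 1                   # stands in for expensive deliberation
    return "action-for-" + situation

def choose_action(situation):
    if situation in memo:             # seen before: reuse the old decision
        return memo[situation]
    action = solve_from_first_principles(situation)
    memo[situation] = action          # remember it for next time
    return action

choose_action("blocked-doorway")      # deliberates from first principles
choose_action("blocked-doorway")      # remembered: no deliberation
# calls["n"] is now 1, not 2
```

Real speed-up learners must also judge when a new situation is "similar enough" to a stored one; this sketch sidesteps that by requiring an exact match.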
23 Block II, Unit IV:
Has Symbolic AI failed?

Strong and weak AI
 No clear common understanding.
 The main problem is that many of the questions with which AI
started in the early fifties were never properly debated and were
simply left unresolved.
 The result is that the Symbolic AI project has never had clear,
agreed goals.
Despite impressive inventions, it is worth noting: 'We have now explained why AI is
exciting, but we have not said what it is. We could just say, "Well, it has to do with
smart programs, so let's get on and write some."' (Russell and Norvig, 2001)
24 Has Symbolic AI failed?

Strong and weak AI
 Computer researchers, then as now, are impatient with abstract
philosophical debates.
 So foundational questions were never tackled and have come
back to bedevil the AI project.
 What are these questions?
25 Has Symbolic AI failed?
Strong and weak AI
What are these questions?
 What is intelligence? You have been asked to consider a definition. Every
attempt falls into the trap of disagreement, abstractness, circularity and
reliance on a few key examples.
 Simulation or emulation? What are we trying to do? Are we trying to
mimic intelligence (simulation), or to create systems that, in some way,
are intelligent (emulation)?
 Strong and weak AI. If you need to, you should look back at this
important distinction.
 Again, what is AI trying to do? Create systems that do certain
recognizably intelligent things? Or create intelligence?
26 Has Symbolic AI failed?

Failure of Symbolic AI
There are three contributing factors to Symbolic AI's perception as a
failure:
 'Just programming'. Some of AI's successes have not been seen as
successes, but are instead regarded as 'just programming'.
 Unmet expectations. AI's realization in some areas has often failed
to live up to expectations.
 Concealment. Many AI techniques have become accepted within
mainstream computing, and AI systems often take the form of
specialized modules within large software packages. As such, their
intelligence goes unnoticed.
27 Has Symbolic AI failed?

Rejecting symbols
 Criticism: only at a very high level, and for certain
kinds of problem, does human thinking seem to be
based on logic and mathematics.
 Debate on the original approach and calls for
re-assessment.
 New trend: Nouvelle AI.
