


From Scientific American, (c) January 1990 by Scientific American, Inc. Permission to reprint granted by the publisher.

Is the Brain's Mind a Computer Program?
No. A program merely manipulates symbols, whereas a brain attaches meaning to them

by John R. Searle

JOHN R. SEARLE is professor of philosophy at the University of California, Berkeley. He received his B.A., M.A. and D.Phil. from the University of Oxford, where he was a Rhodes scholar. He wishes to thank Stuart Dreyfus, Stevan Harnad, Elizabeth Lloyd and Irvin Rock for their comments and suggestions.

Can a machine think? Can a machine have conscious thoughts in exactly the same sense that you and I have? If by "machine" one means a physical system capable of performing certain functions (and what else can one mean?), then humans are machines of a special biological kind, and humans can think, and so of course machines can think. And, for all we know, it might be possible to produce a thinking machine out of different materials altogether--say, out of silicon chips or vacuum tubes. Maybe it will turn out to be impossible, but we certainly do not know that yet.

In recent decades, however, the question of whether a machine can think has been given a different interpretation entirely. The question that has been posed in its place is, Could a machine think just by virtue of implementing a computer program? Is the program by itself constitutive of thinking? This is a completely different question because it is not about the physical, causal properties of actual or possible physical systems but rather about the abstract, computational properties of formal computer programs that can be implemented in any sort of substance at all, provided only that the substance is able to carry the program.

A fair number of researchers in artificial intelligence (AI) believe the answer to the second question is yes; that is, they believe that by designing the right programs with the right inputs and outputs, they are literally creating minds. They believe furthermore that they have a scientific test for determining success or failure: the Turing test devised by Alan M. Turing, the founding father of artificial intelligence. The Turing test, as currently understood, is simply this: if a computer can perform in such a way that an expert cannot distinguish its performance from that of a human who has a certain cognitive ability--say, the ability to do addition or to understand Chinese--then the computer also has that ability. So the goal is to design programs that will simulate human cognition in such a way as to pass the Turing test. What is more, such a program would not merely be a model of the mind; it would literally be a mind, in the same sense that a human mind is a mind.

By no means does every worker in artificial intelligence accept so extreme a view. A more cautious approach is to think of computer models as being useful in studying the mind in the same way that they are useful in studying the weather, economics or molecular biology. To distinguish these two approaches, I call the first strong AI and the second weak AI. It is important to see just how bold an approach strong AI is. Strong AI claims that thinking is merely the manipulation of formal symbols, and that is exactly what the computer does: manipulate formal symbols. This view is often summarized by saying, "The mind is to the brain as the program is to the hardware."

Strong AI is unusual among theories of the mind in at least two respects: it can be stated clearly, and it admits of a simple and decisive refutation. The refutation is one that any person can try for himself or herself. Here is how it goes. Consider a language you don't understand. In my case, I do not understand Chinese. To me Chinese writing looks like so many meaningless squiggles. Now suppose I am placed in a room containing baskets full of Chinese symbols. Suppose also that I am given a rule book in English for matching Chinese symbols with other Chinese symbols. The rules identify the symbols entirely by their shapes and do not require that I understand any of them. The rules might say such things as, "Take a squiggle-squiggle sign from basket number one and put it next to a squoggle-squoggle sign from basket number two."

Imagine that people outside the room who understand Chinese hand in small bunches of symbols and that in response I manipulate the symbols according to the rule book and hand back more small bunches of symbols. Now, the rule book is the "computer program." The people who wrote it are "programmers," and I am the "computer." The baskets full of symbols are the "data base," the small bunches that are handed in to me are "questions" and the bunches I then hand out are "answers."

Now suppose that the rule book is written in such a way that my "answers" to the "questions" are indistinguishable from those of a native Chinese speaker. For example, the people outside might hand me some symbols that unknown to me mean, "What's your favorite color?" and I might after going through the rules give back symbols that, also unknown to me, mean, "My favorite is blue, but I also like green a lot." I satisfy the Turing test for understanding Chinese. All the same, I am totally ignorant of Chinese. And there is no way I could come to understand Chinese in the system as described, since there is no way that I can learn the meanings of any of the symbols. Like a computer, I manipulate symbols, but I attach no meaning to the symbols.
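The rule book, in other words, is nothing more than a mapping from shapes to shapes. The following Python sketch (not from the article; the symbols and the rule table are invented purely for illustration) shows how such a purely formal program can be written down: it produces "answers" from "questions" without containing anything that represents what any symbol means.

    # A minimal sketch of a purely formal "rule book" (illustrative only).
    # The entries map shapes to shapes; nothing here encodes meanings.
    RULE_BOOK = {
        ("squiggle", "squoggle"): ("blot", "dash", "blot"),
        ("squoggle", "dot"): ("dash", "blot"),
    }

    def chinese_room(question):
        """Hand back the 'answer' the rules dictate, matched by shape alone."""
        return RULE_BOOK.get(tuple(question), ())

    # Whoever (or whatever) executes this lookup attaches no meaning to
    # "squiggle" or "blot"; it only matches and copies shapes.
    print(chinese_room(["squiggle", "squoggle"]))   # ('blot', 'dash', 'blot')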
The point of the thought experiment is this: if I do not understand Chinese solely on the basis of running a computer program for understanding Chinese, then neither does any other digital computer solely on that basis. Digital computers merely manipulate formal symbols according to rules in the program.

What goes for Chinese goes for other forms of cognition as well. Just manipulating the symbols is not by itself enough to guarantee cognition, perception, understanding, thinking and so forth. And since computers, qua computers, are symbol-manipulating devices, merely running the computer program is not enough to guarantee cognition.


This simple argument is decisive against the claims of strong AI. The first premise of the argument simply states the formal character of a computer program. Programs are defined in terms of symbol manipulations, and the symbols are purely formal, or "syntactic." The formal character of the program, by the way, is what makes computers so powerful. The same program can be run on an indefinite variety of hardwares, and one hardware system can run an indefinite range of computer programs. Let me abbreviate this "axiom" as

Axiom 1. Computer programs are formal (syntactic).

This point is so crucial that it is worth explaining in more detail. A digital computer processes information by first encoding it in the symbolism that the computer uses and then manipulating the symbols through a set of precisely stated rules. These rules constitute the program. For example, in Turing's early theory of computers, the symbols were simply 0's and 1's, and the rules of the program said such things as, "Print a 0 on the tape, move one square to the left and erase a 1." The astonishing thing about computers is that any information that can be stated in a language can be encoded in such a system, and any information-processing task that can be solved by explicit rules can be programmed.
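A hedged sketch of what such a rule table amounts to may be useful here. The particular machine below (which merely overwrites a string of 1's and 0's with 0's and then halts on a blank square) is invented for illustration; only its general form--rules keyed to the current state and the symbol under the head, saying what to print, which way to move and which state to enter next--reflects the kind of rule quoted above.

    # Illustrative Turing-style rule table and interpreter (Python).
    RULES = {
        # (state, symbol read): (symbol to print, head move, next state)
        ("scan", "1"): ("0", -1, "scan"),   # print a 0, move one square left
        ("scan", "0"): ("0", -1, "scan"),
        ("scan", " "): (" ", 0, "halt"),    # blank square: stop
    }

    def run(tape, state="scan"):
        cells = dict(enumerate(tape))       # the tape, square by square
        head = len(tape) - 1                # start at the rightmost square
        while state != "halt":
            symbol = cells.get(head, " ")
            write, move, state = RULES[(state, symbol)]
            cells[head] = write             # the rule says what to print...
            head += move                    # ...and where to move next
        return "".join(cells[i] for i in sorted(cells)).strip()

    print(run("1101"))   # '0000' -- produced by shape-matching alone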
Two further points are important. First, symbols and programs are purely abstract notions: they have no essential physical properties to define them and can be implemented in any physical medium whatsoever. The 0's and 1's, qua symbols, have no essential physical properties and a fortiori have no physical, causal properties. I emphasize this point because it is tempting to identify computers with some specific technology--say, silicon chips--and to think that the issues are about the physics of silicon chips, or to think that syntax identifies some physical phenomenon that might have as yet unknown causal powers, in the way that actual physical phenomena such as electromagnetic radiation or hydrogen atoms have physical, causal properties. The second point is that symbols are manipulated without reference to any meanings. The symbols of the program can stand for anything the programmer or user wants. In this sense the program has syntax but no semantics.

The next axiom is just a reminder of the obvious fact that thoughts, perceptions, understandings and so forth have a mental content. By virtue of their content they can be about objects and states of affairs in the world. If the content involves language, there will be syntax in addition to semantics, but linguistic understanding requires at least a semantic framework. If, for example, I am thinking about the last presidential election, certain words will go through my mind, but the words are about the election only because I attach specific meanings to these words, in accordance with my knowledge of English. In this respect they are unlike Chinese symbols for me. Let me abbreviate this axiom as

Axiom 2. Human minds have mental contents (semantics).

Now let me add the point that the Chinese room demonstrated. Having the symbols by themselves--just having the syntax--is not sufficient for having the semantics. Merely manipulating symbols is not enough to guarantee knowledge of what they mean. I shall abbreviate this as

Axiom 3. Syntax by itself is neither constitutive of nor sufficient for semantics.

At one level this principle is true by definition. One might, of course, define the terms syntax and semantics differently. The point is that there is a distinction between formal elements, which have no intrinsic meaning or content, and those phenomena that have intrinsic content. From these premises it follows that

Conclusion 1. Programs are neither constitutive of nor sufficient for minds.

And that is just another way of saying that strong AI is false.
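The shape of the derivation can be set out explicitly in predicate-logic notation, as below. The notation, the predicate names and the bridging premise (whatever is insufficient for semantics is insufficient for a mind, since a mind requires semantics) are an editorial gloss on the argument just stated, not Searle's own formulation.

    \begin{aligned}
    \text{Axiom 1:}\quad & \forall p\,\bigl(\mathrm{Program}(p)\rightarrow \mathrm{PurelySyntactic}(p)\bigr)\\
    \text{Axiom 2:}\quad & \forall m\,\bigl(\mathrm{Mind}(m)\rightarrow \mathrm{HasSemantics}(m)\bigr)\\
    \text{Axiom 3:}\quad & \forall x\,\bigl(\mathrm{PurelySyntactic}(x)\rightarrow \neg\,\mathrm{SufficesFor}(x,\ \text{semantics})\bigr)\\
    \text{Bridge, from Axiom 2:}\quad & \forall x\,\bigl(\neg\,\mathrm{SufficesFor}(x,\ \text{semantics})\rightarrow \neg\,\mathrm{SufficesFor}(x,\ \text{a mind})\bigr)\\
    \text{Conclusion 1:}\quad & \forall p\,\bigl(\mathrm{Program}(p)\rightarrow \neg\,\mathrm{SufficesFor}(p,\ \text{a mind})\bigr)
    \end{aligned}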
It is important to see what is proved and not proved by this argument.

First, I have not tried to prove that "a computer cannot think." Since anything that can be simulated computationally can be described as a computer, and since our brains can at some levels be simulated, it follows trivially that our brains are computers and they can certainly think. But from the fact that a system can be simulated by symbol manipulation and the fact that it is thinking, it does not follow that thinking is equivalent to formal symbol manipulation.

Second, I have not tried to show that only biologically based systems like our brains can think. Right now those are the only systems we know for a fact can think, but we might find other systems in the universe that can produce conscious thoughts, and we might even come to be able to create thinking systems artificially. I regard this issue as up for grabs.

Third, strong AI's thesis is not that, for all we know, computers with the right programs might be thinking, that they might have some as yet undetected psychological properties; rather it is that they must be thinking because that is all there is to thinking.

Fourth, I have tried to refute strong AI so defined. I have tried to demonstrate that the program by itself is not constitutive of thinking because the program is purely a matter of formal symbol manipulation--and we know independently that symbol manipulations by themselves are not sufficient to guarantee the presence of meanings. That is the principle on which the Chinese room argument works.

[Illustration: "I satisfy the Turing test for understanding Chinese."]
I emphasize these points here partly because it seems to me the Churchlands [see "Could a Machine Think?" by Paul M. Churchland and Patricia Smith Churchland, page 32] have not quite understood the issues. They think that strong AI is claiming that computers might turn out to think and that I am denying this possibility on commonsense grounds. But that is not the claim of strong AI, and my argument against it has nothing to do with common sense.

I will have more to say about their objections later. Meanwhile I should point out that, contrary to what the Churchlands suggest, the Chinese room argument also refutes any strong-AI claims made for the new parallel technologies that are inspired by and modeled on neural networks. Unlike the traditional von Neumann computer, which proceeds in a step-by-step fashion, these systems have many computational elements that operate in parallel and interact with one another according to rules inspired by neurobiology. Although the results are still modest, these "parallel distributed processing," or "connectionist," models raise useful questions about how complex, parallel network systems like those in brains might actually function in the production of intelligent behavior.

The parallel, "brainlike" character of the processing, however, is irrelevant to the purely computational aspects of the process. Any function that can be computed on a parallel machine can also be computed on a serial machine. Indeed, because parallel machines are still rare, connectionist programs are usually run on traditional serial machines. Parallel processing, then, does not afford a way around the Chinese room argument.

What is more, the connectionist system is subject even on its own terms to a variant of the objection presented by the original Chinese room argument. Imagine that instead of a Chinese room, I have a Chinese gym: a hall containing many monolingual, English-speaking men. These men would carry out the same operations as the nodes and synapses in a connectionist architecture as described by the Churchlands, and the outcome would be the same as having one man manipulate symbols according to a rule book. No one in the gym speaks a word of Chinese, and there is no way for the system as a whole to learn the meanings of any Chinese words. Yet with appropriate adjustments, the system could give the correct answers to Chinese questions.

There are, as I suggested earlier, interesting properties of connectionist nets that enable them to simulate brain processes more accurately than traditional serial architecture does. But the advantages of parallel architecture for weak AI are quite irrelevant to the issues between the Chinese room argument and strong AI.

The Churchlands miss this point when they say that a big enough Chinese gym might have higher-level mental features that emerge from the size and complexity of the system, just as whole brains have mental features that are not had by individual neurons. That is, of course, a possibility, but it has nothing to do with computation. Computationally, serial and parallel systems are equivalent: any computation that can be done in parallel can be done in serial. If the man in the Chinese room is computationally equivalent to both, then if he does not understand Chinese solely by virtue of doing the computations, neither do they. The Churchlands are correct in saying that the original Chinese room argument was designed with traditional AI in mind but wrong in thinking that connectionism is immune to the argument. It applies to any computational system. You can't get semantically loaded thought contents from formal computations alone, whether they are done in serial or in parallel; that is why the Chinese room argument refutes strong AI in any form.

Many people who are impressed by this argument are nonetheless puzzled about the differences between people and computers. If humans are, at least in a trivial sense, computers, and if humans have a semantics, then why couldn't we give semantics to other computers? Why couldn't we program a Vax or a Cray so that it too would have thoughts and feelings? Or why couldn't some new computer technology overcome the gulf between form and content, between syntax and semantics? What, in fact, are the differences between animal brains and computer systems that enable the Chinese room argument to work against computers but not against brains?

The most obvious difference is that the processes that define something as a computer--computational processes--are completely independent of any reference to a specific type of hardware implementation. One could in principle make a computer out of old beer cans strung together with wires and powered by windmills.

But when it comes to brains, although science is largely ignorant of how brains function to produce mental states, one is struck by the extreme specificity of the anatomy and the physiology.
Where some understanding exists of how brain processes produce mental phenomena--for example, pain, thirst, vision, smell--it is clear that specific neurobiological processes are involved. Thirst, at least of certain kinds, is caused by certain types of neuron firings in the hypothalamus, which in turn are caused by the action of a specific peptide, angiotensin II. The causation is from the "bottom up" in the sense that lower-level neuronal processes cause higher-level mental phenomena. Indeed, as far as we know, every "mental" event, ranging from feelings of thirst to thoughts of mathematical theorems and memories of childhood, is caused by specific neurons firing in specific neural architectures.

But why should this specificity matter? After all, neuron firings could be simulated on computers that had a completely different physics and chemistry from that of the brain. The answer is that the brain does not merely instantiate a formal pattern or program (it does that, too), but it also causes mental events by virtue of specific neurobiological processes. Brains are specific biological organs, and their specific biochemical properties enable them to cause consciousness and other sorts of mental phenomena. Computer simulations of brain processes provide models of the formal aspects of these processes. But the simulation should not be confused with duplication. The computational model of mental processes is no more real than the computational model of any other natural phenomenon.

One can imagine a computer simulation of the action of peptides in the hypothalamus that is accurate down to the last synapse. But equally one can imagine a computer simulation of the oxidation of hydrocarbons in a car engine or the action of digestive processes in a stomach when it is digesting pizza. And the simulation is no more the real thing in the case of the brain than it is in the case of the car or the stomach. Barring miracles, you could not run your car by doing a computer simulation of the oxidation of gasoline, and you could not digest pizza by running the program that simulates such digestion. It seems obvious that a simulation of cognition will similarly not produce the effects of the neurobiology of cognition.

All mental phenomena, then, are caused by neurophysiological processes in the brain. Hence,

Axiom 4. Brains cause minds.

In conjunction with my earlier derivation, I immediately derive, trivially,

Conclusion 2. Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.

This is like saying that if an electrical engine is to be able to run a car as fast as a gas engine, it must have (at least) an equivalent power output. This conclusion says nothing about the mechanisms. As a matter of fact, cognition is a biological phenomenon: mental states and processes are caused by brain processes. This does not imply that only a biological system could think, but it does imply that any alternative system, whether made of silicon, beer cans or whatever, would have to have the relevant causal capacities equivalent to those of brains. So now I can derive

Conclusion 3. Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.

Furthermore, I can derive an important conclusion about human brains:

Conclusion 4. The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.

I first presented the Chinese room parable in the pages of Behavioral and Brain Sciences in 1980, where it appeared, as is the practice of the journal, along with peer commentary, in this case, 26 commentaries. Frankly, I think the point it makes is rather obvious, but to my surprise the publication was followed by a further flood of objections that--more surprisingly--continues to the present day. The Chinese room argument clearly touched some sensitive nerve.

The thesis of strong AI is that any system whatsoever--whether it is made of beer cans, silicon chips or toilet paper--not only might have thoughts and feelings but must have thoughts and feelings, provided only that it implements the right program, with the right inputs and outputs. Now, that is a profoundly antibiological view, and one would think that people in AI would be glad to abandon it. Many of them, especially the younger generation, agree with me, but I am amazed at the number and vehemence of the defenders. Here are some of the common objections.

a. In the Chinese room you really do understand Chinese, even though you don't know it. It is, after all, possible to understand something without knowing that one understands it.

b. You don't understand Chinese, but there is an (unconscious) subsystem in you that does. It is, after all, possible to have unconscious mental states, and there is no reason why your understanding of Chinese should not be wholly unconscious.

c. You don't understand Chinese, but the whole room does. You are like a single neuron in the brain, and just as such a single neuron by itself cannot understand but only contributes to the understanding of the whole system, you don't understand, but the whole system does.

d. Semantics doesn't exist anyway; there is only syntax. It is a kind of prescientific illusion to suppose that there exist in the brain some mysterious "mental contents," "thought processes" or "semantics." All that exists in the brain is the same sort of syntactic symbol manipulation that goes on in computers. Nothing more.

[Illustration: "Which semantics is the system giving off now?"]
e. You are not really running the computer program--you only think you are. Once you have a conscious agent going through the steps of the program, it ceases to be a case of implementing a program at all.

f. Computers would have semantics and not just syntax if their inputs and outputs were put in appropriate causal relation to the rest of the world. Imagine that we put the computer into a robot, attached television cameras to the robot's head, installed transducers connecting the television messages to the computer and had the computer output operate the robot's arms and legs. Then the whole system would have a semantics.

g. If the program simulated the operation of the brain of a Chinese speaker, then it would understand Chinese. Suppose that we simulated the brain of a Chinese person at the level of neurons. Then surely such a system would understand Chinese as well as any Chinese person's brain.

And so on.

All of these arguments share a common feature: they are all inadequate because they fail to come to grips with the actual Chinese room argument. That argument rests on the distinction between the formal symbol manipulation that is done by the computer and the mental contents biologically produced by the brain, a distinction I have abbreviated--I hope not misleadingly--as the distinction between syntax and semantics. I will not repeat my answers to all of these objections, but it will help to clarify the issues if I explain the weaknesses of the most widely held objection, argument c--what I call the systems reply. (The brain simulator reply, argument g, is another popular one, but I have already addressed that one in the previous section.)

The systems reply asserts that of course you don't understand Chinese but the whole system--you, the room, the rule book, the bushel baskets full of symbols--does. When I first heard this explanation, I asked one of its proponents, "Do you mean the room understands Chinese?" His answer was yes. It is a daring move, but aside from its implausibility, it will not work on purely logical grounds. The point of the original argument was that symbol shuffling by itself does not give any access to the meanings of the symbols. But this is as much true of the whole room as it is of the person inside.

One can see this point by extending the thought experiment. Imagine that I memorize the contents of the baskets and the rule book, and I do all the calculations in my head. You can even imagine that I work out in the open. There is nothing in the "system" that is not in me, and since I don't understand Chinese, neither does the system.

The Churchlands in their companion piece produce a variant of the systems reply by imagining an amusing analogy. Suppose that someone said that light could not be electromagnetic because if you shake a bar magnet in a dark room, the system still will not give off visible light. Now, the Churchlands ask, is not the Chinese room argument just like that? Does it not merely say that if you shake Chinese symbols in a semantically dark room, they will not give off the light of Chinese understanding? But just as later investigation showed that light was entirely constituted by electromagnetic radiation, could not later investigation also show that semantics are entirely constituted of syntax? Is this not a question for further scientific investigation?

Arguments from analogy are notoriously weak, because before one can make the argument work, one has to establish that the two cases are truly analogous. And here I think they are not. The account of light in terms of electromagnetic radiation is a causal story right down to the ground. It is a causal account of the physics of electromagnetic radiation. But the analogy with formal symbols fails because formal symbols have no physical, causal powers. The only power that symbols have, qua symbols, is the power to cause the next step in the program when the machine is running. And there is no question of waiting on further research to reveal the physical, causal properties of 0's and 1's. The only relevant properties of 0's and 1's are abstract computational properties, and they are already well known.

[Illustration: "How could anyone have supposed that a computer simulation of a mental process must be the real thing?"]

The Churchlands complain that I am "begging the question" when I say that uninterpreted formal symbols are not identical to mental contents. Well, I certainly did not spend much time arguing for it, because I take it as a logical truth. As with any logical truth, one can quickly see that it is true, because one gets inconsistencies if one tries to imagine the converse. So let us try it. Suppose that in the Chinese room some undetectable Chinese thinking really is going on. What exactly is supposed to make the manipulation of the syntactic elements into specifically Chinese thought contents? Well, after all, I am assuming that the programmers were Chinese speakers, programming the system to process Chinese information.

Fine. But now imagine that as I am sitting in the Chinese room shuffling the Chinese symbols, I get bored with just shuffling the--to me--meaningless symbols. So, suppose that I decide to interpret the symbols as standing for moves in a chess game. Which semantics is the system giving off now? Is it giving off a Chinese semantics or a chess semantics, or both simultaneously? Suppose there is a third person looking in through the window, and she decides that the symbol manipulations can all be interpreted as stock-market predictions. And so on. There is no limit to the number of semantic interpretations that can be assigned to the symbols because, to repeat, the symbols are purely formal. They have no intrinsic semantics.

Is there any way to rescue the Churchlands' analogy from incoherence? I said above that formal symbols do not have causal properties. But of course the program will always be implemented in some hardware or another, and the hardware will have specific physical, causal powers. And any real computer will give off various phenomena. My computers, for example, give off heat, and they make a humming noise and sometimes crunching sounds. So is there some logically compelling reason why they could not also give off consciousness? No. Scientifically, the idea is out of the question, but it is not something the Chinese room argument is supposed to refute, and it is not something that an adherent of strong AI would wish to defend, because any such giving off would have to derive from the physical features of the implementing medium. But the basic premise of strong AI is that the physical features of the implementing medium are totally irrelevant. What matters are programs, and programs are purely formal.

The Churchlands' analogy between syntax and electromagnetism, then, is confronted with a dilemma; either the syntax is construed purely formally in terms of its abstract mathematical properties, or it is not. If it is, then the analogy breaks down, because syntax so construed has no physical powers and hence no physical, causal powers. If, on the other hand, one is supposed to think in terms of the physics of the implementing medium, then there is indeed an analogy, but it is not one that is relevant to strong AI.

Because the points I have been making are rather obvious--syntax is not the same as semantics, brain processes cause mental phenomena--the question arises, How did we get into this mess? How could anyone have supposed that a computer simulation of a mental process must be the real thing? After all, the whole point of models is that they contain only certain features of the modeled domain and leave out the rest. No one expects to get wet in a pool filled with Ping-Pong-ball models of water molecules. So why would anyone think a computer model of thought processes would actually think?

Part of the answer is that people have inherited a residue of behaviorist psychological theories of the past generation. The Turing test enshrines the temptation to think that if something behaves as if it had certain mental processes, then it must actually have those mental processes. And this is part of the behaviorists' mistaken assumption that in order to be scientific, psychology must confine its study to externally observable behavior. Paradoxically, this residual behaviorism is tied to a residual dualism. Nobody thinks that a computer simulation of digestion would actually digest anything, but where cognition is concerned, people are willing to believe in such a miracle because they fail to recognize that the mind is just as much a biological phenomenon as digestion. The mind, they suppose, is something formal and abstract, not a part of the wet and slimy stuff in our heads. The polemical literature in AI usually contains attacks on something the authors call dualism, but what they fail to see is that they themselves display dualism in a strong form, for unless one accepts the idea that the mind is completely independent of the brain or of any other physically specific system, one could not possibly hope to create minds just by designing programs.

Historically, scientific developments in the West that have treated humans as just a part of the ordinary physical, biological order have often been opposed by various rearguard actions. Copernicus and Galileo were opposed because they denied that the earth was the center of the universe; Darwin was opposed because he claimed that humans had descended from the lower animals. It is best to see strong AI as one of the last gasps of this antiscientific tradition, for it denies that there is anything essentially physical and biological about the human mind. The mind according to strong AI is independent of the brain. It is a computer program and as such has no essential connection to any specific hardware.

Many people who have doubts about the psychological significance of AI think that computers might be able to understand Chinese and think about numbers but cannot do the crucially human things, namely--and then follows their favorite human specialty--falling in love, having a sense of humor, feeling the angst of postindustrial society under late capitalism, or whatever. But workers in AI complain--correctly--that this is a case of moving the goalposts. As soon as an AI simulation succeeds, it ceases to be of psychological importance. In this debate both sides fail to see the distinction between simulation and duplication. As far as simulation is concerned, there is no difficulty in programming my computer so that it prints out, "I love you, Suzy"; "Ha ha"; or "I am suffering the angst of postindustrial society under late capitalism." The important point is that simulation is not the same as duplication, and that fact holds as much import for thinking about arithmetic as it does for feeling angst. The point is not that the computer gets only to the 40-yard line and not all the way to the goal line. The computer doesn't even get started. It is not playing that game.

FURTHER READING

MIND DESIGN: PHILOSOPHY, PSYCHOLOGY, ARTIFICIAL INTELLIGENCE. Edited by John Haugeland. The MIT Press, 1980.
MINDS, BRAINS, AND PROGRAMS. John Searle in Behavioral and Brain Sciences, Vol. 3, No. 3, pages 417-458; 1980.
MINDS, BRAINS, AND SCIENCE. John R. Searle. Harvard University Press, 1984.
MINDS, MACHINES AND SEARLE. Stevan Harnad in Journal of Experimental and Theoretical Artificial Intelligence, Vol. 1, No. 1, pages 5-25; 1989.

From Scientific American, (c) January 1990 by Scientific American, Inc.
Could a Machine Think?

Classical AI is unlikely to yield conscious machines; systems that mimic the brain might.

by Paul M. Churchland and Patricia Smith Churchland

PAUL M. CHURCHLAND and PATRICIA SMITH CHURCHLAND are professors of philosophy at the University of California at San Diego. Together they have studied the nature of the mind and knowledge for the past two decades. Paul Churchland focuses on the nature of scientific knowledge and its development, while Patricia Churchland focuses on the neurosciences and on how the brain sustains cognition. Paul Churchland's Matter and Consciousness is the standard textbook on the philosophy of the mind, and Patricia Churchland's Neurophilosophy brings together theories of cognition from both philosophy and biology. Paul Churchland is currently chair of the philosophy department at UCSD, and the two are, respectively, president and past president of the Society for Philosophy and Psychology. Patricia Churchland is also an adjunct professor at the Salk Institute for Biological Studies in San Diego. The Churchlands are also members of the UCSD cognitive science faculty, its Institute for Neural Computation and its Science Studies program.

Artificial-intelligence research is undergoing a revolution. To explain how and why, and to put John R. Searle's argument in perspective, we first need a flashback.

By the early 1950's the old, vague question, Could a machine think? had been replaced by the more approachable question, Could a machine that manipulated physical symbols according to structure-sensitive rules think? This question was an improvement because formal logic and computational theory had seen major developments in the preceding half-century. Theorists had come to appreciate the enormous power of abstract systems of symbols that undergo rule-governed transformations. If those systems could just be automated, then their abstract computational power, it seemed, would be displayed in a real physical system. This insight spawned a well-defined research program with deep theoretical underpinnings.

Could a machine think? There were many reasons for saying yes. One of the earliest and deepest reasons lay in two important results in computational theory. The first was Church's thesis, which states that every effectively computable function is recursively computable. Effectively computable means that there is a "rote" procedure for determining, in finite time, the output of the function for a given input. Recursively computable means more specifically that there is a finite set of operations that can be applied to a given input, and then applied again and again to the successive results of such applications, to yield the function's output in finite time. The notion of a rote procedure is nonformal and intuitive; thus, Church's thesis does not admit of a formal proof. But it does go to the heart of what it is to compute, and many lines of evidence converge in supporting it.
seemed, wg~uldbe displayed in a real The second important result was These goals form t-amental re-
physical system This insight spawned Alan M. Turing's demonstration that search program of classical AI.
a well-defined research program with any recursively computable function Initial results were positive. Sb4
deep theoretical underpinnings. can be computed in finite time by a machines with clever programs per-
Could a machine think? There were ma,dmidly simple sort of symbol-ma- formed a variety of ostensibly cog-
many reasons for saying yes. One of nipulating machine that has come to nitive activities. They responded 10
the earliest and deepest reasons lay in be called a unive;: al Turing machine. complex instructions, solved com-
This machine is by a set of re- plex arithmetic, algebraic and tactical
r
PALL M. CHURCHLAND and PAmCL4 cursively applicable rules that are sen- problems, played checkers and chess,
SMmf CHURCHL4ND are professors of sitive to [lie identity, order and ar- proved theorems and engaged in sim-
phiIosophy at the University of Califor- rangement of the elementary symbols ple dialogue. Performance continued
nia at San Diego. .Together thcy have it encounters as input. to improve with the appearance of
studied the nature of the mind and larger memories and faster machines
knowledge for rhe past two decades.

T hese two results. entail some- and with the use of longer and more
Paul Ch~rrchlandfocuses o n the nature thing remarkable, namely that a cunning programs. Classicd, or "pro-
of scientific knowledge and its develop- gram-writing," A1 was asvigorous and
ment, while Patricia Churchland focuses standard digital computer, given
on the neurosciences and on how the orily.rhe right program, a large enough successful research effort from al-
brain sustains cognition Paul Church- memory and sufficient time, can corn- most every perspective. The gcca-
land's hfarrer and Consciousness is the pute any rule-governed input-output sional denial that an SM =.a"chine
standard textbook on the phi!osophy of function. That is, it can display any might eventually think appeaqd unin-
the mind, and Patrida Churchiznd's systematic pattern of responses to the. formed and ill motivated. The case for
Neurophilosophy brings together theo- environment whatsoever. a positive answer to our fitle quesdon
ries of cognition from both philosophy More specifically, these results im- was overwhelming.
and biology. Paul Chuichland is current- There were a few puzzles, of course-
ly chair of the philosophy depamnent at ply that a suitabIy programmed sym-
UCSD, and the two are, respectively. bol-manipulating machine (hereafter, For one thing. SM machines were ad-
president and past president of the So- SM machine) should be able..ta, pass mittedly not very brainlike. Even here,

'I
ciety for Philosophy and Psychology. Pa- the Turing test for conscibbs intel- however, the classical approach had a
u-icia Churchland is also an adjuncr pro- ligence. The Turing test is a purely conbincing answer. First, the physical
fessor at the Salk Insnrute for Biological behavioral test for conscious intelli- material of any SM machine has noth-
Studies in San Diego. The Churchlands gence, but it is a very demanaing ing essential to do with what function
arc also members of thc UCSD cognitive ic computes. That is fixed by its Pro-
science faculry, its hstirute for Neural test even so. (Whether it is a fair test
Computation and its Science Studies will be addressed below, where we gram Second, the engineering details
program. shall also encounter a second and of any machine's functional architec-
quite d~ficrent"test" for conscious in- ture are also irrelevant, since different

32 S c r ~ k n ~~l L
c V E R IJanuary
~N 19.90
f3
;\7
4;-
Accordingly, AI sought to find the input-output function characteristic of intelligence and the most efficient of the many possible programs for computing it. The idiosyncratic way in which the brain computes the function just doesn't matter, it was said. This completes the rationale for classical AI and for a positive answer to our title question.

Could a machine think? There were also some arguments for saying no. Through the 1960's interesting negative arguments were relatively rare. The objection was occasionally made that thinking was a nonphysical process in an immaterial soul. But such dualistic resistance was neither evolutionarily nor explanatorily plausible. It had a negligible impact on AI research.

A quite different line of objection was more successful in gaining the AI community's attention. In 1972 Hubert L. Dreyfus published a book that was highly critical of the parade-case simulations of cognitive activity. He argued for their inadequacy as simulations of genuine cognition, and he pointed to a pattern of failure in these attempts. What they were missing, he suggested, was the vast store of inarticulate background knowledge every person possesses and the commonsense capacity for drawing on relevant aspects of that knowledge as changing circumstance demands. Dreyfus did not deny the possibility that an artificial physical system of some kind might think, but he was highly critical of the idea that this could be achieved solely by symbol manipulation at the hands of recursively applicable rules.

Dreyfus's complaints were broadly perceived within the AI community, and within the discipline of philosophy as well, as shortsighted and unsympathetic, as harping on the inevitable simplifications of a research effort still in its youth. These deficits might be real, but surely they were temporary. Bigger machines and better programs should repair them in due course. Time, it was felt, was on AI's side. Here again the impact on research was negligible.

Time was on Dreyfus's side as well: the rate of cognitive return on increasing speed and memory began to slacken in the late 1970's and early 1980's. The simulation of object recognition in the visual system, for example, proved computationally intensive to an unexpected degree. Realistic results required longer and longer periods of computer time, periods far in excess of what a real visual system requires. This relative slowness of the simulations was darkly curious; signal propagation in a computer is roughly a million times faster than in the brain, and the clock frequency of a computer's central processor is greater than any frequency found in the brain by a similarly dramatic margin. And yet, on realistic problems, the tortoise easily outran the hare.

Furthermore, realistic performance required that the computer program have access to an extremely large knowledge base. Constructing the relevant knowledge base was problem enough, and it was compounded by the problem of how to access just the contextually relevant parts of that knowledge base in real time. As the knowledge base got bigger and better, the access problem got worse. Exhaustive search took too much time, and heuristics for relevance did poorly. Worries of the sort Dreyfus had raised finally began to take hold here.

[Sidebar comparing the two arguments:]

THE CHINESE ROOM
Axiom 1. Computer programs are formal (syntactic).
Axiom 2. Human minds have mental contents (semantics).
Axiom 3. Syntax by itself is neither constitutive of nor sufficient for semantics.
Conclusion 1. Programs are neither constitutive of nor sufficient for minds.

THE LUMINOUS ROOM
Axiom 1. Electricity and magnetism are forces.
Axiom 2. The essential property of light is luminance.
Axiom 3. Forces by themselves are neither constitutive of nor sufficient for luminance.
Conclusion 1. Electricity and magnetism are neither constitutive of nor sufficient for light.

[Figure caption: OSCILLATING ELECTROMAGNETIC FORCES constitute light even though a magnet pumped by a person appears to produce no light whatsoever. Similarly, rule-based symbol manipulation might constitute intelligence even though the rule-based system inside John R. Searle's "Chinese room" appears to lack real understanding.]
[...] however organized, could ever constitute or be sufficient for life. Plainly, what people can or cannot imagine often has nothing to do with what is or is not the case, even where the people involved are highly intelligent.

To see how this lesson applies to Searle's case, consider a deliberately manufactured parallel to his argument and its supporting thought experiment.

Axiom 1. Electricity and magnetism are forces.

Axiom 2. The essential property of light is luminance.

Axiom 3. Forces by themselves are neither constitutive of nor sufficient for luminance.

Conclusion 1. Electricity and magnetism are neither constitutive of nor sufficient for light.

Imagine this argument raised shortly after James Clerk Maxwell's 1864 suggestion that light and electromagnetic waves are identical but before the world's full appreciation of the systematic parallels between the properties of light and the properties of electromagnetic waves. This argument could have served as a compelling objection to Maxwell's imaginative hypothesis, especially if it were accompanied by the following commentary in support of axiom 3.

"Consider a dark room containing a man holding a bar magnet or charged object. If the man pumps the magnet up and down, then, according to Maxwell's theory of artificial luminance (AL), it will initiate a spreading circle of electromagnetic waves and will thus be luminous. But as all of us who have toyed with magnets or charged balls well know, their forces (or any other forces for that matter), even when set in motion, produce no luminance at all. It is inconceivable that you might constitute real luminance just by moving forces around!"

How should Maxwell respond to this challenge? He might begin by insisting that the "luminous room" experiment is a misleading display of the phenomenon of luminance because the frequency of oscillation of the magnet is absurdly low, too low by a factor of 10^15. This might well elicit the impatient response that frequency has nothing to do with it, that the room with the bobbing magnet already contains everything essential to light. (And as slowly as a human hand can oscillate the magnet, the wavelength of the electromagnetic waves produced is far too long and their intensity is much too weak for human retinas to respond to them.) But in the climate of understanding here contemplated--the 1860's--this tactic is likely to elicit laughter and hoots of derision: "Luminous room, my foot, Mr. Maxwell. It's pitch-black in there!"

Alas, poor Maxwell has no easy route out of this predicament. All he can do is insist on the following three points. First, axiom 3 of the above argument is false. Indeed, it begs the question despite its intuitive plausibility. Second, the luminous room experiment demonstrates nothing of interest one way or the other about the nature of light. And third, what is needed to settle the problem of light and the possibility of artificial luminance is an ongoing research program to determine whether under the appropriate conditions the behavior of electromagnetic waves does indeed mirror perfectly the behavior of light.

This is also the response that classical AI should give to Searle's argument. Even though Searle's Chinese room may appear to be "semantically dark," he is in no position to insist, on the strength of this appearance, that rule-governed symbol manipulation can never constitute semantic phenomena, especially when people have only an uninformed commonsense understanding of the semantic and cognitive phenomena that need to be explained. Rather than exploit one's understanding of these things, Searle's argument freely exploits one's ignorance of them.

With these criticisms of Searle's argument in place, we return to the question of whether the research program of classical AI has a realistic chance of solving the problem of conscious intelligence and of producing a machine that thinks. We believe that the prospects are poor, but we rest this opinion on reasons very different from Searle's. Our reasons derive from the specific performance failures of the classical research program in AI and from a variety of lessons learned from the biological brain and a new class of computational models inspired by its structure. We have already indicated some of the failures of classical AI regarding tasks that the brain performs.

What we need to know is this: How does the brain achieve cognition? Reverse engineering is a common practice in industry. When a new piece of technology comes on the market, competitors find out how it works by taking it apart and divining its structural rationale. In the case of the brain, this strategy presents an unusually stiff challenge, for the brain is the most complicated and sophisticated thing on the planet. Even so, the neurosciences have revealed much about the brain on a wide variety of structural levels. Three anatomic points will provide a basic contrast with the architecture of conventional electronic computers.

First, nervous systems are parallel machines, in the sense that signals are processed in millions of different pathways simultaneously. The retina, for example, presents its complex input to the brain not in chunks of eight, 16 or 32 elements, as in a desktop computer, but rather in the form of almost a million distinct signal elements arriving simultaneously at the target of the optic nerve (the lateral geniculate nucleus), there to be processed collectively, simultaneously and in one fell swoop. Second, the brain's basic processing unit, the neuron, is comparatively simple. Furthermore, its response to incoming signals is analog, not digital, inasmuch as its output spiking frequency varies continuously with its input signals. Third, in the brain, axons projecting from one neuronal population to another are often matched by axons returning from their target population. These descending or recurrent projections allow the brain to modulate the character of its sensory processing. More important still, their existence makes the brain a genuine dynamical system whose continuing behavior is both highly complex and to some degree independent of its peripheral stimuli.

Highly simplified model networks have been useful in suggesting how real neural networks might work and in revealing the computational properties of parallel architectures. For example, consider a three-layer model consisting of neuronlike units fully connected by axonlike connections to the units at the next layer. An input stimulus produces some activation level in a given input unit, which conveys a signal of proportional strength along its "axon" to its many "synaptic" connections to the hidden units.


The global effect is that a pattern of activations across the set of input units produces a distinct pattern of activations across the set of hidden units. The same story applies to the output units. As before, an activation pattern across the hidden units produces a distinct activation pattern across the output units. All told, this network is a device for transforming any one of a great many possible input vectors (activation patterns) into a uniquely corresponding output vector. It is a device for computing a specific function. Exactly which function it computes is fixed by the global configuration of its synaptic weights.

There are various procedures for adjusting the weights so as to yield a network that computes almost any function--that is, any vector-to-vector transformation--that one might desire. In fact, one can even impose on it a function one is unable to specify, so long as one can supply a set of examples of the desired input-output pairs. This process, called "training up the network," proceeds by successive adjustment of the network's weights until it performs the input-output transformations desired.
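A minimal sketch of such a three-layer vector transformer, written in Python with NumPy, is given below. The layer sizes, the example input-output pairs (the exclusive-or pattern) and the particular weight-adjustment rule (gradient descent on squared error) are assumptions chosen for illustration; the authors describe the general scheme, not this specific implementation.

    import numpy as np

    rng = np.random.default_rng(1)
    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden weights
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output weights

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(x):
        """Transform an input vector into hidden and output activation patterns."""
        h = sigmoid(x @ W1 + b1)
        return h, sigmoid(h @ W2 + b2)

    # Desired input-output pairs: the examples used to "train up the network."
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([[0], [1], [1], [0]], dtype=float)

    for _ in range(20000):                    # successive weight adjustments
        h, y = forward(X)
        grad_out = (y - Y) * y * (1 - y)      # error signal at the output units
        grad_hid = (grad_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ grad_out;  b2 -= 0.5 * grad_out.sum(axis=0)
        W1 -= 0.5 * X.T @ grad_hid;  b1 -= 0.5 * grad_hid.sum(axis=0)

    print(np.round(forward(X)[1].ravel(), 2))  # should end up near [0. 1. 1. 0.]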
[Figure: NERVOUS SYSTEMS span many scales of organization, from neurotransmitter molecules (bottom) to the entire brain and spinal cord. Intermediate levels include single neurons and circuits made up of a few neurons, such as those that produce orientation selectivity to a visual stimulus (middle), and systems made up of circuits such as those that subserve language (top right). Only research can decide how closely an artificial system must mimic the biological one to be capable of intelligence. Scale markers in the illustration run from 1 meter down to 1 micron.]

Although this model network vastly oversimplifies the structure of the brain, it does illustrate several important points. First, a parallel architecture provides a dramatic speed advantage over a conventional computer, for the many synapses at each level perform many small computations simultaneously instead of in laborious sequence. This advantage gets larger as the number of neurons increases at each layer. Strikingly, the speed of processing is entirely independent of both the number of units involved in each layer and the complexity of the function they are computing. Each layer could have four units or a hundred million; its configuration of synaptic weights could be computing simple one-digit sums or second-order differential equations. It would make no difference. The computation time would be exactly the same.

Second, massive parallelism means that the system is fault-tolerant and functionally persistent; the loss of a few connections, even quite a few, has a negligible effect on the character of the overall transformation performed by the surviving network.

Third, a parallel system stores large amounts of information in a distributed fashion, any part of which can be accessed in milliseconds. That information is stored in the specific configuration of synaptic connection strengths, as shaped by past learning. Relevant information is "released" as the input vector passes through, and is transformed by, that configuration of connections.

Parallel processing is not ideal for all types of computation. On tasks that require only a small input vector, but many millions of swiftly iterated recursive computations, the brain performs very badly, whereas classical SM machines excel. This class of computations is very large and important, so classical machines will always be useful, indeed, vital. There is, however, an equally large class of computations for which the brain's architecture is the superior technology. These are the computations that typically confront living creatures: recognizing a predator's outline in a noisy environment; recalling instantly how to avoid its gaze, flee its approach or fend off its attack; distinguishing food from nonfood and mates from nonmates; navigating through a complex and ever-changing physical/social environment; and so on.

Finally, it is important to note that the parallel architecture described is not manipulating symbols according to structure-sensitive rules. Rather, symbol manipulation appears to be just one of many cognitive skills that a network may or may not learn to display. Rule-governed symbol manipulation is not its basic mode of operation. Searle's argument is directed against rule-governed SM machines; vector transformers of the kind we describe are therefore not threatened by his Chinese room argument even if it were sound, which we have found independent reason to doubt.
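The fault-tolerance point made above can be given a concrete, if hedged, form. The sketch below builds a network with random weights, sets a few of its many connections to zero, and compares the output vector before and after the "lesion"; the network sizes and the number of cut connections are arbitrary choices. (A serial simulation like this one of course forfeits the parallel speed advantage; only the persistence of the overall transformation is being illustrated.)

import math
import random

def forward(x, layers):
    for weights in layers:
        x = [1.0 / (1.0 + math.exp(-sum(w * a for w, a in zip(ws, x)))) for ws in weights]
    return x

random.seed(2)
n_in, n_hid, n_out = 30, 40, 5
layers = [[[random.gauss(0, 0.3) for _ in range(n_in)] for _ in range(n_hid)],
          [[random.gauss(0, 0.3) for _ in range(n_hid)] for _ in range(n_out)]]
stimulus = [random.random() for _ in range(n_in)]

before = forward(stimulus, layers)

# "Lesion" ten synapses out of roughly 1,400 by setting their weights to zero.
for _ in range(10):
    weights = random.choice(layers)
    i = random.randrange(len(weights))
    j = random.randrange(len(weights[i]))
    weights[i][j] = 0.0

after = forward(stimulus, layers)
print(max(abs(a - b) for a, b in zip(after, before)))   # typically a small shift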

Searle is aware of parallel processors but thinks they too will be devoid of real semantic content. To illustrate their inevitable failure, he outlines a second thought experiment, the Chinese gym, which has a gymnasium full of people organized into a parallel network. From there his argument proceeds as in the Chinese room.

We find this second story far less responsive or compelling than his first. For one, it is irrelevant that no unit in his system understands Chinese, since the same is true of nervous systems: no neuron in my brain understands English, although my whole brain does. For another, Searle neglects to mention that his simulation (using one person per neuron, plus a fleet-footed child for each synaptic connection) will require at least 10^14 people, since the human brain has 10^11 neurons, each of which averages over 10^3 connections. His system will require the entire human populations of over 10,000 earths. One gymnasium will not begin to hold a fair simulation.
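As a quick check on that head count (redone here only for the reader's convenience), the neuron and synapse figures are the article's own; the world population of roughly five billion, its approximate 1990 value, is an added assumption.

neurons = 10 ** 11                    # the article's figure for the human brain
connections_per_neuron = 10 ** 3      # "averages over 10^3 connections"
world_population = 5 * 10 ** 9        # assumed 1990 value, roughly

# One person per neuron, plus one fleet-footed child per synaptic connection.
people_needed = neurons + neurons * connections_per_neuron

print(f"people needed: about 10^{len(str(people_needed)) - 1}")       # about 10^14
print(f"earths required: {people_needed / world_population:,.0f}")    # well over 10,000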
On the other hand, if such a system were to be assembled on a suitably cosmic scale, with all its pathways faithfully modeled on the human case, we might then have a large, slow, oddly made but still functional brain on our hands. In that case the default assumption is surely that, given proper inputs, it would think, not that it couldn't. There is no guarantee that its activity would constitute real thought, because the vector-processing theory sketched above may not be the correct theory of how brains work. But neither is there any a priori guarantee that it could not be thinking. Searle is once more mistaking the limits on his (or the reader's) current imagination for the limits on objective reality.
The brain is a kind of computer, although most of its properties remain to be discovered. Characterizing the brain as a kind of computer is neither trivial nor frivolous. The brain does compute functions, functions of great complexity, but not in the classical AI fashion. When brains are said to be computers, it should not be implied that they are serial, digital computers, that they are programmed, that they exhibit the distinction between hardware and software or that they must be symbol manipulators or rule followers. Brains are computers in a radically different style.

How the brain manages meaning is still unknown, but it is clear that the problem reaches beyond language use and beyond humans. A small mound of fresh dirt signifies to a person, and also to coyotes, that a gopher is around; an echo with a certain spectral character signifies to a bat the presence of a moth. To develop a theory of meaning, more must be known about how neurons code and transform sensory signals, about the neural basis of memory, learning and emotion and about the interaction of these capacities and the motor system. A neurally grounded theory of meaning may require revision of the very intuitions that now seem so secure and that are so freely exploited in Searle's arguments. Such revisions are common in the history of science.

Could science construct an artificial intelligence by exploiting what is known about the nervous system? We see no principled reason why not. Searle appears to agree, although he qualifies his claim by saying that "any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains." We close by addressing this claim. We presume that Searle is not claiming that a successful artificial mind must have all the causal powers of the brain, such as the power to smell bad when rotting, to harbor slow viruses such as kuru, to stain yellow with horseradish peroxidase and so forth. Requiring perfect parity would be like requiring that an artificial flying device lay eggs.

Presumably he means only to require of an artificial mind all of the causal powers relevant, as he says, to conscious intelligence. But which exactly are they? We are back to quarreling about what is and is not relevant. This is an entirely reasonable place for a disagreement, but it is an empirical matter, to be tried and tested. Because so little is known about what goes into the process of cognition and semantics, it is premature to be very confident about what features are essential. Searle hints at various points that every level, including the biochemical, must be represented in any machine that is a candidate for artificial intelligence. This claim is almost surely too strong. An artificial brain might use something other than biochemicals to achieve the same ends.

This possibility is illustrated by Carver A. Mead's research at the California Institute of Technology. Mead and his colleagues have used analog VLSI techniques to build an artificial retina and an artificial cochlea. (In animals the retina and cochlea are not mere transducers: both systems embody a complex processing network.) These are not mere simulations in a minicomputer of the kind that Searle derides; they are real information-processing units responding in real time to real light, in the case of the artificial retina, and to real sound, in the case of the artificial cochlea. Their circuitry is based on the known anatomy and physiology of the cat retina and the barn owl cochlea, and their output is dramatically similar to the known output of the organs at issue.

These chips do not use any neurochemicals, so neurochemicals are clearly not necessary to achieve the evident results. Of course, the artificial retina cannot be said to see anything, because its output does not have an artificial thalamus or cortex to go to. Whether Mead's program could be sustained to build an entire artificial brain remains to be seen, but there is no evidence now that the absence of biochemicals renders it quixotic.

We, and Searle, reject the Turing test as a sufficient condition for conscious intelligence. At one level our reasons for doing so are similar: we agree that it is also very important how the input-output function is achieved; it is important that the right sorts of things be going on inside the artificial machine. At another level, our reasons are quite different. Searle bases his position on commonsense intuitions about the presence or absence of semantic content. We base ours on the specific behavioral failures of the classical SM machines and on the specific virtues of machines with a more brainlike architecture. These contrasts show that certain computational strategies have vast and decisive advantages over others where typical cognitive tasks are concerned, advantages that are empirically inescapable. Clearly, the brain is making systematic use of these computational advantages. But it need not be the only physical system capable of doing so. Artificial intelligence, in a nonbiological but massively parallel machine, remains a compelling and discernible prospect.

FURTHER READING

COMPUTING MACHINERY AND INTELLIGENCE. Alan M. Turing in Mind, Vol. 59, pages 433-460; 1950.
WHAT COMPUTERS CAN'T DO: A CRITIQUE OF ARTIFICIAL REASON. Hubert L. Dreyfus. Harper & Row, 1972.
NEUROPHILOSOPHY: TOWARD A UNIFIED UNDERSTANDING OF THE MIND/BRAIN. Patricia Smith Churchland. The MIT Press, 1986.
FAST THINKING in The Intentional Stance. Daniel Clement Dennett. The MIT Press, 1987.
A NEUROCOMPUTATIONAL PERSPECTIVE: THE NATURE OF MIND AND THE STRUCTURE OF SCIENCE. Paul M. Churchland. The MIT Press, in press.