
Phil Lecture 4/4: Can a Machine Think?

April 5, 2012
1. Remember from last time: Turing thought his test was sufficient for saying a machine can think. If a machine passes his test, it qualifies as thinking.
   (a) The question is, does a machine that passes the Turing Test actually think, or is it merely simulating thought?
      i. For Turing's test to work, there must be no difference between simulating thought and actually thinking.
      ii. A simulation of a hurricane on a computer certainly isn't a real hurricane. The same seems true of any physical phenomenon. What is the relevant difference between thought and these physical phenomena?
      iii. Consider a computer that simulates addition. If a computer simulates addition, it just is doing addition. There is no difference in this case between simulating the action and actually doing it.
         A. For Turing's test to work, this has to be the case for thought as well.
      iv. Perhaps the relevant difference is the concrete vs. the abstract.
         A. Hurricanes are concrete and so cannot be simulated via abstract means.
         B. Since thought is abstract (or assuming that thought is abstract), simulation doesn't come apart from actual thought.
      v. Phenomenal states, states of how things feel to you (like pains and the appearance of colors), are difficult cases for the simulation of thought.
         A. It seems problematic to think that if something is exhibiting pain behavior it actually is in pain, that it feels pain.

2. John Searle
   (a) Searle is challenging Strong AI, which is what underlies the Turing Test.
      i. Weak AI holds only that computer programs can simulate aspects of thought and might serve as a useful tool in understanding cognitive states and thinking.
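The addition point can be made concrete with a short sketch (the function name and digit-string scheme are illustrative, not from the lecture): a program that "simulates" addition by manipulating digit symbols, with no appeal to numbers as such, behaves exactly as addition does.

```python
# A minimal sketch of "simulated" addition done purely by symbol
# manipulation on decimal digit strings, grade-school style.
# The name add_symbols and the representation are invented for
# illustration.

DIGITS = "0123456789"

def add_symbols(a: str, b: str) -> str:
    """Add two decimal digit strings by manipulating their symbols."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    result, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        total = DIGITS.index(da) + DIGITS.index(db) + carry
        carry, digit = divmod(total, 10)
        result.append(DIGITS[digit])
    if carry:
        result.append(DIGITS[carry])
    return "".join(reversed(result))

print(add_symbols("123", "989"))  # → 1112
```

However the digits are produced, the behavior is extensionally identical to adding, which is the sense in which simulating an abstract operation just is performing it.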

      ii. Strong AI holds that if you can design an appropriately programmed computer, then that computer IS thinking and has cognitive states. This is the thought behind the Turing Test.
         A. i.e., if a computer can pass the Turing Test, it is thinking and has cognitive states.
   (b) His target is a program designed to make a computer understand stories the way that humans do.
   (c) The idea is that if a computer could understand stories like we do, then it is thinking/understanding.
      i. This is a stronger claim than Turing's, since Turing required the computer to do more than understand stories.

3. The Chinese Room
   (a) Imagine a man with no knowledge of Chinese, isolated in a room. He is handed a bunch of Chinese symbols along with a set of rules for associating Chinese symbols with one another. When he is given symbols of a certain shape, the rules instruct him to hand back out symbols of a certain other shape.
      i. This is supposed to mimic the manipulation of symbols that a computer would perform in a story-understanding program.
      ii. If the rules are well written, the man in the room will seem to understand the stories because he is giving all of the right responses. His simulation will seem flawless.
      iii. It is obvious, Searle contends, that the man doesn't understand Chinese (or the Chinese stories) on the basis of manipulating symbols and providing responses that he doesn't really understand.
   (b) Searle thinks this thought experiment shows very clearly that you can have a simulation of thinking or understanding without any thinking or understanding going on.
   (c) There is an analogy that the mind is to the brain as computer software is to computer hardware. This underlies (or is often endorsed by) the materialist position.
      i. Searle thinks this analogy is wrong and that certain features of the mind cannot be reduced to the brain. 
He is a materialist, but he thinks that what counts as a mental state depends on biological features of the brain, whereas a computer program does not. So the reason a computer couldn't understand is that it isn't built in the right way.
      ii. Critics of Searle say that he treats brain tissue as "wonder tissue"; that there is something bizarrely special about the brain that gives rise to thought.
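The rulebook in the room can be sketched as a bare lookup table (the rules and phrases below are invented for illustration; a real story-understanding program would be vastly larger): to the program, the Chinese strings are just shapes to be matched, with no meanings represented anywhere.

```python
# A minimal sketch of the Chinese Room's rulebook as pure symbol
# manipulation. The entries are illustrative. The program matches
# input shapes to output shapes; the strings are opaque tokens to
# it, and nothing here encodes what any of them mean.

RULEBOOK = {
    "你好吗": "我很好",          # input shape -> output shape
    "你叫什么名字": "我叫小明",   # another shape-pairing rule
}

def room(symbols: str) -> str:
    """Hand back whatever the rules dictate for the given shapes."""
    return RULEBOOK.get(symbols, "请再说一遍")

print(room("你好吗"))  # → 我很好
```

From the outside, a rich enough table looks like understanding; inside there is only shape matching, which is Searle's point.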

4. Responses to the Chinese Room and Searle; how to defend Strong AI
   (a) The Systems Reply
      i. We have this room, the man in the room, pieces of paper with Chinese symbols, etc. The point is that of course the man doesn't understand Chinese, but the system as a whole can be said to understand Chinese. There is more to the system than just the man.
      ii. The whole room with everything in it is what understands the Chinese story.
         A. This sounds kind of strange, but as long as mental states are abstract, it is open that they could be realized by strange, complex systems.
      iii. Searle's Reply
         A. Suppose that the man internalizes everything in the room. He no longer has any paper with symbols or written-down rules; he just has the entire room in his head. Searle wants to say that the man (who now is the whole system) still doesn't understand the Chinese story.
         B. Counter: this isn't feasible; no human could internalize a program this complex, so no human could be this entire system. What we would be talking about would be so idealized as to not be a human being anymore. Applying our intuitions about what humans understand, then, is illegitimate.
         C. There is something about the complexity of a system that could pass the Turing Test that justifies the ascription of cognitive states to that system.
         D. Once you include all of the idealizations required to internalize the room, it is unclear why you should think that being doesn't understand Chinese.
         E. Searle seems to be pulling a fast one. He has you focused on the original man, who doesn't understand Chinese, and tries to argue that no idealization of him would either. But, the countervailing thought goes, the man would have to come to understand Chinese somewhere along the way of internalizing all of these rules. 
   (b) Consider a version of the argument where the Chinese Room is functionally equivalent to a brain (the man performs the role of carrying the electrical pulses from neuron to neuron), such that the two objects share the same structure. How could Searle deny that the room (which is structurally identical to a brain) understands Chinese?
      i. He would have to revert to the "brain matter is magical" reply. This seems like a real problem.

   (c) People are still sympathetic to the view that software isn't sufficient for understanding.

5. Three problems with Strong AI
   (a) Ambiguity in "information processing."
      i. What computers do when they process information is manipulate meaningless symbols. It is all syntax with no semantics.
      ii. What humans do involves content/meaning, so there is a large disconnect between the way humans and computers process information.
      iii. This could be resisted. When you get to a very complex system, you might think that syntax will generate a semantics. Searle's distinction might just be begging the question.
   (b) Strong AI has a behaviorist assumption.
      i. As we discussed with Turing, it isn't really an objectionable form of behaviorism, because behavior is only taken as a sufficient condition for the possession of given mental states.
   (c) Strong AI is a version of dualism.
      i. Minds, like software, seem to have a sort of existence independent of the brain/hardware they are instantiated in. This seems to commit Strong AI to some form of dualism.
      ii. On this view, you could have eternal life if you copied your brain and put it into a computer.
