150 Studyguide1 Fa16 (Mind)
STUDY GUIDE
Unit I, The Mind-Body Problem
Unit II, Can Non-Humans Think?
The final exam will cover all material presented in classroom lectures and discussions as
well as handouts through the eighth week, plus the following readings:
Introduction:
philia-sophia (love of wisdom)
Philosophy begins in wonder.
Philosophy is the attempt to see how things, in the broadest possible sense of the term,
hang together, in the broadest possible sense of the term.
Metaphysical questions are questions about the nature of things as general as reality, life,
death, immortality, the soul, and personhood. Metaphysical categories that we have discussed
include matter, mind, substance, qualities, and ideas.
What is Philosophy of Mind? Name two questions that Philosophy of Mind attempts to
answer.
The Mind-Body Problem: What is the mind and how does it differ from the body, if at all? How
does one account for the relation (if any!) between the mind and the body?
dualism*
interactionism (Rene Descartes)
parallelism
occasionalism
epiphenomenalism (Frank Jackson)
materialism
behaviorism* (B.F. Skinner)
reductive materialism, or mind-brain identity theory* (David Papineau)
eliminative materialism* (Paul Churchland)
functionalism* (William Lycan)
Dualism is the view that there are two basic kinds of stuff, two substances: thinking
substance (or mind), and extended substance (or matter). René Descartes is the most famous
dualist philosopher. His picture of the relationship between the mind and the body: res
cogitans/res extensa. The mind has no extension but it does have a location: the pineal gland. But
how to account for mind-body interaction? There are other versions of dualism in philosophy of
mind, too, including interactionism, epiphenomenalism (Frank Jackson), parallelism (Leibniz),
and occasionalism (Malebranche).
Idealism (or immaterialism) in metaphysics is the view that there is only one substance,
mind. Ideas are what minds “have.” All sensible objects are ideas.
Eliminative materialism is the view that a mature neuroscience probably will establish
that our commonsense psychological conceptions of the causes of human behavior and the nature
of cognitive activity (“folk psychology”) are so radically false that humans simply do not undergo
mental states such as beliefs, desires, and intentions. Folk psychology might turn out, in
retrospect, to resemble stories about Santa and the tooth fairy.
What is the distinction between a brain state and a mental state, according to: a.) a
dualist; b.) a materialist? Give examples of both.
What is an advantage of each of these positions?
What is a possible objection or drawback to each of these positions?
Brie Gertler (a dualist) defends a view she calls naturalistic dualism, arguing that, in
feeling pain, we know the essence of pain. (Gertler defends her self-description as a naturalist by
saying that, although science studies nothing supernatural, still nonphysical events could in
principle be objects of scientific study. Objection: But if we hold that naturalism can admit
nonphysical states into the natural order, don’t we thereby gut the term of substantive meaning?)
The view she argues against in her paper is mind-brain identity theory: she believes that
mental states are not identical to physical states: pain is not just stimulation of C-fibers. Thus,
there would appear to be (at least) two very distinct sorts of events or states: physical events or
states, and mental events or states.
“I will defend dualism by arguing that it is possible that you experience pain even if you
are in no physical state, that is, even if you have no body whatsoever.” (361b) Review Gertler’s
Disembodiment Argument (363a). “If our argument succeeds in showing that pain can be present
in the absence of any physical state, we will have established dualism, for we will have shown
that pain is not identical to anything physical, and thus that at least some mental states […] are
not physical.” Here is the argument, schematically:
1. I can “conceive of” (imagine) experiencing this very pain while possessing no
physical features, that is, while being disembodied.
2*. If, using concepts that are sufficiently comprehensive, I can conceive of a particular
scenario occurring, then that scenario is possible.
3. Therefore, it is possible that this very pain occurs in a disembodied being. (From 1
and 2.)
4. If this very pain were identical to some physical state, then it could not possibly occur
in a disembodied being.1
5. Therefore, this very pain is not identical to any physical state. (From 3 and 4.)
6. Conclusion: Therefore, the identity thesis, which says that every mental state is
identical to some physical state, is false.
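The modus tollens step discussed in the footnote can be checked mechanically. Here is a minimal sketch in Lean, where `Identical` stands for "this pain is identical to some physical state" and `PossiblyDisembodied` for "this pain can occur in a disembodied being" (both labels are mine, not Gertler's):

```lean
-- Premise 4: if the pain is identical to a physical state,
-- it cannot possibly occur disembodied.
-- Premise 3: the pain can possibly occur disembodied.
-- Step 5 follows by modus tollens.
example (Identical PossiblyDisembodied : Prop)
    (premise4 : Identical → ¬ PossiblyDisembodied)
    (premise3 : PossiblyDisembodied) :
    ¬ Identical :=
  fun h : Identical => premise4 h premise3
```

The formal step is uncontroversial; the philosophical work is done by the conceivability premise, which the sketch simply takes as given.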
Frank Jackson (a dualist): Here is his argument: “Nothing you could tell of a physical
sort captures the smell of a rose, for instance. Therefore, Physicalism is false.” Review
Jackson’s Knowledge Argument (373a).
The smell of a rose is an example of a quale (plural: qualia). Qualia are the qualitative
“feel” of experiences; the inexpressibly private “innerness” of mental states; the what-it-is-like
aspect of experience. According to Jackson, they are caused by physical states, notably brain
states, but they do not cause physical states or mental states; indeed, they do not do anything.
Qualia are what we have privileged access to. A quale is information of a sort, but it is not
physical information. Qualia are epiphenomena.
Who is Jackson’s figure Fred? What role does he play in Jackson’s Knowledge
Argument for qualia? What is Jackson’s tomato-sorting thought experiment, the one involving
red1 and red2?
Who is Jackson’s figure, Mary? What is she supposed to illustrate about qualia?
Can you give an example of a mental event seeming to cause a physical event? What
might Frank Jackson say about this?
1. If A is identical to B, then it is impossible for (“inconceivable that”? “impossible to imagine
that”?) something to be an A without being a B. So (5) follows by modus tollens from (4) and (3).
In response to a possible objection to epiphenomenalism, Jackson speculates that having
qualia might be a by-product of an evolved characteristic that is conducive to survival.
Behaviorism: Talk about emotions, sensations, beliefs and desires is a shorthand way of
talking about actual and potential patterns of behavior. (Ryle)
Behaviorists hold that statements containing expressions about mental states and events
(hopes, fears, dreams, imaginations, desires, volitions, and so on) have the same meaning as (and
are translatable into) some set of publicly verifiable, testable statements describing behavioral
and bodily processes and dispositions. “I’m thirsty” means “I am disposed to get a beer from the
fridge.”
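As a caricature (my own illustration, not from the readings), the behaviorist translation scheme can be put in code: the mental-state predicate is defined entirely in terms of publicly observable behavior, with no inner state consulted, because none is posited.

```python
# Behaviorist "translation" sketch: is_thirsty is exhausted by a
# behavioral disposition -- there is nothing "inner" for it to report.

class Agent:
    def __init__(self):
        self.observed_actions = []  # the public behavioral record

    def record(self, action):
        self.observed_actions.append(action)

    def disposed_to_seek_drink(self):
        # A crude behavioral criterion: observed drink-seeking behavior.
        return "walks to fridge" in self.observed_actions

def is_thirsty(agent):
    # "X is thirsty" means "X is disposed to get a drink": the mental
    # ascription translates without remainder into the disposition.
    return agent.disposed_to_seek_drink()

a = Agent()
a.record("walks to fridge")
print(is_thirsty(a))  # True
```

The sketch also makes the standard objection vivid: someone thirsty but unwilling to move, or walking to the fridge for some other reason, breaks the proposed translation.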
Papineau vocabulary: mental property/physical property; the causal argument (for mind-
body identity/materialism); the Completeness of Physics assumption; over-determination
of physical effects of mental states; the “belts and braces” account of mental and physical
causation; causal danglers.
4
Paul Churchland (an eliminative materialist) holds that folk psychology is a "theory so
fundamentally defective that both the principles and the ontology of that theory will
eventually be displaced, rather than smoothly reduced, by completed neuroscience."
He devotes much of his discussion to a critical examination of functionalism.
Functionalism in Philosophy of Mind was an attempt to address some serious objections
to Behaviorism, while retaining some insights of behaviorism. Functionalists see themselves as
combining the best features of physicalism and behaviorism, without the drawbacks of either.
According to functionalism (or token-identity theory), the defining feature of any type of
mental state is the set of causal relations it bears to: (1) environmental effects on the body; (2)
other types of mental states, and (3) bodily behavior.
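The three-part causal characterization above can be sketched (my own illustration, not from the readings) as a toy state machine, in which "pain" just is whatever state is produced by tissue damage, produces distress, and causes wincing:

```python
# Functionalism as a toy state machine: a mental-state type is fixed by
# its causal role -- (1) what environmental inputs produce it, (2) what
# other mental states it produces, (3) what behavior it causes -- not by
# the material that realizes it.

class Mind:
    def __init__(self):
        self.states = set()   # currently tokened mental states
        self.behavior = []    # bodily outputs

    def receive(self, stimulus):
        # (1) environmental effects on the body
        if stimulus == "tissue damage":
            self.states.add("pain")
            self.update()

    def update(self):
        if "pain" in self.states:
            self.states.add("distress")      # (2) other mental states
            self.behavior.append("wince")    # (3) bodily behavior

m = Mind()
m.receive("tissue damage")
print(sorted(m.states))  # ['distress', 'pain']
print(m.behavior)        # ['wince']
```

On this picture, anything that implements the same causal role, whether neurons or silicon, counts as being in pain; this multiple realizability is what separates functionalism from mind-brain identity theory.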
What are Churchland’s objections to functionalism?
Alan Turing wants to address the question, Can machines think? How are we to
determine the answer to this question? (“The original question, ‘Can machines think?’ I believe
to be too meaningless to deserve discussion.”)
Turing vocabulary: discrete state machines (such as, approximately, a digital computer);
the imitation game; the Turing test (a version of the imitation game); the learning
machine.
Describe the imitation game. Describe a Turing machine. What is it supposed to show?
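A minimal sketch of the Turing-test version of the imitation game (my own illustration, with invented stub respondents rather than a real program): an interrogator exchanges typed questions with a human and a machine and must judge, from the transcripts alone, which is which.

```python
import random

# Toy Turing-test setup. The interrogator sees only typed replies, so
# any judgment must rest on conversational behavior, not appearance.

def human_respondent(question):
    return "I'd have to think about that."

def machine_respondent(question):
    return "I'd have to think about that."  # imitating the human

def run_test(interrogator, questions):
    respondents = {"A": human_respondent, "B": machine_respondent}
    transcripts = {label: [(q, f(q)) for q in questions]
                   for label, f in respondents.items()}
    guess = interrogator(transcripts)  # interrogator names the machine
    return guess == "B"                # was the machine identified?

def guessing_interrogator(transcripts):
    # With indistinguishable transcripts, only chance remains.
    return random.choice(["A", "B"])

caught = run_test(guessing_interrogator, ["Do you ever feel pain?"])
```

When the machine's answers are indistinguishable from the human's, the interrogator succeeds only at chance; doing no better than chance over many runs is Turing's operational replacement for the question "Can machines think?"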
Turing advocates a position that we today would call Strong Artificial Intelligence (SAI).
This is the view that appropriately programming a machine is sufficient for giving it intentional
states. According to SAI, computers could literally be said to understand and have other
cognitive states (desire; intentional states). “In strong AI, because the programmed computer has
cognitive states, the programs are not mere tools that enable us to test psychological explanations;
rather, the programs are themselves the explanations.”
Notice that SAI is a version of functionalism.
The Computational Model of the Mind (accepted by Functionalists):
So Turing replaces the question Can machines think? with the question: Are there
imaginable digital computers that would do well in the imitation game? He believes that a
properly programmed computer could think, could understand.
Turing addresses six objections to his view (pp. 299-303). Review these objections.
John Searle characterizes Strong AI as: (1) the view that an appropriately programmed computer
is (or could be) a mind, capable of cognitive states (or mental states); (2) the view that programs
are (or could be) psychological explanations--explanations of cognitive states.
Describe Searle’s Chinese Room Argument. Why is this an argument against Strong AI?
Searle points out that one cannot say that understanding is taking place in the Chinese
Room: the person in the room cannot be said to understand the story in Chinese. (This becomes
clear when we compare the two routines, one conducted in Chinese and the other in English.)
English.) This is true for any such algorithms because they are syntactically individuated (on the
basis of “grammar”—rules for correlating symbols) but intentional states such as understanding
are semantically individuated—words have meanings based on what they “refer to,” what they
are “about.”
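The syntactic/semantic contrast can be illustrated (my own sketch, not Searle's, with simplified rulebook entries) by a lookup-table "room" that returns Chinese answers purely by pattern matching, with no access to what any symbol means:

```python
# A toy Chinese Room: the rulebook correlates input symbol strings with
# output symbol strings by shape alone. The program never consults
# meanings, yet from outside the exchange may look like understanding.

RULEBOOK = {
    "你好吗？": "我很好。",      # when THESE squiggles arrive, emit THOSE
    "你会思考吗？": "当然会。",
}

def room(symbols: str) -> str:
    # Purely syntactic step: match the input's shape, emit the correlated
    # shape. Nothing here "refers to" or is "about" anything.
    return RULEBOOK.get(symbols, "请再说一遍。")

print(room("你好吗？"))  # 我很好。
```

The point of the illustration: `room` is individuated syntactically (by string shapes), whereas understanding is individuated semantically; so producing the right outputs does not by itself show understanding.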
Searle subscribes to non-reductive physicalism, which holds that high-level features and
processes are not necessarily reducible to lower-level features and processes; mental states are
physical phenomena and results of material processes; nevertheless, they cannot be neatly
reduced to brain states.
William Lycan disagrees with Searle’s view of the computational model. He defends
functionalism. In “Robots and Minds,” he makes the case that digital computers, or machines
like digital computers that conform to the computational model are, at least potentially, capable
of intelligence.
What does Lycan mean when he says that computers have intelligence (in the sense of
information-sensitivity) but no judgment?
Etc.