Epistemology
believe that one of your sources of belief is unreliable, you have a defeater
for all beliefs based on the source. You cannot defeat this defeater and
regain justification for these beliefs by means of epistemically circular
arguments. Yet, there are still disturbing cases in which you do not doubt
the reliability of a source; you are just ignorant of it. The present account
allows you to gain knowledge about the reliability of the source too easily.
Thus there seems to be no completely satisfactory solution to the problem of
epistemic circularity. This suggests that the ancient problem of the criterion
is a genuine skeptical paradox.
Table of Contents
2. Epistemic Failure
8. Sensitivity
2. Epistemic Failure
The problem of epistemic circularity derives from our intuition that there is
something wrong with such reasoning. Many philosophers have expressed doubts that
this intuition is completely explained by dialectical considerations. The fault
seems to be epistemic rather than just dialectical. Richard Fumerton (1995)
and Jonathan Vogel (2000) argue that we cannot gain knowledge and
justified beliefs by means of epistemically circular reasoning. They conclude
that any account of knowledge or justification that allows this must be
mistaken. Their target is reliabilism in particular. Fumerton writes:
You cannot use perception to justify the reliability of perception! You
cannot use memory to justify the reliability of memory! You
cannot use induction to justify the reliability of induction! Such attempts to
respond to the skeptic's concerns involve blatant, indeed pathetic,
circularity. Frankly, this does seem right to me and I hope it seems right
to you, but if it does, then I suggest you have a powerful reason to conclude
that externalism is false. (1995, 177)
If we understand the right criteria of truth as reliable sources of belief--sources that mostly produce true beliefs--we arrive at the following
formulation of the problem of the criterion:
(1) We can know that a belief based on source K is true only if we first know
that K is reliable.
(2) We can know that K is reliable only if we first know that some beliefs
based on source K are true.
Assumption (1) is a formulation of the strengthened KR principle. Together
with assumption (2), it leads to skepticism: we cannot know which sources
are reliable nor which beliefs are true. To be sure, (2) does not require us to
know that beliefs based on K are true through K itself; we can rely on some
other source. However, (1) posits that this other source can deliver
knowledge only if we first know that it is reliable, and (2) that, in order to
know this, we need to know that some beliefs based on it are true. In order
to know this, in turn, we once again have to rely on some third source, and
so on. Because we cannot have an infinite number of sources, sooner or
later we have to rely on sources already relied on at some earlier point. We
are thus reasoning in a circle, and circular reasoning is unable to provide
knowledge.
The circle we are caught in is not epistemic. It is a straightforwardly logical
circle. It is clear that a logical circle does not produce knowledge. Such a
circle is nowhere connected to reality. Thus in trying to avoid epistemic
circularity, we are caught in a more clearly vicious circle--a logical circle.
It is natural to think that epistemic circularity is the lesser evil. If we only
have the alternatives of making knowledge too easy or impossible, most
philosophers would surely choose the former. This may be the motivation
behind currently popular reliabilist and evidentialist epistemologies that
deny higher-level requirements for knowledge, but are these really our only
options? Could we not reject assumption (2) instead of (1)?
turn be inferred from some other knowledge. One might insist instead that
our knowledge about our own reliability is basic or noninferential. This
would break the skeptic's circle.
Thomas Reid (1983, 275) seems to be the traditional advocate of this
position. He takes it as a first principle that our cognitive faculties are
reliable. He states that first principles are self-evident: we know them
directly without deriving them from some other truths (257). How is it
possible to know directly a generalization that is only contingently true? It
may be easy to see how we can directly know a generalization, such as "All
triangles have three angles," which is a necessary truth: we can simply see
its truth through a priori intuition. However, we cannot simply see that our
faculties are reliable. The faculty of a priori reason does not give us
knowledge of contingent generalizations.
Reid (259-260) posits that there is a special faculty for knowing the first
principles, which he calls common sense. Thus, common sense tells us that
our faculties are reliable. However, it cannot give us knowledge unless we
first know that it is reliable. How can we know this? The only available
answer seems to be that we also know this through common sense.
(Bergmann 2004, 722-724) There is a serious problem if we assume the
skeptic's strengthened KR principle. This entails that we can know that
common sense is reliable only if we first know that it is reliable. We must
know it before we know it, which is impossible. We avoid this result if we go
back to Cohen's original KR principle (Van Cleve 2003, 50-52), but then we
face epistemic circularity once again.
According to the Reidian view, knowledge about the reliability of our
faculties is basic, and the source of it is common sense. However, common
sense delivers this knowledge only if it is itself known to be reliable. If we
accept Cohen's original KR principle and deny the skeptic's requirement
that this knowledge be prior to other knowledge delivered by common
sense, we allow that common sense delivers simultaneously basic
knowledge about the reliability of our faculties and about the reliability of
common sense itself. This is a coherent position.
However, this Reidian view allows one kind of epistemic circularity.
Although it is not quite the same kind as in the track-record argument, it
allows that we can know that a faculty is reliable by using that very same
faculty. The only difference is that this is basic knowledge and not
knowledge based on reasoning. It seems that this view makes knowledge
about reliability even easier than before.
If we wanted to determine whether to trust a guru, we could construct an
inductive argument based on the premises about the truth of what he says
and leading to the conclusion that he is reliable. If our belief in the premises
is itself based on what he tells us, our argument is epistemically circular. It
seems that this cannot be a way of gaining knowledge about his reliability in
that it would be intuitively too easy. It would be even easier to base our
belief in his reliability on his simply saying that he is reliable. If we cannot
gain knowledge through epistemically circular reasoning, how could we gain
it by taking this more direct route?
8. Sensitivity
It is possible to reject the KR principle without allowing epistemic
circularity. One might simply deny--as Wittgenstein does--that we have any
knowledge about our own reliability. One could defend this view--as
Wittgenstein does not do--on the basis of the sensitivity condition of
knowledge. Analyses of knowledge as defended by Fred Dretske (1971)
and Robert Nozick (1981) set the following necessary condition for S's
knowing that p:
Sensitivity: if it were not true that p, S would not believe that p.
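This condition can be sketched in the subjunctive-conditional notation standard in discussions of tracking analyses; the symbols below are the usual ones from that literature, not drawn from this text:

```latex
% Sensitivity, written with the counterfactual conditional \Box\!\!\rightarrow:
% if p were false, S would not believe that p.
\neg p \;\Box\!\!\rightarrow\; \neg B_S(p)
% Nozick's full tracking analysis adds the adherence condition:
% if p were true, S would believe that p.
p \;\Box\!\!\rightarrow\; B_S(p)
```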
According to Cohen (2002, 316), our beliefs about the reliability of our
sources of belief do not satisfy this condition. Assume that we form a belief
in the reliability of sense perception on the basis of epistemically circular
reasoning. According to the sensitivity condition, we cannot know on this
basis that sense perception is reliable if we would still believe, on this basis,
that it is reliable even if it were not. It seems that this is exactly what is
wrong with such arguments: they would cause us to believe that a source is
reliable even if it were not. A guru would tell us that he is reliable even if he
were not.
The sensitivity condition concerns the possible worlds in which our belief is
false but which are otherwise closest to the actual world. Alvin Goldman
(1999, 86) suggests that the relevant alternative to the hypothesis that
visual perception is reliable is that visual perception is randomly unreliable.
If this is the case in the closest possible worlds in which our belief in the
reliability of visual perception is false, it may be that we can, after all, know
reasoning. Such reasoning would rely on those very same beliefs for which I
have lost the justification. It is unable to defeat reliability defeaters.
(Bergmann 2004, 717-720)
We can thus readily explain the failure of epistemically circular arguments
in cases in which there are serious doubts about reliability. They fail to
remove these doubts. However, as the case of Roxanne shows, dialectical
ineffectiveness and the failure to defeat defeaters cannot be the only things
that are wrong with epistemic circularity. Neither Roxanne nor anybody
else doubts her gas gauge; she is just ignorant about its reliability. She has
no knowledge or justified beliefs about the matter. Our intuition is that she
cannot gain knowledge or justified beliefs about the reliability of the gauge
through the process of bootstrapping.
from the (possibly) false claim that I could come to know the latter on the
basis of my knowing the former, since my basis for knowing the former
involves presupposing the latter (by taking my sense experience and
memory at more or less face value, for instance).
Closure principles are employed in both skeptical and anti-skeptical
arguments. The skeptic points out that if one knows an ordinary common
sense proposition (such as that one has hands) to be true, and knows that
this proposition entails the falsity of a skeptical hypothesis (such as that one
is a handless brain in a vat, all of whose experiences are hallucinatory), one
could know the falsity of the skeptical hypothesis, in virtue of knowledge
being closed under known entailment. Since one cannot know the falsity of
the skeptical hypothesis (or so the skeptic maintains), one also must not
know the truth of the common sense claim that one has hands.
Alternatively, the anti-skeptic might insist that we do know the truth of the
common sense proposition, and hence, in virtue of the closure principle, we
can know that the skeptical hypothesis is false. Although the closure
principle is sometimes used by anti-skeptics, some view the rejection of
closure as the key to refuting the skeptic.
Less schematically, this says that if one knows one thing to be true and the
known claim logically entails a second thing, then one knows the second
thing to be true. This principle has obvious counter-examples. A
complicated theorem of logic is entailed by anything (and hence by any
proposition one knows), but one may not realize this and may thus fail to
believe (or even grasp) the theorem. Since one must at least believe a
proposition in order to know that it is true, we see that one may fail to know
something entailed by something else that one knows. Additionally, even if
a proposition is entailed by something one knows, if one comes to believe
the proposition through some epistemically unjustified process, one will fail
to know the proposition (since one's belief of it will be unjustified). For
instance, if one knows that one will start a new job today and then comes to
believe that one will either start a new job today or meet a handsome
stranger based on the testimony of her astrologist, then perhaps she will fail
to know the truth of the entailed disjunction.
b. The Closure of Knowledge Under Known Entailment
It is more plausible that knowledge is closed under known entailment:
If S knows that p, and knows that p entails q, then S knows that q.
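In the notation of epistemic logic, this principle is closely related to the distribution axiom K of normal modal logics; writing Kp for "S knows that p", the two can be compared as follows:

```latex
% Closure of knowledge under known entailment:
(Kp \land K(p \rightarrow q)) \rightarrow Kq
% This is a propositional rearrangement of the distribution axiom K:
K(p \rightarrow q) \rightarrow (Kp \rightarrow Kq)
```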
As stated, however, the principle seems vulnerable to counter-examples
similar to the ones just discussed. The subject might fail to put his
knowledge that p together with knowledge that p entails q and thus fail to
infer q at all. One might know that she has ten fingers and that if she has ten
fingers then the number of her fingers is not prime, but simply not bother to
go on to deduce and form the belief that her number of fingers is not prime.
Alternatively, although the subject could have come to believe q by inferring
it correctly from something else that she knows (since she is aware of the
entailment), she instead might have come to believe q through some other,
epistemically unjustified, process.
How can we capture the idea that one can add to one's store of knowledge
by recognizing and assenting to what is entailed by what one already
knows? This formulation seems suitably qualified: if S knows that p, and
knows that p entails q, and S comes to believe q by correctly deducing it
from p, then S knows that q.
The closure principles discussed thus far are instances of single premise
closure. For instance, one's knowledge that a given particular premise is
true, when combined with a correct deduction from that premise of a
conclusion, seems to guarantee that one knows the conclusion. There are
also multiple premise closure principles. Here is an example:
If S knows that p and knows that q, and S comes to believe r by correctly
deducing it from p and q, then S knows that r.
That is, if I know two things to be true and can deduce a third thing from
the first two, then I know the third thing to be true. There is good reason to
be dubious of multiple premise closure principles of justification, such as
If S is justified in believing that p and justified in believing that q, and S
correctly deduces r from p and q, then S is justified in believing that r.
Lottery examples reveal the difficulty. Given that there are a million lottery
tickets and that exactly one of them must win, it is plausible (though not
obvious) that for any particular lottery ticket, I am justified in believing that
it will lose. So I am justified in believing that ticket one will lose, that ticket
two will lose, and so forth, for every ticket. But if I know that there are a
million tickets, and I am justified in believing each of a million claims to the
effect that ticket n will lose and I can correctly deduce from these claims
that no ticket will win, then by closure I would be justified in concluding
that no ticket will win, which by hypothesis is false. Justified belief is
fallible, in that one can be justified in believing something even if there is a
chance that one is mistaken; conjoin enough of the right sort of justified but
fallible beliefs and the resulting conjunction will be unlikely to be true, and
thus unjustified.
If knowledge, like justified belief, is fallible (say, only 99.9% certainty is
required), then multiple premise closure principles for knowledge will fail
as well. One could be sufficiently certain for knowledge about each of a
thousand claims ("I will not die today"; "I will not die tomorrow"; ...; "I will
not die exactly 569 days from today"; etc.), but not sufficiently certain of the
conjunction of these claims ("I will not die on any of the next thousand
days") in order to know it, even though it is jointly entailed by those
thousand known claims (and thus true). The fallibility of knowledge is far
more controversial than the fallibility of justified belief, however.
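The arithmetic behind this point can be sketched with a short calculation. The 99.9% threshold comes from the text; treating the thousand claims as probabilistically independent is an added simplifying assumption:

```python
# Each of 1000 claims ("I will not die on day n") is individually
# near-certain, yet their conjunction is not, assuming the claims
# are probabilistically independent (a simplifying assumption).
each = 0.999   # illustrative knowledge threshold from the text
n = 1000       # number of conjoined claims

conjunction = each ** n
print(f"P(single claim)  = {each}")
print(f"P(all {n} claims) = {conjunction:.3f}")  # roughly 0.37
```

A thousand claims each held to 99.9% certainty thus yield a conjunction that is more likely false than true, which is why multiple premise closure fails for fallible knowledge.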
Similarly, closure might be thought to hold for different types of knowledge,
such as a priori knowledge (i.e. knowledge not gotten through sense
experience, to oversimplify a bit). If one knows a priori that p, and knows a
priori that p entails q, then one knows a priori that q. Intuitively, it seems
that if one knows the premises of an argument a priori and is able to validly
deduce a conclusion from those premises, one would know the conclusion a
priori as well. This last point is on weaker ground, however, as discussed
in Section 5b.
1.
I do not know that I am not a handless brain in a vat.
2.
I know that if I have hands, then I am not a handless brain in a vat.
3.
If I know one thing, and I know that it entails a second thing, then I also know
the second thing. (Closure)
4.
Thus, I do not know that I have hands. (From 2 and 3, if I knew I had hands I
would know that I am not a brain in a vat, in contradiction with 1).
If one really knew the ordinary common sense claim to be true, one could
deduce the falsity of the skeptical claim from it and come to know that the
skeptical claim is false (by closure). The fact that one cannot know that the
skeptical claim is false (as per the first premise) demonstrates that one does
not in fact know that the common sense proposition is true either. (See
also Contemporary Skepticism).
But one person's modus tollens (the inference from "if p then q" and "not-q" to
the conclusion "not-p") is another person's modus ponens (the inference
from "if p then q" and "p" to the conclusion "q"), as we can see from an
anti-skeptical argument of the sort associated with G.E. Moore. (See Moore 1959).
1.
I know that I have hands.
2.
I know that if I have hands, then I am not a handless brain in a vat.
3.
If I know one thing, and I know that it entails a second thing, then I also know
the second thing. (Closure)
4.
Thus, I know that I am not a handless brain in a vat. (From 1, 2 and 3)
From the fact that one knows that she has hands and this is incompatible
with a skeptical hypothesis under which her hands are illusory, one can
infer, and thus come to know (if closure is correct), the falsity of the
skeptical hypothesis.
The closure principle can be used even in defense of a dogmatic rejection of
any recalcitrant evidence that counts against something that one takes
oneself to know. The argument runs as follows (adapted from Harman
1973):
1.
I know that my car is parked in Lot A. (Assume)
2.
I know that if my car is parked in Lot A, and there is evidence that my car is
not parked in Lot A (say, testimony that the car has been towed), then the
evidence is misleading. (Analytic, since evidence against a truth must be misleading)
3.
Thus, I know that any evidence that my car is not parked in Lot A is
misleading. (Closure)
4.
I know that there is evidence that my car is not parked in Lot A. (Assume)
5.
Thus, I know that this evidence (testimony that my car was towed) is
misleading. (Closure)
6.
If I know that a piece of evidence is misleading, I ought to disregard it.
(Assume)
7.
Thus, I ought to disregard any evidence that my car is not parked in Lot A.
(From 5 and 6)
1.
I know that I have mental property M (say, the thought that water is wet).
(Assume privileged access to one's own thoughts)
2.
I know that if I have mental property M (the thought that water is wet), then I
meet external conditions E (say, living in an environment containing water).
(Externalism with respect to content)
3.
If I know one thing, and I know that it entails a second thing, then I know the
second thing. (The principle of the closure of knowledge under known entailment).
4.
Thus, I know that I meet external conditions E (namely, that I live in environs
containing water). (From 1, 2 and 3)
in Jerusalem (q). We may suppose that one can correctly deduce q from p.
Even so, since one's belief that p tracks the truth of p and counts as
knowledge and one's belief that q does not do so, knowledge fails to be
closed under known entailment. One may know that p, and know
that p entails q (and come to believe the latter by correctly deducing it from
the former), and yet fail to know that q.
Nozick's account has at least two virtues. One is that the tracking analysis of
knowledge is plausible. The other is that the rejection of closure allows us to
reconcile the following two claims, both of which seem plausible but had
seemed incompatible: (1) we do know many common sense propositions,
such as that I have hands, and (2) we do not know that skeptical
hypotheses, such as that I am a handless, artificially stimulated brain in a
vat, are false. One desideratum of a theory of knowledge is that it refutes
skepticism while accounting for the plausibility and persuasiveness of the
skeptic's case against common sense knowledge claims. Both the skeptic
and the Moorean anti-skeptic come up short here. The skeptic must deny
our common sense knowledge claims and the Moorean must maintain that
we can know the falsity of skeptical hypotheses. As long as we accept the
closure principle, whether we are skeptics or anti-skeptics, we cannot
maintain both that we know common sense propositions and that we do not
know that the skeptical hypotheses are false, since we know that the
common sense propositions entail the falsity of the skeptical propositions.
Knowledge of the truth of the common sense claims would, if knowledge is
closed under known entailment, guarantee our knowledge that skeptical
hypotheses are false. Citing our failure to know that skeptical hypotheses
are false, the skeptic applies modus tollens and infers that we must not
know the common sense propositions. The rejection of closure blocks this
move by the skeptic.
This is not to say that there are not plausible counterexamples to the
tracking account of knowledge. I may know my mother is not the assassin
since she was with me when the assassination took place. But
counterfactually, if she were the assassin, I would still believe she was not,
since after all I couldn't believe such a thing of my mother. My belief that
my mother is not the assassin fails to track the truth, since I would have
believed it even if it were false, but it seems quite plausible that I do know
listener naturally would conclude that the insult preceded the dismissal. For
more on this, see Grice 1989).
John Hawthorne (2005: 30-31) makes two points in reply. First, he says, it
is unclear what sort of Gricean mechanism could make it true but
conversationally inappropriate to utter "S knew that p and correctly
deduced q from p, but did not know that q." Second, an appeal of this sort
can at best explain why we do not utter certain true propositions, but not
why we actually believe their negations. Even if it is true that one's wife is
his best friend, it would be inappropriate for him to introduce her to
someone as his best friend. But the conversational mechanism at play here
could hardly be an explanation for why he believed that his wife was not his
best friend (even though she was). Why, if the denial of closure is true but
conversationally infelicitous, do so many not only not deny closure in
conversations but in fact believe that the closure principle is true?
One might reply that many people, even philosophers, are apt in some
situations to mistake what is conversationally appropriate for what is true
(as with conditional claims that have false antecedents), so an explanation
of why a true claim violates conversational norms might well explain why
people believe the negation of the claim.
e. Alternative Anti-Skeptical Strategies Need Not Reject Closure
There are alternative strategies for refuting skepticism that seem to have
many of the virtues of the tracking account of knowledge, but do not entail
the falsity of closure principles. Contextualism, for example, says that
knowledge attributions are sensitive to context, in that a subject S might
know a proposition p relative to one context, but simultaneously fail to
know that p relative to another context. The contextual factors to which
knowledge attributions are taken to be sensitive include things like whether
a particular doubt has been raised or acknowledged and the importance of
the belief being correct.
In an ordinary context, where skeptical scenarios have not been raised, the
standards for knowledge are quite low, but, in contexts in which skeptical
doubts have been raised, such as an epistemology class, standards for
knowledge have been raised to levels that typically cannot be met. One
might know relative to the everyday context that she has hands, but fail to
know this relative to the skeptics context, because a skeptical scenario has
been raised and she cannot rule it out.
Or a true belief with a certain level of justification might count as knowledge
as long as it is not terribly important that the belief be correct, but would no
longer be knowledge if the stakes were raised. One might know that the
bank will be open on Saturday after confirming that the bank has Saturday
hours, even if one has not checked whether the bank has changed its hours
in the past two weeks, as long as no great harm will befall one if it turns out
one is wrong. But if financial ruin will befall one were a check not deposited
before Monday, then one's justification might need to be stronger before it
would be correct to say that one knows the bank is open Saturday.
The contextualist then can reconcile the intuitions that it is sometimes
correct to attribute to someone knowledge of everyday common sense
propositions, despite her inability to rule out skeptical propositions, and
that we are sometimes correct in refusing to attribute knowledge of the
falsity of a skeptical scenario when the subject is unable to rule out such
scenarios. But the contextualist can do this while accepting at least some
version of closure. The contextualist says that epistemic closure holds
within an epistemic context, but fails inter-contextually. For instance, in the
everyday, low epistemic standards context, one knows that one has hands
and anything that one can correctly deduce from this claim, such as that one
is not a handless being deceived into thinking that one has hands. In the
context with much higher epistemic standards, one knows neither that one
is not a handless, artificially stimulated brain in a vat, nor (by an application
of the closure of knowledge under known entailment) that one has hands.
Closure will fail only when it extends across contexts. For instance, if one
were to cite one's knowledge that one has hands (in the ordinary context) as
grounds for saying in the heightened context that one knows that the brain
in a vat hypothesis is false (as the Moorean might), one would illegitimately
apply the closure principle. The skeptic's citing one's failure to know the
falsity of the skeptical hypothesis (in the heightened context) as entailing
that one does not know the common sense proposition (in the ordinary
context) would be a similar misuse of the closure principle.
predict) the same observational data, then that observational data does not
support (or justify belief of) one of the theses over the other. With this
principle and the premise that the two theses are incompatible but
observationally equivalent, we can deduce that our apparent perception of
our hands does not justify us in believing that we have hands.
The argument is greatly oversimplified, but the outline of the skeptical
argument from underdetermination now ought to be clear. The argument
does not explicitly employ any closure premise, so the rejection of closure
would seem not to undermine the argument in any straightforward way.
One could always argue that the appeal of the argument from
underdetermination implicitly relies on the closure principle or that the
argument from underdetermination is objectionable on other grounds.
Skeptical arguments from underdetermination, however, seem as plausible
as other skeptical arguments and their plausibility seems not to depend on
the plausibility of any of the closure principles.
Infinite regress arguments for skepticism also do not straightforwardly
appeal to closure. A regress argument that no belief is epistemically justified
(and hence that no belief counts as knowledge) runs as follows. We assume
that all justification is inferential. That is, every justified belief is justified by
appeal to some other justified belief. The basis for this claim might be the
nature of argumentation. One is justified in believing a conclusion if one is
justified in believing the premises that support the conclusion. If the
conclusion is one of the premises, then the argument is question-begging, or
circular, and not rationally persuasive. But if every justified belief can be
justified only by inferring it from some further justified belief and there
cannot be an infinite regress of justified beliefs, then it must be that no
beliefs are justified. (A foundationalist about justification, on the other
hand, while agreeing that an infinite regress of justified beliefs is
impossible, insists that there are justified beliefs, and hence that some
beliefs are justified non-inferentially, or in other words, that some justified
beliefs are basic or foundational). The claim that no justified belief is self-justifying does not entail any closure principle of justification or knowledge,
so the argument seems to be independent of closure and thus not vulnerable
to arguments against closure principles. (See also Ancient Skepticism).
The proponent of the tracking account of knowledge need not answer all
forms of the skeptical argument with the same tools, so even if some
skeptical arguments do not depend on the closure principle, the tracking
analysis might provide the resources for countering the skeptical arguments
from underdetermination or regress.
cases, to consider the possibility that one does not in fact know that r,
rather than simply inferring that the testimony is misleading. Learning the
truth of the antecedent that there is strong evidence against r may
undermine the justification for believing the conditional itself, thus making
the conditional resistant to modus ponens. Knowledge of the conditional
depends on one's knowing that the antecedent is false. Finding evidence in
favor of the antecedent--even if in fact it is misleading--may weaken one's
justification for the conditional, such that one no longer knows the
conditional to be true.
This blocking of the dogmatist argument does not involve denying closure,
though. The reason the modus ponens inference fails to go through is
that the conditional is a "junk" conditional; one can know the
conditional to be true only if one does not know the antecedent to be true,
and the closure principle applies only if one simultaneously knows both the
conditional and its antecedent to be true.
Another explanation that does not require the denial of closure is due to
Michael Veber (Veber 2004). He says that even if the dogmatist argument is
sound, the principle "If a piece of evidence E is known by S to be
misleading, S ought to disregard it" ought not to be endorsed on grounds of
human fallibility. We are frequently enough wrong in taking ourselves to
know what we in fact do not know that following such a principle would
lead one to disregard evidence that is not misleading. There is nothing
wrong with the principle, provided it is correctly applied; but due to the
difficulty or impossibility of correctly applying it, adopting such a policy is
contraindicated.
believe that cans are made of aluminum, since both of these states bear a
causal relation to aluminum, rather than twaluminum. (See Burge 1988 and
Heil 1988). Whatever makes it the case that S thinks that p (instead of q)
will also make it the case that S thinks "I am thinking that p" (instead of "I am
thinking that q"). Coupled with a reliabilist theory of knowledge, these
second-order beliefs count as knowledge since they cannot go wrong and
the thesis of privileged access is reconciled with externalism.
Enter McKinsey's Paradox. We assume that we know content externalism to
be true and that it is compatible with a suitably robust thesis of privileged
access to thought contents. We may now reason as follows:
1.
I know that I am in mental state M (say, the state of believing that water is
wet). (Privileged Access)
2.
I know that if I am in mental state M, then I meet external conditions E (say,
living in an environment containing water). (Content Externalism)
3.
If I know one thing and I know that it entails a second thing, then I know the
second thing. (Closure of knowledge under known entailment)
4.
Thus, I know that I meet external conditions E. (From 1, 2 and 3)
The knowledge attributed in the premises is a priori in the broad sense that
includes knowledge gotten through introspection and/or philosophical
reflection. That knowledge is not gained via empirical investigation of the
external world. The conclusion follows by an application of the closure
principle. What is paradoxical is that, given closure, it seems that one can
know the truth of an empirical claim about the external world (say, that
one's environment contains water or that it contains aluminum rather than
twaluminum) simply by inferring it from truths known by reflection or
introspection. This argument bolsters the incompatibilist's case: since it is
only by investigation of the world that one can know that one meets a
particular set of external conditions and since the premises (including
closure) entail that this fact can be known on the basis of knowledge not
dependent on investigation of the world, either the privileged access
premise or the externalist thesis must be false (provided that the closure
principle is correct).
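The shape of the reasoning can be put schematically. The notation here is mine, not McKinsey's: $K_a$ for knowledge that is a priori in the broad sense just described, $m$ for the claim that I am in M, and $e$ for the claim that the relevant external conditions obtain.

```latex
% Premises: privileged access and a known externalist entailment.
%   K_a m         -- I know a priori that I am in M
%   K_a(m -> e)   -- I know a priori that being in M entails the external conditions
\[
  K_a\,m, \qquad K_a(m \rightarrow e)
  \;\;\Longrightarrow\;\;
  K_a\,e \quad \text{(by closure of knowledge under known entailment)}
\]
```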
b. Davies, Wright, and the Closure/Transmission Distinction
There are many responses to this argument. Some reject externalism, some
(like McKinsey) deny privileged access, and some compatibilists (Brueckner
1992) argue that even if externalism is known to be true, nothing as specific
as the second premise of the argument could be known a priori. But
perhaps the most influential attempt to solve the paradox is due to Martin
Davies (1998) and Crispin Wright (2000). They argue that even though
arguments like McKinsey's are valid and their premises are known to be
true, this knowledge is not transmitted across the entailment to the
conclusion. At first blush, it seems like Davies and Wright are rejecting
closure, which is certainly one way to deal with the paradox. Davies and
Wright accept closure, though, and only reject a related but stronger
epistemological principle that says that knowledge is transmitted over
known entailment.
Davies and Wright are distinguishing between the closure of knowledge
under known entailment and what they take to be a common misreading of
it. The closure principle says that if one knows that p and knows
that p entails q, then one knows that q, but the principle is silent on what
one's basis or justification for q is and does not claim that the basis for q is
the knowledge that p and that p entails q. The principle of the transmission
of knowledge under known entailment, however, states that if one knows
that p, and knows that p entails q, then one knows q on that basis: what
enables one to know that p and that p entails q also enables one to know
that q. Davies and Wright accept the closure principle but deny the
transmission principle, arguing that it fails when the inference
from p to q is, although valid, not cogent. Here cogency is understood as an
argument's aptness for producing rational conviction.
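The two principles can be stated side by side. This is a sketch in my own notation, not Davies's or Wright's: $K$ is the knowledge operator, and the subscript $b$ marks the basis on which something is known; only the second principle constrains that basis.

```latex
% Closure: knowledge is merely distributed across known entailment;
% nothing is said about the basis on which q is known.
\[ \bigl(Kp \wedge K(p \rightarrow q)\bigr) \rightarrow Kq \]
% Transmission (stronger): whatever basis b yields knowledge of the
% premises also yields knowledge of q on that same basis.
\[ \bigl(K_b\,p \wedge K_b(p \rightarrow q)\bigr) \rightarrow K_b\,q \]
```

Davies and Wright accept the first schema while rejecting the second.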
One way an argument could be valid but fail to be cogent is that the
justification for the premises presupposes the truth of the conclusion. If I
reason from the premise that I have a driver's license issued by the state of
North Carolina (based on visual inspection of my license and memory of
having obtained it at the North Carolina Department of Motor Vehicles) to
the conclusion that there exists an external world, including North Carolina,
outside my mind, it is plausible that my justification for the premise (taking
sense experience and memory at face value) presupposes the truth of the
conclusion. If this is so, then it seems that the premise could not be my
basis for knowing the conclusion. Anyone in doubt about the conclusion
would not accept the premise, so although the premise entails the
conclusion, the premise could not provide the basis for rational conviction
that the conclusion is true. Such an argument is valid, but not cogent. It
would not be a counterexample to closure, for anyone who knows the
premise and the entailment also must know the conclusion, but it is a
counterexample to the transmission principle, since the conclusion would
not be known on the basis of the knowledge of the premise.
According to Davies and Wright, the McKinsey argument is valid but not
cogent because knowledge of the conclusion is presupposed in one's
supposed introspective knowledge of the premises. Thus, it is a
counterexample to transmission, but poses no threat to closure. The
non-empirical access to the externally individuated thought contents is
conditional on the assumption that certain external conditions obtain (such
as that one's environs include aluminum rather than twaluminum), which
can only be confirmed empirically. Thus one may not reason from the
non-empirical knowledge claimed in the premises to non-empirical knowledge
of an empirical truth that enjoys presuppositional status with regard to the
premises. That one has a thought about water may entail that one bears a
causal relation to water in one's environment (if externalism is correct) and
one may know the former and the entailment only if one knows the latter,
but one may not cogently reason from the premise to the conclusion, since
the inference begs the question. Anyone who doubts the conclusion of the
McKinsey argument in the first place would not (or at least should not -- the
presuppositions of our premises are not always recognized as such) be
moved to accept the premises that entail it.
Consider then the following principle about a priori knowledge:
(APK) If a subject knows something a priori and correctly deduces (a
priori) from it a second thing, then the subject knows a priori the second
claim.
We can describe this principle in two equivalent ways. It is the principle of
closure of a priori knowledge under correct a priori deduction and,
alternatively, it is a specific instance of the principle of transmission of
knowledge under known entailment, since it claims that the a priori basis
because the justification for p presupposes q. One knows that q (on some
independent basis), so there is no counterexample to closure, but q will not
be known on the basis of p, so the transmission principle is false.
Clarifying the closure principle as a principle about the distribution of
knowledge across known entailment, rather than as a principle about the
transmission or acquisition of knowledge, divorces the closure principle, to
some extent, from the initial intuitive support for it, which is the idea that
we can add to our store of knowledge (or justified belief) by accepting what
we know to be entailed by propositions we know (or justifiably believe). On
this understanding of closure, knowledge and justified belief are distributed
across known entailment even when drawing the inference in question
could not add to one's store of knowledge or justified belief.
such a thing? (The coining of the term 'lottery proposition', and the
discovery that this phenomenon is widespread, is due to Jonathan Vogel.)
The apparently inconsistent triad is (i) one knows the ordinary proposition,
(ii) one fails to know the lottery proposition, and (iii) closure. One may
eliminate the inconsistency by denying closure on the sort of grounds that
Dretske and Nozick cite. Plausibly, one's belief of so-called ordinary
propositions tracks the truth, while ones belief of lottery propositions does
not. If Cheney were not Vice-President, one would not believe he was, but
had Cheney died in the past thirty seconds, one still would believe he was
Vice-President.
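The tracking condition invoked in this paragraph is a sensitivity requirement. In counterfactual notation (my rendering: $\Box\!\!\rightarrow$ for the subjunctive conditional, $B$ for belief, $K$ for knowledge):

```latex
% S knows that p only if: were p false, S would not believe that p.
\[ Kp \rightarrow (\neg p \mathrel{\Box\!\!\rightarrow} \neg Bp) \]
```

The belief that Cheney is Vice-President satisfies this condition; the belief that Cheney did not die in the last thirty seconds does not.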
One might bite the skeptical bullet and insist that one really does not know
that Cheney is Vice-President. One of a more anti-skeptical bent might
maintain that one can really know the lottery propositions, such as that
Cheney did not die in the last thirty seconds. Such a resolution has
considerable costs, but denying closure is not among them.
Alternatively, one might argue for a contextualist handling of the problem
that does not require the denial of closure or biting the skeptical or
anti-skeptical bullet.
Coherentism in Epistemology
Coherentism is a theory of epistemic justification. It implies that for a belief
to be justified it must belong to a coherent system of beliefs. For a system of
beliefs to be coherent, the beliefs that make up that system must "cohere"
with one another. Typically, this coherence is taken to involve three
components: logical consistency, explanatory relations, and various
inductive (non-explanatory) relations. Rival versions of coherentism spell
out these relations in different ways. They also differ on the exact role of
coherence in justifying beliefs: in some versions, coherence is necessary and
sufficient for justification, but in others it is only necessary.
This article reviews coherentism's recent history, and marks off
coherentism from other theses. The regress argument is the dominant
anti-coherentist argument, and it bears on whether coherentism or its chief
rival, foundationalism, is correct. Several coherentist responses to this
argument are then considered.
1. Introduction
a. History
British Idealists such as F.H. Bradley (1846-1924) and Bernard Bosanquet
(1848-1923) championed coherentism. So, too, did the philosophers of
science Otto Neurath (1882-1945), Carl Hempel (1905-1997), and W.V. Quine
(1908-2000). However, it is a group of contemporary epistemologists that
has done the most to develop and defend coherentism: most notably
Laurence BonJour in The Structure of Empirical Knowledge (1985) and
Keith Lehrer in Knowledge (1974) and Theory of Knowledge (1990), but
also Gilbert Harman, William Lycan, Nicholas Rescher, and Wilfrid Sellars.
Despite this long list of names, coherentism is a minority position among
epistemologists. It is probably only in moral epistemology that coherentism
enjoys wide acceptance. Under the influence of a prominent interpretation
of John Rawls's model of wide reflective equilibrium, many moral
philosophers have opted for a coherentist view of what justifies moral
beliefs.
b. Describing Coherentism
Epistemological coherentism (or simply "coherentism") needs to be
distinguished from several other theses. Because it is not a theory of truth,
coherentism is not the coherence theory of truth. That theory says that a
proposition is true just in case it coheres with a set of propositions. This
theory of truth has fallen out of favor in large part because it is thought to be
too permissive: an obviously false proposition such as "I am a coffee
cup" coheres with this set of propositions: "I am not a human", "I am in the
kitchen cupboard", "I weigh 7 ounces". Even contemporary defenders of
coherentism are usually quick to distance themselves from this theory of
truth.
2. The Regress Argument
beliefs do not owe their justification to any other beliefs from which they are
inferred. According to the infinitist option, the series of relations wherein
one belief derives its justification from one or more other beliefs goes on
without either terminating or circling back on itself. According to one
construal of the coherentist option, the series of beliefs does circle back on
itself, so that it includes, once again, previous beliefs in the series.
Standard presentations of the Regress Argument are used to establish
foundationalism; to this end, they include further arguments against the
infinitist and coherentist options. These arguments are the focus of the
second stage. Let's focus on the two most popular arguments against
coherentism which figure into the Regress Argument; and let's continue to
construe coherentism as saying that beliefs are justified in virtue of forming
a circle. The first argument makes a circularity charge. By opting for a
closed loop, the charge is that coherentism certifies circular reasoning. A
necessity coherentist will be charged with making circular reasoning
necessary for justified belief. A sufficiency coherentist will be charged with
making circular reasoning part of something (namely, coherence) that is
sufficient for justified belief. But circular reasoning is an epistemic flaw, not
an epistemic virtue. It is neither necessary, nor part of what is sufficient, for
justified belief; in fact, it precludes justified belief.
The second argument takes aim at the claim that coherence is necessary for
justification. Since a belief is justified only if, through a chain of other
beliefs, we ultimately return to the original belief, coherentism is
committed, despite the initial appearance, to the claim that the original
belief is justified, at least in part, by itself. This is supposed to follow from
the coherentist corollary that if the chain of supporting beliefs did not
eventually double back on the original belief, then the original belief would
not be justified. But the claim that my belief that tomorrow is Wednesday is
justified (even in part) by itself is mistaken; after all, it is derived, via
inference, from other beliefs. Call this the self-support charge.
b. Coherentist Responses
Coherentists need not resist the first stage of the regress argument since
that stage, recall, just generated the candidate views. Their responses focus
on the second stage. That coherentism is the best of the three candidates is
this, namely that beliefs are therefore justified at least in part because they
stand in support relations to themselves. In slogan form: reflexive relations
justify.
So what about the self-support charge? Does making this charge require
assuming that symmetrical support relations cannot justify? We need to be
careful. While the claim that the support relation is transitive and the claim
that supporting relations link back to a previously linked belief jointly imply
that the relevant belief supports itself, coherentists are not thereby stuck with
the claim that this belief is justified in virtue of supporting itself. Arguably,
it is open to the coherentist to hold, instead, that this belief is justified in
virtue of the circular structure of the support relations, while denying that it
is justified in virtue of supporting itself. Still, this may not be enough, since
the coherentist might still have to maintain that justified belief is
compatible with self-support.
is wincing, the belief that Moe is squealing, the belief that Moe is yelling
"that hurts", and the belief that Moe is in pain. None of these beliefs
logically implies any of the others. Nor does the conjunction of any three of
them imply the fourth. Despite the lack of entailments, though, the beliefs
together seem to constitute a system of beliefs that is intuitively quite
coherent. So coherence can be earned by relations weaker than entailment.
ii. The Propositional Relation: Inductive Relations
Many coherentists have required, in addition to logical consistency,
probabilistic consistency. So if one believes that p is 0.9 likely to be true,
then one would be required to believe that not-p is 0.1 likely to be true. Here
probability assignments appear in the content of what is believed.
Alternatively, a theory of probability might generate consistency constraints
by imposing constraints on the degrees of confidence with which we believe
things. So take a person who believes p, but is not fully confident that p is
correct; she believes p to a degree of 0.9. Here 0.9 is not part of the content
of what she believes; it measures her confidence in believing p. Consistency
might then require that she believe not-p to a degree of 0.1. In one of these
two ways, the axioms of probability might help set coherence constraints.
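The degrees-of-confidence constraint described above can be sketched as a simple check. This is an illustrative sketch only; the function name and tolerance are mine, not part of any coherentist's formal apparatus.

```python
def credences_coherent(cr_p: float, cr_not_p: float, tol: float = 1e-9) -> bool:
    """Minimal probabilistic-consistency constraint from the axioms of
    probability: degrees of confidence in p and in not-p must each lie
    in [0, 1] and must sum to 1."""
    in_range = 0.0 <= cr_p <= 1.0 and 0.0 <= cr_not_p <= 1.0
    return in_range and abs((cr_p + cr_not_p) - 1.0) <= tol

# The believer described above: confidence 0.9 in p requires 0.1 in not-p.
print(credences_coherent(0.9, 0.1))  # coherent
print(credences_coherent(0.9, 0.3))  # incoherent: confidences sum to 1.2
```

A fuller treatment would check the other probability axioms (for example, that one's confidence in a disjunction of incompatible claims equals the sum of one's confidences in the disjuncts), but the two-claim case already captures the consistency idea in the text.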
Besides being probabilistically consistent with one another, coherent beliefs
gain in justification from being inferred from one another in conformity
with the canons of cogent inductive reasoning. Foundationalists, at least
moderate foundationalists, have just as much at stake in the project of
identifying these canons. It is common to identify distinct branches of
inductive reasoning, each with its own canons: for example,
inference to the best explanation, enumerative induction, and various forms
of statistical reasoning. For present purposes, what is crucial in all of this is
that beliefs inferred from one another in conformity with the identified
canons (whatever the exact canons are) boost coherence, and therefore
justification.
iii. The Propositional Relation: Explanatory Relations
To supplement the requirements of logical, and probabilistic, consistency,
coherentists often introduce explanatory relations. This allows them to
concur that the system consisting in the beliefs that Moe is wincing, Moe is
squealing, and Moe is yelling "that hurts" coheres with the belief that Moe is
in pain. In addition, it allows us to disqualify the set consisting in my beliefs
that Joan is sitting, 2+2=4, and tomorrow is Wednesday on the grounds
that these propositions do not in any way explain one another.
There are two ways that a proposition can be involved in an explanatory
relation: as being what is explained, or as being what does the explaining.
These are not exclusive. The fact that there are toxic fumes in the room is
explained by the fact that the cap is off the bottle of toxic liquid. The fact
that there are toxic fumes in the room, in turn, explains the fact that I am
feeling sick. So I might believe that I am feeling sick, draw an explanatory
inference and believe that there must be toxic fumes in the air, and then
from that belief draw a second explanatory inference and believe that the
cap must be off the bottle. In this case, that there are toxic fumes in the air
serves to both explain why I am sick and in turn serves as the explanatory
basis for the cap being off the bottle. Often what drives coherentists to think
that a coherent set of beliefs must consist in more than two beliefs is that
the needed explanatory richness requires more than two beliefs.
Disagreement enters when coherentists say exactly what makes one thing a
good explanation of another. Among the determinants of good explanation
are predictive power, simplicity, fit with other claims that one is justified in
believing, and fecundity in answering questions. The nature and relative
weight of these, and other, determinants is quite controversial. At this level
of detail, coherentists, even so-called explanationists who stress the central
role played by explanatory considerations, frequently diverge.
Not all coherentists include explanatory relations among the determinants
of coherence. See Lehrer (1990) for example. Those that do include them
usually give one of two kinds of accounts for why believed propositions that
do a good job of explaining one another increase coherence and hence boost
justification. One kind of account claims that when beliefs do this, they
make each other more likely to be true. On this kind of account, explanatory
relations are construed as ultimately being inductive probabilifying
relations. On an alternative account, explanatory relations are irreducible
ingredients of coherence, ingredients that are simply obvious parts of what
contributes to coherence.
last night's party. However, the witnesses are unreliable about this sort of
thing. Moreover, their reports are made completely independently of one
another; in other words, the report of any one witness was in no way
influenced by the report of any of the other witnesses. According to Lewis,
the congruence of the reports establishes a high probability of what they
agree upon. (p. 246) The point is meant to generalize: whenever a number
of unreliable sources operate independently of one another, and they
converge with the same finding, this boosts the probability that that finding
is correct. This is so regardless of whether the sources are individual
testifiers, various sensory modalities, or any combination of sources. Items
that individually are quite unreliable and would not justify belief, when
taken together under conditions of independent operation and convergence,
produce justified beliefs.
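Lewis's point can be illustrated with a toy Bayesian computation. The specific numbers and the conditional-independence model are my illustration, not Lewis's.

```python
def posterior_after_reports(prior: float, true_pos: float, false_pos: float,
                            n: int) -> float:
    """Posterior probability of a claim after n independent agreeing reports.

    true_pos  -- P(a witness reports the claim | the claim is true)
    false_pos -- P(a witness reports the claim | the claim is false)

    Reports are assumed conditionally independent given the claim's truth,
    mirroring Lewis's requirement that no report influences any other.
    """
    num = prior * true_pos ** n
    den = num + (1.0 - prior) * false_pos ** n
    return num / den

# Each witness is only barely better than chance (0.6 vs 0.4): one report
# moves a 0.5 prior only to 0.6, but six agreeing reports push it past 0.9.
print(posterior_after_reports(0.5, 0.6, 0.4, 1))
print(posterior_after_reports(0.5, 0.6, 0.4, 6))
```

The multiplication of likelihood ratios is what does the work: individually weak, independent sources that converge can jointly yield a high posterior, which is the generalization Lewis draws from the witness case.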
This argument has been charged with several shortcomings. For one, it is
not clear that the argument, even if sound, establishes coherentism. The
argument appears to rest on an inference to the best explanation, one that
can be construed along foundationalist lines. So, for each source, S1 . . . Sn, I
am justified in believing S1 reports p, S2 reports p . . . Sn reports p.
According to foundationalists, these beliefs are justified without being
inferred from any other beliefs; they are basic beliefs. Then, inferring to the
best explanation, I come to believe p. This belief-that-p is a non-basic belief,
but since it rests on basic beliefs, the overall picture is a foundationalist one,
not a coherentist one.
Second, even on standard coherence views, it is not clear that the
reports-that-p cohere with one another. Logical coherence, both in the sense of
logical consistency and in the sense of mutual derivability, is in place; but
the explanatory relations that coherentists so often emphasize are not.
Third, it is controversial whether the argument is cogent. One issue here
concerns whether each source, taken individually, provides justification for
believing p. If each independently confers some justification, then one of
coherentism's rivals (namely, a version of foundationalism which says that
coherence can boost overall justification, but cannot generate justification
from scratch) can agree. On the other hand, if each source fails on its own
to confer any justification whatsoever, then the question remains: does this
kind of case show that coherence can create justification from scratch? If
be aware of them for them to justify, though perhaps one does need to be
aware of them if one is to show that one's belief is justified. Here, the
coherentist argument is often charged with conflating the notion of a
justified belief with the notion of being in a position to show that one's
belief is justified.
c. For Necessity: The Need for Justified Background Beliefs
Coherentists sometimes argue in the following way. First, they invoke a
prosaic justified belief about the external world, say, my present belief that
there is a computer in front of me. Then they claim that this belief is
justified only if I am justified in believing that the lighting is normal, that
my eyes are functioning properly, that no tricks are being played on me, and
so forth. For if I am not justified in making these assumptions, then my
belief that there is a computer in front of me would not be justified.
Generalizing, the claim is that our beliefs about the external world are
justified only if some set of justified background beliefs is in place.
This argument has also been challenged. The key claim--that my belief that
there is a computer in front of me is justified only if I am justified in
believing these other things--is not obvious. A young child, for example,
might believe there is a computer in front of her, and this belief might be
justified, even though she is not yet justified in believing anything about the
lighting, her visual processes, and so forth. If this is correct, then the most
the argument can show is that if someone has a justified belief that there is
a computer in front of them and if they believe that the lighting is normal,
that their eyes are functioning well, and so forth, then these latter beliefs
had better be justified. This, however, is consistent with foundationalism.
Moreover, some epistemologists argue that the psychological realization
condition might not be met. For it is implausible to think that I infer that
there is a computer in front of me from one or more of my beliefs about the
lighting, my eyes, and absence of tricksters. Nor do I infer any of these latter
beliefs from my belief that there is a computer in front of me. Maybe this
non-content requirement will do instead: my computer belief is
counterfactually dependent on my beliefs about the lighting, my eyes, and
so forth, so that if I did not have any of the latter beliefs, then I would not
have the computer belief either. This is far from obvious, though. Perhaps,
belief that cn is true, and the belief that at least one of c1 through cn is false.
Some epistemologists, for example, Foley 1992, have argued that the
historian is justified in believing this set of logically inconsistent claims.
And, all of these beliefs remain justified even if she knows they are logically
inconsistent.
In response, the coherentist might appropriate any of a number of views on
this Preface Paradox. For example, John Pollock (1986) has suggested a
simple reason for thinking that the historian's beliefs cannot be both
logically inconsistent and justified. Since a set of inconsistent propositions
logically implies anything whatsoever, adding a widely accepted principle
concerning justification will yield the result that one can be justified in
believing anything whatsoever. The principle is the closure principle:
roughly, it says that if one is justified in believing some set of propositions
and one is justified in believing that those propositions logically imply some
other proposition, then upon deducing this other proposition from the set
that one starts from, one is justified in believing that proposition.
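Pollock's observation that a set of inconsistent propositions logically implies anything whatsoever can be verified by brute-force truth tables. This is a minimal sketch; the helper function is mine, purely for illustration.

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """True iff every truth-value assignment to the atoms that satisfies
    all the premises also satisfies the conclusion. Premises and the
    conclusion are functions from an assignment dict to bool."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# An inconsistent set {p, not-p} entails an arbitrary unrelated q,
# vacuously: no assignment satisfies all the premises at once.
print(entails([lambda v: v["p"], lambda v: not v["p"]],
              lambda v: v["q"], ["p", "q"]))  # True
# A consistent premise set does not entail an unrelated q.
print(entails([lambda v: v["p"]], lambda v: v["q"], ["p", "q"]))  # False
```

Combined with the closure principle stated in the text, this is why the historian's inconsistent belief set would license justified belief in anything whatsoever, which is Pollock's reductio.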
A second set of cases involve beliefs that are logically inconsistent, although
this is unknown to the person who holds them. For example, while Frege
had good reason to believe that the axioms of arithmetic that he came up
with were consistent, Russell showed that in fact they were not consistent. It
is quite plausible that Frege's beliefs in each of the axioms were, though
logically inconsistent, nonetheless justified (see Kornblith 1989). BonJour
(1989) responded to this case, as well as the Preface Paradox, by agreeing
that both Frege's and the historian's beliefs are justified. He claimed that
logical consistency is overrated; it is, in fact, not an essential component of
coherence.
e. Against Necessity: Counterexamples
There appear to be straightforward counterexamples to coherentism.
Introspective beliefs constitute an important class of such cases. On a broad
interpretation of 'empirical' that encompasses sources of belief in addition
to the sensory modalities (one that contrasts with the a priori), introspective
beliefs count as empirical. Consider, then, my introspective belief that I am
in pain, or my introspective belief that something looks red to me. These
beliefs are not inferred from any other beliefs; I did not arrive at either of
them by inference from premises. They are not based on any other beliefs.
In response, Lehrer (1990, p. 89) has suggested that a coherentist might
identify one, or more, background beliefs, and claim that, though the
introspective belief is not inferred from these background beliefs, the
introspective belief is justified because it coheres with the background
beliefs. For example, to handle the introspective belief that something looks
red to me, Lehrer points to the background belief that if I believe something
looks red to me then, unless something untoward is going on, the best
explanation is that there is something that does look red to me.
It is not clear that this response works. Let R be the proposition that
something looks red to me. Lehrer's suggestion requires that coherence
holds between (i) R and (ii) if I believe R, then R. It is not clear, though, that
coherence does hold between these. Though they are logically consistent,
neither entails the other; moreover, they need not be inductively related to
one another; nor is it clear that either explains the other.
6. Looking Ahead
Intense discussion of coherentism has been intermittent. Two recent
defenses of the position, Laurence BonJour's 1985 The Structure of
Empirical Knowledge and Keith Lehrer's 1990 version of Knowledge,
significantly advanced the issues and triggered substantial literatures,
which mostly attacked coherentism. But undoubtedly, work on coherentism
has suffered from the fact that so few philosophers are coherentists. Even
BonJour, who did so much to reinvigorate the discussion, has abandoned
coherentism. See his 1999 paper for his renunciation. With the exception of
work being done by Bayesians, few epistemologists are presently working
on coherentism.
Epistemology would be better off if this were not so. For even if coherentism
falls to some objection, it would be nice if we had a better idea of exactly
what range of positions falls with it. Moreover, when it comes to the task of
clarifying the nature of coherence, an appeal can be made to many
foundationalists. While there might not be much motivation to develop a
position that one rejects, there is this: many foundationalists want to
Contextualism in Epistemology
In very general terms, epistemological contextualism maintains that whether
one knows is somehow relative to context. Certain features of contexts,
such as the intentions and presuppositions of the members of a
conversational context, shape the standards that one must meet in order
for one's beliefs to count as knowledge. This allows for the possibility that
different contexts set different epistemic standards, and contextualists
invariably maintain that the standards do in fact vary from context to
context. In some contexts, the epistemic standards are unusually high, and
it is difficult, if not impossible, for our beliefs to count as knowledge in such
contexts. In most contexts, however, the epistemic standards are
comparatively low, and our beliefs can and often do count as knowledge in
these contexts. The primary arguments for epistemological contextualism
claim that contextualism best explains our epistemic judgments (it explains
why we judge in most contexts that we have knowledge and why we judge in
some contexts that we don't) and that contextualism provides the best
solution to puzzles generated by skeptical arguments.
Table of Contents
Introduction
Explanatory Contextualism
Evidential Contextualism
Objections to Contextualism
Alternatives to Contextualism
Conclusion
1. Introduction
Epistemological contextualism has evolved primarily as a response to views
that maintain that we have no knowledge of the world around us. Taking
quite seriously the problems presented by skepticism, contextualists seek to
resolve the apparent conflict between claims like the following:
1. I know that I have hands.
2. But I don't know that I have hands if I don't know that I'm not a brain-in-a-vat
(that is, a bodiless brain that is floating in a vat of nutrients and that is
electrochemically stimulated in a way that generates perceptual experiences that are
exactly similar to those that I am now having in what I take to be normal
circumstances).
3. I don't know that I'm not a brain-in-a-vat.
These claims, when taken together, present a puzzle. (1), (2), and (3) are
independently plausible yet mutually inconsistent. That (1) is plausible
seems to require no explanation. (3) is plausible because it seems that in
order to know that I'm not a BIV, I must rule out the possibility that I am a
BIV. Yet the BIV and I have perceptual experiences that are exactly similar:
it seems to the BIV, just as it seems to me, that he has hands, that he is
sitting at his desk and in front of his computer, and so on. Accordingly, my
perceptual experiences give me no reason to favor the belief that I am not a
BIV over the belief that I am. Thus, since I have only my perceptual
experiences to go on, I cannot rule out the possibility that I'm a BIV.
Considerations like these contribute to (3)'s plausibility.
Moreover, it seems that I can't know that I have hands, and, in general,
that I can't know that I have any body at all, if I can't rule out the
possibility that I'm a bodiless BIV. This, then, contributes to the plausibility
of (2). It seems in addition that (2) always retains its plausibility, no matter
how high or low we set the standards for knowledge. Keith DeRose (1999a)
defends this claim by noting that it is always a comparative fact that my
epistemic position with respect to the claim that I'm not a BIV is just as
strong as my epistemic position with respect to the claim that I have hands.
If this is correct, then (2) is true across contexts, no matter what the
epistemic standards.
Yet in spite of the fact that they are independently plausible, (1), (2), and (3)
are mutually inconsistent; they cannot all be true. It seems, therefore, that
we must give up one of these claims. But which one should we give up, and
why?
In trying to answer these questions, contextualists maintain that 'know'
either is or functions very much like an indexical, that is, an expression
whose semantic content (or meaning) depends on the context of its use. For
example, the word 'here' is an indexical. I say, "Jaime is here," and what I
mean depends on where I am when I say it. If I'm in the conference room,
then I mean, all other things being equal, that Jaime is in the conference
room. 'I' is also an indexical: its meaning depends on the context of its use
and, in particular, on who is using it. When Jaime says, "I am in the
conference room," then he means, all other things being equal, that Jaime is
in the conference room. Yet when Julie uses 'I', she means something
different; Julie's 'I' refers to Julie.
If 'know' is an indexical, its semantic content (or meaning) will depend on
the context in which it is used. Furthermore, since context will affect the
semantic content of 'know', context will have an effect on the semantic
content of complex lexical items in which 'know' appears, for example, on
the semantic content of knowledge attributions like 'Jaime knows that he's
in the conference room'. Contextualists have put the point this way:
the truth-conditions of knowledge ascribing and knowledge denying
sentences (sentences of the form 'S knows that P' and 'S doesn't know that
P', and related variants of such sentences) vary in certain ways according to
the contexts in which they are uttered. What so varies is the epistemic
standards that S must meet (or, in the case of a denial of knowledge, fail to
meet) in order for such a statement to be true. (DeRose 1999a, p. 187)
Given this, contextualists maintain that (1), (2), and (3) do not in fact
conflict, even though it seems that they do. They suggest, first of all, that
some contexts set very high epistemic standards, standards according to
which knowledge requires a great deal. Contexts in which these high
standards are in play are typically those in which we are considering and
taking seriously certain skeptical hypotheses. For example, in order to know
anything at all about the world around us, these high standards might
require us to rule out the possibility that we are BIVs, or the possibility that
we are now dreaming, or the possibility that we are now being deceived by
an omnipotent but malevolent demon. Yet our perceptual experiences
afford us no evidence that would allow us to rule out these skeptical
possibilities, for if we were BIVs, for example, we would be having exactly
the same perceptual experiences that we're now having. Thus, we fail to
meet these high epistemic standards with respect both to the belief that I
have hands and to the belief that I'm not a BIV. (1) is therefore false in these
high-standards contexts while (3) is true. According to contextualists, then,
we should reject (1) in high-standards contexts. When we do so, we are no
longer faced with a conflict, for the conflict presents itself only when we
insist on the truth of each of the three mutually inconsistent claims.
Moreover, in rejecting (1) in high-standards contexts, contextualism gives
the skeptic his due, and takes seriously the compelling nature of skeptical
arguments.
Nevertheless, contextualists maintain that in most contexts, the epistemic
standards are comparatively low. Typically, these are ordinary contexts in
which we are considering no skeptical hypotheses. In such contexts, we can
have knowledge of the world around us without eliminating skeptical
possibilities like the BIV possibility. In order to know that I have a hand, for
example, I need eliminate only possibilities like those in which I have no
hands, or in which I have paws or claws instead of hands. Moreover, the
evidence provided by my perceptual experiences (the evidence that I obtain
by looking at my hands, or by hearing the sounds made when I clap them
together) does allow me to eliminate these possibilities. Thus, we can meet
the epistemic standards that are in place in low-standards contexts. (1) is
therefore true in these contexts while (3) is false. According to
contextualists, then, we should reject (3) in low-standards contexts. And
here again, in rejecting (3), we keep the conflict between (1), (2), and (3)
at bay. Let us turn now to subjunctive conditionals contextualism.
1. I know that I have hands.
2. But I don't know that I have hands if I don't know that I'm not a BIV.
3. I don't know that I'm not a BIV.
DeRose claims that in contexts in which the standards for knowledge are
unusually high, we should reject (1) and that the skeptic can truthfully say
in such contexts that I don't know that I have hands. In other contexts,
however, the epistemic standards are more relaxed and we can both reject
(3) and correctly say that I do know that I have hands.
DeRose's contextualist solution seeks to explain the plausibility of (3) by
utilizing resources provided by Robert Nozick. Specifically, DeRose's
solution appeals to the Subjunctive Conditionals Account (SCA) of the
plausibility of (3). According to SCA, "we have a very strong general, though
not exceptionless, inclination to think that we don't know that P when we
think that our belief that P is a belief we would hold even if P were false"
(DeRose 1999a, p. 193). DeRose calls the belief that P insensitive if it is one
that we would hold even if P were false. SCA's generalization thus
becomes: We are inclined to think that S doesn't know that P if we think
that S's belief that P is insensitive.
DeRose claims that even though this generalization does not represent our
ordinary standard for knowledge, there are contexts in which the skeptic
puts it into place as the standard (for example, by mentioning skeptical
possibilities like the possibility that you are now a BIV). The standard in
such contexts is the skeptical standard, according to which my beliefs must
be sensitive if they are to count as knowledge. When this standard is in
place, as it is in skeptical contexts, I fail to know that I'm not a BIV. For my
belief that I'm not a BIV is not sensitive: I would believe that I wasn't a BIV
even if I were a BIV. Moreover, since (2) is true in all contexts, it follows
that I don't know in skeptical contexts that I have hands. In this way,
DeRose's contextualism explains the plausibility of (3) and gives the skeptic
his due by arguing that there are contexts in which we should reject (1).
But DeRose wants to avoid the boldly skeptical conclusion that
I never know that I have hands, and he does this by arguing that in ordinary
contexts of knowledge attribution (contexts in which the skeptical standard
is not in place and in which the epistemic standards are comparatively low)
we can reject (3). In these contexts, the skeptical standard is not in place,
and our beliefs need not be sensitive in order to count as knowledge. Thus,
we can truthfully assert in ordinary contexts that I do know that I have
hands. And, since (2) is true in all contexts, it follows that I know in
ordinary contexts that I'm not a BIV. In this way, DeRose's contextualism
explains the plausibility of rejecting (3) and allows us to retain the
knowledge that we ordinarily take ourselves to have.
According to DeRose, the relevant difference between these contexts is that
the standards for knowledge are quite high in skeptical contexts but
comparatively low in ordinary ones. But what accounts for this difference?
DeRose recognizes that he must "explain how the standards for knowledge
are raised [by the skeptic]" (DeRose 1999a, p. 206) if his solution is to be
adequate. Essential to this explanation is DeRose's Rule of Sensitivity:
When someone asserts that S knows (or does not know) that P, the
standards for knowledge tend to be raised, if need be, to a level such that S's
belief that P must be sensitive if it is to count as knowledge. (DeRose 1999a,
p. 206)
He then provides the following explanation of how the skeptic raises the
standards.
In utilizing [puzzles like those generated by (1)-(3)] to attack our putative
knowledge of O [where O is a proposition that we ordinarily take ourselves
to know], the skeptic instinctively chooses her skeptical hypothesis, H, so
that it will have these two features: (1) We will be in at least as strong a
position to know that not-H as we're in to know that O, but (2) Any belief
we might have to the effect that not-H will be an insensitive belief.... Given
feature (2), the skeptic's assertion that we don't know that not-H, by the
Rule of Sensitivity, drives the standards for knowledge up to such a point as
to make that assertion true. ...And since we're in no stronger an epistemic
position with respect to O than we're in with respect to not-H (feature (1)),
then, at the high standards put in place by the skeptic's assertion of [(3)], we
also fail to know that O. (DeRose 1999a, pp. 206-7)
DeRose maintains, then, that the skeptic's assertion is the mechanism she
uses to raise the standards for knowledge. When the skeptic asserts that I
don't know that I'm not a BIV, the Rule of Sensitivity is invoked, and the
standards for knowledge are raised to such a level that my beliefs must be
sensitive if they are to count as knowledge. And since my belief that I'm not
a BIV is not sensitive (that is, since I would believe that I wasn't a BIV even
if I were a BIV), I do not know in skeptical contexts that I'm not a BIV.
Thus, given the truth of (2), I do not know in skeptical contexts that I have
hands (or, for that matter, anything that I ordinarily take myself to know).
Nevertheless, when no one mentions a skeptical hypothesis, the Rule of
Sensitivity is not invoked, and the epistemic standards allow beliefs to count
as knowledge even though they are not sensitive. This means that in
ordinary contexts, we are still in a position to know the things we ordinarily
take ourselves to know.
RA contextualists differ, however, in how they respond to skepticism. As we
shall shortly see, those who reject closure
deny one of the conflicting claims, namely, (2), the claim that I don't know
that I have hands if I don't know that I'm not a BIV. So, according to RA
contextualists who reject closure, there really is no conflict at all between
claims (1) and (3). But according to those who accept closure, there is such a
conflict. For, by the closure principle, in contexts in which I don't know that
certain skeptical alternatives do not obtain, I also fail to know certain things
about the external world.
In Section 4, we will see how RA contextualists who accept closure respond
to skepticism. In the following section, however, we will examine the
response provided by RA contextualists who reject closure.
b. Relevant Alternatives Contextualisms that Reject Closure
Consider the puzzle that is generated by the following argument:
1. I don't know that I'm not a BIV in a treeless world (that is, a BIVT).
2. If I know that there is a tree before me (call the italicized proposition T), and I
know that T implies my not being a BIVT, then I know that I'm not a BIVT.
3. So, I don't know that T (given that I know that T implies my not being a BIVT).
unable to distinguish from real trees" (Heller 1999b, p. 200). In this case,
we are inclined to say that S doesn't know that T even if she doesn't believe
that T in any of the closest not-T worlds. Here, even though worlds that are
cluttered with papier-mâché tree facsimiles are not among the closest not-T
worlds, they are close enough to the actual world to count as relevant. So
Heller claims that in at least some cases, if S is to know that p, she must not
believe that p in any of the close enough not-p worlds.
ERA provides the foundation for a relevant alternatives contextualism, for it
allows us to see different contexts as setting different epistemic standards.
Which not-p worlds count as epistemically relevant (that is, which not-p
worlds count as being close enough to the actual world) will vary from
context to context. And since ERA characterizes epistemic standards in
terms of relevant alternatives (that is, in terms of relevant not-p worlds), it
allows for the context-sensitivity of epistemic standards.
In light of this, Heller maintains, we may solve the skeptical puzzle by
concluding that (5) is false. Note first of all that there are no contexts in
which I know that I'm not a BIVT. Given ERA, if I am to know that I'm not a
BIVT, I must not believe that I'm not a BIVT in any of the closest BIVT
worlds. Thus, since I do believe that I'm not a BIVT in the closest BIVT
worlds, I don't know that I'm not a BIVT.
Nevertheless, there are contexts in which I do know that T. This is true
because we use "different worlds as relevant alternatives when considering
whether [I know that T] from those used when considering whether [I know
that I'm not a BIVT]" (Heller 1999b, p. 197). According to ERA, I know in C
that T because I don't believe that T in any of the not-T worlds that are close
enough to the actual world. (And we need consider only the close enough
not-T worlds because those worlds include the closest not-T worlds.) So
given that ERA is true, (5) is false: I can know that there is a tree before me
(and hence evade the skeptic's snare) even though I don't know that I'm not
a BIVT. We can therefore solve the skeptical puzzle by giving up the closure
principle.
Any solution to the skeptical puzzle that denies the truth of (5) must explain
why it seems to us that (5) is true. In providing this explanation, Heller
argues that (5) seems true because some contexts conform to the demands
1. I know that I have hands.
2. If I don't know that I'm not a BIV, then I don't know that I have hands.
3. I don't know that I'm not a BIV.
salient, and I can know on the basis of my reasons that this is a zebra.
However, in skeptical contexts, contexts in which someone does underscore
the chance that I believe erroneously, that chance will be salient. In these
contexts, my attention will have been focused on the chance that I am
wrong, and the alternative that this is a cleverly painted mule will be
relevant. Since I cannot eliminate that alternative, I do not know that this is
a zebra.
Cohen suggests that his relevant alternatives contextualism allows us to
solve skeptical puzzles like those that focus on zebras and cleverly painted
mules. This is because his version of the relevant alternatives theory is
formulated in terms of evidence, and such puzzles involve beliefs for which
we can have evidence. But Cohen suggests that radical skeptical paradoxes
involve beliefs for which we can have no evidence: "radical skeptical
hypotheses are immune to rejection on the basis of any evidence" (Cohen
1988, p. 111). As it is, then, Cohen's relevant alternatives contextualism
seems ill equipped to resolve radical skeptical paradoxes.
To overcome this difficulty, Cohen adjusts his version of the relevant
alternatives theory so that it takes into account beliefs for which I can have
no evidence. He claims that for some such beliefs it is epistemically rational
for me to hold them even though I possess no evidence for them. He calls
beliefs of this sort intrinsically rational beliefs. Among the intrinsically
rational beliefs is my belief that I'm not a BIV. According to Cohen, it is
rational for me to believe that Im not a BIV even though I have no evidence
for that belief.
Taking into account intrinsically rational beliefs, Cohen amends the internal
criterion of relevance. First, he says that
it is reasonable for a subject S to believe a proposition q just in case S
possesses sufficient evidence in support of q, or q is intrinsically rational.
(Cohen 1988, p. 113)
He then provides the following amended version of the internal criterion, or
ICa:
Even though Cohen now admits that I can have evidence for my belief that
I'm not a BIV, he still thinks that there are beliefs for which I can never have
evidence. He formulates a new radical skeptical paradox in terms of such
beliefs. Cohen asks us to imagine a creature that is a BIV but will never
have evidence that it is. Call such a creature a BIV*. Now, my belief that I'm
not a BIV* is a belief for which I will never have evidence. We can formulate
the following new paradox in terms of that belief.
1. I know that I have hands.
7. If I don't know that I'm not a BIV*, then I don't know that I have hands.
8. I don't know that I'm not a BIV*.
Since this paradox involves a skeptical hypothesis for which I can never
have evidence, the idea that I can have evidence for my belief that I'm not a
BIV* should not trouble Cohens solution to this new paradox.
But given that Cohen has abandoned the relevant alternatives framework,
just what is his solution to the BIV* paradox? He notes first of all that my
belief that I'm not a BIV* can be intrinsically rational, or what he now
calls non-evidentially rational. Once again, S's belief that p is
non-evidentially rational if it is epistemically rational for S to believe that p even
though S has no evidence for that belief. Furthermore, Cohen now suggests
that
S knows that p if and only if her belief that p is epistemically rational to
some degree d, where epistemic rationality has both an evidential and a
non-evidential component, and where d is determined by context. (see
Cohen 1999, pp. 63-69, 76-77)
Suppose, then, that I have a certain amount of evidence for my belief that I
have hands, and that my belief that I have hands is therefore evidentially
rational to degree de. Suppose too that my belief that I'm not a BIV* is
non-evidentially rational to some degree dne. Cohen claims that "the
non-evidential rationality [of my belief that I'm not a BIV*] is a component of
the overall rationality or justification for any empirical proposition" (Cohen
1999, p. 86, fn. 36). So we may suppose that my belief that I have hands is
epistemically rational to degree d*, where d* equals de plus dne.
Cohen now says that the degree to which a belief must be epistemically
rational if it is to count as knowledge is "determined by some complicated
function of speaker intentions, listener expectations, presuppositions of the
conversation, salience relations, etc." (Cohen 1999, p. 61). He suggests that
the listeners' cooperation is an essential part of this function. He also claims
that in ordinary contexts this complicated function specifies that a belief is
sufficiently epistemically rational if it is epistemically rational to degree do.
And d*, the degree to which my belief that I have hands is epistemically
rational, is greater than do. This means that I can know in ordinary
contexts that I have hands. "And since my having a hand entails my not
being a brain-in-a-vat [and a fortiori a BIV*], in those same [ordinary]
contexts, my belief that I am not a brain-in-a-vat is sufficiently rational for
me to know I am not a brain-in-a-vat" (Cohen 1999, p. 77). This allows him
to overcome the objection that I know a priori that I'm not a BIV, for "my
knowledge that I am not a brain-in-a-vat is based, in part, on my empirical
evidence (the evidence that I have a hand), and so is not a priori" (Cohen
1999, p. 76). In ordinary contexts, then, we accept propositions (1) and (7)
of the new radical skeptical paradox, but deny proposition (8).
But in skeptical contexts the complicated function specifies that a belief is
sufficiently epistemically rational only if it is epistemically rational to
degree ds. And d* is less than ds. This means that in skeptical contexts "my
belief that I have a hand is not sufficiently rational for me to know I have a
hand. In those same [skeptical] contexts, I have no basis for knowing I am
not a brain-in-a-vat" (Cohen 1999, p. 77). In skeptical contexts, we accept
propositions (7) and (8) but deny proposition (1). In this way, then, Cohen
solves the BIV* paradox while maintaining that closure holds.
7. Objections to Contextualism
In this section, we will discuss two leading objections to epistemological
contextualism. These are by no means the only criticisms that have been
leveled against contextualism, but they introduce themes that have
motivated additional objections as well as alternatives to contextualism. A
discussion of these objections, then, should provide a center of operations
for an exploration of objections to contextualism.
Palle Yourgrau (1983) argues that contextualism allows for dialogues such
as the following since it claims that the standards for knowledge shift from
context to context:
A: Is that a zebra?
B: Yes, it is a zebra.
A: But can you rule out its merely being a cleverly painted mule?
B: No, I can't.
A: So you admit you didn't know it was a zebra.
B: No, I did know then that it was a zebra. But after your question, I no
longer knew.
This dialogue strikes Yourgrau as absurd, for it seems that nothing changes
during the course of the conversation that would account for a change in B's
epistemic state: B is in just as good an epistemic position at the beginning of
the conversation as she is at the end of the conversation, and so it seems
that if B knows at the beginning, she should also know at the end. This
suggests that, contrary to epistemological contextualism, we cannot affect
shifts in the standards for knowledge simply by mentioning certain
skeptical possibilities.
Contextualists (see DeRose 1992) have replied to this sort of objection by
saying that once A introduces a skeptical possibility and thereby raises the
standards for knowledge, B can no longer truly say, "I did know then that it
was a zebra." Once the standards for knowledge have been raised, the truth
of any attribution of knowledge, including an attribution that is meant to
apply only at some time in the past, must be judged according to those
higher standards. Once the standards have been raised, B cannot both
attribute knowledge to herself in the past and deny knowledge to herself in
the present. She should now only deny herself knowledge; once the
standards have been raised, neither B's past self nor her present self knows
that this is a zebra.
Stephen Schiffer has leveled a different sort of criticism at epistemological
contextualism. Again, contextualism maintains that we attribute knowledge
relative to standards that shift from context to context. This is to say, in
effect, that when we say that B knows that this is a zebra, we mean that she
knows relative to such-and-such an epistemic standard that this is a zebra.
Putting this another way, contextualism maintains that our knowledge
attributions are implicitly relative. Yet the contextualist's response to
Yourgrau's objection suggests that B (or anyone else, for that matter)
might fail to realize that our knowledge attributions are implicitly relative
to epistemic standards that shift from context to context.
8. Alternatives to Contextualism
Objections like these push people away from epistemological contextualism
and toward theories that envisage epistemic standards that remain
invariant from context to context. Two such theories present themselves as
alternatives to contextualism. The first is skepticism, and the second is
Mooreanism. Both skeptics and Mooreans maintain that the standards for
knowledge do not shift. Yet while the skeptic claims that they are invariantly
quite high, the Moorean claims that the standards are invariantly
comparatively low.
The skeptic contends not only that there are no contexts in which we know
that we're not BIVs, but also that there are no contexts in which we know
that we have hands (see, for example, Unger 1975 and Stone 2000). This
response strikes some as implausible, however, since it does not accord with
the thought that there are many contexts in which we can and do know
things about the world around us.
The Moorean contends that there are never any insurmountable obstacles
to our knowing both that we have hands and that we're not BIVs.
Ernest Sosa's Moorean response begins with the rejection of Nozick's idea
that knowledge requires sensitivity (see Section 2). He argues instead that
knowledge requires safety, according to which S would believe that p only if
it were the case that p (see Sosa 1999, p. 142). Moreover, both my belief that
I have hands and my belief that I'm not a BIV are safe. Hence, both beliefs
can always count as knowledge. Sosa says that
after all, not easily would one believe that [one was not radically deceived]
without it being true. ... In the actual world, and for quite a distance away
from the actual world, up to quite remote possible worlds, our belief that we
are not radically deceived matches the fact as to whether we are or are not
radically deceived. (Sosa 1999, p. 147)
Yet if I can know across contexts that I'm not a BIV, why is it that it
sometimes seems as if I don't know that I'm not a BIV? Sosa maintains that
since we can easily mistake safety for sensitivity, and since the belief that
we're not BIVs is not sensitive, it can sometimes seem to us that our belief
that we're not BIVs is not safe and thus that we don't know that we're not
BIVs. Nevertheless, this is, according to Sosa, a mere appearance. For, since
our belief is safe, we can know across contexts that we're not BIVs and thus
adopt a Moorean response to our skeptical puzzles.
Tim Black also provides a Moorean response to these puzzles. Employing
Nozick's sensitivity requirement for knowledge, Black argues in "A Moorean
Response to Brain-in-a-Vat Scepticism" that the only worlds that are
relevant to whether or not S knows that p are those in which S's belief is
produced by the method that actually produces it. This means that BIV
worlds (possible worlds in which S is a BIV) are not relevant to whether S
knows that she's not a BIV. For BIV worlds are worlds in which her belief is
produced by a method other than the one that actually produces it. Thus,
since BIV worlds are not relevant to whether S knows things about the
external world, S can know both that she has hands and that she's not a
BIV. This, too, suggests a Moorean response to our skeptical puzzles.
9. Conclusion
We have now characterized epistemological contextualism in a way that
allows several different theories to count as versions of that position. We
have seen in particular that epistemological contextualists maintain that
certain features of conversational contexts shape the standards that one
must meet in order for one's beliefs to count as knowledge. Understood in
this way, a fairly wide range of views will count as versions of
epistemological contextualism. Different versions will disagree
over which features of conversational contexts can shape the epistemic
standards, and over how the relevant contextual features help to shape