
Epistemology

Epistemology is primarily the study of the nature of knowledge and justification, and the

relationship between the two. We have already considered the nature of a certain kind of

knowledge, what is called propositional knowledge. It is knowledge that such-and-such is the

case. It is called propositional knowledge because it is knowledge that..., and what comes after

the “that” is a proposition. Sentences are physical things: writings on some surface or produced

electronically, sounds spoken in the air, or signs made with some part of the body, say, the hands

as in American sign language. But these marks, sounds, or signs can have meaning. They can

express a proposition, which is something that is true or false. The same proposition can be

expressed in different languages. “The book is red” and the Spanish, “El libro es rojo,” express

the same proposition. They have the same meaning, and the proposition they express is either

true or false. There are various accounts of what is required to have propositional knowledge,

and we will return to that question shortly.

I. Knowledge

But there are other kinds of knowledge besides propositional knowledge. There is what

might be called skill knowledge, which is a kind of know how. A young child might know how

to ride a bicycle but not know that you should lean into a turn, not away from it, that if you are

falling to the right you should turn the front wheel to the right, etc. Coaches often have lots of

knowledge that you should do so-and-so to acquire, or perfect, some athletic skill, but they may

be quadriplegic and not know how to do what they are coaching others to do. Knowledge that... is

neither necessary nor sufficient to have knowledge how.

There is also knowledge by acquaintance. I didn’t know Alfred Hitchcock, but I know

that he produced such films as Rear Window, North by Northwest, Psycho, and The Birds, and

that he often made non-speaking cameo appearances in his films. I have propositional knowledge
about Hitchcock, but I don’t have, and never had, acquaintance knowledge of him, though his

actors did.

Finally, there is knowing what it’s like. A person who can only see in black and white

does not know what it’s like to experience red. She may come to have this sort of knowledge

because of some change in her brain or, as in the movie Pleasantville (1998), as the result of

having certain feelings. No one in the town Pleasantville had ever seen in color before Jennifer

Parker visited them. But after she had sex with one of the inhabitants of the town, some began to

see in color! There are many different emotions and feelings, and a person will not know what

they are like if she has not experienced them. For instance, a person who has never experienced

grief or joy will not know what it’s like to have those experiences even if she knows what sort of

behavior often accompanies what people call “grief” and “joy.”

While there are these other kinds of knowledge, philosophers have focused their attention

on propositional knowledge. While many philosophers think that such knowledge requires

justification, not all agree. Some think that knowledge is reliably produced true belief. The

thought is that perception can produce knowledge, and when it does it is because perception is a

reliable belief producing faculty or mechanism, that is, one that usually yields true beliefs. This

suggests that any source that usually produces true belief will result in knowledge if, on the

particular occasion, the belief is true and results from that reliable source. In order to have

knowledge, the person who has a true belief that was produced by a reliable source does not need

to know, or be justified in believing, that the source is reliable. All that’s required is that the

belief is true and it is in fact produced by a reliable source in an appropriate environment. Alvin

Plantinga is a famous philosopher of religion and is a reliabilist about knowledge in general and

about God in particular. He thinks we have a sensus divinitatis (a kind of special sixth sense) that

can yield knowledge of God if it is working properly in appropriate situations, even if we have

no reason to think that we have this reliable belief producing faculty.


But there are counterexamples to the view that knowledge is reliably produced true

belief. Suppose there is a person I’ll call Truenorth. He has a sort of internal compass that is

reliable in telling him which direction is north, and also south, east, and west. However, he does

not realize that he has this faculty; he has never put it to the test. And he has no reason to think

that others have, or lack, this ability to tell compass direction without an actual compass. One

day he is at a party and people are bragging that they can tell which direction is north, south, etc.,

when blindfolded. They decide to see if anyone can. They draw straws to see who will go first.

Truenorth draws the longest straw and goes first. They blindfold him, spin him around, and with

the blindfold still on ask him to point in the direction he believes is north. He does, and, of

course, gets it right because he has a reliable belief producing faculty regarding compass

direction. According to reliabilism, he knows that the direction he points to is north because he

has a reliably produced true belief that it is. Intuitively, he does not know this. He has no reason

to think that he has a reliable belief producing faculty about compass direction. He is no better

off when it comes to knowledge than the person who just guesses correctly that the direction to

which he pointed is north. This example shows that the view that knowledge is reliably produced

true belief does not offer sufficient conditions of knowledge.

Lots of other accounts of knowledge have been given that do not require justification.

Causal accounts of knowledge say that if a person has a true belief that has been caused “in the

right way,” she has knowledge. Perceptual knowledge seems to fit the bill. But there are

counterexamples to this suggestion, too, even in the realm of perception. Suppose you cannot

distinguish between two identical twins, Judy and Trudy, but you believe you are looking at Judy

and your belief is caused “in the right way” because you are actually looking at Judy (Feldman,

pp. 85-86). Then according to the causal theory of knowledge, you know that you are looking at

Judy. But you would believe the same thing if you were looking at Trudy instead, who is nearby

though you don’t know this. You are lucky that you happen to be looking at Judy.
Knowledge is incompatible with this sort of luck.

In the Overview I discussed the example called Stopped Clock where you believed the

time was 10:00 am on the basis of looking at a clock that had stopped exactly twenty-four hours

earlier. From past experience with this clock, you had every reason to think the clock was

working properly. So you had a justified true belief (JTB) that it was 10:00 am though,

intuitively, you did not know it was. Knowledge is incompatible with this sort of luck.

Another well-known example of this sort is called Sheep. In Sheep a jokester farmer

breeds poodles to look like sheep and he grooms them so that from the road that passes by his

field they look just like sheep. You drive by and see these animals in the field and form the

belief, “There are sheep in the field.” As it turns out, there are, but they are lying down out of

sight behind some large boulders in the far corner of the field. So you have a justified true belief

that there are sheep in the field. But, intuitively, you do not know that there are. Knowledge is

incompatible with this sort of luck where you reach the truth but by misleading evidence.

A third example is called Barn Facade. A movie company is making a film in the

country and part of the film has a couple driving on a road with lots of barns that are visible from

the road. But there are very few barns on any of the roads where the movie company is filming.

So they decide to construct barn facades that look just like real barns from the country road

they’ve picked out. You and your partner take a drive along this road on a Sunday when no

filming is taking place. At one point you pull over just for a rest and happen to stop in front of

the one real barn that is on this road. You form the belief that you are looking at a barn. It’s true

that you are and you are justified in thinking that you are looking at a barn because what you see

does look like a barn! However, the other 99 structures on the road that look like barns are barn

facades. You are just lucky that you have a true belief since you could easily have had a false one

by stopping somewhere else on this street.


Some people have the intuition that you don’t know that you are looking at a barn. The

example seems a lot like Sheep. You seem lucky in both cases in ending up with a true belief

given the situation. But others say that you do know that you are looking at a barn. Unlike in

Sheep, what makes your belief true (you are looking at a barn) is also the source of your

evidence. In Sheep, what makes your belief true (there are sheep lying down behind some large

boulders in the far corner of the field) is not the source of your evidence (the poodles that look

just like sheep are the source of your evidence).

The contrast between Sheep and Barn Facade suggests an account of what knowledge is.

You know some proposition, P, if and only if, you believe P, P is true, you are justified in

believing P, and the source of the evidence that provides your justification is the state of affairs

that makes P true (say, that you are looking at a real barn or real sheep). 1
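Stated a bit more compactly, in shorthand I am introducing here (my own notation, not anything from the authors cited): write K(S, P) for “S knows P,” let B, T, and J mark the belief, truth, and justification conditions, let E(S, P) be the evidence that provides S’s justification for P, and let TM(P) be the state of affairs that makes P true. The proposed account is then, roughly,

\[
K(S, P) \iff B(S, P) \wedge T(P) \wedge J(S, P) \wedge \mathrm{Source}(E(S, P)) = \mathrm{TM}(P).
\]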

In Barn Facade, if you are looking at a facade, you have evidence for its being a barn but

that is not produced by a barn, contrary to when you are actually looking at a barn. It seems that

evidence for something (say, that there are sheep in the field) can provide justification for

believing it but more is required for knowledge. Here the source of the evidence for it must be the

state of affairs that makes your belief true.

This theory has been used to argue that what is crucial to knowledge is not the absence of

some kind of luck, contrary to what seemed true in the other cases we considered. To see this

consider the following example. George took a couple of epistemology classes in college and

now enjoys creating real-world examples that are at least somewhat like the ones discussed in those classes. He lives

near the seashore, and ducks often land on the beach and leave footprints in the sand. But then

the tide comes in and washes the prints away. George builds some mechanical ducks and has

1 Adrian Heathcote defends this account of knowledge in several papers on the Gettier problem. So does Sven Bernecker.


them walk on the beach where the real ducks walked. He does this when, and only when, the surf

has just washed away the prints of the real ducks. The prints of the mechanical ducks are

indistinguishable from those of the real ducks. So people who walk by see either the prints of the

real ducks or the mechanical ones. In both cases they form the belief D = ducks recently walked

there. Their belief is true and given the shape of the prints, they are justified in believing it’s true.

What people mean by “safety” is this: a belief is safe if it is nearly always true when a person believes

it in similar circumstances. When looking at the duck prints of the mechanical ducks, people

have a justified, true, safe belief that ducks recently walked there. But they lack knowledge.

They are like the people who have a justified true belief that there are sheep in the field on the

basis of looking at cleverly groomed poodles. Those people also lack knowledge, but, unlike the duck-print observers, their beliefs are not safe.

Adding safety to the example where people are looking at the duck prints of the mechanical

ducks does not help. Though it seems that these people are not lucky to have a true belief, they

still lack knowledge. They aren’t lucky when seeing the prints of the real ducks (this is just

normal perception) and they aren’t lucky when viewing the prints of the mechanical ducks

because of the clever way George has tied their appearance to a slightly earlier appearance of

real duck prints. Their belief that ducks have recently walked there is reliable when they base it
on looking at the footprints of the mechanical ducks.

Ducks seems to show that knowledge does not require an anti-luck condition. On the

other hand, the view of knowledge that it supports has the implication that people in Barn

Facade know that they are looking at the one barn on a road populated by many facades. That

seems counterintuitive to many.

Defenders of the importance for knowledge of an anti-luck condition might argue that

Ducks does involve luck and so violates that condition. You are lucky to have a true belief that

there are sheep in the field when you base it on looking at poodles. You are lucky to have a true

belief that real ducks recently walked along the beach when you base it on looking at prints left
by mechanical ducks. That is true even though the belief is reliable and safe because of George’s

clever plan. What Ducks shows is that safety does not guarantee that luck is not present, not that

an anti-luck condition is not necessary for knowledge. In fact Sheep can become just like Ducks

if we have the farmer put out the poodles when, and only when, there are sheep in the field that

are out of sight. Then the tourists will have a justified, true, and safe belief but still lack

knowledge.

Most philosophers think that examples like Sheep and Stopped Clock show that having a

justified true belief (JTB) is not sufficient for knowledge even if they disagree about whether

knowledge is present in Barn Facade. A possible way to save a (JTB) account of knowledge is

to add a fourth condition that involves the notion of a defeater. A defeater is a true proposition

that would defeat the justification that the person has for what he believes IF he were aware of it.

He need not actually be aware of it. That the clock in the classroom has stopped is a defeater

because if you knew it had, you would no longer be justified in believing on the basis of looking

at that clock that it was, say, 10:00 am. The proposed account of knowledge is that you know

that P if and only if you have a (JTB) that P and there are no defeaters. So this proposed account

gives the intuitively right answer in Stopped Clock, namely, that you don’t know that it’s ten

o’clock. That’s because there is a defeater, namely, that you are looking at a stopped clock. In

the case of Sheep, the defeater is that you are looking at poodles, not sheep.

The Grabit case is a well-known example that poses a problem for the no-defeaters

account of knowledge. Suppose Mrs. Grabit has a son, Tom, that you know well. You see him

take a book off the shelf in the library, look around to see if anyone is watching, put the book in

his backpack, and then walk out of the library without stopping at the check-out desk. On this

basis you believe, and know, that Tom took a book out of the library without checking it out. On

my version of this case, I assume that you also know Mrs. Grabit well and know (1) that she is a

“helicopter” mom who is usually aware of where her son Tom is and (2) that she is a very honest
person who would not lie to protect her son if he did something wrong. So you are at least

justified in believing: IF Mrs. Grabit said that Tom was not in the library at a certain date and

time, THEN Tom was not in the library at that date and time.

Now assume, unbeknownst to you, that Mrs. Grabit unwittingly ate some hallucinogenic

mushrooms and had a vision of Tom at the bank with a clock showing the date and time in the

background. Assume, also, that some friend of Tom is trying to meet up with him and calls Mrs.

Grabit and asks if Tom is in the library. Still under the influence of the mushrooms, she says: No,

he is at the bank. What she says is a defeater because if you were aware of it (though in fact you

are not) you would no longer be justified in believing that Tom took the book from the library

without checking it out. If you were aware that she said what she did to Tom’s friend, and given

your justified background belief that Mrs. Grabit is someone who wouldn’t lie to protect Tom

and almost always knows where he is, that would defeat your justification for thinking he took

the book from the library without checking it out. So the no-defeaters account of knowledge

would imply that you don’t know Tom took the book from the library. That’s because what Mrs.

Grabit said to Tom’s friend is a defeater: had you known that she said he was NOT in the library

you would not be justified in thinking he was there and walked out without checking the book

out. That is counterintuitive. How could what a woman says based on a hallucination destroy

your knowledge, given that you have no idea that she said it? The JTB + no defeaters account of

knowledge implies that you would lack knowledge if there was a defeater. But, intuitively, in this

Grabit case you still know that Tom took the book even though what Mrs. Grabit said to Tom’s

friend is a defeater. So it’s possible to have knowledge even when there are defeaters. Having no

defeaters is not a necessary condition of knowledge.

A way to modify the no-defeaters proposal is to replace the initial no-defeaters clause

with this one: there is no defeater of the person’s justification that is not itself defeated, that is, there are no

undefeated defeaters. This condition is met in the Grabit case since what Mrs. Grabit said is in
turn defeated by the fact that it is based on an hallucination. That further fact undercuts any

defeating force that what Mrs. Grabit said would normally have. So there is no defeater left that

defeats your justification for thinking Tom took the book from the library. So on this account

you do know that Tom took the book. The account that says that knowledge is (JTB) + no

undefeated defeaters does not have the counterintuitive consequence that you don’t know that

Tom took the book from the library without checking it out.
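In the same spirit, and again only as a rough schematic of my own, the modified account can be written: S knows P just in case S has a justified true belief that P and there is no defeater d of S’s justification that is not itself defeated by some further true proposition d′:

\[
K(S, P) \iff \mathrm{JTB}(S, P) \wedge \neg \exists d \, \big[ \mathrm{Defeats}(d, J(S, P)) \wedge \neg \exists d' \, \mathrm{Defeats}(d', d) \big].
\]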

This modification of the no-defeaters account of knowledge is a promising proposal for the
elusive fourth condition (J, T, and B being the first three). But there will have to be some limit placed on

what a defeater defeater can be. For any candidate belief that P to amount to knowledge, P will

have to be true. But a true P will always be an ultimate defeater defeater. For instance, in

Stopped Clock it’s true that the clock reads the correct time. That will defeat the defeater that

says the clock is stopped. Yes, the clock is stopped (defeater) but it reads the correct time

(defeater defeater). So on the no-ultimate-defeaters account, you will know the time by looking

at a stopped clock. That can’t be right! Perhaps this result can be blocked by requiring that the

candidate defeater defeaters can’t include P itself, or even any proposition that logically entails P.

The last proposal I will consider for adding a fourth condition to (JTB) requires that the

subject’s belief not depend essentially on any falsehoods [see Feldman, pp. 36-37]. But doesn’t

your belief that Tom took the book from the library without checking it out depend essentially on

a false belief, namely, your belief that Mrs. Grabit did not say anything that implies that Tom

was not in the library at the relevant time? But she did. She said Tom was at the bank at that time

and so wasn’t in the library. And if you believed what she said, given your other justified

background beliefs about Mrs. Grabit’s usually knowing where Tom is and her being trustworthy

and honest, you would not be justified in believing that Tom took the book. So the requirement

that knowledge not depend essentially on any false beliefs seems to imply that you don’t know

that Tom took the book. That is the same counterintuitive result that counts against the simple
no-defeaters requirement for knowledge. A hallucinating Mrs. Grabit’s saying something to

someone in a private phone conversation is not going to destroy your knowledge that Tom took

the book from the library even if you believe that she did not say anything that implied Tom was

not in the library at that time. Neither of the proposed fourth conditions (no defeaters at all and

your belief does not depend essentially on any false beliefs) is necessary for knowledge. We can

have knowledge where there are defeaters and knowledge that rests essentially on a false belief.

It’s not easy to solve the Gettier Problem, but there are many other proposals that assume

that knowledge requires JTB and propose additional conditions, C. They claim that knowledge is

JTB + C. I have offered some important proposals for C. There is no widely accepted view about

what C is. The examples of Stopped Clock, Sheep, Barn Facade, and Grabit appear in almost all

discussions of the Gettier problem.

II. Justification

A. The epistemic goal

Knowledge is not the only notion that epistemologists investigate. Justification also

interests them whether or not it is a necessary condition of knowledge. Jurors need to form

justified beliefs about whether a defendant is guilty or not. Even if the defendant is not guilty,

they should fit their beliefs to the evidence presented at trial. So if the defendant has been

cleverly framed, they should believe he is guilty because that is what the evidence supports. A

plausible account of justification is that it concerns the responsible pursuit of truth, and the

responsible pursuit of truth requires fitting your beliefs to the evidence. That the aim of

justification is truth does not mean that you should try to maximize the number of true beliefs

you have. You might do that by memorizing people’s phone numbers listed in some phone book

or on some online site. Nor does it mean that you should minimize the number of false beliefs

you have. You might do that by just remembering your name and a few simple mathematical

truths like 2 + 2 = 4. Rather, it means for any proposition you are interested in believing, the goal
is to believe it if and only if it’s true. That’s the aim of justification, just as the aim of criminal

trials is to convict a defendant if and only if he is guilty. But justification is the responsible

pursuit of that aim, and whether you are a member of a jury, a scientist, or just living your

everyday life, the responsible pursuit of that aim requires you to fit your beliefs to the evidence. 2

You can fail to responsibly pursue the truth about something that you are considering and yet

luckily believe it, perhaps by a lucky guess. But you would not be justified in believing it even

though you achieved the epistemic goal of believing what’s true. Conversely, you may

responsibly pursue the truth and yet fail to believe what’s true. That’s what happens in The

Matrix and can happen in everyday life when we base our beliefs on misleading evidence, as can

happen when a defendant at trial is framed. We can have justified false beliefs even if we are

responsibly pursuing the truth.

B. Evidence

Justification requires fitting your beliefs to the evidence. But what is evidence? The sort

of evidence relevant to justification is not external to you; it is something you are aware of or

should be aware of. If Smith killed Jones with a handgun and then threw it in the river, his

fingerprints on the gun are not evidence that justifies you in believing he killed Jones if you are

not aware of them. Suppose you think Holley killed Jones. You then learn that he was not at the

crime scene when Jones was killed but you overlook the relevance of this piece of evidence.

Even if you have the evidence in the sense that you are aware of it, it won’t justify you in ruling

out Holley until you are aware of its significance. On the internalist view of evidence I have

been discussing, it must be some internal mental state and justification requires that you see the

significance of it to the proposition you are considering.

2 Keith Lehrer says essentially this in “The Coherence Theory of Knowledge,” Philosophical Topics 14, no. 1 (Spring 1986): 6-7.


There are various sources of the sort of evidence that can provide justification. Perception

is one source; what we are aware of through our eyes, ears, touch, etc., can provide evidence.

Introspection is another: what we are aware of through paying attention to how we are currently

feeling, what we are now thinking, what we now want, etc., can be a source of evidence.

Proprioception is awareness of where parts of our body are located and how they are moving.

That can be another source of evidence. Intuition as defined earlier can be a fourth source of

evidence. A rational or a priori intuition is defined as an intellectual seeming that a proposition is
true, where that seeming is based solely on your understanding of that proposition or of some
relevantly related concepts. If it seems true to you that the person in Sheep has a justified true
belief but lacks knowledge, and that seeming is based solely on your understanding of that
proposition, or of the relevantly related concepts “knowledge” and “justified true belief,” then you
are having a rational or a priori intuition that it’s true. Memory can also be a source of
justification. We can have memory

impressions that provide us with reason to believe certain propositions about what we have done

or where we have been, about what our teachers said, or about what some author wrote in a

book. The testimony of others that we are aware of is a further, very important source of

evidence. Much of what we know about the world is based on the testimony of our teachers,

parents, friends, and scientists, whom we’ve heard speak or whose words we’ve read in some newspaper, book, or

online source. Of course, memory is involved in most of our justification that comes from

testimony since it is often remembered testimony that provides justification.

The crucial thing to remember is that the evidence that is relevant to justification is

something we are aware of. However, sometimes we may lack justification because we should be

aware of some information but are not. This would be a case of epistemic negligence and can

prevent you from being justified in believing something on the basis of the evidence you possess.

The detective might have some evidence that Holley murdered Jones, but he may not be justified
in believing this if he has not gathered enough information about who the murderer is, even
though the scant evidence he does possess, taken by itself, supports believing it is Holley.

I have defended a view that is called internalism about justification which says that

justification must be based on evidence you are aware of or should be aware of. Externalism

about justification denies this and says that at least some justification does not depend on such

evidence. There are reliabilists about justification, as well as about knowledge, but some of the

same examples that count against knowledge reliabilism also count against justification

reliabilism. Truenorth does not know, nor is he justified in believing, that the direction he pointed

to was north even though his belief was reliably produced. And people in The Matrix can be

justified in believing what they do about what the world is like even though they lack knowledge

because their beliefs are false.

C. Reasoning from evidence

Suppose you have evidence in the relevant sense. What determines what you should

believe on the basis of that evidence? For example, suppose you see human footprints in the wet

sand at the seashore, but you don’t see anyone around. What should you believe: that someone

recently walked there OR that some jokesters bought rubber feet at a novelty store, attached them

to the end of long poles, rented a helicopter, and made footprints in the sand with the rubber feet

as the pilot flew over the beach? Obviously, you should believe that someone recently walked

there. But why? There are two standard replies. The first is that the best explanation for the

footprints is that someone recently walked there, and you should believe what is the best

explanation of your evidence. This would involve appeal to what is called Inference to the Best

Explanation (IBE). The second standard reply appeals to what is called Induction. There are

various statements of the principle of induction, but I think the best statement of it is this: If in

many cases you have observed A’s that are B’s, then you have some reason to believe that all
A’s are B’s (and also some reason to believe there is a B when you observe an A). Let’s
assume that in many cases you have observed humans walking along and leaving behind them

footprints in sand or mud. So given your observation of these footprints in the sand, but no one

actually walking there, induction gives you some reason to think that someone recently walked

there. If you read in the local newspaper that jokesters were renting helicopters and making

footprints in the sand, your reason for thinking someone recently walked there would be

weakened. But barring that additional evidence, the reason you have based on induction is strong

enough that you are justified in believing someone recently walked along the beach.

In the case of both Inference to the Best Explanation and Induction, the truth of the

conclusion is not guaranteed by the truth of the premises. It’s true that you now observe human

footprints in the sand, it’s true that many times in the past you have seen humans walking in wet

sand and leaving footprints like these behind, it’s true that you have not read or heard any reports

of jokesters making footprints like those you see. Still, all those true premises do not guarantee

that it’s true that someone recently walked there. Maybe this is the first time the jokesters made

footprints in the sand. Maybe some artist made them from a wheeled vehicle and then erased the

tire tracks. Maybe someone bought rubber footprints, put them on a monkey, and trained it to

walk along the beach. The true propositions you base your conclusion on don’t guarantee, don’t

entail, that someone recently walked along the beach. They just make it reasonable to believe

this. Contrast this with the climate change argument I gave earlier.

1. If climate change is at least partly the result of human activity and that change poses dangers

to present and future generations, we ought to alter the human activities that contribute to it.

2. Climate change is at least partly the result of human activity.

3. That change poses dangers to present and future generations.

4. Therefore, we ought to alter the human activities that contribute to it.
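The argument’s form can be displayed schematically, using letters I am introducing just for this purpose: let P = climate change is at least partly the result of human activity, Q = that change poses dangers to present and future generations, and R = we ought to alter the human activities that contribute to it. Then (1)-(4) have the form

\[
(P \wedge Q) \rightarrow R, \qquad P, \qquad Q, \qquad \therefore\; R.
\]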

In this case, if the premises are true the conclusion must be true. Of course, the scientific

justification for believing (2) and (3) must rest on induction, or more likely, on (IBE). If
scientists are justified in believing that climate change is at least partly the result of human

activity it is because that hypothesis is the best explanation of recorded data from across time

that is based on what others have observed. If we are justified in believing that (3) is true it is

because we are justified in accepting scientific testimony about the dangers of climate change.

And if we are justified in accepting scientific testimony, it is because the theories scientists

accept have proven to be the best explanation of the scientific evidence they possessed at the

time. Of course, some of the theories they have accepted turned out to be false. The earth is not

flat and the sun does not revolve around it. But they were the best explanation of the relevant

evidence at the time. To be epistemically justified in believing something is to base your beliefs

on the available evidence. And that is what scientists do. So we have reason to believe that what

they tell us is true, and that’s the best we can do in the pursuit of truth. If we got the truth in any

other way, we’d just be lucky to have it.

Reasoning of the sort found in the climate change argument, (1)-(4), is called deductive

reasoning, and the conclusion of a valid deductive argument must be true if the premises are true

(as was said in the chapter, Tools). But in good arguments based on induction or inference to the

best explanation, the premises support, but don’t guarantee, the truth of the conclusion. Reasoning

based on induction or IBE is called ampliative reasoning because the conclusion, in a sense, goes

beyond the premises, that is, its truth is not guaranteed by the truth of the premises. Outside of
mathematics, many deductive arguments are a combination of deductive and ampliative

reasoning since induction or inference to the best explanation is required for justification of at

least some of the premises.

D. Types of justification

In general, the sources of justification fall into two categories: a priori and not a priori. A

priori justification rests on rational or a priori intuition as I defined that term in the Overview

and above. It is sometimes said to be justification that is independent of experience, but that
means independent of experience beyond that which is needed to understand the proposition at

issue. This is sometimes stated as the requirement that a priori justification must not rest on any

experience beyond enabling experience, that is, beyond the experience needed to understand the

proposition at issue. So you can be justified in believing that knowledge requires justified true

belief without having any more experience than is needed to understand what knowledge is, but

you cannot be justified in believing that you are looking at a computer screen or a chair or a tree

without having experience beyond what you needed to understand those propositions. You can

be a priori justified in believing that all brothers are male but not that all brothers are treated

better by their parents than their sisters even if that were true and there was lots of empirical

evidence to support it. Here is a matrix that summarizes the basic sources of evidence and the

methods of reasoning that can take you from the evidence to some conclusion. The columns

represent possible types of evidence; the rows, the types of reasoning that can be used to reach

conclusions based on the evidence.

                              a priori      non-a priori      Mixed: a priori and non-a priori
deductive reasoning
non-deductive reasoning
(induction & IBE)                ?
mixed: deductive &
non-deductive reasoning

It is possible for there to be cases in all the nine cells of this matrix. But I marked the cell

in the first column and second row with a “?” because some might think that there is no role for

induction or inference to the best explanation when it comes to purely a priori evidence. But that

sort of case is common in philosophy. A priori intuitions about specific cases can provide the

evidence for a philosophical theory that is at least partly justified because it explains why the

propositions that are the objects of those intuitions are true. For instance, theories of knowledge
should explain why knowledge is absent in Sheep and Stopped Clock but present in ordinary

perception of sheep in a field and of a working clock, and also in the Grabit case.

E. The structure of justification

A further question about justification has to do with its structure. Does all justification

rest on some sort of foundation so that all of our justified beliefs ultimately get their justification

from these foundations? Perhaps these foundations are certain kinds of beliefs or maybe they are

perceptual sensations, what we are currently thinking or feeling, or a priori intuitions, that are

not themselves beliefs but perhaps can justify beliefs. Coherentists in epistemology deny that

there are such foundations. Their view is that justification stems from our beliefs fitting together

in the right way to form a coherent system of beliefs, or perhaps a coherent system of beliefs

together with relevant sensations, what we are aware of through introspection, and a priori

intuitions. The metaphor often used to characterize foundationalism is the pyramid with the basic

beliefs or experiences forming the base and other beliefs up the pyramid (derivative beliefs)

supported by the basic beliefs or experiences. The metaphor often used to characterize

coherentism is a net, like a fishing net. The intersections of the cords, where knots appear,

represent beliefs, or beliefs and sensations, seemings, and the like, and a belief is justified if it is

a member of an appropriate net, that is, if it is a member of a coherent net.

Leaving the metaphor behind, there are various accounts of what a coherent system of

beliefs, etc., requires. Some require that there not be any contradictory beliefs in a person’s

system of beliefs. This seems too high a standard since probably everyone holds some

contradictory beliefs even if they do not realize it, and so on this requirement no one would be

justified in believing anything. It also seems too strict to require coherence in all sub-areas of a

person’s beliefs for her to have justified beliefs in a given sub-area. We might have justified

perceptual beliefs because they cohere even if we lack coherence in our religious or political
beliefs. It is hard to say what coherence requires, but there seem to be counterexamples to

coherentism whatever it requires in detail.

Gliese 581d is one of the first planets outside our solar system that scientists thought might support life.

Suppose someone I’ll call Glen has a fecund imagination and has read what scientists have said

about this planet, including that it may support life. He then believes that there are intelligent

beings like us on Gliese 581d and that they send him messages about what goes on there. On this basis

he has a set of coherent beliefs about the economy, government, agriculture, family structure,

educational system, occupations, etc., of the inhabitants of Gliese 581d. If you ask Glen why we

are not receiving messages from these inhabitants, Glen says it’s because he is the chosen one;

they have singled him out knowing that he would be receptive to the messages they send while

others would scoff at the idea that they were receiving messages from the inhabitants of a distant

planet. Glen has a coherent system of beliefs, and they do not conflict with any of his beliefs

about what goes on here on earth. But he is not justified in believing any of what he believes

about Gliese 581d. Those beliefs have no foundation and are just the product of a wild and

creative imagination. But coherentism implies that Glen is justified in believing what he does.

While some sort of coherence might be necessary to have a justified belief, this example shows

that it is not sufficient.

There are other examples involving moral beliefs that make the same point. A racist or

anti-Semite might have a coherent system of beliefs. He might believe that Jews or Blacks are

morally inferior because of their genes or their culture or both. All of his friends and those he

respects might believe the same. He has anecdotal evidence from particular examples of evil

people who were Jewish or Black. All this fits together in a coherent system of beliefs and

supports his view that Jews or Blacks should be expelled from society, imprisoned, or even

killed. Some people in the United States accept conspiracy theories about a “deep state”

controlled by an elite bunch of satanic pedophiles that secretly run things in the United States.
Perhaps those actual theories do not meet the conditions for a coherent system of beliefs,

whatever they may be. However, I believe that it is plausible to think that they could be modified

so they would meet those conditions, but members of, say, QAnon still would not be justified in

accepting them even if they had a coherent system of beliefs.

F. Skepticism

There is a long history in philosophy of arguments that attempt to show that we do not

know, and are not even justified in believing, that we are in an external world, that is, a world

that exists independently of us that contains real people, trees, mountains, cars, buildings, etc.,

that exist beyond our own minds. René Descartes was a famous 17th-century mathematician and

philosopher who wondered how we know that we are not just having a coherent dream or know

that all of our experiences are not caused by what he called an evil demon. The contemporary
version of Descartes’ evil demon is the supercomputers in the film The Matrix that cause all the

perceptual experiences in people who are really just floating in tanks filled with a gel-like liquid.

Their bodies provide the energy that runs the supercomputers. How do we know, or

how are we at least justified in believing, that we are not in the Matrix?

Probably the best argument to show that we don’t know, and are not even justified in

believing, that we are in an external world surrounded by objects independent of us, not in the

Matrix, is what I will call the argument from indistinguishability. It goes like this:

1. If you can’t distinguish X’s from Y’s (say, German Shepherds from wolves), then you don’t

know (and can’t even be justified in believing) that something is an X (e.g., a German Shepherd)

even if it is.

2. So, if we can’t distinguish being in a real world external to us from being in the Matrix, then

we can’t know (and can’t even be justified in believing) that we are in the real world even if we

are. [from the application of (1): let X = the external world and Y = the Matrix].

3. We can’t distinguish being in a real world external to us from being in the Matrix.
4. Therefore, we can’t know (and can’t even be justified in believing) that we are in the real

world external to us even if we are. [from 2 & 3 via modus ponens]

This argument seems pretty convincing. But do you really think we are not even justified

in believing that there are people, trees, buildings, etc. that exist outside of us, independently of

us? If you think we are justified in believing there are things outside of us, you must think that

one of the premises in the argument is false. The argument is valid (an instance of modus

ponens), so if all the premises were true, the conclusion would have to be true.
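To see the structure, use labels I am introducing here: let D = we can distinguish being in a real world external to us from being in the Matrix, and let K = we know (or are at least justified in believing) that we are in the real world. Then premises (2) and (3) and conclusion (4) have the form

\[
\neg D \rightarrow \neg K, \qquad \neg D, \qquad \therefore\; \neg K,
\]

which is an instance of modus ponens.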

Let’s consider the premises one at a time, starting with (1). It might mean that if we can’t

distinguish by perception alone between something’s being an X (German Shepherd) and its

being a Y (wolf), then we don’t know (and can’t even be justified in believing) that something is

an X (German Shepherd) even if it is. But on that understanding (1) is false because you might

know by non-perceptual means that there are no wolves around in the city park where you are

looking at some animal that looks like a German Shepherd. So given what the animal looks like

and your background knowledge that no wolves are in the park, you do know that you are

looking at a German Shepherd, even though you cannot distinguish by perception alone a

German Shepherd from a wolf. You can, by other means, rule out its being a wolf.

Suppose we understand (1) to mean: (1*) If you can’t distinguish in any way X’s from

Y’s (say, a real $20 bill from a counterfeit), then you don’t know (and can’t even be justified in

believing) that something is an X (a real $20 bill) even if it is. Perhaps (1*) is true. But now let’s

look at (3) which now must say: (3*) We can’t distinguish in any way being in a real world

external to us from being in the Matrix. Is that true? Recall the two hypotheses to explain the

footprints in the sand: (H1) Someone recently walked there and (H2) Jokesters made the

footprints from a helicopter. Assume that the footprints would look just as they do regardless of

whether (H1) or (H2) were true. So you can’t distinguish via perception alone footprints made by

walking from footprints made from a helicopter. Suppose, also, that you have no relevant
background beliefs that would allow you to rule out one of these hypotheses in the way that your

background belief that there are no wolves in the park enabled you to rule out the wolf

hypothesis. Still, inference to the best explanation (IBE) or induction can give you grounds to

rule out (H2), the helicopter hypothesis.

Perhaps (IBE) can be used to rule out the Matrix hypothesis. Isn’t the best explanation of

our perceptions that there are real objects outside of us causing perceptions in us? How could an

evil demon or supercomputers do this? Other things being equal, a hypothesis that contains a

detailed causal mechanism is a better hypothesis than one that does not. The evil demon and

supercomputers hypotheses do not; the standard perceptual hypothesis does. Scientists can offer

detailed explanations of how light bounces off of objects outside us, strikes our retinas, causes

electrical activity in our optic nerves, and then produces a visual image.

Almost everyone who defends and uses (IBE) agrees that, other things being equal, a

simpler hypothesis is better than a more complex one. But they do not agree on what simplicity

is. Some think that a hypothesis that posits fewer entities, or kinds of entities, is simpler than one

that posits more entities, or more kinds of entities. If you can explain a series of home break-ins

on the basis of one intruder rather than two or more, then, other things being equal, the single-

intruder hypothesis is the better one. Other things being equal, a theory that explains the origin

and nature of the universe in terms of the activity of one God is a better explanation than one that

explains this in terms of the activities of many gods. Other things being equal, monotheism beats

polytheism.

But many philosophers think that simplicity is not the only factor that can make one

hypothesis better than another. One can find in Peter Lipton’s book Inference to the Best

Explanation eight different considerations that, holding other things equal, determine whether

one hypothesis is better than another. I use the acronym BUMPFESS as an aid to recalling them.

B stands for fit with the already justified background beliefs or theories we hold. Ultimately we
might have to accept mental telepathy as a real phenomenon to explain certain events, but our

background beliefs ground a presumption that it does not exist (the same thing can be said when it

comes to the existence of immaterial beings). U stands for unifying. Other things being equal, a

theory of gravity that explains planetary motion as well as apples falling from trees on earth is a

better theory than one that only explains phenomena on earth. M stands for mechanism, which

concerns how detailed a causal story a theory provides. The theory that opium causes sleep

because of its dormitive powers is not as good a theory as one that refers to the chemical

properties of opium and how they affect the neurons of the brain to cause sleep. P stands for

precision. Other things being equal, a theory that yields precise testable results is better than one

that does not. F stands for fruitfulness. A theory with the ability to explain many other types of

phenomena is better than one that lacks this ability. E stands for elegance. Mathematicians prefer

elegant theories to ones that are inelegant. There is a famous story about Gauss, who, along with
the other students in his class, was required to figure out the sum of the first one hundred positive integers.
The hard way is to add 1 + 2 = 3; then add 3 + 3 = 6; then add 6 + 4 = 10, and so on. But Gauss

noticed that there are pairs of numbers that add up to 101. For instance, 1 + 100 = 101; 2 + 99 =

101; 3 + 98 = 101, and so on. There are 50 such pairs between 1 and 100. So Gauss computed 50
× 101 = 5050 to get the answer. That was an elegant solution.
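Written as a formula, and using the standard closed form for the sum of the first n positive integers (a textbook identity, not something established in the story itself), Gauss’s pairing trick amounts to:

\[
1 + 2 + \cdots + 100 = (1 + 100) + (2 + 99) + \cdots + (50 + 51) = 50 \times 101 = 5050,
\qquad
1 + 2 + \cdots + n = \frac{n(n+1)}{2}.
\]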

In science, a theory of planetary motion that says that the planets revolve about the sun

in elliptical paths is more elegant than one that says the sun and the other planets revolve about

the earth in roughly circular paths that also contain epicycles. An epicyclical path is the path a
point on a smaller circle would trace if that circle rolled around the perimeter of a larger circle. So on the

epicyclical theory, the planets revolve in curlicue paths about the earth. Even if the epicyclical

theory predicted where the sun and other planets would be at any given time as well as the

elliptical path theory, the elliptical theory would be a better explanation of those observations

than the epicyclical theory.


The first S in BUMPFESS stands for scope. A feather will not fall as fast as an anvil in

the atmosphere, but it will in a vacuum. A theory that talks about rate of fall in a vacuum has

narrower scope than one that does not limit it to vacuums. Scope and unification are in tension.

It’s good if a theory explains a wider range of phenomena; such a theory has more explanatory

power. But by explaining a wider range of phenomena, it has broader scope and so that lowers its

probability. This means that scope must be weighed against unification to determine which of

two theories is best overall. But since all of the BUMPFESS criteria are prima facie reasons for

accepting one theory over another, this is true of all eight considerations: they all must be

weighed against the others to determine what is the best explanation all things considered for

some phenomena. The last S stands for simplicity which I have already discussed briefly.

Sometimes “simplicity” is used as a general term to refer to all of the other more specific criteria

as, say, “arthritis” is used to refer to a condition in the joints that can cause pain in the hands, the

knees, ankles, etc.

Now according to the BUMPFESS criteria, which hypothesis is the best explanation of

what we perceive, the Matrix hypothesis or the standard view that says our perceptions are

caused by real people, trees, buildings, etc., external to us? Well, as mentioned earlier, the Matrix

hypothesis does not provide a detailed explanation of how the supercomputers gained control of

the world and how exactly they get electricity from us and produce perceptions in us. So it does

not seem to fare as well as the standard view on measure M. Further, it seems a less fruitful

hypothesis in that it prevents us from looking further into things to answer these questions about

the detailed causal mechanisms. While in the Matrix, we aren’t able to investigate these matters.

However, on the standard view we are. Scientists can investigate how our perceptions are caused

by investigating our eyes and brain, how light bounces off of surfaces and causes effects in our

eyes, and how the brain processes the information from our eyes to produce visual sensations.

Further, the Matrix hypothesis is a kind of two-world hypothesis: the world in the matrix where
perceptions are caused in one way and the world outside of it where they are caused in another

way. That means that the Matrix hypothesis is not as unified as the standard view. So, all in all,

the Matrix hypothesis does not fare as well on the BUMPFESS criteria and so is not as good an

explanation of our perceptions as the standard view. So according to (IBE), we should accept the

standard view over the Matrix hypothesis.

Although we cannot distinguish being in the external world from being in the Matrix via

perception together with our background beliefs, (3*) is false. It says: we can’t distinguish in any

way being in the real world as we normally conceive it from being in the Matrix. But we can

appeal to (IBE) to distinguish between the two competing hypotheses, and that justifies us in

believing that there is a real world external to us that is pretty much like we think it is and that

causes the perceptions we have.

G. Perspectivism

There are several recent approaches to philosophy that call for paying attention to the

perspectives of others whose views are often marginalized: women, Blacks, indigenous people,

Jews, LGBTQ+ people, etc. It is claimed that these people know what it’s like to be oppressed,

disrespected, and treated unfairly, and this kind of knowledge (that is neither propositional

knowledge, knowledge how, nor knowledge by acquaintance) is important evidence about the

specific injustices that have been imposed on them. Further, these voices should be listened to

when it comes to determining what diseases and afflictions should be studied. Women contract

breast cancer but few men do. Women get cancer of the uterus and no men do. Blacks suffer

from sickle cell disease more frequently than non-blacks. Further, they should be listened to

when we are considering how to shape our social, political, economic, and legal institutions to

make them fair to everyone.

No reasonable person can deny that the experiences of marginalized groups can provide

valuable evidence on a variety of topics. The story of the blind men and the elephant is relevant
here. To ignore the evidence based on the experiences of marginalized people is to be like the blind

man who concludes that the elephant is a snake on the basis of feeling only its trunk, or thinks

it’s a rope on feeling only its tail, etc. He should listen to those who feel its legs, side, and ears

before concluding what an elephant is like. Similarly, people from marginalized groups should

be listened to when we are considering how to shape our social, political, economic, and legal

institutions to make them fair to everyone.

It is worth considering whether there is more to the claim that various perspectives should

be respected. That claim is controversial if respecting various perspectives means that there are no

limits to how different people can legitimately weigh the same total evidence, that is, “to each

according to their own perspective.” Linus is a cartoon character in Charles Schulz’s comic strip,

“Peanuts.” Every Halloween night he waits in a pumpkin patch for the Great Pumpkin to appear.

Suppose he believes on the basis of seeing blighted grass and hearing the rustling of leaves that

the Great Pumpkin has passed by. That belief would not be justified even if Linus thought his

evidence supported it, and even if there was a community of Great Pumpkinites that shared his

“perspective.”

Suppose the child of some parent has been given a vaccine and then is diagnosed with

autism, and suppose that parent knows that the same thing has happened with other children. She

then believes that vaccines can cause autism on the basis of this anecdotal evidence. Her belief

would not be justified on her evidence. That she is also a member of a group of anti-vaxxers who

share her “perspective” would not change that. While the experiences of marginalized people

should be considered, their weighting of that evidence has no privileged status. It may be correct, but that

does not follow from the fact that they have evidence that many people outside of their

community lack and that many people share their “perspective,” that is, share their weighting of

the evidence.
Even if there are limits to how people can weigh the evidence, it is very hard to say what

those limits are. David Lewis was a highly respected American philosopher who died in 2001.

Peter van Inwagen is still alive and is also a highly respected American philosopher. They

disagreed about the relationship between determinism and free will, Lewis holding they are

compatible, van Inwagen that they are not. In other words, van Inwagen thought that IF

determinism is true, THEN no one ever acts freely and no one is ever morally responsible for

what she does. Lewis denied that this conditional statement is true. They read each other’s

articles and had personal exchanges on the issue. It’s hard to believe that they had different

evidence. So they must have weighed the evidence they had differently. If these two excellent

philosophers could weigh the evidence differently, it does not seem reasonable to believe that

there is just one legitimate way to weigh the evidence. Still, not just anything goes. Linus and the

Great Pumpkinites don’t weigh the evidence correctly and neither do the anti-vaxxers. This is a

really hard question in epistemology that requires further thought: what are the limits on how

people can weigh evidence and what is their basis? But we don’t need to answer this question to

know that not just any weighting goes.
