Matt Lutz and Jake Ross

An Epistemological Argument against Moral Universalism

1. Overall Structure

A1. If there are any universal moral requirements, then an epistemological condition must be satisfied. (Jake)

A2. This epistemological condition is not satisfied (Matt)

A3. Therefore, there are no universal moral requirements.
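
Schematically, the overall argument is a modus tollens from A1 and A2. As a purely illustrative sketch (the proposition names are our placeholders, not the authors' notation), the inference can be checked in Lean:

```lean
-- A1: universal moral requirements entail the epistemological condition
-- A2: the condition is not satisfied
-- A3: so there are no universal moral requirements (modus tollens)
example (UniversalRequirements Condition : Prop)
    (a1 : UniversalRequirements → Condition)
    (a2 : ¬ Condition) :
    ¬ UniversalRequirements :=
  fun h => a2 (a1 h)
```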

2. Defining Universal Moral Requirements

Norm: any general principle of action, expressible as an imperative, that tells every agent within some class of agents to act (or refrain from acting) in some manner.

E.g., “Let no human being kill another human being”; “Let no American citizen travel to Cuba.”

A norm applies to an agent iff the norm is addressed to that agent and hence tells
that agent how to act or refrain from acting in some possible circumstance.

A universal norm is a norm that applies to everyone, or at least to every possible human being. These can be expressed by saying “let no one phi” or “let everyone phi.”

A norm N is a moral requirement iff it would be wrong for anyone to whom N applies to violate N.

Thus, a universal moral requirement is a universal norm which it would be morally wrong for anyone to violate (where by “anyone” we mean any possible human being).

3. From Wrong Action to Blameworthy Belief

We can be blameworthy not only for the actions we perform or fail to perform, but
also for the attitudes we have or fail to have. This is true, in particular, of guilt.

B1. If R is a moral requirement, and you know that you violated R without an
excuse, then you’d be blameworthy for not feeling guilty or remorseful about it.

B2. The kind of remorse in question is the kind of attitude expressed by sincere apologies.

(That is, if R is a moral requirement, and you know that you violated R without an
excuse, then you’d be blameworthy for not having the kind of attitude toward your
action that you could express by sincerely apologizing for it.)

B3. But the attitude expressed by sincerely apologizing for an action includes the
belief that this action was morally wrong—if you think the action for which you’re
apologizing was perfectly fine, then your apology is not sincere.

B4. Therefore, if R is a moral requirement, and you know you violated R without
an excuse, then you’d be blameworthy for not having an attitude of guilt or
remorse that involves the belief that what you did was morally wrong.

B5. But, in general, if you’re blameworthy for failing to phi, and phi-ing involves psi-ing, then you’re blameworthy for failing to psi.

B6. Therefore, if R is a moral requirement, and you know you violated R without
an excuse, then you’d be blameworthy for failing to believe that you did something
wrong.

4. From Guilt to Compunction

There is a family of related attitudes of moral disapproval that vary in terms of one’s relation to the object of the attitude. Guilt or remorse is the disapproving attitude we have toward our own past wrong actions. And blame or indignation is the disapproving attitude we have toward the past wrong actions of others. But there is a related attitude we can have toward wrong actions we are contemplating performing in the future, an attitude that motivates us to avoid performing this action. We may call this attitude moral compunction.

Just as remorse consists in feeling bad about a past action on the ground that this past action was wrong, so moral compunction involves feeling averse toward a possible action on the ground that this possible action would be wrong.

And, just as we can be blameworthy for failing to feel remorse, so we can be blameworthy for failing to feel compunction.

5. The Main Argument

C1. If R is a moral requirement, and you know that a possible action of yours
violates R, then you’d be blameworthy for not feeling compunction about this
action.

C2. Just as feeling remorse for a past action involves believing that this past action
was morally wrong, so feeling compunction toward a possible action involves
believing that this possible action is morally wrong.

C3. Therefore, if R is a moral requirement, and you know that a possible action of
yours violates R, then you’d be blameworthy for not believing that this action is
morally wrong.

C4. Therefore, if R is a universal moral requirement, then whenever any possible human being knows that a possible action of theirs violates R, they would be blameworthy for not believing that this action is morally wrong.

C5. You can be blameworthy for failing to believe a proposition only if you have
compelling epistemic reason to believe this proposition.

C6. Therefore, if R is a universal moral requirement, then whenever any possible human being knows that a possible action of theirs violates R, they have compelling epistemic reason to believe that this action is morally wrong.

C7. However, for every universal norm R, there is some possible human being S
and some possible action phi such that S knows that phi violates R but S does not
have compelling epistemic reason to believe that phi is morally wrong.

C8. Therefore, there are no universal moral requirements.
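
The step from C6 and C7 to C8 is a straightforward quantifier inference. As a hedged formal sketch (the predicate names are our own placeholders, not the authors' vocabulary), it can be rendered in Lean:

```lean
-- C6: if R is a universal moral requirement, then every possible agent who
--     knows a possible action of theirs violates R has compelling epistemic
--     reason to believe that action wrong.
-- C7: for every universal norm R, some possible agent knows a violation of R
--     yet lacks such a reason.
-- C8: hence no norm is a universal moral requirement.
example (Norm Agent Act : Type)
    (UMReq : Norm → Prop)
    (Knows : Agent → Act → Norm → Prop)
    (HasReason : Agent → Act → Prop)
    (c6 : ∀ R, UMReq R → ∀ S a, Knows S a R → HasReason S a)
    (c7 : ∀ R, ∃ S, ∃ a, Knows S a R ∧ ¬ HasReason S a) :
    ∀ R, ¬ UMReq R :=
  fun R hR =>
    (c7 R).elim fun S hS =>
      hS.elim fun a ha =>
        ha.2 (c6 R hR S a ha.1)
```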


6. Defending the Epistemological Premise

C7. For every universal norm R, there is some possible human being S and some
possible action phi such that S knows that phi violates R but S does not have
compelling epistemic reason to believe that phi is morally wrong.

We will provide two arguments for C7: the coherentist-friendly argument and the foundationalist-friendly argument.

6.1 The Coherentist-Friendly Argument

Def. An agent’s beliefs are in a state of wide reflective equilibrium iff that agent’s
considered judgments have been brought into a state of perfect logical consistency
and maximal explanatory unity.

D1. If an agent S is in a state of wide reflective equilibrium and a proposition p is inconsistent with S’s considered judgments, then S does not have compelling epistemic reason to believe that p.

D2. For every universal norm R, there is some possible human being S and some possible action phi such that

(i) S knows that phi violates R and

(ii) S is in a state of wide reflective equilibrium and

(iii) S’s considered judgments are inconsistent with the proposition that phi is morally wrong.

Therefore,

C7. For every universal norm R, there is some possible human being S and some
possible action phi such that S knows that phi violates R but S does not have
compelling epistemic reason to believe that phi is morally wrong.
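
The coherentist-friendly derivation of C7 from D1 and D2 is again a matter of instantiating inside D2's existential witnesses. A purely illustrative Lean sketch (all predicate names are our placeholders):

```lean
-- D1: wide reflective equilibrium plus inconsistency with one's considered
--     judgments defeats compelling reason to believe p.
-- D2: for every norm, some possible agent in equilibrium knows a violation
--     whose wrongness is inconsistent with their considered judgments.
-- C7 follows by applying D1 to D2's witnesses.
example (Norm Agent Act Prp : Type)
    (WRE : Agent → Prop)                  -- S is in wide reflective equilibrium
    (Inconsistent : Agent → Prp → Prop)   -- p conflicts with S's judgments
    (Reason : Agent → Prp → Prop)         -- S has compelling reason to believe p
    (Knows : Agent → Act → Norm → Prop)   -- S knows the act violates the norm
    (Wrong : Act → Prp)                   -- the proposition "a is morally wrong"
    (d1 : ∀ S p, WRE S → Inconsistent S p → ¬ Reason S p)
    (d2 : ∀ R : Norm, ∃ S, ∃ a, Knows S a R ∧ WRE S ∧ Inconsistent S (Wrong a)) :
    ∀ R : Norm, ∃ S, ∃ a, Knows S a R ∧ ¬ Reason S (Wrong a) :=
  fun R =>
    (d2 R).elim fun S hS =>
      hS.elim fun a ha =>
        ⟨S, a, ha.1, d1 S (Wrong a) ha.2.1 ha.2.2⟩
```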

Supporting the premises:

D1 – If your beliefs are in wide reflective equilibrium and your considered judgments entail not-p, then it’s reasonable for you to believe that not-p. And if it’s reasonable for you to believe that not-p, then you don’t have compelling epistemic reason for believing that p.

D2 – Consider norm T: “Let no one torture babies for the fun of it.”

And consider an ideally coherent Caligula—call him Coheregula. Coheregula’s beliefs might be in a state of wide reflective equilibrium, and he might know that a given action would violate T. And yet his considered judgments might entail that this action would not be wrong.

6.2 The Foundationalist-Friendly Argument

E1. An agent S has compelling epistemic reason to believe a proposition p only if p could be arrived at through some combination of presentational experiences (e.g. perception, memory, introspection, intuition) and sound inferential processes (e.g. deduction, induction, abduction).

E2. Hume’s Principle: There are no sound inferential processes that yield moral conclusions from purely descriptive premises and/or presentational states with purely descriptive contents.

E3. Therefore, an agent S has compelling epistemic reason to accept a moral proposition M only if M is supported (at least in part) by that agent’s moral presentational experiences (i.e. moral intuitions) and/or moral beliefs.

E4. For every universal norm R, there is a possible human being S and a possible action phi such that S knows that phi violates R but S’s moral beliefs and moral intuitions do not support the proposition that phi is morally wrong.

C7. Therefore, for every universal norm R, there is a possible human being S and a possible action phi such that S knows that phi violates R but S does not have compelling epistemic reason to believe that phi is morally wrong.

Supporting the premises:

E1 – This is a very general foundationalist framework. Our beliefs based on perceptual experiences are our basic or foundational beliefs; those beliefs are immediately justified. Our inferential beliefs are those that are (ultimately) derived from those basic beliefs by sound inferences.

E2 – Presentational contents provide immediate justification only for the propositions that they present as true. Inductive inference cannot yield moral beliefs from descriptive premises because inductive reasoning extends patterns from observed to unobserved instances, and moral beliefs do not extend descriptive patterns. Deductive inference cannot yield moral beliefs from descriptive premises because moral concepts and descriptive concepts are distinct (Open Question Argument). Abductive inference cannot yield moral beliefs from descriptive premises because there will always be a descriptive explanation of descriptive data that is at least as good as moral explanations of moral data. Theory-laden descriptive-to-moral abductive inference relies on moral premises.

E4 – Once again, consider norm T (“Let no one torture babies for the fun of it.”)
Coheregula might know that a given action would violate T. And yet his perverse
moral beliefs and moral intuition might provide no support whatsoever for the
claim that this action would be morally wrong.

7. The Unifying Thought

Epistemic justification is subjective; it depends on the agent’s evidence, in the form of their presentational experiences and background beliefs. Accordingly, it is possible for an agent’s subjectively-justified beliefs to be false and horribly unreliable, provided that they are starting from a set of false background beliefs and misleading experiences (think: evil demons).
