Regard for Reason in the Moral Mind

Joshua May

Oxford University Press

Published in May 2018

[Final draft | 6 December 2017 | 108,718 words]

Abstract: The burgeoning science of ethics has produced a trend toward pessimism. Ordinary
moral judgment and motivation, we’re told, are profoundly influenced by arbitrary factors and
ultimately driven by unreasoned feelings or emotions—fertile ground for sweeping debunking
arguments. This book counters the current orthodoxy on its own terms by carefully engaging
with the empirical literature. The resulting view, optimistic rationalism, maintains that reason
plays a pervasive role in our moral minds and that ordinary moral reasoning is not particularly
flawed or in need of serious repair. The science does suggest that moral knowledge and virtue
don’t come easily, as we are susceptible to some unsavory influences that lead to rationalizing
bad behavior. Reason can be corrupted in ethics just as in other domains, but the science
warrants cautious optimism, not a special skepticism about morality in particular. Rationality
in ethics is possible not just despite, but in virtue of, the psychological and evolutionary
mechanisms that shape moral cognition.

Keywords: rationalism, optimism, skepticism, debunking arguments, moral judgment, moral motivation, moral knowledge, virtue, reason, emotion, rationalization

Table of Contents

Preface
List of Tables and Figures
Introduction
Ch. 1: Empirical Pessimism
1.1 Introduction
1.2 Pessimism about Moral Cognition
1.3 Pessimism about Moral Motivation
1.4 Optimistic Rationalism
1.5 Coda: Appealing to Science
PART I: Moral Judgment & Knowledge
Ch. 2: The Limits of Emotion
2.1 Introduction
2.2 Moralizing with Feelings?
2.3 Accounting for Slight Amplification
2.4 Psychopathology
2.5 Conclusion
Ch. 3: Reasoning beyond Consequences
3.1 Introduction
3.2 Consequences
3.3 Beyond Consequences
3.4 Moral Inference
3.5 Conclusion
Ch. 4: Defending Moral Judgment
4.1 Introduction
4.2 Empirical Debunking in Ethics
4.3 The Debunker’s Dilemma
4.4 Emotions
4.5 Framing Effects
4.6 Evolutionary Pressures
4.7 Automatic Emotional Heuristics
4.8 Explaining the Dilemma
4.9 Conclusion
Ch. 5: The Difficulty of Moral Knowledge
5.1 Introduction
5.2 The Threat of Selective Debunking
5.3 The Threat of Peer Disagreement
5.4 Conclusion
PART II: Moral Motivation & Virtue
Ch. 6: Beyond Self-Interest
6.1 Introduction
6.2 The Egoism-Altruism Debate
6.3 Empirical Evidence for Altruism
6.4 Self-Other Merging
6.5 Dividing Self from Other
6.6 Conclusion
Ch. 7: The Motivational Power of Moral Beliefs
7.1 Introduction
7.2 Ante Hoc Rationalization
7.3 Rationalizing Immorality
7.4 Motivating Virtue
7.5 Conclusion
Ch. 8: Freeing Reason from Desire
8.1 Introduction
8.2 Anti-Humean Moral Integrity
8.3 Neurological Disorders
8.4 Special Mechanisms
8.5 Aspects of Desire
8.6 Simplicity
8.7 Conclusion
Ch. 9: Defending Virtuous Motivation
9.1 Introduction
9.2 The Defeater’s Dilemma
9.3 The Threat of Egoism
9.4 The Threat of Situationism
9.5 Containing the Threats
9.6 Conclusion
Conclusion
Ch. 10: Cautious Optimism
10.1 Introduction
10.2 Lessons
10.3 Enhancing Moral Motivation
10.4 Enhancing Moral Cognition
10.5 Conclusion
References
Index

To Jules, my mighty girl,

may you grow up to be righteous.

Preface
During graduate school in Santa Barbara, I developed a passion for the interdisciplinary study of
ethics. Fortunately for me, the field was just beginning to explode, fueled by fascinating
discoveries in the sciences and renewed interest in their philosophical implications. Both
philosophers and scientists have primarily taken the research to reveal that commonsense moral
deliberation is in need of serious repair. I was energized by this burgeoning area of research and I
too felt the pull toward pessimism about ordinary moral thought and action and particularly
about the role of reason in them. However, as I began to dig into the research and arguments,
many pessimistic conclusions seemed to be based on insufficient evidence. I’ve come to find that
our moral minds are more defensible in light of the science than many have let on.
This book articulates and defends my optimistic rationalism. It argues that our best
science helps to defend moral knowledge and virtue against prominent empirical attacks, such as
debunking arguments and situationist experiments. Being ethical isn’t easy, as our understanding
of the human brain certainly confirms. But our moral minds exhibit a regard for reason that is not
ultimately beholden to blind passions. Although we are heavily influenced by automatic and
unconscious processes that have been shaped by evolutionary pressures, virtue is within reach.

Gratitude: I am grateful to so many people who have aided in the development of this project.
The number is enormous partly because some of the ideas have been in the works for nearly a
decade, since my early years in graduate school. My colleagues have been invaluable at the
University of California at Santa Barbara, then Monash University in Australia, and now at the
University of Alabama at Birmingham. Many have provided incisive feedback and stimulating
discussion that have indirectly helped along the ideas that appear in this book.
So as to avoid a dozen pages of acknowledgments, I think it wise to confine my thanks
here to those who have provided feedback (in oral or written form) on this particular manuscript
or draft papers that have become core elements of it. These individuals are, to the best of my
fallible memory and records: Ron Aboodi, Mark Alfano, C. Daniel Batson, Bradford Cokelet,
Stephen Finlay, Jeanette Kennett, Charlie Kurth, Hyemin Han, Yongming Han, Julia Henke
Haas, Richard Holton, Bryce Huebner, Nageen Jalali, Karen Jones, Matt King, Victor Kumar,
Andy Lamey, Elizabeth Lanphier, Neil Levy, Dustin Locke, Heidi Maibom, John Maier, Colin
Marshall, Kevin McCain, John Mikhail, Christian Miller, Brandon Murphy, Charles Pigden,
François Schroeter, Laura Schroeter, Luke Semrau, Neil Sinhababu (the best “philosophical
nemesis” one could ask for), Walter Sinnott-Armstrong, Michael Slote, Jesse Summers, Raluca
Szekely, John J. Tilley, Brynn Welch, Danielle Wylie, and Aaron Zimmerman. My sincere
apologies to anyone I’ve unintentionally left out.
The work on this book, and key papers leading to it, took place at several institutions
outside of my home department at UAB. In 2014, I attended a summer seminar at the Central
European University in Budapest ably directed by Simon Rippon. In 2015, at the Prindle Institute
for Ethics at DePauw University, Andrew Cullison graciously hosted a writing retreat. In 2017, I
attended the fantastic Summer Seminar in Neuroscience and Philosophy at Duke University,
generously supported by the John Templeton Foundation and directed by Walter Sinnott-
Armstrong and Felipe De Brigard. Many thanks to the directors and funders for those
opportunities. Part of these visits, and many others in recent years, have also been made possible
by my department at UAB. I’m forever grateful to our chair, Gregory Pence, for his considerable
support of research, including many trips to present my work both in the U.S. and abroad.
A handful of people deserve to be singled out for special thanks. Throughout my
philosophical development, one group has provided continuous mentorship and moral support
that has sustained me through the uphill battle that is modern academia. That crew includes Dan
Batson, Walter Sinnott-Armstrong, and Aaron Zimmerman. Special thanks also go to the
reviewers and advisors of the book for Oxford University Press and to the editor, Peter
Momtchiloff. Their guidance, comments, and support made the reviewing and publishing process
a pleasure when it easily could have been demoralizing. Finally, I thank two talented philosophy
majors, Elizabeth Beckman and Samantha Sandefur, who worked as research assistants.

Previous work: Some of this book draws on my previously published work. Little of it is merely
recycled, as I have significantly updated both the presentation (organization and prose), as well
as some of the content. Chapters 2 and 3 draw on “The Limits of Emotion in Moral Judgment”
(forthcoming in The Many Moral Rationalisms, eds. K. Jones & F. Schroeter, Oxford University
Press). Chapter 2 also draws partly on “Does Disgust Influence Moral Judgment?” (published in
2014 in the Australasian Journal of Philosophy 92(1): 125–141). Chapter 3 also draws partly
from “Moral Judgment and Deontology: Empirical Developments” (published in 2014 in
Philosophy Compass 9(11): 745-755). Chapters 4 and 5 draw on a paper I’ve co-authored with
Victor Kumar: “How to Debunk Moral Beliefs” (to appear in The New Methods of Ethics, eds. J.
Suikkanen & A. Kauppinen). Chapter 6 is based partly on two articles: “Egoism, Empathy, and
Self-Other Merging” (published in 2011 in the Southern Journal of Philosophy 49(S1): 25–39,
Spindel Supplement: Empathy & Ethics, ed. R. Debes) and “Empathy and Intersubjectivity”
(published in 2017 in the Routledge Handbook of Philosophy of Empathy, ed. Heidi Maibom,
Routledge). Chapter 7 draws a little bit from a short commentary piece, “Getting Less Cynical
about Virtue” (published in 2017 in Moral Psychology, Vol. 5: Virtue & Happiness, eds. W.
Sinnott-Armstrong & C. Miller, MIT Press, pp. 45-52). Chapter 8 draws a little bit from
“Because I Believe It’s the Right Thing to Do” (published in 2013 in Ethical Theory & Moral
Practice 16(4): 791–808). For permission to republish portions of these works, I’m grateful to
the publishers and, in one case, to my brilliant co-author, Victor Kumar.

Audience: Given the interdisciplinary nature of the material in this book, I aim for its audience to
include a range of researchers, especially philosophers, psychologists, and neuroscientists. I have
thus attempted to write in an accessible manner, which sometimes requires explaining concepts
and theories with which some readers will already be quite familiar. Another consequence is that
I sometimes omit certain details that one otherwise might discuss at length. I hope that, all things
considered, such choices actually make for a better read.

Title: Finally, a note on the book’s title. The phrase “regard for reason” came to me
independently (or so it seems) years ago. I later found that over a century ago Henry Sidgwick
used it when describing Kant’s view (see Sidgwick 1874/1907: 515). I also discovered that
Jeanette Kennett uses the similar phrase “reverence for reason” in her account of moral
motivation (2002: 355).

List of Tables and Figures


Tables:
2.1: Example Data from a Disgust Experiment
3.1: Cases Varying Intention and Outcome
3.2: Two Modes of Moral Cognition
4.1: Example Processes Subject to the Debunker’s Dilemma
5.1: Five Moral Foundations
6.1: Proportion of Participants Offering to Help
7.1: Proportion of Later Indulgence by Choosing Cake
7.2: Mean Self-Reported Likelihood to Engage in Behavior
7.3: Mean Responses to Whether a Job was Suited for a Particular Race
7.4: Task Assignment and Moral Ratings of It
9.1: Situational Influences on Classes of Behavior
9.2: Example Factors Subject to the Defeater’s Dilemma

Figures:
1.1: Key Sources of Empirically Grounded Pessimism
3.1: The Switch Case
3.2: The Footbridge Case
3.3: Loop Track vs. Man-in-Front
5.1: Ideological Differences in Foundation Endorsement
8.1: Accounts of Moral Motivation

Introduction

Ch. 1: Empirical Pessimism


Reason is wholly inactive, and can never be the source of so
active a principle as conscience, or a sense of morals.
– David Hume

Word count: 8,971

1.1 Introduction
Moral evaluation permeates human life. We readily praise moral saints, admonish those who
violate ethical norms, and teach children to develop virtues. We appeal to moral reasons to guide
our own choices, to structure social institutions, and even to defend atrocities. But is this a
fundamentally rational enterprise? Can we even rely on our basic modes of moral thought and
motivation to know right from wrong and to act virtuously?
Empirical research may seem to warrant doubt. Many philosophers and scientists argue
that our moral minds are grounded primarily in mere feelings, not rational principles. Emotions,
such as disgust, appear to play a significant role in our propensities toward racism, sexism,
homophobia, and other discriminatory actions and attitudes. Scientists have been increasingly
suggesting that much, if not all, of our ordinary moral thinking differs from such gut reactions
only in degree, not in kind. Even rather reflective people are fundamentally driven by emotional reactions, using
reasoning only to concoct illusory justifications after the fact. As Jonathan Haidt has put it, “the
emotions are in fact in charge of the temple of morality” while “moral reasoning is really just a
servant masquerading as the high priest” (2003: 852).
On such influential pictures, ordinary moral thinking seems far from a reasoned pursuit of
truth. Even if some ordinary moral judgments are rational and reliable, brain imaging research
suggests that the intuitive moral judgments that align with commonsense morality are driven
largely by inflexible emotional alarms instilled in us long ago by natural selection. The same
apparently goes for our thinking about even the most pressing of contemporary moral issues,
such as abortion, animal rights, torture, poverty, and climate change. Indeed, some theorists go
so far as to say that we can’t possibly acquire moral knowledge, or even justified belief, because
our brains have been shaped by evolutionary forces that can’t track supposed “moral facts.”
As a result, virtue seems out of reach because most of us don’t know right from wrong.
And it gets worse. Even if commonsense moral judgment is on the right track, distinctively
moral motivation may be impossible or exceedingly rare. When motivated to do what’s right, we
often seem driven ultimately by self-interest or non-rational passions, not our moral beliefs. If
our moral convictions do motivate, they are corrupted by self-interested rationalization or
motivated reasoning. Scientific evidence suggests that people frequently lie and cheat to benefit
themselves whenever they believe they can get away with it. Sure, we can feel empathy for
others, but mostly for our friends and family. Those suffering far away don’t stir our sentiments
and thus don’t motivate much concern. When we do behave well, it’s often to gain some reward,
such as praise, or to avoid punishment. Doing what’s right for the right reasons seems like a
psychological rarity at best.
While theorists disagree over the details, there has certainly been an increase in
scientifically motivated pessimism (a term I borrow from D’Arms & Jacobson 2014). These
pessimists contend that ordinary moral thought and action are ultimately driven by non-rational
processes. Of course, not all empirically informed philosophers and scientists would describe
themselves as “pessimists.” They may view themselves as just being realistic and view the
optimist as a Panglossian Pollyanna. But we’ll see that the label of “pessimism” does seem apt for
the growing attempts to debunk ordinary moral psychology or to pull back the curtain and reveal
an unsophisticated patchwork in need of serious repair.
This book aims to defend a more optimistic view of our moral minds in light of our best
science. Knowing right from wrong, and acting accordingly, is indeed difficult for many of us.
But we struggle not because our basic moral beliefs are hopelessly unjustified—debunked by
evolutionary pressures or powerful emotions—or because deep down we are all motivated by
self-interest or are slaves to ultimately non-rational passions. Science can certainly change our
conception of humanity and cause us to confront our biological and cultural limitations. Not all
of commonsense morality can survive, but we should neither oversell the science nor commit
ordinary moral thinking to the flames.
Ultimately, I argue for an optimistic rationalism. Ordinary moral thought and action are
driven by a regard for “reason”—for reasons, reasonableness, or justifiability. Pessimists
commonly point to our tendencies toward irrationality, but perhaps paradoxically it is often our
irrationalities that reveal our deep regard for reason. If ordinary moral cognition had little to do
with reason, then we would not so often rationalize or provide self-deceived justifications for bad
behavior. Driven by this concern to act in ways we can justify to ourselves and to others, moral
knowledge and virtue are possible, despite being heavily influenced by unconscious processes
and despite being sensitive to more than an action’s consequences.
In this chapter, I’ll introduce some key sources of pessimism about two core aspects of
moral psychology. Some theorists are doubtful about the role of reason in ordinary moral
cognition and its ability to rise to knowledge. Others are doubtful about the role of reason in
moral motivation and our ability to act from virtuous motivation. After surveying a diverse range
of opponents, I’ll explain the plan in the coming chapters for defending a cautious optimism
about our moral minds, and one that lies within the rationalist tradition.

1.2 Pessimism about Moral Cognition


1.2.1 Sources of Pessimism

Contemporary moral philosophers have rightly turned their attention to the sciences of the mind
in order to address theoretical and foundational questions about ethics. What is going through
our minds when we condemn others or are motivated to do what’s right? Is moral thinking a
fundamentally inferential process or are sentiments essential? To test proposed answers to such
questions, some philosophers are now even running their own experiments.
Unfortunately, though, philosophers and scientists alike have tended to hastily take this
empirically informed movement to embarrass ordinary moral thinking or the role of reason in it.
Ethical theories in the tradition of Immanuel Kant, in particular, have taken a serious beating,
largely for their reverence for reason.
To be fair, Kantians do claim that we can arrive at moral judgments by pure reason alone,
absent any sentiments or feelings. Contemporary Kantians likewise ground morality in rational
requirements, not sentiments like resentment or compassion. Thomas Nagel, for example, writes:
“The altruism which in my view underlies ethics is not to be confused with generalized affection
for the human race. It is not a feeling” (1970/1978: 3). Instead, Kantians typically ground
morality in reflective deliberation about what to do (Wallace 2006) or in reflective endorsement
of one’s desires and inclinations. Michael Smith, for example, argues that moral approbation
expresses a belief about “what we would desire ourselves to do if we were fully rational” (1994:
185). Similarly, Christine Korsgaard writes that “the human mind… is essentially reflective”
(1996/2008: 92), and this self-consciousness is required for moral knowledge and virtue, for it
allows us to make reasoned choices that construct our own identities. Morality, according to
Korsgaard (2009), arises out of “the human project of self-constitution” (4), which involves a
“struggle for psychic unity” (7).
Many empirical pessimists contend that reflection and deliberation do not play such a
substantial role in our moral minds. Haidt even speaks of a “rationalist delusion” (2012: 103),
and it’s not difficult to see why. The study of moral development in psychology was dominated
in the 20th century by Lawrence Kohlberg (1973), who was heavily inspired by Kant. However,
that tradition has largely fallen out of favor to make room for psychological theories in which
emotion plays a starring role. Many psychologists and neuroscientists now believe that a
surprising portion of our mental lives is driven by unconscious processes, many of which are
automatic, emotional, and patently irrational or non-rational. Reasoning comes in to justify that
which one’s passions have already led one to accept. As Haidt has put it, “moral reasoning does
not cause moral judgment; rather, moral reasoning is usually a post-hoc construction, generated
after a judgment has been reached” (2001: 814).
This is the challenge from a brand of sentimentalism which contends that moral cognition
is fundamentally driven by emotion, passion, or sentiment that is distinct from reason (e.g.,
Nichols 2004; Prinz 2007). Many now take the science to vindicate sentimentalism and Hume’s
famous derogation of reason. Frans de Waal, for example, urges us to “anchor morality in the so-
called sentiments, a view that fits well with evolutionary theory, modern neuroscience, and the
behavior of our primate relatives” (2009: 9). Even if reasoning plays some role in ordinary moral
judgment, the idea is that sentiment runs the show (Haidt 2012: 77; Prinz 2016: 65).
Other critics allow that ordinary moral judgment can be driven by reason, but they
attempt to debunk all or large portions of commonsense morality, yielding full or partial
skepticism. Evolutionary debunkers argue that Darwinian pressures prevent our minds from
tracking moral truths. Even if blind evolutionary forces get us to latch onto moral facts, this is an
accident that doesn’t amount to truly knowing right from wrong. As Richard Joyce puts it,
“knowledge of the genealogy of morals (in combination with some philosophizing) should
undermine our confidence in our moral judgments” (2006: 223; see also Ruse 1986; Rosenberg
2011).
Other debunkers align good moral reasoning with highly counter-intuitive conclusions
consistent with utilitarian (or other consequentialist) ethical theories. Peter Singer (2005) and
Joshua Greene (2013), for example, argue that moral thinking is divided into two systems—one
is generally trustworthy but the other dominates and should be regarded with suspicion. The
commonsense moral intuitions supporting non-utilitarian ethics can be debunked since they arise
from unreliable cognitive machinery. Greene writes that our “anti-utilitarian intuitions seem to
be sensitive to morally irrelevant things, such as the distinction between pushing with one’s
hands and hitting a switch” (328). These pessimists are utilitarian debunkers who argue that the
core elements of ordinary moral judgment should be rejected, largely because they are driven by
automatic emotional heuristics that place moral value on more than the consequences of an
action. While some moral judgments are rational, and can yield knowledge or at least justified
belief, most of our ordinary intuitions are not among them. Such utilitarians are often content
with imputing widespread moral ignorance to the general population, which likewise renders
virtuous action exceedingly rare.
Many debunkers conceive of moral cognition as facing a dilemma in light of the science.
As Singer has put it:
We can take the view that our moral intuitions and judgments are and always will be
emotionally based intuitive responses, and reason can do no more than build the best
possible case for a decision already made on nonrational grounds. […] Alternatively, we
might attempt the ambitious task of separating those moral judgments that we owe to our
evolutionary and cultural history, from those that have a rational basis. (2005: 351)
It seems we can avoid wholesale sentimentalism only by undermining large swaths of ordinary
moral thinking.
Whether by embracing sentimentalism or debunking, a pessimistic picture of ordinary
moral thinking seems to result. The worry is that, if our best science suggests that our moral
minds are driven largely by non-rational passions, then that way of thinking may be indefensible
or in need of serious revision or repair. Now, sentimentalists frequently deny that their view
implies that our moral beliefs are somehow deficient (see, e.g., Kauppinen 2013; D’Arms and
Jacobson 2014), and of course emotions aren’t necessarily illicit influences. However,
sentimentalists do maintain that genuinely moral cognition ultimately requires having certain
feelings, which suggests that it’s fundamentally an arational enterprise in which reason is a slave
to the passions.
At any rate, I aim to provide a defense of ordinary moral cognition that allows reason to
play a foundational role. First, I’ll argue for an empirically informed rationalism: moral
judgment is fundamentally an inferential enterprise that is not ultimately dependent on non-
rational emotions, sentiments, or passions. Second, I’ll advance a form of anti-skepticism against
the debunkers: there are no empirical grounds for debunking core elements of ordinary moral
judgment, including our tendency to place moral significance on more than an action’s
consequences.

1.2.2 Reason vs. Emotion?

Philosophers and scientists increasingly worry that the reason/emotion dichotomy is dubious or
at least fruitless. We of course shouldn’t believe that reason is good and reliable while emotion is
bad and biasing (Jones 2006; Berker 2009). Moreover, as we further understand the human brain,
we find great overlap between areas associated with reasoning and emotional processing with
apparently few differences. Like paradigm emotional processing, reasoning can be rapid and
relatively inaccessible to consciousness. And emotions, like paradigm reasoning, aid both
conscious and unconscious inference, as they provide us with relevant information (Dutton &
Aron 1974; Schwarz & Clore 1983), often through gut feelings about which of our many options
to take (Damasio 1994).
The position developed in this book is likewise skeptical of the reason/emotion
dichotomy, but this won’t fully emerge until the end. For now, let’s begin by attempting to
articulate a working contrast between reason and emotion.
Reasoning is, roughly, a kind of inference in which beliefs or similar propositional
attitudes are formed on the basis of pre-existing ones. For example, suppose Jerry believes that
Elaine will move into the apartment upstairs only if she has $5000, and he recently learned that
she doesn’t have that kind of money to spare. Jerry then engages in reasoning when, on the basis
of these two other beliefs, he comes to believe that Elaine won’t move into the apartment
upstairs. It’s notoriously difficult to adequately characterize this notion of forming a belief “on
the basis” of other beliefs in the sense relevant to inference (see, e.g., Boghossian 2012). But
such issues needn’t detain us here.
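To make the inferential structure explicit, Jerry’s transition is an instance of the classical rule of modus tollens. A minimal schematic rendering, where M stands for “Elaine will move into the apartment upstairs” and H for “Elaine has $5000 to spare”:

$$
(M \rightarrow H), \quad \neg H \;\vdash\; \neg M
$$

Whether a transition of this form is carried out consciously or unconsciously, it counts as reasoning in the sense just sketched: a new belief formed on the basis of pre-existing ones.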
Some philosophers and psychologists define reasoning more narrowly as conscious
inference (e.g., Haidt 2001: 818; Mercier & Sperber 2011: 57; Greene 2013: 136). This may
capture one ordinary sense of the term “reasoning.” The archetype of reasoning is indeed
deliberate, relatively slow, and drawn out in a step-wise fashion. For example, you calculate your
portion of the bill, weigh the pros and cons of divorce, or deliberate about where to eat for
lunch.
But there’s no need to be overly restrictive. As Gilbert Harman, Kelby Mason, and
Walter Sinnott-Armstrong point out: “Where philosophers tend to suppose that reasoning is a
conscious process… most psychological studies of reasoning treat it as a largely unconscious
process” (2010: 241). Moreover, ordinary usage and dictionary definitions don’t make conscious
awareness essential to reasoning, presumably because rule-governed transitions between beliefs
can be a rather automatic, unconscious, implicit, and unreflective process. For example:
• You just find yourself concluding that your son is on drugs.
• You automatically infer from your boss’s subtly unusual demeanor that she’s about to
fire you.
• You suddenly realize in the shower the solution to a long-standing problem.
These beliefs seem to pop into one’s head, but they aren’t born of mere feelings or non-
inferential associations. There is plausibly inference on the basis of representations that function
as providing reasons for a new belief. Reasoning occurs; it’s just largely outside of awareness
and more rapid than conscious deliberation.
Indeed, it is now common in moral psychology to distinguish conscious from
unconscious reasoning or inference (e.g., Cushman, Young, & Greene 2010; Harman et al.
2010). The idea is sometimes emphasized by rationalists (e.g., Mikhail 2011), but even
sentimentalists allow for unconscious reasoning, particularly in light of research on unconscious
probabilistic inference (Nichols, Kumar, & Lopez 2016; see also Zimmerman 2013).
No doubt some of one’s beliefs are formed without engaging in reasoning, conscious or
not. Basic perceptual beliefs are perhaps a good example. You believe that the door opening in
front of you retains a rectangular shape, but arguably you don’t form this judgment on the basis
of even tacit beliefs about angles in your field of vision. Rather, your visual system generates
such perceptual constancies by carrying out computational work among mental states that are
relatively inaccessible to introspection and isolated from other patterns of belief-formation (such
states are often called sub-personal, although sub-doxastic [Stich 1978] is probably more apt
[Drayson 2012]). As the visual experience of a rectangular door is generated, you believe that the
door is rectangular by simply taking your visual experience at face value. So perhaps it’s
inappropriate to posit unconscious reasoning (about angles and the like) at least because the
relevant transitions aren’t among beliefs—not even tacit ones.
Nevertheless, some inferential transitions between genuine beliefs are unconscious.
Within the category of unconscious mental processes, some generate beliefs on the basis of prior
beliefs (e.g., inferring that your son is on drugs). Other belief-generating processes don’t amount
to reasoning or inference (e.g., believing that an opening door is constantly rectangular), at least
because they are “subpersonal” or “subdoxastic.”
What about emotion? There is unfortunately even less consensus here. There are staunch
cognitivist theories on which emotions have cognitive content, much like or even exactly like
beliefs. Martha Nussbaum, for example, argues that our emotions contain “judgments about
important things” which involve “appraising an external object as salient for our own well-
being” (2001: 19). Non-cognitivist theories maintain that emotions lack cognitive content. Jesse
Prinz, for example, holds that emotions are “somatic signals… not cognitive states” although
they “represent concerns” (2007: 68). Moreover, while we often think of emotional processes as
rapid and automatic, they can be more drawn out and consciously accessible. One can, for
example, be acutely aware of one’s anxiety and its bodily effects, which may ebb and flow over
the course of days or weeks, as opposed to occurring in rapid episodes typical of fear or anger.
I suspect the concept of emotion is flexible and not amenable to precise definition. I’m
certainly not fond of classical analyses of concepts, which posit necessary and sufficient
conditions (May & Holton 2012; May 2014b). In any case, we can be ecumenical and conceive
of emotions as mental states and processes that have certain characteristic features. Heidi
Maibom provides a useful characterization of emotions as “mental states associated with
feelings, bodily changes, action potentials, and evaluations of the environment” (2010: 1000; cf.
also Haidt 2003: 853).
Suppose I negligently step on your gouty toe, so you become angry with me. Your anger
has an affective element: a characteristic feel. The emotion also has motivational elements that
often appear to activate relevant behavior: e.g., it motivates you to retaliate with verbal and
physical abuse (but see Seligman et al. 2016: ch. 8). Emotions also seem to have physiological
effects—e.g., your anger will lead to a rise in blood pressure, increased heart rate, and other
bodily changes. Finally, feeling angry also typically involves or at least causes cognitive
elements, such as thoughts about my blameworthiness, about the damage to your toe, about how
you could best retaliate, and so on.
I will understand such cognitive elements as, roughly, mental items whose function is to
accurately represent. A cognitive mental state, like a belief, can be contrasted with motivations,
goals, or desires, which arguably function to bring about the state of affairs they represent (Smith
1994). Tim and I may both believe that there is a taco on the table, but only I want to eat it, for he
is stuffed. My longing for the scrumptious taco involves a desire or a mental state whose
function is to bring it about that I eat the taco. Importantly, cognitive elements represent how
things are and can thus play a role in inference. Insofar as emotions can have cognitive elements
or at least effects on cognition, emotions can provide information and facilitate reasoning.
The cognitive elements or effects of emotions make the apparent reason/emotion
dichotomy blurry at best. Despite the similarities between the two, however, at least
one important difference may remain: it’s commonly assumed that feelings are essential to
emotions but not to the process of reasoning. Many researchers use the term “affect” to refer to a
kind of feeling (see, e.g., Seligman et al. 2016: 50), although it is something of a technical term
with different meanings for some theorists. Perhaps, then, we should just speak of the dichotomy
between inference/affect or cognitive/non-cognitive states. However, sometimes the connection
to rationalism and sentimentalism is clearer if we operate with the working conception of
reasoning and emotion and then contrast their cognitive vs. affective aspects.
So far, the working conception respects the worry that there is no sharp division between
reason and emotion. This overlap view, as we might call it, seems to satisfy many in empirical
moral psychology (e.g., Greene 2008; Maibom 2010; Helion & Pizarro 2014; Huebner 2015).
For others, however, it doesn’t go far enough.
On the total collapse view, there is no difference between reasoning and emotional
processing. Peter Railton, for example, construes the “affective system” quite broadly such that
“affect appears to play a continuously active role in virtually all core psychological processes:
perception, attention, association, memory, cognition, and motivation” (2014: 827; cf. also
Damasio 1994; Seligman, Railton, Baumeister, & Sripada 2016). On this picture, it may seem
that the debate between rationalists and sentimentalists is spurious, since affect and inference are
inextricable. However, what motivates the collapse view is a range of empirical evidence which
suggests that “emotion” turns out to be more like inference than we thought, not that “reason”
turns out to be less like inference than we thought. As James Woodward has put it, areas of the
brain associated with emotion are “involved in calculation, computation, and learning” (2016:
97).
This would be a welcome result for the view to be defended in this book, which aims to
emphasize the role of reasoning and inference in moral psychology. Indeed, the affective system
broadly construed is something humans share with many other animals (Seligman et al. 2016).
The total collapse view suggests that affective processes are necessary for moral judgment
merely because they’re required for inference generally, moral or otherwise. So we give
sentimentalists a better chance if we operate with the overlap view instead. To see this, we need
to consider in more detail the debate between rationalists and sentimentalists.

1.2.3 Rationalism vs. Sentimentalism

Clearly, both reason and emotion play a role in moral judgment. Nevertheless, a traditional
dispute remains between rationalists and sentimentalists over the comparative roles of inference
vs. feelings in distinctively moral cognition (Nichols 2008: n. 2; Maibom 2010: 1000; May &
Kumar forthcoming). The issue is interesting in its own right and we’ll eventually see that it has
important practical implications for how to develop moral knowledge and virtue.
The empirical claim made by those in the rationalist tradition is that reasoning is central
to moral cognition in a way that the affective elements of emotions are not. Such (empirical)
rationalists hold that moral judgment, just like many other kinds of judgment, is fundamentally
“a product of reason” (Nichols 2004: 70) or “derives from our rational capacities” (Kennett
2006: 70). However, as only a psychological thesis, “rational capacities” here is meant to be non-
normative—even poor reasoning counts as deriving from one’s “rational” capacities. We can
more clearly capture this idea by construing rationalism as the thesis that moral judgment is
ultimately “the culmination of a process of reasoning” (Maibom 2010: 999). Emotions can
certainly influence moral cognition, according to rationalists, but primarily insofar as they
facilitate inference; they aren’t essential for making a judgment distinctively moral.
On the sentimentalist picture I’ll resist, mere feeling or the affective component of
emotions is essential for moral cognition and thus moral knowledge (if such knowledge is
possible). Without emotions, a creature can’t make any moral judgments, because the feelings
constitutive of emotions are in some way essential to having moral concepts. As Hume famously
put it, when we condemn an action or a person’s character:
The vice entirely escapes you, as long as you consider the object. You never can find it,
till you turn your reflection into your own breast, and find a sentiment of disapprobation,
which arises in you, towards this action. (1739-40/2000: 3.1.1)
Hume clearly conceives of such sentiments or passions as feelings, and it’s this aspect of
emotions, not their role in inference, that sentimentalists see as distinctive of moral judgment.
Contemporary sentimentalists, such as Shaun Nichols, continue this line of thought, stating that
“moral judgment is grounded in affective response” (2004: 83, emphasis added). Moreover,
sentimentalists don’t merely claim that lacking feelings or affect would hinder moral judgment,
but rather that this would render one incapable of understanding right from wrong. Even when
sentimentalists emphasize the importance of reasoning and reflection in moral judgment, they
remain sentimentalists because they give “the emotions a constitutive role in evaluative
judgment” in particular (D’Arms & Jacobson 2014: 254; cf. also Kauppinen 2013).
Rationalists can agree that emotions are commonly involved in human moral judgment
and that lacking them leads to difficulties in navigating the social world. Humans are
undoubtedly emotional creatures, and sentiments pervade social interactions with others. To
build a moral agent, one might have to endow it with emotions, but only because a finite creature
living in a fast-paced social world requires a mechanism for facilitating rapid reasoning and
quickly directing its attention to relevant information. A creature with unlimited time and
resources needn’t possess emotions in order to make distinctively moral judgments (cf. Jones
2006: 3).
On the rationalist view, the role of emotions in morality is like the role of ubiquitous
technologies: they facilitate information processing and structure our way of life. If the Internet
were somehow broken, for example, our normal way of life would be heavily disrupted, but it’s
not as though the Internet is fundamental to the very idea of communication and business
transactions. Of course, in one sense the Internet is essential, as we rely on it for how we happen
to operate. But a cognitive science of how communication fundamentally works needn’t feature
the ability to use email. No doubt the analogy only goes so far, since emotions are not some
recent invention in human life. They are part of human nature, if there is such a thing. The point
is simply that, for sentimentalists, emotions are more than vehicles for information processing;
they partially define what morality is. Thus, even if emotions aid in reasoning, we still can
conclude that their affective elements aren’t necessary for moral judgment. The sentimentalist
tradition isn’t vindicated if emotions are merely ways of processing information more quickly,
rigidly, and without attentional resources (see Prinz 2006: 31).
Of course, emotions may be required for moral judgment, especially knowledge, merely
because experiencing certain emotions seems necessary for knowing what another is feeling.
Indeed, sentimentalists sometimes draw an analogy between moral judgments and judgments
about color: they are both beliefs typically caused by certain experiences (e.g., Hume 1739-40:
3.1.1; Prinz 2007: 16; Slote 2010; Kauppinen 2013: 370; Sinhababu 2017: ch. 4). The relevant
experience may then be necessary for knowledge, particularly because such experiences are
conscious, or essentially qualitative, mental states. And understanding what a sensation or
experience is like seems impossible without having it oneself (Jackson 1982). In the moral
domain, men in power have historically taken a paternalistic attitude toward women, and yet men
presumably don’t generally know exactly what it’s like to be a woman or to carry a child to term.
As some liberals are fond of saying: If men were giving birth, there wouldn’t be much discussion
about the right to have an abortion. Perhaps even women don’t know these things either until
they have the relevant experiences (see Paul 2014). Similarly, an emotionless robot may be
ignorant of some moral facts in virtue of lacking feelings of love, grief, pride, or fury.
Even so, this doesn’t show that emotions are essential for making a moral judgment. At
best certain experiences are sometimes required for understanding a phenomenon. A
sophisticated robot could acquire the relevant knowledge by having the requisite experiences. In
fact, this is just an instance of a more general problem of ignorance of morally relevant
information. Suppose I visit my grandmother in the hospital in Mexico. I know what it is to
suffer but I falsely believe that the Spanish word “sufre” refers to, not suffering, but the
vegetarian option at a Chipotle restaurant. Then I won’t know that the nurse did wrong when she
made “mi abuela sufre.” Does this imply that Spanish is essential for moral knowledge? In
certain circumstances, I must know the relevant language, but this is too specific for a general
characterization of what’s psychologically essential for moral judgment. Similarly, suppose one
doesn’t fully understand, say, the anguish of torture or the humiliation of discrimination unless
one experiences them firsthand. Such examples don’t demonstrate that feelings are essential for
making distinctively moral judgments but rather judgments about specific cases. The
theoretically interesting position for sentimentalists to take is the one that many have indeed
taken: emotions are required for understanding right from wrong generally, not merely for
understanding a subset of particular moral claims.

1.3 Pessimism about Moral Motivation


1.3.1 Sources of Pessimism

Suppose the previous challenges have been rebutted: ordinary moral cognition is a fundamentally
rational enterprise capable of rising to moral knowledge or at least justified belief. Still, we
might worry that we rarely live up to our scruples, for self-interest and other problematic
passions too frequently get in the way. Even if we do end up doing the right thing, we do it for
the wrong reasons. When we’re honest, fair, kind, and charitable, it’s only to avoid punishment,
to feel better about ourselves, or to curry someone’s favor. Something seems morally lacking in
such actions—let’s say that they’re not fully virtuous. Just as merely true but unjustified belief
doesn’t seem to deserve a certain honorific (e.g., “knowledge”), merely doing the right thing, but
not for the right reasons, doesn’t warrant another moniker (“virtue”).
To be truly virtuous, it seems in particular that moral considerations should more
frequently guide our behavior; reason cannot be a slave to non-rational passions, selfish or
otherwise. Kant (1785/2002) famously thought that only such actions—those done “from
duty”—have moral worth. For example, we’d expect a virtuous merchant not only to charge a
naïve customer the normal price for milk but to do it for more than merely self-interested
reasons—e.g., to avoid a bad reputation.
Many believe the science warrants pessimism: deep down we’re primarily motivated to
do what’s right for the wrong reasons, not morally relevant considerations. Robert Wright, for
example, proclaims that an evolutionary perspective on human psychology reveals that we’re
largely selfish, and yet we ironically despise such egoism:
[T]he pretense of selflessness is about as much a part of human nature as is its frequent
absence. We dress ourselves up in tony moral language, denying base motives and
stressing our at least minimal consideration for the greater good; and we fiercely and self-
righteously decry selfishness in others. (1994: 344)
This disconcerting account paints us as fundamentally egoistic. On the most extreme version—
psychological egoism—all of one’s actions are ultimately motivated by self-interest. We are
simply incapable of helping others solely out of a concern for their welfare. An ulterior motive
always lurks in the background, even if unconsciously.
There is a wealth of rigorous research that seems to suggest that altruism is possible
particularly when we empathize with others. However, compassion can be rather biased,
parochial, and myopic. We are more concerned for victims who are similar to ourselves, or part
of our in-group, or vividly represented to tug at our heartstrings, rather than a mere abstract
statistic (Cialdini et al. 1997; Jenni & Loewenstein 1997; Batson 2011). Moreover, studies of
dishonesty suggest that most people will rationalize promoting their self-interest instead of moral
principles (Ariely 2012). Even if we’re not universally egoistic, we may not be far from it
(Batson 2016).
A related source of pessimism draws on the vast research demonstrating the situationist
thesis that unexpected features of one’s circumstances have a powerful influence on behavior.
Many have taken this literature to undermine the existence of robust character traits or
conceptions of agency and responsibility that require accurate reflection. However, even if we
jettison commitments to character traits and reflective agency, results in the situationist literature
pose a further challenge. If our morally relevant actions are often significantly influenced by the
mere smell of fresh cookies, the color of a person’s skin, an image of watchful eyes, and the like,
then we are motivated by ethically arbitrary factors (see, e.g., Nelkin 2005; Nahmias 2007;
Vargas 2013; Doris 2015). A certain brand of situationism, then, may reveal that we’re
chronically incapable of acting for the right reasons.
Suppose we do often do what’s right for more than self-interested or arbitrary reasons.
Proponents of Humeanism would argue that, even when we behave morally, we are beholden to
our unreasoned passions or desires (e.g., Sinhababu 2009; Schroeder, Roskies, & Nichols 2010).
If Humeans are right, our actions are always traceable to some ultimate or intrinsic motive that
we have independent of any reasoning or beliefs. Bernard Williams famously discusses an
example in which a callous man beats his wife and doesn’t care at all about how badly this
affects her (1989/1995: 39). On the Humean view, we can only motivate this man to stop his
despicable behavior by getting him to believe that being more kind will promote something he
already cares about. We must try to show him that he’ll eventually be unhappy with himself or
that his treasured marriage will fall apart. Pointing out that he’s being immoral will only
motivate if he happens to care, and care enough, about that. If, however, refraining from physical
abuse will not promote anything this man already wants, then the Humean says there is nothing
that could motivate him to stop except a change in his concerns.
The Humean theory can be conceived as a kind of pessimism if acting for the right
reasons requires ultimately acting on the basis of recognizing the relevant reasons, not an
antecedent desire. Some, like Thomas Reid, seem to think so:
It appears evident… that those actions only can truly be called virtuous, and deserving of
moral approbation, which the agent believed to be right, and to which he was influenced,
more or less, by that belief. (1788/2010: 293)
We do often describe one another’s actions this way—e.g., “She did it because she knew it was
the right thing to do”—without appealing to an antecedent desire to be moral.
However, Humeans might retort that acting for the right reasons requires only being
motivated by specific moral considerations (e.g., kindness, fairness, loyalty), not the bare belief
that something is right per se (cf., e.g., Arpaly 2003: ch. 3). Perhaps, for example, a father
shouldn’t have “one thought too many” about whether he should save his own drowning
daughter over a stranger’s (Williams 1976/1981). In general, the virtuous person presumably
wouldn’t “fetishize” morality but rather be ultimately concerned with the welfare of others,
fidelity to one’s commitments, and so on (Smith 1994), and a moral belief might still be
problematic in this way (Markovits 2010). We’ll grapple with this issue later (Chapters 7-8), but
for now suffice it to say that a certain kind of pessimism about the role of reason in moral
motivation remains if Humeanism is right.
For a variety of reasons, pessimists conclude that the aim of doing what’s right for the
right reasons is practically unattainable. On a common account of what’s required for virtuous
motivation, it’s practically out of reach for most of us. I aim to show that we are capable of
genuinely altruistic motivation and that our beliefs about what we ought to do can motivate
action without merely serving or furthering some antecedent desire. Moreover, while features of
the situation certainly influence what we do, the ethically suspect influences do not
systematically conflict with virtuous motivation. I ultimately argue that humans are capable of
acting from duty or doing the right thing for the right reasons. Morally good motives are not
rarities.

1.3.2 Non-cognitivism & Relativism

The discussion so far has assumed that we can have moral beliefs, conceived as distinct from
emotions, desires, or other passions. A complete defense of anti-Humeanism and rationalism
requires showing that moral judgments don’t just express non-cognitive states. Consider, for
example, the sentence “Slavery is immoral.” It seems such sentences don’t always merely
express one’s negative feelings toward slavery. That is, it seems that non-cognitivism about
moral judgment is false. Unlike beliefs, mere feelings and desires arguably can’t be evaluated for
truth or accuracy, which makes it difficult to see how they can be part of a process of reasoning
or inference.
Importantly, rejecting non-cognitivism needn’t commit one to denying relativism, the
view that moral statements are only true relative to some framework, such as the norms of one’s
culture. I don’t assume that moral judgments are robustly objective but rather that they can be
cognitive, similar to other beliefs. When I say “Lebron is tall,” this may be true only relative to a
certain contrast class (ordinary people, not basketball players), but it is nonetheless assessable for
truth or falsity in a certain context. In a somewhat similar fashion, moral truths are nonetheless
truths even if they are in some sense relative to a culture, species, or kind of creature. So we
needn’t assume that moral truths are objectively true—a core element of moral realism (Shafer-
Landau 2003)—in order to defend moral knowledge, conceived as justified true belief.
I don’t intend to argue at length against non-cognitivism. The view has largely already
fallen out of favor among many researchers. A survey of philosophers conducted in 2009 reveals
that only 17% lean toward or accept it (Bourget & Chalmers 2014: 476). There is good reason
for this. The famous Frege-Geach problem, which I won’t rehearse here, shows that non-
cognitivists struggle to make sense of moral language without drastically revising our best
conception of logic and semantics (Schroeder 2010). Non-cognitivism is not exactly a live
empirical theory either, as psychologists and neuroscientists appear to assume that moral
judgments express beliefs. For example, rather than simply identify moral judgments with
emotions or desires, researchers look to whether emotions are a cause or consequence of the
moral judgment. In fact, the vast majority of “pessimists” I’ll target assume cognitivism as well.
Moreover, we needn’t accept non-cognitivism to account for the various uses to which
moral judgment can be put. For example, hybrid theories can capture the idea that we sometimes
use sentences like “That’s just wrong” to express a negative reaction, like a feeling or desire, or
to express a belief that an action or policy is wrong. Compare statements containing a pejorative,
such as “Yolanda’s a Yankee,” which in some countries is used to express both a belief (Yolanda
is American) and a distaste for her and other Americans (Copp 2001: 16). I favor something like
this model (May 2014a), according to which moral judgments can express both cognitive and
non-cognitive states (cf. also Kumar 2016a). However, I assume here only the falsity of non-
cognitivism, which is compatible with either a hybrid view or a strong cognitivist theory on
which moral judgments only or chiefly express beliefs.

1.4 Optimistic Rationalism


My primary aim is to resist the predominant pessimism about ordinary moral psychology that has
developed in light of scientific research on the human mind. I will offer a more optimistic
defense of ordinary moral thought and action in which reason plays a fundamental role—
optimistic rationalism, if you will.
Since pessimism comes in many forms, an optimistic view must be multi-faceted, with
various components in opposition to the variety of pessimistic arguments. In particular, I aim to
undermine some popular sources of empirically grounded pessimism (see Figure 1.1). I thus
contend that moral judgments are generated by fundamentally cognitive and rational processes
(rationalism), which are not subject to wide-ranging empirical debunking arguments (anti-
skepticism). Moreover, moral motivation is not always ultimately egoistic (psychological
altruism), is heavily driven by a concern to do what’s right, and is not always a slave to
unreasoned passions (anti-Humeanism). All of this casts doubt on the idea that virtuous
motivation is rare among ordinary individuals (anti-skepticism).

Figure 1.1: Key Sources of Empirically Grounded Pessimism

Note: Parentheses indicate in which chapters the source is primarily addressed.

Some may regard this cluster of views as closely associated with the Kantian tradition in
moral philosophy. However, one can defend an optimistic picture of moral psychology without
adopting a specific Kantian articulation of what precisely makes an action immoral. For
example, Kant (1785/2002) says an action is wrong if the maxim on which it is based can’t be
rationally chosen as a universal law. The theory developed in this book does not commit to
knowledge of such specific accounts of fundamental moral principles. It’s similar in some
important respects to the moral psychology of the great Chinese philosopher Mencius (Morrow
2009) and of some contemporary philosophers who are not particularly Kantian. So non-Kantian
moral theorists—especially virtue ethicists, but even some consequentialists—may find much to
agree with in what follows.
At any rate, few optimists have taken the empirical challenges seriously, let alone
answered them successfully. Some valiant attempts are simply incomplete in that they only
address one aspect of moral psychology, such as moral judgment (e.g., Maibom 2005; Kamm
2009; Kennett & Fine 2009; Mikhail 2011; Sauer 2017) or moral motivation (e.g., Kennett 2002;
Kennett & Fine 2008; de Kenessey & Darwall 2014; Sie 2015). Others claim to be optimists but
embrace what I regard as sources of pessimism, such as simple sentimentalism (e.g., de Waal
2009) or revisionary utilitarianism (e.g., Greene 2013). This book aims to provide a more
complete and satisfactory defense.
I employ a divide and conquer strategy, breaking our moral minds into two key
components (and their corresponding normative ideals): moral judgment (and knowledge) and
moral motivation (and virtue). Consider how these two may come together or apart. Suppose
you’re deciding whether you ought to press charges against your thieving son who is in the grips
of a severe drug addiction. If all goes well, you form the correct judgment, it’s warranted or
justified, and you thus know what to do. Suppose you decide it’s best to proceed with the
charges. Next is the important task of living up to your standards. If you’re virtuous, you will act
according to this judgment and for the right reasons, yielding moral motivation that exhibits
virtue.
One of my overarching aims is to reveal the deep connections and parallels in these two
aspects of our moral minds—judgment and motivation—which are often addressed separately
and by different sets of researchers. In subsequent chapters, we’ll see that our moral beliefs are
formed primarily on the basis of unconscious inference, not feelings, and that these moral
beliefs play a prominent role in motivating action.

1.4.1 From Moral Judgment to Knowledge

The next four chapters form Part I, which tackles moral judgment and to what extent it rises to
knowledge or at least justified belief.
Chapter 2 (“The Limits of Emotion”) argues that, contrary to the current sentimentalist
orthodoxy, there is insufficient reason to believe that feelings play an integral role in moral
judgment. The empirical evidence for sentimentalism is diverse, but it is rather weak and has
generally been overblown.
Chapter 3 (“Reasoning Beyond Consequences”) turns to some of the complex inferential
processes that do drive ordinary moral thinking. Ample experimental evidence establishes in
particular that we often treat more than just the consequences of one’s actions as morally
significant. Ultimately, much of ordinary moral judgment involves both conscious and
unconscious reasoning about outcomes and an actor’s role in bringing them about.
But don’t we have empirical reasons to believe that core elements of ordinary moral
judgment are defective? Chapter 4 (“Defending Moral Judgment”) argues that ordinary moral
cognition can yield justified belief, despite being partly influenced by emotions, extraneous
factors, automatic heuristics, and evolutionary pressures. I rebut several prominent, wide-ranging
debunking arguments by showing that such pessimists face a Debunker’s Dilemma: they can
identify an influence on moral belief that is either defective or substantial, but not both. Thus
wide-ranging empirical debunkers face a trade-off: identifying a substantial influence on moral
belief implicates a process that is not genuinely defective.
By restoring reason as an essential element in moral cognition, the foregoing chapters
undermine key sources of support for the sentimentalists and the debunkers. Such pessimists
have tended to accept the idea that feelings play an important role in ordinary moral judgment.
Sentimentalists embrace this as a more or less complete characterization. Debunkers instead use
the apparent power of emotion as a source of skepticism about either all of moral judgment or
only some of its more intuitive bases. With a regard for reason, ordinary moral thinking is on
safer ground.
However, while moral knowledge is possible, Chapter 5 (“The Difficulty of Moral
Knowledge”) admits that we are far from flawless moral experts. There are two key empirical
threats to the acquisition or maintenance of well-founded moral beliefs. First, empirical research
can indeed reveal questionable influences on our moral views. While wide-ranging debunking
arguments are problematic, this does not hinder highly targeted attacks on specific sets of moral
beliefs (e.g., some influenced by implicit biases). Second, while people share many values, most
ordinary folks have foundational disagreements with others who are just as likely to be in error
(“epistemic peers”). However, this threat is likewise constrained since many moral
disagreements aren’t foundational or aren’t with what most people should regard as their peers.

1.4.2 From Moral Motivation to Virtue

Part II consists of four chapters that focus on ordinary moral action and whether it’s compatible
with virtuous motivation, which involves doing the right thing for the right reasons.
Chapter 6 (“Beyond Self-Interest”) argues that we can ultimately be motivated by more
than egoistic desires. Decades of experiments in social psychology provide powerful evidence
that we are capable of genuine altruism, especially when empathizing with others. The
psychological evidence, moreover, cannot be dismissed as showing that empathy blurs the
distinction between self and other so much that it makes helping behavior non-altruistic.
Even if we can rise above self-interest, we may just be slaves to our largely, if not
entirely, egoistic passions. Chapter 7 (“The Motivational Power of Moral Beliefs”) argues that
the motivational power of reason, via moral beliefs, has been understated. A wide range of
experimental research shows that when we succumb to temptation it’s often due in part to a change in moral (or normative) belief. Rationalization, perhaps paradoxically, reveals a deep regard for reason—to act in ways we can justify to others and to ourselves. The result is that, even when we behave badly, actions that seem motivated by self-interest are often ultimately driven by a concern to do what’s right (moral integrity). This addresses a second form of egoistic pessimism
but also sets up a challenge to the Humean theory addressed in the next chapter.
Chapter 8 (“Freeing Reason from Desire”) picks up on the idea that our beliefs about
which actions we ought to perform have a pervasive effect on what we do. Humean theories
would of course insist on connecting such beliefs with an antecedent motive, such as a desire to
do what’s right. However, I first shift the burden of proof onto Humeans to motivate their more
restrictive, revisionary account. I then show that Humeans are unlikely to discharge this burden
on empirical grounds, whether by appealing to research on neurological disorders, the
psychology of desire, or the scientific virtue of parsimony.
Chapter 9 (“Defending Virtuous Motivation”) considers further empirical threats to our
ability to act for the right reasons. There are two main threats: self-interested rationalization and
arbitrary situational factors. However, wide-ranging versions of such empirical challenges
resemble sweeping attempts to debunk moral knowledge, and they’re likewise subject to a
dilemma. One can easily identify an influence on a large class of actions that is either substantial
or defective but not both. Thus, like moral knowledge, the science suggests that the empirical
threat to virtue is limited.

1.4.3 Moral Enhancement

The previous chapters defend the idea that, based on our regard for reason, ordinary moral
thought and action are capable of rising to knowledge and virtue. But of course such optimism
must be cautious. We do often behave badly, or do what’s right for the wrong reasons, or lack
justified moral beliefs.
Chapter 10 (“Cautious Optimism”) serves as a brief conclusion with a recapitulation of
the main claims and moves made in the book, along with a discussion of how moral knowledge
and virtue can be enhanced. One broad implication of optimistic rationalism is that the best
method for making more of us more virtuous will not target our passions to the exclusion of our
cognitive, reasoning, and learning abilities. However, sound arguments aren’t enough, for human
beings are fallible creatures with limited attention spans. Still, the impediments to virtue are not
primarily the absence of reason or our basic modes of moral thought; rather we must combat
ignorance, self-interested rationalization, and the acquisition of misinformation and vices.

There is a further reason for caution, and a caveat. For all I will say here, one might adopt a
truly global skepticism and conclude, on empirical grounds, that we don’t know right from
wrong and can’t act virtuously because reason itself is thoroughly saturated with defective
processes, both inside and outside the moral domain. It’s beyond the scope of this book to
grapple with such a deep skepticism about our cognitive and inferential capacities. A vindication
of moral knowledge or virtue, especially given a rationalist moral psychology, would ultimately
require defending the reliability of our cognitive faculties generally. I’ll be content here,
however, if I can show that empirical research doesn’t reveal that reason is largely absent or
defective in our basic modes of moral thought and motivation.

1.5 Coda: Appealing to Science


We’ll encounter a great deal of empirical research throughout this book. We should proceed with
some caution given heightened awareness of concerns arising in experimental psychology and
other sciences.
First, there is a somewhat surprising amount of fraud, in which researchers fabricate
data—and moral psychologists are no exception (Estes 2012). Second, there is an unsettling
amount of poor scientific practice. Much of this falls under the heading of p-hacking, as when
researchers continuously run participants in a study until they find a statistically significant
result, which increases the likelihood of a false positive. Third, the scientific process itself has
flaws. For example, there are publication biases in favor of shocking results and against null
findings, including failures to replicate a previous result. One consequence is the file drawer
problem in which failures to detect a significant effect are not published or otherwise circulated,
preventing them from being factored into the cumulative evaluation of evidence. Related to this,
the rate of replication seems unfortunately low in the sciences generally, including psychology in
particular—an issue some call RepliGate (e.g., Doris 2015). A recent group of over 200
researchers attempted to carefully replicate 100 psychological studies and found that only about 39% succeeded (Open Science Collaboration 2015).
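
To see why optional stopping in particular is pernicious, consider a minimal simulation—my own illustration, not drawn from any study cited here. Both groups below are sampled from the same population, so any “significant” difference is a false positive; re-testing after each new batch of participants inflates how often one appears.

    # Sketch of optional stopping: test, add participants, re-test until p < .05.
    # Both groups come from the SAME population, so every hit is a false positive.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def false_positive_rate(peeks, runs=2000):
        hits = 0
        for _ in range(runs):
            a = list(rng.normal(size=10))
            b = list(rng.normal(size=10))
            for _ in range(peeks):
                if stats.ttest_ind(a, b).pvalue < .05:
                    hits += 1
                    break
                a += list(rng.normal(size=5))  # run five more participants per group
                b += list(rng.normal(size=5))
        return hits / runs

    print(false_positive_rate(peeks=1))   # one planned test: near the nominal .05
    print(false_positive_rate(peeks=20))  # repeated peeking: substantially inflated

Nothing in the simulated data is fraudulent; the inflation comes entirely from the testing procedure.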
An additional problem is that much of the empirical research in moral psychology is done
on a small portion of the population, typically undergraduates in North American and European
universities. That is changing, as researchers are increasingly recruiting participants from outside
of universities, including some from multiple cultures. Still, as Joseph Henrich and his
collaborators (2010) have put it, the majority of research participants are from societies that are
predominantly Western, educated, industrialized, rich, and democratic (WEIRD people). This is
especially problematic when we have empirical evidence that what appear to be psychological
universals are not, at least not to the same degree in all societies.
Of course, we shouldn’t overreact. The vast majority of scientists are not frauds and
many conduct careful and rigorous studies. While participants are often WEIRD, such a subject
pool may suffice if one’s aim is merely to establish the existence or possibility of certain
psychological mechanisms, not their universality. Moreover, replication attempts shouldn’t
necessarily be privileged over the original studies. The original could have detected a real effect
while the later study is a false negative. The cutoff for statistical significance (typically p < .05)
is somewhat arbitrary, after all. A statistically significant result only means, roughly, that there is
a low probability (less than .05) that the observed difference, or a greater one, would appear in
the sample, even when there is no real difference in the population (that is, when the null
hypothesis is true). The p-value importantly doesn’t represent the probability that any hypothesis
is true but rather a conditional probability: the probability of observing a certain result assuming
that the null hypothesis is true. Thus, if a replication attempt is close to passing the conventional
threshold—nearly yielding a successful replication—we may still have some reason to believe in
the effect. Observing a difference between experimental groups that yields a p-value of .06, for
example, doesn’t exactly amount to conclusive reason to accept the null. In general, it’s more
difficult to prove a negative (e.g., that an effect is bogus) than it is to establish the existence of a
phenomenon.
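
To make the point concrete, here is a toy simulation with purely illustrative numbers (not data from any study discussed in this book). A p-value estimates how often chance alone would produce a difference at least as large as the one observed, on the assumption that the null hypothesis is true.

    # What a p-value estimates: the probability of a difference at least this
    # large in the sample, computed ASSUMING the null hypothesis is true.
    # The group mean, spread, and sample size below are all hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)
    observed_diff = 10    # e.g., a 10-point gap on a 100-point rating scale
    n, sd = 30, 25        # hypothetical group size and standard deviation

    # Simulate many experiments in a world with NO real group difference and
    # count how often chance yields a gap at least as large as the observed one.
    diffs = [abs(rng.normal(50, sd, n).mean() - rng.normal(50, sd, n).mean())
             for _ in range(10_000)]
    p = float(np.mean(np.asarray(diffs) >= observed_diff))
    print(p)  # P(difference this large | null true), NOT P(null true | data)

Whether that number comes out at .04 or .06 says nothing directly about how probable the null hypothesis itself is—which is why a near-miss replication is weak grounds for declaring an effect bogus.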
There is certainly room for improvement in science, including larger sample sizes, more
replication attempts, and more cross-cultural research. But science can clearly advance our
knowledge, even about the mind and our complex social world, provided we aren’t overly
credulous. For example, as Machery and Doris (2017) emphasize, one shouldn’t stake a
conclusion on a single study, ideally not even on a few studies from one lab, especially when
sample sizes are low. It’s best to draw on a large set of studies in the literature, appealing where
possible to meta-analyses and reviews, while recognizing of course that these aren’t definitive
either. Caution and care can ultimately yield strong arguments based on scientific data.
Despite judicious appeal to the science, I tread lightly when drawing conclusions from
empirical studies or philosophical analysis. Like Hume, I suspect the truth about such perennial
issues will be difficult to uncover, and “to hope we shall arrive at it without pains, while the
greatest geniuses have failed with the utmost pains, must certainly be esteemed sufficiently vain
and presumptuous” (1739-40: intro, 3). So I don’t claim to have conclusively proven the theses
in this book. Thankfully, though, my main aim is more modest. Defending a more optimistic
conception of our righteous minds requires merely showing that it’s a plausible approach given
our best evidence to date. No chapter is meant to establish definitively the relevant claim it
defends. The value of the book is largely meant to arise from all of the parts coming together to
exhibit a counterweight to the pessimistic trend.

PART I: Moral Judgment & Knowledge

Ch. 2: The Limits of Emotion


2.1 Introduction
Emotions and moral judgment seem to go hand in hand. We feel outrage at injustice,
compassion for victims of abuse, repugnance toward corrupt politicians, and a warm joyous
elevation toward moral saints and heroes. Suppose, for example, that you hear breaking news of
a deadly terrorist attack. Watching the story unfold, learning details of the gratuitous suffering
and the loss of so many innocent lives, you experience a mixture of feelings—sadness, anger,
disgust—and you naturally judge the relevant action to be immoral. But which came first, the
feelings or the judgment? Do you believe the act of terror was unethical because of your negative
feelings or do you have those feelings because of your moral evaluation?
Empirical evidence could help to settle the issue. Psychological science, particularly in
the tradition of Lawrence Kohlberg (1973), used to emphasize the role of inference and
reflection in mature or adult moral judgment, not emotion. The tradition fit well with the
rationalist idea that reasoning is integral to moral cognition. Feelings are either merely the
natural consequences of moral judgment or provide just one way of instigating or facilitating
reasoning that leads to such judgments.
More recently, however, there has been something of an “affect revolution” in
psychology generally, as well as moral psychology in particular (Haidt 2003: 852). There is
apparently converging scientific evidence for the sentimentalist idea that the affective aspects of
moral emotions, or feelings, play a foundational role in moral judgment. Jesse Prinz, for
example, proclaims: “Current evidence favors the conclusion that ordinary moral judgments are
emotional in nature” (2006: 30). Again, there is no bright line dividing reason from emotion (see
Chapter 1, §1.2.2), and each clearly influences moral thinking to some degree. However,
sentimentalists maintain that feelings play a foundational role in distinctively moral judgment
(see Chapter 1, §1.2.3).
This chapter and the next together form an empirically grounded argument against the
new sentimentalist orthodoxy. We’ll see in this chapter that there is no compelling evidence that
the affective elements of moral emotions are causally necessary or sufficient for making a moral
judgment or for treating norms as distinctively moral. Chapter 3 then shows that, while it is
misguided to emphasize reflection and the articulation of reasons, moral cognition is chiefly
driven by unconscious inference, just like other forms of thought.

2.2 Moralizing with Feelings?


A wealth of studies purport to show that feelings substantially influence moral judgment. There
are certainly studies showing that moral judgments are correlated with emotions (Moll et al.
2005), but that is no evidence in favor of sentimentalism. Since we care deeply about moral
issues, rationalists can happily accommodate emotions being a consequence of moral judgment
(Prinz 2006: 31; Huebner et al. 2009). Ideally sentimentalists would be able to show that simply
having strong feelings can make us moralize an action—e.g., come to believe that an action is
wrong when we previously thought it morally acceptable.
To establish this, we must be able to disentangle emotions from their cognitive
components or their effects on inference. As Prinz points out, rationalists could happily admit a
causal role for emotions by holding, for instance, that they “merely draw our attention to morally
relevant features of a situation” (2006: 31) at which point reasoning processes could play a
substantial role (cf. Huebner et al. 2009; Nichols 2014: 738; Scanlon 1998: ch. 1.8). Moreover,
many moral emotions, such as compassion and indignation, are intimately bound up with beliefs
about the morally relevant facts. Empathy nicely illustrates the issue. Many classical and
contemporary sentimentalists have pointed out that putting ourselves in the shoes of a victim can
lead us to condemn the perpetrator’s actions. The moral judgment was surely driven at least by
the cognitive side of empathy, in which we acquire a vivid understanding of the victim’s plight.
But in empathizing we also share in the victim’s feelings of anguish. Is this affective side of
empathy essential? It’s difficult to tell. Sentimentalists have accordingly been drawn to the
explosion of research on incidental emotions in which the feelings are unrelated to the action
evaluated. Being grossed out by someone using the bathroom, for example, can be divorced from
morally relevant thoughts about embezzlement.

2.2.1 Moralizing Conventions

One prominent sentimentalist strategy is to establish that feelings are essential to making
distinctively moral judgments. For this to work, we need a characterization of the concept of
morality or some core aspect of it.
One mark of moral norms is that they appear to be distinct from mere conventions. The
norms of etiquette require that I utter certain words to people after they sneeze, and some school
rules dictate that children wear a certain uniform. Such conventions can be contrasted with moral
norms, such as those that prohibit physically harming innocent people or invading someone’s
privacy. Violating moral norms is rather serious and this isn’t contingent on an authority’s
decree. Wearing pajamas to school, by contrast, seems less serious generally and certainly more
acceptable if the teacher says it’s okay. Moreover, explanations for why one shouldn’t violate a
convention are less likely to point to considerations of harm, fairness, or rights.
A large body of empirical evidence seems to confirm that people ordinarily draw some
sort of moral/conventional distinction. In general, compared to moral transgressions, we treat
violations of conventions as less serious, more permissible, contingent on authority, valid more
locally than universally, and involving distinct justifications that don’t primarily appeal to
another’s welfare or rights (Turiel 1983). This distinction between types of norms appears to
develop quite early—around age 4—and seems to be universal across many cultures, religions,
and social classes (Nucci 2001).

Drawing heavily on this research, Shaun Nichols (2004) has argued that what makes us
moralize a norm is that it’s backed by strong feelings or affect. While rules or norms are essential
to moral judgment, they aren’t sufficient, for they may be conventional, not moral. What makes a
judgment moral has to do with our feelings toward the norm that has been violated (or upheld,
presumably).
The key test of this “sentimental rules account” comes from studies in which Nichols
(2004) sought to demonstrate that people would moralize the violation of a convention if they
were especially disgusted by it (e.g., a person snorting and spitting into his napkin at dinner). In
the first experiment, Nichols found evidence that participants would treat repulsive
transgressions of etiquette as more like moral transgressions (that is, less conventional)
compared to violations of emotionally neutral conventions. The people in his small sample were
inclined to rate the disgusting transgressions as slightly more serious, less permissible, and less
authority contingent (while justifications varied). In the second experiment, Nichols divided
participants into high and low disgust-sensitivity groups, based on their scores on a disgust scale previously validated by other researchers. Participants especially sensitive to disgust
tended to treat disgusting transgressions as less conventional, compared to the other group.
However, while disgust-sensitive participants rated repulsive transgressions as more serious and
less authority contingent, there was no difference between the groups’ permissibility ratings
(2002: 231).
Does this provide strong evidence that feelings alone can moralize? There are several
reasons for doubt. First, disgust was not manipulated in either experiment, and in the second
study disgust was merely identified as likely to be more intense in a certain group. We can’t be
sure that the different responses these groups provided were merely due to differing levels of
disgust experienced, rather than another factor. Second, permissibility ratings are arguably a key
element of moral judgment, yet there was no difference among those participants who were
especially disgust-sensitive. While these participants did rate disgusting transgressions as slightly
more serious and less contingent on authority, this is a far cry from moralizing. It is interesting
that elevated disgust seems to correspond to treating a transgression as less authority contingent.
However, third, Nichols did not directly measure whether more disgusting violations strike
people as involving more psychological harm, which fails to pry the emotion apart from a
morally relevant belief and would explain any tendency to treat disgusting transgressions as a bit
more of an ethical issue. Follow-up studies by Royzman et al. (2009) suggest that perception of
harm accounts for some of the moralization of disgusting transgressions. Moreover, with a much
larger sample size Royzman and colleagues were not able to replicate Nichols’s original result
when the disgust scale was administered two weeks prior to soliciting moral reactions to the
hypothetical transgressions. With this improved design, participants were less likely to be aware
of the hypothesis being tested or to have their assessments of the transgressions influence their
responses on the disgust scale.
Nichols, along with his collaborator David Yokum, has conducted a related study that
directly manipulated another incidental emotion: anger (reported in Nichols 2014: 737). Some
participants were randomly assigned to write about an event that made them particularly angry
and then judged the appropriateness of an unrelated etiquette violation. Some of the participants
feeling greater incidental anger were more likely than controls to say that if someone disagrees
with them about the etiquette violation, then one of the disputants “must be mistaken.” This
study might seem to further support sentimentalism (Prinz 2016: 55). However, the small effect
was found only among women. More importantly, even if such an effect had been found for
more than a subgroup (as in Cameron et al. 2013), the data suggest a change in judgments of
objectivity, not authority-independence in particular—and certainly not a change in all or even
most of the characteristic features of norms that transcend mere convention.
A broader problem here is that it’s unclear whether the moral/conventional distinction
does appropriately measure moralizing anyway. Daniel Kelly et al. (2007) had participants
evaluate a broader range of harmful actions than the usual “school yard” transgressions found in
work on the moral/conventional distinction. The results provide some evidence that not all
violations of moral rules yield the signature pattern of responses. For example, most of their
participants thought that it’s very bad to train people in the military using physical abuse—but
only if government policy prohibits it. The norm is apparently regarded as a moral one even
though its status is authority-dependent.
While there may be concerns about some aspects of the study conducted by Kelly and
colleagues (Kumar 2015), there are good theoretical reasons for expecting such data. As Heidi
Maibom (2005: 249) points out, many norms that would be dubbed mere “conventions” often
seem moral. For example, if I speak without the talking stick in hand, then I’ve violated a rule
that’s not very serious, not exactly highly impermissible, and dependent on an authority that set
the rule. If the councilor says anyone can talk, with or without the stick, then there’s no
transgression. Nevertheless, when the rule is in place, consistently speaking without the stick and
interrupting others is rude, pompous, and inconsiderate. A line between moral and merely
conventional is difficult to discern when one is treating others poorly by violating a local
convention.
In sum, it doesn’t seem sentimentalists can find strong support in research on incidental
emotions and the moral/conventional distinction. The distinction is certainly a valuable heuristic
for distinguishing many moral rules from non-moral ones, perhaps even as a rough way of
characterizing the essence of a norm’s being moral (Kumar 2015). But it’s unclear in this context
whether one group of people count as moralizing a norm just because they treat a transgression
as slightly less conventional than another group does. More importantly, even if treating a rule as
slightly less conventional suffices for moralization, we lack solid evidence that this is driven by
mere feelings, such as incidental disgust or anger, rather than tacit thoughts about increased
psychological harm.

2.2.2 Amplifying with Incidental Emotions

A better route to sentimentalism appeals to research that manipulates emotions specifically and
directly measures moral judgment. However, recall that rationalists predict that emotions can
influence moral judgments by influencing reasoning. For example, emotions can draw one’s
attention to morally relevant information that then facilitates inference. The best evidence for
sentimentalism, then, would demonstrate that manipulating incidental feelings alone
substantially influences moral cognition.
Dozens of such experiments purport to demonstrate just such an effect. And many
philosophers and scientists champion them as vindicating the role of emotions in practically all
of moral judgment (e.g., Haidt 2001; Prinz 2007; Chapman & Anderson 2013; Sinhababu 2017)
or at least large swaths of it (e.g., Nado, Kelly, & Stich 2009; Horberg, Oveis, & Keltner 2011;
Kelly 2011; Plakias 2013; Greene 2013). The evidence, however, again underwhelms. Rather
than support sentimentalism, the studies suggest that incidental emotions hardly influence moral
judgment and are instead typically a mere consequence. But let’s first consider some of the key
evidence.
Most of the experiments again involve the manipulation of disgust immediately before
participants provide their moral opinions about hypothetical scenarios described in brief
vignettes. Some participants are randomly assigned to a control group that isn’t induced to feel
heightened levels of disgust before evaluating the vignettes. Those in the manipulation group,
however, have this emotion elevated in various ways. Thalia Wheatley and Jonathan Haidt
(2005), for example, hypnotized some people to feel disgust upon reading a certain word. Other
experiments induce disgust by having participants sit at a dirty desk with remnants of food and
sticky substances; smell a foul odor; watch a gruesome film clip involving human feces; or recall
a disgusting experience (Schnall et al. 2008). Still other researchers had some participants drink a
bitter beverage, as opposed to water or something sweet (Eskine, Kacinik, & Prinz 2011), or
listen for one minute to the sickening sound of a man vomiting (Seidel & Prinz 2013a). A related
set of experiments manipulate incidental feelings of what seems disgust’s opposite: cleanliness.
But the results are rather mixed: some studies suggest that cleanliness reduces the severity of
moral judgments while others suggest the exact opposite (see Tobia 2015 for discussion).
In all of these studies, and others, incidental disgust alone has tended to make moral
judgments harsher. If such effects are real, widespread, and substantial, then they provide
powerful evidence in favor of sentimentalism. However, the data are rather limited, for many
reasons (cf. May 2014a).
(1) Generalizing from Subgroups: Many of the effects were found only among certain
types of people or subgroups of the sample. Subjects in Wheatley and Haidt’s (2005)
experiments were only people who were “highly hypnotizable.” Similarly, Schnall and her
collaborators (2008) found the disgust effect only among those who were especially aware of
their own bodily feelings (they scored high on a Private Body Consciousness scale).
(2) Scarce Effects: While participants respond to many vignettes, the disgust effect was
detected only among a minority of them. In Wheatley and Haidt’s (2005) first experiment, for
example, only two out of six vignettes produced a statistically significant result, although the
“composite mean” of responses to all vignettes together was also significant (see Table 2.1). So
the effects on moral judgment are scarce, which means it’s not quite right to say: “Across the
board, ratings [of moral wrongness and disgustingness] were more severe when disgust was
induced” (Kelly 2011: 25). It could be that disgust does affect our moral judgments about most
of the individual vignettes, but the researchers didn’t find it in their sample. After all, failing to
find an effect doesn’t mean there isn’t one—unless of course the study has the statistical power
to accept the null hypothesis that there isn’t an effect. But experiments in the social sciences are
often underpowered, which precludes this inference. At best, then, we have no evidence either
way, in which case we still shouldn’t say there is an effect “across the board” when one wasn’t
found.

Table 2.1: Example Data from a Disgust Experiment

                            Morality Ratings
    Vignette            Disgust absent    Disgust present
    Cousin incest           43.29             67.63**
    Eating one’s dog        65.64             65.26
    Bribery                 78.73             91.28*
    Lawyer                  59.82             73.26
    Shoplifting             67.75             79.81
    Library theft           69.40             71.24
    Composite Mean          64.67             73.94*

100-point scale (0 = not at all morally wrong; 100 = extremely morally wrong);
* = p < .05, ** = p < .01. Table adapted from Wheatley & Haidt (2005: 781).

(3) Small Effects: Even when detected, the effect is rather small (an issue also briefly
noticed by others, such as Mallon & Nichols 2010: 317–8; Pizarro, Inbar, and Helion 2011). For
example, in one of Wheatley and Haidt’s (2005) vignettes, which described an act of bribery, the
average moral ratings differed between the control and disgust group by only 12.55 points on a
100-point scale (see Table 2.1). This mean difference between the groups is statistically
significant, but that at best warrants the conclusion, roughly, that the difference was not likely
due to chance. More precisely, the probability is rather low (less than 0.05) that we’d observe
this difference in a sample even assuming there’s no real difference in the population. At any
rate, statistical significance alone doesn’t license the conclusion that the observed difference is
substantial or significant in the ordinary sense. If anything, the observed difference between
groups seems rather small and fails to shift the valence (or polarity) of the moral judgment.
Disgusted or not, both groups tend to agree about whether the hypothetical action was right or
wrong. At best, these studies only provide support for the idea that incidental emotions can color
or intensify a moral judgment whose existence is due to some other factor.
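
A small sketch with made-up numbers (not Wheatley and Haidt’s data) illustrates how little statistical significance implies about magnitude: given enough participants, even a trivial two-point shift on a 100-point scale reliably comes out significant.

    # Statistical significance is not practical significance: with large samples,
    # a tiny true difference yields a tiny p-value. All numbers are hypothetical.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    control = rng.normal(60, 20, size=5000)  # mean 60, SD 20 on a 100-point scale
    disgust = rng.normal(62, 20, size=5000)  # a mere 2-point true difference
    print(stats.ttest_ind(control, disgust).pvalue)  # far below .05
    print(disgust.mean() - control.mean())           # yet the gap stays ~2 points

Both groups’ mean ratings remain on the same side of the scale; the p-value registers the reliability of a difference, not its size—let alone a change in the judgment’s valence.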
Of course, sentimentalists might predict only a small shift in moral judgment from a
small increase in emotion (Sinhababu 2017: 76). But part of the reason the disgust experiments
are important to examine is that the emotional inductions are often quite powerful, as reflected in
manipulation checks. Yet the (rare) effect on moral judgment is miniscule at best, even when
participants are in the presence of a truly foul smell, sipping on a conspicuously bitter beverage,
listening to someone vomiting, or watching a scene from a film in which a man rifles through a
used toilet while visibly struggling not to lose his lunch. Just thinking about being a participant
in such experiments is disgusting enough!
All of the key disgust experiments ask participants to rate moral transgressions. Wheatley
and Haidt (2005), however, did run one vignette which interestingly tests whether incidental
disgust alone can lead one to judge an action wrong that one would ordinarily consider perfectly
acceptable. Wheatley and Haidt included a “Student Council” scenario, in which a student
performs a mundane, morally neutral action:
Dan is a student council representative at his school. This semester he is in charge of
scheduling discussions about academic issues. He [tries to take/often picks] topics that
appeal to both professors and students in order to stimulate discussion. (2005: 782)

Those who read this without their disgust-inducing word present (“take” vs. “pick”) tended to
rate Dan’s action as “not at all morally wrong” (providing marks near this end of the scale). But
ratings were significantly elevated for those who read the version with the trigger word.
Moreover, the experimenters offered participants an opportunity to explain their judgments, and
some wrote of Dan that “It just seems like he’s up to something” or that he seems like a
“popularity-seeking snob” (783). Wheatley and Haidt conclude that disgusted subjects
“condemned Dan” and made “severe judgments” (783).
If this is an accurate description of the results, then that would clearly be powerful and
surprising, as many have noticed. Plakias, for example, deems it a “striking demonstration of the
power of disgust [to affect moral judgment]” (2013: 264). The crucial Student Council case,
however, is underwhelming. The mean rating of moral wrongness for those who did not receive
the version of this story with their disgust-inducing word was 2.7 (recall: 0 = “not at all morally
wrong” and 100 = “extremely morally wrong”). Disgusted participants, however, had a mean
rating of 14, which still seems to count the action as not morally wrong (cf. Mallon and Nichols
2010: 317–8).
Some researchers are not concerned about whether their participants’ responses tend to
fall on opposite sides of the midpoint, so long as the difference is statistically significant. For
example, in their study of how moral judgments affect various intuitions in folk psychology,
Dean Pettit and Joshua Knobe explicitly propose to disregard whether responses tend to straddle
the midpoint (2009: 589–90). While this may be a fine approach to some research questions, it
can over-inflate the import of certain results, and the disgust experiments are a clear example.
It’s of course unclear whether we should take the scales used in such research to have a genuine
midpoint at all, or to otherwise clearly deliver information about whether participants tended to
judge the action as right or wrong, rather than being uncertain or ambivalent. But that would only
further pose a problem for the sorts of claims many have made regarding these studies, especially
the Student Council case. Still, it is useful to consider where on these various scales subjects
were tending to fall, even if it is difficult to determine a valence for the mean response.
Consider how the data conflict with the usual descriptions of Wheatley and Haidt’s
hypnotism studies. Prinz, for example, summarizes one of their experiments thus: “when the
trigger word is used in [morally] neutral stories, subjects tend to condemn the protagonist”—
“[they] find this student morally suspect” (2007: 27–8). (Note: There was only one neutral story.)
Likewise, Richard Joyce writes that people responding to the Student Council story: “were often
inclined to follow up with a negative moral appraisal” (2006: 130). Kelly similarly writes:
“Participants maintained their unfavorable judgment of Dan despite their complete lack of
justification for it….” (2011: 25). And Plakias says, “subjects who had been hypnotized judged
Dan’s actions morally wrong” (2013: 264), which is similar to Valerie Tiberius’s statement that
“for the students who did feel disgust… there was a tendency to rank Dan’s actions as wrong”
(2014: 78). Contrary to all of the above descriptions of Wheatley and Haidt’s results, if anything
it appears their subjects tended to regard the representative’s action as not morally wrong. The
studies certainly don’t provide evidence that “disgust is sufficient to bring about an appraisal of
moral wrongness even in the absence of a moral violation” (Plakias 2013: 264).
While the different morality ratings between the groups may not straddle the midpoint,
one might contend that the effect is nonetheless substantial. Kelly, for example, claims Wheatley
and Haidt’s disgust-manipulation “increased judgments of disgustingness and moral wrongness
by factors of roughly 10 and 6, respectively” (2011: 25). While it’s true that the morality ratings
of subjects increased by a factor of 6 (mean responses were 2.7 vs. 14 in the Student Council
case) in the direction of the “extremely morally wrong” end of the scale (100), again this looks if
anything to be on the side of counting Dan as not having done something wrong. The factor by
which it increased along the “moral wrongness” scale would have to be much greater just to get
it barely in the realm of being judged somewhat morally wrong (i.e., above 50). So, while disgust
may have made participants’ judgments more “harsh” (as some more carefully put it), we do not
have evidence that it tended to alter their valence—e.g., from permissible to wrong. Such data
only warrant something like the conclusion that disgust slightly amplifies moral judgments in the
direction of condemnation (as briefly noted by some commentators, e.g., Huebner, Dwyer, &
Hauser 2009; Pizarro, Inbar, & Helion 2011; and Royzman 2014).
One might retort that in fact some of the disgusted participants rated Dan’s action as
immoral. Joshua Greene, for example, says, “Many subjects who received matching
posthypnotic suggestions indicated that his behavior was somewhat wrong, and two subjects
gave it high wrongness ratings” (2008: 58). Such claims are apparently based on an earlier
version of the manuscript that circulated prior to publication, which discusses some additional
details about earlier versions of the data. (Thanks to Thalia Wheatley, via Walter Sinnott-
Armstrong, for clarifying this issue and providing the earlier version of the paper.) But the
“many” to which Greene refers was a minority of the group (about 20% by my calculations), and
their ratings are only reported (in the manuscript) as being “above 15” which is still well on the
“not morally wrong” side of the 100-point scale. Furthermore, the two subjects (out of sixty-
three) who allegedly provided “high wrongness ratings” were at most in the area of judging the
act somewhat morally wrong (“above 65”). More importantly, these data points are mere
outliers—the kind that are often removed from analysis in experimental work. However, even if
we included the data points from the older manuscript and the authors’ description of them,
Greene’s gloss is fairly misleading and the outliers are irrelevant anyhow. What matters are the
central tendencies of subjects’ ratings, which we can subject to statistical analysis. Yet the means
from both groups are still quite low (14 in the published article; 15 in the prior manuscript),
indicating either way a tendency to count the act as morally permissible.
Finally, to further support the alleged effect of disgust, many authors also point to the
written explanations subjects provided regarding the Student Council story. While some
disgusted participants did explain their morality ratings by reporting suspicions of Dan and so
forth, Wheatley and Haidt don’t report the percentages. They tell us only that “some
participants” engaged in this apparently post-hoc “search for external justification” (2005: 783).
And these existential generalizations can be true even if only a small minority of participants
provided such explanations (e.g., the two outliers). Indeed, while Wheatley and Haidt provide no
explicit indication either way, it is likely that only a small minority provided these
rationalizations, since only a small minority provided harsher moral judgments, and only two
outliers provided a response that indicates a condemnation of Dan’s behavior. So we shouldn’t
be led into thinking that the above reports from some of the participants are representative of the
experimental group.
These problems with the disgust experiments are underscored by a recent meta-analysis of the effect of incidental disgust on moral cognition. Landy and Goodwin (2015) combed the literature for published studies and collected numerous unpublished ones, yielding fifty experiments and over 5,000 participants. Using Cohen’s standard, the estimated effect size based on all of these studies was officially “small” (d = 0.11). Moreover, the effect disappears
when one considers only unpublished experiments, which suggests a bias against publishing the
null results or replication failures. The mainstream and underground studies cleave on this point:
“the published literature suggests a reliable, though small, effect, whereas the unpublished
literature suggests no effect” (2015: 528). Given publication bias and possible confounds, Landy
and Goodwin conclude that incidental disgust’s amplification effect on moral cognition is
extremely small at best, perhaps nonexistent.
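
For readers unfamiliar with the metric, Cohen’s d is simply the difference between group means expressed in units of their pooled standard deviation. The sketch below uses hypothetical group statistics chosen to yield the meta-analytic estimate of d = 0.11.

    # Cohen's d: mean difference divided by the pooled standard deviation.
    # The group statistics are hypothetical, chosen to illustrate d = 0.11.
    import numpy as np

    def cohens_d(m1, m2, sd1, sd2, n1, n2):
        pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        return (m1 - m2) / pooled

    # A 2.2-point gap on a 100-point scale with standard deviations of 20:
    print(cohens_d(67.0, 64.8, 20, 20, 50, 50))  # 0.11

An effect of that size means the two groups’ distributions overlap almost completely—consistent with the conclusion that incidental disgust at most faintly colors moral judgment.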
While disgust has received the most attention, some researchers have also manipulated
other incidental emotions, often using audio piped through headphones. For example, in one
experiment, researchers manipulated either of two positive emotions—mirth or elevation—by
having participants evaluate moral dilemmas while listening to clips from either a stand-up
comedy routine or an inspirational excerpt from Chicken Soup for the Soul (Strohminger et al.
2011). Interestingly, the experiments found that mirth slightly increased utilitarian responses to
moral dilemmas while elevation had the opposite effect. One worry is that the audio clips
involved different statements that were not properly controlled for in the various conditions.
Other studies, however, avoid this problem by having participants listen to instrumental music.
To manipulate incidental anger, for example, Seidel and Prinz (2013a) had participants listen to
Japanese “noise music,” which is irritating to most people. In another experiment, they induced
positive feelings of happiness with uplifting classical music (Seidel & Prinz 2013b). The key
results in these studies were that incidental anger slightly amplified condemnation of autonomy
violations and happiness slightly amplified judgments of praise and moral obligation (while
anger reduced such judgments).
Do these few additional audio studies demonstrate the power of incidental emotions in
moral judgment? One worry is that certain noises, particularly irritating ones, could significantly
distract participants from fully processing morally relevant information in the vignettes. More
importantly, though, all of the findings are similar to those of the disgust experiments. While the
effects weren’t restricted to subgroups in the samples, and sometimes the effects were found for
all or most vignettes tested (not just a minority), the differences between groups are again
consistently small shifts on a fine-grained scale. Now, in these studies, the extraneous emotion
does sometimes shift the valence of the moral judgment on average compared to controls (as
emphasized by Prinz 2016: 54). But the shift is consistently from roughly the midpoint (the mean
response in the control group) to slightly beyond (in the relevant manipulation group). So these
few studies from one lab don’t provide sufficient evidence that incidental emotions have more
than a negligible effect on moral judgment. Further research, replications, and meta-analyses are
required before we can confidently conclude that the effects are stable and substantial enough to
support sentimentalism.

2.3 Accounting for Slight Amplification


Ultimately, rigorous empirical studies are converging on the idea that incidental emotions are
hardly sufficient for moral judgments and instead are often elicited by them. Now we may be inclined to ask: If an emotion, such as disgust, is primarily a consequence of moral judgments, then why does it sometimes slightly amplify them? As Prinz has recently put it, “Why should
emotions have any effect?” (2016: 54). This does call for explanation if we are to deny that
incidental emotions play a significant role in forming moral beliefs. Further research is required,
but there are plausible proposals available. To appreciate these, however, we first need to see
why it’s plausible to suppose the emotions often follow moral judgments.

2.3.1 Emotions as Effects of Moral Judgments

No doubt emotions can sometimes influence cognition, as affect in general sometimes provides
us with information, even outside of the moral domain (Dutton & Aron 1974; Schwarz & Clore
1983). However, feelings alone do not appear to be a substantial cause (or sustainer) of a sizeable
class of distinctively moral beliefs. Instead, it seems that emotions are often effects of moral
judgment.
The phenomenon is familiar from various emotions. When empathizing with others in
distress, for example, the compassion that typically results is modulated by pre-existing moral
beliefs. There is evidence in particular that one’s empathic response toward people in need is
dampened if one judges them to be blameworthy for their distress (Betancourt 1990; cf. Pizarro
2000: 366). But let’s focus once more on disgust. Of course, some reactions of revulsion are not
connected to moral beliefs at all. Eating an insect might disgust many of us but it needn’t have
any relation to our moral beliefs, either as an effect or as a cause. When there is a connection
between a moral belief and repugnance, however, the emotion is often elicited by the belief, not
the other way around (cf. Huebner et al. 2009; Pizarro et al. 2011; May 2016a).

Behavioral data: Moralization


Consider first changes in reactions of disgust following a change in specific moral beliefs. A
natural example concerns omnivores who become vegetarians and are eventually disgusted by
meat. Not every vegetarian becomes repulsed by meat, perhaps for various reasons. Some may
be vegetarian primarily for reasons of health, not ethics. Even for so-called “moral vegetarians,”
the desire for meat may be too entrenched, given one’s personal preferences or length of time as
a meat eater.
Nonetheless, there is some empirical evidence that moral vegetarians are more disgusted
by meat than health vegetarians (Rozin et al. 1997). And further research suggests that this result
is not simply due to moral vegetarians already being more disgust-sensitive (Fessler et al. 2003).
Thus, it seems the ethical beliefs of many moral vegetarians eventually elicit disgust as a
consequence. The emotional response is related to the moral judgment as its effect, not its cause.
This general phenomenon, which Rozin has called “moralization,” is not restricted to
vegetarianism either. Few people are vegetarians, let alone for moral reasons, but many more are
now disgusted by cigarette smoke. Just in the past fifty years, attitudes toward smoking tobacco
have radically changed. Interestingly, there is some evidence that people in the United States
have become more disgusted by cigarettes and other tobacco products after forming the belief
that it’s a morally questionable habit and industry to support (Rozin & Singh 1999). Such
research confirms a common phenomenon in ordinary experience: emotions commonly follow
one’s moral judgments.

Neuroscientific data: ERP


Extant experiments on disgust and moral judgment have not precisely measured temporal
ordering. Qun Yang et al. (2013), however, attempted to do just that using a complicated,
yet clever, application of a Go/No-Go task with a sample of people in China.
In this paradigm, researchers instruct participants to respond (Go) to certain cues but not
others (No-Go). Researchers then can use an electroencephalogram (EEG) to measure increased
electrical activity on the surface of participants’ brains, specifically event-related potentials,
which indicate preparation for motor activity. By identifying “lateralized readiness potentials” in
particular, the experimenters could discern when participants were preparing to move either their
left or right hands to respond to a specific cue. Importantly, detecting preparation to move in a
No-Go condition indicates that participants were prepared to move (Go) before they realized
they shouldn’t (No-Go). The paradigm allows researchers to determine which of two judgments
people tend to make first by essentially asking them to eventually make both judgments but act
on one only if it has a certain relationship to the other. For example, we can gain evidence about
whether people tend to process an individual’s race or gender first by asking them to press a
button to indicate the gender of a person (Go) but not if the person is Caucasian (No-Go), and
then swapping the Go/No-Go mapping. Preparation to move in the No-Go condition suggests
that participants first processed the information mapped to Go (e.g., gender) but only afterward
realized it met the No-Go condition (Caucasian) and so didn’t follow through with pushing the
button.
To test the temporal order of judgments of morality vs. disgust, Yang and colleagues
divided the experiment into two sessions. In the first session, participants were instructed to
report a moral judgment about a scenario (Go) by pressing a button with their left or right hands
but not (No-Go) when the action was physically disgusting. For example, subjects should report
their moral judgment in response to “A person at a party is drinking water” but not for “…is
drinking blood.” In the second session, the Go/No-Go instructions were reversed: participants
were supposed to report their assessment about whether the act was disgusting but not when they
think the action is immoral. For example, they should make an assessment of disgustingness
(Go) for “A person at a party is drinking water” but not for “…is stealing money” (No-Go). The
experimenters’ hypothesis was that participants would make their moral judgments prior to their
assessments of disgust but not vice versa.
The resulting data confirmed this hypothesis by detecting brain activity that indicated
participants were prepared to make an assessment of morality before disgustingness in the
relevant setting (Session 1) but not vice versa (Session 2). In the first session, the researchers
detected significant preparation to move in the No-Go trials. This indicates participants generally
made a moral judgment first and then realized they’d need to inhibit reporting it when they
judged the act to be disgusting. Of course, this should be predicted by any hypothesis, given the
instructions for the first session. Participants were supposed to make a moral judgment unless the
act was disgusting, so they may have chosen to make moral judgments first and then decided to
respond or not based on whether they found the act disgusting. However, in the crucial second
session, participants were instructed to assess disgustingness only if the act was immoral. Yet
there was no evidence of preparation to move in No-Go trials, which suggests that participants
weren’t already preparing to respond to whether the act was disgusting. It seems they knew no
response was required because they continued to make their moral judgment first. Thus, the data
from the two sessions indicate that it’s more natural for people to make moral judgments first
and judgments of disgust second. There is then no evidence that disgust informs or causes the
moral judgments, because the emotional response seems to occur too late.
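
The inferential logic of the design can be condensed into a toy formalization—my own sketch, with purely hypothetical processing times standing in for whichever judgment is computed first. A readiness potential should leak onto No-Go trials only when the Go-mapped judgment finishes before the No-Go-mapped judgment arrives to stop it.

    # Toy formalization of the Go/No-Go inference. The completion times are
    # hypothetical; what matters is only which judgment finishes first.
    T_MORAL, T_DISGUST = 300, 450  # ms; assumes moral judgments finish earlier

    def lrp_on_nogo_trials(t_go_judgment, t_nogo_judgment):
        # Motor preparation appears on No-Go trials only if the Go-mapped
        # judgment completes before the No-Go-mapped judgment can inhibit it.
        return t_go_judgment < t_nogo_judgment

    # Session 1: report morality (Go) unless the act is disgusting (No-Go)
    print(lrp_on_nogo_trials(T_MORAL, T_DISGUST))  # True: LRP predicted, as found
    # Session 2: report disgustingness (Go) unless the act is immoral (No-Go)
    print(lrp_on_nogo_trials(T_DISGUST, T_MORAL))  # False: no LRP, as found

Only the moral-judgment-first ordering predicts the observed asymmetry; a disgust-first ordering would reverse both outputs.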
These EEG data provide some rigorous evidence in favor of the idea that disgust often follows
moral judgments, rather than serves as an input to them. And similar results were achieved by
Yang and his collaborators using EEG again but with a different research paradigm (Yang et al.
2014). Two EEG studies from a single lab certainly do not settle the matter. However, combined
with the other empirical studies (e.g., on moralization and compassion), there is growing
evidence that disgust tends to follow negative moral judgments, not vice versa.

2.3.2 Misattribution of Arousal

Now, to understand how incidental emotions could sometimes slightly amplify moral judgments
despite not generally being important causes, consider what has come to be known as
“misattribution of arousal,” an established phenomenon in the social psychology literature.
In an early study, Schachter and Singer (1962) conducted an experiment that they led
participants to believe was about how an injection of vitamin supplements (“Suproxin”) affects
vision. The “vitamin” injections were actually either a placebo for one group or adrenalin
(epinephrine) for another, the latter of which causes noticeable physiological responses, such as
elevated heart rate and respiration. Some participants were informed of these real side effects of
the injection, while others were misinformed of false side effects, and a third group remained
ignorant. Subjects were then paired with a confederate of the experimenters (“the stooge”) who
pretended to react to the injection either with noticeable euphoria or anger. Eventually,
participants provided self-reports of their own mood. Schachter and Singer found that those who
didn’t have the right information to attribute their symptoms to the adrenalin shot reported
feeling more euphoria or anger (like the stooge). The authors conclude that “given a state of
physiological arousal for which an individual has no explanation, he will label this state in terms
of the cognitions available to him” (395). In other words, when we have unexplained feelings,
we often attribute them to some source, even if inaccurately.
In a similar vein, Dutton and Aron (1974) famously had men interviewed by an attractive
woman either over a wobbly suspension bridge with a 230-foot drop or a more solid bridge only
10 feet above a small creek. The researchers measured how many in each group accepted the
woman’s phone number and how many later called to find out more about the experiment. As
expected, participants who interviewed on the scary bridge more often accepted the number and
later called. Dutton and Aron suggest that on one interpretation of the results such participants
misattributed (or “relabeled”) the source of their racy feelings to the attractive woman, not to fear
of the bridge. Mistaking the source of their feelings affected their assessment of their
attraction to the woman.
A number of experiments have uncovered similar findings. In addition to two field
experiments, Dutton and Aron found similar results in a lab experiment involving fear of electric
shocks. More recently, a meta-analysis of the effect of arousal on attraction indicates that it’s
robust, albeit small to moderate in size (Foster et al. 1998). The authors of the meta-analysis
conclude that the data are consistent with the misattribution theory of the arousal-attraction link,
even if other plausible theories are not ruled out either.
Now consider how the idea of misattributing sources of emotion can extend to disgust.
Experimenters induce incidental feelings of this emotion—e.g., via a foul smell, watching a
repulsive film clip, tasting a bitter beverage—that are unrelated to the moral scenarios under
consideration. To what source, other than the actual source, might participants attribute their
feelings? There are two main possibilities, but only the first affords disgust a direct causal impact
on moral judgment.
The first appeal to misattribution can be found in the work of Simone Schnall and her
collaborators (2008). They deliberately attempted to “induce low-level, background feelings of
disgust” so that “any disgust elicited by the moral dilemmas” (1106) wouldn’t be correctly
attributed to the real cause of the incidental feelings of the emotion, such as a dirty desk. On this
account, the experimental manipulation elicits disgust and participants are expected to
misattribute the source of the incidental feelings to the act or actor in the vignette. If it’s too
obvious that the source of the disgust is really from, say, a dirty desk, then participants in the
disgust condition will not amplify their negative moral judgments. On this account,
misattribution explains the effect but on the assumption that disgust does influence moral
judgment.
However, we can appeal to misattribution in a different way that doesn’t rely on disgust
directly amplifying moral judgment. Some people may misattribute their elevated levels of
disgust to their moral judgment about the story, not the actor in the vignette. This misattribution
is then combined with the tacit knowledge that we tend to feel disgust toward those acts we think
are especially heinous, which leads to a tendency among some to report a harsher moral belief.
Compare anger. Incidental feelings of anger might make me rate an act as worse just because,
usually, the angrier I am about a violation, the worse I think it is. Just as we automatically take
smoke as evidence of a fire, we tacitly take an emotional reaction to be a natural consequence of
a moral judgment.
So there are three possible sources for the incidental feelings of disgust: the real source
(dirty desk, film clip, etc.), the moral violation in the vignette, and the moral judgment about the
vignette. Schnall and company maintain that participants misattribute their elevated feelings of
disgust to the vignette (rather than the real source). Again, this assumes that disgust typically
causes relevant moral judgments, rather than the other way around. On an alternative theory,
however, participants misattribute the feelings to the moral judgment. This assumes only that
people are tacitly aware that disgust typically follows relevant moral judgments.
The second misattribution account explains why researchers sometimes find (slightly)
harsher self-reported moral judgments among people feeling incidental disgust. And the small
effect is explained without providing any reason to believe that disgust plays an important role in
an interesting class of moral judgments. Compare: the arousal-attraction link does not provide
strong evidence that fear plays an important role in judgments of attraction; rather such studies
indicate that incidental and unexplained feelings strike us as calling for explanation, even if
unconsciously. How we unconsciously reconcile such feelings is left open. The misattribution
account of the disgust studies shows that we can explain this reconciliation without assuming
that disgust is primarily a cause of moral judgment. In particular, the affective element of this
emotion needn’t be a normal part of the mechanism for producing moral judgment, just as fear
isn’t a normal part of the cause of judgments of attractiveness. Rather, people sometimes tacitly
take the disgust as evidence that they think the act is worse.
Misattribution accounts also explain why disgust only sometimes amplifies moral
judgment. After all, the account predicts the effect will only show up among some people who
tacitly make the error of misattribution. This might seem problematic since, in some
experiments, disgust appears to affect moral judgment only among those who are more skilled at
detecting their internal physical states (e.g., Schnall et al. 2008). But these participants can only
be expected to be adept at noticing the arousal, not its true source; misattribution is still plausible
at least for some. Indeed, it makes perfect sense that only those who notice the unexplained
emotion will (unconsciously) rationalize it. Moreover, in keeping with this misattribution
account, there is some evidence that the slight effect of incidental disgust on moral judgment
disappears in participants who can more finely distinguish their own emotions (Cameron, Payne,
& Doris 2013).
We thus have a clear way to explain amplification that’s consistent with denying that
incidental disgust plays an important role in moral judgment. In fact, as the arousal-attraction
studies indicate, misattribution accounts can generalize to other emotions. Of course, there may
be multiple explanations for amplification, which aren’t mutually exclusive. Either way, there
are explanations for how incidental emotions might slightly influence moral judgment indirectly,
without supposing that feelings ordinarily play an important direct causal role.

2.4 Psychopathology
We have seen that various popular experiments fail to show that mere feelings play an integral
role in mature moral judgment. Other avenues of support come primarily from psychopathology.

By studying when moral judgment breaks down, we can perhaps uncover whether an emotional
deficit best explains the problem.

2.4.1 Psychopathy

Not all psychopaths are alike, but they are typically characterized as callous, lacking in remorse
and guilt, manipulative, having a superficial charm, impulsive, irresponsible, and possessing a
grandiose sense of self-worth (Hare 1993). Most studied are incarcerated men and many have
committed violent crimes or engaged in reckless actions that leave innocent people injured, dead,
or destitute. Psychopathy is similar but not exactly equivalent to antisocial personality disorder in
the Diagnostic and Statistical Manual of Mental Disorders. Researchers instead typically
diagnose psychopaths using Robert Hare’s Psychopathy Checklist (Revised), which has a range
of criteria pertaining to the individual’s psychological traits and history of past infractions. While
the exact causes of psychopathy are unknown, it’s clearly a developmental disorder due to some
combination of factors present early in life. Some of these factors are genetic predispositions;
others include traumatic experiences and environmental influences on gene expression, such as
profound neglect, abuse, and even lead exposure (Glenn & Raine 2014: ch. 6).
Psychopaths seem to be prime examples of adults who are morally incompetent due to a
severely impaired capacity for moral judgment. If this is correct and best explained by emotional
deficits, then psychopathy seems to provide evidence in favor of sentimentalism (Nichols 2004;
Prinz 2007). However, we’ll see that what’s at issue here isn’t incidental feelings but rather
broad emotional capacities that are intimately bound up with cognition, attention, learning, and
reasoning.

Moral Competence
Psychopaths don’t just behave badly; some research suggests they don’t properly grasp moral
concepts. Some theorists point to their poor use of moral terms, as when some psychopaths don’t
appear to properly understand what it means to regret hurting someone (Kennett & Fine 2008).
More striking is the apparent failure to draw the moral/conventional distinction, which some
theorists believe is necessary for a proper grasp of morality (see Chapter 2, §2.2.1). In particular,
some research on adult incarcerated psychopaths suggests that they treat conventions like moral
rules by categorizing them as just as serious and independent of authority (Blair 1995). One
hypothesis is that such inmates incorrectly categorize conventional norms as moral in a futile
attempt to show that they know it’s wrong to violate most norms.
However, other evidence suggests that psychopaths do not have such a substantial deficit
in moral judgment. One study attempted to remove the motivation to treat all transgressions as
serious by telling inmates that the community regards only half of the transgressions as moral
violations. Yet the researchers found that a higher score on the Psychopathy Checklist did not
correlate with less accurate categorization of the norms as moral vs. conventional (Aharoni et al.
2012). However, the researchers in this later study did find that two sub-factors of the
Psychopathy Checklist (affective deficits and anti-social traits) correlate with a diminished
ability to accurately categorize transgressions.
Another line of research focuses on patterns of moral judgments about sacrificial moral
dilemmas in which one innocent person can be harmed for the greater good. Most people believe
it’s immoral to save five people by killing one person in a “personal” way, such as pushing him
to his death. Yet one study found that incarcerated psychopaths were more “utilitarian” than
other inmates, as they were more inclined to recommend sacrificing one to save several other
people in such personal dilemmas (Koenigs et al. 2012). However, abnormal responses to
sacrificial dilemmas might not indicate a deficit in moral judgment as opposed to a different set
of moral values. The resulting moral judgments may be somewhat abnormal, but utilitarians like
Greene (2014: 715) would have us believe that psychopaths happen to be morally correct.
In any event, other studies provide conflicting data regarding “utilitarian” values. One
found that incarcerated and non-incarcerated psychopaths responded like most other people,
categorizing personal harm as morally problematic even if it could bring about a greater good
(Cima et al. 2010). Moreover, Andrea Glenn and her colleagues observed no difference in non-
incarcerated psychopaths’ moral judgments about personal vs. impersonal dilemmas (Glenn et al.
2009).
Thus, while there is some evidence of impaired moral cognition in psychopaths, it’s
decidedly mixed. A recent, even if limited, meta-analysis (Marshall et al. forthcoming) examined
dozens of studies and found at best a small relationship between psychopathy and impaired
moral judgment (assuming that abnormal “utilitarian” responses to sacrificial moral dilemmas
are evidence of a deficit in moral judgment). The researchers take their meta-analysis as
“evidence against the view that psychopathic individuals possess a pronounced and overarching
moral deficit,” concluding instead that “psychopathic individuals may exhibit subtle differences
in moral decision-making and reasoning proclivities” (8).
In sum, a diverse array of evidence suggests a rather attenuated conclusion about moral
cognition in psychopathy. There is most likely some deficit in the psychopath’s grasp and
deployment of moral concepts, but the extent of it is unclear. And much of a criminal
psychopath’s behavior can be explained by abnormal motivation, such as a lack of concern for
others, even if knowledge of right and wrong is roughly intact (Cima et al. 2010). As Glenn and
her collaborators put it: “Emotional processes that are impaired in psychopathy may have their
most critical role in motivating morally relevant behavior once a judgment has been made”
(2009: 910). Such a conclusion may require admitting the possibility of making a moral
judgment while lacking motivation to act in accordance with it—a form of “motivational
externalism.” But rationalists can happily accept that the connection between moral judgment
and motivation breaks down when one isn’t being fully rational (Smith 1994: ch. 3).

Rational Deficits
So what in psychopaths explains their (slightly) impaired capacity for moral cognition? The most
popular account points primarily to emotional deficits, based on various studies of the behavioral
responses and brain activity of either psychopaths or people with psychopathic tendencies. For
example, key brain areas of dysfunction are the amygdala and ventromedial prefrontal cortex
(VMPFC), both of which appear to be implicated in processing emotion, among many other
things, including implicit learning and intuitive decision-making (Blair 2007). Moreover, as
already noted, the sub-factors in psychopathy that have been correlated with diminished ability to
draw the moral/conventional distinction involve emotional deficits (e.g., lack of guilt, empathy,
and remorse) and anti-social tendencies (Aharoni et al. 2012). Further evidence comes from
studies which indicate that, compared to normal individuals, when psychopaths evaluate moral
dilemmas they exhibit decreased activation in the amygdala (Glenn et al. 2009).
The idea that psychopathy primarily involves an emotional deficit seems bolstered when
compared to autism (Nichols 2004). On many accounts, autism typically involves difficulty
understanding the thoughts and concerns of others. People on the spectrum can be in some sense
“anti-social” but they aren’t particularly aggressive or immoral, and they don’t have such
difficulty feeling guilt, remorse, or compassion for others. Moreover, autism doesn’t seem to
yield a lack of moral concepts, at least because high-functioning children with autism seem to
draw the moral/conventional distinction (Blair 1996). However, autism, especially when severe,
can impair moral judgment by limiting the understanding of others’ projects, concerns, and
emotional attachments (Kennett 2002). There is evidence, for example, that adults with high-
functioning autism don’t tend to treat accidental harms as morally permissible (Moran et al.
2011), although neurotypical adults do (see Chapter 3, §3.3.1).
Importantly, though, there is ample evidence that psychopaths have profound deficits that
are arguably in their inferential or reasoning capacities. Notoriously, they are disorganized, are
easily distracted, maintain unjustified confidence in their skills and importance, and struggle to
learn from negative reinforcement (see, e.g., Hare 1993; Blair 2007; Glenn & Raine 2014).
Moreover, a meta-analysis of twenty studies suggests that individuals with psychopathy (and
related anti-social personality disorders) have difficulty detecting sad and fearful facial
expressions in others (Marsh & Blair 2008). Such deficits can certainly impair one’s reasoning
about both morality and prudence at least by preventing one from properly assessing the merits
of various choices and resolving conflicts among them (cf. Kennett 2002; 2006; Maibom 2005).
Consider an example. One psychopath tells the story of breaking into a house when an
old man unexpectedly appeared, screaming about the burglary. Annoyed that the resident wouldn’t
“shut up,” this psychopath apparently beat the man into submission, then lay down to rest and
was later awoken by the police (Hare 1993: 91). Such aggressive and reckless actions, common
in psychopathy, are easily explained by impaired cognitive and inferential capacities, such as
overconfidence in one’s abilities, inattention to relevant evidence, and the failure to learn from
past punishment. Affective processes certainly facilitate various forms of learning and inference,
particularly via the VMPFC (Damasio 1994; Woodward 2016; Seligman et al. 2016), and such
processes are no doubt compromised in psychopathy. But that is just evidence that affective
deficits are disrupting domain-general inferential and learning capacities, not necessarily moral
(or normative) cognition specifically.

Abnormal Development
It’s important to keep in mind that psychopathy is a developmental disorder. There is some evidence that people
who acquire similar brain abnormalities in adulthood—VMPFC damage—retain at least some
forms of moral judgment (more on this in the next section). The difference with psychopaths is
that their brain abnormalities are present at birth or during childhood (or both), which prevents or
hinders full development of social, moral, and prudential capacities in the first place.
Some worry then that psychopathy at best indicates that the affective elements of
emotions are merely developmentally necessary for acquiring full competence with moral
concepts (Kennett 2006; Prinz 2007: 38). However, even this concedes too much ground to
sentimentalism. As we’ve seen, there’s reason to believe that psychopaths have only some
deficits in moral judgment and they experience domain-general problems with learning,
attention, and inference. This rationalist-friendly view is actually bolstered by the fact that these
problems in psychopathy arise early in development and continue throughout a psychopath’s life.
The rationalist needn’t explain the psychopath’s immoral and imprudent behavior by positing a
lack of conscious understanding of a moral argument written down with explicit premises and
conclusion. Rather, the rationalist can point to a lifetime of compromised attention, learning, and
inference.
Emotions are certainly part of the explanation. Like all development, social and moral
learning begins early in one’s life before one fully acquires many concepts and the abilities to
speak, read, write, and engage in complex reasoning. Yet moral development must go on.
Emotions—with their characteristic package of cognitive, motivational, and affective elements—
can be a reliable resource for sparking and guiding one’s thoughts, actions, and learning. We are
creatures with limited attentional resources in environments with more information than we can
take in. The predicament is even more dire when we’re young and developing key social
concepts, associations, and habits. Our default is not generally to pay attention to everything, for
that is impossible. Instead, we rely on our attention being directed in the right places based on
mechanisms that are quick, automatic, often emotionally driven, and sometimes fitness-
enhancing (cf. Pizarro 2000; Huebner et al. 2009). In ordinary people emotions combine with a
suite of cognitive and motivational states and processes to guide one’s attention and aid one’s
implicit learning and inferential capacities. As does a hearty breakfast, emotions facilitate the
healthy development of a human being’s normative concepts and knowledge. Due to certain
genes, neglect, abuse, and so on, psychopaths’ relevant emotional responses, such as compassion
and guilt, are missing or significantly attenuated. But these are connected to broader cognitive
and inferential deficits—such as delusions of grandeur, inattention, and poor recognition of
emotions in others. It’s no surprise that what typically results is a callous, manipulative, and
aggressive person who behaves badly and lacks a healthy grasp of the normative domain (both
morality and prudence).

In Sum
Ultimately, there are several issues to highlight that work in concert to neutralize the threat to
rationalism from psychopathy. First, extant evidence does suggest that psychopaths lack normal
moral competence, but the deficit in moral cognition is often overstated in comparison to
motivational deficiencies, such as impulsivity and a lack of concern for others. Feelings may directly
affect motivation and behavior but that isn’t in conflict with the rationalist’s claim about moral
judgment and needn’t conflict with a rationalist account of all aspects of our moral psychology.
Second, while psychopathy plausibly involves some emotional dysfunction, especially in
guilt and compassion, the condition involves at least an equal impairment in learning and
inference. A lifetime of delusions of grandeur, impulsivity, poor attention span, difficulty
processing others’ emotions, diminished sensitivity to punishment, and so on can alone explain a
slightly diminished competence with moral concepts and anti-social behavior.

2.4.2 Lesion Studies

When patients suffer brain damage, we can correlate impaired moral responses with the
dysfunctional brain areas and their usual psychological functions. Ideally, this can help
determine whether feelings play an important role in normal moral judgment. There are two key
brain lesions that have been studied in relation to emotions and moral judgment: damage to the
ventromedial prefrontal cortex and neurodegeneration in the frontal or temporal lobes.
Patients with lesions of the VMPFC typically develop what Antonio Damasio (1994) has
somewhat unfortunately called “acquired sociopathy.” The famous Phineas Gage is just one
popular example: after a rod accidentally passed through his skull, a once upstanding Gage
reportedly became crass, had difficulty keeping jobs, and so forth, despite apparently
maintaining his level of general intelligence. There is some controversy about the various details
of Gage’s story, but now such patients are better documented.
Acquired sociopathy has some similarities with psychopathy, at least in that both involve
abnormal function in the VMPFC (Blair 2007). But Damasio’s label can mislead as the two
conditions are rather different. For one, since psychopathy is a developmental disorder, the brain
dysfunction begins early in life and thus has much more serious effects. In contrast, adults who
suffer damage to the VMPFC later in life have typically developed a fuller grasp of moral
concepts, and there is evidence that they retain at least some forms of moral judgment (Roskies
2003). For example, patients are generally able to reason about hypothetical moral (and
prudential) dilemmas and render a verdict about what one should do. The problem is more with
personal decision-making, as patients struggle to settle questions about whether to lie to their
spouse or about which apples are best to purchase at the grocery store. Those with acquired
sociopathy can know or cognize the various options for a given decision, but they seem to lack
the proper guidance from their gut feelings about what they themselves ought to do all things
considered in this particular situation—“in situ” as Jeanette Kennett and Cordelia Fine
(2008) put it. Based primarily on studying the physiological responses of such patients while
they make decisions, a key impairment seems to be in what Damasio (1994) calls “somatic
markers” or bodily feedback that guides such decision-making. Diminished or missing somatic
markers can leave patients prone to make imprudent and morally questionable choices, but
unlike psychopaths they’re not characteristically manipulative, violent, or grandiose (we’ll
encounter “acquired sociopathy” again in Chapter 8, §8.3.1).
The brain’s VMPFC does seem crucial for intuitive and personal decision-making, which
is at least typically guided by affective feedback. But such deficits affect learning and decision-
making both within and outside the moral domain. Moreover, is there evidence of impaired
moral judgment generally? Some studies suggest that VMPFC damage does yield abnormal
processing of scenarios involving personal harm for the greater good. Such patients seem to be
more inclined to provide the abnormal “utilitarian” judgment that one should sacrifice an
innocent individual for the sake of saving a greater number of other innocents, even if it involves
up-close and personal harm (e.g., Koenigs et al. 2007; Ciaramelli et al. 2007).
A similar phenomenon arises in people who have related brain abnormalities. Patients
with frontotemporal dementia (FTD) can have a wide variety of symptoms, including overeating
and poor hygiene, since their neurodegeneration can occur in two out of the four lobes of the
cerebral cortex. But some common symptoms include blunted emotions and antisocial behavior,
which are typical among those with lesions of the VMPFC. Importantly for our purposes, when
presented with moral dilemmas requiring personal harm, FTD patients also provide more
“utilitarian” moral judgments than controls (Mendez et al. 2005).
Even if we take these lesion studies at face value, they don’t show that feelings are
essential for moral judgment, for at least two reasons. First, as we’ve already seen, it’s a stretch
to assume that providing more “utilitarian” responses to sacrificial moral dilemmas reveals a
profound moral deficit, unless we’re prepared to attribute moral incompetence to utilitarians.
Second, while the VMPFC and the frontal and temporal lobes may be associated with
emotional processing broadly speaking, they are also associated with many other non-affective
processes. The frontal and temporal lobes are two out of the four lobes in the cerebral cortex.
The VMPFC is a much smaller region located in the frontal lobe, but it too isn’t specific to moral
emotions, such as guilt and compassion, and certainly not their affective aspects in particular.
The area does indeed appear to, among other things, receive affective information from other
structures, such as the amygdala, often having to do with goals and reinforcement learning (Blair
2007). However, as James Woodward puts it, the VMPFC is clearly “involved in calculation,
computation, and learning, and these are activities that are often thought of as ‘cognitive’” (2016:
97). So, even if being more utilitarian demonstrates impaired moral judgment, it’s not clear that
this is best explained specifically by a deficit in moral emotions rather than by a deficit in integrating
information acquired through past experience with present circumstances in order to make a
personal decision.
Now, much like the experiments manipulating incidental emotions, one might argue that
the lesion studies provide an alternative way of showing that damage to apparently “emotional
areas” in the brain at least leads to different moral judgments. Even if patients retain the general
capacity for moral judgment, their emotional deficiencies seem to lead to some change in moral
cognition. The problem of course is that damage to relatively large areas of the brain implicates
dysfunction of a wide range of mental capacities, not just emotional responses—let alone
incidental ones. Associating areas of the brain with emotional processing is far from isolating the
feelings from the thoughts associated with them. Frontotemporal dementia involves a wide range
of neurodegeneration. While blunted emotion is a common symptom of FTD, it may simply
hinder the patient’s ability to pay close enough attention to morally relevant information
(Huebner et al. 2009), or hinder other reasoning capacities.
In sum, much like psychopathy, lesion studies support sentimentalism only if they
establish two theses: (a) that the relevant patients have impaired moral judgment and (b) that this
impairment is best explained by a deficit in moral feelings. Both of these crucial claims are
sorely lacking in empirical support. The lesion studies actually support the rationalist idea that
moral cognition can proceed even with some blunted emotions. The patients certainly seem
capable of making moral judgments about what other people should do in hypothetical situations,
even if their responses tend to be a bit more “utilitarian.” Gut feelings help to guide certain
personal decisions, but they aren’t necessary for the general capacity to judge actions as right or
wrong. As with psychopathy, affective deficits alone don’t seem to reveal a substantial
impairment in moral cognition specifically.

2.5 Conclusion
Based on the science, many believe that mature moral judgment crucially depends on the
affective aspects of moral emotions. The evidence for this sentimentalist conclusion has been
diverse. As we’ve seen, however, much of it is rather weak and has generally been overblown.
First, while the moral/conventional distinction may partly characterize the essence of moral
judgment, we lack compelling evidence that moral norms transcend convention by being backed
by affect. Second, priming people with incidental feelings doesn’t make them moralize actions.
Third, moral judgment can be somewhat impaired by damage to areas of the brain that are
associated with emotional processing, but these areas also facilitate learning and inference both
within and outside the moral domain.
Feelings are undoubtedly frequent characters in the moral life. But this is because we care
deeply about morality, so feelings tend to be the normal consequences, not causes, of praise or
condemnation. I feel angry toward Nicki because I believe she wronged me; the feeling goes
away, or at least begins to subside, once I realize it was all a misunderstanding. Now it may well
be impossible for a creature to make moral judgments in the absence of certain motivations and
concerns. Perhaps, for example, moral agents must have a concern to act for good reasons or