


notizie di POLITEIA, XXXII, 123, 2016. ISSN 1128-2401, pp. 135-140

Greene’s Moral Tribes: Thinking Too Fast and Too Slow
Andrea Lavazza*

Abstract: Joshua Greene’s attempt to propose a new metamorality, one able to solve the Us-Them conflict and overcome the Tragedy of Commonsense Morality, relies heavily on the assumption that our moral machinery operates in a dual mode of reasoning, one mode of which is biased by our evolution in small groups. But Greene’s utilitarian solution rests on experiments whose interpretation still seems controversial.

Keywords: Utilitarianism, Metamorality, Dual-mode of moral reasoning.

Joshua Greene is one of the founders of the “new science of morality”, of which he has recently provided an important and accessible synthesis (Greene, 2013). By scientifically studying the way in which we reach our moral judgments, both from the evolutionary point of view and as individuals and groups, we can see all the biases that affect us, and which our best research today can detect and correct. Evolutionary mechanisms have led to the emergence of a group morality that allows us to coordinate our needs and aspirations with those of others: in short, there has been a strong push toward cooperation, in the Me-Us dialectic, as a tool for thriving in a hostile environment. But human history has also been characterized by the formation of closed groups, each with its own moral norms, which creates tensions and conflicts in the Us-Them dialectic. The moral problem of finding the key to global cooperation is all the more pressing today, when we must act on a global scale to contain weapons of mass destruction or climate change.
We know enough about cooperation within homogeneous groups, according to Greene: how it arose and how it was internalized. And herein lies the problem, in his view: it is our moral machinery, if we can call it that, that prevents us from overcoming the so-called Tragedy of Commonsense Morality. Only by rising to a higher level of rational metamorality could we avoid the conflict caused by the clash of different moral perspectives – think of the debates on bioethical issues, on matters of religious tolerance, or on the just and adequate response to terrorist threats.
Simplifying, Greene argues in favor of overcoming, on a scientific basis, our fast and unconscious moral judgments, which we then rationalize a posteriori, often by means of confabulation (understood in the technical sense of psychology). Instead, we should adopt a rational and reflective mode of moral reasoning whose cornerstone is the premise that what all living beings tend toward is a lifetime of positive experiences. This, though, is to be maximized under the constraint of impartiality (my experiences are as valid as anyone else’s). This is the criterion that should inspire the new metamorality meant to solve the Tragedy of Commonsense Morality.

*  Centro Universitario Internazionale, Arezzo.
What Greene proposes is a common morality he calls “deep pragmatism”. This view tempers classical utilitarianism (perhaps too much), thereby overcoming some typical objections to utilitarianism as a moral theory and showing a side of it that makes it attractive even to non-utilitarians. Greene’s metamorality, though, seems to assume too much for its justification (or for its foundation) with respect to the evidence coming (especially) from empirical psychology and (partly) from cognitive neuroscience. It is on these two points that I will focus my attention, while being aware that a rich and important book like Greene’s touches on many more themes.
I will start from the second point, because debunking the morality of the different tribes is allegedly what can generate the new metamorality able to resolve the disputes dividing us from them. Take, for instance, the moral stance involved in the famous footbridge dilemma, a variation of the trolley problem. This stance tells us that it is wrong to push a large person onto the tracks, even in order to stop a trolley from running over five unsuspecting workers. Why is that? If there is no other means of stopping the trolley – not even throwing ourselves, as we are too light – isn’t it better to sacrifice one life to save five? Indeed, most people across the world have answered that they should do so – but only when they have to decide whether to pull a lever that will divert the trolley onto a side track on which a single worker stands, while the other five are on the main track. Pulling the lever is considered acceptable, while directly pushing a human being onto the track is not: most respondents think that it is not morally acceptable to “directly” sacrifice one life, not even to save five. Yet the effects are exactly the same in both situations.
The difference seems to lie in the fact that human beings are endowed with a dual mode of moral reasoning, in line with the dual-process theory of reasoning that has been so popular in the past few decades. Its best-known proponent, Nobel laureate Daniel Kahneman (2011), introduced the distinction between System 1 and System 2. The first mode of reacting morally is automatic, unconscious, quick, instinctive, and based on gut feelings we have inherited from our evolutionary history. The second mode is aware, slower, and less instinctive; it draws on our cognitive resources, evaluating and calculating before issuing a reply. According to Greene’s effective metaphor, we are like a camera that can work both in automatic mode and in manual mode. The first is handy and allows us to take generally good pictures even if we are not great photographers. However, in particular environmental conditions, or if we want a specific result, it is best to switch to the manual mode, which allows for finer settings and personalised parameters. In low light, the automatic “portrait” mode will trigger the flash; but sometimes we may want to take a portrait in dim light, and then we need the manual mode to disable the flash.
Trolley tests (of which the footbridge dilemma is only one example) have allowed
Greene to gather a significant amount of data in this sense over the years. Our
immediate moral judgments are biased, inconsistent and ultimately irrational, despite
being seemingly rooted in deep values with philosophical dignity (not treating one’s
neighbour as a means and not killing them, in the footbridge case). Neuroscience has
also supported this thesis, indicating that different areas of the prefrontal cortex are at work depending on which mode of decision-making is engaged.
If we truly are so conditioned by our moral machinery, then it makes sense to try to avoid all the related biases. The point, however, is that the dual-process theory of reasoning does not seem to be as solid as Greene claims. The fascinating exposition of his research leaves out contrary evidence. If the construction of a new metamorality is to be based primarily on facts, as the author repeatedly insists, those facts must be well substantiated. Are we really sure that there is an important distinction between these two types of mental operations?
For example, Kahan (2011) notes that “motivated cognition” does not intervene only in System 1 reasoning: this is one of the biases that Greene attributes to the automatic mode and that should be overcome by the manual one. “Motivated cognition” refers to the fact that people often try to bend various types of reasoning (and even perception) to their practical goals (think of a political dispute), disregarding the correctness of the argument or the appropriateness of the perceptual data used. Motivated cognition is an unconscious process: people who engage in it have no deliberate purpose of bending reasoning and perception to their biased view. They want to reason correctly, but end up doing the opposite. Defenders of the dual-process theory of reasoning hold that, because motivated cognition is unconscious, it only affects System 1 reasoning. However, as various experimental data show (Chaiken and Maheswaran, 1994; Chen, Duckworth and Chaiken, 1999; Balcetis and Dunning, 2006), the conscious, rational and “unbiased” System 2 can be affected as well. Commitment to some extrinsic end or goal can unconsciously bias the way in which people consciously interpret and weigh arguments and empirical evidence1.
Secondly, there is the question of the clear separation and hierarchical order between System 1 and System 2. It is posited that biases or irrationality only affect the first, while the second is immune. The fact that trolley tests seem to involve two different brain areas, depending on whether the answers are fast and affective or meditated and utilitarian, is not sufficient proof that the two processing systems are sharply distinct. Rather, the mainstream of dual-process theory conceives of the two forms of reasoning as integrated and mutually supportive. In this sense, it seems more likely that a perfectly rational System 2, rightly using the relevant information, owes its success to a System 1 that has worked well, while a System 2 that falls into inconsistencies and erroneous selections of data may follow from a System 1 that did not perform its preliminary task well.
The question is open. Greene’s experiments, widely described in the book, suggest that the two systems are separate and work in parallel (and, sometimes, in contradiction), while other studies seem to support the opposite perspective. For example, Peters and colleagues (2006) showed that individuals who are high in numeracy – that is, those who have the skills to reason thoughtfully and systematically in this area (System 2) – rely on immediate sensations and impressionistic reasoning (System 1) no less than others, while still outperforming less skilled individuals in spotting positive-return opportunities. Indeed, they seem to resort to emotions more reliably than those who are low in numeracy. In the experiment, it was the unconscious processing system, responding positively or negatively, that told those high in numeracy that they were facing a good (or bad) opportunity; they then enacted the best strategies using System 2. People low in numeracy, instead, showed a quick and unspecific response to different situations, failing to trigger the best and most suitable rational procedures. It is possible that “greater capacity for systematic (System 2) reasoning over time calibrates heuristic or affective processes (System 1), which thereafter, unconsciously but reliably, turns on systematic reasoning” (Kahan)2.
More recently, it has been claimed that the two systems are not so rigidly separated: “rapid autonomous processes (Type 1) are assumed to yield default responses unless intervened on by distinctive higher order reasoning processes (Type 2). What defines the difference is that Type 2 processing supports hypothetical thinking and load heavily on working memory” (Evans and Stanovich, 2013). This hypothesis was formulated along with a further series of criticisms of dual-process or dual-system theories. One observation is that the proposed clusters of attributes are not reliably aligned: the different features ascribed to the different systems are not always observed together (Keren and Schul, 2009). Another is that there is continuity between the modes of reasoning, rather than the sharp break that Greene seems to posit. It should be said that in some of his experiments Greene does say that System 2 corrects System 1, marking the interaction between them. Furthermore, he could distinguish moral reasoning from the cognitive reasoning that is the object of most criticism. However, Greene himself takes up Kahneman’s theory of the two systems; and if the dual-process theory of reasoning were to fail in the cognitive sphere, it seems unlikely that it could hold in the moral sphere.
Beyond the empirical disputes, if one accepts the separation between automatic moral inclinations and rational reasoning, then with the manual mode – says Greene – we can only be utilitarians. It is interesting, however, to ask what kind of utilitarians, given the recent proliferation of different orientations within this category. Greene only cites the fathers of the theory and does not delve into the contemporary debate. Here it may be useful to follow Reichlin (2013), who identifies a few points of distinction. The first is the value with respect to which one is a utilitarian. This value is generally “well-being”, but there are several theories explicating this notion: preferentialism, hedonism, and the theory of objective values. Greene seems implicitly to reject the last, because in his view there are no values in the traditional sense: they are merely labels for discussion, and one should only ever consider the consequences of each course of action. But Greene also refuses hedonism, arguing against the ideal cases that, on the very basis of hedonism, are used against utilitarianism (for example, the scapegoat); Greene thinks these are not possible cases in the real world, since no utilitarian should support them. So Greene seems oriented toward preferentialism, but his version of preferentialism fades into the idea of impartiality with respect to the value.
As for the justification of the principle of utility maximization, it is well known that all major utilitarian authors have found it difficult to provide one. According to Greene, there is simply no foundation or ultimate justification for that principle. One simply
recognizes the plausibility of his pragmatic consequentialism on the grounds that it seems to be the general principle that works best, all things considered. But this rule is always revisable case by case. The key point is to show the inconsistency of values and principles derived in any other way; in his opinion, no ethics other than the utilitarian can pass the test of rational consistency. In many respects Greene does not stray from common sense: he merely aspires to correct it on a scientific basis. As for the fact that consequentialism often proves too demanding in its imperative to maximize well-being, Greene does not hesitate to reject its excesses. He recommends giving others a little more than is usually done and required by moral common sense, but does not try to introduce a precise principle on this point. Finally, there is the issue of personal integrity: here Bernard Williams objected that utilitarianism leads to the neglect of the feelings, motivations, and causal links behind utility-maximizing behaviour. Greene’s implicit answer is that one does not lose the sense of one’s own moral identity by recognizing the new metamorality that unites all human beings. And this is because what appears to be our venerated deep morality is nothing more than an evolutionary legacy, poorly wired in the brain, that produces many biases and many conflicts. Therefore, Greene concludes, it is best to switch to the new and more efficient metamorality, which promises to increase everybody’s well-being. Most people generally share this hope; and yet the view seems to rest on scientific data that are still not well supported, while also relying on pragmatic requirements that seem to lack adequate cognitive and motivational power.

Notes
1  Cf. D. Kahan, “Two common (& recent) mistakes about dual process reasoning & cognitive bias”, http://www.culturalcognition.net/blog/2012/2/3/two-common-recent-mistakes-about-dual-process-reasoning-cogn.html.
2  Ibidem.

References

Balcetis, E., Dunning, D. (2006), “See what you want to see: Motivational influences on visual perception”, Journal of Personality and Social Psychology, 91(4): 612-625.
Chaiken, S., Maheswaran, D. (1994), “Heuristic processing can bias systematic processing: Effects of source credibility, argument ambiguity, and task importance on attitude judgment”, Journal of Personality and Social Psychology, 66(3): 460-473.
Chen, S., Duckworth, K., Chaiken, S. (1999), “Motivated heuristic and systematic processing”, Psychological Inquiry, 10: 44-49.
Evans, J.S.B., Stanovich, K.E. (2013), “Dual-process theories of higher cognition: Advancing the debate”, Perspectives on Psychological Science, 8(3): 223-241.
Greene, J. (2013), Moral Tribes: Emotion, Reason and the Gap between Us and Them, New York: The Penguin Press.
Kahan, D.M. (2011), “The Supreme Court 2010 Term – Foreword: Neutral principles, motivated cognition, and some problems for constitutional law”, Harvard Law Review, 125: 1-77.
Kahneman, D. (2011), Thinking, Fast and Slow, New York: Farrar, Straus and Giroux.
Keren, G., Schul, Y. (2009), “Two is not always better than one: A critical evaluation of two-system theories”, Perspectives on Psychological Science, 4(6): 533-550.
Peters, E., Västfjäll, D., Slovic, P., Mertz, C.K., Mazzocco, K., Dickert, S. (2006), “Numeracy and decision making”, Psychological Science, 17(5): 407-413.
Reichlin, M. (2013), L’utilitarismo, Bologna: il Mulino.
