Moral Cognition
Group: Dilemmas
Students:
- Alexander Rodríguez Cruz
- Marlon Andrés Ríos López
- Leandro Ezequiel Cortés
How do we decide what’s right or wrong?
Why do people so often disagree about moral issues?
When we do agree, where does that agreement come from?
Can a better understanding of morality help us solve the problems that divide us?

We study the mechanics of moral thinking, primarily using behavioral experiments and
functional neuroimaging (fMRI). Much of our research has focused on understanding the
respective contributions of “fast” automatic processes (such as emotional “gut reactions”) and
“slow” controlled processes (such as reasoning and self-control). We’ve applied this dual-process
framework to classic hypothetical dilemmas (here, here, here, and here), real temptations toward
dishonesty (here and here), beliefs about free will and punishment (here and here), belief in God
(here), and cooperation (here and here).

Thinking about morality in terms of “fast” and “slow” processes provides a general framework,
but that’s just a first step. Our long-term goal is to understand these automatic and controlled
processes in more detailed functional terms, characterizing the kinds of information being
processed and the neural systems that do the processing. (See our papers here, here, here,
and here). Our research indicates that there is no dedicated “moral sense” or “moral faculty.”
Instead, moral judgment depends on the functional integration of multiple cognitive systems,
none of which appears to be specifically dedicated to moral judgment.

As explained in this book chapter and this paper, the category “morality” is like the category
“vehicle.” Sailboats and motorcycles are both vehicles, but their mechanics are very different. A
sailboat works more like a kite than a motorcycle, and a motorcycle works more like a (gas-
powered) lawnmower than a sailboat. This doesn’t mean that the things we call “vehicles” are
too heterogeneous to form a meaningful category. Rather, vehicles are unified at a functional
level (by what they do) rather than at a mechanical level (by how they do it). So, too, with
morality. As explained in Moral Tribes, I (along with many others) believe that morality is a
suite of psychological devices that allow otherwise selfish individuals to reap the benefits of
cooperation. But these devices seem to rely on the same neural systems that we use for thinking,
feeling, and deciding in general.
In light of this, our research strategy is not to isolate and characterize the moral parts of the brain,
but rather to understand how moral judgments arise from the coordinated interaction of various
domain-general cognitive systems. These include systems that enable reasoning and cognitive
control (as shown here, here, here, and here), the representation of value and the motivation of its
pursuit (as shown here, here, and here), the simulation of distal events using sensory imagery
(here and here), and the representation of structured thoughts (here).

Moral Dilemmas and the Trolley Problem

In the late 1990s, Greene and Jonathan Cohen initiated a line of research inspired by the Trolley
Problem, which was originally posed by the philosophers Philippa Foot and Judith Jarvis
Thomson.

First, we have the switch dilemma: A runaway trolley is hurtling down the tracks toward five
people who will be killed if it proceeds on its present course. You can save these five people by
diverting the trolley onto a different set of tracks, one that has only one person on it, but if you
do this that person will be killed. Is it morally permissible to turn the trolley and thus prevent five
deaths at the cost of one? Most people say "Yes."

Then we have the footbridge dilemma: Once again, the trolley is headed for five people. You are
standing next to a large man on a footbridge spanning the tracks. The only way to save the five
people is to push this man off the footbridge and into the path of the trolley. Is that morally
permissible? Most people say "No."

These two cases create a puzzle for moral philosophers: What makes it OK to sacrifice one person
to save five others in the switch case but not in the footbridge case? There is also a psychological
puzzle here: How does everyone know (or "know") that it's OK to turn the trolley but not OK to
push the man off the footbridge?

(And, no, you can't jump yourself: you're not big enough to stop the trolley. And, yes, we're assuming that this will definitely work.)
As the foregoing suggests, our differing responses to these two dilemmas reflect the influences
of competing responses, which are associated with distinct neural pathways. In response to both
cases, we have the explicit thought that it would be better to save more lives. This is a more
controlled response (see papers here and here) that depends on the prefrontal control
network, including the dorsolateral prefrontal cortex (see papers here and here). But in response
to the footbridge case, most people have a strong negative emotional response to the proposed
action of pushing the man off the bridge. Our research has identified features of this
action that make it emotionally salient and has characterized the neural pathways through which
this emotional response operates. (As explained in this book chapter, the neural activity
described in our original 2001 paper on this topic probably has more to do with the representation
of the events described in these dilemmas than with emotional evaluation of those events per se.)

Research from many labs has provided support for this theory and has, more generally, expanded
our understanding of the neural bases of moral judgment and decision-making. For an overview,
see this review. Recent theoretical papers by Fiery Cushman and Molly Crockett link the
competing responses observed in these dilemmas to the operations of “model-free” and “model-
based” systems for behavioral control. This is an important development, connecting research in
moral cognition to research on artificial intelligence as well as research on learning and decision-
making in animals.

What does all of this mean for normative questions about right and wrong? As explained in this
paper and in Moral Tribes, our dual-process moral brains are very good at solving some kinds of
moral problems and very bad at solving others. We do not think that science can, by itself, tell us
what’s right or wrong. However, we believe that scientific self-knowledge can help us make
progress on distinctively modern moral problems—ones that our brains were not designed to
solve. To make good moral decisions it helps to understand the tools that we bring to the job.

Source:
http://www.joshua-greene.net/research/moral-cognition/

YouTube Videos

https://youtu.be/9xHKxrc0PHg

https://www.youtube.com/watch?v=muFAuRkE3-w
