
Reason Better
An interdisciplinary guide to critical thinking

Chapter 1. Reasoning

Introduction
Thinking is effortless enough when we're just letting our minds wander. But reasoning—especially good
reasoning—is hard. It requires us to cut through all the irrelevant noise and form our beliefs in ways that
reliably reflect how things are.

Good reasoning helps us acquire accurate beliefs and make good decisions. But, instead, we often use
our reasoning skills to justify our prior beliefs and actions, which just cements our original mistakes. As
the evidence from cognitive psychology shows, we're especially likely to do this when we're thinking
about the things that are most important to us—all the while believing ourselves to be free of bias.

This text is not intended to help you build a more persuasive defense of the beliefs you already have and
the actions you've already chosen. Rather, it's intended to help you reason better, so you can develop
more accurate beliefs and make better choices.
To this end, we will draw from several disciplines to identify the most useful reasoning tools. Each
discipline will shed light on a different aspect of our subject:

cognitive psychology: how systematic cognitive errors unfold, and can be fixed
philosophy: how to clarify our inferences and understand the nature of evidence
statistics: how to make reliable generalizations
probability theory: how to adjust our confidence in response to evidence
decision theory and behavioral economics: how the logic of decision-making works and why we so often make choices that are predictably irrational.

The process of gathering and organizing this material has made me better at reasoning; my hope is that
reading this book will bring the same benefits to you.

Learning objectives
By the end of this chapter, you should understand:

the difference between specific and general reasoning skills
why strong reasoning requires facts and skills, but also the right mindset
which features distinguish the processes of Systems 1 and 2
how cognitive illusions arise from conflicts between these systems
the broad outlines of the availability heuristic, the evidence primacy effect, belief perseverance, confirmation bias, and motivated reasoning

1.1 What it takes


This book contains a lot of information about good and bad reasoning, but the main point is not to learn
facts about good reasoning: it's to acquire the skill of reasoning well.
It would be great if we could improve our reasoning ability just by learning how good reasoning works.
But unfortunately that's not enough, and it often doesn't help at all. Some of the systematic errors we
make are so ingrained that we continue to fall prey to them even when we are tested immediately after
learning about them! [1]

So we need more than new facts; we need new skills. And as in any other area of life, developing skills
takes discipline and practice. Athletic ability is a useful analogy. You can know all about the moves,
strategies, and training regimen needed to excel at a sport. But all of that knowledge isn't enough. Being
good at the sport also requires training hard to develop a certain skill set. The same is true for good
reasoning.

Here's another way that reasoning skills are like any other skill: if
we've developed bad habits, we must start by unlearning those habits
first. People who have been swinging golf clubs badly or playing the
cello without any training must retrain their muscle memory if they
want to excel.

Likewise, the science of cognition has uncovered many bad habits and instincts that we all exhibit in our reasoning. And since we've been forming beliefs and making
decisions all our lives, we've been reinforcing some of those bad habits and instincts. We'll have to
unlearn them first before we can become good reasoners.

Specific vs. general skills


We've all been told that a major reason to go to college is that doing so makes us better at
reasoning. But the reality is that college students spend almost all their time developing a number of
very specific ways of reasoning. They take specialized courses and are taught how to think like
accountants, biologists, art historians, engineers, or philosophers. Do these "domain-specific" reasoning
skills have anything in common? And do they really help our general reasoning ability?

The answer appears to be a qualified yes: a standard college education, with its array of narrow subject
areas, does improve our general reasoning ability to some degree. At the same time, there is evidence
that explicit instruction in critical reasoning is even more effective at improving our general reasoning
ability. [2]

If our goal is to become skilled at reasoning, then, it's useful to focus on the general features of good
thinking. Many of the reasoning errors we'll discuss are made by people in all walks of life—even by
experts in areas like medicine and law. If we can fix those errors, we'll reason better, no matter what our
career or area of interest.
A similar thing holds true in sports. Training for almost any sport will
enhance one's general athleticism to some degree. But well-rounded
athletes make sure to have the foundations of physical fitness in place
—endurance, flexibility, balance, and coordination—and can't spend
all their time on specific skills like scoring three-pointers or making
corner-kicks.

So, to supplement your more specialized courses, this course will help you develop the foundations of
reasoning. Here are some of the things it will help you do:

understand the structure of our reasons for beliefs and actions
draw appropriate conclusions using logic and probability
search for evidence in a way that avoids selection effects
understand when and how to generalize on our observations
identify correlations and distinguish them from causal relationships
assess theories for plausibility and explanatory power
make good decisions by comparing the value and probability of outcomes

These general reasoning skills should be of use to you regardless of how you specialize.

The right mindset


Even more important than the skills on this list, however, is a mindset that allows for good reasoning. [3]
Once again, an analogy to sports is helpful. Think of a highly skilled athlete who only cares about looking
good on the field rather than helping the team win or growing as an athlete. This person might be highly
skilled but is held back by the wrong mindset.

Similarly, if I have great reasoning skills but I only use them to justify my pre-existing views or to
persuade others, I'm not putting those skills to good use. And if my views happen to be wrong to begin
with, I'll just compound my original errors as I become more confident in them. The right mindset for
reasoning requires honestly wanting to know how the world really is. This leads us to actively seek
evidence and continuously revise our own beliefs. (These themes will recur throughout the text.)

In the rest of this chapter, we'll consider insights from cognitive psychology regarding how we form
beliefs, and why we make certain predictable mistakes. Then, in Chapter 2, we'll look more closely at the
kind of mindset needed to resist those systematic errors.
Section Questions

1-1

Learning all about how to reason well...

A is not enough to become a good reasoner; the right skills and mindset are also necessary

B is enough to become a good reasoner as long as your knowledge is paired with the right skills

C is not important because, just like in sports, the only thing that matters is skill

D is unnecessary: all you need is the right mindset of curiosity, openness, and perseverance

1-2

Focusing on general reasoning skills and not just specific reasoning skills...

A is important because acquiring specific reasoning skills does not improve general reasoning skills

B is the only way to become better at reasoning

C is a more effective way to improve general reasoning skills

D is unhelpful because acquiring specific reasoning skills is just as effective a way to become a good reasoner in general
1.2 Our complex minds
The ancient Greek philosopher Plato compared the human soul to a team of horses driven by a
charioteer. The horses represent the desires and emotions that motivate us, pulling us along. The
charioteer represents our Reason, which controls the horses and reins in bad impulses. (A similar
analogy can be found in the Hindu Upanishads.)

Unfortunately, this analogy is misleading. Plato lumps together all of the aspects of thinking and deciding as though performed by a single
"part" that is fully under our control. But thinking and deciding are
actually performed by a complex web of processes, only some of
which are conscious and deliberate.

Certainly, it may feel like our souls are directed by a unified charioteer
of Reason that reflects on our perceptions and makes deliberate choices. In reality, though, our
conscious mind is only doing a fraction of the work when we form beliefs about the world and decide
how to act.

Two systems
Using a common theme from cognitive psychology, we can divide our thought processes into two
groups, based on various features they share. [4] On the one hand, we have a set of conscious and
deliberative processes that we primarily use when we:

figure out how to fix a sink;
work on a difficult math problem;
decide whether to bring an umbrella; or
weigh the pros and cons of a policy proposal.

On the other hand, we have a set of cognitive processes that operate beneath our awareness, but that
are no less essential to how we form beliefs and make decisions. They interpret sensory information for
us, provide us with impressions and hunches, and help us make "snap judgments". We're primarily using
this set of processes when we:

recognize people's faces;
sense that someone is angry, using cues like body language and tone;
get the sudden impression that something is scary or disgusting; or
instantly sense the sizes and locations of objects around us through vision.

These specialized processes seem to operate automatically in the background. For example, if you have
normal facial recognition abilities, you can recognize people you know well without consciously trying.
You don't need to actively think about their identity; your facial recognition system just instantly tells you
who each person is.

By contrast, some people have a severe impairment of this process called prosopagnosia. [5] When they see the faces of people they
know, they just don't experience that flash of instant recognition.
Because of this, they need to compensate by memorizing distinctive
features and then consciously working out whose face they're looking
at. But this is much harder, slower, and less accurate than relying on
subconscious facial recognition. Just take a moment to reflect on how different it would feel if you had
to consciously recall the features of a loved one's face in order to recognize him or her. This should give
you a good sense of the difference between these two types of processes.

So what should we call these two types of processes? The convention in cognitive psychology is to
bundle together the faster, more automatic processes and call them System 1, and to bundle together
the slower, more deliberate processes and call them System 2. (Cognitive psychology is not known for
its creative labels.) To help remember which is which, note that System 1 has its name for two reasons:
first, it is more primitive, since it evolved earlier and is common to many non-human animals; and
second, it reacts faster to sensory input, so it is the first process to respond.

Because we are far more aware of System 2's activities, it's tempting to assume that we form our beliefs
and make our decisions using only our conscious and deliberate system. In addition, whenever we
actually reflect on or explain why we believe something, we have to use System 2. But that doesn't mean
the beliefs were formed by System 2. For example, suppose I make a snap judgment that someone is
angry, based on something about her facial expression or behavior. It's likely that I don't have direct
access to the factors that gave rise to this snap judgment. But if you ask me why I think she's angry, the
explanation I provide will be a kind of reconstruction that makes it sound like I was using System 2 the
whole time.

Let's dig a little deeper into three features (aside from speed) that distinguish the two types of processes:
direct control, transparency, and effort.

Direct control
We consciously decide whether to work on a difficult math problem, or figure out how to fix the sink. By
contrast, System 1 processes operate automatically and can't be turned on or off at will.
For example, if I see someone I know well, I can't just choose not to recognize them. Likewise, when I
hear someone speaking English, the sounds coming from their mouths are automatically recognized as
meaningful words. I can't just choose to turn this interpretive system off and hear words in my own
language as merely noise, the way a completely unknown language sounds to me. (This is why it's much
harder to "tune out" people speaking in a language you understand!)

Here's another example. If you look straight-on at the checkerboard on the left, your visual system will
tell you that area A is darker than area B.

But, in fact, they are exactly the same shade. Your eyes are receiving the same wavelength and intensity
of light from those two places. (You can verify this using the image on the right, or by covering up the rest
of the image with paper.) However, even when you realize that those squares on your screen are the
same color, you can't make your visual system stop seeing them as different. You can't directly control
how your brain interprets the image, not in the same way you can control how you go about solving a
math problem.

Transparency
When we work on a difficult math problem or figure out how to fix the sink, we are aware of all the
individual steps of our reasoning. The process we use to reach our conclusion is open to our conscious
inspection. In other words, System 2 is fairly transparent: we can "see" into it and observe the reasoning
process itself.

System 1 processes, by contrast, are not very transparent. When you look at the checkerboard image,
your visual system also goes through a process to reach a conclusion about the colors of the squares.
The process it performs is complex: starting with raw visual information from your retinas, combining it,
and automatically adjusting for environmental cues. The illusion arises from a sophisticated way of
taking into account the apparent distribution of shade in the image. Your visual system adjusts for the
fact that if square B is in the shade, and square A is in direct light, the only way your retinas would get the
same input from them is if the squares were different shades of gray. So it concludes that they are
different shades, then tells you that by making them look different, even though your retinas are getting
the same input from those two squares. But all of this adjustment and interpretation happens below the
threshold of awareness. You can't turn it off, and you can't even sense it happening, let alone look inside
the process to see what complex cues your visual system is using to conclude that the squares are different shades.
Likewise for the processes that recognize faces, interpret sounds as meaningful words, and figure out
how to catch a ball with a parabolic trajectory. The outputs of these processes may reach your conscious
awareness as impressions, but all the information processing happens under the hood.

Sometimes even the outputs of these processes aren't worth bringing to your attention. For example, there are processes that monitor
sounds while you're sleeping, and don't bother waking you unless the
sounds seem to matter. Even when you're unconscious, these
processes monitor words people are saying and respond differently to
important words, such as your name. [6]

The fact that System 1 operates below the threshold of awareness might seem like a weakness, but it's
actually crucial to our sanity. Reading this text, you have tuned out all kinds of sensory inputs:
background noise, things in your peripheral vision, and the sensations in your feet that you didn't notice
until I mentioned them just now. You have System 1 processes that monitor these inputs in case any of
them deserves to be brought to your attention while you focus on reading. We may, for example, sense
danger before we have any conscious idea of a specific threat. The same goes for the numerous other
social and contextual cues that are being interpreted all the time by System 1. [7] If we constantly had to
pay conscious attention to all of our sensory inputs, we would be overwhelmed and completely unable
to function.

Effort
For most of us, multiplying two-digit numbers in our heads is hard. We don't notice it, but our pupils
dilate and our heart accelerates with the effort. (In one set of studies, subjects were told to wait a while
and then, at a time of their choosing, solve a multiplication problem in their heads. Meanwhile,
psychologists who were measuring their pupils could tell exactly when the subjects started solving the
multiplication problem.)
Unlike System 1 tasks, all System 2 tasks require effort. Consider the last time you weighed the pros and
cons of a decision. How long did you actually think in a single sitting? It's one thing to sleep on a
problem, idly turning it over for a few days. It's another thing to actually sit down and think hard, even
for five minutes. Seriously. Try setting a timer, closing your eyes, and actually thinking hard about a
problem for five minutes without distraction. You probably have some life problems that would benefit
from this exercise. But my guess is that you won't actually try this because it sounds too difficult; and if
you do try, you'll find it very hard not to let your mind wander.

Very few of us regularly force ourselves to think hard. This means we are avoiding the effort of really
engaging System 2: it's just too strenuous and we don't have the mental endurance. We are what
psychologists call cognitive misers: averse to the energy expenditure required to keep System 2 engaged for
very long. [8] In other words, we avoid thinking hard about things for the same reason we avoid
exercising: we're lazy. This is a great shame because thinking hard (like exercise) brings great benefits.

Of course, we don't get this feeling of effort when we use System 1 processes. Imagine that someone
throws you a ball. Using only a couple of seconds of visual input, your automatic processes can work out
where you need to position your hands to catch it. This involves approximating some complex
equations in Newtonian physics to calculate the ball's trajectory, but it all happens effortlessly. When we
consciously work out difficult math problems, our brains fatigue very quickly. But no one ever said, "I'm
so tired from sitting here while my visual system interprets the two visual feeds from my eyes, integrates
them into a three-dimensional field of objects, and calculates the trajectory of objects in that field."

A great strength of System 1 processes is that, though limited to performing specific tasks, they perform
those tasks extremely quickly and well. This is remarkable because, as a sheer matter of processing
complexity, recognizing faces and words is much harder than multiplying three-digit numbers. This is
why, even though the first mechanical calculator was invented in the 17th century, we have only recently
managed to create software that can recognize speech and faces as accurately as humans do, after
billions of dollars in investment.

Clarifications
Now that we grasp the basic distinction between the two systems, I want to make some crucial
clarifications.

First, it's important not to be misled by the "system" labels. What we're describing are not two discrete
units in the mind, but rather two types of processes. (In particular, System 1 is not very unified, being
composed of many specialized processes in different areas of the brain.) Furthermore, the distinction is
really one of degree: in reality, there is an array of processes that are more or less slow, transparent,
controlled, and effortful. We call something a System 1 process if it leans to one side of this continuum,
and a System 2 process if it leans to the other. But there will also be cases in the middle where neither
term fits.
Second, I've used certain examples of tasks that are primarily
performed by processes of each type. But again, things are more
complex than that. For one thing, a given cognitive task might be
performed in a controlled, conscious way on some occasions, and
automatically on others (like breathing). A musician who starts off
needing to consciously focus on a tough fingering task might perform
it automatically after enough practice. This kind of muscle memory allows us to off-load some tasks
onto System 1, freeing up System 2 for other work.

In addition, many cognitive tasks involve a complex combination of both types of processes. For
example, when considering the pros and cons of a public policy, you are primarily using conscious
deliberation. But System 1 is also at work, giving you subtle impressions of positivity and negativity,
some of which have their source in subconscious associations. As we will see, one of the most important
reasoning skills you can develop is the ability to notice such impressions and moderate their influence
with specific strategies.

Systems in conflict
Consider again the checkerboard above. Even once you've become convinced that the two gray squares
are the same shade, your visual system will keep telling you they're different. In this sense, your
conscious, deliberative reasoning system and your automatic visual system are each telling you
something different.

Likewise, if you let your eyes wander over the image below, you'll get the impression that the dots in the
image are gently moving. In fact, the image is completely still. (If you freeze your gaze for several
seconds, you should stop seeing movement. Or, if you need more proof, take a screenshot!)

But even once you've realized this, you can't just command your
visual system to stop seeing movement. As long as you let your eyes
explore the image, your visual system will keep telling you that the
dots are moving. How it interprets the image is not under your direct
control. (Getting the motion to stop by freezing your gaze only counts
as indirect control, like staying still so that a squirrel doesn't run
away.)
When our visual system keeps getting things wrong in a systematic way even after we realize that it's
doing so, we call that a visual illusion. But other System 1 processes can also get things wrong in a
systematic way even after we realize that they're doing so. We can call this more general category of
errors cognitive illusions. [9] For example, it's common for people to be afraid of flying even though
they know perfectly well that commercial jets are extremely safe—about a hundred times safer than
driving the same distance. [10] Our automatic danger-monitoring process can keep telling us that we are
in mortal danger even when we know at a conscious level that we are safe, and we can't turn off that
System 1 process. We can't just reason with a low-level process that evolved over millions of years to
keep animals from falling from trees or cliffs. And it doesn't care about safety statistics; it just feels very
strongly that we should not be strapped into a seat forty thousand feet above the ground.

Here is a useful interview of the Nobel Prize-winning psychologist Daniel Kahneman, in which he
describes the strengths and weaknesses of the two systems, and introduces the notion of a cognitive
illusion:

Video: please visit the textbook on a web or mobile device to view video content.

As we will see throughout this text, System 1 is subject to many other kinds of systematic and
predictable errors that impair our thinking. Good reasoning requires us to notice the influence of
subconscious processes and discern which are trustworthy and which are not.

A metaphor
Despite their limitations, metaphors can be useful. At the outset of this section, we encountered a
picture of the mind as made up of horses and a charioteer. But the psychologist Jonathan Haidt
suggests an alternative that better suits what we've learned about the mind. To describe the relationship
between System 1 and System 2, he uses the analogy of an elephant and rider. The rider, he says, is
"conscious, controlled thought," whereas the elephant includes "gut feelings, intuitions, and snap
judgments." [11]

The elephant and the rider each have their own intelligence, and when they
work together well they enable the unique brilliance of human beings. But they
don't always work together well.
—Jonathan Haidt, The Happiness Hypothesis

This metaphor aptly captures the elements of control, transparency, and effort. Unlike a charioteer,
whose reins attach to bits in the horses' mouths, an elephant rider has no chance of directly controlling
the animal as it walks a given path. The elephant has a mind of its own, the inner workings of which are
hidden from the rider. Moreover, from the rider's perspective, the elephant's actions are automatic and
effortless. (If the elephant is well-trained, the rider can even doze off now and then!)

The rider's role is to know the ultimate destination, to make plans, and to guide the elephant in that direction. But an elephant can't be
steered like a car. Elephants are extremely intelligent and will follow a
path all on their own, making good decisions along the way about
where to step, and so on. If the elephant knows the path well, having
been trained over many occasions to walk that same path, it needs
little to no input from the rider.

In many situations, however, the rider must be alert and ready to correct the animal. The elephant may
not know the final destination, and may even have its own preferences about what to do along the way.
It is liable to get distracted, spooked, or tired. It may wander towards a different goal, or choose a
treacherous shortcut. In such situations, the rider can't rely on brute force: the elephant must be guided,
coaxed, or even tricked. It takes repetitive training for the elephant to become responsive to the rider's
corrections.

Our minds, then, are both rider and elephant. On the path of good reasoning, we need a well-trained
and responsive elephant as well as a discerning rider who knows when to trust the animal and when to
nudge it in a different direction.
Section Questions

1-3

In this section, the example of prosopagnosia was primarily used to illustrate...

A the difference between the process that recognizes faces and the process that interprets emotions

B the difference between transparency and effort in facial recognition

C the fact that facial recognition occurs in a specialized region of the brain

D how different it would feel if we had to use System 2 to recognize faces

1-4

System 1 has the name it does because...

A it is the most important system, and therefore considered primary

B it is older and responds more quickly in a given situation

C it was the first to be identified by cognitive psychologists who study thought processes

D it is more accurate and effective and therefore considered primary

1-5

The "transparency" of System 2 refers to the fact that

A its processes can be turned on or off at will


B its reasoning process itself is open to our awareness

C our threat-detection system has innate knowledge of several ancient threats to humans

D it cannot be monitored because it is invisible

1-6

Visual illusions are like cognitive illusions in that...

A they illustrate how System 1 can be trained to become more accurate in automatic judgments

B the illusions do not arise at all for people who are sufficiently careful to monitor their System 1

C they show us that we can't know the truth about how the world really is

D it is hard to shake the incorrect impression even after we are aware that it is incorrect

1.3 Guiding the mind


Although much of our everyday reasoning is very good, we are sometimes faced with reasoning tasks
that the human mind is not well-suited to perform. In such situations, we may feel that we are reasoning
perfectly well, while actually falling prey to systematic errors. Avoiding these pitfalls requires an attentive
rider who is ready to correct the elephant when needed. Eventually, with enough training, the elephant
may get better at avoiding them all on its own.

The errors that I call cognitive pitfalls include not only mental glitches uncovered by cognitive
psychologists, but also errors in probabilistic reasoning, common mistakes in decision making, and
even some logical fallacies. In short, when we tend to mess up
reasoning in a systematic way, that counts as a cognitive pitfall.
We will encounter many more cognitive pitfalls in subsequent
chapters, but it is worth introducing a few examples in order to
illustrate why we run into them. We can break these down into three
sources of error:

we like to take shortcuts rather than reason effortfully;
we hold onto beliefs without good reason; and
we have motivations for our beliefs that conflict with accuracy.

All of these errors tend to go unnoticed by our conscious thought processes. Moreover, thinking in a
hurried and impulsive manner makes us more susceptible to them.

Shortcuts
The elephant often knows where to go, but there are points along the way where its inclinations are not
reliable. In those cases, if the rider isn't careful, the elephant may decide on a "shortcut" that actually
leads away from the path. One of the most important skills in reasoning is learning when we can trust
the elephant's impulses and when we can't.

Too often, when faced with a question that we should answer through effortful reasoning, we simply
allow System 1 to guess. This is a cognitive shortcut we use to avoid the effort of System 2 thinking. But
for the sort of question that System 1 is not naturally good at answering, it'll hand us an answer using a
quick process that's ill-suited to the situation.

For example, we answer extremely simple math problems using System 1. If I ask you what 2 + 2 equals,
the answer just pops into your head effortlessly. Now consider this question:

A bat and a ball together cost $1.10. The bat costs a dollar more than the ball. How much does the
ball cost?

Take a moment to answer the question before reading on.

The answer might seem obvious, but what if I told you that most Ivy League students, and 80% of the
general public, get it wrong? If the answer took virtually no effort on your part, that's a sign that you just
let your System 1 answer it, and you might want to go back and check your answer.
Nearly everyone's first impulse is to say that the ball costs 10 cents. This answer jumps out at us because
subtracting $1 from $1.10 is so easy that it's automatic. But some people question that initial reaction, at
which point they notice that if the bat costs $1 and the ball costs 10 cents, the bat does not actually cost
a dollar more than the ball: it only costs 90 cents more.
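If you want to verify the correct answer, the algebra is short: if the ball costs b, the bat costs b + $1.00, so together they cost 2b + $1.00 = $1.10, which gives b = $0.05. The ball costs 5 cents and the bat $1.05. Here is a minimal brute-force check in code (an illustrative sketch, working in cents so the arithmetic stays exact):

```python
# Brute-force check of the bat-and-ball problem (illustrative sketch).
# Work in cents to avoid floating-point rounding issues.
total = 110        # bat + ball together cost $1.10
difference = 100   # the bat costs $1.00 more than the ball

for ball in range(total + 1):
    bat = ball + difference
    if ball + bat == total:
        print(f"ball = {ball} cents, bat = {bat} cents")
        # Prints: ball = 5 cents, bat = 105 cents
```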

We don't get the answer wrong because the math is difficult. Instead,
the problem is that a plausible answer jumps out, tempting us to use
no additional effort. The tendency to override that temptation is what
psychologists call cognitive reflection, and the bat-and-ball question
is one of a series of questions used to measure it. What makes the
question so hard is precisely that it seems so easy. In fact, when
people are given versions of the bat-and-ball problem with slightly more complex numbers, they tend to
do better. Faced with a trickier math problem in which no answer seems obvious, we have to actually
engage System 2. And once we've done that, we're more likely to notice that we can't solve the problem
simply by subtracting one number from the other. [12]

Here's another example. Two bags are sitting on a table. One bag has two apples in it, and the other has
an apple and an orange. With no idea which bag is which, you reach into one and grab something at
random. It's an apple. Given this, how likely is it that the bag you reached into contains the orange?
Think about this long enough to reach an answer before moving on.

The answer that jumps out to most people is that the probability is 50%, but it's easy to see why that
can't be right. You were more likely to pull out an apple at random from a bag if it only had apples in it.
So pulling out an apple must provide some evidence that the bag has only apples. Before you reached
into the bag, the probability it contained only apples was 50%, so that probability should be higher
now. (As we'll see, this is one of the central principles of evidence: when you get evidence for a
hypothesis, you should increase your confidence in that hypothesis.)

If you are like most people, this explanation still won't dislodge your intuition that the right answer
should be 50%. It might help to consider the fact that now there are three unknown fruits left, and only
one of them is in the bag you reached into. In Chapter 8, you'll learn the principles of probability that
explain how to work out the right answer. For now, though, the point is that our brains are not very good
at even simple assessments of probability. This is important to know about ourselves, because
probability judgments are unavoidable in everyday life as well as many professional contexts.
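You also don't have to take my word for it. The short simulation below (a minimal sketch, with made-up variable names) plays out the two-bags scenario many times and keeps only the runs in which an apple was drawn; the fraction of those runs where the chosen bag contained the orange comes out near one third, not one half:

```python
import random

# Simulation of the two-bags puzzle (illustrative sketch).
# Bag 1 holds two apples; bag 2 holds an apple and an orange.
bags = [["apple", "apple"], ["apple", "orange"]]

apple_draws = 0        # runs where the random draw was an apple
orange_bag_count = 0   # of those, runs where the chosen bag held the orange

for _ in range(100_000):
    bag = random.choice(bags)      # pick a bag at random
    fruit = random.choice(bag)     # grab one fruit at random
    if fruit == "apple":           # keep only the runs that match the story
        apple_draws += 1
        if "orange" in bag:
            orange_bag_count += 1

print(orange_bag_count / apple_draws)   # roughly 0.33, not 0.5
```

Drawing an apple really is evidence that you reached into the all-apple bag, just as the principle above suggests.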

These two examples illustrate why we should be suspicious of answers handed to us by System 1. In
some cases—such as the bat-and-ball example—we can start by second-guessing our initial reaction
and then simply calculate the correct answer using System 2. In other cases—such as the bags-of-fruit
example—we may not know how to calculate the correct answer. But at least we should think twice
before assuming that our intuition is correct! The first and most important step is noticing when we don't
really know the answer.
Here is a third kind of case where System 1 tends to get thrown by a
tempting answer. Suppose you're wondering how common some
kind of event is—for example, how often do people die from shark
attacks, tornados, or accidents with furniture? An easy way to guess is
by asking ourselves how easily we can bring to mind examples from
our memory. The more difficult it is to summon an example, the more
uncommon the event—or so it might seem. As far as System 1 is concerned, this seems like a pretty good
shortcut to answering a statistical question. Any cognitive shortcut that we commonly use to
bypass effortful reasoning is a heuristic, and this particular one is called the availability heuristic. But,
of course, the ease with which we can recall things is affected by irrelevant factors including the
vividness and order of our memories. So the availability of an example to our memory is not always a
good indication of how common it is in reality.

If the examples we remember come from what we've seen or heard in the media, things are even worse.
There's no systematic relationship between how many media reports cover a given type of event and
how many events of that type actually occur. Shark attacks and tornados are rare and dramatic, while
drownings and accidents with furniture are usually not considered newsworthy. In a world of
sensational news media, System 1's availability heuristic is an extremely unreliable way to evaluate the
prevalence of events.

In contemporary life, it's crucial for us to be able to make accurate judgments in cases where our
automatic intuitions may not be reliable. Even if System 1 is not well-suited to answer the kind of
question we face, it might strongly suggest an answer. And it requires a concerted effort to check that
impulse. Situations like those described in this section should raise our suspicions, because System 1 is
not very good at assessing costs, probabilities, or risks. Better to set aside the tempting answer, grit our
teeth, and actually think through the problem.

Stubbornness
It takes effort to change our minds, so we tend not to. Once a belief has lodged itself in our heads, it can
be hard to shake, even if we no longer have any support for it. In our analogy, when the elephant is
marching happily along, effort is required to get the animal to change course. If the rider is not paying
attention or is too lazy to make a correction, the elephant will just keep marching in the same direction,
even if there is no longer any good reason to do so.
This phenomenon is known as belief perseverance, and it is well-established in cognitive psychology.
For example, people allow their beliefs to be influenced by news and journal articles even after learning
that those articles have been debunked and retracted by the source. [13] The same effect has been
found in all kinds of cases where people discover that their original grounds for believing something
have been completely undercut. In a standard kind of experiment testing this phenomenon, subjects
first answer a series of difficult questions. Afterward, they receive scores that are given completely at
random—though at first they do not know this. Eventually, they are told the truth: that their scores had
nothing to do with their performance. Still, when asked later to assess themselves on their ability to
answer the relevant kind of question, those scores continued to impact their self-evaluations. [14]

Another type of study involves two groups being told opposite made-up "facts". For example, one group of people is told that people who
love risk make better firefighters, while another group is told that they
make worse ones. Later, everyone in the study is informed that the
two groups were told opposite things, and that the claims had been
fabricated. Even after the original claims were debunked, though,
when asked about their own personal views regarding risk-taking and firefighting ability, subjects
continued to be influenced by whichever claim they had been randomly assigned. [15]

Once a piece of misinformation has been incorporated into someone's belief system, significant
cognitive effort is required to dislodge it! This is true not only in cases where people don't have the
opportunity to seek out additional evidence, like those in the studies just described. It's also true even
when people do have that opportunity. This is because we are subject to confirmation bias: we tend to
notice and focus on potential evidence for our pre-existing views while neglecting or discounting
evidence to the contrary. [16]

Confirmation bias is perhaps the best known and most widely accepted notion
of inferential error to come out of the literature on human reasoning.
—Jonathan Evans, Bias in Human Reasoning

The impact of confirmation bias on our reasoning is hard to overstate. Again and again, and in a wide
range of situations, psychologists have noted that people find ways to confirm their pre-existing beliefs.
Sometimes this is because we are emotionally attached to those beliefs and thus are motivated to seek
out only sources of evidence that supports them. But confirmation bias also occurs when the issue is
something we don't care much about. This is because our expectations color how we interpret
ambiguous or neutral experiences, making them seem to fit well with our pre-existing views. [17] In other
words, we tend to see what we'd expect to see if our views were true, and not to notice details that
would conflict with our views.
Confirmation bias leads to a related cognitive pitfall involving cases where we form beliefs by making a
series of observations over time. In such cases, we tend to develop opinions early and then either
interpret later evidence in a way that confirms those opinions (due to confirmation bias), or simply pay
less attention to new information out of sheer laziness. The result is known as the evidence primacy
effect: earlier evidence has greater influence on our beliefs. [18]

Imagine you are asked to judge a murder case based on a series of facts. You are given a description of the case followed by about 20
relevant pieces of evidence, half supporting guilt and the other half
supporting innocence. (For example, the defendant was seen driving
in a different direction just before the murder; on the other hand, the
defendant and the victim had recently engaged in a loud argument.)
After considering the evidence, you are asked to assess the probability of the defendant's guilt. Would it
matter in what order you heard the evidence?

Most of us would like to think that the order of evidence would have little effect on us. As long as we see
all of the evidence, we expect to be able to weigh it fairly. But in a study where subjects were put in
exactly this scenario, those who saw the incriminating evidence first assigned an average probability of
75% to the defendant's guilt, while those who saw the exonerating evidence first assigned an average probability of 45%. [19] It is hard to escape the unfortunate conclusion that the order of evidence in court could determine whether or not someone is thrown in jail for the rest of their life.

The importance of first impressions goes beyond the courtroom, of course. Suppose, for example, I see
Bob being friendly to someone, and form the impression that he is friendly. If I subsequently see Bob
interacting normally with people, I am more likely to notice things about those interactions that fit with
my image of Bob as friendly. If I then see Bob being unfriendly, I am more likely to assume that he had a
good reason to behave that way. But, if I had witnessed all of these scenes in reverse order, I would
probably have ended up with a very different impression of Bob. I would have interpreted all of the later
experiences through the lens of the unfriendly interaction, which I would have seen first. Either way, I'd
be making a mistake: the first time you meet someone is no more representative of their true nature
than any other time. Perhaps less so, if they are trying especially hard to make a good impression!
To sum up: when we have formed beliefs, we often cling to them even when our original reasons are
debunked. And then we go on to interpret subsequent bits of evidence in a way that confirms them.
Even when our beliefs concern things we don't particularly care about, they are hard to shake. So what
happens when our beliefs do concern things we care about? In that case, they are even harder to shake,
as we're about to find out!

Motivated reasoning
Mostly, we want to have accurate beliefs. But other motivations are involved as well, often beneath the
threshold of our awareness. In other words, even if the rider's goal is to form accurate beliefs, the
elephant may have goals of its own.

In forming and maintaining beliefs, we are often at some level motivated by how we would like things to
be rather than merely by how they actually are. This is called motivated reasoning. For example, we like
to have positive opinions of ourselves, and at least in cases where there is plenty of room for
interpretation, we are inclined to believe that we have personal traits that are above average. Studies
have found that 93% of American drivers rated themselves better than the median driver; a full 25% of
students considered themselves to be in the top 1% in terms of the ability to "get along with others"; and
94% of college professors thought they did "above-average work". [20] Many studies have found a
human tendency to consider oneself better than average on a wide range of hard-to-measure
personality traits.

Of course, we can't just choose to believe something because we want it to be true. I can't just convince myself, for example, that I am
actually on a boat right now, however much I may want to be on a
boat. Motivated reasoning doesn't work like that: we can't just
override obvious truths with motivated beliefs. [21] So our motivated
reasoning has to be subtle, tipping the scales here and there when
the situation is murky. We typically don't let ourselves notice when we
are engaging in motivated reasoning, since that would defeat the purpose. We want to feel like we've
formed our beliefs impartially, thereby maintaining the illusion that our beliefs and desires just happen
to line up. (It's important to stress that many of our beliefs are "motivated" in this sense even though
we're unaware that we have the relevant motivations: most of our motivated reasoning is entirely
subconscious and non-transparent.)

For this reason, we become more susceptible to motivated reasoning the less straightforward the
evidence is. For example, it is fairly easy to just believe that we are better than average when it comes to
traits that are hard to measure, like leadership ability and getting along with others. This is much harder
when it comes to beliefs about our trigonometry skill or ability to run long distances. [22] We may have
no strong evidence that we are better leaders than other people, but then we don't have strong evidence
against it either. When we want to believe something, we are tempted to apply lower standards for how
much evidence is required. That goes a long way towards allowing us to adopt the beliefs we like,
without the process being so obvious to us that it defeats the purpose. In fact, we tend not to notice the
process occurring at all.

It is neither the case that people believe whatever they wish to believe nor that
beliefs are untouched by the hand of wishes and fears.
—Peter Ditto and David Lopez, 'Motivated Skepticism'

Motivated reasoning can also work behind the scenes to influence our response to evidence. For
example, since we have limited time and energy to think, we often focus on the bits of evidence that we
find interesting. But, without our realizing it, the "interesting" bits of evidence tend to be those that
happen to support our favored views.

This general tendency has been found in dozens of studies of human reasoning. Here are just a few:

When people with opposing views examine the same evidence (which provides support for both
sides), most people end up more confident in whatever view they started with. [23]
People who are motivated to disbelieve the conclusions of scientific studies look harder to find
problems with them. [24]
In a series of studies, subjects were "tested" for a made-up enzyme deficiency. Those who "tested
positive" considered the test less accurate and the deficiency less serious than those who "tested
negative", even though everyone was given the same information about the deficiency and the test.
[25]

Together, motivated reasoning and confirmation bias make a powerful mix. When our opinions are not
only preconceived but also motivated, we are not only more likely to notice evidence that confirms our
motivated opinions, but also to apply selective standards when evaluating evidence. This means we
tend to accept evidence that supports our views uncritically, while also seeking ways to discredit
evidence that conflicts with them.

Because the way motivated reasoning works is typically not transparent to us, we are often unaware of
the real causes of our beliefs. Of course, if we're challenged by someone else, we can often come up with
justifying reasons for holding the belief. And we can rehearse some justifying reasons to ourselves if we
need to help ourselves feel like we have good reasons. But they may not be our real reasons—in the
sense of actually explaining why we have the belief. The real explanation might be a preference followed
by a subconscious process of noticing primarily evidence that confirms this preference.
A caveat in closing
Having considered the cognitive pitfalls involving System 1, you might get the impression that System 1
is extremely unreliable, and that we should never "listen to our gut". Or, as the main character puts it in
High Fidelity:

"I've been thinking with my guts since I was fourteen years old, and frankly
speaking, between you and me, I have come to the conclusion that my guts
have shit for brains."
—Nick Hornby, High Fidelity

But this isn't the right way to think about the reliability of System 1 in general, for two reasons. First, we
have deliberately been focusing on situations that tend to lead to bad outcomes. Our automatic
processes are generally far better and faster at processing information than our deliberate ones. In fact,
the sheer volume of the sensory data that System 1 handles would entirely overwhelm our conscious
minds. No one could consciously perform enough real-time calculations to approximate the amount of
processing that System 1 must do when, for example, we catch a baseball.

The second reason not to disparage System 1 is this. Even for situations where our automatic judgments
tend to get things wrong, they can sometimes become more reliable over time, while remaining much
faster than deliberate reasoning. This means that with enough training of the right sort, we can
sometimes develop our gut reactions into genuinely skilled intuition: the ability to make fast and
accurate judgments about a situation by recognizing learned patterns in it. But this only works under the
right set of conditions: in particular, a great deal of experience in an environment that offers clear and
reliable feedback about the accuracy of one's judgments. [26] Unfortunately, it is common for people to
think they have developed skilled intuition when they actually have not. One of the most important
skills of rationality is knowing when the elephant can be trusted to find its way, and when it needs to be
guided by an attentive rider.
Section Questions

1-7

The bat-and-ball example and the bags-of-fruit example both illustrate...

A that in certain cases we should be wary of our immediate intuitions

B that our System 1 is not very good at calculating probabilities

C that we are "cognitive misers" when it comes to answering very di icult numerical problems

D that under the right conditions, our System 1 can be trained to provide quick and reliable intuitions

1-8

The murder case was used to illustrate...

A that motivated reasoning can color how we interpret ambiguous evidence

B that our System 1 is not very good at estimating probabilities

C that our beliefs are often affected by which pieces of evidence we get first

D that we are more likely to judge a person as being guilty than as being innocent when we are given evidence on both sides

1-9

When we interpret evidence in a biased way due to motivated reasoning, we tend to...
A simply decide that we want to believe something and then figure out ways to convince ourselves that it is true

B knowingly apply selective standards in order to discredit conflicting evidence

C deliberately ignore evidence on the other side so that we can bolster our own view

D think we are actually being unbiased and fair

1-10

If System 1 is not naturally skilled at a certain kind of reasoning task, ...

A it may still be possible, under the right conditions, to train it to improve

B it is easy to tell that it is not skilled and avoid trusting its responses when faced with that kind of task.

C then that task is not the sort of task that System 1 performs, because there is a clear division between System 1 tasks and System 2 tasks

D the only way that reasoning task can be performed reliably is with effortful and deliberate thought processes

Key terms
Availability heuristic: judging the frequency or probability of an event or attribute by asking ourselves
how easily we can bring examples to mind from memory.

Belief perseverance: the tendency to continue holding a belief even if its original support has been
discredited, and in the face of contrary evidence.
Cognitive illusions: an involuntary error in our thinking or memory due to System 1, which continues
to seem correct even if we consciously realize it's not.

Cognitive pitfalls: a common, predictable error in human reasoning. Cognitive pitfalls include mental
glitches uncovered by cognitive psychologists, as well as logical fallacies.

Cognitive reflection: the habit of checking initial impressions supplied by System 1, and overriding
them when appropriate.

Confirmation bias: the tendency to notice or focus on potential evidence for our pre-existing views,
and to neglect or discount contrary evidence. Confirmation bias can be present with or without an
underlying motive to have the belief in the first place.

Evidence primacy effect: in a process where information is acquired over time, the tendency to give
early information more evidential weight than late information. This tendency arises when we develop
opinions early on, leading to confirmation bias when interpreting later information, or simply a failure to
pay as much attention to it.

Heuristic: a cognitive shortcut used to bypass the more effortful type of reasoning that would be
required to arrive at an accurate answer. Heuristics are susceptible to systematic and predictable errors.

Motivated reasoning: forming or maintaining a belief at least partly because, at some level, we want it
to be true. This manifests itself in selective standards for belief, seeking and accepting evidence that
confirms desired beliefs, and ignoring or discounting evidence that disconfirms them.

Skilled intuition: the ability to make fast and accurate judgments about a situation by recognizing
learned patterns in it. This requires training under specific kinds of conditions.

System 1: the collection of cognitive processes that feel automatic and effortless but are not transparent.
These include specialized processes that interpret sensory data and are the source of our impressions,
feelings, intuitions, and impulses. (The distinction between the two systems is one of degree, and the
two systems often overlap, but it is still useful to distinguish them.)

System 2: the collection of cognitive processes that are directly controlled, effortful, and transparent.
(The distinction between the two systems is one of degree, and the two systems often overlap, but it is
still useful to distinguish them.)
