Heuristics and Cognitive Bias
Human perception and cognition are composed of a set of tools that were, and still are, crucial to our survival and flourishing. These tools are nonetheless limited in many ways and leave us open to systematic errors when making decisions. To make quick judgments, we rely on heuristics: mental shortcuts that facilitate problem-solving and probability judgments. These shortcuts are not perfect, optimal, or in many cases even rational, yet they are powerful in that they let us make decisions in the face of otherwise limitless problems of calculation. They are often more than enough to reach a desired goal, typically through approximation. Heuristics include rules of thumb and generalizations, which leave our thinking open to a range of issues revolving around cognitive bias.

A cognitive bias is a systematic, misleading pattern of thought. It arises when an individual attempts to make a judgment or decision quickly, and it influences our choices especially when information is abundant or time is limited. It is a form of flawed reasoning that is nonetheless valuable because it often works: we trust these judgments because we have seen them succeed in past experience. Even so, a bias can distort judgment and, in turn, significantly degrade the individual's decision-making.

While cognitive bias was first studied in cognitive psychology, the notion has been exploited across fields by corporations and politicians seeking to nudge the public's judgment toward various goals. Examples are innumerable. One of the most famous is former US President Ronald Reagan's televised address of July 1981, in which he outlined his administration's plan for tax reduction. The president presented a chart without any numbers for reference, which led large parts of the public to falsely believe the tax cuts would be significant.

In a world that is geared more and more towards exploiting people’s cognitive biases, it is becoming
increasingly important to at the very least be aware of these biases. As such, let’s explore some of
the ways that heuristics give rise to cognitive bias:

Representativeness Heuristics:

The representativeness heuristic is used to determine the degree to which an object X resembles another object Y. The perceived degree of resemblance can then be used to answer other questions, such as “Does X originate from Y?”. If X is judged to strongly resemble Y, the heuristic lends more credence to X originating from Y; if X does not resemble Y, it follows that X probably does not originate from Y.

This heuristic appears in many everyday situations. For example, an individual is given a description of a person X (tidy, well organized, dislikes going out, prefers being alone) and asked to guess X's occupation from a list. Using the representativeness heuristic, they will pick the occupation that most resembles the description, and so they say X is a librarian. Examples like this expose a number of issues.

One issue with representativeness is that its use can be insensitive to the prior probability of outcomes. When estimating an outcome, one should take into account the base rates of the variables in question. In the previous example, guessing that person X is a librarian ignores the fact that there are far fewer librarians in the world than, say, farmers.
A study set out to expose this by presenting participants with descriptions drawn from a group of a hundred individuals. In one condition the group contained 30 engineers and 70 lawyers; in another, 70 engineers and 30 lawyers. Participants classified each individual as a lawyer or an engineer, but did so in complete disregard of the prior probabilities of the data set.
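The normative way to combine a base rate with the evidence of a description is Bayes' rule. A minimal sketch, where the likelihoods for how well the description fits each profession are hypothetical:

```python
def posterior_engineer(prior_engineer, fit_engineer=0.8, fit_lawyer=0.2):
    """Bayes' rule: P(engineer | description).

    fit_engineer / fit_lawyer are assumed likelihoods of the description
    given each profession; only their ratio matters.
    """
    prior_lawyer = 1 - prior_engineer
    num = fit_engineer * prior_engineer
    return num / (num + fit_lawyer * prior_lawyer)

# The same description warrants different answers under different base rates.
low = posterior_engineer(0.30)   # 30 engineers, 70 lawyers
high = posterior_engineer(0.70)  # 70 engineers, 30 lawyers
print(round(low, 2), round(high, 2))  # 0.63 0.9
```

Participants in the study behaved as if `low` and `high` were the same number, which is what ignoring the prior amounts to.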

Another issue is insensitivity to sample size. This systematic error occurs when one is asked to estimate the probability of obtaining a specific result from a data set. Consider being asked to guess the height of a given individual: some might quickly settle on 180 centimeters as the best answer, without considering which population is being studied. A study posing that kind of question found that participants failed to appreciate the importance of sample size in answering it.

Insensitivity to sample size also leads to systematic mistakes when judging the probability that a sample was drawn from one of several candidate populations. Studies have shown that such judgments are unaffected by the size of the sample and instead depend mainly on the sample's proportions.
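Why sample size matters can be shown with a quick simulation: small samples stray far from the true proportion much more often than large ones. The sample sizes and the 60 percent cutoff below are illustrative choices:

```python
import random

random.seed(0)

def prob_extreme(n, trials, p=0.5, cutoff=0.6):
    """Estimate P(sample proportion of heads >= cutoff) for samples of size n."""
    hits = 0
    for _ in range(trials):
        heads = sum(random.random() < p for _ in range(n))
        if heads / n >= cutoff:
            hits += 1
    return hits / trials

small = prob_extreme(10, trials=5000)    # small samples stray from 50% often
large = prob_extreme(1000, trials=1000)  # large samples almost never do
print(small > 0.3, large < 0.01)         # True True
```

A judgment that treats a 6-out-of-10 result and a 600-out-of-1000 result as equally diagnostic is making exactly the error described above.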

Yet another issue concerns misconceptions of chance. People observe a sequence of random events and expect it to reflect the essential characteristics of the process that generated it. A classic example is the coin-toss sequence: after three heads in a row, people expect tails on the next toss. This reasoning is flawed because each new toss is independent, with a 50 percent chance of either outcome regardless of the preceding sequence. The tendency to view chance as a self-correcting force is one of the blind spots of the representativeness heuristic.
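A short simulation makes the independence point concrete: even immediately after three heads in a row, tails comes up about half the time.

```python
import random

random.seed(1)

flips = [random.choice("HT") for _ in range(200_000)]

# Collect every toss that immediately follows three heads in a row.
after_hhh = [flips[i + 3] for i in range(len(flips) - 3)
             if flips[i:i + 3] == ["H", "H", "H"]]

tails_rate = after_hhh.count("T") / len(after_hhh)
print(round(tails_rate, 3))  # close to 0.5: the coin does not "correct itself"
```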

Insensitivity to predictability also comes into play. People routinely make numerical predictions about things such as stock values and basketball scores. The issue is that they often base these predictions on a description of the entity they are analyzing. For example, if a football team is described to them as the best in the world, they will probably predict that the team will win its upcoming match, taking the description at face value rather than doing their own research.

The rule being broken here is a statistical one: the predictability of the quantity should control the extremeness and range of the predictions. Studies have repeatedly shown that people violate this rule when making numerical predictions.

The illusion of validity is yet another problem with prediction. As we have seen, people can be misled when matching a case to a category; their confidence in a prediction depends on its degree of representativeness, and they pay little attention to factors that limit predictive accuracy. In particular, people tend to be more confident in predictions based on redundant input variables. Statistical theory, however, shows that predictions are more accurate when based on variables that are independent of one another. Redundancy in the data therefore decreases the accuracy of a prediction while increasing the individual's confidence in it, which leads to mistakes.
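The statistical claim can be checked with a small simulation. Below, one noisy cue of a target is paired either with a redundant cue (which largely repeats the first cue's error) or with an independent cue (whose error is fresh); averaging independent cues cancels error, while averaging redundant ones does not. All the numbers are illustrative:

```python
import random

random.seed(3)

def rmse(preds, targets):
    """Root-mean-square prediction error."""
    return (sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)) ** 0.5

targets = [random.gauss(0, 1) for _ in range(20_000)]
cue_a = [t + random.gauss(0, 1) for t in targets]
cue_b = [a + random.gauss(0, 0.1) for a in cue_a]  # redundant: repeats cue_a's error
cue_c = [t + random.gauss(0, 1) for t in targets]  # independent: fresh error

redundant = rmse([(a + b) / 2 for a, b in zip(cue_a, cue_b)], targets)
independent = rmse([(a + c) / 2 for a, c in zip(cue_a, cue_c)], targets)

print(redundant > independent)  # True: independent cues average out their errors
```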

Misconceptions of regression are also important. Suppose observation A sits at the high end of a scale and observation B at the low end, and we are asked to predict their next values. One might answer that A will remain high and B will remain low. In reality, extreme values tend to regress toward the mean, something people routinely fail to take into account: on the next measurement, both values will probably move closer to the average rather than further into the extremes.
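Regression to the mean follows whenever an observed score mixes a stable component with noise. A sketch using hypothetical "skill plus luck" scores:

```python
import random

random.seed(2)

# Each observed score = stable skill + transient luck.
skills = [random.gauss(100, 10) for _ in range(10_000)]
test1 = [s + random.gauss(0, 10) for s in skills]
test2 = [s + random.gauss(0, 10) for s in skills]

# Take the 500 people who scored highest the first time around...
top = sorted(range(10_000), key=lambda i: test1[i], reverse=True)[:500]
avg1 = sum(test1[i] for i in top) / 500
avg2 = sum(test2[i] for i in top) / 500

# ...and their retest average falls back toward the overall mean of 100.
print(round(avg1, 1), round(avg2, 1))
```

The top scorers were selected partly for good luck, and luck does not repeat, so their second scores drift back toward the mean even though nothing about them changed.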

Availability Heuristics:

The availability heuristic is used when making predictions, and it operates by judging something on the basis of information that is readily available. One example: on hearing that someone might have cancer, you conclude it is probably not true because you can recall three acquaintances who mistakenly thought they had cancer. The problem is that the judgment rests on easily retrievable but biased information: instead of consulting a more representative population, the person consulted their personal acquaintances, a more available data set.

Availability heuristics are sensitive to biases due to the retrievability of instances: we tend to choose the data set that is easiest to recall (perhaps we know more people from one group than from another).

They are also sensitive to biases arising from the effectiveness of a search set. One task asks people to judge whether there are more words that start with R or more words that have R as their third letter. People answer that more words start with R, simply because it is easier to retrieve words by their first letter than by their third: the words starting with R are more available and come to mind first.

Biases of imaginability also matter. Here participants must judge the frequency of a class whose instances are not stored in memory but must be generated according to some rule. This plays a concrete role when, for example, judging the dangers of an expedition: people will overestimate some risks and underestimate others simply on the basis of how easily the relevant scenarios come to mind.

Illusory correlation is also quite important. People are biased when judging how often two events co-occur: when two events are strongly associated in their minds, they overestimate the frequency of their co-occurrence, producing an illusion of correlation.

Adjustment and Anchoring:

Depending on the situation, individuals make estimates by starting from an initial value, the anchor, and adjusting it to reach a final answer. These adjustments may follow some computational method, but it has been found that they are often inadequate for the problem.

One of the main issues with adjustment and anchoring is that adjustments are typically insufficient. When asked to estimate the result of a long series of multiplications in a short time, people compute only part of the sequence and adjust from that partial product to reach a final answer, which usually ends in underestimation.
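In the classic version of this task, the product 1 × 2 × … × 8 is estimated under time pressure. A sketch of how a partial product makes a misleading anchor (the four-term cutoff is an arbitrary assumption about how far people get):

```python
import math

nums = list(range(1, 9))
true_product = math.prod(nums)  # 1*2*...*8 = 40320

# Hypothetical anchors: the product of only the first few terms seen.
ascending_anchor = math.prod(nums[:4])         # 1*2*3*4 = 24
descending_anchor = math.prod(nums[::-1][:4])  # 8*7*6*5 = 1680

print(ascending_anchor, descending_anchor, true_product)  # 24 1680 40320
```

Both anchors sit far below the true product, so upward adjustment from either one tends to stop short, and the ascending sequence, with its smaller anchor, invites the larger underestimate.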

People are also biased when evaluating conjunctive and disjunctive events: they tend to overestimate the probability of conjunctive events (where every component must occur) and to underestimate the probability of disjunctive events (where any single component suffices). This is especially relevant in planning, where success typically requires a whole chain of steps to go right.
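The arithmetic behind this bias is simple compound probability. With hypothetical numbers:

```python
# A plan with many steps that must ALL succeed (conjunctive) versus a system
# that fails if ANY of many parts fails (disjunctive). p and n are illustrative.
p = 0.95  # probability each individual step succeeds
n = 10    # number of steps / parts

p_all_succeed = p ** n    # conjunctive: people tend to overestimate this
p_any_fails = 1 - p ** n  # disjunctive: people tend to underestimate this

print(round(p_all_succeed, 2), round(p_any_fails, 2))  # 0.6 0.4
```

Even with 95 percent reliability per step, a ten-step plan succeeds only about 60 percent of the time, which is why anchoring on the high per-step probability produces systematic overconfidence in plans.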

Anchoring in the assessment of subjective probability distributions can also be problematic: when people state confidence intervals for an uncertain quantity, anchoring on a single best guess tends to make those intervals too narrow.