All Summaries EP 17 - 18
Session: All of them Exam Guide
The computational model of the mind is based on the assumption that the essence of thinking
can be captured by describing what the brain does as manipulating symbols.
The “cognitive revolution” in psychology really got under way (in the 1960s) when the first
computer programming languages were applied to the task of summarizing and mimicking the
mental operations of people performing intellectual tasks like chess playing, logical deduction,
and mental arithmetic.
Many aspects of human thinking, including judgment and decision making, can be captured
with computational models. The essential parts of these models are symbols and operations
that compare, combine, and record (in memory) the symbols.
The other half of the cognitive theory is a description of the elementary information processes
that operate on the representations to store them, compare them, and transform them in
productive thought.
Although we are aware of (and can report on) some aspects of cognitive processing, mostly the
symbolic products of hidden processes such as the digit ideas in mental arithmetic, most of the
cognitive system is unconscious.
o So, the first insight from cognitive science is that we can think of intellectual
achievements, like judging and deciding, as computation and that computation can be
broken down into symbolic representations and operations on those representations.
o In addition, we emphasize that both automatic and controlled modes of thinking can be
modelled as computations in this sense.
The early outlines of the cognitive system included three kinds of memory stores:
I. sensory input buffers that hold and transform incoming sensory information over a span
of a few seconds;
II. a limited short-term working memory where most of conscious thinking occurs; and
III. a capacious long-term memory where we store concepts, images, facts, and procedures.
Modern conceptions distinguish between several more processing modules and memory
buffers, all linked to a central working memory.
o In the multi-module model, there are input
and output modules, which encode
information from each sensory system
(relying on one or more memory buffers) and
generate motor responses.
o A Working Memory, often analogized to the
surface of a workbench on which projects
(problems) are completed, is the central hub
of the system, and it comprises a central
executive processor, a goal stack that organizes processing, and at least two short-term
memory buffers that hold visual and verbal information that is currently in use.
o The other major part of the system is a Long-Term Memory that contains all sorts of
information including procedures for thinking and deciding.
Two properties of the memory stores will play major roles in our explanations for judgment and
decision-making phenomena.
I. First, the limited capacity of Working Memory will be used to explain some departures
from optimal, rational performance.
James March and Herbert Simon (1958) introduced the concept of bounded
rationality in decision making, by which they meant approximately optimal
behaviour; the primary explanation for departures from the optimum is that we
simply don't have the capacity to compute optimal solutions, because working
memory limits how much information we can use.
II. Second, we will often refer to the many facts and procedures that have been learned
and stored in long-term memory.
1.4 Through Darkest Psychoanalytic Theory and Behaviourism to Cognition
Until the 1950s, psychology was dominated by two traditions: psychoanalytic theory and
behaviourism.
Psychoanalytic theory was pressed into service to explain one of the most important
psychopathologies of the 20th century: Nazism.
According to the behaviouristic approach, in sharp contrast to psychoanalysis, the reinforcing
properties of the rewards or punishments that follow a behaviour determine whether the
behaviour will become habitual.
Behaviourism rests on the “law of effect,” which maintains that the influence of consequences is automatic.
Marvin Levine (1975) demonstrated that participants’ conscious beliefs were virtually perfect
predictors of their responses, their particular error patterns, and the time it took them to learn.
Most psychologists today accept the compelling assumption that ideas and beliefs cause
behaviour and that cognitive theories are the best route to understanding and improving
important behaviours.
1.5 Quality of Choice: Rationality
A rational choice can be defined as one that meets four criteria:
I. It is based on the decision maker’s current assets. Assets include not only money, but
also physiological state, psychological capacities, social relationships, and feelings.
II. It is based on the possible consequences of the choice.
III. When these consequences are uncertain, their likelihood is evaluated according to the
basic rules of probability theory.
IV. It is a choice that is adaptive within the constraints of those probabilities and the values
or satisfactions associated with each of the possible consequences of the choice.
In fact, there are common decision-making procedures that have no direct relationship to these
criteria of rationality. They include the following:
V. Habit, choosing what we have chosen before;
VI. Conformity, making whatever choice (you think) most other people would make or
imitating the choices of people you admire (Boyd and Richerson [1982] have pointed out
that imitation of success can be adaptive in general, though not, for example, if it is
imitation of the drug use of a particular rock star or professional athlete you admire for
his or her professional achievements); and
VII. Choosing on the basis of (your interpretation of) religious principles or cultural
mandates.
Because reality is not contradictory, contradictory thinking is irrational thinking. A proposition about
reality cannot be both true and false.
Probability theory has its roots in the work of early mathematicians such as Girolamo Cardano
(1501–1576), a true Renaissance man who was simultaneously a mathematician, physician,
accountant, and inveterate gambler.
The most recent impetus for the development of a rational decision theory, however, comes
from a book published in 1947 entitled Theory of Games and Economic Behaviour by
mathematician John von Neumann and economist Oskar Morgenstern.
Von Neumann and Morgenstern provided a theory of decision making according to the principle
of maximizing expected utility.
Traditional economists, looking at the aggregate behaviour of many individual decision makers
in broad economic contexts, are satisfied that the principle of maximizing expected utility does
describe what happens.
o There are good reasons to start with the optimistic hypothesis that the rational, expected
utility theory and the descriptive—how people really behave—theories are the same. After
all, our decision-making habits have been “designed” by millions of years of evolutionary
selection and, if that weren’t enough, have been shaped by a lifetime of adaptive learning
experiences.
In contrast, psychologists and behavioural economists studying the decision making of
individuals and organizations tend to reach the opposite conclusion from that of traditional
economists. Not only do the choices of individuals and social decision-making groups tend to
violate the principle of maximizing expected utility; they are also often patently irrational.
o Those behavioural scientists who conclude that the rational model is not a good descriptive
model have also criticized the apparent descriptive successes of the rational model
reported by Becker and others. The catch is that by specifying the theory in terms of utility
rather than concrete values (like dollars), it is almost always possible to assume that some
sort of maximization principle works and then, ex post, to define utilities accordingly.
Risk A situation in which the likelihood of each possible outcome is known or can be estimated and
no single possible outcome is certain to occur.
Five main topics are examined:
I. Degree of Risk. Probabilities are used to measure the degree of risk and the likely profit
from a risky undertaking.
II. Decision Making Under Uncertainty. Whether people choose a risky option over a non-
risky one depends on their attitudes toward risk and on the expected payoffs of each
option.
III. Avoiding Risk. People try to reduce their overall risk by not making risky choices, taking
actions to lower the likelihood of a disaster, combining risks, insuring, and in other ways.
IV. Investing Under Uncertainty. Whether people make an investment depends on the
riskiness of the payoff, the expected return, attitudes toward risk, the interest rate, and
whether it is profitable to alter the likelihood of a good outcome.
V. Behavioural Economics of Risk. Because some people do not choose among risky options
the way that traditional economic theory predicts, some researchers have switched to new
models that incorporate psychological factors.
Important Terms and Quick Overview
A particular event has a number of possible outcomes. To describe how risky this activity/event
is, we need to quantify the likelihood that each possible outcome occurs.
A probability is a number between 0 and 1 that indicates the likelihood that a particular outcome
will occur.
If we have a history of the outcomes for an event, we can use the frequency with which a
particular outcome occurred as our estimate of the probability.
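A quick sketch in Python (the outcome history is invented for illustration) of using relative frequency as a probability estimate:

```python
from collections import Counter

# Hypothetical history of outcomes for a repeated event.
history = ["rain", "dry", "dry", "rain", "dry", "dry", "dry", "rain"]

counts = Counter(history)
# Relative frequency of "rain" as the estimate of its probability.
prob_rain = counts["rain"] / len(history)
print(prob_rain)   # 0.375
```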
Often we do not have a history that allows us to calculate the frequency. We use whatever
information we have to form a subjective probability, which is our best estimate of the
likelihood that an outcome will occur.
A probability distribution relates the probability of occurrence to each possible outcome.
Expected value The probability weighted average of the values of all possible outcomes.
The variance is the probability weighted average of the squares of the differences between the
observed outcome and the expected value.
o Holding the expected value constant,
the smaller the standard deviation,
the smaller the risk.
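These three definitions can be checked with a small numerical sketch (the gamble below is invented for illustration):

```python
# Hypothetical gamble: win 100 with probability 0.4, win 0 with probability 0.6.
outcomes = [100.0, 0.0]
probs = [0.4, 0.6]

# Expected value: the probability-weighted average of the outcomes.
ev = sum(p * x for p, x in zip(probs, outcomes))                # 40.0

# Variance: probability-weighted average of squared deviations from the EV.
var = sum(p * (x - ev) ** 2 for p, x in zip(probs, outcomes))   # 2400.0

# Standard deviation: the square root of the variance.
std = var ** 0.5

print(ev, var, round(std, 2))
```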
Expected utility is the probability weighted
average of the utility from each possible
outcome.
Fair bet A wager with an expected value of
zero.
Risk averse unwilling to make a fair bet.
Risk neutral indifferent about making a fair bet.
Risk preferring willing to make a fair bet.
A person whose utility function is concave picks the less risky choice if both choices have the
same expected value.
Risk premium The amount that a risk averse person would pay to avoid taking a risk.
Someone who is risk neutral has a constant marginal utility of wealth: Each extra dollar of
wealth raises utility by the same amount as the previous dollar. With constant marginal utility of
wealth, the utility curve is a straight line in a utility and wealth graph.
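A minimal sketch, assuming a square-root (concave) utility function, shows why a risk-averse person refuses a fair bet and how the risk premium falls out of the certainty equivalent; the wealth and stake figures are invented:

```python
import math

def u(w):
    # Assumed concave utility: diminishing marginal utility of wealth.
    return math.sqrt(w)

wealth = 100.0
# Fair bet: win or lose 36 with equal probability (expected value of the bet is 0).
eu_bet = 0.5 * u(wealth + 36) + 0.5 * u(wealth - 36)

# A risk-averse (concave-utility) person turns the fair bet down:
assert eu_bet < u(wealth)

# Certainty equivalent: the sure wealth that gives the same utility as the bet.
ce = eu_bet ** 2
# Risk premium: the amount the person would pay to avoid the risk.
risk_premium = wealth - ce
print(round(ce, 2), round(risk_premium, 2))
```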
In general, a risk neutral person chooses the option with the highest expected value, because
maximizing expected value maximizes utility. A risk-neutral person chooses the riskier option if it
has even a slightly higher expected value than the less risky option. Equivalently, the risk
premium for a risk-neutral person is zero.
Individuals and firms often reduce their overall risk by making many risky investments instead of
only one. This practice is called diversifying.
The extent to which diversification reduces risk depends on the degree to which various events
are correlated over states of nature.
Diversification can eliminate risk if two events are perfectly negatively correlated.
Diversification reduces risk even if the two events are imperfectly negatively correlated,
uncorrelated, or imperfectly positively correlated.
In contrast, diversification does not reduce risk if two events are perfectly positively correlated.
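For an equally weighted portfolio of two assets with the same standard deviation σ and correlation ρ, the portfolio variance works out to 0.5 · σ² · (1 + ρ), so the three correlation cases above are easy to verify (the numbers are illustrative):

```python
def portfolio_std(sigma, rho):
    # Equally weighted two-asset portfolio, both assets with std `sigma`:
    # var = 0.25*s^2 + 0.25*s^2 + 2*(0.5)*(0.5)*rho*s^2 = 0.5*s^2*(1 + rho)
    var = 0.5 * sigma ** 2 * (1 + rho)
    return var ** 0.5

sigma = 10.0
print(portfolio_std(sigma, -1.0))  # perfect negative correlation: risk eliminated (0.0)
print(portfolio_std(sigma, 0.0))   # uncorrelated: risk reduced (~7.07)
print(portfolio_std(sigma, 1.0))   # perfect positive correlation: no reduction (10.0)
```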
Fair insurance A bet between an insurer and a policyholder in which the value of the bet to the
policyholder is zero.
When fair insurance is offered, risk averse people fully insure.
An insurance company sells policies only for risks it can diversify.
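The full-insurance claim can be illustrated with an assumed concave utility (all numbers invented): at an actuarially fair premium, the certain insured wealth yields more utility than staying exposed to the loss.

```python
import math

u = math.sqrt  # assumed concave utility function

wealth, loss, p_loss = 100.0, 64.0, 0.25
# Fair premium equals the expected loss, so the bet's value to the insurer is zero.
fair_premium = p_loss * loss   # 16.0

eu_uninsured = (1 - p_loss) * u(wealth) + p_loss * u(wealth - loss)
eu_full = u(wealth - fair_premium)   # certain wealth of 84 whether or not the loss occurs

# The risk-averse consumer prefers to fully insure at the fair premium.
assert eu_full > eu_uninsured
print(round(eu_uninsured, 3), round(eu_full, 3))
```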
Serano and Feldman: Preferences and Utility – Summary
1. Introduction
A utility function is a numerical representation of how a consumer feels about alternative
consumption bundles: if she likes the first bundle better than the second, then the utility
function assigns a higher number to the first than to the second, and if she likes them equally
well, then the utility function assigns the same number to both.
When there are two goods any consumption bundle can easily be shown in a standard two-
dimensional graph, with the quantity of the first good on the horizontal axis and the quantity of
the second good on the vertical axis. All the figures in this lesson are drawn this way.
In the shopping centre of life some bundles are feasible or affordable for the consumer; these
are the ones which her budget will allow. Other bundles are non-feasible or unaffordable; these
are the ones her budget won’t allow.
2. The Consumer’s Preference Relation
If there are two goods, a consumption bundle X is a vector (x1, x2), where x1 is the quantity of good 1 and x2 is the
quantity of good 2.
If the consumer likes X and Y equally well, we say she is indifferent between them. We write X ∼
Y in this case, and ∼ is called the indifference relation.
The ≽ relation is sometimes called the weak preference relation.
Assumptions on preferences
Assumption 1: Completeness. For all consumption bundles X and Y, either X ≻ Y, or Y ≻ X, or X ∼ Y.
That is, the consumer must like one better than the other, or like them equally well.
Having a complete ordering of bundles is very important for our analysis.
Assumption 2: Transitivity. This assumption has four parts:
I. First, transitivity of preference: if X ≻ Y and Y ≻ Z, then X ≻ Z.
II. Second, transitivity of indifference: if X ∼ Y and Y ∼ Z, then X ∼ Z.
III. Third, if X ≻ Y and Y ∼ Z, then X ≻ Z.
IV. Fourth and finally, if X ∼ Y and Y ≻ Z, then X ≻ Z.
The transitivity of preference assumption is meant to rule out irrational preference cycles.
The transitivity of indifference assumption (that is, if X ∼ Y and Y ∼ Z, then X ∼ Z) makes
indifference curves possible.
Assumption 3: Monotonicity. We normally assume that goods are desirable, which means the
consumer prefers consuming more of a good to consuming less. That is, suppose X and Y are two
bundles of goods such that (1) X has more of one good (or both) than Y does and (2) X has at
least as much of both goods as Y has. Then X ≻ Y.
o Some important consequences of monotonicity are the following: indifference curves
representing preferences over two desirable goods cannot be thick or upward sloping.
Nor can they be vertical or horizontal.
In Figure 2.3 below we show a
downward sloping thin indifference
curve, which is what the monotonicity
assumption requires.
Another implication of the assumptions of transitivity (of indifference) and monotonicity is that
two distinct indifference curves cannot cross.
Assumption 4: Convexity for indifference curves. This assumption means that averages of
consumption bundles are preferred to extremes. Consider two distinct bundles X and Y on one
indifference curve. Then the average bundle made up of 1/2 times X plus 1/2 times Y, that is
X/2 + Y/2, is preferred to either X or Y. This is what we normally assume to be the case.
We call preferences well behaved when indifference curves are downward sloping and convex.
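A small sketch using an assumed utility function with convex indifference curves (a Cobb-Douglas-style form, not from the text) illustrates the averages-preferred-to-extremes property:

```python
def u(bundle):
    # Assumed Cobb-Douglas-style utility; its indifference curves are convex.
    x1, x2 = bundle
    return (x1 * x2) ** 0.5

X, Y = (1.0, 4.0), (4.0, 1.0)
assert u(X) == u(Y)   # X and Y lie on the same indifference curve (u = 2)

# The 50/50 average bundle X/2 + Y/2 is strictly preferred to either extreme.
avg = tuple((a + b) / 2 for a, b in zip(X, Y))   # (2.5, 2.5)
assert u(avg) > u(X)
print(u(avg))   # 2.5
```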
4. The Consumer’s Utility Function
Imagine that we assign a number to each bundle. For example, we assign the number u(X) =
u(x1, x2) = 5 to the bundle X = (x1, x2); we assign the number u(Y) = u(y1, y2) = 4 to Y = (y1, y2); and
so on.
We say that such an assignment of numbers to bundles is a consumer’s utility function if:
I. First, u(X) > u(Y) whenever X ≻ Y.
II. And second, u(X) = u(Y) whenever X ∼ Y.
Our consumer’s utility function is said to be an “ordinal” utility function rather than a “cardinal”
utility function.
o An ordinal statement only gives information about relative magnitudes; for instance, “I
like Tiffany more than Jennifer.”
o A cardinal statement provides information about magnitudes that can be added,
subtracted, and so on. For instance, “Billy weighs 160 lbs. and Johnny weighs 120 lbs.”
o Today, for the most part, we treat utility simply as an ordinal magnitude.
For one individual, differences or ratios of utility numbers from different bundles generally do
not matter, and comparisons of utilities across different individuals have no meaning.
If we start with a utility function representing my preferences, and modify it with what’s called
an order-preserving transformation, then it still represents my preferences. All this is summed
up in the following statement:
If u(X) = u(x1, x2) is a utility function that represents the preferences of a consumer, and f is
any order-preserving transformation of u, the transformed function f(u(X)) = f(u(x1, x2)) is
another utility function that also represents those preferences.
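This statement is easy to verify numerically; the utility function and the transformation below are invented for illustration:

```python
import math

def u(x1, x2):
    # Hypothetical utility function over two-good bundles.
    return x1 * x2

def f(v):
    # An order-preserving (strictly increasing) transformation of utility.
    return math.exp(v) + 7.0

bundles = [(1, 2), (3, 1), (2, 4), (1, 1)]

# Ranking the bundles by u and by f(u) yields the same preference ordering,
# even though the utility numbers themselves differ.
ranking_u = sorted(bundles, key=lambda b: u(*b))
ranking_f = sorted(bundles, key=lambda b: f(u(*b)))
assert ranking_u == ranking_f
```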
What is the connection between indifference curves and utility functions? The answer is that we
use indifference curves to represent constant levels of utility.
Hastie and Dawes (H&D), Ch. 2 – Summary
What is Decision Making?
2.1 Definition of a Decision
A decision, in scientific terms, is a response in a situation that is composed of three parts:
I. First, there is more than one possible course of action under consideration in the choice
set.
II. Second, the decision maker can form expectations concerning future events and outcomes
following from each course of action, expectations that can be described in terms of
degrees of belief or probabilities.
III. Third, the consequences associated with the possible outcomes can be assessed on an
evaluative continuum determined by current goals and personal values.
The problem with this definition is that it includes so many situations that it could almost serve
as a definition of intentional behaviour, not just decision behaviour.
The equation prescribes that for each alternative course of action under consideration (each
major branch of the decision tree), we need to weight each of the potential consequences by its
probability of occurrence, and then add up all the component products to yield a summary
evaluation called an expected utility for each alternative course of action (each initial left-hand
branch).
Note that these calculations assume we can describe the decision process in terms of numerical
probabilities and values and that arithmetic operations (adding, multiplying) describe the
decision maker’s thought processes. The calculation also assumes that the decision maker
thoroughly considers all (and only) the options, contingencies, and consequences in the decision
tree model of the situation.
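The calculation described above can be sketched directly (the options, probabilities, and utilities are invented for illustration):

```python
# Hypothetical two-option decision; on each branch the probabilities sum to 1.
# Each option lists (probability, utility-of-consequence) pairs.
options = {
    "take the job": [(0.6, 80.0), (0.4, 20.0)],
    "stay put":     [(1.0, 50.0)],
}

def expected_utility(branches):
    # Weight each consequence's utility by its probability, then sum.
    return sum(p * util for p, util in branches)

eus = {name: expected_utility(br) for name, br in options.items()}
best = max(eus, key=eus.get)
print(eus, best)   # "take the job" (EU 56.0) beats "stay put" (EU 50.0)
```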
2.6 The Rationality of Considering Only the Future
In general, the past is relevant, but only for estimating current probabilities and the desirability
of future states. It is rational to conclude that a coin that has landed heads in 19 of 20 previous
flips is probably biased, and that therefore the probability it lands heads on the 21st flip is
greater than 1/2. It is not rational to estimate the probability of landing heads on the 21st toss by
assigning a probability to the entire pattern of results including those that have already occurred.
Rational estimation of probabilities and rational decision making resulting from this estimation
are based on a very clear demarcation between the past and the future.
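One hedged way to make the coin example quantitative is Laplace's rule of succession, which assumes a uniform prior over the coin's bias (an assumption the text does not make explicit):

```python
# Laplace's rule of succession: with a uniform prior over the coin's bias,
# after h heads in n flips the probability of heads on the next flip is
# (h + 1) / (n + 2). The uniform prior is our modeling assumption.
h, n = 19, 20
p_next_heads = (h + 1) / (n + 2)
print(round(p_next_heads, 3))   # 0.909, well above 1/2
```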
There are many reasons for the resistance to actuarial, statistical judgment models:
I. First of all, they are an affront to the narcissism (and a threat to the income) of many
experts.
II. Another objection maintains that the outcomes better predicted by linear models are
all short-term and trivial.
III. A final objection is the one that says, “10,000 Frenchmen can’t be wrong.” Experts have
been revered—and well paid—for years for their “It is my opinion that ... ” judgments.
IV. But there is also a situational reason for doubting the inferiority of global, intuitive
judgment. It has to do with the biased availability of feedback. When we construct a
linear model in a prediction situation, we know exactly how poorly it predicts. In contrast,
our feedback about our own intuitive judgments is flawed. Not only do we selectively
remember our successes, we often have no knowledge of our failures.
A judgment that some people are poor tippers leads to inferior service, which in turn leads to
poor tips—thereby “validating” the waiter’s judgment. (Not all prophecies are self-fulfilling—
there must be a mechanism, and intuitive judgment often provides one. Intuition is also a
possible mechanism for some self-negating prophecies, such as the feeling that one is
invulnerable no matter how many risks one takes while driving.)
We want to predict outcomes that are important to us. It is only rational to conclude that if one
method (a linear model) does not predict well, something else may do better. What is not
rational—in fact, it’s irrational—is to conclude that this “something else” necessarily exists and,
in the absence of any positive supporting evidence, that it’s intuitive global judgment.
One important lesson of the many studies of human judgment is that outcomes are not all that
predictable; there is a great deal of “irreducible uncertainty” in the external world, on the left-
hand side of the Lens Model diagram.
People find linear models of judgment particularly distasteful in assessing other people.
Hastie and Dawes (H&D), Ch. 4: The Fundamental Judgment Strategy; Anchoring and
Adjustment – Summary
4.1 Salient Values
Often, our estimates of frequencies, probabilities, and even the desirability of consequences are
vague. In ambiguous situations, an “anchor” that serves as a starting point for estimation can
have dramatic effects.
What happens is that people will adjust their estimates from this anchor but nevertheless remain
too close to it.
When we sequentially integrate information in this manner, we usually “underadjust.”
The finding of such insufficiency is general, and is
related to the credibility of the original anchor and
the amount of relevant information that the judge
has available in memory or at hand.
The anchor-and-adjust process appears in many
judgments, and it is especially clear when the
anchor selected is “obviously” arbitrary.
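A minimal sketch of anchoring and (insufficient) adjustment; the adjustment weight is an invented parameter, not an empirical estimate:

```python
def anchor_and_adjust(anchor, evidence, weight=0.3):
    # Start at the anchor; move only a fraction `weight` (< 1) of the way
    # toward each new item of evidence, so the final estimate stays too
    # close to the anchor (underadjustment / primacy effect).
    estimate = anchor
    for e in evidence:
        estimate += weight * (e - estimate)
    return estimate

# The true quantity is 100, but the judge starts from an arbitrary low anchor;
# even after three consistent pieces of evidence, the estimate falls well short.
print(anchor_and_adjust(anchor=10.0, evidence=[100.0, 100.0, 100.0]))
```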
4.2 Anchoring and (Insufficient) Adjustment
This serial judgment process is a natural result of
our limited attention “channels” and the selective
strategies we have developed to deal with that
cognitive limit.
Just as we can only focus on one location in a
visual scene or listen to one conversation at a
crowded cocktail party, we attend to one item of
evidence at a time as we make estimates.
We can summarize the judgment process with a
flowchart diagram as in Figure 4.1.
The flowchart shows that the judgment process is complex and there are many places where
biases might enter the system. The basic bias is that the process is prone to underadjustments
or primacy effects: information considered early in the judgment process tends to be
overweighted in the final judgment.
(Figure 4.1: A flowchart representing the cognitive processes that occur in an anchor-and-adjust
judgment.)
The anchor produces its bias through two cognitive routes:
I. There is a conservatism in the adjustment process near the output end of the entire
procedure. As the inclination to respond with higher or lower values occurs when new
information is considered, it overweights what is already known.
II. There is a biasing effect of the anchor, more accurately of the concepts associated with the
anchor, on the kind of information that is considered subsequently, especially when
information is retrieved from memory to make the judgment.
The true underadjustment process plays a large role only when the person making the estimate
selects his or her own anchor value.
One indicator of sources of bias in the underlying cognitive process is the source of the anchor.
When the time course of a judgment process can be mapped out, the most commonly observed
sequential effect is a primacy effect, best interpreted as anchoring on the initial information
considered and then underadjusting for subsequent information.
The most common anchor, of course, is the status quo. While we are not constrained mentally—
as we are physically—to begin a journey from where we are, we often do. Changes in existing
plans or policies more readily come to mind than do wholly new ones, and even as new
alternatives close to the status quo are considered, they, too, can become anchors.
This generalization is true of organizations as well as of individuals.
When people are asked to produce a monetary equivalent (through the selling price procedure),
they anchor on the dollar amounts of the outcomes, and they insufficiently adjust, on the basis
of the probabilities involved.
But when the same people think of winning versus losing, they anchor on the probability of
success; higher probabilities are more desirable. And then they insufficiently adjust their value
judgment on the basis of the dollars to be won or lost.
Robust preference reversals also provide an instructive example of how the scale on which a
response is made can bias people to choose one anchor over another (favoring dollar amounts
when the response is a price, but probabilities when the response is a choice).
Furthermore, the results challenge standard economic theory, which equates the utility
(personal value) of an object with the amount of money people are willing to pay for it.
4.3 Anchoring on Ourselves
When we make judgments about someone we do not know well, we engage in an egocentric
process that some researchers have called “projection.”
Subtle social judgment begins with anchoring on the explicit contents of the message—we might
call this the first interpretation— and only given the benefit of deeper thinking can that
interpretation be adjusted away from the surface meaning to appreciate the true sarcastic
message.
The basic tendency to see ourselves in others is well established, and it is clear that an anchor-
and-adjust process is responsible for many of these effects.
4.4 Anchoring the Past in the Present
Anchoring and adjustment can also severely affect our retrospective personal memory. While
such memory is introspectively a process of “dredging up” what actually happened, it is to a
large extent anchored by our current beliefs and feelings.
Hastie and Dawes (H&D), Ch. 5: Judging Heuristically – Summary
5.1 Going Beyond the Information Given
A good account of many underlying cognitive judgment processes is provided by assuming that
we have a cognitive toolbox of mental heuristics stored in long-term memory.
Heuristics are efficient but sometimes inaccurate procedures for solving a problem; in this case,
they provide rough-and-ready estimates of frequencies, probabilities, and magnitudes.
These heuristic mechanisms are usually constructed from more primitive mental capacities such
as our similarity, memory, and causal judgment processes.
These cognitive tools are acquired over a lifetime of experience.
They tell us what information to seek or select in the environment and how to integrate several
sources of information to infer the characteristics of events that are not directly available to
perception. We learn these cognitive tools from trial-and-error experience.
The notion is that when we encounter a situation in which a judgment is needed, we select a tool
from our cognitive toolbox that is suited to the judgment.
For many everyday judgments, we use heuristic strategies because they are relatively “cheap” in
terms of mental effort and, under most everyday conditions, they provide good estimates.
5.2 Estimating Frequencies and Probabilities
A single psychophysical function relating objective quantities to subjective estimates is
characteristic of almost all memory-based frequency-of-occurrence estimates.
At the low end of the objective frequency scale, estimates tend to be overestimates. As the
number of to-be-estimated events increases, our subjective judgments err in the direction of
underestimation.
The almost perfect accuracy for estimates of up to five items was taken as a measure of the
span of apprehension.
When the number of to-be-estimated events exceeds about 10 items, however, the tendency to
underestimate the objective total appears as in the memory-based function.
When there are more than about seven items (often cited as the capacity of short-term,
conscious working memory), a more deliberate estimation strategy is used to make the
judgment.
5.3 Availability of Memories
Many of the judgments we make are memory-based in the sense that we don’t have the “data”
necessary to make the judgment right in front of us, but we have learned information in the past,
now stored in long-term memory, that is relevant to the judgments.
This simple form of associative thinking is called the availability heuristic by researchers, and we
rely on ease of retrieval to make a remarkable variety of judgments.
The operations of the availability heuristic can be broken down into seven subprocesses or
subroutines (summarized in Figure 5.2):
I. the original acquisition or storage of relevant information in long-term memory;
II. retention, including some forgetting, of the stored information;
III. recognition of a situation in which stored
information is relevant to making a
judgment;
IV. probing or cueing memory for relevant
information;
V. retrieval or activation of items that
match or are associated with the
memory probe;
VI. assessment of the ease of retrieval
(perhaps based on the amount recalled,
quickness of recall, or subjective
vividness of the recalled information);
and
VII. an estimate of the to-be-judged
frequency or probability based on sensed
ease of retrieval.
There are several points in the process at which biases might perturb the final judgment: The
experienced sample of events stored in long-term memory (the information that is available to
be remembered) might be biased, as in our example of suicide versus homicide estimates.
The memory cue that is the basis for retrieval might be biased to produce a biased sample of
remembered events, even if the population of events in memory is not itself unrepresentative.
Events may vary in their salience or vividness, so that some more salient events dominate the
assessment of ease of retrieval. Any of these factors, individually or jointly, may introduce
systematic biases into memory-based judgments.
5.4 Biased Samples in Memory
Rationally defensible deductive logic involves a specification from the universal to the particular,
but much less reliable inductive logic involves generalization from the particular to the universal.
However, we are prone to do the exact opposite: we under-deduce and over-induce.
5.5 Biased Sampling From Memory
It is obvious that if the sample of information stored in memory is biased (perhaps because it is
filtered through the popular media), subsequent memory-based judgments will be biased, too.
But other aspects of the memory process can produce systematic biases as well.
The emotion evoked by an event may have a further effect on memory and, hence, memory-
based judgments: When we are in a particular emotional state, we have a tendency to
remember events that are thematically congruent with that state.
When we have experience with a class of phenomena (objects, people, or events), those with a
salient characteristic most readily come to mind when we think about that class.
It follows that if we estimate the proportion of members of that class who have the distinctive
characteristic, we tend to overestimate it.
Our estimate will be higher than the one we would make if we deliberately coded whether each
member of that class did or did not have that characteristic as we encountered it (e.g., by
keeping a running tally with a mechanical counter).
Selective retrieval from memory can produce large misestimates of proportions, leading to a
misunderstanding of a serious social problem, and finally to biases in important decisions like
those required of voters, jurors, and policy makers.
5.6 Availability to the Imagination
Availability to the imagination influences our estimates of frequency. The problem, just as with the availability of actual or vicarious instances from our experience, is that this availability is determined by many factors other than actual frequency.
The resultant ease of imagining biases our estimates of frequencies, and hence our judgments of
probability based on such frequencies.
5.7 From Availability to Probability and Causality
Tversky and his colleagues explained the subadditivity of probabilities by proposing that support
for each proposition was recruited from the physicians’ imaginations. The complementary
subevent descriptions provide effective cues to generate reasons for the specific outcomes.
The cognitive processes underlying these overestimates probably fall in the middle of the
continuum from memory retrieval to imaginative generation, though retrieval is surely part of
the explanation: Follow-up studies showed that these subadditive estimates were highly
correlated with the respondents’ ability to recall specific contributions, implying that memory
availability was a component of the judgment process.
Both the subadditive and the superadditive findings and other clever demonstrations of
retrieval fluency verify the prominent role of availability as an underlying cognitive process.
Some of the most important practical implications concern the manner in which citizens (and
their political leaders) set agendas for the investment of public resources.
5.8 Judgment by Similarity: Same Old Things
The second elementary cognitive process that is often heuristically substituted for magnitude,
frequency, and probability judgments is similarity.
We have a common tendency to make judgments and decisions about category membership based on the similarity between our conception of the category and our impression of the to-be-classified object, situation, or event.
As in the case of availability-based judgments, similarity slips into the judgment process
automatically and dominates spontaneous judgments of category membership.
The primary behavioral “signature” of relying on similarity is that people miss important
statistical or logical structure in the situation and ignore relevant information.
This overreliance on similarity occurs even when people simultaneously acknowledge that the
information they are using is unreliable, incomplete, and non-predictive.
5.9 Representative Thinking
The purpose of these (book) examples was to demonstrate
I. that category membership judgments are usually based on the degree to which
characteristics are representative of or similar to prototypical category exemplars,
II. that representativeness does not necessarily reflect an actual contingency, and
III. that probability estimates or confidence in judgments are related to similarity and not
necessarily to the deeper structure of the situations about which we are making
judgments.
We find these early studies to be quite convincing on the point that people (over-)rely on similarity when making many probability judgments, perhaps because our own self-reflections when solving the original problems are completely consistent with the representativeness-similarity interpretation.
The last piece of cognitive theory that we will need for our discussion of category classification is a model of the similarity judgment process.
The most general model of this process is called the contrast model, and it says that we perceive similarity by making (very rapid) comparisons of the attributes of the two or more entities whose similarity is being evaluated.
A useful model of this process is to suppose that our global impression of similarity arises from a quick tabulation of the number of attributes that “match” for two entities versus the number that “mismatch.”
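This tabulation can be sketched as a small feature-matching computation. The weights (theta, alpha, beta) and the feature sets below are invented for illustration, not taken from the text:

```python
# Sketch of a contrast-model similarity computation: weight the shared
# features positively and each entity's distinctive features negatively.
# theta, alpha, beta are hypothetical weights.
theta, alpha, beta = 1.0, 0.5, 0.5

def similarity(a: set, b: set) -> float:
    """Global similarity from matching vs. mismatching attributes."""
    return (theta * len(a & b)    # matches
            - alpha * len(a - b)  # attributes of a that b lacks
            - beta * len(b - a))  # attributes of b that a lacks

robin = {"flies", "feathers", "small", "sings"}
penguin = {"feathers", "swims", "large"}
print(similarity(robin, robin))    # identical entities: 4.0
print(similarity(robin, penguin))  # one match, five mismatches: -1.5
```

Note that the asymmetric weights allow the model to capture asymmetric similarity judgments (a variant can judge "penguin is like a robin" differently from "robin is like a penguin").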
In many cases, once an object is classified into a category, an association-based judgment is
automatically made.
Sometimes our associations with categories are morally troublesome or just flat-out irrational.
Perhaps the most troublesome characteristic of these racial, gender, and religious stereotypes is
that they automatically evoke emotional reactions that affect our behavior toward members of
the category.
The basic problem with making probability or confidence judgments on the basis of
representative characteristics is that the schema accessed may in fact be less probable, given
the characteristic, than one not accessed.
This occurs when the schema not accessed has a much greater extent in the world than the
accessed one.
5.10 The Ratio Rule
In contrast to representative judgments, accurate judgments can be made by using the simplest
rules of probability theory. Let c stand for a characteristic and S for a schema (category).
The degree to which c is representative of S is indicated by the conditional probability p(c|S)—
that is, the probability that members of S have characteristic c. (In the present examples, this
conditional probability is high.)
The probability that the characteristic c implies membership in S, however, is given by the
conditional probability p(S|c), the probability that people with characteristic c are members of S,
which is the inverse of p(c|S). Now, by the basic laws of probability theory,

p(S|c) / p(c|S) = p(S) / p(c)

This relationship is called the ratio rule—the ratio of inverse probabilities equals the ratio of simple probabilities.
In the present context of inferring category membership, this simple ratio rule provides a
logically valid way of relating p(c|S) to p(S|c).
To equate these two conditional probabilities in the absence of equating p(c) and p(S) is simply
irrational.
Representative thinking, however, does not reflect the difference between p(c|S) and p(S|c) and
consequently introduces a symmetry in thought that does not exist in the world.
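A small numerical sketch makes the asymmetry concrete. All numbers below are hypothetical: a characteristic (being very tall) that is highly representative of a rare schema (professional basketball players):

```python
# Hypothetical numbers for illustration.
p_S = 0.0001          # base rate of the schema S: pro basketball players
p_c = 0.15            # base rate of the characteristic c: being very tall
p_c_given_S = 0.90    # representativeness: P(c|S) is high

# Ratio rule: p(S|c) / p(c|S) = p(S) / p(c)
p_S_given_c = p_c_given_S * p_S / p_c
print(f"P(c|S) = {p_c_given_S:.4f}")   # high: tallness is representative
print(f"P(S|c) = {p_S_given_c:.6f}")   # tiny: tall people are rarely pros
```

Because p(S) is minuscule relative to p(c), the two conditional probabilities differ by three orders of magnitude even though representative thinking treats them as interchangeable.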
There are still experimental conditions under which base rates are neglected. One situation in which we do attend to base rates occurs when people ascribe some causal significance to discrepant rates. When they can see the causal relevance of the base rates, they often incorporate them into their reasoning.
We seem to reason more competently in statistical problems of all types when we conceptualize the underlying relationships in terms of concrete numbers (frequency formats) rather than more abstract proportions and probabilities.
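As an illustration of why frequency formats help, here is a diagnostic-style problem restated with hypothetical counts rather than probabilities:

```python
# Sketch: a diagnostic problem in "natural frequency" format.
# All counts are hypothetical.
population = 1000
with_condition = 10                                    # 1% base rate
true_positives = 8                                     # 80% of those with it
false_positives = (population - with_condition) // 10  # 10% false-positive rate

# With counts, the answer is nearly transparent: of all positives,
# how many actually have the condition?
p = true_positives / (true_positives + false_positives)
print(f"{true_positives} of {true_positives + false_positives} positives "
      f"have the condition ({p:.0%})")
```

Stated as "P(positive|condition) = 0.80", the same problem routinely elicits answers near 80 percent; the count framing makes the base rate impossible to ignore.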
Naive subjects do not distinguish between p(A|B) and p(B|A) in many circumstances, and when
given one conditional probability, they infer the other without reference to the base rates p(A)
and p(B), which must be considered according to the ratio rule.
Our natural habit is to think associatively about what is salient to us in the immediate situation
or what is immediately available from memory. It takes willpower and training to escape from
the “dominance of the given” and to actually think about events and relationships that are not
salient and explicit in our experience.
the best-fitting story is one basis for confidence in their judgment.
The construction of multiple stories is almost forced on the decision maker by the traditions of
our adversarial trial system. However, we suspect that in most everyday situations, when story
construction is the basis for decisions, people stop after they have constructed one story.
During construction and evaluation of a story, people do consider alternative versions of parts of the stories. (This form of reasoning is called counterfactual thinking, because it involves imagining alternatives to “factual” reality that might have occurred—what we have referred to as Piagetian “scientific reasoning” elsewhere.)
Jurors who reasoned, “If there had been more guards, the rape would still have occurred,” were applying what in legal contexts is called the “but for” test of causality; a philosopher would probably describe it as testing whether a candidate cause (few security guards) is a necessary condition for an effect to occur.
6.5 Scenarios About Ourselves
The notion of narrative truth is consistent with the rationales behind many forms of
psychotherapy. These therapies assume that clients’ (narrative) representation of their lives is
the key to understanding their maladaptive behaviors. The therapist’s reconstruction of the
client’s life story into a more coherent and adaptive narrative is the primary goal of therapy.
Autobiographical memories tend to be dominated by our current attitudes, beliefs, and feelings
about ourselves.
6.6 Scenarios About the Unthinkable
The probabilistic approach to reducing the danger of nuclear war and other societal and
personal risks. Small differences in probability for small intervals can yield large differences in
broad ones.
Scenario thinking can once again get in the way of a probabilistic assessment. The desirable scenario for most of us would be an agreement among all countries capable of producing nuclear weapons resulting in technological control of such weapons to the point that they could not be used in haste or by accident.
We exaggerate the probability of confrontation and of total agreement while we neglect policies
that would reduce the probability of nuclear war each year by some small amount.
The big problem with scenario thinking is that it focuses the thinker on one, or a few, causal
stories and diverts the decision maker from a broader, more systematic representation of the
decision situation. Scenario thinking grossly overestimates the probability of the scenarios that
come to mind and underestimates long-term probabilities of events occurring one way or
another. Furthermore, there is a general tendency for memories and inferences to be biased so
as to be consistent with the themes and theories underlying the scenarios.
Rational analysis requires a systematic, comprehensive representation of situations and
alternative outcomes, in order to assess the important underlying probabilities of events.
6.7 Hindsight: Reconstructing the Past
People who know the nature of events falsely overestimate the probability with which they
would have predicted them.
We are “insufficiently surprised” by experience. One result is that we do not learn effectively
from it.
Hindsight effects only occurred under conditions where persuasive causal explanations could be
generated by the participants to “glue” the causes to the outcomes.
This hindsight bias is not always reducible to a knew-it-all-along attempt to appear more omniscient than we are; sometimes motivational factors probably apply.
Sometimes, when we believe in change, we recall change even when it has not occurred. In
order to make our recollection compatible with this belief, we resort (again not consciously) to
changing our memory of the earlier state. We can, for example, reinforce our belief in a non-
existent change for the better by simply exaggerating how bad things were before the change.
Moods also affect recall.
Hastie and Dawes (H&D), Ch. 7: Chance and Cause – Summary
7.1 Misconceptions About Chance
Probability theory is a language we can use to describe the world or, more precisely, to describe
the relationships among our beliefs about the world.
7.2 Illusions of Control
Not only do people behave as if they can control random events; they also express the conscious
belief that doing so is a skill, which, like other skills, is hampered by distractions and improves
with practice.
7.3 Seeing Causal Structure Where It Isn’t
A pernicious result of representative and scenario-based thinking is that they make us see
structure (non-randomness) where none exists.
This occurs because our naïve conceptions of randomness involve too much variation— often to
the point where we conclude that a generating process is not random, even when it represents
an ideal random trial.
Representativeness enters in because when we are faced with the task of distinguishing between
random and non-random “generators” of events, we rely on our stereotype of a random process
and use similarity to judge or produce a sequence.
Thus, when we encounter a truly random sequence, we are likely to decide it is non-random
because it does not look haphazard enough—because it shows less alternation than our
incorrect stereotype of a random sequence.
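A quick simulation (parameters our own) shows what a truly random sequence actually looks like: alternation near 50 percent, with longer runs of identical outcomes than the stereotype allows:

```python
import random

random.seed(0)
n = 100_000
flips = [random.randint(0, 1) for _ in range(n)]

# Fraction of consecutive pairs that alternate (H->T or T->H).
alternations = sum(a != b for a, b in zip(flips, flips[1:]))
rate = alternations / (n - 1)
print(f"alternation rate: {rate:.3f}")  # close to 0.5

# Longest run of identical outcomes -- longer than intuition expects.
longest, run = 1, 1
for a, b in zip(flips, flips[1:]):
    run = run + 1 if a == b else 1
    longest = max(longest, run)
print(f"longest run: {longest}")
```

People asked to produce "random" sequences typically alternate closer to 70 percent of the time and avoid long runs, which is exactly why real random data strike them as non-random.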
The gambler’s fallacy is the notion that “chances of [independent, random] events mature” if they have not occurred for a while.
The strategy of analysing individual clusters and looking for correlations with some (any)
environmental cause is called the Texas sharpshooter fallacy by epidemiologists, after the story
about a rifleman who shoots a cluster of bullet holes in the side of a barn and then draws a
bull’s-eye around the holes.
7.4 Regression Toward the Mean
A final problem with representative thinking about events with a random (unknown causes)
component is that it leads to non-regressive predictions.
On remeasurement, extreme scores will tend to be less extreme. Regression toward the mean is inevitable for scaled variables that are not perfectly correlated.
It is only when one variable is perfectly predictable from the other that there is no regression. In
fact, the (squared value of the) standard correlation coefficient can be defined quite simply as
the degree to which a linear prediction of one variable from another is not regressive. The
technical definition of regression toward the mean is the difference between a perfect
relationship (+/–1.00) and the linear correlation:
Regression = perfect relationship – correlation
The rational way of dealing with regression effects is to “regress” when making predictions.
Then, if there is some need or desire to evaluate discrepancy (e.g., to give awards for
“overachievement” or therapy for “underachievement”), compare the actual value to the
predicted value—not with the actual value of the variable used to make the prediction.
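Regression toward the mean can be demonstrated with a small simulation of two imperfectly correlated test occasions; the correlation and selection cutoff are arbitrary choices for illustration:

```python
import random

random.seed(1)
rho = 0.6   # assumed test-retest correlation
n = 50_000

# Each score shares a common "true ability" component plus independent
# noise, which makes the two occasions correlate at rho.
pairs = []
for _ in range(n):
    ability = random.gauss(0, 1)
    t1 = rho ** 0.5 * ability + (1 - rho) ** 0.5 * random.gauss(0, 1)
    t2 = rho ** 0.5 * ability + (1 - rho) ** 0.5 * random.gauss(0, 1)
    pairs.append((t1, t2))

# Select the extreme scorers on occasion 1 (top ~5%).
top = [(t1, t2) for t1, t2 in pairs if t1 > 1.645]
mean1 = sum(t1 for t1, _ in top) / len(top)
mean2 = sum(t2 for _, t2 in top) / len(top)
print(f"occasion 1 mean of top group: {mean1:.2f}")
print(f"occasion 2 mean of same group: {mean2:.2f}")  # regresses toward 0
```

No intervention occurred between the two occasions, yet the top group "declines"; the regressive prediction (about rho times the first score) is the rational baseline against which to evaluate any real change.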
Regression toward the mean is particularly insidious when we are trying to assess the success of
some kind of intervention designed to improve the state of affairs.
The worst-case scenarios for understanding the effects of interventions occur when the intervention is introduced because “we’ve got a problem.”
The chances are, the interventions are going to show improvements, and it is almost certain that
some or most of the effect will be due to regression toward the mean.
Hastie and Dawes (H&D), Ch. 8.1-8.2: Thinking Rationally about Uncertainty – Summary
8.1 What to Do About the Biases
One of the goals of the book is to teach analytical thinking about judgment processes.
The best way we know to think systematically about judgment is to learn the fundamentals of
probability theory and statistics and to apply those concepts when making important
judgments.
Adding, keeping track, and writing down the rules of probabilistic inference explicitly are of great
help in overcoming the systematic errors introduced by representative thinking, availability,
anchor-and-adjust, and other biases.
8.2 Getting Started Thinking in Terms of Probabilities
Modern probability theory got its start when wealthy noblemen hired mathematicians to advise
them on how to win games of chance at which they gambled with their acquaintances.
Perhaps the fundamental precept of probabilistic analysis is the exhortation to take a bird’s-eye,
distributional view of the situation under analysis and to define a sample space of all the possible
events and their logical, set membership interrelations.
The systematic listing is unlikely to make us confident about precise probabilities, but it will
remind us just how uncertain that future is and keep us from myopically developing one scenario
and then believing in it too much.
What did we learn:
1. We introduced the basic set membership relationships that are used to describe events to
which technical probabilities can be assigned.
2. We introduced four kinds of situations to which we might want to attach probabilities:
a. situations, like conventional games of chance (e.g., throwing dice), where idealized
random devices provide good descriptions of the underlying structure and where logical
analysis can be applied to deduce probabilities;
b. well-defined “empirical” situations where statistical relative frequencies can be used to
measure probabilities (e.g., our judgments about kinds of students at the University of
Chicago);
c. moderately well-defined situations, where we must reason about causation and
propensities (rather than relative frequencies—e.g., predicting the outcome of the next
U.S. presidential election), but where a fairly complete sample space of relevant events
can be defined with a little thought; and
d. situations of huge ignorance, where even a sample space of relevant events is difficult to construct, and where there seem to be no relevant frequencies.
Many errors in judging and reasoning about uncertainty stem from mistakes that are made at
the very beginning of the process, when comprehending the to-be-judged situation. If people
could generate veridical representations of the to-be-judged situations and then keep the
(mostly) set membership relationships straight throughout their reasoning, many errors would
be eliminated.
Many times judgments under uncertainty are already off-track even before a person has tried to
integrate the uncertainties.
The primary advice about how to make better judgments under uncertainty is focused on
creating effective external (diagrammatic and symbolic) representations of the situation being
judged.
Subadditivity involves estimating that the probability of a subset (nested) event is greater than the probability of the superset (superordinate) event in which it is nested.
The problem is termed subadditivity because the probability of the whole is judged to be less than that of the sum of its parts—in the case of the conjunction fallacy, less than that of a single part.
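A coherent probability assignment makes the nesting constraint explicit. The figures below are invented for illustration:

```python
# Illustrative (made-up) probabilities for a superordinate event and
# disjoint subevents nested within it.
p_heart = 0.30
p_cancer = 0.25
p_other_natural = 0.15
p_natural = p_heart + p_cancer + p_other_natural  # superordinate event

# Coherence requires every nested event <= the superordinate event,
# and the parts of a partition to sum exactly to the whole.
assert p_heart <= p_natural
print(f"P(natural death) = {p_natural:.2f}")

# Typical subadditive judgments (hypothetical): parts judged separately
# sum to more than the directly judged whole.
judged_parts = 0.35 + 0.30 + 0.20
judged_whole = 0.60
print(f"sum of judged parts: {judged_parts:.2f} > judged whole: {judged_whole:.2f}")
```

Unpacking the whole into vivid subcategories recruits extra imagined support for each part, which is why the separately judged parts overshoot.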
8.7 The Other Side of the Coin: Probability of a Disjunction of Events
Just as we tend to overestimate the probability of conjunctions of events (to the point of committing the conjunction probability fallacy), we tend to underestimate the probability of disjunctions of events.
There seem to be two reasons for this:
1. Our judgments tend to be made on the basis of the probabilities of individual components; as illustrated, even though those probabilities may be quite low, the probability of the disjunction may be quite high. We attribute this error primarily to the anchor-and-(under-)adjust estimation process.
2. Any irrational factors that lead us to underestimate the probabilities of the component
events — such as difficulty of imagining the event — may lead us to underestimate the
probability of the disjunction as a whole.
Rationally, of course, disjunctions are much more probable than are conjunctions.
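The compounding of individually small, independent risks shows why disjunctions are underestimated; the 1 percent annual risk is a made-up figure:

```python
# A disjunction of individually unlikely independent events can be
# very likely: e.g., a hypothetical 1% annual risk over 50 years.
p_single = 0.01
n = 50

# P(at least one occurrence) = 1 - P(no occurrence in any year)
p_disjunction = 1 - (1 - p_single) ** n
print(f"P(at least one occurrence in {n} years) = {p_disjunction:.3f}")
```

Anchoring on the 1 percent component and adjusting insufficiently leaves the estimate far below the roughly 40 percent answer the arithmetic gives.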
There is evidence for a disjunction probability fallacy comparable to the conjunction probability error: the belief that a disjunction of events is less probable than a single one of its component events.
8.8 Changing Our Minds: Bayes’ Theorem
A very common judgment problem arises when we receive some new information about a
hypothesis that we want to evaluate, and we need to update our judgment about the likelihood
of the hypothesis.
The famous and useful formula for updating beliefs about a hypothesis H (e.g., that an event is true or will occur) given evidence E,

p(H|E) = p(E|H) p(H) / [p(E|H) p(H) + p(E|not-H) p(not-H)],

is called Bayes’ theorem after Thomas Bayes, the British clergyman who derived it algebraically in his quest for a rational means to assess the probability that God exists given the (to him) abundant evidence of God’s works.
What systematic errors do people make as they try to update their beliefs about an event when
they receive new information relevant to the judgment?
One error is a failure to consider the alternative hypothesis, ignoring the probability that the evidence would be observed even if the hypothesis were false.
A second error is to ignore the base rates of occurrence of the simple events.
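Both errors can be seen in one worked example of Bayes’ theorem; the base rate and the two likelihoods below are hypothetical:

```python
# Updating belief in hypothesis H given evidence E requires both the
# base rate P(H) and the probability of E under the alternative, P(E|~H).
# All numbers are illustrative (a rare condition, a decent test).
p_H = 0.01             # base rate of the hypothesis
p_E_given_H = 0.80     # evidence likely if H is true
p_E_given_notH = 0.10  # but evidence also occurs when H is false

# Total probability of observing the evidence at all:
p_E = p_E_given_H * p_H + p_E_given_notH * (1 - p_H)

# Bayes' theorem:
p_H_given_E = p_E_given_H * p_H / p_E
print(f"P(H|E) = {p_H_given_E:.3f}")  # far lower than P(E|H) = 0.80
```

Dropping either the base rate p_H or the alternative likelihood p_E_given_notH collapses the calculation toward the intuitive (and wrong) answer of 0.80.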
When the problem statement links the base rate information more strongly to the outcomes in the situation, especially when causal relationships make the connection, people are more likely to incorporate the base rates into their judgments.
The authors speculate that causal scenario–based reasoning may be an intuitive way to keep track of the most important relationships among events—important when we need to make predictions, diagnoses, or just update our “situation models.”
If the person only uses the formula to organize his or her thinking (but not to calculate), we expect improvements from:
1. identification of incomplete and ambiguous descriptions of the judgment problem,
2. consideration of nonobvious information necessary to make the calculation, and
3. motivation to search for specific information and to think about focal hypothesis–disconfirming information.
The authors recommend thinking about the situation in terms of frequencies and the use of diagrams to represent the to-be-judged situation and to guide information search, inferences, and calculations.
8.9 Statistical Decision Theory
How should we use judgments to decide whether or not to take consequential actions? The
normative “should do” answer is provided by statistical decision theory.
The answer to the “Should I take action?” question depends on these probabilities (relating your
current knowledge and the true condition you’re trying to infer) and how much you care about
each of the possible outcomes.
If we know how we value the outcomes, we can work backward and calculate the threshold
probability that prescribes when we should shift from inaction to action, to maximize those
values.
Often, we cannot increase the accuracy of a diagnosis or other judgment, but we can trade off
the two types of errors (and “corrects,” too).
If misses are most costly, we can lower our threshold for action on the judgment dimension and
reduce misses (but at the cost of more false alarms); if false alarms are the costly error, we can
move the decision threshold up and reduce that error (trading it for more misses, of course).
In most circumstances, we should recognize we are stuck with trade-offs, proceed with a sensible discussion of what we value, and then set a decision threshold accordingly.
If we do face these trade-offs, we need to try to value the various judgment-outcome
combinations and then apply statistical decision theory to set a proper decision threshold.
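Working backward from outcome values to an action threshold can be sketched as follows; the four utilities are hypothetical:

```python
# Sketch of the threshold calculation from statistical decision theory.
# Utilities (hypothetical) for the four action x true-state outcomes:
u_hit = 100    # act, and the condition is really present
u_fa = -20     # act, but the condition is absent (false alarm)
u_miss = -200  # fail to act when the condition is present
u_cr = 0       # correctly do nothing

def expected_utility(p, act):
    """Expected utility of acting / not acting given P(condition) = p."""
    if act:
        return p * u_hit + (1 - p) * u_fa
    return p * u_miss + (1 - p) * u_cr

# Acting is best whenever EU(act) >= EU(no act); solving for p:
threshold = (u_cr - u_fa) / ((u_cr - u_fa) + (u_hit - u_miss))
print(f"act whenever P(condition) > {threshold:.4f}")
```

At the threshold the expected utilities of acting and not acting are exactly equal; because misses are costly in this example, the threshold is low, trading extra false alarms for fewer misses.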
The task gets more daunting when we must perform an analysis across stakeholders with
different personal values, as we must in any organizational or societal policy analysis.
8.10 Concluding Comment on Rationality
If a scientific theory cannot state when an event will occur, a skeptic might ask what good it is.
The answer is that insofar as we are dealing with mental events and decisions of real people in
the booming, buzzing confusion of the real world, we can neither predict nor control them
perfectly.
Hastie and Dawes (H&D), Ch. 9: Evaluating Consequences – Fundamental Preferences –
Summary
9.1 What Good is Happiness?
Many people would say that the goal of decision making is to get outcomes that will make the
decision maker happy.
When decisions are driven by the “pursuit of happiness,” it is not the experiences of pleasure and pain that are most important. What is most important at the time of decision is our prediction of what will make us happy after we make a decision. Daniel Kahneman has called this anticipated satisfaction “decision utility,” to contrast it with “experienced utility.”
Psychologists are just beginning to uncover the processes that underlie subjective feelings of
pleasure and pain, what we’re calling experienced utility.
The principle that “good things satiate and bad things escalate” can be visualized with the simple
graph in Figure 9.1.
When the good, satiating characteristics (+) are added to the bad, escalating characteristics (-),
the result is a single-peaked function that has a maximum value of a moderate amount (the
dotted line in Figure 9.1). Net welfare (positive combined with negative) is maximized at
moderate amounts.
Coombs and Avrunin (1977) have proven that if
1. Good characteristics satiate (the function relating goodness to amount having a slope that is
positive and decreasing) and
2. Bad things escalate (the function relating bad characteristics to amount having a slope that
is negative and decreasing—becoming more negative), and
3. The negative function is more rapidly changing than the positive one, then
4. The resulting sum of good and bad experiences (that is, the sum of the good characteristics
function and the bad characteristics function) will always be single-peaked.
In fact, a single-peaked function results from any additive combination where the sum starts off positive from 0,0 and the absolute value of the slope of the utility for bad characteristics is greater everywhere than the absolute value for good characteristics.
Furthermore, the “flat maximum” nature of this peak in Figure 9.1 is common. It is often very difficult to discriminate among the neighbouring “good experiences.”
The important point here is that many experiences exhibit a single-peaked preference function relating the amount of the experience (food consumed, days on vacation) to the associated pleasure (pain); in other words, we have personal “ideal points” on amount-of-experience dimensions.
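Coombs and Avrunin’s construction can be sketched with assumed functional forms: a concave (satiating) good function, a convex (escalating) bad function that changes faster at large amounts, and their single-peaked sum:

```python
import math

# Assumed functional forms for illustration only.
def good(x):
    return 10 * math.sqrt(x)    # positive, with diminishing slope (satiates)

def bad(x):
    return -0.5 * x ** 1.5      # negative, increasingly steep (escalates)

def net(x):
    return good(x) + bad(x)     # net welfare at amount x

# Net welfare rises, peaks at a moderate amount, then falls.
values = [(x, net(x)) for x in range(0, 41)]
peak_x = max(values, key=lambda v: v[1])[0]
print(f"ideal amount: {peak_x}")
```

Note how flat the sum is near the peak: the amounts just either side of the ideal point score nearly the same net welfare, which is the "flat maximum" mentioned above.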
An implication of the peak-end principle is duration neglect: People tend to be surprisingly
insensitive to the length of the experience.
Given what we know about feelings of well-being in general, our best advice is not to
overemphasize predicted happiness in making decisions, but rather deliberately to consider
other aspects of decision alternatives and their consequences.
It is important to realize that happiness and related feelings of well-being are not the only
considerations in the evaluation of consequences. Many times we focus on other aspects of the
expected consequences of our actions, and sometimes we appear to make decisions in a non-
consequentialist manner.
And, of course, impulsivity can play an important role in such decisions.
9.2 The Role of Emotions in Evaluations
The authors think four concepts will be useful to solve the problem of a universal definition of
emotion: emotions, feelings, moods, and evaluations.
We define emotions as reactions to motivationally significant stimuli and situations, usually
including three components: a cognitive appraisal, a “signature” physiological response, and
phenomenal experiences. We would add that emotions usually occur in reaction to perceptions
of changes in the current situation that have hedonic consequences.
Second, we propose that the term mood be reserved for longer-duration background states of
our physiological (autonomic) system and the accompanying conscious feelings. Note, the
implication is that emotions and moods are not always conscious, and that the phenomenal
experience component is not a necessary element of an emotional reaction.
Finally, we suggest that the word evaluation be used to refer to hedonic, pleasure–pain, good–bad judgments of consequences.
Emotion, if considered at all, was just one more “input” into a global evaluation or utility. We
would still assign a major role to anticipated emotional responses in the evaluation of the value
or utility (either decision utility or experienced utility) of an outcome of a course of action;
people usually try to predict how they will feel about an outcome and use that anticipated
feeling to evaluate and then decide.
There seems to be agreement on the conclusion that an early, automatic reaction to almost any
personally relevant object or event is a good–bad evaluation.
Others have argued that there is a bivariate evaluative response system with two neurally independent circuits, one (dopamine-mediated) assessing positivity, one (acetylcholine-mediated) assessing negativity.
People with relatively active left prefrontal hemispheric areas tend to exhibit more positive ambient moods and react more positively to stimulus events, while right prefrontal activation is associated with more negative moods and emotional reactions.
We humans have an emotional signalling system that helps us make quick decisions and decide
when our slower deliberative cognitive systems are overwhelmed with too much information.
What’s especially important here is the emphasis on the helpful, adaptive role of emotions —the
claim is that without them, we’d make much worse decisions. This is a sharp contrast with the
traditional emphasis in religion and Freudian psychology on the notion that emotions play a
troublemaking role in decisions and interfere with clear, rational thinking processes.
Another recent conclusion is that experienced utility is intensified if it produces regret or
rejoicing, and especially if it is a surprise.
9.3 The Value of Money
In the 1923 edition of Webster’s International Dictionary, the first definition of value is “a quality
of a thing or activity according to which its worth or degree of worth is estimated.”
Even in everyday usage, value has come to be almost synonymous with monetary equivalent.
The more general concept of the degree of worth or desirability specific to the decision maker,
as opposed to mere money, is better termed utility. Even that term is ambiguous, however,
because the dictionary definition of utility is “immediate usefulness,” and that is not what
decision theorists have in mind when they discuss utility.
The authors’ own preferred term is personal value.
In general, the amount a stimulus must be incremented (or decremented) physically for people
to notice a difference is proportional to the stimulus magnitude itself; that is, it must be
incremented (or decremented) by a certain fraction of its physical intensity in order to achieve
what is technically termed a just noticeable difference.
The proportion that stimuli must be incremented or decremented to obtain a just noticeable
difference in intensity has been termed a Weber fraction.
The fact that this fraction is more or less constant for any particular type of sensory intensity has
been termed Weber’s law. It does not hold exactly over all dimensions and ranges of intensity,
but it is useful in research and practice as a rough approximation.
The psychologist Gustav Fechner (1801–1887) proposed that just noticeable differences could be
conceptualized as units of psychological intensity, as opposed to physical intensity. This means
that psychological intensity is a logarithm of physical intensity, and such a proposal became
known as Fechner’s law.
Again, it does not hold over all dimensions and ranges of intensity, but it is a good approximate
rule.
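With an assumed 10 percent Weber fraction, equal ratios of physical intensity produce equal steps of psychological intensity, which is Fechner’s logarithmic scale:

```python
import math

# Assumed: a 10% change is one just noticeable difference (JND).
weber_fraction = 0.10

def jnds_from(base, stimulus):
    """Number of JNDs between base and stimulus on Fechner's log scale."""
    return math.log(stimulus / base) / math.log(1 + weber_fraction)

# Doubling the stimulus adds the same number of JNDs at any level:
print(f"{jnds_from(10, 20):.2f}")
print(f"{jnds_from(100, 200):.2f}")
```

The same physical increment of 10 units is more than 7 JNDs starting from 10 but less than 1 JND starting from 100, which is the law of diminishing returns in psychophysical form.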
The logarithmic function follows what may be termed the law of diminishing returns, or the law
of decreasing marginal returns.
It is tempting to try to relate this function to Coombs and Avrunin’s (1977) derivation of the
single-peaked preference curve.
It is more correct to suppose that the Bernoullian utility function reflects both Coombs’s positive
and negative substrates and to imagine that it is single-peaked, too. This would mean that there
is such a thing as too much money—a point at which harassment, social enmity, threats of
kidnapping, and other anti-wealth or anti-celebrity actions would become so aversive that the
curve would peak and more wealth would be less desirable.
Prospect theory is a descriptive theory of decision behaviour. A basic tenet of this theory is that
the law of diminishing returns applies to both good and bad objective consequences of decisions.
Two components of the theory concern us here:
1. An individual views monetary consequences in terms of changes from a reference level,
which is usually the individual’s current reference point (usually the status quo). The values
of the outcomes for both positive and negative consequences of the choice then have the
diminishing-returns characteristic as they “move away” from that reference point.
2. The resulting value function is steeper for losses than for gains.
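These two components can be sketched with the power-law form often used for the prospect theory value function; the parameter values (alpha ≈ 0.88, lam ≈ 2.25) are the commonly cited Tversky–Kahneman estimates, used here only for illustration:

```python
# Prospect theory value function: outcomes are coded as gains or
# losses relative to a reference point, with diminishing returns in
# both directions and a steeper curve for losses. alpha=0.88 and
# lam=2.25 are the commonly cited Tversky-Kahneman estimates.
def value(x, alpha=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha          # concave for gains
    return -lam * ((-x) ** alpha)  # convex and steeper for losses

# Loss aversion: a $100 loss hurts more than a $100 gain pleases.
print(value(100))                 # about 57.5
print(value(-100))                # about -129.5
print(-value(-100) > value(100))  # True
```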
Irrationality enters because people do not look at the final outcomes of their choices, but rather
allow the reference level to change and make judgments relative to that moving reference
point.
The psychological justification for viewing consequences in terms of the status quo can be found
in the more general principle of adaptation: “Our perceptual apparatus is attuned to the
evaluation of changes or differences rather than to the evaluation of absolute magnitudes.”
The difference between prospect theory and the standard economic theory of diminishing
marginal utility is that the latter assumes that decision makers frame their choices in terms of
the final consequences of their decisions.
The diminishing-return shape of the utility function guarantees that any gamble between two
negative outcomes is worth more in terms of utility than the corresponding certain outcome,
and that any gamble between two positive outcomes is worth less than the corresponding
certain outcome.
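This risk-attitude asymmetry can be verified numerically with any concave function for gains and convex function for losses; square root is used below purely as a generic illustration, not as the theory’s actual value function:

```python
# With a concave utility for gains, the utility of a 50/50 gamble is
# less than the utility of the sure expected value (risk aversion);
# with a convex value over losses, the inequality flips (risk seeking).
# sqrt is a generic concave function used only to illustrate.
def u_gain(x):
    return x ** 0.5

def u_loss(x):             # x is a negative amount
    return -((-x) ** 0.5)  # convex over losses

# 50/50 gamble between $0 and $100 vs. a sure $50:
gamble_gain = 0.5 * u_gain(0) + 0.5 * u_gain(100)
sure_gain = u_gain(50)
print(gamble_gain < sure_gain)  # True: risk-averse for gains

# 50/50 gamble between $0 and -$100 vs. a sure -$50:
gamble_loss = 0.5 * u_loss(0) + 0.5 * u_loss(-100)
sure_loss = u_loss(-50)
print(gamble_loss > sure_loss)  # True: risk-seeking for losses
```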
Pseudocertainty: the prospect theory explanation is that the chooser adopts a particular stage in
a probabilistic process as a psychological status quo and consequently becomes risk-averse for
gains and risk-seeking for losses following it.
The irrationality here is that pseudocertainty leads to contradictory choice prior to knowledge of
the outcome—depending upon whether the choice is viewed in its totality or sequentially by
components.
Notice that the pseudocertainty effect depends on the manner in which we reason about
probabilities; it would occur for raw dollar values and for subjective utility values alike.
Classic economic utility theory, in contrast, is a normative theory of how we should choose, and
only sometimes a description of how we do choose.
While we should not always be bound by the normative theory’s implications, we should be
aware of them and violate them only with self-awareness and for a compelling reason.
9.4 Decision Utility – Predicting What We Will Value
If the rational model of decision making is to be of any practical use, it must assume that tastes
do not change often or capriciously and that decision makers have some capacity to predict what
they will like and dislike when they experience them in the future.
Proponents of the unstable values view conclude there is a basic unreliability in the mental
process that underlies the generation of answers to the evaluative questions.
How do people predict, at the point of decision, what will make them happy or unhappy as a
consequence of the actions they choose? We propose that a good account can be provided in
terms of judgment strategies or heuristics that are employed to predict value. We call these
evaluation heuristics by analogy with the judgment heuristics.
The authors propose three basic evaluation heuristics:
1. Predictions of value based on remembered past experiences,
2. Predictions based on simulating what the future experience will be like, and
3. Predictions based on deliberate calculations or inferential rules.
Past experience, learning, and memory play the dominant role in predictions of the future.
Another reason that memory for past pleasures and pains is important is that it contributes to
our current feelings of satisfaction.
If we do not appreciate regression effects, we will systematically overestimate how positively
we will feel about good consequences and overestimate the negativity of the bad
consequences.
When we rely on simulation, we are biased by our current emotional states. A very common and
important judgment bias is associated with situations in which we exhibit bounded self-control.
Loewenstein attributes this family of prediction errors to what he calls the “hot-cold empathy
gap”—people cannot know what the effects of their feelings will be on their own behaviours
when they are in different emotional states.
Some surprising biases in evaluations occur as a result of incidental emotions: emotions
experienced at the time a decision is made that have nothing to do with the decision itself; that
is, they figure in neither decision utility nor experienced utility.
Under many conditions we deliberately calculate how much we will like a future experience; we
call this the calculation heuristic for evaluations. The diversification bias is an example of a
systematic misprediction that occurs when we deliberately infer what we will like.
In our view, most of the apparent instabilities in value judgments can be accounted for with
reference to several psychological considerations.
1. There are simple changes in momentary goals. As noted above, when our current goals
change, our evaluations change.
2. There is a gap between predicted satisfactions and experienced satisfactions, and
researchers are developing a catalogue of systematic biases in predictions of future
satisfactions.
3. There will sometimes be shifts in value dependent on the changes in the evaluation
heuristics that we rely on when we remember, simulate, or calculate future values.
9.5 Constructing Values
Originally the belief sampling model was designed to explain instabilities in responses to general
surveys. But we think the model can be applied usefully to explain unreliability in evaluations of
many types. As the name suggests, the heart of the model is a (memory) sampling process.
The general properties of any cognitive memory system, namely fluctuations in the availability of
information from memory, explain unreliability in the system. Human memory retrieval is highly
context-dependent, and the specific information retrieved will fluctuate with small changes in
the encoding of the retrieval probe and other changes in activation of parts of the system.
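The belief sampling idea can be sketched as follows; the pool of stored “considerations” and their valences are hypothetical:

```python
import random

# Belief sampling sketch: each time we are asked to evaluate something,
# we retrieve a small random sample of stored considerations and
# average their valences. Small samples from a mixed pool make the
# reported evaluation fluctuate from occasion to occasion.
random.seed(0)
stored_considerations = [3, 2, -1, 4, -2, 1, 0, 3, -3, 2]  # hypothetical valences

def evaluate(pool, sample_size=3):
    sample = random.sample(pool, sample_size)
    return sum(sample) / sample_size

# The same person evaluating the same object on three occasions
# reports different values, purely from retrieval variability:
print([evaluate(stored_considerations) for _ in range(3)])
```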
Hastie and Dawes (H&D), Ch. 10: From Preferences to Choices – Summary
perfect alternative
Gerd Gigerenzer labels some of the most common choice rules “fast and frugal heuristics,”
because they approach optimality, but are frugal, requiring the consideration of relatively little
information about the alternatives, and hence are fast.
The amount of cognitive effort—measured subjectively or objectively— varies across strategies.
Effort also depends on the structure of the choice set. If the set is large, requires a lot of
trade-offs across dimensions and across alternatives, lacks critical or reliable information, or
includes a lot of similar alternatives, most of the strategies will demand considerable effort.
Some strategies involve across-attribute compensatory trade-offs, while others do not.
Non-compensatory strategies are unforgiving: If the rent is over $700 per month, the apartment
is rejected, no matter how good its other features.
Non-compensatory strategies, especially, are likely to miss “balanced, all-around good”
alternatives and sometimes terminate the search before a truly dominant “winner” is found.
A useful distinction is between alternative-based strategies and attribute-based ones.
In alternative-based strategies, attention is focused on one alternative at a time, its attributes
are reviewed, and a summary evaluation is performed of that item before attention is turned to
another alternative.
The contrasting organizational principle is a strategy based on attributes: An attribute (e.g.,
price, location) is selected and several alternatives are evaluated on that attribute. Then
attention turns to the next attribute.
Attribute-based strategies often stop with an “answer” after reviewing less information than
alternative-based strategies; therefore, alternative-based strategies tend to be more cognitively
demanding than those based on attributes.
Although there is some dependence on the structure of the set of choice alternatives, the
strategies differ in terms of the amount of information each is likely to consume in the choice
process. Some are
exhaustive and require perusal of all relevant information (and even deploy inference processes
to fill in the gaps in information); others are likely to make a choice after a small subset of the
total accessible information has been covered.
The most thorough, systematic, cognitively demanding choice strategy is the multi-attribute
utility theory (MAUT) evaluation process that is essentially the linear weight-and-add, Lens
Model judgment policy applied to valuation, rather than to estimating or forecasting “true states
of the world” (i.e., applied to estimate our “internal” reaction to the object of choice).
Most efforts to improve choice habits focus on inducing people to use strategies that are more
like the MAUT evaluation method.
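A minimal sketch of the MAUT weight-and-add rule, with hypothetical attributes, weights, and scores:

```python
# MAUT weight-and-add sketch: each alternative is scored on each
# attribute, the scores are multiplied by importance weights, and the
# products are summed. Attributes, weights, and scores are hypothetical.
weights = {"rent": 0.5, "location": 0.3, "size": 0.2}

apartments = {
    "A": {"rent": 7, "location": 9, "size": 5},
    "B": {"rent": 9, "location": 5, "size": 8},
}

def maut_score(scores, weights):
    return sum(weights[attr] * scores[attr] for attr in weights)

best = max(apartments, key=lambda a: maut_score(apartments[a], weights))
for name, scores in apartments.items():
    print(name, maut_score(scores, weights))  # A -> 7.2, B -> 7.6
print("choose:", best)                        # choose: B
```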
Ben Franklin’s advice is echoed in popular books on decision making that recommend listing the
possible consequences of choices, linking those to our personal values, and then choosing the
alternative that has the highest summary value according to a simple weight-and-add rule.
The answer to the question of importance is rather easy: It is up to the decision maker. In
constructing a weighting scheme, we should list the variables that are important to us, given our
current goals. If, for example, we think of “job level” in a global and amorphous way, then we
should list it.
Franklin advises not on what to decide, but how to decide it. When suggesting a list, he was not
advising what should be on it, but rather suggesting how to become explicit about what is
important to the decision maker.
Hastie and Dawes (H&D), Ch. 12: From Preferences to Choices – Summary
1. valuation, in which the value function is applied to each consequence associated with each
outcome;
2. decision weighting, in which each valued consequence is weighted for impact by a function
based on its objective probability of occurrence; and
3. integration, in which the weighted values across all the outcomes associated with a
prospect are combined by adding them up.
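Assuming the commonly cited functional forms for the value and weighting functions (an assumption, since the summary above does not specify them), the three stages can be composed as:

```python
# Prospect theory evaluation sketch: (1) value each consequence,
# (2) weight it by a decision weight on its probability,
# (3) add the weighted values. The functional forms and parameters
# (alpha, lam, gamma) follow commonly cited Tversky-Kahneman
# estimates and are assumptions here.
def value(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def weight(p, gamma=0.61):
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def prospect_value(prospect):
    """prospect: list of (outcome, probability) pairs."""
    return sum(weight(p) * value(x) for x, p in prospect)

# A 1% chance of $1,000, else nothing: worth noticeably more than
# 0.01 * value(1000), because the small probability is overweighted.
print(prospect_value([(1000, 0.01)]))
print(weight(0.01) > 0.01)  # True: small probabilities are overweighted
```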
Prospect theory includes a decision weighting process, analogous to the weighting of outcomes
by their probabilities of occurrence or expectations in expected utility theories.
Support theory (and what we know about heuristic judgment processes) could provide a
connection to non-numerical (“non-risky” in technical terms) uncertainty situations; support
theory would translate subjective uncertainty into numerical subjective probability on the x-axis
of the decision weight function.
The modal decision weight function (again, typical for most individuals in most decision
situations) looks like the backward S-shaped curve in Figure 12.2. A useful rule of thumb to
interpret these psychophysical functions is that when the curve is steeper, it implies the decision
maker will be more sensitive to differences on the objective dimension (x-axis): If the curve is
steep, there is relatively more change in the psychological response to any difference on the
objective dimension, as compared with where the curve is flatter.
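The steepness rule of thumb can be checked numerically with an inverse-S weighting function; the functional form and the gamma = 0.61 parameter are the commonly cited estimates, assumed here for illustration:

```python
# Decision weight function sketch: the inverse-S curve is steep near
# p = 0 and p = 1 and flat in the middle, so the decision maker is
# most sensitive to probability changes near impossibility and near
# certainty. gamma = 0.61 is a commonly cited estimate.
def weight(p, gamma=0.61):
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

# Equal objective changes, very different psychological impact:
near_zero = weight(0.01) - weight(0.00)  # 0% -> 1%
middle = weight(0.51) - weight(0.50)     # 50% -> 51%
near_one = weight(1.00) - weight(0.99)   # 99% -> 100%
print(near_zero, middle, near_one)
print(near_zero > middle and near_one > middle)  # True: steep at the ends
```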
Several mechanisms have been postulated as explanations for the differences in steepness or
slope—for example:
1. differential attention,
2. differences in sense organ sensitivity, and
3. differences in the reactivity of neural-biochemical substrates.
Figure 12.3 is a summary of the decision processes proposed by prospect theory. We have taken
some liberties by ordering the three preliminary sub-stages (editing, valuation, and decision
weighting) in a temporal sequence. The theory itself is not explicit about the order of the
computations.
Hastie and Dawes (H&D), Ch. 13: What’s Next – Summary
New Directions in Research on Judgment and Decision Making
One especially interesting result is the observation that brains are responsive to the relative—
not absolute—amounts to be gained or lost, as predicted by prospect theory.
There is accumulating evidence that the brain performs utility calculations like those prescribed
by prospect theory.
We already know from Ellsberg’s work that there is a behavioural difference, with most people
strongly preferring well-defined risky prospects to murky (obscure) ambiguous prospects. But are
different regions of the brain engaged when a person contemplates risk versus ambiguity, and do
the specific active regions give us some clues as to the nature of that reaction?
Two brain areas showed more activity when ambiguous gambles were considered: the
amygdala and the orbitofrontal cortex.
The amygdala is frequently associated with emotional responses, most notably to fear-evoking
stimuli, such as frightened faces, and the orbitofrontal cortex often appears to play a role in
integrating cognitive and emotional information; patients with injuries to the orbitofrontal
cortex often behave inappropriately in social situations, despite knowledge of the proper
behaviour.
Conversely, the dorsal striatum (including the nucleus accumbens) was more active when risky
(compared with ambiguous) prospects were considered. This area (see above) seems to play a
role in predicting rewards (especially monetary rewards). These interpretations suggest that the
brain treats ambiguous prospects as a bit scary and emotional, but treats risky prospects as
something to think about in a “calculating” manner.
Camerer’s results suggest that there are two brain systems—one associated with the amygdala
and the orbitofrontal cortex, the other associated with the striatum—that respond to
uncertainty in prospects presented for decisions. Both are active, but as uncertainty increases
and becomes ambiguity, there is a shift toward relatively more activation of the amygdala and
orbitofrontal system.
Furthermore, the same shift in system activation was observed for uncertainty introduced by
simple card-draw gambles and for increasing uncertainty produced by lack of expertise (e.g., your
outcome depends on judging the temperature in Tajikistan) and by the potential actions of a human
opponent, implying that the systems are reacting to a very general sense of uncertainty-
ambiguity.
One important behavioural observation was that the choices of patients with damage to the
orbitofrontal cortex were both risk- and ambiguity-neutral, while non-brain-injured participants
were mostly averse to risk and even more averse to ambiguity.
This article describes ten regularities in naturally occurring data that are anomalies for expected
utility theory but can all be explained by three simple elements of prospect theory: loss-aversion,
reflection effects, and nonlinear weighting of probability; moreover, the assumption is made that
people isolate decisions (or edit them) from others they might be grouped with.
In expected utility theory, gambles that yield risky outcomes x_i with probabilities p_i are valued
according to Σ_i p_i·u(x_i), where u(x) is the utility of outcome x. In prospect theory they are
valued by Σ_i π(p_i)·v(x_i − r), where π(p) is a function that weights probabilities nonlinearly,
overweighting probabilities below about .3 and underweighting larger probabilities. The value
function v(x − r) exhibits diminishing marginal sensitivity to deviations from the reference point
r, creating a “reflection effect” because v(x − r) is convex for losses and concave for gains (i.e.,
v″(x − r) > 0 for x < r and v″(x − r) < 0 for x > r). The value function also exhibits loss aversion if
the value of a loss is larger in magnitude than the value of an equal-sized gain (i.e.,
−v(−x) > v(x) for x > 0).
than winners because such sales generate losses that can be used to reduce the taxes owed on
capital gains.
Interestingly, the winner- loser differences did disappear in December. In this month investors
have their last chance to incur a tax advantage from selling losers.
3. Labour Supply
Camerer, Babcock, Loewenstein, and Thaler (in this volume) talked to cab drivers in New York
City about when they decide to quit driving each day. Most of the drivers lease their cabs for a
fixed fee for up to 12 hours. Many said they set an income target for the day and quit when
they reach that target. Although daily income targeting seems sensible, it implies that drivers
will work long hours on bad days when the per-hour wage is low and will quit earlier on good
high-wage days. The standard theory of the supply of labour predicts the opposite: Drivers will
work the hours that are most profitable, quitting early on bad days and making up the shortfall
by working longer on good days.
The daily targeting theory and the standard theory of labour supply therefore predict opposite
signs of the correlation between hours and the daily wage.
To measure the correlation, we collected three samples of data on how many hours drivers
worked on different days. The correlation between hours and wages was strongly negative for
inexperienced drivers and close to zero for experienced drivers.
Daily income targeting assumes loss aversion in an indirect way. To explain why the correlation
between hours and wages for inexperienced drivers is so strongly negative, one needs to assume
that drivers take a 1-day horizon and have a utility function for the day’s income that bends
sharply at the daily income target. This bend is an aversion to “losing” by falling short of an
income reference point.
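The targeting story implies hours = target / wage, which mechanically produces the negative hours–wage correlation; a sketch with hypothetical numbers:

```python
# Daily income targeting sketch: if a driver quits on reaching a fixed
# income target, hours worked = target / wage, so hours and wages are
# negatively correlated across days. Target and wages are hypothetical.
target = 200  # dollars per day

daily_wages = [15, 20, 25, 30, 35]  # per-hour earnings on five days
hours = [target / w for w in daily_wages]

# Higher-wage days -> fewer hours, the opposite of the standard
# labour-supply prediction:
print(list(zip(daily_wages, [round(h, 1) for h in hours])))
print(hours[0] > hours[-1])  # True: longest day is the lowest-wage day
```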
depending on their current income, anticipations of future income, and their discount factors.
Consumption is “sticky downward” for two reasons:
(1) Because they are loss-averse, cutting current consumption means they will consume below their
reference point this year, which feels awful.
(2) Owing to reflection effects, they are willing to gamble that next year’s wages might not be so
low; thus, they would rather take a gamble in which they either consume far below their
reference point or consume right at it than accept consumption that is modestly below the
reference point. These two forces make the teachers reluctant to cut their current consumption
after receiving bad news about future income prospects, which explains Shea’s finding.
6. Status Quo Bias, Endowment Effects, and Buying–Selling Price Gaps
Samuelson and Zeckhauser (1988) coined the term status quo bias to refer to an exaggerated
preference for the status quo and showed such a bias in a series of experiments. They also
reported several observations in field data that are consistent with status quo bias.
There is a huge literature establishing that selling prices are generally much larger than buying
prices, although there is a heated debate among psychologists and economists about what the
price gap means and how to measure “true” valuations in the face of such a gap.
Conclusion
Economists value (1) mathematical formalism and econometric parsimony, and (2) the ability of
theory to explain naturally occurring data.
Loss-aversion can explain the extra return on stocks compared with bonds (the equity premium),
the tendency of cab drivers to work longer hours on low-wage days, asymmetries in consumer
reactions to price increases and decreases, the insensitivity of consumption to bad news about
income, and status quo and endowment effects. Reflection effects—gambling in the domain of
a perceived loss—can explain holding losing stocks longer than winners and refusing to sell your
house at a loss (disposition effects), insensitivity of consumption to bad income news, and the
shift toward longshot betting at the end of a racetrack day. Nonlinear weighting of probabilities
can explain the favourite-longshot bias in horse-race betting, the popularity of lotto lotteries with
large jackpots, and the purchase of telephone wire repair insurance. In addition, note that the
disposition effect and downward-sloping labour supply of cab drivers were not simply observed
but were also predicted in advance based on prospect theory.
Prospect theory is a suitable replacement for expected utility because it can explain anomalies
like those listed above and can also explain the most basic phenomena expected utility is used to
explain.
Christine Jolls – Behavioural Law and Economics
Abstract
This paper describes and assesses the current state of behavioral law and economics. Law and
economics had a critical (though underrecognized) early point of contact with behavioral economics
through the foundational debate in both fields over the Coase theorem and the endowment effect. The
paper concludes with reference to a new emphasis in behavioral law and economics on “debiasing
through law”: using existing or proposed legal structures in an attempt to reduce people’s departures
from the traditional economic assumption of unbounded rationality.
1. Introduction
An important threshold question for the present work involves how to characterize the domains of
both “law and economics” and “behavioral law and economics.”
Three features of the work considered in this article are:
1. much of this work focuses on various areas of law that were not much studied by economists
prior to the advent of law and economics
2. it often (controversially) employs the normative criterion of “wealth maximization” rather than
that of social welfare maximization.
3. it shows sustained interest in explaining and predicting the content, rather than just the
effects, of legal rules.
Behavioral law and economics involves both the development and the incorporation within law and
economics of behavioral insights drawn from various fields of psychology and attempts to improve
the predictive power of law and economics by building in more realistic accounts of actors’
behavior.
Through the vehicle of “debiasing through law,” behavioral law and economics may open up a new space
within law and economics between, on the one hand, unremitting adherence to traditional
economic assumptions and, on the other hand, broad structuring or restructuring of legal regimes
on the assumption that people are inevitably and permanently bound to deviate from traditional
economic assumptions.
2. The Endowment Effect in Behavioural Economics and Behavioural Law and Economics
2.1 The Coase Theorem
This theorem posits that allocating legal rights to one party or another will not affect outcomes if
transaction costs are sufficiently low.
Thus, for instance, whether the law gives a factory the right to emit pollution next to a laundry
or, instead, says the laundry has a right to be free of pollution will not matter to the ultimate
outcome (pollution or no pollution) as long as transaction costs are sufficiently low. The reason
for this result is that, with low transaction costs, the parties should be expected bargain to the
efficient outcome under either legal regime.
The Coase theorem is central to law and economics because of (among other things) the theorem’s
claim about the domain within which normative analysis of legal rules – whether rule A is preferable
to rule B or the reverse – is actually relevant.
Laws banning usurious lending and price gouging when such activities are prevalent are a
straightforward prediction of the theory of bounded self-interest described above.
The account above of bounded self-interest suggests that if trades are occurring frequently in a
given jurisdiction at terms far from those of the reference transaction, there will be strong pressure
for a law banning such trades. Note that the prediction is not that all high prices (ones that make it
difficult or impossible for some people to afford things they might want) will be banned; the
prediction is that transactions at terms far from the terms on which those transactions generally
occur in the marketplace will be banned.
Of course, waiting in line for scarce goods is precisely what happens with laws against price gouging.
Thus, pervasive fairness norms appear to shape attitudes (and hence possibly law) on both usury
and price gouging.
As a positive matter, behavioral law and economics predicts that if trades are occurring with some
frequency on terms far from those of the reference transaction, then legal rules will often ban
trades on such terms.
6. Conclusion
The ultimate sign of success for behavioral economics will be that what is now behavioral
economics will become simply “economics.” The same observation applies to behavioral law and
economics.
Debiasing through law may hasten the speed at which this transition occurs by pointing to a wide
range of possibilities for recognizing human limitations while at the same time avoiding the step of
paternalistically removing choices from people’s hands.
Alvin E. Roth: Repugnance as a Constraint on Markets
Repugnant Markets
As the examples in Table 1 and others show, even where there are willing suppliers and
demanders of certain transactions, aversion to those transactions by others may constrain or
even prevent them.
How Repugnant Combines with Other Factors
Some markets are banned or limited for combinations of reasons that include both repugnance
and also concerns about negative externalities. In some repugnant markets transactions may
not always involve two willing parties. But repugnance can be present even when the
externalities are minimal.
Therefore, bans on some repugnant markets sometimes seem aimed only at limiting private consumption.
Some kinds of repugnance are also intermixed with concerns about providing incentives for bad
behaviour.
Dwarf tossing
Essentially dwarf tossing was so repugnant that it imposed a negative externality by diminishing
human dignity, a public good.
Cash Payments and Repugnance
One often-noted regularity is that some transactions that are not repugnant as gifts and in-kind
exchanges become repugnant when money is added.
Offering money is often regarded as inappropriate even when not repugnant. For example,
dinner guests at your home may respond in-kind, by bringing wine or inviting you to dinner in
return, but they would likely not be invited back if they offered to pay for their dinner.
Sometimes the level of the price is regarded as repugnant rather than the existence of a price:
after a natural disaster it is often regarded as acceptable to sell supplies at their pre-disaster
price, but as repugnant price-gouging to raise the price. There may be resistance to charging for
goods that have previously been provided for free or at low cost, like water or the right to drive
in cities during rush hours.
Of course, sometimes laws or public outrage focus on monetary transactions only because they
are easier to ban than nonmonetary transactions.
Concerns about the monetization of transactions fall into three principal classes:
1. One concern is objectification: that is, the fear that putting a price on certain things and
buying or selling them might move them into a class of impersonal objects to which they
should not belong.
2. A second concern is that offering substantial monetary payments might be coercive, in the
sense that it might leave some people, particularly the poor, open to exploitation from
which they deserve protection.
3. A third concern, sometimes less clearly articulated, is that monetizing certain transactions
that might not themselves be objectionable may cause society to slide down a slippery
slope to genuinely repugnant transactions.
Conclusion
Repugnance can be a real constraint on markets. Almost whenever I have been involved in
practical market design, the question of whether certain kinds of transactions may be
inappropriate has come up for discussion.
To say that repugnance is a real phenomenon doesn’t mean that repugnance isn’t sometimes
deployed for strategic purposes by self-interested parties to recruit allies who would not respond
to a clear appeal to narrower motives such as rent seeking.
One way of seeing the role that repugnance plays in this debate is to compare it to a difficult
technological barrier. If the technological barriers that currently prevent, say, transplanting pig
kidneys into human patients could be overcome, such “xenotransplants” would also end the
kidney shortage.
Repugnance is similar to technological barriers in this respect: markets that we can envision may
nevertheless not be easily achievable. I would not like to guess whether repeal of the widespread
laws against kidney sales is likely to happen more quickly than the advances in
xenotransplantation, or artificial kidneys, or other medical breakthroughs that would end the
shortage of kidneys.
Of course, there can also be “technological” developments in the law. For example, Volokh
(forthcoming) endorses a “medical right to self-defence” that would give a person dying of
end-stage renal disease the right to pursue all reasonable avenues to preserve their life,
including purchasing a kidney.
Whereas economists see very few trade-offs as completely taboo, noneconomists often decline
to discuss trade-offs at all, preferring to focus on the repugnance of transactions like organ sales.
The current situation can be viewed as a regulated market with the only legal price being zero,
which makes it difficult to prevent unregulated transactions on international black markets.
Living donors give an organ while alive. Living donation of kidneys, which represented 96 percent
of all US living organ donations in 2012 (OPTN), is possible since humans have two kidneys but can
live a healthy life with only one, allowing the other to be removed and donated.
Just over 3 percent of living donor kidneys in the United States came from non-directed donors in
2012.
Additional organs are recovered when next of kin consent to donation on behalf of unregistered
deceased donors. (Next of kin are also asked to consent to donation for registered donors. While
this confirmation is not deemed to be legally necessary to proceed with donation, it is usually
done anyway.)
One standard first response by economists is that we can solve excess demand by raising the price
from the current legal limit of zero by allowing organs to be bought and sold, potentially for both
living and deceased donation.
There is evidence that the manner of the payment to an organ donor may mitigate some of the
repugnance concerns. Niederle and Roth (forthcoming) find that payments to non-directed kidney
donors are deemed more acceptable when they arise as a reward for heroism and public service
than when they are viewed as a payment for kidneys.
As kidney exchange began to assemble pools of patient-donor pairs, it became possible to offer
non-directed donors the possibility of initiating a long chain of donations, in which the nondirected
donor would donate to the patient in an incompatible pair, whose donor would donate to another
pair, and so on.
Some nations have introduced allocation schemes that provide priority on organ donor waiting lists
to individuals who have previously registered as donors.
The priority allocation rule led to a large, significant increase in donation. Additional treatments
revealed that the main mechanism driving this increase was the monetary incentive effect of
priority; the same increase in donation was induced by providing a rebate for donation equal to the
expected value of having priority or by lowering the cost of donation by the expected value of
having priority.
The experiment assumed that the allocation rule could be implemented so that everyone who
registered as an organ donor to receive priority would actually donate when in a position to do so.
The check box (on the sign-up form for becoming an organ donor) has the potential to operate as a
loophole in the priority allocation system whereby an individual signs a donor card to receive
priority on the waiting list if he is ever in need of an organ but expects his family or clergyman to
decline the donation if he dies and is in a position to donate. Essentially it allows individuals to
receive priority even though they would never make a donation.
If this loophole exists, it completely eliminates the benefit of priority. This then leads to fewer donations under a priority system with a loophole than under a first-come, first-served system without priority.
However, subjects treat taking the loophole as a worse affront than simply not donating,
presumably since those who take the loophole are explicitly abusing a system designed to reward
donors.
European countries that have opt-out systems have vastly higher donor registration rates than the
European countries that have opt-in systems. In the United States, organ donation falls under gift law and so requires a positive statement of support in favour of donation.
Subjects are less likely to report that next of kin should donate the organs of an unregistered
deceased if the deceased explicitly said no to registration in a question framed as mandated choice
than if the deceased simply chose not to opt in. This suggests that asking individuals to register
under a mandated choice frame may make it harder to get permission for organ donation from the
next of kin of those who remain unregistered.
Camerer & Loewenstein: Behavioural Economics: Past, Present, and Future – Summary
Intertemporal Choice
The discounted-utility (DU) model assumes that people have instantaneous utilities from their
experience each moment, and that they choose options that maximize the present discounted sum
of these instantaneous utilities.
Typically, it is assumed that instantaneous utility each period depends solely on consumption in that
period, and that the utilities from streams of consumption are discounted exponentially, applying
the same discount rate in each period.
Time Discounting
A central issue in economics is how agents trade off costs and benefits that occur at different points in time. The standard assumption is that people weight future utilities by an exponentially declining discount factor d(t) = δ^t, where 0 < δ < 1. Note that the discount factor δ is often expressed as 1/(1 + r), where r is a discount rate.
A simple hyperbolic time discounting function, d(t) = 1/(1 + kt), tends to fit experimental data
better than exponential discounting.
Immediacy effect: discounting is dramatic when one delays consumption that would otherwise be immediate.
Hyperbolic time discounting implies that people will make relatively farsighted decisions when
planning in advance – when all costs and benefits will occur in the future – but will make relatively
short-sighted decisions when some costs or benefits are immediate.
The systematic changes in decisions produced by hyperbolic time discounting create a time
inconsistency in intertemporal choice not present in the exponential model.
Somebody with time inconsistent hyperbolic discounting will wish prospectively that in the future
he would take farsighted actions; but when the future arrives he will behave against his earlier
wishes, pursuing immediate gratification rather than long-run well-being.
Quasi-hyperbolic time discounting is basically standard exponential time discounting plus an immediacy effect; a person discounts all delays in gratification equally except the current one, caring differently about well-being now versus later.
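The three discount functions can be sketched in a few lines of Python; the parameter values below (δ, k, β) are illustrative assumptions, not figures from the text:

```python
# Illustrative sketch of the three discount functions; the parameter
# values (delta, k, beta) are assumptions chosen for the example.

def exponential(t, delta=0.9):
    """Standard DU model: d(t) = delta**t."""
    return delta ** t

def hyperbolic(t, k=1.0):
    """Simple hyperbolic form: d(t) = 1 / (1 + k*t)."""
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    """Beta-delta form: exponential discounting plus an immediacy effect."""
    return 1.0 if t == 0 else beta * delta ** t

# Preference reversal under hyperbolic discounting:
# $100 now vs. $120 one period later, and the same pair 10 periods out.
prefers_small_now = 100 * hyperbolic(0) > 120 * hyperbolic(1)       # True
prefers_small_later = 100 * hyperbolic(10) > 120 * hyperbolic(11)   # False
```

With these (assumed) parameters the decision maker grabs the smaller immediate reward but, when both rewards are distant, waits for the larger one: exactly the time inconsistency the exponential model rules out.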
Partial illiquidity of an asset plays a role in helping consumers constrain their own future consumption.
An important question in modelling self-control is whether agents are aware of their self-control
problem (“sophisticated”) or not (“naïve”). Naïveté typically makes the damage from poor self-control worse. In some cases, however, being sophisticated about one’s self-control problem can exacerbate yielding to temptation: if you are aware of your tendency to yield to a temptation in the future, you may conclude that you might as well yield now, whereas if you naively think you will resist temptation longer in the future, that may motivate you to think it is worthwhile resisting now.
A new model by Loewenstein and Prelec includes effects such as the “magnitude effect,” “temporal losses,” and lower discount rates for losses than for gains. This model departs in two major ways from the DU model. First, it incorporates a hyperbolic discount function. Second, it incorporates a utility function with special curvature properties that is defined over gains and losses rather than final levels of consumption.
Negative time discounting – If people like savouring pleasant future activities they may postpone
them to prolong the pleasure (and they may get painful activities over with quickly to avoid dread).
Abstract
Childhood self-control predicts physical health, substance dependence, personal finances, and criminal offending outcomes, following a gradient of self-control.
The sibling with lower self-control had poorer outcomes, despite shared family background.
Interventions addressing self-control might reduce a panoply of societal costs, save taxpayers money, and promote prosperity.
Introduction
Self-control is an umbrella construct that bridges concepts and measurements from different
disciplines (e.g., impulsivity, conscientiousness, self-regulation, delay of gratification, inattention-
hyperactivity, executive function, willpower, intertemporal choice).
Neuroscientists study self-control as an executive function subserved by the brain’s frontal cortex.
Behavioural geneticists have shown that self-control is under both genetic and environmental
influences.
Health researchers report that self-control predicts early mortality; psychiatric disorders; and
unhealthy behaviours, such as overeating, smoking, unsafe sex, drunk driving, and noncompliance
with medical regimens. Sociologists find that low self-control predicts unemployment and name
self-control as a central causal variable in crime theory, providing evidence that low self-control
characterizes lawbreakers.
Preschool programs that targeted poor children 50 y ago, although failing to achieve their stated goal of lasting improvement in children’s intelligence quotient (IQ) scores, somehow produced by-product reductions in teen pregnancy, school dropout, delinquency, and work absenteeism.
Policy-makers might exploit this by enacting so-called “opt-out” schemes that tempt people to eat
healthy food, save money, and obey laws by making these the default options that require no
effortful self-control.
First, we tested whether children’s self-control predicted later health, wealth, and crime similarly at
all points along the self-control gradient, from lowest to highest self-control.
Some Dunedin study members moved up in the self-control rank over the years of the study, and
we were able to test the hypothesis that improving self-control is associated with better health,
wealth, and public safety.
We also tested the hypothesis that individual differences in preschoolers’ self-control predict outcomes in adulthood. If so, early childhood would also be an intervention window.
Results
Mean levels of self-control were higher among girls than boys, but the health, wealth, and public
safety implications of childhood self-control were equally evident and similar among boys and girls.
Dunedin study children with greater self-control were more likely to have been brought up in
socioeconomically advantaged families and had higher IQs, raising the possibility that low self-
control could be a proxy for low social class origins or low intelligence.
Predicting Health
Childhood self-control predicted adult health problems, even after accounting for social class origins
and IQ.
As adults, children with poor self-control were not at elevated risk for depression. They had
elevated risk for substance dependence, however, even after accounting for social class and IQ.
Predicting Wealth
Childhood self-control also foreshadowed the study members’ financial situations. Although the
study members’ social class of origin and IQ were strong predictors of their adult socioeconomic
status and income, poor self-control offered significant incremental validity in predicting the
socioeconomic position they achieved and the income they earned.
Childhood self-control predicted whether or not these study members’ offspring were being reared
in one-parent vs. two- parent households (e.g., the study member was an absent father or single
mother), also after accounting for social class and IQ.
At the age of 32 y, children with poor self-control were less financially planful. Compared with other
32-y-olds, they were less likely to save and had acquired fewer financial building blocks for the
future.
Children with poor self-control were also struggling financially in adulthood. They reported more
money-management difficulties and had accumulated more credit problems.
Poor self-control in childhood was a stronger predictor of these financial difficulties than study
members’ social class origins and IQ.
Predicting Crime
Children with poor self-control were more likely to be convicted of a criminal offense, even after
accounting for social class origins and IQ.
Self-Control Gradient
The self-control gradient was even apparent when we removed children in the least and most self-controlled quintiles.
The childhood measure of self-control was significantly correlated with a personality measurement
of self-control administered to our cohort in young adulthood, at a moderate magnitude, consistent
with expectations.
As a caveat, it is not clear that natural history change of the sort we observed in our longitudinal
study is equivalent to intervention-induced change.
Self-Control and Adolescent Mistakes
Data collected at the ages of 13, 15, 18, and 21 y showed that children with poor self-control were
more likely to make mistakes as adolescents, resulting in “snares” that trapped them in harmful
lifestyles.
More children with low self-control began smoking by the age of 15 y, left school early with no
educational qualifications, and became unplanned teenaged parents. The lower their self-control,
the more of these snares they encountered. In turn, the more snares they encountered, the more
likely they were, as adults, to have poor health, less wealth, and a criminal conviction.
How Early Can Self-Control Predict Health, Wealth, and Crime?
Preschoolers’ self-control significantly predicted health, wealth, and convictions at the age of 32 y, albeit with modest effect sizes.
Sibling Comparison
Models showed that the 5-y-old sibling with poorer self-control was significantly more likely to
begin smoking as a 12-y-old, perform poorly in school, and engage in antisocial behaviours, and
these findings remained significant even after controlling for sibling differences in IQ.
Comment
Differences between individuals in self-control are present in early childhood and can predict
multiple indicators of health, wealth, and crime across 3 decades of life in both genders.
Furthermore, it was possible to disentangle the effects of children’s self-control from effects of
variation in the children’s intelligence, social class, and home lives of their families, thereby singling
out self-control as a clear target for intervention policy.
Differences between children in self-control predicted their adult outcomes approximately as well
as low intelligence and low social class origins, which are known to be extremely difficult to improve
through intervention.
It has been shown that self-control can change. Programs to enhance children’s self-control have
been developed and positively evaluated, and the challenge remains to improve them and scale
them up for universal dissemination
Abstract
As firms switch from defined-benefit plans to defined-contribution plans, employees bear more
responsibility for making decisions about how much to save. The employees who fail to join the plan or
who participate at a very low level appear to be saving at less than the predicted life cycle savings rates.
Behavioural explanations for this behaviour stress bounded rationality and self-control and suggest that
at least some of the low-saving households are making a mistake and would welcome aid in making
decisions about their saving.
The essence of the SMarT program is straightforward: people commit in advance to allocating a portion
of their future salary increases toward retirement savings.
Our key findings, from the first implementation, which has been in place for four annual raises, are as
follows: (1) a high proportion (78 percent) of those offered the plan joined, (2) the vast majority of
those enrolled in the SMarT plan (80 percent) remained in it through the fourth pay raise, and (3) the
average saving rates for SMarT program participants increased from 3.5 percent to 13.6 percent over
the course of 40 months. The results suggest that behavioural economics can be used to design effective
prescriptive programs for important economic decisions.
Introduction
Households are assumed to want to smooth consumption over the life cycle and are expected to
solve the relevant optimization problem in each period before deciding how much to consume and
how much to save.
Actual household behaviour might differ from this optimal plan for at least two reasons.
1. the problem is a hard one, even for an economist, so households might fail to compute the
correct savings rate.
2. even if the correct savings rate were known, households might lack the self-control to reduce
current consumption in favour of future consumption
The basic idea of SMarT is to give workers the option of committing themselves now to increasing
their savings rate later, each time they get a raise.
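A minimal sketch of such a commitment schedule; the starting rate, increment, and cap below are illustrative assumptions, not the plan's actual parameters:

```python
# Hypothetical sketch of the SMarT commitment: the saving rate steps up with
# each pay raise, so take-home pay never has to fall. The starting rate,
# increment, and cap are illustrative assumptions.

def smart_schedule(start_rate, increment, n_raises, cap=0.20):
    """Return the saving rate before and after each of n_raises raises,
    never exceeding the cap."""
    rates = [start_rate]
    for _ in range(n_raises):
        rates.append(min(rates[-1] + increment, cap))
    return rates

rates = smart_schedule(start_rate=0.035, increment=0.03, n_raises=4)
# rates[0] is the initial rate; rates[4] is the rate after the fourth raise.
```

Because each increase coincides with a raise, a loss-averse employee never experiences a cut in nominal take-home pay along the way.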
A Prescriptive Approach to Increasing Savings Rates
Raiffa (1982) suggested that economists and other social scientists could benefit from distinguishing
three different kinds of analyses: normative, descriptive, and prescriptive.
Normative theories characterize rational choice and are often derived by solving some kind of
optimization problem.
Descriptive theories simply model how people actually choose, often by stressing systematic
departures from the normative theory.
Prescriptive theories are attempts to offer advice on how people can improve their decision making
and get closer to the normative ideal.
Prescriptions often have a second-best quality.
Before writing a prescription, one must know the symptoms of the disease being treated.
Households may save less than the life cycle rate for various reasons.
1. Determining the appropriate savings rate is difficult, even for someone with economics training. One obvious solution to this problem is financial education.
2. Saving for retirement requires self-control.
3. A third problem, closely related to self-control, is procrastination, the familiar tendency to postpone unpleasant tasks.
Economists have long known that intertemporal choices are time-consistent only if agents discount exponentially using a discount rate that is constant over time. But there is considerable evidence that people display time-inconsistent behavior, specifically, weighing current and near-term consumption especially heavily.
Present-biased preferences (as discussed in the two other articles for this session) can be captured
with models that employ hyperbolic discounting. These models come in two varieties: sophisticated
and naive. Sophisticated agents realize that they have hyperbolic preferences and take steps to deal
with the problem, whereas naive agents fail to appreciate at least the extent of their problem.
Hyperbolic agents procrastinate because they (wrongly) think that whatever they will be doing later
will not be as important as what they are doing now.
The costs of actively joining the (retirement) plan (typically filling out a short form) are trivial
compared with the potential benefits of the tax-free accumulation of wealth, and in some cases a
“match” is provided by the employer, in which the employer typically contributes 50 cents to the
plan for every dollar the employee contributes, up to some maximum, so standard theory predicts that nearly everyone should join. In contrast, if agents display
procrastination and status quo bias, then automatic enrolment could be useful in increasing
participation rates.
Consistent with the behavioural predictions, automatic enrolment plans have proved to be
remarkably successful in increasing enrolments.
A goal of the SMarT plan is to obtain some of the advantages of automatic enrollment while
avoiding some of the disadvantages.
The program should be simple and should help people approximate the life cycle saving rate if they
are unable to do so themselves.
Hyperbolic discounting implies that opportunities to save more in the future will be considered
more attractive than those in the present.
Procrastination and inertia suggest that once employees are enrolled in the program, they should
remain in until they opt out.
The final behavioural factor that should be considered in designing a prescriptive savings plan is loss
aversion, the empirically demonstrated tendency for people to weigh losses significantly more
heavily than gains.
Loss aversion affects savings because once households get used to a particular level of disposable
income, they tend to view reductions in that level as a loss.
The combination of loss aversion and money illusion suggests that pay increases may provide a
propitious time to try to get workers to save more, since they are less likely to consider an increased
contribution to the plan as a loss than they would at other times of the year.
Outcome
Tables at the end.
Conclusion
The initial experience with the SMarT plan has been quite successful. Many of the people who were
offered the plan elected to use it, and a majority of the people who joined the SMarT plan stuck
with it. Consequently, in the first implementation, for which we have data for four annual raises,
SMarT participants almost quadrupled their saving rates. Of course, one reason why the SMarT plan
works so well is that inertia is so powerful.
Once people enroll in the plan, few opt out. The SMarT plan takes precisely the same behavioural
tendency that induces people to postpone saving indefinitely (i.e., procrastination and inertia) and
puts it to use. As the financial consultant involved in the first implementation has noted, in
hindsight it would have been better to offer the SMarT plan to all participants, even those who were
willing to make their initial savings increase more than the first step of the SMarT plan. Very few of
these eager savers ever got around to changing their savings allocations again, whereas after just 16 months the SMarT plan participants were already saving more than the eager savers were.
Some economists have criticized practices such as automatic enrollment and the SMarT plan on
the grounds that they are paternalistic, a term that is not meant to be complimentary.
The authors agree that these plans are paternalistic, but since no coercion is involved, they
constitute what Sunstein and Thaler (2003) call “libertarian paternalism.” Libertarian paternalism is
a philosophy that advocates designing institutions that help people make better decisions but do
not impinge on their freedom to choose. Automatic enrollment is a good example of libertarian
paternalism.
Introduction
The brain controls human behavior. Economic choice is no exception. Recent studies have shown
that experimentally induced variation in neural activity in specific regions of the brain changes
people’s willingness to pay for goods, renders them more impatient, more selfish, and more willing
to violate social norms and cheat their trading partner.
These studies use transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS), which enable the researcher to exogenously increase or decrease neural activity in specific regions of the cortex before subjects make decisions in experimental tasks that elicit their preferences.
Neuroeconomics combines methods and theories from neuroscience, psychology, economics, and
computer science to investigate three basic questions:
1. What are the variables computed by the brain to make different types of decisions, and how do
they relate to behavioral outcomes?
2. How does the underlying neurobiology implement and constrain these computations?
3. What are the implications of this knowledge for understanding behavior and well-being in
various contexts: economic, policy, clinical, legal, business, and others?
2) the model predicts that the probability of choosing x is a logistic function of the difference in
the decision value signals [v(x) – v(y)]
3) given the stochasticity of choice, there is always a positive probability that individuals will choose the option with the lower decision value. This probability increases with the difficulty of the choice (as measured by how small |v(x) – v(y)| is) and decreases with the parameter θ and with the height of the barriers.
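Point (2) can be written out explicitly; how the sensitivity parameter θ enters is an assumed parameterization in this sketch:

```python
import math

def choice_probability(v_x, v_y, theta=1.0):
    """P(choose x) as a logistic function of the decision-value difference
    v(x) - v(y); the way theta scales the difference is an assumed form."""
    return 1.0 / (1.0 + math.exp(-theta * (v_x - v_y)))

# Equal values give a coin flip; a larger value gap, or a larger theta,
# pushes the probability toward certainty.
```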
the model makes specific predictions about how the shape of the reaction time distribution varies
with the difficulty of the choice and with the parameters of the model.
From the brain’s point of view, decision values are estimated with noise at any instant. If the
instantaneous decision value signals are computed with identical and independently distributed
Gaussian noise, then the drift-diffusion model implements the optimal statistical solution to the
problem, which entails a sequential likelihood ratio test.
The relative decision value R_t can be thought of as the accumulated evidence in favor of the hypothesis that the alternative x is better (when R_t > 0), or the accumulated evidence in favor of the alternative hypothesis (when R_t < 0). The more extreme these values become, the less likely it
is that the evidence is incorrect. The probability of a mistake can be controlled by changing the size
of the barriers that have to be crossed before a choice is made.
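A minimal simulation of this accumulation-to-barrier process; the drift form and all parameter values are illustrative assumptions:

```python
import random

def ddm_trial(v_x, v_y, theta=0.1, noise=1.0, barrier=10.0,
              max_steps=10_000, seed=None):
    """Simulate one drift-diffusion trial: the relative decision value R_t
    drifts at rate theta*(v(x) - v(y)) plus Gaussian noise until it crosses
    +barrier (choose x) or -barrier (choose y). Returns (choice, reaction
    time in steps). Parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    r = 0.0
    for t in range(1, max_steps + 1):
        r += theta * (v_x - v_y) + rng.gauss(0.0, noise)
        if r >= barrier:
            return "x", t
        if r <= -barrier:
            return "y", t
    return ("x" if r > 0 else "y"), max_steps  # fallback: no barrier crossed

# The better option wins most but not all trials; raising the barrier
# slows decisions and lowers the error rate, as the text describes.
trials = [ddm_trial(v_x=2.0, v_y=1.0, seed=i) for i in range(500)]
share_x = sum(1 for choice, _ in trials if choice == "x") / len(trials)
```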
Rangel (forthcoming) argues that a brain area involved in implementing the drift-diffusion model
choice process must exhibit the following properties:
1) its level of activity in each trial at the time of choice should correlate with the total level of
activity predicted by the best-fitting drift-diffusion model;
2) it should receive as an input the computations of the area of the ventromedial prefrontal cortex
associated with computing decision values; and
3) it should modulate activity in the motor cortex in a way that is consistent with implementing
the choice.
They found that activity in two parts of the brain—the dorsomedial prefrontal cortex and the
bilateral intraparietal sulcus—satisfied the three required properties and thus was consistent with
the implementation of the drift-diffusion model.
4. Decision values are computed by integrating information about the attributes associated with each
option and their attractiveness.
Let d_i(x) denote the characteristics of option x for dimension i. The model assumes that v(x) = Σ_i w_i d_i(x) for some set of weights w_i. Consider several aspects of this assumption. First, the decision values
used to guide choices depend on the attributes that are computed for each option at the time of
choice. This implies that the decision value signals, and thus the choice process, take into account
the value of an attribute only to the extent that the brain can take it into account in the
construction of the decision values.
Second, it provides a source of preference heterogeneity across individuals: some people might fail
to incorporate a particular dimension in the decision values, not because they don’t value it, but
because they might not be able to compute it at the time of choice.
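The weighted-attribute assumption and the heterogeneity point can be sketched as follows; the attribute names and weight values are hypothetical:

```python
# Sketch of the weighted-attribute assumption v(x) = sum_i w_i * d_i(x);
# the attribute names and weight values here are hypothetical.

def decision_value(attributes, weights):
    """Integrate attribute levels d_i(x) with weights w_i. An attribute the
    brain does not compute at choice time simply gets weight zero."""
    return sum(weights.get(name, 0.0) * level
               for name, level in attributes.items())

snack = {"taste": 3.0, "health": -1.0}
full_weighting = decision_value(snack, {"taste": 1.0, "health": 1.0})
health_ignored = decision_value(snack, {"taste": 1.0})  # health left out
```

The second call illustrates the heterogeneity point: the same underlying tastes yield a different decision value when one attribute is not computed at the time of choice.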
Activity in the posterior superior temporal gyrus, which has been widely associated with the
computation of semantic meaning, correlated with the value of the semantic attribute but not with
the aesthetic value. The opposite was true for an area of fusiform gyrus that is known to be involved
in computing the visual properties of the stimuli. In addition, activity in the ventromedial prefrontal
cortex correlated with the decision values and received inputs from both areas.
5. The computation and comparison of decision values is modulated by attention.
Attention can affect the choice process in two different ways. First, it might affect how attributes
are computed and how they are weighted in the decision value computation.
This can be incorporated into the model as follows: let a be a variable describing the attentional state at the time of choice. The computed decision value is then given by v(x) = Σ_i w_i(a) d_i(x), so the attribute weights can depend on attention.
Second, attention can also affect how decision values are compared at the time of choice.
The model is identical to the basic drift-diffusion set-up except that the path of the integration at
any particular instant now depends on which option is being attended to. Thus, for example, when
the x option is being attended, the relative decision value signal evolves according to R_t = R_{t–1} + θ(v(x) – v(y)/β) + ε_t, where β measures the attentional bias towards the attended option. We refer to this model as the
“attention drift-diffusion model.” If β = 1, the model is identical to the basic model and choice is
independent of attention, but if β > 1, choices are biased towards the option that is attended
longer.
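One plausible way to implement this bias in a simulation is to discount the unattended option's value by β, a functional form assumed here because it makes β = 1 recover the basic model; the random per-step fixations are also a simplifying assumption:

```python
import random

def addm_trial(v_x, v_y, beta=2.0, theta=0.1, noise=1.0, barrier=10.0,
               p_look_x=0.5, max_steps=10_000, seed=None):
    """Attention drift-diffusion sketch. While fixating one option, the
    unattended option's value is discounted by beta (assumed form, with
    beta = 1 recovering the basic model). Fixations are redrawn at random
    each step with probability p_look_x of looking at x."""
    rng = random.Random(seed)
    r = 0.0
    for t in range(1, max_steps + 1):
        if rng.random() < p_look_x:           # fixating x
            drift = theta * (v_x - v_y / beta)
        else:                                  # fixating y
            drift = theta * (v_x / beta - v_y)
        r += drift + rng.gauss(0.0, noise)
        if r >= barrier:
            return "x", t
        if r <= -barrier:
            return "y", t
    return ("x" if r > 0 else "y"), max_steps

# With equal positive values, mostly attending to x tilts choices toward x.
trials = [addm_trial(2.0, 2.0, p_look_x=0.9, seed=i) for i in range(300)]
share_attended = sum(1 for choice, _ in trials if choice == "x") / len(trials)
```

Note that for negative values the same discounting flips the sign of the drift, so attention would work against the attended option, matching the model's prediction.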
Two properties of the model are worth highlighting.
1) it predicts that exogenous changes in attention (for example, through experimental or
marketing manipulations) should bias choices in favor of the most attended option when its
value is positive, but it should have the opposite effect when the value is negative.
2) the model makes strong quantitative predictions about the correlation between attention,
choices, and reaction times—predictions that can be tested using eye-tracking.
There is evidence for a substantial attention bias in the choice process: options that were fixated on more,
due to random fluctuations in attention, were more likely to be chosen.
Several studies have found that it is possible to bias choices through exogenous manipulations of visual attention.
A critical component of their methodology is the identification of “suspect” choice situations in which there is reason to believe that the subject might have made a mistake.
Neural Foundations for Random Utility Models
The neuroeconomic model provides a neurobiological foundation for random utility models.
However, the two models have one important difference. In the drift-diffusion model, the noise
arises during the process of comparing the computed decision values, and thus it does not reflect
changes in underlying preferences: it is purely computational or process noise. In contrast, random
utility models assume stochastic shocks to the underlying preferences. This difference is important,
because the two models will make different normative predictions about the quality of choices.
Time pressure leads to noisier choices via a single change in the drift-diffusion model parameters: the barriers of the drift-diffusion model (as illustrated in Figure 1) were smaller under time pressure.
The change in the model also provides a mechanism for why subjects might make fewer mistakes
when the stakes are sufficiently high: in those cases, subjects might increase the size of the barriers
significantly in order to slow the choice process and reduce mistakes.
“Wired” Restrictions in the Choice Correspondence
Individual choices will be affected by the observable characteristics of the situation.
A key finding of these studies is that the decision value signals exhibit “range adaptation”: the best
and worst items receive the same decision value, regardless of their absolute attractiveness, and
the decision value of intermediate items is given by their relative location in the scale. This finding
matters for economics because it implies that the likelihood and size of decision mistakes increases
with the range of values that needs to be encoded.
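Range adaptation can be sketched as a simple normalization; the mapping below is a hypothetical illustration of the idea, not the paper's measured encoding:

```python
# Hypothetical sketch of range adaptation: the decision-value signal encodes
# an item's relative position in the current value range, so the best and
# worst items always map to the same endpoints.

def range_adapted_signal(value, v_min, v_max):
    """Map an absolute value onto [0, 1] within the encoded range."""
    if v_max == v_min:
        return 0.5  # degenerate range: assumed convention
    return (value - v_min) / (v_max - v_min)

# Two items one unit apart are well separated in a narrow range...
narrow_gap = range_adapted_signal(6, 5, 7) - range_adapted_signal(5, 5, 7)
# ...but nearly indistinguishable in a wide range, so mistakes get likelier.
wide_gap = range_adapted_signal(6, 0, 100) - range_adapted_signal(5, 0, 100)
```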
Attention, Marketing, and Behavioral Public Policy
Varying visual contrast had a sizable effect on the attention paid to an option, and it biased choices as predicted.
Other candidates for how attention plays a critical role in economic choice include cultural norms
that affect memory retrieval and cognitive patterns; educational interventions that have a similar
effect; and many of the “nudge” or “libertarian paternalistic” policies that have been advocated by behavioral economists.
Novel Insights about Experienced Utility
It is often difficult to disentangle behavioural implications from competing explanations using only
choice data.
Neuroeconomic methods provide an alternative methodology to address this problem: measure
neural activity in areas that are known to encode experienced utility and test the extent to which
the hypothesized effects are present.
De Gustibus Est Disputandum
Subjects who donate more to charities activate more strongly the posterior temporal sulcus at the
time of choice and that the responses in this area modulate activity in the areas of ventromedial
prefrontal cortex that compute decision values. The posterior temporal sulcus has been shown to
play a critical role in characterizing the mental states of others.
This suggests that some of the observed individual differences in the amount of altruism might be
due to cognitive limitations and not to the absence of an altruistic component in experienced utility.
More Complex Decisions: Self Control, Social Preferences, and Norm Compliance
Examples of more complex choices include intertemporal choices involving monetary or pleasure–health tradeoffs; financial decisions in complex environments such as the stock market; choices
involving social preferences; and compliance with prevailing social norms.
Intertemporal Choice
In the basic version of the problem (intertemporal choices), individuals choose between two
options, x and y, in the present, and their choices have consequences on multiple dimensions for
extended periods of time.
To a large extent, the existing evidence suggests that all of the key components of the model for
simple choice are also at work here: choices are made by assigning decision values to each option at
the time of choice; these decision values are computed by identifying and weighting attributes;
decision values are compared using a drift-diffusion model; and all of these processes are
modulated by attention.
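The drift-diffusion comparison described above can be sketched in a few lines: noisy evidence accumulates in favour of one option until it crosses a threshold. This is an illustrative simulation, not a specific published model; the parameter names and values (drift_scale, noise, threshold) are assumptions chosen for the sketch.

```python
import random

def ddm_choice(v_left, v_right, drift_scale=0.02, noise=0.1,
               threshold=1.0, max_steps=100_000):
    """One drift-diffusion trial: noisy evidence accumulates toward the
    higher-valued option until it hits a decision threshold."""
    drift = drift_scale * (v_left - v_right)  # signal = decision-value difference
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift + random.gauss(0.0, noise)
        if evidence >= threshold:
            return "left", step    # chosen option and reaction time (in steps)
        if evidence <= -threshold:
            return "right", step
    return "timeout", max_steps
```

Larger decision-value differences produce faster and more reliable choices; modulation by attention could be captured by scaling the drift toward the attended option.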
The same areas of ventromedial prefrontal cortex that encode decision values in simple choices also do so in more complex situations, such as dietary choices.
In intertemporal choice, the decision values seem to continue to be based on a weighted sum of
attributes, but the attributes all need to be time-dated, and attributes can have different weights at
different times (which allows for time-discounting in the weighting of attributes)
This suggests that decision values are computed by integrating the value of attributes over dimensions and time.
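The weighted sum of time-dated attributes can be written out as a short sketch. The attribute values, weights, and discount factor below are hypothetical numbers used only for illustration.

```python
def decision_value(attributes, weights, discount=0.9):
    """Decision value as a weighted sum of time-dated attributes.

    attributes[t][k]: value of attribute k delivered at period t
    weights[k]:       importance weight of attribute k
    discount:         per-period discount factor (time-discounting
                      enters through the weighting of later attributes)
    """
    total = 0.0
    for t, attrs_at_t in enumerate(attributes):
        for k, x in enumerate(attrs_at_t):
            total += (discount ** t) * weights[k] * x
    return total

# Hypothetical snack: taste payoff now, health cost next period.
snack = [[5.0, 0.0],    # t=0: taste=5, health=0
         [0.0, -3.0]]   # t=1: taste=0, health=-3
value = decision_value(snack, weights=[1.0, 1.0], discount=0.9)
```

With these numbers the snack's value is 5.0 + 0.9 × (−3.0) = 2.3; a lower discount factor would weight the delayed health cost less and raise the decision value further.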
A working assumption is that the grid of attributes and time horizons can be partitioned into two
sets: those attributes at given times that are easily computed, and those attributes at given times
that are considered only if cognitive effort is deployed.
There is evidence that areas of the dorsolateral prefrontal cortex that have been shown to be involved in implementing the type of scarce cognitive processes described above are more active at the time of choice in the self-control group than in the non-self-control group.
Furthermore, the dorsolateral prefrontal cortex modulated the ventromedial prefrontal cortex
decision value signals in the self-control group but not in the non-self-control group.
The Problem of Experienced Utility in Intertemporal Choice
Things are significantly more complicated in the case of intertemporal choice since decisions have
hedonic consequences over extended periods of time.
This implies that experienced utility at each instant depends on the entire history of choices and not
on a single consumption episode.
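One simple way to make this history dependence concrete is a habit-stock formulation, in which utility at each instant is consumption net of a stock built from past consumption. The specific functional form and decay parameter below are illustrative assumptions, not the model described in the text.

```python
def experienced_utility_path(consumption, habit_decay=0.5):
    """Instantaneous experienced utility that depends on the whole
    consumption history through a habit stock (illustrative form):
        u_t = c_t - h_t,  with  h_t = decay * h_{t-1} + (1 - decay) * c_{t-1}
    """
    utilities, habit = [], 0.0
    for c in consumption:
        utilities.append(c - habit)                     # utility net of habit
        habit = habit_decay * habit + (1 - habit_decay) * c  # update the stock
    return utilities
```

Under this form, a constant consumption stream yields declining experienced utility (adaptation): the same episode feels worse the more of it one has already consumed, so the hedonic value of any single episode cannot be read off in isolation.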
Competing Decision Systems in Complex Choice
The Pavlovian controller is triggered by stimuli that elicit automatic “approach or avoid” behaviours. A typical example is the common tendency to move quickly away from stimuli such as snakes and spiders.
The habitual system is more flexible than the Pavlovian system and less flexible than the goal-
directed one. In particular, the habitual system learns to promote actions that have repeatedly
generated high levels of experienced utility in the past over those that have generated lower levels.
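Habitual learning of this kind is commonly modelled as a model-free value update, in which actions that repeatedly deliver high experienced utility accumulate strength. The update rule and learning rate below are illustrative assumptions.

```python
def update_habit_values(q, action, reward, learning_rate=0.1):
    """Nudge the cached value of `action` toward the reward it just
    produced; repeated high rewards raise its habit strength."""
    q[action] += learning_rate * (reward - q[action])
    return q
```

After many rewarded repetitions the cached value approaches the typical reward, so the habitual system comes to favour that action without any forward planning; its limited flexibility follows from the fact that values change only through repeated experience.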
The behavioural implementation of fairness goals or social norms depends on the functioning of elaborate cognitive and neural machinery that is dissociable from the knowledge of what constitutes fair or norm-compliant behaviour.
Economic Implications for Complex Choice
The basic idea is simple: since scarce computational processes are not always deployed correctly,
and are not even available in some cases, decision mistakes can result. In this model, an individual’s
ability to make optimal intertemporal choices depends on the ability to deploy the cognitive control
facilitated by dorsolateral prefrontal cortex processes.
First, the cognitive control processes implemented by the dorsolateral prefrontal cortex are impaired during stress, sleep deprivation, or intoxication, and are depleted in the short term by repeated use.
Second, the lateral prefrontal cortex is the last area of the brain to mature fully, often not until people reach their mid-20s.
Third, and more speculatively, the areas of dorsolateral prefrontal cortex identified in these studies
have also been shown to play a role in cognitive processes such as working memory.