
Session: All of them Exam Guide

Economic Psychology Fall 2017


Hastie and Dawes (H&D), Preface, Ch. 1 – Summary
Preface
 The book compares basic principles of rationality with actual behaviour in making decisions, and finds a systematic discrepancy between the two. This discrepancy is due not to random errors or mistakes but to automatic and deliberate thought processes that influence how decision problems are conceptualized and how future possibilities in life are evaluated.
 The overarching argument is that our thinking processes are limited in systematic ways, and we
review extensive behavioural research to support this conclusion.
Thinking and Deciding
1.1 Decision Making is a Skill
 The book is about decision making, but it is not about what to choose; rather, it is about how we
choose.
 Choosing wisely is a learned skill, which, like any other skill, can be improved with experience.
1.2 Thinking: Automatic and Controlled
 What is thinking? Briefly, it is the creation of mental representations of what is not in the
immediate environment.
 Thinking is probably best conceived of as an extension of perception—an extension that allows
us to fill in the gaps in the picture of the environment painted in our minds by our perceptual
systems, and to infer causal relationships and other important “affordances” of those
environments.
 There are basically two types of thought processes: automatic and controlled.
 Pure association is the simplest type of automatic thinking; much of our thinking is associational.
 At the other extreme is controlled thought, in which we deliberately hypothesize a class of
objects or experiences and then view our experiences in terms of these hypothetical possibilities.
Controlled thought is “what if” thinking. Other formal thinking types are: visual imagination,
creation, and scenario building.
 The prototype of automatic thinking is the thinking involved when we drive a car. We respond
to stimuli not present in the environment—for example, the expectation that the light will be red
before we get to the intersection.
 When automatic thinking occurs in less mundane areas, it is often termed intuition.
 In contrast, a prototype of controlled thought is scientific reasoning. While the original ideas
may arise intuitively, they are subjected to rigorous investigation by consideration of alternative
explanations of the phenomena the ideas seem to explain.
 Occasionally, the degree to which thinking is automatic rather than controlled is not clear until
the process is examined carefully. The situation is made more complicated by the fact that any
significant intellectual achievement is a mixture of both automatic and controlled thought
processes.

1.3 The Computational Model of the Mind

 The computational model of the mind is based on the assumption that the essence of thinking
can be captured by describing what the brain does as manipulating symbols.
 The “cognitive revolution” in psychology really got under way (in the 1960s) when the first
computer programming languages were applied to the task of summarizing and mimicking the
mental operations of people performing intellectual tasks like chess playing, logical deduction,
and mental arithmetic.
 Many aspects of human thinking, including judgment and decision making, can be captured
with computational models. The essential parts of these models are symbols and operations
that compare, combine, and record (in memory) the symbols.
 The other half of the cognitive theory is a description of the elementary information processes
that operate on the representations to store them, compare them, and transform them in
productive thought.
 Although we are aware of (and can report on) some aspects of cognitive processing, mostly the
symbolic products of hidden processes such as the digit ideas in mental arithmetic, most of the
cognitive system is unconscious.
o So, the first insight from cognitive science is that we can think of intellectual
achievements, like judging and deciding, as computation and that computation can be
broken down into symbolic representations and operations on those representations.
o In addition, we emphasize that both automatic and controlled modes of thinking can be
modelled as computations in this sense.
 The early outlines of the cognitive system included three kinds of memory stores:
I. sensory input buffers that hold and transform incoming sensory information over a span
of a few seconds;
II. a limited short-term working memory where most of conscious thinking occurs; and
III. a capacious long-term memory where we store concepts, images, facts, and procedures.
 Modern conceptions distinguish between several more processing modules and memory
buffers, all linked to a central working memory.
o In the multi-module model, there are input
and output modules, which encode
information from each sensory system
(relying on one or more memory buffers) and
generate motor responses.
o A Working Memory, often analogized to the
surface of a workbench on which projects
(problems) are completed, is the central hub
of the system, and it comprises a central
executive processor, a goal stack that organizes processing, and at least two short-term
memory buffers that hold visual and verbal information that is currently in use.
o The other major part of the system is a Long-Term Memory that contains all sorts of
information including procedures for thinking and deciding.
 Two properties of the memory stores will play major roles in our explanations for judgment and
decision-making phenomena.
I. First, the limited capacity of Working Memory will be used to explain some departures
from optimal, rational performance.
 James March and Herbert Simon (1958) introduced the concept of bounded
rationality in decision making, by which they meant approximately optimal
behaviour, where the primary explanation for departures from optimal is that we
simply don’t have the capacity to compute the optimal solutions because our
working memory imposes limits on how much information we can use.
II. Second, we will often refer to the many facts and procedures that have been learned
and stored in long-term memory.
1.4 Through Darkest Psychoanalytic Theory and Behaviourism to Cognition
 Until the 1950s, psychology was dominated by two traditions: psychoanalytic theory and
behaviourism.
 Psychoanalytic explanations were even applied to one of the most important psychopathologies of the 20th century, Nazism.
 According to the behaviouristic approach, in sharp contrast to psychoanalysis, the reinforcing
properties of the rewards or punishments that follow a behaviour determine whether the
behaviour will become habitual.
 Behaviourism rests on the “law of effect,” which maintains that the influence of consequences on behaviour is automatic.
 Marvin Levine (1975) demonstrated that participants’ conscious beliefs were virtually perfect predictors of their responses, their particular error patterns, and the time it took them to learn.
 Most psychologists today accept the compelling assumption that ideas and beliefs cause
behaviour and that cognitive theories are the best route to understanding and improving
important behaviours.
1.5 Quality of Choice: Rationality
 A rational choice can be defined as one that meets four criteria:
I. It is based on the decision maker’s current assets. Assets include not only money, but
also physiological state, psychological capacities, social relationships, and feelings.
II. It is based on the possible consequences of the choice.
III. When these consequences are uncertain, their likelihood is evaluated according to the
basic rules of probability theory.
IV. It is a choice that is adaptive within the constraints of those probabilities and the values
or satisfactions associated with each of the possible consequences of the choice.
 In fact, there are common decision-making procedures that have no direct relationship to these
criteria of rationality. They include the following:
I. Habit, choosing what we have chosen before;
II. Conformity, making whatever choice (you think) most other people would make or imitating the choices of people you admire (Boyd and Richerson [1982] have pointed out that imitation of success can be adaptive in general, though not, for example, if it is imitation of the drug use of a particular rock star or professional athlete you admire for his or her professional achievements); and
III. Choosing on the basis of (your interpretation of) religious principles or cultural mandates.
Because reality is not contradictory, contradictory thinking is irrational thinking. A proposition about
reality cannot be both true and false.

1.6 The Invention of Modern Decision Theory


 It began in Renaissance Italy, for example, in the analysis of the practice of gambling by scholars
such as Girolamo Cardano (1501–1576), a true Renaissance man who was simultaneously a
mathematician, physician, accountant, and inveterate gambler.
 The most recent impetus for the development of a rational decision theory, however, comes
from a book published in 1947 entitled Theory of Games and Economic Behaviour by
mathematician John von Neumann and economist Oskar Morgenstern.
 Von Neumann and Morgenstern provided a theory of decision making according to the principle
of maximizing expected utility.
 Traditional economists, looking at the aggregate behaviour of many individual decision makers
in broad economic contexts, are satisfied that the principle of maximizing expected utility does
describe what happens.
o There are good reasons to start with the optimistic hypothesis that the rational, expected
utility theory and the descriptive—how people really behave—theories are the same. After
all, our decision-making habits have been “designed” by millions of years of evolutionary
selection and, if that weren’t enough, have been shaped by a lifetime of adaptive learning
experiences.
 In contrast, psychologists and behavioural economists studying the decision making of
individuals and organizations tend to reach the opposite conclusion from that of traditional
economists. Not only do the choices of individuals and social decision-making groups tend to
violate the principle of maximizing expected utility; they are also often patently irrational.
o Those behavioural scientists who conclude that the rational model is not a good descriptive
model have also criticized the apparent descriptive successes of the rational model
reported by Becker and others. The catch is that by specifying the theory in terms of utility
rather than concrete values (like dollars), it is almost always possible to assume that some
sort of maximization principle works and then, ex post, to define utilities accordingly.



Perloff (2012) Section 17.0-17.3 – Summary

 Risk situation in which the likelihood of each possible outcome is known or can be estimated and
no single possible outcome is certain to occur.
 Five main topics are examined:
I. Degree of Risk. Probabilities are used to measure the degree of risk and the likely profit
from a risky undertaking.
II. Decision Making Under Uncertainty. Whether people choose a risky option over a non-
risky one depends on their attitudes toward risk and on the expected payoffs of each
option.
III. Avoiding Risk. People try to reduce their overall risk by not making risky choices, taking
actions to lower the likelihood of a disaster, combining risks, insuring, and in other ways.
IV. Investing Under Uncertainty. Whether people make an investment depends on the
riskiness of the payoff, the expected return, attitudes toward risk, the interest rate, and
whether it is profitable to alter the likelihood of a good outcome.
V. Behavioural Economics of Risk. Because some people do not choose among risky options
the way that traditional economic theory predicts, some researchers have switched to new
models that incorporate psychological factors.
Important Terms and Quick Overview
 A particular event has a number of possible outcomes. To describe how risky this activity/event
is, we need to quantify the likelihood that each possible outcome occurs.
 A probability is a number between 0 and 1 that indicates the likelihood that a particular outcome
will occur.
 If we have a history of the outcomes for an event, we can use the frequency with which a
particular outcome occurred as our estimate of the probability.
 Often we do not have a history that allows us to calculate the frequency. We use whatever
information we have to form a subjective probability, which is our best estimate of the
likelihood that an outcome will occur.
 A probability distribution relates the probability of occurrence to each possible outcome.
 Expected value The probability weighted average of the values of each possible outcome (see the worked sketch at the end of this overview).
 The variance is the probability weighted average of the squares of the differences between the
observed outcome and the expected value.
o Holding the expected value constant,
the smaller the standard deviation,
the smaller the risk.
 Expected utility is the probability weighted
average of the utility from each possible
outcome.
 Fair bet A wager with an expected value of
zero.
 Risk averse unwilling to make a fair bet.
 Risk neutral indifferent about making a fair bet.
 Risk preferring willing to make a fair bet.
 A person whose utility function is concave picks the less risky choice if both choices have the
same expected value.
 Risk premium The amount that a risk averse person would pay to avoid taking a risk.
 Someone who is risk neutral has a constant marginal utility of wealth: Each extra dollar of
wealth raises utility by the same amount as the previous dollar. With constant marginal utility of
wealth, the utility curve is a straight line in a utility and wealth graph.
 In general, a risk neutral person chooses the option with the highest expected value, because
maximizing expected value maximizes utility. A risk-neutral person chooses the riskier option if it
has even a slightly higher expected value than the less risky option. Equivalently, the risk
premium for a risk-neutral person is zero.
 Individuals and firms often reduce their overall risk by making many risky investments instead of
only one. This practice is called diversifying.
 The extent to which diversification reduces risk depends on the degree to which various events
are correlated over states of nature.
 Diversification can eliminate risk if two events are perfectly negatively correlated.
 Diversification reduces risk even if the two events are imperfectly negatively correlated,
uncorrelated, or imperfectly positively correlated.
 In contrast, diversification does not reduce risk if two events are perfectly positively correlated.
 Fair insurance A bet between an insurer and a policyholder in which the value of the bet to the
policyholder is zero.
 When fair insurance is offered, risk averse people fully insure.
 An insurance company sells policies only for risks it can diversify.
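
To make the expected value, variance, and expected utility definitions above concrete, here is a minimal Python sketch for a made-up gamble; the payoffs, the initial wealth, and the square-root utility function are illustrative assumptions, not figures from Perloff.

import math

# Assumed gamble: win 100 with probability 0.5, win 0 otherwise; initial wealth 100.
probs = [0.5, 0.5]
payoffs = [100, 0]
wealth0 = 100

expected_value = sum(p * x for p, x in zip(probs, payoffs))                      # 50
variance = sum(p * (x - expected_value) ** 2 for p, x in zip(probs, payoffs))    # 2500
std_dev = math.sqrt(variance)                                                    # 50

def utility(w):
    return math.sqrt(w)   # concave utility, i.e. a risk-averse decision maker

expected_utility = sum(p * utility(wealth0 + x) for p, x in zip(probs, payoffs))  # about 12.07
utility_of_expected = utility(wealth0 + expected_value)                           # about 12.25

# Because the utility function is concave, expected utility is below the utility of the
# expected value, so the sure amount is preferred and the risk premium is positive.
certainty_equivalent = expected_utility ** 2                       # inverse of the sqrt utility
risk_premium = (wealth0 + expected_value) - certainty_equivalent   # about 4.3
print(expected_value, variance, std_dev, expected_utility, risk_premium)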

Serrano and Feldman: Preferences and Utility – Summary

1. Introduction
 A utility function is a numerical representation of how a consumer feels about alternative
consumption bundles: if she likes the first bundle better than the second, then the utility
function assigns a higher number to the first than to the second, and if she likes them equally
well, then the utility function assigns the same number to both.
 When there are two goods any consumption bundle can easily be shown in a standard two-
dimensional graph, with the quantity of the first good on the horizontal axis and the quantity of
the second good on the vertical axis. All the figures in this lesson are drawn this way.
 In the shopping centre of life some bundles are feasible or affordable for the consumer; these
are the ones which her budget will allow. Other bundles are non-feasible or unaffordable; these
are the ones her budget won’t allow.
2. The Consumer’s Preference Relation
 If there are two goods, X is a vector (x1, x2), where x1 is the quantity of good 1 and x2 is the
quantity of good 2.
 If the consumer likes X and Y equally well, we say she is indifferent between them. We write X ∼
Y in this case, and ∼ is called the indifference relation.
 The ≽ relation is sometimes called the weak preference relation.
Assumptions on preferences
 Assumption 1: Completeness. For all consumption bundles X and Y, either X ≻ Y, or Y ≻ X, or X ∼ Y. That is, the consumer must like one better than the other, or like them equally well.
 Having a complete ordering of bundles is very important for our analysis.
 Assumption 2: Transitivity. This assumption has four parts:
I. First, transitivity of preference: if X ≻ Y and Y ≻ Z, then X ≻ Z.
II. Second, transitivity of indifference: if X ∼ Y and Y ∼ Z, then X ∼ Z.
III. Third, if X ≻ Y and Y ∼ Z, then X ≻ Z.
IV. Fourth and finally, if X ∼ Y and Y ≻ Z, then X ≻ Z.
 The transitivity of preference assumption is meant to rule out irrational preference cycles.
 The transitivity of indifference assumption (that is, if X ∼ Y and Y ∼ Z, then X ∼ Z) makes
indifference curves possible.
 Assumption 3: Monotonicity. We normally assume that goods are desirable, which means the
consumer prefers consuming more of a good to consuming less. That is, suppose X and Y are two
bundles of goods such that (1) X has more of one good (or both) than Y does and (2) X has at
least as much of both goods as Y has. Then X ≻ Y.
o Some important consequences of monotonicity are the following: indifference curves
representing preferences over two desirable goods cannot be thick or upward sloping.
Nor can they be vertical or horizontal.
 In Figure 2.3 below we show a
downward sloping thin indifference
curve, which is what the monotonicity
assumption requires.

 Another implication of the assumptions of transitivity (of indifference) and monotonicity is that
two distinct indifference curves cannot cross.
 Assumption 4: Convexity for indifference curves. This assumption means that averages of consumption bundles are preferred to extremes. Consider two distinct bundles X and Y on one indifference curve. Then an average of them, for example the bundle made up of 1/2 times X plus 1/2 times Y, that is X/2 + Y/2, is preferred to either X or Y. This is what we normally assume to be the case.
 We call preferences well behaved when indifference curves are downward sloping and convex.
4. The Consumer’s Utility Function
 Imagine that we assign a number to each bundle. For example, we assign the number u(X) =
u(x1,x2) = 5, to the bundle X = (x1,x2); we assign the number u(Y) = u(y1,y2) = 4, to Y =(y1,y2); and
so on.
 We say that such an assignment of numbers to bundles is a consumer’s utility function if:
I. First, u(X) > u(Y) whenever X ≻ Y.
II. And second, u(X) = u(Y) whenever X ∼ Y.
 Our consumer’s utility function is said to be an “ordinal” utility function rather than a “cardinal”
utility function.
o An ordinal statement only gives information about relative magnitudes; for instance, “I
like Tiffany more than Jennifer.”
o A cardinal statement provides information about magnitudes that can be added,
subtracted, and so on. For instance, “Billy weighs 160 lbs. and Johnny weighs 120 lbs.”
o Today, for the most part, we treat utility simply as an ordinal magnitude.
 For one individual, differences or ratios of utility numbers from different bundles generally do
not matter, and comparisons of utilities across different individuals have no meaning.
 If we start with a utility function representing my preferences, and modify it with what’s called
an order-preserving transformation, then it still represents my preferences. All this is summed
up in the following statement:

If u(X) = u(x1, x2) is a utility function that represents the preferences of a consumer, and f is
any order-preserving transformation of u, the transformed function f (u(X )) = f (u(x 1, x2)) is
another utility function that also represents those preferences.

 What is the connection between indifference curves and utility functions? The answer is that we
use indifference curves to represent constant levels of utility.
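
As a quick check of the ordinal-utility idea summarized above, the following sketch (with invented bundles and an invented utility function, not ones from the text) verifies that an order-preserving transformation ranks bundles exactly the same way as the original utility function.

import math

def u(bundle):
    x1, x2 = bundle
    return x1 * x2               # an assumed utility function over two goods

def f_of_u(bundle):
    return math.log(u(bundle))   # an order-preserving (strictly increasing) transformation of u

bundles = [(1, 8), (2, 3), (4, 4), (5, 2)]

ranking_u = sorted(bundles, key=u, reverse=True)
ranking_f = sorted(bundles, key=f_of_u, reverse=True)

# The two rankings are identical: only the ordering of the utility numbers matters,
# not their magnitudes, differences, or ratios.
print(ranking_u == ranking_f)   # True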

Hastie and Dawes (H&D), Ch. 2 – Summary
What is Decision Making?
2.1 Definition of a Decision
 A decision, in scientific terms, is a response in a situation that is composed of three parts:
I. First, there is more than one possible course of action under consideration in the choice
set.
II. Second, the decision maker can form expectations concerning future events and outcomes
following from each course of action, expectations that can be described in terms of
degrees of belief or probabilities.
III. Third, the consequences associated with the possible outcomes can be assessed on an
evaluative continuum determined by current goals and personal values.
 The problem with this definition is that it includes so many situations that it could almost serve
as a definition of intentional behaviour, not just decision behaviour.

2.2 Picturing Decisions


 The conventions of the decision tree
diagram are that the situation is
represented as a hypothetical map of
choice points and outcomes that lead
to experienced consequences, like a
roadmap representing forks in a road
and the objects that are located along
the road.
 On the far right-hand side of the
diagram, we list the consequences that
are associated with choice points and
events in the decision tree.
 We will also express the decision maker’s degrees of uncertainty in judging the possible
outcomes that occur at the event nodes in the diagram in numerical terms. Here we will use a
probability scale (from 0.00, could not possibly occur, to 1.00, certain to occur).

2.3 Decision Quality, Revisited


 The crucial first step in understanding any decision is to describe the situation in which the
decision occurs.
 Then the diagram prompts us to solve the challenging problem of quantifying the uncertainties
and values that define the decision. Solving the problem of inferring how another person has
conceptualized a decision situation is usually the toughest part of psychological research or
applied decision analysis.
 If we believe that we have captured our subject’s situation model in a decision tree diagram, it is
relatively easy to calculate the decision that leads to the highest expected outcome by applying a
rule that follows from decision theory.
 This rule is called the rational expectations principle, and it is usually summarized as an
equation:

Expected Utility(alternative) = Σ p(consequence i) × u(consequence i)

 The equation prescribes that for each alternative course of action under consideration (each
major branch of the decision tree), we need to weight each of the potential consequences by its
probability of occurrence, and then add up all the component products to yield a summary
evaluation called an expected utility for each alternative course of action (each initial left-hand
branch).
 Note that these calculations assume we can describe the decision process in terms of numerical
probabilities and values and that arithmetic operations (adding, multiplying) describe the
decision maker’s thought processes. The calculation also assumes that the decision maker
thoroughly considers all (and only) the options, contingencies, and consequences in the decision
tree model of the situation.
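
A minimal sketch of the rational expectations (expected utility) calculation described above, applied to a toy decision tree; the two alternatives, their probabilities, and their utilities are hypothetical numbers chosen only for illustration.

# Each alternative is a list of (probability, utility) pairs over its possible consequences.
decision_tree = {
    "accept plea bargain": [(1.00, -2.0)],                  # a certain moderate loss
    "go to trial":         [(0.30, 0.0), (0.70, -10.0)],    # acquittal vs. conviction
}

def expected_utility(consequences):
    # Weight each consequence's utility by its probability and sum the products.
    return sum(p * u for p, u in consequences)

evaluations = {option: expected_utility(cons) for option, cons in decision_tree.items()}
best_option = max(evaluations, key=evaluations.get)

print(evaluations)   # {'accept plea bargain': -2.0, 'go to trial': -7.0}
print(best_option)   # 'accept plea bargain'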

2.4 Incomplete Thinking: A Legal Example


 People do not appear to engage in the thorough, consistent thought process that is demanded
by the decision tree representation when they make these kinds of decisions in everyday life,
even when they are in the jury box in a trial where their decision will have serious
consequences.
 Rather, people seem to focus on one or two nodes and reason extensively about those, but
incompletely about the whole tree. Typically, people focus on the gains and losses associated
with the decision they initially believe is most attractive, but ignore the gains and (especially) the
losses associated with the other alternatives.
 A decision maker’s thoughts are dominated by his or her initial impression, a phenomenon
referred to as a primacy effect or confirmatory hypothesis testing.
 People usually do not exhibit the systematic kind of reasoning demanded by decision theory and
summarized in the decision tree representation.

2.5 Over-Inclusive Thinking: Sunk Costs


 Rationally, sunk costs should not affect decisions about the future.
 When we behave as if our non-refundable expense is equivalent to a current investment, we are
honouring a sunk cost.
 Honouring sunk costs is irrational.
 The descriptive, psychological point is that we have a habit of paying too much attention to past
losses and costs when we make decisions about the future. Even in the context of our discussion
of justifications of sunk cost thinking in terms of future consequences, there is ample evidence
that we give too much weight to sunk costs in many practical decisions.
 To conclude on a practical note, the social problems that arise after abandoning a sunk cost can
be ameliorated by a type of conceptual framing. The framing consists of explaining that one is
not forsaking a project or enterprise, but rather wisely refusing “to throw good money after
bad.”
 This “good money after bad” framing focuses the listener’s attention on the present as the
status quo and phrases the abandonment of a sunk cost as the avoidance of a sure loss (which is
good). In contrast, honouring a sunk cost involves framing a past state as the status quo and
abandoning it as the acceptance of a sure loss (which is bad).

2.6 The Rationality of Considering Only the Future
 In general, the past is relevant, but only for estimating current probabilities and the desirability
of future states. It is rational to conclude that a coin that has landed heads in 19 of 20 previous
flips is probably biased, and that therefore the probability it lands heads on the 21st flip is
greater than 1/2. It is not rational to estimate the probability of landing heads on the 21st toss by
assigning a probability to the entire pattern of results including those that have already occurred.
 Rational estimation of probabilities and rational decision making resulting from this estimation
are based on a very clear demarcation between the past and the future.



Hastie and Dawes (H&D), Ch. 3 – Summary
A General Framework for Judgment
3.1 A Conceptual Framework for Judgment and Prediction
 This chapter is an introduction to the psychology of judgment, the human ability to infer,
estimate, and predict the character of unknown events.
 Within psychology, a conceptual framework has been developed to deal with our judgments
and expectations concerning events and outcomes of possible courses of action.
 The framework is called the Lens Model, and it was invented by an Austrian-American
psychologist named Egon
Brunswik.
 The framework is divided into
two halves, one representing
the psychological events
inside the mind of the person
making a judgment and the
other representing events
and relationships in the “real
world” in which the person is
situated. The framework
forces us to recognize that a
complete theory of judgment
must include a representation
of the environment in which
the behaviour occurs. We refer to it as a framework, because it is not a theory that describes the
details of the judgment process; rather, it places the parts of the judgment situation into a
conceptual template that is useful by itself and can be subjected to further theoretical analysis.
 The left side of the Lens Model diagram summarizes the relationships between the true, to-be-
judged state of the world, called the criterion, and the cues that may point to that state of the
world.
 The right-hand side of the lens diagram is the psychological judgment process part of the
framework. It refers to the inferences that a person makes to integrate information conveyed by
the cues so as to form an estimate, prediction, or judgment of the value of the criterion. The
overarching path in the diagram (labelled “achievement”) represents the judge’s ability to
estimate the to-be-judged criterion accurately.

3.2 Research With the Lens Model Framework


 The Lens Model was invented by psychologists for use in research, so it can be interpreted as a
blueprint for a method to analyse judgment processes.
I. Once a judgment has been selected for study, the first step for the researcher is to
identify and measure the cues on which the judge relies.
II. The second step in the analysis is the creation of a model of the events on the left side of
the diagram. Often, a linear regression model can be used to summarize the criterion–
cue relationships in terms of the many correlations between the criterion and each of the
cues that are related to it and might be used by a judge to infer the criterion.
In this analysis, the correlation coefficient (or a related statistic) is used to summarize the
strength of the relation between the criterion and a cue (the ecological validity of the
cue) and between the cue and the judgment (the cue utilization coefficient or, more
informally, the psychological impact of the cue on the judgment).
III. The third step in research shifts over to the right-hand side of the diagram and involves
inventing and testing models of the psychological process of cue utilization.
 The Lens Model approach analyses the judgment by calculating an algebraic model to provide a
summary of the weights placed on the cue values for each case so as to predict the judge’s
(physician’s, admissions officer’s) judgments. The weights are based on the correlation coefficients summarizing the linear dependency of the judgment on each cue; with everything else equal, the higher the correlation is, the greater the weight will be.
 The model can be extended to include nonlinear relationships.
 If we had criterion values for our sample of judgments, we could also calculate a summary model
for the left-hand side of the Lens Model diagram. In many applications to actual judgment tasks,
however, it is difficult to obtain criterion values.
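
The correlational statistics named above can be written down directly; the sketch below uses fabricated criterion values, cue values, and judgments only to show how ecological validity, cue utilization, and achievement would each be computed as a correlation.

import numpy as np

criterion = np.array([3.0, 7.0, 5.0, 9.0, 4.0, 6.0])   # true to-be-judged values (fabricated)
cue       = np.array([2.0, 6.0, 6.0, 8.0, 3.0, 5.0])   # one observable cue (fabricated)
judgments = np.array([4.0, 6.0, 7.0, 9.0, 3.0, 5.0])   # the judge's estimates (fabricated)

ecological_validity = np.corrcoef(criterion, cue)[0, 1]        # criterion-cue relation
cue_utilization     = np.corrcoef(judgments, cue)[0, 1]        # cue-judgment relation
achievement         = np.corrcoef(criterion, judgments)[0, 1]  # the overarching path

print(ecological_validity, cue_utilization, achievement)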

3.3 Capturing Judgment in Statistical Models


 Historically, some of the earliest psychological research on judgment addressed the question of
whether trained experts’ predictions were better than statistically derived, weighted averages of
the relevant predictors.
 In all studies evaluated, the statistical method provided more accurate predictions (or the two
methods tied).
 The practical lesson from these studies is that in many judgment situations, we should ask the
experts what cues to use, but let a mechanical model combine the information from those cues
to make the judgment. The finding that linear combination is superior to global judgment is
general; it has been replicated in diverse contexts.

3.4 How Do Statistical Models Beat Human Judgment?


 The mathematical principle is that both monotone relationships of individual variables and
monotone (“ordinal”) interactions are well approximated by linear models. Two factors
“interact” when their combined impact is greater than the sum of their separate impacts, but
they do not interact in the sense that the direction in which one variable is related to the
outcome is dependent upon the magnitude of the other variable.
 The principle of nature that partly explains the success of the linear statistical model is that most
interactions that exist are, in fact, monotone. It is easy to hypothesize crossed interactions, but
extraordinarily difficult to find them in everyday situations, especially in the areas of psychology
and social interactions. Because the optimal amount of any variable does not usually depend
upon the values of the others, what interactions there are tend to be monotone.
 The psychological principle that might explain the predictive success of linear models is that
people have a great deal of difficulty in attending to two or more non-comparable aspects of a
stimulus or situation at once.
 Sometimes the format of the information will determine the salient anchor value, as when a
bias is introduced by placing one type of information (e.g., test scores) in a prominent location,
such as first in a list of applicant information.
 Given that monotone interactions can be well approximated by linear models (a statistical
fact), it follows that because most interactions that exist in nature are monotone and because
people have difficulty integrating information from non-comparable dimensions, linear models
will outperform clinical judgment.
 A further, more speculative conjecture is that not only is the experienced world fairly linear, but
our judgment habits are also adaptively linear.
 The mind is in many essential respects a linear weighting and adding device. In fact, much of
what we know about the neural networks in the physical brain suggests that a natural
computation for such a “machine” is weighting and adding, exactly the fundamental processes
that are well described by linear equations.

3.5 Practical Implications of the Surprising Success of the Linear Model


 Unit or random linear models are termed improper because their coefficients (weights) are not
based on statistical techniques that optimize prediction. The research indicates that such
improper models are almost as good as proper ones. When it comes to the coefficients in a
linear model, the signs on the coefficients are much more important than the specific numerical
weights.
 We would also point out that human judges relying on intuition are not very competent about
adjusting for differences in the metrics of the scales that convey numerical information.
 Another effective, though also “improper,” approach is to fit a linear model to a large sample of a
human judge’s own judgments and then to use that model-of-the-judge instead of the original
judge. This method is called bootstrapping (not to be confused with the “statistical bootstrap”),
and it almost invariably outperforms human experts, including the person who was used as the
source of judgments for the original model.
 But most of the success can probably be attributed to the remarkable robustness and power of
(even improper) linear models that derive from their mathematical properties and their match to
the underlying structure of the events in the to-be-judged environment.
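
A rough sketch of the bootstrapping idea, fitting a linear model-of-the-judge and then using that model in place of the judge; the cue values and judgments are fabricated, and ordinary least squares via numpy stands in for whatever fitting procedure a real study would use.

import numpy as np

# Each row is one hypothetical case: [test_score, gpa, interview_rating].
cues = np.array([
    [600, 3.1, 4],
    [720, 3.8, 2],
    [550, 2.9, 5],
    [680, 3.5, 3],
    [640, 3.3, 4],
], dtype=float)

judge_ratings = np.array([5.0, 8.0, 4.0, 7.0, 6.0])   # the expert's holistic judgments

# Fit the model-of-the-judge: judgment is approximated by an intercept plus a weighted sum of cues.
X = np.column_stack([np.ones(len(cues)), cues])
weights, *_ = np.linalg.lstsq(X, judge_ratings, rcond=None)

def model_of_the_judge(cue_profile):
    # For new cases, the fitted linear model replaces the judge's global judgment.
    return weights[0] + float(np.dot(weights[1:], cue_profile))

print(model_of_the_judge(np.array([700.0, 3.6, 3.0])))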

3.6 Objections and Rebuttals


 The conclusion that random or unit or “bootstrapped” weights outperform global judgments of
trained experts is not a popular one with experts, or with people relying on them. All of these
findings have had almost no effect on the practice of expert judgment.

 There are many reasons for the resistance to actuarial, statistical judgment models:
I. First of all, they are an affront to the narcissism (and a threat to the income) of many
experts.
II. Another objection is to maintain that the outcomes better predicted by linear models are
all short-term and trivial
III. A final objection is the one that says, “10,000 Frenchmen can’t be wrong.” Experts have
been revered—and well paid—for years for their “It is my opinion that ... ” judgments.
IV. But there is also a situational reason for doubting the inferiority of global, intuitive
judgment. It has to do with the biased availability of feedback. When we construct a
linear model in a prediction situation, we know exactly how poorly it predicts. In contrast,
our feedback about our own intuitive judgments is flawed. Not only do we selectively
remember our successes, we often have no knowledge of our failures.
 A judgment that some people are poor tippers leads to inferior service, which in turn leads to
poor tips—thereby “validating” the waiter’s judgment. (Not all prophecies are self-fulfilling—
there must be a mechanism, and intuitive judgment often provides one. Intuition is also a
possible mechanism for some self-negating prophecies, such as the feeling that one is
invulnerable no matter how many risks one takes while driving.)
 We want to predict outcomes that are important to us. It is only rational to conclude that if one
method (a linear model) does not predict well, something else may do better. What is not
rational—in fact, it’s irrational—is to conclude that this “something else” necessarily exists and,
in the absence of any positive supporting evidence, that it’s intuitive global judgment.
 One important lesson of the many studies of human judgment is that outcomes are not all that
predictable; there is a great deal of “irreducible uncertainty” in the external world, on the left-
hand side of the Lens Model diagram.
 People find linear models of judgment particularly distasteful in assessing other people.

Hastie and Dawes (H&D), Ch. 4: The Fundamental Judgment Strategy; Anchoring and
Adjustment – Summary
4.1 Salient Values
 Often, our estimates of frequencies, probabilities, and even the desirability of consequences are
vague. In ambiguous situations, an “anchor” that serves as a starting point for estimation can
have dramatic effects.
 What happens is that people will adjust their estimates from this anchor but nevertheless remain
too close to it.
 When we sequentially integrate information in this manner, we usually “underadjust.”
 The finding of such insufficiency is general, and is
related to the credibility of the original anchor and
the amount of relevant information that the judge
has available in memory or at hand.
 The anchor-and-adjust process appears in many
judgments, and it is especially clear when the
anchor selected is “obviously” arbitrary.
4.2 Anchoring and (Insufficient) Adjustment
 This serial judgment process is a natural result of
our limited attention “channels” and the selective
strategies we have developed to deal with that
cognitive limit.
 Just as we can only focus on one location in a
visual scene or listen to one conversation at a
crowded cocktail party, we attend to one item of
evidence at a time as we make estimates.
 We can summarize the judgment process with a
flowchart diagram as in Figure 4.1.
 The flowchart shows that the judgment process is complex and there are many places where biases might enter the system. The basic bias is that the process is prone to underadjustments or primacy effects—information considered early in the judgment process tends to be overweighted in the final judgment.
(Figure 4.1: A flowchart representing the cognitive processes that occur in an anchor-and-adjust judgment.)
 The anchor produces its bias through two cognitive routes:
I. There is a conservatism in the adjustment process near the output end of the entire
procedure. As the inclination to respond with higher or lower values occurs when new
information is considered, it overweights what is already known.
II. There is a biasing effect of the anchor (more accurately, of the concepts associated with the anchor) on the kind of information that is considered subsequently, especially when information is retrieved from memory to make the judgment.
 The true underadjustment process plays a large role only when the person making the estimate
selects his or her own anchor value.
 One indicator of sources of bias in the underlying cognitive process is the source of the anchor.

 When the time course of a judgment process can be mapped out, the most commonly observed
sequential effect is a primacy effect, best interpreted as anchoring on the initial information
considered and then underadjusting for subsequent information.
 The most common anchor, of course, is the status quo. While we are not constrained mentally—
as we are physically—to begin a journey from where we are, we often do. Changes in existing
plans or policies more readily come to mind than do wholly new ones, and even as new
alternatives close to the status quo are considered, they, too, can become anchors.
 This generalization is true of organizations as well as of individuals.
 When people are asked to produce a monetary equivalent (through the selling price procedure),
they anchor on the dollar amounts of the outcomes, and they insufficiently adjust on the basis of the probabilities involved.
 But when the same people think of winning versus losing, they anchor on the probability of
success; higher probabilities are more desirable. And then they insufficiently adjust their value
judgment on the basis of the dollars to be won or lost.
 Robust preference reversals also provide an instructive example of how the scale on which a
response is made can bias people to choose one anchor over another (favoring dollar amounts,
when the response is a price, but favoring probabilities, when the response is a choice).
Furthermore, the results challenge standard economic theory, which equates the utility
(personal value) of an object with the amount of money people are willing to pay for it.
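
One simple way to picture anchoring with insufficient adjustment is as a sequential updating rule in which each new piece of evidence moves the estimate only part of the way from the current value toward the value that the evidence implies. The toy model below is not H&D's model; the adjustment rate and the evidence values are assumptions chosen only to show how under-adjustment produces a primacy effect.

def anchor_and_adjust(evidence, adjustment_rate=0.2):
    # The first item of evidence serves as the anchor; each later item moves the
    # estimate only a fraction (< 1) of the way toward the value it implies.
    estimate = evidence[0]
    for value in evidence[1:]:
        estimate += adjustment_rate * (value - estimate)   # under-adjustment
    return estimate

evidence_high_first = [90, 80, 30, 20]
evidence_low_first = list(reversed(evidence_high_first))

# Same evidence, different order: the final estimate stays too close to the early items
# (the unweighted mean of the evidence is 55), i.e. a primacy effect.
print(anchor_and_adjust(evidence_high_first))   # about 65.1
print(anchor_and_adjust(evidence_low_first))    # about 44.9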
4.3 Anchoring on Ourselves
 When we make judgments about someone we do not know well, we engage in an egocentric
process that some researchers have called “projection.”
 Subtle social judgment begins with anchoring on the explicit contents of the message—we might
call this the first interpretation— and only given the benefit of deeper thinking can that
interpretation be adjusted away from the surface meaning to appreciate the true sarcastic
message.
 The basic tendency to see ourselves in others is well established, and it is clear that an anchor-
and-adjust process is responsible for many of these effects.
4.4 Anchoring the Past in the Present
 Anchoring and adjustment can also severely affect our retrospective personal memory. While
such memory is introspectively a process of “dredging up” what actually happened, it is to a
large extent anchored by our current beliefs and feelings.

Hastie and Dawes (H&D), Ch. 5: Judging Heuristically – Summary
5.1 Going Beyond the Information Given
 A good account of many underlying cognitive judgment processes is provided by assuming that
we have a cognitive toolbox of mental heuristics stored in long-term memory.
 Heuristics are efficient, but sometimes inaccurate procedures for solving a problem—in this case,
providing rough-and-ready estimates of frequencies, probabilities, and magnitudes.
 These heuristic mechanisms are usually constructed from more primitive mental capacities such
as our similarity, memory, and causal judgment processes.
 These cognitive tools are acquired over a lifetime of experience.
 They tell us what information to seek or select in the environment and how to integrate several
sources of information to infer the characteristics of events that are not directly available to
perception. We learn these cognitive tools from trial-and-error experience.
 The notion is that when we encounter a situation in which a judgment is needed, we select a tool
from our cognitive toolbox that is suited to the judgment.
 For many everyday judgments, we use heuristic strategies because they are relatively “cheap” in
terms of mental effort and, under most everyday conditions, they provide good estimates.
5.2 Estimating Frequencies and Probabilities
 A single psychophysical function relating objective quantities to subjective estimates is
characteristic of almost all memory-based frequency-of-occurrence estimates.
 At the low end of the objective frequency scale, estimates tend to be overestimates. As the
number of to-be-estimated events increases, our subjective judgments err in the direction of
underestimation.
 The almost perfect accuracy for estimates of up to five items was taken as a measure of the span of apprehension.
 When the number of to-be-estimated events exceeds about 10 items, however, the tendency to
underestimate the objective total appears as in the memory-based function.
 When there are more than about seven items (often cited as the capacity of short-term,
conscious working memory), a more deliberate estimation strategy is used to make the
judgment.
5.3 Availability of Memories
 Many of the judgments we make are memory-based in the sense that we don’t have the “data”
necessary to make the judgment right in front of us, but we have learned information in the past,
now stored in long-term memory, that is relevant to the judgments.
 This simple form of associative thinking is called the availability heuristic by researchers, and we
rely on ease of retrieval to make a remarkable variety of judgments.
 The operations of the availability heuristic can be broken down into seven subprocesses or
subroutines (summarized in Figure 5.2):
I. the original acquisition or storage of relevant information in long-term memory;
II. retention, including some forgetting, of the stored information;
III. recognition of a situation in which stored
information is relevant to making a
judgment;
IV. probing or cueing memory for relevant
information;
V. retrieval or activation of items that
match or are associated with the
memory probe;
VI. assessment of the ease of retrieval
(perhaps based on the amount recalled,
quickness of recall, or subjective
vividness of the recalled information);
and
VII. an estimate of the to-be-judged
frequency or probability based on sensed
ease of retrieval.
 There are several points in the process at which biases might perturb the final judgment: The
experienced sample of events stored in long-term memory (the information that is available to
be remembered) might be biased, as in our example of suicide versus homicide estimates.
 The memory cue that is the basis for retrieval might be biased to produce a biased sample of
remembered events, even if the population of events in memory is not itself unrepresentative.
 Events may vary in their salience or vividness, so that some more salient events dominate the
assessment of ease of retrieval. Any of these factors, individually or jointly, may introduce
systematic biases into memory-based judgments.
5.4 Biased Samples in Memory
 Rationally defensible deductive logic involves a specification from the universal to the particular,
but much less reliable inductive logic involves generalization from the particular to the universal.
 However, we are prone to do the exact opposite: we under-deduce and over-induce.
5.5 Biased Sampling From Memory
 It is obvious that if the sample of information stored in memory is biased (perhaps because it is
filtered through the popular media), subsequent memory-based judgments will be biased, too.
But other aspects of the memory process can produce systematic biases as well.
 The emotion evoked by an event may have a further effect on memory and, hence, memory-
based judgments: When we are in a particular emotional state, we have a tendency to
remember events that are thematically congruent with that state.
 When we have experience with a class of phenomena (objects, people, or events), those with a
salient characteristic most readily come to mind when we think about that class.
 It follows that if we estimate the proportion of members of that class who have the distinctive
characteristic, we tend to overestimate it.
 Our estimate will be higher than the one we would make if we deliberately coded whether each
member of that class did or did not have that characteristic as we encountered it (e.g., by
keeping a running tally with a mechanical counter).
 Selective retrieval from memory can produce large misestimates of proportions, leading to a
misunderstanding of a serious social problem, and finally to biases in important decisions like
those required of voters, jurors, and policy makers.
5.6 Availability to the Imagination
 Availability to the imagination influences our estimates of frequency. The problem arises—just
as with the availability of actual instances in our experience or availability of vicarious instances
—in that this availability is determined by many factors other than actual frequency.
 The resultant ease of imagining biases our estimates of frequencies, and hence our judgments of
probability based on such frequencies.
5.7 From Availability to Probability and Causality
 Tversky and his colleagues explained the subadditivity of probabilities by proposing that support
for each proposition was recruited from the physicians’ imaginations. The complementary
subevent descriptions provide effective cues to generate reasons for the specific outcomes.
 The cognitive processes underlying these overestimates probably fall in the middle of the
continuum from memory retrieval to imaginative generation, though retrieval is surely part of
the explanation: Follow-up studies showed that these subadditive estimates were highly
correlated with the respondents’ ability to recall specific contributions, implying that memory
availability was a component of the judgment process.
 Both the subadditive and the superadditive findings and other clever demonstrations of
retrieval fluency verify the prominent role of availability as an underlying cognitive process.
Some of the most important practical implications concern the manner in which citizens (and
their political leaders) set agendas for the investment of public resources.
5.8 Judgment by Similarity: Same Old Things
 The second elementary cognitive process that is often heuristically substituted for magnitude,
frequency, and probability judgments is similarity.
 We have a common tendency to make judgments and decisions about category membership based on the similarity between our conception of the category and our impression of the to-be-classified object, situation, or event.
 As in the case of availability-based judgments, similarity slips into the judgment process
automatically and dominates spontaneous judgments of category membership.
 The primary behavioral “signature” of relying on similarity is that people miss important
statistical or logical structure in the situation and ignore relevant information.
 This overreliance on similarity occurs even when people simultaneously acknowledge that the
information they are using is unreliable, incomplete, and non-predictive.
5.9 Representative Thinking
 The purpose of these (book) examples was to demonstrate
I. that category membership judgments are usually based on the degree to which
characteristics are representative of or similar to prototypical category exemplars,
II. that representativeness does not necessarily reflect an actual contingency, and
III. that probability estimates or confidence in judgments are related to similarity and not
necessarily to the deeper structure of the situations about which we are making
judgments.
 We find these early studies to be quite
convincing on the point that people (over-)
rely on similarity when making many
probability judgments, perhaps because our
own self-reflections when solving the
original problems are completely consistent
with the representativeness-similarity
interpretation.
 The last piece of cognitive theory that we
will need for our discussion of category
classification is a model of the similarity
judgment process.
 The most general model of this process is
called the contrast model, and it says that
we perceive similarity by making (very rapid)
comparisons of the attributes of the two or more entities whose similarity is being evaluated.
 A useful model of this process is to suppose that our global impression of similarity arises from a
quick tabulation of the number of attributes that “match” for two entities versus the number
that “mismatch.”
 In many cases, once an object is classified into a category, an association-based judgment is
automatically made.
 Sometimes our associations with categories are morally troublesome or just flat-out irrational.
 Perhaps the most troublesome characteristic of these racial, gender, and religious stereotypes is
that they automatically evoke emotional reactions that affect our behavior toward members of
the category.
 The basic problem with making probability or confidence judgments on the basis of
representative characteristics is that the schema accessed may in fact be less probable, given
the characteristic, than one not accessed.
 This occurs when the schema not accessed has a much greater extent in the world than the
accessed one.
5.10 The Ratio Rule
 In contrast to representative judgments, accurate judgments can be made by using the simplest
rules of probability theory. Let c stand for a characteristic and S for a schema (category).
 The degree to which c is representative of S is indicated by the conditional probability p(c|S)—
that is, the probability that members of S have characteristic c. (In the present examples, this
conditional probability is high.)
 The probability that the characteristic c implies membership in S, however, is given by the
conditional probability p(S|c), the probability that people with characteristic c are members of S,
which is the inverse of p(c|S). Now, by the basic laws of probability theory,

p(c|S) / p(S|c) = p(c) / p(S)

 This relationship is called the ratio rule—the ratio of inverse probabilities equals the ratio of
simple probabilities.
 In the present context of inferring category membership, this simple ratio rule provides a
logically valid way of relating p(c|S) to p(S|c).
 To equate these two conditional probabilities in the absence of equating p(c) and p(S) is simply
irrational.
 Representative thinking, however, does not reflect the difference between p(c|S) and p(S|c) and
consequently introduces a symmetry in thought that does not exist in the world.
 There are still experimental conditions under which base rates are neglected.
 Another situation in which we attend to base rates occurs if people ascribe some causal
significance to discrepant rates. When they can see the causal relevance of the base rates, they
often incorporate them into their reasoning.
 We seem to reason more competently in statistical problems of all types when we conceptualize
the underlying relationships in terms of concrete numbers, frequency formats, rather than more
abstract proportions and probabilities.
 Naive subjects do not distinguish between p(A|B) and p(B|A) in many circumstances, and when
given one conditional probability, they infer the other without reference to the base rates p(A)
and p(B), which must be considered according to the ratio rule.
 Our natural habit is to think associatively about what is salient to us in the immediate situation
or what is immediately available from memory. It takes willpower and training to escape from
the “dominance of the given” and to actually think about events and relationships that are not
salient and explicit in our experience.
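
To make the ratio rule concrete, here is a small numerical sketch; the category, the characteristic, and all three probabilities are invented for illustration. Even when a characteristic c is highly representative of a schema S, so that p(c|S) is high, p(S|c) can be tiny if S is rare relative to c.

# Assumed example: S = "is a librarian", c = "is shy and tidy".
p_c_given_S = 0.90   # c is highly representative of S
p_S = 0.002          # base rate of the schema (librarians are rare)
p_c = 0.30           # base rate of the characteristic (shy, tidy people are common)

# Ratio rule: p(c|S) / p(S|c) = p(c) / p(S), so
p_S_given_c = p_c_given_S * p_S / p_c

print(p_S_given_c)   # 0.006, despite p(c|S) = 0.90
# Equating p(S|c) with p(c|S) while ignoring the base rates p(S) and p(c)
# is exactly the representative-thinking error described above.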



Hastie and Dawes (H&D), Ch. 6: Explanation-Based Judgments – Summary
6.1 Everyone Likes a Good Story
 Human beings, perhaps uniquely among all animals, create mental models of the situations they
are in, and those situation models often take the form of stories.
 Like every other fundamental characteristic of our minds, story construction plays a role in
judgment and decision making. Scenarios, or narratives, are representations of temporally
ordered sequences of events glued together by causal relationships.
 Usually, narratives come in the form of simple linear causal chains.
6.2 The Conjunction Probability Error (Again)
 Tversky and Kahneman (1983) term the belief that a specific combination of events can be more
likely than parts of that combination the conjunction fallacy (A more precise designation is
conjunction probability error).
 Because a combination of causes can yield an effect with higher probability than each cause alone, the belief that an effect is “due to” a combination of things does not constitute a conjunction probability fallacy.
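
A one-line check of why the conjunction error is an error (the probabilities below are made up): for any two events A and B, p(A and B) = p(A) × p(B given A), which can never exceed p(A) alone.

# Assumed probabilities for two components of a scenario.
p_A = 0.30
p_B_given_A = 0.80

p_A_and_B = p_A * p_B_given_A    # 0.24, necessarily no larger than p_A = 0.30
print(p_A_and_B <= p_A)          # True for any legitimate choice of probabilities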
6.3 Judging from Explanations
 When we imagine the future, the content of our imagination tends to conform to our intellectual
schemas. Thus, many of our scenarios are conjunctions of specific events that we believe are
highly probable. Again, such belief is fairly automatic.
 The scenario construction process and its consequences for judgment are summarized in Figure
6.1. Story construction, at least the perception of causal relationships between events, is so
natural that we would list it as an automatic cognitive capacity along with our capacities for
frequency registration, memory recognition, and similarity judgments.
 Experience is a temporal sequence of events, and that is the cognitive format we use to summarize the past and to project to the future.
 Many prefabricated scenarios are available to our imaginations either because they correspond to stereotyped scripts or because they are available through particular past experiences.
 Availability here, however, must refer to availability to our imagination rather than availability in fact—because it is logically impossible for us to experience conjunctions of events more frequently than we experience the individual components of these conjunctions.
 Belief in the likelihood of scenarios is associated with belief in the likelihood of their
components; believable components yield believable scenarios (and often vice versa as well).
Complete stories, detailed stories, and sensible stories (with reference to other stories or to our
general beliefs about human motivation and natural causality) are influential stories.
 Scenarios are even more believable if the components form a good gestalt because they fit into
or exemplify some familiar narrative schema.
 In fact, the probability of any conjunction of events, even if they don’t compose a plausible
narrative, is often overestimated. Painstaking behavioral studies suggest that this overestimation
is usually the result of an anchor-and-adjust estimation strategy; people anchor on a typical
component event probability and then under-adjust.
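A tiny numeric sketch of that anchor-and-under-adjust pattern (the component probability, number of events, and size of the adjustment are all invented):

```python
# Ten hypothetical component events, each with probability 0.90.
p_component = 0.90
n = 10

true_conjunction = p_component ** n     # ≈ 0.35: the whole story is unlikely
# An anchor-and-(under-)adjust judge starts near the component probability
# and shaves off only a little (the 0.15 adjustment is invented).
anchored_estimate = p_component - 0.15  # 0.75: a large overestimate

print(round(true_conjunction, 2), anchored_estimate)
```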
6.4 Legal Scenarios: The Best Story Wins the Courtroom
 The central cognitive process in juror decision making is story construction— the creation of a
narrative summary of the events under dispute.
 The central claim of this hypothesis is that the story the juror constructs, often quite deliberately
to “piece together the puzzle of historical truth,” determines the juror’s verdict.
 Note that the evidence at trial is almost never presented to the jurors in the chronological order
of the events in the original crime, so the jurors must reorganize evidence, as they comprehend
it, to produce memory structures that reflect the original chronological order of events
 More important, though, is the finding that jurors who choose different verdicts have reliably
different mental representations.
 When jurors construct summaries of the evidence they are hearing and seeing in legal disputes,
there are usually at least two competing interpretations (otherwise the dispute would not have
gotten to court—over 90% of criminal cases and civil suits are plea-bargained or settled before they get
to court, presumably because one side or the other did not have the evidence to construct a
plausible story).
 Different jurors are likely to construct different stories, and the stories lead to different
verdicts. At least, after jurors reach different verdicts, they have different stories in mind. This
situation is summarized:
1. The juror constructs a story summary of the evidence (and there are usually only a few, two
or three at most, alternate stories for any case);
2. The juror learns something about the possible verdicts from the judge’s instructions at the
end of the trial; then the juror makes a decision by “classifying” the story into the best-fitting
verdict category.
 Jurors with more complete, more detailed, and more unique stories were more confident than
those whose stories were less compelling
 Jurors were likeliest to convict the defendant when the prosecution evidence was presented in
Story Order and the defense evidence was presented in Witness Order (78% of the jurors judged
the defendant guilty), and they were least likely to convict when the prosecution evidence was in
Witness Order and the defense was in Story Order (31% said guilty).
 A more subtle aspect of scenario-based judgment occurs because stories tend to exist, sui
generis, without multiple interpretations or versions.
 The perceived strength of one side of the case depended on the order of evidence both for that side and for the other side of the case. This finding implies that the jurors attempted to construct more than one story summary of the evidence and that the uniqueness or relative goodness of the best-fitting story is one basis for confidence in their judgment.
 The construction of multiple stories is almost forced on the decision maker by the traditions of
our adversarial trial system. However, we suspect that in most everyday situations, when story
construction is the basis for decisions, people stop after they have constructed one story.
 During construction and evaluation of a story, people do consider alternative versions of parts of
the stories. (This form of reasoning is called counter-factual thinking, because it involves
imagining alternatives to “factual” reality that might have occurred—what we have referred to
as Piagetian “scientific reasoning” elsewhere)
 Jurors who reasoned, “If there had been more guards, the rape would still have occurred” - In
legal contexts, this kind of reasoning is called the “but for” test of causality; a philosopher would
probably describe it as testing if a candidate cause (few security guards) is a necessary condition
for an effect to occur.
6.5 Scenarios About Ourselves
 The notion of narrative truth is consistent with the rationales behind many forms of
psychotherapy. These therapies assume that clients’ (narrative) representation of their lives is
the key to understanding their maladaptive behaviors. The therapist’s reconstruction of the
client’s life story into a more coherent and adaptive narrative is the primary goal of therapy.
 Autobiographical memories tend to be dominated by our current attitudes, beliefs, and feelings
about ourselves.
6.6 Scenarios About the Unthinkable
 The authors advocate a probabilistic approach to reducing the danger of nuclear war and other societal and personal risks: small differences in probability over small intervals can yield large differences over broad ones.
 Scenario thinking can once again get in the way of a probabilistic assessment. The desirable
scenario for most of us would be an agreement among all countries capable of producing nuclear
weapons resulting in technological control of such weapons to the point that they could not be
used in haste or by accident.
 We exaggerate the probability of confrontation and of total agreement while we neglect policies
that would reduce the probability of nuclear war each year by some small amount.
 The big problem with scenario thinking is that it focuses the thinker on one, or a few, causal
stories and diverts the decision maker from a broader, more systematic representation of the
decision situation. Scenario thinking grossly overestimates the probability of the scenarios that
come to mind and underestimates long-term probabilities of events occurring one way or
another. Furthermore, there is a general tendency for memories and inferences to be biased so
as to be consistent with the themes and theories underlying the scenarios.
 Rational analysis requires a systematic, comprehensive representation of situations and
alternative outcomes, in order to assess the important underlying probabilities of events.
6.7 Hindsight: Reconstructing the Past
 People who know the nature of events falsely overestimate the probability with which they
would have predicted them.
 We are “insufficiently surprised” by experience. One result is that we do not learn effectively
from it.
 Hindsight effects only occurred under conditions where persuasive causal explanations could be generated by the participants to “glue” the causes to the outcomes.
 This hindsight bias is not always reducible to a knew-it-all-along attempt to appear more
omniscient than we are.
 Sometimes motivational factors probably apply
 Sometimes, when we believe in change, we recall change even when it has not occurred. In
order to make our recollection compatible with this belief, we resort (again not consciously) to
changing our memory of the earlier state. We can, for example, reinforce our belief in a non-
existent change for the better by simply exaggerating how bad things were before the change.
 Moods also affect recall.
Hastie and Dawes (H&D), Ch. 7: Chance and Cause – Summary
7.1 Misconceptions About Chance
 Probability theory is a language we can use to describe the world or, more precisely, to describe
the relationships among our beliefs about the world.
7.2 Illusions of Control
 Not only do people behave as if they can control random events; they also express the conscious
belief that doing so is a skill, which, like other skills, is hampered by distractions and improves
with practice.
7.3 Seeing Causal Structure Where It Isn’t
 A pernicious result of representative and scenario-based thinking is that they make us see
structure (non-randomness) where none exists.
 This occurs because our naïve conceptions of randomness involve too much variation— often to
the point where we conclude that a generating process is not random, even when it represents
an ideal random trial.
 Representativeness enters in because when we are faced with the task of distinguishing between
random and non-random “generators” of events, we rely on our stereotype of a random process
and use similarity to judge or produce a sequence.
 Thus, when we encounter a truly random sequence, we are likely to decide it is non-random
because it does not look haphazard enough—because it shows less alternation than our
incorrect stereotype of a random sequence.
 The gambler’s fallacy—the notion that “chances of [independent, random] events mature” if
they have not occurred for a while.
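A quick simulation (fair-coin probabilities only; no data from the book) shows why the gambler’s fallacy is a fallacy for independent events:

```python
import random

random.seed(1)

# Simulate a fair coin and look at what happens right after three tails in a row.
flips = [random.random() < 0.5 for _ in range(200_000)]   # True = heads

after_three_tails = [
    flips[i]
    for i in range(3, len(flips))
    if not (flips[i - 1] or flips[i - 2] or flips[i - 3])
]

# Independent random events have no memory: the proportion of heads after
# a run of tails stays close to 0.5, not "due" to come up.
print(round(sum(after_three_tails) / len(after_three_tails), 3))
```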
 The strategy of analysing individual clusters and looking for correlations with some (any)
environmental cause is called the Texas sharpshooter fallacy by epidemiologists, after the story
about a rifleman who shoots a cluster of bullet holes in the side of a barn and then draws a
bull’s-eye around the holes.
7.4 Regression Toward the Mean
 A final problem with representative thinking about events with a random (unknown causes)
component is that it leads to non-regressive predictions.
 Extreme scores tend to be followed by less extreme scores. Regression toward the mean is inevitable for scaled variables that are not perfectly correlated.
 It is only when one variable is perfectly predictable from the other that there is no regression. In
fact, the (squared value of the) standard correlation coefficient can be defined quite simply as
the degree to which a linear prediction of one variable from another is not regressive. The
technical definition of regression toward the mean is the difference between a perfect
relationship (+/–1.00) and the linear correlation:
Regression = perfect relationship – correlation
 The rational way of dealing with regression effects is to “regress” when making predictions.
Then, if there is some need or desire to evaluate discrepancy (e.g., to give awards for
“overachievement” or therapy for “underachievement”), compare the actual value to the
predicted value—not with the actual value of the variable used to make the prediction.
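A minimal sketch of what a properly regressive prediction looks like in standardized (z-score) units; the correlation and scores below are invented:

```python
# Best linear prediction in z-score units: predicted z_y = r * z_x,
# which is always closer to the mean than an extreme z_x.
def regressive_prediction(z_predictor, correlation):
    return correlation * z_predictor

r = 0.5              # hypothetical correlation between midterm and final exam
z_midterm = 2.0      # a student two SDs above the mean on the midterm

z_predicted_final = regressive_prediction(z_midterm, r)   # 1.0, not 2.0
z_actual_final = 1.4                                       # invented outcome

# Evaluate "over/underachievement" against the regressed prediction,
# not against the original extreme score.
print(round(z_actual_final - z_predicted_final, 2))   # 0.4: above expectation
print(round(z_actual_final - z_midterm, 2))           # -0.6: a misleading comparison
```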
 Regression toward the mean is particularly insidious when we are trying to assess the success of some kind of intervention designed to improve the state of affairs.
 The worst case scenarios for understanding the effects of interventions occur when the
intervention is introduced because “we’ve got a problem.”
 The chances are, the interventions are going to show improvements, and it is almost certain that
some or most of the effect will be due to regression toward the mean.

Hastie and Dawes (H&D), Ch. 8.1-8.2: Thinking Rationally about Uncertainty – Summary
8.1 What to Do About the Biases
 One of the goals of the book is to teach analytical thinking about judgment processes.
 The best way we know to think systematically about judgment is to learn the fundamentals of
probability theory and statistics and to apply those concepts when making important
judgments.
 Adding, keeping track, and writing down the rules of probabilistic inference explicitly are of great
help in overcoming the systematic errors introduced by representative thinking, availability,
anchor-and-adjust, and other biases.
8.2 Getting Started Thinking in Terms of Probabilities
 Modern probability theory got its start when wealthy noblemen hired mathematicians to advise
them on how to win games of chance at which they gambled with their acquaintances.
 Perhaps the fundamental precept of probabilistic analysis is the exhortation to take a bird’s-eye,
distributional view of the situation under analysis and to define a sample space of all the possible
events and their logical, set membership interrelations.
 The systematic listing is unlikely to make us confident about precise probabilities, but it will
remind us just how uncertain that future is and keep us from myopically developing one scenario
and then believing in it too much.
 What did we learn:
1. We introduced the basic set membership relationships that are used to describe events to
which technical probabilities can be assigned.
2. We introduced four kinds of situations to which we might want to attach probabilities:
a. situations, like conventional games of chance (e.g., throwing dice), where idealized
random devices provide good descriptions of the underlying structure and where logical
analysis can be applied to deduce probabilities;
b. well-defined “empirical” situations where statistical relative frequencies can be used to
measure probabilities (e.g., our judgments about kinds of students at the University of
Chicago);
c. moderately well-defined situations, where we must reason about causation and
propensities (rather than relative frequencies—e.g., predicting the outcome of the next
U.S. presidential election), but where a fairly complete sample space of relevant events
can be defined with a little thought; and
d. situations of huge ignorance, where even a sample space of relevant events is difficult to construct, and where there seem to be no relevant frequencies.
 Many errors in judging and reasoning about uncertainty stem from mistakes that are made at
the very beginning of the process, when comprehending the to-be-judged situation. If people
could generate veridical representations of the to-be-judged situations and then keep the
(mostly) set membership relationships straight throughout their reasoning, many errors would
be eliminated.
 Many times judgments under uncertainty are already off-track even before a person has tried to
integrate the uncertainties.
 The primary advice about how to make better judgments under uncertainty is focused on
creating effective external (diagrammatic and symbolic) representations of the situation being
judged.
Hastie and Dawes (H&D), Ch. 8.4 - end: Thinking Rationally about Uncertainty – Summary
8.4 Testing for Rationality
 The conditions necessary to conclude that a judgment is inaccurate are relatively straightforward:
1. We need to have some measurable criterion event or condition in mind that is the target of
the judgment;
2. We need to be sure the person making the judgment is in agreement with us on the nature
of the target and is trying to estimate, predict, or judge the same criterion value that we
have in mind; and
3. We also want to be sure that the judge is motivated to minimize error in the prediction and
that the “costs” of errors are symmetric so the judge will not be biased to over-or
underestimate the criterion.
 If we are sure that a collection of judgments is incoherent, we can be sure that some are also
inaccurate, though we often cannot say exactly which of the individual judgments are in error.
 Once we have committed ourselves to using logic, mathematics, and decision theory as the
standards to evaluate rationality in judgments and choices, there is much more work to be done
to evaluate rationality in practice.
 First, it is not always obvious how to represent a decision situation objectively so that rational
principles can be applied. Furthermore, it is often difficult to specify exactly what an actor’s goals
are in a situation, and most rational analysis requires knowing what the actor is trying to
“maximize” to define a rational standard for evaluation.
 Second, it is not always appropriate to focus on the short-run performance of a fully informed
person with plenty of time to think in an ideally quiet environment. It may well be that the
optimal, ideally rational judgment calculation is not the adaptively best judgment process under
more realistic conditions.
 “Fast-and-frugal” algorithms or heuristics for judgments and choices may be more robust,
sturdier, and have better survival value than optimal calculations that are superior only when
lots of information, computational capacity, and time are available.
8.5 How to Think About Inverse Probabilities
 One remedy against confusions is to shift to systematic symbolic representations. Translating
each to-be-judged situation into probability theory notation and then carefully applying basic
rules from probability theory can help.
 It is difficult for many people to think without words. In fact, some eminent thinkers maintain
that it is virtually impossible.
“Disciples should be on their guard against the seduction of words and sentences and their
illusive meaning, for by them the ignorant and dull-witted become entangled and helpless as an
elephant floundering around in deep mud.” – Lankavatara Sutra
 For most problems the authors recommend decision trees and probability trees because they are
more generally applicable and they are more useful for organizing numerical information
relevant to decision problems.
8.6 Avoiding Subadditivity and Conjunction Errors
 Subadditivity involves estimating that the probability of a subset, nested event is greater than the probability of a superset, superordinate event in which the subset event is nested.
 The problem is termed subadditivity because the probability of the whole is judged to be less than that of the sum of its parts—in the case of the conjunction fallacy, less than that of a single part.
8.7 The Other Side of the Coin: Probability of a Disjunction of Events
 Just as we tend to overestimate the probability of conjunctions of events (to the point of committing the conjunction probability fallacy), we tend to underestimate the probability of disjunctions of events.
 There seem to be two reasons for this:
1. Our judgments tend to be made on the basis of the probabilities of individual components;
as illustrated, even though those probabilities may be quite low, the probability of the
disjunction may be quite high. We attribute this error primarily to the anchor-and-(under-)
adjust estimation process.
2. Any irrational factors that lead us to underestimate the probabilities of the component
events — such as difficulty of imagining the event — may lead us to underestimate the
probability of the disjunction as a whole.
 Rationally, of course, disjunctions are much more probable than are conjunctions.
 There is evidence for a disjunction probability fallacy comparable to the conjunction probability
error—such a fallacy consisting of the belief that a disjunction of events is less probable than a
single event comprising it.
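A small illustration (the probabilities and the independence assumption are mine, not the book’s) of how large a disjunction of individually unlikely events can be:

```python
# Ten hypothetical, independent component events, each with probability 0.10
# (say, ten separate ways a project could be delayed).
p_component = 0.10
n = 10

# Anchoring on any single low component probability suggests "unlikely," but
# the disjunction -- at least one of the events occurring -- is close to 2/3.
p_disjunction = 1 - (1 - p_component) ** n
print(round(p_disjunction, 2))   # 0.65
```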
8.8 Changing Our Minds: Bayes’ Theorem
 A very common judgment problem arises when we receive some new information about a
hypothesis that we want to evaluate, and we need to update our judgment about the likelihood
of the hypothesis.
 The famous and useful formula for updating beliefs about a hypothesis (e.g., that an event is true or will occur) given evidence is called Bayes’ theorem after Thomas Bayes, the British clergyman who derived it algebraically in his quest for a rational means to assess the probability that God exists given the (to him) abundant evidence of God’s works.
 What systematic errors do people make as they try to update their beliefs about an event when
they receive new information relevant to the judgment?
 One error is the failure to consider the alternative hypothesis—ignoring the probability that the evidence would be observed even if the hypothesis were false.
 A second error is to ignore the base rates of occurrence of simple events
 When the problem statement links the base rate information more strongly to the outcomes in the situation, especially when causal relationships make the connection, people are more likely to incorporate the base rates into their judgments.
 The authors speculate that causal scenario–based reasoning may be an intuitive way to keep track of the most important relationships among events—important when we need to make predictions, diagnoses, or just update our “situation models.”
 If the person only uses the formula to organize his or her thinking (but not to calculate), we expect improvements from:
1. identification of incomplete and ambiguous descriptions of the judgment problem,
2. consideration of nonobvious information necessary to make the calculation, and
3. motivation to search for specific information and to think about focal hypothesis–
disconfirming information
 The authors recommend thinking about the situation in terms of frequencies and the use of diagrams to represent the to-be-judged situation and to guide information search, inferences, and calculations.
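A minimal sketch of Bayes’ theorem in a frequency format; the base rate, hit rate, and false-alarm rate are invented numbers for a diagnostic-test example, not figures from the book:

```python
base_rate = 0.01      # p(condition)
hit_rate = 0.90       # p(positive | condition)
false_alarm = 0.05    # p(positive | no condition)

# Frequency framing: imagine 10,000 people.
population = 10_000
with_condition = population * base_rate                        # 100 people
true_positives = with_condition * hit_rate                     # 90
false_positives = (population - with_condition) * false_alarm  # 495

# Bayes' theorem: p(condition | positive) = true positives / all positives
posterior = true_positives / (true_positives + false_positives)
print(round(posterior, 3))   # ≈ 0.154, far below the 0.90 hit rate
```

Laying the problem out as counts makes the base rate hard to ignore and keeps the two inverse probabilities, p(positive | condition) and p(condition | positive), from being confused.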
8.9 Statistical Decision Theory
 How should we use judgments to decide whether or not to take consequential actions? The
normative “should do” answer is provided by statistical decision theory.
 The answer to the “Should I take action?” question depends on these probabilities (relating your
current knowledge and the true condition you’re trying to infer) and how much you care about
each of the possible outcomes.
 If we know how we value the outcomes, we can work backward and calculate the threshold
probability that prescribes when we should shift from inaction to action, to maximize those
values.
 Often, we cannot increase the accuracy of a diagnosis or other judgment, but we can trade off
the two types of errors (and “corrects,” too).
 If misses are most costly, we can lower our threshold for action on the judgment dimension and
reduce misses (but at the cost of more false alarms); if false alarms are the costly error, we can
move the decision threshold up and reduce that error (trading it for more misses, of course).
 In most circumstances, we should recognize we are stuck with trade-offs, proceed with a sensible
discussion of what we value, and then set a decision threshold accordingly.
 If we do face these trade-offs, we need to try to value the various judgment-outcome
combinations and then apply statistical decision theory to set a proper decision threshold.
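A hedged sketch of that threshold logic; the payoffs are invented utility numbers, and the threshold is found by brute-force comparison of expected values rather than by a closed-form rule:

```python
def expected_value(p_condition, value_if_present, value_if_absent):
    return p_condition * value_if_present + (1 - p_condition) * value_if_absent

# Hypothetical payoffs (arbitrary utility units):
hit, false_alarm = 50, -10        # outcomes if we act
miss, correct_reject = -100, 0    # outcomes if we do not act

# Lowest probability of the condition at which acting beats inaction.
threshold = next(
    p / 1000 for p in range(1001)
    if expected_value(p / 1000, hit, false_alarm)
    >= expected_value(p / 1000, miss, correct_reject)
)
print(threshold)   # 0.063: with misses this costly, act even at low probability
```

Raising the cost of false alarms (or lowering the cost of misses) pushes the threshold up, which is exactly the miss versus false-alarm trade-off described above.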
 The task gets more daunting when we must perform an analysis across stakeholders with
different personal values, as we must in any organizational or societal policy analysis.
8.10 Concluding Comment on Rationality
 If a scientific theory cannot state when an event will occur, a skeptic might ask what good it is.
 The answer is that insofar as we are dealing with mental events and decisions of real people in
the booming, buzzing confusion of the real world, we can neither predict nor control them
perfectly.
Hastie and Dawes (H&D), Ch. 9: Evaluating Consequences – Fundamental Preferences –
Summary
9.1 What Good is Happiness?
 Many people would say that the goal of decision making is to get outcomes that will make the
decision maker happy.
 When decisions are driven by the “pursuit of happiness,” it is not the experiences of pleasure
and pain that are most important. What is most important at the time of decision is our prediction of what will make us happy after we make the decision. Daniel Kahneman has called
this anticipated satisfaction “decision utility,” to contrast it with “experienced utility”
 Psychologists are just beginning to uncover the processes that underlie subjective feelings of
pleasure and pain, what we’re calling experienced utility.
 The principle that “good things satiate and bad things escalate” can be visualized with the simple
graph in Figure 9.1.
 When the good, satiating characteristics (+) are added to the bad, escalating characteristics (-),
the result is a single-peaked function that has a maximum value of a moderate amount (the
dotted line in Figure 9.1). Net welfare (positive combined with negative) is maximized at
moderate amounts.
 Coombs and Avrunin (1977) have proven that if
1. Good characteristics satiate (the function relating goodness to amount having a slope that is
positive and decreasing) and
2. Bad things escalate (the function relating bad characteristics to amount having a slope that
is negative and decreasing—becoming more negative), and
3. The negative function is more rapidly changing than the positive one, then
4. The resulting sum of good and bad experiences (that is, the sum of the good characteristics
function and the bad characteristics function) will always be single-peaked.
 In fact, a single-peaked function results from any additive combination where the sum starts off positive from 0,0 and the absolute value of the slope of the utility for bad characteristics is greater everywhere than the absolute value for good characteristics.
 Furthermore, the “flat maximum” nature of this peak in Figure 9.1 is common. It is often very difficult to discriminate among the neighbouring “good experiences.”
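A minimal sketch (the functional forms are invented) of the “good things satiate, bad things escalate” combination producing a single-peaked net function:

```python
import math

def good(amount):   # satiating positive component: slope positive but decreasing
    return 10 * math.sqrt(amount)

def bad(amount):    # escalating negative component: slope increasingly negative
    return -0.5 * amount ** 1.5

net = [(a, good(a) + bad(a)) for a in range(0, 51)]
best_amount, best_value = max(net, key=lambda pair: pair[1])
print(best_amount)   # a moderate amount (7 here), not the largest available
```

The values just around the peak differ very little, which is the “flat maximum” mentioned above.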
 The important point here is that many
experiences exhibit a single-peaked preference function relating the amount of the experience
(food consumed, days on vacation) to the associated pleasure (pain); in other words, we have
personal “ideal points” on amount-of- experience dimensions.
 An implication of the peak-end principle (overall evaluations of an experience are dominated by its most intense moment and its ending) is duration neglect: People tend to be surprisingly insensitive to the length of the experience.
 Given what we know about feelings of well-being in general, our best advice is not to
overemphasize predicted happiness in making decisions, but rather deliberately to consider

33
Session: All of them Exam Guide
other aspects of decision alternatives and their consequences.
 It is important to realize that happiness and related feelings of well-being are not the only
considerations in the evaluation of consequences. Many times we focus on other aspects of the
expected consequences of our actions, and sometimes we appear to make decisions in a non-
consequentialist manner.
 And, of course, impulsivity can play an important role in such decisions.
9.2 The Role of Emotions in Evaluations
 The authors think four concepts will be useful to solve the problem of a universal definition of
emotion: emotions, feelings, moods, and evaluations.
 We define emotion as reactions to motivationally significant stimuli and situations, usually
including three components: a cognitive appraisal, a “signature” physiological response, and
phenomenal experiences. We would add that emotions usually occur in reaction to perceptions
of changes in the current situation that have hedonic consequences.
 Second, we propose that the term mood be reserved for longer-duration background states of
our physiological (autonomic) system and the accompanying conscious feelings. Note, the
implication is that emotions and moods are not always conscious, and that the phenomenal
experience component is not a necessary element of an emotional reaction.
 Finally, we suggest that the word evaluation be used to refer to hedonic, pleasure– pain, good–
bad judgments of consequences.
 Emotion, if considered at all, was just one more “input” into a global evaluation or utility. We
would still assign a major role to anticipated emotional responses in the evaluation of the value
or utility (either decision utility or experienced utility) of an outcome of a course of action;
people usually try to predict how they will feel about an outcome and use that anticipated
feeling to evaluate and then decide.
 There seems to be agreement on the conclusion that an early, automatic reaction to almost any
personally relevant object or event is a good–bad evaluation.
 Others have argued that there is a bivariate evaluative response system with two neurally independent circuits, one (dopamine-mediated) assessing positivity, one (acetylcholine-mediated) assessing negativity.
 People with relatively active left prefrontal hemispheric areas tend to exhibit more positive ambient moods and react more positively to stimulus events, while right prefrontal activation is associated with more negative moods and emotional reactions.
 We humans have an emotional signalling system that helps us make quick decisions and decide
when our slower deliberative cognitive systems are overwhelmed with too much information.
 What’s especially important here is the emphasis on the helpful, adaptive role of emotions —the
claim is that without them, we’d make much worse decisions. This is a sharp contrast with the
traditional emphasis in religion and Freudian psychology on the notion that emotions play a
troublemaking role in decisions and interfere with clear, rational thinking processes.
 Another recent conclusion is that experienced utility is intensified if it produces regret or
rejoicing, and especially if it is a surprise.
9.3 The Value of Money
 In the 1923 edition of Webster’s International Dictionary, the first definition of value is “a quality
of a thing or activity according to which its worth or degree of worth is estimated.”
 Even in everyday usage, value has come to be almost synonymous with monetary equivalent.
The more general concept of the degree of worth or desirability specific to the decision maker,
as opposed to mere money, is better termed utility. Even that term is ambiguous, however,
because the dictionary definition of utility is “immediate usefulness,” and that is not what
decision theorists have in mind when they discuss utility.
 The authors own preferred term is personal value.
 In general, the amount a stimulus must be incremented (or decremented) physically for people
to notice a difference is proportional to the stimulus magnitude itself; that is, it must be
incremented (or decremented) by a certain fraction of its physical intensity in order to achieve
what is technically termed a just noticeable difference.
 The proportion that stimuli must be incremented or decremented to obtain a just noticeable
difference in intensity has been termed a Weber fraction.
 The fact that this fraction is more or less constant for any particular type of sensory intensity has
been termed Weber’s law. It does not hold exactly over all dimensions and ranges of intensity,
but it is useful in research and practice as a rough approximation.
 The psychologist Gustav Fechner (1801–1887) proposed that just noticeable differences could be
conceptualized as units of psychological intensity, as opposed to physical intensity. This means
that psychological intensity is a logarithm of physical intensity, and such a proposal became
known as Fechner’s law.
 Again, it does not hold over all dimensions and ranges of intensity, but it is a good approximate
rule.
 The logarithmic function follows what may be termed the law of diminishing returns, or the law
of decreasing marginal returns.
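A short numeric sketch of Weber’s and Fechner’s laws; the 10% Weber fraction and the reference intensity are invented for illustration:

```python
import math

weber_fraction = 0.10    # stimulus must change ~10% to be just noticeable
reference = 100.0

# Weber's law: each just noticeable difference (JND) is proportional to the
# current intensity, so equal JND steps multiply intensity by a constant factor.
steps = [reference * (1 + weber_fraction) ** k for k in range(5)]
print([round(s, 1) for s in steps])     # [100.0, 110.0, 121.0, 133.1, 146.4]

# Fechner's law: counting JNDs as units of sensation makes psychological
# intensity a logarithmic function of physical intensity.
def sensation(physical):
    return math.log(physical / reference) / math.log(1 + weber_fraction)

print(round(sensation(146.41), 1))      # ≈ 4.0 JND units above the reference
```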

 It is tempting to try to relate this function to Coombs and Avrunin’s (1977) derivation of the
single-peaked preference curve.
 It is more correct to suppose that the Bernoullian utility function reflects both Coombs’s positive
and negative substrates and to imagine that it is single-peaked, too. This would mean that there
is such a thing as too much money—a point at which harassment, social enmity, threats of
kidnapping, and other anti-wealth or anti-celebrity actions would become so aversive that the
curve would peak and more wealth would be less desirable.
 Prospect theory is a descriptive theory of decision behaviour. A basic tenet of this theory is that
the law of diminishing returns applies to good and bad objective consequences of decisions. Two
components of the theory concern us here:
1. An individual views monetary consequences in terms of changes from a reference level,
which is usually the individual’s current reference point (usually the status quo). The values
of the outcomes for both positive and negative consequences of the choice then have the
diminishing-returns characteristic as they “move away” from that reference point.
2. The resulting value function is steeper for losses than for gains.
 Irrationality enters because people do not look at the final outcomes of their choices, but rather
allow the reference level to change and make judgments relative to that moving reference
point.
 The psychological justification for viewing consequences in terms of the status quo can be found in the more general principle of adaptation – “Our perceptual apparatus is attuned to the evaluation of changes or differences rather than to the evaluation of absolute magnitudes.”
 The difference between prospect theory and the standard economic theory of diminishing marginal utility is that the latter assumes that decision makers frame their choices in terms of the final consequences of their decisions.
 The diminishing-return shape of the utility function guarantees that any gamble between two negative outcomes is less negative (worth more in terms of utility) than the corresponding certainty, and the utility for any gamble between the positive outcomes is worth less than the corresponding certain outcome.
 Pseudocertainty: the prospect theory explanation is that the chooser adopts a particular stage in a probabilistic process as a psychological status quo and consequently becomes risk-averse for gains and risk-seeking for losses following it.
 The irrationality here is that pseudocertainty leads to contradictory choice prior to knowledge of
the outcome—depending upon whether the choice is viewed in its totality or sequentially by
components.
 Notice that the pseudocertainty effect depends on the manner in which we reason about probabilities; it would occur both for raw dollar values and for subjective utility values.
 Classic economic utility theory, in contrast, is a normative theory of how we should choose, and
only sometimes a description of how we do choose.
 While we should not always be bound by the normative theory’s implications, we should be
aware of them and violate them only with self- awareness and for a compelling reason.
9.4 Decision Utility – Predicting What We Will Value
 If the rational model of decision making is to be of any practical use, it must assume that tastes
do not change often or capriciously and that decision makers have some capacity to predict what
they will like and dislike when they experience them in the future.
 Proponents of the unstable values view conclude there is a basic unreliability in the mental
process that underlies the generation of answers to the evaluative questions.
 How do people predict, at the point of decision, what will make them happy or unhappy as a
consequence of the actions they choose? We propose that a good account can be provided in
terms of judgment strategies or heuristics that are employed to predict value. We call these
evaluation heuristics by analogy with the judgment heuristics.
 The authors propose three basic evaluation heuristics:
1. Predictions of value based on remembered past experiences,
2. Predictions based on simulating what the future experience will be like, and
3. Predictions based on deliberate calculations or inferential rules.
 Past experience, learning, and memory play the dominant role in predictions of the future.
 Another reason that memory for past pleasures and pain is important is because it is a
contributor to our current feelings of satisfaction.
 If we do not appreciate regression effects, we will systematically overestimate how positively
we will feel about good consequences and overestimate the negativity of the bad
consequences.
 When we rely on simulation, we are biased by our current emotional states. A very common and
important judgment bias is associated with situations in which we exhibit bounded self-control.
 Loewenstein attributes this family of prediction errors to what he calls the “hot-cold empathy
gap”—people cannot know what the effects of their feelings will be on their own behaviours
when they are in different emotional states.
 Some surprising biases in evaluations occur as a result of incidental emotions— emotions
experienced at the time a decision is made that have nothing to do with the decision itself, that
is, they are involved in neither decision utility nor experienced utility.
 Under many conditions we deliberately calculate how much we will like a future experience; we
call this the calculation heuristic for evaluations. The diversification bias is an example of a
systematic misprediction that occurs when we deliberately infer what we will like.
 In our view, most of the apparent instabilities in value judgments can be accounted for with
reference to several psychological considerations.
1. There are simple changes in momentary goals. As noted above, when our current goals
change, our evaluations change.
2. There is a gap between predicted satisfactions and experienced satisfactions, and
researchers are developing a catalogue of systematic biases in predictions of future
satisfactions.
3. There will sometimes be shifts in value dependent on the changes in the evaluation
heuristics that we rely on when we remember, simulate, or calculate future values.
9.5 Constructing Values
 Originally the belief sampling model was designed to explain instabilities in responses to general
surveys. But we think the model can be applied usefully to explain unreliability in evaluations of
many types. As the name suggests, the heart of the model is a (memory) sampling process.
 The general properties of any cognitive memory system, namely fluctuations in the availability of
information from memory, explain unreliability in the system. Human memory retrieval is highly
context- dependent, and the specific information retrieved will fluctuate with small changes in
the encoding of the retrieval probe and other changes in activation of parts of the system.

Hastie and Dawes (H&D), Ch. 10: From Preferences to Choices – Summary

10.1 Deliberate Choices Among Complex Alternatives


 Many important choices are made more deliberately from choice sets, available to inspection,
that contain several complex, multi-attribute options. When we need to choose between
apartments to rent, courses to enroll in, mountain bikes to purchase, vacations to commit to, or
job offers to accept, we rely on choice strategies that are analogous to the judgment heuristics.
 These choices are complex because they involve the integration of many small valuations into a
global evaluation.
 The evaluation of individual attributes depends on current goals.
 The most difficult choices occur when there are negative correlations among the values of the
attributes across the alternatives, forcing us to make difficult trade-offs because there is no
perfect alternative
 Gerd Gigerenzer labels some of the most common choice rules “fast and frugal heuristics,”
because they approach optimality, but are frugal, requiring the consideration of relatively little
information about the alternatives, and hence are fast.
 The amount of cognitive effort—measured subjectively or objectively— varies across strategies.
Effort also depends on the structure of the choice set. If the set is large, requires a lot of trade-
offs across dimensions and across alternatives, lacks critical or reliable information, or includes a
lot of similar alternatives, most of the strategies will demand a considerable effort.
 Some strategies involve across-attribute compensatory trade-offs, while others do not.
 Non-compensatory strategies are unforgiving: If the rent is over $700 per month, the apartment
is rejected, no matter how good its other features.
 Non-compensatory strategies, especially, are likely to miss “balanced, all-around good”
alternatives and sometimes terminate the search before a truly dominant “winner” is found.
 A useful distinction is between alternative-based strategies and attribute-based ones.
 In alternative-based strategies, attention is focused on one alternative at a time, its attributes
are reviewed, and a summary evaluation is performed of that item before attention is turned to
another alternative.
 The contrasting organizational principle is a strategy based on attributes: An attribute (e.g.,
price, location) is selected and several alternatives are evaluated on that attribute. Then
attention turns to the next attribute.
 Attribute-based strategies often stop with an “answer” after reviewing less information than
alternative- based strategies; therefore, alternative-based strategies tend to be more cognitively
demanding than those based on attributes.
 Although there is some dependence on the structure of the set of choice alternatives, the strategies differ in terms of the amount of information each is likely to consume in the choice process. Some are
exhaustive and require perusal of all relevant information (and even deploy inference processes
to fill in the gaps in information); others are likely to make a choice after a small subset of the
total accessible information has been covered.
 The most thorough, systematic, cognitively demanding choice strategy is the multi-attribute
utility theory (MAUT) evaluation process that is essentially the linear weight-and-add, Lens
Model judgment policy applied to valuation, rather than to estimating or forecasting “true states
of the world” (i.e., applied to estimate our “internal” reaction to the object of choice).
 Most efforts to improve choice habits focus on inducing people to use strategies that are more like the MAUT evaluation method.
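A minimal sketch of a MAUT-style weight-and-add evaluation; the attributes, weights, and 0–10 scores are all invented:

```python
weights = {"rent": 0.5, "location": 0.3, "size": 0.2}   # importance weights sum to 1

apartments = {                                           # attribute scores, 0-10
    "A": {"rent": 7, "location": 5, "size": 6},
    "B": {"rent": 4, "location": 9, "size": 8},
    "C": {"rent": 8, "location": 4, "size": 3},
}

def maut_score(option):
    # Linear weight-and-add: multiply each attribute score by its weight and sum.
    return sum(weights[attr] * option[attr] for attr in weights)

scores = {name: round(maut_score(attrs), 2) for name, attrs in apartments.items()}
print(scores, max(scores, key=scores.get))   # B wins on the overall weighted sum
```

Because the strategy is compensatory, B’s weaker rent score is made up for by its strong location and size; a non-compensatory cutoff on rent might have eliminated it outright.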
10.2 Ordering Alternatives


 Since we cannot think of all of our decision options—and their possible consequences—
simultaneously, we must do so sequentially. The resulting order in which we consider options
and consequences may have profound effects on decision making.
 A severe problem with the ordering of alternatives is that it may exclude consideration of certain
possibilities.
 The strategy of searching through possible alternatives only until the first satisfactory one is
found has important implications in the study of the rationality of choice. It means the order in
which people search may be of paramount importance, but order can be determined by many
factors having very little to do with the consequences of choice (for example, left-to-right bias),
or it can even be manipulated by a clever person with control of the agenda of a discussion.
 Bounded rationality can, nevertheless, have desirable consequences.
1. First, there are situations in which it is not possible to specify all of the alternatives, their
attributes, and their consequences in advance.
2. Second, the consideration of all relevant possibilities and consequences involves decision
costs, which are difficult to integrate with the costs and benefits of payoffs, because they are
of a qualitatively different type. Let us give two examples— first, one of decision costs.
 Another procedure for simplifying a search process involves concentrating on aspects of
alternatives rather than on the alternatives themselves. The elimination by aspects strategy
involves choosing a desirable aspect, eliminating all alternatives that do not have it (or enough of
it), then choosing another desirable aspect and eliminating all those alternatives not containing
it, and so on, until either a single alternative is left or so few are left that they can be evaluated
thoroughly.
 If the aspects are considered in the same order as their desirability, this form of bounded
rationality results in reasonably good choices—although it involves no compensatory
mechanism. If the aspects are chosen probabilistically in proportion to their importance, the
procedure is less successful. And, if they are chosen ad hoc on the basis of the ease with which
they come to mind, it is a decidedly flawed procedure.
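A minimal sketch of elimination by aspects; the options, the aspects, and their order of desirability are all invented:

```python
apartments = {
    "A": {"rent": 650, "bedrooms": 2, "near_transit": False},
    "B": {"rent": 720, "bedrooms": 3, "near_transit": True},
    "C": {"rent": 680, "bedrooms": 2, "near_transit": True},
}

# Aspects considered in order of desirability, each as a pass/fail test.
aspects = [
    ("rent under $700", lambda apt: apt["rent"] < 700),
    ("near transit",    lambda apt: apt["near_transit"]),
    ("2+ bedrooms",     lambda apt: apt["bedrooms"] >= 2),
]

remaining = dict(apartments)
for label, passes in aspects:
    survivors = {name: apt for name, apt in remaining.items() if passes(apt)}
    if survivors:                 # never eliminate every remaining option
        remaining = survivors
    if len(remaining) == 1:
        break

print(list(remaining))   # ['C']: B was cut on rent despite its other strengths
```

The example also shows the non-compensatory character of the strategy: B’s extra bedroom and transit access cannot buy back its failure on the first aspect.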
 Even when there were no limits on computational resources or available information, many of
the less cognitively demanding strategies did almost as well as the ideal multi-attribute utility
evaluation (with reasonable choice sets and no missing information). When a choice deadline
was imposed on the strategies, the MAUT strategy was prone to deadlock or “crash,” while some
of the other “quick and dirty” strategies still performed close to optimally.

10.3 Grouping Alternatives


 Just as a particular visual stimulus (for example, a grey circle) appears different in different
contexts (when surrounded by a yellow versus a blue background), a particular choice alternative
may appear different to a decision maker when it is considered in different contexts.
 Specifically, it may be evaluated as more or less desirable when it appears in different choice
sets. The more judgmental the evaluation, the greater is the importance of context effects.
 One principle of rationality that most theorists accept is that choice should be independent of
irrelevant alternatives. That is, if alternative A is preferred to alternative B when the two are
considered alone, then A should be preferred to B when alternative C is considered along with
them; the presence of alternative C should be irrelevant to the preference between A and B. Of
course, if C is the preferred alternative in the set consisting of A, B, and C, we have no way of
knowing whether its existence has reversed the preference for A over B, because C will be
chosen. Hence, to demonstrate that choice may violate this principle of rationality, we must
show that the choice of A over B is reversed in a situation where C is not chosen.

10.4 Choosing Unconsciously


 There are two research programs that do make a convincing case that intuition trumps analysis under some conditions.
 When choice options are simple and their “consumption” does not involve extensive cognitive analysis, simpler, more intuitive choice processes may produce better, more satisfying outcomes.
 The claim is that there are two modes of thinking—unconscious versus conscious thought—and that the unconscious thought system has much greater computational capacity than the conscious system (2-3 bits vs. 11 million bits).

10.5 How to Make Good Choices

 Ben Franklin’s advice is echoed in popular books on decision making that recommend the listing of possible consequences of choices, linking those to our personal values, and then choosing the alternative that has the highest summary value according to a simple weight-and-add rule.
 The answer to the question of importance is rather easy: It is up to the decision maker. In
constructing a weighting scheme, we should list the variables that are important to us, given our
current goals. If, for example, we think of “job level” in a global and amorphous way, then we
should list it.
 Franklin advises not on what to decide, but how to decide it. When suggesting a list, he was not
advising what should be on it, but rather suggesting how to become explicit about what is
important to the decision maker.

Hastie and Dawes (H&D), Ch. 12: A Descriptive Decision Theory – Summary

12.1 Non-expected Utility Theories


 Both the economists’ paradoxes and psychologists’ experiments have repeatedly shown that
subjective expected utility theory is not a valid descriptive theory of human behaviour.
 Non-expected utility theories are so named as a reminder that they are derived from the expected utility framework.
 But why should we work within the general expected utility framework?
1. First, the framework includes the ingredients that our intuitions and experience tell us are
essential to deliberate decision making.
2. Second, the framework provides a roughly accurate descriptive account of decision
behaviours in many situations; some economists call it a positive theory because it relates
inputs and outputs (psychologists might say stimuli and responses) in decision behaviour to
one another approximately correctly.
3. Third, the framework captures the essence of rationality (as best our culture can define it),
and it is likely that we are adapted to be approximately rational in our behaviours; our
optimistic hypothesis is that people are at least half-smart in achieving their personal goals.
 The most influential and successful of these non-expected utility theories is Kahneman and
Tversky’s prospect theory
 Prospect theory adopts an algebraic formulation to represent decision processes: A prospect is an alternative or course of action defined by one or more outcomes (i) that result in consequence values (v_i) that are weighted by decision weights (π_i) that are related to the objective probabilities for each outcome’s occurrence. The overall value (V) for that prospect is
V = Σ_i (π_i · v_i),
 which is essentially the same equation as the rational expectations principle at the heart of all
expected utility theories.
 There are two stages in the prospect theory decision process: editing the alternatives, which
involves constructing a cognitive representation of the acts, contingencies, and outcomes
relevant to the decision; and evaluation, in which the decision maker assesses the value of each
prospect and chooses accordingly.
 The evaluation stage can be broken down, for clarity of exposition, into three steps for each
prospect:
1. valuation, in which the value function is applied to each consequence associated with each
outcome;
2. decision weighting, in which each valued consequence is weighted for impact by a function
based on its objective probability of occurrence; and
3. integration, in which the weighted values across all the outcomes associated with a
prospect are combined by adding them up.

Editing and Framing the Decision


 Comprehension results in a cognitive representation that includes the prospects’ outcomes,
events, contingencies among them, associated values, a reference point, and perhaps links to
other information (in long-term memory or in the immediate environment) relevant to the
assessment of values or decision weights.
 The first major editing operation that is hypothesized to occur is setting a reference point on the
objective valuation scale.
 The location of the reference point is central in explanations for many value-related phenomena,
as it determines what is a gain and what is a loss, and it predicts where the decision maker will
be most sensitive to changes in value (near the zero reference point).
 “The reference outcome is usually a state to which one has adapted; it is sometimes set by social norms and expectations; it sometimes corresponds to a level of aspiration, which may or may not be realistic.”
 Like all of the framing subprocesses proposed in prospect theory, it is not highly constrained,
making it difficult to derive a priori predictions or to formally estimate post hoc parameters, such
as the location of the reference point.
 Three locations play key roles in people’s evaluations of uncertain prospects:
1. a reference point,
2. an aspiration level (“What are the chances that I will achieve my goal of ... ?”), and
3. a security level (“What is the chance that I will lose ... or more?”).
 The individual differences in security-mindedness (analogous to risk-aversion) and potential-
mindedness (analogous to risk-seeking) are stable across time (e.g., from experimental session to
experimental session), at least in the financial domain, and they may correspond to “cautious” or
“risky” personality types.
 The aspiration level for an individual is hypothesized to be much more labile (unstable), and
dependent on situational factors.
 Business decision makers consider other critical amounts as well:
1. Downside risk
2. Break-even points and
3. Survival points
 Executives of corporations are probably very sensitive to differences in financial values in the
neighbourhood of these attention-drawing reference points.
 The second major editing operation hypothesized in prospect theory involves combining or
segregating outcomes. It is hypothesized that people sometimes group gains and losses to
increase their overall satisfaction (mental accounting).
 We are most sensitive to gains and losses near our reference point (the status quo); two small movements, up or down from the zero point, on a diminishing-returns value function would have a bigger impact on our satisfaction levels than one larger movement. Remember, the theory also assumes that the reference point shifts rapidly; otherwise, two small gains or losses in sequence would be no different from one large gain or loss.
Evaluation
 The first step of the evaluation phase, valuation, involves inferring the personal values for the
consequences attached to each outcome. The value function summarizes prospect theory’s
assumptions about the translation of an objective measure of consequences into personal values
for a typical person making a decision. The theory acknowledges that there will be individual
differences in the basic function form and, sure enough, when the functions have been
measured they do vary across individuals and across decisions, although there is considerable
consistency, too.
 Each consequence is identified, as part of the framing process, and then translated into a
personal value according to the value function. An illustrative equation for this function can be
written as follows:
V(x) = x^α, if x ≥ 0 (the gains portion of the function)
V(x) = −λ(−x)^β, if x < 0 (the losses portion of the function)
 This process has three major characteristics:


1. Reference-level dependence: An individual views consequences (monetary or other) in
terms of changes from the reference level, which is usually that individual’s status quo (the
0,0 coordinate on the value function graph).
2. Gains and losses satiate: The values of the outcomes for both positive and negative
consequences of the choice have marginally diminishing returns. The exponents, α and β, for
the gain and loss portions of the value function are usually found to be less than 1.00—a
value of 0.88 for both α and β is a typical estimate (if the parameters were 1.00, the curves
would be linear; if greater than 1.00, they would be positively accelerating).
3. Loss aversion: The resulting value function is steeper for losses than for gains; losing $100
produces more pain than gaining $100 produces pleasure. The coefficient λ indexes the
difference in slopes of the positive and negative arms of the value function. A typical
estimate of λ is 2.25, indicating that losses are approximately twice as painful as gains are
pleasurable. (If λ = 1.00, the gains and losses would have equal slopes; if λ < 1.00, gains
would weigh more heavily than losses.)
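A minimal sketch of this value function using the typical parameter estimates quoted above (α = β = 0.88, λ = 2.25):

```python
ALPHA = BETA = 0.88
LAMBDA = 2.25

def value(x):
    # Prospect theory value function: diminishing returns in both directions,
    # with losses weighted more heavily than gains (loss aversion).
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

# Loss aversion: losing $100 hurts roughly twice as much as gaining $100 pleases.
print(round(value(100), 1), round(value(-100), 1))   # 57.5  -129.5
```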
 Prospect theory includes a decision weighting process, analogous to the weighting of outcomes
by their probabilities of occurrence or expectations in expected utility theories.
 Support theory (and what we know about heuristic judgment processes) could provide a connection to non-numerical (“non-risky” in technical terms) uncertainty situations; support theory would translate subjective uncertainty into numerical subjective probability on the x-axis of the decision weight function.
 The modal decision weight function (again, typical for most individuals in most decision
situations) looks like the backward S-shaped curve in Figure 12.2. A useful rule of thumb to
interpret these psychophysical functions is that when the curve is steeper, it implies the decision
maker will be more sensitive to differences on the objective dimension (x-axis): If the curve is
steep, there is relatively more change in the psychological response to any difference on the
objective dimension, as compared with where the curve is flatter.
 Several mechanisms have been postulated as explanations for the differences in steepness or
slope—for example:
1. differential attention,
2. differences in sense organ sensitivity, and
3. differences in the reactivity of neural-biochemical substrates.
 Let’s walk through the characteristics of the decision weight function.


1. Near the zero point the curve is steep, implying that people are very sensitive to the
difference between impossibility and “possibility-hood.” This steepness is consistent with
people’s overreactions to small probability risks and is also part of the explanation for why
people purchase incredibly long-shot lottery tickets. The business of industrial and
governmental risk management is complicated by people’s willingness to pay exorbitant
amounts to completely eliminate low-probability threats.
2. There is a crossover point at about .20 on the objective probability dimension where, in
many gambling situations (e.g., cards, dice, horse racetrack betting), people are well-
calibrated in terms of their sense of “objective” probabilities.
3. In most of the central portion of the curve, people are “regressive,” the curve is “too flat,”
and substantial changes in objective probabilities produce small changes in decision
weights. People are insensitive to differences in intermediate probabilities. This portion of
the function implies that people will be subadditive for events associated with these
objective probabilities: The sum of a set of decision weights will be smaller than the sum of
the objective probabilities.
 Finally, at the high end of the objective probability scale, the curve becomes steep again as a
high probability changes to certainty. This phenomenon is sometimes called the certainty effect.
It provides part of an explanation for the observed pattern of preferences between gambles in
the Allais paradox (discussed in Section 11.4). It is important to people to be certain of getting
the big prize, so important that the shift from .99 to 1.00 is worth more to the experimental
subject than the shift from .10 to .11. When choosing between the cleverly designed Allais
paradox bets, for example, it leads people to violate the independence axiom of expected utility
theory.
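A sketch of a backward S-shaped decision weight function; the one-parameter Tversky and Kahneman (1992) form and the γ value used here are assumptions for illustration, not necessarily the exact curve in Figure 12.2.

```python
# Sketch of an inverse (backward) S-shaped decision weight function.
# The one-parameter Tversky-Kahneman (1992) form with gamma = 0.61 is assumed here
# purely for illustration.

def decision_weight(p, gamma=0.61):
    """Map an objective probability p in [0, 1] onto a decision weight."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

for p in (0.0, 0.01, 0.10, 0.50, 0.90, 0.99, 1.0):
    print(f"p = {p:.2f}  ->  w(p) = {decision_weight(p):.3f}")

# Steep near 0: small probabilities are overweighted (the "possibility" end).
# Flat in the middle: changes in intermediate probabilities barely move the weight.
# Steep near 1: the step from .99 to certainty is worth more than the step from .10 to .11.
print(decision_weight(1.00) - decision_weight(0.99))   # certainty effect
print(decision_weight(0.11) - decision_weight(0.10))
```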
 Figure 12.3 is a summary of the decision processes proposed by prospect theory. We have taken
some liberties, by ordering the three preliminary sub-stages (editing, valuation, and decision
weighting) in a temporal sequence. The theory itself is not explicit about the order of the
computations.

12.2 Gain-Loss Framing Effects


 The influence of the frame for the outcomes in a decision problem can be demonstrated by
creating two versions of the problem—two different statements that describe identical decision
situations in different words.
 Psychologists have discovered that framing effects are particularly strong in matters regarding
life and death. Why? Because the first life saved is the most important, just as is the first life lost.
Thus, decision makers are risk-averse when questions are framed in terms of saving lives, but
risk-seeking when the identical questions are framed in terms of losing lives. The number of lives
lost plus the number of lives saved, however, must equal the number of people at risk for death
—hence, the contradiction.
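A sketch of how the value function from earlier in the chapter produces this reversal, using the classic "disease" illustration (600 people at risk, a sure 200 saved versus a one-in-three chance of saving everyone); these numbers are the standard textbook example rather than necessarily the authors' own, and decision weights are omitted for simplicity.

```python
# Sketch: the same outcomes framed as lives saved (gains) or lives lost (losses)
# flip risk preferences under the prospect theory value function. The 600/200
# numbers are the standard illustration; decision weights are omitted for simplicity.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# Gain frame: save 200 for sure, or a 1/3 chance of saving all 600.
sure_gain = value(200)
risky_gain = (1 / 3) * value(600)
print(sure_gain > risky_gain)        # True: risk-averse when framed as lives saved

# Loss frame (identical outcomes): 400 die for sure, or a 2/3 chance that all 600 die.
sure_loss = value(-400)
risky_loss = (2 / 3) * value(-600)
print(risky_loss > sure_loss)        # True: risk-seeking when framed as lives lost
```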

12.3 Loss Aversion


 The simple endowment effect (once you possess an object, you value it more highly than you did
before owning it) has led to several further results extending our understanding of the role of
cognitions and emotions in valuation processes.
 Query theory refers to the manner in which consumers' thought contents are changed by
internal or external "queries" to their memories, resulting in changed valuations of products and
other objects of consumption.
 An incidental emotional state changes the preferences expressed by sellers versus buyers in an
endowment effect task. In the emotion appraisal theory framework, the gist is that when a person
is in an emotional state, the emotion activates certain action tendencies.
 The query theory interpretation and emotional state findings do not diminish the importance of
the basic endowment effect as evidence for the loss aversion principle.
 The loss aversion asymmetry is important in formal markets, where it predicts the common
situation where a seller sincerely values his or her commodity more highly than a buyer. The
endowment effect is surely part of the explanation for the malfunction of some markets in which
trading occurs at inefficiently slow rates.
 Myopic loss aversion = over-investment in stable financial instruments (loss aversion combined with frequent evaluation of returns).

12.4 Look to the Future


 Prospect theory is the best comprehensive description we can give of the risky decision process
so far.
Hastie and Dawes (H&D), Ch. 13: What’s Next – Summary
New Directions in Research on Judgment and Decision Making

 So, what’s next in the behavioral decision sciences?


 Foremost is the trend to explore the neural substrates of judgment and decision behaviours, a
subfield of cognitive neuroscience sometimes called neuroeconomics.
 Second are explorations of the roles of emotions in judgments and decisions.
 Third, there is a new frontier of research on dynamic decision processes that was opened up by
the popularity of dynamic decision tasks such as the Iowa Gambling Task in which participants
make many choices and have to extract information about probabilities and payoffs from
experience.

13.1 The Neuroscience of Decisions


 The major impetus for the recent surge in behavioral neuroscience research is the advent of
brain imaging techniques that allow researchers to "functionally" identify activity in the brain by
measuring the flow of blood to neurons in various regions of interest.
 Some conclusions about brain functions underlying decisions seem to be well-established.
Cognitive processing that involves the deliberate consideration of gambles or consumer products
seems to always include the dorsolateral prefrontal cortex (areas of the brain just behind the
temples), which are generally assumed to be the residence of Working Memory
 Valuation processes for hedonic (pleasurable) consumption experiences, painful
experiences, and money are associated with activity in central, motivational areas of the brain,
sometimes called the limbic system, such as the striatum (including the nucleus accumbens), the
amygdala, and the insula.
 The orbitofrontal cortex (a region of the brain just behind and above the eye sockets) seems to
play a special role, integrating cognitive situation information and emotional valuations.

 One especially interesting result is the observation that brains are responsive to the relative—
not absolute—amounts to be gained or lost, as predicted by prospect theory.
 There is accumulating evidence that the brain performs utility calculations like those prescribed
by prospect theory.
 We already know from Ellsberg’s work that there is a behavioural difference, with most people
strongly preferring well-defined risky prospects to murky, ambiguous prospects. But are
different regions of the brain engaged when a person contemplates risk versus ambiguity, and do
the specific active regions give us some clues as to the nature of that reaction?
 Two brain areas showed more activity when ambiguous gambles were considered: the
amygdala and the orbitofrontal cortex.
 The amygdala is frequently associated with emotional responses, most notably to fear-evoking
stimuli, such as frightened faces, and the orbitofrontal cortex often appears to play a
role in integrating cognitive and emotional information; patients with injuries
to the orbitofrontal cortex often behave inappropriately in social situations, despite knowledge
of the proper behaviour.
 Conversely, the dorsal striatum (including the nucleus accumbens) was more active when risky
(compared with ambiguous) prospects were considered. This area (see above) seems to play a
role in predicting rewards (especially monetary rewards). These interpretations suggest that the
brain treats ambiguous prospects as a bit scary and emotional, but treats risky prospects as
something to think about in a “calculating” manner.
 Camerer’s results suggest that there are two brain systems—one associated with the amygdala
and the orbitofrontal cortex, the other associated with the striatum—that respond to
uncertainty in prospects presented for decisions. Both are active, but as uncertainty increases
and becomes ambiguity, there is a shift toward relatively more activation of the amygdala-orbitofrontal system.
 Furthermore, the same shift in system activation was observed for uncertainty introduced by
simple card-draw gambles and for increasing uncertainty produced by lack of expertise (i.e., your
outcome depends on judging temperature in Tajikistan) and by the potential actions of a human
opponent, implying that the systems are reacting to a very general sense of uncertainty-
ambiguity.
 One important behavioural observation was that the choices of patients with damage to the
orbitofrontal cortex were both risk- and ambiguity-neutral, while non-brain-injured participants
were mostly averse to risk and even more averse to ambiguity.

13.2 Emotions in Decision Making


 Historically, in theories of decision making, emotions were usually viewed as an auxiliary
phenomenon that operated to perturb the primary cognitive decision processes. This image of
an impulsive emotional system that occasionally interferes with a more orderly rational system
has been popular throughout the history of speculations about human nature.
 Dynamic inconsistency: Receive $20 immediately, today, versus receive $25 in 1 week. Most
choose the $20 today and their reaction is almost visceral; they are
already savouring the consumption of the $20. Now, consider the same amounts of money, but
to be received in 5 weeks versus 6 weeks—$20 in 35 days versus $25 in 42 days. For most
people, the delayed choice is also easy; they choose to wait for the $25.
 The interpretation of the dynamic inconsistency result is that when an outcome is immediate, a
visceral emotional system controls our behaviour and chooses the immediately available
gratification. However, when gratification is not immediately available, our cooler, rational
system leads us to choose more wisely.
 Although emotions certainly lead us to act against our best interests under some conditions,
there has been a shift to the view that emotions also play a positive adaptive role in behaviour.
 Robert Zajonc (1980) established the importance of emotional reactions as useful guides to
rapid evaluations and approach-avoid behaviour. His classic aphorism, that "preferences need no
inferences," referred to the fact that often emotions and emotion-based choices are inescapably
evoked prior to any conscious analysis.
 Mere exposure effect – participants who were repeatedly confronted with unfamiliar stimuli
liked them more with increasing repetitions. This held even when the stimuli were presented so
briefly that participants were not aware of them.
 Affect heuristic – similar to the other heuristics in the sense that one concept (affect, fluency,
similarity) is substituted for another (dangerousness, frequency, probability). It depends on the
availability of information: your judgment will be changed by the amount of information received.

13.3 The Rise of Experimental Methods to Study Dynamic Decisions


 Damasio (4 card decks experiment) hypothesized that normal adaptive decision making in
complex, uncertain environments depends on somatic markers, emotional signals that warn us
that important events (both good and bad) are about to occur.
 Thus, somatic markers warn us about exceptional threats or opportunities, or at least interrupt
processing of other events and give a "heads up" signal that something important is about to
occur. In routine decision making, somatic markers may help us winnow down large choice sets
into manageable smaller sets.
 The specifics of the somatic marker hypothesis have been restated over the course of a decade
of vigorous, often critical, follow-up research. However, there is accumulating evidence that the
orbitofrontal cortex plays a mediating role between cognitive (frontal cortex) and motivational
(limbic system) neural systems.

13.4 Do We Really Know Where We’re Headed?


 We are quite certain that neuroscience, emotion, and dynamic decision tasks will play an
increasingly important role in the immediate future of decision research. It is interesting that
these new directions are interrelated and reinforce one another, neuroscience contributing to
our understanding of what emotion is and what it does.
 The dynamic tasks and models for behavior in them help us relate non-laboratory decision
behavior to its neural substrates.

Prospect Theory in the Wild: Evidence from the Field


Colin F. Camerer

 This article describes ten regularities in naturally occurring data that are anomalies for expected
utility theory but can all be explained by three simple elements of prospect theory: loss-aversion,
reflection effects, and nonlinear weighting of probability; moreover, the assumption is made that
people isolate decisions (or edit them) from others they might be grouped with.
 In expected utility, gambles that yield risky outcomes x_i with probabilities p_i are valued according
to Σ_i p_i·u(x_i), where u(x) is the utility of outcome x. In prospect theory they are valued by
Σ_i π(p_i)·v(x_i − r), where π(p) is a function that weights probabilities nonlinearly, overweighting
probabilities below .3 or so and underweighting larger probabilities. The value function v(x − r)
exhibits diminishing marginal sensitivity to deviations from the reference point r, creating a
"reflection effect" because v(x − r) is convex for losses and concave for gains (i.e., v″(x − r) > 0 for
x < r and v″(x − r) < 0 for x > r). The value function also exhibits loss aversion if the value of a loss
−x is larger in magnitude than the value of an equal-sized gain (i.e., −v(−x) > v(x) for x > 0).
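A compact sketch of the valuation formula just described, combining a nonlinear probability weight π(p) with a reference-dependent value function v(x − r); the separable form and the parameter values are commonly used Tversky and Kahneman estimates, assumed here only for illustration.

```python
# Sketch of prospect-theory valuation of a gamble: sum over outcomes of pi(p_i) * v(x_i - r).
# The separable weighting form and the parameters (0.88, 2.25, 0.61) are common
# Tversky-Kahneman estimates, used here purely for illustration.

def v(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def pi(p, gamma=0.61):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(gamble, r=0.0):
    """gamble is a list of (probability, outcome) pairs; r is the reference point."""
    return sum(pi(p) * v(x - r) for p, x in gamble)

# Example: a fair 50-50 bet to win or lose $100 looks unattractive under loss aversion.
print(prospect_value([(0.5, 100), (0.5, -100)]))   # negative, so the fair bet is rejected
```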

1. Finance: The Equity Premium


 One anomaly is called the equity premium. Stocks—or equities—tend to have more variable
annual price changes (or “returns”) than bonds do. As a result, the average return to stocks is
higher as a way of compensating investors for the additional risk they bear.
 A person with enough risk-aversion to explain the equity premium would be indifferent between
a coin flip paying either $50,000 or $100,000 and a sure amount of $51,209.
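One way to see how extreme that implied risk aversion is: assuming a constant relative risk aversion (CRRA) utility function, which is how such indifference figures are usually derived (an assumption here, since the article's exact calculation is not reproduced), one can solve numerically for the coefficient that makes the decision maker indifferent.

```python
# Sketch: back out the relative risk aversion coefficient implied by indifference between
# a 50-50 flip over $50,000 / $100,000 and $51,209 for sure, assuming CRRA utility
# u(x) = x**(1 - rho) / (1 - rho). The CRRA form is an illustrative assumption.
from scipy.optimize import brentq

def certainty_equivalent(rho, low=5.0, high=10.0):
    # amounts in units of $10,000 to keep the powers numerically well-behaved
    eu = 0.5 * low ** (1 - rho) + 0.5 * high ** (1 - rho)
    return eu ** (1 / (1 - rho))

# rho = 1 (log utility) is excluded from the bracket to avoid the singularity.
rho_star = brentq(lambda rho: certainty_equivalent(rho) - 5.1209, 1.01, 100.0)
print(round(rho_star, 1))   # on the order of 30, far beyond conventional estimates
```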
 Therefore, to explain the equity premium Benartzi and Thaler must assume that investors take a
short horizon over which stocks are more likely to lose money than bonds. They compute the
expected prospect values of stock and bond returns over various horizons, using estimates of
investor utility functions from Kahneman and Tversky

2. Finance: The Disposition Effect


 Shefrin and Statman (1985) predicted that because people dislike incurring losses much more
than they like incurring gains and are willing to gamble in the domain of losses, investors will
hold on to stocks that have lost value (relative to their purchase price) too long and will be eager
to sell stocks that have risen in value. They called this the disposition effect.
 The disposition effect is anomalous because the purchase price of a stock should not matter
much for whether you decided to sell it. If you think the stock will rise, you should keep it; if you
think it will fall, you should sell it. In addition, tax laws encourage people to sell losers rather
than winners because such sales generate losses that can be used to reduce the taxes owed on
capital gains.
 Interestingly, the winner-loser differences did disappear in December. In this month investors
have their last chance to incur a tax advantage from selling losers.

3. Labour Supply
 Camerer, Babcock, Loewenstein, and Thaler (in this volume) talked to cab drivers in New York
City about when they decide to quit driving each day. Most of the drivers lease their cabs for a
fixed fee for up to 12 hours. Many said they set an income target for the day and quit when
they reach that target. Although daily income targeting seems sensible, it implies that drivers
will work long hours on bad days when the per-hour wage is low and will quit earlier on good
high-wage days. The standard theory of the supply of labour predicts the opposite: Drivers will
work the hours that are most profitable, quitting early on bad days and making up the shortfall
by working longer on good days.
 The daily targeting theory and the standard theory of labour supply therefore predict opposite
signs of the correlation between hours and the daily wage.
 To measure the correlation, we collected three samples of data on how many hours drivers
worked on different days. The correlation between hours and wages was strongly negative for
inexperienced drivers and close to zero for experienced drivers.
 Daily income targeting assumes loss aversion in an indirect way. To explain why the correlation
between hours and wages for inexperienced drivers is so strongly negative, one needs to assume
that drivers take a 1-day horizon and have a utility function for the day’s income that bends
sharply at the daily income target. This bend is an aversion to “losing” by falling short of an
income reference point.
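A small sketch of why a fixed daily income target mechanically produces the negative hours-wage correlation described above, whereas the standard theory predicts the opposite sign; the target and wage numbers are invented for illustration.

```python
# Sketch: with a fixed daily income target, hours worked = target / hourly wage,
# so drivers work long hours on bad (low-wage) days and quit early on good days.
# The target and wage figures below are hypothetical.
import numpy as np

target = 200.0                                 # hypothetical daily income target, in dollars
wages = np.array([15.0, 20.0, 25.0, 30.0])     # hypothetical average hourly earnings by day

hours_under_targeting = target / wages         # quit as soon as the target is reached
print(hours_under_targeting)                   # [13.3, 10.0, 8.0, 6.7]: fewer hours on good days

# The hours-wage correlation is strongly negative under targeting, whereas the
# standard labour-supply story predicts working more hours when the wage is high.
print(np.corrcoef(wages, hours_under_targeting)[0, 1])
```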

4. Asymmetric Price Elasticities of Consumer Goods


 The price elasticity of a good is the change in quantity demanded, in percentage terms, divided
by the percentage change in its price. Hundreds of studies estimate elasticities by looking at how
much purchases change after prices change. Loss- averse consumers dislike price increases more
than they like the windfall gain from price cuts and will cut back purchases more when prices rise
compared with the extra amount they buy when prices fall. Loss-aversion therefore implies
elasticities will be asymmetric, that is, elasticities will be larger in magnitude after price increases
than after price decreases.
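A hypothetical worked example of the asymmetry (the numbers are invented): suppose a 10% price increase cuts purchases by 10%, while a 10% price cut raises purchases by only 5%. Then

\[
\varepsilon_{\text{rise}} = \frac{-10\%}{+10\%} = -1.0,
\qquad
\varepsilon_{\text{cut}} = \frac{+5\%}{-10\%} = -0.5,
\qquad
|\varepsilon_{\text{rise}}| > |\varepsilon_{\text{cut}}| .
\]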
 Note that for loss-aversion to explain these results, consumers must be narrowly bracketing
purchases of a specific good (e.g., eggs or orange juice). Otherwise, the loss from paying more
for one good would be integrated with gains or losses from other goods in their shopping cart
and would not loom so large.
5. Savings and Consumption: Insensitivity to Bad Income News
 In economic models of lifetime savings and consumption decisions, people are assumed to have
separate utilities for consumption in each period, denoted u[c(t)], and discount factors that
weight future consumption less than current consumption. These models are used to predict
how much rational consumers will consume (or spend) now and how much they will save,
depending on their current income, anticipations of future income, and their discount factors.
 Consumption is “sticky downward” for two reasons:
(1) Because they are loss-averse, cutting current consumption means they will consume below their
reference point this year, which feels awful.
(2) Owing to reflection effects, they are willing to gamble that next year’s wages might not be so
low; thus, they would rather take a gamble in which they either consume far below their
reference point or consume right at it than accept consumption that is modestly below the
reference point. These two forces make the teachers reluctant to cut their current consumption
after receiving bad news about future income prospects, which explains Shea’s finding.

6. Status Quo Bias, Endowment Effects, And Buying – Selling Price Gaps
 Samuelson and Zeckhauser (1988) coined the term status quo bias to refer to an exaggerated
preference for the status quo and showed such a bias in a series of experiments. They also
reported several observations in field data that are consistent with status quo bias.
 There is a huge literature establishing that selling prices are generally much larger than buying
prices, although there is a heated debate among psychologists and economists about what the
price gap means and how to measure “true” valuations in the face of such a gap.

7. Racetrack Betting: The Favourite-Longshot Bias


 Overbetting longshots implies favourites are underbet. Indeed, some horses are so heavily
favoured that up to 70% of the win money is wagered on them. For these heavy favourites, the
return for a dollar bet is very low if the horse wins.
 Cumulative prospect theory fits much better than rank-dependent theory and expected utility
theory. They estimated that the utility function for small money amounts is convex. Their
estimate of the probability weighting function π(p) for probabilities of gain is almost linear, but the
weighting function for loss probabilities severely overweights low probabilities of loss (e.g.,
π(.1) = .45 and π(.3) = .65).
 Bettors like longshots because they have convex utility and weight their high chances of losing
and small chances of winning roughly linearly. They hate favourites, however, because they like
to gamble (u(x) is convex) but are disproportionately afraid of the small chance of losing when
they bet on a heavy favourite.

8. Racetracks Betting: The End-of-the-Day Effect


 bettors tend to shift their bets toward longshots, and away from favourites, later in the racing
day. Because the track takes a hefty bite out of each dollar, most bettors are behind by the last
race of the day. These bettors really prefer longshots because a small longshot bet can generate
a large enough profit to cover their earlier losses, enabling them to break even.

9. Telephone Wire Repair Insurance


 People who had previously been uninsured, when a new insurance option was introduced, were
less likely to buy it than new customers were.
 It is implausible that consumers who buy wire repair insurance are integrating their wealth and
valuing the insurance according to expected utility (and know the correct probabilities of
damage). A more plausible explanation comes immediately from prospect theory—consumers are
overweighting the probability of damage. (Loss-aversion and reflection cannot explain their
purchases because, if they are loss averse, they should dislike spending the $.45 per month, and
reflection implies they will never insure unless they overestimate the probability of loss.)
 Once again, narrow bracketing is also required: consumers must be focusing only on wire repair
risk; otherwise, the tiny probability of a modest loss would be absorbed into a portfolio of life’s
ups and downs and weighted more reasonably.

10. State Lotteries


 Cook and Clotfelter (1993): If players tend to judge the likelihood of winning based on the
frequency with which someone wins, then a larger state can offer a game at longer odds but with
the same per- ceived probability of winning as a smaller state. The larger population base in
effect conceals the smaller probability of winning the jackpot, while the larger jackpot is highly
visible. This interpretation is congruent with prospect theory.

Conclusion
 Economists value (1) mathematical formalism and econometric parsimony, and (2) the ability of
theory to explain naturally occurring data.
 Loss-aversion can explain the extra return on stocks compared with bonds (the equity premium),
the tendency of cab drivers to work longer hours on low-wage days, asymmetries in consumer
reactions to price increases and decreases, the insensitivity of consumption to bad news about
income, and status quo and endowment effects. Reflection effects—gambling in the domain of
a perceived loss—can explain holding losing stocks longer than winners and refusing to sell your
house at a loss (disposition effects), insensitivity of consumption to bad income news, and the
shift toward longshot betting at the end of a racetrack day. Nonlinear weighting of probabilities
can explain the favourite-longshot bias in horse-race betting, the popularity of lotto lotteries with
large jackpots, and the purchase of telephone wire repair insurance. In addition, note that the
disposition effect and downward-sloping labour supply of cab drivers were not simply observed
but were also predicted in advance based on prospect theory.
 prospect theory is a suitable replacement for expected utility because it can explain anomalies
like those listed above and can also explain the most basic phenomena expected utility is used to
explain.
Christine Jolls – Behavioural Law and Economics

Abstract
This paper describes and assesses the current state of behavioral law and economics. Law and
economics had a critical (though underrecognized) early point of contact with behavioral economics
through the foundational debate in both fields over the Coase theorem and the endowment effect. The
paper concludes with reference to a new emphasis in behavioral law and economics on "debiasing
through law" – using existing or proposed legal structures in an attempt to reduce people's departures
from the traditional economic assumption of unbounded rationality.

1. Introduction
 An important threshold question for the present work involves how to characterize the domains of
both “law and economics” and “behavioral law and economics.”
 Three features of the work considered in this article are:
1. much of this work focuses on various areas of law that were not much studied by economists
prior to the advent of law and economics
2. it often (controversially) employs the normative criterion of “wealth maximization” rather than
that of social welfare maximization.
3. sustained interest in explaining and predicting the content, rather than just the effects, of legal
rules.
 Behavioral law and economics involves both the development and the incorporation within law and
economics of behavioral insights drawn from various fields of psychology and attempts to improve
the predictive power of law and economics by building in more realistic accounts of actors’
behavior.
 Through the vehicle of "debiasing through law," behavioral law and economics may open up a new space
within law and economics between, on the one hand, unremitting adherence to traditional
economic assumptions and, on the other hand, broad structuring or restructuring of legal regimes
on the assumption that people are inevitably and permanently bound to deviate from traditional
economic assumptions.

2. The Endowment Effect in Behaviours Economics and Behavioural Law and Economics
2.1 The Coase Theorem
 This theorem posits that allocating legal rights to one party or another will not affect outcomes if
transaction costs are sufficiently low.

Thus, for instance, whether the law gives a factory the right to emit pollution next to a laundry
or, instead, says the laundry has a right to be free of pollution will not matter to the ultimate
outcome (pollution or no pollution) as long as transaction costs are sufficiently low. The reason
for this result is that, with low transaction costs, the parties should be expected to bargain to the
efficient outcome under either legal regime.

 The Coase theorem is central to law and economics because of (among other things) the theorem’s
claim about the domain within which normative analysis of legal rules – whether rule A is preferable
to rule B or the reverse – is actually relevant.

2.2 The Endowment Effect Within Law and Economics


 A central task of law and economics is to assess the desirability of actual and proposed legal rules.
The endowment effect both preserves a larger scope for such normative economic analysis –
because the Coase theorem and the associated claim of irrelevance of legal rules no longer hold –
and profoundly unsettles the bases for such analysis.
 The reason that the endowment effect so unsettles the bases for normative economic analysis of
law is that in the presence of this effect the value attached to a legal entitlement will sometimes
vary depending on the initial assignment of the entitlement.
 In the presence of the endowment effect a “cost-benefit study cannot be based on willingness to
pay (WTP), because WTP will be a function of the default rule.” Thus, the cost-benefit study “must
be a more open- ended (and inevitably somewhat subjective) assessment of the welfare
consequences.”
 One possible approach to normative analysis when the value of an entitlement varies depending on
the initial assignment of the entitlement is to base legal policy choices not on the joint wealth or
welfare of the parties directly in question – because the answer to the question of which rule
maximizes their joint wealth or welfare may turn on the initial rule choice – but rather on the third-
party effects of the competing rules.
 An alternative approach to normative analysis with varying entitlement values depending on the
initial assignment of the entitlement is to make a judgment about which preferences – the ones
with legal rule A or the ones with legal rule B – deserve greater deference.

2.3 The Importance of Context


 Particularly in light of the central relevance of the endowment effect to normative economic
analysis of law, it is appropriate to emphasize the important role of context in whether this effect
occurs.
 The endowment effect, and not the Coase theorem, provides the best account of the effects of
contract law default rights. The deepening of knowledge about when the endowment effect does
and does not occur – across contract settings and elsewhere – will help refine our understanding of
the scope of this effect and, as a direct consequence, the validity of and limits on conventional
normative economic analysis of law.

3. The Modern Domain of Behavioural Law and Economics


 It is useful for purposes of behavioural law and economics analysis to view human actors as
departing from traditional economic assumptions in three distinct ways: human actors exhibit
1. bounded rationality,
2. bounded willpower, and
3. bounded self-interest.

3.1 Bounded Rationality


3.1.1 Judgment Errors
 Implicit Racial and Other Group-Based Bias. Perhaps the most elementary definition of the word
“bias” is that a person believes, either consciously or implicitly, that members of a racial or other
group are somehow less worthy than other individuals.
 The idea of implicit bias, by contrast, suggests that discriminatory behaviour often stems not from
taste-based preferences that individuals are consciously acting to satisfy, but instead from implicit
attitudes afflicting individuals who seriously and sincerely disclaim all forms of prejudice, and who
would regard their implicitly biased judgments as “errors.”
 A particular measure, known as the Implicit Association Test (IAT), has had particular influence in
the implicit bias research. In the IAT, individuals are asked to categorize words or pictures into four
groups, two of which are racial or other groups (such as “black” and “white”), and the other two of
which are the categories “pleasant” and “unpleasant.”
 Implicit bias is defined as faster categorization when the “black” and “unpleasant” categories are
paired than when the “black” and “pleasant” categories are paired. The IAT reveals significant
evidence of implicit bias, including among those who assiduously deny any prejudice.
 Scores on the IAT and similar tests show correlations with third parties’ ratings of the degree of
general friendliness shown by individuals toward members of other groups.
 Implicit bias may often result from the way in which the characteristic of race or other group
membership operates as a sort of “heuristic” – a form of mental shortcut.
 The “Heuristics and Biases” Literature. One such judgment error is optimism bias, in which
individuals believe that their own probability of facing a bad outcome is lower than it actually is.
 Optimism bias is probably highly adaptive as a general matter; by thinking that things will turn out
well, people may often increase the chance that they will turn out well.
 A second judgment error prominent in behavioural law and economics is self-serving bias.
Whenever there is room for disagreement about a matter to be decided by two or more parties –
and of course there often is in litigation as well as elsewhere – individuals will tend to interpret
information in a direction that serves their own interests.
 A third judgment error extensively discussed in behavioural law and economics is the hindsight
bias, in which decision makers attach excessively high probabilities to events simply because they
ended up occurring.

3.2 Bounded Willpower


 The central normative question concerns how to view a decision to spend rather than save, to
consume desserts rather than salads, or to go to the movies rather than to the gym.
 One possible answer, partially reminiscent of a strand of the endowment effect discussion above, is
that saving, eating salad, or going to the gym creates desirable third-party effects that are absent
with spending, eating dessert, or watching movies. Another possible answer, also with an analogue
in the earlier discussion, is that the preferences of the self who wishes to save, eat salad, or go to
the gym reflect a considered judgment about the matter in question

3.3 Bounded Self-Interest


 Much of traditional law and economics posits a relatively narrow set of ends that individuals are
imagined to pursue.
 Bounded self-interest within behavioural economics emphasizes that many people care about both
giving and receiving fair treatment in a range of settings
 A central question raised by bounded self-interest is what counts as “fair” treatment. Behavioural
economics suggests that people will judge outcomes as unfair if they depart substantially from the
terms of a “reference transaction” – a transaction that defines the benchmark for the parties’
interactions

4. Illustrative Applications of Behavioural Law and Economics


4.1 “Distributive Legal Rules”
 A leading law and economics argument in favor of addressing distributional issues through the tax
system rather than through non-tax legal rules is the argument that any desired distributional
consequence can be achieved at lower cost through the tax system than through distributive legal
rules.
 whatever the desired distributive consequences, under traditional economic analysis they can
always be achieved at lower cost by choosing the wealth-maximizing legal rule and adjusting
distributive effects through the tax system than by choosing a non-wealth-maximizing rule because
of its distributive properties.
 Work incentives are assumed to be distorted by the same amount as a result of a probabilistic, non-
tax mode of redistribution, such as the law governing accidents, as they are as a result of a tax.
 The expected costs of the two forms of redistribution are the same, and thus behavior is affected in
the same way. At least that is the assumption that traditional economic analysis makes.
 a salient feature of distributive tort liability is the uncertainty of its application to any given actor.
 Bounded rationality in the form of optimism bias – the tendency to think negative events are less
likely to happen to oneself than they actually are – suggests that uncertain events are often
processed systematically differently from certain events.
 What does optimism bias with respect to the probability of the negative event of tort liability imply
for the distortionary effects of distributive tort liability as opposed to taxes? People will tend to
underestimate the probability that they will be hit with liability under distributive tort liability;
therefore, their perceived cost of the rule will be lower.
 With underestimation of the probability of liability, work incentives will typically be distorted less by
distributive legal rules than by taxes.
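A hypothetical numerical illustration of the argument (all figures invented): compare a certain tax with probabilistic tort liability that has the same expected cost.

\[
\text{Certain tax: perceived cost} = \$1{,}000 .
\]
\[
\text{Tort liability: } 0.10 \times \$10{,}000 = \$1{,}000 \text{ in expectation, but with optimism bias } \hat{p} = 0.05:\; 0.05 \times \$10{,}000 = \$500 .
\]

Because the perceived cost of the uncertain liability is lower, the same expected redistribution distorts work incentives less when done through the legal rule than through the tax.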

4.2 Discovery Rules in Litigation


 The U.S. system relies centrally upon an adversary approach, under which competing sides are
represented by legal counsel who argue in favor of their respective positions.
 The U.S. legal system contains a set of rules governing when and how one side in a legal dispute
may obtain (“discover”) information from the other side.
 Under conventional economic analysis this approach should tend to increase the convergence of
parties’ expectations and, thus, the rate at which they settle disputes out of court.
 The phenomenon of self-serving bias described in section 3.1.1 above, however, suggests that
individuals often interpret information differently depending on the direction of their own self-
interest.
 The timing of exposure to the case materials matters because “[s]elf-serving interpretations are
likely to occur at the point when information about roles is assimilated,” for the simple reason that
it “is easier to process information in a biased way than it is to change an unbiased estimate once it
has been made”
 The prospect that litigants will interpret at least some information in a self-serving fashion means
that the exchange of information in litigation may cause a divergence rather than convergence of
parties’ expectations.

4.3 The “Business Judgment” Rule in Corporate Law


 A central rule of U.S. corporate law is the “business judgment” rule, according to which corporate
officers and directors who are informed about a corporation’s activities and who approve or
acquiesce in these activities have generally fulfilled their duties to the corporation as long as they
have a rational belief that such activities are in the interests of the corporation.
 This highly deferential standard of liability makes it difficult to find legal fault for the decisions of
corporate officers and directors.
 Hindsight bias suggests that things will often seem negligent in hindsight, once a negative outcome
has materialized and is known, so the business judgment rule insulates officers and directors from
the risk of such hindsight-influenced liability determinations.

4.4 Rules Governing Contract Renegotiation


 an individual with bounded willpower will often have difficulty sticking to even the best-laid plans.
 Only if renegotiation is impossible can the parties avoid the effects of bounded willpower and
achieve commitment to the initial plan.
 The primary exception to enforcement of such agreements concerns renegotiated agreements
coerced by one party’s threat to breach the original contract if renegotiation does not occur. But in
the model discussed here, renegotiation is truly welfare-enhancing – at the time at which it occurs –
for both parties relative to the original contract, so the coercion concern does not apply, and thus
the default rule would allow enforcement of the renegotiated agreement.

4.5 The Content of Consumer Protection Law

 Laws banning usurious lending and price gouging when such activities are prevalent are a
straightforward prediction of the theory of bounded self-interest described above.
 The account above of bounded self-interest suggests that if trades are occurring frequently in a
given jurisdiction at terms far from those of the reference transaction, there will be strong pressure
for a law banning such trades. Note that the prediction is not that all high prices (ones that make it
difficult or impossible for some people to afford things they might want) will be banned; the
prediction is that transactions at terms far from the terms on which those transactions generally
occur in the marketplace will be banned.
 Of course, waiting in line for scarce goods is precisely what happens with laws against price gouging.
Thus, pervasive fairness norms appear to shape attitudes (and hence possibly law) on both usury
and price gouging.
 As a positive matter, behavioral law and economics predicts that if trades are occurring with some
frequency on terms far from those of the reference transaction, then legal rules will often ban
trades on such terms.

5. Debiasing through Law


 The basic promise of strategies for debiasing through law is that these strategies will often provide a
middle ground between unyielding adherence to the assumptions of traditional economics, on the
one hand, and the usual behavioural law and economics approach of accepting departures from
those assumptions as a given, on the other.

5.1 Debiasing Through Substantive and Procedural Law


-

5.2 General Typology of Strategies for Debiasing through Law


 Figure 1 generalizes the point by mapping the terrain of strategies for debiasing through law more
fully.
 The column division marks the line between procedural rules governing the adjudicative process
and substantive rules regulating actions taken outside of the adjudicative process.
 The row division marks the line between debiasing actors in their capacity as participants in the
adjudicative process and debiasing actors in their capacity as decision makers outside of the
adjudicative process.
 The upper-left box in this matrix represents the type of debiasing through law on which the prior
work on such debiasing has focused: the rules in question are procedural rules governing the
adjudicative process, and the actors targeted are individuals in their capacity as participants in the
adjudicative process
 the lower-left box in the matrix is marked with an “X” because procedural rules governing the
adjudicative process do not have any obvious role in debiasing actors outside of the adjudicative
process – although these rules certainly may affect such actors’ behavior in various ways by
influencing what would happen in the event of future litigation.
 The lower-right box in the matrix represents the category of debiasing through substantive law: the rules in
question are substantive rules regulating actions taken outside of the adjudicative process, and the
actors targeted are decision makers outside of the adjudicative process.
 The upper right corner of the matrix represents a hybrid category that warrants brief discussion, in
part to demarcate it from the category (just discussed) of debiasing through substantive law. In this
hybrid category, it is substantive, rather than procedural, law that is structured to achieve
debiasing, but the judgment error that this debiasing effort targets is one that arises within, rather
than outside of, the adjudicative process.

6. Conclusion
 The ultimate sign of success for behavioral economics will be that what is now behavioral
economics will become simply “economics.” The same observation applies to behavioral law and
economics.
 Debiasing through law may hasten the speed at which this transition occurs by pointing to a wide
range of possibilities for recognizing human limitations while at the same time avoiding the step of
paternalistically removing choices from people's hands.
Alvin E. Roth: Repugnance as a Constraint on Markets

 Some kinds of transactions are repugnant in some times and places and not in others.
 Economists need to understand better and engage more with the phenomenon of repugnant
transactions. Attitudes about the repugnance (or other kinds of inappropriateness) of transactions
shape whole markets, and therefore shape what choices people face.

Repugnant Markets
 As the examples in Table 1 and others show, even where there may be willing suppliers and demanders
of certain transactions, aversion to those transactions by others may constrain or even prevent
the transactions.
How Repugnance Combines with Other Factors
 Some markets are banned or limited for combinations of reasons that include both repugnance
and also concerns about negative externalities. In some repugnant markets transactions may
not always involve two willing parties. But repugnance can be present even when the
externalities are minimal.
 Therefore, bans on some repugnant markets sometimes seem only to limit purely private consumption.
 Some kinds of repugnance are also intermixed with concerns about providing incentives for bad
behaviour.
Dwarf tossing
 Essentially dwarf tossing was so repugnant that it imposed a negative externality by diminishing
human dignity, a public good.

Repugnance is Hard to Predict


 Repugnance, whether alone or in alliance with other objections, can impose serious constraints
on various transactions. However, predicting when repugnance will play a decisive role is
difficult, because apparently similar activities and transactions are often judged differently. For
example, while dwarf tossing is repugnant in many places, wife carrying, another sport that
involves persons of disparate stature, has North American and world championships.
 A proposed prediction market for terrorist events met with vigorous denunciation, but general
prediction markets have thrived, including some that include bets on various aspects of current
events

Cash Payments and Repugnance
 One often-noted regularity is that some transactions that are not repugnant as gifts and in-kind
exchanges become repugnant when money is added.
 Offering money is often regarded as inappropriate even when not repugnant. For example,
dinner guests at your home may respond in-kind, by bringing wine or inviting you to dinner in
return, but they would likely not be invited back if they offered to pay for their dinner.
 Sometimes the level of the price is regarded as repugnant rather than the existence of a price:
after a natural disaster it is often regarded as acceptable to sell supplies at their pre-disaster
price, but as repugnant price-gouging to raise the price. There may be resistance to charging for
goods that have previously been provided for free or at low cost, like water or the right to drive
in cities during rush hours.
 Of course, sometimes laws or public outrage focus on monetary transactions only because they
are easier to ban than nonmonetary transactions.
 Concerns about the monetization of transactions fall into three principal classes:
1. One concern is objectification: that is, the fear that putting a price on certain things and
buying or selling them might move them into a class of impersonal objects to which they
should not belong.
2. A second concern is that offering substantial monetary payments might be coercive, in the
sense that it might leave some people, particularly the poor, open to exploitation from
which they deserve protection.
3. A third concern, sometimes less clearly articulated, is that monetizing certain transactions
that might not themselves be objectionable may cause society to slide down a slippery
slope to genuinely repugnant transactions.

Compensating Organ Donors


Objectification
 Many people clearly regard monetary compensation for organ donation as something that
transforms a good deed into a bad one. US law does, however, exempt from the ban on payment
the reimbursement of expenses directly incurred by organ donors, like travel expenses.
Coercion
 A concern quite common in the organ transplant literature and elsewhere is that money may be coercive,
so that allowing kidneys to be sold would allow the poor to be exploited.
 “It is an unethical approach to shift the tragedy from those waiting for organs to those exploited
into selling them.” – Kahn and Delmonico
Slippery Slope
 Concern that monetizing some transactions might lead to other changes seems to lurk beneath
the more explicit concerns.
 This concern is not altogether different from concerns about how legalizing certain kinds of
voluntary transactions may change the terms of trade so as to disadvantage those who don’t
wish to participate in them.
 Some (but by no means all) of the opposition to monetary compensation for deceased donor
organs seems also to be of the slippery slope variety, with the concern being that it might pave
the way for live organ sales.
 Some researchers worry that monetary markets could reduce the incidence of deceased
donation, which supplies not only kidneys but other organs as well, and about the loss of intrinsic
motivation that might accompany the introduction of monetary payments.
Other Sources of Repugnance Toward Paying For Live Donor Kidneys
 A surgeon who is already overcoming some distaste for performing a nephrectomy (kidney
removal) on a healthy person may find the distaste more difficult to overcome if he views himself
as facilitating a commercial transaction.
Historical Perspective
 -
Economists Voices in the Debate About Organ Sales
 When confronted with repugnance toward a market transaction, economists often respond as if
a sufficiently clear argument focused on the welfare gains due to trade will overcome that
repugnance.
 The claim that organ sales “objectify” people is met by noting that in labour markets generally,
poorer workers tend to take more dangerous and less pleasant jobs in return for wages, and that
we mostly think they do not diminish their humanity by doing so. The response to arguments
about “coercion” is typically that voluntary transactions increase welfare of both the seller and
the buyer, if the transaction is truly voluntary. The response to “slippery slope” arguments is that
markets can be regulated if necessary.
 The weakness of the familiar arguments suggests that they are attempts to justify the deep feelings of
repugnance which are the real driving force of prohibition, and feelings of repugnance among the rich
and healthy, no matter how strongly felt, cannot justify removing the only hope of the destitute and dying.
 Some discussion has focused on thinking about how the worst abuses of unregulated markets
could be reduced by regulations. Such regulations might include restrictions on compensation;
allowing outright purchases but only by a single authorized governmental buyer; requiring an
above-market-clearing price (that might be bundled with insurance or annuities); mandatory
standards for the health and postoperative care of donors; or perhaps bans on international
trade (since the thought of rich Americans importing kidneys from the third world seems to
arouse a repugnance distinct from that toward the kidney sales themselves).
 Some might note that most of the arguments designed to disarm repugnance to legalizing the
sale of a kidney would also, in principle, apply to a live donor who was willing, for a sufficiently
high price, to sell an eye, an arm, a leg—or a heart.

Market Design When Repugnance Matters


 Sometimes there is an opportunity to correct the market failures associated with unravelling and
exploding offers by creating clearinghouses that will provide a thick market. Clearinghouses are
also sometimes employed to fix market failures due to congestion.

Conclusion
 Repugnance can be a real constraint on markets. Almost whenever I have been involved in
practical market design, the question of whether certain kinds of transactions may be
inappropriate has come up for discussion.
 To say that repugnance is a real phenomenon doesn’t mean that repugnance isn’t sometimes
deployed for strategic purposes by self-interested parties to recruit allies who would not respond
to a clear appeal to narrower motives such as rent seeking.
 One way of seeing the role that repugnance plays in this debate is to compare it to a difficult
technological barrier. If the technological barriers could be overcome that currently prevent,
say, transplanting pig kidneys into human patients, such “xenotransplants” would also end the
kidney shortage.
 repugnance is similar to technological barriers in this respect: markets that we can envision may
nevertheless not be easily achievable. I would not like to guess whether repeal of the widespread
laws against kidney sales is likely to happen more quickly than the advances in
xenotransplantation, or artificial kidneys, or other medical breakthroughs that would end the
shortage of kidneys.
 Of course, there can also be “technological” developments in the law. For example, Volokh
(forthcoming) endorses a “medical right to self-defence,” that would give a person dying of end-
stage renal disease the right to pursue all reasonable avenues to preserve their life, including
purchasing a kidney.
 While economists see very few trade-offs as completely taboo, noneconomists often decline to discuss
trade-offs at all, preferring to focus on the repugnance of transactions like organ sales.
 The current situation can be viewed as a regulated market with the only legal price being zero,
which makes it difficult to prevent unregulated transactions on international black markets.

Kessler & Roth: Getting More Organs for Transplantation

 Living donors give an organ while alive. Living donation of kidneys, which represented 96 percent
of all US living organ donations in 2012 (OPTN), is possible since humans have two kidneys but can
live a healthy life with only one, allowing the other to be removed and donated.
 Just over 3 percent of living donor kidneys in the United States came from non-directed donors in
2012
 additional organs are recovered when next of kin consent to donation on behalf of unregistered
deceased. (Next of kin are also asked to consent to donation of registered donors. While this
confirmation is not deemed to be legally necessary to proceed with donation, it is usually done
anyway.)
 One standard first response by economists is that we can solve excess demand by raising the price
from the current legal limit of zero by allowing organs to be bought and sold, potentially for both
living and deceased donation.
 there is evidence that the manner of the payment to an organ donor may mitigate some of the
repugnance concerns. Niederle and Roth (forthcoming) find that payments to non-directed kidney
donors are deemed more acceptable when they arise as a reward for heroism and public service
than when they are viewed as a payment for kidneys.
 As kidney exchange began to assemble pools of patient-donor pairs, it became possible to offer
non-directed donors the possibility of initiating a long chain of donations, in which the nondirected
donor would donate to the patient in an incompatible pair, whose donor would donate to another
pair, and so on
 Some nations have introduced allocation schemes that provide priority on organ donor waiting lists
to individuals who have previously registered as donors.
 The priority allocation rule led to a large, significant increase in donation. Additional treatments
revealed that the main mechanism driving this increase was the monetary incentive effect of
priority; the same increase in donation was induced by providing a rebate for donation equal to the
expected value of having priority or by lowering the cost of donation by the expected value of
having priority.
 The experiment assumed that the allocation rule could be implemented so that everyone who
registered as an organ donor to receive priority would actually donate when in a position to do so
 The check box (on the sign-up form for becoming an organ donor) has the potential to operate as a
loophole in the priority allocation system whereby an individual signs a donor card to receive
priority on the waiting list if he is ever in need of an organ but expects his family or clergyman to
decline the donation if he dies and is in a position to donate. Essentially it allows individuals to
receive priority even though they would never make a donation.
 If this loophole exists it completely eliminates the benefit of priority.
 This leads then to fewer donations under a priority system with a loophole than under a first-come,
first-served system without priority.
 However, subjects treat taking the loophole as a worse affront than simply not donating,
presumably since those who take the loophole are explicitly abusing a system designed to reward
donors.
 European countries that have opt-out systems have vastly higher donor registration rates than the
European countries that have opt-in systems. In the United States, organ donation falls under
gift law and so requires a positive statement of support in favour of donation (an opt-in).
 Subjects are less likely to report that next of kin should donate the organs of an unregistered
deceased if the deceased explicitly said no to registration in a mandated choice framed question
than if the deceased simply chose not to opt in. This suggests that asking individuals to register
under a mandated choice frame may make it harder to get permission for organ donation from the
next of kin of those who remain unregistered.

Camerer & Loewenstein: Behavioural Economics: Past, Present, and Future – Summary

Intertemporal Choice
 The discounted-utility (DU) model assumes that people have instantaneous utilities from their
experience each moment, and that they choose options that maximize the present discounted sum
of these instantaneous utilities.
 Typically, it is assumed that instantaneous utility each period depends solely on consumption in that
period, and that the utilities from streams of consumption are discounted exponentially, applying
the same discount rate in each period.

Time Discounting
 A central issue in economics is how agents trade off costs and benefits that occur at different points in time. The standard assumption is that people weight future utilities by an exponentially declining discount factor d(t) = δ^t, where 0 < δ < 1. Note that the discount factor δ is often expressed as 1/(1+r), where r is a discount rate.
 A simple hyperbolic time discounting function of d(t) = 1/(1+ kt) tends to fit experimental data
better than exponential discounting.
 Immediacy effect: discounting is dramatic when one delays consumption that would otherwise be immediate.
 Hyperbolic time discounting implies that people will make relatively farsighted decisions when
planning in advance – when all costs and benefits will occur in the future – but will make relatively
short-sighted decisions when some costs or benefits are immediate.
 The systematic changes in decisions produced by hyperbolic time discounting create a time
inconsistency in intertemporal choice not present in the exponential model.
 Somebody with time-inconsistent hyperbolic discounting will wish prospectively that in the future he would take farsighted actions; but when the future arrives he will behave against his earlier wishes, pursuing immediate gratification rather than long-run well-being (see the numerical sketch at the end of this section).
 Quasi-hyperbolic time discounting is basically standard exponential time discounting plus an immediacy effect; a person discounts delays in gratification equally except for the current one – caring differently about well-being now versus later.
 Partial illiquidity of an asset plays a role in helping consumers constrain their own future consumption.
 An important question in modelling self-control is whether agents are aware of their self-control problem (“sophisticated”) or not (“naïve”). Naïveté typically makes the damage from poor self-control worse. In some cases, however, being sophisticated about one’s self-control problem can exacerbate yielding to temptation: if you are aware of your tendency to yield to a temptation in the future, you may conclude that you might as well yield now; if you naively think you will resist temptation longer in the future, that may motivate you to think it is worthwhile resisting temptation now.
 A model by Loewenstein and Prelec includes effects such as the “magnitude effect”, “temporal losses”, and lower discount rates for losses than for gains. This model departs in two major ways from DU: first, it incorporates a hyperbolic discount function; second, it incorporates a utility function with special curvature properties that is defined over gains and losses rather than final levels of consumption.
 Negative time discounting – if people like savouring pleasant future activities, they may postpone them to prolong the pleasure (and they may get painful activities over with quickly to avoid dread).
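The contrast between the two discount functions can be made concrete with a small numerical sketch. The Python fragment below is illustrative only: the function names and the values of δ, k, the payoffs, and the delays are assumptions chosen for the example, not figures from the article.

# Illustrative sketch: preference reversal under hyperbolic vs. exponential discounting.
def exponential(t, delta=0.95):
    return delta ** t             # d(t) = delta^t, with 0 < delta < 1

def hyperbolic(t, k=0.25):
    return 1.0 / (1.0 + k * t)    # d(t) = 1/(1 + k*t)

small_soon = (100, 0)   # 100 available immediately
large_late = (120, 2)   # 120 available two periods later

for shift in (0, 10):   # evaluate the same pair now vs. planned 10 periods in advance
    for name, d in (("exponential", exponential), ("hyperbolic", hyperbolic)):
        v_soon = small_soon[0] * d(small_soon[1] + shift)
        v_late = large_late[0] * d(large_late[1] + shift)
        choice = "small-soon" if v_soon > v_late else "large-late"
        print(f"delay shift {shift:2d}, {name:12s}: picks {choice}")

# With these numbers the exponential discounter makes the same choice at both horizons,
# while the hyperbolic discounter plans to wait for the larger reward when both options
# lie in the future but switches to the smaller reward once it becomes immediate.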

A gradient of childhood self-control predicts health, wealth, and public safety

Abstract
 Childhood self-control predicts physical health, substance dependence, personal finances, and criminal offending outcomes, following a gradient of self-control.
 The sibling with lower self-control had poorer outcomes, despite shared family background. Interventions addressing self-control might reduce a panoply of societal costs, save taxpayers money, and promote prosperity.

Introduction
 Self-control is an umbrella construct that bridges concepts and measurements from different
disciplines (e.g., impulsivity, conscientiousness, self-regulation, delay of gratification, inattention-
hyperactivity, executive function, willpower, intertemporal choice).
 Neuroscientists study self-control as an executive function subserved by the brain’s frontal cortex.
 Behavioural geneticists have shown that self-control is under both genetic and environmental
influences.
 Health researchers report that self-control predicts early mortality; psychiatric disorders; and
unhealthy behaviours, such as overeating, smoking, unsafe sex, drunk driving, and noncompliance
with medical regimens. Sociologists find that low self-control predicts unemployment and name
self-control as a central causal variable in crime theory, providing evidence that low self-control
characterizes lawbreakers.
 preschool programs that targeted poor children 50 y ago, although failing to achieve their stated
goal of lasting improvement in children’s intelligence quotient (IQ) scores, somehow produced by-
product reductions in teen pregnancy, school dropout, delinquency, and work absenteeism
 policy-makers might exploit this by enacting so-called “opt-out” schemes that tempt people to eat
healthy food, save money, and obey laws by making these the default options that require no
effortful self-control.
 First, we tested whether children’s self-control predicted later health, wealth, and crime similarly at
all points along the self-control gradient, from lowest to highest self-control.
 some Dunedin study members moved up in the self-control rank over the years of the study, and
we were able to test the hypothesis that improving self-control is associated with better health,
wealth, and public safety.
 the hypothesis that individual differences in preschoolers’ self-control predict outcomes in
adulthood. If so, early childhood would also be an intervention window.


Results
 Mean levels of self-control were higher among girls than boys, but the health, wealth, and public
safety implications of childhood self-control were equally evident and similar among boys and girls
 Dunedin study children with greater self-control were more likely to have been brought up in
socioeconomically advantaged families and had higher IQs, raising the possibility that low self-
control could be a proxy for low social class origins or low intelligence.
Predicting Health
 Childhood self-control predicted adult health problems, even after accounting for social class origins
and IQ.
 As adults, children with poor self-control were not at elevated risk for depression. They had
elevated risk for substance dependence, however, even after accounting for social class and IQ.
Predicting Wealth

 Childhood self-control also foreshadowed the study members’ financial situations. Although the
study members’ social class of origin and IQ were strong predictors of their adult socioeconomic
status and income, poor self-control offered significant incremental validity in predicting the
socioeconomic position they achieved and the income they earned
 Childhood self-control predicted whether or not these study members’ offspring were being reared
in one-parent vs. two- parent households (e.g., the study member was an absent father or single
mother), also after accounting for social class and IQ.
 At the age of 32 y, children with poor self-control were less financially planful. Compared with other
32-y-olds, they were less likely to save and had acquired fewer financial building blocks for the
future
 Children with poor self-control were also struggling financially in adulthood. They reported more
money-management difficulties and had accumulated more credit problems.
 Poor self-control in childhood was a stronger predictor of these financial difficulties than study
members’ social class origins and IQ.
Predicting Crime
 Children with poor self-control were more likely to be convicted of a criminal offense, even after
accounting for social class origins and IQ
Self-Control Gradient
 The self-control gradient was even apparent when we removed children in the least and most self-controlled quintiles.
 The childhood measure of self-control was significantly correlated with a personality measurement
of self-control administered to our cohort in young adulthood, at a moderate magnitude, consistent
with expectations.
 As a caveat, it is not clear that natural history change of the sort we observed in our longitudinal
study is equivalent to intervention-induced change.
Self-Control and Adolescent Mistakes
 Data collected at the ages of 13, 15, 18, and 21 y showed that children with poor self-control were
more likely to make mistakes as adolescents, resulting in “snares” that trapped them in harmful
lifestyles.
 More children with low self-control began smoking by the age of 15 y, left school early with no
educational qualifications, and became unplanned teenaged parents. The lower their self-control,
the more of these snares they encountered. In turn, the more snares they encountered, the more
likely they were, as adults, to have poor health, less wealth, and criminal conviction
How Early Can Self-Control Predict Health, Wealth, and Crime?
 pre-schoolers’ self-control significantly predicted health, wealth, and convictions at the age of 32 y,
albeit with modest effect sizes

Sibling Comparison
 Models showed that the 5-y-old sibling with poorer self-control was significantly more likely to
begin smoking as a 12-y-old, perform poorly in school, and engage in antisocial behaviours, and
these findings remained significant even after controlling for sibling differences in IQ.

Comment
 Differences between individuals in self-control are present in early childhood and can predict
multiple indicators of health, wealth, and crime across 3 decades of life in both genders.
Furthermore, it was possible to disentangle the effects of children’s self-control from effects of
variation in the children’s intelligence, social class, and home lives of their families, thereby singling
out self-control as a clear target for intervention policy.
 Differences between children in self-control predicted their adult outcomes approximately as well
as low intelligence and low social class origins, which are known to be extremely difficult to improve
through intervention.
 It has been shown that self-control can change. Programs to enhance children’s self-control have
been developed and positively evaluated, and the challenge remains to improve them and scale
them up for universal dissemination

Save More Tomorrow: Using Behavioural Economics to increase Employee Saving

Abstract
As firms switch from defined-benefit plans to defined-contribution plans, employees bear more
responsibility for making decisions about how much to save. The employees who fail to join the plan or
who participate at a very low level appear to be saving at less than the predicted life cycle savings rates.
Behavioural explanations for this behaviour stress bounded rationality and self-control and suggest that
at least some of the low-saving households are making a mistake and would welcome aid in making
decisions about their saving.
The essence of the SMarT program is straightforward: people commit in advance to allocating a portion
of their future salary increases toward retirement savings.
Our key findings, from the first implementation, which has been in place for four annual raises, are as
follows: (1) a high proportion (78 percent) of those offered the plan joined, (2) the vast majority of
those enrolled in the SMarT plan (80 percent) remained in it through the fourth pay raise, and (3) the
average saving rates for SMarT program participants increased from 3.5 percent to 13.6 percent over
the course of 40 months. The results suggest that behavioural economics can be used to design effective
prescriptive programs for important economic decisions.

Introduction
 Households are assumed to want to smooth consumption over the life cycle and are expected to
solve the relevant optimization problem in each period before deciding how much to consume and
how much to save.
 Actual household behaviour might differ from this optimal plan for at least two reasons.
1. the problem is a hard one, even for an economist, so households might fail to compute the
correct savings rate.
2. even if the correct savings rate were known, households might lack the self-control to reduce
current consumption in favour of future consumption
 the basic idea of SMarT is to give workers the option of committing themselves now to increasing
their savings rate later, each time they get a raise.
A Prescriptive Approach to Increasing Savings Rates
 Raiffa (1982) suggested that economists and other social scientists could benefit from distinguishing
three different kinds of analyses: normative, descriptive, and prescriptive.
 Normative theories characterize rational choice and are often derived by solving some kind of
optimization problem.
 Descriptive theories simply model how people actually choose, often by stressing systematic
departures from the normative theory.
 prescriptive theories are attempts to offer advice on how people can improve their decision making
and get closer to the normative ideal.
 Prescriptions often have a second-best quality.
 Before writing a prescription, one must know the symptoms of the disease being treated.
Households may save less than the life cycle rate for various reasons.
1. determining the appropriate savings rate is difficult, even for someone with economics training.
One obvious solution to this problem is financial education
2. Second, saving for retirement requires self-control.
3. A third problem, closely related to self-control, is procrastination, the familiar tendency to
postpone unpleasant tasks.
 economists have known that intertemporal choices are time-consistent only if agents discount exponentially using a discount rate that is constant over time. But there is considerable evidence that people display time-inconsistent behavior, specifically, weighing current and near-term consumption especially heavily.
 present-biased preferences (as discussed in the two other articles for this session) can be captured
with models that employ hyperbolic discounting. These models come in two varieties: sophisticated
and naive. Sophisticated agents realize that they have hyperbolic preferences and take steps to deal
with the problem, whereas naive agents fail to appreciate at least the extent of their problem.
 Hyperbolic agents procrastinate because they (wrongly) think that whatever they will be doing later
will not be as important as what they are doing now.
 The costs of actively joining the (retirement) plan (typically filling out a short form) are trivial
compared with the potential benefits of the tax-free accumulation of wealth, and in some cases a
“match” is provided by the employer, in which the employer typically contributes 50 cents to the
plan for every dollar the employee contributes, up to some maximum. In contrast, if agents display
procrastination and status quo bias, then automatic enrolment could be useful in increasing
participation rates.
 Consistent with the behavioural predictions, automatic enrolment plans have proved to be
remarkably successful in increasing enrolments.
 A goal of the SMarT plan is to obtain some of the advantages of automatic enrollment while
avoiding some of the disadvantages.
 The program should be simple and should help people approximate the life cycle saving rate if they
are unable to do so themselves.
 Hyperbolic discounting implies that opportunities to save more in the future will be considered more attractive than those in the present (see the numerical sketch at the end of this subsection).
 Procrastination and inertia suggest that once employees are enrolled in the program, they should
remain in until they opt out.
 The final behavioural factor that should be considered in designing a prescriptive savings plan is loss
aversion, the empirically demonstrated tendency for people to weigh losses significantly more
heavily than gains.
 Loss aversion affects savings because once households get used to a particular level of disposable
income, they tend to view reductions in that level as a loss.
 The combination of loss aversion and money illusion suggests that pay increases may provide a
propitious time to try to get workers to save more, since they are less likely to consider an increased
contribution to the plan as a loss than they would at other times of the year.
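The numerical sketch below illustrates why a present-biased saver may reject an immediate increase in contributions yet accept the identical increase scheduled to start at a future raise, which is the timing the SMarT design exploits. It uses the quasi-hyperbolic (beta-delta) weights discussed earlier in this guide; the parameter values, the payoff numbers, and the helper functions are hypothetical.

# Illustrative beta-delta sketch: weight 1 on the present, beta*delta^t on period t > 0.
BETA, DELTA = 0.7, 0.99

def weight(t):
    return 1.0 if t == 0 else BETA * DELTA ** t

def net_value_of_saving(start):
    # Saving costs 1 unit of consumption utility in each of the 12 months starting at
    # month `start`, and yields a retirement benefit worth 36 utility units at month 120.
    cost = sum(weight(t) for t in range(start, start + 12))
    benefit = 36 * weight(120)
    return benefit - cost

print("start saving now:          ", round(net_value_of_saving(0), 2))   # negative
print("start saving at next raise:", round(net_value_of_saving(12), 2))  # positive

# The plan that starts today carries an undiscounted immediate cost, so its net value is
# negative for the present-biased agent; the same plan deferred to the next raise has a
# positive net value, so the agent is willing to commit to it in advance.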

The SMarT Program


 Save More Tomorrow (SMarT). The plan has four ingredients.
1. employees are approached about increasing their contribution rates a considerable time before
their scheduled pay increase. Because of hyperbolic discounting, the lag between the sign-up
and the start-up dates should be as long as feasible.
2. if employees join, their contribution to the plan is increased beginning with the first paycheck
after a raise. This feature mitigates the perceived loss aversion of a cut in take-home pay.
3. the contribution rate continues to increase on each scheduled raise until the contribution rate
reaches a preset maximum. In this way, inertia and status quo bias work toward keeping people
in the plan.
4. the employee can opt out of the plan at any time.
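A small simulation sketch of how ingredients 2 and 3 compound over several raises. Only the 3.5 percent starting contribution rate comes from the article; the per-raise step, the cap, the size of the raises, and the salary figure are hypothetical values chosen for illustration.

# Illustrative SMarT escalation mechanics (hypothetical step, cap, raise size, and salary).
def smart_schedule(start_rate=3.5, step=3.0, cap=15.0, n_raises=4):
    """Contribution rate (percent of pay) after each annual raise, up to a preset cap."""
    rates = [start_rate]
    for _ in range(n_raises):
        rates.append(min(rates[-1] + step, cap))
    return rates

def take_home(salary, raise_pct, rate_before, rate_after):
    """Take-home pay just before and just after a raise-plus-contribution-step."""
    new_salary = salary * (1 + raise_pct)
    return salary * (1 - rate_before / 100), new_salary * (1 - rate_after / 100)

rates = smart_schedule()
print("contribution rates across raises:", rates)   # e.g. [3.5, 6.5, 9.5, 12.5, 15.0]

salary = 40_000
for before, after in zip(rates, rates[1:]):
    old_pay, new_pay = take_home(salary, 0.035, before, after)
    assert new_pay >= old_pay   # the step coincides with the raise, so take-home pay never falls
    salary *= 1.035

# Because the contribution increase is timed to the raise, the saver never experiences a
# visible cut in take-home pay, which is what mitigates the loss-aversion concern.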
 It is not possible to say on theoretical grounds which features are most important, nor can theory
tell us the ideal levels to select for many of the parameters that must be picked (e.g., the delay
between the solicitation letter and the start of the program, the rate of increase, and the methods
of soliciting and educating potential participants).

Outcome
Tables in the End

Conclusion
 The initial experience with the SMarT plan has been quite successful. Many of the people who were
offered the plan elected to use it, and a majority of the people who joined the SMarT plan stuck
with it. Consequently, in the first implementation, for which we have data for four annual raises,
SMarT participants almost quadrupled their saving rates. Of course, one reason why the SMarT plan
works so well is that inertia is so powerful.
 Once people enroll in the plan, few opt out. The SMarT plan takes precisely the same behavioural
tendency that induces people to postpone saving indefinitely (i.e., procrastination and inertia) and
puts it to use. As the financial consultant involved in the first implementation has noted, in
hindsight it would have been better to offer the SMarT plan to all participants, even those who were
willing to make their initial savings increase more than the first step of the SMarT plan. Very few of
these eager savers ever got around to changing their savings allocations again, whereas the SMarT plan participants, whose contributions kept rising automatically, were already saving more than these eager savers after just 16 months.
 Some economists have criticized practices such as automatic enrollment and the SMarT plan on
the grounds that they are paternalistic, a term that is not meant to be complimentary.
 The authors agree that these plans are paternalistic, but since no coercion is involved, they
constitute what Sunstein and Thaler (2003) call “libertarian paternalism.” Libertarian paternalism is
a philosophy that advocates designing institutions that help people make better decisions but do
not impinge on their freedom to choose. Automatic enrollment is a good example of libertarian
paternalism.
Fehr & Rangel: Neuroeconomic Foundations of Economic Choice – Recent Advances

Introduction
 The brain controls human behavior. Economic choice is no exception. Recent studies have shown
that experimentally induced variation in neural activity in specific regions of the brain changes
people’s willingness to pay for goods, renders them more impatient, more selfish, and more willing
to violate social norms and cheat their trading partner.
 Transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS), which
enable the researcher to exogenously increase or decrease neural activity in specific regions of the
cortex before subjects make decisions in experimental tasks that elicit their preferences.
 Neuroeconomics combines methods and theories from neuroscience, psychology, economics, and
computer science to investigate three basic questions:
1. What are the variables computed by the brain to make different types of decisions, and how do
they relate to behavioral outcomes?
2. How does the underlying neurobiology implement and constrain these computations?
3. What are the implications of this knowledge for understanding behavior and well-being in
various contexts: economic, policy, clinical, legal, business, and others?

 This article has two main goals:


1. we provide an overview of what has been learned about how the brain makes choices in two
types of situations: simple choices among small numbers of familiar stimuli (like choosing
between an apple or an orange), and more complex choices involving tradeoffs between
immediate and future consequences (like eating a healthy apple or a less-healthy chocolate
cake).
2. we show that even at this early stage in the field, insights with important implications for
economics have already been gained.
 Neural activity is stochastic by its very nature and thus the neural computations necessary for
making choices are stochastic.
 neuroeconomic research indicates that consumption choices can be biased by simple manipulations
of subjects’ visual attention and the opportunity costs of time, thus providing insights into how
marketing actions can affect the probability of mistakes.

Simple Choices: Computational Model


 Simple choices are the simplest instance of economic decision making that can be studied using the
neuroeconomics approach. A typical example is whether to choose an apple or an orange for
dessert.
 The computational model of simple choice has five key components
1. The brain computes a decision value signal for each option at the time of choice.
 In this model, economic choice is driven by the computation and comparison of “decision value”
signals. In particular, the model assumes that the decision values are computed from the instant the
decision process starts to the moment the choice is made.
 Because choices are made by computing and comparing decision values, these signals causally drive
the choices that are made: options that are assigned a higher decision value will be more likely to be chosen.
 How does one test that the brain encodes a certain variable—say, decision values—at a particular
time in the choice process? The typical experiment has three main components.
1. some form of behavioral data is used to estimate the value that the brain assigned to the signal
of interest.
2. a measurement of neural activity is taken during the choice process in particular brain areas.
3. statistical methods are used to test if neural activity during the period of interest is modulated
by the signal (or signals) of interest.
 If the neural activity is statistically significantly related to the signal of interest, then this is taken to
be evidence consistent with the hypothesis that activity in that neural substrate encodes the signal.
 An item is called “appetitive” if an animal would work to consume it (for example, sugar when
hungry) and “aversive” if an animal would work to avoid it (for example, an electric shock). A
common hypothesis in psychology is that choices among appetitive items, sometimes called
approach choice, and choices among aversive items, sometimes called avoidance choice, involve
separate systems. These studies are important because they show that, at least in the case of
simple choice, the same area of the brain seems to encode the decision value for both types of
choices, thus providing evidence against the multiple system hypothesis.
 The existing evidence suggests that the decision value signals are precursors, and not
consequences, of the choice process. The ventromedial prefrontal cortex seems to encode decision
values for all options being considered before the choice is made, and the signals do not depend on
which option is chosen.
 It is possible to investigate the causality of these signals by experimentally manipulating the value
signal in the ventromedial prefrontal cortex and examining the resulting behavioral changes.
2. The brain computes an experienced utility signal at the time of consumption.
 The brain needs to keep track of the consequences of its decisions to learn how to make choices in
the future. A key component of such learning is the computation of an experienced utility signal at
the time of consumption that reflects the actual consequences for the organism of consuming the
chosen option.
 decision values are distinct from the experienced utility signal: decision values are forecasts about
the experienced utility signal that will be computed at the time of consumption. Indeed, decision
values and experienced utility need not agree with each other.
 several studies have found that such signals are present in various parts of the orbitofrontal cortex
and the nucleus accumbens at the time of consuming a variety of goods including music, liquids,
foods, and art.
3. Choices are made by comparing decision values using a “drift-diffusion model.”
 The drift-diffusion model was developed by psychologist Roger Ratcliff to explain the accuracy and
response times in any task involving binary responses that can be elicited in a handful of seconds.
 The drift-diffusion model can also be applied to the comparison of decision values.
 For simplicity, consider the case of a binary choice involving two options, x and y. Krajbich and
Rangel (2011) present a generalization to multi-item choice. As Figure 1 illustrates, a binary choice is
made by dynamically computing a relative decision value signal, denoted by R, that measures the
value difference of x versus y. The signal starts at zero and at every instant t evolves according to
the formula
R_t = R_{t-1} + θ[v(x) – v(y)] + ε_t,
where R_t denotes the level of the signal at instant t (measured from the start of the choice process), v(x) and v(y) denote the decision values assigned to the two options, θ is a constant that affects the speed of the process, and ε_t denotes an independent and identically distributed error term with variance s². The process continues until a prespecified barrier is crossed: x is chosen if the upper barrier at +B is crossed first, and y is chosen if the lower barrier at –B is crossed first.
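The accumulation-to-barrier process just described can be simulated directly. The Python sketch below is illustrative: the parameter values for θ, s, and the barrier height are arbitrary choices, not estimates from the literature, and the simulation simply discretizes the process one step per time unit.

import random

def ddm_trial(v_x, v_y, theta=0.1, s=1.0, barrier=10.0):
    # Discrete-time version of R_t = R_{t-1} + theta*(v(x) - v(y)) + noise, run until
    # the accumulator crosses +barrier (choose x) or -barrier (choose y).
    r, t = 0.0, 0
    while abs(r) < barrier:
        r += theta * (v_x - v_y) + random.gauss(0.0, s)
        t += 1
    return ("x" if r > 0 else "y"), t    # choice and reaction time (in steps)

def summarize(v_x, v_y, n=5_000):
    trials = [ddm_trial(v_x, v_y) for _ in range(n)]
    p_x = sum(choice == "x" for choice, _ in trials) / n
    mean_rt = sum(rt for _, rt in trials) / n
    return p_x, mean_rt

for diff in (0.5, 1.0, 2.0):   # larger value difference = easier choice
    p_x, rt = summarize(diff, 0.0)
    print(f"v(x) - v(y) = {diff}: P(choose x) = {p_x:.2f}, mean RT = {rt:.0f} steps")

# Larger value differences produce faster responses and a higher probability of picking
# the better option; because of the noise term, the worse option is still chosen sometimes,
# and the choice probability traces out a roughly logistic curve in the value difference.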
 The drift-diffusion model has several important features.
1) since the relative decision value signal evolves stochastically, choices are inherently noisy, and
the amount of noise is proportional to the parameter s².

2) the model predicts that the probability of choosing x is a logistic function of the difference in
the decision value signals [v(x) – v(y)]
3) given the stochasticity of choice, there is always a positive probability that individuals will choose the option with the lower decision value. This probability increases with the difficulty of the choice (as measured by how small |v(x) – v(y)| is), and decreases with the parameter θ and with the height of the barriers.
 the model makes specific predictions about how the shape of the reaction time distribution varies
with the difficulty of the choice and with the parameters of the model.
 From the brain’s point of view, decision values are estimated with noise at any instant. If the
instantaneous decision value signals are computed with identical and independently distributed
Gaussian noise, then the drift-diffusion model implements the optimal statistical solution to the
problem, which entails a sequential likelihood ratio test.
 The relative decision value R_t can be thought of as the accumulated evidence in favor of the hypothesis that the alternative x is better (when R_t > 0), or the accumulated evidence in favor of the alternative hypothesis (when R_t < 0). The more extreme these values become, the less likely it
is that the evidence is incorrect. The probability of a mistake can be controlled by changing the size
of the barriers that have to be crossed before a choice is made.
 Rangel (forthcoming) argue that a brain area involved in implementing the drift-diffusion model
choice process must exhibit the following properties:
1) its level of activity in each trial at the time of choice should correlate with the total level of
activity predicted by the best-fitting drift-diffusion model;
2) it should receive as an input the computations of the area of the ventromedial prefrontal cortex
associated with computing decision values; and
3) it should modulate activity in the motor cortex in a way that is consistent with implementing
the choice.
 They found that activity in two parts of the brain—the dorsomedial prefrontal cortex and the
bilateral intraparietal sulcus—satisfied the three required properties and thus was consistent with
the implementation of the drift-diffusion model.
4. Decision values are computed by integrating information about the attributes associated with each
option and their attractiveness.
 Let d_i(x) denote the characteristics of option x along dimension i. The model assumes that
v(x) = Σ_i w_i · d_i(x)
for some set of weights w_i. Consider several aspects of this assumption. First, the decision values
used to guide choices depend on the attributes that are computed for each option at the time of
choice. This implies that the decision value signals, and thus the choice process, take into account
the value of an attribute only to the extent that the brain can take it into account in the
construction of the decision values.
 Second, it provides a source of preference heterogeneity across individuals: some people might fail
to incorporate a particular dimension in the decision values, not because they don’t value it, but
because they might not be able to compute it at the time of choice.
 activity in the posterior superior temporal gyrus, which has been widely associated with the
computation of semantic meaning, correlated with the value of the semantic attribute but not with
the aesthetic value. The opposite was true for an area of fusiform gyrus that is known to be involved
in computing the visual properties of the stimuli. In addition, activity in the ventromedial prefrontal
cortex correlated with the decision values and received inputs from both areas.
5. The computation and comparison of decision values is modulated by attention.
 Attention can affect the choice process in two different ways. First, it might affect how attributes
are computed and how they are weighted in the decision value computation.
 This can be incorporated into the model as follows: let a be a variable describing the attentional state at the time of choice. The computed decision value is then given by
v(x) = Σ_i w_i(a) · d_i(x, a),
so that both the attribute values and their weights may depend on the attentional state.
 Second, attention can also affect how decision values are compared at the time of choice.
 The model is identical to the basic drift-diffusion set-up except that the path of the integration at
any particular instant now depends on which option is being attended to. Thus, for example, when
the x option is being attended, the relative decision value signal evolves according to
R_t = R_{t-1} + θ[β·v(x) – v(y)] + ε_t,
 where β measures the attentional bias towards the attended option. We refer to this model as the
“attention drift-diffusion model.” If β = 1, the model is identical to the basic model and choice is
independent of attention, but if β > 1, choices are biased towards the option that is attended
longer (see the simulation sketch at the end of this subsection).
 Two properties of the model are worth highlighting.
1) it predicts that exogenous changes in attention (for example, through experimental or
marketing manipulations) should bias choices in favor of the most attended option when its
value is positive, but it should have the opposite effect when the value is negative.
2) the model makes strong quantitative predictions about the correlation between attention,
choices, and reaction times—predictions that can be tested using eye-tracking.
 evidence for a substantial attention bias in the choice process: options that were fixated on more,
due to random fluctuations in attention, were more likely to be chosen.
 several studies have found that it is possible to bias choices through exogenous manipulations of
visual attention
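A minimal extension of the same kind of simulation to the attention drift-diffusion model described above. The fixation process used here (an independent per-step probability of looking at x), the value of β, and the remaining parameters are simplifying assumptions made for illustration, not the specification estimated in the eye-tracking studies.

import random

def addm_trial(v_x, v_y, beta=1.5, theta=0.1, s=1.0, barrier=10.0, p_look_x=0.5):
    # When x is fixated the drift is theta*(beta*v(x) - v(y)); when y is fixated it is
    # theta*(v(x) - beta*v(y)). Fixations are redrawn independently each step.
    r = 0.0
    while abs(r) < barrier:
        if random.random() < p_look_x:
            drift = theta * (beta * v_x - v_y)
        else:
            drift = theta * (v_x - beta * v_y)
        r += drift + random.gauss(0.0, s)
    return "x" if r > 0 else "y"

def p_choose_x(p_look_x, n=5_000, value=2.0):
    # Two equally valued options; only the share of fixations on x varies.
    return sum(addm_trial(value, value, p_look_x=p_look_x) == "x" for _ in range(n)) / n

for share in (0.3, 0.5, 0.7):
    print(f"fixation share on x = {share}: P(choose x) = {p_choose_x(share):.2f}")

# With beta > 1 and positively valued options, the option that is fixated more often is
# chosen more often, mirroring the attention-bias evidence summarized above.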

Economic Implications of the Neuroeconomic Model of Simple Choice


Prevalent and Systematic Mistakes in Economic Choice
 In this framework, an optimal choice is made when the option associated with the largest
experienced utility signal at consumption is selected, and a mistake is made otherwise. There are
three potential sources of mistakes:
1) stochastic errors in choices that are embodied in the drift-diffusion model;
2) errors in the computation of decision values, perhaps by systematically failing to take into
account some attributes that will affect experienced utility; and
3) biases due to how attention is deployed in the computation of decision values, or in the weight
that they receive in the comparison process.
 Bernheim and Rangel (2009) have proposed a modified revealed preference procedure that makes
it possible to measure experienced utility from the choice data even when mistakes are possible. A
critical component of their methodology is the identification of “suspect” choice situations in which
there is reason to believe that the subject might have made a mistake.
Neural Foundations for Random Utility Models
 the neuroeconomic model provides a neurobiological foundation for random utility models.
However, the two models have one important difference. In the drift-diffusion model, the noise
arises during the process of comparing the computed decision values, and thus it does not reflect
changes in underlying preferences: it is purely computational or process noise. In contrast, random
utility models assume stochastic shocks to the underlying preferences. This difference is important,
because the two models will make different normative predictions about the quality of choices.
 Time pressure leads to noisier choices; this can be captured by a single change in the drift-diffusion model parameters: the barriers of the drift-diffusion model (as illustrated in Figure 1) were smaller under time pressure.
 The change in the model also provides a mechanism for why subjects might make fewer mistakes
when the stakes are sufficiently high: in those cases, subjects might increase the size of the barriers
significantly in order to slow the choice process and reduce mistakes.
“Wired” Restrictions in the Choice Correspondence
 individual choices will be affected by the observable characteristics of the situation.
 key finding of these studies is that the decision value signals exhibit “range adaptation”: the best
and worst items receive the same decision value, regardless of their absolute attractiveness, and
the decision value of intermediate items is given by their relative location in the scale. This finding
matters for economics because it implies that the likelihood and size of decision mistakes increases
with the range of values that needs to be encoded.
Attention, Marketing, and Behavioral Public Policy
 Varying the visual contrast of the options (an exogenous manipulation of attention) had a sizable effect on the attention paid to them and biased choices as predicted.
 Other candidates for how attention plays a critical role in economic choice include cultural norms
that affect memory retrieval and cognitive patterns; educational interventions that have a similar
effect; and many of the “nudge” or “libertarian paternalistic” policies that have been advocated by
behavioral economists
Novel Insights about Experienced Utility

 it is often difficult to disentangle behavioural implications from competing explanations using only
choice data.
 Neuroeconomic methods provide an alternative methodology to address this problem: measure
neural activity in areas that are known to encode experienced utility and test the extent to which
the hypothesized effects are present.
De Gustibus Est Disputandum
 subjects who donate more to charities activate more strongly the posterior temporal sulcus at the
time of choice and that the responses in this area modulate activity in the areas of ventromedial
prefrontal cortex that compute decision values. The posterior temporal sulcus has been shown to
play a critical role in characterizing the mental states of others
 This suggests that some of the observed individual differences in the amount of altruism might be
due to cognitive limitations and not to the absence of an altruistic component in experienced utility.


More Complex Decisions: Self Control, Social Preferences, and Norm Compliance
 Examples of more complex choices include intertemporal choices involving monetary or pleasure–
health tradeoffs; financial decisions in complex environments such as the stock market; choices
involving social preferences; and compliance with prevailing social norms.
Intertemporal Choice
 In the basic version of the problem (intertemporal choices), individuals choose between two
options, x and y, in the present, and their choices have consequences on multiple dimensions for
extended periods of time.
 To a large extent, the existent evidence suggests that all of the key components of the model for
simple choice are also at work here: choices are made by assigning decision values to each option at
the time of choice; these decision values are computed by identifying and weighting attributes;
decision values are compared using a drift-diffusion model; and all of these processes are
modulated by attention.
 the same areas of ventromedial prefrontal cortex that encode them in simple choices also do so in
more complex situations involving dietary choices
 In intertemporal choice, the decision values seem to continue to be based on a weighted sum of
attributes, but the attributes all need to be time-dated, and attributes can have different weights at
different times (which allows for time-discounting in the weighting of attributes)
 This suggest decision values are computed by integrating the value of attributes over dimensions
and time.
 A working assumption is that the grid of attributes and time horizons can be partitioned into two
sets: those attributes at given times that are easily computed, and those attributes at given times
that are considered only if cognitive effort is deployed.
 There is evidence that areas of the dorsolateral prefrontal cortex that have been shown to be involved in implementing the type of scarce cognitive processes described above are more active at the time of choice in the self-control group than in the non-self-control group.
 Furthermore, the dorsolateral prefrontal cortex modulated the ventromedial prefrontal cortex
decision value signals in the self-control group but not in the non-self-control group.
The Problem of Experienced Utility in Intertemporal Choice
 Things are significantly more complicated in the case of intertemporal choice since decisions have
hedonic consequences over extended periods of time.
 This implies that experienced utility at each instant depends on the entire history of choices and not
on a single consumption episode.
Competing Decision Systems in Complex Choice
 The Pavlovian controller is activated by stimuli that activate automatic “approach or avoid”
behaviours. A typical example is the common tendency to move quickly away from stimuli such as
snakes and spiders.
 The habitual system is more flexible than the Pavlovian system and less flexible than the goal-
directed one. In particular, the habitual system learns to promote actions that have repeatedly
generated high levels of experienced utility in the past over those that have generated lower levels.
 the behavioural implementation of fairness goals or social norms depends on the functioning of
elaborate cognitive and neural machinery that is dissociable from the knowledge of what
constitutes fair or norm-compliant behaviour.
Economic Implications for Complex Choice
 The basic idea is simple: since scarce computational processes are not always deployed correctly,
and are not even available in some cases, decision mistakes can result. In this model, an individual’s
ability to make optimal intertemporal choices depends on the ability to deploy the cognitive control
facilitated by dorsolateral prefrontal cortex processes.
 First, the cognitive control processes supported by the dorsolateral prefrontal cortex are impaired during stress, sleep deprivation, or intoxication, and are depleted in the short term with repeated use.
 Second, the lateral prefrontal cortex is the last area of the brain to mature fully, often only when
people age into their mid-20s
 Third, and more speculatively, the areas of dorsolateral prefrontal cortex identified in these studies
have also been shown to play a role in cognitive processes such as working memory.