BEHAVIORAL ECONOMICS

ECON F345

Dushyant Kumar
BITS Pilani, Hyderabad Campus
Learning from New Information

I Benchmark: Bayesian updating..


I Prior probability and posterior probability- the update is often
subject to biases..
I Example: suppose our decision maker (DM) is planning to
buy a car- two options- Sporty vs. Comfort..
I Let p denote the prior probability with which the DM believes
that the Sporty car is better..
I He gets an informative signal, maybe a positive review..
I Informative- let θ denote the signal precision- the probability
of getting the signal if Sporty is indeed better for him..
I What should be the range of θ?
Learning from New Information

I Using Bayes' rule-

Prob.(Sporty is better/signal) = θp / (θp + (1 − θ)(1 − p))
I Example: Suppose to start with you think that the Sporty
model is better for you with probability 0.50 and the Comfort
model is good for you with probability 0.50. (p = 0.50)
I From your past experiences and all, you also think that the
review is going to be positive for the Sporty model if the
Sporty model is better, with probability 0.80. (θ = 0.80)
I With probability 1 − θ = 0.20, the review can be positive even
if the Sporty model is not better- genuine error or vested
interests..
Learning from New Information

Prob.(Sporty is better/signal i.e. positive review)

= (0.80 × 0.50) / (0.80 × 0.50 + 0.20 × 0.50) = 0.80
I So after reading a positive review, the probability ↑ to 0.80.
I Many times, Bayesian updating gets quite complex..
I Maybe due to this complexity, or even independently, people
might be ambiguity and uncertainty averse..
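The update above can be sketched in Python (a minimal sketch; the function name is illustrative, not from the slides):

```python
def posterior_sporty(p, theta):
    """P(Sporty is better | positive review) via Bayes' rule:
    theta*p / (theta*p + (1 - theta)*(1 - p))."""
    return theta * p / (theta * p + (1 - theta) * (1 - p))

# Slide's numbers: prior p = 0.50, signal precision theta = 0.80
print(round(posterior_sporty(0.50, 0.80), 6))  # 0.8 (up from the prior 0.5)
```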
Ambiguity Averse

I People are in general ambiguity averse, so they constantly seek
new information to update their beliefs..
I However, this updating process is often prone to biases; here we
are going to discuss the two most common biases in a bit of detail-
the confirmatory bias and the law of small numbers.
I Law of large numbers?
I Confirmatory Bias
I Going back to our car selection example- it is often seen that
people interpret and filter reviews, advice, and signals to
suit their prior beliefs- they don't follow Bayes' rule even
when it is easy to do so..
Confirmatory Bias

I Lord, Ross and Lepper (1979): people were interviewed on
their views on the death penalty.. some weeks after the
interviews, they were asked to read two reports on the deterrent
effects of the death penalty..
I Subjects were interviewed again about their views, the convincing
strength of the reports, whether the reports had changed their
views, and so on..
I Results: Proponents of the death penalty rated
pro-deterrence studies as convincing and well conducted, while
anti-deterrence studies were not thought convincing or well
conducted. Vice versa for opponents.
I So the 'education and updation' process turned out to be
polarizing!
Confirmatory Bias

I In recent times, social media has been accused of playing
this role.. however, social media can't be blamed in isolation-
it is a general phenomenon- the question to ask is
whether social and digital media have made this
polarization 'easy and pacy'.

I There are many such studies- Darley and Gross (1983), ....
I Rabin and Schrag (1999) provide a model to capture this
bias..
Confirmatory Bias

I Recall the car selection example.. signal-review


I θ ∈ (0.5, 1)− if the Comfort model is good for our decision
maker (DM), with probability θ a review is going to support
the same, with probability 1 − θ it is going to suggest
otherwise..
I Let c be the signal supporting Comfort model, and s be the
signal supporting Sporty model.
I Prob.(c/C ) = Prob.(s/S) = θ
Prob.(c/S) = Prob.(s/C ) = 1 − θ
I We can have different θs as well..
Confirmatory Bias

I After observing the signal (reading the review), the individual
uses Bayes' rule to update the belief.
I Confirmatory bias is present in the form that the individual
misreads or misinterprets the signal.
I The confirmatory bias is captured in the sense that with
probability q, the DM interprets the signal wrongly to suit his
prior belief.
I Suppose the prior belief is that the Comfort model is better-
I if he reads a review that supports the Comfort model, he
always interprets it correctly!
I if he reads a review that supports the Sporty model, with prob. q
he interprets it as supporting the Comfort model!
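This misreading can be sketched as a small simulation (a sketch in the spirit of Rabin and Schrag's setup; the parameter values and names are illustrative assumptions): the true state is that Comfort is better, so reviews support Comfort with probability θ, and a DM whose prior favours Comfort misreads each pro-Sporty review with probability q.

```python
import random

def perceived_share(theta=0.7, q=0.5, n=10_000, seed=0):
    """True state: Comfort is better; each review supports Comfort w.p. theta.
    A DM whose prior favours Comfort reads pro-Comfort reviews correctly but
    misreads a pro-Sporty review as pro-Comfort with probability q."""
    rng = random.Random(seed)
    actual_c = perceived_c = 0
    for _ in range(n):
        pro_comfort = rng.random() < theta
        if pro_comfort:
            actual_c += 1
            perceived_c += 1   # confirming signal: always read correctly
        elif rng.random() < q:
            perceived_c += 1   # disconfirming signal: misread w.p. q
    return actual_c / n, perceived_c / n

actual, perceived = perceived_share()
# the perceived share of pro-Comfort reviews (~0.85) exceeds the actual share (~0.70)
```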
Confirmatory Bias

I Prob.(st , ct /S) = θ^(st) (1 − θ)^(ct).
Prob.(st , ct /C) = (1 − θ)^(st) θ^(ct).
I If the DM started indifferent between the two cars, and after t
weeks he thinks he has read st reviews supportive of Sporty
and ct reviews supportive of Comfort, then his posterior belief
should be

Prob(Sporty is better/st , ct ) = θ^(st) (1 − θ)^(ct) / (θ^(st) (1 − θ)^(ct) + (1 − θ)^(st) θ^(ct))
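The posterior above is easy to compute directly (a minimal sketch; the example counts are illustrative):

```python
def posterior_from_counts(theta, s, c):
    """P(Sporty is better | s pro-Sporty and c pro-Comfort signals),
    starting from an indifferent (50-50) prior."""
    like_sporty = theta**s * (1 - theta)**c
    like_comfort = (1 - theta)**s * theta**c
    return like_sporty / (like_sporty + like_comfort)

# e.g. theta = 0.8, three pro-Sporty reviews and one pro-Comfort review
posterior_from_counts(0.8, 3, 1)  # about 0.94
```

With equal counts (s = c), the likelihoods cancel and the posterior stays at 0.5, as the symmetry of the formula suggests.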
Confirmatory Bias

I With confirmatory bias, the individual is going to overestimate
st if the prior belief is in favour of the Sporty model.
I It leads to overconfidence and polarization.
I What is crucial here is- the more you 'read', the more biased
you get!
A cause for concern in this age- easy access to information,
social media, and all..
I Next we move on to discuss another common bias- the Law
of Small Numbers.
I We are all familiar with the law of large numbers.
Law of Large Numbers

I Law of large numbers- convergence of the sample estimate to the
population estimate as the sample size increases..
I Representativeness or sampling biases..
The Law of Small Numbers

I (Recall the classic hospital question: a large hospital and a small
hospital- which records more days with over 60% boy births?)
I Common answer- equally likely..
I Law of large numbers- correct answer- the smaller hospital..
I Many a time we base our analysis on a rather small sample..
moreover, there often seems to be some pattern in the selection of
this small sample- not truly representative..

Two common biases here-


I Gambler’s fallacy and hot hand fallacy..
The Law of Small Numbers

I Gambler's fallacy
I Cook (1991) and Terrell (1994) analyzed data from the state
lotteries of Maryland and New Jersey. In both lotteries a
bettor must correctly guess a three-digit winning number.
F In Maryland, all winners get $500 each.
F In New Jersey, a pre-fixed prize fund is split between all those
who guess the right number.
I So in the New Jersey system, a person wins more if fewer
people guess the winning number.
I Observation: People typically don't bet on the number that
just won-
an expectation that random events are self-correcting!
The Law of Small Numbers

I This is known as the gambler's fallacy..
I In Maryland, this bias is not going to hurt you. Why?
I In New Jersey, this bias is costly!
I One interesting implication is that people are typically not
able to generate or guess a random sequence..
I Consider a fair coin being tossed five times- what do you think
the sequence of heads and tails is going to be?
I Most of the time, people switch too often, in order to
supposedly mimic randomness..
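This can be checked by brute force (a small sketch): enumerating all 32 equally likely sequences of five fair flips shows that a truly random sequence alternates only twice on average, fewer switches than most people produce when asked to "act random".

```python
from itertools import product

def mean_switches(n=5):
    """Average number of alternations (H->T or T->H) across all 2**n
    equally likely sequences of n fair-coin flips."""
    total = sum(
        sum(a != b for a, b in zip(seq, seq[1:]))
        for seq in product("HT", repeat=n)
    )
    return total / 2**n

print(mean_switches())  # 2.0: a random 5-flip sequence switches only twice on average
```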
The Law of Small Numbers

I Tennis players typically switch their serves right and left too
much, to appear random..
I In cricket- a full ball followed by a short ball..
actually it ends up being predictable..
I Forced (perceived) negative auto-correlation!
I Rabin (2002) gives a simple way to model these..
I Actual- with replacement; perception- without replacement!
The Law of Small Numbers

I Implication in trading and investing- selling out-performers
and holding under-performers!
I The flip side of this is the hot hand fallacy.
I Here we expect the 'good form' or 'good luck' to continue..
I Most of the initial empirical studies were done for basketball
leagues in the USA.
I The hot hand fallacy is a bit difficult to verify- there may
actually be 'good form'- better physical shape, confidence and
all matter..
I Some support however- people buying from the shop where the
last lucky number was sold..
The Law of Small Numbers

I The hot hand fallacy appears totally different from the gambler's
fallacy- but both arise due to representativeness bias-
reading too much from small samples..
I We are going to build a framework to model both the
gambler's fallacy as well as the hot hand fallacy!
I Suppose there is a box containing 2 red balls and 2 white
balls. One ball is randomly taken, put back, and again a ball
is picked randomly, and so on..
I So the actual process is with replacement; the bias here is
that the individual perceives it to be without replacement.
The Law of Small Numbers

I Suppose the first draw is a white ball- what is the
probability of getting a white ball in the second draw?
I Actual- 1/2.
I Perception- 1/3- gambler's fallacy..
I Similar examples can be constructed in different contexts..
I The gambler's fallacy seems relatively easy to model!
I Can this bias also explain the hot hand fallacy?
I For the hot hand fallacy, let's introduce some uncertainty- the
individual is not sure how many red balls (R) and white balls
(W) are inside the box- recall the good form in the sports
example- not sure of the player's actual type!
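The two perceptions of the second draw can be written out explicitly (a trivial sketch using exact fractions):

```python
from fractions import Fraction

W, R = 2, 2  # 2 white and 2 red balls; the first draw was white

# Actual process: the ball is replaced before the second draw
p_actual = Fraction(W, W + R)             # 1/2
# Gambler's-fallacy perception: as if the white ball were not replaced
p_perceived = Fraction(W - 1, W + R - 1)  # 1/3
```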
The Law of Small Numbers

I The decision maker's prior belief is that it's one of the following
three- (1W, 3R), (2W, 2R), (3W, 1R)- all equally likely..
I What is the probability of getting 2 white balls in two
consecutive draws?
I Under the unbiased perception (with replacement)- it's
0.0625 if the box happens to be (1W, 3R),
0.25 if the box happens to be (2W, 2R),
0.5625 if the box happens to be (3W, 1R).
I So when you observe two consecutive white balls, you update
your belief-
with probability 0.071 it's (1W, 3R), with probability 0.286 it's
(2W, 2R), and with probability 0.643 it's (3W, 1R).
The Law of Small Numbers

I Now consider the case with the bias.
I The probabilities of getting two white balls in consecutive draws
are-
0 if the box happens to be (1W, 3R),
0.167 if the box happens to be (2W, 2R),
0.50 if the box happens to be (3W, 1R).
I So when you observe two consecutive white balls, you update
your belief-
with probability 0 it's (1W, 3R), with probability 0.25 it's
(2W, 2R), and with probability 0.75 it's (3W, 1R).
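Both updates can be reproduced with exact fractions (a sketch; the box labels are illustrative):

```python
from fractions import Fraction

boxes = {"1W3R": 1, "2W2R": 2, "3W1R": 3}  # number of white balls out of 4

def posterior(p_two_white):
    """Posterior over the boxes after two white draws, uniform prior."""
    like = {box: p_two_white(w) for box, w in boxes.items()}
    total = sum(like.values())
    return {box: l / total for box, l in like.items()}

# Actual process: draws are with replacement
actual = posterior(lambda w: Fraction(w, 4) ** 2)                     # 1/14, 2/7, 9/14
# Perceived process: as if draws were without replacement
perceived = posterior(lambda w: Fraction(w, 4) * Fraction(w - 1, 3))  # 0, 1/4, 3/4
```

The perceived posterior puts more weight on the white-heavy box (3/4 vs. 9/14), which is exactly the hot-hand shift described above.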
The Law of Small Numbers

I (1W, 3R)- 0.071 (actual) to 0 (perceived),
(3W, 1R)- 0.643 (actual) to 0.75 (perceived)..
I So in the presence of the bias, the probabilities are shifted
towards having more white balls- hot hand fallacy!
I Typically the gambler's fallacy occurs when the probabilities are
well known, and the hot hand fallacy occurs when the probabilities
are not well known..
I If the 'types' are well known- purely objective- the argument
for the hot hand fallacy seems less suitable..
Learning from Others: Information Cascading

I Next- how do we incorporate others' information, through
their observed actions, into our decision making..
In particular, is it possible that we just ignore our own
assessment and follow the herd?
I Many times we have our own signals or views, and we
observe others' choices or actions..
I To stress- we typically don't observe, or can't verify, others'
signals or information sets..
I Others' choices convey their signals in 'some' way- how
should we aggregate these?
Learning from Others: Information Cascading

I Continuing with the car selection example- let's assume that
everyone has the same preference- either Sporty is better
suited for everyone or Comfort is better suited for everyone..
I You can test drive and form your own (imperfect?) assessment,
as well as see others' choices..
I Many examples- course selection, elections, etc..
I Information Cascade: An information cascade occurs when
it is optimal for an individual, having observed the actions of
those ahead of him, to follow the behavior of the preceding
individual without regard to his own information.
I Banerjee (1992) and Bikhchandani et al. (1992)
Learning from Others: Information Cascading

I Let θ be the accuracy of the signal (test drive)- if Sporty is
better, with probability θ you are going to assess that from the
test drive, and with probability (1 − θ) you misread and assess
that Comfort is better..
I θ > 0.50 (?)
I We are considering a sequential model here:- many times it
gets misinterpreted and applied to the simultaneous action
case!
I Consider the first agent (Agent 1)- he test drives the car and
(suppose) assesses that Sporty is better- the obvious choice is to
buy Sporty..
Learning from Others: Information Cascading

I Now consider the second agent (Agent 2)- he observes the
choice of Agent 1 and test drives as well.. if he also assesses
from the test drive that Sporty is better, then the choice is
easy- buy Sporty..
I But suppose Agent 2 assesses from the test drive that
Comfort is better- now there is one signal for Sporty and one
signal for Comfort- indifference..
I Let's assume that in case of indifference, an agent goes by his own
signal- a partial bias..
I So Agent 2 will always follow his own signal, no matter
whether it is in sync with the first agent or not!
Learning from Others: Information Cascading

I Now consider the case of Agent 3- does it matter what he is
going to assess from the test drive?
I Notice that Agent 3 knows that both Agent 1 and Agent 2
have followed their own signals.
I Agent 3 test drives and observes a signal- now Agent 3
has three independent signals..
I Agent 3 will follow the majority rule..
I If signal 1 and signal 2 are different, Agent 3 will follow his
own signal- those following Agent 3 know this..
I If signal 1 and signal 2 are the same, then Agent 3 simply
ignores his own signal- further, all others know this, so for them
as well their own signals do not matter- information cascade!
Learning from Others: Information Cascading

I What is the probability of this cascade happening? It tends to 1
as the number of individuals increases..
I Notice a cascade will definitely happen if three consecutive
individuals happen to get the same signal. (Why not two
consecutive signals? C − S − S − C )
I Divide N into blocks of three-

1, 2, 3; 4, 5, 6; 7, 8, 9; ...

I What is the probability of any one block getting identical
signals-
q^3 + (1 − q)^3
where q is the probability of getting a particular signal.
Learning from Others: Information Cascading

I What is the probability that none of these blocks gets
identical signals- (1 − q^3 − (1 − q)^3)^(N/3).
I As N → ∞, (1 − q^3 − (1 − q)^3)^(N/3) → 0!
I So the cascade happens with probability one.. it may or may not
be 'efficient'.
Learning from Others: Information Cascading

I Three striking results-
1. The most striking conclusion is that an information cascade
will almost definitely occur, with probability tending to one!
2. Information cascades are fragile- in the above example, if 3
agents decide to test drive together and, say, they all get the same
signal- Comfort is better- then they should be buying Comfort-
in particular, it does not matter at what point these 3 agents
come in! Why?
3. Cascades can be 'wrong' or 'sub-optimal'..
I Notice that the information cascade is not irrational!
Learning from Others: Information Cascading

I Using Bayes' rule, a general model can be written down.
I Mixed empirical support- some studies have found that people
do overweight their private signal..
I Contrary to the wisdom of the crowd? Not really- simultaneous
vs. sequential..
I Related issues-
Bubbles- financial markets
Herd behavior..
