
The Value of Biased Information: A Rational Choice Model of Political Advice

Author(s): Randall L. Calvert


Source: The Journal of Politics, Vol. 47, No. 2 (Jun., 1985), pp. 530-555
Published by: University of Chicago Press on behalf of the Southern Political Science
Association
Stable URL: http://www.jstor.org/stable/2130895
The Value of Biased Information:
A Rational Choice Model of
Political Advice

Randall L. Calvert
Washington University in St. Louis
and
Carnegie-Mellon University

Typically, political decision making involves the concomitant problem of deciding how to
use advice. Advice can reduce uncertainty about outcomes, but it is often costly to obtain and
assimilate, and is itself subject to uncertainty and error. This paper explores how a rational
decision maker uses imperfect advice. Using only the assumption of utility maximization,
along with a specification of exactly how knowledge and advice are "imperfect," it is possible
to derive some of the initial assumptions of cognitive and bounded-rationality models. Also,
changes in the decision-making environment can be connected to changes in how advice is
used, thereby providing theoretical predictions about political behavior. In particular it is
shown here that, under certain reasonable circumstances, the rational decision maker should
engage in selective exposure or "bolstering." These results do not depend upon any cost
advantage or inherent value in biased advice.

INTRODUCTION

In making complicated decisions, political actors face the necessity of using advice. Given the scarcity of time and the need for expertise beyond
the decision maker's capabilities, advisors are called upon to distill com-
plex information about alternatives into simpler recommendations about
the choice to be made. A subsidiary problem for the decision maker, then,
is how to make the best use of available information, or how to choose
advisors. The purpose of this study is to build a rational choice model of the

*The author is grateful to James Alt, Morris Fiorina, Ronald Harris, Kenneth Shepsle, and
Barry Weingast for their comments on earlier versions of this manuscript. This research was
supported by the Center for the Study of American Business, Washington University, St.
Louis.


use of advice, and in particular to characterize the role of biased advisors in this process. As we will see, a neutral or even-handed advisor is not always
the best kind.

How Information Is Used


Studies of political decision making in fields from voting to foreign
policymaking examine how political actors make use of information. One
frequent conclusion is that decision makers use information poorly, and in
particular that they do not use it objectively. Instead, they engage in
"bolstering," attending too much to sources that share the decision maker's
own predispositions. Likewise, through "defensive avoidance," they give
too little consideration to discrepant information (e.g., Janis and Mann,
1977, p. 205; Jervis, 1976). Related phenomena, termed "selective atten-
tion" or "selective exposure," have long been subjects of inquiry in experi-
mental psychology (Wicklund and Brehm, 1976, pp. 170-90).
These phenomena have often served as examples of the way in which
political (and other) decision makers fall short of the rational ideal. One
implication of this viewpoint is that, if we want to model and predict the
behavior of political decision makers, we ought to rely upon cognitive and
bounded-rationality approaches (Simon, 1947; Steinbruner, 1974). On the
other hand, some psychologists have looked upon selectivity phenomena
as purposeful responses to the complexity of the environment (Broadbent,
1958; Moray and Fitter, 1973).
The present study explores this alternative approach, using a model
based entirely on utility maximization. As with simple, perfect-information
models of rational choice, we use only one behavioral assumption, namely
that the individual decision maker is a rational actor. As in the bounded-
rationality approach, we assume that the actor does not have and cannot
obtain complete information. Here, however, we maintain the assumption
of utility maximization throughout. In principle, we can then predict how
changes in the objective political world cause changes in decision makers'
use of advice and information. In this sense, this study explores the politics,
rather than the psychology, of political behavior. As it turns out, more can
be explained in this manner than is commonly thought.
The type of model used in this paper represents a significant gain in
theoretical parsimony over the more standard bounded-rationality
approach. Our model is grounded only on the assumption of utility maxim-
ization; it adds no initial presumptions about ways in which the decision
maker copes with an environment of imperfect information. Instead, we
derive the optimal means of coping from explicit assumptions about how
information and knowledge are imperfect. These imperfections are central
to the bounded-rationality model as well. There, however, they are not


made rigorous, and the actor's responses to them are fixed by assumption.
The goal of a "full-rationality" model like that used here is to derive
boundedly rational behavior in such a way that the theory itself, initially
independent of stylized empirical facts, yields predictions about how
behavior should respond to changes in environmental conditions.
Using this model we will be able to say how a rational actor, in some
simple decision problems, ought to use information. The results are some-
times counterintuitive. There are, for example, perfectly ordinary circum-
stances in which the rational decision maker prefers a biased source of
information over a neutral one, even if the neutral one is just as easy to
consult, and even though the biased information has no inherent value to
the decision maker outside of its usefulness in making a good decision.
When the decision maker is predisposed to a particular belief, this behavior
includes a desire for information from sources known to share that predis-
position. In other words, selective attention or bolstering would occur even
in purely rational decision making, and it would vary predictably in
response to the objective, nonpsychological characteristics of the decision
problem.

Breadth of Application
The theoretical model presented below applies to a wide variety of
real-life political situations. The executive official making a policy decision
is the most direct application; the decision could be anything from a minor
detail of implementation to a major regulatory edict or foreign policy
action. Prior to deciding, the executive may consult advisors or conduct
policy analysis. Such problems have been addressed extensively by politi-
cal scientists such as Allison (1971), George (1980), and Jervis (1976). The
relevant questions for our purposes are how much and what kind of advice
or analysis the decision maker should choose, given constraints on time and
resources.
A similar breadth of applications exists for the decision problems of
legislators who rely, for instance, on staff members (Malbin, 1980), and of
judges who listen to expert testimony (Horowitz, 1977, pp. 45-51). Even at
the level of mass political behavior, the elements of this model come into
play (Lazarsfeld, Berelson, and Gaudet, 1944). Voters, for example, are
uncertain about which candidate will bring about the most desirable state
of the world. They may spend time and effort digesting more information
from the news media, the opinions of friends, and elsewhere to improve
their ability to make a voting decision. Weatherford (1983) has taken just
such a viewpoint in his empirical study of economic voting, in which he
considers "the voter as information processor" (p. 161). Kuklinski, Metlay,
and Kay (1982) took a similar approach in their analysis of voter decision
making on the complex issue of nuclear energy.


The analysis presented here fleshes out theoretically this idea of the
political actor as a rational decision maker making rational use of imperfect
and costly information. It demonstrates that, in all areas of political activ-
ity, there are circumstances in which it is useful to rely on biased advice.
More generally, it suggests a powerful way to conceptualize and analyze
the simultaneous problems of how to use advice and how to make policy
choices.

Outline
The first section below presents a model of the decision maker's problem
and examines the properties of information sources. The following section
is an analysis of a situation in which an initially ignorant decision maker has
two sources of advice, only one of which can be consulted before a final
choice between two policy alternatives is made. Even in this simple prob-
lem, the decision maker may prefer information that is biased. The third
section examines the same problem for a decision maker who begins the
process with a predisposition in favor of one alternative and against the
other. If the predisposition is strong enough, "objective" or evenhanded
information is of no value, while information that is biased in favor of the
already preferred alternative helps the decision maker achieve higher
expected utility. The final section discusses what this analysis implies about
how political advice is used and how we ought to model limited-
information decision problems.
Although the main sections of this paper involve a certain amount of
algebraic and probabilistic calculation, an effort has been made to accom-
modate the reader who is not inclined to grind through it. Each section
begins with a nontechnical summary of the assumptions and results that
follow. In addition, the principal results are restated verbally alongside
each mathematical presentation. Finally, the behavior of the probability
distributions and their updating processes, which are central to this analy-
sis, is illustrated in diagrams.¹

THE MODEL

Description
This study will address the following generic problem: a decision maker
must choose between several alternatives. For ease of presentation, we will
use only two alternatives, but the model's conclusions apply to any reason-

¹ On the other hand, the messy details of standard calculations have in several places been omitted from the text. These details are available from the author upon request.


ably small number of alternatives. The decision maker has only partial
information, in the form of a probability distribution, about the utility that
would result from each alternative. Optionally, the decision maker may
consult various sources of further information about the alternatives. This
information will provide a better idea of the true utility of each alternative,
but it too is only probabilistic. That is, the information source may be
mistaken, or the decision maker may misunderstand its message. Keeping
this possibility in mind, the decision maker uses the information to revise
previous beliefs about the alternatives, and eventually uses this revised
understanding of the situation to choose the alternative believed best.
For concreteness, let us refer to a stylized political application of this
decision problem. An executive official must choose between several feas-
ible policies. The official has some well-defined goals to accomplish
through this choice but is not sure how well each policy would serve these
goals. Any of several expert advisors can be called upon for their evalua-
tions of how well the policies would serve the goals. These advisors are
basically truthful, but their advice is subject to certain mistakes and biases.
The official knows generally how the biases and mistakes are generated,
but cannot retrospectively unscramble the advice to learn exactly the
underlying truth. Assuming that there are political and resource costs that
prevent the official from freely consulting all possible advisors on every
problem, what kind of advisors should be consulted before a policy choice
is made? In particular, what sorts of known imperfections in advice are
admissible or even desirable?
In the model below, we abstract away the decision maker's or advisors'
consideration of any actual physical or social outcomes of the policies. We
refer only to the utility that will result for the decision maker, who may
have any combination of selfish, altruistic, or organizational motives. For
present purposes, this is a reasonable abstraction of a more general advi-
sory process. Indeed, in some cases real political advice goes no further than predicting the quality of outcomes, rather than describing the outcomes in detail.
more complex cases as well as to the simple one that is explicitly modeled.
The model assumes throughout that the advice from any source consists
only of a pronouncement that the outcome of an alternative will be either
"good" or "bad." This feature represents the basic nature of advice, a
distillation of complex reality into a simple recommendation. In order to
analyze clearly the decision maker's learning process, we portray the
advisor as giving an opinion about each policy alternative rather than
merely as recommending a best policy.
The advisor's pronouncement is assumed accurate in a probabilistic
sense: the higher the true utility of a policy, the higher the probability that
an advisor will predict it to be "good." However, advisors may differ in the


way their recommendations depend on the truth. For example, one source
may be very unlikely to call a certain policy "good" even if that policy's true
utility is high. Such a source is "biased" against that policy alternative. For
another advisor, the probability of calling an alternative "good" may
increase more rapidly with true utility; this makes the advisor almost
certain to say "good" if the true utility is high, "bad" if it is low, and equally
likely to say either if utility is in the middle. This advisor is more neutral, or
less biased, than the first.
Throughout, we make the key assumption that the decision maker, either
through previous experience or objective data, always knows in advance
exactly how the advisor's pronouncements depend, probabilistically, upon
true utility. In particular, the advisors' biases are known. But the decision
maker cannot know in advance what the advisor will say in a given case,
and what random (or seemingly random) element may have colored the
advisor's recommendation can never be learned.
The remainder of this section makes precise the model just described.
Then it sets forth the learning process, based on Bayesian updating. Finally,
it derives the general method for determining the value of an information
source or advisor.²

Alternatives, Information, and Utility
Let U1 and U2 be random variables describing the unknown levels of
utility two alternatives will bring. For the time being, let us represent the
decision maker's prior beliefs about U1 and U2 by uniform distributions on
identical bounded intervals, so that no utility level is any more likely than
another. Without further loss of generality, we can take this to be the unit
interval, so all utility values lie between zero and one. Thus, the decision
maker's prior subjective probability density functions for the alternatives'
utility are

$$f_i(u_i) = \begin{cases} 1, & 0 \le u_i \le 1 \\ 0, & \text{otherwise.} \end{cases}$$

The information from each information source, or advisor, will depend stochastically on the true (but hidden) utility value of the alternative,

² In applying this model, one need not assume that political decision makers engage in any
calculation process resembling the analysis in this paper. Like the consumer in macroeconomic
theory, the decision maker in the real world just gathers whatever information seems worth
the trouble. Whether the typical decision maker behaves "as if" he calculated optimal solu-
tions depends on the existence of some selection process such as competitive pressure or
imitative experimentation, factors outside the model presented here.


through a sampling distribution known to the decision maker. For ease of exposition, we assume that each source gives advice on every alternative.
This assumption could easily be dispensed with, and corresponding results
derived for any other pattern of advisors and alternatives. Let Xi be a
random variable representing the (as yet unrealized) advice from a source
concerning alternative i. We will consider information sources that depend
on the true utility in the following way:
Xi = 1 ("good") with probability Hi
= 0 ("bad") with probability 1 - Ui

where ai is a constant greater than or equal to 1,3 and ui is the true utility of
alternative i. We will call ai the bias parameter. If ai = 1, for example, the
information source is rather evenhanded: its verdict is most likely "good," if
the alternative will provide utility above 1/2; and most likely "bad," if the
utility will be below 1/2. Figure 1a illustrates how the actual utility value
would then determine the probability that Xi = 1. If ai > 1, the source is not
so neutral. Suppose, for example, that ai = 2. Then for all values of ui less
than 1, the probability of Xi = 1 is less than in the previous case. At ui = 1/2,
the probability that source X says "good" is P(Xi = 1 | ui) = 1/4. For larger ai,
this distortion is even more pronounced. Figure 1b illustrates P(Xi = 1 | ui) as
a function of ui when ai = 2; and figure 1c does so for ai = 10. Notice that for
all values of ai, if ui = 1 then Xi will certainly be 1; if ui = 0, Xi will certainly be
0.⁴ In between those extremes, though, a higher value of ai means a greater
tendency for Xi to be zero no matter what ui really is, and more extreme
values of ui are needed to make it likely that Xi = 1 will be observed. In a
meaningful sense, then, a source for which ai is greater than 1 is biased
against alternative i. A source with a very large value of ai is very unlikely to
tell the decision maker that alternative i is "good" no matter how good
alternative i really is, as long as ui < 1. For example, if a1 = 1 while a2 = 10,
source X would be strongly biased against alternative 2, but unbiased

³ By allowing 0 < a < 1, we could bias the source in favor of a candidate; this complication is unnecessary at this point. In our discussion of selective attention, where a source tells about two candidates, the bias can be in either direction even with a > 1. Allowing a < 0 would lead to strange behavior of several functions, but it would not add any useful features to the model.
⁴ Allowing P(Xi = 1 | ui = 1) to be less than 1 would attenuate the results of this model. The sources modeled in this paper are unusual in that, even when biased, they will override that bias when the candidate in question is extremely good (or extremely bad). This truthfulness under extreme circumstances is what makes the biased sources useful. If instead we have a source that pronounces candidate i good with probability .75 even when ui is 1 (the maximum possible), that source will obviously be less useful. The theoretical results of this paper will still hold for such sources, but less often or less strongly than for the polar type of source modeled explicitly here. The other extreme would be a source for which the probability of calling candidate i good does not vary at all with the true utility. Such a source would of course be worthless under any circumstances.

[Figure 1. P(Xi = 1) as a function of the actual utility ui: panel (a) for ai = 1, panel (b) for ai = 2, and panel (c) for ai = 10.]


toward alternative 1. In such a case, we can call a2 the bias of source X against alternative 2.⁵
Aside from their mutual dependence on the alternatives' true utility
values, all sources are statistically independent of one another. In addition,
the information a source provides about alternative 1 is independent of the
information it provides about alternative 2.
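
To make the sampling rule concrete, the short sketch below simulates an advisor of this type. It is illustrative code only (the function and variable names are mine, not the article's), assuming the polar form just defined, in which a "good" verdict occurs with probability equal to the true utility raised to the bias parameter.

```python
# A minimal simulation sketch (my code, not the author's) of the advice rule
# just defined: an advisor with bias parameter a >= 1 pronounces an
# alternative "good" (X_i = 1) with probability u_i ** a, where u_i is the
# alternative's true utility.
import random

def advise(true_utility, a):
    """Return 1 ("good") with probability true_utility**a, else 0 ("bad")."""
    return 1 if random.random() < true_utility ** a else 0

# At a moderately good true utility of 0.5, the chance of a "good" verdict
# falls quickly as the bias parameter grows.
for a in (1, 2, 10):
    print(a, 0.5 ** a)   # 0.5, 0.25, and roughly 0.001
```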

Learning and Choosing
The decision maker uses the advice from each source to update his prior
beliefs about the alternatives using Bayes's rule,⁶ which gives fi(ui | xi), the
updated or "posterior" distribution of Ui after the advice Xi = xi has been
received. In this application, Bayes's rule takes the form

$$f_i(u_i \mid x_i) = \frac{f_i(u_i)\, P(X_i = x_i \mid U_i = u_i)}{\int_0^1 f_i(t)\, P(X_i = x_i \mid U_i = t)\, dt}, \qquad \text{for } i = 1, 2.$$

If two sources, X and Y, have been consulted, we can apply the rule twice to get:

$$f_i(u_i \mid x_i, y_i) = \frac{f_i(u_i)\, P(X_i = x_i \mid U_i = u_i)\, P(Y_i = y_i \mid U_i = u_i)}{\int_0^1 f_i(t)\, P(X_i = x_i \mid U_i = t)\, P(Y_i = y_i \mid U_i = t)\, dt}$$

⁵ One might reasonably ask whether it would not be more sensible to use a normal distribution with known variance to represent prior beliefs as well as the sampling distribution, since this would provide a particularly simple updating formula under Bayes's rule (see, for example, De Groot, 1970, pp. 166-68). Also, a natural representation of bias would be available: if ui is the true utility value, let the sampling distribution with bias β be normal with mean ui + β and fixed variance. Unfortunately, this would represent no bias at all to the statistical decision maker, since the Bayesian updating procedure here would be simply to subtract the (known) bias from each observation and then to apply the usual updating formula. In order to inject a genuine bias into the problem, it would be necessary to let the variance of the sampling distribution depend on the difference between each actual sample observation and the true utility value ui. This of course would sacrifice the simple updating formula. The model used in this paper achieves this kind of genuine bias in a more tractable manner.
⁶ Much has been made of results from the psychology laboratory showing that naive
subjects do not behave as Bayesians (see Kahneman and Tversky, 1979; Grether and Plott,
1979). Two general defenses can be offered for the continued use of Bayesian models. First,
naive subjects in the psychology lab are not the same as the high stakes, experienced decision
makers of real politics, business, etc. Second, no other coherent model has been formulated
for rational choice and rational learning under conditions of risk. That there could be such an
alternative model is certainly conceivable. If one were available, it could be applied here in
place of Bayes's rule. However, there is nothing magical about Bayes's rule that should cause
us to believe, in advance, that a different rule would qualitatively change our conclusions
about the rational use of biased information.


and so on.
Given the assumptions already made about the functional forms of the
priors and the advisors' sampling distributions, we can use Bayes's rule to
derive the particular form of the updated distributions in our model. If the
priors are uniform and Xi = 1 is advised, the updated belief about
alternative i turns out to be:
$$f_i(u_i \mid 1) = (a_i + 1)\, u_i^{a_i}. \qquad (1)$$

Similarly, if Xi = 0 is advised,

$$f_i(u_i \mid 0) = \frac{a_i + 1}{a_i}\, \bigl(1 - u_i^{a_i}\bigr).$$

Figure 2 illustrates the decision maker's subjective beliefs about ui before and after Xi is learned, for ai = 1.
Before any advice is received, the expected utility of each alternative is,
for the case of a uniform prior distribution,

$$EU_i = \int_0^1 u\, f_i(u)\, du = 1/2.$$

After consulting X, this becomes either

$$E(U_i \mid 1) = \frac{a_i + 1}{a_i + 2} \quad \text{or} \quad E(U_i \mid 0) = \frac{a_i + 1}{2(a_i + 2)}. \qquad (2)$$

So, for example, if ai = 1 then E(Ui | 1) = 2/3 and E(Ui | 0) = 1/3. For higher
values of ai, E(Ui | 1) > 2/3 and 1/3 < E(Ui | 0) < 1/2.
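
The posterior means in equation (2) can be checked numerically. The sketch below is illustrative code of mine, not part of the original analysis; it assumes only the uniform prior and the sampling rule defined above and recovers the closed-form posterior means by direct integration.

```python
# A numerical check (my illustrative code, not from the article) of the
# posterior means in equation (2): with a uniform prior on [0, 1] and
# likelihood P(X_i = 1 | u) = u**a, the mean after "good" advice is
# (a + 1)/(a + 2) and after "bad" advice is (a + 1)/(2*(a + 2)).
import numpy as np

def trapezoid(y, x):
    """Plain trapezoid rule, to avoid depending on a particular NumPy version."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

def posterior_means(a, n=100_001):
    u = np.linspace(0.0, 1.0, n)
    prior = np.ones_like(u)                      # uniform prior density
    like_good, like_bad = u ** a, 1.0 - u ** a   # P(X = 1 | u) and P(X = 0 | u)
    post_good = prior * like_good / trapezoid(prior * like_good, u)
    post_bad = prior * like_bad / trapezoid(prior * like_bad, u)
    return trapezoid(u * post_good, u), trapezoid(u * post_bad, u)

for a in (1.0, 2.0, 10.0):
    numeric = posterior_means(a)
    exact = ((a + 1) / (a + 2), (a + 1) / (2 * (a + 2)))
    print(a, numeric, exact)                     # the pairs should agree closely
```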
Finally, we assume that to learn any advisor's information, the decision
maker must bear an opportunity cost in leisure, time available for other
decision problems or other utility-producing goods, in order to obtain and
evaluate the information. Before making any observation, then, the
decision maker compares (1) the current expectations about the alterna-
tives, (2) possible updated expectations that could result if any further
advisor is consulted, and (3) the costs of available advice. In a general
decision problem of this sort, the optimal strategy is a plan for deciding
sequentially which source to consult next, and whether at any time to stop
gathering advice and choose the best alternative given current information.
The value of a source lies in the possibility that its advice may cause the
decision maker to change her choice to an alternative whose utility turns


[Figure 2. Updating the decision maker's prior beliefs: the uniform prior density fi(ui), the updated density fi(ui | 1) if Xi = 1 is observed, and the updated density fi(ui | 0) if Xi = 0 is observed.]


out to be greater than that of the alternative favored before that source was
consulted (for further explanation, see Raiffa, 1968, p. 28).
Since in each decision to "purchase" information, the decision maker
must take into account the effects of possible future optimal decisions on
how to use information, the decision maker's problem is formally one of
dynamic programming. The value v(f) of proceeding optimally when the
prior beliefs are f = (f1, f2) is the maximum of: (1) the value of stopping
immediately and choosing the alternative with the highest expected value;
and (2) the expected value of paying cx to get advice from the best source X
(i.e., the one for which this value, less cost, is highest), updating f to some
new f', and then repeating the choice between (1) and (2). This can be
written as a "functional equation":
$$v(f) = \max\,[\,EU_1,\ EU_2,\ E\,v(f') - c_X\,], \qquad (3)$$

where the first two expectations are according to f and the third is accord-
ing to the sampling distribution for the "best" source. We determine
optimal behavior by learning how this maximum can be obtained. The
examples below keep this dynamic programming problem simple by
assuming that only two advisors are available, and that each can be con-
sulted at most once.
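
As a rough illustration of the choice that equation (3) formalizes in this simplified setting, the sketch below compares stopping immediately with paying for the single most valuable source. The function, the dictionary of sources, and the numbers are all hypothetical placeholders of mine; the values of advice themselves are derived in the sections that follow.

```python
# A minimal sketch (not from the article) of the one-step choice behind
# equation (3), under the simplification used in the examples: stop now and
# take the alternative with the higher expected utility, or pay the cost of
# the advisor whose expected value of advice, net of cost, is largest.
def next_step(EU1, EU2, sources):
    """sources maps a source name to a (value_of_advice, cost) pair."""
    best_name, best_net = None, 0.0
    for name, (value, cost) in sources.items():
        if value - cost > best_net:
            best_name, best_net = name, value - cost
    if best_name is None:
        # No source's expected gain covers its cost: choose the better alternative now.
        return "stop and choose alternative %d" % (1 if EU1 >= EU2 else 2)
    return "consult source " + best_name

print(next_step(0.5, 0.5, {"X": (0.10, 0.02), "Y": (0.08, 0.02)}))  # consult source X
print(next_step(0.5, 0.5, {"X": (0.10, 0.20)}))                     # advice too costly: stop
```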

Discussion
The model outlined in this section is just one narrow example of a
situation in which costly and imperfect advice is used to make a decision.
However, it has several important features common to all such problems.
First, new information serves only to alleviate the decision maker's uncer-
tainty, not eliminate it. Second, a decision maker may have some beliefs
prior to gathering advice, and if so, he places some weight on those prior
beliefs even after hearing the new information. Third, advice consists of a
summary or distillation of a more complicated reality unobservable to the
decision maker. Finally, advice may be right or wrong.
The notion of bias used here is not the only possible one. It does not
capture everything we usually mean when we think of bias, and it involves
some features not necessarily part of bias in the everyday sense. What the
model does portray is a certain tradeoff in the accuracy of advice. The
tradeoff is between accuracy about extreme conditions, when true utility is
very high or low, and inaccuracy about more moderate conditions. In real
life such a tradeoff often occurs when the inaccuracy appears in the form of
bias. In this model, when an alternative is moderately good, the advisor
biased against it still calls it bad. The alternative must be very good for such
an advisor to admit it. But the model does not allow for simple "reverse


barometer" biased advice, for bias that simply overstates by some constant
the quality of an outcome or for bias that is not mitigated by the true quality
of the alternative, no matter how extreme.
This model portrays advice simply as a good or bad pronouncement
about an unobserved utility value. Mathematically, the results appear to be
generalizable to any situation in which the accuracy of biased advice is
greatest for extreme values of an underlying true characteristic. In the real
world, this condition is most likely to hold when advising consists of
distilling a complicated reality into a simpler summary judgment. The most
direct example of such advising is the use of experts by policymakers. But
even nonexpert advice is often of the same nature, due to both time
pressures placed upon the decision maker and the efficiency of having
advisors specialize in understanding certain problems.
Thus, although the model considerably simplifies the complicated world
of real political advising, it captures many of the interesting features of such
problems. The concrete results derived below are expressed in terms of the
parameters of the present model. If even this simple model generates such
effects, there is every reason to believe that corresponding phenomena
manifest themselves in the more complex real world. And aside from
presenting these specific results, the analysis carried out here serves to
illustrate how modeling of other information and learning processes might
be carried out in a rational choice framework.

OPTIMAL BIAS IN SOURCES

Description
In this section, the decision maker faces a choice between two policy
alternatives whose utility is initially unknown. The decision maker may either
choose between them immediately or consult a single advisor and then
choose. The advisor offers an opinion on both alternatives, pronouncing
each one either good or bad.
What are the characteristics of the best advisor for this purpose? If we
consider only advisors who are neutral toward alternative 1, the best would
be one having a considerable bias against alternative 2. That is, other things
equal, the decision maker using an information source biased in this way
against alternative 2 will choose more accurately than one who uses an
unbiased source. The size of this optimal bias can be described as follows:
for such a source, the initially ignorant decision maker estimates that there
is only a 14.5 percent chance that the source will pronounce alternative 2
good, while there is a 50 percent chance that the source will pronounce
alternative 1 good. The advantage, in terms of utility value, of using this
source instead of an unbiased one is around 2 percent of the difference


between the lowest possible utility level and the highest possible utility
level.
To put it another way, if the decision maker in this problem has a choice
among several advisors each unbiased toward alternative 1, she should, if
rational, prefer to get advice from a source with a bias against alternative 2.
This conclusion does not depend in any way on any cost advantages or
inherent value in the biased advice.

Formal Analysis
Formally, suppose there is just one source of advice, having cost cx, and
that it gives advice XI about alternative 1 and X2 about alternative 2. The
functional equation is very simple here since there is only one consulting
decision to be made. If we denote by Ex the expectation with respect to the
advice distribution, and by E the expectation with respect to the prior or
posterior distribution of Ui, then the value of proceeding optimally is
$$v(f) = \max\,[\,EU_1,\ EU_2,\ E_X \max_i E(U_i \mid X_i) - c_X\,].$$

Suppose EUj is the greater of EU1 and EU2. The gain from learning X, aside from the cost cX, is

$$V(X) = E_X \max_i E(U_i \mid X_i) - EU_j.$$

By a well-known result in probability theory, we can always rewrite EUj as E_X E(Uj | Xj) (see, for example, Feller, 1968, p. 223). Thus the gain from learning X can also be written as

$$V(X) = E_X \bigl\{ \max_i E(U_i \mid X_i) - E(U_j \mid X_j) \bigr\}. \qquad (4)$$

In words, this is the expected difference, over all possible values of the
source's advice, between the payoff for choosing the best alternative after
learning X and the payoff for choosing the best alternative without observ-
ing X (given what X turns out to be). Thus if learning X cannot change the
decision maker's opinion of which alternative is best, this gain is zero:
E(Ui | Xi) will always be maximized by i = j. The information is valuable
only if there is a possibility of observing values of X that would cause the
decision maker to change his mind about which alternative to choose.
Under the assumption of uniform priors, EU1 = EU2. Therefore let us employ the convention of replacing E(Uj | Xj) with the average 1/2 [E(U1 | X1) + E(U2 | X2)], on the grounds that either alternative might have


been chosen had X not been learned. The decision maker is initially ignor-
ant, and is indifferent toward the alternatives.
Consider a source X which is unbiased toward alternative 1, that is, a1 = 1.
Then, using equations (2) to evaluate conditional expectations, we can
write (4) as
$$\begin{aligned}
V(X) = {} & P(X_1 = 0 \text{ and } X_2 = 0)\,\{E(U_2 \mid 0) - \tfrac{1}{2}[E(U_1 \mid 0) + E(U_2 \mid 0)]\} \\
& + P(X_1 = 0 \text{ and } X_2 = 1)\,\{E(U_2 \mid 1) - \tfrac{1}{2}[E(U_1 \mid 0) + E(U_2 \mid 1)]\} \\
& + P(X_1 = 1 \text{ and } X_2 = 0)\,\{E(U_1 \mid 1) - \tfrac{1}{2}[E(U_1 \mid 1) + E(U_2 \mid 0)]\} \\
& + P(X_1 = 1 \text{ and } X_2 = 1)\,\{E(U_2 \mid 1) - \tfrac{1}{2}[E(U_1 \mid 1) + E(U_2 \mid 1)]\} \\
= {} & \frac{a_2^2 + 5a_2}{12(a_2^2 + 3a_2 + 2)},
\end{aligned}$$

where P(X1 = 0 and X2 = 0), etc., are unconditional probabilities obtained by integrating the conditionals P(X1 = 0 | u1), etc., over all possible utility values and then, since we assumed independence, multiplying P(X1 = 0) by P(X2 = 0).
To find the optimal bias against alternative 2, differentiate V with
respect to a2 and set the result equal to zero (second-order conditions can
be evaluated by examining nearby values of V). Restricting our attention to
values of a2 greater than or equal to 1, we find that V takes on its maximum value at a2 = 1 + √6, or approximately 3.45. There, V(X) is approximately .100, whereas an unbiased advisor would have V = .083. Thus an advisor biased against one alternative offers a higher expected value than one who is unbiased.⁷
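
The optimum just reported can be reproduced by brute force. The code below is an illustrative recomputation of mine (not the author's program): it evaluates equation (4) over the four possible advice outcomes, using the posterior means of equation (2) with a1 = 1 and a2 = a, and compares the result with the closed form derived above.

```python
# An illustrative recomputation (my code) of V(X) from equation (4) for a
# source with a1 = 1 and bias a2 = a against alternative 2, under uniform
# priors. It reproduces the closed form (a**2 + 5a)/(12(a**2 + 3a + 2)) and
# the optimum at a = 1 + sqrt(6), approximately 3.45.
import math

def value_of_source(a):
    # Posterior means from equation (2), for a1 = 1 and a2 = a.
    E1 = {1: 2/3, 0: 1/3}
    E2 = {1: (a + 1) / (a + 2), 0: (a + 1) / (2 * (a + 2))}
    # Unconditional advice probabilities under the uniform priors.
    P1 = {1: 1/2, 0: 1/2}
    P2 = {1: 1 / (a + 1), 0: a / (a + 1)}
    v = 0.0
    for x1 in (0, 1):
        for x2 in (0, 1):
            best = max(E1[x1], E2[x2])
            baseline = (E1[x1] + E2[x2]) / 2   # either alternative might have been chosen
            v += P1[x1] * P2[x2] * (best - baseline)
    return v

def closed_form(a):
    return (a**2 + 5*a) / (12 * (a**2 + 3*a + 2))

for a in (1.0, 1 + math.sqrt(6), 10.0):
    print(a, value_of_source(a), closed_form(a))   # ~.083 at a = 1, ~.100 at the optimum
```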
Discussion
To understand this result intuitively, consider what the decision maker
gains and loses in using an optimally biased advisor instead of an unbiased
one. The biased advisor is more likely to say that alternative 1 is "good" and
alternative 2 "bad" when in reality alternative 2 has the higher utility; this is
a loss for the decision maker. At the same time, it becomes safer to respond
to the information that alternative 2 is "good" by choosing 2. Indeed,
coming from the optimally biased advisor, a signal that alternative 2 is
"good" makes the information about 1 irrelevant, since 2 will be chosen
regardless. The probability of one kind of error, mistakenly choosing
1, is increased, while the probability of another kind, mistakenly choosing
2, is reduced, and the tradeoff is a profitable one.
Using the expression derived for V(X), we can determine what happens
to the value of an advisor as the bias changes. For a1 = 1, we already know
⁷ Alternatively, we could restrict a1 and a2 to be equal; in that case, V would take on its highest value, about .086, when a1 = a2 = 2. Thus the source biased only against candidate 2 is also superior to one which is, in this case, biased equally against both candidates.


that V(X) is .083 when a2 = 1, and that it increases to about .100 when a2
reaches 3.45. Further inspection of the formula reveals that V(X) declines
smoothly thereafter, but, surprisingly, does not decline toward 0. Rather, as
a2 becomes larger and larger, V(X) asymptotically approaches its unbiased
value of .083. No matter how large the bias is, however, V(X) never quite
reaches as low as .083. Hence, for the initially ignorant decision maker any
biased advisor is more valuable than the unbiased advisor, although the
difference becomes negligible outside the neighborhood of a2 = 3.45. In
other words, the tradeoff between dependability of a "good" pronounce-
ment and inaccuracy of a "bad" continues to be profitable as the bias
becomes large.
Two general kinds of results concerning the reaction of a rational deci-
sion maker to environmental changes emerge from this analysis. First, since
information cost directly reduces the value of information, anything that
increases the cost of advice will decrease the use of that source in favor of
other sources or of less information gathering. This is apparent directly
from the expressions for v(f), the functional equation. Second, if the bias of
an advisor increases, his value will at first increase and then begin to
decline, but not very far. The reaction of a fully rational decision maker to
different levels of bias may be quite out of keeping with the usual intuitive
notions of "rational" decision-making behavior.
The details of the results in this section depend, naturally, upon the
particular form of the probability model used here. However, the point is a
general one: for some combinations of functional forms in a garden-variety
Bayesian decision model, the tradeoffs between different types of errors
may be such that an "objective," "evenhanded," or "unbiased" source of
information is inferior to a source with a (known) bias. This result does not
depend upon skewed prior beliefs or favorable cost differences between
the sources.

SELECTIVE ATTENTION AS OPTIMAL STATISTICAL SAMPLING

Description
We turn now to the decision maker who, instead of being ignorant about
the alternatives' payoffs, has an initial predisposition in favor of one of
them. Suppose the decision maker believes that policy 1 will likely offer a
good result while policy 2 will probably offer a worse result. Suppose that
the decision maker can, if she wishes, improve on this information by
consulting one of two available information sources. Advisor Y is unbiased
toward both alternatives. Advisor X is biased in favor of alternative 1 and
against alternative 2. In other words, the decision maker can choose


between a "neutral" source of information and a source that she knows is


more likely to agree, perhaps mistakenly, with her own predispositions.
This is precisely the situation in which students of the decision-making
process have observed selective exposure and selective attention: already
predisposed toward one alternative and against another, the decision
maker consults information sources likely to agree with that preconcep-
tion, rather than consulting objective or unbiased sources. The analysis
below demonstrates that the fully rational decision maker, aware of this
predisposition, the biases of the sources, and the risky nature of the choices
involved, will under specifiable conditions choose the biased over the
objective information.
The intuition behind this result is as follows. For the decision maker
already strongly predisposed toward policy 1, the objective advice from Y
will not reverse the preference between the alternatives, regardless of what
it is (even though it might narrow the perceived difference between them).
However, there is always the possibility (small but significant) that the
biased advisor X will unexpectedly pronounce policy 1 bad or policy 2
good or both. Such an unexpected finding makes a big impression on the
decision maker; it is enough to reverse prior preferences. The information
from advisor Y can cause no change in the decision maker's choice and is
thus worthless. The information from advisor X can make a difference, and
therefore has positive value.

Analysis
Let the prior beliefs of the decision maker be represented by the proba-
bility density functions
$$f_1(u_1) = 2u_1 \text{ on the unit interval, } 0 \text{ elsewhere;}$$
$$f_2(u_2) = 2(1 - u_2) \text{ on the unit interval, } 0 \text{ elsewhere.}$$

These prior densities are illustrated in figure 3a. A slight generalization of the previous model allows for an advisor to be biased in favor of one
alternative as well as against the other. As before, let the information from
advisor X about alternative 2 be biased against that alternative by a
parameter a:

$$X_2 = 1 \text{ with probability } u_2^{a}, \qquad X_2 = 0 \text{ with probability } 1 - u_2^{a}.$$


[Figure 3a. Prior beliefs favoring alternative 1: the densities f1(u1) = 2u1 and f2(u2) = 2(1 - u2).]

But now let the advice about alternative 1 be biased in the opposite fashion; in the present model this can be most easily accomplished by

$$X_1 = 1 \text{ with probability } 1 - (1 - u_1)^{a}, \qquad X_1 = 0 \text{ with probability } (1 - u_1)^{a}.$$

These sampling distributions are illustrated in figure 3b. Suppose finally that the decision maker has a choice between consulting X once or consulting Y, which is completely unbiased, once. Our decision maker is now in the classic setting in which psychologists study the phenomena of selective

".1.0

.0~~~~~~~~~~~~~~01

0o

ffi ? actual utility us 1.0

1.0

X/

? 0 actual utility u2 1.0

FIGURE
3B
SOURCE XBIASED IN FAVOR OF ALTERNATIVE I AND AGAINST
ALTERNATIVE
2, WITHa =3
exposure and selective attention: the expected value of alternative 1 is 2/3; that of alternative 2 is only 1/3; and there is a choice between a neutral source and one that is very likely to agree with the decision maker's predisposition. For example, if a = 3 then X will pronounce alternative 1 good and alternative 2 bad with probability 0.81, according to the decision maker's prior beliefs. The decision maker portrayed in the political science literature is likely to choose advisor X. But will the rational decision maker opt for the more "objective" information from Y?
The relevant updated density functions and expectations can be calculated again as before. Equation (4) can then be used to calculate the value of consulting either of the two advisors. As it turns out, the advice from


source Y is never sufficient to alter the decision maker's predisposition toward alternative 1, although the precise utility expectations will change. The updated expectations are:

$$E(U_1 \mid Y_1 = 1) = 3/4; \quad E(U_1 \mid Y_1 = 0) = 1/2;$$
$$E(U_2 \mid Y_2 = 0) = 1/4; \quad E(U_2 \mid Y_2 = 1) = 1/2.$$
Even in the unlikely (according to the decision maker's prior beliefs) event
that alternative 1 is pronounced bad and alternative 2 good, the result is that
the decision maker becomes indifferent toward them, and might as well
still choose alternative 1. In other words, advice from Y cannot possibly
force the decision maker to change plans, so
V(Y) = 0.
Advice biased in favor of alternative 2 would make even less of an
impression: the decision maker would expect to see it disagree with his
predispositions, and would alter his beliefs very little. But the advice from
source X, biased in favor of alternative 1, can make a difference. The
updated expectations would be:

$$E(U_1 \mid X_1 = 1) = \frac{2(a^2 + 6a + 11)}{3(a + 3)^2}; \qquad E(U_1 \mid X_1 = 0) = \frac{2}{a + 3};$$
$$E(U_2 \mid X_2 = 0) = \frac{(a + 1)(a + 5)}{3(a + 3)^2}; \qquad E(U_2 \mid X_2 = 1) = \frac{a + 1}{a + 3}.$$

Now if source X, biased in favor of alternative 1, nonetheless advises that alternative 2 is good, there will be a large effect upon the expected utilities. Comparing the formulas above, one finds that alternative 2 will come to be preferred whenever X1 = 0 ("bad") and X2 = 1 ("good") for any value of a greater than 1. If a > √13, then alternative 2 will also be preferred when X pronounces both alternatives good or both bad. Thus the value of advice from source X will be positive, since it can affect choices; for larger biases it takes on an added importance since then "ties" are awarded to alternative 2 as well.
As in the previous section, we can calculate the value of advice from
source X:
$$V(X) = \frac{4(a - 1)}{(a + 1)^2 (a + 2)^2 (a + 3)}, \quad \text{when } a \le \sqrt{13};$$
$$V(X) = \frac{4(a - 1)}{(a + 1)^2 (a + 2)^2 (a + 3)} + \frac{4a(a^2 - 13)}{3(a + 1)^2 (a + 2)^2 (a + 3)}, \quad \text{when } a \ge \sqrt{13}.$$
Of course, X usually says that 1 is good and 2 is bad. The probability of
doing otherwise can be derived:


[Figure 4. Value of information as a function of bias, for the predisposed decision maker.]

$$P[X \ne (1, 0)] = \frac{4(a^2 + 3a + 1)}{(a^2 + 3a + 2)^2},$$
which is 0.283 when a = 2,⁸ 0.145 when a = √13, and 0.030 when a = 10: quite small for larger biases, but still providing a positive expected payoff.
Figure 4 presents a graph of V(X) for values of a between 1 and 20. The
maximum value occurs when a is approximately 5.94, at which point V(X)
is about .007. A distinct local maximum also occurs at about a = 1.66, where V(X) temporarily peaks at .006. The kink in the graph between these two peaks is the point where a = √13, where "ties" begin to favor alternative 2.
Notice that V(X) declines toward 0 as bias becomes large; but for any
positive level of bias (i.e., a > 1), the value of X exceeds that of the unbiased
source Y. Thus just as in the previous section, any biased source outper-
forms an unbiased source, although the difference is small for very large
biases.
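
These values can likewise be recovered numerically. The sketch below is my illustrative code, not the author's: it takes the triangular priors and the biased sampling rules defined in this section, evaluates equation (4) with alternative 1 as the ex ante favorite, and reproduces V(Y) = 0 at a = 1 together with the two local peaks of V(X) described above.

```python
# An illustrative recomputation (my code) of the predisposed decision maker's
# problem: priors f1(u) = 2u and f2(u) = 2(1 - u), and a source X with
# P(X1 = 1 | u1) = 1 - (1 - u1)**a and P(X2 = 1 | u2) = u2**a. The value of
# advice is computed from equation (4), with alternative 1 as the ex ante
# favorite; it reproduces V(Y) = 0 for the unbiased source (a = 1) and the
# two local peaks of V(X) near a = 1.66 and a = 5.94.
import numpy as np

u = np.linspace(0.0, 1.0, 100_001)
f1, f2 = 2.0 * u, 2.0 * (1.0 - u)            # prior densities favoring alternative 1

def trapezoid(y, x):
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

def post_mean(prior, likelihood):
    w = prior * likelihood
    return trapezoid(u * w, u) / trapezoid(w, u)

def value_of_advice(a):
    g1 = 1.0 - (1.0 - u) ** a                # P(X1 = 1 | u): biased toward alt. 1
    g2 = u ** a                              # P(X2 = 1 | u): biased against alt. 2
    E1 = {1: post_mean(f1, g1), 0: post_mean(f1, 1.0 - g1)}
    E2 = {1: post_mean(f2, g2), 0: post_mean(f2, 1.0 - g2)}
    P1 = {1: trapezoid(f1 * g1, u), 0: trapezoid(f1 * (1.0 - g1), u)}
    P2 = {1: trapezoid(f2 * g2, u), 0: trapezoid(f2 * (1.0 - g2), u)}
    v = 0.0
    for x1 in (0, 1):
        for x2 in (0, 1):
            gain = max(E1[x1], E2[x2]) - E1[x1]   # gain over sticking with alternative 1
            v += P1[x1] * P2[x2] * gain
    return v

print(value_of_advice(1.0))                  # ~0: the unbiased source Y is worthless here
print(value_of_advice(1.66), value_of_advice(5.94))   # ~.006 and ~.007, the two peaks
```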
Also as in the previous section, we can use the derived expression for
V(X) to predict the behavior of identical decision makers in different
environments, or of a single decision maker when the environment
changes. If there is a choice between an unbiased advisor and any advisor
biased toward alternative 1 and against 2, the decision maker will choose
the biased one even at some extra cost. Suppose, however, that there is a
choice between two biased advisors, X and Z. If Z has very little bias, then
V(Z) is near 0 and X is preferred. As the bias of Z increases toward the local
maximum at a = 1.66, it may come to be preferred over X. If the bias of Z is
further increased, it falls again, rises again to the global maximum, and falls
⁸ The more relevant probability when a = 2 is P[X = (0, 1)], or about 0.028; in general, for all a < √13 the probability of choosing candidate 2 is thus small.


yet again thereafter. Consequently, patterns of choice between two such advisors may be very complicated indeed. But if we are able to measure the
biases and to estimate the beliefs of the decision maker, the changes in these
choices can be predicted.

Discussion
Thus our rational decision maker chooses to consult an advisor who is
biased toward her own point of view, even though neutral advice is
available. Again, this is not due to any cost advantage of biased advice. The
rational decision maker in the situation modeled here would not consult Y
at any positive cost, but would be willing to pay a positive amount [any-
thing less than V(X)] to consult X. Also, this choice of information does not
depend on any consumption value derived from learning or using the
information from source X. This rational strategy for seeking advice would
be labeled as "bolstering" by any observer possessing all the relevant facts
except the model of advising we have used.
In the real world, where the choice of information-gathering strategies is
not restricted to one consultation of a single source, rational selective
exposure will appear in a somewhat more complicated form. Other things
equal, we would expect the value at any time of seeking further advice
from a source like X to be higher than that of consulting Y. Over a large
number of decision problems, more X-type than Y-type advisors should be
consulted. In addition, there may be many biased advisors among whom to
choose, in which case the problem generalizes to one of choosing an
advisor with optimal bias.
The opportunity to consult sequentially many different information
sources would provide an additional reason for the rational decision maker
to engage in selective exposure beyond those derived here. A biased
source, because of its potential to reveal an unexpected recommendation,
offers the opportunity to avoid further information gathering if it is con-
sulted early in the game. In fact, it is possible to construct examples, based
on the model used in this paper, in which a rational decision maker will
delay consulting an objective source Y in order to consult an agreeable,
biased source X first, where X is more expensive than Y and has lower value
in the single-consultation problem (the one analyzed above) than does Y
(Calvert, 1982). The phenomenon is related to that found by Weitzman
(1979) in analyzing the "Pandora paradox" of optimal sequential search.

SUMMARY AND CONCLUSIONS
Summary
The two previous sections have demonstrated three separate reasons
why biased information may be preferred to unbiased information, even if


there is no cost advantage. Our discussion of optimal bias showed that, even for an initially undecided decision maker, the optimal level of bias
need not be zero. This effect occurs because a biased advisor recommend-
ing the alternative that he was supposed to have been biased against is
likely thereby to prevent the decision maker from making a relatively large
error. In the particular model used here, this effect more than offsets the
possible failure of such an advisor to warn against small errors. All this
depends, naturally, upon the particular functional forms used, but the ones
used here are not obviously lopsided or pathological.
Our discussion of selective attention showed that a decision maker with a
strong predisposition toward one alternative might not benefit at all from
having advice from an unbiased source, since regardless of its content that
advice could not cause her to change her mind. In the real world, where one
seldom faces a situation in which only a single observation is possible, this
effect corresponds to the likelihood that only a large number of such
recommendations could change the decision maker's predisposition.⁹ With
a biased advisor, however, there is the possibility that a single recommen-
dation could reverse the decision maker's preference between alternatives.
Finally, it was noted at the end of that section that, in a setting of sequential
sampling, the optimal information-gathering procedure might use biased
advisors first, since this might eliminate the need for any further consulting,
again by giving an unexpected recommendation.
In each of the preceding sections, it was possible to derive some basic
results about the response of a rational decision maker to environmental
change. Having derived the value of advice as a function of bias, we can
connect changes in bias with changes in the patterns of a decision maker's
choice among advisors. Likewise, changes in the effective cost of informa-
tion from a source will lead to predictable changes in the use of the sources.

Applications in Politics and Political Science


The results presented here suggest how useful it can be for a decision
maker to maintain advisors whose beliefs or self-interest bias their advice in
ways known to the advisee. This idea applies to an endless variety of
decision-making settings and of types of advice. The use of technical
experts by an executive policymaker is one obvious field of application, as
is the reliance on specialized underlings by a bureaucrat or legislator. A
related problem occurs when a policymaker chooses among several costly
and time-consuming methods of policy analysis. Each method may have its

⁹ Recent evidence from the psychology laboratory indicates that such "incongruous"
information is indeed of primary importance to makers of limited-information decisions in a
setting of presumptive bias. See Lingle, Dukerich, and Ostrom (1983).


characteristic biases, and the most neutral method might not be the best
one, once all the relevant costs and benefits are taken into account. Even
rational voters might have good reason to "select out of the passing stream
of stimuli those by which they are inclined to be persuaded" (Lazarsfeld et
al., 1944, p. 82) and to consult the positions of biased interest groups for
advice on the proper side to take on complex new policy issues, as Kuk-
linski et al. (1982) found.
The analysis presented in this paper offers at least two lessons for
students of political decision making. First, it is not possible to infer, from
the fact that he favors biased information, that a decision maker is not being
instrumentally rational in deciding a policy question.¹⁰ Information-
gathering methods that are rational under the model propounded here
would appear irrational to any observer using a less inclusive model of the
learning and choosing process. Precepts such as "sensitivity to relevant
information," "objectivity," or "maintaining receptivity" have a strong
intuitive appeal as hallmarks of rational information gathering (Stein-
bruner, 1974, p. 330; Janis and Mann, 1977, p. 11). But strange things can
happen when we actually model rational decision making under imperfect
information.
Second, the model analyzed here provides an example of how one can
build a theory of bounded rationality directly from initial assumptions
about imperfect information and limited processing abilities. Apart from
its special distributional forms and particular versions of "advice" and
"bias," the model contains some features of more general applicability. The
Bayesian approach provides a complete, off-the-shelf model of rational
learning, although another well-specified theory could be substituted
should one become available. The placing of positive weight both on prior
beliefs and on new information is an often noted (and sometimes criti-
cized) feature of real decision making. Perhaps most centrally, the deriva-
tion of the ex ante value of information is a step that must precede any
attempt to specify rational behavior under these conditions. The advantage
of building such a model is that it yields predictions about behavior without
first applying empirical observations, often smuggled in under the form of

¹⁰ Our model, on the other hand, has nothing to say about the decision-making "irrationality" that occurs as a result of bureaucratic politics or any other group decision process. Authors
such as Allison (1971) and Steinbruner (1974) criticize the "rational actor model," which tries
to account directly for the outcomes of organized decision making by positing rational pursuit
of a group goal. However, rational choice theorists since Arrow (1951) have realized that
individual rationality in any decision process is completely consistent with (and may indeed
require) group irrationality. The object of exercises such as this paper is to explain group
decisions from a starting point of individual rationality; there are no guarantees as to the
"rationality," in a social or group sense, of the resulting outcomes.


extra, a priori assumptions about the nature of satisficing behavior or cognitive style.

Conclusion
Analysts of decision-making processes recognize that imperfect infor-
mation about alternatives is a critical feature of real-world decision processes.
Decision models that incorporate strategic decisions about the use of
information, as well as risky choice, such as those of McCall (1970) and
Rothschild (1974) on economic behavior and Shepsle (1972) on strategies
of electoral candidates, add a layer of nontrivial complication to our
understanding of optimal decision making.
The multiple-source information-gathering model in this paper goes a
step farther and shows that the optimal use of information has several
counterintuitive features. Thus, when we draw inferences about the nature
of decision making from our observations of how information is used, it is
not a good idea to rely on simple, intuitive notions of rational information
use. Depending on particular conditions of uncertainty and information
costs, rational choice may call for seemingly irrational use of information.
More generally, the model in this paper serves as an example of how one
ought to theorize about the effects of imperfect information and con-
strained analytical abilities on the behavior of decision makers. Just
because the real world does not meet the conditions of simple rational
decision models, the analyst cannot abandon the rigorous study of how
goals determine choices. If one believes that cognitive abilities are
"limited" and that rationality is "bounded," theoretical parsimony dictates
that one try to specify those limits and bounds and derive their implica-
tions, rather than introduce special assumptions about compensating
behavior. This is not to deny the value of empirical studies of decision and
cognition, but rather to distinguish between such studies and the effort to
develop a general theory about decision making.

REFERENCES

Allison, Graham (1971). The Essence of Decision. Boston: Little, Brown.


Arrow, K. J. (1951). Social Choice and Individual Values. New Haven: Yale University Press.
Broadbent, D. E. (1958). Perception and Communication. London: Pergamon.
Calvert, Randall L. (1982). "The Rational Preference for Biased Information." Center for the
Study of American Business Working Paper Number 75. Washington University, St.
Louis.
De Groot, Morris H. (1970). Optimal Statistical Decisions. New York: McGraw-Hill.
Feller, William (1968). An Introduction to Probability Theory and Its Applications. New York: John Wiley & Sons.


George, Alexander L. (1980). Presidential Decisionmaking in Foreign Policy: The Effective Use of Information and Advice. Boulder, CO: Westview Press.
Grether, David M., and Charles R. Plott (1979). "Economic Theory of Choice and the
Preference Reversal Phenomenon." American Economic Review 69: 623-38.
Horowitz, Donald L. (1977). The Courts and Social Policy. Washington, D.C.: The Brookings
Institution.
Janis, Irving L., and Leon Mann (1977). Decision Making. New York: The Free Press.
Jervis, Robert (1976). Perception and Misperception in International Politics. Princeton, NJ:
Princeton University Press.
Kahneman, Daniel, and Amos Tversky (1979). "Prospect Theory: An Analysis of Decision
Under Risk." Econometrica 47: 263-91.
Kuklinski, James H., Daniel S. Metlay, and W. D. Kay (1982). "Citizen Knowledge and
Choices on the Complex Issue of Nuclear Energy." American Journal of Political Science
26: 615-42.
Lazarsfeld, Paul F., Bernard R. Berelson, and Hazel Gaudet (1944). The People's Choice. New
York: Columbia University Press.
Lingle, John H., Janet M. Dukerich, and Thomas M. Ostrom (1983). "Accessing Information in
Memory-Based Impression Judgments: Incongruity versus Negativity in Retrieval Selec-
tivity." Journal of Personality and Social Psychology 44: 262-72.
McCall, John J. (1970). "Economics of Information and Job Search." Quarterly Journal of
Economics 84: 113-26.
Malbin, Michael J. (1980). Unelected Representatives: Congressional Staff and the Future of
Representative Government. New York: Basic Books.
Moray, Neil, and M. Fitter (1973). "A Theory and the Measurement of Attention." In
S. Kornblum (ed.), Attention and Performance IV. New York: Academic Press.
Raiffa, Howard (1968). Decision Analysis. Reading, MA: Addison-Wesley.
Rothschild, Michael (1974). "Searching for the Lowest Price When the Distribution of Prices is
Unknown." Journal of Political Economy 82: 689-711.
Shepsle, Kenneth A. (1972). "The Strategy of Ambiguity: Uncertainty and Electoral Competi-
tion." American Political Science Review 66: 555-68.
Simon, H. A. (1947). Administrative Behavior. New York: Macmillan.
Steinbruner, John D. (1974). A Cybernetic Theory of Decision. Princeton, NJ: Princeton
University Press.
Weatherford, M. Stephen (1983). "Economic Voting and the 'Symbolic Politics' Argument: A
Reinterpretation and Synthesis." American Political Science Review 77: 158-74.
Weitzman, Martin L. (1979). "Optimal Search for the Best Alternative." Econometrica 47:
641-54.
Wicklund, Robert A., and Jack W. Brehm (1976). Perspectives on Cognitive Dissonance.
Hillsdale, NJ: Lawrence Erlbaum Associates.
