Decision-Making in the Presence of Risk

Mark J. Machina

The author is professor of economics at the University of California, San Diego, La Jolla, CA.

Proposed in the 18th century by Cramer and Bernoulli and formally axiomatized in the 20th century by von Neumann and Morgenstern and others, the expected utility model has long been the dominant framework for the analysis of decision-making under risk. A growing body of experimental evidence, however, indicates that individuals systematically violate the key behavioral assumption of this model, the so-called independence axiom. This has led to the development and analysis of non-expected utility models of decision-making. Although recent work in this area has shown that the analytical results of expected utility theory are more robust than previously supposed, other important issues remain unresolved.

Virtually all social, economic, or technological decisions involve some degree of risk or uncertainty. In some cases, such as games of chance, the probabilities of the alternative consequences can be accurately determined. In other cases, actuarial or engineering data must be used to construct estimates of these likelihoods. To the extent that these probabilities can be quantified, however, individuals' attitudes toward risk can be subjected to theoretical analysis and empirical testing.

The 17th-century founders of modern probability theory, such as Pascal and Fermat, assumed that individuals would evaluate alternative monetary gambles on the basis of their expected values, so that a lottery offering the payoffs (x_1, ..., x_n) with respective probabilities (p_1, ..., p_n) would yield as much satisfaction as a sure payment equal to its expected value x̄ = Σ p_i x_i. Such an approach could be justified by appealing to the law of large numbers, which states that if a gamble is indefinitely and independently repeated, its long-run average payoff will necessarily converge to its expected value.

However, in a one-shot choice situation which cannot be repeated or averaged, individuals may well base their decisions on more than just the expected values of the alternative prospects. This point was dramatically illustrated by an example offered by Nicholas Bernoulli in 1728 and now known as the St. Petersburg Paradox: Suppose someone offers to toss a fair coin repeatedly until it lands heads, and to pay you $1 if this occurs on the first toss, $2 if it takes two tosses to land a head, $4 if it takes three tosses, $8 if it takes four tosses, and so on. What is the largest sure payment you would be willing to forgo in order to undertake a single play of this game? This game offers a 1/2 chance of winning $1, a 1/4 chance of winning $2, and so on; its expected payoff is (1/2)$1 + (1/4)$2 + (1/8)$4 + ... = $1/2 + $1/2 + $1/2 + ... = $∞. However, even if such a well-backed offer could be made, it was felt that few individuals would forgo more than, say, $20 for a one-shot play, a far cry from the (infinite) expected value (1). The resolution of this paradox, proposed independently by Gabriel Cramer and Daniel Bernoulli (Nicholas's cousin), would form the basis for the modern theory of decision-making under risk (2).
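Both halves of the paradox, the divergent expected value and the modest worth of a single play, are easy to verify numerically. The following Python sketch is an illustration added here, not part of the original article; the function names and the choice of 10,000 simulated plays are arbitrary.

```python
import random

def truncated_expected_value(n_terms):
    """Partial sum of the St. Petersburg series: each term
    (1/2**k) * 2**(k - 1) equals $0.50, so the sum grows without bound."""
    return sum((0.5 ** k) * 2 ** (k - 1) for k in range(1, n_terms + 1))

def play_once(rng):
    """Toss a fair coin until it lands heads; the payoff starts at $1
    and doubles with each additional toss required."""
    tosses = 1
    while rng.random() < 0.5:  # tails with probability 1/2
        tosses += 1
    return 2 ** (tosses - 1)   # $1, $2, $4, $8, ...

rng = random.Random(0)
print(truncated_expected_value(40))      # 20.0: forty terms already reach $20
sample = [play_once(rng) for _ in range(10_000)]
print(sum(sample) / len(sample))         # typical sample averages stay modest
```

Because every term of the series contributes $0.50, the truncated expectation passes any finite bound, yet the payoff of a single play is small with high probability, which is why a $20 valuation is unsurprising.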
"Arguing tha again of $1000 was not necessarily valued ten times asmuch asa gain of $100, Cramer and Bernoulli hypothesized that individvals possess a clty of wealth function U's), and that they would value a lottery on the basis of ts expected utility # = EU)p, rather than its expected value ® = Sg, If uty took the logarth mic form U(x) = In(x, for example, the sure monetary gain € which would yield the same level of satisfiction as the St. Petersburg ‘gamble would be given by the solution £0 Ingw + §) = (W2)Ingw +1) + (UA)ngw + 2) + (UB)In(w + 4) + where denotes the individual’ initial wealth, If w = $1000, the individual would be indiferent between taking this gamble or receiving a sure gain of about $6, if » = $50,000 this amount is about $9. OF course, someone with a different utility funetion U* (°) would assign a different sure monetary equivalent. “Two centuries later, this approach was formally asiomatized by Frank Ramsey in his treatise on the philosophy of belief, by John ‘von Neumann and Oskar Morgenstern in their development of the theory of games, and by Leonard Savage in his work on the foundations of statistical inference (3), The simplicity and intuitive appeal of its axioms, the clegance of its representation of risk attitudes in terms of properties of the utility function, and the tremendous number of theoretical resus it has produced have led the expected utility model to become the dominant, and indeed, almost exclusive model of decision-making under rsk in economics, ‘operations research, philosophy, and statistical decision theory (4). ‘However, these theoretical advances have been accompanied by an accumulating body of empirical evidence suggesting that individ- uals systematically violate the predictions of the expected utility ‘model. The largest and most systematic class of these violations concerns the key behavioral assumption of the model, the so-called independence axiom. This has led to a growing tension in the field of deton thcry, with defenders ofthe expected tty approach stressing the “rationality” ofthis axiom and the theoretical power the model, and others emphasizing the importance ofthe emp evidence and developing alternatives to the expected utility model, My purpose in this article is to give an overview of these developments in the feld of decision-making under risk. The following sections provide a description of the expected utility ‘model both 2s a theoretical tool and as a behavioral hypothesis, 2 summary of the evidence on systematic violations of the indepen- dence axiom, and a report on the newer non-expected utiliey models of decision-making currently being developed. This latter work has shown that the basic results of expected utility analysis are in fact Quite robust to violations of the independence axiom. However, Separate evidence suggests that some of the other standard assump. tions in the theory of choice under uncertainty may also be suspect. ARTICLES. 937 ‘The Expected Utility Model ‘The expected utility mode follows standard economic theory by specifying a set of objects of choice and assuming thatthe individ- aul’ preferences can be represented by a real-valued function over this choicest. Since itis a model of decision-making under risk, the objets of choice are not the uitimate outcomes that might obtain {for example, alternative wealth evel) but rather probability dst bbuions ver dese outcomes. Given 3 set fx,» 8) of potenti ‘outcomes, the choice set hus consists ofall probability distributions Pe (Pig ev oa) Over fs. 
Two centuries later, this approach was formally axiomatized by Frank Ramsey in his treatise on the philosophy of belief, by John von Neumann and Oskar Morgenstern in their development of the theory of games, and by Leonard Savage in his work on the foundations of statistical inference (3). The simplicity and intuitive appeal of its axioms, the elegance of its representation of risk attitudes in terms of properties of the utility function, and the tremendous number of theoretical results it has produced have led the expected utility model to become the dominant, and indeed almost exclusive, model of decision-making under risk in economics, operations research, philosophy, and statistical decision theory (4).

However, these theoretical advances have been accompanied by an accumulating body of empirical evidence suggesting that individuals systematically violate the predictions of the expected utility model. The largest and most systematic class of these violations concerns the key behavioral assumption of the model, the so-called independence axiom. This has led to a growing tension in the field of decision theory, with defenders of the expected utility approach stressing the "rationality" of this axiom and the theoretical power of the model, and others emphasizing the importance of the empirical evidence and developing alternatives to the expected utility model.

My purpose in this article is to give an overview of these developments in the field of decision-making under risk. The following sections provide a description of the expected utility model both as a theoretical tool and as a behavioral hypothesis, a summary of the evidence on systematic violations of the independence axiom, and a report on the newer non-expected utility models of decision-making currently being developed. This latter work has shown that the basic results of expected utility analysis are in fact quite robust to violations of the independence axiom. However, separate evidence suggests that some of the other standard assumptions in the theory of choice under uncertainty may also be suspect.

The Expected Utility Model

The expected utility model follows standard economic theory by specifying a set of objects of choice and assuming that the individual's preferences can be represented by a real-valued function over this choice set. Since it is a model of decision-making under risk, the objects of choice are not the ultimate outcomes that might obtain (for example, alternative wealth levels) but rather probability distributions over these outcomes. Given a set {x_1, ..., x_n} of potential outcomes, the choice set thus consists of all probability distributions P = (p_1, ..., p_n) over {x_1, ..., x_n}, where p_i denotes the probability of obtaining x_i and Σ p_i = 1.

The model then assumes that the individual's preferences can be represented by a real-valued maximand or preference function V(·) over probability distributions, in the sense that the distribution P* = (p_1*, ..., p_n*) is preferred to P = (p_1, ..., p_n) if and only if V(P*) > V(P), and is indifferent to P if and only if V(P*) = V(P). The essence of the expected utility approach is that V(·) takes the linear form V(P) = Σ U(x_i) p_i for some set of coefficients {U(x_i)}, so that expected utility preferences can be described as being linear in the probabilities. When the outcome set is a continuum such as the interval [0, M], the probability distribution of a random variable x̃ over [0, M] can be represented by its density function f(·) or, more generally, by its cumulative distribution function F(·) [where F(x) = prob(x̃ ≤ x)], and preferences over such distributions are assumed to be representable by linear preference functionals of the form V(f) = ∫ U(x) f(x) dx or V(F) = ∫ U(x) dF(x), which can again be interpreted as the expectation of U(·) (5). Since it is clear that the transformed utility function a·U(·) + b (a > 0) will generate the same ranking over distributions as U(·), utility functions are often normalized so that U(0) = 0 and U(M) = 1.

Figure 1 illustrates how this model can be used to represent various attitudes toward risk. The monotonicity of the utility functions U(·) and U*(·) in the figures reflects the property of stochastic dominance preference, where one probability distribution is said to stochastically dominate another if it can be obtained from the latter by a sequence of rightward shifts of probability mass. Since such shifts raise the probability of obtaining at least x for all values of x, stochastic dominance preference is the probabilistic analogue of the view that "more is better."

[Fig. 1. (A) Concave utility function of a risk averse individual. (B) Convex utility function of a risk preferring individual.]
[Fig. 2. (A) Relatively steep indifference curves of a risk averse individual. (B) Relatively flat indifference curves of a risk preferring individual.]

The points x̄ = (2/3)x′ + (1/3)x″ in the figure denote the expected value of the gamble offering a 2/3 : 1/3 chance of the outcomes x′ or x″, and ū = (2/3)U(x′) + (1/3)U(x″) and ū* = (2/3)U*(x′) + (1/3)U*(x″) give the expected utilities of this gamble for U(·) and U*(·). For the concave (that is, bowed upward) utility function U(·) we have ū < U(x̄), implying that the individual would rather receive a sure payment equal to the expected value of the gamble than actually take the gamble itself. For the convex (bowed downward) utility function U*(·) we have ū* > U*(x̄), so that this individual would prefer to take the risk rather than receive a sure payment of x̄. Since Jensen's inequality (6) implies that these respective attitudes will extend to all risky gambles, U(·) is referred to as risk averse and U*(·) as risk preferring (7). Researchers such as Arrow and Pratt have shown how the relative concavity or convexity of a utility function, as measured by the curvature index −U″(x)/U′(x), can lead to theoretical predictions of how risk attitudes, and hence behavior, will vary with wealth or across individuals in a variety of different risky situations (8).
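The comparison in Fig. 1 can be reproduced numerically. A minimal Python sketch, assuming an illustrative concave (square-root) utility for U, a convex (quadratic) utility for U*, and a 2/3 : 1/3 gamble over $100 and $400; none of these particular choices come from the article:

```python
def expected_utility(u, outcomes, probs):
    """Expected utility: the sum of u(x_i) * p_i for a finite lottery."""
    return sum(u(x) * p for x, p in zip(outcomes, probs))

def concave_u(x):   # risk averse: second derivative negative, curve bowed upward
    return x ** 0.5

def convex_u(x):    # risk preferring: second derivative positive, bowed downward
    return x ** 2

outcomes, probs = [100.0, 400.0], [2 / 3, 1 / 3]
x_bar = expected_utility(lambda x: x, outcomes, probs)   # expected value, 200.0

# Jensen's inequality: expected utility lies below U(x_bar) for the concave
# function (the sure payment is preferred) and above U*(x_bar) for the convex one.
print(expected_utility(concave_u, outcomes, probs), concave_u(x_bar))  # 13.33 < 14.14
print(expected_utility(convex_u, outcomes, probs), convex_u(x_bar))    # 60000.0 > 40000.0
```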
Although these figures illustrate the flexibility of the expected utility model compared to the Pascal-Fermat expected value model, an alternative graphical approach can be used to highlight the behavioral restrictions implied by the hypothesis of linearity in the probabilities. Consider the set of all distributions (p_1, p_2, p_3) over the fixed outcome levels {x_1, x_2, x_3}, where x_1 < x_2 < x_3. Since p_2 = 1 − p_1 − p_3, we can represent this set of distributions by the points in the unit triangle in the (p_1, p_3) plane (Fig. 2). Since upward movements in the triangle increase p_3 at the expense of p_2 (that is, shift probability mass from the outcome x_2 up to x_3) and leftward movements reduce p_1 to the benefit of p_2 (shift probability mass from x_1 up to x_2), these movements (and more generally, all northwest movements) result in stochastically dominating distributions and would accordingly be preferred. Since the individual's indifference curves, or iso-expected utility loci, in this diagram are given by the solutions to

U(x_1) p_1 + U(x_2)[1 − p_1 − p_3] + U(x_3) p_3 = constant    (2)

they will consist of parallel straight lines (the solid lines in the figures), with more preferred indifference curves lying to the northwest. This implies that knowledge of an individual's indifference curves over any small region is sufficient to know their preferences over the entire set of distributions.

The dashed lines in Fig. 2 are not indifference curves but rather iso-expected value lines, that is, solutions to

x_1 p_1 + x_2 [1 − p_1 − p_3] + x_3 p_3 = constant    (3)

Since northeast movements along these lines do not change the expected value of the distribution but do increase the probabilities of the tail outcomes x_1 and x_3 at the expense of the middle outcome x_2, they represent the set of increases in risk in this diagram (9). When the utility function U(·) is concave (risk averse), its indifference curves can be shown to be steeper than the iso-expected value lines (Fig. 2A), and increases in risk will lead to lower indifference curves. When U(·) is convex (risk preferring), its indifference curves will be flatter than the iso-expected value lines (Fig. 2B), and increases in risk will lead to higher indifference curves. If we compare two different utility functions, the one which is more risk averse (more concave) will possess the steeper indifference curves.

The property of linearity in the probabilities can also be represented as a restriction on the individual's attitudes toward probability mixtures of distributions. Given an outcome set {x_1, ..., x_n}, the α : (1 − α) probability mixture of the distributions P* = (p_1*, ..., p_n*) and P = (p_1, ..., p_n) is defined as the distribution αP* + (1 − α)P = (αp_1* + (1 − α)p_1, ..., αp_n* + (1 − α)p_n). This may be thought of as that single-stage distribution which yields the same ultimate probabilities over {x_i} as a two-stage lottery which offers an α : (1 − α) chance of winning the distributions P* or P. Since linearity of V(·) implies that V(αP* + (1 − α)P) = αV(P*) + (1 − α)V(P), expected utility preferences will exhibit the following property, known as the independence axiom (10): If P* is preferred (indifferent) to P, then the mixture αP* + (1 − α)P** will be preferred (indifferent) to αP + (1 − α)P** for all α > 0 and P**. This condition, which is in fact equivalent to the property of linearity in the probabilities, can be interpreted as follows: In terms of the ultimate probability of obtaining each outcome, the choice between the mixtures αP* + (1 − α)P** and αP + (1 − α)P** is equivalent to being offered a coin with a (1 − α) chance of landing tails, in which case you will receive the lottery P**, and being asked before the flip whether you would rather receive the lottery P* or P in the event of a head. Now either tails will come up, in which case your choice will not have mattered, or else heads will come up, in which case you are "in effect" back to a choice between P* and P.
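The equivalence between the independence axiom and linearity in the probabilities can be illustrated numerically. A minimal Python sketch, assuming three outcomes with illustrative normalized utilities and arbitrary distributions P*, P, and P**:

```python
def v(utilities, probs):
    """Linear-in-the-probabilities preference function V(P) = sum of U(x_i) * p_i."""
    return sum(u * p for u, p in zip(utilities, probs))

def mix(alpha, p_a, p_b):
    """alpha : (1 - alpha) probability mixture of two distributions."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(p_a, p_b)]

u = [0.0, 0.6, 1.0]           # normalized utilities U(x_1), U(x_2), U(x_3)
p_star = [0.1, 0.5, 0.4]      # P*
p = [0.3, 0.4, 0.3]           # P
p_2star = [0.5, 0.3, 0.2]     # P**
alpha = 0.25

# Linearity: the preference gap between the two mixtures equals alpha times
# the gap between P* and P, so mixing with a common P** never reverses a ranking.
gap_mixed = v(u, mix(alpha, p_star, p_2star)) - v(u, mix(alpha, p, p_2star))
gap_pure = v(u, p_star) - v(u, p)
print(gap_mixed, alpha * gap_pure)   # both 0.04, up to rounding
```

Since the two printed quantities are always equal under a linear V(·), the ranking of P* versus P is preserved under any common mixture, which is exactly what the axiom asserts.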
