Game and Decision Theoretic Models in Ethics
JOHN C. HARSANYI*
University of California at Berkeley
*I am indebted to the National Science Foundation for supporting this research through grant
SES-8700454 to the Center for Research in Management, University of California at Berkeley.
I. SOCIAL UTILITY
probabilities are unknown to him (or are even undefined as numerical prob-
abilities). The normative theory dealing with certainty I shall call utility theory,
while those dealing with risk and uncertainty I shall combine under the heading
of decision theory. (Sometimes utility theory is more broadly defined so as to
include also what I am calling decision theory.)
On the other hand, the theory of rational behavior in a social setting can be
divided into game theory and ethics. Game theory deals with two or more
individuals, often having very different interests, who try to maximize their own
(selfish or unselfish) interests in a rational manner against all the other
individuals who likewise try to maximize their own (selfish or unselfish)
interests in a rational manner. In contrast, ethics deals with two or more
individuals, often having very different personal interests, yet trying to promote
the common interests of their society in a rational manner.
This chapter is about ethics, and not about decision theory. Yet, in discussing
ethical problems, I shall have to refer several times to the axioms of decision
theory. Therefore, in this section I shall briefly state these axioms in a way
convenient for my subsequent discussion.
I shall sharply distinguish between axioms that merely refer to some facts
established by general logic and mathematics and axioms proper to decision
theory as such. The former I shall call background assumptions. The latter I
shall call rationality postulates. Otherwise I shall follow the Anscombe-
Aumann (1963) approach.
I shall use the following notations. Strict preference, nonstrict preference, and
indifference (or equivalence) will be denoted by the symbols >, ≥, and ~,
respectively. Unless otherwise indicated, these symbols will always refer to the
preferences and indifferences entertained by one particular individual. I shall
use the notation
e1, . . . , em and, therefore, also with the outcomes A1, . . . , Am, I shall some-
times write
Postulate 2 (Continuity). Suppose that A > B > C. Then there exists some
probability mixture

L(p) = (A, p; C, 1 - p)   (2.4)

such that L(p) ~ B.
where p1, . . . , pm are either the objective probabilities of the conditioning events
e1, . . . , em known to him or are his own subjective probabilities for these events.
Any utility function equating the utility of every lottery to its expected utility
is said to possess the expected-utility property and is called a von Neumann-
Morgenstern (vNM) utility function.
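In symbols, the expected-utility property just defined can be stated as follows, using the lottery notation already introduced in (2.4):

```latex
% Expected-utility property of a vNM utility function U_i:
% for any lottery L yielding outcome A_j with probability p_j,
U_i(L) \;=\; \sum_{j=1}^{m} p_j \, U_i(A_j),
\qquad L = (A_1, p_1; \ldots; A_m, p_m),
\qquad \sum_{j=1}^{m} p_j = 1 .
```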
As Anscombe and Aumann have shown, using the above axioms one can
first prove the theorem for risky lotteries. Then, one can extend the proof to all
Ch. 19: Game and Decision Theoretic Models in Ethics 675
Utilitarian theory makes two basic claims. One is that all morality is based on
maximizing social utility (also called the social welfare function). The other is
that social utility is a linear function of all individual utilities, assigning the
same positive weight to each individual's utility. 1
In this section and the next I shall try to show that these two claims follow
from the rationality postulates of Bayesian decision theory and from some
other, rather natural, assumptions. In this section I shall propose an equi-
probability model for moral value judgments, whereas in the next section I shall
p r o p o s e some axioms characterizing rational choices among alternative social
policies.
First of all I propose to distinguish between an individual's personal prefer-
ences and his or her moral preferences. 2 The former are his preferences
governing his everyday behavior. Most individuals' personal preferences will be
by no means completely selfish. But they will be particularistic in the sense of
giving greater weight to this individual's, his family members', and his friends'
interests than to other people's interests. In contrast, his moral pref-
erences will be his preferences governing his moral value judgments. Unlike his
personal preferences, his moral preferences will be universalistic, in the sense
of giving the same positive weight to everybody's interests, including his own
because, by definition, moral value judgments are judgments based on im-
personal and impartial considerations.
For example, suppose somebody tells me that he strongly prefers our
capitalist system over any socialist system. When I ask him why he feels this
way, he explains that in our capitalist system he is a millionaire and has a very
interesting and rewarding life. But in a socialist system in all probability he
would be a badly paid government official with a very uninteresting bureau-
cratic job. Obviously, if he is right about his prospects in a socialist system then
1Some utilitarians define social utility as the sum of all individual utilities whereas others define it
as their arithmetic mean. But as long as the number n of individuals in the society is constant, these
two approaches are mathematically equivalent because maximizing either of these two quantities
will also maximize the other. Only in discussing population policies will this equivalence break
down because n can no longer be treated as a constant in this context.
2In what follows, in similar phrases I shall omit the female pronoun.
676 J. C. Harsanyi
he has very good reasons to prefer his present position in our capitalist system.
Yet, his preference for the latter will be simply a personal preference based on
self-interest, and will obviously not qualify as a moral preference based on
impartial moral considerations.
The situation would be very different if he expressed a preference for the
capitalist system without knowing what his personal position would be under
either system, and in particular if he expected to have the same chance of
occupying any possible social position under either system.
More formally, suppose that our society consists of n individuals, to be called
individuals 1, . . . , i, . . . , n. Suppose that one particular individual, to be
called individual j, wants to compare various possible social situations s from
an impartial moral point of view. Let Ui(s) denote the utility level of individual
i (i = 1, . . . , n) in situation s. I shall assume that each utility function Ui is a
vNM utility function, and that individual j can make interpersonal utility
comparisons between the utility levels Ui(s) that various individuals i would
enjoy in different social situations s (see Section 5).
Finally, to ensure j's impartiality in assessing different social situations s, I
shall assume that j must assume that he has the same probability 1/n of ending
up in the social position of any individual i with i's utility function Ui as his own
utility function. (This last assumption is needed to ensure that he will make a
realistic assessment of i's interests in the relevant social position and in the
relevant social situation. Thus, if i were a fish merchant in a given social
situation, then j must assess this fact in terms of the utility that i would derive
from this occupation, and not in terms of his own (j's) tolerance or intolerance
for fishy smells.)
Under these assumptions, j would have to assign to any possible social
situation s the expected utility

Wj(s) = (1/n)[U1(s) + · · · + Un(s)] .
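The equiprobability model can be made concrete with a small numerical sketch (a purely hypothetical illustration; the utility numbers are invented):

```python
# A minimal numerical sketch of the equiprobability model
# (all utility numbers below are hypothetical, chosen only for illustration).

def social_utility(utilities):
    """Wj(s) = (1/n) * [U1(s) + ... + Un(s)]: the expected utility of an
    impartial observer j who has probability 1/n of occupying each
    individual i's social position with i's own utility function Ui."""
    return sum(utilities) / len(utilities)

# Utility levels Ui(s) of a three-person society in two hypothetical
# social situations:
situation_a = [90, 40, 20]   # large inequality
situation_b = [60, 50, 46]   # more even distribution

# The impartial observer j ranks social situations by Wj(s):
print(social_utility(situation_a))  # 50.0
print(social_utility(situation_b))  # 52.0
```

On these invented numbers the impartial observer would rank situation b above situation a, even though the best-off individual fares worse in b.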
Note that this model for moral value judgments is simply an updated version
of Adam Smith's (1976) theory of morality, which equated the moral point of
view to that of an impartial but sympathetic observer (or "spectator" as he
actually described him).
even though i himself will define the utility of this policy to him as
with

a1, . . . , an > 0 ,   (4.5)

where the (expected) utilities Ui(π) are defined in accordance with (4.2).
The axiom requires interpersonal comparability (at least for utility differ-
ences) because otherwise the requirement of an identical utility unit becomes
meaningless.
Yet, given Axiom 5, we can infer that
a1 = · · · = an = a .   (4.6)
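Substituting (4.6) into the linear social utility function of (4.5) makes the common weight a an irrelevant positive factor:

```latex
W(\pi) \;=\; \sum_{i=1}^{n} a_i\,U_i(\pi)
\;=\; a \sum_{i=1}^{n} U_i(\pi),
\qquad a > 0 ,
```

so maximizing W is equivalent to maximizing the unweighted sum (or, for fixed n, the mean) of the individual utilities, which is the second utilitarian claim stated earlier.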
ethical purposes we need more broadly defined arguments, with benefit vectors
taking the place of commodity vectors. By the benefit vector of a given
individual I shall mean a vector listing all economic and noneconomic benefits
(or advantages) available to him, including his commodity endowments, and
listing also all economic and noneconomic discommodities (or disadvantages)
he has to face.
The vNM utility function Ui of an individual i is usually interpreted as a
mathematical representation of his preferences. For our purposes we shall add
a second interpretation. We shall say that i's vNM utility function Ui is also an
indicator of the amounts of satisfaction that i derives (or would derive) from
alternative benefit vectors (and from their probability mixtures). Indeed, any
preference by i for one thing over another can be itself interpreted as an
indication that i expects to derive more satisfaction from the former than from
the latter.
It is a well-known fact that the vNM utility functions of different individuals
tend to be very different in that they have very different preferences between
different benefit vectors, and in that they tend to derive very different amounts
of satisfaction from the same benefit vectors. Given our still very rudimentary
understanding of human psychology, we cannot really explain these differences
in any specific detail. But common sense does suggest that these differences
between people's preferences and between their levels of satisfaction under
comparable conditions- i.e., the differences between their vNM utility func-
t i o n s - are due to such factors as differences in their innate psychological and
physiological characteristics, in their upbringing and education, in their health,
in their life experiences, and other similar variables.
The variables explaining the differences between different people's vNM
utility functions I shall call causal variables. Let ri, rk, . . . be the vectors of
these causal variables explaining why the individuals i, k, . . . have the vNM
utility functions Ui, Uk, . . . , and why these utility functions tend to differ from
individual to individual. I shall call these vectors ri, rk, . . . the causal-variable
vectors of individuals i, k, . . . .
Suppose that i's and k's benefit vectors are x and y, respectively, so that
their vNM utility levels - and therefore also their levels of satisfaction - are
Ui(x) and Uk(y). On the other hand, if their present benefit vectors were
interchanged, then their vNM utility levels - and therefore also their levels of
satisfaction - would be Ui(y) and Uk(x).
Under our assumptions, each individual's vNM utility level will depend both
on his benefit vector and on his causal-variable vector. Therefore, there exists
some mathematical function V such that

Ui(x) = V(x, ri) ,   Uk(y) = V(y, rk) ,
Ui(y) = V(y, ri) ,   Uk(x) = V(x, rk) .
Moreover, this function V will be the same mathematical function in all four
equations (and in all similar equations). This is so because the differences
between the utility functions U i and U k can be fully explained by the differ-
ences between the two individuals' causal-variable vectors r i and G, whereas
the function V itself is determined by the basic psychological laws governing
human preferences and human satisfactions, equally applying to all human
beings. This function V I shall call the inter-individual utility function. 3
To be sure, we do not know the mathematical form of this function V. Nor
do we know the nature of the causal-variable vectors ri, rk, . . . belonging to
the various individuals. But my point is that if we did know the basic
psychological laws governing human preferences and human satisfactions then
we could work out the mathematical form of V and could find out the nature of
these causal-variable vectors ri, rk, . . . . This means that even if we do not
know the function V, and do not know the vectors ri, rk, . . . , these are
well-defined mathematical entities, so that interpersonal utility comparisons
based on these mathematical entities are a meaningful operation.
Moreover, in many specific cases we do have enough insight into human
psychology in general, and into the personalities of the relevant individuals in
particular, to make some interpersonal utility comparisons. For instance,
suppose that both i and k are people with considerable musical talent. But i has
in fact chosen a musical career and is now a badly paid but very highly
respected member of a famous orchestra. In contrast, k has opted for a more
lucrative profession. He has obtained an accounting degree and is now the
highly paid and very popular chief accountant of a large company. Both
individuals seem to be quite happy in their chosen professions. But I would
have to know them really well before I could venture an opinion as to which
one actually derives more satisfaction from his own way of life.
Yet, suppose I do know these two people very well. Then I may be willing to
make the tentative judgment that i's level of satisfaction, as measured by the
quantity Ui(x) = V(x, ri), is in fact higher or lower than is k's level of
satisfaction, as measured by the quantity Uk(y) = V(y, rk). Obviously, even if I
made such a judgment, I should know that such judgments are very hard to
make, and must be subject to wide margins of error. But such possibilities of
error do not make them into meaningless judgments.
I have suggested that, in discussing interpersonal comparisons of utilities,
these utilities should be primarily interpreted as amounts of satisfaction, rather
than as indicators of preference as such. My reason has been this.
Suppose we want to compare the vNM utility Ui(x) = V(x, ri) that i assigns
to the benefit vector x, and the vNM utility Uk(y) = V(y, rk) that k assigns to
the benefit vector y. If we adopted the preference interpretation, then we
would have to ask whether i's situation characterized by the vector pair (x, ri)
or k's situation characterized by the vector pair (y, rk) was preferred (or
whether these two situations were equally preferred). But this would be an
incomplete question to ask. For before this question could be answered we
would have to decide whether "preferred" meant "preferred by individual i"
or meant "preferred by individual k" - because, for all we know, one of these
two individuals might prefer i's situation, whereas the other might prefer k's
situation. Yet, if this were the case then we would have no way of telling
whether i's or k's situation were intrinsically preferable.
In contrast, if we adopt the amount-of-satisfaction interpretation, then no
similar problem will arise. For in this case Ui(x) = V(x, ri) would become
simply the amount of satisfaction that i derives from his present situation,
whereas Uk(y) = V(y, rk) would become the amount of satisfaction that k
derives from his present situation. To be sure, we have no way of directly
measuring these two amounts of satisfaction, but can only estimate them on the
basis of our - very fallible - intuitive understanding of human psychology and
of i's and k's personalities. Yet, assuming that there are definite psychological
laws governing human satisfactions and human preferences (even if our
knowledge of these laws is very imperfect as yet), V will be a well-defined
mathematical function, and the quantities V(x, ri) and V(y, rk) will be well-
defined, real-valued mathematical quantities, in principle always comparable to
each other.
Both Theorems 2 and 3 make essential use of vNM utility functions. Yet, the
latter's use in ethics met with strong objections by Arrow (1951, p. 10) and by
Rawls (1971, pp. 172 and 323) on the ground that vNM utility functions merely
express people's attitudes toward gambling, and these attitudes have no moral
significance.
Yet this view, it seems to me, is based on a failure to distinguish between the
process utilities and the outcome utilities people derive from gambling and,
more generally, from risk-taking. By process utilities I mean the (positive and
negative) utilities a person derives from the act of gambling itself. These are
basically utilities he derives from the various psychological experiences associ-
ated with gambling, such as the nervous tension felt by him, the joy of winning,
the pain of losing, the regret for having made the wrong choice, etc. In
contrast, by outcome utilities I mean the (positive and negative) utilities he
assigns to various possible physical outcomes.
With respect to people's process utilities I agree with Arrow and Rawls: these
utilities do merely express people's attitudes toward gambling and, therefore,
have no moral significance. But I shall try to show that people's vNM utility
functions express solely people's outcome utilities and completely disregard
their process utilities. Indeed, what people's vNM utility functions measure are
their cardinal utilities for various possible outcomes. Being cardinal utilities,
they indicate not only people's preferences between alternative outcomes, as
their ordinal utilities do, but also the relative importance they assign to various
outcomes. Yet, this is morally very valuable information.
I shall now define two concepts that I need in my subsequent discussion. When
people gamble for entertainment, they are usually just as much interested in
the process utilities they derive from their subjective experiences in gambling
as they are in the outcome utilities they expect to derive from the final
outcomes. In fact, they may gamble primarily for the sake of these subjective
experiences. This attitude, characterized by a strong interest in these process
utilities, I shall call a gambling-oriented attitude.
The situation is different when people engage in risky activities primarily for
the sake of the expected outcomes. In such cases, in particular if the stakes are
very high or if these people are business executives or political leaders
gambling with other people's money and sometimes even with other people's
lives, then they will be certainly well advised, both for moral reasons and for
reasons of self-interest, to focus their attention on the outcome utilities and the
probabilities of the various possible outcomes in order to achieve the best
possible outcomes for their constituents and for t h e m s e l v e s - without being
diverted from this objective by their own positive or negative psychological
experiences and by the process utilities derived from these experiences. This
attitude of being guided by one's expected outcome utilities rather than by
one's process utilities in risky situations I shall call an outcome-oriented
attitude.4
Now I propose to argue that vNM utility functions are based solely on people's
outcome utilities, and make no use of their process utilities. Firstly, this can be
4Needless to say, everybody, whether he takes a gambling-oriented or a strictly outcome-
oriented attitude, does have process utilities in risky situations. My point is only that some people
intentionally take an outcome-oriented attitude and disregard these process utilities in order not to
be diverted from their main objective of maximizing their expected outcome utility. (In philosophi-
cal terminology, our first-order preferences for enjoyable subjective experiences give rise to process
utilities. On the other hand, an outcome-oriented attitude is a second-order preference for
overriding those of our first-order preferences that give rise to such process utilities.)
684 J. C. Harsanyi
Let me now come back to my other contention that vNM utility functions are
very useful in ethics because they express the cardinal utilities people assign to
various possible outcomes, indicating the relative importance they attach to
these outcomes.
Suppose that individual i pays $10 for a lottery ticket giving him 1/1000
chance of winning $1000. This fact implies that
(1/1000) Ui($1000) ≥ Ui($10) .   (6.1)
In other words, even though $1000 is only a 100 times larger amount of money
than $10 is, i assigns an at least 1000 times higher utility to the former than he
assigns to the latter. Thus, his vNM utility function not only indicates that he
prefers $1000 to $10 (which is all that an ordinal utility function could do), but
also indicates that he attaches unusually high importance to winning $1000 as
compared with the importance he attaches to not losing $10 (as he would do if
he did not win anything with his lottery ticket - which would be, of course, the
far more likely outcome).
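Inequality (6.1) follows from the expected-utility property of Ui, together with a zero-point normalization introduced here only for exposition, Ui($0) = 0. Willingness to pay $10 for the ticket means i weakly prefers the lottery to keeping his $10:

```latex
\frac{1}{1000}\,U_i(\$1000) \;+\; \frac{999}{1000}\,U_i(\$0)
\;\ge\; U_i(\$10),
\qquad U_i(\$0) = 0
\;\;\Longrightarrow\;\;
\frac{1}{1000}\,U_i(\$1000) \;\ge\; U_i(\$10).
```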
To be sure, i's vNM utility function does not tell us why he assigns such a
high importance to winning $1000. We would have to know his personal
circumstances to understand this. (For instance, if we asked him we might find
out that winning $1000 was so important for him because he hoped to use the
money as cash deposit on a very badly needed second-hand car, or we might
obtain some other similar explanation.)
Thus, in ethics, vNM utility functions are important because they provide
information, not only about people's preferences, but also about the relative
importance they attach to their various preferences. This must be very valuable
information for any humanitarian ethics that tries to encourage us to satisfy
other people's wants and, other things being equal, to give priority to those
wants they themselves regard as being most important. Admittedly, a vNM
utility function measures the relative importance a person assigns to his various
wants by the risks he is willing to take to satisfy these wants. But, as we have
seen, this fact must not be confused with the untenable claim that a person's
vNM utility function expresses merely his like or his dislike for gambling as
such.
II. RULE UTILITARIANISM, ACT UTILITARIANISM,
RAWLS' AND BROCK'S NONUTILITARIAN THEORIES
OF JUSTICE
Act utilitarianism is the view that a morally right action is simply one that
would maximize expected social utility in the existing situation. In contrast,
rule utilitarianism is the view that a morally right action must be defined in two
steps. First, we must define the right moral rule as the moral rule whose
acceptance would maximize expected social utility in similar situations. Then,
we must define a morally right action as one in compliance with this moral rule.
Actually, reflection will show that in general we cannot judge the social
utility of any proposed moral rule without knowing the other moral rules
accepted by the relevant society. Thus, we cannot decide what the moral
obligations of a father should be toward his children without knowing how the
moral obligations of other relatives are defined toward these children. (We
must ensure that somebody should be clearly responsible for the well-being of
every child. On the other hand, we must not give conflicting responsibilities to
different people with respect to the same child.)
Accordingly, it seems to be preferable to make society's moral code, i.e., the
set of all moral rules accepted by the society, rather than individual moral
rules, the basic concept of rule utilitarian theory. Thus, we may define the
optimal moral code as the moral code whose acceptance would maximize
expected social utility,5 and may define a morally right action as one in
compliance with this moral code.
How should we interpret the term "social acceptance" used in these defini-
tions? Realistically, we cannot interpret it as full compliance by all members of
the society with the accepted moral code (or moral rule). All we can expect is
partial compliance, with a lesser degree of compliance in the case of a very
demanding moral code. (Moreover, we may expect much more compliance in
the moral judgments people make about each other's behavior than in their
own actual behavior.)
How will a rational utilitarian choose between the two versions of utilitarian
theory? It seems to me that he must make his choice in terms of the basic
utilitarian choice criterion itself: he must ask whether a rule utilitarian or an act
utilitarian society would enjoy a higher level of social utility.
Actually, the problem of choosing between the rule utilitarian and the act
utilitarian approaches can be formally regarded as a special case of the rule
utilitarian problem of choosing among alternative moral codes according to
their expected social utility. For, when a rule utilitarian society chooses among
alternative moral codes, the act utilitarian moral code (asking each individual
to choose the social-utility maximizing action in every situation) is one of the
moral codes available for choice.
Note that this fact already shows that the social-utility level of a rule
utilitarian society, one using the optimal rule utilitarian moral code, must be at
least as high as that of an act utilitarian society, using the act utilitarian moral
code.
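The dominance argument in the last paragraph can be written out as a one-line inequality. The notation here is introduced only for illustration: let 𝒞 be the set of moral codes available for social choice, W(C) the expected social utility of code C, and C_act the act utilitarian code:

```latex
\max_{C \in \mathcal{C}} W(C) \;\ge\; W(C_{\mathrm{act}})
\qquad \text{because } C_{\mathrm{act}} \in \mathcal{C} ,
```

so the optimal rule utilitarian moral code can never yield less social utility than the act utilitarian code.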
5For the sake of simplicity, I am assuming that the social-utility maximizing moral code is
unique.
When a society accepts a given moral code, the most obvious effects of this will
be the benefits people will obtain by other people's (and by their own)
compliance with this moral code. These effects I shall call positive implementa-
tion effects.
Yet, compliance with any moral c o d e - together with the fact that this
compliance will always be somewhat incomplete- will give rise also to some
social costs. These include the efforts needed to comply with specific injunc-
tions of the moral code, and in particular to do so in some difficult situations;
the guilt feelings and the social stigma that may follow noncompliance; loss of
respect for the moral code if people see widespread noncompliance; and the
efforts needed to inculcate habits consistent with the moral code in the next
generation [cf. Brandt (1979, pp. 287-289)]. These effects I shall call negative
implementation effects. They will be particularly burdensome in the case of very
demanding moral codes and may make adoption of such a moral code
unattractive even if it would have very attractive positive implementation
effects.
Another important group of social effects that a moral code will produce are
its expectation effects. They result from the fact that people will not only
themselves comply with the accepted moral code to some degree but will
expect other people likewise to comply with it. This expectation may give them
incentives to socially beneficial activities, and may give them some assurance
that their interests will be protected. Accordingly, I shall divide the expectation
effects of a moral code into incentive effects and assurance effects. As we shall
see, these two classes of expectation effects are extremely important in
determining the social utility of any moral code. It is all the more surprising
that so far they have received hardly any attention in the literature of ethics.
after a tiring day, I would always have to ask myself whether I could not do
something more useful to society, such as doing some voluntary work for
charity (or perhaps trying to convert some of my friends to utilitarian theory)
instead. This of course means that, except in those rare cases where two or
more actions would equally produce the highest possible social utility, I would
never be permitted a free choice between alternative actions.
This also means that act utilitarian theory could not accommodate the
traditional, and intuitively very appealing, distinction between merely doing
one's duty and performing a supererogatory action going beyond the call of
duty. For we would do only our duty by choosing an action maximizing social
utility, and by doing anything else we would clearly fail to do our duty.
In contrast, a rule utilitarian moral code could easily recognize the intrinsic
value of free individual choice. Thus, suppose I have a choice between action
A, yielding the social utility α, and action B, yielding the social utility β, with
α > β. In this case, act utilitarian theory would make it my duty to choose
action A. But a rule utilitarian moral code could assign a procedural utility γ to
free moral choice. This would make me morally free to choose between A and
B as long as β + γ ≥ α. Nevertheless, because α > β, A would remain the
morally preferable choice. Thus, I would do my duty both by choosing A and
by choosing B. Yet, by choosing the morally preferable action A, I would go
beyond merely doing my duty and would perform a supererogatory action.
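A toy numerical instance (the numbers are invented for illustration): with social utilities α = 10 for A and β = 8 for B, and a procedural utility γ = 3 assigned to free moral choice,

```latex
\beta + \gamma \;=\; 8 + 3 \;=\; 11 \;\ge\; 10 \;=\; \alpha ,
```

so choosing B is morally permissible, while choosing A (since α > β) remains the morally preferable, supererogatory choice.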
11. Morally protected rights and obligations, and their expectation effects
The moral codes of civilized societies recognize some individual rights and
some special obligations 6 that cannot be overridden merely because by overrid-
ing them one could here and now increase social utility - except possibly in
some very special cases where fundamentally important interests of society are
at stake. I shall describe these as morally protected rights and obligations.
As an example of individual rights, consider a person's property rights over a
boat he owns. According to our accepted moral code (and also according to
our legal rules), nobody else can use this boat without the owner's permission
in other than some exceptional cases (say, to save a human life). The mere fact
that use of the boat by another person may increase social utility (because he
would derive a greater utility by using the boat than the owner would) is not a
morally acceptable reason for him to use the boat without the owner's consent.
Even though, as we have seen, the direct effects of such property rights will
6By special obligations I mean moral obligations based on one's social role (e.g., one's
obligations as a parent, or as a teacher, or as a doctor, etc.) or on some past event (e.g., on having
made a promise, or on having incurred an obligation of gratitude to a benefactor, and so on).
often be to prevent some actions that might increase social utility, their indirect
effects, namely their expectation effects, make them into a socially very useful
institution. They provide socially desirable incentives to hard work, saving,
investment, and entrepreneurial activities. They also give property owners
assurance of some financial security and of some independence of other
people's good will. (Indeed, as a kind of assurance effect benefiting society as a
whole, widespread property ownership contributes to social stability, and is an
important guarantee of personal and political freedom.)
As an example of special obligations, consider a borrower's obligation to
repay the borrowed money to the lender, except if this would cause him
extreme hardship. In some cases, in particular when the borrower is very poor
whereas the lender is very rich, by repaying the money the borrower will
substantially decrease social utility because his own utility loss will greatly
exceed the lender's utility gain (since the marginal utility of money to very poor
people tends to be much higher than the marginal utility of money to very rich
people). Nevertheless, it is a socially very beneficial moral (and legal) rule that
normally loans must be repaid (even in the case of very poor borrowers and
very rich lenders) because otherwise people would have a strong incentive not
to lend money. Poor people would particularly suffer by being unable to
borrow money if it were known that they would not have to repay it. The rule
that loans must be repaid will also give lenders some assurance that society's
moral code will protect their interests if they lend money.
Since a rule utilitarian society would choose its moral code by the criterion of
social utility, it would no doubt choose a moral code recognizing many morally
protected rights and obligations, in view of their very beneficial expectation
effects. But an act utilitarian society could not do this. This is so because act
utilitarian morality is based not on choosing between alternative moral codes,
but rather on choosing between alternative individual actions in each situation.
Therefore, the only expectation effects it could pay attention to would be those
of individual actions and not those of entire moral codes. Yet, normally an
individual action will have negligibly small expectation effects. If people know
that their society's moral code does protect, or does not protect, property
rights, this will have a substantial effect on the extent to which they will expect
property rights to be actually respected. But if all they know is that one
individual on one particular occasion did or did not respect another individual's
property rights, this will hardly have any noticeable effect on the extent to
which they will expect property rights to be respected in the future. Hence, an
act utilitarian society would have no reason to recognize morally protected
rights and obligations.
Yet, in fairness to act utilitarian theory, a consistent act utilitarian would not
really regret the absence of morally protected rights and obligations from his
society. For he would not really mind if what we would consider to be his
of the rule utilitarian approach. As we have seen, whereas the rule utilitarian
approach is free to choose its moral code from a very large set of possible moral
codes, the act utilitarian approach is restricted to one particular moral code
within this set.
Yet, this means that act utilitarianism is committed to evaluating each
individual action directly in terms of the consequentialist utilitarian criterion of
social-utility maximization. In contrast, rule utilitarianism is free to choose a
moral code that judges the moral value of individual actions partly in terms of
nonconsequentialist criteria if use of such criteria increases social utility. Thus,
it may choose a moral code that judges the moral value of a person's action not
only by its direct social-utility yield but also by the social relationship between
this person and the people directly benefiting or directly damaged by his
action, by this person's prior promises or other commitments, by procedural
criteria, and so on. Even if consequentialist criteria are used, they may be
based not only on the social consequences of individual actions but also on the
consequences of a morally approved social practice of similar behavior in all
similar cases. This greater flexibility in defining the moral value of individual
actions will permit adoption of moral codes with significantly higher social
utility.
M ∈ ℳ ,   (13.1)

where ℳ is the set of all possible moral codes. On the other hand, stage 2 of
the game will be a noncooperative game in which each player i will choose a
strategy sᵢ for himself so as to maximize his own individual utility Uᵢ, subject to
sᵢ ∈ P(M) ,   (13.2)

where P(M) is the set of all strategies permitted by the moral code M chosen at
stage 1. I shall call P(M) the permissible set for moral code M, and shall
assume that, for all M ∈ ℳ, P(M) is a nonempty compact subset of the
strategy set S.
The noncooperative game played at stage 2, where the players' strategy
choices are restricted to P(M), will be called Γ(M). At stage 1, in order to
choose a moral code M maximizing social utility, the players must try to predict
the equilibrium point s̄ = (s₁, . . . , sₙ) that will be the actual outcome of this
game Γ(M). I shall assume that they will do this by choosing a predictor
function π selecting, for every possible game Γ(M), an equilibrium point
s̄ = π(Γ(M)) as the likely outcome of Γ(M). [For instance, they may choose this
predictor function on the basis of our solution concept for noncooperative
games. See Harsanyi and Selten (1988).] For convenience, I shall often use the
shorter notation π*(M) ≡ π(Γ(M)).
Finally, I shall assume that each player's individual utility will have the
mathematical form
Uᵢ = Uᵢ(s̄, M) .   (13.3)
How does this model represent the implementation effects and expectation
effects of a given moral code M? Clearly, its implementation effects, both the
positive and the negative ones, will be represented by the fact that the players'
strategies will be restricted to the permissible set P(M) defined by this moral
code M. This fact will produce both utilities and disutilities for the players and,
therefore, will give rise both to positive and negative implementation effects.
On the other hand, the expectation effects of M will be represented by the
fact that some players will choose different strategies than they would choose if
their society had a different moral code - not because M directly requires them
to do so but rather because these strategies are their best replies to the other
players' expected strategies, on the assumption that these other players will use
only strategies permitted by the moral code M.
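Under toy assumptions, the two-stage structure described above — a moral code M chosen at stage 1 to maximize social utility, given a predictor of the equilibrium outcome of the restricted stage-2 game — can be sketched as follows. The payoffs, the two candidate codes, and the naive equilibrium search are my own illustrative choices, not part of the formal model:

```python
from itertools import product

# Toy sketch (not the chapter's formal model): 2 players, strategies
# 0 = "cooperate", 1 = "defect", prisoner's-dilemma payoffs, so unrestricted
# play leads to mutual defection.
PAYOFFS = {  # (s1, s2) -> (U1, U2)
    (0, 0): (3, 3), (0, 1): (0, 4),
    (1, 0): (4, 0), (1, 1): (1, 1),
}

# Candidate "moral codes": each defines a permissible set P(M).
MORAL_CODES = {
    "permissive": [0, 1],   # every strategy allowed
    "cooperative": [0],     # the code forbids defection
}

def predictor(permitted):
    """Predictor function pi: return a pure-strategy equilibrium of the
    stage-2 game, i.e. a profile where no player gains by deviating
    within the permissible set."""
    for profile in product(permitted, repeat=2):
        stable = True
        for i in (0, 1):
            for dev in permitted:
                alt = list(profile)
                alt[i] = dev
                if PAYOFFS[tuple(alt)][i] > PAYOFFS[profile][i]:
                    stable = False
        if stable:
            return profile
    return None

# Stage 1: pick the code whose predicted outcome maximizes social utility
# (here, the sum of individual utilities).
best = max(MORAL_CODES, key=lambda M: sum(PAYOFFS[predictor(MORAL_CODES[M])]))
print(best, predictor(MORAL_CODES[best]))
```

In this toy case the permissive code's predicted equilibrium is mutual defection, so the rule utilitarian stage-1 choice is the cooperative code.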
This model illustrates the fact, well known to game theorists, that an ability
to make binding commitments is often an important advantage for the players
in many games. In the rule utilitarian game we are discussing, it is an important
advantage for the players that at stage 1 they can commit themselves to comply
with a jointly adopted moral code. Yet, in most other games, this advantage
lies in the fact that such commitments will prevent the players from disrupting
some agreed joint strategy in order to increase their own payoffs. In contrast,
in this game, part of the advantage lies in the fact that the players' commitment
to the jointly adopted moral code will prevent them from violating the other
players' rights or their own obligations in order to increase social utility.
Our model can be made more realistic by dropping the assumption that all
players are fully consistent rule utilitarians. Those who are I shall call the
committed players. Those who are not I shall call the uncommitted players. The
main difference will be that requirement (13.2) will now be observed only by
the committed players. For the uncommitted players, it will have to be
replaced by the trivial requirement sᵢ ∈ S. On the other hand, some of the
uncommitted players i might still choose a strategy sᵢ at least in partial
compliance with their society's moral code M, presumably because they might
derive some utility from at least partial compliance. [This assumption, however,
requires no formal change in our model because (13.3) has already made M an
argument of the utility functions Ui. ]
The realism of our model can be further increased by making the utilitarian
game into one with incomplete information [see Harsanyi (1967-68)].
very similar to that underlying my own equi-probability model for moral value
judgments, discussed in Section 3.7
Nevertheless, there are important differences between Rawls' model and
mine. In my model, a person making a moral value judgment would choose
between alternative social situations in a rational manner, more particularly in
a way required by the rationality postulates of Bayesian decision theory. Thus,
he would always choose the social situation with the highest expected utility to
him. Moreover, in order to ensure that he will base his choice on impartial
universalistic considerations, he must make his choice on the assumption that,
whichever social situation he chose, he would always have the same probability
of ending up in any one of the n available social positions.
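A minimal numeric sketch of this equi-probability criterion (the situations and the utility numbers are invented for illustration): a chooser who may end up in any of the n social positions with probability 1/n evaluates each social situation by its average, i.e. expected, utility.

```python
# Two hypothetical social situations, each with n = 3 social positions;
# the numbers are my own illustrative utilities.
situations = {
    "A": [9, 5, 4],
    "B": [12, 6, 1],
}

def expected_utility(positions):
    # Equal probability 1/n of ending up in any one of the n positions.
    return sum(positions) / len(positions)

choice = max(situations, key=lambda s: expected_utility(situations[s]))
print(choice)
```

Choosing by expected utility under equal position probabilities is the same as choosing by average utility, which is exactly the utilitarian criterion defended in the chapter.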
In contrast, Rawls' assumption is that each participant in the original
position will choose among alternative conceptions of justice on the basis of the
highly irrational maximin principle, which requires everybody to act as if he
were absolutely sure that, whatever he did, the worst possible outcome of his
action would obtain. This is a very surprising assumption
because Rawls is supposedly looking for that conception of justice that rational
individuals would choose in the original position.
It is easy to verify that the maximin principle is a highly irrational choice
criterion. The basic reason is that it makes the value of any possible action
wholly dependent on its worst possible outcome, regardless of how small its
probability. If we tried to follow this principle, then we could not cross even
the quietest country road because there would always be some very small
probability of being run over by a car. We could never eat any food because there
is always some very small probability that it contains harmful bacteria.
Needless to say, we could never get married because a marriage can certainly
come to a bad end. Anybody who tried to live this way would soon find himself
in a mental institution.
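The road-crossing example can be put in numbers (my own, purely illustrative probabilities and utilities): Bayesian expected utility recommends crossing, while maximin, which looks only at the worst possible outcome and ignores its probability, forbids it.

```python
# Each action is a lottery: a list of (probability, utility) pairs.
# The numbers are invented for illustration only.
actions = {
    "cross_road": [(0.999999, 10.0), (0.000001, -1000.0)],
    "stay_home":  [(1.0, 0.0)],
}

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

def maximin_value(lottery):
    # Worst possible outcome, with its probability completely ignored.
    return min(u for _, u in lottery)

best_bayesian = max(actions, key=lambda a: expected_utility(actions[a]))
best_maximin = max(actions, key=lambda a: maximin_value(actions[a]))
print(best_bayesian, best_maximin)
```

The expected utility of crossing is close to 10, yet the maximin value is -1000, so the maximin chooser stays home forever — the irrationality the text describes.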
Yet, the maximin principle would not only be a very poor guide in our
everyday life; it would be an equally poor guide in our moral decisions. As
Rawls rightly argues, if the participants of the original position followed the
maximin principle, then they would end up with what he calls the difference
principle as their basic principle of justice. The latter would ask us always to
give absolute priority to the interests of the most disadvantaged and the poorest
social group over the interests of all other people no matter what - even if this
group consisted of a mere handful of people whose interests were only
minimally affected, whereas the rest of society consisted of many millions with
7Rawls first proposed his concept of the original position in 1957 [Rawls (1957)]. I proposed my
own model in 1953 and 1955 [Harsanyi (1953, 1955)]. But both of us were anticipated by Vickrey,
who suggested a similar approach already in 1945 [Vickrey (1945)]. Yet all three of us arrived quite
independently at our own models.
15. Brock's theory of social justice based on the Nash solution and on
the Shapley value
π* = ∏ᵢ₌₁ⁿ (uᵢ* − dᵢ*) ,   (15.1)

subject to the two constraints u* ∈ F and uᵢ* > dᵢ* for all i. Here F denotes the
convex and compact feasible set. (It is customary to write the second constraint
as a weak inequality. But Brock writes it as a strong inequality because he
wants to make it clear that every player will positively benefit by moving from
d* to u*. In any case, mathematically it makes no difference which way this
constraint is written.)
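The symmetric case of criterion (15.1) can be checked numerically. Under my own toy assumptions — a feasible set with upper boundary u₁ + u₂ = 10, disagreement point d* = (1, 1), and a coarse grid — maximizing the Nash product picks the equal split:

```python
# Discretized feasible boundary of F = {(u1, u2): u1 + u2 <= 10, ui >= 0};
# interior points are Pareto-dominated, so only the boundary matters.
d = (1.0, 1.0)  # disagreement payoffs d_i* (my own illustrative numbers)

candidates = []
for i in range(0, 1001):
    u1 = i / 100
    u2 = 10.0 - u1
    if u1 > d[0] and u2 > d[1]:   # the strict constraint u_i* > d_i*
        candidates.append((u1, u2))

def nash_product(u):
    # The Nash product of (15.1) for n = 2.
    return (u[0] - d[0]) * (u[1] - d[1])

u_star = max(candidates, key=nash_product)
print(u_star)
```

Because the problem is symmetric in the two players, the maximizer is the midpoint (5, 5) of the boundary, as the Nash bargaining solution requires.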
Let me now go over to Brock's full theory. This involves a two-stage game.
At stage 1, the players choose a Constitution C restricting the strategies of each
player i at stage 2 to some subset Sᵢ* of his original strategy set Sᵢ. As a result,
this Constitution C will define an NTU game G(C) in characteristic-function
form to be played by the n players at stage 2. The outcome of G(C) will be an
NTU Shapley-value vector u** = (u₁**, . . . , uₙ**) associated with this game
G(C). The players can choose only Constitutions C yielding a Shapley-value
vector u** with uᵢ** > dᵢ* for every player i. (If a given game G(C) has more
than one Shapley-value vector u** satisfying this requirement, then the players
can presumably choose any one of the latter as the outcome of the game.) Let
F* be the set of all possible Shapley-value vectors u** that the players can
obtain by adopting any such admissible Constitution C.
Actually, Brock assumes that the players can choose not only a specific
Constitution C but can choose also some probability mixture of two or more
Constitutions C, C', . . . . If this is what they do, then the outcome will be the
corresponding weighted average of the Shapley-value vectors u**, ( u * * ) ' , . . .
generated by these Constitutions. This of course means that the set of possible
outcomes will not be simply the set F* defined in the previous paragraph, but
rather will be the convex hull F** of this set F*.
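Brock's model uses NTU Shapley values, which are harder to compute; but the underlying idea — each player's value is his average marginal contribution over all orderings of the players — can be sketched in the simpler TU case. The three-player game below, with my own illustrative coalition worths, stands in for a game G(C):

```python
from itertools import permutations

# Coalition worths v(S) of a toy 3-player TU game (my own numbers).
v = {
    frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
    frozenset({1, 2}): 4, frozenset({1, 3}): 4, frozenset({2, 3}): 2,
    frozenset({1, 2, 3}): 6,
}

def shapley(players):
    """Shapley value: average each player's marginal contribution
    v(S + i) - v(S) over all orderings of the players."""
    phi = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for i in order:
            phi[i] += v[coalition | {i}] - v[coalition]
            coalition = coalition | {i}
    return {i: phi[i] / len(orders) for i in players}

print(shapley([1, 2, 3]))
```

Player 1, who is the more productive partner in the two-player coalitions, receives a larger value than the symmetric players 2 and 3, and the three values exactly split v(N) = 6.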
Finally, Brock assumes that at stage 1 the players will always choose that
particular Constitution, or that particular probability mixture of Constitutions,
that yields the payoff vector u⁰ = (u₁⁰, . . . , uₙ⁰) maximizing the Nash product

π⁰ = ∏ᵢ₌₁ⁿ (uᵢ⁰ − dᵢ*) ,   (15.2)
u₁ + u₂ ≤ 10 ,   (15.3)

u₁ + (1 + ε)u₂ ≤ 10 ,   (15.4)

(1 + ε)u₁ + u₂ ≤ 10 .   (15.5)
By symmetry, maximization of W will now require choice of the utility vector
u = (0, 10), which will be once more a highly inequalitarian outcome. More-
over, arbitrarily small changes in the feasible set - such as a shift from (15.4) to
(15.3) and then to (15.5) - will make the utilitarian outcome discontinuously
jump from u = (10, 0) first to an indeterminate outcome and then to u =
(0, 10).
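The discontinuity can be verified numerically. Under my own discretization, maximizing utilitarian welfare W = u₁ + u₂ over the feasible sets cut out by the near-identical constraints (15.4) and (15.5), with a small ε > 0, flips the maximizer from one extreme corner to the other:

```python
eps = 0.01  # a small positive epsilon, as in the text's example

def argmax_W(constraint):
    """Grid-search the maximizer of W = u1 + u2 subject to the constraint."""
    best, best_W = None, float("-inf")
    for i in range(101):
        for j in range(101):
            u1, u2 = i / 10, j / 10
            if constraint(u1, u2) and u1 + u2 > best_W:
                best, best_W = (u1, u2), u1 + u2
    return best

print(argmax_W(lambda u1, u2: u1 + (1 + eps) * u2 <= 10))   # constraint (15.4)
print(argmax_W(lambda u1, u2: (1 + eps) * u1 + u2 <= 10))   # constraint (15.5)
```

Under (15.4) the unique maximizer is (10, 0); under (15.5) it is (0, 10) — an arbitrarily small tilt of the boundary moves the utilitarian outcome discontinuously from one corner to the other.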
I agree with Brock that, at least at an abstract mathematical level, these
mathematical anomalies are a serious objection to utilitarian theory. But I
should like to argue that they are much less of a problem for utilitarian theory
as an ethical theory for real-life human beings, because these anomalies will
hardly ever actually arise in real-life situations.
This is so because in real life we can never transfer abstract "utility" as such
from one person to another. All we can do is to transfer assets possessing
utility, such as money, commodities, securities, political power, etc. Yet, most
people's utility functions are such that such assets sooner or later will become
subject to the Law of Diminishing Marginal Utility. As a result, in real-life
situations the upper boundary of the feasible set in the utility space will tend to
have enough concave curvature to prevent such anomalies from arising to any
significant extent.
As already mentioned, Brock also feels uneasy about use of interpersonal
utility comparisons in utilitarian theory. No doubt, such comparisons are
rejected by many philosophers and social scientists. But in Section 5 I already
stated my reasons for considering such comparisons to be perfectly legitimate
intellectual operations.
Let me now add that interpersonal utility comparisons not only are possible,
but are also strictly necessary for making moral decisions in many cases. If I
take a few children on a hiking trip and we run out of food on our way home,
then common sense will tell me to give the last bite of food to the child likely
to derive the greatest utility from it (e.g., because she looks like the hungriest of
the children). By the same token, if I have a concert ticket to give away, I
should presumably give it to that friend of mine likely to enjoy the concert
most, etc. It seems to me that we simply could not make sensible moral choices
in many cases without making, or at least trying to make, interpersonal utility
comparisons.
This is also my basic reason for holding that our moral decisions should be
based on the utilitarian criterion of maximizing social utility rather than Brock's
criterion of maximizing a particular Nash product.
Suppose I can give some valuable object A either to individual 1 or to
individual 2. If I give it to 1 then I shall increase his utility level from u₁ to
(u₁ + Δu₁), whereas if I give it to 2 then I shall increase the latter's utility level
that is, if

Δu₁ > Δu₂ ,   (15.7)

whereas under Brock's criterion I have to give A to individual 1 if

Δu₁/(u₁ − d₁*) > Δu₂/(u₂ − d₂*) .   (15.9)
On the other hand, I have to give A to 2 if the last two inequalities are
reversed. For convenience, the quantities (uᵢ − dᵢ*) for i = 1, 2 I shall describe
as the two individuals' net utility levels.
The utilitarian criterion, as stated in (15.7), assesses the moral importance of
any individual need by the importance that the relevant individual himself
assigns to it, as measured by the utility increment Δuᵢ he would obtain by
satisfying this need. In contrast, Brock's criterion, as stated in (15.9), would
assess the moral importance of this need, not by the utility increment Δuᵢ as
such, but rather by the ratio of Δuᵢ to the relevant individual's net utility level
(uᵢ − dᵢ*).
Both (15.7) and (15.9) will tend to give priority to poor people's needs over
rich people's needs. For, owing to the Law of Diminishing Marginal Utility,
from any given benefit, poor people will tend to derive a larger utility
increment Δuᵢ than rich people will. Yet, (15.9) will give poor people's needs
an even greater priority than (15.7) would give. This is so because in (15.9) the
two relevant individuals' net utility levels (uᵢ − dᵢ*) occur as divisors; and of
course these net utility levels will tend to be smaller for poor people than for
rich people. Obviously, the question is whether it is morally justified to use
(15.9) as our decision rule when (15.7) would point in the opposite direction.
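The disagreement between the two rules can be shown with a toy comparison (my own numbers). Here a rich individual, with a high net utility level, would gain a larger increment from benefit A than a poor individual, so criterion (15.7) and criterion (15.9) point in opposite directions:

```python
# Illustrative disagreement utilities d_i*, current utility levels u_i,
# and utility increments Delta-u_i from receiving benefit A.
d = {"rich": 0.0, "poor": 0.0}
u = {"rich": 9.0, "poor": 2.0}
gain = {"rich": 3.0, "poor": 2.0}

# Utilitarian criterion (15.7): give A to whoever gains the larger increment.
utilitarian = max(gain, key=lambda i: gain[i])

# Brock's criterion (15.9): compare increments divided by net utility levels.
brock = max(gain, key=lambda i: gain[i] / (u[i] - d[i]))

print(utilitarian, brock)
```

The utilitarian rule awards A to the rich individual (increment 3 against 2), while Brock's ratio rule awards it to the poor individual (ratio 1.0 against 1/3) — exactly the kind of case debated in the text.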
We all agree that in most cases we must give priority to poor people's needs
over rich people's because the former's needs tend to be more urgent, and
because poor people tend to derive a much greater utility gain from our help. But
the question is what to do in those - rather exceptional - cases where some rich
people are in greater need of our help than any poor person is.
For instance, what should a doctor do when he has to decide whether to give
a life-saving drug in short supply to a rich patient likely to derive the greatest
medical benefit from it, or to give it to a poor patient who would derive a lesser
(but still substantial) benefit from this drug? According to utilitarian theory,
the doctor must give the drug to the patient who would obtain the greatest
benefit from it, regardless of this patient's wealth (or poverty). To do otherwise
would be morally intolerable discrimination against the rich patient because of
his wealth. In contrast, under Brock's theory, the greater-benefit criterion, as
expressed by (15.7), can sometimes be overridden by the higher-ratio criterion,
as expressed by (15.9). I find this view morally unacceptable. 8
To conclude: Brock's theory of "need justice" represents a very interesting
alternative to utilitarian theory. There are arguments in favor of either theory.
But, as I have already indicated, I regard utilitarian theory as a morally much
preferable approach. 9
Brock's aim is to provide proper representation both for "need justice" and for
"merit justice" within his two-stage game model. Yet, it seems to me that his
model is so much dominated by "need justice" considerations that it fails to
provide proper representation for the requirements of "merit justice".
Take the special case where all n players have the same needs and,
therefore, have the same utility functions, and also have the same disagreement
payoffs d*; but where they have very different productive abilities and skills. I
now propose to show that in this case Brock's model would give all players the
very same payoffs - which would in this case satisfy the requirements of "need
justice" if considered in isolation, but would mean complete disregard of
"merit justice".
To verify this, first consider what I have called Brock's "simplified" theory,
involving maximization of the Nash product π*, defined by (15.1). Since all
players are assumed to have the same utility functions and the same disagreement
payoffs dᵢ*, maximization of π* would give all players equal utility payoffs
with u₁* = · · · = uₙ*.
Let us now consider Brock's full theory. Under this latter theory, the
players' payoffs would be determined by maximization of another Nash
product π⁰, defined by (15.2). Yet, this would again yield equal utility payoffs
with u₁⁰ = · · · = uₙ⁰, for the same reasons as in the previous case.

8As is easy to verify, condition (15.9) is really one version of Zeuthen's Principle [see Harsanyi
(1977, pp. 149-166)]. As I argued in my 1977 book and in other publications, this Principle is a
very good decision rule in bargaining situations. But, for reasons already stated, I do not think that
it is the right decision rule in making moral decisions.
9A somewhat similar theory of justice, based like Brock's on the n-person Nash solution, but
apparently independent of Brock's (1978) paper, has been published by Yaari (1981).
Why would these payoffs u⁰ completely fail to reflect the postulated differ-
ences among the players in productive abilities and skills? The reason is, it
seems to me, that the requirement of maximizing the Nash product π⁰ would
force the players to choose a Constitution preventing those with potentially
greater productivity from making actual use of this greater productivity within
sectional coalitions. As a result, these players' Shapley values could not give
them credit for their greater productive abilities and skills.
To avoid this presumably undesired result, it would have to be stipulated
that no Constitution adopted by the players could do more than prevent the
players from engaging in immoral and illegal activities such as theft, fraud,
murder, and so on. But it could not prevent any player from making full use of
his productive potential in socially desirable economic and cultural activities.
Of course, in order to do this a clear criterion would have to be provided for
distinguishing socially desirable activities that cannot be constrained by any
Constitution from socially undesirable activities that can and must be so
constrained.
(In fact, benevolence toward these people requires us to treat them as they
want to be treated, not as he wants them to be treated.)
Yet, if we want to exclude other-oriented preferences from each individual's
representative utility function, then we must find a way of defining a self-
oriented utility function Vᵢ for each individual i, based solely on i's self-oriented
preferences. There seem to be two possible approaches to this problem. One is
based on the notion of hypothetical preferences, which we already used in
defining a person's informed preferences (see Section 16). Under this ap-
proach, a person's self-oriented utility function Vᵢ must be defined as the utility
function based on the preferences he would display if he knew that all his
other-oriented preferences - his preferences about how other people should be
treated - would be completely disregarded.
Another possible approach is to define a person's self-oriented utility
function Vᵢ by means of mathematical operations performed on his complete
utility function Uᵢ, based on both his self-oriented and his other-oriented
preferences (assuming that Uᵢ itself is already defined in terms of i's informed
preferences).
Let xᵢ be a vector of all variables characterizing i's economic conditions, his
health, his job, his social position, and all other conditions over which i has
self-oriented preferences. I shall call xᵢ i's personal position. Let yᵢ be the
composite vector yᵢ = (x₁, . . . , xᵢ₋₁, xᵢ₊₁, . . . , xₙ), characterizing the personal
positions of all (n − 1) individuals other than i. Then, i's complete utility
function Uᵢ will have the mathematical form

Uᵢ = Uᵢ(xᵢ, yᵢ) .   (18.1)

It may happen that Uᵢ is a separable function of the form

Uᵢ = Vᵢ(xᵢ) + Wᵢ(yᵢ) ,   (18.2)

consisting of two terms, one depending only on xᵢ, the other depending only on
yᵢ. In this case we can define i's self-oriented utility function as Vᵢ = Vᵢ(xᵢ).
Yet, in general, Uᵢ will not be a separable function. In this case we can
define Vᵢ as

Vᵢ(xᵢ) = sup_{yᵢ} Uᵢ(xᵢ, yᵢ) .   (18.3)

This definition will make Vᵢ always well-defined if Uᵢ has a finite upper bound.
(But even if this is not the case we can make Vᵢ well-defined by restricting the
sup operator to feasible yᵢ values.)
Equation (18.3) defines i's self-oriented utility Vᵢ(xᵢ) as that utility level that
i would enjoy in a given personal position xᵢ if his other-oriented preferences
were maximally satisfied. In other words, my definition is based on disregarding
any disutility that i may suffer because his other-oriented preferences may not
be maximally satisfied. This is one way of satisfying the requirement that i's
other-oriented preferences should be disregarded.
From a purely mathematical point of view, an equally acceptable approach
would be to replace the sup operator in (18.3) by the inf operator. But from a
substantive point of view, this would be, it seems to me, a very infelicitous
approach. If we used the inf operator, then we would define Vᵢ essentially as
the utility level that i would experience in the personal position xᵢ if he knew
that all his relatives and friends, as well as all other people he might care
about, would suffer the worst possible conditions.
Obviously, if this were really the case then i could not derive much utility
from any personal position xᵢ, however desirable a position the latter may be.
Yet, the purpose of the utility function Vᵢ(xᵢ) is to measure the desirability of
any personal position from i's own point of view. Clearly, a utility function
Vᵢ(xᵢ) defined by use of the inf operator would be a very poor choice for this
purpose.
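The sup-based definition (18.3) can be sketched concretely. The functional form below is my own invention: a non-separable complete utility Uᵢ depending on i's own income x and a friend's income y, with the self-oriented utility Vᵢ obtained by taking the supremum over feasible y values, as the text allows.

```python
import math

def U(x, y):
    # Assumed complete utility: own income x, a sympathy term in the friend's
    # income y, and an interaction term that makes U non-separable.
    return math.log(1 + x) + 0.5 * math.log(1 + y) - 0.1 * abs(x - y)

def V(x, y_grid):
    # Definition (18.3), with the sup restricted to a feasible set of y values.
    return max(U(x, y) for y in y_grid)

y_grid = [i / 10 for i in range(0, 101)]   # feasible friend incomes 0..10
print(V(4.0, y_grid))
```

Here the supremum is attained at y = x = 4, so V(4) equals the utility i would enjoy if his other-oriented preferences were maximally satisfied — exactly the reading of (18.3) given above.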
19. Conclusion
I have tried to show that, under reasonable assumptions, people satisfying the
rationality postulates of Bayesian decision theory must define their moral
standards in terms of utilitarian theory. More specifically, they must define
their social utility function as the arithmetic mean (or possibly as the sum) of all
individual utilities. I also defended the use of von Neumann-Morgenstern
utility functions in ethics.
I have argued that a society basing its moral standards on the rule utilitarian
approach will achieve much higher levels of social utility than one basing them
on the act utilitarian approach. I have also stated some of my objections to
Rawls' and to Brock's nonutilitarian theories of justice.
Finally, I argued that, in our social utility function, each individual's interests
should be represented by a utility function based on his informed preferences,
and excluding his mistaken preferences as well as his malevolent preferences
and, more generally, all his other-oriented preferences.
References
Anscombe, F.J. and R.J. Aumann (1963) 'A definition of subjective probability', Annals of
Mathematical Statistics, 34: 199-205.
Arrow, K.J. (1951) Social choice and individual values. New York: Wiley.
Brandt, R.B. (1979) A theory of the good and the right. Oxford: Clarendon Press.
Brock, H.W. (1978) 'A new theory of social justice based on the mathematical theory of games',
in: P.C. Ordeshook, ed., Game theory and political science. New York: New York University
Press.
Dworkin, R.M. (1977) Taking rights seriously. Cambridge, Mass.: Harvard University Press.
Griffin, J. (1986) Well-being. Oxford: Clarendon Press.
Hare, R.M. (1981) Moral thinking. Oxford: Clarendon Press.
Harsanyi, J.C. (1953) 'Cardinal utility in welfare economics and in the theory of risk taking',
Journal of Political Economy, 61: 434-435.
Harsanyi, J.C. (1955) 'Cardinal welfare, individualistic ethics, and interpersonal comparisons of
utility', Journal of Political Economy, 63: 309-321.
Harsanyi, J.C. (1963) 'A simplified bargaining model for the n-person cooperative game',
International Economic Review, 4: 194-220.
Harsanyi, J.C. (1967-68) 'Games with incomplete information played by Bayesian players', Parts
I-III, Management Science, 14: 159-182, 320-334, and 486-502.
Harsanyi, J.C. (1977) Rational behavior and bargaining equilibrium in games and social situations.
Cambridge: Cambridge University Press.
Harsanyi, J.C. (1987) 'Von Neumann-Morgenstern utilities, risk taking, and welfare', in: G.R.
Feiwel, ed., Arrow and the ascent of modern economic theory. New York: New York University
Press.
Harsanyi, J.C. and R. Selten (1988) A general theory of equilibrium selection in games. Cambridge,
Mass.: MIT Press.
Luce, R.D. and H. Raiffa (1957) Games and decisions. New York: Wiley.
Nash, J.F. (1950) 'The bargaining problem', Econometrica, 18: 155-162.
Rawls, J. (1957) 'Justice as fairness', Journal of Philosophy, 54: 653-662.
Rawls, J. (1971) A theory of justice. Cambridge, Mass.: Harvard University Press.
Shapley, L.S. (1969) 'Utility comparisons and the theory of games', in G.T. Guilbaud, ed., La
décision: Agrégation et dynamique des ordres de préférence. Paris: Centre National de la
Recherche Scientifique.
Smart, J.J.C. (1961) An outline of a system of utilitarian ethics. Melbourne: Melbourne University
Press.
Smith, Adam (1976) Theory of moral sentiments. Clifton: Kelley. First published in 1759.
Vickrey, W.S. (1945) 'Measuring marginal utility by reactions to risk', Econometrica, 13: 319-333.
Von Neumann, J. and O. Morgenstern (1953) Theory of games and economic behavior. Princeton:
Princeton University Press.
Yaari, M.E. (1981) 'Rawls, Edgeworth, Shapley, Nash: Theories of distributive justice re-
examined', Journal of Economic Theory, 24: 1-39.