Social Dilemmas
The Psychology of Human
Cooperation
Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide.
Oxford New York
Auckland Cape Town Dar es Salaam Hong Kong Karachi
Kuala Lumpur Madrid Melbourne Mexico City Nairobi
New Delhi Shanghai Taipei Toronto
With offices in
Argentina Austria Brazil Chile Czech Republic France Greece
Guatemala Hungary Italy Japan Poland Portugal Singapore
South Korea Switzerland Thailand Turkey Ukraine Vietnam
Printed in the United States of America
on acid-free paper
■ CONTENTS
Preface vii
3 Evolutionary Perspectives 39
4 Psychological Perspectives 54
5 Cultural Perspectives 79
References 153
Index 187
■ PREFACE
Social dilemmas are a pervasive feature of human society. They are a basic fab-
ric of social life, and challenge dyads, groups, and societies. They did so in
the past, they do so now, and they will do so in the future: Social dilemmas
cross the borders of time: our ancestors must have faced many social dilemmas
in their small groups and societies. Even the literary figure, Robinson Crusoe,
must have quickly learned about social dilemmas after Friday entered his life.
Similarly, we often face social dilemmas at home, at work, and in many other places
where we are interdependent with other people. Newspapers are filled with
articles about societal problems that frequently are rooted in conflicts between
self-interest and collective interest, such as littering in parks, free-riding on
public transportation, evading taxes, pursuing bonuses in the financial world,
or exploiting natural resources. And social dilemmas may involve many people
who do not know one another, may include different countries, and for some
issues, such as global change, may concern the entire world. In many respects,
social dilemmas also cross “the borders of space.”
As the title indicates, this book is about social dilemmas, which are broadly
defined as conflicts between (often short-term) self-interest and (often longer-term)
collective interest. This book is also about the psychology of human cooperation.
In the course of this book, it will become clear that social dilemmas and human
cooperation are two sides of the same coin. Social dilemmas challenge our capacity
and motivation to cooperate with each other. Life without social dilemmas would
be relatively straightforward and pain-free: People would just behave as they liked
as if guided by Adam Smith’s invisible hand—at least as long as they were able to
coordinate actions with others. But life without social dilemmas is utopian. In our
interactions with friends and partners, work colleagues, or members of clubs and
communities and societies at large, there are frequent conflicts between our nar-
row self-interests and the collective interest.
This book provides many different examples of social dilemmas, and we will see
that they challenge the maintenance of our close relationships, our friendships, our
work, and leisure life, our politics, security, health, and the natural environment
in which we live. One could make the claim that the primary purpose of government and management is to resolve social dilemmas. We would not be surprised if a careful analysis revealed that the majority of all challenges (80 percent is a wild guess) that governments and management face are rooted in situations that are, or closely resemble, social dilemmas. How can we promote spontaneous help from bystanders? How can we activate citizenship and mutual help among employees in our work organizations? How can we restrain overfishing? How can
we promote commuting by public transportation? How can we reduce greed and
excessive bonus cultures in the financial world? How can we maintain trust and
cooperation among nations, and promote national security? Social dilemmas can
meeting while we wrote the book. We want to thank Niels van Doesum for comments on the final writings, and Lisanne Pauw, who organized, checked, and rechecked the long list of references. We would also like to thank all members
of the broad international social dilemma community that comes together at the
bi-annual meetings at some exotic location in the world. We are proud members of
this community and without the intellectual inputs of each of the members of this
social dilemma network, this book could simply not have been written.
Finally, we hope that you will enjoy reading this book—as a student, a fellow
academic, teacher, practitioner, or member of the general public—and that it
makes a meaningful difference, even if only a small difference, in how you think
about cooperation and how to promote cooperation in our everyday lives and
society at large.
■ INTRODUCTION
Paul van Lange had primary responsibility for preparation of this chapter.
yet how many are prepared to voluntarily reduce their carbon footprint by saving
more energy or driving or flying less frequently?
■ THE HUIZINGE CASE
One real world social dilemma occurred during the winter of 1979 in Huizinge,
a small village in the north of the Netherlands. Due to an unusually heavy
snow, Huizinge was completely cut off from the rest of the country, so that there
was no electricity for lighting, heat, television, and so on (Liebrand, 1983).
However, one of the 150 inhabitants owned a generator that could provide
sufficient electricity for all the people of this small community, but only if
they exercised substantial restraint in their energy use. For example, they could
use only one light, they could not use heated water, heat had to be limited
to about 18 degrees Celsius (64 degrees Fahrenheit), and the curtains had to
be closed. As it turned out, the generator collapsed because most people were
in fact using heated water, and were living comfortably at 21 degrees Celsius
(70 degrees Fahrenheit), watching television, and burning several lights simul-
taneously. After being without electricity for a while, the citizens were able to
repair the generator, and this time, they appointed inspectors to check whether
people were using more electricity than agreed upon. But even then, the gen-
erator eventually collapsed due to overuse of energy. And again, all inhabitants
suffered from the cold and lack of light and, of course, could not watch television. Indeed, there is little doubt that they all would have preferred a situation in which
they could use at least some electricity (a result of massive cooperation) rather
than no electricity at all (a result of massive noncooperation).
Social dilemmas can be quite intense, as the Huizinge case illustrates. They are
also quite ubiquitous. In fact, many of the world’s most pressing problems represent
social dilemmas, broadly defined as situations in which short-term self-interest is
at odds with longer-term collective interests. Some of the most widely recognized social dilemmas challenge society’s well-being in the environmental domain, including overharvesting of fish, overgrazing of common property, overpopulation, destruction of the Brazilian rainforest, and buildup of greenhouse gases due
to overreliance on cars. The lure of short-term self-interest can also discourage
people from contributing time, money, or effort toward the provision of collec-
tively beneficial goods. For example, people may listen to National Public Radio
without contributing toward its operations; community members may enjoy a
public fireworks display without helping to fund it; employees may elect to never
go above and beyond the call of duty, choosing instead to engage solely in activi-
ties prescribed by their formally defined job description; and citizens may decide
to not exert the effort to vote, leaving the functioning of their democracy to their
compatriots.
Social dilemmas apply to a wide range of real-world problems; they exist within
dyads, small groups, and society at large; and they deal with issues relevant to
a large number of disciplines, including psychology, sociology, political science
and economics, to name but a few. Given their scope, implications, and interdis-
ciplinary nature, social dilemmas have motivated huge literatures in each of these
disciplines (see also Fehr & Gintis, 2007). Disciplines have also tended to focus on only one type of social dilemma. For example, the two-person prisoner’s dilemma was very popular in social psychology during the 1970s; this was fol-
lowed by greater appreciation for other social dilemmas, including social dilem-
mas involving a greater number of people. In some social dilemmas, the act of
cooperation involves “giving” to a public good; in other social dilemmas, it is “not
taking too much” from a shared resource. We will now take a closer look at the
various types of social dilemmas, and the different names that various scientists
have used to capture a specific social dilemma. Once we have illustrated a family
of social dilemmas, we will also be able to provide a more formal definition of a
social dilemma.
■ SOCIAL DILEMMAS: A FAMILY OF GAMES
TABLE 1.1. Classification of Social Dilemmas (after Messick and Brewer, 1983)

                                      Collective Consequences
                              Immediate                         Delayed

Social Traps                  Commuting by car (vs. public      Harvesting as many fish as one
(Take Some Dilemmas;          transportation, or carpooling)    can from a common resource
Commons/Resource Dilemmas)    leads to daily traffic            eventually leads to the collapse
                              congestion and stress             of the resource

Social Fences                 Electing to not contribute to     Choosing to not engage in
(Give Some Dilemmas;          a community-funded fireworks      extra-role behaviors that benefit
Public Goods Dilemmas)        show results in cancellation      one’s company eventually leads to
                              of the show                       a deterioration of the company’s
                                                                positive culture
and if all pursue this non-cooperative course of action, all end up worse off than if
all had cooperated (see Figure 1.1).
In the Chicken Dilemma (also termed the Hawk-Dove game or the Snow Drift
game), each person is tempted to behave non-cooperatively (by driving straight
toward one’s “opponent” in an effort to win the game), but if neither player coop-
erates (swerves), both parties experience the worst outcome possible (death).
Clearly, Chicken does not involve a dominating strategy, as the best decision for
an individual rational decision maker depends on what he or she believes the
other will do; if one believes the other will cooperate (swerve), the best course
of action is to behave non-cooperatively (and continue driving ahead); however,
if one is convinced that the other will not cooperate (will not swerve), one’s best
course of action is to cooperate (swerve), because it is better to lose the game than
to die. There are interesting parallels between Chicken and situations in which
people are faced with the dilemma whether to maintain honor or status when they
are closely at risk (see Kelley, Holmes, Kerr, Reis, Rusbult, & Van Lange, 2003). For
example, Chicken is a situation in which you should exhibit toughness (being the hawk) by not cooperating, and you clearly outperform the other if the other does cooperate (being the dove). Intimidation may play a role by communicating
toughness, or a “no surrender” attitude. These are also risky strategies: if both par-
ticipants express such toughness, then the result may be that one needs to change
to cooperation (and lose face), or persist in noncooperation and maintain honor,
but seriously risk death. Over time, this may result in a snow drift, especially if people are committed not to lose face.

Figure 1.1 Payoff matrices for the Prisoner’s Dilemma (left), Chicken (middle), and the Assurance Dilemma (right). Entries are ordinal outcomes (4 = best, 1 = worst); in each cell, the first number is the row player’s outcome. C = cooperation, NC = noncooperation.

  Prisoner’s Dilemma         Chicken                Assurance
        C      NC                 C      NC                C      NC
  C   3, 3   1, 4           C   3, 3   2, 4          C   4, 4   1, 3
  NC  4, 1   2, 2           NC  4, 2   1, 1          NC  3, 1   2, 2

In everyday life, such situations may arise
when two companies are involved in an intense competition to lower the price of
their product to a point that is “killing” for both, or to guarantee treatment (early delivery of the product) that can never be implemented.
The Assurance (Trust) Dilemma also lacks a dominating strategy, and is unique
in that the highest collective and individual outcomes occur when both partners
choose to cooperate. This correspondence of joint and self outcomes might sug-
gest that the solution is simple, and there is no dilemma. However, if one party
considers beating the other party to be more important than obtaining high out-
comes for the self and others, or is convinced the other will behave competitively,
the best course of action is to not cooperate. The Assurance Dilemma is sometimes
described as resembling features of the relationship between the USA and the Soviet Union during the Cold War, in which disarming represented the cooperative choice
and arming the noncooperative choice (e.g., Hamburger, 1979). To jointly disarm
was clearly the best solution for both countries, yet being the only one to disarm
would have made one nation terribly vulnerable, because it may have yielded the
worst possible solution. Thus, the two countries armed for a long time because
they failed to trust one another, believing that the other party was seeking relative
advantage, and therefore was to be considered very threatening. As another exam-
ple, two athletes want to be involved in a fair contest, in that neither takes drugs to enhance their performance. However, if one athlete suspects that the other might
take drugs, it is perhaps best to take drugs as well to minimize the odds of losing
due to unfair disadvantages (Liebrand, Wilke, Vogel, & Wolters, 1986).
The similarity between the Prisoner’s, Chicken, and Assurance Dilemmas is
that all three situations involve collective rationality: Cooperative behavior by both
individuals yields greater outcomes than does noncooperative behavior by both
individuals. Specifically, the best (Assurance) or second best (Chicken, Prisoner’s
Dilemma) possible outcome is obtained if both make a cooperative choice,
whereas the third best (Assurance, Prisoner’s Dilemma) or worst (Chicken) pos-
sible outcome is obtained if both make a noncooperative choice. In the Prisoner’s
dilemma, tendencies toward cooperation are challenged by both greed (i.e., the
appetitive pressure of obtaining the best possible outcome by making a noncoop-
erative choice) and fear (i.e., the aversive pressure of avoiding the worst possible
outcome by making a noncooperative choice; Coombs, 1973). In Chicken, coop-
eration is challenged by greed, whereas in Trust, cooperation is challenged by fear.
Thus, in a sense, the Prisoner’s Dilemma combines Chicken and Assurance, representing a stronger conflict of interest involving both fear and greed. Consistent
with this analysis, research has revealed that individuals exhibit greater levels of
cooperation in Assurance and Chicken than in the Prisoner’s Dilemma (Liebrand
et al., 1986).
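The comparison of the three games can be made concrete in a short script. This is a sketch of our own (the encoding and variable names are not from the original text): each game is written as the four ordinal payoffs for the row player, with 4 the best and 1 the worst outcome, and we check for greed (the temptation to exploit a cooperator), fear (the pressure to avoid being the exploited lone cooperator), and whether noncooperation is a dominating strategy (which requires both).

```python
# Ordinal payoffs (4 = best, 1 = worst) for the row player:
# R = both cooperate, S = lone cooperator, T = lone noncooperator, P = both noncooperate.
GAMES = {
    "Prisoner's Dilemma": {"R": 3, "S": 1, "T": 4, "P": 2},
    "Chicken":            {"R": 3, "S": 2, "T": 4, "P": 1},
    "Assurance":          {"R": 4, "S": 1, "T": 3, "P": 2},
}

def analyze(p):
    greed = p["T"] > p["R"]           # temptation to exploit a cooperator
    fear = p["P"] > p["S"]            # incentive to avoid being exploited
    nc_dominates = greed and fear     # NC is best regardless of the other's choice
    return greed, fear, nc_dominates

for name, p in GAMES.items():
    greed, fear, dom = analyze(p)
    print(f"{name}: greed={greed}, fear={fear}, NC dominates={dom}")
```

Running the script reproduces the pattern described above: in the Prisoner’s Dilemma both greed and fear are present (and noncooperation dominates), Chicken involves only greed, and Assurance only fear.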
The temporal dimension. We often see that the consequences for the self can be
immediate or delayed, just as the consequences for the collective can be immediate
or delayed. This temporal dimension is exemplified in social traps, or situations in
which a course of action that offers positive outcomes for the self leads to negative
outcomes for the collective (Messick & McClelland, 1983; Platt, 1973). Examples
of delayed social traps include the buildup of pollution due to over-reliance on
cars, and the eventual collapse of a common fishing ground as a result of sus-
tained overharvesting. Given their emphasis on “consuming” or “taking” a posi-
tive outcome for the self, social traps are often called take some dilemmas, a classic
example of which is the commons (or resource) dilemma. This is the kind of social
dilemma that attracted environmental scientists to examine the variables that help
people to exercise restraint in their consumption of shared resources.
These social trap situations may be contrasted with social fences, or situa-
tions in which an action that results in negative consequences for the self would,
if performed by enough people, lead to positive consequences for the collective.
Examples of delayed social fences include the eventual deterioration of a com-
pany’s positive culture due to employees’ unwillingness to engage in extra-role
(or organizational citizenship) behaviors, such as being a good sport and helping
new employees adjust, and the gradual deterioration of an education system due
to taxpayers’ unwillingness to fund school levies. Given their emphasis on “giving”
something of the self (such as time, money, or effort), social fences are often called
give some dilemmas, a classic example of which is the Public Goods Dilemma. This
is the kind of social dilemma that attracted experimental economists in particular
to examine the variables that help people to contribute to public goods, and resist
the temptation to free-ride on the contributions of other members.
currently developing new games to enhance our understanding of some new chal-
lenges to social decision making, and especially human cooperation (e.g., Halevy,
Bornstein, & Sagiv, 2008; McCarter, Budescu, & Scheffran, 2011). Some of these issues
will be addressed in Chapter 8, which discusses prospects for the future.
■ WHY GAMES?
The social dilemma literature has its conceptual roots in game theory. With the
prisoner’s dilemma as one of the prime examples of a social dilemma, it is fair
to admit that the prisoner’s dilemma is just one in a family of numerous games.
One only needs to skim the book by Von Neumann and Morgenstern (1944),
or the much later book by Luce and Raiffa (1957), to see that the prisoner’s
dilemma is almost a needle in a haystack—not so easy to find. Yet the game
attracted lots of scientists. Why might that be? And why games?
First, the prisoner’s dilemma excels in parsimony. In its original form, involv-
ing two people who simultaneously make only one choice, the structure of the game
is very simple. When lecturing, and not talking about the anecdote as to where
the name originated (which can be confusing, see Chapter 2), the game can be
explained in ten minutes or less. While simple in terms of structure, the game is
not simple at all in terms of rationality, and people can have very different feelings
about what is rational in the prisoner’s dilemma. Hence, the prisoner’s dilemma
is also complex: one can view the dilemma in different ways, and it is even more
complex to regulate behavior at the collective level. There is even research that
illustrates the point that it is the interaction goal—individualistic versus collec-
tivistic—that determines whether people view the cooperative choice as intel-
ligent and the noncooperative choice as unintelligent, or vice versa (Van Lange & Kuhlman, 1994).
Second, there is a wealth of motives, cognitions, and emotions that might be
activated by the Prisoner’s Dilemma. There may be a strong form of self-regard
such as greed (always go for the best possible outcome), a self-protective form of
self-regard such as fear (let’s make sure that the other is not going to exploit me),
a genuine concern with the outcomes for the self and the other (collectivism),
or under special circumstances, primarily the other (altruism). And there is the
powerful concern with equality or fairness and the strong tendency to minimize
large differences in outcomes. Such tendencies may be easily activated even when
just approaching a situation in which two people are unlikely to receive the same
outcomes (e.g., Haruno & Frith, 2009). Cognition and reasoning might be focused
on predicting the other’s behavior, making sense of the situation, and deciding how to decide (e.g., in terms of norms and identity: “what does a person like me do in
a situation like this,” Messick, 1999; Weber, Kopelman, & Messick, 2004), and after
the fact: making sense of the other’s behavior, to “learn” for future situations like
the Prisoner’s Dilemma. All of this is preceded or accompanied by strong emo-
tions, such as regret (when one made a noncooperative choice out of fear but then
finds out that the other made a cooperative choice), or anger (when one made
a cooperative choice and then finds out that the other made a noncooperative
choice).
Third, what attracted scientists to the original Prisoner’s Dilemma (in the ocean
of games) are theoretical questions, such as: (a) What is the logical, rational solution to the prisoner’s dilemma? and (b) What promotes a cooperative choice? Later, when people started doing research on the iterated Prisoner’s Dilemma, they also asked: (c) Do people learn and adapt to develop stable patterns of cooperative interaction? These are all questions relevant to game theory, the evolution of coopera-
tion, as well as to the psychology of trust, cooperation, and learning (e.g., Budescu,
Erev, & Zwick, 1999; Nowak, 2006; Schroeder, 1995). This may well have been part
of the broader zeitgeist in the years after the economic crisis in the 1930s and World
War II. Game theory, more generally, was influential in various scientific disciplines
for a variety of reasons. One is that game theory provided a very useful comple-
ment to extant economic theory, which was primarily based on macro-level statistics that had not proven to be exceptionally useful for the prediction of economic
stability and change. Another reason is that game theory provided a “logic” that had
a strong scientific appeal, analytical power, and mathematical precision (e.g., Kelley
et al., 2003; Rapoport, 1987; Suleiman, Budescu, Fischer, & Messick, 2004).
Fourth and finally, the Prisoner’s Dilemma also inspired scientists and practi-
tioners alike to get a grip on some major social issues. One such issue was to analyze
the economic crisis from the ’30s, and provide a basis for the understanding of
various economic and social phenomena as well as to address the roots of conflict,
and especially how to resolve it (e.g., Pruitt & Kimmel, 1977; Schelling, 1960).
The Second World War itself, and especially the beginning of the Cold War, was
a period in which trust and cooperative relations had to be re-built, especially in
Europe. The Prisoner’s Dilemma, as well as some other games (e.g., negotiation
games), were often used in designing policy and providing recommendations for
the resolution of international hostility and friction (Lindskold, 1978; Osgood,
1962). Moreover, basic insights from game theory were discussed and used by
the RAND Corporation (Research ANd Development), an influential organization and think tank whose mission was to provide analysis and advice on military strategy for the United States. (The RAND Corporation is now more international in orientation
and has several sites outside of the United States; also, it is now broader in scope in
that it focuses on several key societal issues, including terrorism, energy conserva-
tion, and globalization—interestingly enough, these social issues also have strong
parallels to social dilemmas.) Later, in the early ’70s, the Western world faced a major oil crisis when the mem-
bers of Organization of Arab Petroleum Exporting Countries or OAPEC (consist-
ing of the Arab members of OPEC, plus Egypt, Syria and Tunisia) proclaimed an
oil embargo. The experience of scarcity—insufficient gasoline, for example—along with early signs that we were overusing other natural resources, may have inspired the resource dilemma. Subsequent environmental
issues, such as global warming, acid rain, and overfishing, reinforced awareness of
social dilemmas where excessive consumption is increasingly perceived as nonco-
operative, or as a neglect of shared future interest (e.g., Burger, Ostrom, Norgaard,
Policansky, & Goldstein, 2001; Dolšak, & Ostrom, 2003).
Thus, we see four important reasons why the original Prisoner’s Dilemma game,
as a prototype of a social dilemma game, was inspirational to so many scientists,
for such a long time: (a) its simplicity in terms of structure; (b) its wealth in terms of the motives, cognitions, and emotions it may activate; (c) its ability to address broad questions about human cooperation; and (d) its ability to help address and illuminate critical societal issues (applicability; see Van Lange, 2013).
societies (Herrmann, Thöni, & Gächter, 2008; see also Balliet & Van Lange, 2013b).
We also see that prominent theories are now discussed in terms of applications.
For example, research in the traditions of interdependence theory and evolutionary theory is now being applied to various domains, such as environmental sustainability, donations and volunteering, and organizational behavior. More than any-
thing else, this trend reveals that understanding social dilemmas matters.
Taken together, by recognizing social issues and societal challenges, by bridg-
ing fields and disciplines, and by bridging theory and application, we see a growing scientific field that is not only becoming more mature but is also inspiring to an increasing number of scientists working in different fields and disciplines, and to professionals who face different social dilemmas in society and seek effective and efficient solutions. And thinking about the basic nature of human
cooperation, and the fact that it is addressed at the level of the individual all the
way to society, one may almost reach the conclusion that social dilemmas are on
the verge of becoming a new field of scientific inquiry, a field where social, bio-
logical, and behavioral scientists are working together with scientists in complementary fields, such as neuroscience, genetics, and cultural studies. Although our book is
primarily focused on the psychology of human cooperation, as the title indicates,
it is also true that we hope to cover some of the central articles that address the
state of the art of social dilemma research. We will do so selectively, because it is a
virtually impossible task to recognize all the empirical contributions that scientists
outside of psychology have made in the history of social dilemmas.
2 History, Methods,
and Paradigms
It can be argued that there are three key ideas underlying the general concept
of a mixed motive: The desire to do well for oneself; the fact that one’s out-
comes are partially influenced by the actions of others, as their outcomes are
partially affected by our actions; and that doing wrong by others leaves one
open to possible retaliation if the interaction is ongoing. All three of these have been issues of long-standing interest among observers of human nature. Let us
consider each in turn.
Craig Parks had primary responsibility for preparation of this chapter.
■ OUTCOME MAXIMIZATION
caution rather than fear, and wishing rather than longing. (There is no eupathic
equivalent of distress.)
The Greeks, then, had three quite different positions on outcome maximiza-
tion. The Epicureans believed that the ideal strategy is to maximize outcomes in
the long run, even if this meant incurring short-term loss. The Skeptics believed
that one should maximize immediate outcomes, because there is no way to know
whether actions now will affect outcomes later. Finally, the Stoics believed that
one should strive for acceptable outcomes rather than maximal outcomes, because
phenomena that produce maximum gain also have the potential to produce maxi-
mum loss. As we will see in later chapters, elements of each of these ideas are often
observed in modern social behavior.
Modern ideas. Modern ideas on outcome maximization have largely been
grounded in the Epicurean tradition. Stoicism has had no impact on modern
thought, and Skepticism quickly became tangled up in questions about how a
skeptic can function in society—in its pure form, Skepticism prescribes that a per-
son who needs to cross the street should go now, regardless of traffic, because
there is no way to know whether stepping in front of a car will cause the person
death or injury—and today functions only as a guiding principle for the conduct
of research and scholarship (Groarke, 2008).
Modern thought is grounded in Jeremy Bentham’s (1748–1832) notion of utili-
tarianism. For Bentham, an action has utility if it has the tendency to promote a
maximum amount of “happiness,” defined as pleasure with corresponding absence
of pain. Determining what action to perform is the end result of hedonic calculus,
in that, if a person is trying to achieve a pleasurable outcome, s/he will select the
action that is most likely to produce an outcome of maximum intensity and dura-
tion; will be experienced as directly as possible; offers the best chance of being
followed by other pleasurable experiences; and is unlikely to be followed by pain.
Alternatively, if the person must deal with painful outcomes, the favored action is
the one most likely to produce pain of minimal intensity and duration; can be
experienced indirectly; is unlikely to be followed by other pains; and is likely to be
followed by pleasure. Thus, Bentham saw people as trying to maximize pleasur-
able outcomes and minimize painful ones, and to some extent, also saw people
as forward-thinking, in that a behavior that produces not only a pleasurable out-
come, but holds the possibility of future pleasures, is more likely to be performed
than one that does not hold future promise.
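Bentham’s comparison of actions can be sketched as a simple scoring rule. This is purely illustrative: the criteria follow the dimensions named above (intensity, duration, directness, the chance of further pleasures, the risk of ensuing pain), but the numeric scales, the equal weighting, and the candidate actions are hypothetical, not Bentham’s.

```python
# Hypothetical sketch of Bentham's hedonic comparison: each candidate action is
# scored on the criteria summarized in the text, and the highest-scoring action
# is the one selected. All inputs are on an arbitrary 0-10 scale.
def hedonic_score(intensity, duration, directness, fecundity, pain_risk):
    # pain_risk (the chance of the pleasure being followed by pain) counts
    # against the action; the other criteria count in its favor.
    return intensity + duration + directness + fecundity - pain_risk

actions = {
    "short_term_indulgence": hedonic_score(9, 2, 10, 1, 6),   # intense but fleeting
    "long_term_project":     hedonic_score(5, 8, 3, 8, 1),    # milder but durable
}
best = max(actions, key=actions.get)
print(best)
```

Consistent with the forward-looking element in Bentham’s account, an action promising durable pleasure and further pleasures can outscore a more intense but short-lived one under this rule.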
Bentham’s intellectual successor, John Stuart Mill (1806–1873), attempted to
express Bentham’s idea within the context of the mind, and is typically seen as
having laid the groundwork for consideration of the psychology of outcome maxi-
mization. Mill’s contributions were twofold. First, he proposed the idea that people
come to develop associations between actions and outcomes. This leads to antici-
pation of pleasure by virtue of performing an action, and when the action-pleasure
relationship occurs, feelings of satisfaction result. An unexpected action-pleasure
experience will instead produce feelings of surprise, but will also lead to the begin-
nings of an expected associative relationship. Thus, an employee who unexpect-
edly wins a commendation because he works overtime on a project will expect to
be similarly commended the next time he works extra hours. Second, and perhaps
more importantly, Mill argued that there are qualitative distinctions among plea-
surable outcomes, and the more satisfying higher-order pleasures result from first
experiencing lower-order pleasures. This idea more distinctly develops the notion
that people will consider both short-term and long-term gains, and that long-term
gains will ultimately be more attractive than short-term gains.
A common misconception, and hence criticism, about the hedonic calculus is
that people are assumed to execute it before every decision. However, Bentham and
Mill were both clear that they held no such expectation (see Bentham, 1789/1970,
Chapter IV, Section VI, and Mill, 1861/1998, Chapter II, paragraph 19). From a
psychological perspective, the idea is more descriptive of why people attempt to
maximize positive outcomes rather than prescriptive of how one ought to decide
what to do so as to realize maximum benefit.
To summarize: the notion that people seek to maximize their own gain, and
minimize their own pain, has been a fundamental component of at least some
philosophies of human nature. While early views emphasized relatively straight-
forward tendencies toward seeking pleasure and avoiding pain, there has been a gradual growth in the belief that people develop associations between actions
and outcomes and are able to adopt a longer-time perspective. In particular, more
modern theorists discussed the role of time horizon in pleasure motivation, ulti-
mately arguing that long-term pleasure ought to be the ultimate goal. As we will
see in later chapters, the extent to which people actually strive toward this goal is
debatable.
■ INTERDEPENDENCE
The idea that humans want to do well for themselves, then, has an ancient his-
tory. What about the notion that our actions affect others, as they affect us? It
too has been speculated on for centuries. Reference to the idea can be found
in Aristotle’s writings on eudaimonia. Eudaimonia is a Greek concept that has
no strict English equivalent, but is usually taken to refer to flourishing and an
objective assessment of life quality, as opposed to “happiness,” which is treated
as subjective assessment of quality. A key issue underlying the philosophy of
Aristotle’s time was how to achieve eudaimonia, and different schools of thought
had different opinions on this. For our purposes, Aristotle’s arguments alone are
noteworthy. He felt that eudaimonia was achieved not only by living up to one’s
abilities, but also by surrounding oneself with valuable “external goods,” a key
one being friends. Aristotle argued that such “goods” are critical for a good life
because they provide us with opportunities to apply our abilities. Quite sim-
ply, it is impossible to be virtuous if there is no one to express one's virtue
toward. This, then, is an early idea about interdependence: I benefit by behaving
virtuously toward you, and you benefit by behaving virtuously toward me. The
benefit is not an immediate reward but an intangible life experience; all the
same, the idea remains that our outcomes are partially affected by others.
David Hume (1711–1776) is generally considered the first scholar to articulate
the dynamics of interdependence. Hume argued that people have what he called a
confined generosity: We are of course concerned with our own well-being, but we
also maintain some degree of concern for the well-being of others. This concern,
however, is not because of some natural benevolence, but rather a result of civiliza-
tion: By being part of society, one realizes that, for that society to persist, one has
to help ensure the survival of its members, including members who are not part
of one’s family. This may mean that we have to cooperate with people whom we
do not actually care about, and who do not care about us. Hume referred to this as
“artificial virtue.” He further argued that initial cooperative interactions with unre-
lated others will be cautious, and as one sees that positive outcomes emerge from
the exchange, the interactions will repeat and trust will build, leading to larger
acts of cooperation. One could argue (and some have argued) that Hume’s reason-
ing represents the first game-theoretic analysis of interdependence; regardless, his
basic logic remains at the foundation of most thought on human interaction.
Adam Smith (1723–1790) expanded upon Hume’s ideas. Smith argued that a
moral person has an innate desire to be approved of by others, and that we sym-
pathize with others by imagining how they must feel when they experience some-
thing. Because of our desire for approval, it follows that we will want to please
others and avoid offending them, and our ability to sympathize guides our choices
of actions that should bring approval. These ideas were expressed in Smith’s Theory
of Moral Sentiments (1759/2002). He is more popularly known for his other major
work, Wealth of Nations (1776/1976), still a seminal work in economics, and the
claim is frequently made that this book supersedes his writings on morality. In
fact, Smith saw the two works as complementary. Self-interest was, to Smith, an
example of “commercial virtue,” a more base virtue that emphasizes improvement
of one’s situation. As one strives for commercial virtue, the famous “invisible hand”
enters to improve the lot of others with whom one associates. In particular, in his
Wealth of Nations, he assumed that, for the most part, groups and societies are
well-functioning because individuals pursue their self-interest. As the well-known
quote states: “It is not from the benevolence of the butcher, the brewer, or the
baker, that we expect our dinner, but from their regard to their own interest. We
address ourselves, not to their humanity but to their self-love, and never talk to
them of our own necessities but of their advantages.” The assumption underly-
ing the invisible hand is that the pursuit of self-interest often has the unintended
consequence of enhancing collective interest. A further argument is that once the
social situation is indeed improved, citizens can turn their attention to higher, or
“noble,” virtues, the most prominent of which is the desire for approval. Smith’s
argument, then, is that while we have an innate desire to help others, we must help
ourselves first.
The common thread running through all of these positions is that interde-
pendence is ultimately functional. We need others to help us both survive and
maximize our potential. So, a person is most likely to survive and thrive if s/he
is good at working with others, and attending to their needs. Further, these phi-
losophers are quite optimistic. People either intuit that cooperation is important,
or are born with the ability to worry about how others feel. Not all philosophers
of human nature were as positive about humans as were Aristotle, Hume, and
Smith, however. Some took the view that cooperation can only be brought about
by force and threat. This was first, and most famously, articulated by Thomas Hobbes.
■ EARLY THOUGHTS ON MIXED MOTIVES
Consider the following problem: A businessman has contracts with three sup-
pliers. Supplier A is owed $10,000, Supplier B is owed $20,000, and Supplier C
is owed $30,000. The businessman dies, and as no members of his family are
interested in taking over the business, it is going to be shut down. The three
suppliers need to be paid off, but the company has fewer assets than the $60,000
needed to pay the three suppliers in full. Though it is not yet clear exactly
how much the company has, the likely amount is either $10,000, $20,000, or
$30,000. Hence the problem: How much should each supplier be paid given
each of these possible asset totals?
While this may seem like a modern problem, in fact this scenario is a variant of
the marriage contract problem (so called because the example involves three wives
who bring differential resources to their marriage to the same man) presented in
the Talmud, which is the compilation of law and tradition from ancient Babylon,
and serves as the basis for Jewish law. For our purposes, the marriage contract
problem is important because it is the first known example of the use of ideas that
relate to mixed motives, specifically, the idea that any one creditor’s outcome in
the problem above is affected by the other two. While each creditor would most
prefer to be paid in full, or to be paid as close to full as the assets allow, the likeli-
hood that that will happen is low, because of the presence of the other creditors,
who have equally good claims to the assets and are probably unwilling to walk
away empty-handed. Instead, each creditor is going to have to accept a payoff that
is less than maximum, so that each can get some money, and in fact, the Talmud
prescribed that, no matter what the total assets, each creditor must be paid some
amount. As we will see in chapter 4, the idea that your maximum payoff exists, but
is likely unattainable, speaks to the notion of “temptation” in an outcome matrix.
the high card). Thus, A has the ability to force B to perform a specific behavior.
B’s influence over A is more indirect. B cannot induce A to perform any specific
behavior, but B may be able to change A’s outcome, if swapping with the deck
delivers a winning card to B (e.g., A holds a “2,” B holds a “4,” A swaps with B, and
B in turn swaps with the deck and draws a “6”—A has gone from being the win-
ner to the loser and can do nothing to alter this). This is in contrast to most card
games, in which one succeeds or fails largely by virtue of card management or by
monitoring probabilities.
Waldegrave recognized the interdependence aspect of le her, and realized
that, while Player B’s decision is always easy, Player A’s decision is not. How does
A know when to swap? If she is holding a “2” or a King, the decision is clear, but
for any other card, both keeping and swapping have potential benefit and draw-
back. Waldegrave was thus motivated to identify a strategy that would maximize
the likelihood of ending the game with maximal winnings. His solution, called a
mixed strategy, was based on the idea that one should avoid the absolute rule “I
will always swap if my card is less than some threshold value, and always keep if
it is above that value.” Instead, one should take a probabilistic approach, and swap
with probability p if the card is less than the threshold, and swap with probability
1 − p if the card is above the threshold. The mixed strategy is a fundamental notion of
game theory, though it is less important for our discussion. For us, the Waldegrave
example is key because it represents the earliest example of someone pondering
how to make choices when another person has the ability to affect your outcomes.
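For concreteness, the probabilistic rule can be sketched in a few lines of Python. The threshold and probability values in the example are purely illustrative; they are not Waldegrave's actual solution to le her.

```python
import random

def mixed_strategy_swap(card_rank, threshold, p, rng=random.random):
    """Decide whether to swap, following the probabilistic rule described
    above: swap with probability p when the card is below the threshold,
    and with probability 1 - p when it is at or above it."""
    swap_probability = p if card_rank < threshold else 1 - p
    return rng() < swap_probability  # True means swap, False means keep
```

Because the decision is randomized, an opponent cannot fully predict it even after inferring the threshold; this unpredictability is the point of a mixed strategy.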
After the appearance of the Waldegrave problem, much work was done on
mixed-motive-type situations, but this work was almost exclusively mathemati-
cal in nature, oriented around derivation of probabilities of various outcomes,
and proofs of theorems. It was not until the 1920s that theorists began to spec-
ulate on the role of psychological variables in mixed-motive choice, and that
speculation was initiated by a mathematician, Emile Borel. Borel was interested
in the game of poker. He recognized that the game is a situation of imper-
fect information—unless one is cheating, one knows only the content of one’s
own hand. A skilled opponent can take advantage of this by bluffing, which in
turn should lead to second-guessing of one’s strategy. Borel saw that these basic
features characterize a host of other situations (for example, a dictator could
bluff about how many missiles his military holds, and verification of the true
size of his arsenal could well be impossible to accomplish). Borel wondered
if a strategy could be devised that would maximize one’s chances of winning
even in the face of such uncertainty and trickery. He rather quickly concluded
that one could not, and by 1928 he had moved away from the problem, but
he was apparently the first to realize that mixed-motive choice is affected
by psychological factors as well as sheer strategy. Indeed, he contributed, in
1938, a chapter to a volume devoted to applications of findings from games
of chance entitled “Jeux où la Psychologie Joue un Rôle Fondamental” (“Games in
Which Psychology Plays a Fundamental Role”), and late in his life he was credited
within economics as being the first to bring psychology into the study of mixed
motives (Fréchet, 1953).
Borel’s ideas were of interest to another mathematician, John von Neumann, who
believed that it was in fact possible to develop a choice strategy in the face of uncer-
tainty. He published on the problem in 1928, and then returned to it in the early
1940s. Interestingly, it is unclear what motivated von Neumann to resume working
on game theory. He was quite interested in computational logic, rule-based axi-
oms, and the notion of the brain as a calculator, and he was convinced that quan-
tum mechanics could model social phenomena. Oskar Morgenstern had a similar
conviction, and suggested that economic behavior would be an excellent test case
for their ideas. (See Mirowski, 1992, for a complete discussion of von Neumann’s
interests.) In 1944 von Neumann and Morgenstern published a book-length treat-
ment of their ideas entitled Theory of Games and Economic Behavior in which
they laid out the basic notions underlying game theory. Their particular goal was
to provide a set of axioms that would spell out, mathematically, what one should
expect to occur when people are engaged in a mixed-motive task.
Formal tests of their propositions began in 1950 with the work of two mathema-
ticians, Merrill Flood and Melvin Dresher. They believed that game theory could
be used to model international conflict, and devised a simple task for observing
individual behavior in a mixed-motive situation (De Herdt, 2003; Flood, 1952).
They invited two other researchers, John Williams and Armen Alchian, to play
100 rounds of a decision-making game. The players were presented with the outcome
matrix shown in Figure 2.1, with Alchian’s payoff listed first and Williams’
second in each cell:

Figure 2.1:

                 Williams 1    Williams 2
Alchian 1          −1, 2         0.5, 1
Alchian 2          0, 0.5        1, −1

On each round, each of them was to choose between (1) and (2). They would
not be allowed to interact, though they would be informed after each trial of the
other’s choice, and the resulting payoff to each person. In the long run, the best
combination of choices for both players is (Alchian, 1; Williams, 2), as after 100
trials, the greatest combined payoff would have been issued to the duo: 50 points
for Alchian, and 100 for Williams, for a total of 150 points. By contrast, the game
theory equilibrium prediction (Nash, 1950) is that Alchian will consistently select
(2), because no matter what Williams does, the outcome will be better than if
Alchian selects (1), and Williams will select (1) for the same reason. Thus, they
should always end up in the (2, 1) cell, and after 100 trials Alchian would have a
total of 0, and Williams 50, with 50 total points paid out. In fact, the (2, 1) com-
bination rarely happened, and the pair usually ended up in the (1, 2) cell. More
specifically, Williams chose (2) 78 times, and Alchian chose (1) 68 times, with
Alchian being less cooperative because he was unhappy with his outcomes being
of lesser value than Williams’.
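The dominance logic and the 100-trial payoffs can be checked mechanically. The payoff values below are those of Figure 2.1, in points per trial:

```python
# Flood-Dresher payoffs, keyed by (Alchian's choice, Williams' choice);
# each value is (Alchian's payoff, Williams' payoff), as in Figure 2.1.
payoffs = {
    (1, 1): (-1.0, 2.0),
    (1, 2): (0.5, 1.0),
    (2, 1): (0.0, 0.5),
    (2, 2): (1.0, -1.0),
}

# Choice 2 dominates for Alchian: better no matter what Williams does.
assert all(payoffs[(2, w)][0] > payoffs[(1, w)][0] for w in (1, 2))
# Choice 1 dominates for Williams, by the same reasoning.
assert all(payoffs[(a, 1)][1] > payoffs[(a, 2)][1] for a in (1, 2))

# Over 100 trials, the equilibrium cell (2, 1) pays far less than (1, 2):
print([100 * x for x in payoffs[(2, 1)]])  # equilibrium: Alchian 0, Williams 50
print([100 * x for x in payoffs[(1, 2)]])  # cooperative: Alchian 50, Williams 100
```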
Flood and Dresher’s study provoked much interest, but its context-free nature
raised questions about how easily the task could be understood, and how lay-
people (Williams and Alchian were a mathematician and economist respectively)
would respond to the game. As such, in 1950, during a presentation to the Stanford
University psychology department, mathematician Albert Tucker added a context
story. He suggested that the Flood-Dresher matrix paralleled a situation in which
two prisoners are separated and independently confronted with a request to con-
fess to a crime. If neither confesses, the prosecution will seek a tough sentence in
court; if each confesses, the resultant plea bargain will produce a lesser sentence
for each; but if only one confesses, it will be assumed that he alone committed the
crime, which demands a harsh sentence, while the non-confessor will go free. In
matrix form, with the outcomes being number of years in prison, this can be
represented in the way shown in Figure 2.2, with Prisoner A’s years listed first
in each cell:

Figure 2.2:

                       B confesses    B does not confess
A confesses               1, 1              3, 0
A does not confess        0, 3              2, 2
As a result of Tucker’s cover story, this basic structure came to be called the
“Prisoner’s Dilemma Game,” or PDG for short.
The dynamics of the Prisoner’s Dilemma are deceptively simple. Because there
is no interaction between the prisoners, overt coordination of choices is impos-
sible. Each player has to try to infer what the other will do. The inference process
can lead to what seems an obvious conclusion. If Prisoner B confesses, Prisoner
A will receive 1 year in prison if he also confesses, and will go free if he does not
confess. Clearly here it would be better to not confess. Similarly, if B does not
confess, A will get 2 years if he also does not confess, and 3 years if he does
confess. Two years is more desirable than 3 years, so if B does not confess, it is better
for A to not confess. Note the general pattern: Regardless of what B does, not
confessing produces the better outcome for A. We can say, then, that not confess-
ing dominates confessing, and we would expect A to not confess. But therein lies
the dilemma—this same logic also applies for B. This means that each prisoner
will choose to not confess, which means each will get 2 years in prison, which is a
worse outcome than would have resulted if each had confessed.
Perhaps we go one step further and assume each prisoner is insightful and
discovers this conflict. Each one should then conclude that confessing is the bet-
ter choice. But note now what arises: If B expects A to confess, then B could take
advantage of this and opt to not confess. This would give A 3 years in prison, and
would set B free. Surely B will fall prey to this temptation. But the same temptation
exists for A, which means A also would ultimately not confess, and we are right
back where we started. It is this dynamic that has attracted so many researchers
to the Prisoner’s Dilemma as a research tool. In 1957, Duncan Luce and Howard
Raiffa produced a nontechnical overview of the Prisoner’s Dilemma and discussed
its potential application to a variety of problems, and their work opened the door
for researchers in a number of disciplines—psychology, sociology, political science,
and economics, to name just some—to use the game as a tool for studying a variety
of real problems. As a result, studies using the Prisoner’s Dilemma grew rapidly, and
within a short time literally hundreds of papers on the paradigm were published.
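The reasoning above can be written out as a quick check; the year values come directly from the cover story (lower is better):

```python
# Years in prison from Tucker's cover story, keyed by (A's choice, B's choice);
# in this telling, confessing is the cooperative choice.
years = {
    ("confess", "confess"):         (1, 1),
    ("confess", "not confess"):     (3, 0),
    ("not confess", "confess"):     (0, 3),
    ("not confess", "not confess"): (2, 2),
}

# Not confessing dominates for Prisoner A: fewer years no matter what B does.
for b_choice in ("confess", "not confess"):
    assert years[("not confess", b_choice)][0] < years[("confess", b_choice)][0]

# Yet if both follow the dominant choice, each gets 2 years, which is worse
# than the 1 year each would have received from mutual confession.
print(years[("not confess", "not confess")], years[("confess", "confess")])
```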
Within the PDG matrix, confession is more generally the cooperative choice,
and failure to confess the non-cooperative choice. As well, the outcomes are typi-
cally represented by their motivational properties, as shown in Figure 2.3:
Here, “T” is the Temptation outcome, because it tempts each player to try to
receive it; “R” is the Reward for mutual cooperation; “P” is the Punishment for not
mutually cooperating; and “S” is the Sucker outcome, resulting from a failed attempt
at mutual cooperation. In a Prisoner’s Dilemma, these outcomes will order as
T > R > P > S, and twice Reward will be larger than Temptation plus Sucker (or
formally, 2R > T + S). This latter condition is necessary so that simple alternation
between cooperating and not cooperating is less lucrative over the long run than
repeated joint cooperation.

Figure 2.3, with Player A’s outcome listed first in each cell:

                       B cooperates    B does not cooperate
A cooperates              R, R               S, T
A does not cooperate      T, S               P, P

The outcome values can also be used to quantify the
degree of cooperativeness or temptation in the payoff matrix. The K index (Rapoport,
1967) ranges from 0 to 1.00, with higher values reflecting a greater degree of coop-
erativeness, and lower values a greater degree of temptation. It is calculated as
K = (R − P) / (T − S)
If the K value is large, the interpretation is that there is not a great outcome
advantage to pursuing Temptation; in other words, the relative difference between
Reward and Temptation is not that large. By contrast, a small K value indicates that
there is a considerable relative difference, and Temptation will be very attractive.
All else being equal, we expect the likelihood of cooperation to increase as the
K index goes up.
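A minimal sketch of the computation, using hypothetical payoff values:

```python
def k_index(T, R, P, S):
    """Rapoport's (1967) index of cooperativeness, K = (R - P) / (T - S).

    Assumes the Prisoner's Dilemma ordering T > R > P > S."""
    assert T > R > P > S, "payoffs do not form a Prisoner's Dilemma"
    return (R - P) / (T - S)

# Hypothetical payoffs T=5, R=3, P=1, S=0 give K = (3 - 1) / (5 - 0) = 0.4
print(k_index(T=5, R=3, P=1, S=0))
```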
nor their own vegetables, and the non-weeders get to grow some plants, which is
not optimal but is acceptable.
Ultimatum Game and Dictator Game. There are also PDG variants in which
the nature of choice is altered. Of these, perhaps the two most popular are the
Ultimatum Game and the Dictator Game. In the Ultimatum Game, one person is
allotted an amount of resources and is told to divide the resources between herself
and another person. The division is then presented to the other person, who must
accept or reject it—no negotiation is allowed. If rejected, the resources disappear,
and neither person gets anything. The allocator’s outcomes are thus affected by
the recipient, and each is partially dependent upon the other. For the allocator,
the decision requires determining how much one can safely keep without looking
so unfair that the recipient rejects the division, accepting no payoff in order to
punish the allocator.
A variant of the Ultimatum Game is the Dictator Game. Here, the recipient has
no choice—he must accept whatever the allocator provides. Because the recipient
cannot act, the Dictator Game is not technically a social dilemma, but it is none-
theless useful for thinking about social dilemmas, because the obvious choice—
keep everything, and force the recipient to take nothing—rarely actually occurs.
This allows us to ask questions about the role of variables like fairness and inclu-
siveness in social dilemma behavior.
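A sketch of the two games, with illustrative amounts and a hypothetical acceptance threshold (real recipients, of course, vary in what they will tolerate):

```python
def ultimatum(total, offer, accept_threshold):
    """One round of the Ultimatum Game (all quantities are illustrative units).

    The allocator proposes to keep total - offer and give `offer` away; the
    recipient accepts only if the offer meets a private threshold, and a
    rejection destroys the resources for both."""
    if offer >= accept_threshold:
        return total - offer, offer  # (allocator payoff, recipient payoff)
    return 0, 0

def dictator(total, offer):
    """Dictator Game: the recipient has no veto, so every division stands."""
    return total - offer, offer

# A lopsided split that an offended recipient rejects costs the allocator everything:
print(ultimatum(10, 1, accept_threshold=3))  # -> (0, 0)
# The same split in the Dictator Game cannot be refused:
print(dictator(10, 1))                       # -> (9, 1)
```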
It is perhaps ironic that, despite Tucker’s attempt to give the Prisoner’s Dilemma
some realism, criticism of the PDG quickly centered around its supposed lack
of correspondence with real-world situations. Nemeth (1972) raised the first
substantive criticisms, arguing that few social situations present a person with
just one interaction partner, only two choices, well-defined outcomes, and full
information about the other person’s potential outcomes. This criticism was not
shared by all disciplines—political scientists, for example, tend to believe that
the basic PDG provides a good approximation of arms races between coun-
tries—but within psychology, Nemeth’s critique had an impact. The n-person
Prisoner’s Dilemma (Hamburger, 1973), which expands the number of partici-
pants, helped somewhat to alleviate concerns about artificiality, but the larger
questions about the range and nature of choices one can make remained.
It is important to note that, despite the prevalence of social dilemmas in soci-
ety, studying behavior in actual, real-time social dilemmas is difficult. Often their
scale is just too large for a researcher to manage. Consider, for example, the efforts
required to complete an actual field study, such as van Vugt and colleagues’ (van
Vugt, Van Lange, Meertens, & Joireman, 1996) investigation of usage of carpool
lanes. In order to find out whether drivers would even be willing to consider using
a new carpool lane, the researchers had to, during rush hour, wait and approach
drivers who had stopped at a gas station located on the highway on which the
carpool lane had been installed, and ask whether the driver would be willing to
complete a survey; identify another stretch of a highway that had rush-hour use
comparable to that of the tested highway, but did not have a carpool lane, and
was far enough away from the targeted highway that the likelihood of a driver
regularly using both highways was near zero; travel to that second highway, which
was about 100 miles removed from the targeted highway; set up at a gas station on
this second highway, and approach rush-hour drivers there with the survey; and
then mail off a second survey to all drivers who completed and returned the first
one. All of this was for a study that involved no manipulations introduced by the
researchers, and no long-term monitoring of drivers.
What should be clear is that a response to the artificiality issue that merely
shifts the research venue outside of the lab, to take advantage of real social dilem-
mas, is far more challenging than it might first seem. Because of this, Nemeth’s
challenge inspired theorists to devise some more complex research paradigms
that can be executed in the laboratory. We will now take a look at two such para-
digms—give-some games, and take-some games.
Give-some games can be broken into two types, depending on what is needed
to provide the entity. A step-level public good is one for which a certain mini-
mum total contribution must be received, at which point the entity is provided in
entirety. If the minimum is not reached, the entity does not exist. An example of a
step-level good is a bridge. Consider a pedestrian bridge in a park, with the bridge
being paid for through fundraising. Of course, it can be used by anyone who visits
the park. It does not make sense to build half of a bridge, so if only enough dona-
tions accumulate to pay for half of the bridge, it will not be built.
It is important to note that a step-level public good is technically not a social
dilemma, because if the decider is the final person needed to make a donation, it
is better for him to contribute than not contribute. Imagine that just $100 more
is needed to build the pedestrian bridge. If a citizen is in a position to give $100,
he should do so, because then the bridge will be built; if he does not give, it will
not be built. (Such a person is referred to as a critical contributor.) This violates
the strict tenet of a social dilemma that non-cooperation always produces a bet-
ter personal outcome than cooperation, though it does not prevent the step-level
public goods paradigm from being a popular research tool. In fact, the choices
and outcomes associated with a step-level public good can be represented in a
matrix, much like the Prisoner’s Dilemma. Figure 2.4 presents an example of a
five-person step-level public good, in which three contributors are needed in
order to provide the good:
If we assume that the good is more valuable than the resource (which is a safe
assumption, because presumably people would not pay more than something is
worth to them—no reasonable person, for example, would offer $50.00 for a pack-
age of gum), then we can see the violation of the social dilemma requirement
when there are two other givers: The good is more valuable than the resource, so
at that point it is better personally to give than to keep.
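The critical-contributor logic can be sketched as follows. The endowment and good values here are hypothetical, chosen only to satisfy the assumption that the good is worth more than the resource:

```python
def step_level_payoff(contribute, n_other_contributors, threshold=3,
                      endowment=1.0, good_value=2.0):
    """One player's payoff in a hypothetical five-person step-level public good.

    The good is provided to everyone only if at least `threshold` members
    contribute; contributors give up their endowment either way."""
    contributors = n_other_contributors + (1 if contribute else 0)
    provided = contributors >= threshold
    kept = 0.0 if contribute else endowment
    return kept + (good_value if provided else 0.0)

# The critical-contributor case: with exactly two other givers,
# contributing pays better than keeping, violating the strict dilemma property.
print(step_level_payoff(True, 2))   # -> 2.0 (good provided)
print(step_level_payoff(False, 2))  # -> 1.0 (endowment kept, no good)
```

When three others already give, keeping is better (the free rider receives the endowment plus the good), so only the critical contributor escapes the dilemma.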
The other type of good is a continuous public good. This can be provided in any
amount, depending upon the total amount of contribution. A playground is a type of
continuous good. A small amount of money can build a small playground; as dona-
tions increase, better-quality equipment can be purchased, and the size of the play-
ground can be expanded. Technically, continuous public goods are almost always
a specific form of step-level goods, because there is likely some minimal amount
that has to accrue before anything can be done. For example, before the playground
can be built, we must accumulate enough money to buy one piece of equipment.
However, this minimal amount is usually so low that it will be achieved with trivial
effort. Because of its continuous nature, a continuous public good is not easily rep-
resented by a choice/outcome matrix. When used in research, the investigator will
typically feed back to group members how much total contribution was received, and
what that total amount purchases for them. A commonly-used research paradigm
for studying such goods is the coin exchange paradigm (Van Lange, Klapwijk, & Van
Munster, 2011). In this task, each of two people begins with a number of coins that
have value to the person. Each person is then given the option of giving some num-
ber of their coins (including the entire amount held) to the other person, with each
contributed coin having double value for the other person. For example, each person
might hold 10 coins that each have a worth of 25 cents to him or herself, but 50 cents
to the other person. Exchange decisions are made simultaneously, so that one person
cannot simply react to the allocation made by the other person.
It is not hard to see how the coin exchange paradigm parallels the PDG. If each
person gives all coins to the other, each will end up with a payoff that is double
what would have been received if each had kept all of their coins. In our example,
each person would earn $2.50 from keeping all coins, and $5.00 from exchanging
all coins. However, the best personal payoff is realized by keeping all coins and
having the other person give all coins (in our example, $2.50 + $5.00 = $7.50), and
the worst (0) occurs when a person gives all coins and receives none. There is thus
an incentive to keep one’s coins.
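Using the example values from the text (10 coins each, worth $0.25 to the holder and $0.50 to the other person), the payoffs can be computed directly:

```python
def coin_exchange(given_by_a, given_by_b, n_coins=10,
                  value_to_self=0.25, value_to_other=0.50):
    """Payoffs (in dollars) in the coin exchange paradigm, using the example
    values from the text. Returns (payoff_a, payoff_b)."""
    payoff_a = (n_coins - given_by_a) * value_to_self + given_by_b * value_to_other
    payoff_b = (n_coins - given_by_b) * value_to_self + given_by_a * value_to_other
    return payoff_a, payoff_b

print(coin_exchange(0, 0))    # both keep everything   -> (2.5, 2.5)
print(coin_exchange(10, 10))  # both give everything   -> (5.0, 5.0)
print(coin_exchange(0, 10))   # A keeps while B gives  -> (7.5, 0.0)
```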
■ TAKE-SOME GAMES AND RESOURCE DILEMMAS
The other major variant of the Prisoner’s Dilemma is the take-some game.
Under this paradigm, people begin with access to a resource pool of limited
size. Each group member can sample from the resource up to some limit. If
the total of all requests is less than total pool size, each person is granted his
or her request, but if total requests exceed pool size, no one receives anything.
Often the choice is iterated; if this is the case, then after all requests have been
granted, the pool is partially replenished at some rate (e.g., 20% of the remain-
ing pool size) before the next round of choice begins. In the iterated case, the
trials will typically continue until either a stopping point is reached, or the
pool has been exhausted. As well, it is common with this paradigm to withhold
some information about the situation—the current size of the pool, the specific
requests of others, the replenishment rate, and/or the amount replenished are
all often omitted from the feedback given to group members. These omissions
are designed to enhance the fidelity of the paradigm to real resource manage-
ment problems. Consider, for example, a water table. This resource paradigm
well matches what water users do—each household has a limit to how much
water can be drawn at once; rain and snow partially replace the drawn water;
the table can go dry—and it is rare for water users to know, or even approxi-
mate, how large the table is at any given moment, how much rain and snow
flow back into the table, or how much water each other household is drawing.
Let us demonstrate how an experimental resource dilemma works. Imagine
that five people have access to a resource that begins with 500 units. Each person
can take up to 20 units per turn. After everyone has sampled from the resource,
10% of the remaining units are added back in, and the sampling/replenishment
process repeats. Assume that on the first turn, the five people take 20, 20, 15, 8,
and 7 units respectively, for a total harvest of 70 units. Thus, after everyone has
sampled, the resource has
500 – 70 = 430
units remaining. We now need to add 10% of the remaining pool size, or 43
units, back into the pool:
430 + 43 = 473
So the next round will begin with 473 units available. Let us assume that dur-
ing the second round, the five group members take 20, 20, 18, 15, and 11 units,
for a total harvest of 84 units. This reduces the resource to
473 – 84 = 389
units remaining. We add 10% of 389, or 39, units back into the pool:
389 + 39 = 428
The third round thus begins with 428 units. This process continues until either
the time limit for the experimental session is reached, or the pool gets so low
that it is impossible to fill all potential harvests from group members. In this case,
since we have five people who can each take up to 20 units, we need at least 100
units in the pool. Less than this, and it is possible that someone will not be able
to receive all that she requests. Should that happen, the session ends immediately.
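The bookkeeping in this example is easy to mechanize. The sketch below reproduces the two rounds just described, rounding replenishment to whole units as the text does:

```python
def run_resource_dilemma(initial_pool, rounds, replenish_rate=0.10,
                         max_request=20):
    """Simulate the replenishable-resource example from the text.

    Each round, all requests are harvested, then 10% of the remainder is
    added back (rounded to whole units). The session ends once the pool can
    no longer cover every possible request (here, 5 people x 20 units = 100)."""
    pool = initial_pool
    pool_history = []
    for requests in rounds:
        pool -= sum(requests)                 # everyone harvests
        pool += round(pool * replenish_rate)  # partial replenishment
        pool_history.append(pool)
        if pool < max_request * len(requests):
            break                             # pool can no longer fill all requests
    return pool_history

# The two rounds worked through in the text:
print(run_resource_dilemma(500, [(20, 20, 15, 8, 7), (20, 20, 18, 15, 11)]))
# -> [473, 428]
```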
Development of the take-some game was largely inspired by Hardin’s (1968) work
on the Tragedy of the Commons. Hardin described a field available to multiple
farmers in which cows are allowed to graze. It is in each farmer’s best interests to
put all of his cows into the field, but if all farmers do so, the grass will be eaten
so quickly that the field will cease to be useful as a feeding spot, and in the long
run, they will all be worse off, because there will be no nearby place to graze
their animals. The best long-term strategy is for each farmer to put just enough
cows in the field so that the grass grows back in one part of the field at the same
rate that it is being consumed in another part of the field. This means that each
farmer must also find a less convenient place to feed his remaining cows, but this
is the price that must be paid to keep the field useful for the long run. Hardin’s
“tragedy” was that the farmers would not recognize this long-term strategy, and
would instead destroy the field by pursuing the immediate gain.
A modern-day analogy to Hardin’s story is treatment of the Brazilian rain for-
ests. Large swaths of the forest have been cleared to make room for agriculture,
but because the land has supported trees rather than crops, it has only short-term
farming value, because the nutrients are used up quickly. For this reason, new
tracts of land must be continually cleared. If the rain forest area is indeed needed
for agriculture, the best long-term solution is to clear only a small patch of land,
farm it as long as possible and grow the rest of the crops elsewhere, and let the
rest of the land stay forested. When the cleared land exhausts its usefulness, a new
small patch can be cleared, and the old patch will be reforested by neighboring
trees. By the time the farmers need to reuse the first patch of land, the new trees
will have returned nutrients to the soil, and the land will again be crop-friendly.
30 ■ Introduction to Social Dilemmas
However, the temptation exists to just deforest a huge area and plant all crops at
once. This is easier than farming two locations, but once the large cleared area
is used up, there are no nearby trees to reforest it, and in the long run the land
becomes useless.
As with Olson’s public goods problem, there are parallels between the resource
dilemma and Prisoner’s Dilemma, in that trying to achieve the personal best out-
come leads to poor outcomes over the long term. It differs from the public good in
that there is immediate gain: At least in the early life of the resource, each person
gets what he or she wants. This difference is what distinguishes a social trap from a
social fence (Cross & Guyer, 1980). In a trap, there is immediate gain and long-term
loss, and in a fence, there is immediate loss and long-term gain. A take-some
dilemma is a trap, and a give-some dilemma is a fence. This distinction is impor-
tant because, though structurally the take-some and give-some games are similar,
it suggests that there should be perceptual and psychological differences between
the two situations. It is for this reason that social dilemma researchers treat the two
paradigms separately, and need to test whether a phenomenon that occurs under
one type of dilemma occurs under the other. We would never simply assume that
a behavior or perception that occurs with one type of dilemma necessarily occurs
under the other (for an excellent illustration, see Van Dijk & Wilke, 2000).
■ STATIC VERSUS DYNAMIC PARADIGMS
Several researchers (e.g., Komorita & Parks, 1994, 1995; Messick & Liebrand, 1995; see also Kenrick,
Li, & Butner, 2003) have called for more careful study of the process by which peo-
ple alter their decisions as the dilemma progresses. There are two primary challenges
to conducting such studies. First, the choice revision process may unfold
lenges to conducting such studies. First, the choice revision process may unfold
over a longer period than can be captured in a typical one- or two-hour labora-
tory research session. In response to this, computer simulation using agent-based
modeling has become an increasingly popular tool among social dilemma theo-
rists (e.g., Messick & Liebrand, 1995). Basically, agent-based modeling attempts
to estimate the actions of each of a large number of interdependent individuals,
with the assumptions that each person is adaptive, can reflect on past experi-
ences, has the ability to render a choice without interference from others, and
favors simple rules for governing choice. Each of these factors can be captured in
a probability-based algorithm, and once programmed, these algorithms can then
be run, and the patterns of estimated choices, often over a very long series of trials,
are output and analyzed. (See Macy & Willer, 2002, for a complete discussion of
execution of agent-based modeling simulations.) While one can quarrel with some
of the underlying assumptions—one can imagine, for example, how certain people
might be inflexible rather than adaptive, preferring to settle on one choice strat-
egy and apply it without exception, or that particular people might have complex,
even convoluted, rules for deciding what to do—it is still the case that agent-based
modeling can provide baseline estimates of what could happen in long-term situa-
tions. And as with any computer simulation, it becomes important to collect actual
data, contrast the results against the simulation results, and then address the ques-
tion of why there are deviations between the modeled and actual results. Given
the logistic challenges of getting real data from a large group over a long stretch
of time, agent-based modeling at present represents our strongest tool for at least
formulating some ideas of what happens in such situations.
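As a hedged illustration of the general approach (our own minimal sketch, not a reproduction of any published model), the following simulation gives each of 100 agents a simple adaptive rule: play a Prisoner's Dilemma with a random partner and adopt the partner's last move whenever the partner earned more. The payoff values and the update rule are illustrative assumptions.

```python
import random

# Conventional Prisoner's Dilemma payoffs (row player, column player).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def simulate(n_agents=100, n_rounds=500, seed=1):
    rng = random.Random(seed)
    moves = [rng.choice('CD') for _ in range(n_agents)]
    history = []  # fraction of cooperators after each round
    for _ in range(n_rounds):
        order = list(range(n_agents))
        rng.shuffle(order)
        for i, j in zip(order[::2], order[1::2]):
            pay_i, pay_j = PAYOFF[(moves[i], moves[j])]
            if pay_j > pay_i:        # partner did better: imitate them
                moves[i] = moves[j]
            elif pay_i > pay_j:
                moves[j] = moves[i]
        history.append(moves.count('C') / n_agents)
    return history

trace = simulate()
```

Under this particular imitation rule defection can spread but cooperation cannot, so the simulated cooperation rate only declines over the run; comparing such baseline patterns against real data is exactly the exercise described above.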
Besides the logistic issue, an historic challenge to dynamic research has been
the difficulty of statistically treating repeated-choice data, and modeling how
influences on choice wax and wane as the dilemma moves forward. Recent devel-
opments in structural equation modeling (SEM) have the potential to make this
challenge surmountable. Basically, SEM is a quantitative method for combining
data from many variables into a single system, producing a set of path coefficients
that estimate the strength of impact of some input and mediating variables on one
or more outcome variables. The focus is on the nature of covariation between pairs
of variables in the system. The goal is not to derive a causal structure, but rather
a web of relationships that can be interpreted. While most commonly applied to
non-experimental data, there is no reason why SEM cannot also be used on data
sets that result from manipulated variables, and through use of a variant of SEM
called growth curve modeling, repetition of choice can be included in the model.
Application of SEM to social dilemma data thus allows simultaneous consideration
of a number of influences on choice, and can describe how the nature of choice
alters over time. It is important to note that proper application of SEM requires
a substantial sample size. SEM theorists generally suggest that there should be at
least 20 cases per parameter being estimated (Jackson, 2003). As a typical struc-
tural model can easily have upwards of 20 parameters to estimate (and in fact, a
20-parameter model would be a relatively simple one), the researcher may need
many hundreds of cases to derive stable estimates, and some theorists (e.g., Barrett,
2007) have argued that any model with fewer than 200 cases should automatically be
rejected, unless it has been executed on a special, restricted sample for which large
numbers of cases are just inaccessible (e.g., schizophrenics). This sample size issue
should give social dilemma researchers pause before they wantonly begin to use
SEM on their data sets (and such misapplication is a real and growing problem—
see, e.g., Shah and Goldstein, 2006, for a recent demonstration of this trend), but
it should not be a barrier. Execution of a study that is designed with these caveats
in mind can produce a model that estimates the relative impact of a good number
of variables on social dilemma choice, and captures at least some of the dynamic
nature to boot. (Readers interested in a complete treatment of SEM should see
Kline, 2011.)
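The two rules of thumb above (20 cases per estimated parameter, and a 200-case floor) reduce to a one-line calculation; the function name is ours.

```python
def minimum_sample_size(n_parameters, cases_per_parameter=20, floor=200):
    """Rule-of-thumb minimum N for an SEM model: at least 20 cases per
    estimated parameter (Jackson, 2003), but never fewer than 200
    cases overall (Barrett, 2007)."""
    return max(n_parameters * cases_per_parameter, floor)

minimum_sample_size(20)  # a "relatively simple" 20-parameter model: 400
minimum_sample_size(5)   # a tiny model: Barrett's 200-case floor applies
```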
■ REAL VERSUS INTANGIBLE OUTCOMES
Subjects in social dilemma studies are typically shown an outcome matrix that
reflects the number of points associated with each particular combination of
choices. In some studies, this is all they play for—the ability to walk away with
the knowledge that they accumulated a satisfactory (or dissatisfactory) amount
of points. In other studies, these points get converted to something tangi-
ble: Sometimes money, sometimes lottery tickets for a prize, sometimes a gift
certificate. Does it matter whether one uses tangible or intangible outcomes?
Perhaps not surprisingly, findings are all over the place regarding this question,
and have been for almost as long as social dilemma research has been conducted.
For this reason, incentives are a methodological issue that has never ceased to
be of concern. Gumpert, Deutsch, and Epstein (1969) found players to be more
competitive when money was at stake than when it was not, but the actual amount
of money was irrelevant—playing for any amount, no matter how small, was suffi-
cient to induce competition. Knox and Douglas (1971), however, found that while
the average rate of cooperation does not differ across magnitude of incentive, the
variance does, with variance increasing as size of incentive increases. This pattern
was replicated by Shaw (1976). From a statistical perspective, this makes it difficult
to accurately compare small-incentive and large-incentive studies. Complicating
things even further, Gallo and Sheposh (1971) could not replicate the “real money”
effect, finding no differences in cooperation between those playing for money
History, Methods, and Paradigms ■ 33
versus mere points; Stern (1976) found some evidence to suggest that intangible
incentives could be more influential on cooperation than tangible incentives; and
Clark and Sefton (2001) found financial incentive to be less influential on coopera-
tion than the opponent’s first choice. There is also a new line of research in which
participants can actually reward or punish each other in public goods dilemmas,
and this research shows that actual rewards and punishment with real monetary
consequences tend to be somewhat more effective in promoting cooperation than
hypothetical rewards and punishment (for a meta-analysis, see Balliet, Mulder, &
Van Lange, 2011).
All of this has led to a kind of stasis whereby those who do pay subjects think
that only this type of research is interpretable; conversely, those who do not pay
might think that it is wasteful to do so or that it induces a financial frame that is
not always present in social dilemmas in everyday life, and that might prime
participants with a particular mindset. These views closely parallel a dialogue
between Frank, Gilovich, and Regan (1993, 1996) and Yezer, Goldfarb, and
Poppen (1996) regarding whether studying economics makes people less
cooperative. The contention (Frank et al.) is that economics teaches the self-interest
perspective and so does inhibit cooperation; the response (Yezer et al.) is that eco-
nomics also teaches the possibility and value of mutually beneficial action, and so
does not inhibit cooperation. It is possible that those who believe that people are
taught to be self-interested would plan to convert points to dollars, and those who
believe people are taught the usefulness of mutual benefit would not execute such
a conversion. So practically, there is probably no right answer regarding whether
study subjects need to have their choices connected to real money. The researcher
should simply make an informed choice, and be prepared to defend that choice.
■ THE ENDGAME
In many social dilemma studies, people make choices over multiple trials, because
the researcher is interested in observing the evolution of behavior over time. In
such studies, the question arises of whether subjects should be told how many tri-
als will occur. Such knowledge can induce endgame effects whereby cooperation
decreases dramatically on the final trial, because the person knows there will be no
retribution for such a choice (Rapoport & Dale, 1967). Given this, one can ask why
the researcher would ever tell subjects that “there will be x number of trials.”
However, Selten and Stoecker (1986) have argued that revealing versus conceal-
ing the stopping point of the game is a choice to be made theoretically. Specifically,
revealing the endpoint simulates a situation in which the person interacts with
different people trial by trial, but knows that there are a finite number of available
partners. An example would be a classroom in which a student must pair up with
a classmate every time an assignment is given, and will be paired with a different
classmate for each assignment. The student immediately knows how many assign-
ments will be given (it is equal to the number of classmates she has), but does not
know which classmate will be her partner for any given assignment. Free-riding on
the partner’s efforts early on could earn the student a reputation as a bad partner,
because past partners will pair up with future partners and may spread the word.
■ DECEPTION
The final issue that one needs to consider when executing social dilemma research
is whether to have subjects interact with actual other subjects in real time, or
against a programmed strategy, with an inference that the opponent is a real per-
son. Arguments can be made for both approaches. Intact groups simulate real-life
situations but may result in many groups not producing the effect of interest—
one cannot study reciprocation if no one in the group reciprocates—thus wasting
subject hours. A concocted group guarantees that everyone has the same experi-
ence, but comes with a price of deceiving the subjects. What does one do?
The use of deception is a provocative issue. Within psychology alone, one can
find vehement arguments against it ever being used (e.g., Ortmann & Hertwig,
1997) and equally strong arguments for its occasional use (e.g., Kimmel, 1997).
Some researchers report evidence that people do not mind being deceived, so
long as the deception is mild; indeed, they may actually enjoy the experience
(Christensen, 1988); and others report that encountering a negative stimulus is
more aversive than being deceived (Epley & Huff, 1998). Also important, some
questions—such as the effects of careful manipulations of the feelings (empathy)
or behaviors or strategy (such as the so-called Tit-for-Tat strategy, see Chapter 4)
of the other person (Batson, 2011) are very hard to study without any form of
deception.
However, vivid examples of subjects inferring that an accidental occurrence in
the laboratory was actually part of the study have been reported—MacCoun and
Kerr (1987), for example, reported that a subject who experienced an epileptic
seizure during a session was thought by the other subjects to be involved in the
study—as a confederate of the experiment. Such evidence supports an argument
many experimental economists regard as important. They argue against any form
of deception because it undermines the trust that participants should have in the
integrity and honesty of experimental procedures, thereby undermining the gen-
eral validity of experiments.
The question of whether it is appropriate and/or desirable to use deception is,
then, a complicated one that has no easy answer. It is beyond the scope of this book
to try to sort through all of these complications—the reader who would like to
see this done is referred to the outstanding chapter by Kimmel (2006). We merely
point out that there is no easy answer for the social dilemma researcher who is
trying to decide what to do.
We might find some help in some recent developments in the health sciences.
There, ethicists have attempted to find a middle ground between deception and
full information. In this literature, it is acknowledged that deception is sometimes
necessary. For example, one cannot study placebo effects without convincing peo-
ple that they are taking an active pharmaceutical when they really are not (Miller
& Kaptchuk, 2008). However, it is also acknowledged that deception makes it at
best very difficult for people to freely consent to what they are about to experi-
ence (Wendler & Miller, 2004). As such, ethicists have proposed two strategies
for employing deception, yet giving subjects the ability to make informed choices
(Miller & Kaptchuk, 2008). At the start of the session, people could be told that the
study contains some deception, and be asked to sign an authorization form that
states the person is aware of and accepts the use of deception. If the person does not
care to be deceived, he can withdraw from the study without penalty. Alternatively,
during debriefing, when the person is informed that deception was used, the per-
son can be given the option to withdraw his data, with any rewards promised to the
person being delivered anyway. Whether either of these procedures drives a certain
kind of person away from the study, and hence skews the data, is unknown. They
may, however, represent a way for most social dilemma researchers to be accepting
of deception, at least among those who believe it is sometimes called for.
A second critical point to be gleaned from our historical review is that there
has never been agreement on whether humans are naturally inclined toward coop-
eration or selfishness. It is thus unsurprising that theorists continue to debate the
issue today. As part of this review, it was our goal to clarify what we see as some
modern misconceptions about some of these historical writings: that Adam Smith
believed people will always (and should) be self-interested; that people are thought
to perform pain/pleasure analyses before every action; that those who subscribe to
the Hobbesian point of view believe that humans are incapable of being generous
on their own.
Our discussion of the research paradigms revolves around the Prisoner’s
Dilemma, give-some game, and take-some game, and some of their major vari-
ants. We also raise the issue of static versus dynamic games, with the former
assuming that people have a set choice strategy that they employ to make their
cooperation decisions, and the latter relaxing this assumption, thus allowing for
the possibility that people will alter their choice strategy as the dilemma pro-
gresses. We noted that little research exists on dynamic choice in social dilemmas,
a knowledge gap that needs filling.
Finally, we reviewed some of the major practical considerations a social
dilemma researcher needs to make: what type of incentive structure to use;
whether to inform subjects of when the game will end; and whether or not to use
real or simulated opponents. We saw that none of these issues has an easy answer,
and the choices the researcher makes will be influenced by theoretical consider-
ations regarding what type of situation the researcher is trying to understand.
The take-home message from all of this is that constructing a proper social
dilemma study is challenging, in terms of both how the study is executed, and
the basic assumptions one makes about why humans do what they do. This is an
important point to keep in mind when reading the following chapters.
■ PART TWO
■ EVOLUTIONARY PERSPECTIVES
Although various forms of non-cooperative behavior may catch the eye, we also
see that people often engage in remarkable forms of prosocial behavior. We
make substantial personal sacrifices to help our kin and support our friends,
rescue complete strangers in bystander emergencies, make anonymous financial
donations to charities, and contribute to large scale public goods such as edu-
cation, religion, and environmental sustainability. From an evolutionary per-
spective, human cooperation is an enigma because, over time, natural selection
should ruthlessly winnow out any traits that reduce an individual’s fitness.
Fitness is defined in terms of an individual’s reproductive success, and it
includes both someone’s direct fitness (numbers of offspring carrying copies of the
same genes) and indirect fitness (other kin carrying copies of the same genes)—
together referred to as inclusive fitness (Hamilton, 1964). Thus, for any costly
behavior to evolve and persist, there need to be some corresponding ultimate
benefits in terms of spreading the actor's genes. A first glance at natural selection
would suggest that cooperation in social dilemmas should have been selected
against, because it leads individuals to perform behaviors that may be individually
costly. Nevertheless, cooperation is ubiquitous in both human and nonhuman
social interactions. Why? This is the question that this chapter addresses.
In this chapter, we outline a number of important evolutionary explanations for
human cooperation that have been suggested in the literature: (1) kin selection,
(2) direct reciprocity, (3) indirect reciprocity, and (4) costly signaling. These expla-
nations suggest that cooperation has evolved through natural selection. We also
briefly discuss other perspectives such as multilevel selection which suggests that
human cooperation may be adaptive at the group level, as well as mismatch and
cultural group selection theories which suggest that human cooperation is not an
adaptation per se but a byproduct of other adaptations. Finally, we briefly discuss
some emerging questions in the evolutionary literature on cooperation, and pose
some challenges for further research.
We begin by discussing some definitional issues. Evolutionary biologists
define cooperation as any action which is intended to benefit others, regardless
of whether the actor also benefits in the process. This captures a wide variety of
This chapter was written by Pat Barclay and Mark Van Vugt and is partly based on Barclay, P., &
Van Vugt, M. (in press). The evolutionary psychology of human prosociality: adaptations, mistakes,
and byproducts. To appear in D. Schroeder & W. Graziano (Eds.), Oxford Handbook of Prosocial
Behavior. Oxford, UK: Oxford University Press.
40 ■ Perspectives to Social Dilemmas
Researchers can debate, for example, whether the proximate mechanism
that triggers human altruism and cooperation is empathy versus "oneness with
others" (the debate between Batson et al., 1997, and Cialdini, Brown, Lewis,
Luce, & Neuberg, 1997). They can also debate development by asking whether
empathic cooperation is innate versus culturally learned. Much of the controversy
over evolutionary explanations of human cooperation is because researchers mix
up these levels of analysis, for example, assuming that people are consciously con-
cerned with receiving benefits for helping or that all altruistic acts are selfishly
motivated (Barclay, 2011; Van Vugt & Van Lange, 2006; West et al., 2011).
■ EVOLUTIONARY PERSPECTIVES
ON HUMAN COOPERATION
■ KIN SELECTION
The first major theory for understanding the evolutionary origins of human
cooperation is kin selection. The vast majority of the more costly forms of
cooperation in both humans and non-humans are directed towards kin. Why
is this, and how could kinship helping evolve? Imagine that you are a gene
trying to propagate copies of yourself. The great evolutionary theorist William
(Bill) Hamilton (1964) noted that there are at least two ways you can do this:
increasing the reproduction of your current body (direct fitness), or increasing
the reproduction of other bodies that carry a copy of yourself (indirect fitness).
Inclusive fitness is the sum of these effects—direct fitness plus indirect fitness—
and is what organisms have evolved to maximize. For any given gene, close kin
are statistically likely to carry identical copies. Any gene that causes an individ-
ual to help close kin will often cause help to be targeted towards copies of itself.
Thus, we would predict that psychological mechanisms that cause nepotism will
evolve in many species, and that this nepotism should depend in part on the
closeness of kinship. This prediction has been abundantly confirmed in many
species across many thousands of studies (for a review, see Alcock, 1993). It has
even been found in plants (e.g. Dudley & File, 2007), suggesting that inclusive
fitness is a powerful idea that applies across all of life.
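Hamilton's (1964) logic is commonly summarized by the rule that helping is favored when rb > c, where r is genetic relatedness, b the fitness benefit to the recipient, and c the cost to the helper. A minimal sketch, with illustrative numbers of our own choosing:

```python
def helping_favored(r, b, c):
    """Hamilton's rule: a gene for helping spreads when relatedness
    times the recipient's benefit exceeds the helper's cost."""
    return r * b > c

helping_favored(0.5, 3.0, 1.0)    # full sibling, r = 0.5: favored
helping_favored(0.125, 3.0, 1.0)  # first cousin, r = 0.125: not favored
```

The rule captures the prediction in the text that nepotism should scale with closeness of kinship: holding benefit and cost constant, help that pays off toward a sibling may not pay off toward a cousin.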
Regarding humans, much research has shown that—all else being equal—peo-
ple are nicer to kin than non-kin: they are more likely to help kin, less likely to harm
kin, and more willing to tolerate injustices from kin (e.g., Burnstein, Crandall, &
Kitayama, 1994; Krupp, DeBruine, & Barclay, 2008; Park, Schaller & Van Vugt,
2008). In one set of studies, Burnstein et al. (1994) tested several inclusive fit-
ness hypotheses by giving respondents hypothetical decisions to help others. They
distinguished between helping in life and death decisions—whereby people could
save only one person from a burning house while the others would perish—versus
more everyday forms of helping, such as shopping for someone's groceries. The
targets of helping varied in terms of their degree of kinship, age, sex, and health.
Consistent with the evolutionary hypotheses, people felt closer to immediate kin
(siblings) than to more distant kin (cousins). Furthermore, they were more likely
to aid close kin over distant kin, especially in life-and-death situations, whereas
when it was a matter of an everyday favor they gave less weight to kinship.
Research suggests that even when people are in competition with others, they
will compete less sharply with kin than with non-kin (Daly & Wilson, 1988;
Gardner & West, 2004). Kinship is a major form of grouping in many pre-industrial
societies, and appears to be a major factor affecting who shares food with whom
in many societies (Gurven, 2004). In fact, the most persistent, long-term, self-
less and unreciprocated act that we see people perform—namely parental care—is
actually just a special case of kinship, because offspring carry copies of parental
genes (Dawkins, 1976/2006). Natural selection has crafted a human kinship psy-
chology that includes such powerful sentiments as parental love, mother-infant
attachment, brother and sister solidarity, and other such nepotistic tendencies.
These emotions are the proximate psychological mechanisms that function to pro-
mote helping towards kin. All told, kinship appears to be one of the most powerful
causes of cooperation for most humans on the planet (Park et al., 2008).
■ DIRECT RECIPROCITY
People may also direct help toward non-kin who have provided help in the past. This is the basis of reciprocity (Axelrod, 1984; Trivers,
1971; Van Vugt & Van Lange, 2006). In this way, the helpers tend to receive help
and the non-helpers tend not to receive any help. Indeed, people often get involved
in exchange relationships in which they take turns helping each other. In our meat
example, two hunters might share with each other as long as each of them has
given in the past. We have popular expressions such as “you scratch my back and
I’ll scratch yours,” which carry the implicit condition that “I will not scratch your
back unless you scratch mine.”
Is direct reciprocity beneficial for the actor? In a classic study, political scientist
Robert Axelrod organized a tournament in which he pitted different strategies in
computer simulations of the Prisoner’s Dilemma Game against each other, and
the winning strategy was Tit-for-Tat (Axelrod, 1984). Tit-for-Tat is a simple strat-
egy of initially cooperating with one’s partner, and thereafter imitating the part-
ner’s action on the previous interaction. Tit-for-Tat is a remarkably good strategy
because it pairs cooperators with other cooperators and does not get “suckered”
for long by those who do not cooperate. As such, it tends to do better than many
other strategies (Axelrod, 1984; Boyd & Lorberbaum, 1987; Dawkins, 1976/2006).
Yet scientists have found various conditions that limit the effectiveness of
Tit-for-Tat. Tit-for-Tat only works if the “shadow of the future” is long enough
such that the future benefits of one’s partner’s reciprocation will outweigh the cost
of immediate helping. Tit-for-Tat requires enough other reciprocators around
to make it worth initiating a reciprocal relationship. Under some conditions,
Tit-for-Tat can be beaten by more forgiving strategies that overlook acciden-
tal failures to cooperate, or by strategies that exploit unconditional cooperators
(Brembs, 1996; Klapwijk & Van Lange, 2009; Nowak & Sigmund, 1992). Although
Tit-for-Tat is not always the best reciprocal strategy to follow, the net sum of years
of theory is that some willingness to reciprocally help can be a highly successful
strategy in solving social dilemmas.
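Tit-for-Tat is simple enough to state in a few lines. The sketch below uses the conventional Prisoner's Dilemma payoffs (T = 5, R = 3, P = 1, S = 0); the function names are ours.

```python
# Conventional Prisoner's Dilemma payoffs (row player, column player).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then copy the partner's last move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return the two total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
        score_a += pay_a
        score_b += pay_b
    return score_a, score_b

play(tit_for_tat, tit_for_tat)    # mutual cooperation: (30, 30)
play(tit_for_tat, always_defect)  # "suckered" only once: (9, 14)
```

Against another Tit-for-Tat player, mutual cooperation yields 30 points each over ten rounds; against an unconditional defector, Tit-for-Tat loses only on the first round and defends itself thereafter.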
One of the factors that might favor a more generous strategy than strict
Tit-for-Tat is the occurrence of noise (Axelrod, 1984; Kollock, 1998). For exam-
ple, Klapwijk and Van Lange (2009; see also Van Lange, Ouwerkerk, & Tazelaar,
2002) showed that in noisy dyadic interactions—when the partner sometimes
behaves more or less cooperatively than intended—it pays to be slightly more
generous than Tit-for-Tat. Using a parcel-delivery paradigm as a social dilemma,
in which individuals' payoffs were determined by the speed with which a parcel
was delivered through a city (with roadblocks to induce noise), they found
that under noise Tit-for-Tat diminished trust and cooperation, and that a more
generous strategy was more effective in maintaining cooperation.
Most of the research on direct reciprocity uses the iterated Prisoner’s Dilemma,
which is a two-person game in which each player has the binary choice each round
of either “cooperating” or “defecting.” Recent work has allowed people to use
graded levels of cooperation instead of a binary choice. Mathematical models have
shown that the best strategy in such situations is “Raise-the-Stakes,” which means
starting out moderately cooperative and getting increasingly cooperative when
one’s partner reciprocates (Roberts & Sherratt, 1998; Sherratt & Roberts, 1999).
This accurately models what people actually do in experimental games (Roberts &
Renwick, 2003; Van den Bergh & Dewitte, 2006), especially with strangers with
whom they have no trusting relationship (Klapwijk & Van Lange, 2009; Majolo
et al., 2006).
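A Raise-the-Stakes-style rule can be sketched as follows. The step sizes, ceiling, and matching rule are illustrative assumptions of ours, not the exact model of Roberts and Sherratt (1998).

```python
def raise_the_stakes(my_last, their_last, start=1, step=1, ceiling=10):
    """Choose the next investment level: begin moderately, escalate
    when the partner matched or exceeded the previous investment,
    otherwise drop down to the partner's level."""
    if my_last is None:                # first move: invest moderately
        return start
    if their_last >= my_last:          # partner reciprocated: escalate
        return min(my_last + step, ceiling)
    return their_last                  # partner gave less: match them

# Against a perfectly reciprocating partner, investment climbs steadily.
moves = []
mine = theirs = None
for _ in range(5):
    mine = raise_the_stakes(mine, theirs)
    theirs = mine                      # partner matches every move
    moves.append(mine)
# moves is now [1, 2, 3, 4, 5]
```

The rule risks little on the first move, so a defector can exploit only a small initial investment, while a reciprocator earns escalating mutual cooperation.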
Contrary to popular belief, the existence of direct reciprocity does not require
complex calculations or strict bookkeeping among egoistic individuals. Reciprocity
explains why people are capable of possessing genuine warmth, love, and sympa-
thy toward others such as in friendships or romantic relationships: “If I genuinely
value your welfare, it will cause me to help you, which can cause you to genuinely
care about me and help me when I need it, which causes me to value your welfare
more,” and so on. In this example, empathy and feelings of warmth toward one’s
friend or romantic partner are the proximate psychological mechanisms that are
shaped by an evolved psychology based on direct reciprocity (Barclay & Van Vugt,
2012; de Waal & Suchak, 2010; Neyer & Lange, 2003).
What direct reciprocity does require are the cognitive abilities to detect when
others might fail to reciprocate (cheater detection: Cosmides, Barrett, & Tooby,
2010), remember who has and has not reciprocated (Barclay, 2008; Mealey, 1995),
trust that others will stick around long enough to return the favor (Van Vugt &
Hart, 2004; Van Vugt & Van Lange, 2006), and delay gratification in order to reap
the long-term gains of reciprocation (Harris & Madden, 2002; Stevens & Hauser,
2004). This might explain why reciprocity is relatively common in humans, but
relatively rare in the other primates that generally lack these more advanced men-
tal faculties.
■ INDIRECT RECIPROCITY
People do not only help their kin, partners, or friends. Human cooperation
appears to be much broader than that. People regularly help those who will not
have the opportunity to directly reciprocate. Take the hunting example again, and
imagine one hunter who is known to regularly share with others, and a second
hunter who is known for stinginess. When the generous hunter gets sick and
is unable to hunt for himself, others are likely to give him meat, whereas the
stingy hunter is much less likely to receive meat when sick (Gurven, Allen-Arave,
Hill, & Hurtado, 2000). This is an example of what is often referred to as indirect
reciprocity, which is when cooperative acts are reciprocated by someone other
than the recipient (Alexander, 1987; Nowak & Sigmund, 2005). According to
indirect reciprocity theory, people acquire a good reputation when they help oth-
ers, and this makes them more likely to receive help when they themselves need
it. People who refuse to help good people get a bad reputation, which reduces
their likelihood of receiving help themselves.
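The reputational logic of indirect reciprocity can be sketched as a small image-scoring simulation, in the spirit of the models reviewed by Nowak and Sigmund (2005). All parameters, strategy labels, and score bounds below are illustrative assumptions, not values from the cited studies:

```python
import random

random.seed(1)

BENEFIT, COST = 2.0, 1.0        # helping costs 1, the recipient gains 2
N_ROUNDS = 4000

class Agent:
    def __init__(self, strategy):
        self.strategy = strategy  # "discriminator", "always", or "never"
        self.image = 0            # public image score, visible to everyone
        self.payoff = 0.0

    def helps(self, recipient):
        if self.strategy == "always":
            return True
        if self.strategy == "never":
            return False
        # Discriminators help only recipients in good standing.
        return recipient.image >= 0

agents = ([Agent("discriminator") for _ in range(20)] +
          [Agent("always") for _ in range(10)] +
          [Agent("never") for _ in range(10)])

for _ in range(N_ROUNDS):
    donor, recipient = random.sample(agents, 2)
    if donor.helps(recipient):
        donor.payoff -= COST
        recipient.payoff += BENEFIT
        donor.image = min(donor.image + 1, 5)    # giving builds reputation
    else:
        donor.image = max(donor.image - 1, -5)   # refusing erodes it

def group_mean(strategy, attr):
    group = [a for a in agents if a.strategy == strategy]
    return sum(getattr(a, attr) for a in group) / len(group)
```

Under these assumptions, unconditional defectors quickly acquire a bad image, so discriminators withhold help from them; help flows disproportionately to agents with a good reputation, much as the generous hunter still receives meat when sick.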
As an empirical illustration of indirect reciprocity, Wedekind and Milinski
(2000) had participants play a public good game in which they could give money
to other participants and could gain a reputation for giving or refusing. The
experimenters ensured that there was no possibility of direct reciprocation from
the recipient because participants would never be paired with the same person
again. Despite this, participants tended to give to others who had given in the
past, such that people with a good reputation were more likely to receive help.
Evolutionary Perspectives ■ 45
This result has been replicated in several other similar experiments conducted in
various labs in behavioral economics and social psychology (Hardy & Van Vugt,
2006; Milinski, Semmann, Bakker, & Krambeck, 2001; Seinen & Schram, 2006;
Semmann, Krambeck, & Milinski, 2004; Van Vugt & Hardy, 2010).
For instance, Hardy and Van Vugt (2006) showed that cooperators in a public good game receive greater status from their peers and are more likely to
be selected as group leaders. They had participants play a public good game in
randomly assigned three-player groups. In one condition, the individual contri-
butions per round were anonymous and in another they were public. After each
round the members of each group were asked who they preferred as their group
leader for a subsequent round. They were also asked which group member they
most admired and respected. As expected, cooperators received higher status rat-
ings and were most likely to be chosen as group leaders provided that their con-
tributions were known to others in their group. In a second study on a resource
(commons) dilemma, they essentially replicated this finding. Individuals who had
taken less from a resource pool were seen as higher in status and were preferred
as exchange partners.
What information do people use to decide whom to help? People seem to
use a combination of personal experience and social information about others
(gossip) when deciding whether to help them or not (Roberts, 2008; Sommerfeld,
Krambeck, Semmann, & Milinski, 2007). Evolutionary scientists are currently
investigating what types of acts will result in someone obtaining a good or bad
reputation (image score). For instance, one gets a good reputation for punish-
ing non-cooperators (Barclay, 2006). Also, a refusal to help a bad person should
enhance one’s own reputation, but it is not clear whether it does (Bolton, Katok, &
Ockenfels, 2005; Milinski et al., 2001; Ohtsuki & Iwasa, 2007). Finally, it should
probably matter for one’s reputation whether a person helps (or fails to help) either
an in-group member or an out-group member, but we know of no studies that have looked into this.
Unlike kin selection and direct reciprocity, indirect reciprocity can poten-
tially explain cooperation in large-scale public goods in which people gain a
good reputation by being cooperative. Reputational forces like indirect reciprocity can be harnessed to support cooperative actions like the fight against climate change, because people who combat climate change may benefit in terms of indirect reciprocity (Van Vugt, 2009). As a test of this idea, Milinski,
Semmann, Krambeck, and Marotzke (2006) ran a public goods experiment with participants contributing to a public fund. In contrast to the standard public good game, the public fund was not divided among the participants; instead, it was invested in reducing people's fossil fuel use. This game mimicked the
global climate change problem. The researchers found that contributions went
up when the players were provided with expert information describing the cur-
rent state of the climate. Furthermore, in support of indirect reciprocity theory,
personal investments in climate protection increased substantially when players
invested publicly, that is, when they could build up a good reputation. Thus, a third evolutionary explanation for human cooperation is that it yields reputational benefits.
46 ■ Perspectives to Social Dilemmas
■ COSTLY SIGNALING
Van Vugt and Iredale (2012) argued that men’s altruism might be a costly signal to show
off their qualities to potential female partners. To test this “show-off ” hypothesis
(cf. Hawkes, 1991), they allocated men to four-player groups to play several rounds
of a public good game, while being observed by either a male audience, a female
audience, or no audience. As expected, contributions dropped over time when there was no audience, which can be ascribed to the standard endgame effect. With a male audience, contributions also dropped over time, though not significantly.
However, with a female audience the contributions went up over time, suggesting that the men were using their cooperation to compete for the attention of the female observer. In line with this costly signaling hypothesis, men also contributed more when they rated the female observer as more sexually attractive.
Costly signaling offers an interesting alternative perspective on the origins of
human cooperation by viewing altruism and other acts of kindness as signals to
attract potential coalition partners or sexual mates. It assumes that some traits
evolve because they enable individuals to do better in the competition for partners.
This idea fits well with a broader perspective known as biological markets theory.
Humans can choose many of their social partners and leave uncooperative partners
if there are better options available. The presence of partner choice creates a mar-
ket for social partners (Noë & Hammerstein, 1994, 1995). In such markets, people
choose the best partners they can obtain, given their own value in this market.
This perspective has implications for the evolution and development of coop-
eration because it creates a selection pressure for fairness and cooperation. If you
are not receiving a “fair” deal then you can simply find someone else who will offer
that deal (André & Baumard, 2011; Baumard, André, & Sperber, in press). In a
biological market, the best way to get a good partner is to be a good partner. As
long as there are enough opportunities for reputation building, or costs to being abandoned, this will cause an escalation of cooperative behavior in
a process known as “runaway social selection” (Nesse, 2007) or “competitive altru-
ism” (Barclay, 2004, 2011; Hardy & Van Vugt, 2006; Roberts, 1998).
The theory of biological markets combines aspects of reciprocity and costly
signaling in explaining cooperation. Traditional evolutionary perspectives predict
that people will be more cooperative when they are being observed, but biological
markets go further by predicting that humans will be even more generous when
competing over access to partners (Barclay & Willer, 2007; Sylwester & Roberts,
2011; Van Vugt & Iredale, 2012). Such competition pays off because high contribu-
tors gain status for helping others (Hardy & Van Vugt, 2006), and are more likely
to be chosen as partners (Barclay & Willer, 2007) and mates (Barclay, 2010). In
biological markets, cooperation is affected by factors like the supply and demand
of different currencies of help, one’s own market value, and one’s outside options
(Noë & Hammerstein, 1994, 1995).
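The escalation logic of competitive altruism in a biological market can be illustrated with a toy partner-choice simulation. All numbers, the 0–1 choice rule, and the copy-the-best selection step are illustrative assumptions for this sketch, not a model from the cited papers:

```python
import random

random.seed(0)

B = 10.0                 # assumed value of securing a partnership
N, GENERATIONS = 30, 200

# Each agent advertises a level of generosity: the cost it is willing
# to pay for a partner. Everyone starts out barely generous.
generosity = [random.uniform(0.0, 0.5) for _ in range(N)]

def expected_payoff(i):
    # Partner choice: the chance of being chosen grows with one's
    # generosity rank, because choosers prefer more generous partners.
    beaten = sum(generosity[i] >= generosity[j]
                 for j in range(N) if j != i)
    p_chosen = beaten / (N - 1)
    return p_chosen * (B - generosity[i])

for _ in range(GENERATIONS):
    # Selection: everyone copies the most successful agent's level of
    # generosity, with a little mutation (cultural or genetic).
    best = max(range(N), key=expected_payoff)
    level = generosity[best]
    generosity = [min(B, max(0.0, level + random.gauss(0, 0.2)))
                  for _ in range(N)]

mean_generosity = sum(generosity) / N
```

Because choosers favor the most generous candidate, outbidding rivals in generosity pays as long as a partnership is worth more than the generosity it costs, so average generosity ratchets upward over generations: a minimal instance of runaway social selection.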
More research is needed to test a broad range of predictions derived from costly
signaling and biological markets theories about the emergence of cooperation in
humans. These theories are appealing because they suggest that much of human
cooperation is about signaling and they offer compelling evolutionary explana-
tions for why there are consistent sex differences in cooperation in different situa-
tions (Balliet, Li, Macfarlan, & Van Vugt, 2011).
■ BASIC ISSUES
Contrary to popular belief, evolutionary theory does not predict that each case
of human cooperation is adaptive—in the sense that it increases someone’s
inclusive fitness. In the animal world, prey species sometimes get eaten because
they mistake where predators are (e.g., a zebra running towards a hidden lion)
and several bird species are tricked into raising cuckoo chicks. These animals clearly provide a benefit to other animals while incurring a cost to themselves, so their behavior can be viewed as altruistic. Such mistakes and manipulations occur frequently in nature, but they are not adaptive in an evolutionary sense. Here we discuss two common non-adaptive evolutionary explanations for why humans cooperate in social dilemmas: mistakes and mismatches.
Some forms of cooperation occur unintentionally. Going back to the hunting
example, suppose that one day you have successfully hunted meat, but you would
prefer not to share with the rest of the group because you and your family are
hungry. You could try to smuggle it back to your family or consume it on the
spot, but what if others catch you? You would risk losing your reputation, getting
punished, and having others not share with you in the future. Our psychological
mechanisms have evolved to be adaptive on average. All mechanisms will occa-
sionally make mistakes because errors are inevitable in any decision-making pro-
cess (Haselton & Buss, 2000; Nesse, 2005). Cooperative sentiments, like empathy,
cause us to help others (Batson et al., 1997; De Waal & Suchak, 2010). In a world
with reciprocity and reputation, this will often result in cooperative people receiv-
ing benefits for helping, even if those people do not intend to receive such benefits.
As long as those benefits outweigh the costs of occasionally helping the “wrong”
people (e.g. those who will not reciprocate) or in the “wrong” situations (e.g. when
we are anonymous) then it would still be adaptive on average to have cooperative
sentiments (Barclay, 2011; Delton, Krasnow, Cosmides, & Tooby, 2011).
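The "adaptive on average" argument is simply an expected-value claim; with purely illustrative numbers (none of which come from the cited studies):

```python
# Hypothetical figures: suppose a cooperative sentiment triggers helping
# whenever someone appears needy. Most targets repay the help, directly
# or via reputation; a minority of helpful acts are "mistakes".
cost_of_helping = 1.0
benefit_if_repaid = 3.0
p_repaid = 0.8            # assumed share of "right" targets and situations

expected_gain = p_repaid * benefit_if_repaid - cost_of_helping
# 0.8 * 3.0 - 1.0 = 1.4 > 0: an indiscriminate helping rule still pays
# on average, even though one in five helpful acts is never repaid.
```

As long as the expected gain stays positive, selection can maintain the sentiment despite its occasional misfires.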
We can design experiments to cause participants to make “mistakes” in social
dilemmas by helping when they receive no benefits for doing so, as long as we trig-
ger cues that would normally indicate the presence of benefits. For example, the
presence of eyes is normally a cue that one is being observed, and many experiments have shown that people are more generous with their money when eye-like stimuli are displayed on a computer screen or on a poster (Bateson, Nettle, &
Roberts, 2006; Burnham & Hare, 2007; Haley & Fessler, 2005; Mifune, Hashimoto,
& Yamagishi, 2010; Rigdon, Ishii, Watabe, & Kitayama, 2009). As another exam-
ple, facial resemblance is one cue that people use to detect kinship (DeBruine,
2005), and participants in experimental games are more trusting and cooperative
when they are playing with people whose faces have been morphed to slightly
resemble the participant’s own face (DeBruine, 2002, 2005; Krupp et al., 2008).
In both examples, an adaptive psychological mechanism is being “tricked” to
produce a cooperative response even when the participant does not benefit from
being helpful.
A second, non-adaptive explanation is that human cooperation is a mismatch.
Natural selection does not plan ahead. Our current adaptations are “designed” to
work well in past environments: Those who had more offspring in past environ-
ments tended to pass their traits on to current generations. If the environment
stays relatively constant, then those traits will function well in the current envi-
ronment. However, if the environment has changed recently, then traits which
were once adaptive may no longer be adaptive. In other words, the old adaptations
might not yet have been selected out of a population if the selection pressures have
recently changed. This idea is known as mismatch or evolutionary lag because the
changes in genes lag behind the changes in environments (Laland & Brown, 2006;
Van Vugt & Ahuja, 2010). The classic example of mismatch is our preferences for
sweets, salts, and fats: it is adaptive to crave these when they are rare, because they
are valuable sources of energy and nutrients. People still crave them even though
they are overabundant in modern environments and lead to obesity and other
health problems.
Social environments have changed dramatically in the last several centuries and
millennia. As such, forms of cooperation that were once adaptive might no longer
be adaptive. For example, we have gone from living in smaller kin-based groups to
much larger groups of mostly non-kin. In the former circumstances, a psychology
with decision rules such as “feel warmth towards all group members” and “help someone who needs aid” would result in cooperation mostly targeted towards kin,
whereas in modern circumstances it would not. Thus, cooperative sentiments that
once increased inclusive fitness may no longer do so.
In addition to changes in the scale and kin composition of groups, we now also
have many more opportunities for anonymity and movement between groups. This
means that people can now get away with more selfish behavior than they could
have in small bands, and it is now easier to move to a new group and run from
one’s bad reputation. Accordingly, reputation may be less important in modern
environments than in past environments—though this requires empirical test-
ing. If so, then it is not as beneficial as it once was to possess social emotions
like guilt and shame. Such emotions help people maintain their reputations and
make amends for any damage they have done to cooperative relationships (Frank,
1988; Ketelaar & Au, 2003). When people can simply run from a bad reputation, or simply gain new partners to replace any partners they have estranged, these
emotions are no longer functional. This situation may be changing with the advent
of the Internet and social media technologies such as Facebook and Twitter, as
people are now able to spread information about others’ reputation—for good or
for ill—quickly and efficiently. As it stands, it is unknown whether mismatch is a major factor in explaining human cooperation. Yet it is worth investigating whether humans still apply, in large, modern, and complex societies, the same decision rules that evolved in the smaller, largely kin-based social networks that were the norm until fairly recently (Dunbar et al., 2011; Van Vugt & Van Lange, 2006; West et al., 2011).
The main lesson here is that not all forms of human cooperation are adaptive in
an evolutionary sense. People sometimes make mistakes regarding to whom they
bestow benefits because the psychological mechanisms underlying their coopera-
tive acts are misfiring.
biases might eventually result in highly cooperative groups replacing less coopera-
tive groups, thus spreading the norms of cooperation. This process is known as
cultural group selection, which should not be confused with group selection in a
biological sense (Henrich et al., 2008; Richerson & Boyd, 2006); it is the cultural
ideas that are spreading, not necessarily the groups. Stable groups are neither nec-
essary nor sufficient for this process (Barclay, 2010a).
By definition, cooperative actions benefit others in one’s group, so members of
a cooperative group are better off than members of groups with lots of free riders.
This means that there are advantages of being part of a cooperative group, even
if helping others or harming those that fail to help others (strong reciprocity) is
personally costly. This can lead to cultural changes as more cooperative, and thus
more successful, groups replace less cooperative groups, and bring their cultural
norms with them. Alternatively, less cooperative groups can become more cooper-
ative by imitating and conforming to the norms and behaviors of more successful
cooperative groups (Boyd & Richerson, 2002). Finally, people may “vote with their
feet” by joining groups with norms fostering cooperation, allowing for the further
spread of cooperation (Gürerk, Irlenbusch, & Rockenbach, 2006).
This process explains why humans have been able to create large and highly
cooperative societies on the back of a few primitive tribal social instincts to (1) help members of their kin group, (2) punish defectors, and (3) imitate the behaviors of those around them. In general, gene-culture co-evolutionary theories offer a promising avenue for studying human cooperation because they pay attention to interactions between evolved cooperative sentiments and cultural learning biases. Yet
it is fair to say that due to their complexity and mathematical nature, these models
have not generated a lot of empirical research so far.
There are other very promising and relatively novel evolutionary approaches
to study human cooperation such as niche construction theory (Laland & Brown,
2006), scale of competition theory (West et al., 2006), selective investment theory
(Brown & Brown, 2006), and network reciprocity (Nowak, 2006). Space limitations
prevent us from elaborating on them here, but please consult these key references.
■ PSYCHOLOGICAL PERSPECTIVES
Psychological Perspectives ■ 55
actions on the individual’s outcomes. For example, an employee may simply do those activities that are part of the contract or job description. But as we
have illustrated, an employee may also demonstrate a fair amount of helping, such
as orienting newcomers, working overtime when needed, and perhaps even
spontaneously offering help to colleagues who seem to need help.
According to interdependence theory, an individual may be transforming the
given matrix into an effective matrix, a matrix which summarizes his or her broader
preferences beyond the simple pursuit of direct self-interest. One type of transformation may involve taking a longer-term perspective, whereby the employee
acts in ways that might be associated with greater outcomes for him or her in the
future, such as the positive return from other colleagues, or the anticipation of
reputational benefits. Another type of transformation may be outcome-based, such
that value is assigned not only to one’s own outcomes (immediate or future) but
also to the outcomes for others. For example, the employee may assign value to
the well-being of a unit or group of colleagues, seeking to enhance joint outcomes
rather than his or her own outcomes with no regard for colleagues’ outcomes. Thus,
interdependence theory assumes that the pursuit of direct immediate outcomes
often provides an incomplete understanding of interpersonal behavior. That is why
this theory introduces the concept of transformation, defined as a movement away
from preferences of direct self-interest by attaching importance to longer-term
outcomes or outcomes of another person (other persons, or groups). We focus in
the remainder of the chapter on such outcome-based transformations. But what
outcome transformations may be distinguished?
The concept of transformation is based in part on the classic literature on social
value orientation (McClintock, 1972; see also Griesinger & Livingston, 1973),
which distinguishes among eight distinct preferences or orientations, including
altruism, cooperation, individualism, competition, aggression, as well as nihil-
ism, masochism, and inferiority (we will not discuss the latter three since they
are exceptionally uncommon). The outcome transformations can be understood
in terms of two dimensions, including (a) the importance (or weight) attached to
outcomes for self, and (b) the importance (or weight) attached to outcomes for
other. Figure 4.1 presents this schematic presentation, with weight to outcomes
for self on the x-axis (horizontal), and weight to outcomes for other on the y-axis
(vertical).
In this typology, cooperation is defined as the tendency to emphasize posi-
tive outcomes for self and other (“doing well together”). In contrast, competition
(or spite) is defined as the tendency to emphasize relative advantage over others
(“doing better than others”), thereby assigning positive weight to outcomes for self
and negative weight to outcomes for other. Individualism is defined by the ten-
dency to maximize outcomes for self, with little or no regard for outcomes for other
(“doing well—for oneself”). These three orientations are fairly common in research on social dilemmas, which often uses participants who do not know each other well.
Two other orientations—altruism and aggression—are somewhat less com-
monly observed in social dilemmas in that people do not tend to hold these as
orientations with which one approaches others in social dilemmas (but they may be activated as motivational states, as we will discuss later).
Figure 4.1 The eight social value orientations arranged by the weight attached to outcomes for self (horizontal axis) and the weight attached to outcomes for other (vertical axis): altruism, cooperation, individualism, competition, aggression, nihilism, masochism, and inferiority.
Altruism is defined
by the tendency to maximize outcomes for other, with no or very little regard for
outcomes for self, and aggression is defined by the tendency to minimize outcomes
for other. Cooperation, individualism, and competition represent common orien-
tations, in that most of us probably have repeated experience with each of these
tendencies, either through introspection or through observation of others’ actions.
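These orientations can be summarized as weights on own and other's outcomes; a minimal sketch in code, where the specific weight values are illustrative conventions rather than estimates from the literature:

```python
# Each orientation is a (weight_self, weight_other) pair applied to the
# "given" payoffs, producing the "effective" payoff an actor maximizes.
ORIENTATIONS = {
    "altruism":      (0.0,  1.0),
    "cooperation":   (1.0,  1.0),   # doing well together
    "individualism": (1.0,  0.0),   # doing well, for oneself
    "competition":   (1.0, -1.0),   # doing better than others
    "aggression":    (0.0, -1.0),
}

def effective_payoff(orientation, own, other):
    w_self, w_other = ORIENTATIONS[orientation]
    return w_self * own + w_other * other

# Given a choice between (own=5, other=5) and (own=6, other=1),
# a cooperator prefers the first option, while an individualist and a
# competitor both prefer the second.
```

The same given payoffs thus yield different effective payoffs, which is the sense in which a transformation of the given matrix precedes the actual choice.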
Similar models have been developed by other researchers. The most notable
model is the dual-concern model (Pruitt & Rubin, 1986), developed in an attempt to
understand the values or concerns that might underlie negotiation. As in the model
described above, the dual-concern model assumes two basic concerns: (a) concern
about own outcomes, and (b) concern about other’s outcomes. The dual concern
model assumes that each of these concerns can run from weak to strong. This model
delineates four negotiation strategies based on high versus low concern about own
outcomes and high versus low concern about other’s outcomes. According to the
dual-concern model, problem-solving is a function of high self-concern and high other-concern; yielding is a function of low self-concern and high other-concern; contending is a function of high self-concern and low other-concern; and inaction is a function of low self-concern and low other-concern. Negotiation research has
yielded good support for the dual-concern model (Carnevale & Pruitt, 1992; see
also De Dreu, Weingart, & Kwon, 2000).
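The four strategies follow mechanically from crossing the two concerns; a minimal sketch, where the 0–1 scale and the cut-point are illustrative assumptions rather than part of the model:

```python
def dual_concern_strategy(self_concern, other_concern, threshold=0.5):
    """Map concern levels (on an assumed 0-1 scale) to the four
    negotiation strategies of the dual-concern model."""
    high_self = self_concern >= threshold
    high_other = other_concern >= threshold
    if high_self and high_other:
        return "problem-solving"
    if high_other:
        return "yielding"
    if high_self:
        return "contending"
    return "inaction"
```

For example, a negotiator who cares strongly about both parties' outcomes is classified as problem-solving, whereas one who cares about neither is classified as inactive.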
The model of social value orientation and the dual-concern model have been
extended to include a third orientation (or concern), the pursuit of equality in
outcomes. It appears that individuals who tend to enhance joint outcomes (coop-
eration, problem-solving) are also strongly concerned with equality in outcomes,
whereas individuals who are more individualistic or competitive are not very
strongly concerned with equality in outcomes (Van Lange, 1999). The implication is that individuals who are concerned with joint outcomes might not act cooperatively if they think that such actions create injustice, either to their own disadvantage or to the other’s disadvantage.
be activated across situations, it is also true that small cues in the situation, or
in how we come to perceive the other person in terms of personality, motives,
and identity, might exert pronounced effects on our behavior. Some theories have
suggested that social dilemmas may often call for some construal of appropriateness, in which a person may ask the fundamental question: “What does a person like me do in a situation like this?” (Weber, Kopelman, & Messick, 2004; see also
Dawes & Messick, 2000). Norms are clearly an important source for transfor-
mations, in that most people want to act in ways that are consistent with broad
notions of appropriate and good behavior. But there may be many other sources as
well, such as identity concerns, reputational concerns, or empathy felt for others
in the group that might underlie the specific motives that people bring to bear on
social dilemmas—and that effectively cause behavior, and shape social interac-
tions (e.g., Foddy, Smithson, Schneider, & Hogg, 1999). To provide a framework
for these sources, and to provide a general framework for the influences on human
cooperation, we distinguish between structural, psychological, and dynamic influ-
ences—which we discuss next.
■ DEVELOPMENTS IN STRUCTURAL,
PSYCHOLOGICAL AND DYNAMIC INFLUENCES
■ STRUCTURAL INFLUENCES
Rewards, Punishment, and the Social Death Penalty. It has long been known that
the objective payoffs facing decision makers (i.e., the given payoff structure) can
have a large impact on cooperation in social dilemmas (e.g., Komorita & Parks,
1994; Rapoport, 1967). Those payoffs, in turn, may be determined by an experi-
menter (e.g., by presenting relatively low or high levels of fear and greed), or by
the actual outcomes afforded by the situation (e.g., the cost of contributing to a
public good versus the value of consuming the good). In terms of the situation,
another factor that has a large impact on the actual (or anticipated) payoffs in a
social dilemma is the presence of rewards for cooperation and punishment for
non-cooperation. Indeed, a recent meta-analysis showed that rewards and pun-
ishments both have moderate positive effects on cooperation in social dilemmas
(Balliet, Mulder, & Van Lange, 2011). Administering rewards and punishments
is costly, however, and may thereby create a “second order public good.” For
example, sanctions may be good for the collective, but individuals may decide
not to contribute money or effort for this purpose. In his classic work, Yamagishi
(1986a, 1986b, 1988b) showed that people are willing to make such contributions if
they share the goal of cooperation, but do not trust others to voluntarily cooper-
ate. More recently, Fehr and Gächter (2000) showed that people are also often
willing to engage in costly punishment, and may even prefer institutions that pro-
vide the possibility of such sanctions, perhaps in part because the possibility of
costly punishment can help to install a norm of cooperation (Gürerk et al., 2006).
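How sanctions reshape the given payoff structure can be seen in a bare-bones public good game; the parameters below are illustrative, not taken from the cited studies:

```python
# Four players, endowment of 10; contributions are doubled and shared.
ENDOWMENT, MULTIPLIER, N_PLAYERS = 10.0, 2.0, 4

def payoff(my_contribution, others_contributions, fine=0.0):
    pool = my_contribution + sum(others_contributions)
    share = MULTIPLIER * pool / N_PLAYERS
    return ENDOWMENT - my_contribution + share - fine

# Without sanctions, free riding on three full contributors pays best:
free_ride = payoff(0, [10, 10, 10])          # 10 + 15 = 25
cooperate = payoff(10, [10, 10, 10])         # 0 + 20 = 20

# A sufficiently large fine for non-cooperation reverses the ranking,
# though someone must bear the cost of administering it, creating the
# second-order public good discussed above:
fined = payoff(0, [10, 10, 10], fine=6.0)    # 25 - 6 = 19 < 20
```

With the fine in place, contributing becomes the payoff-maximizing choice, which is why effective sanctioning institutions can sustain cooperation.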
One of the most dramatic forms of punishment currently receiving attention
is ostracism or social exclusion. Research on ostracism and social exclusion reveals
that even the possibility of social exclusion is a powerful tool to increase coop-
eration, and that this threat might be more effective in small as opposed to large
groups (e.g., Cinyabuguma, Page, & Putterman, 2005; Kerr et al., 2009; Ouwerkerk,
Kerr, Gallucci, & Van Lange, 2005). Moreover, it appears that most people realize
that harmful pursuit of self-interest can lead to social punishments (see Gächter,
Herrmann, & Thöni, 2004). As noted by Kerr et al. (2009), in everyday life, small
groups may not often go as far as to socially exclude people, but the threat is often
there, especially in the form of social marginalization: paying less attention to non-cooperative members or involving them only in less important group decisions. Consistent with this argument, anthropological research in a tribal society in Northwest Kenya revealed that people often rely on other, low-cost activities before they consider punishment.
In particular, group members often initiate gossip and express mockery and public obloquy as part of slow-paced, low-cost strategies to build consensus and
muster enough support to eventually retaliate against the systematic wrongdoers
(Lienard, 2013).
Although punishments can be effective in promoting cooperation, some
adverse effects have been documented in recent research. For example, several
studies have shown that sanctions can decrease rather than increase coopera-
tion, especially if the sanctions are relatively low (e.g., Gneezy & Rustichini, 2004;
Mulder, Van Dijk, De Cremer, & Wilke, 2006; Tenbrunsel & Messick, 1999). One
explanation for these adverse effects is that punishments may undermine peo-
ple’s internal motivation to cooperate (cf. Deci, Koestner, & Ryan, 1999; Chen,
Pillutla, & Yao, 2009). According to Tenbrunsel and Messick (1999), sanctions can
also lead people to interpret the social dilemma as a business decision, as opposed
to an ethical decision, thus reducing cooperation.
Researchers are now also documenting that groups may at times punish coop-
erators, a (somewhat counterintuitive) phenomenon known as antisocial punish-
ment (Gächter & Herrmann, 2011; Herrmann, Thöni, & Gächter, 2008). In one
of the most recent papers on this topic, Parks and Stone (2010) found, across
several studies, that group members indicated a strong desire to expel another
group member who contributed a large amount to the provision of a public good
and later consumed little of the good (i.e., an unselfish member). Further, there is
also growing evidence suggesting that punishment might be most effective when
characteristics are not always clear, as people often face various types of “environ-
mental uncertainty” (e.g., How scarce is tuna exactly, and where exactly? What
is the replenishment rate for tuna? Or how big is the group? Au & Ngai, 2003;
Messick, Allison, & Samuelson, 1988; Suleiman & Rapoport, 1988).
Environmental uncertainty has been shown to reduce cooperation in various
social dilemmas (e.g., Budescu, Rapoport, & Suleiman, 1990; Gustafsson, Biel, &
Gärling, 1999), and several explanations have been offered to account for the
detrimental effects of uncertainty. For example, uncertainty may undermine effi-
cient coordination (De Kwaadsteniet, van Dijk, Wit, & de Cremer, 2006; Van Dijk
et al., 2009), lead people to be overly optimistic regarding the size of a resource
(Gustafsson et al., 1999), and/or provide a justification for non-cooperative behav-
ior (for a review, see Van Dijk, Wit, Wilke, & Budescu, 2004). Also, uncertainty
undermines cooperation when people believe their behavior is quite critical for
the realization of public goods, but when criticality is low, uncertainty matters less
or may even slightly promote cooperation (Chen, Au, & Komorita, 1996). Thus,
although it is not yet clear what mechanisms might explain the detrimental effects
of uncertainty, there is little doubt that uncertainty predictably undermines coop-
eration in various social dilemmas.
Noise. One final structural factor that has received attention in recent years is
the concept of noise, or discrepancies between intended and actual outcomes in
social interaction (cf. Bendor, Kramer, & Stout, 1991; Kollock, 1993; Van Lange,
Ouwerkerk, & Tazelaar, 2002). Presumably, cooperation is strongly challenged by
unintended errors that may lead to misunderstanding, such as accidentally saying the wrong thing or failing to respond to an email because of a network breakdown. However, surprisingly few studies have sought to capture noise, even though
noise underlies many situations in everyday life, and often gives rise to uncertainty
and misunderstanding. It may therefore challenge feelings of trust, and in turn,
cooperation.
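A simulation sketch can show how negative noise erodes strict reciprocity but not a slightly generous variant. The parameters and the simple "give 0-6 coins" setup are illustrative assumptions; the generous rule is modeled loosely on the "tit-for-tat + 1" idea discussed below (Van Lange et al., 2002; Klapwijk & Van Lange, 2009), not on their actual paradigm:

```python
import random

random.seed(42)

MAX_GIVE, ROUNDS, TRIALS = 6, 200, 50
NOISE_P = 0.2        # chance that a move is hit by negative noise

def play(rule_a, rule_b):
    """Mean actual cooperation per move when both players follow
    reciprocal rules and negative noise makes actual < intended."""
    last_a = last_b = MAX_GIVE          # begin fully cooperative
    total = 0
    for _ in range(ROUNDS):
        intend_a = rule_a(last_b)       # react to partner's *actual* move
        intend_b = rule_b(last_a)
        act_a = max(0, intend_a - 2) if random.random() < NOISE_P else intend_a
        act_b = max(0, intend_b - 2) if random.random() < NOISE_P else intend_b
        total += act_a + act_b
        last_a, last_b = act_a, act_b
    return total / (2 * ROUNDS)

def strict_tft(partner_last):
    return partner_last                  # echo exactly what was received

def generous_tft(partner_last):
    return min(MAX_GIVE, partner_last + 1)   # "tit-for-tat + 1"

strict = sum(play(strict_tft, strict_tft) for _ in range(TRIALS)) / TRIALS
generous = sum(play(generous_tft, generous_tft) for _ in range(TRIALS)) / TRIALS
```

Under strict tit-for-tat, every unintended shortfall is echoed back and cooperation ratchets down toward zero; the slightly generous rule absorbs the noise and keeps cooperation near its maximum.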
In many experimental social dilemmas, there is a clear connection between
one’s intended level of cooperation and the actual level of cooperation commu-
nicated to one’s partner (e.g., if Partner A decides to give Partner B six coins,
Partner B learns that Partner A gave six coins). However, in the real world, it is
not uncommon for a decision maker’s actual level of cooperation to be (positively
or negatively) impacted by factors outside of his or her control (i.e., noise). While
positive noise is possible (i.e., cooperation is higher than intended), the majority
of research has focused on the detrimental effects of negative noise (i.e., when
cooperation is lower than intended). This research clearly has shown that negative
noise reduces cooperation in give-some games (Van Lange et al., 2002) and will-
ingness to manage a common resource responsibly, especially among prosocials
faced with a diminishing resource (Brucks & Van Lange, 2007). Moreover, the
adverse consequences of negative noise can spill over into subsequent dilemmas
that contain no noise (Brucks & Van Lange, 2008). While noise can clearly under-
mine cooperation, several studies also suggest it can be overcome, for example, if
the partner pursues a strategy that is slightly more generous than a strict tit-for-tat
strategy (e.g., tit-for-tat + 1; Klapwijk & Van Lange, 2009; Van Lange et al., 2002),
when people are given an opportunity to communicate (Tazelaar, Van Lange, & Ouwerkerk, 2004).
62 ■ Perspectives to Social Dilemmas
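The dynamics of noise and the repairing effect of generosity lend themselves to a simple simulation. The sketch below is our own illustration (not the procedure of the studies cited above), assuming a repeated give-some game in which each partner intends to return what the other gave on the previous round, and in which negative noise occasionally reduces the delivered amount:

```python
import random

def simulate(strategy, rounds=200, noise_p=0.2, seed=1):
    """Average delivered gift per round in a repeated give-some game.

    Two partners start by giving 10 coins each. Every round, a partner
    intends to give what it received last round ("tft"), or that amount
    plus one, capped at 10 ("tft+1"). With probability noise_p, negative
    noise cuts the delivered gift by 1-3 coins (hypothetical values).
    """
    rng = random.Random(seed)
    received = [10, 10]                 # what each partner got last round
    total = 0
    for _ in range(rounds):
        delivered = []
        for i in (0, 1):
            intended = received[i] + (1 if strategy == "tft+1" else 0)
            actual = min(intended, 10)
            if rng.random() < noise_p:  # negative noise strikes
                actual = max(0, actual - rng.randint(1, 3))
            delivered.append(actual)
        received = [delivered[1], delivered[0]]  # each gets the other's gift
        total += sum(delivered)
    return total / (2 * rounds)

print(simulate("tft"))    # strict reciprocity lets shortfalls accumulate
print(simulate("tft+1"))  # generosity repairs them: a higher average
```

Under strict tit-for-tat every accidental shortfall is faithfully passed back, so exchanges ratchet downward; the single extra coin of tit-for-tat + 1 absorbs these shortfalls and keeps cooperation near its maximum.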
■ PSYCHOLOGICAL INFLUENCES
As such, trust involves vulnerability, that is, the uncertainty and risk that come
with the control another person has over one’s outcomes, together with positive
expectations, which often imply a set of beliefs in the cooperative intentions or
behavior of another person, or of people in general (Rotter, 1967; see also Evans &
Krueger, 2010; Kramer & Pittinsky, 2012). Although cooperation without trust is
possible (if also a challenge; Cook, Hardin, & Levi, 2005), in various societal
contexts, and perhaps especially in informal groups, trust may be considered one
of the key ingredients of cooperation (Dawes, 1980; Yamagishi, 2011).
Early work on trust in social dilemmas showed that those high in dispositional
trust were more likely than those low in trust to increase cooperation in response
to a partner’s stated intention to cooperate (Parks, Henager, & Scamahorn, 1996),
reduce consumption of a depleting common (Messick, Wilke, Brewer, Kramer,
Zemke, & Lui, 1983), and contribute to public goods (Parks, 1994; Yamagishi,
1986a). Since these initial studies, a number of important insights regarding trust
and cooperation have emerged.
First, research suggests that people who are not very trusting of others are
not necessarily non-cooperative in a motivational sense. Rather, they are simply
prone to believe that others will not cooperate, and this fear undermines their
own (elementary) cooperation. However, when given the chance to contribute to a
sanctioning system that punishes non-cooperators, low-trusters are actually quite
cooperative. In other words, they appear quite willing to engage in instrumental
cooperation by contributing to an outcome structure that makes it attractive for
everybody, including those with selfish motives, to cooperate, and unattractive
not to cooperate (Yamagishi, 2011; for earlier evidence, see Yamagishi, 1988a, 1988b).
Second, trust matters more when people lack information about other people’s
intentions or behavior, or when they are faced with considerable uncertainty (see
Yamagishi, 2011). An interesting case in point is provided by Tazelaar et al. (2004)
who, as mentioned earlier, found that levels of cooperation are much lower when
people face a social dilemma with noise. More interestingly, they also found that
the detrimental effect of noise was more pronounced for people with low trust
than for people with high trust (Tazelaar et al., 2004, Study 2).
Third, based on a recent meta-analysis, it has become clear that trust matters
most when there is a high degree of conflict between one’s own and others’ out-
comes (Balliet & Van Lange, 2013a; cf. Parks & Hulbert, 1995). This finding makes
sense, as these are the situations involving the greatest degree of vulnerability, as
trusting others to act in the collective’s interest can be quite costly in such situa-
tions. Indeed, as noted earlier, trust is, in many ways, about the intention to accept
vulnerability based upon positive expectations of the intentions or behavior of
another person (Rousseau et al., 1998, see also Evans & Krueger, 2009) or member
of one’s group (Foddy, Platow, & Yamagishi, 2009).
Consideration of Future Consequences. A final trait relevant to cooperation
in social dilemmas is the consideration of future consequences (CFC), defined
as “the extent to which people consider the potential distant outcomes of their
current behaviors and the extent to which they are influenced by these poten-
tial outcomes” (Strathman, Gleicher, Boninger, & Edwards, 1994, p. 743; cf.
Joireman, Shaffer, Balliet, & Strathman, 2012). Several studies have shown that
Psychological Perspectives ■ 65
individuals high in CFC are more likely than those low in CFC to cooperate in
experimentally-created social dilemmas (e.g., Joireman, Posey, Truelove, & Parks,
2009; Kortenkamp & Moore, 2006), and real-world dilemmas, for example, by
engaging in pro-environmental behavior (e.g., Joireman, Lasane et al., 2001;
Strathman et al., 1994); commuting by public transportation (e.g., Joireman, Van
Lange & Van Vugt, 2004); and supporting structural solutions to transportation
problems if the solution will reduce pollution (Joireman, Van Lange et al., 2001).
There is also some evidence suggesting that adopting a long-term orientation may
help groups in particular to overcome obstacles and initiate cooperation (Insko
et al., 1998).
Other Individual Differences. A number of additional individual differences
have received attention in recent dilemmas research. This research has shown, for
example, that cooperation in social dilemmas is higher among those low in narcis-
sism (Campbell, Bush, & Brunell, 2005); low in dispositional envy (Parks, Rumble,
& Posey, 2002); low in extraversion and high in agreeableness (Koole, Jager, van den
Berg, Vlek, & Hofstee, 2001); high in intrinsic orientation (Sheldon & McGregor,
2000); or high in sensation seeking and self-monitoring (Boone, Brabander, & van
Witteloostuijn, 1999).
Decision Framing. The psychological “framing” of social dilemmas has also
received a fair amount of recent attention. For example, in general, emphasizing
the acquisitive aspect of the dilemma (“you can gain something from the task”)
leads people to be less cooperative than emphasizing the supportive aspect of
the dilemma (“you can contribute toward a common good”) (Kramer & Brewer,
1984). Similarly, cooperation is lower when decision makers view the social
dilemma as a business decision, rather than an ethical decision (Tenbrunsel &
Messick, 1999) or a social decision (Liberman, Samuels, & Ross, 2004; Pillutla &
Chen, 1999). Framing the dilemma as a public goods versus a commons can also
impact cooperation, but, as De Dreu and McCusker (1997) show, the direction of
such framing effects seems to depend on the instructions given and the decision
maker’s SVO. To summarize, cooperation rates are lower in give-some than in
take-some dilemmas when instructions to the dilemma emphasize individual gain
or decision-makers have an individualistic value orientation, whereas coopera-
tion is higher in give-some than in take-some games when instructions empha-
size collective outcomes or decision-makers have a prosocial value orientation.
In general, group members are more concerned with distributing outcomes equally
among group members in the take-some dilemma than in the give-some dilemma
(Van Dijk & Wilke, 1995, 2000). Finally, research has also shown that cooperation
decreases if people come to believe they have been doing better than expected,
and increases if people believe they have been doing worse than expected (Parks,
Sanna, & Posey, 2003).
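Notably, the give-some and take-some framings can be payoff-equivalent, which underscores that the direction of such framing effects is psychological rather than structural. A minimal sketch of our own (multiplier, group size, and endowment are illustrative values):

```python
def give_some_payoff(kept, others_given, multiplier=2.0, n=4, endowment=10):
    """Give-some framing: keep part of a private endowment and give the
    rest to a common pool, which is multiplied and shared equally."""
    pool = (endowment - kept + others_given) * multiplier
    return kept + pool / n

def take_some_payoff(taken, others_left, multiplier=2.0, n=4, endowment=10):
    """Take-some framing: the endowment starts in the common pool and
    each player takes some out; the remainder is multiplied and shared."""
    pool = (endowment - taken + others_left) * multiplier
    return taken + pool / n

# Keeping 7 coins in the give-some framing pays exactly what taking
# 7 coins pays in the take-some framing, given the same behavior by others.
assert give_some_payoff(kept=7, others_given=9) == take_some_payoff(taken=7, others_left=9)
```

Since the numbers are identical across framings, any difference in cooperation between them must arise from how decision makers construe the situation, consistent with the role of instructions and SVO described above.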
Priming. Another question that has received some attention is whether it is
possible to induce cooperation through subtle cues and implicit messages. The
answer is generally “yes,” though the dynamics of priming cooperation are surpris-
ingly complex, and it is not clear whether they exert very strong effects. But some
effects are worth mentioning. For example, priming an interdependent mindset
effectively promotes cooperation (Utz, 2004a), but if the person has a prosocial
range of negative emotions including envy (Parks et al., 2002), guilt (e.g., Nelissen,
Dijker, & De Vries, 2007), shame (e.g., De Hooge, Breugelmans, & Zeelenberg,
2008), regret (Martinez, Zeelenberg, & Rijsman, 2011), anger and disappointment
(e.g., Wubben, De Cremer, & Van Dijk, 2009), with most acting as stimulators of
cooperation.
On a related note, a more recent line of research has focused on how coopera-
tion is impacted when one’s partner communicates certain emotions. For exam-
ple, research shows that when one’s partner is not really in a position to retaliate,
people are more cooperative when their partner appears happy, but if one’s partner
can retaliate, people are more cooperative when their partner expresses anger (Van
Dijk, Van Kleef, Steinel, & Van Beest, 2008). Such research shows that commu-
nicated emotions are often interpreted as a signal that informs us how another
person might respond to our non-cooperative and cooperative behavior (e.g., Van
Kleef, De Dreu, & Manstead, 2006). Indeed, research also shows that coopera-
tors are more likely than individualists and competitors to smile when discussing
even mundane aspects of their day, and that cooperators, individualists, and com-
petitors can be identified simply on the basis of their non-verbal behavior (Shelley
et al., 2010).
In summary, personality differences in social values, trust, consideration of
future consequences, framing, priming, heuristics, and affect represent a long
list of variables that are important to understanding the psychological processes
that are activated in social dilemmas. Presumably, personality influences might be
more stable over time and more generalizable across situations than other, more
subtle influences, such as framing, priming, and affect. The stable and subtle
influences are both important, as together they provide the bigger picture of how
social dilemmas might challenge different people, and how some of these challenges
might be influenced in implicit ways. The effect sizes of framing and especially
priming may sometimes be somewhat modest, yet the effects tend to be fairly
robust, and therefore they help us understand how cooperation could perhaps be
promoted in cost-effective ways, such as by just activating a particular psychologi-
cal state or mindset in the ways social dilemmas are communicated and presented.
Parks and Rumble (2001) showed that the timing of rewards and punishments
matters: whereas prosocials are most likely to cooperate when their cooperation is
immediately reciprocated, competitors are most likely to cooperate when punishment
for non-cooperation is delayed. Thus, while quite effective, TFT should not
be regarded as the most effective strategy, because so many exceptions have now
been observed, exceptions that also make sense from a psychological point
of view.
Moreover, even from a purely logical perspective, adding generosity can help
overcome the detrimental effects of noise. And there is also evidence that
another strategy might actually outperform TFT in many social dilemma situations.
In particular, a strategy called Win-Stay-Lose-Shift (WSLS, or
Win-Stay-Lose-Change) is defined by a very simple rule, albeit one different from
TFT’s. The rule for WSLS is: when I do well, I repeat the choice I have made; and
when I do not do well, I shift and make a different choice. In practice, this means
that a non-cooperative choice is repeated if the other made a cooperative choice
(and I made a non-cooperative choice), and that a cooperative choice is repeated
if both persons made a cooperative choice.
One changes to cooperation when both persons did not cooperate, and changes to
non-cooperation when the other made a non-cooperative choice while I made a
cooperative choice. Several simulation studies revealed that, across several social
dilemma tasks, WSLS outperformed TFT (Nowak & Sigmund, 1993; see also
Messick & Liebrand, 1995). It probably pays off, though, to apply WSLS with some
flexibility. For example, it is probably unwise to always change to cooperation after each
and every interaction in which both did not cooperate; it is probably wiser to make
that change with some probability (e.g., Gächter & Herrmann, 2009; Nowak &
Highfield, 2011). There is not much empirical research examining the strengths
and limitations of WSLS among real people, as most research on this strategy has
used computer simulations. But some have suggested that WSLS is quite a com-
mon, basic strategy, one that may be observed in humans, as well as in nonhuman
populations. After all, it seems quite natural to change only after outcomes are
disappointing, and to not change when the outcomes made you happy.
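The WSLS rule sketched above can be written down directly. The following is a minimal sketch of our own, assuming standard prisoner’s dilemma payoffs (temptation 5, reward 3, punishment 1, sucker’s payoff 0; these values are illustrative) and counting any outcome at least as good as mutual cooperation as a “win”:

```python
# Illustrative prisoner's dilemma payoffs from my own perspective:
# (my choice, other's choice) -> my payoff.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def wsls(my_last, other_last):
    """Win-Stay-Lose-Shift: repeat the previous choice after a good
    outcome (mutual cooperation, or exploiting a cooperator); switch
    after a bad one (being exploited, or mutual defection)."""
    if PAYOFF[(my_last, other_last)] >= PAYOFF[("C", "C")]:  # "win"
        return my_last                                       # stay
    return "D" if my_last == "C" else "C"                    # lose: shift

def tit_for_tat(my_last, other_last):
    """Tit-for-tat simply copies the other's previous choice."""
    return other_last

# After mutual defection, WSLS returns to cooperation; TFT stays locked in.
assert wsls("D", "D") == "C" and tit_for_tat("D", "D") == "D"
# After exploiting a cooperator, WSLS keeps defecting.
assert wsls("D", "C") == "D"
```

This is why a pair of WSLS players recovers from an accidental defection after a single round of mutual defection, whereas a pair of strict TFT players keeps echoing the error back and forth.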
In sum, recent research has shed new light on how reciprocal strategies can
promote cooperation. TFT was believed most effective, but that view has now
been revisited. In situations involving noise, some generosity (added to reciproc-
ity) is quite effective; and there is evidence in support of the superior qualities of
Win-Stay-Lose-Shift, a very basic strategy that many people may spontaneously
apply in some form in real life.
Indirect Reciprocity. Recent research has also explored how indirect reciprocity
can encourage cooperation. Whereas the effects of direct reciprocity are observed
in repeated encounters between two individuals, cooperation in larger settings may
be promoted by indirect reciprocity. According to this view, cooperation may be
advantageous because we tend to help people who have helped others in the past.
As noted earlier, and briefly illustrated by the experiment of Wedekind and Milinski
(2000), indirect reciprocity models build on reputation effects by assuming that
people may gain a positive reputation if they cooperate and a negative reputation if
they do not. Indeed, people are more likely to cooperate with others who donated
to a charity fund like UNICEF (Milinski, Semmann, & Krambeck, 2002). Notably,
people also seem to be well aware of these positive effects, as they are more willing
to donate and cooperate if they feel their reputation will be known by others than
if they feel others are not aware of their contributions (e.g., Griskevicius, Tybur, &
van den Bergh, 2010). There is even evidence indicating that subtle cues of being
watched—by means of an image of a pair of eyes—can enhance donations (Bateson
et al., 2006), which suggests the subtle power of reputational mechanisms.
Locomotion. Typically, experimental research on multi-trial social dilemmas
has explored how people respond to a given partner or group. However, in the
real world, one is not inevitably stuck with certain partners. One can exit relation-
ships and groups, and enter others. Recognizing exit and selection (and exclusion)
of new partners as viable options in social dilemmas, a number of recent studies
have begun to study locomotion and changes in group composition. For example,
Van Lange and Visser (1999) showed that people minimize interdependence with
others who have exploited them, and that competitors minimize interdependence
with others who pursue TFT, which is understandable, as competitors cannot effec-
tively achieve greater (relative) outcomes with a partner pursuing TFT. Similarly, it
is clear that conflict within a group may induce people to leave their group, even-
tually leading to group fissions (Hart & Van Vugt, 2006). The conflict may come
from failure to establish cooperation in the group or a decline in cooperation as
cooperative members exit (Yamagishi, 1988a; Van Lange & Visser, 1999; see also
De Cremer & Van Dijk, 2011), or from dissatisfaction with autocratic leadership
(Van Vugt, Jepson, Hart, & De Cremer, 2004). Conversely, prospects of coopera-
tion may encourage individuals to enter groups, for example, when sanctions of
non-cooperation promote the expectation of cooperation (see Gürerk et al., 2006).
Communication. Frequently, communication is conceptualized as a psycho-
logical variable. After all, communication is often thought of in terms of verbal
or nonverbal messages that are characterized by a fair amount of interpretation
and subjectivity. In the social dilemma literature, various forms of communication
have been compared. Classic research on social dilemmas has shown that communication
can effectively promote cooperation (see Balliet, 2010; Komorita & Parks,
1994; for classic studies, see Caldwell, 1976). But it is not just talk that explains
why communication might promote cooperation, even though face-to-face inter-
action by itself may be helpful. Simply talking about issues that are not in any way
relevant to the social dilemma does not seem to promote cooperation, as demonstrated
in one of the most classic studies of its kind (Dawes, McTavish, & Shaklee,
1977). Some researchers have suggested and found that, at least in single-trial
social dilemmas, promising (to make a cooperative choice) may be quite effective,
especially if all group members make such a promise (Orbell, Van der Kragt, & Dawes,
1988; see also Kerr & Kaufman-Gilliland, 1994). Subsequent research supported
this line of reasoning, in that “communication-with-pledge” promotes coopera-
tion, because it promotes a sense of group identity and a belief that one’s choice
matters (i.e., that one’s choice is believed to be critical; Chen, 1996).
These findings are important not only because they inform us about the psychology
of decision-making in social dilemmas, but also because they might help us
explain the dynamics of cooperation. Moreover, in real-life social dilemmas, group
members may actually decide whether they favor a structure in which they openly
communicate their intended choices. For example, as noted by Chen (1996), in
work groups, managers could ask members to make a pledge of time and effort, and
then propose several binding pledge systems, especially ones that are group-based
such that they create a common fate and normative standards for everybody involved
(Kerr & Kaufman-Gilliland, 1994; Kerr, Garst, Lewandowski, & Harris, 1997). In that sense,
it is interesting that even virtual groups, or the mere imagination of communi-
cation, can promote cooperation (Meleady, Hopthrow, & Crisp, 2013). Such evi-
dence might suggest that the effects of internalized norms are more powerful than
often is assumed. And perhaps the mechanisms through which communication
may promote cooperation might be quite subtle, involving norms and identity.
Indeed, communication may strengthen a sense of identity, but it also promotes a
norm of (generalized) reciprocity, which is why it might speak to similar mecha-
nisms as those that dynamically underlie the effects of direct and indirect reci-
procity. There is indeed evidence suggesting that people might fairly automatically
apply a social exchange heuristic, which prescribes direct or generalized forms of
reciprocity (Yamagishi, Terai, Kiyonari, Mifune, & Kanazawa, 2007): “Do what
you think another person would do in a situation like this,” or some rule or
heuristic closely related to it. And there is recent evidence suggesting that the mere
imagining of group discussion can promote cooperation (Meleady, Hopthrow, &
Crisp, 2013). In this research, participants engaged in a guided simulation of the
progressive steps required to reach cooperative consensus within a group discus-
sion of a social dilemma. It awaits future research, but it is possible that imagined
group discussion activates a generalized reciprocity norm that effectively pro-
motes cooperation. The good news about this is that perhaps cooperation can be
enhanced in quite a cost-effective manner, requiring no face-to-face discussion or
other time-consuming meetings (see Meleady et al., 2013).
Support for Structural Solutions. One final issue being addressed concerns structural
solutions to social dilemmas, which involve changing the decision-making
authority (e.g., by electing a leader), the rules for accessing the common resource,
or the incentive structure facing decision makers (e.g., by making the cooperative
response more attractive). In the lab, the most heavily studied structural solution
has been the election of a leader. Many early studies showed that people were more
likely to elect a leader when the group had failed to achieve optimal outcomes
in a social dilemma (e.g., underprovided a public good, or overused a common
resource; Messick et al., 1983; Van Vugt & De Cremer, 1999). Additional research
shows that, after a group has failed, willingness to elect a leader tends to be higher
in the commons dilemmas (as opposed to the public goods dilemmas) (e.g., Van
Dijk, Wilke, & Wit, 2003); when collective failure is believed to be the result of task
difficulty (as opposed to greed) (Samuelson, 1991); and among those with a pro-
social (vs. a proself) orientation (De Cremer, 2000; Samuelson, 1993). Research
comparing different leadership alternatives shows that group members are more
likely to support democratic (versus autocratic) leaders, and to stay in groups led
by democratic (versus autocratic) leaders (Van Vugt et al., 2004).
Beyond the lab, a number of field studies have also explored support for struc-
tural solutions, many rooted in Samuelson’s (1993) multiattribute evaluation
■ BASIC ISSUES
assigned substantial weight to the outcomes for the other at the expense of their
own outcomes (for further evidence, see Batson, 2011; for further illustrations, see
Caporael, Dawes, Orbell, & van de Kragt, 1989; Van Lange, 2008).
not going to reciprocate. And finally, sometimes it may not be wise to emphasize
equality in relationships, groups, and organizations. For example, in marital rela-
tionships, a discussion about equality may well be an indicator that a couple is
on its way to divorce, perhaps because such discussions can undermine genuine
other-regarding motives (e.g., responding to the partner’s needs; Clark & Mills,
1993). Similarly, in groups and organizations, communicating equality may lead to
social book-keeping that may undermine organizational citizenship behavior, the
more spontaneous forms of helping colleagues that are not really part of one’s job
but are nonetheless essential to the group or organization.
which non-cooperators are in some way punished, in that they are no longer part
of the group. This could mean that they no longer benefit from group outcomes,
but we suspect that the social aspects of even very subtle forms of exclusion can
yield powerful effects on the non-cooperators’ feelings and behavior. Indeed, there
is evidence that very subtle forms of social exclusion may activate those regions
of the brain that are associated with physical pain (Eisenberger, Lieberman, &
Williams, 2003). In short, while aggression is often undesirable, it may at times
serve a vital function in maintaining cooperation within the larger group.
In Lamalera, Indonesia, the sun rises while eight men prepare two small boats
for sea. They are going to hunt whales to provide food for their community.
Each looks at the others, knowing he might have benefited from a few more hours
of sleep and from doing something that day to help his immediate family, while
hoping others would go to sea in search of whales. But each also understands
that if everyone behaved that way, then the community would go without this
vital resource.
At the same time, a small rural village in India turns on its only generator pro-
viding electricity for the community. This provides enough electricity for each
household to use a single light in their homes. However, if each household uses
more than that, then the generator fails and the community is left without electric-
ity. One household decides it is late in the evening and that it should be fine to turn
on a fan, since other households likely have their lights off. But fans require more
electricity than lights and too many households have decided to do the same. As the
individual turns on his fan, the generator fails and the community loses electricity.
A university student in the United States decides to complete her portion of a
group assignment for class. She has many tempting alternative options for spend-
ing her time, but she understands that her efforts will prove valuable to the group
project. She spends a good portion of her day working on the project.
At any single moment, people all over the world are being faced with social
dilemmas. Although such dilemmas can vary substantially in terms of the behav-
iors (e.g., whale hunting, using electricity, and homework) and outcomes (e.g.,
the provision of food, conserved energy, and a good grade), these situations all
share the similar underlying structure of social interdependence—that is, they all
involve a conflict of interest experienced by the persons facing the dilemma. The
point is that social dilemmas are universal phenomena and no human living on
any part of the planet is free from facing such dilemmas.
This simple fact has raised several interesting issues about the study of human
cooperation. First, if social dilemmas are a persistent and universal problem that
humans face in their social environments, and assuming that these problems have
been a recurring theme in our ancestral past, then it may be that humans have
evolved a set of species-typical adaptations to deal with these problems. As we
have seen in Chapter 3 with its discussion of the evolutionary issues, this is a very
likely possibility, even though there may be additional processes at work that affect
cooperation. A second issue, however, is that if humans all over the globe face
social dilemmas, we might see that different groups of humans possess different
strategies for approaching these dilemmas. Although cross-societal variability by
Daniel Balliet had primary responsibility for preparation of this chapter.
no means excludes the possibility that humans possess adaptations for behaviors
to deal with social dilemmas, the study of cross-societal variation in cooperation
has been primarily approached with a focus on the proximate social environment
and psychological mechanisms.
This chapter explores what we know about cross-societal variation in cooperation
in social dilemmas. It will become clear that there is substantial
variability in how people think and behave in social dilemmas around the world—
from small-scale hunter-gatherer societies (Henrich et al., 2001) to large-scale
industrialized societies (Herrmann et al., 2008). We start our discussion by draw-
ing attention to research that establishes this variability across ethnicities and
societies. Although it is important to note this variability, it is more interesting to
explore explanations for this variability in cooperation. To explain this variability
between different ethnicities and societies, social scientists have emphasized the
importance of culture.
Culture is a broad concept, so it will benefit us to take a moment and discuss
what this concept entails. Before addressing the concept of culture, we should note
that various traditions or lines of research can be subsumed under the multifac-
eted concept of culture. Also, comparisons among cultures in studies on trust and
cooperation go back almost to the very beginning of research on social dilemmas
and related games (e.g., Kelley et al., 1970; Madsen & Shapira, 1970). We will dis-
cuss some of that older literature, but our focus will be on the more recent research
on culture that has tended to compare several societies, and that builds on classic
research by exploring the key explanations of differences among societies—and by
exploring whether and why some of the classic factors that might promote coop-
eration (see Chapters 3 and 4) are equally effective in different cultures. In light
of the increased focus on culture in the social dilemma literature, especially in
the last decade, we should note that this particular body of research and theory is
still relatively young. Many findings provide preliminary, rather than conclusive,
answers to the important yet intricate questions about culture and human coop-
eration. At the same time, important insights have been generated and substantial
progress has been made, so we feel that it is timely and important to provide an
overview of this growing topic of research.
After an attempt to clarify the concept of culture, we will discuss ideas and
research about how culture relates to cooperation. We will begin by discussing one
promising line of research on the informal sanctioning of social norms. Beyond
social norms of cooperation however, we will also discuss efforts in cross-cultural
psychology that emphasize cultural differences in values and beliefs, and address
their importance for understanding cultural variation in cooperation. We will end
this chapter by discussing some implications of globalization for cooperation
in global-scale social dilemmas.
The most basic question to answer in this line of research is: Does coopera-
tion vary across ethnicities and societies? For example, do we observe differing
Cultural Perspectives ■ 81
Some of the earliest work testing for variation in cooperation between ethnic
groups and societies compared levels of cooperation observed across Mexican
Americans, Caucasian Americans, and Mexicans. This research found that Caucasian
American children were generally less cooperative than Mexican American
children (Avellar & Kagan, 1976; Kagan, Zahn, & Gealy, 1977; Knight & Kagan,
1977a, 1977b; McClintock, 1974), but that both Caucasian and Mexican American
children tended to be less cooperative than Mexican children (Kagan & Madsen,
1971, 1972; Knight & Kagan, 1977a; Madsen, 1969; Madsen & Shapira, 1970).
Interestingly, third generation Mexican American children have been found to
be less cooperative compared to second generation Mexican American children
(Kagan & Knight, 1979), suggesting that the extent of a family’s integration into
a society affects the socialization of their children’s social behavior. Indeed, for
the most part, this research has interpreted the above-mentioned findings by
referring to different socialization processes that resulted in the social learning
of different norms of cooperation and competition. Yet, some limitations of
this earlier work are that it overemphasized comparing ethnic groups within the
same country, that it often did not compare multiple cultural groups simultaneously,
and that it did not test hypotheses about the observed differences in cooperation.
A landmark study conducted by Toda, Shinotsuka, McClintock, and Stech
(1978) overcame two limitations of earlier work by examining samples from dif-
ferent societies and comparing more than two societies at once. Specifically, they
compared the competitive behaviors of children playing a dyadic maximizing
difference game for 100 trials in five different countries: Belgium, Greece, Japan,
Mexico, and the United States. Moreover, they examined children in each country
at three different ages (school grades 2, 4, and 6). They found some evidence for
cross-societal variation in competition, with the Japanese being the most competi-
tive and Belgians being the least competitive, but they also found some substantial
similarities between countries, such as an increase in competition across trials and
with age. Toda and colleagues noted that socialization differences may account
for the societal differences in competition observed among the younger children,
but that each culture displayed a substantial increase in competition over time,
suggesting that children across cultures are socialized to possess similar values for
competition.
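To see why the maximizing difference game serves as an index of competitiveness, it helps to look at its payoff structure: the competitive choice never improves one’s own outcome, only one’s advantage over the other. The payoff values below are our own illustration (the exact numbers used by Toda and colleagues may differ):

```python
# (row choice, column choice) -> (row payoff, column payoff).
# "A" is the joint-gain choice, "B" the difference-maximizing choice.
MDG = {
    ("A", "A"): (6, 6),
    ("A", "B"): (0, 5),
    ("B", "A"): (5, 0),
    ("B", "B"): (0, 0),
}

def own_gain(choice, other):
    """The chooser's absolute payoff."""
    return MDG[(choice, other)][0]

def relative_gain(choice, other):
    """The chooser's payoff advantage over the other player."""
    mine, theirs = MDG[(choice, other)]
    return mine - theirs

# Choosing B never earns more than A in absolute terms...
assert own_gain("A", "A") > own_gain("B", "A")
assert own_gain("A", "B") == own_gain("B", "B")
# ...but it is the only way to come out ahead of the other player.
assert relative_gain("B", "A") > relative_gain("A", "A")
```

Because the competitive choice sacrifices (or at best matches) own gain, its frequency in children’s play offers a relatively clean behavioral measure of the motivation to beat the other.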
Subsequent research has emphasized the strategy of comparing adults’ behavior in
social dilemma experiments conducted in just two modern societies at a time. For
example, Hemesath (1994) compared the behavior of a sample of Americans
and Russians in a series of one-shot prisoner’s dilemmas. He found that while
Americans tended to cooperate on 51% of the trials, the Russians cooperated on
72% of the trials, suggesting that Russians are more cooperative than Americans.
Americans have also been compared to several other countries in terms of their
cooperation. Parks and Vu (1994) compared rates of cooperation in a public
goods game and resource dilemma between American and South Vietnamese
participants. They found that the Vietnamese displayed much more cooperation
than Americans—and that the Vietnamese even continued to cooperate when
partnered with a pre-programmed strategy that always defected. Americans
have also displayed less cooperation compared to Chinese samples (Domino,
1992; Hemesath & Poponio, 1998) and a Czech Republic sample (Anderson,
DiTraglia, & Gerlach, 2011).
Yet, although Americans have been found to cooperate less than the Chinese,
Czechs, Russians, and Vietnamese, research has also found that Americans coop-
erate at similar levels compared to Dutch participants (e.g., Liebrand & Van Run,
1985) and Colombian participants (Carpenter & Cardenas, 2011). Vietnamese
have also been shown to be similarly cooperative as Thai participants during a
public goods game (Carpenter, Daniere, & Takahashi, 2004). Although in one study Americans were less cooperative than Vietnamese, and in another the Vietnamese were found to be equally cooperative as Thai participants, we cannot conclude from this evidence that Americans are less cooperative than Thai participants.
What can we conclude from these studies? They illustrate, as do some older
studies, that ethnic groups within the same country and individuals from differ-
ent countries can differ quite strongly in their responses to similar social dilemma
tasks. This by itself has important practical value for research conducted in various
laboratories around the world. It is possible that in some countries, social dilemma
tasks are approached with a different mindset than in other countries—and such
mindsets may also be relevant to how people from different countries might
respond to certain features of the experiment and/or experimental manipulations.
At the same time, these studies do not provide much evidence for underlying vari-
ables that might explain these differences. It is difficult to draw specific conclu-
sions about cross-societal variation in cooperation from the results of studies that
compare two countries, because each specific study has unique features that can
affect levels of cooperation. Also, differences in the experimental instructions, the
procedure for recruitment of participants, or language and translation issues, to
name just a few, might explain some of this variation in cooperation across eth-
nicities and societies.
Fortunately, there have also been some programs of research that sought to
address complex issues in cross-national research, especially by using highly
standardized experimental protocols. In one such program of research, Henrich
and colleagues (2001) compared 15 small-scale societies in their generosity dur-
ing a dictator game. Their sample of societies came from South America, Africa,
Mongolia, Papua New Guinea, and Indonesia, and included hunter-gatherers, hor-
ticulturalists, nomadic herding groups, and agriculturalists. They found consider-
able variation across societies in their generosity towards strangers—measured
Cultural Perspectives ■ 83
the participants choosing to contribute to the public good. Thus, Cardenas and colleagues (2008) found evidence for both similarities and differences among Latin American countries in their willingness to make contributions to public goods.
Is there cross-societal variation in cooperation among large-scale societies
beyond Latin American countries? Herrmann et al. (2008) conducted public goods
experiments across 16 different large-scale societies, including Denmark, Greece,
Turkey, Saudi Arabia, Russia, the United States, and others. A unique aspect of
their design is that in each society they conducted public goods experiments both
with and without the opportunity to punish others in the public goods dilemma.
Overall, they found that societies differed in cooperation in both conditions, as well
as across conditions. For example, in the no-punishment condition, participants
from Denmark, Switzerland, and the United States demonstrated higher amounts
of cooperation compared to countries such as Turkey and Australia. When pun-
ishment opportunities were present, Denmark, Switzerland, and the United States
were all more cooperative than Greece, Saudi Arabia, and Turkey. Also, while pun-
ishment opportunities tended to increase cooperation rates compared to the con-
dition with no-punishment opportunities for some countries (e.g., China, South
Korea, and the United Kingdom), in other countries the opportunity for punish-
ment did not increase cooperation (e.g., Greece, Oman, and Turkey). This work
clearly demonstrates that large-scale modern societies do differ in terms of their
tendencies to cooperate with unrelated, anonymous strangers in laboratory social
dilemmas. This work also suggests that while there may be differences, some soci-
eties are also more similar to each other in terms of their cooperation. Moreover,
these findings complement other behavioral experiments that find cross-societal
variation in bargaining behavior (Oosterbeek, Sloof, & Van de Kuilen, 2004; Roth,
Prasnikar, Okuno-Fujiwara, & Zamir, 1991).
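The two-stage design behind these findings can be sketched as follows. The parameter values here (an endowment of 20, a multiplier of 1.6, and punishment that costs the punisher 1 point per 3 points deducted from the target) follow common implementations of this paradigm and are illustrative rather than a claim about the exact protocol.

```python
# A sketch of one round of a public goods game with a punishment stage,
# in the spirit of the Herrmann et al. (2008) design. Parameter values
# are common illustrative choices, not the exact protocol.

ENDOWMENT = 20
MULTIPLIER = 1.6     # summed contributions are scaled by this factor
PUNISH_COST = 1      # cost to the punisher per punishment point assigned
PUNISH_IMPACT = 3    # deduction from the target per punishment point

def round_payoffs(contributions, punishments=None):
    """contributions: one amount (0..ENDOWMENT) per player.
    punishments: optional matrix where punishments[i][j] is the number
    of punishment points player i assigns to player j.
    Returns per-player payoffs after both stages."""
    n = len(contributions)
    share = MULTIPLIER * sum(contributions) / n
    payoffs = [ENDOWMENT - c + share for c in contributions]
    if punishments is not None:
        for i in range(n):
            for j in range(n):
                payoffs[i] -= PUNISH_COST * punishments[i][j]    # punisher pays
                payoffs[j] -= PUNISH_IMPACT * punishments[i][j]  # target loses
    return payoffs

# Without punishment, a free rider out-earns the full contributors:
print(round_payoffs([20, 20, 20, 0]))  # [24.0, 24.0, 24.0, 44.0]
```

If each contributor then assigns three punishment points to the free rider, the free rider's advantage disappears, which is the mechanism by which punishment can sustain cooperation where the local norms support its use.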
Taken together, the studies by Cardenas et al. (2008) and Herrmann et al. (2008) used standardized public good dilemma tasks and involved a large number of societies. These qualities are important: these ambitious projects provided convincing evidence that both meaningful similarity and meaningful variation in cooperation exist across large-scale societies. Moreover, the work by Herrmann and colleagues (2008) also makes the important point that societies
share similarities and reveal differences in their responses to the availability of
punishment. We will return to this specific finding later, when discussing how
culture might help us understand why some variables might impact cooperation.
For the remainder of this chapter, we will report research aimed at explaining this variation. Perhaps the construct most frequently invoked to explain ethnic and societal variation in cooperation is culture (e.g., Boyd & Richerson, 2009; Gächter, Herrmann, & Thöni, 2010; Henrich & Henrich, 2006; Kopelman, 2008; Weber & Morris, 2010). Let's take a moment to consider what culture is.
■ CULTURE AND CROSS-SOCIETAL VARIATION IN COOPERATION
Many perspectives on culture exist. Yet, a common theme that runs across
these conceptualizations is that culture simultaneously exists outside and within
the minds of the individuals in a collective. That is to say, culture can involve the institutions within societies, the products people use on a daily basis, the recurring interactions with specific others, and the patterns of behavior observed across interactions. These features of our social environment encourage individuals to adopt similar patterns of beliefs, values, personalities, and
behaviors. A recent definition is that "culture consists of explicit and implicit patterns of historically derived and selected ideas and their embodiment in institutions, practices, and artefacts; cultural patterns may on the one hand be considered as products of action, and on the other hand as conditioning elements of further actions" (Adams & Markus, 2004, p. 341).
Certainly, this is a complicated definition for a complex construct and there
should be no surprise that the study of the relation between culture and coopera-
tion is multifaceted, with many different researchers emphasizing different aspects
of culture. Researchers have also employed diverse methodologies in exploring the
relation between culture and cooperation. One long-standing approach involves
ethnographies of different cultures (e.g., Mauss, 1990; Sahlins, 1972). Our empha-
sis, however, will be on a different approach. In keeping with the focus of this
book, we will review relatively recent research using experimental games across
cultures to understand cross-societal variation in cooperation. That is, like the
studies that we reviewed earlier, we will focus on how researchers have attempted
to explain the variation that exists across societies in terms of their cooperation
displayed in experimental social dilemmas.
Although the research reported above clearly demonstrates the existence of
cross-societal variation, much of that work does not directly address the role of
culture in explaining this variation. It is important to demonstrate whether culture shapes how societies differ in their responses to key experimental procedures. Moreover, it is important to test whether classic cultural variables can explain cross-societal differences in both cooperation
and responses to key experimental manipulations. Both of these latter issues are
important, because even though some cultures may share similar levels of overall
cooperation, one of those cultures may benefit more or less from introducing a
specific strategy for encouraging cooperation.
To illustrate this point, let us return to the study by Herrmann and colleagues
(2008). Recall that these researchers found that punishments can increase coop-
eration in certain societies, but that punishments failed to increase cooperation in
other societies (also see Balliet et al., 2011). Much research is needed to understand
how certain solutions for cooperation may be more or less effective at enhancing
cooperation in specific societies. While two societies may display similar levels of
cooperation, it may be that one strategy for increasing cooperation may work in
one society but not the other. For example, while one society may be able to sustain cooperation through informal sanctions imposed by similar-status peers, in other societies an authority figure with the ability to monitor and sanction the behavior of others may be necessary to encourage
cooperation. Such questions await future research. Thus, a more nuanced form
of cross-cultural work is needed to understand the many possible routes to coop-
eration across societies. Such work will definitely gain direction by a theoretical
consider what aspects of culture have been studied in relation to cooperation and
address how this research approaches these two objectives. Specifically, we will
focus on research on three defining features of culture and their relation to coop-
eration: social norms, values, and beliefs.
Some of the cultural variation around the world may be due to the different social norms that emerge among local groups of people and that are used to guide and evaluate behavior (Ostrom, 2000). Social norms are shared expectations about which behaviors are obligatory, permitted, or forbidden in a specific context. People who violate these expectations tend to be formally
or informally sanctioned by others. Social norms can be about many different
behaviors. Social norms may include standards about how people should dress
for work, how we should greet others in public, how to eat food, and rules
about whether people should recycle, litter, or pay taxes.
One broad distinction may be made between (a) conventional norms, and
(b) moral norms (Turiel, 1983). Conventional norms pertain to those aspects of
social behavior that help people coordinate by developing rules for specific behav-
iors and interactions. For example, rules about how to dress at work, greet others,
and eat food may be considered conventional norms. Moral norms, however, per-
tain to those aspects of social behavior that help people develop patterns of social
exchange that involve some conflict of interest (e.g., recycling, littering, and paying taxes). The norm to "do unto others as others do unto you," the norm of generalized reciprocity (e.g., Gouldner, 1960), is a clear example of a moral norm. Likewise,
to contribute one’s fair share to an important group outcome, a norm of fairness, is
also an example of a moral norm. Moral norms are especially important to social
dilemmas and may provide solutions to social interactions characterized by a con-
flict between self and collective interests. Conventional norms, on the other hand,
are relatively more important to situations that involve coordination problems,
compared to social dilemmas (Kelley et al., 2003).
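The norm of reciprocity can be expressed behaviorally as a strategy for a repeated Prisoner's Dilemma: do to your partner what your partner last did to you, the classic tit-for-tat rule. A minimal sketch, using conventional illustrative payoff values:

```python
# The norm of reciprocity as a strategy in a repeated Prisoner's Dilemma:
# cooperate first, then copy the partner's previous move (tit-for-tat).
# The payoff values are conventional illustrative numbers.

PD = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(partner_moves):
    """Cooperate on the first trial, then copy the partner's last move."""
    return partner_moves[-1] if partner_moves else "C"

def always_defect(partner_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=5):
    """Run the repeated game; each strategy sees the other's move history."""
    moves_a, moves_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(moves_b), strategy_b(moves_a)
        moves_a.append(a)
        moves_b.append(b)
        pay_a, pay_b = PD[(a, b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (15, 15): reciprocity sustains cooperation
print(play(tit_for_tat, always_defect))  # (4, 9): exploited once, then mutual defection
```

Two reciprocators sustain mutual cooperation indefinitely, while a reciprocator facing a defector is exploited only on the first trial, which illustrates why a reciprocity norm can stabilize cooperation in ongoing exchange.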
Although cultures differ in terms of both types of norms, we will pay attention
to moral norms in particular because they are most relevant to social dilemmas.
Specifically, we focus on norms that are fueled by expectations about how people
should behave toward others during social dilemmas (Henrich & Henrich, 2006).
These social norms for cooperation would then be maintained by informal (and
sometimes formal) social punishment of norm violators (e.g., Boyd & Richerson,
2009; Mathew & Boyd, 2011). Moreover, over time, groups may become more different from each other as a result of imitation of successful group members, assimilation of migrants into existing cultures, and between-group competition (Boyd & Richerson, 2009; Gintis, 2003). Such social norms may explain why in some societies people simultaneously expect others to make contributions to public goods
and demonstrate a willingness to punish others who violate those expectations,
while other societies tend not to possess such expectations for cooperation.
Recall that Henrich and colleagues (2006) found considerable variation in the
dictator game across societies. In some small-scale societies, people were quite
This is perhaps one of the most challenging questions one can ask about cul-
ture and cooperation. Although more research is needed to address this impor-
tant question, we will review some recent research that provides some tentative
answers. In particular, in their research on small-scale societies, Henrich and
colleagues (2010) considered two possible explanations for why some small-scale societies tend to possess norms of cooperation: (a) the extent of market integration,
and (b) societal members subscribing to a world religion. In this research, they
sampled 15 small-scale societies that varied in terms of both market integration
and religion. Market integration involves the extent to which a society includes
frequent anonymous interactions between unrelated persons. The researchers indexed this across societies by measuring the percentage of household calories purchased in the market. World religions have also been suggested
to cultivate prosocial norms amongst individuals sharing the religion. Thus,
religion may be a social institution that helps to maintain social cohesion and
cooperation amongst societal members. It may accomplish this by the threat
■ CULTURAL VALUES AND COOPERATION
studies did not measure differences between their samples in endorsing these val-
ues, and most certainly, these two countries vary along several other dimensions
besides individualism-collectivism. Subsequent research has attempted to estab-
lish the relation by measuring both cultural values and cooperation in samples
from the same culture (e.g., Cox, Lobel, & McLeod, 1991; Probst, Carnevale, &
Triandis, 1999).
For example, Probst, Carnevale, and Triandis (1999) examined whether people within the same country (the United States) who differ in self-reported individualist and collectivist values also differ in their cooperation during social dilemmas. In
their measurements they distinguished how individualists and collectivists vary
according to a horizontal versus vertical view of social relations. Specifically, while
horizontal collectivists view the group as important for their self-concept and view
group members as relatively similar in status, vertical collectivists similarly view
the group as important for defining themselves, but tend to emphasize a hier-
archical structure within the group. Horizontal individualists view themselves
as distinct from others, but have an egalitarian perspective on social relations.
Vertical individualists, on the other hand, view the self as an autonomous entity
and expect inequality between individuals. They measured individual differences
in these values and then had participants interact for ten trials either in a standard
Prisoner’s Dilemma or an inter-group Prisoner’s Dilemma—a dilemma whereby
people decide among contributing to their individual account, their own group’s
public good, or to a global public good that involves an additional group. The
inter-group dilemma provides an added dilemma between the participants’ own
group and doing what is best for all people from each group facing the dilemma. In
the inter-group Prisoner’s Dilemma everyone benefits if people contribute to the
global account, but each group would benefit relatively more by contributing more
to their own group’s account and not the global account.
Probst and colleagues (1999) found that vertical collectivists displayed rela-
tively greater amounts of cooperation during the standard Prisoner’s Dilemma,
compared to the inter-group dilemma. However, they found the exact opposite
results for vertical individualists—vertical individualists were more coopera-
tive in the inter-group Prisoner’s Dilemma, compared to the standard Prisoner’s
Dilemma. Horizontal individualists and collectivists did not vary in their amounts
of cooperation across these contexts. Their reasoning was that vertical individ-
ualists value winning competitions and that to win during the inter-group con-
text required cooperation with in-group members, but winning in the standard
Prisoner’s Dilemma required defection. Vertical collectivists, however, were
inclined to place the group’s interests above their own interests in both contexts.
During the standard Prisoner's Dilemma, it is obvious that cooperation results in a greater outcome for the group. In the inter-group dilemma, however, defection results in a greater outcome for both groups. Vertical collectivists seemingly identified with both groups (since all participants were in the same room and were students from the same university) and therefore chose to defect. This
research demonstrates that cultural values are relevant to informing behavior dur-
ing social dilemmas. What is needed now are programs of research that extend these findings to understand differences across societies. Another limitation
of this research is that it is correlational. Yet, recent evidence does suggest that
collectivist and individualist cultural contexts may play a causal role in informing
choice during social dilemmas.
One way to determine if collectivist-individualist cultures have a causal impact
on cooperation is to create these climates in a laboratory setting and observe their effects on cooperation. This is exactly the method used by Chatman and Barsade
(1995) in observing the effects of collectivism-individualism on cooperative
work behavior. Chatman and Barsade (1995) had a group of M.B.A. students
interact in a workplace simulation that involved cooperation with several other
“co-workers.” They randomly assigned these students to work for an organization
that espoused either collectivist or individualist values. For example, in the
individualist organization participants were told that individual effort was valued
and rewarded, while in the collectivist organization group efforts were valued and
rewarded. Participants were randomly assigned to work in these simulated orga-
nizations, doing various tasks for two and a half hours. Afterwards, each person rated the cooperativeness of each member of their group. They found that
people rated their partners as more cooperative in the collectivistic organization,
compared to the individualistic organization. By directly manipulating a collec-
tivist versus individualist work climate, this research suggests that a collectivist
climate may play a causal role in promoting cooperation.
Another method to observe whether cultural values have a causal effect on
behavior is to activate already learned cultural values that pre-exist in the minds
of bicultural individuals. Recent research in the field of cultural psychology has
found that people may have knowledge of multiple cultures (Kitayama & Cohen,
2007). Several of these studies have demonstrated that bicultural participants
change their cognitions and behavior toward patterns consistent with a particu-
lar culture when provided cues of those cultures (Cohen, 2007; Oyserman & Lee,
2007). Wong and Hong (2005) examined whether this was the case for cooperation
amongst a sample of participants from Hong Kong. They reasoned that Hong
Kong university students have had enough exposure to American media that
they may have internalized some aspects of American culture, including indi-
vidualist values. Thus, they predicted that providing reminders of either Chinese
culture or American culture would influence these Hong Kong participants to
behave either like Chinese or Americans during a Prisoner’s Dilemma game,
respectively. An added feature of their experiment was that they manipulated
whether the participants interacted with a friend or a stranger. They hypothesized that participants primed with Chinese culture would be more cooperative with a friend than with a stranger, whereas participants primed with American culture would not distinguish between the two
conditions. They found that priming the participants with symbols of Chinese
culture resulted in higher expectations of cooperation and own cooperation,
but only when participants were interacting with a friend and not while they
were interacting with a stranger. Their cooperation was also higher in the friend
condition, compared to when they were primed with symbols from American
culture. This single study provides preliminary evidence that exposure to subtle cues that serve as reminders of cultural differences can causally affect an individual's level of cooperation.
Unfortunately, not much research has been done pursuing the specific variables
that may explain the link between cultural values and cooperation. Preliminary
work on this issue, however, suggests that the relation between collectivism and
cooperation may be explained by group efficacy. Prior research has shown that
people tend to be more cooperative when they feel that their behavior makes
a difference to the group outcome (self-efficacy) and when they expect that
their group can achieve its goals (collective efficacy) (Kerr, 1989). Earley (1993)
showed in samples from the United States, Israel, and China that collectivism
predicted cooperation in a group task, and that collective efficacy explained
the link between collectivism and cooperation. Thus, collective efficacy may
provide one clue about how cultural values relate to cooperation. Individuals
who are concerned about group outcomes and who believe that others in their
group similarly value the group over themselves will tend to believe that their
group will be effective at reaching their goal (e.g., a public good or resource
conservation), and this feeling of collective efficacy may promote a stronger
tendency to cooperate.
Cultural values may determine the effectiveness of certain solutions for promot-
ing cooperation. For example, does a sense of social responsibility, social identity,
anonymity, and/or group size affect cooperation in the same way in both collec-
tivist and individualist cultures? Here we discuss how certain well-established solutions for maintaining cooperation may affect cooperation differently, depending on the cultural values of the participants facing the dilemma.
Earley (1989) hypothesized that collectivist cultures would be more coopera-
tive than individualist cultures during a cooperative group task, but that induc-
ing a feeling of shared responsibility for the group outcome would only increase
levels of cooperation among persons in the individualist cultures. Earley (1989) had trainees for entry-level managerial positions, from either the United States or China, complete as many tasks as possible that had been placed in their in-box. They worked individually to complete these tasks in one hour. However, they were
also placed in a group and told that their output would be added to the output of
a group of co-workers. To manipulate a sense of shared responsibility, Earley told
participants that they were either one of ten managers working toward a com-
mon group goal of 200 items (high shared responsibility), or told participants they
could expect to complete about 20 individual items and were not told anything
about a group goal (low shared responsibility). The primary dependent variable of
interest was the amount of work accomplished in one hour. The study was framed
as a study of social loafing, but this in-box paradigm may also be considered a
form of a social dilemma (see Joireman, Daniels, et al., 2006).
Earley (1989) replicated prior research and found that the Chinese were rel-
atively more collectivist than the Americans. Collectivism also related to more
cooperation (individual output) in the experiment. Most important, however,
was that collectivism moderated the relation between the manipulation of shared responsibility and cooperation: shared responsibility tended to increase cooperation amongst individualists, but not collectivists. Prior research has found
that a shared sense of responsibility for a public good increases cooperation in
social dilemmas (De Cremer & Barker, 2003; De Cremer & Van Lange, 2001), but
this work has been primarily conducted in Western societies. One implication of
this work is that inducing a sense of shared responsibility may be one solution for
increasing cooperation in individualistic societies, but may not work as well in
collectivist societies.
Promoting an in-group identity, enhancing identifiability (or decreasing ano-
nymity), and reducing group size may also have different effects on cooperation
depending on the cultural values of participants facing the dilemma. In a separate
study, Earley (1993) hypothesized that collectivist cultures would be more coop-
erative with in-group members, relative to out-group members, and that people
from individualistic cultures would differentiate less between these two groups in
terms of cooperation. Earley compared samples from the United States, Israel, and
China on their behavior during the in-box paradigm described above. An addition
to the paradigm is that he manipulated whether the participants were interact-
ing with an in-group member, out-group member, or simply worked alone. Both
Israel and China scored significantly higher on collectivism relative to the United
States. He found that both the Israelis and Chinese cooperated significantly more
in the in-group condition, compared to the out-group condition or individual
work condition. However, the Americans worked harder when they worked alone,
compared to the in-group and out-group conditions (which they did not differen-
tiate in terms of cooperation). While interacting with an in-group member, both
collectivist cultures, China and Israel, were indistinguishable in their levels of
cooperation, and both countries displayed greater amounts of cooperation compared to the American sample. Thus, supporting Earley's hypothesis, collectivist cultures tend to be more cooperative with in-group members than with out-group members, while individualistic cultures may not differentiate between these two groups.
To examine whether group size and an enhanced identifiability of contributions would
affect cooperation differently depending on cultural values, Wagner (1995) had
students from the United States work on a group task throughout the semester
and then rate the cooperativeness of their peers. He found that individual dif-
ferences in collectivist values positively related to peer ratings of cooperation
during the group task. Importantly, individual differences in self-reported collectivism moderated the relations of group size and identifiability with cooperation. For example, prior research has found that group size negatively relates to
cooperation (Bonacich, Shure, Kahan, & Meeker, 1976; Hamburger, Guyer, & Fox,
1975). However, Wagner (1995) only found this negative effect among individual-
ists. Group size did not affect the amount of cooperation amongst collectivists.
However, identifiability did positively affect cooperation amongst individualists,
but not collectivists. One important implication of this research is that prior con-
clusions from research on identifiability and group size with cooperation have
been limited to Western cultures, and these conclusions may not readily generalize.
■ CULTURAL BELIEFS AND COOPERATION
Cultures not only differ in their values, but also differ in their beliefs about
social relations (see Bond et al., 2004). According to Bond and colleagues (2004), cultures may differ in the general beliefs that people use to direct
their behavior on a daily basis. For example, cultures differ in several types
of beliefs, including beliefs about a supreme being (religiosity) and beliefs that
life events are predetermined (fate). A belief that is central to understanding
cooperation is the belief about the extent to which other people are trustworthy
(or not). This belief has been referred to as social cynicism in cross-cultural
research (Bond et al., 2004), but may be considered as cross-cultural differences
to the responder is tripled, but the amount allocated to the self remains the same.
Next, if the responder receives any money, they decide how much to keep for themselves and how much to return to the allocator. During this transfer all money retains its original value, and afterward the interaction is over. This research found that while the median offer was only 25% of the endowment in Colombia, in all the other samples (Argentina, Costa Rica, Peru, Uruguay, and Venezuela) the median offer was 50% of the endowment. The behavior of the responder (considered a measure of trustworthiness) mirrored these results. Colombian participants provided a median return of 14%, while the other samples showed a higher median return of approximately 25% (ranging from 20% to 28%). Thus, using behavioral measures of trust, Cardenas and colleagues found evidence for both cultural differences and similarities in trusting and trustworthy behaviors.
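The arithmetic of this trust game can be sketched as follows. The endowment size is an assumption for illustration, and the responder is modeled as keeping whatever portion of the tripled transfer is not returned.

```python
# A sketch of the trust game described above: the amount the allocator
# sends is tripled in transit, while the responder's back-transfer keeps
# its original value. The endowment of 100 units is an assumption, and
# the responder keeps whatever is not returned.

def trust_game(endowment, sent, return_fraction):
    """Return (allocator_payoff, responder_payoff)."""
    received = 3 * sent                    # the transfer is tripled
    returned = return_fraction * received  # the back-transfer is not multiplied
    return endowment - sent + returned, received - returned

# The median pattern reported for the least trusting sample: send 25% of
# the endowment and return about 14% of the tripled amount.
allocator, responder = trust_game(100, 25, 0.14)  # allocator ~85.5, responder ~64.5
```

Note how trust pays only when matched by trustworthiness: the allocator breaks even only if the responder returns at least one third of what is received, which is why back-transfers are read as a behavioral measure of trustworthiness.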
Trust in others is likely the most relevant belief people possess that may help us
understand cross-cultural variation in cooperation. Nevertheless, there may be
other beliefs that directly or indirectly affect levels of cooperation. For example,
cross-cultural differences in beliefs in God (i.e., religiosity) can affect cooperation rates (Johnson, 2005; Johnson & Bering, 2006). Johnson (2005) used data from the Standard Cross-Cultural Sample, which includes 186 societies, and related cross-societal variation in belief in a "high god" to various indexes of cooperation within societies. He found support for the hypothesis that cross-cultural
differences in the belief in a supernatural deity that possesses the ability to pun-
ish selfish behavior positively relates to cooperation across societies. Similarly,
Henrich and colleagues (2010) found that the number of persons subscribing
to a major world religion predicted cooperation rates across 15 small-scale societies.
■ BASIC ISSUES
Clearly, the topic of culture in social dilemmas is timely in several respects. This
research answers recent calls to broaden the scope of psychological
research beyond largely Western samples (e.g., Henrich et al., 2010), and takes
advantage of theoretical advances about how culture can shape social behav-
ior, including cooperation. Moreover, several large-scale societal problems, such
as sustainable resource consumption and environmental protection, are more
challenging now than ever before as societies become increasingly global. This
raises the intriguing question: can globalization help promote cooperation to
solve social dilemmas that transcend societal borders? Another basic issue
involves how cooperation relates to a society’s functioning. Do societies in
which cooperation is effectively promoted and sustained benefit—economically
and/or institutionally—through this enhanced cooperation? Next, we discuss
both of these basic issues.
In this experiment, participants decided how to allocate an endowment to the
individual fund (which remains the same), the local fund (which is multiplied
by two and distributed among the four local group members), or the world
fund (which is multiplied by three and distributed among the twelve members
across the three countries). This design allows for a test pitting parochial
motives against concern for a broader group. While it may benefit the
local group for each member to contribute their endowment to that group, every-
one in the experiment benefits most by each person contributing money to the
global account. The researchers were interested in explaining who contributed to
the global account by either the country’s level of globalization or by individual
differences in the measure of globalization. That is, they also measured to what
extent each person endorsed engaging in international social interactions—which
composed the globalization scale.
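The incentive structure of this design can be made concrete with a short sketch. The multipliers and group sizes come from the description above; the 10-unit endowment is an arbitrary assumption for illustration.

```python
def per_capita_return(multiplier, group_size):
    """Return to the contributor of one unit placed in a fund that is
    multiplied and then divided equally among group_size members."""
    return multiplier / group_size

# Marginal per-capita returns for the three accounts:
mpcr = {
    "individual": per_capita_return(1, 1),   # kept money: 1.00 per unit
    "local":      per_capita_return(2, 4),   # x2, split 4 ways: 0.50
    "world":      per_capita_return(3, 12),  # x3, split 12 ways: 0.25
}

def total_earnings(fund, endowment=10, members=12):
    """Total group earnings if all twelve members put their whole
    (assumed) endowment into the same fund."""
    multiplier = {"individual": 1, "local": 2, "world": 3}[fund]
    return members * endowment * multiplier
```

Privately, keeping money dominates (1.00 > 0.50 > 0.25 per unit contributed), yet the world fund maximizes the collective total, which is exactly the tension between parochial and global cooperation the design is meant to capture.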
They found that the countries that scored lower on the globalization index
(e.g., Iran and South Africa) contributed less to the global accounts, compared to
countries that scored higher on the globalization index (e.g., the United States).
Moreover, they found that the individual measure of globalization supported this
finding. Individuals who scored higher on this measure were more inclined to
make contributions to the global account. Both findings support a general conclu-
sion that globalization may foster, and not inhibit, contributions to global public
goods. Buchan and colleagues (2011) also measured the extent to which people
felt attached to, defined themselves by, or felt close to members in their local
community, their nation, or the world. Importantly, they found that across all
six countries, the extent to which people strongly endorsed being a member of
"the world" as part of their social identity predicted increased contributions to
the world account. This remained a significant predictor of cooperation even
after controlling for expectations of others' cooperation.
These initial research efforts to study the effects of globalization on contribu-
tions towards global public goods provide hope for solving such broad interna-
tional social dilemmas. This research suggests that people who are more likely
to find themselves interacting with others outside their own country are more
inclined to contribute to public goods. Moreover, people can develop a social
identity with the world, or with humanity "in general," and this positively
relates to a willingness to contribute to broader public goods that cross national
boundaries at a time of important social and ecological challenges for humans
around the world. Importantly, one implication is that globalization need not
produce a single global cultural in-group to establish cooperation, but may
increase cooperation by encouraging people from different cultures to identify
themselves as simultaneously part of their cultural group and part of a broader
group of human beings.
Several scholars have claimed that an ability for societal members to cooper-
ate in the provision of public goods underlies the success of societal institu-
tions (Henrich et al., 2010; Ostrom & Ahn, 2008; Putnam, 1993). For example,
Cultural Perspectives ■ 103
People around the world face social dilemmas. This chapter reviewed research
on cross-societal variation in cooperation during social dilemmas and discussed
several cultural explanations for this variation. Indeed, recent research has
clearly established ethnic and societal variation in cooperation. This research has
examined cooperation across a broad range of human societies, from nomadic
hunter-gatherers to large-scale industrialized societies. For example, in some soci-
eties, people are quite willing to cooperate with others in the provision of public
goods, but in other societies free-riding is rampant. Why do such differences exist?
To explain this variability in cooperation, researchers have often relied on some
aspect of culture. Although many conceptualizations of culture suggest that cul-
ture may simultaneously exist outside and within the minds of individuals in a
collective, much research has focused on cultural differences that are located in
the minds of individuals, such as values and beliefs. However, research on social
norms tends to emphasize both aspects. While social norms may be embed-
ded in the minds of individuals, people will often learn about and conform to
these norms as a result of the pattern of behaviors that exists around them—and
especially through the use of punishment to let others know they have violated a
specific norm. Here we reported on research suggesting that social norms for
cooperation and a willingness to punish norm-violators are two important aspects
of culture that may explain cross-societal variation in cooperation.
Yet, societies also vary in their values and beliefs, which, unlike social norms,
may affect cooperation in the absence of possible punishment. Here we reported
research demonstrating that cultural values of collectivism and beliefs in
others' trustworthiness are positively associated with cooperation. Perhaps an even
stronger and more important conclusion is that cross-societal differences in these
values and beliefs can determine what factors promote cooperation. To illustrate,
although collectivist cultures may have greater amounts of cooperation, compared
to individualist cultures, there are certain features of the environment that may
encourage as much cooperation amongst individualists as collectivists, including
a feeling of shared responsibility for a public good and making contributions iden-
tifiable. Moreover, these values and beliefs may themselves be more or less impor-
tant for determining cooperation in certain cultures. For example, trust may hold
important implications for sustaining cooperation in some societies, but other
societies may have norms and institutions in place that remove the necessity of
trust to promote cooperation.
Clearly, social norms, cultural values, and beliefs may be important for regulat-
ing cooperative interactions amongst individuals embedded in a cultural group.
But in many societies, people are increasingly interacting with others outside their
cultural group. This can cause several challenges for cooperation during social
dilemmas—especially since much prior work has established a strong bias to
favor in-group, relative to out-group, members. Does this spell certain disaster
for our ability to solve global social dilemmas? Not necessarily. Recent work has
found that globalization is encouraging individuals to incorporate into their
self-identity a sense of being part of a global human community. Indeed,
this global social identity promotes a willingness to cooperate in global social
dilemmas.
Lastly, scholars across the social sciences have argued that cooperation
underlies a society's ability to establish and maintain successful institutions and
may also lead to the creation of wealth. Yet there is little research supporting
this basic assumption, which underscores the importance of understanding
cross-societal variation in cooperation. Preliminary work finds that societies with
a higher level of political participation (a hallmark of a healthy democracy) also
demonstrate an ability to maintain cooperation through the use of punishment.
Future research that further explores why cultures vary in their cooperation
may help us understand why certain countries are able to create wealth and
well-functioning societal institutions, while other societies seem to lack this ability.
■ PART THREE
108 ■ Applications and Future of Social Dilemmas
workplace dilemmas. After all, these are contexts where individuals are working
together in groups and units, and faced with various dilemmas and observations
of what colleagues might do. Shall I work hard or take it easy? How hard do my
colleagues work? Shall I spend extra time to familiarize a new colleague with
the organization? We provide a review of various decisions and behaviors in
organizations that share important features of social dilemmas. In doing so, we
assume that the social dilemma literature, and the literature on management and
organizations, can be enriched. At the same time, we should note that this review
does not imply that each situation is completely consistent with the definition of
a social dilemma. For example, for people who love their work and do not mind
doing overtime, putting in extra hours would be no social dilemma: in those
situations, self-interest and organizational interest hardly conflict (we will
address this issue also in Chapter 7, when discussing applications of social
dilemmas).
Thus, we focus on various workplace dilemmas that can be analyzed in general
terms as social dilemmas. We will also look at some strategies for trying to
resolve these workplace dilemmas. But first, let us ask a more basic question: Why
worry about workplace social dilemmas? If the job ultimately gets done, why does
it matter how it was completed, or by whom? In the bank studied by Rutte, the
work was done, completed later than it could have been, but done nonetheless. So
what if the workers missed out on a nice benefit?
Labor economists have considered this issue. Their argument is that resolution
of the dilemma has a multiplicative effect on productivity: The combination of
incentive for good performance and a supportive, cooperative atmosphere leads
to productivity gains that exceed what would be expected from the effects of each
alone (Rotemberg, 1994). For example, if incentives and collegiality each increase
productivity by 10%, the combination of the two might improve productivity by
25% (i.e., more than merely adding the two together). Incentive plans are easy to
implement, but creating cooperativeness is not. One does not simply tell workers
to be cooperative (or more accurately, one could do this, but the instruction is
unlikely to take hold). It is thus important to learn how to foster long-term coop-
eration in the workplace.
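Rotemberg's multiplicative claim amounts to simple arithmetic, sketched here with the illustrative 10%/10%/25% numbers from the text; the "independent effects" benchmark is our own addition for comparison, not a figure from the source.

```python
# Hypothetical gains matching the illustration in the text.
incentive_gain = 0.10    # incentives alone: +10% productivity
atmosphere_gain = 0.10   # cooperative atmosphere alone: +10%
combined_gain = 0.25     # the illustrative combined effect

# Two benchmarks for "no interaction" between the factors:
additive = incentive_gain + atmosphere_gain                     # 0.20
independent = (1 + incentive_gain) * (1 + atmosphere_gain) - 1  # ~0.21

# The combined gain exceeds even the independent-effects benchmark,
# i.e., incentives and cooperativeness act as complements.
```

Since 25% exceeds both 20% and roughly 21%, the claimed interaction is genuinely superadditive rather than a mere compounding of two separate boosts.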
There is an interesting argument that arises within this line of reasoning,
namely, that it is possible for a cooperative environment to lead to decreases in pro-
duction. Holmström and Milgrom (1990) suggested that workers who are coop-
eratively oriented might reduce their personal levels of performance if a co-worker
clearly cannot perform at the level of others. In this way, the struggling co-worker
is protected from being singled out and punished by management. Thus, induc-
tion of long-term cooperation in workers is valuable only if the focus is on collec-
tive output. Throughout this chapter, we will assume such a focus.
We will discuss five forms of workplace dilemmas: (a) organizational citizenship
behavior, (b) non-normative work behavior, (c) knowledge-sharing dilemmas,
(d) unionization, and (e) strategic alliances. Our reviews of these literatures are
by necessity selective, focusing only on those aspects of the problems that specifi-
cally connect to social dilemmas. The reader should understand that each of these
issues is more complex than our discussions might suggest.
Management and Organizations ■ 109
influence (e.g., Aryee, Budhwar, & Chen, 2002; Williams, Pitre, & Zainuba, 2002).
Still other researchers present evidence that both forms can have some degree of
influence (e.g., Rupp & Cropanzano, 2002). It is, however, clear that distributive
justice, which focuses on how outcomes are allocated across group members, is at
best weakly connected to OCB (Cropanzano & Byrne, 2000). Specifically, distribu-
tive justice seems to motivate OCB only when both interactional and procedural
justice are low (Skarlicki & Folger, 1997).
Regardless of which type of justice is influencing the person, current thought
is that perceived (in)justice arouses feelings of reciprocity: if the workplace is
treating me well (however that is defined), I should do something nice in return,
and if the workplace is treating me poorly, I should stop being nice and turn to
"choosing for myself" or even retaliation (Cropanzano & Byrne, 2000; Rupp &
Cropanzano, 2002). Along these same lines, there is evidence that people with a
prosocial social value orientation get upset when they are treated fairly, but others
are treated unfairly (van Prooijen et al., 2012). This clearly overlaps with the role
of reciprocity in societal dilemmas, which, as we saw earlier, is a primary factor in
cooperative choice. Here, however, the person is reciprocating not the actions of
another person, but rather a general tendency of the collective: I have been treated
well by those who work here, and so I will respond in kind by doing nice things for
the collective, and vice versa for poor treatment. The idea of reciprocating a gen-
eral tendency within a societal dilemma has been discussed (Parks & Komorita,
1997) but not developed. As with the idea of ownership of information within a
knowledge-sharing dilemma, reciprocation within the organizational citizenship
dilemma has an added layer of complexity.
Organizational Identification
Membership Status
A number of studies have looked at the extent to which the security of one's
membership in the work group contributes to willingness to perform extra-role
behaviors. The connection between the two is intricate. Those who hold formal
membership in the group, but are threatened with loss of membership, are less
likely to be good citizens (Reisel, Probst, Chia, Maloles, & König, 2010), but
those who hold temporary membership are more likely to engage in extra-role
behaviors, apparently to try to convince others that they should be retained
(Feather & Rauter, 2004). As well, those who might like to leave the work
group, but who do not see exit as a reasonable option, are more cooperative,
apparently because they are trying to make their current situation as positive
an experience as possible (Hui, Law, & Chen, 1999). Interestingly, Bergeron
(2007) has reversed the relationship between the two variables, and suggested
that OCB may actually contribute to a sense of membership instability, because
given a perceived finite amount of resources, people will worry that the more
resources they allocate to “extra” behaviors, the less they will have for their
assigned job tasks. Thus, the paradoxical effect of being a good citizen could
be termination.
Individual Characteristics
There are also some individual-level characteristics that have been connected
to willingness to perform extra-role behaviors, though study of individual
differences and OCB has been surprisingly sparse. Empathic concern is per-
haps the most heavily studied of these variables, with greater concern being
related to more frequent performance of extra-role behaviors (see Bettencourt,
Gwinner, & Meuter, 2001). There is also evidence that social value orientation
predicts extra-role behavior, with prosocials being more willing to be good citizens than proselfs
(Penner, Midili, & Kegelmeyer, 1997; Rioux & Penner, 2001). At the personal-
ity level, conscientiousness is quite clearly influential (Borman, Penner, Allen,
& Motowidlo, 2003), and agreeableness and achievement motivation have also
been suggested as possible contributors (Neuman & Kickul, 1998). It is possible
that the motivation for performing extra-role behaviors changes as workers get
older (Wagner & Rush, 2000), a finding that complements the more general
notion that people are more likely to cooperate within a societal dilemma as
they age (Van Lange et al., 1997). Finally, Dunlop and Lee (2004) found that
workplace deviants have disproportionate influence over others, and the pres-
ence of good citizens in the group is insufficient to offset the harm done by the
deviants. Thus, when confronted with both good and bad organizational citi-
zens, people seem to be more strongly swayed by the bad actors. Once again,
this meshes well with research on social dilemmas documenting the undue
influence of bad actors (Kerr et al., 2009).
Summary
Shirking
Ethical Behavior
Related to concerns about whether workers perform their duties is the question
of whether they perform those duties in an ethical manner. This has always
been an issue within the research on work motivation, but it has become more
prominent during the past decade, in light of a number of large-scale ethical vio-
lations at major world companies (e.g., News Corporation, Enron, WorldCom).
Theorists of ethical work behavior have argued that the decision of whether
to act ethically can be modeled as a Prisoner's Dilemma: It is best for the group if
everyone behaves ethically, but if everyone is indeed ethical, one can realize the
best personal outcome by committing an ethical violation (e.g., if no cowork-
ers steal, it will be quite easy for a person to walk off with company goods). Of
course, this fact is equally true for all other workers, and so if everyone acts on
the impulse, we then have a workplace with rampant unethical behavior (Tyson,
1990). Compounding the problem is that there is good evidence that people
tend to believe that, at least in work-related matters, they are more ethical than
their coworkers (Tyson, 1992), a phenomenon that probably occurs in several
social settings, with people believing that they are more moral and more honest
than other people (e.g., Van Lange & Sedikides, 1998). From a social dilemma
standpoint, this implies that actors will be keenly aware of the potential for being
exploited, which in turn makes them vulnerable to becoming unethical them-
selves. It is clear that people who engage in unethical behavior are sometimes
unaware that their actions conflict with their morals (Banaji & Bhaskar, 2000),
and, when they are aware, they are good at distorting their perceptions of their
actions, to convince themselves that the actions remain consistent with their
moral codes (Darley, 2004; Tenbrunsel & Messick, 2004). Resolving this particu-
lar dilemma thus seems especially challenging: How can we convince someone
to be cooperative if he or she thinks his or her behavior already is cooperative?
An immediate suggestion is to strengthen, and perhaps make mandatory, ethi-
cal training for all workers. Analogous to the idea that educating people about a
social dilemma will foster cooperation, the notion here is that training workers on
business ethics will lead them to behave ethically. Unfortunately, whether training
in ethics counteracts the dilemma is debatable (Badaracco & Webb, 1995; James &
Cohen, 2004), and experimental research suggests that appeals to morality-based
standards of conduct are impactful only if the appeal is made by a leader who
demonstrates self-sacrifice herself (Mulder & Nelissen, 2010). We thus need to
test other methods of encouraging workers to behave ethically when the tempta-
tion is strong to do otherwise. One immediate possibility is the employment of an
ethics “safety net” under which employees report any suspected ethical violation
to a central contact, who then assigns the case to the most appropriate author-
ity (Kaptein, 2002). In theory, because the safety net is so easy to use, more
reports will be filed and the ability to catch violators improves. From a social
dilemma perspective, the safety net increases the likelihood that the violator will
be sanctioned, which should reduce the temptation to exploit others. Given the
importance of ethical behavior in all aspects of society, this is potentially a quite
important application of social dilemma research, and in fact legal scholars have
made a similar argument. For example, questions exist about whether managed
health care represents an ethical Prisoner’s Dilemma, because a treatment might
benefit a patient but produce a net economic loss for the HMO, thus giving the
HMO an incentive to deny the treatment (Bloche, 2002; see also Blair & Stout,
1999, 2001).
■ KNOWLEDGE-SHARING DILEMMAS
Many organizational groups are charged with evaluating information and reach-
ing consensus on a course of action. In such groups, a primary task is to gather
and share the information. However, studies regularly show that people are gen-
erally reluctant to make the information that they have acquired broadly avail-
able (Cress & Kimmerle, 2007; Cress, Kimmerle, & Hesse, 2006). Why might
this be? In fact, the structure of an information-pooling task follows that of a
social dilemma. There is effort involved in obtaining information and then shar-
ing it with others, and adding one’s information to the set of publicly-presented
facts adds nothing to one’s ability to help make a good decision—the person
already knows the fact, so sharing does not enhance the person’s knowledge
base. By contrast, if the person remains silent, and listens to what everyone
else has to say, then his or her knowledge base increases greatly, at no cost.
Thus, regardless of what everyone else does, it is better to not share informa-
tion than to share. (This assumes there are no side benefits to be accrued from
sharing, like improvement of reputation, and that full disclosure of what one
knows is not a requirement of one’s job.) However, if everyone behaves in this
manner, then nothing is revealed, no one’s knowledge base is improved, and
the group output will be of poor quality. Knowledge-sharing is thus a particular
type of social dilemma, and more specifically, a form of a public-goods prob-
lem (Cabrera & Cabrera, 2002). While information-sharing occurs in all types
of decision-making groups, the issue is of particular concern to organizational
theorists, as effective information flow is critical both for the smooth functioning
of an entity with many subunits and for innovation (Argote & Ingram,
2000). As such, much effort has been devoted to finding interventions that will
encourage employees to be more forthcoming with information.
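The structure just described can be written down as a toy payoff function. The benefit and cost values, and the five-person group in the comments, are hypothetical; what matters is the ordering of payoffs, which reproduces the dominance of withholding described above.

```python
def payoff(i_share, others_sharing, b=3, c=1):
    """One member's payoff given their own choice and how many of the
    OTHER group members share their information.

    Assumptions: sharing costs the sharer effort c; each fact made
    public benefits every other member by b; sharing one's own fact
    adds nothing to one's own knowledge base.
    """
    gain = b * others_sharing       # facts learned from others
    cost = c if i_share else 0      # effort of sharing, no self-benefit
    return gain - cost

# Withholding dominates: payoff(False, k) > payoff(True, k) for any k.
# Yet in a group of five, universal sharing (3*4 - 1 = 11 each) beats
# universal silence (0 each): the signature of a public-goods dilemma.
```

The same function makes the deficient equilibrium visible: silence is individually rational at every level of others' sharing, yet collectively everyone would prefer the all-share outcome.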
A challenge that immediately arises in this area, and which is unique to this
type of social dilemma, is the question of who ultimately "owns" the
information being shared. While a particular fact is held by a specific individual, it
can be argued that, if the fact was obtained while performing one’s job, the infor-
mation ultimately belongs to the organization. This adds a layer of complexity to
the dilemma that is not found in societal dilemmas. When deciding whether to
give money to a charity, for example, we do not first think about who really owns
the money. The ownership question does seem to impact willingness to share.
Constant, Kiesler, and Sproull (1994) found that how one answers the question of
ownership mediates willingness to share, with those who see the information as
the property of the organization being more forthcoming than those who see it as
their own. However, Jarvenpaa and Staples (2001) found more of a joint ownership
effect, in that people who saw the information as something that they “owned”
also believed that the information was owned by the organization as well; in other
words, because the information was acquired while the person was on the job, it
was also property of the workplace. They thus found more sharing by people who
saw the information as their own. Related to this is the impact of the information’s
value: People are less willing to share information that they deem valuable to oth-
ers, especially if the information seems more valuable to others than it is to oneself.
Summary
■ UNIONIZATION
One of the seminal connections between social dilemmas and the workplace was
Messick’s (1973, 1974) argument that the decision whether to form a labor union
can be modeled as a type of Prisoner’s Dilemma. If state laws allow for an open
shop (i.e., workers do not have to join a union that is present in their work-
place), then the best personal outcome is for everyone else to form the union
while one stays independent. Union-induced benefits cannot be restricted
to union members (if the union negotiates a raise in hourly pay, all workers
will receive it), so the person who remains independent will get the benefits,
but will not have to pay union dues. If everyone thinks like this, however, then
the union will not form, and benefits that can only be realized through united
action will not be received. Of course, if all workers join, then everyone gets the
benefits, but everyone has to pay dues. Finally, if just a small number of work-
ers attempt to form the union, then it will not succeed, but those workers will
lose whatever resources they put into the effort, and may well experience hostil-
ity from management for having tried to unionize. The unionization decision is
thus well-modeled as a social dilemma.
Marwell and Ames (1979) extended the logic, and argued that a union is actu-
ally a type of public good. A minimal number of members are needed to make it
exist, but once it exists, any benefits that it produces can be used by all workers
(again, assuming an open shop). This makes the unionization dilemma a type of
step-level good, in that as the critical minimum approaches, it becomes more rea-
sonable for unaffiliated members to consider joining. As Marwell and Ames (1981)
point out, it does not matter whether 4.9% or 49% of workers join the union, but
it does matter whether 51% or 49% join. However, once that 51% threshold is
crossed, it does not matter how many additional members the union acquires, as
full bargaining power has been achieved. This means that, assuming there is no
added bargaining power to be gained from a “sheer number of members” appeal,
late-joining members are irrelevant. Union members are thus confronted with the
possibility of free-riders in their midst.
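The step-level character of the dilemma can be sketched as follows. Only the 51% threshold comes from the discussion above; the benefit and dues amounts are hypothetical, and the all-or-nothing provision rule is a simplifying assumption.

```python
def worker_payoff(joins, fraction_joining, benefit=100, dues=20,
                  threshold=0.51):
    """Payoff to one worker in an open shop: union benefits go to
    everyone once the union is provided, but only joiners pay dues
    (and joiners below the threshold lose their dues for nothing)."""
    provided = fraction_joining >= threshold
    gain = benefit if provided else 0
    return gain - (dues if joins else 0)

# 49% vs. 51% is what matters; 4.9% vs. 49% does not:
#   worker_payoff(False, 0.049) == worker_payoff(False, 0.49) == 0
# Once provided, free riders do best, and late joiners add nothing:
#   worker_payoff(False, 0.51) > worker_payoff(True, 0.51)
#   worker_payoff(True, 0.51) == worker_payoff(True, 0.99)
```

The flat regions on either side of the threshold are exactly Marwell and Ames's point: membership changes payoffs only where it tips provision, which is what creates the temptation to let others form the critical mass.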
A primary interest of researchers is in trying to predict who is likely to join the
union (i.e., who will contribute toward provision of the good) and who is likely to
avoid union membership and free-ride on the union’s efforts (i.e., who will be self-
ish). It is important to note that researchers here distinguish between non-joiners
who are motivated by free riding and non-joiners who are just opposed to the
concept of unionization. From a social dilemma perspective, the distinction is
unimportant: those who are uninterested in the public good would ordinarily
disengage from it (those who do not contribute to public television because they
disagree with the idea of publicly-supported broadcasting presumably do not tune
in to their local public station), but in the workplace such disengagement may
not be possible. Nonetheless, our focus in this section is only going to be on the
free-riders. We are not going to cover the research into why some people oppose
unionization. (Note that Klandermans, 2001, has performed a similar analysis of
participation in social movements as a public good.)
Perhaps the primary determinant of joining is concern about reputation.
Workers who are worried about what others will think of them if they do not join
tend to enlist in unions, even if they do not personally agree with unionization
(Chaison & Dhavale, 1992; Naylor, 1990), and it has been argued that whether the
union is provided or not depends critically on how many workers are concerned
with their reputation—no other factor will override it (Naylor & Cripps, 1993). To
that end, “positive reputation” has been defined by some theorists as an excludable
good that is provided by the union (Naylor, 1990), and it has been suggested that
unions can increase their attractiveness to those who are not concerned with repu-
tation by showing that they provide other excludable goods, like job security and
supplementary health benefits (Booth & Chatterji, 1995; Moreton, 1998).
From a social dilemma perspective, such an approach is akin to offering side
payments for cooperation: Cooperate and we will give you some additional
outcomes besides those contained in the payoff matrix. As it can often be hard
for unions to identify credible excludable goods that they can provide (Booth
& Chatterji, 1995), and given that there is work documenting that high rates of
cooperation can be induced without resorting to side payments (Dawes et al.,
1988), one wonders whether those approaches might also work in the union
situation.
Another factor in the decision to join is dissatisfaction with the current state
of affairs in the workplace. The more strongly dissatisfied the worker is, the more
likely he or she is to join the union (Charlwood, 2002; Hammer & Berman, 1981;
Klandermans, 1986). This finding offers an interesting parallel to the research
discussed previously on willingness to change how group members access a
resource, and suggests there is value in asking whether the cause of the dissat-
isfaction matters. We might predict that dissatisfaction with the system would
induce a desire to unionize, but dissatisfaction with specific members of manage-
ment would not. Along these lines, Fullagar and Barling (1989) found that the
dissatisfaction-joining connection is moderated by belief that the union can make
a difference in improving work conditions. This is potentially analogous to people
being willing to change access to a resource when its failure was due to struc-
tural problems, but not when failure was due to behavioral problems. People may
believe that the new system can make a difference under structural constraints, but
not when some group members are behaving irresponsibly.
Summary
We have seen that labor unions can be considered a type of social dilemma,
more specifically a public good, and even more specifically a step-level public
good. Once a critical mass of union members is reached, additional members
likely add nothing, which introduces a temptation to let that critical mass form,
and then free-ride on their efforts to improve work conditions. The key fac-
tors that seem to drive the decision of whether to participate in the union are
concerns about the reputation one will have if one does not join, and degree
of dissatisfaction with the current work situation. Principles similar to these
can be found in mainstream social dilemma research, but they are variables
that have not received all that much attention. There may also be an influential
individual difference that has parallels to social value orientation. Extending
some of these union-specific variables to general social dilemmas could yield
some results of interest, as would the testing of basic social dilemma influ-
ences on union-joining decisions. Researchers after Messick (1973, 1974) have
only sporadically conducted formal social dilemma analyses of unions (see
Klandermans, 2002, for one example), but such an analysis would be potentially
quite fruitful, for understanding of both unions and public goods.
■ STRATEGIC ALLIANCES
At a more macro level is the notion of a strategic alliance, which arises when
two or more firms that might normally compete against each other instead voluntarily join together to accomplish some goal (Parkhe, 1993). Strategic alliances are
well-modeled as a social dilemma: While the goal is more easily accomplished
if all members of the alliance work together, the payoff from goal achievement
will be divided among the alliance members, so each member would be better
off investing less in the alliance than the others (McCarter & Northcraft, 2007;
Zeng & Chen, 2003). For example, in 2008 the American auto companies joined
together to appeal to the U.S. government for loans that would help the companies repair their finances. This was strategic because a plea from three companies would likely be more persuasive than a single company asking for money.
However, it would also be more beneficial for any one company to invest less in
the alliance than the other two, because that company could then right itself more
quickly than the other two, and become profitable while the other two are still
struggling. Despite their at-times considerable advantages, real-world strategic alliances have a fairly regular history of failing to live up to their promise (Gottschalg
& Zollo, 2007), which has prompted researchers to study why this happens. That
the alliance is a form of social dilemma seems a promising answer to the ques-
tion, and experimental research has been able to document social dilemma-like
properties within a simulated alliance (Agarwal, Croson, & Mahoney, 2010).
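The incentive problem described in this paragraph can be sketched as a linear public goods game; the endowment, multiplier, and group size below are assumed for illustration:

```python
# Minimal sketch (assumed numbers): a three-firm alliance in which joint
# gains are split equally, so each dollar a firm invests returns less than
# a dollar to that firm, even though it returns more than a dollar overall.

def firm_payoffs(investments, endowment=10.0, multiplier=1.8):
    """Each firm keeps whatever it does not invest; the pooled investment
    is multiplied and the proceeds are divided equally among the firms."""
    n = len(investments)
    share = multiplier * sum(investments) / n
    return [endowment - inv + share for inv in investments]

all_in = firm_payoffs([10, 10, 10])         # each firm ends with 18.0
one_holds_back = firm_payoffs([0, 10, 10])  # holdout gets 22.0, others 12.0
```

Because each invested dollar returns multiplier/n (here 0.6) to the investor but multiplier (here 1.8) to the alliance as a whole, withholding investment dominates individually even though full investment is collectively best.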
Research into the dynamics of strategic alliances indeed finds factors at work
that also occur in interpersonal dilemmas. For example, the alliance is strength-
ened as reciprocity and trust between members develops (Muthusamy & White,
2005), though choice strategies other than Tit-for-Tat may be more effective at
strengthening the alliance (Arend & Seale, 2005). There is evidence that both pro-
cedural and interactional justice play a role in determining the strength of the
alliance (Luo, 2007). In a manner similar to social value orientation, the extent to
which a partner worries that other members of the alliance will free-ride impacts its
own participation in the alliance. Recall from Chapter 4 that competitors
assume that everyone is competitive and will readily exploit if given a chance.
Similarly, those who suspect alliance partners will eventually try to exploit the alli-
ance tend to reduce their own involvement in the alliance, even if no exploitation
has actually occurred (Rockmann & Northcraft, 2008). Structurally, the alliance
is often characterized by both social and environmental uncertainty. The social
uncertainty is grounded in the fact that any one partner cannot really know what
the other partners are planning, and the environmental uncertainty stems from
the fact that it is impossible to know whether the alliance will succeed. It may be
that all of the time, effort, and willingness to be vulnerable is for naught (McCarter,
Mahoney, & Northcraft, 2011). We have seen that uncertainty plays a major role in
determining social dilemma choice among individuals, so this factor is yet another
parallel between strategic alliances and regular dilemmas.
A caution that must be raised here is that, with strategic alliances, we are talk-
ing about groups interacting with groups, rather than individuals with individuals.
A key tenet of groups research in general is that behaviors seen at the individual
level do not necessarily occur when those individuals are grouped and collective
performance is measured (Hinsz, Tindale, & Vollrath, 1997; Wildschut et al., 2003).
Thus, as we think about strategic alliances as social dilemmas, we need to be careful
to not automatically assume that principles that are well-established in individual
social dilemmas will also occur between alliance members. A considerable amount
of additional research is needed into the dynamics of alliances-as-dilemmas, but
this seems a promising application of social dilemma research to a feature of orga-
nizations. Notably, political scientists have considered the issue of international
alliances from a Prisoner’s Dilemma perspective (e.g., Conybeare, 1984; Palmer,
1990), and it would be useful to also ask to what extent the dynamics of these very
large-scale dilemmas also occur in smaller, organizational alliances.
■ BASIC ISSUES
We saw that individual worker ethical behavior is a form of social dilemma. What
about the larger-scale ethical climate within an organization? As we mentioned ear-
lier, individual behavior does not necessarily repeat itself at the collective level, so
there is no guarantee that the ideas we discussed about individual ethical behavior
would work at the level of the corporation. At a purely practical level, investigat-
ing corporate ethics within a social dilemma framework would help bring struc-
ture to research into the issue. In a seminal article, Donaldson and Dunfee (1994)
criticized approaches to business ethics as being grounded in either philoso-
phy (“Companies should do these things”) or empiricism (“Companies tend to
do these things”), but not both. Progress has been made since then, but simi-
lar criticisms continue to be leveled (e.g., Scherer & Palazzo, 2007). A social
dilemma approach can help along these lines, because the approach encom-
passes both what should be done (to achieve long-term cooperation), and what
actually is done. It may turn out to be that macro-level ethical behavior is not
well-described as a social dilemma, but it is worth testing the proposition.
We have seen that a variety of aspects of workplace functioning fit the logic
of a social dilemma. As a primary emphasis of organizational researchers is
understanding shortfalls in work productivity, the social dilemma framework
offers quite a bit of potential, if we equate productivity with cooperation. The
question of how long-term cooperation can be maximized then becomes equiv-
alent to asking how long-term productivity can be maximized.
Studying workplace dilemmas as social dilemmas also offers advantages to the
social dilemma theorist. The organization is an unusual setting in that it contains
many features one does not normally see in a typical real-world dilemma: There
are third parties who stand to be affected by the erosion of cooperation; group
members could be removed from the group by an authority because of lack of
cooperation; the complexity of the organization often makes it easier for free-riders
to stay hidden; the entity being contributed is often less tangible than money or
participation in a well-defined task. That basic principles of social dilemmas occur
in the workplace thus adds generality to the body of knowledge about dilemmas.
Research into workplace social dilemmas should thus be encouraged as beneficial
for both social dilemma researchers and organizational psychologists.
7 Environment, Politics, Security and Health
Social dilemmas are everywhere around us. As social creatures, humans fre-
quently encounter cooperation problems at home, in their community, in the
workplace, and in society at large. Sometimes these social dilemmas involve
just two people, such as a husband and wife sharing the burdens of child-
care, whereas at other times millions or even billions of people are involved
with problems such as international security and global climate change. For
some real-world social dilemmas, the solutions seem fairly straightforward; for
instance, a husband and wife could make a reciprocal arrangement to pick up
their kids from school. Other social dilemmas require rather more complex
solutions. For instance, an international treaty such as the Kyoto Protocol to
address the problem of global climate change includes a combination of strategies involving financial incentives, punishment, changes in social norms, and legal
and institutional changes (Dietz, Ostrom, & Stern, 2003; Van Vugt, 2009).
It is important to realize that studying social dilemmas is not merely a theoretical exercise. It is of course highly important to work out the mathematical assumptions
underlying the dilemma games and develop the procedural details of the laboratory
experiments. Nevertheless, it must be recognized that some of the most pressing
problems facing society today regarding the environment, public health, politics,
and international security are, in fact, social dilemmas. Understanding the psychol-
ogy of cooperation and defection within these social dilemmas is crucial for solving
these problems and improving the welfare of society and the fate of the planet.
In this chapter, we look at a few of the most pressing collective problems that we
as a community, society, nation, and planet are confronted with today through the
lens of social dilemma theory and research. These examples illustrate how social
dilemmas permeate modern life, and how they can be solved. The challenges for
solving these problems are threefold.
A first challenge is that the problem needs to be broadly recognized as a conflict
between self-interest and collective interest—generally, as a social dilemma. Many
cooperative problems in society are not being solved because people do not rec-
ognize them as social dilemmas. For instance, various public health issues such as
smoking, unsafe sex, and vaccination programs are in fact social dilemmas because
there are negative externalities involved, such as the health risks of passive smoking or the contagion risks if many people choose not to be inoculated
against infectious diseases. At the same time, what sometimes looks like a social
dilemma is upon closer inspection quite a different social challenge. For example,
some collective problems involve a lack of coordination rather than cooperation.
Such coordination problems can be solved by adopting a simple rule—for instance,
Mark Van Vugt had primary responsibility for preparation of this chapter.
some countries have chosen to drive on the right side of the road and others on the
left side. In each case, the pay-off structure underlying the social dilemma must
be carefully analyzed. If we fail to identify whether a particular problem is
a continuous or a step-level public goods problem, this might undermine
the effectiveness of particular solutions. For instance, there were problems with the
tsunami-relief effort in Asia in 2004 because campaigns to raise donations were so
successful that the organizers could not spend the money effectively and a lot of
money ended up in the wrong hands. It would have been better to set a cap on the
amount of money needed and focus the activities on repairing the infrastructure of
the destroyed coastal areas in Indonesia and Thailand (Van Vugt & Hardy, 2010).
A second challenge is to appreciate the complexity of real-world social dilem-
mas. Many real-world problems contain a mixture of different dilemma games.
Researchers often make a distinction between public good dilemmas and com-
mons dilemmas. Public goods dilemmas require individuals to make an active
contribution to establish or maintain a collective good such as building a local
bridge or joining a social movement. There is clearly a collective interest, and usually these dilemmas include non-excludable goods, because once they have been
provided everyone can enjoy them and this does not affect the quality of the good.
Conversely, resource dilemmas—also known as commons dilemmas or CPRs
(common pool resources)—require individuals to make sacrifices to preserve
a common resource such as a communal garden or a water reservoir. Resource
dilemmas usually involve a greater risk of harming others (rival goods) because
using the resource affects the quality for others.
In reality, the distinction between these two classes of social dilemmas is often
blurred and many real-world problems are hybrid social dilemmas. For instance,
environmental management requires that people make active contributions to
protect the environment, for instance, through paying eco-taxes, as well as refraining
from consuming scarce resources such as water and energy (Van Vugt, 2009). It is
good to realize that there are psychological differences associated with framing a
problem as either a public good or a resource dilemma, which has implications for
the effectiveness of particular strategies (Van Dijk & Wilke, 1995).
A third challenge is that there is usually not one strategy—a magic bullet—to
solve a real-world social dilemma. To tackle a problem such as tax evasion requires
a combination of different activities which tap into the different reasons why peo-
ple evade taxes. For instance, people may not pay their taxes either because they
do not believe that their money is spent wisely, or because they can get away with
not paying, or they have difficulties filling out tax forms. Different people have
different reasons why they do not cooperate in a social dilemma and therefore it
requires a combination of strategies to foster cooperation.
The literature often draws a distinction between structural and individual solutions to social dilemmas. This distinction was originally proposed by Messick and Brewer (1983).
■ ENVIRONMENTAL SUSTAINABILITY AS A COOPERATIVE PROBLEM
One of the more pressing social dilemmas concerns the protection of the natu-
ral environment and natural resources (Gardner & Stern, 2002). Many envi-
ronmental problems are social dilemmas because they entail a conflict between
individual and collective interests. For instance, when people make efforts to
save domestic energy or recycle their garbage, they will be incurring a net cost.
Yet, if not many others follow their example, the benefits of their efforts will be
negligible, as they will have little impact on the overall sustainability of the resource.
Many environmental problems have the underlying structure of a tragedy of the
commons (or a resource dilemma), as we discussed in an earlier chapter.
Garrett Hardin, who introduced the term “Tragedy of the Commons” in a famous
article in Science (1968), had an environmental problem in mind. He tells the story
of how the management of a communal pasturage by a group of herdsmen turns
into ecological disaster when each individual, upon realizing that adding extra
cattle benefits him personally, increases his herd, thereby (intentionally or unin-
tentionally) causing the destruction of the commons. The tragedy of the commons
has become central to our understanding of many local, national, and global eco-
logical problems. As an evolutionary biologist, Hardin argued that nature favors
individuals who exploit common resources at the expense of the more restrained
users. He also argued that voluntary contributions to create institutions for man-
aging the commons often fall short because of (the fear of) free-riders. To save
the commons, Hardin therefore recommended “mutual coercion, mutually agreed
upon” which essentially involves electing a central authority that regulates people’s
access to the commons.
Hardin’s article inspired a large body of research into factors contributing to the
preservation of shared environmental resources, including much applied research
into various environmental problems such as the conservation of resources like
water and energy, recycling and transportation (Joireman et al., 2004; Penn, 2003;
Samuelson, 1990; Van Lange, Van Vugt, Meertens, & Ruiter, 1998). Here is an
overview of the main results of these research programs.
Elinor Ostrom and her research group at Indiana University studied various cases of success and failure in the
management of local communal resources. In her classic book Governing the
Commons (1990) she described various examples of resource management
projects such as water irrigation systems, fisheries, and cattle grazing, and used
these to draw some general design principles for successful community resource
management. Ostrom was primarily interested in community management sys-
tems in which resource users devise their own management rules, accept the
rules voluntarily, and have the power to collectively change them. From study-
ing these systems, she concluded that communities are actually much better
in organizing themselves to prevent a tragedy of the commons than originally
suggested in Hardin’s article.
Ostrom focused on the sustainability of common-pool resources. A common-
pool resource is one that is large enough geographically to make it difficult to
exclude individuals from benefiting from its use. Sustainability is a mark of suc-
cessful management because renewable resources such as grasslands, forests,
and fisheries replenish themselves at a limited rate and overuse can cause their
depletion. Ostrom looked at renewable resources in which substantial scarcity
existed, in which relatively small numbers of individuals depended heavily on the
resource. Ostrom found that success in developing long-lasting sustainable community management systems depends on a combination of four factors: (1) characteristics of the resource, (2) the community using the resource, (3) the rules they
develop, and (4) the actions of government at regional and national levels. A social
psychological analysis suggests that these conditions are important because they
tap into the four primary motives for decision making in social dilemmas: understanding, belonging, trusting, and self-enhancing.
A first condition for successful community resource management is that the
resource is controllable locally. This means that the resource has clearly identifi-
able boundaries, that resources stay within these boundaries, and that changes in
the resource can be monitored. For instance, fish stocks in lakes or coastal areas
are easier to monitor and control than fish stocks in open seas. Furthermore, com-
munal resources are more likely to be sustained once users realize there is a threat
of depletion due to overuse. Information campaigns play an important role in con-
veying this information.
A second factor determining the success of community resource management
has to do with the characteristics of the group of users. Sustainable communities
have rather small and stable populations with relatively few individuals moving in
and out and with many members placing a high value on the preservation of the
common resource. Stability is important because such communities are character-
ized by dense social networks and strong social norms about how people ought
to behave. Ostrom refers to this as social capital. Successful communities are also
those in which there are easy, low-cost ways of sharing information and resolving
conflicts. In the absence of a small and stable community of users, it thus becomes
paramount to develop identity strategies to increase social connections between
individuals.
A third condition for community resource management is the availability of
appropriate incentives, rules, and procedures.
■ TRANSPORTATION AND MOBILITY
Researchers have looked at other real-world social dilemmas beyond the envi-
ronment. One of these concerns political activism. Politics is a public goods
dilemma because citizens give up some autonomy to create institutions (law,
army, police) to manage different kinds of social problems in society such
as security, crime, antisocial behavior, poverty, and unemployment. People’s
self-interested choice is to not contribute to upholding law and order, but of
course, if nobody does, then society as a whole will break down and everyone
will be worse off.
Voting. One of the more salient political social dilemmas concerns voting
behavior. When people cast their vote in an election or referendum they incur a
net cost, and yet their impact on the outcome of the election is negligible. It is very
tempting to free-ride on the efforts of others and yet if no-one casts their vote then
governments operate without legitimacy and everyone will eventually be worse
off. Although it is rational not to vote, many people still do—this is known as the
voter’s paradox (Garman & Kamien, 1968). Why?
A social dilemma analysis of voting reveals a number of reasons why people
turn out and vote. One important factor is people’s understanding of the criticality
of their vote. In general, supporters of minority parties perceive their votes as more critical than supporters of majority parties (Ledyard & Palfrey, 2002), and this explains why election
results are generally less clear-cut than forecasters predict based on polling results.
Voting also serves a belongingness purpose because people derive benefits from
supporting particular candidates and parties that they identify with. To increase
voting rates, some governments directly appeal to people’s self-interest by making voting compulsory. If an eligible voter does not attend a polling place, he or
she may be subject to punitive measures such as fines, community service, or even
imprisonment. As a result, turnout is higher in countries that have adopted com-
pulsory voting such as Australia, Belgium, and Singapore. Some countries hand
out penalties if people do not cast their vote. In countries such as Brazil, Peru, and
Greece, if a person fails to vote in an election, they are barred from obtaining a
passport until after they have voted in the two most recent elections. In Turkey, if
an eligible voter does not cast their vote in an election, then they pay a fee of about
five Turkish lira (about $8 USD). Thus, both incentive and institutional strategies
can contribute to solving the social dilemma of political voting.
What kind of institutional changes would people vote for in solving a commons
dilemma? Naturally, people have a stronger preference for change when a com-
mons is being depleted, but it is interesting what kind of rule change they prefer.
A research program by Messick, Samuelson and others shows that people prefer an
equal division of the common resource above other solutions such as appointing a
leader or an authority that regulates access to the commons (Rutte & Wilke, 1985;
Samuelson, Messick, Rutte, & Wilke, 1984). This suggests that users want to retain
some autonomy in the commons.
Tax Paying. Income tax paying is a standard example of a public good dilemma,
especially when taxes are collected through the procedure of filing tax returns as in
most Western countries. When filling out an income tax form it is in the interest
of individual citizens to under-report the amount of income they have received so
that they are taxed less heavily. Yet, if many taxpayers adopt this strategy, then this
means that many valuable public goods in society such as schools, libraries, health
care, and the police force are underfunded, leaving everyone worse off. This is not
a hypothetical problem. The massive budget problems in European countries such
as Greece, Italy, and Spain in the last few years (since 2009) are in part due to a
lack of compliance with tax regulations, especially among the wealthy citizens of
these countries. Tax evasion rates in Western countries indeed vary quite dramati-
cally. Webley, Robben, Elffers and Hessing (1991) report tax evasion percentages
varying from 1% to 40%. Therefore, it is interesting to look at the problem of tax
evasion from a social dilemma viewpoint.
Traditional approaches to reduce tax evasion focus on increasing incentives for
compliance through punishment and deterrence. More and better audits, higher
fines, and increased scrutiny of taxpayers who have been caught once directly
reduce people’s temptation to defect and generally increase compliance
rates. Yet, such punitive systems are costly to operate and therefore tax authorities
have looked at other, less expensive ways to induce tax compliance. To increase
compliance, they have introduced much simpler tax forms which people can easily
comprehend (Elffers, 2000). In some countries, tax authorities also give feedback
to taxpayers on what their taxes are being spent upon so that people feel that
there is a more direct link between their actions and the outcomes in terms of the
provisions of public goods.
People’s need to belong can also be invoked to induce tax compliance through
activating personal and social norms (Wenzel, 2004). People generally do what
they believe the majority of people do and therefore it is important to convey
information—provided it is true—that the majority of people report their income
honestly. In addition, people are also more likely to imitate prestigious, high-status
individuals, and so it is in the interest of tax authorities to scrutinize the tax forms
of highly public figures and “name and shame” them if they defect. Incentive
schemes also seem to be effective, and they can even increase trust in tax authorities. For instance, tax authorities in the Netherlands collect a provisional tax during the year, but when the tax is overestimated taxpayers get a monetary refund.
Research suggests that the combination of an easy-to-fill-out tax form, and poten-
tially a considerable tax deduction has increased public trust in the tax system as
well as tax compliance (Elffers, 2000).
Volunteerism and Social Movements. Every year millions of people around
the globe volunteer to devote substantial time and energy to help others, for
example, providing companionship for the elderly, tutoring children with learn-
ing problems, organizing activities at local sports clubs, or participating in social
and political movements. According to a 2010 survey, 62.8 million adults in the
United States perform volunteer services each year, for a total of 8 billion hours
(Volunteering in America, 2012). Volunteerism is a classic example of a
public good dilemma. It is in everyone’s interest that the sick and needy in soci-
ety are being cared for and that there are religious, sports, and leisure activities
which people can participate in. Yet, at the same time, for any particular individual it is attractive to use such services if they need to, but contribute nothing
to maintain them.
A social dilemma approach suggests that there are different strategies that can
be used to promote volunteering in society. These strategies should be tailored
to the particular psychological motives that people have for volunteering. Social
psychological research on volunteerism suggests that people volunteer for many
different reasons (Omoto & Snyder, 2002). These can be neatly grouped into the
four primary motives for cooperation in social dilemmas: understanding, belong-
ing, trusting and self-enhancing. Some people volunteer to get a greater under-
standing of a particular problem. Another common motivation for volunteering is
a concern with a particular social grouping or community that one identifies with
(e.g., a religious person helping in a local church). Related to this, some people
do volunteer work as a means to express their personal and humanitarian values
as caring individuals, trusting that others will do the same for them if they need
help. Finally, many volunteers report self-enhancing reasons such as benefits for
their personal career and development, making friends, and feeling better about themselves.
Research on social movements suggests that people who strongly identify with a movement are willing to participate regardless of the individual costs of participation (Simon et al., 1998). Identity
strategies which tap into people’s motivation to belong to a particular group should
therefore be highly effective in fostering collective action. A study on consumer
boycotts in the United States, inspired by a social dilemma perspective, showed
that the likelihood of participation in a boycott was influenced by both the likeli-
hood of the boycott’s success and the extent to which people identified with the
movement (Sen, Gurhan-Canli, & Morwitz, 2001).
In sum, a broad range of political behaviors can be viewed through the lens of
social dilemma theory, revealing many interesting insights into what drives people
to volunteer for good causes or vote in elections, for instance. Furthermore, by
looking into the primary motives for political and social action, this approach
offers a number of promising strategies to foster cooperation.
The analysis of international conflict and warfare has greatly benefited from
a social dilemma perspective. One of the classic case studies concerns the arms
race between the Soviet Union and the United States during the Cold War in the middle
of the last century. Well-known game theorists such as Anatol Rapoport and
Thomas Schelling argued that the race to acquire nuclear weapons could easily
be conceived of as a Prisoner’s Dilemma game. In this game, each country has
a choice between building up their nuclear weapon arsenal, or disarmament,
which is the cooperative choice. Arming dominates disarmament because no
matter what the other country does, it is better to arm. If the Soviets arm, then
the United States should also arm to keep up, resulting in an outcome called
MAD (mutually assured destruction) in which both countries could obliterate
each other. If the Soviet Union disarms, the United States can gain a strategic
advantage by continuing to arm.
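The preference ordering just described can be summarized as a 2×2 game. The ordinal payoffs below (4 = best, 1 = worst) are an illustrative convention, not values from the original analyses:

```latex
% Rows: United States; columns: Soviet Union. Entries: (US payoff, Soviet payoff).
\[
\begin{array}{c|cc}
 & \text{Disarm} & \text{Arm} \\ \hline
\text{Disarm} & (3,\,3) & (1,\,4) \\
\text{Arm}    & (4,\,1) & (2,\,2) \\
\end{array}
\]
% Arm strictly dominates Disarm for each country (4 > 3 and 2 > 1), yet
% mutual armament (2, 2) leaves both worse off than mutual disarmament (3, 3).
```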
The social dilemma primarily lies in the costs of the arms race. Mutual armament is much more costly than mutual disarmament. Thus, both countries are better off disarming, but neither is willing to trust the other to do so. At the time, many
different experiments were conducted to analyze how actors behaved in such arms
races. It was found that a Tit-for-Tat strategy, in which a country started first with
a cooperative move—disarm—and then mimicked the choices of their opponent
elicited the most cooperation (e.g., Guyer, Fox & Hamburger, 1973).
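A minimal simulation sketch of this result, using conventional Prisoner’s Dilemma payoffs rather than the actual parameters of Guyer, Fox, and Hamburger (1973):

```python
# Repeated arms race sketch (illustrative payoffs, not the original
# experimental procedure). 'C' = disarm (cooperate), 'D' = arm (defect).

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Disarm on the first move, then copy the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, rounds=10):
    """Play a repeated game; each strategy sees the opponent's past moves."""
    a_hist, b_hist = [], []
    a_score = b_score = 0
    for _ in range(rounds):
        a, b = strategy_a(b_hist), strategy_b(a_hist)
        a_hist.append(a)
        b_hist.append(b)
        pa, pb = PAYOFF[(a, b)]
        a_score += pa
        b_score += pb
    return a_score, b_score

# Two Tit-for-Tat players lock into mutual disarmament (3 points per round),
# far better than two unconditional armers (1 point per round):
tft_pair = play(tit_for_tat, tit_for_tat)       # (30, 30)
hawk_pair = play(lambda h: 'D', lambda h: 'D')  # (10, 10)
```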
Some fifty years later, we can conclude that this is what happened. Both the United
States and particularly Russia came to realize they could no longer afford spend-
ing excessive amounts of money on developing their nuclear weapons. Through
a number of bilateral treaties, which increased trust in each other’s cooperation,
both countries have reduced their nuclear weapon arsenal considerably. Yet, other
countries are still involved in a nuclear arms race, such as India and Pakistan, and
North and South Korea.
In addition to developing trust, it seems that it is important to know each coun-
try’s understanding of the conflict. A content analysis of political speeches made
by American and Russian leaders revealed that they perceived the arms race more
as a coordination game than as a Prisoner’s Dilemma (Plous, 1985). This was fur-
ther confirmed in a survey among U.S. senators who could indicate their prefer-
ences for the ranking of outcomes in a 2x2 game in which the two countries, the
United States and Soviet Union, each had an option to disarm (cooperation) or
arm (noncooperation). Their preferences showed that, more than anything, both
countries wanted to disarm (Plous, 1985). This information is extremely useful to
convey to political leaders because it suggests that a cooperative solution is much
easier to achieve.
Nevertheless, some research suggests that individuals acting as elected representatives or leaders of their group or country often make more defecting choices in a
social dilemma between groups than ordinary group members do (Reinders Folmer,
Klapwijk, De Cremer, & Van Lange, 2012). Groups are inherently more competi-
tive than individuals (Wildschut et al., 2003), and so it is very important for group
leaders to try and develop an intimate, personal relationship with each other so
that they see each other not just as representatives of their group.
Warfare. An important insight from social dilemma theory about conflict
and warfare between countries is that it poses a cooperative problem within each
country (Van Vugt et al., 2007). Going to war is essentially a public goods dilemma
that individuals and societies face. Each individual actor would be better off not
participating in warfare because there is a huge potential cost, the risk of injury or
death. Yet, from the group’s perspective, it may sometimes pay to get many people
to sign up for war because of the spoils of a victory over a rival group or nation.
The analysis of inter-group conflict from a social dilemma perspective has been
given a huge boost by the experiments conducted by Bornstein and colleagues
(Bornstein, 1992; Bornstein & Ben-Yossef, 1994). They created an inter-group
Prisoner’s Dilemma game in the lab to model warfare decisions. In these games,
individuals could either keep an endowment to themselves or they could invest it
in their group. The group with the highest number of contributors would be vic-
torious in the game and only individuals in the group with the highest number of
contributors would receive a pay-out.
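The payoff logic of such an inter-group Prisoner’s Dilemma can be sketched minimally. The endowment and prize values below are assumed for illustration and do not reproduce Bornstein’s actual stakes; resolving ties by paying nobody is one simplification among several possible.

```python
# Payoffs in a stylized inter-group Prisoner's Dilemma (after Bornstein, 1992).
# Each player either keeps an endowment or invests it in the group; every member
# of the group with more contributors receives a prize. Values are illustrative.

ENDOWMENT = 5   # points a player keeps by not contributing
PRIZE = 10      # points paid to each member of the winning group

def group_a_payoffs(contributors_a, contributors_b):
    """Return (contributor payoff, free-rider payoff) for members of group A.
    Simplifying assumption: strictly more contributors wins; ties pay nobody."""
    prize = PRIZE if contributors_a > contributors_b else 0
    contributor = prize               # gave up the endowment
    free_rider = ENDOWMENT + prize    # kept the endowment
    return contributor, free_rider

# Within-group free-riding dominates, yet the group needs contributors to win:
print(group_a_payoffs(3, 2))  # (10, 15): group A wins, free-riders still do best
print(group_a_payoffs(1, 2))  # (0, 5): group A loses; contributors lose everything
```

The sketch makes the nested dilemma visible: whatever the other group does, each individual earns more by keeping the endowment, yet a group of free-riders can never win the prize.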
Getting individuals to contribute to war efforts may depend upon how they
view themselves. Arguably, the more strongly people identify with their group, the
more likely they are to contribute. Trust also matters: If people believe not many
others will join them, why should they? Finally, incentive and institutional strate-
gies could solve this dilemma. If groups can ensure that the benefits of the loot
will go to the people who actively contributed, then there is less temptation to
free-ride. In addition, punishing defectors or deserters with imprisonment or
even execution—as they do in some countries—is a powerful deterrent against
free-riding. These strategies could be particularly focused on males because they
are historically the warriors in their group (McDonald, Navarrete, & Van Vugt,
2012). Research on the male warrior hypothesis shows that when a group is in
conflict with another group, men start contributing more to their group (Van
Vugt et al., 2007).
Both inter-group and intra-group conflicts carry features of a social dilemma.
To promote cooperation requires analyzing the key motives that guide the actions
of individuals and groups in these problems. A complicating factor is that groups
and group representatives are often more competitive than ordinary individuals,
which makes it difficult to solve problems of warfare and international security. It
seems that developing trust is a key factor.
■ SOCIAL DILEMMAS IN PUBLIC HEALTH
Some public health issues can also be identified as social dilemmas because
many health-related behaviors carry negative externalities. This may not always
be obvious. For instance, smoking, binge drinking, unprotected sex, or exces-
sive eating seem to be largely individual problems of self-control and temporal
discounting. Nevertheless, the consequences of these behaviors also affect other
people (e.g., passive smoking, unsafe sex), thus making them essentially
cooperative problems.
Infectious Diseases and Vaccinations. One of the most effective ways to prevent
the spread of infectious diseases is through vaccinations. Vaccination campaigns
have been very successful and have eliminated or dramatically reduced terrible
diseases around the world, such as polio, cholera, and typhus. Yet vaccination poses
an interesting social dilemma (Henrich & Henrich, 2007). From the perspective
of society, it is essential that many people get inoculated, because if a majority
of people have been immunized, the disease is unlikely to spread (in preventive
medicine this is known as “herd immunity”). Yet, getting a vaccination involves
a small risk for the individual, because in sporadic cases the person might get ill
or even die from the vaccine. In addition, if a large number of people within a
population have been vaccinated against a particular disease, then there are fewer
benefits of getting vaccinated for any particular individual as the risks of infection
are negligible.
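The incentive structure of this dilemma can be expressed in a toy model. All numbers below (vaccine risk, baseline infection risk, herd-immunity threshold, the linear decline in risk) are illustrative assumptions, not epidemiological estimates.

```python
# A toy model of the vaccination dilemma: as coverage rises, the infection risk
# faced by an unvaccinated individual falls, until free-riding becomes tempting.
# All parameter values are assumed for illustration only.

VACCINE_RISK = 0.001         # chance of a serious side effect from the vaccine
BASE_INFECTION_RISK = 0.10   # infection risk when nobody else is vaccinated
HERD_THRESHOLD = 0.90        # coverage at which the disease stops spreading

def infection_risk(coverage):
    """Risk to an unvaccinated individual; declines linearly to zero at the threshold."""
    if coverage >= HERD_THRESHOLD:
        return 0.0
    return BASE_INFECTION_RISK * (1 - coverage / HERD_THRESHOLD)

def self_interested_choice(coverage):
    """Vaccinate only while the disease is still riskier than the vaccine itself."""
    return infection_risk(coverage) > VACCINE_RISK

print(self_interested_choice(0.50))  # True: infection risk dominates
print(self_interested_choice(0.92))  # False: herd immunity makes free-riding pay
```

The model captures the core tension: once most others are vaccinated, a purely self-interested individual has little reason to accept even a small vaccine risk, which is precisely what makes high coverage hard to sustain.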
A social dilemma approach points to some interesting strategies for the vaccina-
tion dilemma. A particularly effective institutional strategy is to make it manda-
tory. However, this can be seen as a violation of basic human rights. In various
religious groups around the world, vaccinations are seen as interfering with the
work of God, and so these communities are not willing to comply with mandatory
vaccination programs. Not surprisingly, whenever there are outbreaks of diseases
such as polio or rubella, it usually affects children in close (religious) communities
where vaccination rates tend to be low. It is very important to increase people’s
understanding of the problem and so providing accurate information is crucial.
Some years ago, an article in The Lancet (1998) claimed to have found a link
between childhood vaccinations and the onset of autism. The article was subse-
quently retracted because of serious methodological problems. Nevertheless, it caused
substantial damage, and vaccination rates for children plummeted in the United
Kingdom after the first publication of these results.
Institutional strategies have been effective in the widespread adoption of vac-
cination programs. In the United States, for instance, children cannot attend state
schools unless they have received all their childhood vaccinations (Henrich &
Henrich, 2007). Furthermore, families with children who do not get their vac-
cinations get stigmatized and ostracized—these are powerful means to increase
compliance. Because vaccination poses an important, large-scale social dilemma,
the best way to sustain cooperation and prevent defection is through the right
combination of legal changes, incentives, information, and identity solutions.
■ BASIC ISSUES
In this chapter, we have shown that a variety of societal problems can be fruit-
fully analyzed through adopting a social dilemma approach. From environmental
and health problems to cooperative challenges regarding international security
and warfare, social dilemma theory yields new insights into the social causes
underlying these problems as well as interventions to tackle them. A range of
other cooperative problems in society could potentially also benefit from a social
dilemma analysis, such as the prevention of crime and antisocial behavior, file
sharing on the Internet, and child care and relationship well-being. Space limita-
tions prevent us from delving deeper into these dilemma problems here.
Not every societal problem is a social dilemma, however, and we should be careful
not to see social dilemmas everywhere. For each problem, we must carefully analyze
its pay-off structure to see whether it fits the definition of a social dilemma. If so, we
should examine what sort of social dilemma we are dealing with—is it a public
goods dilemma or a commons dilemma, or perhaps a mixture of the two? Furthermore,
we should examine people’s understanding of the dilemma, for instance, some
users perceive a transport dilemma as essentially an environmental problem
whereas others perceive it as a problem of coordination. The pay-off structure
of a particular dilemma, and the way people perceive it, together determine
what kinds of strategies will be most effective.
In terms of tackling real-world social dilemmas, we have drawn a distinction
between four kinds of strategies, each of which taps predominantly into one core
psychological motive underlying decision-making: understanding, belonging,
trusting, and self-enhancing (Fiske, 2004; Van Vugt, 2009). The first two strate-
gies, information and identity, are individual solutions because they do not change
the actual pay-off structure underlying the dilemma but rather make cooperation
psychologically more appealing. For instance, people are more likely to conserve
resources when there is a threat of depletion, and contribute to a common good for
a group that they strongly identify with. The other two, incentive and institutional
strategies, actually change the dilemma structure either by increasing the benefits
of cooperation and costs of defection or through changing the decision-making
environment, for example, by creating choice options (e.g., a separate lane for car-
poolers) or removing them (e.g., children who are not vaccinated cannot go to
school).
First, do these strategies reinforce one another, or do
they cancel each other out? There is some evidence, for instance, that incen-
tive strategies actually undermine people’s intrinsic motivation to contribute to
a public good as their understanding of the problem changes (Mulder et al.,
2006). This phenomenon is also known as crowding out (Frey & Jegen, 2001)—
the idea that extrinsic motivation eradicates intrinsic motivation (Deci et al.,
1999). As an example, researchers found that the introduction of financial pen-
alties for picking up kids late from nurseries actually increased noncompliance
rates. The parents were reframing the dilemma as an individual economic prob-
lem. By paying extra, they believed they were entitled to pick up their kids later
(Gneezy & Rustichini, 2004).
Second, identity strategies that aim to influence people’s belongingness needs
via social incentives may backfire if it appears that only a small minority of people
are, in fact, showing the desired cooperative behavior. Because people generally
want to belong to the majority, a message such as “only 5% of people in this com-
munity recycle their garbage, and that’s why we want you to change your behavior”
is going to be highly counterproductive. As an illustration, a sign at Petrified
Forest National Park in Arizona attempts to prevent theft of petrified wood by
informing visitors about the regrettably high number of thefts each year. Field
experiments have shown that this antitheft sign depicting the prevalence of theft
actually increased theft by almost 300% (Cialdini, 2003; Griskevicius et al., 2012).
Third, individual differences matter in the way people respond to social
dilemma strategies. For instance, public education appeals to donate money or
behave more sustainably are going to be more persuasive among people with a
basic understanding of the problem, with strong personal norms, or a prosocial
disposition. Yet other individuals lacking the knowledge or motivation to change
are more likely to respond to individual reward and punishment (Van Lange
et al., 1997; Van Vugt et al., 1996; Wenzel, 2004). Similarly, we suspect that people
with high belongingness needs will be persuaded more strongly by social incen-
tives, for example, giving feedback on how well they are doing compared to their
neighbors in terms of their electricity use (Nolan, Schultz, Cialdini, Goldstein,
& Griskevicius, 2008). Finally, do different cultures respond differently to differ-
ent social dilemma strategies? Perhaps in more individualistic cultures, there is a
stronger aversion toward institutional strategies limiting people’s decision free-
dom, for example, whether or not to immunize their children against an infectious
disease. Yet such legislation may be more strongly endorsed in collectivistic cultures.
In the end, policymakers must try to tackle social dilemmas by finding
the right mix of strategies. Many social dilemmas in society are complex, and
solutions often require a good understanding of human social psychology and of
the importance of cultural norms, institutions, and governments. The recent chal-
lenge in Europe to save the Euro currency presents a good example of how a
complex social dilemma—countries contributing money to save the Euro—is
addressed by restructuring the problem in terms of a cooperative challenge
for all countries involved, strengthening a joint European identity, building in
penalties for “defecting” countries (like Greece, Spain and Portugal), and rely-
ing on fair and legitimate institutions to administer penalties.
This taxonomy helps us understand the game (read: situation) people are facing,
and the problems or opportunities that the game (again read: situation) affords.
This interdependence-based analysis not only provides key insights into the struc-
ture of the situation (what is the situation about?), it also emphasizes the relevance
of our own interaction goals (are we cooperative or not?) and those we attribute to
others in a global or concrete manner (are other people cooperative or not?). The
latter attributions or beliefs are, of course, closely linked to the concept of trust.
Evolutionary theory provides a meta-theoretical framework for understanding
the (ultimate) functions of trust and cooperation in social dilemmas, and how nat-
ural selection has shaped proximate psychological mechanisms, as discussed and
illustrated in Chapter 3. Evolutionary and psychological explanations complement
each other, of course, and together they can provide the bigger, and more complete,
picture of decision-making in social dilemmas (Van Vugt & Van Lange, 2006). To
illustrate, interdependence theory (and game theory) conveys the importance of
incomplete information for the development of cooperation. Given the focus of
social dilemmas on the conflict between self-interest and collective interest, coping
with incomplete information presupposes some degree of trust in others: “Does the
other person intentionally or unintentionally harm the collective interest?” From
an evolutionary perspective, acknowledging the role of incomplete information is
important because it challenges our thinking about the evolution of cooperation.
For example, it may help us understand why focusing on intentions rather than on
actual behaviors has functional value in an evolutionary sense. Even more, it may
help us understand the roots of generosity (Nowak & Sigmund, 1992). Proximally,
giving others the benefit of the doubt, especially when accompanied by the com-
munication of generosity, will enhance the level of trust the other has in your
intentions—which in turn is crucial for coping with uncertainty and incomplete
information (Van Lange et al., 2002). We are looking forward to a fruitful and
comprehensive integration of structural factors (the games we play), psychologi-
cal explanations (what we make of the game), and the ultimate functions these
factors serve in terms of psychological, economic, and evolutionary benefits (the
outcomes of playing the game).
■ INTERDEPENDENCE STRUCTURE
The following broader research themes may well receive increased empirical
attention in the future. An interdependence framework suggests the importance
of (a) availability of information, (b) the dimension of time, and (c) the unit of
analysis (individuals versus groups).
From complete to incomplete information. The notion that people are often
faced with incomplete information in social dilemmas is well recognized. In fact,
in most social dilemmas in the real world, people do not have complete infor-
mation about issues such as the preferences of interaction partners or how out-
comes are precisely determined by their own and others’ behavior (e.g., would
the other person really appreciate my initiative to complete a major portion of
a joint task?). Likewise, people often experience outcomes (e.g., the person did
not respond to my e-mail) but lack information about how these outcomes came
about (e.g., perhaps he was not able to use e-mail). Imperfect information may be
even more important in larger scale social dilemmas such as environmental social
dilemmas—for example, how limited are our natural resources?
Clearly, the concepts of noise, social uncertainty (lack of information about oth-
ers’ actions) and environmental uncertainty (lack of information about the objec-
tive state of affairs, such as resource size) are all important from both a scientific and
societal perspective. These structural and psychological factors might activate trust
or distrust, optimism or pessimism, or the closing or opening of one’s mind to new
information. In larger-scale social dilemmas, incomplete information might trigger
collective activities aimed at informing policy through research, and challenge the
ways in which authorities might communicate opportunities and risks, as well as
the specific ways in which people maintain sufficient levels of efficacy (the feeling
that their choice matters), trust in others’ cooperation, and willingness to
make a contribution themselves (Kerr, 2012; Parks et al., 2013; Van Vugt, 2009).
From present to future. The dimension of time is clearly very important in
social dilemmas. This is even more so in social dilemmas outside of the laboratory,
where various collective goals take time to materialize, where repeated interaction
unfolds over time, but where the individual costs are often in the here and now
and the benefits are much delayed. Environmental dilemmas are just one exam-
ple where the dimension of time clearly is important. Axelrod (1984) referred to
the shadow of the future as a mechanism that might help individuals realize that
cooperative action now will provide benefits over repeated interactions with the
same partner in the future. There is good support for this notion (e.g., Roth &
Murnighan, 1978). Moreover, there is even evidence that punishment is far more
effective when the time horizon is long (e.g., 50 trials) rather than short (e.g., 10
trials), which suggests that sometimes it takes time for groups to promote coop-
eration with one another through punishment (Gächter, Renner, & Sefton, 2008).
At the same time, there has been considerable research on temporal discounting,
showing that people are not always very good at sacrificing short-term interest and
prioritizing longer-term goals. This discounting mechanism might explain failure
to delay gratification in consumption, enjoying the cigarette or fattening snack
now while neglecting the possible consequences in the future, or delaying visits
to the dentist (e.g., Green, Myerson, Lichtman, Rosen, & Fry, 1996; Mischel, 2012;
Rachlin, 2006). Moreover, there is the intriguing issue of asymmetrical relations
among generations of people, involving issues of altruism, conflict, and fairness.
For example, elderly people have a shorter time horizon than younger people, yet
it might take a fair amount of effort or sacrifices from the elderly to maintain
a healthy environment for the next generations—such intergenerational social
dilemmas are real, and worthy of future study (Wade-Benzoni & Tost, 2009).
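The discounting mechanism discussed above is commonly modeled hyperbolically, with present value V = A / (1 + kD) for a reward A delayed by D (e.g., Rachlin, 2006). The sketch below uses an assumed discount rate purely for illustration.

```python
# A sketch of hyperbolic temporal discounting: a reward's present value falls
# with delay as V = A / (1 + k * D). The discount rate k is an assumed
# illustrative value, not an empirical estimate.

def discounted_value(amount, delay, k=0.05):
    """Present value of `amount` received after `delay` periods."""
    return amount / (1 + k * delay)

# A smaller immediate payoff can outweigh a larger delayed one:
immediate = discounted_value(50, delay=0)   # 50.0
delayed = discounted_value(100, delay=30)   # 100 / 2.5 = 40.0
print(immediate > delayed)  # True: short-term interest wins out
```

In a social dilemma, the delayed quantity is typically a collective benefit (a preserved resource, a healthy environment), so steep discounting of this kind works directly against cooperation.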
The overall point here is that, although social dilemma studies have addressed
the time dimension to a certain extent, there is clear potential to design studies
that address the time dimension more fully—for example, by using longitudi-
nal research designs. Theoretically, this would allow us to capture topics such as
self-control, delay of gratification, and long-term orientation in the social dilemma
context in which such mechanisms support collective interest, rather than just indi-
vidual interest (Joireman, Shaffer, Balliet, & Strathman, 2012; Strathman et al., 1994).
■ UNDERSTANDING PROCESSES
behave cooperatively ourselves (e.g., Todorov & Duchaine, 2008; Van Dijk, Van
Kleef, Steinel, & Van Beest, 2008). Such information may be gleaned from the face
as well as other bodily cues such as height, symmetry, and muscularity (Aviezer,
Trope, & Todorov, 2012; Spisak, Dekker, Kruger, & Van Vugt, 2012).
Cognition has already received a fair amount of attention in the social dilemma
literature. For example, the roles of framing and priming have been the subject of
empirical study, but as suggested earlier, these lines of research need to be comple-
mented by additional research to understand the mechanics of relatively subtle
influences, along with their boundary conditions. Earlier, we outlined the impor-
tance of studying nested social dilemmas involving the person, the group, and
the collective. We think that more subtle cognitive processes, for example catego-
rization effects, may play an important role in those complex but realistic social
dilemmas (Wit & Kerr, 2002; see also Kerr & Tindale, 2004). Also, in everyday life,
social dilemmas may sometimes be quite complex in that actors do not always
directly “see” how their own behavior might affect another person’s outcomes in
direct or indirect ways. In such situations, skill may matter, such as the ability to
adopt another person’s perspective, but such skill may also be promoted by pro-
social motivation—one might see the other’s preferences more clearly if one is
more strongly concerned about the other’s welfare (Van Doesum & Van Lange,
2013; see also Yamagishi, Hashimoto, & Schug, 2008). Likewise, it may take skill
(and perhaps will) to accurately read the emotions that people might express in
social dilemmas (see Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001),
which might guide our subsequent expectations, beliefs, and behavior. And in
addition to people’s own construal of the social dilemma, people may have expec-
tations or beliefs as to what game other people think they are facing, and these
meta-cognitive processes may also shape our expectations of others’ behavior and
our own behavior (see Halevy, Chou, & Murnighan, 2012).
Affect and emotions have received far less attention in social dilemma research.
This is surprising because some emotions—such as anger or guilt—may be pow-
erful determinants of decisions in social dilemmas, such as cooperation,
defection, and punishment. It is also possible that emotions play a somewhat dif-
ferent role if people do not play the game for money or points, but for outcomes
that may be seen as more personal or less universal—such as providing effort,
sharing information, or giving time. For example, a recent study revealed that giv-
ing time to friends or strangers, as opposed to receiving “free time” for oneself,
increases perceptions of having time, both in terms of the present and the future
(Mogilner, Chance, & Norton, 2012). It is also possible that people construe social
dilemmas differently once they have been told that spending money on others
(generosity) promotes happiness (Dunn, Aknin, & Norton, 2008). Taken together,
different cognitions and emotions play an important role in social dilemmas. It is
interesting how small variations in how we frame a social dilemma, how we see
others, and—very importantly—how we interpret the behavior of others can have
pronounced effects on behavior.
From students to seniors. Processes are importantly influenced not only by
the perceived, but also by the perceiver. We suggest the importance of a research
strategy that includes a broader sample than just university students. Evidence is
accumulating that even students might differ a fair amount in terms of the beliefs
regarding others’ behavior (Frank, Gilovich, & Regan, 1993), and in their own social
value orientations. Even at the beginning of the first year of their studies, the dominant
orientation among psychology majors is prosociality, whereas the dominant
orientation among economics majors is individualism (Van Lange et al., 2011).
Moreover, there is increasing evidence from various samples in the United States
that social class may matter, with those with a lower socioeconomic status (“lower
social class”) being more likely to adopt a prosocial orientation to various situa-
tions (e.g., Piff, Kraus, Côté, Cheng, & Keltner, 2010). Also, there is evidence that
the prevalence of prosocial orientations increases with age (Van Lange et al., 1997).
There is recent neuroscientific evidence suggesting that older individuals, relative
to younger individuals, are more trusting of people with untrustworthy faces, even
though they are about equally trusting of people with trustworthy faces (Castle et al.,
2012), suggesting some evidence for the role of learning and experience.
Why is this important? By focusing on university students in our samples, we
may be underestimating the importance of people who are more trusting in oth-
ers’ benign intent, and we may be underestimating a tendency to adopt a prosocial
orientation, perhaps especially a concern with egalitarianism. There may be other
differences as well, such as less crystallized attitudes, and less well-established
social networks (Sears, 1986). Such sample selection may account for underesti-
mation of trust and cooperation, as well as an overestimation of social influences
on cooperation, in that younger people might have a greater ability and motivation
to be open to new information and other perspectives. It may well be that these
issues are especially important for issues related to trust, fairness, and cooperation,
which are at the heart of social dilemmas.
From behavior to effective and efficient solutions. One issue that is important for
the implementation of theory-based knowledge about social dilemmas is rooted
in the distinction between effectiveness and efficiency, and the social processes
that are involved in this. It is one thing to conclude that a particular interven-
tion is effective (in that it elicits high levels of cooperation), but it is quite another
thing to conclude whether an intervention is efficient and socially fair. For exam-
ple, if people can punish one another, it is possible that the benefits of enhanced
cooperation do not outweigh the costs of maintaining an expensive sanctioning
system (see Balliet et al., 2011; Gächter et al., 2008). The same may be true for
rewarding cooperation, even though there is evidence that this may be both effective
and efficient (Kiyonari & Barclay, 2008). It is also interesting to note that because
punishments are costly to both the punisher and the punished, one might wonder
whether such costly acts might be replaced with more efficient mechanisms, such
as a concern for reputation. There is some evidence indicating that even when reputa-
tion as a mechanism is quite effective, people are still likely to punish free-riders
to further enhance cooperation (Rockenbach & Milinski, 2006). At the same time,
as we have seen in Chapter 5, this tendency may not be consistently observed
across all cultures, as there is some tendency in some cultures to punish not only
free-riders but also cooperators (Herrmann et al., 2008).
One issue that is very central to the development of cooperation is how a sanc-
tioning system is organized, implemented, and used. For example, it is often true
that relatively small groups in large societies, such as local communities, have
enormous potential to organize and manage themselves in cost-effective ways that
promote cooperation and prevent them from depleting natural resources (Ostrom
& Walker, 2003; Poteete, Janssen, & Ostrom, 2010). In small groups, people are
able to develop rules that match local circumstances, to monitor one another’s
behavior, and to punish free-riding and reward generosity quite effec-
tively. People care very strongly about their image or reputation in their local com-
munity, and so if the norms favoring cooperation are well-specified, then often the
mere presence of others makes a big difference. These are important virtues of a
local organization, formal or informal, relative to a more global authority.
These findings paint a picture in which the ways in which individuals relate
to each other in small groups and local communities are important to the over-
all functioning of society—and this suggests the strong positive reinforcement
among structural solutions, third-party intervention, and psychological solutions.
A case in point is Tyler and Degoey’s (1995) research on the 1991 water shortage
in California, which demonstrated that people exercised more restraint in their
water consumption if they felt treated more fairly by the local authorities.
Many of the insights described above were already recognized by the late
Elinor Ostrom, who suggested more than two decades ago that institutions could
play a very important role in the local management of natural resources, helping to
preserve them and avoid ecosystem collapse (Ostrom, 1990). In retrospect, her
insights in many ways reinforce conclusions that are now supported by research.
In particular, among smaller units such as dyads and small groups, it is trust and
reciprocity that matter (and we would add, generosity and forgiveness), along
with effective communication. Within a frame of sufficient vertical trust, people
will adopt accepting attitudes to governmental interventions, such as the provi-
sion of rewards and punishment, and some constraint on their autonomy. These
are also analyses of social dilemmas in which the various scientific fields and dis-
ciplines might inform one another to understand how small groups might help
effectively—and efficiently—manage and resolve ongoing social dilemmas.
Looking back and looking ahead, we conclude that the study of social dilem-
mas is “alive and kicking.” Over the years, the field has produced numerous
replicable findings, advanced our theoretical understanding of human coop-
eration, fostered communication among scientific disciplines, and has at least
made a beginning of applying such knowledge to help resolve social dilem-
mas in everyday life. Being dedicated social dilemmas researchers ourselves,
our observations may be a bit biased, of course. It is our strong conviction that
there is now a solid body of knowledge on the psychology of social dilemmas
that could be of exceptional utility in facing the numerous challenges—theo-
retical, empirical, methodological, and societal—that the field will encounter in
the future.
We already noted several avenues for future research. Further challenges are
to increase our understanding of the how and why of rewards and punishment,
Abrams, L. C., Cross, R., Lesser, E., & Levin, D. Z. (2003). Nurturing interpersonal trust in
knowledge-sharing networks. Academy of Management Executives, 17, 64–77.
Adams, G., & Markus, H. R. (2004). Toward a conception of culture suitable for a social
psychology of culture. In M. Schaller & C. S. Crandall (Eds.), Psychological foundations
of culture (pp. 335–360). Mahwah, NJ: Erlbaum.
Agarwal, R., Croson, R., & Mahoney, J. T. (2010). The role of incentives and communication
in strategic alliances: An experimental investigation. Strategic Management Journal, 31,
413–437.
Alcock, J. (1993). Animal behavior: An evolutionary approach. Sunderland, MA: Sinauer
Associates.
Alexander, R. D. (1987). The biology of moral systems. New York: Aldine de Gruyter.
Allison, S. T., McQueen, L. R., & Schaerfl, L. M. (1992). Social decision making processes
and the equal partitionment of shared resources. Journal of Experimental Social
Psychology, 28, 23–42.
Allison, S. T., & Messick, D. M. (1990). Social decision heuristics in the use of shared
resources. Journal of Behavioral Decision Making, 3, 195–204.
Anderson, L. R., DiTraglia, F. J., & Gerlach, J. R. (2011). Measuring altruism in a public
goods experiment: A comparison of U.S. and Czech subjects. Experimental Economics,
14, 426–437.
Anderson, L. R., Mellor, J. M., & Milyo, J. (2004). Social capital and contributions in a
public-goods experiment. American Economic Review, 94, 373–376.
André, J.-B., & Baumard, N. (2011). Social opportunities and the evolution of fairness.
Journal of Theoretical Biology, 289, 128–135.
Angle, H. L., & Perry, J. L. (1986). Dual commitment and labor-management relationship
climates. Academy of Management Journal, 29, 31–50.
Aquino, K., Grover, S. L., Goldman, B., & Folger, R. (2003). When push doesn’t come to
shove: Interpersonal forgiveness in workplace relationships. Journal of Management
Inquiry, 12, 209–216.
Archetti, M., & Scheuring, I. (2010). Coexistence of cooperation and defection in public
goods games. Evolution, 65, 1140–1148.
Arend, R. J., & Seale, D. A. (2005). Modeling alliance activity: An iterated prisoner’s
dilemma with exit option. Strategic Management Journal, 26, 1057–1074.
Argote, L. A., & Ingram, P. (2000). Knowledge transfer: A basis for competitive advantages
in firms. Organizational Behavior and Human Decision Processes, 82, 150–169.
Aryee, S., Budhwar, P. S., & Chen, Z. -X. (2002). Trust as a mediator of the relationship
between organizational justice and work outcomes: Test of a social exchange model.
Journal of Organizational Behavior, 23, 267–285.
Au, W. T., & Kwong, Y. Y. (2004). Measurements and effects of social value orientation in
social dilemmas: A review. In R. Suleiman, D. V. Budescu, I. Fischer, & D. M. Messick
(Eds.), Contemporary research on social dilemmas (pp. 71–98). New York: Cambridge
University Press.
153
154 ■ References
Au, W. T., & Ngai, M. Y. (2003). Effects of group size uncertainty and protocol of play in a
common pool resource dilemma. Group Processes and Intergroup Relations, 6, 265–283.
Avellar, K., & Kagan, S. (1976). Development of competitive behaviors in Anglo-American
and Mexican-American children. Psychological Reports, 39, 191–198.
Aviezer, H., Trope, Y., & Todorov, A. (2012). Body cues, not facial expressions, discriminate
between intense positive and negative emotions. Science, 338, 1225–1229.
Axelrod, R. (1984). The evolution of cooperation. New York: Basic Books.
Badaracco, J. L., & Webb, A. P. (1995). Business ethics: A view from the trenches. California
Management Review, 37 (2), 8–28.
Baldassarri, D., & Grossman, G. (2011). Centralized sanctioning and legitimate authority
promote cooperation in humans. Proceedings of the National Academy of Sciences, 108,
11023–11027.
Balliet, D. (2010). Communication and cooperation in social dilemmas: A meta-analytic
review. Journal of Conflict Resolution, 54, 39–57.
Balliet, D., Li, N., & Joireman, J. (2011). Relating trait self-control and forgiveness within
prosocials and proselfs: Compensatory vs. synergistic models. Journal of Personality and
Social Psychology, 101, 1090–1105.
Balliet, D., Li, N. P., Macfarlan, S. J., & Van Vugt, M. (2011a). Sex differences in
cooperation: A meta-analytic review of social dilemmas. Psychological Bulletin, 137,
881–909.
Balliet, D., Mulder, L. B., & Van Lange, P. A. M. (2011b). Reward, punishment, and
cooperation: A meta-analysis. Psychological Bulletin, 137, 594–614.
Balliet, D., Parks, C. D., & Joireman, J. (2009). Social value orientation and cooperation
in social dilemmas: A meta-analysis. Group Processes and Intergroup Relations, 12,
533–547.
Balliet, D., & Van Lange, P. A. M. (2013a). Trust, conflict, and cooperation: A meta-analysis.
Psychological Bulletin, 139, 1090–1112.
Balliet, D., & Van Lange, P. A. M. (2013b). Trust, punishment, and cooperation across 18
societies: A meta-analysis. Perspectives on Psychological Science, 8, 363–379.
Banaji, M. R., & Bhaskar, R. (2000). Implicit stereotypes and memory: The bounded
rationality of social beliefs. In D. L. Schacter & E. Scarry (Eds.), Memory, brain and
belief (pp. 139–175). Cambridge, MA: Harvard University Press.
Barclay, P. (2004). Trustworthiness and competitive altruism can also solve the “tragedy of
the commons.” Evolution and Human Behavior, 25, 209–220.
Barclay, P. (2006). Reputational benefits for altruistic punishment. Evolution and Human
Behavior, 27, 325–344.
Barclay, P. (2008). Enhanced recognition of defectors depends on their rarity. Cognition,
107, 817–828.
Barclay, P. (2010). Altruism as a courtship display: Some effects of third-party generosity on
audience perceptions. British Journal of Psychology, 101, 123–135.
Barclay, P. (2011). Competitive helping increases with the size of biological markets and
invades defection. Journal of Theoretical Biology, 281, 47–55.
Barclay, P., & Van Vugt, M. (in press). The evolution of human prosociality. In D. A.
Schroeder & W. G. Graziano (Eds.), Handbook of prosocial behavior. London: Sage.
Barclay, P., & Willer, R. (2007). Partner choice creates competitive altruism in humans.
Proceedings of the Royal Society-B, 274, 749–753.
Barling, J., Kelloway, E. K., & Bremermann, E. H. (1991). Preemployment predictors of
union attitudes: The role of family socialization and work beliefs. Journal of Applied
Psychology, 76, 725–731.
Baron-Cohen, S., Wheelwright, S., Hill, J., Raste, Y., & Plumb, I. (2001). The “Reading
the Mind in the Eyes” Test Revised Version: A study with normal adults, and adults
with Asperger Syndrome or High-Functioning Autism. Journal of Child Psychology and
Psychiatry, 42, 241–252.
Barrett, P. (2007). Structural equation modeling: Adjudging model fit. Personality and
Individual Differences, 42, 815–824.
Bateson, M., Nettle, D., & Roberts, G. (2006). Cues of being watched enhance cooperation
in a real-world setting. Biology Letters, 2, 412–414.
Batson, C. D. (1994). Why act for the public good? Four answers. Personality and Social
Psychology Bulletin, 20, 603–610.
Batson, C. D. (2011). Altruism in humans. New York: Oxford University Press.
Batson, C. D., & Ahmad, N. (2001). Empathy-induced altruism in a prisoner’s dilemma II: What
if the target of empathy has defected? European Journal of Social Psychology, 31, 25–36.
Batson, C. D., Batson, J. G., Todd, R. M., Brummett, B. H., Shaw, L. L., & Aldeguer, C.
M. R. (1995). Empathy and the collective good: Caring for one of the others in a social
dilemma. Journal of Personality and Social Psychology, 68, 619–631.
Batson, C. D., Sager, K., Garst, E., Kang, M., Rubchinsky, K., & Dawson, K. (1997). Is
empathy-induced helping due to self-other merging? Journal of Personality and Social
Psychology, 73, 495–509.
Baumard, N., André, J.-B., & Sperber, D. (2013). A mutualistic approach to morality.
Behavioral and Brain Sciences, 36, 59–122.
Bendor, J., Kramer, R. M., & Stout, S. (1991). When in doubt . . . : Cooperation in a noisy
prisoner’s dilemma. Journal of Conflict Resolution, 35, 691–719.
Bentham, J. (1789/1970). An introduction to the principles of morals and legislation.
London: Athlone Press.
Bergeron, D. M. (2007). The potential paradox of organizational citizenship behavior: Good
citizens at what cost? Academy of Management Review, 32, 1078–1095.
Bettencourt, L. A., Gwinner, K. P., & Meuter, M. L. (2001). A comparison of attitude,
personality, and knowledge predictors of service-oriented organizational citizenship
behaviors. Journal of Applied Psychology, 86, 29–41.
Blair, M. M., & Stout, L. A. (1999). A team production theory of corporate law. Virginia Law
Review, 85, 248–328.
Blair, M. M., & Stout, L. A. (2001). Trust, trustworthiness, and the behavioral foundations of
corporate law. University of Pennsylvania Law Review, 149, 1785–1789.
Blasi, J., Conte, M., & Kruse, D. (1996). Employee stock ownership and corporate
performance among public companies. Industrial and Labor Relations Review, 50, 60–79.
Bloche, M. G. (2002). Trust and betrayal in the medical marketplace. Stanford Law Review,
55, 919–954.
Bock, G. -W., Zmud, R. M., Kim, Y. -G., & Lee, J. -N. (2005). Behavioral intention formation
in knowledge sharing: Examining the roles of extrinsic motivators, social-psychological
forces, and organizational climate. MIS Quarterly, 29, 87–111.
Bogaert, S., Boone, C., & Declerck, C. (2008). Social value orientation and cooperation in
social dilemmas: A review and conceptual model. British Journal of Social Psychology,
47, 453–480.
Bohnet, I., Herrmann, B., & Zeckhauser, R. (2010). Trust and the reference points for
trustworthiness in Gulf and Western countries. Quarterly Journal of Economics, 125,
811–828.
Bolton, G. E., Katok, E., & Ockenfels, A. (2005). Cooperation among strangers with limited
information about reputation. Journal of Public Economics, 89, 1457–1468.
Bonacich, P., Shure, G. H., Kahan, J. P., & Meeker, R. J. (1976). Cooperation and group size
in the N-person prisoner’s dilemma. Journal of Conflict Resolution, 20, 687–706.
Bond, M. H., Leung, K., Au, A., Tong, K. -K., & Chemonges-Nielson, Z. (2004). Combining
social axioms with values in predicting behaviours. European Journal of Personality, 18,
177–191.
Bond, M. H., Leung, K., Au, A., Tong, K. -K., Reimel de Carrasquel, S., Murakami, F., et al.
(2004). Culture-level dimensions of social axioms and their correlates across 41 cultures.
Journal of Cross-Cultural Psychology, 35, 548–570.
Boone, J. L. (1998). The evolution of magnanimity: When is it better to give than to receive?
Evolution and Human Behavior, 19, 1–21.
Boone, C., Brabander, B. D., & van Witteloostuijn, A. (1999). The impact of personality
on behavior in five prisoner’s dilemma games. Journal of Economic Psychology, 20,
343–377.
Booth, A. L., & Chatterji, M. (1995). Union membership and wage bargaining when
membership is not compulsory. Economic Journal, 105, 345–360.
Borman, W. C., & Motowidlo, S. J. (1993). Expanding the criterion domain to include
elements of contextual performance. In N. Schmitt & W. C. Borman (Eds.), Personnel
selection in organizations (pp. 71–98). San Francisco: Jossey-Bass.
Borman, W. C., Penner, L. A., Allen, T. D., & Motowidlo, S. J. (2001). Personality predictors
of citizenship performance. International Journal of Selection and Assessment, 9, 52–69.
Bornstein, G. (1992). The free-rider problem in intergroup conflicts over step-level and
continuous public goods. Journal of Personality and Social Psychology, 62, 597–606.
Bornstein, G. (2003). Intergroup conflict: Individual, group, and collective interests.
Personality and Social Psychology Review, 7, 129–145.
Bornstein, G., & Ben-Yossef, M. (1994). Cooperation in intergroup and single-group social
dilemmas. Journal of Experimental Social Psychology, 30, 52–67.
Bos, P. A., Terburg, D., & Van Honk, J. (2010). Testosterone decreases trust in socially naïve
humans. Proceedings of the National Academy of Sciences, 107, 9991–9996.
Bowles, S. (2009). Did warfare among ancestral hunter-gatherers affect the evolution of
human social behaviors? Science, 324, 1293–1298.
Boyd, R., Gintis, H., Bowles, S., & Richerson, P. J. (2003). The evolution of altruistic
punishment. Proceedings of the National Academy of Sciences, 100, 3531–3535.
Boyd, R., & Lorberbaum, J. P. (1987). No pure strategy is evolutionarily stable in the
repeated prisoner’s dilemma game. Nature, 327, 58–59.
Boyd, R., & Richerson, P. J. (2002). Group beneficial norms can spread rapidly in a structured
population. Journal of Theoretical Biology, 215, 287–296.
Boyd, R., & Richerson, P. J. (2009). Culture and the evolution of human cooperation.
Philosophical Transactions of the Royal Society-B, 364, 3281–3288.
Bradfield, M., & Aquino, K. (1999). The effects of blame attributions and offender likableness
on forgiveness and revenge in the workplace. Journal of Management, 25, 607–631.
Brams, S. J. (1985). Superpower games. New Haven, CT: Yale University Press.
Brembs, B. (1996). Chaos, cheating, and cooperation: Potential solutions to the prisoner’s
dilemma. Oikos, 76, 14–24.
Brewer, M. B., & Kramer, R. M. (1986). Choice behavior in social dilemmas: Effects of social
identity, group size, and decision framing. Journal of Personality and Social Psychology,
50, 543–549.
Brosnan, S. F., Schiff, H. C., & de Waal, F. B. M. (2005). Tolerance for inequity may increase
with social closeness in chimpanzees. Proceedings of the Royal Society-B, 272, 253–258.
Brown, S. L., & Brown, M. (2006). Selective investment theory. Psychological Inquiry,
17, 1–29.
Brucks, W. M., & Van Lange, P. A. M. (2007). When prosocials act like proselfs in a commons
dilemma. Personality and Social Psychology Bulletin, 33, 750–758.
Brucks, W. M. & Van Lange, P. A. M. (2008). No control, no drive: How noise may
undermine conservation behavior in a commons dilemma. European Journal of Social
Psychology, 38, 810–822.
Buchan, N. R., Brewer, M. B., Grimalda, G., Wilson, R. K., Fatas, E., & Foddy, M. (2011).
Global social identity and global cooperation. Psychological Science, 22, 821–828.
Buchan, N. R., & Croson, R. (2004). The boundaries of trust: Own and other’s actions in the
US and China. Journal of Economic Behavior and Organization, 55, 485–504.
Buchan, N. R., Grimalda, G., Wilson, R., Brewer, M, Fatas, E., & Foddy, M. (2009).
Globalization and human cooperation. Proceedings of the National Academy of Sciences,
106, 4138–4142.
Buchan, N. R., Johnson, E. J., & Croson, R. T. A. (2006). Let’s get personal: An international
examination of the influence of communication, culture, and social distance on other
regarding preferences. Journal of Economic Behavior and Organization, 60, 373–398.
Budescu, D. V., Au, W., & Chen, X. -P. (1996). Effects of protocol of play and social orientation
on behavior in sequential resource dilemmas. Organizational Behavior and Human
Decision Processes, 69, 179–194.
Budescu, D. V., Erev, I., & Zwick, R. (Eds.) (1999). Games and human behavior. Mahwah,
NJ: Lawrence Erlbaum Associates.
Budescu, D. V., Rapoport, A., & Suleiman, R. (1990). Resource dilemmas with environmental
uncertainty and asymmetrical players. European Journal of Social Psychology, 20,
475–487.
Burger, J., Ostrom, E., Norgaard, R. B., Policansky, D., & Goldstein, B. D. (Eds.) (2001).
Protecting the commons: A framework for resource management in the Americas.
Washington, DC: Island Press.
Burnham, T. C., & Hare, B. (2007). Engineering human cooperation: Does involuntary
neural activation increase public goods contributions? Human Nature, 18, 88–108.
Burnstein, E., Crandall, C., & Kitayama, S. (1994). Some neo-Darwinian decision rules for
altruism: Weighing cues for inclusive fitness as a function of biological importance of
the decision. Journal of Personality and Social Psychology, 67, 773–789.
Butler, J. K. (1999). Trust expectations, information sharing, climate of trust, and negotiation
effectiveness and efficiency. Group and Organization Management, 24, 217–238.
Cabrera, A., & Cabrera, E. F. (2002). Knowledge-sharing dilemmas. Organization Studies,
23, 687–710.
Cabrera, A., Collins, W. C., & Salgado, J. F. (2006). Determinants of individual engagement
in knowledge sharing. International Journal of Human Resource Management, 17,
245–264.
Caldwell, M. D. (1976). Communication and sex effects in a five-person prisoner’s dilemma
game. Journal of Personality and Social Psychology, 33, 273–280.
Cameron, L. D., Brown, P. M., & Chapman, J. G. (1998). Social value orientation and
decisions to take proenvironmental action. Journal of Applied Social Psychology, 28,
675–697.
Campbell, W. K., Bush, C. P., & Brunell, A. B. (2005). Understanding the social costs of
narcissism: The case of the tragedy of the commons. Personality and Social Psychology
Bulletin, 31, 1358–1368.
Caporael, L. R., Dawes, R. M., Orbell, J. M., & van de Kragt, A. J. C. (1989). Selfishness
examined: Cooperation in the absence of egoistic incentives. Behavioral and Brain
Sciences, 12, 683–699.
Cardenas, J. C., Chong, A., & Nopo, H. (2008). To what extent do Latin Americans trust
and cooperate? Field experiments on social exclusion in six Latin American countries.
Economía, 9, 45–88.
Carnevale, P. J., & Pruitt, D. G. (1992). Negotiation and mediation. Annual Review of
Psychology, 43, 531–582.
Carpenter, J., & Cardenas, J. C. (2011). An intercultural examination of cooperation in the
commons. Journal of Conflict Resolution, 55, 632–651.
Carpenter, J. Daniere, A. G., & Takahashi, L. M. (2004). Cooperation, trust, and social
capital in southeast Asian urban slums. Journal of Economic Behavior and Organization,
55, 533–551.
Castle, E., Eisenberger, N. I., Seeman, T. E., Moons, W. G., Boggero, I. A., Grinblatt, M. S., &
Taylor, S. E. (2012). Neural and behavioral bases of age differences in perceptions of
trust. Proceedings of the National Academy of Sciences, 109, 20848–20852.
Chaison, G. N., & Dhavale, D. G. (1992). The choice between union membership and
free-rider status. Journal of Labor Research, 13, 355–369.
Charlwood, A. (2002). Why do non-union employees want to unionize? Evidence from
Britain. British Journal of Industrial Relations, 40, 463–491.
Chatman, J. A., & Barsade, S. G. (1995). Personality, organizational culture, and
cooperation: Evidence from a business simulation. Administrative Science Quarterly, 40,
423–443.
Chen, C. C., Chen, X.-P., & Meindl, J. R. (1998). How can cooperation be fostered? The cultural
effects of individualism-collectivism. Academy of Management Review, 23, 285–304.
Chen, X. -P. (1996). The group-based binding pledge as a solution to public goods problems.
Organizational Behavior and Human Decision Processes, 66, 192–202.
Chen, X. -P., Au, W. T., & Komorita, S. S. (1996). Sequential choice in a step-level public
good dilemma: The effects of criticality and uncertainty. Organizational Behavior and
Human Decision Processes, 65, 37–47.
Chen, X. -P., & Bachrach, D. G. (2003). Tolerance of free-riding: The effects of defection
size, defection pattern, and social orientation in a repeated public goods dilemma.
Organizational Behavior and Human Decision Processes, 90, 139–147.
Chen, X. -P., Pillutla, M. M., & Yao, X. (2009). Unintended consequences of cooperation
inducing and maintaining mechanisms in public goods dilemmas: Sanctions and moral
appeals. Group Processes and Intergroup Relations, 12, 241–255.
Choi, Y., & Mai-Dalton, R. R. (1999). The model of followers’ responses to self-sacrificial
leadership: An empirical test. Leadership Quarterly, 10, 397–421.
Christensen, L. (1988). Deception in psychological research: When is its use justified?
Personality and Social Psychology Bulletin, 14, 664–675.
Cialdini, R. B. (2003). Crafting normative messages to protect the environment. Current
Directions in Psychological Science, 12, 105–109.
Cialdini, R. B., Brown, S. L., Lewis, B. P., Luce, C., & Neuberg, S. L. (1997). Reinterpreting
the empathy-altruism relationship: When one into one equals oneness. Journal of
Personality and Social Psychology, 73, 481–494.
Cinyabuguma, M., Page, T., & Putterman, L. (2005). Cooperation under the threat of
expulsion in a public goods experiment. Journal of Public Economics, 89, 1421–1435.
Clark, K., & Sefton, M. (2001). The sequential prisoner’s dilemma: Evidence on reciprocation.
Economic Journal, 111, 51–68.
Clark, M. S., & Mills, J. (1993). The difference between communal and exchange
relationships: What it is and is not. Personality and Social Psychology Bulletin, 19, 684–691.
Clary, E. G., Snyder, M., Ridge, R. D., Copeland, J., Stukas, A. A., Haugen, J., & Miene,
P. (1998). Understanding and assessing the motivations of volunteers: A functional
approach. Journal of Personality and Social Psychology, 74, 1516–1530.
Cohen, D. (2007). Methods in cultural psychology. In S. Kitayama & D. Cohen (Eds.),
Handbook of cultural psychology (pp. 196–236). New York: Guilford.
Colbert, A. E., Mount, M. K., Harter, J. K., Witt, L. A., & Barrick, M. R. (2004). Interactive
effects of personality and perceptions of the work situation on workplace deviance.
Journal of Applied Psychology, 89, 599–609.
Connelly, C. E., & Kelloway, E. K. (2003). Predictors of employees’ perceptions of knowledge
sharing cultures. Leadership and Organizational Development Journal, 24, 294–301.
Constant, D., Kiesler, S., & Sproull, L. (1994). What’s mine is ours, or is it? A study of
attitudes about information sharing. Information Systems Research, 5, 400–421.
Conybeare, J. A. C. (1984). Public goods, prisoner’s dilemmas and the international political
economy. International Studies Quarterly, 28, 5–22.
Cook, K. S., Hardin, R., & Levi, M. (2005). Cooperation without trust? New York: Russell
Sage Foundation.
Coombs, C. H. (1973). A reparameterization of the prisoner’s dilemma game. Behavioral
Science, 18, 424–428.
Cosmides, L., Barrett, H. C., & Tooby, J. (2010). Adaptive specializations, social exchange,
and the evolution of human intelligence. Proceedings of the National Academy of Sciences,
107, 9007–9014.
Cox, T. H., Lobel, S. A., & McLeod, P. L. (1991). Effects of ethnic group cultural differences
in cooperative and competitive behavior on a group task. Academy of Management
Journal, 34, 827–847.
Cress, U., & Kimmerle, J. (2007). Guidelines and feedback in information exchange: The
impact of behavioral anchors and descriptive norms in a social dilemma. Group
Dynamics, 11, 42–53.
Cress, U., & Kimmerle, J. (2008). Endowment heterogeneity and identifiability in the
information-exchange dilemma. Computers in Human Behavior, 24, 862–874.
Cress, U., Kimmerle, J., & Hesse, F. W. (2006). Information exchange with shared
databases as a social dilemma: The effect of metaknowledge, bonus systems, and cost.
Communication Research, 33, 370–390.
Crone, E. A., Will, G. J., Overgaauw, S., & Güroğlu, B. (in press). Social decision-making
in childhood and adolescence. In P. A. M. Van Lange, B. Rockenbach, & T. Yamagishi
(Eds.), Social dilemmas: New perspectives on reward and punishment. New York: Oxford
University Press.
Cropanzano, R., & Byrne, Z. M. (2000). Workplace justice and the dilemma of organizational
citizenship. In M. van Vugt, M. Snyder, T. R. Tyler, & A. Biel (Eds.), Cooperation in
modern society (pp. 142–161). New York: Routledge.
Cross, J. G., & Guyer, M. J. (1980). Social traps. Ann Arbor, MI: University of Michigan Press.
Darley, J. M. (2004). The cognitive and social psychology of contagious organizational
corruption. Brooklyn Law Review, 70, 1177–1194.
Dawes, R. M. (1980). Social dilemmas. Annual Review of Psychology, 31, 169–193.
Dawes, R. M., & Messick, D. M. (2000). Social dilemmas. International Journal of Psychology,
35, 111–116.
Dawes, R. M., McTavish, J., & Shaklee, H. (1977). Behavior, communication, and
assumptions about other people’s behavior in a commons dilemma situation. Journal of
Personality and Social Psychology, 35, 1–11.
Dawes, R. M., van de Kragt, A. J. C., & Orbell, J. M. (1988). Not me or thee but we: The
importance of group identity in eliciting cooperation in dilemma situations: Experimental
manipulations. Acta Psychologica, 68, 83–97.
Dawkins, R. (2006). The selfish gene (30th anniversary ed.). Oxford: Oxford University Press.
DeBruine, L. M. (2002). Facial resemblance enhances trust. Proceedings of the Royal
Society-B, 269, 1307–1312.
DeBruine, L. M. (2005). Trustworthy but not lust-worthy: Context-specific effects of facial
resemblance. Proceedings of the Royal Society-B, 272, 919–922.
De Cremer, D. (2000). Leadership selection in social dilemmas—Not all prefer it: The
moderating effect of social value orientation. Group Dynamics, 330–337.
De Cremer, D., & Barker, M. (2003). Accountability and cooperation in social dilemmas: The
influence of others’ reputational concerns. Current Psychology, 22, 155–163.
De Cremer, D., & Van Dijk, E. (2005). When and why leaders put themselves first: Leader
behaviour in resource allocations as a function of feeling entitled. European Journal of
Social Psychology, 35, 553–563.
De Cremer, D., & Van Dijk, E. (2011). On the near miss in public good dilemmas: How
upward counterfactuals influence group stability when the group fails. Journal of
Experimental Social Psychology, 47, 139–146.
De Cremer, D., & Van Lange, P. A. M. (2001). Why prosocials exhibit greater cooperation
than proselfs: The roles of social responsibility and reciprocity. European Journal of
Personality, 15, S5–S18.
De Cremer, D., & Van Vugt, M. (1999). Social identification effects in social dilemmas:
A transformation of motives. European Journal of Social Psychology, 29, 871–893.
De Dreu, C. K. W., & Boles, T. L. (1998). Share and share alike or winner take all? The
influence of social value orientation upon choice and recall of negotiation heuristics.
Organizational Behavior and Human Decision Processes, 76, 253–276.
De Dreu, C. K. W., Giacomantonio, M., Shalvi, S., & Sligte, D. J. (2009). Getting stuck or
stepping back: Effects of obstacles and construal level in the negotiation of creative
solutions. Journal of Experimental Social Psychology, 45, 542–548.
De Dreu, C. K. W., Greer, L. L., Handgraaf, M. J. J., Shalvi, S., Van Kleef, G. A., Baas, M.,
et al. (2010). The neuropeptide oxytocin regulates parochial altruism in intergroup
conflict among humans. Science, 328, 1408–1411.
De Dreu, C. K. W., & McCusker, C. (1997). Gain–loss frames and cooperation in
two-person social dilemmas: A transformational analysis. Journal of Personality and
Social Psychology, 72, 1093–1106.
De Dreu, C. K. W., Weingart, L. R., & Kwon, S. (2000). Influence of social motives on
integrative negotiation: A meta-analytic review and test of two theories. Journal of
Personality and Social Psychology, 78, 889–905.
De Herdt, T. (2003). Cooperation and fairness: The Flood-Dresher experiment revisited.
Review of Social Economy, 61, 183–210.
De Hooge, I. E., Breugelmans, S. M., & Zeelenberg, M. (2008). Not so ugly after all: When
shame acts as a commitment device. Journal of Personality and Social Psychology, 95,
933–943.
De Kwaadsteniet, E. W., Van Dijk, E., Wit, A., & De Cremer, D. (2006). Social dilemmas as
strong versus weak situations: Social value orientations and tacit coordination under
resource uncertainty. Journal of Experimental Social Psychology, 42, 509–516.
De Quervain, D. J. F., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck, A.,
& Fehr, E. (2004). The neural basis of altruistic punishment. Science, 305, 1254–1258.
de Waal, F. B. M., & Suchak, M. (2010). Prosocial primates: Selfish and unselfish motivations.
Philosophical Transactions of the Royal Society-B, 365, 2711–2722.
Deci, E. L., Koestner, R., & Ryan, R. M. (1999). A meta-analytic review of experiments
investigating the effects of extrinsic rewards on intrinsic motivation. Psychological
Bulletin, 125, 627–668.
Deery, S. J., Iverson, R. D., & Erwin, P. J. (1994). Predicting organizational and union
commitment: The effect of industrial relations climate. British Journal of Industrial
Relations, 32, 581–597.
Delton, A. W., Krasnow, M. M., Cosmides, L., & Tooby, J. (2011). Evolution of direct
reciprocity under uncertainty can explain human generosity in one-shot encounters.
Proceedings of the National Academy of Sciences, 108, 13335–13340.
Den Hartog, D. N., De Hoogh, A. H. B., & Keegan, A. E. (2007). The interactive effects of
belongingness and charisma on helping and compliance. Journal of Applied Psychology,
92, 1131–1139.
Dennett, D. C. (2006). Breaking the spell. New York: Viking.
Diekmann, A. (1985). Volunteer’s dilemma. Journal of Conflict Resolution, 29, 605–610.
Dietz, T., Ostrom, E., & Stern, P. C. (2003). The struggle to govern the commons. Science,
302, 1907–1912.
Dolšak, N., & Ostrom, E. (2003). The commons in the new millennium: Challenges and
adaptations. Cambridge, MA: MIT Press.
Domino, G. (1992). Cooperation and competition in Chinese and American children.
Journal of Cross-Cultural Psychology, 23, 456–467.
Doney, P. M., Cannon, J. P., & Mullen, M. R. (1998). Understanding the influence of national
culture on the development of trust. Academy of Management Review, 23, 601–620.
Dudley, S. A., & File, A. L. (2007). Kin recognition in an annual plant. Biology Letters, 3,
435–438.
Dukerich, J. M., Golden, B. R., & Shortell, S. M. (2002). Beauty is in the eye of the
beholder: The impact of organizational identification, identity, and image on the
cooperative behaviors of physicians. Administrative Science Quarterly, 47, 507–533.
Dunbar, R. I. M., Baron, R., Frangou, A., Pearce, E., van Leeuwen, E. J. C., Stow, J., et al.
(2012). Social laughter is correlated with an elevated pain threshold. Proceedings of the
Royal Society-B, 279, 1161–1167.
Dunlop, P. D., & Lee, K. (2004). Workplace deviance, organizational citizenship behavior,
and business unit performance: The bad apples do spoil the whole barrel. Journal of
Organizational Behavior, 25, 67–80.
Dunn, E. W., Aknin, L. B., & Norton, M. I. (2008). Spending money on others promotes
happiness. Science, 319, 1687–1688.
Earley, P. C. (1989). Social loafing and collectivism: A comparison of the United States and
the People’s Republic of China. Administrative Science Quarterly, 34, 565–581.
Earley, P. C. (1993). East meets West meets Mideast: Further explorations of collectivistic
and individualistic work groups. Academy of Management Journal, 36, 319–348.
Eek, D., & Gärling, T. (2006). Prosocials prefer equal outcomes to maximizing joint
outcomes. British Journal of Social Psychology, 45, 321–337.
Egas, M., & Riedl, A. (2008). The economics of altruistic punishment and the maintenance
of cooperation. Proceedings of the Royal Society-B, 275, 871–878.
Ehrhart, M. G. (2004). Leadership and procedural justice climate as antecedents of unit-level
organizational citizenship behavior. Personnel Psychology, 57, 61–94.
Eisenberg, M. A. (1998). Corporate conduct that does not maximize shareholder gain: Legal
conduct, ethical conduct, the penumbra effect, reciprocity, the prisoner’s dilemma,
sheep’s clothing, social conduct, and disclosure. Stetson Law Review, 28, 1–27.
Eisenberger, N. I., Lieberman, M. D., & Williams, K. D. (2003). Does rejection hurt? An
fMRI study of social exclusion. Science, 302, 290–292.
Elffers, H. (2000). But taxpayers do cooperate! In M. Van Vugt, M. Snyder, T. R. Tyler, & A.
Biel (Eds.), Cooperation in modern society (pp. 184–194). New York: Routledge.
Ellemers, N., de Gilder, D., & van den Heuvel, H. (1998). Career-oriented versus
team-oriented commitment and behavior at work. Journal of Applied Psychology, 83,
717–730.
Epley, N., & Huff, C. (1998). Suspicion, affective response, and educational benefit as a
result of deception in psychology research. Personality and Social Psychology Bulletin,
24, 759–768.
Evans, A. M., & Krueger, J. I. (2010). Elements of trust: Risk and perspective-taking. Journal
of Experimental Social Psychology, 47, 171–177.
Feather, N. T., & Rauter, K. A. (2004). Organizational citizenship behaviours in relation to
job status, job insecurity, organizational commitment and identification, job satisfaction
and work values. Journal of Occupational and Organizational Psychology, 77, 81–94.
Fehr, E., Bernhard, H., & Rockenbach, B. (2008). Egalitarianism in young children. Nature,
454, 1079–1083.
Fehr, E., & Gächter, S. (2000). Cooperation and punishment in public goods experiments.
American Economic Review, 90, 980–994.
Fehr, E., & Gächter, S. (2002). Altruistic punishment in humans. Nature, 415, 137–140.
Fehr, E., & Gintis, H. (2007). Human motivation and social cooperation: Experimental and
analytical foundations. Annual Review of Sociology, 33, 43–64.
Fehr, E., & Schmidt, K. M. (1999). A theory of fairness, competition, and cooperation. The
Quarterly Journal of Economics, 114, 817–868.
Fischbacher, U., Gächter, S., & Fehr, E. (2001). Are people conditionally cooperative?
Evidence from a public goods experiment. Economics Letters, 71, 397–404.
Fiske, S. T. (2004). Social beings: A core motives approach to social psychology. Hoboken,
NJ: Wiley.
Flood, M. M. (1952). Some experimental games. Research memorandum RM-789. Santa
Monica, CA: RAND Corporation.
Foddy, M., Platow, M. J., & Yamagishi, T. (2009). Group-based trust in strangers: The role of
stereotypes and group heuristics. Psychological Science, 20, 419–422.
Foddy, M., Smithson, M., Schneider, S., & Hogg, M. (Eds.) (1999). Resolving social
dilemmas: Dynamic, structural, and intergroup aspects. Philadelphia: Psychology Press.
Foster, K. R., Wenseleers, T., Ratnieks, F. L. W., & Queller, D. C. (2006). There is nothing
wrong with inclusive fitness. Trends in Ecology and Evolution, 21, 599–600.
Frank, R. H. (1988). Passions within reason. New York: Norton.
Frank, R. H., Gilovich, T., & Regan, D. T. (1993). Does studying economics inhibit
cooperation? Journal of Economic Perspectives, 7, 159–171.
Frank, R. H., Gilovich, T., & Regan, D. T. (1996). Do economists make bad citizens? Journal
of Economic Perspectives, 10, 187–192.
References ■ 163
Fréchet, M. (1953). Emile Borel, initiator of the theory of psychological games and its
application. Econometrica, 21, 95–96.
Freeman, R., Kruse, D., & Blasi, J. (2010). Worker responses to shirking under shared
capitalism. In D. Kruse, R. Freeman, & J. Blasi (Eds.), Shared capitalism at work (pp.
77–103). Chicago: University of Chicago Press.
Frey, B. S., & Jegen, R. (2001). Motivation crowding theory. Journal of Economic Surveys, 15, 589–611.
Fukuyama, F. (1995). Trust: The social virtues and the creation of prosperity.
New York: Free Press.
Fullagar, C., & Barling, J. (1989). A longitudinal test of a model of the antecedents and
consequences of union loyalty. Journal of Applied Psychology, 74, 213–227.
Gächter, S., & Herrmann, B. (2009). Reciprocity, culture and human cooperation: Previous
insights and a new cross-cultural experiment. Philosophical Transactions of the Royal
Society-B, 364, 791–806.
Gächter, S., & Herrmann, B. (2011). The limits of self-governance when cooperators get
punished: Experimental evidence from urban and rural Russia. European Economic
Review, 55, 193–210.
Gächter, S., Herrmann, B., & Thöni, C. (2004). Trust, voluntary cooperation, and
socio-economic background: Survey and experimental evidence. Journal of Economic
Behavior and Organization, 55, 505–531.
Gächter, S., Herrmann, B., & Thöni, C. (2010). Culture and cooperation. Philosophical
Transactions of the Royal Society-B, 365, 2651–2661.
Gächter, S., Renner, E., & Sefton, M. (2008). The long-run benefits of punishment. Science,
322, 1510.
Gallo, P., & Sheposh, J. (1971). Effects of incentive magnitude on cooperation in the prisoner’s
dilemma game: A reply to Gumpert, Deutsch, and Epstein. Journal of Personality and
Social Psychology, 19, 42–46.
Gardner, A., & West, S. A. (2004). Spite and the scale of competition. Journal of Evolutionary
Biology, 17, 1195–1203.
Gardner, G. T., & Stern, P. C. (2002). Environmental problems and human behavior. Needham
Heights, MA: Allyn & Bacon.
Garman, M., & Kamien, M. I. (1968). The paradox of voting: Probability calculations.
Behavioral Science, 13, 306–316.
Gintis, H. (2003). The hitchhiker’s guide to altruism: Gene-culture coevolution and the
internalization of norms. Journal of Theoretical Biology, 220, 407–418.
Gintis, H. (2007). A framework for the integration of the behavioral sciences. Behavioral
and Brain Sciences, 30, 1–61.
Gintis, H., Smith, E. A., & Bowles, S. (2001). Costly signaling and cooperation. Journal of
Theoretical Biology, 213, 103–119.
Glimcher, P. W., Camerer, C., Fehr, E., & Poldrack, R. A. (2008). Neuroeconomics:
Decision-making and the brain. New York: Elsevier.
Gneezy, U., & Rustichini, A. (2004). Incentives, punishment and behavior. Advances in
Behavioral Economics, 572–589.
Gottschalg, O., & Zollo, M. (2007). Interest alignment and competitive advantage. Academy
of Management Review, 32, 418–437.
Green, L., Myerson, J., Lichtman, D., Rosen, S., & Fry, A. (1996). Temporal discounting in choice
between delayed rewards: The role of age and income. Psychology and Aging, 11, 79–84.
Griesinger, D. W., & Livingston, J. W. (1973). Toward a model of interpersonal motivation
in experimental games. Behavioral Science, 18, 173–188.
Griskevicius, V., Cantu, S., & Van Vugt, M. (2012). The evolutionary bases for sustainable
behavior. Journal of Public Policy and Marketing, 31, 115–128.
Griskevicius, V., Tybur, J. M., & van den Bergh, B. (2010). Going green to be seen: Status,
reputation, and conspicuous conservation. Journal of Personality and Social Psychology,
98, 392–404.
Groarke, L. (2008). Ancient skepticism. Stanford Encyclopedia of Philosophy, http://plato.
stanford.edu/entries/skepticism-ancient/#EPE, retrieved 12/7/09.
Gruber, J., Mauss, I. B., & Tamir, M. (2011). A dark side of happiness? How, when, and why
happiness is not always good. Perspectives on Psychological Science, 6, 222–233.
Gumpert, P., Deutsch, M., & Epstein, Y. (1969). Effect of incentive magnitude on cooperation
in the prisoner’s dilemma game. Journal of Personality and Social Psychology, 11, 66–69.
Gürerk, Ö., Irlenbusch, B., & Rockenbach, B. (2006). The competitive advantage of
sanctioning institutions. Science, 312, 108–111.
Gurven, M. (2004). To give and to give not: the behavioral ecology of human food transfers.
Behavioral and Brain Sciences, 27, 543–583.
Gurven, M., Allen-Arave, W., Hill, K., & Hurtado, A. M. (2000). “It’s a Wonderful
Life”: signaling generosity among the Ache of Paraguay. Evolution and Human Behavior,
21, 263–282.
Gustafsson, M., Biel, A., & Gärling, T. (1999). Over-harvesting of resources of unknown
size. Acta Psychologica, 103, 47–64.
Guyer, M., Fox, J., & Hamburger, H. (1973). Format effects in the prisoner’s dilemma.
Journal of Conflict Resolution, 17, 719–744.
Halevy, N., Bornstein, G., & Sagiv, L. (2008). “In-group love” and “out-group hate” as
motives for individual participation in intergroup conflict. Psychological Science, 19,
405–411.
Halevy, N., Chou, E. Y., & Murnighan, J. K. (2012). Mind games: The mental representation
of conflict. Journal of Personality and Social Psychology, 102, 132–148.
Haley, K. J., & Fessler, D. M. T. (2005). Nobody’s watching? Subtle cues enhance generosity
in an anonymous economic game. Evolution and Human Behavior, 26, 245–256.
Hamburger, H. (1973). N-person prisoner’s dilemma. Journal of Mathematical Sociology,
3, 27–48.
Hamburger, H. (1979). Games as models of social phenomena. San Francisco: W.H. Freeman.
Hamburger, H., Guyer, M., & Fox, J. (1975). Group size and cooperation. Journal of Conflict
Resolution, 19, 503–531.
Hamilton, W. D. (1964). The genetical evolution of social behaviour (I and II). Journal of
Theoretical Biology, 7, 1–52.
Hammer, T. H., & Berman, M. (1981). The role of noneconomic factors in faculty union
voting. Journal of Applied Psychology, 66, 415–421.
Harbaugh, W. T. (1998). What do donations buy? A model of philanthropy based on
prestige and warm glow. Journal of Public Economics, 67, 269–284.
Hardin, G. (1968). The tragedy of the commons. Science, 162, 1243–1248.
Hardy, C. L., & Van Vugt, M. (2006). Nice guys finish first: The competitive altruism
hypothesis. Personality and Social Psychology Bulletin, 32, 1402–1413.
Harris, A. C., & Madden, G. J. (2002). Delay discounting and performance on the prisoner’s
dilemma game. Psychological Record, 52, 429–440.
Hart, C. M., & van Vugt, M. (2006). From fault line to group fission: Understanding membership
changes in small groups. Personality and Social Psychology Bulletin, 32, 392–404.
Haruno, M., & Frith, C. D. (2009). Activity in the amygdala elicited by unfair divisions
predicts social value orientation. Nature Neuroscience, 13, 160–161.
Haselton, M. G., & Buss, D. M. (2000). Error management theory: a new perspective on
biases in cross-sex mind reading. Journal of Personality and Social Psychology, 78, 81–91.
Hawkes, K. (1991). Showing off: tests of an hypothesis about men’s foraging goals. Ethology
and Sociobiology, 12, 29–54.
Hemesath, M. (1994). Cooperate or defect? Russian and American students in a prisoner’s
dilemma. Comparative Economic Studies, 36, 83–93.
Hemesath, M., & Pomponio, X. (1998). Cooperation and culture: Students from China and
the United States in a prisoner’s dilemma. Cross-Cultural Research, 32, 171–184.
Henle, C. A. (2005). Predicting workplace deviance from the interaction between
organizational justice and personality. Journal of Managerial Issues, 17, 247–263.
Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., & McElreath, R. (2001).
In search of homo economicus: Behavioral experiments in 15 small-scale societies.
American Economic Review, 91, 73–78.
Henrich, J., Boyd, R., & Richerson, P. J. (2008). Five misunderstandings about cultural
evolution. Human Nature, 19, 119–137.
Henrich, J., Ensminger, J., McElreath, R., Barr, A., Barrett, C., Bolyanatz, A., et al. (2010).
Markets, religion, community size, and the evolution of fairness and punishment.
Science, 327, 1480–1484.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world?
Behavioral and Brain Sciences, 33, 61–135.
Henrich, J., McElreath, R., Barr, A., Ensminger, J., Barrett, C., Bolyanatz, A., et al. (2006).
Costly punishment across human societies. Science, 312, 1767–1770.
Henrich, J., & Henrich, N. (2006). Culture, evolution and the puzzle of human cooperation.
Cognitive Systems Research, 7, 220–245.
Henrich, N., & Henrich, J. (2007). Why humans cooperate. Oxford: Oxford University Press.
Herrmann, B., Thöni, C., & Gächter, S. (2008). Antisocial punishment across societies.
Science, 319, 1362–1367.
Hertel, G., & Fiedler, K. (1994). Affective and cognitive influences in a social dilemma
game. European Journal of Social Psychology, 24, 131–145.
Higgins, E. T. (1998). Promotion and prevention: Regulatory focus as a motivational
principle. Advances in Experimental Social Psychology, 30, 1–46.
Hinsz, V. B., Tindale, R. S., & Vollrath, D. A. (1997). The emerging conceptualization of
groups as information processors. Psychological Bulletin, 121, 43–64.
Hobbes, T. (1651/1985). Leviathan. New York: Viking Penguin.
Hofstede, G. (2001). Culture’s consequences: Comparing values, behaviors, institutions, and
organizations across nations (2nd Edition). London, UK: Sage Publications.
Holmström, B., & Milgrom, P. (1990). Regulating trade among agents. Journal of Institutional
and Theoretical Economics, 146, 85–105.
Hong, R. Y., & Wong, Y. (2005). Dynamic influences of culture and cooperation in a
prisoner’s dilemma. Psychological Science, 16, 429–434.
Hsu, M.-H., Ju, T. L., Yen, C.-H., & Chang, C.-M. (2007). Knowledge sharing behavior
in virtual communities: The relationship between trust, self-efficacy, and outcome
expectations. International Journal of Human-Computer Studies, 65, 153–169.
Huff, L., & Kelley, L. (2003). Levels of organizational trust in individualist versus collectivist
societies: A seven-nation study. Organization Science, 14, 81–90.
Hui, C., Law, K. S., & Chen, Z.-X. (1999). A structural equation model of the effects of
negative affectivity, leader-member exchange, and perceived job mobility on in-role and
extra-role performance: A Chinese case. Organizational Behavior and Human Decision
Processes, 77, 3–21.
Inglehart, R., & Baker, W. E. (2000). Modernization, cultural change, and the persistence of
traditional values. American Sociological Review, 65, 19–51.
Inglehart, R., Basanez, M., & Moreno, A. (1998). Human values and beliefs: A cross-cultural
sourcebook. Ann Arbor, MI: University of Michigan Press.
Insko, C. A., Kirchner, J. L., Pinter, B., Efaw, J., & Wildschut, T. (2005). Interindividual–intergroup
discontinuity as a function of trust and categorization: The paradox of expected
cooperation. Journal of Personality and Social Psychology, 88, 365–385.
Insko, C. A., & Schopler, J. (1998). Differential distrust of groups and individuals. In C.
Sedikides, J. Schopler, & C. A. Insko (Eds.), Intergroup cognition and intergroup behavior
(pp. 75–107). Mahwah, NJ: Erlbaum.
Insko, C. A., Schopler, J., Pemberton, M. B., Wieselquist, J., McIlraith, S. A., Currey, D. P.,
& Gaertner, L. (1998). Long-term outcome maximization and the reduction of
interindividual–intergroup discontinuity. Journal of Personality and Social Psychology,
75, 695–711.
Iredale, W., Van Vugt, M., & Dunbar, R. (2008). Showing off in humans: Male generosity as
mate signal. Evolutionary Psychology, 6, 386–392.
Jackson, D. L. (2003). Revisiting sample size and number of parameter estimates: Some
support for the N:q hypothesis. Structural Equation Modeling, 10, 128–141.
James, H. S., & Cohen, J. P. (2004). Does ethics training neutralize the incentives of the
prisoner’s dilemma? Evidence from a classroom experiment. Journal of Business Ethics,
50, 53–61.
Jarvenpaa, S. L., & Staples, D. S. (2001). Exploring perceptions of organizational ownership
of information and expertise. Journal of Management Information Systems, 18, 151–183.
Johnson, D. D. P. (2005). God’s punishment and public goods. Human Nature, 16, 410–446.
Johnson, D. D. P., & Bering, J. (2006). Hand of God, mind of man: Punishment and cognition
in the evolution of cooperation. Evolutionary Psychology, 4, 219–233.
Joireman, J., Daniels, D., George-Falvy, J., & Kamdar, D. (2006). Organizational citizenship
behaviors as a function of empathy, consideration of future consequences, and employee
time horizon: An initial exploration using an in-basket simulation of OCBs. Journal of
Applied Social Psychology, 36, 2266–2292.
Joireman, J., Kamdar, D., Daniels, D., & Duell, B. (2006). Good citizens to the end? It
depends: Empathy and concern with future consequences moderate the impact of a
short-term time horizon on organizational citizenship behaviors. Journal of Applied
Psychology, 91, 1307–1320.
Joireman, J. A., Lasane, T. P., Bennett, J., Richards, D., & Solaimani, S. (2001). Integrating
social value orientation and the consideration of future consequences within the
extended norm activation model of proenvironmental behavior. British Journal of Social
Psychology, 40, 133–155.
Joireman, J., Posey, D. C., Truelove, H. B., & Parks, C. D. (2009). The environmentalist
who cried drought: Reactions to repeated warnings about depleting resources under
conditions of uncertainty. Journal of Environmental Psychology, 29, 181–192.
Joireman, J., Shaffer, M., Balliet, D., & Strathman, A. (2012). Promotion orientation explains
why future oriented people exercise and eat healthy: Evidence from the two-factor
exclusion and toleration for bad apples. Journal of Experimental Social Psychology, 45,
603–613.
Kerr, N. L., & Tindale, R. S. (2004). Group performance and decision making. Annual
Review of Psychology, 55, 623–655.
Ketelaar, T., & Au, W. T. (2003). The effects of feelings of guilt on the behavior of uncooperative
individuals in repeated social bargaining games: An affect-as-information interpretation
of the role of emotion in social interaction. Cognition and Emotion, 17, 429–453.
Kimmel, A. J. (1997). In defense of deception. American Psychologist, 53, 803–805.
Kimmel, A. J. (2006). From artifacts to ethics: The delicate balance between methodological
and moral concerns in behavioral research. In D. A. Hantula (Ed.), Advances in social
and organizational psychology (pp. 113–140). Mahwah, NJ: Erlbaum.
Kimmerle, J., Cress, U., & Hesse, F. W. (2007). An interactional perspective on group
awareness: Alleviating the information-exchange dilemma (for everybody)?
International Journal of Human-Computer Studies, 65, 899–910.
Kimmerle, J., Wodzicki, K., Jarodzka, H., & Cress, U. (2011). Value of information, behavioral
guidelines, and social value orientation in an information-exchange dilemma. Group
Dynamics, 15, 173–186.
Kitayama, S., & Cohen, D. (2007). Handbook of cultural psychology. New York: Guilford Press.
Kiyonari, T., & Barclay, P. (2008). Cooperation in social dilemmas: free-riding may be
thwarted by second-order rewards rather than punishment. Journal of Personality and
Social Psychology, 95, 826–842.
Klandermans, B. (1986). Psychology and trade union participation: Joining, acting, quitting.
Journal of Occupational and Organizational Psychology, 59, 189–204.
Klandermans, B. (2001). Why social movements come into being and why people join
them. In J. R. Blau (Ed.), The Blackwell companion to sociology (pp. 268–281). Malden,
MA: Blackwell.
Klandermans, B. (2002). How group identification helps to overcome the dilemma of
collective action. American Behavioral Scientist, 45, 887–900.
Klandermans, B., Van Der Toorn, J., & Van Stekelenburg, J. (2008). Embeddedness and
identity: How immigrants turn grievances into action. American Sociological Review,
73, 992–1012.
Klapwijk, A., & Van Lange, P. A. M. (2009). Promoting cooperation and trust in noisy
situations: The power of generosity. Journal of Personality and Social Psychology, 96, 83–103.
Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.).
New York: Guilford.
Knack, S., & Keefer, P. (1997). Does social capital have an economic payoff? A cross-country
investigation. Quarterly Journal of Economics, 112, 1251–1288.
Knight, G. P., & Kagan, S. (1977a). Development of prosocial and competitive behaviors in
Anglo-American and Mexican-American children. Child Development, 48, 1385–1393.
Knight, G. P., & Kagan, S. (1977b). Acculturation of prosocial and competitive behaviors
among second and third generation Mexican-American children. Journal of
Cross-Cultural Psychology, 8, 273–284.
Knox, R. E., & Douglas, R. L. (1971). Trivial incentives, marginal comprehension, and
dubious generalizations from prisoner’s dilemma studies. Journal of Personality and
Social Psychology, 20, 160–165.
Kollock, P. (1993). “An eye for an eye leaves everybody blind”: Cooperation and accounting
systems. American Sociological Review, 58, 768–786.
Leung, K., Au, A., Huang, X., Kurman, J., Niit, T., & Niit, K. (2007). Social axioms and
values: A cross-cultural examination. European Journal of Personality, 21, 91–111.
Leung, K., Tong, K.-K., & Ho, S. S.-Y. (2004). Effects of interactional justice on egocentric
bias in resource allocation decisions. Journal of Applied Psychology, 89, 405–415.
Liberman, V., Samuels, S. M., & Ross, L. (2004). The name of the game: predictive power
of reputations versus situational labels in determining prisoner’s dilemma game moves.
Personality and Social Psychology Bulletin, 30, 1175–1185.
Liebrand, W. B. G. (1983). A classification of social dilemma games. Simulation & Games,
14, 123–138.
Liebrand, W. B. G., Jansen, R. W. T. L., Rijken, V. M., & Suhre, C. J. M. (1986). Might over
morality: Social values and the perception of other players in experimental games.
Journal of Experimental Social Psychology, 22, 203–215.
Liebrand, W. B. G., & Van Run, G. J. (1985). The effects of social motives on behavior in
social dilemmas in two cultures. Journal of Experimental Social Psychology, 21, 86–102.
Liebrand, W. B. G., Wilke, H. A. M., Vogel, R., & Wolters, F. J. M. (1986). Value orientation
and conformity in three types of social dilemma games. Journal of Conflict Resolution,
30, 77–97.
Lienard, P. (in press). Beyond kin: cooperation in a tribal society. In P. A. M. Van Lange,
B. Rockenbach, & T. Yamagishi (Eds.), Social dilemmas: New perspectives on reward and
punishment. New York: Oxford University Press.
Lind, E. A. (2001). Fairness heuristic theory: Justice judgments as pivotal cognitions
in organizational relations. In J. Greenberg & R. Cropanzano (Eds.), Advances in
organizational justice (pp. 56–88). Stanford, CA: Stanford University Press.
Lindskold, S. (1978). Trust development, the GRIT proposal, and the effects of conciliatory
acts on conflict and cooperation. Psychological Bulletin, 85, 107–128.
Lu, L., Leung, K., & Koch, P. T. (2006). Managerial knowledge sharing: The role of
individual, interpersonal, and organizational factors. Management and Organization
Review, 2, 15–41.
Luce, R. D., & Raiffa, H. (1957). Games and decisions. New York: Wiley and Sons.
Lumsden, C. J., & Wilson, E. O. (1981). Genes, mind, and culture. Cambridge, MA: Harvard
University Press.
Lumsden, J., Miles, L. K., Richardson, M. J., Smith, C. A., & Macrae, C. N. (2012). Who
syncs? Social motives and interpersonal coordination. Journal of Experimental Social
Psychology, 48, 746–751.
Luo, Y. (2007). The independent and interactive roles of procedural, distributive, and
interactional justice in strategic alliances. Academy of Management Journal, 50, 644–664.
Lyle, H. F. III, Smith, E. A., & Sullivan, R. J. (2009). Blood donations as costly signals of
donor quality. Journal of Evolutionary Psychology, 4, 263–286.
MacCoun, R. J., & Kerr, N. L. (1987). Suspicion in the psychological laboratory: Kelman’s
prophecy revisited. American Psychologist, 42, 199.
Macy, M. W., & Willer, R. (2002). From factors to actors: Computational sociology and
agent-based modeling. Annual Review of Sociology, 28, 143–166.
Madsen, M. C., & Shapira, A. (1970). Cooperative and competitive behavior of urban
Afro-American, Mexican-American, and Mexican village children. Developmental
Psychology, 3, 16–20.
Magenau, J. M., Martin, J. E., & Peterson, M. M. (1988). Dual and unilateral commitment
among stewards and rank-and-file union members. Academy of Management Journal,
31, 359–376.
Majolo, B., Ames, K., Brumpton, R., Garratt, R., Hall, K., & Wilson, N. (2006). Human
friendship favours cooperation in the iterated prisoner’s dilemma. Behaviour, 143,
1383–1395.
Martinez, L. M. F., Zeelenberg, M., & Rijsman, J. B. (2011). Behavioral consequences of regret
and disappointment in social bargaining games. Cognition and Emotion, 25, 351–359.
Marwell, G., & Ames, R. E. (1979). Experiments on the provision of public goods.
I. Resources, interest, group size, and the free-rider problem. American Journal of
Sociology, 84, 1335–1360.
Marwell, G., & Ames, R. E. (1981). Economists free ride, does anyone else? Experiments on
the provision of public goods, IV. Journal of Public Economics, 15, 295–310.
Marlowe, F. W., Berbesque, J. C., Barr, A., Barrett, C., Bolyanatz, A., Cardenas, J. C., et al.
(2008). More “altruistic” punishment in larger societies. Proceedings of the Royal
Society-B, 275, 587–592.
Mathew, S., & Boyd, R. (2011). Punishment sustains large-scale cooperation in prestate
warfare. Proceedings of the National Academy of Sciences, 108, 11875–11880.
Matzler, K., Renzl, B., Mooradian, T., van Krogh, G., & Müller, J. (2011). Personality
traits, affective commitment, documentation of knowledge, and knowledge sharing.
International Journal of Human Resource Management, 22, 296–310.
Matzler, K., Renzl, B., Müller, J., Herting, S., & Mooradian, T. (2008). Personality traits and
knowledge sharing. Journal of Economic Psychology, 29, 301–313.
Mauss, M. (1950/1990). The gift: The form and reason for exchange in archaic societies (W. D.
Halls, Trans.). New York: Norton.
McAndrew, F. T. (2002). New evolutionary perspectives on altruism: Multilevel selection
and costly signaling theories. Current Directions in Psychological Science, 11, 79–82.
McCarter, M. W., Budescu, D. V., & Scheffran, J. (2011). The give-or-take-some dilemma: An
empirical investigation of a hybrid social dilemma. Organizational Behavior and Human
Decision Processes, 116, 83–95.
McCarter, M. W., Mahoney, J. T., & Northcraft, G. B. (2011). Testing the waters: Using
collective real options to manage the social dilemma of strategic alliances. Academy of
Management Review, 36, 621–640.
McCarter, M. W., & Northcraft, G. B. (2007). Happy together? Insights and implications of
viewing managed supply chains as a social dilemma. Journal of Operations Management,
25, 498–511.
McClintock, C. G. (1972). Social motivation—a set of propositions. Behavioral Science, 17,
438–454.
McClintock, C. G. (1974). Development of social motives in Anglo-American and
Mexican-American children. Journal of Personality and Social Psychology, 29, 348–354.
McClintock, C. G., & Allison, S. T. (1989). Social value orientation and helping behavior.
Journal of Applied Social Psychology, 19, 353–362.
McClintock, C. G., & Moskowitz, J. M. (1976). Children’s preferences for individualistic,
cooperative, and competitive outcomes. Journal of Personality and Social Psychology,
34, 543–555.
McDonald, M. M., Navarrete, C. D., & Van Vugt, M. (2012). Evolution and the psychology
of intergroup conflict: The male warrior hypothesis. Philosophical Transactions of the
Royal Society-B, 367, 670–679.
Meleady, R., Hopthrow, T., & Crisp, R. J. (2013). Simulating social dilemmas: Promoting
cooperative behavior through imagined group discussion. Journal of Personality and
Social Psychology, 104, 839–853.
Mirowski, P. (1992). What were von Neumann and Morgenstern trying to accomplish? In E.
R. Weintraub (Ed.), Toward a history of game theory (pp. 113–147). Durham, NC: Duke
University Press.
Mischel, W. (2012). Self-control theory. In P. A. M. Van Lange, A. W. Kruglanski, & E. T.
Higgins (Eds.), Handbook of theories of social psychology (Vol. 2, pp. 1–22).
Thousand Oaks, CA: Sage.
Mogilner, C., Chance, Z., & Norton, M. I. (2012). Giving time gives you time. Psychological
Science, 23, 1233–1238.
Mooradian, T., Renzl, B., & Matzler, K. (2006). Who trusts? Personality, trust, and knowledge
sharing. Management Learning, 37, 523–540.
Moreton, D. R. (1998). An open shop trade union model of wages, effort and membership.
European Journal of Political Economy, 14, 511–527.
Morris, H. (1976). On guilt and innocence. Berkeley: University of California Press.
Mulder, L. B., & Nelissen, R. M. A. (2010). When rules really make a difference: The effect
of cooperation rules and self-sacrificing leadership on moral norms in social dilemmas.
Journal of Business Ethics, 95, 57–72.
Mulder, L. B., van Dijk, E., De Cremer, D., & Wilke, H. A. M. (2006). Undermining trust
and cooperation: The paradox of sanctioning systems in social dilemmas. Journal of
Experimental Social Psychology, 42, 147–162.
Murnighan, J. K., Kim, J. W., & Metzger, A. R. (1993). The volunteer dilemma. Administrative
Science Quarterly, 38, 515–538.
Murnighan, J. K., & Roth, A. E. (1983). Expecting continued play in prisoner’s dilemma
games. Journal of Conflict Resolution, 27, 279–300.
Murphy, R., Ackermann, K., & Handgraaf, M. (2011). Measuring social value orientation.
Judgment and Decision Making, 6, 771–781.
Murphy, S. M., Wayne, S. J., Liden, R. C., & Erdogan, B. (2003). Understanding social loafing: The
role of justice perceptions and exchange relationships. Human Relations, 56, 61–84.
Muthusamy, S. K., & White, M. A. (2005). Learning and knowledge transfer in strategic
alliances: A social exchange view. Organization Studies, 26, 415–441.
Myatt, D. P., & Wallace, C. (2008). An evolutionary analysis of the volunteer’s dilemma.
Games and Economic Behavior, 62, 67–76.
Nash, J. F. (1950). Equilibrium points in n-person games. Proceedings of the National
Academy of Sciences, 36, 48–49.
Nauta, A., De Dreu, C. K. W., & Van der Vaart, T. (2002). Social value orientation,
organizational goal concerns, and interdepartmental problem-solving behavior. Journal
of Organizational Behavior, 23, 199–213.
Naylor, R. (1990). A social custom model of collective action. European Journal of Political
Economy, 6, 201–216.
Naylor, R., & Cripps, M. (1993). An economic theory of the open shop trade union.
European Economic Review, 37, 1599–1620.
Nelissen, R. M. A., Dijker, A. J. M., & de Vries, N. K. (2007). How to turn a hawk into a dove
and vice versa: Interactions between emotions and goals in a give-some dilemma game.
Journal of Experimental Social Psychology, 43, 280–286.
Nemeth, C. (1972). A critical analysis of research utilizing the prisoner’s dilemma paradigm
for the study of bargaining. Advances in Experimental Social Psychology, 6, 203–234.
Nesse, R. M. (2005). Natural selection and the regulation of defenses: A signal detection
analysis of the smoke detector principle. Evolution and Human Behavior, 26, 88–105.
Nesse, R. M. (2007). Runaway social selection for displays of partner value and altruism.
Biological Theory, 2, 143–155.
Neufeld, S. L., Griskevicius, V., Ledlow, S. E., Li, Y. J., Neel, R., Berlin, A., & Yee, C. (2011).
Going green to help your genes: The use of kin-based appeals in conservation messaging.
Manuscript submitted for publication.
Neuman, G. A., & Kickul, J. R. (1998). Organizational citizenship behaviors: Achievement
orientation and personality. Journal of Business and Psychology, 13, 263–279.
Newton, L. A., & Shore, L. M. (1992). A model of union membership: Instrumentality,
commitment, and opposition. Academy of Management Review, 17, 275–298.
Neyer, F. J., & Lang, F. R. (2003). Blood is thicker than water: Kinship orientation across
adulthood. Journal of Personality and Social Psychology, 84, 310–321.
Noë, R., & Hammerstein, P. (1994). Biological markets: Supply and demand determine the
effect of partner choice in cooperation, mutualism and mating. Behavioral Ecology and
Sociobiology, 35, 1–11.
Noë, R., & Hammerstein, P. (1995). Biological markets. Trends in Ecology & Evolution, 10,
336–339.
Nolan, J. P., Schultz, P. W., Cialdini, R. B., Goldstein, N. J., & Griskevicius, V. (2008).
Normative social influence is underdetected. Personality and Social Psychology Bulletin,
34, 913–923.
Nosenzo, D., & Sefton, M. (2013). Promoting cooperation: The distribution of reward
and punishment power. In P. A. M. Van Lange, B. Rockenbach, & T. Yamagishi (Eds.),
Social dilemmas: New perspectives on reward and punishment. New York: Oxford
University Press.
Nowak, M. A. (2006). Five rules for the evolution of cooperation. Science, 314, 1560–1563.
Nowak, M. A., & Highfield, R. (2011). SuperCooperators. New York: Free Press.
Nowak, M. A., & Sigmund, K. (1992). Tit for tat in heterogeneous populations. Nature, 355,
250–253.
Nowak, M. A., & Sigmund, K. (1993). A strategy of win-stay, lose-shift that outperforms
tit-for-tat in the prisoner’s dilemma game. Nature, 364, 56–58.
Nowak, M. A., & Sigmund, K. (2005). Evolution of indirect reciprocity. Nature, 437,
1291–1298.
O’Gorman, R., Henrich, J., & Van Vugt, M. (2008). Constraining free-riding in public
goods games: Designated solitary punishers can sustain human cooperation. Proceedings
of the Royal Society-B, 276, 323–329.
Ohtsuki, H., & Iwasa, Y. (2007). Global analyses of evolutionary dynamics and exhaustive
search for social norms that maintain cooperation by reputation. Journal of Theoretical
Biology, 244, 518–531.
Olson, M. (1965). The logic of collective action. Cambridge, MA: Harvard University Press.
Olweus, D. (1979). Stability of aggressive reaction patterns in males: A review. Psychological
Bulletin, 86, 852–875.
Omoto, A., & Snyder, M. (2002). Considerations of community: The context and process of
volunteerism. American Behavioral Scientist, 45, 846–867.
Oosterbeek, H., Sloof, R., & Van de Kuilen, G. (2004). Cultural differences in ultimatum
game experiments: Evidence from meta-analysis. Experimental Economics, 7, 171–188.
Opotow, S., & Weiss, L. (2000). New ways of thinking about environmentalism: Denial and
the process of moral exclusion in environmental conflict. Journal of Social Issues, 56,
475–490.
Orbell, J., Van de Kragt, A. J. C., & Dawes, R. M. (1988). Explaining discussion-induced
cooperation. Journal of Personality and Social Psychology, 54, 811–819.
Ortmann, A., & Hertwig, R. (1997). Is deception acceptable? American Psychologist, 52,
746–747.
Osgood, C. (1962). An alternative to war or surrender. Urbana: University of Illinois Press.
Ostrom, E. (1990). Governing the commons: The evolution of institutions for collective action.
Cambridge: Cambridge University Press.
Ostrom, E. (2000). Collective action and the evolution of social norms. Journal of Economic
Perspectives, 14, 137–158.
Ostrom, E., & Ahn, T. K. (2008). The meaning of social capital and its link to collective
action. In D. Castiglione, J. W. van Deth, & G. Wolleb (Eds.), Handbook on social capital
(pp. 17–35). Northampton, MA: Elgar.
Ostrom, E., Gardner, R., & Walker, J. (2003). Rules, games, and common-pool resources. Ann
Arbor: University of Michigan Press.
Ostrom, E., & Walker, J. (2003). Trust and reciprocity. New York: Russell Sage Foundation.
Ouwerkerk, J. W., Kerr, N. L., Gallucci, M., & Van Lange, P. A. M. (2005). Avoiding the social
death penalty: Ostracism and cooperation in social dilemmas. In K. D. Williams, J. P.
Forgas, & W. von Hippel (Eds.), The social outcast: Ostracism, social exclusion, rejection,
and bullying (pp. 321–332). New York: Psychology Press.
Oyserman, D., & Lee, S. W. -S. (2007). Priming “culture”: Culture as situated cognition.
In S. Kitayama & D. Cohen (Eds.), Handbook of cultural psychology (pp. 255–282).
New York: Guilford.
Palmer, G. (1990). Alliance politics and issue areas: Determinants of defense spending.
American Journal of Political Science, 34, 190–211.
Park, J. H., Schaller, M., & Van Vugt, M. (2008). Psychology of human kin recognition: Heuristic
cues, erroneous inferences, and their implications. Review of General Psychology, 12,
215–235.
Parkhe, A. (1993). Strategic alliance structuring: A game theoretic and transaction cost
examination of interfirm cooperation. Academy of Management Journal, 36, 794–829.
Parks, C. D. (1994). The predictive ability of social values in resource dilemmas and public
goods games. Personality and Social Psychology Bulletin, 20, 431–438.
Parks, C. D., Henager, R. F., & Scamahorn, S. D. (1996). Trust and reactions to messages of
intent in social dilemmas. Journal of Conflict Resolution, 40, 134–151.
Parks, C. D., & Hulbert, L. G. (1995). High and low trusters’ responses to fear in a payoff
matrix. Journal of Conflict Resolution, 39, 718–730.
Parks, C. D., Joireman, J., & Van Lange, P. A. M. (2013). Cooperation, trust, and
antagonism: How public goods are promoted. Psychological Science in the Public Interest,
14, xxx–xxx.
Parks, C. D., & Komorita, S. S. (1997). Reciprocal strategies for large groups. Personality and
Social Psychology Review, 1, 314–322.
Parks, C. D., & Rumble, A. C. (2001). Elements of reciprocity and social value orientation.
Personality and Social Psychology Bulletin, 27, 1301–1309.
Parks, C. D., Rumble, A. C., & Posey, D. C. (2002). The effects of envy on reciprocation in a
social dilemma. Personality and Social Psychology Bulletin, 28, 509–520.
Parks, C. D., Sanna, L. J., & Posey, D. C. (2003). Retrospection in social dilemmas: How
thinking about the past affects future cooperation. Journal of Personality and Social
Psychology, 84, 988–996.
Parks, C. D., & Stone, A. B. (2010). The desire to expel unselfish members from the group.
Journal of Personality and Social Psychology, 99, 303–310.
Parks, C. D., & Vu, A. D. (1994). Social dilemma behavior of individuals from highly
individualist and collectivist cultures. Journal of Conflict Resolution, 38, 708–718.
Penn, D. J. (2003). The evolutionary roots of our environmental problems: Toward a
Darwinian ecology. Quarterly Review of Biology, 78, 275–301.
Penner, L. A., Midili, A. R., & Kegelmeyer, J. (1997). Beyond job attitudes: A personality
and social psychology perspective on the causes of organizational citizenship behavior.
Human Performance, 10, 111–131.
Piff, P. K., Kraus, M. W., Côté, S., Cheng, B. H., & Keltner, D. (2010). Having less, giving
more: The influence of social class on prosocial behavior. Journal of Personality and
Social Psychology, 99, 771–784.
Pillutla, M. M., & Chen, X. -P. (1999). Social norms and cooperation in social dilemmas: The
effects of context and feedback. Organizational Behavior and Human Decision Processes,
78, 81–103.
Platt, J. (1973). Social traps. American Psychologist, 28, 641–651.
Plous, S. (1985). Perceptions illusions and military realities: The nuclear arms race. Journal
of Conflict Resolution, 29, 363–389.
Poppe, M., & Valkenberg, H. (2003). Effects of gains versus loss and certain versus probable
outcomes on social value orientations. European Journal of Social Psychology, 33, 331–337.
Poteete, A. R., Janssen, M. A., & Ostrom, E. (Eds.) (2010). Working together: Collective action,
the commons, and multiple methods in practice. Princeton: Princeton University Press.
Probst, T. M., Carnevale, P. J., & Triandis, H. C. (1999). Cultural values in intergroup and
single group social dilemmas. Organizational Behavior and Human Decision Processes,
77, 171–191.
Pruitt, D. G., & Kimmel, M. J. (1977). Twenty years of experimental gaming: Critique,
synthesis, and suggestions for the future. Annual Review of Psychology, 28, 363–392.
Pruitt, D. G., & Rubin, J. Z. (1986). Social conflict. Reading, MA: Addison-Wesley.
Putnam, R. (1993). Making democracy work. Princeton, NJ: Princeton University Press.
Rachlin, H. (2006). Notes on discounting. Journal of the Experimental Analysis of Behavior,
85, 425–435.
Raihani, N. J., & Bshary, R. (2011). The evolution of punishment in n-player public goods
games: A volunteer’s dilemma. Evolution, 65, 2725–2728.
Rapoport, Am. (1987). Research paradigms and expected utility models for the provision of
step-level public goods. Psychological Review, 94, 74–83.
Rapoport, Am. (1988). Provision of step-level goods: Effects of inequality in resources.
Journal of Personality and Social Psychology, 54, 432–440.
Rapoport, An. (1967). A note on the “index of cooperation” for prisoner’s dilemma. Journal
of Conflict Resolution, 11, 100–103.
Rapoport, An., & Chammah, A. M. (1965). Prisoner’s dilemma. Ann Arbor, MI: University
of Michigan Press.
Rapoport, An., & Dale, P. S. (1967). The “end” and “start” effects in iterated prisoner’s
dilemma. Journal of Conflict Resolution, 11, 354–362.
Reeve, H. K. (2000). Multi-level selection and human cooperation. Evolution and Human
Behavior, 21, 65–72.
Reinders Folmer, C., Klapwijk, A., De Cremer, D., & Van Lange, P. A. M. (2012). One for
all: What representing a group may do to us. Journal of Experimental Social Psychology,
48, 1047–1056.
Reisel, W. D., Probst, T. M., Chia, S. -L., Maloles, C. M., & König, C. J. (2010). The effects
of job insecurity on job satisfaction, organizational citizenship behavior, deviant
behavior, and negative emotions of employees. International Studies of Management and
Organization, 40, 74–91.
Renzl, B. (2008). Trust in management and knowledge sharing: The mediating effects of fear
and knowledge documentation. Omega, 36, 206–220.
Richerson, P. J., & Boyd, R. (2005). Not by genes alone: How culture transformed human
evolution. Chicago, IL: University of Chicago Press.
Rigdon, M., Ishii, K., Watabe, M., & Kitayama, S. (2009). Minimal social cues in the dictator
game. Journal of Economic Psychology, 30, 358–367.
Rilling, J. K., & Sanfey, A. G. (2011). The neuroscience of social decision-making. Annual
Review of Psychology, 62, 23–48.
Rioux, S. M., & Penner, L. A. (2001). The causes of organizational citizenship
behavior: A motivational analysis. Journal of Applied Psychology, 86, 1306–1314.
Roberts, G. (1998). Competitive altruism: From reciprocity to the handicap principle.
Proceedings of the Royal Society-B, 265, 427–431.
Roberts, G. (2005). Cooperation through interdependence. Animal Behaviour, 70, 901–908.
Roberts, G., & Renwick, J. S. (2003). The development of cooperative relationships: An
experiment. Proceedings of the Royal Society-B, 270, 2279–2283.
Roberts, G., & Sherratt, T. N. (1998). Development of cooperative relationships through
increasing investment. Nature, 394, 175–179.
Robinson, S. L., & Bennett, R. J. (1995). A typology of deviant workplace behaviors:
A multidimensional scaling study. Academy of Management Journal, 38, 555–572.
Robinson, S. L., & O’Leary-Kelly, A. E. (1998). Monkey see, monkey do: The influence of
work groups on the antisocial behavior of employees. Academy of Management Journal,
41, 658–672.
Roch, S. G., Lane, J. A. S., Samuelson, C. D., Allison, S. T., & Dent, J. L. (2000). Cognitive
load and the equality heuristic: A two-stage model of resource overconsumption in
groups. Organizational Behavior and Human Decision Processes, 83, 185–212.
Rockenbach, B., & Milinski, M. (2006). The efficient interaction of indirect reciprocity and
costly punishment. Nature, 444, 718–723.
Rockenbach, B., & Milinski, M. (2011). To qualify as a social partner humans hide severe
punishment though their observed cooperativeness is decisive. Proceedings of the
National Academy of Sciences, 108, 18307–18312.
Rockmann, K., & Northcraft, G. B. (2008). To be or not to be trusted: The influence of
media richness on defection and deception. Organizational Behavior and Human
Decision Processes, 107, 106–122.
Rotemberg, J. J. (1994). Human relations in the workplace. Journal of Political Economy,
102, 684–717.
Roth, A. E., & Murnighan, J. K. (1978). Equilibrium behavior and repeated play in the
prisoner’s dilemma. Journal of Mathematical Psychology, 17, 189–198.
Roth, A. E., Prasnikar, V., Okuno-Fujiwara, M., & Zamir, S. (1991). Bargaining and market
behavior in Jerusalem, Ljublajana, Pittsburgh, and Tokyo: An experimental study.
American Economic Review, 81, 1068–1095.
Rotter, J. B. (1967). A new scale for the measurement of interpersonal trust. Journal of
Personality, 35, 651–665.
Rousseau, D. M., Sitkin, S. B., Burt R. S., & Camerer, C. (1998). Not so different after
all: A cross-discipline view of trust. Academy of Management Review, 23, 393–404.
Rumble, A. C., Van Lange, P. A. M., & Parks, C. D. (2010). The benefits of empathy: When
empathy may sustain cooperation in social dilemmas. European Journal of Social
Psychology, 40, 856–866.
Rupp, D. E., & Cropanzano, R. (2002). The mediating effects of social exchange relationships
in predicting workplace outcomes from multifoci organizational justice. Organizational
Behavior and Human Decision Processes, 89, 925–946.
Rusbult, C. E., & Van Lange, P. A. M. (2003). Interdependence, interaction, and relationships.
Annual Review of Psychology, 54, 351–375.
Rutte, C. G. (1990). Solving organizational social dilemmas. Social Behaviour, 5, 285–294.
Rutte, C. G., & Wilke, H. A. M. (1985). Preference for decision structures in a social
dilemma situation. European Journal of Social Psychology, 15, 367–370.
Rutte, C. G., & Wilke, H. A. M. (1992). Goals, expectations and behavior in a social dilemma
situation. In W. B. G. Liebrand, D. M. Messick, & H. A. M. Wilke (Eds.), Social dilemmas
(pp. 280–305). New York: Pergamon.
Sahlins, M. (1972). Stone age economics. New York: Aldine De Gruyter.
Samuelson, C. D. (1990). Energy conservation: A social dilemma approach. Social
Behaviour, 5, 207–230.
Samuelson, C. D. (1991). Perceived task difficulty, causal attributions, and preferences for
structural change in resource dilemmas. Personality and Social Psychology Bulletin, 17,
181–187.
Samuelson, C. D. (1993). A multiattribute evaluation approach to structural change in
resource dilemmas. Organizational Behavior and Human Decision Processes, 55, 298–324.
Samuelson, C. D., Messick, D. M., Rutte, C. G., & Wilke, H. A. M. (1984). Individual and
structural solutions to resource dilemmas in two cultures. Journal of Personality and
Social Psychology, 47, 94–104.
Sartorius, R. (1975). Individual conduct and social norms. Belmont, CA: Dickenson.
Sattler, D. N., & Kerr, N. L. (1991). Might versus morality explored: Motivational and
cognitive bases for social motives. Journal of Personality and Social Psychology, 60,
756–765.
Scalet, S. (2006). Prisoner’s dilemmas, cooperative norms, and codes of business ethics.
Journal of Business Ethics, 65, 309–323.
Schelling, T. C. (1960). The strategy of conflict. Cambridge, MA: Harvard University Press.
Scherer, A. G., & Palazzo, G. (2007). Toward a political conception of corporate
responsibility: Business and society seen from a Habermasian perspective. Academy of
Management Review, 32, 1096–1120.
Schroeder, D. A. (Ed.) (1995). Social dilemmas: Perspectives on individuals and groups.
Westport, CT: Praeger.
Schwartz, S. H. (1992). Universals in the content and structure of values: Theoretical advances
and empirical tests in 20 countries. Advances in Experimental Social Psychology, 25, 1–66.
Schwartz, S. H. (1999). A theory of cultural values and some implications for work. Applied
Psychology: An International Review, 48, 23–47.
Searcy, W. A., & Nowicki, S. (2005). The evolution of animal communication: Reliability and
deception in signaling systems. Princeton, NJ: Princeton University Press.
Sears, D. O. (1986). College sophomores in the laboratory: Influences of a narrow data
base on social psychology’s view of human nature. Journal of Personality and Social
Psychology, 51, 515–530.
Seinen, I., & Schram, A. (2006). Social status and group norms: Indirect reciprocity in a
helping experiment. European Economic Review, 50, 581–602.
Selten, R., & Stoecker, R. (1986). End behavior in sequences of finite prisoner’s dilemma
supergames: A learning theory approach. Journal of Economic Behavior and Organization,
7, 47–70.
Semmann, D., Krambeck, H.-J., & Milinski, M. (2004). Strategic investment in reputation.
Behavioral Ecology and Sociobiology, 56, 248–252.
Sen, S., Gurhan-Canli, Z., & Morwitz, V. (2001). Withholding consumption: A social
dilemma perspective on consumer boycotts. Journal of Consumer Research, 28, 399–417.
Settoon, R. P., Bennett N., & Liden, R. C. (1996). Social exchange in organizations: Perceived
organizational support, leader-member exchange, and employee reciprocity. Journal of
Applied Psychology, 81, 219–227.
Shah, R., & Goldstein, S. M. (2006). Use of structural equation modeling in operations
management research: Looking back and forward. Journal of Operations Management,
24, 148–169.
Shaw, J. I. (1976). Response-contingent payoffs and cooperative behavior in the prisoner’s
dilemma game. Journal of Personality and Social Psychology, 34, 1024–1033.
Sheldon, K. M. (1999). Learning the lessons of tit-for-tat: Even competitors can get the
message. Journal of Personality and Social Psychology, 77, 1245–1253.
Sheldon, K. M., & McGregor, H. A. (2000). Extrinsic value orientation and “the tragedy of
the commons.” Journal of Personality, 68, 383–411.
Shelley, G. P., Page, M., Rives, P., Yeagley, E., & Kuhlman, D. M. (2010). Nonverbal
communication and detection of individual differences in social value orientation. In
R. M. Kramer, A. Tenbrunsel, & M. H. Bazerman (Eds.), Social decision making: Social
dilemmas, social values, and ethics (pp. 147–170). New York: Routledge.
Shen, S.-F., Reeve, H. K., & Herrnkind, W. (2010). The Brave Leader game and the timing of
altruism among non-kin. American Naturalist, 176, 242–248.
Sherratt, T. N., & Roberts, G. (1999). The evolution of quantitatively responsive cooperative
trade. Journal of Theoretical Biology, 200, 419–426.
Simon, B., Loewy, M., Stürmer, S., Weber, U., Freytag, P., Habig, C., et al. (1998).
Collective identification and social movement participation. Journal of Personality and
Social Psychology, 74, 646–658.
Simon, H. A. (1990). A mechanism for social selection and successful altruism. Science,
250, 1665–1668.
Simon, H. A. (1991). Organizations and markets. Journal of Economic Perspectives, 5, 34–38.
Singer, T., Seymour, B., O’Doherty, J., Kaube, H., Dolan, R. J., & Frith, C. D. (2004). Empathy for
pain involves the affective but not sensory components of pain. Science, 303, 1157–1162.
Singer, T., Seymour, B., O’Doherty, J. P., Stephan, K. E., Dolan, R. J., & Frith, C. D. (2006).
Empathic neural responses are modulated by the perceived fairness of others. Nature,
439, 466–469.
Skarlicki, D. P., & Folger, R. (1997). Retaliation in the workplace: The roles of distributive,
procedural, and interactional justice. Journal of Applied Psychology, 82, 434–443.
Smith, A. (1759/2002). Theory of moral sentiments. New York: Cambridge University Press.
Smith, A. (1776/1976). Wealth of nations. New York: Oxford University Press.
Smith, E. A. (2004). Why do good hunters have higher reproductive success? Human
Nature, 15, 343–364.
Smith, E. A., & Bliege Bird, R. (2000). Turtle hunting and tombstone opening: Public
generosity as costly signaling. Evolution and Human Behavior, 21, 245–262.
Sober, E., & Wilson, D. S. (1998). Unto others: The evolution and psychology of unselfish
behavior. Cambridge, MA: Harvard University Press.
Sommerfeld, R. D., Krambeck, H.-J., Semmann, D., & Milinski, M. (2007). Gossip as an
alternative for direct observation in games of indirect reciprocity. Proceedings of the
National Academy of Sciences, 104, 17435–17440.
Spisak, B., Dekker, P., Kruger, M., & Van Vugt, M. (2012). Warriors and peacekeepers: Testing
a biosocial implicit leadership hypothesis of intergroup relations using masculine and
feminine faces. PLoS ONE, 7(1), e30399.
Staats, H. J., Wit, A. P., & Midden, C. Y. H. (1996). Communicating the greenhouse effect
to the public: Evaluation of a mass media campaign from a social dilemma perspective.
Journal of Environmental Management, 45, 189–203.
Steinel, W., Utz, S., & Koning, L. (2010). The good, the bad and the ugly thing to do when
sharing information: Revealing, concealing and lying depend on social motivation,
distribution, and importance of information. Organizational Behavior and Human
Decision Processes, 113, 85–96.
Stern, P. C. (1976). Effect of incentives and education on resource conservation decisions in a
simulated commons dilemma. Journal of Personality and Social Psychology, 34, 1285–1292.
Stevens, J. R., & Hauser, M. D. (2004). Why be nice? Psychological constraints on the
evolution of cooperation. Trends in Cognitive Science, 8, 60–65.
Stouten, J., De Cremer, D., & Van Dijk, E. (2005). All is well that ends well, at least for
proselfs: Emotional reactions to equality violation as a function of social value
orientation. European Journal of Social Psychology, 35, 767–783.
Stouten, J., De Cremer, D., & Van Dijk, E. (2007). Managing equality in social dilemmas:
Emotional and retributive implications. Social Justice Research, 20, 53–67.
Stouten, J., De Cremer, D., & Van Dijk, E. (2009). Behavioral (in)tolerance of equality
violation in social dilemmas: When trust affects contribution decisions after violation of
equality. Group Processes and Intergroup Relations, 12, 517–531.
Strathman, A., Gleicher, F., Boninger, D. S., & Edwards, C. S. (1994). The consideration of
future consequences: Weighing immediate and distant outcomes of behavior. Journal of
Personality and Social Psychology, 66, 742–752.
Suleiman, R., Budescu, D. V., Fischer, I., & Messick, D. M. (2004). Contemporary psychological
research on social dilemmas. New York: Cambridge University Press.
Suleiman, R., & Rapoport, A. (1988). Environmental and social uncertainty in single-trial
resource dilemmas. Acta Psychologica, 68, 99–112.
Sylwester, K., & Roberts, G. (2010). Cooperators benefit through reputation-based partner
choice in economic games. Biology Letters, 6, 659–662.
Tan, H. -B., & Forgas, J. P. (2010). When happiness makes us selfish, but sadness makes
us fair: Affective influences on interpersonal strategies in the dictator game. Journal of
Experimental Social Psychology, 46, 571–576.
Taylor, M. (1987). The possibility of cooperation. New York: Cambridge University Press.
Tazelaar, M. J. A., Van Lange, P. A. M., & Ouwerkerk, J. W. (2004). How to cope with “noise”
in social dilemmas: the benefits of communication. Journal of Personality and Social
Psychology, 87, 845–859.
Tenbrunsel, A. E., & Messick, D. M. (1999). Sanctioning systems, decision frames, and
cooperation. Administrative Science Quarterly, 44, 684–707.
Tenbrunsel, A. E., & Messick, D. M. (2004). Ethical fading: The role of self-deception in
unethical behavior. Social Justice Research, 17, 223–236.
Tepper, B. J., & Taylor, E. C. (2003). Relationships among supervisors’ and subordinates’
procedural justice perceptions and organizational citizenship behaviors. Academy of
Management Journal, 46, 97–105.
Thibaut, J. W., & Kelley, H. H. (1959). The social psychology of groups. New York: Wiley
and Sons.
Tinbergen, N. (1968). On war and peace in animals and man. Science, 160, 1411–1418.
Toda, M., Shinotsuka, H., McClintock, C. G., & Stech, F. J. (1978). Development of
competitive behavior as a function of culture, age, and social comparison. Journal of
Personality and Social Psychology, 36, 825–839.
Todorov, A., & Duchaine, B. (2008). Reading trustworthiness in faces without recognizing
faces. Cognitive Neuropsychology, 25, 395–410.
Tooby, J., & Cosmides, L. (1996). Friendship and the Banker’s Paradox: Other pathways
to the evolution of adaptations for altruism. Proceedings of the British Academy, 88,
119–143.
Triandis, H. C. (1989). Cross-cultural studies of individualism and collectivism. In J. J.
Berman (Ed.), Nebraska Symposium on Motivation (pp. 41–133). Lincoln, NE: University
of Nebraska Press.
Trivers, R. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology,
46, 35–57.
Turiel, E. (1983). The development of social knowledge: Morality and convention. Melbourne,
Australia: Cambridge University Press.
Tyler, T. R., & Blader, S. L. (2003). The group engagement model: Procedural justice, social
identity, and cooperative behavior. Personality and Social Psychology Review, 7, 349–361.
Tyler, T. R., & Degoey, P. (1995). Collective restraint in social dilemmas: Procedural justice
and social identification effects on support for authorities. Journal of Personality and
Social Psychology, 69, 482–497.
Tyler, T. R., Degoey, P., & Smith, H. (1996). Understanding why the justice of group
procedures matters: A test of the psychological dynamics of the group-value model.
Journal of Personality and Social Psychology, 70, 913–930.
Tyson, T. (1990). Believing that everyone else is less ethical: Implications for work behavior
and ethics instruction. Journal of Business Ethics, 9, 715–721.
Tyson, T. (1992). Does believing that everyone else is less ethical have an impact on work
behavior? Journal of Business Ethics, 11, 707–717.
Utz, S. (2004a). Self-construal and cooperation: Is the interdependent self more cooperative
than the independent self? Self and Identity, 3, 177–190.
Utz, S. (2004b). Self-activation is a two-edged sword: The effects of I primes on cooperation.
Journal of Experimental Social Psychology, 40, 769–776.
Utz, S., Ouwerkerk, J. W., & Van Lange, P. A. M. (2004). What is smart in a social dilemma?
Differential effects of priming competence on cooperation. European Journal of Social
Psychology, 34, 317–332.
Van den Bergh, B., & Dewitte, S. (2006). The robustness of the “raise-the-stakes”
strategy: Coping with exploitation in noisy prisoner’s dilemma games. Evolution and
Human Behavior, 27, 19–28.
Van den Bos, K., & Lind, E. A. (2002). Uncertainty management by means of fairness
judgments. Advances in Experimental Social Psychology, 34, 1–60.
Van der Vegt, G., van de Vliert, E., & Oosterhof, A. (2003). Informational dissimilarity and
organizational citizenship behavior: The role of intrateam interdependence and team
identification. Academy of Management Journal, 46, 715–727.
Van Dijk, E., De Kwaadsteniet, E. W., & De Cremer, D. (2009). Tacit coordination in social
dilemmas: The importance of having a common understanding. Journal of Personality
and Social Psychology, 96, 665–678.
Van Dijk, E., Van Kleef, G. A., Steinel, W., & Van Beest, I. (2008). A social functional
approach to emotions in bargaining: When communicating anger pays and when it
backfires. Journal of Personality and Social Psychology, 94, 600–614.
Van Dijk, E., & Wilke, H. (1993). Differential interests, equity, and public good provision.
Journal of Experimental Social Psychology, 29, 1–16.
Van Dijk, E., & Wilke, H. (1994). Asymmetry of wealth and public good provision. Social
Psychology Quarterly, 57, 352–359.
Van Dijk, E., & Wilke, H. (1995). Coordination rules in asymmetric social dilemmas: A
comparison between public good dilemmas and resource dilemmas. Journal of
Experimental Social Psychology, 31, 1–27.
Van Dijk, E., & Wilke, H. (2000). Decision-induced focusing in social dilemmas: Give-some,
keep-some, take-some, and leave-some dilemmas. Journal of Personality and Social
Psychology, 78, 92–104.
Van Dijk, E., Wilke, H., & Wit, A. (2003). Preferences for leadership in social dilemmas: Public
good dilemmas versus resource dilemmas. Journal of Experimental Social Psychology,
39, 170–176.
Van Dijk, E., Wit, A., Wilke, H., & Budescu, D. V. (2004). What we know (and do not know)
about the effects of uncertainty on behavior in social dilemmas. In R. Suleiman, D. V.
Budescu, I. Fischer, & D. M. Messick (Eds.), Contemporary psychological research on
social dilemmas (pp. 315–331). Cambridge: Cambridge University Press.
Van Doesum, N., & Van Lange, P. A. M. (2013). Social mindfulness: Will and skill to navigate
the social world. Unpublished manuscript. VU University, Amsterdam.
Van Goozen, S. H. M., Frijda, N. H., & Van de Poll, N. E. (1995). Anger and aggression
during role-playing: Gender differences between hormonally treated male and female
transsexuals and controls. Aggressive Behavior, 21, 257–273.
Van Kleef, G. A., De Dreu, C. K. W., & Manstead, A. S. R. (2006). Supplication and
appeasement in conflict and negotiation: The interpersonal effects of disappointment,
worry, guilt, and regret. Journal of Personality and Social Psychology, 91, 124–142.
Van Lange, P. A. M. (1999). The pursuit of joint outcomes and equality in outcomes: An
integrative model of social value orientation. Journal of Personality and Social Psychology,
77, 337–349.
Van Lange, P. A. M. (2008). Does empathy trigger only altruistic motivation—How about
selflessness and justice? Emotion, 8, 766–774.
Van Lange, P. A. M. (2013). What we should expect from theories in social psychology: Truth,
abstraction, progress, and applicability as standards (TAPAS). Personality and Social
Psychology Review, 17, 40–55.
Van Lange, P. A. M., Bekkers, R., Chirumbolo, A., & Leone, L. (2012). Are conservatives less
likely to be prosocial than liberals? From games to ideology, political preferences and
voting. European Journal of Personality, 26, 461–473.
Van Lange, P. A. M., De Cremer, D., Van Dijk, E., & Van Vugt, M. (2007a). Self-interest and
beyond: Basic principles of social interaction. In A. W. Kruglanski & E. T. Higgins (Eds.),
Social psychology: Handbook of basic principles (pp. 540–561). New York: Guilford.
Van Lange, P. A. M., Bekkers, R., Schuyt, T. N. M., & van Vugt, M. (2007b). From games
to giving: Social value orientation predicts donation to noble causes. Basic and Applied
Social Psychology, 29, 375–384.
Van Lange, P. A. M., Joireman, J., Parks, C. D., & Van Dijk, E. (2013). The psychology of
social dilemmas: A review. Organizational Behavior and Human Decision Processes, 120,
125–141.
Van Lange, P. A. M., Klapwijk, A., & van Munster, L. M. (2011). How the shadow of the
future might promote cooperation. Group Processes and Intergroup Relations, 14,
857–870.
Van Lange, P. A. M., & Kuhlman, D. M. (1994). Social value orientations and impressions of
partner’s honesty and intelligence: A test of the might versus morality effect. Journal of
Personality and Social Psychology, 67, 126–141.
Van Lange, P. A. M., Otten, W., De Bruin, E. M. N., & Joireman, J. A. (1997). Development
of prosocial, individualistic, and competitive orientations: Theory and preliminary
evidence. Journal of Personality and Social Psychology, 73, 733–746.
Van Lange, P. A. M., Ouwerkerk, J. W., & Tazelaar, M. J. A. (2002). How to overcome the
detrimental effects of noise in social interaction: The benefits of generosity. Journal of
Personality and Social Psychology, 82, 768–780.
Van Lange, P. A. M., & Rusbult, C. E. (2012). Interdependence theory. In P. A. M. Van
Lange, A. W. Kruglanski, & E. T. Higgins (Eds.), Handbook of theories of social psychology
(Vol. 2, pp. 251–272). Thousand Oaks, CA: Sage.
Van Lange, P. A. M., Schippers, M., & Balliet, D. (2011). Who volunteers in psychology
experiments? An empirical review of prosocial motivation in volunteering. Personality
and Individual Differences, 51, 279–284.
Van Lange, P. A. M., & Sedikides, C. (1998). Being more honest but not necessarily more
intelligent than others: Generality and explanations for the Muhammad Ali effect.
European Journal of Social Psychology, 28, 675–680.
Van Lange, P. A. M., & Visser, K. (1999). Locomotion in social dilemmas: How people adapt
to cooperative, tit-for-tat, and noncooperative partners. Journal of Personality and Social
Psychology, 77, 762–773.
Van Prooijen, J. -W., Stahl, T., Eek, D., & Van Lange, P. A. M. (2012). Injustice for all or just
for me? Social value orientation predicts responses to own versus other’s procedures.
Personality and Social Psychology Bulletin, 38, 1247–1258.
Van Vugt, M. (1997). Why the privatisation of public goods might fail: A social dilemma
approach. Social Psychology Quarterly, 63, 355–366.
Van Vugt, M. (2001). Community identification moderating the impact of financial
incentives in a natural social dilemma: Water conservation. Personality and Social
Psychology Bulletin, 25, 731–745.
Van Vugt, M. (2009). Averting the tragedy of the commons: Using social psychological
science to protect the environment. Current Directions in Psychological Science, 18,
169–173.
Van Vugt, M., & Ahuja, A. (2010). Naturally selected: The evolutionary science of leadership.
New York: Harper.
Van Vugt, M., & De Cremer, D. (1999). Leadership in social dilemmas: The effects of group
identification on collective actions to provide public goods. Journal of Personality and
Social Psychology, 76, 587–599.
Van Vugt, M., De Cremer, D., & Janssen, D. P. (2007). Gender differences in competition
and cooperation: The male warrior hypothesis. Psychological Science, 18, 19–23.
Van Vugt, M., & Hardy, C. L. (2010). Cooperation for reputation: Wasteful contributions as
costly signals in public goods. Group Processes and Intergroup Relations, 1–11.
Van Vugt, M., & Hart, C. M. (2004). Social identity as social glue: The origins of group
loyalty. Journal of Personality and Social Psychology, 86, 585–598.
Van Vugt, M., & Iredale, W. (2013). Men behaving nicely: Public goods as peacock tails.
British Journal of Psychology, 104, 3–13.
Van Vugt, M., Jepson, S. F., Hart, C. M., & De Cremer, D. (2004). Autocratic leadership in social
dilemmas: A threat to group stability. Journal of Experimental Social Psychology, 40, 1–13.
Van Vugt, M., & Samuelson, C. D. (1999). The impact of personal metering in the
management of a natural resource crisis: A social dilemma analysis. Personality and
Social Psychology Bulletin, 25, 731–745.
Van Vugt, M., & Van Lange, P. A. M. (2006). Psychological adaptations for prosocial
behaviour: The altruism puzzle. In M. E. Schaller, J. A. Simpson, & D. T. Kenrick (Eds.),
Evolution and social psychology (pp. 237–261). New York: Psychology Press.
Van Vugt, M., Van Lange, P. A. M., Meertens, R. M., & Joireman, J. A. (1996). How a
structural solution to a real-world social dilemma failed: A field experiment on the first
carpool lane in Europe. Social Psychology Quarterly, 59, 364–374.
Vierikko, E., Pulkkinen, L., Kaprio, J., & Rose, R. J. (2006). Genetic and environmental
sources of continuity and change in teacher-rated aggression during early adolescence.
Aggressive Behavior, 32, 308–320.
Von Neumann, J. (1928). Zur Theorie der Gesellschaftsspiele [On the theory of parlor games].
Mathematische Annalen, 100, 295–320.
Von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior.
Princeton, NJ: Princeton University Press.
Wade-Benzoni, K. A., Okumura, T., Brett, J. M., Moore, D. A., Tenbrunsel, A. E., & Bazerman,
M. H. (2002). Cognitions and behavior in asymmetric social dilemmas: A comparison
of two cultures. Journal of Applied Psychology, 87, 87–95.
Wade-Benzoni, K. A., & Tost, L. P. (2009). The egoism and altruism of intergenerational
behavior. Personality and Social Psychology Review, 13, 165–193.
Wade-Benzoni, K. A., Tenbrunsel, A. E., & Bazerman, M. H. (1996). Egocentric perceptions
of fairness in asymmetric, environmental social dilemmas: Explaining harvesting
behavior and the role of communication. Organizational Behavior and Human Decision
Processes, 67, 111–126.
Wagner, J. A. III. (1995). Studies of individualism-collectivism: Effects on cooperation in
groups. Academy of Management Journal, 38, 152–172.
Wagner, S. L., & Rush, M. C. (2000). Altruistic organizational citizenship behavior: Context,
disposition, and age. Journal of Social Psychology, 140, 379–391.
Wang, H., Law, K. S., Hackett, R. D., Wang, D., & Chen, Z.-X. (2005). Leader-member
exchange as a mediator of the relationship between transformational leadership and
followers’ performance and organizational citizenship behavior. Academy of Management
Journal, 48, 420–432.
Weber, E. U., & Morris, M. W. (2010). Culture and judgment and decision making: The
constructivist turn. Perspectives on Psychological Science, 5, 410–419.
Weber, J. M., Kopelman, S., & Messick, D. M. (2004). A conceptual review of social
dilemmas: Applying a logic of appropriateness. Personality and Social Psychology
Review, 8, 281–307.
Weber, M. J., & Murnighan, J. K. (2008). Suckers or saviors? Consistent contributors in
social dilemmas. Journal of Personality and Social Psychology, 95, 1340–1353.
Webley, P., Robben, H., Elffers, H., & Hessing, D. (1991). Tax evasion: An experimental
approach. Cambridge: Cambridge University Press.
Wedekind, C., & Milinski, M. (2000). Cooperation through image scoring in humans.
Science, 288, 850–852.
Wendler, D., & Miller, F. G. (2004). Deception in pursuit of science. Archives of Internal
Medicine, 164, 597–600.
References ■ 185
Zahavi, A., & Zahavi, A. (1997). The handicap principle: A missing piece of Darwin’s puzzle.
New York: Oxford University Press.
Zak, P. J. (2008). The neurobiology of trust. Scientific American, 298, 88–95.
Zeng, M., & Chen, X.-P. (2003). Achieving cooperation in multiparty alliances: A social
dilemma approach to partnership management. Academy of Management Review, 28,
587–605.
■ INDEX
instrumental cooperation, 8, 64, 77, 119
intangible vs. tangible outcomes, 32–33
interactional justice, 109–110, 120
interdependence
  dynamic interaction processes, 58, 67–72
  functionality of, 17
  and globalization, 101
  historical perspective, 16–21
  psychological influences on, 20–21, 58, 62–67
  structural influences on, 58–62
  structures of, 5–6
  theoretical considerations, 54–55, 143–144
interdisciplinary nature of social dilemma research, 11, 13
inter-group interactions, 91, 138–139, 146
inter-group public goods (IPG) situation, 119
international social dilemmas, 100–102
interpersonal basis of aggression, 75
interpersonal organizational citizenship behavior (OCBI), 109
intra-group conflicts, 138–139
intrinsic orientation, 65
IPG (inter-group public goods) situation, 119
Iredale, W., 46–47
Japanese in cross-societal comparisons, 97, 98
Jarvenpaa, S. L., 115
Jewish law, 19
joint outcomes, preference for, 55, 56, 62
Joireman, J., 5, 8, 11, 25, 62–65, 72, 93, 109, 128, 133, 145–146
justice in workplace, 109–110, 120, 123, 124. See also fairness
Kelley, H. H., 75
Kelley, L., 96
Kerr, N. L., 34–35, 59
Kiesler, S., 115
K index in Prisoner’s Dilemma, 24
kin selection, 41–42
Klandermans, B., 118, 120, 136
knowledge-sharing dilemmas, 115–117. See also information
Knox, R. E., 32
Kollock, P., 43, 61, 68
Komorita, S. S., 63
Krambeck, H.-J., 45
Kramer, R., 61, 64–65, 72, 101, 111
Kuhlman, D. M., 68, 74
laissez-faire leadership, 123, 124
large-scale societies, 83–84, 89–90
Latin America in cross-cultural comparisons, 96–97, 98
leadership, 71, 110, 114, 123
le her (card game), 19–20
Leviathan (Hobbes), 13, 18
Liebrand, W. B. G., 4, 7, 31, 63, 66, 69, 73, 75, 82
local social management, 131, 150
locomotion, 70
Logic of Appropriateness Model, 127
The Logic of Collective Action (Olson), 136
long-term orientation, 55, 65. See also time dimension
Luce, R. D., 9
MacCoun, R. J., 34–35
management. See workplace
market integration, and cooperation norms, 88, 89
Marlowe, F. W., 89
Marotzke, J., 45
marriage contract problem, 19
Marshello, A. F., 68, 74
Marwell, G., 117
Maximizing Difference Game, 74, 81
McClintock, C. G., 74, 81
McCusker, C., 65
mechanism approach to altruism and cooperation, 40
membership status in organizations, 111
memetics, 51
Messick, D. M., 59, 74, 117, 126–127, 134
Mexican Americans, 81
Mexicans, 81
Milgrom, P., 108
Milinski, M., 44–45
Mill, J. S., 15–16
minimizing large differences, Prisoner’s Dilemma, 9
mismatches, non-adaptive cooperation due to, 50–51
mistakes, non-adaptive cooperation due to, 49–50
social fences, 6t, 8, 30. See also give-some dilemmas
socialization, 81
social loafing, 113
social movements, 136–137
social norms
  conformity bias, 51–52
  cultural perspective, 87–90, 98–99, 103
  definition, 87
  individual response to, 103
  religion, 88–89, 90, 99–100
  reward and punishment method for enforcing, 59–60
  as sources for transformations, 58
  strength of internalized, 71
  tax paying compliance, 135
  workplace, 112–113, 116, 122–123
  See also sanctions for social norm violation
social preferences theory, 57
social traps, 6t, 7–8, 30. See also take-some dilemmas
social value orientation (SVO)
  direct reciprocity, 68
  and framing of decisions, 65
  in information sharing, 116
  organizational citizenship behaviors, 112
  as psychological influence on choice, 62–63
  strategic alliances, 120
  typology of, 55–57, 56f
  unionization, 119
spite. See competition
Sproull, L., 115
Stag Hunt (Assurance) Dilemma, 5–6, 6f, 7, 24–25
Stahelski, A. J., 75
Staples, D. S., 115
static vs. dynamic paradigms of social dilemma choice, 30–32
Stech, F. J., 81
step-level public good, 27, 27f
Stern, P. C., 33
Stoecker, R., 33
stoicism, classical Greek, 14–15
Stone, A. B., 59–60
strategic alliances, workplace, 120–121
structural approaches
  interdependence theory, 58–62
  to social dilemma solutions, 71, 126–127
structural equation modeling (SEM), 31–32
sustainability, 131
SVO. See social value orientation (SVO)
sympathy, and interdependence, 17
take-some dilemmas, 6t, 8, 28–30, 65. See also resource dilemmas
tangible vs. intangible outcomes, 32–33
tax paying, 134–135
Tazelaar, M. J. A., 64
TEA (tradeable environmental allowances), 130
team commitment, 111
temporal dimension. See time dimension
temporal discounting, 145
temptation in outcome matrix, 19, 23–24
Tenbrunsel, A. E., 59
Theory of Games and Economic Behavior (Neumann and Morgenstern), 21
Theory of Moral Sentiments (Smith), 17
third-party punishment game, 87–89
Thöni, C., 86
Thorngate, W. B., 74
time dimension
  age and cooperation, 112, 149
  altruism, 54
  cultural assimilation, 87
  delayed gratification, 5, 44, 145
  development of longer-term perspective, 16
  dynamic interaction processes, 67–72
  importance for future research, 145–146
  rewards and punishments, 69
  and social dilemma definition, 5
  social dilemma structure, 7–8
  and Tit-for-Tat strategy, 43
Tit-for-Tat strategy, 43, 68–69, 137
Toda, M., 81
tradeable environmental allowances (TEA), 130
tragedy of the commons, 29, 128
Tragedy of the Commons (Hardin), 29