
■ Social Dilemmas

Social Dilemmas
The Psychology of Human
Cooperation

Paul A. M. Van Lange


Daniel Balliet
Craig D. Parks
Mark Van Vugt

Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide.

Oxford New York
Auckland Cape Town Dar es Salaam Hong Kong Karachi
Kuala Lumpur Madrid Melbourne Mexico City Nairobi
New Delhi Shanghai Taipei Toronto

With offices in
Argentina Austria Brazil Chile Czech Republic France Greece
Guatemala Hungary Italy Japan Poland Portugal Singapore
South Korea Switzerland Thailand Turkey Ukraine Vietnam

Oxford is a registered trademark of Oxford University Press
in the UK and certain other countries.

Published in the United States of America by
Oxford University Press
198 Madison Avenue, New York, NY 10016

© Oxford University Press 2014

All rights reserved. No part of this publication may be reproduced, stored in a
retrieval system, or transmitted, in any form or by any means, without the prior
permission in writing of Oxford University Press, or as expressly permitted by law,
by license, or under terms agreed with the appropriate reproduction rights organization.
Inquiries concerning reproduction outside the scope of the above should be sent to the
Rights Department, Oxford University Press, at the address above.

You must not circulate this work in any other form
and you must impose this same condition on any acquirer.

Library of Congress Cataloging-in-Publication Data


Lange, Paul A. M. Van.
Social dilemmas: the psychology of human cooperation / by Paul A.M. Van Lange,
Daniel Balliet, Craig D. Parks, Mark Van Vugt.
pages cm
Includes bibliographical references and index.
ISBN 978–0–19–989761–2
1. Social interaction. 2. Cooperativeness. 3. Interpersonal relations.
4. Social psychology. I. Title.
HM1111.L36 2014
302—dc23 2013022815

9 8 7 6 5 4 3 2 1
Printed in the United States of America
on acid-free paper
■ CONTENTS

Preface vii

PART ONE ■ Introduction to Social Dilemmas

1 Introduction to Social Dilemmas 3
2 History, Methods, and Paradigms 13

PART TWO ■ Perspectives on Social Dilemmas

3 Evolutionary Perspectives 39
4 Psychological Perspectives 54
5 Cultural Perspectives 79

PART THREE ■ Applications and Future of Social Dilemmas

6 Management and Organizations 107
7 Environment, Politics, Security and Health 125
8 Prospects for the Future 143

References 153
Index 187

■ PREFACE

Social dilemmas are a pervasive feature of human society. They are a basic fab-
ric of social life, and challenge dyads, groups, and societies. They did so in
the past, they do so now, and they will do so in the future. Social dilemmas
cross the borders of time: our ancestors must have faced many social dilemmas
in their small groups and societies. Even the literary figure, Robinson Crusoe,
must have quickly learned about social dilemmas after Friday entered his life.
Similarly, we often face social dilemmas at home, at work, or in many other places
where we are interdependent with other people. Newspapers are filled with
articles about societal problems that frequently are rooted in conflicts between
self-interest and collective interest, such as littering in parks, free-riding on
public transportation, evading taxes, pursuing bonuses in the financial world,
or exploiting natural resources. And social dilemmas may involve many people
who do not know one another, may include different countries, and for some
issues, such as global change, may concern the entire world. In many respects,
social dilemmas also cross “the borders of space.”
As the title indicates, this book is about social dilemmas, which are broadly
defined as conflicts between (often short-term) self-interest and (often longer-term)
collective interest. This book is also about the psychology of human cooperation.
In the course of this book, it will become clear that social dilemmas and human
cooperation are two sides of the same coin. Social dilemmas challenge our capacity
and motivation to cooperate with each other. Life without social dilemmas would
be relatively straightforward and pain-free: People would just behave as they liked
as if guided by Adam Smith’s invisible hand—at least as long as they were able to
coordinate actions with others. But life without social dilemmas is utopian. In our
interactions with friends and partners, work colleagues, or members of clubs and
communities and societies at large, there are frequent conflicts between our nar-
row self-interests and the collective interest.
This book provides many different examples of social dilemmas, and we will see
that they challenge the maintenance of our close relationships, our friendships, our
work, and leisure life, our politics, security, health, and the natural environment
in which we live. One could make the claim that the primary purpose of
government and management is to resolve social dilemmas. We would not be
surprised if a careful analysis revealed that the majority of all challenges
(80 percent is a wild guess) that governments and management face are rooted in situations
that are, or closely resemble, social dilemmas. How can we promote spontaneous
help from bystanders, how can we activate citizenship and mutual help among
employees in our work organizations? How can we restrain overfishing? How can
we promote commuting by public transportation? How can we reduce greed and
excessive bonus cultures in the financial world? How can we maintain trust and
cooperation among nations, and promote national security? Social dilemmas can
be easily over-recognized, but it is a challenge to entertain the thought exercise of
finding social issues at the level of relationships, organizations, or governments
that do not share elements of social dilemmas. They exist, but they are not so easy to find.
Social dilemmas are important and ubiquitous. Because social dilemmas cap-
ture so many situations that matter to all of us, they are of great scientific interest.
They touch upon basic questions about human nature, such as: are people natu-
rally selfish, altruistic, or perhaps both? While cooperation is a big theme in the
scientific literature, studying the psychology of human cooperation raises various
questions about constructs that are intimately linked to it, such as trust, proso-
ciality, reciprocity, fairness, culture, norm violation, emotions, reputation, social
learning, adaptation, reward and punishment, and many other important theo-
retical concepts. For example, specific issues such as forgiveness, morality, toler-
ance (of exploitation), generosity, retaliation, deceit, and competition (or spite) are
all relevant to the study of social dilemmas, and some of these phenomena can be
easily placed under its rubric.
It is a real treat to write about social dilemmas. And it is an even greater treat to
do so with true friends. We have worked very well—both together and alone, in that
order. We had a number of meetings at various places around the world, we had
a few dinners, and while we did most of the writing individually, after we agreed
about the basic structure of the book, we worked as a virtual team. While think-
ing about the contents of this book, we reached the following conclusions. There
should be a chapter about the history and methods of social dilemmas because
it is an established field of research and there have been many developments in
the analyses of different experimental games since the first empirical articles on
the prisoner’s dilemma game emerged. Part 1 of the book offers an introduction
to social dilemmas. Chapter 1 focuses on definitions and assumptions underly-
ing social dilemmas. Chapter 2 provides an historical account of social dilemma
research with a special focus on methods and experimental paradigms.
We did not have to think long about including a chapter on the psychological
perspective on social dilemmas, because that is the perspective we take as social
psychologists studying social dilemmas. Although our overall perspective on
social dilemmas is colored by our own discipline, we also agreed about including
a chapter on evolutionary approaches to social dilemmas because this theoreti-
cal perspective has guided the field from the start. We also felt that we needed to
include a cultural approach to social dilemmas because of the importance of cul-
tural differences in cooperation. The importance of culture is illustrated by some
older studies on social dilemmas, as well as more recent studies which have com-
pared the same games in many different countries, societies, and cultures. Part 2
addresses these perspectives, the evolutionary perspective (Chapter  3), the psy-
chological perspective (Chapter 4), and the cultural perspective (Chapter 5).
Without much hesitation we also decided that we needed chapters on societal
applications of social dilemma research. Each of us has worked on applications
of social dilemmas. Some of us have conducted field research on environmental
dilemmas, others on management and organizational dilemmas, and still others on
social dilemmas in politics and security. Chapter 6 summarizes research on social
dilemmas in management and organizations. Chapter  7 reviews applications of
social dilemma research in the domains of environmental sustainability, politics,
security, and public health. We decided to add these latter themes because of their
urgency, and because we strongly believe in the utility of a social dilemma approach
for studying and resolving these dilemmas. Yet we acknowledge that more research
is needed, especially on important societal dilemmas. The concluding chapter,
Chapter  8, outlines the prospects for the future of social dilemmas and human
cooperation, addressing major new trends in research, theory and application.
We hope to reach a broad audience of scientists in various fields and disciplines,
as well as the interested reader or practitioner who is committed to resolving social
dilemmas in various domains of social life. As we note in the various chapters,
and as suggested by the title, our primary approach has been psychological: we
address the psychology of human cooperation. We felt it would be premature, and
too ambitious, to write a textbook on social dilemmas that includes anthropologi-
cal, biological, economical, mathematical, or philosophical perspectives as well. It
would be unrealistic to convey the impression that we actively pursue the dream
of interdisciplinary coverage of the social dilemma literature. In other words, we
do not really capture all perspectives on human cooperation in social dilemmas.
We hope to do so in the future some time, as we all think this is a very important
service to the field. But for now, we think that our coverage of the psychology of
human cooperation is comprehensive and reasonably exhaustive, at least when
focusing on the past two decades of research on social dilemmas. We also recog-
nized the importance of discussing some basic issues—the bigger theoretical or
societal issues—that are inspired by social dilemma research. Obviously, science is
not complete if there are no remaining issues left to be addressed.
Clearly, a book that involves four authors, even four true friends, is a challenge.
The way we worked was that each chapter was assigned to one of the four authors
of this book. Chapters 1, 4, and 8 were prepared by Paul Van Lange, Chapters
2 and 6 by Craig Parks, Chapters 3 and 7 by Mark Van Vugt, and Chapter 5 by
Daniel Balliet. Each of the chapters was read by other authors and discussed in
detail, both face-to-face and via e-mail. This led to many important additions and
revisions. All along, we knew that our shared goal was to write a comprehensive
state-of-the art book on the psychology of social dilemmas. We are not in the posi-
tion to judge whether we have succeeded, and we reserve judgment on this to our
readers—academics, students, practitioners, and the broader public.
We would like to extend our gratitude to a number of people for making valu-
able contributions to the completion of the book. Social psychologist Jeff Joireman
has made significant inputs to Chapters 1 and 4. Evolutionary biologist Pat Barclay
has made important intellectual contributions to Chapter 3. Several people have
been supportive of this book project from the start whom we would like to thank
for their help. Margaret Foddy has shown continuous support throughout the
book project, and has contributed to our thinking about the outline and structure
of the book. We are grateful to David Schroeder, Norbert Kerr and Mike Kuhlman,
whose genuine interest in this project strengthened our conviction and motiva-
tion to initiate and complete this project. We also want to thank Abby Cross who
has expressed her enthusiasm from the very beginning at a meeting of the Society
for Personality and Social Psychology, and throughout the three years after that
meeting while we wrote the book. We want to thank Niels van Doesum for com-
ments on the final writings, and Lisanne Pauw who has organized, checked, and
rechecked, the long list of references. Finally, we would like to thank all members
of the broad international social dilemma community that comes together at the
bi-annual meetings at some exotic location in the world. We are proud members of
this community and without the intellectual inputs of each of the members of this
social dilemma network, this book could simply not have been written.
Finally, we hope that you will enjoy reading this book—as a student, a fellow
academic, teacher, practitioner, or member of the general public—and that it
makes a meaningful difference, even if only a small difference, in how you think
about cooperation and how to promote cooperation in our everyday lives and
society at large.

The authors, December 2012


■ PART ONE

Introduction to Social Dilemmas


1 Introduction to Social Dilemmas

■ INTRODUCTION

What determines how well an organization will do in business? What determines
how well a national soccer team will do in the World Cup? What determines
whether a marriage or relationship will thrive and survive rather than end?
What determines the quality of the environment that the world seeks to pro-
tect? Of course, skill and talents are crucial. A business might benefit from skill
and foresight in strategic planning, or coming up with the right product at the
right time. Having an exceptionally skillful player on the team might make a
big difference. Perhaps the ability to communicate clearly, along with the abil-
ity to listen and provide support, might promote well-functioning relationships.
And society’s ability to provide technological solutions to environmental
issues (e.g., development of cleaner cars) does help.
But above and beyond differences in ability (talent and skill), the health and
vitality of relationships, groups, and the society at large is strongly challenged
by social dilemmas, or conflicts between short-term self-interest and long-term
collective interest. Organizations fare better if their employees are willing to go
the extra mile; teams perform better if individuals are willing to share success
rather than primarily pursue their own success; acts of sacrifice help partners in a
relationship and marriage; and exercising restraint on consumption, such as eat-
ing particular fish that risk depletion, helps to maintain a healthy environment.
Pollution and depletion of natural resources are among the most urgent social
dilemmas. And even various forms of intergroup conflict, as in the Middle East,
share features with social dilemmas. After all, members of groups typically prefer
peace over hostility since peace meets a basic desire for security (and peace is less
costly). Thus, a good deal of what we see on headline news, what we read in news-
papers and news sites on the Internet, and what we experience at work or at home
resembles aspects of a social dilemma.
Many social dilemmas are challenging because acting in one’s immediate
self-interest is tempting to everyone involved, even though everybody benefits
from acting in the longer-term collective interest. For example, relationships are
healthier if partners do not neglect each another’s preferences, organizations are
more productive if employees spontaneously exchange expertise, and nations fare
better when they show respect for one another’s values, norms, and traditions.
Similarly, in the long run everyone would benefit from a cleaner environment,

Paul van Lange had primary responsibility for preparation of this chapter.


yet how many are prepared to voluntarily reduce their carbon footprint by saving
more energy or driving or flying less frequently?

■ THE HUIZINGE CASE

One real world social dilemma occurred during the winter of 1979 in Huizinge,
a small village in the north of the Netherlands. Due to an unusually heavy
snow, Huizinge was completely cut off from the rest of the country, so that there
was no electricity for lighting, heat, television, and so on (Liebrand, 1983).
However, one of the 150 inhabitants owned a generator that could provide
sufficient electricity to all the people of this small community, if and only if
they exercised substantial restraint in their energy use. For example, they could
use only one light, they could not use heated water, heat had to be limited
to about 18 degrees Celsius (64 degrees Fahrenheit), and the curtains had to
be closed. As it turned out, the generator collapsed because most people were
in fact using heated water, and were living comfortably at 21 degrees Celsius
(70 degrees Fahrenheit), watching television, and burning several lights simul-
taneously. After being without electricity for a while, the citizens were able to
repair the generator, and this time, they appointed inspectors to check whether
people were using more electricity than agreed upon. But even then, the gen-
erator eventually collapsed due to overuse of energy. And again, all inhabitants
suffered from the cold and lack of light, and of course, could not watch
television. Indeed, there is little doubt that they all would have preferred a situation in which
they could use at least some electricity (a result of massive cooperation) rather
than no electricity at all (a result of massive noncooperation).
Social dilemmas can be quite intense, as the Huizinge case illustrates. They are
also quite ubiquitous. In fact, many of the world’s most pressing problems represent
social dilemmas, broadly defined as situations in which short-term self-interest is
at odds with longer-term collective interests. Some of the most widely-recognized
social dilemmas challenge society’s well-being in the environmental domain,
including overharvesting of fish, overgrazing of common property, overpopula-
tion, destruction of the Brazilian rainforest, and buildup of greenhouse gasses due
to overreliance on cars. The lure of short-term self-interest can also discourage
people from contributing time, money, or effort toward the provision of collec-
tively beneficial goods. For example, people may listen to National Public Radio
without contributing toward its operations; community members may enjoy a
public fireworks display without helping to fund it; employees may elect to never
go above and beyond the call of duty, choosing instead to engage solely in activi-
ties prescribed by their formally defined job description; and citizens may decide
to not exert the effort to vote, leaving the functioning of their democracy to their
compatriots.
Social dilemmas apply to a wide range of real-world problems; they exist within
dyads, small groups, and society at large; and they deal with issues relevant to
a large number of disciplines, including psychology, sociology, political science
and economics, to name but a few. Given their scope, implications, and interdis-
ciplinary nature, social dilemmas have motivated huge literatures in each of these
disciplines (see also Fehr & Gintis, 2007). Also, disciplines have tended to focus
on only one type of social dilemma. For example, the two-person prisoner’s
dilemma was very popular in social psychology during the 1970s; this was
followed by greater appreciation for other social dilemmas, including social
dilemmas involving a greater number of people. In some social dilemmas, the act of
cooperation involves “giving” to a public good; in other social dilemmas, it is “not
taking too much” from a shared resource. We will now take a closer look at the
various types of social dilemmas, and the different names that various scientists
have used to capture a specific social dilemma. Once we have illustrated a family
of social dilemmas, we will also be able to provide a more formal definition of a
social dilemma.

■ SOCIAL DILEMMAS: A FAMILY OF GAMES

Social dilemmas come in many flavors. Sometimes cooperation means giving


or contributing to the collective, sometimes it means not taking or consuming
from a resource shared by a collective. Sometimes the time horizon is short,
even as short as a single interaction, sometimes it is long-lasting, almost with-
out an end as in ongoing relationships. There are social dilemmas involving
two persons, and social dilemmas involving all people living in a country, con-
tinent, or even the world. Not surprisingly, the diversity in social dilemma set-
tings has led researchers to offer a range of different definitions for the concept.
In his Annual Review of Psychology article, Robyn Dawes (1980) was one of
the first to formally define the term social dilemma: a situation in which (a) each
decision maker has a dominating strategy dictating non-cooperation (i.e., an
option that produces the highest outcome, regardless of others’ choices), and
(b) if all choose this dominating strategy, all end up worse off than if all had
cooperated (i.e., a deficient equilibrium).
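Dawes’s two conditions can be checked mechanically for any symmetric two-person game. The sketch below is our own illustration (the function name and dictionary layout are ours, not from the text); it uses the rank payoffs 1 (worst) to 4 (best) that appear in Figure 1.1.

```python
# Check Dawes's (1980) two defining conditions on a symmetric 2x2 game.
# payoff[my_choice][other_choice] gives the row player's outcome;
# choices are 'C' (cooperate) and 'NC' (not cooperate).

def is_dawes_dilemma(payoff):
    # (a) Non-cooperation is a dominating strategy: NC beats C
    # no matter what the other player chooses.
    nc_dominates = all(payoff['NC'][other] > payoff['C'][other]
                       for other in ('C', 'NC'))
    # (b) The equilibrium is deficient: mutual NC leaves each player
    # worse off than mutual C would have.
    deficient = payoff['NC']['NC'] < payoff['C']['C']
    return nc_dominates and deficient

# Prisoner's Dilemma with rank payoffs 1 (worst) to 4 (best).
prisoners_dilemma = {'C': {'C': 3, 'NC': 1}, 'NC': {'C': 4, 'NC': 2}}
print(is_dawes_dilemma(prisoners_dilemma))  # True
```

Chicken and Assurance, discussed below, fail condition (a); relaxing that requirement is precisely what the more inclusive definition later in this chapter does.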
But as we will see, while focusing on the crux of the dilemma, this definition
does not do justice to some other outcome structures (or more precisely, inter-
dependence structures) that also capture the conflict between self-interest and
collective interest. These include not only the Prisoner’s Dilemma, but also the
Chicken Dilemma, and the Assurance Dilemma (or trust dilemma). This defini-
tion also does not include the temporal or time dimension (e.g., Messick & Brewer,
1983; Van Lange, Joireman, Parks, & Van Dijk, 2013), because consequences can
be immediate (short-term) or delayed (long-term). A more inclusive conceptual-
ization allows us to include social traps, social fences, public good dilemmas, and
resource dilemmas (see Table 1.1). We briefly discuss both features in turn.
Prisoner’s, Chicken, and Assurance Dilemmas. The well-known Prisoner’s
Dilemma has often been used as the basis for defining social dilemmas, which is
also evident in Dawes’ definition. We suggest that two other outcome interdepen-
dence structures can also be viewed as social dilemmas, if one relaxes the require-
ments for a dominating strategy and a single equilibrium. These structures include
the Chicken and the Assurance (or Trust) Dilemma. In both dilemmas, the indi-
vidual versus collective conflict essential to social dilemmas is retained: there is a
non-cooperative course of action that is (at times) tempting for each individual,
TABLE 1.1. Classification of Social Dilemmas (after Messick and Brewer, 1983)

                               Collective Consequences
                     Immediate                         Delayed

Social Traps         Commuting by car (vs. public      Harvesting as many fish as one can
• Take Some          transportation, or carpooling)    from a common resource eventually
  Dilemmas           leads to daily traffic            leads to the collapse of the
• Commons/Resource   congestion and stress             resource
  Dilemmas

Social Fences        Electing to not contribute        Choosing to not engage in
• Give Some          to a community-funded             extra-role behaviors that benefit
  Dilemmas           fireworks show results in         one’s company eventually leads to
• Public Goods       cancellation of the show          a deterioration of the company’s
  Dilemmas                                             positive culture

and if all pursue this non-cooperative course of action, all end up worse off than if
all had cooperated (see Figure 1.1).
In the Chicken Dilemma (also termed the Hawk-Dove game or the Snow Drift
game), each person is tempted to behave non-cooperatively (by driving straight
toward one’s “opponent” in an effort to win the game), but if neither player coop-
erates (swerves), both parties experience the worst outcome possible (death).
Clearly, Chicken does not involve a dominating strategy, as the best decision for
an individual rational decision maker depends on what he or she believes the
other will do; if one believes the other will cooperate (swerve), the best course
of action is to behave non-cooperatively (and continue driving ahead); however,
if one is convinced that the other will not cooperate (will not swerve), one’s best
course of action is to cooperate (swerve), because it is better to lose the game than
to die. There are interesting parallels between Chicken and situations in which
people are faced with the dilemma whether to maintain honor or status when they
are closely at risk (see Kelley, Holmes, Kerr, Reis, Rusbult, & Van Lange, 2003). For
example, Chicken is a situation in which you should exhibit toughness (by being
a hawk) by not cooperating, and clearly outperform the other if the other does
not cooperate (who is the dove). Intimidation may play a role by communicating
toughness, or a “no surrender” attitude. These are also risky strategies: if both par-
ticipants express such toughness, then the result may be that one needs to change
to cooperation (and lose face), or persist in noncooperation and maintain honor,
but seriously risk death. Over time, this may result in a deadlock, as in the Snow Drift game, especially if

Prisoner’s Dilemma           Chicken Dilemma              Assurance (Trust) Dilemma

        C      NC                    C      NC                    C      NC
  C   3, 3   1, 4              C   3, 3   2, 4              C   4, 4   1, 3
  NC  4, 1   2, 2              NC  4, 2   1, 1              NC  3, 1   2, 2

Figure 1.1 Three Social Dilemmas. In each cell, the first number is the row
player’s outcome and the second is the column player’s outcome (1 = worst,
4 = best); C = cooperation, NC = noncooperation.



people are committed not to lose face. In everyday life, such situations may arise
when two companies are involved in an intense competition to lower the price of
their product to a point that is “killing” for both, or to guarantee treatment (early
delivery of the product) that can never be implemented.
The Assurance (Trust) Dilemma also lacks a dominating strategy, and is unique
in that the highest collective and individual outcomes occur when both partners
choose to cooperate. This correspondence of joint and self outcomes might sug-
gest that the solution is simple, and there is no dilemma. However, if one party
considers beating the other party to be more important than obtaining high out-
comes for the self and others, or is convinced the other will behave competitively,
the best course of action is to not cooperate. The Assurance Dilemma is sometimes
described as resembling features of the relationship between the USA and Soviet
Union during the cold war, in which disarming represented the cooperative choice
and arming the noncooperative choice (e.g., Hamburger, 1979). To jointly disarm
was clearly the best solution for both countries, yet being the only one to disarm
would have made one nation terribly vulnerable, because it may have yielded the
worst possible solution. Thus, the two countries armed for a long time because
they failed to trust one another, believing that the other party was seeking relative
advantage, and therefore was to be considered very threatening. As another exam-
ple, two athletes want to be involved in a fair contest, in that neither takes drugs to
promote their performances. However, if one athlete suspects that the other might
take drugs, it is perhaps best to take drugs as well to minimize the odds of losing
due to unfair disadvantages (Liebrand, Wilke, Vogel, & Wolters, 1986).
The similarity between the Prisoner’s, Chicken, and Assurance Dilemmas is
that all three situations involve collective rationality: Cooperative behavior by both
individuals yields greater outcomes than does noncooperative behavior by both
individuals. Specifically, the best (Assurance) or second best (Chicken, Prisoner’s
Dilemma) possible outcome is obtained if both make a cooperative choice,
whereas the third best (Assurance, Prisoner’s Dilemma) or worst (Chicken) pos-
sible outcome is obtained if both make a noncooperative choice. In the Prisoner’s
Dilemma, tendencies toward cooperation are challenged by both greed (i.e., the
appetitive pressure of obtaining the best possible outcome by making a noncoop-
erative choice) and fear (i.e., the aversive pressure of avoiding the worst possible
outcome by making a noncooperative choice; Coombs, 1973). In Chicken, coop-
eration is challenged by greed, whereas in Trust, cooperation is challenged by fear.
Thus, in a sense, the Prisoner’s Dilemma “combines” Chicken and Assurance,
representing a stronger conflict of interest, involving both fear and greed. Consistent
with this analysis, research has revealed that individuals exhibit greater levels of
cooperation in Assurance and Chicken than in the Prisoner’s Dilemma (Liebrand
et al., 1986).
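This greed/fear decomposition can be read straight off the Figure 1.1 payoffs. The following sketch is our own illustration (the names and dictionary layout are ours), using the chapter’s 1-to-4 rank payoffs for the row player.

```python
# Greed: defecting pays against a cooperator.
# Fear: defecting pays against a defector.
# payoff[my_choice][other_choice] is the row player's outcome (1 = worst, 4 = best).

games = {
    "Prisoner's Dilemma": {'C': {'C': 3, 'NC': 1}, 'NC': {'C': 4, 'NC': 2}},
    "Chicken":            {'C': {'C': 3, 'NC': 2}, 'NC': {'C': 4, 'NC': 1}},
    "Assurance":          {'C': {'C': 4, 'NC': 1}, 'NC': {'C': 3, 'NC': 2}},
}

def pressures(payoff):
    greed = payoff['NC']['C'] > payoff['C']['C']   # gain from exploiting a cooperator
    fear = payoff['NC']['NC'] > payoff['C']['NC']  # gain from defending against a defector
    return greed, fear

for name, payoff in games.items():
    print(name, pressures(payoff))
# Prisoner's Dilemma: (True, True)  -- both greed and fear
# Chicken:            (True, False) -- greed only
# Assurance:          (False, True) -- fear only
```

The output matches the text: the Prisoner’s Dilemma combines both pressures, which is consistent with the lower cooperation rates it produces relative to Chicken and Assurance.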
The temporal dimension. We often see that the consequences for the self can be
immediate or delayed, just as the consequences for the collective can be immediate
or delayed. This temporal dimension is exemplified in social traps, or situations in
which a course of action that offers positive outcomes for the self leads to negative
outcomes for the collective (Messick & McClelland, 1983; Platt, 1973). Examples
of delayed social traps include the buildup of pollution due to over-reliance on
cars, and the eventual collapse of a common fishing ground as a result of sus-
tained overharvesting. Given their emphasis on “consuming” or “taking” a posi-
tive outcome for the self, social traps are often called take some dilemmas, a classic
example of which is the commons (or resource) dilemma. This is the kind of social
dilemma that attracted environmental scientists to examine the variables that help
people to exercise restraint in their consumption of shared resources.
These social trap situations may be contrasted with social fences, or situa-
tions in which an action that results in negative consequences for the self would,
if performed by enough people, lead to positive consequences for the collective.
Examples of delayed social fences include the eventual deterioration of a com-
pany’s positive culture due to employees’ unwillingness to engage in extra-role
(or organizational citizenship) behaviors, such as being a good sport and helping
new employees adjust, and the gradual deterioration of an education system due
to taxpayers’ unwillingness to fund school levies. Given their emphasis on “giving”
something of the self (such as time, money, or effort), social fences are often called
give some dilemmas, a classic example of which is the Public Goods Dilemma. This
is the kind of social dilemma that attracted experimental economists in particular
to examine the variables that help people to contribute to public goods, and resist
the temptation to free-ride on the contributions of other members.
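The give-some structure of a public goods dilemma can be made concrete with a standard linear public goods game. The particular numbers below (endowment 10, multiplier 1.5, four players) are our own illustrative choices, not from the text.

```python
# Linear public goods game: each of n players holds an endowment e;
# total contributions are multiplied by m (with 1 < m < n) and the
# resulting public good is shared equally among all players.

def payoff(my_contribution, total_contribution, e=10, m=1.5, n=4):
    return e - my_contribution + m * total_contribution / n

# Free-riding is individually tempting: if three others each give 10,
# keeping your endowment pays more than contributing it.
print(payoff(0, 30))   # free-rider earns 21.25
print(payoff(10, 40))  # contributor earns 15.0

# Yet universal contribution beats universal free-riding:
print(payoff(10, 40) > payoff(0, 0))  # 15.0 > 10, prints True
```

Because m/n < 1, each contributed unit returns less than a unit to the contributor, so withholding dominates; because m > 1, the group as a whole is still better off when everyone gives.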

■ A DEFINITION OF SOCIAL DILEMMAS

We define social dilemmas as situations in which a non-cooperative course of action is (at times) tempting for each individual in that it yields superior (often short-term) outcomes for self, and if all pursue this non-cooperative course of action, all are (often in the longer term) worse off than if all had cooperated (see also Van Lange, Joireman, Parks, & Van Dijk, 2013). This definition is inclusive of the well-known Prisoner’s Dilemma, as well as the Chicken Dilemma and the Assurance Dilemma, and it incorporates the time dimension, such that consequences for self are often immediate or short-term, while the consequences for the collective often unfold over longer periods of time.
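The definition can be checked mechanically for any symmetric two-person game, using the conventional payoff labels (R for mutual cooperation, P for mutual defection, T for the temptation to defect against a cooperator, S for the “sucker” who cooperates against a defector). The sketch below, including the specific numeric payoffs, is our own illustration rather than anything taken from the literature cited here.

```python
def is_social_dilemma(R, S, T, P):
    """Test whether a symmetric 2x2 game meets the definition above:
    defection is (at times) tempting -- it pays against a cooperator
    (T > R) or against a defector (P > S) -- yet mutual defection is
    collectively worse than mutual cooperation (R > P)."""
    tempting = T > R or P > S
    collectively_worse = R > P
    return tempting and collectively_worse

# Prisoner's Dilemma (T > R > P > S): defection tempts on both counts.
assert is_social_dilemma(R=3, S=0, T=5, P=1)
# Chicken (T > R > S > P): defection tempts only against a cooperator.
assert is_social_dilemma(R=3, S=1, T=5, P=0)
# Assurance (R > T > P > S): defection tempts only out of fear.
assert is_social_dilemma(R=5, S=0, T=3, P=1)
# A game with no temptation at all fails the definition.
assert not is_social_dilemma(R=5, S=3, T=4, P=1)
```

The “(at times)” clause does real work here: in the Prisoner’s Dilemma defection is tempting against any partner, in Chicken only against a cooperator, and in the Assurance Dilemma only out of fear of a defector; all three satisfy the definition.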
Although the above definition of social dilemmas is fairly comprehensive, we
acknowledge that other important distinctions are not included. One such distinction is the difference between a first order dilemma, which represents the initial dilemma, and a second order dilemma, which represents the dilemma that one
might face when deciding whether to contribute to a costly system that might
promote cooperation in the first order dilemma (e.g., a system that sanctions
free-riders, Yamagishi, 1986a). Cooperation in the first order dilemma is known as
elementary cooperation, while cooperation in the second order dilemma is known
as instrumental cooperation. As the reader will see in this book, a good deal of
contemporary research on social dilemmas has also been devoted to instrumen-
tal cooperation in second order dilemmas, providing strong evidence that many
(but not all) people are quite willing to engage in costly behavior to reward other
group members who have cooperated and punish those who have not cooper-
ated (e.g., Fehr & Gächter, 2002). And as has been common in social dilemma
research (e.g., Bornstein, 1992; Pruitt & Kimmel, 1977), several scientists are
currently developing new games to enhance our understanding of some new challenges to social decision making, and especially human cooperation (e.g., Halevy, Bornstein, & Sagiv, 2008; McCarter, Budescu, & Scheffran, 2011). Some of these issues
will be addressed in Chapter 8, which discusses prospects for the future.
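The first-order/second-order distinction can be sketched as a two-stage game. The parameters below (a punishment fee of 1 buying a fine of 3) are illustrative assumptions in the spirit of sanctioning experiments such as Fehr and Gächter’s, not their actual procedure.

```python
def payoffs_with_punishment(contributions, punishers,
                            endowment=10, multiplier=1.6,
                            fee=1, fine=3):
    """Two-stage public goods game: a first-order contribution stage,
    then a second-order punishment stage in which each punisher pays
    `fee` per free-rider to levy `fine` on every free-rider."""
    n = len(contributions)
    share = sum(contributions) * multiplier / n
    payoffs = [endowment - c + share for c in contributions]
    free_riders = [i for i, c in enumerate(contributions) if c == 0]
    for p in punishers:
        payoffs[p] -= fee * len(free_riders)  # cost borne by the punisher
    for i in free_riders:
        payoffs[i] -= fine * len(punishers)   # fines received by the free-rider
    return payoffs

# One free-rider among four; two contributors punish at personal cost.
result = payoffs_with_punishment([0, 10, 10, 10], punishers=[1, 2])
# Free-rider: 22 - 6 = 16; each punisher: 12 - 1 = 11; bystander: 12.
```

Note that the non-punishing contributor (12) ends up ahead of the punishers (11): paying to sanction free-riders is itself costly, which is exactly why cooperation in the second order dilemma is instrumental rather than elementary.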

■ WHY GAMES?

The social dilemma literature has its conceptual roots in game theory. With the
prisoner’s dilemma as one of the prime examples of a social dilemma, it is fair
to admit that the prisoner’s dilemma is just one in a family of numerous games.
One only needs to skim the book by Von Neumann and Morgenstern (1944),
or the much later book by Luce and Raiffa (1957), to see that the prisoner’s
dilemma is almost a needle in a haystack—not so easy to find. Yet the game
attracted lots of scientists. Why might that be? And why games?
First, the prisoner’s dilemma excels in parsimony. In its original form, involving two people who simultaneously make only one choice, the structure of the game is very simple. When lecturing, and setting aside the anecdote about where the name originated (which can be confusing; see Chapter 2), one can explain the game in ten minutes or less. While simple in terms of structure, the game is
not simple at all in terms of rationality, and people can have very different feelings
about what is rational in the prisoner’s dilemma. Hence, the prisoner’s dilemma
is also complex: one can view the dilemma in different ways, and it is even more
complex to regulate behavior at the collective level. There is even research that
illustrates the point that it is the interaction goal—individualistic versus collectivistic—that determines whether people view the cooperative choice as intelligent and the noncooperative choice as unintelligent, or vice versa (Van Lange & Kuhlman, 1994).
Second, there is a wealth of motives, cognitions, and emotions that might be
activated by the Prisoner’s Dilemma. There may be a strong form of self-regard
such as greed (always go for the best possible outcome), a self-protective form of
self-regard such as fear (let’s make sure that the other is not going to exploit me),
a genuine concern with the outcomes for the self and the other (collectivism),
or under special circumstances, primarily the other (altruism). And there is the
powerful concern with equality or fairness and the strong tendency to minimize
large differences in outcomes. Such tendencies may be easily activated even when
just approaching a situation in which two people are unlikely to receive the same
outcomes (e.g., Haruno & Frith, 2009). Cognition and reasoning might be focused
on predicting the other’s behavior, making sense of the situation and deciding what to decide (e.g., in terms of norms and identity: “what does a person like me do in a situation like this?”; Messick, 1999; Weber, Kopelman, & Messick, 2004), and, after the fact, making sense of the other’s behavior so as to “learn” for future situations like
the Prisoner’s Dilemma. All of this is preceded or accompanied by strong emo-
tions, such as regret (when one made a noncooperative choice out of fear but then
finds out that the other made a cooperative choice), or anger (when one made
a cooperative choice and then finds out that the other made a noncooperative
choice).

Third, what attracted scientists to the original Prisoner’s Dilemma (in the ocean of games) were theoretical questions, such as: (a) what is the logical, rational solution
to the prisoner’s dilemma?; and (b) what promotes a cooperative choice? Later, when
people started doing research on the iterated Prisoner’s Dilemma, people also asked
the question: (c) do people learn and adapt to develop stable patterns of cooperative
interaction? These are all questions relevant to game theory, evolution of coopera-
tion, as well as to the psychology of trust, cooperation, and learning (e.g., Budescu,
Erev, & Zwick, 1999; Nowak, 2006; Schroeder, 1995). This may well have been part
of the broader zeitgeist in the years after the economic crisis in the 1930s and World
War II. Game theory, more generally, was influential in various scientific disciplines
for a variety of reasons. One is that game theory provided a very useful comple-
ment to extant economic theory, which was primarily based on macro-level statistics that had not proven to be exceptionally useful for the prediction of economic
stability and change. Another reason is that game theory provided a “logic” that had
a strong scientific appeal, analytical power, and mathematical precision (e.g., Kelley
et al., 2003; Rapoport, 1987; Suleiman, Budescu, Fischer, & Messick, 2004).
Fourth and finally, the Prisoner’s Dilemma also inspired scientists and practitioners alike to get a grip on some major social issues. One such issue was to analyze the economic crisis of the 1930s and to provide a basis for the understanding of
various economic and social phenomena as well as to address the roots of conflict,
and especially how to resolve it (e.g., Pruitt & Kimmel, 1977; Schelling, 1960).
The Second World War itself, and especially the beginning of the Cold War, was
a period in which trust and cooperative relations had to be re-built, especially in
Europe. The Prisoner’s Dilemma, as well as some other games (e.g., negotiation
games), were often used in designing policy and providing recommendations for
the resolution of international hostility and friction (Lindskold, 1978; Osgood,
1962). Moreover, basic insights from game theory were discussed and used by
RAND corporation (Research ANd Development), an influential organization and
think tank whose mission was to provide analysis and advice on military strategy to the United States. (The RAND Corporation is now more international in orientation
and has several sites outside of the United States; also, it is now broader in scope in
that it focuses on several key societal issues, including terrorism, energy conserva-
tion, and globalization—interestingly enough, these social issues also have strong
parallels to social dilemmas.) Later, in the early 1970s, as Cold War tensions eased into détente, the Western world faced a major oil crisis when the members of the Organization of Arab Petroleum Exporting Countries, or OAPEC (consisting of the Arab members of OPEC, plus Egypt, Syria, and Tunisia), proclaimed an
oil embargo. The experience of scarcity—insufficient gasoline, for example—along with early signs that other natural resources were being overused, may well have inspired the resource dilemma. Subsequent environmental
issues, such as global warming, acid rain, and overfishing, reinforced awareness of
social dilemmas where excessive consumption is increasingly perceived as nonco-
operative, or as a neglect of shared future interest (e.g., Burger, Ostrom, Norgaard,
Policansky, & Goldstein, 2001; Dolšak & Ostrom, 2003).
Thus, we see four important reasons why the original Prisoner’s Dilemma game,
as a prototype of a social dilemma game, was inspirational to so many scientists,
for such a long time: (a) its simplicity in terms of structure; (b) the wealth of motives, cognitions, and emotions it may activate; (c) its ability to address broad questions about human cooperation; and (d) its ability to help address and illuminate critical societal issues (applicability; see Van Lange, 2013).

■ SUMMARY AND CONCLUSIONS

We define social dilemmas as situations in which a non-cooperative course of action is (at times) tempting for individuals in that it yields superior (often short-term) outcomes for self, and if all pursue this non-cooperative course of action, all are (often in the longer term) worse off than if all had cooperated.
This definition captures the Prisoner’s Dilemma, the Chicken Dilemma, and the Assurance Dilemma, and acknowledges that social dilemmas often involve a time dimension, depicting a conflict between short-term self-interest and longer-term collective interest. This basic definition underlies the numerous specific kinds of social dilemmas that we may face in everyday life, in our relationships, in the groups and organizations where we work, and as members of local or global communities. The Huizinge case is indeed just one of many, many cases.
Looking back, the history of scientific research on social dilemmas reveals sev-
eral noteworthy trends. One trend is that an increasing number of situations are
described and analyzed as social dilemmas. These include situations as diverse as
sacrifice in close relationships, citizenship in organizations, consumption of scarce
resources, donations to public goods such as public television, or efforts toward
peace-keeping in international relations. Although social dilemmas may sometimes be over-recognized, it is evident that they have tremendous intuitive appeal, in that they carry a strong sense of societal importance. This shows the breadth of social dilemmas.
Second, social dilemmas have been studied by scientists working in several dis-
ciplines, from anthropologists to biologists, from mathematicians to evolution-
ary scientists, and from psychologists to political scientists. Equally important,
we have seen that various scientific disciplines clearly have grown “toward each
other” such that there is much greater exchange of knowledge and tools (such as
research paradigms) that are very important to progress in the science of human
cooperation. For example, we have seen that a standardized form of the Public
Goods Dilemma (Fehr & Gächter, 2002) has been used now in numerous experi-
ments conducted by various scientists working in economics, evolutionary sci-
ence, psychology, and so on (for a meta-analytic review, see Balliet, Mulder, & Van
Lange, 2011). This trend illustrates the truly interdisciplinary relevance of social
dilemmas.
Third, we witness that theory (or science) and societal reality (or application)
increasingly go hand in hand, in that they inspire and influence one another in
quite fruitful ways. In the past years, we have seen a strong plea for interdisci-
plinary research (Gintis, 2007); we have seen a translation from basic theory to
societal application (e.g., Parks, Joireman, & Van Lange, 2013); as well as a strong
attempt to generalize basic knowledge of social dilemmas to different samples and
societies (Herrmann, Thöni, & Gächter, 2008; see also Balliet & Van Lange, 2013b).
We also see that prominent theories are now discussed in terms of applications.
For example, research in the traditions of interdependence theory and evolutionary theory is now being applied to various domains, such as environmental sustainability, donations and volunteering, and organizational behavior. More than anything else, this trend reveals that understanding social dilemmas matters.
Taken together, by recognizing social issues and societal challenges, by bridging fields and disciplines, and by bridging theory and application, we see a growing scientific field that is not only becoming more mature, but is also inspiring to an increasing number of scientists working in different fields and disciplines, and to professionals who face different social dilemmas in society and seek effective and efficient solutions. And thinking about the basic nature of human
cooperation, and the fact that it is addressed at the level of the individual all the
way to society, one may almost reach the conclusion that social dilemmas are on
the verge of becoming a new field of scientific inquiry, a field where social, bio-
logical, and behavioral scientists are working together with scientists in comple-
mentary fields, such as neuroscience, genetics, and culture. Although our book is
primarily focused on the psychology of human cooperation, as the title indicates,
it is also true that we hope to cover some of the central articles that address the
state of the art of social dilemma research. We will do so selectively, because it is a
virtually impossible task to recognize all the empirical contributions that scientists
outside of psychology have made in the history of social dilemmas.
2 History, Methods,
and Paradigms

Mixed-motive situations cut across many disciplines. Besides psychologists, economists, political scientists, biologists, mathematicians, and sociologists
are all interested in aspects of mixed-motive conflict. Economists focus on the extent to which, and why, people deviate from the rational choice of pure selfishness. Similarly, mathematicians look at how different forms of the conflict can
induce different patterns of responding. Political scientists see certain forms
of mixed-motive frameworks as useful models of arms races and international
conflicts, and in a similar way, sociologists apply the logic to social problems.
Biologists use the framework to explain how species that are hostile to each
other can nonetheless coexist (how is it, for example, that doves have not been
made extinct by one of their top predators, the hawk?).
Where, though, does the idea of mixed motives come from? The impression
one often gets is that Thomas Hobbes described something akin to a mixed-motive
problem in his 1651 work Leviathan, and then things lay dormant until von
Neumann and Morgenstern (1944) described, in mathematical terms, a game-like
problem that could induce very different patterns of behavior across respondents.
But where did the idea go in the intervening 300 years? Given the impact of
Hobbes, did nobody really find this aspect of his work to be thought-provoking?
Did social conditions improve so quickly in the 18th century that no one else saw
mixed-motive problems arising in daily life? What encouraged von Neumann and
Morgenstern to take up the problem?
The purpose of this chapter is to explain how the notion of a mixed motive
came to be, and show how researchers tackle mixed-motive problems in the labo-
ratory. There are many variants of the basic research paradigm, and we
cannot begin to cover them all. Instead, we will look at the major classes of para-
digms, and also give some attention to how dynamic mixed-motive tasks, or those
with a time component, are studied.

■ THE NOTION OF A “MIXED MOTIVE”

It can be argued that there are three key ideas underlying the general concept of a mixed motive: the desire to do well for oneself; the fact that one’s outcomes are partially influenced by the actions of others, as their outcomes are partially affected by our actions; and that doing wrong by others leaves one open to possible retaliation, if the interaction is ongoing. All three of these have been issues of long-standing interest among observers of human nature. Let us consider each in turn.
Craig Parks had primary responsibility for preparation of this chapter.


■ OUTCOME MAXIMIZATION

Classical ideas. Speculation on how humans are motivated by outcomes stretches back to the ancient Greeks. Epicurus (341–270 B.C.E.), Pyrrho (360–270 B.C.E.),
and Zeno (334–262 B.C.E.) all proposed philosophies of the role of outcomes in
human behavior. Epicurus and Pyrrho both believed that pleasure is the final
goal of an organism’s existence, and so all actions are directed toward realizing
pleasure and avoiding pain. They differed, however, on when the pleasurable
goal would be realized. Epicurus believed that people ought to be driven by
long-term goals, and perform actions now that would produce pleasure in the
long run. Importantly, these immediate actions may lead to short-term pain, but
Epicurus argued that this is tolerable if the pain must be experienced in order
to realize long-term pleasure. Thus, someone who seems not to be attempting
to avoid a painful experience is likely motivated by a long-term pleasurable
goal. Indeed, Epicureanism scorns those who pursue immediately pleasurable activities that would result in long-term pain.
Pyrrho’s skepticism takes much the opposite position. Pyrrho argued that one
cannot know whether what seems to be a causal relationship actually is one, and so
inferring that long-term pleasure can be realized by experiencing short-term pain
is foolish. Instead, skepticism indicates that one should live for the moment and
engage in activities that produce immediate reward. It is certainly possible that one
might experience negative outcomes in the future, but whether those outcomes
are the result of short-term pleasure, and whether they might not have occurred
if one had not pursued immediate pleasure, is impossible to know or predict. (In
fact, another tenet of skepticism is that logical reasoning is pointless, because one
can never isolate cause and effect relationships.) Thus, for Pyrrho the wise strategy
is to experience pleasure now, and the person who avoids that in hopes of a better
long-term outcome is setting oneself up for disappointment.
In contrast to Epicurus and Pyrrho, Zeno’s stoicism saw pleasure and pain as
insufficient reasons to engage in, or avoid, an action. For Stoics, actions should
be motivated by reason, virtue, and logic rather than pleasure seeking or pain
avoidance, with a goal of stability in the intensity of emotional reactions to daily
experiences. Ideally, one would never have a day in which things were incredibly
wonderful, or simply awful; rather, each day would produce mildly positive expe-
riences. Pleasurable phenomena are the root cause of emotional peaks and valleys,
because they are “morally indifferent,” in that they can provide both happy and
unhappy experiences: A person with power, for example, may very much enjoy
exercising it, but also may live in fear of being deposed by subordinates because
of it. Zeno argued that people respond to pleasurable stimuli with “passion,” or
impulse and false judgment, rather than reason, and as such do not see the moral
indifference of the stimulus. As a result, they can experience the highs of pleasure,
if the outcome is immediate, or longing, if the outcome is delayed, but they can
also experience the “evils” of distress (for immediate negative outcomes) or fear (if
the negative outcome is in the future). By contrast, arriving at the correct decision
by means of logical reasoning and acting in moderation leads to the experience
of “good passions” or eupatheiai. Thus, one experiences joy rather than pleasure,
caution rather than fear, and wishing rather than longing. (There is no eupathic
equivalent of distress.)
The Greeks, then, had three quite different positions on outcome maximiza-
tion. The Epicureans believed that the ideal strategy is to maximize outcomes in
the long run, even if this meant incurring short-term loss. The Skeptics believed
that one should maximize immediate outcomes, because there is no way to know
whether actions now will affect outcomes later. Finally, the Stoics believed that
one should strive for acceptable outcomes rather than maximal outcomes, because
phenomena that produce maximum gain also have the potential to produce maxi-
mum loss. As we will see in later chapters, elements of each of these ideas are often
observed in modern social behavior.
Modern ideas. Modern ideas on outcome maximization have largely been
grounded in the Epicurean tradition. Stoicism has had no impact on modern
thought, and Skepticism quickly became tangled up in questions about how a
skeptic can function in society—in its pure form, Skepticism prescribes that a per-
son who needs to cross the street should go now, regardless of traffic, because
there is no way to know whether stepping in front of a car will cause the person
death or injury—and today functions only as a guiding principle for the conduct
of research and scholarship (Groarke, 2008).
Modern thought is grounded in Jeremy Bentham’s (1748–1832) notion of utili-
tarianism. For Bentham, an action has utility if it has the tendency to promote a
maximum amount of “happiness,” defined as pleasure with corresponding absence
of pain. Determining what action to perform is the end result of hedonic calculus,
in that, if a person is trying to achieve a pleasurable outcome, s/he will select the
action that is most likely to produce an outcome of maximum intensity and dura-
tion; will be experienced as directly as possible; offers the best chance of being
followed by other pleasurable experiences; and is unlikely to be followed by pain.
Alternately, if the person must deal with painful outcomes, the favored action is
the one most likely to produce pain of minimal intensity and duration; can be
experienced indirectly; is unlikely to be followed by other pains; and is likely to be
followed by pleasure. Thus, Bentham saw people as trying to maximize pleasur-
able outcomes and minimize painful ones, and to some extent, also saw people
as forward-thinking, in that a behavior that produces not only a pleasurable out-
come, but holds the possibility of future pleasures, is more likely to be performed
than one that does not hold future promise.
Bentham’s intellectual successor, John Stuart Mill (1806–1873), attempted to
express Bentham’s idea within the context of the mind, and is typically seen as
having laid the groundwork for consideration of the psychology of outcome maxi-
mization. Mill’s contributions were twofold. First, he proposed the idea that people
come to develop associations between actions and outcomes. This leads to antici-
pation of pleasure by virtue of performing an action, and when the action-pleasure
relationship occurs, feelings of satisfaction result. An unexpected action-pleasure
experience will instead produce feelings of surprise, but will also lead to the begin-
nings of an expected associative relationship. Thus, an employee who unexpect-
edly wins a commendation because he works overtime on a project will expect to
be similarly commended the next time he works extra hours. Second, and perhaps
more importantly, Mill argued that there are qualitative distinctions among plea-
surable outcomes, and the more satisfying higher-order pleasures result from first
experiencing lower-order pleasures. This idea more distinctly develops the notion
that people will consider both short-term and long-term gains, and that long-term
gains will ultimately be more attractive than short-term gains.
A common misconception, and hence criticism, about the hedonic calculus is
that people are assumed to execute it before every decision. However, Bentham and
Mill were both clear that they held no such expectation (see Bentham, 1789/1970,
Chapter IV, Section VI, and Mill, 1861/1998, Chapter II, paragraph 19). From a
psychological perspective, the idea is more descriptive of why people attempt to
maximize positive outcomes rather than prescriptive of how one ought to decide
what to do so as to realize maximum benefit.
To summarize: the notion that people seek to maximize their own gain, and
minimize their own pain, has been a fundamental component of at least some
philosophies of human nature. While early views emphasized relatively straight-
forward tendencies toward seeking pleasure and avoiding pain, there seems to be
a gradual growth in the belief that people develop associations between actions
and outcomes and are able to adopt a longer-time perspective. In particular, more
modern theorists discussed the role of time horizon in pleasure motivation, ulti-
mately arguing that long-term pleasure ought to be the ultimate goal. As we will
see in later chapters, the extent to which people actually strive toward this goal is
debatable.

■ INTERDEPENDENCE

The idea that humans want to do well for themselves, then, has an ancient his-
tory. What about the notion that our actions affect others, as they affect us? It
too has been speculated on for centuries. Reference to the idea can be found
in Aristotle’s writings on eudaimonia. Eudaimonia is a Greek concept that has
no strict English equivalent, but is usually taken to refer to flourishing and an
objective assessment of life quality, as opposed to “happiness,” which is treated as a subjective assessment of quality. A key issue underlying the philosophy of
Aristotle’s time was how to achieve eudaimonia, and different schools of thought
had different opinions on this. For our purposes, Aristotle’s arguments alone are
noteworthy. He felt that eudaimonia was achieved not only by living up to one’s
abilities, but also by surrounding oneself with valuable “external goods,” a key
one being friends. Aristotle argued that such “goods” are critical for a good life
because they provide us with opportunities to apply our abilities. Quite simply, it is impossible to be virtuous if there is no one to express one’s virtue toward. This, then, is an early idea about interdependence: I benefit by behaving
virtuously toward you, and you benefit by behaving virtuously toward me. The
benefit is not an immediate reward but rather an intangible life experience; all the same, the idea remains that our outcomes are partially affected by others.
David Hume (1711–1776) is generally considered the first scholar to articulate
the dynamics of interdependence. Hume argued that people have what he called a
confined generosity: We are of course concerned with our own well-being, but we
also maintain some degree of concern for the well-being of others. This concern,
however, is not because of some natural benevolence, but rather a result of civiliza-
tion: By being part of society, one realizes that, for that society to persist, one has
to help ensure the survival of its members, including members who are not part
of one’s family. This may mean that we have to cooperate with people whom we
do not actually care about, and who do not care about us. Hume referred to this as
“artificial virtue.” He further argued that initial cooperative interactions with unre-
lated others will be cautious, and as one sees that positive outcomes emerge from
the exchange, the interactions will repeat and trust will build, leading to larger
acts of cooperation. One could argue (and some have argued) that Hume’s reason-
ing represents the first game-theoretic analysis of interdependence; regardless, his
basic logic remains at the foundation of most thought on human interaction.
Adam Smith (1723–1790) expanded upon Hume’s ideas. Smith argued that a
moral person has an innate desire to be approved of by others, and that we sym-
pathize with others by imagining how they must feel when they experience some-
thing. Because of our desire for approval, it follows that we will want to please
others and avoid offending them, and our ability to sympathize guides our choices
of actions that should bring approval. These ideas were expressed in Smith’s Theory
of Moral Sentiments (1759/2002). He is more popularly known for his other major
work, Wealth of Nations (1776/1976), still a seminal work in economics, and the
claim is frequently made that this book supersedes his writings on morality. In
fact, Smith saw the two works as complementary. Self-interest was, to Smith, an
example of “commercial virtue,” a more base virtue that emphasizes improvement
of one’s situation. As one strives for commercial virtue, the famous “invisible hand”
enters to improve the lot of others with whom one associates. In particular, in his
Wealth of Nations, he assumed that, for the most part, groups and societies are
well-functioning because individuals pursue their self-interest. As the well-known
quote states: “It is not from the benevolence of the butcher, the brewer, or the
baker, that we expect our dinner, but from their regard to their own interest. We
address ourselves, not to their humanity but to their self-love, and never talk to
them of our own necessities but of their advantages.” The assumption underly-
ing the invisible hand is that the pursuit of self-interest often has the unintended
consequence of enhancing collective interest. A further argument is that once the
social situation is indeed improved, citizens can turn their attention to higher, or
“noble,” virtues, the most prominent of which is the desire for approval. Smith’s
argument, then, is that while we have an innate desire to help others, we must help
ourselves first.
The common thread running through all of these positions is that interde-
pendence is ultimately functional. We need others to help us both survive and
maximize our potential. So, a person is most likely to survive and thrive if s/he
is good at working with others, and attending to their needs. Further, these phi-
losophers are quite optimistic. People either intuit that cooperation is important,
or are born with the ability to worry about how others feel. Not all philosophers
of human nature were as positive about humans as were Aristotle, Hume, and
Smith, however. Some took the view that cooperation can only be brought about
by force and threat. This was first, and most famously, articulated by Thomas
Hobbes (1588–1679) in Leviathan, a treatise on why central authority is necessary. Hobbes argued that, without such an authority, people will feel justified in doing whatever is necessary to safeguard themselves. Further, because
“whatever is necessary” could reasonably encompass any action or object, people
will come to feel they have the right to lay claim to anything. However, because
all people are basically the same mentally and physically, it will be impossible for
any one person to act with impunity, as there will always be someone else who is
strong and/or smart enough to oppose it. As such, people will have to battle for
safeguards, which leads to total warfare. To avoid this, citizens establish a social
contract under which they cede some freedoms to a central authority in exchange
for protection and maintenance of order. Thus, people do cooperate, but only
because they are forced to: Failure to cooperate with others will cause the author-
ity to mete out punishment.
Hobbes’ argument is today taken as an extreme viewpoint on the selfishness
of humans. A more moderate viewpoint is provided by the modern philosopher Herbert Morris. Morris (1976) argued that, while people usually exercise
self-restraint, they have difficulty doing so. When a group member relaxes his or
her self-restraint, he or she becomes a threat to the well-being of the group because
others will be tempted to do so as well. As such, society needs to put in place a
system of rules that reinforces self-restraint, and takes away the advantages gained
by an unrestrained individual. Relative to Hobbes, this is a more tempered, yet
still pessimistic, view of human nature; whereas Hobbes is saying that, without
authority, humanity will devolve into chaos, Morris suggests that, without rules,
people will try to control themselves, but will have increasing difficulty doing so,
and once some people become unrestrained, they will serve as models for others
to follow. Another contemporary philosopher, Rolf Sartorius, provided perhaps
the most measured argument for why cooperation can only be brought about
through enforcement. Sartorius (1975) argued that, within any class of behaviors,
there will be some that have no value to the group, and some that do have value.
However, humans are generally not good at distinguishing between the two, and
left to their own devices, people will frequently select behaviors that seem benefi-
cial but are not.
What we have, then, are two different perspectives on interdependence. The
Aristotelian perspective argues that humans know cooperation is a good thing
that ultimately benefits us all, and they will indeed cooperate when the conditions
are right. By contrast, the Hobbesian point of view sees humans as creatures that,
at best, would like to do right by others, but lack the skill and insight to do so. As
such, external forces that produce cooperation need to be introduced and main-
tained. The implications of these two perspectives are considerable. The former
suggests that people will be cooperative if we just give them the opportunity to
do so, while the latter implies that, left to their own devices, people will cooper-
ate only sporadically, or as likely, not at all. One does not have to be knowledge-
able about social dilemmas to realize that the steps one would take to maximize
cooperation by others will differ tremendously depending upon which perspective
one adopts. Throughout this book, we will see that the tension between these two
points of view persists today.

■ EARLY THOUGHTS ON MIXED MOTIVES

Consider the following problem:  A  businessman has contracts with three sup-
pliers. Supplier A  is owed $10,000, Supplier B is owed $20,000, and Supplier C
is owed $30,000. The businessman dies, and as no members of his family are
interested in taking over the business, it is going to be shut down. The three
suppliers need to be paid off, but the company has fewer assets than the $60,000
needed to pay the three suppliers in full. Though it is not yet clear exactly
how much the company has, the likely amount is either $10,000, $20,000, or
$30,000. Hence the problem:  How much should each supplier be paid given
each of these possible asset totals?
While this may seem like a modern problem, in fact this scenario is a variant of
the marriage contract problem (so called because the example involves three wives
who bring differential resources to their marriage to the same man) presented in
the Talmud, which is the compilation of law and tradition from ancient Babylon,
and serves as the basis for Jewish law. For our purposes, the marriage contract
problem is important because it is the first known example of the use of ideas that
relate to mixed motives, specifically, the idea that any one creditor’s outcome in
the problem above is affected by the other two. While each creditor would most
prefer to be paid in full, or to be paid as close to full as the assets allow, the likeli-
hood that that will happen is low, because of the presence of the other creditors,
who have equally good claims to the assets and are probably unwilling to walk
away empty-handed. Instead, each creditor is going to have to accept a payoff that
is less than maximum, so that each can get some money, and in fact, the Talmud
prescribed that, no matter what the total assets, each creditor must be paid some
amount. As we will see in chapter 4, the idea that your maximum payoff exists, but
is likely unattainable, speaks to the notion of “temptation” in an outcome matrix.

■ THE GAME OF LE HER

Another early example of recognition of the complexities of interdependence
was put forth in 1713 by Pierre de Montmort, discussing a colleague of his
named Waldegrave. Waldegrave was interested in the card game le her. In this
two-player game, each person is dealt one card face down, and the deck is
placed face down. Each may look at his or her card. Player A must then decide
to keep her card or swap it for Player B’s card. After A makes a choice, B must
then choose between keeping the card he currently holds or swapping it for the
top card on the deck. After B makes a choice, both players reveal their cards,
and the high card  wins.
While the rules are very simple, the idea underlying it is a nice example of
interdependence, in that each player’s outcomes are partially affected by the other.
If A elects to swap, then B’s actions are fully determined: B will swap with the deck
if A hands him a losing card (e.g., A holds a “2” and B holds a “4,”and A swaps
with B—B now knows with certainty that he holds the low card, and will trade
with the deck), and will not swap if A gives him a winning card (e.g., A holds a “5”
and B holds a “4,” and A swaps with B—B now knows with certainty that he holds
the high card). Thus, A has the ability to force B to perform a specific behavior.
B’s influence over A is more indirect. B cannot induce A to perform any specific
behavior, but B may be able to change A’s outcome, if swapping with the deck
delivers a winning card to B (e.g., A holds a “2,” B holds a “4,” A swaps with B, and
B in turn swaps with the deck and draws a “6”—A has gone from being the win-
ner to the loser and can do nothing to alter this). This is in contrast to most card
games, in which one succeeds or fails largely by virtue of card management or by
monitoring probabilities.
Waldegrave recognized the interdependence aspect of le her, and realized
that, while Player B’s decision is always easy, Player A’s decision is not. How does
A know when to swap? If she is holding a “2” or a King, the decision is clear, but
for any other card, both keeping and swapping have potential benefit and draw-
back. Waldegrave was thus motivated to identify a strategy that would maximize
the likelihood of ending the game with maximal winnings. His solution, called a
mixed strategy, was based on the idea that one should avoid the absolute rule “I
will always swap if my card is less than some threshold value, and always keep if
it is above that value.” Instead, one should take a probabilistic approach, and swap
with probability p if the card is less than threshold, and swap with probability 1 − p
if the card is above threshold. Mixed-choice strategy is a fundamental notion of
game theory, though it is less important for our discussion. For us, the Waldegrave
example is key because it represents the earliest example of someone pondering
how to make choices when another person has the ability to affect your outcomes.

■ BOREL AND VON NEUMANN

After the appearance of the Waldegrave problem, much work was done on
mixed-motive-type situations, but this work was almost exclusively mathemati-
cal in nature, oriented around derivation of probabilities of various outcomes,
and proofs of theorems. It was not until the 1920s that theorists began to spec-
ulate on the role of psychological variables in mixed-motive choice, and that
speculation was initiated by a mathematician, Emile Borel. Borel was interested
in the game of poker. He recognized that the game is a situation of imper-
fect information—unless one is cheating, one knows only the content of one’s
own hand. A  skilled opponent can take advantage of this by bluffing, which in
turn should lead to second-guessing of one’s strategy. Borel saw that these basic
features characterize a host of other situations (for example, a dictator could
bluff about how many missiles his military holds, and verification of the true
size of his arsenal could well be impossible to accomplish). Borel wondered
if a strategy could be devised that would maximize one’s chances of winning
even in the face of such uncertainty and trickery. He rather quickly concluded
that one could not, and by 1928 he had moved away from the problem, but
he was apparently the first to realize that mixed-motive choice is affected
by psychological factors as well as sheer strategy. Indeed, he contributed, in
1938, a chapter to a volume devoted to applications of findings from games
of chance entitled "Jeux où la Psychologie Joue un Rôle Fondamental" (Games
Where Psychology Plays a Fundamental Role), and late in his life he was credited
within economics as being the first to bring psychology into the study of mixed
motives (Frechet,  1953).

■ THE PRISONER’S DILEMMA

Borel’s ideas were of interest to another mathematician, John von Neumann, who
believed that it was in fact possible to develop a choice strategy in the face of uncer-
tainty. He published on the problem in 1928, and then returned to it in the early
1940s. Interestingly, it is unclear what motivated von Neumann to resume working
on game theory. He was quite interested in computational logic, rule-based axi-
oms, and the notion of the brain as a calculator, and he was convinced that quan-
tum mechanics could model social phenomena. Oskar Morgenstern had a similar
conviction, and suggested that economic behavior would be an excellent test case
for their ideas. (See Mirowski, 1992, for a complete discussion of von Neumann’s
interests.) In 1944 von Neumann and Morgenstern published a book-length treat-
ment of their ideas entitled Theory of Games and Economic Behavior in which
they laid out the basic notions underlying game theory. Their particular goal was
to provide a set of axioms that would spell out, mathematically, what one should
expect to occur when people are engaged in a mixed-motive task.
Formal tests of their propositions began in 1950 with the work of two mathema-
ticians, Merrill Flood and Melvin Dresher. They believed that game theory could
be used to model international conflict, and devised a simple task for observing
individual behavior in a mixed-motive situation (De Herdt, 2003; Flood, 1952).
They invited two other researchers, John Williams and Armen Alchian, to play
100 rounds of a decision-making game. The players were presented the out-
come matrix shown in Figure 2.1, with each cell listing Alchian's payoff first and
Williams' payoff second:

                        Williams
                     1             2
  Alchian   1    (–1, 2)       (0.5, 1)
            2    (0, 0.5)      (1, –1)

Figure 2.1 Flood and Dresher's original payoff matrix.

On each round, each of them was to choose between (1) and (2). They would
not be allowed to interact, though they would be informed after each trial of the
other's choice, and the resulting payoff to each person. In the long run, the best
combination of choices for both players is (Alchian, 1; Williams, 2), as after 100
trials, the greatest combined payoff would have been issued to the duo: 50 points
for Alchian, and 100 for Williams, for a total of 150 points. By contrast, the game
theory equilibrium prediction (Nash, 1950) is that Alchian will consistently select
(2), because no matter what Williams does, the outcome will be better than if
Alchian selects (1), and Williams will select (1) for the same reason. Thus, they
should always end up in the (2, 1) cell, and after 100 trials Alchian would have a
total of 0, and Williams 50, with 50 total points paid out. In fact, the (2, 1) com-
bination rarely happened, and the pair usually ended up in the (1, 2) cell. More
specifically, Williams chose (2)  78 times, and Alchian chose (1)  68 times, with
Alchian being less cooperative because he was unhappy with his outcomes being
of lesser value than Williams’.
Flood and Dresher’s study provoked much interest, but its context-free nature
raised questions about how easily the task could be understood, and how lay-
people (Williams and Alchian were a mathematician and economist respectively)
would respond to the game. As such, in 1950, during a presentation to the Stanford
University psychology department, mathematician Albert Tucker added a context
story. He suggested that the Flood-Dresher matrix paralleled a situation in which
two prisoners are separated and independently confronted with a request to con-
fess to a crime. If neither confesses, the prosecution will seek a tough sentence in
court; if each confesses, the resultant plea bargain will produce a lesser sentence
for each; but if only one confesses, it will be assumed that he alone committed the
crime, which demands a harsh sentence, while the non-confessor will go free. In
matrix form, with the outcomes being number of years in prison, this can be rep-
resented in the way shown in Figure 2.2:
As a result of Tucker’s cover story, this basic structure came to be called the
“Prisoner’s Dilemma Game,” or PDG for short.
The dynamics of the Prisoner’s Dilemma are deceptively simple. Because there
is no interaction between the prisoners, overt coordination of choices is impos-
sible. Each player has to try to infer what the other will do. The inference process
can lead to what seems an obvious conclusion. If Prisoner B confesses, Prisoner
A will receive 1 year in prison if he also confesses, and will go free if he does not
confess. Clearly here it would be better to not confess. Similarly, if B does not

Prisoner B
Confess Not confess

Prisoner A Confess 1 yr in prison Go free

1 yr in prison 3 yrs in prison

Not 3 yrs in prison 2 yrs in prison


confess

Go free 2 yrs in prison

Figure 2.2 Tucker’s “Prisoner’s Dilemma” matrix.


History, Methods, and Paradigms ■ 23

confess, A will get 2 years if he also does not confess, and 3 years if she does con-
fess. Two years is more desirable than 3 years, so if B does not confess, it is better
for A  to not confess. Note the general pattern:  Regardless of what B does, not
confessing produces the better outcome for A. We can say, then, that not confess-
ing dominates confessing, and we would expect A to not confess. But therein lies
the dilemma—this same logic also applies for B. This means that each prisoner
will choose to not confess, which means each will get 2 years in prison, which is a
worse outcome than would have resulted if each had confessed.
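This dominance reasoning can be checked mechanically. In the Python sketch below, the Figure 2.2 sentences are coded as years in prison (0 for going free), so lower is better and best replies are found by minimizing:

```python
# Figure 2.2 coded as years in prison (0 = go free); lower is better.
# Keys are (A's choice, B's choice); "C" = confess, "N" = not confess.
years = {
    ("C", "C"): (1, 1),
    ("C", "N"): (3, 0),
    ("N", "C"): (0, 3),
    ("N", "N"): (2, 2),
}

# Whatever B does, A serves fewer years by not confessing
for b in ("C", "N"):
    best_a = min(("C", "N"), key=lambda a: years[(a, b)][0])
    print(f"If B plays {b}, A's best reply is {best_a}")  # "N" both times

# Yet mutual non-confession (2 years each) is worse than
# mutual confession (1 year each)
print(years[("N", "N")], years[("C", "C")])
```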
Perhaps we go one step further and assume each prisoner is insightful and
discovers this conflict. Each one should then conclude that confessing is the bet-
ter choice. But note now what arises: If B expects A to confess, then B could take
advantage of this and opt to not confess. This would give A 3 years in prison, and
would set B free. Surely B will fall prey to this temptation. But the same temptation
exists for A, which means A also would ultimately not confess, and we are right
back where we started. It is this dynamic that has attracted so many researchers
to the Prisoner’s Dilemma as a research tool. In 1957, Duncan Luce and Howard
Raiffa produced a nontechnical overview of the Prisoner’s Dilemma and discussed
its potential application to a variety of problems, and their work opened the door
for researchers in a number of disciplines—psychology, sociology, political science,
and economics, to name just some—to use the game as a tool for studying a variety
of real problems. As a result, studies using the Prisoner’s Dilemma grew rapidly, and
within a short time literally hundreds of papers on the paradigm were published.
Within the PDG matrix, confession is more generally the cooperative choice,
and failure to confess the non-cooperative choice. As well, the outcomes are typi-
cally represented by their motivational properties, as shown in Figure 2.3:
                               Player B
                        Cooperate         Not cooperate
Player A  Cooperate     A: R / B: R       A: S / B: T
          Not cooperate A: T / B: S       A: P / B: P

Figure 2.3 The general structure of a prisoner's dilemma.

Here, "T" is the Temptation outcome, because it tempts each player to try to
receive it; "R" is the Reward for mutual cooperation; "P" is the Punishment for not
mutually cooperating; and "S" is the Sucker outcome, resulting from a failed attempt
at mutual cooperation. In a Prisoner's Dilemma, these outcomes will order as
T > R > P > S, and twice Reward will be larger than Temptation plus Sucker (or for-
mally, 2R > T + S). This latter condition is necessary so that simple alternation
between cooperating and not cooperating is less lucrative over the long run than
repeated joint cooperation. The outcome values can also be used to quantify the
degree of cooperativeness or temptation in the payoff matrix. The K index (Rapoport,
1967) ranges from 0 to 1.00, with higher values reflecting a greater degree of coop-
erativeness, and lower values a greater degree of temptation. It is calculated as
K = (R − P) / (T − S)

If the K value is large, the interpretation is that there is not a great outcome
advantage to pursuing Temptation; in other words, the relative difference between
Reward and Temptation is not that large. By contrast, a small K value indicates that
there is a considerable relative difference, and Temptation will be very attractive.
All else being equal, we expect the likelihood of cooperation to increase as the
K index goes up.
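Computing the K index is straightforward once the four payoffs are known. The Python sketch below uses illustrative payoff values, not values taken from any study discussed here:

```python
def k_index(T, R, P, S):
    """Rapoport's (1967) K index of cooperativeness: K = (R - P) / (T - S)."""
    # Both defining conditions of a Prisoner's Dilemma must hold
    assert T > R > P > S and 2 * R > T + S, "not a Prisoner's Dilemma"
    return (R - P) / (T - S)

# Illustrative payoffs: T = 5, R = 3, P = 1, S = 0
print(k_index(5, 3, 1, 0))  # 0.4
```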

■ VARIANTS OF THE PRISONER’S DILEMMA

Chicken and Assurance. There are an enormous number of variations on the
Prisoner's Dilemma, most of which involve either some rearrangement of the
cells in which the T, R, P, and S values appear, or alteration in how choices
are made. Rapoport and Chammah (1965) cataloged many of these variations.
Two such dilemmas are the Chicken and Assurance dilemmas, which we also
discussed and illustrated in Chapter  1. For our purposes, a brief discussion
in terms of T, R, P, and S will suffice. In the Chicken Game, Punishment and
Sucker are switched. This mimics the situation of two cars racing toward each
other, each driver daring the other to swerve first. If one driver swerves and
the other does not, the former will be embarrassed and the latter will be lauded
as brave, but being embarrassed is better than the outcome if neither driver
swerves. In this situation, swerving is the cooperative behavior, and not swerv-
ing the non-cooperative behavior. Many political scientists believe Chicken is
the best format for modeling arms races (Brams,  1985).
In the Assurance Game (sometimes called the Stag Hunt Game, or Trust
Dilemma, Kelley et al., 2003), Temptation and Reward are exchanged. On the sur-
face, this might seem an uninteresting game, since mutual cooperation produces
the best personal outcome, but when applied to real social situations, it is often
assumed that cooperation is more difficult than non-cooperation. Thus, while the
best outcome is achieved through mutual cooperation, it is also the case that it
is harder to execute mutual cooperation than mutual non-cooperation, and par-
ticipants may opt for a lesser payoff in exchange for ease of action (Taylor, 1987).
Consider, for example, a community vegetable garden. If everyone were to help
pull weeds from the garden, then everyone could grow a large number of plants,
which is the best outcome. However, weeding is difficult, and if no one pulls weeds,
the community could still grow some plants. Finally, if some people pull weeds
and others do not, the goal of being weed-free will not be accomplished, because
the weeders cannot keep up with weed growth, and they will also not have the time
to tend to their own plants. Thus, the weeders get neither a weed-free plot of land
nor their own vegetables, and the non-weeders get to grow some plants, which is
not optimal but is acceptable.
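The three games discussed so far differ only in how T, R, P, and S are ordered. A brief Python sketch, with illustrative placeholder values chosen to satisfy each ordering, makes the contrast explicit:

```python
# The three games differ only in the ordering of the four outcomes.
# Numeric values are illustrative placeholders chosen to satisfy each ordering.
games = {
    "Prisoner's Dilemma": {"T": 5, "R": 3, "P": 1, "S": 0},  # T > R > P > S
    "Chicken":            {"T": 5, "R": 3, "P": 0, "S": 1},  # P and S switched
    "Assurance":          {"T": 3, "R": 5, "P": 1, "S": 0},  # T and R switched
}
for name, g in games.items():
    ordering = " > ".join(sorted(g, key=g.get, reverse=True))
    print(f"{name}: {ordering}")
```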
Ultimatum Game and Dictator Game. There are also PDG variants in which
the nature of choice is altered. Of these, perhaps the two most popular are the
Ultimatum Game and the Dictator Game. In the Ultimatum Game, one person is
allotted an amount of resources and is told to divide the resources between herself
and another person. The division is then presented to the other person, who must
accept or reject it—no negotiation is allowed. If rejected, the resources disappear,
and neither person gets anything. The allocator’s outcomes are thus affected by
the recipient, and each is partially dependent upon the other. For the allocator,
the decision requires determining how much one can safely keep without looking
so unfair that the recipient will accept no payoff in order to punish the allocator.
A variant of the Ultimatum Game is the Dictator Game. Here, the recipient has
no choice—he must accept whatever the allocator provides. Because the recipient
cannot act, the Dictator Game is not technically a social dilemma, but it is none-
theless useful for thinking about social dilemmas, because the obvious choice—
keep everything, and force the recipient to take nothing—rarely actually occurs.
This allows us to ask questions about the role of variables like fairness and inclu-
siveness in social dilemma behavior.
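The payoff structures of the two games can be written out in a few lines. In the Python sketch below, only the division rules described above are assumed; the amounts are illustrative:

```python
def ultimatum(total, offer, accept):
    """Allocator keeps (total - offer); a rejection wipes out both payoffs."""
    return (total - offer, offer) if accept else (0, 0)

def dictator(total, offer):
    """The recipient has no move, so the proposed split simply stands."""
    return (total - offer, offer)

print(ultimatum(10, 2, accept=False))  # (0, 0): the recipient punishes an unfair split
print(ultimatum(10, 5, accept=True))   # (5, 5)
print(dictator(10, 0))                 # (10, 0): possible, yet rarely observed
```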

■ CRITICISMS OF THE PRISONER’S DILEMMA

It is perhaps ironic that, despite Tucker’s attempt to give the Prisoner’s Dilemma
some realism, criticism of the PDG quickly centered around its supposed lack
of correspondence with real-world situations. Nemeth (1972) raised the first
substantive criticisms, arguing that few social situations present a person with
just one interaction partner, only two choices, well-defined outcomes, and full
information about the other person’s potential outcomes. This criticism was not
shared by all disciplines—political scientists, for example, tend to believe that
the basic PDG provides a good approximation of arms races between coun-
tries—but within psychology, Nemeth’s critique had an impact. The n-person
Prisoner’s Dilemma (Hamburger, 1973), which expands the number of partici-
pants, helped somewhat to alleviate concerns about artificiality, but the larger
questions about the range and nature of choices one can make remained.
It is important to note that, despite the prevalence of social dilemmas in soci-
ety, studying behavior in actual, real-time social dilemmas is difficult. Often their
scale is just too large for a researcher to manage. Consider, for example, the efforts
required to complete an actual field study, such as van Vugt and colleagues’ (van
Vugt, Van Lange, Meertens, & Joireman, 1996) investigation of usage of carpool
lanes. In order to find out whether drivers would even be willing to consider using
a new carpool lane, the researchers had to, during rush hour, wait and approach
drivers who had stopped at a gas station located on the highway on which the
carpool lane had been installed, and ask whether the driver would be willing to
complete a survey; identify another stretch of a highway that had rush-hour use
comparable to that of the tested highway, but did not have a carpool lane, and
was far enough away from the targeted highway that the likelihood of a driver
regularly using both highways was near zero; travel to that second highway, which
was about 100 miles removed from the targeted highway; set up at a gas station on
this second highway, and approach rush-hour drivers there with the survey; and
then mail off a second survey to all drivers who completed and returned the first
one. All of this was for a study that involved no manipulations introduced by the
researchers, and no long-term monitoring of drivers.
What should be clear is that a response to the artificiality issue that merely
shifts the research venue outside of the lab, to take advantage of real social dilem-
mas, is far more challenging than it might first seem. Because of this, Nemeth’s
challenge inspired theorists to devise some more complex research paradigms
that can be executed in the laboratory. We will now take a look at two such para-
digms—give-some games, and take-some games.

■ GIVE-SOME GAMES AND PUBLIC GOOD DILEMMAS

A give-some game is a dilemma in which each participant possesses some
resources that are needed to provide an entity that all group members may use. The
dilemma lies in the fact that any person can access the entity regardless of whether
she contributed resources toward its existence. Because of this, the personal-best
strategy is to give nothing, let others pay for the entity, and then take advantage
of their efforts. (Such behavior is termed “free riding.”) This is analogous to the
Temptation payoff in a Prisoner’s Dilemma. And, as with the PDG, the dilemma
lies in the fact that all others are equally tempted, and if all follow the tempta-
tion, then no one will contribute anything, the entity will not be provided, and
everyone is worse off than if everyone has contributed. A classic example of such a
situation is public television. It subsists largely on donations, but anyone can watch
its programs, so there is no personal reason to give the station anything. But of
course, if this occurs, the station gets no money and cannot broadcast. All viewers
suffer as a result.
Olson (1965) was the first to describe the give-some dilemma. He referred to
the problem as a “logic of collective action.” Olson demonstrated that collective
action problems are very much like the Prisoner’s Dilemma. If one gives and the
entity is provided, all is well, but if one gives and the entity is not provided, the per-
son loses everything: The resource is gone, with nothing to show for it (presuming
the resource cannot simply be returned to the contributor). By contrast, if one
does not give and the entity is not provided, then the status quo is maintained, but
if one does not give and the entity is provided, the best possible situation occurs,
as one gets to enjoy the entity for free. Olson argued that keeping thus dominates
giving, and therefore people will not give toward such entities (which he termed
public goods), and will instead free-ride. As will be seen in later chapters, the public
goods paradigm has itself spurred much research, and the notion of free-riding
has become a standard concept in various lines of research, even in popular lan-
guage. That said, although free-riding is tempting, considerable research reveals
that quite a few people are able to resist that temptation across several specific
public good dilemmas.
Give-some games can be broken into two types, depending on what is needed
to provide the entity. A  step-level public good is one for which a certain mini-
mum total contribution must be received, at which point the entity is provided in
entirety. If the minimum is not reached, the entity does not exist. An example of a
step-level good is a bridge. Consider a pedestrian bridge in a park, with the bridge
being paid for through fundraising. Of course, it can be used by anyone who visits
the park. It does not make sense to build half of a bridge, so if only enough dona-
tions accumulate to pay for half of the bridge, it will not be built.
It is important to note that a step-level public good is technically not a social
dilemma, because if the decider is the final person needed to make a donation, it
is better for him to contribute than not contribute. Imagine that just $100 more
is needed to build the pedestrian bridge. If a citizen is in a position to give $100,
he should do so, because then the bridge will be built; if he does not give, it will
not be built. (Such a person is referred to as a critical contributor.) This violates
the strict tenet of a social dilemma that non-cooperation always produces a bet-
ter personal outcome than cooperation, though it does not prevent the step-level
public goods paradigm from being a popular research tool. In fact, the choices
and outcomes associated with a step-level public good can be represented in a
matrix, much like the Prisoner's Dilemma. Figure 2.4 presents an example of a
five-person step-level public good, in which three contributors are needed in
order to provide the good:
If we assume that the good is more valuable than the resource (which is a safe
assumption, because presumably people would not pay more than something is
worth to them—no reasonable person, for example, would offer $50.00 for a pack-
age of gum), then we can see the violation of the social dilemma requirement
when there are two other givers: The good is more valuable than the resource, so
at that point it is better personally to give than to keep.
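The step-level provision rule just described can be expressed as a simple payoff function. In the Python sketch below, the resource and good values are illustrative placeholders; the only requirement is that the good be worth more than the contributed resource:

```python
def step_level_payoff(i_give, n_others_give, threshold=3, resource=1.0, good=2.0):
    """Payoff to one member of a five-person step-level public good.

    The resource/good values are placeholders; the good is assumed to be
    worth more than the contributed resource."""
    provided = n_others_give + (1 if i_give else 0) >= threshold
    kept = 0.0 if i_give else resource
    return (good if provided else 0.0) + kept

# With two other givers the decider is a critical contributor: giving beats keeping
print(step_level_payoff(True, 2), step_level_payoff(False, 2))  # 2.0 1.0
# With three other givers the good is provided anyway, so keeping is better
print(step_level_payoff(True, 3), step_level_payoff(False, 3))  # 2.0 3.0
```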
                 Number of others who give
             0          1          2          3                  4
Give     Nothing    Nothing    Good       Good               Good
Keep     Resource   Resource   Resource   Good + Resource    Good + Resource

Figure 2.4 A step-level public goods matrix.

The other type of good is a continuous public good. This can be provided in any
amount, depending upon the total amount of contribution. A playground is a type of
continuous good. A small amount of money can build a small playground; as dona-
tions increase, better-quality equipment can be purchased, and the size of the play-
ground can be expanded. Technically, continuous public goods are almost always
a specific form of step-level goods, because there is likely some minimal amount
that has to accrue before anything can be done. For example, before the playground
can be built, we must accumulate enough money to buy one piece of equipment.
However, this minimal amount is usually so low that it will be achieved with trivial
effort. Because of their continuous nature, a continuous public good is not easily rep-
resented by a choice/outcome matrix. When used in research, the investigator will
typically feed back to group members how much total contribution was received, and
what that total amount purchases for them. A commonly-used research paradigm
for studying such goods is the coin exchange paradigm (Van Lange, Klapwijk, & Van
Munster, 2011). In this task, each of two people begins with a number of coins that
have value to the person. Each person is then given the option of giving some num-
ber of their coins (including the entire amount held) to the other person, with each
contributed coin having double value for the other person. For example, each person
might hold 10 coins that each have a worth of 25 cents to him or herself, but 50 cents
to the other person. Exchange decisions are made simultaneously, so that one person
cannot simply react to the allocation made by the other person.
It is not hard to see how the coin exchange paradigm parallels the PDG. If each
person gives all coins to the other, each will end up with a payoff that is double
what would have been received if each had kept all of their coins. In our example,
each person would earn $2.50 from keeping all coins, and $5.00 from exchanging
all coins. However, the best personal payoff is realized by keeping all coins and
having the other person give all coins (in our example, $2.50 + $5.00 = $7.50), and
the worst (0) occurs when a person gives all coins and receives none. There is thus
an incentive to keep one’s coins.
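The coin exchange payoffs can be computed directly. The Python sketch below uses the values from the example above (10 coins, each worth 25 cents to its holder and 50 cents to the other person):

```python
def coin_exchange(gives_a, gives_b, coins=10, own_value=0.25, other_value=0.50):
    """Payoffs (to A, to B) when each player gives away some of their coins;
    a kept coin is worth own_value to its holder, and a given coin is worth
    other_value to the receiver."""
    payoff_a = (coins - gives_a) * own_value + gives_b * other_value
    payoff_b = (coins - gives_b) * own_value + gives_a * other_value
    return payoff_a, payoff_b

print(coin_exchange(10, 10))  # (5.0, 5.0): mutual full exchange
print(coin_exchange(0, 0))    # (2.5, 2.5): mutual keeping
print(coin_exchange(0, 10))   # (7.5, 0.0): the best and worst personal payoffs
```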

■ TAKE-SOME GAMES AND RESOURCE DILEMMAS

The other major variant of the Prisoner’s Dilemma is the take-some game.
Under this paradigm, people begin with access to a resource pool of limited
size. Each group member can sample from the resource up to some limit. If
the total of all requests is less than total pool size, each person is granted his
or her request, but if total requests exceed pool size, no one receives anything.
Often the choice is iterated; if this is the case, then after all requests have been
granted, the pool is partially replenished at some rate (e.g., 20% of the remain-
ing pool size) before the next round of choice begins. In the iterated case, the
trials will typically continue until either a stopping point is reached, or the
pool has been exhausted. As well, it is common with this paradigm to withhold
some information about the situation—the current size of the pool, the specific
requests of others, the replenishment rate, and/or the amount replenished are
all often omitted from the feedback given to group members. These omissions
are designed to enhance the fidelity of the paradigm to real resource manage-
ment problems. Consider, for example, a water table. This resource paradigm
well matches what water users do—each household has a limit to how much
water can be drawn at once; rain and snow partially replace the drawn water;
the table can go dry—and it is rare for water users to know, or even approxi-
mate, how large the table is at any given moment, how much rain and snow
flow back into the table, or how much water each other household is drawing.
Let us demonstrate how an experimental resource dilemma works. Imagine
that five people have access to a resource that begins with 500 units. Each person
can take up to 20 units per turn. After everyone has sampled from the resource,
10% of the remaining units are added back in, and the sampling/replenishment
process repeats. Assume that on the first turn, the five people take 20, 20, 15, 8,
History, Methods, and Paradigms ■ 29

and 7 units respectively, for a total harvest of 70 units. Thus, after everyone has
sampled, the resource has
500 – 70 = 430
units remaining. We now need to add 10% of the remaining pool size, or 43
units, back into the pool:
430 + 43 = 473
So the next round will begin with 473 units available. Let us assume that dur-
ing the second round, the five group members take 20, 20, 18, 15, and 11 units,
for a total harvest of 84 units. This reduces the resource to
473 – 84 = 389
units remaining. We add 10% of 389, or 39 units, back into the pool:
389 + 39 = 428
The third round thus begins with 428 units. This process continues until either
the time limit for the experimental session is reached, or the pool gets so low
that it is impossible to fill all potential harvests from group members. In this case,
since we have five people who can each take up to 20 units, we need at least 100
units in the pool. Less than this, and it is possible that someone will not be able
to receive all that she requests. Should that happen, the session ends immediately.
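The bookkeeping in this example can be written as a short simulation. This is our own sketch, not a standard paradigm implementation: the function name is ours, and the 100-unit stopping floor follows the example above (five people who can each take up to 20 units).

```python
def run_resource_dilemma(pool, rounds_of_requests, replenish_rate=0.10, floor=100):
    """Apply harvest-then-replenish rounds to a shared resource pool.

    Stops once the pool drops below `floor`, the total that could be
    requested at once (here, 5 people x 20 units = 100). Returns the
    pool size at the start of each new round."""
    history = []
    for requests in rounds_of_requests:
        pool -= sum(requests)                 # everyone harvests
        pool += round(pool * replenish_rate)  # partial replenishment
        history.append(pool)
        if pool < floor:                      # a request might go unfilled
            break
    return history

# The two rounds worked through above:
# run_resource_dilemma(500, [[20, 20, 15, 8, 7], [20, 20, 18, 15, 11]])
# -> [473, 428]
```

Running the call shown in the comment reproduces the 473- and 428-unit pool sizes computed step by step in the text.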
Development of the take-some game was largely inspired by Hardin’s (1968) work
on the Tragedy of the Commons. Hardin described a field available to multiple
farmers in which cows are allowed to graze. It is in each farmer’s best interests to
put all of his cows into the field, but if all farmers do so, the grass will be eaten
so quickly that the field will cease to be useful as a feeding spot, and in the long
run, they will all be worse off, because there will be no nearby place to graze
their animals. The best long-term strategy is for each farmer to put just enough
cows in the field so that the grass grows back in one part of the field at the same
rate that it is being consumed in another part of the field. This means that each
farmer must also find a less convenient place to feed his remaining cows, but this
is the price that must be paid to keep the field useful for the long run. Hardin’s
“tragedy” was that the farmers would not recognize this long-term strategy, and
would instead destroy the field by pursuing the immediate gain.
A modern-day analogy to Hardin’s story is treatment of the Brazilian rain for-
ests. Large swaths of the forest have been cleared to make room for agriculture,
but because the land has supported trees rather than crops, it has only short-term
farming value, because the nutrients are used up quickly. For this reason, new
tracts of land must be continually cleared. If the rain forest area is indeed needed
for agriculture, the best long-term solution is to clear only a small patch of land,
farm it as long as possible and grow the rest of the crops elsewhere, and let the
rest of the land stay forested. When the cleared land exhausts its usefulness, a new
small patch can be cleared, and the old patch will be reforested by neighboring
trees. By the time the farmers need to reuse the first patch of land, the new trees
will have returned nutrients to the soil, and the land will again be crop-friendly.
30 ■ Introduction to Social Dilemmas

However, the temptation exists to just deforest a huge area and plant all crops at
once. This is easier than farming two locations, but once the large cleared area
is used up, there are no nearby trees to reforest it, and in the long run the land
becomes useless.
As with Olson’s public goods problem, there are parallels between the resource
dilemma and Prisoner’s Dilemma, in that trying to achieve the personal best out-
come leads to poor outcomes over the long term. It differs from the public good in
that there is immediate gain: At least in the early life of the resource, each person
gets what he or she wants. This difference is what distinguishes a social trap from a
social fence (Cross & Guyer, 1980). In a trap, there is immediate gain and long-term
loss, and in a fence, there is immediate loss and long-term gain. A take-some
dilemma is a trap, and a give-some dilemma is a fence. This distinction is impor-
tant because, though structurally the take-some and give-some games are similar,
it suggests that there should be perceptual and psychological differences between
the two situations. It is for this reason that social dilemma researchers treat the two
paradigms separately, and need to test whether a phenomenon that occurs under
one type of dilemma occurs under the other. We would never simply assume that
a behavior or perception that occurs with one type of dilemma necessarily occurs
under the other (for an excellent illustration, see Van Dijk & Wilke, 2000).

■ STATIC VERSUS DYNAMIC PARADIGMS

In discussing the resource dilemma, we mentioned in passing that researchers
often have group members make a sequence of decisions, with the pool
being partially replenished before each choice. It is also possible for both the
Prisoner’s Dilemma and public goods task to involve multiple decision trials.
In fact, a focus of some research has been whether behaviors that occur under
single-trial tasks also happen when there are multiple trials. This feature of the
research paradigm speaks to static versus dynamic modeling of social dilemma
choice: A static choice is a single decision, whereas dynamic choice unfolds over
time. Examples of both can be found in real social dilemmas. Water usage is
clearly dynamic: We sample from our water source every day, and in more
arid climates, citizens may get messages indicating that the resource is imper-
iled and that they should, at least temporarily (and perhaps under threat of
sanctions), change their behaviors. This is the nature of dynamism—behavior
is potentially fluid, malleable, and adaptable. By contrast, contribution to a
charity is a type of static choice. When we see a Salvation Army kettle outside
of a store at Christmas-time, we have to decide whether we want to help the
Salvation Army or not. It is a one-time decision. While we might place money
into the kettle on multiple visits, this does not constitute dynamic choice—the
decision was whether to give, and later deposits are simply an increasing of the
original donation. Thus, both static and dynamic social dilemma decisions are
important to understand.
Analysis of static decisions has long been the standard within social dilemma
research. There are many studies of dynamic decisions, but understanding exactly
what is going on has been a challenge, and a number of social dilemma theorists
(e.g., Komorita & Parks, 1994, 1995; Messick & Liebrand, 1995; see also Kenrick,
Li, & Butner, 2003) have called for more careful study of the process by which peo-
ple alter their decisions as the dilemma progresses. There are two primary chal-
lenges to conducting such studies. First, the choice revision process may unfold
over a longer period than can be captured in a typical one- or two-hour labora-
tory research session. In response to this, computer simulation using agent-based
modeling has become an increasingly popular tool among social dilemma theo-
rists (e.g., Messick & Liebrand, 1995). Basically, agent-based modeling attempts
to estimate the actions of each of a large number of interdependent individuals,
with the assumptions that each person is adaptive, can reflect on past experi-
ences, has the ability to render a choice without interference from others, and
favors simple rules for governing choice. Each of these factors can be captured in
a probability-based algorithm, and once programmed, these algorithms can then
be run, and the patterns of estimated choices, often over a very long series of trials,
are output and analyzed. (See Macy & Willer, 2002, for a complete discussion of
execution of agent-based modeling simulations.) While one can quarrel with some
of the underlying assumptions—one can imagine, for example, how certain people
might be inflexible rather than adaptive, preferring to settle on one choice strat-
egy and apply it without exception, or that particular people might have complex,
even convoluted, rules for deciding what to do—it is still the case that agent-based
modeling can provide baseline estimates of what could happen in long-term situa-
tions. And as with any computer simulation, it becomes important to collect actual
data, contrast the results against the simulation results, and then address the ques-
tion of why there are deviations between the modeled and actual results. Given
the logistic challenges of getting real data from a large group over a long stretch
of time, agent-based modeling at present represents our strongest tool for at least
formulating some ideas of what happens in such situations.
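The flavor of such simulations can be conveyed with a deliberately minimal sketch. This is in the spirit of, but much simpler than, Messick and Liebrand's (1995) actual models: the function name, the single imitation-style rule ("nudge your cooperation probability toward the rate you just observed in others"), and all parameter values are our own assumptions for illustration.

```python
import random

def simulate_cooperation(n_agents=50, n_rounds=200, learning_rate=0.1, seed=42):
    """Agent-based sketch: each agent cooperates with probability p and,
    after every round, adjusts p toward the cooperation rate it observed
    among the other agents (adaptive, experience-based, one simple rule)."""
    rng = random.Random(seed)
    probs = [rng.random() for _ in range(n_agents)]
    rates = []
    for _ in range(n_rounds):
        moves = [rng.random() < p for p in probs]  # True = cooperate
        total = sum(moves)
        rates.append(total / n_agents)
        for i, p in enumerate(probs):
            observed = (total - moves[i]) / (n_agents - 1)
            probs[i] = p + learning_rate * (observed - p)
    return rates  # cooperation rate per round, ready for analysis
```

Varying the decision rule, group size, or learning rate and re-running the simulation is then just a loop, and comparing the resulting trajectories against laboratory data is the contrast step described above.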
Besides the logistic issue, an historic challenge to dynamic research has been
the difficulty of statistically treating repeated-choice data, and modeling how
influences on choice wax and wane as the dilemma moves forward. Recent devel-
opments in structural equation modeling (SEM) have the potential to make this
challenge surmountable. Basically, SEM is a quantitative method for combining
data from many variables into a single system, producing a set of path coefficients
that estimate the strength of impact of some input and mediating variables on one
or more outcome variables. The focus is on the nature of covariation between pairs
of variables in the system. The goal is not to derive a causal structure, but rather
a web of relationships that can be interpreted. While most commonly applied to
non-experimental data, there is no reason why SEM cannot also be used on data
sets that result from manipulated variables, and through use of a variant of SEM
called growth curve modeling, repetition of choice can be included in the model.
Application of SEM to social dilemma data thus allows simultaneous consideration
of a number of influences on choice, and can describe how the nature of choice
alters over time. It is important to note that proper application of SEM requires
a substantial sample size. SEM theorists generally suggest that there should be at
least 20 cases per parameter being estimated (Jackson, 2003). As a typical struc-
tural model can easily have upwards of 20 parameters to estimate (and in fact, a
20-parameter model would be a relatively simple one), the researcher may need
many hundreds of cases to derive stable estimates, and some theorists (e.g., Barrett,
2007) have argued that any model with less than 200 cases should automatically be
rejected, unless it has been executed on a special, restricted sample for which large
numbers of cases are just inaccessible (e.g., schizophrenics). This sample size issue
should give social dilemma researchers pause before they wantonly begin to use
SEM on their data sets (and such misapplication is a real and growing problem—
see, e.g., Shah and Goldstein, 2006, for a recent demonstration of this trend), but
it should not be a barrier. Execution of a study that is designed with these caveats
in mind can produce a model that estimates the relative impact of a good number
of variables on social dilemma choice, and captures at least some of the dynamic
nature to boot. (Readers interested in a complete treatment of SEM should see
Kline, 2011.)
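The two rules of thumb just cited can be combined into a back-of-envelope check. The helper function below is ours; it simply takes the larger of Jackson's cases-per-parameter ratio and Barrett's absolute floor.

```python
def min_sem_sample(n_parameters, cases_per_parameter=20, absolute_floor=200):
    """Rough minimum sample size for an SEM: at least 20 cases per
    estimated parameter (Jackson, 2003), and never fewer than 200
    cases overall (Barrett, 2007)."""
    return max(n_parameters * cases_per_parameter, absolute_floor)

# A "relatively simple" 20-parameter model already needs 400 cases:
# min_sem_sample(20) -> 400
```

As the comment shows, even a modest structural model quickly pushes the required sample into the hundreds, which is the caution urged in the text.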

■ PRACTICAL CONSIDERATIONS WHEN CONDUCTING SOCIAL DILEMMA RESEARCH

Throughout this chapter we have provided some samples of paradigms that
researchers use when conducting social dilemma studies. When executing this
research, there are a host of practical considerations that the researcher must
also take into account. They are considerations that do not offer easy answers,
and the researcher must make some informed choices about how to handle
them. In this section, we review these considerations.

■ REAL VERSUS INTANGIBLE OUTCOMES

Subjects in social dilemma studies are typically shown an outcome matrix that
reflects the number of points associated with each particular combination of
choices. In some studies, this is all they play for—the ability to walk away with
the knowledge that they accumulated a satisfactory (or dissatisfactory) amount
of points. In other studies, these points get converted to something tangi-
ble:  Sometimes money, sometimes lottery tickets for a prize, sometimes a gift
certificate. Does it matter whether one uses tangible or intangible outcomes?
Perhaps not surprisingly, findings are all over the place regarding this question,
and have been for almost as long as social dilemma research has been conducted.
For this reason, incentives are a methodological issue that has never ceased to
be of concern. Gumpert, Deutsch, and Epstein (1969) found players to be more
competitive when money was at stake than when it was not, but the actual amount
of money was irrelevant—playing for any amount, no matter how small, was suffi-
cient to induce competition. Knox and Douglas (1971), however, found that while
the average rate of cooperation does not differ across magnitude of incentive, the
variance does, with variance increasing as size of incentive increases. This pattern
was replicated by Shaw (1976). From a statistical perspective, this makes it difficult
to accurately compare small-incentive and large-incentive studies. Complicating
things even further, Gallo and Sheposh (1971) could not replicate the “real money”
effect, finding no differences in cooperation between those playing for money
versus mere points; Stern (1976) found some evidence to suggest that intangible
incentives could be more influential on cooperation than tangible incentives; and
Clark and Sefton (2001) found financial incentive to be less influential on coopera-
tion than the opponent’s first choice. There is also a new line of research in which
participants can actually reward or punish each other in public goods dilemmas,
and this research shows that actual rewards and punishment with real monetary
consequences tend to be somewhat more effective in promoting cooperation than
hypothetical rewards and punishment (for a meta-analysis, see Balliet, Mulder, &
Van Lange, 2011).
All of this has led to a kind of stasis whereby those who do pay subjects think
that only this type of research is interpretable; conversely, those who do not pay
might think that it is wasteful to do so or that it induces a financial frame that is
not always present in social dilemmas in everyday life, and that might prime
participants with a particular mindset. These views closely parallel a dialogue
between Frank, Gilovich, and Regan (1993, 1996) and Yezer, Goldfarb, and
Poppen (1996) regarding whether studying economics makes people less coop-
erative. The contention (Frank et al.) is that economics teaches the self-interest
perspective and so does inhibit cooperation; the response (Yezer et al.) is that eco-
nomics also teaches the possibility and value of mutually beneficial action, and so
does not inhibit cooperation. It is possible that those who believe that people are
taught to be self-interested would plan to convert points to dollars, and those who
believe people are taught the usefulness of mutual benefit would not execute such
a conversion. So practically, there is probably no right answer regarding whether
study subjects need to have their choices connected to real money. The researcher
should simply make an informed choice, and be prepared to defend that choice.

■ THE ENDGAME

In many social dilemma studies, people make choices over multiple trials, because
the researcher is interested in observing the evolution of behavior over time. In
such studies, the question arises of whether subjects should be told how many tri-
als will occur. Such knowledge can induce endgame effects whereby cooperation
decreases dramatically on that last trial, because the person knows there will be no
retribution for such a choice (Rapoport & Dale, 1967). Given this, one can ask why
the researcher would ever tell subjects that “there will be x number of trials.”
However, Selten and Stoecker (1986) have argued that revealing versus conceal-
ing the stopping point of the game is a choice to be made theoretically. Specifically,
revealing the endpoint simulates a situation in which the person interacts with
different people trial by trial, but knows that there are a finite number of available
partners. An example would be a classroom in which a student must pair up with
a classmate every time an assignment is given, and will be paired with a different
classmate for each assignment. The student immediately knows how many assign-
ments will be given (it is equal to the number of classmates she has), but does not
know which classmate will be her partner for any given assignment. Free-riding on
the partner’s efforts early on could earn the student a reputation as a bad partner,
because past partners will pair up with future partners and may spread the word,
so it behooves her to be cooperative. However, as the last assignment nears, the
likelihood of reputational damage goes down, because the few remaining upcoming
partners are unlikely to meet up before the tasks end, so word spreading that
the person is a bad partner probably will not happen. As such, the student can
safely do nothing, and this becomes more true as the end nears. A game in which
the number of trials is specified simulates situations such as this.
By contrast, a game for which the stopping point is unspecified simulates
open-ended social interaction. Consider, for example, the problem of deciding
whether to contribute to public radio. The listener does not know who else listens
to the station, how many listeners carry forward to the next donation period, or at
what point he himself will stop listening to the station. The listener thus does not
know when the dilemma will stop, which leaves open the possibility of retribution.
Because of this, the person needs to be more cautious, and exhibit cooperation
for a longer period of time.
The decision of whether to tell subjects the stopping point, then, needs to be
driven by theoretical concerns. There is obviously a limit to one’s ability to control
this, since study subjects typically are in the lab for a fixed amount of time. If a
person has been in the lab for 45 minutes of a one-hour session, she will cor-
rectly infer that the end is coming soon, even if the experimenter has said nothing.
Despite this, the researcher should carefully consider what the study subjects will
be told. It does make a conceptual difference.

■ DECEPTION

The final issue that one needs to consider when executing social dilemma research
is whether to have subjects interact with actual other subjects in real time, or
against a programmed strategy, with an inference that the opponent is a real per-
son. Arguments can be made for both approaches. Intact groups simulate real-life
situations but may result in many groups not producing the effect of interest—
one cannot study reciprocation if no one in the group reciprocates—thus wasting
subject hours. A concocted group guarantees that everyone has the same experi-
ence, but comes with a price of deceiving the subjects. What does one do?
The use of deception is a provocative issue. Within psychology alone, one can
find vehement arguments against it ever being used (e.g., Ortmann & Hertwig,
1997) and equally strong arguments for its occasional use (e.g., Kimmel, 1997).
Some researchers report evidence that people do not mind being deceived, so
long as the deception is mild; indeed, they may actually enjoy the experience
(Christensen, 1988); and others report that encountering a negative stimulus is
more aversive than being deceived (Epley & Huff, 1998). Also important, some
questions—such as the effects of careful manipulations of the feelings (empathy)
or behaviors or strategy (such as the so-called Tit-for-Tat strategy, see Chapter 4)
of the other person (Batson, 2011)—are very hard to study without any form of
deception.
However, vivid examples of subjects inferring that an accidental occurrence in
the laboratory was actually part of the study have been reported—MacCoun and
Kerr (1987), for example, reported that a subject who experienced an epileptic
seizure during a session was thought by the other subjects to be involved in the
study—as a confederate of the experimenter. Such evidence supports an argument
many experimental economists regard as important. They argue against any form
of deception because it undermines the trust that participants should have in the
integrity and honesty of experimental procedures, thereby undermining the gen-
eral validity of experiments.
The question of whether it is appropriate and/or desirable to use deception is,
then, a complicated one that has no easy answer. It is beyond the scope of this book
to try to sort through all of these complications—the reader who would like to
see this done is referred to the outstanding chapter by Kimmel (2006). We merely
point out that there is no easy answer for the social dilemma researcher who is
trying to decide what to do.
We might find some help in some recent developments in the health sciences.
There, ethicists have attempted to find a middle ground between deception and
full information. In this literature, it is acknowledged that deception is sometimes
necessary. For example, one cannot study placebo effects without convincing peo-
ple that they are taking an active pharmaceutical when they really are not (Miller
& Kaptchuk, 2008). However, it is also acknowledged that deception makes it at
best very difficult for people to freely consent to what they are about to experi-
ence (Wendler & Miller, 2004). As such, ethicists have proposed two strategies
for employing deception, yet giving subjects the ability to make informed choices
(Miller & Kaptchuk, 2008). At the start of the session, people could be told that the
study contains some deception, and be asked to sign an authorization form that
states the person is aware of and accepts the use of deception. If the person does not
care to be deceived, he can withdraw from the study without penalty. Alternatively,
during debriefing, when the person is informed that deception was used, the per-
son can be given the option to withdraw his data, with any rewards promised to the
person being delivered anyway. Whether either of these procedures drives a certain
kind of person away from the study, and hence skews the data, is unknown. They
may, however, represent a way for most social dilemma researchers to be accepting
of deception, at least among those who believe it is sometimes called for.

■ SUMMARY AND CONCLUSIONS

In this chapter we have reviewed the development of the idea of a mixed
motive, discussed how modern thought on interdependence is grounded in
earlier ideas, and introduced the major research paradigms used to study social
dilemma choice.
Our review of the history of the idea of a mixed motive is designed to show
that this is not a modern concern; scholars have wondered about it for centuries.
This demonstrates how central mixed-motive conflict is to everyday life. Further,
the key ideas underlying modern theory on how to resolve social dilemmas have
existed for quite some time. None of this is to criticize modern researchers for
simply rehashing old ideas, but rather to show how challenging the problem is. After
2,500 years, we still do not know whether a long-term or a short-term focus is best,
or whether one should strive for the best outcomes or for merely acceptable ones.
A second critical point to be gleaned from our historical review is that there
has never been agreement on whether humans are naturally inclined toward coop-
eration or selfishness. It is thus unsurprising that theorists continue to debate the
issue today. As part of this review, it was our goal to clarify what we see as some
modern misconceptions about some of these historical writings: that Adam Smith
believed people will always (and should) be self-interested; that people are thought
to perform pain/pleasure analyses before every action; that those who subscribe to
the Hobbesian point of view believe that humans are incapable of being generous
on their own.
Our discussion of the research paradigms revolves around the Prisoner’s
Dilemma, give-some game, and take-some game, and some of their major vari-
ants. We also raise the issue of static versus dynamic games, with the former
assuming that people have a set choice strategy that they employ to make their
cooperation decisions, and the latter relaxing this assumption, thus allowing for
the possibility that people will alter their choice strategy as the dilemma pro-
gresses. We noted that little research exists on dynamic choice in social dilemmas,
a knowledge gap that needs filling.
Finally, we reviewed some of the major practical considerations a social
dilemma researcher needs to make:  what type of incentive structure to use;
whether to inform subjects of when the game will end; and whether or not to use
real or simulated opponents. We saw that none of these issues has an easy answer,
and the choices the researcher makes will be influenced by theoretical consider-
ations regarding what type of situation the researcher is trying to understand.
The take-home message from all of this is that constructing a proper social
dilemma study is challenging, in terms of both how the study is executed, and
the basic assumptions one makes about why humans do what they do. This is an
important point to keep in mind when reading the following chapters as well.
■ PART TWO

Perspectives to Social Dilemmas


3 Evolutionary Perspectives

■ EVOLUTIONARY PERSPECTIVES

Although various forms of non-cooperative behavior may catch the eye, we also
see that people often engage in remarkable forms of prosocial behavior. We
make substantial personal sacrifices to help our kin and support our friends,
rescue complete strangers in bystander emergencies, make anonymous financial
donations to charities, and contribute to large scale public goods such as edu-
cation, religion, and environmental sustainability. From an evolutionary per-
spective, human cooperation is an enigma because, over time, natural selection
should ruthlessly winnow out any traits that reduce an individual’s fitness.
Fitness is defined in terms of an individual’s reproductive success, and it
includes both someone’s direct fitness (numbers of offspring carrying copies of the
same genes) and indirect fitness (other kin carrying copies of the same genes)—
together referred to as inclusive fitness (Hamilton, 1964). Thus, for any costly
behavior to evolve and persist, there need to be some corresponding ultimate benefits
in terms of spreading the actor's genes. A first-pass glance at natural selection
would suggest that cooperation in social dilemmas should have been selected
against, because it leads us to perform behaviors that may be individually
costly. Nevertheless, cooperation is ubiquitous in both human and nonhuman
social interactions. Why? This is the question that this chapter addresses.
In this chapter, we outline a number of important evolutionary explanations for
human cooperation that have been suggested in the literature: (1) kin selection,
(2) direct reciprocity, (3) indirect reciprocity, and (4) costly signaling. These expla-
nations suggest that cooperation has evolved through natural selection. We also
briefly discuss other perspectives such as multilevel selection which suggests that
human cooperation may be adaptive at the group level, as well as mismatch and
cultural group selection theories which suggest that human cooperation is not an
adaptation per se but a byproduct of other adaptations. Finally, we briefly discuss
some emerging questions in the evolutionary literature on cooperation, and pose
some challenges for further research.
We begin by discussing some definitional issues. Evolutionary biologists
define cooperation as any action which is intended to benefit others, regardless
of whether the actor also benefits in the process. This captures a wide variety of

This chapter was written by Pat Barclay and Mark Van Vugt and is partly based on Barclay, P., &
Van Vugt, M. (in press). The evolutionary psychology of human prosociality: adaptations, mistakes,
and byproducts. To appear in D. Schroeder & W. Graziano (Eds.), Oxford Handbook of Prosocial
Behavior. Oxford, UK: Oxford University Press.

different behaviors such as helping, volunteering, altruism, coordination, and
prosocial behavior. Some types of cooperation are costly to the actor, either in the
short or long run, whereas other types carry no costs, or return benefits to the
actor almost immediately. For instance, altruism is a costly form of cooperation
and evolutionary biologists use this term to refer to actions which decrease one’s
lifetime reproductive success (West, El Mouden, & Gardner, 2011). In contrast,
mutualism is a form of non-costly cooperation when the actor and another agent
both benefit from something the actor does. Because cooperation in social dilem-
mas usually involves a cost to the actor, we will concentrate here on explaining
the more costly forms of cooperation from an evolutionary perspective. Thus, this
chapter does not address any examples of cooperation in which the actor receives
direct benefits from cooperating, including Volunteer’s Dilemmas (e.g. Archetti &
Scheuring, 2010; Diekmann, 1985) or having some stake in the recipient of the
help (Roberts, 2005; Tooby & Cosmides, 1996).
It is good to note that research on the evolution of cooperation draws from
many disciplines such as evolutionary biology, experimental economics, math-
ematical game theory, anthropology, and, of course, social, cognitive, and devel-
opmental psychology. It uses tools and methods from all of the above to investigate
the origins of human cooperation. What often differentiates evolutionary research-
ers from non-evolutionary researchers in the study of human cooperation is the
types of questions they ask. Researchers may get into unnecessary quarrels over
the causes of human cooperation without realizing that they may be providing
valid answers to different questions (Barclay & Van Vugt, 2012; Van Vugt & Van
Lange, 2006).
For example, one researcher may say “people help each other because they feel
empathy for each other.” A  second researcher may say “people help each other
because they learn to help.” A  third researcher will say “people help each other
because those who help tend to receive help.” A fourth researcher may say “people help
each other because we share this trait with other apes and it evolved out of kin altru-
ism.” What these researchers may not realize is that they could all be right—or could
all be wrong—because they are answering questions at different levels of analysis.
The first researcher is talking about proximate psychological mechanisms
underlying altruism and cooperation: what is going on within the person at the
time he or she helps. The second researcher is talking about development, that is, how
the psychological mechanisms underlying cooperation develop within the lifespan
of an individual and how genes and environments interact. The third researcher is
asking about ultimate, adaptive function, that is, why an individual would develop
in such a way as to have that psychological mechanism to help or cooperate, and
what selective pressures cause it to persist. The fourth researcher is addressing phy-
logeny or evolutionary history, that is, how and when the mechanism evolved in
our evolutionary history, and what prior traits it could have evolved from.
This example shows how these four levels of analysis are complementary,
not mutually exclusive. To understand human cooperation in social dilemmas
requires an explanation at all four of these levels of analysis (Tinbergen, 1968).
The only fruitful scientific discussion is between explanations within the same
level. For example, researchers can debate whether the psychological mechanism
Evolutionary Perspectives ■ 41

that triggers human altruism and cooperation is empathy versus “oneness with
others”—the debate between Batson et  al., 1997 versus Cialdini, Brown, Lewis,
Luce, & Neuberg, 1997. They can also debate development by asking whether
empathic cooperation is innate versus culturally learned. Much of the controversy
over evolutionary explanations of human cooperation is because researchers mix
up these levels of analysis, for example, assuming that people are consciously con-
cerned with receiving benefits for helping or that all altruistic acts are selfishly
motivated (Barclay, 2011; Van Vugt & Van Lange, 2006; West et al., 2011).

■ EVOLUTIONARY PERSPECTIVES
ON HUMAN COOPERATION

What selective pressures have caused the evolution of human cooperation in
social dilemmas? While the answer is not yet completely clear, there are sev-
eral intriguing possibilities. We can roughly divide these evolutionary accounts
into two categories: adaptive and non-adaptive explanations. Adaptive explana-
tions are those that find some benefits to cooperation, such that being help-
ful increases one’s fitness or reproductive success. Adaptive explanations are
cases in which the eventual benefits of cooperation outweigh the costs, either
for the individual (direct fitness) or for copies of her genes residing in other
bodies (indirect fitness). However, it is important to realize that not all forms of
human cooperation are adaptive in an evolutionary sense. We will return to
this towards the end of the chapter.

■ KIN SELECTION

The first major theory for understanding the evolutionary origins of human
cooperation is kin selection. The vast majority of the more costly forms of
cooperation in both humans and non-humans are directed towards kin. Why
is this, and how could kinship helping evolve? Imagine that you are a gene
trying to propagate copies of yourself. The great evolutionary theorist William
(Bill) Hamilton (1964) noted that there are at least two ways you can do this:
increasing the reproduction of your current body (direct fitness), or increasing
the reproduction of other bodies that carry a copy of yourself (indirect fitness).
Inclusive fitness is the sum of these effects—direct fitness plus indirect fitness—
and is what organisms have evolved to maximize. For any given gene, close kin
are statistically likely to carry identical copies. Any gene that causes an individ-
ual to help close kin will often cause help to be targeted towards copies of itself.
Thus, we would predict that psychological mechanisms that cause nepotism will
evolve in many species, and that this nepotism should depend in part on the
closeness of kinship. This prediction has been abundantly confirmed in many
species across many thousands of studies (for a review, see Alcock, 1993). It has
even been found in plants (e.g. Dudley & File, 2007), suggesting that inclusive
fitness is a powerful idea that applies across all of  life.
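Hamilton’s (1964) insight is usually summarized in a simple rule: a gene for costly helping spreads when r × b > c, where r is the genetic relatedness between helper and recipient, b the fitness benefit to the recipient, and c the cost to the helper. The short sketch below illustrates the rule with standard relatedness coefficients; the particular benefit and cost values are our own illustrative choices, not figures from the text.

```python
# Hamilton's (1964) rule: a gene for costly helping spreads when
# r * b > c, where r is genetic relatedness to the recipient, b the
# fitness benefit to the recipient, and c the fitness cost to the
# helper. The values of b and c below are illustrative.

def helping_favored(r, b, c):
    """Return True if selection favors the helping act."""
    return r * b > c

# Coefficients of relatedness for common kin categories:
relatedness = {"identical twin": 1.0, "parent or child": 0.5,
               "full sibling": 0.5, "half sibling": 0.25,
               "first cousin": 0.125, "stranger": 0.0}

# A costly act (c = 1) yielding a benefit b = 3 to the recipient is
# favored only toward sufficiently close kin:
for kin, r in relatedness.items():
    print(kin, helping_favored(r, b=3, c=1))
```

Note how the rule reproduces the prediction in the text: nepotism should taper off with genetic distance, so a helping act that pays toward a sibling (0.5 × 3 > 1) does not pay toward a cousin (0.125 × 3 < 1).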
Regarding humans, much research has shown that—all else being equal—peo-
ple are nicer to kin than non-kin: they are more likely to help kin, less likely to harm
42 ■ Perspectives to Social Dilemmas

kin, and more willing to tolerate injustices from kin (e.g., Burnstein, Crandall, &
Kitayama, 1994; Krupp, DeBruine, & Barclay, 2008; Park, Schaller & Van Vugt,
In one set of studies, Burnstein et al. (1994) tested several inclusive fitness
hypotheses by giving respondents hypothetical decisions about helping others. They
distinguished between helping in life-and-death decisions—whereby people could
save only one person from a burning house while the others would perish—versus
more everyday forms of helping, such as shopping for someone’s groceries. The
targets of helping varied in terms of their degree of kinship, age, sex, and health.
Consistent with the evolutionary hypotheses, people felt closer to immediate kin
(siblings) than to more distant kin (cousins). Furthermore, they were more likely
to aid close kin over distant kin, especially in life-and-death situations, whereas
for everyday favors they gave less weight to kinship.
Research suggests that even when people are in competition with others, they
will compete less sharply with kin than with non-kin (Daly & Wilson, 1988;
Gardner & West, 2004). Kinship is a major form of grouping in many pre-industrial
societies, and appears to be a major factor affecting who shares food with whom
in many societies (Gurven, 2004). In fact, the most persistent, long-term, self-
less and unreciprocated act that we see people perform—namely parental care—is
actually just a special case of kinship, because offspring carry copies of parental
genes (Dawkins, 1976/2006). Natural selection has crafted a human kinship psy-
chology that includes such powerful sentiments as parental love, mother-infant
attachment, brother and sister solidarity, and other such nepotistic tendencies.
These emotions are the proximate psychological mechanisms that function to pro-
mote helping towards kin. All told, kinship appears to be one of the most powerful
causes of cooperation for most humans on the planet (Park et al., 2008).

■ DIRECT RECIPROCITY

A second explanation for why humans cooperate in social dilemmas is direct
reciprocity, and it is based on the theory of reciprocal altruism (Trivers, 1971).
This theory assumes that when people help each other, they often receive pay-
back at a later date. Take the example of hunting, a common practice among
our human ancestors, which may have selected for reciprocity in humans.
Hunting food is difficult, and hunters often come home empty-handed. This
means that each hunter is at risk of going hungry some days and having a
bonanza of food on other days when he catches something. To resolve the
problem of being hungry on some days, two or more hunters could agree to
help each other. Each hunter will share with the other(s) when he has plenty,
and gets a share when he is hungry. This way, each has fewer hungry days and
is more likely to survive.
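The risk-pooling logic can be made concrete with a toy calculation (the success probability is an illustrative assumption, not a figure from the text): if each hunter independently succeeds on half of all days, a lone hunter goes hungry half the time, but a sharing pair goes hungry only on the days when both hunts fail.

```python
# Toy calculation of the risk-pooling benefit of food sharing.
# The success probability p is an illustrative assumption.
p = 0.5                               # chance a hunter succeeds on a day

p_hungry_alone = 1 - p                # hungry whenever your own hunt fails
p_hungry_sharing = (1 - p) ** 2       # hungry only when BOTH hunts fail

print(p_hungry_alone)                 # 0.5
print(p_hungry_sharing)               # 0.25
```

Halving the frequency of hungry days in this way is exactly the survival advantage the hunting example describes.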
There is a risk associated with this form of cooperation because it constitutes
a social dilemma, and, more precisely, a Prisoner’s Dilemma game, in which it
is costly to share. If one person received meat from others without giving any-
thing in return, then he would be better off than someone who paid the cost of
sharing. Free-riding on others would be beneficial if people are willing to give to
anyone. One evolved strategy is for people to preferentially help those who have
provided help in the past. This is the basis of reciprocity (Axelrod, 1984; Trivers,
1971; Van Vugt & Van Lange, 2006). In this way, the helpers tend to receive help
and the non-helpers tend not to receive any help. Indeed, people often get involved
in exchange relationships in which they take turns helping each other. In our meat
example, two hunters might share with each other as long as each of them has
given in the past. We have popular expressions such as “you scratch my back and
I’ll scratch yours,” which carry the implicit condition that “I will not scratch your
back unless you scratch mine.”
Is direct reciprocity beneficial for the actor? In a classic study, political scientist
Robert Axelrod organized a tournament in which he pitted different strategies
against each other in computer simulations of the Prisoner’s Dilemma Game, and
the winning strategy was Tit-for-Tat (Axelrod, 1984). Tit-for-Tat is a simple strat-
egy of initially cooperating with one’s partner, and thereafter imitating the part-
ner’s action on the previous interaction. Tit-for-Tat is a remarkably good strategy
because it pairs cooperators with other cooperators and does not get “suckered”
for long by those who do not cooperate. As such, it tends to do better than many
other strategies (Axelrod, 1984; Boyd & Lorberbaum, 1987; Dawkins, 1976/2006).
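The two properties just mentioned—pairing cooperators with cooperators while not being “suckered” for long—can be shown in a few lines of simulation. The payoff values below (T = 5, R = 3, P = 1, S = 0) are the conventional Prisoner’s Dilemma numbers, not taken from the text.

```python
# A minimal iterated Prisoner's Dilemma in the spirit of Axelrod's
# (1984) tournament. Payoffs use the conventional values T=5, R=3,
# P=1, S=0 (an assumption; the text gives no specific matrix).

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_moves):
    # Cooperate on the first move, then copy the partner's last move.
    return "C" if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return "D"

def always_cooperate(opponent_moves):
    return "C"

def play(strategy_a, strategy_b, rounds=100):
    moves_a, moves_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(moves_b), strategy_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

# Against an unconditional defector, Tit-for-Tat is "suckered" only once:
print(play(tit_for_tat, always_defect))      # (99, 104)
# With a cooperator it settles into full mutual cooperation:
print(play(tit_for_tat, always_cooperate))   # (300, 300)
```

Over 100 rounds Tit-for-Tat concedes only the single first-round loss to the defector while reaping the full mutual-cooperation payoff from cooperators, which is why it fared so well across Axelrod’s mixed field of strategies.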
Yet scientists have found various conditions that limit the effectiveness of
Tit-for-Tat. Tit-for-Tat only works if the “shadow of the future” is long enough
such that the future benefits of one’s partner’s reciprocation will outweigh the cost
of immediate helping. Tit-for-Tat requires enough other reciprocators around
to make it worth initiating a reciprocal relationship. Under some conditions,
Tit-for-Tat can be beaten by more forgiving strategies that overlook acciden-
tal failures to cooperate, or by strategies that exploit unconditional cooperators
(Brembs, 1996; Klapwijk & Van Lange, 2009; Nowak & Sigmund, 1992). Although
Tit-for-Tat is not always the best reciprocal strategy to follow, the net result of years
of theorizing is that some willingness to help reciprocally can be a highly successful
strategy for solving social dilemmas.
One of the factors that might favor a more generous strategy than strict
Tit-for-Tat is the occurrence of noise (Axelrod, 1984; Kollock, 1998). For exam-
ple, Klapwijk and Van Lange (2009; see also Van Lange, Ouwerkerk, & Tazelaar,
2002)  showed that in noisy dyadic interactions—when the partner sometimes
behaves more or less cooperatively than intended—it pays to be slightly more
generous than Tit-for-Tat. Using a parcel delivery paradigm as a social dilemma,
whereby individuals’ payoffs were determined by the speed with which a parcel
was delivered through a city (with roadblocks to induce noise), they found
that under noise Tit-for-Tat diminished trust and cooperation, and that a more
generous strategy was more effective in maintaining cooperation.
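The destructive “echo” that noise sets off in strict Tit-for-Tat can be demonstrated deterministically. In the sketch below, Tit-for-Two-Tats (retaliate only after two consecutive defections) stands in for “a slightly more generous strategy”; the round count and the timing of the single error are illustrative choices, not parameters from the studies cited.

```python
# A deterministic sketch of the echo effect under noise: one accidental
# defection locks two strict Tit-for-Tat players into alternating
# retaliation, while a more forgiving rule recovers. Tit-for-Two-Tats
# is used here as a stand-in for a "more generous" strategy.

def tft(opponent_moves):
    return "C" if not opponent_moves else opponent_moves[-1]

def tf2t(opponent_moves):
    # Forgiving variant: retaliate only after two consecutive defections.
    return "D" if opponent_moves[-2:] == ["D", "D"] else "C"

def mutual_cooperation(strategy, rounds=20, error_round=5):
    """Both players use `strategy`; player A's intended move is flipped
    once, at `error_round`, to mimic noise. Returns the number of rounds
    of mutual cooperation."""
    moves_a, moves_b = [], []
    for t in range(rounds):
        a, b = strategy(moves_b), strategy(moves_a)
        if t == error_round:
            a = "D" if a == "C" else "C"   # the noisy slip
        moves_a.append(a)
        moves_b.append(b)
    return sum(x == "C" == y for x, y in zip(moves_a, moves_b))

print(mutual_cooperation(tft))    # 5: cooperation never recovers
print(mutual_cooperation(tf2t))   # 19: one bad round, then recovery
```

After the slip, the strict pair retaliates in lockstep and never again cooperates on the same round, while the forgiving pair absorbs the single error and returns immediately to mutual cooperation.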
Most of the research on direct reciprocity uses the iterated Prisoner’s Dilemma,
which is a two-person game in which, on each round, each player makes a binary
choice: either “cooperating” or “defecting.” Recent work has allowed people to use
graded levels of cooperation instead of a binary choice. Mathematical models have
shown that the best strategy in such situations is “Raise-the-Stakes,” which means
starting out moderately cooperative and getting increasingly cooperative when
one’s partner reciprocates (Roberts & Sherratt, 1998; Sherratt & Roberts, 1999).
This accurately models what people actually do in experimental games (Roberts &
Renwick, 2003; Van den Bergh & Dewitte, 2006), especially with strangers with
whom they have no trusting relationship (Klapwijk & Van Lange, 2009; Majolo
et al., 2006).
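The Raise-the-Stakes logic can be sketched in a few lines. The update rule, step size, and cap below are our own illustrative assumptions, not the authors’ exact model.

```python
# A sketch of the "Raise-the-Stakes" idea (Roberts & Sherratt, 1998):
# start with a modest investment and escalate only when the partner
# matches it. Rule, step size, and cap are illustrative assumptions.

def raise_the_stakes(own_last, partner_last, step=1, start=1, cap=10):
    if own_last is None:                   # first move: invest modestly
        return start
    if partner_last >= own_last:           # partner matched or exceeded:
        return min(own_last + step, cap)   # escalate the stakes
    return partner_last                    # otherwise, drop to their level

# Two such players ratchet cooperation upward round by round:
a = b = None
levels = []
for _ in range(6):
    a, b = raise_the_stakes(a, b), raise_the_stakes(b, a)
    levels.append(a)
print(levels)                              # [1, 2, 3, 4, 5, 6]
```

Starting small limits the damage a defector can do (a partner who gives nothing is immediately matched at zero), while mutual matching lets strangers build up to high levels of cooperation gradually—just the pattern reported in the experimental games cited above.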
Contrary to popular belief, the existence of direct reciprocity does not require
complex calculations or strict bookkeeping among egoistic individuals. Reciprocity
explains why people are capable of possessing genuine warmth, love, and sympa-
thy toward others such as in friendships or romantic relationships: “If I genuinely
value your welfare, it will cause me to help you, which can cause you to genuinely
care about me and help me when I need it, which causes me to value your welfare
more,” and so on. In this example, empathy and feelings of warmth toward one’s
friend or romantic partner are the proximate psychological mechanisms that are
shaped by an evolved psychology based on direct reciprocity (Barclay & Van Vugt,
2012; de Waal & Suchak, 2010; Neyer & Van Lange, 2003).
What direct reciprocity does require are the cognitive abilities to detect when
others might fail to reciprocate (cheater detection: Cosmides, Barrett, & Tooby,
2010), remember who has and has not reciprocated (Barclay, 2008; Mealey, 1995),
trust that others will stick around long enough to return the favor (Van Vugt &
Hart, 2004; Van Vugt & Van Lange, 2006), and delay gratification in order to reap
the long-term gains of reciprocation (Harris & Madden, 2002; Stevens & Hauser,
2004). This might explain why reciprocity is relatively common in humans, but
relatively rare in the other primates that generally lack these more advanced men-
tal faculties.

■ INDIRECT RECIPROCITY

People do not help only their kin, partners, and friends. Human cooperation
appears to be much broader than that. People regularly help those who will not
have the opportunity to directly reciprocate. Take the hunting example again, and
imagine one hunter who is known to regularly share with others, and a second
hunter who is known for stinginess. When the generous hunter gets sick and
is unable to hunt for himself, others are likely to give him meat, whereas the
stingy hunter is much less likely to receive meat when sick (Gurven, Allen-Arave,
Hill, & Hurtado, 2000). This is an example of what is often referred to as indirect
reciprocity, which is when cooperative acts are reciprocated by someone other
than the recipient (Alexander, 1987; Nowak & Sigmund, 2005). According to
indirect reciprocity theory, people acquire a good reputation when they help oth-
ers, and this makes them more likely to receive help when they themselves need
it. People who refuse to help good people get a bad reputation, which reduces
their likelihood of receiving help themselves.
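This reputation-based logic is often formalized as “image scoring.” The minimal sketch below is in the spirit of indirect reciprocity models (Nowak & Sigmund, 2005); the score updates and the helping threshold are illustrative assumptions.

```python
# A minimal image-scoring sketch in the spirit of indirect reciprocity
# models (Nowak & Sigmund, 2005). The score updates and the threshold
# are illustrative assumptions.

def willing_to_help(recipient_score, threshold=0):
    """A discriminating donor helps only recipients in good standing."""
    return recipient_score >= threshold

# History phase: the generous hunter is seen giving three times;
# the stingy hunter is seen refusing three times.
scores = {"generous": 0, "stingy": 0}
for _ in range(3):
    scores["generous"] += 1   # each observed act of giving raises the score
    scores["stingy"] -= 1     # each observed refusal lowers it

# Later, both fall ill and need help from an uninvolved third party:
print(willing_to_help(scores["generous"]))   # True
print(willing_to_help(scores["stingy"]))     # False
```

The crucial feature is that the third party never received anything from either hunter: help flows to the generous one purely on the basis of reputation, which is what distinguishes indirect from direct reciprocity.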
As an empirical illustration of indirect reciprocity, Wedekind and Milinski
(2000) had participants play a public good game in which they could give money
to other participants and could gain a reputation for giving or refusing. The
experimenters ensured that there was no possibility of direct reciprocation from
the recipient because participants would never be paired with the same person
again. Despite this, participants tended to give to others who had given in the
past, such that people with a good reputation were more likely to receive help.
This result has been replicated in several other similar experiments conducted in
various labs in behavioral economics and social psychology (Hardy & Van Vugt,
2006; Milinski, Semmann, Bakker, & Krambeck, 2001; Seinen & Schram, 2006;
Semmann, Krambeck, & Milinski, 2004; Van Vugt & Hardy, 2010).
For instance, Hardy and Van Vugt (2006) showed that cooperators in a pub-
lic good game receive greater status from their peers and they are more likely to
be selected as group leaders. They had participants play a public good game in
randomly assigned three-player groups. In one condition, the individual contri-
butions per round were anonymous and in another they were public. After each
round the members of each group were asked who they preferred as their group
leader for a subsequent round. They were also asked which group member they
most admired and respected. As expected, cooperators received higher status rat-
ings and were most likely to be chosen as group leaders provided that their con-
tributions were known to others in their group. In a second study on a resource
(commons) dilemma, they essentially replicated this finding. Individuals who had
taken less from a resource pool were seen as higher in status and were preferred
as exchange partners.
What information do people use to decide whom to help? People seem to
use a combination of personal experience and social information about others
(gossip) when deciding whether to help them or not (Roberts, 2008; Sommerfeld,
Krambeck, Semmann, & Milinski, 2007). Evolutionary scientists are currently
investigating what types of acts will result in someone obtaining a good or bad
reputation (image score). For instance, one gets a good reputation for punish-
ing non-cooperators (Barclay, 2006). Also, a refusal to help a bad person should
enhance one’s own reputation, but it is not clear whether it does (Bolton, Katok, &
Ockenfels, 2005; Milinski et al., 2001; Ohtsuki & Iwasa, 2007). Finally, it should
probably matter for one’s reputation whether a person helps (or fails to help)
an in-group member or an out-group member, but we know of no studies that
have looked into this.
Unlike kin selection and direct reciprocity, indirect reciprocity can poten-
tially explain cooperation in large-scale public goods in which people gain a
good reputation by being cooperative. Reputational forces like indirect reci-
procity can be harnessed to support cooperative actions like the fight against
climate change because people who work against climate change may benefit in
terms of indirect reciprocity (Van Vugt, 2009). As a test of this idea, Milinski,
Semmann, Krambeck, and Marotzke (2006) ran a public goods experiment with
participants contributing to a public fund. In contrast to the standard public
good game, the public fund was not divided among the participants but was
instead invested in reducing people’s fossil fuel use. This game mimicked the
global climate change problem. The researchers found that contributions went
up when the players were provided with expert information describing the cur-
rent state of the climate. Furthermore, in support of indirect reciprocity theory,
personal investments in climate protection increased substantially when players
invested publicly, that is, when they could build up a good reputation. Thus, a
third evolutionary explanation for why humans cooperate is because it yields
reputation benefits.
■ COSTLY SIGNALING

A fourth major evolutionary perspective considers human cooperation to be a
costly signal of some underlying quality of the helper. Like indirect reciproc-
ity, a costly signaling perspective assumes that cooperation is a signal that can
benefit one’s reputation. However, the payout that helpers get is not necessarily
in terms of cooperation:  it could also be in terms of access to sexual mates or
resources. Let’s go back to the hunters. Hunting big game is challenging, and
hunters regularly come home empty-handed. It takes a lot of skill to catch big
game with any regularity. If you see someone who is often sharing giraffe meat
that he has caught, what do you conclude about him? Probably that he has skills
and resources. These can include talents such as athletic ability, physical strength,
coordination, intelligence, perseverance, leadership, and commitment—all of
which are desirable traits in a sexual mate or in a coalitional partner, and unde-
sirable traits in an enemy. Thus the practice of hunting and sharing large game—
two cooperative activities—may be a way of signaling qualities about oneself that
may otherwise be difficult to observe directly (Hawkes, 1991; Smith  & Bliege
Bird, 2000; Smith, 2004).
The classic example of a costly signal in the animal world is the peacock’s tail.
This ornamental tail is very costly to grow and it severely restricts the movements
of the peacock. Yet, by being costly it signals to peahens that the carrier of the
tail is in excellent condition and possesses good genetic qualities. Peahens indeed
select their mating partners based on the tail quality. In a similar vein, human
cooperation might be a way to broadcast information about oneself in a way which
constrains it to be honest (Iredale, Van Vugt, & Dunbar, 2008; Searcy & Nowicki,
2005; Zahavi & Zahavi, 1997): “I benefit from sending signals to convince you that
I have certain qualities (e.g., abilities, resources, cooperative intent), and you ben-
efit from determining whether I honestly do possess those qualities.” But how does
one know if the signals are honest or if the other person is bluffing? Signals can be
constrained to be honest if they carry a potential fitness cost which is only worth-
while for someone who honestly possesses the quality (Gintis, Smith, & Bowles,
2001; Searcy & Nowicki, 2007).
For example, it is fairly easy for Bill Gates to donate one million dollars to char-
ity. As such, Bill Gates pays a relatively low fitness cost for such large donations, and
this can be outweighed by any reputational benefits he receives. For most other
folks, the reputational benefits would not outweigh the crippling cost of sacrific-
ing that much money, so the fitness cost is too high and therefore not worth it. As
a result of these differing fitness costs, audiences can infer that Bill Gates is very
rich because he has over a billion dollars to spare. Bill Gates thus receives status,
respect, mating opportunities—if he were so inclined—and a host of other social
benefits.
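The honesty condition in this example can be written down directly. In the toy model below, all the numbers (the reputation benefit, the wealth levels, and the cost function itself) are illustrative assumptions, not figures from the text.

```python
# A toy model of honest signaling via quality-dependent costs. All
# numbers (benefit, wealth levels, the cost function) are illustrative
# assumptions, not figures from the text.

REPUTATION_BENEFIT = 5.0          # fitness value of the status gained

def fitness_cost(donation, wealth):
    # The same absolute donation is costlier, in fitness terms,
    # the smaller the signaler's resources.
    return 100.0 * donation / wealth

def worth_signaling(donation, wealth):
    return REPUTATION_BENEFIT > fitness_cost(donation, wealth)

# A $1M donation is trivial for a multibillionaire...
print(worth_signaling(1_000_000, wealth=50_000_000_000))   # True
# ...but crippling for someone with modest means:
print(worth_signaling(1_000_000, wealth=5_000_000))        # False
```

Because only a genuinely rich signaler comes out ahead, the donation cannot profitably be faked, which is what keeps the signal honest and lets audiences draw inferences from it.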
Costly signaling theory can explain many forms of altruism and cooperation
such as philanthropy (Harbaugh, 1998), large public feasts, and potlatches (e.g.
Boone, 1998; Smith & Bliege Bird, 2000; Van Vugt & Hardy, 2010), bravery (Kelly
& Dunbar, 2001), blood donations (Lyle, Smith, & Sullivan, 2009), and volunteer-
ing and charity giving (McAndrew, 2002; Van Vugt & Iredale, 2012). Recently, Van
Vugt and Iredale (2012) argued that men’s altruism might be a costly signal to show
off their qualities to potential female partners. To test this “show-off ” hypothesis
(cf. Hawkes, 1991), they allocated men to four-player groups to play several rounds
of a public good game, while being observed by either a male audience, a female
audience, or no audience. As expected, contributions dropped over time when
there was no audience, which can be ascribed to the standard endgame effect. With
a male audience, the contributions also dropped over time, but not significantly.
However, with a female audience the contributions went up over time, suggest-
ing that the men were using their cooperation to compete for the attention of the
female. In line with this costly signaling hypothesis, men also contributed more
when they rated the female as sexually more attractive.
Costly signaling offers an interesting alternative perspective on the origins of
human cooperation by viewing altruism and other acts of kindness as signals to
attract potential coalition partners or sexual mates. It assumes that some traits
evolve because they enable individuals to do better in the competition for partners.
This idea fits well with a broader perspective known as biological markets theory.
Humans can choose many of their social partners and leave uncooperative partners
if there are better options available. The presence of partner choice creates a mar-
ket for social partners (Noë & Hammerstein, 1994, 1995). In such markets, people
choose the best partners they can obtain, given their own value in this market.
This perspective has implications for the evolution and development of coop-
eration because it creates a selection pressure for fairness and cooperation. If you
are not receiving a “fair” deal, you can simply find someone else who will offer
one (André & Baumard, 2011; Baumard, André, & Sperber, in press). In a
biological market, the best way to get a good partner is to be a good partner. As
long as there are enough opportunities for reputation building, or costs to being
abandoned, this will cause an escalation of cooperative behavior in
a process known as “runaway social selection” (Nesse, 2007) or “competitive altru-
ism” (Barclay, 2004, 2011; Hardy & Van Vugt, 2006; Roberts, 1998).
The theory of biological markets combines aspects of reciprocity and costly
signaling in explaining cooperation. Traditional evolutionary perspectives predict
that people will be more cooperative when they are being observed, but biological
markets go further by predicting that humans will be even more generous when
competing over access to partners (Barclay & Willer, 2007; Sylwester & Roberts,
2011; Van Vugt & Iredale, 2012). Such competition pays off because high contribu-
tors gain status for helping others (Hardy & Van Vugt, 2006), and are more likely
to be chosen as partners (Barclay & Willer, 2007) and mates (Barclay, 2010). In
biological markets, cooperation is affected by factors like the supply and demand
of different currencies of help, one’s own market value, and one’s outside options
(Noë & Hammerstein, 1994, 1995).
More research is needed to test a broad range of predictions derived from costly
signaling and biological markets theories about the emergence of cooperation in
humans. These theories are appealing because they suggest that much of human
cooperation is about signaling and they offer compelling evolutionary explana-
tions for why there are consistent sex differences in cooperation in different situa-
tions (Balliet, Li, Macfarlan, & Van Vugt, 2011).
■ BASIC ISSUES

In the above, we have discussed four well-established evolutionary theories of
human cooperation, including kin selection, direct reciprocity, indirect reci-
procity, and costly signaling. The evolutionary perspective also raises some
basic issues that are relevant to understanding human cooperation. We discuss
three basic issues here: (a) Is human cooperation adaptive at the individual level
and/or the group level? (b) Are all forms of human cooperation adaptive? And
(c) How does evolution interact with culture to produce human cooperation?

Is Cooperation Adaptive at the Individual or Group Level?

The above evolutionary theories of human cooperation all primarily focus on
how cooperation influences someone’s inclusive fitness. Inclusive fitness is con-
cerned with one’s own reproductive success and that of one’s kin (Hamilton,
1964). However, this is not the only way of looking at fitness. Inclusive fit-
ness theory is simply one particular method of counting fitness, and alternative
methods exist. One alternative is “neighbor-modulated fitness,” where instead
of examining one’s effects on oneself and on kin (as in inclusive fitness theory),
a researcher only measures one’s own reproduction and includes kin’s effects on
oneself (Queller, 2011; West et al., 2011). Another alternative is offered by mul-
tilevel selection theory (Boyd, Gintis, Bowles, & Richerson, 2003; McAndrew,
2002; Sober & Wilson, 1998; Wilson, Van Vugt, & O’Gorman, 2008):  some
types of costly cooperation like altruism will decrease one’s own fitness relative
to one’s group but will increase the fitness of the group relative to other groups.
Multi-level selection looks at how one’s actions affect group fitness versus indi-
vidual fitness, and whether between-group selection for cooperation is stronger
than the within-group selection against costly cooperation. Cooperation will
arise when the former selection force is stronger than the latter.
It is important to stress that multilevel selection and inclusive fitness theory are
mathematically equivalent (Foster, Wenseleers, Ratnieks, & Queller, 2006; Sober &
Wilson, 1998; West et al., 2011). This is no longer under any serious debate. All
multilevel selection models can be translated into inclusive fitness models, and
vice versa. Rather than being a “new selective force” like reciprocity or costly sig-
naling, multi-level selection is simply another way of looking at fitness, much like
a different way of looking at a Necker cube (Reeve, 2000; Sober & Wilson, 1998),
or measuring distance in miles instead of kilometers.
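This equivalence can be made explicit with the Price equation, which partitions total evolutionary change in a cooperative trait into between-group and within-group components (a standard result added here for illustration; the notation is ours, not the text’s):

```latex
\bar{w}\,\Delta\bar{z}
  \;=\; \underbrace{\operatorname{Cov}\!\left(w_{k},\, z_{k}\right)}_{\text{between-group selection}}
  \;+\; \underbrace{\operatorname{E}\!\left[\operatorname{Cov}\!\left(w_{ik},\, z_{ik}\right)\right]}_{\text{within-group selection}}
```

where \(z_{ik}\) is the cooperativeness of individual \(i\) in group \(k\), \(w_{ik}\) is that individual’s fitness, and \(z_k\) and \(w_k\) are group means. A multilevel analysis reads the two terms separately, whereas an inclusive fitness analysis re-sums the very same total change over individuals—which is why the two bookkeeping schemes cannot disagree about what evolves. Costly cooperation spreads only when the first (positive) between-group term outweighs the (typically negative) within-group term.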
The big question is: Is it useful to look at group fitness when looking at human
cooperation? Researchers disagree on this. Some researchers argue that this mul-
tilevel perspective is indeed useful. These researchers argue that human groups
can function as a single reproductive unit much like a beehive or termite colony
in which individuals give up their own reproductive interests to benefit the group
(Wilson et  al., 2008). Such a process would have been aided by the high levels
of deadly intergroup conflict among ancestral hunter-gatherer societies (Bowles,
2009), and could result in group-level adaptations which produce in-group coop-
eration, sharing, coordination, suppression of within-group conflict, collective
decision-making, and hostility towards, and de-humanization of, other groups.
Other researchers argue that—given the mathematical equivalence of these the-
ories—all such findings are better predicted by existing components of inclusive
fitness theory such as kinship, reputation, mutualisms, and vested interests in
one’s group (e.g., Barclay, 2010a; Reeve, 2000; West et al., 2011; Yamagishi, Jin, &
Kiyonari, 1999). These researchers argue that a focus on multi-level selection tends
to hinder scientific progress more than it helps because it creates semantic confu-
sion and makes it unclear what specific factors are selecting for cooperation.
Given the ongoing controversy over this topic, we cannot resolve it here.
Instead, the utility of multilevel perspectives will be decided by whether they make
unique predictions and produce novel findings that are not generated by inclusive
fitness theory, and whether they do so without creating semantic confusion over
redefinitions of altruism (West et al., 2011). If they can do so, then they would be
a promising new avenue for research on human cooperation (Wilson et al., 2008).

Are All Forms of Human Cooperation Adaptive?

Contrary to popular belief, evolutionary theory does not predict that each case
of human cooperation is adaptive—in the sense that it increases someone’s
inclusive fitness. In the animal world, prey species sometimes get eaten because
they mistake where predators are (e.g., a zebra running towards a hidden lion)
and several bird species are tricked into raising cuckoo chicks. These behaviors
benefit other animals while incurring a cost to the self, and
thus they can be viewed as acts of altruism. Clearly, such mistakes and manipu-
lations frequently occur in nature but they are not adaptive in an evolutionary
sense. Here we provide two common non-adaptive evolutionary explanations
for why humans cooperate in social dilemmas: mistakes and mismatches.
Some forms of cooperation occur unintentionally. Going back to the hunting
example, suppose that one day you have successfully hunted meat, but you would
prefer not to share with the rest of the group because you and your family are
hungry. You could try to smuggle it back to your family or consume it on the
spot, but what if others catch you? You would risk losing your reputation, getting
punished, and having others not share with you in the future. Our psychological
mechanisms have evolved to be adaptive on average. All mechanisms will occa-
sionally make mistakes because errors are inevitable in any decision-making pro-
cess (Haselton & Buss, 2000; Nesse, 2005). Cooperative sentiments, like empathy,
cause us to help others (Batson et al., 1997; De Waal & Suchak, 2010). In a world
with reciprocity and reputation, this will often result in cooperative people receiv-
ing benefits for helping, even if those people do not intend to receive such benefits.
As long as those benefits outweigh the costs of occasionally helping the “wrong”
people (e.g. those who will not reciprocate) or in the “wrong” situations (e.g. when
we are anonymous) then it would still be adaptive on average to have cooperative
sentiments (Barclay, 2011; Delton, Krasnow, Cosmides, & Tooby, 2011).
We can design experiments to cause participants to make “mistakes” in social
dilemmas by helping when they receive no benefits for doing so, as long as we trig-
ger cues that would normally indicate the presence of benefits. For example, the
50 ■ Perspectives to Social Dilemmas

presence of eyes is normally a cue that one is being observed, and many experi-
ments have shown that people are more generous with their money when they
can observe eye-like stimuli on a computer or on a poster (Bateson, Nettle, &
Roberts, 2006; Burnham & Hare, 2007; Haley & Fessler, 2005; Mifune, Hashimoto,
& Yamagishi, 2010; Rigdon, Ishii, Watabe, & Kitayama, 2009). As another exam-
ple, facial resemblance is one cue that people use to detect kinship (DeBruine,
2005), and participants in experimental games are more trusting and cooperative
when they are playing with people whose faces have been morphed to slightly
resemble the participant’s own face (DeBruine, 2002, 2005; Krupp et  al., 2008).
In both examples, an adaptive psychological mechanism is being “tricked” to
produce a cooperative response even when the participant does not benefit from
being helpful.
A second, non-adaptive explanation is that human cooperation is a mismatch.
Natural selection does not plan ahead. Our current adaptations are “designed” to
work well in past environments: Those who had more offspring in past environ-
ments tended to pass their traits on to current generations. If the environment
stays relatively constant, then those traits will function well in the current envi-
ronment. However, if the environment has changed recently, then traits which
were once adaptive may no longer be adaptive. In other words, the old adaptations
might not yet have been selected out of a population if the selection pressures have
recently changed. This idea is known as mismatch or evolutionary lag because the
changes in genes lag behind the changes in environments (Laland & Brown, 2006;
Van Vugt & Ahuja, 2010). The classic example of mismatch is our preferences for
sweets, salts, and fats: it is adaptive to crave these when they are rare, because they
are valuable sources of energy and nutrients. People still crave them even though
they are overabundant in modern environments and lead to obesity and other
health problems.
Social environments have changed dramatically in the last several centuries and
millennia. As such, forms of cooperation that were once adaptive might no longer
be adaptive. For example, we have gone from living in smaller kin-based groups to
much larger groups of mostly non-kin. In the former circumstances, a psychology
with decision rules such as “feel warmth towards all group members” and “help
someone who needs aid” would result in cooperation mostly targeted towards kin,
whereas in modern circumstances it would not. Thus, cooperative sentiments that
once increased inclusive fitness may no longer do so.
In addition to changes in the scale and kin composition of groups, we now also
have many more opportunities for anonymity and movement between groups. This
means that people can now get away with more selfish behavior than they could
have in small bands, and it is now easier to move to a new group and run from
one’s bad reputation. Accordingly, reputation may be less important in modern
environments than in past environments—though this requires empirical test-
ing. If so, then it is not as beneficial as it once was to possess social emotions
like guilt and shame. Such emotions help people maintain their reputations and
make amends for any damage they have done to cooperative relationships (Frank,
1988; Ketelaar & Au, 2003). When people can simply run from a bad reputation or
simply gain new partners to replace any partners they have estranged then these
Evolutionary Perspectives ■ 51

emotions are no longer functional. This situation may be changing with the advent
of the Internet and social media technologies such as Facebook and Twitter as
people are now able to spread information about others’ reputation—for good or
for ill—quickly and efficiently. As it stands now, it is currently unknown whether
mismatch is a major factor in explaining human cooperation. Yet it is worth
investigating cooperation in smaller and largely kin-based social networks that
were the norm until fairly recently to see if humans still apply the same decision
rules in large, modern and complex societies (Dunbar et al., 2011; Van Vugt & Van
Lange, 2006; West et al., 2011).
The main lesson here is that not all forms of human cooperation are adaptive in
an evolutionary sense. People sometimes make mistakes regarding to whom they
bestow benefits because the psychological mechanisms underlying their coopera-
tive acts are misfiring.

How Does Evolution Interact with Culture in Producing Human Cooperation?

An emerging perspective on the origins of human cooperation is offered by
co-evolutionary models. Most people think of evolution as dealing mostly with
genes but cultural traits can also evolve. If a cultural trait is better at propagat-
ing itself and attracting new bearers then it will spread in a population at the
expense of alternative cultural traits. The study of such transmission is known
as memetics (after Richard Dawkins’ concept of memes, which are units of cul-
ture which jump from one mind to the next). Because humans inherit traits
both genetically and culturally, these models are also referred to
as gene-culture co-evolutionary models (Boyd & Richerson, 2002; Lumsden &
Wilson, 1981; Richerson & Boyd,  2005).
One reason cultural traits spread is that they are good for their bearers. Other
individuals will see that the bearer is doing well and will imitate that cultural trait
(Henrich, Boyd, & Richerson, 2008; Richerson & Boyd, 2005). In such cases, the
cultural trait and the underlying genetic trait are in a symbiotic mutualism; both
benefit from such an arrangement. However, a cultural trait need not be
good for its bearer to spread. If a cultural trait is exceptionally good at getting itself
copied by new minds, then it will spread even if it has no net effect—or even a
negative effect—on its bearer’s fitness (Dawkins, 1976/2006). Thus, cultural traits
can also be like parasites in that they can manipulate their hosts to increase their
own propagation at the expense of their bearer’s fitness (Dennett, 2006).
Gene-culture co-evolutionary models assume that some forms of human coop-
eration come about because humans have evolved to copy each other (conformity
bias). Social psychological studies such as the classic Asch and Milgram experi-
ments show that people display a strong tendency to conform to whatever norm is
present in a particular environment. Thus, when people observe others cooperat-
ing, it will increase the likelihood that cooperation will spread through imitation,
regardless of whether cooperation is evolutionarily adaptive (Simon, 1990). In
addition, people are biased to imitate prestigious individuals, and if a high-status
group member cooperates then cooperation is more likely to spread. Such cultural
biases might eventually result in highly cooperative groups replacing less coopera-
tive groups, thus spreading the norms of cooperation. This process is known as
cultural group selection, which should not be confused with group selection in a
biological sense (Henrich et al., 2008; Richerson & Boyd, 2006); it is the cultural
ideas that are spreading, not necessarily the groups. Stable groups are neither nec-
essary nor sufficient for this process (Barclay, 2010a).
By definition, cooperative actions benefit others in one’s group, so members of
a cooperative group are better off than members of groups with lots of free riders.
This means that there are advantages of being part of a cooperative group, even
if helping others or harming those that fail to help others (strong reciprocity) is
personally costly. This can lead to cultural changes as more cooperative, and thus
more successful, groups replace less cooperative groups, and bring their cultural
norms with them. Alternatively, less cooperative groups can become more cooper-
ative by imitating and conforming to the norms and behaviors of more successful
cooperative groups (Boyd & Richerson, 2002). Finally, people may “vote with their
feet” by joining groups with norms fostering cooperation, allowing for the further
spread of cooperation (Gürerk, Irlenbusch, & Rockenbach, 2006).
This process explains why humans have been able to create large and highly
cooperative societies on the back of a few primitive tribal social instincts to (1) help
members of their kin group, (2) punish defectors, and (3) imitate the behaviors of
those around them. In general, gene-culture co-evolutionary theory offers a promising avenue for looking at human cooperation because it pays attention to interactions between evolved cooperative sentiments and cultural learning biases. Yet
it is fair to say that due to their complexity and mathematical nature, these models
have not generated a lot of empirical research so far.
There are other very promising and relatively novel evolutionary approaches
to study human cooperation such as niche construction theory (Laland & Brown,
2006), scale of competition theory (West et al., 2006), selective investment theory
(Brown & Brown, 2006), and network reciprocity (Nowak, 2006). Space limitations
prevent us from elaborating on them here, but please consult these key references.

■ SUMMARY AND CONCLUSIONS

Evolutionary perspectives have generated many novel insights into understanding human cooperation in social dilemmas. We have discussed four main theories that explain why cooperation has evolved in humans: (1) kin selection,
(2) direct reciprocity, (3) indirect reciprocity, and (4) costly signaling. These are
not rival theories; they complement each other in explaining different facets of
human cooperation. In addition, we must recognize that some forms of human
cooperation are not puzzling from an evolutionary perspective, because they
benefit the actor directly (e.g., mutualisms). Similarly, sometimes cooperation
in social dilemmas is not strictly adaptive in an evolutionary sense. People
may cooperate with each other because their evolved mechanisms are misfir-
ing or because they are imitating those around them. Future research should
examine how evolved (genetic) cooperative dispositions interact with cultural
factors in explaining the prevalence and peculiarity of human cooperation and
whether there is a role for genetic group selection in explaining human coop-
eration. In the next two chapters we discuss the psychological and cultural per-
spectives on human cooperation, and these chapters will reveal that both of
these perspectives are quite complementary to the evolutionary perspective on
human cooperation.
4 Psychological Perspectives

■ PSYCHOLOGICAL PERSPECTIVES

Is human behavior always guided by direct self-interest? Are people
other-regarding, at least some of them? Does competition always produce bad
outcomes for the group? Does altruism exist in social dilemmas? And what are
the psychological variables that help us predict and understand whether people
are likely to cooperate or not? This chapter will address these and related ques-
tions. We begin by discussing whether people go beyond self-interest or not,
and discuss the specific orientations (or social preferences) that might be acti-
vated in the context of social dilemmas. Next, we provide a brief overview of
past research on social dilemmas, and discuss the psychological variables that
might underlie cooperation in social dilemmas. We close by addressing some
basic issues, such as whether altruism exists, whether other-regarding motives
can produce bad effects for the group, and whether self-regarding motives can
produce good effects for the group.

■ DO PEOPLE GO BEYOND DIRECT (MATERIAL) SELF-INTEREST?

As alluded to earlier, the answer is yes. Frequently, people act in a manner
so as to obtain good personal outcomes in the future, and take a long-term
orientation to a concrete situation in the here and now. For example, people
may invest in a relationship with a new colleague because they know that they
will work together on various projects; the employee may take the new col-
league out for lunch and devote a fair amount of time familiarizing him with the
organization. Alternatively, people may take account of the outcomes of other
individuals with whom they are interdependent. The new colleague seems like a
nice guy, so why not help him? Or we may wish that both we and the other
get equally good outcomes. Thus, later, when the new colleague feels at home,
the two colleagues may implicitly or explicitly use a rule of equality in their
approach to one another—they may tend to be equally helpful to one another.
In each of these examples, people go beyond direct self-interest.
The notion that people go beyond direct self-interest is explicated in interde-
pendence theory (Kelley & Thibaut, 1978; Kelley et al., 2003; Van Lange & Rusbult,
2012), which makes a distinction between the given matrix and the effective matrix.
The given matrix is largely based on hedonic, self-interested preferences, and sum-
marizes the consequences of the individual’s own actions and another person’s
Paul Van Lange had primary responsibility for preparation of this chapter.
actions on the individual’s outcomes. For example, an employee may just sim-
ply do those activities that are part of the contract or job description. But as we
have illustrated, an employee may also demonstrate a fair amount of helping, such
as familiarizing newcomers, working overtime when needed, and perhaps even
spontaneously offering help to colleagues who seem to need help.
According to interdependence theory, an individual may be transforming the
given matrix into an effective matrix, a matrix which summarizes his or her broader
preferences beyond the simple pursuit of direct self-interest. One type of trans-
formation may involve taking a longer-time perspective, whereby the employee
acts in ways that might be associated with greater outcomes for him or her in the
future, such as the positive return from other colleagues, or the anticipation of
reputational benefits. Another type of transformation may be outcome-based, such
that value is assigned not only to one’s own outcomes (immediate or future) but
also to the outcomes for others. For example, the employee may assign value to
the well-being of a unit or group of colleagues, seeking to enhance joint outcomes
rather than his own outcomes with no regard for his colleagues’ outcomes. Thus,
interdependence theory assumes that the pursuit of direct immediate outcomes
often provides an incomplete understanding of interpersonal behavior. That is why
this theory introduces the concept of transformation, defined as a movement away
from preferences of direct self-interest by attaching importance to longer-term
outcomes or outcomes of another person (other persons, or groups). We focus in
the remainder of the chapter on such outcome-based transformations. But what
outcome transformations may be distinguished?
The concept of transformation is based in part on the classic literature on social
value orientation (McClintock, 1972; see also Griesinger & Livingston, 1973),
which distinguishes among eight distinct preferences or orientations, including
altruism, cooperation, individualism, competition, aggression, as well as nihil-
ism, masochism, and inferiority (we will not discuss the latter three since they
are exceptionally uncommon). The outcome transformations can be understood
in terms of two dimensions, including (a) the importance (or weight) attached to
outcomes for self, and (b) the importance (or weight) attached to outcomes for
other. Figure 4.1 presents this schematic presentation, with weight to outcomes
for self on the x-axis (horizontal), and weight to outcomes for other on the y-axis
(vertical).
In this typology, cooperation is defined as the tendency to emphasize posi-
tive outcomes for self and other (“doing well together”). In contrast, competition
(or spite) is defined as the tendency to emphasize relative advantage over others
(“doing better than others”), thereby assigning positive weight to outcomes for self
and negative weight to outcomes for other. Individualism is defined by the ten-
dency to maximize outcomes for self, with little or no regard for outcomes for other
(“doing well—for oneself ”). These three orientations are fairly common in research
on social dilemmas, which often uses participants that do not know each other well.
Two other orientations—altruism and aggression—are somewhat less com-
monly observed in social dilemmas in that people do not tend to hold these as
orientations with which one approaches others in social dilemmas (but they may
be activated as motivational states, as we will discuss later). Altruism is defined
Figure 4.1 Graphic presentation of social value orientations. Weights assigned to outcomes for self are presented on the horizontal axis, and weights assigned to outcomes for other are presented on the vertical axis.

by the tendency to maximize outcomes for other, with no or very little regard for
outcomes for self, and aggression is defined by the tendency to minimize outcomes
for other. Cooperation, individualism, and competition represent common orien-
tations, in that most of us probably have repeated experience with each of these
tendencies, either through introspection or through observation of others' actions.
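The weighting logic of this typology can be stated compactly: each orientation assigns a weight to outcomes for self and a weight to outcomes for other, and the transformed (effective) value of an outcome pair is the weighted sum. The following sketch illustrates this idea; the specific weight values and function names are our own illustrative choices, not parameters from the literature.

```python
# Illustrative sketch: outcome transformations as weighted sums.
# Each social value orientation assigns a weight to outcomes for
# self (w_self) and to outcomes for other (w_other); the effective
# value of a given outcome pair is w_self * own + w_other * other.
# The exact weights below are assumptions chosen for illustration.

ORIENTATIONS = {
    "altruism":      (0.0,  1.0),   # maximize other's outcomes
    "cooperation":   (1.0,  1.0),   # maximize joint outcomes
    "individualism": (1.0,  0.0),   # maximize own outcomes
    "competition":   (1.0, -1.0),   # maximize relative advantage
    "aggression":    (0.0, -1.0),   # minimize other's outcomes
}

def effective_value(own, other, orientation):
    """Transform a given outcome pair into its effective value."""
    w_self, w_other = ORIENTATIONS[orientation]
    return w_self * own + w_other * other

print(effective_value(5, 5, "cooperation"))    # 10.0
print(effective_value(6, 1, "cooperation"))    # 7.0
print(effective_value(5, 5, "individualism"))  # 5.0
print(effective_value(6, 1, "individualism"))  # 6.0
```

Under these weights, a cooperator prefers the outcome pair (5, 5) to (6, 1), whereas an individualist prefers (6, 1): the same given outcomes, differently transformed.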
Similar models have been developed by other researchers. The most notable
model is the dual-concern model (Pruitt & Rubin, 1986), developed in an attempt to
understand the values or concerns that might underlie negotiation. As in the model
described above, the dual-concern model assumes two basic concerns: (a) concern
about own outcomes, and (b) concern about other’s outcomes. The dual concern
model assumes that each of these concerns can run from weak to strong. This model
delineates four negotiation strategies based on high versus low concern about own
outcomes and high versus low concern about other’s outcomes. According to the
dual-concern model, problem-solving is a function of high self-concern and high
other-concern, yielding is a function of low self-concern and high other-concern;
contending is a function of high self-concern and low other-concern; and inaction
is a function of low self-concern and low-other-concern. Negotiation research has
yielded good support for the dual-concern model (Carnevale & Pruitt, 1992; see
also De Dreu, Weingart, & Kwon, 2000).
The model of social value orientation and the dual-concern model have been
extended to include a third orientation (or concern), the pursuit of equality in
outcomes. It appears that individuals who tend to enhance joint outcomes (coop-
eration, problem-solving) are also strongly concerned with equality in outcomes,
whereas individuals who are more individualistic or competitive are not very
strongly concerned with equality in outcomes (Van Lange, 1999). The implica-
tion is that individuals who were concerned with joint outcomes might not act
cooperatively if they think that such actions create injustice, either to their own
disadvantage or the other’s disadvantage.
The issue of egalitarianism, in particular, has received considerable attention in
other disciplines. For example, under the label social preferences, one motive has
often been labeled as “inequity aversion” (e.g., Fehr & Schmidt, 1999), which has
been shown to gradually develop in children between three and eight years old
(e.g., Fehr, Bernhard, & Rockenbach, 2008). Some even thought the motive was
so universal that one could also show its existence in nonhuman primates, such as
chimpanzees (e.g., Brosnan, Schiff, & De Waal, 2005), but this has not been rep-
licated in a recent study (Brauer, Call, & Tomasello, 2009). However, there is now
good consensus among scientists working in different disciplines that egalitarian-
ism is a powerful motive among humans.
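The inequity-aversion motive cited above has a well-known formalization (Fehr & Schmidt, 1999): a player's utility equals her own payoff, minus a penalty weighted by α for disadvantageous inequality and a penalty weighted by β for advantageous inequality. A minimal two-player sketch follows; the parameter values are illustrative assumptions (the model itself requires β ≤ α and 0 ≤ β < 1).

```python
# Two-player Fehr-Schmidt (1999) inequity-aversion utility:
#   U_i = x_i - alpha * max(x_j - x_i, 0) - beta * max(x_i - x_j, 0)
# alpha penalizes disadvantageous inequality ("envy"),
# beta penalizes advantageous inequality ("guilt").
# The parameter values below are illustrative, not estimates.

def fehr_schmidt_utility(own, other, alpha=0.8, beta=0.4):
    envy = alpha * max(other - own, 0)    # penalty for being behind
    guilt = beta * max(own - other, 0)    # penalty for being ahead
    return own - envy - guilt

# Equal outcomes carry no penalty; unequal splits are discounted
# whether the inequality favors the self or the other.
print(fehr_schmidt_utility(5, 5))   # 5, no penalty
print(fehr_schmidt_utility(8, 2))   # 8 - 0.4*6 = 5.6
print(fehr_schmidt_utility(2, 8))   # 2 - 0.8*6 = -2.8
```

Note that with these parameters an inequity-averse player can prefer the equal split (5, 5) to the larger but unequal payoff in (8, 2), which is the sense in which egalitarianism competes with direct self-interest.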
There is even evidence that egalitarianism may also play an important role in
situations where material outcomes do not directly matter. For example, people
with a prosocial orientation value equality in receiving voice in decision-making
procedures, e.g., if the supervisor asks my opinion, he or she should be asking
other’s opinions as well (Van Prooijen, Stahl, Eek, & Van Lange, 2012). Also, issues
having to do with equality might often operate in the form of a heuristic, such that
people might use the rule of a “fair share” in a heuristic or even automatic manner
(De Dreu & Boles, 1998; Messick & Allison, 1990). In fact, such reasoning is supported by neuroscientific evidence showing that people have a strong tendency to punish (or not empathize with) other individuals who violate a norm of fairness by taking advantage of another's cooperation (e.g., De Quervain et al., 2004; Fehr & Gächter, 2002; Singer et al., 2006).
The conclusion is that people might approach social dilemmas with a broader
set of motives than the pursuit of direct self-interest. As described by interdepen-
dence theory, motives (or social preferences) such as the enhancement of joint
outcomes and equality in outcomes, or egalitarianism, might underlie cooperation
(or not), and motives such as enhancement of relative advantage (competition)
may underlie persistence in non-cooperation. Other motives, such as altruism or
aggression, seem somewhat more responsive in nature—toward another person’s
suffering in the case of altruism (e.g., Batson, 2011), and toward another person’s
norm violations in the case of aggression (e.g., Van Lange, De Cremer, Van Dijk, &
Van Vugt, 2007). These motives are important, but less so as a broad orientation
with which one approaches situations: People are unlikely to approach a situation
with an orientation to only enhance the other’s outcomes or to only harm the
other’s outcomes. But these altruistic and aggressive motives may well be activated
in response to the other’s needs or suffering (e.g., when the other has just recently
been abandoned by a partner) or in response to other’s provocation (e.g., the other
tried to exploit you).
How do the motives such as cooperation, egalitarianism, and competition
work? Do people use them in a completely conscious, thoughtful, or even calcu-
lating manner? This is possible. People sometimes may think about a decision after
having carefully evaluated the pros and cons of the available options, and a per-
son, for example, may reach the conclusion: We are in this together, he has always
been good to me, I will do my share now. But it is also possible that motives are
activated in relatively subtle ways that might escape awareness or consciousness.
Also, although people do differ in the probability with which these motives can
be activated across situations, it is also true that small cues in the situation, or
in how we come to perceive the other person in terms of personality, motives,
and identity, might exert pronounced effects on our behavior. Some theories have
suggested that social dilemmas may often call for some construal of appropriate-
ness, in which a person may ask the fundamental question, what does a person
like me do in a situation like this? (Weber, Kopelman, & Messick, 2004; see also
Dawes  & Messick, 2000). Norms are clearly an important source for transfor-
mations, in that most people want to act in ways that are consistent with broad
notions of appropriate and good behavior. But there may be many other sources as
well, such as identity concerns, reputational concerns, or empathy felt for others
in the group that might underlie the specific motives that people bring to bear on
social dilemmas—and that effectively cause behavior, and shape social interac-
tions (e.g., Foddy, Smithson, Schneider, & Hogg, 1999). To provide a framework
for these sources, and to provide a general framework for the influences on human
cooperation, we distinguish between structural, psychological, and dynamic influ-
ences—which we discuss next.

■ DEVELOPMENTS IN STRUCTURAL, PSYCHOLOGICAL, AND DYNAMIC INFLUENCES

As noted earlier, interdependence theory assumes that choice behavior in interdependent settings is a combined function of structural influences (e.g., features
of the decision and/or social situation), psychological influences (e.g., internal
motives, framing, recently primed schemas, or affect), and dynamic interac-
tion processes (e.g., how certain individuals respond to a tit-for-tat strategy, or
whether forgiveness or retaliation will predominate when others do not cooper-
ate). We adopt this framework for discussing some recent programs of research
on social dilemmas. We first discuss structural influences by reviewing research
on rewards and punishments, asymmetries between decision makers, and
uncertainty over various aspects of the social dilemma decision. In subsequent
sections, we review recent research on psychological influences (e.g., individual
differences) and dynamic interaction processes (e.g., reciprocal strategies).

■ STRUCTURAL INFLUENCES

Rewards, Punishment, and the Social Death Penalty. It has long been known that
the objective payoffs facing decision makers (i.e., the given payoff structure) can
have a large impact on cooperation in social dilemmas (e.g., Komorita & Parks,
1994; Rapoport, 1967). Those payoffs, in turn, may be determined by an experi-
menter (e.g., by presenting relatively low or high levels of fear and greed), or by
the actual outcomes afforded by the situation (e.g., the cost of contributing to a
public good versus the value of consuming the good). In terms of the situation,
another factor that has a large impact on the actual (or anticipated) payoffs in a
social dilemma is the presence of rewards for cooperation and punishment for
non-cooperation. Indeed, a recent meta-analysis showed that rewards and pun-
ishments both have moderate positive effects on cooperation in social dilemmas
(Balliet, Mulder, & Van Lange, 2011). Administering rewards and punishments
is costly, however, and may thereby create a “second order public good.” For
example, sanctions may be good for the collective, but individuals may decide
not to contribute money or effort for this purpose. In his classic work, Yamagishi
(1986a, 1986b, 1988b) showed that people are willing to make such contributions if
they share the goal of cooperation, but do not trust others to voluntarily cooper-
ate. More recently, Fehr and Gächter (2000) showed that people are also often
willing to engage in costly punishment, and may even prefer institutions that pro-
vide the possibility of such sanctions, perhaps in part because the possibility of
costly punishment can help to install a norm of cooperation (Gürerk et al., 2006).
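The logic of the "second-order public good" can be made concrete with a standard linear public goods game followed by a costly punishment stage, in the spirit of Fehr and Gächter (2000). All numeric parameters below (endowment, multiplier, punishment cost and impact) are illustrative assumptions, not values from the original experiments.

```python
# Illustrative linear public goods game with a costly punishment
# stage, in the spirit of Fehr & Gächter (2000). Parameter values
# are assumptions for illustration only.

ENDOWMENT = 20
MPCR = 0.4          # marginal per-capita return on contributions
PUNISH_COST = 1     # cost to the punisher per point assigned
PUNISH_IMPACT = 3   # payoff reduction per point for the target

def stage_payoffs(contributions):
    """First stage: keep what you don't contribute, share the pot."""
    pot = sum(contributions) * MPCR
    return [ENDOWMENT - c + pot for c in contributions]

def apply_punishment(payoffs, points):
    """Second stage: points[i][j] = punishment i assigns to j.
    Punishing is itself costly, hence a second-order public good."""
    result = list(payoffs)
    for i, row in enumerate(points):
        result[i] -= PUNISH_COST * sum(row)   # punisher pays a cost
        for j, p in enumerate(row):
            result[j] -= PUNISH_IMPACT * p    # target pays more
    return result

# Three cooperators and one free rider: free riding pays before
# punishment, but not after each cooperator spends 3 points on it.
pay = stage_payoffs([20, 20, 20, 0])
print(pay)  # [24.0, 24.0, 24.0, 44.0]
points = [[0, 0, 0, 3], [0, 0, 0, 3], [0, 0, 0, 3], [0, 0, 0, 0]]
print(apply_punishment(pay, points))  # [21.0, 21.0, 21.0, 17.0]
```

The example shows both sides of the dilemma discussed above: punishment removes the free rider's advantage, yet each punisher would have earned more by leaving the sanctioning to others.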
One of the most dramatic forms of punishment currently receiving attention
is ostracism or social exclusion. Research on ostracism and social exclusion reveals
that even the possibility of social exclusion is a powerful tool to increase coop-
eration, and that this threat might be more effective in small as opposed to large
groups (e.g., Cinyabuguma, Page, & Putterman, 2005; Kerr et al., 2009; Ouwerkerk,
Kerr, Gallucci, & Van Lange, 2005). Moreover, it appears that most people realize
that harmful pursuit of self-interest can lead to social punishments (see Gächter,
Herrmann, & Thöni, 2004). As noted by Kerr et al. (2009), in everyday life, small
groups may not often go as far as to socially exclude people, but the threat is often
there, especially in the form of social marginalization by paying less attention to
non-cooperative members or involving them in somewhat less important group
decisions. Consistent with this argument, there is evidence from anthropological
research in a tribal society in Northwest Kenya, which revealed that people may
often rely on other, low-cost activities first before they consider punishment. In particular, group members often initiate gossip and express mockery and public obloquy, often as part of slow-paced, low-cost strategies to build consensus and
muster enough support to eventually retaliate against the systematic wrongdoers
(Lienard, 2013).
Although punishments can be effective in promoting cooperation, some
adverse effects have been documented in recent research. For example, several
studies have shown that sanctions can decrease rather than increase coopera-
tion, especially if the sanctions are relatively low (e.g., Gneezy & Rustichini, 2004;
Mulder, Van Dijk, De Cremer, & Wilke, 2006; Tenbrunsel & Messick, 1999). One
explanation for these adverse effects is that punishments may undermine peo-
ple’s internal motivation to cooperate (cf. Deci, Koestner, & Ryan, 1999; Chen,
Pillutla, & Yao, 2009). According to Tenbrunsel and Messick (1999), sanctions can
also lead people to interpret the social dilemma as a business decision, as opposed
to an ethical decision, thus reducing cooperation.
Researchers are now also documenting that groups may at times punish coop-
erators, a (somewhat counterintuitive) phenomenon known as antisocial punish-
ment (Gächter & Herrmann, 2011; Herrmann, Thöni, & Gächter, 2008). In one
of the most recent papers on this topic, Parks and Stone (2010) found, across
several studies, that group members indicated a strong desire to expel another
group member who contributed a large amount to the provision of a public good
and later consumed little of the good (i.e., an unselfish member). Further, there is
also growing evidence suggesting that punishment might be most effective when
it is administered in a decentralized manner (by fellow members) rather than in a
centralized manner (by an authority), perhaps because fellow members contrib-
ute more strongly to cooperative norms (for some tentative evidence, see Balliet,
Mulder, & Van Lange, 2011; Nosenzo & Sefton, 2013). There is also recent evi-
dence indicating that, if they can, people are likely to hide acts of severe punish-
ment (and low contributions by themselves), while they display high contributions
by themselves (Rockenbach & Milinski, 2011). These tendencies might have an
important impact on the effectiveness of reward and punishment.
Asymmetries in Resources, Benefits, and Roles. Another popular topic in social
dilemmas is the role of asymmetries. In most early social dilemma studies, group
members were symmetric in that they each possessed an equal number of endow-
ments that they could contribute to a public good, and/or could each benefit
equally from public goods and collective resources. Moreover, group members
typically made their decisions simultaneously (rather than sequentially), and fre-
quently made their decision without reference to specific roles in a group (such as
whether one is a leader or a follower). While such symmetries help simplify the
dilemma, in real life, various types of asymmetry are more prevalent. Recognizing
this, researchers are now exploring how such asymmetries impact choice behavior
in social dilemmas.
For example, research has shown that those who are wealthier and those who
benefit more from a well-functioning public good behave more cooperatively (e.g.,
Marwell & Ames, 1979; Van Dijk & Wilke, 1993, 1994; but see Rapoport, 1988).
These differences partly reflect differences in the relative costs of contributing (e.g.,
contributing a certain amount of money may be less risky for the less wealthy), but
they may also connect to feelings of fairness (e.g., people consider it fair if the
wealthy contribute more than the less fortunate). Moreover, in step-level situations, asymmetries are often used as a tacit coordination device (e.g., by deciding to contribute in proportion to the number of endowments one possesses), yet this only works if people (tacitly) agree on which coordination rule to apply (Van Dijk,
De Kwaadsteniet, & De Cremer, 2009). And, of course, group members do not
always agree. Indeed, in some cases, people may have self-serving ideas on what
would be fair or reasonable, especially when people face multiple types of asym-
metry (Wade-Benzoni, Tenbrunsel, & Bazerman, 1996; Messick & Sentis, 1983).
For example, it has been shown that leaders take more of a common resource than
followers, in large part because leaders feel more entitled to behave selfishly (De
Cremer & Van Dijk, 2005), and may be especially likely to do so when there is a
high degree of variability among group members’ harvests (Stouten, De Cremer, &
Van Dijk, 2005). Thus, resource asymmetries can have a large impact on coopera-
tion in social dilemmas.
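To make such a proportionality rule concrete, consider a minimal sketch (the group size, endowments, and threshold are our own illustrative assumptions, not values from the studies cited above):

```python
def proportional_contributions(endowments, threshold):
    """Tacit coordination rule for a step-level public good: each member
    contributes in proportion to his or her endowment, so that the
    contributions sum exactly to the provision threshold."""
    total = sum(endowments)
    return [threshold * e / total for e in endowments]

# Hypothetical three-person group with asymmetric endowments (10, 20, and
# 30 coins) facing a provision threshold of 30: the rule asks wealthier
# members to carry proportionally more of the burden.
claims = proportional_contributions([10, 20, 30], threshold=30)
print(claims)
```

The rule provides the good only if every member tacitly applies the same rule; as the text notes, group members do not always agree on which rule that should be.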
Uncertainty. In most social dilemma experiments, the characteristics of the
dilemma have been known with certainty to all group members. For example,
in resource dilemmas, participants are usually informed about the exact size
of the resource, the exact replenishment rate, and the number of participants.
Similarly, in public goods dilemmas, participants are often aware of the exact
threshold required to provide the public good (or the function linking contribu-
tions to benefits in a continuous public good). In real life, however, such defining
Psychological Perspectives ■ 61

characteristics are not always clear, as people often face various types of “environ-
mental uncertainty” (e.g., How scarce is tuna exactly, and where exactly? What
is the replenishment rate for tuna? Or how big is the group? Au & Ngai, 2003;
Messick, Allison, & Samuelson, 1988; Suleiman & Rapoport, 1988).
Environmental uncertainty has been shown to reduce cooperation in various
social dilemmas (e.g., Budescu, Rapoport, & Suleiman, 1990; Gustafsson, Biel, &
Gärling, 1999), and several explanations have been offered to account for the
detrimental effects of uncertainty. For example, uncertainty may undermine effi-
cient coordination (De Kwaadsteniet, van Dijk, Wit, & de Cremer, 2006; Van Dijk
et al., 2009), lead people to be overly optimistic regarding the size of a resource
(Gustafsson et al., 1999), and/or provide a justification for non-cooperative behav-
ior (for a review, see Van Dijk, Wit, Wilke, & Budescu, 2004). Also, uncertainty
undermines cooperation when people believe their behavior is quite critical for
the realization of public goods, but when criticality is low, uncertainty matters less
or may even slightly promote cooperation (Chen, Au, & Komorita, 1996). Thus,
although it is not yet clear what mechanisms might explain the detrimental effects
of uncertainty, there is little doubt that uncertainty predictably undermines coop-
eration in various social dilemmas.
Noise. One final structural factor that has received attention in recent years is
the concept of noise, or discrepancies between intended and actual outcomes in
social interaction (cf. Bendor, Kramer, & Stout, 1991; Kollock, 1993; Van Lange,
Ouwerkerk, & Tazelaar, 2002). Presumably, cooperation is strongly challenged by
unintended errors, such as accidentally saying the wrong thing, or not responding
to an email because of a network breakdown, that may lead to misunderstand-
ing. However, surprisingly few studies have sought to capture noise, even though
noise underlies many situations in everyday life, and often gives rise to uncertainty
and misunderstanding. It may therefore challenge feelings of trust, and in turn,
cooperation.
In many experimental social dilemmas, there is a clear connection between
one’s intended level of cooperation and the actual level of cooperation commu-
nicated to one’s partner (e.g., if Partner A  decides to give Partner B six coins,
Partner B learns that Partner A gave six coins). However, in the real world, it is
not uncommon for a decision maker’s actual level cooperation to be (positively
or negatively) impacted by factors outside of his or her control (i.e., noise). While
positive noise is possible (i.e., cooperation is higher than intended), the majority
of research has focused on the detrimental effects of negative noise (i.e., when
cooperation is lower than intended). This research clearly has shown that negative
noise reduces cooperation in give-some games (Van Lange et al., 2002) and will-
ingness to manage a common resource responsibly, especially among prosocials
faced with a diminishing resource (Brucks & Van Lange, 2007). Moreover, the
adverse consequences of negative noise can spill over into subsequent dilemmas
that contain no noise (Brucks & Van Lange, 2008). While noise can clearly under-
mine cooperation, several studies also suggest it can be overcome, for example, if
the partner pursues a strategy that is slightly more generous than a strict tit-for-tat
strategy (e.g., tit-for-tat + 1; Klapwijk & Van Lange, 2009; Van Lange et al., 2002),
when people are given an opportunity to communicate (Tazelaar, Van Lange, &
Ouwerkerk, 2004), and when people are encouraged to be empathetic (Rumble,
Van Lange, & Parks, 2010).
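The gap between intended and actual cooperation can be captured in a few lines of code. The sketch below is purely illustrative; the noise probability and magnitude are our own assumptions, not parameters from any study cited above:

```python
import random

def actual_contribution(intended, p_noise=0.25, max_loss=3, rng=random):
    """With probability p_noise, negative noise lowers the contribution
    that actually reaches the partner below the one intended."""
    if rng.random() < p_noise:
        return max(0, intended - rng.randint(1, max_loss))
    return intended

random.seed(42)
# Partner A intends to give 6 coins on each of 10 trials; because of noise,
# Partner B occasionally observes fewer coins than A meant to give.
observed = [actual_contribution(6) for _ in range(10)]
print(observed)
```

From Partner B's side, only the observed amounts are visible, which is why noise so easily gives rise to misunderstanding and challenged trust.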
In summary, structural influences center on key differences in the interdependence structure of the social dilemma, such that outcomes linked to cooperation can be improved through reward and outcomes linked to non-cooperation can be worsened through punishment, with exclusion representing a strong form of punishment.
The effects of structural differences often go beyond material outcomes, and elicit
a rich psychology involving neuroscientific, cognitive, and emotional processes.
Asymmetries and roles are important determinants of behavior in social dilemmas,
yet understudied, especially when looking at social dilemmas in everyday life in
which asymmetries and roles seem the rule and not the exception. Uncertainty
and noise are also omnipresent in everyday life, and they may shape the psychol-
ogy in many ways, in that they may challenge trust, feelings of control, and some-
times may give rise to judgments and heuristics that are predictably inaccurate,
such as unrealistic optimism regarding the state of affairs (e.g., the size of the pool) or unrealistic pessimism regarding others' willingness to cooperate.

■ PSYCHOLOGICAL INFLUENCES

Advances have also been made in understanding how a variety of psychological variables impact cooperation in social dilemmas. In this section, we focus on
several categories of psychological variables, including individual differences, decision framing, priming, heuristics, and affect.
Social Value Orientation. A long history of social dilemma research makes clear
that people differ in fundamental ways in how they approach and interact in social
dilemmas. The personality variable that has received the lion’s share of the atten-
tion is social value orientation (SVO) (Messick & McClintock, 1968; Van Lange,
1999). Although SVO has long been recognized as a predictor of social dilemma
cognition and behavior (e.g., Kelley & Stahelski, 1970; Kuhlman & Marshello,
1975), researchers continue to gain deeper insights into its origin (e.g., Van Lange,
Otten, De Bruin, & Joireman, 1997), measurement (e.g., Eek & Gärling, 2006;
Murphy, Ackerman, & Handgraaf, 2011), and influence on cognition and behavior
in lab and field studies (e.g., Budescu, Au, & Chen, 1996; Van Lange & Kuhlman,
1994). As noted earlier, several comprehensive reviews of the SVO literature have
recently been published (e.g., Au & Kwong, 2004; Balliet, Parks, & Joireman, 2009;
Bogaert, Boone, & Declerck, 2008; Van Lange et al., 2007a). Nevertheless, a number
of key findings are worth discussing.
First, whereas researchers have often defined a prosocial value orientation in
terms of a desire to maximize joint outcomes, it is becoming increasingly clear
that prosocials are also very concerned with maximizing equality. For example, in
his integrative model of social value orientation, Van Lange (1999) suggests that
the desire to maximize joint gain and equality are positively correlated and that
prosocials pursue both goals (cf. De Cremer & Van Lange, 2001), while individual-
ists and competitors pursue neither. More recent evidence supports the claim that
equality in outcomes may well be the primary concern among prosocials (Eek &
Gärling, 2006).
Consistent with the argument that prosocials consider equality an important
principle, research shows that prosocials are more likely than individualists and
competitors to (a) use an “equal split is fair” rule in negotiation settings (De Dreu
& Boles, 1998), (b) respond with a high degree of anger to violations of equality,
regardless of how such violations impact their own outcomes, whereas individual-
ists and competitors only respond to violations of equality when such violations
harm their own outcomes (Stouten, De Cremer, & Van Dijk, 2005), and (c) show
a high degree of activity in the amygdala when evaluating unequal distributions of
outcomes (Haruno & Frith, 2009).
It is possible that the strong concern with egalitarianism underlies the “might
versus morality effect,” the tendency among prosocials to evaluate others’ behavior
in terms of good and bad, whereas individualists tend to evaluate others' behavior
more strongly in terms of strength versus weakness (Liebrand, Jansen, Rijken, &
Suhre, 1986; Sattler & Kerr, 1991), or intelligence versus unintelligence (Van Lange
& Kuhlman, 1994). For example, the abstract evaluations of morality and immorality may well point to specific judgments of others' fairness or lack of fairness.
Taken together, these findings suggest that a concern with equality is very
strongly linked to how prosocials approach social dilemmas, how they respond
to others who might violate equality, and what makes them distinctively differ-
ent from individualists and competitors. It is also plausible that because of their
concern with equality, prosocials might feel strongly about restoring justice in
the world (e.g., Joireman & Duell, 2005), and gravitate to political parties that
emphasize not only solidarity but also egalitarianism (e.g., Van Lange, Bekkers,
Chirumbolo, & Leone, 2012).
Second, researchers continue to find evidence for the ecological validity of
SVO. As an example, research has shown that, relative to individualists and com-
petitors, prosocials are more willing to donate to help the ill and the poor (but
not the local sports club) and volunteer as participants in psychology experiments
(e.g., McClintock & Allison, 1989; Van Lange, Schippers, & Balliet, 2011), exhibit
citizenship behavior in organizations (Nauta, De Dreu, & Van der Vaart, 2002),
engage in pro-environmental behavior (Cameron, Brown, & Chapman, 1998;
Joireman, Lasane, Bennett, Richards, & Solaimani, 2001), express stronger prefer-
ences for public transportation (Van Vugt, Meertens, & Van Lange, 1995), coor-
dinate (i.e., synchronize) their behavior with an interaction partner (Lumsden,
Miles, Richardson, Smith, & Macrae, 2012), and be perceived as cooperative based
on their non-verbal behavior (Shelley, Page, Rives, Yeagley, & Kuhlman, 2010).
There is also recent evidence indicating that social value orientation is relevant to
understanding forgiveness, or whether or not individuals are willing and able to
forgive other people’s offences (Balliet, Li, & Joireman, 2011). In short, since the
publication of Komorita and Parks’ (1994) book, an impressive number of studies
have been published supporting the real-world impact of SVO in dyads, groups,
and organizations, and at societal levels.
Trust. Another variable closely linked to cooperation is trust. According to one
of the most accepted definitions, trust is “a psychological state comprising the
intention to accept vulnerability based upon the positive expectations of the inten-
tions or behavior of another” (Rousseau, Sitkin, Burt, & Camerer, 1998, p. 395).
As such, trust involves vulnerability (the uncertainty and risk that come with the control another person has over one's outcomes) and positive expectations, which often imply a set of beliefs in the cooperative intentions or behavior of another person, or people in general (Rotter, 1967; see also Evans & Krueger, 2010; Kramer & Pittinsky, 2012). Although cooperation without trust is possible
(if challenging; Cook, Hardin, & Levi, 2005), in various societal contexts, but perhaps especially in informal groups, trust may be considered one of the key ingredients of cooperation (Dawes, 1980; Yamagishi, 2011).
Early work on trust in social dilemmas showed that those high in dispositional
trust were more likely than those low in trust to increase cooperation in response
to a partner’s stated intention to cooperate (Parks, Henager, & Scamahorn, 1996),
reduce consumption of a depleting common (Messick, Wilke, Brewer, Kramer,
Zemke, & Lui, 1983), and contribute to public goods (Parks, 1994; Yamagishi,
1986a). Since these initial studies, a number of important insights regarding trust
and cooperation have emerged.
First, research suggests that people who are not very trusting of others are
not necessarily non-cooperative in a motivational sense. Rather, they are simply
prone to believe that others will not cooperate, and this fear undermines their own (elementary) cooperation. However, when given the chance to contribute to a
sanctioning system that punishes non-cooperators, low-trusters are actually quite
cooperative. In other words, they appear quite willing to engage in instrumental cooperation by contributing to an outcome structure that makes it attractive for everybody, including those with selfish motives, to cooperate (or unattractive not to cooperate) (Yamagishi, 2011; for earlier evidence, see Yamagishi, 1988a, 1988b).
Second, trust matters more when people lack information about other people’s
intentions or behavior, or when they are faced with considerable uncertainty (see
Yamagishi, 2011). An interesting case in point is provided by Tazelaar et al. (2004)
who, as mentioned earlier, found that levels of cooperation are much lower when
people face a social dilemma with noise. More interestingly, they also found that
the detrimental effect of noise was more pronounced for people with low trust
than for people with high trust (Tazelaar et al., 2004, Study 2).
Third, based on a recent meta-analysis, it has become clear that trust matters
most when there is a high degree of conflict between one’s own and others’ out-
comes (Balliet & Van Lange, 2013a; cf. Parks & Hulbert, 1995). This finding makes
sense, as these are the situations involving the greatest degree of vulnerability, as
trusting others to act in the collective’s interest can be quite costly in such situa-
tions. Indeed, as noted earlier, trust is, in many ways, about the intention to accept
vulnerability based upon positive expectations of the intentions or behavior of
another person (Rousseau et al., 1998, see also Evans & Krueger, 2009) or member
of one’s group (Foddy, Platow, & Yamagishi, 2009).
Consideration of Future Consequences. A  final trait relevant to cooperation
in social dilemmas is the consideration of future consequences (CFC), defined
as “the extent to which people consider the potential distant outcomes of their
current behaviors and the extent to which they are influenced by these poten-
tial outcomes” (Strathman, Gleicher, Boninger, & Edwards, 1994, p.  743; cf.
Joireman, Shaffer, Balliet, & Strathman, 2012). Several studies have shown that
individuals high in CFC are more likely than those low in CFC to cooperate in
experimentally-created social dilemmas (e.g., Joireman, Posey, Truelove, & Parks,
2009; Kortenkamp & Moore, 2006), and real-world dilemmas, for example, by
engaging in pro-environmental behavior (e.g., Joireman, Lasane et  al., 2001;
Strathman et al., 1994); commuting by public transportation (e.g., Joireman, Van
Lange & Van Vugt, 2004); and supporting structural solutions to transportation
problems if the solution will reduce pollution (Joireman, Van Lange et al., 2001).
There is also some evidence suggesting that adopting a long-term orientation may
help groups in particular to overcome obstacles and initiate cooperation (Insko
et al., 1998).
Other Individual Differences. A  number of additional individual differences
have received attention in recent dilemmas research. This research has shown, for
example, that cooperation in social dilemmas is higher among those low in narcis-
sism (Campbell, Bush, & Brunell, 2005); low in dispositional envy (Parks, Rumble,
& Posey, 2002); low in extraversion and high in agreeableness (Koole, Jager, van den
Berg, Vlek, & Hofstee, 2001); high in intrinsic orientation (Sheldon & McGregor,
2000); or high in sensation seeking and self-monitoring (Boone, Brabander, & van
Witteloostuijn, 1999).
Decision Framing. The psychological “framing” of social dilemmas has also
received a fair amount of recent attention. For example, in general, emphasizing
the acquisitive aspect of the dilemma (“you can gain something from the task”)
leads people to be less cooperative than emphasizing the supportive aspect of
the dilemma (“you can contribute toward a common good”) (Kramer & Brewer,
1984). Similarly, cooperation is lower when decision makers view the social
dilemma as a business decision, rather than an ethical decision (Tenbrunsel &
Messick, 1999) or a social decision (Liberman, Samuels, & Ross, 2004; Pillutla &
Chen, 1999). Framing the dilemma as a public goods versus a commons can also
impact cooperation, but, as De Dreu and McCusker (1997) show, the direction of
such framing effects seems to depend on the instructions given and the decision
maker’s SVO. To summarize, cooperation rates are lower in give-some than in
take-some dilemmas when instructions to the dilemma emphasize individual gain
or decision-makers have an individualistic value orientation, whereas coopera-
tion is higher in give-some than in take-some games when instructions empha-
size collective outcomes or decision-makers have a prosocial value orientation.
In general, group members are more concerned with distributing outcomes equally
among group members in the take-some dilemma than in the give-some dilemma
(Van Dijk & Wilke, 1995, 2000). Finally, research has also shown that cooperation
decreases if people come to believe they have been doing better than expected,
and increases if people believe they have been doing worse than expected (Parks,
Sanna, & Posey, 2003).
Priming. Another question that has received some attention is whether it is
possible to induce cooperation through subtle cues and implicit messages. The
answer is generally “yes,” though the dynamics of priming cooperation are surpris-
ingly complex, and it is not clear whether they exert very strong effects. But some
effects are worth mentioning. For example, priming an interdependent mindset
effectively promotes cooperation (Utz, 2004a), but if the person has a prosocial
orientation, it is better to prime a self-mindset which can activate their existing
prosocial values (Utz, 2004b). Similarly, prosocials show increased cooperation
when encouraged to think about “smart” behavior, whereas such “smart” primes
will just make proselfs more selfish (Utz, Ouwerkerk, & Van Lange, 2004).
Heuristics. Like priming, the application of decision heuristics to social dilemma
choice has received relatively little attention. Yet the work on heuristics that has
been done is quite revealing. A small amount of this work has looked at the value
of heuristics for directing behavior in large-scale social dilemmas (Messick &
Liebrand, 1995; Parks & Komorita, 1997). The primary focus, however, has been
on an equality heuristic (or norm), under which people choose with an eye toward
making sure everyone has the same experience. In resource-consumption-type
tasks, the equality heuristic is oriented around everyone receiving the same
amount of the resource. People tend to anchor on it, and then adjust their choices
in a self-serving direction (Allison, McQueen, & Schaerfl, 1992; Allison & Messick,
1990; Roch, Lane, Samuelson, Allison, & Dent, 2000). When the dilemma involves
contribution, equality is oriented around everyone giving the same amount,
though the motivator of this heuristic is not constant—sometimes equality is
used to emphasize fairness, in that all should give, but at other times it is used to
emphasize efficiency, in that everybody giving the same amount is the easiest way
to achieve the goal (Stouten, De Cremer, & Van Dijk, 2005, 2007, 2009). Further
along this line, some theorists have argued that, in mixed-motive situations, most
decision heuristics are employed in order to maximize the likelihood of engaging
in fair behavior, on the assumption that coming across as fair conveys to others
that one is trustworthy (Lind, 2001).
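The anchor-and-adjust pattern behind the equality heuristic can be sketched as follows (the pool size and the size of the self-serving adjustment are our own illustrative assumptions, not estimates from the studies cited above):

```python
def equality_anchor_claim(pool, group_size, self_serving_adjust=0.10):
    """Anchor-and-adjust heuristic for resource consumption: anchor on an
    equal split of the common pool, then nudge one's own claim upward in
    a self-serving direction (the 10% adjustment is our own assumption)."""
    equal_share = pool / group_size
    return equal_share * (1 + self_serving_adjust)

# A hypothetical pool of 600 units shared by 6 people: the equal-split
# anchor is 100 units, and the self-serving adjustment pushes the claim above it.
print(equality_anchor_claim(pool=600, group_size=6))
```

If every member adjusts upward in this way, total claims exceed the pool, which is precisely how a seemingly fair anchor can still produce overharvesting.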
Affect. The influence of affect on decision-making is another topic of current
prominence within the field of social dilemmas. Here, research has focused on
both general mood states and specific emotions. Regarding mood, a clear pattern
that emerges is that a positive mood is not necessarily beneficial for encouraging
cooperation. For example, a positive mood can lead people to infer that they have
been sufficiently supportive of the group and they are now at liberty to choose
however they wish (e.g., Hertel & Fiedler, 1994). It may also be that a positive
mood leads people to focus more on internal states, which would heighten self-
ishness, while negative moods lead to an external focus, which would heighten
cooperation (Tan & Forgas, 2010). These findings are consistent with the emerg-
ing notion that happiness is not always a useful mood state to induce (Gruber,
Mauss, & Tamir, 2011) and raise the interesting possibility that it could be beneficial
to make social dilemma participants feel bad in some way about the situation.
Along these lines, it has been shown that those who feel badly about their choices
in a social dilemma will become more cooperative in subsequent dilemmas, even
if there is a considerable time lag between the initial and subsequent dilemmas
(Ketelaar & Au, 2003).
This immediately raises the question of whether it would matter which spe-
cific negative emotion was induced. For example, would it be irrelevant whether
a person felt mad or sad, so long as the feeling was negative? For that matter,
might there be other specific emotions that come into play when choosing in a
social dilemma? In fact, there is evidence that cooperation is connected with a
range of negative emotions including envy (Parks et al., 2002), guilt (e.g., Nelissen,
Dijker, & De Vries, 2007), shame (e.g., De Hooge, Breugelmans, & Zeelenberg,
2008), regret (Martinez, Zeelenberg, & Rijsman, 2011), anger and disappointment
(e.g., Wubben, De Cremer, & Van Dijk, 2009), with most acting as stimulators of
cooperation.
On a related note, a more recent line of research has focused on how coopera-
tion is impacted when one’s partner communicates certain emotions. For exam-
ple, research shows that when one’s partner is not really in a position to retaliate,
people are more cooperative when their partner appears happy, but if one’s partner
can retaliate, people are more cooperative when their partner expresses anger (Van
Dijk, Van Kleef, Steinel, & Van Beest, 2008). Such research shows that commu-
nicated emotions are often interpreted as a signal that informs us how another
person might respond to our non-cooperative and cooperative behavior (e.g., Van
Kleef, De Dreu, & Manstead, 2006). Indeed, research also shows that coopera-
tors are more likely than individualists and competitors to smile when discussing
even mundane aspects of their day, and that cooperators, individualists, and com-
petitors can be identified simply on the basis of their non-verbal behavior (Shelley
et al., 2010).
In summary, personality differences in social values, trust, consideration of
future consequences, framing, priming, heuristics, and affect represent a long
list of variables that are important to understanding the psychological processes
that are activated in social dilemmas. Presumably, personality influences might be
more stable over time and more generalizable across situations than other, more subtle influences, such as framing, priming, and affect. The stable and subtle
influences are both important, as they provide the bigger picture of what the social
dilemmas might challenge in different people, and how some of these challenges
might be influenced in implicit ways. The effect sizes of framing and especially
priming may sometimes be somewhat modest, yet the effects tend to be fairly
robust, and therefore they help us understand how cooperation could perhaps be
promoted in cost-effective ways, such as by just activating a particular psychologi-
cal state or mindset in the ways social dilemmas are communicated and presented.

■ DYNAMIC INTERACTION PROCESSES

In the preceding sections, we focused mainly on how features of the decision,


situation, and person influence the decision to cooperate at a given point in
time. While some of these variables could be viewed as having a dynamic com-
ponent (e.g., the impact of rewards and punishments on cooperation), most of
the variables were static in the sense that they did not typically concern how
a decision maker faced with a social dilemma actively responds to changes in
his or her environment over time. Sometimes this means that personality dif-
ferences are expressed in how people respond to others over time (e.g., how
an individualist might respond to a tit-for-tat strategy; Kuhlman & Marshello,
1975), or that personality differences become weaker and that most people
respond strongly to information about others’ behavior in a group as it unfolds
over time (e.g., the number of non-cooperators in a group, Chen & Bachrach,
2003). In the present section, we consider several promising lines of research
addressing on-going interaction processes within the context of social dilem-
mas by examining what happens after group members have made their choices,
learned of others' choices, and must make a subsequent choice. Specifically, we
consider recent work on reciprocal strategies, generosity in the context of mis-
understandings (or noise), locomotion, and support for structural solutions to
social dilemmas.
Direct Reciprocity. There is a long tradition of research on how different recipro-
cal strategies (e.g., unconditionally cooperative, unconditionally non-cooperative,
or conditionally cooperative) impact cooperation in social dilemmas (e.g.,
Komorita, Parks, & Hulbert, 1992). For a long time, there was much consensus that the
tit-for-tat (TFT) strategy (start cooperative, and then respond in kind to the part-
ner’s actions) is the most effective strategy in promoting cooperation—and as such
most effective in promoting joint welfare as well as one’s own welfare over the long
run (Axelrod, 1984). The effectiveness of the other’s strategy, however, has been
shown to depend on an individual’s social value orientation. For example, in their
classic work, Kuhlman and Marshello (1975) had cooperators, individualists, and
competitors play 30 trials of a two-person prisoner’s dilemma game against one of
three pre-programmed strategies (100% cooperative, TFT, 100% non-cooperative).
Kuhlman and Marshello found that cooperators showed high levels of cooperation, unless their partner always chose to behave non-cooperatively; competi-
tors showed low levels of cooperation, regardless of their partner’s strategy; and
individualists showed high levels of cooperation only when paired with a partner
pursuing a TFT strategy. For many years, these findings led to the conclusion that
(a) TFT was always the best strategy for eliciting cooperation, (b) an unconditionally cooperative strategy was sure to be exploited, and (c) individualists (but not competitors) could be taught to cooperate when they came to understand
it was in their own best interest.
Recent research, however, has called into question each of these conclusions.
Based on evidence obtained in simulation research on noise (Bendor et al., 1991;
Kollock, 1993; Nowak & Sigmund, 1992), there is a logic to adding a bit of gen-
erosity to TFT in order to cope effectively with noise in social dilemmas. And
subsequent empirical research also revealed that in situations involving negative
noise (i.e., when one’s cooperation level is not as high as it was intended), TFT
is actually less effective at eliciting cooperation than a more generous strategy in
which one responds in a slightly more cooperative manner than one’s partner did
on the previous trial (e.g., TFT+1, see Van Lange et al., 2002). One explanation
for this finding is that when one’s partner adopts a generous reciprocal strategy,
it encourages one to maintain the impression that one’s partner has benign inten-
tions and can be trusted (see also Klapwijk & Van Lange, 2009). Second, argu-
ing against the inevitable exploitation of unconditional cooperators, Weber and
Murnighan (2008) showed that consistent cooperators can effectively encourage
cooperation in social dilemmas, often ultimately promoting their own long-term
best interests. Third, whereas it was long assumed that competitors could not learn
to cooperate, Sheldon (1999) showed that, when given enough time, competitors
increase their level of cooperation in response to a tit-for-tat strategy. Finally,
Psychological Perspectives ■ 69

Parks and Rumble (2001) showed that the timing of rewards and punishments
matters: whereas prosocials are most likely to cooperate when their cooperation is
immediately reciprocated, competitors are most likely to cooperate when punish-
ment for non-cooperation is delayed. Thus, while quite effective, TFT should not
be regarded as the most effective strategy, because there are so many exceptions
now that have been observed, and that make sense from a psychological point
of view.
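The contrast between strict and generous reciprocity under noise can be illustrated with a small deterministic simulation (the contribution scale, starting level, and noise event are our own assumptions, not the design of any study cited here). Two strict TFT players keep echoing a single negative-noise event back and forth, whereas two slightly generous TFT+1 players climb back to full cooperation:

```python
def play(strategy_a, strategy_b, rounds=8, start=6, noise_round=2, noise=-2):
    """Two players repeatedly exchange contributions (0-9 coins). Each
    round, a player contributes what its strategy prescribes given the
    partner's last *observed* contribution; a single negative-noise event
    lowers what B observes of A's contribution in `noise_round`."""
    seen_by_a = seen_by_b = start   # last observed partner contribution
    history = []
    for r in range(rounds):
        a = strategy_a(seen_by_a)
        b = strategy_b(seen_by_b)
        observed_a = max(0, a + noise) if r == noise_round else a
        seen_by_b, seen_by_a = observed_a, b
        history.append((a, b))
    return history

tft = lambda seen: seen                      # strictly match the partner
tft_plus_1 = lambda seen: min(9, seen + 1)   # be one coin more generous

print(play(tft, tft)[-1])                # the echo of the noise event persists
print(play(tft_plus_1, tft_plus_1)[-1])  # cooperation climbs back to the maximum
```

The one-coin margin of generosity absorbs the noise instead of echoing it, which is the intuition behind the TFT+1 findings discussed above.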
Moreover, even from a purely logical perspective, adding generosity can help overcome the detrimental effects of noise. And there is also evidence that another strategy might actually outperform TFT in many social
dilemma situations. In particular, a strategy called Win-Stay-Lose-Shift (WSLS, or Win-Stay-Lose-Change) is defined by a very simple rule, though different from
TFT. The rule for WSLS is: when I do well, I repeat the choice I have made; and
when I do not do well, I shift and make a different choice. In practice, this means
that a non-cooperative choice is repeated if the other made a cooperative choice
(and I made a non-cooperative choice), and that a cooperative choice is repeated
if both persons made a cooperative choice.
A change to cooperation occurs when both persons did not cooperate, and a change to non-cooperation occurs when the other made a non-cooperative choice and I made a
cooperative choice. Several simulation studies revealed that, across several social
dilemma tasks, WSLS outperformed TFT (Nowak & Sigmund, 1993; see also
Messick & Liebrand, 1995). It probably does pay off to apply it with some flexibility. For example, it is probably unwise to always change to cooperation after each
and every interaction in which both did not cooperate; it is probably wiser to make
that change with some probability (e.g., Gächter, & Herrmann, 2009; Nowak  &
Highfield, 2011). There is not much empirical research examining the strengths
and limitations of WSLS among real people, as most research on this strategy has
used computer simulations. But some have suggested that WSLS is quite a com-
mon, basic strategy, one that may be observed in humans, as well as in nonhuman
populations. After all, it seems quite natural to change only after outcomes are disappointing, and not to change when outcomes make you happy.
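The contrast between the two rules can be sketched in a few lines of code. This is an illustrative simulation, not a reproduction of the cited studies: the prisoner's dilemma payoffs (3, 0, 5, 1) and the 5 percent error rate are assumptions chosen for illustration.

```python
import random

def tft(my_prev, other_prev):
    # Tit-for-tat: cooperate first, then copy the partner's last move.
    return other_prev if other_prev is not None else True

def wsls(my_prev, other_prev):
    # Win-Stay-Lose-Shift: repeat the last choice after a good outcome
    # (partner cooperated), switch after a bad one (partner defected).
    if other_prev is None:
        return True
    return my_prev if other_prev else not my_prev

def play(strategy_a, strategy_b, rounds=10000, noise=0.05, seed=1):
    """Repeated prisoner's dilemma with implementation noise: each
    intended move is flipped with probability `noise`."""
    rng = random.Random(seed)
    payoff = {(True, True): 3, (True, False): 0,
              (False, True): 5, (False, False): 1}
    a_prev = b_prev = None
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(a_prev, b_prev)
        b = strategy_b(b_prev, a_prev)
        if rng.random() < noise:
            a = not a   # noise: the choice is misimplemented
        if rng.random() < noise:
            b = not b
        score_a += payoff[(a, b)]
        score_b += payoff[(b, a)]
        a_prev, b_prev = a, b
    return score_a / rounds, score_b / rounds

print("TFT vs TFT:  ", play(tft, tft))
print("WSLS vs WSLS:", play(wsls, wsls))
```

In runs like this, a WSLS pair typically restores mutual cooperation within two rounds of an error, whereas a TFT pair drifts through long echoes of alternating and mutual defection, yielding a lower average payoff.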
In sum, recent research has shed new light on how reciprocal strategies can
promote cooperation. TFT was believed most effective, but that view has now
been revisited. In situations involving noise, some generosity (added to reciproc-
ity) is quite effective; and there is evidence in support of the superior qualities of
Win-Stay-Lose-Shift, a very basic strategy that many people may spontaneously
apply in some form in real life.
Indirect Reciprocity. Recent research has also explored how indirect reciprocity
can encourage cooperation. Whereas the effects of direct reciprocity are observed
in repeated encounters between two individuals, cooperation in larger settings may
be promoted by indirect reciprocity. According to this view, cooperation may be
advantageous because we tend to help people who have helped others in the past.
As noted earlier, and briefly illustrated by the experiment of Wedekind and Milinski
(2000), indirect reciprocity models build on reputation effects by assuming that
people may gain a positive reputation if they cooperate and a negative reputation if
they do not. Indeed, people are more likely to cooperate with others who donated
70 ■ Perspectives to Social Dilemmas

to a charity fund like UNICEF (Milinski, Semmann, & Krambeck, 2002). Notably,
people also seem to be well aware of these positive effects, as they are more willing
to donate and cooperate if they feel their reputation will be known by others than
if they feel others are not aware of their contributions (e.g., Griskevicius, Tybur, &
van den Bergh, 2010). There is even evidence indicating that subtle cues of being
watched—by means of an image of a pair of eyes—can enhance donations (Bateson et al., 2006), which suggests the subtle power of reputational mechanisms.
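The logic of indirect reciprocity can be illustrated with a toy simulation (a hedged sketch, not a model from the cited papers; the group size, payoffs, and the "standing" rule below are assumptions). Discriminating agents help only recipients in good standing; unconditional defectors quickly lose their good standing and stop receiving help.

```python
import random

def indirect_reciprocity(n_agents=60, rounds=20000, benefit=2.0,
                         cost=1.0, seed=2):
    """Toy model of indirect reciprocity with a 'standing' rule: in each
    round a random donor meets a random recipient. Discriminators help
    only recipients in good standing; refusing a bad-standing recipient
    is 'justified' and does not hurt the donor's own standing."""
    rng = random.Random(seed)
    half = n_agents // 2
    strategy = ['disc'] * half + ['defect'] * (n_agents - half)
    good = [True] * n_agents        # public reputation (standing)
    payoff = [0.0] * n_agents
    for _ in range(rounds):
        donor, recipient = rng.sample(range(n_agents), 2)
        if strategy[donor] == 'disc':
            if good[recipient]:
                payoff[donor] -= cost
                payoff[recipient] += benefit
                good[donor] = True
            # refusing a bad-standing recipient leaves standing intact
        else:
            good[donor] = False     # unconditional defection is observed
    disc = [p for p, s in zip(payoff, strategy) if s == 'disc']
    dfct = [p for p, s in zip(payoff, strategy) if s == 'defect']
    return sum(disc) / len(disc), sum(dfct) / len(dfct)

disc_avg, defect_avg = indirect_reciprocity()
# Reputation channels help toward cooperators: discriminators end up
# earning more, on average, than unconditional defectors.
print(disc_avg, defect_avg)
```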
Locomotion. Typically, experimental research on multi-trial social dilemmas
has explored how people respond to a given partner or group. However, in the
real world, one is not inevitably stuck with certain partners. One can exit relation-
ships and groups, and enter others. Recognizing exit and selection (and exclusion)
of new partners as viable options in social dilemmas, a number of recent studies
have begun to study locomotion and changes in group composition. For example,
Van Lange and Visser (1999) showed that people minimize interdependence with
others who have exploited them, and that competitors minimize interdependence
with others who pursue TFT, which is understandable, as competitors cannot effec-
tively achieve greater (relative) outcomes with a partner pursuing TFT. Similarly, it
is clear that conflict within a group may induce people to leave their group, even-
tually leading to group fissions (Hart & Van Vugt, 2006). The conflict may come
from failure to establish cooperation in the group or a decline in cooperation as
cooperative members exit (Yamagishi, 1988a; Van Lange & Visser, 1999; see also
De Cremer & Van Dijk, 2011), or from dissatisfaction with autocratic leadership
(Van Vugt, Jepson, Hart, & De Cremer, 2004). Conversely, prospects of coopera-
tion may encourage individuals to enter groups, for example, when sanctions of
non-cooperation promote the expectation of cooperation (see Gürerk et al., 2006).
Communication. Frequently, communication is conceptualized as a psycho-
logical variable. After all, communication is often thought of in terms of verbal
or nonverbal messages that are characterized by a fair amount of interpretation
and subjectivity. In the social dilemma literature, various forms of communication
have been compared. Classic research on social dilemmas has shown that communication can effectively promote cooperation (see Balliet, 2010; Komorita & Parks,
1994; for classic studies, see Caldwell, 1976). But it is not just talk that explains
why communication might promote cooperation, even though face-to-face inter-
action by itself may be helpful. Simply talking about issues that are not in any way
relevant to the social dilemma does not seem to promote cooperation, as demonstrated in one of the most classic studies of its kind (Dawes, McTavish, & Shaklee,
1977). Some researchers have suggested and found that, at least in single-trial
social dilemmas, promising (to make a cooperative choice) may be quite effective,
especially if all group members make such a promise (Orbell, van de Kragt, & Dawes,
1988; see also Kerr & Kaufman-Gilliland, 1994). Subsequent research supported
this line of reasoning, in that “communication-with-pledge” promotes coopera-
tion, because it promotes a sense of group identity and a belief that one’s choice
matters (i.e., that one’s choice is believed to be critical; Chen, 1996).
These findings are important not only because they inform us about the psychology of decision-making in social dilemmas, but also because they might help us explain the dynamics of cooperation. Moreover, in real-life social dilemmas, group
members may actually decide whether they favor a structure in which they openly
communicate their intended choices. For example, as noted by Chen (1996), in
work groups, managers could ask members to make a pledge of time and effort, and then
propose several binding pledge systems, especially those that are group-based such
that they create a common fate and normative standards for everybody involved
(Kerr & Kaufman-Gilliland, 1994; Kerr, Garst, Lewandowski, & Harris, 1997). In that sense,
it is interesting that even virtual groups, or the mere imagination of communi-
cation, can promote cooperation (Meleady, Hopthrow, & Crisp, 2013). Such evi-
dence might suggest that the effects of internalized norms are more powerful than
is often assumed. And perhaps the mechanisms through which communication
may promote cooperation might be quite subtle, involving norms and identity.
Indeed, communication may strengthen a sense of identity, but it also promotes a
norm of (generalized) reciprocity, which is why it might speak to similar mecha-
nisms as those that dynamically underlie the effects of direct and indirect reci-
procity. There is indeed evidence suggesting that people might fairly automatically
apply a social exchange heuristic, which prescribes direct or generalized forms of
reciprocity (Yamagishi, Terai, Kiyonari, Mifune, & Kanazawa, 2007): “Do what you think another person would do in a situation like this,” or some rule or heuristic closely related to it. And there is recent evidence suggesting that the mere imagining of group discussion can promote cooperation (Meleady, Hopthrow, &
Crisp, 2013). In this research, participants engaged in a guided simulation of the
progressive steps required to reach cooperative consensus within a group discus-
sion of a social dilemma. It awaits future research, but it is possible that imagined
group discussion activates a generalized reciprocity norm that effectively pro-
motes cooperation. The good news about this is that perhaps cooperation can be
enhanced in quite a cost-effective manner, requiring no face-to-face discussion or
other time-consuming meetings (see Meleady et al., 2013).
Support for Structural Solutions. One final issue being addressed concerns struc-
tural solutions to social dilemmas which involve changing the decision-making
authority (e.g., by electing a leader), the rules for accessing the common resource,
or the incentive structure facing decision makers (e.g., by making the cooperative
response more attractive). In the lab, the most heavily studied structural solution
has been the election of a leader. Many early studies showed that people were more
likely to elect a leader when the group had failed to achieve optimal outcomes
in a social dilemma (e.g., underprovided a public good, or overused a common
resource; Messick et al., 1983; Van Vugt & De Cremer, 1999). Additional research
shows that, after a group has failed, willingness to elect a leader tends to be higher
in commons dilemmas (as opposed to public goods dilemmas) (e.g., Van
Dijk, Wilke, & Wit, 2003); when collective failure is believed to be the result of task
difficulty (as opposed to greed) (Samuelson, 1991); and among those with a pro-
social (vs. a proself) orientation (De Cremer, 2000; Samuelson, 1993). Research
comparing different leadership alternatives shows that group members are more
likely to support democratic (versus autocratic) leaders, and to stay in groups led
by democratic (versus autocratic) leaders (Van Vugt et al., 2004).
Beyond the lab, a number of field studies have also explored support for struc-
tural solutions, many rooted in Samuelson’s (1993) multiattribute evaluation
model. Samuelson proposed that decision makers evaluate structural solutions
in terms of efficiency, self-interest, fairness, and freedom (autonomy), and that
the importance of the four dimensions should vary as a function of individual
differences (e.g., in social value orientation or consideration of future conse-
quences). Samuelson’s model has received support in several field studies explor-
ing support for improvements in public transportation (e.g., Joireman, Van
Lange et al., 2001). People are generally more likely to accept structural changes if they perceive them as efficient (cost-effective), not too costly to the self (self-interest), and fair (e.g., all people contribute in a fair manner).
Freedom is important because people generally value autonomy in making deci-
sions and possibilities for self-management. For example, research on the first
carpool lane in Europe provided some tentative evidence that people did not approve of it because it was considered very expensive, and somewhat unfair because some people were simply unable to carpool (because of their work schedules, or the location of work or home) and therefore could not share in the benefit that carpoolers enjoyed: a lane without congestion at rush hour (Van Vugt et al., 1996).
Finally, as noted earlier, research on structural solutions to social dilemmas
has been greatly advanced by Ostrom and her colleagues who have studied the
development of institutions designed to manage common pool resources (e.g.,
Ostrom, 1990; Ostrom, Gardner & Walker, 2003). The broad conclusion reached
by Ostrom and her colleagues is that local management of small communities, and
the enhancement and maintenance of trust in these communities, is essential for
both the communities and the broader collective. Or, as Ostrom and Ahn (2008)
stated:  “The very condition for a successful market economy and democracy is
that a vast number of people relate in a trustworthy manner when dealing with
others to achieve collective actions of various scales.” (p. 24).
In summary, it is one thing to predict and explain how people might behave
in relatively static situations, such as social dilemmas without repeated interac-
tion. It is quite another thing to predict and explain dynamic interaction patterns.
While classic research has emphasized reciprocity, such as tit-for-tat, as a func-
tional strategy promoting cooperative interaction, more recent research suggests
that it is functional to add a bit of generosity. One reason is that generosity helps
to maintain or promote trust, which in turn is a key ingredient to cooperation (see
also Balliet & Van Lange, 2013a; Kramer, 1999). Further, when social dilemmas
do not elicit sufficient cooperation, we see that people exhibit a greater willingness
to support several solutions, including the option of communication with binding elements (such as a pledge), and the structural solution of electing a leader. In
doing so, they tend to support democratic leadership over autocratic leadership.
Together, feelings of trust, criticality, and “we-ness” (such as the feeling “we are in
this together”) seem essential for small communities to productively approach and
resolve social dilemmas. They may not only underlie cooperation, but also explain
how (and why) participants contribute to dynamic interaction patterns and struc-
tural changes in social dilemmas, and why such instrumental contributions are
effective in promoting cooperation.
■ BASIC ISSUES

Does altruism exist in social dilemmas?


There has been a fair amount of debate about the existence of altruism both
within and beyond psychology. Much of the controversy, however, deals with def-
initions of altruism which, across disciplines, range from behavioral definitions
(i.e., acts of costly helping are considered altruistic; Fehr & Gächter, 2002)  to
definitions that seek to exclude any possible mechanism that may be activated
by some consideration that may not be free of self-interest (e.g., Cialdini et  al.,
1997). If we limit our discussion, for parsimony’s sake, to research on coopera-
tion, competition, and resource allocation measures, then we see that altruism
is not very prominent. For example, in assessments of interpersonal orientations in a specific resource allocation task, the percentage of people who can be classified as altruistic (i.e., assigning no weight to their own outcomes while
assigning substantial weight to other’s outcomes) is close to zero (Liebrand & Van
Run, 1985). Similarly, when people who play a single-choice prisoner’s dilemma
observe that the other makes a non-cooperative choice, the percentage of coop-
eration drops to 5% or less (Van Lange, 1999).
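The classification logic behind such assessments can be sketched as weights placed on own and other's outcomes (a hedged illustration: the weights and allocation options below are hypothetical, loosely inspired by decomposed-game measures, not the actual items used in the cited studies).

```python
# Each orientation is a pair of weights (w_self, w_other); a person is
# assumed to pick the allocation that maximizes the weighted sum.
orientations = {
    "altruistic":      (0.0, 1.0),   # no weight on own outcomes
    "cooperative":     (1.0, 1.0),
    "individualistic": (1.0, 0.0),
    "competitive":     (1.0, -1.0),  # values getting ahead of the other
}
allocations = [(500, 500), (560, 300), (500, 100)]  # (self, other) points

def preferred(w_self, w_other):
    # Return the allocation with the highest weighted value.
    return max(allocations, key=lambda a: w_self * a[0] + w_other * a[1])

for name, (ws, wo) in orientations.items():
    print(name, "->", preferred(ws, wo))
```

With these illustrative options, the cooperative and altruistic weights favor the equal split, the individualistic weights favor the allocation with the highest own outcome, and the competitive weights favor the allocation with the largest advantage over the other.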
But this evidence should not be taken to mean that altruism does not exist. More likely, altruism simply does not emerge under the (impersonal) circumstances that are common in this tradition of research. People usually face a decision-making
task, be it a social dilemma task, a resource allocation task, or a negotiation task, in
which they are interdependent with a relative stranger; there is no history of social
interaction or other form of relationship. Accordingly, there is no basis for feel-
ings of interpersonal attachment, sympathy, or relational commitment. We suggest
that when such feelings are activated, altruism may very well exist. In fact, relative
strangers (even animals) can elicit empathy even in young children (e.g., four-year-olds, whose perspective-taking abilities are still developing), as we know from some
movies (e.g., the killing of Bambi’s mother in the movie Bambi).
As a case in point, Batson and Ahmad (2001) had participants play a single-trial
prisoner’s dilemma in which the other made the first choice. Before the social
dilemma task, the other shared some personal information that her romantic part-
ner had ended the relationship with her, and that she found it hard to think about
anything else. Batson and Ahmad compared three conditions, one of which was a
high-empathy condition in which participants were asked to imagine and adopt the
other person’s perspective. The other conditions were either a low-empathy condi-
tion, in which participants were instructed to take an objective perspective to infor-
mation shared by the other, or a condition in which no personal information was
shared. After these instructions, participants were informed that the other made a
non-cooperative choice. Batson and Ahmad found that nearly half of the participants
(45%) in the high-empathy condition made a cooperative choice, while the percent-
ages in the other low-empathy and control conditions were very low, as shown in
earlier research (less than 5%, as in Van Lange, 1999). Hence, this study provides
an interesting demonstration of the power of empathy in activating choices that can
be understood in terms of altruism, in that high-empathy participants presumably
assigned substantial weight to the outcomes for the other at the expense of their
own outcomes (for further evidence, see Batson, 2011; for further illustrations, see
Caporael, Dawes, Orbell, & van de Kragt, 1989; Van Lange, 2008).

Are some people really competitive in social dilemmas?

There is also strong evidence in support of competition as an orientation quite
distinct from self-interest. As noted earlier, the work by Messick and McClintock
(1968) has inspired considerable research that reveals not only that cooperative
orientations but also competitive orientations may underlie social interactions.
For example, Kuhlman and Marshello (1975) demonstrated that individuals with cooperative orientations do not tend to exploit unconditional cooperators, that is, others who cooperate in every interaction regardless of the individual's own behavior.
They also showed that individuals with competitive orientations do not exhibit
cooperation, even if cooperative behavior, rather than non-cooperative behavior,
best serves their own personal outcomes (e.g., the tendency to compete with
tit-for-tat partners, yielding bad outcomes; see Van Lange & Visser, 1999).
The importance of competition is even more directly shown in research on
a decision-making task that represents a conflict between, on the one hand, cooperation and individualism (Option A) and, on the other hand, competition (Option B). Hence, the only reason to choose Option B is to receive better outcomes (or less bad outcomes) than the other, even though one could do better for oneself by choosing Option A. Research using this so-called Maximizing
Difference Game has revealed that quite a few people choose the competitive alter-
native; it is also of some interest to note that among some (young) age groups com-
petitive tendencies tend to be even more pronounced (McClintock & Moskowitz,
1976). Specifically, among very young children (three years old), the individual-
istic orientation dominates, after which competition becomes more pronounced
(4–5 years), which is then followed by the cooperative orientation (6–7 years). This
pattern is largely consistent with recent research by Fehr et al. (2008) on the devel-
opment of egalitarianism. There is also evidence that by the age of eight years,
the level of prosociality is not much different from adults, suggesting that many
of these developments take place before adolescence (Crone, Will, Overgaauw, &
Güroğlu, 2013). The interesting conclusion is that competition is part of the devel-
opment in childhood, and tends to precede the development of prosociality. It is
not clear whether the growth in egalitarianism and prosociality constrains com-
petition, and whether competition may be necessary for other motives to develop.
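The structure of such a task can be made concrete with hypothetical numbers (the payoffs below are illustrative assumptions, not values from the original studies).

```python
# Hypothetical Maximizing Difference Game: each option yields
# (outcome for self, outcome for other). Numbers are illustrative only.
options = {
    "A (individualistic)": (6, 5),   # best absolute outcome for self
    "B (competitive)":     (4, 1),   # worse for self, but biggest lead
}

for name, (own, other) in options.items():
    print(f"{name}: own = {own}, relative advantage = {own - other}")

# Option A maximizes one's own outcome (6 > 4); Option B maximizes the
# difference (4 - 1 = 3 versus 6 - 5 = 1). Choosing B can therefore only
# reflect a concern for relative standing, not self-interest.
```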
One might wonder whether it is the aversion to getting behind or the temptation of getting ahead that underlies such competition. In an elegant study by
Messick and Thorngate (1967), it was shown that the former tendency (aversive
competition) is much more pronounced than the latter tendency (appetitive
competition)—in other words, not losing seems a stronger motivation than win-
ning. This early research was later extended, and generalized, by Kahneman and
Tversky’s (1979) gain and loss frames in their prospect theory, and by Higgins’
(1998) distinction between prevention and promotion focus as two distinct
self-regulatory systems. Recent research has also revealed that under conditions
of uncertainty, competition may be especially pronounced, presumably because
people really want to make sure that they do not get less than the other (Poppe
& Valkenberg, 2003). Thus, there is little doubt that competition is an important
orientation that needs to be carefully distinguished from self-interest.

What about aggression?

Aggression has received little attention in research on social dilemmas. It is
interesting to note that, especially in comparison to the orientation of altru-
ism, much research on aggression focuses on genetic and biological factors.
Examples are not only twin studies, but also studies focusing on associations of
aggression with hormonal activity, such as variations in levels of testosterone.
Generally, this body of research supports the view that aggressiveness is sub-
stantially influenced by genetic factors and biological make-up (e.g., Vierikko,
Pulkkinen, Kaprio, & Rose, 2006). For example, there is research showing
that manipulations of testosterone levels, administered as part of hormone treatment for gender transition, influence the proclivity to anger. Specifically, there is an increase in tendencies toward anger among individuals who transition from woman to man, and a decrease in such tendencies among individuals who transition from man to woman (Van Goozen, Frijda, & Van de Poll, 1995).
Importantly, the correlation between aggressiveness and testosterone is espe-
cially pronounced for scale items assessing aggressiveness in response to provoca-
tion (Olweus, 1979), suggesting that aggression needs to be considered in terms of
anger that is interpersonally activated. Indeed, the methods typically used to study
aggression consist of examining aggressiveness in response to provocation by
another person. Hence, anger and aggressiveness should be easily aroused by oth-
ers who fail to exhibit cooperative behavior. This interpersonal basis of aggression
is important, and suggests several interesting phenomena. For example, it may
well be that tendencies toward aggression are most pronounced among those who
do not expect others to behave selfishly. As a case in point, Kelley and Stahelski
(1970) provide some evidence for what they referred to as overassimilation, the
tendency for cooperative individuals (at least, some of them) to eventually behave even more non-cooperatively than the fairly non-cooperative
partner with whom they interact (see also Liebrand et al., 1986). More generally,
aggression may be activated by others’ non-cooperative behavior, in dyads and
groups, by violations of justice (broadly conceived), and perhaps by misperceiving
or misunderstanding another person’s intentions. Thus, it is surprising that aggres-
sion has received so little attention in social dilemmas, because—unless research
suggests otherwise—aggression seems an important orientation in social dilem-
mas, albeit one that seems activated primarily by the behavior of others.

Can other-regarding motives produce bad outcomes for the collective?

Clearly, motives that one would label as “other-regarding,” such as coopera-
tion, egalitarianism, and the most prosocial of all, altruism, are generally quite
predictive of cooperative behavior in social dilemmas, and therefore of pro-
ducing good outcomes for the collective. However, this is not always true.
Other-regarding motives can cause bad effects for the collective. For example,
there is research indicating that feelings of empathy could promote choices that
benefit one particular individual in a group—at the expense of outcomes for
the entire group (Batson et  al., 1995). As such, empathy can sometimes form
a threat to cooperative interaction, just as selfishness can. That is, feelings of
empathy may lead one to provide tremendous support to one particular person,
thereby neglecting the well-being of the collective. For example, as noted by
Batson et al. (1995, p. 621), an executive may retain an ineffective employee for
whom he or she feels compassion, to the detriment of the organization. Another example is that parents may sometimes be so supportive of their children that it seriously harms the collective interest (e.g., making no attempt to stop them from making noise in public).
Also, a strong concern with collective well-being—cooperation—almost always
supports actions that are collectively desirable. There is, however, one very important
exception to this rule, namely when social dilemmas take the form of multi-layered
social dilemmas, in which “cooperation” is good for one’s own group, but bad for
another group—and bad for the entire collectivity (see Bornstein, 1992). Consider,
for example, the soldier fighting for his or her own country, but killing soldiers
from the other country, thereby causing bad effects for the entire collective. It is
this type of “cooperative action,” often supported and respected by in-group members, that threatens collective well-being (for evidence, see Insko & Schopler,
1998; Wildschut & Insko, 2007; Wit & Kerr, 2002). In that sense, cooperation can
be a risky orientation, especially because intergroup conflicts, once started, are
often very hard to resolve.
How about egalitarianism? Often equality supports collectively desirable actions.
In fact, sometimes donations, volunteering, and related forms of helping may be
rooted in a sense of fairness: to enhance the situation of those who are worse off
than oneself. Indeed, campaigns aimed at fostering helping behavior sometimes
emphasize not only empathy but also feelings of justice—does it feel right that we
do not stop the suffering? Also, when a majority of people make a cooperative choice (e.g., not overusing water), policy makers could make that important fact salient, because getting more than others for the wrong reasons simply does not feel good, and it is very difficult to justify to oneself or to others.
Despite its benefits, equality can also entail risks to collective outcomes. First,
if individuals are primarily concerned with equality, they may show an aversion
to being taken advantage of, and end up following “bad apples” in the group who
choose not to cooperate (e.g., Kerr et al., 2009; Rutte & Wilke, 1992). Second, a
strong concern with equality may harm collective outcomes because people do not want to invest unilaterally in situations in which investments cannot be made simultaneously. For example, building exchange systems often takes time and unilateral action; an example is the exchange of expertise among colleagues. If one colleague, a statistics expert, is very seriously concerned about equality, then he or she may not want to invest much time in conducting complex, time-consuming analyses if there is some uncertainty about whether the other (an expert in writing) is
not going to reciprocate. And finally, sometimes it may not be wise to emphasize
equality in relationships, groups, and organizations. For example, in marital rela-
tionships, a discussion about equality may well be an indicator that a couple is
on its way to divorce, perhaps because such discussions can undermine genuine
other-regarding motives (e.g., responding to the partner’s needs; Clark & Mills,
1993). Similarly, in groups and organizations, communicating equality may lead to
social book-keeping that may undermine organizational citizenship behavior, the
more spontaneous forms of helping colleagues that are not really part of one’s job
but are nonetheless essential to the group or organization.

And can competition or aggression promote good collective outcomes?

Conversely, motives that are associated with non-cooperation may sometimes
be important instruments for cooperation. Earlier, we have seen that it is quite
a challenge to promote cooperation in people with competitive orientations. At
the same time, competition can have beneficial effects in multi-layered social
dilemmas that we discussed above for cooperation. When there are two or more
well-defined groups who comprise the entire collective, then sometimes competition between the groups helps the entire collective. The competition should then concern something desirable. For example, in the Netherlands, there is a contest between cities aiming for the award “Cleanest City.” As another example,
two departments at a university may do better (yielding greater research out-
put and enhanced teaching) if the university provides extra resources for only
excellent departments. In fact, organizations often use competition as a means
to promote functioning. Sometimes such practices take explicit forms, when,
for example, competitive reward structures are being implemented:  your evalu-
ations and salary depend on your performance relative to others’ performances.
But even when not done explicitly, the performances of others typically matter
in most organizations, because many jobs lack objective criteria, and so manag-
ers will often rely on social standards for evaluating individual performance.
Just as a competitive orientation can sometimes yield positive outcomes for
the collective, so can aggression serve a useful function in groups. As noted earlier, individuals are likely to act aggressively toward another person in a dyad, or other people in a group, who fail to cooperate. As such, aggression may serve to regulate fairness and promote cooperation. For example, people may use aggression
as an instrument for encouraging cooperation by exhibiting instrumental coop-
eration or altruistic punishment. Instrumental cooperation refers to all behaviors
by which individuals contribute to the quality of a system that rewards coopera-
tors or punishes non-cooperators (Yamagishi, 1986a; see also Kiyonari & Barclay,
2008). An example is a contribution to the maintenance of sanctioning systems
such as monitoring devices needed for publicizing or punishing non-cooperators.
Altruistic punishment refers to all behaviors by which individuals are willing to
engage in costly acts by which non-cooperators are directly punished (Fehr &
Gächter, 2002; see also Egas & Riedl, 2008). Another form of aggression that indi-
viduals and groups may use is social exclusion or forms of marginalization by
which non-cooperators are in some way punished, in that they are no longer part
of the group. This could mean that they no longer benefit from group outcomes,
but we suspect that the social aspects of even very subtle forms of exclusion can
yield powerful effects on the non-cooperators’ feelings and behavior. Indeed, there
is evidence that very subtle forms of social exclusion may activate those regions
of the brain that are associated with physical pain (Eisenberger, Lieberman, &
Williams, 2003). In short, while aggression is often undesirable, it may at times
serve a vital function in maintaining cooperation within the larger group.
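The logic of altruistic punishment, and the second-order free-rider problem that makes instrumental cooperation (supporting the sanctioning system itself) necessary, can be sketched numerically. All parameters below are illustrative assumptions, not values from the cited experiments.

```python
def public_goods_round(contributions, punishers, endowment=10.0,
                       multiplier=2.0, fine=6.0, fee=1.0):
    """One round of a public goods game with altruistic punishment:
    contributions are multiplied and shared equally; afterwards each
    punisher pays a fee to impose a fine on every non-contributor."""
    n = len(contributions)
    share = sum(contributions) * multiplier / n
    payoffs = [endowment - c + share for c in contributions]
    for i, c in enumerate(contributions):
        if c == 0:                        # free rider detected
            for j in punishers:
                payoffs[i] -= fine        # punished for not contributing
                payoffs[j] -= fee         # punishing is personally costly
    return payoffs

# Four players: three contribute their full endowment, one free rides;
# players 0 and 1 are willing to punish.
pay = public_goods_round([10, 10, 10, 0], punishers=[0, 1])
# With punishment, free riding no longer pays; but the cooperator who
# does not punish (player 2) does best of all, which is the second-order
# free-rider problem that sanctioning systems must solve.
print(pay)
```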

■ SUMMARY AND CONCLUSIONS

This chapter began by arguing that people go beyond self-interest in a variety
of ways, and that several motives or social preferences might be activated in the
context of social dilemmas. We discussed the motives of cooperation, egalitarianism, and competition as a set of broader motives, and noted that altruism and aggression might be specific responses to a person’s needs (or suffering) or violations of norms. Next, we provided a brief overview of past research on social dilemmas and discussed the psychological variables that might underlie cooperation in social dilemmas. In doing so, we offered a framework that distinguishes between structural, psychological, and dynamic (interactional) influences. We closed by addressing some basic issues, such as whether altruism exists, whether other-regarding motives can produce bad effects for the group, and whether self-regarding motives can produce good effects for the group. While they may
seem controversial, the answers to these questions were relatively straightfor-
ward. Altruism does seem to exist and can be activated, and so does competi-
tion; other-regarding motives typically produce good outcomes for the group,
but can also produce bad outcomes; and motives such as competition and
aggression typically produce bad outcomes for the group, but can also produce
good outcomes.
5 Cultural Perspectives

In Lamalara, Indonesia, the sun rises while eight men prepare two small boats
for sea. They are going to hunt whales to provide food for their community.
Each looks at the others, knowing they might have benefited from a few more hours
of sleep and from spending the day helping their immediate family members,
hoping others would go to sea in search of whales. But each also understands
that if everyone behaved that way, then the community would go without this
vital resource.
At the same time, a small rural village in India turns on its only generator pro-
viding electricity for the community. This provides enough electricity for each
household to use a single light in their homes. However, if each household uses
more than that, then the generator fails and the community is left without electric-
ity. One household decides it is late in the evening and that it should be fine to turn
on a fan, since other households likely have their lights off. But fans require more
electricity than lights, and too many households have decided to do the same. As the
household turns on its fan, the generator fails and the community loses electricity.
A university student in the United States decides to complete her portion of a
group assignment for class. She has many tempting alternative options for spend-
ing her time, but she understands that her efforts will prove valuable to the group
project. She spends a good portion of her day working on the project.
At any single moment, people all over the world are being faced with social
dilemmas. Although such dilemmas can vary substantially in terms of the behav-
iors (e.g., whale hunting, using electricity, and doing homework) and outcomes (e.g.,
the provision of food, conserved energy, and a good grade), these situations all
share the same underlying structure of social interdependence—that is, they all
involve a conflict of interest experienced by the persons facing the dilemma. The
point is that social dilemmas are universal phenomena and no human living on
any part of the planet is free from facing such dilemmas.
This simple fact has raised several interesting issues about the study of human
cooperation. First, if social dilemmas are a persistent and universal problem that
humans face in their social environments, and assuming that these problems have
been a recurring theme in our ancestral past, then it may be that humans have
evolved a set of species-typical adaptations to deal with these problems. As we
have seen in Chapter 3 with its discussion of the evolutionary issues, this is a very
likely possibility, even though there may be additional processes at work that affect
cooperation. A second issue, however, is that if humans all over the globe face
social dilemmas, we might see that different groups of humans possess different
strategies for approaching these dilemmas. Although cross-societal variability by
no means excludes the possibility that humans possess adaptations for behaviors
to deal with social dilemmas, the study of cross-societal variation in cooperation
has been primarily approached with a focus on the proximate social environment
and psychological mechanisms. (Daniel Balliet had primary responsibility for the
preparation of this chapter.)
This chapter explores what we know about cross-societal variation in coopera-
tion in social dilemmas. It will become clear that there is substantial variability
in how people think and behave in social dilemmas around the world—from
small-scale hunter-gatherer societies (Henrich et al., 2001) to large-scale
industrialized societies (Herrmann et al., 2008). We start our discussion by draw-
ing attention to research that establishes this variability across ethnicities and
societies. Although it is important to note this variability, it is more interesting to
explore explanations for this variability in cooperation. To explain this variability
between different ethnicities and societies, social scientists have emphasized the
importance of culture.
Culture is a broad concept, so it will benefit us to take a moment and discuss
what this concept entails. Before doing so, we should note that various traditions
or lines of research can be subsumed under this multifaceted concept. Also,
comparisons among cultures in studies on trust and
cooperation go back almost to the very beginning of research on social dilemmas
and related games (e.g., Kelley et al., 1970; Madsen & Shapira, 1970). We will dis-
cuss some of that older literature, but our focus will be on the more recent research
on culture that has tended to compare several societies, and that builds on classic
research by exploring the key explanations of differences among societies—and by
exploring whether and why some of the classic factors that might promote coop-
eration (see Chapters 3 and 4) are equally effective in different cultures. In light
of the increased focus on culture in the social dilemma literature, especially in
the last decade, we should note that this particular body of research and theory is
still relatively young. Many findings provide preliminary, rather than conclusive,
answers to the important yet intricate questions about culture and human coop-
eration. At the same time, important insights have been generated and substantial
progress has been made, so we feel that it is timely and important to provide an
overview of this growing topic of research.
After an attempt to clarify the concept of culture, we will discuss ideas and
research about how culture relates to cooperation. We will begin by discussing one
promising line of research on the informal sanctioning of social norms. Beyond
social norms of cooperation, however, we will also discuss efforts in cross-cultural
psychology that emphasize cultural differences in values and beliefs, and address
their importance for understanding cultural variation in cooperation. We will end
this chapter by discussing some implications of globalization for cooperation in
global-scale social dilemmas.

■ COOPERATION ACROSS ETHNIC GROUPS AND SOCIETIES

The most basic question to answer in this line of research is:  Does coopera-
tion vary across ethnicities and societies? For example, do we observe differing
amounts of cooperation across societies? Do we see very little cooperation in
some societies, while other societies contain an abundance of cooperation? Only
recently has research begun to provide strong answers to such a basic question.

Pioneering Research on Ethnic and Societal Variation in Cooperation

Some of the earliest work testing for variation in cooperation between ethnic
groups and societies compared levels of cooperation observed across Mexican
Americans, Caucasian Americans, and Mexicans. This research found that
Caucasian American children were generally less cooperative than Mexican American
children (Avellar & Kagan, 1976; Kagan, Zahn, & Gealy, 1977; Knight & Kagan,
1977a, 1977b; McClintock, 1974), but that both Caucasian and Mexican American
children tended to be less cooperative than Mexican children (Kagan & Madsen,
1971, 1972; Knight & Kagan, 1977a; Madsen, 1969; Madsen & Shapira, 1970).
Interestingly, third-generation Mexican American children have been found to
be less cooperative than second-generation Mexican American children
(Kagan & Knight, 1979), suggesting that the extent of a family’s integration into
a society affects the socialization of their children’s social behavior. Indeed, for
the most part, this research has interpreted the above-mentioned findings by
referring to different socialization processes that resulted in the social learning
of different norms of cooperation and competition. Yet, some limitations of
this earlier work are that it overemphasized comparing ethnic groups within the
same country, often did not compare multiple cultural groups simultaneously,
and did not test hypotheses about the observed differences in cooperation.
A landmark study conducted by Toda, Shinotsuka, McClintock, and Stech
(1978) overcame two limitations of earlier work by examining samples from dif-
ferent societies and comparing more than two societies at once. Specifically, they
compared the competitive behaviors of children playing a dyadic maximizing
difference game for 100 trials in five different countries: Belgium, Greece, Japan,
Mexico, and the United States. Moreover, they examined children in each country
at three different ages (school grades 2, 4, and 6). They found some evidence for
cross-societal variation in competition, with the Japanese being the most competi-
tive and Belgians being the least competitive, but they also found some substantial
similarities between countries, such as an increase in competition across trials and
with age. Toda and colleagues noted that socialization differences may account
for the societal differences in competition observed among the younger children,
but that each culture displayed a substantial increase in competition over time,
suggesting that children across cultures are socialized to possess similar values for
competition.
Subsequent research has emphasized the strategy of comparing adults’ behavior
in social dilemma experiments conducted in only two modern societies. For
example, Hemesath (1994) compared the behavior of a sample of Americans
and Russians in a series of one-shot prisoner’s dilemmas. He found that while
Americans tended to cooperate on 51% of the trials, the Russians cooperated on
72% of the trials, suggesting that Russians are more cooperative than Americans.
Americans have also been compared to samples from several other countries in
terms of their cooperation. Parks and Vu (1994) compared rates of cooperation in a public
goods game and resource dilemma between American and South Vietnamese
participants. They found that the Vietnamese displayed much more cooperation
than Americans—and that the Vietnamese even continued to cooperate when
partnered with a pre-programmed strategy that always defected. Americans
have also displayed less cooperation compared to Chinese samples (Domino,
1992; Hemesath & Pomponio, 1998) and a Czech Republic sample (Anderson,
DiTraglia, & Gerlach, 2011).
Yet, although Americans have been found to cooperate less than the Chinese,
Czechs, Russians, and Vietnamese, research has also found that Americans coop-
erate at similar levels compared to Dutch participants (e.g., Liebrand & Van Run,
1985) and Colombian participants (Carpenter & Cardenas, 2011). Vietnamese
have also been shown to be about as cooperative as Thai participants during a
public goods game (Carpenter, Daniere, & Takahashi, 2004). Although in one study
Americans were less cooperative than Vietnamese, and in another study
the Vietnamese were found to be as cooperative as Thai participants, we are
unable to claim from this evidence that Americans are less cooperative than Thai
participants.
What can we conclude from these studies? They illustrate, as do some older
studies, that ethnic groups within the same country and individuals from differ-
ent countries can differ quite strongly in their responses to similar social dilemma
tasks. This by itself has important practical value for research conducted in various
laboratories around the world. It is possible that in some countries, social dilemma
tasks are approached with a different mindset than in other countries—and such
mindsets may also be relevant to how people from different countries might
respond to certain features of the experiment and/or experimental manipulations.
At the same time, these studies do not provide much evidence for underlying vari-
ables that might explain these differences. It is difficult to draw specific conclu-
sions about cross-societal variation in cooperation from the results of studies that
compare two countries, because each specific study has unique features that can
affect levels of cooperation. Also, differences in the experimental instructions, the
procedure for recruitment of participants, or language and translation issues, to
name just a few, might explain some of this variation in cooperation across eth-
nicities and societies.

Contemporary Illustrations in Small-Scale Societies

Fortunately, there have also been some programs of research that sought to
address complex issues in cross-national research, especially by using highly
standardized experimental protocols. In one such program of research, Henrich
and colleagues (2001) compared 15 small-scale societies in their generosity dur-
ing a dictator game. Their sample of societies came from South America, Africa,
Mongolia, Papua New Guinea, and Indonesia, and included hunter-gatherers, hor-
ticulturalists, nomadic herding groups, and agriculturalists. They found consider-
able variation across societies in their generosity towards strangers—measured
by the amount allocated to an anonymous other in a dictator game. In some
societies, such as the Lamalara, people were inclined to give half their endowment
to an anonymous partner, but in other societies, such as the Machiguenga in Peru,
people were inclined to offer only a quarter of their endowment to the receiver.
This suggests that these societies vary
substantially in their tendencies to show generosity to others in their community.
Additionally, Henrich and colleagues (2001) observed behavior in a pub-
lic goods game across seven of these small-scale societies and found that these
societies also varied in their contributions to the public good. In this context, the
Machiguenga demonstrated a modal response of zero contributions, yet other
societies, such as the Orma in Kenya, tended to give almost 60% of their endowment
to the public good. Moreover, many of the participants in some societies recog-
nized that the public goods game was a context similar to making provisions for the
community, indicating that these experimental procedures may have some exter-
nal validity in relation to real public goods faced in these societies.
Taken together, this work gives rise to two conclusions. First, it demonstrates
that cooperation amongst unrelated persons varies substantially across small-scale
societies. Second, the findings suggest that, on balance, none of the societies
conformed to a classic economic perspective, which predicts that people will
contribute nothing to another person in the dictator game. In no society was there
a dominant tendency to make zero contributions; instead, many of the societies
gave between 40 and 50% of their endowments to the receiver. Likewise, the public
good dilemmas revealed much higher levels of cooperation, and lower levels of
free-riding, than this perspective would predict.

Contemporary Illustrations in Large-Scale Societies

Although prior research making single comparisons between two different
large-scale societies indicates that these societies may vary in terms of their coop-
eration (e.g., Domino, 1992; Hemesath, 1994; Hemesath & Pomponio, 1998; Parks
& Vu, 1994), more recent work has ambitiously sampled cooperation rates across
a broad range of large-scale industrialized societies. As a recent case in point,
Cardenas, Chong, and Ñopo (2008) tested the amount of cooperation in a public
goods dilemma conducted in cities in six Latin American countries: Argentina,
Colombia, Costa Rica, Peru, Uruguay, and Venezuela.
They had participants, in groups ranging from 12 to 39 seated in the same room,
who were endowed with a single token that could be allocated either to their individual
fund or to the group fund (the public good). A token allocated to the individual
fund was worth about 10 USD, but a token placed in the group fund was worth one
USD for each person in the group. They found significant variation in the willing-
ness to contribute to the public good. Although only 12% of participants (out of
567) from Bogotá were willing to contribute to the public good, in Caracas about
47% of participants (out of 488) were willing to make contributions. The contribu-
tions in the remaining cities showed much similarity, ranging from 22% to 25% of
the participants choosing to contribute to the public good. Therefore, Cardenas and
colleagues (2008) found evidence for both similarity and differences between Latin
American countries in their willingness to make contributions to public goods.
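The incentive structure of this one-token task can be sketched in a few lines (a minimal illustration of the payoffs just described; the function name and the example group size are our own choices):

```python
def payoff(keeps_token, total_group_contributions):
    """Earnings for one player in the one-token public goods task
    described above: a kept token is worth about 10 USD to its owner,
    and every token in the group fund pays 1 USD to each group member,
    contributors included."""
    private = 10 if keeps_token else 0
    return private + 1 * total_group_contributions

# In a 20-person group, keeping one's token while the other 19 contribute
# earns 10 + 19 = 29, whereas contributing earns 0 + 20 = 20, so defection
# always pays more for the individual; yet universal cooperation (20 each)
# beats universal defection (10 each), which is what makes it a dilemma.
```

Note that with any group larger than ten, the group as a whole earns most when everyone contributes, even though each individual earns most by keeping the token.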
Is there cross-societal variation in cooperation among large-scale societies
beyond Latin American countries? Herrmann et al. (2008) conducted public goods
experiments across 16 different large-scale societies, including Denmark, Greece,
Turkey, Saudi Arabia, Russia, the United States, and others. A unique aspect of
their design is that in each society they conducted public goods experiments both
with and without the opportunity to punish others in the public goods dilemma.
Overall, they found that societies differed in cooperation in both conditions, as well
as across conditions. For example, in the no-punishment condition, participants
from Denmark, Switzerland, and the United States demonstrated higher amounts
of cooperation compared to countries such as Turkey and Australia. When pun-
ishment opportunities were present, Denmark, Switzerland, and the United States
were all more cooperative than Greece, Saudi Arabia, and Turkey. Also, while pun-
ishment opportunities tended to increase cooperation rates compared to the con-
dition with no-punishment opportunities for some countries (e.g., China, South
Korea, and the United Kingdom), in other countries the opportunity for punish-
ment did not increase cooperation (e.g., Greece, Oman, and Turkey). This work
clearly demonstrates that large-scale modern societies do differ in terms of their
tendencies to cooperate with unrelated, anonymous strangers in laboratory social
dilemmas. This work also suggests that, alongside such differences, some societies
closely resemble one another in their levels of cooperation. Moreover,
these findings complement other behavioral experiments that find cross-societal
variation in bargaining behavior (Oosterbeek, Sloof, & Van de Kuilen, 2004; Roth,
Prasnikar, Okuno-Fujiwara, & Zamir, 1991).
Taken together, the studies by Cardenas et al. (2008) and Herrmann et al. (2008)
use standardized public good dilemma tasks and involve a large number of soci-
eties. These qualities are important because they enabled these ambitious projects
to provide convincing evidence that both meaningful similarity and meaningful
variation in cooperation exist across large-scale societies. Moreover, the work by
Herrmann and colleagues (2008) also makes the important point that societies
share similarities and reveal differences in their responses to the availability of
punishment. We will return to this specific finding later, when discussing how
culture might help us understand why some variables might impact cooperation.
For the remainder of the chapter, we will report research aimed at explaining
this variation. Perhaps the construct most frequently invoked to explain ethnic and
societal variation in cooperation is culture (e.g., Boyd & Richerson, 2009; Gächter,
Herrmann, & Thöni, 2010; Henrich & Henrich, 2006; Kopelman, 2008; Weber &
Morris, 2010). Let’s take a moment to consider what culture is.

■ CULTURE AND CROSS-SOCIETAL VARIATION IN COOPERATION

Many perspectives on culture exist. Yet, a common theme that runs across
these conceptualizations is that culture simultaneously exists outside and within
the minds of the individuals in a collective. That is to say, culture can involve
the institutions within societies, the products people use on a daily basis, the
recurring interactions with specific others, and the patterns of behavior
observed across interactions. These features of our social environment encour-
age individuals to adopt similar patterns of beliefs, values, personalities, and
behaviors. A recent definition is that “culture consists of explicit and implicit
patterns of historically derived and selected ideas and their embodiment in
institutions, practices, and artefacts; cultural patterns may on the one hand be
considered as products of action, and on the other hand as conditioning elements
of further actions” (Adams & Markus, 2004, p. 341).
Certainly, this is a complicated definition for a complex construct, and it
should be no surprise that the study of the relation between culture and coopera-
tion is multifaceted, with many different researchers emphasizing different aspects
of culture. Researchers have also employed diverse methodologies in exploring the
relation between culture and cooperation. One long-standing approach involves
ethnographies of different cultures (e.g., Mauss, 1990; Sahlins, 1972). Our empha-
sis, however, will be on a different approach. In keeping with the focus of this
book, we will review relatively recent research using experimental games across
cultures to understand cross-societal variation in cooperation. That is, like the
studies that we reviewed earlier, we will focus on how researchers have attempted
to explain the variation that exists across societies in terms of their cooperation
displayed in experimental social dilemmas.
Although the research reported above clearly demonstrates the existence of
cross-societal variation, much of that work does not directly address the role of
culture in explaining this variation. It is important to examine whether culture
shapes how societies differ in their responses to some key
experimental procedures. Moreover, it is important to test how some classic cul-
tural variables may explain cross-societal differences in terms of both cooperation
and responses to key experimental manipulations. Both of these latter issues are
important, because even though some cultures may share similar levels of overall
cooperation, one of those cultures may benefit more or less from introducing a
specific strategy for encouraging cooperation.
To illustrate this point, let us return to the study by Herrmann and colleagues
(2008). Recall that these researchers found that punishments can increase coop-
eration in certain societies, but that punishments failed to increase cooperation in
other societies (also see Balliet et al., 2011). Much research is needed to understand
how certain solutions are more or less effective at enhancing
cooperation in specific societies. While two societies may display similar overall levels
of cooperation, a strategy for increasing cooperation may work in
one society but not the other. For example, while one society may be able
to sustain cooperation through informal sanctions delivered by peers of similar
status, in other societies an authority figure with the ability
to monitor and sanction the behavior of others may be necessary to encourage
cooperation. Such questions await future research. Thus, a more nuanced form
of cross-cultural work is needed to understand the many possible routes to coop-
eration across societies. Such work will gain direction from a theoretical
understanding of why societies differ in their cooperation. Certainly,
identifying that variation exists across societies is merely
identifying the problem to be solved. A theory of culture may prove instrumental in
explaining such variation across societies.
If culture matters for cooperation, then we may expect different cultural
groupings of societies to differ systematically in their amounts of cooperation.
Gächter, Herrmann, and Thöni (2010) grouped the sixteen societies used in the
research reported above (Herrmann et al., 2008) into six different cultural group-
ings according to Inglehart and Baker’s (2000) model of cultural values. They did
so because national boundaries do not always demarcate cultural boundaries, and
certain regions of the world share socio-historical backgrounds that produce
cultural similarities and differences cutting across national borders. For
example, the shared history of several eastern European countries may result in
a shared cultural background that makes their cultures more similar and sys-
tematically different from western European countries that may themselves
share certain aspects of culture. Using Inglehart and Baker’s (2000) cross-societal
model of cultural values enabled them to group the societies into six “cultural
regions” labeled English-speaking; Protestant Europe; Orthodox/ex-Communist;
Southern Europe; Arabic-speaking; and Confucian. This novel approach enabled
the researchers to examine whether there was more between-cultural than
within-cultural variation in cooperation.
Gächter and colleagues (2010) found that there were greater amounts of
between-cultural variation than within-cultural variation in cooperation. What
this means is that there were greater differences in cooperation between cultural
groups—for example, between Protestant Europe and Southern Europe, com-
pared to within the cultural groups, such as Turkey and Greece (both considered
Southern Europe). Moreover, they found that the between-cultural differences in
cooperation were even greater when opportunities to punish others were present
in the social dilemma, compared to when punishment opportunities were absent.
Specifically, they found that when punishment was absent, individual variation
in cooperation was larger than between-cultural variation. However, during
punishment conditions, between-cultural variation was relatively more important
in understanding cooperation. Thus, this research clearly shows that cultural
differences are important to our understanding of human cooperation, and in par-
ticular how cooperation can be promoted in various societies.
To summarize, we have briefly conveyed that the concept of culture is complex
and that culture may involve both features of the environment and the content
and processes of the mind. Two goals for research on ethnic and societal variation
in cooperation are to examine (a) how cultural differences affect cooperation
and (b) how cultural differences determine what strategies may or may not work
in promoting cooperation within specific ethnicities or societies. Indeed, as we
have described above, research finds that there is greater variation in coopera-
tion among cultural regions compared to differences observed amongst countries
within the same cultural region—and this difference in between-culture and
within-culture variability in cooperation is even more pronounced in experiments
that allow participants to punish the behavior of their group members. We now
consider what aspects of culture have been studied in relation to cooperation and
address how this research approaches these two objectives. Specifically, we will
focus on research on three defining features of culture and their relation to coop-
eration: social norms, values, and beliefs.

■ SOCIAL NORMS AND COOPERATION

Some of the cultural variation around the world may be due to the different social
norms that emerge among local groups of people, which are used to guide and
evaluate behavior (Ostrom, 2000). Social norms are expectations about which
behaviors are obligatory, permitted, or forbidden in a specific context. If people
violate these expectations they tend to be formally
or informally sanctioned by others. Social norms can concern many different
behaviors: standards about how people should dress for work, how they should
greet others in public, and how to eat food, as well as rules about whether people
should recycle, litter, or pay taxes.
One broad distinction may be made between (a) conventional norms and
(b) moral norms (Turiel, 1983). Conventional norms pertain to those aspects of
social behavior that help people coordinate by developing rules for specific behav-
iors and interactions. For example, rules about how to dress at work, greet others,
and eat food may be considered conventional norms. Moral norms, however, per-
tain to those aspects of social behavior that help people develop patterns of social
exchange that involve some conflict of interest (e.g., recycling, littering, and paying
taxes). The norm to “do unto others what others do to you,” the norm of generalized
reciprocity (e.g., Gouldner, 1960), is a clear example of a moral norm. Likewise,
to contribute one’s fair share to an important group outcome, a norm of fairness, is
also an example of a moral norm. Moral norms are especially important to social
dilemmas and may provide solutions to social interactions characterized by a con-
flict between self and collective interests. Conventional norms, on the other hand,
are relatively more important to situations that involve coordination problems,
compared to social dilemmas (Kelley et al., 2003).
Although cultures differ in terms of both types of norms, we will pay attention
to moral norms in particular because they are most relevant to social dilemmas.
Specifically, we focus on norms that are fueled by expectations about how people
should behave toward others during social dilemmas (Henrich & Henrich, 2006).
These social norms for cooperation would then be maintained by informal (and
sometimes formal) social punishment of norm violators (e.g., Boyd & Richerson,
2009; Mathew & Boyd, 2011). Moreover, over time, groups may become more
different from each other as a result of imitation of successful group members,
assimilation of migrants into existing cultures, and between-group competition (Boyd
& Richerson, 2009; Gintis, 2003). Such social norms may explain why in some soci-
eties people simultaneously expect others to make contributions to public goods
and demonstrate a willingness to punish others who violate those expectations,
while people in other societies tend not to possess such expectations for cooperation.
Recall that Henrich and colleagues (2006) found considerable variation in the
dictator game across societies. In some small-scale societies, people were quite
reluctant to give much of their endowment at all to an unrelated anonymous
stranger, while in other societies people were willing to share about half their
endowment. Henrich and colleagues (2006) took these data to indicate cultural
variation in social norms for generosity towards unrelated others in a community.
If this were the case, then people in these societies should be more willing to punish
non-cooperative behavior, since these social norms for generosity are likely main-
tained by people monitoring, evaluating, and sanctioning people’s norm-violating
behavior. To examine this idea, Henrich and colleagues compared behavior across
societies in a dictator game to punishment behavior in a third-party punishment
game. In the third-party punishment game, two people play a dictator game and
a third-party participant is asked whether they would be willing to pay a cost (10% of
their own endowment) to punish the dictator (reducing the amount the dictator kept by 30%).
Prior to learning about the amount delivered in the dictator game, the third-party
participant is asked if they are willing to pay to punish the dictator for each pos-
sible offer in the game.
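Under these parameters, the payoff consequences of a single punishment decision can be sketched as follows. The stake and endowment values are hypothetical illustrations, not the stakes used by Henrich and colleagues (2006), and the sketch follows one reading of the parameters in the text: punishment removes 30% of the amount the dictator kept.

```python
# Payoff sketch of the third-party punishment game. Assumptions (not from
# the original study): punishing costs the third party 10% of their own
# endowment and removes 30% of the amount the dictator kept; stake and
# endowment values below are illustrative.

def tpp_payoffs(stake, offer, tp_endowment, punish):
    """Return (dictator, recipient, third_party) payoffs."""
    kept = stake - offer
    dictator = kept
    recipient = offer
    third_party = tp_endowment
    if punish:
        third_party -= 0.10 * tp_endowment  # cost borne by the punisher
        dictator -= 0.30 * kept             # sanction on the dictator
    return dictator, recipient, third_party

# A dictator who keeps 90 of a 100-point stake loses 27 to punishment,
# while the third party pays 5 of a 50-point endowment to impose it.
print(tpp_payoffs(100, 10, 50, punish=True))   # (63.0, 10, 45.0)
```

Because third parties commit to a decision for every possible offer before learning the actual one, a complete strategy in this game is simply a mapping from offers to punish/not-punish choices.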
An interesting finding in this research is that people were willing to pay a cost
to punish non-cooperative behavior toward an anonymous unrelated person. Although people
gained no immediate financial benefit from their punishment behavior, punish-
ment remained common across societies. Most importantly, however, as expected,
in societies where there were higher offers in the dictator games (i.e., societies with
social norms for generosity towards strangers), there was a greater willingness to
pay a cost to punish dictators who gave others less in the third-party punishment
game. These findings suggest that in societies with social norms of generosity (and
so cooperation) with unrelated others, these social norms may be maintained by
people being willing to informally punish others who do not behave according
to the norm. Thus, this research indicates that social norms may be an important
component of culture that relates to cross-societal variation in cooperation.

Why Is It That Some Countries Tend to Develop Cooperative Social Norms?

This is perhaps one of the most challenging questions one can ask about cul-
ture and cooperation. Although more research is needed to address this impor-
tant question, we will review some recent research that provides some tentative
answers. In particular, in their research on small-scale societies, Henrich and
colleagues (2010) considered two possible explanations for why some small-scale
societies tend to possess norms of cooperation: (a) the extent of market integration,
and (b) societal members subscribing to a world religion. In this research, they
sampled 15 small-scale societies that varied in terms of both market integration
and religion. Market integration involves the extent to which a society includes
frequent anonymous interactions between unrelated persons. The researchers
indexed this across societies by measuring the percentage of house-
hold calories purchased at the market. World religions have also been suggested
to cultivate prosocial norms amongst individuals sharing the religion. Thus,
religion may be a social institution that helps to maintain social cohesion and
cooperation amongst societal members. It may accomplish this by the threat
of supernatural punishment by a deity or through rituals and beliefs that tend
to foster group cohesion. World religion across societies was measured as the
percentage of societal members who reported adopting a major world religion.
Henrich and colleagues (2010) had participants in each of the fifteen societ-
ies interact in three different situations: the dictator game, the ultimatum game,
and the third-party punishment game (described above). Importantly, in societ-
ies with high market integration there were larger offers in both the ultimatum
game and the dictator game, compared to societies with low market integration.
Moreover, people were more likely to punish others in the third-party punishment
game in relatively higher market integration societies. World religion had simi-
lar effects. In societies where a greater percentage of societal members reported
adopting a world religion participants tended to give higher offers in both the
ultimatum and dictator games, and displayed a greater tendency to punish oth-
ers in the third-party punishment game. These data indicate that in societies that
involve greater amounts of exchange relationships between unrelated others and
in which people tend to adopt world religions, there tend to be norms of coop-
eration between unrelated strangers and a willingness to punish norm violators.
Thus, cooperative norms may emerge from a tendency to engage in relations
between unrelated persons, and religion may develop in societies that have a need
for sustaining cooperation amongst many unrelated individuals.
The focus on market integration and world religions is also interesting from the
perspective of large-scale societies. After all, both market integration and world
religions tend to be characteristic of large-scale modern industrialized societies.
Thus, the size of a society may also predict the emergence of
social norms for cooperation. Marlowe and colleagues (2008) sampled behavior
in the third-party punishment game in 12 societies. The local group population
in these societies varied between 33 and 2 million. They found that people were
much less willing to engage in third-party punishment in smaller societies, com-
pared to larger societies. It is plausible that in small-scale societies people may be
able to maintain cooperation among societal members by means other than the
emergence of social norms and the use of punishment to enforce such norms.
For example, in small-scale societies, cooperation may be maintained through
reciprocal relations and reputations, because each group member's behavior can
be monitored and most interactions occur amongst individuals who have
an interaction history and knowledge about each other. However, in large-scale
societies there are too many people to be able to monitor and keep track of every-
one’s behavior/reputation. This analysis, while somewhat speculative, suggests
that in large-scale societies it is virtually impossible to enforce norms through
direct reciprocity or even reputation (see Chapters 3 and 4) because many social
interactions involve anonymous, unrelated strangers. Hence, it makes sense for
such large collectives to develop strong social norms, along with a pronounced
tendency to enforce such norms, and perhaps rely more on institutional forms of
norm enforcement (e.g., norm enforcement through guards and police officers).
Certainly, large societies around the world have established various institutional
mechanisms to promote large-scale cooperation, and research finds that such
centralized institutional mechanisms can be effective, although sometimes quite
costly (e.g., Baldassari & Grossman, 2011).
Unfortunately, to date, no research has examined whether centralized institutional mech-
anisms are more or less effective in promoting cooperation in certain cultures—a
topic worthy of future research.
In conclusion, research finds that societies may vary in the social (moral) norms
for how to behave cooperatively with unrelated strangers. These norms also seem
to be maintained by societal members' willingness to punish norm violators.
These norms of cooperation and the corresponding willingness to punish people
who violate those norms may be rooted in the challenges faced by larger societies
that involve many exchange relations between anonymous strangers. Moreover,
some variation in these norms corresponds to cross-societal variation in the adop-
tion of a world religion, suggesting that world religions may be institutions that
function to maintain cooperative social relations between strangers. One limita-
tion of social norms as regulators of cooperation is that they most strongly
determine behavior when people are being monitored and there is a possibility
of being punished or negatively evaluated. This poses a problem because many
forms of cooperation during social dilemmas occur in contexts where people are
not being monitored or where punishment is very unlikely. Importantly, culture
may still influence
cooperation in the absence of observers, and this may be achieved by people inter-
nalizing certain cultural values about appropriate social behavior.

■ C U LT U R A L VA L U E S A N D C O O P E R AT I O N

Much research on cross-cultural differences has focused on differences in what
are referred to as cultural values. Cultural values are abstract ideas that indi-
cate what is good, appropriate, and desirable behavior in a specific society
(Williams, 1970). These values may form the basis for informal social norms
and formal societal institutions that regulate social behavior (Schwartz, 1999).
Several types of values have been found to differ across cultures, including
power, achievement, traditionalism, and hedonism, to name a few (Schwartz,
1992). Perhaps the most researched cultural values—and especially in rela-
tion to cooperation—are individualist and collectivist values (Hofstede, 1980;
Triandis, 1989; Wagner, 1995). Individualist values emphasize that individuals
are autonomous and should pursue their personal goals. Collectivist values, on
the other hand, emphasize promoting group goals, social norms, and shared
beliefs among members of the group—which has been hypothesized to relate
to greater amounts of cooperation with in-group members (Chen, Chen, &
Meindl, 1998; Cox, Lobel, & McLeod, 1991; Parks & Vu, 1994; Wagner, 1995).
Below we will primarily focus on research relating individualist and collectivist
cultural values with cooperation.
Early research on cultural differences in cooperation centered on the dis-
tinction between cultures on this value dimension. For example, Parks and Vu
(1994) argued that Vietnamese participants were more cooperative than the
American participants because Vietnamese culture has been found to be rela-
tively more collectivist, while American culture is more individualist. Yet, these
studies did not measure differences between their samples in endorsing these val-
ues, and most certainly, these two countries vary along several other dimensions
besides individualism-collectivism. Subsequent research has attempted to estab-
lish the relation by measuring both cultural values and cooperation in samples
from the same culture (e.g., Cox, Lobel, & McLeod, 1991; Probst, Carnevale, &
Triandis, 1999).
For example, Probst, Carnevale, and Triandis (1999) evaluated whether people in
the same country (the United States) who differentially self-report individualist
and collectivist values tend to cooperate more or less during social dilemmas. In
their measurements they distinguished how individualists and collectivists vary
according to a horizontal versus vertical view of social relations. Specifically, while
horizontal collectivists view the group as important for their self-concept and view
group members as relatively similar in status, vertical collectivists similarly view
the group as important for defining themselves, but tend to emphasize a hier-
archical structure within the group. Horizontal individualists view themselves
as distinct from others, but have an egalitarian perspective on social relations.
Vertical individualists, on the other hand, view the self as an autonomous entity
and expect inequality between individuals. They measured individual differences
in these values and then had participants interact for ten trials either in a standard
Prisoner’s Dilemma or an inter-group Prisoner’s Dilemma—a dilemma whereby
people decide among contributing to their individual account, their own group’s
public good, or to a global public good that involves an additional group. The
inter-group dilemma provides an added dilemma between the participants’ own
group and doing what is best for all people from each group facing the dilemma. In
the inter-group Prisoner’s Dilemma everyone benefits if people contribute to the
global account, but each group would benefit relatively more by contributing more
to their own group’s account and not the global account.
Probst and colleagues (1999) found that vertical collectivists displayed rela-
tively greater amounts of cooperation during the standard Prisoner’s Dilemma,
compared to the inter-group dilemma. However, they found the exact opposite
results for vertical individualists—vertical individualists were more coopera-
tive in the inter-group Prisoner’s Dilemma, compared to the standard Prisoner’s
Dilemma. Horizontal individualists and collectivists did not vary in their amounts
of cooperation across these contexts. Their reasoning was that vertical individ-
ualists value winning competitions and that to win during the inter-group con-
text required cooperation with in-group members, but winning in the standard
Prisoner’s Dilemma required defection. Vertical collectivists, however, were
inclined to place the group’s interests above their own interests in both contexts.
During the standard Prisoner’s Dilemma, it is obvious that cooperation results in
a greater outcome for the group. However, in the inter-group dilemma, defection
results in a greater outcome for both groups. In this context, vertical collectiv-
ists seemingly identified with both groups (since they were all in the same room
and students from the same university) and chose to defect in this context. This
research demonstrates that cultural values are relevant to informing behavior dur-
ing social dilemmas. What are needed now are programs of research that extend
these findings to understanding differences across societies. Another limitation
of this research is that it is correlational. Yet, recent evidence does suggest that
collectivist and individualist cultural contexts may play a causal role in informing
choice during social dilemmas.
One way to determine whether collectivist-individualist cultures have a causal
impact on cooperation is to create these climates in a laboratory setting and
observe their effect on cooperation. This is exactly the method used by Chatman
and Barsade (1995) to observe the effects of collectivism-individualism on
cooperative work behavior. They had a group of M.B.A. students
interact in a workplace simulation that involved cooperation with several other
“co-workers.” They randomly assigned these students to work for an organization
that was either espousing collectivist or individualist values. For example, in the
individualist organization participants were told that individual effort was valued
and rewarded, while in the collectivist organization group efforts were valued and
rewarded. Participants were randomly assigned to work in these simulated orga-
nizations doing various tasks for two and half hours. Afterwards, they had each
person rate the cooperativeness of each member of their group. They found that
people rated their partners as more cooperative in the collectivistic organization,
compared to the individualistic organization. By directly manipulating a collec-
tivist versus individualist work climate, this research suggests that a collectivist
climate may play a causal role in promoting cooperation.
Another method to observe whether cultural values have a causal effect on
behavior is to activate already learned cultural values that pre-exist in the minds
of bicultural individuals. Recent research in the field of cultural psychology has
found that people may have knowledge of multiple cultures (Kitayama & Cohen,
2007). Several of these studies have demonstrated that bicultural participants
change their cognitions and behavior toward patterns consistent with a particu-
lar culture when provided cues of those cultures (Cohen, 2007; Oyserman & Lee,
2007). Wong and Hong (2005) considered whether this was the case for cooperation
amongst a sample of participants from Hong Kong. They reasoned that Hong
Kong university students have had enough exposure to American media that
they may have internalized some aspects of American culture, including indi-
vidualist values. Thus, they predicted that providing reminders of either Chinese
culture or American culture would influence these Hong Kong participants to
behave either like Chinese or Americans during a Prisoner’s Dilemma game,
respectively. An added feature of their experiment was that they manipulated
whether the participants interacted with a friend or a stranger. They hypoth-
esized that participants primed with Chinese culture would be more cooperative
with a friend than with a stranger, but that participants primed with American
culture would not distinguish between the two
conditions. They found that priming the participants with symbols of Chinese
culture resulted in higher expectations of cooperation and own cooperation,
but only when participants were interacting with a friend and not while they
were interacting with a stranger. Their cooperation was also higher in the friend
condition, compared to when they were primed with symbols from American
culture. This single study provides some preliminary evidence that exposure to
subtle cues that serve as reminders of cultural differences can causally influence
an individual's level of cooperation.
Cultural Values, Collective Efficacy, and Cooperation

Unfortunately, not much research has been done pursuing the specific variables
that may explain the link between cultural values and cooperation. Preliminary
work on this issue, however, suggests that the relation between collectivism and
cooperation may be explained by group efficacy. Prior research has shown that
people tend to be more cooperative when they feel that their behavior makes
a difference to the group outcome (self-efficacy) and when they expect that
their group can achieve its goals (collective efficacy) (Kerr, 1989). Earley (1993)
showed in samples from the United States, Israel, and China that collectivism
predicted cooperation in a group task, and that collective efficacy explained
the link between collectivism and cooperation. Thus, collective efficacy may
provide one clue about how cultural values relate to cooperation. Individuals
who are concerned about group outcomes and who believe that others in their
group similarly value the group over themselves will tend to believe that their
group will be effective at reaching their goal (e.g., a public good or resource
conservation), and this feeling of collective efficacy may promote a stronger
tendency to cooperate.

Cultural Values and Strategies That Promote Cooperation

Cultural values may determine the effectiveness of certain solutions for promot-
ing cooperation. For example, does a sense of social responsibility, social identity,
anonymity, and/or group size affect cooperation in the same way in both collec-
tivist and individualist cultures? Here we discuss how certain well-established
solutions for maintaining cooperation may all work differently to affect coopera-
tion, depending on the cultural values of participants facing the dilemma.
Earley (1989) hypothesized that collectivist cultures would be more coopera-
tive than individualist cultures during a cooperative group task, but that induc-
ing a feeling of shared responsibility for the group outcome would only increase
levels of cooperation among persons in the individualist cultures. Earley (1989)
had managerial trainees for entry-level managerial positions either from America
or China complete as many tasks as possible that had been placed in their in-box.
They worked to complete these tasks in one hour individually. However, they were
also placed in a group and told that their output would be added to the output of
a group of co-workers. To manipulate a sense of shared responsibility, Earley told
participants that they were either one of ten managers working toward a com-
mon group goal of 200 items (high shared responsibility), or told participants they
could expect to complete about 20 individual items and were not told anything
about a group goal (low shared responsibility). The primary dependent variable of
interest was the amount of work accomplished in one hour. The study was framed
as a study of social loafing, but this in-box paradigm may also be considered a
form of a social dilemma (see Joireman, Daniels, et al., 2006).
Earley (1989) replicated prior research and found that the Chinese were rel-
atively more collectivist than the Americans. Collectivism also related to more
cooperation (individual output) in the experiment. Most importantly, however,
was that collectivism moderated the positive relation between the manipulation of
shared responsibility and cooperation. Shared responsibility tended to increase
cooperation amongst individualists, but not collectivists. Prior research has found
that a shared sense of responsibility for a public good increases cooperation in
social dilemmas (De Cremer & Barker, 2003; De Cremer & Van Lange, 2001), but
this work has been primarily conducted in Western societies. One implication of
this work is that inducing a sense of shared responsibility may be one solution for
increasing cooperation in individualistic societies, but may not work as well in
collectivist societies.
Promoting an in-group identity, enhancing identifiability (or decreasing ano-
nymity), and reducing group size may also have different effects on cooperation
depending on the cultural values of participants facing the dilemma. In a separate
study, Earley (1993) hypothesized that collectivist cultures would be more coop-
erative with in-group members, relative to out-group members, and that people
from individualistic cultures would differentiate less between these two groups in
terms of cooperation. Earley compared samples from the United States, Israel, and
China on their behavior during the in-box paradigm described above. An addition
to the paradigm is that he manipulated whether the participants were interact-
ing with an in-group member, out-group member, or simply worked alone. Both
Israel and China scored significantly higher on collectivism relative to the United
States. He found that both the Israelis and Chinese cooperated significantly higher
in the in-group condition, compared to the out-group condition or individual
work condition. However, the Americans worked harder when they worked alone,
compared to the in-group and out-group conditions (which they did not differen-
tiate in terms of cooperation). While interacting with an in-group member, both
collectivist cultures, China and Israel, were indistinguishable in their levels of
cooperation, and both countries displayed greater amounts of cooperation com-
pared to the American sample. Thus, supporting Earley's hypothesis, collectivist
cultures do tend to be relatively more cooperative with in-group members than
with out-group members, whereas individualistic cultures may not differentiate
between these two groups.
To examine whether group size and an enhanced identifiability of contributions would
affect cooperation differently depending on cultural values, Wagner (1995) had
students from the United States work on a group task throughout the semester
and then rate the cooperativeness of their peers. He found that individual dif-
ferences in collectivist values positively related to peer ratings of cooperation
during the group task. Importantly, individual differences in self-reported collec-
tivism moderated the relations of group size and identifiability with coopera-
tion. For example, prior research has found that group size negatively relates to
cooperation (Bonacich, Shure, Kahan, & Meeker, 1976; Hamburger, Guyer, & Fox,
1975). However, Wagner (1995) only found this negative effect among individual-
ists. Group size did not affect the amount of cooperation amongst collectivists.
However, identifiability did positively affect cooperation amongst individualists,
but not collectivists. One important implication of this research is that prior con-
clusions from research on identifiability and group size with cooperation have
been limited to Western cultures, and these conclusions may not readily generalize
beyond these cultures. As the research mentioned above suggests, collectivists
cooperate at high levels regardless of group size or identifiability, but individual-
ists behave differently as the environment is altered according to these variables.
In summary, the above mentioned research clearly shows that both individual
differences and cross-societal variation in collectivist and individualist values
relate to cooperation—with people and societies that hold collectivist values dem-
onstrating greater amounts of cooperation. Moreover, research has supported the
position that these values may have a causal effect on cooperation. Collectivist cul-
tures may have greater cooperation, because cultural members expect that others
will also value group outcomes and so tend to expect others to cooperate, thereby
enhancing their sense of collective efficacy and promoting cooperation. The cul-
tural values of collectivism and individualism may also determine what strate-
gies are effective in promoting cooperation. Research finds that a sense of shared
responsibility, identifiability, and group size affect cooperation amongst indi-
vidualists, but not collectivists.
Thus, different countries may require different strategies to promote coop-
eration, depending on the dominant cultural values in the respective society.
Certainly more research is necessary, especially uncovering other possible cultural
values that may exert their effects on cooperation (e.g., power, achievement, and
hedonism), but the broad message of these studies is that collectivism matters for
cooperation and that these values may help us understand when and why certain
variables relate to cooperation. Moreover, the research reported above clearly con-
veys that researchers need to take caution in generalizing their results beyond the
cultural sample (e.g., Henrich, Heine, & Norenzayan, 2010).
Importantly, research has not only found that culture may exert its effects on
cooperation through social norms and values, but may also do so through beliefs
about others—and especially beliefs about other’s willingness to cooperate. In fact,
Wong and Hong (2005) found evidence that priming Hong Kong participants with
symbols of either Chinese or American culture affected the participants'
beliefs or expectations about their partner’s cooperative behavior, which partly
explained why participants were relatively more cooperative after being primed
with Chinese symbols. Certainly, another important difference between cultures,
besides social norms and values, concerns differences in beliefs—and beliefs about
human nature in particular.

■ C U LT U R A L B E L I E F S A N D C O O P E R AT I O N

Cultures not only differ in their values, but also differ in their beliefs about
social relations (see Bond et al., 2004). According to Bond and colleagues
(2004) cultures may differ according to general beliefs that people use to direct
their behavior on a daily basis. For example, cultures differ in several types
of beliefs, including beliefs about a supreme being (religiosity) and beliefs that
life events are predetermined (fate). A  belief that is central to understanding
cooperation is the belief about the extent to which other people are trustworthy
(or not). This belief has been referred to as social cynicism in cross-cultural
research (Bond et al., 2004), but may be considered as cross-cultural differences
in trust. As noted in chapter  4, trust is often defined in terms of the willing-
ness to accept vulnerability based upon the positive expectations of the inten-
tions or behaviors of others (Rousseau et  al., 1998). Also, we have discussed in
chapter 4 the well-established finding that trust relates positively to cooperation
in social dilemmas (Anderson, Mellor, & Milyo, 2004; Fischbacher, Gächter, &
Fehr, 2001; Yamagishi, 1988; for a recent meta-analysis see Balliet & Van Lange,
2013a). Importantly, trust in others is a belief that may vary across societies and
that may hold important implications for understanding cross-societal variation
in cooperation. Indeed, in this section, we will see that trust and culture are
closely related in our understanding of cooperation in different societies.
Prior research on trust has found that societies do vary substantially in their
generalized trust in others. Research on the World Values Survey has asked people
around the world to respond to the question “Generally speaking, would you say
that most people can be trusted or that you need to be very careful in dealing with
people?” (Inglehart, Basanez, & Moreno, 1998). What this research has found is
that in some societies people report being very cautious about dealing with people
(e.g., Turkey and Iran), but in other societies a majority of people feel that most
people can be trusted (e.g., Norway and Sweden). An important limitation of
this research is that it measures trust with a single forced-choice question, which
may not be an optimal measure of trust because people may theoretically be both
trusting and cautious of others (Yamagishi, 2011).
Complementing results of the World Values Survey, Huff and Kelley (2003)
also found much cross-societal variation in trust using a multi-item measure of
generalized trust in others. They found that two samples from the United States
reported greater amounts of trust (and less distrust) in others in general com-
pared to samples from six Asian countries, including China, Hong Kong, Japan,
Korea, Malaysia, and Taiwan. These latter results were expected because it was
hypothesized that individualistic Americans would differentiate less in terms of
trust between in-group and out-group members, compared to samples of rela-
tively collectivist Asian cultures. In fact, they also measured the difference in trust
between ethnic group members versus non-members. They found that the six
Asian samples extended relatively greater amounts of trust to their ethnic group
members, compared to non-members, and that this difference was larger in the
Asian countries than it was in the two samples from the United States. Although
these findings further establish cross-societal variation in trust, this research con-
tinues to use self-report survey items to measure trust.
Recent work has complemented this survey research by examining
cross-societal variation in trust using behavioral experiments where trusting oth-
ers, and the chance of being taken advantage of, is costly (e.g., Bohnet, Herrmann,
& Zechhauser, 2010; Buchan & Croson, 2004; Buchan, Johnson, & Croson, 2006;
Cardenas et  al., 2008). Cardenas and colleagues (2008) found evidence of both
differences and similarities in trust across large samples from six Latin American
countries. They had participants from each country play a trust game. In the trust
game participants are assigned to the role of either allocator or responder. The
allocator is endowed with a specific amount of money and then decides how much
of that money to allocate to themselves or the responder. Any amount delivered
to the responder is tripled, but the amount allocated to the self remains the same.
Next, if the responder receives any money, then they decide between how much
to transfer to themselves or deliver to the allocator. During this transfer all money
retains its original value and afterward, the interaction is over. This research found
that, while the median offer was only 25% of the endowment in Colombia, in all
the other samples (Argentina, Costa Rica, Peru, Uruguay, and Venezuela) the
median offer was 50% of the endowment. The behavior of the responder (consid-
ered a measure of trustworthiness) mirrored these results. Colombian participants
provided a median return of 14%, while in the other samples there was a higher
median return of approximately 25% (ranging from 20% to 28%). Thus, using
behavioral measures of trust, Cardenas and colleagues find evidence for both cul-
tural differences and similarities in trusting and trustworthy behaviors.
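The trust-game mechanics described above can be sketched as follows. The endowment and the fractions sent and returned are illustrative values, not the stakes used by Cardenas and colleagues (2008), and the sketch assumes the responder returns a fraction of the tripled amount received.

```python
# Payoff sketch of the trust game: the allocator's transfer is tripled on
# arrival, while the responder's back-transfer keeps its original value.
# All numeric values are hypothetical illustrations.

def trust_game(endowment, sent_fraction, returned_fraction):
    """Return (allocator_payoff, responder_payoff)."""
    sent = sent_fraction * endowment
    received = 3 * sent                       # transfer is tripled
    returned = returned_fraction * received   # back-transfer is not tripled
    allocator = endowment - sent + returned
    responder = received - returned
    return allocator, responder

# An allocator who sends half of a 100-point endowment and receives a
# quarter of the tripled amount back ends up with 87.5; the responder
# keeps 112.5.
print(trust_game(100, 0.50, 0.25))   # (87.5, 112.5)
```

The tripling is what makes the first move a measure of trust: sending money creates surplus, but only the responder's trustworthiness determines whether the allocator shares in it.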

Generalized Trust and Cooperation across Societies

Do cross-societal differences in trust relate positively to cross-societal varia-
tion in cooperation? As discussed previously, people tend to cooperate more
with others when they expect others to cooperate (e.g., Balliet & Van Lange,
2013a; Yamagishi, 1988). Thus, we might expect that high-trust societies would
display greater amounts of cooperation. In fact, several researchers have estab-
lished theoretical links between cross-societal trust with differences in coopera-
tion. Putnam (1993) suggested that trust plays a key role in strengthening social
networks and facilitating collective action. Fukuyama (1995) similarly suggested
that cross-societal differences in trust may underlie differences in societies'
ability to form spontaneous groups and networks that generate
value and prosperity. An implication of this work is that high-trust
societies may be more prosperous, and indeed prior research has found that
cross-societal differences in responses to the World Values Survey relate
positively to national wealth (Knack & Keefer, 1997; La Porta, Lopez-de-Silanes,
Shleifer, & Vishny, 1997). Yet this does not provide direct evidence for the link
between cross-societal variation in trust and cooperation.
Research on cross-societal trust and cooperation has mostly made use of survey
measures of trust, or of measuring expectations of others' behavior prior to
observing cooperation in an experimental social dilemma. For example, Wade-Benzoni
and colleagues (2002) hypothesized that a collectivist sample of Japanese would
tend to expect others to be more cooperative in a social dilemma, and so would
cooperate more than an individualistic sample of Americans. They found that the
Japanese reported being relatively more collectivist than the Americans, and that
the Japanese tended to be more cooperative than Americans. Importantly, the dif-
ferences in their beliefs about their partners’ cooperation (i.e., trust) explained
the differences in cooperation between the two countries. The Japanese tended
to cooperate more because they expected more cooperation from others, com-
pared to the Americans, who tended to expect relatively less cooperation from
their partner. Thus, differences in self-reported trust in others positively predicted
behavioral differences in cooperation—and this relation explained the differences
in cooperation between the Japanese and Americans.
Other research has compared behavioral measures of trust with behavioral
measures of cooperation. As mentioned earlier, Cardenas and colleagues (2008)
measured cooperation across six Latin American societies by observing partici-
pants in a public goods dilemma. Importantly, they had the same participants play
a trust game with another person. Across the six societies, behavior in the trust
game positively related to cooperation in the public goods game. This research
provides strong evidence that the level of trust in a particular society (as indexed
by a behavioral measure) predicts the amount of cooperation in that society.
It may be that in some societies trust is a stronger determinant of cooperation
compared to other societies—and in those societies different strategies are used
to promote cooperation. Indeed, a recent meta-analysis linking trust with coop-
eration found that societies differed in the strength of this relationship (Balliet &
Van Lange, 2013a). Specifically, the meta-analysis included studies, conducted
across 28 different societies, on the relation between dispositional trust and
cooperation, as well as between expectations of partners' behavior and cooperation.
Even though trust was positively related to cooperation within each country, the
strength of that relationship varied across societies. In certain societies trust was
a good predictor of cooperation (e.g., Denmark, Belgium, and the Netherlands),
but in other societies there was a weaker relation between trust and cooperation
(e.g., South Africa, Venezuela, and Poland). Why was trust a stronger predictor in
some countries?
The difference may be explained by cultural differences in how individuals
manage social relations. Yamagishi (1988; 2011; see also Yamagishi & Yamagishi,
1994) has argued that trust is more important for promoting cooperation in the
United States, compared to Japan. In Japan, people are more willing to cooper-
ate when certain mechanisms (e.g., monitoring and sanctioning uncooperative
behavior) provide assurance that others will cooperate. Yamagishi (1988) found
that Americans tended to behave more cooperatively than the Japanese in a pub-
lic good dilemma without the possibility to punish others, but when punishment
opportunities were present then there were similar levels of cooperation between
the Japanese and Americans. Yamagishi interpreted these results to suggest that
high-trusting Americans are more likely to cooperate in the absence of a sanc-
tioning system, but that the Japanese do not believe others will cooperate in the
absence of such a system, and so the presence of sanctioning increases their rates
of cooperation. Here we have a clear example of how differences in institutions
between Japan and the United States may reciprocally affect trust
and cooperation. Yamagishi (2011) notes how specific forms of social relations in
Japan have established several institutions that regulate behavior in specific ways,
removing the need for trust to regulate social interactions. Such institutions may
over time have the effect of further eroding trust in others within a society.
Although trust may have a main effect on cooperation, it may also affect coop-
eration indirectly through other means. High-trust societies may sustain their
cooperation, in part, because these societies also tend to have greater amounts
of third-party punishment of non-cooperators. Paying the cost of punishing
non-cooperators poses a second-order social dilemma, such that it is in each per-
son’s best interest to not pay the cost to punish norm violators, but to free-ride on
the increased cooperation produced by others' punishment. However, in
high-trust societies people may be more willing to punish norm violators because
they believe that others will also be willing to punish such violators. Thus, trust may
help to solve the second-order dilemma of punishing free riders in large-scale
societies. In support of this perspective, Balliet and Van Lange (2013b) conducted
a meta-analysis of studies relating punishment to cooperation in small-group
public goods dilemmas across 18 different societies. They coded each society for
their level of trust using data from the World Values Survey. After controlling for
several relevant variables, they found that trust related positively to the effect of
punishment on cooperation. Therefore, these data lend support to this perspective
by providing evidence that informal peer punishments more effectively increase
cooperation in public goods in high-trust, compared to low-trust, societies.
In summary, much prior research on cross-cultural beliefs has focused attention
on the belief in others' trustworthiness. As noted above, research using both
survey and behavioral measures of trust has established cross-societal variation in
the belief that others are trustworthy. Researchers across the social sciences have
also discussed how this variation in trust may be important for understanding
cooperation and the functioning of societies (e.g., Fukuyama, 1995; Putnam, 1993;
Yamagishi, 2011). Indeed, research has established that some observed differences
in cooperation between societies can be explained by different levels of trust. But
research suggests that trust may matter more for some countries compared to
others. As Yamagishi (1988) notes, trust may be important for maintaining
cooperation in the United States, but other social and psychological mechanisms may
sustain cooperation in Japan. A society’s level of trust has also been demonstrated
to affect how peer punishment can sustain cooperation—with peer punishment
promoting cooperation better in high-trust societies. Overall, this research sug-
gests that trust in others may explain why certain societies possess much less
cooperation amongst strangers, while other societies possess an abundance of
cooperation for the provision of public goods. Although trust has yielded one
promising direction for research, cross-societal variation in other specific beliefs
may affect cooperation, such as belief in God.

Additional Cultural Beliefs Important for Cooperation

Trust in others is likely the most relevant belief people possess that may help us
understand cross-cultural variation in cooperation. Nevertheless, there may be
other beliefs that directly or indirectly affect levels of cooperation. For example,
cross-cultural differences in beliefs in God (i.e., religiosity) can affect coopera-
tion rates (Johnson, 2005; Johnson & Bering, 2006). Johnson (2005) used data
from the Standard Cross-Cultural Sample including 186 societies and related
cross-societal variation in a belief in a “high god” to various indexes of coop-
eration within societies. He found support for the hypothesis that cross-cultural
differences in the belief in a supernatural deity that possesses the ability to pun-
ish selfish behavior positively relates to cooperation across societies. Similarly,
Henrich and colleagues (2010) found that the number of persons subscribing
to a major world religion predicted cooperation rates across 15 small-scale
societies. Thus, beliefs in a supernatural third-party punisher may explain
some cross-societal variation in cooperation. As Johnson (2005) recognizes, a
belief in such a deity removes the need for costly sanctions for cooperation and
effectively solves the second-order dilemma of punishment. Certainly, future
research will profit by examining a wider range of cross-cultural beliefs and
considering how these beliefs relate to cooperation.
Future research will also benefit from considering the relation between beliefs and
other components of culture. Although cultural values and beliefs are hypothesized
to interrelate in many ways (Bond, Leung, Au, Tong, & Chemonges-Nielson, 2004;
Doney, Cannon, & Mullen, 1998; Leung, Au, Huang, Kurman, Niit, & Niit, 2007),
such as collectivists placing different emphasis on trust compared to individualists
(e.g., Doney, Cannon, & Mullen, 1998; Huff & Kelley, 2003), to date, research has
largely overlooked how cultural values and beliefs interact to affect cooperation.
Yet the current research does clearly establish that both beliefs and values
play an important role in understanding cooperation, and it provides clues about the
conditions that may affect cooperation in specific cultural contexts. A discourag-
ing implication of this conclusion is that promoting cooperation during multicul-
tural interactions may be exceptionally challenging, since each person may bring
their own cultural background to the dilemma, which may cause misunderstand-
ing and potential conflict. Moreover, solutions to multicultural social dilemmas
may be considerably more complex, because people from different cultures may
have systematically different responses to changes in the situation. This issue is
even more concerning as people around the world become increasingly interde-
pendent while facing global social dilemmas.

■ BASIC ISSUES

Clearly, the topic of culture in social dilemmas is timely in several respects. This
research answers recent calls for broadening the scope of psychological
research beyond largely Western samples (e.g., Henrich et al., 2010), and takes
advantage of theoretical advances about how culture can shape social behav-
ior, including cooperation. Moreover, several large-scale societal problems, such
as sustainable resource consumption and environmental protection, are more
challenging now than ever before as societies become increasingly global. This
raises the intriguing question: Can globalization help promote cooperation to
solve social dilemmas that transcend societal borders? Another basic issue
involves how cooperation relates to a society’s functioning. Do societies in
which cooperation is effectively promoted and sustained benefit—economically
and/or institutionally—through this enhanced cooperation? Next, we discuss
both of these basic issues.

Can globalization transcend cultural boundaries and promote cooperation?

As discussed above, cooperation rates vary substantially across cultures. Most
of these studies involve interactions among individuals from the same culture.
However, in more recent times, people increasingly find themselves engaged in
interdependent social interactions with persons from different societies and cultures.
Such interactions may occur over the Internet, during international business
or travel, and between politicians representing different
nations. Certainly, social dilemmas may transcend national and cultural bound-
aries. For example, bordering countries may face a social dilemma when attempting
to control the spread of a disease (which does not require a passport to
cross borders). At the extreme end of these international dilemmas are dilem-
mas that all countries face together. For example, all countries are currently fac-
ing the public goods dilemma of global warming (Milinski et al., 2008). While
it may be in each country's best interest (and that of the individuals in those
countries) not to contribute by reducing their energy consumption, if we
all behave this way then the world may face a collective disaster. An abun-
dance of research suggests that people are inclined to cooperate more with their
in-group, compared to members of an out-group (e.g., Brewer & Kramer, 1986;
Dawes, van de Kragt, & Orbell, 1988; for a recent meta-analysis, see Balliet,
Wu, & De Dreu, 2013). This may paint a gloomy picture for our ability to solve
global social dilemmas, such as the climate change issue. However, recent work
suggests that the process of globalization may be leaving a mark on the minds
of individuals that transcends national and cultural boundaries, and that this
may provide a solution to such dilemmas.
Globalization is occurring at an ever-increasing rate. This includes an increase
in people being able to connect and communicate with each other and may also
result in an increase in interdependence between people and countries that were
only decades ago relatively isolated from each other. How might this globaliza-
tion affect our ability to solve global social dilemmas? As an initial foray into this
research question, Nancy Buchan and colleagues (Buchan et al., 2009, 2011) have
tested how identifying with the world affects contributions towards global public
goods. These authors suggest that the process of globalization, instead of reinforc-
ing pre-existing in-group national identities, may promote a certain form of global
identity in which people identify themselves as one human member of a larger
group of humans who inhabit the planet Earth. The authors go further to suggest
that the extent to which people endorse the importance of a global social identity
may affect their tendency to contribute towards global public goods.
To examine this hypothesis, Buchan and colleagues (2009; 2011) conducted a
multi-level public goods experiment modeling contributions to a global public
good in six different countries (Argentina, Iran, Italy, Russia, South Africa, and
the United States). The countries varied in their degree of being globalized as indi-
cated by a previously developed measure of globalization. For example, Iran is
significantly lower on globalization than the United States. In each country, they
employed a multi-level public goods dilemma where participants decided how to
allocate an endowment among an individual fund, a local fund, or a world
fund. In the public goods dilemma their local group consisted of four individu-
als, but the global group consisted of their group of four and two other groups
of four—each from a different country. When making their decision, they were
given ten tokens (each worth 0.50 USD) and then asked to allocate the money
to the individual fund (which remains the same), the local fund (which is multi-
plied by two and distributed among the four local group members), or the world
fund (which is multiplied by three and distributed among the twelve members
across the three countries). This design allows for a test pitting parochial
motives against concern for a broader group. While it may benefit the
local group for each member to contribute their endowment to that group, every-
one in the experiment benefits most by each person contributing money to the
global account. The researchers were interested in explaining who contributed to
the global account by either the country’s level of globalization or by individual
differences in the measure of globalization. That is, they also measured to what
extent each person endorsed engaging in international social interactions—which
composed the globalization scale.
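The incentive structure of this multi-level game can be sketched as simple arithmetic. The sketch below is an illustration under the multipliers and group sizes described above; the function name and the scenarios are ours.

```python
def payoff(kept, local_fund_total, world_fund_total):
    """One participant's earnings, in tokens.

    kept: tokens this person keeps in the individual fund.
    local_fund_total: all tokens in their 4-person local fund (doubled, split 4 ways).
    world_fund_total: all tokens in the 12-person world fund (tripled, split 12 ways).
    """
    return kept + 2 * local_fund_total / 4 + 3 * world_fund_total / 12

# Everyone gives all 10 tokens to the world fund: each of the 12 earns 30.
assert payoff(0, 0, 120) == 30.0
# Everyone gives all 10 tokens to their local fund: each earns only 20.
assert payoff(0, 40, 0) == 20.0
# One free rider keeps 10 while the other 11 give to the world fund:
assert payoff(10, 0, 110) == 37.5   # the free rider out-earns everyone
assert payoff(0, 0, 110) == 27.5    # each contributor earns less than 30
```

The numbers expose both layers of the dilemma: each token returns 1.0 to the self if kept, 0.5 via the local fund, and 0.25 via the world fund, so private incentives run exactly opposite to the collective ranking, in which full world-fund contribution is best.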
They found that the countries that scored lower on the globalization index
(e.g., Iran and South Africa) contributed less to the global accounts, compared to
countries that scored higher on the globalization index (e.g., the United States).
Moreover, they found that the individual measure of globalization supported this
finding. Individuals who scored higher on this measure were more inclined to
make contributions to the global account. Both findings support a general conclu-
sion that globalization may foster, and not inhibit, contributions to global public
goods. Buchan and colleagues (2011) also reported that they measured the extent
to which people endorsed “feeling attached,” “defined themselves by,” or “felt close”
to members in their local community, their nation, or the world. Importantly, they
found that across all six countries, the extent to which people strongly endorsed
being a member of “the world” as part of their social identity predicted increased
amounts of contributions to the world account. This remained a significant
predictor of cooperation even after controlling for expectations of others' cooperation.
These initial efforts to study the effects of globalization on contributions
towards global public goods provide hope for solving such broad international
social dilemmas. This research suggests that people who are more likely
to find themselves interacting with others outside of their own country are more
inclined to contribute to public goods. Moreover, people can develop a form of
social identity with the world or being human “in general” and this positively
relates to a willingness to contribute to broader public goods that cross national
boundaries at a time of important social and ecological challenges for humans
around the world. Importantly, one implication is that the process of globalization
does not have to strive towards forming a single global cultural in-group to
establish cooperation, but may increase cooperation by encouraging people from
different cultures to identify themselves as simultaneously part of their cultural
group and part of a broader group of human beings.

Does cooperation lead to better institutional and economic outcomes?

Several scholars have claimed that an ability for societal members to cooper-
ate in the provision of public goods underlies the success of societal institu-
tions (Henrich et  al., 2010; Ostrom & Ahn, 2008; Putnam, 1993). For example,
democratic governments are thought to thrive in societies where citizens freely
engage in public life by debating, electing representatives, and joining political
parties (La Du Lake & Huckfeldt, 1998; Putnam, 1993). However, limited evi-
dence exists to support such claims. Recently, Balliet and Van Lange (2013b)
conducted a meta-analysis of studies on the relation between punishment and
cooperation which were conducted in 18 different countries. They found that
the effectiveness of punishment to promote cooperation in the provision of
public goods is positively correlated with the amount of political participation
by societal members. Political participation is a hallmark of successful demo-
cratic governments. Thus, there is some preliminary evidence that behavioral
differences in cooperation across societies, as measured by laboratory social
dilemmas, do predict a society’s ability to maintain a democratic institution.
Do more cooperative societies possess greater wealth than less cooperative
societies? More cooperative societies may be able to establish and maintain larger
social networks that can create wealth and prosperity (e.g., Fukuyama, 1995).
Moreover, in countries where strangers lack an ability to cooperate, this may
increase the necessity of contracts and oversight by authorities for various forms
of social exchange, which increases transaction costs and wastes resources (e.g.,
Knack & Keefer, 1997). Yet, there is a lack of data that directly addresses this ques-
tion. There is some evidence that high-trust societies—as indicated by responses
to the World Values Survey—have a higher GDP per capita compared to low-trust
societies (Knack & Keefer, 1997; La Porta et  al., 1997). However, this research
relies on survey responses and measures of trust, not cooperation. There are sev-
eral potential reasons for the positive relation between cross-societal variation
in trust and GDP per capita, and subsequent research has yet to test whether this relation is
the result of a greater ability by societal members to cooperate in the provision of
public goods. Thus, although there is some research to suggest that cross-societal
variation in cooperation predicts successful societal institutions and the creation
of wealth, much of this research is preliminary and indirect, so future research is
strongly encouraged to further examine these important issues in greater detail.

■ SUMMARY AND CONCLUSIONS

People around the world face social dilemmas. This chapter reviewed research
on cross-societal variation in cooperation during social dilemmas and discussed
several cultural explanations for this variation. Indeed, recent research has
clearly established ethnic and societal variation in cooperation. This research has
examined cooperation across a broad range of human societies, from nomadic
hunter-gatherers to large-scale industrialized societies. For example, in some soci-
eties, people are quite willing to cooperate with others in the provision of public
goods, but in other societies free-riding is rampant. Why do such differences exist?
To explain this variability in cooperation, researchers have often relied on some
aspect of culture. Although many conceptualizations of culture suggest that cul-
ture may simultaneously exist outside and within the minds of individuals in a
collective, much research has focused on cultural differences that are located in
the minds of individuals, such as values and beliefs. However, research on social
norms tends to emphasize both aspects. While social norms may be embed-
ded in the minds of individuals, people will often learn about and conform to
these norms as a result of the pattern of behaviors that exists around them—and
especially through the use of punishment to let others know they have violated a
specific norm. Here we reported on research suggesting that social norms for
cooperation and a willingness to punish norm violators are two important aspects of
culture that may explain cross-societal variation in cooperation.
Yet, societies also vary in their values and beliefs, which, unlike social norms,
may affect cooperation in the absence of possible punishment. Here we report
research demonstrating that cultural values of collectivism and beliefs in
others' trustworthiness are positively associated with cooperation. Perhaps an even
stronger and more important conclusion is that cross-societal differences in these
values and beliefs can determine what factors promote cooperation. To illustrate,
although collectivist cultures may have greater amounts of cooperation, compared
to individualist cultures, there are certain features of the environment that may
encourage as much cooperation amongst individualists as collectivists, including
a feeling of shared responsibility for a public good and making contributions iden-
tifiable. Moreover, these values and beliefs may themselves be more or less impor-
tant for determining cooperation in certain cultures. For example, trust may hold
important implications for sustaining cooperation in some societies, but other
societies may have norms and institutions in place that remove the necessity of
trust to promote cooperation.
Clearly, social norms, cultural values and beliefs may be important for regulat-
ing cooperative interactions amongst individuals embedded in a cultural group.
But in many societies, people are increasingly interacting with others outside their
cultural group. This can cause several challenges for cooperation during social
dilemmas—especially since much prior work has established a strong bias to
favor in-group, relative to out-group, members. Does this spell certain disaster
for our ability to solve global social dilemmas? Not necessarily. Recent work has
found that globalization is encouraging individuals to incorporate as part of their
self-identity a social identity as being part of a global human community. Indeed,
this global social identity promotes a willingness to cooperate in global social
dilemmas.
Lastly, scholars across the social sciences have discussed that cooperation
underlies a society’s ability to establish and maintain successful institutions and
may also lead to the creation of wealth. Yet there is a lack of research supporting
this basic assumption, which underscores the importance of understanding
cross-societal variation in cooperation. Preliminary work finds that societies with
a higher level of political participation (a hallmark of a healthy democracy) also
demonstrate an ability to maintain cooperation through the use of punishment.
Future research that further explores the reasons why cultures vary in their
cooperation may help us understand why certain countries are able to create wealth
and well-functioning societal institutions, while other societies seem to lack this ability.
■ PART THREE

Applications and Future of Social Dilemmas
6 Management and Organizations

Consider the following workplace arrangement. Different work units within a
shift have different sets of daily tasks assigned to them. When all of the units
have completed all of their assigned tasks, every worker on the shift can go
home, even if there is still time left in the work day. Further, when members
of one unit have finished all of their work, they can assist other units who are
still working. There is an incentive to offer such help, because enabling other
units to finish quickly means that everyone can go home early. However, this
also means there is an incentive for a unit to loaf in the morning, because when
other units finish their work, they will come and help you do your work, which
means the members of your unit won't have to work as hard as everyone else.
We thus have a social dilemma: Everyone should work hard so that all of the
tasks are completed early in the day, and everyone can go home, but the best
thing for members of an individual unit is to do no work, and wait for colleagues
from other units to step in. The end result will be that members of the loafing
unit will get to go home early, and they will not have done much work at all. Of
course, this principle is true for all units, and if all workers respond to the temp-
tation, then no tasks get completed, and everyone has to stay until the formal
dismissal time.
In fact, this very situation was described by Rutte (1990) in a field study of a
bank. Each day, each unit within the bank was assigned a certain number of tasks
that had to be completed by the end of the work day. For example, the investment
department might have to prepare reports on a specific number of properties. If a
unit happened to not have very much assigned to it on a particular day, after com-
pleting their work, the members of that unit would be given some of the tasks that
had originally been assigned to another unit, one that had not made very much
progress. For example, if Investments only has four properties to report on today,
but Loans has to complete paperwork on 50 clients, then after all of the property
reports are completed, some of the loan paperwork will be redirected to the invest-
ment department. Rutte found that most members of most teams recognized that
taking it easy in the morning meant a reduced workload in the afternoon, and
that hard work in the morning was “rewarded” with more work in the afternoon.
As such, most workers did little work in the morning, people rarely finished their
work (and thus went home) early, and there was widespread unhappiness, to the
point that it was creating a difficult work environment. Eventually, the bank had
to abandon the program.
The purpose of this chapter is to show how social dilemmas can manifest
themselves in the workplace, and to discuss some of the unique aspects of
Craig Parks had primary responsibility for preparation of this chapter.

workplace dilemmas. After all, these are contexts where individuals are working
together in groups and units, and faced with various dilemmas and observations
of what colleagues might do. Shall I work hard or take it easy? How hard do my
colleagues work? Shall I spend extra time to familiarize a new colleague with
the organization? We provide a review of various decisions and behaviors in
organizations that share important features of social dilemmas. In doing so, we
assume that the social dilemma literature, and the literature on management and
organizations, can be enriched. At the same time, we should note that this review
does not imply that each situation is completely consistent with the definition of
a social dilemma—for example, it is possible that for people who love their work,
and do not mind working overtime, there is no social dilemma in putting in extra
hours: in those situations, self-interest and organizational interest hardly seem
conflicting (we will address this issue also in Chapter 7, when discussing appli-
cations of social dilemmas).
Thus, we focus on various workplace dilemmas that could be analyzed in gen-
eral terms as a social dilemma. We will also look at some strategies for trying to
resolve these workplace dilemmas. But first, let us ask a more basic question: Why
worry about workplace social dilemmas? If the job ultimately gets done, why does
it matter how it was completed, or by whom? In the bank studied by Rutte, the
work was done, completed later than it could have been, but done nonetheless. So
what if the workers missed out on a nice benefit?
Labor economists have considered this issue. Their argument is that resolution
of the dilemma has a multiplicative effect on productivity: The combination of
incentive for good performance and a supportive, cooperative atmosphere leads
to productivity gains that exceed what would be expected from the effects of each
alone (Rotemberg, 1994). For example, if incentives and collegiality each increase
productivity by 10%, the combination of the two might improve productivity by
25% (i.e., more than merely adding the two together). Incentive plans are easy to
implement, but creating cooperativeness is not. One does not simply tell workers
to be cooperative (or more accurately, one could do this, but the instruction is
unlikely to take hold). It is thus important to learn how to foster long-term coop-
eration in the workplace.
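The superadditivity argument above amounts to simple arithmetic, sketched below with the same made-up figures as in the text (the 10%, 10%, and 25% values are illustrative, not empirical estimates):

```python
baseline = 100.0
gain_incentives = 0.10    # productivity gain from incentives alone
gain_collegiality = 0.10  # gain from a cooperative atmosphere alone

# What merely adding the two effects would predict:
additive = baseline + baseline * gain_incentives + baseline * gain_collegiality

# The superadditive outcome the argument describes: a 25% gain.
combined = baseline * 1.25

assert combined > additive  # 125.0 > 120.0: more than the sum of the parts
```

The point of the comparison is that the surplus (here, 5 units per 100 of baseline output) only materializes when incentives and a cooperative atmosphere are present together, which is why fostering long-term cooperation matters beyond any incentive plan.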
There is an interesting argument that arises within this line of reasoning,
namely, that it is possible for a cooperative environment to lead to decreases in pro-
duction. Holmström and Milgrom (1990) suggested that workers who are coop-
eratively oriented might reduce their personal levels of performance if a co-worker
clearly cannot perform at the level of others. In this way, the struggling co-worker
is protected from being singled out and punished by management. Thus, induc-
tion of long-term cooperation in workers is valuable only if the focus is on collec-
tive output. Throughout this chapter, we will assume such a focus.
We will discuss five forms of workplace dilemmas: (a) organizational citizenship behavior, (b) non-normative work behavior, (c) knowledge-sharing dilemmas, (d) unionization, and (e) strategic alliances. Our reviews of these literatures are by necessity selective, focusing only on those aspects of the problems that specifically connect to social dilemmas. The reader should understand that each of these issues is more complex than our discussions might suggest.
Management and Organizations ■ 109

■ ORGANIZATIONAL CITIZENSHIP BEHAVIOR

A form of social dilemma that is more or less unique to companies is the
dilemma of organizational citizenship, or performance of extra-role behaviors for
the good of the company. Working past the end of one’s shift, completing work
tasks on off-days, spending time to help socialize new workers, and using per-
sonal resources for company needs are all examples of organizational citizenship
behaviors (OCB). Such behaviors have clear benefit for companies, as they help
improve the social atmosphere of the workplace, and they are broadly applicable,
in that a member of almost any work group within the organization can do them
(Borman & Motowidlo, 1993). However, performance of good-citizen actions is
in line with the structure of a social dilemma, in that acceptance of what appears
to most people as a short-term loss (of employee time, effort, and resources) is
needed in order to realize a long-term gain in organizational productivity, and
the best personal outcome is realized by letting others be the good citizens and
then capitalizing on the organizational benefits of their good deeds (Cropanzano
& Byrne, 2000; Joireman, Daniels, et  al., 2006; Joireman, Kamdar et  al., 2006;
see also Dunlop & Lee, 2004). Whether to give something more to the company
thus presents a dilemma. Some theorists split organizational citizenship into two
subconstructs:  behaviors that are more interpersonally-oriented, like altruism
(e.g., helping a customer on one’s day off ), and behaviors that are beneficial to
the organization as a whole, like civic participation (the worker offers to rep-
resent the company at a local function that occurs outside of work hours). In
the organizational literature these are typically referred to as OCBI and OCBO
respectively. It is, however, unclear whether any predictive accuracy is gained
from such a division, as opposed to a simple global measure of organizational
citizenship (LePine, Erez, & Johnson, 2002).
Organizational citizenship behaviors are very important to the modern corpo-
ration. Cropanzano and Byrne (2000) argue that companies in which employees
only do their assigned tasks, and nothing more, would struggle mightily to be pro-
ductive. As such, researchers focus primarily on the factors that dissuade people
from giving their extra time and resources to the company. Perhaps the most heav-
ily studied is perceived justice of the workplace. Basically, people are more willing
to cooperate and give of themselves if they have a sense that the organization is fair
and just. Which form of justice they focus on is the subject of some debate.
Research is unclear as to whether procedural justice or interactional justice is
the dominant influence. Procedural justice refers to the means by which outcomes
are assigned to people, and interactional justice refers to the sense of being fully
engaged in the process and respected by others. Both have been connected to gen-
eral cooperative behavior. Tyler and Blader (2003) argue that procedural justice
impacts one’s social identity, which in turn affects the extent to which one cooper-
ates with others. Interactional justice has similarly been shown to encourage more
equitable allocation of resources (Leung, Tong, & Ho, 2004). Regarding OCB spe-
cifically, studies tend to suggest that procedural justice is the key component for
encouraging good behavior (e.g., Ehrhart, 2004; Tepper & Taylor, 2003), though some researchers, rather surprisingly, find interactional justice to be the primary influence (e.g., Aryee, Budhwar, & Chen, 2002; Williams, Pitre, & Zainuba, 2002).
Still other researchers present evidence that both forms can have some degree of
influence (e.g., Rupp & Cropanzano, 2002). It is, however, clear that distributive
justice, which focuses on how outcomes are allocated across group members, is at
best weakly connected to OCB (Cropanzano & Byrne, 2000). Specifically, distribu-
tive justice seems to motivate OCB only when both interactional and procedural
justice are low (Skarlicki & Folger, 1997).
Regardless of which type of justice is influencing the person, current thought
is that perceived (in)justice arouses feelings of reciprocity; if the workplace is
treating me well (however that is defined), I should do something nice in return,
and if the workplace is treating me poorly, I should stop being nice, and turn to
“choosing for myself” or even retaliation (Cropanzano & Byrne, 2000; Rupp &
Cropanzano, 2002). Along these same lines, there is evidence that people with a
prosocial social value orientation get upset when they themselves are treated fairly
while others are treated unfairly (van Prooijen et al., 2012). This clearly overlaps with the role
of reciprocity in societal dilemmas, which, as we saw earlier, is a primal factor in
cooperative choice. Here, however, the person is reciprocating not the actions of
another person, but rather a general tendency of the collective: I have been treated
well by those who work here, and so I will respond in kind by doing nice things for
the collective, and vice versa for poor treatment. The idea of reciprocating a gen-
eral tendency within a societal dilemma has been discussed (Parks & Komorita,
1997) but not developed. As with the idea of ownership of information within a
knowledge-sharing dilemma, reciprocation within the organizational citizenship
dilemma has an added layer of complexity.

The Role of Leadership

Organizational citizenship researchers are also interested in the extent to which a leader can inspire workers to perform extra-role behaviors. Indeed, there is
some evidence that the quality of one’s relationship with the group leader is
more influential on OCB than is perceived justice (Settoon, Bennett, & Liden,
1996). Regardless, there is much evidence that liked leaders can inspire high
rates of OCB (LePine et al., 2002). Such leaders seem to convince others to see
the situation in terms of the collective; that is, they can get workers to think
about extra-role behaviors as actions that can produce benefits for everyone,
rather than as actions that do not produce the personal best outcome (Wang,
Law, Hackett, Wang, & Chen, 2005). This parallels research on construal levels
in social dilemmas, which shows that perception of the dilemma in abstract
terms (“What is the big picture?”) generally produces more cooperation than
perception in concrete terms (“How am I impacted right now?”) (De Dreu,
Giacomantonio, Shalvi, & Sligte, 2009). The leader is apparently helping the
workers to construe organizational citizenship as a broad-impact behavior. How
a leader does this is not clear, but charisma (Den Hartog, De Hoogh, & Keegan,
2007), transformational leadership (Wang et  al., 2005), and modeling of com-
petent leaders (Choi & Mai-Dalton, 1999) have all been suggested as important
influences.

Organizational Identification

Organizational identification addresses the extent to which a worker sees the
organization as part of his or her self-concept. Those high in organizational
identification feel a sense of “we-ness” with the company or group, and are thus
invested in the success of the group, as group successes are also personal suc-
cesses [Kramer, Hanna, Su, & Wei, 2001; Simon, 1991; see also Batson’s (1994)
discussion of “collectivism” as a motive for working for the public good]. By
contrast, those low in organizational identification feel no strong connection
to the group; their position is just a job, and nothing more. Organizational
identification is a positive influence on cooperation (e.g., Dukerich, Golden, &
Shortell, 2002; Feather & Rauter, 2004; Meyer & Allen, 1991), even when condi-
tions would otherwise discourage it (van der Vegt, van de Vliert, & Oosterhof,
2003). Once again, this meshes well with research into societal dilemmas, spe-
cifically that research showing the positive impact of group identification on
cooperative choice. Some researchers have further segmented commitment into
career commitment and team-oriented commitment, and shown that those
high in the latter make significant contributions to the work group’s task (e.g.,
Ellemers, de Gilder, & van den Heuvel, 1998). This distinction may overlap
with social value orientation, as career commitment emphasizes personal suc-
cess, and team commitment emphasizes collective success.

Membership Status

A number of studies have looked at the extent to which the security of one’s membership in the work group contributes to willingness to perform extra-role
behaviors. The connection between the two is intricate. Those who hold formal
membership in the group, but are threatened with loss of membership, are less
likely to be good citizens (Reisel, Probst, Chia, Maloles, & König, 2010), but
those who hold temporary membership are more likely to engage in extra-role
behaviors, apparently to try to convince others that they should be retained
(Feather & Rauter, 2004). As well, those who might like to leave the work
group, but who do not see exit as a reasonable option, are more cooperative,
apparently because they are trying to make their current situation as positive
an experience as possible (Hui, Law, & Chen, 1999). Interestingly, Bergeron
(2007) has reversed the relationship between the two variables, and suggested
that OCB may actually contribute to a sense of membership instability, because
given a perceived finite amount of resources, people will worry that the more
resources they allocate to “extra” behaviors, the less they will have for their
assigned job tasks. Thus, the paradoxical effect of being a good citizen could
be termination.

Individual Characteristics

There are also some individual-level characteristics that have been connected
to willingness to perform extra-role behaviors, though study of individual
differences and OCB has been surprisingly sparse. Empathic concern is per-
haps the most heavily studied of these variables, with greater concern being
related to more frequent performance of extra-role behaviors (see Bettencourt,
Gwinner, & Meuter, 2001). There is evidence that social value orientation pre-
dicts it, with prosocials being more willing to be good citizens than proselfs
(Penner, Midili, & Kegelmeyer, 1997; Rioux & Penner, 2001). At the personal-
ity level, conscientiousness is quite clearly influential (Borman, Penner, Allen,
& Motowidlo, 2003), and agreeableness and achievement motivation have also
been suggested as possible contributors (Neuman & Kickul, 1998). It is possible
that the motivation for performing extra-role behaviors alters as workers get
older (Wagner & Rush, 2000), a finding that complements the more general
notion that people are more likely to cooperate within a societal dilemma as
they age (Van Lange et  al., 1997). Finally, Dunlop and Lee (2004) found that
workplace deviants have disproportionate influence over others, and the pres-
ence of good citizens in the group is insufficient to offset the harm done by the
deviants. Thus, when confronted with both good and bad organizational citi-
zens, people seem to be more strongly swayed by the bad actors. Once again,
this meshes well with research on social dilemmas documenting the undue
influence of bad actors (Kerr et  al.,  2009).

Summary

Organizational citizenship is a behavior that all companies would like to encour-
age. The good citizen is someone who “goes the extra mile” and does the little
things that help the group succeed, usually with no explicit personal payoff for
the behavior. We have seen that OCB is well-described as a social dilemma,
and that procedural and/or interactional justice, leadership, and commitment to
and status in the group all influence to what extent the worker will be a good
citizen. Some individual differences may also relate to OCB, though much more
work is needed in this area. Some of these factors have also been connected
to societal dilemmas, but others have not. For example, it would be interest-
ing to know whether a temporary member of a societal group (for example,
as above, the probationary member of a farming co-op) is especially likely to
be cooperative. The OCB-as-social-dilemma idea can thus inform research on general social dilemmas, just as social dilemmas research can help to develop better interventions to encourage good citizenship.

■ NON-NORMATIVE WORK BEHAVIOR

Organizational behavior researchers have increasingly focused on the perfor-
mance of inappropriate behaviors in the workplace. The Prisoner’s Dilemma
has emerged as a popular tool for explaining many of these behaviors. The basic
argument is that a worker would like to behave appropriately, but either realizes
or observes that appropriate actions can be taken advantage of by less scrupu-
lous coworkers. The worker will thus either fear being exploited, or be tempted
to exploit his or her coworkers. Analysis of non-normative work behavior from
a social dilemma standpoint allows the theorist to suggest some interventions that can be implemented relatively easily. In this section, we will look at two such analyses: shirking of work tasks and unethical behavior.

Shirking

One of the oldest problems in organizational psychology is how to motivate
workers and keep them performing at a consistently high level. That it is still a
prominent research topic today tells us that no dominant solutions have been
identified. One problem that is addressed in this research is shirking, or the
purposeful reduction of effort toward an assigned task. Indeed, that research
continues into how to minimize social loafing (see Kerr & Tindale, 2004, for
a recent summary of social loafing research) suggests that shirking of work
duties is likely a manifestation of a more general tendency to reduce effort
whenever possible (see Murphy, Wayne, Liden, & Erdogan, 2003, for a simi-
lar argument.). Some organizational theorists have suggested that shirking of
work duties can be effectively controlled by introducing a Prisoner’s Dilemma
structure into the workplace, by means of a profit-sharing mechanism (Blasi,
Conte, & Kruse, 1996; Freeman, Kruse, & Blasi, 2010). In this way, the fact that
someone is loafing and taking advantage of the hard work of others negatively
impacts the outcomes of the coworkers themselves, because the hard workers
are being exploited. The assumption underlying this logic is that the coworkers
will impose punishments on the free-riders in order to get them to do their
fair share of work, and hence boost payoffs for everyone (except for the loafer,
whose outcomes will lessen). However, it is also the case that, in order to deter-
mine which colleagues to sanction, the hard workers will need to more closely
monitor everyone’s performances, and this will draw resources away from effi-
cient task productivity (Jones, 1995). As well, the sanction needs to be quite
severe, as a mild penalty may actually increase the rate of shirking, with loafers
seeing the penalty as a small price to pay for the privilege of taking advantage
of others (Tenbrunsel & Messick,  1999).
Note carefully the recommendation being offered here: It is to create a dilemma
where none exists. An obvious question to raise is whether doing so is defensible.
Throughout this book we have talked about methods for alleviating the dilemma,
so whether it is acceptable to purposely introduce one is a reasonable question.
While there is no objective answer, workplace ethicists feel that it is defensible,
because the long-term goal is to instill a sense of cooperation in everyone, a sense
that presumably does not currently exist (Scalet, 2006). So, at least with regard to
encouraging workers to consistently perform their duties, this situation may be
one in which we want to actively promote the existence of the social dilemma.

Ethical Behavior

Related to concerns about performing one’s duties is the concern about whether
the worker is performing his or her duties in an ethical manner. This has always
been an issue within the research on work motivation, but it has become more
prominent during the past decade, in light of a number of large-scale ethical vio-
lations at major world companies (e.g., News Corporation, Enron, WorldCom).
Theorists of ethical work behavior have argued that the decision of whether to act ethically can be modeled as a Prisoner’s Dilemma: It is best for the group if
everyone behaves ethically, but if everyone is indeed ethical, one can realize the
best personal outcome by committing an ethical violation (e.g., if no cowork-
ers steal, it will be quite easy for a person to walk off with company goods). Of
course, this fact is equally true for all other workers, and so if everyone acts on
the impulse, we then have a workplace with rampant unethical behavior (Tyson,
1990). Compounding the problem is that there is good evidence that people
tend to believe that, at least in work-related matters, they are more ethical than
their coworkers (Tyson, 1992), a phenomenon that probably occurs in several
social settings, with people believing that they are more moral and more honest
than other people (e.g., Van Lange & Sedikides, 1998). From a social dilemma
standpoint, this implies that actors should be aware of the potential for being exploited, which in turn makes them vulnerable to becoming unethical themselves. It is clear that people who engage in unethical behavior are sometimes
unaware that their actions conflict with their morals (Banaji & Bhaskar, 2000),
and, when they are aware, they are good at distorting their perceptions of their
actions, to convince themselves that the actions remain consistent with their
moral codes (Darley, 2004; Tenbrunsel & Messick, 2004). Resolving this particu-
lar dilemma thus seems especially challenging:  How can we convince someone
to be cooperative if he or she thinks his or her behavior already is cooperative?
An immediate suggestion is to strengthen, and perhaps make mandatory, ethi-
cal training for all workers. Analogous to the idea that educating people about a
social dilemma will foster cooperation, the notion here is that training workers on
business ethics will lead them to behave ethically. Unfortunately, whether training
in ethics counteracts the dilemma is debatable (Badaracco & Webb, 1995; James &
Cohen, 2004), and experimental research suggests that appeals to morality-based
standards of conduct are impactful only if the appeal is made by a leader who herself demonstrates self-sacrifice (Mulder & Nelissen, 2010). We thus need to
test other methods of encouraging workers to behave ethically when the tempta-
tion is strong to do otherwise. One immediate possibility is the employment of an
ethics “safety net” under which employees report any suspected ethical violation
to a central contact, who then assigns the case to the most appropriate author-
ity (Kaptein, 2002). In theory, because the safety net is so easy to use, more reports will be filed, and the ability to catch violators improves. From a social
dilemma perspective, the safety net increases the likelihood that the violator will
be sanctioned, which should reduce the temptation to exploit others. Given the
importance of ethical behavior in all aspects of society, this is potentially a quite
important application of social dilemma research, and in fact legal scholars have
made a similar argument. For example, questions exist about whether managed
health care represents an ethical Prisoner’s Dilemma, because a treatment might
benefit a patient but produce a net economic loss for the HMO, thus giving the
HMO an incentive to deny the treatment (Bloche, 2002; see also Blair & Stout,
1999, 2001).

■ KNOWLEDGE-SHARING DILEMMAS

Many organizational groups are charged with evaluating information and reach-
ing consensus on a course of action. In such groups, a primary task is to gather
and share the information. However, studies regularly show that people are gen-
erally reluctant to make the information that they have acquired broadly avail-
able (Cress & Kimmerle, 2007; Cress, Kimmerle, & Hesse, 2006). Why might
this be? In fact, the structure of an information-pooling task follows that of a
social dilemma. There is effort involved in obtaining information and then shar-
ing it with others, and adding one’s information to the set of publicly-presented
facts adds nothing to one’s ability to help make a good decision—the person
already knows the fact, so sharing does not enhance the person’s knowledge
base. By contrast, if the person remains silent, and listens to what everyone
else has to say, then his or her knowledge base increases greatly, at no cost.
Thus, regardless of what everyone else does, it is better to not share informa-
tion than to share. (This assumes there are no side benefits to be accrued from
sharing, like improvement of reputation, and that full disclosure of what one
knows is not a requirement of one’s job.) However, if everyone behaves in this
manner, then nothing is revealed, no one’s knowledge base is improved, and
the group output will be of poor quality. Knowledge-sharing is thus a particular
type of social dilemma, and more specifically, a form of a public-goods prob-
lem (Cabrera & Cabrera, 2002). While information-sharing occurs in all types of decision-making groups, the issue is of particular concern to organizational
theorists, as effective information flow is critical for both smooth functioning
of an entity with many subunits, as well as for innovation (Argote & Ingram,
2000). As such, much effort has been devoted to finding interventions that will
encourage employees to be more forthcoming with information.
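For readers who find a formal sketch useful, the dominance argument above can be expressed as a toy payoff function. All numbers below are hypothetical assumptions for illustration; the structure simply encodes the logic that sharing costs effort while one's own fact adds nothing to one's own knowledge.

```python
# Hypothetical payoffs for the knowledge-sharing dilemma (illustrative
# numbers): each fact shared by another member is worth v to me, sharing
# my own fact costs me c, and my own fact teaches me nothing new.

def knowledge_payoff(i_share, n_others_sharing, v=10.0, c=2.0):
    payoff = n_others_sharing * v  # I learn others' facts at no cost
    if i_share:
        payoff -= c                # sharing only costs me effort
    return payoff

# Whatever others do, withholding beats sharing by exactly the cost c...
for k in range(5):
    assert knowledge_payoff(False, k) > knowledge_payoff(True, k)
# ...yet in a four-person group, universal sharing beats universal silence.
assert knowledge_payoff(True, 3) > knowledge_payoff(False, 0)
```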
A challenge that immediately arises in this area, and which is unique to this type of social dilemma, is the question of who ultimately “owns” the information being shared. While a particular fact is held by a specific individual, it
can be argued that, if the fact was obtained while performing one’s job, the infor-
mation ultimately belongs to the organization. This adds a layer of complexity to
the dilemma that is not found in societal dilemmas. When deciding whether to
give money to a charity, for example, we do not first think about who really owns
the money. The ownership question does seem to impact willingness to share.
Constant, Kiesler, and Sproull (1994) found that how one answers the question of
ownership mediates willingness to share, with those who see the information as
the property of the organization being more forthcoming than those who see it as
their own. However, Jarvenpaa and Staples (2001) found more of a joint ownership
effect, in that people who saw the information as something that they “owned”
also believed that the information was owned by the organization; in other
words, because the information was acquired while the person was on the job, it
was also property of the workplace. They thus found more sharing by people who
saw the information as their own. Related to this is the impact of the information’s
value: People are less willing to share information that they deem valuable to oth-
ers, especially if the information seems more valuable to others than it is to oneself
(Kimmerle, Wodzicki, Jarodzka, & Cress, 2011). Thus, with information-exchange
dilemmas, people seem more focused on the qualities of the resource that they
are being asked to share than is typically seen in public goods paradigms. This
idea is consistent with research on managerial learning of new information, which
shows that managers are generally reluctant to receive new information from
internal sources, because it diminishes status differences—“If I had to learn some-
thing from a subordinate, it means the subordinate is as good as me” (Menon &
Pfeffer, 2003). It may be that people see possession of special knowledge as a kind
of status symbol; sharing that knowledge would make it less special, and thus less
status-enhancing.
Regarding factors that impact whether one will share what one knows, a pri-
mary focus has been on the organization’s climate, and more specifically, to what
extent the organization encourages and rewards its members for sharing what
they know. Such encouragement does seem to be important, with employees
being sensitive to whether information sharing is a normative behavior (Bock,
Zmud, Kim, & Lee, 2005; Cabrera, Collins, & Salgado, 2006; Cress & Kimmerle,
2007; Quigley, Tesluk, Locke, & Bartol, 2007). Females are especially sensitive to
whether the organization supports information sharing (Connelly & Kelloway,
2003). This normative influence is sufficiently strong that if the norm is for infor-
mation hoarding, an incentive to go against the norm and share knowledge is
largely ineffective (Wolfe & Loraas, 2008). People do also worry about losing their
unique role within the organization as a result of sharing what they know, in that
if everyone now possesses the person’s information, that individual may have no
further importance to the organization (Bock et  al., 2005; Lu, Leung, & Koch,
2006; Renzl, 2008).
As with regular social dilemmas, there are also some individual-level factors
that impact willingness to share information. Degree of interpersonal trust is per-
haps the most heavily studied, with many studies showing that willingness to share
is positively correlated with degree of trust in others (e.g., Butler, 1999; Hsu, Ju, Yen,
& Chang, 2007; Kimmerle, Cress, & Hesse, 2007), and theorists arguing that build-
ing of interpersonal trust is essential for effective knowledge-sharing (Abrams,
Cross, Lesser, & Levin, 2003). Social value orientation is also an influence, with
prosocials being more forthcoming with information than proselfs (Steinel, Utz, &
Koning, 2010; Wolfe & Loraas, 2008). In the realm of the big five personality traits,
agreeableness and conscientiousness both relate to sharing. High-agreeable people
are more likely to believe that sharing is important than low-agreeable people, and
high-conscientious people are more likely to document what they know than are
low-conscientious people. Actual sharing is then predicted from perceived impor-
tance and documentation (Matzler, Renzl, Mooradian, van Krogh, & Müller,
2011; Matzler, Renzl, Müller, Herting, & Mooradian, 2008; Mooradian, Renzl, &
Matzler, 2006). Finally, similar to the effects of endowment size on contribution
to a public good, people who hold a large amount of information tend to share
more than do people holding a small amount of information, though interestingly,
over-endowed people tend to believe that they do not need to share as much as
they can (Cress & Kimmerle, 2008).

Summary

The knowledge-sharing dilemma represents a very particular type of social
dilemma within an organization. It most closely resembles a public goods
problem, but contains some unique elements not found in societal dilemmas.
Given the importance of the free flow of information within a company, under-
standing how to regulate the dilemma is critical. We have seen that certain
aspects of the dilemma (perceived norms, trust) have received a decent amount
of attention, but there are many other aspects that are not well understood.
Additionally, whether aspects of a general public goods problem might also
manifest themselves in a knowledge-sharing dilemma needs investigation.

■ UNIONIZATION

One of the seminal connections between social dilemmas and the workplace was
Messick’s (1973, 1974) argument that the decision whether to form a labor union
can be modeled as a type of Prisoner’s Dilemma. If state laws allow for an open
shop (i.e., workers do not have to join a union that is present in their work-
place), then the best personal outcome is for everyone else to form the union
while one stays independent. Union-induced benefits cannot only be applied
to union members—if the union negotiates a raise in hourly pay, all workers
will receive it—so the person who remains independent will get the benefits,
but will not have to pay union dues. If everyone thinks like this, however, then
the union will not form, and benefits that can only be realized through united
action will not be received. Of course, if all workers join, then everyone gets the
benefits, but everyone has to pay dues. Finally, if just a small number of work-
ers attempt to form the union, then it will not succeed, but those workers will
lose whatever resources they put into the effort, and may well experience hostil-
ity from management for having tried to unionize. The unionization decision is
thus well-modeled as a social dilemma.
Marwell and Ames (1979) extended the logic, and argued that a union is actu-
ally a type of public good. A minimal number of members are needed to make it
exist, but once it exists, any benefits that it produces can be used by all workers
(again, assuming an open shop). This makes the unionization dilemma a type of
step-level good, in that as the critical minimum approaches, it becomes more rea-
sonable for unaffiliated members to consider joining. As Marwell and Ames (1981)
point out, it does not matter whether 4.9% or 49% of workers join the union, but
it does matter whether 51% or 49% join. However, once that 51% threshold is
crossed, it does not matter how many additional members the union acquires, as
full bargaining power has been achieved. This means that, assuming there is no
added bargaining power to be gained from a “sheer number of members” appeal,
late-joining members are irrelevant. Union members are thus confronted with the
possibility of free-riders in their midst.
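The step-level structure described by Marwell and Ames can likewise be sketched as a toy payoff function; the benefit, dues, and 51% threshold values below are our own illustrative assumptions, not empirical figures from the unionization literature.

```python
# Illustrative step-level payoff for the open-shop unionization dilemma.
# Benefit, dues, and threshold are assumed values for the sketch only.

def worker_payoff(joined, fraction_joining, benefit=100.0, dues=20.0,
                  threshold=0.51):
    union_forms = fraction_joining >= threshold
    payoff = benefit if union_forms else 0.0  # benefits go to all workers
    if joined:
        payoff -= dues                        # only members pay dues
    return payoff

# Below the threshold, 4.9% versus 49% makes no difference to a joiner:
assert worker_payoff(True, 0.049) == worker_payoff(True, 0.49)
# Once 51% join, a non-joiner free-rides on the full benefit:
assert worker_payoff(False, 0.51) > worker_payoff(True, 0.51)
```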
A primary interest of researchers is in trying to predict who is likely to join the
union (i.e., who will contribute toward provision of the good) and who is likely to
avoid union membership and free-ride on the union’s efforts (i.e., who will be self-
ish). It is important to note that researchers here distinguish between non-joiners
who are motivated by free riding and non-joiners who are just opposed to the
concept of unionization. From a social dilemma perspective, the distinction is
unimportant, because those who are uninterested in the public good would dis-
engage from it—those who do not contribute to public television because they
disagree with the idea of publicly-supported broadcasting presumably do not tune
into their local public station—but in the workplace such disengagement may
not be possible. Nonetheless, our focus in this section is only going to be on the
free-riders. We are not going to cover the research into why some people oppose
unionization. (Note that Klandermans, 2001, has performed a similar analysis of
participation in social movements as a public good.)
Perhaps the primary determinant of joining is concern about reputation.
Workers who are worried about what others will think of them if they do not join
tend to enlist in unions, even if they do not personally agree with unionization
(Chaison & Dhavale, 1992; Naylor, 1990), and it has been argued that whether the
union is provided or not depends critically on how many workers are concerned
with their reputation—no other factor will override it (Naylor & Cripps, 1993). To
that end, “positive reputation” has been defined by some theorists as an excludable
good that is provided by the union (Naylor, 1990), and it has been suggested that
unions can increase their attractiveness to those who are not concerned with repu-
tation by showing that they provide other excludable goods, like job security and
supplementary health benefits (Booth & Chatterji, 1995; Moreton, 1998).
From a social dilemma perspective, such an approach is akin to offering side
payments for cooperation:  Cooperate and we will give you some additional
outcomes besides those contained in the payoff matrix. As it can often be hard
for unions to identify credible excludable goods that they can provide (Booth
& Chatterji, 1995), and given that there is work documenting that high rates of
cooperation can be induced without resorting to side payments (Dawes et  al.,
1988), one wonders whether those approaches might also work in the union
situation.
Another factor in the decision to join is dissatisfaction with the current state
of affairs in the workplace. The more strongly dissatisfied the worker is, the more
likely he or she is to join the union (Charlwood, 2002; Hammer & Berman, 1981;
Klandermans, 1986). This finding offers an interesting parallel to the research
discussed previously on willingness to change how group members access a
resource, and suggests there is value in asking whether the cause of the dissat-
isfaction matters. We might predict that dissatisfaction with the system would
induce a desire to unionize, but dissatisfaction with specific members of manage-
ment would not. Along these lines, Fullagar and Barling (1989) found that the
dissatisfaction-joining connection is moderated by belief that the union can make
a difference in improving work conditions. This is potentially analogous to people
being willing to change access to a resource when its failure was due to struc-
tural problems, but not when failure was due to behavioral problems. People may
believe that the new system can make a difference under structural constraints, but
not when some group members are behaving irresponsibly.
Management and Organizations ■ 119

Interestingly, there is some evidence that something analogous to social value
orientation (SVO) might also impact the decision to participate in a union. Newton
and Shore (1992) suggest that some union members join because the union has the
opportunity to benefit all workers, while others join strictly to improve their own
outcomes. The former, termed “identifiers,” are deeply committed to the union
and see it as an effective tool for bringing about change, whereas the latter, termed
“instrumentals,” agree that the union can effect change, but their commitment to
it is low, and they prefer minimal and easy engagement with it. Instrumentals are
also thought likely to withdraw readily from the union if they do not believe
it is producing self-benefit quickly enough; such people are termed “disgruntleds.”
There are clear parallels between these various types of members and some of the
social value orientations. Identifiers share features with prosocials, in that both
see the dilemma in terms of benefit for the collective, and both consistently direct
resources toward the dilemma. Instrumentals are similar to “individualists”; they
see the dilemma only in terms of how it benefits themselves, and they will change
their behavior quickly if support for the dilemma is not producing a good enough
personal payoff. Further, just as social value orientations are grounded partly in
childhood experiences (Van Lange et al., 1997), beliefs about union efficacy are
also partly rooted in childhood, through observation of parental involvement in
and attitudes about unions (Barling, Kelloway, & Bremermann, 1991). It may thus
be that, just as contribution to a general public good is influenced by SVO, so is
participation in the specific public good of a labor union affected by a trait akin
to SVO.
We saw in an earlier chapter that a technique for enhancing within-group
cooperation is to create an intergroup public goods (IPG) situation, in which
groups compete against each other to provide a good. There is some evidence that
a similar process occurs within labor unions. Specifically, involvement in union
activities tends to be strong when management is seen as an adversary, and weak
when management is seen as a partner (Angle & Perry, 1986; Deery, Iverson,
& Erwin, 1994; Magenau, Martin, & Peterson, 1988). It can be argued that the
degree of disharmony between labor and management makes salient that the two
groups are competing to provide good outcomes for their members, and only one
group can win. This is the basic structure of the IPG. A fruitful exercise would
thus be to test whether the phenomena associated with the IPG also arise within
union-management interaction.

Summary

We have seen that labor unions can be considered a type of social dilemma,
more specifically a public good, and even more specifically a step-level public
good. Once a critical mass of union members is reached, additional members
likely add nothing, which introduces a temptation to let that critical mass form,
and then free-ride on their efforts to improve work conditions. The key fac-
tors that seem to drive the decision of whether to participate in the union are
concerns about the reputation one will have if one does not join, and degree
of dissatisfaction with the current work situation. Principles similar to these
can be found in mainstream social dilemma research, but they are variables
that have not received all that much attention. There may also be an influential
individual difference that has parallels to social value orientation. Extending
some of these union-specific variables to general social dilemmas could yield
some results of interest, as would the testing of basic social dilemma influ-
ences on union-joining decisions. Researchers after Messick (1973, 1974)  have
only sporadically conducted formal social dilemma analyses of unions (see
Klandermans, 2002, for one example), but such an analysis would be potentially
quite fruitful, for understanding of both unions and public  goods.

■ STRATEGIC ALLIANCES

At a more macro level is the notion of a strategic alliance, which arises when
two or more firms which might normally work against each other instead volun-
tarily join together to accomplish some goal (Parkhe, 1993). Strategic alliances are
well-modeled as a social dilemma:  While the goal is more easily accomplished
if all members of the alliance work together, the payoff from goal achievement
will be divided among the alliance members, so each member would be better
off investing less in the alliance than the others (McCarter & Northcraft, 2007;
Zeng & Chen, 2003). For example, in 2008 the American auto companies joined
together to appeal to the U.S. government for loans that would help the com-
panies repair their finances. This was strategic because a plea from three compa-
nies would likely be more persuasive than a single company asking for money.
However, it would also be more beneficial for any one company to invest less in
the alliance than the other two, because that company could then right itself more
quickly than the other two, and become profitable while the other two are still
struggling. Despite their at-times considerable advantages, real-world strategic alli-
ances have a fairly regular history of failing to live up to their promise (Gottschlag
& Zollo, 2007), which has prompted researchers to study why this happens. That
the alliance is a form of social dilemma seems a promising answer to the ques-
tion, and experimental research has been able to document social dilemma-like
properties within a simulated alliance (Agarwal, Croson, & Mahoney, 2010).
Research into the dynamics of strategic alliances indeed finds factors at work
that also occur in interpersonal dilemmas. For example, the alliance is strength-
ened as reciprocity and trust between members develops (Muthusamy & White,
2005), though choice strategies other than Tit-for-Tat may be more effective at
strengthening the alliance (Arend & Seale, 2005). There is evidence that both pro-
cedural and interactional justice play a role in determining the strength of the
alliance (Luo, 2007). In a manner similar to social value orientation, the extent to
which a partner worries that other members of the alliance will free-ride impacts
its own participation in the alliance. Recall from Chapter 4 that competitors
assume that everyone is competitive and will readily exploit if given a chance.
Similarly, those who suspect alliance partners will eventually try to exploit the alli-
ance tend to reduce their own involvement in the alliance, even if no exploitation
has actually occurred (Rockmann & Northcraft, 2008). Structurally, the alliance
is often characterized by both social and environmental uncertainty. The social
uncertainty is grounded in the fact that any one partner cannot really know what
the other partners are planning, and the environmental uncertainty stems from
the fact that it is impossible to know whether the alliance will succeed. It may be
that all of the time, effort, and willingness to be vulnerable is for naught (McCarter,
Mahoney, & Northcraft, 2011). We have seen that uncertainty plays a major role in
determining social dilemma choice among individuals, so this factor is yet another
parallel between strategic alliances and regular dilemmas.
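The reciprocity dynamics mentioned here, including Tit-for-Tat, can be sketched as a repeated two-firm Prisoner's Dilemma. The payoff values below are the standard illustrative ones (T=5, R=3, P=1, S=0); they are our assumption, not taken from the alliance literature:

```python
# Payoffs to (row, column) for each pair of moves: C = cooperate, D = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, other_history):
    # Cooperate first, then copy the partner's previous move.
    return 'C' if not other_history else other_history[-1]

def always_defect(my_history, other_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return the two cumulative scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

# Mutual Tit-for-Tat sustains cooperation (3 points per round for each firm);
# against an unconditional defector, Tit-for-Tat is exploited only once.
assert play(tit_for_tat, tit_for_tat) == (30, 30)
assert play(tit_for_tat, always_defect) == (9, 14)
```

The comparison illustrates why reciprocity and trust strengthen an alliance, and why alternative strategies might still outperform strict Tit-for-Tat under other assumptions.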
A caution that must be raised here is that, with strategic alliances, we are talk-
ing about groups interacting with groups, rather than individuals with individuals.
A key tenet of groups research in general is that behaviors seen at the individual
level do not necessarily occur when those individuals are grouped and collective
performance is measured (Hinsz, Tindale, & Vollrath, 1997; Wildschut et al., 2003).
Thus, as we think about strategic alliances as social dilemmas, we need to be careful
to not automatically assume that principles that are well-established in individual
social dilemmas will also occur between alliance members. A considerable amount
of additional research is needed into the dynamics of alliances-as-dilemmas, but
this seems a promising application of social dilemma research to a feature of orga-
nizations. Notably, political scientists have considered the issue of international
alliances from a Prisoner’s Dilemma perspective (e.g., Conybeare, 1984; Palmer,
1990), and it would be useful to also ask to what extent the dynamics of these very
large-scale dilemmas also occur in smaller, organizational alliances.

■ BASIC ISSUES

It should be apparent that the workplace social dilemma is a new topic. We
have identified some forms that the dilemma can take, and seen that some
research into each form exists. But given the newness of the topic, there are
also some quite basic issues that require attention.

Are Corporate Ethics a Social Dilemma?

We saw that individual worker ethical behavior is a form of social dilemma. What
about the larger-scale ethical climate within an organization? As we mentioned ear-
lier, individual behavior does not necessarily repeat itself at the collective level, so
there is no guarantee that the ideas we discussed about individual ethical behavior
would work at the level of the corporation. At a purely practical level, investigat-
ing corporate ethics within a social dilemma framework would help bring struc-
ture to research into the issue. In a seminal article, Donaldson and Dunfee (1994)
criticized approaches to business ethics as being grounded in either philoso-
phy (“Companies should do these things”) or empiricism (“Companies tend to
do these things”), but not both. Progress has been made since then, but simi-
lar criticisms continue to be leveled (e.g., Scherer & Palazzo, 2007). A  social
dilemma approach can help along these lines, because the approach encom-
passes both what should be done (to achieve long-term cooperation), and what
actually is done. It may turn out to be that macro-level ethical behavior is not
well-described as a social dilemma, but it is worth testing the proposition.
Theoretically, there are some interesting connections between corporate ethics
and social dilemma-related issues. For example, corporations that have a strong eth-
ical climate also tend to show high levels of corporate social responsibility (CSR), or
involvement in their communities (Joyner & Payne, 2002). If we treat CSR as a form
of cooperative, group-regarding behavior, the implication is that an ethical business
culture contributes to macro-level cooperation. However, experts in corporate law
have argued that CSR itself presents a Prisoner’s Dilemma, specifically a conflict
between maximizing profit and being community-regarding: If all other companies
engage in their community, then Company X could exploit this and withdraw from
CSR actions, allowing it to make a maximal profit while competitors accept a
lesser profit. Further, pursuing CSR at the expense of profit maximization could put
the company in legal jeopardy with its shareholders (Eisenberg, 1998). While this
may seem disheartening, note the involvement of a social dilemma structure in the
argument. Perhaps application of some of the ideas that we have discussed in this
book could help improve our understanding of corporate ethics.
The dynamics of corporate ethics, then, exist at a level of complexity beyond
individual ethics. If we agree that it is important for all entities to adhere to some
ethical code, then it follows that we need a clear understanding of how corporate
ethics develop and are implemented, and that such an understanding is going to
be especially hard to achieve. It seems that a social dilemma approach offers much
potential.

Can Social Dilemma Approaches Help Explain Workplace Deviance?

Organizational theorists generally treat failure to be a good organizational citizen
as a type of workplace deviance (e.g., Lee & Allen, 2002; Dunlop & Lee, 2004;
Robinson & Bennett, 1995). Such deviance can be as mild as chronically show-
ing up late for work, or as severe as workplace theft or sabotage. Supporting this
connection is the fact that some variables that are connected to cooperation seem
also to predict engagement in workplace deviance. For example, just as with social
dilemma behavior, workplace deviance is predictable from the person’s levels of
agreeableness and conscientiousness (Colbert, Mount, Harter, Witt, & Barrick,
2004), and performance of such behaviors is influenced by what coworkers are
doing—an individual who is surrounded by deviant coworkers tends to also engage
in frequent deviance (Robinson & O’Leary-Kelly, 1998). Given this, one wonders
whether workplace deviance in general can be approached as a type of social
dilemma. Arguments along these lines can be found. For example, some scholars
suggest that workplace theft can be seen as a type of Prisoner’s Dilemma (e.g.,
Aquino, Grover, Goldman, & Folger, 2003; Scalet, 2006) and, similar to a dilemma,
that forgiveness can play a role in minimizing the temptation to steal (Aquino
et  al., 2003; Bradfield & Aquino, 1999). A  reasonable question to ask is whether
some of the interventions that encourage at least temporary cooperation within a
social dilemma can also act to reduce the temptation towards deviance.
Solution of workplace deviance problems is not a trivial issue. It is estimated
that such behavior is not only the fastest-growing problem behavior in the
workplace, but the fastest-growing problem behavior in American society (Henle,
2005). Theorists have no real ideas about how to minimize it, let alone prevent it.
Adopting a social dilemma approach certainly will not lead to prevention—we still
need interventions that make temporary tendencies toward cooperation perma-
nent. However, as has been seen in our earlier chapters, we do have some good
tools for bringing about at least short-term cooperation. For many organizations,
such interventions would be an improvement over what is done now to address
workplace deviance, and social dilemma researchers in turn might learn some-
thing from the problem of employee deviance that leads to creation of interven-
tions that produce longer-lasting cooperation.

Do Social Dilemmas at the Workplace Call for Democratic Management?

This question relates to a classic theme in social and organizational psychology.
Indeed, there have been several studies which compared the effectiveness
of democratic and autocratic leadership, and a leadership style often referred
to as laissez-faire. Democratic leadership involves procedural (and often inter-
actional) justice, in that the workers are involved in group decision processes.
In contrast, in autocratic leadership, it is typically the leader alone, or part of a
subgroup (e.g., management team, executive committee) that makes decisions,
without much or any involvement of the workers. Laissez-faire leadership
leaves most decisions up in the air, in that no clear direction comes from a
leader, and the expectation is that the group manages itself.
Classic research provides some evidence for the effectiveness of democratic
leadership, and recent research suggests that involvement in decision-making pro-
cedures (voice) is essential to the trust that people have in leaders. And it is quite
likely that this element of democratic leadership also promotes effective leadership
in managing social dilemmas. In groups and formal organizations, workers should
trust leaders who use incentives—such as rewards and punishment—for motivat-
ing people to contribute to organizational productivity and success (Balliet et al.,
2011b). Such enhanced trust may also shape a culture in which organizational citi-
zenship behavior operates as the norm, in which free-riding is collectively disap-
proved, and in which fraud and destructive forms of deviance would be unthinkable.
The above is not to deny that sometimes autocratic leadership can be quite
effective. For example, when the group faces an urgent social dilemma crisis, or
a situation in which nobody wants to give (but only take), then it might be very
important to utilize some autocracy in norm enforcement. This is what Hardin
(1968) recommended when he argued for “mutual coercion, mutually agreed
upon” solutions to solve the problem of cooperation. Likewise, sometimes groups
are quite able to self-organize. As we have seen in some of the earlier chapters, it is
interesting to see that, with some regularity, groups are quite able to promote and
sustain healthy levels of cooperation. And clearly, if that is true, then leaders may
serve several other roles—such as providing general direction—but the role of
managing social dilemmas should be relatively small, and take the form of “global
monitoring” rather than direct forms of “norm enforcement.”
Taken together, democratic leadership involving procedural justice seems quite
functional overall. Procedural justice, especially involvement in decision-making
procedures, communicates trust, respect, and concern with collective well-being
(Tyler, DeGoey, & Smith, 1996), and it may reduce feelings of uncertainty regard-
ing the management of social dilemmas (Van den Bos & Lind, 2002). These are
important psychological needs. And perhaps because of these needs, it is possible
that democratic, rather than autocratic, leadership tends to support group stabil-
ity—the commitment that group members have toward the organization and their
fellow group members (e.g., Van Vugt et al., 2004).
But still, sometimes other types of management may also be quite effective.
From a social dilemma perspective, it can be argued that some aspects of auto-
cratic leadership or even laissez-faire can be functional in some situations: It largely
depends on how well the group already copes with social dilemmas of various
kinds (independent of other features such as the urgency of social dilemma).
Moreover, it is often overlooked that for management to be effective, it needs to be
sufficiently supported by workers in an organization. Even when some elements
of autocratic leadership are called for, it would seem important to communicate
well and clearly, so that new rules and procedures are free of bias and appeal to the
needs and concerns of all group members.

■ SUMMARY AND CONCLUSIONS

We have seen that a variety of aspects of workplace functioning fit the logic
of a social dilemma. As a primary emphasis of organizational researchers is in
understanding shortfalls in work productivity, the social dilemma framework
offers quite a bit of potential, if we equate productivity with cooperation. The
question of how long-term cooperation can be maximized then becomes equiv-
alent to asking how long-term productivity can be maximized.
Studying workplace dilemmas as social dilemmas also offers advantages to the
social dilemma theorist. The organization is an unusual setting in that it contains
many features one does not normally see in a typical real-world dilemma: There
are third parties who stand to be affected by the erosion of cooperation; group
members could be removed from the group by an authority because of lack of
cooperation; the complexity of the organization often makes it easier for free-riders
to stay hidden; the entity being contributed is often less tangible than money or
participation in a well-defined task. That basic principles of social dilemmas occur
in the workplace thus adds generality to the body of knowledge about dilemmas.
Research into workplace social dilemmas should thus be encouraged as beneficial
for both social dilemma researchers and organizational psychologists.
7 Environment, Politics, Security and Health

Social dilemmas are everywhere around us. As social creatures, humans fre-
quently encounter cooperation problems at home, in their community, in the
workplace, and in society at large. Sometimes these social dilemmas involve
just two people, such as a husband and wife sharing the burdens of child-
care, whereas at other times millions or even billions of people are involved
with problems such as international security and global climate change. For
some real-world social dilemmas, the solutions seem fairly straightforward, for
instance, a husband and wife could make a reciprocal arrangement to pick up
their kids from school. Other social dilemmas require rather more complex
solutions. For instance, an international treaty such as the Kyoto Protocol to
address the problem of global climate change includes a combination of strate-
gies involving financial incentives, punishment, changes in social norms, and legal
and institutional changes (Dietz, Ostrom, & Stern, 2003; Van Vugt, 2009).
It is important to realize that studying social dilemmas is not merely a theoretical exer-
cise. It is of course highly important to work out the mathematical assumptions
underlying the dilemma games and develop the procedural details of the laboratory
experiments. Nevertheless, it must be recognized that some of the most pressing
problems facing society today regarding the environment, public health, politics,
and international security are, in fact, social dilemmas. Understanding the psychol-
ogy of cooperation and defection within these social dilemmas is crucial for solving
these problems and improving the welfare of society and the fate of the planet.
In this chapter, we look at a few of the most pressing collective problems that we
as a community, society, nation, and planet are confronted with today through the
lens of social dilemma theory and research. These examples illustrate how social
dilemmas permeate modern life, and how they can be solved. The challenges for
solving these problems are threefold.
A first challenge is that the problem needs to be broadly recognized as a conflict
between self-interest and collective interest—generally, as a social dilemma. Many
cooperative problems in society are not being solved because people do not rec-
ognize them as social dilemmas. For instance, various public health issues such as
smoking, unsafe sex, and vaccination programs are in fact social dilemmas because
they involve negative externalities, such as the health risks of passive smoking
or the contagion risks if many people choose not to be inoculated
against infectious diseases. At the same time, what sometimes looks like a social
dilemma is upon closer inspection quite a different social challenge. For example,
some collective problems involve a lack of coordination rather than cooperation.
Such coordination problems can be solved by adopting a simple rule—for instance,
Mark Van Vugt had primary responsibility for preparation of this chapter.


some countries have chosen to drive on the right side of the road and others on the
left side. In each case, the pay-off structure underlying the social dilemma must
be carefully analyzed. If we fail to identify whether a particular problem is either
a continuous or a step-level public goods problem, this might affect
the effectiveness of particular solutions. For instance, there were problems with the
tsunami-relief effort in Asia in 2004 because campaigns to raise donations were so
successful that the organizers could not spend the money effectively and a lot of
money ended up in the wrong hands. It would have been better to set a cap on the
amount of money needed and focus the activities on repairing the infrastructure of
the destroyed coastal areas in Indonesia and Thailand (Van Vugt & Hardy, 2010).
A second challenge is to appreciate the complexity of real-world social dilem-
mas. Many real-world problems contain a mixture of different dilemma games.
Researchers often make a distinction between public good dilemmas and com-
mons dilemmas. Public goods dilemmas require individuals to make an active
contribution to establish or maintain a collective good such as building a local
bridge or joining a social movement. There is clearly a collective interest, and usu-
ally these dilemmas include non-excludable goods, because once they have been
provided everyone can enjoy them and this does not affect the quality of the good.
Conversely, resource dilemmas—also known as commons dilemmas or CPRs
(common pool resources)—require individuals to make sacrifices to preserve
a common resource such as a communal garden or a water reservoir. Resource
dilemmas usually involve a greater risk of harming others (rival goods) because
using the resource affects the quality for others.
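The structural contrast between these two classes of dilemmas can be sketched computationally; all numbers below (the multiplier, contributions, and stock size) are illustrative assumptions:

```python
def public_goods_round(contributions, multiplier=1.6):
    """Continuous public good: contributions are pooled, multiplied, and
    shared equally among all players (non-excludable). Returns each
    player's net return: the equal share minus what he or she put in."""
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return [share - c for c in contributions]

def commons_round(harvests, stock):
    """Common-pool resource: each harvested unit is rival—what one player
    takes is no longer available to the others or to the future stock.
    Returns (individual payoffs, remaining stock)."""
    total = sum(harvests)
    if total > stock:  # over-harvesting: the resource collapses
        return [0.0] * len(harvests), 0.0
    return [float(h) for h in harvests], stock - total

# In the public good, a free-rider (contributing 0) outearns contributors:
payoffs = public_goods_round([10, 10, 10, 0])
assert payoffs[3] > payoffs[0]
# In the commons, collective over-use destroys the resource for everyone:
_, remaining = commons_round([30, 30, 30, 30], stock=100)
assert remaining == 0.0
```

The first game rewards withholding a contribution; the second punishes collective over-extraction, which is the sense in which resource dilemmas carry a greater risk of harming others.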
In reality, the distinction between these two classes of social dilemmas is often
blurred and many real-world problems are hybrid social dilemmas. For instance,
environmental management requires that people make active contributions to
protect the environment, for instance, through paying eco-taxes as well as refrain
from consuming scarce resources such as water and energy (Van Vugt, 2009). It is
good to realize that there are psychological differences associated with framing a
problem as either a public good or a resource dilemma, and these differences have implications for
the effectiveness of particular strategies (Van Dijk & Wilke, 1995).
A third challenge is that there is usually not one strategy—a magic bullet—to
solve a real-world social dilemma. To tackle a problem such as tax evasion requires
a combination of different activities which tap into the different reasons why peo-
ple evade taxes. For instance, people may not pay their taxes either because they
do not believe that their money is spent wisely, or because they can get away with
not paying, or because they have difficulties filling out tax forms. Different people have
different reasons why they do not cooperate in a social dilemma and therefore it
requires a combination of strategies to foster cooperation.

■ STRATEGIES AND MOTIVATIONS IN SOCIAL DILEMMAS

The literature often draws a distinction between structural and individual solu-
tions to social dilemmas. This distinction was originally proposed by Messick and
Brewer (1983) in a seminal paper. Structural strategies try to foster cooperation
through changing the actual pay-offs in the dilemma, altering the choice options,
or creating institutions. Typical examples include reward and punishment strat-
egies, carving up a resource into smaller units (privatization), or appointing a
leader or authority to regulate access. Individual solutions do not alter the actual
pay-offs of the dilemma, yet they increase the psychological salience and
attractiveness of voluntary cooperation. For instance, communication about a
cooperative problem enhances people’s understanding and personal responsibility
for solving it (Dawes, 1980).
A different way to differentiate between social dilemma strategies is to focus
on the core psychological motive they tap into. This approach is based on the idea
that if we know what motivates people to behave as they do in a social dilemma, we
can change their behavior. We distinguish between four core motives of decision
making in social dilemmas (Van Vugt, 2009):  understanding, belonging, trust-
ing, and self-enhancing. This distinction is inspired by Susan Fiske’s Core Social
Motivations Model (Fiske, 2004) and Weber et al.’s (2004) Logic of Appropriateness
Model. Each of these core motives (understanding, belonging, trusting, and
self-enhancing) informs a different kind of strategy (information, identity, institu-
tions, and incentives), which together we refer to as the 4xI-framework for solving
social dilemmas (cf. Van Vugt, 2009).
Understanding refers to a basic human motive to make sense of the world around
us and to manage uncertainties. To increase understanding of social dilemmas pri-
marily requires information strategies with activities such as education, (social)
learning, feedback, and monitoring to promote cooperation. Humans also have
a pervasive need to belong and feel connected to others. This motive gives rise to
identity strategies, which improve people’s ties to their residential community or
workplace, for instance, through local community-based initiatives (Weber et al.,
2004). Trusting refers to a basic need to engage in mutually cooperative relation-
ships. This motive gives rise to institutional solutions to improve trust in others’
cooperation, for instance, by developing norms and rules to deter free-riding or
establishing clearly identifiable group boundaries in managing social dilemmas.
Often, the effectiveness of these strategies requires trust in authorities (Tyler &
Degoey, 1995). Finally, people are motivated to seek rewards and avoid punish-
ments. This self-enhancing motive gives rise to incentive strategies that appeal
primarily to people’s self-interest, such as financial rewards for cooperation and
penalties for defection.
We will now take a look at various pressing social dilemmas in our society
through the lens of the 4xI framework. We acknowledge, of course, that there
are other ways to categorize different social dilemma strategies. First, we exam-
ine environmental sustainability as a cooperative problem because this problem
has been studied extensively in social dilemma research across the social sciences.
We then proceed to various other cooperative problems in society that have been
identified as social dilemmas in areas such as politics, volunteering, international
security, and public health. Finally, we draw some lessons from these case studies
to tackle a host of other social dilemmas in society.
128 ■ Applications and Future of Social Dilemmas

■ ENVIRONMENTAL SUSTAINABILITY AS COOPERATIVE PROBLEM

One of the more pressing social dilemmas concerns the protection of the natu-
ral environment and natural resources (Gardner & Stern, 2002). Many envi-
ronmental problems are social dilemmas because they entail a conflict between
individual and collective interests. For instance, when people make efforts to
save domestic energy or recycle their garbage, they incur a net cost. Yet, if not many others follow their example, the benefits of their efforts will be negligible, as they will have little impact on the overall sustainability of the resource.
Many environmental problems have the underlying structure of a tragedy of the
commons (or a resource dilemma), as we discussed in an earlier chapter.
Garrett Hardin, who introduced the term “Tragedy of the Commons” in a famous
article in Science (1968), had an environmental problem in mind. He tells the story
of how the management of a communal pasturage by a group of herdsmen turns
into an ecological disaster when each individual, upon realizing that adding extra
cattle benefits him personally, increases his herd, thereby (intentionally or unin-
tentionally) causing the destruction of the commons. The tragedy of the commons
has become central to our understanding of many local, national, and global eco-
logical problems. As an evolutionary biologist, Hardin argued that nature favors
individuals who exploit common resources at the expense of the more restrained
users. He also argued that voluntary contributions to create institutions for man-
aging the commons often fall short because of (the fear of) free-riders. To save
the commons, Hardin therefore recommended “mutual coercion, mutually agreed
upon” which essentially involves electing a central authority that regulates people’s
access to the commons.
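Hardin's logic can be sketched with a toy payoff model (all numbers below are hypothetical, chosen only for illustration): each of N herders gains b per extra animal, while the grazing damage c per animal is shared equally by all. Whenever c/N < b < c, adding cattle is individually rational no matter what the others do, yet universal adding leaves everyone worse off than universal restraint.

```python
# Toy model of Hardin's commons (illustrative numbers, not from the text).
N, b, c = 10, 1.0, 5.0  # N herders; b = private gain, c = shared damage per animal

def payoff(my_adds, others_adds):
    """A herder's payoff: private gain minus an equal share of total damage."""
    total = my_adds + others_adds
    return b * my_adds - c * total / N

# Adding an animal pays off for the individual whatever the others do...
assert payoff(1, 0) > payoff(0, 0)
assert payoff(1, N - 1) > payoff(0, N - 1)

# ...yet mutual restraint beats mutual adding: the tragedy.
print(payoff(0, 0))      # all restrain: 0.0
print(payoff(1, N - 1))  # all add:     -4.0
```

The dominance of adding cattle disappears once the damage is no longer shared, which is why Hardin's “mutual coercion, mutually agreed upon” works by changing the individual cost of exploitation.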
Hardin’s article inspired a large body of research into factors contributing to the
preservation of shared environmental resources, including much applied research
into various environmental problems such as the conservation of resources like
water and energy, recycling and transportation (Joireman et al., 2004; Penn, 2003;
Samuelson, 1990; Van Lange, Van Vugt, Meertens, & Ruiter, 1998). Here is an
overview of the main results of these research programs.

■ RESOURCE CONSERVATION

Information. One of the problems in persuading people to conserve scarce
resources such as energy or water is that people generally lack an understand-
ing of how their actions are linked together to produce a particular collective
outcome. Therefore strategies conveying information about the state of the
resource, referred to as reducing environmental uncertainty, seem to work
well. Reducing uncertainty fosters sustainable use, because most people are
optimistic about the future and underestimate the damage their actions are
doing to the environment (Budescu et  al., 1990; Opotow & Weiss, 2000).
Managing environmental resources therefore depends first and foremost on
gathering reliable resource information, for example, about fish stocks and
energy supplies.
Environment, Politics, Security and Health ■ 129

Research further shows that local environmental information works better
than global information, in part because it is easier for people to appreciate envi-
ronmental risks if there is a visible link between their actions and immediate envi-
ronmental outcomes and they feel they can personally contribute something to
alleviate the problem (personal efficacy; Kerr, 1989). To illustrate the importance
of information, one of us was involved in a study among households in Britain
during an acute water shortage in the summer of 1997. First, perceptions about
the severity of the crisis predicted efforts to conserve water (Van Vugt, 2001; Van
Vugt & Samuelson, 1999). People’s knowledge about the cause underlying the
drought made a difference, too. When people believed the shortage was caused
by other households taking too much, they conserved less water than when they
believed it was caused by unusually warm weather (Van Vugt, 2001). In addition,
people make more efforts to conserve when they feel more efficacious and believe
their contribution makes an actual difference in alleviating the crisis (Staats, Wit,
& Midden, 1996). Finally, people step up their voluntary contributions to save a
common resource if they think that there is a huge risk that they will lose every-
thing if the resource collapses (Milinski et al., 2006).
Identity. These findings suggest that information strategies aimed at improving
one’s understanding of the problem can help. Yet mobilizing people also requires that they see their fate as interdependent with that of others. Humans iden-
tify strongest with primary groups such as friends and family, and therefore an
appeal to the interests of those primary groups is more persuasive than appealing
to some abstract notion of humanity. For instance, messages with appeals to kin-
ship such as “please think of your children’s future” raise environmental coopera-
tion (Neufeld et al., 2011).
The more people identify themselves with their group, the more they are con-
cerned about their reputation. Environmental pressure groups routinely apply
“naming and shaming” campaigns to force polluting organizations to change
their policies, and with some success. For instance, McDonald’s discontinued its polystyrene packaging of burgers after massive grassroots protests in various cities
across the United States (Gardner & Stern, 2002). Research shows that giving peo-
ple a sense that they are being watched—by displaying eyes on posters—reduces
littering in public places (Griskevicius, Cantu & Van Vugt, 2012).
A word of caution about identity strategies is that they can be a double-edged
sword. Group identities are often tribal, and this can be both a force for good and
bad in promoting environmental sustainability. Research in real world commons
shows that if resources are shared between two or more communities—such as
river irrigation systems or sea fisheries—there is a greater risk of depletion because
it induces intergroup conflict (Ostrom, 1990). In such cases, fostering a superor-
dinate group identity—for instance, promoting trade between the communities or
accentuating a common threat such as the collapse of the local economy—might
be a better alternative (Van Vugt, 2009).
Institutions. A  third strategy conducive to successful resource conservation is
creating legitimate commons institutions. Leaders and authorities play a key role in
governing local and global environmental resources, but who is prepared to trust and
empower them? Research shows that authorities must employ fair decision-making
rules and procedures to be effective at promoting conservation. Regardless
of whether people receive bad or good outcomes, they want to be treated fairly
and respectfully. A study on the 1991 California water shortage (Tyler & Degoey,
1995) showed that Californians cooperated with local water authorities in imple-
menting drastic water saving measures if they believed the authorities made efforts
to listen to their concerns and provided accurate, unbiased information about the
shortage. Procedural concerns are particularly important for residents with a strong
sense of communal identity. A survey of the 1994 British railway privatization found
that train users who did not trust private companies to look after this public good
were more likely to take cars instead (Van Vugt, 1997). Thus, trust in institutions
plays a crucial role in managing urgent and complex environmental challenges.
Incentives. Finally, appeals to self-interest in the form of reward and punishment
strategies are conducive to changing environmental behavior (Samuelson, 1990).
Monetary incentive schemes (e.g., subsidies) have been effective in fostering the
adoption of expensive home-saving devices such as solar panels, water meters,
and roof insulation. Financial incentives also promote sustainable practices within
industry. An example is the highly effective system of tradable environmental
allowances (TEA) in the United States. This scheme permits companies to buy and
sell “pollution” credits, which is believed to have contributed to a decline in acid
rain (Dietz et al., 2003).
In promoting environmental cooperation, reward and punishment might work
best in combination with other strategies. Research shows that there are important
individual differences in the weight individuals assign to self-interest, and there-
fore reward and punishment schemes work better among individuals who are pri-
marily concerned with (economic) self-interest such as individuals with proself
value orientations. Also, individual incentive schemes might work better among
people who do not feel very strongly connected to their community. In a study on
the effect of financial incentives on domestic water use, members of households
were asked to complete a short community identity scale (Van Vugt, 2001) with
statements such as “I feel strongly attached to this community” and “There are
many people in my community whom I think of as good friends” (1 = strongly
disagree, 5 = strongly agree). Water records (corrected for various demographic
variables and previous use) showed that households which identified strongly with
their community consumed less water regardless of whether they had a meter.
This implies that economic incentives work better when core belonging needs are
unfulfilled. Yet incentive schemes may be counterproductive if they undermine
other core needs. For instance, handing out small fines for littering might signal
that the problem is more widespread (trust) than it actually is, or transform it
from an ethical-environmental issue into an economic issue (understanding; cf.
Tenbrunsel & Messick, 1999).

■ COMMUNITY RESOURCE MANAGEMENT AS SOCIAL DILEMMA

How do communities manage resource dilemmas? The Nobel Prize-winning
political scientist Elinor Ostrom (who passed away in 2012) and her research
group at Indiana University studied various cases of success and failure in the
management of local communal resources. In her classic book Governing the
Commons (1990) she described various examples of resource management
projects such as water irrigation systems, fisheries, and cattle grazing, and used
these to draw some general design principles for successful community resource
management. Ostrom was primarily interested in community management sys-
tems in which resource users devise their own management rules, accept the
rules voluntarily, and have the power to collectively change them. From study-
ing these systems, she concluded that communities are actually much better at organizing themselves to prevent a tragedy of the commons than originally
suggested in Hardin’s article.
Ostrom focused on the sustainability of common-pool resources. A common-
pool resource is one that is large enough geographically to make it difficult to
exclude individuals from benefiting from its use. Sustainability is a mark of suc-
cessful management because renewable resources such as grasslands, forests,
and fisheries replenish themselves at a limited rate and overuse can cause their
depletion. Ostrom looked at renewable resources in which substantial scarcity existed and relatively small numbers of individuals depended heavily on the
resource. Ostrom found that success in developing long-lasting sustainable community management systems depends on a combination of four factors: (1) characteristics of the resource, (2) the community using the resource, (3) the rules they develop, and (4) the actions of government at regional and national levels. A social psychological analysis suggests that these conditions are important because they tap into the four primary motives for decision making in social dilemmas: understanding, belonging, trusting, and self-enhancing.
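The sustainability constraint underlying these cases can be captured in a minimal sketch (all figures hypothetical): a renewable stock regrows at a limited rate each season, so a community's harvesting rules succeed only if total harvest stays at or below regrowth.

```python
# Minimal renewable-resource sketch (hypothetical figures): constant regrowth
# per season, a fixed community harvest, and collapse once the stock hits zero.
def simulate(stock, regrowth, harvest, seasons):
    for _ in range(seasons):
        stock = max(0.0, stock + regrowth - harvest)
        if stock == 0.0:
            return 0.0  # depleted
    return stock

print(simulate(100.0, 10.0, 8.0, 50))   # harvest below regrowth: stock persists
print(simulate(100.0, 10.0, 12.0, 50))  # harvest above regrowth: depletion
```

In real commons, regrowth itself shrinks as the stock declines, which makes overuse even more punishing; the constant-regrowth assumption here is purely for illustration.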
A first condition for successful community resource management is that the
resource is controllable locally. This means that the resource has clearly identifi-
able boundaries, that resources stay within these boundaries, and that changes in
the resource can be monitored. For instance, fish stocks in lakes or coastal areas
are easier to monitor and control than fish stocks in open seas. Furthermore, com-
munal resources are more likely to be sustained once users realize there is a threat
of depletion due to overuse. Information campaigns play an important role in con-
veying this information.
A second factor determining the success of community resource management
has to do with the characteristics of the group of users. Sustainable communities
have rather small and stable populations with relatively few individuals moving in
and out and with many members placing a high value on the preservation of the
common resource. Stability is important because such communities are character-
ized by dense social networks and strong social norms about how people ought
to behave. Ostrom refers to this as social capital. Successful communities are also
those in which there are easy, low-cost ways of sharing information and resolving
conflicts. In the absence of a small and stable community of users, it thus becomes
paramount to develop identity strategies to increase social connections between
individuals.
A third condition for community resource management is the availability of
appropriate incentives, rules, and procedures. Successful community resource
management is characterized by rules that limit resource exploitation by excluding outsiders and controlling the level of resource use by insiders. Ostrom finds
that rules work better if people have had a say in making them and they are per-
ceived as equitable and fair. Successful rules also have built-in incentives for com-
pliance so that rule following has benefits that override the temptation to defect.
For instance, in the lobster fisheries in Maine (USA) all lobstermen agree to spend
their summer on land repairing their equipment, trusting that no one will go
out poaching lobsters. Successful community resource management also applies
penalties for defection that are easy to administer and graduated in severity.
A fourth factor conducive to successful community resource management is
the role of local and national government. Central government can help commu-
nity management by giving local rules legal status, by providing legal assistance to
resolve conflicts and by providing support for monitoring the resource. Ostrom
reports numerous cases in which central government officials who were respon-
sible for resource management accepted bribes or political favors in return for
allowing some individuals to take more than their share of the resource.
The four conditions outlined by Ostrom nicely map onto our social psychologi-
cal model of resource conservation in the sense that a combination of strategies
aimed at increasing one’s understanding of the problem (information), belonging-
ness to the community (identity), trust in institutions and in each other (institu-
tions), and finally, appropriate rules, rewards and sanctions (incentives) work best.

■ TRANSPORTATION AND MOBILITY

One of the more complex social dilemmas in society concerns transportation
and mobility. Massive car (and more recently plane) travel is one of the greatest
air polluters in the world and greatly contributes to the depletion of nonrenew-
able resources such as oil and gas. But there are other negative externalities
too in the form of noise pollution, road and space requirements, and public
safety. Individuals are better off using their cars whenever they need to, but
from a societal viewpoint it would be better if there were fewer cars on the
road. Yet each individual car’s contribution to air and noise pollution is negli-
gible because it is shared with so many other road users, and so there is little
incentive for people to give up driving. This is a classic cooperative problem
with the features of a social dilemma. Yet, transportation also poses a coordina-
tion game (e.g., how to avoid a traffic jam) and the combination of these two
elements makes the problem particularly difficult to solve.
Information strategies may have very different effects depending upon people’s
understanding of the dilemma. In one study (Van Vugt et al., 1995) car drivers
received scenarios about a route they could take by either car or train. They also
received information about what the majority of travelers planned to do. People
who viewed the transport dilemma as an environmental problem followed the
majority and took the train if the majority went by train. Yet when the majority was traveling by train, a significant number of people took the car instead.
These travelers realized that there would be no congestion on the road and hence
their travel time would be quicker. They interpreted the game as a coordination
problem. The message is that it is important to consider people’s understanding of
a situation if we want to change their behavior. Other research shows that people
with a prosocial disposition or a longer time orientation view transport issues as
environmental dilemmas, whereas people with a selfish disposition or a short time
orientation view such problems as coordination games (Joireman et al., 2004; Van
Vugt et al., 1995).
To reduce the negative externalities of private car use, institutional strategies
that limit polluting options have been found to be highly effective. Examples are
the compulsory installation of cleaner engines in cars and the removal of leaded
gas from pumps. Incentive schemes also work well. When the City of London
implemented a system to charge people driving their cars into the city a daily fee of
$10, car use dropped by 20%. Providing rewards in the form of separate car lanes
for people sharing their car (carpoolers) to reduce congestion also works well,
and many of these lanes operate successfully in major cities around the world. Yet
incentive schemes work better if the authorities implementing them are perceived
as fair and legitimate.
When, in 1993, the Dutch government built a special lane for carpoolers along one of the busiest highways in the Netherlands, it cut their travel times substantially. So this was a real incentive for people to
stop driving alone. Yet single drivers reacted strongly against the lane, and after
widespread protest and a legal challenge, the lane closed within a year. Survey data
suggested that many drivers did not trust the intentions of the authorities because
they were paying high road taxes and regarded it as particularly unfair that only
cars with three occupants could use the lane (Van Vugt et al., 1996).
In sum, the effectiveness of information strategies to reduce car use depends
upon people’s understanding of the transport dilemma. Institutional strategies
to influence travel use work better if people trust the authorities, and incentive schemes are effective if they are perceived as fair. Strategies tapping into
people’s belongingness needs—such as connecting people living in the same area
through car-sharing schemes—seem promising alternatives.

■ SOCIAL DILEMMAS IN POLITICS

Researchers have looked at other real-world social dilemmas beyond the envi-
ronment. One of these concerns political activism. Politics is a public goods
dilemma because citizens give up some autonomy to create institutions (law,
army, police) to manage different kinds of social problems in society such
as security, crime, antisocial behavior, poverty, and unemployment. People’s
self-interested choice is to not contribute to upholding law and order, but of
course, if nobody does, then society as a whole will break down and everyone
will be worse off.
Voting. One of the more salient political social dilemmas concerns voting
behavior. When people cast their vote in an election or referendum they incur a
net cost, and yet their impact on the outcome of the election is negligible. It is very
tempting to free-ride on the efforts of others and yet if no-one casts their vote then
governments operate without legitimacy and everyone will eventually be worse
off. Although it is rational not to vote, many people still do; this is known as the voter’s paradox (Garman & Kamien, 1968). Why?
A social dilemma analysis of voting reveals a number of reasons why people
turn out and vote. One important factor is people’s understanding of the criticality
of their vote. In general, supporters of minority parties perceive their vote as more critical than do supporters of majority parties (Ledyard & Palfrey, 2002), and this explains why election
results are generally less clear-cut than forecasters predict based on polling results.
Voting also serves a belongingness purpose because people derive benefits from
supporting particular candidates and parties that they identify with. To increase
voting rates, some governments directly appeal to people’s self-interest by mak-
ing voting compulsory. If an eligible voter does not attend a polling place, he or
she may be subject to punitive measures such as fines, community service, or even
imprisonment. As a result, turnout is higher in countries that have adopted com-
pulsory voting such as Australia, Belgium, and Singapore. Some countries hand
out penalties if people do not cast their vote. In countries such as Brazil, Peru, and
Greece, if a person fails to vote in an election, they are barred from obtaining a
passport until after they have voted in the two most recent elections. In Turkey, if
an eligible voter does not cast their vote in an election, then they pay a fee of about
five Turkish lira (about $8 USD). Thus, both incentive and institutional strategies
can contribute to solving the social dilemma of political voting.
What kind of institutional changes would people vote for in solving a commons
dilemma? Naturally, people have a stronger preference for change when a com-
mons is being depleted, but it is interesting to consider what kind of rule change they prefer.
A research program by Messick, Samuelson and others shows that people prefer an
equal division of the common resource over other solutions such as appointing a
leader or an authority that regulates access to the commons (Rutte & Wilke, 1985;
Samuelson, Messick, Rutte, & Wilke, 1984). This suggests that users want to retain
some autonomy in the commons.
Tax Paying. Income tax paying is a standard example of a public good dilemma,
especially when taxes are collected through the procedure of filing tax returns as in
most Western countries. When filling out an income tax form it is in the interest
of individual citizens to under-report the amount of income they have received so
that they are taxed less heavily. Yet, if many taxpayers adopt this strategy, then this
means that many valuable public goods in society such as schools, libraries, health
care, and the police force are underfunded, leaving everyone worse off. This is not
a hypothetical problem. The massive budget problems in European countries such
as Greece, Italy, and Spain in the last few years (since 2009) are in part due to a
lack of compliance with tax regulations, especially among the wealthy citizens of
these countries. Tax evasion rates in Western countries indeed vary quite dramati-
cally. Webley, Robben, Elffers and Hessing (1991) report tax evasion percentages
varying from 1% to 40%. Therefore, it is interesting to look at the problem of tax
evasion from a social dilemma viewpoint.
Traditional approaches to reduce tax evasion focus on increasing incentives for compliance through punishment and deterrence. More frequent and better audits, higher fines, and closer scrutiny of taxpayers who have been caught once directly reduce the temptation to defect and generally increase compliance rates. Yet such punitive systems are costly to operate, and therefore tax authorities
have looked at other, less expensive ways to induce tax compliance. To increase
compliance, they have introduced much simpler tax forms which people can easily
comprehend (Elffers, 2000). In some countries, tax authorities also give feedback to taxpayers on what their taxes are being spent upon, so that people feel that there is a more direct link between their actions and the outcomes in terms of the
provisions of public goods.
People’s need to belong can also be invoked to induce tax compliance through
activating personal and social norms (Wenzel, 2004). People generally do what
they believe the majority of people do and therefore it is important to convey
information—provided it is true—that the majority of people report their income
honestly. In addition, people are also more likely to imitate prestigious, high-status
individuals, and so it is in the interest of tax authorities to scrutinize the tax forms
of highly public figures and “name and shame” them if they defect. Incentive
schemes also seem to be effective, and they can even increase trust in tax authori-
ties. For instance, tax authorities in the Netherlands collect a provisional tax during the year, but when the tax is overestimated, taxpayers get a monetary refund.
Research suggests that the combination of an easy-to-fill-out tax form and a potentially considerable tax refund has increased public trust in the tax system as
well as tax compliance (Elffers, 2000).
Volunteerism and Social Movements. Every year millions of people around
the globe volunteer to devote substantial time and energy to help others, for
example, providing companionship for the elderly, tutoring children with learn-
ing problems, organizing activities at local sports clubs, or participating in social
and political movements. According to a 2010 survey, 62.8 million adults in the
United States perform volunteer services each year, contributing a total of 8 billion hours (Volunteering in America, 2012). Volunteerism is a classic example of a
public good dilemma. It is in everyone’s interest that the sick and needy in soci-
ety are being cared for and that there are religious, sports, and leisure activities
which people can participate in. Yet, at the same time for any particular indi-
vidual it is attractive to use such services if they need to, but contribute nothing
to maintain them.
A social dilemma approach suggests that there are different strategies that can
be used to promote volunteering in society. These strategies should be tailored
to the particular psychological motives that people have for volunteering. Social
psychological research on volunteerism suggests that people volunteer for many
different reasons (Omoto & Snyder, 2002). These can be neatly grouped into the
four primary motives for cooperation in social dilemmas: understanding, belong-
ing, trusting, and self-enhancing. Some people volunteer to get a greater under-
standing of a particular problem. Another common motivation for volunteering is
a concern with a particular social grouping or community that one identifies with
(e.g., a religious person helping in a local church). Related to this, some people
do volunteer work as a means to express their personal and humanitarian values
as caring individuals, trusting that others will do the same for them if they need
help. Finally, many volunteers report self-enhancing reasons such as benefits for
their personal career and development, making friends, and feeling better about
oneself. Of course, many volunteering activities are driven by combinations of
these motives.
To promote volunteering, information strategies could focus on the under-
standing motive: Doing volunteer work helps one better understand a particular
problem (e.g., joining Greenpeace might help a person better understand global
environmental problems). Identity strategies might attract volunteers for particu-
lar causes that people strongly identify with. For instance, many gay people are
interested in volunteering with AIDS victims because gay individuals are a risk
group for contracting HIV/AIDS. A downside of such strategies is that volunteers
run the risk of being stigmatized (Omoto & Snyder, 2002). Incentives might be
useful in attracting volunteers whose motivations are primarily self-interested.
Volunteering is a good way to strengthen one’s career prospects and to expand one’s social network. Interestingly, research has found that when such self-enhancing motives are salient, people persist longer in their volunteer activities than people primarily motivated by other needs (Clary et al., 1998). Finally,
institutional changes may be needed to increase volunteering. A tactic that vari-
ous leisure organizations nowadays employ is to only let people enjoy a particular
service if they contribute to its upkeep. For instance, many sports and leisure clubs
now require members to sign a contract where they promise to get involved in
“volunteer” activities.
A special form of volunteerism is joining a social movement. In his well-known
book The Logic of Collective Action, economist Mancur Olson (1965)
used the example of labor unions as an illustration of a public good dilemma.
Employees may be greatly in favor of having a union represent their interests in
negotiations with employers over wages and working conditions. However, they
have no interest individually in paying the cost of union participation and would
rather free-ride and let others pay for this service. Olson argued that the dilemma
can only be overcome by making union membership compulsory—an institu-
tional strategy—or by providing selective incentives for members. In his words “It
is certain that a collective good will not be provided unless there is some coercion
or some outside inducements” (p. 44).
There may be alternative strategies for getting people to join a social move-
ment, according to research (Klandermans, Van der Toorn & Van Stekelenburg,
2008). One important motive relates to people’s understanding of the situation: Is
it possible to change a situation through participating in collective action? Feelings
of personal efficacy are greatly enhanced if people know that many other people
feel the same. In the recent Arab Spring protests, it was conveyed via social media
(Twitter, Facebook) that dissatisfaction was widespread and that many individuals
in many different cities were taking to the streets. This information played a key
role in people’s decision to join the movement. Trust in authorities also matters in
joining a protest. Once people feel grievances because certain moral norms have
been violated by authorities (e.g., human rights violations, abortion laws) they are
more likely to engage in protest (Klandermans et al., 2008).
Also, it makes a difference to what extent people self-identify with a particular
social group or community that is being affected. Research suggests that if people
strongly identify with a particular social cause, they will join a social movement
regardless of the individual costs of participation (Simon et  al., 1998). Identity
strategies which tap into people’s motivation to belong to a particular group should
therefore be highly effective in fostering collective action. A study on consumer
boycotts in the United States, inspired by a social dilemma perspective, showed
that the likelihood of participation in a boycott was influenced by both the likeli-
hood of the boycott’s success and the extent to which people identified with the
movement (Sen, Gurhan-Canli, & Morwitz, 2001).
In sum, a broad range of political behaviors can be viewed through the lens of
social dilemma theory, revealing many interesting insights into what drives people
to volunteer for good causes or vote in elections, for instance. Furthermore, by
looking into the primary motives for political and social action, this approach
offers a number of promising strategies to foster cooperation.

■ THE COOPERATIVE PROBLEM OF INTERNATIONAL SECURITY

The analysis of international conflict and warfare has greatly benefited from
a social dilemma analysis. One of the classic case studies concerns the arms
race between the Soviet Union and the United States during the Cold War in the
middle of the last century. Well-known game theorists such as Anatol Rapoport and
Thomas Schelling argued that the race to acquire nuclear weapons could easily
be conceived of as a Prisoner’s Dilemma game. In this game, each country has
a choice between building up its nuclear weapon arsenal or disarming, which
is the cooperative choice. Arming dominates disarmament because no
matter what the other country does, it is better to arm. If the Soviets arm, then
the United States should also arm to keep up, resulting in an outcome called
MAD (mutually assured destruction) in which both countries could obliterate
each other. If the Soviet Union disarms, the United States can gain a strategic
advantage by continuing to arm.
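The dominance argument can be made concrete with a short sketch. The payoff numbers below are purely illustrative (our own assumptions, not values from the original game-theoretic analyses); only their ordering matters:

```python
# Illustrative payoffs for the arms-race Prisoner's Dilemma, from the
# perspective of one country. Keys are (own choice, other's choice);
# the numbers are hypothetical and only their ordering matters.
PAYOFF = {
    ("disarm", "disarm"): 3,  # mutual disarmament: good for both
    ("disarm", "arm"):    0,  # unilateral disarmament: strategic disadvantage
    ("arm",    "disarm"): 4,  # unilateral armament: strategic advantage
    ("arm",    "arm"):    1,  # mutual armament: costly standoff (MAD)
}

def dominates(action_a, action_b, payoff):
    """True if action_a is at least as good as action_b against every
    opponent choice, and strictly better against at least one."""
    opponent_choices = {other for (_, other) in payoff}
    diffs = [payoff[(action_a, o)] - payoff[(action_b, o)] for o in opponent_choices]
    return all(d >= 0 for d in diffs) and any(d > 0 for d in diffs)

print(dominates("arm", "disarm", PAYOFF))  # True: arming dominates disarming
```

Although arming dominates, mutual armament (1, 1) leaves both countries worse off than mutual disarmament (3, 3), which is the defining tension of the dilemma.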
The social dilemma primarily lies in the costs of the arms race. Mutual arma-
ment is much more costly than mutual disarmament. Thus, both countries are bet-
ter off disarming, but neither is willing to trust the other to do so. At the time, many
different experiments were conducted to analyze how actors behaved in such arms
races. It was found that a Tit-for-Tat strategy, in which a country started first with
a cooperative move—disarm—and then mimicked the choices of their opponent
elicited the most cooperation (e.g., Guyer, Fox & Hamburger, 1973).
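Tit-for-Tat itself is simple to simulate. The sketch below, using the same illustrative payoff ordering (not the parameters of the original experiments), shows how the strategy sustains cooperation with a cooperator while being exploited only once by a persistent defector:

```python
# Repeated Prisoner's Dilemma: "C" = cooperate (disarm), "D" = defect (arm).
# Payoff tuples give (player 1, player 2) outcomes; values are illustrative.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 4),
          ("D", "C"): (4, 0), ("D", "D"): (1, 1)}

def play(strategy_1, strategy_2, rounds=10):
    history_1, history_2 = [], []
    score_1 = score_2 = 0
    for _ in range(rounds):
        move_1 = strategy_1(history_2)  # each strategy sees the other's past moves
        move_2 = strategy_2(history_1)
        p1, p2 = PAYOFF[(move_1, move_2)]
        score_1, score_2 = score_1 + p1, score_2 + p2
        history_1.append(move_1)
        history_2.append(move_2)
    return score_1, score_2

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

always_defect = lambda opponent_history: "D"

print(play(tit_for_tat, tit_for_tat))    # (30, 30): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 13): exploited only in round one
```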
Some fifty years later, we can conclude that this is what happened. Both the United
States and particularly Russia came to realize they could no longer afford spend-
ing excessive amounts of money on developing their nuclear weapons. Through
a number of bilateral treaties, which increased trust in each other’s cooperation,
both countries have reduced their nuclear weapon arsenal considerably. Yet, other
countries are still involved in a nuclear arms race, such as India and Pakistan, and
North and South Korea.
In addition to developing trust, it seems that it is important to know each coun-
try’s understanding of the conflict. A content analysis of political speeches made
by American and Russian leaders revealed that they perceived the arms race more
138 ■ Applications and Future of Social Dilemmas

as a coordination game than as a Prisoner’s Dilemma (Plous, 1985). This was fur-
ther confirmed in a survey of U.S. senators, who indicated their preference rank-
ing of the outcomes in a 2x2 game in which the two countries, the
United States and Soviet Union, each had an option to disarm (cooperation) or
arm (noncooperation). Their preferences showed that, more than anything, both
countries wanted to disarm (Plous, 1985). This information is extremely useful to
convey to political leaders because it suggests that a cooperative solution is much
easier to achieve.
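The difference between the two readings of the arms race lies entirely in the preference ordering over the four outcomes. A small sketch (ranks are illustrative, with 4 = most preferred):

```python
# T = unilateral armament, R = mutual disarmament, P = mutual armament,
# S = unilateral disarmament. Ranks (4 = best) are illustrative.
prisoners_dilemma = {"T": 4, "R": 3, "P": 2, "S": 1}  # textbook arms-race reading
coordination_game = {"R": 4, "T": 3, "P": 2, "S": 1}  # pattern Plous (1985) reports

def arming_dominates(ranks):
    # Arming dominates iff it is better when the other disarms (T > R)
    # and better when the other arms (P > S).
    return ranks["T"] > ranks["R"] and ranks["P"] > ranks["S"]

print(arming_dominates(prisoners_dilemma))  # True: defection is the safe choice
print(arming_dominates(coordination_game))  # False: mutual disarmament can be stable
```

If leaders' true preferences follow the second ordering, mutual disarmament is an equilibrium, and the problem becomes one of coordination and trust rather than dominance.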
Nevertheless, some research suggests that individuals acting as elected representa-
tives or leaders of their group or country defect more often in a social dilemma
between groups than ordinary group members do (Reinders Folmer,
Klapwijk, De Cremer, & Van Lange, 2012). Groups are inherently more competi-
tive than individuals (Wildschut et al., 2003), and so it is very important for group
leaders to try and develop an intimate, personal relationship with each other so
that they see each other not just as representatives of their group.
Warfare. An important insight from social dilemma theory about conflict
and warfare between countries is that it poses a cooperative problem within each
country (Van Vugt et al., 2007). Going to war is essentially a public goods dilemma
that individuals and societies face. Each individual actor would be better off not
participating in warfare because there is a huge potential cost, the risk of injury or
death. Yet, from the group’s perspective, it may sometimes pay to get many people
to sign up for war because of the spoils of a victory over a rival group or nation.
The analysis of inter-group conflict from a social dilemma perspective has been
given a huge boost by the experiments conducted by Bornstein and colleagues
(Bornstein, 1992; Bornstein & Ben-Yossef, 1994). They created an inter-group
Prisoner’s Dilemma game in the lab to model warfare decisions. In these games,
individuals could either keep an endowment to themselves or they could invest it
in their group. The group with the higher number of contributors would be victori-
ous, and only individuals in the winning group would receive a pay-out.
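The payoff rule of such an inter-group game can be sketched as follows. The endowment and prize values here are our own illustrative assumptions, not Bornstein's actual parameters:

```python
# Inter-group Prisoner's Dilemma sketch: each member either keeps an
# endowment or invests it; the group with more contributors wins, and only
# members of the winning group receive the prize. Values are illustrative.
ENDOWMENT, PRIZE = 5, 10

def payoffs(group_a, group_b):
    """group_a / group_b: lists of booleans (True = invest the endowment)."""
    a_wins = sum(group_a) > sum(group_b)   # on a tie, neither group wins
    b_wins = sum(group_b) > sum(group_a)
    def pay(invested, wins):
        return (0 if invested else ENDOWMENT) + (PRIZE if wins else 0)
    return ([pay(i, a_wins) for i in group_a],
            [pay(i, b_wins) for i in group_b])

# Two of three members in group A invest, one of three in group B:
print(payoffs([True, True, False], [True, False, False]))
# → ([10, 10, 15], [0, 5, 5]): the free-rider in the winning group earns most
```

Note that the free-rider in the victorious group does best of all, which captures the within-group dilemma nested inside the between-group conflict.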
Getting individuals to contribute to war efforts may depend upon how they
view themselves. Arguably, the stronger people identify with their group, the
more likely they are to contribute. Trust also matters: If people believe not many
others will join them, why should they? Finally, incentive and institutional strate-
gies could solve this dilemma. If groups can ensure that the benefits of the loot
will go to the people who actively contributed, then there is less temptation to
free-ride. In addition, punishing defectors or deserters with imprisonment or
even execution—as they do in some countries—is a powerful deterrent against
free-riding. These strategies could be particularly focused on males because they
are historically the warriors in their group (McDonald, Navarrete, & Van Vugt,
2012). Research on the male warrior hypothesis shows that when a group is in
conflict with another group, men start contributing more to their group (Van
Vugt et al., 2007).
Both inter-group and intra-group conflicts carry features of a social dilemma.
To promote cooperation requires analyzing the key motives that guide the actions
of individuals and groups in these problems. A complicating factor is that groups
and group representatives are often more competitive than ordinary individuals,
which makes it difficult to solve problems of warfare and international security. It
seems that developing trust is a key factor.

■ SOCIAL DILEMMAS IN PUBLIC HEALTH

Some public health issues can also be identified as social dilemmas because
many health-related behaviors carry negative externalities. This may not always
be obvious. For instance, smoking, binge drinking, unprotected sex, or exces-
sive eating seem to be largely individual problems of self-control and temporal
discounting. Nevertheless, the consequences of these behaviors also affect other
people (e.g., passive smoking, unsafe sex), thus making them essentially coopera-
tive problems.
Infectious Diseases and Vaccinations. One of the most effective ways to prevent
the spread of infectious diseases is through vaccinations. Vaccination campaigns
have been very successful and have led to the eradication of many terrible dis-
eases around the world, such as polio, cholera, and typhus. Yet, vaccinating poses
an interesting social dilemma (Henrich & Henrich, 2007). From the perspective
of society, it is essential that many people get inoculated, because if a majority
of people has been immunized the disease is unlikely to spread (in preventive
medicine this is known as “herd immunity”). Yet, getting a vaccination involves
a small risk for the individual, because in sporadic cases the person might get ill
or even die from the vaccine. In addition, if a large number of people within a
population has been vaccinated against a particular disease, then there are fewer
benefits of getting vaccinated for any particular individual as the risks of infection
are negligible.
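This calculus can be captured in a simple expected-cost comparison. All numbers below are illustrative assumptions, not epidemiological estimates:

```python
# A sketch of the individual's vaccination decision: a small fixed cost of
# vaccinating versus an infection risk that shrinks as population coverage
# rises (herd immunity). All parameter values are made up for illustration.
VACCINE_COST = 1.0          # illustrative cost/risk of the vaccine itself
INFECTION_COST = 50.0       # illustrative cost of catching the disease
BASE_INFECTION_RISK = 0.10  # infection risk when nobody is vaccinated

def expected_cost(vaccinate, coverage):
    """Expected cost to one individual, given population coverage in [0, 1]."""
    if vaccinate:
        return VACCINE_COST  # simplification: the vaccine fully protects
    infection_risk = BASE_INFECTION_RISK * (1.0 - coverage)
    return infection_risk * INFECTION_COST

for coverage in (0.0, 0.5, 0.9):
    best = ("vaccinate" if expected_cost(True, coverage) < expected_cost(False, coverage)
            else "free-ride")
    print(coverage, best)  # at 0.0 and 0.5 coverage: vaccinate; at 0.9: free-ride
```

Once coverage is high enough, free-riding becomes individually rational even though universal free-riding would let the disease return: precisely the structure of a social dilemma.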
A social dilemma approach points to some interesting strategies to the vaccina-
tion dilemma. A particularly effective institutional strategy is to make it manda-
tory. However, this can be seen as a violation of basic human rights. In various
religious groups around the world, vaccinations are seen as interfering with the
work of God, and so these communities are not willing to comply with mandatory
vaccination programs. Not surprisingly, whenever there are outbreaks of diseases
such as polio or rubella, it usually affects children in close (religious) communities
where vaccination rates tend to be low. It is very important to increase people’s
understanding of the problem and so providing accurate information is crucial.
Some years ago, an article in The Lancet (1998) claimed to have found a link
between vaccinations of children and the onset of autism. The article was subse-
quently retracted because of methodological problems. Nevertheless, it caused
substantial damage, and vaccination rates for children plummeted in the United
Kingdom after the first publication of these results.
Institutional strategies have been effective in the widespread adoption of vac-
cination programs. In the United States, for instance, children cannot attend state
schools unless they have received all their childhood vaccinations (Henrich &
Henrich, 2007). Furthermore, families with children who do not get their vac-
cinations get stigmatized and ostracized—these are powerful means to increase
compliance. Because vaccination poses an important, large-scale social dilemma,
the best way to sustain cooperation and prevent defection is through the right
combination of legal changes, incentives, information, and identity solutions.

■ BASIC ISSUES

In this chapter, we have shown that a variety of societal problems can be fruit-
fully analyzed through adopting a social dilemma approach. From environmental
and health problems to cooperative challenges regarding international security
and warfare, social dilemma theory gleans new insights into the social causes
underlying these problems as well as interventions to tackle them. A  range of
other cooperative problems in society could potentially also benefit from a social
dilemma analysis, such as the prevention of crime and antisocial behavior, file
sharing on the Internet, and child care and relationship well-being. Space limita-
tions prevent us from delving deeper into these dilemma problems here.

Are all societal problems social dilemmas?

Not every societal problem is a social dilemma. Social dilemmas should not be
over-recognized. For each problem, we must very carefully analyze its pay-off
structure to see if it fits with the definitions of a social dilemma. If so, we
should examine what sort of social dilemma we are dealing with—is it a public
good or commons dilemma or perhaps a mixture of the two? Furthermore,
we should examine people’s understanding of the dilemma, for instance, some
users perceive a transport dilemma as essentially an environmental problem
whereas others perceive it as a problem of coordination. The pay-off structure
of a particular dilemma, and the way people perceive the dilemma, determines
what kinds of strategies will be most effective.
In terms of tackling real-world social dilemmas, we have drawn a distinction
between four kinds of strategies that each tap predominantly into one core psycho-
logical motive underlying decision-making in them:  understanding, belonging,
trusting, and self-enhancing (Fiske, 2004; Van Vugt, 2009). The first two strate-
gies, information and identity, are individual solutions because they do not change
the actual pay-off structure underlying the dilemma but rather, make cooperation
psychologically more appealing. For instance, people are more likely to conserve
resources when there is a threat of depletion, and contribute to a common good for
a group that they strongly identify with. The other two, incentive and institutional
strategies, actually change the dilemma structure either by increasing the benefits
of cooperation and costs of defection or through changing the decision-making
environment, for example, by creating choice options (e.g., a separate lane for car-
poolers) or removing them (e.g., children who are not vaccinated cannot go to
school).

How do social dilemma strategies interact?

Many questions remain in developing solutions to the many cooperative prob-
lems in society. First, do these different strategies reinforce each other or do
they cancel each other out? There is some evidence, for instance, that incen-
tive strategies actually undermine people’s intrinsic motivation to contribute to
a public good as their understanding of the problem changes (Mulder et  al.,
2006). This phenomenon is also known as crowding out (Frey & Jegen, 2001)—
the idea that extrinsic motivation eradicates intrinsic motivation (Deci et  al.,
1999). As an example, researchers found that the introduction of financial pen-
alties for picking up kids late from nurseries actually increased noncompliance
rates. The parents were reframing the dilemma as an individual economic prob-
lem. By paying extra, they believed they were entitled to pick up their kids later
(Gneezy & Rustichini,  2004).
Second, identity strategies that aim to influence people’s belongingness needs
via social incentives may backfire if it appears that only a small minority of people
are, in fact, showing the desired cooperative behavior. Because people generally
want to belong to the majority, a message such as “only 5% of people in this com-
munity recycle their garbage, and that’s why we want you to change your behavior”
is going to be highly counterproductive. As an illustration, a sign at Petrified
Forest National Park in Arizona attempts to prevent theft of petrified wood by
informing visitors about the regrettably high number of thefts each year. Field
experiments have shown that this antitheft sign depicting the prevalence of theft
actually increased theft by almost 300% (Cialdini, 2003; Griskevicius et al., 2012).
Third, individual differences matter in the way people respond to social
dilemma strategies. For instance, public education appeals to donate money or
behave more sustainably are going to be more persuasive among people with a
basic understanding of the problem, with strong personal norms, or a prosocial
disposition. Yet other individuals lacking the knowledge or motivation to change
are more likely to respond to individual reward and punishment (Van Lange
et al., 1997; Van Vugt et al., 1996; Wenzel, 2004). Similarly, we suspect that people
with high belongingness needs will be persuaded more strongly by social incen-
tives, for example, giving feedback on how well they are doing compared to their
neighbors in terms of their electricity use (Nolan, Schultz, Cialdini, Goldstein,
& Griskevicius, 2008). Finally, do different cultures respond differently to differ-
ent social dilemma strategies? Perhaps in more individualistic cultures, there is a
stronger aversion toward institutional strategies limiting people’s decision free-
dom, for example, whether or not to immunize their children against an infectious
disease. Yet such legislation may be more strongly endorsed in collectivistic cultures.

What are the lessons for policy?

In the end, policymakers must try to tackle social dilemmas through finding
the right mix of strategies. Many social dilemmas in society are complex and
often solutions require a good understanding of human social psychology and
importance of cultural norms, institutions and governments. The recent chal-
lenge in Europe to save the Euro currency presents a good example of how a
complex social dilemma—countries contributing money to save the Euro—is
addressed by restructuring the problem in terms of a cooperative challenge
for all countries involved, strengthening a joint European identity, building in
penalties for “defecting” countries (like Greece, Spain and Portugal), and rely-
ing on fair and legitimate institutions to administer penalties.

■ SUMMARY AND CONCLUSIONS

There is no reason to be daunted by the complex social dilemmas we are fac-
ing. With a proper scientific analysis of the problem and the right combina-
tion of strategies, we believe that many challenges posed by various large-scale
social dilemmas can be addressed. This does not mean that they can always
be completely resolved. But part of the solution is in the realization that we
as humans largely cause the problems we face in environmental issues, pub-
lic health, politics, international security, and intergroup conflict. The social
dilemma literature, including theory, experimental research in the laboratory,
and field research, should help scientists and practitioners in applying proper
scientific analysis and may contribute to finding the most effective solutions to
the various social dilemmas we face in contemporary society.
8 Prospects for the Future

Looking back, researchers have made significant progress in theory develop-
ment; interdisciplinary research; understanding the impact of structural, psy-
chological and dynamic factors on cooperation; the evolution of cooperation;
culture; and strong applications in domains such as management and organiza-
tion, environment, politics, and health. We have witnessed increased attention
to paradigms and issues more closely approximating real-world dilemmas, not
only paradigms such as the public goods dilemma and the commons dilemma,
but also paradigms that recognize asymmetries, noise, and structural solutions.
Moreover, we have seen an increased attention to theory, interdependence the-
ory and evolutionary theory in particular, and for broad domains that have
received new or renewed interest such as culture, organizations and manage-
ment, the environment, politics, security, and health.
The field has made significant and exciting advances over the past two decades,
yielding valuable novel insights into the dynamics of cooperation across a variety
of social dilemmas. As noted earlier, we acknowledge that our coverage of the
social dilemma literature has not been exhaustive. Our focus has been on the psy-
chology of social dilemmas, because it is nearly impossible to capture research in
all fields and disciplines relevant to social dilemmas. In the last two decades alone,
there have been countless publications coming from anthropologists, evolutionary
scientists, experimental economists, mathematicians, political scientists, and the-
oretical biologists. These studies have not always been framed as social dilemma
studies—the term that the late psychologist Robyn Dawes (1980) coined—but
they clearly are capturing methodology and findings that are informative to this
literature. The important point is that we have restricted ourselves to research on
the psychology of social dilemmas while acknowledging the existence of a much
broader literature on social dilemmas that we have not discussed.
Looking ahead, we see several promising directions for future research. At the
broadest level, we believe the field would benefit from continued attention to the-
ory development. Earlier, we described evolutionary theory as a broad theoretical
framework for social dilemmas in Chapter 3, and we briefly reviewed interdepen-
dence theory, a psychological theory relevant to social dilemmas, in Chapter  4.
These frameworks share a number of meaningful connections that should be
explored. Interdependence theory provides a relatively coherent framework in
which the conceptual links among dilemma situations are delineated by provid-
ing a taxonomy of key situational dimensions, such as degree of dependence, degree
of conflicting interest, information availability, and time horizon (e.g., Kelley et al.,
2003; Van Lange & Rusbult, 2012).
Paul Van Lange had primary responsibility for preparation of this chapter.


This taxonomy helps us understand the game (read: situation) people are facing,
and the problems or opportunities that the game (again read: situation) affords.
This interdependence-based analysis not only provides key insights into the struc-
ture of the situation (what is the situation about?), it also emphasizes the relevance
of our own interaction goals (are we cooperative or not?) and those we attribute to
others in a global or concrete manner (are other people cooperative or not?). The
latter attributions or beliefs are, of course, closely linked to the concept of trust.
Evolutionary theory provides a meta-theoretical framework for understanding
the (ultimate) functions of trust and cooperation in social dilemmas, and how nat-
ural selection has shaped proximate psychological mechanisms, as discussed and
illustrated in Chapter 3. Evolutionary and psychological explanations complement
each other, of course, and together they can provide the bigger, and more complete,
picture of decision-making in social dilemmas (Van Vugt & Van Lange, 2006). To
illustrate, interdependence theory (and game theory) conveys the importance of
incomplete information for the development of cooperation. By virtue of its focus
on the conflict between self-interest and collective interest, incomplete informa-
tion in social dilemmas presupposes some degree of trust in others:  “Does the
other person intentionally or unintentionally harm the collective interest?” From
an evolutionary perspective, acknowledging the role of incomplete information is
important because it challenges our thinking about the evolution of cooperation.
For example, it may help us understand why focusing on intentions rather than on
actual behaviors has functional value in an evolutionary sense. Even more, it may
help us understand the roots of generosity (Nowak & Sigmund, 1992). Proximally,
giving others the benefit of the doubt, especially when accompanied by the com-
munication of generosity, will enhance the level of trust the other has in your
intentions—which in turn is crucial for coping with uncertainty and incomplete
information (Van Lange et  al., 2002). We are looking forward to a fruitful and
comprehensive integration of structural factors (the games we play), psychologi-
cal explanations (what we make of the game), and the ultimate functions these
factors serve in terms of psychological, economic, and evolutionary benefits (the
outcomes of playing the game).

■ INTERDEPENDENCE STRUCTURE

The following broader research themes may well receive increased empirical
attention in the future. An interdependence framework suggests the importance
of (a) availability of information, (b) the dimension of time, and (c) the unit of
analysis (individuals versus groups).
From complete to incomplete information. The notion that people are often
faced with incomplete information in social dilemmas is well recognized. In fact,
in most social dilemmas in the real world, people do not have complete infor-
mation about issues such as the preferences of interaction partners or how out-
comes are precisely determined by their own and others’ behavior (e.g., would
the other person really appreciate my initiative to complete a major portion of
a joint task). Likewise, people often experience outcomes (e.g., the person did
not respond to my e-mail) but lack information about how these outcomes came
about (e.g., perhaps he was not able to use e-mail). Imperfect information may be
even more important in larger scale social dilemmas such as environmental social
dilemmas—for example, how limited are our natural resources?
Clearly, the concepts of noise, social uncertainty (lack of information about oth-
ers’ actions) and environmental uncertainty (lack of information about the objec-
tive state of affairs, such as resource size) are all important from both a scientific and
societal perspective. These structural and psychological factors might activate trust
or distrust, optimism or pessimism, or the closing or opening of one’s mind for new
information. In larger-scale social dilemmas, incomplete information might trigger
collective activities aimed at informing policy through research, and challenge the
ways in which authorities might communicate opportunities and risks, as well as
the specific ways in which people maintain sufficient levels of efficacy (the feeling
that their choice matters), trust in others’ cooperation, and show willingness to
make a contribution themselves (Kerr, 2012; Parks et al., 2013; Van Vugt, 2009).
From present to future. The dimension of time is clearly very important in
social dilemmas. This is even more so in social dilemmas outside of the laboratory,
where various collective goals take time to materialize, where repeated interaction
unfolds over time, but where the individual costs are often in the here and now
and the benefits are much delayed. Environmental dilemmas are just one exam-
ple where the dimension of time clearly is important. Axelrod (1984) referred to
the shadow of the future as a mechanism that might help individuals realize that
cooperative action now will provide benefits over repeated interactions with the
same partner in the future. There is good support for this notion (e.g., Roth &
Murnighan, 1978). Moreover, there is even evidence that punishment is far more
effective when the time horizon is long (e.g., 50 trials) rather than short (e.g., 10
trials), which suggests that sometimes it takes time for groups to promote coop-
eration with one another through punishment (Gächter, Renner, & Sefton, 2008).
At the same time, there has been considerable research on temporal discounting,
showing that people are not always very good at sacrificing short-term interest and
prioritizing longer-term goals. This discounting mechanism might explain failure
to delay gratification in consumption, enjoying the cigarette or fattening snack
now while neglecting the possible consequences in the future, or delaying visits
to the dentist (e.g., Green, Myerson, Lichtman, Rosen, & Fry, 1996; Mischel, 2012;
Rachlin, 2006). Moreover, there is the intriguing issue of asymmetrical relations
among generations of people, involving issues of altruism, conflict, and fairness.
For example, elderly people have a shorter time horizon than younger people, yet
it might take a fair amount of effort or sacrifices from the elderly to maintain
a healthy environment for the next generations—such intergenerational social
dilemmas are real, and worthy of future study (Wade-Benzoni & Tost, 2009).
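Temporal discounting of this kind is often modeled hyperbolically, with present value V / (1 + k·delay). A minimal sketch with an illustrative discount rate k (not an empirical estimate):

```python
def discounted_value(amount, delay, k=0.1):
    """Hyperbolic discounting: present value of `amount` received after `delay`.
    The discount rate k = 0.1 is an illustrative value, not an estimate."""
    return amount / (1.0 + k * delay)

# A small immediate reward can outweigh a larger delayed one...
print(discounted_value(100, delay=0) > discounted_value(150, delay=30))   # True
# ...but the preference reverses when both rewards are pushed into the future:
print(discounted_value(100, delay=60) > discounted_value(150, delay=90))  # False
```

Such preference reversals are the signature of hyperbolic (rather than exponential) discounting and help explain why short-term temptations so often trump long-term collective goals.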
The overall point here is that, although social dilemma studies have addressed
the time dimension to a certain extent, there is clear potential to design studies
that address the time dimension more fully—for example, by using longitudi-
nal research designs. Theoretically, this would allow us to capture topics such as
self-control, delay of gratification, and long-term orientation in the social dilemma
context in which such mechanisms support collective interest, rather than just indi-
vidual interest (Joireman, Shaffer, Balliet, Strathman, 2012; Strathman et al., 1994).
Societally, this would be very important, because most meaningful interactions
take place in a social context in which a longer time horizon does matter (Joireman
et  al., 2004). We interact more often with family members, friends, community
members, and colleagues than with complete strangers who we might never see
again in the future. An orientation toward the future is a key ingredient in the
maintenance of healthy and stable relationships (e.g., Rusbult & Van Lange, 2003).
From interpersonal to intergroup interactions. Patterns of interdependence are
often more complex than they seem at first glance. This may hold in particular
for intergroup interactions. How should one understand social dilemmas between
groups? One scenario is that members within one group are “promotively” inter-
dependent in the sense that their individual goals are totally aligned with the
group goals. An example would be a situation in which, for example, two pizza
delivery stores compete excessively but there is little competition within the stores
(e.g., they all enjoy working at the delivery store). A  different scenario is when
members of a group face a social dilemma, and the group in turn faces a social
dilemma with another group. For example, soldiers may understandably want to
avoid personal risks, but if no one in the group fights the enemy group, then the
result will be a comprehensive defeat: “Rout and slaughter [is] worse for all the
soldiers than is taking chances” (Dawes, 1980; p. 170). Yet the superordinate inter-
est of both groups is that no soldier in any battle will “take chances” and fight.
These inter-group social dilemmas have been studied intensely by Gary Bornstein
(2003), who has also provided a taxonomy of inter-group relations as team games
(see also Bornstein, 1992; Halevy et al., 2008).
Although a bit more complex than single group social dilemmas, these team
games are very important in our understanding of interactions among units in
organizations, groups in society, and nations in the world. As noted earlier, the
baseline level of cooperation between groups is smaller than between individu-
als. That is, inter-group interactions tend to be less cooperative, more competi-
tive, and less trusting than inter-individual interactions (Insko & Schopler, 1998;
Insko, Kirchner, Pinter, Efaw, & Wildschut, 2005; Wildschut et al., 2003). Relative
to interactions between individuals, interactions between individuals who serve
the role of group representative reveal lower levels of cooperation, and this may be
due to the idea that representatives tend to attribute selfishness and competition to
other representatives, and also to the tendency for representatives to be, in reality, a
bit more concerned about their own outcomes, both in absolute terms and relative
to the other representatives (Reinders Folmer et al., 2012). Switching the attention
from interpersonal to inter-group social dilemmas is a significant step in social
dilemma theory and research. Research on inter-group dilemmas might inform
us about some basic theoretical issues regarding the evolution of cooperation in
humans, such as whether natural selection has occurred at the level of individuals
only or also at the level of groups (Wilson et al., 2008).

■ UNDERSTANDING PROCESSES

For a deeper understanding of cooperation in social dilemmas, it is important
to consider what is going on inside the minds of the interaction partners. Thus,
we need to understand the deeper structure of the psychological processes
underlying cooperation, from neurons to behaviors.
From neuroscience to behavior. The last 15 years or so have revealed a very
important development in research on social dilemmas—an enormous growth
in research on the social neuroscience of human cooperation (e.g., Crone et al.,
2013; Glimcher, Camerer, Fehr, & Poldrack, 2008; Rilling & Sanfey, 2011). This
is an important development because it deepens our understanding of the brain
mechanisms underlying cooperation. One of the consistent messages coming out
of this literature is that decisions to cooperate or defect occur in a more automatic
manner than many researchers assumed before the emergence of (social) neuro-
science in the field of social dilemmas. We have already noted that people with a
prosocial orientation respond to unequal distributions of outcomes with increased
activation in the amygdala (Haruno & Frith, 2009). Such findings uncover the
neuroscientific underpinnings of fairness as well as the automaticity with which
such “judgments” may be formed, at least in some people. Thus, fairness judg-
ments may be made without much conscious awareness. This explains why norm
violations involving unfairness evoke anger in a fairly automatic manner, with the
result that people show disapproval and seek out possibilities for punishment (e.g.,
De Quervain et al., 2004; see also Rilling & Sanfey, 2011).
Beyond these findings, there is evidence that empathy is activated in automatic
ways as well (e.g., Singer et al., 2004). Recent work has devoted increasing atten-
tion to hormonal influences on behavior in social dilemmas. These are clearly new
developments, and it is informative to see that a hormone such as testosterone
increases competition and undermines trust, at least in some people (e.g., Bos,
Terburg, & Van Honk, 2010), but the ultimate social functions of testosterone are
open to empirical investigation. Conversely, the hormone oxytocin is associated
with affiliation, caring, and trust (e.g., Kosfeld, Heinrichs, Zak, Fischbacher, &
Fehr, 2005; Zak, 2008). Yet there is also recent evidence indicating that oxytocin
promotes caring for in-group members but not for members of out-groups (e.g.,
De Dreu et al., 2010). One issue with the literature on hormones and social coop-
eration is that there might be pronounced differences between the external
provision and the internal release of a particular hormone, raising complex issues about the
specific workings of hormones along with questions of causality. Taken together,
there is much that we do not know about the neurobiological and neuroscien-
tific underpinnings of human cooperation, including the specific functions that
various hormones may have in promoting trust and cooperation in various social
dilemmas.
From perceptions and emotions to behaviors. Many interactions unfold when
people register information about others. In our interactions with strangers, snap-
shot judgments of the face are important, and there is evidence revealing strong
(and indeed quick) effects of facial information on perceived trustworthiness. This
line of research suggests fairly automatic links between perception and general
judgment. With strangers and friends alike, emotional expressions may constitute
important determinants of behavior in social dilemmas. Whether a person looks
happy or sad, or expresses anger or disappointment, can clearly shape whether
we expect others to cooperate and whether we are going to
behave cooperatively ourselves (e.g., Todorov & Duchaine, 2008; Van Dijk, Van
Kleef, Steinel, & Van Beest, 2008). Such information may be gleaned from the face
as well as other bodily cues such as height, symmetry, and muscularity (Aviezer,
Trope, & Todorov, 2012; Spisak, Dekker, Kruger, & Van Vugt, 2012).
Cognition has already received a fair amount of attention in the social dilemma
literature. For example, the roles of framing and priming have been the subject of
empirical study, but as suggested earlier, these lines of research need to be comple-
mented by additional research to understand the mechanics of relatively subtle
influences, along with their boundary conditions. Earlier, we outlined the impor-
tance of studying nested social dilemmas involving the person, the group, and
the collective. We think that more subtle cognitive processes, for example catego-
rization effects, may play an important role in those complex but realistic social
dilemmas (Wit & Kerr, 2002; see also Kerr & Tindale, 2004). Also, in everyday life,
social dilemmas may sometimes be quite complex in that actors do not always
“see” how their own behavior might affect another person’s outcomes, directly
or indirectly. In such situations, skill may matter, such as the ability to
adopt another person’s perspective, but such skill may also be promoted by pro-
social motivation—one might see the other’s preferences more clearly if one is
more strongly concerned about the other’s welfare (Van Doesum & Van Lange,
2013; see also Yamagishi, Hashimoto, & Schug, 2008). Likewise, it may take skill
(and perhaps will) to accurately read the emotions that people might express in
social dilemmas (see Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001),
which might guide our subsequent expectations, beliefs, and behavior. And in
addition to people’s own construal of the social dilemma, people may have expec-
tations or beliefs as to what game other people think they are facing, and these
meta-cognitive processes may also shape our expectations of others’ behavior and
our own behavior (see Halevy, Chou, & Murnighan, 2012).
Affect and emotions have received far less attention in social dilemma research.
This is surprising because some emotions, such as anger or guilt, may be powerful
determinants of decisions in social dilemmas, such as whether to cooperate,
defect, or punish. It is also possible that emotions play a somewhat dif-
ferent role if people do not play the game for money or points, but for outcomes
that may be seen as more personal or less universal—such as providing effort,
sharing information, or giving time. For example, a recent study revealed that giv-
ing time to friends or strangers, as opposed to receiving “free time” for oneself,
increases perceptions of having time, both in terms of the present and the future
(Mogilner, Chance, & Norton, 2012). It is also possible that people construe social
dilemmas differently once they have been told that spending money on others
(generosity) promotes happiness (Dunn, Aknin, & Norton, 2008). Taken together,
different cognitions and emotions play an important role in social dilemmas. It is
interesting how small variations in how we frame a social dilemma, how we see
others, and—very importantly—how we interpret the behavior of others can have
pronounced effects on behavior.
From students to seniors. Processes are importantly influenced not only by
the perceived, but also by the perceiver. We suggest the importance of a research
strategy that includes a broader sample than just university students. Evidence is
accumulating that even students might differ a fair amount in their beliefs
regarding others’ behavior (Frank, Gilovich, & Regan, 1993), and in their own social
value orientations. Even at the beginning of their first year of study, the dominant
orientation among psychology majors is prosociality, whereas the dominant orienta-
tion among economics majors is individualism (Van Lange et al., 2011).
Moreover, there is increasing evidence from various samples in the United States
that social class may matter, with those of lower socioeconomic status (“lower
social class”) being more likely to adopt a prosocial orientation to various situa-
tions (e.g., Piff, Kraus, Côté, Cheng, & Keltner, 2010). Also, there is evidence that
the prevalence of prosocial orientations increases with age (Van Lange et al., 1997).
There is recent neuroscientific evidence suggesting that older individuals, relative
to younger individuals, are more trusting of people with untrustworthy faces, even
though they are about equally trusting of people with trustworthy faces (Castle et al.,
2012), which points to a role for learning and experience.
Why is this important? By focusing on university students in our samples, we
may be underestimating the importance of people who are more trusting of oth-
ers’ benign intent, and we may be underestimating the tendency to adopt a prosocial
orientation, perhaps especially a concern with egalitarianism. There may be other
differences as well, such as less crystallized attitudes, and less well-established
social networks (Sears, 1986). Such sample selection may account for an underesti-
mation of trust and cooperation, as well as an overestimation of social influences
on cooperation, in that younger people might have a greater ability and motivation
to be open to new information and other perspectives. It may well be that such
sample effects are especially important for issues related to trust, fairness, and
cooperation, which are at the heart of social dilemmas.
From behavior to effective and efficient solutions. One issue that is important for
the implementation of theory-based knowledge about social dilemmas is rooted
in the distinction between effectiveness and efficiency, and the social processes
that are involved in this. It is one thing to conclude that a particular interven-
tion is effective (in that it elicits high levels of cooperation), but it is quite another
thing to conclude whether an intervention is efficient and socially fair. For exam-
ple, if people can punish one another, it is possible that the benefits of enhanced
cooperation do not outweigh the costs of maintaining an expensive sanctioning
system (see Balliet et  al., 2011; Gächter et  al., 2008). The same may be true for
rewarding cooperation, even though there is evidence that this may be both effective
and efficient (Kiyonari & Barclay, 2008). It is also interesting to note that because
punishments are costly to both the punisher and the punished, one might wonder
whether such costly acts might be replaced with more efficient mechanisms, such
as a concern for reputation. There is some evidence indicating that even if reputa-
tion is quite effective as a mechanism, people are still likely to punish free-riders
to further enhance cooperation (Rockenbach & Milinski, 2006). At the same time,
as we have seen in Chapter 5, this tendency may not be consistently observed
across all cultures: in some cultures, people punish not only
free-riders but also cooperators (Herrmann et al., 2008).
One issue that is very central to the development of cooperation is how a sanc-
tioning system is organized, implemented, and used. For example, it is often true
that relatively small groups in large societies, such as local communities, have
enormous potential to organize and manage themselves in cost-effective ways that
promote cooperation and prevent them from depleting natural resources (Ostrom
& Walker, 2003; Poteete, Janssen, & Ostrom, 2010). In small groups, people are
able to develop rules that match local circumstances, monitor one another’s
behavior, and punish free-riding and reward generosity quite effec-
tively. People care very strongly about their image or reputation in their local com-
munity, and so if the norms favoring cooperation are well-specified, then often the
mere presence of others makes a big difference. These are important virtues of a
local organization, formal or informal, relative to a more global authority.
These findings paint a picture in which the ways individuals relate
to each other in small groups and local communities are important to the over-
all functioning of society, and this suggests strong mutual reinforcement
among structural solutions, third-party intervention, and psychological solutions.
A case in point is Tyler and Degoey’s (1995) research on the 1991 water shortage
in California, which demonstrated that people exercised more restraint in their
water consumption if they felt treated more fairly by the local authorities.
Many of the insights described above were already recognized by the late
Elinor Ostrom, who suggested more than two decades ago that institutions could
play a very important role in regulating local management practices that preserve
natural resources and avoid ecosystem collapse (Ostrom, 1990). In retrospect, her
insights in many ways reinforce conclusions that are now supported by research.
In particular, among smaller units such as dyads and small groups, it is trust and
reciprocity that matter (and, we would add, generosity and forgiveness), along
with effective communication. Within a frame of sufficient vertical trust, people
will adopt accepting attitudes toward governmental interventions, such as the provi-
sion of rewards and punishments, and accept some constraints on their autonomy. These
are also analyses of social dilemmas in which the various scientific fields and dis-
ciplines might inform one another to understand how small groups might help
manage and resolve ongoing social dilemmas, both effectively and efficiently.

■ SUMMARY AND CONCLUSIONS

Looking back and looking ahead, we conclude that the study of social dilem-
mas is “alive and kicking.” Over the years, the field has produced numerous
replicable findings, advanced our theoretical understanding of human coop-
eration, fostered communication among scientific disciplines, and has at least
begun to apply such knowledge to help resolve social dilem-
mas in everyday life. Being dedicated social dilemmas researchers ourselves,
our observations may be a bit biased, of course. It is our strong conviction that
there is now a solid body of knowledge on the psychology of social dilemmas
that could be of exceptional utility in facing the numerous challenges—theo-
retical, empirical, methodological, and societal—that the field will encounter in
the future.
We already noted several avenues for future research. Further challenges are
to increase our understanding of the how and why of rewards and punishment,
the importance of fairness as a social preference, material or immaterial
outcomes, social norms, and the power of beliefs about humankind (as individuals
and groups) and how these might impact our behavior. In addition, the field has
just started to explore the role of hormones, physical markers, emotions, construal
processes, intergroup issues, reputation, gossip, and many more issues that are rel-
evant to how people approach others in social dilemmas. Understanding the psy-
chological, neuroscientific, economic, and evolutionary mechanisms underlying
decision making in social dilemmas is an important challenge for the future. We
could go on; simply thinking about these intriguing issues makes us look forward
to the next several decades of research on social dilemmas.
■ REFERENCES

Abrams, L. C., Cross, R., Lesser, E., & Levin, D. Z. (2003). Nurturing interpersonal trust in
knowledge-sharing networks. Academy of Management Executive, 17, 64–77.
Adams, G., & Markus, H. R. (2004). Toward a conception of culture suitable for a social
psychology of culture. In M. Schaller & C. S. Crandall (Eds.), Psychological foundations
of culture (pp. 335–360). Mahwah, NJ: Erlbaum.
Agarwal, R., Croson, R., & Mahoney, J. T. (2010). The role of incentives and communication
in strategic alliances: An experimental investigation. Strategic Management Journal, 31,
413–437.
Alcock, J. (1993). Animal behavior:  An evolutionary approach. Sunderland, MA:  Sinauer
Associates.
Alexander, R. D. (1987). The biology of moral systems. New York: Aldine de Gruyter.
Allison, S. T., McQueen, L. R., & Schaerfl, L. M. (1992). Social decision making processes
and the equal partitionment of shared resources. Journal of Experimental Social
Psychology, 28, 23–42.
Allison, S. T., & Messick, D. M. (1990). Social decision heuristics in the use of shared
resources. Journal of Behavioral Decision Making, 3, 195–204.
Anderson, L. R., DiTraglia, F. J., & Gerlach, J. R. (2011). Measuring altruism in a public
goods experiment: A comparison of U.S. and Czech subjects. Experimental Economics,
14, 426–437.
Anderson, L. R., Mellor, J. M., & Milyo, J. (2004). Social capital and contributions in a
public-goods experiment. American Economic Review, 94, 373–376.
André, J.-B., & Baumard, N. (2011). Social opportunities and the evolution of fairness.
Journal of Theoretical Biology, 289, 128–135.
Angle, H. L., & Perry, J. L. (1986). Dual commitment and labor-management relationship
climates. Academy of Management Journal, 29, 31–50.
Aquino, K., Grover, S. L., Goldman, B., & Folger, R. (2003). When push doesn’t come to
shove:  Interpersonal forgiveness in workplace relationships. Journal of Management
Inquiry, 12, 209–216.
Archetti, M., & Scheuring, I. (2010). Coexistence of cooperation and defection in public
goods games. Evolution, 65, 1140–1148.
Arend, R. J., & Seale, D. A. (2005). Modeling alliance activity:  An iterated prisoner’s
dilemma with exit option. Strategic Management Journal, 26, 1057–1074.
Argote, L. A., & Ingram, P. (2000). Knowledge transfer: A basis for competitive advantages
in firms. Organizational Behavior and Human Decision Processes, 82, 150–169.
Aryee, S., Budhwar, P. S., & Chen, Z. -X. (2002). Trust as a mediator of the relationship
between organizational justice and work outcomes:  Test of a social exchange model.
Journal of Organizational Behavior, 23, 267–285.
Au, W. T., & Kwong, Y. Y. (2004). Measurements and effects of social value orientation in
social dilemmas: A review. In R. Suleiman, D. V. Budescu, I. Fischer, & D. M. Messick
(Eds.), Contemporary research on social dilemmas (pp. 71–98). New York:  Cambridge
University Press.


Au, W. T., & Ngai, M. Y. (2003). Effects of group size uncertainty and protocol of play in a
common pool resource dilemma. Group Processes and Intergroup Relations, 6, 265–283.
Avellar, K., & Kagan, S. (1976). Development of competitive behaviors in Anglo-American
and Mexican-American children. Psychological Reports, 39, 191–198.
Aviezer, H., Trope, Y., & Todorov, A. (2012). Body cues, not facial expressions, discriminate
between intense positive and negative emotions. Science, 338, 1225–1229.
Axelrod, R. (1984). The evolution of cooperation. New York: Basic Books.
Badaracco, J. L., & Webb, A. P. (1995). Business ethics: A view from the trenches. California
Management Review, 37 (2), 8–28.
Baldassari, D., & Grossman, G. (2011). Centralized sanctioning and legitimate authority
promote cooperation in humans. Proceedings of the National Academy of Sciences, 108,
11023–11027.
Balliet, D. (2010). Communication and cooperation in social dilemmas: A meta-analytic
review. Journal of Conflict Resolution, 54, 39–57.
Balliet, D., Li, N., & Joireman, J. (2011). Relating trait self-control and forgiveness within
prosocials and proselfs: Compensatory vs. synergistic models. Journal of Personality and
Social Psychology, 101, 1090–1105.
Balliet, D., Li, N. P., Macfarlan, S. J., & Van Vugt, M. (2011a). Sex differences in
cooperation:  A  meta-analytic review of social dilemmas. Psychological Bulletin, 137,
881–909.
Balliet, D., Mulder, L. B., & Van Lange, P. A.  M. (2011b). Reward, punishment, and
cooperation: A meta-analysis. Psychological Bulletin, 137, 594–614.
Balliet, D., Parks, C. D., & Joireman, J. (2009). Social value orientation and cooperation
in social dilemmas:  A  meta-analysis. Group Processes and Intergroup Relations, 12,
533–547.
Balliet, D., & Van Lange, P. A. M. (2013a). Trust, conflict, and cooperation: A meta-analysis.
Psychological Bulletin, 139, 1090–1112.
Balliet, D., & Van Lange, P. A. M. (2013b). Trust, punishment, and cooperation across 18
societies: A meta-analysis. Perspectives on Psychological Science, 8, 363–379.
Banaji, M. R., & Bhaskar, R. (2000). Implicit stereotypes and memory:  The bounded
rationality of social beliefs. In D. L. Schachter & E. Scarry (Eds.), Memory, brain and
belief (pp. 139–175). Cambridge, MA: Harvard University Press.
Barclay, P. (2004). Trustworthiness and competitive altruism can also solve the “tragedy of
the commons.” Evolution & Human Behavior, 25, 209–220.
Barclay, P. (2006). Reputational benefits for altruistic punishment. Evolution and Human
Behavior, 27, 325–344.
Barclay, P. (2008). Enhanced recognition of defectors depends on their rarity. Cognition,
107, 817–828.
Barclay, P. (2010). Altruism as a courtship display: Some effects of third-party generosity on
audience perceptions. British Journal of Psychology, 101, 123–135.
Barclay, P. (2011). Competitive helping increases with the size of biological markets and
invades defection. Journal of Theoretical Biology, 281, 47–55.
Barclay, P., & Van Vugt, M. (in press). The evolution of human prosociality. In D. A.
Schroeder & W. G. Graziano (Eds.), Handbook of prosocial behavior. London: Sage.
Barclay, P., & Willer, R. (2007). Partner choice creates competitive altruism in humans.
Proceedings of the Royal Society-B, 274, 749–753.
Barling, J., Kelloway, E. K., & Bremermann, E. H. (1991). Preemployment predictors of
union attitudes:  The role of family socialization and work beliefs. Journal of Applied
Psychology, 76, 725–731.
Baron-Cohen, S., Wheelwright, S., Hill, J., Raste, Y., & Plumb, I. (2001). The “Reading
the Mind in the Eyes” Test Revised Version:  A  study with normal adults, and adults
with Asperger Syndrome or High-Functioning Autism. Journal of Child Psychology
and Psychiatry, 42, 241–252.
Barrett, P. (2007). Structural equation modeling:  Adjudging model fit. Personality and
Individual Differences, 42, 815–824.
Bateson, M., Nettle, D., & Roberts, G. (2006). Cues of being watched enhance cooperation
in a real-world setting. Biology Letters, 2, 412–414.
Batson, C. D. (1994). Why act for the public good? Four answers. Personality and Social
Psychology Bulletin, 20, 603–610.
Batson, C. D. (2011). Altruism in humans. New York: Oxford University Press.
Batson, C. D., & Ahmad, N. (2001). Empathy-induced altruism in a prisoner’s dilemma II: What
if the target of empathy has defected? European Journal of Social Psychology, 31, 25–36.
Batson, C. D., Batson, J. G., Todd, R. M., Brummett, B. H., Shaw, L. L., & Aldeguer, C.
M. R. (1995). Empathy and the collective good: Caring for one of the others in a social
dilemma. Journal of Personality and Social Psychology, 68, 619–631.
Batson, C. D., Sager, K., Garst, E., Kang, M., Rubchinsky, K., & Dawson, K. (1997). Is
empathy-induced helping due to self-other merging? Journal of Personality and Social
Psychology, 73, 495–509.
Baumard, N., André, J.-B., & Sperber, D. (2013). A mutualistic approach to morality.
Behavioral and Brain Sciences, 36, 59–122.
Bendor, J., Kramer, R. M., & Stout, S. (1991). When in doubt . . . Cooperation in a noisy
prisoner’s dilemma. Journal of Conflict Resolution, 35, 691–719.
Bentham, J. (1789/1970) An introduction to the principles of morals and legislation.
London: Athlone Press.
Bergeron, D. M. (2007). The potential paradox of organizational citizenship behavior: Good
citizens at what cost? Academy of Management Review, 32, 1078–1095.
Bettencourt, L. A., Gwinner, K. P., & Meuter, M. L. (2001). A comparison of attitude,
personality, and knowledge predictors of service-oriented organizational citizenship
behaviors. Journal of Applied Psychology, 86, 29–41.
Blair, M. M., & Stout, L. A. (1999). A team production theory of corporate law. Virginia Law
Review, 85, 248–328.
Blair, M. M., & Stout, L. A. (2001). Trust, trustworthiness, and the behavioral foundations of
corporate law. University of Pennsylvania Law Review, 149, 1785–1789.
Blasi, J., Conte, M., & Kruse, D. (1996). Employee stock ownership and corporate
performance among public companies. Industrial and Labor Relations Review, 50, 60–79.
Bloche, M. G. (2002). Trust and betrayal in the medical marketplace. Stanford Law Review,
55, 919–954.
Bock, G. -W., Zmud, R. M., Kim, Y. -G., & Lee, J. -N. (2005). Behavioral intention formation
in knowledge sharing: Examining the roles of extrinsic motivators, social-psychological
forces, and organizational climate. MIS Quarterly, 29, 87–111.
Bogaert, S., Boone, C., & Declerck, C. (2008). Social value orientation and cooperation in
social dilemmas: A review and conceptual model. British Journal of Social Psychology,
47, 453–480.
Bohnet, I., Herrmann, B., & Zeckhauser, R. (2010). Trust and the reference points for
trustworthiness in Gulf and Western countries. Quarterly Journal of Economics, 125,
811–828.
Bolton, G. E., Katok, E., & Ockenfels, A. (2005). Cooperation among strangers with limited
information about reputation. Journal of Public Economics, 89, 1457–1468.
Bonacich, P., Shure, G. H., Kahan, J. P., & Meeker, R. J. (1976). Cooperation and group size
in the N-person prisoner’s dilemma. Journal of Conflict Resolution, 20, 687–706.
Bond, M. H., Leung, K., Au, A., Tong, K. -K., & Chemonges-Nielson, Z. (2004). Combining
social axioms with values in predicting behaviours. European Journal of Personality, 18,
177–191.
Bond, M. H., Leung, K., Au, A., Tong, K. -K., Reimel de Carrasquel, S., Murakami, F., et al.
(2004). Culture-level dimensions of social axioms and their correlates across 41 cultures.
Journal of Cross-Cultural Psychology, 35, 548–570.
Boone, J. L. (1998). The evolution of magnanimity: When is it better to give than to receive?
Human Nature, 9, 1–21.
Boone, C., Brabander, B. D., & van Witteloostuijn, A. (1999). The impact of personality
on behavior in five prisoner’s dilemma games. Journal of Economic Psychology, 20,
343–377.
Booth, A. L., & Chatterji, M. (1995). Union membership and wage bargaining when
membership is not compulsory. Economic Journal, 105, 345–360.
Borman, W. C., & Motowidlo, S. J. (1993). Expanding the criterion domain to include
elements of contextual performance. In N. Schmitt & W. C. Borman (Eds.), Personnel
selection in organizations (pp. 71–98). San Francisco: Jossey-Bass.
Borman, W. C., Penner, L. A., Allen, T. D., & Motowidlo, S. J. (2001). Personality predictors
of citizenship performance. International Journal of Selection and Assessment, 9, 52–69.
Bornstein, G. (1992). The free-rider problem in intergroup conflicts over step-level and
continuous public goods. Journal of Personality and Social Psychology, 62, 597–606.
Bornstein, G. (2003). Intergroup conflict:  Individual, group, and collective interests.
Personality and Social Psychology Review, 7, 129–145.
Bornstein, G., & Ben-Yossef, M. (1994). Cooperation in intergroup and single-group social
dilemmas. Journal of Experimental Social Psychology, 30, 52–67.
Bos, P. A., Terburg, D., & Van Honk, J. (2010). Testosterone decreases trust in socially naïve
humans. Proceedings of the National Academy of Sciences, 107, 9991–9996.
Bowles, S. (2009). Did warfare among ancestral hunter-gatherers affect the evolution of
human social behaviors? Science, 324, 1293–1298.
Boyd, R., Gintis, H., Bowles, S., & Richerson, P. J. (2003). The evolution of altruistic
punishment. Proceedings of the National Academy of Sciences, 100, 3531–3535.
Boyd, R., & Lorberbaum, J. P. (1987). No pure strategy is evolutionarily stable in the
repeated prisoner’s dilemma game. Nature, 327, 58–59.
Boyd, R., & Richerson, P. J. (2002). Group beneficial norms can spread rapidly in a structured
population. Journal of Theoretical Biology, 215, 287–296.
Boyd, R., & Richerson, P. J. (2009). Culture and the evolution of human cooperation.
Philosophical Transactions of the Royal Society-B, 364, 3281–3288.
Bradfield, M., & Aquino, K. (1999). The effects of blame attributions and offender likableness
on forgiveness and revenge in the workplace. Journal of Management, 25, 607–631.
Brams, S. J. (1985). Superpower games. New Haven, CT: Yale University Press.
Brembs, B. (1996). Chaos, cheating, and cooperation: Potential solutions to the prisoner’s
dilemma. Oikos, 76, 14–24.
Brewer, M. B., & Kramer, R. M. (1986). Choice behavior in social dilemmas: Effects of social
identity, group size, and decision framing. Journal of Personality and Social Psychology,
50, 543–549.
Brosnan, S. F., Schiff, H. C., & de Waal, F. B. M. (2005). Tolerance for inequity may increase
with social closeness in chimpanzees. Proceedings of the Royal Society-B, 272, 253–258.
Brown, S. L., & Brown, M. (2006). Selective investment theory. Psychological Inquiry,
17, 1–29.
Brucks, W. M., & Van Lange, P. A. M. (2007). When prosocials act like proselfs in a commons
dilemma. Personality and Social Psychology Bulletin, 33, 750–758.
Brucks, W. M. & Van Lange, P. A.  M. (2008). No control, no drive:  How noise may
undermine conservation behavior in a commons dilemma. European Journal of Social
Psychology, 38, 810–822.
Buchan, N. R., Brewer, M. B., Grimalda, G., Wilson, R. K., Fatas, E., & Foddy, M. (2011).
Global social identity and global cooperation. Psychological Science, 22, 821–828.
Buchan, N. R., & Croson, R. (2004). The boundaries of trust: own and other’s actions in the
US and China. Journal of Economic Behavior and Organization, 55, 485–504.
Buchan, N. R., Grimalda, G., Wilson, R., Brewer, M, Fatas, E., & Foddy, M. (2009).
Globalization and human cooperation. Proceedings of the National Academy of Sciences,
106, 4138–4142.
Buchan, N. R., Johnson, E. J., & Croson, R. T. A. (2006). Let’s get personal: An international
examination of the influence of communication, culture, and social distance on other
regarding preferences. Journal of Economic Behavior and Organization, 60, 373–398.
Budescu, D. V., Au, W. T., & Chen, X. P. (1997). Effects of protocol of play and social
orientation on behavior in sequential resource dilemmas. Organizational Behavior and
Human Decision Processes, 69, 179–194.
Budescu, D. V., Erev, I., & Zwick, R. (Eds.) (1999). Games and human behavior. Mahwah,
NJ: Lawrence Erlbaum Associates.
Budescu, D. V., Rapoport, A., & Suleiman, R. (1990). Resource dilemmas with environmental
uncertainty and asymmetrical players. European Journal of Social Psychology, 20,
475–487.
Burger, J., Ostrom, E., Norgaard, R. B., Policansky, D., & Goldstein, B. D. (Eds.) (2001).
Protecting the commons:  A  framework for resource management in the Americas.
Washington, DC: Island Press.
Burnham, T. C., & Hare, B. (2007). Engineering human cooperation:  Does involuntary
neural activation increase public goods contributions? Human Nature, 18, 88–108.
Burnstein, E., Crandall, C., & Kitayama, S. (1994). Some neo-Darwinian decision rules for
altruism: Weighing cues for inclusive fitness as a function of biological importance of
the decision. Journal of Personality and Social Psychology, 67, 773–789.
Butler, J. K. (1999). Trust expectations, information sharing, climate of trust, and negotiation
effectiveness and efficiency. Group and Organization Management, 24, 217–238.
Cabrera, A., & Cabrera, E. F. (2002). Knowledge-sharing dilemmas. Organization Studies,
23, 687–710.
Cabrera, A., Collins, W. C., & Salgado, J. F. (2006). Determinants of individual engagement
in knowledge sharing. International Journal of Human Resource Management, 17,
245–264.
Caldwell, M. D. (1976). Communication and sex effects in a five-person prisoner’s dilemma
game. Journal of Personality and Social Psychology, 33, 273–280.
Cameron, L. D., Brown, P. M., & Chapman, J. G. (1998). Social value orientation and
decisions to take proenvironmental action. Journal of Applied Social Psychology, 28,
675–697.
Campbell, W. K., Bush, C. P., & Brunell, A. B. (2005). Understanding the social costs of
narcissism: The case of the tragedy of the commons. Personality and Social Psychology
Bulletin, 31, 1358–1368.
Caporael, L. R., Dawes, R. M., Orbell, J. M., & van de Kragt, A. J.  C. (1989). Selfishness
examined:  Cooperation in the absence of egoistic incentives. Behavioral and Brain
Sciences, 12, 683–699.
Cardenas, J. C., Chong, A., & Nopo, H. (2008). To what extent do Latin Americans trust
and cooperate? Field experiments on social exclusion in Six Latin American countries.
Economía, 9, 45–88.
Carnevale, P. J., & Pruitt, D. G. (1992). Negotiation and mediation. Annual Review of
Psychology, 43, 531–582.
Carpenter, J., & Cardenas, J. C. (2011). An intercultural examination of cooperation in the
commons. Journal of Conflict Resolution, 55, 632–651.
Carpenter, J., Daniere, A. G., & Takahashi, L. M. (2004). Cooperation, trust, and social
capital in southeast Asian urban slums. Journal of Economic Behavior and Organization,
55, 533–551.
Castle, E., Eisenberger, N. I., Seeman, T. E., Moons, W. G., Boggero, I. A., Grinblatt, M. S., &
Taylor, S. E. (2012). Neural and behavioral bases of age differences in perceptions of
trust. Proceedings of the National Academy of Sciences, 109, 20848–20852.
Chaison, G. N., & Dhavale, D. G. (1992). The choice between union membership and
free-rider status. Journal of Labor Research, 13, 355–369.
Charlwood, A. (2002). Why do non-union employees want to unionize? Evidence from
Britain. British Journal of Industrial Relations, 40, 463–491.
Chatman, J. A., & Barsade, S. G. (1995). Personality, organizational culture, and
cooperation: Evidence from a business simulation. Administrative Science Quarterly, 40,
423–443.
Chen, C. C., Chen, X.-P., & Meindl, J. R. (1998). How can cooperation be fostered? The cultural
effects of individualism-collectivism. Academy of Management Review, 23, 285–304.
Chen, X. -P. (1996). The group-based binding pledge as a solution to public goods problems.
Organizational Behavior and Human Decision Processes, 66, 192–202.
Chen, X. -P., Au, W. T., & Komorita, S. S. (1996). Sequential choice in a step-level public
good dilemma: The effects of criticality and uncertainty. Organizational Behavior and
Human Decision Processes, 65, 37–47.
Chen, X. -P., & Bachrach, D. G. (2003). Tolerance of free-riding: The effects of defection
size, defection pattern, and social orientation in a repeated public goods dilemma.
Organizational Behavior and Human Decision Processes, 90, 139–147.
Chen, X. -P., Pillutla, M. M., & Yao, X. (2009). Unintended consequences of cooperation
inducing and maintaining mechanisms in public goods dilemmas: Sanctions and moral
appeals. Group Processes and Intergroup Relations, 12, 241–255.
Choi, Y., & Mai-Dalton, R. R. (1999). The model of followers’ responses to self-sacrificial
leadership: An empirical test. Leadership Quarterly, 10, 397–421.
Christensen, L. (1988). Deception in psychological research:  When is its use justified?
Personality and Social Psychology Bulletin, 14, 664–675.
Cialdini, R. B. (2003). Crafting normative messages to protect the environment. Current
Directions in Psychological Science, 12, 105–109.
Cialdini, R. B., Brown, S. L., Lewis, B. P., Luce, C., & Neuberg, S. L. (1997). Reinterpreting
the empathy-altruism relationship:  When one into one equals oneness. Journal of
Personality and Social Psychology, 73, 481–494.
Cinyabuguma, M., Page, T., & Putterman, L. (2005). Cooperation under the threat of
expulsion in a public goods experiment. Journal of Public Economics, 89, 1421–1435.
Clark, K., & Sefton, M. (2001). The sequential prisoner’s dilemma: Evidence on reciprocation.
Economic Journal, 111, 51–68.
Clark, M. S., & Mills, J. (1993). The difference between communal and exchange
relationships: What it is and is not. Personality and Social Psychology Bulletin, 19, 684–691.
Clary, E. G., Snyder, M., Ridge, R. D., Copeland, J., Stukas, A. A., Haugen, J., & Miene,
P. (1998). Understanding and assessing the motivations of volunteers:  A  functional
approach. Journal of Personality and Social Psychology, 74, 1516–1530.
Cohen, D. (2007). Methods in cultural psychology. In S. Kitayama & D. Cohen (Eds.),
Handbook of cultural psychology (pp. 196–236). New York: Guilford.
Colbert, A. E., Mount, M. K., Harter, J. K., Witt, L. A., & Barrick, M. R. (2004). Interactive
effects of personality and perceptions of the work situation on workplace deviance.
Journal of Applied Psychology, 89, 599–609.
Connelly, C. E., & Kelloway, E. K. (2003). Predictors of employees’ perceptions of knowledge
sharing cultures. Leadership and Organizational Development Journal, 24, 294–301.
Constant, D., Kiesler, S., & Sproull, L. (1994). What’s mine is ours, or is it? A  study of
attitudes about information sharing. Information Systems Research, 5, 400–421.
Conybeare, J. A. C. (1984). Public goods, prisoner’s dilemmas and the international political
economy. International Studies Quarterly, 28, 5–22.
Cook, K. S., Hardin, R., & Levi, M. (2005). Cooperation without trust? New York: Russell
Sage Foundation.
Coombs, C. H. (1973). A reparameterization of the prisoner’s dilemma game. Behavioral
Science, 18, 424–428.
Cosmides, L., Barrett, H. C., & Tooby, J. (2010). Adaptive specializations, social exchange,
and the evolution of human intelligence. Proceedings of the National Academy of Sciences,
107, 9007–9014.
Cox, T. H., Lobel, S. A., & McLeod, P. L. (1991). Effects of ethnic group cultural differences
in cooperative and competitive behavior on a group task. Academy of Management
Journal, 34, 827–847.
Cress, U., & Kimmerle, J. (2007). Guidelines and feedback in information exchange: The
impact of behavioral anchors and descriptive norms in a social dilemma. Group
Dynamics, 11, 42–53.
Cress, U., & Kimmerle, J. (2008). Endowment heterogeneity and identifiability in the
information-exchange dilemma. Computers in Human Behavior, 24, 862–874.
Cress, U., Kimmerle, J., & Hesse, F. W. (2006). Information exchange with shared
databases as a social dilemma: The effect of metaknowledge, bonus systems, and cost.
Communication Research, 33, 370–390.
Crone, E. A., Will, G. J., Overgaauw, S., & Güroğlu, B. (in press). Social decision-making
in childhood and adolescence. In P. A. M. Van Lange, B. Rockenbach, & T. Yamagishi
(Eds), Social dilemmas: New perspectives on reward and punishment. New York: Oxford
University Press.
Cropanzano, R., & Byrne, Z. M. (2000). Workplace justice and the dilemma of organizational
citizenship. In M. van Vugt, M. Snyder, T. R. Tyler, & A. Biel (Eds.), Cooperation in
modern society (pp. 142–161). New York: Routledge.
Cross, J. G., & Guyer, M. J. (1980). Social traps. Ann Arbor, MI: University of Michigan Press.
Darley, J. M. (2004). The cognitive and social psychology of contagious organizational
corruption. Brooklyn Law Review, 70, 1177–1194.
Dawes, R. M. (1980). Social dilemmas. Annual Review of Psychology, 31, 169–193.
Dawes, R. M., & Messick, D. M. (2000). Social dilemmas. International Journal of Psychology,
35, 111–116.
Dawes, R. M., McTavish, J., & Shaklee, H. (1977). Behavior, communication, and
assumptions about other people’s behavior in a commons dilemma situation. Journal of
Personality and Social Psychology, 35, 1–11.
Dawes, R. M., van de Kragt, A. J. C., & Orbell, J. M. (1988). Not me or thee but we: The
importance of group identity in eliciting cooperation in dilemma situations: Experimental
manipulations. Acta Psychologica, 68, 83–97.
Dawkins, R. (2006). The selfish gene (30th anniversary ed.). Oxford: Oxford University Press.
DeBruine, L. M. (2002). Facial resemblance enhances trust. Proceedings of the Royal
Society-B, 269, 1307–1312.
DeBruine, L. M. (2005). Trustworthy but not lust-worthy: Context-specific effects of facial
resemblance. Proceedings of the Royal Society-B, 272, 919–922.
De Cremer, D. (2000). Leadership selection in social dilemmas—Not all prefer it: The
moderating effect of social value orientation. Group Dynamics, 4, 330–337.
De Cremer, D., & Barker, M. (2003). Accountability and cooperation in social dilemmas: The
influence of others’ reputational concerns. Current Psychology, 22, 155–163.
De Cremer, D., & Van Dijk, E. (2005). When and why leaders put themselves first: Leader
behaviour in resource allocations as a function of feeling entitled. European Journal of
Social Psychology, 35, 553–563.
De Cremer, D., & Van Dijk, E. (2011). On the near miss in public good dilemmas: How
upward counterfactuals influence group stability when the group fails. Journal of
Experimental Social Psychology, 47, 139–146.
De Cremer, D., & Van Lange, P. A. M. (2001). Why prosocials exhibit greater cooperation
than proselfs: The roles of social responsibility and reciprocity. European Journal of
Personality, 15, S5–S18.
De Cremer, D., & Van Vugt, M. (1999). Social identification effects in social dilemmas:
A transformation of motives. European Journal of Social Psychology, 29, 871–893.
De Dreu, C. K.  W., & Boles, T. L. (1998). Share and share alike or winner take all? The
influence of social value orientation upon choice and recall of negotiation heuristics.
Organizational Behavior and Human Decision Processes, 76, 253–276.
De Dreu, C. K. W., Giacomantonio, M., Shalvi, S., & Sligte, D. J. (2009). Getting stuck or
stepping back:  Effects of obstacles and construal level in the negotiation of creative
solutions. Journal of Experimental Social Psychology, 45, 542–548.
De Dreu, C. K. W., Greer, L. L., Handgraaf, M. J. J., Shalvi, S., Van Kleef, G. A., Baas, M.,
et  al. (2010). The neuropeptide oxytocin regulates parochial altruism in intergroup
conflict among humans. Science, 328, 1408–1411.
De Dreu, C. K.  W., & McCusker, C. (1997). Gain–loss frames and cooperation in
two-person social dilemmas:  A  transformational analysis. Journal of Personality and
Social Psychology, 72, 1093–1106.
De Dreu, C. K.  W., Weingart, L. R., & Kwon, S. (2000). Influence of social motives on
integrative negotiation:  A  meta-analytic review and test of two theories. Journal of
Personality and Social Psychology, 78, 889–905.
De Herdt, T. (2003). Cooperation and fairness: The Flood-Dresher experiment revisited.
Review of Social Economy, 61, 183–210.
De Hooge, I. E., Breugelmans, S. M., & Zeelenberg, M. (2008). Not so ugly after all: When
shame acts as a commitment device. Journal of Personality and Social Psychology, 95,
933–943.
De Kwaadsteniet, E. W., Van Dijk, E., Wit, A., & De Cremer, D. (2006). Social dilemmas as
strong versus weak situations:  Social value orientations and tacit coordination under
resource uncertainty. Journal of Experimental Social Psychology, 42, 509–516.
De Quervain, D. J. F., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck, A.,
& Fehr, E. (2004). The neural basis of altruistic punishment. Science, 305, 1254–1258.
de Waal, F. B. M., & Suchak, M. (2010). Prosocial primates: Selfish and unselfish motivations.
Philosophical Transactions of the Royal Society-B, 365, 2711–2722.
Deci, E. L., Koestner, R., & Ryan, R. M. (1999). A meta-analytic review of experiments
investigating the effects of extrinsic rewards on intrinsic motivation. Psychological
Bulletin, 125, 627–668.
Deery, S. J., Iverson, R. D., & Erwin, P. J. (1994). Predicting organizational and union
commitment:  The effect of industrial relations climate. British Journal of Industrial
Relations, 32, 581–597.
Delton, A. W., Krasnow, M. M., Cosmides, L., & Tooby, J. (2011). Evolution of direct
reciprocity under uncertainty can explain human generosity in one-shot encounters.
Proceedings of the National Academy of Sciences, 108, 13335–13340.
Den Hartog, D. N., De Hoogh, A. H. B., & Keegan, A. E. (2007). The interactive effects of
belongingness and charisma on helping and compliance. Journal of Applied Psychology,
92, 1131–1139.
Dennett, D. C. (2006). Breaking the spell. New York: Viking.
Diekmann, A. (1985). Volunteer’s dilemma. Journal of Conflict Resolution, 29, 605–610.
Dietz, T., Ostrom, E., & Stern, P. C. (2003). The struggle to govern the commons. Science,
302, 1907–1912.
Dolšak, N., & Ostrom, E. (2003). The commons in the new millennium:  Challenges and
adaptations. Cambridge, MA: MIT Press.
Domino, G. (1992). Cooperation and competition in Chinese and American children.
Journal of Cross-Cultural Psychology, 23, 456–467.
Doney, P. M., Cannon, J. P., & Mullen, M. R. (1998). Understanding the influence of national
culture on the development of trust. Academy of Management Review, 23, 601–620.
Dudley, S. A., & File, A. L. (2007). Kin recognition in an annual plant. Biology Letters, 3,
435–438.
Dukerich, J. M., Golden, B. R., & Shortell, S. M. (2002). Beauty is in the eye of the
beholder:  The impact of organizational identification, identity, and image on the
cooperative behaviors of physicians. Administrative Science Quarterly, 47, 507–533.
Dunbar, R. I. M., Baron, R., Frangou, A., Pearce, E., van Leeuwen, E. J. C., Stow, J., et al.
(2012). Social laughter is correlated with an elevated pain threshold. Proceedings of the
Royal Society-B, 279, 1161–1167.
Dunlop, P. D., & Lee, K. (2004). Workplace deviance, organizational citizenship behavior,
and business unit performance:  The bad apples do spoil the whole barrel. Journal of
Organizational Behavior, 25, 67–80.
Dunn, E. W., Aknin, L. B., & Norton, M. I. (2008). Spending money on others promotes
happiness. Science, 319, 1687–1688.
Earley, P. C. (1989). Social loafing and collectivism: A comparison of the United States and
the People’s Republic of China. Administrative Science Quarterly, 34, 565–581.
Earley, P. C. (1993). East meets west meets Mideast: Further explorations of collectivistic
and individualistic work groups. Academy of Management Journal, 36, 319–348.
Eek, D., & Gärling, T. (2006). Prosocials prefer equal outcomes to maximizing joint
outcomes. British Journal of Social Psychology, 45, 321–337.
Egas, M., & Riedl, A. (2008). The economics of altruistic punishment and the maintenance
of cooperation. Proceedings of the Royal Society-B, 275, 871–878.
Ehrhart, M. G. (2004). Leadership and procedural justice climate as antecedents of unit-level
organizational citizenship behavior. Personnel Psychology, 57, 61–94.
Eisenberg, M. A. (1998). Corporate conduct that does not maximize shareholder gain: Legal
conduct, ethical conduct, the penumbra effect, reciprocity, the prisoner’s dilemma,
sheep’s clothing, social conduct, and disclosure. Stetson Law Review, 28, 1–27.
Eisenberger, N. I., Lieberman, M. D., & Williams, K. D. (2003). Does rejection hurt? An
fMRI study of social exclusion. Science, 302, 290–292.
Elffers, H. (2000). But taxpayers do cooperate! In M. Van Vugt, M. Snyder, T. R. Tyler, & A.
Biel (Eds.), Cooperation in modern society (pp. 184–194). New York: Routledge.
Ellemers, N., de Gilder, D., & van den Heuvel, H. (1998). Career-oriented versus
team-oriented commitment and behavior at work. Journal of Applied Psychology, 83,
717–730.
Epley, N., & Huff, C. (1998). Suspicion, affective response, and educational benefit as a
result of deception in psychology research. Personality and Social Psychology Bulletin,
24, 759–768.
Evans, A. M., & Krueger, J. I. (2010). Elements of trust: Risk and perspective-taking. Journal
of Experimental Social Psychology, 47, 171–177.
Feather, N. T., & Rauter, K. A. (2004). Organizational citizenship behaviours in relation to
job status, job insecurity, organizational commitment and identification, job satisfaction
and work values. Journal of Occupational and Organizational Psychology, 77, 81–94.
Fehr, E., Bernhard, H., & Rockenbach, B. (2008). Egalitarianism in young children. Nature,
454, 1079–1083.
Fehr, E., & Gächter, S. (2000). Cooperation and punishment in public goods experiments.
American Economic Review, 90, 980–994.
Fehr, E., & Gächter, S. (2002). Altruistic punishment in humans. Nature, 415, 137–140.
Fehr, E., & Gintis, H. (2007). Human motivation and social cooperation: Experimental and
analytical foundations. Annual Review of Sociology, 33, 43–64.
Fehr, E., & Schmidt, K. M. (1999). A theory of fairness, competition, and cooperation. The
Quarterly Journal of Economics, 114, 817–868.
Fischbacher, U., Gächter, S., & Fehr, E. (2001). Are people conditionally cooperative?
Evidence from a public goods experiment. Economics Letters, 71, 397–404.
Fiske, S. T. (2004). Social beings:  A  core motives approach to social psychology. Hoboken,
NJ: Wiley.
Flood, M. M. (1952). Some experimental games. Research memorandum RM-789. Santa
Monica, CA: RAND Corporation.
Foddy, M., Platow, M. J., & Yamagishi, T. (2009). Group-based trust in strangers: The role of
stereotypes and group heuristics. Psychological Science, 20, 419–422.
Foddy, M., Smithson, M., Schneider, S., & Hogg, M. (Eds.) (1999). Resolving social
dilemmas: Dynamic, structural, and intergroup aspects. Philadelphia: Psychology Press.
Foster, K. R., Wenseleers, T., Ratnieks, F. L. W., & Queller, D. C. (2006). There is nothing
wrong with inclusive fitness. Trends in Ecology and Evolution, 21, 599–600.
Frank, R. H. (1988). Passions within reason. New York: Norton.
Frank, R. H., Gilovich, T., & Regan, D. T. (1993). Does studying economics inhibit
cooperation? Journal of Economic Perspectives, 7, 159–171.
Frank, R. H., Gilovich, T., & Regan, D. T. (1996). Do economists make bad citizens? Journal
of Economic Perspectives, 10, 187–192.
Frechet, M. (1953). Emile Borel, initiator of the theory of psychological games and its
application. Econometrica, 21, 95–96.
Freeman, R., Kruse, D., & Blasi, J. (2010). Worker responses to shirking under shared
capitalism. In D. Kruse, R. Freeman, & J. Blasi (Eds.), Shared capitalism at work (pp.
77–103). Chicago: University of Chicago Press.
Frey, B. S., & Jegen, R. (2001). Motivation crowding theory. Journal of Economic Surveys,
15, 589–611.
Fukuyama, F. (1995). Trust:  The social virtues and the creation of prosperity.
New York: Free Press.
Fullagar, C., & Barling, J. (1989). A longitudinal test of a model of the antecedents and
consequences of union loyalty. Journal of Applied Psychology, 74, 213–227.
Gächter, S., & Herrmann, B. (2009). Reciprocity, culture and human cooperation: Previous
insights and a new cross-cultural experiment. Philosophical Transactions of the Royal
Society-B, 364, 791–806.
Gächter, S., & Herrmann, B. (2011). The limits of self-governance when cooperators get
punished:  Experimental evidence from urban and rural Russia. European Economic
Review, 55, 193–210.
Gächter, S., Herrmann, B., & Thöni, C. (2004). Trust, voluntary cooperation, and
socio-economic background: Survey and experimental evidence. Journal of Economic
Behavior and Organization, 55, 505–531.
Gächter, S., Herrmann, B., & Thoni, C. (2010). Culture and cooperation. Philosophical
Transactions of the Royal Society-B, 365, 2651–2661.
Gächter, S., Renner, E., & Sefton, M. (2008). The long-run benefits of punishment. Science,
322, 1510.
Gallo, P., & Sheposh, J. (1971). Effects of incentive magnitude on cooperation in the prisoner’s
dilemma game: A reply to Gumpert, Deutsch, and Epstein. Journal of Personality and
Social Psychology, 19, 42–46.
Gardner, A., & West, S. A. (2004). Spite and the scale of competition. Journal of Evolutionary
Biology, 17, 1195–1203.
Gardner, G. T., & Stern, P. C. (2002). Environmental problems and human behavior. Needham
Heights, MA: Allyn & Bacon.
Garman, M., & Kamien, M. I. (1968). The paradox of voting:  Probability calculations.
Behavioral Science, 13, 306–316.
Gintis, H. (2003). The hitchhiker’s guide to altruism:  Gene-culture coevolution and the
internalization of norms. Journal of Theoretical Biology, 220, 407–418.
Gintis, H. (2007). A framework for the integration of the behavioral sciences. Behavioral
and Brain Sciences, 30, 1–61.
Gintis, H., Smith, E. A., & Bowles, S. (2001). Costly signaling and cooperation. Journal of
Theoretical Biology, 213, 103–119.
Glimcher, P. W., Camerer, C., Fehr, E., & Poldrack, R. A. (2008). Neuroeconomics:
Decision-making and the brain. New York: Elsevier.
Gneezy, U., & Rustichini, A. (2004). Incentives, punishment and behavior. Advances in
Behavioral Economics, 572–589.
Gottschalg, O., & Zollo, M. (2007). Interest alignment and competitive advantage. Academy
of Management Review, 32, 418–437.
Green, L., Myerson, J., Lichtman, D., Rosen, S., & Fry, A. (1996). Temporal discounting in choice
between delayed rewards: The role of age and income. Psychology and Aging, 11, 79–84.
Griesinger, D. W., & Livingston, J. W. (1973). Toward a model of interpersonal motivation
in experimental games. Behavioral Science, 18, 173–188.
Griskevicius, V., Cantu, S., & Van Vugt, M. (2012). The evolutionary bases for sustainable
behavior. Journal of Public Policy and Marketing, 31, 115–128.
Griskevicius, V., Tybur, J. M., & van den Bergh, B. (2010). Going green to be seen: Status,
reputation, and conspicuous conservation. Journal of Personality and Social Psychology,
98, 392–404.
Groarke, L. (2008). Ancient skepticism. Stanford Encyclopedia of Philosophy, http://plato.
stanford.edu/entries/skepticism-ancient/#EPE, retrieved 12/7/09.
Gruber, J., Mauss, I. B., & Tamir, M. (2011). A dark side of happiness? How, when, and why
happiness is not always good. Perspectives on Psychological Science, 6, 222–233.
Gumpert, P., Deutsch, M., & Epstein, Y. (1969). Effect of incentive magnitude on cooperation
in the prisoner’s dilemma game. Journal of Personality and Social Psychology, 11, 66–69.
Gürerk, Ö., Irlenbusch, B., & Rockenbach, B. (2006). The competitive advantage of
sanctioning institutions. Science, 312, 108–111.
Gurven, M. (2004). To give and to give not: The behavioral ecology of human food transfers.
Behavioral and Brain Sciences, 27, 543–583.
Gurven, M., Allen-Arave, W., Hill, K., & Hurtado, A. M. (2000). “It’s a Wonderful
Life”: Signaling generosity among the Ache of Paraguay. Evolution and Human Behavior,
21, 263–282.
Gustafsson, M., Biel, A., & Gärling, T. (1999). Over-harvesting of resources of unknown
size. Acta Psychologica, 103, 47–64.
Guyer, M., Fox, J., & Hamburger, H. (1973). Format effects in the prisoner’s dilemma.
Journal of Conflict Resolution, 17, 719–744.
Halevy, N., Bornstein, G., & Sagiv, L. (2008). “In-group love” and “out-group hate” as
motives for individual participation in intergroup conflict. Psychological Science, 19,
405–411.
Halevy, N., Chou, E. Y., & Murnighan, J. K. (2012). Mind games: The mental representation
of conflict. Journal of Personality and Social Psychology, 102, 132–148.
Haley, K. J., & Fessler, D. M. T. (2005). Nobody’s watching? Subtle cues enhance generosity
in an anonymous economic game. Evolution and Human Behavior, 26, 245–256.
Hamburger, H. (1973). N-person prisoner’s dilemma. Journal of Mathematical Sociology,
3, 27–48.
Hamburger, H. (1979). Games as models of social phenomena. San Francisco: W.H. Freeman.
Hamburger, H., Guyer, M., & Fox, J. (1975). Group size and cooperation. Journal of Conflict
Resolution, 19, 503–531.
Hamilton, W. D. (1964). The genetical evolution of social behaviour (I and II). Journal of
Theoretical Biology, 7, 1–52.
Hammer, T. H., & Berman, M. (1981). The role of noneconomic factors in faculty union
voting. Journal of Applied Psychology, 66, 415–421.
Harbaugh, W. T. (1998). What do donations buy? A  model of philanthropy based on
prestige and warm glow. Journal of Public Economics, 67, 269–284.
Hardin, G. (1968). The tragedy of the commons. Science, 162, 1243–1248.
Hardy, C. L., & Van Vugt, M. (2006). Nice guys finish first:  The competitive altruism
hypothesis. Personality and Social Psychology Bulletin, 32, 1402–1413.
Harris, A. C., & Madden, G. J. (2002). Delay discounting and performance on the prisoner’s
dilemma game. Psychological Record, 52, 429–440.
Hart, C. M., & van Vugt, M. (2006). From fault line to group fission: Understanding membership
changes in small groups. Personality and Social Psychology Bulletin, 32, 392–404.
Haruno, M., & Frith, C. D. (2009). Activity in the amygdala elicited by unfair divisions
predicts social value orientation. Nature Neuroscience, 13, 160–161.
Haselton, M. G., & Buss, D. M. (2000). Error management theory: A new perspective on
biases in cross-sex mind reading. Journal of Personality and Social Psychology, 78, 81–91.
Hawkes, K. (1991). Showing off: Tests of an hypothesis about men’s foraging goals. Ethology
and Sociobiology, 12, 29–54.
Hemesath, M. (1994). Cooperate or defect? Russian and American students in a prisoner’s
dilemma. Comparative Economic Studies, 36, 83–93.
Hemesath, M., & Pomponio, X. (1998). Cooperation and culture: Students from China and
the United States in a prisoner’s dilemma. Cross-Cultural Research, 32, 171–184.
Henle, C. A. (2005). Predicting workplace deviance from the interaction between
organizational justice and personality. Journal of Managerial Issues, 17, 247–263.
Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., & McElreath, R. (2001).
In search of homo economicus:  Behavioral experiments in 15 small-scale societies.
American Economic Review, 91, 73–78.
Henrich, J., Boyd, R., & Richerson, P. J. (2008). Five misunderstandings about cultural
evolution. Human Nature, 19, 119–137.
Henrich, J., Ensminger, J., McElreath, R., Barr, A., Barrett, C., Bolyanatz, A., et al. (2010).
Markets, religion, community size, and the evolution of fairness and punishment.
Science, 327, 1480–1484.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world.
Behavioral and Brain Sciences, 33, 61–135.
Henrich, J., McElreath, R., Barr, A., Ensminger, J., Barrett, C., Bolyanatz, A., et al. (2006).
Costly punishment across human societies. Science, 312, 1767–1770.
Henrich, J., & Henrich, N. (2006). Culture, evolution and the puzzle of human cooperation.
Cognitive Systems Research, 7, 220–245.
Henrich, N., & Henrich, J. (2007). Why humans cooperate. Oxford: Oxford University Press.
Herrmann, B., Thöni, C., & Gächter, S. (2008). Antisocial punishment across societies.
Science, 319, 1362–1367.
Hertel, G., & Fiedler, K. (1994). Affective and cognitive influences in a social dilemma
game. European Journal of Social Psychology, 24, 131–145.
Higgins, E. T. (1998). Promotion and prevention:  Regulatory focus as a motivational
principle. Advances in Experimental Social Psychology, 30, 1–46.
Hinsz, V. B., Tindale, R. S., & Vollrath, D. A. (1997). The emerging conceptualization of
groups as information processors. Psychological Bulletin, 121, 43–64.
Hobbes, T. (1651/1985). Leviathan. New York: Viking Penguin.
Hofstede, G. (2001). Culture’s consequences: Comparing values, behaviors, institutions, and
organizations across nations (2nd ed.). London: Sage.
Holmström, B., & Milgrom, P. (1990). Regulating trade among agents. Journal of Institutional
and Theoretical Economics, 146, 85–105.
Hong, R. Y., & Wong, Y. (2005). Dynamic influences of culture on cooperation in a
prisoner’s dilemma. Psychological Science, 16, 429–434.
Hsu, M. -H., Ju, T. L., Yen, C. -H., & Chang, C. -M. (2007). Knowledge sharing behavior
in virtual communities:  The relationship between trust, self-efficacy, and outcome
expectations. International Journal of Human-Computer Studies, 65, 153–169.
Huff, L., & Kelley, L. (2003). Levels of organizational trust in individualist versus collectivist
societies: A seven-nation study. Organizational Science, 14, 81–90.
Hui, C., Law, K. S., & Chen, Z. -X. (1999). A structural equation model of the effects of
negative affectivity, leader-member exchange, and perceived job mobility on in-role and
extra-role performance: A Chinese case. Organizational Behavior and Human Decision
Processes, 77, 3–21.
Inglehart, R., & Baker, W. E. (2000). Modernization, cultural change, and the persistence of
traditional values. American Sociological Review, 65, 19–51.
Inglehart, R., Basanez, M., & Moreno, A. (1998). Human values and beliefs: A cross-cultural
sourcebook. Ann Arbor, MI: University of Michigan Press.
Insko, C. A., Kirchner, J. L., Pinter, B., Efaw, J., & Wildschut, T. (2005). Interindividual–intergroup
discontinuity as a function of trust and categorization:  The paradox of expected
cooperation. Journal of Personality and Social Psychology, 88, 365–385.
Insko, C. A., & Schopler, J. (1998). Differential distrust of groups and individuals. In C.
Sedikides, J. Schopler, & C. A. Insko (Eds.), Intergroup cognition and intergroup behavior
(pp. 75–107). Mahwah, NJ: Erlbaum.
Insko, C. A., Schopler, J., Pemberton, M. B., Wieselquist, J., McIlraith, S. A., Currey, D.
P., & Gaertner, L. (1998). Long-term outcome maximization and the reduction of
interindividual–intergroup discontinuity. Journal of Personality and Social Psychology,
75, 695–711.
Iredale, W., Van Vugt, M., & Dunbar, R. (2008). Showing off in humans: Male generosity as
mate signal. Evolutionary Psychology, 6, 386–392.
Jackson, D. L. (2003). Revisiting sample size and number of parameter estimates:  Some
support for the N:q hypothesis. Structural Equation Modeling, 10, 128–141.
James, H. S., & Cohen, J. P. (2004). Does ethics training neutralize the incentives of the
prisoner’s dilemma? Evidence from a classroom experiment. Journal of Business Ethics,
50, 53–61.
Jarvenpaa, S. L., & Staples, D. S. (2001). Exploring perceptions of organizational ownership
of information and expertise. Journal of Management Information Systems, 18, 151–183.
Johnson, D. D. P. (2005). God’s punishment and public goods. Human Nature, 16, 410–446.
Johnson, D. D. P., & Bering, J. (2006). Hand of God, mind of man: Punishment and cognition
in the evolution of cooperation. Evolutionary Psychology, 4, 219–233.
Joireman, J., Daniels, D., George-Falvy, J., & Kamdar, D. (2006). Organizational citizenship
behaviors as a function of empathy, consideration of future consequences, and employee
time horizon: An initial exploration using an in-basket simulation of OCBs. Journal of
Applied Social Psychology, 36, 2266–2292.
Joireman, J., Kamdar, D., Daniels, D., & Duell, B. (2006). Good citizens to the end? It
depends:  Empathy and concern with future consequences moderate the impact of a
short-term time horizon on organizational citizenship behaviors. Journal of Applied
Psychology, 91, 1307–1320.
Joireman, J. A., Lasane, T. P., Bennett, J., Richards, D., & Solaimani, S. (2001). Integrating
social value orientation and the consideration of future consequences within the
extended norm activation model of proenvironmental behavior. British Journal of Social
Psychology, 40, 133–155.
Joireman, J., Posey, D. C., Truelove, H. B., & Parks, C. D. (2009). The environmentalist
who cried drought:  Reactions to repeated warnings about depleting resources under
conditions of uncertainty. Journal of Environmental Psychology, 29, 181–192.
Joireman, J., Shaffer, M., Balliet, D., & Strathman, A. (2012). Promotion orientation explains
why future oriented people exercise and eat healthy: Evidence from the two-factor
consideration of future consequences-14 scale. Personality and Social Psychology
Bulletin, 38, 1272–1287.
Joireman, J. A., Van Lange, P. A. M., & Van Vugt, M. (2004). Who cares about the environmental
impact of cars? Those with an eye toward the future. Environment and Behavior, 36,
187–206.
Joireman, J. A., Van Lange, P. A. M., Van Vugt, M., Wood, A., Vander Leest, T., & Lambert,
C. (2001). Structural solutions to social dilemmas:  A  field study on commuters’
willingness to fund improvements in public transit. Journal of Applied Social Psychology,
31, 504–526.
Jones, T. M. (1995). Instrumental stakeholder theory: A synthesis of ethics and economics.
Academy of Management Review, 20, 404–437.
Joyner, B. E., & Payne, D. (2002). Evolution and implementation: A study of values, business
ethics and corporate social responsibility. Journal of Business Ethics, 41, 297–311.
Kagan, S., & Madsen, M. C. (1971). Cooperation and competition of Mexican,
Mexican-American, and Anglo-American children of two ages under instructional sets.
Developmental Psychology, 5, 32–39.
Kagan, S., & Madsen, M. C. (1972). Experimental analyses of cooperation and competition
of Anglo-American and Mexican children. Developmental Psychology, 6, 49–59.
Kagan, S., Zahn, G. L., & Gealy, J. (1977). Competition and school achievement among
Anglo-American and Mexican-American children. Journal of Educational Psychology,
69, 432–441.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk.
Econometrica, 47, 263–292.
Kaptein, M. (2002). Guidelines for the development of an ethics safety net. Journal of
Business Ethics, 41, 217–234.
Kelley, H. H., Deutsch, M., Lanzetta, J. T., Nuttin, J. M., Jr., Shure, G. H., Faucheux, C.,
Moscovici, S., & Rabbie, J. M. (1970). A comparative experimental study of negotiation
behavior. Journal of Personality and Social Psychology, 16, 411–438.
Kelley, H. H., Holmes, J. G., Kerr, N. L., Reis, H. T., Rusbult, C. E., & Van Lange, P. A. M.
(2003). An atlas of interpersonal situations. Cambridge: Cambridge University Press.
Kelley, H. H., & Stahelski, A. J. (1970). Social interaction basis of cooperators’ and
competitors’ beliefs about others. Journal of Personality and Social Psychology, 16, 66–91.
Kelley, H. H., & Thibaut, J. W. (1978). Interpersonal relations: A theory of interdependence. New York: Wiley.
Kelly, S., & Dunbar, R. I. M. (2001). Who dares wins: Heroism versus altruism in women’s
mate choice. Human Nature, 12, 89–105.
Kenrick, D. T., Li, N. P., & Butner, J. (2003). Dynamical evolutionary psychology: Individual
decision rules and emergent social norms. Psychological Review, 110, 3–28.
Kerr, N. L. (1989). Illusions of self-efficacy: The effect of group size on perceived efficacy in
social dilemmas. Journal of Experimental Social Psychology, 25, 287–313.
Kerr, N. L. (2012). Social dilemmas. In J. Levine (Ed.), Frontiers in Social Psychology
(pp. 85–110). New York, NY: Psychology Press.
Kerr, N. L., Garst, J., Lewandowski, D. A., & Harris, S. E. (1997). The still, small
voice: Commitment to cooperate as an internalized versus a social norm. Personality
and Social Psychology Bulletin, 23, 1300–1311.
Kerr, N. L., & Kaufman-Gilliland, C. M. (1994). Communication, commitment, and
cooperation in social dilemmas. Journal of Personality and Social Psychology, 66, 513–526.
Kerr, N. L., Rumble, A. C., Park E., Ouwerkerk, J. W., Parks, C. D., Gallucci, M., & Van Lange,
P. A. M. (2009). How many bad apples does it take to spoil the whole barrel?: Social
exclusion and toleration for bad apples. Journal of Experimental Social Psychology, 45,
603–613.
Kerr, N. L., & Tindale, R. S. (2004). Group performance and decision making. Annual
Review of Psychology, 55, 623–655.
Ketelaar, T., & Au, W. T. (2003). The effects of feelings of guilt on the behavior of uncooperative
individuals in repeated social bargaining games: An affect-as-information interpretation
of the role of emotion in social interaction. Cognition and Emotion, 17, 429–453.
Kimmel, A. J. (1998). In defense of deception. American Psychologist, 53, 803–805.
Kimmel, A. J. (2006). From artifacts to ethics: The delicate balance between methodological
and moral concerns in behavioral research. In D. A. Hantula (Ed.), Advances in social
and organizational psychology (pp. 113–140). Mahwah, NJ: Erlbaum.
Kimmerle, J., Cress, U., & Hesse, F. W. (2007). An interactional perspective on group
awareness:  Alleviating the information-exchange dilemma (for everybody)?
International Journal of Human-Computer Studies, 65, 899–910.
Kimmerle, J., Wodzicki, K., Jarodzka, H., & Cress, U. (2011). Value of information, behavioral
guidelines, and social value orientation in an information-exchange dilemma. Group
Dynamics, 15, 173–186.
Kitayama, S., & Cohen, D. (Eds.). (2007). Handbook of cultural psychology. New York: Guilford Press.
Kiyonari, T., & Barclay, P. (2008). Cooperation in social dilemmas: Free-riding may be
thwarted by second-order rewards rather than punishment. Journal of Personality and
Social Psychology, 95, 826–842.
Klandermans, B. (1986). Psychology and trade union participation: Joining, acting, quitting.
Journal of Occupational and Organizational Psychology, 59, 189–204.
Klandermans, B. (2001). Why social movements come into being and why people join
them. In J. R. Blau (Ed.), The Blackwell companion to sociology (pp. 268–281). Malden,
MA: Blackwell.
Klandermans, B. (2002). How group identification helps to overcome the dilemma of
collective action. American Behavioral Scientist, 45, 887–900.
Klandermans, B., Van Der Toorn, J., & Van Stekelenburg, J. (2008). Embeddedness and
identity: How immigrants turn grievances into action. American Sociological Review,
73, 992–1012.
Klapwijk, A., & Van Lange, P. A.  M. (2009). Promoting cooperation and trust in noisy
situations: The power of generosity. Journal of Personality and Social Psychology, 96, 83–103.
Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.).
New York: Guilford.
Knack, S., & Keefer, P. (1997). Does social capital have an economic payoff ? A cross-country
investigation. Quarterly Journal of Economics, 112, 1251–1288.
Knight, G. P., & Kagan, S. (1977a). Development of prosocial and competitive behaviors in
Anglo-American and Mexican-American children. Child Development, 48, 1385–1393.
Knight, G. P., & Kagan, S. (1977b). Acculturation of prosocial and competitive behaviors
among second and third generation Mexican-American children. Journal of
Cross-Cultural Psychology, 8, 273–284.
Knox, R. E., & Douglas, R. L. (1971). Trivial incentives, marginal comprehension, and
dubious generalizations from prisoner’s dilemma studies. Journal of Personality and
Social Psychology, 20, 160–165.
Kollock, P. (1993). “An eye for an eye leaves everybody blind”: Cooperation and accounting
systems. American Sociological Review, 58, 768–786.
Kollock, P. (1998). Social dilemmas: The anatomy of cooperation. Annual Review of
Sociology, 24, 183–224.
Komorita, S. S., & Parks, C. D. (1994). Social dilemmas. Boulder, CO: Westview Press.
Komorita, S. S., & Parks, C. D. (1995). Interpersonal relations: Mixed-motive interaction.
Annual Review of Psychology, 46, 183–207.
Komorita, S. S., Parks, C. D., & Hulbert, L. G. (1992). Reciprocity and the induction of
cooperation in social dilemmas. Journal of Personality and Social Psychology, 62, 607–617.
Koole, S. L., Jager, A., van den Berg, A. E., Vlek, C. A. J., & Hofstee, W. K. B. (2001). On the
social nature of personality: Effects of extraversion, agreeableness, and feedback about
collective resource use on cooperation in a resource dilemma. Personality and Social
Psychology Bulletin, 27, 289–301.
Kopelman, S. (2008). The herdsman and the sheep, mouton, or kivsa? The influence of
group culture on cooperation in social dilemmas. In A. Biel, D. Eek, T. Gärling, & M.
Gustafsson (Eds.), New issues and paradigms in research on social dilemmas (pp. 177–184).
New York: Springer.
Kortenkamp, K. V., & Moore, C. F. (2006). Time, uncertainty, and individual differences in
decisions to cooperate in resource dilemmas. Personality and Social Psychology Bulletin,
32, 603–615.
Kosfeld, M., Heinrichs, M., Zak, P. J., Fischbacher, U., & Fehr, E. (2005). Oxytocin increases
trust in humans. Nature, 435, 673–676.
Kramer, R. M. (1999). Trust and distrust in organizations: Emerging perspectives, enduring
questions. Annual Review of Psychology, 50, 569–598.
Kramer, R. M., & Brewer, M. B. (1984). Effects of group identity on resource use in a simulated
commons dilemma. Journal of Personality and Social Psychology, 46, 1044–1057.
Kramer, R. M., Hanna, B. A., Su, S., & Wei, J. (2001). Collective identity, collective trust,
and social capital: Linking group identification and group cooperation. In M. E. Turner
(Ed.), Groups at work (pp. 173–196). Mahwah, NJ: Erlbaum.
Kramer, R. M., & Pittinsky, T. L. (2012). Restoring trust in organizations and leaders.
New York: Oxford University Press.
Krupp, D. B., DeBruine, L. M., & Barclay, P. (2008). A cue of kinship promotes cooperation
for the public good. Evolution and Human Behavior, 29, 49–55.
Kuhlman, D. M., & Marshello, A. F. (1975). Individual differences in game motivation
as moderators of preprogrammed strategy effects in prisoner’s dilemma. Journal of
Personality and Social Psychology, 32, 922–931.
La Due Lake, R., & Huckfeldt, R. (1998). Social capital, social networks, and political
participation. Political Psychology, 19, 567–584.
La Porta, R., Lopez-de-Silanes, F., Shleifer, A., & Vishny, R. W. (1997). Trust in large
organizations. American Economic Review, 87, 333–338.
Laland, K. N., & Brown, G. R. (2006). Niche construction, human behavior, and the
adaptive-lag hypothesis. Evolutionary Anthropology, 15, 95–104.
Ledyard, J. O., & Palfrey, T. (2002). The approximation of efficient public good mechanisms
by simple voting schemes. Journal of Public Economics, 83, 153–171.
Lee, K., & Allen, N. J. (2002). Organizational citizenship behavior and workplace
deviance: The role of affect and cognitions. Journal of Applied Psychology, 87, 131–142.
LePine, J. A., Erez, A., & Johnson, D. E. (2002). The nature and dimensionality of
organizational citizenship behavior:  A  critical review and meta-analysis. Journal of
Applied Psychology, 87, 52–65.
Leung, K., Au, A., Huang, X., Kurman, J., Niit, T., & Niit, K. (2007). Social axioms and
values: A cross-cultural examination. European Journal of Personality, 21, 91–111.
Leung, K., Tong, K. -K., & Ho, S. S. -Y. (2004). Effects of interactional justice on egocentric
bias in resource allocation decisions. Journal of Applied Psychology, 89, 405–415.
Liberman, V., Samuels, S. M., & Ross, L. (2004). The name of the game: Predictive power
of reputations versus situational labels in determining prisoner’s dilemma game moves.
Personality and Social Psychology Bulletin, 30, 1175–1185.
Liebrand, W. B. G. (1983). A classification of social dilemma games. Simulation & Games,
14, 123–138.
Liebrand, W. B. G., Jansen, R. W. T. L., Rijken, V. M., & Suhre, C. J. M. (1986). Might over
morality:  Social values and the perception of other players in experimental games.
Journal of Experimental Social Psychology, 22, 203–215.
Liebrand, W. B. G., & Van Run, G. J. (1985). The effects of social motives on behavior in
social dilemmas in two cultures. Journal of Experimental Social Psychology, 21, 86–102.
Liebrand, W. B. G., Wilke, H. A. M., Vogel, R., & Wolters, F. J. M. (1986). Value orientation
and conformity in three types of social dilemma games. Journal of Conflict Resolution,
30, 77–97.
Lienard, P. (in press). Beyond kin: Cooperation in a tribal society. In P. A. M. Van Lange,
B. Rockenbach, & T. Yamagishi (Eds.), Social dilemmas: New perspectives on reward and
punishment. New York: Oxford University Press.
Lind, E. A. (2001). Fairness heuristic theory:  Justice judgments as pivotal cognitions
in organizational relations. In J. Greenberg & R. Cropanzano (Eds.), Advances in
organizational justice (pp. 56–88). Stanford, CA: Stanford University Press.
Lindskold, S. (1978). Trust development, the GRIT proposal, and the effects of conciliatory
acts on conflict and cooperation. Psychological Bulletin, 85, 107–128.
Lu, L., Leung, K., & Koch, P. T. (2006). Managerial knowledge sharing:  The role of
individual, interpersonal, and organizational factors. Management and Organization
Review, 2, 15–41.
Luce, R. D., & Raiffa, H. (1957). Games and decisions. New York: Wiley and Sons.
Lumsden, C. J., & Wilson, E. O. (1981). Genes, mind, and culture. Cambridge, MA: Harvard
University Press.
Lumsden, J., Miles, L. K., Richardson, M. J., Smith, C. A., & Macrae, C. N. (2012). Who
syncs? Social motives and interpersonal coordination. Journal of Experimental Social
Psychology, 48, 746–751.
Luo, Y. (2007). The independent and interactive roles of procedural, distributive, and
interactional justice in strategic alliances. Academy of Management Journal, 50, 644–664.
Lyle, H. F. III, Smith, E. A., & Sullivan, R. J. (2009). Blood donations as costly signals of
donor quality. Journal of Evolutionary Psychology, 4, 263–286.
MacCoun, R. J., & Kerr, N. L. (1987). Suspicion in the psychological laboratory: Kelman’s
prophecy revisited. American Psychologist, 42, 199.
Macy, M. W., & Willer, R. (2002). From factors to actors:  Computational sociology and
agent-based modeling. Annual Review of Sociology, 28, 143–166.
Madsen, M. C., & Shapira, A. (1970). Cooperative and competitive behavior of urban
Afro-American, Mexican-American, and Mexican village children. Developmental
Psychology, 3, 16–20.
Magenau, J. M., Martin, J. E., & Peterson, M. M. (1988). Dual and unilateral commitment
among stewards and rank-and-file union members. Academy of Management Journal,
31, 359–376.
Majolo, B., Ames, K., Brumpton, R., Garratt, R., Hall, K., & Wilson, N. (2006). Human
friendship favours cooperation in the iterated prisoner’s dilemma. Behaviour, 143,
1383–1395.
Martinez, L. M. F., Zeelenberg, M., & Rijsman, J. B. (2011). Behavioral consequences of regret
and disappointment in social bargaining games. Cognition and Emotion, 25, 351–359.
Marwell, G., & Ames, R. E. (1979). Experiments on the provision of public goods.
I.  Resources, interest, group size, and the free-rider problem. American Journal of
Sociology, 84, 1335–1360.
Marwell, G., & Ames, R. E. (1981). Economists free ride, does anyone else? Experiments on
the provision of public goods, IV. Journal of Public Economics, 15, 295–310.
Marlowe, F. W., Berbesque, J. C., Barr, A., Barrett, C., Bolyanatz, A., Cardenas, J. C., et al.
(2008). More “altruistic” punishment in larger societies. Proceedings of the Royal
Society-B, 275, 587–592.
Mathew, S., & Boyd, R. (2011). Punishment sustains large-scale cooperation in prestate
warfare. Proceedings of the National Academy of Sciences, 108, 11375–11380.
Matzler, K., Renzl, B., Mooradian, T., von Krogh, G., & Müller, J. (2011). Personality
traits, affective commitment, documentation of knowledge, and knowledge sharing.
International Journal of Human Resource Management, 22, 296–310.
Matzler, K., Renzl, B., Müller, J., Herting, S., & Mooradian, T. (2008). Personality traits and
knowledge sharing. Journal of Economic Psychology, 29, 301–313.
Mauss, M. (1950/1990). The gift: The form and reason for exchange in archaic societies (W. D.
Halls translation). New York: Norton.
McAndrew, F. T. (2002). New evolutionary perspectives on altruism: Multilevel selection
and costly signaling theories. Current Directions in Psychological Science, 11, 79–82.
McCarter, M. W., Budescu, D. V., & Sheffran, J. (2011). The give-or-take-some dilemma: An
empirical investigation of a hybrid social dilemma. Organizational Behavior and Human
Decision Processes, 116, 83–95.
McCarter, M. W., Mahoney, J. T., & Northcraft, G. B. (2011). Testing the waters:  Using
collective real options to manage the social dilemma of strategic alliances. Academy of
Management Review, 36, 621–640.
McCarter, M. W., & Northcraft, G. B. (2007). Happy together? Insights and implications of
viewing managed supply chains as a social dilemma. Journal of Operations Management,
25, 498–511.
McClintock, C. G. (1972). Social motivation—a set of propositions. Behavioral Science, 17,
438–454.
McClintock, C. G. (1974). Development of social motives in Anglo-American and
Mexican-American children. Journal of Personality and Social Psychology, 29, 348–354.
McClintock, C. G., & Allison, S. T. (1989). Social value orientation and helping behavior.
Journal of Applied Social Psychology, 19, 353–362.
McClintock, C. G., & Moskowitz, J. M. (1976). Children’s preferences for individualistic,
cooperative, and competitive outcomes. Journal of Personality and Social Psychology,
34, 543–555.
McDonald, M. M., Navarrete, C. D., & Van Vugt, M. (2012). Evolution and the psychology
of intergroup conflict:  The male warrior hypothesis. Philosophical Transactions of the
Royal Society-B, 367, 670–679.
Meleady, R., Hopthrow, T., & Crisp, R. J. (2013). Simulating social dilemmas: Promoting
cooperative behavior through imagined group discussion. Journal of Personality and
Social Psychology, 104, 839–853.
Mealey, L. (1995). The sociobiology of sociopathy: An integrated evolutionary model.
Behavioral and Brain Sciences, 18, 523–599.
Menon, T., & Pfeffer, J. (2003). Valuing internal vs. external knowledge:  Explaining the
preference for outsiders. Management Science, 49, 497–513.
Messick, D. M. (1973). To join or not to join: An approach to the unionization decision.
Organizational Behavior and Human Performance, 10, 145–156.
Messick, D. M. (1974). When a little “group interest” goes a long way: A note on social motives
and union joining. Organizational Behavior and Human Performance, 12, 331–334.
Messick, D. M. (1999). Alternative logics for decision making in social situations. Journal of
Economic Behavior and Organization, 39, 11–28.
Messick, D. M., Allison, S. T., & Samuelson, C. D. (1988). Framing and communication
effects on group members’ responses to environmental and social uncertainty. In S.
Maital (Ed.), Applied Behavioral Economics (Vol. 2, pp. 677–700). New York: New York
University Press.
Messick, D. M., & Brewer, M. B. (1983). Solving social dilemmas: A review. In L. Wheeler
& P. G. Shaver (Eds.), Review of Personality and Social Psychology (Vol. 4, pp. 11–44).
Beverly Hills, CA: Sage.
Messick, D. M., & Liebrand, W. B.  G. (1995). Individual heuristics and the dynamics of
cooperation in large groups. Psychological Review, 102, 131–145.
Messick, D. M., & McClelland, C. L. (1983). Social traps and temporal traps. Personality and
Social Psychology Bulletin, 9, 105–110.
Messick, D. M., & McClintock, C. G. (1968). Motivational bases of choice in experimental
games. Journal of Experimental Social Psychology, 4, 1–25.
Messick, D. M., & Sentis, K. P. (1983). Fairness, preference, and fairness biases. In D. M.
Messick & K. S. Cook (Eds.), Equity theory: Psychological and sociological perspectives
(pp. 61–94). New York: Praeger.
Messick, D. M., & Thorngate, W. B. (1967). Relative gain maximization in experimental
games. Journal of Experimental Social Psychology, 3, 85–101.
Messick, D. M., Wilke, H., Brewer, M. B., Kramer, R. M., Zemke, P. E., & Lui, L. (1983).
Individual adaptions and structural change as solutions to social dilemmas. Journal of
Personality and Social Psychology, 44, 294–309.
Meyer, J. P., & Allen, N. J. (1991). A three-component conceptualization of organizational
commitment. Human Resource Management Review, 1, 61–89.
Mifune, N., Hashimoto, H., & Yamagishi, T. (2010). Altruism towards in-group members as
a reputation mechanism. Evolution and Human Behavior, 31, 109–117.
Milinski, M., Semmann, D., Bakker, T. C.  M., & Krambeck, H.-J. (2001). Cooperation
through indirect reciprocity:  Image scoring or standing strategy? Proceedings of the
Royal Society-B, 268, 2495–2501.
Milinski, M., Semmann, D., & Krambeck, H. -J. (2002). Donors to charity gain in both
indirect reciprocity and political reputation. Proceedings of the Royal Society-B, 269,
881–883.
Milinski, M., Semmann, D., Krambeck, H. -J., & Marotzke, J. (2006). Stabilizing the earth’s
climate is not a losing game:  Supporting evidence from public goods experiments.
Proceedings of the National Academy of Sciences, 103, 3994–3998.
Mill, J. S. (1861/1998). Utilitarianism. New York: Oxford University Press.
Miller, F. G., & Kaptchuk, T. J. (2008). Deception of subjects in neuroscience: An ethical
analysis. Journal of Neuroscience, 28, 4841–4843.
Mirowski, P. (1992). What were von Neumann and Morgenstern trying to accomplish? In E.
R. Weintraub (Ed.), Toward a history of game theory (pp. 113–147). Durham, NC: Duke
University Press.
Mischel, W. (2012). Self-control theory. In P. A. M. Van Lange, A. W. Kruglanski, & E. T.
Higgins (Eds.), Handbook of theories of social psychology (Vol. 2, pp. 1–22).
Thousand Oaks, CA: Sage.
Mogilner, C., Chance, Z., & Norton, M. I. (2012). Giving time gives you time. Psychological
Science, 23, 1233–1238.
Mooradian, T., Renzl, B., & Matzler, K. (2006). Who trusts? Personality, trust, and knowledge
sharing. Management Learning, 37, 523–540.
Moreton, D. R. (1998). An open shop trade union model of wages, effort and membership.
European Journal of Political Economy, 14, 511–527.
Morris, H. (1976). On guilt and innocence. Berkeley: University of California Press.
Mulder, L. B., & Nelissen, R. M. A. (2010). When rules really make a difference: The effect
of cooperation rules and self-sacrificing leadership on moral norms in social dilemmas.
Journal of Business Ethics, 95, 57–72.
Mulder, L. B., van Dijk, E., De Cremer, D., & Wilke, H. A. M. (2006). Undermining trust
and cooperation:  The paradox of sanctioning systems in social dilemmas. Journal of
Experimental Social Psychology, 42, 147–162.
Murnighan, J. K., Kim, J. W., & Metzger, A. R. (1993). The volunteer dilemma. Administrative
Science Quarterly, 38, 515–538.
Murnighan, J. K., & Roth, A. E. (1983). Expecting continued play in prisoner’s dilemma
games. Journal of Conflict Resolution, 27, 279–300.
Murphy, R., Ackermann, K., & Handgraaf, M. (2011). Measuring social value orientation.
Judgment and Decision Making, 6, 771–781.
Murphy, S. M., Wayne, S. J., Liden, R. C., & Erdogan, B. (2003). Understanding social loafing: The
role of justice perceptions and exchange relationships. Human Relations, 56, 61–84.
Muthusamy, S. K., & White, M. A. (2005). Learning and knowledge transfer in strategic
alliances: A social exchange view. Organization Studies, 26, 415–441.
Myatt, D. P., & Wallace, C. (2008). An evolutionary analysis of the volunteer’s dilemma.
Games and Economic Behavior, 62, 67–76.
Nash, J. F. (1950). Equilibrium points in n-person games. Proceedings of the National
Academy of Sciences, 36, 48–49.
Nauta, A., De Dreu, C. K.  W., & Van der Vaart, T. (2002). Social value orientation,
organizational goal concerns, and interdepartmental problem-solving behavior. Journal
of Organizational Behavior, 23, 199–213.
Naylor, R. (1990). A social custom model of collective action. European Journal of Political
Economy, 6, 201–216.
Naylor, R., & Cripps, M. (1993). An economic theory of the open shop trade union.
European Economic Review, 37, 1599–1620.
Nelissen, R. M. A., Dijker, A. J. M., & de Vries, N. K. (2007). How to turn a hawk into a dove
and vice versa: Interactions between emotions and goals in a give-some dilemma game.
Journal of Experimental Social Psychology, 43, 280–286.
Nemeth, C. (1972). A critical analysis of research utilizing the prisoner’s dilemma paradigm
for the study of bargaining. Advances in Experimental Social Psychology, 6, 203–234.
Nesse, R. M. (2005). Natural selection and the regulation of defenses: A signal detection
analysis of the smoke detector principle. Evolution and Human Behavior, 26, 88–105.
Nesse, R. M. (2007). Runaway social selection for displays of partner value and altruism.
Biological Theory, 2, 143–155.
Neufeld, S. L., Griskevicius, V., Ledlow, S. E., Li, Y. J., Neel, R., Berlin, A., & Yee, C. (2011).
Going green to help your genes: The use of kin-based appeals in conservation messaging.
Manuscript submitted for publication.
Neuman, G. A., & Kickul, J. R. (1998). Organizational citizenship behaviors: Achievement
orientation and personality. Journal of Business and Psychology, 13, 263–279.
Newton, L. A., & Shore, L. M. (1992). A model of union membership:  Instrumentality,
commitment, and opposition. Academy of Management Review, 17, 275–298.
Neyer, F. J., & Lang, F. R. (2003). Blood is thicker than water: Kinship orientation across
adulthood. Journal of Personality and Social Psychology, 84, 310–321.
Noë, R., & Hammerstein, P. (1994). Biological markets: Supply and demand determine the
effect of partner choice in cooperation, mutualism and mating. Behavioral Ecology and
Sociobiology, 35, 1–11.
Noë, R., & Hammerstein, P. (1995). Biological markets. Trends in Ecology & Evolution, 10,
336–339.
Nolan, J. P., Schultz, P. W., Cialdini, R. B., Goldstein, N. J., & Griskevicius, V. (2008).
Normative social influence is underdetected. Personality and Social Psychology Bulletin,
34, 913–923.
Nosenzo, D., & Sefton, M. (2013). Promoting cooperation: The distribution of reward
and punishment power. In P. A. M. Van Lange, B. Rockenbach, & T. Yamagishi (Eds.),
Social dilemmas: New perspectives on reward and punishment. New York: Oxford
University Press.
Nowak, M. A. (2006). Five rules for the evolution of cooperation. Science, 314, 1560–1563.
Nowak, M. A., & Highfield, R. (2011). SuperCooperators. New York: Free Press.
Nowak, M. A., & Sigmund, K. (1992). Tit for tat in heterogenous populations. Nature, 355,
250–253.
Nowak, M. A., & Sigmund, K. (1993). A strategy of win-stay, lose-shift that outperforms
tit-for-tat in the prisoner’s dilemma game. Nature, 364, 56–58.
Nowak, M. A., & Sigmund, K. (2005). Evolution of indirect reciprocity. Nature, 437,
1291–1298.
O’Gorman, R. O., Henrich, J., & Van Vugt, M. (2008). Constraining free-riding in public
goods games: Designated solitary punishers can sustain human cooperation. Proceedings
of the Royal Society-B, 276, 323–329.
Ohtsuki, H., & Iwasa, Y. (2007). Global analyses of evolutionary dynamics and exhaustive
search for social norms that maintain cooperation by reputation. Journal of Theoretical
Biology, 244, 518–531.
Olson, M. (1965). The logic of collective action. Cambridge, MA: Harvard University Press.
Olweus, D. (1979). Stability of aggressive reaction patterns in males: A review. Psychological
Bulletin, 86, 852–875.
Omoto, A., & Snyder, M. (2002). Considerations of community: The context and process of
volunteerism. American Behavioral Scientist, 45, 846–867.
Oosterbeek, H., Sloof, R., & Van de Kuilen, G. (2004). Cultural differences in ultimatum
game experiments: Evidence from meta-analysis. Experimental Economics, 7, 171–188.
Opotow, S., & Weiss, L. (2000). New ways of thinking about environmentalism: Denial and
the process of moral exclusion in environmental conflict. Journal of Social Issues, 56,
475–490.
Orbell, J., Van de Kragt, A. J. C., & Dawes, R. M. (1988). Explaining discussion-induced
cooperation. Journal of Personality and Social Psychology, 54, 811–819.
Ortmann, A., & Hertwig, R. (1997). Is deception acceptable? American Psychologist, 52,
746–747.
Osgood, C. (1962). An alternative to war or surrender. Urbana: University of Illinois Press.
Ostrom, E. (1990). Governing the commons: The evolution of institutions for collective action.
Cambridge: Cambridge University Press.
Ostrom, E. (2000). Collective action and the evolution of social norms. Journal of Economic
Perspectives, 14, 137–158.
Ostrom, E., & Ahn, T. K. (2008). The meaning of social capital and its link to collective
action. In D. Castiglione, J. W. van Deth, & G. Wolleb (Eds.). Handbook on social capital
(pp. 17–35). Northampton, MA: Elgar.
Ostrom, E., Gardner, R., & Walker, J. (2003). Rules, games, and common-pool resources. Ann
Arbor: University of Michigan Press.
Ostrom, E., & Walker, J. (2003). Trust and reciprocity. New York: Russell Sage Foundation.
Ouwerkerk, J. W., Kerr, N. L., Gallucci, M., & Van Lange, P. A. M. (2005). Avoiding the social
death penalty: Ostracism and cooperation in social dilemmas. In K. D. Williams, J. P.
Forgas, & W. von Hippel (Eds.), The social outcast: Ostracism, social exclusion, rejection,
and bullying (pp. 321–332). New York: Psychology Press.
Oyserman, D., & Lee, S. W. -S. (2007). Priming “culture”: Culture as situated cognition.
In S. Kitayama & D. Cohen (Eds.), Handbook of cultural psychology (pp. 255–282).
New York: Guilford.
Palmer, G. (1990). Alliance politics and issue areas:  Determinants of defense spending.
American Journal of Political Science, 34, 190–211.
Park, J. H., Schaller, M., & Van Vugt, M. (2008). Psychology of human kin recognition: Heuristic
cues, erroneous inferences, and their implications. Review of General Psychology, 12,
215–235.
Parkhe, A. (1993). Strategic alliance structuring:  A  game theoretic and transaction cost
examination of interfirm cooperation. Academy of Management Journal, 36, 794–829.
Parks, C. D. (1994). The predictive ability of social values in resource dilemmas and public
goods games. Personality and Social Psychology Bulletin, 20, 431–438.
Parks, C. D., Henager, R. F., & Scamahorn, S. D. (1996). Trust and reactions to messages of
intent in social dilemmas. Journal of Conflict Resolution, 40, 134–151.
Parks, C. D., & Hulbert, L. G. (1995). High and low trusters’ responses to fear in a payoff
matrix. Journal of Conflict Resolution, 39, 718–730.
Parks, C. D., Joireman, J., & Van Lange, P. A.  M. (2013). Cooperation, trust, and
antagonism: How public goods are promoted. Psychological Science in the Public Interest,
14, xxx–xxx.
Parks, C. D., & Komorita, S. S. (1997). Reciprocal strategies for large groups. Personality and
Social Psychology Review, 1, 314–322.
Parks, C. D., & Rumble, A. C. (2001). Elements of reciprocity and social value orientation.
Personality and Social Psychology Bulletin, 27, 1301–1309.
Parks, C. D., Rumble, A. C., & Posey, D. C. (2002). The effects of envy on reciprocation in a
social dilemma. Personality and Social Psychology Bulletin, 28, 509–520.
Parks, C. D., Sanna, L. J., & Posey, D. C. (2003). Retrospection in social dilemmas: How
thinking about the past affects future cooperation. Journal of Personality and Social
Psychology, 84, 988–996.
Parks, C. D., & Stone, A. B. (2010). The desire to expel unselfish members from the group.
Journal of Personality and Social Psychology, 99, 303–310.
Parks, C. D., & Vu, A. D. (1994). Social dilemma behavior of individuals from highly
individualist and collectivist cultures. Journal of Conflict Resolution, 38, 708–718.
Penn, D. J. (2003). The evolutionary roots of our environmental problems:  Toward a
Darwinian ecology. Quarterly Review of Biology, 78, 275–301.
Penner, L. A., Midili, A. R., & Kegelmeyer, J. (1997). Beyond job attitudes: A personality
and social psychology perspective on the causes of organizational citizenship behavior.
Human Performance, 10, 111–131.
Piff, P. K., Kraus, M. W., Côté, S., Cheng, B. H., & Keltner, D. (2010). Having less, giving
more:  The influence of social class on prosocial behavior. Journal of Personality and
Social Psychology, 99, 771–784.
Pillutla, M. M., & Chen, X. -P. (1999). Social norms and cooperation in social dilemmas: The
effects of context and feedback. Organizational Behavior and Human Decision Processes,
78, 81–103.
Platt, J. (1973). Social traps. American Psychologist, 28, 641–651.
Plous, S. (1985). Perceptual illusions and military realities: The nuclear arms race. Journal
of Conflict Resolution, 29, 363–389.
Poppe, M., & Valkenberg, H. (2003). Effects of gains versus loss and certain versus probable
outcomes on social value orientations. European Journal of Social Psychology, 33, 331–337.
Poteete, A. R., Janssen, M. A., & Ostrom, E. (Eds.) (2010). Working together: Collective action,
the commons, and multiple methods in practice. Princeton: Princeton University Press.
Probst, T. M., Carnevale, P. J., & Triandis, H. C. (1999). Cultural values in intergroup and
single group social dilemmas. Organizational Behavior and Human Decision Processes,
77, 171–191.
Pruitt, D. G., & Kimmel, M. J. (1977). Twenty years of experimental gaming:  Critique,
synthesis, and suggestions for the future. Annual Review of Psychology, 28, 363–392.
Pruitt, D. G., & Rubin, J. Z. (1986). Social conflict. Reading, MA: Addison-Wesley.
Putnam, R. (1993). Making democracy work. Princeton, NJ: Princeton University Press.
Rachlin, H. (2006). Notes on discounting. Journal of the Experimental Analysis of Behavior,
85, 425–435.
Raihani, N. J., & Bshary, R. (2011). The evolution of punishment in n-player public goods
games: A volunteer’s dilemma. Evolution, 65, 2725–2728.
Rapoport, Am. (1987). Research paradigms and expected utility models for the provision of
step-level public goods. Psychological Review, 94, 74–83.
Rapoport, Am. (1988). Provision of step-level goods:  Effects of inequality in resources.
Journal of Personality and Social Psychology, 54, 432–440.
Rapoport, An. (1967). A note on the “index of cooperation” for prisoner’s dilemma. Journal
of Conflict Resolution, 11, 100–103.
Rapoport, An., & Chammah, A. M. (1965). Prisoner’s dilemma. Ann Arbor, MI: University
of Michigan Press.
Rapoport, An., & Dale, P. S. (1967). The “end” and “start” effects in iterated prisoner’s
dilemma. Journal of Conflict Resolution, 11, 354–362.
Reeve, H. K. (2000). Multi-level selection and human cooperation. Evolution and Human
Behavior, 21, 65–72.
Reinders Folmer, C., Klapwijk, A., De Cremer, D., & Van Lange, P. A. M. (2012). One for
all: What representing a group may do to us. Journal of Experimental Social Psychology,
48, 1047–1056.
References ■ 177

Reisel, W. D., Probst, T. M., Chia, S. -L., Maloles, C. M., & König, C. J. (2010). The effects
of job insecurity on job satisfaction, organizational citizenship behavior, deviant
behavior, and negative emotions of employees. International Studies of Management and
Organization, 40, 74–91.
Renzl, B. (2008). Trust in management and knowledge sharing: The mediating effects of fear
and knowledge documentation. Omega, 36, 206–220.
Richerson, P. J., & Boyd, R. (2005). Not by genes alone:  How culture transformed human
evolution. Chicago, IL: University of Chicago Press.
Rigdon, M., Ishii, K., Watabe, M., & Kitayama, S. (2009). Minimal social cues in the dictator
game. Journal of Economic Psychology, 30, 358–367.
Rilling, J. K., & Sanfey, A. G. (2011). The neuroscience of social decision-making. Annual
Review of Psychology, 62, 23–48.
Rioux, S. M., & Penner, L. A. (2001). The causes of organizational citizenship
behavior: A motivational analysis. Journal of Applied Psychology, 86, 1306–1314.
Roberts, G. (1998). Competitive altruism: From reciprocity to the handicap principle.
Proceedings of the Royal Society-B, 265, 427–431.
Roberts, G. (2005). Cooperation through interdependence. Animal Behaviour, 70, 901–908.
Roberts, G., & Renwick, J. S. (2003). The development of cooperative relationships: An
experiment. Proceedings of the Royal Society-B, 270, 2279–2283.
Roberts, G., & Sherratt, T. N. (1998). Development of cooperative relationships through
increasing investment. Nature, 394, 175–179.
Robinson, S. L., & Bennett, R. J. (1995). A typology of deviant workplace behaviors:
A multidimensional scaling study. Academy of Management Journal, 38, 555–572.
Robinson, S. L., & O’Leary-Kelly, A. E. (1998). Monkey see, monkey do: The influence of
work groups on the antisocial behavior of employees. Academy of Management Journal,
41, 658–672.
Roch, S. G., Lane, J. A. S., Samuelson, C. D., Allison, S. T., & Dent, J. L. (2000). Cognitive
load and the equality heuristic: A two-stage model of resource overconsumption in
groups. Organizational Behavior and Human Decision Processes, 83, 185–212.
Rockenbach, B., & Milinski, M. (2006). The efficient interaction of indirect reciprocity and
costly punishment. Nature, 444, 718–723.
Rockenbach, B., & Milinski, M. (2011). To qualify as a social partner, humans hide severe
punishment, although their observed cooperativeness is decisive. Proceedings of the
National Academy of Sciences, 108, 18307–18312.
Rockmann, K., & Northcraft, G. B. (2008). To be or not to be trusted: The influence of
media richness on defection and deception. Organizational Behavior and Human
Decision Processes, 107, 106–122.
Rotemberg, J. J. (1994). Human relations in the workplace. Journal of Political Economy,
102, 684–717.
Roth, A. E., & Murnighan, J. K. (1978). Equilibrium behavior and repeated play in the
prisoner’s dilemma. Journal of Mathematical Psychology, 17, 189–198.
Roth, A. E., Prasnikar, V., Okuno-Fujiwara, M., & Zamir, S. (1991). Bargaining and market
behavior in Jerusalem, Ljubljana, Pittsburgh, and Tokyo: An experimental study.
American Economic Review, 81, 1068–1095.
Rotter, J. B. (1967). A new scale for the measurement of interpersonal trust. Journal of
Personality, 35, 651–665.
Rousseau, D. M., Sitkin, S. B., Burt R. S., & Camerer, C. (1998). Not so different after
all: A cross-discipline view of trust. Academy of Management Review, 23, 393–404.
Rumble, A. C., Van Lange, P. A. M., & Parks, C. D. (2010). The benefits of empathy: When
empathy may sustain cooperation in social dilemmas. European Journal of Social
Psychology, 40, 856–866.
Rupp, D. E., & Cropanzano, R. (2002). The mediating effects of social exchange relationships
in predicting workplace outcomes from multifoci organizational justice. Organizational
Behavior and Human Decision Processes, 89, 925–946.
Rusbult, C. E., & Van Lange, P. A. M. (2003). Interdependence, interaction, and relationships.
Annual Review of Psychology, 54, 351–375.
Rutte, C. G. (1990). Solving organizational social dilemmas. Social Behaviour, 5, 285–294.
Rutte, C. G., & Wilke, H. A. M. (1985). Preference for decision structures in a social
dilemma situation. European Journal of Social Psychology, 15, 367–370.
Rutte, C. G., & Wilke, H. A. M. (1992). Goals, expectations and behavior in a social dilemma
situation. In W. B. G. Liebrand, D. M. Messick, & H. A. M. Wilke (Eds.), Social dilemmas
(pp. 280–305). New York: Pergamon.
Sahlins, M. (1972). Stone age economics. New York: Aldine De Gruyter.
Samuelson, C. D. (1990). Energy conservation: A social dilemma approach. Social
Behaviour, 5, 207–230.
Samuelson, C. D. (1991). Perceived task difficulty, causal attributions, and preferences for
structural change in resource dilemmas. Personality and Social Psychology Bulletin, 17,
181–187.
Samuelson, C. D. (1993). A multiattribute evaluation approach to structural change in
resource dilemmas. Organizational Behavior and Human Decision Processes, 55, 298–324.
Samuelson, C. D., Messick, D. M., Rutte, C. G., & Wilke, H. A. M. (1984). Individual and
structural solutions to resource dilemmas in two cultures. Journal of Personality and
Social Psychology, 47, 94–104.
Sartorius, R. (1975). Individual conduct and social norms. Belmont, CA: Dickenson.
Sattler, D. N., & Kerr, N. L. (1991). Might versus morality explored: Motivational and
cognitive bases for social motives. Journal of Personality and Social Psychology, 60,
756–765.
Scalet, S. (2006). Prisoner’s dilemmas, cooperative norms, and codes of business ethics.
Journal of Business Ethics, 65, 309–323.
Schelling, T. C. (1960). The strategy of conflict. Cambridge, MA: Harvard University Press.
Scherer, A. G., & Palazzo, G. (2007). Toward a political conception of corporate
responsibility: Business and society seen from a Habermasian perspective. Academy of
Management Review, 32, 1096–1120.
Schroeder, D. A. (Ed.) (1995). Social dilemmas: Perspectives on individuals and groups.
Westport, CT: Praeger.
Schwartz, S. H. (1992). Universals in the content and structure of values: Theoretical advances
and empirical tests in 20 countries. Advances in Experimental Social Psychology, 25, 1–66.
Schwartz, S. H. (1999). A theory of cultural values and some implications for work. Applied
Psychology: An International Review, 48, 23–47.
Searcy, W. A., & Nowicki, S. (2005). The evolution of animal communication: Reliability and
deception in signaling systems. Princeton, NJ: Princeton University Press.
Sears, D. O. (1986). College sophomores in the laboratory: Influences of a narrow data
base on social psychology’s view of human nature. Journal of Personality and Social
Psychology, 51, 515–530.
Seinen, I., & Schram, A. (2006). Social status and group norms: Indirect reciprocity in a
helping experiment. European Economic Review, 50, 581–602.
Selten, R., & Stoecker, R. (1986). End behavior in sequences of finite prisoner’s dilemma
supergames: A learning theory approach. Journal of Economic Behavior and Organization,
7, 47–70.
Semmann, D., Krambeck, H.-J., & Milinski, M. (2004). Strategic investment in reputation.
Behavioral Ecology and Sociobiology, 56, 248–252.
Sen, S., Gurhan-Canli, Z., & Morwitz, V. (2001). Withholding consumption: A social
dilemma perspective on consumer boycotts. Journal of Consumer Research, 28, 399–417.
Settoon, R. P., Bennett N., & Liden, R. C. (1996). Social exchange in organizations: Perceived
organizational support, leader-member exchange, and employee reciprocity. Journal of
Applied Psychology, 81, 219–227.
Shah, R., & Goldstein, S. M. (2006). Use of structural equation modeling in operations
management research: Looking back and forward. Journal of Operations Management,
24, 148–169.
Shaw, J. I. (1976). Response-contingent payoffs and cooperative behavior in the prisoner’s
dilemma game. Journal of Personality and Social Psychology, 34, 1024–1033.
Sheldon, K. M. (1999). Learning the lessons of tit-for-tat: Even competitors can get the
message. Journal of Personality and Social Psychology, 77, 1245–1253.
Sheldon, K. M., & McGregor, H. A. (2000). Extrinsic value orientation and “the tragedy of
the commons.” Journal of Personality, 68, 383–411.
Shelley, G. P., Page, M., Rives, P., Yeagley, E., & Kuhlman, D. M. (2010). Nonverbal
communication and detection of individual differences in social value orientation. In
R. M. Kramer, A. Tenbrunsel, & M. H. Bazerman (Eds.), Social decision making: Social
dilemmas, social values, and ethics (pp. 147–170). New York: Routledge.
Shen, S.-F., Reeve, H. K., & Herrnkind, W. (2010). The Brave Leader game and the timing of
altruism among non-kin. American Naturalist, 176, 242–248.
Sherratt, T. N., & Roberts, G. (1999). The evolution of quantitatively responsive cooperative
trade. Journal of Theoretical Biology, 200, 419–426.
Simon, B., Loewy, M., Stürmer, S., Weber, U., Freytag, P., Habig, C., et al. (1998).
Collective identification and social movement participation. Journal of Personality and
Social Psychology, 74, 646–658.
Simon, H. A. (1990). A mechanism for social selection and successful altruism. Science,
250, 1665–1668.
Simon, H. A. (1991). Organizations and markets. Journal of Economic Perspectives, 5, 34–38.
Singer, T., Seymour, B., O’Doherty, J., Kaube, H., Dolan, R. J., & Frith, C. D. (2004). Empathy for
pain involves the affective but not sensory components of pain. Science, 303, 1157–1162.
Singer, T., Seymour, B., O’Doherty, J. P., Stephan, K. E., Dolan, R. J., & Frith, C. D. (2006).
Empathic neural responses are modulated by the perceived fairness of others. Nature,
439, 466–469.
Skarlicki, D. P., & Folger, R. (1997). Retaliation in the workplace: The roles of distributive,
procedural, and interactional justice. Journal of Applied Psychology, 82, 434–443.
Smith, A. (1759/2002). Theory of moral sentiments. New York: Cambridge University Press.
Smith, A. (1776/1976). Wealth of nations. New York: Oxford University Press.
Smith, E. A. (2004). Why do good hunters have higher reproductive success? Human
Nature, 15, 343–364.
Smith, E. A., & Bliege Bird, R. (2000). Turtle hunting and tombstone opening: Public
generosity as costly signaling. Evolution and Human Behavior, 21, 245–262.
Sober, E., & Wilson, D. S. (1998). Unto others: The evolution and psychology of unselfish
behavior. Cambridge, MA: Harvard University Press.
Sommerfeld, R. D., Krambeck, H.-J., Semmann, D., & Milinski, M. (2007). Gossip as an
alternative for direct observation in games of indirect reciprocity. Proceedings of the
National Academy of Sciences, 104, 17435–17440.
Spisak, B., Dekker, P., Kruger, M., & Van Vugt, M. (2012). Warriors and peacekeepers: Testing
a biosocial implicit leadership hypothesis of intergroup relations using masculine and
feminine faces. PLoS ONE, 7(1), e30399.
Staats, H. J., Wit, A. P., & Midden, C. Y. H. (1996). Communicating the greenhouse effect
to the public: Evaluation of a mass media campaign from a social dilemma perspective.
Journal of Environmental Management, 45, 189–203.
Steinel, W., Utz, S., & Koning, L. (2010). The good, the bad and the ugly thing to do when
sharing information: Revealing, concealing and lying depend on social motivation,
distribution, and importance of information. Organizational Behavior and Human
Decision Processes, 113, 85–96.
Stern, P. C. (1976). Effect of incentives and education on resource conservation decisions in a
simulated commons dilemma. Journal of Personality and Social Psychology, 34, 1285–1292.
Stevens, J. R., & Hauser, M. D. (2004). Why be nice? Psychological constraints on the
evolution of cooperation. Trends in Cognitive Science, 8, 60–65.
Stouten, J., De Cremer, D., & Van Dijk, E. (2005). All is well that ends well, at least for
proselfs: Emotional reactions to equality violation as a function of social value
orientation. European Journal of Social Psychology, 35, 767–783.
Stouten, J., De Cremer, D., & Van Dijk, E. (2007). Managing equality in social dilemmas:
Emotional and retributive implications. Social Justice Research, 20, 53–67.
Stouten, J., De Cremer, D., & Van Dijk, E. (2009). Behavioral (in)tolerance of equality
violation in social dilemmas: When trust affects contribution decisions after violation of
equality. Group Processes and Intergroup Relations, 12, 517–531.
Strathman, A., Gleicher, F., Boninger, D. S., & Edwards, C. S. (1994). The consideration of
future consequences: Weighing immediate and distant outcomes of behavior. Journal of
Personality and Social Psychology, 66, 742–752.
Suleiman, R., Budescu, D. V., Fischer, I., & Messick, D. M. (Eds.) (2004). Contemporary
psychological research on social dilemmas. New York: Cambridge University Press.
Suleiman, R., & Rapoport, A. (1988). Environmental and social uncertainty in single-trial
resource dilemmas. Acta Psychologica, 68, 99–112.
Sylwester, K., & Roberts, G. (2010). Cooperators benefit through reputation-based partner
choice in economic games. Biology Letters, 6, 659–662.
Tan, H. -B., & Forgas, J. P. (2010). When happiness makes us selfish, but sadness makes
us fair: Affective influences on interpersonal strategies in the dictator game. Journal of
Experimental Social Psychology, 46, 571–576.
Taylor, M. (1987). The possibility of cooperation. New York: Cambridge University Press.
Tazelaar, M. J. A., Van Lange, P. A. M., & Ouwerkerk, J. W. (2004). How to cope with “noise”
in social dilemmas: The benefits of communication. Journal of Personality and Social
Psychology, 87, 845–859.
Tenbrunsel, A. E., & Messick, D. M. (1999). Sanctioning systems, decision frames, and
cooperation. Administrative Science Quarterly, 44, 684–707.
Tenbrunsel, A. E., & Messick, D. M. (2004). Ethical fading: The role of self-deception in
unethical behavior. Social Justice Research, 17, 223–236.
Tepper, B. J., & Taylor, E. C. (2003). Relationships among supervisors’ and subordinates’
procedural justice perceptions and organizational citizenship behaviors. Academy of
Management Journal, 46, 97–105.
Thibaut, J. W., & Kelley, H. H. (1959). The social psychology of groups. New York: Wiley
and Sons.
Tinbergen, N. (1968). On war and peace in animals and man. Science, 160, 1411–1418.
Toda, M., Shinotsuka, H., McClintock, C. G., & Stech, F. J. (1978). Development of
competitive behavior as a function of culture, age, and social comparison. Journal of
Personality and Social Psychology, 36, 825–839.
Todorov, A., & Duchaine, B. (2008). Reading trustworthiness in faces without recognizing
faces. Cognitive Neuropsychology, 25, 395–410.
Tooby, J., & Cosmides, L. (1996). Friendship and the Banker’s Paradox: Other pathways
to the evolution of adaptations for altruism. Proceedings of the British Academy, 88,
119–143.
Triandis, H. C. (1989). Cross-cultural studies of individualism and collectivism. In J. J.
Berman (Ed.), Nebraska Symposium on Motivation (pp. 41–133). Lincoln, NE: University
of Nebraska Press.
Trivers, R. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology,
46, 35–57.
Turiel, E. (1983). The development of social knowledge: Morality and convention. Melbourne,
Australia: Cambridge University Press.
Tyler, T. R., & Blader, S. L. (2003). The group engagement model: Procedural justice, social
identity, and cooperative behavior. Personality and Social Psychology Review, 7, 349–361.
Tyler, T. R., & Degoey, P. (1995). Collective restraint in social dilemmas: Procedural justice
and social identification effects on support for authorities. Journal of Personality and
Social Psychology, 69, 482–497.
Tyler, T. R., Degoey, P., & Smith, H. (1996). Understanding why the justice of group
procedures matters: A test of the psychological dynamics of the group-value model.
Journal of Personality and Social Psychology, 70, 913–930.
Tyson, T. (1990). Believing that everyone else is less ethical: Implications for work behavior
and ethics instruction. Journal of Business Ethics, 9, 715–721.
Tyson, T. (1992). Does believing that everyone else is less ethical have an impact on work
behavior? Journal of Business Ethics, 11, 707–717.
Utz, S. (2004a). Self-construal and cooperation: Is the interdependent self more cooperative
than the independent self? Self and Identity, 3, 177–190.
Utz, S. (2004b). Self-activation is a two-edged sword: The effects of I primes on cooperation.
Journal of Experimental Social Psychology, 40, 769–776.
Utz, S., Ouwerkerk, J. W., & Van Lange, P. A. M. (2004). What is smart in a social dilemma?
Differential effects of priming competence on cooperation. European Journal of Social
Psychology, 34, 317–332.
Van den Bergh, B., & Dewitte, S. (2006). The robustness of the “raise-the-stakes”
strategy: Coping with exploitation in noisy prisoner’s dilemma games. Evolution and
Human Behavior, 27, 19–28.
Van den Bos, K., & Lind, E. A. (2002). Uncertainty management by means of fairness
judgments. Advances in Experimental Social Psychology, 34, 1–60.
Van der Vegt, G., van de Vliert, E., & Oosterhof, A. (2003). Informational dissimilarity and
organizational citizenship behavior:  The role of intrateam interdependence and team
identification. Academy of Management Journal, 46, 715–727.
Van Dijk, E., De Kwaadsteniet, E. W., & De Cremer, D. (2009). Tacit coordination in social
dilemmas: The importance of having a common understanding. Journal of Personality
and Social Psychology, 96, 665–678.
Van Dijk, E., Van Kleef, G. A., Steinel, W., & Van Beest, I. (2008). A social functional
approach to emotions in bargaining: When communicating anger pays and when it
backfires. Journal of Personality and Social Psychology, 94, 600–614.
Van Dijk, E., & Wilke, H. (1993). Differential interests, equity, and public good provision.
Journal of Experimental Social Psychology, 29, 1–16.
Van Dijk, E., & Wilke, H. (1994). Asymmetry of wealth and public good provision. Social
Psychology Quarterly, 57, 352–359.
Van Dijk, E., & Wilke, H. (1995). Coordination rules in asymmetric social dilemmas: A
comparison between public good dilemmas and resource dilemmas. Journal of
Experimental Social Psychology, 31, 1–27.
Van Dijk, E., & Wilke, H. (2000). Decision-induced focusing in social dilemmas: Give-some,
keep-some, take-some, and leave-some dilemmas. Journal of Personality and Social
Psychology, 78, 92–104.
Van Dijk, E., Wilke, H., & Wit, A. (2003). Preferences for leadership in social dilemmas: Public
good dilemmas versus resource dilemmas. Journal of Experimental Social Psychology,
39, 170–176.
Van Dijk, E., Wit, A., Wilke, H., & Budescu, D. V. (2004). What we know (and do not know)
about the effects of uncertainty on behavior in social dilemmas. In R. Suleiman, D. V.
Budescu, I. Fischer, & D. M. Messick (Eds.), Contemporary psychological research on
social dilemmas (pp. 315–331). Cambridge: Cambridge University Press.
Van Doesum, N., & Van Lange, P. A. M. (2013). Social mindfulness: Will and skill to navigate
the social world. Unpublished manuscript. VU University, Amsterdam.
Van Goozen, S. H. M., Frijda, N. H., & Van de Poll, N. E. (1995). Anger and aggression
during role-playing: Gender differences between hormonally treated male and female
transsexuals and controls. Aggressive Behavior, 21, 257–273.
Van Kleef, G. A., De Dreu, C. K. W., & Manstead, A. S. R. (2006). Supplication and
appeasement in conflict and negotiation: The interpersonal effects of disappointment,
worry, guilt, and regret. Journal of Personality and Social Psychology, 91, 124–142.
Van Lange, P. A. M. (1999). The pursuit of joint outcomes and equality in outcomes: An
integrative model of social value orientation. Journal of Personality and Social Psychology,
77, 337–349.
Van Lange, P. A. M. (2008). Does empathy trigger only altruistic motivation—How about
selflessness and justice? Emotion, 8, 766–774.
Van Lange, P. A. M. (2013). What we should expect from theories in social psychology: Truth,
abstraction, progress, and applicability as standards (TAPAS). Personality and Social
Psychology Review, 17, 40–55.
Van Lange, P. A. M., Bekkers, R., Chirumbolo, A., & Leone, L. (2012). Are conservatives less
likely to be prosocial than liberals? From games to ideology, political preferences and
voting. European Journal of Personality, 26, 461–473.
Van Lange, P. A. M., De Cremer, D., Van Dijk, E., & Van Vugt, M. (2007a). Self-interest and
beyond: Basic principles of social interaction. In A. W. Kruglanski & E. T. Higgins (Eds.),
Social psychology: Handbook of basic principles (pp. 540–561). New York: Guilford.
Van Lange, P. A. M., Bekkers, R., Schuyt, T. N. M., & Van Vugt, M. (2007b). From games
to giving: Social value orientation predicts donation to noble causes. Basic and Applied
Social Psychology, 29, 375–384.
Van Lange, P. A. M., Joireman, J., Parks, C. D., & Van Dijk, E. (2013). The psychology of
social dilemmas: A review. Organizational Behavior and Human Decision Processes, 120,
125–141.
Van Lange, P. A. M., Klapwijk, A., & van Munster, L. M. (2011). How the shadow of the
future might promote cooperation. Group Processes and Intergroup Relations, 14,
857–870.
Van Lange, P. A. M., & Kuhlman, D. M. (1994). Social value orientations and impressions of
partner’s honesty and intelligence: A test of the might versus morality effect. Journal of
Personality and Social Psychology, 67, 126–141.
Van Lange, P. A. M., Otten, W., De Bruin, E. M. N., & Joireman, J. A. (1997). Development
of prosocial, individualistic, and competitive orientations: Theory and preliminary
evidence. Journal of Personality and Social Psychology, 73, 733–746.
Van Lange, P. A. M., Ouwerkerk, J. W., & Tazelaar, M. J. A. (2002). How to overcome the
detrimental effects of noise in social interaction: The benefits of generosity. Journal of
Personality and Social Psychology, 82, 768–780.
Van Lange, P. A. M., & Rusbult, C. E. (2012). Interdependence theory. In P. A. M. Van
Lange, A. W. Kruglanski, & E. T. Higgins (Eds.), Handbook of theories of social psychology
(Vol. 2, pp. 251–272). Thousand Oaks, CA: Sage.
Van Lange, P. A. M., Schippers, M., & Balliet, D. (2011). Who volunteers in psychology
experiments? An empirical review of prosocial motivation in volunteering. Personality
and Individual Differences, 51, 279–284.
Van Lange, P. A. M., & Sedikides, C. (1998). Being more honest but not necessarily more
intelligent than others: Generality and explanations for the Muhammad Ali effect.
European Journal of Social Psychology, 28, 675–680.
Van Lange, P. A. M., & Visser, K. (1999). Locomotion in social dilemmas: How people adapt
to cooperative, tit-for-tat, and noncooperative partners. Journal of Personality and Social
Psychology, 77, 762–773.
Van Prooijen, J. -W., Stahl, T., Eek, D., & Van Lange, P. A. M. (2012). Injustice for all or just
for me? Social value orientation predicts responses to own versus other’s procedures.
Personality and Social Psychology Bulletin, 38, 1247–1258.
Van Vugt, M. (1997). Why the privatisation of public goods might fail: A social dilemma
approach. Social Psychology Quarterly, 63, 355–366.
Van Vugt, M. (2001). Community identification moderating the impact of financial
incentives in a natural social dilemma: Water conservation. Personality and Social
Psychology Bulletin, 27, 1440–1449.
Van Vugt, M. (2009). Averting the tragedy of the commons: Using social psychological
science to protect the environment. Current Directions in Psychological Science, 18,
169–173.
Van Vugt, M., & Ahuja, A. (2010). Naturally selected: The evolutionary science of leadership.
New York: Harper.
Van Vugt, M., & De Cremer, D. (1999). Leadership in social dilemmas: The effects of group
identification on collective actions to provide public goods. Journal of Personality and
Social Psychology, 76, 587–599.
Van Vugt, M., De Cremer, D., & Janssen, D. P. (2007). Gender differences in competition
and cooperation: The male warrior hypothesis. Psychological Science, 18, 19–23.
Van Vugt, M., & Hardy, C. L. (2010). Cooperation for reputation: Wasteful contributions as
costly signals in public goods. Group Processes and Intergroup Relations, 13, 101–111.
Van Vugt, M., & Hart, C. M. (2004). Social identity as social glue: The origins of group
loyalty. Journal of Personality and Social Psychology, 86, 585–598.
Van Vugt, M., & Iredale, W. (2013). Men behaving nicely: Public goods as peacock tails.
British Journal of Psychology, 104, 3–13.
Van Vugt, M., Jepson, S. F., Hart, C. M., & De Cremer, D. (2004). Autocratic leadership in social
dilemmas: A threat to group stability. Journal of Experimental Social Psychology, 40, 1–13.
Van Vugt, M., & Samuelson, C. D. (1999). The impact of personal metering in the
management of a natural resource crisis: A social dilemma analysis. Personality and
Social Psychology Bulletin, 25, 731–745.
Van Vugt, M., & Van Lange, P. A. M. (2006). Psychological adaptations for prosocial
behaviour: The altruism puzzle. In M. E. Schaller, J. A. Simpson, & D. T. Kenrick (Eds.),
Evolution and social psychology (pp. 237–261). New York: Psychology Press.
Van Vugt, M., Van Lange, P. A. M., Meertens, R. M., & Joireman, J. A. (1996). How a
structural solution to a real-world social dilemma failed: A field experiment on the first
carpool lane in Europe. Social Psychology Quarterly, 59, 364–374.
Vierikko, E., Pulkkinen, L., Kaprio, J., & Rose, R. J. (2006). Genetic and environmental
sources of continuity and change in teacher-rated aggression during early adolescence.
Aggressive Behavior, 32, 308–320.
Von Neumann, J. (1928). Zur Theorie der Gesellschaftsspiele [On the theory of parlor games].
Mathematische Annalen, 100, 295–320.
Von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior.
Princeton, NJ: Princeton University Press.
Wade-Benzoni, K. A., Okumura, T., Brett, J. M., Moore, D. A., Tenbrunsel, A. E., & Bazerman,
M. H. (2002). Cognitions and behavior in asymmetric social dilemmas: A comparison
of two cultures. Journal of Applied Psychology, 87, 87–95.
Wade-Benzoni, K. A., & Tost, L. P. (2009). The egoism and altruism of intergenerational
behavior. Personality and Social Psychology Review, 13, 165–193.
Wade-Benzoni, K. A., Tenbrunsel, A. E., & Bazerman, M. H. (1996). Egocentric perceptions
of fairness in asymmetric, environmental social dilemmas: Explaining harvesting
behavior and the role of communication. Organizational Behavior and Human Decision
Processes, 67, 111–126.
Wagner, J. A. III. (1995). Studies of individualism-collectivism: Effects on cooperation in
groups. Academy of Management Journal, 38, 152–172.
Wagner, S. L., & Rush, M. C. (2000). Altruistic organizational citizenship behavior: Context,
disposition, and age. Journal of Social Psychology, 140, 379–391.
Wang, H., Law, K. S., Hackett, R. D., Wang, D., & Chen, Z. -X. (2005). Leader-member
exchange as a mediator of the relationship between transformational leadership and
followers’ performance and organizational citizenship behavior. Academy of Management
Journal, 48, 420–432.
Weber, E. U., & Morris, M. W. (2010). Culture and judgment and decision making: The
constructivist turn. Perspectives on Psychological Science, 5, 410–419.
Weber, J. M., Kopelman, S., & Messick, D. M. (2004). A conceptual review of social
dilemmas: Applying a logic of appropriateness. Personality and Social Psychology
Review, 8, 281–307.
Weber, M. J., & Murnighan, J. K. (2008). Suckers or saviors? Consistent contributors in
social dilemmas. Journal of Personality and Social Psychology, 95, 1340–1353.
Webley, P., Robben, H., Elffers, H., & Hessing, D. (1991). Tax evasion: An experimental
approach. Cambridge: Cambridge University Press.
Wedekind, C., & Milinski, M. (2000). Cooperation through image scoring in humans.
Science, 288, 850–852.
Wendler, D., & Miller, F. G. (2004). Deception in pursuit of science. Archives of Internal
Medicine, 164, 597–600.
Wenzel, M. (2004). An analysis of norm processes in tax compliance. Journal of Economic
Psychology, 25, 213–228.
West, S. A., El Mouden, C., & Gardner, A. (2011). Sixteen misconceptions about the
evolution of cooperation in humans. Evolution and Human Behavior, 32, 231–262.
West, S. A., Gardner, A., Shuker, D. M., Reynolds, T., Burton-Chellow, M., Sykes, E. M.,
Guinnee, M. A., & Griffin, A. S. (2006). Cooperation and the scale of competition in
humans. Current Biology, 16, 1103–1106.
Wildschut, T., & Insko, C. A. (2007). Explanations of interindividual-intergroup
discontinuity: A review of the evidence. European Review of Social Psychology, 18,
175–211.
Wildschut, T., Pinter, B., Vevea, J. L., Insko, C. A., & Schopler, J. (2003). Beyond the group
mind: A quantitative review of the interindividual-intergroup discontinuity effect.
Psychological Bulletin, 129, 698–722.
Williams, R. M., Jr. (1970). American society: A sociological interpretation (3rd ed.).
New York: Knopf.
Williams, S., Pitre, R., & Zainuba, M. (2002). Justice and organizational citizenship behavior
intentions: Fair reward versus fair treatment. Journal of Social Psychology, 142, 33–44.
Wilson, D. S., Van Vugt, M., & O’Gorman, R. (2008). Multilevel selection theory and major
evolutionary transitions. Current Directions in Psychological Science, 17, 6–9.
Wit, A. P., & Kerr, N. L. (2002). “Me versus just us versus us all” categorization and
cooperation in nested social dilemmas. Journal of Personality and Social Psychology, 83,
616–637.
Wolfe, C., & Loraas, T. (2008). Knowledge sharing: The effects of incentives, environment,
and person. Journal of Information Systems, 22, 53–76.
Wong, R. Y., & Hong, Y. (2005). Dynamic influences of culture on cooperation in the
prisoner’s dilemma. Psychological Science, 16, 429–434.
Wubben, M. J.  J., De Cremer, D., & Van Dijk, E. (2009). How emotion communication
guides reciprocity: Establishing cooperation through disappointment and anger. Journal
of Experimental Social Psychology, 45, 987–990.
Yamagishi, T. (1986a). The provision of a sanctioning system as a public good. Journal of
Personality and Social Psychology, 51, 110–116.
Yamagishi, T. (1986b). The structural goal/expectation theory of cooperation in social
dilemmas. Advances in Group Processes, 3, 51–87.
Yamagishi, T. (1988a). Exit from the group as an individualistic solution to the free rider
problem in the United States and Japan. Journal of Experimental Social Psychology, 24,
530–542.
Yamagishi, T. (1988b). The provision of a sanctioning system in the United States and Japan.
Social Psychology Quarterly, 51, 265–271.
Yamagishi, T. (2011). Trust: The evolutionary game of mind and society. New York: Springer.
Yamagishi, T., Hashimoto, H., & Schug, J. (2008). Preferences versus strategies as
explanations for culture-specific behavior. Psychological Science, 19, 579–584.
Yamagishi, T., Terai, S., Kiyonari, T., Mifune, N., & Kanazawa, S. (2007). The social exchange
heuristic: Managing errors in social exchange. Rationality and Society, 19, 259–291.
Yamagishi, T., & Yamagishi, M. (1994). Trust and commitment in the United States and
Japan. Motivation and Emotion, 18, 129–166.
Yezer, A. M., Goldfarb, R. S., & Poppen, P. J. (1996). Does studying economics discourage
cooperation? Watch what we do, not what we say or how we play. Journal of Economic
Perspectives, 10, 177–186.
Zahavi, A., & Zahavi, A. (1997). The handicap principle: A missing piece of Darwin’s puzzle.
New York: Oxford University Press.
Zak, P. J. (2008). The neurobiology of trust. Scientific American, 298, 88–95.
Zeng, M., & Chen, X. -P. (2003). Achieving cooperation in multiparty alliances: A social
dilemma approach to partnership management. Academy of Management Review, 28,
587–605.
■ INDEX

adaptivity, 10, 41, 48–51
affect, 66, 148. See also emotions
age and cooperation, 112, 149
agent-based modeling, 31
aggression, 56, 57, 75, 77
agreeableness, 65, 116
Ahmad, N., 73
Alchian, A., 21
altruism
    competitive, 47
    definition, 55–56
    evolutionary cost of, 40
    long-term perspective, 54
    Prisoner’s Dilemma, 9
    reciprocal, 42–44
    responsive nature of, 57
    role of, 73–77
altruistic punishment, 77
Americans in cross-societal comparisons, 81–82, 90–91, 93–94, 96, 97, 98
Americans vs. Russians in Prisoner’s Dilemma, 81
Ames, R. E., 117
antisocial punishment, 59–60
appetitive competition, 74
approval, desire for, 17
Aristotle, 16
arms race, 137–138
artificial virtue, 17
Asians in cross-societal comparisons, 82, 90–91, 96, 97
Assurance Dilemma, 5–6, 6f, 7, 24–25
asymmetries in social dilemmas, 60, 145
autocratic vs. democratic leadership, 123, 124
aversive competition, 74
Axelrod, R., 43, 145

Baker, W. E., 86
Balliet, D., 99, 102–103
Barling, J., 118
Barsade, S. G., 92
Batson, C. D., 73
beliefs, cultural, 95–100
belonging motive, 127, 130, 133, 134, 135
Bentham, J., 15
Bergeron, D. M., 111
between- vs. within-culture variations in cooperation, 86
bicultural awareness, 92
biological influences, 57, 75, 147–148. See also evolutionary perspective
biological markets theory, 47
Blader, S. L., 109
Borel, E., 20–21
Bornstein, G., 138, 146
brain mechanisms in cooperation, 57, 75, 147–148. See also evolutionary perspective
Brewer, M. B., 126–127
Buchan, N., 101
Burnstein, E., 42
Byrne, Z. M., 109

Cardenas, J. C., 83, 96
career commitment, 111
Carnevale, P. J., 91
Caucasian Americans, 81
CFC (consideration of future consequences), 64–65
Chatman, J. A., 92
Chicken Dilemma, 5–7, 6f, 24
Chinese in cross-societal comparisons, 82, 92, 93–94
choice revision process, 31
Chong, A., 83
Clark, K., 33
classical period, mixed motives theory in, 14–15
cognition and social dilemma behavior, 9, 148
coin exchange paradigm, 28
collective action problems, 26
collective efficacy, 93
collective outcomes, consequences of other-regarding motives for, 75–76
collective rationality, 7
collectivism
    cultural values, 90–95, 104
    Prisoner’s Dilemma, 9
    vs. self-interest perspective, vii, 3–4
    and trust levels, 96
commercial virtue, 17
commitment in workplace, 111
common-pool resources (CPRs). See resource dilemmas
communicated emotions, 67
communication as tool for cooperation, 70–71
communities. See collectivism; cultural perspective; groups
community resource management, 130–132
competition
    cross-societal similarity in socialization for, 81
    definition, 55
    as fuel for unionization, 119
    inter-group, 138
    kin mitigation of, 42
    potential benefits for collective outcomes, 77
    vs. self-interest, 75
competitive altruism, 47
competitors in social dilemmas, 68, 70, 74–75, 120
compulsory voting, 134
confined generosity, 16–17
conformity bias, 51
conscientiousness, 116
consideration of future consequences (CFC), 64–65
Constant, D., 115
contending, self-concern position of, 56
continuous public good, 27–28
conventional norms, 87
cooperation, psychology of, viii, ix, 39–40, 55. See also social dilemmas
coordination games, 132, 133, 137–138
Core Social Motivations Model, 127
corporate ethics, 121–122
corporate social responsibility (CSR), 122
costly signaling theory, 46–47, 48
critical contributor, 27
Cropanzano, R., 109
cross-societal variability in dealing with social dilemmas, 80–84. See also cultural perspective
crowding out phenomenon in incentive strategies, 140–141
CSR (corporate social responsibility), 122
cultural group selection, 52
cultural perspective
    beliefs, 95–100, 104
    and evolutionary perspective, 51–52, 53
    globalization, 100–102
    introduction, 79–80
    social norms, 87–90, 98–99, 103
    summary, 103–104
    trust, 97–99, 103
    values, 90–95, 104
    variation in cooperation, 85–87
culture, definitions, 85

Dawes, R., 5
deception, 34–35
De Dreu, C. K. W., 65
deficient equilibrium, 5
delayed gratification, 5, 44, 145
delayed social fences, 8
delayed social traps, 7–8
democratic management, 123–124
developmental approach to altruism and cooperation, 40
deviant behavior in workplace, 112–113, 122–123
Dictator Game, 25, 82–83, 87–88
direct reciprocity, 42–44, 68–69, 89
direct vs. indirect reproductive fitness, 41
dispositional envy, 65
distributive justice, 110
Douglas, R. L., 32
Dresher, M., 21–22, 21f
dual-concern model, 56
dynamic interaction processes, 58, 67–72
dynamic vs. static paradigms of social dilemma choice, 30–32

Earley, P. C., 93–94
economic outcomes, 102–103, 104, 108. See also workplace
effective matrix, 54–55
effectiveness vs. efficiency, 149–150
efficiency, 72, 149–150
egalitarianism, 57, 62–63, 76
elementary cooperation, 8
emotions
    connection to altruism, 73
    effect on cooperation, 7, 9, 66–67
    influence on decision making, 148
    in kin selection, 42
    stoic management of, 14
empathy, 73–74, 76, 112, 147
endgame effects, 33–34
environmental sustainability, 128–132
Epicurus, 14, 15
equality of outcomes, 56–57, 62–63, 66, 76–77
ethical behavior in workplace, 113–114. See also morality
ethnic and societal variation in cooperation, 81–84
eudaimonia, 16
evolutionary perspective
    adaptivity, 48–49
    costly signaling, 46–47
    culture’s role, 51–52, 53
    introduction, 39–41
    kin selection, 41–42
    neuroscience of fairness, 57
    non-adaptivity issue, 49–51, 52
    reciprocity, 42–45
    value in future research, 144
extra-role behaviors in workplace, 8, 109–112
extraversion, 65

fairness
    automatic neurological activation of, 57, 147
    costly signaling theory, 47
    effect on cooperation, 72
    and egalitarianism, 57
    in Prisoner’s Dilemma, 9
    in workplace, 109–110, 120, 123, 124
Fehr, E., 59
first order dilemmas, 8
Fiske, S., 127
fitness, reproductive, 39, 41–47, 48, 49, 52
Flood, M., 21–22, 21f
forgiveness, 63, 122
formal vs. informal social norm sanctions, 85–86, 88
4xI framework, 127
framing of social dilemmas, 65
freedom, effect on cooperation, 72
free riding, 26, 42–43, 99, 118, 133–134
Fukuyama, F., 97
Fullagar, C., 118
functional approach to altruism and cooperation, 40

Gächter, S., 59, 86
Gallo, P., 32–33
game theory, 5–11, 13, 20–21. See also specific social dilemma games
gene-culture co-evolutionary models, 51–52
generosity, 16–17, 69, 82–83, 88
genetic or biological influences, 57, 75, 147–148. See also evolutionary perspective
Gintis, H., 4–5, 11, 46, 48, 87
given matrix, 54–55
give-some dilemmas, 6t, 8, 26–28, 30, 65. See also public goods dilemmas
globalization, 100–102
global social identity, 101, 102
Governing the Commons (Ostrom), 131
government, role in resolving social dilemmas, 132. See also institutional strategies
groups
    adaptivity within, 48–49
    composition and locomotion, 70
    cooperation based on size of, 94–95
    cultural values and in-group vs. out-group effect, 94, 101, 104
    evolutionary fitness benefits of, 52
    inherent competitiveness of, 138–139
    intergroup public goods (IPG) situation, 119
    power of identity, 71, 129, 130
    Prisoner’s Dilemma between, 91
    sustainable community structures, 131
    unique characteristics of, 121, 146

Hamilton, W. (Bill), 41
happiness, Bentham’s definition, 15
Hardin, G., 29, 123, 128
Hardy, C. L., 45
Hawk-Dove Dilemma, 5–7, 6f, 24
health, public, 139–140
hedonic calculus, 15–16
Hemesath, M., 81–82
Henrich, J., 82–83, 87–89
Herrmann, B., 84, 85, 86
heuristics, applying to social dilemma choice, 66, 71
historical perspective
    Borel, 20–21
    Huizinge case, 4–5
    interdependence, 16–18, 19
    le her, 19–20
    mixed motive concept, 13–16
    outcome maximization, 14–16
    Prisoner’s Dilemma Game, 21–22
    for social dilemma research, 9–11
Hobbes, T., 13, 17–18
Holmström, B., 108
Hong, Y., 92
horizontal collectivists, 91
horizontal individualists, 91
hormonal influence on behavior, 75, 147
Huff, L., 96
Huizinge case, 4–5
human nature
    philosophical perspectives, 14–18
    selfishness vs. unselfishness, 36
    social dilemma clues to, viii
Hume, D., 16–17

identifiability of contribution, 94–95
identity
    global social, 101, 102
    group, 71, 129, 130
    organizational identification, 111
    social dilemma strategy, 136–137, 140, 141
immediate consequences, 5, 16. See also time dimension
imperfect information, 144–145
inaction, self-concern position of, 56
incentives
    environmental sustainability, 130, 131–132
    research issue, 32–33
    solving social dilemma strategy, 140–141
    transportation and mobility, 133
    volunteering, 136
    warfare participation, 138
inclusive reproductive fitness, 41–47, 48, 49
income tax paying, 134–135
indirect reciprocity, 44–45, 69–70
indirect vs. direct reproductive fitness, 41
individual characteristics
    adaptivity, 48–49
    real-world social dilemmas, 130, 141
    social norms, 103
    workplace, 111–112, 116
individualism
    cultural values, 90–95, 104
    definition, 55
    equality as less important for, 63
    reciprocity, 68
    trust levels, 96
    unionization motives, 119
    and youth of research samples, 149
individual vs. structural strategies for solving social dilemmas, 126–127
inequality aversion, 57
infectious diseases and vaccinations, 139–140
informal vs. formal social norm sanctions, 85–86, 88
information
    as criterion for indirect reciprocity, 45
    importance of availability, 144–145
    knowledge-sharing in workplace, 115–117
    as social dilemma strategy, 128–129, 132–133, 134–135, 136, 139, 140
    See also uncertainty level
information-exchange dilemmas, 116
information-pooling task, 115
Inglehart, R., 86
in-group vs. out-group cooperation, 94, 101, 104
institutional outcomes, cooperation’s effect on, 102–103, 104
institutional strategies
    environmental sustainability, 129–130
    importance of local participation, 150
    politics, 134, 138
    transportation and mobility, 133
    vaccinations, 139–140
    volunteering, 136
instrumental cooperation, 8, 64, 77, 119
intangible vs. tangible outcomes, 32–33
interactional justice, 109–110, 120
interdependence
    dynamic interaction processes, 58, 67–72
    functionality of, 17
    and globalization, 101
    historical perspective, 16–21
    psychological influences on, 20–21, 58, 62–67
    structural influences on, 58–62
    structures of, 5–6
    theoretical considerations, 54–55, 143–144
interdisciplinary nature of social dilemma research, 11, 13
inter-group interactions, 91, 138–139, 146
inter-group public goods (IPG) situation, 119
international social dilemmas, 100–102
interpersonal basis of aggression, 75
interpersonal organizational citizenship behavior (OCBI), 109
intra-group conflicts, 138–139
intrinsic orientation, 65
IPG (inter-group public goods) situation, 119
Iredale, W., 46–47

Japanese in cross-societal comparisons, 97, 98
Jarvenpaa, S. L., 115
Jewish law, 19
joint outcomes, preference for, 55, 56, 62
Joireman, J., 5, 8, 11, 25, 62–65, 72, 93, 109, 128, 133, 145–146
justice in workplace, 109–110, 120, 123, 124. See also fairness

Kelley, H. H., 75
Kelley, L., 96
Kerr, N. L., 34–35, 59
Kiesler, S., 115
K index in Prisoner’s Dilemma, 24
kin selection, 41–42
Klandermans, B., 118, 120, 136
knowledge-sharing dilemmas, 115–117. See also information
Kollock, P., 43, 61, 68
Komorita, S. S., 63
Krambeck, H.-J., 45
Kramer, R., 61, 64–65, 72, 101, 111
Kuhlman, D. M., 68, 74

laissez-faire leadership, 123, 124
large-scale societies, 83–84, 89–90
Latin America in cross-cultural comparisons, 96–97, 98
leadership, 71, 110, 114, 123
le her (card game), 19–20
Leviathan (Hobbes), 13, 18
Liebrand, W. B. G., 4, 7, 31, 63, 66, 69, 73, 75, 82
local social management, 131, 150
locomotion, 70
Logic of Appropriateness Model, 127
The Logic of Collective Action (Olson), 136
long-term orientation, 55, 65. See also time dimension
Luce, R. D., 9

MacCoun, R. J., 34–35
management. See workplace
market integration, and cooperation norms, 88, 89
Marlowe, F. W., 89
Marotzke, J., 45
marriage contract problem, 19
Marshello, A. F., 68, 74
Marwell, G., 117
Maximizing Difference Game, 74, 81
McClintock, C. G., 74, 81
McCusker, C., 65
mechanism approach to altruism and cooperation, 40
membership status in organizations, 111
memetics, 51
Messick, D. M., 59, 74, 117, 126–127, 134
Mexican Americans, 81
Mexicans, 81
Milgrom, P., 108
Milinski, M., 44–45
Mill, J. S., 15–16
minimizing large differences, Prisoner’s Dilemma, 9
mismatches, non-adaptive cooperation due to, 50–51
mistakes, non-adaptive cooperation due to, 49–50
mixed-choice strategy, 20
mixed-motive concept, 13–16, 35
modernism, mixed motives in, 15–16
Montmort, P. di, 19
mood states and cooperation, 66
morality, 14, 16–17, 87–90, 113–114
Morgenstern, O., 9, 13, 21
Morris, H., 18
motivation
    activation of, 57–58
    mixed-motive concept, 13–16, 35
    outcome-based transformation, 57
    in Prisoner’s Dilemma, 9
multiattribute evaluation model, 71–72
multilayered social dilemmas, 76, 77
multilevel selection theory, 48–49
mutualism, 40

narcissism, 65
natural environment, 128–133
natural selection vs. prosocial behavior, 39
negotiation strategies and dual-concern model, 56
neighbor-modulated fitness, 48
Nemeth, C., 25, 26
nepotism, 41
neuroscience of human cooperation, 57, 75, 147–148. See also evolutionary perspective
Newton, L. A., 119
noise in social interaction, 43, 61–62, 64, 68, 145
non-normative work behavior, 112–113, 122–123
non-verbal behavior, 67, 147–148
Nopo, H., 83
norms. See social norms
Nowak, M., 10, 43, 44, 51, 68–69, 144
n-person Prisoner’s Dilemma, 25

OCBI (interpersonal organizational citizenship behavior), 109
OCBO (whole organizational citizenship behavior), 109
Olson, M., 26, 136
organizational citizenship behavior (OCB), 8, 109–112
organizational identification, 111
organizational setting. See workplace
ostracism, 59
Ostrom, E., 72, 130–131, 150
outcome-based transformations, 54–58, 73–78. See also interdependence
outcome maximization, 14–16
out-group vs. in-group cooperation, 94, 101, 104
overassimilation, 75
ownership of information, and sharing, 115

parental care as kinship selection, 42
Parks, C. D., 59–60, 63, 69, 82, 90–91
PDG. See Prisoner’s Dilemma Game (PDG)
perception and judgment, automatic links between, 147–148
personal experience criterion in indirect reciprocity, 45
personality variables in social dilemmas, 62–67
phylogeny approach to altruism and cooperation, 40
pleasure/pain dichotomy in motivation theory, 14, 15–16
pledge systems in social dilemmas, 71
politics, 71, 133–137
priming of cooperation, 65–66
Prisoner’s Dilemma Game (PDG)
    Cold War arms race as, 137
    collective rationality in, 7
    corporate social responsibility as, 122
    criticism of, 25–26
    cross-societal variations in response to, 81
    and deviant work behavior, 112, 113, 114, 122
    direct reciprocity strategies, 43–44
    dynamics of, 22–24
    game theory role of, 9–11
    and give-some games, 26, 28
    historical perspective, 21–22
    inter-group interaction, 91
    and social dilemma definition, 5
    structure of, 6f, 21–24, 22–23f
    and take-some games, 28, 30
    variants of, 24–25
problem-solving, self-concern position of, 56
Probst, T. M., 91
procedural justice, 109–110, 120, 123, 124
productivity, social dilemma’s impact on, 108. See also workplace
promising, effectiveness in cooperation, 70
prosocial attitudes and behaviors
    developmental progress of, 74
    and egalitarianism, 57
    evolutionary perspective, 39
    and fairness, 110
    priming for cooperation, 66
    psychological influences on choice, 62–63
    and unionization motives, 119
psychological perspective, 20–21, 58, 62–67, 78, 144. See also altruism; interdependence
public goods dilemmas
    cross-societal variation in response to, 83–84
    definition, 8
    as give-some games, 6f, 26–28
    income tax paying, 134–135
    indirect reciprocity, 45
    inter-group public goods situation, 119
    knowledge-sharing dilemma as, 115
    motivation to support, 26
    vs. resource dilemmas, 126
    reward and punishment incentive, 33, 84, 85, 86
    unionization as, 117
    volunteerism as, 135
    warfare as, 138
public health, 139–140
punishment
    altruistic, 77
    antisocial, 59–60
    costly, 59
    cross-societal variations, 84, 85, 86, 88–89, 90
    long-term prospect of, 145
    in outcome matrix, 23–24
    reputation as substitute for, 149
    social effectiveness of, 59, 149
    third-party punishment game, 87–89
    See also sanctions for social norm violation
Putnam, R., 97
Pyrrho, 14, 15

Raiffa, H., 9
Raise-the-Stakes strategy, 43–44
RAND corporation, 10
Rapoport, Am., 10, 60, 61
Rapoport, An., 24, 33, 58, 137
real-world social dilemmas
    basic issues, 140–142
    characteristics of, 125–126
    environmental sustainability, 128–132
    international security, 137–139
    politics, 133–137
    public health, 139–140
    strategies and motivations, 127
    transportation and mobility, 132–133
reason and outcome maximization, 14–15
reciprocal altruism, 42–44
reciprocity
    and biological markets theory, 47
    direct, 42–44, 68–69, 89
    indirect, 44–45, 69–70
    social exchange heuristic, 71
    in workplace justice, 110
religion, 88–89, 90, 99–100
repeated-choice data, 31
reproductive fitness, 39, 41–47, 48, 49, 52
reputation
    and conformity bias, 51–52
    and costly signaling, 46–47
    in indirect reciprocity, 44–45, 69–70
    and mismatches in adaptation, 50–51
    in small-scale societies, 89
    as substitute for punishment, 149
    and unionization, 118
research perspective
    corporate ethics, 121
    cross-societal variability, 80–84
    cultural influences, 85, 86
    deception, 34–35
    endgame, 33–34
    evolutionary approach’s role, 40–41, 52
    future prospects, 143–151
    give-some games, 26–28
    multilevel selection theory, 49
    overview, 9–12, 35–36
    practical considerations, 32
    Prisoner’s Dilemma, 21–24
    real-world social dilemmas, 25–26, 140–141
research perspective (Cont.)
    sampling issue, 148–149
    static vs. dynamic paradigms, 30–32
    structural equation modeling, 31–32
    take-some games, 28–30
    tangible vs. intangible outcomes, 32–33
    workplace behaviors, 112, 117, 120
resource dilemmas
    asymmetries in resource allocation, 60
    indirect reciprocity, 45
    vs. public goods dilemmas, 126
    real-world considerations, 126, 128–133
    as take-some games, 6t, 8, 28–30
reward and punishment
    and aggression’s role in collective outcomes, 77
    complications in real-world dilemmas, 149
    environmental sustainability, 130
    as incentive in public goods dilemmas, 33, 84, 85, 86
    and motives in social dilemma decision making, 127
    non-trusters’ willingness to impose sanctions, 64
    structural influences on cooperation, 62
    timing of, 69
    See also punishment
reward in outcome matrix, 23–24
Rumble, A. C., 69
runaway social selection, 47
Russians in cross-societal comparisons, 81
Rutte, C. G., 107

Samuelson, C. D., 71–72, 134
sanctions for social norm violation
    cross-societal variations, 98–99
    cultural influence on cooperation, 87
    effectiveness vs. efficiency, 149–150
    informal vs. formal, 85–86, 88
    social exclusion, 59, 77
    sustainable community resource management, 132
    warfare participation, 138
    in workplace, 113, 114, 123
Sartorius, R., 18
second order dilemmas, 8, 59, 99
security, international, 137–139
Sefton, M., 33
self-enhancing motive, 127, 135–136
self-interest
    awareness of punishment for excessive, 59
    vs. collective interest, vii, 3–4
    as commercial virtue, 17
    vs. competition, 75
    as only human motivation in social dilemma, 57
    vs. other-regarding, 54–58
self-monitoring, 65
self-restraint, human difficulty with, 18
Selten, R., 33
SEM (structural equation modeling), 31–32
Semmann, D., 45
sensation seeking, 65
sexual mate competition, 47
shadow of the future, 145
shared responsibility, 93–94
Sheposh, J., 32–33
Shinotsuka, H., 81
shirking, 113
Shore, L. M., 119
short-term consequences, 5, 16. See also time dimension
skepticism, classical Greek, 14, 15
small-scale societies, 82–83, 87–89, 150
Smith, A., 17
Snow Drift Dilemma, 5–7, 6f, 24
social capital, 131
social class and research samples, 149
social cynicism, 95
social death penalty, 58–60
social dilemmas
    definitions and assumptions, vii, 5, 8–9, 11
    games perspective, 5–11, 13, 20–21
    importance of understanding, 12
    interdisciplinary nature of, 11, 13
    introduction, 3–4
    theory and application relationship, 11–12
    See also historical perspective; real-world social dilemmas
social exchange heuristic, 71
social exclusion, 59, 77
social fences, 6t, 8, 30. See also give-some dilemmas
socialization, 81
social loafing, 113
social movements, 136–137
social norms
    conformity bias, 51–52
    cultural perspective, 87–90, 98–99, 103
    definition, 87
    individual response to, 103
    religion, 88–89, 90, 99–100
    reward and punishment method for enforcing, 59–60
    as sources for transformations, 58
    strength of internalized, 71
    tax paying compliance, 135
    workplace, 112–113, 116, 122–123
    See also sanctions for social norm violation
social preferences theory, 57
social traps, 6t, 7–8, 30. See also take-some dilemmas
social value orientation (SVO)
    direct reciprocity, 68
    and framing of decisions, 65
    in information sharing, 116
    organizational citizenship behaviors, 112
    as psychological influence on choice, 62–63
    strategic alliances, 120
    typology of, 55–57, 56f
    unionization, 119
spite. See competition
Sproull, L., 115
Stag Hunt (Assurance) Dilemma, 5–6, 6f, 7, 24–25
Stahelski, A. J., 75
Staples, D. S., 115
static vs. dynamic paradigms of social dilemma choice, 30–32
Stech, F. J., 81
step-level public good, 27, 27f
Stern, P. C., 33
Stoecker, R., 33
stoicism, classical Greek, 14–15
Stone, A. B., 59–60
strategic alliances, workplace, 120–121
structural approaches
    interdependence theory, 58–62
    to social dilemma solutions, 71, 126–127
structural equation modeling (SEM), 31–32
sustainability, 131
SVO. See social value orientation (SVO)
sympathy, and interdependence, 17

take-some dilemmas, 6t, 8, 28–30, 65. See also resource dilemmas
tangible vs. intangible outcomes, 32–33
tax paying, 134–135
Tazelaar, M. J. A., 64
TEA (tradeable environmental allowances), 130
team commitment, 111
temporal dimension. See time dimension
temporal discounting, 145
temptation in outcome matrix, 19, 23–24
Tenbrunsel, A. E., 59
Theory of Games and Economic Behavior (Neumann and Morgenstern), 21
Theory of Moral Sentiments (Smith), 17
third-party punishment game, 87–89
Thöni, C., 86
Thorngate, W. B., 74
time dimension
    age and cooperation, 112, 149
    altruism, 54
    cultural assimilation, 87
    delayed gratification, 5, 44, 145
    development of longer-term perspective, 16
    dynamic interaction processes, 67–72
    importance for future research, 145–146
    rewards and punishments, 69
    and social dilemma definition, 5
    social dilemma structure, 7–8
    and Tit-for-Tat strategy, 43
Tit-for-Tat strategy, 43, 68–69, 137
Toda, M., 81
tradeable environmental allowances (TEA), 130
tragedy of the commons, 29, 128
Tragedy of the Commons (Hardin), 29
transformations, outcome, 54–58, 73–78. See also interdependence
transportation and mobility, 132–133
Triandis, H. C., 91
trust
    and collectivism, 96
    cross-societal differences in, 97–99, 103
    definition, 63
    direct reciprocity, 44, 68
    and disarmament, 137
    and individualism, 96
    and institutional strategies, 130
    and knowledge sharing, 116
    power of local community, 150
    as psychological influence on choice, 63–64, 127
    and voice in organizational governance, 123
    and warfare participation, 138, 139
Trust Dilemma, 5–6, 6f, 7, 24–25
Tucker, A. (mathematician), 22
Tyler, T. R., 109

Ultimatum Game, 25
uncertainty level
    and competition, 74–75
    in environmental sustainability dilemmas, 128–129
    real-world vs. experimental settings, 60–61, 62
    research considerations, 145
    strategic alliances, 120–121
    and trust, 64
understanding motive, 127, 128–129, 136, 137, 138
unionization, 117–120, 136
utilitarianism, 15

vaccinations, 139–140
values, cultural, 90–95, 104
Van Dijk, E., 5, 8, 30, 57, 59, 60–61, 63, 65–67, 70–71, 126, 147–148
Van Lange, P. A. M., 70, 99, 102–103
Van Vugt, M., 25–26, 45, 46–47
vertical collectivists, 91
vertical individualists, 91
Vietnamese in cross-societal comparisons, 82, 90–91
virtue, social nature of, 16
Visser, K., 70
volunteerism, 135–136
Von Neumann, J., 9, 13, 21
voter’s paradox, 134
voting as social dilemma, 133–134
Vu, A. D., 82, 90–91
vulnerability, and trust, 64

Wade-Benzoni, K. A., 97
Waldegrave problem, 19–20
warfare, 138
Wealth of Nations (Smith), 17
Weber, J. M., 127
Wedekind, C., 44–45
whole organizational citizenship behavior (OCBO), 109
Williams, J., 21
Win Stay, Lose Shift (WSLS) strategy, 69
within- vs. between-culture variations in cooperation, 86
Wong, R. Y., 92
workplace
    citizenship behavior, 109–110
    collectivism-individualism dichotomy, 92
    corporate ethics, 121–122
    cultural values, 93–95
    democratic management, 123–124
    deviant behavior, 112–113, 122–123
    ethical behavior, 113–114
    fairness, 109–110, 120, 123, 124
    individual characteristics, 111–112, 116
    introduction, 107–108
    knowledge sharing, 115–117
    leadership role, 110
    membership status, 111
    organizational identification, 111
    social norms, 112–113, 116, 122–123
    strategic alliances, 120–121
    unionization, 117–120, 136
World Values Survey, 96, 97, 99
WSLS (Win Stay, Lose Shift) strategy, 69

Yamagishi, T., 59, 98
yielding, self-concern position of, 56

Zeno, 14–15
