Taylor N. Carlson - What Goes Without Saying-Cambridge University Press (2022)
TAYLOR N. CARLSON
Washington University in St. Louis
JAIME E. SETTLE
William & Mary
www.cambridge.org
Information on this title: www.cambridge.org/9781108831864
DOI: 10.1017/9781108912495
A catalogue record for this publication is available from the British Library.
for the execution of the lab experiments. The Osher Lifelong Learning
Institute (formerly the Christopher Wren Association), in particular
Judith Bowers, partnered with us to help disseminate our findings.
W&M also funded several opportunities for Taylor through an Honors
Fellowship, which ultimately supported an experiment described in
Chapter 7, conference travel to the Midwest Political Science
Association (MPSA), and training at the Summer Institute for Political
Psychology at Stanford in 2013. The University of California, San Diego’s
Center for American Politics provided space on a CCES module that
provided data for analyses that appear in Chapter 8. Washington
University in St. Louis provided Taylor with the time needed to dedicate
to this project and resources to hire graduate research assistants, such as
Erin Rossiter and Benjamin Noble, without whom this project would
have been seriously delayed.
Taylor was one of the core founding members of the Social Networks
and Political Psychology (SNaPP) Lab, which Jaime established in 2013.
The impact of the lab on this project cannot be overstated. Multiple
cohorts of W&M students were directly and indirectly involved in the
projects presented in this book. We are especially grateful to the
members of the Lab Experiments Team over the years. Drew
Engelhardt was the first to tackle the setup of the BioPac equipment
and AcqKnowledge software, followed by Karina Charipova. John
Stuart, Edward Hernandez, Zarine Kharazian, Dan Brown, and
Michelle Hermes collected, cleaned, and preprocessed most of the data
for the psychophysiologically informative studies in Chapters 6 and 7.
Laurel Detert, Emily Saylor, Alex Bulova, Nick Oviedo-Torres, Emma
DiLauro, and Nora Donnelly were involved in analyzing some of this
data as well as in thinking about how the study protocols could be
extended in the future.
Countless other SNaPP Lab alums were involved in the production of
the data. We thank Meg Schwenzfeier for her work in coding the social
network analysis data from the first psychophysiological study. We thank
Michael Payne for the inspiration for what we report as the Names as
Cues study but internally called the “Ezekiel Studies” because of a
humorous conversation with him. Another set of students – Ally Brown,
Vera Choo, Leslie Davis, Aidan Fielding, Jacob Nelson, Alexis Payne,
Anne Pietrow, Kathleen Quigley, and Olivia Yang – were involved in
coding free response data. The “COVID Cohort” – Julia Campbell,
Claudia Chen, Leslie Davis, Andrew Luchs, Kaylie Martinez-Ochoa,
and Frank Tao – contributed at the final stages of writing the manuscript,
rekindled her own, and she is a better scholar because of the opportunity
to work with Taylor.
Taylor would like to thank her family for providing years of examples
of different types of political discussions (both lived and observed), which
provided motivation to write this book. In addition to providing her with
a colorful variety of conversations that no doubt capture every single
behavior we describe empirically in this book, her family gave her unend-
ing support. She thanks her parents, Charlie and Skip, for giving her the
opportunity to attend the College of William & Mary and take full
advantage of opportunities along the way that helped pave the way for
this book. She thanks her grandparents, Bud and Sharon, for remaining
ever-curious about the research and what exactly it is that she does with
her time. Taylor also thanks her in-laws, Bob, Michelle, Alyssa (Hahn),
and Evan Carlson, for their support of and interest in her work. Taylor
owes the biggest gratitude to Eric and Daniel Carlson. Without Eric’s
support during the entirety of this project’s near-decade of work, Taylor
would not have had the motivation to finish. Eric gave her the space and
time to write, listened when she needed to vent or verbally process parts of
the project, and held down the fort when she traveled. Although Daniel’s
arrival into this world just before the Covid-19 pandemic delayed our
progress on this book by a few months, Taylor is most grateful of all to
her little Danny Bob. Daniel gave this book new meaning and purpose,
yet served as a constant reminder that life is more important than a book
(with apologies to readers!). Daniel “wrote” his first line of code for this
book and many paragraphs were written with Daniel asleep in
Taylor’s arms.
Taylor owes a huge thank-you to Jaime. Jaime has been an incredible
mentor and collaborator for nearly a decade. This book would not have
been possible without Jaime’s guidance, intuition, and patience while
Taylor learned the ropes of academia and struggled through challenges
that were just a distant memory for Jaime at the time, such as taking
comprehensive exams or finishing a dissertation. Jaime has been incred-
ibly supportive of work–life balance during this project, and Taylor is
indebted to Jaime for that above all else.
having been in his shoes before, and does his best to steer the conversation
toward safer waters. Football. Movies. The holidays. Their kids’ dance
recitals. Your grandmother’s bunion removal. Anything but politics.
Joe’s day continues. At lunch, he overhears people at the table next to
him talking about the latest economic news. It sounds like the most
talkative member of the group is sharing more opinion than fact, even
though he’s billing it as objective reality. Joe finishes his workday and
takes the bus home, remembering to use his headphones to avoid more
unwelcome political encounters. Thursday nights are dinners with his in-
laws; he gave up years ago trying to impress them, but he still makes an
effort not to antagonize them. While “not antagonizing them” used to be
simple enough, their constant political commentary has complicated
things. They always have something to say, typically motivated by the
cable news programs that monopolize their television. Joe never knows if
they want him to reply or not, but because they seem well-informed, he
never feels like he has much to contribute. He is always grateful when his
wife takes one for the team, and even though this conversation is the
longest one he’s had all day about politics, he does not actually say
anything.
As Joe falls asleep that night, he rehashes in his mind all the close calls
he had during the day, where he narrowly avoided getting drawn into an
unpleasant situation talking about politics. The evasion is exhausting, but
necessary. When he thinks back to the occasions where he has not steered
clear of contentious conversations – with strangers, with his coworkers,
and with his family – he cringes. Those negative memories are what
motivates him to work so hard to avoid offending others and minimize
his own discomfort.
The bottom line: Questions about the rates and nature of political
discussion are difficult to ask and answer. Scholars have wildly different
definitions of political discussion (or conversation, talk, deliberation,
interaction), and “talking about politics” means different things to differ-
ent people (Eveland, Morey, and Hutchens 2011; Morey and Eveland
2016). Following points carefully raised by these scholars, we note that
the most commonly used political discussion survey items are far too
blunt to capture the nuances of the behavior. Analyzing data from full
networks, Morey and Eveland (2016) find that dyads tend to agree on
whether they had any conversation, but do not agree on whether they
discussed politics. This suggests that individuals have different conceptu-
alizations of what constitutes a political discussion, and these different
conceptualizations can make it difficult to measure the existence or fre-
quency of discussion.
Turning toward the who of conversations, Joe, like most Americans,
would report on a survey that he talks most frequently with those with
whom he has a close relationship, such as his wife and in-laws. He also
would probably report that most of the people with whom he talks tend
to agree with him (Mutz 2006), though disagreement persists in his
network (Huckfeldt, Johnson, and Sprague 2004). A survey might also
pick up that Joe is exposed to more diversity of opinion in the workplace,
consistent with findings that coworkers are an important source of cross-
cutting discussion (Mutz and Mondak 2006). But these measures would
miss several important aspects of the who in Joe’s political conversation
experiences. We would not fully understand the effect of the group
context and power dynamic in his water-cooler conversation, both of
which have been shown to be important (e.g. Richey 2009). And although
we would accurately identify that he encounters disagreement at work,
we would likely misattribute to these conversations the ability to influence
Joe. Eveland, Morey, and Hutchens (2011) importantly find that political
conversations with neighbors and coworkers are more likely to be in the
form of small talk, motivated by the desire to pass time, while political
conversations with strong ties, such as partners and relatives, are gener-
ally motivated by more instrumental factors, for example, trying to form
an opinion or inform others. Joe’s experience seems to fit this finding, but
our standard survey questions would likely miss this nuance and how the
nature of his various social relationships affects the tenor of different
conversations.
Our understanding of where political discussion takes place is often
deduced from whom people report as political discussants. As a result, we
assume that most discussion takes place in the workplace and at home
(e.g. Conover, Searing, and Crewe 2002), where Americans spend most
of their time. But discussions take place in other locations as well, such as
regular discussion groups that emerge out of civic or community associ-
ations (Cramer 2004, 2006, 2008), barbershops (Harris-Lacewell 2010),
or in public spaces such as social gatherings and pubs, although these
locations facilitate political discussion far less frequently than the work-
place (Conover, Searing, and Crewe 2002). Churches also play an import-
ant role as a place for people to exchange political opinions, especially for
African Americans.
A major problem with common measures of the if, when, who, and
where of political conversation is that they are narrowly focused on
conversations with regular discussants. As a result of the survey ques-
tions, scholars would miss the multitude of incidental interactions in Joe’s
day, ranging from the fleeting exchange on the bus, akin to a “snort of
derision” in Mansbridge’s (1999) terms, to the political talk briefly inter-
woven into the conversation around the office water cooler. These con-
versations do not necessarily influence Joe’s policy opinions – he is not
learning from them or hoping to persuade anyone. But collectively, they
are a regular, albeit diverse, feature of his daily life that could help him
grasp the political world around him and shape his expectations of what
kinds of people tend to believe what kinds of things about politics. While
regular political discussants might be more influential on standard polit-
ical behaviors such as voting, learning, and attitude formation, ignoring
the full set of interactions might constrain the scholarly understanding of
how conversations can affect broader attitudes toward politics. Relatedly,
we have not captured anything about the conversations that Joe could
have had but successfully avoided. The active avoidance of political
conversation may affect Joe in ways that extend beyond the simple
absence of discussion.
Where social scientists might really come up short is in the answers to
the questions of why and how. Why do people talk about politics? The
vast majority of Americans do talk about politics, at least sometimes.
Many scholars start their inquiry with an assumption about what motiv-
ates political discussion, but few have tested these assumptions. The
conventional answer is that people communicate about politics to achieve
instrumental goals related to decision-making. They communicate in
order to learn and to persuade others. Research stemming from this
assumption finds evidence that circumstantially supports it: people
who are more vested in the political system – those who are more
Our Contribution
Our book addresses this missing piece in our understanding of interper-
sonal political interaction, a behavior some consider to be the lifeblood of
democracy. Why, despite high rates of reported political discussion, do so
many Americans dislike talking about politics? And how do these consid-
erations affect the way that people communicate? We argue that we need
to consider the psychological experience of political discussion as navi-
gating a social process that is rife with potential challenges to one’s sense
of self and one’s relationships with others. Our argument emphasizes two
features of political discussion. The first is that political discussion is an
inherently social behavior. As such, we follow seminal research before us
and argue that without assessing the social factors influencing the decision
to talk about politics, we cannot fully understand who talks about polit-
ics, with whom, under what conditions, and with what consequence.
Variation in the cognitive resources of political conversation, such as
interest in politics or political knowledge, or in instrumental goals related
to learning and persuasion cannot fully explain people’s motivation to
seek or avoid discussion, although considerations related to information
certainly are part of the story.
The second is that political discussion is a process. Previous research
on the causes and consequences of political discussion tends to focus
rigidly on the inputs and outputs of discussion, but not the mechanisms
of the relationship between them. For example, dozens of quantitative
empirical studies examine properties of political discussion networks,
such as the amount of disagreement, to assess the effects of political
discussion, such as political knowledge, political engagement, or vote
choice. These studies rarely measure the actual experience of discussion:
who is available to discuss politics, how people scan their environment to
find (or avoid) discussants, how these factors affect the probability that
politics emerges in discussion and whether people decide to engage, the
group dynamics in the conversations that do occur, and what opinions
people actually express. Those studies that do focus on discussion itself –
such as who discusses politics, when, and with whom (e.g. Minozzi et al.
2020) – do not always capture the iterative nature of discussion, or how
people’s experience in one political discussion affects their decision-
making in the next one.
This book is an effort to open the lid on the processes that lead up to a
political discussion and the implications of the conversations that do
happen. Our approach is to build on what we already know about
political discussion, focusing on the gaps in our knowledge as a field,
resulting from untested assumptions and limited methodologies in previ-
ous work. We apply new measurement techniques in order to better study
the decision-making processes that lead to the initiation of discussion, the
nuances of the interactions that do occur, and the consequences of those
conversations on a wide set of political and social outcomes. We view our
contribution in three parts.
First, we provide a new framework, the 4D Framework, for conceptu-
alizing the feedback cycle of interpersonal political interactions. It models
political discussion as a process motivated by people’s pursuit of the goals
that have been shown to motivate other forms of interpersonal behavior:
accuracy, affirmation, and affiliation. Previous work focuses on the inputs
these choices, but also in the salience of different types of decisions. For
example, someone who loves to talk about politics with anyone who will
listen is unlikely to spend much time thinking about the decision to
engage, while that same decision could be quite salient to someone whose
desire to talk politics is contingent on holding opinions similar to those
of a potential discussant.
We spend several chapters unpacking the kinds of considerations that
happen in advance of and during a discussion. In order to do this, we
deploy a set of methodological tools that are new or underutilized in
previous work on political discussion. We asked people to tell us about
their political conversations in their own words. We used behavioral
economics approaches to infer people’s preferences. We hooked people
up to heart rate monitors and then asked them to talk about politics with
others. As a result, we do not address the metrics of discussion that are
well trod – discussion frequency, discussion network composition, or
downstream participatory behaviors. Instead, we focus on asking ques-
tions that have not received much attention. For example, how do people
recognize if they agree or disagree with someone before a conversation
begins? Under what conditions do people try to avoid political conversa-
tions? How do they perceive the opportunities and benefits of discussion,
and what concerns them about the possibility of discussion with certain
kinds of people?
This focus on decision-making reveals new explanations for patterns
that long have been detected in Americans’ discussion networks. For
example, many people have studied discussion network composition in
an effort to disentangle the effects of selection versus influence in the
development of political opinions. We demonstrate that the preference
for like-minded discussion is not simply a reflection of environmental
availability of discussants, but rather reflects active choices about
avoiding certain kinds of interactions.
What has not received adequate attention in the discussion literature –
though it has in the deliberation literature (e.g. Karpowitz and
Mendelberg 2014) – is the extent to which social factors affect the choices
people make about voicing their political opinions, even if they are
interested in politics. People are aware of whether their opinions are in
the majority or minority, and whether they are at an informational
advantage or disadvantage. These group dynamics – such as the perceived
knowledge gap between potential discussants, or the power hierarchy
between individuals – up the ante of the social repercussions people might
face for expressing their political views (e.g. Noelle-Neumann 1974;
Glynn, Hayes, and Shanahan 1997; Scheufele and Moy 2000; Morey,
Eveland, and Hutchens 2012), increasing the proportion of people who
choose to silence or censor themselves. While the overall frequency of
political discussion in the United States has remained relatively high, the
rates of self-censorship have increased, largely due to micro-level fears of
social isolation from expressing unpopular views (Gibson and Sutherland
2020). As a result, the largely homophilous conversations in which people
participate reflect a series of behavioral choices that have consequences
beyond the information to which a person is exposed during a discussion
or the effects of that information on learning, persuasion, or vote choice.
Our third major contribution is to emphasize the role of individual
differences that shape the way people navigate the political discussion
landscape. Throughout our inquiry, we are focused on the fact that there
is significant heterogeneity in people’s attitudes, expectations, and experi-
ences with political discussion. We assess the way that demographic,
political, and psychological dispositions moderate how people interpret
the demands and ramifications of potentially contentious social inter-
actions about politics, with an eye toward evaluating what broader
implications that has for the composition of people who most vocally
discuss politics. While we are not the first to consider the role of individ-
ual dispositions in political discussion behavior, we more tightly map
theories about which kinds of traits should matter for which kinds of
decisions within the process of a political discussion.
Many point to the Internet and social media as the ideal way to amplify
the opportunities for exposure to diverse perspectives. But we are
increasingly pessimistic about the ability of digital technologies to provide
the kind of exposure that can foster meaningful exchanges of ideas. In our
separate research agendas, we observe that people quickly recognize and
negatively evaluate people with whom they disagree on Facebook (Settle
2018), and that the information that is transmitted in short bits of written
communication deteriorates in ways that undermine the value of socially
transmitted information (Carlson 2018, 2019). Online interpersonal com-
munication can amplify many of the negative behaviors that both scholars
and the public care about deeply, such as information distortion (Carlson
2018) and belief in misinformation (Anspach and Carlson 2020).
In an era in which Americans are able to select into echo chambers and
the mass media has become largely compartmentalized by the preferences
of its viewers, face-to-face interpersonal interaction remains a conduit
through which people might be exposed to opinions that are different
than their own. People may encounter fewer individuals who differ
greatly from them, but the people they do encounter become dispropor-
tionately important as opportunities for perspective and dialogue.
There is reason to think that interpersonal interactions could both help
and hurt the various facets of the polarization problem America faces in
the twenty-first century. Scholars have explored the possibility that polit-
ical discussion could foster tolerance for the other side (Mutz 2002; Mutz
and Mondak 2006) or help depolarize attitudes (Parsons 2010).
Interpersonal communication can also amplify political learning under
some conditions (Carlson 2019; Ahn, Huckfeldt, and Ryan 2014). But at
the same time, interpersonal interaction can further the deleterious effects
of attitude polarization from partisan media (Druckman, Levendusky,
and McLain 2018). Much of the evidence suggests that disagreeable
deliberation can actually facilitate entrenchment and deepen polarization
(Wojcieszak 2011) or may increase ambivalence in a way that undermines
participation (Mutz 2006), but it is possible that this “dark side” of
disagreement is largely concentrated among people who are in a complete
opinion minority, rather than those who are exposed to a mix of opinions
(Nir 2011; Bello 2012).
How do we know if interpersonal interaction will be constructive or
damaging for the health of our democracy? The findings in this book are
important because they will guide researchers in understanding which
kinds of political interactions actually occur and with what consequence.
The argument and evidence presented in this book highlight a
individual differences between people can make some more sensitive than
others to various features of their context and their discussants. Our
exploration of the 4D Framework uses an eclectic set of methodological
techniques. Chapter 3 is an overview of the methodological core of our
inquiry. We explain the key operationalizations of the concepts in the 4D
Framework and provide context and details for the studies that appear in
multiple chapters throughout the book.
Chapter 4 commences the empirical tests of our argument, beginning
with Stage 1 of the 4D Framework: Detection. We directly tackle a
question buried implicitly in previous findings, as well as our own, that
people prefer like-minded discussants: How do people detect the political
views of others? People must be able to do this if they make active
selections about their discussion partners. The stakes of discussion may
be higher in a polarized environment, but the readily available cues
stemming from a divided and politicized society make the process of
sorting into amicable discussions easier. We show that individuals are
able to use a variety of cues to infer political leanings, including more
obvious cues such as demographic characteristics and extremely subtle
cues, such as first names, pet preferences, and movie preferences. We then
explore the existence of stereotypes that individuals hold about partisans,
under the assumption that these attitudes could affect the ability to
recognize others’ views and willingness to engage in a discussion. Given
that individuals are (differentially) able to recognize the viewpoints of
others, what assumptions do those identities trigger when a person is
deciding whether to engage in a conversation? We find that, consistent
with research on affective polarization, individuals hold negative stereo-
types about outpartisans: They ascribe more negative personality traits to
outpartisans and consider them to be ill-informed, ignorant, and overly
reliant on partisan media. People make these judgments even about out-
partisans they personally know.
Under what conditions are people most likely to discuss politics, and
how do they perceive the costs and benefits of potential conversations of
different configurations? Our focus in Chapter 5 is on the moment of
Decision itself (Stage 2). We use three novel approaches to answer this
question, including a semi-structured discussion experiment, a series of
more than a dozen vignette experiments, and the “name-your-price”
paradigm. The semi-structured discussion experiment, which we call the
True Counterfactual Study, asked participants to reflect upon and
describe either political discussions in which they recently engaged or
political discussions in which they could have engaged, but chose to
People make thousands of small decisions each and every day from the
moment they wake up to the moment they go to sleep. Some research,
made popular in the strategic management world, suggests that we make
over 35,000 decisions a day (see Krockow 2018 for a discussion). More
than 200 of these decisions are about food alone (Wansink and Sobal
2007). In fact, we make so many decisions on a daily basis that by
necessity most do not feel like a conscious choice. We process the
information around us to inform our behavior, often without ever fully
stopping to consider alternative forms of action.
While cognitive psychologists have focused on the subconscious, auto-
matic, and often irrational ways that humans arrive at their behavioral
choices, political scientists have largely conceptualized behavioral choice
in the realm of politics as conscious, deliberate, and calculated. In this
conceptualization, when people report that they have engaged in a polit-
ical discussion, it is because they made an active choice to do so. Yet, as
we highlighted in the opening vignette of the book, the “choice” to engage
in political discussion is actually the result of a series of micro decisions
that reflect a varying degree of agency on the part of an individual. Many
of the forces that structure the likelihood and experience of a political
conversation – the context in which individuals find themselves, the
distribution of others’ opinions – are out of individuals’ control at the
moment a potential discussion emerges. The decision to talk politics
reflects an assessment of the costs and benefits of doing so, given the
circumstances in the moment the choice is made.
In this chapter we advance our argument that the decisions about
whether and how to engage in a political discussion are shaped by many
of the same forces that guide other interpersonal interactions. The classic
depiction of political discussion behavior suggests that it emerges as a
result of one’s interest in politics, shaped by one’s personality and the
availability of discussion partners. But we argue that this depiction is
incomplete. We propose the 4D Framework to characterize the process
of political discussion at four stages – Detection, Decision, Discussion,
and Determination – during which people’s decisions about how to
engage are motivated by the same goals underpinning interpersonal inter-
actions more generally: to be accurate, to affirm a positive self-concept,
and to affiliate with others. Considerations are shaped by the contextual
factors of the conversation as well as people’s psychological and psycho-
physiological dispositions. Over time, people develop more generalized
propensities and preferences for political discussion based on their previ-
ous experiences navigating the four stages of past discussions.
Focusing on the process of discussion encourages an assessment of
facets of political discussion that have remained un- or under-explored
to date. Our attention shifts from the outcomes of a discussion to the
anticipation of a discussion; from the composition of a discussion net-
work to the experience of a discussion itself; and from an emphasis on
learning and persuasion to an emphasis on social evaluation and relation-
ship management. Reorienting our focus in this way suggests that the
variation in political discussion behavior between people is driven in part
by a set of individual differences that extend beyond political interest,
related to how people process their social environments more generally.
(Klofstad 2010; Sinclair 2012; Ahn, Huckfeldt, and Ryan 2014), consist-
ent with theories that position discussion as an important component of
the two-step flow of information. However, Eveland, Morey, and
Hutchens (2011) find that these instrumental motivations for political
discussion are less common overall, especially among weaker social ties,
echoing Mutz (2006), who writes that, “[p]eople tend to care more about
social harmony in their immediate face-to-face personal relationships
than about the larger political world” (p. 106).
Scholars using qualitative or formal methods back up these assertions,
revealing or modeling the social nature of political discussion. Conover
and Searing (2005) note of their focus groups that “many participants
were interested in the information they gained less for instrumental polit-
ical purposes than to learn about the lives of others and to seek common
ground. Hence, social motives may be much more important than we
have assumed” (p. 279). Cramer (2004) writes that “much of political
behavior is rooted in social rather than political processes” (p. 8). And in
a footnote to his formal model of political discussion, MacKuen (1990)
argues that “one would expect that individual choice to engage in polit-
ical discussion should be particularly sensitive to the nature of the social
environment” because of the inherent role for morality in such discus-
sions (pp. 94–95).
At its core, political discussion is a social behavior motivated by social
considerations. While prior research has revealed this insight, previous
studies have not used it as a starting assumption for quantitative studies
of political conversation. We think this observation is so important that
it should ground our theoretical expectations about why political discus-
sion does or does not occur, what happens when conversations are
initiated, and how people interpret what they experience during a dis-
cussion about politics.
Our emphasis builds directly on Eliasoph’s (1998) framework for her
study of how people create environments conducive for political talk, a
process she interchangeably calls “civic practices,” “political manners,”
or “etiquette” (p. 21). She writes:
This etiquette implicitly takes into account a relationship to the wider world;
politeness, beliefs, and power intertwine, in practice, through this sense of civility.
The concept, then, refers to citizens’ companionable ways of creating and main-
taining a comfortable context for talk in the public sphere. Goffman called this
constant, unspoken process of assessing the grounds for interaction “footing.”
Are there stairs here? Loose gravel? Ice? To walk we have to assess the footing.
Talking is the same: are we talking to make conversation? To accomplish a task?
These risks can also be oriented toward others’ feelings, as Cramer (2004)
notes that “[p]olitical topics arise when they fit in with the flow of the
conversation and run little risk of offending someone within earshot”
(p. 41).
Eliasoph’s work is designed to understand how people navigate the
process of producing contexts for political talk, especially in informal
contexts where the rules about doing so are not entirely clear. Eliasoph
(1998) notes that even in rule-bound settings, people “relentlessly make
inferences about the nature of those settings, improvising rules that par-
ticipants do not recognize as improvised” (p. 236). She continues:
“[T]here is an extra layer of uncertainty: in contrast to these rule-bound
or traditional settings, in which participants think that they know what is
going on and how to act, participants in post-suburban civic life them-
selves say that they are unsure about how to act. They know that they
have to figure out the rules as they go” (p. 236).
The vast majority of Americans report that they discuss politics at least
occasionally. Navigating contexts in which political talk might emerge
necessitates that people assess the costs and benefits of such discussions.
These assessments structure their decision-making – before, during, and
after a political discussion – and shape the rules and norms people
develop for their discussion behavior.
Individuals tend to feel good about themselves when they identify with
and conform to groups that they value (Brewer and Roccas 2001).
Finally, they might also be motivated to affiliate with others. By express-
ing the same opinion or providing the same answer as those in the group,
they might be more likely to be included in the group; even if they are
giving an incorrect answer to an objective question or an ill-informed
opinion, at least the whole group will be wrong together.
Given that these three goals have been shown to motivate individuals
in a variety of interpersonal contexts – situations involving social
influence, social norms, message-based persuasion, conformity, and
compliance – we expect individuals to be susceptible to similar motiv-
ations during political interactions. Importantly, these goals can lead
individuals both to avoid and to pursue political discussion. As we
explore more shortly, previous scholars have touched on these very types
of motivations, but they have not fully integrated their observations into a
unified theory of the psychological factors contributing to the decision-
making process inherent in political discussion. We do.
In this book, we develop and test the AAA Typology in which these
three motivations shape the way people find their “footing” in the con-
texts in which political discussion can or does emerge. Political discussion
can be uncomfortable – challenging individuals’ accuracy, affiliation, and
affirmation goals – making the process of navigating political talk stress-
ful for many people. These interpersonal interactions pose the risk of
damaging people’s self-concept and their social relationships, motivating
people to make decisions that minimize the costs and maximize the
benefits of political conversation. Moreover, we suggest that situational
factors of the conversation affect the considerations that people hold.
There are many such factors that could matter – such as the closeness of
the relationship between discussants (e.g. Morey, Eveland, and Hutchens
2012), the power dynamic embedded within that relationship (e.g.
Ohbuchi and Tedeschi 1997; Richey 2009), or the setting of the
conversation – but we choose to focus primarily on two features that have
received considerable prior attention in the literature: the political know-
ledge level of discussants, and the amount of disagreement between them.
Accuracy Goals
As scholars like Huckfeldt and colleagues have explored, the desire to
learn can motivate political discussion. Indeed, scholarship stemming
from the Columbia School would lead us to believe that learning is
anything about it . . . I’ve heard people say what they said, but I’ve also
heard people say the opposite, that it was a good thing, and I do not know
how to tell the difference. I do not even know enough to know what to
believe’” (pp. 108–109).
People who are more confident of their political knowledge face other
sorts of accuracy challenges. Even before the “fake news” era of contem-
porary politics, Americans often disagreed about the facts of politics,
challenging people’s motivation to be accurate during a political conver-
sation. For example, Bartels (2002) examines partisan bias in perceptions
of objective events, such as economic performance. He finds that although
the economy improved during the Reagan administration, Democrats
reported that the economy had gotten worse and Republicans reported
that it had improved tremendously (see p. 134 and Figure 3). Similarly,
Jerit and Barabas (2012) find evidence for selective learning: Partisans are
more knowledgeable about facts that make their party look good and less
knowledgeable about facts that challenge their party, especially on topics
that are made salient through media exposure.2 These findings imply that
conversations between people who disagree with one another entail the
clash of different interpretations of the facts. While Conover, Searing, and
Crewe (2002) find that people are willing to learn in political discussions
if they perceive they are educating themselves, they note an important
caveat: “[T]hey do not want to be pushed by others to accept ideas that
challenge them” (p. 60).
Altogether, accuracy goals can facilitate both engagement in and avoidance of political discussion, as well as shape the various types of behavior within discussions. Some might be driven to conversation in
an effort to learn from their peers – or to be the ones who inform them!
Others might not be interested in learning, but fear having their know-
ledge challenged by others in the group, leading them to avoid this threat
to accuracy. Wanting to appear informed could lead some to go along
with what the group says instead of what they really believe, but could
lead others to loudly overshare their views in an effort to persuade.
Affirmation Goals
The affirmation goal is the desire to pursue a positive self-concept. Implicit
within the literature on motivated reasoning, selective exposure, and the
propensity to choose like-minded discussion partners is the idea that it
simply “feels better” to receive positive reinforcement for one’s political
preferences and validation of one’s opinions. The pursuit of affirmation
goals tends to highlight the factors that can push individuals toward
engaging in discussion, rather than the factors that can pull individuals
away from engagement. For example, Neblo, Esterling, Kennedy, Lazer,
and Sokhey (2010) find that Americans are much more willing to deliber-
ate than conventional wisdom would lead us to expect. They find that in
contrast to the Stealth Democracy (Hibbing and Theiss-Morse 2002) view
of political participation, most Americans do not view political engage-
ment as “taking their medicine,” but rather gain some utility from engage-
ment. Moreover, individuals might engage in conversations to better
understand the world around them, rather than to pursue more instru-
mental goals of persuading or informing others (Neblo 2015).
Prior literature on political discussion has mostly framed the affirmation
goal in terms of self-expression. In some contexts, the opportunity to
publicly share one’s views can positively reinforce self-concept, as
Conover, Searing, and Crewe (2002) write, “[c]itizens understand political
discussion as an act of ‘self-expression’ . . . Most obviously, when we
discuss issue concerns, we are required to make known our preferences
on those issues” (p. 56). But they go on to explore a second facet of self-
expression involved in political discussion: “Some of our preferences are
‘constitutive’ preferences in that they are central to the meaning of a
particular identity. Therefore, stating your issue positions can expose more
than just your preferences; it sometimes reveals a basic identity, who you
are at your core. Thus discussion can fuse a ‘politics of ideas’ with a
‘politics of identity or presence’” (p. 56). They highlight the idea mentioned
earlier, that discussion risks being “truly recognized and thereby revealed
to one’s fellow citizens, or being ‘pressured’ to transform your preferences
and thereby change the nature of your identity” (p. 56).
There is reason to think that these dangers of self-expression have
increased as our politics have become more polarized and sorted.
Gibson and Sutherland (2020) find that the percentage of Americans who do not feel free to speak their minds has increased dramatically since the 1950s. In 1954, only 13 percent of Americans reported that they
did not feel free to speak their minds, but in 2019, 40 percent of
Americans felt this way. The authors find that this pattern is driven
primarily by affective polarization: Self-censorship (not feeling free to
speak one’s mind) is highly correlated with affective polarization. This
echoes a central finding made by Gibson (1992) that those who do not
feel free to express themselves politically are less tolerant of others and
tend to have more homogeneous networks. The vilification of the other side has grown in tandem with the increasing number of Americans who report self-censoring their political views.
Affiliative Goals
The goal to affiliate – to feel included and identify with a group – is
foundational in the way people navigate political discussions. We can
think about group affiliation in the context of political discussion as
shared political views or identities. Similar to the affirmation goal, dis-
cussing politics with like-minded others can provide the opportunity to
connect and form bonds with other people over a common set of values or
priorities. At the core of the work applying social identity theory to
politics is the idea that people are motivated to define their ingroup vis-
à-vis an outgroup to bolster their sense of belonging.
These bonds of affiliation could also be about the intensity of one’s
engagement with politics, not the direction of one’s views. For example, if a
person’s peers are particularly involved in politics and regularly engage in
discussions, she might choose to opt into these conversations in an effort to
avoid feeling “left out.” Krupnikov and Ryan (2022) argue that one of the
most important political divides in the United States is between those who
are deeply engaged in politics and those who are not, echoing a point raised
by Klar and Krupnikov (2016) about Independents’ distaste for partisan-
ship more generally. The desire to fit in with a group could outweigh the
costs of discussing a topic that someone finds boring (at best) or distasteful
(at worst), leading them to engage. Humans are social creatures and the
desire to affiliate socially with a group and feel part of a team can have a
powerful influence on decisions to opt into political conversations.
Close relationships are strong enough to withstand the potential disruption that
might occur from either abruptly – and rudely – disengaging from a contested
discussion or turning it into a real argument full of passion and anger. With close
friends and family, “you feel like they’re going to accept you . . . You might have a
temporary argument but they love you and you love them. And you’re not going
to lose that love just because of politics.” By contrast, persuasive and argumenta-
tive discussions with acquaintances run the risk of alienating people and disrupt-
ing social relations that must be maintained (such as co-workers). Outside of close
relationships, you cannot be sure if you will be accepted “for yourself or just by
what you say or how you act.” (2002, p. 57)
This insight is also foundational for Mutz’s (2006) argument that cross-
cutting networks discourage political participation: “[T]he demands of
social accountability create anxiety because disagreement threatens social
relationships” (p. 106). While her operationalization of social accountability precludes a full test of this idea,3 she supports her assertion with work from Rosenberg (1954–1955) and Mansbridge (1980), who both find qualitative evidence that the desire to avoid conflict and minimize threats to interpersonal harmony was an important factor in their studies of political engagement.
Thus, while most political discussion research tends to align with the
notion that affiliative goals can push people away from political discus-
sion or lead them to engage in conformity within conversations that do
happen, affiliative goals can lead some to be more likely to engage.
Weaving together the motivational theory proposed by social psycholo-
gists with the observations from previous work on political discussion
leads us to propose the framework that will guide our empirical analysis.
We organize this framework of political discussion as a cycle in four
stages: Detection, Decision, Discussion, and Determination. The choices
a person makes at one stage have implications for the opportunities
available to them at the next, and unique individual dispositions affect
people’s preferences and behavior in each stage.
This framework emphasizes different facets of discussion behavior
than has previous scholarship. First, we focus considerably more on what
happens in anticipation of a potential political discussion. People must get
a read on their discussants before a conversation has even begun if they
are going to minimize their discomfort or maximize their enjoyment. We
care about the decision to engage in an interaction, not simply the
conversations that result. Second, we focus considerably more on the
nonpolitical factors that guide the decision to talk politics and what
people choose to say. Even those people who are interested in and
knowledgeable about politics may have reasons that they seek to avoid
political conversation in certain contexts, just as those who are less
interested might have reasons to pursue conversations. Third, we suggest
that most people prioritize preserving their self-esteem and their social
relationships over the instrumental political benefits that may be gained
from a discussion. If people perceive that a political conversation will do
lasting damage, they are likely to steer clear of the topic, even if they could
learn something or improve their vote choice in the process. Similarly, if
people perceive that a political conversation will improve their social
relationships, they are likely to engage, even if they have little to gain
politically.
Throughout the book, we rely on a core analogy for the 4D
Framework: the “choose your own adventure” books popular in the
1980s and early 1990s. The books would trace the storyline of a protagonist, unfolding in small chunks, each ending with a decision between two or more courses of action that the main character could choose. The story
changed depending on the reader’s preference, and thus a single book
could contain dozens of different storylines.4 The choices the reader made
for the main character earlier in the story had consequences for the range
of choices available at later stages of the story, although it often felt like
some endings were inevitable.
We conceptualize the process of political discussion in a similar way.
People make many choices as they navigate through social contexts in
which political discussion may appear. The three motivational goals
discussed previously – accuracy, affiliation, and affirmation – shape the
decisions that individuals make before, during, and after a political con-
versation, as we elaborate on shortly. The choices they make depend on
the circumstances in which they find themselves and the people with
whom they might interact. These choices impact the options they have
moving forward, and the decisions they make shape the decisions they
face in the next “round” of their story. Additionally, we expect there to be
significant variation between individuals, so within each stage of the cycle
we describe, we ultimately want to explain variation in the preferences
people have and how that affects their choices.
We will use the “choose your own adventure” analogy throughout the
book to highlight some of the decisions that our protagonist from the
opening vignette of the book, Joe, makes during his day-to-day experi-
ences. Each stage of the 4D Framework presents a decision point for Joe,
and each complete storyline would be a lap around the cycle we charac-
terize in the framework.
political agreement and frequency of discussion, such that those who are
less extraverted and less emotionally stable will be more likely to
avoid disagreement.
The Big Five personality traits are a useful framework for understanding a wide variety of behaviors, but we seek to move beyond these broad traits to look
at particular characteristics that could matter for affecting the way indi-
viduals anticipate, experience, and internalize political discussion. We
look toward traits that should uniquely affect a person’s awareness of
and sensitivity to the social costs involved in political discussion: a per-
son’s inherent comfort level with social interaction, as measured by facets
of their personality or their sensitivity to social anxiety. Specifically, we
focus on social interaction anxiety, conflict avoidance, and willingness to
self-censor (WTSC).
In the next chapter, we go into more detail about how we measure
these traits. As a preview, we expect that those who are more socially
anxious will be more likely to avoid political conversations, regardless of
situational factors, such as disagreement or knowledge asymmetries.
When socially anxious people end up discussing politics, we expect that
they will be less likely to express their true political views and will feel
less comfortable than those who are less socially anxious, but these
differences might not be driven by situational factors. At the
Determination stage, we expect social anxiety to be associated with
increased reports of politically motivated social estrangement, in an
effort to avoid future interactions.
An aversion to conflictual interactions might lead people high in the
trait to be more practiced in detecting potential disagreement before a
discussion even begins. We expect those who are conflict avoidant to be
more likely to avoid political discussions, especially when they anticipate
disagreement. Unlike social anxiety, conflict avoidance should be strongly
related to situational factors, such as disagreement. Once they are
engaging in a discussion – voluntarily or not – conflict-avoidant individ-
uals might be less willing to express their true opinions, which survey
evidence from Pew Research Center suggests could be the case for some
topics (Doherty et al. 2019). For example, the authors find that only
26 percent of those who are low in “comfort with conflict” report that
they would share their views about Trump during a dinner conversation
with others who disagree, compared to 76 percent of those who are high
in “comfort with conflict.” Similarly, we expect that those who are more
conflict avoidant might be more likely to silence their views, censor them,
or even conform to the group’s majority opinion.
Stage 1: Detection
One of our starting assumptions is that people assess the social costs of
political discussion. In order to do so, people must be able to ascertain the
likely course of the conversation to calculate whether the potential bene-
fits involved in discussing politics outweigh the potential risks. The ques-
tion is, how do people do this? Research on perceptions of political views
(e.g. Rule and Ambady 2010) and political discussion networks is often
conducted separately. In fact, some of the seminal research on social
political communication only discusses perception of discussant views as
a theoretical aside, removed from the empirical core of the article (e.g.
Huckfeldt and Sprague 1987).
The extant literature has focused primarily on the interactions that
people report as being most frequent or regular. And most people report
knowing with high degrees of certainty the viewpoints of their friends and
family members. One particular study from Pew suggests that approxi-
mately 90 percent of people report knowing the viewpoints of their
family, close friends, and spouse or partner on an issue salient at the time of the study (Hampton et al. 2014). Huckfeldt and Sprague
(1987) find that individuals are strikingly accurate in identifying the
presidential candidate preferences of political discussants who agree with
them (91–92 percent accurate), but substantially less accurate in identify-
ing the preferences of those who disagree.5 More recent studies have
critiqued some of this early work, arguing that the previously reported
statistics are difficult to interpret because they often conflate inaccuracy
with uncertainty depending on the authors’ treatment of “don’t know”
responses (Eveland and Hutchens 2013; Eveland et al. 2019). However,
there are reasons to think that most people are adequately accurate in
reporting agreement but are less able to recognize disagreement, and that
increased discussion frequency improves accuracy (Eveland and Hutchens
2013). Using a method that allows for the flexible measurement of uncer-
tainty, Carlson and Hill (2021) find that individuals are able to accurately
infer presidential vote choice of strangers based on a variety of demo-
graphic and political cues, and accuracy increases as individuals have
more characteristics in common.
What matters more than accuracy per se is perception: Accuracy may
be a prerequisite for effective communication (Huckfeldt, Johnson, and
Sprague 2004), but perception matters more at the outset of a conversa-
tion when people’s guess of their discussants’ views is going to guide their
decision-making. The question that remains is how people recognize and
perceive these differences among their weaker social ties or for the people
they interact with incidentally throughout the course of their day. When a
stranger makes a comment in passing, what additional clues do people use
to guess his or her likely political views?
In a polarized society, one in which partisan politics has become
aligned with worldviews, religions, and identities (Mason 2018),
Americans view the political world through a partisan lens and have
come to recognize their ingroup and their outgroup. Hetherington and
Weiler (2018) describe a microcosmic example of the extent to which
worldview preferences have resulted in parallel but distinct realms of
existence for the “fixed” and the “fluid,” worldviews that correlate with
political identities. They write that “[t]oday, Americans are divided by
choices that seem much more trivial than where they live, work, and
worship . . . while these day-to-day preferences say less about the convic-
tions and values than do choices about occupation, residence, and
religion, they nevertheless reflect how Americans think about the world
more broadly” (p. 90).
Hetherington and Weiler (2018) assert that people do make inferences
about others based on the existence of “tells,” day-to-day lifestyle prefer-
ences that are “signs of larger beliefs about the world, and about their
political commitments” (p. 91). Deichert (2019) provides the evidence for
this claim. In addition to showing that certain cultural preferences –
depicted both in written description but also in visual manifestations –
are strongly associated with one or the other political party, Deichert uses
an implicit association test to demonstrate the existence of these cognitive
linkages in long-term memory. We connect her ideas to the process of
political discussion. People perceive potential political discussants with these assumptions already in place, evaluating others through the associations they hold about the other side. Recognition of someone’s
political identity triggers a set of expectations from previous interactions
with people in the same group.
Think back to the vignette about Joe’s day in Chapter 1. When he
got on the bus, he instinctively scanned potential seatmates to screen
out people who seemed like they might initiate political discussions.
He had a read on his coworkers’ political views and recognized when
the group members gathered around the water cooler did not agree
with one another. He knew his in-laws’ views with certainty on the
topics he had heard them discuss, and based on what he knew about
their media preferences, he could extrapolate what they thought about
other policy issues. Joe is not walking around trying to be an amateur
political psychologist, but without too much effort, he is mapping
the political environment around him. As we detail more later, Joe’s
behavior is not unique.
We explore this stage of the process more generally in Chapter 4,
where we highlight results from a series of studies – both our own and
by others studying this process – that suggest the extent to which people
make assumptions about others based on easily identified physical and
verbal signals. It turns out that many of the differentiating factors that
social scientists have recognized between those on the left and those on
the right – such as consumer preferences, worldview differences, and
baby-naming preferences – are recognizable to the public as well. These
results suggest that people are often aware of the political views of potential discussion partners before a political discussion is even initiated, and that
the assumptions these identities activate could shape people’s interest in
engaging in conversation.
Stage 2: Decision
One of our main contributions is the idea that the anticipation of discus-
sion matters. What people expect to happen shapes their subsequent
behavior. If people expect a positive, enjoyable discussion, they should
be more likely to engage, regardless of potential instrumental costs; if
people expect a negative, upsetting, unpleasant discussion, they should be
more likely to opt out, regardless of potential instrumental benefits.
Previous research has focused largely on the subset of conversations
that materialize, and materialize regularly enough to be reported on a
survey. As a result, discussion frequency and discussion network compos-
ition become the variables studied, at the expense of studying the emer-
gence of discussion itself. In effect, this truncates the dependent variable
and systematically misses the process of deciding to discuss politics. We
think the process of anticipation – the stage at which someone actively
decides to engage (or not) in a discussion based on the information they
have at the onset of a conversation – is important in its own right.
People do not arrive as blank slates at the moment a political discus-
sion emerges. They bring with them their individual predispositions as
well as the assumptions they have made about their potential political
discussion partner. There is always a moment of decision – albeit fleeting –
preceding a political conversation. The opening narrative of this book
characterizes several such decisions. If someone else has initiated the
conversation, a person must decide whether and how to respond, or to
derail the political aspect of the conversation. If a person initiates a
conversation with someone else, they decide when, where, with whom,
and what to say to engage the other person.
At the onset of a potential conversation, a person can make informed
guesses about a number of facets of the conversation. How many people are
present? What is the ratio of agreement to disagreement in the group? Given
the composition and context of the discussion, what can the person antici-
pate about the experience of the discussion itself based on the interaction of
these factors? A private conversation with a discussion partner whom someone knows well and knows agrees with her is a very different experience than a discussion with a seatmate on a plane, train, or bus.6 A discussion with a
small group of coworkers around the water cooler at work is different than
a conversation around the family holiday dinner table.
As a result of this assessment, there is variation between people not only
in the frequency with which they discuss politics but also in their general
orientation toward political discussion. This was prominently studied in
Stage 3: Discussion
Despite the vast research on discussion network composition and the
emphasis on disagreement in the literature, considerably less attention
has been paid to the dynamics of organic political conversation itself. We,
as a field, know comparatively little about the actual contours of political
discussions primarily because our methods are not well suited to captur-
ing those dynamics: Quantitative approaches rely on faulty human recol-
lection and reporting, and qualitative approaches are costly and produce
findings that are often difficult to generalize. Our focus here is not on the
Stage 4: Determination
The end of the discussion itself is only the beginning of its consequences.
We already know that people who engage in agreeable political discus-
sions are more likely to engage in other forms of participatory behaviors,
but that some of this relationship is driven by interest in politics and
This chapter has provided an overview of the 4D Framework we use to
evaluate the process of political discussion. Moving beyond previous
Data Collection
4D Framework Inputs
We focus on three broad categories of inputs to the 4D Framework:
individual dispositions, situational factors, and psychological consider-
ations. We describe each in turn, providing a brief recap of the role each
plays in the model and then explaining how we measure each construct in
the book.
Individual Dispositions
Individual dispositions structure the way in which people navigate the 4D
Framework, a point we elaborated on in Chapter 2. We focus on three
types of individual dispositions – demographic, political, and psycho-
logical – and discuss our theoretical expectations and empirical analysis
in more detail in Chapter 9. For demographics, we examine gender and
race and ethnicity, as measured using respondents’ self-reported gender
and racial or ethnic identities on our surveys. We examine interest in
politics and strength of partisanship for our political dispositions. We use
standard measures of these questions, commonly used in major surveys,
such as the American National Election Study (ANES). In this chapter, we
want to provide more detail on the psychological dispositions that we explore because they are less commonly employed in political science research.
Situational Factors
In addition to individual dispositions, features of the social context in
which political conversations occur can also affect individual behavior in
the 4D Framework. We focus primarily on two situational factors: dis-
agreement and knowledge asymmetries. While there are a myriad of other
situational factors we could consider – such as the gender or racial
composition of the group, the power dynamics or social tie strength
between discussants, participant levels of political interest or engagement,
or the location of the conversation – we chose to focus on two factors that
are the most strongly tied to our theoretical framework, as we describe in
Chapter 2.
The pattern of findings in the political discussion literature, especially
in studies using egocentric analyses based on name generators, suggests that
different operationalizations of disagreement can lead to different conclu-
sions (Klofstad, Sokhey, and McClurg 2013). We theorize that in the
context of a social interaction, disagreement based on identity (e.g. parti-
sanship or candidate preference) may operate differently than disagree-
ment based on the clash of opinions (issue disagreement, or the experience
Candidate Disagreement
  Vignette Manipulation: "It quickly becomes clear to her that they have
  very different political views from hers, as they discuss their support
  for the candidate Sarah opposes." (Stages: Decision, Discussion;
  Chapters 5, 7; Study: CIPI I Vignette Experiment)
  Compensation demands for conversations with a group of Clinton
  supporters, Sanders supporters, Trump supporters, Cruz supporters, etc.
  (Stage: Decision; Chapter 5; Study: Name Your Price Study)
  Vignette Manipulation: "It quickly becomes clear to him/her that [they
  have very different views from him/her, as they discuss their support
  for the candidate that John/Sarah opposes / most of the group has very
  similar political views as him/her, (Stage: Determination; Chapter 8;
  Study: Vignette Pilot Studies)
https://doi.org/10.1017/9781108912495.003 Published online by Cambridge University Press
4D Framework Outputs
The 4D Framework is an organizing structure for the nuanced choices
related to whether people talk about politics, what they choose to say,
and what they distill from their conversations. In this section, we focus on
outputs of the 4D Framework, which largely take on the role of depend-
ent variables in our empirical analyses. The outputs include behaviors
such as whether individuals try to detect others’ political views in advance
of a discussion, for example, or whether individuals hedge when revealing
their partisan identity to a stranger in a conversation.
In characterizing and studying cycle outputs, we could have proceeded
in two ways. The first would be to pick a single outcome variable for each
stage of the 4D Framework and more robustly test the correlates of that
outcome measure. The advantage of this approach would be that we
could make a more concise and refined argument about each key stage.
The second approach would be to pick a variety of outcome variables
pertinent to each stage and explore more facets of various choices. The
advantage of this approach would be that we could capture a wider
swath of the various decisions and behaviors embedded within political
discussion.
We chose to pursue the latter strategy for several reasons. First, polit-
ical discussion is a multifaceted behavior that is anything but formulaic.
We see our contribution primarily as conceptual – theorizing the iterative
process of political discussion – instead of a measurement contribution
Table 3.5 (continued)

Stage: Detection
Outcome Concepts: Disagreement Recognition
Measurements – Proximal Behaviors:
  Imagine that you were trying to guess someone's political views, but you
  couldn't ask them directly. How would you go about guessing their
  political views? [Free response]
Measurements – General Tendencies:
  Below we have listed some characteristics about people. How would you
  describe someone's political party affiliation if all you knew was that
  he or she . . .
  When you discuss politics with someone new, do you typically try to
  guess his or her political views before starting the discussion? (Yes, No)

Stage: Decision
Outcome Concepts: Discussion Avoidance
Measurements – Proximal Behaviors:
  How do you think John/Sarah would respond to the person's question?
  (Say nothing on the subject, even though s/he disagrees with them)
Measurements – General Tendencies:
  Some people try to avoid getting into political discussions because they
  think that people can get into arguments and it can get unpleasant.
  Other people enjoy discussing politics even though it sometimes leads to
  arguments. What is your feeling on this – do you usually try to avoid
  political discussions; do you enjoy them; or are you somewhere in between?
  Which of the following best describes your political discussion behavior?
  – I'll talk about politics with someone, but only if I know their
    political views ahead of time
  – I'll only talk about politics with someone if I know we have the same
    political views
  – I'll only talk about politics with someone if I know we have different
    political views

Stage: Discussion
Outcome Concepts: Discomfort
Measurements – Proximal Behaviors:
  Increased heart rate and increased electrodermal activity
Measurements – General Tendencies:
  How do you feel when someone disagrees with you on a political issue?
  Select all that apply and indicate the strength of your response on a
  5-point scale from weak to strong. (Angry,
Proximal Behaviors
Most of our studies focused on particular instances where people were
faced with a decision. We tackled these proximal behaviors in moments of
decision in Stage 2 (Decision) and Stage 3 (Discussion), though we do
investigate some proximal behaviors in Stage 4 (Determination) as well.
We assess these decisions in three key studies: the True Counterfactual
Study, vignette experiments, and the lab studies.
We have already referenced the True Counterfactual Study, but the
idea again is that we randomly assigned participants on our CIPI I Survey
to think about a time in which they either engaged in a political discussion
or had the opportunity to discuss politics, but chose to avoid it. We then
asked a series of questions about the situational factors, such as who was
there and whether there was any disagreement, and why they chose to
avoid or engage.
This approach was useful in distinguishing the features that charac-
terize conversations that happen from those that failed to materialize,
but does not give us causal identification over those situational factors.
In an effort to better understand the causal effect of situational factors,
such as disagreement and knowledge, on discussion behaviors in the
4D Framework, we turned to vignette experiments. We describe the
vignette experiments in more detail shortly, but we used them to
construct a moment where a subject had to make a decision about
how a character would behave. After reading a vignette, participants
were typically asked to report how they thought the character would
respond. This included capturing the likelihood of expressing his or her
true opinion to the group, as well as a behavioral choice: deflecting by
not saying anything at all, conforming, censoring, expressing his or her
true opinion, or stating an opinion that was more extreme than what
he or she actually thought. We then asked about the likelihood with
which the character would engage in conversations with these groups
in the future.
Finally, we measure proximal behaviors in our lab studies, in which
subjects were presented with discrete moments of choice and had to decide
what to say. In essence, we hoped to measure
particular instances of decisions instead of generalized behavioral pat-
terns. The behaviors measured vary by study, but they complement the
more concretely measured behaviors in the vignette experiments with
more subtle measures of how much someone’s expressed opinion differed
from their private opinion, or how much they hedged their verbal
responses, for example.
General Tendencies
In addition to measuring behavior at the moment of decision, we also
wanted to measure individuals’ general political discussion preferences
and behaviors during various stages of the 4D Framework. These meas-
ures are used primarily in our analyses of the role of individual differ-
ences, where we seek to find associations between individuals’
dispositions and their reported discussion behavior. We primarily rely
on more traditional survey questions, slightly modified to capture previ-
ously unexplored facets of political discussion. Responses to these ques-
tions lack the causal identification and direct link to situational factors
that our studies on the moments of decision feature. However, studying
the general tendencies helps us to link individual dispositions to political
discussion behavior more concretely.
While perusing Table 3.5, readers might notice that we do not include
general tendency measures for revealed opinion. Some of our pilot work
indicated that social desirability bias likely leads to overestimating the
extent to which individuals express their real opinions in conversations.
Given social norms surrounding honesty and the ability to engage in
political conversations, we anticipated that individuals might be reluctant
to admit that they would not express their true opinions to the group.
This is why we chose to measure proximal behaviors for this stage of the
4D Framework. We do report results from other researchers (e.g. Gibson
and Sutherland 2020) about self-censorship, but do not include any of our
own data for general tendencies of opinion revelation.
The data collection for this book spanned a five-year period between
2013 and 2018, using a wide range of methods and samples. In this
section, we describe the nuts and bolts of how we collected the data for
the measures we described in the previous section. We do not characterize
every study that we present in the book but focus on the studies that make
repeat appearances. Further details will be provided in the chapters
themselves alongside the results and conclusions.
Surveys
Table 3.6 summarizes the details about the main surveys used in this
book. We conducted two nationally representative surveys and one con-
venience sample survey ourselves, and added some questions to the
Thanksgiving Study
  Description: This study was conducted over five waves (cross-sectional)
  on Mechanical Turk. Participants were asked questions about basic
  demographics, their Christmas/Thanksgiving celebrations, individual
  differences, and perceptions of Democrats and Republicans, and were then
  given two stereotyping batteries. The first was a list experiment about
  partisan stereotypes (one six-item list and one seven-item list). Then,
  they were
  Date Collected: Wave 1: November 17; Wave 2: November 24; Wave 3:
  December 1; Wave 4: December 22; Wave 5: December 29 (all waves in 2014)
  Number of Observations: 300 in each wave
(continued)
practical standpoint, the equipment used to measure heart rate and
electrodermal activity was not so obtrusive as to prevent subjects from
conversing with one another. We worried that connecting subjects with
electrodes that measured their facial muscles (electromyography) or their
respiratory rates (using equipment placed around their rib cage) would be
too cumbersome in a study in which people interact.
One important feature of psychophysiological response is that it can be
used as both an independent variable and a dependent variable. That is,
some researchers are interested in how a certain stimulus, such as expos-
ure to incivility, makes individuals respond physiologically, whereas
others are interested in how individuals think or behave politically once
they are in a particular physiological state (Stern et al. 2001). Because
we conceive of the 4D Framework as a cycle, we think of
psychophysiological data as playing both roles. We are primarily
interested in measuring how individuals respond psychophysiologically to
various political discussion stimuli, such as anticipating a disagreeable
conversation or actually participating in a discussion. Here, psycho-
physiological data helps us better characterize how people actually
experience political discussion and helps us better measure the “discom-
fort” that previous researchers have alluded to, but not tested, without
relying solely on self-report measures of emotional experiences. However,
we also expect that the way in which individuals respond psychophysio-
logically to political discussions likely informs their future discussion
behavior. Individuals who experience increased heart rates and sweaty
palms when discussing politics with outpartisans might prefer to avoid
those experiences in the future. We make this point in one of our previous
papers where we find that those who are more psychophysiologically
reactive to anticipating political discussions are more likely to form
copartisan discussion networks (Carlson, McClean, and Settle 2019).
We collected psychophysiological data in two experiments. Both stud-
ies were conducted using a sample of undergraduate students recruited
from political science courses. Most students participated in exchange for
course credit, but some students who had participated in previous studies
were invited back to the lab and paid for their participation. We worked
with an outstanding team of undergraduate research assistants to proctor
the studies.6 The results from the Psychophysiological Anticipation Study
and the Psychophysiological Experience Study are discussed primarily in
Chapter 6 with some additional results presented in Chapter 7. We
describe each study in more detail in the next section, but summarize
the details in Table 3.7.
Name: Psychophysiological Anticipation Study
Description: Physiological study in which student participants viewed
videos showing both political and apolitical contention and then were
given the discussion stimulus in which participants were told that they
would be asked to discuss politics with another
Date Collected: Fall 2014, September 2015
Number of Observations: 205
Survey Experiments
Our approach in this book also relies on a handful of survey experiments.
We will remind readers of the basics of the studies in the chapters in which
we discuss the results, but as a preview, they are the Names as Cues
Studies and Stereotypes Anchoring Experiment in Chapter 4; the True
Counterfactual Study and Name Your Price Studies in Chapter 5; and
Vignette Experiments in Chapters 5, 7, and 8. Because the vignette experi-
ments are used across several chapters, we focus some attention on
describing them here.
One of the core components of our empirical analysis in this book
comes from vignette experiments. Vignette experiments present partici-
pants with a short description of a scenario, randomizing components of
it, such as the people, location, or actions. Researchers ask participants to
either imagine that they are in the scenario described and to report how
they would think or act, or ask participants to report how they think the
character(s) in the scenario would think or act. These studies are useful
for exploring how a variety of scenarios that might be too complicated or
unethical to observe in the real world affect attitudes and behavior. In our
case, we wrote vignettes that described a variety of political discussion
experiences, randomizing the relationships between the characters, the
levels of disagreement and political knowledge, the power dynamics, and
the social context. It was infeasible for us to conduct in-person lab experi-
ments that manipulated these characteristics, so we chose to sacrifice
some external validity in exchange for more precise control over the
features of political discussion scenarios that are central to our theory.
From 2014 to 2016, we conducted five pilot vignette experiments on
Mechanical Turk and on a student sample before launching our principal
vignette experiment on the CIPI I Survey, as summarized in Table 3.8.
The experiment we conducted on the CIPI I Survey was heavily influenced
by our findings in the vignette experiments that preceded it. We describe
the convenience sample studies here to give readers a sense for the
magnitude of the pilot work we did to inform the experiment we fielded
on the CIPI I Survey. However, we focus most of our analysis in this book
on the large vignette experiment we conducted on the nationally repre-
sentative sample because we have a full set of covariates measured,
including our measures of individual differences.
In most cases, participants were presented with a vignette in which a
character (Sarah for self-identified female participants, John for self-
identified male participants) was described engaging in some kind of
We prioritized breadth and methodological pluralism in this research
project, but we hope not at the expense of conceptual clarity about how
and why we measure the inputs and outputs of the 4D Framework in the
way that we do. With methodological pluralism comes the potential for
methodological confusion, and this chapter orients readers to our
approach in this book in order to contextualize the results that we present
in the empirical chapters that follow.
Detection
As he and his wife Katie walked over to their neighbors’ house, Joe took
note of the trees lining the streets in his new neighborhood. The suburbs
were a nice change of pace from the bustle of the city. Two weeks earlier,
Joe had unpacked the last of the boxes from the moving truck into their new
home. Moving always had been stressful for Joe: New places, new people,
and new norms all took some adjustment. Thankfully, many of the neigh-
bors had been quick to introduce themselves, and the Smiths had even
invited them to a neighborhood Memorial Day backyard party.
Half a block away, Joe could already hear the cacophony of laughter and
music coming from the Smiths’ backyard. He opened the gate and was hit
with the smell of barbecue. They spotted the Smiths and quickly made their
way over to thank them for the invitation, though the hosts were just as
soon swept away to talk with other guests. Katie turned and started chatting
with a woman about her age who looked vaguely familiar, and Joe scanned
the crowd, looking for a conversation into which he could integrate himself.
To his left, he saw two middle-aged men – older than him but younger than
his dad – whom he thought he’d seen last week, sitting in one of their
garages watching sports and drinking beer, alternately tinkering under the
hood of a giant Ford truck parked in the driveway. To his right was a
woman in her mid-60s, with a canvas tote bag bearing the NPR logo slung
over her shoulder. Her short hair, accented by the dangling beaded earrings
she wore, sent off a decidedly “earth mother” vibe. The younger man she
was talking to gave off a similar vibe, with shoulder-length, untamed hair
and a reusable plate he had clearly brought from home filled with vegetables
instead of barbecue.
As always, Joe wanted to avoid any talk about politics or the news, even
though that felt inescapable these days. If he couldn’t completely avoid
political conversation, he at least would prefer not to get into a disagreement
with one of his brand-new neighbors. Which pair of people does he approach?
Red MAGA hats, pink pussy hats, and black armbands in the United
States. Yellow vests in France. Green handkerchiefs in Argentina.1 From
time to time, people don apparel that sends a clear signal about the
content of their political views.
However, the vast majority of people do not regularly wear attire
advocating for a political party, issue, or candidate. When Americans
gather around the Thanksgiving dinner table, their place cards are not
labeled with their policy preference on the most salient, contentious issue
of the day. Americans do not (typically) walk around with “Democrat” or
“Republican” stamped on their hands. And when individuals make a new
acquaintance, they will most certainly share their names, and perhaps
what they do for a living or where they are from. But it would be
considered rude in most circumstances to introduce themselves as
“Jaime, a Democrat.”
Part of our endeavor in introducing the 4D Framework is to think
about all of the micro-decisions entailed in engaging in a political discus-
sion. Our broad approach is to put psychological and social consider-
ations at the forefront of the decision-making process, and the primary
goal in this chapter is to think about how people go about the decision to
engage in the first place. If people have preferences over which political
discussions to pursue, they may try to get a read on what the discussion
will be like, and thus they must be able to detect something about the
likely political views of their discussants. In the absence of explicit visible
signals of others’ political orientations, how is it that people come to
recognize the political views of potential discussants before politics even
emerges as a topic of conversation? The answer to this question matters
for the systematic understanding of the political discussions that ultim-
ately come to fruition.
Embedded in the previous work on political discussion are unspoken
assumptions about how people make choices over which conversations to
encourage and which to stifle. In particular, the literature seems to take as
given that individuals prefer to discuss politics with those who agree with
them and that they are able to confidently infer this (dis)agreement. In one
sense, the process underpinning these assumptions is less critical in previ-
ous work because the political discussion literature largely focuses on
regular discussants in individuals’ networks. There is no mystery in the
fact that people often know the viewpoints of their family members, and
depending on the context of a friendship, it is also likely that people will
have a sense for a friend’s political orientation before politics comes up
the first time.
Our initial focus in this chapter is on the detection skills that Americans
use preceding their decision to engage in a political discussion. Our
emphasis on detection bridges the gap between two distinct fields of
inquiry: the correlates of partisan and ideological identities, and the
accuracy of person perception as it relates to political identity. Despite
the growth in these areas of the literature separately, previous researchers
have not brought them together to examine how the average American
recognizes the political identity of his or her fellow citizens. What we do
not know is the extent to which people are either aware of these patterns –
and apply them to make inferences about individuals – or are able to
detect subtle signals about a person’s political views.
On the one hand, the average American may not be particularly
attuned to detecting others’ political views. Most Americans are not all
that interested in politics. Any inferences Americans draw about others’
political identities are unlikely to be driven by their knowledge of
established empirical regularities in political behavior, and more likely
driven by their personal experiences or by archetypes portrayed in the
media of politically extreme, engaged, and vocal partisans (e.g. Hersh
2020; Krupnikov and Ryan 2022). Mutz (2006) asserts that “[p]eople’s
political views are seldom obvious upon first meeting, and conversations
about politics do not occur with sufficient regularity that people always
know when they are in the company of people who hold cross-cutting
views” (65).
However, there are reasons to believe that Americans either can detect
others’ views or are able to apply aggregate patterns. In the same book,
Mutz (2006) finds that people in the United States are more likely than
residents of other countries to be able to perceive the partisanship of their
non-spouse discussants. As she writes, “[h]igh levels of partisanship—
whether it is favoritism for parties or for particular candidates—make it
easier to select congenial discussion partners. Moreover, the highly sim-
plified two-party system in the United States may make these distinctions
more visible than in countries that have many parties” (p. 53).
Additionally, as Americans become increasingly sorted, socially and
geographically, individuals might actually overestimate patterns in parti-
sanship. Ahler and Sood (2018a, 2018b) have found that people perceive
higher stereotypic associations between partisanship and demographic
traits than exist in reality, for both their ingroup and their outgroup.
However, Carlson and Hill (2021) find that perceptions of how others
voted in the 2016 election are not nearly as biased as the perceptions of
social group composition within the parties. They find that party identifi-
cation, individuals’ explanation of the most important problem facing the
United States, and membership in a racial or ethnic minority group are the
most informative characteristics for accurately guessing how someone
else voted in 2016. Importantly, they also find that the more socially
similar a respondent was to the person whose vote they were trying to
guess, the more accurate their guesses were. Similarly, Deichert (2019)
finds that social cues, such as the clothing one wears, can lead individuals
to consistently and stereotypically assign partisan labels. For example, she
finds that individuals wearing camouflage were consistently identified
as Republicans.
The clearest articulation of detection we have read in the political
science literature is in a formal model presented by MacKuen (1990), a
model we engage with more seriously in the next chapter. Included in his
model is a behavior called signaling, which we think of as the flip side of
detection. He argues that in the absence of certainty about others’ views,
“it makes sense for the individual to use all available information about
the prospects of any particular conversation before making a strategy
choice . . . one may send signals to others in order to increase the
                                        Percent of    Percent of Informative
                                        Responses     Responses
Non-Guessers                                27
Blank or Nonsense                            9
Don't Know How to Guess                      8
Wouldn't Try to Guess                       10
Just By Looking at Them                     20              26
"Gut Level" Impression                       6               8
Visible Demographic Characteristic           6               8
Clothing and Visible Signaling               8              10
The Facts of Life                           18              23
Personality or Trait Characteristics         4               6
Geography                                    3               4
Occupation and Lifestyle                    11              13
Conversational Cues                         34              46
Media Usage                                  4               5
Directly Political Cues                     38              50
Note: Categories were not mutually exclusive, so percentages can sum to more than 100.
Free response data collected on the CIPI II Survey. Left-hand column reflects hand coding of
984 responses; right-hand column reflects hand coding of 734 responses. A response was
considered belonging in a category if at least one coder considered it to belong there.
Our goal here is largely inductive: We did not begin with strong
expectations about what people would say and allowed their responses
to guide our categorization. First, we want to highlight the variety of
strategies people use to recognize the political views of others. Second,
we want to connect their answers back to previous literature about the
way people ascertain the identities of those around them. This reveals
high concordance: Many of the patterns that social scientists have
detected in the aggregate seem to be informative to members of the mass
public, as well. In Chapter 9, we more systematically explore the traits
that correlate with an individual’s “detection system” to assess who is
most likely to try to ascertain the viewpoints of others and whether they
say that information is necessary to them before proceeding with a
political discussion.
the 2014–2017 period that focus on the extent to which individuals use
different kinds of cues to categorize individuals as Democrats
and Republicans.
The Non-Guessers
Have no idea.
I don’t know.
I do not bother.
I wouldn’t even try.
I would not.
not my business.
i would never guess about anyone's political views – it is their business
We begin with the 27 percent of subjects who either refused to answer the
question (leaving it blank or writing a garbled response), said they would
not know how to go about making a guess, or said that they would not
try to guess. A variety of rationales emerge among those who provided a
written response indicating that they were “Non-Guessers.” While some
subjects indicated that they would not know how to guess, others seemed
to indicate that they would choose not to, even if they had an idea about
how they could. Finally, a small portion of subjects seemed offended at
the premise of the question, indicating that to try and guess someone else’s
views was inappropriate.
We take these responses seriously. While some of them – particularly
the blank and nonsensical ones – may simply reflect respondents who
did not make an effort to answer a free response question, there is good
reason to believe that a large proportion of the population does not
know how or does not try to figure out the viewpoints of the people with
whom they actually talk about politics, let alone their potential
discussants.
For our purposes, we are not interested in whether people are accurate
in their perceptions of the viewpoints of their potential discussants.
Although Huckfeldt, Johnson, and Sprague (2004) argue that accurate
inferences are necessary for effective communication for instrumental
outcomes, in this book we are not focused on how conversation affects
attitude change or knowledge, for which effective communication is
helpful. Instead, we care about detection because it informs the decisions
that people make in the next step of the political discussion cycle.
Really guess if they are a person of color or not and/or “LOOK“ like a
conservative (asshole)
Their age . . . the clothing they wear, their race.
Gender
How they dress and age.
Facial expresions [sic]
Nose.
Clothing choice, how they carry themselves, facial expressions,
Appearance
what is written on their shirt or hat, any buttons they wear
Size them up.
Sometimes dress can indicate, body language etc.
look(haircut, makeup or not, etc)
cues are only likely to be expressed by the subset of Americans who are
interested and confident enough in their political views to advertise them to
others. The American National Election Study reveals that in the past thirty
years, between 10 and 20 percent of Americans reported that they either
wore a campaign button, put a campaign sticker on their car, or placed a
sign in their window or in front of their house in a given election season
(Makse, Minkoff, and Sokhey 2019).
Many respondents in this category mentioned that they could deduce
political views from a person's clothing, but they did not appear to mean
explicitly political garments like campaign t-shirts or buttons. Although we did
not probe further, there might be reason to suggest that the style in which
a person dresses or the brands they choose to wear are indicative of their
politics. And in our politicized consumer climate, certain brands have
come to be associated with one party more than the other. In conjunction
with the survey firm YouGov, a study conducted by The Guardian found
that Democratic and Republican millennials have distinct sartorial pref-
erences.10 The preference for different clothing brands may be rooted in
the expression of different values: traditionalism for Republicans and
diversity for Democrats. Building on these ideas, Deichert (2019) finds
that individuals perceive men wearing camouflage, western, or formal
business attire to be Republicans, whereas men wearing “hipster” or
“hippie” attire (two distinct fashion styles) were more likely to be viewed
as Democrats.
The third response set in this category is the most ambiguous, composed
of the 8 percent of subjects whose responses indicated something about the
way a person looked but without providing additional detail. These com-
ments cut to the core of the person perception literature in social psych-
ology. Psychologists have explored thoroughly how individuals form first
impressions of others, and because physical appearance has such a strong
influence on first impressions in face-to-face interactions, scholars have also
turned their attention toward how photographs of individuals can affect
first impressions (Vazire and Gosling 2004; Marcus, Machilek, and Schutz
2006; Gosling, Gaddis, and Vazire 2008). These first impressions, even
those based solely on physical appearance, can impact a perceiver’s subse-
quent behavior (Efran 1974; Todorov et al. 2005).
A small subarea of this field explores how people detect ideology based
on facial structure, finding that, at a rate significantly more accurate than
chance, Americans are able to categorize the political affiliations of others
based on simply looking at photographs of them. This work has been
premised on guessing the partisanship of elites – typically, unknown or
past candidates for office. But a study that extends to evaluating the faces of
college seniors based on their yearbook photos (Rule and Ambady 2010)
suggests the findings should generalize to our ability to detect the ideology –
and thus a clue about partisanship and political beliefs – among potential
political discussants. The proposed mechanism of these perceptions links
facial features to personality traits such as dominance (Rule and Ambady
2010; Samochowiec, Wänke, and Fiedler 2010), warmth (Rule and
Ambady 2010), or sex-typicality (Carpinella and Johnson 2013).
All three types of free responses in this category of our data – ascriptive
traits, visual markers, and a general “look” – are signals that can be
detected before people even open their mouths. While these cues may
simply be proxies for other traits that are more informative to people
about others’ views, it is important to note that our respondents named
the cue, not necessarily which views it signified. We turn next to the kind
of information that can be gleaned from other aspects of an individual’s
identity, typically based on at least cursory verbal interaction.
If it were available, I would look at their car and their possessions, if they
recycle, who their friends are
getting them to talk about themselves as to what they do or where they went to
school. Most people tend to give it away by how they answer tghose [sic]
questions
based on what else i know about them - are they gay, do they have a high
paying job, what kind of car do they drive, how they dress.
Asking them what music they listen to.
I would ask about them, their type of job, their values
Either by the things they are doing or purchasing or watching on tv or what
they allow their chikdren [sic] to do
How much do you make?
Based on their educational level
Their hobbies and interests
ask if they smoke cannabis
Their personality
ask them if they are from California or New York
where do you live (neighborhood), do yo ugo [sic] to church, were you in the
military
As the United States has become increasingly polarized and sorted both
politically and socially, liberals have become more similar to other liberals
on nonpolitical dimensions, and conservatives have become more similar
to other conservatives on nonpolitical dimensions. Can these sorted cues
help people infer others’ political leanings? For 23 percent of our respond-
ents, signals related to a person’s traits, lifestyle, or geography were
informative of his or her political views. Approximately 4 percent of
respondents mentioned some facet of geography, a cue that could be
informative about socioeconomic status in several ways. Starting at the
most proximate level, a person’s neighborhood is often a clue about his or
her income and educational levels. At least recently, college-educated
individuals tend to vote for Democrats, while wealthier individuals tend
to vote for Republicans. Scaling up a bit, the characteristics of one’s
community may be informative as a proxy for a person’s lifestyle prefer-
ences. Counties with higher median household incomes are more likely to
vote Republican than counties with lower median household incomes.
Rural Americans are more likely than urban Americans to affiliate with
the Republican Party, even though there is important within-party vari-
ation among rural Americans (Nemerever 2021). Some states have
become so associated with one party or the other that a person has a
better-than-chance rate of guessing successfully just based on state resi-
dence alone.11 Carlson and Hill (2021) find that knowing that a person
was from Washington, DC, was associated with more than a twenty-
point increase in accuracy in guessing how that person voted in 2016.
Many more respondents – 13 percent – indicated that they could guess
another person’s political views based on other details about the individ-
ual’s life trajectory and lifestyle. Much of this information seems to be
rooted in indications of a person’s socioeconomic status; some respond-
ents said that they would directly try to ascertain a person’s income level
(or type of job) or educational level (or educational pedigree). This
category also included the values and worldview clues mentioned by
our respondents. Religiosity should be a useful cue on both sides of the
spectrum. A Pew Research Center survey indicated that 63 percent of the
religiously unaffiliated identify with or lean toward the Democrats. In
2008, religiously unaffiliated voters voted for Obama at similar rates as
White evangelical Protestants did for McCain.12
Beyond these factors, other responses indicated that a person’s interests
and hobbies could be informative. A long vein of research in sociology
explores the notion of “lifestyle enclaves,” where people’s communities
and networks share their tastes for leisure, consumption, and ways of
Table: Socioeconomic and non-visible demographic traits and inferred partisanship

                                    Democrat  Ind. Lean Dem.  Independent  Ind. Lean Rep.  Republican
Were gay or lesbian (D)                   53              12           27               3           5
Liked hip-hop (D)                         37              19           34               6           4
Did not believe in God (D)                36              12           38               6           8
Preferred cultural fusion food (D)        33              17           36               7           7
Volunteered in the community              27              14           34               9          17
Had a college degree                      24              12           31              10          22
Preferred cats to dogs (D)                23              13           46               8          10
Never attended college                    19              13           37              11          21
Conversational Cues
I'd ask them questions about something in fiction that's relatable to something in
reality. They're usually more willing to answer about something related to pop-
culture. Depending on how they feel about it I can gauge their general position.
Discussing something of cultural importance and gauging their reaction.
I would perhaps ask them other questions about their life that might give me
an indication as to their political views.
At my age general conversation usually results in people letting me know
where they stand politically
Listening to their discussions with other people.
I would observe their way of talking. Liberals and conservatives leave certain
hints.
THE WAY THERE [sic] TALKING
manner of speech
Listen to their comments closely.
The next category of cues included answers about both what someone
says and how they say it, what might be thought of as political
shibboleths. Nearly half of respondents seemed to indicate that
listening to a person talk about things that are not related to politics can
be informative about their political leanings.
Is there evidence that those on the left and those on the right talk
differently? Certainly about policy itself, individuals might make some
predictions about who says what. If Democrats and Republicans have
different policy priorities, then they might be more inclined to talk about
different societal problems. Individuals could also reflect the frames, catch-
phrases, and buzz words used by political elites on their side of the aisle,
given that the mass public can be led by the messaging of elites (Zaller
1992). But moving beyond known political differences in the substance of
what is said, what sorts of clues might be present in nonpolitical speech?
The free response answers did not provide too many clues, but we can
speculate. Speech patterns that correlate with other informative traits –
such as regional accents, the use of religious language, or the sophistica-
tion of one’s vocabulary – could signal identities that are likely to align
with someone’s political views. Based on the patterns of cultural differ-
ences we outlined previously, it is possible that on average, people per-
ceive liberals and conservatives to talk about different things; in a
conversation about last night’s television programming, discussing a love
Media Usage
many of our subjects indicated that they would, too. In today’s fractured
media environment, news source preferences correlate strongly with par-
tisanship. A 2014 Pew Research Center report suggests that 47% of
consistent conservatives prefer Fox News, whereas consistent liberals
prefer a combination of CNN (15%), MSNBC (12%), NPR (13%), and
the New York Times (10%).16
There are signs that the selection of a news source has become politi-
cized, in large part because much of the public perceives bias in the news.
For example, Figure 6.1 in Settle (2018) shows that most people think
that the majority of thirty-six news sources assessed had an ideological
bias, and conservatives were more likely than liberals to ascribe bias to
news sources. Even attitudes about the media have polarized over time.
Gallup reports that in 2018, only 21% of Republicans had a great deal
or fair amount of trust in the media, while 76% of Democrats did. Thus,
even hearing how someone talks about the news or media may be a signal
of their partisan or ideological views.
Finally, in the internet age, social media behavior might be another
informative cue. Settle (2018) presents evidence that individuals can infer
political leanings of their Facebook friends based on the political and
apolitical content that they post. However, only about 5% of our respond-
ents indicated that they would use social media cues to infer others’
political views, making it less popular than the other subtle cues described.
Trump. Others replied that they would share some of their own views and
see what the other person said. Beyond serving as a case study of free
response noncompliance, we find these answers telling. If we take people
at their word – and we think at least a subset of these respondents were
answering in good faith – this suggests that some people’s “detection
strategy” is to be blunt and initiate a political conversation, a fact that
has interesting implications in light of the results we present in the
remaining chapters of this book about the extent to which many people
have an aversive reaction to such tactics.
Table: Actual and perceived political leanings of phonetically ideological names

Conservative Phonemes
Kent      40    60    63,988    4.78    4.72    4.89
Dirk      43    57    11,416    4.27    4.17    4.37
Kurt      42    58    84,893    3.99    3.81    4.23
Tucker    45    55     5,731    4.95    5.01    4.99

Note: Columns 1–3 reflect data from Clarity Campaign Labs, as of 2014. Columns 4–6 show the average ideology ratings for each name given by 508 Mechanical Turk study participants in February of 2015. Ideology was measured on a 7-point scale, ranging from 1 (extremely liberal) to 7 (extremely conservative).
political attitudes that people might hold on polarizing issues for each
party.26 We also include information-processing stereotypes as well as
three neutral or positive statements about each party. Subjects were
randomly assigned to assess the stereotypes for three target groups in
one of two orders: (1) candidates, voters, and known partisans; or (2)
known partisans, voters, and candidates. The full text of each stereotype
can be found in the Appendix.
Figure 4.1 shows the top-line results for the proportion of subjects who
agree with stereotypes about each target group of the outparty. The first
thing to note is the high rate of agreement. While the study design does
not eliminate the possibility that we are detecting expressive partisanship,
it does allow us to evaluate people’s assessments of outpartisans they
personally know, either unanchored or anchored by their evaluations of
more distant targets. As shown in Figure 4.1, greater proportions of
people agree with the characterizations about candidates, but they also
agree with the stereotypes for voters and known partisans. In some
instances, the proportions are statistically indistinguishable (analysis
shown in the Appendix). Most of the differences are not statistically
significant, but when they are, subjects who evaluated known partisans
first indicated more agreement with the statements than those who
anchored on candidates and voters. This suggests that people reduce their
agreement with stereotypes of known outpartisans only when comparing
them to outparty elites. This finding is consistent with the argument that
affective polarization, as often measured by feeling thermometers toward
“Republicans” or “Democrats,” is biased by individuals thinking about
political elites or extreme, politically engaged outpartisans (Druckman
et al. 2021). However, the fact that we observe only small differences on
most measures when varying the order suggests that this might not be as
big a measurement problem as some argue, depending on the stereotypes
or negative traits measured.
People are not making strong distinctions between the elites and the
non-elites in a party when conjuring up negative evaluations. While we
cannot rule out “partisan cheerleading,” we can at least state that people
are willing to cheerlead at the expense of evaluations of their friends and
family. Thus, not only do these attitudes matter for vote choice, but we
think they matter for interactions between members of the public. The
most partisan are the most likely to hold these attitudes, and they are also
the most likely to engage in political discussion. This means that while
stereotypes may not dissuade those who hold them from talking about
politics, they likely affect how people choose to communicate.
what implication for democratic discourse (p. 84). Yet thirty years later,
while scholars have assumed that people signal and detect, there are few
systematic tests of whether they do, how they do it, and whether people
are any good at reading others’ signals. We began this chapter with a mystery:
How do people figure out the political views of potential discussants
before a conversation begins?
We took a first step in demonstrating some of the ways in which
individuals infer the political leanings of others in advance of a political
discussion. While about 27 percent of our subjects said they did not know
how or would not guess another’s views, the approximately 73 percent
who did provide an answer gave a wide-ranging set of responses, includ-
ing how others speak, dress, make a living, and otherwise engage in their
everyday lives. We also presented experimental evidence that individuals
draw consistent inferences about others’ political views based on their
first names. Finally, we introduced a host of stereotypes that individuals
hold about outpartisans. There is relatively high agreement with these
negative stereotypes, consistent with previous research and current ideas
about affective polarization in the United States. Moreover, individuals
hold these stereotypes about outpartisans they know personally, in add-
ition to outpartisan voters and outpartisan candidates.
We make no claim that this analysis is exhaustive. For instance, our
coding scheme should not be interpreted as assessing the proportion of
people who regularly or ever use each cue we explored; different research
designs would be needed to answer that question. In fact, this chapter
raises more questions than it answers. How do people learn these associ-
ations? How do they reconcile signals that may compete with one
another? What about other cues that we were not able to explore in this
chapter? Future research can explore an abundance of remaining ques-
tions about how people uncover others’ political views.
Our goal was to show that people are able to make some inferences
about potential discussion partners, so that at the moment at which
they make a decision about whether to engage in a discussion, they are
not arriving with a blank slate. Not only do they carry with them the
effects of their own predispositions, but they carry with them their
expectations about potential discussants. Many people have assump-
tions about the kind of people their political opponents are and endow
them with traits that make fruitful conversations about political differ-
ence less likely.
The process of perception also matters because people make assump-
tions about what others think about them. The notion of reflected
Decision
When we last left Joe in the beginning of Chapter 4, he’d just arrived at a
neighborhood party with his wife, who had promptly walked off to go
mingle. Joe clutched his beer. In the time he’d been figuring out which
group to join at the barbeque, the duo with an earthy vibe had moved
closer to the grill and buffet table to refill their plates. That left the pair
of middle-aged men. Joe thought back to this week’s episode of his
favorite sports podcast, racking his brain for some of the factoids
discussed on the show about the MLB season. He made his way over
to the men.
They introduced themselves as Ken, from “the red brick house down the
road,” and Jack, the “one with the big truck in the driveway.” They talked
amicably for a bit about Joe’s job in the city, the adjustment to the neigh-
borhood, and the unseasonably warm weather, until finally the conversa-
tion turned to the holiday. “I think it’s great that we still have a day to
remember the true patriots of this country. Nothing has been easy for our
boys in blue lately,” Jack said. Ken nodded in agreement, mentioning how
little respect it seemed like most people had these days for police officers.
Jack then launched into a long-winded story that had something to do with
an altercation he’d observed last time he was in the city between the police
and some teenagers.
While Jack talked, Joe’s mind started racing. Why hadn’t he remembered
the Blue Lives Matter bumper sticker on Jack’s truck? He really didn’t like
where this conversation was heading and wanted to extricate himself from
the situation. But at the same time, he didn’t want to be rude and he
couldn’t afford to alienate these new neighbors and potential friends. If he
was going to excuse himself, it would be best to do so quickly, before the
political conversation really took off.
What does Joe do? Does he catch Katie’s eye, creating an excuse to wrap up
the conversation and walk away? Does he stay put, trying to navigate the
conversation toward less contentious topics but otherwise keeping his
mouth shut? Or does he stick it out, actively but carefully participating in
the conversation he would’ve preferred to avoid?
ignoring what someone just said typically violates social norms – but this
response is available in many other situations.4 Finally, the person could
respond with a pertinent comment, commencing their involvement in a
political discussion they do not want to have. That comment, however,
might include conversational defense mechanisms, such as self-censorship
or conformity to soften any potential disagreement. We explore these
defense mechanisms in Chapters 6 and 7, but focus here on the crucial
decision about whether to engage at all.
What has been studied previously about this moment of decision? The
most thorough theoretical treatment comes from MacKuen’s (1990)
formal models of the decision to talk about politics. At the core of the
models is an argument about the cost-benefit trade-off of political talk,
where costs and benefits are calculated according to the anticipated
agreeableness of the conversation and a person’s tolerance for exposure
to oppositional views. Since its publication, dozens of scholars have cited
the models for their implication that disagreeable discussion should be
extinguished in many contexts. The model set the stage for the literature
in the 1990s and 2000s that sought to assess the extent to which disagree-
ment persists in discussion networks despite predictions otherwise (e.g.
Huckfeldt, Johnson, and Sprague 2004).
However, as far as we can tell, subsequent scholarship has not tested
rigorously the core assumptions, parameters, and strategies from the
models. In the Appendix, we walk through the similarities and differences
between MacKuen’s models and the way we conceptualize the decision to
talk, but we note the key similarities and differences here. In terms of
similarities, MacKuen’s model notes the importance of viewpoint detec-
tion, although he flips the concept to focus on the way that people might
intentionally signal their political viewpoints to others to facilitate the
decision about starting a conversation. MacKuen also incorporates the
notion of variation between individuals in their tolerance for different
types of conversations, what he calls the “expressivity criterion.” We also
think that individual differences between people affect their propensity to
decide to talk (and how to respond), and we assess a number of charac-
teristics underpinning that notion in Chapter 9.
Our studies deviate from his model in several important ways, based
on the empirical evidence that has been accumulated about political
discussion in the thirty years since he developed the model, as well as
our own findings. First, MacKuen’s model is written for dyadic conversa-
tions, whereas we explore multi-person conversations as well because of
the importance of group dynamics in behavioral decision-making (e.g.
Asch 1956; Mintz and Wayne 2016). Second, his model specifies only two
strategies (what he calls “talk” and “clam”) whereas we explore a wide
range of behavioral responses once someone decides to talk (a point to
which we return in Chapter 7). Third, his model assumes that a person
decides to talk when they anticipate more pleasure than pain from the
conversation, and otherwise stays silent. He acknowledges both that there
may be situations in which a person cannot be silent, as well as that some
people may develop more stable strategies (i.e. to always talk or always
stay silent) instead of actively monitoring the environment and potential
discussants, but these factors remain outside the parameters of the model.
We engage these factors directly, given the broad patterns we described in
Chapter 1 that people seem to talk about politics more than they desire,
and that there are some people who never talk about politics.
Although subsequent research has deemphasized the choice stage of
political discussion in order to focus on the consequences of those
decisions, there has been wide recognition that shared attitudes
between discussants are an important factor influencing the likelihood
of political discussion. Incontrovertibly, when Americans talk about
politics, they are more likely to do so with like-minded discussion
partners (Huckfeldt and Sprague 1995; Mutz 2002b; Huckfeldt,
Johnson, and Sprague 2004; Mutz 2006; Gerber et al. 2012;
Klofstad, Sokhey, and McClurg 2013; Minozzi et al. 2020).
Americans appear to self-select like-minded discussants at a higher rate
than do the residents of other countries (Mutz 2006). Whatever lack of
consensus remains about the amount of disagreement in political talk is
largely due to discrepancies in measurement (Klofstad, Sokhey, and
McClurg 2013). Yet, while social scientists have ample descriptive data
on the patterns of homophily within networks and the (in)frequency of
political discussion with those who disagree, we do not have a full
understanding of why this occurs.
Scholars have put forth three main theories that could account for
this homophily: discussant availability, discussant selection, and social
influence. First, homophily could be driven by homogeneity in the
available supply of political discussants, particularly as the United
States becomes more geographically sorted. Second, political discussion
network homogeneity could be attributed to active choices made by the
discussants. This could be based on political views, coined political
selection (Bello and Rolfe 2014), or purposive selection (Minozzi et al.
2020); or it could be due to incidental selection, where individuals
choose their political discussants based on apolitical characteristics that
                                            Avoided   Engaged
Everyone [would have] disagreed with me          14         7
Most [would have] disagreed with me              21        11
About half [would have] disagreed with me        43        41
Most [would have] agreed with me                 18        28
Everyone [would have] agreed with me              4        13

Note: Data collected in CIPI I Survey. Avoided: N = 1,490; Engaged: N = 1,515.
situation. We first wanted to get a sense for how many people were
involved in each situation and how much disagreement the participants
sensed (or experienced). It turns out that discussions that were avoided
had significantly more people involved than discussions that were pur-
sued. For instance, 23% of the discussions that occurred were one-on-one
discussions, whereas only 13% of the discussions that were avoided were
one-on-one.
It should be clear by this point in the book – and the extant political
discussion literature – that anticipated political disagreement is an
important decision criterion. Our evidence here further supports this
point. We asked participants to report how many people would have
agreed with them about politics had they participated in the discussion
(or how many people agreed with them about politics in the discussion
that occurred). Perhaps to no surprise, discussions that were avoided
had significantly more anticipated disagreement than the discussions in
which respondents participated. As shown in Table 5.1, only about
4% of discussions that were avoided included complete agreement,
whereas 13% of discussions that occurred included complete agree-
ment. This, once again, highlights the importance of detection:
Individuals sniff out disagreement before a conversation fully begins,
often in order to avoid it.
We also asked participants to list the topics that they [would have]
discussed. There was not substantial variation in topics between the
discussions that occurred and those that did not. Similarly, when
we asked participants to describe their relationships with each person
who was in the discussion, the same broad patterns held across the two
prompts: Most conversations were among close ties, such as family and
friends. However, we did note that there was somewhat more disagree-
ment among acquaintances and coworkers, which is consistent with
previous research (Mutz and Mondak 1998).
Table: Motivation for engagement or avoidance, coded free response answers
At my aunt’s house the other day, they started discussing current politics.
That part of the family is extremely conservative, whereas I am extremely
liberal. I did not participate in the discussion at all. I am not comfortable
with arguing or debating with people, so mostly just listened to them say
things that I did not agree with and that were flat out wrong and waited for
them to switch to a much more neutral topic.
Thanksgiving Study participant
Sarah is at a small neighborhood party with some of her friends and acquaintances
and everyone is enjoying some snacks and good company. As Sarah mingles
through the party, she steps into a conversation with a group of people. As she
listens to the conversation, she realizes it is about the upcoming election. It quickly
becomes clear to her that they have very different political views from hers, as they
discuss their support for the candidate Sarah opposes. They sound highly know-
ledgeable and well-informed. It sounds to Sarah like they have been following the
news and campaign a lot more closely than Sarah has. As the conversation
continues, the person who seems the most knowledgeable turns to Sarah and asks
about her thoughts on the candidates.
Participants were asked to report how they thought John or Sarah would
respond, with the options covering deflection (changing the subject or
saying nothing),8 conformity (saying what the group believes, even if it is
against what John/Sarah believes), censorship (moderating what John/
Sarah believes in the direction of the group), true expression (stating what
John/Sarah truly believes), and entrenchment (stating an opinion that is
stronger or more extreme than what John/Sarah truly believes, but in
disagreement with the group).
We will return to the bulk of the analysis of these studies in Chapter 7,
when we consider how the psychological experience of discussion affects
what opinions individuals choose to express during the conversation
itself. Here, we focus narrowly on just one of the response options: the
choice respondents had to say they would derail a conversation by being
silent or changing the subject, compared to the choice to engage in a
conversation in one of four ways. We dichotomize the choice to get a
sense for which contextual factors are associated with higher rates of
deflection.
In sum, we evaluated dozens of different conversation permutations.
Across all the different contexts, approximately 20 percent of the subjects
thought that the character would deflect the conversation in some way.
We do not make a claim that aggregating across these studies captures the
actual proportion of conversations that are deflected or silenced. That
statistic would be impossible to systematically evaluate given the thou-
sands of conversational permutations that exist in the real world. But this
finding does give credence to the idea that the discussions in which people
fully participate are only a subset of the conversations that could occur.
We present in the Appendix more detailed findings about the specific
results from the full set of vignette studies but provide an overview here
about the effects of our experimental manipulations of features of the
situation. Our most consistent findings pertained to knowledge differen-
tials and tie strength. We found consistent evidence across three studies
The Discussions That Were Costly: The Name Your Price Study
I hate getting into conversations about politics.
True Counterfactual Study participant
We covered a lot of ground in this chapter as we unpacked the Decision
stage of the 4D Framework. In Table 5.3, we provide an overview of our
findings. What are the key patterns? Conversation is most likely to emerge
among discussants who agree with one another, especially if they share a
political identity. Not only were a greater proportion of the avoided
conversations disagreeable in the True Counterfactual Study, but we also
discovered a higher rate of conversational deflection in disagreeable con-
versations in the vignette studies. In the Name Your Price studies, subjects
demanded more money to have disagreeable conversations.
Perceived knowledge differentials between discussants also affect
which discussions emerge. Not only does the concern about being nega-
tively evaluated lead people to avoid discussion, but people are more
likely to deflect the conversation when they perceive that they are at a
knowledge disadvantage. In general, participants tend to demand more to
discuss politics with those who are perceived as more knowledgeable.
However, among more knowledgeable samples, the opposite is true.
Largely driven by a desire to avoid feeling annoyed or making the other
person uncomfortable, more knowledgeable respondents demand more to
talk politics with their less knowledgeable peers.
Some readers might find this chapter redundant, telling us something
prior research had already demonstrated. We
believe that we make four main contributions.
First, the results from the True Counterfactual Study suggest that
political conversation happens more frequently in small groups than in
two-person interactions. Yet, one of the most common methods to date of
soliciting and analyzing information about people’s political discussion
experience is to ask them about their discussion partners. The mismatch
between the kinds of conversational dynamics to which people are
exposed, and the way we measure that exposure, suggests that our focus
on dyadic disagreement (however measured) or the overall composition
Tie Strength
  Conversations less likely to occur between weaker social ties.
  Silencing more likely when character is interacting with weak ties.
Context
  Conversations less likely to occur in larger groups, but the vast
  majority of conversations are in small groups, not dyads.
  No evidence that there is more silencing in the workplace compared to
  social situations.
Topic
  Topic does not seem to be systematically related to the emergence of
  conversation.
  Even nonpolitical topics are costlier when discussed with disagreeable
  others.
Note: Cells describe the results of the study pertaining to the relationship between the
listed factor (e.g. disagreement) and the emergence of conversation. Cells that are left
blank represent relationships that were not tested in a given study.
128 Decision: To Talk or Not to Talk? (Stage 2)
out but to suffer through it. In Chapter 7, we will return to this idea and
describe some of the ways in which individuals “grin and bear it,” such as
censoring the opinions they share with others or fully conforming to the
group’s opinions to avoid disagreement (Suhay 2015; Carlson and Settle
2016; Levitan and Visser 2016).
Finally, we have evidence to suggest that people prefer discussions with
people of a similar knowledge level to themselves. In other words, in
addition to homophily in political views, we might also expect homophily
in knowledge levels in political discussions. We elaborate more on this in
the concluding chapter to the book, where we consider the implications
for the health of a democracy if people self-sort by knowledge level into
political conversations with agreeable others. However, this is a direct
challenge to “opinion leadership” theories of democracy that assume
people want to seek out others who know more about politics as a
shortcut to help them make informed vote choices.
In the next chapter, we turn to focus on what people feel on the cusp
of and during a political conversation, using data from two laboratory
studies where we collected psychophysiologically informative data. The
constraints of a lab experiment prevented us from exploring the full
range of contextual and configural factors that we explored in this
chapter. We narrow our focus to the two factors that seemed to matter
most: the presence of disagreement, and the relative knowledge level
of discussants.
Discussion
Joe stares at his plate, mixing together his mashed potatoes and the gravy
from the roast. He takes another sip of wine, trying to relax. He’d been
dreading this dinner all week. He’d paid enough attention to the news cycle
in recent days to know that his in-laws, Frank and Susan, were going to be
fired up and ready to pick a fight. Katie fell far from the metaphorical tree,
and her politics were about as different as could be from her parents’
opinions. It didn’t help that in recent years they’d come to take the incessant
chatter of cable news pundits as the absolutely infallible truth. Part of what
Joe loved about his wife was her willingness to stand up for her viewpoints
and counter her parents’ sound bites with facts. He always nodded in
encouragement when she spoke and frequently put his arm around the back
of her chair to try and signal that they were united in their beliefs. But he
still felt a pit in his stomach when he was inevitably put on the spot to share
his opinion.
At any moment, his father-in-law would pull his classic move, turning
toward Joe and intimating in a vaguely patronizing way that Joe shared
his opinion, before looking him in the eye and asking directly. Joe felt his
father-in-law turn in his direction. “Talk some sense into that daughter of
mine, Joe. You’re the man of the house,” he said, chuckling as his
comment made his daughter roll her eyes. “Surely you agree with us,
don’t you?”
Suppose Joe has weighed the pros and cons of engaging in the discus-
sion and chooses to reply to his father-in-law with something relevant
instead of staying silent or deflecting the conversation. What happens
next? Political science research examining the actual dynamics of political
discussion is scant. Most of what we know is learned from asking people
to characterize their conversations or discussion partners, and then study-
ing associations between these features, such as the amount of disagree-
ment or the gender composition, and what happens afterward. Do
people’s attitudes change? Did individuals become more participatory?
Did participants become more tolerant of other views? With the exception
of some research on formal deliberation (e.g. Mendelberg and Karpowitz
2014), qualitative focus groups (Conover, Searing, and Crewe 2002;
Conover and Searing 2005), and limited ethnographic research on large,
regular discussion groups (Cramer 2004) or social groups (Eliasoph
1998), political scientists do not know much about what people actually
say or how they act, let alone what motivates those choices.
In this chapter and in Chapter 7, we aim to “open the black box” on
political talk. We assess the psychophysiological, psychological, and
behavioral dynamics of actively engaging in a political discussion. We
begin by exploring what political discussion feels like. Our goal here is to
characterize what people experience as they anticipate and participate in a
political discussion. We focus on how situational factors, such as dis-
agreement and knowledge asymmetries, affect these experiences. In the
analyses in this chapter, we let participants “speak” to us directly, using
psychophysiological measurement to capture what they feel.
least four times as large as the EDA response to the videos, even though
the videos preceded the discussion stimulus and therefore we might expect
attenuated response to the discussion prompt, due to habituation.
the maximal level of exclusion based on the quality of the subjects’ data.
This tentative result could reflect the subtleties we discussed in the Name
Your Price Study. Individuals might be more comfortable discussing
politics with less knowledgeable outpartisans, which would be detected
as smaller psychophysiological responses here, even if they are more
annoyed while doing so.
Emotional Response
Heart rate and EDA data can give us a sense for the magnitude of an
individual’s psychophysiological experience but cannot provide a defini-
tive answer as to the valence of a person’s emotional response. For
instance, an increase in heart rate could indicate anxiety or stress, but it
could also signal excitement or enthusiasm. In an effort to better under-
stand the valence behind what individuals experience, we complement the
psychophysiological data with self-report measures of participants’ emo-
tional responses.5
After the three-pronged political discussion stimuli, participants were
asked to indicate which emotions they experienced when they learned
they were going to have a political discussion. Overall, 48% of the sample
reported one or more negative emotions, while 37% reported one or more
positive emotions.6 A greater proportion of the subjects in the outpartisan
treatment group experienced negative emotion compared to the coparti-
san group (60% vs. 43%), but the knowledge level treatment did not have
a significant main effect on overall positive or negative emotion.
            Outpartisan  Copartisan  p-value  High Knowledge  Low Knowledge  p-value
Angry          1.41         1.24       0.18        1.30            1.33        0.85
Annoyed        2.06         1.69       0.03        1.76            1.97        0.21
Anxious        3.39         3.00       0.04        3.21            3.05        0.37
Motivated      3.07         2.64       0.02        2.86            2.79        0.71
Happy          2.23         2.26       0.86        2.24            2.26        0.91
Relieved       1.49         1.53       0.79        1.48            1.55        0.56
Note: Data come from the Psychophysiological Anticipation Study. Subjects responded to
the question “[h]ow did the idea of having a political discussion with someone make you
feel? Please select all that apply and indicate the strength of your response.” In a grid-style
question, subjects were asked to respond to each emotion listed in the rows on a five-point
scale (1 = weak, 5 = strong). See Appendix for more details on sample size and measurement.
attended high school in the United States should qualify for financial aid
from the government to attend college. In some analyses, we combine all
four issues together.
On average, we expected that discussions where the subjects had
clashing partisan identities would have higher levels of issue disagree-
ment, and as a result, they would perceive more disagreement. These three
measurements are obviously not an exhaustive list of the way disagree-
ment could be operationalized, but they fit nicely with past operationali-
zations (Klofstad, Sokhey, and McClurg 2013). Moreover, we report in
Carlson and Settle (2016) that the way someone’s opinion is elicited does
not seem to affect people’s level of reported discomfort. Thus, we focused
on the configuration of discussion partners or opinions as opposed to
Partisan Clash
The results from the Name Your Price Study in Chapter 5 suggest that
people were less inclined to talk with outpartisans about both political
Study segments: Identity Revelation; Political Discussion Announcement; Discussion of Topics
Partisan Clash
  Identity Revelation: Subjects whose partisan identities clash will
  experience increased psychophysiological activation when revealing
  their partisan identity, compared to subjects whose identities do not
  clash.
  Political Discussion Announcement: Subjects whose partisan identities
  clash will experience increased psychophysiological activation when
  told they will have a political discussion, compared to subjects whose
  identities do not clash.
  Discussion of Topics: Subjects whose partisan identities clash will
  experience increased psychophysiological activation, compared to
  subjects whose partisan identities do not clash.
Issue
  Identity Revelation: NA
  Political Discussion Announcement: NA
  Discussion of Topics: Subjects whose issue opinions do not
and nonpolitical topics. Thus, as described in the top row of Table 6.2, we
expect that people might have a stronger psychophysiological reaction to
engaging with an outpartisan.
We test this manipulation at three different points during the study.
First, we measure what happens when the discussion partners initially
reveal their partisan identities to each other. Second, we measure what
happens when they are told they will have a political discussion. And
third, we assess whether people in cross-partisan conversations experi-
enced more discomfort or psychophysiological activation during the dis-
cussion topics themselves.
The design of our study guided participants to talk about a nonpolitical
topic (their favorite class) while we collected a baseline psychophysio-
logical reading. They were then instructed to state their partisanship
verbally, responding to the question: “Are you a Republican or a
Democrat? When the screen goes blank, please state your answer out
loud.” In Table 6.2, we term this “Identity Revelation.” We structured
this part of the study to create a clear moment where the subjects revealed
their identity. In our pretest survey, 91 percent of the sample identified
with a party, including 26 percent of the sample identifying as
Independents who leaned toward a party. Thus, most respondents should
have been able to respond to the question with “Democrat” or
“Republican,” but respondents had the freedom to explain their partisan
identities in whatever words they preferred. We acknowledge that this
question would be difficult to answer for Independents, who might have
felt less sure of what to say.
Our operationalization of partisan clash was a “hard test.” We
return to this point in more detail in Chapter 7, but for now note that
in many instances, the partisan clash treatment during the conversation
was weaker than it appeared on paper. We did not pick highly conten-
tious issues for which partisan identity was especially salient. Our
student sample contained very few self-identified Republicans, which
meant we had a limited number of conversations with true partisan
clash. Of the forty-one conversations where subjects’ partisan identities
did not align, only six occurred between a self-identified Republican
and a self-identified Democrat, although that number increases to
twenty-one if we code leaners as partisans.8 Additionally, as we dis-
covered when watching the recordings of the interactions, people were
not particularly forthcoming about their political identities! Many of
our subjects were quite verbose when describing their partisanship,
suggesting that they qualified their identity expression in ways that
Issue Disagreement
While we did not assign subjects to conversations based on the amount of
issue disagreement between the discussants, our hope was that we would
also achieve variation in the extent to which subjects disagreed with one
another on the issues discussed, and as we note in Chapter 3, we did. In
this section, we compare psychophysiological and emotional responses to
conversation segments on topics where the participants agreed, to
conversation segments where they did not, based on their pretest answers.
We outline our expectations in the second row of Table 6.2. A given dyad
could have both agreeable and disagreeable exchanges across the four
topics. In some analyses, we also aggregate the disagreement to get a sense
for how much overall disagreement existed across the four issues based on
their pre-survey answers.
Looking first at psychophysiological response, for each topic, we pool
discussion segments where respondents agreed and compare to the pool
of discussion segments where they did not. Figure 6.5 shows a clear
pattern for both heart rate and EDA, even if not all differences are
statistically significant at the topic level: Disagreeable conversations tend
to elicit higher psychophysiological response. In regression models includ-
ing observations for each discussion topic with clustered standard errors
for each individual, actual issue disagreement was positively and
Perceived Disagreement
The third way to conceptualize our treatment was in terms of the amount
of disagreement that subjects perceived, based on their report in the post-
test survey questions. We describe our expectations for perceived dis-
agreement in the third row of Table 6.2. While the other two measures
are plausibly exogenous to the participants’ discussion preferences and
behaviors, this measure clearly is not: We expect there to be variation in
the extent to which people recognize and perceive a conversation as
“disagreeable.”
While we can’t think of another study that assesses subjects’ percep-
tions of disagreement immediately following a discussion, there are clues
from previous literature suggesting that perception is not a perfect
mirror of reality. Research utilizing name generators to assess discussion
network composition almost always relies exclusively on subjects’ per-
ceptions, and although early studies reported relatively high levels of
perception accuracy (Huckfeldt and Sprague 1987), later work has
questioned the veracity of people’s perceptions because of known cog-
nitive biases such as the false-consensus effect. Moreover, Eveland et al.
(2019) argue that previous measures of others’ political views might
conflate inaccuracy with uncertainty. Carlson, Abrajano, and García
Bedolla (2020) also find variation between ethnoracial groups in will-
ingness to guess the political party identification of those in their discus-
sion networks, with Latinos being most likely to refuse to answer
the question.
Therefore, perhaps it is not surprising that overall we find high rates of
recall, but low rates of accuracy. Table 6.3 provides an overview of recall
and accuracy for our subjects’ reports of their discussion partners’ opin-
ions. We asked subjects, in separate questions, to report their partner’s
                         % DK about  % DK about  % Accurate Recall in  % Accurate Recall in
                         Partner's   Perceived   Agreeable Dyads (N)   Disagreeable Dyads (N)
                         Opinion     Agreement
WM Tuition                   11          2            94 (34)               11 (36)
WM Testing                    8          1            85 (46)               29 (34)
Testing                       8          1            95 (84)               29 (24)
Immigrant Financial Aid      13          2            88 (48)               38 (48)
Note: Data come from the post-test administered to subjects after completing the discussion. Subjects reported their partners’ opinion (agree or disagree)
on each issue, as well as the level of agreement between themselves and their partner (on a six-point scale). Both questions included a “don’t know”
response.
The Psychophysiological Experience of Discussion 149
[Figure: heart rate change over the ISI, in BPM, for dyads with More Agreement versus Less Agreement]
What does political discussion feel like? The first half of our exploration
into the experience of political discussion reveals that it is neither uni-
formly a positive nor a negative experience, but it is a physiologically
activating one. For the vast majority of subjects in our study, political
discussion entailed an increased heart rate, especially so for anticipated or
experienced disagreement. While we found less consistent evidence for
electrodermal activity during discussion, the anticipation of discussion
was four times as activating as simply watching contentious interactions
on television.
The data we have presented so far paint a nuanced picture about
the effect of disagreement, suggesting that different facets of the con-
cept may matter in different ways. For example, disagreement as
captured by partisan clash seems to matter most in anticipation of a
discussion. Building on our findings in Chapter 5 that people prefer
conversations with copartisans, it seems that the anticipation of cross-
partisan conversations is emotionally activating. But the results from
our second lab study suggest that in and of itself, discussion between
cross-partisans may not evoke higher reactivity than it does among
copartisans.
There are several caveats about our studies that suggest more investi-
gation into these processes is merited. First, as we explore more in
Chapter 7, it is not clear that the partisan clash treatment in our second
study was delivered clearly. Many subjects who identified as partisans on
the pre-survey were coy about those identities when asked to state them
out loud. Moreover, how the discussion partners interpreted this ambigu-
ity is also unknown. When asked in the post-test questions to recall their
partners’ partisanship, people were largely able to correctly identify the
partisanship of partisans but overestimated the extent to which self-
identified Independents leaned toward a party; about one-third of
Independents were remembered as partisans.11
Similarly, in an experimental design where subjects were instructed
to talk about a series of issues, without confederates directing the
[further] Discussion
(e.g. Dailey and Palomares 2004) to illustrate that the decision about
whether and how to engage in a political discussion is guided by three
goals: (1) to be accurate; (2) to affiliate with others and maintain
relationships; and (3) to affirm a positive self-concept. To shine light
on the psychological experience of a discussion, we examine the consid-
erations individuals hold when deciding how to behave in a discussion.
We are interested in what is going through people’s heads as they
navigate the conversation and consider how to participate. There are
many ways that political conversations could threaten people’s accur-
acy, affiliation, and affirmation goals; we thus expect individuals to alter
their behavior to reduce those threats. Here we zero in to study the
opinions people choose to express.
In the first half of the chapter, we present survey-based work that
explores a number of questions. First, what are people thinking about
during a political conversation? We report the prevalence of consider-
ations that underpin people’s discussion experiences. Next, what is the
relationship between people’s considerations and their reluctance to
express their true opinion, such as self-censorship? Finally, what is the
effect of knowledge differentials in disagreeable conversations on people’s
considerations and expression?
Are these findings about censorship and conformity merely an artifact
of the research design in our survey experiments? Do people withhold
their opinions when talking about politics, or do they simply expect that
other people do? In the second half of the chapter, we push on the insights
gleaned from our survey work to test some of the key ideas in a laboratory
setting. In the first study, we test whether people actually censor or
conform their viewpoints during political discussion. In the second study,
we build on the findings from Chapter 6 to explore linguistic markers in
the conversations to assess differences between agreeable and disagree-
able conversations. Our lab experiments provide confirmation that people
do in fact deviate from expressing their true opinions when they encoun-
ter disagreement.
The findings in this chapter reveal that the conversations that do
occur are far from the ideal envisioned by deliberative theorists and
practitioners working to heal political tensions in the public. Paired with
the considerations people hold in their mind for the ramifications of a
discussion, a substantial portion of the population may censor or silence
their viewpoints to mitigate the potential for adverse consequences,
especially when they find themselves at a structural disadvantage in
a conversation.
We examined the patterns and relationships between discussion configur-
ation, considerations, and expression using the series of vignette experi-
ments first described in Chapter 3 and analyzed in part in Chapter 5. The
idea was to present respondents with a hypothetical scenario in which a
character was faced with the decision to engage in a discussion with a
group of people who disagreed. We could then measure how our respondents
thought the character would psychologically and behaviorally respond to
the conversation, conditional on various characteristics of the context that
we manipulated experimentally. As we discussed previously, we chose to
use vignette experiments for two reasons. First, we wanted to maximize the
number of conversational features we could manipulate, something very
difficult to do in lab studies. Second, we wanted to use a method that
mitigated social desirability bias. Although it is somewhat ambiguous as to
what the socially desirable response is,1 we anticipated at the outset of this
project that individuals would be reluctant to report that they would not
express their true opinion, since conforming to a group in an individualistic
culture, such as the United States, is generally frowned upon.
In our pilot work on Mechanical Turk, we tested a variety of configur-
ations of discussion contexts as our independent variables and randomly
assigned people to different vignettes to bolster our ability to make causal
claims. We present details on the full vignettes in Chapters 3 and 5, but as
a refresher, we describe a character (same gender as the subject) who finds
him or herself in a political conversation and is asked to share his or her
opinion by one of the other characters. The pattern of findings in the pilot
work led us to narrow the factors we tested on the CIPI I Survey. For
the core analysis in this chapter, we focus on testing differences in the
distribution of knowledge in situations where the character is in a parti-
san minority. In the Appendix, we explore where our pilot work using a
wider range of contextual factors (such as the balance of viewpoints, the
distribution of power, and the strength of social connection) corroborated
the main results we present.
We proceed by presenting the descriptive characterizations of subjects’
responses about considerations and expression before moving to test the
hypotheses outlined in Table 7.1 that assess the effects of the knowledge
treatment. Recall from Chapter 3 that the high-knowledge condition is a
situation in which the other discussants were more knowledgeable than
the character and the low-knowledge condition is a situation in which the
discussants were less knowledgeable than the character.
Considerations
In Chapter 3, we introduced our operationalization of the AAA
Typology: the considerations that people hold in their mind that affect
their behavioral decisions, even if these decisions happen without much
conscious thought and in a matter of seconds. We identified these as
inputs to the 4D Framework that are likely affected by both individuals’
traits and the contextual factors of the conversation.
We constructed a list of both concerns (e.g. considerations that suggest
discussion could lead to a negative outcome, likely making someone less
likely to engage meaningfully) and opportunities (e.g. considerations that
suggest that discussion could lead to a positive outcome, likely making
someone more likely to engage meaningfully) (see Table 3.4). After read-
ing the vignette and providing information for the behavioral dependent
variables, respondents were asked, “[w]hich of the following seem like
plausible considerations for John/Sarah? Check all that apply.”
Deciding how to engage in a political discussion could theoretically
involve weighing both positive and negative considerations, and for half
of our sample that is true: 51% of subjects selected some combination of
concerns and opportunities. Within that group, on average 53% of the
considerations they selected were concerns. Yet not everyone felt con-
flicted: 26% of our respondents selected only concerns and 22% selected
only opportunities.
What types of considerations did respondents think the hypothetical
character would make when deciding how to engage in the discussion? To
results make clear a pattern that is harder to discern in the table. We find
that if respondents selected an accuracy consideration as most important,
they were equally likely to choose an opportunity (51%) or a concern
(49%). If respondents chose an affirmation consideration, they were
slightly more likely to select an opportunity (54%) than a concern
(46%). However, if someone chose an affiliation consideration, they were
nearly three times as likely to select a concern (75%) as an opportunity
(25%). This pattern for affiliation underscores one of the key arguments
of the book: Social considerations matter in structuring our political
discussion preferences, and the people who think of conversations in these
terms are much more likely to be concerned about the consequences than
be excited about the opportunities.
What were the effects of the experimental treatment, manipulating the
level of knowledge of the hypothetical discussants? Broadly, we find that
62% of participants in the high-knowledge condition selected a concern
as the most important consideration the character would make, compared
to 50% of participants in the low-knowledge condition. This difference is
statistically significant. Figure 7.2 provides more nuance, illustrating
variation in the opportunities and concerns selected within our AAA
Typology by treatment group. Figure 7.2 shows theoretically consistent
patterns for the ways in which knowledge asymmetries affect the consid-
erations that are perceived to be most important in navigating a political
discussion. Accuracy and affirmation display a similar pattern:
Participants were more likely to select a concern as most important when
they were at a knowledge disadvantage and an opportunity when they
were advantaged. In contrast, knowledge advantages do not confer the
perception of affiliation opportunities compared to knowledge disadvan-
tages. Interestingly, when the character was more knowledgeable than the
discussants, affiliation concerns were more common.
Expression Behavior
With an understanding of the considerations that run through people’s
minds and the effects of knowledge asymmetries, we turn to assess pat-
terns in expected expression behavior. We did not aim to create an
exhaustive list of behavioral choices in a political discussion and instead
focused on how people chose to verbally express their opinions, if at all,
in conversations. Thus, our categorization does not include behaviors
such as name-calling, cursing, or yelling, nor does it include micro (but
still observable) responses, such as avoiding eye contact, crossing one’s
Likelihood of Expressing True Opinion, by consideration:
Concern that people would judge him/her for his/her knowledge level: 2.91
Concern about expressing an opinion about which s/he is uncertain: 2.94
Concern that these people would judge him/her for his/her opinion: 3.02
Concern that his/her opinion is based on factually inaccurate information: 3.14
Concern that expressing disagreement will make people uncomfortable: 3.15
Concern about defending his/her true opinion: 3.17
Concern that expressing a dissenting opinion will negatively affect his/her chance of getting invited to another neighborhood gathering: 3.25
Concern that expressing his/her opinion will make people uncomfortable: 3.28
Concern that expressing a dissenting opinion will damage the relationship John/Sarah has with these people: 3.29
Concern about public speaking: 3.62
Opportunity to engage more with these people: 3.72
Opportunity to solidify his/her opinion: 3.80
Opportunity to get to know these people on a deeper level: 3.92
Opportunity to express his/her real political opinions: 3.92
Opportunity to persuade these people to change their minds: 3.98
Opportunity to justify his/her opinion: 3.98
Opportunity to discuss important issues with these people: 4.09
Opportunity to do his/her civic duty by discussing politics and exercising free speech: 4.09
Note: Data come from CIPI I Vignette Experiment. Sample sizes for each consideration can
be viewed in Table 7.2.
Using a large experiment on a nationally representative sample, we find
strong evidence that the balance of knowledge between discussants affects
both the considerations individuals think a character holds and the
behavior that they anticipate the character will exhibit. Individuals
appear to be more sensitive to the negative considerations (e.g. concerns
about expressing their opinions, being judged by others, damaging social
relationships) when they are less knowledgeable than the other
The vignette experiments were designed to maximize our ability to
explain variation in psychological and behavioral response across differ-
ent discussion configurations, as well as to minimize social desirability
concerns. However, it is reasonable to ask whether people’s attribution of
response to a character is an accurate way to measure their own behavior.
While vignette survey research has been shown to flexibly allow research-
ers to measure expression in situations that are difficult to randomize, we
wanted to complement the survey work with lab experiments designed to
push further on our key findings.
the lab experiment), a lab session, and a posttest. Participants were asked
to indicate the extent to which they agreed with fourteen policies, using
questions adapted from the American National Election Studies about
political issues.
During the lab session, participants were told that they were joining a
“focus group” about students’ political opinions on campus with two
other “participants,” who were actually confederates in the study. The
treatment itself was the ordering in which the students shared their
opinions. In the control condition, subjects were assigned to give
their responses before the confederates; because they would be giving
their responses to each political question without knowing the opinions
of the confederates on the issue at hand, they had limited information
about how to conform to the group on the particular issue. Those
randomly assigned to give their responses last were in the treatment
condition because they would only give their response after hearing that
the confederates disagreed with them on an issue, giving them a position
with which to conform.7 Three days after the lab session, participants
were emailed a posttest survey that included the same fourteen questions
(buried within a larger survey) that they answered in the large pretest
survey and in the lab session.8
Our expectation was that participants in the treatment condition
would conform at a higher frequency and to a greater degree than
participants in the control condition.9 Our primary dependent variable
in this analysis is the number of times participants conformed across the
issues during the session, measured in two ways. Potential conformity
means that in the lab, a participant gave an answer that differed from his
or her pretest response, moved in the direction of the confederates, and
crossed the midpoint on the scale, such that the lab response actually
countered the pretest response.10 Pure conformity includes the require-
ments of potential conformity, in addition to requiring participants to
give the same response on the pretest and the posttest. Pure conformity
captures the construct most clearly – subjects altering their opinions only
in the presence of others who disagree – whereas potential conformity
allows for the possibility that subjects changed their mind over the course
of the study period.11
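The two coding rules can be stated compactly. The sketch below is ours, not the authors' code; the seven-point scale, its midpoint, and the function name are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code) of the two conformity measures
# described above, assuming responses on a 1-7 agreement scale with the
# confederates positioned on one side of the midpoint.

MIDPOINT = 4  # assumed midpoint of a 1-7 scale

def classify_conformity(pretest, lab, posttest, confed_side):
    """Classify one issue; confed_side is +1 if the confederates' position
    sits above the midpoint, -1 if below.

    Potential conformity: the lab response differs from the pretest, moves
    toward the confederates, and crosses the midpoint. Pure conformity:
    potential conformity plus identical pretest and posttest responses.
    """
    moved_toward = (lab - pretest) * confed_side > 0
    crossed = ((lab - MIDPOINT) * confed_side > 0
               and (pretest - MIDPOINT) * confed_side <= 0)
    if lab != pretest and moved_toward and crossed:
        return "pure" if pretest == posttest else "potential"
    return "none"
```

Counting "pure" or "potential" outcomes across the fourteen issues would then yield the dependent variable described above.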
While this distinction between pure and potential conformity is
important conceptually, our measurement cannot account for response
instability between the pretest, lab session, and posttest. While we cannot
know with certainty, we are confident that our measures capture some-
thing more than attitude instability given that we impose the strict
requirement of expressing at least one opinion during the study, and a
significant percentage of participants in all configurations of the vignette
experiments – ranging from 25% to 50% – expected the character to censor or conform
their viewpoints. Thus, when people reported high rates of conformity or
censorship in our vignette studies, they were not simply evaluating their
expectations of others’ behavior: They were likely projecting their own.
The Political Chameleons Study demonstrates empirically that individuals
do indeed conform and self-censor to better match group opinions in
political conversations. How do these behaviors manifest linguistically
during more organic political conversation? We return to the
Psychophysiological Experience Study described in the previous chapter,
where participants in a student subject pool were hooked up to psycho-
physiological measurement equipment while they talked about a series of
political issues. We recorded videos of the conversations, allowing us to
assess the linguistic markers associated with the expression of their iden-
tities and viewpoints. We first used a speech-to-text automated tool to
extract the transcripts of the conversations.13 We then watched the videos
to manually clean words or phrases missed or incorrectly transcribed and
to divide the text into speaking turns that could be attributed to each
individual subject.
In our analysis here, we focus on subtle verbal indicators that capture
meaningful variation in the dynamic, quality, and tone of the conversa-
tions. Table 7.5 outlines the linguistic variables we measure, using a mix
of hand coding and automated coding to capture several constructs, such
as how many words were exchanged, how much of a subject’s speech
consisted of verbal hedging, and the sentiment14 of the conversation.
These constructs do not map directly to the expression we studied in
the vignette experiments related to conformity and censorship, because
the “ground truth” we have for the subjects’ true opinion – the pretest
survey questions – is difficult to use as the basis for assessing
whether a subject was truthful. For example, consider a subject who
indicated she agreed with a policy in the pretest. In the conversation,
the other person spoke first, and the subject engaged with her discussant
about the discussant’s opinion before the allotted time expired. If the
subject never clearly states her own opinion, has she been truthful? Is
she avoiding answering the question, or was she just curious to learn what
her discussant thought and ran out of time before she could express her
own thoughts?

Table 7.5 (excerpt). Linguistic variables measured in the conversations

Used Party Name: An indicator for whether a subject used “Democrat,” “Republican,” or “Independent” when they described their partisan identity in the identity revelation segment, using hand coding. (Distribution: NA)

Conversational Initiation: An indicator for whether a subject spoke first during each conversational segment. For the overall conversation, we measure the number of times the respondent spoke first in each segment; for analyses of the identity revelation segment and each issue discussion segment, we use binary indicators for whether the respondent spoke first. (Range 0–7, mean 3.5)

Sentiment: Sentiment coded using the Lexicoder Sentiment Dictionary, interpreted as the gap between the percentage of positive and negative words used, for the overall conversation, for the identity revelation segment, and for the issue discussion. A score of 1 indicates a 1 percentage-point gap. (Range 0.001–0.041, mean 0.02)

Note: Data come from the Psychophysiological Experience Study. Linguistic measures available for 138 subjects.
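The sentiment score in Table 7.5 is a simple gap between the shares of positive and negative words. A minimal sketch, with toy word sets standing in for the Lexicoder Sentiment Dictionary (which we do not reproduce here):

```python
# Toy sketch of the sentiment gap measure described in Table 7.5; the word
# sets below are illustrative stand-ins for the Lexicoder Sentiment Dictionary.

POSITIVE = {"good", "great", "agree", "fair", "like"}
NEGATIVE = {"bad", "wrong", "unfair", "hate", "worse"}

def sentiment_gap(text):
    """Share of positive words minus share of negative words in a transcript."""
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)
```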
We calculated the metrics in Table 7.5 separately for the partisan
identity revelation portion of the study (where subjects described their
partisanship) and the issue discussion (where subjects discussed four
issues in separate conversational segments).16 Within each of the two
portions of the study, we looked for effects of the disagreement treatment
(whether subjects’ partisan identities matched or clashed, and whether
subjects’ pretest attitudes agreed or disagreed), as well as subjects’ indi-
vidual traits. In the issue discussion portion of the study, we also asked
subjects a series of questions about their perceptions during each of the
four issue discussions. Table 7.6 outlines the explanatory variables.
[Figure: Percent qualifying response (0–50%), partisan match versus partisan clash, shown for two conversation segments including the issue discussion.]
We next examine the conversation segments over policy issues. To assess
the relationship between the variables outlined in Table 7.6 and linguistic
expression during the policy segments, we pool all issue segments
together, meaning that we have eight observations per conversation (each
participant’s speech during four policy segments) and then run a series of
regressions between the independent variables from Table 7.6 and the
core conversational behaviors from Table 7.5, clustering standard errors
by subject.
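The pooled design can be sketched as follows. The data are simulated and the estimator is our own minimal CR0 cluster-robust implementation, not the authors' code; variable names are illustrative.

```python
import numpy as np

def ols_clustered(y, X, groups):
    """OLS coefficients with cluster-robust (CR0) standard errors: each
    cluster's score contribution is summed before forming the sandwich."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ (X.T @ y)
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        score = X[groups == g].T @ resid[groups == g]
        meat += np.outer(score, score)
    se = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
    return beta, se

# Simulated stand-in for the pooled data: four issue-segment observations per
# subject, an outcome (e.g. word count) regressed on a disagreement indicator,
# with standard errors clustered by subject.
rng = np.random.default_rng(0)
subjects = np.repeat(np.arange(40), 4)
disagree = rng.integers(0, 2, size=subjects.size)
y = 100.0 - 5.0 * disagree + rng.normal(0, 10, size=subjects.size)
X = np.column_stack([np.ones(subjects.size), disagree])
beta, se = ols_clustered(y, X, subjects)
```

Clustering acknowledges that the repeated segments from one subject are not independent observations.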
We find no evidence that either the partisan clash treatment or the
actual amount of issue disagreement between subjects was related to any
of the linguistic measures during the issue discussion segment. Nor did we
What did Joe choose to do in response to his father-in-law’s question?
Based on the results presented in this chapter, probabilistically Joe would
choose to censor his viewpoints to Frank, given that he felt he was at a
knowledge disadvantage and was concerned about maintaining the social
relationship. The experience was physiologically and psychologically
taxing, and it likely involved more verbal hedging and negative sentiment
than if he had talked to someone with whom he agreed.
Our exploration of the experience of a political discussion showed that
individuals consider a wide range of concerns and opportunities as they
contemplate what to say in a discussion, and that these choices are often
made with a backdrop of physiological arousal and emotional activation.
Importantly, the examination of the psychological considerations individ-
uals make indicates that individuals do indeed consider the social conse-
quences of their expression in the discussion. We found that 32 percent of
respondents select an affiliation consideration as most important in
driving a character’s expression in a vignette, and among those who do,
that they are much more likely to express a concern about the situation
than see an opportunity to strengthen a social relationship. Our vignette
experiment also showed that it is not only a structural disadvantage that
heightens these social concerns: Our subjects anticipated a higher rate of
affiliation concerns when the character was more knowledgeable than the
hypothetical discussants.

Table. Pattern of findings for linguistic markers in issue discussion in Psychophysiological Experience Study

Number of Words
  Perceptions: Subjects used fewer words to discuss an issue when they perceived their discussant to be more knowledgeable on that issue.
  Individual traits: Subjects with higher SIAS scores used fewer words to discuss an issue, especially when subjects disagreed on the issue.

Verbal Hedging
  Perceptions: Subjects used more filler words when discussing an issue with a partner they perceived to disagree with them on that issue.

Conversational Initiation
  Perceptions: Subjects were less likely to speak first about an issue when they were less comfortable discussing that issue. Subjects were less likely to speak first about an issue when they perceived their partner to be uncomfortable discussing that issue.
  Individual traits: Whites were more likely to initiate conversation than ethno-racial minorities. Subjects with higher SIAS scores were less likely to speak first about an issue. Subjects with higher CA scores were less likely to speak first about an issue.
We find that a majority of our respondents expected individuals to
deviate from their true opinions through censorship, conformity, or silen-
cing. The behavioral path they chose was conditional on the relative
knowledge levels of the discussants, but it was also related to whether
the subject reported that a concern or an opportunity was the most
important consideration for the character. People who emphasized the
importance of a concern were more likely to report that the character
would not express his or her true opinion. This tendency toward reticence
was reinforced by findings from lab experiments that not only validate
high rates of censorship and conformity but also shed light on the subtle
dynamics of disagreeable conversations. We do not have lab results to
validate the findings about the effects of knowledge differentials on what
subjects say and how they say it, but we think this is a fruitful avenue for
further research.
In Chapter 8, we conclude our 4D Framework with the Determination
stage. Here, we examine the extent to which contentious political inter-
actions, such as those described in this chapter, might lead individuals to
avoid political discussions, politics more generally, and even nonpolitical
social experiences in the future. One participant in our Thanksgiving
Study captured this notion well, when asked to describe a recent political
discussion:
It was with a co-worker and we were debating homosexual marriage. I was
against and he was for. We were at work in a team room and it was just hard
to convince him of my views. Plus he wasn’t discussing but rather trying to agitate
me, and it worked. I hate feeling like that and it just made me angry. I learned
from that lesson to not get involved in these types of discussions so much.
Determination
For the umpteenth weekend in a row, Joe rakes the leaves from his yard to
the curb, the annual Sisyphean ritual of suburban home ownership. He
stares down the street at his neighbor Jack’s house, where the weekly
driveway auto shop gathering has switched entertainment from baseball
to football alongside the change in season. After the conversation at the
barbeque earlier in the year, Joe had tried again last month to get to know
Jack and his friends, joining them in the driveway as everyone was figuring
out who they wanted to recruit for their fantasy football teams. But once
again, as soon as the conversation turned away from small talk and on to
the news, it was clear Joe’s views were in the minority. He uncomfortably
muddled through the conversation and retreated back to his yardwork as
soon as he could. Now, a month later, he was still mulling over the situation
as he once again saw men gathering in the driveway. Would it be rude to
pop over for a few minutes to talk sports, but immediately leave when
politics came up? Should he continue to grin and bear it through more
political banter every week for the sake of being neighborly? Could he get
away with just a friendly wave from afar? Or should he avoid Jack at all
costs, changing his raking schedule and taking a different route to avoid
jogging past him in the mornings?
open-ended responses that could be coded about why someone did or did
not engage in a conversation were nearly evenly divided between each
category. In our vignette experiments in Chapter 7, we found that 37% of
respondents selected affirmation, 32% selected affiliation, and 31%
selected accuracy as the character’s most important consideration when
determining his or her expression during a conversation.
Our results show that while all three categories of consideration
matter, people who think of conversation in terms of affiliation are much
more likely to be threatened by potential negative consequences than
motivated by potential positive benefits. With respect to the decision to
participate, we found in Chapter 5 that affiliation concerns were most
prevalent overall, based on the free response answers provided by
respondents. But this pattern was driven by the people who were asked
why they avoided a particular political conversation: 28% of people
mentioned affiliation concerns, compared to only 17% who mentioned
accuracy and 10% who mentioned affirmation concerns. Moreover, only
11% of participants identified an affiliation consideration as their reason
for engaging in a conversation. It seems that affiliation considerations are
a much stronger driver of opting out than opting in.
Similarly, in our exploration of the motivations for behavior during a
discussion in Chapter 7, Figure 7.1 demonstrated that people were much
more likely to pick an affiliation concern as their most important consid-
eration compared to an affiliation opportunity. The difference in the
concern–benefit ratio was not nearly as stark for the other two categories.
Moreover, concerns about affiliation are not limited to those who were
disadvantaged in the conversation. People in the vignette experiment
condition who had a knowledge advantage relative to their discussants
were actually more concerned about affiliation consequences than those
in the condition with a knowledge disadvantage! Thus, while affiliation is
not a more prevalent or more important category of consideration for
people overall, on average it is a more negative consideration, one that
frequently has an adverse effect on individuals’ engagement in a
conversation.
Affiliation considerations have not been ignored in previous litera-
ture, but neither have they been fully explored. The foundational work
studying core discussion networks theorized about social considerations
in discussant selection but did not rigorously test those ideas. For
example, when assessing the strategic function that political discussion
can play in informing the public, Ahn, Huckfeldt, and Ryan (2014)
describe the decisions individuals make in choosing the people with
whom they will discuss politics and for what purposes. They argue that
“individuals have multiple preferences in the construction of communi-
cation networks, and politics is only one among a long list of preferen-
tial criteria – sparkling personalities, trustworthiness, a hatred for the
Yankees, and so on” (p. 10). While the authors raise the point that
social – as opposed to purely political – considerations might affect
political discussion choices, they do not test the extent to which these
social considerations affect behavior.
Other research on political expression and tolerance speaks somewhat
to social considerations and fits between Stages 3 and 4 of our cycle.
Gibson (1992) demonstrates that those who do not feel free to express
their political views have more homogeneous networks and live in less
tolerant communities. Recent work in this vein suggests that individuals
are driven to self-censorship – a key component of Stage 3 in our cycle –
primarily because they fear that expressing unpopular views will alienate
them from those in their social networks (Gibson and Sutherland 2020).
Relatedly, Mutz (2006) argues that individuals in politically diverse net-
works avoid political activity “mainly out of a desire to avoid putting
their social relationships at risk” (p. 123). While Mutz’s work is theoret-
ically rich and largely consistent with much of our argument in this book,
the social implications of discussion are not tested. Weber and Klar
(2019) find support for the idea that people who are sensitive to norma-
tive social pressure are more likely to sort themselves socially and ideo-
logically, but they do not actually assess the experience of social pressure
itself. Thus, while there is evidence that social pressure is at play in
structuring our choices, the specific mechanisms about the desire to
preserve our social relationships remain largely untested.
Where affiliation motivations have received considerable attention as a
driving force in political discussion behavior is the focus on social concerns
in studies utilizing qualitative approaches. Conover and colleagues use a
multi-method approach to specifically focus on why people avoid political
discussion, which we would situate in Stage 2 of the 4D Framework
(Decision). Political conversations raise affiliation concerns in a number
of ways. Conversation may reveal information about a person’s view-
points that elicit judgment from others,1 a concern that is most pro-
nounced in conversations with weaker ties.2 Conversation creates the
possibility to learn information about other people’s views that reveal
fundamental differences3 or make it difficult to maintain respect for them.4
Finally, the act itself of disengaging from a conversation that becomes
contentious can be offensive,5 alienating others by refusing to continue to
talk. This dynamic is made clear in Talking about Politics, where Cramer
(2004) contrasts the men she studies (the “Old Timers”) to a group of
women in a guild who do not know each other as well and avoid discuss-
ing politics to keep the conversation polite (p. 37).
Scholars concerned with affective polarization have uncovered plenty of
evidence of altered social relationships. Study after study reveals new ways
in which Democrats and Republicans report their dislike for each other.
Instead of simply examining whether partisans rate outpartisans “colder”
on a feeling thermometer scale, many of these studies explore whether
people would be unhappy about their progeny marrying an outpartisan
(Iyengar, Sood, and Lelkes 2012), would discriminate against outpartisans
(Iyengar and Westwood 2015), would be less willing to date outpartisans
(Huber and Malhotra 2017; Easton and Holbein 2020), would avoid selling
football tickets to outpartisans (Engelhardt and Utych 2018), and would not
want their children playing with the children of outpartisans (Mason
2018).6 A wealth of articles in the popular press echo the same points.
The Daily Beast published an article headlined “Friends unfriend over
politics”;7 NPR had a segment on All Things Considered called “‘Dude,
I’m done’: When politics tears families and friendships apart”;8 Reuters
published a piece headlined “‘You are no longer my mother’: A divided
America will struggle to heal after Trump era”;9 and The Washington Post
published a piece headlined “Politics and conspiracy theories are fracturing
relationships. Here’s how to grieve those broken bonds.”10
But scholars have not interrogated what provokes the disruption to
people’s social networks based on partisan antipathy. The core argument
is that strong patterns of partisan sorting along social, cultural, and
geographic lines have activated political tribalism (Mason 2018) or polit-
ical sectarianism (Finkel et al. 2020), and that Social Identity Theory can
explain the resultant change in attitudes. But what does that unfolding
process look like in the daily lives of Americans? How do day-to-day
interactions about politics actually affect our social relationships?
Our aim in this chapter is to integrate these previous veins of
literature with our own findings to conceptualize what happens after a
political discussion with respect to social relationships. We shed light on
whether individuals’ affiliation concerns are warranted. Our findings
suggest that political conversation puts people in a bind. As their political
and social worlds collide, individuals are left with a choice about how to
reconcile their social relationships with their political attitudes. Do they
downplay the importance of their political views to allow for harmonious
disagreement with their peers and maintain the friendship? Or do they
reduce their social contact with those who disagree in order to maintain
their political allegiances and political identities?
We examine social repercussions in terms of social distancing and social
polarization, as shown in Table 8.1. By social distancing, we refer to behav-
iors as they relate directly to the people who were part of the discussion
under consideration. This can include both future political and nonpolitical
(social) interactions, but the focus is on the people with whom someone has
already talked about politics. By social polarization, we refer to behaviors
that are targeted toward others who were not part of the conversation that
capture the increasing social discomfort between Democrats and
Republicans (Mason 2018).11 In both cases, we are interested in how the
experience of being in an uncomfortable political discussion – here, meas-
ured as being in the political opinion minority – affects how individuals
approach future political and social interactions.
When it comes to social distancing, we are interested in the likelihood
with which individuals will discuss politics with that same group again, as
well as the likelihood with which they will otherwise engage socially with those same discussants.
The 4D Framework is premised on the idea that what someone learns
from one discussion experience feeds into their a priori expectations of
political discussion the next time they are presented with an opportunity
to talk about politics. At its most proximal level, this affects whether they
change their pattern of interaction with the people in the discussion, either
in terms of talking about politics again, or in terms of modifying their
social relationship, such that they avoid future nonpolitical interactions
with them, too.
The point that these respondents clearly make is that they chose to
forego political discussions in order to maintain their social relationships.
This is a textbook case of acting on an affiliation concern. These respond-
ents did not stop talking politics because they felt unknowledgeable on the
topic, felt they were being persuaded to adopt a different view, or any-
thing that more clearly aligns with accuracy motivations or strategic
discussion behavior. Rather, they stopped talking politics to avoid des-
troying a relationship.
Another important point from these responses is that although many
described their friends’ views or the disagreement stemming from the
political discussions in deeply emotional language, they still desired to
keep the friendships. That is, this particular subset of respondents who
only distanced politically valued friendship over politics, even if they had
some colorful things to say about their friends’ views. For example, one
respondent wrote that their friend “says crazy shit.” Another, more
eloquently, wrote that his or her friend “expressed decidedly prejudiced
views after Obama was nominated for [p]resident.” In this case, the
respondent raised concerns about prejudiced comments his or her friend
made, but chose to stop talking about politics instead of ending the
relationship altogether. As we discuss later in this chapter, other respond-
ents were not so forgiving. And eliminating political talk from a relation-
ship can inadvertently affect the overall relationship. As one respondent
wrote: “My goddaughter and I have decreed a no talk zone on politics so
have distanced ourselves in that topic area . . . we still love and respect
each other but much of disposable social time is spent in supporting those
issues that it limits our social time.”
avoid future political discussions with this group than those who read
about a character in a balanced – but contentious – group or who was in
the partisan majority.15
We also asked how likely they thought it would be that the character
would go to these people for information about the election in the
future. We find that characters in the partisan minority were perceived
to be significantly less likely to go to these discussants for political
information in the future (mean = 2.4) than those in the partisan
majority (mean = 3.4) and a balanced group (mean = 3.3). Turning to the
main effects of the knowledge treatment, we observe suggestive evidence
that respondents expected a less knowledgeable character (mean = 4.1)
to be more likely to avoid discussing politics with this group in the
future than a character who was more knowledgeable than the group
(mean = 3.8). This is consistent with results we presented in Chapters 5
and 7, suggesting that individuals tend to avoid conversations in which
they are less knowledgeable than others in the group. However, we also
observe that respondents expected a character described as less
knowledgeable than the discussants (mean = 3.6) to be more likely to turn to
these people for information about future elections, compared to
characters described as being more knowledgeable than the other discussants
(mean = 2.3).
While we do not observe an interaction effect between the knowledge
treatment and partisan composition on the perceived likelihood that the
character would avoid future political discussions with the group, we do
observe a statistically significant interaction between knowledge and
partisan composition such that characters who interacted with coparti-
sans who were more knowledgeable were more likely to report that they
would ask them for information in the future. This fits nicely with the
general notion that individuals should be relying on others who are more
knowledgeable than they are, especially if they are copartisans, for infor-
mation about politics (e.g. Lupia and McCubbins 1998; Ahn, Huckfeldt,
and Ryan 2014; Carlson 2019).
The results from this study complete our picture of the relationship
between political discussion experiences and future political interactions
with discussants. Echoing the results from Chapter 5 – where we found
that people were much more likely to avoid discussions in which they
would be in an opinion minority – we see here that the experience of a
discussion where one is in the opinion minority has the strongest effects
on future political interactions. Knowledge asymmetries between discuss-
ants also seemed to affect these behaviors, but to a lesser extent.
[Figure: Likelihood of future social interaction, on a 0–6 scale.]
One implication of the 4D Framework is that the experiences an individual
has with political discussion shape future decision-making; these
accumulated decisions ultimately have an impact on the structure of their
discussion networks, political and otherwise, as well as the frequency with
which they communicate, whether about politics or pop culture.
Causality in this process is very difficult to establish, as we expect that
these choices are self-reinforcing, but we think that the associations are
telling. We use data from the 2018 CCES to examine the relationship
between political discussion network characteristics and broader social
polarization behaviors.17 This analysis tells us how political discussions
are associated with changes in our likelihood of avoiding future social
interactions with outpartisans.
As we saw in Chapter 5, people demanded more money to talk about
even nonpolitical topics with outpartisans. This suggests that the social
effects of political discussion may extend beyond avoiding future political discussions.
Experiences throughout the 4D Framework shape future political and
social behavioral intentions. Individuals do indeed report that they dis-
tance themselves socially from their friends because of politics. We found
that about a quarter of Americans report cutting ties with others for
political reasons, but these reasons vary from person to person.
Sometimes, the behavior stems from pure intolerance of disagreeable
views or a distaste for political engagement more broadly, while other
times it stems from the signals that political views can send about under-
lying values or lack of common interests between friends. We also
explored the extent to which social polarization is linked to political
discussion network composition. We find mixed evidence that exposure
to disagreement in discussion networks is connected in some way to social
polarization, but clearer patterns among strong partisans, who we
think have the most agency in structuring their discussion networks.
Among this subset of the public, those who have the most homogeneous
discussion networks are the most socially polarized.
Throughout this chapter, we have taken care to note some of the
strengths and limitations of our analyses. Our hope is that future
researchers can improve upon the groundwork we have laid in this
chapter to more carefully answer these nuanced questions. For example,
our analyses of the CCES data rely upon one question aimed at assessing
network composition, but this does not capture the complexities of polit-
ical discussion that we have tried carefully to unpack throughout this
book, especially in Chapter 7. This analysis also uses a social polarization
measure that might be susceptible to the biases raised by Druckman et al.
(2021), leading us to overestimate the degree of social polarization by not
distinguishing perceptions of ideologically extreme,
engaged partisans from perceptions of the more typical moderate, less
engaged partisans.
Synthesizing the findings from this chapter together raises an important
question: Is the picture bleak? On the one hand, we might argue “yes.” We
found that the very kinds of conversations where people are likely to
encounter new information – conversations with outpartisans – are pre-
cisely the ones that individuals are least likely to repeat. Our vignette
experiments repeatedly demonstrated that those in the partisan minority
were perceived to be less likely to engage in a political discussion in the
future with the same discussants. In an analysis detailed in the Appendix,
we also find that individuals in the partisan minority in their political
discussion networks discuss politics less often than individuals who are in
the partisan majority. These heterogeneous conversations are precisely
the types of discussions with the potential to increase tolerance for the
other side through interparty contact. This means that some of the foun-
dational theorized benefits of political conversation – exposure to diverse
views, access to a wider variety of information, and increased tolerance –
might be increasingly rare. Moreover, we also find that these political
interactions have the potential to change individuals’ social networks
through social polarization, which makes future inadvertent exposure less
likely. As individuals sever ties with their friends because of politics, they
reduce the likelihood that they are incidentally exposed to political infor-
mation and conversation in the future. Finally, we observe some evidence
for social polarization. It seems that exposure to disagreement in political
discussion networks can be associated with increased social polarization
among weak partisans and independents, while homogeneous networks
are associated with increased social polarization among strong partisans.
On the other hand, we might look at the evidence presented in this
chapter and conclude that there is a glimmer of hope. First and foremost, the vast
majority of respondents reported that they have not socially distanced
themselves from a friend because of politics. Similarly, in our vignette
experiments, the average likelihood of engaging in future political and
social interactions was above the midpoint on the scale, indicating that
most respondents still thought it likely that the character would continue
to engage socially. We also asked respondents in a vignette experiment
about the likelihood that the character would engage in future political
discussions with other people who were not part of the conversation in
the vignette. We found no evidence that the partisan composition of the
discussion under consideration affected the likelihood of future political
discussion in general. It is easy to point to the quarter of respondents
who have distanced and worry about the state of the social fabric in
America. But most people have not severed social ties because of polit-
ical discussion. Moreover, when we dig into the types of social polariza-
tion we explored, we observe that this seems to be most common in
romantic partnerships. The majority of respondents still seem relatively
comfortable engaging with outpartisans as neighbors, friends, or
acquaintances; they just draw the line at marriage (Iyengar, Konitzer,
and Tedin 2018).
In Chapter 10, we revisit these findings and discuss how our political
discussion experiences might affect our broader societal attitudes, such as
hostility toward outpartisans and political elites. For the moment, we
leave it up to the reader to decide for herself whether these results present
an optimistic or pessimistic window into the state of political discussion
in America.
Among the most frequently measured demographic characteristics in
political behavior research are race and gender. For decades, scholars
have been interested in understanding how race and gender structure
political behavior: Do Whites and minorities have different political
preferences? Why do minorities tend to participate in politics at lower
rates? Do men and women have distinct political preferences? Why do we
observe women participating in politics at increasing rates over time?
While the vast majority of this research has been focused on questions
of participation (Schlozman, Burns, and Verba 1999; Leighley and Vedlitz
1999; Lawless and Fox 2011; Anoll 2018); vote choice (Rouse 2013;
Dolan 2014; Setzler and Yanus 2018); and political knowledge (Dolan
2011; Abrajano 2015; Pérez 2015; Dolan and Kraft n.d.), there is an
important body of research specifically focused on understanding the role
race and gender play in political discussion.
Scholars of political discussion, and particularly deliberation, have
focused some attention on gender and race because they are characteris-
tics intricately linked to structural inequalities in American society. The
Table 9.1 Relationships between individual characteristics and political discussion behaviors

Stage          Outcome                                                      Data               Women  Minorities  Political Interest  PID Strength  SIAS  CA  WTSC

Detection      Try to Guess Views in Advance                                CIPI I & CIPI II   + + + +
               Makes a Guess (free response)                                CIPI II            + – – –
               Directly Ask Others' Views (free response)                   CIPI II            – – –
Decision       Generally Try to Avoid Political Discussions                 CIPI I             + – – + + +
               Deflect (expect vignette character to silence)               CIPI I             + – + +
               Perceived Value of Initiating an Agreeable Discussion        CIPI I & CIPI II   + + –
               Perceived Value of Initiating a Disagreeable Discussion      CIPI I & CIPI II   – –
Discussion     Perceived Value of a Political Conversation with a Stranger  CIPI I & CIPI II   + –
               Perceived Value of an Agreeable Discussion                   CIPI I & CIPI II   +
               Perceived Value of a Disagreeable Discussion                 CIPI I & CIPI II   – –
               Proportion of Concerns about a Conversation Vignette
                 Character Would Have                                       CIPI I             – + + +
               Likelihood of Expressing True Opinion for Vignette
                 Character                                                  CIPI I             – + + – –
               Expecting Vignette Character to Entrench                     CIPI I
               Expecting Vignette Character to Express True Opinion         CIPI I             – – –
               Expecting Vignette Character to Censor                       CIPI I
               Expecting Vignette Character to Conform                      CIPI I             + + +
               Expecting Vignette Character to Silence                      CIPI I             – + +
Determination  Socially Distanced from a Friend Due to Politics             CIPI I & CIPI II   – + + + + +
               Distanced on Social Media Due to Politics                    CIPI I & CIPI II   +

Note: Symbols in each cell denote the direction of the observed relationship between the individual characteristic in the column and the discussion
behavior in the row: + denotes a positive relationship; – denotes a negative relationship; and a blank cell means that we did not observe a statistically
significant relationship at the p < .05 level. All individual disposition data come from the CIPI I survey. Analyses using the combined CIPI I & CIPI II
data include individual dispositions measured in CIPI I and dependent variables measured in CIPI II. Demographic and political dispositions were also
measured in CIPI II and results are robust to using the CIPI I or CIPI II measures of these dispositions.
systematic barriers that women and racial and ethnic minorities face in
society more generally might affect how they participate in a political
discussion or deliberation, if at all. Just as people from underrepresented
communities are silenced representationally, so too might they be less
likely or able to raise their voices in political conversations.
If the effects of these structural inequalities are strong enough, we
expect that we might be able to detect gender or racial differences in
our measures of the 4D Framework, because people would have intern-
alized these disadvantages and altered their attitudes and expectations
accordingly. But we want to be clear about two things. First, our
expectations and analyses do not give full credence to the complexity
of gender and racial identities. Plenty of research has shown that
simply dichotomizing gender into men and women or race into White
and non-White crudely washes over important heterogeneity within
each group. Moreover, if we consider race and gender to be social
constructs, these characteristics are proxies for differences in lived
experiences, such as exposure to gendered stereotypes. Future research
focused directly on the effects of gender and race on political discussion
should think critically about how to best conceptualize and measure
these traits. Second, we want to be clear that we did not design our
studies in ways to specifically activate the inequality that women or
racial minorities might experience in organic conversations. Unlike the
approach in some deliberative participation research (e.g. Karpowitz,
Mendelberg, and Shaker 2012; Mendelberg, Karpowitz, and Oliphant
2014; Mendelberg and Karpowitz 2016), we did not manipulate the
gender (or racial) composition in our discussion studies. We do not test
any interactions with gender or race from our experimental studies and
instead focus on the main effects of gender or race. Thus, the results we
present in the next section should be seen as a first step in an explor-
ation of the role of demographics in nuanced discussion behaviors, not
the final verdict.
Gender
On average, women participate in political discussions less frequently
than do men.2 While the gender gap is relatively small, Mendelberg and
Karpowitz (2016) forcefully argue that it is “worth the time to under-
stand, because it demonstrates how power—the lifeblood of politics—
animates the political system . . . gender affects how power is instantiated,
reinforced, or undermined when people exercise voice. It does so in ways
both overt and subtle, through means that may be simultaneously polit-
ical, psychological, and social” (pp. 1–2).
Beyond frequency, how does gender affect behavior in political discus-
sion? Some have connected the frequency gap to preferences about dis-
agreement. Djupe, McClurg, and Sokhey (2018) find that disagreement
decreases women’s participation in political discussion but increases par-
ticipation from men. One possible mechanism for this could be conflict
avoidance. Women tend to be more conflict avoidant, meaning that they
might be less likely to engage in discussions, especially when disagreement
is present. Along this same vein, Wolak (2020) finds that gender gaps in
political engagement are driven more by men’s preference for (or enjoy-
ment of ) conflict than by women’s preference to avoid it.
Within formal deliberative settings, previous research has shown that
the gender composition and decision rules employed strongly affect the
extent to which women speak up (or not) in these discussions. Specifically,
when there are many women in the group and when the decision rule is
unanimous, rather than majority rule, women are more likely to speak up
(e.g. Karpowitz, Mendelberg, and Shaker 2012; Mendelberg, Karpowitz,
and Oliphant 2014). The theoretical foundation behind the “strength
in numbers” component of these findings is rich (see Karpowitz,
Mendelberg, and Mattioli 2015 for a thoughtful review). Much of this
research points to socialized gender norms about both whether politics is
for women and how women and men should interact socially. Although
we do not explore gender dynamics, the lessons gleaned from this previous
work are useful. For example, women are more likely to be socially
sanctioned when they speak confidently, instead of following the norm of
being modest (Babcock and Laschever 2003). Karpowitz, Mendelberg,
and Mattioli (2015) write “[g]roup interactions between men and women
tend to crystallize women’s ex ante inferior authority, rendering them less
likely to exercise influence in a deliberative setting” (p. 151).
Within the context of the 4D Framework, we do not have strong
expectations for how and why gender should affect Detection (Stage 1)
based on past research, but these previous findings suggest that gender
could play an important role in the Decision stage (Stage 2). In particular,
we expect women to be less likely to engage in political discussions than
men, following from the broad findings we outlined, as well as patterns
uncovered from previous research on the rates of discussion between men
and women (e.g. Huckfeldt and Sprague 1995). Thinking about the
effects of situational factors such as disagreement and knowledge, we
should expect gender gaps to be particularly large in response to disagreement.
Women might try to steer conversations away from politics, but they should
be less likely than men to simply sever the tie.
As detailed in Table 9.4, we test these expectations using a logit model
that controls for race, interest in politics, strength of partisanship, and
psychological dispositions. We found that women were significantly less
likely to report that they had distanced themselves from a friend because
of politics, compared to men, consistent with our expectation.
Ethnorace
While there is ample previous research on gender and political discussion,
there is far less research on which to draw to develop expectations
about how race and ethnicity affect political discussion. As researchers
have recently noted, the majority of political discussion research has
focused on samples of White respondents or has not over-sampled minor-
ities in a way that allows for meaningful inferences (Leighley and
Matsubayashi 2009; Carlson, Abrajano, and García Bedolla 2019,
2020). Our book is also vulnerable to this critique. What research has
been conducted demonstrates that there are dramatic differences in the
partisan (Carlson, Abrajano, and García Bedolla 2020; Eveland and
Appiah 2020) and ethnoracial (Leighley and Matsubayashi 2009;
Eveland and Appiah 2020) composition of discussion networks between
Whites and minorities, as well as in network size. For example, Eveland
and Appiah (2020) find that Black Americans have more racial diversity
in their discussion networks, while White Americans have more political
diversity in their discussion networks. In general, Whites tend to discuss
politics more frequently than individuals from ethnoracial minority
groups, but the reason behind this remains relatively unknown. As
Carlson, Abrajano, and García Bedolla (2020) argue, it could be that
Whites are socialized to be positively inclined toward political discussion
and political engagement more so than ethnoracial minority groups.

Political Dispositions

Perhaps the most foundational individual dispositions to consider while
trying to explain variation in political discussion behavior are political
characteristics. Specifically, we focus on interest in politics and strength of
partisanship, as we expect strong Democrats and strong Republicans to
have similar political discussion preferences throughout the cycle. We
outline our expectations in two ways. First, we assess when we expect
these traits to have similar effects, even if the mechanism of the relation-
ship might differ. Given that strength of partisanship and interest in
politics are correlated in our CIPI I Survey data (r = .29), we should
anticipate similar effects in many instances. Second, we focus on the
unique effect each characteristic should have, independent of its relation-
ship to the other characteristic.
When might partisanship strength and interest in politics have similar
effects on discussion behavior? We expect large effects at the Detection
stage (Stage 1). Both traits incline people to pay more attention to politics.
Paying more attention to politics heightens people’s ability to see more
differences between the political parties and to recognize patterns of
[Figure: Predicted probability of guessing others' views in advance, by interest in politics (not at all to very interested) and by strength of partisanship (pure independents to strong partisans).]
likely to report that they would try to guess others’ views in advance of a
conversation. Specifically, the predicted probability of guessing someone’s
views in advance for the least interested in politics was .14, but this rose
to .43 for those who were most interested in politics. Likewise, predicted
probabilities based on our logit model suggest that the probability of
guessing someone’s views for pure Independents was about .24, but
.39 among strong partisans. Note that this does not mean that knowing
someone else's views in advance was a requirement for participation, but
rather that these respondents were simply more likely to engage in this
detection behavior. Second,
assessing the free response data, we find that the highly interested were
more likely to provide an answer about how they would guess others’ views
and directly ask someone their views, but that strong partisans were not.
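These predicted probabilities come from the logistic link, so the gaps are larger on the log-odds scale than the raw probabilities suggest. As a back-of-the-envelope translation (using only the probabilities reported above, not the fitted coefficients):

```latex
\hat{p} \;=\; \frac{1}{1 + e^{-\mathbf{x}'\hat{\boldsymbol{\beta}}}},
\qquad
\operatorname{logit}(\hat{p}) \;=\; \ln\!\frac{\hat{p}}{1-\hat{p}}
```

For interest in politics, logit(.14) ≈ −1.82 and logit(.43) ≈ −0.28, a shift of about 1.53 on the log-odds scale (an odds ratio of roughly 4.6); for strength of partisanship, logit(.24) ≈ −1.15 and logit(.39) ≈ −0.45, a shift of about 0.71 (an odds ratio of roughly 2.0).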
Our expectations for the two traits diverge at Stage 2, as we expect
the highly interested and strong partisans to prefer different kinds of
conversations, but we do expect that on average, strong partisans
and those who are more interested in politics will be less likely to report
that they would avoid a political discussion and less likely to expect a
character in the CIPI I Vignette Experiment to deflect by remaining silent in
the conversation. We have similar expectations for their opinion expression
in Stage 3. Both traits can function in ways to make politics a “hobby,”
where people derive enjoyment from talking about politics and both likely
make people more confident in and willing to express their true opinions.
We find mixed support for these expectations in Stage 2 and Stage 3. We
do not find a relationship between interest in politics or strength of parti-
sanship and a preference to avoid political discussion, a point we expand
on shortly when we assess the traits individually. As shown in Figure 9.2,
we do not observe many differences in the anticipated expression response
in the vignette experiment based on interest in politics or strength of
partisanship. Those who were more interested in politics were less likely
to expect the character to be silent, but we do not observe any other
statistically significant differences between the most and least interested,
nor by strength of partisanship. However, the more interested in politics
someone is and the stronger their partisan attachment is, the more likely
they are to expect the vignette character from the CIPI I Survey to express
his or her true opinion, as measured using the six-point Likert scale.
Finally, at Stage 4, our overarching expectation is that those who are
more interested in politics and more partisan will be more likely to engage
in social distancing and affective polarization. These individuals likely
care more deeply and will be more sensitive to the presence of politics in
their daily lives. Their political views are important to them and they may
be more likely to distance themselves from their friends because of polit-
ical views, compared to those who are less interested in politics, whether
the salient difference is due to disagreement or a discrepancy in their level
of political engagement or interest (Klar and Krupnikov 2016; Krupnikov
and Ryan 2022).
In general, we find support for these expectations. Among respondents
who completed both the CIPI I and CIPI II Surveys, a logit model reveals
that those who are more interested in politics are more likely to report
that they have distanced themselves socially from a friend because of
politics. Specifically, the model suggests that the predicted probability of
distancing from a friend because of politics for the least interested is .10,
but this nearly triples (.29) for the most interested.
[Figure 9.2: Predicted probability of expecting the vignette character to entrench, express their true opinion, censor, conform, or remain silent, comparing the least and most interested in politics (left panel) and pure independents and strong partisans (right panel).]
To this point in the chapter, we have largely focused on the “usual
suspects” for individual characteristics that are associated with political
discussion behavior. We now turn to psychological characteristics. Our
analysis of the three traits we explore shortly is meant to illustrate a
broader point: There is much to be gained from developing more theor-
etically driven inquiries that link psychological variation between people
to the kinds of preferences and behaviors they exhibit at each stage of the
4D Framework. We selected two traits that have been well-studied in the
political discussion literature (conflict avoidance and willingness to self-
censor) and one trait that we think should make people especially sensi-
tive to the social concerns inherent in political discussion (social inter-
action anxiety). Each of these personality characteristics becomes relevant
at different stages of the cycle.
In the sections that follow, we describe our expectations for the ways
in which each psychological disposition could be related to particular
stages of the 4D Framework, as well as any empirical evidence to
support these expectations. We take each disposition in turn for both
theoretical and methodological reasons. We find it theoretically import-
ant to analyze each of these factors independently from one another, but
controlling for the effects of the “usual suspects” explored previously.
Methodologically, we also note that these psychological dispositions are
strongly correlated with one another, much more so than the demo-
graphic and political dispositions we examine. As a consequence, we
hope to avoid issues of multicollinearity by analyzing separate models
for each psychological disposition.
Social Anxiety
As we describe in more detail in Chapter 3, social anxiety captures the
extent to which individuals are uncomfortable engaging with other
people. Those who are more socially anxious are more likely to experi-
ence this discomfort during interactions in which their behavior is
dependent upon the behavior of others, such as political discussion.
Because our measure of social anxiety (the SIAS scale) addresses the
interaction itself, we are relatively agnostic about how the trait might affect the
Detection stage. On the one hand, people who are more socially anxious
might be especially attuned to political cues as they try to prepare to tailor
their behavior in the forthcoming interaction. On the other hand, people
who are more socially anxious might not care to detect others’ views
because they simply prefer to avoid all social interactions, regardless of
the presence of disagreement or the political nature of the conversation
more generally. The finding reported in Figure 9.4 suggests the latter.
We do, however, have strong expectations for how social anxiety
structures Stage 2 (Decision) and Stage 3 (Discussion) behaviors. We
expect that those who are more socially anxious will be more likely to
avoid political discussions. We test this expectation using the measures
described in Table 9.2. We found suggestive evidence that those who are
more socially anxious were more likely to report that they avoid political
discussions. Substantively, we found that moving from the lowest to the
highest level of social anxiety increased the odds of avoiding political
discussions (“avoiding” or “somewhere in between” relative to
“enjoying”) by about 24 percent. However, we do not find evidence that
social anxiety was associated with anticipated deflection in the vignette
experiment or variation in the value placed on initiating agreeable and
disagreeable political discussions.
Just as we expected social anxiety to be associated with the Decision
stage (Stage 2), we also expected social anxiety to be relevant in Stage 3,
the discussion itself. This is a bit trickier to evaluate given that we expect
people who are more socially anxious to be more likely to avoid these
interactions in the first place, but we still have expectations for how social
anxiety might affect the ways in which individuals participate in a con-
versation, should they find themselves in one. We expect that those who
are more socially anxious will place lower value on political conversation
itself, whether casual conversations with strangers or agreeable or dis-
agreeable conversations. Their anticipated discomfort in the social
interaction more generally should lead them to find these experiences
costly. We did not
have strong expectations for how social anxiety should be associated with
Determination, but we find that it is positively associated with socially
distancing from friends because of politics. This is consistent with a
finding we published in a separate article (Carlson, McClean, and Settle
2019), where we used data from the Psychophysiological Anticipation
Study to show that those who are more socially anxious and have
stronger psychophysiological reactions to anticipating a political discus-
sion have more politically homogeneous discussion networks. We leave to
future research the challenge of unpacking why social anxiety is related to
the aftermath of political discussion, although we can speculate that it
might be due to a tendency to spend more time analyzing social inter-
actions before and after they occur.
Conflict Avoidance
Conflict avoidance involves the extent to which individuals prefer to
evade disagreement. Throughout this book, we have shown that disagree-
ment is a central feature affecting behavior in the 4D Framework. Based
on the findings of other researchers who have studied the role of conflict
avoidance in political discussion, we expect the trait to play a role in
structuring behavior at each stage of the political discussion cycle.
At the Detection stage (Stage 1), we expect that individuals who are
more conflict avoidant will have more finely tuned detection systems and
will engage in detection more frequently than those who are less conflict
avoidant. Because individuals who are more conflict avoidant prefer to
dodge disagreement, they are likely to be on the lookout for cues signaling
disagreement in an effort to avoid those interactions. Specifically, we
expect those who are more conflict avoidant to be more likely to report
that they try to guess others’ political views in advance of a conversation
and to be more likely to offer a guess in our free response question, but
less likely to report that they would ask someone their views directly.
Asking someone what they think about politics could bring out explicit
disagreement, so we expect those who are conflict avoidant to be more
likely to turn to subtle cues instead.
We find mixed evidence for these expectations. We find no evidence of
a relationship between conflict avoidance and guessing political views in
advance, and when we turn to the free response data, we find that the
more conflict avoidant someone was, the less likely they were to answer
the question with a description of how they would go about guessing
someone else’s views. This stands in contrast to our expectation.
However, consistent with our expectations, we find that the more conflict
avoidant a respondent was, the less likely they were to report that they
would directly ask someone about their political views, instead turning to
subtle cues, as visualized in Figure 9.4.
While we found mixed support for our expectations regarding the
relationship between conflict avoidance and Detection, we find strong,
consistent support for our expectations for the role conflict avoidance
plays in the Decision stage. Put simply, we expected that individuals who
were more conflict avoidant would be more likely to avoid political
discussions. We captured this using the four tests described in Table 9.2:
self-reported discussion avoidance, anticipating the CIPI I vignette charac-
ter to remain silent, and placing lower value on initiating both agreeable
and disagreeable discussion. Using the same modeling strategies described
previously, we found that the more conflict avoidant someone was, the
more likely he or she was to avoid political discussions. Specifically, we
found that the odds of avoiding political discussions increase about
237 percent going from the lowest to highest levels of conflict avoidance.
This effect is much stronger than the effect we found for social anxiety,
which was about a 29 percent increase in discussion avoidance. The results
on perceived likelihood of deflection from the CIPI I Vignette Experiment
are also consistent with our expectations. Specifically, we find that the
predicted probability of remaining silent (deflecting the conversation) rises
about 16 percentage points going from the lowest to highest levels of
conflict avoidance. We find suggestive evidence that the more conflict
avoidant a respondent was, the less they valued initiating even agreeable
political discussions, but we – to no surprise – find strong evidence that the
more conflict avoidant a respondent was, the less they valued initiating
disagreeable political discussions.
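The percentage changes in odds reported in this paragraph translate directly into odds ratios; as a quick check using only the figures reported here:

```latex
\mathrm{OR}_{\text{conflict avoidance}} \;\approx\; 1 + 2.37 \;\approx\; 3.4,
\qquad
\mathrm{OR}_{\text{social anxiety}} \;\approx\; 1 + 0.29 \;\approx\; 1.3
```

That is, moving from the lowest to the highest level of conflict avoidance multiplies the odds of avoiding political discussion by roughly 3.4, compared with roughly 1.3 for social anxiety.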
At the Discussion stage, we expect conflict avoidance to lead to a
lower likelihood of expressing true opinions. Individuals who are more
conflict avoidant, but are forced into a conversation, will likely engage in
strategies to temper disagreement. Following the findings from Stage 2,
we expect those who are more conflict avoidant to place lower value on
engaging in casual conversations about politics with strangers and dis-
agreeable discussions. A priori, we would not expect conflict avoidance
to be associated with the value of agreeable discussion, but our results
from Stage 2 indicate that we might expect it to be associated with lower
perceived benefit of even agreeable discussion. Thinking about the
psychological considerations individuals make when contemplating how to
respond in a disagreeable political discussion, we expect conflict
avoidance to shape which expression strategies individuals adopt.
[Figure: Predicted probabilities for respondents lowest and highest in each psychological disposition.]
Willingness to Self-Censor
Finally, the willingness to self-censor disposition focuses on whether
individuals moderate what they say in conversations with others, largely
to avoid conflict or social judgment. Similar to social anxiety, we do not
have strong expectations for how this disposition should relate to the Detection stage.
Overall, we find that gender exerts strong influence over the Decision
stage, but perhaps less so after that point, and we do not find strong
evidence that race consistently affects behavior in political discussion.
Significantly more work is needed to fully connect the structural inequal-
ity that women and minorities face in society with their experience in
political discussion. The 4D Framework offers a theoretical scaffold
through which future scholars could more rigorously manipulate conver-
sational dynamics that specifically activate gender or racial inequality or
power structures.
The political dispositions studied here are more consistently related to
behavior in the 4D Framework than the demographic dispositions of race
and gender. Interest in politics is associated with behavior at each stage of
the cycle, as is strength of partisanship, though less consistently. It is
unsurprising that interest in politics and strength of partisanship are
associated with discussion behavior, but the implications of these rela-
tionships are important for understanding the composition of people who
participate most actively and honestly in political discussions.
The results about the psychological dispositions that we examined also
have powerful implications. Much like interest in politics (though often in
the opposite direction), conflict avoidance and willingness to self-censor
are associated with behavior at every stage of the 4D Framework. Social
anxiety has somewhat more mixed effects but is associated with at least
one behavior at each stage. Reviewing the summary of our results at the
beginning of this chapter in Table 9.1, it is noteworthy that the psycho-
logical dispositions seem to be just as consistently associated with political
discussion behaviors as the “usual suspects,” if not more so.
Together, the results in this chapter add more nuance to our story. The
individual dispositions – demographic, political, and psychological – that
we examined help to explain some of the variation we observed in the
empirical results presented in the previous five chapters. Our goal was not
to identify the single most important disposition that predicts behavior
throughout the cycle, but rather to demonstrate the value in theoretically
linking innate characteristics to specific facets of discussion behavior.
Certain characteristics will be more influential in some stages than others
and identifying when and how traits matter in the process of discussion
will allow researchers to draw stronger conclusions about which kinds of
people are most likely to talk about politics, and how they communicate
when they do.
Joe watches the evening news with dismay. Everything seems so contentious
and everyday life choices have become politicized. The conversations with
his in-laws have turned even more extreme. The discussions no longer dance
solely around differences in policy preferences; the conversations now
reveal disagreement about what constitutes fact. He has been friendly, but
from a distance, with Jack and Ken, unwilling to risk another conversation
like he had earlier in the year. And the human resources department at his
office circulated a memo reminding everyone that, in order to foster a
workplace that values inclusivity and civility, conversations about politics
are discouraged.
His instinct is to hunker down and double up on his efforts to avoid
political interaction for the sake of preserving his personal and professional
relationships. But he has the sinking suspicion that doing so makes him part
of a bigger problem: Americans no longer communicate effectively about
their political differences. Which path is costlier for the country: fragmenting
social relationships or losing the vitality of civil interactions across
lines of difference?
American democracy was more strained in the aftermath of the 2020 elec-
tion than it had been in over a century, according to historians and
political scientists.1 Organizations that track the quality of democracy
worldwide downgraded America’s rating as a result of the absence of a
peaceful transfer of power, growing distrust in institutions among the
public, and a growing proportion of Americans endorsing violence as a
solution for political disagreement.2 Building on concerns about the
accelerated degradation of norms during the Trump administration and
the growth of antidemocratic attitudes among the American public,3
experts sounded the alarm in an effort to call attention to the magnitude of the
problems Americans face as a citizenry.4 Amid the chorus of voices calling
for major institutional reforms were calls for civility and dialogue,5
arguing that the divisions most visible in Washington, DC had permeated
the interactions that everyday people had with each other. From this point
of view, the solution to the country’s troubles needed to come from the
bottom up, as well as the top down.6
A plethora of civil society groups seeking to address the growing rift
between Democrats and Republicans have latched onto these ideas of
civility and dialogue. Ranging from the Koch Foundation7 to National
Public Radio’s StoryCorps,8 dozens of organizations have funded and
prioritized efforts to get Americans to communicate across lines of
difference, in order to build perspective and empathy for those whose
experiences and opinions diverge from their own. They are joined by
advocates of deliberative democracy, who suggest that we can find
consensus solutions for seemingly intractable policy problems if we
structure fora for people to inform themselves and respectfully debate
their opinions.
But at the same time, others counter that it is preferable to avoid airing
our differences. From advice columns about social niceties9 to changes to
company policies that discourage political conversation in the work-
place,10 the collective wisdom from etiquette experts and human
resources professionals is to keep our mouths shut. Echoing the call that
less is more, many people severed social relationships.11 Others vocally
stated their intention to leave social media platforms altogether because
they were tired of the infusion of politics, while some retreated into niche
platforms catering to particular ideologies. Podcasts sprang up, counseling
people about what to do with conservative relatives who still had not
accepted Trump’s loss in 202012 or liberal “snowflake” relatives.13
If Americans talked to each other more about their political differ-
ences, would those interactions repair or fracture the divisions in the
country? This question recasts14 a fundamental tension at the core of
the study of democracy: Can the goals of participatory and deliberative
democracy coexist? Our hope is that the findings in this book inform
further study about when discussion might help and when it might hurt,
and what potential pitfalls lie ahead if increased political discussion is
pursued as a remedy for the consequences of high levels of polarization.
The stakes are high, and while many of the problems facing the United
States are structural and institutional in nature, many of the proposed
solutions call on everyday Americans to lead the charge.
In addition to the contribution of the 4D Framework and the specific
findings from each of its stages, synthesizing across our results suggests a
series of bigger takeaway points about the process of political discussion.
Future research on political discussion must grapple with the implications
of these points.
Lesson #1:
The social processes of discussion begin before words are exchanged and
do not end when the conversation stops. Previous research has focused
primarily on the instrumental aspects of political discussion, such as
whether people learn, become more tolerant, or vote at higher rates.
Our work suggests that the social aspects of conversation – the consider-
ations people hold in their mind, but also the dynamics of conversations
themselves – are integrally interwoven into the decisions people make at
all stages of the discussion process.
Social considerations affect which conversations people pursue and
what they choose to say during those conversations. Our findings show
that social considerations are as important as considerations about infor-
mation and expression, and that the people who have concerns about
political discussion are especially likely to report the importance of social
considerations. People’s concerns about the social ramifications of political
discussion are not unfounded, as political conversations have consequences
for people’s future political discussion and social networks: About one in
four people report that they have altered a social relationship with someone
because of their political views. This is most common among those most
attached to the political system (partisans and the interested) but also those
most psychologically susceptible to the costs of discussion.
Supporting the findings gleaned from past qualitative work, we find
that most people report talking about politics in small groups, not in
pairs. Thus, group dynamics matter within conversations, not just within
discussion networks. Previous political science research has focused on
the composition of people’s regular discussant network, not the compos-
ition of the discussions to which people are exposed. This distinction is
subtle, but important. Our work demonstrates that these group dynamics
have an immense impact on which conversations occur, who chooses to
participate, and what gets said.
Against this backdrop, we encourage future research to take seriously
the social contours of political discussion. In qualitative or survey-based
work, this could mean asking questions about the social relationships
between discussants, attitudes discussants have toward each other, and
the distribution of opinions within conversations. In experimental work,
this could mean more realistically incorporating social considerations into
research designs to allow for greater generalizability outside the lab or
survey context.
Lesson #2:
Not all political discussions are created equal. They differ in motive,
participation rate, and participation type. We do not intend to make a
straw man argument: No scholar of political discussion advocates for, or
agrees with, the premise of uniformity in discussion. But this heterogeneity
typically is not factored into discussion research, as a result of focusing
primarily on discussion frequency and discussion network composition.
Our descriptive work shows the rich variation in the interactions grouped
together as “political discussion” and our inferential work shows that
both contextual and dispositional factors affect the way a person’s par-
ticipation in political discussion unfolds.
Previous research has implicitly assumed that when people report having
talked about politics, they have been verbal contributors to the
conversation, expressing their true opinions and engaging in the conver-
sation with positive goals to learn or persuade. But our findings imply –
although we do not test this directly – that most conversations do not match
that account, based on the preferences, context, and behavior of the people
participating. Our results suggest that people are not reaping the benefits of
political discussion, and instead may suffer its consequences. Moreover, the
absence of a political discussion is more than an absence of exposure to
other people’s political views; it can reflect an active choice of avoidance, a
choice that shapes the way a person conceptualizes both herself (“I am not a
person who talks about politics”) as well as her mapping of her political
context (“Those kinds of conversations are threatening and should be
avoided.”) Thus, both political discussion and the intentional choice not to
have a political discussion can impact a person’s behavior.
We can no longer ignore the fact that many conversations are
undesired. Engaging in a political conversation involuntarily will likely
have different consequences than participating in one willingly. People
make active choices about political discussion, but they cannot always
avoid the conversations they wish they could. This point could be inferred
from the gap between the proportion of the population who say they talk
about politics and the proportion who say they enjoy talking about
politics. We push further on this point, showing that political discussion
is physiologically activating and, for many people, is experienced similarly
to the feeling of a panic attack. Our work reveals what happens when
people talk about politics involuntarily. The coping mechanism for these
unwanted conversations appears to be twofold.
First, some people silence themselves or try to discreetly change the
subject in order to avoid the conversation. As a result, when people report
that they have had a political discussion, they lump together a wide variety
of experiences, including those where they do not contribute. This is high-
lighted by the fact that when we asked people to tell us about a discussion
they had, many described a situation in which they did not actually talk!
Second, not everyone communicates honestly. Some behave like cha-
meleons, pretending to agree with the group to save face. Others simply
self-censor, hedging their language or moderating their opinions. We estimate
noticeable rates of conversational deflection, both in our vignette
studies and in our lab studies, where we recorded people talking about
politics. All of these behaviors are more common when individuals disagree
with the group and are less knowledgeable about the topic.
These two points suggest that an update to our standard measure of
political discussion behavior is due. We see two problems. The first is that
common measures of discussion frequency cannot distinguish presence
during a conversation from participation within it. The frequency of
political discussion behavior is typically measured with a single question
asking about the number of days a person has talked about politics in the
last week. We suggest that this measure is inadequately nuanced to
capture the heterogeneity of discussions that people have. The variation
between people that is captured by this measure seems less instructive
than a measure that better captures people’s engagement within a conver-
sation. Which facet of discussion researchers address will depend on their
interest in the behavior, but we encourage scholars to think about con-
versational role and individual agency, capturing variation in discussion
engagement rather than frequency alone.
Lesson #3:
There is meaningful heterogeneity across individuals that affects the
process and outcome of discussion.
Scholars of political discussion long have considered the importance of
variation in individual dispositions, but these studies have focused on the
overall patterns of discussion, such as frequency or network composition.
Doing so masks the way that these traits affect all the small decisions
involved in the process of discussion and may have the effect of circum-
scribing the importance of individual differences.
Previous work has revealed that there are biases in who participates in
conversation, as well as what kind of discussion they are likely to encoun-
ter based on the composition of their discussion network. Our work adds
two insights. First, the characteristics that describe someone who prefers
to avoid conversations altogether are also associated with suboptimal
discussion behaviors in undesired conversations. Second, these traits
matter before and after a conversation as well. It is not just that some
people talk less frequently and are more likely to talk with like-minded
others. In other words, in the Choose Your Own Adventure novel of
political discussion, people can follow entirely different storylines.
Our findings suggest that roughly one-third of American adults
attempt to avoid conversation if at all possible. The socially anxious,
conflict avoidant, and those willing to censor themselves are dispropor-
tionately represented in this group, as are the politically uninterested and
less partisan. Those who prefer to avoid discussion are not always suc-
cessful, and they get roped into conversations from time to time. These
individuals are less likely to report that they try to guess the viewpoints of
others before they talk about politics, so presumably they are more likely
to arrive “blind” to what awaits them in the conversations they do
have. When they do, they are less likely to express their true opinion and
are more likely to be concerned about the negative consequences of the
conversation than they are excited by its opportunities. When their hearts
race during these conversations, we speculate that their physiological
reaction is driven by negative emotions.
Another third of the public reports that they like talking about politics.
In many ways, they are the mirror opposite of the first group: politically
interested and attached, and less likely to be concerned about conflict or
plagued with social anxiety. Many, but likely not all, in this group would
be what Krupnikov and Ryan (2022) consider the “deeply engaged,” a
small group whose passion for politics is so forceful that it can repel
others. This shows up in a number of ways. People who enjoy talking
politics are likely to feel more comfortable directly asking others what
their political views are, and they are also better equipped to
detect other people’s views in advance of a conversation. Once in a
conversation, they are more likely to express their true opinion.
Depending on their tolerance for disagreement, they may or may not talk
about politics with people who disagree. Their physiological activation is
much more likely to map to positive emotion.
The final third of the public report that they will talk about politics, but
only if they know the viewpoints of their discussants. These indi-
viduals are more likely to report that they guess the viewpoints of other
people and are more confident in their ability to do so. But they are the
most selective and sensitive to their preferences. This group is more
heterogeneous than the previous two groups – some are selective about
the people with whom they discuss politics, but are comfortable express-
ing their views once opting in. Others are selective, but still shy away once
the discussion occurs. A better understanding of the factors that shape the
preferences of this group is essential to identifying how political discus-
sion can maximize desirable aims while minimizing detrimental ones.
Just as no two political discussions are the same, neither are any two
people involved in such a discussion, and the rich variation in decision-
making behavior at all stages of discussion affects not only the frequency
and composition of discussion but likely also the effects of the conversation
itself. The 4D Framework suggests that discussion experiences are as
varied as the people who have them.
Informational Benefits
Talking about politics is seen as key to the flow of information through
society. The earliest theories in political behavior emphasized the notion
of a “two-step flow” of information through which the people who were
most interested in politics paid the most attention, and then passed what
they gleaned along to others in their social networks. Thus, even if most
people did not take the time to stay informed, collectively people could act
as though they had enough information to make “correct” choices in the
voting booth.
The core principles of the two-step flow are that people who know
more about politics share their knowledge with people who know less,
and that discussion across lines of difference can facilitate greater learning
by increasing the variety of information to which individuals are exposed.
Eveland and Hively (2009), for example, find that discussing politics with
outpartisans can lead to increased political knowledge. But this process is
not perfect. Ahn, Huckfeldt, and Ryan (2014) explain that political
discussion is a political process and individuals might face incentives to
mislead others when they share information. Even when individuals are
motivated by accuracy, they seek out discussants with a more diverse set
of views, but they ultimately do not make more accurate judgments
(Pietryka 2016).
There are already cracks in the idea that this informational flow is
helpful in an era with high levels of political polarization, as polarization
might exacerbate these problems and create others. In a hyperpartisan
environment, the people paying the most attention are likely to be the
most opinionated (Krupnikov and Ryan 2022), perhaps even engaging in
politics as though it is a hobby (Hersh 2020). The media environment that
is both a cause and result of hyperpartisanship makes the situation even
worse. Druckman, Levendusky, and McLain (2018) show that partisan
media bias becomes amplified through political conversations, leading to
more extreme opinions among those who were exposed to a discussion
with people who read partisan news but were not exposed to partisan
news themselves. Aarøe and Petersen (2020) find that some types of
media frames are more likely to propagate through social transmission
than others. For example, Bøggild, Aarøe, and Petersen (2020) find that
individuals are more likely to transmit information about self-serving
politicians, which decreases political trust among those who receive that
information. Carlson (2018, 2019) finds that the quality of information
transmitted interpersonally degrades over time and is subject to signifi-
cant bias. Individuals inject their own, biased opinions into the infor-
mation they pass on to their peers, leading others to become less informed
than they would have been had they read the news directly. A caveat to
this is that social information can facilitate learning on par with reading
the news directly if that information comes from someone who is more
knowledgeable and a copartisan.
The challenge, however, as we show in this book, is that individuals are
more likely to avoid conversations with those who are more knowledgeable,
and those who are most knowledgeable are sensitive to the social ramifica-
tions of the conversation. The salubrious effect of the two-step process
breaks down under these conditions. The most beneficial political discus-
sions (in theory) are those that Americans seem most interested in avoiding.
Further undermining the flow of information, previous research has shown
that individuals tend to overestimate the expertise of their political discus-
sion networks (Ryan 2011). This means that if our finding generalizes –
that individuals are less comfortable discussing politics with someone who
is more knowledgeable – people might be avoiding more political
discussions than “necessary” by writing off discussants that they think are
more knowledgeable than they actually are.
If the vast majority of conversations that occur are between people
who share the same views, and if conversations where people nominally
disagree do not manifest that disagreement (as we show in our lab studies
and vignette experiments), then we must turn toward the literature that
considers the effects of conversation among those who share opinions or
identities. The findings do not bode well for democratic goals:
Homogeneous groups are much more likely to polarize and arrive at more
extreme opinions (Wojcieszak 2011).
The concerns about the flow of information are amplified in an era
where cable news shows are fanning the flames of misinformation.
Scholars have focused on the rise of Fox News, but in recent years even
more extreme outlets such as Newsmax and OANN have increased their
market share. Layer on top of this the known problems of misinformation
in the social media environment, and the resulting information environ-
ment is quite polluted. How is this being addressed? Many social media
platforms banish producers of disinformation or remove misinformation
related to particular topics, such as COVID-19. Factually incorrect infor-
mation that is not removed from a platform is sometimes labeled as such,
and news and fact-checking organizations seek to offer near real-time
corrections to widely circulating false information. All of these efforts
build on research assessing which individual characteristics make people
more prone to spreading or believing low-quality information or conspir-
acy theories, such as low levels of digital literacy or cognitive reflection, or
high levels of partisanship.
Despite attempts to stop the dissemination of false information, to
label it when it appears, and to encourage people to stop and reflect
before they spread low-quality information, mis- and disinformation is
bound to permeate face-to-face political conversation in addition to its
spread on social media. The question is thus whether interpersonal com-
munication, both online and offline, will make the problem better or
worse. To the extent that current tactics to stop misinformation do
consider the role of social networks, scholarship has focused on whether
people are more likely to believe information when it comes from a friend
or a copartisan stranger. While some hold hope that the increased cred-
ibility of family and friends as sources of news may position ordinary
people to be powerful disruptors of the spread of misinformation, our
work suggests a much more complicated process.
Participation
The relationship between discussion and participation is a central focus of
previous research. There is broad agreement and evidence that the kind of
people who are inclined to talk frequently about politics are also the kind
of people who are inclined to participate in other ways such as voting.
Our findings support this well-established relationship. Analyzing data in
our CIPI I Survey, we find that those who avoid political discussions are
less likely to vote (53 percent compared to 79 percent of those who enjoy
political discussions); engage in fewer political activities (such as
Mutual Toleration
Even if we have concerns about the potential of conversation to inform
the public or bias participation, there may be other benefits that outweigh
those concerns, namely the promise of increasing understanding and
mutual toleration, while minimizing stereotypes and animus. This vein
of thought permeates literatures focused on both democratic deliberation
and everyday talk and is the premise justifying the work of civil society
organizations bringing people together to dialogue about their political
and worldview differences. While each of these three types of interpersonal
interaction has unique theoretical foundations, they share a
common premise: contact theory. This theory posits that communication has the
possibility of increasing perspective-taking and building empathy. If
people who disagree come to understand what goals, values, and prefer-
ences they have in common, they will be better equipped to arrive at
common solutions or to lessen their reliance on stereotypes they have of
the “other.” While initially developed in response to racial tensions, the
contact hypothesis has been tested in many contexts including artificially
created group identities, suggesting the power of discussion to improve
partisan relations.
Research on the contact hypothesis writ large suggests that increased
contact with outgroup members can decrease prejudice, but the pattern of
results with respect to the efficacy of the contact hypothesis for partisan
interactions is mixed (see Busby 2021 for an elaboration). Based on
observational data about discussion networks, Mutz (2006) argues that
disagreeable political discussions can facilitate tolerance for those who
disagree. A flurry of experimental work in recent years finds some positive
ties with others who disagreed, even if they had other identities in
common. Explicitly priming people to think about their positive inter-
actions with outpartisans might work to reduce outparty hostility, but if
these respected outpartisans represent the minority of outpartisans with
whom people interact, we are skeptical that priming alone will overcome
people’s more readily available negative associations.
Weaving this all together, we are pessimistic about the potential of
organic political talk to heal the fractures in our society, without the
guidance and structure provided by a formalized dialogue. We anticipate
that researchers conducting experiments premised on contact theory may
find positive outcomes in controlled settings, and while most civil society
organizations do not often make public any sort of rigorous program
evaluation, their efforts are likely doing more good than harm for the self-
selected group of people who volunteer to participate (although these are
likely not the people most in need of the intervention!). But these experi-
ences simply do not translate to the kinds of conversations that our
research suggests people actually have in their daily lives.
We have several concerns. We do not think people approach political
conversation with the same sense of purpose and goal-orientation that is
created in civil dialogues or experimental instructions. People do expect
there to be some benefits from discussion, but we do not see much
evidence that people view political discussion as a channel through which
to reach compromise or agreement. We are optimistic that people are
interested in learning about each other, but day-to-day talk does not
incentivize the same kind of honesty and disclosure that is provided by
the anonymity of an experiment or the commitment established at the
beginning of a dialogue. Moreover, our results suggest that people experi-
ence physiological activation during discussion in ways that may under-
mine effective communication, such as the relationship between
emotional “flooding” and avoidance behavior. Calling on people to
“gut check” themselves or be more reflective is likely much easier said
than done.
Our final concern is rooted in the fact that many of these studies and
dialogues happen in a social vacuum. In deliberation, people assume roles
as citizens or delegates tasked with making decisions. In empathy-
building exercises, strangers are brought together because their lives
would never intersect otherwise. The whole goal is to encounter difference
and engage directly with it. But our research suggests that left to their own
devices, people seem more inclined to avoid conversations with people
they do not know well. Thus, even when the opportunity arises
organically to engage across lines of difference, those are the very conversations
that people are most likely to opt out of. Engaging across lines
of difference with close social connections raises concerns that do not
exist when engaging with strangers in structured spaces. People cannot
just walk away at the end of the day if interactions go poorly. If friendship
with an outgroup member is enough to convey the positive effects of
contact theory, there is an increased likelihood of success: Most
Americans are still very willing to be friends with outpartisans. But as
we show, preserving that friendship might mean sidestepping meaningful
conversations about politics. If it is meaningful conversations that allow
for the perspective-taking and signaling of mutual respect that are able to
reduce outparty animosity, then we might be out of luck.
talking about politics, who talks to whom, or whether they are enjoying
the experience. Our concern is about the subtle but systematic biases in
who opts in and who opts out of discussion, and what gets communicated
in the conversations that do happen. This worry echoes and expands on
the conclusion Mutz (2006) draws from her study of political discussion
in the early 2000s, that “the meek and mild abstain from participation so
as not to offend anyone, while ideologically extreme political bullies rule
the Earth” (p. 124). We cannot say conclusively that the biases we explore
are worse than they were in the past, but we are confident that they have
not improved. And while we lack evidence to show that social concerns
are a more prevalent driver of people’s behavior now than they were in
the past, we expect that they are, given the rise of social sorting and social
polarization.
Our findings raise questions about the benefits of discussion and
amplify concerns about it, calling attention to the variation in the discussion
experiences people actually have. Americans make socially optimal deci-
sions that have detrimental outcomes for their political engagement. In
other words, in an effort to protect their social relationships and their
sense of self, people shy away from expressing their true opinions. We
anticipate that the effect of these biases is to create distortions in how
people view their own attitudes in relation to others’ attitudes, as well
as to tilt the content and tone of public discourse away from moderation.
We hope that those who call on the American people to talk to each
other more about politics – opinion editorialists, civil society organiza-
tions, and politicians – will be more nuanced in their request. Any
recommendation that talk is the answer to our problems must be
research-driven and reflect an understanding of how findings from experi-
mental studies would realistically translate to organic discussions. For
example, political talk is shaped by people’s prior assumptions about their
discussants; suggesting that people can communicate across lines of dif-
ference without taking into account the stereotypes they bring with them
is naive. More research into the perceptions, as well as the meta-
perceptions, that people have is necessary. We should also be mindful
that calling on people to talk may lead them to censor or conform their
opinions; the people who most need the encouragement to engage are
most likely to engage in these democratically suboptimal behaviors if left
to their own devices. We also encourage more research into the idea that
people see conversation as a channel through which to get to know others as
people, as our findings are mixed on this point. Better understanding
human curiosity on this very social facet of discussion seems like a
fruitful avenue for future research.
Chapter 1
1 This variation could be due to legitimate ebbs and flows in political discussion
frequency. For example, there are higher rates of political discussion in
2000 and 2016, after incredibly close presidential elections. However, there is
also variation in survey question wording and structure in these years that could
alter rates of measured political discussion frequency.
2 See Neblo (2007) for a helpful discussion of the theoretical diversity of deliber-
ation from a normative perspective, as well as the measurement challenges that
stem from this diversity.
3 Conover, Searing, and Crewe (2002) make important comparisons between
respondents from the United States and Britain. While both had discussions
most frequently in their homes (31.5% of American respondents and 30.1% of
British respondents reported that they have discussions in the home often),
Americans had discussions at work much more frequently than did the
British: 28.6% of Americans reported having these discussions at work often,
whereas only 16.7% of British respondents did so.
4 Several news articles suggest that politics frequently comes up at Thanksgiving
meals and that individuals try to avoid these dreadful conversations: www
.theatlantic.com/politics/archive/2017/11/go-ahead-talk-about-politics-at-thanks
giving/546536/; www.cnn.com/2016/11/22/health/thanksgiving-holiday-conver
sation-survival-guide-trnd/index.html; www.cnn.com/2017/11/22/politics/
thanksgiving-dinner-politics/index.html; www.nbcnews.com/better/health/
how-survive-thanksgiving-when-politics-loom-large-ncna821206; www.npr
.org/2017/11/21/565482714/americans-say-to-pass-the-turkey-not-the-politics-
at-thanksgiving-this-year; www.npr.org/2017/11/21/565613945/americans-
prefer-a-politics-free-discussion-at-thanksgiving-dinner.
5 www.pewresearch.org/fact-tank/2016/12/22/how-americans-are-talking-about-
trumps-election-in-6-charts.
6 While we are cognizant and appreciative of the fact that much political inter-
action now occurs in online contexts – and we have each written separately on
Chapter 2
1 www.people-press.org/2019/06/19/the-publics-level-of-comfort-talking-politics-
and-trump
2 See Bullock et al. (2015) and Prior et al. (2015) for important critiques of these
findings. See Bullock and Lenz (2019) for a review. These studies argue that the
partisan knowledge gaps that we observe are due to expressive responding or
“partisan cheerleading.” The authors find that when participants are given
small monetary incentives for correct answers, the partisan gaps shrink sub-
stantially, suggesting that partisans are aware of the correct answers but choose
to report answers that support their party on surveys. It is also possible that
individuals respond to knowledge questions as opinion questions.
3 Mutz’s operationalization of social accountability is a measure of the individual
difference trait of conflict avoidance. Thus, a precise interpretation of what she
finds is that conflict avoidant individuals are less likely to anticipate voting
when they have cross-cutting networks.
4 Readers interested in a modern spin on this concept should check out Black
Mirror’s interactive Netflix film, Bandersnatch.
5 Specifically, their seminal study revealed that only 66 percent of Reagan voters
correctly identified Mondale voters in their networks and only 55 percent of
Mondale voters correctly identified Reagan voters in their networks.
6 Surely, political science majors, graduate students, and faculty members can
relate to the scenario in which a seatmate on a plane or a rideshare driver tries
to make innocent, polite conversation with you by asking what you do or what
you’re studying in school. We’re all then faced with a decision: (a) to reveal that
we study political science, risking a potentially long, contentious discussion
about current events and candidates; or (b) to dodge the conversation with a
white lie about what we study (applied statistics is usually a safe – and not
Chapter 3
1 Respondents were presented with the full list of considerations in a randomized
order. They were not shown the accuracy, affiliation, and affirmation labels.
The specific question wording was “which of the following seem like plausible
considerations for John/Sarah? Check all that apply.”
2 An additional 21 percent of responses were deemed by both coders to be a
meaningful answer to the question of “why” a subject did or did not participate
in a conversation but could not be coded into one of the three key categories.
Finally, 19 percent of responses provided information about other facets of the
conversation but did not provide an answer about why a subject did or did not
talk. The remainder of the answers were uninformative.
3 The sample was originally supposed to be 1,500 respondents, but errors on
SSI’s end required them to increase the sample to ensure it was representative.
4 Specifically, respondents were shown the following prompt: “At this point in
the survey, we would like you to stop for a moment and think about a recent
time when you had a political discussion with someone or with a group of
people. Think about who was there, where you were, what you discussed, and
how you felt. Now please describe the experience you just thought about, giving
as much detail as possible.”
5 For more information about the poll, see www.scribd.com/document/
325387759/TargetSmart-William-Mary-Poll-Ohio-Statewide.
6 We strongly urge you to read the Acknowledgements section of this book to see
for yourself the tremendous work that they completed. Please.
7 We note here that we make reference to some of the findings that first appeared
in Carlson, McClean, and Settle (2019), but that the article only uses the fall
2014 data collection, given the key role of network analysis in that paper. It
only made sense to collect the network data in 2014 in the full student subject
pool, not in the ad hoc pool we created in September 2015. Therefore, there are
some slight discrepancies between the reported results, though they do not
change the overall pattern in the data.
8 It is difficult to compare the magnitude of psychophysiological effects across
samples (Settle et al. 2020) and therefore we did not want to rely on compari-
sons with the larger body of work using psychophysiology to measure response
to contention or negativity after media exposure (Mutz and Reeves 2005; Mutz
2015; Renshon, Lee, and Tingley 2015; Soroka, Fournier, and Nir 2019).
Chapter 4
1 www.washingtonpost.com/lifestyle/style/the-maga-hat-is-not-a-statement-of-
policy-its-an-inflammatory-declaration-of-identity/2019/01/23/9fe84bc0-1f39-
11e9-8e21-59a09ff1e2a1_story.html; www.npr.org/2019/01/27/689191278/
the-symbol-of-the-maga-hat; www.pussyhatproject.com/our-story; https://
en.wikipedia.org/wiki/Yellow_vests_movement; www.aclu.org/blog/free-
speech/what-black-armband-means-forty-years-later; www.ippfwhr.org/
resource/the-green-hankerchief-the-new-symbol-of-the-international-womens-
resistance.
2 If you research political discussion and are unfamiliar with MacKuen (1990),
do yourself a favor: Put down our book and track down MacKuen’s piece for a
thorough read before you continue with Chapters 4 and 5.
3 She writes in a section of her book subtitled A New Human Ability Discovered:
Perceiving the Climate of Opinion that a series of studies conducted in
Germany in the 1970s, “consistently confirmed the people’s apparent ability
to perceive something about majority and minority opinions, to sense the
frequency distribution of pro- and con viewpoints, and this is all quite inde-
pendently of any published poll figures” (1993, 9). For a useful review of the
Spiral of Silence literature, see Scheufele and Moy (2000).
4 Specifically, participants were asked “Imagine that you were trying to guess
someone’s political views, but you couldn’t ask them directly. How would you
go about guessing their political views?” Participants could then type
their responses.
5 See Eveland et al. (2019) and Eveland and Hutchens (2013) for an important
discussion of challenges in measuring accuracy in reporting discussants’
political views.
16 www.journalism.org/2014/10/21/political-polarization-media-habits.
17 Specifically, participants were shown the following prompt: “Sometimes our
first impressions of people are quite accurate. We are interested in your first
impressions of the person listed below.” In cases in which participants evalu-
ated two people, the prompt read: “Sometimes our first impressions of people
are quite accurate. We are interested in your first impressions of the people
listed below” (emphasis added). The party identification question was:
“NAME is an American registered voter. With which party do you think he
is registered?” with response options Democrat and Republican. The ideology
question was: “We hear a lot of talk these days about liberals and conserva-
tives. Here is a seven-point scale on which the political views that people might
hold are arranged from extremely liberal to extremely conservative. Where
would you place NAME on this scale?”
18 www.claritycampaigns.com/names.
19 There were 61,830 total registered voters named Dwayne and 95,864 total
registered voters named Duane.
20 58 percent of registered voters named Gideon were registered Democrats and
57 percent of registered voters named Jedediah were registered Republicans.
21 There were 2,301 Jedediahs, 3,320 Gideons, and 4,958 Brendens registered
to vote.
22 We shy away from the phrase “accurate inference” because there are liberals
with phonemically conservative names and vice versa. However, the majority
of subjects identified the name as being liberal or conservative in a manner
concordant with the probability that name is more associated with one party
or the other.
23 Building on the findings from the apolitical cues analysis, we did not want to
describe Kent (phonetically conservative) spending time at the Met, when Kent
would be unlikely to be at the Met in the first place, at least relative to Liam
(phonetically liberal). Importantly, we did not want participants to infer
ideology based on the context (e.g. assume that Liam was liberal because he
was spending time at the Met) instead of the name as a cue. We therefore tried
to write scenarios where the character was in a situation that would not evoke
sociopolitical associations.
24 This study was fielded on Mechanical Turk in July 2015, N = 661.
25 We collected the list of traits from Crawford, Modri, and Motyl (2013).
Participants rated their agreement on a scale from 1 (strongly disagree) to 6
(strongly agree). We present the full results in the Appendix. In short, when
forced to “take a side” – in other words, when we bisect the six-point scale –
Americans seem willing to give their outgroup the benefit of the doubt on a
variety of traits. Majorities of both groups rated their ingroup and outgroup as
passionate, sociable, friendly, and extraverted. They also rated them as thorough,
organized, competent, intelligent, skillful, capable, and conscientious. Finally,
they saw them as polite, but emotional.
26 For example, we consider one of the most polarizing issues in contemporary
American politics: abortion. The stereotype about Republicans on abortion
might be “[t]hey use ‘family values’ as a justification to try to impose their
morals on everyone else’s reproduction choices.” Here, we extrapolate greatly
Chapter 5
1 For example, Cramer writes, “[w]hen disagreement appeared to surface, it was
not met with challenge or counterpoint but with silence or an abrupt change in
topic” (2004, p. 111). In the women’s guild she studies, she observes that,
“[w]hen the topic of Bill Clinton is touched upon, the women avoid stating
their views, rather than jump at the chance to criticize or support him. In a
context in which they are not equipped with a perspective rooted in group
identity, they enter into conversations about public figures tentatively, if at all”
(p. 116). She chose to record these as instances of political talk: “Even their
attempts to change the topic—for example, when someone raised their cup of
coffee and said, ‘Here’s to Bill Clinton!’ and another responded, ‘Oh, please,
I haven’t had my breakfast yet . . . ’—were coded as political” (p. 38).
2 As an example, one of our respondents wrote: “[I] was talking to my sister
who is very liberal the conversation changed to presidential politics and i had
to leave because my sister argues without facts she wont [sic] even entertain
the facts so it was just easier to leave than fight a battle that i couldnt
[sic] win.”
3 An example of this comes from a respondent who said she avoided a conver-
sation “because it offended my friends [sic] husband and things were getting
tense so i changed the conversation to make a friendlier atmosphere.”
4 For instance, a survey respondent in our Thanksgiving Study wrote: “I have
these [political] discussions at work, and I am the black sheep in an office of
liberals. I usually bite my tongue to avoid longer arguments.”
5 Of course, there are all sorts of limitations with this approach. We do not
argue that the discussions people recount are a random sample of all of the
discussions they have. Certain kinds of discussions are likely to be more
memorable than others. However, our larger argument is about the way in
which the experience of discussion feeds into future choices about discussion
and other political behaviors. Therefore, we think that the discussions people
choose to share are likely those that are most important in shaping their
attitudes about the process of discussion itself.
6 Specifically, respondents were shown one of two prompts: (1) “Please think
about a single time when you could have discussed politics with someone or a
group of people, but chose not to participate in the discussion. Think about
who was there and your relationship with those people, what topics you could
have discussed, whether the others would have agreed with you, the emotions
you experienced, and why you chose not to participate in the discussion”; (2)
“Please think about a single time when you discussed politics with someone or
a group of people. Think about who was there and your relationship with
those people, what topics you discussed, whether the others agreed with you,
the emotions you experienced, and why you chose to participate in the
discussion.”
7 Our pilot testing did not indicate significant differences between vignettes
about John or Sarah. Previous research on vignette experiments suggests that
the character of interest in the vignettes should be as similar to the participant
as possible, so we chose to match the names on gender.
8 In our vignettes, deflecting is operationalized through response wording in the
vignette in a few ways, including the character saying nothing at all or trying
to change the subject. It is important to note that silencing here does not
include walking away from the conversation altogether, which might be a
stronger behavior more reflective of truly avoiding a conversation.
9 See Settle and Carlson (2019).
10 This study was fielded early in 2016 when primaries were still underway. At
the time, Marco Rubio and Ted Cruz still looked like promising contenders for
the Republican nomination, while most news coverage assumed Trump would
never get it.
11 Because this finding ran counter to all of our previous work, we wanted to
reproduce the result. We conducted a follow-up study on our student sample,
specifically focusing on knowledge. In the fall of 2017, we asked them to name
their price to discuss the politics of health care reform with groups of know-
ledgeable or unknowledgeable Republicans and Democrats. Our sample was
dominated by Democrats, so we were underpowered to analyze Republicans
separately. We found that Democrats demanded significantly more to discuss
health care reform with Republicans overall, consistent with the premise of
avoiding disagreement. We also found suggestive evidence that they demanded
more to discuss health care reform with unknowledgeable Republicans than
with knowledgeable Republicans.
Chapter 6
1 In the Appendix, we elaborate on the variety of criteria that could be used for
data exclusion given the finicky nature of psychophysiological data. For this
analysis, we chose to use the full set of participants from whom we collected
psychophysiological data. We measured the psychophysiological response to
the initial stimulus in which subjects found out they were to have a political
discussion. We measured the psychophysiological response to the videos using
only the measurements of subjects who saw that category of video first in a
counterbalanced design. The full pattern of results using other criteria is
reported in the Appendix, but alternate specifications do not change our
interpretation and, in fact, tend to increase the differences between stimulus types.
2 Although the confidence intervals of the political video and the discussion
stimulus overlap in Figure 6.1, there is a significant difference of means
between the conditions if we use the full set of responses for the contentious
political videos (i.e., both subjects who watched that set of videos first and
those who watched them after the apolitical videos).
3 More details are provided in the Appendix about our data cleaning processes
and exclusion criteria. See also Carlson, McClean, and Settle (2019) and Settle
et al. (2020).
4 We are almost certainly underpowered to detect significant differences
between the groups, as there are only between forty and fifty subjects in each
group. While the pattern of results for the heart rate data suggests that we are
underpowered, we were surprised to find a complete lack of support for our
expectations in the EDA data.
5 Previous research suggests that self-reported emotional responses and psychophysiological responses need not be correlated (Settle et al. 2020).
Therefore, we do not integrate the two measures together.
6 Respondents were coded as experiencing the emotion if they marked a 4 or 5
on the five-point scale. See Appendix for more details.
7 Originally, we had intended to answer a third question as well: Is political
disagreement distinct from disagreement on other contentious issues?
However, as we explain in the Appendix, the operationalization of this test
was unsuccessful, rendering it difficult to draw conclusions from that aspect of
the design.
Chapter 7
1 We thank Lisa Argyle for this important and thought-provoking suggestion
and encourage future research on political discussion to unpack what the
socially desirable response to disagreement really is.
2 There are important caveats to how we should interpret these percentages. First,
the response options for the considerations were not evenly distributed across
the AAA categories. For example, there were seven considerations (three con-
cerns and four opportunities) that fell into the affirmation category, but only five
considerations (three concerns and two opportunities) in the accuracy category.
This means that if respondents were clicking at random, they would be more
likely to select an affirmation consideration than an accuracy or affiliation
consideration, for example. Second, much like our analysis of free response
data in Chapter 4, we do not have a way of distinguishing between respondents
who left this question blank because they did not think any of the considerations
applied and those who left it blank because they skipped the question altogether.
Because of this, we follow our method in Chapter 4 and exclude the eight
percent of respondents who left the question blank.
3 What would John/Sarah do in response to the person’s question?
Say that s/he strongly disagrees with them, even though s/he really just
disagrees with them (coded as entrench)
Say that s/he disagrees with them, which s/he does (coded as true opinion)
Say that s/he slightly disagrees with them, even though s/he really disagrees
with them more than slightly (coded as censor)
Say that s/he agrees with them, even though s/he really disagrees with them
(coded as conform)
Say nothing on the subject, even though s/he disagrees with them (coded as
silence/deflect)
4 Those who thought the character would entrench were most likely to think
that the opportunity to express his/her real political opinions was the most
important consideration. Those who thought the character would express his
or her true opinion were most likely to report the opportunity to discuss
important issues with these people as the most important consideration.
5 The lower bound is from the CIPI I Vignette Experiment; the upper bound is
from the Power x Partisan Composition Pilot Study (see Table 3.8).
6 A total of seventy students participated in this study, but seven were removed
from the analysis because of treatment administration errors such as missing
confederates, confederates using the wrong script, and participants knowing
the confederates personally. The remaining sixty-three participants were
included in most analyses.
7 Participants shared their opinion in the randomly assigned order for one
issue at a time. Each of the fourteen issues was presented one at a time on a
screen that changed to the next issue automatically after one minute.
Participants and confederates were instructed to state their opinion on the
question on the screen and discuss it if they wanted. Because the questions
presented on the screen were the same as the pretest, participants were
sometimes asked to report a number to indicate where their opinion fell on
a scale. Aside from the order in which participants were randomly assigned
to give their responses, the procedures were the same across the treatment
groups. In other words, both treatments involved exposing participants to
differing viewpoints from their own but varied the order in which that
information was disclosed. The treatment, therefore, was deliberately very
subtle and designed to test whether people would conform to a group’s
opinion when given the opportunity to do so. Overall, the lab session
included ten “critical” questions on which the confederates disagreed with
the participant according to the script, and four “faux” questions designed
to make the study more realistic, with confederates disagreeing with each
other, agreeing with the participant, or providing a neutral response, as
shown in Table 2 and Table 7 in the article appendix.
8 The primary purpose of the posttest was to examine the distinction between
persuasion or attitude change and conformity. If participants gave the same
responses on the pretest and posttest, but gave a different response in the lab
session, then we have strong evidence that individuals were indeed conforming
in the lab. However, if individuals gave the same response in the lab session
and on the posttest, but this response differed from the pretest, then this could
be evidence of attitude change or persuasion. While this distinction between
temporary conformity in the discussion and potential persuasion as a result of
the discussion is important conceptually, this method cannot account for
general response instability.
9 Based on extant findings on conformity in social psychology, we expected
participants to conform to a group’s political opinion when they had heard the
confederates state opinions with which they disagreed. In the control condi-
tion, participants would not know the political opinions of the confederates
before stating their opinions, so they would have limited information with
which to conform. It is possible that participants could intuit that the confed-
erates generally disagreed with the participant over the course of the study,
which means that we might observe some preemptive “conformity” in the
control condition. However, we expect to see a greater frequency of conform-
ity in the treatment condition, when participants are certain of the group’s
opinions prior to stating their own opinion, compared to the control condition
where they can only surmise the group’s opinions over time in the study.
10 For example, if on the pretest a participant indicated that he or she strongly
agreed with something, but in the lab only said that he or she agreed, that
would not be coded as potential conformity. If that participant said that he or
she disagreed or strongly disagreed in the lab, that would be considered
potential conformity. We call this potential conformity because the observed
attitude shift in the lab has the potential to be conformity, but it could also be
genuine attitude change.
11 Note that both of our measures of conformity require movement across a
midpoint in the scale, a much stricter requirement than previous studies
exploring the public expression of opinions (Levitan and Verhulst 2015).
We do this in order to differentiate the concept of conformity from other
factors that could induce movement on a response scale for an issue position
between a pretest and a lab session. On questions utilizing a response scale
with more than five points, some movement is likely to be expected simply
because of the lack of distinction in a subject’s mind on the scale points, for
example, a “5” and a “6” on a seven-point scale. We cannot say with certainty
that this movement would represent conformity and is not simply a form of
response instability. By limiting the measurement of our construct to opinions
that actually “flip sides,” we can be more confident that subjects are publicly
expressing an opinion that is meaningfully different from the opinion they
expressed privately on the pretest.
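The midpoint-crossing rule described in notes 10 and 11 can be sketched as follows. This is a simplified illustration of the coding logic, not the authors' actual coding script; the function name and the 1-to-scale-maximum response coding are our assumptions.

```python
def crosses_midpoint(pretest: int, lab: int, scale_max: int) -> bool:
    """Flag potential conformity: the lab response must land on the
    opposite side of the scale's midpoint from the pretest response.
    Assumes responses are coded 1..scale_max; a response sitting exactly
    at the midpoint of an odd-length scale is on neither side, so it
    never counts as a flip."""
    midpoint = (1 + scale_max) / 2
    return (pretest - midpoint) * (lab - midpoint) < 0

# On a five-point scale: strongly agree (5) -> agree (4) stays on the
# same side, but strongly agree (5) -> disagree (2) flips sides.
print(crosses_midpoint(5, 4, 5))
print(crosses_midpoint(5, 2, 5))
```

Note that on a seven-point scale the rule also excludes the "5" to "6" movement discussed above: both responses fall on the same side of the midpoint, so such a shift is treated as response instability rather than conformity.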
12 It is possible that we are statistically underpowered to detect a significant
difference between the treatment groups by pure conformity standards.
Because pure conformity is measured based on posttest results, only those
participants who completed the posttest can be included in the analysis, which
reduces our sample to forty-six participants for the pure conformity tests. As
Figure 3 in the article illustrates, participants conformed in both conditions,
but the frequency of conformity was significantly higher in the treatment
group for potential conformity. Although it is possible that participants in
the control condition were able to guess the group’s opinion over the course of
the study, we find that participants were no more likely to conform at the
beginning of the study than at the end, making this less likely.
13 We thank Erin Rossiter for her expertise and research assistance in designing
and executing this process.
14 We used Young and Soroka’s (2012) Lexicoder Sentiment Dictionary to count
positive words (1,709 in the dictionary) and negative words (2,858 in
the dictionary) and followed their approach to construct the measure as the
proportion of positive words minus the proportion of negative words in the
text in question. A score of 1 should be interpreted as a one percentage-point
gap between the number of positive words and the number of negative words
used in the text on average.
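As a rough sketch of the measure described above, the score can be computed as the proportion of positive words minus the proportion of negative words, scaled to percentage points. The tiny word lists here are hypothetical stand-ins for the Lexicoder Sentiment Dictionary's 1,709 positive and 2,858 negative entries.

```python
# Hypothetical word lists standing in for the Lexicoder Sentiment Dictionary.
POSITIVE = {"good", "agree", "great", "fair"}
NEGATIVE = {"bad", "wrong", "unfair", "awful"}

def sentiment_score(text: str) -> float:
    """Percentage of positive words minus percentage of negative words."""
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words) / len(words)
    neg = sum(w in NEGATIVE for w in words) / len(words)
    # A score of 1 corresponds to a one percentage-point gap between
    # positive and negative word use in the text.
    return (pos - neg) * 100

print(sentiment_score("that seems fair but the process was unfair and bad"))
```

In the example sentence, one of ten words is positive and two are negative, yielding a score of -10: a ten percentage-point gap in favor of negative language.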
15 The phrases we hand coded for in the partisan qualification measure would be
considered verbal hedging. The key distinctions are (1) the partisan qualifica-
tion measure is strictly examined in the identity revelation segment of the
study and the phrases had to be used at the moment of revelation; and (2) we
hand coded the partisan qualification measure but used automated coding for
the verbal hedging measure.
16 Although these metrics are available for the overall conversation, relationships
studied at that level would be harder to interpret given the heterogeneity of the
discussion prompts. For example, more words spoken during the identity
revelation likely reveal less certainty about one’s identity, whereas increased
words spoken during the issue discussion might be better thought of as interest
in the topic.
17 We structured the question as a forced choice in order to push the subjects
toward stating a partisan identity if they had one: More than 91 percent of the
sample leaned to or identified with a party in our pretest survey. However, in
hindsight, we wish we had allowed people to identify as Independents. An
alternative explanation for the pattern of results described here is that
Independents were simply less sure how to answer the question.
18 Here, the unit of analysis is the conversation, cutting our N to the fifty-eight
conversations where we had pretest information for both subjects. Thus, we
think we are likely underpowered to detect the small difference in the number
of words expressed in the partisan aligned versus partisan clash conversations.
19 This partisan tendency did not seem to extend to the conversations about
specific policy positions, suggesting that it may be limited to the expression of
one’s identity, or may be an artifact of the way we asked the identity question.
Chapter 8
1 “People are afraid to be disliked, or to be cast out of the group, or appear
different from the rest, as opposed to standing up and saying, ‘hey!’”
(Conover, Searing, and Crewe 2002, p. 55).
2 “Citizens are more willing to state their preferences in private discussions with
family and close friends, because the risks of disclosing something unknown
about oneself are lower; not because these discussions are less revealing, but
because we are more likely to be talking with people who we already know
and accept who we are. By contrast, public discussions with acquaintances or
strangers pose a greater danger; because citizens are less known to each other,
there is much more to reveal – or hide. By opting out of public discussion,
people can protect the privacy of their preferences and thus the privacy of their
identities” (Conover, Searing, and Crewe 2002, p. 57).
3 “By not discussing controversial issues, we avoid learning more than we really
‘need’ to know about friends and acquaintances, things that might disrupt our
ongoing relationships with them. ‘You might find out something that maybe
you don’t want to find out about somebody . . . And these come down to real
value systems. So what you do is to back off a little bit, allow that person to
believe what they want to believe.’ Here again, discussions reveal not just the
preferences of the participants but also their characters and sometimes their core
identities. And this can alert discussants to deeper, more fundamental differences
that can make it difficult to maintain relationships” (Conover, Searing,
and Crewe 2002, p. 55). They also write: “[M]utual respect can lead a citizen to
recognize that there is no common ground to share on an issue, and therefore, in
the interest of friendship, perhaps little point to discussing it” (pp. 55–56).
4 “Although citizens are committed to reciprocity in principle, in practice they
sometimes find it difficult to respect their fellow citizens” (Conover and
Searing 2005, p. 278).
5 “Close relationships are strong enough to withstand the potential disruption
that might occur from either abruptly – and rudely – disengaging from a
contested discussion or turning it into a real argument full of passion and
anger. With close friends and family, ‘you feel like they’re going to accept
you . . . You might have a temporary argument but they love you and you love
them. And you’re not going to lose that love just because of politics.’ By
contrast, persuasive and argumentative discussions with acquaintances run
the risk of alienating people and disrupting social relations that must be
maintained (such as co-workers). Outside of close relationships, you cannot
be sure if you will be accepted ‘for yourself or just by what you say or how you
act’” (Conover, Searing, and Crewe 2002, p. 57).
6 Despite the mounting empirical evidence depicting troubling patterns of affect-
ive polarization, others question the extent to which this is a product of
question wording and anchoring around our attitudes toward elites.
Druckman and Levendusky (2019) find that individuals show lower levels of
affective polarization when thinking about outpartisan voters, compared to
outpartisan elites, and Druckman et al. (2021) argue that affective polariza-
tion is really driven by hostility toward ideologically extreme and politically
active outpartisans. Others speculate that affective polarization is largely a
product of expressive responding on surveys: Individuals would not actually
be upset if their child married an outpartisan, but when asked on a survey,
they cheerlead for their party and report that they would be upset.
Methodologically addressing the concern about expressive responding is diffi-
cult, but the overwhelming evidence across surveys, experiments, and con-
texts – paired with observational patterns in sorting – would be inconsistent
with the idea that expressive responding accounts for all of the affective
polarization that has been measured. Even if people imagine unrepresentative
caricatures of outpartisans when they answer survey questions, they very well
could use those same exaggerations when they make real-world evaluations
of outpartisans.
7 www.thedailybeast.com/friends-unfriend-over-politics.
8 www.npr.org/2020/10/27/928209548/dude-i-m-done-when-politics-tears-
families-and-friendships-apart.
9 www.reuters.com/article/us-usa-election-trump-families/you-are-no-longer-
my-mother-a-divided-america-will-struggle-to-heal-after-trump-era-idUSK
BN27I16E.
10 www.washingtonpost.com/lifestyle/wellness/relationship-broken-grief-ambiguous-
loss/2021/02/09/0c28c2b6-673f-11eb-8468-21bc48f07fe5_story.html.
11 The scale that Mason tests, and we use, derives from a “social distance” scale
first used by sociologists. However, we use it the way that Mason does: to
capture social discomfort between partisans that can be measured by levels of
willingness to engage in social contact with outgroup partisans.
12 We are agnostic about whether this should underestimate or overestimate the
actual percentage of people who have engaged in this behavior. On the one
hand, there could be social desirability bias acting to keep people from
answering honestly in the affirmative, if on average people think they should
not cut friends out of their lives based on politics. On the other hand, this
question may activate expressive partisanship, providing an outlet for people to
signal their dislike for the outgroup. It is worth noting that our direct approach
of simply asking people whether they had ever engaged in any of these social
distancing behaviors because of politics should be relatively well-shielded from
the measurement concerns raised by Druckman et al. (2021). The authors argue
that when we ask respondents about their attitudes toward vaguely described
Republicans or Democrats, individuals are really thinking about the stereotypes
of highly engaged, ideologically extreme partisans commonly portrayed in the
media. However, our approach here does not ask people to reflect upon their
behavior or attitudes toward a specific group. Instead, it allows individuals to
reflect upon their experiences and report whether they think they have distanced
(or been distanced from) because of politics. This approach also allows individ-
uals to consider situations in which the political reasons were not based on
partisanship alone. For example, it could have been based on political engage-
ment, regardless of whether they disagreed; differences over policy or candidate
preferences; or some other form of political expression.
13 www.pewinternet.org/2016/10/25/the-political-environment-on-social-media.
14 These results for partisan composition replicate in another vignette experiment
in which we manipulated the power dynamics instead of the knowledge
asymmetries.
15 Specifically, in the knowledge study, we found that those in the partisan
minority were perceived to be the most likely to avoid a future discussion
(mean = 4.6), compared to those in the partisan majority (mean = 3.3), and
those in the balanced condition (mean = 3.6). There is suggestive evidence at
the p < .10 level that those in the balanced condition were more likely to think
the character would avoid a future discussion than those in the majority
condition. In the power study, those in the partisan minority were perceived
to be more likely to avoid a future discussion (mean = 5.2), compared to those
in the partisan majority (mean = 3.6), and those in the balanced condition
(mean = 3.9). There is suggestive evidence at the p < .10 level that those in the
balanced condition were more likely to avoid a future discussion than those in
the majority condition.
16 We replicated these results for partisan composition in an experiment exam-
ining power dynamics instead of knowledge asymmetries.
17 The CCES module we analyzed includes a nationally representative sample of
1,000 respondents. Respondents were surveyed before and after the
2018 midterm elections. The questions we analyze were both measured on the
preelection wave. We thank the Center for American Politics at UC San Diego
for funding support.
18 We acknowledge that estimating the distribution of partisanship within one’s
network can be a challenging task. Individuals might have incentives to
conform to the group (Carlson and Settle 2016; Levitan and Verhulst 2016)
and false consensus biases might lead us to overestimate agreement.
Researchers more commonly use name generators to measure network com-
position characteristics, but we were limited in survey space. Moreover,
Eveland et al. (2019) note that accuracy in inferring others’ views, even in a
name generator approach, can be misleading. We hope that future researchers
can explore methods for measuring network composition in a way that is
efficient on surveys and as accurate as possible.
19 Respondents who reported identifying as Democrats, including Independents
who leaned toward the Democratic Party, were asked to report how likely
they would do each activity with a Republican. Republicans, including
Independents who leaned toward the Republican Party, were asked to report
how likely they would do each activity with a Democrat. Pure Independents
were randomly assigned to view either “Republican” or “Democrat.” In the
analyses that follow, we include pure Independents, but the results remain
unchanged if we exclude them.
20 For example, 5% reported that they absolutely would not spend occasional
social time with an outpartisan, 4% reported that they absolutely would not be
next-door neighbors with an outpartisan, and 6% reported that they abso-
lutely would not be close friends with an outpartisan.
21 The interaction is significant at least at the p < .05 level for all four
weighted models.
Chapter 9
1 We do not expect that every single characteristic we examine will be associated
with every single outcome we measured in this book. From a theoretical
standpoint, that would be expecting individual dispositions to do a lot of work.
From a practical standpoint, that would simply be messy to evaluate critically
for two reasons. First, as we described in Chapter 3, we chose to pursue breadth
over depth in our approach to triangulating the behaviors at each stage of the
cycle. This led to many studies and many outcomes measured, but that would
make for a lot of hypotheses to test when examined against several
individual dispositions. Second, the personality batteries that we use to measure
the psychological dispositions of interest are somewhat long and we were not
able to include them on each one of our studies. As a result, there are some
outcomes we examined in the previous chapters for which we simply do not
have the individual disposition data.
2 For example, in the cumulative ANES data, we find that 74% of women report
that they have ever discussed politics, whereas 78% of men report the same.
When it comes to discussion frequency, the cumulative ANES data suggests that
men discuss politics more often, but the gap is relatively small: About 13% of
men report discussing politics every day, compared to 12% of women; 26%
of men report never discussing politics, compared to 31% of women.
3 For example, thousands of people engaged with a quiz about guessing whether
someone was a Trump or Biden supporter based on images of what they had in
their refrigerators. The quiz was published by the New York Times in October
2020. While many engaged with the quiz, it faced some criticism of classism.
Find the quiz here: www.nytimes.com/interactive/2020/10/27/upshot/biden-
trump-poll-quiz.html and a critique here: www.refinery29.com/en-us/2020/10/
10131989/fridge-politics-quiz-trump-biden-new-york-times-classism.
Chapter 10
1 www.theatlantic.com/ideas/archive/2020/11/the-crisis-of-american-democracy-
is-not-over/616962; www.npr.org/2020/11/15/935112333/how-the-2020-elec
tion-has-changed-trust-in-u-s-democracy; https://fivethirtyeight.com/features/
how-much-danger-is-american-democracy-in.
2 www.vanderbilt.edu/lapop/news/022317.US-WashingtonPost.pdf.
3 www.pewresearch.org/fact-tank/2021/01/15/in-their-own-words-how-americans-
reacted-to-the-rioting-at-the-u-s-capitol.
4 www.insidehighered.com/quicktakes/2021/01/07/hundreds-political-scientists-
call-removing-trump; http://brightlinewatch.org/american-democracy-on-
the-eve-of-the-2020-election/a-democratic-stress-test-the-2020-election-and-
its-aftermathbright-line-watch-november-2020-survey.
5 For example, speaking about the January 6, 2021, riots at the Capitol, House
Minority Leader Kevin McCarthy said in an interview with Full Court Press that
“everybody across this country has some responsibility.” He went on to say,
“[w]hat do we write on our social media? What do we say to one another? How
do we disagree and still not be agreeable even when it comes to opinion.” In a
video posted on Facebook on January 16, he said “[w]e all owe some responsibility
here too, what our rhetoric has been, the different language that people have used,
what you said on social media. We have risen these temperatures so great. In the
last inaugural, we started with, ‘resist.’ We have members of Congress saying we
should get in other people’s faces. We need to lower the temperature. We need to
understand that we’re all Americans, and we need to start respecting differences of
opinion.” In a speech delivered in the aftermath of the insurrection at the Capitol,
Joe Biden said “[t]hrough war and strife, America’s endured much and we will
endure here and we will prevail again, and we’ll prevail now. The work of the
moment and the work of the next four years must be the restoration of democracy,
of decency, honor, respect the Rule of Law, just plain, simple decency, the renewal
of a politics. It’s about solving problems, looking out for one another, not stoking
the flames of hate and chaos. As I said, America is about honor, decency, respect,
tolerance. That’s who we are. That’s who we’ve always been.”
6 www.washingtonpost.com/video/opinions/opinion-our-political-divide-is-danger
ous-a-neuroscientist-and-political-scientist-explain-why/2020/12/24/f470ab01-
a6a9-4e29-8187-2fdaa572176d_video.html?itid=ap_katewoodsome.
7 www.charleskochfoundation.org/courageous-collaborations.
8 https://storycorps.org/discover/onesmallstep.
9 www.washingtonpost.com/lifestyle/advice/miss-manners-why-must-i-be-a-teen
ager-in-love/2019/06/02/3ab570d2-7e31-11e9-8bb7-0fc796cf2ec0_story.html.
10 www.washingtonpost.com/technology/2019/08/23/google-says-only-talk-
about-work-workand-definitely-no-politics.
11 In an interview with Reuters, Mayra Gomez said that her twenty-one-year-old
son cut her out of his life, telling her “[y]ou are no longer my mother, because
you are voting for Trump.” www.reuters.com/article/us-usa-election-trump-
families/you-are-no-longer-my-mother-a-divided-america-will-struggle-to-
heal-after-trump-era-idUSKBN27I16E.
12 www.npr.org/2022/01/05/1070362852/trump-big-lie-election-jan-6-families.
13 www.npr.org/2019/12/24/791125357/trump-campaign-site-offers-help-in-
winning-arguments-with-snowflake-relatives.
14 “Theories of participatory democracy are in important ways inconsistent with
theories of deliberative democracy. The best possible social environment for
purposes of either one of these two goals would naturally undermine the
other” (Mutz 2006, p. 16).
15 www.washingtonpost.com/history/2020/12/22/martial-law-trump-flynn-history.
16 www.nytimes.com/2020/12/03/us/election-officials-threats-trump.html.
Aarøe, Lene, and Michael Bang Petersen. 2020. “Cognitive biases and communi-
cation strength in social networks: The case of episodic frames.” British
Journal of Political Science 50(4): 1561–1581.
Abrajano, Marisa. 2015. “Reexamining the ‘racial gap’ in political knowledge.”
The Journal of Politics 77(1): 44–54.
Ahler, Douglas J. 2014. “Self-fulfilling misperceptions of public polarization.”
The Journal of Politics 76(3): 607–620.
Ahler, Douglas J., and Gaurav Sood. 2018. “The parties in our heads:
Misperceptions about party composition and their consequences.” The
Journal of Politics 80(3): 964–981.
2018. “Measuring perceptions of shares of groups.” In Brian G. Southwell,
Emily A. Thorson, and Laura Sheble (eds.), Misinformation and Mass
Audiences (pp. 71–90). Austin: University of Texas.
Ahn, T. K., Robert Huckfeldt, and John Barry Ryan. 2014. Experts, Activists, and
Democratic Politics: Are Electorates Self-Educating? New York: Cambridge
University Press.
Allport, Gordon. W. 1954. The Nature of Prejudice. Cambridge, MA: Addison-
Wesley.
Ambrose, Graham. 2016. “At the country’s most elite and liberal colleges, some
Trump supporters stay closeted.” The Washington Post, September 20.
www.washingtonpost.com/news/grade-point/wp/2016/09/20/at-the-countrys-
most-elite-and-liberal-colleges-some-trump-supporters-stay-closeted/?noredirect=
on&utm_term=.4ce3d1f660ff.
Anoll, Allison P. 2018. “What makes a good neighbor? Race, place, and norms of
political participation.” The American Political Science Review 112(3):
494–508.
Anspach, Nicolas M., and Taylor N. Carlson. 2020. “What to believe? Social
media commentary and belief in misinformation.” Political Behavior 42(3):
697–718.
Appleby, Jacob. 2018. “Do they like us? Meta-stereotypes and meta-evaluations
between political groups.” PhD diss., University of Minnesota.
Appleby, Jacob, and Eugene Borgida. “Ideological metastereotypes:
Overestimations of intergroup antipathy and sources of anxiety.” Manuscript
in preparation. https://jacobappleby.wordpress.com/home/research.
Asch, Solomon E. 1956. “Studies of independence and conformity: A minority of
one against a unanimous majority.” Psychological Monographs 70(9): 1–70.
Bakker, Bert N., Yphtach Lelkes, and Ariel Malka. 2021. “Reconsidering the link
between self-reported personality traits and political preferences.” American
Political Science Review 115(4): 1482–1498.
Bartels, Larry M. 2002. “Beyond the running tally: Partisan bias in political
perceptions.” Political Behavior 24(2): 117–150.
Bello, Jason. 2012. “The dark side of disagreement? Revisiting the effect of
disagreement on political participation.” Electoral Studies 31(4): 782–795.
Bello, Jason, and Meredith Rolfe. 2014. “Is influence mightier than selection?
Forging agreement in political discussion networks during a campaign.”
Social Networks 36: 134–156.
Benjamin, Daniel J., and Jesse M. Shapiro. 2009. “Thin-slice forecasts of guber-
natorial elections.” The Review of Economics and Statistics 91(3): 523–536.
Benoit, Kenneth, Kohei Watanabe, Haiyan Wang, et al. 2018. “Quanteda: An
R package for the quantitative analysis of textual data.” Journal of Open
Source Software, 3(30): 774.
Bøggild, Troels, Lene Aarøe, and Michael Bang Petersen. 2021. “Citizens as
complicits: Distrust in politicians and biased social dissemination of political
information.” American Political Science Review, 115(1): 269–285.
Brewer, Marilyn B., and Sonia Roccas. 2001. “Individual values, social identity,
and optimal distinctiveness.” In C. Sedikides and M. B. Brewer (eds.),
Individual Self, Relational Self, Collective Self (pp. 219–237). New York:
Psychology Press.
Brown, Elissa J., Julia Turovsky, Richard G. Heimberg, Harlan R. Juster, Timothy
A. Brown, and David H. Barlow. 1997. “Validation of the social interaction
anxiety scale and the social phobia scale across the anxiety disorders.”
Psychological Assessment 9(1): 21.
Bullock, John G., Alan S. Gerber, Seth J. Hill, and Gregory A. Huber. 2015.
“Partisan bias in factual beliefs about politics.” Quarterly Journal of Political
Science 10(4): 519–578.
Bullock, John G., and Gabriel Lenz. 2019. “Partisan bias in surveys.” Annual
Review of Political Science 22: 325–342.
Busby, Ethan. 2021. Should You Stay Away from Strangers? Experiments on the
Political Consequences of Intergroup Contact. Cambridge: Cambridge
University Press.
Busby, Ethan, Adam Howat, Jacob Rothschild, and Richard Shafranek. 2021.
The Partisan Next Door: Stereotypes of Party Supporters and Consequences
for Polarization in America. Cambridge: Cambridge University Press.
Butler, Daniel M., and David E. Broockman. 2011. “Do politicians racially
discriminate against constituents? A field experiment on state legislators.”
American Journal of Political Science 55(3): 463–477.
Butler, Daniel M., and Jonathan Homola. 2017. “An empirical justification for
the use of racially distinctive names to signal race in experiments.” Political
Analysis 25(1): 122–130.
Butters, Ross, and Christopher Hare. 2020. “Polarized networks? New evidence
on American voters’ political discussion networks.” Political Behavior:
1–25.
Carlson, Taylor N. 2018. “Modeling political information transmission as a game
of telephone.” The Journal of Politics 80(1): 348–352.
2019. “Through the grapevine: Informational consequences of interpersonal
political communication.” American Political Science Review 113(2):
325–339.
Carlson, Taylor N., Marisa Abrajano, and Lisa García Bedolla. 2020. “Political
discussion networks and political engagement among voters of color.”
Political Research Quarterly 73(1): 79–95.
Carlson, Taylor N., Marisa Abrajano, and Lisa García Bedolla. 2020. Talking
Politics: Political Discussion Networks and the New American Electorate.
New York: Oxford University Press.
Carlson, Taylor N., and Seth J. Hill. 2021. “Experimental measurement of mis-
perception in political beliefs.” Journal of Experimental Political Science:
1–14.
Carlson, Taylor N., Charles T. McClean, and Jaime E. Settle. 2020. “Follow your
heart: Could psychophysiology be associated with political discussion net-
work homogeneity?” Political Psychology 41(1): 165–187.
Carlson, Taylor N., and Jaime E. Settle. 2016. “Political chameleons: An explor-
ation of conformity in political discussions.” Political Behavior 38(4):
817–859.
Carney, Dana R., John T. Jost, Samuel D. Gosling, and Jeff Potter. 2008. “The
secret lives of liberals and conservatives: Personality profiles, interaction
styles, and the things they leave behind.” Political Psychology 29(6):
807–840.
Carpinella, Colleen M., and Kerri L. Johnson. 2013. “Appearance-based politics:
Sex-typed facial cues communicate political party affiliation.” Journal of
Experimental Social Psychology 49(1): 156–160.
Chambers, John R., and Darya Melnyk. 2006. “Why do I hate thee? Conflict
misperceptions and intergroup mistrust.” Personality and Social Psychology
Bulletin 32(10): 1295–1311.
Chambers, John R., Robert S. Baron, and Mary L. Inman. 2006. “Misperceptions
in intergroup conflict: Disagreeing about what we disagree about.”
Psychological Science 17(1): 38–45.
Chen, M. Keith, and Ryne Rohla. 2018. “The effect of partisanship and political
advertising on close family ties.” Science 360(6392): 1020–1024.
Cialdini, Robert B., and Melanie R. Trost. 1998. “Social influence: Social
norms, conformity and compliance.” In D. T. Gilbert, S. T. Fiske, and G.
Lindzey (eds.), The Handbook of Social Psychology. 151–192. New York:
McGraw-Hill.
Cialdini, Robert B., and Noah J. Goldstein. 2004. “Social influence: Compliance
and conformity.” Annual Review of Psychology 55(1): 591–621.
Dolan, Kathleen. 2011. “Do women and men know different things? Measuring
gender differences in political knowledge.” The Journal of Politics 73(1):
97–107.
2014. When Does Gender Matter? Women Candidates and Gender Stereotypes
in American Elections. New York: Oxford University Press.
Dolan, Kathleen, and Patrick Kraft. “Asking the right questions: A framework to
develop gender-balanced knowledge batteries.” Unpublished manuscript.
Druckman, James N., and Matthew S. Levendusky. 2019. “What do we measure
when we measure affective polarization?” Public Opinion Quarterly 83(1):
114–122.
Druckman, James N., Matthew S. Levendusky, and Audrey McLain. 2018. “No
need to watch: How the effects of partisan media can spread via interpersonal
discussions.” American Journal of Political Science 62(1): 99–112.
Druckman, James N., Samara Klar, Yanna Krupnikov, Matthew Levendusky, and
John Barry Ryan. “(Mis-)estimating affective polarization.” Forthcoming,
Journal of Politics.
Duggan, Maeve, and Aaron Smith. 2016. “The political environment on social
media.” Pew Research Center. October 25. www.pewinternet.org/2016/10/
25/the-political-environment-on-social-media.
Efran, Michael G. 1974. “The effect of physical appearance on the judgment of
guilt, interpersonal attraction, and severity of recommended punishment in a
simulated jury task.” Journal of Research in Personality 8(1): 45–54.
Eliasoph, Nina. 1998. Avoiding Politics: How Americans Produce Apathy in
Everyday Life. Cambridge: Cambridge University Press.
Engelhardt, Andrew M., and Stephen M. Utych. 2020. “Grand old (tailgate)
party? Partisan discrimination in apolitical settings.” Political Behavior
42(3): 769–789.
Eveland, William P., Alyssa C. Morey, and Myiah J. Hutchens. 2011. “Beyond
deliberation: New directions for the study of informal political conversation
from a communication perspective.” Journal of Communication 61(6):
1082–1103.
Eveland, William P., Jr., Hyunjin Song, Myiah J. Hutchens, and Lindsey Clark
Levitan. 2019. “Not being accurate is not quite the same as being inaccurate:
Variations in reported (in)accuracy of perceptions of political views of net-
work members due to uncertainty.” Communication Methods and Measures
13(4): 305–311.
Eveland, William P., and Myiah J. Hutchens. 2009. “Political discussion fre-
quency, network size, and ‘heterogeneity’ of discussion as predictors of
political knowledge and participation.” Journal of Communication 59(2):
205–224.
2013. “The role of conversation in developing accurate political perceptions:
A multilevel social network approach.” Human Communication Research 39
(4): 422–444.
Eveland, William P., and Osei Appiah. 2021. “A national conversation about
race? Political discussion across lines of racial and partisan difference.”
Journal of Race, Ethnicity, and Politics 6(1): 187–213.
Figlio, David N. 2005. Names, Expectations and the Black-White Test Score Gap.
No. w11195. National Bureau of Economic Research.
Fisher, Marc. 2016. “It’s hard enough to be a Republican in deep-blue D.C. Try being
a Trump voter.” The Washington Post, February 22. www.washingtonpost
.com/local/its-hard-enough-to-be-a-republican-in-deep-blue-dc-try-being-a-
trump-voter/2016/02/22/d8f18b96-d4f2-11e5-9823-02b905009f99_story.
html?utm_term=.9ada7154080e.
Fitzgerald, Jennifer. 2013. “What does ‘political’ mean to you?” Political
Behavior 35(3): 453–479.
Fouka, Vasiliki. 2020. “Backlash: The unintended effects of language prohibition
in US schools after World War I.” The Review of Economic Studies 87(1):
204–239.
Frey, Frances E., and Linda R. Tropp. 2006. “Being seen as individuals versus as
group members: Extending research on metaperception to intergroup con-
texts.” Personality and Social Psychology Review 10(3): 265–280.
French, Jeffrey A., Kevin B. Smith, John R. Alford, Adam Guck, Andrew K. Birnie,
and John R. Hibbing. 2014. “Cortisol and politics: Variance in voting behav-
ior is predicted by baseline cortisol levels.” Physiology & Behavior 133:
61–67.
Funder, David C. 1995. “On the accuracy of personality judgment: A realistic
approach.” Psychological Review 102(4): 652.
Gadarian, Shana Kushner, and Bethany Albertson. 2014. “Anxiety, immigration,
and the search for information.” Political Psychology 35(2): 133–164.
Gell-Redman, Micah, Neil Visalvanich, Charles Crabtree, and Christopher J.
Fariss. 2018. “It’s all about race: How state legislators respond to immigrant
constituents.” Political Research Quarterly 71(3): 517–531.
Gerber, Alan S., Gregory A. Huber, David Doherty, and Conor M. Dowling.
2012. “Disagreement and the avoidance of political discussion: Aggregate
relationships and differences across personality traits.” American Journal of
Political Science 56(4): 849–874.
Gerber, Alan S., Gregory A. Huber, David Doherty, Conor M. Dowling, and
E. Ha Shang. 2010. “Personality and political attitudes: Relationships across
issue domains and political contexts.” American Political Science Review
104(1): 111–133.
Gibson, James L. 1992. “The political consequences of intolerance: Cultural
conformity and political freedom.” The American Political Science Review
86(2): 338–356.
Gibson, James L., and Joseph L. Sutherland. 2020. “Keeping your mouth shut:
Spiraling self-censorship in the United States.” Working Paper.
Glynn, Carroll J., Andrew F. Hayes, and James Shanahan. 1997. “Perceived
support for one’s opinions and willingness to speak out: A meta-analysis of
survey studies on the ‘spiral of silence.’” Public Opinion Quarterly 61(3):
452–463.
Goggin, Stephen N., and Alexander G. Theodoridis. 2017. “Disputed ownership:
Parties, issues, and traits in the minds of voters.” Political Behavior 39(3):
675–702.
Gosling, Sam. 2008. Snoop: What Your Stuff Says about You. London: Profile
Books.
Gosling, Samuel D., Peter J. Rentfrow, and William B. Swann. 2003. “A very brief
measure of the big-five personality domains.” Journal of Research in
Personality 37(6): 504–528.
Gosling, Samuel D., Sam Gaddis, and Simine Vazire. 2008. “First impressions
based on the environments we create and inhabit.” In Nalini Ambady and
John Joseph Skowronski (eds.), First Impressions (pp. 334–356). New York:
The Guilford Press.
Graham, Jesse, Jonathan Haidt, and Brian A. Nosek. 2009. “Liberals and conser-
vatives rely on different sets of moral foundations.” Journal of Personality
and Social Psychology 96(5): 1029.
Green, Stephanie. 2017. “What it’s like to be a Trump supporter in DC.” The
Washingtonian, January 18. www.washingtonian.com/2017/01/18/what-its-
like-to-be-a-trump-supporter-in-dc.
Greene, Stephen. 1999. “Understanding party identification: A social identity
approach.” Political Psychology 20(2): 393–403.
Haidt, Jonathan. 2014. “Your personality makes your politics.” TIME, January 9.
https://science.time.com/2014/01/09/your-personality-makes-your-politics.
Haidt, Jonathan, and Chris Wilson. 2014. “Can TIME predict your politics?”
TIME, January 9. http://time.com/510/can-time-predict-your-politics.
Hampton, Keith, Lee Rainie, Weixu Lu, Maria Dwyer, Inyoung Shin, and Kristen
Purcell. 2014. “Social media and the ‘spiral of silence.’” Pew Research
Center.
Harris-Lacewell, Melissa Victoria. 2010. Barbershops, Bibles, and BET:
Everyday Talk and Black Political Thought. Princeton, NJ: Princeton
University Press.
Hatemi, Peter K., John R. Hibbing, Sarah E. Medland, et al. 2010. “Not by
twins alone: Using the extended family design to investigate genetic influ-
ence on political beliefs.” American Journal of Political Science 54(3):
798–814.
Hayes, Danny. 2005. “Candidate qualities through a partisan lens: A theory of
trait ownership.” American Journal of Political Science 49(4): 908–923.
Hayes, Andrew F., Carroll J. Glynn, and James Shanahan. 2005. “Willingness to
self-censor: A construct and measurement tool for public opinion research.”
International Journal of Public Opinion Research 17(3): 298–323.
Heimberg, Richard G., Gregory P. Mueller, Craig S. Holt, Debra A. Hope, and
Michael R. Liebowitz. 1992. “Assessment of anxiety in social interaction and
being observed by others: The social interaction anxiety scale and the social
phobia scale.” Behavior Therapy 23(1): 53–73.
Hersh, Eitan. 2020. Politics Is for Power: How to Move beyond Political
Hobbyism, Take Action, and Make Real Change. New York: Scribner.
Hetherington, Marc J., and Jonathan D. Weiler. 2009. Authoritarianism and
Polarization in American Politics. Cambridge: Cambridge University Press.
2018. Prius or Pickup? How the Answers to Four Simple Questions Explain
America’s Great Divide. Boston: Houghton Mifflin Harcourt.
Judd, Charles M., and James W. Downing. 1995. “Stereotypic accuracy in judg-
ments of the political positions of groups and individuals.” In Milton Lodge
and Kathleen McGraw (eds.), Political Judgment: Structure and Process
(pp. 65–90). Ann Arbor: University of Michigan Press.
Karpowitz, Christopher F., and Tali Mendelberg. 2014. The Silent Sex: Gender,
Deliberation, and Institutions. Princeton, NJ: Princeton University Press.
Karpowitz, Christopher F., Tali Mendelberg, and Lauren Mattioli. 2015. “Why
women’s numbers elevate women’s influence, and when they do not: Rules,
norms, and authority in political discussion.” Politics, Groups, and Identities
3(1): 149–177.
Karpowitz, Christopher F., Tali Mendelberg, and Lee Shaker. 2012. “Gender
inequality in deliberative participation.” American Political Science Review
106(3): 533–547.
Katz, Josh. 2016. “‘Duck Dynasty’ vs. ‘Modern Family’: 50 maps of the U.S.
cultural divide.” The New York Times, December 27. www.nytimes.com/
interactive/2016/12/26/upshot/duck-dynasty-vs-modern-family-television-
maps.html.
Klar, Samara, and Yanna Krupnikov. 2016. Independent Politics: How American
Disdain for Parties Leads to Political Action. New York: Cambridge
University Press.
Klofstad, Casey A. 2009. “Civic talk and civic participation: The moderating
effect of individual predispositions.” American Politics Research 37(5):
856–878.
2010. Civic Talk: Peers, Politics, and the Future of Democracy. Philadelphia:
Temple University Press.
Klofstad, Casey A., Anand Edward Sokhey, and Scott D. McClurg. 2013.
“Disagreeing about disagreement: How conflict in social networks affects
political behavior.” American Journal of Political Science 57(1): 120–134.
Klofstad, Casey A., Scott D. McClurg, and Meredith Rolfe. 2009. “Measurement
of political discussion networks.” Public Opinion Quarterly 73(3): 462–483.
Kreibig, Sylvia D. 2010. “Autonomic nervous system activity in emotion:
A review.” Biological Psychology 84(3): 394–421.
Krockow, Eva M. 2018. “How many decisions do we make each day?”
Psychology Today, www.psychologytoday.com/us/blog/stretching-theory/
201809/how-many-decisions-do-we-make-each-day.
Krupnikov, Yanna, and John Barry Ryan. 2022. The Other Divide: Polarization and
Disengagement in American Politics. Cambridge: Cambridge University Press.
Kulas, Michelle. 2017. “The normal heart rate during a panic attack.” www
.livestrong.com/article/344010-the-normal-heart-rate-during-a-panic-attack.
Ladd, Jonathan McDonald, and Gabriel S. Lenz. 2008. “Reassessing the role of
anxiety in vote choice.” Political Psychology 29(2): 275–296.
2011. “Does anxiety improve voters’ decision making?” Political Psychology
32(2): 347–361.
Lajevardi, Nazita. 2020. “Access denied: Exploring Muslim American representa-
tion and exclusion by state legislators.” Politics, Groups, and Identities 8(5):
957–985.
Lawless, Jennifer L., and Richard L. Fox. 2010. It Still Takes a Candidate: Why
Women Don’t Run for Office. New York: Cambridge University Press.
Leighley, Jan E., and Arnold Vedlitz. 1999. “Race, ethnicity, and political partici-
pation: Competing models and contrasting explanations.” The Journal of
Politics 61(4): 1092–1114.
Leighley, Jan E., and Tetsuya Matsubayashi. 2009. “The implications of class,
race, and ethnicity for political networks.” American Politics Research 37(5):
824–855.
Lerner, Jennifer S., Ye Li, Piercarlo Valdesolo, and Karim S. Kassam. 2015.
“Emotion and decision making.” Annual Review of Psychology 66:
799–823.
Levendusky, Matthew S., and Dominik A. Stecula. 2021. We Need to Talk: How
Cross-Party Dialogue Reduces Affective Polarization. Cambridge:
Cambridge University Press.
Levendusky, Matthew S., and Neil Malhotra. 2016. “(Mis)perceptions of partisan
polarization in the American public.” Public Opinion Quarterly 80(S1):
378–391.
Levitan, Lindsey, and Brad Verhulst. 2016. “Conformity in groups: The effects of
others’ views on expressed attitudes and attitude change.” Political Behavior
38(2): 277–315.
Lieberson, Stanley. 2000. A Matter of Taste: How Names, Fashions, and Culture
Change. New Haven, CT: Yale University Press.
Lieberson, Stanley, and Eleanor O. Bell. 1992. “Children’s first names: An
empirical study of social taste.” American Journal of Sociology 98(3):
511–554.
Long, Jacob A., and William P. Eveland Jr. 2018. “Entertainment use and political
ideology: Linking worldviews to media content.” Communication Research.
Lupia, Arthur, and Mathew D. McCubbins. 1998. The Democratic Dilemma: Can
Citizens Learn What They Need to Know? Cambridge: Cambridge
University Press.
Lyons, Jeffrey, and Anand E. Sokhey. 2014. “Emotion, motivation, and social
information seeking about politics.” Political Communication 31(2):
237–258.
2017. “Discussion networks, issues, and perceptions of polarization in the
American electorate.” Political Behavior 39(4): 967–988.
MacKuen, Michael. 1990. “Speaking of politics: Individual conversational choice,
public opinion, and the prospects for deliberative democracy.” In John A.
Ferejohn and James H. Kuklinski (eds.), Information and Democratic
Processes (pp. 59–99). Urbana: University of Illinois Press.
Makse, Todd, Scott Minkoff, and Anand Sokhey. 2019. Politics on Display: Yard
Signs and the Politicization of Social Spaces. New York: Oxford University
Press.
Mansbridge, Jane J. 1980. Beyond Adversary Democracy. New York: Basic
Books.
1999. “Everyday talk in the deliberative system.” In S. Macedo (ed.), Essays on
Democracy and Disagreement. New York: Oxford University Press.
Marcus, Bernd, Franz Machilek, and Astrid Schütz. 2006. “Personality in cyber-
space: Personal web sites as media for personality expressions and impres-
sions.” Journal of Personality and Social Psychology 90(6): 1014–1031.
Marcus, George E., John L. Sullivan, Elizabeth Theiss-Morse, and Daniel Stevens.
2005. “The emotional foundation of political cognition: The impact of
extrinsic anxiety on the formation of political tolerance judgments.”
Political Psychology 26(6): 949–963.
Marcus, George E., and Michael B. MacKuen. 1993. “Anxiety, enthusiasm, and
the vote: The emotional underpinnings of learning and involvement during
presidential campaigns.” American Political Science Review 87(3): 672–685.
Mason, Lilliana. 2018. Uncivil Agreement: How Politics Became Our Identity.
Chicago: University of Chicago Press.
Mattick, Richard P., and J. Christopher Clarke. 1998. “Development and valid-
ation of measures of social phobia scrutiny and social interaction anxiety.”
Behaviour Research and Therapy 36(4): 455–470.
McClurg, Scott D. 2006. “Political disagreement in context: The conditional effect
of neighborhood context, disagreement and political talk on electoral partici-
pation.” Political Behavior 28(4): 349–366.
McCrae, Robert R. 1996. “Social consequences of experiential openness.”
Psychological Bulletin 122(3): 323–337.
McDermott, Rose, Dustin Tingley, and Peter K. Hatemi. 2014. “Assortative
mating on ideology could operate through olfactory cues.” American
Journal of Political Science 58(4): 997–1005.
McLeod, Jack M., Dietram A. Scheufele, and Patricia Moy. 1999. “Community,
communication, and participation: The role of mass media and interpersonal
discussion in local political participation.” Political Communication 16(3):
315–336.
McLeod, Jack M., Katie Daily, Zhongshi Guo, et al. 1996. “Community integra-
tion, local media use, and democratic processes.” Communication Research
23(2): 179–209.
Mendelberg, Tali, and Christopher F. Karpowitz. 2016. “Power, gender, and
group discussion.” Political Psychology 37(1): 23–60.
Mendelberg, Tali, Christopher F. Karpowitz, and Nicholas Goedert. 2014. “Does
descriptive representation facilitate women’s distinctive voice? How gender
composition and decision rules affect deliberation.” American Journal of
Political Science 58(2): 291–306.
Minozzi, William, Hyunjin Song, David M. J. Lazer, Michael A. Neblo, and
Katherine Ognyanova. 2020. “The incidental pundit: Who talks politics with
whom, and why?” American Journal of Political Science 64(1): 135–151.
Mintz, Alex, and Carly Wayne. 2016. The Polythink Syndrome: US Foreign Policy
Decisions on 9/11, Afghanistan, Iraq, Iran, Syria, and ISIS. Stanford, CA:
Stanford University Press.
Mondak, Jeffery J. 2010. Personality and the Foundations of Political Behavior.
Cambridge: Cambridge University Press.
Mondak, Jeffery J., and Karen D. Halperin. 2008. “A framework for the study of
personality and political behaviour.” British Journal of Political Science
38(2): 335–362.
www.pewresearch.org/fact-tank/2016/12/22/how-americans-are-talking-
about-trumps-election-in-6-charts.
Oliver, J. Eric, Thomas Wood, and Alexandra Bass. 2016. “Liberellas versus
Konservatives: Social status, ideology, and birth names in the United
States.” Political Behavior 38(1): 55–81.
Olivola, Christopher Y., and Alexander Todorov. 2010. “Elected in 100 millisec-
onds: Appearance-based trait inferences and voting.” Journal of Nonverbal
Behavior 34(2): 83–110.
Oxley, Douglas R., Kevin B. Smith, John R. Alford, et al. 2008. “Political attitudes
vary with physiological traits.” Science 321(5896): 1667–1670.
Page-Gould, Elizabeth, Wendy Berry Mendes, and Brenda Major. 2010.
“Intergroup contact facilitates physiological recovery following stressful
intergroup interactions.” Journal of Experimental Social Psychology 46(5):
854–858.
Paluck, Elizabeth Levy, Seth A. Green, and Donald P. Green. 2019. “The contact
hypothesis re-evaluated.” Behavioural Public Policy 3(2): 129–158.
Parsons, Bryan M. 2010. “Social networks and the affective impact of political
disagreement.” Political Behavior 32(2): 181–204.
Pérez, Efrén O. 2015. “Mind the gap: Why large group deficits in political
knowledge emerge – and what to do about them.” Political Behavior 37(4):
933–954.
Pettigrew, Thomas F., and Linda R. Tropp. 2006. “A meta-analytic test of inter-
group contact theory.” Journal of Personality and Social Psychology 90(5):
751–783.
Pietryka, Matthew T. 2016. “Accuracy motivations, predispositions, and social
information in political discussion networks.” Political Psychology 37(3):
367–386.
Prior, Markus, Gaurav Sood, and Kabir Khanna. 2015. “You cannot be serious:
The impact of accuracy incentives on partisan bias in reports of economic
perceptions.” Quarterly Journal of Political Science 10(4): 489–518.
Rahim, M. Afzalur. 1983. “A measure of styles of handling interpersonal con-
flict.” The Academy of Management Journal 26(2): 368–376.
Reifen Tagar, Michal, Christopher M. Federico, Kristen E. Lyons, Steven Ludeke,
and Melissa A. Koenig. 2014. “Heralding the authoritarian? Orientation
toward authority in early childhood.” Psychological Science 25(4): 883–892.
Renshon, Jonathan, Jooa Julia Lee, and Dustin Tingley. 2015. “Physiological
arousal and political beliefs.” Political Psychology 36(5): 569–585.
Richey, Sean. 2009. “Hierarchy in political discussion.” Political Communication
26(2): 137–152.
Riordan, Cornelius. 1978. “Equal-status interracial contact: A review and revi-
sion of the concept.” International Journal of Intercultural Relations 2(2):
161–185.
Riordan, Cornelius, and Josephine Ruggiero. 1980. “Producing equal-status
interracial interaction: A replication.” Social Psychology Quarterly 43(1):
131–136.
Robinson, Robert J., Dacher Keltner, Andrew Ward, and Lee Ross. 1995. “Actual
versus assumed differences in construal: ‘Naive realism’ in intergroup
perception and conflict.” Journal of Personality and Social Psychology
68(3): 404–417.
Shi, Yongren, Kai Mast, Ingmar Weber, Agrippa Kellum, and Michael Macy.
2017. “Cultural fault lines and political polarization.” WebSci ’17, June
25–28, Troy, NY.
Sigelman, Lee, and Steven A. Tuch. 1997. “Metastereotypes: Blacks’ perceptions
of Whites’ stereotypes of Blacks.” The Public Opinion Quarterly 61(1):
87–101.
Sinclair, Betsy. 2012. The Social Citizen: Peer Networks and Political Behavior.
Chicago: University of Chicago Press.
Sokhey, Anand E., and Paul A. Djupe. 2014. “Name generation in interpersonal
political network data: Results from a series of experiments.” Social
Networks 36: 147–161.
Song, Hyunjin. 2014. “Uncovering the structural underpinnings of political dis-
cussion networks: Evidence from an exponential random graph model.”
Journal of Communication 65(1): 146–169.
Soroka, Stuart N. 2019. “Skin conductance in the study of politics and communi-
cation.” In Gigi Foster (ed.), Biophysical Measurement in Experimental
Social Science Research: Theory and Practice (pp. 85–104). London:
Academic Press.
Soroka, Stuart N., Patrick Fournier, and Lilach Nir. 2019. “Cross-national evi-
dence of a negativity bias in psychophysiological reactions to news.”
Proceedings of the National Academy of Sciences 116(38): 18888–18892.
Soroka, Stuart N., Patrick Fournier, Lilach Nir, and John Hibbing. 2019.
“Psychophysiology in the study of political communication: An expository
study of individual-level variation in negativity biases.” Political
Communication 36(2): 288–302.
Stern, Robert Morris, William J. Ray, and Karen S. Quigley. 2001.
Psychophysiological Recording. New York: Oxford University Press.
Straits, Bruce C. 1991. “Bringing strong ties back in: Interpersonal gateways to
political information and influence.” Public Opinion Quarterly 55(3): 432–448.
Suhay, Elizabeth. 2015. “Explaining group influence: The role of identity and
emotion in political conformity and polarization.” Political Behavior 37(1):
221–251.
Sumaktoyo, Nathanael Gratias. 2021. “Friends from across the aisle: The effects of
partisan bonding, partisan bridging, and network disagreement on outparty
attitudes and political engagement.” Political Behavior 43(1): 223–245.
Taylor, Shelley E., Laura Cousino Klein, Brian P. Lewis, Tara L. Gruenewald,
Regan A. R. Gurung, and John A. Updegraff. 2000. “Biobehavioral
responses to stress in females: Tend-and-befriend, not fight-or-flight.”
Psychological Review 107(3): 411–429.
Thompson, Dennis F. 2008. “Deliberative democratic theory and empirical polit-
ical science.” Annual Review of Political Science 11(1): 497–520.
Todorov, Alexander, Anesu N. Mandisodza, Amir Goren, and Crystal C. Hall.
2005. “Inferences of competence from faces predict election outcomes.”
Science 308(5728): 1623–1626.
Trawalter, Sophie, Jennifer A. Richeson, and J. Nicole Shelton. 2009. “Predicting
behavior during interracial interactions: A stress and coping approach.”
Personality and Social Psychology Review 13(4): 243–268.
character and trait associations. See also individual disposition
  assumption and, 101–102
  of Democrats, 101
  of Republicans, 101
CIPI I Survey, 60, 62–65, 122, 156, 209–210, 215, 218, 226, 230–231, 255
CIPI II Survey, 62–65, 82–84, 91, 105, 186, 193, 202–203
Citizen Participation Study, 37–38
civil orientation to conflict, 7
Clarity Campaign Labs, 97–98
cognitive psychology, 20
comfort. See discomfort
conflict avoidance, 7, 37–38, 65, 241–242
  in individual disposition, 46
  measurement of, 46–47
  political discussion in, 33
  as psychological disposition, 225–227
  self-expression and, 33
conformity, 40, 269–270
  in group dynamics, 24–25
  lab studies on, 168–171
  measures of, 270
  potential, 169
  pure, 169
  in vignette experiments, 162
Conover, Pamela Johnston, 7–17, 22–23, 25–26, 28, 30
contact hypothesis, 24, 40–41, 248–254
Cooperative Congressional Election Study (CCES), 61–65, 67
COVID-19, 245
Cramer, Katherine, 7–17, 22, 30, 183–184
Crewe, Ivor M., 22–23, 25–26, 28, 30
crosscutting networks, 7

data collection
  in 4D Framework, 44–45, 61
  in lab experiments, 67–72, 135–136, 138–141
  survey experiments, 73–75
  surveys, 61–67
  vignette experiments, 73–75
decision-making, 5–6, 11, 35, 41–42, 110
  emotional response and, 149–153
  group dynamics in, 112–113, 126–128
  perception in, 85–86
deflection, in discussion, 240
  in vignette experiments, 120–123
Deichert, Maggie Ann, 34–36, 236
deliberation, 11–12, 204–207, 209–211, 260
democracy, 181
  interpersonal interactions and, 13–14
  political discussion and, 18, 243–254
  strains on, 234–235
Democrats, 79, 143, 193, 263–265, 267
  character and trait associations, 101
  in Name Your Price Study, 124–125
demographics
  of Democrats, 90
  individual dispositions and, 204–215
  of Republicans, 90
detection, 15, 79–84
  assumption in, 100–106
  CIPI II Survey studying, 82–84
  cue categories, 83–84
  Directly Political detection system, 84, 95–96
  facial features, political identity and, 88–89
  Facts of Life detection system, 84, 89–91
  Just by Looking at Them detection system, 84, 87–89
  Media Usage detection system, 84, 94–95
  Non-Guesser detection systems, 84–86
  political discussion and, 105–106
  of political identities, 82–83
  self-reporting of, 84–100
disagreement, 28–29, 126–128, 247
  avoidance of, 78
  candidate, 49–50
  discomfort and, 39, 140–141, 150–151
  gender and, 208–209
  general, 49–50
  identity-based, 48–49
  issue, 145–147
  measurement of, 49–50, 151–152
  in Name Your Price Study, 124
  with outpartisans, 124
  partisan identity, 38, 49–50, 72
  perceived, 146, 150–151
  polarization and, 13
  policy, 49–50, 72
  in Psychophysiological Experience Study, 72, 139–140
  recognition of, 247
  seeking out, 39
discomfort, 24
  disagreement and, 140–141, 150–151
motivations, 39–40, 239–241. See also AAA Typology
  goals and, 181
  for political discussion, 6, 21–22, 24–25
mutual toleration, 248–254
Mutz, Diana, 7–17, 21–22, 24, 71, 132, 134–135, 183, 247–249, 255–256

Name Your Price Study, 15–16, 73, 111, 114–115, 123–126, 136–137
  Democrats and Republicans in, 124–125
  disagreement in, 124
names
  phonetically ideological, 99
  political identity and, 96–100
Names As Cues Studies, 73
Neblo, Michael A., 27–28
negative outcomes, 53–54, 157
network homogeneity, 113–114. See also homophily
Noelle-Neumann, Elisabeth, 7–17, 30, 261

online communication, 13
opinion leadership, 129
opinion v. fact, 2
outpartisans, 101, 107, 136, 196, 198–200
  disagreement with, 124
  in discussion networks, 197
  emotional response to, 137
  heart rates in conversations with, 135–136
  rating of, 184
  stereotypes about, 15, 65–66, 79–80, 103

partisan bias, 27, 79. See also outpartisans; stereotypes
  cheerleading, 79–80, 103, 260
  future political interactions and, 191
  future social interactions and, 195
  in-group and out-groups in, 35–36
partisan clash, 151, 173–175
  in Psychophysiological Experience Study, 141–145
partisan identities, 151, 268
  demographics of, 92
  disagreement, 38, 49–50, 72
  discussion networks and, 198
  linguistic markers in, 173–176
  political disposition and, 220–221
  psychophysiological response to, 145
  revelation of, 143, 173–176
  social polarization and, 197–198
  socioeconomic traits and, 92
  stereotypes, 91
  strength of, 176, 197–198, 215–216, 220–221
perceived disagreement
  in Psychophysiological Experience Study, 146
  psychophysiological response to, 150–151
perception, 35, 176–177
  assumptions and, 107–108
  in decision-making, 85–86
  meta-perceptions, 107–108, 256–257
personality traits
  Big Five, 7, 32–34
  political ideology and, 203–204
Pew Research Center, 6, 24, 33–35, 46–47, 94–95, 188
polarization, 14, 17–18
  affective, 15, 28–29, 184
  disagreement and, 13
  geography and, 90
  identity and, 35–36
  interpersonal interactions and, 13
  stereotypes linked to, 101
policy disagreement, 49–50, 72
Political Chameleons Study, 168
  potential conformity, 169
  pure conformity, 169
political discussion, 8–9
  absence of, 110
  American dislike of, 8–9
  anticipation of, 37
  benefits of, 200
  choice in, 111–115
  conflict avoidance in, 33
  dangers in, 23
  defining, 4, 10
  democracy and, 18, 243–254
  detection systems and, 105–106
  disagreement/contention in, 6 (See also disagreement)
  discussants in, 4–5
  dyads in, 4, 262
  enjoyment of, 6
  footing in, 23
  frequency of, 2–3
  group dynamics in, 11–12
  initiation of, 78, 115–119
  as interpersonal interactions, 21–24