
What Goes Without Saying

Why are political conversations uncomfortable for so many people?


Current literature focuses on the structure of discussion networks and
the frequency with which people talk about politics, but not the
dynamics of conversations themselves. In What Goes Without Saying,
Taylor N. Carlson and Jaime E. Settle investigate how Americans
navigate these discussions in their daily lives, with particular attention
to the decision-making process around when and how to broach politics.
The authors use a multi-method approach to unpack what they call the
4D Framework of political conversation: identifying the ways that
people detect others’ views, decide whether to talk, discuss their
opinions honestly – or not – and determine whether they will repeat the
experience in the future. In developing a framework for studying and
explaining political discussion as a social process, What Goes Without
Saying will set the agenda for research in political science, psychology,
communication, and sociology for decades to come.

TAYLOR N. CARLSON is an Assistant Professor of Political Science at
Washington University in St. Louis. She is the author of Talking
Politics: Political Discussion Networks and the New American
Electorate (2020). She has received numerous awards for her research.
JAIME E. SETTLE is an Associate Professor of Government at the
College of William & Mary. Her first book, Frenemies: How Social
Media Polarizes America (Cambridge University Press 2018), won the
Best Book Award from the Experimental Politics section of APSA.

Published online by Cambridge University Press


What Goes Without Saying
Navigating Political Discussion in America

TAYLOR N. CARLSON
Washington University in St. Louis

JAIME E. SETTLE
William & Mary



University Printing House, Cambridge CB2 8BS, United Kingdom

One Liberty Plaza, 20th Floor, New York, NY 10006, USA

477 Williamstown Road, Port Melbourne, VIC 3207, Australia

314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre,
New Delhi – 110025, India

103 Penang Road, #05–06/07, Visioncrest Commercial, Singapore 238467

Cambridge University Press is part of the University of Cambridge.

It furthers the University’s mission by disseminating knowledge in the pursuit of
education, learning, and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781108831864
: 10.1017/9781108912495

© Taylor N. Carlson and Jaime E. Settle 2022

This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.

First published 2022

A catalogue record for this publication is available from the British Library.

Library of Congress Cataloging-in-Publication Data


Names: Carlson, Taylor N., author. | Settle, Jaime E., 1985- author.
Title: What goes without saying / Taylor N. Carlson, Washington University in St. Louis, Jaime E. Settle,
College of William & Mary, Virginia.
Description: New York : Cambridge University Press, 2022. | Includes bibliographical references and index.
Identifiers: LCCN 2022001129 (print) | LCCN 2022001130 (ebook) | ISBN 9781108831864 (Hardback) |
ISBN 9781108927444 (Paperback) | ISBN 9781108912495 (ePub)
Subjects: LCSH: Communication in politics. | Group identity. | Discussion. | Political sociology. | BISAC:
POLITICAL SCIENCE / American Government / General
Classification: LCC JA85 .C384497 2022 (print) | LCC JA85 (ebook) | DDC 320.01/4–dc23/eng/20220304
LC record available at https://lccn.loc.gov/2022001129
LC ebook record available at https://lccn.loc.gov/2022001130

 978-1-108-83186-4 Hardback


 978-1-108-92744-4 Paperback

Additional resources for this publication at www.cambridge.org/9781108831864

Cambridge University Press has no responsibility for the persistence or accuracy
of URLs for external or third-party internet websites referred to in this publication
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.



Contents

List of Figures page vii
List of Tables ix
Acknowledgements xi

1 Opening the Black Box of Political Discussion 1
2 The 4D Framework of Political Discussion 20
3 Data Collection 44
4 Detection: Mapping the Political Landscape (Stage 1) 77
5 Decision: To Talk or Not to Talk? (Stage 2) 109
6 Discussion: The Psychophysiological Experience of Political
Discussion (Stage 3) 130
7 [further] Discussion: Expression in Political Discussions
(Stage 3) 154
8 Determination: When Discussion Divides Us (Stage 4) 180
9 Individual Dispositions and the 4D Framework 202
10 The Costs of Conversation 234

Notes 259
Works Cited 277
Index 293

Figures

4.1 Proportion of respondents agreeing with stereotype
statements page 104
6.1 Average change in psychophysiological response to video
and discussion stimuli 134
6.2 Average change in psychophysiological response to
discussion stimuli 136
6.3 Psychophysiological response throughout Psychophysiological
Experience Study 140
6.4 Psychophysiological response to discussion prompts by
partisan identity concordance 145
6.5 Psychophysiological response to discussion prompts
by issue disagreement 146
6.6 Psychophysiological response to discussion prompts, by
perceived disagreement 150
7.1 Most important concerns and opportunities, by AAA
Typology 158
7.2 Most important concerns and opportunities, by
treatment group 161
7.3 Considerations by expression response 167
7.4 Qualification in partisan identity expression,
by partisan clash condition 175
8.1 Types and frequency of political and social distancing 187
8.2 Main effects of partisan composition on avoiding
future political discussions with this group 191
8.3 Main effects of partisan composition on avoiding future
social interactions with this group 195


8.4 Predicted likelihood of polarization, by strength of
partisanship and network composition 198
9.1 Predicted probability of guessing views, by political
dispositions 217
9.2 Predicted probability of expression responses, by political
dispositions 219
9.3 Predicted probability of expression response, by
psychological dispositions 224
9.4 Predicted probability of relying on cues, by psychological
dispositions 227
9.5 Predicted probability of distancing, by psychological
dispositions 229



Tables

3.1 Correlations between individual dispositions page 48
3.2 Conceptualization and measurement of disagreement 50
3.3 Conceptualization and operationalization of
knowledge asymmetries 52
3.4 Considerations coded into AAA Typology 55
3.5 Measurement of 4D Framework outcomes 57
3.6 Survey data collection 62
3.7 Psychophysiological lab studies 70
3.8 Vignette experiments overview 74
4.1 Detection categories based on free response data 83
4.2 Socioeconomic and non-visible demographic traits and
inferred partisanship 92
4.3 Actual and perceived political leanings of phonetically
ideological names 99
4.4 Frequency of guessing views and confidence in guesses 106
5.1 Agreement in discussions that were avoided and engaged in 116
5.2 Motivation for engagement or avoidance, coded free
response answers 117
5.3 Summary of results about Stage 2 (Decision) 127
6.1 Mean emotional response, treatment main effects 137
6.2 Hypotheses in Psychophysiological Experience Study 142
6.3 Recall and accuracy in discussion partner’s opinion 148
7.1 Vignette experiment hypotheses 157
7.2 Percentage of respondents selecting each consideration as
most important 159


7.3 Expression response from vignette experiment, by
treatment condition 163
7.4 Likelihood of expressing true opinion, by most
important consideration 166
7.5 Linguistic markers in Psychophysiological Experience Study 172
7.6 Explanatory variables for linguistic marker analysis 174
7.7 Pattern of findings for linguistic markers in issue discussion in
Psychophysiological Experience Study 178
8.1 Summary of Determination stage questions 185
9.1 Summary of key findings 205
9.2 Empirical approach for evaluating individual dispositions
and Stage 2 behavior 210
9.3 Empirical approach for evaluating individual dispositions and
Stage 3 behavior 212
9.4 Empirical approach for evaluating individual
dispositions and Stage 4 behavior 213
9.5 Empirical approach for evaluating individual dispositions
and Stage 1 behavior 216

Acknowledgements

This project is the result of shared intellectual curiosity nurtured by an
environment that incentivized and rewarded collaborative research
between faculty and students. We began working together as professor
and undergraduate research assistant in 2012, but we wrap up this project
as coequal collaborators and friends.
Jaime’s interest in contentious interactions and Taylor’s interest in
conformity blossomed into this book, a project nearly ten years in the
making that served as the organizing principle of our joint research
agenda during a critically important era, professionally, for each of us.
A project of this duration and magnitude would not have been possible
without the support of many people and organizations.
The methodological pluralism in this book necessitated a substantial
investment of resources. We thank the National Science Foundation
(NSF) for its financial support (SES 1423788: Understanding the
Mechanisms for Disengagement from Contentious Political Interaction)
and for its engagement when our research was politicized as part of a
battle in Congress over funding for the social sciences. The NSF grant
funded the lab equipment for the psychophysiological studies, the nation-
ally representative surveys, countless additional experiments with con-
venience samples, an army of undergraduate research assistants, training
and workshop travel for a dozen students, and conference travel allowing
us to share our work and receive important feedback.
We are also grateful for the support provided by the universities with
which we’ve been affiliated during our time working on this book.
William & Mary (W&M) generously supported the Social Science
Research Methods Center and the Omnibus Project, which were essential

for the execution of the lab experiments. The Osher Lifelong Learning
Institute (formerly the Christopher Wren Association), in particular
Judith Bowers, partnered with us to help disseminate our findings.
W&M also funded several opportunities for Taylor through an Honors
Fellowship, which ultimately supported an experiment described in
Chapter 7, conference travel to the Midwest Political Science
Association (MPSA), and training at the Summer Institute for Political
Psychology at Stanford in 2013. The University of California, San Diego’s
Center for American Politics provided space on a CCES module that
provided data for analyses that appear in Chapter 8. Washington
University in St. Louis provided Taylor with the time needed to dedicate
to this project and resources to hire graduate research assistants, such as
Erin Rossiter and Benjamin Noble, without whom this project would
have been seriously delayed.
Taylor was one of the core founding members of the Social Networks
and Political Psychology (SNaPP) Lab, which Jaime established in 2013.
The impact of the lab on this project cannot be overstated. Multiple
cohorts of W&M students were directly and indirectly involved in the
projects presented in this book. We are especially grateful to the
members of the Lab Experiments Team over the years. Drew
Engelhardt was the first to tackle the setup of the BioPac equipment
and AcqKnowledge software, followed by Karina Charipova. John
Stuart, Edward Hernandez, Zarine Kharazian, Dan Brown, and
Michelle Hermes collected, cleaned, and preprocessed most of the data
for the psychophysiologically informative studies in Chapters 6 and 7.
Laurel Detert, Emily Saylor, Alex Bulova, Nick Oviedo-Torres, Emma
DiLauro, and Nora Donnelly were involved in analyzing some of this
data as well as in thinking about how the study protocols could be
extended in the future.
Countless other SNaPP Lab alums were involved in the production of
the data. We thank Meg Schwenzfeier for her work in coding the social
network analysis data from the first psychophysiological study. We thank
Michael Payne for the inspiration for what we report as the Names as
Cues study but internally called the “Ezekiel Studies” because of a
humorous conversation with him. Another set of students – Ally Brown,
Vera Choo, Leslie Davis, Aidan Fielding, Jacob Nelson, Alexis Payne,
Anne Pietrow, Kathleen Quigley, and Olivia Yang – were involved in
coding free response data. The “COVID Cohort” – Julia Campbell,
Claudia Chen, Leslie Davis, Andrew Luchs, Kaylie Martinez-Ochoa,
and Frank Tao – contributed at the final stages of writing the manuscript,
including to help polish the opening vignettes in the chapters. Dozens of
other W&M students served as proctors or subjects in our studies.
Jaime thanks John Hibbing and Kevin Smith for hosting her for a visit
in 2012 that was the crash course in how to set up a psychophysiology
lab, as well as for their support and encouragement throughout this
project. During that trip, the opportunity to learn from Carly Jacobs,
Mike Gruszczynski, Amanda Friesen, Jayme Renfro, Kristen Anderson,
Karl Giuseffi, Frank Gonzalez, and Scott Bokemper was invaluable. With
funding from the NSF grant, Jaime and Taylor sponsored a workshop in
2015 for graduate and undergraduate research assistants to “open the
black box” on using psychophysiological measures; in 2016 a second
workshop was held in conjunction with a conference hosted by Matt
Hibbing and supported by UC Merced. Those workshops were crucial
to ensure that we were using best practices in our collection and analysis
of physiological data, and we are grateful to the participants and support-
ers of that effort: Nicolas Anspach, Vin Arceneaux, Chelsea Coe, John
Peterson, and several research assistants from the SNaPP Lab named
previously.
Over the years, we’ve been exceptionally fortunate to have the oppor-
tunity to present this work in front of a number of audiences. We thank
our fellow panelists, discussants, and audience members from panels at
the American Political Science Association (APSA) (2014, 2015, and
2016), MPSA (2014 and 2015), International Society of Political
Psychology (ISPP) (2014, 2015, and 2017), World Association for
Public Opinion Research (WAPOR) 2015, and Political Networks
Conference (PolNet) 2016. We’ve also had the chance to give talks about
the project at Northwestern University’s American Politics Workshop, the
University of Virginia’s American Politics Speaker Series, CEVIPOF’s
Political Economy of Accountability Workshop, as well as the New
Methods and Perspectives in Political Psychology Symposium at the
2021 SPSP Political Psychology Preconference. We are both members of
the Omni Methods Group (formerly known as the Human Nature
Group) and we are grateful for the invaluable feedback we’ve received
on so many facets of the project, from its earliest incarnations to brain-
storming book titles. Many of Jaime’s favorite professional memories are
from retreats and conferences with this crew, where she has learned that
constructive, but supportive, criticism is ideally given and received in a
beautiful locale with a beer in hand.
There are a number of individuals whom we also want to thank for
their feedback. Brad LeVeck was extremely helpful in thinking through
the Names as Cues studies. Stuart Soroka provided excellent conceptual
feedback at a critical juncture in the book’s development. Chris
Karpowitz made a comment at ISPP 2015 that inspired the True
Counterfactual Experiment design. Lisa Argyle provided incredibly
thoughtful comments on the earliest drafts of the first few chapters of this
manuscript. We thank her for encouraging us to think about what the
socially desirable behaviors truly are in political discussions and for years
of insightful conversations about conversations. We thank Jeremy Levy
for helpful discussions and feedback on the manuscript that pushed us to
improve its clarity and scope. Jamie Druckman’s enthusiasm and encour-
agement for this project bolstered our resolve to finish it. We thank the
anonymous reviewers for their feedback on the manuscript. This book
benefited from our conversations with Tali Mendelberg and Bridget
Flannery-McCoy, who pushed us to balance situating our argument
within scholarly debates and striking a tone that could have a broader
impact. We’re so grateful for the opportunity to work with Sara Doskow
at Cambridge University Press. Sara’s initial enthusiasm about the project
gave us the energy to see it through. She was incredibly supportive
throughout the process, especially given the tumultuous year we all had,
completing the manuscript during the Covid-19 pandemic. Joshua
Penney, Rachel Blaifeder, and Jadyn Fauconier-Herry enthusiastically
took the reins on this project to see it through to the finish line. We are
grateful for their attention to detail and suggestions for increasing the
reach of the book. Ashley and Tanner Schreiber-May of Hyphen did
phenomenal work on the cover art, capturing the essence of the book in
a visually compelling way.
Jaime would like to thank Michael Draeger for all the ways that he
keeps her grounded and reminds her that writing a book should be only
one piece of a rich and fulfilling life instead of an all-consuming endeavor.
Turns out we didn’t need a deus ex machina to keep the narrative of the
book moving, but she appreciates the suggestion and all the other ways he
has made her laugh. Jaime’s parents, Patty and Gene, have been as
supportive as ever and have stepped up in many ways, big and small, to
make life better. She wants to thank her extended family and the extended
Draeger clan for the chance to observe and participate in many interper-
sonal interactions about politics. Finally, Jaime thanks Taylor for inspir-
ing her at every step along the way. Taylor’s creativity, energy,
organization, perseverance, and humor are what kept this project moving
forward, despite slowdowns caused by Jaime’s other projects and post-
tenure malaise. She is so appreciative that Taylor’s drive and focus
rekindled her own, and she is a better scholar because of the opportunity
to work with Taylor.
Taylor would like to thank her family for providing years of examples
of different types of political discussions (both lived and observed), which
provided motivation to write this book. In addition to providing her with
a colorful variety of conversations that no doubt capture every single
behavior we describe empirically in this book, her family gave her unend-
ing support. She thanks her parents, Charlie and Skip, for giving her the
opportunity to attend the College of William & Mary and take full
advantage of opportunities along the way that helped pave the way for
this book. She thanks her grandparents, Bud and Sharon, for remaining
ever-curious about the research and what exactly it is that she does with
her time. Taylor also thanks her in-laws, Bob, Michelle, Alyssa (Hahn),
and Evan Carlson, for their support of and interest in her work. Taylor
owes the biggest gratitude to Eric and Daniel Carlson. Without Eric’s
support during the entirety of this project’s near-decade of work, Taylor
would not have had the motivation to finish. Eric gave her the space and
time to write, listened when she needed to vent or verbally process parts of
the project, and held down the fort when she traveled. Although Daniel’s
arrival into this world just before the Covid-19 pandemic delayed our
progress on this book by a few months, Taylor is most grateful of all to
her little Danny Bob. Daniel gave this book new meaning and purpose,
yet served as a constant reminder that life is more important than a book
(with apologies to readers!). Daniel “wrote” his first line of code for this
book and many paragraphs were written with Daniel asleep in
Taylor’s arms.
Taylor owes a huge thank-you to Jaime. Jaime has been an incredible
mentor and collaborator for nearly a decade. This book would not have
been possible without Jaime’s guidance, intuition, and patience while
Taylor learned the ropes of academia and struggled through challenges
that were just a distant memory for Jaime at the time, such as taking
comprehensive exams or finishing a dissertation. Jaime has been incred-
ibly supportive of work–life balance during this project, and Taylor is
indebted to Jaime for that above all else.

1

Opening the Black Box of Political Discussion

It’s December 4, nearly a month after a contentious election, and Joe is
just beginning a typical Thursday. Living in a big city means he can take
public transit to work, and that’s what he’s doing at 8:37 this morning.
Climbing into the bus, he quickly scans the open seats and chooses to sit
next to a woman holding a backpack on her lap that appears to be
blissfully free of political buttons and patches. As he settles in for the
twelve-minute ride, he pulls out his copy of the local newspaper, but
before he even cracks it open, his seatmate pipes up with a snarky
comment about the legitimacy of the election results. Joe feels trapped;
his heart is racing, and his palms feel sweaty. He instantly regrets his
decision to read the paper instead of listening to a podcast, where his
headphones could shield him from the unwanted political commentary
of others. He thinks it would be rude to get up and move away, but the
last thing he wants to do before his first cup of coffee is have a debate
about the election. He attempts a throwaway comment to derail the
conversation and buries his nose in the paper, hoping his seatmate will
take the hint.
The office, at long last. Joe peels off his winter layers and walks into the
office lounge where his colleagues are chatting. Joe knew as soon as he saw
them that he was going to be roped into a conversation. His apprehension
isn’t based on a lack of interest – one of the articles he read that morning
was on a related subject – and it’s not based on the absence of an opinion.
But he dreads the delicate dance of navigating these office talks. He knows
that his boss agrees with him, but the new program assistant probably
does not, based on the campaign buttons that adorn his cubicle. Joe is
acutely aware of how uncomfortable the new program assistant must be,

https://doi.org/10.1017/9781108912495.001 Published online by Cambridge University Press
having been in his shoes before, and does his best to steer the conversation
toward safer waters. Football. Movies. The holidays. Their kids’ dance
recitals. Their grandmother’s bunion removal. Anything but politics.
Joe’s day continues. At lunch, he overhears people at the table next to
him talking about the latest economic news. It sounds like the most
talkative member of the group is sharing more opinion than fact, even
though he’s billing it as objective reality. Joe finishes his workday and
takes the bus home, remembering to use his headphones to avoid more
unwelcome political encounters. Thursday nights are dinners with his in-
laws; he gave up years ago trying to impress them, but he still makes an
effort not to antagonize them. While “not antagonizing them” used to be
simple enough, their constant political commentary has complicated
things. They always have something to say, typically motivated by the
cable news programs that monopolize their television. Joe never knows if
they want him to reply or not, but because they seem well-informed, he
never feels like he has much to contribute. He is always grateful when his
wife takes one for the team, and even though this conversation is the
longest one he’s had all day about politics, he does not actually say
anything.
As Joe falls asleep that night, he rehashes in his mind all the close calls
he had during the day, where he narrowly avoided getting drawn into an
unpleasant situation talking about politics. The evasion is exhausting, but
necessary. When he thinks back to the occasions where he has not steered
clear of contentious conversations – with strangers, with his coworkers,
and with his family – he cringes. Those negative memories are what
motivates him to work so hard to avoid offending others and minimize
his own discomfort.

What We Do Not Know about Political Discussion

Joe – a narrative composite built on the anecdotes shared with us by
hundreds of research subjects in open-ended questions – could be thought
of as the prototypical American when it comes to political discussion. In
what ways does Joe personify what is known about the discussion behav-
ior of the American public? And what is left unmeasured from Joe’s
experience using the standard techniques deployed in research on political
discussion?
We begin with the if and when of conversation: How do social scien-
tists measure whether Joe talked about politics? Imagine that Joe was
randomly selected to participate in the American National Election Study
(ANES). When asked if he has ever discussed politics with family and
friends, the answer is straightforward: yes. Joe, like the vast majority of
Americans, would probably report that he does talk about politics.
Perhaps to no surprise given the wording of the question, more than
70 percent of Americans from 1984 to 2016 report that they have
discussed politics at least once with their close social ties. While this
question is fairly direct, other questions about the frequency of political
discussion behavior might be harder to answer. Using the cumulative
ANES dataset, on average, Americans report that they discuss politics
about 2.4 days per week, although there is considerable variation over
time.1 When asked how many days in the past week Joe discussed politics,
how would he respond? Joe never initiated these interactions, but he
faced multiple opportunities to express his opinions if he desired. Does he
count the interaction on the bus or just the longer conversation with his
in-laws? Another question used to assess overall discussion behavior
assesses whether people try to avoid political discussions, enjoy them, or
fall somewhere in between. Joe is certain that he avoids them, but he
would have no way to tell the survey interviewer that he is not usually successful in
doing so.
Next, we consider the what of the conversation, or the substance and
experience of the political interactions Joe has. Most previous research
has not focused too intently on what Joe is likely to discuss. Scholars who
study deliberation would likely find Joe’s conversations lacking, as the
interactions he had fail to meet the criteria for deliberation, such as
participation from both people in the conversation (e.g. Thompson
2008).2 We know that the format of survey questions – asking about
discussing “important matters” versus “political matters” – tends to yield
similar distributions of discussant preferences (Klofstad, McClurg, and
Rolfe 2009; Sokhey and Djupe 2014). But with a handful of exceptions
(Fitzgerald 2013; Settle 2018), researchers do not tend to deeply explore
how people interpret the meaning of “political” when they are asked to
recount those conversations. Nor do researchers probe deeply into how
people feel during the conversation itself. We would systematically miss
Joe’s negative emotional experience discussing politics by not pushing
further on the fact that he says he tries to avoid discussion. While some
work sheds light on the emotional motivations for and results of the kinds
of discussions Joe might have (Parsons 2010; Lyons and Sokhey 2014),
considerably less is known about the considerations inside people’s heads
during a political conversation.

The bottom line: Questions about the rates and nature of political
discussion are difficult to ask and answer. Scholars have wildly different
definitions of political discussion (or conversation, talk, deliberation,
interaction), and “talking about politics” means different things to differ-
ent people (Eveland, Morey, and Hutchens 2011; Morey and Eveland
2016). Following points carefully raised by these scholars, we note that
the most commonly used political discussion survey items are far too
blunt to capture the nuances of the behavior. Analyzing data from full
networks, Morey and Eveland (2016) find that dyads tend to agree on
whether they had any conversation, but do not agree on whether they
discussed politics. This suggests that individuals have different conceptu-
alizations of what constitutes a political discussion, and these different
conceptualizations can make it difficult to measure the existence or fre-
quency of discussion.
Turning toward the who of conversations, Joe, like most Americans,
would report on a survey that he talks most frequently with those with
whom he has a close relationship, such as his wife and in-laws. He also
would probably report that most of the people with whom he talks tend
to agree with him (Mutz 2006), though disagreement persists in his
network (Huckfeldt, Johnson, and Sprague 2004). A survey might also
pick up that Joe is exposed to more diversity of opinion in the workplace,
consistent with findings that coworkers are an important source of cross-
cutting discussion (Mutz and Mondak 2006). But these measures would
miss several important aspects of the who in Joe’s political conversation
experiences. We would not fully understand the effect of the group
context and power dynamic in his water-cooler conversation, both of
which have been shown to be important (e.g. Richey 2009). And although
we would accurately identify that he encounters disagreement at work,
we would likely misattribute to these conversations the ability to influence
Joe. Eveland, Morey, and Hutchens (2011) importantly find that political
conversations with neighbors and coworkers are more likely to be in the
form of small talk, motivated by the desire to pass time, while political
conversations with strong ties, such as partners and relatives, are gener-
ally motivated by more instrumental factors, for example, trying to form
an opinion or inform others. Joe’s experience seems to fit this finding, but
our standard survey questions would likely miss this nuance and how the
nature of his various social relationships affects the tenor of different
conversations.
Our understanding of where political discussion takes place is often
deduced from whom people report as political discussants. As a result, we
assume that most discussion takes place in the workplace and at home
(e.g. Conover, Searing, and Crewe 2002),3 where Americans spend most
of their time. But discussions take place in other locations as well, such as
regular discussion groups that emerge out of civic or community associ-
ations (Cramer 2004, 2006, 2008), barbershops (Harris-Lacewell 2010),
or in public spaces such as social gatherings and pubs, although these
locations facilitate political discussion far less frequently than the work-
place (Conover, Searing, and Crewe 2002). Churches also play an import-
ant role as a place for people to exchange political opinions, especially for
African Americans.
A major problem with common measures of the if, when, who, and
where of political conversation is that they are narrowly focused on
conversations with regular discussants. As a result of the survey ques-
tions, scholars would miss the multitude of incidental interactions in Joe’s
day, ranging from the fleeting exchange on the bus, akin to a “snort of
derision” in Mansbridge’s (1999) terms, to the political talk briefly inter-
woven into the conversation around the office water cooler. These con-
versations do not necessarily influence Joe’s policy opinions – he is not
learning from them or hoping to persuade anyone. But collectively, they
are a regular, albeit diverse, feature of his daily life that could help him
grasp the political world around him and shape his expectations of what
kinds of people tend to believe what kinds of things about politics. While
regular political discussants might be more influential on standard polit-
ical behaviors such as voting, learning, and attitude formation, ignoring
the full set of interactions might constrain the scholarly understanding of
how conversations can affect broader attitudes toward politics. Relatedly,
we have not captured anything about the conversations that Joe could
have had but successfully avoided. The active avoidance of political
conversation may affect Joe in ways that extend beyond the simple
absence of discussion.
Where social scientists might really come up short is in the answers to
the questions of why and how. Why do people talk about politics? The
vast majority of Americans do talk about politics, at least sometimes.
Many scholars start their inquiry with an assumption about what motiv-
ates political discussion, but few have tested these assumptions. The
conventional answer is that people communicate about politics to achieve
instrumental goals related to decision-making. They communicate in
order to learn and to persuade others. Research stemming from this
assumption finds evidence that circumstantially supports it: people
who are more vested in the political system – those who are more
knowledgeable, more interested in politics, and more attached to their
partisan identities – are more likely to report talking about politics more
regularly (e.g. Straits 1991; McLeod, Daily, Guo, Eveland, Bayer, Yang,
and Wang 1996, p. 196; McLeod, Scheufele, and Moy 1999, p. 324;
Huckfeldt 2001, p. 431; Klofstad 2009).
Yet, there is plenty of evidence to suggest that people are not all that
interested in talking about politics, despite the fact that they do report
conversing. For example, Ulbig and Funk (1999) find that only 35 percent
of their respondents reported enjoying political discussions. When we
asked the same question in 2016, only 26 percent of our respondents
reported enjoying political discussions. Despite the fact that most people
report that they talk about politics with at least some regularity, it does
not appear to be an enjoyable experience for many of them. A particularly
telling example is a recent study focused on conversations that happen at
Thanksgiving dinner tables (Chen and Rohla 2018). The authors found
that political disagreement shortens the duration of holiday gatherings, a
scholarly acknowledgment of the common wisdom that holiday dinner
conversations can be incredibly stressful.4 In fact, just after the 2016
election, Pew Research Center reported that “with the holidays approach-
ing, 39% of U.S. adults say their families avoid conversations about
politics” (Oliphant and Smith 2016).5 This suggests that individuals
might engage in political discussions with less instrumental – and perhaps
more social – motivations. Eveland, Morey, and Hutchens (2011) find
that the most commonly reported motivations for political discussion are
indeed social: to pass the time or stimulate an interesting debate.
Learning, informing, and persuading were the least common motivations
for engaging in a political discussion.
What most quantitative approaches to studying political discussion
would miss completely is how Joe’s desire to avoid political conversation
affects his decision-making with respect to when he does talk and what he
says. Thus, a better framing of the questions about the why and how of
political discussion would be: How do people’s motivations for political
discussion affect the way they navigate those conversations?

Navigating Political Discussion


What motivates some people to drive straight into political discussion and
others to pump (or slam on) the brakes to avoid it? Once in a political
discussion, how do people steer around – or crash into – obstacles such as
conflict, lack of knowledge, or social pressures? Quantitatively driven
answers to these foundational questions are surprisingly hard to come by,
but a close read of several important texts in the field reveals convergence
on a proposed explanation: political discussion is driven more by social
than instrumental motivations. Many people do not want to talk about
politics because of the social and psychological costs involved in doing so.
At the micro level, political scientists focusing on the psychology under-
pinning political discussion have zeroed in on the role of personality and
other individual traits. For example, Hibbing, Ritchie, and Anderson
(2011) argue that “psychological predispositions captured by individual
personality traits play an important role in shaping the kinds of conversa-
tions citizens engage in, the setting for those conversations, and the influ-
ence discussion may or may not have on the individual . . . Individuals can
differ in their reactions to political discussions based on their own person-
alities” (p. 602). Research on the Big Five personality traits and political
discussion suggests that individuals who are more extraverted, more open
to experience, and more conscientious discuss politics more often (e.g.
Mondak and Halperin 2008). Those who are more extraverted and more
open to experience have larger discussion networks with more disagree-
ment, while those who are higher in agreeableness tend to be exposed to
less disagreement (e.g. Mondak et al. 2010; Gerber et al. 2012). A similar
vein in the literature highlights the role of conflict avoidance, demonstrat-
ing that conflict-avoidant people talk about politics less frequently (Ulbig
and Funk 1999). Relatedly, Mutz’s use of the notion of civil orientation to
conflict (2002, 2006) captures the idea that some people emphasize social
harmony while expressing dissenting views.
Expanding a bit more reveals another thread: social considerations are
an important part of the mystery of why people do and do not want to
talk about politics. Cramer (2004) writes that “much of political behavior
is rooted in social rather than political processes” (p. 8) and argues that
recognition and activation of shared social identities help people make
sense of the political world, facilitating political discussion in the process.
Conover and Searing (2005) write that “social motives may be much
more important than we have assumed” (pp. 278–279) based on their
focus group research. Mutz (2006) suggests that social accountability –
or the idea that being held accountable to multiple, conflicting constitu-
encies makes people uncomfortable and causes them anxiety because it
threatens social relationships (p. 106) – explains the adverse effect of
crosscutting exposure on voter turnout.
The scholars who have probed deeply into the role of social
considerations – Cramer, Eliasoph, Noelle-Neumann, Mutz, and
Conover and colleagues – have developed compelling explanations from
their mostly qualitative work, but they have not integrated those insights
into rigorous tests of the psychological and social mechanisms that under-
pin discussion more generally. Mutz’s work incorporates the importance
of social accountability and social harmony, but her empirical tests do not
focus on how these concepts affect the emergence of conversation or what
happens in the conversation itself. Scholars are thus left with the sugges-
tion of an answer – psychological and social considerations are important
factors motivating political discussion, perhaps more important than
instrumental motivations – but no quantitative analysis that explores
how such considerations affect the decisions people make about when
and how to discuss politics.
As a field, our theorization about political discussion has centered on
instrumental, politically oriented outcomes, such as the effects of discus-
sion on information transmission and participation. But when quantita-
tive work glosses over or assumes away the very social and psychological
features that make discussion a unique form of political behavior, we lose
the forest for the trees. Decades of foundational qualitative work tell
us that political discussion is a social process: It is complicated, nuanced,
and potentially costly. Political discussion is not entirely driven by instru-
mental political goals, and thus we should expect that it affects more than
just vote choice, public opinion, and political engagement. Discussion has
the power to fundamentally shape social relationships and how individ-
uals think about each other. But without a unifying framework to bring
together the contributions of instrumental and social considerations, our
field will struggle to adequately engage these social and psychological
dynamics and incorporate them meaningfully into future research.

Our Contribution
Our book addresses this missing piece in our understanding of interper-
sonal political interaction, a behavior some consider to be the lifeblood of
democracy. Why, despite high rates of reported political discussion, do so
many Americans dislike talking about politics? And how do these consid-
erations affect the way that people communicate? We argue that we need
to consider the psychological experience of political discussion as navi-
gating a social process that is rife with potential challenges to one’s sense
of self and one’s relationships with others. Our argument emphasizes two
features of political discussion. The first is that political discussion is an
inherently social behavior. As such, we follow seminal research before us
and argue that without assessing the social factors influencing the decision
to talk about politics, we cannot fully understand who talks about polit-
ics, with whom, under what conditions, and with what consequence.
Variation in the cognitive resources people bring to political conversation, such as
interest in politics or political knowledge, or in instrumental goals related
to learning and persuasion cannot fully explain people’s motivation to
seek or avoid discussion, although considerations related to information
certainly are part of the story.
The second is that political discussion is a process. Previous research
on the causes and consequences of political discussion tends to focus
rigidly on the inputs and outputs of discussion, but not the mechanisms
of the relationship between them. For example, dozens of quantitative
empirical studies examine properties of political discussion networks,
such as the amount of disagreement, to assess the effects of political
discussion, such as political knowledge, political engagement, or vote
choice. These studies rarely measure the actual experience of discussion:
who is available to discuss politics, how people scan their environment to
find (or avoid) discussants, how these factors affect the probability that
politics emerges in discussion and whether people decide to engage, the
group dynamics in the conversations that do occur, and what opinions
people actually express. Those studies that do focus on discussion itself –
such as who discusses politics, when, and with whom (e.g. Minozzi et al.
2020) – do not always capture the iterative nature of discussion, or how
people’s experience in one political discussion affects their decision-
making in the next one.
This book is an effort to open the lid on the processes that lead up to a
political discussion and the implications of the conversations that do
happen. Our approach is to build on what we already know about
political discussion, focusing on the gaps in our knowledge as a field,
resulting from untested assumptions and limited methodologies in previ-
ous work. We apply new measurement techniques in order to better study
the decision-making processes that lead to the initiation of discussion, the
nuances of the interactions that do occur, and the consequences of those
conversations on a wide set of political and social outcomes. We view our
contribution in three parts.
First, we provide a new framework, the 4D Framework, for conceptu-
alizing the feedback cycle of interpersonal political interactions. It models
political discussion as a process motivated by people’s pursuit of the goals
that have been shown to motivate other forms of interpersonal behavior:
accuracy, affirmation, and affiliation. Previous work focuses on the inputs
and outputs of political discussion, such as how the partisan composition
of a person’s discussion network affects her level of tolerance, but it
deemphasizes the mechanism of the discussion process. Our contribution
is to provide a unifying framework that synthesizes the qualitative work
emphasizing social considerations with the quantitative work that nar-
rowly focuses on the precursors and outcomes of discussion.
This framework incorporates a wider aperture on both the behavior
we seek to explain and the consequences of that behavior. What do we
mean when we talk about “interpersonal political interaction”? For our
purposes, we focus on instances where two or more people communicate
face-to-face6 about topics that relate to politics, policy, policymakers, or
about topics that reveal the political opinions or identities of the discuss-
ants. We will use the terms “political discussion,” “political conversa-
tion,” and “political interaction” interchangeably in this book to refer to
these interactions.7 Importantly, we consider interactions with both regu-
lar discussants and incidental discussants. We explore conversations that
individuals were unsuccessful in escaping. We also study those inter-
actions that do not happen but could have – the discussion opportunities
that fail to materialize because a person actively avoids them. We note
that the conversations that do not happen may be just as important for
understanding downstream consequences as the conversations that do.
The 4D Framework also incorporates the effect of people’s discussion
behavior on a wider range of outcomes than those that have been studied
before. Instead of thinking about political discussion as a political behav-
ior that narrowly affects political outcomes – such as learning, tolerance,
trust, and voting – we think of political discussion as a social behavior
that should broadly affect social outcomes, such as social estrangement8
and psychological forms of polarization.
Our second major contribution is an emphasis on exploring the factors
that shape the countless choices individuals make during the cycle of
political discussion – both in terms of which conversations to pursue
and how to engage in the discussions that result. We separate and study
four distinct stages of discussion in our 4D Framework: Detection,
Decision, Discussion, and Determination. At each of these four stages,
people make choices about their behavior after weighing considerations
of potential costs and benefits. Some of these choices are active and
deliberate, while others happen so quickly that people might not be
consciously aware of them, especially if they have formed behavioral
habits guiding how they typically engage in political discussion. We
expect variation in not only the decisions people make when faced with
these choices, but also in the salience of different types of decisions. For
example, someone who loves to talk about politics with anyone who will
listen is unlikely to spend much time thinking about the decision to
engage, while that same decision could be quite salient to someone whose
desire to talk politics is contingent on holding opinions similar to those of
a potential discussant.
We spend several chapters unpacking the kinds of considerations that
happen in advance of and during a discussion. In order to do this, we
deploy a set of methodological tools that are new or underutilized in
previous work on political discussion. We asked people to tell us about
their political conversations in their own words. We used behavioral
economics approaches to infer people’s preferences. We hooked people
up to heart rate monitors and then asked them to talk about politics with
others. As a result, we do not address the metrics of discussion that are
well-trodden – discussion frequency, discussion network composition, or
downstream participatory behaviors. Instead, we focus on asking ques-
tions that have not received much attention. For example, how do people
recognize if they agree or disagree with someone before a conversation
begins? Under what conditions do people try to avoid political conversa-
tions? How do they perceive the opportunities and benefits of discussion,
and what concerns them about the possibility of discussion with certain
kinds of people?
This focus on decision-making reveals new explanations for patterns
that long have been detected in Americans’ discussion networks. For
example, many people have studied discussion network composition in
an effort to disentangle the effects of selection versus influence in the
development of political opinions. We demonstrate that the preference
for like-minded discussion is not simply a reflection of environmental
availability of discussants, but rather reflects active choices about
avoiding certain kinds of interactions.
What has not received adequate attention in the discussion literature –
though it has in the deliberation literature (e.g. Karpowitz and
Mendelberg 2014) – is the extent to which social factors affect the choices
people make about voicing their political opinions, even if they are
interested in politics. People are aware of whether their opinions are in
the majority or minority, and whether they are at an informational
advantage or disadvantage. These group dynamics – such as the perceived
knowledge gap between potential discussants, or the power hierarchy
between individuals – up the ante of the social repercussions people might
face for expressing their political views (e.g. Noelle-Neumann 1974;
Glynn, Hayes, and Shanahan 1997; Scheufele and Moy 2000; Morey,
Eveland, and Hutchens 2012), increasing the proportion of people who
choose to silence or censor themselves. While the overall frequency of
political discussion in the United States has remained relatively high, the
rates of self-censorship have increased, largely due to micro-level fears of
social isolation from expressing unpopular views (Gibson and Sutherland
2020). As a result, the largely homophilous conversations in which people
participate reflect a series of behavioral choices that have consequences
beyond the information to which a person is exposed during a discussion
or the effects of that information on learning, persuasion, or vote choice.
Our third major contribution is to emphasize the role of individual
differences that shape the way people navigate the political discussion
landscape. Throughout our inquiry, we are focused on the fact that there
is significant heterogeneity in people’s attitudes, expectations, and experi-
ences with political discussion. We assess the way that demographic,
political, and psychological dispositions moderate how people interpret
the demands and ramifications of potentially contentious social inter-
actions about politics, with an eye toward evaluating what broader
implications that has for the composition of people who most vocally
discuss politics. While we are not the first to consider the role of individ-
ual dispositions in political discussion behavior, we more tightly map
theories about which kinds of traits should matter for which kinds of
decisions within the process of a political discussion.

Why Should We Care about Social Psychology?

In the era of social media, fake news, and decreasing trust in the media,
why should we shine our light on the psychological factors of face-to-face
interpersonal conversation? Social scientists have studied the way people
talk about politics for more than sixty years. Are the remaining puzzles
important enough to pursue? Our answer is a decided “yes.”
Exposure to divergent viewpoints is a fundamental tenet of participa-
tory democracy. But as others have documented, Americans are increas-
ingly siloed away from those who hold differing viewpoints (Mason
2018). As a result of geographic sorting, religious sorting, residential
segregation, zoning laws – and the multitude of other factors that serve
to separate people based on race, class, and education level – Americans
today live and work with people who share many of their identities and
backgrounds.

Many point to the Internet and social media as the ideal way to amplify
the opportunities for exposure to diverse perspectives. But we are increas-
ingly pessimistic about the possibility of digital technologies to provide
the kind of exposure that can foster meaningful exchanges of ideas. In our
separate research agendas, we observe that people quickly recognize and
negatively evaluate people with whom they disagree on Facebook (Settle
2018), and that the information that is transmitted in short bits of written
communication deteriorates in ways that undermine the value of socially
transmitted information (Carlson 2018, 2019). Online interpersonal com-
munication can amplify many of the negative behaviors that both scholars
and the public care about deeply, such as information distortion (Carlson
2018) and belief in misinformation (Anspach and Carlson 2020).
In an era in which Americans are able to select into echo chambers and
the mass media has become largely compartmentalized by the preferences
of its viewers, face-to-face interpersonal interaction remains a conduit
through which people might be exposed to opinions that are different
from their own. People may encounter fewer individuals who differ
greatly from them, but the people they do encounter become dispropor-
tionately important as opportunities for perspective and dialogue.
There is reason to think that interpersonal interactions could both help
and hurt the various facets of the polarization problem America faces in
the twenty-first century. Scholars have explored the possibility that polit-
ical discussion could foster tolerance for the other side (Mutz 2002; Mutz
and Mondak 2006) or help depolarize attitudes (Parsons 2010).
Interpersonal communication can also amplify political learning under
some conditions (Carlson 2019; Ahn, Huckfeldt, and Ryan 2014). But at
the same time, interpersonal interaction can further the deleterious effects
of attitude polarization from partisan media (Druckman, Levendusky,
and McLain 2018). Much of the evidence suggests that disagreeable
deliberation can actually facilitate entrenchment and deepen polarization
(Wojcieszak 2011) or may increase ambivalence in a way that undermines
participation (Mutz 2006), but it is possible that this “dark side” of
disagreement is largely concentrated among people who are in a complete
opinion minority, rather than those who are exposed to a mix of opinions
(Nir 2011; Bello 2012).
How do we know if interpersonal interaction will be constructive or
damaging for the health of our democracy? The findings in this book are
important because they will guide researchers in understanding which
kinds of political interactions actually occur and with what consequence.
The argument and evidence presented in this book highlight a
fundamental tension in modern American politics. The fabric holding
Americans together is composed of the interactions people have with
others in daily life. The effort to maintain this social fabric – to nurture
and retain relationships with family, friends, and colleagues, and to
preserve esteem in their eyes – may in fact undermine people’s ability to
communicate genuinely and effectively across political divides.
At present, it seems that many Americans opt out of deeper conversa-
tions about politics in order to preserve their social relationships. If
interpersonal interaction is to be part of the solution to polarization
instead of one of its contributing factors, researchers must address this
core tension. Doing so necessitates that social scientists explore the deci-
sions that shape our willingness to engage in political discussions with
others, as well as the implications of those choices.
Without assessing the social factors contributing to the “two-step
flow” of information – how information flows from the mass media to
a few who are interested enough to read the news and then on to their
friends through conversation – we, as social scientists, cannot update our
expectations about how it should operate in an era with high levels of
affective and social polarization, when people have unprecedented access
to information both about politics and each other’s views. We must focus
on the widest definition of those interactions – the inadvertent, the
inescapable, the injurious – if we want to understand the full range of
implications for our civic culture. By better understanding how
Americans experience face-to-face political discussions in the affectively
polarized world in which they live, we can begin to consider the ways in
which discussion could remedy some of the major sociopolitical problems
Americans face today.

Roadmap for the Rest of the Book
In Chapter 2, we outline the theoretical core of our inquiry. To fully
comprehend the experience of political discussion, we must think more
broadly about the full set of considerations that structure people’s deci-
sions. We introduce the concept of the 4D Framework to the process of
political discussion, articulating what happens at each of four stages
preceding, during, and after the opportunity to discuss politics. Our
framework emphasizes the role of social considerations, assuming that
people’s primary goals may not be to acquire information or persuade
others in a conversation but rather to preserve their self-esteem and social
ties with their potential discussants. We also introduce the idea that
individual differences between people can make some more sensitive than
others to various features of their context and their discussants. Our
exploration of the 4D Framework uses an eclectic set of methodological
techniques. Chapter 3 is an overview of the methodological core of our
inquiry. We explain the key operationalizations of the concepts in the 4D
Framework and provide context and details for the studies that appear in
multiple chapters throughout the book.
Chapter 4 commences the empirical tests of our argument, beginning
with Stage 1 of the 4D Framework: Detection. We directly tackle a
question buried implicitly in previous findings, as well as our own, that
people prefer like-minded discussants: How do people detect the political
views of others? People must be able to do this if they make active
selections about their discussion partners. The stakes of discussion may
be higher in a polarized environment, but the readily available cues
stemming from a divided and politicized society make the process of
sorting into amicable discussions easier. We show that individuals are
able to use a variety of cues to infer political leanings, including more
obvious cues such as demographic characteristics and extremely subtle
cues, such as first names, pet preferences, and movie preferences. We then
explore the existence of stereotypes that individuals hold about partisans,
under the assumption that these attitudes could affect the ability to
recognize others’ views and willingness to engage in a discussion. Given
that individuals are (differentially) able to recognize the viewpoints of
others, what assumptions do those identities trigger when a person is
deciding whether to engage in a conversation? We find that, consistent
with research on affective polarization, individuals hold negative stereo-
types about outpartisans: They ascribe more negative personality traits to
outpartisans and consider them to be ill-informed, ignorant, and overly
reliant on partisan media. People make these judgments even about out-
partisans they personally know.
Under what conditions are people most likely to discuss politics, and
how do they perceive the costs and benefits of potential conversations of
different configurations? Our focus in Chapter 5 is on the moment of
Decision itself (Stage 2). We use three novel approaches to answer this
question, including a semi-structured discussion experiment, a series of
more than a dozen vignette experiments, and the “name-your-price”
paradigm. The semi-structured discussion experiment, which we call the
True Counterfactual Study, asked participants to reflect upon and
describe either political discussions in which they recently engaged or
political discussions in which they could have engaged, but chose to
avoid. Comparing these descriptions tells us that discussions that were
avoided tended to have larger groups with more disagreement. In the
vignette experiments, we show participants scenarios in which a hypo-
thetical character is presented with the opportunity to discuss politics,
manipulating characteristics of the social context and political discuss-
ants. We find that individuals were more likely to engage in the discussion
when they were in the political majority, when they were more know-
ledgeable than the other discussants, and when the discussion was with
strong social ties. Across all of the different contexts, we find that
approximately 20 percent of our subjects anticipated that the character
in the vignette would deflect a discussion by remaining silent or changing
the subject.
We then utilize the “name-your-price” paradigm, which we introduce
in a recent article (Settle and Carlson 2019). The idea here is to construct
a variety of discussion contexts, similar to our vignettes, and ask partici-
pants how much they would need to be paid to participate in a study that
involved a discussion in each group type. We find that individuals
demand more compensation to discuss both political and nonpolitical
topics with those who disagree, especially when that disagreement is
characterized in terms of partisan identity.
That people have a preference for like-minded discussants in small-
group environments comports with previous findings, but we show that
these preferences exist even when people have other options. What is
surprising is that people also appear to have a slight preference for
discussing politics with others of their same – or lower – knowledge level,
a finding that runs counter to normative ideals of the power of opinion
leadership to rectify the informational deficits of the average American.
The analysis exploring the Discussion stage of the 4D Framework
(Stage 3) is split into two chapters, Chapters 6 and 7. Chapter 6 is about
what people feel during a political discussion, captured using psycho-
physiological measurement during two different lab experiments. In the
Psychophysiological Anticipation Study, we measure changes in partici-
pants’ heart rates and skin conductance (how sweaty their palms get) in
response to anticipating a political discussion. Importantly, we find that
individuals had a much larger psychophysiological response to even the
thought of engaging in a political discussion than they did to observing
both political and nonpolitical uncivil discourse on video. Additionally,
while some of these results are not statistically significant, we find sug-
gestive evidence that participants who were randomly assigned to antici-
pate a discussion with an outpartisan, especially if the discussant was
described as more knowledgeable, had a stronger psychophysiological
response. In the Psychophysiological Experience Study, we measure psy-
chophysiological responses during actual conversations. We present evi-
dence that individuals exhibit many of the psychophysiological signs of
discomfort while actually discussing politics, especially when the conver-
sation was disagreeable.
Building on these findings, in Chapter 7 we turn to better understand-
ing what people say during political conversations. We use a series of
vignette experiments to demonstrate what people anticipate a hypothet-
ical character will do, a technique designed to get around the social
desirability we expect taints many self-reports of discussion behavior.
Here, we are focused less on simply whether people engage in the discus-
sion or not, and instead focus on what they anticipate the character would
say. Which opinions do they express? How comfortable do they feel? We
validate the findings from this set of survey experiments with additional
evidence from laboratory studies where we are able to measure what
people say, as well as how they say it. Our vignette experiments reveal a
modal expectation that the character will express his or her true opinion,
but a substantial minority of people expect the character to self-censor,
conform to the group, or remain silent, especially when the character is
less knowledgeable and in an opinion minority. We observe similar
patterns in lab experiments, finding that the vast majority of people
conformed at least once in an Asch-style study, and that individuals
verbally hedge when describing their identities in another study.
Chapter 8 considers the last stage (Stage 4) of the 4D Framework:
Determination. Here, we examine the implications of a person’s discus-
sion experience, focusing on how individuals anticipate relationships
changing after intense political conversations and how discussion behav-
ior is correlated with a variety of attitudes about social polarization. We
use our nationally representative survey data to capture individuals’
reflections on their own social estrangement behaviors as well as their
projections of such behavior onto hypothetical characters in the vignette
experiments. We uncover that about a quarter of Americans have dis-
tanced themselves socially from a friend because of politics. Americans
have done so in a variety of ways, including stopping all political discus-
sion, ceasing all communication, forbidding their children from playing
together, and severing all social ties completely.
The final empirical chapter (Chapter 9) explores how individual dis-
positions affect the way people navigate the 4D Framework. After explor-
ing the role of gender, race, political interest, and partisanship strength,
we explore how variation in social anxiety, conflict avoidance, and will-
ingness to self-censor are associated with our ability and desire to Detect
others’ views (Stage 1), the Decision to engage in a discussion (Stage 2),
the Discussion itself (Stage 3) and the Determination in the aftermath of a
discussion (Stage 4). We find that demographic dispositions exert less
consistent influence on political discussion than political and psycho-
logical dispositions. Gender is largely influential at Stage 2 – women are
more likely to avoid political discussion than are men – but does not exert
much influence in later stages. Race and ethnicity do not play a large role
throughout the cycle – at least not that we are able to uncover with our
research designs and samples. Interest in politics and strength of partisan-
ship influence almost every stage: Stronger partisans and those more
interested in politics are more likely to detect others’ views in advance,
engage in political discussion, and expect vignette characters to express
their true opinions. The psychological dispositions add nuance, and all
three are influential at unique stages of the cycle.
Finally, in Chapter 10, we assess the broader consequences of the
process of political discussion for the health of American democracy.
We suggest that this process – while not responsible for psychological
forms of polarization among the mass public – certainly contributes to its
perpetuation by decreasing the likelihood that Americans engage in
meaningful exchange with others whose viewpoints differ. On the one
hand, it may be preferable that Americans seem to prioritize protecting
their relationships, stretching the social fabric across the political divide.
But there are reasons to be concerned that this process exacerbates
stereotyped thinking and potentially damages our ability to learn effect-
ively from conversations. It appears that Americans do not want to
follow the prescription of previous researchers who suggest that
informational ailments can be remedied if individuals talk with know-
ledgeable others.
It is our hope that this book sheds light on how political discussion
operates in America today. Our theoretical and empirical contributions
draw on vast literatures in political science, psychology, communication,
and sociology to unpack assumptions made by previous research on
discussion. As we explain more in Chapter 10, we also believe that this
book’s importance extends beyond scholarly debates. There are countless
news articles citing public concern about heightened social tensions
driven by politics. Some individuals lament their inability to have a civil
conversation about politics, while others hide their views to avoid social
repercussions. The research presented in this book helps us characterize
why these tensions persist, as well as which specific features of a discus-
sion foster more discomfort, disengagement, untruthful political expres-
sion, and distaste for the other side.
We do not purport to have the panacea for the problems we uncover.
But we close the book with our thoughts about how we might reverse the
trend toward avoiding contentious political discussions. We consider the
ways in which Americans could channel their desire to preserve their
social relationships into a force for constructive political dialogue.



2

The 4D Framework of Political Discussion

People make thousands of small decisions each and every day from the
moment they wake up to the moment they go to sleep. Some research,
made popular in the strategic management world, suggests that we make
over 35,000 decisions a day (see Krockow 2018 for a discussion). More
than 200 of these decisions are about food alone (Wansink and Sobal
2007). In fact, we make so many decisions on a daily basis that by
necessity most do not feel like a conscious choice. We process the
information around us to inform our behavior, often without ever fully
stopping to consider alternative forms of action.
While cognitive psychologists have focused on the subconscious, auto-
matic, and often irrational ways that humans arrive at their behavioral
choices, political scientists have largely conceptualized behavioral choice
in the realm of politics as conscious, deliberate, and calculated. In this
conceptualization, when people report that they have engaged in a polit-
ical discussion, it is because they made an active choice to do so. Yet, as
we highlighted in the opening vignette of the book, the “choice” to engage
in political discussion is actually the result of a series of micro decisions
that reflect a varying degree of agency on the part of an individual. Many
of the forces that structure the likelihood and experience of a political
conversation – the context in which individuals find themselves, the
distribution of others’ opinions – are out of individuals’ control at the
moment a potential discussion emerges. The decision to talk politics
reflects an assessment of the costs and benefits of doing so, given the
circumstances in the moment the choice is made.
In this chapter we advance our argument that the decisions about
whether and how to engage in a political discussion are shaped by many

https://doi.org/10.1017/9781108912495.002 Published online by Cambridge University Press

of the same forces that guide other interpersonal interactions. The classic
depiction of political discussion behavior suggests that it emerges as a
result of one’s interest in politics, shaped by one’s personality and the
availability of discussion partners. But we argue that this depiction is
incomplete. We propose the 4D Framework to characterize the process
of political discussion at four stages – Detection, Decision, Discussion,
and Determination – during which people’s decisions about how to
engage are motivated by the same goals underpinning interpersonal inter-
actions more generally: to be accurate, to affirm a positive self-concept,
and to affiliate with others. Considerations are shaped by the contextual
factors of the conversation as well as people’s psychological and psycho-
physiological dispositions. Over time, people develop more generalized
propensities and preferences for political discussion based on their previ-
ous experiences navigating the four stages of past discussions.
Focusing on the process of discussion encourages an assessment of
facets of political discussion that have remained un- or under-explored
to date. Our attention shifts from the outcomes of a discussion to the
anticipation of a discussion; from the composition of a discussion net-
work to the experience of a discussion itself; and from an emphasis on
learning and persuasion to an emphasis on social evaluation and relation-
ship management. Reorienting our focus in this way suggests that the
variation in political discussion behavior between people is driven in part
by a set of individual differences that extend beyond political interest,
related to how people process their social environments more generally.

Political Discussion as Interpersonal Interaction
Scholars using both qualitative and quantitative methods have found that
political discussion is most likely to occur among those with whom we
talk about other important matters. A review of the qualitative work on
political discussion sheds light on why: Political talk emerges out of
interactions about other topics. Cramer (2004) notes that “whether polit-
ical conversations last two minutes or thirty, the transitioning between
them and other subjects of life is virtually seamless” (p. 40) and that
“topics related to the government or politicians arise without any
fanfare—they slip in like talk about other subjects” (p. 83).
Where scholars using different methods tend to diverge is in their
assumptions about the motivations people have for political discussion.
Most scholars studying the quantitative patterns of political discussion
have emphasized its role in learning, persuasion, and mobilization
(Klofstad 2010; Sinclair 2012; Ahn, Huckfeldt, and Ryan 2014), consist-
ent with theories that position discussion as an important component of
the two-step flow of information. However, Eveland, Morey, and
Hutchens (2011) find that these instrumental motivations for political
discussion are less common overall, especially among weaker social ties,
echoing Mutz (2006), who writes that, “[p]eople tend to care more about
social harmony in their immediate face-to-face personal relationships
than about the larger political world” (p. 106).
Scholars using qualitative or formal methods back up these assertions,
revealing or modeling the social nature of political discussion. Conover
and Searing (2005) note of their focus groups that “many participants
were interested in the information they gained less for instrumental polit-
ical purposes than to learn about the lives of others and to seek common
ground. Hence, social motives may be much more important than we
have assumed” (p. 279). Cramer (2004) writes that “much of political
behavior is rooted in social rather than political processes” (p. 8). And in
a footnote to his formal model of political discussion, MacKuen (1990)
argues that “one would expect that individual choice to engage in polit-
ical discussion should be particularly sensitive to the nature of the social
environment” because of the inherent role for morality in such discus-
sions (pp. 94–95).
At its core, political discussion is a social behavior motivated by social
considerations. While prior research has revealed this insight, previous
studies have not used it as a starting assumption for quantitative studies
of political conversation. We think this observation is so important that
it should ground our theoretical expectations about why political discus-
sion does or does not occur, what happens when conversations are
initiated, and how people interpret what they experience during a dis-
cussion about politics.
Our emphasis builds directly on Eliasoph’s (1998) framework for her
study of how people create environments conducive for political talk, a
process she interchangeably calls “civic practices,” “political manners,”
or “etiquette” (p. 21). She writes:

This etiquette implicitly takes into account a relationship to the wider world;
politeness, beliefs, and power intertwine, in practice, through this sense of civility.
The concept, then, refers to citizens’ companionable ways of creating and main-
taining a comfortable context for talk in the public sphere. Goffman called this
constant, unspoken process of assessing the grounds for interaction “footing.”
Are there stairs here? Loose gravel? Ice? To walk we have to assess the footing.
Talking is the same: are we talking to make conversation? To accomplish a task?
To show off? The footing draws on that “inexhaustible reservoir” of “common
knowledge” on which participants rely for interpreting each other’s conversa-
tions, which members intuitively understand to be giving meaning to
the interaction.

Why do people need “footing” in the terrain of political discussion?
Although previous scholars have highlighted the potential upshots of
political discussion, talking about politics also entails risk. As Conover,
Searing, and Crewe (2002) write, “the complexity of the motivational
foundation for discussion suggests that it is a risky enterprise that carries
not only benefits but also potential costs. Understanding the risks – and
the corollary advantages of silence – is essential, therefore, to understand-
ing discussion itself” (p. 53). For example, when recounting focus group
members’ evaluations of the potential for self-expression in political
discussion, they write:
Discussion exposes one’s preferences and identities, and makes both the object
of public scrutiny and possibly contestation. This makes discussion a dangerous
enterprise, and not just because of the risk of being ‘misrecognized’ and dis-
respected. There are also the risks of being truly recognized and thereby
revealed to one’s fellow citizens, or being “pressured” to transform your pref-
erences and thereby change the nature of your identity. (Conover, Searing, and
Crewe 2002, p. 56)

These risks can also be oriented toward others’ feelings, as Cramer (2004)
notes that “[p]olitical topics arise when they fit in with the flow of the
conversation and run little risk of offending someone within earshot”
(p. 41).
Eliasoph’s work is designed to understand how people navigate the
process of producing contexts for political talk, especially in informal
contexts where the rules about doing so are not entirely clear. Eliasoph
(1998) notes that even in rule-bound settings, people “relentlessly make
inferences about the nature of those settings, improvising rules that par-
ticipants do not recognize as improvised” (p. 236). She continues:
“[T]here is an extra layer of uncertainty: in contrast to these rule-bound
or traditional settings, in which participants think that they know what is
going on and how to act, participants in post-suburban civic life them-
selves say that they are unsure about how to act. They know that they
have to figure out the rules as they go” (p. 236).
The vast majority of Americans report that they discuss politics at least
occasionally. Navigating contexts in which political talk might emerge
necessitates that people assess the costs and benefits of such discussions.
These assessments structure their decision-making – before, during, and
after a political discussion – and shape the rules and norms people
develop for their discussion behavior.

A Motivational Typology for Political Discussion
What guides people as they navigate the choices involved in political
discussion, potentially risky situations that involve high levels of uncer-
tainty? Although not articulated as a central motivation driving discus-
sion behavior, previous work has highlighted that people’s desire to avoid
discomfort may contribute to their discussion behavior. Mutz (2006)
writes that “[a]lthough cross-cutting contact is conceptually appealing
as an idea, many people find it uncomfortable in practice. Studies of
intergroup contact suggest that it is precisely this anxiety and discomfort
that interfere with the potentially positive effects of cross-cutting contact”
(p. 139). Bolstering Mutz’s notion is a recent study by Pew Research
Center that tackled this question directly, finding that about half of
Americans report that political discussions with people who disagree are
stressful and frustrating and that they would be somewhat or very
uncomfortable discussing politics with people they did not know well
(Doherty et al. 2019).1
But what exactly makes people uncomfortable? Social psychologists
have coalesced around a motivation-based framework to explain behav-
ior in many interpersonal interactions with a potential for influence:
People strive to be accurate, affirm a positive self-concept, and affiliate
with others (Cialdini and Trost 1998; Wood 2000; Cialdini and Goldstein
2004). Individuals become uncomfortable in interpersonal interactions if
these goals are threatened, and therefore behave in a way to ameliorate
those threats. Dailey and Palomares (2004) articulate a similar frame-
work specifically within the context of conversational topic avoidance.
They argue that the reasons individuals avoid discussing a particular topic
can be organized as information-based reasons (akin to accuracy),
individual-based reasons (akin to affirmation), and relationship-based
reasons (akin to affiliation).
A classic example is Asch’s experiment asking participants to judge the
length of a line (Asch 1952). When individuals in a social situation realize
that they are the only ones who think differently, they often conform to
the group, giving the impression that they hold the same opinion as the
group. They might do this because of the desire to be accurate – they do
not want to be wrong and trust the consensus judgment over their own. It
is also possible that they are motivated to maintain a positive self-concept.
Individuals tend to feel good about themselves when they identify with
and conform to groups that they value (Brewer and Roccas 2001).
Finally, they might also be motivated to affiliate with others. By express-
ing the same opinion or providing the same answer as those in the group,
they might be more likely to be included in the group; even if they are
giving an incorrect answer to an objective question or an ill-informed
opinion, at least the whole group will be wrong together.
Given that these three goals have been shown to motivate individuals
in a variety of interpersonal contexts – situations involving social
influence, social norms, message-based persuasion, conformity, and
compliance – we expect individuals to be susceptible to similar motiv-
ations during political interactions. Importantly, these goals can lead
individuals both to avoid and to pursue political discussion. As we
explore more shortly, previous scholars have touched on these very types
of motivations, but they have not fully integrated their observations into a
unified theory of the psychological factors contributing to the decision-
making process inherent in political discussion. We do.
In this book, we develop and test the AAA Typology in which these
three motivations shape the way people find their “footing” in the con-
texts in which political discussion can or does emerge. Political discussion
can be uncomfortable – challenging individuals’ accuracy, affiliation, and
affirmation goals – making the process of navigating political talk stress-
ful for many people. These interpersonal interactions pose the risk of
damaging people’s self-concept and their social relationships, motivating
people to make decisions that minimize the costs and maximize the
benefits of political conversation. Moreover, we suggest that situational
factors of the conversation affect the considerations that people hold.
There are many such factors that could matter – such as the closeness of
the relationship between discussants (e.g. Morey, Eveland, and Hutchens
2012), the power dynamic embedded within that relationship (e.g.
Ohbuchi and Tedeschi 1997; Richey 2009), or the setting of the
conversation – but we choose to focus primarily on two features that have
received considerable prior attention in the literature: the political know-
ledge level of discussants, and the amount of disagreement between them.

Accuracy Goals
As scholars like Huckfeldt and colleagues have explored, the desire to
learn can motivate political discussion. Indeed, scholarship stemming
from the Columbia School would lead us to believe that learning is
actually a primary motivation of the famed two-step flow of information.
Individuals can rely on opinion leaders, such as the more interested and
informed members of their social networks, to relay information about
politics to them. Although Eveland, Morey, and Hutchens (2011) find
that most people do not approach political discussions with the desire to
learn, Eveland and Hively (2009) uncover a positive correlation between
discussion frequency and political knowledge. Moreover, Ahn,
Huckfeldt, and Ryan (2014) argue that the public can be self-educating
through political discussions, even if these discussions do not always lead
to optimal outcomes. Conover, Searing, and Crewe (2002) note that “a
careful reading of the focus group transcripts suggests [that] citizens are
willing to learn from discussions so long as it feels as though they are
educating themselves . . .” (p. 60). A desire for accuracy – to learn more
about politics to make more informed opinions and choices – can motiv-
ate political discussion. But the process of engaging with new information
can also pose a direct threat to accuracy goals.
For people who are less interested in politics, the threat to accuracy
may stem from the realization that they do not know very much about
politics or are not confident about the information they do have. People
want to believe that their opinions are rooted in the truth, and political
conversations include the possibility of a direct challenge to the facts a
person has used to arrive at their beliefs. Many people are not confident in
their own knowledge of politics, a theme that has emerged in previous
work on political discussion. When reporting the findings from their focus
groups, Conover, Searing, and Crewe (2002) write that “citizens who
doubt their own competence as political actors worry that they will be
unsuccessful – inarticulate, uninformed, unpersuasive and, in the worst
case, simply unheard – in political discussions” (p. 47). They continue:
“[P]eople are cautious about revealing their preferences because they are
themselves uncertain: ‘[T]hey do not really know where they stand;
they’re kind of half here and half there’” (p. 54).
This concern is echoed in a passage from Nina Eliasoph’s (1998)
ethnographic study of the “Buffaloes,” a group of country western line
dancers who regularly socialize. She describes a situation in which one of
the regulars made a quip about a politician and was met by complete
silence by the others in the group. She writes of her own impressions,
“I tentatively guessed that they were silent because none knew who [the
politician] was, and each feared that the others did” (p. 108). She says her
hunch was confirmed by an interview with a member of the group who
was present for the exchange: “Betsy provided a key: ‘I did not know
anything about it . . . I’ve heard people say what they said, but I’ve also
heard people say the opposite, that it was a good thing, and I do not know
how to tell the difference. I do not even know enough to know what to
believe’” (pp. 108–109).
People who are more confident of their political knowledge face other
sorts of accuracy challenges. Even before the “fake news” era of contem-
porary politics, Americans often disagreed about the facts of politics,
challenging people’s motivation to be accurate during a political conver-
sation. For example, Bartels (2002) examines partisan bias in perceptions
of objective events, such as economic performance. He finds that although
the economy improved during the Reagan administration, Democrats
reported that the economy had gotten worse and Republicans reported
that it had improved tremendously (see p. 134 and Figure 3). Similarly,
Jerit and Barabas (2012) find evidence for selective learning: Partisans are
more knowledgeable about facts that make their party look good and less
knowledgeable about facts that challenge their party, especially on topics
that are made salient through media exposure.2 These findings imply that
conversations between people who disagree with one another entail the
clash of different interpretations of the facts. While Conover, Searing, and
Crewe (2002) find that people are willing to learn in political discussions
if they perceive they are educating themselves, they note an important
caveat: “[T]hey do not want to be pushed by others to accept ideas that
challenge them” (p. 60).
Altogether, accuracy goals can serve to facilitate both engagement in
and avoidance of political discussion, as well as the various types of
behavior within discussions. Some might be driven to conversation in
an effort to learn from their peers – or to be the ones who inform them!
Others might not be interested in learning, but fear having their know-
ledge challenged by others in the group, leading them to avoid this threat
to accuracy. Wanting to appear informed could lead some to go along
with what the group says instead of what they really believe, but could
lead others to loudly overshare their views in an effort to persuade.

Affirmation Goals
The affirmation goal is the desire to pursue a positive self-concept. Implicit
within the literature on motivated reasoning, selective exposure, and the
propensity to choose like-minded discussion partners is the idea that it
simply “feels better” to receive positive reinforcement for one’s political
preferences and validation of one’s opinions. The pursuit of affirmation
goals tends to highlight the factors that can push individuals toward
engaging in discussion, rather than the factors that can pull individuals
away from engagement. For example, Neblo, Esterling, Kennedy, Lazer,
and Sokhey (2010) find that Americans are much more willing to deliber-
ate than conventional wisdom would lead us to expect. They find that in
contrast to the Stealth Democracy (Hibbing and Theiss-Morse 2002) view
of political participation, most Americans do not view political engage-
ment as “taking their medicine,” but rather gain some utility from engage-
ment. Moreover, individuals might engage in conversations to better
understand the world around them, rather than to pursue more instru-
mental goals of persuading or informing others (Neblo 2015).
Prior literature on political discussion mostly has framed the affirmation
goal in terms of self-expression. In some contexts, the opportunity to
publicly share one’s views can positively reinforce self-concept, as
Conover, Searing, and Crewe (2002) write, “[c]itizens understand political
discussion as an act of ‘self-expression’ . . . Most obviously, when we
discuss issue concerns, we are required to make known our preferences
on those issues” (p. 56). But they go on to explore a second facet of self-
expression involved in political discussion: “Some of our preferences are
‘constitutive’ preferences in that they are central to the meaning of a
particular identity. Therefore, stating your issue positions can expose more
than just your preferences; it sometimes reveals a basic identity, who you
are at your core. Thus discussion can fuse a ‘politics of ideas’ with a
‘politics of identity or presence’” (p. 56). They highlight the idea mentioned
earlier, that discussion risks being “truly recognized and thereby revealed
to one’s fellow citizens, or being ‘pressured’ to transform your preferences
and thereby change the nature of your identity” (p. 56).
There is reason to think that these dangers of self-expression have
increased as our politics have become more polarized and sorted.
Gibson and Sutherland (2020) find that the percentage of the public
who does not feel free to speak their minds has dramatically increased
since the 1950s. In 1954, only 13 percent of Americans reported that they
did not feel free to speak their minds, but in 2019, 40 percent of
Americans felt this way. The authors find that this pattern is driven
primarily by affective polarization: Self-censorship (not feeling free to
speak one’s mind) is highly correlated with affective polarization. This
echoes a central finding made by Gibson (1992) that those who do not
feel free to express themselves politically are less tolerant of others and
tend to have more homogeneous networks. The vilification of the other
side has grown in tandem with increasing numbers of Americans
questioning the motivations of their political opponents. Americans’
views on politics are tied deeply to their values, principles, and priorities.
Discussion about societal problems or policy solutions is often a proxy for
revealing someone’s vision for how society should operate. Disagreement
can represent a direct challenge to someone’s desire to affirm their own
worldviews.
The broader point is that the decision to engage in a political discussion
and how individuals behave in it could be structured by a desire to
maintain a positive self-concept. That could make individuals more likely
to engage and to express their true opinions: Some may view this form of
engagement as fundamentally important to their identity, citizenship, or
the political system more generally. Others, however, might find engaging
in political discussions to threaten their self-concept, and could therefore
be more likely to avoid the activity altogether, or doctor what they say in
those conversations to try to preserve their self-esteem.

Affiliative Goals
The goal to affiliate – to feel included and identify with a group – is
foundational in the way people navigate political discussions. We can
think about group affiliation in the context of political discussion as
shared political views or identities. Similar to the affirmation goal, dis-
cussing politics with like-minded others can provide the opportunity to
connect and form bonds with other people over a common set of values or
priorities. At the core of the work applying social identity theory to
politics is the idea that people are motivated to define their ingroup vis-
à-vis an outgroup to bolster their sense of belonging.
These bonds of affiliation could also be about the intensity of one’s
engagement with politics, not the direction of one’s views. For example, if a
person’s peers are particularly involved in politics and regularly engage in
discussions, she might choose to opt into these conversations in an effort to
avoid feeling “left out.” Krupnikov and Ryan (2022) argue that one of the
most important political divides in the United States is between those who
are deeply engaged in politics and those who are not, echoing a point raised
by Klar and Krupnikov (2016) about Independents’ distaste for partisan-
ship more generally. The desire to fit in with a group could outweigh the
costs of discussing a topic that someone finds boring (at best) or distasteful
(at worst), leading them to engage. Humans are social creatures and the
desire to affiliate socially with a group and feel part of a team can have a
powerful influence on decisions to opt into political conversations.

However, more frequently noted than the idea of affiliation as a
positive motivation for discussion is the notion that political talk can
serve as a threat to affiliative goals. In this conceptualization, people
primarily care about their social relationships, and the introduction of
political talk risks damage to those bonds. This is the theme most likely to
emerge from prior qualitative work on why people pursue or avoid
conversations. The idea is central to Noelle-Neumann’s (1993) work on
the spiral of silence, the notion that people have “a desire to avoid
isolating ourselves, a desire that apparently all of us share” (p. 6).
Cramer (2010) notes this as well in her observations of the craft guild:
“The women in the guild tend to evade political topics . . . They do not
know one another or their political dispositions very well, and thus they
tend to avoid politics for fear of disrupting the air of politeness (MacKuen
1990; Eliasoph 1998)” (p. 37).
The threat to affiliation is of particular concern when conversations
turn contentious. As Conover, Searing, and Crewe (2002) write, “The
potential for disrupting social relations is heightened when discussion
becomes passionate. A few people ‘like heated discussions’; they ‘sit back
and enjoy it’. But far more worry about the social consequences of
contentiousness” (p. 55). They add:

Close relationships are strong enough to withstand the potential disruption that
might occur from either abruptly – and rudely – disengaging from a contested
discussion or turning it into a real argument full of passion and anger. With close
friends and family, “you feel like they’re going to accept you . . . You might have a
temporary argument but they love you and you love them. And you’re not going
to lose that love just because of politics.” By contrast, persuasive and argumenta-
tive discussions with acquaintances run the risk of alienating people and disrupt-
ing social relations that must be maintained (such as co-workers). Outside of close
relationships, you cannot be sure if you will be accepted “for yourself or just by
what you say or how you act.” (2002, p. 57)

This insight is also foundational for Mutz’s (2006) argument that cross-
cutting networks discourage political participation: “[T]he demands of
social accountability create anxiety because disagreement threatens social
relationships” (p. 106). While her operationalization of social account-
ability precludes a full test of this idea,3 she supports her assertion with
work from Rosenberg (1954–1955) and Mansbridge (1980), who both
find evidence from qualitative work that the desire to avoid conflict and
minimize threats to interpersonal harmony were important factors in their
studies of political engagement.

Thus, while most political discussion research tends to align with the
notion that affiliative goals can push people away from political discus-
sion or lead them to engage in conformity within conversations that do
happen, affiliative goals can lead some to be more likely to engage.

THE 4D FRAMEWORK
Weaving together the motivational theory proposed by social psycholo-
gists with the observations from previous work on political discussion
leads us to propose the framework that will guide our empirical analysis.
We organize this framework of political discussion as a cycle in four
stages: Detection, Decision, Discussion, and Determination. The choices
a person makes at one stage have implications for the opportunities
available to them at the next, and unique individual dispositions affect
people’s preferences and behavior in each stage.
This framework emphasizes different facets of discussion behavior
than has previous scholarship. First, we focus considerably more on what
happens in anticipation of a potential political discussion. People must get
a read on their discussants before a conversation has even begun if they
are going to minimize their discomfort or maximize their enjoyment. We
care about the decision to engage in an interaction, not simply the
conversations that result. Second, we focus considerably more on the
nonpolitical factors that guide the decision to talk politics and what
people choose to say. Even those people who are interested in and
knowledgeable about politics may have reasons that they seek to avoid
political conversation in certain contexts, just as those who are less
interested might have reasons to pursue conversations. Third, we suggest
that most people prioritize preserving their self-esteem and their social
relationships over the instrumental political benefits that may be gained
from a discussion. If people perceive that a political conversation will do
lasting damage, they are likely to steer clear of the topic, even if they could
learn something or improve their vote choice in the process. Similarly, if
people perceive that a political conversation will improve their social
relationships, they are likely to engage, even if they have little to gain
politically.
Throughout the book, we rely on a core analogy for the 4D
Framework: the “choose your own adventure” books popular in the
1980s and early 1990s. The books would trace the storyline of a protag-
onist, unfolding in small chunks, each ending with a decision about two
or more courses of action that the main character could choose. The story
changed depending on the reader’s preference, and thus a single book
could contain dozens of different storylines.4 The choices the reader made
for the main character earlier in the story had consequences for the range
of choices available at later stages of the story, although it often felt like
some endings were inevitable.
We conceptualize the process of political discussion in a similar way.
People make many choices as they navigate through social contexts in
which political discussion may appear. The three motivational goals
discussed previously – accuracy, affiliation, and affirmation – shape the
decisions that individuals make before, during, and after a political con-
versation, as we elaborate on shortly. The choices they make depend on
the circumstances in which they find themselves and the people with
whom they might interact. These choices impact the options they have
moving forward, and the decisions they make shape the decisions they
face in the next “round” of their story. Additionally, we expect there to be
significant variation between individuals, so within each stage of the cycle
we describe, we ultimately want to explain variation in the preferences
people have and how that affects their choices.
We will use the “choose your own adventure” analogy throughout the
book to highlight some of the decisions that our protagonist from the
opening vignette of the book, Joe, makes during his day-to-day experi-
ences. Each stage of the 4D Framework presents a decision point for Joe,
and each complete storyline would be a lap around the cycle we charac-
terize in the framework.

Stage 0: Baseline Propensity for Political Discussion
Just as bountiful as the literature studying the composition of political
discussion networks is the literature assessing the individual characteris-
tics that influence a person’s propensity to engage in political discussion.
Most work has used the Big Five personality framework, often measured
using the Ten Item Personality Inventory (TIPI; Gosling, Rentfrow, and
Swann Jr. 2003). Personality
traits appear to be related to the frequency of political discussion and the
composition of discussion networks (Hibbing, Ritchie, and Anderson
2010; Mondak 2010; Mondak et al. 2010; Gerber et al. 2012). For
example, individuals who are more extraverted tend to discuss politics
more frequently (e.g. Mondak 2010). Gerber et al. (2012) find that
extraversion and emotional stability moderate the relationship between
political agreement and frequency of discussion, such that those who are
less extraverted and less emotionally stable will be more likely to
avoid disagreement.
The Big Five personality traits are a useful framework for a wide
variety of behaviors, but we seek to move beyond the broad traits to look
at particular characteristics that could matter for affecting the way indi-
viduals anticipate, experience, and internalize political discussion. We
look toward traits that should uniquely affect a person’s awareness of
and sensitivity to the social costs involved in political discussion: a per-
son’s inherent comfort level with social interaction, as measured by facets
of their personality or their sensitivity to social anxiety. Specifically, we
focus on social interaction anxiety, conflict avoidance, and willingness to
self-censor (WTSC).
In the next chapter, we go into more detail about how we measure
these traits. As a preview, we expect that those who are more socially
anxious will be more likely to avoid political conversations, regardless of
situational factors, such as disagreement or knowledge asymmetries.
When socially anxious people end up discussing politics, we expect that
they will be less likely to express their true political views and will feel
less comfortable than those who are less socially anxious, but these
differences might not be driven by situational factors. At the
Determination stage, we expect social anxiety to be associated with
increased reports of politically motivated social estrangement, in an
effort to avoid future interactions.
An aversion to conflictual interactions might lead people high in the
trait to be more practiced in detecting potential disagreement before a
discussion even begins. We expect those who are conflict avoidant to be
more likely to avoid political discussions, especially when they anticipate
disagreement. Unlike social anxiety, conflict avoidance should be strongly
related to situational factors, such as disagreement. Once they are
engaging in a discussion – voluntarily or not – conflict-avoidant individ-
uals might be less willing to express their true opinions, which survey
evidence from Pew Research Center suggests could be the case for some
topics (Doherty et al. 2019). For example, the authors find that only
26 percent of those who are low in “comfort with conflict” report that
they would share their views about Trump during a dinner conversation
with others who disagree, compared to 76 percent of those who are high
in “comfort with conflict.” Similarly, we expect that those who are more
conflict avoidant might be more likely to silence their views, censor them,
or even conform to the group’s majority opinion.

We expect that those who are more willing to self-censor might be
generally less comfortable in political discussions. When it comes to Stage
2, deciding whether to engage in the discussion, willingness to self-censor
might play less of a role. On one hand, those who are more willing to self-
censor might be generally uncomfortable in political discussions, prefer-
ring to avoid situations in which they would feel the need to self-censor.
On the other hand, willingness to self-censor might simply structure how
individuals behave once in a conversation and be less strongly associated
with the decision to engage. Once they are engaging in a discussion (either
by choice or by obligation), those who are more willing to self-censor
should be more likely to silence, censor, or even conform to the group,
just as with those who are more conflict avoidant.
We conceptualize these three traits not as entirely fixed but as innate,
relatively stable characteristics subject to updating. These traits mold the
way people perceive their environment and their evaluations of the social
costs involved in engaging in particular kinds of conversations. And they
shape the lessons that a person takes away from political conversations,
so that the next time that person is presented with the chance to talk
politics, they have formed an updated set of expectations about the
relative costs and benefits of doing so. Because these traits are highly
correlated with each other (see Table 3.1 in Chapter 3), we assess their
contribution separately from one another in Chapter 9 to isolate the
impact of each trait. But it is important to remember that these character-
istics frequently co-occur.

Stage 1: Detection
One of our starting assumptions is that people assess the social costs of
political discussion. In order to do so, people must be able to ascertain the
likely course of the conversation to calculate whether the potential bene-
fits involved in discussing politics outweigh the potential risks. The ques-
tion is, how do people do this? Research on perceptions of political views
(e.g. Rule and Ambady 2010) and political discussion networks is often
conducted separately. In fact, some of the seminal research on social
political communication only discusses perception of discussant views as
a theoretical aside, removed from the empirical core of the article (e.g.
Huckfeldt and Sprague 1987).
The extant literature has focused primarily on the interactions that
people report as being most frequent or regular. And most people report
knowing with high degrees of certainty the viewpoints of their friends and
family members. One particular study from Pew suggests that approxi-
mately 90 percent of people report knowing the viewpoints of their
family, close friends, and spouse or partner on an issue salient during
the time of the study (Hampton et al. 2014). Huckfeldt and Sprague
(1987) find that individuals are strikingly accurate in identifying the
presidential candidate preferences of political discussants who agree with
them (91–92 percent accurate), but substantially less accurate in identify-
ing the preferences of those who disagree.5 More recent studies have
critiqued some of this early work, arguing that the previously reported
statistics are difficult to interpret because they often conflate inaccuracy
with uncertainty depending on the authors’ treatment of “don’t know”
responses (Eveland and Hutchens 2013; Eveland et al. 2019). However,
there are reasons to think that most people are adequately accurate in
reporting agreement but are less able to recognize disagreement, and that
increased discussion frequency improves accuracy (Eveland and Hutchens
2013). Using a method that allows for the flexible measurement of uncer-
tainty, Carlson and Hill (2021) find that individuals are able to accurately
infer presidential vote choice of strangers based on a variety of demo-
graphic and political cues, and accuracy increases as individuals have
more characteristics in common.
What matters more than accuracy per se is perception: Accuracy may
be a prerequisite for effective communication (Huckfeldt, Johnson, and
Sprague 2004), but perception matters more at the outset of a conversa-
tion when people’s guess of their discussants’ views is going to guide their
decision-making. The question that remains is how people recognize and
perceive these differences among their weaker social ties or for the people
they interact with incidentally throughout the course of their day. When a
stranger makes a comment in passing, what additional clues do people use
to guess his or her likely political views?
In a polarized society, one in which partisan politics has become
aligned with worldviews, religions, and identities (Mason 2018),
Americans view the political world through a partisan lens and have
come to recognize their ingroup and their outgroup. Hetherington and
Weiler (2018) describe a microcosmic example of the extent to which
worldview preferences have resulted in parallel but distinct realms of
existence for the “fixed” and the “fluid,” worldviews that correlate with
political identities. They write that “[t]oday, Americans are divided by
choices that seem much more trivial than where they live, work, and
worship . . . while these day-to-day preferences say less about the convic-
tions and values than do choices about occupation, residence, and
religion, they nevertheless reflect how Americans think about the world
more broadly” (p. 90).
Hetherington and Weiler (2018) assert that people do make inferences
about others based on the existence of “tells,” day-to-day lifestyle prefer-
ences that are “signs of larger beliefs about the world, and about their
political commitments” (p. 91). Deichert (2019) provides the evidence for
this claim. In addition to showing that certain cultural preferences –
depicted both in written description but also in visual manifestations –
are strongly associated with one or the other political party, Deichert uses
an implicit association test to demonstrate the existence of these cognitive
linkages in long-term memory. We connect her ideas to the process of
political discussion. People perceive potential political discussants with
these assumptions already in place, and we evaluate others through the
associations we hold about the other side. Recognition of someone’s
political identity triggers a set of expectations from previous interactions
with people in the same group.
Think back to the vignette about Joe’s day in Chapter 1. When he
got on the bus, he instinctively scanned potential seatmates to screen
out people who seemed like they might initiate political discussions.
He had a read on his coworkers’ political views and recognized when
the group members gathered around the water cooler did not agree
with one another. He knew his in-laws’ views with certainty on the
topics he had heard them discuss, and based on what he knew about
their media preferences, he could extrapolate what they thought about
other policy issues. Joe is not walking around trying to be an amateur
political psychologist, but without too much effort, he is mapping
the political environment around him. As we detail more later, Joe’s
behavior is not unique.
We explore this stage of the process more generally in Chapter 4,
where we highlight results from a series of studies – both our own and
by others studying this process – that suggests the extent to which people
make assumptions about others based on easily identified physical and
verbal signals. It turns out that many of the differentiating factors that
social scientists have recognized between those on the left and those on
the right – such as consumer preferences, worldview differences, and
baby-naming preferences – are recognizable to the public as well. These
results suggest that awareness exists of the political views of potential
discussion partners before a political discussion is even initiated, and that
the assumptions these identities activate could shape people’s interest in
engaging in conversation.

https://doi.org/10.1017/9781108912495.002 Published online by Cambridge University Press



Stage 2: Decision
One of our main contributions is the idea that the anticipation of discus-
sion matters. What people expect to happen shapes their subsequent
behavior. If people expect a positive, enjoyable discussion, they should
be more likely to engage, regardless of potential instrumental costs; if
people expect a negative, upsetting, unpleasant discussion, they should be
more likely to opt out, regardless of potential instrumental benefits.
Previous research has focused largely on the subset of the conversations
that materialize, and materialize regularly enough to be reported on a
survey. As a result, discussion frequency and discussion network compos-
ition become the variables studied, at the expense of studying the emer-
gence of discussion itself. In effect, this truncates the dependent variable
and systematically misses the process of deciding to discuss politics. We
think the process of anticipation – the stage at which someone actively
decides to engage (or not) in a discussion based on the information they
have at the onset of a conversation – is important in its own right.
People do not arrive as blank slates at the moment a political discus-
sion emerges. They bring with them their individual predispositions as
well as the assumptions they have made about their potential political
discussion partner. There is always a moment of decision – albeit fleeting –
preceding a political conversation. The opening narrative of this book
characterizes several such decisions. If someone else has initiated the
conversation, a person must decide whether and how to respond, or to
derail the political aspect of the conversation. If a person initiates a
conversation with someone else, they decide when, where, with whom,
and what to say to engage the other person.
At the onset of a potential conversation, a person can make informed
guesses about a number of facets of the conversation. How many people are
present? What is the ratio of agreement to disagreement in the group? Given
the composition and context of the discussion, what can the person antici-
pate about the experience of the discussion itself based on the interaction of
these factors? A private conversation with a discussion partner whom someone
knows well and knows agrees with her is a very different experience from a
discussion with a seatmate on a plane, train, or bus.6 A discussion with a
small group of coworkers around the water cooler at work is different from
a conversation around the family holiday dinner table.
As a result of this assessment, there is variation between people not only
in the frequency with which they discuss politics but also in their general
orientation toward political discussion. This was prominently studied in

Ulbig and Funk's (1999) exploration of conflict avoidance and political
participation, including engagement in political discussions. The research-
ers use data from the Citizen Participation Study (1989–1990) in which
respondents were asked whether they “usually try to avoid political discus-
sions,” “enjoy them,” or “fall somewhere in between.” The results suggest
that in 1990, about 22 percent of Americans reported that they usually try
to avoid political discussions. We like this survey question and included it
on our own nationally representative surveys, along with other questions
ascertaining people’s preferences for discussion. We find that the vast
majority of people who enjoy discussions will talk with someone regardless
of whether their views are known or shared; the vast majority of people
who avoid discussion will do so if at all possible. And for those who find
themselves somewhere between enjoyment and avoidance, their knowledge
and expectations of their potential discussant’s views shape their decision
to participate in a discussion or not.
Because of the difficulty in studying the proper counterfactual to
conversations that do occur – those that could occur, but which people
opt to avoid – we had to devise creative methods to assess the factors that
could matter in a person’s decision calculus. In Chapter 5, we utilize four
novel approaches to collect data to compare conversations that could
happen but do not, to those conversations that could happen and do.
Understanding the systematic differences between these conversations
allows us to uncover the decision criteria people appear to be using.
Our comparison reveals that discussions that were avoided tended to
involve larger groups with more disagreement, and that aversion is
stronger when disagreement is characterized in terms of partisan identity.
Individuals were more likely to engage in a discussion when they were in
the political majority, when they were more knowledgeable than the other
discussants, and when the discussion was with strong social ties.

Stage 3: Discussion
Despite the vast research on discussion network composition and the
emphasis on disagreement in the literature, considerably less attention
has been paid to the dynamics of organic political conversation itself. We,
as a field, know comparatively little about the actual contours of political
discussions primarily because our methods are not well suited to captur-
ing those dynamics: Quantitative approaches rely on faulty human recol-
lection and reporting, and qualitative approaches are costly and produce
findings that are often difficult to generalize. Our focus here is not on the

content of the conversation; our preliminary research suggested that
people are much less sensitive to the topic of the conversation than they
are to the constitution and characteristics of the people within it. Thus, we
assess people’s experience and behavior while engaging in discussions of
varying composition.
Implicitly embedded in previous work on discussion are three assump-
tions: that people actually speak when they report they have had a political
discussion, that people express their true opinions, and that disagreement
tends to make people uncomfortable. Together, these three assumptions
facilitate the theory explaining why discussion networks are largely homo-
geneous: Individuals prefer to avoid disagreement and thus seek out like-
minded discussants. We believe that it is important to not only test these
assumptions by finding unique ways to measure the discussion experience,
but also to more fully examine the processes that underpin the behavior we
observe during discussion. Doing so allows us to move beyond studying the
inputs (e.g., discussion composition and context) and outputs (e.g., attitude
change, persuasion, polarization, learning, engagement), and instead
unpack the otherwise “black-boxed” experience of what happens between
these inputs and outputs. Our approach also allows us to shed more light
on the possibility that some people seek out disagreement. While these
people make up a minority of the American public, they are an important
piece to helping us understand political discussion holistically.
To try to better measure what people experience in the course of a
political conversation, we first focus on the physiological experience of
discussions, measuring people’s heart rate and electrodermal activity as
they engage in a discussion. We find suggestive evidence that individuals
have stronger physiological and more negative emotional responses to
discussing politics with those who they perceive disagree with them and
are more knowledgeable about politics.
We then push further to understand the psychological processes that
contribute to the experience of discussion. The motivational framework
we just discussed – accuracy, affirmation, and affiliation considerations –
affects what people choose to say when they do engage. For example, the
demand to form opinions about complex issues might be enough to trigger
stress for some individuals. Alternatively, people may easily form opinions
about political issues, but be put off by the expectation to express that
opinion when asked. Based especially on concerns about accuracy, indi-
viduals may be concerned about articulating their thoughts or beliefs out
of fear that their beliefs will be factually incorrect. Yet, others might be
excited by the opportunity to inform others and help them arrive at

opinions based on accurate information. Affiliative concerns could matter
in both the short term and the long term. In the short term, individuals
might be concerned with their immediate emotional well-being, leading to
tension over which opinions to express to the group. Maintaining cogni-
tive consistency might lead some to want to express their true opinions,
but this could come at the cost of sparking disagreement and discomfort
within the group discussion. In the longer term, individuals might be
concerned about their future ability to affiliate with others in the conver-
sation, given what could be revealed about their preferences. Individuals’
deep-seated concerns about the social ramifications of engaging in a
discussion are merited, given our findings in Chapter 8. However, some
might anticipate affiliative opportunities, such as getting to know people
on a deeper level, which could drive them to engage.
We then turn to assessing the behavioral outcomes – conformity, cen-
sorship, stating one’s true opinion, entrenchment, and silencing7 – sug-
gested by social psychology research to be potential responses in a
discussion. We expect that a person’s motivational goals will structure
how they behave within a conversation and we are interested in what
opinions (if any) individuals choose to express, conditional on features of
the conversation itself. Using vignette experiments, we find that individuals
are less likely to express their true opinions in a conversation when the
discussants disagree with them and when discussants are more knowledge-
able about the topic. We also examine these behaviors in lab experiments,
finding that individuals will conform to group opinions (see Carlson and
Settle 2016 and Levitan and Verhulst 2016). Video recordings of actual
political discussions held in the lab reveal a myriad of verbal cues to assess
how people express their opinions and respond to disagreement, and we
show that people are often quite ambivalent in expressing their identities or
opinions. While these behaviors that demonstrate individuals’ unwilling-
ness to express their true opinions were common – and sometimes the
norm – there were some respondents who freely, enthusiastically expressed
their opinions to others. Our lab experiments suggest that strong partisans
were among the least inhibited in expressing their opinions.

Stage 4: Determination
The end of the discussion itself is only the beginning of its consequences.
We already know that people who engage in agreeable political discus-
sions are more likely to engage in other forms of participatory behaviors,
but that some of this relationship is driven by interest in politics and

strength of partisanship. There are mixed findings about the consequences
of exposure to disagreement, and Klofstad, Sokhey, and McClurg (2013)
note that the difficulty in measuring disagreement can lead to different
conclusions about its effects. For example, some research suggests that
exposure to disagreement can lead individuals to be less likely to partici-
pate in politics (Mutz 2002; McClurg 2006), and more ambivalent (Mutz
2002). In contrast, other research finds that exposure to different views in
some settings, such as at church or at work, can actually increase political
participation (Scheufele et al. 2004). Much of the relationship between
disagreement and decreased participation is driven by networks in which
individuals face complete disagreement, rather than a mix of opinions
(Nir 2011; Bello 2012). Putting political participation aside, exposure to
disagreement has been shown to increase tolerance for the other side
(Mutz 2002, 2006), and interparty contact through conversation has been
shown to reduce outparty hostility (Rossiter 2020; Levendusky and
Stecula 2021).
The focus on the relationship between the frequency of discussion or
the composition of a person’s discussion network and downstream behav-
iors like voter turnout and vote choice does not capture the full ramifica-
tions of engaging in political discussion. We construe these ramifications
more widely, as we think that both the political and social aspects of a
political interaction have bearings on a person’s future choices about
social interactions with their political discussants and attitudes toward
the polity. For example, a respondent to one of our studies reported that
he or she avoided a political conversation with his or her friend because
“the person was a friend of mine and I was afraid that my opposite
opinion would hurt our friendship.”
What a person feels walking away from the conversation – and the
lasting social benefits or damages that result – impact the assumptions
they make about others and affect their decision-making process moving
forward. In Chapter 8 we review these outcomes and the expectations
people have about the consequences of discussions, exploring the impli-
cations of the cycle of political discussion we articulate in the 4D
Framework in an era of polarization, showing that many of these dynam-
ics are self-perpetuating.
Conclusion

This chapter has provided an overview of the 4D Framework we use to
evaluate the process of political discussion. Moving beyond previous

research, we emphasize the role of motivational goals in shaping
people’s expectations of, experience during, and evaluation of interper-
sonal political interactions with a wide range of potential discussion
partners. In the chapters that follow, we endeavor not only to describe more
thoroughly what happens at each stage of the cycle, but also to explain
variation in preferences and choices between people as they navigate
each stage.
We want to be clear about what we are and are not proposing. We do
not test whether the 4D Framework we suggest is a better fit than
alternative frameworks that could be proposed. Our assessment of the
political discussion literature within political science and political com-
munication reveals a lack of other comprehensive frameworks to study
political discussion. The ample research on the topic tends to pick and
choose theories that apply to one facet of the discussion process instead of
thinking more comprehensively about the role of political discussion in an
individual’s day-to-day experience. Nor do we tackle the topic as a
cognitive psychologist might. We do not design studies to test people’s
recognition of the framework that structures their decision-making.
Instead, we proceed as political psychologists: seeking to understand the
preferences and decision-making guiding nuanced and specific political
attitudes and behaviors.
Finally, we acknowledge that the variation between individuals
includes the possibility that certain stages are more salient for certain
kinds of people. For example, individuals who only engage in political
discussions if they know that the others agree likely expend more
energy in the Detection and Decision stages than the 15 percent of
people who say they both enjoy conversations and will talk with
anyone, regardless of whether they know their views. It is possible that
some people bypass some stages, or are not consciously aware of the
choices they make: More than 60 percent of our respondents reported
that they do not try to guess others’ views in advance of a discussion.
We choose to leverage this variation to help us understand the wide
range of strategies that people pursue, and what selection effects are
created in terms of who discusses politics most frequently, with which
kinds of people.
Our analysis in this book is focused on the micro-level political discus-
sions that happen in Americans’ daily lives. But unpacking the 4D
Framework and thinking about political discussion as a cycle has
immense implications for not only better understanding political discus-
sion in the discipline, but also for developing expectations about the

consequences of political discussion on broader patterns and trends in
American political behavior. As we discuss in more detail in the book’s
conclusion, understanding the dynamics of political discussion in a more
nuanced way can help researchers assess systematic biases in who is most
vocal in political discourse, as well as the impact of discussion as a
contribution to, or potential solution for, polarization and the spread
of misinformation.



3

Data Collection

The theory of the 4D Framework builds directly on previous seminal
work – both qualitative and quantitative – by taking a multimethod
approach to understanding the social and psychological dynamics of
political discussion. This approach is designed to address some of the
empirical limitations of previous work by using new research designs that
allow us to explore the nuances of political discussion experiences.
However, this methodological pluralism has resulted in a large and eclec-
tic set of studies that come to bear on our central questions.
This chapter is a guide to help the reader navigate our approach to
exploring the key concepts and studies that appear repeatedly throughout
the book. Parts of this chapter might be a bit technical and are most likely
useful to those interested in conducting research on these topics.
However, we think this foundation provides context for the results that
we present in the empirical chapters that follow. We proceed in two ways.
First, we describe the measurement and operationalization of the core
concepts we study. We begin by explaining the inputs to the 4D
Framework, including individual dispositions, situational factors, and
the psychological considerations underpinning the AAA Typology
described in Chapter 2. Throughout the book, these 4D Framework
inputs largely take on the role of independent variables. We then focus
on 4D Framework outputs, explaining how we operationalize our core
dependent variables: people’s preferences, choices, and behaviors before,
during, and after political discussion. We think about 4D Framework
outputs as both moments of decision (how individuals react to specific
situations during a political discussion experience), as well as general
tendencies (individuals’ broad political discussion preferences).


https://doi.org/10.1017/9781108912495.003 Published online by Cambridge University Press


Second, we describe the details of our core data collection efforts,
organized by study type. These include surveys, psychophysiological lab
experiments, and survey experiments. We provide an overview about the
samples and research designs for studies that appear multiple times
throughout the book; within each chapter, we provide more details as
we present the results, as well as describe studies that appear only in that
chapter.

Conceptual Operationalization of the 4D Framework
In the previous chapter, we presented the theory of the 4D Framework.
The 4D Framework conceptualizes political discussion as a cycle, such
that the decisions people make at one stage affect the opportunities
available to them at the next. Individual dispositions affect our behavior
at each of four stages in the political discussion process: Detection,
Decision, Discussion, and Determination. Here, we identify the discrete
concepts and measurement that we use throughout the book.

4D Framework Inputs
We focus on three broad categories of inputs to the 4D Framework:
individual dispositions, situational factors, and psychological consider-
ations. We describe each in turn, providing a brief recap of the role each
plays in the model and then explaining how we measure each construct in
the book.

Individual Dispositions
Individual dispositions structure the way in which people navigate the 4D
Framework, a point we elaborated on in Chapter 2. We focus on three
types of individual dispositions – demographic, political, and psycho-
logical – and discuss our theoretical expectations and empirical analysis
in more detail in Chapter 9. For demographics, we examine gender and
race and ethnicity, as measured using respondents’ self-reported gender
and racial or ethnic identities on our surveys. We examine interest in
politics and strength of partisanship for our political dispositions. We use
standard measures of these questions, commonly used in major surveys,
such as the American National Election Study (ANES). In this chapter, we
want to provide more detail on the psychological dispositions that we
explore because they are less commonly employed in political science.
We use well-validated measures that are theoretically related to
people’s sensitivity to and preferences for social interaction: social anx-
iety, conflict avoidance, and willingness to self-censor.
We begin with an individual’s level of comfort in social interactions,
regardless of whether they involve politics. Social interaction anxiety is
conceptually related to certain facets of extraversion, emotional stabil-
ity, and agreeableness, but measures a distinct facet of a person’s innate
disposition that we argue is especially important for social aspects of
political behavior like political discussion. Despite an emphasis on the
importance of anxiety in political decision-making (e.g., Marcus and
MacKuen 1993; Marcus, Sullivan, Theiss-Morse, and Stevens 2005;
Ladd and Lenz 2008; Ladd and Lenz 2011; Gadarian and Albertson
2014; Denny 2016; Wagner and Morisi 2019), very few studies in
political science have examined the impact of anxiety about social
interaction itself.
To measure this trait, we use the Social Interaction Anxiety Scale
(SIAS), developed by Mattick and Clarke (1998), which measures anxiety
in social interaction situations, assessing fears broadly, according to the
DSM-III-R definition of social phobia (Heimberg et al. 1992; Mattick and
Clarke, 1998). The scale captures anxiety about contingent social inter-
actions, defined as interactions in which someone tailors his or her
behavior to the responses of another person (Mattick and Clarke 1998,
p. 457). In these cases, individuals continuously monitor the behavior of
another person and adjust their own behavior accordingly. Several
researchers have evaluated the validity of the SIAS against other meas-
ures, and they have consistently found the SIAS to be a valid and reliable
measure of social anxiety (e.g., Heimberg et al. 1992; Brown et al. 1997;
Mattick and Clarke 1998).
Second, we focus on conflict avoidance. This trait affects several facets
of political discussion behavior: those who are conflict avoidant have
been shown to engage in political discussions less frequently (Ulbig and
Funk 1999; Doherty et al. 2019) and to consider discussions with those
who disagree to be more stressful and frustrating and less interesting and
informative (Doherty et al. 2019). Additionally, conflict avoidance has
been shown to moderate the effects of exposure to disagreement on
subsequent political behaviors like voting (Mutz 2006). These established
patterns suggest that conflict avoidance should affect additional facets of
decision-making in the discussion feedback loop.
The conflict avoidance measure we deploy was originally developed by
Rahim (1986) and asks respondents to report the extent to which they

consider each of six statements to describe them. One example statement
is “I usually avoid open discussion of my differences with others.” We
aggregate the responses to each statement to give each respondent a
conflict avoidance score. This measure differs from other measures of
conflict avoidance that have been used in previous research on political
discussion. For example, Ulbig and Funk (1999) measure the concept by
asking respondents whether they try to avoid political discussions because
they can be unpleasant, enjoy discussing politics even though it can
sometimes lead to arguments, or whether they are somewhere in between.
We wanted a measure of more generalized conflict avoidance, rather than
one specifically tied to political discussions. A second measure of comfort
with conflict used by Pew Research Center (Doherty et al. 2019) is useful
and likely taps into the core concept of interest to us. However, Pew’s
scale was implemented after we collected the vast majority of the data for
this book and it does not include any validation measures, as the Rahim
(1986) scale does. The Rahim (1986) scale’s ability to capture generalized
conflict avoidance in a way that is not directly tied to political discussion
makes it useful for our purposes.
Finally, we assess a person’s willingness to self-censor. This trait cap-
tures the idea that individuals will moderate or censor the views they
share with others. While researchers were initially interested in exploring
whether individuals would censor potentially racist attitudes (Hayes,
Glynn, and Shanahan 2005), we suspect the trait is applicable to political
views more broadly. As we wrote in Chapter 2, there is some ambiguity
about how this trait might affect decision-making in advance of a conver-
sation, but previous scholarship suggests clear expectations about how it
affects behavior during a conversation.
We measure willingness to self-censor using a scale developed by
Hayes, Glynn, and Shanahan (2005). The scale includes eight statements,
such as “[w]hen I disagree with others, I’d rather go along with them than
argue about it.” Respondents are asked to report the extent to which they
agree with each statement on a scale from 1 (strongly disagree) to 5
(strongly agree). For our purposes, we summed the responses to all eight
items together to give every respondent a willingness to self-censor
(WTSC) score.
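For readers implementing a comparable battery, the scoring procedure can be sketched in code. The item keys and example responses below are our own hypothetical illustration (the actual item wordings come from Hayes, Glynn, and Shanahan 2005), and any reverse-worded items would need to be recoded before summing:

```python
# Sketch: scoring the eight-item willingness-to-self-censor (WTSC) battery.
# Each item is answered on a 1 (strongly disagree) to 5 (strongly agree)
# scale, and the scale score is the sum of the eight responses, so scores
# range from 8 to 40. Item keys and the example respondent are hypothetical.

def wtsc_score(responses):
    """Sum eight Likert responses into a single WTSC score.

    Returns None if any item is unanswered, mirroring the practice of
    dropping respondents who did not complete the full battery.
    """
    items = [f"wtsc_{i}" for i in range(1, 9)]
    if any(responses.get(item) is None for item in items):
        return None
    return sum(responses[item] for item in items)

answers = [4, 2, 3, 5, 1, 2, 4, 3]
respondent = {f"wtsc_{i}": a for i, a in enumerate(answers, start=1)}
print(wtsc_score(respondent))  # 24
```

The conflict avoidance and social anxiety scales are scored in the same way, with six and multiple items respectively summed into a single score.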
We do not view these dispositions as entirely fixed but rather as innate,
relatively stable characteristics subject to updating. These dispositions can
shape the way people perceive their environment and their evaluations of
the social costs and benefits involved in engaging in (or avoiding) political
discussions with unique characteristics. Should an individual choose to

opt into a conversation, these dispositions can structure how he or she
reflects on that conversation and uses that reflection to update his or her
preferences for future interactions. Because these traits are highly correlated
with each other, as shown in Table 3.1, we assess their contribution
separately from one another in Chapter 9 to isolate the impact of each
trait. But it is important to remember that these characteristics frequently
co-occur. It is not our goal to advocate for the importance of these
dispositions over others in driving political discussion behavior, but
rather to use these dispositions to highlight the ways in which unique
individual characteristics can affect the process of political discussion.

Table 3.1. Correlations between individual dispositions

                        Social Anxiety    Conflict Avoidance    WTSC
Social Anxiety                –                 0.471           0.643
Conflict Avoidance          0.471                 –             0.625
WTSC                        0.643               0.625             –

Note: All correlations are statistically significant at the p < .001 level. Correlations calculated using the CIPI I Survey data, N = 2,874 for social anxiety, N = 3,003 for conflict avoidance, and N = 2,970 for WTSC, after dropping cases for which respondents did not answer all questions on the battery.
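The pairwise correlations reported in Table 3.1 are standard Pearson correlations computed after dropping respondents with incomplete batteries. A minimal sketch of that computation follows; the six rows of data are fabricated for illustration and are not the CIPI I Survey:

```python
# Sketch: Pearson correlations between three disposition scales, computed
# only over respondents who answered every battery (listwise deletion).
# The data rows are fabricated for illustration.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Each row: (social_anxiety, conflict_avoidance, wtsc); None = unanswered.
rows = [
    (30, 18, 24), (42, 22, 31), (25, 15, 20),
    (38, None, 28),  # dropped: incomplete conflict avoidance battery
    (51, 27, 35), (33, 19, 26),
]
complete = [r for r in rows if None not in r]
anxiety = [r[0] for r in complete]
avoidance = [r[1] for r in complete]
wtsc = [r[2] for r in complete]
print(round(pearson(anxiety, avoidance), 3))
print(round(pearson(anxiety, wtsc), 3))
```

Because Table 3.1 reports a different N for each battery, the book's coefficients are presumably computed over respondents who completed both batteries in each pair; the listwise version here is a simplification for clarity.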

Situational Factors
In addition to individual dispositions, features of the social context in
which political conversations occur can also affect individual behavior in
the 4D Framework. We focus primarily on two situational factors: dis-
agreement and knowledge asymmetries. While there are a myriad of other
situational factors we could consider – such as the gender or racial
composition of the group, the power dynamics or social tie strength
between discussants, participant levels of political interest or engagement,
or the location of the conversation – we chose to focus on two factors that
are the most strongly tied to our theoretical framework, as we describe in
Chapter 2.
Findings in the political discussion literature, especially those from
egocentric analyses based on name generators, suggest that different
operationalizations of disagreement can lead to different conclusions
(Klofstad, Sokhey, and McClurg 2013). We theorize that in the
context of a social interaction, disagreement based on identity (e.g. parti-
sanship or candidate preference) may operate differently than disagree-
ment based on the clash of opinions (issue disagreement, or the experience

of overall disagreement). We therefore test several different operationalizations
of the concept of disagreement: general disagreement, candidate
disagreement, partisan identity disagreement, and policy disagreement.
Our operationalizations in different studies are described in Table 3.2.
The second situational factor we consider is knowledge. As we dis-
cussed in Chapter 2, variation in political knowledge is one of the funda-
mental theoretical reasons why discussing politics with others can be
advantageous. However, given motivations for accuracy, affiliation, and
affirmation, knowledge asymmetries might very well contribute to dis-
comfort in political discussions and lead individuals to avoid political
talk, especially when disagreement is anticipated. Because political know-
ledge is difficult to measure and because individuals tend to overestimate
their own knowledge levels, we always conceptualize knowledge asym-
metries as relative. That is, we do not describe someone in a vignette
strictly as “knowledgeable” or “unknowledgeable” about politics.
Instead, we describe others as more or less knowledgeable than the
character of interest or the participant. One exception to this is in the
Psychophysiological Anticipation Study, where we described the discuss-
ant as scoring in either the top or bottom percentiles on political know-
ledge questions on a pretest. We made this decision because, given the high
average level of political knowledge in the sample, we wanted to make
sure that the notion of “higher level of knowledge” was salient to sub-
jects. We summarize our measures of relative knowledge in Table 3.3.

Psychological Considerations: The AAA Typology


Finally, we examine psychological considerations that shape how individ-
uals might behave in the political discussion cycle. In Chapter 2, we
introduced the AAA Typology, which represents the psychological motiv-
ations that structure decisions and behavior throughout the 4D
Framework. In brief, the AAA Typology suggests that individuals are
motivated by considerations rooted in accuracy (the desire to be correct),
affiliation (the desire to fit in with and be accepted by others socially), and
affirmation (the desire to maintain a positive self-concept). As individuals
navigate political discussions, they might give greater weight to one piece
of the AAA Typology over another, conditional on individual dispositions
and situational factors. Thus, psychological considerations are not an
entirely independent input and are likely responsive to other inputs.
Measuring the AAA Typology is challenging, given that individuals
often struggle to explain why they do what they do. We measure the AAA
Typology in two ways. First, we coded free response answers that subjects


 .. Conceptualization and measurement of disagreement

Concept Measure Stage Chapter Study


General How many people would have agreed with you Decision 5 True Counterfactual
Disagreement about the political topics in the discussion [had Study
you participated]?
Everyone [would have] disagreed with me; most
[would have] disagreed with me; about half
[would have] disagreed with me; most [would
have] agreed with me; everyone [would have]
agreed with me
Compensation demands for conversations with a Decision 5 Name Your Price Study
group of people who disagreed with the
50

respondent
Candidate Vignette Manipulation: “It quickly becomes clear to Decision 5 CIPI I Vignette
Disagreement her that they have very different political views Discussion 7 Experiment
from hers, as they discuss their support for the
candidate Sarah opposes.”
Compensation demands for conversations with a Decision 5 Name Your Price Study
group of Clinton supporters, Sanders supporters,
Trump supporters, Cruz supporters, etc.
Vignette Manipulation: “It quickly becomes clear to Determination 8 Vignette Pilot Studies
him/her that [they have very different views from
him/her, as they discuss their support for the
candidate that John/Sarah opposes / most of the
group has very similar political views as him/her,
https://doi.org/10.1017/9781108912495.003 Published online by Cambridge University Press

as they discuss their support for the candidate


John/Sarah supports / they are about evenly split
in their political views, as some discuss support for
the candidate John/Sarah opposes and some
discuss support for the candidate s/he supports].”
Partisan Identity Compensation demands for conversations with a Decision 5 Name Your Price Study
Disagreement group of Democrats, Republicans, etc.
Participants were told that they were going to have a Decision 6 Psychophysiological
conversation with a Democrat or Republican Anticipation Study
(coded as copartisan or outpartisan)
Participants had a discussion with a copartisan or an Discussion 7 Psychophysiological
outpartisan Experience Study
Think about the people with whom you talk about Determination 8 2018 CCES
51

politics, candidates, and elections. About how


many of them identify with the same political
party as you?
None, less than half, about half, more than half, all
Policy Disagreement Actual: Participants had a discussion with someone Discussion 7 Physiological Experience
who agreed or disagreed with them about a set of Study
four policies
Perceived: Participants reported whether they
thought their discussion partner agreed or
disagreed with them about a set of four policies
Study confederates were scripted to express different Discussion 7 Political Chameleons
policy opinions than the participant Study
52 Data Collection

 .. Conceptualization and operationalization of


knowledge asymmetries

Measure Stage Chapter Study


Vignette Manipulation: “They Decision 5 CIPI I Vignette
[don’t] sound highly Discussion 7 Experiment
knowledgeable and well-
informed. It sounds to Sarah
like they have been
following the news and
campaign a lot more/less
closely than Sarah has. As
the conversation continues,
the person who seems the
most/least knowledgeable
turns to Sarah and asks
about her thoughts on the
candidates.”
Compensation demands for Decision 5 Name Your Price
conversations with groups Study
of people described as
knowledgeable or
unknowledgeable about
politics
Participants were told that Decision 6 Psychophysiological
they were going to have a Anticipation
discussion with someone Study
who scored in the 10th or
90th percentile on a political
knowledge pretest.
Vignette Manipulation: Determination 8 Vignette Pilot
“[They all sound highly Studies
knowledgeable and well-
informed. It sounds to John/
Sarah like they have been
following the news and
campaign a lot more closely
than John/Sarah has. / They
don’t sound highly
knowledgeable and well-
informed. It sounds to John/
Sarah like they have been
following the news and
campaign a lot less closely
than John/Sarah has. / It

https://doi.org/10.1017/9781108912495.003 Published online by Cambridge University Press


Conceptual Operationalization of the 4D Framework 53

Measure Stage Chapter Study


sounds to John/Sarah like
they have been following the
news and campaign about
the same amount John/
Sarah has.] As the
conversation continues, the
person who seems [the most/
the least/equally]
knowledgeable turns to
John/Sarah and asks about
his/her thoughts on the
candidates.”
Think about the people with Determination 8 2018 CCES
whom you talk about
politics, candidates, and
elections. Would you say
that most of them . . .
Know more about politics,
candidates, and elections
than you; know about the
same about politics,
candidates, and elections as
you; know less about
politics, candidates, and
elections than you

provided during the True Counterfactual Study, explained in more detail
in Chapter 5. We asked participants – randomly assigned to describe a
conversation they actually had or a conversation they could have had but
avoided – why they chose to engage in or avoid the discussion. We
worked with a team of research assistants to hand code these responses,
following a coding scheme mapped onto the AAA Typology, which we
generated based on explanations that emerged from the focus group work
of Conover, Searing, and Crewe (2002).
We then incorporated what we learned from the free response coding into
an improved coding scheme, developing a series of considerations that
individuals might use when deciding how to behave in a political
discussion, even if these decisions happen without much conscious
thought and in a matter of seconds. This set of forced choice
considerations was used for the vignette experiments. We constructed a
list of both
concerns (e.g. considerations that suggest discussion could lead to a


negative outcome, likely making someone less likely to engage
meaningfully), and opportunities (e.g. considerations that suggest that discussion
could lead to a positive outcome, likely making someone more likely to
engage meaningfully). After reading a vignette about a hypothetical char-
acter in a political discussion scenario and reporting how they thought the
character would behave in the discussion, respondents were asked to
report which considerations the character was likely to make in the
situation.1
Table 3.4 includes example responses from the coded free response
answers, as well as the full set of considerations we developed. The AAA
Typology does not capture all of the considerations that run through
people’s minds: Of the 2,808 coded free response answers, only about
half (47 percent)2 were able to be coded into one or more of the three
motivational goals by at least one of the coders. Moreover, the AAA
Typology is not rigid, as some free responses fell into multiple categories.
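
The codability figure above can be illustrated with a toy sketch
(hypothetical response IDs and coder labels, not the authors’ data or
coding pipeline): a response counts as coded if at least one coder
assigned it at least one of the three AAA categories, and a single
response may carry multiple categories.

```python
# Toy illustration with hypothetical coder labels (not the authors' data).
# Each free response gets a set of AAA codes from each of two coders; a
# response is "codable" if at least one coder assigned at least one category.
responses = {
    1: [{"accuracy"}, {"accuracy", "affirmation"}],  # multiple categories allowed
    2: [set(), set()],                               # neither coder could code it
    3: [set(), {"affiliation"}],                     # codable via one coder
    4: [{"affirmation"}, set()],
}

codable = [rid for rid, coders in responses.items() if any(coders)]
share = len(codable) / len(responses)
print(f"{share:.0%} of responses codable")  # -> 75%
```

On the actual data, the analogous share was 47 percent of the 2,808
coded free responses.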

4D Framework Outputs
The 4D Framework is an organizing structure for the nuanced choices
related to whether people talk about politics, what they choose to say,
and what they distill from their conversations. In this section, we focus on
outputs of the 4D Framework, which largely take on the role of depend-
ent variables in our empirical analyses. The outputs include behaviors
such as whether individuals try to detect others’ political views in advance
of a discussion, for example, or whether individuals hedge when revealing
their partisan identity to a stranger in a conversation.
In characterizing and studying cycle outputs, we could have proceeded
in two ways. The first would be to pick a single outcome variable for each
stage of the 4D Framework and more robustly test the correlates of that
outcome measure. The advantage of this approach would be that we
could make a more concise and refined argument about each key stage.
The second approach would be to pick a variety of outcome variables
pertinent to each stage and explore more facets of various choices. The
advantage of this approach would be that we could capture a wider
swath of the various decisions and behaviors embedded within political
discussion.
We chose to pursue the latter strategy for several reasons. First, polit-
ical discussion is a multifaceted behavior that is anything but formulaic.
We see our contribution primarily as conceptual – theorizing the iterative
process of political discussion – instead of a measurement contribution


 .. Considerations coded into AAA Typology

Examples from Free Response Statements in Vignette


Coding Experiments
Accuracy Did not have all the facts to Concern that his/her opinion is
fully participate based on factually inaccurate
Didn’t want people to think information
that I didn’t know what Concern about expressing an
I was talking about opinion about which s/he is
I intervene with my opinion uncertain
when the ignorance becomes Concern that people would
unbearable and I feel the judge him/her for his/her
need to enlighten others of knowledge level
the other side of the story Opportunity to discuss
Because I thought I would learn important issues with these
something new and it made people
me challenge myself Opportunity to persuade these
people to change their minds
Affirmation I was afraid of being judged Concern about public speaking
because my opinion was Concern that people would
different from everyone elses judge him/her for his/her
[sic] opinion
It was important for me to have Concern about defending his/
my opinion heard her opinion
Interesting to self-evaluate Opportunity to discuss his/her
myself real political opinions
I felt very strongly about the Opportunity to do his/her civic
topic and wanted my opinion duty by discussing politics
to be considered and exercising free speech
Opportunity to solidify his/her
opinion
Opportunity to justify his/her
opinion
Affiliation The person was a friend of mine Concern that expressing a
and I was afraid that my dissenting opinion will
opposite opinion would hurt damage the relationship John/
our friendship Sarah has with these people
I wanted to be included in Concern that expressing a
the group dissenting opinion will
Because they were family negatively affect his/her
members and I didn’t want chance of getting invited to
to create a rift. I don’t see another neighborhood
them often so I felt it was gathering

(continued)

https://doi.org/10.1017/9781108912495.003 Published online by Cambridge University Press


56 Data Collection

 .. (continued)

Examples from Free Response Statements in Vignette


Coding Experiments
better to just let it go rather Concern that expressing his/her
than speak up. opinion will make people
My inlaws [sic] are very set in uncomfortable
their ways my opinion would Concern that expressing
have upset them and caused disagreement will make
unneeded stress on my people uncomfortable
husband I did not want to Opportunity to engage more
hurt any relationships with these people
Opportunity to get to know
these people on a deeper level

aimed at introducing or validating new survey questions or approaches
that reliably and accurately capture people’s behavioral tendencies.
Second, some of the behaviors and attitudes in the 4D Framework are
exploratory and have not been well-studied, such as the process of detec-
tion. We prioritized exploring the breadth of new behaviors instead of
their depth. Third, we wanted to explore discussion under a range of
conditions, but certain outputs matter more than others under particular
conditions. It was thus difficult to isolate a single output for each stage
given our emphasis on capturing the breadth of discussion conditions.
In addition to these challenges, we wanted to balance measurement at
two levels: proximal behaviors and general tendencies. One of our argu-
ments in this book is that social scientists need to think about all of the
small decisions that individuals make (consciously or not) to ultimately
arrive at a political discussion. As a result, we needed to design studies
that allowed us to capture these moments of decision, which we call
proximal behaviors. However, we needed to balance these designs meas-
uring discrete decisions with designs that capture individuals’ broader
political discussion behaviors, which we call general tendencies.
Measuring both proximal behaviors and general tendencies required that
we engage with multiple research designs and multiple dependent vari-
ables at each stage of the 4D Framework in an effort to triangulate
decision-making at the micro-level with the macro pattern of overall
discussion behavior.
Table 3.5 summarizes the wide array of measures we use in this book
to capture both proximal behaviors (moments of decision) and general
tendencies (broader patterns) at each stage of the 4D Framework.


 .. Measurement of 4D Framework outputs

Outcome
Stage Concepts Measurements – Proximal Behaviors Measurements – General Tendencies
Detection Disagreement Imagine that you were trying to guess someone’s Below we have listed some characteristics
Recognition political views, but you couldn’t ask them directly. about people. How would you describe
How would you go about guessing their political someone’s political party affiliation if all
views? [Free response] you knew was that he or she . . .
When you discuss politics with someone new,
do you typically try to guess his or her
political views before starting the
discussion? Yes, No
Decision Discussion How do you think John/Sarah would respond to Some people try to avoid getting into political
57

Avoidance the person’s question? discussions because they think that people
Say nothing on the subject, even though s/he can get into arguments and it can get
disagrees with them unpleasant. Other people enjoy discussing
politics even though it sometimes leads to
arguments. What is your feeling on this –
do you usually try to avoid political
discussions; do you enjoy them; or are you
somewhere in between?
Which of the following best describes your
political discussion behavior?

(continued)
https://doi.org/10.1017/9781108912495.003 Published online by Cambridge University Press

 .. (continued)

Outcome
Stage Concepts Measurements – Proximal Behaviors Measurements – General Tendencies
– I’ll talk about politics with someone, but
only if I know their political views ahead of
time
– I’ll only talk about politics with someone if
I know we have the same political views
– I’ll only talk about politics with someone if
I know we have different political views
Discussion Discomfort Increased heart rate and increased electrodermal How do you feel when someone disagrees with
activity you on a political issue? Select all that apply
and indicate the strength of your response on
a 5-point scale from weak to strong. Angry,
58

annoyed, anxious, motivated, happy,


relieved, it doesn’t bother me at all
Discussion Opinions How do you think John/Sarah would respond to
Revealed the person’s question?
– Say that s/he strongly disagrees with them, even
though s/he really just disagrees with them
(entrench)
– Say that s/he disagrees with them, which s/he does
(true opinion)
– Say that s/he slightly disagrees with them, even
though s/he really disagrees with them more than
slightly (censor)
– Say that s/he agrees with them, even though s/he
really disagrees with them (conform)
https://doi.org/10.1017/9781108912495.003 Published online by Cambridge University Press

– Say nothing on the subject, even though s/he


disagrees with them (silence/deflect)
What is the likelihood that John/Sarah expresses
his/her true opinion to the group? Very unlikely,
Unlikely, Somewhat unlikely, Somewhat likely,
Likely, Very likely
Hedging in Conversations
Expressing the same opinion as the group in a
discussion, when that opinion differs from the
opinion expressed on a private pretest
Determination Political What is the likelihood that John/Sarah avoids Have you ever distanced yourself from a
Estrangement discussing politics with other people who were not friend because of his or her political views?
part of this discussion? Yes, No
59

In which of the following ways have you


distanced yourself from a friend because of
his or her political views?
Determination Social How likely do you think it is that John/Sarah invites How likely is it that you would spend
Estrangement these colleagues over for dinner? occasional social time with an outpartisan,
How likely do you think it is that John/Sarah be next-door neighbors with an
attends another gathering that this group will be outpartisan; be close friends with an
at in the future? outpartisan; marry an outpartisan? 4-point
How likely do you think it is that John/Sarah scale from I absolutely would not do this
chooses to sit with these coworkers at lunch to I absolutely would do this
tomorrow? Feeling thermometers toward the outparty

Proximal Behaviors
Most of our studies focused on particular instances where people were
faced with a decision. We tackled these proximal behaviors in moments of
decision in Stage 2 (Decision) and Stage 3 (Discussion), though we do
investigate some proximal behaviors in Stage 4 (Determination) as well.
We assess these decisions in three key studies: the True Counterfactual
Study, vignette experiments, and the lab studies.
We have already referenced the True Counterfactual Study, but the
idea again is that we randomly assigned participants on our CIPI I Survey
to think about a time in which they either engaged in a political discussion
or had the opportunity to discuss politics, but chose to avoid it. We then
asked a series of questions about the situational factors, such as who was
there and whether there was any disagreement, and why they chose to
avoid or engage.
This approach was useful in distinguishing the features that
characterize conversations that happen from those that fail to
materialize, but it does not give us causal identification over those
situational factors.
In an effort to better understand the causal effect of situational factors,
such as disagreement and knowledge, on discussion behaviors in the
4D Framework, we turned to vignette experiments. We describe the
vignette experiments in more detail shortly, but we used them to
construct a moment where a subject had to make a decision about
how a character would behave. After reading a vignette, participants
were typically asked to report how they thought the character would
respond. This included capturing the likelihood of expressing his or her
true opinion to the group, as well as a behavioral choice: deflecting by
not saying anything at all, conforming, censoring, expressing his or her
true opinion, or stating an opinion that was more extreme than what
he or she actually thought. We then asked about the likelihood with
which the character would engage in conversations with these groups
in the future.
Finally, we measure proximal behaviors in our lab studies, where
subjects were presented with discrete moments of choice and had to make
decisions about what to say. In essence, we hoped to measure
particular instances of decisions instead of generalized behavioral pat-
terns. The behaviors measured vary by study, but they complement the
more concretely measured behaviors in the vignette experiments with
more subtle measures of how much someone’s expressed opinion differed
from their private opinion, or how much they hedged their verbal
responses, for example.


General Tendencies
In addition to measuring behavior at the moment of decision, we also
wanted to measure individuals’ general political discussion preferences
and behaviors during various stages of the 4D Framework. These meas-
ures are used primarily in our analyses of the role of individual differ-
ences, where we seek to find associations between individuals’
dispositions and their reported discussion behavior. We primarily rely
on more traditional survey questions, slightly modified to capture previ-
ously unexplored facets of political discussion. Responses to these ques-
tions lack the causal identification and direct link to situational factors
that our studies on the moments of decision feature. However, studying
the general tendencies helps us to link individual dispositions to political
discussion behavior more concretely.
While perusing Table 3.5, readers might notice that we do not include
general tendency measures for revealed opinion. Some of our pilot work
indicated that social desirability bias likely leads to overestimating the
extent to which individuals express their real opinions in conversations.
Given social norms surrounding honesty and the ability to engage in
political conversations, we anticipated that individuals might be reluctant
to admit that they would not express their true opinions to the group.
This is why we chose to measure proximal behaviors for this stage of the
4D Framework. We do report results from other researchers (e.g. Gibson
and Sutherland 2020) about self-censorship, but do not include any of our
own data for general tendencies of opinion revelation.

Data Collection
The data collection for this book spanned a five-year period between
2013 and 2018, using a wide range of methods and samples. In this
section, we describe the nuts and bolts of how we collected the data for
the measures we described in the previous section. We do not characterize
every study that we present in the book but focus on the studies that make
repeat appearances. Further details will be provided in the chapters
themselves alongside the results and conclusions.

Surveys
Table 3.6 summarizes the details about the main surveys used in this
book. We conducted two nationally representative surveys and one con-
venience sample survey ourselves, and added some questions to the


 .. Survey data collection

Name Description Date Collected Number of Observations


CIPI I Survey Nationally representative survey March 29–April 8, 2016 3,310
fielded by SSI. This survey included
demographic questions, political
batteries (e.g. engagement),
personality batteries (SIAS, conflict
avoidance, group attachment,
partisan attachment, willingness to
self censor), the vignette experiment
(knowledge level manipulated),
stereotypes questions, and the
62

political discussion free response


study.
CIPI II Survey Nationally representative survey August 1–11, 2017 528 recontacts from CIPI 1;
fielded by SSI. We recontacted as 487 new contacts
many individuals who completed
our CIPI 1 survey as possible and
then filled the rest of the sample with
new respondents. This survey
included questions about
conversation initiation,
interpersonal efficacy, social
polarization, and social distancing.
https://doi.org/10.1017/9781108912495.003 Published online by Cambridge University Press

Thanksgiving Study This study was conducted over five Wave 1: November 17 Wave 2: 300 in each wave
waves (cross-sectional) on November 24 Wave 3:
Mechanical Turk. Participants were December 1 Wave 4:
asked questions about basic December 22
demographics, their Christmas/ Wave 5: December 29
Thanksgiving celebrations, All Waves: 2014
individual differences, perceptions of
Democrats and Republicans, and
then given two stereotyping
batteries. The first was a list
experiment about partisan
stereotypes (one six-item list and one
seven-item list). Then, they were
63

assigned to one of two orders for a


partisan stereotyping “which agree”
battery. They were asked to evaluate
either (1) Republican/Democrat
candidates, voters, then people they
know; or (2) Republican/Democrat
people they know, voters, then
candidates. The survey concluded
with questions about hot issues at
the time: Keystone XL Pipeline and
immigration.

(continued)
https://doi.org/10.1017/9781108912495.003 Published online by Cambridge University Press

 .. (continued)

Name Description Date Collected Number of Observations


TargetSmart Poll Polling data collected by TargetSmart. September 2016 821
Questions about how confident
individuals would be guessing
political views from various cues, as
well as comfort engaging in
discussions, along with consumer
data, vote preferences, etc.
2018 CCES Included original questions about September 27– November 5, 2018 1,000
social polarization from Mason
(2018) and discussion network
64

composition on the UC San Diego


module for the 2018 CCES.
Questions were included on the
preelection survey.

2018 Cooperative Congressional Election Study (CCES) and a poll
TargetSmart conducted in Ohio in 2016.
Our two original nationally representative surveys share the acronym
CIPI, which stands for Contentious Interpersonal Political Interactions.
Our first nationally representative survey, CIPI I, was fielded between
March and April of 2016 by Survey Sampling International (SSI, now
called Dynata). We obtained a sample of 3,310 total respondents,3 quota
matched to census demographics. This survey included a standard suite of
demographic data, such as age, gender, race, ethnicity, income, and
education; political batteries including political engagement, trust in gov-
ernment, and political efficacy; questions about political discussion pref-
erences; and personality batteries, including social interaction anxiety
(SIAS), conflict avoidance, and willingness to self-censor. This study also
included our core vignette experiment, described in more detail shortly,
and a political discussion free response experiment. The key goal of this
survey was to gather representative data on the individual differences of
interest in our book and general tendencies, to allow us to examine the
relationship between these characteristics. We use data from the CIPI
I Survey in almost every empirical chapter of this book.
The second nationally representative survey, the CIPI II Survey, was
fielded in August 2017 by SSI. A total of 1,015 respondents, quota
matched to census demographics, completed the CIPI II Survey. This
included an effort to recontact respondents who completed the CIPI
I Survey, resulting in over 500 respondents who completed both CIPI
I and CIPI II Surveys. The CIPI II Survey included more detailed questions
about political discussion decisions, including questions about discussion
initiation, allowing us to consider the conditions under which individuals
are willing to begin a political discussion. We also included questions
about social polarization and estrangement. For the respondents who
completed both surveys, we can also examine the correlations between
the individual personality characteristics and political discussion and
social distancing behaviors. Because the sample is representative, we are
also able to provide important descriptive statistics on the rates at which
individuals report that they distance from others because of politics.
In addition to the two original nationally representative surveys, we
conducted an original five-wave cross-sectional survey on a convenience
sample (Mechanical Turk), which we call the Thanksgiving Study. The
goal behind the Thanksgiving Study was to survey respondents before
and after major holiday gatherings, times at which politics is known to
come up in conversation in social settings. We wanted to explore


variation in political discussion experiences before and after these
holiday gatherings as a somewhat exogenous shock to the salience of contentious
political discussion experiences. Between November and December of
2014, we surveyed 300 people on Mechanical Turk a week before
Thanksgiving, the week after Thanksgiving, early December, the week
before Christmas, and the week after Christmas. We did not observe
dramatic differences in political discussion behaviors between waves, so
we pool the waves together in our analyses. We analyze data from the
Thanksgiving Study primarily in Chapter 4 when we examine the stereo-
types individuals hold about people from the opposite political party
(outpartisans). On the Thanksgiving Study, we asked respondents to rate
their agreement with a host of stereotypes about outpartisans they knew
personally, outpartisan voters, and outpartisan candidates, randomizing
the order in which they made these evaluations.
We also use free response data from the Thanksgiving Study through-
out the book to qualitatively illustrate different stages of the 4D
Framework. On the Thanksgiving Study, we asked respondents to engage
in a “stop and think” task in which they were asked to pause and reflect
about recent political discussions they had and then describe them in their
own words, detailing who was there, what was discussed, and how they
felt.4 While discussions with family members at holiday gatherings might
be more likely to come up in these responses than they otherwise would,
given the time at which the data were collected, this analysis allows us to
better understand the experience of a political discussion in respondents’
own words, without needing to rely on our a priori expectations about
their behavior when designing survey questions.
In addition to our original data collection, we were able to include
questions on two other surveys that we analyze in this book. First, we
included questions on a poll conducted by TargetSmart in September of
2016 in Ohio.5 The poll was conducted both online and over the phone,
but some of our questions were only fielded to the internet sample. On
this poll, we asked questions about how confident individuals would be in
guessing others’ political views based on various characteristics including
the news sources they use most regularly, where they live, how frequently
they attend religious services, which candidate most of their friends
support, information they post on social media (such as Facebook or
Twitter), and demographic characteristics (such as age, gender, or race).
We analyze these data in Chapter 4 in an effort to understand the cues
individuals use to infer others’ political views.

https://doi.org/10.1017/9781108912495.003 Published online by Cambridge University Press

Data Collection 67

Finally, we were also able to include some questions on the University
of California, San Diego's module on the Cooperative Congressional
Election Study (CCES). Our questions were included on the preelection
wave, fielded to a nationally representative sample between September
27 and November 5, 2018. We were able to obtain a host of demographic
and political information included in the common content on the survey,
in addition to our original questions. Our questions were focused on
social polarization, drawing a battery from Mason (2018) to ask about
how likely individuals would be to spend occasional social time with an
outpartisan, be next-door neighbors with an outpartisan, be close friends
with an outpartisan, or marry an outpartisan. The scale ranged from 0
(“I absolutely would not do this”) to 3 (“I absolutely would do this”). We
also included questions to measure the partisan composition of political
discussion networks by asking respondents to estimate the share of their
discussants who are from the same political party as them. While we did
not write the question, we were also able to analyze a question that asked
about whether respondents had talked about politics with their friends
online or in person in the past year. We analyze the CCES data in
Chapter 8 as we unpack the social consequences at the Determination
stage of the 4D Framework.
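Scoring a battery of this kind is straightforward to sketch. The snippet below is our own illustration, not the CCES codebook or Mason's (2018) scoring procedure: the function name, the reverse-coding, and the 0–1 normalization are all assumptions made for exposition. Each of the four 0–3 willingness items is reverse-coded so that higher values indicate greater unwillingness to socialize with outpartisans, then averaged.

```python
# Hypothetical scoring of a Mason (2018)-style social polarization battery.
# Items run 0 ("I absolutely would not do this") to 3 ("I absolutely would
# do this"), so willingness is reverse-coded before averaging: a higher
# index means greater social polarization. Names are illustrative only.

def social_polarization_index(items, scale_max=3):
    """Average the reverse-coded items and rescale to the 0-1 range."""
    if not items:
        raise ValueError("need at least one item")
    reversed_items = [scale_max - x for x in items]
    return sum(reversed_items) / (len(reversed_items) * scale_max)

# A respondent fully willing to socialize across party lines scores 0;
# one who absolutely would not do any of the four activities scores 1.
friendly = social_polarization_index([3, 3, 3, 3])  # -> 0.0
hostile = social_polarization_index([0, 0, 0, 0])   # -> 1.0
mixed = social_polarization_index([3, 2, 1, 0])     # -> 0.5
```
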

Psychophysiological Lab Experiments


Perhaps the most novel data collection included in this book is the use of
psychophysiological lab experiments. Much of our theoretical framework
draws on seminal theories arguing that discussing politics, particularly
with those who disagree, can be “uncomfortable.” But little research thus
far has been able to effectively measure that discomfort.
Self-reported measures of comfort can certainly be informative, but
they might also suffer from a number of biases. Some might think that it is
socially desirable to be comfortable discussing politics, especially in con-
versations across the aisle, leading them to overreport their comfort levels.
As we discuss in more detail shortly, psychophysiological data can help
get around the problem of social desirability bias by measuring individ-
uals’ instantaneous, uncensored physiological responses to various polit-
ical discussion experiences. In addition to addressing the social
desirability bias problem, the physiological data can help us more fully
characterize the experience of political discussion, by capturing what
individuals actually feel.

At the heart of behavioral physiology research is the notion that our
bodies biologically respond to stimuli in our environments and our
physiological responses can inform our subsequent behavior. The physio-
logical data collected in this book come from the autonomic nervous
system (ANS), which is responsible for controlling bodily functions not
consciously directed, such as breathing, sweating, heartbeat, and digestive
processes. Given that the ANS operates without conscious direction, it is,
perhaps, the closest approximation to a “gut-level” response.
There are a number of psychophysiological measures assessing differ-
ent facets of the nervous system that have been applied to political science
research questions (Settle et al. 2020), but we choose to focus on two:
individuals’ heart rates and electrodermal activity (EDA). We measure
heart rate using changes in beats per minute. Our measure of EDA
includes changes in skin conductance level (SCL), which captures the total
electrical conductivity produced. Essentially, EDA captures how sweaty
individuals’ palms become in response to a stimulus.
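Analyses of signals like these often summarize reactivity as the change from a resting baseline to a stimulus window. The sketch below is our own illustration of that general idea; the window lengths, values, and function names are invented for exposition and are not the study's actual processing pipeline.

```python
# Hypothetical reactivity scores: stimulus-window mean minus baseline-window
# mean, for heart rate (beats per minute) and skin conductance level (SCL,
# in microsiemens). All values below are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def reactivity(baseline, stimulus):
    """Change in the mean signal from the baseline to the stimulus window."""
    return mean(stimulus) - mean(baseline)

# A participant whose heart rate climbs by 6 BPM while anticipating a
# discussion, and whose palms sweat more (SCL rises by roughly 0.8):
hr_change = reactivity(baseline=[72, 74, 73], stimulus=[79, 80, 78])
scl_change = reactivity(baseline=[4.1, 4.2, 4.0], stimulus=[4.9, 5.0, 4.8])
```
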
What does psychophysiological reactivity tell us? Beyond the “fight or
flight” response captured, researchers posit that psychophysiological
response captures emotional response, although there is not a direct
correspondence between self-reported emotion and psychophysiologically
measured arousal. Emotions are highly associated with physiological
activity in the cardiovascular system (heart rate) and the electrodermal
system (skin conductance). Specifically, ANS activity has long been linked
to emotional responses, as emotions such as anger, anxiety, disgust,
embarrassment, fear, some forms of sadness, and joy are all associated
with increased heart rate (Kreibig 2010). The same emotions are also
associated with increased electrodermal activity. Given this ambiguity, we
collect some self-report data about emotional response as well.
Additionally, the activation captured by skin conductance response has
been associated with attentiveness and information processing (for more
information, see Soroka 2019).
We chose to focus on heart rate and EDA for both theoretical and
practical reasons. From a theoretical perspective, heart rate and EDA
better capture nonconscious response to discussion because they are
rooted in the ANS, whereas other measures, such as electromyography,
are part of the somatic nervous system, which captures voluntary move-
ment. As we described in Chapter 2, we think that the 4D Framework
involves both subconscious processing and conscious, deliberate choices.
We wanted to leverage the psychophysiological measures to get at the
automatic and implicit components of response to discussion. From a
practical standpoint, the equipment used to measure heart rate and electrodermal activity was not so obtrusive as to prevent subjects from
conversing with one another. We worried that connecting subjects with
electrodes that measured their facial muscles (electromyography) or their
respiratory rates (using equipment placed around their rib cage) would be
too cumbersome in a study in which people interact.
One important feature of psychophysiological response is that it can be
used as both an independent variable and a dependent variable. That is,
some researchers are interested in how a certain stimulus, such as expos-
ure to incivility, makes individuals respond physiologically, whereas
others are interested in how individuals think or behave politically once
they are in a particular physiological state (Stern et al. 2001). Given our
framework of thinking about the 4D Framework as a cycle, we think
about psychophysiological data as playing both roles. We are primarily
interested in measuring how individuals respond psychophysiologically to
various political discussion stimuli, such as anticipating a disagreeable
conversation or actually participating in a discussion. Here, psycho-
physiological data helps us better characterize how people actually
experience political discussion and helps us better measure the “discom-
fort” that previous researchers have alluded to, but not tested, without
relying solely on self-report measures of emotional experiences. However,
we also expect that the way in which individuals respond psychophysio-
logically to political discussions likely informs their future discussion
behavior. Individuals who experience increased heart rates and sweaty
palms when discussing politics with outpartisans might prefer to avoid
those experiences in the future. We make this point in one of our previous
papers where we find that those who are more psychophysiologically
reactive to anticipating political discussions are more likely to form
copartisan discussion networks (Carlson, McClean, and Settle 2019).
We collected psychophysiological data in two experiments. Both stud-
ies were conducted using a sample of undergraduate students recruited
from political science courses. Most students participated in exchange for
course credit, but some students who had participated in previous studies
were invited back to the lab and paid for their participation. We worked
with an outstanding team of undergraduate research assistants to proctor
the studies.6 The results from the Psychophysiological Anticipation Study
and the Psychophysiological Experience Study are discussed primarily in
Chapter 6 with some additional results presented in Chapter 7. We
describe each study in more detail in the next section, but summarize
the details in Table 3.7.

TABLE 3.7. Psychophysiological lab studies

Psychophysiological Anticipation Study
  Description: Physiological study in which student participants viewed videos showing both political and apolitical contention and then were given the discussion stimulus in which participants were told that they would be asked to discuss politics with another participant, described as more/less knowledgeable and as same/opposite party.
  Date collected: Fall 2014, September 2015
  Number of observations: 205

Psychophysiological Experience Study
  Description: Physiological study in which student participants actually engaged in a political discussion with another person who was either a copartisan or an outpartisan. Participants revealed their partisan identities at the start of the conversation and discussed several issues, including immigration policy.
  Date collected: Fall 2015
  Number of observations: 165

Psychophysiological Anticipation Study


This study was originally fielded in the fall of 2014 using a student subject
pool. A second round of data was collected in September 2015, recruiting
from a different pool of student subjects who said that they would be
interested in participating in future research studies for pay. We recruited
a combined total of 205 participants. The general procedures remain
nearly identical between the two samples, and we combine them for all
of our analyses.7
The study was designed to answer two key questions. First, how do
individuals actually feel when anticipating a political conversation? We
wanted to test the real-world experience of a political discussion directly,
but we needed a baseline with which to compare the magnitude of our
results.8 We chose to compare psychophysiological responses to anticipat-
ing a real political discussion to responses to watching others engage in
contentious political interactions on video. Mutz and Reeves (2005) and
Mutz (2015) argue that television likely elicits stronger psychophysiological
responses than other media because it is more similar to real-world experi-
ences. Given their findings about increased psychophysiological response to
incivility on video, we thought this would be a useful benchmark against
which to compare anticipating actual political conversations.
Second, and more central to the argument we advance in this book, we
wanted to know if conversations of different types elicit different levels of
physiological response. We chose to experimentally manipulate the fea-
tures of the discussion by informing participants that they would be asked
to have a conversation with someone who was a copartisan or an out-
partisan and who was more or less knowledgeable than them about
politics. Finally, we selected three contentious issues as the topics that
would be discussed, issues about which the two political parties had very
different stances: Obamacare, tax cuts, and abortion.
Psychophysiological data can give us a sense for the magnitude of an
individual’s experience, but it cannot provide a definitive answer as to the
valence of a person’s response. For instance, an increase in heart rate
could indicate anxiety or stress, but it could also signal excitement or
enthusiasm. In an effort to better understand the valence behind what
individuals experience, we complement the psychophysiological data with
self-report measures of participants’ emotional responses.9

Psychophysiological Experience Study


Our second psychophysiological lab study builds directly on the first. The
goal of this study was to measure psychophysiological responses to
engagement during a political discussion. Whereas the first study focused
on anticipation, this study focused on trying to measure the actual
experience of talking about politics. Specifically, we sought to examine
differences in psychophysiological response between agreeable and dis-
agreeable conversations, with disagreement conceptualized in a variety of
ways, as previewed in Table 3.7. In the fall of 2015, we conducted a lab
experiment with 165 undergraduate student participants recruited from
their political science courses in exchange for course credit.10 The study
was conducted with two participants at a time,11 who sat in front of a
television screen that delivered the discussion prompts to them.
We aimed to manipulate the presence of disagreement to test
whether people exposed to disagreement reported more negative emo-
tion, experienced more discomfort, and displayed more physiological
reactivity. We operationalized our concept of disagreement in three
ways: partisan identity disagreement;12 actual policy disagreement;
and perceived policy disagreement (see Table 3.2). We “randomized”
treatment by allowing people to sign up for their own discussion slot,
with the idea that the distribution of identity in the sample should lead
to conversations with varying degrees of both stated identity and
revealed opinion disagreement. This was obviously not true random
assignment, but a subject’s assignment should not be correlated with their
preferences about political discussion. In short, we attempted to exploit
variation in identity concordance between the paired discussants,
selecting issues that would or would not activate the importance of
those identities.
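Under this design, each two-person session can be coded after the fact by partisan concordance. A minimal sketch of that coding step (the labels and function name are ours, not the authors'):

```python
# Code conversation dyads as partisan "aligned" or "clash" from the two
# participants' self-reported party identities (illustrative labels only).

def code_dyad(party_a, party_b):
    """Label a two-person discussion by partisan concordance."""
    return "aligned" if party_a == party_b else "clash"

sessions = [("Democrat", "Democrat"),
            ("Democrat", "Republican"),
            ("Republican", "Republican")]
labels = [code_dyad(a, b) for a, b in sessions]
# labels == ['aligned', 'clash', 'aligned']
```
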
In one sense, our approach worked to generate treatment assignment
that manipulated levels of disagreement in a way that was uncorrelated
with subjects’ preferences. We found that there was more issue disagree-
ment on the pre-survey questions in the clashing partisan conversations.
A balance table between partisan aligned and partisan clash conversa-
tions shows that there were no differences between the groups on gener-
alized discussion preferences, such as how frequently they discussed
politics or how often they talked politics with people who disagreed (see
Chapter 6 appendix). When asked in a post-test about their discussant’s
partisan identity, subjects were much more likely to accurately remember
the partisanship of their discussant than other characteristics of the dis-
cussant, such as whether he or she was an in-state student. However, as
we discuss in Chapter 6 and in the appendix to that chapter, not all facets
of our treatment and design worked as intended, necessitating some
changes to the way we analyzed our data.


Survey Experiments
Our approach in this book also relies on a handful of survey experiments.
We will remind readers of the basics of the studies in the chapters in which
we discuss the results, but as a preview, they are the Names as Cues
Studies and Stereotypes Anchoring Experiment in Chapter 4; the True
Counterfactual Study and Name Your Price Studies in Chapter 5; and
Vignette Experiments in Chapters 5, 7, and 8. Because the vignette experi-
ments are used across several chapters, we focus some attention on
describing them here.
One of the core components of our empirical analysis in this book
comes from vignette experiments. Vignette experiments present partici-
pants with a short description of a scenario, randomizing components of
it, such as the people, location, or actions. Researchers ask participants to
either imagine that they are in the scenario described and to report how
they would think or act, or ask participants to report how they think the
character(s) in the scenario would think or act. These studies are useful
for exploring how a variety of scenarios that might be too complicated or
unethical to observe in the real world affect attitudes and behavior. In our
case, we wrote vignettes that described a variety of political discussion
experiences, randomizing the relationships between the characters, the
levels of disagreement and political knowledge, the power dynamics, and
the social context. It was infeasible for us to conduct in-person lab experi-
ments that manipulated these characteristics, so we chose to sacrifice
some external validity in exchange for more precise control over the
features of political discussion scenarios that are central to our theory.
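The mechanics of such an experiment are easy to sketch. The component lists and wording below are our own hypothetical illustration, not the text of the authors' vignettes (which appears in the Chapter 3 appendix): conditions are drawn at random per participant and assembled into a scenario.

```python
# Randomly assign vignette components and assemble a scenario text, in the
# spirit of the experiments described above. All wording is hypothetical.
import random

RELATIONSHIP = ["coworkers in the breakroom", "neighbors at a block party"]
DISAGREEMENT = ["agreed with", "disagreed with"]
KNOWLEDGE = ["more knowledgeable than", "less knowledgeable than",
             "about as knowledgeable as"]

def assign_conditions(gender, rng):
    """Draw one randomized condition per manipulated component."""
    return {
        "character": "Sarah" if gender == "female" else "John",
        "relationship": rng.choice(RELATIONSHIP),
        "disagreement": rng.choice(DISAGREEMENT),
        "knowledge": rng.choice(KNOWLEDGE),
    }

def render_vignette(c):
    name = c["character"]
    return (f"{name} was chatting with {c['relationship']}. They seemed "
            f"{c['knowledge']} {name} about politics, and they said things "
            f"that {c['disagreement']} {name}'s opinions. One of them "
            f"turned to {name} and asked what {name} thought.")

conditions = assign_conditions("female", rng=random.Random(42))
vignette = render_vignette(conditions)
```

A seeded random number generator is used here only so the example is reproducible; in a fielded survey each respondent would receive an independent draw.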
From 2014 to 2016, we conducted five pilot vignette experiments on
Mechanical Turk and on a student sample before launching our principal
vignette experiment on the CIPI I Survey, as summarized in Table 3.8.
The experiment we conducted on the CIPI I Survey was heavily influenced
by our findings in the vignette experiments that preceded it. We describe
the convenience sample studies here to give readers a sense for the
magnitude of the pilot work we did to inform the experiment we fielded
on the CIPI I Survey. However, we focus most of our analysis in this book
on the large vignette experiment we conducted on the nationally repre-
sentative sample because we have a full set of covariates measured,
including our measures of individual differences.
TABLE 3.8. Vignette experiments overview

Vignette Pilot
  Manipulations: first vs. third person; workplace vs. social gathering
  Date: November 2014
  Sample: 400 MTurk

Question Wording Pilot
  Manipulations: workplace vs. social gathering; dependent variable question wording (behavioral response in vignette)
  Date: April 2015
  Sample: 198 MTurk

Knowledge Ties–Power Pilot
  Manipulations: knowledge composition (more knowledgeable, less knowledgeable, same); social tie strength (strong tie; weak tie); power composition (more powerful, less powerful, same). Character always in partisan minority
  Date: July 2015
  Sample: 961 MTurk

Knowledge x Partisan Composition Pilot
  Manipulations: partisan composition (majority, minority, balanced); knowledge composition (more knowledgeable, less knowledgeable, same)
  Date: December 2015
  Sample: 703 MTurk

Power x Partisan Composition Pilot
  Manipulations: partisan composition (majority, minority, balanced); power composition (more powerful, less powerful, same)
  Date: December 2015
  Sample: 742 MTurk

CIPI I Vignette Experiment
  Manipulations: randomly assigned the discussants to be more or less knowledgeable. All participants in the partisan minority
  Date: March 29–April 8, 2016
  Sample: 3,310

In most cases, participants were presented with a vignette in which a character (Sarah for self-identified female participants, John for self-identified male participants) was described engaging in some kind of social encounter, such as walking into the breakroom at work, mingling at a neighborhood gathering, or participating in other similar situations.
John or Sarah never raise the topic of politics themselves; they stumble
into the conversation inadvertently. We described whether the others in
the vignette said things that agreed or disagreed with John or Sarah’s
opinions. At the end of the vignette, one of the others turns to John or
Sarah and asks what he or she thought about the topic they were discuss-
ing. Table 3.8 presents an overview of the studies we conducted,
the samples, and the features of the discussion we manipulated in the
vignettes. In the Chapter 3 appendix, we list the full text of all of the
vignettes and the dependent variable question wording in each study.
After reading a vignette, participants were typically asked to report
how they thought John or Sarah would respond. This included capturing
the likelihood of expressing his or her true opinion to the group, as well as
more nuanced behavioral options, such as deflecting by not saying anything at all, conforming, censoring, or expressing his or her true opinion, as well as the likelihood with which he or she engages in conversations with these groups in the future.
Our vignette experiments make several appearances throughout this
book. In Chapter 5, we analyze the vignette experiments to understand
the characteristics of a discussion that lead individuals to deflect or
engage. In Chapter 7, we use them to understand the discussion features
that structure the considerations people hold and their actual behavior in
the discussion. In Chapter 8, we consider the features of the discussion
that are associated with willingness to engage in future political and
nonpolitical interactions with the people from the vignette. Finally, in
Chapter 9, we examine the individual differences associated with
vignette responses.
While the vignettes give us greater control over the scenario and allow
us to study situations that would be difficult to replicate in the lab or
measure causally in the real world, they come with their limitations. We
have to make the strong assumption that how a participant reports the
behavior of a hypothetical character in the vignette maps onto how he or
she would behave if placed in the same situation. This projection about
an imaginary scenario might be especially challenging for those who do
not regularly talk about politics or have pruned their social networks of
all disagreement, and this difficulty could vary systematically with our
dependent variable. Still, we view these studies as informative for sug-
gesting which characteristics of a discussion affect our outcomes
of interest.


We prioritized breadth and methodological pluralism in this research
project, but we hope not at the expense of conceptual clarity about how
and why we measure the inputs and outputs of the 4D Framework in the
way that we do. With methodological pluralism comes the potential for
methodological confusion, and this chapter orients readers to our
approach in this book in order to contextualize the results that we present
in the empirical chapters that follow.

4

Detection

Mapping the Political Landscape (Stage 1)

As he and his wife Katie walked over to their neighbors’ house, Joe took
note of the trees lining the streets in his new neighborhood. The suburbs
were a nice change of pace from the bustle of the city. Two weeks earlier,
Joe had unpacked the last of the boxes from the moving truck into their new
home. Moving always had been stressful for Joe: New places, new people,
and new norms all took some adjustment. Thankfully, many of the neigh-
bors had been quick to introduce themselves, and the Smiths had even
invited them to a neighborhood Memorial Day backyard party.
Half a block away, Joe could already hear the cacophony of laughter and
music coming from the Smiths’ backyard. He opened the gate and was hit
with the smell of barbeque. They spotted the Smiths and quickly made their
way over to thank them for the invitation, though the hosts were just as
soon swept away to talk with other guests. Katie turned and started chatting
with a woman about her age who looked vaguely familiar, and Joe scanned
the crowd, looking for a conversation into which he could integrate himself.
To his left, he saw two middle-aged men – older than him but younger than
his dad – whom he thought he’d seen last week, sitting in one of their
garages watching sports and drinking beer, alternately tinkering under the
hood of a giant Ford truck parked in the driveway. To his right was a
woman in her mid-60s, with a canvas tote bag bearing the NPR logo slung
over her shoulder. Her short hair, accented by the dangling beaded earrings
she wore, sent off a decidedly “earth mother” vibe. The younger man she
was talking to gave off a similar vibe, with shoulder-length, untamed hair
and a reusable plate he had clearly brought from home filled with vegetables
instead of barbecue.
As always, Joe wanted to avoid any talk about politics or the news, even
though that felt inescapable these days. If he couldn’t completely avoid
political conversation, he at least would prefer not to get into a disagreement
with one of his brand-new neighbors. Which pair of people does he approach?


Red MAGA hats, pink pussy hats, and black armbands in the United
States. Yellow vests in France. Green handkerchiefs in Argentina.1 From
time to time, people don apparel that sends a clear signal about the
content of their political views.
However, the vast majority of people do not regularly wear attire
advocating for a political party, issue, or candidate. When Americans
gather around the Thanksgiving dinner table, their place cards are not
labeled with their policy preference on the most salient, contentious issue
of the day. Americans do not (typically) walk around with “Democrat” or
“Republican” stamped on their hands. And when individuals make a new
acquaintance, they will most certainly share their names, and perhaps
what they do for a living or where they are from. But it would be
considered rude in most circumstances to introduce themselves as
“Jaime, a Democrat.”
Part of our endeavor in introducing the 4D Framework is to think
about all of the micro-decisions entailed in engaging in a political discus-
sion. Our broad approach is to put psychological and social consider-
ations at the forefront of the decision-making process, and the primary
goal in this chapter is to think about how people go about the decision to
engage in the first place. If people have preferences over which political
discussions to pursue, they may try to get a read on what the discussion
will be like, and thus they must be able to detect something about the
likely political views of their discussants. In the absence of explicit visible
signals of others’ political orientations, how is it that people come to
recognize the political views of potential discussants before politics even
emerges as a topic of conversation? The answer to this question matters
for the systematic understanding of the political discussions that ultim-
ately come to fruition.
Embedded in the previous work on political discussion are unspoken
assumptions about how people make choices over which conversations to
encourage and which to stifle. In particular, the literature seems to take as
given that individuals prefer to discuss politics with those who agree with
them and that they are able to infer confidently this (dis)agreement. In one
sense, the process underpinning these assumptions is less critical in previ-
ous work because the political discussion literature largely focuses on
regular discussants in individuals’ networks. There is no mystery in the
fact that people often know the viewpoints of their family members, and
depending on the context of a friendship, it is also likely that people will
have a sense for a friend’s political orientation before politics comes up
the first time.

But when social scientists stop to consider a fuller range of political
conversations – with new acquaintances, at the workplace, or incidental
interactions – the question of assessing concordance in viewpoints, or lack
thereof, becomes more important. The more agency researchers ascribe to
individuals in shaping the composition of their discussion network,
the more critical it is that we understand the progression of their
decision-making.
Moreover, and equally consequential for understanding the decision-
making involved in political discussion, we argue that identity recognition
activates a set of assumptions that entail judgment. Political identity has
become embedded into a larger, meta identity (Mason 2018) and thus
sharing one’s partisanship or candidate preference has the potential to
activate others’ stereotypes – for good and for bad – about the kinds of
people who support each party. The attitudes individuals hold and
assumptions they make about people whom they believe identify with a
particular political party, or a party at all (Klar and Krupnikov 2016), can
color individuals’ perceptions of potential political discussants. These
perceptions might shape people’s willingness to engage in the discussion
in the first place, in addition to how they feel, what they say, and what
they learn in the conversation, should they decide to participate.
In this chapter, we begin by describing how detection works and why it
is a relevant feature of the cycle of political discussion. Since we first
started data collection for this project, the field has exploded with studies
documenting the political – and apolitical – differences between
Democrats and Republicans, liberals and conservatives. We engage with
this research, adding some of our own evidence to the mix, but keep our
focus on the implications that these differences have for engaging in
political discussion. That is, our goal in this chapter is not to demonstrate
that there are partisan divides over seemingly nonpolitical things, but
rather to explore how individuals actually go about detecting political
leanings from these nonpolitical preferences, with an eye toward what this
means for political discussion.
We present results from a study in which we asked participants to
describe how they detect others’ views in advance of a discussion, with-
out directly asking, revealing that individuals use a variety of implicit and
explicit cues. After describing these self-reported strategies in detail, we
move on to explore experimental evidence of the extent to which even
something as subtle as a first name can be politically informative. We
then consider what happens after people detect someone’s views: the
activation of stereotypes about outpartisans. We provide evidence that
outparty stereotyping occurs for partisans that people know personally,
and that outparty stereotyping includes characteristics central to social
interactions: the very process by which people think their political out-
group has come to arrive at their opinions. While we cannot rule out
expressive responding as an alternative explanation for our results, this
partisan “cheerleading” comes at the expense of evaluations of people’s
friends and family. Our approach also allows us to rule out the possibil-
ity that people are simply anchoring their stereotypes on negative evalu-
ations of elites and voters in the abstract. Understanding the content and
implications of the stereotypes people hold aids our understanding of
why some conversations fail to materialize as well as the expectations
people have at the outset of the conversations that do occur. Synthesizing
all of this together, we close the chapter by discussing the conditions
under which the detection system is important for political discussion.
More than a quarter of Americans depend upon a detection system – and
these Americans are also more likely to report that their willingness to
discuss politics is conditional on the circumstances, compared to those
who will discuss politics with anyone or those who avoid political
discussions at all costs.


Our initial focus in this chapter is on the detection skills that Americans
use preceding their decision to engage in a political discussion. Our
emphasis on detection bridges the gap between two distinct fields of
inquiry: the correlates of partisan and ideological identities, and the
accuracy of person perception as it relates to political identity. Despite
the separate growth of these literatures, previous researchers have not
married them to examine how the average American
recognizes the political identity of his or her fellow citizens. What we do
not know is the extent to which people are either aware of these patterns –
and apply them to make inferences about individuals – or are able to
detect subtle signals about a person’s political views.
On the one hand, the average American may not be particularly
attuned to detecting others’ political views. Most Americans are not all
that interested in politics. Any inferences Americans draw about others’
political identities are unlikely to be driven by their knowledge of
established empirical regularities in political behavior, and more likely
driven by their personal experiences or archetypes portrayed in the
media of politically extreme, engaged, and vocal partisans (e.g. Hersh
2020; Krupnikov and Ryan 2022). Mutz (2006) asserts that “[p]eople’s
political views are seldom obvious upon first meeting, and conversations
about politics do not occur with sufficient regularity that people always
know when they are in the company of people who hold cross-cutting
views” (65).
However, there are reasons to believe that Americans either can detect
others’ views or are able to apply aggregate patterns. In the same book,
Mutz (2006) finds that people in the United States are more likely than
residents of other countries to be able to perceive the partisanship of their
non-spouse discussants. As she writes, “[h]igh levels of partisanship—
whether it is favoritism for parties or for particular candidates—make it
easier to select congenial discussion partners. Moreover, the highly sim-
plified two-party system in the United States may make these distinctions
more visible than in countries that have many parties” (p. 53).
Additionally, as Americans become increasingly sorted, socially and
geographically, individuals might actually overestimate patterns in parti-
sanship. Ahler and Sood (2018a, 2018b) have found that people perceive
higher stereotypic associations between partisanship and demographic
traits than exist in reality, for both their ingroup and their outgroup.
However, Carlson and Hill (2021) find that perceptions of how others
voted in the 2016 election are not nearly as biased as the perceptions of
social group composition within the parties. They find that party identifi-
cation, individuals’ explanation of the most important problem facing the
United States, and membership in a racial or ethnic minority group are the
most informative characteristics for accurately guessing how someone
else voted in 2016. Importantly, they also find that the more socially
similar a respondent was to the person whose vote they were trying to
guess, the more accurate their guesses were. Similarly, Deichert (2019)
finds that social cues, such as the clothing one wears, can lead individuals
to consistently and stereotypically assign partisan labels. For example, she
finds that individuals wearing camouflage were consistently identified
as Republicans.
The clearest articulation of detection we have read in the political
science literature is in a formal model presented by MacKuen (1990), a
model we engage with more seriously in the next chapter. Included in his
model is a behavior called signaling, which we think of as the flip side of
detection. He argues that in the absence of certainty about others’ views,
“it makes sense for the individual to use all available information about
the prospects of any particular conversation before making a strategy
choice . . . one may send signals to others in order to increase the
probability of encountering Friends and avoiding Opponents” (MacKuen
1990, p. 80). He posits that two types of signals could be sent: intentional
and inadvertent. The benefits of signaling depend on the individual’s
tolerance for talking about politics, the distribution of political views in
the environment, and “on the ability or desire of others to interpret
signals of varying ambiguity” (p. 81). Yet, as far as we can tell, political
scientists have not quantitatively investigated this “ability or desire,”
what we label as the Detection stage of the 4D Framework.2
While the process of signaling and detection – as either perception of
an individual or application of known patterns in the aggregate – has not
been a major focus of study in the political science literature, the existence
of detection has appeared as an important assumption in many previous
analyses of political discussion. Noelle-Neumann’s seminal work (1974,
1993) on the “spiral of silence” is predicated on the idea that people can
perceive the viewpoints of others as a way to gauge the strengths of
contending ideas or parties and sense swings in the climate of public
opinion.3 Although they do not address the process by which people
come to recognize the unspoken political views of others, qualitative
studies by Kathy Cramer and Nina Eliasoph suggest that some
environments are more conducive to the emergence of discussion than
others, namely those in which people are able to perceive similarity and
shared identity with their potential discussants.
Evidence from other disciplines suggests that people may be quite good
at perception. A long line of research on person perception in social
psychology has found that even for identification of perceptually ambigu-
ous group members – that is, individuals who do not show any explicit
visible markers of group membership (Tskhay and Rule 2013) – people
can accurately identify the traits or characteristics of others at a rate better
than chance alone. Tskhay and Rule (2013) argue that person perception
is both a rapid and automatic process and that “when two individuals
meet for the first time, they immediately begin to make inferences about
each other based on physical characteristics: clothing, hairstyle, body
type, and even individual facial features” (p. 72). While not all work in
the political domain has yielded significant results (see Benjamin and
Shapiro 2009), the preponderance has (Olivola and Todorov 2010;
Rule and Ambady 2010; see Samochowiec, Wanke, and Fiedler 2010
for a study outside the American context) as has a meta-analysis of
previous studies (Tskhay and Rule 2013).
Synthesizing this together, we have reason to believe both that people
can perceive the political identities of others based on quick impressions,
and that Americans may actually overestimate the stereotypic patterns of
association between demographics and political partisanship. Given that
people do connect political views with other identity markers, but with
little guidance from previous literature about how these associations are
utilized, we organize this chapter based on what respondents themselves
disclosed when we asked them about their process. In a free-response
question on our CIPI II Survey, we asked participants to tell us how they
would go about guessing the political views of others, if they could not
directly ask them.4 We wanted to get a sense for which pieces of infor-
mation individuals found most useful when trying to guess what others
think about politics. We worked with a team of research assistants to
code the responses into types of cues, as shown in Table 4.1. More details
about our approach can be found in the Appendix, but we note that a
response could be coded into more than one category, if more than one
method was described.

Table 4.1. Detection categories based on free response data

                                           Percent of   Percent of Informative
                                           Responses    Responses
Non-Guessers                                   27
  Blank or Nonsense                             9
  Don’t Know How to Guess                       8
  Wouldn’t Try to Guess                        10
Just By Looking at Them                        20              26
  “Gut Level” Impression                        6               8
  Visible Demographic Characteristic            6               8
  Clothing and Visible Signaling                8              10
The Facts of Life                              18              23
  Personality or Trait Characteristics          4               6
  Geography                                     3               4
  Occupation and Lifestyle                     11              13
Conversational Cues                            34              46
Media Usage                                     4               5
Directly Political Cues                        38              50

Note: Categories were not mutually exclusive, so percentages can sum to more than 100.
Free response data collected on the CIPI II Survey. Left-hand column reflects hand coding of
984 responses; right-hand column reflects hand coding of 734 responses. A response was
considered belonging in a category if at least one coder considered it to belong there.

Our goal here is largely inductive: We did not begin with strong
expectations about what people would say and allowed their responses
to guide our categorization. First, we want to highlight the variety of
strategies people use to recognize the political views of others. Second,
we want to connect their answers back to previous literature about the
way people ascertain the identities of those around them. This reveals
high concordance: Many of the patterns that social scientists have
detected in the aggregate seem to be informative to members of the mass
public, as well. In Chapter 9, we more systematically explore the traits
that correlate with an individual’s “detection system” to assess who is
most likely to try to ascertain the viewpoints of others and whether they
say that information is necessary to them before proceeding with a
political discussion.

Unpacking the Self-Reported Detection System


As shown in Table 4.1, we uncovered six broad categories of cues. The
first category, “The Non-Guessers,” includes individuals who left the
question blank or wrote something nonsensical, as well as those who
reported that they could not even begin to try to guess or that they
simply would not try to guess. The second category, “Just by Looking at
Them,” captures responses based on visual cues, including “gut”
responses. The “Facts of Life” category refers to nonvisual cues about
the individual, such as his or her personality, where he or she lives, or
what he or she does for a living. Fourth, “Conversational Cues” is a
category that also relies on nonvisual information, but instead refers to
both information that someone reveals once the conversation unfolds
and how they speak, with reference to their general conversational tone.
The “Media Usage” category represents a revealed political behavior:
their media usage and preferences. Finally, the “Directly Political”
category reflects responses in which our subjects ignored our instruc-
tions and said they would directly engage the person about their political
views. In the sections that follow, we provide more detail about these
categories, example responses, and theoretical justification for these
categories based on previous research.
Below, we unpack each of the categories we coded from the free
response data. We first provide example responses that are illustrative
of the category. We then connect the coded themes to empirical analyses
from a variety of studies, including our own and findings from the
literature. Our analyses stem from a series of studies we conducted in
the 2014–2017 period that focus on the extent to which individuals use
different kinds of cues to categorize individuals as Democrats
and Republicans.

The Non-Guessers

Have no idea.
I don’t know.
I do not bother.
I wouldn’t even try.
I would not.
not my business.
i would never guess about anyone's political views – it is their business

We begin with the 27 percent of subjects who either refused to answer the
question (leaving it blank or writing a garbled response), said they would
not know how to go about making a guess, or said that they would not
try to guess. A variety of rationales emerge among those who provided a
written response indicating that they were “Non-Guessers.” While some
subjects indicated that they would not know how to guess, others seemed
to indicate that they would choose not to, even if they had an idea about
how they could. Finally, a small portion of subjects seemed offended at
the premise of the question, indicating that trying to guess someone else’s
views was inappropriate.
We take these responses seriously. While some of them – particularly
the blank and nonsensical ones – may simply reflect respondents who
did not make an effort to answer a free response question, there is good
reason to believe that a large proportion of the population does not
know how or does not try to figure out the viewpoints of the people with
whom they actually talk about politics, let alone their potential
discussants.
For our purposes, we are not interested in whether people are accurate
in their perceptions of the viewpoints of their potential discussants.
Although Huckfeldt, Johnson, and Sprague (2004) argue that accurate
inferences are necessary for effective communication for instrumental
outcomes, in this book we are not focused on how conversation affects
attitude change or knowledge, for which effective communication is
helpful. Instead, we care about detection because it informs the decisions
that people make in the next step of the political discussion cycle.
Perception guides decision-making, whether or not a person correctly
perceives the political identity of another.5
In addition to exploring the variation in cues that people suggest they
could use, we also want to highlight the variation between people in the
confidence they have in making guesses about another person’s political
views. We had the opportunity to include a question on a representative
poll conducted by TargetSmart in advance of the 2016 general election in
which we asked respondents to indicate how confident6 they would be
guessing the political views of others if they had certain pieces of infor-
mation about them. In this case, we provided a list of six different
characteristics7 that we thought would be useful cues based on extant
political science research. On average, respondents had some confidence
in their ability to guess a person’s party identity, with information about a
person’s social media and news use being the kind of information in
which people had the most confidence. (See the Appendix for basic
summary statistics for respondents’ confidence in guessing someone’s
political views based on each characteristic).
Yet, there was substantial variation from one respondent to the next.
Depending on the characteristic, between 14 and 57% of respondents said
that they would be able to accurately guess someone’s political views at
“no better than chance.” In fact, 9% gave that response for all six
characteristics, indicating that they would be guessing at
random. Conversely, there is a very small set of people who are quite
confident in their ability to determine political views based on other
aspects of a person’s identity: 1% marked that they would be virtually
certain guessing views based on the information provided for all six
characteristics. To relax this somewhat, we find that 27% of respondents
reported that they would be virtually certain guessing views based on at
least one characteristic.
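The per-respondent summaries in this paragraph (at chance on all six characteristics, virtually certain on at least one) amount to simple row-wise aggregation. A minimal sketch, assuming a hypothetical 0–4 coding of the confidence scale and invented ratings (none of this is the survey’s actual data):

```python
# Hypothetical ratings: one confidence score per characteristic for each
# respondent, where 0 = "no better than chance" and 4 = "virtually certain".
ratings = [
    [0, 0, 0, 0, 0, 0],   # at chance on every characteristic
    [4, 2, 1, 0, 3, 4],   # virtually certain on at least one
    [1, 2, 2, 1, 0, 3],   # somewhere in between
]

# Share of respondents at chance on all six characteristics,
# and share virtually certain on at least one.
all_chance = sum(all(r == 0 for r in row) for row in ratings)
any_certain = sum(any(r == 4 for r in row) for row in ratings)

pct_all_chance = 100 * all_chance / len(ratings)
pct_any_certain = 100 * any_certain / len(ratings)
```

With real data, `ratings` would hold one row per survey respondent, and the two shares would correspond to the 9% and 27% figures reported above.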
In the sections that follow, we are interested in those individuals who
do try to guess others’ views in advance of a discussion. Turning back to
the CIPI II Survey, the rest of the analysis supporting Table 4.1 focuses on
the 73% of respondents who provided an answer indicating how they
would try to guess another person’s view. When we report proportions of
responses in the different categories described shortly, we use as the
denominator these 734 respondents. Thus, the interpretation in the far
right column of Table 4.1 should read: “Of the people who indicate that
they could or would try to guess another person’s political views, what
percent reported using this type of information?”
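The bookkeeping behind these percentages can be sketched concretely. A minimal illustration with invented coded responses (the category names follow Table 4.1, but the data and code are purely illustrative, not the study’s):

```python
# Hypothetical sketch of the tabulation behind Table 4.1. Each free
# response may be coded into more than one category, so percentages
# can sum to more than 100.
from collections import Counter

NON_GUESSER = {"Blank or Nonsense", "Don't Know How to Guess",
               "Wouldn't Try to Guess"}

coded_responses = [
    {"Directly Political Cues"},
    {"Conversational Cues", "Media Usage"},
    {"Wouldn't Try to Guess"},
    {"Clothing and Visible Signaling", "Directly Political Cues"},
    {"Blank or Nonsense"},
]

counts = Counter(cat for resp in coded_responses for cat in resp)

# Informative responses exclude the Non-Guessers; they serve as the
# denominator for the right-hand column.
informative = [r for r in coded_responses if not (r & NON_GUESSER)]

pct_all = {c: 100 * n / len(coded_responses) for c, n in counts.items()}
pct_informative = {c: 100 * counts[c] / len(informative)
                   for c in counts if c not in NON_GUESSER}
```

Because the coding is multi-label, the informative-response percentages can exceed 100 in total, just as in Table 4.1.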


Just by Looking at Them

Really guess if they are a person of color or not and/or “LOOK“ like a
conservative (asshole)
Their age . . . the clothing they wear, their race.
Gender
How they dress and age.
Facial expresions [sic]
Nose.
Clothing choice, how they carry themselves, facial expressions,
Appearance
what is written on their shirt or hat, any buttons they wear
Size them up.
Sometimes dress can indicate, body language etc.
look(haircut, makeup or not, etc)

Just over a quarter of respondents provided an answer indicating that
they could guess someone’s political views based on looking at a person
without clues that require the exchange of words. The answers in this
category fall into three groups. The first are subjects who identified a
clear, visible clue about a person’s demographics – including, age, gender,
and race or ethnicity – representing 8 percent of the total informative
responses given. Political scientists know that there are persistent and
often strong patterns between an individual’s ascriptive demographic
traits and their political affiliations. For example, Black Americans are
more likely to identify as Democrats. Older individuals are more likely to
be Republican. As we mentioned at the beginning of this chapter, people
may actually overestimate the strength of these patterns (Ahler and Sood
2018) and this sort of information appears to be useful to a number of our
respondents, just as it was informative for individuals trying to guess how
people voted in 2016 (Carlson and Hill 2021).
The second group of responses comes from subjects who pointed to a
clear, but not ascriptive, characteristic, representing 10 percent of the
subjects. These sorts of responses referred to the clothes people wear and
other grooming choices such as makeup or hair styles. People can make
deliberate choices8 in what they communicate to others. One can list
examples of explicit visible signals – such as yard signs, bumper stickers,
buttons, and laptop stickers – that can serve as effective ways to send a
message about one’s beliefs.9 However, we know that not everyone is
willing to publicly express their political views in these ways. These explicit
cues are only likely to be expressed by the subset of Americans who are
interested and confident enough in their political views to advertise them to
others. The American National Election Study reveals that in the past thirty
years, between 10 and 20 percent of Americans reported that they either
wore a campaign button, put a campaign sticker on their car, or placed a
sign in their window or in front of their house in a given election season
(Makse, Minkoff, and Sokhey 2019).
Many respondents in this category mentioned that they could deduce
political views from a person’s clothing but did not imply explicitly
political garments like campaign t-shirts or buttons. Although we did
not probe further, there might be reason to suggest that the style in which
a person dresses or the brands they choose to wear are indicative of their
politics. And in our politicized consumer climate, certain brands have
come to be associated with one party more than the other. A study
conducted by The Guardian in conjunction with the survey firm YouGov
found that Democratic and Republican millennials have distinct sartorial
preferences.10 The preference for different clothing brands may be rooted in
the expression of different values: traditionalism for Republicans and
diversity for Democrats. Building on these ideas, Deichert (2019) finds
that individuals perceive men wearing camouflage, western, or formal
business attire to be Republicans, whereas men wearing “hipster” or
“hippie” attire (two distinct fashion styles) were more likely to be viewed
as Democrats.
The third response set in this category is the most ambiguous, comprising
the 8 percent of subjects whose responses indicated something about the
way a person looked but without providing additional detail. These com-
ments cut to the core of the person perception literature in social psych-
ology. Psychologists have explored thoroughly how individuals form first
impressions of others, and because physical appearance has such a strong
influence on first impressions in face-to-face interactions, scholars have also
turned their attention toward how photographs of individuals can affect
first impressions (Vazire and Gosling 2004; Marcus, Machilek, and Schutz
2006; Gosling, Gaddis, and Vazire 2008). These first impressions, even
those based solely on physical appearance, can impact a perceiver’s subse-
quent behavior (Efran 1974; Todorov et al. 2005).
A small subarea of this field explores how people detect ideology based
on facial structure, finding that, at a rate significantly more accurate than
chance, Americans are able to categorize the political affiliations of others
based on simply looking at photographs of them. This work has been
premised on guessing the partisanship of elites – typically, unknown or
past candidates for office. But a study that extends to evaluating the faces of
college seniors based on their yearbook photos (Rule and Ambady 2010)
suggests the findings should generalize to our ability to detect the ideology –
and thus a clue about partisanship and political beliefs – among potential
political discussants. The proposed mechanism of these perceptions links
facial features to personality traits such as dominance (Rule and Ambady
2010; Samochowiec, Wanke, and Fiedler 2010), warmth (Rule and
Ambady 2010), or sex-typicality (Carpinella and Johnson 2013).
All three types of free responses in this category of our data – ascriptive
traits, visual markers, and a general “look” – are signals that can be
detected before people even open their mouths. While these cues may
simply be proxies for other traits that are more informative to people
about others’ views, it is important to note that our respondents named
the cue, not necessarily which views it signified. We turn next to the kind
of information that can be gleaned from other aspects of an individual’s
identity, typically based on at least cursory verbal interaction.

The Facts of Life

If it were available, I would look at their car and their possessions, if they
recycle, who their friends are
getting them to talk about themselves as to what they do or where they went to
school. Most people tend to give it away by how they answer tghose [sic]
questions
based on what else i know about them - are they gay, do they have a high
paying job, what kind of car do they drive, how they dress.
Asking them what music they listen to.
I would ask about them, their type of job, their values
Either by the things they are doing or purchasing or watching on tv or what
they allow their chikdren [sic] to do
How much do you make?
Based on their educational level
Their hobbies and interests
ask if they smoke cannabis
Their personality
ask them if they are from California or New York
where do you live (neighborhood), do yo ugo [sic] to church, were you in the
military


As the United States has become increasingly polarized and sorted both
politically and socially, liberals have become more similar to other liberals
on nonpolitical dimensions, and conservatives have become more similar
to other conservatives on nonpolitical dimensions. Can these sorted cues
help people infer others’ political leanings? For 23 percent of our respond-
ents, signals related to a person’s traits, lifestyle, or geography were
informative of his or her political views. Approximately 4 percent of
respondents mentioned some facet of geography, a cue that could be
informative about socioeconomic status in several ways. Starting at the
most proximate level, a person’s neighborhood is often a clue about his or
her income and educational levels. At least recently, college-educated
individuals tend to vote for Democrats, while wealthier individuals tend
to vote for Republicans. Scaling up a bit, the characteristics of one’s
community may be informative as a proxy for a person’s lifestyle prefer-
ences. Counties with higher median household incomes are more likely to
vote Republican than counties with lower median household incomes.
Rural Americans are more likely than urban Americans to affiliate with
the Republican Party, even though there is important within-party vari-
ation among rural Americans (Nemerever 2021). Some states have
become so associated with one party or the other that a person has a
better-than-chance rate of guessing successfully just based on state resi-
dence alone.11 Carlson and Hill (2021) find that knowing that a person
was from Washington, DC, was associated with more than a twenty-
point increase in accuracy in guessing how that person voted in 2016.
Many more respondents – 13 percent – indicated that they could guess
another person’s political views based on other details about the individ-
ual’s life trajectory and lifestyle. Much of this information seems to be
rooted in indications of a person’s socioeconomic status; some respond-
ents said that they would directly try to ascertain a person’s income level
(or type of job) or educational level (or educational pedigree). This
category also included the values and worldview clues mentioned by
our respondents. Religiosity should be a useful cue on both sides of the
spectrum. A Pew Research Center survey indicated that 63 percent of the
religiously unaffiliated identify with or lean toward the Democrats. In
2008, religiously unaffiliated voters voted for Obama at similar rates as
White evangelical Protestants did for McCain.12
Beyond these factors, other responses indicated that a person’s interests
and hobbies could be informative. A long vein of research in sociology
explores the notion of “lifestyle enclaves,” where people’s communities
and networks share their tastes for leisure, consumption, and ways of
living. Increasingly, these enclaves align with political and ideological
views (for a review, see DellaPosta, Shi, and Macy 2015). The advent of
new data sources has bolstered these findings: There are ideological
differences in preferences for books (Shi et al. 2017), movies (Haidt
2014; Haidt and Wilson 2014), musical genres (Long and Eveland
2018), and television (Katz 2016). Data from social media have been
used to reveal “politically aligned lifestyle enclaves”: Shi et al. (2013)
study the pattern of co-following on Twitter to reveal clusters of users
whose members share preferences both for facets of the cultural land-
scape – such as movies or food – and politics.13
In an effort to contextualize the free responses in this category
within broader partisan stereotypes, we conducted a supplemental
analysis based on another question in the CIPI II Survey.14 Table 4.2
shows the percentage of respondents who thought that individuals
described as having various nonpolitical preferences aligned with each
political party. For example, 58% of respondents considered someone
who liked country music to be a Republican or lean Republican,
whereas only 18% of respondents thought a country music fan would
be a Democrat. In contrast, 56% of respondents thought that a hip-
hop fan would be a Democrat or lean Democrat, whereas only 10% of
respondents thought hip-hop fans would be Republicans. At the same
time, many signals for which there are documented patterns in the
literature were fairly uninformative – whether someone had a college
degree, whether someone had a messy desk, and whether someone
preferred cats to dogs.
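The in-text summaries combine each party column with its leaners. A minimal sketch of that collapsing, using two rows from Table 4.2 (the code and data structure are illustrative, not the authors’):

```python
# Hypothetical sketch: collapsing the five-point party guesses into
# "Democrat or lean Democrat" and "Republican or lean Republican".
# Percentages are in the order: Democrat, lean Democrat, Independent,
# lean Republican, Republican.
table = {
    "Liked country music (R)": [12, 6, 25, 16, 42],
    "Liked hip-hop (D)":       [37, 19, 34, 6, 4],
}

collapsed = {
    trait: {"dem_leaning": d + ld, "rep_leaning": r + lr}
    for trait, (d, ld, ind, lr, r) in table.items()
}
```

This reproduces the figures quoted above: 58% viewed a country music fan as a Republican or lean Republican, and 56% viewed a hip-hop fan as a Democrat or lean Democrat.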
The final set of answers in this category of the free response data
referred to traits and personalities, clues that were informative for about
6 percent of respondents. Ideological sorting among the American public
into the political parties may have had the side effect of sorting the parties
on more innate traits such as philosophical outlook. Political ideology
also can be interpreted as a reflection of broader psychological dispos-
itions, and there is evidence that liberals and conservatives have sorted on
moral foundations (Graham, Haidt, and Nosek 2009), motivated social
cognition (Jost et al. 2003), and authoritarianism (Hetherington and
Weiler 2009; Hetherington 2018). There is even evidence of personality
differences (McCrae 1996; Jost et al. 2003; Jost 2006; Carney et al. 2008;
Mondak and Halperin 2008; Gerber et al. 2010; Mondak et al. 2010),
although Ludeke et al. (2014) find that people tend to overreport the traits
they deem to be desirable, so these personality differences may be rooted
in different preferences for personality traits.

Table 4.2. Socioeconomic and non-visible demographic traits and inferred partisanship

                                              Ind. Leans           Ind. Leans
                                   Democrat   Democrat   Indep.    Republican   Republican
Were gay or lesbian (D)                53         12        27          3            5
Liked hip-hop (D)                      37         19        34          6            4
Did not believe in God (D)             36         12        38          6            8
Preferred cultural fusion food (D)     33         17        36          7            7
Volunteered in the community           27         14        34          9           17
Had a college degree                   24         12        31         10           22
Preferred cats to dogs (D)             23         13        46          8           10
Never attended college                 19         13        37         11           21
Had a messy desk (D)                   17         15        51          6           11
Had children                           16         11        43         10           21
Liked sports                           14         10        42         15           19
Preferred action films (R)             15         10        37         14           24
Served in the military (R)             13          7        25         16           39
Liked country music (R)                12          6        25         16           42
Regularly attended church (R)          13          6        22         13           45
Owned guns (R)                          6          4        16         11           62

Note: Data collected from CIPI II Survey, N = 997 to 1,007 for each item, reflecting some
missing data. Republican column combines Strong Republican and Republican; Democrat
column combines Strong Democrat and Democrat. (D) denotes a characteristic that is
stereotypically or empirically more likely among liberals or Democrats; (R) denotes a
characteristic that is stereotypically or empirically more likely among conservatives or
Republicans, based on previous research.

Conversational Cues

I'd ask them questions about something in fiction that's relatable to something in
reality. They're usually more willing to answer about something related to pop-
culture. Depending on how they feel about it I can gauge their general position.
Discussing something of cultural importance and gauging their reaction.
I would perhaps ask them other questions about their life that might give me
an indication as to their political views.
At my age general conversation usually results in people letting me know
where they stand politically
Listening to their discussions with other people.
I would observe their way of talking. Liberals and conservatives leave certain
hints.
THE WAY THERE [sic] TALKING
manner of speech
Listen to their comments closely.

The next category of cues included answers both about what someone
says as well as how they were saying it, what might be thought of as
political shibboleths. Nearly half of respondents seemed to indicate that
listening to a person talk about things that are not related to politics can
be informative about their political leanings.
Is there evidence that those on the left and those on the right talk
differently? When it comes to policy itself, individuals certainly might make
some predictions about who says what. If Democrats and Republicans have
different policy priorities, then they might be more inclined to talk about
different societal problems. Individuals could also reflect the frames, catch-
phrases, and buzz words used by political elites on their side of the aisle,
given that the mass public can be led by the messaging of elites (Zaller
1992). But moving beyond known political differences in the substance of
what is said, what sorts of clues might be present in nonpolitical speech?
The free response answers did not provide too many clues, but we can
speculate. Speech patterns that correlate with other informative traits –
such as regional accents, the use of religious language, or the sophistica-
tion of one’s vocabulary – could signal identities that are likely to align
with someone’s political views. Based on the patterns of cultural differ-
ences we outlined previously, it is possible that on average, people per-
ceive liberals and conservatives to talk about different things; in a
conversation about last night’s television programming, discussing a love
of Duck Dynasty, which was most predictive of voting for Donald
Trump, may signal different preferences than talking about the latest
programming on Family Guy, which was most predictive of voting for
Hillary Clinton (Katz 2016).
There is a small literature exploring differences in speech patterns them-
selves between liberals and conservatives.15 The vast majority of this work
has been conducted on speeches or text generated by political elites, such as
US presidents (Cichocka et al. 2016) or political bloggers, although it has
been demonstrated cross-culturally across several different European coun-
tries (Cichocka et al. 2016; Schoonvelde et al. 2019) and in Lebanon
(Cichocka et al. 2016). Liberals speak in more complex language than
do conservatives, who prefer short and unambiguous statements.
Conservatives are also more likely to prefer nouns over verbs and adjectives
(Cichocka et al. 2016). The posited reasoning is that language is a reflection
of cognitive, affective, and motivation preferences; known differences in
these preferences between liberals and conservatives suggest that there
could be differences in language. To date, no one has studied the everyday
speech or political talk of liberals and conservatives to assess whether these
linguistic differences permeate to the mass public, but it is possible that it is
these sorts of subtle cues that individuals pick up on to infer others’ views.

Media Usage

Social media post history

check their facebook page


I might check out their Facebook presence initially.
Ask what media they track . . .
See what newspaper they were reading
would go on a rant about how evil Fox News and Trump is and see their
reaction
I would ask them what news program to they watch or listen to on t.v. or
radio.
I will ask what news network they watch
examine their social media posts
Ask them if they watch Fox news?

Political communication scholars would suggest that individuals examine
a person’s media usage behavior to learn about their political views, and
many of our subjects indicated that they would, too. In today’s fractured
media environment, news source preferences correlate strongly with par-
tisanship. A 2014 Pew Research Center report suggests that 47% of
consistent conservatives prefer Fox News, whereas consistent liberals
prefer a combination of CNN (15%), MSNBC (12%), NPR (13%), and
the New York Times (10%).16
There are signs that the selection of a news source has become politi-
cized, in large part because much of the public perceives bias in the news.
For example, Figure 6.1 in Settle (2018) shows that most people think
that the majority of thirty-six news sources assessed had an ideological
bias, and conservatives were more likely than liberals to ascribe bias to
news sources. Even attitudes about the media have polarized over time.
Gallup reports that in 2018, only 21% of Republicans have a great deal
or fair amount of trust in the media, but 76% of Democrats do. Thus,
even hearing how someone talks about the news or media may be a signal
of their partisan or ideological views.
Finally, in the internet age, social media behavior might be another
informative cue. Settle (2018) presents evidence that individuals can infer
political leanings of their Facebook friends based on the political and
apolitical content that they post. However, only about 5% of our respond-
ents indicated that they would use social media cues to infer others’
political views, making it less popular than the other subtle cues described.

Directly Political Cues

Ask about the economy


There [sic] attitude towards gas prices
just talk about current events and get their views
How do you think Pres. Trump is doing?
By expressing one of my own personal values which could indicate my political
views and note the other person's reaction and reply to my statement
Ask an open ended question that would provide input on their positions

Finally, a significant percentage – 50% – of our respondents either did not
read our prompt carefully or did not care and replied that they would ask
a person directly or initiate a political conversation. Many said they
would ask about issues related to politics, most commonly the economy
or health care, or ask people about their attitude toward President
Trump. Others replied that they would share some of their own views and
see what the other person said. Beyond serving as a case study of free
response noncompliance, we find these answers telling. If we take people
at their word – and we think at least a subset of these respondents were
answering in good faith – this suggests that some people’s “detection
strategy” is to be blunt and initiate a political conversation, a fact that
has interesting implications in light of the results we present in the
remaining chapters of this book about the extent to which many people
have an aversive reaction to such tactics.

What’s in a Name? Inferring Political Preferences from Names


The range of cues discussed previously – based on what a person looks
like, who they are, or what they say – are the kinds of signals that can be
gleaned in relatively brief interactions. None of our subjects mentioned in
their free response answers that they would use a person’s name to make a
guess about their political views. But names can be important expressions
of social status, cultural and ethnic identity, and religion, all of which
are signals respondents indicated could be used to infer someone’s
political leanings.
First names can be “an expression of taste, social position, and status
aspiration” (Lieberson 2000, quoted in Oliver, Wood, and Bass 2016,
p. 56). Americans, in particular, are likely to choose Anglo names to signal
upward social mobility, since names associated with ethnic or racial minor-
ities can lead to stereotyping (Figlio 2005). Conversely, selecting decidedly
ethnic-sounding names may signal cultural attachments: Fouka (2020)
shows that after German was banned from language instruction in schools
following World War I, German-Americans were more likely to embrace
their cultural identity and choose distinctly German names, such as
Adolph, for their children. Sociologists have also noted trends in the
popularity of names over time and the socioeconomic signals that names
can send (Lieberson and Bell 1992; Joubert 1994; Lieberson 2010).
A person’s name is the signal individuals are most likely to know before
actually meeting someone face-to-face: in a professional context with a new
colleague; before meeting the romantic partner of a friend for the first time;
or when initially contacting a doctor, repair person, or municipal employee,
for example. There are also countless interactions in a person’s daily life
where they know someone’s name but not much else.
Do people have the ability to glean political information based on
names alone? In this section, we explore the extent to which names can
be a useful cue for inferring political leanings by conducting a series of
experiments in which participants guessed the political leaning of people
with names selected to signal liberal or conservative political ideology.
The general protocol worked like this: Participants, recruited from
Amazon’s Mechanical Turk, were randomly assigned to evaluate some-
one with a name selected based on its liberal or conservative connotation.
Sometimes participants were only presented with a name and other times
they were presented with a vignette describing a social situation in which
someone’s name was revealed. After exposure to the name in some way,
participants were asked to guess the person’s political ideology or parti-
sanship. We find that individuals do indeed infer political leanings that
are probabilistically concordant with an ideology based on a name alone.
We begin our assessment examining pairs of names that might signal
race and religiosity as pathways to inferring political leanings. As we
discussed previously, there are both strong empirical relationships
between these factors and partisanship, and high levels of recognition
among the American public about the existence of these patterns. A long
line of research uses names to signal race and ethnicity (e.g. Butler and
Broockman 2011; Butler and Homola 2017; Lajevardi 2020; Gell-
Redman et al. 2018).
We picked a subtle manipulation, and we randomly assigned partici-
pants to evaluate someone17 named Dwayne (a stereotypically Black
name and spelling) or someone named Duane (a spelling more common
among White men) (Figlio 2005). Using data from Clarity Campaign
Labs,18 we see that 65% of registered voters named Dwayne are
Democrats, while 56% of Duanes are registered Republicans.19 In our
experiment, 61% of respondents in the Dwayne condition thought that
Dwayne was Black and 58% of participants in the Duane condition
thought that Duane was White. Furthermore, Duane was viewed as
significantly more conservative (overall, fiscally, and socially), and more
likely to be a Republican than Dwayne, who was viewed as more liberal
and more likely to be a Democrat.
To examine whether names cue religiosity, we selected two religious
names from the Old Testament (Gideon and Jedidiah) that were roughly
equally common, but one that was more common among Democrats and
one that was more common among Republicans, according to Clarity
Campaign Labs data.20 We also used a secular name, Brenden, as a
control because it was roughly as common21 as Jedidiah and Gideon
but was evenly split between Democrats and Republicans (49%
Democrat, 51% Republican). The biblical names signaled religiosity:
75% of participants thought that Jedidiah or Gideon was more religious
than Brenden, while only 24% of participants thought that Brenden was
more religious. Participants perceived Jedidiah and Gideon to be signifi-
cantly more conservative and more likely to be a Republican than
Brenden. Moreover, the religious cue trumped the empirical relationship
between name and ideology: There were no significant differences in
perceived ideology or partisanship between Gideon and Jedidiah even
though there are real-world differences in their party registration.
We could have continued to test out other names that fall into racial
or religious categories, but instead we were more intrigued by the
possibility of testing a more systematic pattern in naming preferences
that had been established in the literature. Oliver and colleagues (2016)
show that liberals and conservatives (or Liberallas and Konservatives, as
they call them) tend to favor names with different sounds (phonemes).
Specifically, they find that liberals tend to give their children names with
softer, feminine sounds, while conservatives favor names with harder,
masculine sounds.
We selected names from the examples provided in Oliver et al. (2016)
and our own brainstorming based on their findings. The research by
Oliver and colleagues (2016) shows us which phonemes should be more
likely to be liberal or conservative. We cross-referenced these names with
data from Clarity Campaign Labs that shows the percentage of registered
Republicans and registered Democrats with each name. Table 4.3 shows
all of the names that we used in our experiments. The first three columns
show that names fitting the phonemes identified by Oliver and colleagues
(2016) do correlate with party registration. For instance, Liam is a name
identified as having a liberal phoneme and 61 percent of American
registered voters named Liam were registered as Democrats.
With the exception of Lenny, the three columns on the right show that
participants viewed individuals with phonetically liberal names as liberal
or moderate. Participants viewed those with phonetically conservative
names as ideologically conservative. For both liberal and conservative
names, the patterns hold across different dimensions of ideology (social,
fiscal). Overall, the results presented in Table 4.3 show that individuals do
infer political ideology from a cue as subtle as a name.22
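The aggregate pattern behind that conclusion can be checked with a few lines of arithmetic over the values reported in Table 4.3: names with liberal phonemes should skew Democratic in the voter file and draw lower (more liberal) ideology ratings than names with conservative phonemes. A minimal sketch, where the grouping and averaging are our own illustration:

```python
# (percent registered Democrat, mean perceived ideology on the 1-7 scale)
# for each name, taken from Table 4.3; higher ratings = more conservative.
liberal_phonemes = {"Liam": (61, 3.50), "Ari": (67, 2.91),
                    "Lenny": (59, 4.24), "Luca": (60, 3.15)}
conservative_phonemes = {"Kent": (40, 4.78), "Dirk": (43, 4.27),
                         "Kurt": (42, 3.99), "Tucker": (45, 4.95)}

def group_means(group):
    """Average the Democratic registration share and the perceived ideology."""
    dem = sum(d for d, _ in group.values()) / len(group)
    ideo = sum(i for _, i in group.values()) / len(group)
    return dem, ideo

lib_dem, lib_ideo = group_means(liberal_phonemes)
con_dem, con_ideo = group_means(conservative_phonemes)
print(f"liberal phonemes:      {lib_dem:.1f}% registered Dem, rated {lib_ideo:.2f}")
print(f"conservative phonemes: {con_dem:.1f}% registered Dem, rated {con_ideo:.2f}")
```

On these numbers, the liberal-phoneme names average about 62 percent Democratic registration, while the conservative-phoneme names average about 43 percent and are rated roughly a full point more conservative on the seven-point scale.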
To push on these results a bit, we conducted another experiment in
which we provided some context instead of asking subjects to evaluate
someone based on a name alone. We created six short vignettes providing
the context under which the participant should imagine encountering the
hypothetical character, how they learned his name, and how a


 .. Actual and perceived political leanings of phonetically ideological names

Percent Percent Number of


Registered Registered Registered Mean Social Mean Fiscal
Name Democrat Republican Voters Mean Ideology Ideology Ideology
Liberal Phonemes
Liam 61 39 6,762 3.50 3.37 3.78
Ari 67 33 7,172 2.91 2.69 3.28
Lenny 59 41 6,166 4.24 4.19 4.42
Luca 60 40 1,858 3.15 3.03 3.26
99

Conservative Phonemes
Kent 40 60 63,988 4.78 4.72 4.89
Dirk 43 57 11,416 4.27 4.17 4.37
Kurt 42 58 84,893 3.99 3.81 4.23
Tucker 45 55 5,731 4.95 5.01 4.99
Note: Columns 1–3 reflect data from Clarity Campaign Labs, as of 2014. Columns 4–6 show the average ideology ratings for each name given by
508 Mechanical Turk study participants in February of 2015. Ideology was measured on a 7-point scale, ranging from 1 (extremely liberal) to 7
(extremely conservative).

conversation might have started. We tried to describe scenarios in which
the hypothetical character was a stranger in a public place. We also tried
to select contexts that were relatively neutral ideologically.23 The full
vignettes are available in the Appendix.24 We focused on just the names
Liam and Kent in these studies.
First, aggregating across all contextual conditions, we find that
individuals perceived Kent to be significantly more conservative than
Liam. Looking within each contextual condition is a bit less clear,
likely due to a lack of statistical power because of the small sample
sizes. Kent was perceived as more conservative than Liam in all six
scenarios, but this difference was only statistically significant in two of
them. This suggests that, at minimum, adding context could dampen
the strength of the name cue. This is certainly likely to be true in the
experimental context, but we can imagine it being true in the real
world as well, where individuals often have more information about
someone than just his or her name.

   :  


 
What have we uncovered so far? First, not everyone can or will attempt
to guess others’ political views: About 27 percent of our sample left the
free response question about inference blank or said they would not or
could not guess. Among the 73 percent who did provide an approach,
about half reported that they would use a nonverbal signal or latch onto
a characteristic of the discussant or their lifestyle. We also showed that a
very subtle form of many of these characteristics – a person’s first name –
can be informative about his or her views. As we discuss in more detail
in Chapter 9, there are important individual differences between people
that shape the types of cues people use, a fact that has interesting
implications for which kinds of discussions emerge among which kinds
of people.
In an affectively and socially polarized world, what goes through
individuals’ heads when they recognize someone’s political identity and
realize they are either part of their political ingroup or their outgroup? In
the next section, we dive into some of the assumptions that people make
about individuals who identify as Democrats and Republicans, first
exploring what social scientists have shown about trait association differ-
ences and then exploring what Americans think about the policy attitudes
of the outparty.


Character and Trait Associations


Most of the research on traits associated with political parties focuses on
how individuals evaluate politicians. However, recent work suggests that
these trait associations can extend to how people evaluate partisans more
generally. Hayes (2005) shows that Democrats are viewed as empathetic
and compassionate, while Republicans are viewed as leaders and moral.
Using ANES data from 1984–2008, Goggin and Theodoridis (2017, table
1) present characteristics attributed to Democrats and Republicans,
largely replicating Hayes’ (2005) work. The results show that, on average,
Democrats are viewed as more caring, compassionate, intelligent, know-
ledgeable, inspiring, and decent, while Republicans are viewed as more
moral and having more leadership qualities. Interestingly, these associ-
ations may extend to the way individuals perceive faces: Rule and
Ambady (2010) show that as the faces of target individuals were per-
ceived to be more powerful, they were more likely to be perceived as
Republicans, and as the faces of target individuals were perceived to be
warmer, they were more likely to be perceived as Democrats.
These results are also consistent with those presented by Rothschild,
Howat, Shafranek, and Busby (2018). The authors used structural topic
modeling on free response data in which respondents described
Democrats and Republicans. They found that Republicans described
Democrats as ignorant, poor, lazy, and unrealistic, while Democrats
described Republicans as rich, prejudiced, ignorant, closed-minded, self-
interested, and mean. In contrast, Republican respondents described
Republican supporters as rich, smart, educated, honest, individualist,
and patriotic, while Democrats described Democratic supporters as
caring, open-minded, smart, and educated. Thus, individuals appear to
describe their co-partisans with more positive traits and their outpartisans
with more negative traits. In a related book, Busby, Howat, Rothschild,
and Shafranek (2022) find that these stereotypes are deeply connected to
polarization. Outparty animus is much higher when individuals think
about outpartisans in terms of what they are like, rather than differences
in issue priorities between the parties.
Our own study, fielded on the Thanksgiving Study in 2014, reinforces
these patterns from the literature. We asked participants to indicate the
extent to which they thought a variety of positive and negative traits
characterized Republicans, and then separately make the same evalu-
ations about Democrats.25 The bottom line is that, as expected, partisans
attribute more negative characteristics to outpartisans and more positive
characteristics to copartisans. Reflecting patterns in previous research,
Democrats and Republicans agree that Democrats are warm and open,
and they agree that Republicans are aggressive. A majority of people
think that members of their outparty are shallow, jealous, neurotic, and
impatient.
These are all characteristics that might make for an uncomfortable
conversation, regardless of one’s political views. Thus, it might not be
political disagreement per se that Democrats are avoiding when they opt
out of a discussion with a Republican, but they might be avoiding an
interaction with someone they believe to be cold, hard-hearted, and
aggressive. Likewise, Republicans may shy away from conversations with
Democrats if they expect them to be distractible, impatient, and jealous.
In contrast, it could be the positive trait stereotypes about one’s own party
that drive people to prefer discussions with their co-partisans. Even when
conversations do occur, despite these stereotypes, anticipating these char-
acteristics might shape how individuals feel in the discussion, what they
are willing to disclose, how they process information they gather, and
their evaluations of their discussant(s).

Issue and Behavioral Assumptions


In addition to thinking that members of the outparty are more likely to
have negative traits, people also tend to believe that members of the
outparty hold more extreme viewpoints than they actually do (Dawes,
Singer, and Lemons 1972; Judd and Downing 1995; Robinson et al.
1995; Sherman, Nelson, and Ross 2003; Chambers, Baron, and Inman
2006; Chambers and Melnyk 2006; Ahler 2014). In a study on a nation-
ally representative sample, Levendusky and Malhotra (2016) find that
respondents perceived the two political parties to be about 20 percent
farther apart across a broad set of issues than they are in reality.
Building on these past findings, our twofold contribution here is subtle
but important. First, while research has shown that people think their
outgroup is extreme, we show that they also agree with exaggerated
caricatures of what the outgroup believes and that they denigrate the
way their opponents arrive at their opinions. This is consistent with points
raised by Druckman et al. (2021). Second, we show that these attitudes
are held about partisans they personally know, not just their abstract
notion of members of the outgroup.
We fielded a survey experiment where we asked subjects to indicate
their level of agreement with a set of highly stylized depictions of the
political attitudes that people might hold on polarizing issues for each
party.26 We also include information-processing stereotypes as well as
three neutral or positive statements about each party. Subjects were
randomly assigned to assess the stereotypes for three target groups in
one of two orders: (1) candidates, voters, and known partisans; or (2)
known partisans, voters, and candidates. The full text of each stereotype
can be found in the Appendix.
Figure 4.1 shows the top-line results for the proportion of subjects who
agree with stereotypes about each target group of the outparty. The first
thing to note is the high rate of agreement. While the study design does
not eliminate the possibility that we are detecting expressive partisanship,
it does allow us to evaluate people’s assessments of outpartisans they
personally know, either unanchored or anchored by their evaluations of
more distant targets. As shown in Figure 4.1, greater proportions of
people agree with the characterizations about candidates, but they also
agree with the stereotypes for voters and known partisans. In some
instances, the proportions are statistically indistinguishable (analysis
shown in the Appendix). Most of the differences are not statistically
significant, but when they are, subjects who evaluated known partisans
first indicated more agreement with the statements than those who
anchored on candidates and voters. This suggests that people reduce their
agreement with stereotypes of known outpartisans only when comparing
them to outparty elites. This finding is consistent with the argument that
affective polarization, as often measured by feeling thermometers toward
“Republicans” or “Democrats,” is biased by individuals thinking about
political elites or extreme, politically engaged outpartisans (Druckman
et al. 2021). However, that we observe small differences on most meas-
ures by varying the order suggests that this might not be as big of a
measurement problem as some argue, depending on the stereotypes or
negative traits measured.
People are not making strong distinctions between the elites and the
non-elites in a party when conjuring up negative evaluations. While we
cannot rule out “partisan cheerleading,” we can at least state that people
are willing to cheerlead at the expense of evaluations of their friends and
family. Thus, not only do these attitudes matter for vote choice, but we
think they matter for interactions between members of the public. The
most partisan are the most likely to hold these attitudes, and they are also
the most likely to engage in political discussion. This means that while
stereotypes may not dissuade those who hold them from talking about
politics, they likely affect how people choose to communicate.


 .. Proportion of respondents agreeing with stereotype statements


Note: The first panel shows agreement with the statements for known outpartisans, the middle panel shows evaluations for outpartisan
voters, and the third panel shows evaluations for outpartisan candidates. Each panel separates results by ordering treatment condition.
Data come from the Thanksgiving Study, fielded on Mechanical Turk in 2014, N ¼ 757 for Known First condition (squares), N ¼ 765
for Candidates First condition (triangles), however please see Appendix for specific numbers of observations for each item after
accounting for missing data (730–754 for Candidates First and 716–746 for Known First).

From Detection to Discussion


Throughout this chapter, we have shown that people seem to have
political detection systems, but individuals uniquely tune them to focus
on different characteristics. Moreover, these perceptions go hand in hand
with partisan stereotypes that could affect the way individuals approach
political discussions. But an important question still remains: Is detection
a prerequisite for discussion? Will people only engage in a discussion if
they know others’ views in advance? To answer these questions, we
analyzed responses to a direct question on the CIPI II Survey about how
selective people are in picking political discussants. About 36% of our
participants reported that they would talk about politics with someone,
even if they didn’t know their political views, while 37% indicated that
they avoid talking about politics if at all possible. Situated between these
two groups are the remaining 27% of our respondents who reported only
discussing politics if they know others’ political views ahead of time. For
more than a quarter of respondents, a well-oiled detection system is
necessary for political discussion to occur.
We pushed on these results to see if people who only participate in
political discussions if they know others’ views in advance were also
more likely to report that they actively try to guess others’ views.
Our expectation is that people who require some information about
others’ political views before engaging in a discussion should have
developed the most refined detection skills for inferring the political
views of others. Those who will talk to anyone will likely make some
attempt to map the discussion landscape around them, but because
they care less, they are less practiced. Those who avoid discussing
politics at all costs shouldn’t be very good at detection because
knowing others’ views doesn’t matter: They are always disinclined
to talk about politics.
Table 4.4 shows the percentage of people of each type who report
trying to guess the views of others and their confidence in those guesses.
Consistent with our expectations, those who report that they need to
know others’ views before engaging in a discussion also report trying to
guess others’ views and are the most confident in doing so.27 Specifically,
about 52% of those who report that they need to know others’ views
before having a discussion report that they actually try to guess views in
advance. This percentage is markedly higher than those who report that
they will talk to anyone about politics (32%) and those who avoid
discussing politics if at all possible (25%).


 .. Frequency of guessing views and confidence in guesses

Tries to Guess Confidence in Guesses


Type Views (Percent) (Mean, Five-Point Scale) N
Talk to Anyone 32 2.14 366
Need to Know Views 52 2.67 275
Avoid if Possible 25 1.93 371
Note: Data come from the CIPI II Survey. The “Talk to Anyone” category includes those
who responded “I’ll talk about politics with someone, even if I don’t know their political
views.” The “Need to Know Views” category includes those who reported that they will talk
about politics with someone, but only if they know their political views; those who
responded that they will only talk about politics with someone if they know that they have
the same political views; and those who responded that they will only talk about politics with
someone if they know they have different political views.

Turning to confidence in these guesses, it is no surprise that those who
need to know others’ views report being the most confident in their ability
to accurately guess the political views of others. Respondents were asked
“[w]hen you discuss politics with someone, how confident are you of their
political views before starting the discussion?” On a five-point scale
ranging from 1=not very confident, no better than chance, to 5=virtually
certain, we can see that overall, confidence is fairly low, never reaching
the midpoint on the scale on average. However, difference-of-means tests
between these groups indicate that those who need to know others’ views
were significantly more confident in their guesses than were those who
will talk about politics with anyone and those who avoid discussing
politics if at all possible. Those who will talk to anyone about politics
report being significantly more confident than those who avoid discussing
politics at all costs.
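For readers who want to run the same kind of comparison on their own data, a difference-of-means test of the type reported here can be computed as Welch's two-sample t test, which allows the groups to have unequal variances. The ratings below are hypothetical stand-ins for illustration, not the CIPI II responses.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and approximate degrees of
    freedom for a difference-of-means test with unequal variances."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    # Sample variances (n - 1 in the denominator)
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    se_sq = var_a / n_a + var_b / n_b
    t = (mean_a - mean_b) / math.sqrt(se_sq)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = se_sq ** 2 / ((var_a / n_a) ** 2 / (n_a - 1)
                       + (var_b / n_b) ** 2 / (n_b - 1))
    return t, df

# Hypothetical five-point confidence ratings for two respondent groups
need_to_know = [3, 3, 2, 4, 3, 2, 3]
talk_to_anyone = [2, 2, 1, 3, 2, 2, 1]
t_stat, df = welch_t(need_to_know, talk_to_anyone)
```

A positive t statistic indicates that the first group's mean confidence is higher; the statistic is then compared against the t distribution with df degrees of freedom to assess significance.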
These results suggest considerable heterogeneity in whether people
recognize and use the detection system, and in how much confidence they
place in its guesses, depending on the preferences people have over other
facets of their political discussion
behavior. The importance of detecting others’ political views ahead of
time is conditional on how necessary that information is for decision-
making about the conversation itself.

CONCLUSION: WHY DOES THIS MATTER?


The process of signaling and detection plays an important role in
MacKuen’s (1990) formal model of political discussion, changing the
model’s predictions about which kinds of conversations emerge with what
implication for democratic discourse (p. 84). Yet thirty years later,
while scholars have assumed that people signal and detect, there are few
systematic tests of whether they do, how they do so, or whether people are
any good at reading others’ signals. We began this chapter with a mystery:
How do people figure out the political views of potential discussants
before a conversation begins?
We took a first step in demonstrating some of the ways in which
individuals infer the political leanings of others in advance of a political
discussion. While about 27 percent of our subjects said they did not know
how or would not guess another’s views, the approximately 73 percent
who did provide an answer gave a wide-ranging set of responses, includ-
ing how others speak, dress, make a living, and otherwise engage in their
everyday lives. We also presented experimental evidence that individuals
draw consistent inferences about others’ political views based on their
first names. Finally, we introduced a host of stereotypes that individuals
hold about outpartisans. There is relatively high agreement with these
negative stereotypes, consistent with previous research and current ideas
about affective polarization in the United States. Moreover, individuals
hold these stereotypes about outpartisans they know personally, in add-
ition to outpartisan voters and outpartisan candidates.
We make no claim that this analysis is exhaustive. For instance, our
coding scheme should not be interpreted as assessing the proportion of
people who regularly or ever use each cue we explored; different research
designs would be needed to answer that question. In fact, this chapter
raises more questions than it answers. How do people learn these associ-
ations? How do they reconcile signals that may compete with one
another? What about other cues that we were not able to explore in this
chapter? Future research can explore an abundance of remaining ques-
tions about how people uncover others’ political views.
Our goal was to show that people are able to make some inferences
about potential discussion partners, so that at the moment at which
they make a decision about whether to engage in a discussion, they are
not arriving with a blank slate. Not only do they carry with them the
effects of their own predispositions, but they carry with them their
expectations about potential discussants. Many people have assump-
tions about the kind of people their political opponents are and endow
them with traits that make fruitful conversations about political differ-
ence less likely.
The process of perception also matters because people make assump-
tions about what others think about them. The notion of reflected
appraisals – also called intergroup meta-perceptions or meta-stereotypes –
can be thought of as “the expectations one has about how her ingroup is
viewed by a member of her outgroup” (Appleby 2018 p. iii). A growing
body of work in this area finds that there are a variety of situational and
individual-level factors that can shape our meta-perceptions: group iden-
tification, sensitivity to rejection from a group, intergroup conflict, and
the presence of an outgroup member, just to name a few (Sigelman and
Tuch 1997; Vorauer, Main and O’Connell 1998; Frey and Tropp 2006).
Greater assumptions of dissimilarity between one’s ingroup and outgroup
can lead individuals to rely more heavily on stereotyping. Americans
actually overestimate the negativity with which outgroup members think
about their ingroup (Appleby and Borgida, n.d.).
As we will explore later on in the book, people often have concerns
about being judged by others for their political opinions, and we speculate
that these meta-stereotypes play a role in this concern, perhaps even
causing people to actively attempt to conceal their political views from
others. For example, in the lead-up to and aftermath of the 2016 election,
a cottage industry of journalists reported on this phenomenon, which seemed
especially heightened in the context of a contentious campaign.
The emphasis of the story line was that “closet Trump supporters” felt
the need to hide their political preferences at the workplace (Ng 2017),28
at school (Ambrose 2016),29 at home (Jamieson 2016),30 in response to
pollsters (Williams 2016),31 and in Washington, DC (Fisher 2016; Green
2017).32 Compiling emails in which The Guardian readers explained why
they supported Trump, Amber Jamieson (2016) finds that some even hid
their Trump support from their spouses.33
Why is it that Trump supporters hid their preferences from others?
Presumably because they were concerned about being negatively judged
by others who disagreed. While it is easy to argue that this was a one-off
“Trump Effect,” there is nothing idiosyncratic about their behavior. Both
Republicans and Democrats realize that others can recognize their polit-
ical identities and will evaluate them based on assumptions about their
beliefs. Yet as we show in this chapter, even if people conceal explicit
signals of their political views, they are unlikely to prevent people from
drawing inferences about their views based on what they reveal about
themselves, their lifestyle preferences, and their values. All of these fea-
tures of signaling likely play a role in the decision to engage in a political
discussion, a point to which we turn in the next chapter.



5

Decision

To Talk or Not to Talk? (Stage 2)

When we last left Joe in the beginning of Chapter 4, he’d just arrived at a
neighborhood party with his wife, who had promptly walked off to go
mingle. Joe clutched his beer. In the time he’d been figuring out which
group to join at the barbeque, the duo with an earthy vibe had moved
closer to the grill and buffet table to refill their plates. That left the pair
of middle-aged men. Joe thought back to this week’s episode of his
favorite sports podcast, racking his brain for some of the factoids
discussed on the show about the MLB season. He made his way over
to the men.
They introduced themselves as Ken, from “the red brick house down the
road,” and Jack, the “one with the big truck in the driveway.” They talked
amicably for a bit about Joe’s job in the city, the adjustment to the neigh-
borhood, and the unseasonably warm weather, until finally the conversa-
tion turned to the holiday. “I think it’s great that we still have a day to
remember the true patriots of this country. Nothing has been easy for our
boys in blue lately,” Jack said. Ken nodded in agreement, mentioning how
little respect it seemed like most people had these days for police officers.
Jack then launched into a long-winded story that had something to do with
an altercation he’d observed last time he was in the city between the police
and some teenagers.
While Jack talked, Joe’s mind started racing. Why hadn’t he remembered
the Blue Lives Matter bumper sticker on Jack’s truck? He really didn’t like
where this conversation was heading and wanted to extricate himself from
the situation. But at the same time, he didn’t want to be rude and he
couldn’t afford to alienate these new neighbors and potential friends. If he
was going to excuse himself, it would be best to do so quickly, before the
political conversation really took off.


What does Joe do? Does he catch Katie’s eye, creating an excuse to wrap up
the conversation and walk away? Does he stay put, trying to navigate the
conversation toward less contentious topics but otherwise keeping his
mouth shut? Or does he stick it out, actively but carefully participating in
the conversation he would’ve preferred to avoid?

Most methods of studying political discussion mask important nuance in
decision-making related to the onset of political discussion. By focusing
only on the discussions that actually occur or on conversations with
regular political discussants in networks that have already formed, previ-
ous research excludes important dynamics that we believe are central to
the conversation process. Past studies focusing on the composition of
discussion networks argue that network structure reflects the choices
people make. To the extent that network composition deviates from the
distribution of available discussants, the networks we observe can reveal
the preferences people have for discussion. But this approach is centered
on the attitudes of discussants, an important point but an incomplete one
if there are other social considerations that affect discussant selection.
These factors – such as the balance of knowledge between discussants –
have been understudied in past work, relative to the study of disagree-
ment, but play a large role in how individuals navigate decisions related to
political discussion.
In this chapter we explore the factors that differentiate full participa-
tion in a conversation (which we explore in Chapter 7) from interactions
that lead to deflected, abbreviated, or lopsided political discussions, where
a person is physically present for a conversation but does not contribute
to it. These abbreviated or avoided interactions have been included in a
formal model of discussion (MacKuen 1990) and observed in qualitative
studies,1 but are generally overlooked in quantitative work.
Measuring a more complete range of discussion decisions – including
the absence of political discussion – is methodologically challenging, but
necessary so that we can detect systematic patterns that differentiate the
conversations that happen from those that fail to fully materialize. We
present evidence from three unique sets of studies collectively showing
that regardless of the political topic or setting, individuals prefer to
discuss politics with their closer social ties, especially those who agree
with them and have similar or lower levels of political knowledge.
The first study in this chapter, the True Counterfactual Study, analyzes
the characteristics of discussions in which individuals say they engaged,
compared to discussions that individuals say they avoided. The research
design in our second study employs a series of vignette experiments where
subjects report how they expect hypothetical characters to respond in
situations where a political conversation emerges; we manipulate features
of the context and composition of a political discussion to evaluate when
subjects anticipate that the characters would opt to derail a conversation
in some way.
The results from the first two studies suggest that the strongest factors
contributing to people’s choices are the characteristics of the other indi-
viduals in the conversation. Thus, our third study, the Name Your Price
Study, pushes further on these findings using a paradigm introduced in
Settle and Carlson (2019). Here, participants are asked to report how
much they would need to be paid to engage in a variety of discussions, a
technique that allows us to evaluate which kinds of discussions are most
and least preferable.

CONCEPTUALIZING CHOICE IN POLITICAL DISCUSSION


A political discussion ensues when someone initiates talk about politics
and one or more people respond. A person initiating a conversation
must decide which topic to raise in a particular context with a particular
set of discussants, what tone to strike, and how precisely to get the
conversation rolling. A person on the receiving end of an initiated
discussion is faced with choices about how to respond. What are his
or her options? A discussion responder who enjoys political discussion is
likely to respond conversationally, as might a person who does not mind
talking about politics conditional on the specific circumstances of the
situation. These types of people might be willing to thoughtfully disclose
their true political opinions, argue passionately, or even disparage the
other side. But what about a discussion responder who habitually avoids
political discussion, or wants to avoid this particular conversation in this
particular context?
A first potential response would be to physically extricate themselves
from the situation.2 While this may be possible under some circum-
stances, it is easy to think of times when it is either physically impossible
(on an airplane) or socially unacceptable (the Thanksgiving dinner table).
A second option would be to derail the conversation in some way,
perhaps by changing the subject.3 This avoids the social faux pas of
simply walking away but has the same intended effect of truncating the
conversation. A third option might be to stay silent. This is only appro-
priate under certain conditions – in a dyadic conversation, completely
ignoring what someone just said typically violates social norms – but this
response is available in many other situations.4 Finally, the person could
respond with a pertinent comment, commencing their involvement in a
political discussion they do not want to have. That comment, however,
might include conversational defense mechanisms, such as self-censorship
or conformity to soften any potential disagreement. We explore these
defense mechanisms in Chapters 6 and 7, but focus here on the crucial
decision about whether to engage at all.
What has been studied previously about this moment of decision? The
most thorough theoretical treatment comes from MacKuen’s (1990)
formal models of the decision to talk about politics. At the core of the
models is an argument about the cost-benefit trade-off of political talk,
where costs and benefits are calculated according to the anticipated
agreeableness of the conversation and a person’s tolerance for exposure
to oppositional views. Since its publication, dozens of scholars have cited
the models for their implication that disagreeable discussion should be
extinguished in many contexts. The model set the stage for the literature
in the 1990s and 2000s that sought to assess the extent to which disagree-
ment persists in discussion networks despite predictions otherwise (e.g.
Huckfeldt, Johnson, and Sprague 2004).
However, as far as we can tell, subsequent scholarship has not rigorously
tested the core assumptions, parameters, and strategies from the
models. In the Appendix, we walk through the similarities and differences
between MacKuen’s models and the way we conceptualize the decision to
talk, but we note the key similarities and differences here. In terms of
similarities, MacKuen’s model notes the importance of viewpoint detec-
tion, although he flips the concept to focus on the way that people might
intentionally signal their political viewpoints to others to facilitate the
decision about starting a conversation. MacKuen also incorporates the
notion of variation between individuals in their tolerance for different
types of conversations, what he calls the “expressivity criterion.” We also
think that individual differences between people affect their propensity to
decide to talk (and how to respond), and we assess a number of charac-
teristics underpinning that notion in Chapter 9.
Our studies deviate from his model in several important ways, based
on the empirical evidence that has been accumulated about political
discussion in the thirty years since he developed the model, as well as
our own findings. First, MacKuen’s model is written for dyadic conversa-
tions, whereas we explore multi-person conversations as well because of
the importance of group dynamics in behavioral decision-making (e.g.
Asch 1956; Mintz and Wayne 2016). Second, his model specifies only two
strategies (what he calls “talk” and “clam”) whereas we explore a wide
range of behavioral responses once someone decides to talk (a point to
which we return in Chapter 7). Third, his model assumes that a person
decides to talk when they anticipate more pleasure than pain from the
conversation, and otherwise stays silent. He acknowledges both that there
may be situations in which a person cannot be silent, as well as that some
people may develop more stable strategies (i.e. to always talk or always
stay silent) instead of actively monitoring the environment and potential
discussants, but these factors remain outside the parameters of the model.
We engage these factors directly, given the broad patterns we described in
Chapter 1 that people seem to talk about politics more than they desire,
and that there are some people who never talk about politics.
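The baseline rule just described (talk when anticipated pleasure exceeds anticipated pain, otherwise clam) can be written schematically as an expected-utility threshold. The notation below is ours, offered only as a reading aid; MacKuen's (1990) own parameterization differs:

```latex
\text{talk if} \quad p\,b - (1 - p)\,c > 0, \qquad \text{otherwise clam}
```

where $p$ is the perceived probability that the potential discussant agrees, $b$ is the benefit of agreeable talk, and $c$ is the cost of exposure to oppositional views, discounted for individuals with greater tolerance for disagreement. The analyses that follow can be read as complicating this threshold with social costs, group dynamics, and responses beyond the talk/clam dichotomy.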
Although subsequent research has deemphasized the choice stage of
political discussion in order to focus on the consequences of those
decisions, there has been wide recognition that shared attitudes between
discussants are an important influence on the likelihood
of political discussion. Incontrovertibly, when Americans talk about
politics, they are more likely to do so with like-minded discussion
partners (Huckfeldt and Sprague 1995; Mutz 2002b; Huckfeldt,
Johnson, and Sprague 2004; Mutz 2006; Gerber et al. 2012;
Klofstad, Sokhey, and McClurg 2013; Minozzi et al. 2020).
Americans appear to self-select like-minded discussants at a higher rate
than do the residents of other countries (Mutz 2006). Whatever lack of
consensus remains about the amount of disagreement in political talk is
largely due to discrepancies in measurement (Klofstad, Sokhey, and
McClurg 2013). Yet, while social scientists have ample descriptive data
on the patterns of homophily within networks and the (in)frequency of
political discussion with those who disagree, we do not have a full
understanding of why this occurs.
Scholars have put forth three main theories that could account for
this homophily: discussant availability, discussant selection, and social
influence. First, homophily could be driven by homogeneity in the
available supply of political discussants, particularly as the United
States becomes more geographically sorted. Second, political discussion
network homogeneity could be attributed to active choices made by the
discussants. This could be based on political views, coined political
selection (Bello and Rolfe 2014), or purposive selection (Minozzi et al.
2020); or it could be due to incidental selection, where individuals
choose their political discussants based on apolitical characteristics that
are spuriously correlated with political views (Minozzi et al. 2020).
Finally, social influence could drive the high rates of homophily we see
in networks: Political discussants are chosen for reasons completely
unrelated to politics, but over time, individuals change their political
views to match others in their networks. Questions remain about which
processes are at work in different situations. Distinguishing between
these theories of homophily is empirically challenging to say the least,
but recent work has used novel approaches and extraordinary data
collection efforts to investigate political discussion network formation
in complete networks over time. Minozzi et al. (2020) find support for
the incidental model, and Song (2015) finds similar patterns. Other
models may explain behavior for some individuals, but the leading
evidence to date generally lends support to the incidental model.
MacKuen’s model and the scholarship building on it do not adequately
acknowledge the way that social costs can moderate a person’s calcula-
tions. Other factors besides perceived agreement could affect the estima-
tion of the benefits or costs of talking politics. What else could matter? We
were guided by past literature on what is known about the characteristics
of discussants as well as the location of political discussion to identify
previous assumptions about which factors lead to the emergence of
political discussion. Probing these assumptions led us to interrogate five
additional features of potential conversations that might affect individ-
uals’ decision to participate in addition to perceived agreement with
discussants: tie strength with discussants, context of the conversation,
power dynamic between discussants, perceived knowledge level of dis-
cussants, and topic of the conversation.
The findings in this chapter explicitly test these factors using a very
different set of techniques from previous approaches. We converge on a
result reinforcing past research: The tie strength and attitudes of poten-
tial discussants are two strong factors associated with a person’s choice
to engage in a discussion. However, contrary to past work, we find that
people prefer to talk about politics with others who share their level of
knowledge, instead of seeking out more knowledgeable discussants,
which previous work has suggested is strategically advantageous (e.g.
Lupia and McCubbins 1998; Ahn, Huckfeldt, and Ryan 2014). We
found limited evidence that the topic of conversation, the social setting,
or the power dynamic had independent effects on how inviting a discus-
sion may be. We assess the factors affecting the choice to discuss politics
using three different research designs: (1) the True Counterfactual
Study; (2) vignette experiments; and (3) the Name Your Price Study.

We will review the pertinent components of each study in turn before
analyzing what we can learn about the conversations that emerge compared
to those that fail to materialize.

TO TALK OR NOT TO TALK


Our three types of studies were designed to test some or all of the factors
we just identified that could be important to the choices that people make
on the cusp of a political discussion. We begin by examining the conver-
sations that respondents say they successfully avoided before considering
the factors that lead people to deflect meaningful participation, even if
they are physically present for the conversation. We then turn to studies
that more specifically characterize individuals’ preferences and experience
in conversations with discussants of varying levels of agreement
and knowledge.

The Discussions That Were Not: The True Counterfactual Study


I make it a point to never discuss politics. It generally ends worse than
religion talk. When I hear people openly discussing it, especially when they
get heated over opposing views, I find it annoying at most and generally
I just ignore the conversation because there will be nothing to add on either
side that wouldn’t offend someone that’s there.
Thanksgiving Study participant

As we will detail more in Chapter 9, there are several individual


differences – such as interest in politics, strength of partisanship, conflict
avoidance, willingness to self-censor, and social anxiety – that affect
individuals’ inclination to engage in discussions. For the moment, to
avoid the selection bias shaping who is most likely to discuss politics,
we utilize an experimental design.5 On the CIPI I Survey, we randomized
the prompt asking our respondents to recall a previous opportunity for
political discussion: Some individuals were asked to recall conversations
they avoided and others to recall conversations in which they engaged.6
The key innovation here is to ask individuals to describe situations in
which they could have discussed politics, but actively chose not to. This
allows us to study an otherwise unobserved behavior: the avoidance of
discussion.
After participants thought about the discussion that they either
avoided or engaged in, we asked them a series of questions about the

TABLE 5.1 Agreement in discussions that were avoided and engaged in

                                              Avoided   Engaged
Everyone [would have] disagreed with me          14         7
Most [would have] disagreed with me              21        11
About half [would have] disagreed with me        43        41
Most [would have] agreed with me                 18        28
Everyone [would have] agreed with me              4        13

Note: Data collected in CIPI I Survey. Avoided: N = 1,490; Engaged: N = 1,515.

situation. We first wanted to get a sense for how many people were
involved in each situation and how much disagreement the participants
sensed (or experienced). It turns out that discussions that were avoided
had significantly more people involved than discussions that were pur-
sued. For instance, 23% of the discussions that occurred were one-on-one
discussions, whereas only 13% of the discussions that were avoided were
one-on-one.
It should be clear by this point in the book – and the extant political
discussion literature – that anticipated political disagreement is an
important decision criterion. Our evidence here further supports this
point. We asked participants to report how many people would have
agreed with them about politics had they participated in the discussion
(or how many people agreed with them about politics in the discussion
that occurred). Perhaps to no surprise, discussions that were avoided
had significantly more anticipated disagreement than the discussions in
which respondents participated. As shown in Table 5.1, only about
4% of discussions that were avoided included complete agreement,
whereas 13% of discussions that occurred included complete agree-
ment. This, once again, highlights the importance of detection:
Individuals sniff out disagreement before a conversation fully begins,
often in order to avoid it.
We also asked participants to list the topics that they [would have]
discussed. There was not substantial variation in topics between the
discussions that were and the discussions that were not. Similarly, when
we asked participants to describe their relationships with each person
who was in the discussion, the same broad patterns held across the two
prompts: Most conversations were among close ties, such as family and
friends. However, we did note that there was somewhat more disagree-
ment among acquaintances and coworkers, which is consistent with
previous research (Mutz and Mondak 1998).



TABLE 5.2 Motivation for engagement or avoidance, coded free response answers

Accuracy (Total: 16%; Avoided: 17%; Engaged: 15%)
  Avoided:
    “did not have all the facts to fully participate”
    “i didnt know much about the things being talked about in the
    discussion and so didnt participate because i was knowledgeable [sic]”
    “didnt want people to think that i didnt know what i was talking
    about [sic]”
  Engaged:
    “because i had knowledge of what was being discussed i have followed
    politics very closed for about one year and it is good to know the
    political issues at hand [sic]”
    “because i thought i would leran something new and it made me
    challenge myself [sic]”
    “i intervene with my opinion when the ignorance becomes unbearable
    and i feel the need to enlighten others of the other side of the
    story [sic]”

Affirmation (Total: 17%; Avoided: 10%; Engaged: 23%)
  Avoided:
    “i wasnt sure my opnion was actually valid [sic]”
    “worry if they will not like my opinion”
    “i was afraid of being judged because my opinion was different from
    everyone elses [sic]”
  Engaged:
    “it was important for me to have my opinion heard”
    “interesting to self evaluate myself”
    “i felt very strongly about the topic and wanted my opinion to be
    considered [sic]”

Affiliation (Total: 19%; Avoided: 28%; Engaged: 11%)
  Avoided:
    “The person was a friend of mine and I was afraid that my opposite
    opinion would hurt our friendship.”
    “Because they were family members and I didn’t want to create a rift.
    I don’t see them often so I felt it was better to just let it go
    rather than speak up.”
    “my inlaws are very set in their ways my opinion would have upset
    them and caused unneeded stress on my husbandi did not want to hurt
    any relationships [sic]”
  Engaged:
    “i wanted to be included in the group [sic]”
    “it was about the upcoming presidential election with my sister
    i wanted to know her feelings [sic]”
    “to be involved and a part of the group”

Note: Data collected in CIPI I Survey. Results are based on the full set of 2,808
coded responses; a response counts toward a category if at least one coder assigned
it to that category. Percentages do not sum to 100 because only 1,314 responses were
able to be coded into one or more of the categories. 21% of responses were
meaningful but gave a reason that did not fit our typology; 19% did not give a
reason behind their decision; 13% were uninformative (e.g. blank, missing).
Finally, we asked participants to tell us, in their own words, why they
chose to avoid or engage in the discussion. In the context of
thinking about the decision to talk or not, these responses should be
thought of as anticipatory considerations. What was going on inside
respondents’ heads when they made the decision to pursue or avoid a
particular conversation? As we described in Chapter 3, we worked
with a team of research assistants to develop a coding scheme that
mapped onto the AAA Typology we presented in Chapter 2 about the
motivations driving behavior during a political discussion: accuracy,
affirmation, and affiliation. Full details about the coding scheme
are presented in the Appendix and we present the most important
results shortly.
Overall, the most frequent answer category was affiliation consider-
ations, followed by affirmation considerations and accuracy consider-
ations. These percentages are not substantively different from one
another. However, when we analyze the results by the experimental
treatment (avoidance or participation), we see a more interesting pattern.
People were significantly more likely to list affiliative motivations when
they described conversations they avoided, and significantly more likely
to list affirmation motivations to describe conversations they pursued.
Accuracy goals were slightly more likely to be referenced in conversations
that were avoided than those that happened, but this difference is only
significant at the 10 percent level.
Altogether, the results from the True Counterfactual Study give us a
sense for the features of the conversations that could have occurred but
did not, compared to the conversations that did. Individuals tend to avoid
conversations where they anticipate disagreement, especially when they
will be in an opinion minority and especially when the conversation
involves a larger group. Individuals tend to avoid discussions when they anticipate that
they will damage their social relationships or when they sense that they
will be negatively evaluated for their political opinions, but they choose to
engage in discussions when they feel a desire to express their opinions
to others.
With these core differences between the conversations that people say
did not happen and those that did, we move on to focus on
discussions that could not be completely avoided. We use vignette
experiments to build on what we have learned, manipulating some of
these conversational features, such as the knowledge levels, power
dynamics, and strength of social ties.

https://doi.org/10.1017/9781108912495.005 Published online by Cambridge University Press


120 Decision: To Talk or Not to Talk? (Stage 2)

The Discussions That Were Deflected: The Vignette Experiments


My brother has extremely different political views than I do and he was
spouting things that made me very angry i had to change the subject because
i didnt want to say anything to hurt his feelings or our relationship.
However i have expressed my views to him on other occasions.[sic]
True Counterfactual Study participant

At my aunt’s house the other day, they started discussing current politics.
That part of the family is extremely conservative, whereas I am extremely
liberal. I did not participate in the discussion at all. I am not comfortable
with arguing or debating with people, so mostly just listened to them say
things that I did not agree with and that were flat out wrong and waited for
them to switch to a much more neutral topic.
Thanksgiving Study participant

The True Counterfactual Study is useful for making inferences in the aggregate about the unobserved discussions that never came to fruition.
But to understand the circumstances that lead to discussions that are
incompletely avoided, we need to have more control to compare features
of various conversational situations. Because it is extremely costly to
construct these social contexts and political discussions in a lab environ-
ment in which we could observe how a participant would respond –
something we do elsewhere in this book – we turn to a different research design.
The idea was to present participants with hypothetical political discussion
scenarios and see how they think the character would respond, a form of
vignette experiment.
After rigorous pilot testing, the vignette experiments presented in this
book follow the same general structure. The vignettes are written in the
third person, following a short paragraph about either John (for male
participants) or Sarah (for female participants).7 John or Sarah was
described as engaging in some kind of social encounter, such as walking
into the breakroom at work, mingling at a neighborhood gathering, or
participating in other similar situations. Importantly, John or Sarah never
raises the topic of politics; he or she stumbles into the conversation
inadvertently. We described whether the others said things that agreed or
disagreed with John or Sarah’s opinions. At the end of the vignette, one of
the others turned to John or Sarah and asked what he or she thought
about the topic they were discussing. Here’s an example from the CIPI
I Survey for a female respondent:


Sarah is at a small neighborhood party with some of her friends and acquaintances
and everyone is enjoying some snacks and good company. As Sarah mingles
through the party, she steps into a conversation with a group of people. As she
listens to the conversation, she realizes it is about the upcoming election. It quickly
becomes clear to her that they have very different political views from hers, as they
discuss their support for the candidate Sarah opposes. They sound highly know-
ledgeable and well-informed. It sounds to Sarah like they have been following the
news and campaign a lot more closely than Sarah has. As the conversation
continues, the person who seems the most knowledgeable turns to Sarah and asks
about her thoughts on the candidates.

Participants were asked to report how they thought John or Sarah would
respond, with the options covering deflection (changing the subject or
saying nothing),8 conformity (saying what the group believes, even if it is
against what John/Sarah believes), censorship (moderating what John/
Sarah believes in the direction of the group), true expression (stating what
John/Sarah truly believes), and entrenchment (stating an opinion that is
stronger or more extreme than what John/Sarah truly believes, but in
disagreement with the group).
We will return to the bulk of the analysis of these studies in Chapter 7,
when we consider how the psychological experience of discussion affects
what opinions individuals choose to express during the conversation
itself. Here, we focus narrowly on just one of the response options: the
choice respondents had to say they would derail a conversation by being
silent or changing the subject, compared to the choice to engage in a
conversation in one of four ways. We dichotomize the choice to get a
sense for which contextual factors are associated with higher rates of
deflection.
In sum, we evaluated dozens of different conversation permutations.
Across all the different contexts, approximately 20 percent of the subjects
thought that the character would deflect the conversation in some way.
We do not make a claim that aggregating across these studies captures the
actual proportion of conversations that are deflected or silenced. That
statistic would be impossible to systematically evaluate given the thou-
sands of conversational permutations that exist in the real world. But this
finding does give credence to the idea that the discussions in which people
fully participate are only a subset of the conversations that could occur.
We present in the Appendix more detailed findings about the specific
results from the full set of vignette studies but provide an overview here
about the effects of our experimental manipulations of features of the
situation. Our most consistent findings pertained to knowledge differen-
tials and tie strength. We found consistent evidence across three studies
that individuals were more likely to expect a hypothetical character to silence him or herself in a discussion with others who were more knowledgeable than in a discussion with others who were less knowledgeable.
Similarly, individuals seem to be more likely to silence themselves and
avoid participating in the discussion when they are interacting with those
they do not know as well (weak ties), compared to when the discussants
are strong ties; this is consistent with our findings from the free response
experiment.
We found mixed evidence for the effect of location: In one study we
found higher rates of deflection in the workplace compared to a neigh-
borhood social gathering, but we were not able to replicate that finding.
Our results were similarly mixed for the effect of the distribution of
opinion: Individuals do expect others to be more likely to silence their
beliefs when they are in an opinion minority, but this finding seems to be
sensitive to the balance of knowledge between the discussants. For
example, in one study, we found that both those in the partisan minority
and partisan majority conditions were most likely to think the character
would silence when the other discussants were more knowledgeable. In
fact, we find that 28% of respondents thought a character in the partisan
majority with more knowledgeable discussants would silence, compared
to 21% of those in the partisan minority with more knowledgeable
discussants. Finally, across two studies, we did not find consistent evi-
dence that power dynamics affected anticipated silencing behavior, where
power dynamics were conceptualized as a formal hierarchy in the work-
place and as an informal social hierarchy in a social setting.
In our CIPI I Survey, we built upon these patterns uncovered through a
wealth of pilot studies to conduct one vignette experiment on a nationally
representative sample, the CIPI I Vignette Experiment. As shown in the
previous vignette, we kept the character in the partisan minority and
randomly assigned the relative knowledge between the character and
the discussants. We found that 17% of participants in the condition
where the discussants were more knowledgeable thought that the charac-
ter would silence, compared to 11% in the condition where the discuss-
ants were less knowledgeable. While silencing was not the most common
expected response, as we discuss more in Chapter 7, we observe a gap in
conversation deflection based on the relative knowledge gaps between
discussants.
Pushing on this a bit more, we examined the considerations partici-
pants thought the character would make when thinking about how to
respond. For more details on how we developed and measured the
considerations, please refer back to Chapter 3. Among respondents who thought the character would silence, the most frequent consideration
selected as most important was the “concern that expressing a dissenting
opinion will damage the relationship John/Sarah has with these people”
(12%). This is a clear example of an affiliation concern – individuals who
expected the character to silence thought that maintaining a social rela-
tionship would be most important in structuring that decision. In con-
trast, the least common consideration among silencers was the
“opportunity to solidify his/her opinion” (1%), which could be viewed
as a more instrumental, political motivation that we situate in the affirm-
ation framework.
Altogether, our first two studies assess some of the factors that influ-
ence how individuals decide “to talk or not to talk.” We find that in
addition to the conversations that people avoid completely, there are
substantial opportunities for political discussion that people deflect by
changing the subject or staying quiet. Our work here is only a first step in
evaluating these situations, and there are many other questions that could
be explored. We chose to narrow down to two factors that emerge as
important from our first two studies: whether a conversation is antici-
pated to be disagreeable and the knowledge differentials between
participants.

The Discussions That Were Costly: The Name Your Price Study
I hate getting into conversations about politics.
True Counterfactual Study participant

In this section, we describe our Name Your Price paradigm in which we invite participants to indicate how much they would need to be compensated to participate in a five-minute discussion featuring several characteristics.9 The Name Your Price design serves as a nice complement to the
vignette experiments. In the vignette experiments, we have the luxury of
causal identification over the socio-contextual features of the conversa-
tion because we randomly assigned participants to consider discussion
behavior in different contexts. However, by doing so, we lack within-
subject variation in preferences. Moreover, we have to make the assump-
tion that the anticipated behavior of the character in the vignette is similar
to the behavior the participant would actually exhibit. In the Name Your
Price design, we asked participants for feedback on how much they would
want to be paid to complete a discussion under various conditions. While we do not observe whether the participants would actually engage in or
opt out of the discussions, we can get a sense for their preference order-
ings over a wide range of conversation configurations.
We manipulated three main features of the discussion: the presence and
type of disagreement, the knowledge levels of the other participants, and
the topic of the conversation. Over the course of several studies, we
operationalized these features in different ways. Disagreement was
depicted by candidate support, partisanship, or stated disagreement.
Knowledge levels were depicted by describing participants as knowledge-
able or unknowledgeable, as well as (in a student sample) by describing
them as discussants recommended (or not) by a faculty member based on
their superior classroom performance. We compared not only a range of
political topics – presidential primary candidates, abortion, immigration –
but also compared political to apolitical conversations, including pop
culture, contentious campus issues, app-enabled rideshare services (Uber
and Lyft), and Pokémon Go.
Our strongest findings pertain to the manipulations related to disagree-
ment. Across three studies conducted on a student sample and MTurk
samples, and reported in Settle and Carlson (2019), we find consistent
evidence that individuals demand more compensation to discuss politics
with outpartisans than with copartisans. We found that students
demanded on average $6.41 to have a five-minute discussion about a
contentious political issue with copartisans, but $10.70 – about two-thirds
more – to have the same discussion with outpartisans. Another study,
conducted on MTurk, described disagreement in terms of candidate
support instead of partisanship. Participants were asked to name their
price to have a five-minute discussion about various political topics with a
group of Clinton supporters, Sanders supporters, a mix of Clinton and
Sanders supporters, Trump supporters, Cruz supporters, Rubio support-
ers, a mix of Trump, Cruz, and Rubio supporters, and a mix of all five
candidate supporters.10 While we found no differences in price demanded
based on the specific political issues to be discussed, we found a consistent
pattern where Democrats demanded more to discuss politics with
Republican candidate supporters, and Republicans demanded more to
discuss politics with Democratic candidate supporters.
Importantly, our third Name Your Price study revealed an important
nuance in the role of anticipated disagreement in conversational prefer-
ence. This study, also conducted on MTurk, shows that individuals
demand more compensation to discuss politics with non-like-minded
others when that difference is described in terms of partisan identity, as compared to the level of disagreement about the topic. This suggests that
the costs of conversation with an outpartisan encompass more than just
the costs of disagreement about the topic at hand. While our study did not
probe further on this point, we think it reflects our findings from the
previous chapter about the assumptions people hold of outpartisans.
Our results related to knowledge differentials build on our findings
from the first two sets of studies in this chapter. Although we did not code
at this level of specificity, many of the subjects in the True Counterfactual
Study who described a conversation they avoided mentioned they were
concerned about being evaluated by others for their knowledge level. The
vignette studies also revealed the importance of knowledge differentials:
Subjects expected higher rates of conversational deflection when the
character was described as being less knowledgeable than the others in
the conversation.
In the original MTurk Name Your Price Study, we found that partici-
pants generally demanded more compensation to discuss politics with those
who were more knowledgeable than to discuss politics with those who
were less knowledgeable, consistent with our previous results. However,
the pattern was different in our student sample, which was composed of
highly educated, politically engaged students. Among this nonrepresentative
sample, we find that subjects demand slightly more to discuss politics
with others who are unknowledgeable or ignorant.11 Why?
In follow-up work, it seems that individuals simply find it annoying and
frustrating to discuss politics with those who are ignorant, even if they
might be more comfortable doing so. This is likely to be wrapped up in
preferences for agreement, too. Recall that in Chapter 4, we showed that
individuals consider outpartisans to be ignorant, ill-informed, and ill-
equipped to process political information. These themes emerge in the free
response studies in this chapter, as well. For example, one study participant
described a recent political discussion as: “With my step father. We dis-
agree on virtually everything. He is an extreme right wing bible thumping
republican. I am a moderate. Therefor [sic] he thinks I am stupid, and
I think he is a hypocrite.” Another participant echoed the point of avoiding
discussing politics because of knowledge asymmetries by writing: “I don’t
talk politics with anyone. Ever. I keep my opinions to myself because
people are stupid and I don’t want to discuss it with stupid people.”
Finally, also consistent with the other studies in this chapter, we did
not find that the political topic of the conversation was an important
factor driving respondent preferences. However, we note here a point we
will return to in Chapters 8 and 10: Subjects demanded more money to talk about even apolitical topics, such as pop culture, with those with
whom they disagreed about politics (Settle and Carlson 2019). Thus,
conversational avoidance with disagreeable others may extend beyond
dodging conversations about politics, a finding that fits with the evidence
we will present later in the book that people avoid other sorts of social
interactions with those who do not share their political views.


Conclusion

We covered a lot of ground in this chapter as we unpacked the Decision
stage of the 4D Framework. In Table 5.3, we provide an overview of our
findings. What are the key patterns? Conversation is most likely to emerge
among discussants who agree with one another, especially if they share a
political identity. Not only were a greater proportion of the avoided
conversations disagreeable in the True Counterfactual Study, but we also
discovered a higher rate of conversational deflection in disagreeable con-
versations in the vignette studies. In the Name Your Price studies, subjects
demanded more money to have disagreeable conversations.
Perceived knowledge differentials between discussants also affect
which discussions emerge. Not only does the concern about being nega-
tively evaluated lead people to avoid discussion, but people are more
likely to deflect the conversation when they perceive that they are at a
knowledge disadvantage. In general, participants tend to demand more to
discuss politics with those who are perceived as more knowledgeable.
However, among more knowledgeable samples, the opposite is true.
Largely driven by a desire to avoid feeling annoyed or making the other
person uncomfortable, more knowledgeable respondents demand more to
talk politics with their less knowledgeable peers.
Some readers might find this chapter redundant, telling us something
that prior research had already demonstrated. We
believe that we make four main contributions.
First, the results from the True Counterfactual Study suggest that
political conversation happens more frequently in small groups than in
two-person interactions. Yet, one of the most common methods to date of
soliciting and analyzing information about people’s political discussion
experience is to ask them about their discussion partners. The mismatch
between the kinds of conversational dynamics to which people are
exposed, and the way we measure that exposure, suggests that our focus
on dyadic disagreement (however measured) or the overall composition
of a person's discussion network systematically misses what appears to be
a very prevalent experience: group interaction. The dynamics present in
group interactions are significantly more complicated than in dyadic
interactions. For example, a focus on group conversation raises a crucial
insight about people's exposure to disagreement: The True
Counterfactual Study suggests that people are especially likely to avoid
conversations when they are in the opinion minority. It simply is not
possible to be in an opinion minority in a two-person conversation.
While we can identify people who are in opinion minorities in their
discussion networks using conventional measurement approaches, we
would miss the group dynamic that gives the opinion majority its power
during a discussion. Similarly, some conversational defense mechanisms,
such as silencing, might be easier to use in a group conversation. In a
dyadic conversation, it would be awkward to simply not respond, but
with more than two people, an uncomfortable person trying to avoid the
conversation can more easily hide in silence.

Table 5.3. Summary of results about Stage 2 (Decision)

Disagreement
  True Counterfactual Study: Conversations less likely to occur when there is an imbalance of opinions.
  Vignette Experiments: Silencing more likely when character is in opinion minority.
  Name Your Price Studies: Conversation is costliest when disagreeable, especially when disagreement is framed in terms of identity.

Knowledge
  Vignette Experiments: Silencing more likely when character is at an informational disadvantage.
  Name Your Price Studies: Conversation is costlier with knowledge differentials.

Power
  Vignette Experiments: Mixed evidence that silencing more likely when character is less powerful.

Tie Strength
  True Counterfactual Study: Conversations less likely to occur between weaker social ties.
  Vignette Experiments: Silencing more likely when character is interacting with weak ties.

Context
  True Counterfactual Study: Conversations less likely to occur in larger groups, but the vast majority of conversations are in small groups, not dyads.
  Vignette Experiments: No evidence that there is more silencing in the workplace compared to social situations.

Topic
  True Counterfactual Study: Topic does not seem to be systematically related to the emergence of conversation.
  Name Your Price Studies: Even nonpolitical topics are costlier when discussed with disagreeable others.

Note: Entries describe the results of the study pertaining to the relationship between the listed factor (e.g. disagreement) and the emergence of conversation. Factors with no entry for a given study represent relationships that were not tested in that study.

Second, decades of previous work suggest that discussion is more
common among people who agree with each other. However, the work
in this chapter demonstrates that the pattern detected in previous litera-
ture – that people tend to talk with like-minded close social ties – is not
simply an artifact of the way political scientists have measured political
discussion, nor is it entirely a consequence of the available supply of
discussants. Our findings, derived from an entirely different set of
methods, also converge on the notion that these are the conversational
situations in which people are most likely to meaningfully participate, not
deflect, in a discussion. We contribute to the debate over the active nature
of discussant selection by showing that even when there are alternatives,
most people prefer these kinds of discussions. Moreover, concerns about
social relationships (affiliation) seem to be driving people’s decisions to
opt out of conversations, while interest in self-expression (affirmation)
seems to be driving participation. The evidence presented in this chapter
adds empirical grit to our assumptions that individuals actually prefer to
discuss politics with others who agree with them, and that these active
choices can contribute to the network homogeneity we observe.
Third, people might have strong preferences over whether and with
whom they engage in a political discussion, but the True Counterfactual
Study clearly demonstrates that they do not always get to act on those
preferences. Lab experiments in the next chapter will shine light on what
it feels like to be trapped in a conversation that someone else initiates.
Sometimes, people find themselves in a political conversation with no way
out but to suffer through it. In Chapter 7, we will return to this idea and
describe some of the ways in which individuals “grin and bear it,” such as
censoring the opinions they share with others or fully conforming to the
group’s opinions to avoid disagreement (Suhay 2015; Carlson and Settle
2016; Levitan and Visser 2016).
Finally, we have evidence to suggest that people prefer discussions with
people of a similar knowledge level to themselves. In other words, in
addition to homophily in political views, we might also expect homophily
in knowledge levels in political discussions. We elaborate more on this in
the concluding chapter to the book, where we consider the implications
for the health of a democracy if people self-sort by knowledge level into
political conversations with agreeable others. This pattern also directly
challenges "opinion leadership" theories of democracy that assume
people want to seek out others who know more about politics as a
shortcut to help them make informed vote choices.
In the next chapter, we turn to focus on what people feel on the cusp
of and during a political conversation, using data from two laboratory
studies where we collected psychophysiologically informative data. The
constraints of a lab experiment prevented us from exploring the full
range of contextual and configural factors that we explored in this
chapter. We narrow our focus to the two factors that seemed to matter
most: the presence of disagreement, and the relative knowledge level
of discussants.



6

Discussion

The Psychophysiological Experience of Political Discussion (Stage 3)

Joe stares at his plate, mixing together his mashed potatoes and the gravy
from the roast. He takes another sip of wine, trying to relax. He’d been
dreading this dinner all week. He’d paid enough attention to the news cycle
in recent days to know that his in-laws, Frank and Susan, were going to be
fired up and ready to pick a fight. Katie fell far from the metaphorical tree,
and her politics were about as different as could be from her parents’
opinions. It didn’t help that in recent years they’d come to take the incessant
chatter of cable news pundits as the absolutely infallible truth. Part of what
Joe loved about his wife was her willingness to stand up for her viewpoints
and counter her parents’ sound bites with facts. He always nodded in
encouragement when she spoke and frequently put his arm around the back
of her chair to try and signal that they were united in their beliefs. But he
still felt a pit in his stomach when he was inevitably put on the spot to share
his opinion.
At any moment, his father-in-law would pull his classic move, turning
toward Joe and intimating in a vaguely patronizing way that Joe shared
his opinion, before looking him in the eye and asking directly. Joe felt his
father-in-law turn in his direction. "Talk some sense into that daughter of
mine, Joe. You're the man of the house," he said, chuckling as his comment made
his daughter roll her eyes. "Surely you agree with us, don't you?"

To this point, we have focused on the psychological processes and decisions that lead up to a political discussion. We have shown that many
individuals prefer to guess potential discussants’ views before engaging in
the discussion, and that they use political and apolitical cues to do so. We
also have shown that anticipation of features of the discussion itself – such
as issue disagreement, partisan identity clash, or knowledge differentials –
can lead individuals to be wary of political talk.


https://doi.org/10.1017/9781108912495.006 Published online by Cambridge University Press



Suppose Joe has weighed the pros and cons of engaging in the discus-
sion and chooses to reply to his father-in-law with something relevant
instead of staying silent or deflecting the conversation. What happens
next? Political science research examining the actual dynamics of political
discussion is scant. Most of what we know is learned from asking people
to characterize their conversations or discussion partners, and then study-
ing associations between these features, such as the amount of disagree-
ment or the gender composition, and what happens afterward. Do
people's attitudes change? Do individuals become more participatory?
Do participants become more tolerant of other views? With the exception
of some research on formal deliberation (e.g. Mendelberg and Karpowitz
2014), qualitative focus groups (Conover, Searing, and Crewe 2002;
Conover and Searing 2005), and limited ethnographic research on large,
regular discussion groups (Cramer 2004) or social groups (Eliasoph
1998), political scientists do not know much about what people actually
say or how they act, let alone what motivates those choices.
In this chapter and in Chapter 7, we aim to “open the black box” on
political talk. We assess the psychophysiological, psychological, and
behavioral dynamics of actively engaging in a political discussion. We
begin by exploring what political discussion feels like. Our goal here is to
characterize what people experience as they anticipate and participate in a
political discussion. We focus on how situational factors, such as dis-
agreement and knowledge asymmetries, affect these experiences. In the
analyses in this chapter, we let participants “speak” to us directly, using
psychophysiological measurement to capture what they feel.

The Psychophysiological Anticipation of Discussion


I had a political discussion at church recently. It got pretty heated and that
made me feel uncomfortable. Everybody had the same view point [sic] but it
made me nervous that someone was lying just so they wouldn’t have to
argue. It made my palms sweat, it was extremely nerve wracking.
Thanksgiving Study participant

In the previous chapter, the True Counterfactual Study, vignette experiments, and Name Your Price studies all pointed to the same general
pattern: Individuals prefer to avoid discussing politics with those who
disagree, especially those whose knowledge levels do not match their
own. However, in each of these studies, we have failed to capture what
people actually feel as they anticipate discussions with individuals from
different political stripes or knowledge levels. We argue that psychological experiences preceding and during discussion shape behavior both
during and after the conversation. We use a different research design – a
lab experiment – to better understand the psychophysiological responses
these experiences can evoke.
We are not the first to assert that political discussions can be uncom-
fortable, upsetting, or distressing, but previous work has not cleanly
tested this assumption. For example, in Hearing the Other Side, Diana
Mutz (2006) writes that “few claim that conversations across lines of
difference are comfortable or particularly enjoyable” (p. 55), stating a
conventional wisdom but not providing empirical evidence. Similarly,
many of the personality-based theories for political discussion network
homogeneity assume that disagreement makes some people more uncom-
fortable than others (Ulbig and Funk 1999; Gerber et al. 2012).
We want to measure this discomfort more directly, and to do so, we pair psychophysiological measurement with self-report data about emotional
experience during an anticipated political discussion. As introduced in
Chapter 3, behavioral physiology research is premised on the idea that we
biologically respond to stimuli in our environments and that these physio-
logical responses can inform our subsequent behavior. The experience of
psychophysiological arousal in response to political discussion – a racing
heart or sweaty palms – gets factored into a person’s calculations about
the costs and benefits of future discussions. For instance, social psycholo-
gists suggest that the emotional and psychophysiological experience of
interracial interactions shape decisions about future interactions
(Trawalter, Richeson, and Shelton 2009), and people who had past
interracial contact experienced faster psychophysiological recovery after
a stressful interracial interaction (Page-Gould, Mendes, and Major 2010),
suggesting a feedback cycle between psychophysiological response and
interpersonal interaction patterns. This perhaps indicates that if individ-
uals have more psychophysiologically uncomfortable experiences discuss-
ing politics with their outgroup, they might be more likely to select
discussion partners who share their views in future interactions.
As we previewed in Chapter 3, our Psychophysiological Anticipation
Study was designed to answer two questions. First, how do individuals
psychophysiologically respond to anticipating a political discussion? We
answer this question by evaluating individuals’ psychophysiological
responses when anticipating a political discussion as compared to watch-
ing others engage in contentious social interactions in video clips. Given
previous research demonstrating that exposure to incivility on video can


be psychophysiologically arousing (Mutz and Reeves 2005), this serves as a useful benchmark against which to compare responses to anticipating
an actual discussion experience. We hypothesized that people would have
stronger psychophysiological responses to being told they had to talk
about politics than simply watching depictions of contentious politics
on a screen. Second, we wanted to assess whether situational factors,
such as disagreement and knowledge, affected levels of psychophysio-
logical response. Based on the findings from our other studies, we
expected that anticipated discussions with disagreeable, knowledgeable
others would provoke stronger psychophysiological responses than other
kinds of conversations.
Our experiment had both a within-subjects component – to compare
the reaction to watching videos versus anticipating a political discussion –
as well as a 2x2 between-subjects component – in which participants were
randomly assigned to anticipate a discussion with someone who either
shared their partisan identity or identified as a member of the other
political party, and who was either high or low in political knowledge.

Comparing Psychophysiological Responses to Videos and Discussion Anticipation

We begin by analyzing psychophysiological responses to each stimulus
type. Some of the earliest psychophysiologically informative research in
political science examined responses to watching contentious and uncivil
interactions (Mutz and Reeves 2005). How does that experience compare
to the anticipation of a political conversation? Subjects had much higher
increases in heart rate and electrodermal activity in response to the
discussion stimulus than in response to the contentious video stimuli.1
Figure 6.1 shows the average change in heart rate, measured in beats per
minute, for the political videos, apolitical videos, and the initial discussion
stimulus. Watching contentious political videos did not lead to any stat-
istically significant increase in heart rate, nor did watching contentious
apolitical videos. But there is a statistically significant and large response
to the discussion treatment.2
Nearly 90 percent of subjects experienced an increased heart rate at
some point during the discussion treatment. Upon initially being told they
were going to discuss politics, heart rate increased by 1.5 beats per minute
on average; heart rate increased by 7.8 beats per minute on average over
the total course of the discussion treatment as subjects learned more
details about the anticipated conversation. Not only is this statistically


FIGURE 6.1. Average change in psychophysiological response to video and discussion stimuli
Note: Average change in heart rate (panel A) and electrodermal activity (panel B)
in response to each stimulus: political videos, apolitical videos, and discussion
prompt. Change is measured between the stimulus period and the ISI (the blank
screens that allow participants to come back to baseline between stimuli)
immediately preceding it. Vertical lines represent 95 percent confidence intervals.
The discussion stimulus always came last, but the order of political and apolitical
videos was randomized. See Appendix for more details on sample size and
measurement.

significant, but substantively, Kulas (2017) reports that during a panic attack, one might experience an increase in heart rate between 8 and 20
beats per minute; 44.5 percent of our subjects experienced a heart rate
increase greater than or equal to 8 beats per minute. Thus, for nearly half
our participants, anticipating a political discussion was associated with a
racing heart that approaches the feeling of a mild panic attack.
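The change scores behind these figures lend themselves to a brief worked example. The sketch below is ours, not the authors' analysis code; the heart-rate samples and function name are hypothetical, and it only illustrates the calculation described in the note to Figure 6.1 (mean response during a stimulus minus mean response during the inter-stimulus interval that precedes it).

```python
# Illustrative sketch (not the authors' code) of the change score used in
# Figure 6.1: mean reading during a stimulus minus mean reading during the
# preceding ISI. All sample values below are hypothetical.

def change_from_isi(stimulus_readings, isi_readings):
    """Mean stimulus reading minus mean reading during the preceding ISI."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(stimulus_readings) - mean(isi_readings)

# Hypothetical heart-rate samples, in beats per minute
isi_hr = [72.0, 71.5, 72.5]          # blank-screen baseline period
discussion_hr = [78.0, 80.5, 81.5]   # while anticipating the discussion

delta = change_from_isi(discussion_hr, isi_hr)
print(delta)  # 8.0 -- at the low end of the panic-attack range cited above
```

A subject with this profile would fall into the 44.5 percent of the sample whose heart rate rose by at least 8 beats per minute.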
Figure 6.1 also shows the same patterns for electrodermal activity
response, measured in microsiemens. Mutz and Reeves (2005) found that
watching uncivil political discourse among elites can lead to increased
electrodermal activity, and we also find statistically significant but sub-
stantively small increases in electrodermal activity in response to both
political and apolitical videos. However, that effect pales in comparison
to the effect of being told that a political discussion is imminent. It is
difficult to compare the magnitude of EDA response across samples and
stimuli, and different scholars use different approaches to calculate the
effects of their stimuli (Settle et al. 2020). However, within our own study,
we can assert that the EDA response to the discussion stimulus was at


least four times as large as the EDA response to the videos, even though
the videos preceded the discussion stimulus and therefore we might expect
attenuated response to the discussion prompt, due to habituation.

Psychophysiological Reaction to Discussion Composition


The answer to our first research question shows that individuals did
indeed have a substantial psychophysiological response to the idea of
engaging in a political discussion, much more so than simply observing
others experience contentious interactions. But, did participants react
differently conditional on the additional information they received about
the characteristics of their discussants?
The discussion portion of the experiment was divided into three
sections. First, a message appeared for thirty seconds informing the subject
that he or she would be participating in a political discussion with another
subject, and that they had a few minutes to prepare for the conversation.
The other “participant” was identified vaguely as another member of the
subject pool. A second screen (also thirty seconds) then appeared telling
the subject more about his or her supposed discussion partner. Specifically,
participants learned their discussion partner’s partisan affiliation and pol-
itical knowledge level based on the study pretest. Participants were ran-
domly assigned to a discussion partner who was a copartisan or an
outpartisan and who was high or low in political knowledge. Finally, a
third screen appeared stating that their discussion would include three
potential topics: Obamacare, tax cuts, and abortion. The participants were
asked to mentally prepare to express and defend their positions on any of
these topics. After two minutes, the proctor entered the room to tell the subject that the discussion partner was not able to attend, and that he or she therefore did not have to complete the discussion.
Figure 6.2 shows the results by treatment group for each of the three
stages of the discussion treatment, for heart rate (left panel) and electro-
dermal activity (right panel). Only the pattern in the heart rate data
conforms to our expectations. Note that the mean changes in heart rate immediately following the discussion stimulus (being informed that they will be having a discussion) do not show remarkable differences between treatment groups at all. This is to be expected because all
participants were treated in the same way. All treatment groups’ mean
heart rates increase when subjects are told information about their dis-
cussant. This could reflect the fact that both positive and negative emo-
tions are associated with increases in heart rate. We see the outpartisan


FIGURE 6.2. Average change in psychophysiological response to discussion stimuli
Note: Mean change in heart rate (left panel) and electrodermal activity (right
panel) relative to baseline at each point of the discussion stimulus. “Discussion”
represents when participants were simply told that they would be having a
discussion; “Partner” refers to when they learned the partisanship and knowledge
level of their discussant; and “Topic” refers to when they learned the three
potential discussion topics. Vertical lines represent 95 percent confidence
intervals. See Appendix for more details on sample size and measurement.

treatment groups diverge in average heart rate increase when we expect them to – when subjects learn the contentious nature of the conversation they are slated to have.
We find strong evidence for a main effect of the outpartisan treat-
ment on heart rate: Subjects anticipating a discussion with an out-
partisan had bigger increases in heart rate than subjects who thought
their discussion would be with a copartisan. This finding holds in a
regression model where we control for gender, race, partisanship, and
political interest, and holds whether we use all subjects in the analysis
or exclude subjects with lower quality physiology data.3 We do not
find an effect of disagreement on electrodermal activity, nor do we find
evidence for a main effect of the knowledge treatment on heart rate or
electrodermal activity.
Additionally, we find suggestive evidence in favor of our hypothesis of
an interaction effect between treatments, such that subjects should have
greater psychophysiological response to knowledgeable outpartisans, but
only for changes in heart rate, not electrodermal activity.4 This finding is
robust to the inclusion of covariates in the regression model, but only at


the maximal level of exclusion based on the quality of the subjects’ data.
This tentative result could reflect the subtleties we discussed in the Name
Your Price Study. Individuals might be more comfortable discussing
politics with less knowledgeable outpartisans, which would be detected
as smaller psychophysiological responses here, even if they are more
annoyed while doing so.

Emotional Response
Heart rate and EDA data can give us a sense for the magnitude of an
individual’s psychophysiological experience but cannot provide a defini-
tive answer as to the valence of a person’s emotional response. For
instance, an increase in heart rate could indicate anxiety or stress, but it
could also signal excitement or enthusiasm. In an effort to better under-
stand the valence behind what individuals experience, we complement the
psychophysiological data with self-report measures of participants’ emo-
tional responses.5
After the three-part political discussion stimulus, participants were
asked to indicate which emotions they experienced when they learned
they were going to have a political discussion. Overall, 48% of the sample
reported one or more negative emotions, while 37% reported one or more
positive emotions.6 A greater proportion of the subjects in the outpartisan
treatment group experienced negative emotion compared to the coparti-
san group (60% vs. 43%), but the knowledge level treatment did not have
a significant main effect on overall positive or negative emotion.

TABLE 6.1. Mean emotional response, treatment main effects

             Outpartisan  Copartisan  p-value  High Knowledge  Low Knowledge  p-value
Angry               1.41        1.24     0.18            1.30           1.33     0.85
Annoyed             2.06        1.69     0.03            1.76           1.97     0.21
Anxious             3.39        3.00     0.04            3.21           3.05     0.37
Motivated           3.07        2.64     0.02            2.86           2.79     0.71
Happy               2.23        2.26     0.86            2.24           2.26     0.91
Relieved            1.49        1.53     0.79            1.48           1.55     0.56

Note: Data come from the Psychophysiological Anticipation Study. Subjects responded to the question "[h]ow did the idea of having a political discussion with someone make you feel? Please select all that apply and indicate the strength of your response." In a grid-style question, subjects were asked to respond to each emotion listed in the rows on a five-point scale (1 = weak, 5 = strong). See Appendix for more details on sample size and measurement.


Table 6.1 presents the mean emotional response of participants in each treatment group for each emotion we examined. Overall, anxiety was the
most intensely experienced emotion, followed by motivation. While there
are no significant differences based on the knowledge treatment, subjects
in the disagreement condition did report both more annoyance and
anxiety but also more motivation.
Was there an interaction between the treatments, such that those who
were told they were going to discuss politics with a knowledgeable out-
partisan had the strongest emotional response? No. Regressions modeling
the interactive effects of the treatment do not support this expectation.
While there are interesting substantive variations in the emotional experi-
ence across the four treatment groups (see the Appendix for details), these
differences are not statistically significant.
The psychophysiological evidence paired with the exploratory analysis of self-reported emotion suggests that the anticipation of political discussion is an emotional experience, potentially shaped by concerns specific to the nature of the conversation anticipated. People have a stronger psychophysiological response to just the thought of talking about politics than
they do to watching contentious video clips, and disagreement activates
not only stronger negative emotions, but for many people, an increased
heart rate as well. Does this psychophysiological activation continue
throughout an actual conversation? We endeavored to find out in a
second psychophysiologically informed study: the Psychophysiological
Experience Study.

THE PSYCHOPHYSIOLOGICAL EXPERIENCE OF DISCUSSION


We used this study to push further on two key questions that emerged
from our analyses in other studies. First, are there differences in psycho-
physiological response between agreeable and disagreeable conversations,
as our previous psychophysiological lab study suggested? Second, what
kind of disagreement matters: the clash of political identities, the clash of
disagreeable opinions, or a person’s perception that they disagree with
their discussants (regardless of the veracity of their perception)?7 The
design in our first lab study did not cleanly separate these potentially
different sources of discomfort.
In addition to the core questions driving the design of the study, we
were also interested in capturing more exploratory ideas that had
emerged from our research. How do people communicate their polit-
ical identities to others, when instructed? To what extent do people


perceive disagreement where it exists? Does a person's own discomfort during the discussion affect their perception of their discussant's
experience? We will tackle some of these questions in this chapter,
and some in the next.

Research Design and Rationale


As we described in Chapter 3, we conducted a second study in the fall of
2015 with undergraduate student participants. Two participants came to
the lab at a time, were outfitted with equipment allowing us to measure
their heart rates and electrodermal activity, and were situated in a room
together in front of a television screen that delivered instructions to them.
At the beginning of the discussion, the participants were asked to talk
about their favorite course that semester. They were then instructed to
disclose to each other their partisanship and whether they were an in-state
or out-of-state student. Finally, they were told they were going to have a
discussion and proceeded to discuss a total of four issues in a randomized
order. (For more information about the topics discussed, refer to
Chapter 3 and the Appendix). After the discussion concluded, partici-
pants completed a brief post-test survey in which they reflected on the
discussion experience and reported their opinions on these issues.
Figure 6.3 shows the average heart rate change and electrodermal
(EDA) change for each of the seven segments of the protocol where
subjects interacted with each other. It shows significant increases in heart
rate across six of the seven discussion segments, but a more mixed pattern of results for electrodermal activity, with changes largely indistinguishable from zero.
Our goal was to assign people to a treatment of “agreeable” or
“disagreeable” discussions to see if people who had disagreeable discus-
sions reported more negative emotions, experienced more discomfort,
and displayed more psychophysiological reactivity. We operationalized
our concept of disagreement in three ways: partisan identity clash, actual disagreement on the issues, and perceived disagreement on the
issues. We elaborated on our rationale for issue selection in Chapter 3,
but we settled on four issues covering both political and nonpolitical
topics that were pertinent to our college-aged sample: whether the
difference between in-state and out-of-state tuition is fair, whether the
university should require standardized test scores for application, if
government education policy puts too much emphasis on standardized
testing in high school, and whether undocumented immigrants who


FIGURE 6.3. Psychophysiological response throughout Psychophysiological Experience Study
Note: Mean change in heart rate (left panel) and electrodermal activity (right
panel) relative to baseline at each segment of the Psychophysiological Experience
Study. Each bar represents a segment of the study: Intro represents when subjects
were asked to introduce themselves and discuss their favorite class; Party
represents when they revealed their partisan identity; State represents when they
revealed whether they were an in-state or out-of-state student; WM Tuition
represents when they discussed whether it was fair for out-of-state tuition to be
higher than in-state tuition; WM Testing represents when they discussed whether
the college should require standardized testing as part of admissions; Testing
represents when they discussed whether national education policy should include
standardized testing; and Immigrant Financial Aid represents when they discussed
whether undocumented immigrants should be eligible to receive financial aid.
Vertical lines represent 95 percent confidence intervals.

attended high school in the United States should qualify for financial aid
from the government to attend college. In some analyses, we combine all
four issues together.
On average, we expected that discussions where the subjects had
clashing partisan identities would have higher levels of issue disagree-
ment, and as a result, they would perceive more disagreement. These three
measurements are obviously not an exhaustive list of the way disagree-
ment could be operationalized, but they fit nicely with past operationali-
zations (Klofstad, Sokhey, and McClurg 2013). Moreover, we report in
Carlson and Settle (2016) that the way someone’s opinion is elicited does
not seem to affect people’s level of reported discomfort. Thus, we focused
on the configuration of discussion partners or opinions as opposed to


how disagreement was made apparent, especially since there were no confederates in the study to steer the tone of the conversation itself.
We test two core hypotheses, one of which can be examined with several
different self-reported, psychophysiological, and behavioral dependent
variables. Our hypotheses are outlined in Table 6.2. The first hypothesis
relates to the moment in which subjects make their political identities
known to each other: We anticipate that subjects whose partisan identities
do not align will be more psychophysiologically responsive to sharing their
identity, compared to subjects who are copartisans. The second hypothesis
tests the idea that people have stronger psychophysiological response to
disagreement, whether that is based in identity clash, actual disagreement
on the issues as measured on the pre-survey, or the subjects’ perception that
they disagree. We have two psychophysiological dependent variables for
each hypothesis: subjects’ change in heart rate (measured in beats per
minute) and electrodermal activity (measured in microsiemens). These are
measured for each segment of the treatment. Here, we report results using
the minimal exclusion criteria for psychophysiological data quality, but we
report results with varying degrees of exclusion in the Appendix.
In addition to the psychophysiological measurements, we capture two
sets of self-report measurements. The first is a measure of self-reported
discomfort for each discussion topic. We ask subjects to evaluate their
own discomfort, as well as their partner’s discomfort. Since these meas-
ures are captured for each topic, they can be analyzed for all three forms
of disagreement at the segment level (i.e. each topic). The second set of
self-report measures relates to the overall emotional experience of the
conversation. We first asked subjects if they experienced any positive
feelings about the chance to have a conversation, and then followed up
with a list of the opportunities that could have led to positive emotion. We
then asked subjects if they experienced any negative feelings about the
chance to have a conversation, and then followed up with a list of the
concerns that could have led to negative emotion. These two measures are
captured at the conversation level, instead of the topic level. Thus, for the
analysis between these measures and issue disagreement and perceived
disagreement, we aggregate total levels of actual and perceived disagree-
ment across all four issues during the conversation.
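As a hypothetical illustration of this aggregation step (our own sketch; the function name and data are invented, not taken from the study), topic-level disagreement flags collapse to a conversation-level total like so:

```python
# Illustrative sketch (not the authors' code): collapsing topic-level
# disagreement indicators to a conversation-level total across the four
# issues discussed. The data below are invented.

def conversation_disagreement(topic_flags):
    """Total number of the four topics on which a dyad disagreed."""
    assert len(topic_flags) == 4, "one flag per discussion topic"
    return sum(topic_flags)

# e.g., a dyad that disagreed on tuition and immigrant financial aid only
print(conversation_disagreement([1, 0, 0, 1]))  # 2
```

The same collapsing applies to perceived disagreement, summing each subject's topic-level reports across the conversation.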

Partisan Clash
The results from the Name Your Price Study in Chapter 5 suggest that
people were less inclined to talk with outpartisans about both political


TABLE 6.2. Hypotheses in Psychophysiological Experience Study

Partisan Clash
  Identity Revelation: Subjects whose partisan identities clash will experience increased psychophysiological activation when revealing their partisan identity, compared to subjects whose identities do not clash.
  Political Discussion Announcement: Subjects whose partisan identities clash will experience increased psychophysiological activation when told they will have a political discussion, compared to subjects whose identities do not clash.
  Discussion of Topics: Subjects whose partisan identities clash will experience increased psychophysiological activation, compared to subjects whose partisan identities do not clash.

Issue Disagreement
  Identity Revelation: NA
  Political Discussion Announcement: NA
  Discussion of Topics: Subjects whose issue opinions do not match on the pre-survey will experience increased psychophysiological activation, compared to subjects whose issue opinions are the same on the pre-survey.

Perceived Disagreement
  Identity Revelation: NA
  Political Discussion Announcement: NA
  Discussion of Topics: Subjects who report higher levels of perceived disagreement will experience increased psychophysiological activation, compared to subjects who report lower levels of perceived disagreement.

and nonpolitical topics. Thus, as described in the top row of Table 6.2, we
expect that people might have a stronger psychophysiological reaction to
engaging with an outpartisan.
We test this manipulation at three different points during the study.
First, we measure what happens when the discussion partners initially
reveal their partisan identities to each other. Second, we measure what
happens when they are told they will have a political discussion. And
third, we assess whether people in cross-partisan conversations experi-
enced more discomfort or psychophysiological activation during the dis-
cussion topics themselves.
The design of our study guided participants to talk about a nonpolitical
topic (their favorite class) while we collected a baseline psychophysio-
logical reading. They were then instructed to state their partisanship
verbally, responding to the question: “Are you a Republican or a
Democrat? When the screen goes blank, please state your answer out
loud.” In Table 6.2, we term this “Identity Revelation.” We structured
this part of the study to create a clear moment where the subjects revealed
their identity. In our pretest survey, 91 percent of the sample identified
with a party, including 26 percent of the sample identifying as
Independents who leaned toward a party. Thus, most respondents should
have been able to respond to the question with “Democrat” or
“Republican,” but respondents had the freedom to explain their partisan
identities in whatever words they preferred. We acknowledge that this
question would be difficult to answer for Independents, who might have
felt less sure of what to say.
Our operationalization of partisan clash was a “hard test.” We
return to this point in more detail in Chapter 7, but for now note that
in many instances, the partisan clash treatment during the conversation
was weaker than it appeared on paper. We did not pick highly conten-
tious issues for which partisan identity was especially salient. Our
student sample contained very few self-identified Republicans, which
meant we had a limited number of conversations with true partisan
clash. Of the forty-one conversations where subjects’ partisan identities
did not align, only six occurred between a self-identified Republican
and a self-identified Democrat, although that number increases to
twenty-one if we code leaners as partisans.8 Additionally, as we dis-
covered when watching the recordings of the interactions, people were
not particularly forthcoming about their political identities! Many of
our subjects were quite verbose when describing their partisanship,
suggesting that they qualified their identity expression in ways that


may have served to reduce the amount of perceived differences between themselves and their discussion partners.
Keeping these points in mind, how did people respond psychophysio-
logically to disclosing their partisan identities? We do not see a clear
pattern for electrodermal activity (EDA), but somewhat surprisingly,
people’s heart rates increased more when in the aligned identity condition
compared to the clashing identities condition, although the differences are
not statistically significant. At the next prompt, when the subjects are told
that they are having a discussion,9 termed “Political Discussion
Announcement” in Table 6.2, we expected those facing identity clash to
experience more psychophysiological activation than those who were not
facing identity clash, but we see little evidence of a clear pattern for the
EDA data, and no clear pattern for the heart rate.
Is partisan clash a salient treatment after the initial disclosure? As
described, we used subjects’ partisan identities as our “randomized”
treatment. Our expectation was that if simply talking about politics with
a member of the outgroup was off-putting, then people might experience
more discomfort, more negative emotion, and higher psychophysiological
response regardless of the topic or the level of disagreement on the issue.
We turn to analyzing differences during the discussion topics. Our expect-
ations are outlined in the “Discussion of Topics” column in Table 6.2.
Figure 6.4 shows the raw heart rate and EDA data for each topic. We run
linear regressions with psychophysiological response as the dependent
variable, clustering standard errors for each subject, pooling all four
issues together, and do not find any evidence that partisan clashing
conversations were associated with higher psychophysiological response
than partisan aligned conversations.
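A minimal sketch of this kind of pooled analysis (our illustration with simulated data, not the study's code; all variable names are ours) regresses topic-level psychophysiological response on a clash indicator and computes cluster-robust standard errors by subject:

```python
# Illustrative sketch (not the authors' analysis): pool the four discussion
# topics, regress response on a partisan-clash indicator, and cluster
# standard errors by subject using the CR0 sandwich estimator.
# Data are simulated; effect sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_topics = 40, 4
subject = np.repeat(np.arange(n_subjects), n_topics)
clash = np.repeat(rng.integers(0, 2, n_subjects), n_topics)  # dyad-level treatment
y = 2.0 + 0.5 * clash + rng.normal(0, 2, n_subjects * n_topics)  # e.g., bpm change

X = np.column_stack([np.ones_like(y), clash])  # intercept + clash indicator
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# CR0 cluster-robust variance: sum score contributions within each subject
bread = np.linalg.inv(X.T @ X)
meat = np.zeros((2, 2))
for s in range(n_subjects):
    Xs, us = X[subject == s], resid[subject == s]
    g = Xs.T @ us
    meat += np.outer(g, g)
se = np.sqrt(np.diag(bread @ meat @ bread))
print(beta, se)  # intercept and slope estimates with clustered SEs
```

Clustering by subject accounts for the fact that each participant contributes four (correlated) topic-level observations to the pooled regression.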
Finally, we turn to the self-reported information from the post-survey.
There is no evidence at all that people experienced more self-reported
discomfort during the discussion topics when their identities clashed with
their discussants’ identities, nor did they perceive their partner to be more
uncomfortable. We do, however, find some evidence for differences
related to emotional experience. Subjects in the partisan clash condition
did not experience less positive emotion, but we did find a general
pattern – though not statistically significant – that they were slightly more
likely to report negative emotion. Overall, 34% of the clashing partisan
group experienced negative emotion compared to 21% of the aligned
partisan group, and they listed more reasons on average for that negative
emotion (2.03) than those in the aligned partisanship condition (1.22).
The most common concerns – all more prevalent in the partisan clash

FIGURE 6.4. Psychophysiological response to discussion prompts by partisan identity concordance
Note: Average increase in heart rate (left panel) and electrodermal activity (right panel) relative to baseline during each discussion topic. Light gray bars reflect discussion pairs in which the subjects had the same partisan identity; dark gray bars reflect discussion pairs in which the subjects had different partisan identities. Independent leaners are coded as partisans and cases in which true Independents were paired with partisans were considered to be in the partisan clash condition. Vertical lines represent 95 percent confidence intervals.

condition – were uncertainty about one’s opinion (27% in clashing, 12%


in aligned), basing one’s opinion on factually inaccurate information
(26% in clashing, 14% in aligned), being judged for one’s knowledge
level (24% in clashing, 7% in aligned), ability to defend opinion (24% in
clashing, 12% in aligned), and making one’s discussion partner uncomfortable
(23% in clashing, 12% in aligned). While the general pattern persists across
these concerns, the only difference that reached statistical significance was
for being judged for one’s knowledge level.

Issue Disagreement
While we did not assign subjects to conversations based on the amount of
issue disagreement between the discussants, our hope was that we would
also achieve variation in the extent to which subjects disagreed with one
another on the issues discussed, and as we note in Chapter 3, we did. In
this section, we compare psychophysiological and emotional responses to
conversation segments on topics where the participants agreed, to



146 Discussion: Psychophysiological Experience (Stage 3)

conversation segments where they did not, based on their pretest answers.
We outline our expectations in the second row of Table 6.2. A given dyad
could have both agreeable and disagreeable exchanges across the four
topics. In some analyses, we also aggregate the disagreement to get a sense
for how much overall disagreement existed across the four issues based on
their pre-survey answers.
Looking first at psychophysiological response, for each topic, we pool
discussion segments where respondents agreed and compare to the pool
of discussion segments where they did not. Figure 6.5 shows a clear
pattern for both heart rate and EDA, even if not all differences are
statistically significant at the topic level: Disagreeable conversations tend
to elicit higher psychophysiological response. In regression models includ-
ing observations for each discussion topic with clustered standard errors
for each individual, actual issue disagreement was positively and
significantly associated with increased heart rate, but the coefficient on
disagreement was not significant for change in EDA.

FIGURE 6.5. Psychophysiological response to discussion prompts by issue
disagreement
Note: Average change in heart rate (left panel) and electrodermal activity (right
panel) relative to baseline during each discussion topic. Light gray bars reflect
discussion pairs in which the subjects agreed on the issue discussed, according to
their pretest responses; dark gray bars reflect discussion pairs in which the subjects
disagreed on the topics discussed, according to their pretest responses. The pretest
survey questions asked respondents to report whether they “agreed,” “disagreed,”
or “don’t know” for each of the four issue statements. Dyads where one of the
subjects reported “don’t know” were excluded from the analysis. Vertical lines
represent 95 percent confidence intervals.
However, when we turn to the self-reported information from the post-
survey, we do not find consistent evidence that higher levels of actual
disagreement are associated with more discomfort or a change in emo-
tional experience. The issues we selected were not particularly conten-
tious, and many subjects did not have strong opinions about them. Thus,
it is possible that the lack of results on self-reported measures is related to
the relatively benign “dose” of issue disagreement that most subjects
experienced.

Perceived Disagreement
The third way to conceptualize our treatment was in terms of the amount
of disagreement that subjects perceived, based on their report in the post-
test survey questions. We describe our expectations for perceived dis-
agreement in the third row of Table 6.2. While the other two measures
are plausibly exogenous to the participants’ discussion preferences and
behaviors, this measure clearly is not: We expect there to be variation in
the extent to which people recognize and perceive a conversation as
“disagreeable.”
While we can’t think of another study that assesses subjects’ percep-
tions of disagreement immediately following a discussion, there are clues
from previous literature suggesting that perception is not a perfect
mirror of reality. Research utilizing name generators to assess discussion
network composition almost always relies exclusively on subjects’ per-
ceptions, and although early studies reported relatively high levels of
perception accuracy (Huckfeldt and Sprague 1987), later work has
questioned the veracity of people’s perceptions because of known cog-
nitive biases such as the false-consensus effect. Moreover, Eveland et al.
(2019) argue that previous measures of others’ political views might
conflate inaccuracy with uncertainty. Carlson, Abrajano, and García
Bedolla (2020) also find variation between ethnoracial groups in will-
ingness to guess the political party identification of those in their discus-
sion networks, with Latinos being most likely to refuse to answer
the question.
Therefore, perhaps it is not surprising that overall we find high rates of
recall, but low rates of accuracy. Table 6.3 provides an overview of recall
and accuracy for our subjects’ reports of their discussion partners’ opin-
ions. We asked subjects, in separate questions, to report their partner’s

TABLE 6.3. Recall and accuracy in discussion partner’s opinion

                          Percent DK  Percent DK  Percent Accurate  N          Percent Accurate  N
                          about       about       Recall in         Agreeable  Recall in         Disagreeable
                          Partner’s   Perceived   Agreeable         Dyads      Disagreeable      Dyads
                          Opinion     Agreement   Dyads                        Dyads
WM Tuition                11          2           94                34         11                36
WM Testing                8           1           85                46         29                34
Standardized Testing      8           1           95                84         29                24
Immigrant Financial Aid   13          2           88                48         38                48

Note: Data come from the post-test administered to subjects after completing the discussion. Subjects reported their partners’ opinion (agree or disagree) on each issue, as well as the level of agreement between themselves and their partner (on a six-point scale). Both questions included a “don’t know” response.

opinion on each issue discussion prompt (agree or disagree), as well as the
level of agreement between themselves and their partner (on a six-point
scale). Both questions included a “don’t know” response. In our study,
about 10% of subjects could not report their partner’s opinion on a given
issue, although we do not know how much of this is due to poor recall
versus ambiguity in the way that the discussion partner expressed his or
her opinion, which we explore in Chapter 7 as a possible strategy for
qualifying or censoring one’s opinion.
Interestingly, however, there are much lower rates of recall problems
when we asked respondents to report the level of agreement they had with
their partner on each issue. The vast majority of respondents were able to
provide an answer. Overall, there were only six responses of “don’t
know” across the four issues, a rate of 1%. This suggests that subjects
were able to report agreement and disagreement between themselves and
their partners at a higher rate than they were able to report their partner’s
actual opinion on the issues.
We next turn to assessing subjects’ accuracy.10 Here, we focus only on
the conversations where both subjects had an opinion on the pretest.
While subjects who talked about issues where they agreed with their
partner were able to recognize that agreement at very high rates, subjects
were not particularly accurate at recognizing that disagreement existed
where it did on paper, based on the pre-survey responses. Among those
subjects who talked with someone who disagreed with them on the pre-
survey, accurate recognition of that disagreement varied from a high of
38% for the immigration financial aid issue to a low of 11% on the
tuition issue. In other words, the vast majority of people in disagreeable
discussions did not report that their partner actually disagreed with them.
Additionally, when asked to place their partner’s opinion on a five-point
scale, people underestimated the strength of their partners’ opinions.
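The recall and accuracy rates reported here can be computed from paired pre- and post-test responses. A minimal sketch, using hypothetical column names and toy data rather than the study's, in which accuracy among disagreeable dyads is computed over subjects who ventured a guess:

```python
import pandas as pd

# Each row is one subject: own pretest opinion, the partner's pretest
# opinion, and the subject's post-test report of the partner's opinion.
df = pd.DataFrame({
    "own_pre":          ["agree", "agree", "disagree", "agree"],
    "partner_pre":      ["agree", "disagree", "agree", "disagree"],
    "partner_reported": ["agree", "agree", "dont_know", "disagree"],
})

recalled = df["partner_reported"] != "dont_know"      # ventured a guess
accurate = recalled & (df["partner_reported"] == df["partner_pre"])

# Accuracy among disagreeable dyads: pretest opinions differ, and the
# subject actually reported a guess about the partner's opinion.
disagree = df["own_pre"] != df["partner_pre"]
rate = accurate[disagree & recalled].mean()
print(round(100 * rate))  # → 50
```

Whether the denominator should include the "don't know" responses is a design choice; the sketch above excludes them, which parallels restricting the analysis to subjects who could recall their partner's opinion.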
Why did so few people perceive disagreement where it existed? Perhaps
subjects were vague when sharing their opinions with their peers. Maybe
they masked their disagreement with conciliatory language or censored
their opinions compared to their survey responses. Or perhaps our sub-
jects just had a high bar for what counted as disagreement. While we
cannot answer this question conclusively, as we explore in the next
chapter, we think it is due at least in part to the way that subjects
expressed their opinions, minimizing the expression of direct conflict.
For the purposes of the analysis that follows, we simply note that subjects
tended to underperceive the existence of disagreement where it did in fact
exist based on pre-survey self-report.

FIGURE 6.6. Psychophysiological response to discussion prompts, by perceived
disagreement
Note: Average change in heart rate (left panel) and electrodermal activity (right
panel) relative to baseline during each discussion topic. Light gray bars reflect
discussion pairs in which the subjects perceived agreement based on post-test
responses; dark gray bars reflect discussion pairs in which the subjects perceived
disagreement based on post-test responses. Vertical lines represent 95 percent
confidence intervals.

Turning to the psychophysiological data, we see a clear pattern in
Figure 6.6 for heart rate: people who perceive less agreement have
a higher heart rate. This pattern is statistically significant in a regression
analysis pooling all topics together and clustering standard errors for each
individual. Again, we do not see a consistent pattern for electrodermal
activity. The clear results for heart rate are intriguing given our analysis of
the low rates of perceived disagreement: Very few subjects reported that
they had strong disagreement with their discussion partners, so the rela-
tionship between heart rate and perception would likely be even stronger
for more contentious issues. It is unclear whether those who perceived
disagreement did so in response to objectively more contentious discus-
sions or because they are more sensitive to the existence of disagreement,
but this perception seems to matter for people’s psychophysiological
experience.
Finally, there is some evidence that perceptions of disagreement are
correlated with subjects’ own discomfort and their perceptions of their

partner’s discomfort, but the pattern is not consistent across issues.
There is no evidence that people who perceived more disagreement
reported differential levels of emotional response or reasons driving that
emotional response. Again, this may be due to the nature of the issues
we selected.

CONCLUSION

What does political discussion feel like? The first half of our exploration
into the experience of political discussion reveals that it is neither uni-
formly a positive nor a negative experience, but it is a physiologically
activating one. For the vast majority of subjects in our study, political
discussion entailed an increased heart rate, especially so for anticipated or
experienced disagreement. While we found less consistent evidence for
electrodermal activity during discussion, the anticipation of discussion
was four times as activating as simply watching contentious interactions
on television.
The data we have presented so far paint a nuanced picture about
the effect of disagreement, suggesting that different facets of the con-
cept may matter in different ways. For example, disagreement as
captured by partisan clash seems to matter most in anticipation of a
discussion. Building on our findings in Chapter 5 that people prefer
conversations with copartisans, it seems that the anticipation of cross-
partisan conversations is emotionally activating. But the results from
our second lab study suggest that in and of itself, discussion between
cross-partisans may not evoke higher reactivity than it does among
copartisans.
There are several caveats about our studies that suggest more investi-
gation into these processes is merited. First, as we explore more in
Chapter 7, it is not clear that the partisan clash treatment in our second
study was delivered clearly. Many subjects who identified as partisans on
the pre-survey were coy about those identities when asked to state them
out loud. Moreover, how the discussion partners interpreted this ambigu-
ity is also unknown. When asked in the post-test questions to recall their
partners’ partisanship, people were largely able to correctly identify the
partisanship of partisans but overestimated the extent to which self-
identified Independents leaned toward a party; about one-third of
Independents were remembered as partisans.11
Similarly, in an experimental design where subjects were instructed
to talk about a series of issues, without confederates directing the


conversation, it is not clear that the disagreement between the discussants
that existed based on private self-reports was effectively communicated.
Our analysis in this chapter of exposure to issue disagreement was based
on differences in subjects’ opinions on the pretest survey. But when asked
to report on the level of disagreement they perceived in the conversation,
subjects reported very little disagreement. Two possibilities explain this
discrepancy. First, as we explore more in the next chapter, it is possible
that people censor their opinions to minimize the disagreement that does
exist. Thus, while we measured disagreement “on paper,” the opinions
that were verbally communicated did not disagree. Second, it is possible
that our subjects had a high threshold for what they considered to be
“disagreement.” We did not choose particularly polarizing issues, and
subjects may have mistaken a lack of intensity of opinion for a lack of
disagreement. Whatever the explanation, our findings raise many ques-
tions about the traditional survey measures used to assess the rates of
disagreement in people’s organic discussion networks.
Interestingly, subjects’ reports of their own discomfort and their part-
ners’ discomfort are exceptionally highly correlated, but the two subjects’
reports of their own discomfort are not correlated. This suggests consider-
able projection: When people themselves feel uncomfortable in a situ-
ation, they assume that others do, too. There are not consistent patterns
suggesting that certain kinds of disagreement elicit more discomfort
among participants. But we should note that this may be in part a
function of our sample pool: political science majors or intended majors
who have much higher than average levels of interest in and knowledge
about politics. We would expect them to find political discussion less
discomfiting than Americans at large.
In Chapter 7, we turn to the second half of our exploration of the
experience of discussion: What affects what people say during a conver-
sation? Readers should not lose sight of these results about psychophysi-
ology as we push forward to understand the influences on people’s
verbal communication, the focus of the next chapter. While we did not
take a deep dive into the emotional processes that help generate the
psychophysiological response we capture, we recognize fully that the
decisions people make about how to express their opinions and how to
navigate a conversation are influenced by their emotional and physio-
logical state. In a review of the relationship between emotion and
decision-making, Lerner and colleagues (2015) argue that “emotion
and decision-making go hand in hand” (p. 801) and that “decisions

can be viewed as a conduit through which emotions guide everyday
attempts at avoiding negative feelings (e.g. guilt and regret) and
increasing positive feelings (e.g. pride and happiness), even when they do so
without awareness” (p. 801). Even more pertinent for our argument,
emotions are an inherent part of social communication because they
help people optimally navigate social decisions (Lerner et al. 2015, 810).
But as we will see in the next chapter, optimizing social decisions may
lead to suboptimal political expression.



7

[further] Discussion

Expression in Political Discussions (Stage 3)

Let’s return to the storyline presented in the beginning of Chapter 6. Joe is
at dinner with his in-laws, and his father-in-law Frank has just put him on
the spot to weigh in on a disagreement with Katie. Joe strongly disagrees
with Frank. Based on what we learned in the previous chapter, we now
know that underneath the table, the tapping of Joe’s foot mirrors the
beating of his racing heart: The disagreement that Joe anticipates and then
experiences has heightened his physiological response and activated a cas-
cade of anxiety. Seemingly irritated at Joe’s delayed response, Frank clears
his throat, which only further heightens Joe’s sense of dread.
How does Joe respond to his father-in-law’s question? Does he nudge Katie
with his foot, hoping she’ll take the hint to jump in and if she doesn’t,
mutter a nonresponse to change the topic? Does he try to defuse the tension
by pretending to agree with Frank, even if it means he’ll get an earful from
Katie on the drive home? Or does he speak his mind and answer honestly,
bringing the disagreement out into the open?

The psychophysiological reaction that people experience during a political
discussion likely informs what they say and how they behave during a
conversation. The evidence presented thus far makes clear that much of
the action of the 4D Framework is driven by the detection, anticipation,
and experience of disagreement, and in the previous chapter we estab-
lished that disagreeable conversations are particularly psychophysiologi-
cally activating. We pick up where we left off to more thoroughly explore
the decisions people make during a political discussion: what they choose
to say, as well as why and how they say it.
As described in Chapter 2, we draw upon research from social
psychology (e.g. Cialdini and Goldstein 2004) and communication


https://doi.org/10.1017/9781108912495.007 Published online by Cambridge University Press

(e.g. Dailey and Palomares 2004) to illustrate that the decision about
whether and how to engage in a political discussion is guided by three
goals: (1) to be accurate; (2) to affiliate with others and maintain
relationships; and (3) to affirm a positive self-concept. To shine light
on the psychological experience of a discussion, we examine the consid-
erations individuals hold when deciding how to behave in a discussion.
We are interested in what is going through people’s heads as they
navigate the conversation and consider how to participate. There are
many ways that political conversations could threaten people’s accur-
acy, affiliation, and affirmation goals; we thus expect individuals to alter
their behavior to reduce those threats. Here we zero in to study the
opinions people choose to express.
In the first half of the chapter, we present survey-based work that
explores a number of questions. First, what are people thinking about
during a political conversation? We report the prevalence of consider-
ations that underpin people’s discussion experiences. Next, what is the
relationship between people’s considerations and their reluctance to
express their true opinion, such as self-censorship? Finally, what is the
effect of knowledge differentials in disagreeable conversations on people’s
considerations and expression?
Are these findings about censorship and conformity merely an artifact
of the research design in our survey experiments? Do people withhold
their opinions when talking about politics, or do they simply expect that
other people do? In the second half of the chapter, we push on the insights
gleaned from our survey work to test some of the key ideas in a laboratory
setting. In the first study, we test whether people actually censor or
conform their viewpoints during political discussion. In the second study,
we build on the findings from Chapter 6 to explore linguistic markers in
the conversations to assess differences between agreeable and disagree-
able conversations. Our lab experiments provide confirmation that people
do in fact deviate from expressing their true opinions when they encoun-
ter disagreement.
The findings in this chapter reveal that the conversations that do
occur are far from the ideal envisioned by deliberative theorists and
practitioners working to heal political tensions in the public. Paired with
the considerations people hold in their mind for the ramifications of a
discussion, a substantial portion of the population may censor or silence
their viewpoints to mitigate the potential for adverse consequences,
especially when they find themselves at a structural disadvantage in
a conversation.


VIGNETTE EXPERIMENTS

We examined the patterns and relationships between discussion configur-
ation, considerations, and expression using the series of vignette experi-
ments first described in Chapter 3 and analyzed in part in Chapter 5. The
idea was to present respondents with a hypothetical scenario in which a
character was faced with the decision to engage in a discussion with a
group of people who disagreed. We can then measure how our respondents
thought the character would psychologically and behaviorally respond to
the conversation, conditional on various characteristics of the context that
we manipulated experimentally. As we discussed previously, we chose to
use vignette experiments for two reasons. First, we wanted to maximize the
number of conversational features we could manipulate, something very
difficult to do in lab studies. Second, we wanted to use a method that
mitigated social desirability bias. Although it is somewhat ambiguous
what the socially desirable response is,1 we anticipated at the outset of this
project that individuals would be reluctant to report that they would not
express their true opinion, since conforming to a group in an individualistic
culture, such as the United States, is generally frowned upon.
In our pilot work on Mechanical Turk, we tested a variety of configur-
ations of discussion contexts as our independent variables and randomly
assigned people to different vignettes to bolster our ability to make causal
claims. We present details on the full vignettes in Chapters 3 and 5, but as
a refresher, we describe a character (same gender as the subject) who finds
him or herself in a political conversation and is asked to share his or her
opinion by one of the other characters. The pattern of findings in the pilot
work led us to narrow the factors we tested on the CIPI I Survey; for
the core analysis in this chapter, we focus on testing differences in the
distribution of knowledge in situations where the character is in a parti-
san minority. In the Appendix, we explore where our pilot work using a
wider range of contextual factors (such as the balance of viewpoints, the
distribution of power, and the strength of social connection) corroborated
the main results we present.
We proceed by presenting the descriptive characterizations of subjects’
responses about considerations and expression before moving to test the
hypotheses outlined in Table 7.1 that assess the effects of the knowledge
treatment. Recall from Chapter 3 that the high-knowledge condition is a
situation in which the other discussants were more knowledgeable than
the character and the low-knowledge condition is a situation in which the
discussants were less knowledgeable than the character.


 .. Vignette experiment hypotheses

Outcome Knowledge Differential Hypotheses


Considerations: People perceive more opportunities when the character is
Opportunities more knowledgeable, compared to when he or she is
less knowledgeable, than the discussants
Considerations: People perceive more concerns when the character is less
Concerns knowledgeable, compared to when he or she is more
knowledgeable, than the discussants
Behavior People are more likely to expect the character to express
his or her true opinions or entrench when he or she is
more knowledgeable than the discussants
People are more likely to expect a character to silence,
censor, or conform when he or she is less
knowledgeable than the discussants

Considerations
In Chapter 3, we introduced our operationalization of the AAA
Typology: the considerations that people hold in their mind that affect
their behavioral decisions, even if these decisions happen without much
conscious thought and in a matter of seconds. We identified these as
inputs to the 4D Framework that are likely affected by both individuals’
traits and the contextual factors of the conversation.
We constructed a list of both concerns (i.e. considerations suggesting that
discussion could lead to a negative outcome, likely making someone less
likely to engage meaningfully) and opportunities (i.e. considerations
suggesting that discussion could lead to a positive outcome, likely making
someone more likely to engage meaningfully) (see Table 3.4). After reading
the vignette and providing information for the behavioral dependent
variables, respondents were asked, “[w]hich of the following seem like
plausible considerations for John/Sarah? Check all that apply.”
Deciding how to engage in a political discussion could theoretically
involve weighing both positive and negative considerations, and for half
of our sample that is true: 51% of subjects selected some combination of
concerns and opportunities. Within that group, on average 53% of the
considerations they selected were concerns. Yet not everyone felt con-
flicted: 26% of our respondents selected only concerns and 22% selected
only opportunities.
What types of considerations did respondents think the hypothetical
character would make when deciding how to engage in the discussion? To


answer this, we begin by broadly examining the percentage of respondents
who selected a consideration in each one of our AAA categories.2 The
results indicate that overall, affirmation opportunities were the most
commonly selected consideration, with 57% of respondents selecting a
consideration in this category. Meanwhile, affiliation opportunities were
the least common considerations, with only 37% of respondents choosing
these considerations. Indeed, while opportunities were more common
than concerns for accuracy and affirmation, concerns were more common
for affiliation.
TABLE 7.2. Percentage of respondents selecting each consideration as most important

Category     Type         Consideration                                                          Percent    N

Accuracy     Opportunity  Opportunity to discuss important issues with these people                   11  347
Affirmation  Concern      Concern that these people would judge him/her for his/her opinion            9  270
Affirmation  Opportunity  Opportunity to express his/her real political opinions                       8  252
Affiliation  Concern      Concern that expressing a dissenting opinion will damage the                 8  248
                          relationship John/Sarah has with these people
Affiliation  Concern      Concern that expressing disagreement will make people uncomfortable          6  183
Affiliation  Concern      Concern that expressing his/her opinion will make people uncomfortable       7  170
Accuracy     Concern      Concern that his/her opinion is based on factually inaccurate information    5  161
Affirmation  Opportunity  Opportunity to justify his/her opinion                                       5  153
Accuracy     Concern      Concern that people would judge him/her for his/her knowledge level          5  150
Affirmation  Concern      Concern about defending his/her true opinion                                 5  150
Accuracy     Concern      Concern about expressing an opinion about which s/he is uncertain            5  144
Affiliation  Opportunity  Opportunity to engage more with these people                                 5  143
Affiliation  Concern      Concern that expressing a dissenting opinion will negatively affect          4  128
                          his/her chance of getting invited to another neighborhood gathering
Accuracy     Opportunity  Opportunity to persuade these people to change their minds                   4  125
Affirmation  Opportunity  Opportunity to do his/her civic duty by discussing politics and              4  119
                          exercising free speech
Affiliation  Opportunity  Opportunity to get to know these people on a deeper level                    4  107
Affirmation  Concern      Concern about public speaking                                                3   95
Affirmation  Opportunity  Opportunity to solidify his/her opinion                                      3   89

Note: Data come from the CIPI I Vignette Experiment, pooling across treatment groups.

FIGURE 7.1. Most important concerns and opportunities by AAA Typology
Note: Data come from CIPI I Vignette Experiment. Percentages calculated with
the number of participants who selected each AAA consideration as the most
important consideration as the denominator. For accuracy, N = 927; for
affiliation, N = 979; and for affirmation, N = 1,128. Vertical lines represent
95 percent confidence intervals.

Figure 7.1 breaks down the results in Table 7.2 by aggregating the
considerations into concerns and opportunities, by AAA categories. The
results make clear a pattern that is harder to discern in the table. We find
that if respondents selected an accuracy consideration as most important,
they were equally likely to choose an opportunity (51%) or a concern
(49%). If respondents chose an affirmation consideration, they were
slightly more likely to select an opportunity (54%) than a concern
(46%). However, if someone chose an affiliation consideration, they were
nearly three times as likely to select a concern (75%) as an opportunity
(25%). This pattern for affiliation underscores one of the key arguments
of the book: Social considerations matter in structuring our political
discussion preferences, and the people who think of conversations in these
terms are much more likely to be concerned about the consequences than
be excited about the opportunities.
What were the effects of the experimental treatment, manipulating the
level of knowledge of the hypothetical discussants? Broadly, we find that
62% of participants in the high-knowledge condition selected a concern
as the most important consideration the character would make, compared
to 50% of participants in the low-knowledge condition. This difference is
statistically significant. Figure 7.2 provides more nuance, illustrating
variation in the opportunities and concerns selected within our AAA
Typology by treatment group. Figure 7.2 shows theoretically consistent
patterns for the ways in which knowledge asymmetries affect the consid-
erations that are perceived to be most important in navigating a political
discussion. Accuracy and affirmation display a similar pattern:
Participants were more likely to select a concern as most important when
they were at a knowledge disadvantage and an opportunity when they
were advantaged. In contrast, knowledge advantages do not confer the
perception of affiliation opportunities compared to knowledge disadvan-
tages. Interestingly, when the character was more knowledgeable than the
discussants, affiliation concerns were more common.

Expression Behavior
With an understanding of the considerations that run through people’s
minds and the effects of knowledge asymmetries, we turn to assess pat-
terns in expected expression behavior. We did not aim to create an
exhaustive list of behavioral choices in a political discussion and instead
focused on how people chose to verbally express their opinions, if at all,
in conversations. Thus, our categorization does not include behaviors
such as name-calling, cursing, or yelling, nor does it include micro (but
still observable) responses, such as avoiding eye contact, crossing one's
arms, or eye-rolling. We think that these behaviors are important to
consider but leave them to future research.

FIGURE 7.2. Most important concerns and opportunities, by treatment group
Note: Data come from the CIPI I Vignette Experiment. White bars represent
participants in the low-knowledge condition (N = 1,515, bars sum to 100%); gray
bars represent participants in the high-knowledge condition (N = 1,519, bars sum
to 100%). Vertical lines represent 95 percent confidence intervals.

https://doi.org/10.1017/9781108912495.007 Published online by Cambridge University Press
Our first dependent variable is the question “[w]hat is the likelihood
that John/Sarah expresses his/her true opinion to the group?” This ques-
tion was measured on a six-point scale ranging from “very unlikely” to
“very likely.” While not particularly nuanced about the character’s
response, the measure does allow us to more clearly understand whether
the subjects anticipated that the character would be truthful.
We wanted also to gather more information from our subjects to
capture the specifics of their expression, and we theorized that within a

https://doi.org/10.1017/9781108912495.007 Published online by Cambridge University Press


162 [further] Discussion: Expression (Stage 3)

given conversation, there are five principal ways in which individuals


could verbally respond with respect to their opinion.3 The first, which
we analyzed separately in Chapter 5, is to silence their opinions, thereby
avoiding the conversation altogether. Individuals who feel uncomfortable
might be likely to simply opt out of the conversation by saying nothing at
all or changing the subject.
The second response is to conform to the group’s opinion. Here,
individuals might be willing to at least engage in the conversation, but
they lead the others in the group to believe that they are all in agreement.
This might be a common strategy for people who would prefer to avoid
the conversation or be silent, but the context is such that it would be
socially unacceptable to not say anything at all. Previous research on
political networks discusses conformity as a possible explanation for
why we observe such politically homogeneous discussion networks (e.g.
Huckfeldt, Johnson, and Sprague 2004; Sinclair 2012), but there is not
much research to date investigating how conformity works within a given
discussion. Third, we include a softer version of conformity: self-censor-
ship. Individuals might not be willing to fully misrepresent their true
political beliefs by conforming to the group, but they might censor their
views to appear closer to the group. Here, individuals might moderate the
views that they share with others, suggesting that they hold an opinion
that is closer to what the group believes but still disagrees slightly.
Perhaps the most natural response to come to mind is simply express-
ing one’s true opinion. In this case, individuals may read all of the social
cues that indicate disagreement, variation in knowledge levels, social tie
strength, or power, and still decide to express their true opinions. Some
people might be comfortable doing this in any social context – and we will
return to these individual differences in Chapter 9. Finally, individuals
might take expressing their true opinion one step further and entrench,
meaning that they express an opinion that is even more extreme than
what they truly believe. The idea here is that individuals might choose to
present themselves as having even more extreme views when they are
presented with disagreement.
We begin by examining the likelihood that respondents thought the
character would express his or her true opinion to the group. Pooling
across randomly assigned treatment conditions, the mean response on the
six-point scale was 3.5. On average, respondents thought it was some-
what likely that the character would express his or her true opinion to the
group. For the second question assessing the character’s expression, the
first column of Table 7.3 shows the percentage of respondents who
reported that a hypothetical character would respond in each way. The
ordering of these results – expressing one's true opinion as the most
common response, followed by censorship – reflects a clear pattern that
was consistent across the wide range of vignette experiments that we
conducted and report in the Appendix.

TABLE 7.3. Expression response from vignette experiment, by treatment condition

Response    All Respondents    High Knowledge    Low Knowledge    p-value
Silence           14                 17                11         p < .001
Conform            9                 10                 7         p < .01
Censor            33                 34                31         p = .16
True              41                 35                46         p < .001
Entrench           4                  3                 4         p = .13

Note: Data come from the CIPI I Vignette Experiment. High Knowledge: N = 1,521;
Low Knowledge: N = 1,518. P-values reflect difference of proportions tests between
treatment groups.
As we outline in Table 7.1, we expected subjects to be more likely to
report that the character would behave in ways that obscure their true
views – silencing, censoring, or conforming to the viewpoints of the
group – when they are less knowledgeable than the others in the group.
Conversely, when individuals feel that they are more knowledgeable, we
expect them to be more willing to express their true opinions, and perhaps
even to double down and entrench on their viewpoints. Recall that we
capture true opinion expression in two ways. We first ask a stand-alone
question about the likelihood that the character would express his or her
true opinion to the group. Then, we follow up by asking specifically how
they thought the character would respond, with expressing his or her true
opinion as one of five options.
We begin by analyzing the perceived likelihood of expressing one’s true
opinion to the group, conditional on the knowledge composition in the
group. While participants overall and in both treatment groups tended
toward the middle of the scale, we observe a statistically significant
difference in the mean likelihood between treatment groups, such that
participants who read about a character who was less knowledgeable
than the discussants thought the character would be significantly less
likely to express his or her true opinion (mean = 3.3) than did participants
who read about a character who was more knowledgeable than the
discussants (mean = 3.7). While this difference is statistically significant,
it could be interpreted as substantively small since both treatments still
fall between “somewhat unlikely” and “somewhat likely” on average. We
do, however, see that “somewhat unlikely” was the modal response
option when the character was less knowledgeable than the discussants
(36%) and “somewhat likely” was the most common response when the
character was more knowledgeable than the discussants (31%).
Next, we analyze the specific response option. Table 7.3 shows the
percentage of respondents who thought the character would engage in
each expression response broken down by treatment group. We can see
that overall, participants were most likely to expect the character to
express his or her true opinion to the group (41%) or censor that opinion
(33%). While silence, conform, and entrench were less commonly selected
expression responses, there are important differences based on relative
knowledge levels, as indicated by the treatment groups. For example, we
find that silencing and conforming were significantly more common when
the discussants were more knowledgeable than when the discussants were
less knowledgeable. There was not a statistically significant difference in
the likelihood of choosing entrench based on the knowledge treatment.
Although expressing one’s true opinion was the most common response
in both knowledge treatment groups, which is consistent with what we
observed when asked about opinion expression in the direct question, our
participants thought it significantly more likely that the character would
express his or her true opinion when the discussants were less knowledge-
able (46%) than when they were more knowledgeable (35%).

The Relationship between Considerations and Expression Behavior


Finally, we explore the relationships between the considerations respond-
ents anticipated the character would make and the behavior they antici-
pated he or she would exhibit in the political discussion. We expect the
psychological considerations individuals hold to shape their expression in
political discussions. While individuals likely ponder these considerations
before choosing how to behave in a discussion, we measured the consider-
ations after respondents reported their anticipated expression behavior.
This made it more of a reflective exercise, largely because we did not want
to prime respondents to consider the concerns and opportunities before
making their behavioral selections. We also recognize that many times
these considerations might be processed subconsciously or quickly, but
upon reflection, might become apparent.
As a result, we do not make any formal causal claims about the
patterns that we report in this section. We do not test whether holding
various considerations causes certain types of expression to be more
likely. We do not model these considerations as causal mediators or
moderators because we expect individual dispositions and situational
factors to affect both the considerations individuals hold and the ultimate
behavioral choices they make, confounding any causal analysis. Instead,
we simply want to illustrate the connections between considerations and
expression. In Chapter 9, we analyze the relationship between individual
dispositions and considerations.
Table 7.4 shows the average likelihood of expressing one’s true opinion
for those who selected each consideration as the most important. Note that
this collapses across the knowledge treatment groups. The results indicate
that those who thought the character would be most concerned about
being judged for his or her knowledge level were least likely to expect the
character would express his or her true opinion. In contrast, those who
thought the character would most strongly consider the opportunity to do
his or her civic duty by discussing politics and exercising free speech were
most likely to expect that the character would express his or her true
opinion. Overall, we observe that those who selected a concern as the most
important consideration thought the character would be less likely to
express his or her true opinion than those who selected an opportunity.
Thinking more about the specific expression expectations, we find that
for every single concern, the modal expression expectation was that the
character would censor his or her views in the discussion. In contrast, for
every single opportunity, the modal expression response was that the
character would express his or her true opinion.
Instead of thinking about the behavioral expectations conditional on
the considerations selected, we can also examine the most common
considerations within each behavioral choice. While opportunities were
most important for those who thought the character would entrench or
express his or her true opinion,4 concerns take over as most important
when the character was expected to censor, conform, or silence.
Specifically, those who thought the character would censor or conform
both thought the character would be most concerned about people
judging him or her for his or her opinion, an affirmation consideration.
Those who thought the character would be silent in the discussion were
most likely to report that the concern that expressing a dissenting opinion
would damage the relationship that John/Sarah has with these people was
the most important consideration, an affiliation consideration.
Taking a step back to think about these patterns more broadly,
Figure 7.3 shows the percentage of total considerations that were
concerns or opportunities within each behavioral response selected.
We observe a strong pattern that indicates that if respondents thought
the character would entrench or express his or her true opinion, they
selected a greater percentage of opportunities than concerns. However,
if respondents thought that the character would censor, conform, or
silence, they selected a greater percentage of concerns instead of
opportunities.

TABLE 7.4. Likelihood of expressing true opinion, by most important consideration

Consideration                                                                Likelihood of Expressing True Opinion
Concern that people would judge him/her for his/her knowledge level                          2.91
Concern about expressing an opinion about which s/he is uncertain                            2.94
Concern that these people would judge him/her for his/her opinion                            3.02
Concern that his/her opinion is based on factually inaccurate information                    3.14
Concern that expressing disagreement will make people uncomfortable                          3.15
Concern about defending his/her true opinion                                                 3.17
Concern that expressing a dissenting opinion will negatively affect his/her chance of getting invited to another neighborhood gathering    3.25
Concern that expressing his/her opinion will make people uncomfortable                       3.28
Concern that expressing a dissenting opinion will damage the relationship John/Sarah has with these people    3.29
Concern about public speaking                                                                3.62
Opportunity to engage more with these people                                                 3.72
Opportunity to solidify his/her opinion                                                      3.80
Opportunity to get to know these people on a deeper level                                    3.92
Opportunity to express his/her real political opinions                                       3.92
Opportunity to persuade these people to change their minds                                   3.98
Opportunity to justify his/her opinion                                                       3.98
Opportunity to discuss important issues with these people                                    4.09
Opportunity to do his/her civic duty by discussing politics and exercising free speech       4.09
Note: Data come from CIPI I Vignette Experiment. Sample sizes for each consideration can be viewed in Table 7.2.

FIGURE 7.3. Considerations by expression response
Note: Data come from the CIPI I Vignette Experiment. For Entrench, N = 114,
for True Opinion, N = 1,238, for Censor, N = 987, for Conform, N = 260, and
for Silence, N = 432. Analysis restricted to respondents who selected at least one
consideration. Vertical lines represent 95 percent confidence intervals.

Again, we make no causal claims about the direction of the relationship
between considerations and expression. However, regardless of the
direction in which we analyze the data, the pattern is hard to ignore:
Concerns tend to go hand in hand with expression associated with
discomfort, such as censoring, conforming, and silencing. In contrast,
opportunities tend to be paired with expression that signals more comfort
in a discussion, such as entrenching or expressing one’s true opinion.

Summary


Using a large experiment on a nationally representative sample, we find
strong evidence that the balance of knowledge between discussants affects
both the considerations individuals think a character holds and the
behavior that they anticipate the character will exhibit. Individuals
appear to be more sensitive to the negative considerations (e.g. concerns
about expressing their opinions, being judged by others, damaging social
relationships) when they are less knowledgeable than the other
discussants. However, when individuals are more knowledgeable, they
seem to be drawn to the positive components of a discussion, such as the
opportunity to get to know people on a deeper level, express their real
opinions, and talk about important issues. Similarly, when individuals
have an informational advantage, they are more likely to express their
true opinions to the group than when they are at a disadvantage. Many of
these patterns are reproduced in our pilot studies, testing other types of
advantages, such as power dynamics within the workplace and variation
in being in the opinion majority. While these studies have provided a rich
set of findings, they are limited in that they capture how respondents
expect that others will behave in a discussion – not how they actually
behave. In the next section, we examine this more carefully by observing
actual political discussion expression in lab experiments.

Lab Studies
The vignette experiments were designed to maximize our ability to
explain variation in psychological and behavioral response across differ-
ent discussion configurations, as well as to minimize social desirability
concerns. However, it is reasonable to ask whether people’s attribution of
response to a character is an accurate way to measure their own behavior.
While vignette survey research has been shown to flexibly allow research-
ers to measure expression in situations that are difficult to randomize, we
wanted to complement the survey work with lab experiments designed to
push further on our key findings.

Conformity and Censorship


Our vignette experiments revealed relatively high rates of anticipated
conformity. While an average across all studies (including the pilots) is
not meaningful given the experimental manipulations, between 25% and
50% of subjects in the vignette experiments anticipated that the character
would censor or conform their opinions. Across the studies, between 49%
and 71%5 answered above the midpoint that the character was unlikely
to express their true opinion. Are rates of censorship and conformity that
high when measured behaviorally? We endeavored to find out in a lab
study we conducted in the fall of 2013, dubbed the Political Chameleons
Study. See Carlson and Settle (2016) for the full description of the study.
A total of seventy participants6 took part in the study, which included
a pretest (embedded within a large survey, taken online three days prior to
the lab experiment), a lab session, and a posttest. Participants were asked
to indicate the extent to which they agreed with fourteen policies, using
questions adapted from the American National Election Studies about
political issues.
During the lab session, participants were told that they were joining a
“focus group” about students’ political opinions on campus with two
other “participants,” who were actually confederates in the study. The
treatment itself was the ordering in which the students shared their
opinions. In the control condition, subjects were assigned to give
their responses before the confederates; because they would be giving
their responses to each political question without knowing the opinions
of the confederates on the issue at hand, they had limited information
about how to conform to the group on the particular issue. Those
randomly assigned to give their responses last were in the treatment
condition because they would only give their response after hearing that
the confederates disagreed with them on an issue, giving them a position
with which to conform.7 Three days after the lab session, participants
were emailed a posttest survey that included the same fourteen questions
(buried within a larger survey) that they answered in the large pretest
survey and in the lab session.8
Our expectation was that participants in the treatment condition
would conform at a higher frequency and to a greater degree than
participants in the control condition.9 Our primary dependent variable
in this analysis is the number of times participants conformed across the
issues during the session, measured in two ways. Potential conformity
means that in the lab, a participant gave an answer that differed from his
or her pretest response, moved in the direction of the confederates, and
crossed the midpoint on the scale, such that the lab response actually
countered the pretest response.10 Pure conformity includes the require-
ments of potential conformity, in addition to requiring participants to
give the same response on the pretest and the posttest. Pure conformity
captures the construct most clearly – subjects altering their opinions only
in the presence of others who disagree – whereas potential conformity
allows for the possibility that subjects changed their mind over the course
of the study period.11
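The two definitions can be stated precisely. The sketch below is our illustration, not the authors' code, and assumes responses on a numeric agreement scale with a known midpoint:

```python
def classify_conformity(pretest, lab, posttest, confederate, midpoint=4):
    """Classify one item per the chapter's definitions (illustrative;
    the 1-7 scale and midpoint of 4 are assumptions, not the study's
    documented coding).

    Potential conformity: the lab answer differs from the pretest,
    moves toward the confederates' position, and crosses the midpoint
    so that it counters the pretest response.
    Pure conformity: potential conformity, plus identical pretest and
    posttest answers (the shift appears only in the group setting)."""
    moved_toward = abs(confederate - lab) < abs(confederate - pretest)
    crossed = (pretest - midpoint) * (lab - midpoint) < 0
    potential = (lab != pretest) and moved_toward and crossed
    pure = potential and (pretest == posttest)
    return potential, pure

# A subject who agreed (6) before, voiced disagreement (2) beside
# disagreeing confederates (1), then reverted to 6 afterward:
classify_conformity(6, 2, 6, 1)  # both potential and pure conformity
```

The midpoint-crossing requirement is what separates conformity from mere softening: a lab response of 5 from a pretest of 6 moves toward the confederates but stays on the subject's own side of the scale, so it would not be coded as conformity here.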
While this distinction between pure and potential conformity is
important conceptually, our measurement cannot account for response
instability between the pretest, lab session, and posttest. While we cannot
know with certainty, we are confident that our measures capture some-
thing more than attitude instability given that we impose the strict
requirement that expressed attitudes cross the midpoint on the scale. On
the polarizing issues we selected, we find it unlikely that someone’s
response would change so much over time that it is detected as conform-
ity. This is part of the reason why we focus this study on conformity,
rather than censorship, which could be defined as movement in the
direction of the confederates, but not necessarily crossing the midpoint
to holding a countering view. Measuring censorship would be much more
likely to reflect response instability than conformity. However, absence of
conformity on an issue using this measure leaves open the possibility that
the subject was censoring, entrenching, or expressing their true opinions.
Although participants did not conform on the majority of their
responses, about 89% of participants conformed on at least one ques-
tion by potential conformity measures (94% in the treatment group and
83% in the control group), and 59% of participants conformed at least
once by pure conformity measures (65% in the treatment group and
52% in the control group). These distributions resemble those found in
the Asch (1952) experiments, the design on which the experiment was
modeled. Moreover, that even individuals in the control group in our lab
experiment conformed, albeit less frequently, suggests that individuals
are generally hesitant to express their real opinions to others. They
might also be trying to predict what their peers thought about the issues
in advance, preemptively conforming to what they expected the confed-
erates to believe.
We test the hypothesis about conformity using standard difference of
means tests and randomization inference. First, using a difference
of means test to examine the effect of the treatment on conformity, we
found that participants in the treatment condition conformed significantly
more frequently than participants in the control condition for potential
conformity. Participants in the treatment condition conformed more fre-
quently than in the control condition by pure conformity standards as
well, but this difference is not statistically significant by standard thresh-
olds.12 The results from the randomization inference show that only 0.10
percent of the permuted differences were greater than the observed differ-
ence for potential conformity, as were 3.02 percent of the permuted
differences for the pure conformity measure.
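Randomization inference asks how often a difference as large as the observed one arises when treatment labels are reshuffled at random. A generic sketch of the procedure (our illustration, not the authors' code):

```python
import random

def permutation_p_value(treated, control, n_permutations=10_000, seed=0):
    """One-sided randomization-inference p-value: the share of permuted
    treatment assignments whose difference in mean conformity counts is
    at least as large as the observed difference."""
    rng = random.Random(seed)
    observed = sum(treated) / len(treated) - sum(control) / len(control)
    pooled = list(treated) + list(control)
    n_t = len(treated)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # one random reassignment of labels
        diff = sum(pooled[:n_t]) / n_t - sum(pooled[n_t:]) / len(control)
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations
```

Applied to the conformity counts, a value of 0.0010 would correspond to the reported 0.10 percent of permuted differences exceeding the observed difference for potential conformity.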
How frequently do people censor their views or conform their opinions
during a political discussion? An exact metric would be nearly impossible
to obtain, but our survey and lab experiment results suggest that the
answer is: more frequently than is normatively preferable. A majority –
nearly 60% – of students in the lab study conformed while expressing at
least one opinion during the study, and a significant percentage of
participants in all configurations of the vignette experiments –
ranging from 25% to 50% – expected the character to censor or conform
their viewpoints. Thus, when people reported high rates of conformity or
censorship in our vignette studies, they were not simply evaluating their
expectations of others’ behavior: They were likely projecting their own.

Linguistic Markers
The Political Chameleons Study demonstrates empirically that individuals
do indeed conform and self-censor to better match group opinions in
political conversations. How do these behaviors manifest linguistically
during more organic political conversation? We return to the
Psychophysiological Experience Study described in the previous chapter,
where participants in a student subject pool were hooked up to psycho-
physiological measurement equipment while they talked about a series of
political issues. We recorded videos of the conversations, allowing us to
assess the linguistic markers associated with the expression of their iden-
tities and viewpoints. We first used a speech-to-text automated tool to
extract the transcripts of the conversations.13 We then watched the videos
to manually clean words or phrases missed or incorrectly transcribed and
to divide the text into speaking turns that could be attributed to each
individual subject.
In our analysis here, we focus on subtle verbal indicators that capture
meaningful variation in the dynamic, quality, and tone of the conversa-
tions. Table 7.5 outlines the linguistic variables we measure, using a mix
of hand coding and automated coding to capture several constructs, such
as how many words were exchanged, how much of a subject’s speech
consisted of verbal hedging, and the sentiment14 of the conversation.
These constructs do not map directly to the expression we studied in
the vignette experiments related to conformity and censorship, because
the “ground truth” we have for the subjects’ true opinion – the pretest
survey questions – are difficult to use as the basis for assessment about
whether a subject was truthful. For example, consider a subject who
indicated she agreed with a policy in the pretest. In the conversation,
the other person spoke first, and the subject engaged with her discussant
about the discussant’s opinion before the allotted time expired. If the
subject never clearly states her own opinion, has she been truthful? Is
she avoiding answering the question, or was she just curious to learn what
her discussant thought and ran out of time before she could express her
own thoughts?

TABLE 7.5. Linguistic markers in Psychophysiological Experience Study

Number of Words: The number of words spoken, measured for the overall
conversation, for the identity revelation segment, and for the issue
discussion. Overall conversation: range 365–1,325 words, mean 846 words.

Verbal Hedging: The percentage of words or phrases spoken that indicated
hedging, relative to the total number of words spoken, using an automated
approach, measured for the overall conversation, for the identity revelation
segment, and for the issue discussion. Examples include “i don’t know,”
“i guess,” “i would say,” “i mean,” “i’m not sure,” “not really,” “probably,”
“not sure,” “maybe,” “like,” “well,” “kind of,” “um.” Overall conversation:
range 2–12%, mean 7%.

Partisan Qualification: Use of phrasing indicative of uncertainty during the
identity revelation segment, using hand coding for key phrasing such as
“[i]f I had to choose” or “I’m undecided, but.”15

Used Party Name: An indicator for whether a subject used “Democrat,”
“Republican,” or “Independent” when they described their partisan identity
in the identity revelation segment, using hand coding.

Conversational Initiation: An indicator for whether a subject spoke first
during each conversational segment. For the overall conversation, we measure
the number of times the respondent spoke first in each segment; for analyses
of the identity revelation segment and each issue discussion segment, we use
binary indicators for whether the respondent spoke first. Overall
conversation: range 0–7, mean 3.5.

Sentiment: Sentiment coded using the Lexicoder Sentiment Dictionary,
interpreted as the gap between the percentage of positive and negative words
used, for the overall conversation, for the identity revelation segment, and
for the issue discussion. A score of 1 indicates a 1 percentage-point gap.
Overall conversation: range 0.001–0.041, mean 0.02.

Note: Data come from the Psychophysiological Experience Study. Linguistic
measures available for 138 subjects.
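The hedging and sentiment markers in Table 7.5 are simple shares of the transcript. A minimal sketch with illustrative word lists (stand-ins for the chapter's hedge phrases and for the Lexicoder Sentiment Dictionary, which are far larger):

```python
# Illustrative, single-word stand-ins. The chapter's hedge list also
# contains phrases ("i don't know"), which would need phrase matching.
HEDGES = {"probably", "maybe", "like", "well", "um"}
POSITIVE = {"good", "great", "agree"}
NEGATIVE = {"bad", "wrong", "disagree"}

def hedging_share(tokens):
    """Fraction of spoken tokens that are hedges."""
    return sum(t in HEDGES for t in tokens) / len(tokens)

def sentiment_gap(tokens):
    """Percentage-point gap between positive and negative word shares."""
    pos = sum(t in POSITIVE for t in tokens) / len(tokens)
    neg = sum(t in NEGATIVE for t in tokens) / len(tokens)
    return 100 * (pos - neg)
```

Both measures are proportions of total words spoken, so they can be computed for the overall conversation or separately for the identity revelation and issue discussion segments.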
We calculated the metrics in Table 7.5 separately for the partisan
identity revelation portion of the study (where subjects described their
partisanship) and the issue discussion (where subjects discussed four
issues in separate conversational segments).16 Within each of the two
portions of the study, we looked for effects of the disagreement treatment
(whether subjects’ partisan identities matched or clashed, and whether
subjects’ pretest attitudes agreed or disagreed), as well as subjects’ indi-
vidual traits. In the issue discussion portion of the study, we also asked
subjects a series of questions about their perceptions during each of the
four issue discussions. Table 7.6 outlines the explanatory variables.

Partisan Identity Revelation


Our study was structured in a way that allows us to assess particular
segments of the conversation, as analyzed in Chapter 6 and previewed in
Table 6.2. We focus first on the segment where subjects revealed their
partisan identities, assessing a set of behaviors during this identity revela-
tion. The design of the study guided participants to talk about a non-
political topic while we collected a baseline psychophysiological reading.
They were then instructed to state their partisanship, responding to the
question: “Are you a Republican or a Democrat? When the screen goes
blank, please state your answer out loud.” We had structured this part of
the study to create a clear moment where the subjects revealed their
identity.17 How did participants respond? While we anticipated our
respondents would give succinct answers to what we considered a
straightforward question, people were more verbose. The average number
of words exchanged in response to this question was 15, but the range
was from 2 to 91 words.
We found limited effects of the partisan clash condition on the number
of words exchanged; slightly more words were spoken on average in
conversations where participants’ partisan identities clashed, but this
difference is not statistically significant.18 We did not detect any differ-
ences between partisan aligned and partisan clash conversations on the
automated measures of verbal hedging in speech or on the sentiment
score. However, we found a strong effect of the partisan clash condition
on our hand coded measure of partisan qualification, depicted in
Figure 7.4. Importantly, this relationship is robust to the way that parti-
san leaners are coded, suggesting that it is not simply an artifact of the

https://doi.org/10.1017/9781108912495.007 Published online by Cambridge University Press


174 [further] Discussion: Expression (Stage 3)

 .. Explanatory variables for linguistic marker analysis

Construct Variable Analysis Level


Disagreement (a) Partisan Clash (a) Identity
(b) Issue Disagreement Revelation,
Issue Discussion
(b) Issue Discussion
Perceptions of For each issue, self-reported Issue Discussion
the Conversation measures of subjects’
evaluations of:
Perceived Disagreement
Own Discomfort Level
Partner Discomfort Level
Own Knowledge Level
Partner Knowledge Level
Partner Knowledge Advantage
Knowledge Differential
Individual Traits Race Identity Revelation
Gender Issue Discussion
Partisan Strength
Political Interest
Social Interaction Anxiety
Conflict Avoidance
Willingness to Self-Censor
Note: Data come from the Psychophysiological Experience Study. See Chapter 6 for a more
detailed explanation of the Partisan Clash and Issue Disagreement conditions. See
Chapters 3 and 9 for a more detailed explanation of our measurement of individual traits.
See the Appendix for more information about perceptions of the conversation.

way leaners expressed their identity. However, in regression models, this


relationship disappears when controlling for partisan strength, a pattern
we explore more shortly. We interpret this pattern of findings to suggest
that respondents are more likely to distance themselves from their parti-
san identity or express uncertainty about it when disclosing to someone
who does not share their identity, but that pattern is driven by the
behavior of both weak partisans and partisan leaners. Strong partisans
do not seem to qualify their identity expression in the same way.
In Chapter 9, we conduct a robust individual-level analysis of 4D
Framework expression behaviors, but we preview here some results that
pertain directly to the expression of identity. Overall, people expressed
less certainty when they revealed their partisan identity than we antici-
pated, despite the high levels of political interest and knowledge in
the sample.


[Figure 7.4: bar charts of the percent of subjects qualifying their response, for Partisan Match and Partisan Clash conversations, with leaners coded as Independents (left panel) and as partisans (right panel)]

Figure 7.4. Qualification in partisan identity expression, by partisan clash condition
Note: Data come from the Psychophysiological Experience Study. Bars show the percent of subjects who qualified their response in the identity revelation segment. See Chapter 6 for a more detailed explanation of the Partisan Clash condition. We coded partisanship in two ways: one where individuals who leaned toward a party are counted as partisans (right panel), and one where they are not (left panel). To make this clearer, take a dyad where a Democratic leaner is talking to a strong Democrat. In the figure on the left, this dyad would be coded as an example of partisan clash, because the leaner would be coded as an Independent. In the figure on the right, this dyad would be coded as an example of aligned partisanship because the leaner would be coded as a Democrat.

The strongest and most consistent relationships were between partisanship strength and the linguistic measures during the identity revelation segment of the conversation. In a linear regression model controlling for gender, race, political interest, and psychological traits, strong partisans used fewer
words to describe their identities, used less filler language when doing
so, and were much less likely to qualify their answers. While only 11% of
strong partisans qualified their identities, 46% of weak partisans did, as
did 57% of leaners. Nearly twice as many Republicans as Democrats used
one of the qualification phrases (42% compared to 21%), but we have
too few Republicans to test whether this difference is significant. We note
here that subjects high in willingness to self-censor and social interaction
anxiety were more likely to use partisan qualification phrases in their
answers when they were in the partisan clash condition, even controlling
for their partisan strength.


To elaborate a bit more, we further coded to see whether subjects mentioned their party name in their response. While 90% of
Republicans and 92% of Democrats used the party name in their initial
response, rates were considerably lower for partisan leaners. Over half of
leaning Democrats (58%) identified their party directly by name, while
only 33% of leaning Republicans did, but there were too few subjects in
either category to test for statistically significant differences in this rate.
Only 38% of true Independents identified themselves as such, using the word "independent" in their initial response.
Strong partisans were also more likely to speak first when announcing
their partisanship: 74% of strong partisans and 49% of weak partisans
spoke first, compared to only 26% of leaners and 23% of Independents.
In a regression model controlling for gender, race, political interest, and
psychological traits, partisanship strength is a positive and significant
predictor of speaking first during the identity revelation segment.19
Finally, we found several statistically significant relationships between other traits and the number of words that people spoke in the identity revelation segment. We cannot be certain how to interpret this measure in this part of the conversation: increased verbiage could be a sign of interest or of uncertainty. In regression models controlling for the set of
variables described, we found that men used more words than women
when describing their identities, as did the politically interested compared
to the less interested. People who scored highly on measures of conflict
avoidance and willingness to self-censor used more words to describe
their identities when they were in the partisan clash condition. We con-
textualize these results in Chapter 9.

Issue Discussion
We next examine the conversation segments over policy issues. To assess
the relationship between the variables outlined in Table 7.6 and linguistic
expression during the policy segments, we pool all issue segments
together, meaning that we have eight observations per conversation (each participant's speech during four policy segments), and then run a series of
regressions between the independent variables from Table 7.6 and the
core conversational behaviors from Table 7.5, clustering standard errors
by subject.
We find no evidence that either the partisan clash treatment or the
actual amount of issue disagreement between subjects was related to any
of the linguistic measures during the issue discussion segment. Nor did we


find effects for political variables such as political interest or partisanship strength. However, we found clear and consistent patterns between subjects' perceptions of the conversation and their linguistic response, as well
as several individual demographic and psychological traits, outlined in
Table 7.7.
The findings from the linguistic analysis suggest far stronger and more
consistent effects for the characteristics of the individuals – such as
partisanship strength, psychological factors, and their perceptions of the
conversation – as compared to factors of the conversation itself, such as
the presence of disagreement. The exception to this is the role of partisan
identity clash in the way that subjects described their partisanship, where
people talking to others who did not share their identities were much
more likely to minimize or qualify their descriptions. However, even this
relationship seems to be accounted for by partisanship strength. In
Chapter 9, we will refer back to these results to more thoroughly charac-
terize how a broader range of individual traits affect the way people
communicate when talking about politics.


What did Joe choose to do in response to his father-in-law's question? Based on the results presented in this chapter, Joe would most likely choose to censor his viewpoints when talking to Frank, given that he felt he was at a knowledge disadvantage and was concerned about maintaining the social
relationship. The experience was physiologically and psychologically
taxing, and it likely involved more verbal hedging and negative sentiment
than if he had talked to someone with whom he agreed.
Our exploration of the experience of a political discussion showed that
individuals consider a wide range of concerns and opportunities as they
contemplate what to say in a discussion, and that these choices are often
made with a backdrop of physiological arousal and emotional activation.
Importantly, the examination of the psychological considerations individuals make indicates that individuals do indeed consider the social consequences of their expression in the discussion. We found that 32 percent of respondents selected an affiliation consideration as most important in driving a character's expression in a vignette, and that, among those who did, they were much more likely to express a concern about the situation than to see an opportunity to strengthen a social relationship. Our vignette
experiment also showed that it is not only a structural disadvantage that heightens these social concerns: Our subjects anticipated a higher rate of affiliation concerns when the character was more knowledgeable than the hypothetical discussants.

Table 7.7. Pattern of findings for linguistic markers in issue discussion in Psychophysiological Experience Study

Number of Words
Perceptions: Subjects used fewer words to discuss an issue when they perceived their discussant to be more knowledgeable on that issue.
Individual traits: Subjects with higher SIAS scores used fewer words to discuss an issue, especially when subjects disagreed on the issue.

Verbal Hedging
Perceptions: Subjects used more filler words when discussing an issue with a partner they perceived to disagree with them on that issue.

Conversational Initiation
Perceptions: Subjects were less likely to speak first about an issue when they were less comfortable discussing that issue. Subjects were less likely to speak first about an issue when they perceived their partner to be uncomfortable discussing that issue.
Individual traits: Whites were more likely to initiate conversation than ethno-racial minorities. Subjects with higher SIAS scores were less likely to speak first about an issue. Subjects with higher CA scores were less likely to speak first about an issue. Subjects with higher WTSC scores were less likely to speak first about an issue, but only when subjects disagreed on the issue.

Sentiment
Perceptions: Subjects used more positive words to discuss an issue when they felt less comfortable discussing that issue. Subjects used more positive words to discuss an issue when they perceived their partner to be less comfortable discussing that issue.
Individual traits: Whites used more negative words to discuss an issue than did ethno-racial minorities. Subjects with higher SIAS scores used more negative words to discuss an issue. Subjects with higher CA scores used more negative words to discuss an issue. Subjects with higher WTSC scores used more negative words to discuss an issue.

Note: Data come from the Psychophysiological Experience Study. See Chapters 3 and 9 for a more detailed explanation of our measurement of individual traits. "CA" refers to conflict avoidance, "SIAS" refers to Social Interaction Anxiety, and "WTSC" refers to willingness to self-censor. See the Appendix for more information about perceptions of the conversation. All results statistically significant at least at the p < .10 level in linear regression models controlling for partisan strength, political interest, gender, and race.
We find that a majority of our respondents expected individuals to
deviate from their true opinions through censorship, conformity, or silen-
cing. The behavioral path they chose was conditional on the relative
knowledge levels of the discussants, but it was also related to whether
the subject reported that a concern or an opportunity was the most
important consideration for the character. People who emphasized the
importance of a concern were more likely to report that the character
would not express his or her true opinion. This tendency toward reticence
was reinforced by findings from lab experiments that not only validate
high rates of censorship and conformity but also shed light on the subtle
dynamics of disagreeable conversations. We do not have lab results to
validate the findings about the effects of knowledge differentials on what
subjects say and how they say it, but we think this is a fruitful avenue for
further research.
In Chapter 8, we conclude our 4D Framework with the Determination
stage. Here, we examine the extent to which contentious political inter-
actions, such as those described in this chapter, might lead individuals to
avoid political discussions, politics more generally, and even nonpolitical
social experiences in the future. One participant in our Thanksgiving
Study captured this notion well, when asked to describe a recent political
discussion:
It was with a co-worker and we were debating homosexual marriage. I was
against and he was for. We were at work in a team room and it was just hard
to convince him of my views. Plus he wasn’t discussing but rather trying to agitate
me, and it worked. I hate feeling like that and it just made me angry. I learned
from that lesson to not get involved in these types of discussions so much.

This person described a contentious political discussion that involved emotions he did not like experiencing, including agitation and anger.
Moreover, he reports that he "learned from that lesson to not get involved in these types of discussions so much." While we did not follow up with
this participant to see if he followed through on the lesson he learned, it
seems that he had every intention of avoiding political discussions in the
future, upon reflecting on the experience itself.



8

Determination

When Discussion Divides Us (Stage 4)

For the umpteenth weekend in a row, Joe rakes the leaves from his yard to
the curb, the annual Sisyphean ritual of suburban home ownership. He
stares down the street at his neighbor Jack’s house, where the weekly
driveway auto shop gathering has switched entertainment from baseball
to football alongside the change in season. After the conversation at the
barbeque earlier in the year, Joe had tried again last month to get to know
Jack and his friends, joining them in the driveway as everyone was figuring
out who they wanted to recruit for their fantasy football teams. But once
again, as soon as the conversation turned away from small talk and on to
the news, it was clear Joe’s views were in the minority. He uncomfortably
muddled through the conversation and retreated back to his yardwork as
soon as he could. Now, a month later, he was still mulling over the situation
as he once again saw men gathering in the driveway. Would it be rude to
pop over for a few minutes to talk sports, but immediately leave when
politics came up? Should he continue to grin and bear it through more
political banter every week for the sake of being neighborly? Could he get
away with just a friendly wave from afar? Or should he avoid Jack at all
costs, changing his raking schedule and taking a different route to avoid
jogging past him in the mornings?

Joe's feelings exemplify an experience that has preoccupied pundits and the popular press in recent years, and one that emerged in our CIPI I Survey: Americans are deeply concerned about the consequences of
talking to each other about politics. Intuitively, we know that people’s
experiences shape their future behavior. With respect to political discus-
sion, scholars have examined the ways in which political discussion might
affect other forms of political engagement or levels of tolerance. But
previous research on political discussion has not considered the social


https://doi.org/10.1017/9781108912495.008 Published online by Cambridge University Press



implications of political discussion, assessing the extent to which contentious political interactions affect future choices about political discussion and social interaction with past political discussants.
Relying on observational data – using both survey questions assessing
respondents’ own behavior as well as their responses about characters’
behavior in vignette experiments – we illustrate a sort of ripple effect
stemming from political discussion. Not only do people alter their
political discussion habits after an uncomfortable discussion, but in some
circumstances, they also alter their social networks. In Chapter 10, we
discuss how political discussion experiences could affect broader societal
attitudes, either by mending our social fabric and reducing affective
polarization, or by tearing us farther apart.

The Social Consequences of Political Talk


In assessing the consequences of political discussion, previous scholars
have focused primarily on instrumental political outcomes, such as atti-
tude change, political participation, and political knowledge. Some work
has focused more on the normative consequences, such as tolerance for
the other side and outparty animosity. Both lines of work seek to under-
stand whether the day-to-day building blocks of democracy (political
conversations) affect outcomes that matter for the health of a democracy.
These outcomes richly characterize the political consequences of discus-
sion, but those consequences are only part of the story. We know a lot
about how conversations facilitate learning, attitude change, and vote
choice, but we know very little about how political discussions affect
social outcomes.
Throughout our depiction of the 4D Framework, we have referred to
the AAA motivational typology characterizing people’s decision-making:
How people choose to behave in advance of and during a conversation is
directly related to the benefit or threat posed by the consequences of the
conversation to their accuracy, affirmation, and affiliation goals. It is clear
that people are motivated both positively and negatively by a variety of
different goals, such as people’s desire to learn about politics to make
informed choices (accuracy), or the concerns that grappling with conflicting information can challenge one's sense of self (affirmation) or jeopardize one's relationships with others (affiliation).
Our results thus far suggest that roughly equal numbers of
people are motivated by each goal. For example, in the True
Counterfactual Study presented in Chapter 5, we found that the




open-ended responses that could be coded about why someone did or did not engage in a conversation were divided nearly evenly among the three categories. In our vignette experiments in Chapter 7, we found that 37% of
respondents selected affirmation, 32% selected affiliation, and 31%
selected accuracy as the character’s most important consideration when
determining his or her expression during a conversation.
Our results show that while all three categories of consideration
matter, people who think of conversation in terms of affiliation are much
more likely to be threatened by potential negative consequences than
motivated by potential positive benefits. With respect to the decision to
participate, we found in Chapter 5 that affiliation concerns were most
prevalent overall, based on the free response answers provided by
respondents. But this pattern was driven by the people who were asked
why they avoided a particular political conversation: 28% of people
mentioned affiliation concerns, compared to only 17% who mentioned
accuracy and 10% who mentioned affirmation concerns. Moreover, only
11% of participants identified an affiliation consideration as their reason
for engaging in a conversation. It seems that affiliation considerations are
a much stronger driver of opting out than opting in.
Similarly, in our exploration of the motivations for behavior during a
discussion in Chapter 7, Figure 7.1 demonstrated that people were much
more likely to pick an affiliation concern as their most important consid-
eration compared to an affiliation opportunity. The difference in the
concern–benefit ratio was not nearly as stark for the other two categories.
Moreover, concerns about affiliation are not limited to those who were
disadvantaged in the conversation. People in the vignette experiment
condition who had a knowledge advantage relative to their discussants
were actually more concerned about affiliation consequences than those
in the condition with a knowledge disadvantage! Thus, while affiliation is
not a more prevalent or more important category of consideration for
people overall, on average it is a more negative consideration, one that
frequently has an adverse effect on individuals’ engagement in a
conversation.
Affiliation considerations have not been ignored in previous litera-
ture, but neither have they been fully explored. The foundational work
studying core discussion networks theorized about social considerations
in discussant selection but did not rigorously test those ideas. For
example, when assessing the strategic function that political discussion
can play in informing the public, Ahn, Huckfeldt, and Ryan (2014)
describe the decisions individuals make in choosing the people with




whom they will discuss politics and for what purposes. They argue that
“individuals have multiple preferences in the construction of communi-
cation networks, and politics is only one among a long list of preferen-
tial criteria – sparkling personalities, trustworthiness, a hatred for the
Yankees, and so on” (p. 10). While the authors raise the point that
social – as opposed to purely political – considerations might affect
political discussion choices, they do not test the extent to which these
social considerations affect behavior.
Other research on political expression and tolerance speaks somewhat
to social considerations and fits between Stages 3 and 4 of our cycle.
Gibson (1992) demonstrates that those who do not feel free to express
their political views have more homogeneous networks and live in less
tolerant communities. Recent work in this vein suggests that individuals
are driven to self-censorship – a key component of Stage 3 in our cycle –
primarily because they fear that expressing unpopular views will alienate
them from those in their social networks (Gibson and Sutherland 2020).
Relatedly, Mutz (2006) argues that individuals in politically diverse net-
works avoid political activity “mainly out of a desire to avoid putting
their social relationships at risk” (p. 123). While Mutz’s work is theoret-
ically rich and largely consistent with much of our argument in this book,
the social implications of discussion are not tested. Weber and Klar
(2019) find support for the idea that people who are sensitive to norma-
tive social pressure are more likely to sort themselves socially and ideo-
logically, but they do not actually assess the experience of social pressure
itself. Thus, while there is evidence that social pressure is at play in
structuring our choices, the specific mechanisms about the desire to
preserve our social relationships remain largely untested.
Affiliation motivations have received considerable attention as a driving force in political discussion behavior in studies utilizing qualitative approaches, which focus on social concerns. Conover and colleagues use a
multi-method approach to specifically focus on why people avoid political
discussion, which we would situate in Stage 2 of the 4D Framework
(Decision). Political conversations raise affiliation concerns in a number
of ways. Conversation may reveal information about a person’s view-
points that elicit judgment from others,1 a concern that is most pro-
nounced in conversations with weaker ties.2 Conversation creates the
possibility to learn information about other people’s views that reveal
fundamental differences3 or make it difficult to maintain respect for them.4
Finally, the act itself of disengaging from a conversation that becomes
contentious can be offensive,5 alienating others by refusing to continue to




talk. This dynamic is made clear in Talking about Politics, where Cramer
(2004) contrasts the men she studies (the “Old Timers”) to a group of
women in a guild who do not know each other as well and avoid discuss-
ing politics to keep the conversation polite (p. 37).
Scholars concerned with affective polarization have uncovered plenty of
evidence of altered social relationships. Study after study reveals new ways
in which Democrats and Republicans report their dislike for each other.
Instead of simply examining whether partisans rate outpartisans “colder”
on a feeling thermometer scale, many of these studies explore whether
people would be unhappy about their progeny marrying an outpartisan
(Iyengar, Sood, and Lelkes 2012), would discriminate against outpartisans
(Iyengar and Westwood 2015), would be less willing to date outpartisans
(Huber and Malhotra 2017; Easton and Holbein 2020), would avoid selling
football tickets to outpartisans (Engelhardt and Utych 2018), and would not
want their children playing with the children of outpartisans (Mason
2018).6 A wealth of articles in the popular press echo the same points.
The Daily Beast published an article headlined “Friends unfriend over
politics”;7 NPR had a segment on All Things Considered called “‘Dude,
I’m done’: When politics tears families and friendships apart”;8 Reuters
published a piece headlined “‘You are no longer my mother’: A divided
America will struggle to heal after Trump era”;9 and The Washington Post
published a piece headlined “Politics and conspiracy theories are fracturing
relationships. Here’s how to grieve those broken bonds.”10
But scholars have not interrogated what provokes the disruption to
people’s social networks based on partisan antipathy. The core argument
is that strong patterns of partisan sorting along social, cultural, and
geographic lines have activated political tribalism (Mason 2018) or polit-
ical sectarianism (Finkel et al. 2020), and that Social Identity Theory can
explain the resultant change in attitudes. But what does that unfolding
process look like in the daily lives of Americans? How do day-to-day
interactions about politics actually affect our social relationships?
Our aim in this chapter is to integrate these previous veins of literature with our own findings to conceptualize what happens after a
political discussion with respect to social relationships. We shed light on
whether individuals’ affiliation concerns are warranted. Our findings
suggest that political conversation puts people in a bind. As their political
and social worlds collide, individuals are left with a choice about how to
reconcile their social relationships with their political attitudes. Do they
downplay the importance of their political views to allow for harmonious
disagreement with their peers and maintain the friendship? Or do they reduce their social contact with those who disagree in order to maintain their political allegiances and political identities?

Table 8.1. Summary of Determination stage questions

Social Distancing
Driving question: How does the discussion experience of being in the political minority affect the likelihood of communicating about political topics with those individuals in the future?
Measure: Vignette Experiment, "What is the likelihood that John/Sarah discusses politics with this group in the future?"

Social Distancing
Driving question: How does the discussion experience of being in the political minority affect the likelihood of communicating about nonpolitical topics with those individuals in the future?
Measure: Vignette Experiment, "What is the likelihood that John/Sarah attends a social gathering that this group will be at in the future?"

Social Polarization
Driving question: How does the discussion experience of being in the political minority affect the extent to which individuals avoid out-partisans in social contexts?
Measure: CCES 2018, Social Polarization Battery (be neighbors with, be friends with, spend social time with, and marry outpartisan)
We examine social repercussions in terms of social distancing and social
polarization, as shown in Table 8.1. By social distancing, we refer to behav-
iors as they relate directly to the people who were part of the discussion
under consideration. This can include both future political and nonpolitical
(social) interactions, but the focus is on the people with whom someone has
already talked about politics. By social polarization, we refer to behaviors
that are targeted toward others who were not part of the conversation that
capture the increasing social discomfort between Democrats and
Republicans (Mason 2018).11 In both cases, we are interested in how the
experience of being in an uncomfortable political discussion – here, meas-
ured as being in the political opinion minority – affects how individuals
approach future political and social interactions.
When it comes to social distancing, we are interested in the likelihood
with which individuals will discuss politics with that same group again, as
well as the likelihood with which they will otherwise engage socially with


that group again. A conversation that went well could be positively
reinforcing and increase future political (or nonpolitical) communication
with discussants. But a conversation that went poorly might lead a person
to avoid future interactions with those people. These choices and experi-
ences aggregate into consequences for social polarization. While any one
trip around the discussion cycle might only have proximate impacts on
discussant consequences (e.g. avoiding future interactions with those
specific people in the future), individuals might start to generalize their
experiences to others who were not part of the conversation itself.
Patterns of behavior can accumulate to impact broader social relation-
ships. In the extreme, people may develop preferences that are socially
polarized, preferring contexts and communities of similar others.
In the sections that follow, we begin by establishing the extent to which
affiliation concerns are justified. We broadly measured whether individuals
had cut ties with or distanced themselves from a friend because of their
political views. After demonstrating that affiliation concerns are indeed
justified, we move on to examining the extent to which the outcomes of
political discussions realize these affiliation concerns within the context of
social distancing and social polarization, as outlined in Table 8.1.

ARE AFFILIATION CONCERNS JUSTIFIED?


Are affiliation concerns justified? An initial answer is yes: About 1 in 4
people report that talking about politics has affected a relationship. In our
CIPI II Survey, we asked respondents directly about distancing from
friends because of political views:12 23% of our respondents reported
that they had distanced themselves, and 26% indicated that they perceived
a friend to have distanced from them because of their political views. This
group of respondents may have distanced in a variety of ways and for a
variety of reasons, so we followed up to ask how they had distanced
themselves from their friend. These
approaches included some forms of purely political social distancing, such
as stopping all talk about politics, and some forms of social distancing
above and beyond politics, such as stopping all communication entirely.
As shown in Figure 8.1, among this group of socially distanced respond-
ents, the methods of distancing were used with similar frequency, regard-
less of whether respondents were the ones who distanced or felt that their
friends had distanced from them.
The most common answer, provided by 57% of the socially distanced,
was that they simply stopped talking about politics. This means that a


FIGURE 8.1. Types and frequency of political and social distancing
Note: Percentage of respondents who reported that they had ever distanced
socially from a friend because of politics (white bars) or think a friend distanced
from them because of politics (gray bars) who engaged in each method of
distancing. Data come from the CIPI II Survey, N = 237. Vertical lines represent
95 percent confidence intervals.

nontrivial portion of the public – by our estimates about 1 in 8 people –


has altered their political discussion network because of politics.
Turning toward social distancing more broadly, 12% reported that
they stopped talking about topics other than politics. Over a quarter
(29%) of this group reported that they stopped communication entirely,
with 13% reporting that they stopped replying to calls, texts, and emails,


specifically. These patterns speak to a point we raised in Chapter 5 and in
our article (Settle and Carlson 2019), that individuals do not even like
talking about nonpolitical topics with those who disagree. Finally, while
rare, we found that 7% of this group stopped attending events that their
(former) friends would be at, 15% stopped inviting them to events, and
4% avoided letting their children play together.
We also asked about their social media distancing behaviors. Previous
research from Pew Research Center in 2016 suggests that 31% of social
media users have altered their news feed settings to reduce their exposure
to content posted by their friends and 27% have blocked or defriended
someone because of politics (Duggan and Smith 2016).13 As shown in
Figure 8.1, we find that 30% of our social distancers defriended someone
on Facebook, 19% blocked someone’s content, 23% hid someone, and
25% stopped liking, commenting, and sharing someone’s Facebook
posts. About 25% also reported unfollowing their friend on Twitter. It
seems, then, that social distancing is not limited to offline interactions –
individuals are also severing ties by ceasing engagement on social media.
We cannot be certain that individuals were only distancing themselves
from friends who disagreed, since our question was just about distancing
for political views more generally. For example, it could be that individ-
uals wanted to avoid their friends because they talked about politics too
much (or maybe not enough!), even if they had similar views (e.g. Klar
and Krupnikov 2016; Krupnikov and Ryan 2022). We also cannot be
certain that the distancing observed is because of political discussions,
rather than learning that the other person had a different worldview.
We view the patterns described as a way to set the stage for the reality
that affiliation concerns can be justified. In the sections that follow, we
dig deeper to analyze the social consequences of political discussions,
specifically.

SOCIAL DISTANCING
The 4D Framework is premised on the idea that what someone learns
from one discussion experience feeds into their a priori expectations of
political discussion the next time they are presented with an opportunity
to talk about politics. At its most proximal level, this affects whether they
change their pattern of interaction with the people in the discussion, either
in terms of talking about politics again, or in terms of modifying their
social relationship, such that they avoid future nonpolitical interactions
with them, too.


We examine the relationship between political discussion and social
distancing in two ways, and we focus both on avoiding political
interactions and avoiding social interactions. First, we revisit the social
distancers analyzed in Figure 8.1. We asked this group to tell us why they
distanced from a friend due to their political views. We interrogate these
responses to understand the extent to which the responses mentioned
political discussion experiences. Second, we examine two pilot vignette
experiments in which we randomly assigned the partisan composition of
the political discussion and measured the likelihood with which respond-
ents thought the main character would discuss politics with this group in
the future.

Future Political Interactions: Free Response


While we acknowledge that individuals are not always good at remem-
bering – or even knowing – why they do what they do, we still found these
qualitative descriptions in respondents’ own words to be meaningful. We
first examined the explanations among those who reported that they only
distanced by stopping political discussions. This group was pushed
enough to make changes to their proximal discussion behaviors by
avoiding future political discussions with certain people, but would still
be willing to interact with them in social, nonpolitical settings.
The vast majority of the responses pointed toward disagreement as the
key reason they distanced. Several respondents simply wrote that their
friend was “too conservative” or “his views were too liberal.” Others
pointed to broader disagreement, simply saying that they could not agree
on things. One respondent wrote “I am willing to agree to disagree, some
people are not,” suggesting that the tensions in the conversations were
driven by the other person. Another wrote “[t]here was no sense in
wasting my breath with someone who disagrees a lot with me. No
common interest.” Many respondents echoed this sentiment, wanting to
avoid discussions that led to anger, frustration, and confrontation.
Still, some respondents pointed directly to affiliation concerns as the
key reason why they made these adjustments to their proximal political
discussion behavior. One respondent wrote that it was “better to distance
than to end the friendship all together.” Similarly, another wrote, “I
haven’t had a lot of time in general, but I also don’t want to get involved
in any political discussion with someone such that it can ruin a friendship.
Some might be able to keep friendships despite political differences, but
it’s hard for me to do so.”


The point that these respondents clearly make is that they chose to
forego political discussions in order to maintain their social relationships.
This is a textbook case of acting on an affiliation concern. These respond-
ents did not stop talking politics because they felt unknowledgeable on the
topic, felt they were being persuaded to adopt a different view, or any-
thing that more clearly aligns with accuracy motivations or strategic
discussion behavior. Rather, they stopped talking politics to avoid des-
troying a relationship.
Another important point from these responses is that although many
described their friends’ views or the disagreement stemming from the
political discussions in deeply emotional language, they still desired to
keep the friendships. That is, this particular subset of respondents who
only distanced politically valued friendship over politics, even if they had
some colorful things to say about their friends’ views. For example, one
respondent wrote that their friend “says crazy shit.” Another, more
eloquently, wrote that his or her friend “expressed decidedly prejudiced
views after Obama was nominated for [p]resident.” In this case, the
respondent raised concerns about prejudiced comments his or her friend
made, but chose to stop talking about politics instead of ending the
relationship altogether. As we discuss later in this chapter, other respond-
ents were not so forgiving. And eliminating political talk from a relation-
ship can inadvertently affect the overall relationship. As one respondent
wrote: “My goddaughter and I have decreed a no talk zone on politics so
have distanced ourselves in that topic area . . . we still love and respect
each other but much of disposable social time is spent in supporting those
issues that it limits our social time.”

Future Political Interactions: Vignette Experiments


Are the experiences reported by these respondents idiosyncratic, or do
people actually anticipate avoiding people in the future based on uncom-
fortable discussion experiences? We endeavored to find out, using the
vignette experiments we analyzed in Chapters 5 and 7.
In 2015, we conducted a pilot vignette experiment aimed at under-
standing how various political discussion contexts affected future
behaviors – above and beyond what the character might say in the
discussion itself, as we explored in Chapter 7. The study manipulated
the partisan composition of the discussion, randomly assigning respond-
ents to read about a character that was in the partisan minority, partisan
majority, or in a group evenly split between Democrats and


Republicans. We also manipulated the knowledge level between the
discussants and the character, randomly assigning respondents to read
about a character that was more knowledgeable, of the same knowledge
level, or less knowledgeable than the others in the group. After reading
the vignette and reporting how they thought the character would behave
immediately in the discussion, we asked respondents about a number of
future behaviors. Our focus here is on the perceived likelihood that the
character would avoid discussing politics with these people again in the
future, a measure of distancing. We measured this dependent variable on
a six-point scale ranging from “very unlikely” to “very likely.” This
means that higher values indicate that respondents thought it was more
likely that the character would avoid discussing politics with this group
in the future.
Figure 8.2 presents the main effects of partisan composition on the
likelihood of avoiding a future discussion in both of our studies.14
The results indicate that participants who read about a character in the
partisan minority thought he or she would be significantly more likely to
[Figure: mean likelihood of avoiding (0-6) for the Minority, Balanced, and Majority conditions]

FIGURE 8.2. Main effects of partisan composition on avoiding future political
discussions with this group
Note: Data come from the Knowledge x Partisan Composition Pilot Study,
N = 592, after accounting for missing data. Means pool across the knowledge
manipulations to show the main effects of partisan composition. Vertical lines
represent 95 percent confidence intervals.


avoid future political discussions with this group than those who read
about a character in a balanced – but contentious – group or who was in
the partisan majority.15
We also asked how likely they thought it would be that the character
would go to these people for information about the election in the
future. We find that characters in the partisan minority were perceived
to be significantly less likely to go to these discussants for political
information in the future (mean = 2.4) than those in the partisan
majority (mean = 3.4) and a balanced group (mean = 3.3). Turning to the
main effects of the knowledge treatment, we observe suggestive evidence
that respondents expected a less knowledgeable character (mean = 4.1)
to be more likely to avoid discussing politics with this group in the
future than a character who was more knowledgeable than the group
(mean = 3.8). This is consistent with results we presented in Chapters 5
and 7, suggesting that individuals tend to avoid conversations in which
they are less knowledgeable than others in the group. However, we also
observe that respondents expected a character described as less
knowledgeable than the discussants (mean = 3.6) to be more likely to turn
to these people for information about future elections, compared to
characters described as being more knowledgeable than the other
discussants (mean = 2.3).
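The comparisons above reduce to simple arithmetic: a condition mean on the six-point scale and a 95 percent confidence interval around it. The sketch below illustrates that calculation; it is not the authors' code, the data are simulated, and all names are hypothetical.

```python
# Illustrative only: condition means with normal-approximation 95% CIs,
# as used to compare vignette treatment groups. Data are simulated.
import numpy as np

def mean_ci(y, z=1.96):
    # Return the mean and a 95% confidence interval (mean +/- z * SE)
    y = np.asarray(y, dtype=float)
    m = y.mean()
    se = y.std(ddof=1) / np.sqrt(len(y))
    return m, (m - z * se, m + z * se)

rng = np.random.default_rng(0)
conditions = {
    "minority": rng.integers(1, 7, size=200),   # six-point scale responses
    "balanced": rng.integers(1, 7, size=200),
    "majority": rng.integers(1, 7, size=200),
}
for name, y in conditions.items():
    m, (lo, hi) = mean_ci(y)
    print(f"{name}: mean = {m:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

With real data, a formal difference-in-means test would accompany these intervals; non-overlapping 95 percent intervals are only a conservative visual shorthand for a significant difference.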
While we do not observe an interaction effect between the knowledge
treatment and partisan composition on the perceived likelihood that the
character would avoid future political discussions with the group, we do
observe a statistically significant interaction between knowledge and
partisan composition such that characters who interacted with coparti-
sans who were more knowledgeable were more likely to report that they
would ask them for information in the future. This fits nicely with the
general notion that individuals should be relying on others who are more
knowledgeable than they are, especially if they are copartisans, for infor-
mation about politics (e.g. Lupia and McCubbins 1998; Ahn, Huckfeldt,
and Ryan 2014; Carlson 2019).
The results from this study complete our picture of the relationship
between political discussion experiences and future political interactions
with discussants. Echoing the results from Chapter 5 – where we found
that people were much more likely to avoid discussions in which they
would be in an opinion minority – we see here that the experience of a
discussion where one is in the opinion minority has the strongest effects
on future political interactions. Knowledge asymmetries between discuss-
ants also seemed to affect these behaviors, but to a lesser extent.


Future Social Interactions: Free Response


The potentially negative experience of discussion does more than deter
people from future political conversations. It also has the potential to
change their social (nonpolitical) interactions with the discussants. This
consequence is the crux of the question about whether affiliation concerns
about relationships with discussants are justified.
We return to the free response data from our CIPI II Survey refer-
enced previously. Here, we focus on explanations among those who
reported that they distanced in ways beyond avoiding political discus-
sions. Similar to the reasons for avoiding political discussion described
in the previous section, most respondents identified their primary motiv-
ation for distancing was because of excessive disagreement; many
respondents noted that it “just was not worth their time” to engage in
the discussions and alluded to the fact that the political gaps were too
wide to bridge. Others seemed to want to avoid arguments or incivility.
For example, one respondent wrote that “[s]he who instigate [sic] argu-
ments about politics on purpose. Baiting me and it wasn’t worth it.”
Others noted that the other person became “too adamant and offen-
sive.” Another wrote, “I’m guessing that when we stopped being able to
have civilized conversations about politics or just agreeing to disagree
and leave it alone, it was time to part ways.”
Other motivations emerged as well. Some respondents acknowledged
that they would be okay with their friends disagreeing with them but did
not want to be pushed or persuaded. For example, one respondent wrote
“I have enough problems in my life without having to listen to a liberal
relative that insists on trying to change my mind.” Others focused on the
fact that the conversation seemed to reveal a discussant’s judgmental or
hateful attitudes about their political opponents more broadly. For
example, one wrote: “I’m ok with them [Democrats] not agreeing with
me, but I will not accept the hatred [from Democrats] for everyone who
disagrees with them.” Another wrote about an “extreme political view
that put everyone outside of that viewpoint down or categorized as
inferior thinking." Relatedly, some respondents noted that they were not
bothered as much by the disagreement itself as by the way in which that
disagreement was expressed: "Their political beliefs didn't bother me as
much as how they voiced their opinions about it. The remarks were often
derogatory towards those who did not agree with them. The remarks
were extremely hurtful, not at all fact based, and were intentionally meant
to degrade those who did not agree with them.”


Some respondents were more focused on how their friends’ political
views signaled something deeper, either about the friend or the rela-
tionship itself, particularly in the wake of the 2016 election. One
respondent wrote, “[t]he election of this president is so abhorrent, so
unlike the type of behavior that I want to associate with [sic] that
spending any amount of time or influence with someone who voted for
them was just undoable.” Another noted that these political differ-
ences also revealed a lack of shared interests on which to bolster
a friendship: “If you don’t have much of anything in common, then
the relationship doesn’t serve much of a purpose.” While some of our
respondents did report that the political tension led them to completely
cut ties, others viewed distancing – even beyond just avoiding political
discussion – as a means of keeping the peace to preserve their frayed
friendship. One respondent even wrote that they distanced because they
“want[ed] to remember him as a good person.”

Future Social Interactions: Vignette Experiments


We first explored the notion of relationship severing in a question
following the vignette experiment published in Carlson and Settle
(2016), where we found that subjects who read about a character in
the partisan minority – compared to a conversation with an equal balance
of opinions – thought that the character would be more likely to look
for a new job. We asked a variation of this dependent variable in
the 2015 pilot vignette experiment as well. We asked respondents
how likely they thought it was that the character would attend another
social gathering that this group would be at in the future. This question
was measured on a six-point scale ranging from “very unlikely” to
“very likely.”
As shown in Figure 8.3,16 we find largely consistent patterns for
partisan composition. The results suggest that those in the partisan
minority condition thought the character would be significantly less likely
to attend another gathering that this group would be at in the future than
those in the balanced and partisan majority conditions. However, the
mean scores are all quite high, and although there were significant differ-
ences between treatment groups, the overwhelming majority do not
expect relationships to be severed. We do not find evidence that know-
ledge differentials affected the likelihood of future social behaviors
examined here.


[Figure: mean likelihood of future social interaction (0-6) for the Minority, Balanced, and Majority conditions]

FIGURE 8.3. Main effects of partisan composition on avoiding future social
interactions with this group
Note: Data come from the Knowledge x Partisan Composition Pilot Study,
N = 590, after accounting for missing data. Means pool across the knowledge
manipulations to show the main effects of partisan composition. Vertical lines
represent 95 percent confidence intervals.

SOCIAL POLARIZATION
One implication of the 4D Framework is that the experiences an individ-
ual has with political discussion impact future decision-making, accumu-
lating decisions that ultimately have an impact on the structure of their
discussion networks, political and otherwise, as well as the frequency with
which they communicate, whether about politics or pop culture.
Causality in this process is very difficult to establish, as we expect that
these choices are self-reinforcing, but we think that the associations are
telling. We use data from the 2018 CCES to examine the relationship
between political discussion network characteristics and broader social
polarization behaviors.17 This analysis tells us how political discussions
are associated with the likelihood of avoiding future social interactions
with outpartisans.
As we saw in Chapter 5, people demanded more money to talk about
even nonpolitical topics with outpartisans. This suggests that the social
effects of political discussion may extend beyond avoiding future political


discussions with specific people, as we explored in the previous section.
People may come to develop preferences for their broader social network
and context. In this section, we assess whether the partisan composition
of political discussion networks is associated with broader social
polarization.
We measured political discussion network composition by asking
respondents to reflect on how many of the people with whom they talk
about politics, candidates, and elections identify with the same political
party as them. This was measured on a five-point scale including none,
less than half, about half, more than half, and all.18 This measure will not
fully capture the complexities of political conversations, but it should
roughly proxy for how much exposure individuals had to disagreement.
The scale we use ranges from being in a partisan minority on one end to
being in a partisan majority (with complete agreement) on the other.
The middle category is where subjects are exposed to roughly equal
numbers of copartisans and outpartisans.
We measured social polarization using the items in Mason (2018).
Specifically, respondents were asked to report how likely it is that they
would spend occasional social time with an outpartisan, be next-door
neighbors with an outpartisan, be close friends with an outpartisan, or
marry an outpartisan.19 The response options were “I absolutely would
do this,” “I probably would do this,” “I probably would not do this,” and
“I absolutely would not do this.” With the exception of the marriage
question (which showed a nearly uniform distribution), the majority of
respondents were willing to engage with outpartisans on some level and
only a minority of respondents were “probably” or “absolutely” unwill-
ing to do a given activity with an outpartisan.20
Is discussion network composition correlated with social polarization?
We first examine the correlation between the partisan composition of
one’s network and the likelihood with which he or she reported engaging
in each activity with an outpartisan. We find mixed evidence of a rela-
tionship between discussion network composition and social polariza-
tion. Looking at each activity separately, we find no evidence of a
statistically significant correlation between network composition and
spending occasional social time with an outpartisan, nor with being close
friends with an outpartisan. We observe a statistically significant, negative
correlation between network homogeneity and avoiding being next-door
neighbors with an outpartisan (r = -0.07), but a moderate, positive,
statistically significant correlation between network homogeneity and avoiding
marrying an outpartisan (r=0.14). In other words, those in more


homogeneous discussion networks are less opposed to being neighbors
with an outpartisan but more opposed to marrying an outpartisan. It is
hard to draw a meaningful conclusion from such a mixed pattern of
results.
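For readers who want to replicate this style of analysis, the item-by-item checks amount to Pearson correlations between network homogeneity and each polarization item. A minimal sketch follows; it is not the authors' code, the data are simulated, and the variable names are hypothetical.

```python
# Illustrative only: Pearson correlations between network homogeneity
# and each social polarization item. Simulated data, hypothetical names.
import numpy as np

def pearson_r(x, y):
    # Pearson product-moment correlation between two equal-length arrays
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

rng = np.random.default_rng(1)
n = 500
homogeneity = rng.integers(1, 6, size=n)        # 1 = no copartisans ... 5 = all copartisans
items = {
    "social time": rng.integers(1, 5, size=n),  # 1 = absolutely would ... 4 = absolutely would not
    "neighbors": rng.integers(1, 5, size=n),
    "close friends": rng.integers(1, 5, size=n),
    "marry": rng.integers(1, 5, size=n),
}
for name, item in items.items():
    print(f"{name}: r = {pearson_r(homogeneity, item):+.2f}")
```

With real survey data, each r would be accompanied by a significance test; with the independent simulated draws used here, the correlations simply hover near zero.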
However, as we have noted throughout the book, not all political
discussions that occur are desired, and asking questions about people’s
discussion networks without understanding the agency they have in
constructing those discussion networks means that we are conflating
the discussants with whom subjects willingly talk, and those discussants
who initiate conversations the subject could not avoid. As a proxy to try
to capture this agency, we use a measure of strength of partisanship. We
expect that political discussion network homogeneity occurs more inten-
tionally for partisans. Because strong partisans are more interested in
politics and generally enjoy talking about politics more, they are likely
to initiate more political conversations. Thus, a homogenous network
reflects their active choice to cull their networks of outpartisans, which
we expect to be correlated with higher values on our social polarization
measures, while a more heterogenous network could reflect their toler-
ance for exposure to the other side. Conversely, we can infer less about
the agency involved in shaping network composition for weaker parti-
sans and Independents, and thus do not necessarily expect a consistent
pattern of results. Those with weaker partisan attachments, on average,
are less interested in politics and are less likely to initiate discussions.
Moreover, those who identify as pure Independents will have more
heterogenous discussant networks simply because of the overall distri-
bution of partisan attachment in the population. Therefore, for those
who only lean toward one party or do not attach at all, their discussion
networks are more likely to be a reflection of who initiates conversations
with them, and we do not expect their network composition to be as tied
to their social polarization.
To test this, we examined whether there was an interaction between
strength of partisanship and network homogeneity in explaining social
polarization behaviors. Across all four social polarization behaviors, we
find a statistically significant, positive interaction between strength of
partisanship and network homogeneity on the likelihood of polariza-
tion.21 Each model controls for interest in politics, race, and gender in
addition to the interaction and constitutive terms (network homogeneity
and strength of partisanship). We visualize the substantive magnitude of
the interaction effects in Figure 8.4. We can see that in each of the four
plots, there is separation among the strongest partisans on the


FIGURE 8.4. Predicted likelihood of polarization, by strength of partisanship and
network composition
Note: Data come from the 2018 CCES. Results estimated from an OLS model in
which the dependent variable is the likelihood with which the respondent would
distance in each way. Higher values reflect more distancing, saying “absolutely
would not do this.” Models control for political interest, race, and gender and
include CCES survey weights. The interaction term leaves strength of partisanship
and network composition as continuous. Figures plot strength of partisanship on
the X-axis and bars are shaded to reflect network composition, such that dark bars
represent participants in an opinion minority, medium bars represent participants in
a balanced network, and light bars represent participants in the partisan majority.
Vertical lines represent 95 percent confidence intervals.

polarization measure, based on network composition. Strong partisans
with the most homogeneous discussion networks were more likely to be
socially polarized on all four measures than strong partisans who were
political minorities in their networks. The relationship between network
composition and social polarization is different among pure independents
compared to partisans.
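The specification just described, an interaction between strength of partisanship and network homogeneity with both constitutive terms, controls, and survey weights, can be sketched as a weighted least squares problem. This is an illustration only, not the authors' code or the CCES data: the variables are simulated with a known interaction coefficient of 0.2 built in, so the estimate can be checked against the truth.

```python
# Illustrative only: weighted least squares with an interaction term,
# mirroring the model described in the text. Simulated data throughout.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
homogeneity = rng.integers(1, 6, size=n).astype(float)   # copartisan share of network
pid_strength = rng.integers(0, 4, size=n).astype(float)  # 0 = independent ... 3 = strong
interest = rng.integers(1, 5, size=n).astype(float)      # political interest control
female = rng.integers(0, 2, size=n).astype(float)        # stand-in demographic control
weight = rng.uniform(0.5, 2.0, size=n)                   # stand-in survey weights

# Simulated outcome with a known positive interaction (0.2) plus noise
polarization = (
    0.5 + 0.1 * homogeneity + 0.1 * pid_strength
    + 0.2 * homogeneity * pid_strength
    + rng.normal(0, 0.5, size=n)
)

# Design matrix: intercept, constitutive terms, interaction, controls
X = np.column_stack([
    np.ones(n), homogeneity, pid_strength,
    homogeneity * pid_strength, interest, female,
])

# Weighted least squares: solve (X'WX) beta = X'Wy without forming W
Xw = X * weight[:, None]
beta = np.linalg.solve(Xw.T @ X, Xw.T @ polarization)
print(f"estimated interaction coefficient: {beta[3]:.2f}")
```

Because the interaction multiplies two continuous terms, its coefficient must be interpreted jointly with the constitutive terms, which is why plots of predicted values (as in Figure 8.4) are the standard way to convey the substantive magnitude.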
How do we interpret these findings? We suggest that the association
between homogenous network composition and increased social


polarization from outpartisans among strong partisans reflects the
determination process that individuals make after discussions. Strong partisans
who are in the minority in their discussion network have made deliberate
choices to talk with others who disagree; it makes sense that they exhibit
lower levels of outparty antipathy. Conversely, strong partisans who
shelter themselves in homogenous networks have likely done so – at least
in part – because of their outpartisan antipathy.
While we cannot tease out a causal relationship with these data, we
have presented some evidence connecting the experience of political dis-
cussion to the choices people make about their broader social networks.
We found that network composition was significantly associated with
important forms of social polarization. Strength of partisanship and
network composition interact, such that strong partisans in homogeneous
networks are more likely to be socially polarized than strong partisans
who are minorities in their networks. Thus, it seems that exposure to
disagreement in political conversations can influence the ways in which
individuals distance themselves from others – politically and socially.


Experiences throughout the 4D Framework shape future political and
social behavioral intentions. Individuals do indeed report that they dis-
tance themselves socially from their friends because of politics. We found
that about a quarter of Americans report cutting ties with others for
political reasons, but these reasons vary from person to person.
Sometimes, the behavior stems from pure intolerance of disagreeable
views or a distaste for political engagement more broadly, while other
times it stems from the signals that political views can send about under-
lying values or lack of common interests between friends. We also
explored the extent to which social polarization is linked to political
discussion network composition. We find mixed evidence that exposure
to disagreement in discussion networks is connected in some way to social
polarization, but clearer patterns emerge among strong partisans, whom we
think have the most agency in structuring their discussion networks.
Among this subset of the public, those with the most homogeneous
discussion networks are the most socially polarized.
Throughout this chapter, we have taken care to note some of the
strengths and limitations of our analyses. Our hope is that future
researchers can improve upon the groundwork we have laid in this
chapter to answer these nuanced questions more carefully. For example,
our analyses of the CCES data rely upon one question aimed at assessing
network composition, but this does not capture the complexities of polit-
ical discussion that we have tried carefully to unpack throughout this
book, especially in Chapter 7. This analysis also uses a social polarization
measure that might be susceptible to the biases raised by Druckman et al.
(2021), leading us to overestimate the degree of social polarization by
failing to distinguish perceptions of ideologically extreme, engaged
partisans from perceptions of the more typical moderate, less engaged
partisans.
Synthesizing the findings from this chapter raises an important
question: Is the picture bleak? On the one hand, we might argue “yes.” We
found that the very kinds of conversations where people are likely to
encounter new information – conversations with outpartisans – are pre-
cisely the ones that individuals are least likely to repeat. Our vignette
experiments repeatedly demonstrated that those in the partisan minority
were perceived to be less likely to engage in a political discussion in the
future with the same discussants. In an analysis detailed in the Appendix,
we also find that individuals in the partisan minority in their political
discussion networks discuss politics less often than individuals who are in
the partisan majority. These heterogeneous conversations are precisely
the types of discussions with the potential to increase tolerance for the
other side through interparty contact. This means that some of the
foundational theorized benefits of political conversation – exposure to diverse
views, access to a wider variety of information, and increased tolerance –
might be increasingly rare. Moreover, we find that these political
interactions have the potential to change individuals’ social networks
through social polarization, which makes future inadvertent exposure less
likely. As individuals sever ties with their friends because of politics, they
reduce the likelihood that they are incidentally exposed to political infor-
mation and conversation in the future. Finally, we observe some evidence
for social polarization. It seems that exposure to disagreement in political
discussion networks can be associated with increased social polarization
among weak partisans and independents, while homogeneous networks
are associated with increased social polarization among strong partisans.
On the other hand, we might look at the evidence presented in this chapter
and conclude that there is a glimmer of hope. First and foremost, the vast
majority of respondents reported that they have not socially distanced
themselves from a friend because of politics. Similarly, in our vignette
experiments, the average likelihood of engaging in future political and
social interactions was above the midpoint on the scale, indicating that
most respondents still thought it likely that the character would continue
to engage socially. We also asked respondents in a vignette experiment
about the likelihood that the character would engage in future political
discussions with other people who were not part of the conversation in
the vignette. We found no evidence that the partisan composition of the
discussion under consideration affected the likelihood of future political
discussion in general. It is easy to point to the quarter of respondents
who have distanced themselves and worry about the state of the social fabric in
America. But most people have not severed social ties because of polit-
ical discussion. Moreover, when we dig into the types of social polariza-
tion we explored, we observe that this seems to be most common in
romantic partnerships. The majority of respondents still seem relatively
comfortable engaging with outpartisans as neighbors, friends, or
acquaintances; they just draw the line at marriage (Iyengar, Konitzer,
and Tedin 2018).
In Chapter 10, we revisit these findings and discuss how our political
discussion experiences might affect our broader societal attitudes, such as
hostility toward outpartisans and political elites. For the moment, we
leave it up to the reader to decide for herself whether these results present
an optimistic or pessimistic window into the state of political discussion
in America.



9

Individual Dispositions and the 4D Framework

[Sound of videotape rewinding]. Readers have become familiar with political
discussion as seen through Joe’s eyes. But how does his wife, Katie,
experience discussion? What about his in-laws, Frank and Susan, or his neighbors
Jack and Ken? While Joe’s palms sweat and heart races when a political
discussion is afoot, Katie is steadfast in her opinions but even-keeled in
expressing them, politely stating her opinion even if she disagrees, but not
getting worked up during the conversation itself. Frank’s heart seems to race,
but out of pure excitement for engaging in a spirited debate, at least until he
gets himself so worked up that he needs to take a walk to cool off after a
shouting match with Joe or Katie. Susan remains calm. She’s quiet, but
attentive and interested, even if the discussion takes a turn that leaves Joe
shaking in his boots. The neighbors down the road, Jack and Ken, are more
Frank’s flavor, but they don’t seem eager to cut people out of their lives for
holding different political views. Cheering for a rival sports team or driving a
Chevy instead of a Ford? That might merit social derision, but supporting a
different political candidate earns little more than an eye roll.

The previous five chapters have unpacked the 4D Framework,
empirically characterizing the ways in which individuals detect the views of
their discussants, decide whether to engage in a discussion, discuss their
opinions in that conversation, and determine how they will engage in
future social and political interactions. Throughout these chapters, we
have shown that individuals vary considerably in their behavior across
these facets of political discussion. For example, 34% of respondents to
our CIPI II Survey reported that they try to guess someone’s political
views before discussing politics with them, but 66% of respondents did
not. Similarly, 12% of participants in one of our Name Your Price
Studies reported that they would discuss politics with outpartisans for

202

https://doi.org/10.1017/9781108912495.009 Published online by Cambridge University Press


Individual Dispositions and the 4D Framework 203

free, while 43% demanded $10 or more to have a five-minute conversa-


tion with those who did not share their partisan identity. A handful of
participants experienced decreased heart rates while anticipating a pol-
itical discussion, whereas many others experienced anxiety and had
heart rate increases on par with a panic attack. Within an actual discus-
sion, 89% of participants in our Political Chameleons Study conformed
to the group in the opinions they expressed, yet 11% of participants
expressed their true opinions to the group throughout the entire study.
Perhaps more consequentially, 23% of respondents in our CIPI II Survey
reported that they have distanced themselves from a friend because of
politics, while 77% have not.
Why do people respond to the same political discussion situations so
differently? In answering this question, we return to what we argued in
Chapter 2 was one of the core contributions of this book: Individual
dispositions structure how people behave at each stage of the 4D
Framework. This argument aligns with a large literature that demon-
strates the powerful role of innate characteristics – rooted in biological
differences but manifesting as variation in personality, cognitive style, or
threat sensitivity, for example – in shaping political attitudes, identities, and
behavior, above and beyond the effects of demographic and political
characteristics (for a review, see Detert and Settle 2023).
Previous research has examined the relationship between a handful of
individual dispositions or traits and discussion behavior, specifically.
Demographic characteristics, such as race and gender, seem to affect
political discussion frequency or network composition, as well as behav-
ior within formal deliberations (e.g. Leighley and Matsubayashi 2009;
Morehouse Mendez and Osborn 2010; Mendelberg and Karpowitz 2014;
Djupe, McClurg, and Sokhey 2018; Carlson, Abrajano, and García
Bedolla 2019, 2020). Similarly, political characteristics, such as interest
in politics and political engagement, are associated with discussion fre-
quency (Huckfeldt and Sprague 1995). Mondak’s (2010) book on the
personality foundations of political behavior analyzes political discussion
as one of several forms of political participation, finding that those who
are more open to experience, less conscientious, more extraverted, and
less agreeable are more likely to participate in political discussions, a
finding reinforced by Gerber et al. (2012).
In this chapter, we push further to examine the relationships between
individual dispositions – demographic, political, and psychological – and
behavior in the four stages of the 4D Framework. We do not test the
relationship between every trait and every facet of discussion behavior we
measured. Instead, we focus on theoretically driven expectations about
how different characteristics or traits should affect different kinds of
decisions and experiences.1 We will describe more of the theoretical
intuition behind the relationships we test as we present the analysis and
highlight the core findings (summarized in Table 9.1) to reveal the key
takeaway points. Our general approach is to use regression to isolate the
effect of traits on discussion behavior, selecting an appropriate model
given the structure of our dependent variables and controlling for all
other traits of interest. We analyze the psychological variables in separate
models, given the high rates of correlation between those traits. We
include the full set of model specifications and results in the Appendix
for interested readers. We can borrow intuition from Mondak (2010) in
arguing that personality dispositions precede political discussion prefer-
ences and behavior in a chain of causality, but we cannot be certain of
that. Moreover, some traits that we explore, such as interest in politics,
are not entirely stable and could be affected by political discussion experi-
ences, an argument echoed by Bakker, Lelkes, and Malka (2021) who
demonstrate that the relationship between personality and political ideol-
ogy and attitudes is bidirectional. As such, we find these associations to be
informative, but we are hesitant to interpret them as causal.

Demographic Dispositions
Among the most frequently measured demographic characteristics in
political behavior research are race and gender. For decades, scholars
have been interested in understanding how race and gender structure
political behavior: Do Whites and minorities have different political
preferences? Why do minorities tend to participate in politics at lower
rates? Do men and women have distinct political preferences? Why do we
observe women participating in politics at increasing rates over time?
While the vast majority of this research has been focused on questions
of participation (Schlozman, Burns, and Verba 1999; Leighley and Vedlitz
1999; Lawless and Fox 2011; Anoll 2018); vote choice (Rouse 2013;
Dolan 2014; Setzler and Yanus 2018); and political knowledge (Dolan
2011; Abrajano 2015; Pérez 2015; Dolan and Kraft n.d.), there is an
important body of research specifically focused on understanding the role
race and gender play in political discussion.
Scholars of political discussion, and particularly deliberation, have
focused some attention on gender and race because they are characteris-
tics intricately linked to structural inequalities in American society. The



Table 9.1. Summary of key findings

Stage | Outcome | Data | Women | Minorities | Political Interest | PID Strength | SIAS | CA | WTSC

Detection | Try to Guess Views in Advance | CIPI I & CIPI II | + + + +
Detection | Makes a Guess (free response) | CIPI II | + – – –
Detection | Directly Ask Others’ Views (free response) | CIPI II | – – –
Decision | Generally Try to Avoid Political Discussions | CIPI I | + – – + + +
Decision | Deflect (expect vignette character to silence) | CIPI I | + – + +
Decision | Perceived Value of Initiating an Agreeable Discussion | CIPI I & CIPI II | + + –
Decision | Perceived Value of Initiating a Disagreeable Discussion | CIPI I & CIPI II | – –
Discussion | Perceived Value of a Political Conversation with a Stranger | CIPI I & CIPI II | + –
Discussion | Perceived Value of an Agreeable Discussion | CIPI I & CIPI II | +
Discussion | Perceived Value of a Disagreeable Discussion | CIPI I & CIPI II | – –
Discussion | Proportion of Concerns about a Conversation Vignette Character Would Have | CIPI I | – + + +
Discussion | Likelihood of Expressing True Opinion for Vignette Character | CIPI I | – + + – –
Discussion | Expecting Vignette Character to Entrench | CIPI I |
Discussion | Expecting Vignette Character to Express True Opinion | CIPI I | – – –
Discussion | Expecting Vignette Character to Censor | CIPI I |
Discussion | Expecting Vignette Character to Conform | CIPI I | + + +
Discussion | Expecting Vignette Character to Silence | CIPI I | – + +
Determination | Socially Distanced from a Friend Due to Politics | CIPI I & CIPI II | – + + + + +
Determination | Distanced on Social Media Due to Politics | CIPI I & CIPI II | +
Note: Symbols in each cell denote the direction of the observed relationship between the individual characteristic in the column and the discussion
behavior in the row: + denotes a positive relationship; – denotes a negative relationship; and a blank cell means that we did not observe a statistically
significant relationship at the p < .05 level. All individual disposition data come from the CIPI I survey. Analyses using the combined CIPI I & CIPI II
data include individual dispositions measured in CIPI I and dependent variables measured in CIPI II. Demographic and political dispositions were also
measured in CIPI II and results are robust to using the CIPI I or CIPI II measures of these dispositions.

systematic barriers that women and racial and ethnic minorities face in
society more generally might affect how they participate in a political
discussion or deliberation, if at all. Just as people from underrepresented
communities are silenced representationally, so too might they be less
likely or able to raise their voices in political conversations.
If the effects of these structural inequalities are strong enough, we
expect that we might be able to detect gender or racial differences in
our measures of the 4D Framework, because people would have intern-
alized these disadvantages and altered their attitudes and expectations
accordingly. But we want to be clear about two things. First, our
expectations and analyses do not do full justice to the complexity
of gender and racial identities. Plenty of research has shown that
simply dichotomizing gender into men and women or race into White
and non-White crudely washes over important heterogeneity within
each group. Moreover, if we consider race and gender to be social
constructs, these characteristics are proxies for differences in lived
experiences, such as exposure to gendered stereotypes. Future research
focused directly on the effects of gender and race on political discussion
should think critically about how to best conceptualize and measure
these traits. Second, we want to be clear that we did not design our
studies in ways that would specifically activate the inequality that women or
racial minorities might experience in organic conversations. Unlike the
approach in some deliberative participation research (e.g. Karpowitz,
Mendelberg, and Shaker 2012; Mendelberg, Karpowitz, and Oliphant
2014; Mendelberg and Karpowitz 2016), we did not manipulate the
gender (or racial) composition in our discussion studies. We do not test
any interactions with gender or race from our experimental studies and
instead focus on the main effects of gender or race. Thus, the results we
present in the next section should be seen as a first step in an explor-
ation of the role of demographics in nuanced discussion behaviors, not
the final verdict.

Gender
On average, women participate in political discussions less frequently
than do men.2 While the gender gap is relatively small, Mendelberg and
Karpowitz (2016) forcefully argue that it is “worth the time to under-
stand, because it demonstrates how power—the lifeblood of politics—
animates the political system . . . gender affects how power is instantiated,
reinforced, or undermined when people exercise voice. It does so in ways

https://doi.org/10.1017/9781108912495.009 Published online by Cambridge University Press


208 Individual Dispositions and the 4D Framework

both overt and subtle, through means that may be simultaneously polit-
ical, psychological, and social” (p. 1–2).
Beyond frequency, how does gender affect behavior in political discus-
sion? Some have connected the frequency gap to preferences about dis-
agreement. Djupe, McClurg, and Sokhey (2018) find that disagreement
decreases women’s participation in political discussion but increases par-
ticipation from men. One possible mechanism for this could be conflict
avoidance. Women tend to be more conflict avoidant, meaning that they
might be less likely to engage in discussions, especially when disagreement
is present. In this same vein, Wolak (2020) finds that gender gaps in
political engagement are driven more by men’s preference for (or enjoy-
ment of ) conflict than by women’s preference to avoid it.
Within formal deliberative settings, previous research has shown that
the gender composition and decision rules employed strongly affect the
extent to which women speak up (or not) in these discussions. Specifically,
when there are many women in the group and when the decision rule is
unanimous, rather than majority rule, women are more likely to speak up
(e.g. Karpowitz, Mendelberg, and Shaker 2012; Mendelberg, Karpowitz,
and Oliphant 2014). The theoretical foundation behind the “strength
in numbers” component of these findings is rich (see Karpowitz,
Mendelberg, and Mattioli 2015 for a thoughtful review). Much of this
research points to socialized gender norms about both whether politics is
for women and how women and men should interact socially. Although
we do not explore gender dynamics, the lessons gleaned from this
previous work are useful. For example, women are more likely to be socially
sanctioned when they speak confidently, instead of following the norm of
being modest (Babcock and Laschever 2003). Karpowitz, Mendelberg,
and Mattioli (2015) write “[g]roup interactions between men and women
tend to crystallize women’s ex ante inferior authority, rendering them less
likely to exercise influence in a deliberative setting” (p. 151).
Within the context of the 4D Framework, we do not have strong
expectations for how and why gender should affect Detection (Stage 1)
based on past research, but these previous findings suggest that gender
could play an important role in the Decision stage (Stage 2). In particular,
we expect women to be less likely to engage in political discussions than
men, following from the broad findings we outlined, as well as patterns
uncovered from previous research on the rates of discussion between men
and women (e.g. Huckfeldt and Sprague 1995). Thinking about the
effects of situational factors such as disagreement and knowledge, we
should expect gender gaps to be particularly large in response to
disagreement. We expect women to be less likely to engage in disagreeable
conversations than men and to place lower value on initiating
disagreeable conversations, but this pattern may be driven by either women’s
distaste for conflict or men’s enjoyment of it. Given women’s general
tendency to “tend and befriend” and their generally more prosocial
behavior (Taylor et al. 2000), they might see increased value in agreeable
political discussions, relative to men. Karpowitz and Mendelberg (2014,
p. 240) suggest that in order to participate in deliberation, women tend to
require more positive and affirmative norms of conversation. Women
may anticipate that agreeable political conversations would be more
likely to create that socially supportive environment, although the authors
find that disagreement itself is not “deflating” to women, given a suffi-
ciently warm and friendly environment.
We test these expectations about the relationship between gender and
Stage 2 behaviors in a few ways, as outlined in Table 9.2. In each of
these models, the independent variable is gender and we control for race,
interest in politics, strength of partisanship, and the psychological
dispositions of conflict avoidance, social anxiety, and willingness to
self-censor.
In general, we find support for these expectations. We find that 28%
of women reported that they prefer to avoid political discussions,
whereas only 19% of men reported that they prefer to avoid discussing
politics. Similarly, 18% of women reported that they enjoy political
discussions, compared to 36% of men. The results from our ordered
logit model suggest a statistically significant relationship between gender
and discussion avoidance, even in the model with controls. We also find
that women were significantly more likely to think that the character in
the CIPI I Vignette Experiment would deflect the conversation by
remaining silent, compared to men. Thinking about the value placed
on initiating conversations, we find that women placed higher value on
initiating agreeable political discussions than did men, but we do not
find evidence that they placed lower value on initiating disagreeable
discussions than did men.
Beyond the decision to engage in discussion in the first place, there
could be – and are – dramatic differences in how men and women
participate in these discussions, which is directly tied to Stage 3 of our
framework. At least in formal deliberative settings, women are generally
more likely to seek consensus and be less assertive (e.g. Mendelberg and
Karpowitz 2016). In part because of societal power structures, women
generally have lower status in political discussions, compared to men,

Table 9.2. Empirical approach for evaluating individual dispositions and Stage 2 behavior

Discussion Avoidance (Model: Ordered logit)
“Do you usually try to avoid political discussions, enjoy them, or fall somewhere in between?” (Enjoy, Somewhere in between, Avoid)

Conversation Deflection (Model: Logit)
“How do you think John/Sarah would respond to the person’s question?” (Say nothing on the subject, even though s/he disagrees with them)

Value of Conversation Initiation (Model: OLS)
“Some people say that the costs of various political activities are high, either in terms of time, energy, interest, or the negative outcomes of the activity. Other people say that the benefits of various political activities are high, either in terms of their effect on government or policy, personal gratification, fulfilling a civic duty, or other positive outcomes of the activity. How would you evaluate the costs and benefits of each political activity below?” Items: Initiating political discussions with others who usually agree with me; Initiating political discussions with others who usually disagree with me. (0 means high costs; 50 means costs = benefits; 100 means high benefits)

and behave accordingly: by speaking up less, being interrupted more,
and abandoning their own views to go along with others. Directly
exploring these power dynamics, Karpowitz and Mendelberg (2014)
introduce a series of pathbreaking deliberative experiments in which
they randomize deliberation features, such as the relative status a
woman participant has via institutional decision rules (e.g. majority
rule, unanimous, etc.), and the gender composition of the group. They
find that women are less likely to express their privately held preferences
(what we would consider to be their true opinions), especially when they
have lower status in the group, but men seem to be more likely to
express their true preferences, regardless of their assigned status in the
group. There are several possible mechanisms to explain these patterns,
which Mendelberg and Karpowitz (2016) thoughtfully review. Of
particular note here is that these results could be driven by women’s
increased conflict aversion or tendency to prioritize others’ needs ahead
of their own. Both tendencies are largely driven by gendered socializa-
tion patterns that begin early in life.
Building on these findings from the deliberative democracy literature,
we can develop expectations about how gender might affect the
Discussion stage (Stage 3). Generally speaking, we expect women to be
more sensitive to the negative consequences of political discussions, com-
pared to men, and to structure their behavior accordingly. We expect
women to place lower value on engaging in casual political conversations
with strangers and disagreeable political discussions, but to place higher
value on engaging in agreeable political discussions, compared to men.
We expect that women will have a greater proportion of concerns about
conversations, compared to men. We expect that women, relative to men,
will think the character in our CIPI I Vignette Experiment is less likely to
express her true opinion to the group, more likely to conform, more likely
to censor, and less likely to entrench. We detail our approach for testing
these expectations in Table 9.3. As with our Stage 2 tests, all models
control for race, interest in politics, strength of partisanship, and psycho-
logical dispositions.
Although we found strong relationships between gender and several
measures of the decision to engage in a political discussion, we find
absolutely no relationships between gender and behavior within political
discussions, using survey measures and vignette experiments. As we pre-
viewed in Chapter 7, we did find that men used more words than women
when revealing their partisan identities, but did not observe other vari-
ation in the way men and women spoke during the discussions in our lab
experiments. This is surprising given the compelling evidence on variation
between men and women in formalized deliberative settings. Our null
results could be driven by the fact that the differences between men and
women in discussion behavior that we measure in this book are too small
in magnitude for us to detect. Given that some of our analyses rely on the
combined CIPI I and CIPI II Surveys, the sample size was reduced dra-
matically, meaning that we could be underpowered to detect some of
these differences. The absence of evidence here should not be interpreted
as evidence for the absence of any effect of gender, and we hope future
researchers engage with this question more directly.
Our expectations for how gender is associated with Stage 4,
Determination, are weaker. Because women tend to prioritize social

Table 9.3. Empirical approach for evaluating individual dispositions and Stage 3 behavior

Value of Conversation (Model: OLS)
“Some people say that the costs of various political activities are high, either in terms of time, energy, interest, or the negative outcomes of the activity. Other people say that the benefits of various political activities are high, either in terms of their effect on government or policy, personal gratification, fulfilling a civic duty, or other positive outcomes of the activity. How would you evaluate the costs and benefits of each political activity below?” Items: Discussing politics with others who usually agree with me; Discussing politics with others who usually disagree with me; Having a casual conversation about politics with a stranger while waiting in line, riding the bus, or in some other public place. (0 means high costs; 50 means costs = benefits; 100 means high benefits)

Concerns about the Conversation (Model: OLS)
“Which of the following seem like plausible considerations for John/Sarah?” [Asked after reporting how John/Sarah would respond in the CIPI I Vignette Experiment] Calculate proportion of concerns relative to total considerations selected.

Likelihood of Expressing True Opinion (Model: OLS)
“What is the likelihood that John/Sarah expresses his/her true opinion to the group?” [CIPI I Vignette Experiment] 6-point scale from “very unlikely” to “very likely”

Expected Response (Model: Multinomial logit)
“How do you think John/Sarah would respond to the person’s question?” [CIPI I Vignette Experiment] (Entrench, True Opinion, Censor, Conform, Silence)

relationships and prosocial behavior, we expect women to be less likely to
distance themselves from friends because of politics than men. Instead of
cutting ties altogether, we expect women to “grin and bear it” when faced
with political views or engagement by their friends that they do not like.


TABLE 9.4. Empirical approach for evaluating individual dispositions and Stage 4 behavior

Construct: Social and Political Estrangement
  Dependent Variable: “Have you ever distanced yourself from a friend because of his or her political views?” (Yes, No)
  Model Type: Logit

They might try to steer conversations away from politics, but they should
be less likely to simply sever the tie than men.
As detailed in Table 9.4, we test these expectations using a logit model
that controls for race, interest in politics, strength of partisanship, and
psychological dispositions. We found that women were significantly less
likely to report that they had distanced themselves from a friend because
of politics, compared to men, consistent with our expectation.

Ethnorace
While there is ample previous research on gender and political discussion, there is far less research on which to draw to develop expectations
about how race and ethnicity affect political discussion. As researchers
have recently noted, the majority of political discussion research has
focused on samples of White respondents or has not over-sampled minor-
ities in a way that allows for meaningful inferences (Leighley and
Matsubayashi 2009; Carlson, Abrajano, and García Bedolla 2019,
2020). Our book is also vulnerable to this critique. What research has
been conducted demonstrates that there are dramatic differences in the
partisan (Carlson, Abrajano, and García Bedolla 2020; Eveland and
Appiah 2020) and ethnoracial (Leighley and Matsubayashi 2009;
Eveland and Appiah 2020) composition of discussion networks between
Whites and minorities, as well as in network size. For example, Eveland
and Appiah (2020) find that Black Americans have more racial diversity
in their discussion networks, while White Americans have more political
diversity in their discussion networks. In general, Whites tend to discuss
politics more frequently than individuals from ethnoracial minority groups, but the reasons behind this difference remain unclear. As
Carlson, Abrajano, and García Bedolla (2020) argue, it could be that
Whites are socialized to be positively inclined toward political discussion
and political engagement more so than ethnoracial minority groups. It


could also be because the supply of available discussants differs, leaving


minorities with fewer social contacts interested in politics with whom to
have conversations (Leighley and Matsubayashi 2009; Sokhey and Djupe
2014). As a result of these differences in network structure and discussion
frequency, there are differences between Whites and minorities on the
effects of discussion on political knowledge, efficacy, engagement, and
public opinion.
We think that the complex relationship between race, ethnicity, and
political discussion merits additional theoretical development and empir-
ical analysis in the future. While we did not set out to write a book about
how political discussion is uniquely experienced between individuals of
different ethnoracial groups, it is important to engage with these ideas to
the best of our ability given the data that we have. Like many researchers
before us, our data are limited in the number of minority respondents in
our samples, making it difficult to be confident in some of the patterns
(null or otherwise) in our results. Moreover, given the unique characteris-
tics of political discussion networks across ethnoracial groups and the
diverse experiences with politics that each group has, our studies are not
ideally suited for testing theories about political discussion experiences
within and between all ethnoracial groups. It is possible that many of our
results will only explain behavior among White respondents. Indeed, as
we describe shortly, there are some important nuances that we do uncover
between how White and minority Americans navigate the 4D
Framework. We want to acknowledge these shortcomings up front and
strongly encourage future researchers to consider how the 4D Framework
would (or would not) apply to individuals from different ethnoracial
groups. With that said, we move forward in this section to examine
how ethnorace was associated with behavior throughout the 4D
Framework, as studied in this book.
We expect race and ethnicity to play a role at the Decision stage, just as
we did with gender. We expect Whites to be more likely to engage in a
political discussion than ethnoracial minorities. We test this expectation
using the same approach we did with gender (see Table 9.2), but this time
we evaluate the coefficient on race, which we have dichotomized as White versus member of an ethnoracial minority group.
As previewed in Table 9.1, we find little evidence that race is associated
with the decision to engage in a political discussion. In contrast to our
expectations, results suggest that there is no association between race and
political discussion avoidance. Similarly, we do not find evidence of a
relationship between race and the likelihood a vignette character would


deflect, nor with the value placed on initiating agreeable or disagreeable


political discussions.
Given the dearth of existing research on race and political discussion
itself, we are hesitant to develop further expectations for the effects of
race at other stages of the cycle. We could imagine a scenario in which
race affects Stage 3 (Discussion) behavior similarly to the way gender
affects this behavior, based on relative status and power structures
within society, but as we noted, our studies were not designed to
manipulate the racial balance within discussions. Our exploratory find-
ings show that individuals from minority groups were less likely to think
that a character in the CIPI I Vignette Experiment would express his or
her true opinion to the group, compared to Whites. However, when
asked to report how they thought the character in the vignette would
respond to the discussion scenario, we do not find evidence that race was
associated with any of the response options. Nor do we find evidence
that race was associated with the value placed on having agreeable or
disagreeable political discussions, or having casual political conversa-
tions with a stranger. Race was also not associated with the proportion
of concerns respondents thought the vignette character would have
when faced with a tricky discussion scenario.

 
Political Dispositions

Perhaps the most foundational individual dispositions to consider while
trying to explain variation in political discussion behavior are political
characteristics. Specifically, we focus on interest in politics and strength of
partisanship, as we expect strong Democrats and strong Republicans to
have similar political discussion preferences throughout the cycle. We
outline our expectations in two ways. First, we assess when we expect
these traits to have similar effects, even if the mechanism of the relation-
ship might differ. Given that strength of partisanship and interest in
politics are correlated in our CIPI I Survey data (r = .29), we should
anticipate similar effects in many instances. Second, we focus on the
unique effect each characteristic should have, independent of its relation-
ship to the other characteristic.
When might partisanship strength and interest in politics have similar
effects on discussion behavior? We expect large effects at the Detection
stage (Stage 1). Both traits incline people to pay more attention to politics.
Paying more attention to politics heightens people’s ability to see more
differences between the political parties and to recognize patterns of


TABLE 9.5. Empirical approach for evaluating individual dispositions and Stage 1 behavior

Construct: Detection
  Measurement: “When you discuss politics with someone new, do you typically try to guess his or her political views before starting the discussion?” (Yes, No)
  Model: Logit

Construct: Method of Detection
  Measurement: “Imagine that you were trying to guess someone’s political views, but you couldn’t ask them directly. How would you go about guessing their political views?” Free response, coded as: would make a guess (gave informative response); directly ask their political views; subtle cues.
  Model: Multinomial Logit

preferences and demographics that are associated with certain political


parties. People might be better able to pick up on subtle cues or read
between the lines on heavily censored statements to figure out where
others stand. They might even find this fun!3
These refined detection skills suggest that those who are more inter-
ested in politics and those who are more strongly attached to their party
will (1) report that they try to guess others’ views in advance of a discus-
sion; (2) actually make a guess when asked to report how they would go
about trying to guess someone’s views; and (3) directly ask someone
about their political views (see Table 9.5). These types of people will be
eager to identify and label their potential discussants in an effort to
prepare for the conversation that could unfold. In this case, their behavior
is likely less about trying to detect views in an effort to avoid the discus-
sion, but more about trying to predict how the conversation will go.
Moreover, both traits should leave people better equipped to detect others’ views accurately, even if they use that information differently.
While strong partisans and those interested in politics might enjoy playing detective, they might also simply be more comfortable asking individuals directly what they think about politics.
In contrast, those who are less interested in politics or attached to their
parties might simply try to avoid all forms of political discussion, leaving
the task of detection irrelevant.
We find support for two of these expectations in Stage 1. We find that
those who are more interested in politics and strong partisans were more


[Figure: predicted probability of guessing others’ views (y-axis, 0.0 to 0.7), by interest in politics (left panel: not at all, not very, somewhat, and very interested) and strength of partisanship (right panel: pure independents, independent leaners, weak partisans, and strong partisans).]
FIGURE 9.1. Predicted probability of guessing views, by political dispositions


Note: Predicted probability of guessing others’ views by interest in politics (left
panel) and strength of partisanship (right panel). Results based on a logit model
controlling for gender, race, strength of partisanship, interest in politics, social
anxiety, conflict avoidance, and willingness to self-censor. Predicted probabilities
are calculated holding race as White, gender as male, and all other characteristics
at their means. Vertical lines represent 95 percent confidence intervals. Results use
the combined CIPI I and CIPI II data, N = 469, reflecting listwise deletion for
missing values in the model.

likely to report that they would try to guess others’ views in advance of a
conversation. Specifically, the predicted probability of guessing someone’s
views in advance for the least interested in politics was .14, but this rose
to .43 for those who were most interested in politics. Likewise, predicted
probabilities based on our logit model suggest that the probability of
guessing someone’s views for pure Independents was about .24, but
.39 among strong partisans. Note that this did not mean that knowing someone else’s views in advance was a requirement for participation, but rather that these respondents simply engaged in this detection behavior. Second,
assessing the free response data, we find that the highly interested were
more likely to provide an answer about how they would guess others’ views
and directly ask someone their views, but that strong partisans were not.
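The predicted probabilities reported here follow the standard recipe: fit a logit model, then vary one predictor across its range while holding the other covariates at fixed values (here, their means). The sketch below illustrates that recipe on simulated data; the variable names, codings, and coefficient values are illustrative assumptions, not the CIPI data or our estimates.

```python
# Hypothetical sketch -- simulated data, not the CIPI surveys. Shows how
# predicted probabilities are generated from a logit model by varying one
# predictor while holding a control at its sample mean.
import numpy as np

def fit_logit(X, y, iters=25):
    """Fit a logistic regression by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)                      # observation weights
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
n = 1000
interest = rng.integers(0, 4, n)   # 0 = "not at all" ... 3 = "very interested"
anxiety = rng.normal(size=n)       # stand-in control variable
true_logit = -1.8 + 0.6 * interest + 0.2 * anxiety
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = np.column_stack([np.ones(n), interest, anxiety])
beta = fit_logit(X, y)

# predicted probability of guessing views at each interest level,
# with the control held at its sample mean
for level in range(4):
    x_new = np.array([1.0, level, anxiety.mean()])
    print(level, round(float(1 / (1 + np.exp(-x_new @ beta))), 2))
```

The same recipe extends directly to more controls: each is held at a chosen reference value while the disposition of interest is moved from its lowest to its highest level.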


Our expectations for the two traits partially diverge at Stage 2: people high in each trait should prefer different kinds of conversations. On average, however, we expect both strong partisans and those who are more interested in politics to be less likely to report that they would avoid a political discussion, and less likely to expect a character in the CIPI I Vignette Experiment to deflect by remaining silent. We have similar expectations for their opinion expression
in Stage 3. Both traits can make politics a “hobby” from which people derive enjoyment, and both likely make people more confident in their true opinions and more willing to express them.
We find mixed support for these expectations in Stage 2 and Stage 3. We
do not find a relationship between interest in politics or strength of parti-
sanship and a preference to avoid political discussion, a point we expand
on shortly when we assess the traits individually. As shown in Figure 9.2,
we do not observe many differences in the anticipated expression response
in the vignette experiment based on interest in politics or strength of
partisanship. Those who were more interested in politics were less likely
to expect the character to be silent, but we do not observe any other
statistically significant differences between the most and least interested,
nor by strength of partisanship. However, the more interested in politics
someone is and the stronger their partisan attachment is, the more likely
they are to expect the vignette character from the CIPI I Survey to express
his or her true opinion, as measured using the six-point Likert scale.
Finally, at Stage 4, our overarching expectation is that those who are
more interested in politics and more partisan will be more likely to engage
in social distancing and affective polarization. These individuals likely
care more deeply and will be more sensitive to the presence of politics in
their daily lives. Their political views are important to them and they may
be more likely to distance themselves from their friends because of polit-
ical views, compared to those who are less interested in politics, whether
the salient difference is due to disagreement or a discrepancy in their level
of political engagement or interest (Klar and Krupnikov 2016; Krupnikov
and Ryan 2022).
In general, we find support for these expectations. Among respondents
who completed both the CIPI I and CIPI II Surveys, a logit model reveals
that those who are more interested in politics are more likely to report
that they have distanced themselves socially from a friend because of
politics. Specifically, the model suggests that the predicted probability of
distancing from a friend because of politics for the least interested is .10,
but this nearly triples (.29) for the most interested.


[Figure: predicted probability (y-axis, 0.0 to 1.0) of each expression response (entrench, true opinion, censor, conform, silence), comparing the least with the most interested in politics (left panel) and pure independents with the most partisan respondents (right panel).]
FIGURE 9.2. Predicted probability of expression responses, by political dispositions
Note: Predicted probability of expecting each behavioral response in the CIPI
I Vignette Experiment by interest in politics (left panel) and strength of
partisanship (right panel). Results based on a multinomial logit regression
model, controlling for race, gender, interest in politics, strength of partisanship,
social anxiety, conflict avoidance, and willingness to self-censor. Predicted
probabilities are calculated holding race as White, gender as male, and all other
characteristics at their means. Vertical lines represent 95 percent confidence
intervals. Results based on CIPI I Survey data, N = 2,747, reflecting listwise
deletion for missing values.

We find similar results for the effects of partisanship strength. We find


that stronger partisans were more likely to report that they distanced
themselves from a friend because of politics. Specifically, the model sug-
gests that the predicted probability of distancing from a friend for polit-
ical reasons was .13 for pure Independents, but .30 for strong partisans.

Unique Contributions of Interest in Politics


What is unique about the effects of political interest, apart from a strong
attachment to one’s political party? Where might we expect political


interest to affect 4D Framework behaviors in ways that partisanship


strength would not? Generally speaking, interest in politics should make
people less sensitive to the presence of disagreement and thus more
interested in a wider set of discussion experiences.
We expect participants who are more interested in politics to see greater
value (higher benefits, lower costs) in a wide range of discussions: casual
conversations about politics with strangers, as well as agreeable and dis-
agreeable conversations, compared to those who are less interested in polit-
ics. Those who are interested in politics should find value in political
discussion, regardless of whether they know the person or anticipate dis-
agreement. In other words, those who enjoy politics are likely to take
advantage of more opportunities to engage in their hobby. We find positive,
statistically significant associations between interest in politics and the value
placed on having a casual political conversation with a stranger and having
an agreeable political discussion, but do not find evidence of a relationship
between interest in politics and the value placed on disagreeable discussion.
Relatedly, thinking back to the psychological considerations we ana-
lyzed in Chapter 7, we expect that those who are more interested in
politics will think the character in the vignette would have fewer concerns.
However, we find no evidence of a relationship between interest in politics
and the proportion of concerns they expected a character to have in
response to a discussion scenario.

Unique Contributions of Strength of Partisanship


What is unique about the effects of partisanship strength, apart from
interest in politics? Where might we expect partisanship strength to affect
4D Framework behaviors in ways that political interest would not? As
opposed to the effects of political interest, we expect strong partisans to
be more sensitive to disagreement and thus more wary of conversations in
which it might emerge.
In Stage 3, we expect that strong partisans might be interested in talking about politics, but only with agreeable others and not necessarily with strangers. Strong partisans might be especially sensitive to disagreement – not because they are uncomfortable and want to avoid it, but because they hold their partisan identities so strongly that they dislike associating with others who disagree. This might make them more likely
to see the value of agreeable political discussion, but not necessarily
disagreeable discussion. However, we found no relationship between


strength of partisanship and the value of a casual conversation with a


stranger, disagreeable discussion, or agreeable discussion. This helps
distinguish the effects of strength of partisanship from the effects of
interest in politics. While those more interested in politics see greater
value in having a casual conversation with a stranger about politics,
strong partisans value this behavior the same as weak partisans. We also
find that strength of partisanship is associated with thinking that the
character in our vignette experiment on the CIPI I Survey would have
fewer concerns about the discussion, suggesting that stronger partisans
are more likely to weigh the opportunities of engaging in a discussion,
rather than the negative consequences.

 
Psychological Dispositions

To this point in the chapter, we have largely focused on the “usual
suspects” for individual characteristics that are associated with political
discussion behavior. We now turn to psychological characteristics. Our
analysis of the three traits we explore shortly is meant to illustrate a
broader point: There is much to be gained from developing more theor-
etically driven inquiries that link psychological variation between people
to the kinds of preferences and behaviors they exhibit at each stage of the
4D Framework. We selected two traits that have been well-studied in the
political discussion literature (conflict avoidance and willingness to self-
censor) and one trait that we think should make people especially sensi-
tive to the social concerns inherent in political discussion (social inter-
action anxiety). Each of these personality characteristics becomes relevant
at different stages of the cycle.
In the sections that follow, we describe our expectations for the ways
in which each psychological disposition could be related to particular
stages of the 4D Framework, as well as any empirical evidence to
support these expectations. We take each disposition in turn for both
theoretical and methodological reasons. We find it theoretically import-
ant to analyze each of these factors independently from one another, but
controlling for the effects of the “usual suspects” explored previously.
Methodologically, we also note that these psychological dispositions are
strongly correlated with one another, much more so than the demo-
graphic and political dispositions we examine. As a consequence, we
hope to avoid issues of multicollinearity by analyzing separate models
for each psychological disposition.


Social Anxiety
As we describe in more detail in Chapter 3, social anxiety captures the
extent to which individuals are uncomfortable engaging with other
people. Those who are more socially anxious are more likely to experi-
ence this discomfort during interactions in which their behavior is
dependent upon the behavior of others, such as political discussion.
Because our measure of social anxiety (SIAS scale) addresses the inter-
action itself, we are relatively agnostic about how the trait might affect the
Detection stage. On the one hand, people who are more socially anxious
might be especially attuned to political cues as they try to prepare to tailor
their behavior in the forthcoming interaction. On the other hand, people
who are more socially anxious might not care to detect others’ views
because they simply prefer to avoid all social interactions, regardless of
the presence of disagreement or the political nature of the conversation
more generally. The finding reported in Figure 9.4 suggests the latter.
We do, however, have strong expectations for how social anxiety
structures Stage 2 (Decision) and Stage 3 (Discussion) behaviors. We
expect that those who are more socially anxious will be more likely to
avoid political discussions. We test this expectation using the measures
described in Table 9.2. We found suggestive evidence that those who are
more socially anxious were more likely to report that they avoid political
discussions. Substantively, we found that moving from the lowest to the
highest level of social anxiety increased the odds of avoiding political
discussions (“avoiding” or “somewhere in between” relative to
“enjoying”) by about 24 percent. However, we do not find evidence that
social anxiety was associated with anticipated deflection in the vignette
experiment or variation in the value placed on initiating agreeable and
disagreeable political discussions.
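The “percent change in odds” figures we report come from exponentiating a logit coefficient over the relevant range of the predictor: for a change of Δx, the percent change in odds is (e^(βΔx) − 1) × 100. The arithmetic can be sketched as follows; the coefficient values are chosen only to reproduce figures of the same magnitude as those quoted in the text, not taken from our models.

```python
# Hedged arithmetic sketch: converting a logit coefficient into a
# percent change in odds. Coefficient values here are illustrative.
import math

def pct_change_in_odds(beta, delta_x=1.0):
    """Percent change in odds for a delta_x-unit change in the predictor."""
    return (math.exp(beta * delta_x) - 1) * 100

# a coefficient of about 0.215 over a 0-1 rescaled predictor implies
# roughly a 24 percent increase in odds
print(round(pct_change_in_odds(0.215)))

# an odds ratio of 3.37 (coefficient ln 3.37) implies roughly a
# 237 percent increase in odds
print(round(pct_change_in_odds(math.log(3.37))))
```

Because the transformation is nonlinear, the same coefficient implies different percentage-point changes in probability depending on the baseline, which is why we report predicted probabilities alongside odds.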
Just as we expected social anxiety to be associated with the Decision
stage (Stage 2), we also expected social anxiety to be relevant in Stage 3,
the discussion itself. This is a bit trickier to evaluate given that we expect
people who are more socially anxious to be more likely to avoid these
interactions in the first place, but we still have expectations for how social
anxiety might affect the ways in which individuals participate in a con-
versation, should they find themselves in one. We expect that those who
are more socially anxious will place lower value on political conversation
itself, whether casual conversations with strangers or agreeable or dis-
agreeable conversations. Their anticipated discomfort in the social inter-
action more generally should lead them to find these experiences costly. In


the vignette experiment on the CIPI I Survey, we develop several expect-


ations for the relationship between social anxiety and discussion behav-
ior. First, we expect those who are more socially anxious to expect the
character to have more concerns structuring his or her behavior, rather
than opportunities. Increased social anxiety should be linked to greater
discomfort in a social interaction, which should make concerns more
salient than opportunities. Second, we expect that those who are more
socially anxious will be more likely to anticipate that the vignette charac-
ter will conform, censor, or silence, and less likely to express his or her
true opinion or entrench.
We find mixed evidence for our Stage 3 expectations. Similar to our
results from Stage 2, we do not find evidence of an association between
social anxiety and the value placed on engaging in discussions. We do,
however, find that those who are more socially anxious anticipate that the
vignette character will consider a greater proportion of concerns. Finally,
for the actual expression behavior in the discussion based on our vignette
experiment, we first estimated an ordinary least squares model in which
the dependent variable was the perceived likelihood that the character
would express his or her true political opinion. We found no evidence of a
relationship between social anxiety and likelihood of expressing one’s
true political opinion. However, when we separately analyze the five
behavioral responses in a multinomial logit model, we find that those
who are more socially anxious were more likely to expect the character to
conform and less likely to expect the character to express his or her true
opinion. As shown in Figure 9.3, we calculated predicted probabilities of
giving each expression response and found that the predicted probability
of expecting the character to express his or her true opinion decreased
about 29.7 percentage points going from the lowest to the highest levels of
social anxiety. In contrast, the predicted probability of expecting the
character to conform increased by about 26.8 percentage points going
from the lowest to highest levels of social anxiety. In Chapter 7, we also
showed that those who were more socially anxious were more likely to
qualify their language when revealing their partisan identities when faced
with disagreement. More socially anxious participants also used fewer
words when discussing issues, especially in the presence of disagreement,
were less likely to initiate conversations, and were more likely to use
negative words than less socially anxious participants.
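The percentage-point differences above are computed from a multinomial logit in the same spirit as the earlier logit results: estimate the probability of each response category at the lowest and highest values of the disposition, holding other covariates fixed. The sketch below uses simulated data with a three-category outcome for brevity; the coding, coefficients, and the simple gradient-ascent fitting routine are all illustrative assumptions, not our actual specification.

```python
# Hypothetical sketch -- not the book's code or data. Illustrates how
# predicted category probabilities from a multinomial logit shift as one
# disposition moves from its lowest to its highest value.
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)   # stabilize the exponentials
    e = np.exp(Z)
    return e / e.sum(axis=1, keepdims=True)

def fit_mnlogit(X, y, k, lr=0.5, iters=5000):
    """Fit a multinomial logit by gradient ascent; category 0 is the baseline."""
    n, d = X.shape
    B = np.zeros((d, k))                   # baseline column stays at zero
    Y = np.eye(k)[y]                       # one-hot outcome matrix
    for _ in range(iters):
        P = softmax(X @ B)
        grad = X.T @ (Y - P) / n
        B[:, 1:] += lr * grad[:, 1:]
    return B

rng = np.random.default_rng(1)
n, k = 2000, 3                             # three response categories for brevity
anxiety = rng.uniform(0, 1, n)             # rescaled 0-1 disposition (assumed)
X = np.column_stack([np.ones(n), anxiety])
true_B = np.array([[0.0, 0.5, -0.5],       # intercepts
                   [0.0, -1.5, 1.5]])      # anxiety pushes mass to category 2
y = np.array([rng.choice(k, p=p) for p in softmax(X @ true_B)])

B = fit_mnlogit(X, y, k)
for a in (0.0, 1.0):                       # lowest vs. highest anxiety
    probs = softmax(np.array([[1.0, a]]) @ B)[0]
    print(a, np.round(probs, 2))
```

Subtracting the two probability vectors gives the percentage-point changes of the kind reported in the text (e.g., a decline in one category offset by a rise in another).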
Altogether it seems that social anxiety plays some role in structuring
the decision to engage in a discussion and how one behaves in a discus-
sion, should it occur, but the results are not always consistent. We do not


FIGURE 9.3. Predicted probability of expression response, by psychological dispositions


Note: Predicted probability of anticipated behavioral response in the CIPI I Vignette Experiment by psychological dispositions. Results
based on multinomial logit regression models where the dependent variable is the anticipated response (entrench, true opinion, censor,
conform, silence). The independent variable in each model is the psychological disposition (social anxiety, conflict avoidance, or willingness
to self-censor). Each model controls for race, gender, interest in politics, and strength of partisanship. Predicted probabilities are generated
varying the psychological disposition from its lowest to highest values, holding gender as male, race as White, and interest in politics and
strength of partisanship at their means. Data come from the CIPI I Survey, N = 2,836 for the SIAS model, N = 2,961 for the Conflict Avoidance model, and N = 2,928 for the WTSC model, after listwise deletion for missing values. Vertical lines reflect 95 percent confidence intervals.

have strong expectations for how social anxiety should be associated with
Determination, but we find that it is positively associated with socially
distancing from friends because of politics. This is consistent with a
finding we published in a separate article (Carlson, McClean, and Settle
2019), where we used data from the Psychophysiological Anticipation
Study to show that those who are more socially anxious and have
stronger psychophysiological reactions to anticipating a political discus-
sion have more politically homogeneous discussion networks. We leave to
future research the challenge of unpacking why social anxiety is related to
the aftermath of political discussion, although we can speculate that it
might be due to a tendency to spend more time analyzing social inter-
actions before and after they occur.

Conflict Avoidance
Conflict avoidance captures the extent to which individuals prefer to
evade disagreement. Throughout this book, we have shown that disagree-
ment is a central feature affecting behavior in the 4D Framework. Based
on the findings of other researchers who have studied the role of conflict
avoidance in political discussion, we expect the trait to play a role in
structuring behavior at each stage of the political discussion cycle.
At the Detection stage (Stage 1), we expect that individuals who are
more conflict avoidant will have more finely tuned detection systems and
will engage in detection more frequently than those who are less conflict
avoidant. Because individuals who are more conflict avoidant prefer to
dodge disagreement, they are likely to be on the lookout for cues signaling
disagreement in an effort to avoid those interactions. Specifically, we
expect those who are more conflict avoidant to be more likely to report
that they try to guess others’ political views in advance of a conversation
and to be more likely to offer a guess in our free response question, but
less likely to report that they would ask someone their views directly.
Asking someone what they think about politics could bring out explicit
disagreement, so we expect those who are conflict avoidant to be more
likely to turn to subtle cues instead.
We find mixed evidence for these expectations. We find no evidence of
a relationship between conflict avoidance and guessing political views in
advance, and when we turn to the free response data, we find that the
more conflict avoidant someone was, the less likely they were to answer
the question with a description of how they would go about guessing
someone else’s views. This stands in contrast to our expectation.

https://doi.org/10.1017/9781108912495.009 Published online by Cambridge University Press


226 Individual Dispositions and the 4D Framework

However, consistent with our expectations, we find that the more conflict
avoidant a respondent was, the less likely they were to report that they
would directly ask someone about their political views, instead turning to
subtle cues, as visualized in Figure 9.4.
While we found mixed support for our expectations regarding the
relationship between conflict avoidance and Detection, we find strong,
consistent support for our expectations for the role conflict avoidance
plays in the Decision stage. Put simply, we expected that individuals who
were more conflict avoidant would be more likely to avoid political
discussions. We captured this using the four tests described in Table 9.2:
self-reported discussion avoidance, anticipating the CIPI I vignette charac-
ter to remain silent, and placing lower value on initiating both agreeable
and disagreeable discussion. Using the same modeling strategies described
previously, we found that the more conflict avoidant someone was, the
more likely he or she was to avoid political discussions. Specifically, we
found that the odds of avoiding political discussions increase by about
237 percent going from the lowest to highest levels of conflict avoidance.
This effect is much stronger than the effect we found for social anxiety,
which was about a 29 percent increase in discussion avoidance. The results
on perceived likelihood of deflection from the CIPI I Vignette Experiment
are also consistent with our expectations. Specifically, we find that the
predicted probability of remaining silent (deflecting the conversation) rises
about 16 percentage points going from the lowest to highest levels of
conflict avoidance. We find suggestive evidence that the more conflict
avoidant a respondent was, the less they valued initiating even agreeable
political discussions but, unsurprisingly, we find strong evidence that the
more conflict avoidant a respondent was, the less they valued initiating
disagreeable political discussions.
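The percent-change-in-odds figures reported throughout this chapter follow directly from logit coefficients. As a rough sketch of the arithmetic (the coefficient below is hypothetical, chosen only to reproduce a number of the same magnitude, not taken from the book's fitted models):

```python
import math

def pct_change_in_odds(beta, delta_x):
    """Percent change in the odds of an outcome when a logit (or ordered
    logit) predictor increases by delta_x, given coefficient beta."""
    return (math.exp(beta * delta_x) - 1) * 100

# Hypothetical coefficient: beta = 1.215 on a 0-1 conflict avoidance scale
# implies roughly a 237 percent increase in the odds of avoidance.
print(round(pct_change_in_odds(1.215, 1.0)))  # prints 237
```

The same transformation, exp(beta * delta_x) - 1, underlies the 29 percent and 217 percent figures reported for the other dispositions.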
At the Discussion stage, we expect conflict avoidance to lead to a
lower likelihood of expressing true opinions. Individuals who are more
conflict avoidant, but are forced into a conversation, will likely engage in
strategies to temper disagreement. Following the findings from Stage 2,
we expect those who are more conflict avoidant to place lower value on
engaging in casual conversations about politics with strangers and dis-
agreeable discussions. A priori, we would not expect conflict avoidance
to be associated with the value of agreeable discussion, but our results
from Stage 2 indicate that we might expect it to be associated with lower
perceived benefit of even agreeable discussion. Thinking about the psy-
chological considerations individuals make when contemplating how to
respond in a disagreeable political discussion, we expect conflict


Figure 9.4. Predicted probability of relying on cues, by psychological dispositions
Note: Predicted probability of using each type of cue to detect political views by psychological dispositions. Results based on multinomial logit regression models where the dependent variable is the type of cue (non-guesser, directly political, subtle, direct and subtle). The independent variable in each model is the psychological disposition (social anxiety, conflict avoidance, or willingness to self-censor). Each model controls for race, gender, interest in politics, and strength of partisanship. Predicted probabilities are generated varying the psychological disposition from its lowest to highest values, holding gender as male, race as White, and interest in politics and strength of partisanship at their means. Data come from the combined CIPI I and CIPI II Surveys, N = 447 for SIAS model; N = 471 for Conflict Avoidance model; N = 464 for WTSC model, reflecting listwise deletion for missing values. Vertical lines reflect 95 percent confidence intervals.
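The predicted-probability procedure described in this note can be sketched generically: in a multinomial logit, each non-base category receives a linear score, and probabilities come from exponentiating and normalizing those scores. The coefficients and covariate coding below are purely illustrative placeholders, not the fitted values behind Figure 9.4:

```python
import math

def mnl_probs(x, coefs):
    """Multinomial logit predicted probabilities. `coefs` maps each
    non-base category to its coefficient vector; the base category
    ('non-guesser') has its score fixed at zero."""
    scores = {"non-guesser": 0.0}
    for category, beta in coefs.items():
        scores[category] = sum(b * xi for b, xi in zip(beta, x))
    denom = sum(math.exp(s) for s in scores.values())
    return {c: math.exp(s) / denom for c, s in scores.items()}

# Illustrative coefficients on [disposition, female, non-White, interest,
# strength of partisanship] -- hypothetical values for demonstration only.
coefs = {
    "directly political": [-1.0, 0.0, 0.0, 0.5, 0.2],
    "subtle": [1.2, 0.0, 0.0, 0.3, 0.1],
    "direct and subtle": [0.2, 0.0, 0.0, 0.4, 0.1],
}
# Male, White respondent; interest and partisanship held at (centered) means.
low = mnl_probs([0.0, 0, 0, 0.0, 0.0], coefs)   # lowest disposition value
high = mnl_probs([1.0, 0, 0, 0.0, 0.0], coefs)  # highest disposition value
```

Under these placeholder coefficients, moving the disposition from its lowest to highest value raises the probability of relying on subtle cues and lowers the probability of asking directly, mirroring the pattern described for conflict avoidance.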

avoidance to be positively correlated with the proportion of concerns
selected in our CIPI I Vignette Experiment. Those who are more conflict
avoidant, when in the presence of disagreement, should be more likely to
focus on the negative implications of a conversation. Finally, when it comes
to expressive behavior within the discussion, we expect that those who are
more conflict avoidant will expect the character in our CIPI I Vignette
Experiment to be less likely to express his or her true opinion, more likely
to censor or conform, and less likely to entrench. As described in Chapter 7,
when we examined expression in the Psychophysiological Experience
Study, we found that those who were more conflict avoidant were less
likely to initiate conversations, used more negative words, and were more
likely to qualify their language when asked to reveal their partisan identity,
especially when their discussant was an outpartisan.
We find evidence to support all of these expectations. Ordinary least
squares regressions, consistent with our approach earlier in this chapter,
reveal that the more conflict avoidant someone was, the lower value they
placed on casual political conversations with a stranger and on disagree-
able political discussions. We found no relationship between conflict
avoidance and the value placed on agreeable political discussions. Just
as with social anxiety, we found that the more conflict avoidant someone
was, the greater the proportion of concerns they selected when thinking
about how the character in the CIPI I Vignette Experiment would behave.
Similarly, we find that the more conflict avoidant someone was, the less
likely they were to think that the character would express his or her true
opinion to the group and the more likely they were to think the character
would conform or stay silent. We do not find evidence of a relationship
between conflict avoidance and censoring or entrenching. As visualized
in Figure 9.3, the magnitude of the effect of conflict avoidance on true
opinion expression and conformity is large. The predicted probability of
expecting the character to express his or her true opinion decreases by
about 22 percentage points going from the lowest to highest levels of
conflict avoidance, whereas the predicted probability of conformity
increases by about 12 percentage points.
Finally, at the Determination stage, we expect conflict avoidance to be
strongly associated with social and political estrangement. Because indi-
viduals who are conflict avoidant prefer to evade disagreement, we expect
that they will be more likely to cut ties with those who do not share their
political views. While we did not specifically ask respondents if they
distanced from a friend because of political disagreement, free response
descriptions of the reasons behind their distancing indicate that disagree-
ment was a key reason, as we discussed in Chapter 8.



Psychological Dispositions 229

[Bar chart: predicted probabilities (0.0 to 0.7) at the lowest and highest levels of each psychological disposition.]
Figure 9.5. Predicted probability of distancing, by psychological dispositions
Note: Predicted probability of distancing from a friend because of politics by psychological dispositions. Results based on logistic regression models where the dependent variable is whether the respondent reported distancing from a friend because of politics (1) or not (0). The independent variable in each model is the psychological disposition (social anxiety, conflict avoidance, or willingness to self-censor). Each model controls for race, gender, interest in politics, and strength of partisanship. Predicted probabilities are generated varying the psychological disposition from its lowest to highest values, holding gender as male, race as White, and interest in politics and strength of partisanship at their means. Data come from the combined CIPI I and CIPI II Surveys, N = 480 for SIAS model; N = 504 for Conflict Avoidance model; N = 498 for WTSC model, reflecting listwise deletion for missing values. Vertical lines reflect 95 percent confidence intervals.
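For a binary outcome like distancing, the same predicted-probability logic reduces to the logistic function. The intercept and coefficient below are hypothetical stand-ins, not the estimates behind Figure 9.5:

```python
import math

def logit_prob(intercept, beta, x):
    """Predicted probability from a logistic regression with one focal
    predictor, all controls absorbed into the intercept."""
    return 1.0 / (1.0 + math.exp(-(intercept + beta * x)))

# Hypothetical values: disposition scored 0 (lowest) to 1 (highest).
p_low = logit_prob(-1.4, 0.83, 0.0)
p_high = logit_prob(-1.4, 0.83, 1.0)
change = (p_high - p_low) * 100  # change in percentage points
```

A quantity like the 17.4-percentage-point rise reported in the text is exactly this difference in predicted probabilities evaluated at the two ends of the disposition scale.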

We find evidence in support of these two expectations. We find that the
more conflict avoidant a respondent was, the more likely they were to
distance from a friend because of politics, according to data from our
combined CIPI I and CIPI II Surveys. As visualized in Figure 9.5, the
predicted probability of distancing from a friend because of politics
increases by about 17.4 percentage points as conflict avoidance goes from
its lowest to highest levels.

Willingness to Self-Censor
Finally, the willingness to self-censor disposition focuses on whether
individuals moderate what they say in conversations with others, largely
to avoid conflict or social judgment. Similar to social anxiety, we do not
have strong expectations for how willingness to self-censor affects the
Detection stage, but we have strong expectations for how it affects the
decision to engage in a discussion and more importantly how individuals
behave in the discussion itself. We do not have strong expectations for
how it affects the Determination stage. In general, in Stages 2 and 3,
we expect willingness to self-censor to operate similarly to conflict
avoidance.
At the Decision stage, we expect that those who are more willing to
self-censor will be more likely to avoid political discussions and place
lower value on disagreeable political discussions. Similarly, they should
be more likely to view initiating political discussions as a costly activity.
We do not have strong expectations for how willingness to self-censor
should affect the value placed on initiating agreeable political discussions.
We find evidence to support these expectations. Using the same
ordered logit modeling strategy as before (see Table 9.2), we find that
the more willing to self-censor someone is, the greater their likelihood of
avoiding political discussions. Specifically, we find that the odds of
avoiding political discussions increase by about 217 percent as willingness
to self-censor moves from its lowest to highest levels. The magnitude
of this effect is similar to that of conflict avoidance and much larger than
that of social anxiety. Using data from the CIPI I Vignette Experiment, we
find that the predicted probability of anticipating that the character will
deflect the conversation by being silent increases by about 16 percentage
points as willingness to self-censor increases from its lowest to highest
levels. We do not find a relationship between willingness to self-censor
and the value placed on agreeable political discussions, but, consistent
with expectations, we find a statistically significant association between
willingness to self-censor and the value placed on disagreeable discussion.
The more willing to self-censor someone was, the less they valued initiat-
ing political discussions with people who disagree.
Our expectations for willingness to self-censor are also similar to those
for conflict avoidance at the Discussion stage. We expect that those who are
more willing to self-censor will place lower value on casual political discus-
sions with strangers and engaging in disagreeable political discussions.
We anticipate that those who are more willing to self-censor will expect
the character in the CIPI I Vignette Experiment to have more concerns
about the conversation described, and to be less likely to express his or her
true opinion and entrench, and more likely to censor and conform.
We find strong evidence in support of most of these expectations. We
do not find that willingness to self-censor was associated with lower value
placed on casual political conversations with strangers, but it was
significantly and negatively associated with the value placed on engaging
in disagreeable discussions. The more willing to self-censor a respondent
was, the less he or she valued disagreeable political discussions. Turning
to the CIPI I Vignette Experiment, we found that those who were more
willing to self-censor selected a greater proportion of concerns, indicating
that they were more likely to think that the character in the vignette
would weigh the negative consequences of the interaction. As visualized
in Figure 9.3, the more willing to self-censor respondents were, the less
likely they were to expect the character to express his or her true opinion
to the group, and the more likely they were to expect the character to
conform. Specifically, we find that the predicted probability of expecting
the character to express his or her true opinion decreases by about
39 percentage points, going from the lowest to highest levels of willing-
ness to self-censor. In contrast, the predicted probability of conformity
increases by about 15 percentage points.
We do not find evidence for a relationship between willingness to self-
censor and the expected likelihood of entrenching or censoring. It might
seem surprising that willingness to self-censor is not associated with the
very behavior that it is designed to capture, censorship. We suspect that
this is because censoring was already one of the most common response
options for all respondents, so willingness to self-censor pushes people
toward the less common response options, such as conformity and silen-
cing. As described in Chapter 7, we found that willingness to self-censor
was associated with linguistic markers in actual political discussions in the
Psychophysiological Experience Study, such that those who were more
willing to self-censor were less likely to initiate conversations when the
discussant disagreed, more likely to use negative words, and more likely
to qualify their descriptions of their partisanship when asked to reveal
their identity when they were conversing with an outpartisan.
We did not have expectations about how willingness to self-censor
would be associated with the Detection and Determination stages, but we
found interesting patterns in both. We found that those who were more
willing to self-censor were more likely to have socially distanced from a
friend because of politics. These patterns are the same as social anxiety
and conflict avoidance. Similarly, the trait operated much like social
anxiety during the Detection stage: The more willing to self-censor
respondents were, the more likely they were to try to guess others’ views
in advance of a discussion and the less likely they were to directly ask
someone about their political views.

Overall, we find that gender asserts strong influence over the Decision
stage, but perhaps less so after that point, and we do not find strong
evidence that race consistently affects behavior in political discussion.
Significantly more work is needed to fully connect the structural inequal-
ity that women and minorities face in society with their experience in
political discussion. The 4D Framework offers a theoretical scaffold
through which future scholars could more rigorously manipulate conver-
sational dynamics that specifically activate gender or racial inequality or
power structures.
The political dispositions studied here are more consistently related to
behavior in the 4D Framework than the demographic dispositions of race
and gender. Interest in politics is associated with behavior at each stage of
the cycle, as is strength of partisanship, though less consistently. It is
unsurprising that interest in politics and strength of partisanship are
associated with discussion behavior, but the implications of these rela-
tionships are important for understanding the composition of people who
participate most actively and honestly in political discussions.
The results about the psychological dispositions that we examined also
have powerful implications. Much like interest in politics (though often in
the opposite direction), conflict avoidance and willingness to self-censor
are associated with behavior at every stage of the 4D Framework. Social
anxiety has somewhat more mixed effects but is associated with at least
one behavior at each stage. Reviewing the summary of our results at the
beginning of this chapter in Table 9.1, it is noteworthy that the psycho-
logical dispositions seem to be just as consistently associated with political
discussion behaviors as the “usual suspects,” if not more so.
Together, the results in this chapter add more nuance to our story. The
individual dispositions – demographic, political, and psychological – that
we examined help to explain some of the variation we observed in the
empirical results presented in the previous five chapters. Our goal was not
to identify the single most important disposition that predicts behavior
throughout the cycle, but rather to demonstrate the value in theoretically
linking innate characteristics to specific facets of discussion behavior.
Certain characteristics will be more influential in some stages than others
and identifying when and how traits matter in the process of discussion
will allow researchers to draw stronger conclusions about which kinds of
people are most likely to talk about politics, and how they communicate
when they do.



Conclusion

We view the findings in this chapter as an important starting point for
future research on political discussion. By identifying some of the ways in
which individuals with distinct demographic, political, and psychological
dispositions navigate political discussion, we hope that future researchers
develop rigorous theories about these unique groups and design appro-
priate studies to test them. For example, it is entirely possible that topics
of discussion vary dramatically between ethnoracial groups, with minor-
ity groups more focused on racialized issues like policing, immigration,
and discrimination, and Whites more focused on issues characterized by
partisan conflict. The interaction between the racial salience of the topic
and the racial composition of the group might bring unique consider-
ations and behaviors to the fore. It could be that the balance between
opportunities and concerns or the rates of censorship shift around, or
there could be entirely new behaviors we did not measure in this book.
The ease or difficulty of political discussion through language barriers,
particularly in immigrant communities, could completely shift the reasons
for silencing in political discussion. We think that all of these topics are
important to consider as the field moves forward. While we regret these
potential limitations in how our results can speak to discussion across
racial lines or around racialized topics, we are excited to see how new
research could build upon our work to extend the 4D Framework
more broadly.



10

The Costs of Conversation

Joe watches the evening news with dismay. Everything seems so contentious
and everyday life choices have become politicized. The conversations with
his in-laws have turned even more extreme. The discussions no longer dance
solely around differences in policy preferences; the conversations now
reveal disagreement about what constitutes fact. He has been friendly, but
from a distance, with Jack and Ken, unwilling to risk another conversation
like he had earlier in the year. And the human resources department at his
office circulated a memo reminding everyone that in order to foster a
workplace that values inclusivity and civility, conversations about politics
were discouraged.
His instinct is to hunker down and double up on his efforts to avoid
political interaction for the sake of preserving his personal and professional
relationships. But he has the sinking suspicion that doing so makes him part
of a bigger problem: Americans no longer communicate effectively about
their political differences. Which path was costlier for the country: frag-
menting social relationships or losing the vitality of civil interactions across
lines of difference?

American democracy was more strained in the aftermath of the 2020 elec-
tion than it had been in over a century, according to historians and
political scientists.1 Organizations that track the quality of democracy
worldwide downgraded America’s rating as a result of the absence of a
peaceful transfer of power, growing distrust in institutions among the
public, and a growing proportion of Americans endorsing violence as a
solution for political disagreement.2 Building on concerns about the
accelerated degradation of norms during the Trump administration and
the growth of antidemocratic attitudes among the American public,3


https://doi.org/10.1017/9781108912495.010 Published online by Cambridge University Press



experts sounded the alarm in an effort to call attention to the magnitude of the
problems Americans face as a citizenry.4 Amid the chorus of voices calling
for major institutional reforms were calls for civility and dialogue,5
arguing that the divisions most visible in Washington, DC had permeated
the interactions that everyday people had with each other. From this point
of view, the solution to the country’s troubles needed to come from the
bottom up, as well as the top down.6
A plethora of civil society groups seeking to address the growing rift
between Democrats and Republicans have latched onto these ideas of
civility and dialogue. Ranging from the Koch Foundation7 to National
Public Radio’s StoryCorps,8 dozens of organizations have funded and
prioritized efforts to get Americans to communicate across lines of
difference, in order to build perspective and empathy for those whose
experiences and opinions diverge from their own. They are joined by
advocates of deliberative democracy, who suggest that we can find
consensus solutions for seemingly intractable policy problems if we
structure fora for people to inform themselves and respectfully debate
their opinions.
But at the same time, others contend that it is preferable to avoid airing
our differences. From advice columns about social niceties9 to changes to
company policies that discourage political conversation in the work-
place,10 the collective wisdom from etiquette experts and human
resources professionals is to keep our mouths shut. Echoing the call that
less is more, many people severed social relationships.11 Others vocally
stated their intention to leave social media platforms altogether because
they were tired of the infusion of politics, while some retreated into niche
platforms catering to particular ideologies. Podcasts sprang up, counseling
people about what to do with conservative relatives who still had not
accepted Trump’s loss in 202012 or liberal “snowflake” relatives.13
If Americans talked to each other more about their political differ-
ences, would those interactions repair or fracture the divisions in the
country? This question recasts14 a fundamental tension at the core of
the study of democracy: Can the goals of participatory and deliberative
democracy coexist? Our hope is that the findings in this book inform
further study about when discussion might help and when it might hurt,
and what potential pitfalls lie ahead if increased political discussion is
pursued as a remedy for the consequences of high levels of polarization.
The stakes are high, and while many of the problems facing the United
States are structural and institutional in nature, many of the proposed
solutions call on everyday Americans to lead the charge.



A Portrait of Political Discussion in America


We began Chapter 1 with the canonical findings from the field of political
discussion: Individuals tend to discuss politics with those who agree and
political discussion occurs somewhat frequently, although many people
try to avoid these conversations. We argued that it was important to
understand the decision-making processes that led to this pattern of
findings, not just the regularity of the pattern or its consequences on
downstream political behaviors. In the intervening chapters, we unpacked
the dynamic process of political discussion, characterizing the 4D
Framework that organizes the series of behaviors in the process of
Detection, Decision, Discussion, and Determination. This characteriza-
tion moves us beyond the depiction of political discussion previously
analyzed, such as the frequency of discussion or the pattern of agreement
within social networks. Rather, it encourages us to focus on the motiv-
ations guiding people’s choices, demonstrating how goals related to
affirmation, accuracy, and affiliation can lead to the patterns of political
discussion documented in previous work. What have we uncovered?
Prior to discussion, many people make assessments about the likely
political views of potential discussants. This process is probably not
entirely conscious, but we show that more than 70 percent of people have
some idea about how they might guess others’ views and a third of people
say that they actually do try to guess someone’s views before initiating a
conversation. The people most likely to engage in detection are stronger
partisans and those most interested in politics, but also those who have
psychological traits that make discussion particularly threatening. This
detection behavior is understudied in the field of political science, but it
will only become more important as our society becomes more divided.
Following the trajectory of research from Greene (1999) to Green,
Palmquist, and Schickler (2002) to Mason (2018) reveals the way that
social identities have sorted along political lines, leading partisanship to
function as a social identity; following the trajectory of research from
Ahler and Sood (2014) to Settle (2018) to Deichert (2019) to Carlson and
Hill (2021) to Lee (2021) suggests that people are becoming increasingly
aware of these patterns, may actually overestimate the alignment of social
groups into political parties, and that these stereotypes impact decisions
about social interactions.
Our results about detection make clear that many people are able to
glean enough information about potential discussants to make informed
decisions about discussion before it happens. We show that the
conversations that do occur are systematically different from those that
could occur: They are smaller, more likely to be between closer ties, and
more likely to be between people who perceive that they agree with one
another. Women are more likely to deflect or avoid conversations, as are
those with high levels of social anxiety, conflict avoidance, and a willing-
ness to censor themselves. While instrumental considerations do structure
the choice to opt out for some, many people are driven primarily by social
considerations. These affiliative motivations are uniquely negative:
Rather than seeing conversation as a way to strengthen connections,
people are much more likely to be concerned about the possibility of
damaging their relationships with others.
The conversations that actually happen occur most frequently in
small groups, calling attention to the necessity of continuing to incorp-
orate group dynamics into the study of political discussion. Consistent
with previous work examining the impacts of disagreement, we find that
people seem to be especially sensitive to discussions where their opinion
is in the minority. Not only does this situation discourage the emergence
of conversation, but it seems to heighten people’s discomfort when those
discussions do emerge. Discussion is experienced physiologically, and
most people’s hearts race in anticipation of and during a discussion,
especially when they perceive or anticipate disagreement or hold issue
positions at odds with their discussants. Political conversations are
marked by high levels of verbal censorship and conformity, perhaps
contributing to people’s underestimation of the amount of disagreement
between their own opinions and those of others in the conversation.
Again, we see a role for psychological traits associated with perceiving
higher costs of discussion, inclining individuals to be less likely to
express their true opinions and more likely to conform to the
group opinion.
In the aftermath of these conversations, people’s concerns about dam-
aging their social relationships can be realized. Although the majority of
people report that they have not estranged themselves from others
because of political conversation, about 25 percent of people report that
they have. Even if relationships are not entirely severed, people have a
preference for talking with copartisans even about nonpolitical topics.
Here, we see that political and psychological characteristics affect
estrangement in the same direction: Strong partisans, the highly politically
interested, and the psychologically predisposed all are more likely to have
distanced themselves from a friend due to politics, although presumably
for different reasons.




Our primary contribution in this book is not to present a new theory
that outperforms existing theories of communication to explain fundamental
puzzles (e.g., Are publics self-educating? Does disagreement persist
in networks?). Rather, we provide a new framework for synthesizing
previous work and examining the social and psychological processes of
political discussion. Understanding these processes is essential for under-
standing the outcomes of discussion, and we hope that the 4D Framework
brings new questions to the forefront of study that have bearing on a
multitude of questions relevant to the study of democratic citizenship.

Key Lessons

In addition to the contribution of the 4D Framework and the specific
findings from each of its stages, synthesizing across our results suggests a
series of bigger takeaway points about the process of political discussion.
Future research on political discussion must grapple with the implications
of these points.
Lesson #1:
The social processes of discussion begin before words are exchanged and
do not end when the conversation stops. Previous research has focused
primarily on the instrumental aspects of political discussion, such as
whether people learn, become more tolerant, or vote at higher rates.
Our work suggests that the social aspects of conversation – the consider-
ations people hold in their mind, but also the dynamics of conversations
themselves – are integrally interwoven into the decisions people make at
all stages of the discussion process.
Social considerations affect which conversations people pursue and
what they choose to say during those conversations. Our findings show
that social considerations are as important as considerations about infor-
mation and expression, and that the people who have concerns about
political discussion are especially likely to report the importance of social
considerations. People’s concerns about the social ramifications of political
discussion are not unfounded, as political conversations have consequences
for people’s future political discussion and social networks: About one in
four people report that they have altered a social relationship with someone
because of their political views. This is most common among those most
attached to the political system (partisans and the interested) as well as those
most psychologically susceptible to the costs of discussion.
Supporting the findings gleaned from past qualitative work, we find
that most people report talking about politics in small groups, not in
pairs. Thus, group dynamics matter within conversations, not just within
discussion networks. Previous political science research has focused on
the composition of people’s regular discussant network, not the compos-
ition of the discussions to which people are exposed. This distinction is
subtle, but important. Our work demonstrates that these group dynamics
have an immense impact on which conversations occur, who chooses to
participate, and what gets said.
Against this backdrop, we encourage future research to take seriously
the social contours of political discussion. In qualitative or survey-based
work, this could mean asking questions about the social relationships
between discussants, attitudes discussants have toward each other, and
the distribution of opinions within conversations. In experimental work,
this could mean more realistically incorporating social considerations into
research designs to allow for greater generalizability outside the lab or
survey context.

Lesson #2:
Not all political discussions are created equally. They differ in motive,
participation rate, and participation type. We do not intend to make a
straw man argument: No scholar of political discussion advocates or
agrees with the premise of uniformity in discussion. But this heterogeneity
typically is not factored into discussion research, as a result of focusing
primarily on discussion frequency and discussion network composition.
Our descriptive work shows the rich variation in the interactions grouped
together as “political discussion” and our inferential work shows that
both contextual and dispositional factors affect the way a person’s par-
ticipation in political discussion unfolds.
Previous research has assumed implicitly that when people report that they have talked about politics, they have been verbal contributors to the conversation, expressing their true opinion and engaging in the conversation with positive goals to learn or persuade. But our findings imply –
although we do not test this directly – that most conversations do not match
that account, based on the preferences, context, and behavior of the people
participating. Our results suggest that people are not reaping the benefits of
political discussion, and instead may suffer its consequences. Moreover, the
absence of a political discussion is more than an absence of exposure to
other people’s political views; it can reflect an active choice of avoidance, a
choice that shapes the way a person conceptualizes both herself (“I am not a person who talks about politics”) and her mapping of her political
context (“Those kinds of conversations are threatening and should be
avoided”). Thus, both political discussion and the intentional choice not to
have a political discussion can impact a person’s behavior.
We can no longer ignore the fact that many conversations are
undesired. Engaging in a political conversation involuntarily will likely
have different consequences than participating in one willingly. People
make active choices about political discussion, but they cannot always
avoid the conversations they wish they could. This point could be inferred from the gap between the proportion of the population who say they talk about politics and the proportion who say that they enjoy talking about
politics. We push further on this point to show that political discussion is physiologically activating and, for many people, is experienced similarly to the feeling of a panic attack. Our work reveals what happens when
people talk about politics involuntarily. The coping mechanism for these
unwanted conversations appears to be twofold.
First, some people silence themselves or try to discreetly change the
subject in order to avoid the conversation. As a result, when people report
that they have had a political discussion, they lump together a wide variety
of experiences, including those where they do not contribute. This is high-
lighted by the fact that when we asked people to tell us about a discussion
they had, many described a situation in which they did not actually talk!
Second, not everyone communicates honestly. Some behave like cha-
meleons, pretending to agree with the group to save face. Others simply
self-censor, hedging their language or moderating their opinions. We estimate noticeable rates of conversational deflection, both in our vignette studies and in our lab studies where we recorded people talking about
politics. All of these behaviors are more common when individuals disagree
with the group and are less knowledgeable about the topic.
These two points suggest that an update to our standard measure of
political discussion behavior is due. We see two problems. The first is that
common measures of discussion frequency cannot distinguish presence
during a conversation from participation within it. The frequency of
political discussion behavior is typically measured with a single question
asking about the number of days a person has talked about politics in the
last week. We suggest that this measure is inadequately nuanced to
capture the heterogeneity of discussions that people have. The variation
between people that is captured by this measure seems less instructive
than a measure that better captures people’s engagement within a conver-
sation. Which facet of discussion researchers address will depend on their
interest in the behavior, but we encourage scholars to think about con-
versational role and individual agency, capturing variation in discussion
initiation, the desirability of engaging in the discussion, and the veracity of opinion expression.
Second, assessing disagreement within networks is not adequate to
capture the effects of disagreement within conversations. We encourage
more research involving face-to-face or online discussions between real
people. There have been many studies of both of these types in recent
years, but scholars should do more to fully capture how the internal
dynamics of a conversation – through transcripts of the conversations,
analysis of variation in tone or vocal pitch, and assessment of the number
of interruptions, body language, eye contact, or other measures – affect
other, perhaps more instrumental, outcomes of interest. We acknowledge
that group-based studies can be immensely difficult and costly to imple-
ment in practice. However, even survey-based work could try to think
more carefully about measuring actual conversations and responses to
them. Social media also presents an incredible opportunity to examine
dynamics of online conversations in a purely observational setting.

Lesson #3:
There is meaningful heterogeneity across individuals that affects the
process and outcome of discussion.
Scholars of political discussion long have considered the importance of
variation in individual dispositions, but these studies have focused on the
overall patterns of discussion, such as frequency or network composition.
Doing so masks the way that these traits affect all the small decisions
involved in the process of discussion and may have the effect of circum-
scribing the importance of individual differences.
Previous work has revealed that there are biases in who participates in
conversation, as well as what kind of discussion they are likely to encoun-
ter based on the composition of their discussion network. Our work adds
two insights. First, the characteristics that describe someone who prefers
to avoid conversations altogether are also associated with suboptimal
discussion behaviors in undesired conversations. Second, these traits
matter before and after a conversation as well. It is not just that some
people talk less frequently and are more likely to talk with like-minded
others. In other words, in the Choose Your Own Adventure novel of
political discussion, people can follow entirely different storylines.
Our findings suggest that roughly one-third of American adults
attempt to avoid conversation if at all possible. The socially anxious,
conflict avoidant, and those willing to censor themselves are dispropor-
tionately represented in this group, as are the politically uninterested and
less partisan. Those who prefer to avoid discussion are not always suc-
cessful, and they get roped into conversations from time to time. These
individuals are less likely to report that they try to guess the viewpoints of
others before they talk about politics, so presumably they are more likely
to arrive “blind” to what awaits them in the conversations they do have. When they do engage, they are less likely to express their true opinion and
are more likely to be concerned about the negative consequences of the
conversation than they are excited by its opportunities. When their hearts
race during these conversations, we speculate that their physiological
reaction is driven by negative emotions.
Another third of the public reports that they like talking about politics.
In many ways, they are the mirror opposite of the first group: politically
interested and attached, and less likely to be concerned about conflict or
plagued with social anxiety. Many, but likely not all, in this group would
be what Krupnikov and Ryan (2022) consider the “deeply engaged,” a
small group whose passion for politics is so forceful that it can repel
others. This shows up in a number of ways. Those who enjoy talking politics are likely to feel more comfortable directly asking people what their political views are, but they are also better equipped to
detect other people’s views in advance of a conversation. Once in a
conversation, they are more likely to express their true opinion.
Depending on their tolerance for disagreement, they may or may not talk
about politics with people who disagree. Their physiological activation is
much more likely to map to positive emotion.
The final third of the public report that they will talk about politics, but
conditional on knowing the viewpoints of their discussants. These indi-
viduals are more likely to report that they guess the viewpoints of other
people and are more confident in their ability to do so. But they are the
most selective and sensitive to their preferences. This group is more
heterogeneous than the previous two groups – some are selective about
the people with whom they discuss politics, but are comfortable express-
ing their views once opting in. Others are selective, but still shy away once
the discussion occurs. A better understanding of the factors that shape the
preferences of this group is essential to identifying how political discus-
sion can maximize desirable aims while minimizing detrimental ones.
Just as no two political discussions are the same, neither are any two people involved in such a discussion, and the rich variation in decision-making behavior at all stages of discussion affects not only the frequency
and composition of discussion, but likely the effects of the conversation
itself. The 4D Framework suggests that discussion experiences
accumulate. What we consider most important about the specificities of our findings is that the individuals most inclined to talk about politics
unintentionally make political discussion more uncomfortable for every-
one else. The people most interested in conversation are more likely to be
direct and inquisitive, as well as forthcoming with their opinions. That is
the very kind of intensity that creates concerns for others. This implica-
tion ties directly to Krupnikov and Ryan (2022), who argue that it is those
who are deeply engaged in politics that the rest of the country detests.

Discussion as the Foundation of Democratic Citizenship
As a discipline, we study political discussion because we think citizen
communication about politics is an essential part of the functioning of a
democracy. How do our findings come to bear on this notion? In the next
section we assess some of the downstream democratic consequences that
talk is supposed to foster, in light of the findings of our research. We
tackle three main ideas here: that discussion can serve as the foundation
of information transfer, increasing the public’s overall level of knowledge;
that discussion can motivate other forms of participation; and finally that
discussion can operate to increase mutual toleration.

Informational Benefits
Talking about politics is seen as key to the flow of information through
society. The earliest theories in political behavior emphasized the notion
of a “two-step flow” of information through which the people who were
most interested in politics paid the most attention, and then passed what
they gleaned along to others in their social networks. Thus, even if most
people did not take the time to stay informed, collectively people could act
as though they had enough information to make “correct” choices in the
voting booth.
The core principles of the two-step flow are that people who know
more about politics share their knowledge with people who know less,
and that discussion across lines of difference can facilitate greater learning
by increasing the variety of information to which individuals are exposed.
Eveland and Hively (2009), for example, find that discussing politics with
outpartisans can lead to increased political knowledge. But this process is
not perfect. Ahn, Huckfeldt, and Ryan (2014) explain that political
discussion is a political process and individuals might face incentives to
mislead others when they share information. Even when individuals are
motivated by accuracy, they seek out discussants with a more diverse set
of views, but they ultimately do not make more accurate judgments
(Pietryka 2016).
There are already cracks in the idea that this informational flow is
helpful in an era with high levels of political polarization, as polarization
might exacerbate these problems and create others. In a hyperpartisan
environment, the people paying the most attention are likely to be the
most opinionated (Krupnikov and Ryan 2022), perhaps even engaging in
politics as though it is a hobby (Hersh 2020). The media environment that
is both a cause and result of hyperpartisanship makes the situation even
worse. Druckman, Levendusky, and McLain (2018) show that partisan
media bias becomes amplified through political conversations, leading to
more extreme opinions among those who were exposed to a discussion
with people who read partisan news but were not exposed to partisan
news themselves. Aarøe and Petersen (2020) find that some types of
media frames are more likely to propagate through social transmission
than others. For example, Bøggild, Aarøe, and Petersen (2020) find that
individuals are more likely to transmit information about self-serving
politicians, which decreases political trust among those who receive that
information. Carlson (2018, 2019) finds that the quality of information
transmitted interpersonally degrades over time and is subject to signifi-
cant bias. Individuals inject their own, biased opinions into the infor-
mation they pass on to their peers, leading others to become less informed
than they would have been had they read the news directly. A caveat to
this is that social information can facilitate learning on par with reading
the news directly if that information comes from someone who is more
knowledgeable and a copartisan.
The challenge, however, as we show in this book, is that individuals are
more likely to avoid conversations with those who are more knowledgeable, and those who are most knowledgeable are sensitive to the social ramifications of the conversation. The salubrious effect of the two-step process
breaks down under these conditions. The most beneficial political discus-
sions (in theory) are those that Americans seem most interested in avoiding.
Further undermining the flow of information, previous research has shown
that individuals tend to overestimate the expertise of their political discus-
sion networks (Ryan 2011). This means that if our finding generalizes –
that individuals are less comfortable discussing politics with someone who
is more knowledgeable – people might be avoiding more political
discussions than “necessary” by writing off discussants that they think are
more knowledgeable than they actually are.
If the vast majority of conversations that occur are between people
who share the same views, and if conversations where people nominally
disagree do not manifest that disagreement (as we show in our lab studies
and vignette experiments), then we must turn toward the literature that
considers the effects of conversation among those who share opinions or
identities. The findings do not bode well for democratic goals:
Homogenous groups are much more likely to polarize and arrive at more
extreme opinions (Wojcieszak 2011).
The concerns about the flow of information are amplified in an era
where cable news shows are fanning the flames of misinformation.
Scholars have focused on the rise of Fox News, but in recent years even
more extreme outlets such as Newsmax and OANN have increased their
market share. Layer on top of this the known problems of misinformation
in the social media environment, and the resulting information environ-
ment is quite polluted. How is this being addressed? Many social media
platforms banish producers of disinformation or remove misinformation
related to particular topics, such as COVID-19. Factually incorrect infor-
mation that is not removed from a platform is sometimes labeled as such,
and news and fact-checking organizations seek to offer near real-time
corrections to widely circulating false information. All of these efforts
build on research assessing which individual characteristics make people
more prone to spreading or believing low-quality information or conspir-
acy theories, such as low levels of digital literacy or cognitive reflection, or
high levels of partisanship.
Despite attempts to stop the dissemination of false information, to
label it when it appears, and to encourage people to stop and reflect
before they spread low-quality information, mis- and disinformation is
bound to permeate face-to-face political conversation in addition to its
spread on social media. The question is thus whether interpersonal com-
munication, both online and offline, will make the problem better or
worse. To the extent that current tactics to stop misinformation do
consider the role of social networks, scholarship has focused on whether
people are more likely to believe information when it comes from a friend
or a copartisan stranger. While some hold hope that the increased cred-
ibility of family and friends as sources of news may position ordinary
people to be powerful disruptors of the spread of misinformation, our
work suggests a much more complicated process.

The implications of the dynamics of interpersonal communication extend far beyond credibility. Our framework can help us think about
the social repercussions of engaging in the spread of misinformation. For
example, accuracy goals might take a backseat to affiliation goals. We
know that most people do not prioritize learning in political conversa-
tions (Eveland et al. 2011) and that at least as many people prioritize
social considerations, so many people may go along with what their
misinformed friends say in an effort to avoid being shunned by the
group. Our findings, paired with findings from the misinformation litera-
ture, suggest an especially pernicious dynamic. The people most likely to
be misinformed or believe conspiracy theories are often the strongest
partisans, people who say they follow the news closely, or the most
politically knowledgeable (Nyhan and Reifler 2010; Miller, Saunders,
and Farhart 2015). We can speculate that these people are perceived as
more knowledgeable in a conversation, and results from this book suggest
that silencing, censorship, and conformity are more likely when people
perceive others in the conversation as better informed. Changing this
dynamic will require more fully understanding the social calculus people
make when they spread or receive low-quality information. Social shaming of those who spread misinformation or hateful speech may be one possibility, but this would require a major change to the norms of conversation.
How do our findings inform theories about information flow in soci-
ety? In short, people avoid the conversations that could be most beneficial
(with knowledgeable people who disagree), do not communicate effect-
ively in the disagreeable conversations that do occur (because of biased
information and self-censorship), and are thus more likely to be exposed
to polarizing influences of homogenous discussion, including the spread
of misinformation.

Participation
The relationship between discussion and participation is a central focus of
previous research. There is broad agreement and evidence that the kind of
people who are inclined to talk frequently about politics are also the kind
of people who are inclined to participate in other ways such as voting.
Our findings support this well-established relationship. Analyzing data in
our CIPI I Survey, we find that those who avoid political discussions are
less likely to vote (53 percent compared to 79 percent of those who enjoy
political discussions); engage in fewer political activities (such as
attending rallies, working on campaigns, donating, holding office, or contacting elected officials); and have lower levels of political efficacy.
Of course, much of this effect is due to selection, but other work (e.g.,
Klofstad 2009) has shown a causal effect of conversation in motivating
participation. Our contribution here is to suggest that the lack of political
discussion may entail more than lack of exposure to information or
mobilizing influences. We echo Mutz (2006) in suggesting that people
who make choices over and over again not to engage in discussion might
actively opt out of other forms of participation. This mechanism would be
observationally equivalent but conceptually distinct from simply failing to
be mobilized, a distinction that could be important for an intervention
designed to boost participation.
The bigger debate in the field has centered around whether disagree-
able conversation stifles participation. As we have referenced throughout
this book, some research suggests that those in more diverse political
discussion networks have lower levels of political engagement (Mutz
2006), but other work suggests that this finding might be restricted to
those who are purely in the minority in their networks (Nir 2011; Bello
2012). We push further on this idea to raise additional questions,
although we do not have the answers. As we emphasized, we think it is
important to make distinctions between the amount of disagreement
people have in their discussant networks versus the amount of disagree-
ment they encounter in discussion itself, and our findings are informative
in terms of how we should think about the relationship between exposure
to disagreement and participation.
First, consistent with past findings (Huckfeldt 1987) but using different
methods, we show that although many people can detect others’ political
identities, people are not good at recognizing disagreement during discus-
sion itself. What is the implication? We are likely undermeasuring the
amount of disagreement that exists simply by measuring the partisan
alignment or perceived agreement of discussants. Importantly, the rela-
tionship between the potential for exposure to disagreement and the
actual exposure to disagreement deviates even from what we can measure
on paper about the concordance between discussants’ views or identities.
Our findings suggest that the low level of disagreement recognition stems at least in part from the fact that people obscure their disagreement in political discussions. If the discrepancy in past findings about the effects of exposure to
disagreement is in part a measurement issue, we must push further than
we have to better understand the variation in people’s ability to recognize
disagreement as well as their desire to mask it during a discussion.

Second, our findings make clear that the experience of disagreement is an immersive one, engaging people psychologically and physiologically.
The emotion that people experience before, during, and after a political
conversation is not just about political candidates or issues, and there are
good reasons to think that this altered physiological state might impact
the political choices that people make. For example, anger is typically
associated with increased participation, but less information seeking. It is
possible that people who get angry in a political discussion participate
more but based on less information. In contrast, those who experience
anxiety after a discussion might be less driven to participate but seek more
information. Thinking holistically about the experience of disagreement
may give rise to more refined expectations about its effects.

Mutual Toleration
Even if we have concerns about the potential of conversation to inform
the public or bias participation, there may be other benefits that outweigh
those concerns, namely the promise of increasing understanding and
mutual toleration, while minimizing stereotypes and animus. This vein
of thought permeates literatures focused on both democratic deliberation
and everyday talk and is the premise justifying the work of civil society
organizations bringing people together to dialogue about their political
and worldview differences. While each of these three types of interpersonal interaction has unique theoretical foundations, they share one foundation in common: contact theory. This theory posits that communication has the
possibility of increasing perspective-taking and building empathy. If
people who disagree come to understand what goals, values, and prefer-
ences they have in common, they will be better equipped to arrive at
common solutions or to lessen their reliance on stereotypes they have of
the “other.” While initially theorized to respond to racial tensions, the
contact hypothesis has been tested in many contexts, including artificially
created group identities, suggesting the power of discussion to improve
partisan relations.
Research on the contact hypothesis writ large suggests that increased
contact with outgroup members can decrease prejudice, but the pattern of
results with respect to the efficacy of the contact hypothesis for partisan
interactions is mixed (see Busby 2021 for an elaboration). Based on
observational data about discussion networks, Mutz (2006) argues that
disagreeable political discussions can facilitate tolerance for those who
disagree. A flurry of experimental work in recent years finds some positive
effects as well, albeit conditionally. Using an experiment in which opposing partisans were randomly assigned to interact socially online in a
discussion, Rossiter (2020) finds that both political and nonpolitical
discussions with outpartisans reduce both outparty animosity and the
use of negative outparty stereotypes. However, interparty conversations
may be most effective in times of low political salience or among those
who experience electoral reassurance, rather than electoral threat
(Rossiter and Carlson n.d.). Wojcieszak and Warner (2020) similarly find
that interparty contact – both real and imagined – can reduce outparty
animus through perceived commonality. However, this only works if the
contact is positive: Negative interparty contact amplifies anxiety and
dampens empathy, serving to exacerbate outparty animus.
Busby (2021) proposes a framework for experiments testing the polit-
ical consequences of intergroup contact theory, emphasizing the import-
ance of researcher control and intentional design choices. He deeply
engages the problems of study generalizability, identifying ways in which
researchers can design studies where results transport more easily to other
contexts. This framework is a step in the right direction, but our findings
suggest that even these recommended steps toward “experimental real-
ism” will not adequately recreate the social dynamics and psychological
considerations that structure much organic political discussion. The find-
ings in this book, and the 4D Framework more generally, can help us
think about why and when contact might be more or less effective in the
real world. Our assessment of the efficacy of intergroup contact theory to
operate in day-to-day political talk proceeds in three parts: the assump-
tions on which contact theory is built; the conditions for when contact
theory is thought to be most effective; and the mechanisms through which
contact theory is thought to work.
Beginning with the assumptions of contact theory, Rossiter (2020)
makes the point that the contact hypothesis takes as a given that group
membership is known, and the vast majority of contact research – espe-
cially experimental work – assesses identity based on visible, ascriptive
characteristics. However, it is not clear that people always know the
views of their political discussants. Our results from Chapter 4 suggest
that some people cannot detect others’ views, others will not, and most
who try are not very confident in their ability to do so. As our lab
experiments show, people are not particularly clear when stating their
viewpoints, suggesting that discussion itself does not always clarify iden-
tity in a way that would improve people’s recognition of group
membership.

https://doi.org/10.1017/9781108912495.010 Published online by Cambridge University Press


250 The Costs of Conversation

Another problematic assumption is the definition of which kinds of
experiences count as “contact”; as Busby (2021) notes, there is consider-
able ambiguity in contact research both in terms of the relationship people
have with the outgroup member as well as the characteristics of the
experience itself. Contact studies conducted in the lab assume that sub-
jects do not know one another, and many of the innovative tests that have
been done in more naturalistic settings leverage either contact with
strangers or contact in settings where subjects do not know each other
before the contact begins. Our work shows that people find incidental
political interactions with strangers to be very costly, and they avoid
political conversations with their weak ties. Thus, contact is most likely
to occur between people with a preexisting relationship. But our work
also suggests that people generally are not willing to risk their friendships to talk honestly about politics.
Turning to assess the conditions that make contact theory most effica-
cious raises different concerns. The original conceptualization of the
contact hypothesis argued that four conditions must be present in order
for intergroup contact to reduce prejudice: There must be equal status
between groups; the groups must share common goals; there must be
intergroup cooperation; and there must be support from authorities, law,
or custom for the groups to interact (Allport 1954). Are these conditions
met when we think about political talk as a solution to the divisions in
America? While the structured formats of deliberation or dialogue can be,
we have more doubts about organic political discussion.
First, equal status is typically defined as status within the conversation,
including the confidence and competence that people display (Riordan
1978; Riordan and Ruggiero 1980). Our results about the impacts of
knowledge differentials between discussants make clear that people alter
their behavior in response to imbalances in competence. While we do not
assess how prevalent imbalances are in the discussions people have, a
literature in democratic deliberation suggests that power imbalances
quickly emerge. Thus, without intentional interventions or institutional
rules to “level the playing field,” it is unlikely that most political conver-
sations meet the criterion of equal status. Next, it is not clear what
“common goals” or a “cooperative setting” mean in the context of
everyday talk about politics. Since most conversations are incidental in
the sense of arising in the context of other conversation, people are not
approaching these interactions instrumentally. While some of our
respondents indicated that they did see benefits related to expression or
information acquisition, it is not clear that those goals are shared between



Discussion as the Foundation of Democratic Citizenship 251

discussants. People who do not share discussion preferences are unlikely
to share goals for the conversation. For example, if one person is trying to
extricate herself as quickly as possible to avoid damaging the relationship
while the other person is unsuccessfully trying to initiate a conversation,
these people are working at odds with one another. Finally, the success of
intergroup contact is conditional on whether the interaction has support
from authorities, customs, or norms. We cannot speak to this directly, but
again suggest that mixed messages from elites and the heterogeneity in
people’s preferences for discussion likely undermine the creation of
shared norms or customs. As Eliasoph (1998) writes, people struggle to
find their footing in political conversations.
We doubt that the conditions of intergroup contact are met in every-
day political talk, but the scholarly literature has not firmly established
how essential these conditions are for intergroup contact to reduce
animus. Pettigrew and Tropp (2006) show that not all the conditions have to be met, although the positive effects of contact tend to be
largest if they are. Their meta-analysis on contact hypothesis studies
revealed that very few studies meet these conditions, and Paluck, Green,
and Green’s (2019) meta-analysis finds that not one randomized contact
hypothesis study with over-time outcome measures has varied all four of
Allport’s conditions. Thus, organic political conversation might be suc-
cessful in the absence of the core conditions, but the deck is stacked
against that outcome. Civil society organizations may have more luck in structured dialogue opportunities, but these groups should not expect participants to easily replicate those experiences in their everyday lives.
Finally, we assess the mechanisms through which contact theory is
thought to work, in light of our findings. Even if the assumptions and
conditions are not entirely fulfilled, if the nature of political talk across
lines of difference can foster the mechanisms of contact theory, talk might
succeed. One primary mechanism thought to underpin contact theory is
empathy. Some research focuses on the way contact can heighten
empathic concern, which then serves to mediate contact’s effect of
lowering bias or prejudice. The second mechanism is individuating information gained through self-disclosure. Sharing individuating information with an outgroup member can lead people to view the outgroup as more
heterogeneous, which can decrease outgroup bias and stereotyping
(Miller 2002). Instead of thinking about someone purely as a member
of the outgroup, individuals come to think about them as individual
people (Brewer and Miller 1988).
Related to these two mechanisms is a third posited by Levendusky,
who argues that highlighting what Republicans and Democrats have in
common, as opposed to their differences, can reduce affective polariza-
tion. This could be accomplished in a number of ways: by priming
common identities, such as a superordinate identity as Americans; by
thinking about outpartisans in our networks we know personally and
respect, which allows these positive attitudes to trickle up to the outparty
overall; or through interparty political discussion, which allows people to see that some outpartisans do not live up to the stereotypes they have come to loathe, enabling deeper perspective taking.
While we are somewhat skeptical about the efficacy of empathy, we are
more optimistic about the role of individuating information. We did not
explicitly measure empathy or individuating information as consider-
ations, but we did ask respondents about related ideas, with a mixed
pattern of results. Our coding of positive affiliation benefits when assess-
ing what conversations actually occur (in Chapter 5) shows that around
10 percent of people who described a conversation they did have listed an
affiliation opportunity as a rationale. We also identified specific oppor-
tunities in the CIPI I Vignette Experiment, where one of our consider-
ations was the “opportunity to get to know these people on a deeper
level,” as well as in the Physiological Experience Study, where we asked
subjects if their positive feelings about the conversation were driven by the
opportunity to “engage more to get to know my discussion partner.” In
the CIPI I Vignette Experiment, that opportunity was the third least
popular consideration: 19 percent of people reported it was a consideration but only about 4 percent of people reported that it was the most important consideration for the character in the vignette. However, in the lab study, 69 percent of subjects agreed that the positive feeling they had from the conversation was driven by a desire to get to know their discussion partner.
This was the most popular rationale.
Emphasizing common bonds along the lines of Levendusky’s work
might be fruitful, but considerably more work needs to be done to assess
its efficacy outside of an experimental context. We found in Chapter 4
that fewer individuals agree with outparty stereotypes when those out-
partisans are described as people they know personally, as opposed to
outparty candidates, although rates of agreement are still very high. While
some of our study participants clearly expressed that they were able to
“agree to disagree” with people in their lives they cared about the most
and instead talk about topics about which they shared common bonds
(sports, their kids, new recipes, etc.), many still reported that they severed
ties with others who disagreed, even if they had other identities in
common. Explicitly priming people to think about their positive inter-
actions with outpartisans might work to reduce outparty hostility, but if
these respected outpartisans represent the minority of outpartisans with
whom people interact, we are skeptical that priming alone will overcome
people’s more readily available negative associations.
Weaving this all together, we are pessimistic about the potential of
organic political talk to heal the fractures in our society, without the
guidance and structure provided by a formalized dialogue. We anticipate
that researchers conducting experiments premised on contact theory may
find positive outcomes in controlled settings, and while most civil society
organizations do not often make public any sort of rigorous program
evaluation, their efforts are likely doing more good than harm for the self-
selected group of people who volunteer to participate (although these are
likely not the people most in need of the intervention!). But these experi-
ences simply do not translate to the kinds of conversations that our
research suggests people actually have in their daily lives.
We have several concerns. We do not think people approach political
conversation with the same sense of purpose and goal-orientation that is
created in civil dialogues or experimental instructions. People do expect
there to be some benefits from discussion, but we do not see much
evidence that people view political discussion as a channel through which
to reach compromise or agreement. We are optimistic that people are
interested in learning about each other, but day-to-day talk does not
incentivize the same kind of honesty and disclosure that is provided by
the anonymity of an experiment or the commitment established at the
beginning of a dialogue. Moreover, our results suggest that people experi-
ence physiological activation during discussion in ways that may under-
mine effective communication, such as the relationship between
emotional “flooding” and avoidance behavior. Calling on people to
“gut check” themselves or be more reflective is likely much easier said
than done.
Our final concern is rooted in the fact that many of these studies and
dialogues happen in a social vacuum. In deliberation, people assume roles
as citizens or delegates tasked with making decisions. In empathy-
building exercises, strangers are brought together because their lives
would never intersect otherwise. The whole goal is to encounter difference
and engage directly with it. But our research suggests that left to their own
devices, people seem more inclined to avoid conversations with people
they do not know well. Thus, even when the opportunity arises
organically to engage across lines of difference, those are the very conver-
sations in which people are most likely to opt out. Engaging across lines
of difference with close social connections raises concerns that do not
exist when engaging with strangers in structured spaces. People cannot
just walk away at the end of the day if interactions go poorly. If friendship
with an outgroup member is enough to convey the positive effects of
contact theory, there is an increased likelihood of success: Most
Americans are still very willing to be friends with outpartisans. But as
we show, preserving that friendship might mean sidestepping meaningful
conversations about politics. If it is meaningful conversations that allow
for the perspective-taking and signaling of mutual respect that are able to
reduce outparty animosity, then we might be out of luck.

  -


In many ways, writing about contentious political conversations felt
quaint during the 2020 election cycle. In the period over which we have
been working on this project together – from 2013 through 2022 – we
have felt, observed, and read about changes to the political system that
both complicate how we interpret our results and make them all the more
important. We began this project after what seemed like a contentious
election between Barack Obama and Mitt Romney. In hindsight, that
election pales in comparison to what was to come: calls for martial law,15
threats to the lives of election workers,16 and an insurrection at the
Capitol that delayed the certification of electoral results.
Are our results specific to the context of the time period in which we
collected the data, in the build-up to and aftermath of the 2016 election?
Methodologically, this is a thorny question, and one that we cannot rule
out with certainty. Although we do not have the data to test this
conclusively, we do not believe the patterns we uncover are temporally
bound. It is possible that the surprise, anxiety, and contention that
coincided with the 2016 election might have made interpersonal polit-
ical interaction more fraught. Some of our survey respondents even
likened the election to a “tipping point” of sorts: “[O]nly the current
election caused me to distant [sic] myself from a close friend. her views
on Trump was more than I could deal with. she was reposting foolish-
ness at a level I couldn’t beleive [sic] her comments.” Some survey
evidence suggests that individuals perceive politics to be more conten-
tious today than in previous years, but researchers were not regularly
asking this question over a long period of time. This makes it

The Talk Trade-Off 255

challenging to truly know whether politics indeed feels more tense, or if
we just have better evidence of it now.
What we think is more likely is a shift over time that coincides with
increasing levels of polarization, though again the evidence to test this
idea is largely circumstantial. There is some evidence that people might be
talking about politics more frequently. On our CIPI I Survey, we asked
participants to make direct comparisons between their perceptions of the
2008 and 2016 elections. About half of our participants reported that
they discussed politics more often in 2016 than they did in 2008, but
approximately 38 percent reported that they discussed politics about the
same amount. We observed the same trend for how often participants
reported that they paid attention to politics. We do not know if this is part
of a general trend toward more discussion or if this was an aberration;
only time and repeated data collection can tell us that. However, we note
that there is no evidence that people want to be talking about politics
more, even if they are.
There is also some evidence that the nature of political interactions has
changed over time. Using a combination of ANES data and batteries they
deployed on the CCES, Butters and Hare (2020) argue that rates of
disagreeable discussion have gone down between 2000 and 2016, but
differences in the format of the network battery make a clean comparison
difficult. There is growing evidence that people are more likely to date and
marry copartisans (Huber and Malhotra 2017; Iyengar, Lelkes,
Levendusky, Malhotra, and Westwood 2019; Easton and Holbein
2020), which suggests more homophily in spousal or romantic partner
discussions. This is important given that spouses are core discussants for
many people.
What about the experience of discussion itself? Gibson and colleagues
find that self-censorship has increased over time. Despite increasing rates
of self-censorship, an ANES pilot study fielded in 2020 reveals that
individuals do not report that they have a harder time talking about
politics with friends and family than they have in the past. This is puzzling
and merits future research. Have Americans become so comfortable self-
censoring that they are essentially desensitized to political discussion,
unaware of their difficulty expressing their true opinions?
We cannot say conclusively that people are talking about politics more
often while opting out of disagreeable discussion more frequently, or
whether the experience of political conversation has changed for the
worse as our country has become more polarized and fractured. But our
concern, at its core, is not whether there are too many or too few people
talking about politics, who talks to whom, or whether they are enjoying
the experience. Our concern is about the subtle but systematic biases in
who opts in and who opts out of discussion, and what gets communicated
in the conversations that do happen. This worry echoes and expands on
the conclusion Mutz (2006) draws from her study of political discussion
in the early 2000s, that “the meek and mild abstain from participation so
as not to offend anyone, while ideologically extreme political bullies rule
the Earth” (p. 124). We cannot say conclusively that the biases we explore
are worse than they were in the past, but we are confident that they have
not improved. And while we lack evidence to show that social concerns
are a more prevalent driver of people’s behavior now than they were in
the past, we expect that they are, given the rise of social sorting and social
polarization.
Our findings raise questions about the benefits of discussion and amplify concerns about its costs, calling attention to the variation in the discussion experiences people actually have. Americans make socially optimal decisions that have detrimental outcomes for their political engagement. In
other words, in an effort to protect their social relationships and their
sense of self, people shy away from expressing their true opinions. We
anticipate that the effect of these biases is to create distortions in how
people view their own attitudes in relationship to others’ attitudes, as well
as to tilt the content and tone of public discourse away from moderation.
We hope that those who call on the American people to talk to each
other more about politics – opinion editorialists, civil society organiza-
tions, and politicians – will be more nuanced in their request. Any
recommendation that talk is the answer to our problems must be
research-driven and reflect an understanding of how findings from experi-
mental studies would realistically translate to organic discussions. For
example, political talk entails people’s prior assumptions about their
discussants; suggesting that people can communicate across lines of dif-
ference without taking into account the stereotypes they bring with them
is naive. More research into the perceptions, as well as the meta-
perceptions, that people have is necessary. We should also be mindful
that calling on people to talk may risk leading them to censor or conform their
opinions; the people who most need the encouragement to engage are
most likely to engage in these democratically suboptimal behaviors if left
to their own devices. We also encourage more research into the idea that
people see conversation as a channel in which to get to know others as
people, as our findings are mixed on this point. Better understanding
human curiosity on this very social facet of discussion seems like a
tractable way forward, where specific recommendations could be made
about the rules of engagement for fruitful political discussion.
Thinking about conversation as a building block for democracy
requires us to grapple with the fact that discussion is a social process,
with all the messiness that entails. It has never been more important for
academics studying political discussion to connect their work to the real
world. We hope others will push beyond what we demonstrate here to
pay more attention to the process of discussion instead of reducing it to a
black box, focusing exclusively on inputs and outputs. Collectively, we
need more theorization about the psychology of political discussion and
must develop more refined measures to capture nuanced facets of discus-
sion behavior that take seriously the possibility of deleterious outcomes of
talking about politics.

Notes

Chapter 1
1 This variation could be due to legitimate ebbs and flows in political discussion
frequency. For example, there are higher rates of political discussion in
2000 and 2016, after incredibly close presidential elections. However, there is
also variation in survey question wording and structure in these years that could
alter rates of measured political discussion frequency.
2 See Neblo (2007) for a helpful discussion of the theoretical diversity of deliber-
ation from a normative perspective, as well as the measurement challenges that
stem from this diversity.
3 Conover, Searing, and Crewe (2002) make important comparisons between
respondents from the United States and Britain. While both had discussions
most frequently in their homes (31.5% of American respondents and 30.1% of
British respondents reported that they have discussions in the home often),
Americans had discussions at work much more frequently than did the
British: 28.6% of Americans reported having these discussions at work often,
whereas only 16.7% of British respondents did so.
4 Several news articles suggest that politics frequently comes up at Thanksgiving meals and that individuals try to avoid these dreadful conversations: www.theatlantic.com/politics/archive/2017/11/go-ahead-talk-about-politics-at-thanksgiving/546536/; www.cnn.com/2016/11/22/health/thanksgiving-holiday-conversation-survival-guide-trnd/index.html; www.cnn.com/2017/11/22/politics/thanksgiving-dinner-politics/index.html; www.nbcnews.com/better/health/how-survive-thanksgiving-when-politics-loom-large-ncna821206; www.npr.org/2017/11/21/565482714/americans-say-to-pass-the-turkey-not-the-politics-at-thanksgiving-this-year; www.npr.org/2017/11/21/565613945/americans-prefer-a-politics-free-discussion-at-thanksgiving-dinner.
5 www.pewresearch.org/fact-tank/2016/12/22/how-americans-are-talking-about-trumps-election-in-6-charts.
6 While we are cognizant and appreciative of the fact that much political inter-
action now occurs in online contexts – and we have each written separately on


https://doi.org/10.1017/9781108912495.011 Published online by Cambridge University Press


260 Notes to pages 10–37

this topic in other work – our focus here is on in-person communication. We do
not believe the increase in political engagement online has displaced face-to-face
communication. If anything, the opposite could be true: The exchange of infor-
mation on the Internet and social media provides more information to use when
choosing discussants and has created more fodder for in-person conversations.
7 To be clear, we are not talking about deliberation. There is a large literature, both
theoretically and empirically, on deliberation. Here, we are less interested in the
features of a discussion that might lead groups to make “better” collective choices
or might ensure that all members of a group are heard. While our research
agenda could be extended to democratic outcomes, such as collective choice, in
this book we are focused on the preliminary stages of the day-to-day political
interactions in which many Americans (sometimes begrudgingly) engage.
8 We used to call this concept “social distancing.” However, in 2020, the term
“social distancing” took on a whole new meaning. For our purposes, we are not
referring to keeping six feet away from others or staying home to reduce the
spread of COVID-19. Instead, we are referring to individuals cutting ties or
reducing communication with their social contacts.

Chapter 2
1 www.people-press.org/2019/06/19/the-publics-level-of-comfort-talking-politics-and-trump
2 See Bullock et al. (2015) and Prior et al. (2015) for important critiques of these
findings. See Bullock and Lenz (2019) for a review. These studies argue that the
partisan knowledge gaps that we observe are due to expressive responding or
“partisan cheerleading.” The authors find that when participants are given
small monetary incentives for correct answers, the partisan gaps shrink sub-
stantially, suggesting that partisans are aware of the correct answers but choose
to report answers that support their party on surveys. It is also possible that
individuals respond to knowledge questions as opinion questions.
3 Mutz’s operationalization of social accountability is a measure of the individual
difference trait of conflict avoidance. Thus, a precise interpretation of what she
finds is that conflict avoidant individuals are less likely to anticipate voting
when they have cross-cutting networks.
4 Readers interested in a modern spin on this concept should check out Black
Mirror’s Netflix adaptation of this concept, Bandersnatch.
5 Specifically, their seminal study revealed that only 66 percent of Reagan voters
correctly identified Mondale voters in their networks and only 55 percent of
Mondale voters correctly identified Reagan voters in their networks.
6 Surely, political science majors, graduate students, and faculty members can
relate to the scenario in which a seatmate on a plane or a rideshare driver tries
to make innocent, polite conversation with you by asking what you do or what
you’re studying in school. We’re all then faced with a decision: (a) to reveal that
we study political science, risking a potentially long, contentious discussion
about current events and candidates; or (b) to dodge the conversation with a
white lie about what we study (applied statistics is usually a safe – and not
totally inaccurate – choice); or (c) hide behind our headphones. We acknow-
ledge that this situation happens more frequently to those whose occupations
relate to politics, but analysis of our free response data confirms that it happens
to those who do not work directly with politics as well.
7 Silencing one’s views is important based on a multitude of research pioneered by
Noelle-Neumann suggesting that individuals will silence their views when faced
with others who disagree. However, we consider silencing behavior as one that
derails a conversation, or an indication that a person opts out of a political
discussion, so we focus more on silencing in Chapter 5 (Stage 2: Decision).

Chapter 3
1 Respondents were presented with the full list of considerations in a randomized
order. They were not shown the accuracy, affiliation, and affirmation labels.
The specific question wording was “which of the following seem like plausible
considerations for John/Sarah? Check all that apply.”
2 An additional 21 percent of responses were deemed by both coders to be a
meaningful answer to the question of “why” a subject did or did not participate
in a conversation but could not be coded into one of the three key categories.
Finally, 19 percent of responses provided information about other facets of the
conversation but did not provide an answer about why a subject did or did not
talk. The remainder of the answers were uninformative.
3 The sample was originally supposed to be 1,500 respondents, but errors on
SSI’s end required them to increase the sample to ensure it was representative.
4 Specifically, respondents were shown the following prompt: “At this point in
the survey, we would like you to stop for a moment and think about a recent
time when you had a political discussion with someone or with a group of
people. Think about who was there, where you were, what you discussed, and
how you felt. Now please describe the experience you just thought about, giving
as much detail as possible.”
5 For more information about the poll, see www.scribd.com/document/325387759/TargetSmart-William-Mary-Poll-Ohio-Statewide.
6 We strongly urge you to read the Acknowledgements section of this book to see
for yourself the tremendous work that they completed. Please.
7 We note here that we make reference to some of the findings that first appeared
in Carlson, McClean, and Settle (2019), but that the article only uses the fall
2014 data collection, given the key role of network analysis in that paper. It
only made sense to collect the network data in 2014 in the full student subject
pool, not in the ad hoc pool we created in September 2015. Therefore, there are
some slight discrepancies between the reported results, though they do not
change the overall pattern in the data.
8 It is difficult to compare the magnitude of psychophysiological effects across
samples (Settle et al. 2020) and therefore we did not want to rely on compari-
sons with the larger body of work using psychophysiology to measure response
to contention or negativity after media exposure (Mutz and Reeves 2005; Mutz
2015; Renshon, Lee and Tingley 2015; Soroka, Fournier, and Nir 2019).
9 Previous research suggests that self-report emotional responses and physio-
logical response should not necessarily be correlated (Settle et al. 2020).
Therefore, we do not integrate the two measures together.
10 Of the 182 students eligible to participate in the study, 165 students completed
the pre-survey. Of those 165, 144 showed up for the lab portion of the study,
but we only have physiological data for 128 of the subjects who participated.
This rate of data loss is consistent with the observation in Settle et al. (2020) to
expect a rate of about 10–15 percent data loss during physiological studies.
11 If only one student signed up for a time slot, a student research assistant acted
as a confederate. As a result, there were more than seventy-two conversations.
Unfortunately, we must exclude subjects who were paired with a confederate
in some data analyses, as we do not have any pre-survey results for the
confederate.
12 We ended up with heterogeneity in our discussion dyads, although the high
rate of self-identified Independents meant that we ended up with few dyads
where a self-identified Republican was talking to a self-identified Democrat. If
we code partisan leaners as partisans, we get a larger number of partisan
discordant discussions.

Chapter 4
1 www.washingtonpost.com/lifestyle/style/the-maga-hat-is-not-a-statement-of-policy-its-an-inflammatory-declaration-of-identity/2019/01/23/9fe84bc0-1f39-11e9-8e21-59a09ff1e2a1_story.html; www.npr.org/2019/01/27/689191278/the-symbol-of-the-maga-hat; www.pussyhatproject.com/our-story; https://en.wikipedia.org/wiki/Yellow_vests_movement; www.aclu.org/blog/free-speech/what-black-armband-means-forty-years-later; www.ippfwhr.org/resource/the-green-hankerchief-the-new-symbol-of-the-international-womens-resistance.
2 If you research political discussion and are unfamiliar with MacKuen (1990),
do yourself a favor: Put down our book and track down MacKuen’s piece for a
thorough read before you continue with Chapters 4 and 5.
3 She writes in a section of her book subtitled A New Human Ability Discovered:
Perceiving the Climate of Opinion that a series of studies conducted in
Germany in the 1970s, “consistently confirmed the people’s apparent ability
to perceive something about majority and minority opinions, to sense the
frequency distribution of pro- and con viewpoints, and this is all quite inde-
pendently of any published poll figures” (1993, 9). For a useful review of the
Spiral of Silence literature, see Scheufele and Moy (2000).
4 Specifically, participants were asked “Imagine that you were trying to guess
someone’s political views, but you couldn’t ask them directly. How would you
go about guessing their political views?” Participants could then type
their responses.
5 See Eveland et al. (2019) and Eveland and Hutchens (2013) for an important
discussion of challenges in measuring accuracy in reporting discussants’
political views.

6 Confidence was measured on a five-point scale, where 0=no better than
chance; 1=somewhat confident; 2=fairly confident; 3=very confident; and
4=virtually certain.
7 We asked respondents how confident they would be guessing someone’s
political party if they knew (1) demographic characteristics, such as their
age, gender, or race; (2) information they post on social media, such as
Facebook or Twitter; (3) which candidate most of their friends support; (4)
how frequently they attend religious services; (5) where they live; and (6)
which news sources they use most regularly.
8 Gosling (2008) argues that where and how individuals choose to display
things that could explicitly signal political views reflect the intentionality of
their expression. For example, if someone puts a “NOPE Keep the Change”
bumper sticker on his or her car, facing outward for all to see, it means that he
or she is likely trying to project that as part of his or her identity, wanting
others to know how he or she feels about Obama. However, someone else
might feel just as strongly, but put the bumper sticker on his or her dashboard
instead, facing inward, as a more private expression of his or her political
views.
9 In fact, we once had a student share that she deliberately put political stickers
on her laptop to “keep conservatives away.”
10 www.theguardian.com/us-news/2016/sep/29/republican-democrat-fashion-
yougov.
11 For instance, California is widely recognized as a Democratic stronghold,
voting for the Democratic presidential candidate in every election since
1992, with 39 of its 53 seats in the House currently being occupied by
Democrats. In contrast, Southern states are typically viewed as Republican
strongholds. Alabama, for example, has voted for the Republican presidential
candidate in every election since 1980, and in most sessions of Congress since
1996 at least five of its seven congressional seats have been held by
Republicans.
12 www.pewforum.org/2012/10/09/nones-on-the-rise.
13 Using a sample of the Twitter followers of members of Congress, the authors
identify who else those Twitter users follow, uncovering a broad range of
cultural leaders and icons. Their analysis of a set of Fortune 100 companies,
restaurants, musicians, and musical genres demonstrates strong correlations
between the objective measures of ideology and the ideological leanings of
Twitter followers of these entities.
14 Participants were given the following prompt: “Below we have listed some
characteristics about people. How would you describe someone’s political
party affiliation if all you knew was that he or she: . . . ” We adapted the
language of the question and the characteristics listed in Table 4.2 from a Pew
Research Center 2016 report in which they asked participants if it would
make it easier or harder to get along with someone new in their community if
they had each of these characteristics. www.people-press.org/2016/06/22/4-
partisan-stereotypes-views-of-republicans-and-democrats-as-neighbors.
15 Interested readers can even take a quiz: www.psychologistworld.com/tests/
conservativism-language.

16 www.journalism.org/2014/10/21/political-polarization-media-habits.
17 Specifically, participants were shown the following prompt: “Sometimes our
first impressions of people are quite accurate. We are interested in your first
impressions of the person listed below.” In cases in which participants evalu-
ated two people, the prompt read: “Sometimes our first impressions of people
are quite accurate. We are interested in your first impressions of the people
listed below” (emphasis added). The party identification question was:
“NAME is an American registered voter. With which party do you think he
is registered?” with response options Democrat and Republican. The ideology
question was: “We hear a lot of talk these days about liberals and conserva-
tives. Here is a seven-point scale on which the political views that people might
hold are arranged from extremely liberal to extremely conservative. Where
would you place NAME on this scale?”
18 www.claritycampaigns.com/names.
19 There were 61,830 total registered voters named Dwayne and 95,864 total
registered voters named Duane.
20 58 percent of registered voters named Gideon were registered Democrats and
57 percent of registered voters named Jedediah were registered Republicans.
21 There were 2,301 Jedidiahs, 3,320 Gideons, and 4,958 Brendens registered
to vote.
22 We shy away from the phrase “accurate inference” because there are liberals
with phonemically conservative names and vice versa. However, the majority
of subjects identified the name as being liberal or conservative in a manner
concordant with the probability that name is more associated with one party
or the other.
23 Building on the findings from the apolitical cues analysis, we did not want to
describe Kent (phonetically conservative) spending time at the Met, when Kent
would be unlikely to be at the Met in the first place, at least relative to Liam
(phonetically liberal). Importantly, we did not want participants to infer
ideology based on the context (e.g. assume that Liam was liberal because he
was spending time at the Met) instead of the name as a cue. We therefore tried
to write scenarios where the character was in a situation that would not evoke
sociopolitical associations.
24 This study was fielded on Mechanical Turk in July 2015, N = 661.
25 We collected the list of traits from Crawford, Modri, and Motyl (2013).
Participants rated their agreement on a scale from 1 (strongly disagree) to 6
(strongly agree). We present the full results in the Appendix. In short, when
forced to “take a side” – in other words, when we bisect the six-point scale –
Americans seem willing to give their outgroup the benefit of the doubt on a
variety of traits. Majorities of both groups rated their ingroup and outgroup as
passionate, sociable, friendly, and extraverted. They also rated them as
thorough, organized, competent, intelligent, skillful, capable, and
conscientious. And finally, they saw them as polite, but emotional.
26 For example, we consider one of the most polarizing issues in contemporary
American politics: abortion. The stereotype about Republicans on abortion
might be “[t]hey use ‘family values’ as a justification to try to impose their
morals on everyone else’s reproduction choices.” Here, we extrapolate greatly
from the Republican policy position of wanting government involvement in
making abortion illegal, borrowing terms from pundits to suggest that
Republicans are using the term “family values” to cloak their intrusion on
an individual’s right to choose. In contrast, the Democratic stereotype for
abortion might be “[t]hey see abortion as a solution to careless behavior
without considering the sanctity of life.” Here, we distort extensively from
the party-line Democratic Party position of legalizing abortion, allowing
women to choose, to highlight the implicit encouragement of engaging in
sexual behavior and failing to consider the life of the unborn child.
27 Some might argue that the share of conditional conversationalists who
report trying to guess the views of potential discussants ahead of time – only
52 percent – is surprisingly low. If they are truly trying to guess the views
of others before engaging in a discussion, we should expect this number to be
much higher. We note here that we are only capturing the extent to which
people are aware of their guessing behavior.
28 www.latimes.com/business/hollywood/la-fi-ct-conservatives-hollywood-2017
0311-story.html.
29 www.washingtonpost.com/news/grade-point/wp/2016/09/20/at-the-countrys-
most-elite-and-liberal-colleges-some-trump-supporters-stay.
30 www.theguardian.com/us-news/2016/mar/03/secret-donald-trump-voters-
speak-out.
31 www.usnews.com/news/articles/2016-07-01/are-voters-too-embarrassed-to-
say-they-support-trump.
32 www.washingtonpost.com/local/its-hard-enough-to-be-a-republican-in-deep-
blue-dc-try-being-a-trump-voter/2016/02/22/d8f18b96-d4f2-11e5-9823-02b9
05009f99_story.html; www.washingtonian.com/2017/01/18/what-its-like-to-
be-a-trump-supporter-in-dc.
33 Describing the hostility in the workplace in liberal Hollywood, LA Times
reporter David Ng concludes that “[f]or the vast majority of conservatives
who work in entertainment, going to set or the office each day has become a
game of avoidance and secrecy. The political closet is now a necessity for
many in an industry that is among the most liberal in the country.” Writing for
U.S. News and World Report, Joseph P. Williams describes the “Trump
Effect” as “the notion that voters won’t admit they support him, because it’s
distasteful to back a populist celebrity billionaire who’s unafraid to offend
immigrants, women and minorities.”

Chapter 5
1 For example, Cramer writes, “[w]hen disagreement appeared to surface, it was
not met with challenge or counterpoint but with silence or an abrupt change in
topic” (2004, p. 111). In the women’s guild she studies, she observes that,
“[w]hen the topic of Bill Clinton is touched upon, the women avoid stating
their views, rather than jump at the chance to criticize or support him. In a
context in which they are not equipped with a perspective rooted in group
identity, they enter into conversations about public figures tentatively, if at all”
(p. 116). She chose to record these as instances of political talk: “Even their
attempts to change the topic—for example, when someone raised their cup of
coffee and said, ‘Here’s to Bill Clinton!’ and another responded, ‘Oh, please,
I haven’t had my breakfast yet . . . ’—were coded as political” (p. 38).
2 As an example, one of our respondents wrote: “[I] was talking to my sister
who is very liberal the conversation changed to presidential politics and i had
to leave because my sister argues without facts she wont [sic] even entertain
the facts so it was just easier to leave than fight a battle that i couldnt
[sic] win.”
3 An example of this comes from a respondent who said she avoided a conver-
sation “because it offended my friends [sic] husband and things were getting
tense so i changed the conversation to make a friendlier atmosphere.”
4 For instance, a survey respondent in our Thanksgiving Study wrote: “I have
these [political] discussions at work, and I am the black sheep in an office of
liberals. I usually bite my tongue to avoid longer arguments.”
5 Of course, there are all sorts of limitations with this approach. We do not
argue that the discussions people recount are a random sample of all of the
discussions they have. Certain kinds of discussions are likely to be more
memorable than others. However, our larger argument is about the way in
which the experience of discussion feeds into future choices about discussion
and other political behaviors. Therefore, we think that the discussions people
choose to share are likely those that are most important in shaping their
attitudes about the process of discussion itself.
6 Specifically, respondents were shown one of two prompts: (1) “Please think
about a single time when you could have discussed politics with someone or a
group of people, but chose not to participate in the discussion. Think about
who was there and your relationship with those people, what topics you could
have discussed, whether the others would have agreed with you, the emotions
you experienced, and why you chose not to participate in the discussion”; (2)
“Please think about a single time when you discussed politics with someone or
a group of people. Think about who was there and your relationship with
those people, what topics you discussed, whether the others agreed with you,
the emotions you experienced, and why you chose to participate in the
discussion.”
7 Our pilot testing did not indicate significant differences between vignettes
about John or Sarah. Previous research on vignette experiments suggests that
the character of interest in the vignettes should be as similar to the participant
as possible, so we chose to match the names on gender.
8 In our vignettes, deflecting is operationalized through response wording in the
vignette in a few ways, including the character saying nothing at all or trying
to change the subject. It is important to note that silencing here does not
include walking away from the conversation altogether, which might be a
stronger behavior more reflective of truly avoiding a conversation.
9 See Settle and Carlson (2019).
10 This study was fielded early in 2016 when primaries were still underway. At
the time, Marco Rubio and Ted Cruz still looked like promising Republican
nominees, while most news coverage assumed Trump would never get
the nomination.

11 Because this finding ran counter to all of our previous work, we wanted to
reproduce the result. We conducted a follow-up study on our student sample,
specifically focusing on knowledge. In the fall of 2017, we asked them to name
their price to discuss the politics of health care reform with groups of know-
ledgeable or unknowledgeable Republicans and Democrats. Our sample was
dominated by Democrats, so we were underpowered to analyze Republicans
separately. We found that Democrats demanded significantly more to discuss
health care reform with Republicans overall, consistent with the premise of
avoiding disagreement. We then find suggestive evidence that they demanded
more to discuss health care reform with unknowledgeable Republicans than
with knowledgeable Republicans.

Chapter 6
1 In the Appendix, we elaborate on the variety of criteria that could be used for
data exclusion given the finicky nature of psychophysiological data. For this
analysis, we chose to use the full set of participants from whom we collected
psychophysiological data. We measured the psychophysiological response to
the initial stimulus in which subjects found out they were to have a political
discussion. We measured the psychophysiological response to the videos using
only the measurements of subjects who saw that category of video first in a
counterbalanced design. The full pattern of results using other criteria is
reported in the Appendix, but alternate specifications do not change our
interpretation and, in fact, tend to increase the differences between stimulus
types.
2 Although the confidence intervals of the political video and the discussion
stimulus overlap in Figure 6.1, there is a significant difference of means
between the conditions if we use the full set of responses for the contentious
political videos (e.g. both subjects who watched that set of videos first and
those who watched them after the apolitical videos).
3 More details are provided in the Appendix about our data cleaning processes
and exclusion criteria. See also Carlson, McClean, and Settle (2019) and Settle
et al. (2020).
4 We are almost certainly underpowered to detect significant differences
between the groups, as there are only between forty and fifty subjects in each
group. While the pattern of results for the heart rate data suggests that we are
underpowered, we were surprised to find a complete lack of support for our
expectations in the EDA data.
5 Previous research suggests that self-reported emotional responses and
psychophysiological responses should not necessarily be correlated (Settle
et al. 2020). Therefore, we do not integrate the two measures.
6 Respondents were coded as experiencing the emotion if they marked a 4 or 5
on the five-point scale. See Appendix for more details.
7 Originally, we had intended to answer a third question as well: Is political
disagreement distinct from disagreement on other contentious issues?
However, as we explain in the Appendix, the operationalization of this test
was unsuccessful, rendering it difficult to draw conclusions from that aspect of
the design.

8 The remaining conversations were between self-identified true Independents
and leaners or partisans. Refer to Chapter 3 for a summary of the study, but
we note here that the partisan clash analyses are based on the 112 subjects
where we have both physiologically informative data as well as information
about the partisanship of both the subject and his or her discussion partner.
This represents a subset of the 144 participants who participated in the
lab study.
9 This prompt was not accompanied by a thirty-second period for discussion
afterwards. Thus, we are measuring their EDA while they are reading the
instruction prompt.
10 In both the pre-survey and the post-survey, the issue question response options
were “agree,” “disagree,” or “don’t know.” We constructed variables where
accuracy was defined as correctly identifying the partner’s opinion as “agree”
or “disagree.” We exclude from analysis cases where the discussion partner
reported that they did not have an opinion on the pre-survey.
11 That being said, partisan identity was much more salient than the other
identity we asked subjects to recognize: whether a discussion partner was an
in-state or out-of-state student. About 60 percent of out-of-state students
were perceived as in-state students.

Chapter 7
1 We thank Lisa Argyle for this important and thought-provoking suggestion
and encourage future research on political discussion to unpack what the
socially desirable response to disagreement really is.
2 There are important caveats to how we should interpret these percentages. First,
the response options for the considerations were not evenly distributed across
the AAA categories. For example, there were seven considerations (three con-
cerns and four opportunities) that fell into the affirmation category, but only five
considerations (three concerns and two opportunities) in the accuracy category.
This means that if respondents were clicking at random, they would be more
likely to select an affirmation consideration than an accuracy or affiliation
consideration, for example. Second, much like our analysis of free response
data in Chapter 4, we do not have a way of distinguishing between respondents
who left this question blank because they did not think any of the considerations
applied and those who left it blank because they skipped the question altogether.
Because of this, we follow our method in Chapter 4 and exclude the 8 percent
of respondents who left the question blank.
3 What would John/Sarah do in response to the person’s question?
Say that s/he strongly disagrees with them, even though s/he really just
disagrees with them (coded as entrench)
Say that s/he disagrees with them, which s/he does (coded as true opinion)
Say that s/he slightly disagrees with them, even though s/he really disagrees
with them more than slightly (coded as censor)
Say that s/he agrees with them, even though s/he really disagrees with them
(coded as conform)
Say nothing on the subject, even though s/he disagrees with them (coded as
silence/deflect)
4 Those who thought the character would entrench were most likely to think
that the opportunity to express his/her real political opinions was the most
important consideration. Those who thought the character would express his
or her true opinion were most likely to report the opportunity to discuss
important issues with these people as the most important consideration.
5 The lower bound is from the CIPI I Vignette Experiment; the upper bound is
from the Power x Partisan Composition Pilot Study (see Table 3.8).
6 A total of seventy students participated in this study, but seven were removed
from the analysis because of treatment administration errors such as missing
confederates, confederates using the wrong script, and participants knowing
the confederates personally. The remaining sixty-three participants were
included in most analyses.
7 Participants shared their opinion in the randomly assigned order for one
issue at a time. Each of the fourteen issues was presented one at a time on a
screen that changed to the next issue automatically after one minute.
Participants and confederates were instructed to state their opinion on the
question on the screen and discuss it if they wanted. Because the questions
presented on the screen were the same as the pretest, participants were
sometimes asked to report a number to indicate where their opinion fell on
a scale. Aside from the order in which participants were randomly assigned
to give their responses, the procedures were the same across the treatment
groups. In other words, both treatments involved exposing participants to
differing viewpoints from their own but varied the order in which that
information was disclosed. The treatment, therefore, was deliberately very
subtle and designed to test whether people would conform to a group’s
opinion when given the opportunity to do so. Overall, the lab session
included ten “critical” questions on which the confederates disagreed with
the participant according to the script, and four “faux” questions designed
to make the study more realistic, with confederates disagreeing with each
other, agreeing with the participant, or providing a neutral response, as
shown in Table 2 and Table 7 in the article appendix.
8 The primary purpose of the posttest was to examine the distinction between
persuasion or attitude change and conformity. If participants gave the same
responses on the pretest and posttest, but gave a different response in the lab
session, then we have strong evidence that individuals were indeed conforming
in the lab. However, if individuals gave the same response in the lab session
and on the posttest, but this response differed from the pretest, then this could
be evidence of attitude change or persuasion. While this distinction between
temporary conformity in the discussion and potential persuasion as a result of
the discussion is important conceptually, this method cannot account for
general response instability.
9 Based on extant findings on conformity in social psychology, we expected
participants to conform to a group’s political opinion when they had heard the
confederates state opinions with which they disagreed. In the control condi-
tion, participants would not know the political opinions of the confederates
before stating their opinions, so they would have limited information with
which to conform. It is possible that participants could intuit that the confed-
erates generally disagreed with the participant over the course of the study,
which means that we might observe some preemptive “conformity” in the
control condition. However, we expect to see a greater frequency of conform-
ity in the treatment condition, when participants are certain of the group’s
opinions prior to stating their own opinion, compared to the control condition
where they can only surmise the group’s opinions over time in the study.
10 For example, if on the pretest a participant indicated that he or she strongly
agreed with something, but in the lab only said that he or she agreed, that
would not be coded as potential conformity. If that participant said that he or
she disagreed or strongly disagreed in the lab, that would be considered
potential conformity. We call this potential conformity because the observed
attitude shift in the lab has the potential to be conformity, but it could also be
genuine attitude change.
11 Note that both of our measures of conformity require movement across a
midpoint in the scale, a much stricter requirement than previous studies
exploring the public expression of opinions (Levitan and Verhulst 2015).
We do this in order to differentiate the concept of conformity from other
factors that could induce movement on a response scale for an issue position
between a pretest and a lab session. On questions utilizing a response scale
with more than five points, some movement is likely to be expected simply
because of the lack of distinction in a subject’s mind on the scale points, for
example a “5” and “6” on a seven-point scale. We cannot say with certainty
that this movement would represent conformity and is not simply a form of
response instability. By limiting the measurement of our construct to opinions
that actually “flip sides,” we can be more confident that subjects are publicly
expressing an opinion that is meaningfully different from the opinion they
expressed privately on the pretest.
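The midpoint-crossing rule described in this note can be sketched in a few lines of Python; the function name, the five-point scale coding, and the example values are our own illustrative assumptions rather than the study's actual coding script:

```python
def crossed_midpoint(pretest: int, lab: int, midpoint: int = 3) -> bool:
    """Flag potential conformity: the lab response must fall on the
    opposite side of the scale midpoint from the pretest response,
    illustrated here with a five-point agree/disagree scale."""
    return (pretest < midpoint < lab) or (lab < midpoint < pretest)

# Softening from "strongly agree" (5) to "agree" (4) stays on the same
# side of the midpoint and is not counted; flipping from "agree" (4)
# to "disagree" (2) is.
print(crossed_midpoint(5, 4))  # False
print(crossed_midpoint(4, 2))  # True
```

Requiring an actual flip across the midpoint, rather than any one-point movement, is what separates conformity from ordinary response instability on adjacent scale points.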
12 It is possible that we are statistically underpowered to detect a significant
difference between the treatment groups by pure conformity standards.
Because pure conformity is measured based on posttest results, only those
participants who completed the posttest can be included in the analysis, which
reduces our sample to forty-six participants for the pure conformity tests. As
Figure 3 in the article illustrates, participants conformed in both conditions,
but the frequency of conformity was significantly higher in the treatment
group for potential conformity. Although it is possible that participants in
the control condition were able to guess the group’s opinion over the course of
the study, we find that participants were no more likely to conform at the
beginning of the study than at the end, making this less likely.
13 We thank Erin Rossiter for her expertise and research assistance in designing
and executing this process.
14 We used Young and Soroka’s (2012) Lexicoder Sentiment Dictionary to count
positive words (1,709 in the dictionary) and negative words (2,858 in the
dictionary) and followed their approach to construct the measure as the
proportion of positive words minus the proportion of negative words in the
text in question. A score of 1 should be interpreted as a one percentage-point
gap between the number of positive words and the number of negative words
used in the text on average.
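The construction of this measure can be sketched as follows; the toy word lists stand in for the actual Lexicoder dictionary entries, and the function name and tokenization are our own simplifications:

```python
def net_sentiment(text: str, positive: set, negative: set) -> float:
    """Percentage of positive words minus percentage of negative words,
    following the construction described above; a score of 1 corresponds
    to a one percentage-point gap."""
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w in positive for w in words)
    neg = sum(w in negative for w in words)
    return 100.0 * (pos - neg) / len(words)

# Toy stand-ins for the 1,709 positive and 2,858 negative Lexicoder entries.
POS = {"good", "agree", "respect"}
NEG = {"bad", "angry"}
print(net_sentiment("i agree that was a good point", POS, NEG))  # ~28.6
```

Because both counts are normalized by the total word count, the measure is comparable across conversation segments of different lengths.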
15 The phrases we hand coded for in the partisan qualification measure would be
considered verbal hedging. The key distinctions are (1) the partisan qualifica-
tion measure is strictly examined in the identity revelation segment of the
study and the phrases had to be used at the moment of revelation; and (2) we
hand coded the partisan qualification measure but used automated coding for
the verbal hedging measure.
16 Although these metrics are available for the overall conversation, relationships
studied at that level would be harder to interpret given the heterogeneity of the
discussion prompts. For example, an increase in words spoken during the
identity revelation likely reveals less certainty about one’s identity, whereas
an increase during the issue discussion might be better thought of as interest
in the topic.
17 We structured the question as a forced choice in order to push the subjects
toward stating a partisan identity if they had one: More than 91 percent of the
sample leaned toward or identified with a party in our pretest survey. However,
in hindsight, we wish we had allowed people to identify as Independents. An
alternative explanation for the pattern of results described here is that
Independents were simply less sure how to answer the question.
18 Here, the unit of analysis is the conversation, cutting our N to the fifty-eight
conversations where we had pretest information for both subjects. Thus, we
think we are likely underpowered to detect the small difference in the number
of words expressed in the partisan aligned versus partisan clash conversations.
19 This partisan tendency did not seem to extend to the conversations about
specific policy positions, suggesting that it may be limited to the expression of
one’s identity, or may be an artifact of the way we asked the identity question.

Chapter 8
1 “People are afraid to be disliked, or to be cast out of the group, or appear
different from the rest, as opposed to standing up and saying, ‘hey!’”
(Conover, Searing, and Crewe 2002, p. 55).
2 “Citizens are more willing to state their preferences in private discussions with
family and close friends, because the risks of disclosing something unknown
about oneself are lower; not because these discussions are less revealing, but
because we are more likely to be talking with people who we already know
and accept who we are. By contrast, public discussions with acquaintances or
strangers pose a greater danger; because citizens are less known to each other,
there is much more to reveal – or hide. By opting out of public discussion,
people can protect the privacy of their preferences and thus the privacy of their
identities” (Conover, Searing, and Crewe 2002, p. 57).
3 “By not discussing controversial issues, we avoid learning more than we really
‘need’ to know about friends and acquaintances, things that might disrupt our
ongoing relationships with them. ‘You might find out something that maybe
you don’t want to find out about somebody . . . And these come down to real
value systems. So what you do is to back off a little bit, allow that person to
believe what they want to believe.’ Here again, discussions reveal not just the
preferences of the participants but also their characters and sometimes their core
identities. And this can alert discussants to deeper, more fundamental differ-
ences that can make it difficult to maintain relationships” (Conover, Searing,
and Crewe 2002, p. 55). They also write: “[M]utual respect can lead a citizen to
recognize that there is no common ground to share on an issue, and therefore, in
the interest of friendship, perhaps little point to discussing it” (pp. 55–56).
4 “Although citizens are committed to reciprocity in principle, in practice they
sometimes find it difficult to respect their fellow citizens” (Conover and
Searing 2005, p. 278).
5 “Close relationships are strong enough to withstand the potential disruption
that might occur from either abruptly – and rudely – disengaging from a
contested discussion or turning it into a real argument full of passion and
anger. With close friends and family, ‘you feel like they’re going to accept
you . . . You might have a temporary argument but they love you and you love
them. And you’re not going to lose that love just because of politics.’ By
contrast, persuasive and argumentative discussions with acquaintances run
the risk of alienating people and disrupting social relations that must be
maintained (such as co-workers). Outside of close relationships, you cannot
be sure if you will be accepted ‘for yourself or just by what you say or how you
act’” (Conover, Searing, and Crewe 2002, p. 57).
6 Despite the mounting empirical evidence depicting troubling patterns of affect-
ive polarization, others question the extent to which this is a product of
question wording and anchoring around our attitudes toward elites.
Druckman and Levendusky (2019) find that individuals show lower levels of
affective polarization when thinking about outpartisan voters, compared to
outpartisan elites, and Druckman et al. (2021) argue that affective polariza-
tion is really driven by hostility toward ideologically extreme and politically
active outpartisans. Others speculate that affective polarization is largely a
product of expressive responding on surveys: Individuals would not actually
be upset if their child married an outpartisan, but when asked on a survey,
they cheerlead for their party and report that they would be upset.
Methodologically addressing the concern about expressive responding is diffi-
cult, but the overwhelming evidence across surveys, experiments, and con-
texts – paired with observational patterns in sorting – would be inconsistent
with the idea that expressive responding accounts for all of the affective
polarization that has been measured. Even if people imagine unrepresentative
caricatures of outpartisans when they answer survey questions, they very well
could use those same exaggerations when they make real-world evaluations
of outpartisans.
7 www.thedailybeast.com/friends-unfriend-over-politics.
8 www.npr.org/2020/10/27/928209548/dude-i-m-done-when-politics-tears-
families-and-friendships-apart.
9 www.reuters.com/article/us-usa-election-trump-families/you-are-no-longer-
my-mother-a-divided-america-will-struggle-to-heal-after-trump-era-idUSK
BN27I16E.


10 www.washingtonpost.com/lifestyle/wellness/relationship-broken-grief-ambiguous-
loss/2021/02/09/0c28c2b6-673f-11eb-8468-21bc48f07fe5_story.html.
11 The scale that Mason tests, and we use, derives from a “social distance” scale
first used by sociologists. However, we use it the way that Mason does: to
capture social discomfort between partisans that can be measured by levels of
willingness to engage in social contact with outgroup partisans.
12 We are agnostic about whether this should underestimate or overestimate the
actual percentage of people who have engaged in this behavior. On the one
hand, there could be social desirability bias acting to keep people from
answering honestly in the affirmative, if on average people think they should
not cut friends out of their lives based on politics. On the other hand, this
question may activate expressive partisanship, providing an outlet for people to
signal their dislike for the outgroup. It is worth noting that our direct approach
of simply asking people whether they had ever engaged in any of these social
distancing behaviors because of politics should be relatively well-shielded from
the measurement concerns raised by Druckman et al. (2021). The authors argue
that when we ask respondents about their attitudes toward vaguely described
Republicans or Democrats, individuals are really thinking about the stereotypes
of highly engaged, ideologically extreme partisans commonly portrayed in the
media. However, our approach here does not ask people to reflect upon their
behavior or attitudes toward a specific group. Instead, it allows individuals to
reflect upon their experiences and report whether they think they have distanced
(or been distanced from) because of politics. This approach also allows individ-
uals to consider situations in which the political reasons were not based on
partisanship alone. For example, it could have been based on political engage-
ment, regardless of whether they disagreed; differences over policy or candidate
preferences; or some other form of political expression.
13 www.pewinternet.org/2016/10/25/the-political-environment-on-social-media.
14 These results for partisan composition replicate in another vignette experiment
in which we manipulated the power dynamics instead of the knowledge
asymmetries.
15 Specifically, in the knowledge study, we found that those in the partisan
minority were perceived to be the most likely to avoid a future discussion
(mean = 4.6), compared to those in the partisan majority (mean = 3.3) and
those in the balanced condition (mean = 3.6). There is suggestive evidence at
the p < .10 level that those in the balanced condition were more likely to think
the character would avoid a future discussion than those in the majority
condition. In the power study, those in the partisan minority were perceived
to be more likely to avoid a future discussion (mean = 5.2), compared to those
in the partisan majority (mean = 3.6) and those in the balanced condition
(mean = 3.9). There is suggestive evidence at the p < .10 level that those in the
balanced condition were more likely to avoid a future discussion than those in
the majority condition.
16 We replicated these results for partisan composition in an experiment exam-
ining power dynamics instead of knowledge asymmetries.
17 The CCES module we analyzed includes a nationally representative sample of
1,000 respondents. Respondents were surveyed before and after the


2018 midterm elections. The questions we analyze were both measured on the
preelection wave. We thank the Center for American Politics at UC San Diego
for funding support.
18 We acknowledge that estimating the distribution of partisanship within one’s
network can be a challenging task. Individuals might have incentives to
conform to the group (Carlson and Settle 2016; Levitan and Verhulst 2016)
and false consensus biases might lead us to overestimate agreement.
Researchers more commonly use name generators to measure network com-
position characteristics, but we were limited in survey space. Moreover,
Eveland et al. (2019) note that accuracy in inferring others’ views, even in a
name generator approach, can be misleading. We hope that future researchers
can explore methods for measuring network composition in a way that is
efficient on surveys and as accurate as possible.
19 Respondents who reported identifying as Democrats, including Independents
who leaned toward the Democratic Party, were asked to report how likely
they would do each activity with a Republican. Republicans, including
Independents who leaned toward the Republican Party, were asked to report
how likely they would do each activity with a Democrat. Pure Independents
were randomly assigned to view either “Republican” or “Democrat.” In the
analyses that follow, we include pure Independents, but the results remain
unchanged if we exclude them.
20 For example, 5% reported that they absolutely would not spend occasional
social time with an outpartisan, 4% reported that they absolutely would not be
next-door neighbors with an outpartisan, and 6% reported that they abso-
lutely would not be close friends with an outpartisan.
21 The interaction is significant at least at the p < .05 level for all four
weighted models.

Chapter 9
1 We do not expect that every single characteristic we examine will be associated
with every single outcome we measured in this book. From a theoretical
standpoint, that would be expecting individual dispositions to do a lot of work.
From a practical standpoint, that would simply be messy to evaluate critically
for two reasons. First, as we described in Chapter 3, we chose to pursue breadth
over depth in our approach to triangulating the behaviors at each stage of the
cycle. This led to many studies and many outcomes measured, but that would
make for a lot of hypotheses to test when then examined against several
individual dispositions. Second, the personality batteries that we use to measure
the psychological dispositions of interest are somewhat long and we were not
able to include them on each one of our studies. As a result, there are some
outcomes we examined in the previous chapters for which we simply do not
have the individual disposition data.
2 For example, in the cumulative ANES data, we find that 74% of women report
that they have ever discussed politics, whereas 78% of men report the same.
When it comes to discussion frequency, the cumulative ANES data suggests that


men discuss politics more often, but the gap is relatively small: About 13% of
men report discussing politics every day, compared to 12% of women; 26%
of men report never discussing politics, compared to 31% of women.
3 For example, thousands of people engaged with a quiz about guessing whether
someone was a Trump or Biden supporter based on images of what they had in
their refrigerators. The quiz was published by the New York Times in October
2020. While many engaged with the quiz, it faced some criticism of classism.
Find the quiz here: www.nytimes.com/interactive/2020/10/27/upshot/biden-
trump-poll-quiz.html and a critique here: www.refinery29.com/en-us/2020/10/
10131989/fridge-politics-quiz-trump-biden-new-york-times-classism.

Chapter 10
1 www.theatlantic.com/ideas/archive/2020/11/the-crisis-of-american-democracy-
is-not-over/616962; www.npr.org/2020/11/15/935112333/how-the-2020-elec
tion-has-changed-trust-in-u-s-democracy; https://fivethirtyeight.com/features/
how-much-danger-is-american-democracy-in.
2 www.vanderbilt.edu/lapop/news/022317.US-WashingtonPost.pdf
3 www.pewresearch.org/fact-tank/2021/01/15/in-their-own-words-how-americans-
reacted-to-the-rioting-at-the-u-s-capitol.
4 www.insidehighered.com/quicktakes/2021/01/07/hundreds-political-scientists-
call-removing-trump; http://brightlinewatch.org/american-democracy-on-
the-eve-of-the-2020-election/a-democratic-stress-test-the-2020-election-and-
its-aftermathbright-line-watch-november-2020-survey.
5 For example, speaking about the January 6, 2021, riots at the Capitol, House
Minority Leader Kevin McCarthy said in an interview with Full Court Press that
“everybody across this country has some responsibility.” He went on to say,
“[w]hat do we write on our social media? What do we say to one another? How
do we disagree and still not be agreeable even when it comes to opinion.” In a
video posted on Facebook on January 16, he said “[w]e all owe some responsibility
here too, what our rhetoric has been, the different language that people have used,
what you said on social media. We have risen these temperatures so great. In the
last inaugural, we started with, ‘resist.’ We have members of Congress saying we
should get in other people’s faces. We need to lower the temperature. We need to
understand that we’re all Americans, and we need to start respecting differences of
opinion.” In a speech delivered in the aftermath of the insurrection at the Capitol,
Joe Biden said “[t]hrough war and strife, America’s endured much and we will
endure here and we will prevail again, and we’ll prevail now. The work of the
moment and the work of the next four years must be the restoration of democracy,
of decency, honor, respect the Rule of Law, just plain, simple decency, the renewal
of a politics. It’s about solving problems, looking out for one another, not stoking
the flames of hate and chaos. As I said, America is about honor, decency, respect,
tolerance. That’s who we are. That’s who we’ve always been.”
6 www.washingtonpost.com/video/opinions/opinion-our-political-divide-is-danger
ous-a-neuroscientist-and-political-scientist-explain-why/2020/12/24/f470ab01-
a6a9-4e29-8187-2fdaa572176d_video.html?itid=ap_katewoodsome.


7 www.charleskochfoundation.org/courageous-collaborations
8 https://storycorps.org/discover/onesmallstep.
9 www.washingtonpost.com/lifestyle/advice/miss-manners-why-must-i-be-a-teen
ager-in-love/2019/06/02/3ab570d2-7e31-11e9-8bb7-0fc796cf2ec0_story.html.
10 www.washingtonpost.com/technology/2019/08/23/google-says-only-talk-
about-work-workand-definitely-no-politics.
11 In an interview with Reuters, Mayra Gomez said that her twenty-one-year-old
son cut her out of his life, telling her “[y]ou are no longer my mother, because
you are voting for Trump.” www.reuters.com/article/us-usa-election-trump-
families/you-are-no-longer-my-mother-a-divided-america-will-struggle-to-
heal-after-trump-era-idUSKBN27I16E.
12 www.npr.org/2022/01/05/1070362852/trump-big-lie-election-jan-6-families
13 www.npr.org/2019/12/24/791125357/trump-campaign-site-offers-help-in-
winning-arguments-with-snowflake-relatives.
14 “Theories of participatory democracy are in important ways inconsistent with
theories of deliberative democracy. The best possible social environment for
purposes of either one of these two goals would naturally undermine the
other” (Mutz 2006, p. 16).
15 www.washingtonpost.com/history/2020/12/22/martial-law-trump-flynn-history.
16 www.nytimes.com/2020/12/03/us/election-officials-threats-trump.html.



Works Cited

Aarøe, Lene, and Michael Bang Petersen. 2020. “Cognitive biases and communi-
cation strength in social networks: The case of episodic frames.” British
Journal of Political Science 50(4): 1561–1581.
Abrajano, Marisa. 2015. “Reexamining the ‘racial gap’ in political knowledge.”
The Journal of Politics 77(1): 44–54.
Ahler, Douglas J. 2014. “Self-fulfilling misperceptions of public polarization.”
The Journal of Politics 76(3): 607–620.
Ahler, Douglas J., and Gaurav Sood. 2018. “The parties in our heads:
Misperceptions about party composition and their consequences.” The
Journal of Politics 80(3): 964–981.
2018. “Measuring perceptions of shares of groups.” In Brian G. Southwell,
Emily A. Thorson, and Laura Sheble (eds.), Misinformation and Mass
Audiences (pp. 71–90). Austin: University of Texas.
Ahn, T. K., Robert Huckfeldt, and John Barry Ryan. 2014. Experts, Activists, and
Democratic Politics: Are Electorates Self-Educating? New York: Cambridge
University Press.
Allport, Gordon. W. 1954. The Nature of Prejudice. Cambridge, MA: Addison-
Wesley.
Ambrose, Graham. 2016. “At the country’s most elite and liberal colleges, some
Trump supporters stay closeted.” The Washington Post, September 20.
www.washingtonpost.com/news/grade-point/wp/2016/09/20/at-the-countrys-
most-elite-and-liberal-colleges-some-trump-supporters-stay-closeted/?noredirect=
on&utm_term=.4ce3d1f660ff.
Anoll, Allison P. 2018. “What makes a good neighbor? Race, place, and norms of
political participation.” The American Political Science Review 112(3):
494–508.
Anspach, Nicolas M., and Taylor N. Carlson. 2020. “What to believe? Social
media commentary and belief in misinformation.” Political Behavior 42(3):
697–718.


https://doi.org/10.1017/9781108912495.012 Published online by Cambridge University Press



Appleby, Jacob. 2018. “Do they like us? Meta-stereotypes and meta-evaluations
between political groups.” PhD diss., University of Minnesota.
Appleby, Jacob, and Eugene Borgida. “Ideological metastereotypes:
Overestimations of intergroup antipathy and sources of anxiety.” Manuscript
in preparation. https://jacobappleby.wordpress.com/home/research.
Asch, Solomon E. 1956. “Studies of independence and conformity: A minority of
one against a unanimous majority.” Psychological Monographs 70(9): 1–70.
Bakker, Bert N., Yphtach Lelkes, and Ariel Malka. 2021. “Reconsidering the link
between self-reported personality traits and political preferences.” American
Political Science Review 115(4): 1482–1498.
Bartels, Larry M. 2002. “Beyond the running tally: Partisan bias in political
perceptions.” Political Behavior 24(2): 117–150.
Bello, Jason. 2012. “The dark side of disagreement? Revisiting the effect of
disagreement on political participation.” Electoral Studies 31(4): 782–795.
Bello, Jason, and Meredith Rolfe. 2014. “Is influence mightier than selection?
Forging agreement in political discussion networks during a campaign.”
Social Networks 36: 134–156.
Benjamin, Daniel J., and Jesse M. Shapiro. 2009. “Thin-slice forecasts of guber-
natorial elections.” The Review of Economics and Statistics 91(3): 523–536.
Benoit, Kenneth, Kohei Watanabe, Haiyan Wang, et al. 2018. “Quanteda: An
R package for the quantitative analysis of textual data.” Journal of Open
Source Software, 3(30): 774.
Bøggild, Troels, Lene Aarøe, and Michael Bang Petersen. 2021. “Citizens as
complicits: Distrust in politicians and biased social dissemination of political
information.” American Political Science Review, 115(1): 269–285.
Brewer, Marilyn B., and Sonia Roccas. 2001. “Individual values, social identity,
and optimal distinctiveness.” In C. Sedikides and M. B. Brewer (eds.),
Individual Self, Relational Self, Collective Self (pp. 219–237). New York:
Psychology Press.
Brown, Elissa J., Julia Turovsky, Richard G. Heimberg, Harlan R. Juster, Timothy
A. Brown, and David H. Barlow. 1997. “Validation of the social interaction
anxiety scale and the social phobia scale across the anxiety disorders.”
Psychological Assessment 9(1): 21.
Bullock, John G., Alan S. Gerber, Seth J. Hill, and Gregory A. Huber. 2015.
“Partisan bias in factual beliefs about politics.” Quarterly Journal of Political
Science 10(4): 519–578.
Bullock, John G., and Gabriel Lenz. 2019. “Partisan bias in surveys.” Annual
Review of Political Science 22: 325–342.
Busby, Ethan. 2021. Should You Stay Away from Strangers? Experiments on the
Political Consequences of Intergroup Contact. Cambridge: Cambridge
University Press.
Busby, Ethan, Adam Howat, Jacob Rothschild, and Richard Shafranek. 2021.
The Partisan Next Door: Stereotypes of Party Supporters and Consequences
for Polarization in America. Cambridge: Cambridge University Press.
Butler, Daniel M., and David E. Broockman. 2011. “Do politicians racially
discriminate against constituents? A field experiment on state legislators.”
American Journal of Political Science 55(3): 463–477.


Butler, Daniel M., and Jonathan Homola. 2017. “An empirical justification for
the use of racially distinctive names to signal race in experiments.” Political
Analysis 25(1): 122–130.
Butters, Ross, and Christopher Hare. 2020. “Polarized networks? New evidence
on American voters’ political discussion networks.” Political Behavior:
1–25.
Carlson, Taylor N. 2018. “Modeling political information transmission as a game
of telephone.” The Journal of Politics 80(1): 348–352.
2019. “Through the grapevine: Informational consequences of interpersonal
political communication.” American Political Science Review 113(2):
325–339.
Carlson, Taylor N., Marisa Abrajano, and Lisa García Bedolla. 2020. “Political
discussion networks and political engagement among voters of color.”
Political Research Quarterly 73(1): 79–95.
Carlson, Taylor N., Marisa Abrajano, and Lisa García Bedolla. 2020. Talking
Politics: Political Discussion Networks and the New American Electorate.
New York: Oxford University Press.
Carlson, Taylor N., and Seth J. Hill. 2021. “Experimental measurement of mis-
perception in political beliefs.” Journal of Experimental Political Science.
1–14.
Carlson, Taylor N., Charles T. McClean, and Jaime E. Settle. 2020. “Follow your
heart: Could psychophysiology be associated with political discussion net-
work homogeneity?” Political Psychology 41(1): 165–187.
Carlson, Taylor N., and Jaime E. Settle. 2016. “Political chameleons: An explor-
ation of conformity in political discussions.” Political Behavior 38(4):
817–859.
Carney, Dana R., John T. Jost, Samuel D. Gosling, and Jeff Potter. 2008. “The
secret lives of liberals and conservatives: Personality profiles, interaction
styles, and the things they leave behind.” Political Psychology 29(6):
807–840.
Carpinella, Colleen M., and Kerri L. Johnson. 2013. “Appearance-based politics:
Sex-typed facial cues communicate political party affiliation.” Journal of
Experimental Social Psychology 49(1): 156–160.
Chambers, John R., and Darya Melnyk. 2006. “Why do I hate thee? Conflict
misperceptions and intergroup mistrust.” Personality and Social Psychology
Bulletin 32(10): 1295–1311.
Chambers, John R., Robert S. Baron, and Mary L. Inman. 2006. “Misperceptions
in intergroup conflict: Disagreeing about what we disagree about.”
Psychological Science 17(1): 38–45.
Chen, M. Keith, and Ryne Rohla. 2018. “The effect of partisanship and political
advertising on close family ties.” Science 360(6392): 1020–1024.
Cialdini, Robert B., and Melanie R. Trost. 1998. “Social influence: Social
norms, conformity and compliance.” In D. T. Gilbert, S. T. Fiske, and G.
Lindzey (eds.), The Handbook of Social Psychology (pp. 151–192). New York:
McGraw-Hill.
Cialdini, Robert B., and Noah J. Goldstein. 2004. “Social influence: Compliance
and conformity.” Annual Review of Psychology 55(1): 591–621.


Cichocka, Aleksandra, Michał Bilewicz, John T. Jost, Natasza Marrouch, and
Marta Witkowska. 2016. “On the grammar of politics – or why conservatives
prefer nouns.” Political Psychology 37(6): 799–815.
Coe, Chelsea M., Kayla S. Canelo, Kau Vue, Matthew V. Hibbing, and Stephen P.
Nicholson. 2017. “The physiology of framing effects: Threat sensitivity and the
persuasiveness of political arguments.” The Journal of Politics 79(4): 1465–1468.
Conover, Pamela Johnston, and Donald D. Searing. 2005. “Studying ‘everyday
political talk’ in the deliberative system.” Acta Politica 40(3): 269–283.
Conover, Pamela Johnston, Donald D. Searing, and Ivor M. Crewe. 2002. “The
deliberative potential of political discussion.” British Journal of Political
Science 32(1): 21–62.
Cramer, Katherine. 2004. Talking about Politics: Informal Groups and Social
Identity in American Life. Chicago: University of Chicago Press.
2006. “Communities, race, and talk: An analysis of the occurrence of civic
intergroup dialogue programs.” The Journal of Politics 68(1): 22–33.
2008. Talking about Race: Community Dialogues and the Politics of
Difference. Chicago: University of Chicago Press.
Crawford, Jarret T., Sean A. Modri, and Matt Motyl. 2013. “Bleeding-heart
liberals and hard-hearted conservatives: Subtle political dehumanization
through differential attributions of human nature and human uniqueness
traits.” Journal of Social and Political Psychology 1(1): 86–104.
Dailey, Rene M., and Nicholas A. Palomares. 2004. “Strategic topic avoidance:
An investigation of topic avoidance frequency, strategies used, and relational
correlates.” Communication Monographs 71(4): 471–496.
Dawes, Robyn M., David Singer, and Frank Lemons. 1972. “An experimental
analysis of the contrast effect and its implications for intergroup communi-
cation and the indirect assessment of attitude.” Journal of Personality and
Social Psychology 21(3): 281–295.
Deichert, Maggie Ann. 2019. “Partisan cultural stereotypes: The effect of every-
day partisan associations on social life in the United States.” PhD diss.
Vanderbilt University.
DellaPosta, Daniel, Yongren Shi, and Michael Macy. 2015. “Why do liberals
drink lattes?” American Journal of Sociology 120(5): 1473–1511.
Denny, Elaine. 2016. “The good intention gap: Poverty, anxiety, and implications
for political action.” Working Paper.
Detert, Laurel, and Jaime Settle. 2023. “Biopolitics.” In Leonie Huddy, David
Sears, Jack Levy, and Jennifer Jerit (eds.), Oxford Handbook of Political
Psychology. Oxford: Oxford University Press. Forthcoming.
Djupe, Paul A., and Anand Edward Sokhey. 2014. “The distribution and deter-
minants of socially supplied political expertise.” American Politics Research
42(2): 199–225.
Djupe, Paul, Scott McClurg, and Anand Edward Sokhey. 2018. “The political
consequences of gender in social networks.” British Journal of Political
Science 48(3): 637–658.
Doherty et al. 2019. “Public highly critical of state of political discourse in the U.S.”
Pew Research Center Report. Available at www.pewresearch.org/politics/
2019/06/19/public-highly-critical-of-state-of-political-discourse-in-the-u-s.


Dolan, Kathleen. 2011. “Do women and men know different things? Measuring
gender differences in political knowledge.” The Journal of Politics 73(1):
97–107.
2014. When Does Gender Matter? Women Candidates and Gender Stereotypes
in American Elections. New York: Oxford University Press.
Dolan, Kathleen, and Patrick Kraft. “Asking the Right Questions: A Framework
to Develop Gender-Balanced Knowledge Batteries.” Unpublished manuscript.
Druckman, James N., and Matthew S. Levendusky. 2019. “What do we measure
when we measure affective polarization?” Public Opinion Quarterly 83(1):
114–122.
Druckman, James N., Matthew S. Levendusky, and Audrey McLain. 2018. “No
need to watch: How the effects of partisan media can spread via interpersonal
discussions.” American Journal of Political Science 62(1): 99–112.
Druckman, James N., Samara Klar, Yanna Krupnikov, Matthew Levendusky, and
John Barry Ryan. “(Mis-)estimating affective polarization.” Forthcoming,
Journal of Politics.
Duggan, Maeve, and Aaron Smith. 2016. “The political environment on social
media.” Pew Research Center. October 25. www.pewinternet.org/2016/10/
25/the-political-environment-on-social-media.
Efran, Michael G. 1974. “The effect of physical appearance on the judgment of
guilt, interpersonal attraction, and severity of recommended punishment in a
simulated jury task.” Journal of Research in Personality 8(1): 45–54.
Eliasoph, Nina. 1998. Avoiding Politics: How Americans Produce Apathy in
Everyday Life. Cambridge: Cambridge University Press.
Engelhardt, Andrew M., and Stephen M. Utych. 2020. “Grand old (tailgate)
party? Partisan discrimination in apolitical settings.” Political Behavior
42(3): 769–789.
Eveland, William P., Alyssa C. Morey, and Myiah J. Hutchens. 2011. “Beyond
deliberation: New directions for the study of informal political conversation
from a communication perspective.” Journal of Communication 61(6):
1082–1103.
Eveland, William P., Jr., Hyunjin Song, Myiah J. Hutchens, and Lindsey Clark
Levitan. 2019. “Not being accurate is not quite the same as being inaccurate:
Variations in reported (in)accuracy of perceptions of political views of net-
work members due to uncertainty.” Communication Methods and Measures
13(4): 305–311.
Eveland, William P., and Myiah J. Hutchens. 2009. “Political discussion fre-
quency, network size, and ‘heterogeneity’ of discussion as predictors of
political knowledge and participation.” Journal of Communication 59(2):
205–224.
2013. “The role of conversation in developing accurate political perceptions:
A multilevel social network approach.” Human Communication Research
39(4): 422–444.
Eveland, William P., and Osei Appiah. 2021. “A national conversation about
race? Political discussion across lines of racial and partisan difference.”
Journal of Race, Ethnicity, and Politics 6(1): 187–213.


Figlio, David N. 2005. Names, Expectations and the Black-White Test Score Gap.
No. w11195. National Bureau of Economic Research.
Fisher, Marc. 2016. “It’s hard enough to be a Republican in deep-blue D.C. Try being
a Trump voter.” The Washington Post, February 22. www.washingtonpost
.com/local/its-hard-enough-to-be-a-republican-in-deep-blue-dc-try-being-a-
trump-voter/2016/02/22/d8f18b96-d4f2-11e5-9823-02b905009f99_story.
html?utm_term=.9ada7154080e.
Fitzgerald, Jennifer. 2013. “What does ‘political’ mean to you?” Political
Behavior 35(3): 453–479.
Fouka, Vasiliki. 2020. “Backlash: The unintended effects of language prohibition
in US schools after World War I.” The Review of Economic Studies 87(1):
204–239.
Frey, Frances E., and Linda R. Tropp. 2006. “Being seen as individuals versus as
group members: Extending research on metaperception to intergroup con-
texts.” Personality and Social Psychology Review 10(3): 265–280.
French, Jeffrey A., Kevin B. Smith, John R. Alford, Adam Guck, Andrew K. Birnie,
and John R. Hibbing. 2014. “Cortisol and politics: Variance in voting
behavior is predicted by baseline cortisol levels.” Physiology & Behavior
133: 61–67.
Funder, David C. 1995. “On the accuracy of personality judgment: A realistic
approach.” Psychological Review 102(4): 652.
Gadarian, Shana Kushner, and Bethany Albertson. 2014. “Anxiety, immigration,
and the search for information.” Political Psychology 35(2): 133–164.
Gell-Redman, Micah, Neil Visalvanich, Charles Crabtree, and Christopher J.
Fariss. 2018. “It’s all about race: How state legislators respond to immigrant
constituents.” Political Research Quarterly 71(3): 517–531.
Gerber, Alan S., Gregory A. Huber, David Doherty, and Conor M. Dowling.
2012. “Disagreement and the avoidance of political discussion: Aggregate
relationships and differences across personality traits.” American Journal of
Political Science 56(4): 849–874.
Gerber, Alan S., Gregory A. Huber, David Doherty, Conor M. Dowling, and
E. Ha Shang. 2010. “Personality and political attitudes: Relationships across
issue domains and political contexts.” American Political Science Review
104(1): 111–133.
Gibson, James L. 1992. “The political consequences of intolerance: Cultural
conformity and political freedom.” The American Political Science Review
86(2): 338–356.
Gibson, James L., and Joseph L. Sutherland. 2020. “Keeping your mouth shut:
Spiraling self-censorship in the United States.” Working Paper.
Glynn, Carroll J., Andrew F. Hayes, and James Shanahan. 1997. “Perceived
support for one’s opinions and willingness to speak out: A meta-analysis of
survey studies on the ‘spiral of silence.’” Public Opinion Quarterly 61(3):
452–463.
Goggin, Stephen N., and Alexander G. Theodoridis. 2017. “Disputed ownership:
Parties, issues, and traits in the minds of voters.” Political Behavior 39(3):
675–702.


Gosling, Sam. 2008. Snoop: What Your Stuff Says about You. London: Profile
Books.
Gosling, Samuel D., Peter J. Rentfrow, and William B. Swann. 2003. “A very brief
measure of the big-five personality domains.” Journal of Research in
Personality 37(6): 504–528.
Gosling, Samuel D., Sam Gaddis, and Simine Vazire. 2008. “First impressions
based on the environments we create and inhabit.” In Nalini Ambady and
John Joseph Skowronski (eds.), First Impressions (pp. 334–356). New York:
The Guilford Press.
Graham, Jesse, Jonathan Haidt, and Brian A. Nosek. 2009. “Liberals and conser-
vatives rely on different sets of moral foundations.” Journal of Personality
and Social Psychology 96(5): 1029.
Green, Stephanie. 2017. “What it’s like to be a Trump supporter in DC.” The
Washingtonian, January 18. www.washingtonian.com/2017/01/18/what-its-
like-to-be-a-trump-supporter-in-dc.
Greene, Stephen. 1999. “Understanding party identification: A social identity
approach.” Political Psychology 20(2): 393–403.
Haidt, Jonathan. 2014. “Your personality makes your politics.” TIME, January 9.
https://science.time.com/2014/01/09/your-personality-makes-your-politics.
Haidt, Jonathan, and Chris Wilson. 2014. “Can TIME predict your politics?”
TIME, January 9. http://time.com/510/can-time-predict-your-politics.
Hampton, Keith, Lee Rainie, Weixu Lu, Maria Dwyer, Inyoung Shin, and Kristen
Purcell. 2014. “Social media and the ‘spiral of silence.’” Pew Research
Center.
Harris-Lacewell, Melissa Victoria. 2010. Barbershops, Bibles, and BET:
Everyday Talk and Black Political Thought. Princeton, NJ: Princeton
University Press.
Hatemi, Peter K., John R. Hibbing, Sarah E. Medland, et al. 2010. “Not by
twins alone: Using the extended family design to investigate genetic influ-
ence on political beliefs.” American Journal of Political Science 54(3):
798–814.
Hayes, Danny. 2005. “Candidate qualities through a partisan lens: A theory of
trait ownership.” American Journal of Political Science 49(4): 908–923.
Hayes, Andrew F., Carroll J. Glynn, and James Shanahan. 2005. “Willingness to
self-censor: A construct and measurement tool for public opinion research.”
International Journal of Public Opinion Research 17(3): 298–323.
Heimberg, Richard G., Gregory P. Mueller, Craig S. Holt, Debra A. Hope, and
Michael R. Liebowitz. 1992. “Assessment of anxiety in social interaction and
being observed by others: The social interaction anxiety scale and the social
phobia scale.” Behavior Therapy 23(1): 53–73.
Hersh, Eitan. 2020. Politics Is for Power: How to Move beyond Political
Hobbyism, Take Action, and Make Real Change. New York: Scribner.
Hetherington, Marc J., and Jonathan D. Weiler. 2009. Authoritarianism and
Polarization in American Politics. Cambridge: Cambridge University Press.
2018. Prius or Pickup? How the Answers to Four Simple Questions Explain
America’s Great Divide. Boston: Houghton Mifflin Harcourt.


Hibbing, John R., and Elizabeth Theiss-Morse. 2002. Stealth Democracy:
Americans’ Beliefs about How Government Should Work. Cambridge:
Cambridge University Press.
Hibbing, John R., Kevin B. Smith, and John R. Alford. 2013. Predisposed:
Liberals, Conservatives, and the Biology of Political Differences.
Philadelphia: Routledge.
Hibbing, Matthew V., Melina Ritchie, and Mary R. Anderson. 2011. “Personality
and political discussion.” Political Behavior 33(4): 601–624.
Huber, Gregory A., and Neil Malhotra. 2017. “Political homophily in social
relationships: Evidence from online dating behavior.” The Journal of
Politics 79(1): 269–283.
Huckfeldt, Robert. 2001. “The social communication of political expertise.”
American Journal of Political Science 45(2): 425–438.
Huckfeldt, Robert, Paul E. Johnson, and John Sprague. 2004. Political
Disagreement: The Survival of Diverse Opinions within Communication
Networks. Cambridge: Cambridge University Press.
Huckfeldt, Robert, and John Sprague. 1987. “Networks in context: The social
flow of political information.” American Political Science Review 81(4):
1197–1216.
1995. Citizens, Politics, and Social Communication: Information and Influence
in an Election Campaign. Cambridge: Cambridge University Press.
Iyengar, Shanto, Gaurav Sood, and Yphtach Lelkes. 2012. “Affect, not ideology:
A social identity perspective on polarization.” Public Opinion Quarterly
76(3): 405–431.
Iyengar, Shanto, and Sean J. Westwood. 2015. “Fear and loathing across party
lines: New evidence on group polarization.” American Journal of Political
Science 59(3): 690–707.
Iyengar, Shanto, Tobias Konitzer, and Kent Tedin. 2018. “The home as a political
fortress: Family agreement in an era of polarization.” Journal of Politics
80(4): 1326–1338.
Jamieson, Amber. 2016. “‘Not even my wife knows’: Secret Donald Trump voters
speak out.” The Guardian, March 3. www.theguardian.com/us-news/2016/
mar/03/secret-donald-trump-voters-speak-out.
Jerit, Jennifer, and Jason Barabas. 2012. “Partisan perceptual bias and the infor-
mation environment.” The Journal of Politics 74(3): 672–684.
Johnson, Dominic D. P., Rose McDermott, Emily S. Barrett, et al. 2006.
“Overconfidence in wargames: Experimental evidence on expectations,
aggression, gender and testosterone.” Proceedings of the Royal Society B:
Biological Sciences 273(1600): 2513–2520.
Jost, John T. 2006. “The end of the end of ideology.” American Psychologist
61(7): 651.
Jost, John T., Jack Glaser, Arie W. Kruglanski, and Frank J. Sulloway. 2003.
“Political conservatism as motivated social cognition.” Psychological
Bulletin 129(3): 339.
Joubert, Charles E. 1994. “Relation of name frequency to the perception of social
class in given names.” Perceptual and Motor Skills 79(1): 623–626.


Judd, Charles M., and James W. Downing. 1995. “Stereotypic accuracy in judg-
ments of the political positions of groups and individuals.” In Milton Lodge
and Kathleen McGraw (eds.), Political Judgment: Structure and Process
(pp. 65–90). Ann Arbor: University of Michigan Press.
Karpowitz, Christopher F., and Tali Mendelberg. 2014. The Silent Sex: Gender,
Deliberation, and Institutions. Princeton, NJ: Princeton University Press.
Karpowitz, Christopher F., Tali Mendelberg, and Lauren Mattioli. 2015. “Why
women’s numbers elevate women’s influence, and when they do not: Rules,
norms, and authority in political discussion.” Politics, Groups, and Identities
3(1): 149–177.
Karpowitz, Christopher F., Tali Mendelberg, and Lee Shaker. 2012. “Gender
inequality in deliberative participation.” American Political Science Review
106(3): 533–547.
Katz, Josh. 2016. “‘Duck Dynasty’ vs. ‘Modern Family’: 50 maps of the U.S.
cultural divide.” The New York Times, December 27. www.nytimes.com/
interactive/2016/12/26/upshot/duck-dynasty-vs-modern-family-television-
maps.html.
Klar, Samara, and Yanna Krupnikov. 2016. Independent Politics: How American
Disdain for Parties Leads to Political Action. New York: Cambridge
University Press.
Klofstad, Casey A. 2009. “Civic talk and civic participation: The moderating
effect of individual predispositions.” American Politics Research 37(5):
856–878.
2010. Civic Talk: Peers, Politics, and the Future of Democracy. Philadelphia:
Temple University Press.
Klofstad, Casey A., Anand Edward Sokhey, and Scott D. McClurg. 2013.
“Disagreeing about disagreement: How conflict in social networks affects
political behavior.” American Journal of Political Science 57(1): 120–134.
Klofstad, Casey A., Scott D. McClurg, and Meredith Rolfe. 2009. “Measurement
of political discussion networks.” Public Opinion Quarterly 73(3): 462–483.
Kreibig, Sylvia D. 2010. “Autonomic nervous system activity in emotion:
A review.” Biological Psychology 84(3): 394–421.
Krockow, Eva M. 2018. “How many decisions do we make each day?”
Psychology Today, www.psychologytoday.com/us/blog/stretching-theory/
201809/how-many-decisions-do-we-make-each-day.
Krupnikov, Yanna, and John Barry Ryan. 2022. The Other Divide: Polarization and
Disengagement in American Politics. Cambridge: Cambridge University Press.
Kulas, Michelle. 2017. “The normal heart rate during a panic attack.”
www.livestrong.com/article/344010-the-normal-heart-rate-during-a-panic-attack.
Ladd, Jonathan McDonald, and Gabriel S. Lenz. 2008. “Reassessing the role of
anxiety in vote choice.” Political Psychology 29(2): 275–296.
2011. “Does anxiety improve voters’ decision making?” Political Psychology
32(2): 347–361.
Lajevardi, Nazita. 2020. “Access denied: Exploring Muslim American
representation and exclusion by state legislators.” Politics, Groups, and
Identities 8(5): 957–985.


Lawless, Jennifer L., and Richard L. Fox. 2010. It Still Takes a Candidate: Why
Women Don’t Run for Office. New York: Cambridge University Press.
Leighley, Jan E., and Arnold Vedlitz. 1999. “Race, ethnicity, and political partici-
pation: Competing models and contrasting explanations.” The Journal of
Politics 61(4): 1092–1114.
Leighley, Jan E., and Tetsuya Matsubayashi. 2009. “The implications of class,
race, and ethnicity for political networks.” American Politics Research 37(5):
824–855.
Lerner, Jennifer S., Ye Li, Piercarlo Valdesolo, and Karim S. Kassam. 2015.
“Emotion and decision making.” Annual Review of Psychology 66:
799–823.
Levendusky, Matthew S., and Dominik A. Stecula. 2021. We Need to Talk: How
Cross-Party Dialogue Reduces Affective Polarization. Cambridge:
Cambridge University Press.
Levendusky, Matthew S., and Neil Malhotra. 2016. “(Mis)perceptions of partisan
polarization in the American public.” Public Opinion Quarterly 80(S1):
378–391.
Levitan, Lindsey, and Brad Verhulst. 2016. “Conformity in groups: The effects of
others’ views on expressed attitudes and attitude change.” Political Behavior
38(2): 277–315.
Lieberson, Stanley. 2000. A Matter of Taste: How Names, Fashions, and Culture
Change. New Haven, CT: Yale University Press.
Lieberson, Stanley, and Eleanor O. Bell. 1992. “Children’s first names: An
empirical study of social taste.” American Journal of Sociology 98(3):
511–554.
Long, Jacob A., and William P. Eveland Jr. 2018. “Entertainment use and political
ideology: Linking worldviews to media content.” Communication Research.
Lupia, Arthur, and Mathew D. McCubbins. 1998. The Democratic Dilemma: Can
Citizens Learn What They Need to Know? Cambridge: Cambridge
University Press.
Lyons, Jeffrey, and Anand E. Sokhey. 2014. “Emotion, motivation, and social
information seeking about politics.” Political Communication 31(2):
237–258.
2017. “Discussion networks, issues, and perceptions of polarization in the
American electorate.” Political Behavior 39(4): 967–988.
MacKuen, Michael. 1990. “Speaking of politics: Individual conversational choice,
public opinion, and the prospects for deliberative democracy.” In John A.
Ferejohn and James H. Kuklinski (eds.), Information and Democratic
Processes (pp. 59–99). Urbana: University of Illinois Press.
Makse, Todd, Scott Minkoff, and Anand Sokhey. 2019. Politics on Display: Yard
Signs and the Politicization of Social Spaces. New York: Oxford University
Press.
Mansbridge, Jane J. 1980. Beyond Adversary Democracy. New York: Basic
Books.
1999. “Everyday talk in the deliberative system.” In S. Macedo (ed.), Essays on
Democracy and Disagreement. New York: Oxford University Press.


Marcus, Bernd, Franz Machilek, and Astrid Schütz. 2006. “Personality in cyber-
space: Personal web sites as media for personality expressions and impres-
sions.” Journal of Personality and Social Psychology 90(6): 1014–1031.
Marcus, George E., John L. Sullivan, Elizabeth Theiss-Morse, and Daniel Stevens.
2005. “The emotional foundation of political cognition: The impact of
extrinsic anxiety on the formation of political tolerance judgments.”
Political Psychology 26(6): 949–963.
Marcus, George E., and Michael B. MacKuen. 1993. “Anxiety, enthusiasm, and
the vote: The emotional underpinnings of learning and involvement during
presidential campaigns.” American Political Science Review 87(3): 672–685.
Mason, Lilliana. 2018. Uncivil Agreement: How Politics Became Our Identity.
Chicago: University of Chicago Press.
Mattick, Richard P., and J. Christopher Clarke. 1998. “Development and valid-
ation of measures of social phobia scrutiny and social interaction anxiety.”
Behaviour Research and Therapy 36(4): 455–470.
McClurg, Scott D. 2006. “Political disagreement in context: The conditional effect
of neighborhood context, disagreement and political talk on electoral partici-
pation.” Political Behavior 28(4): 349–366.
McCrae, Robert R. 1996. “Social consequences of experiential openness.”
Psychological Bulletin 122(3): 323–337.
McDermott, Rose, Dustin Tingley, and Peter K. Hatemi. 2014. “Assortative
mating on ideology could operate through olfactory cues.” American
Journal of Political Science 58(4): 997–1005.
McLeod, Jack M., Dietram A. Scheufele, and Patricia Moy. 1999. “Community,
communication, and participation: The role of mass media and interpersonal
discussion in local political participation.” Political Communication 16(3):
315–336.
McLeod, Jack M., Katie Daily, Zhongshi Guo, et al. 1996. “Community integra-
tion, local media use, and democratic processes.” Communication Research
23(2): 179–209.
Mendelberg, Tali, and Christopher F. Karpowitz. 2016. “Power, gender, and
group discussion.” Political Psychology 37(1): 23–60.
Mendelberg, Tali, Christopher F. Karpowitz, and Nicholas Goedert. 2014. “Does
descriptive representation facilitate women’s distinctive voice? How gender
composition and decision rules affect deliberation.” American Journal of
Political Science 58(2): 291–306.
Minozzi, William, Hyunjin Song, David M. J. Lazer, Michael A. Neblo, and
Katherine Ognyanova. 2020. “The incidental pundit: Who talks politics with
whom, and why?” American Journal of Political Science 64(1): 135–151.
Mintz, Alex, and Carly Wayne. 2016. The Polythink Syndrome: US Foreign Policy
Decisions on 9/11, Afghanistan, Iraq, Iran, Syria, and ISIS. Stanford, CA:
Stanford University Press.
Mondak, Jeffery J. 2010. Personality and the Foundations of Political Behavior.
Cambridge: Cambridge University Press.
Mondak, Jeffery J., and Karen D. Halperin. 2008. “A framework for the study of
personality and political behaviour.” British Journal of Political Science
38(2): 335–362.


Mondak, Jeffery J., Matthew V. Hibbing, Damarys Canache, Mitchell A.
Seligson, and Mary R. Anderson. 2010. “Personality and civic engagement:
An integrative framework for the study of trait effects on political behavior.”
American Political Science Review 104(1): 85–110.
Morehouse Mendez, Jeanette, and Tracy Osborn. 2010. “Gender and the
perception of knowledge in political discussion.” Political Research Quarterly
63(2): 269–279.
Morey, Alyssa C., and William P. Eveland Jr. 2016. “Measures of political talk
frequency: Assessing reliability and meaning.” Communication Methods &
Measures 10(1): 51–68.
Morey, Alyssa C., William P. Eveland Jr., and Myiah J. Hutchens. 2012. “The
‘who’ matters: Types of interpersonal relationships and avoidance of political
disagreement.” Political Communication 29(1): 86–103.
Mutz, Diana C. 2002. “The consequences of cross-cutting networks for political
participation.” American Journal of Political Science 46(4): 838–855.
2002. “Cross-cutting social networks: Testing democratic theory in practice.”
American Political Science Review 96(1): 111–126.
2006. Hearing the Other Side: Deliberative versus Participatory Democracy.
Cambridge: Cambridge University Press.
2015. In Your Face Politics: The Consequences of Uncivil Media. Princeton,
NJ: Princeton University Press.
Mutz, Diana C., and Byron Reeves. 2005. “The new video malaise: Effects of
televised incivility on political trust.” American Political Science Review
99(1): 1–15.
Mutz, Diana C., and Jeffery J. Mondak. 2006. “The workplace as a context for
cross-cutting political discourse.” The Journal of Politics 68(1): 140–155.
Neblo, Michael A. 2007. “Change for the better? Linking the mechanisms of
deliberative opinion change to normative theory.” Working Paper.
2015. Deliberative Democracy between Theory and Practice. New York:
Cambridge University Press.
Neblo, Michael A., Kevin M. Esterling, Ryan P. Kennedy, David M. J. Lazer, and
Anand E. Sokhey. 2010. “Who wants to deliberate – and why?” American
Political Science Review 104(3): 566–583.
Ng, David. 2017. “In liberal Hollywood, a conservative minority faces backlash
in the age of Trump.” Los Angeles Times, March 11. www.latimes.com/
business/hollywood/la-fi-ct-conservatives-hollywood-20170311-story.html.
Nir, Lilach. 2011. “Disagreement and opposition in social networks: Does dis-
agreement discourage turnout?” Political Studies 59(3): 674–692.
Noelle-Neumann, Elisabeth. 1974. “The spiral of silence: A theory of public
opinion.” The Journal of Communication 24(2): 43–51.
1993. The Spiral of Silence: Public Opinion – Our Social Skin. Chicago:
University of Chicago Press.
Ohbuchi, Ken-Ichi, and James T. Tedeschi. 1997. “Multiple goals and tactical
behaviors in social conflicts.” Journal of Applied Social Psychology 27(24):
2177–2199.
Oliphant, Baxter, and Samantha Smith. 2016. “How Americans are talking about
Trump’s election in 6 charts.” Pew Research Center Report. December 22.
www.pewresearch.org/fact-tank/2016/12/22/how-americans-are-talking-about-trumps-election-in-6-charts.
Oliver, J. Eric, Thomas Wood, and Alexandra Bass. 2016. “Liberellas versus
Konservatives: Social status, ideology, and birth names in the United
States.” Political Behavior 38(1): 55–81.
Olivola, Christopher Y., and Alexander Todorov. 2010. “Elected in 100 millisec-
onds: Appearance-based trait inferences and voting.” Journal of Nonverbal
Behavior 34(2): 83–110.
Oxley, Douglas R., Kevin B. Smith, John R. Alford, et al. 2008. “Political attitudes
vary with physiological traits.” Science 321(5896): 1667–1670.
Page-Gould, Elizabeth, Wendy Berry Mendes, and Brenda Major. 2010.
“Intergroup contact facilitates physiological recovery following stressful
intergroup interactions.” Journal of Experimental Social Psychology 46(5):
854–858.
Paluck, Elizabeth Levy, Seth A. Green, and Donald P. Green. 2019. “The contact
hypothesis re-evaluated.” Behavioural Public Policy 3(2): 129–158.
Parsons, Bryan M. 2010. “Social networks and the affective impact of political
disagreement.” Political Behavior 32(2): 181–204.
Pérez, Efrén O. 2015. “Mind the gap: Why large group deficits in political
knowledge emerge – and what to do about them.” Political Behavior 37(4):
933–954.
Pettigrew, Thomas F., and Linda R. Tropp. 2006. “A meta-analytic test of inter-
group contact theory.” Journal of Personality and Social Psychology 90(5):
751.
Pietryka, Matthew T. 2016. “Accuracy motivations, predispositions, and social
information in political discussion networks.” Political Psychology 37(3):
367–386.
Prior, Markus, Gaurav Sood, and Kabir Khanna. 2015. “You cannot be serious:
The impact of accuracy incentives on partisan bias in reports of economic
perceptions.” Quarterly Journal of Political Science 10(4): 489–518.
Rahim, M. Afzalur. 1983. “A measure of styles of handling interpersonal con-
flict.” The Academy of Management Journal 26(2): 368–376.
Reifen Tagar, Michal, Christopher M. Federico, Kristen E. Lyons, Steven Ludeke,
and Melissa A. Koenig. 2014. “Heralding the authoritarian? Orientation
toward authority in early childhood.” Psychological Science 25(4): 883–892.
Renshon, Jonathan, Jooa Julia Lee, and Dustin Tingley. 2015. “Physiological
arousal and political beliefs.” Political Psychology 36(5): 569–585.
Richey, Sean. 2009. “Hierarchy in political discussion.” Political Communication
26(2): 137–152.
Riordan, Cornelius. 1978. “Equal-status interracial contact: A review and revi-
sion of the concept.” International Journal of Intercultural Relations 2(2):
161–185.
Riordan, Cornelius, and Josephine Ruggiero. 1980. “Producing equal-status
interracial interaction: A replication.” Social Psychology Quarterly 43(1):
131–136.
Robinson, Robert J., Dacher Keltner, Andrew Ward, and Lee Ross. 1995. “Actual
versus assumed differences in construal: ‘Naive realism’ in intergroup
perception and conflict.” Journal of Personality and Social Psychology 68(3):
404–417.
Rojas, Hernando. 2008. “Strategy versus understanding: How orientations toward
political conversation influence political engagement.” Communication
Research 35(4): 452–480.
Rosenberg, Morris. 1954–1955. “Some determinants of political apathy.” Public
Opinion Quarterly 18(4): 349–366.
Rossiter, Erin. 2020. “The consequences of interparty conversation on outparty
affect and stereotypes.” Working Paper.
Rossiter, Erin, and Taylor N. Carlson. “Electoral threat and the impact of inter-
party contact on affective polarization.” Working Paper.
Rothschild, Jacob E., Adam J. Howat, Richard M. Shafranek, and Ethan C.
Busby. 2018. “Pigeonholing partisans: Stereotypes of party supporters and
partisan polarization.” Political Behavior 41(2): 423–443.
Rouse, Stella M. 2013. Latinos in the Legislative Process: Interests and Influence.
New York: Cambridge University Press.
Rule, Nicholas O., and Nalini Ambady. 2010. “Democrats and Republicans can
be differentiated from their faces.” PLoS ONE 5(1): e8733.
Ryan, John Barry. 2011. “Social networks as a shortcut to correct voting.”
American Journal of Political Science 55(4): 752–765.
Samochowiec, Jakub, Michaela Wänke, and Klaus Fiedler. 2010. “Political ideol-
ogy at face value.” Social Psychological and Personality Science 1(3): 206–213.
Scheufele, Dietram A., Matthew C. Nisbet, Dominique Brossard, and Erik C.
Nisbet. 2004. “Social structure and citizenship: Examining the impacts of
social setting, network heterogeneity, and informational variables on political
participation.” Political Communication 21(3): 315–338.
Scheufele, Dietram A., and Patricia Moy. 2000. “Twenty-five years of the spiral of
silence: A conceptual review and empirical outlook.” International Journal of
Public Opinion Research 12(1): 3–28.
Schlozman, Kay Lehman, Nancy Burns, and Sidney Verba. 1999. “‘What
happened at work today?’: A multistage model of gender, employment, and
political participation.” The Journal of Politics 61(1): 29–53.
Schoonvelde, Martijn, Anna Brosius, Gijs Schumacher, and Bert N. Bakker. 2019.
“Liberals lecture, conservatives communicate: Analyzing complexity and
ideology in 381,609 political speeches.” PLoS ONE 14(2): e0208450.
Settle, Jaime E. 2018. Frenemies: How Social Media Polarizes America.
Cambridge: Cambridge University Press.
Settle, Jaime E., Matthew V. Hibbing, Nicolas M. Anspach, et al. 2020. “Political
psychophysiology: A primer for interested researchers and consumers.”
Politics and the Life Sciences 39(1): 101–117.
Settle, Jaime E., and Taylor N. Carlson. 2019. “Opting out of political discus-
sions.” Political Communication 36(3): 476–496.
Setzler, Mark, and Alixandra B. Yanus. 2018. “Why did women vote for Donald
Trump?” PS, Political Science & Politics 51(3): 523–527.
Sherman, David K., Leif D. Nelson, and Lee D. Ross. 2003. “Naïve realism and
affirmative action: Adversaries are more similar than they think.” Basic and
Applied Social Psychology 25(4): 275–289.


Shi, Yongren, Kai Mast, Ingmar Weber, Agrippa Kellum, and Michael Macy.
2017. “Cultural fault lines and political polarization.” WebSci ’17, June
25–28, Troy, NY.
Sigelman, Lee, and Steven A. Tuch. 1997. “Metastereotypes: Blacks’ perceptions
of Whites’ stereotypes of Blacks.” The Public Opinion Quarterly 61(1):
87–101.
Sinclair, Betsy. 2012. The Social Citizen: Peer Networks and Political Behavior.
Chicago: University of Chicago Press.
Sokhey, Anand E., and Paul A. Djupe. 2014. “Name generation in interpersonal
political network data: Results from a series of experiments.” Social
Networks 36: 147–161.
Song, Hyunjin. 2014. “Uncovering the structural underpinnings of political dis-
cussion networks: Evidence from an exponential random graph model.”
Journal of Communication 65(1): 146–169.
Soroka, Stuart N. 2019. “Skin conductance in the study of politics and communi-
cation.” In Gigi Foster (ed.), Biophysical Measurement in Experimental
Social Science Research: Theory and Practice (pp. 85–104). London:
Academic Press.
Soroka, Stuart N., Patrick Fournier, and Lilach Nir. 2019. “Cross-national evi-
dence of a negativity bias in psychophysiological reactions to news.”
Proceedings of the National Academy of Sciences 116(38): 18888–18892.
Soroka, Stuart N., Patrick Fournier, Lilach Nir, and John Hibbing. 2019.
“Psychophysiology in the study of political communication: An expository
study of individual-level variation in negativity biases.” Political
Communication 36(2): 288–302.
Stern, Robert Morris, William J. Ray, and Karen S. Quigley. 2001.
Psychophysiological Recording. New York: Oxford University Press.
Straits, Bruce C. 1991. “Bringing strong ties back in: Interpersonal gateways to
political information and influence.” Public Opinion Quarterly 55(3): 432–448.
Suhay, Elizabeth. 2015. “Explaining group influence: The role of identity and
emotion in political conformity and polarization.” Political Behavior 37(1):
221–251.
Sumaktoyo, Nathanael Gratias. 2021. “Friends from across the aisle: The effects of
partisan bonding, partisan bridging, and network disagreement on outparty
attitudes and political engagement.” Political Behavior 43(1): 223–245.
Taylor, Shelley E., Laura Cousino Klein, Brian P. Lewis, Tara L. Gruenewald,
Regan A. R. Gurung, and John A. Updegraff. 2000. “Biobehavioral
responses to stress in females: Tend-and-befriend, not fight-or-flight.”
Psychological Review 107(3): 411–429.
Thompson, Dennis F. 2008. “Deliberative democratic theory and empirical polit-
ical science.” Annual Review of Political Science 11(1): 497–520.
Todorov, Alexander, Anesu N. Mandisodza, Amir Goren, and Crystal C. Hall.
2005. “Inferences of competence from faces predict election outcomes.”
Science 308(5728): 1623–1626.
Trawalter, Sophie, Jennifer A. Richeson, and J. Nicole Shelton. 2009. “Predicting
behavior during interracial interactions: A stress and coping approach.”
Personality and Social Psychology Review 13(4): 243–268.


Tskhay, Konstantin O., and Nicholas O. Rule. 2013. “Accuracy in categorizing
perceptually ambiguous groups: A review and meta-analysis.” Personality
and Social Psychology Review 17(1): 72–86.
Ulbig, Stacy G., and Carolyn Funk. 1999. “Conflict avoidance and political
participation.” Political Behavior 21(3): 265–282.
Vazire, Simine, and Samuel D. Gosling. 2004. “E-Perceptions: Personality impres-
sions based on personal websites.” Journal of Personality and Social
Psychology 87(1): 123–132.
Vorauer, Jacquie D., Kelley J. Main, and Gordon B. O’Connell. 1998. “How do
individuals expect to be viewed by members of lower status groups? Content
and implications of meta-stereotypes.” Journal of Personality and Social
Psychology 75(4): 917–937.
Wagner, Markus, and Davide Morisi. 2019. “Anxiety, fear, and political decision
making.” In Thompson, W.R. and R. Dalton (eds.), Oxford Research
Encyclopedia of Politics. Oxford: Oxford University Press.
Wansink, Brian, and Jeffery Sobal. 2007. “Mindless eating: The 200 daily food
decisions we overlook.” Environment and Behavior 39(1): 106–123.
Weber, Christopher, and Samara Klar. 2019. “Exploring the psychological foun-
dations of ideological and social sorting.” Advances in Political Psychology
40(1): 215–243.
Williams, Joseph P. 2016. “The Trump effect: Is Trump down in the polls because
voters are too embarrassed to admit they are voting for him?” U.S. News and
World Report. July 1. www.usnews.com/news/articles/2016-07-01/are-
voters-too-embarrassed-to-say-they-support-trump.
Wojcieszak, Magdalena. 2011. “Deliberation and attitude polarization.” Journal
of Communication 61(4): 596–617.
Wojcieszak, Magdalena, and Benjamin R. Warner. 2020. “Can interparty contact
reduce affective polarization? A systematic test of different forms of inter-
group contact.” Political Communication 37(6): 789–811.
Wolak, Jennifer. 2020. “Conflict avoidance and gender gaps in political engage-
ment.” Political Behavior. 1–24.
Wood, Wendy. 2000. “Attitude change: Persuasion and social influence.” Annual
Review of Psychology 51(1): 539–570.
Wyatt, Robert O., Elihu Katz, and Joohan Kim. 2000. “Bridging the spheres:
Political and personal conversation in public and private spaces.” Journal of
Communication 50(1): 71–92.
Wyatt, Robert O., Joohan Kim, and Elihu Katz. 2000. “How feeling free to talk
affects ordinary political conversation, purposeful argumentation, and civic
participation.” Journalism & Mass Communication Quarterly 77(1):
99–114.
Young, Lori, and Stuart Soroka. 2012. “Affective news: The automated coding of
sentiment in political texts.” Political Communication 29(2): 205–231.
Zaller, John R. 1992. The Nature and Origins of Mass Opinion. New York:
Cambridge University Press.



Index

AAA Typology, 25, 39–40, 154–155, 157–158, 160, 181. See also accuracy; affiliation; affirmation
  concerns and opportunities by, 158
  considerations coded into, 54–55
  in 4D Framework, 49–54
  measurement of, 49–53
  in vignette experiments, 158
Aarøe, Lene, 244
Abrajano, Marisa, 147, 213–214
accuracy, 25–27, 34–35, 39–40, 54–55, 264
  concerns, 25–27
  considerations, 246
  opportunities, 25–26
affective polarization, 15, 28–29, 184
affiliation, 29–31, 39–40
  concerns, 30, 122–123, 160, 177–179, 182–184, 186
  considerations, 182–183, 186–188
  opportunities, 29, 182
affirmation, 39–40, 119, 122–123, 128, 154–155, 158–160, 181–182
  concerns, 27–29, 165
  considerations, 119, 122–123, 128, 160
  opportunities, 28, 157–158
age, 87
Ahn, T. K., 25–26, 243–244
American National Election Study (ANES), 2–3, 45–46, 87–88, 168–169, 255
ANS. See autonomic nervous system
anticipation. See also Psychophysiological Anticipation Study
  of political discussion, 37
  psychophysiological response in, 131–138
anxiety, 138. See also social interaction anxiety
apparel, political identity and, 78, 88
Arceneaux, Kevin, 203–204
Asch, Solomon E., 24–25, 170
autonomic nervous system (ANS), in psychophysiological lab experiments, 68
avoidance of discussion, 41, 49–53, 105, 110–111, 115–119, 125, 185–186, 190–191
  AAA Typology in, 117–119
  free response answers to, 117–118, 189–190, 193–194
  in True Counterfactual Study, 115
Bakker, Bert N., 203–204
behavior. See also proximal behaviors
  in vignette experiments, 160–164
behavioral economics, 11, 20, 40
Big Five personality traits, 7, 32–34
Busby, Ethan, 101, 249–250
candidate disagreement, 49–50. See also presidential vote choice
Carlson, Taylor N., 34–35, 90, 111, 124, 140–141, 147, 194–200, 213–214, 236, 244, 261
CCES. See Cooperative Congressional Election Study
censorship. See self-censorship

https://doi.org/10.1017/9781108912495.013 Published online by Cambridge University Press



character and trait associations. See also individual disposition
  assumption and, 101–102
  of Democrats, 101
  of Republicans, 101
CIPI I Survey, 60, 62–65, 122, 156, 209–210, 215, 218, 226, 230–231, 255
CIPI II Survey, 62–65, 82–84, 91, 105, 186, 193, 202–203
Citizen Participation Study, 37–38
civil orientation to conflict, 7
Clarity Campaign Labs, 97–98
cognitive psychology, 20
comfort. See discomfort
conflict avoidance, 7, 37–38, 65, 241–242
  in individual disposition, 46
  measurement of, 46–47
  political discussion in, 33
  as psychological disposition, 225–227
  self-expression and, 33
conformity, 40, 269–270
  in group dynamics, 24–25
  lab studies on, 168–171
  measures of, 270
  potential, 169
  pure, 169
  in vignette experiments, 162
Conover, Pamela Johnston, 7–17, 22–23, 25–26, 28, 30
contact hypothesis, 24, 40–41, 248–254
Cooperative Congressional Election Study (CCES), 61–65, 67
COVID-19, 245
Cramer, Katherine, 7–17, 22, 30, 183–184
Crewe, Ivor M., 22–23, 25–26, 28, 30
crosscutting networks, 7
data collection
  in 4D Framework, 44–45, 61
  in lab experiments, 67–72, 135–136, 138–141
  survey experiments, 73–75
  surveys, 61–67
  vignette experiments, 73–75
decision-making, 5–6, 11, 35, 41–42, 110
  emotional response and, 149–153
  group dynamics in, 112–113, 126–128
  perception in, 85–86
deflection, in discussion, 240
  in vignette experiments, 120–123
Deichert, Maggie Ann, 34–36, 236
deliberation, 11–12, 204–207, 209–211, 260
democracy, 181
  interpersonal interactions and, 13–14
  political discussion and, 18, 243–254
  strains on, 234–235
Democrats, 79, 143, 193, 263–265, 267
  character and trait associations, 101
  in Name Your Price Study, 124–125
demographics
  of Democrats, 90
  individual dispositions and, 204–215
  of Republicans, 90
detection, 15, 79–84
  assumption in, 100–106
  CIPI II Survey studying, 82–84
  cue categories, 83–84
  Directly Political detection system, 84, 95–96
  facial features, political identity and, 88–89
  Facts of Life detection system, 84, 89–91
  Just by Looking at Them detection system, 84, 87–89
  Media Usage detection system, 84, 94–95
  Non-Guesser detection systems, 84–86
  political discussion and, 105–106
  of political identities, 82–83
  self-reporting of, 84–100
disagreement, 28–29, 126–128, 247
  avoidance of, 78
  candidate, 49–50
  discomfort and, 39, 140–141, 150–151
  gender and, 208–209
  general, 49–50
  identity-based, 48–49
  issue, 145–147
  measurement of, 49–50, 151–152
  in Name Your Price Study, 124
  with outpartisans, 124
  partisan identity, 38, 49–50, 72
  perceived, 146, 150–151
  polarization and, 13
  policy, 49–50, 72
  in Psychophysiological Experience Study, 72, 139–140
  recognition of, 247
  seeking out, 39
discomfort, 24
  disagreement and, 140–141, 150–151

measurement of, 69, 132
projection of, 152
self-reporting of, 67, 132
discussion networks
construction of, 197
outpartisans in, 197
partisan identities and, 198
social polarization and, 196–197
diverse perspectives
in democracy, 12
social media and, 13
Druckman, James N., 102, 199–200, 244
DSM-III-R, 46
Dynata, 65
echo chambers, 13
EDA. See electrodermal activity
egocentric analysis, 48–49
electrodermal activity (EDA), 39, 134, 137, 144, 146–147
in psychophysiological lab experiments, 68
Eliasoph, Nina, 7–17, 23, 26, 250–251
emotional responses, 137–138
decision-making and, 149–153
flooding, 253
to outpartisans, 137
in psychophysiological lab experiments, 68
empathy, 252
entrenchment, 13, 40, 269
in vignette experiments, 162
ethnorace. See race
Eveland, William P., 4, 6, 21–22, 25–26, 147, 213–214
expression. See also self-expression
lab studies on, 168–171
linguistic markers, 171–177
political disposition and, 219
psychological considerations and, 167
psychological dispositions and, 224
of true opinions, 39–40, 162, 165–166, 242
in vignette experiments, 156–168
extraversion, 7
Facebook, 13, 66, 95
fake news, 27
false information, 245
false-consensus effect, 147
family members, 78
viewpoints of, 34–35
fight or flight response, 68
first impressions, social psychology on, 88
4D Framework, 9–10, 14–15, 31–32, 41–42, 78, 126, 174–175, 181, 192–195, 199, 202–203
AAA Typology in, 49–54
analogy for, 31–32
data collection in, 44–45, 61
Decision, 15–16, 37–38, 42, 57–60, 127, 222–223
Detection, 15, 34–36, 42, 57–60, 225, 231
Determination, 17–18, 40–41, 57–60, 168–179, 228, 231
Discussion, 16–17, 38–40, 57–60, 222–223, 225, 230
gender and, 208–209
general tendencies in, 56, 61
individual dispositions in, 45–48
inputs, 44–55
key findings on, 205–206
key lessons from, 238–243
navigation of, 17–18
outputs, 44, 54–60
proximal behaviors in, 56, 60
psychological considerations in, 49–54
situational factors in, 48–49
stages of, 10–11, 15, 20–21
free response questions, 53–54, 82–83
on CIPI II Survey, 193
on future political interactions, 189–190
on future social interactions, 193–194
on social distancing, 189–190
friends, 78
social distancing from, 190
viewpoints of, 34–35
Funk, Carolyn, 6, 37–38
future political interactions
free response questions on, 189–190
partisan bias and, 191
in vignette experiments, 190–192
future social interactions
free response questions on, 193–194
partisan bias and, 195
in vignette experiments, 191–194
Gallup, 95
gender, 232
disagreement and, 208–209
4D Framework and, 208–209
individual disposition and, 207–213

gender (cont.)
structural inequalities and, 209–211
true opinion expression and, 211
general tendencies
defining, 56
in 4D Framework, 56, 61
measurement of, 61
geography, polarization and, 90
group dynamics
conformity in, 24–25
in decision-making, 112–113, 126–128
in political discussion, 11–12
guesses, 242
confidence in, 106
frequency of, 106
political disposition and correct, 217
Hayes, Andrew F., 47, 101
heart rates. See psychophysiological response
Hetherington, Marc J., 34–36
Hibbing, Matthew V., 7
Hill, Seth J., 34–35, 90, 236
homophily, 113–114, 129
Huckfeldt, Robert, 25–26, 34–35, 85–86, 243–244
Hutchens, Myiah J., 4, 6, 21–22, 25–26
identities. See also partisan identities; political identities; religious identity
disagreement linked to, 48–49
polarization and, 35–36
Independents, 151, 197–198
individual dispositions
conflict avoidance in, 46
correlations between, 48
demographics and, 204–215
in 4D Framework, 45–48
gender and, 207–213
political, 215–221
psychological, 221–231
race and, 213–215
self-censorship in, 47
social interaction anxiety in, 46
stage 1 behavior and, 216
stage 2 behavior and, 209–210
stage 3 behavior and, 206–212
stage 4 behavior and, 213
informational benefits, 243–246
Internet, 13
interpersonal interactions, 245–246
democracy and, 13–14
polarization and, 13
political discussion as, 21–24
issue disagreement
psychophysiological response to, 146
in Psychophysiological Experience Study, 145–147
issue discussion
linguistic markers in, 176–177
in Psychophysiological Experience Study, 178
Karpowitz, Christopher F., 207–211
Klar, Samara, 29, 183
knowledge. See political knowledge
Krupnikov, Yanna, 29, 242
lab studies. See also psychophysiological lab experiments
on conformity, 168–171
on expression, 168–171
on self-censorship, 168–171
Lazer, David M. J., 27–28
learning, 25–27
Levendusky, Matthew S., 102, 244, 252–253
lifestyle, political identity and, 90–91
linguistic markers
explanatory variables for, 174
expression, 171–177
in issue discussion, 176–177
in partisan identity revelation, 173–176
in Psychophysiological Experience Study, 172
long-term memory, 34–36
MacKuen, Michael, 112, 114, 262
TALK-CLAM model, 106–107
Mansbridge, Jane J., 5, 30
Mason, Lilliana, 67, 196, 236
mass media, 13
McClean, Charles T., 261
McClurg, Scott D., 40–41, 208
Mechanical Turk, 65–66, 96–97, 124–125, 156
Mendelberg, Tali, 207–211
meta identity, 79
meta-perceptions, 107–108, 256–257
meta-stereotypes, 107–108
misinformation, social consequences of, 246
Mondak, Jeffrey J., 203–204
Morey, Alyssa C., 4, 6, 21–22, 25–26

motivations, 39–40, 239–241. See also AAA Typology
goals and, 181
for political discussion, 6, 21–22, 24–25
mutual toleration, 248–254
Mutz, Diana, 7–17, 21–22, 24, 71, 132, 134–135, 183, 247–249, 255–256
Name Your Price Study, 15–16, 73, 111, 114–115, 123–126, 136–137
Democrats and Republicans in, 124–125
disagreement in, 124
names
phonetically ideological, 99
political identity and, 96–100
Names As Cues Studies, 73
Neblo, Michael A., 27–28
negative outcomes, 53–54, 157
network homogeneity, 113–114. See also homophily
Noelle-Neumann, Elisabeth, 7–17, 30, 261
online communication, 13
opinion leadership, 129
opinion v. fact, 2
outpartisans, 101, 107, 136, 196, 198–200
disagreement with, 124
in discussion networks, 197
emotional response to, 137
heart rates in conversations with, 135–136
rating of, 184
stereotypes about, 15, 65–66, 79–80, 103
partisan bias, 27, 79. See also outpartisans; stereotypes
cheerleading, 79–80, 103, 260
future political interactions and, 191
future social interactions and, 195
in-group and out-groups in, 35–36
partisan clash, 151, 173–175
in Psychophysiological Experience Study, 141–145
partisan identities, 151, 268
demographics of, 92
disagreement, 38, 49–50, 72
discussion networks and, 198
linguistic markers in, 173–176
political disposition and, 220–221
psychophysiological response to, 145
revelation of, 143, 173–176
social polarization and, 197–198
socioeconomic traits and, 92
stereotypes, 91
strength of, 176, 197–198, 215–216, 220–221
perceived disagreement
in Psychophysiological Experience Study, 146
psychophysiological response to, 150–151
perception, 35, 176–177
assumptions and, 107–108
in decision-making, 85–86
meta-perceptions, 107–108, 256–257
personality traits
Big Five, 7, 32–34
political ideology and, 203–204
Pew Research Center, 6, 24, 33–35, 46–47, 94–95, 188
polarization, 14, 17–18
affective, 15, 28–29, 184
disagreement and, 13
geography and, 90
identity and, 35–36
interpersonal interactions and, 13
stereotypes linked to, 101
policy disagreement, 49–50, 72
Political Chameleons Study, 168
potential conformity, 169
pure conformity, 169
political discussion, 8–9
absence of, 110
American dislike of, 8–9
anticipation of, 37
benefits of, 200
choice in, 111–115
conflict avoidance in, 33
dangers in, 23
defining, 4, 10
democracy and, 18, 243–254
detection systems and, 105–106
disagreement/contention in, 6 (See also disagreement)
discussants in, 4–5
dyads in, 4, 262
enjoyment of, 6
footing in, 23
frequency of, 2–3
group dynamics in, 11–12
initiation of, 78, 115–119
as interpersonal interactions, 21–24

political discussion (cont.)
measures of, 5
motivations for, 6, 21–22, 24–25
navigation of, 6–8
participation in, 239–241, 246–248
preferences in, 16
as process, 8–9
psychology underpinning, 7
psychophysiological response to, 135–136
in public spaces, 1–2
race and, 215
self-censorship in, 34
self-expression in, 23, 121
social consequences of, 181–186
social considerations in, 7–17, 238–239
as social process, 7–17, 22, 238–239
social psychology of, 12–14
studies on, 9
topics in, 3
tradeoffs in, 254–257
in United States, 11–12, 236–238
unknown elements of, 2–6
political disposition, 215–221, 232
correct guesses and, 217
expression and, 219
interest in politics and, 219–220
partisan identity strength and, 220–221
political identities. See also Democrats; partisan identities; Republicans
apparel and, 78, 88
categorization of, 88–89
demographics of, 87
facial features and, 88–89
lifestyle and, 90–91
names and, 96–100
religious identity and, 90
revealing of, 143
political ideology, personality traits and, 203–204
political knowledge, 129
asymmetries, 52–53, 126
measurement of, 49–50
political participation, 239–241, 246–248
power dynamics, 25
presidential vote choice, 34–35, 124, 254–255
proximal behaviors
defining, 56
in 4D Framework, 56, 60
measurement of, 60
psychological considerations
behavior relationship with, 164–167
expression and, 167
in 4D Framework, 49–54
as measured by AAA Typology, 49–55
for true opinion expression, 166
in vignette experiments, 157–158
psychological disposition, 232
conflict avoidance as, 225–227
cue reliance and, 227
expression and, 224
in 4D Framework, 221–231
social anxiety as, 222–225 (See also Social Interaction Anxiety Scale)
social distancing and, 227
willingness to self-censor and, 229–231
psychophysiological response, 154
ANS, 68
in anticipation, 131–138
average changes in, 134
to discussion composition, 135–136
EDA, 39, 68, 134, 137, 144, 146–147
heart rate, 135–136
to issue disagreement, 146
SCL, 68
somatic nervous system, 68–69
Psychophysiological Anticipation Study, 16–17, 49–50, 69–71, 132–133
questions in, 71
Psychophysiological Experience Study, 16–17, 69–72, 138–150, 252
disagreement in, 72, 139–140
hypotheses in, 141–142
issue disagreement in, 145–147
issue discussion in, 178
linguistic markers in, 172
participant accuracy in, 148–149
partisan clash in, 141–145
perceived disagreement in, 146
psychophysiological response in, 140
randomization in, 72
recall in, 147–149
research design of, 139–141
self-reporting in, 141
psychophysiological lab experiments
ANS in, 68
EDA in, 68
emotional responses in, 68
4D Framework, 67–76
measures in, 68
measurement of, 60

SCL in, 68
survey experiments, 73–75
psychophysiological response
to partisan identities, 145
to perceived disagreement, 150–151
in Psychophysiological Experience Study, 140
situational factors in, 132–133
psychophysiological responses, to videos, 133–135
race, 87, 232
individual disposition and, 213–215
political discussion and, 215
Rahim scale, 46–47
recall, in Psychophysiological Experience Study, 147–149
religious identity, 97–98
political identity and, 90
Republicans, 79, 143, 264–265, 267
character and trait associations, 101
media use of, 95
in Name Your Price Study, 124–125
sartorial preferences of, 88
Rossiter, Erin, 248–249
Ryan, John Barry, 25–26, 29, 242–244
SCL. See skin conductance level
Searing, Donald D., 7, 22–23, 25–26, 28, 30
self-censorship, 40, 128–129, 165, 240–242. See also willingness to self-censor
in individual disposition, 47
lab studies on, 168–171
measurement of, 47, 170–171
in political discussion, 34
rates of, 255
in vignette experiments, 162
self-expression
conflict avoidance and, 33
dangers of, 28–29
options, 121
in political discussion, 23, 121
self-reporting, 69
of comfort, 67
of detection systems, 84–100
of discomfort, 67, 132
in Psychophysiological Experience Study, 141
self-selection, 113
Settle, Jaime, 95, 111, 124, 140–141, 194–200, 236, 261
SIAS. See Social Interaction Anxiety Scale
silencing, 121–122, 179, 240, 261
situational factors, 208–209
in 4D Framework, 48–49
in psychophysiological response, 132–133
skin conductance level (SCL), in psychophysiological lab experiments, 68
social accountability, 260
social consequences, 181–186
social considerations, 7–17, 238–239
social costs, 34, 66
social distancing, 185–186, 260. See also future social interactions
free response questions on, 189–190
from friends, 190
psychological dispositions and, 227
types of, 186–188
in vignette experiments, 190–192
Social Identity Theory, 184
social interaction anxiety, 33, 241–242
in individual disposition, 46
measurement of, 46
psychological disposition and, 222–225
Social Interaction Anxiety Scale (SIAS), 46, 65
social interactions, stereotypes and, 79–80
Social Phobia, 46
social polarization, 185, 195–200. See also social distancing
discussion network composition and, 196–197
partisan identities and, 197–198
social processes, political discussion as, 7–17, 22, 238–239
social psychology
on first impressions, 88
of political discussion, 12–14
social sorting, 255–256
social ties, in decision-making, 128
socioeconomic traits, partisan identity and, 92
Sokhey, Anand Edward, 27–28, 40–41, 208
somatic nervous system, 68–69
speech patterns, 94
Sprague, John, 34–35, 85–86
SSI. See Survey Sampling International
stereotypes, 79, 96, 107
meta-stereotypes, 107–108
outpartisan, 15, 65–66, 79–80, 103
partisan, 91
polarization linked to, 101
social interactions and, 79–80

Stereotypes Anchoring Experiment, 73
structural inequalities, 204–207
gender and, 209–211
studies
American National Election Study, 2–3, 45–46, 87–88, 168–169, 255
CIPI I Survey, 60, 62–65, 122, 156, 209–210, 215, 218, 226, 230–231, 255
CIPI II Survey, 62–65, 82–84, 91, 105, 186, 193, 202–203
Cooperative Congressional Election Study, 61–65, 67
lab studies on expression, 168–171
Name Your Price Study, 15–16, 73, 111, 114–115, 123–126, 136–137
Names As Cues Studies, 73
Political Chameleons Study, 168
Psychophysiological Anticipation Study, 16–17, 49–50, 69–71, 132–133, 139–150
Psychophysiological Experience Study, 16–17, 69–72, 138–150, 252
Stereotypes Anchoring Experiment, 73
TargetSmart, 61–66, 86
Thanksgiving Study, 62–66, 101–102, 120, 168–179
True Counterfactual Study, 15–16, 49–53, 60, 73, 95–96, 110–111, 114–123, 125–129, 235
Vignette experiments, 17, 53–54, 60–75, 114–115, 171–173, 266
Survey Sampling International (SSI), 65
Talking about Politics (Cramer), 183–184
TargetSmart, 61–66, 86
tells, 34–36. See also detection
Ten Item Personality Inventory (TIPI), 32–34
Thanksgiving Study, 62–66, 101–102, 120, 168–179
Theodoridis, Alexander G., 101
TIPI. See Ten Item Personality Inventory
tolerance, 28–29, 183
mutual, 248–254
trait associations. See character and trait associations
True Counterfactual Study, 15–16, 49–53, 60, 73, 95–96, 110–111, 114–123, 125–129, 235
true opinions, expression of, 39–40, 165, 179, 242
gender and, 211
psychological considerations for, 166
in vignette experiments, 162
Twitter, 66, 90–91, 188, 263
two-step flow, 14, 21–22, 243–245
Ulbig, Stacy G., 6, 37–38
uncertainty, measurement of, 34–35
United States
democracy in, 234–235
political discussion in, 8–9, 11–12, 236–238
verbal hedging, 171, 173–174
videos, psychophysiological responses to, 133–135
vignette experiments, 17, 53–54, 60–75, 114–115, 171–173, 200, 266. See also CIPI I Survey
AAA Typology in, 158
behavior in, 160–164
in CIPI I Survey, 122, 230–231
conformity in, 162
design of, 73
discussion deflection in, 120–123
entrenchment in, 162
expression in, 156–168
future political interactions in, 190–192
future social interactions in, 191–194
hypotheses, 157
limitations of, 75
overview, 74–75
psychological consideration and behavior relationship in, 164–167
psychological considerations in, 157–158
respondents in, 159
self-censorship in, 162
social distancing in, 190–192
structure of, 120
willingness to self-censor (WTSC), 47, 65
psychological disposition and, 229–231
Wojcieszak, Magdalena, 248–249
workplace, 4–5
deflection in, 122
WTSC. See willingness to self-censor
YouGov, 88
