Article

new media & society
1–24
© The Author(s) 2022
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/14614448221104948
journals.sagepub.com/home/nms

Let's verify and rectify! Examining the nuanced influence of risk appraisal and norms in combatting misinformation

Xizhu Xiao
Qingdao University, China

Abstract
Mounting concerns about COVID-19 misinformation and its insidious fallout drive
the search for viable solutions. Both scholarly and practical efforts have turned
toward raising risk appraisal of misinformation and motivating verification and
debunking behaviors. However, individuals remain reluctant to verify and correct
misinformation, suggesting a need to develop persuasion strategies to motivate
such behaviors. Therefore, with an experiment of 256 participants recruited from
Amazon MTurk, this study examines how effectively norm-based messages improve
positive behavioral intentions during the COVID-19 pandemic. Findings suggest
that among individuals with high perceived severity of misinformation, exposure
to both descriptive and injunctive norms about verification reduced their intention
to rectify misinformation. However, both descriptive and injunctive norms about
debunking misinformation increased intentions to engage in preventive behaviors.
By probing the “self–other” discrepancy and the “trade-off effect” of risk appraisal,
the study further reveals that the perceived severity of misinformation merits in-
depth exploration in future research.

Keywords
COVID-19, debunking, descriptive norm, injunctive norm, misinformation correction,
perceived severity, perceived susceptibility, verification

Corresponding author:
Xizhu Xiao, School of Literature, Journalism and Communication, Qingdao University, Qingdao 266071,
Shandong, China.
Email: xizhu.xiao@foxmail.com
As social media approach ubiquity in our society, their role in spreading and amplifying
misinformation concerns many scholars (e.g. Chou et al., 2018; Wang et al., 2019; West,
2018). Misinformation is false or inaccurate information that is not supported by clear
evidence and expert opinion (Nyhan and Reifler, 2010). Although prior evidence indi-
cates that misinformation only represents roughly 5% of people’s news diets, its potential
impact on people’s perceptions and behaviors cannot be underestimated (Acerbi et al.,
2022; Lyons et al., 2021). During the Coronavirus Disease (COVID-19) pandemic, mis-
information has wreaked havoc on public health. In the United States, myriad conspiracy
theories and false claims regarding COVID-19 may trigger public panic and cause those
who hear them to refuse vaccination (e.g. Ahmed et al., 2020a, 2020b). For instance,
people erroneously claim that Bill Gates uses the COVID-19 vaccine to implant tracking
devices in human beings (Bruns et al., 2020; Shahsavari et al., 2020). Similar falsehoods
have inflicted harm on people worldwide (e.g. Loomba et al., 2021). For instance, 700
people died after drinking toxic methanol alcohol in Iran, wrongly believing it could cure
COVID-19 (Forrest, 2020). These unwarranted and false statements have quickly piled
up and created an “infodemic,” destroying lives and compromising society’s intellectual
well-being, which the World Health Organization (WHO) has sternly condemned (WHO,
2020b).
As such, a growing body of research encourages and supports such behaviors as veri-
fication and debunking, to combat misinformation (e.g. Schuetz et al., 2021; Tully et al.,
2020; Van der Meer and Jin, 2020). Research further identifies risk appraisal of misinfor-
mation (i.e. the perceived severity and the perceived susceptibility) as a possible driving
force behind combatting it (e.g. Sun et al., 2022; Tandoc et al., 2020). Authoritative
organizations also incorporate risk appraisal in social media campaigns, to promote criti-
cal responses that aim to counter misinformation (e.g. United Nations [UN], 2020b).
However, relatively low confidence in verification and limited willingness to contradict
social media misinformation during public crises suggest the need to examine promo-
tional strategies beyond risk appraisal (e.g. Gottfried, 2020; Tandoc et al., 2020; Tully
et al., 2020).
The construct of norms may be beneficial in this context. Constructing a notion of
social expectations for combatting misinformation helps to explain the issue’s urgency
(Fishbein and Ajzen, 2010). Cumulative social corrections and actions to counter misin-
formation also help to reduce inaccurate perceptions and motivate precautionary behav-
iors (Sun et al., 2021; Tully et al., 2020). Although norms have consistently played a
substantial role in improving health intentions and behaviors (e.g. Bewick et al., 2010;
Chadwick et al., 2021; Cialdini et al., 1991), experimental research rarely examines
norm-based appeals in mobilizing social media users to respond to misinformation and
perform preventive behaviors. The interaction effects between norms and risk appraisal
also remain underexplored.
Thus, using a randomized experiment, this study contributes to the theory in two
important ways. First, prior research has called upon scholars and practitioners to foster
“a critical mass of trusted, willing, and informed correctors” and develop “appropriate
social norms” to combat misinformation (Vraga and Bode, 2020: S279). This study
heeds the call and extends the understanding of norms to the context of COVID-19 mis-
information. By empirically examining the persuasive differences between injunctive
and descriptive norms, this study adds to previous debate regarding the effectiveness of
the two (e.g. Mollen et al., 2010) and provides a more granular picture of norm-based
appeals in health promotion. Second, prior research often examines risk appraisal and
norms separately (e.g. Xiao and Borah, 2020; Sun et al., 2022) and ignores their potential
complementary effects on behavioral promotion (Barnum and Armstrong, 2019;
Pogarsky et al., 2017; Scheufele and Krause, 2019). Persuasion scholars have also repeti-
tively highlighted the importance of combining theoretical features in health promotion
(O’Keefe and Hoeken, 2021). Therefore, by incorporating norms and risk appraisal in
message development and further probing the nuances of risk appraisal and its potential
interaction effects with norms, this study aims to afford a novel way of thinking in pro-
moting social corrections and preventive behaviors. The findings of this study are also
expected to shed light on future interventions in countering misinformation and its
related outcomes.

Combatting misinformation and appraising risk


Scholars and practitioners have long grappled with misinformation in the context of
social media and its negative impact (e.g. Chadwick et al., 2018; WHO, 2020b). Some
have reservations about combatting misinformation, due to the insignificant or tempo-
rary positive results (e.g. Larson and Broniatowski, 2021; Nyhan, 2021; Nyhan and
Reifler, 2010; Thorson, 2016). However, a large body of research offers promising
behavioral solutions for countering misinformation (Chan et al., 2017; Walter and
Murphy, 2018; Walter et al., 2019, 2020). Specifically, two behaviors are crucial: verifi-
cation and debunking (Chou et al., 2020). Verification, stemming from healthy skepti-
cism, refers to information-seeking actions via credible sources, to determine the
accuracy and validity of online information (Brandtzaeg et al., 2016; Fridkin et al., 2015;
Graves et al., 2016). Debunking takes one step further and refers to corrective actions
against misinformation, such as posting rebuttals (Vraga and Bode, 2020). Prior research
maintains that performing verification alone serves personal purposes (e.g. reducing vulnerability) (Liu and Huang, 2020), whereas performing debunking alone is sometimes considered
reckless (e.g. Tandoc et al., 2020). Valid correction of misinformation often entails both
verification and debunking, in succession (e.g. Chou et al., 2020).
Much research shows that verification and debunking have a substantial positive
effect on perceptions and behaviors (e.g. Huang and Wang, 2020; Swire-Thompson
et al., 2020; Walter et al., 2019). For example, Liu and Huang (2020) find that people
who perform active fact verification are less vulnerable to COVID-19 misinformation.
Van der Meer and Jin (2020) show that debunking misinformation by providing factual
elaboration elicits preventive behavioral intentions during public health crises. Thus,
many scholars call for interventions that emphasize the importance of verification and debunking in the current media environment (e.g. Schuetz et al., 2021; Tully et al., 2020).
Research further identifies risk appraisal as the possible driving force behind verifi-
cation and debunking behaviors (e.g. Arif et al., 2017; Sun et al., 2022; Tandoc et al.,
2020). Risk appraisal consists of two distinct dimensions: perceived susceptibility and
perceived severity (El-Toukhy, 2015). Perceived susceptibility refers to the “possibility
of experiencing a risk,” while perceived severity refers to the “seriousness or
harmfulness of the risk” (El-Toukhy, 2015: 500). Abundant empirical evidence shows
that the decision-making to perform positive and preventive behaviors often depends
upon risk appraisal (e.g. Arif et al., 2017; Rimal and Juon, 2010; Sun et al., 2021). For
example, through qualitative interviews with Singaporeans, Tandoc et al. (2020) find that
perceiving fabricated information as detrimental to their well-being motivates individuals to rectify misinformation. Another interview study confirms that the severity of misinformation and its possible negative consequences are a critical part of the motivation to combat it (Arif et al., 2017). A survey of 599 adults by Sun et al. (2022) also finds a positive association between risk appraisal of misinformation and negative emotions, such as anger and guilt, that increase intentions to confront misinformation.
In practice, risk appraisal has also been extensively operationalized to help counter
COVID-19 misinformation. For instance, in the UN’s global “PAUSE” campaign (UN,
2020b) and WHO’s anti-misinformation social media effort (WHO, 2020a), health prac-
titioners on Facebook raise susceptibility and severity perceptions of misinformation to
motivate verification behaviors among the public (UN, 2020a; WHO, 2022):

We are all vulnerable to vaccine misinformation . . . Before you share check it against trusted
sources.

Sharing misinformation can have huge consequences, especially during a crisis like the
#COVID19 pandemic . . . Pause and take time to verify facts before you share something
online.

However, recent national surveys in the United States and elsewhere still show that
individuals exhibit low levels of confidence in verifying COVID-19 information online
and remain somewhat reluctant to debunk misinformation (e.g. Gottfried, 2020; Tandoc
et al., 2020; Tully et al., 2020). These disappointing findings suggest the need for addi-
tional strategies to further motivate verification and debunking behaviors, other than
promoting risk appraisal of COVID-19 misinformation alone.

Combatting misinformation using norms


Norms may be especially relevant and helpful in strategy development. Indeed, prior
research shows that norms serve as a critical piece of information for risk evaluations
(e.g. Pogarsky et al., 2017). When a behavior is prevalent or socially approved, it is often
deemed positive or low-risk; whereas if a behavior is socially disapproved, it is often
considered negative or high-risk (Barnum and Armstrong, 2019). In processing health
and scientific information, it is particularly so that individuals rely on social norms as a
mental shortcut to make behavioral decisions (Metzger and Flanagin, 2013; Scheufele
and Krause, 2019). Thus, increasing normative perceptions about combatting misinfor-
mation is likely to facilitate the persuasive effect of risk appraisal on subsequent deci-
sion-making. Moreover, consisting of descriptive and injunctive norms, the construct of
norms also has a direct impact on behaviors (e.g. Xiao and Borah, 2020; Bewick et al.,
2010; Robinson et al., 2014) and has effectively guided decades of research in health
promotion and behavioral research (e.g. Fishbein and Ajzen, 2010; Xiao and Wong,
2020). Descriptive norms refer to perceptions that “others are or are not performing the
behavior in question” and illustrate “what most people do” (Cialdini et al., 1991: 203;
Fishbein and Ajzen, 2010: 131). Injunctive norms refer to perceptions of important ref-
erents’ approval or disapproval of a given behavior, illustrating “what ought to be done”
(Cialdini et al., 1991; Fishbein and Ajzen, 2010: 131).
Social media seem to have redefined norms and somewhat conflated descriptive and
injunctive norms due to their most distinctive characteristic—interactivity (Sundar,
2008). Specifically, social media, such as Facebook and Twitter, allow individuals to
engage with content and interact with other users. Aggregate numbers of “likes,” com-
ments, and “shares” imply normative approval or disapproval of a particular behavior or
position (e.g. Metzger et al., 2010; Sundar et al., 2009). Researchers often refer to these
interactive features of social media as “social endorsement” or a “bandwagon effect” and
study them through the lens of heuristic cues (e.g. Metzger and Flanagin, 2013; Phua and
Ahn, 2016). They find that in an age of information overload, people tend to believe
information or perform behaviors that many others recommend or support. The accumu-
lated “likes” and “shares” provide heuristic shortcuts and serve as a “collaborative filter”
for decision-making (Borah and Xiao, 2018; Chadwick et al., 2021; Thai and Wang,
2020). At this juncture, this study defines and manipulates norms on the basis of their
conventional conceptualization (e.g. Cialdini et al., 1991; Fishbein and Ajzen, 2010), for
two reasons. First and foremost, this study wishes to test messaging strategies for possi-
ble use in future interventions and promotions. While examining interactivity is critical
to understanding health promotion and communication in the context of social media, it
is not an integral part of message development, nor do health organizations and practi-
tioners entirely control it. Second, no study has examined the effectiveness of conven-
tional normative messages in combatting misinformation. Thus, accordingly, this study
grounds the research on injunctive and descriptive norms.
Ample research demonstrates the effectiveness of norm-based messages in promoting
positive decision-making (e.g. Bewick et al., 2010; Cialdini et al., 2006; Yang and Nan,
2018). For example, by using norm-based messages to improve vaccination intention,
Xiao and Borah (2020) find that injunctive norms guiding promotional messages indi-
rectly enhance vaccination intention. Similarly, descriptive norms can potentially reduce
misperceptions about vaccination. Examining college students’ drinking behaviors,
Bewick et al. (2010) find that normative messages stressing alcohol avoidance signifi-
cantly reduce binge-drinking intentions. Robinson et al. (2014) reveal that messages
highlighting the norm of eating vegetables led recipients to exhibit more healthy-eating
intentions and behaviors. A plethora of cross-sectional research and meta-analyses also
consistently highlight the significant influence of norms in promoting positive behaviors
and behavioral changes (e.g. Mollen et al., 2010; Rudert and Janke, 2021; Xiao and
Wong, 2020). In the field of combatting misinformation, while social media’s imperfect
mechanism for correcting misinformation discourages some (e.g. Chadwick et al., 2018),
others remain positive and reveal the untapped potential of norm-based persuasion (e.g.
Altay et al., 2022). For instance, in a survey of 1072 participants reflecting US demographics, around 70% of respondents strongly endorse carefully addressing and correcting COVID-19 misinformation on social media. Similarly, the majority of respondents think combatting misinformation should be everyone's responsibility (Bode and Vraga,
2020). A meta-analysis of 71 primary studies further confirms subjective norms as the most influential motivator of information-seeking behaviors during health crises (Chang and Huang, 2020).
Despite this well-documented effectiveness, no experimental research has examined normative messaging layered on top of risk appraisal in the context of COVID-19.
less is known regarding the cross-impact of normative messaging. For example, a valid
misinformation correction often requires first verifying information and then debunking
it. Thus, it stands to reason that a normative message about debunking may potentially
motivate both verification and debunking behaviors. Moreover, a normative message
about verification only stresses the importance of double-checking and authenticating
information; it may keep impulsive individuals from refuting or correcting online infor-
mation without the facts in hand. More importantly, much pandemic-related misinforma-
tion has concentrated on COVID-19 prevention, lowering compliance with precautionary
measures including mask-wearing, social distancing, and vaccination recommended by
the authoritative health organizations (e.g. He et al., 2021; Mourali and Drake, 2022).
Strengthened normative perceptions of combating misinformation may increase risk
appraisal of misinformation (Barnum and Armstrong, 2019), which could reinforce the
trust in health organizations and conformity to the recommended prevention measures
(Vinck et al., 2019). Besides, misinformation regarding COVID-19 prevention has also
raised heated debates and controversies, inducing attitude ambivalence that puts indi-
viduals in a behavioral dilemma (e.g. Saey, 2020). To reduce ambivalence, people pay
close attention to the related information that facilitates the resolution of the conflict, and
social norms have been identified as “a crucial feature” in the process (e.g. Hohman
et al., 2016: 11). Thus, normative approval of combatting COVID-19 misinformation
may not directly demonstrate the norm of practicing COVID-19 prevention, but it helps
draw people’s attention, raise awareness, and prompt critical evaluation of COVID-19
prevention-related information and the associated behaviors. Nonetheless, the persuasive
effectiveness of norms (descriptive vs injunctive) and anti-misinformation information
(verification vs debunking) in promoting prevention behaviors has hitherto never been
tested. This study thereby proposes the first set of research questions:

RQ1a. How does adding norm-based messages (descriptive norm vs injunctive norm) about verification to risk appraisal of misinformation influence (a) verification intention, (b) debunking intention, and (c) preventive behavioral intention, compared with a message that only raises risk appraisal of COVID-19 misinformation?
RQ1b. How does adding norm-based messages (descriptive norm vs injunctive norm) about debunking to risk appraisal of misinformation influence (a) verification intention, (b) debunking intention, and (c) preventive behavioral intention, compared with a message that only raises risk appraisal of COVID-19 misinformation?

Nuances in risk appraisal


Much remains unknown regarding incorporating risk appraisal and normative messaging
and the effects of their potential interaction on subsequent decision-making. This study
argues that susceptibility and severity may moderate the main effect for two reasons,
namely, the third-person effect (TPE) and the trade-off effect of risk appraisal.
First, TPE stipulates that people often overestimate media influences on others, rela-
tive to themselves (Davison, 1983). Such “self–other” asymmetry becomes more evident
for media messages with presumed undesirable impacts, due to the “self-serving bias”
(e.g. Gunther and Mundy, 1993). The “self-serving bias” occurs when “people tend to
feel they are smarter or more knowledgeable or less vulnerable than others, and thus less
susceptible to media influence” (Gunther and Storey, 2003: 200). Simply put, to protect
or enhance one’s ego, individuals may deny risk-appraisal-based messages that imply the
undesirable consequences of misinformation. Moreover, multiple national surveys reveal
that most people erroneously assume that others have less critical judgment than them-
selves and, thus, are more vulnerable to misinformation (e.g. Corbu et al., 2020; Jang and
Kim, 2018; Riedl et al., 2021). Even more concerning is that these overconfident indi-
viduals are also the least likely to acknowledge their deficiency and the most likely to
internalize and spread misinformation (Lyons et al., 2021). Thus, these biased individu-
als may also not respond to message stimuli that emphasize their susceptibility and the
severity of the misinformation. Therefore, even when individuals acknowledge the norm of combatting misinformation, their appraisal of the risk to themselves is generally low. Situations such as this put individuals in a behavioral predicament, the result of ambivalence or indifference (e.g. Rimal and Real, 2003; Zanna and Rempel, 1988). They not only refuse to seek
more information but also show reluctance to perform the recommended behaviors
(Rimal and Juon, 2010; Rimal and Real, 2003).
Second, a trade-off effect of risk appraisal emerges from previous research. That is,
perceived susceptibility and severity weigh distinctively and unequally in predicting
behaviors (e.g. El-Toukhy, 2015; Weinstein, 2007). For instance, if the perceived sever-
ity of a health issue is irrefutable (e.g. HIV), individuals often downplay their suscepti-
bility; if perceived susceptibility to a risk is irrefutable (e.g. seasonal flu), individuals
often downplay its severity (El-Toukhy, 2015). In the context of misinformation, a sur-
vey (Sun et al., 2022) finds a positive association between the perceived susceptibility of
misinformation and approval of regulating misinformation and supporting emotions that
lead to correcting it. In contrast, the perceived severity of misinformation reveals a nega-
tive association with intentions to correct it (Sun et al., 2022). Another experiment fur-
ther suggests that stimuli intended to raise perceptions of both susceptibility and severity
of misinformation increased perceived severity slightly more than susceptibility (Sun
et al., 2021). In turn, perceived severity increased intentions to correct COVID-19 mis-
information via guilt (Sun et al., 2021). These findings indicate a natural perceptual dis-
crepancy between susceptibility to and severity of misinformation, which may persist
after media exposure and contribute differently to forming behavioral intentions.
Taken together, these nuances of risk appraisal drive this study to inquire about its
moderating impact (self vs other, susceptibility vs severity) on the proposed main effect.

RQ2a. Holding perceived susceptibility (self) constant, how does perceived severity of misinformation (self) moderate the relationship between message manipulation and (a) verification intention, (b) debunking intention, and (c) preventive behavioral intention?
RQ2b. Holding perceived severity (self) constant, how does perceived susceptibility
to misinformation (self) moderate the relationship between message manipulation
and (a) verification intention, (b) debunking intention, and (c) preventive behavioral
intention?
RQ3a. Holding perceived susceptibility (other) constant, how does perceived severity
of misinformation (other) moderate the relationship between message manipulation
and (a) verification intention, (b) debunking intention, and (c) preventive behavioral
intention?
RQ3b. Holding perceived severity (other) constant, how does perceived susceptibility
to misinformation (other) moderate the relationship between message manipulation
and (a) verification intention, (b) debunking intention, and (c) preventive behavioral
intention?
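Operationally, each of these moderation questions corresponds to testing a condition × appraisal interaction term in a regression on the intention outcome. The following is a minimal illustrative sketch of that logic (not the study's own analysis code; variable names are hypothetical):

```python
import numpy as np

def interaction_coefficient(condition, moderator, outcome):
    """OLS fit of outcome ~ condition + moderator + condition*moderator.

    Returns the coefficient on the interaction term; its size and
    significance indicate whether the appraisal moderates the message effect.
    """
    condition = np.asarray(condition, dtype=float)
    moderator = np.asarray(moderator, dtype=float)
    X = np.column_stack([
        np.ones_like(condition),    # intercept
        condition,                  # message condition (dummy-coded)
        moderator,                  # e.g. perceived severity (mean-centered)
        condition * moderator,      # interaction term
    ])
    beta, *_ = np.linalg.lstsq(X, np.asarray(outcome, dtype=float), rcond=None)
    return beta[3]
```

In practice the moderator would be mean-centered and the model estimated with covariates (e.g. via PROCESS-style conditional-effects analysis), but the interaction coefficient above is the quantity each RQ interrogates.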

Method
Participants
Following prior research (e.g. Cohen, 1988; Erdfelder et al., 1996; Faul et al., 2007), this
study used G*Power, a power analysis program common in communication and behav-
ioral research, to compute an appropriate sample size. A priori power analysis indicated
that a minimum sample size of 196 could detect significant main effects with a recom-
mended moderate effect size of .25, power (1 – β) of .80, and an alpha of .05, two-tailed
(e.g. Cohen, 1988; Lakens, 2013; Walter and Murphy, 2018; Walter et al., 2020; Xiao
et al., 2021). Through Amazon MTurk, this study recruited a total of 300 participants in
the United States, and 44 individuals who failed attention checks were excluded. The
final sample consisted of 256 individuals aged 18 and above (Mage = 33.82). Slightly
over half were male (58.2%), and the majority were Caucasians (67.2%). Most of the
participants held bachelor’s degrees or above (67.3%). Politically, 32.42% of the partici-
pants leaned toward conservatism, 41.8% considered themselves liberal, and 25.78%
were neutral. Regarding social media, 78.51% of the participants used Facebook at least
once a week or more frequently. A post hoc analysis showed that with a sample of 256
participants, the power to detect obtained effects at the alpha of .05 level exceeds the
recommended minimum of .80 (Cohen, 1988; Lakens, 2013). Thus, the sample size was
adequate for the current analysis.
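The a priori calculation reported above (minimum N for f = .25, α = .05, power = .80, five conditions) follows from the noncentral F distribution that G*Power uses for a one-way ANOVA. A sketch of that computation, assuming scipy is available (illustrative, not the study's code):

```python
from scipy.stats import f as f_dist, ncf

def anova_total_n(effect_size=0.25, k_groups=5, alpha=0.05, target_power=0.80):
    """Smallest total sample size for a one-way ANOVA with Cohen's f effect size."""
    n = k_groups + 2                       # need at least a few error df
    while True:
        df1, df2 = k_groups - 1, n - k_groups
        lam = effect_size ** 2 * n         # noncentrality: lambda = f^2 * N
        crit = f_dist.ppf(1 - alpha, df1, df2)
        power = 1 - ncf.cdf(crit, df1, df2, lam)
        if power >= target_power:
            return n
        n += 1
```

For f = .25 with five groups this search lands near N = 196, consistent with the minimum sample size reported above.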

Design and procedure


To investigate the research questions, this study used a between-subjects experiment
(descriptive norm about debunking, descriptive norm about verification, injunctive norm
about debunking, injunctive norm about verification) with a baseline group (“risk appraisal
only”). Following suggestions from prior research (Montgomery et al., 2018), before tak-
ing part in the experiment, participants completed a pretest questionnaire that contained
demographics and media-use variables. This study then randomly assigned participants to
one of the five conditions. In the baseline group, participants viewed a Facebook post
about the susceptibility to and severity of misinformation amid the COVID-19 pandemic.
Participants in the experimental conditions viewed not only a Facebook post relating to
risk appraisal of misinformation but also additional Facebook posts manipulating types of
norms and behaviors. This study used the randomizer function embedded in Qualtrics;
differences in demographic variables were not significant across conditions, suggesting
successful randomization (see Online Appendices for details).
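Procedurally, simple random assignment with roughly equal cell sizes can be sketched as follows (a hypothetical helper for illustration; the study itself used Qualtrics' built-in randomizer and then verified demographic balance across conditions):

```python
import random
from collections import Counter

CONDITIONS = [
    "risk appraisal only",              # baseline
    "descriptive norm + debunking",
    "descriptive norm + verification",
    "injunctive norm + debunking",
    "injunctive norm + verification",
]

def assign_conditions(participant_ids, seed=2022):
    """Randomly assign each participant to one of the five conditions."""
    rng = random.Random(seed)
    return {pid: rng.choice(CONDITIONS) for pid in participant_ids}

assignment = assign_conditions(range(256))
print(Counter(assignment.values()))     # cell sizes should be roughly equal
```

A chi-square test of demographic frequencies across the resulting cells would then confirm that randomization succeeded.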

Stimuli and pretest


In the descriptive norm and debunking condition, the message stated that seven out
of 10 people would debunk misinformation when they see it online. In the descrip-
tive norm and verification condition, the message stated that seven out of 10 people
would verify facts before sharing something online. In the injunctive norm and
debunking condition, the message stated that their important referents would want
them to debunk misinformation. In the injunctive norm and verification condition,
the message stated that their important referents would want them to verify facts.
This study adapted message content and detailed statistics from previous studies and
practices (e.g. Xiao and Borah, 2020; Centers for Disease Control and Prevention
[CDC], 2022; Sun et al., 2021). The manipulation of norms was mainly inspired by
prior research (Bode and Vraga, 2020), which suggests that around 70% of Americans
approve of combatting misinformation on social media. All Facebook posts were
associated with WHO and UN, mimicking a joint effort by these two authoritative
organizations. This decision was based on prior evidence (Glocalities, 2022), which
shows that the UN is more trusted than any other international or governmental
organization, and the WHO is a crucial information source for many Americans
(Yum, 2020). Since trust in national health organizations such as the CDC has
decreased since the beginning of the pandemic (Medscape, 2021), using the two
credible international sources is more appropriate for the current study. Moreover, in
practice, WHO and UN are on the frontline condemning and fighting misinforma-
tion; they have also worked closely together to launch global campaigns related to
misinformation and other health-related issues (e.g. UN, 2020b; WHO, 2020b).
Thus, associating the stimuli with both organizations would seem authentic and
trustworthy. This study pretested the stimuli with a separate sample of 32 individuals
aged 18 and above (Mage = 30), recruited from a company-wide online research por-
tal. Using an open-ended questionnaire, participants mainly evaluated two aspects of
the stimuli: (a) the operationalization of theoretical constructs and (b) credibility and
necessary adjustments (e.g. look, wording, images, etc.). Participants generally
found the posts to be credible and believable, and minor modifications were applied
to the final stimuli. A complete list of research materials, randomization details, and
measurements appears in the Online Appendices.

Measures
Manipulation checks.  Adapted from previous studies (e.g. Xiao and Borah, 2020), two
items using a Likert-type scale, with responses ranging from strongly disagree (0) to
strongly agree (6), checked whether the descriptive norms and injunctive norms were
successfully manipulated. Participants were asked whether the posts told them that "most people" would do something about misinformation (M = 5.27, SD = 1.54) or that "important others (e.g. family, friends)" would want them to deal with misinformation (M = 5.09,
SD = 1.72). One item using a Likert-type scale ranging from strongly disagree (0) to
strongly agree (6) checked whether norm conditions were significantly different from the
risk-appraisal-only condition (M = 3.77, SD = 2.01).

Perceived severity. Adapted from previous studies (e.g. Nan and Madden, 2012; Xiao,
2019), perceived severity of misinformation was measured using three items that regis-
tered responses with a Likert-type scale ranging from “strongly disagree” (0) to “strongly
agree” (6) (Mself = 3.86, SDself = 1.57, αself = .93; Mother = 4.73, SDother = 1.08, αother =
.88). The items were, “I believe that being influenced by COVID-19 related misinforma-
tion has serious negative consequences on me/others”; “I believe that being influenced
by COVID-19 related misinformation is extremely harmful to me/others”; and “I believe
that being influenced by COVID-19 related misinformation causes severe health prob-
lems for me/others.”
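The reliability coefficients (α) reported for the multi-item scales in this section are Cronbach's alpha, computed from the item-response matrix. A minimal sketch (illustrative only, not the study's analysis code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```

Perfectly consistent items yield α = 1, while uncorrelated items drive α toward 0; values above .80, as for the scales here, indicate good internal consistency.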

Perceived susceptibility. Adapted from previous studies (e.g. Nan and Madden, 2012;
Xiao, 2019), perceived susceptibility to misinformation was measured using three items
that registered responses with a Likert-type scale ranging from “strongly disagree” (0) to
“strongly agree” (6) (Mself = 3.50, SDself = 1.55, αself = .90; Mother = 4.83, SDother = 1.02,
αother = .91). The items were, “It is likely that I’m/others are influenced by COVID-19
related misinformation”; “I’m/others are at risk of being affected by COVID-19 related
misinformation”; and “It is possible that I’m/others are susceptible to COVID-19 related
misinformation.”

Verification.  Adapted from previous studies (e.g. Liu and Huang, 2020), active verifica-
tion was measured with one item—“I’d check sites specialized in detecting incorrect
information”—on a Likert-type scale ranging from “strongly disagree” (0) to “strongly
agree” (6) (M = 4.02, SD = 1.76).

Debunking.  Adapted from previous research (Sun et al., 2021; Tandoc et al., 2020),
debunking intention was measured using three items with a Likert-type scale ranging
from “strongly disagree” (0) to “strongly agree” (6) (M = 3.54, SD = 1.63, α = .82).
Items included, “I’d post a comment saying it’s wrong”; “I’d message the person who
posted it to say the post is wrong”; and “I’d post a correction on my own social media
account.”

Preventive measures.  Adapted from previous research and announcements from authori-
tative health organizations (e.g. Yıldırım et al., 2020), this study presented participants
with a statement: “Authoritative organizations suggested that preventive measures
against COVID-19 are quite effective, including getting the COVID-19 vaccines, wear-
ing facemasks, and social distancing.” Participants were then asked to indicate the likeli-
hood of taking action on the three recommended preventive measures on a Likert-type
scale ranging from “extremely unlikely” (0) to “extremely likely” (6) (M = 4.64, SD =
1.49, α = .77).

Covariates.  Based on previous studies (e.g. Bayram and Shields, 2021; Cornelis et al.,
2014; Gollwitzer et al., 2020), issue involvement with misinformation and COVID-19 as
well as political ideology were included as control variables. On Likert-type scales rang-
ing from “strongly disagree” (0) to “strongly agree” (6), issue involvement with misin-
formation (M = 3.68, SD = 1.49, α = .79) and with COVID-19 (M = 4.09, SD = 1.43,
α = .85) was measured using three similar items: “The issue of misinformation/COVID-
19 is very important to me”; “In general, I have a strong interest in misinformation/
COVID-19”; and “Misinformation/COVID-19 is personally relevant to me.” Partici-
pants were asked to rate their political ideology on a Likert-type scale ranging from
“very conservative” (0) to “very liberal” (6) (M = 3.14, SD = 1.83).
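
Each multi-item scale above reports a Cronbach’s α reliability coefficient. As a minimal pure-Python sketch of how that coefficient is computed (the item responses below are hypothetical 0–6 Likert scores, not the study’s data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists (one inner list per item).

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents

    def var(xs):                         # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each respondent's total score across the k items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))


# Hypothetical responses from five participants on a three-item scale
items = [
    [5, 4, 6, 3, 5],
    [5, 5, 6, 2, 4],
    [4, 4, 5, 3, 5],
]
print(round(cronbach_alpha(items), 2))
```

Statistical packages (e.g. SPSS’s RELIABILITY procedure, which the study’s analyses imply) report the same quantity.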

Analytical strategy
To check whether descriptive norm manipulation differed significantly from that of
injunctive norms, four experimental conditions were recoded as the descriptive norm
group (N = 102) vs the injunctive norm group (N = 103). To check whether experimen-
tal conditions differed from the baseline condition, all five conditions were recoded as
the baseline group (N = 51) vs the experimental group (N = 205). This study then used
a t-test to examine message manipulation. Analysis of covariance (ANCOVA) control-
ling for political ideology and issue involvement with misinformation and COVID-19
was used to examine RQ1ab (Field, 2013). Hayes’ (2018) PROCESS for SPSS (Model
2) was used to probe RQ2ab and RQ3ab, controlling for political ideology and issue
involvement with misinformation and COVID-19. Following prior research (Field,
2013; Gravetter and Wallnau, 2014), histograms and Levene’s test were carried out to
check normality of residuals and homogeneity of variance, respectively. Results suggested
that the residuals were approximately normally distributed (all skewness and kurtosis
scores fell between ±2), and equal group variances were assumed (p = .180 for verification,
.802 for debunking, and .151 for preventive measures).
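
The manipulation-check t-test and the homogeneity-of-variance check described above can be sketched in a few lines of pure Python. The group scores below are hypothetical stand-ins for the study’s data; in practice, SPSS (or scipy.stats.ttest_ind and scipy.stats.levene with center='mean') yields the same statistics:

```python
def pooled_t(a, b):
    """Student's two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)       # sum of squares, group a
    ssb = sum((x - mb) ** 2 for x in b)       # sum of squares, group b
    sp2 = (ssa + ssb) / (na + nb - 2)         # pooled variance
    return (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5


def levene_w(groups):
    """Levene's test statistic: a one-way ANOVA on absolute deviations
    from each group's mean (the mean-centered version of the test)."""
    k = len(groups)
    z = [[abs(x - sum(g) / len(g)) for x in g] for g in groups]
    n = sum(len(g) for g in z)
    zbar = sum(sum(g) for g in z) / n
    between = sum(len(g) * (sum(g) / len(g) - zbar) ** 2 for g in z)
    within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in z)
    return ((n - k) / (k - 1)) * (between / within)


# Hypothetical manipulation-check scores for the two recoded norm groups
descriptive = [5, 4, 6, 5, 4, 5]
injunctive = [3, 4, 3, 2, 4, 3]
print(pooled_t(descriptive, injunctive))
print(levene_w([descriptive, injunctive]))
```

A large |t| indicates the recoded groups differ on the check item; a small Levene statistic supports the equal-variance assumption behind the ANCOVA.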

Results
Manipulation checks showed that descriptive norm conditions featured the prevalence of
combatting misinformation and differed significantly from injunctive norm conditions
(t(203) = 4.13, p < .001). Similarly, injunctive norm conditions highlighted important
referents’ approval of combatting misinformation and differed significantly from descrip-
tive norm conditions (t(203) = −3.93, p < .001). Experimental conditions were signifi-
cantly different from the baseline condition (t(254) = 4.92, p < .001). Thus, message
manipulation was successful.
RQ1ab inquired about the influence of additional exposure to norm-based messages
on verification, debunking, and preventive measures, compared to the risk-appraisal-
only condition. No main effect emerged from the results. RQ2ab and RQ3ab further
probed the moderating effects of perceived severity and perceived susceptibility in
influencing the relationship between message manipulation and behavioral intentions.

Figure 1.  Two-way interaction effect of perceived severity of misinformation on oneself
and message manipulation on debunking.

To uncover the nature of the interaction, we estimated the effects of norm-based mes-
sages at two values of the level of risk appraisal: one standard deviation above the mean
(High) and one standard deviation below the mean (Low).
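
In a moderation model of the form Y = b0 + b1·Message + b2·Moderator + b3·Message×Moderator, the conditional (simple) effect of the message at a given moderator value is b1 + b3·value. A minimal sketch of this probing step, using the reported severity distribution (M = 3.86, SD = 1.57) but illustrative, hypothetical coefficients rather than the study’s estimates:

```python
def simple_effect(b_message, b_interaction, moderator_value):
    """Conditional effect of the message at a given moderator value."""
    return b_message + b_interaction * moderator_value


# Hypothetical coefficients (not the study's estimates)
b_message, b_interaction = 0.10, -0.46
mean_severity, sd_severity = 3.86, 1.57   # perceived severity (self), from the paper

for label, value in [("Low (-1 SD)", mean_severity - sd_severity),
                     ("High (+1 SD)", mean_severity + sd_severity)]:
    print(label, round(simple_effect(b_message, b_interaction, value), 2))
```

PROCESS performs this same substitution internally when it reports conditional effects at ±1 SD of the moderator.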
As for verification, a marginally significant interaction effect of perceived severity
(self) and message manipulation (b = .47, SE = .24, p = .05) emerged. That is, when
holding perceived susceptibility (self) constant, descriptive norm-based messages about
debunking increased verification intention among individuals with high perceived sever-
ity (self). For the sake of academic rigor, this study does not discuss this finding any
further and recommends caution in interpreting it (Pritschet et al., 2016).
As for debunking, a significant interaction effect of perceived severity (self) and
message manipulation emerged from the results (b_descriptive = −.46, SE_descriptive = .23,
p = .042; b_injunctive = −.47, SE_injunctive = .22, p = .03). Specifically, when holding
perceived susceptibility (self) constant, both types of normative messages about
verification reduced debunking intentions among individuals with high perceived severity
(self) of misinformation, compared to the baseline group (Figure 1). Issue involve-
ment with COVID-19 was the only significant covariate in both analyses (Online
Appendix D).
As for preventive measures (Figure 2), results showed a significant interaction effect
between perceived severity (self) and message manipulation (b_descriptive = .39,
SE_descriptive = .17, p = .019; b_injunctive = .58, SE_injunctive = .17, p = .001). When holding perceived
susceptibility (self) constant, both types of normative messages about debunking elicited
more positive preventive behavioral intentions among individuals with high perceived
severity (self) of misinformation. Results also showed that injunctive normative mes-
sages about debunking were more persuasive for individuals with high perceived sever-
ity (other) of misinformation, compared to the baseline group (b = .64, SE = .31, p =
.04). Issue involvement with COVID-19 and political ideology were the significant
covariates in the analysis (Online Appendix D). A full table of detailed results appears in
the Online Appendices.

Figure 2.  Two-way interaction effect of perceived severity of misinformation on oneself (top)
and others (bottom) and message manipulation on preventive measures.

Discussion
Social media have long drawn criticism as fertile ground for propagating misinformation
(Wang et al., 2019). Public turmoil, such as the COVID-19 pandemic, has nourished
misinformation (Chou et al., 2018). Although some research demonstrates success in
combatting misinformation, an effective strategy for motivating the related behaviors has
yet to be identified and implemented (Vraga and Bode, 2020). Thus, grounded in the
literature on norms and risk appraisal, this study
empirically examined whether norm-based messages would improve verification,
debunking, and preventive behavioral intentions in the context of COVID-19. By
incorporating the nuances of risk appraisal (e.g. El-Toukhy, 2015; Rimal and Real,
2003), the study further investigated the interaction effects between risk appraisal and
norms on individuals’ decision-making.
Contrary to expectations, message manipulation failed to yield a main effect on
behavioral intentions. This null finding may be due to two factors: (a) message dosage
and (b) the nuanced effect of norms. First, prior research shows that a single-dose health
promotion message is less influential than a multi-dose message (Ratcliff et al., 2019).
Thus, using one dose of norm-based appeals may not have enough strength to activate
behavioral changes. This study encourages future research to replicate the current experi-
ment using a multi-dose message design. Second, prior research also finds that the effect
of norms on behavioral intentions is not always direct; sometimes, personal and contextual
factors mediate or moderate it (Xiao and Borah, 2020; Fishbein and Ajzen, 2010).
Therefore, relevant mediators, such as confidence and self-efficacy, merit further inves-
tigation. In short, since experimental research on norm-based appeals remains relatively
scarce (Mollen et al., 2010; Vraga and Bode, 2020), our understanding of norms and their
role in countering misinformation urgently awaits further expansion.
Another explanation for the null finding is that the message effect is contingent upon
risk appraisal. Results partially buttressed this suspicion, revealing that perceived sus-
ceptibility and perceived severity moderate debunking and preventive behavioral inten-
tions. Interestingly, exposure to both types of norms regarding verification resulted in
lower intentions to debunk social media misinformation among people with high
perceived severity of misinformation. A possible reason may lie in the different
behavioral emphases of verification and debunking. Verification relates to prudence and
scrutiny toward the information encountered (e.g. Liu and Huang, 2020; Schuetz et al.,
2021), while debunking relates to taking action on information, sometimes regarded
as reckless if performed without prior verification (e.g. Tandoc et al., 2020). As such, for
individuals who perceive misinformation as highly detrimental, intensified normative
perceptions about verification may increase their discretion about posting immediate
rebuttals. In particular, individuals who often actively engage with online information
may think twice and validate the content and source before performing any immediate
corrective actions. The practical implication is rather straightforward. For individuals
who perceive high-level severity of misinformation, accentuating verification norms
may not be ideal for motivating positive information behaviors. Simply stating the risks
associated with misinformation may result in a better outcome.
Gratifyingly, exposure to norm-based messages about debunking elicited more pre-
ventive behavioral intentions among people with high-level perceived severity of misin-
formation than the risk-appraisal-only messages. This is delightful news for health
practitioners and organizations. As life slowly returns to normal and public facilities plan
to reopen, adherence to preventive measures, such as social distancing, masking, and
vaccination, has begun to fade, even though COVID-19 still causes an average of 154 deaths a day
in the United States (CDC, 2021). This study indicates that increasing the normative
perceptions in favor of combatting misinformation would further enhance adherence to
COVID-19 prevention measures among individuals who perceive misinformation as
harmful. This partly echoes prior research that shows a positive relationship between risk
appraisal of misinformation and trust toward authoritative organizations (e.g. Vinck
et al., 2019). Social approval of fighting misinformation may amplify the risk appraisal
of misinformation (Barnum and Armstrong, 2019; Scheufele and Krause, 2019), which
in turn buttresses institutional trust and behavioral conformity (Vinck et al., 2019).
Moreover, considering the prevalence of prevention-related misinformation and contro-
versies (e.g. He et al., 2021; Mourali and Drake, 2022), people may have high attitude
ambivalence toward preventive measures (Hohman et al., 2016). Prior research suggests
that high attitude ambivalence drives people to pay special attention to the relevant and
consensus information (e.g. Hodson et al., 2001). Thus, normative messaging about
combatting COVID-19 misinformation may raise their attention, trigger critical con-
sumption of prevention-related information, and activate prevention behavioral
responses. In-depth qualitative research should delve into this particular effect and
unearth the emotions and reasoning at hand.
Finally, this study inquired about two characteristics of risk appraisal: the self–other
discrepancy and the trade-off effect. Self–other perceptual differences are not evident in
the current context. However, for individuals with high levels of perceived severity of
misinformation effects on others, important referents’ approval of debunking misinformation
could potentially promote self-protection during the pandemic. Beyond the
facilitating effect of heightened risk appraisal of misinformation and critical consumption
of prevention information discussed earlier, another possibility may lie in the
emphasis on “close others” in injunctive norms (as opposed to the relatively “distant
others” in descriptive norms; Rimal and Real, 2005). In the COVID-19 pandemic, perceived
personal and family risks more strongly determine individuals’ actions than societal
and collective risks (e.g. Deressa et al., 2021; Tsai, 2020). Thus, the injunctive norms
might remind people to protect their “close others” and motivate them to practice essen-
tial safety precautions. Moreover, in line with prior research (e.g. El-Toukhy, 2015; Sun
et al., 2022), our study highlights the relative importance of severity in the context of
misinformation. Evidently, the influence of severity is more nuanced than that of
susceptibility. It adds subtle but important variations to the relationship between misinformation
correction and COVID-19 prevention. This study thereby calls for more focused research
on the perceived severity of misinformation and interventions to enhance the public’s
perceptions of that severity.
As an aside, issue involvement with COVID-19 and political ideology are influential
covariates in these inquiries. Prior research suggests that when issue involvement is
relatively weak, message processing and acceptance tend to be limited (Cornelis et al.,
2014). As such, to combat COVID-19-related misinformation, increasing the personal
relevance of COVID-19 may be a prerequisite for implementing successful interventions.
Moreover, the timing and familiarity of the issue may have differential impacts on
people’s perceptions and behaviors. For instance, misinformation regarding unfamiliar
health and crisis events (e.g. COVID-19, Zika) may have more severe negative consequences
than misinformation about long-standing controversial issues (e.g. global warming, gun
ownership). Thus, future research could further explore the influence of topic on the
effectiveness of misinformation-related interventions. Furthermore, the significance of
political ideology reflects a grim reality backed by copious empirical evidence (e.g.
Bayram and Shields, 2021; Gollwitzer et al., 2020). That is, individuals’ responses to
COVID-19 are subject to a deep partisan divide, and trust toward authoritative informa-
tion sources is also profoundly polarized. Therefore, investigating communication strate-
gies that are politically sensitive and motivating a nonpartisan approach to the problem
may be particularly crucial in combatting misinformation.
This study is not without limitations. First, this study used an MTurk sample since it
contains a relatively large participant pool including hard-to-reach populations (Hitlin,
2016; Robinson et al., 2019; Smith et al., 2015), and its data validity is equivalent to that
of laboratory experiments with sufficient internal validity (Thomas and Clifford, 2017).
However, some researchers express concerns about the data quality, representativeness
of health status, and certain perceptual biases of MTurk samples (e.g. Peer et al., 2021;
Walters et al., 2018). Moreover, although the current sample size exceeds the minimum
number the power analysis stipulates, a larger sample size is more desirable in the field
of health communication and behavioral research (e.g. Freeman et al., 2021; Loomba
et al., 2021). Furthermore, the sample’s gender ratio is slightly skewed toward male par-
ticipants (Nmale = 149, Nfemale = 107). Thus, future research could use a larger, more
representative, and gender-balanced sample with diverse health and education back-
grounds, to strengthen this study’s external validity. Second, this study manipulated
norms using a conventional approach. In the age of media convergence, social media
affordances (e.g. “likes”) may also represent a type of norm relating to an issue, exert
unexpected influences on decision-making (Borah and Xiao, 2018; Chadwick et al., 2021),
and may further motivate behaviors beyond the message’s inherent persuasiveness (e.g.
Bond et al., 2012). Moreover, this study manipulated norms based on a previous study
that demonstrates a normative approval of addressing misinformation (Bode and Vraga,
2020). However, the statistics may not reflect the actual experiences of each individual
since (a) there may be a discrepancy between self-reported intention and behavior and
(b) people may count on authoritative organizations to address misinformation. Thus,
future research could further explore these intriguing facets of norms in strategies for
combatting misinformation. In addition, this study used an interrogative sentence in
descriptive norms and an assertive sentence in injunctive norms. This manipulation
was closely modeled after prior research and practices (e.g. Xiao and Borah, 2020; CDC,
2022) and also reflected previous findings which suggest that people often underestimate
how well others perform positive behaviors (Graupensperger et al., 2021; Smith et al.,
2021). Thus, using a question to illustrate descriptive norms in a public intervention
campaign seems more appropriate to compensate for the perceptual gap. That being said,
future research should examine whether sentence forms would have differential
impacts on behavioral decision-making.
Third, this study used the risk-appraisal-only condition as the baseline for practical
reasons. As risk appraisal was identified as the underlying force for combatting
misinformation, it has not only been extensively examined in misinformation research but also
been widely applied in misinformation-related interventions (e.g. Sun et al., 2021; UN, 2020a,
2020b). As such, using the risk-appraisal-only group as the baseline is theoretically
sound and realistic, and the experimental results of additional strategies could contribute
directly to the current health communication practices. However, a growing line of
research suggests that misinformation only accounts for a small portion of information
consumption, and amplifying the risk of misinformation may elicit undesired effects
(Acerbi et al., 2022; Lazer et al., 2018; Nyhan, 2019). Scholars have also proposed that
improving trust in reliable sources rather than fighting misinformation may be a more
effective solution (Acerbi et al., 2022). Therefore, future research is encouraged to repli-
cate the current design with a pure control condition to examine the isolated effects of
norm-based messaging in both combatting misinformation and increasing trust toward
credible sources. Finally, this study measured verification using a single-item scale since
it was the most appropriate and literature-supported scale (Liu and Huang, 2020) when
this study was conducted. A multi-item scale of verification is worthy of exploration in
future misinformation research (e.g. Tifferet, 2021).
Despite the limitations, this study takes the first step, in the context of COVID-19,
toward examining and orchestrating theory-guided messaging strategies for motivating
information and prevention behaviors. Theoretically, this study substantiates the utility
of norms in guiding behavioral promotion beyond pure health contexts. Findings dem-
onstrate that message persuasion grounded in social norms could potentially motivate
information and health behaviors amid the COVID-19 pandemic. Moreover, by directly
addressing the nuances of norms and risk appraisal in countering misinformation, this
study underscores the importance of using and investigating multiple theoretical features in
behavioral persuasion. Practically, this study provides empirical evidence that helps
explain the “dos and don’ts” of future misinformation-related interventions during public
health crises. Findings show that strategies grounded in social norms could further
contribute to the persuasive effectiveness of behavioral interventions based solely on
risk appraisal.

Declaration of conflicting interests


The author(s) declared no potential conflicts of interest with respect to the research, authorship,
and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this
article.

Ethics approval
The questionnaire and methodology for this study were approved by the Institutional Review Board
(IRB) committee of Qingdao University.

Informed consent
Informed consent was obtained from all individual participants included in the study.

ORCID iD
Xizhu Xiao https://orcid.org/0000-0002-7833-134X

Data availability statement


The data that support the findings of this study are available from the corresponding author upon
reasonable request.

Supplemental material
Supplemental material for this article is available online.

References
Acerbi A, Altay S and Mercier H (2022) Research note: Fighting misinformation or fighting for
information? Harvard Kennedy School Misinformation Review 1(3).
Ahmed W, Seguí FL, Vidal-Alaball J, et al. (2020a) Covid-19 and the “film your hospital” con-
spiracy theory: social network analysis of Twitter data. Journal of Medical Internet Research
22(10): e22374.
Ahmed W, Vidal-Alaball J, Downing J, et al. (2020b) COVID-19 and the 5G conspiracy theory:
social network analysis of Twitter data. Journal of Medical Internet Research 22(5): e19458.
Altay S, Hacquin A and Mercier H (2022) Why do so few people share fake news? It hurts their
reputation. New Media & Society 24: 1303–1324.
Arif A, Robinson JJ, Stanek SA, et al. (2017) A closer look at the self-correcting crowd: examining
corrections in online rumors. In: Proceedings of the ACM conference on computer supported
cooperative work, Portland, OR, 25 February–1 March, pp. 155–168. New York: ACM.
Barnum T and Armstrong TA (2019) Sensation seeking to marijuana use: exploring the mediating
roles of risk appraisal and social norms. Addictive Behaviors 92: 76–83.
Bayram AB and Shields TG (2021) Who trusts the WHO? Heuristics and Americans’ trust in
the World Health Organization during the COVID-19 pandemic. Social Science Quarterly
102(5): 2312–2330.
Bewick BM, West RM, Gill J, et al. (2010) Providing web-based feedback and social norms infor-
mation to reduce student alcohol intake: a multisite investigation. Journal of Medical Internet
Research 12: e59.
Bode L and Vraga EK (2020) Americans are fighting coronavirus misinformation on social media.
The Washington Post. Available at: https://www.washingtonpost.com/politics/2020/05/07/
americans-are-fighting-coronavirus-misinformation-social-media/
Bond RM, Fariss CJ, Jones JJ, et al. (2012) A 61-million-person experiment in social influence and
political mobilization. Nature 489: 295–298.
Borah P and Xiao X (2018) The importance of “likes”: the interplay of message framing, source,
and social endorsement on credibility perceptions of health information on Facebook. Journal
of Health Communication 23: 399–411.
Brandtzaeg PB, Lüders M, Spangenberg J, et al. (2016) Emerging journalistic verification prac-
tices concerning social media. Journalism Practice 10: 323–342.
Bruns A, Harrington S and Hurcombe E (2020) “Corona? 5G? or both?”: the dynamics of COVID-
19/5G conspiracy theories on Facebook. Media International Australia 177(1): 12–29.
Centers for Disease Control and Prevention (CDC) (2021) A needle today helps keep COVID
away. Available at: https://www.cdc.gov/coronavirus/2019-ncov/covid-data/covidview/
index.html
Centers for Disease Control and Prevention (CDC) (2022) Did you know? Misinformation spread
on social media can affect confidence in COVID-19 vaccines and vaccination rates. Available
at: https://www.facebook.com/photo/?fbid=264551425706303&set=a.218181283676651
Chadwick A, Kaiser JW, Vaccari C, et al. (2021) Online social endorsement and Covid-19 vaccine
hesitancy in the United Kingdom. Social Media + Society 7: 1–17.
Chadwick A, Vaccari C and O’Loughlin B (2018) Do tabloids poison the well of social media?
Explaining democratically dysfunctional news sharing. New Media & Society 20: 4255–4274.
Chan MS, Jones C, Jamieson KH, et al. (2017) Debunking: a meta-analysis of the psychological
efficacy of messages countering misinformation. Psychological Science 28: 1531–1546.
Chang C and Huang M (2020) Antecedents predicting health information seeking: a systematic
review and meta-analysis. International Journal of Information Management 54: 102115.
Chou W, Gaysynsky A and Vanderpool RC (2020) The COVID-19 Misinfodemic: moving beyond
fact-checking. Health Education & Behavior 48: 9–13.
Chou W-YS, Oh A and Klein WMP (2018) Addressing health-related misinformation on social
media. JAMA 320(23): 2417–2418.
Cialdini RB, Demaine LJ, Sagarin BJ, et al. (2006) Managing social norms for persuasive impact.
Social Influence 1(1): 3–15.
Cialdini RB, Kallgren CA and Reno RR (1991) A focus theory of normative conduct: a theo-
retical refinement and reevaluation of the role of norms in human behavior. Advances in
Experimental Social Psychology 24: 201–234.
Cohen J (1988) Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ:
Lawrence Erlbaum.
Corbu N, Oprea D, Negrea-Busuioc E, et al. (2020) “They can’t fool me, but they can fool the
others!” Third person effect and fake news detection. European Journal of Communication
35: 165–180.
Cornelis E, Cauberghe V and Pelsmacker PD (2014) The inoculating effect of message sidedness
on adolescents’ binge drinking intentions: the moderating role of issue involvement. Journal
of Drug Issues 44(3): 254–268.
Davison WP (1983) The third-person effect in communication. Public Opinion Quarterly 47(1):
1–15.
Deressa W, Worku A, Abebe W, et al. (2021) Risk perceptions and preventive practices of
COVID-19 among healthcare professionals in public hospitals in Addis Ababa, Ethiopia.
PLoS ONE 16: e0242471.
El-Toukhy S (2015) Parsing susceptibility and severity dimensions of health risk perceptions.
Journal of Health Communication 20(5): 499–511.
Erdfelder E, Faul F and Buchner A (1996) GPOWER: a general power analysis program. Behavior
Research Methods, Instruments, & Computers 28: 1–11.
Faul F, Erdfelder E, Lang A, et al. (2007) G*Power 3: a flexible statistical power analysis program
for the social, behavioral, and biomedical sciences. Behavior Research Methods 39: 175–191.
Field AP (2013) Discovering Statistics using IBM SPSS Statistics: And Sex and Drugs and Rock
“n” Roll. 4th ed. London: SAGE.
Fishbein M and Ajzen I (2010) Predicting and Changing Behavior: The Reasoned Action
Approach. New York: Psychology Press, Taylor & Francis Group.
Forrest A (2020) Coronavirus: 700 dead in Iran after drinking toxic methanol alcohol to “cure
Covid-19.” Independent. Available at: https://www.independent.co.uk/news/world/middle-
east/coronavirus-iran-deathstoxic-methanol-alcohol-fake-news-rumours-a9487801.html
Freeman D, Loe BS, Yu L, et al. (2021) Effects of different types of written vaccination infor-
mation on COVID-19 vaccine hesitancy in the UK (OCEANS-III): a single-blind, parallel-
group, randomised controlled trial. The Lancet Public Health 6: e416–e427.
Fridkin K, Kenney P and Wintersieck A (2015) Liar, liar, pants on fire: how fact-checking
influences citizens’ reactions to negative advertising. Political Communication 32: 127–151.
Glocalities (2022) Report: trust in the United Nations. Available at: https://glocalities.com/reports/
untrust
Gollwitzer A, Martel C, Brady WJ, et al. (2020) Partisan differences in physical distancing are
linked to health outcomes during the COVID-19 pandemic. Nature Human Behaviour 4:
1186–1197.
Gottfried J (2020) Around three-in-ten Americans are very confident they could fact-check news
about COVID-19. Available at: www.pewresearch.org/fact-tank/2020/05/28/around-three-in-
ten-americans-are-very-confident-they-could-fact-check-news-about-covid-19/%3famp=1
Graupensperger SA, Lee C and Larimer ME (2021) Young adults underestimate how well peers
adhere to covid-19 preventive behavioral guidelines. The Journal of Primary Prevention 42:
309–318.
Graves L, Nyhan B and Reifler J (2016) Understanding innovations in journalistic practice: a
field experiment examining motivations for fact-checking. Journal of Communication 66:
102–138.
Gravetter F and Wallnau L (2014) Essentials of Statistics for the Behavioral Sciences. 8th ed.
Belmont, CA: Wadsworth.
Gunther A and Mundy P (1993) Biased optimism and the third person effect. Journalism Quarterly
70: 58–67.
Gunther A and Storey JD (2003) The influence of presumed influence. Journal of Communication
53(2): 199–215.
Hayes AF (2018) Introduction to Mediation, Moderation, and Conditional Process Analysis. 2nd
ed. New York: Guilford.
He L, He C, Reynolds TL, et al. (2021) Why do people oppose mask wearing? A comprehensive
analysis of U.S. tweets during the COVID-19 pandemic. Journal of the American Medical
Informatics Association 28: 1564–1573.
Hitlin P (2016) Research in the crowdsourcing age: a case study. Available at: https://www.
pewresearch.org/internet/wp-content/uploads/sites/9/2016/07/PI_2016.07.11_Mechanical-
Turk_FINAL.pdf
Hodson G, Maio GR and Esses VM (2001) The role of attitudinal ambivalence in susceptibility to
consensus information. Basic and Applied Social Psychology 23: 197–205.
Hohman ZP, Crano WD and Niedbala EM (2016) Attitude ambivalence, social norms, and behav-
ioral intentions: developing effective antitobacco persuasive communications. Psychologists
in Addictive Behaviors 30(2): 209–219.
Huang Y and Wang W (2020) When a story contradicts: correcting health misinformation on social
media through different message formats and mechanisms. Information, Communication &
Society. Epub ahead of print 29 November. DOI: 10.1080/1369118X.2020.1851390.
Jang SM and Kim JK (2018) Third person effects of fake news: fake news regulation and media
literacy interventions. Computers in Human Behavior 80: 295–302.
Lakens D (2013) Calculating and reporting effect sizes to facilitate cumulative science: a practical
primer for t-tests and ANOVAs. Frontiers in Psychology 4: 863.
Larson HJ and Broniatowski DA (2021) Why debunking misinformation is not enough to change
people’s minds about vaccines. American Journal of Public Health 111(6): 1058–1060.
Lazer D, Baum MA, Benkler Y, et al. (2018) The science of fake news. Science 359: 1094–
1096.
Liu PL and Huang L (2020) Digital disinformation about COVID-19 and the third-person effect:
examining the channel differences and negative emotional outcomes. Cyberpsychology,
Behavior and Social Networking 23(11): 789–793.
Loomba S, de Figueiredo A, Piatek SJ, et al. (2021) Measuring the impact of COVID-19 vac-
cine misinformation on vaccination intent in the UK and USA. Nature Human Behaviour 5:
337–348.
Lyons BA, Montgomery JM, Guess AM, et al. (2021) Overconfidence in news judgments is
associated with false news susceptibility. Proceedings of the National Academy of Sciences
118(23): e2019527118.
Medscape (2021) Do you trust the CDC and FDA? Available at: https://www.medscape.com/
viewarticle/951793
Metzger MJ and Flanagin AJ (2013) Credibility and trust of information in online environments:
the use of cognitive heuristics. Journal of Pragmatics 59(Part B): 210–220.
Metzger MJ, Flanagin AJ and Medders RB (2010) Social and heuristic approaches to credibility
evaluation online. Journal of Communication 60(3): 413–439.
Mollen S, Ruiter RA and Kok G (2010) Current issues and new directions in Psychology and
Health: what are the oughts? The adverse effects of using social norms in health communica-
tion. Psychology & Health 25(3): 265–270.
Montgomery JM, Nyhan B and Torres M (2018) How conditioning on posttreatment variables
can ruin your experiment and what to do about it. American Journal of Political Science 62:
760–775.
Mourali M and Drake C (2022) The challenge of debunking health misinformation in dynamic
social media conversations: online randomized study of public masking during COVID-19.
Journal of Medical Internet Research 24(3): e34831.
Nan X and Madden K (2012) HPV vaccine information in the blogosphere: how positive and negative blogs influence vaccine-related risk perceptions, attitudes, and behavioral intentions. Health Communication 27: 829–836.
Nyhan B (2019) Why fears of fake news are overhyped. Available at: https://gen.medium.com/
why-fears-of-fake-news-are-overhyped-2ed9ca0a52c9
Nyhan B (2021) Why the backfire effect does not explain the durability of political mispercep-
tions. Proceedings of the National Academy of Sciences 118(15): e1912440117.
Nyhan B and Reifler J (2010) When corrections fail: the persistence of political misperceptions.
Political Behavior 32(2): 303–330.
O’Keefe DJ and Hoeken H (2021) Message design choices don’t make much difference to per-
suasiveness and can’t be counted on—not even when moderating conditions are specified.
Frontiers in Psychology 12: 664160.
Peer E, Rothschild DM, Evernden Z, et al. (2021) Data quality of platforms and panels for online behavioral research. Behavior Research Methods. Epub ahead of print 29 September. DOI: 10.3758/s13428-021-01694-3.
Phua J and Ahn SJ (2016) Explicating the “like” on Facebook brand pages: the effect of intensity
of Facebook use, number of overall “likes,” and number of friends’ “likes” on consumers’
brand outcomes. Journal of Marketing Communications 22(5): 544–559.
Pogarsky G, Roche SP and Pickett JT (2017) Heuristics and biases, rational choice, and sanction
perceptions. Criminology 55: 85–111.
Pritschet L, Powell D and Horne Z (2016) Marginally significant effects as evidence for hypoth-
eses: changing attitudes over four decades. Psychological Science 27(7): 1026–1042.
Ratcliff CL, Jensen JD, Scherr CL, et al. (2019) Loss/gain framing, dose, and reactance: a message
experiment. Risk Analysis 39: 2640–2652.
Riedl MJ, Whipple KN and Wallace R (2021) Antecedents of support for social media con-
tent moderation and platform regulation: the role of presumed effects on self and oth-
ers. Information, Communication & Society. Epub ahead of print 26 January. DOI:
10.1080/1369118X.2021.1874040.
Rimal RN and Juon H (2010) Use of the risk perception attitude framework for promoting breast
cancer prevention. Journal of Applied Social Psychology 40(2): 287–310.
Rimal RN and Real K (2003) Perceived risk and efficacy beliefs as motivators of change: use
of the risk perception attitude (RPA) framework to understand health behaviors. Human
Communication Research 29(3): 370–399.
Rimal RN and Real K (2005) How behaviors are influenced by perceived norms: a test of the
theory of normative social behavior. Communication Research 32(3): 389–413.
Robinson EJ, Fleming A and Higgs S (2014) Prompting healthier eating: testing the use of health
and social norm based messages. Health Psychology 33(9): 1057–1064.
Robinson J, Rosenzweig C, Moss AJ, et al. (2019) Tapped out or barely tapped? Recommendations
for how to harness the vast and largely unused potential of the Mechanical Turk participant
pool. PLoS ONE 14(12): e0226394.
Rudert SC and Janke S (2021) Following the crowd in times of crisis: descriptive norms predict physical
distancing, stockpiling, and prosocial behavior during the COVID-19 pandemic. Group Processes
& Intergroup Relations. Epub ahead of print 23 July. DOI: 10.1177/13684302211023562.
Saey TH (2020) Why scientists say wearing masks shouldn’t be controversial. Science News.
Available at: https://www.sciencenews.org/article/covid-19-coronavirus-why-wearing-
masks-controversial
Scheufele DA and Krause NM (2019) Science audiences, misinformation, and fake news. Proceedings of the National Academy of Sciences 116(16): 7662–7669.
Schuetz SW, Sykes TA and Venkatesh V (2021) Combating COVID-19 fake news on social media
through fact checking: antecedents and consequences. European Journal of Information
Systems 30: 376–388.
Shahsavari S, Holur P, Wang T, et al. (2020) Conspiracy in the time of corona: automatic detec-
tion of emerging COVID-19 conspiracy theories in social media and the news. Journal of
Computational Social Science 3(2): 279–317.
Smith NA, Sabat IE, Martinez LR, et al. (2015) A convenient solution: using MTurk to sample
from hard-to-reach populations. Industrial and Organizational Psychology 8: 220–228.
Smith S, DeJong W, Turner MM, et al. (2021) Determining whether public communications cam-
paigns based on the social norms approach are a viable COVID-19 prevention strategy for
college campuses. Journal of Health Communication 26: 792–798.
Sun Y, Chia SC, Lu F, et al. (2022) The battle is on: factors that motivate people to combat anti-
vaccine misinformation. Health Communication 37: 327–336.
Sun Y, Oktavianus J, Wang S, et al. (2021) The role of influence of presumed influence and antici-
pated guilt in evoking social correction of COVID-19 misinformation. Health Communication.
Epub ahead of print 18 February. DOI: 10.1080/10410236.2021.1888452.
Sundar SS (2008) The MAIN model: a heuristic approach to understanding technology effects
on credibility. In: Metzger MJ and Flanagin AJ (eds) Digital Media, Youth, and Credibility.
Cambridge, MA: The MIT Press, pp. 72–100.
Sundar SS, Xu Q and Oeldorf-Hirsch A (2009) Authority vs. peer: how interface cues influ-
ence users. In: Proceedings of the 27th international conference extended abstracts on
human factors in computing systems (CHI’09), Boston, MA, 4–9 April, pp. 4231–4236.
New York: ACM.
Swire-Thompson B, DeGutis JM and Lazer D (2020) Searching for the backfire effect: measurement and design considerations. Journal of Applied Research in Memory and Cognition 9: 286–299.
Tandoc EC, Lim D and Ling R (2020) Diffusion of disinformation: how social media users respond
to fake news and why. Journalism 21: 381–398.
Thai TD and Wang T (2020) Investigating the effect of social endorsement on customer brand rela-
tionships by using statistical analysis and fuzzy set qualitative comparative analysis (fsQCA).
Computers in Human Behavior 113: 106499.
Thomas KA and Clifford S (2017) Validity and Mechanical Turk: an assessment of exclusion
methods and interactive experiments. Computers in Human Behavior 77: 184–197.
Thorson E (2016) Belief echoes: the persistent effects of corrected misinformation. Political Communication 33: 460–480.
Tifferet S (2021) Verifying online information: development and validation of a self-report scale.
Technology in Society 67: 101788.
Tsai C (2020) Personal risk and societal obligation amidst COVID-19. JAMA 323(16): 1555–1556.
Tully M, Bode L and Vraga EK (2020) Mobilizing users: does exposure to misinformation and its
correction affect users’ responses to a health misinformation post? Social Media + Society
6: 1–12.
United Nations (UN) (2020a) Facebook posts. Available at: https://www.facebook.com/
page/54779960819/search/?q=MISINFORMATION
United Nations (UN) (2020b) United Nations launches global “pause” campaign to tackle
spread of misinformation. Available at: https://www.un.org/sites/un2.un.org/files/pause_
pr_final_30jun.pdf
Van der Meer TG and Jin Y (2020) Seeking formula for misinformation treatment in public health
crises: the effects of corrective information type and source. Health Communication 35(5):
560–575.
Vinck P, Pham PN, Bindu KK, et al. (2019) Institutional trust and misinformation in the response
to the 2018–19 Ebola outbreak in North Kivu, DR Congo: a population-based survey. The
Lancet. Infectious Diseases 19(5): 529–536.
Vraga EK and Bode L (2020) Correction as a solution for health misinformation on social media.
American Journal of Public Health 110(S3): S278–S280.
Walter N and Murphy S (2018) How to unring the bell: a meta-analytic approach to correction of
misinformation. Communication Monographs 85: 423–441.
Walter N, Brooks JJ, Saucier CJ, et al. (2020) Evaluating the impact of attempts to correct health
misinformation on social media: a meta-analysis. Health Communication. Epub ahead of
print 6 August. DOI: 10.1080/10410236.2020.1794553.
Walter N, Cohen J, Holbert RL, et al. (2019) Fact-checking: a meta-analysis of what works and for
whom. Political Communication 37: 350–375.
Walters K, Christakis DA and Wright DR (2018) Are Mechanical Turk worker samples repre-
sentative of health status and health behaviors in the U.S.? PLoS ONE 13(6): e0198835.
Wang Y, McKee M, Torbica A, et al. (2019) Systematic literature review on the spread of health-related misinformation on social media. Social Science & Medicine 240: 112552.
Weinstein ND (2007) Misleading tests of health behavior theories. Annals of Behavioral Medicine
33: 1–10.
West SM (2018) Censored, suspended, shadowbanned: user interpretations of content moderation
on social media platforms. New Media & Society 20(11): 4366–4383.
World Health Organization (WHO) (2020a) Immunizing the public against misinformation.
Available at: https://www.who.int/news-room/feature-stories/detail/immunizing-the-public-
against-misinformation
World Health Organization (WHO) (2020b) Managing the COVID-19 infodemic: promoting
healthy behaviours and mitigating the harm from misinformation and disinformation.
Available at: https://www.who.int/news/item/23-09-2020-managing-the-covid-19-inf-
odemic-promoting-healthy-behaviours-and-mitigating-the-harm-from-misinformation-
and-disinformation
World Health Organization (WHO) (2022) Help stop the spread of vaccine misinformation.
Available at: https://www.facebook.com/watch/?v=671757437296353
Xiao X (2019) Follow the heart or the mind? Examining cognitive and affective attitude on HPV vaccination intention. Atlantic Journal of Communication 29(2): 93–105.
Xiao X and Borah P (2020) Do norms matter? Examining norm-based messages in HPV vaccina-
tion promotion. Health Communication 36(12): 1476–1484.
Xiao X and Wong RM (2020) Which is better? Theory of reasoned action or theory of planned
behavior: a meta-analysis of vaccination research. Vaccine 38(33): 5131–5138.
Xiao X, Lee D, Wong M, et al. (2021) The impact of theory in HPV vaccination promotion
research: a systematic review and meta-analysis. American Journal of Health Promotion
35(7): 1002–1014.
Yang B and Nan X (2018) Influence of norm-based messages on college students’ binge drinking intentions: considering norm type, regulatory mode, and level of alcohol consumption. Health Communication 34: 1711–1720.
Yıldırım M, Geçer E and Akgül Ö (2020) The impacts of vulnerability, perceived risk, and fear on preventive behaviours against COVID-19. Psychology, Health & Medicine 26: 35–43.
Yum S (2020) Social network analysis for coronavirus (COVID-19) in the United States. Social
Science Quarterly 101(4): 1642–1647.
Zanna MP and Rempel JK (1988) Attitudes: a new look at an old concept. In: Bar-Tal D and
Kruglanski AW (eds) The Social Psychology of Knowledge. Cambridge: Cambridge
University Press, pp. 315–334.

Author biography
Xizhu Xiao (Ph.D., Washington State University) is an Assistant Professor at the School of
Literature, Journalism and Communication, Qingdao University. Her research interests lie at the
intersection of health communication, strategic communication, and new media.