
How Effective Are Interventions Against Misinformation?

Sacha Altay
Department of Political Science, University of Zurich, Switzerland.

Abstract
Efforts to combat misinformation have intensified in recent years. In parallel, our scientific
understanding of misinformation and of our information ecosystem has improved. Here, I
propose ways to improve interventions against misinformation based on this growing body of
knowledge. First, because misinformation consumption is minimal and news consumption is
low, more interventions should aim at increasing the uptake of reliable information. Second,
because most people distrust unreliable sources but fail to sufficiently trust reliable sources,
there is more room to improve trust in reliable sources than to reduce trust in unreliable sources.
Third, because misinformation is largely a symptom of deeper socio-political problems,
interventions should try to address these root causes, such as by reducing partisan animosity.
Fourth, because a small number of powerful individuals give misinformation most of its
visibility, interventions should try to target these ‘superspreaders’. Fifth, because false
information is not necessarily harmful and true information can be used in misleading ways,
misleadingness should take precedence over veracity in defining misinformation.
Policymakers, journalists, and researchers would benefit from considering these arguments
when thinking about the problem of misinformation and how to tackle it.

Keywords: Misinformation; Fact-checking; Media literacy; Friction; Prebunking; Nudge; Interventions.

1. Introduction
People across the world are worried about misinformation [1], scientists are studying it more
closely than ever before [2], and policymakers are looking for solutions to curb its spread and
limit its impact. Following the 2016 US presidential election, efforts to combat misinformation
have intensified. Fact-checking initiatives are blooming, various kinds of media literacy
training are being developed to help people navigate the information ecosystem, and small
changes in people’s online environment are being made to deter the sharing of misinformation.
In parallel, our scientific understanding of misinformation and of our information ecosystem
has improved. In this article I highlight a disconnect between this growing body of knowledge
and the premises on which interventions against misinformation are built. Most interventions
against misinformation still build on the premises that misinformation is widespread and a
major cause of complex socio-political problems, that people are gullible and easily swayed by
misinformation, and that people share and believe misinformation because they lack something
(such as education, literacy skills, cognitive abilities, etc.). Many of these ideas have gained
momentum in recent years with moral panics about the influence of fake news on the 2016 US presidential election and Brexit, but have since been weakened, if not refuted [3]. Yet, few
attempts have been made to move away from alarmist narratives on misinformation and build
interventions on more psychologically plausible foundations. Misinformation is used in this
article as an umbrella term encompassing false or misleading content spread without the intent
to do harm.
Interventions against misinformation have been under great scrutiny recently. For
instance, a systematic review of interventions against misinformation found that most
interventions focused on fact-checking, lacked theoretical foundations, and did not address the
causes of misinformation susceptibility such as motivated reasoning or partisanship [4].
Another scoping review highlighted the need for interventions to adapt to video-based social
media, to be tested in ecological settings outside of the lab, to target people older than 65 years,
and to extend their scope outside of the US and Western Europe [5]. Others have highlighted
the need for system-level solutions [6], or the importance of testing interventions on both
misinformation and reliable information [7] since some interventions indiscriminately increase
skepticism [8].
The present article is organized as follows. In Section 2, I present five arguments about
misinformation and outline their implications for interventions against misinformation (see
Table 1 for a summary). These arguments are a sample of the growing body of knowledge on misinformation and our information ecosystem that has largely been ignored so far, and that
interventions against misinformation would benefit most from considering. In Section 3, I
briefly review the most common interventions against misinformation and highlight their
strengths and limitations. In particular, I detail the extent to which these interventions rely on
premises in line with our current knowledge on misinformation. In Section 4, I propose concrete
ways to adapt, and hopefully improve, current interventions against misinformation. At the end
of the article, I question whether the objective of interventions against misinformation, most
notably reducing false beliefs, could be achieved more efficiently by other means, such as by
increasing the acceptance of reliable information. I also stress the need for systemic
interventions and the importance of targeting the root causes of the problem. Ultimately, the
goal of this article is to propose new ways to think about the problem of misinformation and
how to best address it.

Argument 1: Misinformation consumption is low.
Rationale: (a) Unreliable news represents ~5% of people's news diet, and a small group of active, vocal users consume most of it. (b) Most people consume little news and show little interest in politics.
Implications: (A) Interventions against misinformation are bound to have small effects, especially compared to interventions increasing the uptake of reliable information. (B) Many hold false beliefs because they are uninformed rather than misinformed.

Argument 2: People are overly skeptical.
Rationale: (c) People distrust unreliable news outlets but fail to trust reliable news outlets enough. (d) People underuse social information and are more likely to reject reliable information than to accept misinformation.
Implications: (C) There is much more room to improve trust in reliable sources than to reduce trust in unreliable sources. (D) Instead of teaching laypeople to be more critical, it would be more fruitful to make them more trusting.

Argument 3: Misinformation is a symptom.
Rationale: (e) Counter-discourses fostered by misinformation attract people with low trust in institutions or high partisan animosity. (f) Misinformation is more prevalent in countries with weak democratic institutions and high political polarization.
Implications: (E) There will be misinformation as long as people do not trust institutions and are affectively polarized. (F) Building trust in institutions and reducing partisan animosity could help address some of the root causes.

Argument 4: Misinformation often comes from the top.
Rationale: (g) A small number of powerful individuals give misinformation most of its visibility.
Implications: (G) Efforts to combat misinformation should not rest primarily on laypeople's shoulders and need to target superspreaders.

Argument 5: False information is not necessarily harmful.
Rationale: (h) False information can be used in non-misleading ways while true information can be used in misleading ways.
Implications: (H) Interventions against misinformation should focus on misleading statements rather than false statements per se.

Table 1. The five main arguments, the rationale behind each argument, and its implications for interventions against misinformation.
2. Five arguments on misinformation

2.1. Misinformation consumption is low


It is notoriously difficult to measure misinformation consumption precisely due to the
challenges of defining and identifying misinformation on a large scale. To date, the most
exhaustive estimates of misinformation consumption define misinformation at the source level,
and combine web browsing data of hundreds of thousands of individuals with reliability ratings
of thousands of websites. Most websites are classified as unreliable (and included in the
misinformation category) because of their low journalistic standards of credibility and
transparency, not because they repeatedly or predominantly publish false content [9]. These
studies found that in Europe and the US, misinformation represents between 0.7% and 6% of
people’s online news diet [2, 9–13]. This proportion is higher on platforms like Facebook,
where in some countries it represents up to one-fifth of the news people are exposed to or
engage with [9, 14, 15]. However, people do not use social media and the internet primarily for
news [16]. News consumption represents about 5% of the time people spend on the internet
[10]. Thus, when zooming out and considering everything people do on the internet,
misinformation consumption goes down to ~0.15% of people's online media diet [2, 10]. Yet these are averages, and misinformation consumption is heavily skewed rather than normally distributed. A small number of people account for most of the misinformation consumed online
[11]. And a sizeable portion of people not only consume almost no misinformation, but also
consume very little news from reliable sources [2, 10, 17].
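
To make the arithmetic behind these estimates concrete, here is a minimal back-of-the-envelope sketch in Python; the input shares are illustrative values consistent with the ranges cited above, not exact figures from any single study.

```python
# Back-of-the-envelope sketch of misinformation's share of online media time.
# Input shares are illustrative values within the ranges cited above.
news_share_of_online_time = 0.05    # news is ~5% of time spent online [10]
misinfo_share_of_news_diet = 0.03   # unreliable news is ~0.7-6% of the online news diet [2, 9-13]

misinfo_share_of_online_time = news_share_of_online_time * misinfo_share_of_news_diet
print(f"{misinfo_share_of_online_time:.2%}")  # 0.15% of the overall online media diet
```
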
Since misinformation consumption is low in Western democracies, training people to spot
it, debunking it, or discouraging people from sharing it is bound to have small effects outside
of experimental settings. It has been estimated that, everything else being equal, interventions
wiping out all misinformation would only be as effective at improving the overall accuracy of
people’s beliefs as interventions increasing the acceptance of reliable information by 1% [18].
And since people are exposed to reliable information more often than to misinformation, nudges
intended to fight misinformation will predominantly affect reliable information. For instance,
people will use the critical thinking skills acquired from media literacy training mostly to
scrutinize reliable information. Given the low prevalence of misinformation, these interventions are betting, against the odds, that true positives (rejecting or not sharing misinformation) will outnumber false positives (rejecting or not sharing reliable information). This bet has
been lost in some experimental settings [8, 19] despite the fact that participants were exposed
to similar amounts of true and fake news. It suggests that the risk of false positives is real and
should be systematically investigated (as Guess and colleagues [20] did by measuring the effect
their intervention would have given participants’ exposure to reliable and unreliable sources
outside of experimental settings).
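
A simple expected-value sketch illustrates why this bet is risky. The prevalence figure follows the studies cited above; the two rejection rates are assumptions chosen purely for illustration.

```python
# Base-rate sketch: an intervention whose added rejection rate is four times
# higher for misinformation than for reliable news still mostly suppresses
# reliable news at realistic prevalence levels.
prevalence = 0.05                 # share of misinformation in the news diet [2, 9-13]
extra_rejection_misinfo = 0.20    # assumed added rejection rate for misinformation
extra_rejection_reliable = 0.05   # assumed added rejection rate for reliable news

true_positives = 100 * prevalence * extra_rejection_misinfo           # per 100 news items
false_positives = 100 * (1 - prevalence) * extra_rejection_reliable   # per 100 news items
print(round(true_positives, 2), round(false_positives, 2))
# 1.0 vs 4.75: false positives outnumber true positives almost 5 to 1, so most
# of what the intervention removes is reliable news.
```
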

2.2. People are overly skeptical


In most countries, trust in news is low, and in many countries trust in news is declining [1].
People browse through social media with a ‘general skepticism mindset’ and report not trusting
most of the news they encounter on these platforms [21]. On average, people in the US and
Europe are able to discern reliable news outlets from unreliable ones [22, 23], i.e., when asked
to rate the reliability of news outlets, people’s ratings are strongly correlated with those of
professional fact-checkers. In the US, this correlation is mainly driven by their distrust of
hyperpartisan and fake news websites, not their trust in mainstream media, which fact-checkers
trust much more than laypeople do [22]. In experiments, participants rate false headlines as much less accurate than true headlines, but are more likely to incorrectly rate a true headline as false than to rate a false headline as true [24–26]. This is well encapsulated in the words of a participant justifying why they rated a claim as false: ‘[…] I’d rather not believe something that
is true than believe something that is actually false’ [27]. On a different note, the literature on
social learning has shown that people rely too much on their priors and miss out on learning
opportunities by being too stubborn, i.e., people tend to underuse social information in advice-
taking tasks and a variety of social learning tasks (a phenomenon known as 'egocentric
discounting' [28]). This excess of skepticism leads people to discount contradicting information
[29] and to reject reliable information more often than to accept false information [30].
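
This asymmetry can be made precise with signal detection theory, which some of the work cited above uses to separate discrimination ability from response bias [8, 24]. The sketch below uses invented hit and false alarm rates purely to illustrate the decomposition.

```python
# Signal detection sketch: separating discrimination (d') from response bias (c).
# The hit and false alarm rates are invented for illustration.
from statistics import NormalDist

z = NormalDist().inv_cdf           # probit transform
hit_rate = 0.60                    # true headlines correctly rated "true"
false_alarm_rate = 0.30           # false headlines incorrectly rated "true"

d_prime = z(hit_rate) - z(false_alarm_rate)             # discrimination ability
criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))  # c > 0 means a bias toward "false"
print(round(d_prime, 2), round(criterion, 2))
# d' ~ 0.78, c ~ 0.14: people discern true from false, but the positive
# criterion reveals an overall bias toward calling headlines false.
```
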
These high levels of skepticism suggest that interventions aimed at reducing trust in
unreliable sources are likely to encounter floor effects. Most people already distrust sources
that fact-checkers consider unreliable, so there is little room for improvement. Targeting the
minority of people who consume news from these unreliable sources will help, but if they
consume and share misinformation for non-informational purposes (something we will come
back to later), the accuracy of the source may be of secondary importance to them. More
broadly, it suggests that it could be more effective to promote trust in reliable sources than to
encourage people to be more ‘critical’ than they already are [18, 31, 32]. Instead of training
laypeople to behave as fact-checkers by doing their own research and verifying the content they
come across (things that people who engage with misinformation tend to do too much of [33]),
it might be more fruitful to help them identify reliable sources of information and encourage
them to trust these sources as much as fact-checkers do.

2.3. Misinformation is a symptom


Unreliable news websites that frequently publish and share misinformation define themselves
in opposition to the establishment, claiming to cover truths that mainstream media are hiding
[34]. This type of counter-discourse does not appeal to most people, but it does to people with
low trust in institutions or high partisan animosity [33, 35, 36]. People who believe in
conspiracy theories do so in part because their lack of trust in the news and institutions more
broadly makes such content appealing to them [37]. Political misinformation consumption is
predominantly politically congenial [38] and people who share political misinformation are
animated by partisan motives, notably to derogate political opponents [13]. In the US,
misinformation consumption is heavily concentrated among older Republicans [15], as this
population is more affectively polarized and particularly fond of hyper-partisan news [39].
Low trust in institutions and affective polarization create fertile ground for
misinformation to take hold. Rumors and misinformation naturally emerge and thrive in such
environments. Interventions fighting misinformation are doomed to play a game of ‘whack-a-
mole’, where as soon as a piece of misinformation is debunked, another appears elsewhere. If there is demand for misinformation, supply will meet it. When misinformation is used to
justify pre-existing beliefs and attitudes, and to fulfill social goals such as expressing
dissatisfaction with political institutions, debunking misinformation can only do so much. And
if older people are indeed responsible for most of the misinformation consumed and shared online, interventions aimed at young people may be poorly targeted, especially considering
that young people are generally less interested in politics and consume less news [1]. In addition
to teaching young people to spot misinformation, it would be fruitful to cultivate their interest
in the news and politics, especially among those with a low socioeconomic background—who
are the least interested in the news and politics [40].

2.4. Misinformation often comes from the top


Not all posts are equal. Misinformation shared by the median social media user has no reach
and no impact compared to misinformation shared by politicians with millions of followers [41,
42]. For instance, it has been suggested that in the US a small number of actors contributed to
giving COVID-19 misinformation most of its visibility on social media [43]. It has also been
argued that mainstream media may inadvertently (and sometimes purposefully) amplify
misinformation, and that most people would be exposed to misinformation because of
mainstream media coverage [44]—yet it is not clear if this coverage actually increases belief in
misinformation [45].
Since interventions against misinformation target laypeople, who play a minor role in
spreading misinformation, these interventions may overlook the most consequential
misinformation. Interventions targeting the actors who give misinformation most of its
visibility—such as a few politicians or influencers—would be much more effective than
existing interventions. Everything else being equal, preventing an individual with 1 million
followers from sharing misinformation would be as effective in reducing the visibility of
misinformation as preventing 5,000 median social media users with 200 followers from doing
so. Laypeople are part of the problem, and some interventions should indeed target them, but
more efforts should be put into designing interventions directed at powerful actors rather than
ordinary users. As we will see at the end of Section 4, such interventions exist and are feasible.
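
The follower arithmetic behind the equivalence above is straightforward; the sketch below crudely equates reach with follower count, everything else being equal.

```python
# Reach arithmetic behind the superspreader argument: reach is crudely
# equated with follower count, everything else being equal.
superspreader_followers = 1_000_000
median_user_followers = 200

equivalent_median_users = superspreader_followers // median_user_followers
print(equivalent_median_users)  # 5,000 median users match one superspreader's reach
```
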

2.5. False information is not necessarily harmful


Misinformation is most often defined in terms of veracity, and the broader fight against
misinformation is motivated by a willingness to eradicate false beliefs, from which detrimental
behaviors may arise. Yet, false information is not necessarily harmful, and true information can
be used in misleading and harmful ways [46]. For instance, ironic or satirical statements are
literally false but are not necessarily misleading as they can imply something that is not false.
Interventions against misinformation would benefit from defining misinformation in terms of misleadingness instead of veracity [46], and from focusing on potentially harmful misleading information, regardless of its veracity. Otherwise, these interventions risk overlooking true
information used in misleading ways or paying too much attention to benign false information.
Focusing on harmful and misleading information could increase the impact of interventions
against misinformation, but it will create new challenges. For instance, it is easier to fact-check
blatantly false statements than it is to fact-check true statements used in misleading ways. And
in practice it may be harder for researchers to find misleading claims that are widely recognized
as such compared to false claims recognized as such. Finally, pragmatic definitions of
misinformation focusing on misleadingness need to account for the context in which a statement
was uttered and what the speaker meant (not only what they said), making it impossible to
automate misinformation detection [46], and raising fundamental questions about our ability to
infer people’s mental states based on the digital traces that they leave [47].

3. Current interventions against misinformation


This section reviews the most common interventions against misinformation and highlights
their strengths and limitations.

3.1. Fact-checking and debunking


A large body of work has shown that fact-checks are effective in reducing misperceptions and
rarely backfire [48, 49]. While fact-checks can correct beliefs in the short run [50], they largely fail
to affect (a) beliefs when they run counter to political elites’ cues [51], (b) attitudes such as
preferences for a political candidate or vaccine attitudes [52, 53], and (c) behaviors such as
voting or vaccination intentions [54].
Moreover, fact-checkers can’t keep pace with the production and circulation of
misinformation – they need to focus on the most viral pieces of misinformation. Crowd-sourced
fact-checking is a promising solution [55], but the crowd is unlikely to be good enough to
replace fact-checkers, especially for evaluating recent articles [56], and partisan dynamics could
weaken the effectiveness of crowd-sourced fact-checking [57].
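
A minimal simulation illustrates the wisdom-of-crowds logic behind crowd-sourced fact-checking, and why it depends on raters being numerous and independent; all data below are simulated, and correlated partisan errors of the kind documented in [57] would break the independence assumption.

```python
# Wisdom-of-crowds sketch: averaging many independent, noisy lay ratings
# approximates fact-checker ratings. All data are simulated for illustration.
import random
random.seed(0)

n_articles, crowd_size, noise_sd = 200, 25, 0.5
quality = [random.random() for _ in range(n_articles)]   # stand-in for fact-checker ratings
crowd = [sum(q + random.gauss(0, noise_sd) for _ in range(crowd_size)) / crowd_size
         for q in quality]                               # mean of noisy lay ratings

# Pearson correlation between crowd means and fact-checker ratings
mq, mc = sum(quality) / n_articles, sum(crowd) / n_articles
cov = sum((q - mq) * (c - mc) for q, c in zip(quality, crowd))
sq = sum((q - mq) ** 2 for q in quality) ** 0.5
sc = sum((c - mc) ** 2 for c in crowd) ** 0.5
print(round(cov / (sq * sc), 2))  # high correlation: individual noise averages out
```
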
Most studies on the effectiveness of fact-checking have been conducted in controlled
experimental settings, raising questions about its effectiveness in the wild. First, can fact-checks
reach individuals who have been exposed to false claims and who believe them? Empirical
evidence regarding the reach of fact-checking is scarce and not very optimistic. Guess and
colleagues (2020) found that only ~3% of people who read an article from an untrustworthy
website also read a fact-check of the article. Second, can fact-checks convince the misinformed,
or do they only ‘preach to the choir’? Fact-checking is primarily done by mainstream media,
which is distrusted by people who consume news from unreliable websites, identify themselves
in opposition to mainstream media, and routinely ‘fact-check’ the fact-checkers [58]. Still, a
recent study suggests that fact-checking can convince frequent consumers of COVID-19
misinformation [59]. However, this study also found that since COVID-19 misinformation
consumption had minimal (negative) effects on COVID-19 misperceptions, fact-checking also
had minimal (positive) effects on misperceptions.
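
A simple multiplication shows why reach constrains fact-checking's field effects. Only the ~3% reach figure comes from the study cited above; the exposure and persuasion rates are assumptions for illustration.

```python
# Expected population-level correction from a fact-check:
# exposure x reach x persuasion.
p_exposed_to_claim = 0.10    # assumed share of the population exposed to the false claim
p_sees_fact_check = 0.03     # ~3% of those readers also see the fact-check
p_belief_corrected = 0.50    # assumed persuasion rate among those reached

expected_correction = p_exposed_to_claim * p_sees_fact_check * p_belief_corrected
print(f"{expected_correction:.2%}")  # 0.15% of the population has its belief corrected
```
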
More broadly, it is not clear whether the most misleading and harmful claims can be
convincingly fact-checked [60]. The most problematic type of information is not blatantly false
information, such as fake news, but truths used in misleading ways and the strategic omission
of true information [61]. For instance, exposure to partisan news may have more harmful effects
than exposure to false information, through well-documented mechanisms like partisan
coverage, agenda-setting, or framing [62]. And even if exposure to partisan media does not have strong direct effects on people’s attitudes or behaviors (such effects tend to be minimal), it could have detrimental indirect effects, such as fueling partisan animosity or eroding trust in mainstream media [63].
Finally, it has been suggested that people share misinformation not because they are unable
to evaluate its veracity, but because they are unwilling to do so [64]. More broadly, ‘motivated
reasoning’ is a long-known obstacle to the success of fact-checking. When a headline is shared
to denigrate political opponents or express dissatisfaction with institutions, veracity is
secondary to the political orientation of the headline and its potential usefulness to the political
cause. Partisan dynamics in the sharing of news on social media can also be witnessed in the
sharing of fact-checks themselves, which are commonly shared to denigrate the opposing party
or cheerlead for one’s party [65].

3.2. Nudges
One of the most widely used nudges against misinformation by social media companies in
recent years is friction. It has been implemented in various ways, but the core principle is to
increase the number of clicks required to share content, such as by asking users whether they
want to read the article they want to share before sharing it. Friction is very effective at deterring
sharing [52] and requires only minimal tweaks in the design of social media platforms. The
main problem is that friction risks reducing not only the sharing of misinformation, but also the
sharing of reliable information. And since people are more exposed to reliable information than
to misinformation, friction will mostly reduce the sharing of reliable information. This is
concerning given that, contrary to the negative effects of misinformation, the positive effects
of access to reliable information are well documented [66]. It is thus important to develop and
test forms of friction targeting specifically unreliable news. But since friction will inevitably
reduce sharing in general, efforts should be made to compensate for this decrease, notably by
promoting the sharing of news from reliable sources on social media. This might be particularly important for reaching those least interested in the news, who get most of their news incidentally while scrolling on social media [67].
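
A small sketch makes this composition effect concrete; the prevalence figure follows Section 2.1, and the uniform sharing reduction is an assumption for illustration.

```python
# Composition effect of friction, assuming it cuts all sharing by the same factor.
misinfo_share_of_sharing = 0.05    # low prevalence, as in Section 2.1
reliable_share_of_sharing = 0.95
uniform_friction_cut = 0.30        # assumed 30% drop in all sharing

misinfo_suppressed = misinfo_share_of_sharing * uniform_friction_cut     # shares removed
reliable_suppressed = reliable_share_of_sharing * uniform_friction_cut   # shares removed
print(round(misinfo_suppressed * 100, 1), round(reliable_suppressed * 100, 1))
# 1.5 vs 28.5 per 100 shares: ~95% of what friction removes is reliable news,
# hence the need for friction that targets unreliable content specifically.
```
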
The most famous nudge against misinformation in the scientific literature is the accuracy
nudge. Simply asking participants to rate the accuracy of headlines before sharing them reduces
their willingness to share false news and increases (to a smaller extent) their willingness to
share true news [68]. The accuracy nudge is easy to implement and has been shown to be
effective across countries [69]. Yet, to be effective outside of experimental settings, and in the
long run, it needs to be paired with other interventions to minimize habituation effects [70].
Finally, source labels providing information about the reliability of a source have been
developed by some companies (e.g. NewsGuard) and have been embraced by some search
engines (e.g. Microsoft Edge). Source labels are easy to implement but evidence for their
effectiveness is mixed [71]. Instead of labeling unreliable sources—which most people already
distrust—it could be more fruitful to label mainstream media sources—which people fail to
trust as much as fact-checkers do.

3.3. Media literacy training and pre-bunking


Instead of correcting misperceptions after the harm is done, when it is often difficult to mitigate [72], pre-bunking works by preventing misperceptions from taking hold in the first place.
A growing body of research has shown that various forms of pre-bunking (e.g. fact- or logic-
based, gamified or not) can help people identify false claims that rely on specific manipulation techniques [73]. More traditional educational interventions, such as media or digital literacy
programs developing critical thinking skills, can also help people detect misinformation [20],
but these interventions are not always effective, even when they last up to an hour [74].
The goal of most of these interventions is to increase skepticism toward unreliable sources
and false information. We have seen in Section 2 that considering the low prevalence of
misinformation, and people’s distrust of unreliable sources, these interventions are bound to
have small effects. But some could even have deleterious effects by increasing skepticism of
both unreliable and reliable sources [8, 19, 20, 31, 32, 75]. For instance, Clayton and colleagues
(2020) found that a general warning highlighting the existence of misleading information and
of malicious actors increased skepticism towards reliable news. The last sentence of the
warning reads ‘It is important to remain skeptical when reading headlines […]’. Similarly, Facebook’s tips to spot false news, which were promoted at the top of users’ news feeds in 14 countries in 2017, aimed to increase skepticism. The first tip was “Be skeptical of headlines” while the last tip was “Some stories are intentionally false”. In experimental settings these Facebook tips have been shown to “reduce the perceived accuracy of both mainstream and false news headlines” (p. 1), with larger effects on false news [20]. They illustrate well the premises
upon which these interventions are designed: they assume the problem is that people are too
gullible, so the solution is to make them more skeptical. Similarly, interventions based on
inoculation theory aim explicitly at building resistance to persuasion and attitude change [77].
However, there are reasons to think that people are excessively skeptical [21, 28, 30] and that
changing people’s mind is a daunting task [30, 78, 79]. Instead of trying to make people more
skeptical, digital literacy and pre-bunking interventions could aim to foster trust in reliable
sources and make people more inclined to revise their beliefs in the light of new evidence.
Another premise of these interventions is that people share and believe misinformation
because they lack something, whether it is critical thinking, attention, or skills. There is some
truth to this, but it misses the bigger picture: people are often motivated to engage with
misinformation [39]. They are not passively influenced by it, but actively look for it [80], use
it to define themselves, and proudly harbor it [33]. People do not ‘fall into the rabbit hole’ [81].
They jump in and dig. Motivations to derogate political opponents and distrust of institutions
will not be cured by skills and knowledge. As Bennett and Livingston wrote in a seminal article
[82], ‘Part of this broadening of perspective is to resist easy efforts to make the problem go
away by fact-checking initiatives and educating citizens about the perils of fake news. Many
citizens actively seek such information to support identities and political activities […]’ (p.
135).
Finally, conspiracy believers may not be as gullible as often portrayed [30, 33, 83]. They
have a ‘chronically distrusting mindset’ [84] and are ‘epistemic individualists’ [85],
encouraging everyone to ‘think for themselves’ and ‘do their own research’. They are more
likely to say that others’ opinions have no value compared to their own experience and
that they don’t need others to understand the world. For them, knowledge ought to be
constructed from the bottom-up and should not be ‘blindly’ acquired based on trust in a top-
down fashion [33]. QAnon followers call this knowledge-making activity ‘baking’, and many
of the guidelines structuring this activity are similar to those advocated in media literacy
courses. Instead of being gullible and accepting any conspiracy theory, people on the popular
r/conspiracy subreddit [83], the 8chan imageboard [33], or even within the flat earth community
[86], passionately disagree with one another about the veracity of different conspiracy theories
and the quality of evidence supporting them. They also routinely ridicule those who believe in
the ‘wrong’ conspiracy theories (e.g., the lizard people or the flat earth with a dome) and
sometimes suspect them to be outside agents trying to delegitimize more plausible and
legitimate conspiracy theories [83].

4. Future interventions against misinformation


This final section offers concrete ways to adapt, and hopefully improve, interventions against
misinformation.
4.1. Promote reliable information
To reduce the negative consequences of false beliefs, interventions do not need to focus on
misinformation. People can hold false beliefs, and make suboptimal decisions, not because they
have been misinformed, but simply because they are uninformed. For instance, people can be
COVID-19 vaccine-hesitant because they have not been exposed to reliable information about
the vaccines and do not like the idea of having a weakened dose of a disease injected into their
body through a needle. More broadly, many of our intuitions about the world are at odds with
scientific knowledge and medicine. When not exposed to reliable information, these intuitions
lead us astray on a number of topics such as the safety of vaccines, genetically modified
organisms, or nuclear energy [30, 87]. Since many more people are uninformed because they
don’t follow the news, rather than misinformed because they consume misinformation [2], the
fight against false beliefs is most likely to be effective when pursued in concert with the broader
fight for reliable information [18].
So far, I have referred to the uninformed in broadly negative terms, but it is important to
acknowledge that the uninformed do not necessarily hold false beliefs or engage in harmful
behaviors. Many people do not know exactly how the COVID-19 vaccines work, or how their
effectiveness is tested, but still get vaccinated because they trust scientists and the government.
These people are not considered a priority target population for interventions against
misinformation because they already ‘do the right thing’ and do not believe misinformation.
Yet, they represent an untapped potential for interpersonal mass persuasion [88, 89]. When the uninformed encounter misinformation, they are often disarmed. Interventions aimed at strengthening their argumentative arsenal (by providing them with reliable information) could allow them to better resist persuasion attempts and, importantly, convince their peers via
interpersonal communication [88]. The role of interpersonal communication goes beyond
helping the uninformed convince their vaccine-hesitant uncle at a family dinner. It could help
interventions against misinformation to scale up and reach parts of the population that are
usually impervious to such interventions (e.g., see: ‘postinoculation talk’ [90]). More broadly,
interpersonal communication is known to play a key role in relaying information from the
media to the public [91].

4.2. Reduce partisan animosity


We have seen in Section 2 that misinformation consumption and sharing are not normally
distributed in the population because a minority of older and highly politicized users account
for most of it. This has led many scholars to suggest that instead of teaching literacy skills, the
focus should be on reducing the partisan animosity that motivates misinformation consumption
and sharing [39, 92, 93]. For instance, Lyons et al. (2023) argue that “older Americans do not
suffer a particular deficit in such skills and interventions aiming to improve the quality of online
news sharing may be better served by instead targeting the increasing partisan animosity among
these news consumers” (p. 25). Interventions to reduce partisan animosity propose not only to
correct misperceptions (and meta-misperceptions) about the outgroup, but also to highlight
commonalities so that partisan identity encompasses the outgroup, develop dialogue skills to
communicate across political divides, foster positive contact, and address the institutional
factors that give rise to a polarizing environment [94]. If successful, such interventions may
have a variety of positive effects that go beyond reducing belief in misinformation. For
example, they could increase people’s willingness to engage in discussion with political opponents [92].
Researchers and nonprofit organizations have tested several de-polarization techniques [94],
many of which are promising, although concerns remain about their replicability
and generalizability to other contexts and countries.

4.3. The indirect effects of fact-checking


We have seen in Section 2.4 that the most consequential misinformation often comes from
powerful actors, such as politicians. Yet, very few interventions target these actors and instead
largely focus on ordinary users. One important exception is a study by Nyhan and Reifler [95]
testing the effectiveness of fact-checking on elites. They sent letters to U.S. state legislators to
warn them about the reputational risks of making questionable statements. Those who were
sent these letters were ‘substantially less likely to receive a negative fact-check rating or to have
their accuracy questioned publicly’ (p. 1). This suggests that even if fact-checking has little direct effect on attitudes or behaviors in the wild, its mere existence could deter the sharing of false
claims by powerful actors [96]. The indirect effect of fact-checking as a reputational threat
deserves further investigation. For instance, could it incentivize politicians to be more accurate
in political debates? And are some ways of highlighting the reputational threat of fact-checking
more effective than others?

4.4. Increase trust in reliable sources


The division of cognitive labor that characterizes modern societies is so strong that it is
impossible to individually verify the flow of information we encounter. Instead, we must rely
to a large extent on others, and believe by delegation, i.e., accept information because it comes
from reliable sources without being able to vet its content. To evaluate the veracity of the
content they encounter online, people rely on snap judgments (e.g., the source, the typeface and font, the quality of the visuals) and are not willing to engage in laborious content verification
[97]. Interventions against misinformation may want to operate at the level of the source rather
than at the level of the content. For instance, interventions fostering critical thinking are
increasingly focusing on helping people to trust the right sources, and be better social learners,
rather than to believe the right content, and be better individual reasoners [98]. This is
particularly important given that the logical fallacies students learn about in critical thinking
courses, and that are supposed to help them detect misleading arguments, can be used in non-
fallacious ways [99].
If the main epistemic flaw of people who believe in conspiracy theories is to rely excessively on their own knowledge and not enough on that of others [85], then promoting intellectual
humility—making people aware of their ignorance—and open-minded thinking—increasing
their ability to consider viewpoints they disagree with—could help. For instance, people who
are more open to revising their beliefs are less susceptible to misinformation [100]. Similarly,
people higher on intellectual humility are less likely to believe and share hostile conspiratorial
news [101]. Yet, there is no guarantee that intellectual humility and open-mindedness can
effectively be fostered, especially among people who consume misinformation. And what
happens after people are made aware of their ignorance and are willing to consider opposing
viewpoints? They might be more willing to defer to those who know best, but it won’t help
them identify who knows best. Media literacy training could fill this gap, by helping people
identify reliable sources.
More broadly, many interventions that have proven effective at reducing belief in misinformation could be adapted to promote trust in reliable sources. For example, instead of explaining the manipulation techniques used by people who spread disinformation, these interventions could explain the verification techniques that journalists use to fact-check claims and produce high-quality information.
Unfortunately, the reasons behind people’s low trust in reliable sources are complex and
multifactorial, and there is no quick fix. To make things worse, this lack of trust may often be
justified, and stem from systemic inequalities [35]. For instance, conspiracy ideation is higher
among people experiencing financial insecurity stemming from objective material strain [102].
Conspiracy ideation is higher in countries with higher corruption and lower press freedom [103,
104] or in countries with higher unemployment rates [105].
Systemic interventions, aimed at reducing inequalities, precarity, unemployment, or
corruption, would be effective in addressing the broader information disorder problem [106,
Systemic interventions are much more costly than psychological interventions, but their implementation would bring many benefits to society beyond reducing misinformation. While the two types of interventions are not mutually exclusive and are even
complementary, researchers must ensure that the promotion of psychological interventions does
not reduce support for systemic interventions [108]. System-level solutions do not need to target the root causes of the problem or operate at the level of society; they can also operate at the platform level and, for instance, promote reliable news on social media or facilitate access to
social media data for researchers.
Finally, the watchdog function and impartiality of the media need to be strengthened, and
the media need to better represent diverse and minority views [109]. The search engines and
social media companies shaping our information ecosystem need to be more transparent and
provide researchers from all over the world (not just the US) with better data access. In the
meantime, many of the interventions reviewed here can help, but their effectiveness could be improved if they also aimed at increasing the acceptance of accurate information or at improving trust in reliable sources, and tried to target the proximate causes of the problem (such as partisan animosity or distrust in institutions) when the root causes (such as corruption, precarity, or political institutions) cannot be targeted directly.

5. Conclusion
To be clear, I do not argue that efforts to curb the spread of misinformation should be reduced,
or that current interventions against misinformation are futile. In various experimental settings
fact-checking, nudges, and media literacy training have proven to be effective, although the effect sizes are rather small and often short-lived, ecological validity is limited, and generalizability to the Global South is uncertain. Yet, I do not dispute that fact-checking,
nudges, and media literacy training can help. Instead, this contribution should be taken as an
invitation to think differently about the problem of misinformation. In particular, policymakers,
journalists, and researchers should not consider misinformation in isolation, e.g. as a purely
cognitive problem, but in the context of people’s overall low trust in the media and partisan
animosity.
The main limitation of this article is that it mostly relies on evidence from Western
democracies. On the one hand, some arguments made in this article weaken when applied to the Global South. In particular, many have expressed concern that misinformation consumption is higher in the Global South than in Western democracies. Thus, everything else being equal, interventions against misinformation may be more effective in the Global South. Moreover, increasing trust in the news would be futile in countries where access to high-quality news is difficult or heavily restricted, and it would be counterproductive in countries with low-quality media ecosystems.
On the other hand, some arguments strengthen when applied to the Global South. For
instance, promoting reliable information may be particularly fruitful in countries with low-quality media ecosystems, where finding the good stuff amid all the bad stuff may be harder and require more skill. Importantly, the need for systemic solutions is greater in some
Global South countries, where people have good reasons to distrust institutions, believe in
conspiracy theories and adhere to counter-discourses, given, among other things, the higher
levels of corruption and lower levels of press freedom [103].
In conclusion, I hope this article will help interventions against misinformation adapt to the
growing body of knowledge on misinformation and our information ecosystem. First,
misinformation consumption is minimal in Western democracies, but many people around the
world avoid the news and are not interested in politics. Many hold false beliefs because they
are uninformed rather than misinformed, stressing the importance of promoting reliable
information. Second, people are not gullible – if anything, they tend to be overly skeptical.
Instead of trying to make people more critical, it would be more fruitful to build trust in reliable
sources. Third, in Western democracies people hardly ever share misinformation publicly
online, but the minority of active users who regularly share it have deep-seated motivations that
will be difficult to address. Similarly, individual-level interventions like digital literacy programs will not be enough to fight misinformation in the Global South; systemic interventions are needed to address the root causes of the problem. Fourth, harmful
misinformation often comes from powerful actors. Efforts to combat misinformation should
not rest primarily on laypeople’s shoulders and need to target superspreaders as well. Fifth, since false information is not necessarily harmful and true information can be used in misleading ways, it is paramount to tackle misleading information, regardless of its truthfulness. Overall,
more efforts should be devoted to promoting reliable information, building trust in reliable
sources, cultivating interest in news and politics, increasing elites’ accountability and reducing
partisan animosity.

References
[1] Newman N, Fletcher R, Schulz A, et al. Digital news report 2021. Reuters Institute for
the Study of Journalism.
[2] Allen J, Howland B, Mobius M, et al. Evaluating the fake news problem at the scale of
the information ecosystem. Science Advances 2020; 6: eaay3539.
[3] Altay S, Berriche M, Acerbi A. Misinformation on misinformation: Conceptual and
methodological challenges. Social Media+ Society 2023; 9: 20563051221150412.
[4] Ziemer C-T, Rothmund T. Psychological underpinnings of disinformation
countermeasures: A systematic scoping review.
[5] Gwiaździński P, Gundersen AB, Piksa M, et al. Psychological interventions
countering misinformation in social media: A scoping review. Frontiers in Psychiatry 2023;
13: 974782.
[6] Roozenbeek J, Culloty E, Suiter J. Countering Misinformation. European
Psychologist.
[7] Guay B, Pennycook G, Rand D. How To Think About Whether Misinformation
Interventions Work.
[8] Modirrousta-Galian A, Higham PA. Gamified inoculation interventions do not
improve discrimination between true and fake news: Reanalyzing existing research with
receiver operating characteristic analysis. Journal of Experimental Psychology: General.
[9] Altay S, Nielsen RK, Fletcher R. Quantifying the “infodemic”: People turned to
trustworthy news outlets during the 2020 pandemic. Journal of Quantitative Description:
Digital Media. Epub ahead of print 2022. DOI: https://doi.org/10.51685/jqd.2022.020.
[10] Cordonier L, Brest A. How do the French inform themselves on the Internet? Analysis
of online information and disinformation behaviors. Fondation Descartes,
https://hal.archives-ouvertes.fr/hal-03167734/document (2021).
[11] Grinberg N, Joseph K, Friedland L, et al. Fake news on twitter during the 2016 US
Presidential election. Science 2019; 363: 374–378.
[12] Guess A, Nagler J, Tucker J. Less than you think: Prevalence and predictors of fake
news dissemination on Facebook. Science advances 2019; 5: eaau4586.
[13] Osmundsen M, Bor A, Vahlstrup PB, et al. Partisan polarization is the primary
psychological motivation behind political fake news sharing on Twitter. American Political
Science Review 2021; 1–17.
[14] Allen J, Mobius M, Rothschild DM, et al. Research note: Examining potential bias in
large-scale censored data. Harvard Kennedy School Misinformation Review.
[15] Guess A, Aslett K, Tucker J, et al. Cracking Open the News Feed: Exploring What US
Facebook Users See and Share with Large-Scale Platform Data. Journal of Quantitative
Description: Digital Media; 1. Epub ahead of print 2021. DOI:
https://doi.org/10.51685/jqd.2021.006.
[16] Newman N, Fletcher R, Robertson C, et al. Reuters Institute digital news report 2022.
Reuters Institute for the Study of Journalism.
[17] Wojcieszak M, de Leeuw S, Menchen-Trevino E, et al. No Polarization From Partisan
News: Over-Time Evidence From Trace Data. The International Journal of Press/Politics
2021; 19401612211047190.
[18] Acerbi A, Altay S, Mercier H. Research note: Fighting misinformation or fighting for
information? Harvard Kennedy School (HKS) Misinformation Review. Epub ahead of print
2022. DOI: https://doi.org/10.37016/mr-2020-87.
[19] Clayton K, Blair S, Busam JA, et al. Real solutions for fake news? Measuring the
effectiveness of general warnings and fact-check tags in reducing belief in false stories on
social media. Political Behavior 2020; 42: 1073–1095.
[20] Guess A, Lerner M, Lyons B, et al. A digital media literacy intervention increases
discernment between mainstream and false news in the United States and India. Proceedings
of the National Academy of Sciences 2020; 117: 15536–15545.
[21] Fletcher R, Nielsen RK. Generalised scepticism: how people navigate news on social
media. Information, Communication & Society 2019; 22: 1751–1769.
[22] Pennycook G, Rand DG. Fighting misinformation on social media using
crowdsourced judgments of news source quality. Proceedings of the National Academy of
Sciences 2019; 116: 2521–2526.
[23] Schulz A, Fletcher R, Popescu M. Are News Outlets Viewed in the Same Way by
Experts and the Public ? A Comparison across 23 European Countries. Reuters institute
factsheet, https://reutersinstitute.politics.ox.ac.uk/are-news-outlets-viewed-same-way-experts-
and-public-comparison-across-23-european-countries (2020).
[24] Batailler C, Brannon SM, Teas PE, et al. A signal detection approach to understanding
the identification of fake news. Perspectives on Psychological Science 2022; 17: 78–98.
[25] Bryanov K, Vziatysheva V. Determinants of individuals’ belief in fake news: A
scoping review determinants of belief in fake news. PLoS one 2021; 16: e0253717.
[26] Pfänder J, Altay S. Spotting Fake News and Doubting True News: A Meta-Analysis of
News Judgements.
[27] Jahanbakhsh F, Zhang AX, Berinsky AJ, et al. Exploring lightweight interventions at
posting time to reduce the sharing of misinformation on social media. Proceedings of the
ACM on Human-Computer Interaction 2021; 5: 1–42.
[28] Morin O, Jacquet PO, Vaesen K, et al. Social information use and social information
waste. Philosophical Transactions of the Royal Society B 2021; 376: 20200052.
[29] Taber CS, Lodge M. Motivated skepticism in the evaluation of political beliefs.
American Journal of Political Science 2006; 50: 755–769.
[30] Mercier H. Not Born Yesterday : The Science of Who We Trust and What We Believe.
Princeton University Press, 2020.
[31] Boyd D. Did media literacy backfire? Journal of Applied Youth Studies 2017; 1: 83–
89.
[32] Mihailidis P. Beyond cynicism: How media literacy can make students more engaged
citizens. University of Maryland, College Park, 2008.
[33] Marwick A, Partin WC. Constructing Alternative Facts: Populist Expertise and the
QAnon Conspiracy. New Media & Society. Epub ahead of print 2022. DOI:
https://doi.org/10.1177/14614448221090201.
[34] Holt K, Ustad Figenschou T, Frischlich L. Key dimensions of alternative news media.
Digital Journalism 2019; 7: 860–869.
[35] Drążkiewicz E. Study conspiracy theories with compassion. Nature 2022; 603: 765–
765.
[36] Zimmermann F, Kohring M. Mistrust, disinforming news, and vote choice: A panel
survey on the origins and consequences of believing disinformation in the 2017 German
parliamentary election. Political Communication 2020; 37: 215–237.
[37] Freeman D, Waite F, Rosebrock L, et al. Coronavirus conspiracy beliefs, mistrust, and
compliance with government guidelines in England. Psychological medicine 2022; 52: 251–
263.
[38] Guess A, Nyhan B, Reifler J. Exposure to untrustworthy websites in the 2016 US
election. Nature human behaviour 2020; 4: 472–480.
[39] Lyons B, Montgomery J, Reifler J. Partisanship and older Americans’ engagement
with dubious political news. Psyarxiv. Epub ahead of print 2023. DOI: 10.31219/osf.io/etb89.
[40] Boyadjian J. Désinformation, non-information ou sur-information? Reseaux 2020; 21–
52.
[41] Bennett L, Livingston S. A Brief History of the Disinformation Age: Information
Wars and the Decline of Institutional Authority. The disinformation age. Epub ahead of print
2020. DOI: 10.1017/9781108914628.001.
[42] Nyhan B. Why the backfire effect does not explain the durability of political
misperceptions. Proceedings of the National Academy of Sciences; 118.
[43] CCDH. The disinformation dozen. 2021,
https://www.theguardian.com/world/2021/jul/17/covid-misinformation-conspiracy-theories-
ccdh-report (2021).
[44] Tsfati Y, Boomgaarden H, Strömbäck J, et al. Causes and consequences of
mainstream media dissemination of fake news: literature review and synthesis. Annals of the
International Communication Association 2020; 1–17.
[45] Altay S, Nielsen RK, Fletcher R. The impact of news media and digital platform use
on awareness of and belief in COVID-19 misinformation. Epub ahead of print 2022. DOI:
10.31234/osf.io/7tm3s.
[46] Søe SO. A unified account of information, misinformation, and disinformation.
Synthese 2021; 198: 5929–5949.
[47] Boyd D, Crawford K. Critical questions for big data: Provocations for a cultural,
technological, and scholarly phenomenon. Information, communication & society 2012; 15:
662–679.
[48] Walter N, Murphy ST. How to unring the bell: A meta-analytic approach to correction
of misinformation. Communication Monographs 2018; 85: 423–441.
[49] Wood T, Porter E. The elusive backfire effect: Mass attitudes’ steadfast factual
adherence. Political Behavior 2019; 41: 135–163.
[50] Carey JM, Guess AM, Loewen PJ, et al. The ephemeral effects of fact-checks on
COVID-19 misperceptions in the United States, Great Britain and Canada. Nature Human
Behaviour 2022; 1–8.
[51] Berlinski N, Doyle M, Guess AM, et al. The effects of unsubstantiated claims of voter
fraud on confidence in elections. Journal of Experimental Political Science 2021; 1–16.
[52] Barrera O, Guriev S, Henry E, et al. Facts, alternative facts, and fact checking in times
of post-truth politics. Journal of Public Economics 2020; 182: 104123.
[53] Porter E, Wood TJ, Velez Y. Correcting COVID-19 Vaccine Misinformation in Ten
Countries. Royal Society open science. Epub ahead of print 2023. DOI: 10.1098/rsos.221097.
[54] Nyhan B, Porter E, Reifler J, et al. Taking Fact-checks Literally But Not Seriously?
The Effects of Journalistic Fact-checking on Factual Beliefs and Candidate Favorability.
Political Behavior 2019; 1–22.
[55] Allen J, Arechar AA, Pennycook G, et al. Scaling up fact-checking using the wisdom
of crowds. Science Advances.
[56] Godel W, Sanderson Z, Aslett K, et al. Moderating with the Mob: Evaluating the
Efficacy of Real-Time Crowdsourced Fact-Checking. Journal of Online Trust and Safety; 1.
Epub ahead of print 2021. DOI: https://doi.org/10.54501/jots.v1i1.15.
[57] Allen JNL, Martel C, Rand D. Birds of a feather don’t fact-check each other:
Partisanship and the evaluation of news in Twitter’s Birdwatch crowdsourced fact-checking
program. Psyarxiv. Epub ahead of print 2021. DOI: 10.31234/osf.io/57e3q.
[58] Berriche M. En quête de sources. Politiques de communication 2021; 115–154.
[59] Carey J, Guess A, Nyhan B, et al. COVID-19 Misinformation Consumption is
Minimal, Has Minimal Effects, And Does Not Prevent Fact-Checks from Working.
[60] Park S, Park JY, Kang J, et al. The presence of unexpected biases in online fact-
checking. The Harvard Kennedy School Misinformation Review. Epub ahead of print 2021.
DOI: 10.37016/mr-2020-53.
[61] Broockman D, Kalla J. The manifold effects of partisan media on viewers’ beliefs and
attitudes: A field experiment with Fox News viewers. OSF Preprints; 1. Epub ahead of print
2022. DOI: 10.31219/osf.io/jrw26.
[62] Weeks BE, Menchen-Trevino E, Calabrese C, et al. Partisan media, untrustworthy
news sites, and political misperceptions. New Media & Society 2021; 14614448211033300.
[63] Guess AM, Barberá P, Munzert S, et al. The consequences of online partisan media.
Proceedings of the National Academy of Sciences; 118. Epub ahead of print 2021. DOI:
https://doi.org/10.1073/pnas.2013464118.
[64] Pennycook G, Rand DG. Lazy, not biased: Susceptibility to partisan fake news is
better explained by lack of reasoning than by motivated reasoning. Cognition 2019; 188: 39–
50.
[65] Shin J, Thorson K. Partisan selective sharing: The biased diffusion of fact-checking
messages on social media. Journal of Communication 2017; 67: 233–255.
[66] Snyder JM, Strömberg D. Press Coverage and Political Accountability. Journal of
Political Economy 2010; 118: 355–408.
[67] Fletcher R, Nielsen RK. Are people incidentally exposed to news on social media? A
comparative analysis. New media & society 2018; 20: 2450–2468.
[68] Pennycook G, Epstein Z, Mosleh M, et al. Shifting attention to accuracy can reduce
misinformation online. Nature 2021; 592: 590–595.
[69] Arechar AA, Allen J, Berinsky AJ, et al. Understanding and combatting
misinformation across 16 countries on six continents. Nature Human Behaviour 2023; 1–12.
[70] Roozenbeek J, van der Linden S. How to Combat Health Misinformation: A
Psychological Approach. American Journal of Health Promotion 2022; 36: 569–575.
[71] Aslett K, Guess AM, Bonneau R, et al. News credibility labels have limited average
effects on news diet quality and fail to reduce misperceptions. Science Advances 2022; 8:
eabl3844.
[72] Lewandowsky S, Ecker UK, Seifert CM, et al. Misinformation and its correction:
Continued influence and successful debiasing. Psychological Science in the Public Interest
2012; 13: 106–131.
[73] Roozenbeek J, van der Linden S, Nygren T. Prebunking interventions based on the
psychological theory of “inoculation” can reduce susceptibility to misinformation across
cultures. Harvard Kennedy School Misinformation Review; 1.
[74] Badrinathan S. Educative Interventions to Combat Misinformation: Evidence from a
Field Experiment in India. American Political Science Review 2021; 1–17.
[75] Vraga E, Tully M, Bode L. Assessing the relative merits of news literacy and
corrections in responding to misinformation on Twitter. New Media & Society 2021;
1461444821998691.
[76] Epstein Z, Berinsky AJ, Cole R, et al. Developing an accuracy-prompt toolkit to
reduce COVID-19 misinformation online. Harvard Kennedy School Misinformation Review.
Epub ahead of print 2021. DOI: https://doi.org/10.37016/mr-2020-71.
[77] Compton J, van der Linden S, Cook J, et al. Inoculation theory in the post‐truth era:
Extant findings and new frontiers for contested science, misinformation, and conspiracy
theories. Social and Personality Psychology Compass 2021; 15: e12602.
[78] Gilardi F. Digital Technology, Politics, and Policy-Making. Elements in Public Policy.
Epub ahead of print 2022. DOI: https://doi.org/10.5167/uzh-218931.
[79] Coppock A. Persuasion in parallel: How information changes minds about politics.
University of Chicago Press, 2023.
[80] Motta M, Hwang J, Stecula D. What Goes Down Must Come Up? Misinformation
Search Behavior During an Unplanned Facebook Outage. Epub ahead of print 2022. DOI:
10.31235/osf.io/pm9gy.
[81] Chen A, Nyhan B, Reifler J, et al. Subscriptions and external links help drive resentful
users to alternative and extremist YouTube videos. Epub ahead of print 2022. DOI:
https://doi.org/10.48550/arXiv.2204.10921.
[82] Bennett WL, Livingston S. The disinformation order: Disruptive communication and
the decline of democratic institutions. European journal of communication 2018; 33: 122–
139.
[83] de Wildt L, Aupers S. Participatory conspiracy culture: Believing, doubting and
playing with conspiracy theories on Reddit. Convergence 2023; 13548565231178914.
[84] Newman D, Lewandowsky S, Mayo R. Believing in nothing and believing in
everything: The underlying cognitive paradox of anti-COVID-19 vaccine attitudes.
Personality and Individual Differences 2022; 189: 111522.
[85] Tomas F, Nera K, Schöpfer C. “Think for Yourself, or Others Will Think for You”:
Epistemic Individualism Predicts Conspiracist Beliefs and Critical Thinking. Psyarxiv. Epub
ahead of print 2022. DOI: 10.31219/osf.io/qgtzb.
[86] Dieguez S, Wagner-Egger P. Réflexions sur la forme de la Terre. L’irrationnel
aujourd’hui 2021; 323–400.
[87] Mercier H, Altay S. Do cultural misbeliefs cause costly behavior? In Musolino, J.,
Hemmer, P. & Sommer, J. (Eds.). The Science of Beliefs, 2022.
[88] Altay S, Mercier H. Framing messages for vaccination supporters. Journal of
Experimental Psychology: Applied 2020; 26: 567–578.
[89] Goldberg MH, van der Linden S, Maibach E, et al. Discussing global warming leads
to greater acceptance of climate science. Proceedings of the National Academy of Sciences
2019; 116: 14804–14805.
[90] Ivanov B, Miller CH, Compton J, et al. Effects of postinoculation talk on resistance to
influence. Journal of Communication 2012; 62: 701–718.
[91] Katz E, Lazarsfeld PF. Personal influence: The part played by people in the flow of
mass communications. Glencoe: Free Press, 1955.
[92] Bail C. Breaking the social media prism. Princeton University Press, 2021.
[93] Mihailidis P, Viotty S. Spreadable spectacle in digital culture: Civic expression, fake
news, and the role of media literacies in “post-fact” society. American behavioral scientist
2017; 61: 441–454.
[94] Hartman R, Blakey JW, Womick J, et al. Interventions to Reduce Partisan Animosity.
Nature Human Behaviour. Epub ahead of print 2022. DOI: https://doi.org/10.1038/s41562-
022-01442-3.
[95] Nyhan B, Reifler J. The effect of fact‐checking on elites: A field experiment on US
state legislators. American Journal of Political Science 2015; 59: 628–640.
[96] Lim C. Can Fact-checking Prevent Politicians from Lying?
[97] Metzger MJ, Flanagin AJ, Medders RB. Social and Heuristic Approaches to
Credibility Evaluation Online. Journal of Communication 2010; 60: 413–439.
[98] Pasquinelli E, Farina M, Bedel A, et al. Naturalizing Critical Thinking: Consequences
for Education, Blueprint for Future Research in Cognitive Science. Mind, Brain, and
Education.
[99] Hahn U, Oaksford M. The rationality of informal argumentation: A bayesian approach
to reasoning fallacies. Psychological Review 2007; 114: 704–732.
[100] Roozenbeek J, Maertens R, Herzog SM, et al. Susceptibility to misinformation is
consistent across question framings and response modes and better explained by open-
mindedness and partisanship than analytical thinking. Judgment and Decision Making.
[101] Marie A, Petersen MB. Moralization of rationality can stimulate, but intellectual
humility inhibits, sharing of hostile conspiratorial rumors. Psyarxiv. Epub ahead of print
2022. DOI: https://osf.io/k7u68.
[102] Adam‐Troian J, Chayinska M, Paladino MP, et al. Of precarity and conspiracy:
Introducing a socio‐functional model of conspiracy beliefs. British journal of social
psychology 2023; 62: 136–159.
[103] Alper S. There are higher levels of conspiracy beliefs in more corrupt countries.
European Journal of Social Psychology. Epub ahead of print 2022. DOI: 10.1002/ejsp.2919.
[104] Cordonier L, Cafiero F. Public Sector Corruption is Fertile Ground for Conspiracy
Beliefs: A Comparison Between 26 Western and Non-Western Countries. Psyarxiv. Epub
ahead of print 2023. DOI: 10.31219/osf.io/b24gk.
[105] Cordonier L, Cafiero F, Bronner G. Why are conspiracy theories more successful in
some countries than in others? An exploratory study on Internet users from 22 Western and
non-Western countries. Social Science Information 2021; 60: 436–456.
[106] Kuo R, Marwick A. Critical disinformation studies: History, power, and politics.
Harvard Kennedy School Misinformation Review 2021; 2: 1–11.
[107] Marwick AE. Why do people share fake news? A sociotechnical model of media
effects. Georgetown law technology review 2018; 2: 474–512.
[108] Chater N, Loewenstein G. The i-frame and the s-frame: How focusing on individual-
level solutions has led behavioral public policy astray. Behavioral and Brain Sciences 2022;
1–60.
[109] Jungherr A, Schroeder R. Disinformation and the Structural Transformations of the
Public Arena: Addressing the Actual Challenges to Democracy. Social Media+ Society 2021;
7: 2056305121988928.
