

Research Note

Scientific Uncertainty in the Press: How Newspapers Describe Initial Biomedical Findings

Science Communication, 2018, Vol. 40(1), 124-141
© The Author(s) 2018
Reprints and permissions: sagepub.com/journalsPermissions.nav
DOI: 10.1177/1075547017752166
journals.sagepub.com/home/scx

Estelle Dumas-Mallet1,2,3, Andy Smith1, Thomas Boraud2,3, and François Gonon2,3

Abstract
Newspapers preferentially cover initial biomedical findings although they
are often disconfirmed by subsequent studies. We analyzed 426 newspaper
articles covering 40 initial biomedical studies associating a risk factor with 12
pathologies and published between 1988 and 2009. Most articles presented
the study as initial but only 21% mentioned that it must be confirmed by
replication. Headlines of articles with a replication statement were hyped less
often than those without. Replication statements have tended to disappear
after 2000, whereas hyped headlines have become more frequent. Thus,
the public is increasingly poorly informed about the uncertainty inherent in
initial biomedical findings.

Keywords
uncertainty, initial biomedical studies, hype, health communication,
newspapers

1Centre Emile Durkheim, CNRS UMR 5116, Pessac, France
2CNRS UMR5293, Institute of Neurodegenerative Diseases, Bordeaux, France
3University of Bordeaux, Bordeaux, France

Corresponding Author:
Estelle Dumas-Mallet, Institute of Neurodegenerative Diseases, University of Bordeaux,
146 Rue Léo Saignat, 33076 Bordeaux, France.
Email: estelle.mallet@u-bordeaux.fr

Introduction
The production of scientific knowledge is an incremental process where early,
promising but yet tentative findings are validated through replication. Thus,
initial scientific results are uncertain per se. Over the past two decades, numer-
ous scientific studies and editorials have highlighted the “reproducibility cri-
sis” in biomedical sciences where many initial findings failed to be replicated
(Baker, 2016; Ioannidis, 2005b; Yong, 2012). The highly competitive system
for funding research and getting an academic position is often blamed: scien-
tists are increasingly evaluated on the number of their publications (Lawrence,
2003; Reich, 2013). Scientists are therefore rewarded when they publish new,
positive findings in high–impact factor journals. This evolution is visible in
the way scientific publications are written: there is a marked increase in the
use of positive words such as “robust,” “novel,” or “unprecedented” in the
summaries (Vinkers, Tijdink, & Otte, 2015). Moreover, editors of prestigious
scientific journals favor newsworthy studies by preferentially publishing new,
positive, and exciting results (Bucchi, 2015; Franzen, 2012; Smith, 2006).
They also issue press releases highlighting these studies. Scientific institutions
are also increasing their public relations activities, in particular by issuing press
releases (Peters et al., 2008; Schafer, 2011). This trend has been described by
scholars in science communication as the medialization of science (Franzen,
2012; Peters, 2012; Weingart, 1998) where scientific communication is
adapted to journalistic values and media routines.
Observational studies tend to confirm this orientation toward the media:
the importance of the findings is often exaggerated in press releases through
various types of omission, simplification, and overgeneralization (Bartlett,
Sterne, & Egger, 2002; Sumner et al., 2016) and even in the abstract and
discussion of scientific papers (Boutron, Dutton, Ravaud, & Altman, 2010).
Exaggerations and distortions present in press releases often spread to
newspaper articles (Brechman, Lee, & Cappella, 2009; Schwartz, Woloshin,
Andrews, & Stukel, 2012; Sumner et al., 2016). Furthermore, scientific
papers described in press releases are more likely to be covered by newspa-
pers (Stryker, 2002). Indeed, most journalists consider peer-reviewed jour-
nals to be trustworthy sources of news stories (Hansen, 1994; Stryker, 2002)
and tend to copy and paste press releases when covering single scientific
papers (Autzen, 2014; Taylor et al., 2015). They also favor the results of
initial studies published in high–impact factor journals (Dumas-Mallet,
Smith, Boraud, & Gonon, 2017). Yet initial results are poorly replicated
(Dumas-Mallet, Button, Boraud, Munafo, & Gonon, 2016; Ioannidis, 2005a,
2008), and journalists scarcely mention these invalidations (Dumas-Mallet
et al., 2017). Consequently, because many biomedical findings covered by
newspapers fail to be reproduced, the general public mainly receives flawed
information about biomedical discoveries (Gonon, Konsman, Cohen, &
Boraud, 2012).
Scientific uncertainty in the media has been the subject of many publica-
tions (Friedman, Dunwoody, & Rogers, 1999; Lehmkuhl & Peters, 2016;
Maier et al., 2016; Peters & Dunwoody, 2016). These analyses have focused
on the uncertainty related to controversial studies (Friedman et al., 1999;
Nelkin, 1995), the conflicting results (Stocking & Holstein, 2009), and the
risks associated with new technologies (Dudo, Dunwoody, & Scheufele,
2011; Kitzinger & Reilly, 1997; Lehmkuhl & Peters, 2016; Ruhrmann,
Guenther, Kessler, & Milde, 2015). Other authors observed that news sto-
ries often omit important information about the scientific context of the
finding they cover (Holtzman et al., 2005; Lai & Lane, 2009; Pellechia,
1997; Singer, 1990). More specifically, some scholars have acknowledged
the uncertainty inherent to initial scientific findings (Friedman et al., 1999;
Peters & Dunwoody, 2016), but we are not aware of any study investigating
its media coverage. This is especially relevant because the coverage of single
scientific papers is considered a media routine that is "mostly linear, unproblematic
and solidly backed by scientific sources" (Bauer, Ragnarsdottir,
Rudolfsdottir, & Durant, 1995; Bucchi & Mazzolini, 2003, p. 13). Studies
have also documented journalists' increasing and uncritical reliance on
press releases (Autzen, 2014; E. Jensen, 2010; Taylor et al., 2015;
Weitkamp, 2014). The degradation of journalists’ working conditions, with
fewer journalists left to deal with the same number of subjects, makes it
difficult for them to maintain their role as gatekeepers (Weingart &
Guenther, 2016).
As the public is still mainly receiving information about scientific discov-
eries through the media (Weitkamp, 2014), it is timely to study how journal-
ists present the results of biomedical findings. Here we investigated how
journalists presented the results of 40 initial biomedical studies covered by
426 newspaper articles. These studies were selected during a research project
designed to explore the validity of a large number of initial biomedical stud-
ies covered by the press. In a first step, we selected, through a PubMed search,
663 meta-analyses associating a risk factor (e.g., smoking) with a pathology.
We considered all robust meta-analyses published between 2008 and 2012
and related to 12 different pathologies in three biomedical domains: psychia-
try (attention deficit hyperactivity disorder [ADHD], schizophrenia, major
depression disorder, and autism), neurology (Parkinson’s and Alzheimer’s
diseases, multiple sclerosis, and epilepsy), and a set of four somatic diseases
(breast cancer, psoriasis, glaucoma, and rheumatoid arthritis). Meta-analyses
were considered robust if they included at least seven independent primary
studies. Less extensive meta-analyses were discarded. We then identified the
corresponding initial studies and observed that only 45.2% of them were
confirmed by the corresponding meta-analyses. This low reproducibility rate was
independent of the biomedical domain and of the journal impact factor
(Dumas-Mallet et al., 2016). Thus, initial studies published in high–impact
factor journals were not more reliable.
In a second step, we investigated the press coverage of scientific studies
included in our database (initial studies, subsequent studies, and meta-analy-
sis). We used the database Dow Jones Factiva to identify which scientific
articles were covered by English-written newspapers. Our database included
5,029 studies of which 161 were covered by newspapers (156 primary studies
and 5 meta-analysis articles). We observed that newspapers preferentially
covered initial positive studies: 13.1% of initial studies were covered,
whereas only 2.4% of subsequent studies were. They also favored studies
related to lifestyle risk factors (e.g., smoking). Moreover, newspapers also
preferentially echoed studies published in high–impact factor journals.
Finally, 51.3% of these 156 primary studies were disconfirmed by the corre-
sponding meta-analyses, and newspapers rarely mentioned these invalida-
tions. The third step of our research project, described here, is aimed at
investigating how the results of initial studies are presented in newspapers.
The study was designed to answer three research questions:

Research Question 1: Do newspaper articles mention the uncertainty that is inherent in initial studies?

In each press article, we searched for wordings describing the study as
initial and for statements indicating the need for the results to be confirmed
by replication. We used the latter as representative of how journalists conveyed
the uncertainty that is inherent in initial studies.

Research Question 2: What factors are associated with replication statements?

We analyzed the influence of six different factors: the length (in words) of
the newspaper article, the tone of the title (exaggerated or neutral), the publi-
cation year (1988-1999 vs. 2000-2009), the presence of a quotation of an
author of the study, the presence of a claim emphasizing the robustness of the
scientific results, and the description of the study as initial.

Research Question 3: Is the acknowledgment of uncertainty country dependent?

Four countries are highly represented in the newspapers of our database:
Australia, Canada, the United Kingdom, and the United States. We investigated
whether the acknowledgment of the uncertainty of initial biomedical results
varies between these countries.

Method
Selection of Newspaper Articles
We started from a database of 5,029 biomedical studies associating a risk fac-
tor with 12 pathologies (Dumas-Mallet et al., 2016). Then, we used the Dow
Jones Factiva database to find newspaper articles covering the scientific stud-
ies of our database (Dumas-Mallet et al., 2017). Briefly, each search began
with unspecific keywords (scientist* OR research*) combined with specific
ones (e.g., gene, smoking) and the name of the pathology. Each search was
restricted to 1 month after the publication date of the study. We only consid-
ered newspaper articles written in English and published in the general press.
Articles published in the specialized press (e.g., Pharma Business Week) or
by any press agency were not taken into account. This search retrieved 1,561
newspaper articles (Dumas-Mallet et al., 2017). The present study focused on
a subpopulation of them. Here, we analyzed only the newspaper coverage
of all initial scientific studies covered by 2 or more newspaper articles
(40 studies yielding 426 newspaper articles) and the coverage of subsequent
studies on the same topics (10 studies yielding 111 newspaper articles).

Content Analysis
We searched for the following elements in each newspaper article: wordings
describing the study as initial, replication statements mentioning the need for
confirmatory experiments, and claims overstating the strength of the results.
We also classified every headline as exaggerated or neutral. Headlines over-
stating the study by using words such as “breakthrough” or “key finding”
were classified as exaggerated, as well as those likely to mislead the reader
by mentioning a possible cure or a diagnostic tool when scientists have only
identified a risk factor. When headlines were factual statements such as
“Genetic variations found in depression,” they were classified as neutral.
When headlines were not classified as exaggerated according to both above-
mentioned criteria and used the word “study” or “researchers” or “research,”
they were also classified as neutral. Some examples are given in Table 1.
Two authors (EDM and FG) independently classified all headlines as
hyped or neutral. After comparing their classification and resolving their
disagreements by discussion, they obtained a first classification. Because this
classification was partly subjective, its reliability was verified independently
by a third author (AS), who was not involved in the first coding. His classification
was compared to the first one, and the Cohen's kappa coefficient was
calculated without weighting (Sim & Wright, 2005). Disagreements were
resolved by discussion to establish the final headline classification.

Table 1. Typical Examples of Hyped Headlines Versus Factual or Prudent Headlines.

Hyped headlines | Neutral or prudent headlines
Genetics may hold key to curing misunderstood neural disorder | Genetic defect studied
Breakthrough claimed for attention disorder | Scan may spot attention deficit disorder
Older mothers "lead to autism" | Autism in children is linked to older mothers
The depression gene | Study links genes to stress, depression
Insanity linked to size of the brain | Schizophrenia linked to brain abnormalities
Secret code behind MS [multiple sclerosis] finally cracked | Researchers find genes linked to MS
Antiinflammatory drugs ward off Parkinson's | Painkillers may reduce the risk of Parkinson's
Magic key to breast cancer fight | Scientists pinpoint genes that raise your breast cancer risk
Two more markers in road to cure | New cancer clues
The content of each newspaper article was also screened for wordings
indicating that the journalist adequately presented the study as initial. To
select specific wordings, two authors (EDM and FG) read the 45 news sto-
ries covering 4 initial studies related to ADHD and the 111 news stories
covering 10 subsequent (i.e., noninitial) studies. Accordingly, we consid-
ered wordings such as “for the first time” or “in a preliminary survey” as
presenting the study as an initial one but wordings such as “the new find-
ing” as nondiscriminating. Some examples are given in Table 2. Then, both
authors independently classified the 381 remaining newspaper articles cov-
ering initial studies related to other pathologies. Both classifications were
compared to calculate the Cohen’s kappa coefficient, and disagreements
were resolved by discussion.
Table 2. Typical Wordings Suggesting That a Scientific Study Is Perceived by the Journalist to Be an Initial Study.

Wordings suggesting that it is an initial study | Wordings with low specificity
The first study to show a measurable . . . | The findings were based on a survey . . .
For the first time . . . | A new study that indicates . . .
Initial test . . . | A major advance . . .
The finding is still preliminary . . . | The breakthrough will have major . . .
Scientists have discovered a gene . . . | A breakthrough in brain scan technology . . .
Researchers have uncovered another gene . . . | Researchers found that . . .
The discovery, which links the disorder to a gene that regulates . . . | A discovery that may help scientists develop new treatments. (a)
The discovery of this gene variant . . . | A discovery that could lead to earlier diagnosis. (a)
The first convincing evidence . . . (b) | Researchers have identified a gene linked . . .

(a) When the word "discovery" is not associated with a description, it was considered insufficient evidence that the study was reported as initial.
(b) When this wording was associated in the same article with a statement that the reported study replicated previous studies, we assumed that "first" qualified "convincing" rather than "evidence," and this wording was no longer considered as identifying an initial study.

The content of each newspaper article was screened for statements about
the robustness or uncertainty of the results. Typical examples of replication
statements and robustness claims are given in Table 3. We also identified who
made these statements: an author of the study or an independent expert. When
these statements were not inserted in a quotation, their authorship was attributed
by default to journalists. Finally, we searched each article for any
authors' quotations. To avoid errors or omissions regarding the presence or
absence of replication statements, robustness claims, and authors' quotations,
two authors (EDM and FG) independently performed these searches in all
426 newspaper articles. Classifications were compared, and Cohen's kappa
coefficients were calculated without weighting. Disagreements were resolved
by discussion to establish the final classification. Raw data are available on
request from the corresponding author (EDM).

Statistical Methods
Binary logistic regressions were performed using a standard iterative method
and calculated using R-CRAN software.
Table 3. Typical Statements That Replication Is Needed or That the Data Are Robust.

Replication is needed | The data are robust
Replication of the results is absolutely essential | The initial data are robust
Their results must be replicated | It provides rigorous confirmation of a link
Tests on larger populations of adults must be performed | It establishes which gene plays an important role
[The data] need to be verified by further studies | A proven direct genetic link
If the work stands up | A clear-cut chemical abnormality
More work is needed to confirm the findings | These results demonstrate . . .
The study needs to be repeated | This kind of evidence makes it clear
They cautioned that the number of patients tested was small | This is the first demonstration of a gene affecting a brain response
It will be important for other scientists to confirm that | The statistical significance is the highest
The study is preliminary | Irrefutable evidence

Results
Validation of the Coding Methods
The 426 newspaper headlines were classified as hyped or neutral by two
authors (EDM and FG) and then compared to the classification performed
by a third author (AS). Cohen’s kappa coefficient obtained for this com-
parison was 0.71. This value is within the range of a “substantial” strength
of agreement (0.61-0.8; Sim & Wright, 2005). In order to determine
whether newspaper articles presented each study as initial, two authors
jointly screened 45 news stories covering four initial studies about ADHD
to select specific wordings (Table 2). Then, both authors independently
applied these selection criteria to the remaining 381 news stories covering
36 other initial studies. Cohen’s kappa coefficient obtained by comparing
both classifications was 0.896. This value is within the range of an “almost
perfect” strength of agreement (0.81-1.0; Sim & Wright, 2005). Finally,
the presence or absence of an author’s quotation, a replication statement,
and a robustness claim in the 426 newspaper articles were checked inde-
pendently by two authors (EDM and FG). Cohen’s kappa coefficients
obtained by comparing classifications were 0.977, 0.856, and 0.994,
respectively.
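Cohen's kappa, used above to check coder agreement, can be computed in a few lines. The sketch below is illustrative only: the two coder lists are invented toy data, not our actual headline codings.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Unweighted Cohen's kappa for two nominal codings of the same items."""
    n = len(coder_a)
    # Observed proportion of items on which the two coders agree.
    observed = sum(x == y for x, y in zip(coder_a, coder_b)) / n
    # Chance agreement: product of the coders' marginal proportions, per category.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical coders rating ten headlines as hyped ("H") or neutral ("N")
a = ["H", "H", "N", "N", "N", "H", "N", "N", "H", "N"]
b = ["H", "N", "N", "N", "N", "H", "N", "H", "H", "N"]
print(round(cohens_kappa(a, b), 3))  # → 0.583
```

With 8/10 raw agreement, kappa drops to 0.58 once chance agreement is discounted, which is why kappa rather than simple percentage agreement is reported above.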

Overview of Newspaper Articles Covering Initial Studies


The 40 initial studies included in our analysis received, on average, a cover-
age of 11 newspaper articles (range: 2-50). Their average length was 417
words (range: 14-1,111). Thirteen of these 40 initial studies were validated by
the corresponding meta-analysis, whereas 27 were invalidated (67.5%). This
poor replication validity is similar to the one more generally observed for
initial studies published in peer-reviewed biomedical journals (Dumas-Mallet
et al., 2016).
The 426 newspaper articles of our database were published between 1988
and 2009. Among them, 120 covered the 19 initial studies published between
1988 and 1999 and 306 echoed the 21 studies published from 2000 to 2009.
Among these 426 newspaper articles, 400 were published by general news-
papers printed in 4 countries (39 in Australia, 95 in Canada, 99 in the United
Kingdom, and 167 in the United States) and 26 in several other countries
(Hong Kong, India, Ireland, Korea, New Zealand, Pakistan, and Singapore).
We rated 150 headlines as hyped (35.2%). We found wordings suggesting
that the journalist presented the study as initial in 243 newspaper articles (57%).
The authors of the study were quoted in 302 newspaper articles (70.9%). Finally,
we found 91 replication statements (21.4%) and 105 robustness claims (24.6%)
overstating the strength of the findings. Most replication statements (81.3%) and
most robustness claims (78%) appeared in quotations either from the authors of
the study or from experts in the field. The list of the 426 newspaper articles and
their characteristics is available on request from the corresponding author.

Factors Associated With the Presence of a Replication Statement
Using binary logistic regression, we tested whether the presence of a replication
statement was associated with six predictor variables: the presence of a robust-
ness claim, the presence of an author’s quotation, the presence of wordings pre-
senting the study as initial, the presence of an exaggerated headline, whether the
article length exceeded 200 words, and whether it was published after 1999
(Table 4). Three predictor variables were strongly associated with the presence of
a replication statement: the presence of an author’s quotation, the presence of an
exaggerated headline, and the publication period. Indeed, newspaper articles
including an author’s quotation were almost 3 times more likely to mention a
replication statement (Table 4). This is consistent with the fact that 44/91 replica-
tion statements appeared within authors’ quotations. The negative association
between the presence of a replication statement and of an exaggerated headline is
statistically significant (odds ratio [OR] = 0.38, p = 0.0025). It is also observed if
the exaggerated headline is chosen as the dependent variable and the presence of
a replication statement as a predictor variable (OR = 0.37, p = 0.0015). The
publication period inversely affected the presence of a replication statement
and of an exaggerated headline (Table 4). Indeed, the percentage of hyped
headlines was much lower before 2000 (15%) than during the 2000s (43.1%),
whereas the percentage of replication statements was much higher before (35%)
than after 2000 (16%). Finally, wordings suggesting that the covered study is an
initial one moderately predicted the presence of a replication statement
(OR = 1.77, p = 0.043).

Table 4. Factors Associated With the Presence of a Replication Statement.

Dependent variable and 6 predictor variables | Yes | No | Odds ratio | CI low | CI high | p
Replication statement (dependent variable) | 91 | 335 | | | |
Robustness claim | 105 | 321 | 1.31 | 0.75 | 2.29 | 0.35
Published after 1999 | 306 | 120 | 0.36 | 0.20 | 0.61 | 0.0002
Length >200 words | 329 | 97 | 1.74 | 0.77 | 3.93 | 0.19
Hyped headline | 150 | 276 | 0.38 | 0.20 | 0.71 | 0.0025
Initial statement | 243 | 183 | 1.77 | 1.02 | 3.07 | 0.043
Authors' quotation | 302 | 124 | 3.69 | 1.73 | 7.87 | 0.0007

Note. Logistic regression; overall model fit: χ2 = 63.16, p < 0.0001. CI = confidence interval.
The logistic regression indicates that the length of the newspaper articles
does not affect the presence of a replication statement (Table 4). However, the
97 short articles (<200 words) less often quoted the authors (38%) and less
often mentioned a replication statement (9.3%) than longer ones (80.2% and
24.9%, respectively). This is expected because replication statements often
appeared within authors' quotations. This explains why the logistic regression
with six predictor variables does not reveal the association of a replication
statement with the story length. Indeed, when the authors' quotation is
excluded from the predictor variables, the logistic regression shows that
replication statements are almost 3 times more frequent in longer news stories
(>200 words; OR = 2.70, p = 0.012). Finally, the presence of a robustness
claim is not associated with the presence of a replication statement.
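The binary logistic regressions behind these odds ratios can be sketched with the standard Newton-Raphson (IRLS) iteration. This is a minimal illustration on simulated data, not the authors' R code, and all variable names are ours.

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Binary logistic regression fitted by Newton-Raphson (IRLS).
    Returns coefficients; exp(coefficient) is the predictor's odds ratio."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # fitted probabilities
        w = p * (1.0 - p)                       # IRLS weights
        grad = X.T @ (y - p)                    # score vector
        hess = X.T @ (X * w[:, None])           # observed information matrix
        beta += np.linalg.solve(hess, grad)
    return beta

# Simulated data: one binary predictor with a true odds ratio of exp(0.9), about 2.5
rng = np.random.default_rng(42)
x = rng.integers(0, 2, 2000)
p_true = 1.0 / (1.0 + np.exp(-(-1.0 + 0.9 * x)))
y = (rng.random(2000) < p_true).astype(float)
beta = fit_logistic(x.reshape(-1, 1), y)
print(np.exp(beta[1]))  # estimated odds ratio, close to the true 2.5
```

An odds ratio below 1, such as the 0.38 reported for hyped headlines, would correspond to a negative fitted coefficient.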

Subanalysis of Newspaper Articles According to Their Validation Status
Because subsequent studies and meta-analyses were yet to be performed
when newspapers reported initial studies, it might seem rather illogical, at
first glance, to test an association between the validation status and the presence
of a replication statement and/or a robustness claim. However, this test
uncovered an unexpected observation: all but 4 of the 91 replication statements
were found in the 257 newspaper articles covering the 27 initial studies
that were subsequently invalidated (chi-square test: χ2 = 60.2, p < 0.0001). In
contrast, we found no difference regarding the presence of robustness claims
whether the study was subsequently validated or not.

Figure 1. Percentage of news stories with an exaggerated headline as a function of the percentage of news stories mentioning a replication statement in four countries.
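The chi-square value above can be reproduced from the counts reported in the text: 87 of the 91 replication statements (all but 4) fell in the 257 articles covering invalidated studies, and the remaining 4 in the 169 (426 minus 257) articles covering validated ones; the other two cells follow by subtraction. A minimal Pearson chi-square sketch, not the authors' code:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (no continuity correction)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Rows: invalidated vs. validated studies; columns: replication statement yes/no
table = [[87, 170], [4, 165]]
print(round(chi_square_2x2(table), 1))  # → 60.2, matching the reported value
```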

Hyped Headlines and Replication Statement: Geographical Differences
We tested whether the percentage of newspaper articles mentioning a replica-
tion statement and the percentage of hyped headlines varied between the four
countries widely represented in our database: Australia, Canada, the United
Kingdom, and the United States. We observed that the distribution of these
two parameters greatly varied among these four countries (Figure 1). Indeed,
while 59/164 U.S. news stories mentioned a replication statement, only 3/97
did so in U.K. newspapers. Conversely, 57/97 headlines were hyped in U.K.
newspapers but only 28/164 in U.S. ones. Articles published by Australian and
Canadian newspapers are in-between these extreme cases (Figure 1). The fact
that U.K. news almost never mentioned a replication statement is not related
to their length (412 words on average versus 425 words in U.S. news and 417
in the whole sample) or to a preponderance of tabloid newspapers in our sam-
ple. Indeed, 44/97 U.K. news stories were published by quality newspapers
(The Daily Telegraph, The Guardian, The Independent, and The Times). Also,
authors' quotations were almost as frequent in U.K. newspaper articles (63/97,
65%) as in U.S. ones (72%) and in all newspaper articles (71%). This sub-
analysis by country of origin further documents that hyped headlines are nega-
tively associated with the presence of a replication statement.

Discussion
Because the majority of early biomedical findings are invalidated by subse-
quent studies (Dumas-Mallet et al., 2016; Ioannidis, 2005b; Prinz, Schlange,
& Asadullah, 2011), newspapers covering them should acknowledge the
uncertainty inherent to initial findings. According to a survey of 16 science
journalists, 13 actually considered that it is essential to mention that the study
they cover has been replicated or needs replication (Holtzman et al., 2005;
Mountcastle-Shah et al., 2003). However, the content analysis of 228 news
stories reporting genetic findings revealed that this replication status was
specified in only a third of them (Holtzman et al., 2005). In agreement with
this study, we found only 91 replication statements (21%) in the 426 newspa-
per articles covering initial biomedical association studies.
Half of these statements were presented as quotations from the authors
of the study and a third from scientific experts in the field. Moreover, these
statements were infrequent in newspaper articles that did not quote scien-
tific authors and in articles covering initial findings that have later been
confirmed by subsequent studies. These observations suggest that scientists
play a major role regarding the presence of a replication statement. They
might have warned journalists or press release writers that their findings
needed to be confirmed, especially when they felt that the data were uncer-
tain, or they might have done it in response to journalists’ questions. In any
case, the fact that only 21% of the newspaper articles mentioned a replica-
tion statement is consistent with studies showing that journalists consider
peer-reviewed journals to be trustworthy sources of news stories (Bauer
et al., 1995; Bucchi & Mazzolini, 2003; Hansen, 1994). Unfortunately, the
common belief that findings published in influential journals are more
trustworthy than those reported in less prestigious peer-reviewed journals
has been contradicted by observational studies (Dumas-Mallet et al., 2016;
Prinz et al., 2011).
We observed a strong inverse relationship between the presence of a rep-
lication statement and the fact that the headline was exaggerated. Because
readers get their first or only impressions from headlines, hyped headlines are
especially misleading. Previous reception studies showed that “the public
who read headlines about genetics most often indicated an expectation that
the article would talk about a ‘cure’” (Caulfield & Condit, 2012, p. 214).
Newspaper editors, who write the headlines in most cases (Nelkin, 1995),
might be tempted to hype them to attract readers’ interest by satisfying this
expectation. Our observations suggest that the presence of a replication state-
ment might discourage newspaper editors from hyping their headlines.
We used the presence of a replication statement as an indicator that journalists
actually informed the public about the uncertainty of the finding they
cover. However, in this respect the press coverage of each of the 40 studies
was rarely homogeneous. Indeed, the systematic mention of a replication
statement in all press articles covering a single study was observed only for
two of them. Regarding the media coverage of the 38 other findings, either
none or only some press articles mentioned the need for replication. We can-
not tell why many press articles, although they quoted the authors, did not
mention a replication statement whereas some covering the same study did.
Several explanations can be suggested. First, scientists might have mentioned
the need for replication to some journalists only. Second, the press releases
covering the study might not have mentioned it. Third, the press release
might have mentioned the need for replication, but only some journalists
included it in their articles. Fourth, journalists had actually written the repli-
cation statement, but their editors subsequently cut it out due to limits of
space. These omissions are in line with Kitzinger and Reilly’s (1997) opin-
ion: “scientific uncertainty per se is not attractive to journalists—it is new
and definitive findings and controversy that draw media attention” (p. 344).
However, cultural habits and media structuring also seem to be involved in
this issue, as evidenced by the marked differences we observed between the
United Kingdom and the United States.

Limitations
We analyzed the newspaper coverage of 40 initial findings associating a risk
factor with 12 pathologies. The generalization of our observations to other
pathologies and to other biomedical domains, such as clinical trials of treat-
ment effectiveness, deserves further investigation. Moreover, we selected
these 40 initial studies because they were included in robust meta-analyses.
Indeed, meta-analyses represent the best estimate of a real effect, or of its
absence, and we aimed to correlate the replication validity of initial studies
with their media coverage. However, initial studies included in meta-analyses
represent a tiny fraction of all initial association studies. Whether our
observations also apply to the media coverage of initial association studies in
general remains an open question.

Conclusion
Initial biomedical findings are preferentially covered by newspapers although
they are often disconfirmed by subsequent studies (Dumas-Mallet et al.,
2017). Therefore, their media coverage constitutes a particularly relevant
material for investigating how newspapers deal with research uncertainty.
Sadly, our study confirms that most newspaper articles do not inform the
public about the uncertainty inherent in initial studies. This is not in the
long-term interest of science. Indeed, contrary to common belief, reception
studies show that the public views science as more trustworthy when the
newspaper coverage of health research acknowledges its uncertainty, espe-
cially if this is attributed to scientific authors (J. D. Jensen et al., 2017; J. D.
Jensen, Krakow, John, & Liu, 2013).
We observed an inverse relationship between the presence of a replication
statement and that of an exaggerated headline. However, the media are not
solely to blame for this hype phenomenon (Caulfield & Condit, 2012). In
particular, because scientists are behind most replication statements, those
who fail to mention the need for replication are indirectly responsible for
hyped headlines. With this in mind, the recent disappearance of replication
statements and the concomitant surge in hyped headlines further support the
view that the medialization of scientific research is increasing (Peters, 2012)
and emphasize the relevance of this concept regarding science-media interac-
tions. Finally, the marked differences observed between countries in how
newspapers acknowledge the uncertainty inherent in initial biomedical findings
are intriguing and deserve further investigation.

Acknowledgments
We thank André Garenne for his advice about statistical methods.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research,
authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publi-
cation of this article.
ORCID iD
Estelle Dumas-Mallet https://orcid.org/0000-0002-6011-8428
References
Autzen, C. (2014). Press releases: The new trend in science communication. Journal
of Science Communication, 13(3), 1-8.
Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature, 533(7604),
452-454.
Bartlett, C., Sterne, J., & Egger, M. (2002). What is newsworthy? Longitudinal study
of the reporting of medical research in two British newspapers. British Medical
Journal, 325(7355), 81-84.
Bauer, M., Ragnarsdottir, A., Rudolfsdottir, A., & Durant, J. R. (1995). Science and
technology in the British Press, 1946-1990: A systematic content analysis of the
press. London, England: The Science Museum.
Boutron, I., Dutton, S., Ravaud, P., & Altman, D. G. (2010). Reporting and interpre-
tation of randomized controlled trials with statistically nonsignificant results for
primary outcomes. Journal of the American Medical Association, 303, 2058-2064.
Brechman, J., Lee, C. J., & Cappella, J. N. (2009). Lost in translation? A comparison
of cancer-genetics reporting in the press release and its subsequent coverage in
the press. Science Communication, 30, 453-474.
Bucchi, M. (2015). Norms, competition and visibility in contemporary science: The
legacy of Robert K. Merton. Journal of Classical Sociology, 15, 233-252.
Bucchi, M., & Mazzolini, R. G. (2003). Big science, little news: Science coverage in
the Italian daily press, 1946-1997. Public Understanding of Science, 12, 7-24.
Caulfield, T., & Condit, C. (2012). Science and the sources of hype. Public Health
Genomics, 15, 209-217.
Dudo, A., Dunwoody, S., & Scheufele, D. A. (2011). The emergence of nano news:
Tracking thematic trends and changes in U.S. newspaper coverage of nanotech-
nology. Journalism & Mass Communication Quarterly, 88, 55-75.
Dumas-Mallet, E., Button, K., Boraud, T., Munafo, M., & Gonon, F. (2016).
Replication validity of initial association studies: A comparison between psy-
chiatry, neurology and four somatic diseases. PLoS One, 11(6), e0158064.
Dumas-Mallet, E., Smith, A., Boraud, T., & Gonon, F. (2017). Poor replication valid-
ity of biomedical association studies reported by newspapers. PLoS One, 12(2),
e0172650.
Franzen, M. (2012). Making science news: The press relations of scientific journals
and implications for scholarly communication. In S. Rödder, M. Franzen & P.
Weingart (Eds.), The sciences’ media connection: Public communication and its
repercussions (pp. 333-352). Dordrecht, Netherlands: Springer Science.
Friedman, S. M., Dunwoody, S., & Rogers, C. L. (1999). Communicating uncertainty:
Media coverage of new and controversial science. Mahwah, NJ: Lawrence Erlbaum.
Gonon, F., Konsman, J. P., Cohen, D., & Boraud, T. (2012). Why most biomedical
findings echoed by newspapers turn out to be false: The case of attention deficit
hyperactivity disorder. PLoS One, 7(9), e44275.
Hansen, A. (1994). Journalistic practices and science reporting in the British press.
Public Understanding of Science, 3, 111-134.
Holtzman, N. A., Bernhardt, B. A., Mountcastle-Shah, E., Rodgers, J. E., Tambor, E.,
& Geller, G. (2005). The quality of media reports on discoveries related to human
genetic diseases. Community Genetics, 8, 133-144.
Ioannidis, J. P. (2005a). Contradicted and initially stronger effects in highly cited
clinical research. Journal of the American Medical Association, 294, 218-228.
Ioannidis, J. P. (2005b). Why most published research findings are false. PLoS
Medicine, 2(8), e124.
Ioannidis, J. P. (2008). Why most discovered true associations are inflated.
Epidemiology, 19, 640-648.
Jensen, E. (2010). Between credulity and scepticism: Envisaging the fourth estate in
21st-century science journalism. Media, Culture & Society, 32, 615-630.
Jensen, J. D., Krakow, M., John, K. K., & Liu, M. (2013). Against conventional wis-
dom: When the public, the media, and medical practice collide. BMC Medical
Informatics and Decision Making, 13(Suppl. 3), S4-S10.
Jensen, J. D., Pokharel, M., Scherr, C. L., King, A. J., Brown, N., & Jones, C. (2017).
Communicating uncertain science to the public: How amount and source of
uncertainty impact fatalism, backlash, and overload. Risk Analysis, 37, 40-51.
Kitzinger, J., & Reilly, J. (1997). The rise and fall of risk reporting: Media coverage
of human genetics research, “false memory syndrome” and “mad cow disease.”
European Journal of Communication, 12, 319-350.
Lai, W. Y., & Lane, T. (2009). Characteristics of medical research news reported on
front pages of newspapers. PLoS One, 4(7), e6103.
Lawrence, P. A. (2003). The politics of publication. Nature, 422(6929), 259-261.
Lehmkuhl, M., & Peters, H. P. (2016). Constructing (un-)certainty: An explora-
tion of journalistic decision-making in the reporting of neuroscience. Public
Understanding of Science, 25, 909-926.
Maier, M., Milde, J., Post, S., Gunther, L., Ruhrmann, G., & Barkela, B. (2016).
Communicating scientific evidence: Scientists’, journalists’ and audiences’
expectations and evaluations regarding the representation of scientific uncer-
tainty. Communications, 41, 239-264.
Mountcastle-Shah, E., Tambor, E., Bernhardt, B. A., Geller, G., Karaliukas, R., &
Holtzman, N. A. (2003). Assessing mass media reporting of disease-related
genetic discoveries. Science Communication, 24, 458-478.
Nelkin, D. (1995). Selling science: How the press covers science and technology.
New York, NY: W. H. Freeman.
Pellechia, M. G. (1997). Trends in science coverage: A content analysis of three US
newspapers. Public Understanding of Science, 6, 49-68.
Peters, H. P. (2012). Scientific sources and the mass media: Forms and consequences
of medialization. In S. Rodder, M. Franzen & P. Weingart (Eds.), The sciences’
media connection: Public communication and its repercussions (pp. 217-240).
Dordrecht, Netherlands: Springer.
Peters, H. P., Brossard, D., de Cheveigne, S., Dunwoody, S., Kallfass, M., Miller,
S., & Tsuchida, S. (2008). Science communication: Interactions with the mass
media. Science, 321(5886), 204-205.
Peters, H. P., & Dunwoody, S. (2016). Scientific uncertainty in media content:
Introduction to this special issue. Public Understanding of Science, 25, 893-908.
Prinz, F., Schlange, T., & Asadullah, K. (2011). Believe it or not: How much can
we rely on published data on potential drug targets? Nature Reviews. Drug
Discovery, 10, 712.
Reich, E. S. (2013). Science publishing: The golden club. Nature, 502(7471),
291-293.
Ruhrmann, G., Guenther, L., Kessler, S. H., & Milde, J. (2015). Frames of scientific
evidence: How journalists represent the (un)certainty of molecular medicine in
science television programs. Public Understanding of Science, 24, 681-696.
Schafer, M. S. (2011). Sources, characteristics and effects of mass media communi-
cation on science: A review of the literature, current trends and areas for future
research. Sociology Compass, 5/6, 399-412.
Schwartz, L. M., Woloshin, S., Andrews, A., & Stukel, T. A. (2012). Influence of
medical journal press releases on the quality of associated newspaper coverage:
Retrospective cohort study. British Medical Journal, 344, d8164.
Sim, J., & Wright, C. C. (2005). The kappa statistic in reliability studies: Use, inter-
pretation, and sample size requirements. Physical Therapy, 85, 257-268.
Singer, E. (1990). A question of accuracy: How journalists and scientists report
research on hazards. Journal of Communication, 40, 102-116.
Smith, R. (2006). Medical journals and the mass media: Moving from love and hate
to love and hate. Journal of the Royal Society of Medicine, 99, 347-352.
Stocking, S. H., & Holstein, L. W. (2009). Manufacturing doubt: Journalists’ roles and
the construction of ignorance in a scientific controversy. Public Understanding
of Science, 18, 23-42.
Stryker, J. E. (2002). Reporting medical information: Effects of press releases
and newsworthiness on medical journal articles’ visibility in the news media.
Preventive Medicine, 35, 519-530.
Sumner, P., Vivian-Griffiths, S., Boivin, J., Williams, A., Bott, L., Adams, R., . . .
Chambers, C. D. (2016). Exaggerations and caveats in press releases and health-
related science news. PLoS One, 11(12), e0168217.
Taylor, J. W., Long, M., Ashley, E., Denning, A., Gout, B., Hansen, K., . . . Newton,
P. M. (2015). When medical news comes from press releases: A case study of
pancreatic cancer and processed meat. PLoS One, 10(6), e0127848.
Vinkers, C. H., Tijdink, J. K., & Otte, W. M. (2015). Use of positive and negative
words in scientific PubMed abstracts between 1974 and 2014: Retrospective
analysis. British Medical Journal, 351, h6467.
Weingart, P. (1998). Science and the media. Research Policy, 27, 869-879.
Weingart, P., & Guenther, L. (2016). Science communication and the issue of trust.
Journal of Science Communication, 15(05), C01.
Weitkamp, E. (2014). On the roles of scientists, press officers and journalists. Journal
of Science Communication, 13(03).
Yong, E. (2012). Replication studies: Bad copy. Nature, 485(7398), 298-300.
Author Biographies
Estelle Dumas-Mallet received a PhD in biomedical sciences in 2004 and was then
awarded several postdoctoral fellowships in the United Kingdom. Since November
2017 she has also held a PhD in political science, with a thesis titled
"Biomedical Research and Journalism Facing the Uncertainty of Emergent Science."
On this topic, she has already published two articles as first author in PLoS
ONE (2016 and 2017) and one in Royal Society Open Science (2017).
Andy Smith is a senior researcher in political science. Born and educated in the United
Kingdom, he defended his PhD thesis in political science in 1995 at the University of
Grenoble (France). He has published numerous studies on the politics of economic
activities (e.g., The Politics of Economics, Oxford University Press, 2016). His recent
research interests are related to the pharmaceutical industry in the European Union and
to the politics of scientific research. He supervised Dumas-Mallet’s thesis.
Thomas Boraud is a senior neuroscientist and neurologist (MD, PhD). He has
already published 73 scientific studies in peer-reviewed journals about various aspects
of the neurophysiology of basal ganglia, the processes of decision making, and the
pathophysiology of Parkinson’s disease. Since 2011 he has collaborated with François
Gonon on the ethics of neuroscience communication in the media.
François Gonon is an emeritus researcher. As a neurochemist, he published 94 scien-
tific studies about the neurotransmission mediated by dopamine. In 2008 he realized
that there is often a huge gap between scientific observations and misleading conclu-
sions reported by the mass media. Since then he has published 10 studies in peer-
reviewed journals investigating how and why this gap is generated. He showed that
scientists, scientific institutions, and the process of knowledge production are chiefly
responsible for this gap.