
Read Writ (2016) 29:1629–1652

DOI 10.1007/s11145-016-9638-8

How source information shapes lay interpretations of science conflicts: interplay between sourcing, conflict explanation, source evaluation, and claim evaluation

Eva Thomm¹ · Rainer Bromme¹

Published online: 23 March 2016
© Springer Science+Business Media Dordrecht 2016

Abstract When laypeople read controversial scientific information in order to make a personally relevant decision, information on the source is a valuable resource with which to evaluate multiple, competing claims. Due to their bounded understanding, laypeople rely on the expertise of others and need to identify whether sources are credible. The present study examined under which conditions readers acknowledge and consider available source information. University students read two conflicting scientific claims put forward by sources whose credibility was varied in terms of either expertise or benevolence. They then rated their subjective explanations for the conflicting claims, perceived source credibility, and personal claim agreement. Results showed that when evaluating and explaining the conflict, participants became vigilant to source information specifically when source credibility was questioned. Conflict explanation through differences in sources’ competencies mediated the impact of sourcing on source credibility. Information about a source’s benevolence revealed a strong direct effect on credibility judgments. However, motivation explanations did not clarify the relationship. Overall, findings show that readers consider source information and apply it adaptively when handling conflicting scientific information.

Electronic supplementary material The online version of this article (doi:10.1007/s11145-016-9638-8) contains supplementary material, which is available to authorized users.

✉ Eva Thomm
eva.thomm@uni-muenster.de

Rainer Bromme
bromme@uni-muenster.de

¹ Department of Psychology, Westfälische Wilhelms-Universität Münster, Fliednerstraße 21, 48149 Münster, Germany

1630 E. Thomm, R. Bromme

Keywords Sourcing · Conflict explanation · Scientific conflicts · Conflict evaluation

Introduction

Science affects daily life in manifold ways, and this obliges people to acquire and
process science-based information. For example, laypeople may need to read medical
information to clarify a health-related question concerning a treatment or peruse
biogenetic knowledge to weigh the risks of genetically manipulated food. Unfortu-
nately, however, scientific knowledge is seldom conclusive or unambiguous. Instead,
it is commonly dynamic, frequently inconsistent, and sometimes even conflicting
(Longino, 2002). This is all the more true for scientific information that is easily
accessible for the general public, especially via the Internet. Although the Internet has
revolutionized the accessibility of science-based information, it has also increased the
spread of inconsistent information (Stadtler & Bromme, 2014). Given the question-
able nature of online information, its selection, evaluation, and interpretation remain a
paramount challenge for laypersons (Brand-Gruwel & Stadtler, 2011; Goldman,
Braasch, Wiley, Grasser, & Brodowinska, 2012; Hendriks, Kienhues, & Bromme,
2015; Metzger & Flanagin, 2013).
One particular challenge when handling science-based online information arises
from the fact that its contents usually go far beyond laypeople’s own everyday
knowledge. Due to their bounded understanding (Bromme & Goldman, 2014),
laypeople rely on the expertise of others and may possess only limited abilities to
evaluate the veracity of claims directly. To base their personal decisions on experts’
recommendations, they have to identify and evaluate relevant, credible sources
(Bråten, Strømsø, & Salmerón, 2011; Britt & Aglinskas, 2002; Bromme, Kienhues
& Porsch, 2010). Therefore, the accurate processing of source information known as
sourcing (e.g., Bråten et al., 2011; Goldman & Scardamalia, 2013; Tabak, 2015;
Wineburg, 1991) is a highly relevant skill for evaluating controversial science-based
claims; and, as such, a factor influencing individuals’ interpretations of such
controversies. Moreover, individuals’ subjective assumptions about why different
sources may arrive at competing claims about the same topic may affect their
conclusions about what and whom to believe and, hence, how they resolve
inconsistencies (Barzilai & Eshet-Alkalai, 2015). The present study examines the
interplay of laypeople’s sourcing and conflict explanations when dealing with
contradictory science-based information. We aimed to scrutinize whether the impact
of sourcing on the evaluation of rival accounts and their sources is mediated by
laypeople’s assumptions about the reasons for the conflict.

Considering source information: a valuable but infrequently applied approach

How individuals consider source information has been investigated specifically in the
context of social psychological research on persuasion (e.g., Chen & Chaiken, 1999;
Petty & Caccioppo, 1986) as well as in research on reading literacy and the text comprehension of multiple documents (e.g., Bråten, Strømsø, & Britt, 2009; Perfetti,
Rouet, & Britt, 1999; Rouet & Britt, 2011). Petty and Caccioppo (1986) studied the
persuasiveness of arguments and examined the relevance of source information in
argument elaboration. They postulated that motivated people are more likely to
elaborate the contents of arguments. In contrast, if people are not motivated (or not
able, due, e.g., to a lack of knowledge) to process argument contents, they are more
inclined to rely on superficial processing based on information about the arguments’
sources. Thus source information is assumed to be considered mainly when readers do
not attend to the contents (Sparks & Rapp, 2011). However, in light of laypeople’s
limited capacities to fully understand and evaluate science-based content, reasoning
about the credibility of sources is not necessarily a less elaborated way to assess
information veracity (Bromme et al., 2010; Stadtler & Bromme, 2014). In line with
this reasoning, current research on reading comprehension stresses the relevance of
sourcing capacities. Sourcing is conceived as being essentially relevant for readers in
order to select, evaluate, and integrate knowledge from diverse, multiple information
sources (Barzilai, Tzadok, & Eshet-Alkalai, 2015; Goldman et al., 2012; Kammerer &
Gerjets, 2012; Keck, Kammerer & Starauschek, 2015; Perfetti et al., 1999; Rouet &
Britt, 2011; Strømsø, Bråten, & Britt, 2010; Stadtler & Bromme, 2014).
However, although research points to the importance of sourcing capacities, readers rarely engage in sourcing activities spontaneously.
Empirical evidence suggests that readers seldom recognize or regard source
information spontaneously (e.g., Britt & Aglinskas, 2002; Kammerer, Bråten,
Gerjets, & Strømsø, 2013; Wiley et al., 2009). Britt and Aglinskas (2002) reported
that high school students and undergraduate students hardly reflect on source
information when reading multiple documents about a historical event unless
instructed explicitly to do so. When investigating the impact of source information
on text comprehension, Sparks and Rapp (2011) showed similarly that university
students tend to consider source credibility only when encouraged explicitly to
reason about a source’s reliability. Further findings suggest that undergraduate
students may well attend to source information spontaneously when handling
multiple texts about science-based issues, but they do not necessarily use it to
evaluate the given information (Kobayashi, 2014; Strømsø, Bråten, Britt, &
Ferguson, 2013; Tabak, 2015). In a recent think-aloud study, Barzilai et al. (2015)
examined readers’ sourcing practices while reading discrepant expert accounts
about a socioscientific issue. The authors pointed out explicitly that even though
most readers identified and noticed the source information, they did not apply it to
evaluate the discrepant accounts. Flanagin and Metzger (2007) demonstrated that
readers of online information seldom verify the information given about online
sources, although they reported checking for source information when being
interviewed about strategies to evaluate websites. This raises the question of when and in which ways sourcing may occur.

Prior evidence of sourcing: When do readers source?

There is some evidence that having to deal with inconsistent, conflicting information
especially draws individuals’ attention to source information and hence may elicit sourcing (e.g., Stadtler & Bromme, 2014; Strømsø et al., 2013). When establishing the
discrepancy-induced source comprehension effect, Braasch, Rouet, Vibert, and Britt
(2012) demonstrated that adult readers were more sensitive to source descriptions and
reported more source-related information in summary and memory tasks when
provided with two discrepant rather than two consistent assertions put forward by
different sources. Braasch et al. (2012) found this effect with short and conceptually
simple factual statements. Strømsø et al. (2013) let university students read multiple
controversial documents about a socioscientific issue. They showed that those
documents that expressed the most strongly competing positions evoked increased
sourcing. We would expect that, in particular, lay reasoning on controversial science-
based topics should be sensitive to source information, because laypeople have to
defer to experts to reach conclusions on such issues. Studies suggest that laypeople are
inclined to consult experts for advice rather than base their decision on their own
fragmentary understanding when they perceive science-based information to be
complex (Scharrer, Britt, Stadtler & Bromme, 2012). In an interview-based study,
Bromme, Thomm and Wolf (2015) showed that laypeople focused particularly on
source-related assessments when asked to resolve and decide on controversial online
information about a medical topic.
Additionally, salience and characteristics of the source may also influence
laypeople’s sourcing (e.g., Britt & Rouet, 2012; Sperber et al., 2010). To avoid
misinformation and deception, laypersons must be critical toward experts’ abilities,
and they need to assess the link between their testimony and the motivational state
that might underlie their claim or advice (Keil, 2012; Mayer, Davis, & Schoorman,
1995; Shafto, Eaves, Navarro, & Perfors, 2012; Sperber et al., 2010). According to
Sperber et al. (2010), expertise and intentions influence the epistemic vigilance of
individuals, making them more or less cautious toward source information and thus
to the knowledge claims transmitted by these sources. Laypeople seem to possess
capacities enabling them to evaluate and identify relevant expertise (Bromme &
Thomm, 2016). Further empirical evidence suggests that individuals are specifically
attentive to source information associated with the commercial interests of the
source in providing certain information (e.g., Bråten et al., 2011; Critchley, 2008;
Cummings, 2014). Critchley (2008) revealed that laypersons perceive researchers at
publicly funded universities to be more credible than researchers working in
industry, assuming that the former act in the best interest of the public to a greater
extent than industrial researchers. When developing the Muenster Epistemic
Trustworthiness Inventory, Hendriks et al. (2015) suggested that laypeople may
sense differences between a source’s expertise, benevolence, and integrity. Against
this background, we examined specifically lay readers’ attention to source
information targeting source credibility, and the ways in which they used this
source information to interpret and evaluate conflicting scientific knowledge claims.

Resolving conflicting scientific information: considering the source as origin of conflict

Source information may also serve as an immediate explanation for the conflict at
stake, and, as such, sourcing may play a relevant role in not only the evaluation but also the interpretation of conflicting scientific information. Readers’ content and source evaluation may be informed by their understanding of the specific conflict.
We assume that this reasoning is influenced by their subjective explanations for the
conflict. Assumptions about the specific underlying causes of a conflict may help
laypeople to explain and resolve discrepancies. As a result, individuals’ explanations may affect their judgment of what and whom to believe. Information about
sources may impact on individuals’ ideas about the origins of the specific conflict,
and this, in turn, may lead to different judgments about the veracity of the claim as
well as the source’s credibility.
Prior qualitative and quantitative studies have shown that people possess specific,
differentiated explanations for conflicts between experts over scientific topics
(Bromme et al., 2015; Kajanne & Pirttilä-Backman, 1999; Thomm, Hentschke &
Bromme, 2015). These conflict explanations indicate that laypeople consider
possible origins of conflict to be characteristics of the competing experts (i.e.,
expertise and personal motivations) and of the process of scientific knowledge
production. When developing a quantitative measure to tap laypersons’ explanations
for scientific conflicts [Explaining Conflicting Scientific Claims questionnaire
(ECSC)], we demonstrated that they differentiated between four kinds of reasons:
Conflicting scientific claims were explained by two research-process-related
reasons: differences in the research processes and the thematic complexity of the
issue at stake; and two researcher-related reasons: differences in researchers’
competencies and their personal motivations (Thomm et al., 2015). These kinds of
explanations have been established as stable dimensions in both exploratory and
confirmatory factor analyses. We assume that preferences for the different
explanations are influenced by the processing of source information, and this, in
turn, affects claim evaluation. For example, if readers tend to explain conflicts
between researchers primarily in terms of the opponents’ assumed personal motive,
their claim evaluation might differ from that of readers who explain the controversy
in terms of the opponents’ use of different research methods.
Previously, we examined the impact of source information on readers’
preferences for different conflict explanations using the ECSC questionnaire
(Thomm et al., 2015). University students read conflicting scientific claims put
forward by two rival sources that varied systematically in either their expertise or
alleged motivation (i.e., vested interests). Results demonstrated a relationship
between the preference for specific explanations and the available source
information. However, these results have to be interpreted with caution, because
the study set out originally to establish the above-mentioned four explanations of
the ECSC and to empirically test the instrument’s validity. Furthermore, it is an
open question whether conflict explanations actually do influence laypeople’s
evaluations. This study should contribute to strengthening previous findings on the
role of sourcing in conflict explanation, and, in addition, it should fill a gap in
research by examining the influence of conflict explanation on conflict evaluation.
Accordingly, we first test the impact of sourcing on conflict explanations using a
different scenario within the same domain as that investigated earlier (climate
change research) in order to test for the consistency of the effects. Second, we inspect whether conflict explanations may, in turn, mediate the effect of sourcing on
source and claim evaluation.

Present study

The present study explored specific conditions and mechanisms that may allow us to
examine the role of sourcing in laypeople’s evaluations of conflicting science-based
information. We investigated when laypeople considered source information,
especially that targeting source credibility, and how they use it to explain and
evaluate a scientific conflict. Research has shown that both the expertise and the
benevolence that can be ascribed to a source contribute to shaping its credibility
(Mayer et al., 1995; Sperber et al., 2010). An expert source can be understood as a
source that is experienced and knowledgeable in the relevant domain and therefore
is assumed to be competent (Hovland & Weiss, 1951; O’Keefe, 2002). A benevolent
source can be understood as one that intends to act in the interest of the reader and is
not guided by, for example, vested interests (Mayer et al., 1995; Thomm et al.,
2015). In this study, participants were always presented with at least one credible
source that was contradicted by an antagonist source that was either less expert or
less benevolent or equally expert and benevolent. Survey research also indicated
that researchers at public universities are perceived to be especially reliable and
credible sources on science (Beseley, 2014; Castell et al., 2014). Therefore, this
reference source was represented consistently as a professor who has been working
for years at a public university and could therefore be perceived as expert and
benevolent. This setting allowed us to address four different research goals.
First, we examined readers’ memory of critical source information in general.
Because it seems questionable whether readers account for source information at all,
we could not take it for granted that our participants would consider source
information when evaluating and explaining scientific conflicts. Therefore we
inspected (1) readers’ source memorization in order to determine whether source
information was acknowledged in principle.
Second, we examined (2) the impact of source information on readers’
assessments of source credibility and of their personal agreement with the claims
at stake. We expected that participants who have reasons (due to available source
information) to assume one source to be less expert or less benevolent compared to
the other one would tend to assess this source’s claim as being less credible than the
concurring claim advocated by the expert and benevolent source. Likewise, we
assumed that participants perceiving one source as having less expertise or as being
less benevolent would tend to agree less strongly with this source’s claim than with
the concurring claim presented by the expert and benevolent source. For example, a
researcher who works for an industrial company might be perceived as less
benevolent and might therefore be judged to be less credible than a researcher who
works at a public university. This source’s claim might also be agreed with less
strongly.
Third, we examined (3) the impact of source information on the preferences for
subjective conflict explanation. We assumed that perceived discrepancies in the sources’ expertise and benevolence would specifically affect readers’ preference for
researcher-related explanations (i.e., differences in competencies and personal
motivations). It seems reasonable for participants to endorse personal motivation
explanations more highly when they perceive a violation of benevolent interests in
one of the competing sources than when they are confronted with equally expert and
benevolent sources. For instance, participants might be more inclined to attribute the
conflict to motivation explanations when the source contradicting the university
professor is a researcher in an industrial company who might be guided by vested
commercial interests.
Likewise, we expected that participants would endorse competence explanations
more strongly when they perceive a lack of expertise in one of the competing
sources than when explaining conflicts between equally expert and benevolent
sources. For example, participants might endorse competence explanations more
strongly when the competing source is a junior researcher who might possess less
working experience than a concurring university professor. In addition to these
specific assumptions, we explored the impact of source information on the
preference for two further kinds of research-process-related explanations that have
been found in preceding studies (i.e., differences in the research processes and in the
thematic complexity of the issue at stake). However, we did not expect specific
effects on these kinds of explanations, because the dimensions did not directly
address the researcher-related facets manipulated in the present study.
Finally we examined (4) the interplay of sourcing, conflict explanation, and
conflict evaluation (i.e., assessment of perceived source credibility and claim
agreement). Conflict explanation may help readers to restore coherence and thus
may affect source and claim evaluation. Because we expected source information to
influence individuals’ evaluations and explanations, it may also play a role in the
relation between the two variables. Source information may evoke or accompany a
preference for specific conflict explanations, and this, in turn, may influence conflict
evaluation. Hence, the effect of source information on conflict evaluation may be
mediated by the individual preference for conflict explanations. Because we
manipulated primarily source characteristics, we expected the relation between
source information and conflict evaluation to be influenced particularly by
researcher-related explanations. It is plausible that personal motivation explanations
in particular may affect readers’ credibility judgments when they perceive
differences in the benevolent interests of the competing sources (see above).
Participants sensing a violation of benevolent interests in one of the rival sources
may attribute the conflict more strongly to personal motivation reasons than when
confronted with equally expert and benevolent sources. This, in turn, may also lead
to decreased claim credibility. Similarly, when sensing a lack of expertise in one of
the rival sources, they may endorse competence reasons more strongly than when
confronted with equally expert sources. This, in turn, may also result in a decrease
in the perceived claim credibility. In addition, we do not rule out the possibility that
research-process-related explanations might also mediate the relationship. It is
possible that participants who are presented with competing but equally expert and
benevolent sources may agree strongly with these explanations because they are not
provided with any contextual information that may suggest any cause or explanation other than these causal factors. The preference for research-process-related explanations might then influence claim credibility as well. Analogously to the
aforementioned predictions, we also examined the relationship between sourcing,
conflict explanation, and the participants’ personal claim agreements. However, it
should be noted that the analysis of mediation also relies on the findings on research
goals 2 and 3.
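The mediation logic of research goal 4 (source information → conflict explanation → evaluation) can be illustrated with a minimal indirect-effect computation in the two-regression style of Hayes and Preacher (2014). The tiny dataset below is invented purely for illustration (it is not the study's data), and the sketch only shows how the point estimate a·b is formed:

```python
import numpy as np

def ols(y, X):
    """Least-squares coefficients of y ~ X, with an intercept prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Invented toy data: x = dummy-coded source condition,
# m = endorsement of a conflict explanation, y = credibility rating.
x = np.array([0.0, 0.0, 1.0, 1.0])
m = np.array([0.0, 1.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 1.0, 2.0])

a = ols(m, x)[1]                        # path a: condition -> explanation
b = ols(y, np.column_stack([m, x]))[1]  # path b: explanation -> rating, controlling condition
indirect = a * b                        # indirect (mediated) effect of x on y
```

In the study itself the condition factor has three levels and the significance of the indirect effect would be assessed with bootstrapped confidence intervals; this sketch shows only the core computation.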

Methods

Participants

A total of 155 students from a German university participated in the study. Four
participants who reported studying natural sciences or closely adjacent fields were
dropped from the sample because they might have possessed substantial knowledge
of the conflict topic and no longer been laypersons. Eight further participants who
read and answered the questionnaire in less than 7 min were also excluded. This
criterion was based on the time two well-trained readers needed to complete the
questionnaire. Further outlier analyses resulted in the exclusion of the data of four
participants who took much longer to complete the questionnaire (about 1 h).
The final analyses were based on the data of 139 (111 female) participants. On
average, they were 23 years old (SD = 4) and had completed 2 years of study
(SD = 1.5). Participants were told that the questionnaire would deal with a topic
stemming from current research on climate change. Given this context information,
they rated their interest in and knowledge about the specific topic of “whirlwinds”
on 6-point scales ranging from 1 (not at all) to 6 (very much). Participants reported
having a low interest in the topic (M = 2.4, SD = 0.91) and they assessed their own
knowledge to be low (M = 1.9, SD = 0.88).

Materials

Scenario

Participants read a scenario containing two opposing knowledge claims (Claim A


and Claim B) about the same science topic proposed by two different sources. The
conflict topic stemmed from the field of climate change research. Specifically, the
claims dealt with whether the increased number of whirlwinds facilitates either the
warming or the cooling of the earth’s temperature in the long term. The conflict was
based on authentic reports, but adapted for experimental purposes. Participants first
received a short introductory text summarizing the topic and also introducing the
competing sources. Subsequently, they read the two opposing claims, each said to
be stated by a different source. Both claims were of almost equal length (Claim A:
31 words; Claim B: 26 words) and structure. Thus, each claim presented two
sentences referring to study findings that led to the expert’s conclusion. The
scenario was pretested on a sample of 19 university students (11 female) who
received the introductory text and the claims without any source information. Participants were given three choices to indicate whether “the claims contradict each other,” “the claims do not contradict,” or “I don’t know.” Additionally, they
assessed the comprehensibility and credibility of each claim on a scale ranging from
1 (not at all) to 9 (very much). The majority of participants (16) judged the claims to
be contradictory. They perceived them to be generally comprehensible (Claim A:
M = 6.68, SD = 2.06; Claim B: M = 7.05, SD = 1.93) and fairly credible (Claim
A: M = 5.74, SD = 2.08; Claim B: M = 5.26, SD = 1.93). Claims did not differ
significantly in comprehensibility, t(18) = -1.07, r = .24, or credibility,
t(18) = 0.78, r = .18. Because three participants were unsure about the contradictoriness, we modified some wording in the claims in order to stress the conflicting
positions (the complete scenario is presented in Appendix A in ESM).

Manipulation of source information

The claims were described as having been retrieved from specific websites hosted
by different expert sources. Drawing on previous procedures (Thomm et al., 2015),
we varied information about the sources systematically. This resulted in three
different experimental conditions (henceforth referred to as the source condition):
Consistently across the conditions, Source A was described as a professor working
at a public university. Depending on the source condition, the competing Source B
was presented as being either

• a researcher in industry (benevolence condition),


• a junior researcher (expertise condition), or
• another university professor working at a different university (control condition).

The source information was reported in the introduction and repeated directly
before presenting each claim. The corresponding claims will be labeled Claim A
and Claim B. The order of source information and claims was randomized within
the scenario in order to operationalize all possible combinations. Source condition
was operationalized as a between-subjects factor. Participants were randomly
assigned to one of the three source conditions.

Dependent measures

Explaining Conflicting Science Claims Questionnaire

Conflict explanation was measured with the Explaining Conflicting Science Claims
Questionnaire (ECSC; Thomm et al., 2015). The ECSC is a scenario-based,
standardized measurement of conflict explanation. Participants read the conflict
scenario and subsequently assessed how far they agreed with different explanations.
They received 23 statements, each presenting a different explanation, and judged how far they perceived each explanation to be a relevant reason for the specific conflict on
a 6-point scale ranging from 1 (very much disagree) to 6 (very much agree). The
ECSC scales measure four distinct conflict explanations: (a) differences in
researchers’ personal motivations, (b) differences in researchers’ competence, (c) differences in research methods, and (d) differences in thematic complexity. Internal consistency of the ECSC scales in the current scenario ranged from α = .72 to .89.
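The internal consistencies reported above are Cronbach's alpha coefficients. As a reminder of how such a coefficient is computed from a matrix of item ratings, here is a minimal sketch (the two-item rating matrix is invented for illustration only):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) rating matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of respondents' sum scores
    return k / (k - 1) * (1 - item_variances / total_variance)

# Perfectly consistent items yield alpha = 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```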

Source credibility

After reading the scenario and answering the ECSC, participants were presented
with each source again together with the claim it stated. Participants were then
instructed to assess the source’s credibility. Source credibility was measured for
each claim separately on a 6-point scale ranging from 1 (not at all credible) to 6
(very credible).

Personal claim agreement

Directly after asking for credibility judgments, participants indicated their personal
agreement with each claim on a 6-point scale ranging from 1 (I don’t agree at all) to
6 (I very much agree).

Source memorization

Participants were asked to mark the expert source that had made the claim for both
claims separately. Thus, for each claim, they were presented with all three kinds of
expert sources (university professor, researcher in industry, university junior researcher), and they indicated which of these three options stated the specific claim. Furthermore, they could select a fourth option indicating that they did not remember the source (“I don’t know”).

Procedure

Participants first reported some demographic data (age, gender, major study subject,
number of completed semesters) and assessed their own prior topic-specific
knowledge and interest in the conflict topic. Afterwards, they were instructed to read
the scenario carefully. Subsequently, they completed the ECSC questionnaire
indicating their subjective assessments of which explanations might be relevant.
After finishing the questionnaire, participants were prompted to evaluate the
conflicting claims by assessing the source credibility and by indicating their
personal agreement with each claim.1 Finally, we tested source memorization.
While working on the statements, participants were able to reread the scenario by
opening a pop-up window.

¹ Further variables were measured but not reported here for reasons of space. Nevertheless, to complete
the given information, we shall list excluded variables: Participants assessed the ECSC from an assumed
expert perspective and indicated whether they would feel confident about deciding on the claims’ veracity
themselves or would need further expert advice to do so. These measures are not reported elsewhere and
were assessed mostly after the reported ones; therefore we do not expect confounding effects.


The study was administered online using “EFS Survey” by QuestBack for online
polls. As a reward for their participation, students could participate in a lottery and
win vouchers for a well-known online store.

Analyses

We used Chi square tests to assess whether there was a random frequency
distribution of source memorization between the source conditions. We computed
analyses of variance to investigate the impact of source condition on measures of
claim evaluation and conflict explanation using η²p (partial eta squared) as a measure of effect size. To
follow up meaningful effects, we computed protected multiple t tests for dependent
and independent samples. Following Field’s (2009) recommendations, we reported
r as a measure of effect size for t tests: Effect sizes above .1 were interpreted as
small effects; those above .3, as medium effects; and those above .5, as large effects.
Following procedures recommended by Hayes and Preacher (2014), we conducted a
mediation analysis to examine whether source condition mediated possible effects
between conflict explanation and claim evaluation measures.
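The effect-size convention cited from Field (2009) rests on the conversion r = √(t² / (t² + df)). As a brief sketch (not part of the original analyses), the conversion reproduces effect sizes reported in the Results:

```python
# Sketch of the effect-size conversion recommended by Field (2009):
# r = sqrt(t^2 / (t^2 + df)). The example values reproduce two effect
# sizes reported in the Results.
import math

def t_to_r(t, df):
    """Convert a t value and its degrees of freedom to the effect size r."""
    return math.sqrt(t * t / (t * t + df))

print(round(t_to_r(2.13, 87), 2))  # 0.22 (expertise vs. benevolence condition)
print(round(t_to_r(3.88, 36), 2))  # 0.54 (Source A vs. Source B, benevolence condition)
```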

Results

Research Goal 1: Participants’ source memorization

First, we analyzed individual source memorization. Source memorization was coded dichotomously, indicating either correct or false memorization of source information. Correct source memorization was coded only when participants marked the correct source option for both of the two claims. Table 1 summarizes the frequencies and percentages.
Eighty-eight percent of the participants assigned to the condition presenting experts with different levels of expertise recalled the pair of sources accurately, and significantly more participants remembered the sources correctly than recalled them falsely, χ²(1) = 30.77, p < .001. Similarly, 81 % of participants assigned to the condition presenting experts with different benevolent interests marked the correct source information. The frequency of participants remembering the sources correctly again outnumbered the frequency of participants who did not, χ²(1) = 14.3, p < .001. In contrast, only 50 % of participants assigned to the control condition, who hence believed the conflict to exist between equally expert and credible sources, chose the accurate source information. There was a significant association between source condition and whether participants memorized the presented sources correctly, χ²(2) = 20.78, p < .001. In sum, participants seemed to remember source information especially when provided with discrepant information about source expertise or source benevolence.

Table 1 Frequencies and percentages of source memorization

Source condition          Source memorization
                          Correct         False           Total
                          n       %       n       %       n       %
Benevolence condition     30      81.1    7       18.9    37      26.6
Expertise condition       46      88.5    6       11.5    52      37.4
Control condition         25      50.0    25      50.0    50      36.0
Total                     101     72.7    38      27.3    139     100
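The chi-square statistics follow directly from the Table 1 frequencies. As a sketch (not the authors' code), they can be recomputed in plain Python:

```python
# Sketch: recomputing the chi-square tests for Research Goal 1 from the
# correct/false source-memorization frequencies in Table 1.

def chi2_goodness_of_fit(observed):
    """Chi-square statistic against a uniform (chance) distribution."""
    expected = sum(observed) / len(observed)
    return sum((o - expected) ** 2 / expected for o in observed)

def chi2_association(table):
    """Chi-square statistic of association for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    return sum(
        (table[i][j] - row_totals[i] * col_totals[j] / n) ** 2
        / (row_totals[i] * col_totals[j] / n)
        for i in range(len(table))
        for j in range(len(table[0]))
    )

# correct vs. false source memorization per condition (Table 1)
benevolence, expertise, control = [30, 7], [46, 6], [25, 25]

print(round(chi2_goodness_of_fit(expertise), 2))    # 30.77, df = 1
print(round(chi2_goodness_of_fit(benevolence), 2))  # 14.3, df = 1
print(round(chi2_association([benevolence, expertise, control]), 2))  # 20.78, df = 2
```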

Research Goal 2: Impact of source information on source credibility and personal claim agreement

Source credibility

We examined the impact of source information on credibility judgments by comparing participants' credibility judgments of Source A versus Source B. Table 2 summarizes the descriptive statistics.
We computed a mixed ANOVA with the within-subjects factor source credibility (Source A vs. Source B) and the between-subjects factor source condition (benevolence condition vs. expertise condition vs. control condition). Results confirmed a main effect of source credibility, F(1,136) = 7.55, p = .007, η²p = .05, and a main effect of source condition, F(2,136) = 3.55, p = .031, η²p = .05. Furthermore, there was a significant interaction between source credibility and source condition, F(2,136) = 5.28, p = .006, η²p = .07.
An inspection of the effect of source credibility showed that participants judged
Source A (university professor; estimated marginal mean = 4, SE = 0.09) to be
more credible than Source B (varied source; estimated marginal mean = 3.67,
SE = 0.09). Inspection of the effect of source condition revealed that, on average,
credibility judgments were slightly higher in the expertise condition than in the
benevolence condition, t(87) = 2.13, p = .036, r = .22, and the control condition,
t(100) = 2.41, p = .018, r = .23. The latter two did not differ significantly,
t(85) = 0.35, p = .727.
However, the main effects have to be interpreted in light of the interaction term.
Therefore we first scrutinized differences due to source information within each
condition.

Table 2 Means and standard deviations of source credibility and personal claim agreement

Source condition          Credibility of sources        Personal agreement with claims
                          Source A       Source B       Claim A        Claim B
                          M (SD)         M (SD)         M (SD)         M (SD)
Benevolence condition     4.22 (1.00)    3.30 (0.91)    3.76 (1.14)    3.16 (1.04)
Expertise condition       4.08 (0.93)    4.04 (0.97)    3.60 (0.87)    3.60 (1.01)
Control condition         3.72 (1.11)    3.68 (1.10)    3.60 (1.11)    3.36 (1.03)

In each condition, the source of Claim A (Source A) was presented as a credible source, whereas the source of Claim B (Source B) varied in its credibility

In the benevolence condition, participants judged Source B to be significantly less credible than Source A, t(36) = 3.88, p < .001, r = .54. In contrast,
there was no difference between the two sources in the expertise condition, t(51) = 0.21,
p = .835, and in the control condition, t(49) = 0.19, p = .848. Thus, it was
specifically information about possible vested interests that influenced the judgment of
source credibility when comparing the sources directly within a controversy.
To complement the explanation of the interaction, we also compared the credibility judgments of Source A and of Source B between conditions. A one-way ANOVA with the between-subjects factor source condition showed only a marginal difference in credibility judgments of Source A between the three conditions, F(2,136) = 2.88, p = .06, η²p = .04. Only participants in the benevolence condition judged this source's credibility slightly higher than participants in the control condition, t(85) = 2.15, p = .034, r = .23. All other comparisons indicated no differences.
However, the analysis confirmed an effect of source condition on participants' credibility judgments of Source B, F(2,136) = 5.96, p = .003, η²p = .08. Multiple t tests for independent samples showed that participants perceived the researcher working in industry to be less credible than the junior university researcher, t(87) = 3.65, p < .001, r = .36. However, there was only a marginal difference in credibility judgments between the university professor and the junior researcher, t(100) = 1.75, p = .083, r = .17, and between the university professor and the researcher from industry, t(85) = 1.73, p = .088, r = .17.

Personal claim agreement

A mixed ANOVA with the within-subjects factor personal agreement (Claim A vs.
Claim B) and the between-subjects factor source condition (benevolence condition
vs. expertise condition vs. control condition) showed neither a main effect of source
condition, F(2,136) = 0.72, p = .488, nor a significant interaction term,
F(2,136) = 1.36, p = .26. The main effect of personal agreement just failed to
attain significance, F(1,136) = 3.74, p = .055, η²p = .03. A t test for dependent
samples did not confirm a meaningful difference between agreement with Claim A
and Claim B, t(138) = 1.72, p = .088.

Research Goal 3: Impact of source information on subjective conflict explanation

Table 3 reports the means and standard deviations of participants' personal assessments of conflict explanations.
We used multiple one-way ANOVAs to test the effect of source condition (benevolence condition vs. expertise condition vs. control condition) on participants' scores on each explanation scale. There was no significant effect of source condition on participants' ratings of complexity explanations, F(2,136) = 1.12, p = .329, or of methods explanations, F(2,136) = 1.74, p = .18. Source condition affected participants' ratings of motivation explanations significantly, F(2,136) = 5.96, p = .003, η²p = .08, and of competence explanations marginally, F(2,136) = 2.99, p = .054, η²p = .04.


Table 3 Means and standard deviations of subjective conflict explanation

Explanation        Source condition          Subjective assessment of conflict explanation
                                             M (SD)
Research process   Benevolence condition     3.86 (0.93)
                   Expertise condition       3.96 (0.88)
                   Control condition         4.18 (0.73)
Complexity         Benevolence condition     4.03 (0.92)
                   Expertise condition       4.00 (0.88)
                   Control condition         4.24 (0.80)
Motivations        Benevolence condition     3.89 (1.12)
                   Expertise condition       3.44 (1.05)
                   Control condition         3.12 (0.94)
Competence         Benevolence condition     2.43 (0.65)
                   Expertise condition       2.65 (0.66)
                   Control condition         2.35 (0.65)

Results of multiple t tests for independent samples indicated that participants who assumed the conflict to be between a university researcher and a researcher in industry agreed more strongly with personal motivation explanations than participants assigned to the control condition, t(85) = 3.48, p < .001, r = .35. Participants in the benevolence and expertise conditions differed marginally in their endorsements of this explanation, t(87) = 1.96, p = .054, r = .21, whereas participants in the expertise and control conditions did not differ significantly, t(100) = 1.61, p = .111.
Source condition also affected participants' endorsement of competence explanations: Participants who were told the conflict was between a junior and a senior university researcher endorsed competence explanations more strongly than participants who were told the conflict was between two university professors, t(100) = 2.36, p = .020, r = .23. However, there were no differences in explanation preferences between participants in the benevolence and control conditions, t(85) = 0.58, p = .567, or between participants in the benevolence and expertise conditions, t(87) = 1.60, p = .113.

Research Goal 4: Interplay of conflict explanation, claim evaluation, and source information

We tested specifically whether the effect of source information on claim evaluation was mediated by participants' conflict explanations. As expected, prior analyses confirmed the effect of source information on source credibility as well as on the researcher-related explanations (motivations and competence). Because a systematic relationship between the independent variable and the mediator is a necessary requirement for mediation (Baron & Kenny, 1986), we considered only the researcher-related explanations as possible mediators.


Mediation with the multicategorical independent variable source condition was conducted in line with Hayes and Preacher's (2014) recommendations. We recoded the independent variable into k - 1 dummy variables and entered the dummy variable of interest into the mediation model while listing the other dummy variables as covariates. In the current mediation analyses, the independent variable source condition was recoded into the dummy-coded expertise condition and the dummy-coded benevolence condition, with the control condition being the reference group. Because the effect of source information became particularly salient in participants' assessments of Source B, the analysis focused on predicting this source's evaluation. This procedure allowed us to capture the influence across the conditions.
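The dummy-coded mediation logic can be sketched as two regressions, with the relative indirect effect of a dummy given by the product of its a path and the b path. The data below are simulated and the coefficients hypothetical (chosen only to echo the reported pattern); this is an illustration of the Hayes-Preacher setup, not the authors' code, and PROCESS additionally derives a bias-corrected confidence interval by bootstrapping.

```python
# Illustrative sketch of mediation with a multicategorical predictor:
# three source conditions recoded into k - 1 = 2 dummies (control =
# reference group). Data are simulated; coefficients are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 3000
condition = rng.integers(0, 3, size=n)   # 0 control, 1 expertise, 2 benevolence
d_exp = (condition == 1).astype(float)   # dummy 1: expertise condition
d_ben = (condition == 2).astype(float)   # dummy 2: benevolence condition

# Mediator model (a paths): competence explanation ~ dummies
m = 2.4 + 0.32 * d_exp + 0.08 * d_ben + rng.normal(0, 0.3, n)
# Outcome model (c' and b paths): credibility of Source B ~ dummies + mediator
y = 4.3 + 0.45 * d_exp - 0.05 * d_ben - 0.31 * m + rng.normal(0, 0.3, n)

X_m = np.column_stack([np.ones(n), d_exp, d_ben])
a = np.linalg.lstsq(X_m, m, rcond=None)[0]   # a[1]: a path of expertise dummy
X_y = np.column_stack([np.ones(n), d_exp, d_ben, m])
c = np.linalg.lstsq(X_y, y, rcond=None)[0]   # c[3]: b path of the mediator

# Relative indirect effect of the expertise dummy (vs. control): a1 * b
indirect_expertise = a[1] * c[3]
```

With these hypothetical coefficients, `indirect_expertise` lands near .32 × (−.31) ≈ −.10; the relative indirect effect of the benevolence dummy would analogously be `a[2] * c[3]`.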

Testing the mediating effect of competence explanation

We examined whether the endorsement of competence explanations mediated the effect of source expertise on credibility judgments. In other words, we expected participants in the expertise condition to attribute the conflict more strongly to competence explanations (relative to participants in the control condition), and that this, in turn, would also affect their judgments of source credibility. Using Hayes's (2013) PROCESS Model 4, we entered the dummy-coded expertise condition as the independent variable while listing the dummy-coded benevolence condition as a covariate. Participants in the expertise condition agreed more strongly with competence explanations than participants in the control condition, b = .32, p = .02, and the endorsement of competence explanations influenced credibility judgments of Source B, b = -.31, p = .02. This finding indicates that when the source was a junior researcher (relative to a university professor, control condition), participants more strongly endorsed competence explanations, which were associated with lower judgments of source credibility (see Fig. 1). The bias-corrected confidence intervals for the indirect effect did not include zero, b = -.10, 95 % CI [-.27, -.01], confirming its significance.
Nonetheless, the total, direct, and indirect effects together indicated an inconsistent mediation (MacKinnon, Fairchild, & Fritz, 2007): The total effect just failed to attain significance, b = .36, p = .07, 95 % CI [-.03, .75], whereas the direct effect was significant and larger in size, b = .45, p = .02, 95 % CI [.06, .85]. Whereas the manipulation of source expertise seemed to evoke higher credibility judgments relative to the control condition, participants also agreed more strongly with competence explanations that, in turn, were associated negatively with source credibility. Consequently, the indirect effect had the opposite sign to the direct effect, and the mediator acted as a suppressor variable. In this case, the total effect can be smaller than the direct effect and fail to reach statistical significance even though the direct effect does (MacKinnon et al., 2007). In contrast, participants in the benevolence and control conditions did not differ in their endorsement of competence explanations, b = .08, p = .57. As expected, an indirect effect through competence explanations did not occur in the benevolence condition, b = -.06, 95 % CI [-.15, .07].

Fig. 1 Mediation with competence explanations as mediator, revealing an indirect effect = -.10, 95 % CI [-.27, -.01]. Note: The pattern of effects indicates an inconsistent mediation (MacKinnon et al., 2007)
We likewise tested whether competence explanations mediated the effect of source information on personal claim agreement. Participants in the expertise condition might endorse competence explanations more strongly (relative to participants in the control condition), and this might lead them to lower their personal agreement with Claim B. Again, we expected an inconsistent mediation, meaning that in the absence of a total effect, a mediating effect is still possible (see above). Applying Hayes's (2013) PROCESS Model 4, we entered the dummy-coded expertise condition as the independent variable while listing the dummy-coded benevolence condition as a covariate. However, results confirmed neither a significant direct effect, b = .33, p = .11, 95 % CI [-.08, .73], nor an indirect effect, b = -.09, p = .11, 95 % CI [-.24, .00].

Testing the mediating effect of motivation explanation

Following our assumptions, we also expected that the effects of source benevolence on credibility judgments and on personal claim agreement might be mediated by motivation explanations. However, a prior ANOVA did not indicate that participants in the benevolence condition differed from controls in their assessments of Claim B. This finding already speaks against mediation. An inconsistent mediation was also unlikely, because both the direct and the indirect effect would be expected to take the same sign: Analyses revealed that participants in the benevolence condition judged credibility to be lower, which would imply a negative direct effect on credibility judgments. Because source benevolence was assumed to positively predict endorsement of motivation explanations, and this, in turn, should be associated negatively with the credibility of Claim B, the indirect effect would also have a negative sign. The same applies to the analysis of mediating effects on personal claim agreement. Therefore, it can be assumed that there is no mediation effect of motivation explanations in the benevolence condition.

Discussion

Summary and interpretation of results

Our results deliver a range of information on how readers consider source information when handling conflicting scientific claims. The examination of source memorization revealed differences in readers' representations. Especially when a source's expertise or benevolence was challenged, readers seemed to remember the available source. When discrepant information on the sources' expertise or
benevolence was made explicit, more than three-quarters of participants recalled
the sources correctly. In contrast, participants who were presented with competing sources that nonetheless shared the same features regarding expertise or benevolence recalled the sources accurately only at chance level. This result provides an interesting link between the account elaborated by Sperber et al. (2010), arguing that individuals are vigilant regarding sources' expertise and intentions, and the discrepancy-induced source comprehension effect (Braasch et al., 2012).
Readers’ sensitivity to source benevolence was also reflected in the evaluation of
the source credibility between the conditions as well as within the specific condition
when discrepant source information on benevolence was made available. On
average, participants in this condition conceived Source B, a researcher in industry,
to be less credible than participants in the other conditions did. Moreover, they
perceived the sources’ credibility within the condition differently. Thus, Source A, a
university professor, was conceived to be more credible when the contrasting
Source B was a researcher working for an industrial company. In contrast,
discrepant information about source expertise evoked differences in perceived
source credibility only between the source conditions: On average, participants in
the expertise condition conceived the credibility of both sources to be higher than
participants in the other conditions. Even though Source A, a university professor,
and Source B, a junior researcher at a university, possessed different levels of
professional experience, both might have been assumed to be expert and therefore
similarly credible. Yet, although participants did not perceive the credibility of the
sources differently within this condition, information about the sources’ expertise
nevertheless seemed to increase awareness of the critical source feature ‘‘expertise’’
compared to the other source conditions. This interpretation offers an explanation
for the surprising effect across the conditions—that is, why a junior researcher
(Source B, expertise condition) was perceived to be more credible when compared
to a university professor (Source B, control condition).
The salience of information about source expertise and source benevolence is
also reflected in readers’ endorsement of the conflict explanations. As expected,
information suggesting differences in expertise led readers to emphasize compe-
tence explanations (compared to the control condition), whereas information
suggesting possible differences in the researchers’ intentions led them to attribute
the conflict more strongly to motivational reasons (compared to the control
condition). These results corroborate prior findings (cf. Thomm et al., 2015).
However, results did not confirm an effect of source information on participants’
personal claim agreement. Previous findings suggest that readers differentiate what
they consider to be generally true and credible from what they would personally
accept, and this distinction pertains especially to science-based knowledge claims
(Scharrer, Britt, Stadtler & Bromme, 2013; Thomm & Bromme, 2012). Thus, it is
possible that even though participants acknowledge the source information, they
may not necessarily accept the source’s claims. Being confronted with conflicting
information may contribute all the more to their hesitance to agree with a specific


claim. As mentioned above, for experimental reasons, participants received only a limited amount of information about sources and claims. This might also have made participants more hesitant or cautious when making their agreement judgments.
Finally, we examined whether conflict explanations by competence and
motivations had mediating effects on the relationship between source information
and perceived source credibility. Focusing on predicting the credibility of Source B,
analyses confirmed a mediation by competence explanations. Participants in the
expertise condition indeed attributed the conflict more strongly to competence
differences between the sources (relative to participants in the control condition),
and this, in turn, led to lower credibility judgments of Source B (here: junior
researcher). Consequently, readers who were told that the conflict was between a
university professor and a junior researcher explained the conflict more strongly
with competence reasons, and this led them to lower their credibility perception of
the source. In contrast, mediation by motivation explanations could not be
confirmed, even though findings revealed direct effects of source information on
both source credibility and motivation explanations. Although participants in the
benevolence condition agreed more strongly with motivation explanations and
indicated lower credibility judgments compared with the control condition,
motivation explanations did not mediate the relationship between source informa-
tion and assessment of source credibility. Hence, readers who thought the conflict to
be between a university researcher and a researcher working in industry considered
the source to be less credible and also explained the contradiction more strongly
through motivation reasons. However, this higher endorsement did not further impact the perceived credibility of the source. It can be concluded that source information challenging the benevolence of the sources affects readers' evaluations so strongly that it "works" (affects the credibility assessment) directly, because it provides an immediate explanation of why the sources disagree and therefore makes it less necessary to ponder this feature as a possible cause of doubts about credibility.

Sourcing sensitively when explaining and handling conflicting science information

The current study provides insights into readers’ sourcing and how they apply it
when explaining and evaluating conflicting scientific claims. Overall, the results
suggest that whereas prior research has generally shown that inconsistent
information increases readers’ sourcing activity (cf. Braasch et al., 2012; Stadtler
& Bromme, 2014), the sources (i.e., their credibility-related features) providing the
information matter as well. Thus, both the reception of conflicting scientific claims
and the presentation of sources that differ in critical characteristics interact and may
lead to increased sourcing activities.
Prior research on sourcing has indicated the need to consider the processes
mediating between a reader’s attention to source information and her or his final
claim evaluation. It is necessary to analyze how source-related information is used
when interpreting and evaluating content information (e.g., Barzilai et al., 2015;
Bromme et al., 2010). Tying in with this line of research, we examined whether source information is used in conflict explanation and may, in turn, affect readers'
evaluation of the source and their agreement with the respective claim. The detected
mediation effect provides an indication of the interplay between explanation and
source credibility. Thus, readers considered source expertise and showed an
increased attribution to competence reasons that affected source credibility.
Consequently, under the circumstances modeled in this experimental condition
(two competing sources of different expertise), readers apply their source
knowledge to evaluate knowledge claims provided by these sources. Thus, under
specific conditions, conflict explanation does indeed affect individuals’ decisions on
whom to believe.
Although the size of the mediating effect of researcher-related explanations on
the interplay between discrepant source features and credibility assessment is small
to moderate, it is still remarkable, because, on average, our participants also
strongly endorsed explanations referring to the research processes and the thematic
complexity of the research topic. They held these explanations to be true, independent of further information about the specific sources. Findings from
research on individuals’ epistemic beliefs may cast light on this pattern of results. In
contrast to knowledge in the humanities (e.g., topics in history), individuals often
tend to perceive knowledge about science topics as being objective, based on
evidence, and less prone to researchers’ interpretation and opinions (e.g., Buehl &
Alexander, 2006; Hofer, 2000; Kuhn, Cheney, & Weinstock, 2000; Limón, 2006).
Such a perspective may affect individuals’ conflict explanations and may
specifically suppress the consideration of source features within conflict explanations. Analyses did not confirm differences across conditions in agreement with these research-process-related explanations, although participants endorsed them strongly. However, analyses affirmed differences through source
information in the researcher-related explanations. Thus, participants seemed to
consider aspects of the research process and the thematic complexity, but they also
appeared to be sensitive to social facets of scientific knowledge construction, and
hence the person of the researcher.
This study presents a first step in examining the relationship between the
sourcing, explaining, and evaluating of scientific information. Further research
could not only contribute to our understanding of sourcing and the explanation of
expert disagreement, but also enhance the generalizability of the present results.
Understanding appropriately how and why scientists arrive at different claims may
strengthen readers’ sourcing capacities through increasing their awareness of the
role of sources. This entails not only assumptions about the nature of scientific
knowledge, its generation, and justification, but also assumptions about the related
social practices and the part experts play in knowledge construction. Specifically the
latter may provide the means for readers to evaluate multiple competing sources and
hence to decide which one to believe (besides what to believe). This makes it
relevant to foster, early on, school students' ideas about the active role of experts in knowledge construction and interpretation, as well as to draw their attention to the influence of scientists' views and motivations.


Limitations and future directions in research

One could argue that when examining the impact of sourcing, it would have been
consistent to analyze only data from participants who recalled the source
information correctly. However, because source memorization was measured at
the end of the questioning, we could not exclude the possibility that participants
referred to the correct source information while completing the questionnaire.
Therefore, we analyzed the complete sample.
With regard to the mediating effect of explanations, we have already pointed out
above that some effects are rather small. Furthermore, although we uncovered
effects of source information, the effect sizes or the proportion of explained
variance were frequently moderate. On the one hand, this might be due to the
specific social situation we modeled here: It asked for the assessment of experts
from a layperson perspective. Compared to the status of our participants (who are
laypersons on climate research), all presented sources might have been perceived as
expert, and this may well have had an impact on the assessments. Furthermore, to
disentangle the influence of source information, we assigned great importance to
operationalizing source information as a between-subjects factor. This, in turn, may
have led to an increase in the error variance between participants. However, against
this background, it would seem to be all the more remarkable that we identified
effects of a reasonable magnitude. It might be interesting for future research to also
contrast different source conditions within one person directly. When searching
online for science-based information, readers usually come across diverse kinds of
sources. Therefore, direct comparisons of diverse sources would seem to be a
realistic approach that may therefore also help to extend the applicability of our
results.
Similarly it would be valuable to extend research by investigating a range of
topics to further test the generalizability of our results. Using different topics from
not only the same but also other knowledge domains would help to identify
similarities and differences in the interplay of sourcing and the evaluation of
conflicting expert claims. Depending on the topic at stake, laypeople may
understand the role of experts in science knowledge construction differently. As a
result, they may well evaluate sources differently when reasoning about expert
controversies. In the current study, we used a different topic in climate change
research to the ones used before. The current effect of sourcing on conflict
explanation corroborates previous findings on science topics. This may strengthen
the assumption of a specific relationship between sourcing and conflict explanation
endorsement in the natural sciences.
Furthermore, the study was undertaken in a sample of participants who were
enrolled in higher education and therefore well-educated. These participants might
represent the social group of recipients who turn to the Internet to retrieve
information about science in order to inform their decisions (Fox, 2005). Although
they indicated possessing only low prior knowledge on the topic, they might well
have had some understanding of scientific knowledge construction and of the
different kinds of experts to be found in the Internet. Both could have influenced
their judgments. However, when considering previous research (see above), it cannot be taken for granted that they were indeed attentive to source information.
Nonetheless, it would be worth investigating further samples to extend the present
findings. For instance, it would be interesting to compare the present data with a dataset from school students in order to address the development of individuals' use of source information in conflict explanations.
Finally, although we could confirm the impact of source information on
credibility judgments, we were unable to establish a similar effect on participants’
personal claim agreement. On the one hand, this finding might be explained
practically by the study material. Participants expressed only low interest in the
specific topic. Thus, although the broader issue of climate change might concern the
public, this specific topic might not have been perceived to be very relevant for
participants’ immediate daily lives. On the other hand, this finding seems reasonable
in that individuals’ claim agreement is not necessarily determined by their
credibility judgments (Scharrer et al., 2013; Thomm & Bromme, 2012). Readers
may use criteria to evaluate the credibility of scientific claims and sources (e.g.,
consider cues of adherence to good scientific practice) that they would not use to
assess their personal agreement (e.g., follow entrenched beliefs). However, this
leads us to ask which factors ultimately influence readers’ personal decision making.
Not least for that reason, it would also be interesting to continue investigating
factors that may also affect the interplay between sourcing, conflict explanation, and
evaluation. For example, it may well be worth further investigating the role of
sourcing when participants need to infer source information.

Acknowledgments This research was supported by the Deutsche Forschungsgemeinschaft (DFG), Grant BR 1126/6-2. We would like to thank Fritz Klinkemeyer and Teresa Bartsch for their support in data gathering, and Jonathan Harrow for advice in language editing.

References
Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social
psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality
and Social Psychology, 51(6), 1173–1182. doi:10.1037/0022-3514.51.6.1173.
Barzilai, S., & Eshet-Alkalai, Y. (2015). The role of epistemic perspectives in comprehension of multiple
author viewpoints. Learning and Instruction, 36, 86–103. doi:10.1016/j.learninstruc.2014.12.003.
Barzilai, S., Tzadok, E., & Eshet-Alkalai, Y. (2015). Sourcing while reading divergent expert accounts:
Pathways from views of knowing to written argumentation. Instructional Science, 43(6), 737–766.
doi:10.1007/s11251-015-9359-4.
Besley, J. (2014). Science and technology: Public attitudes and understanding. In National Science Board
(Ed.), Science and engineering indicators 2014 (pp. 1–53). Arlington, VA: National Science
Foundation (NSB 14-01).
Braasch, J. L., Rouet, J. F., Vibert, N., & Britt, M. A. (2012). Readers’ use of source information in text
comprehension. Memory & Cognition, 40(3), 450–465. doi:10.3758/s13421-011-0160-6.
Brand-Gruwel, S., & Stadtler, M. (2011). Solving information-based problems: Evaluating sources and
information. Learning and Instruction, 21, 175–179. doi:10.1016/j.learninstruc.2010.02.008.
Bråten, I., Strømsø, H. I., & Britt, M. A. (2009). Trust matters: Examining the role of source evaluation in
students’ construction of meaning within and across multiple texts. Reading Research Quarterly,
44(1), 6–28. doi:10.1598/RRQ.44.1.1.
Bråten, I., Strømsø, H. I., & Salmerón, L. (2011). Trust and mistrust when students read multiple
information sources about climate change. Learning and Instruction, 21, 180–192. doi:10.1016/j.
learninstruc.2010.02.002.
Britt, M. A., & Aglinskas, C. (2002). Improving students’ ability to identify and use source information.
Cognition and Instruction, 20, 485–522. doi:10.1207/S1532690XCI2004_2.
Britt, M. A., & Rouet, J.-F. (2012). Learning with multiple documents: Component skills and their
acquisition. In J. R. Kirby & M. J. Lawson (Eds.), Enhancing the quality of learning: Dispositions,
instruction, and learning processes (pp. 276–314). New York, NY: Cambridge University Press.
Bromme, R., & Goldman, S. (2014). The public’s bounded understanding of science. Educational
Psychologist, 49(2), 59–69. doi:10.1080/00461520.2014.921572.
Bromme, R., Kienhues, D., & Porsch, T. (2010). Who knows what and who can we believe?
Epistemological beliefs are beliefs about knowledge (mostly) attained from others. In L.
D. Bendixen & F. C. Feucht (Eds.), Personal epistemology in the classroom: Theory, research,
and implications for practice (pp. 163–193). Cambridge: Cambridge University Press. doi:10.1017/
CBO9780511691904.006.
Bromme, R., & Thomm, E. (2016). Knowing who knows: Laypersons’ capabilities to judge experts’
pertinence for science topics. Cognitive Science, 40, 241–252. doi:10.1111/cogs.12252.
Bromme, R., Thomm, E., & Wolf, V. (2015). From understanding to deference: Laypersons’ and medical
students’ views on conflicts within medicine. International Journal of Science Education, Part B:
Communication and Public Engagement, 5(1), 68–91. doi:10.1080/21548455.2013.849017.
Buehl, M. M., & Alexander, P. A. (2006). Examining the dual nature of epistemological beliefs.
International Journal of Educational Research, 45, 28–42. doi:10.1016/j.ijer.2006.08.007.
Castell, S., Charlton, A., Clemence, M., Pettigrew, N., Pope, S., Quigley, A., et al. (2014). Public
attitudes to science 2014. London: Ipsos Mori. Retrieved from https://www.ipsos-mori.com/Assets/
Docs/Polls/pas-2014-main-report.pdf.
Chen, S., & Chaiken, S. (1999). The Heuristic-Systematic Model in its broader context. In S. Chaiken &
Y. Trope (Eds.), Dual-process theories in social psychology (pp. 73–96). New York, NY: Guilford.
Critchley, C. R. (2008). Public opinion and trust in scientists: The role of the research context, and the
perceived motivation of stem cell researchers. Public Understanding of Science, 17(3), 309–327.
doi:10.1177/0963662506070162.
Cummings, L. (2014). The "trust" heuristic: Arguments from authority in public health. Health
Communication, 29(10), 1043–1056. doi:10.1080/10410236.2013.831685.
Field, A. P. (2009). Discovering statistics using SPSS: And sex and drugs and rock ‘n’ roll (3rd ed.).
London: Sage.
Flanagin, A. J., & Metzger, M. J. (2007). The role of site features, user attributes, and information
verification behaviors on the perceived credibility of web-based information. New Media & Society,
9(2), 319–342. doi:10.1177/1461444807075015.
Fox, S. (2005). Health information online. Washington, DC: Pew Internet & American Life Project.
Goldman, S. R., Braasch, J. L., Wiley, J., Graesser, A. C., & Brodowinska, K. (2012). Comprehending
and learning from Internet sources: Processing patterns of better and poorer learners. Reading
Research Quarterly, 47, 356–381. doi:10.1002/RRQ.027.
Goldman, S. R., & Scardamalia, M. (2013). Managing, understanding, applying, and creating knowledge
in the information age: Next-generation challenges and opportunities. Cognition and Instruction,
31(2), 255–269. doi:10.1080/07370008.2013.773217.
Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A
regression-based approach. New York, NY: Guilford Press.
Hayes, A. F., & Preacher, K. J. (2014). Statistical mediation analysis with a multicategorical independent
variable. British Journal of Mathematical and Statistical Psychology, 67(3), 451–470. doi:10.1111/
bmsp.12028.
Hendriks, F., Kienhues, D., & Bromme, R. (2015). Measuring laypeople's trust in experts in a digital age:
The Muenster Epistemic Trustworthiness Inventory (METI). PLoS ONE, 10(10), e0139309.
doi:10.1371/journal.pone.0139309.
Hofer, B. K. (2000). Dimensionality and disciplinary differences in personal epistemology. Contempo-
rary Educational Psychology, 25, 378–405. doi:10.1006/ceps.1999.1026.
Hovland, C. I., & Weiss, W. (1951). The influence of source credibility on communication effectiveness.
Public Opinion Quarterly, 15(4), 635–650. doi:10.1086/266350.
Kajanne, A., & Pirttilä-Backman, A. M. (1999). Laypeople’s viewpoints about the reasons for expert
controversy regarding food additives. Public Understanding of Science, 8, 303–315. doi:10.1088/
0963-6625/8/4/303.
Kammerer, Y., Bråten, I., Gerjets, P., & Strømsø, H. I. (2013). The role of Internet-specific epistemic
beliefs in laypersons’ source evaluations and decisions during Web search on a medical issue.
Computers in Human Behavior, 29(3), 1193–1203. doi:10.1016/j.chb.2012.10.012.
Kammerer, Y., & Gerjets, P. (2012). Effects of search interface and internet-specific epistemic beliefs on
source evaluations during web search for medical information: An eye-tracking study. Behaviour &
Information Technology, 31(1), 83–97. doi:10.1080/0144929X.2011.599040.
Keck, D., Kammerer, Y., & Starauschek, E. (2015). Reading science texts online: Does source
information influence the identification of contradictions within texts? Computers & Education, 82,
442–449. doi:10.1016/j.compedu.2014.12.005.
Keil, F. C. (2012). Running on empty? How folk science gets by with less. Current Directions in
Psychological Science, 21(5), 329–334. doi:10.1177/0963721412453721.
Kobayashi, K. (2014). Students’ consideration of source information during the reading of multiple texts
and its effect on intertextual conflict resolution. Instructional Science, 42(2), 183–205. doi:10.1007/
s11251-013-9276-3.
Kuhn, D., Cheney, R., & Weinstock, M. (2000). The development of epistemological understanding.
Cognitive Development, 15, 309–328. doi:10.1016/S0885-2014(00)00030-7.
Limón, M. (2006). The domain generality–specificity of epistemological beliefs: A theoretical problem, a
methodological problem or both? International Journal of Educational Research, 45, 7–27. doi:10.
1016/j.ijer.2006.08.002.
Longino, H. E. (2002). The fate of knowledge. Princeton, NJ: Princeton University Press.
MacKinnon, D. P., Fairchild, A. J., & Fritz, M. S. (2007). Mediation analysis. Annual Review of
Psychology, 58, 593–614.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust.
Academy of Management Review, 20, 709–734. doi:10.5465/AMR.1995.9508080335.
Metzger, M. J., & Flanagin, A. J. (2013). Credibility and trust of information in online environments: The
use of cognitive heuristics. Journal of Pragmatics, 59, 210–220. doi:10.1016/j.pragma.2013.07.012.
O’Keefe, D. J. (2002). Persuasion: Theory and research (2nd ed.). Thousand Oaks, CA: Sage.
Perfetti, C. A., Rouet, J.-F., & Britt, M. A. (1999). Toward a theory of documents representation. In H.
van Oostendorp & S. R. Goldman (Eds.), The construction of mental representations during reading
(pp. 99–122). Mahwah, NJ: Lawrence Erlbaum Associates.
Petty, R. E., & Cacioppo, J. T. (1986). Communication and persuasion: Central and peripheral routes to
attitude change. New York, NY: Springer.
Rouet, J.-F., & Britt, M. A. (2011). Relevance processes in multiple document comprehension. In M.
T. McCrudden, J. P. Magliano, & G. Schraw (Eds.), Text relevance and learning from text (pp.
19–52). Greenwich, CT: Information Age Publishing.
Scharrer, L., Britt, M. A., Stadtler, M., & Bromme, R. (2013). Easy to understand but difficult to decide:
Information comprehensibility and controversiality affect laypeople’s science-based decisions.
Discourse Processes, 50, 361–387. doi:10.1080/0163853X.2013.813835.
Scharrer, L., Bromme, R., Britt, M. A., & Stadtler, M. (2012). The seduction of easiness: How science
depictions influence laypeople’s reliance on their own evaluation of scientific information. Learning
and Instruction, 22(3), 231–243. doi:10.1016/j.learninstruc.2011.11.004.
Shafto, P., Eaves, B., Navarro, D. J., & Perfors, A. (2012). Epistemic trust: Modeling children's reasoning
about others' knowledge and intent. Developmental Science, 15(3), 436–447. doi:10.1111/j.1467-
7687.2012.01135.x.
Sparks, J. R., & Rapp, D. N. (2011). Readers' reliance on source credibility in the service of
comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(1), 230–247.
doi:10.1037/a0021331.
Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., & Origgi, G. (2010). Epistemic vigilance.
Mind and Language, 25, 359–393. doi:10.1111/j.1468-0017.2010.01394.x.
Stadtler, M., & Bromme, R. (2014). The content–source integration model: A taxonomic description of
how readers comprehend conflicting scientific information. In D. N. Rapp & J. Braasch (Eds.),
Processing inaccurate information: Theoretical and applied perspectives from cognitive science and
the educational sciences (pp. 379–402). Cambridge, MA: MIT Press.
Strømsø, H. I., Bråten, I., & Britt, M. A. (2010). Reading multiple texts about climate change: The
relationship between memory for sources and text comprehension. Learning and Instruction, 20,
192–204. doi:10.1016/j.learninstruc.2009.02.001.
Strømsø, H. I., Bråten, I., Britt, M. A., & Ferguson, L. E. (2013). Spontaneous sourcing among students
reading multiple documents. Cognition and Instruction, 31(2), 176–203. doi:10.1080/07370008.
2013.769994.
Tabak, I. (2015). Functional scientific literacy: Seeing the science within the words and across the web. In
L. Corno & E. M. Anderman (Eds.), Handbook of educational psychology (3rd ed., pp. 269–280).
London: Routledge.
Thomm, E., & Bromme, R. (2012). "It should at least seem scientific!" Textual features of
"scientificness" and their impact on lay assessments of online information. Science Education,
96(2), 197–211. doi:10.1002/sce.20480.
Thomm, E., Hentschke, J., & Bromme, R. (2015). The explaining conflicting scientific claims (ECSC)
Questionnaire: Measuring Laypersons’ explanations for conflicts in science. Learning and
Individual Differences, 37, 139–152. doi:10.1016/j.lindif.2014.12.001.
Wiley, J., Goldman, S. R., Graesser, A. C., Sanchez, C. A., Ash, I. K., & Hemmerich, J. A. (2009). Source
evaluation, comprehension, and learning in Internet science inquiry tasks. American Educational
Research Journal, 46(4), 1060–1106. doi:10.3102/0002831209333183.
Wineburg, S. S. (1991). Historical problem solving: A study of the cognitive processes used in the
evaluation of documentary and pictorial evidence. Journal of Educational Psychology, 83(1), 73–87.
doi:10.1037/0022-0663.83.1.73.