Thomm & Bromme (2016) Sourcing and Source Evaluation - Original
DOI 10.1007/s11145-016-9638-8
1630 E. Thomm, R. Bromme
Introduction
Science affects daily life in manifold ways, and this obliges people to acquire and
process science-based information. For example, laypeople may need to read medical
information to clarify a health-related question concerning a treatment or peruse
biogenetic knowledge to weigh the risks of genetically manipulated food. Unfortunately, however, scientific knowledge is seldom conclusive or unambiguous. Instead,
it is commonly dynamic, frequently inconsistent, and sometimes even conflicting
(Longino, 2002). This is all the more true for scientific information that is easily
accessible for the general public, especially via the Internet. Although the Internet has
revolutionized the accessibility of science-based information, it has also increased the
spread of inconsistent information (Stadtler & Bromme, 2014). Given the questionable nature of online information, its selection, evaluation, and interpretation remain a
paramount challenge for laypersons (Brand-Gruwel & Stadtler, 2011; Goldman,
Braasch, Wiley, Graesser, & Brodowinska, 2012; Hendriks, Kienhues, & Bromme,
2015; Metzger & Flanagin, 2013).
One particular challenge when handling science-based online information arises
from the fact that its contents usually go far beyond laypeople’s own everyday
knowledge. Due to their bounded understanding (Bromme & Goldman, 2014),
laypeople rely on the expertise of others and may possess only limited abilities to
evaluate the veracity of claims directly. To base their personal decisions on experts’
recommendations, they have to identify and evaluate relevant, credible sources
(Bråten, Strømsø, & Salmerón, 2011; Britt & Aglinskas, 2002; Bromme, Kienhues
& Porsch, 2010). Therefore, the accurate processing of source information, known as sourcing (e.g., Bråten et al., 2011; Goldman & Scardamalia, 2013; Tabak, 2015;
Wineburg, 1991) is a highly relevant skill for evaluating controversial science-based
claims; and, as such, a factor influencing individuals’ interpretations of such
controversies. Moreover, individuals’ subjective assumptions about why different
sources may arrive at competing claims about the same topic may affect their
conclusions about what and whom to believe and, hence, how they resolve
inconsistencies (Barzilai & Eshet-Alkalai, 2015). The present study examines the
interplay of laypeople’s sourcing and conflict explanations when dealing with
contradictory science-based information. We aimed to scrutinize whether the impact
of sourcing on the evaluation of rival accounts and their sources is mediated by
laypeople’s assumptions about the reasons for the conflict.
How individuals consider source information has been investigated specifically in the context of social psychological research on persuasion (e.g., Chen & Chaiken, 1999; Petty & Cacioppo, 1986) as well as in research on reading literacy and the text
comprehension of multiple documents (e.g., Bråten, Strømsø, & Britt, 2009; Perfetti, Rouet, & Britt, 1999; Rouet & Britt, 2011). Petty and Cacioppo (1986) studied the persuasiveness of arguments and examined the relevance of source information in
persuasiveness of arguments and examined the relevance of source information in
argument elaboration. They postulated that motivated people are more likely to
elaborate the contents of arguments. In contrast, if people are not motivated (or not
able, due, e.g., to a lack of knowledge) to process argument contents, they are more
inclined to rely on superficial processing based on information about the arguments’
sources. Thus source information is assumed to be considered mainly when readers do
not attend to the contents (Sparks & Rapp, 2011). However, in light of laypeople’s
limited capacities to fully understand and evaluate science-based content, reasoning
about the credibility of sources is not necessarily a less elaborated way to assess
information veracity (Bromme et al., 2010; Stadtler & Bromme, 2014). In line with
this reasoning, current research on reading comprehension stresses the relevance of
sourcing capacities. Sourcing is conceived as being essentially relevant for readers in
order to select, evaluate, and integrate knowledge from diverse, multiple information
sources (Barzilai, Tzadok, & Eshet-Alkalai, 2015; Goldman et al., 2012; Kammerer &
Gerjets, 2012; Keck, Kammerer & Starauschek, 2015; Perfetti et al., 1999; Rouet &
Britt, 2011; Strømsø, Bråten, & Britt, 2010; Stadtler & Bromme, 2014).
However, although research points to the importance of sourcing capacities, readers are rarely seen to engage in sourcing activities.
Empirical evidence suggests that readers seldom recognize or regard source
information spontaneously (e.g., Britt & Aglinskas, 2002; Kammerer, Bråten,
Gerjets, & Strømsø, 2013; Wiley et al., 2009). Britt and Aglinskas (2002) reported
that high school students and undergraduate students hardly reflect on source
information when reading multiple documents about a historical event unless
instructed explicitly to do so. When investigating the impact of source information
on text comprehension, Sparks and Rapp (2011) showed similarly that university
students tend to consider source credibility only when encouraged explicitly to
reason about a source’s reliability. Further findings suggest that undergraduate
students may well attend to source information spontaneously when handling
multiple texts about science-based issues, but they do not necessarily use it to
evaluate the given information (Kobayashi, 2014; Strømsø, Bråten, Britt, &
Ferguson, 2013; Tabak, 2015). In a recent think-aloud study, Barzilai et al. (2015)
examined readers’ sourcing practices while reading discrepant expert accounts
about a socioscientific issue. The authors pointed out explicitly that even though
most readers identified and noticed the source information, they did not apply it to
evaluate the discrepant accounts. Flanagin and Metzger (2007) demonstrated that
readers of online information seldom verify the information given about online
sources, although they reported checking for source information when being
interviewed about strategies to evaluate websites. This raises the question of when and in which ways sourcing may occur.
There is some evidence that having to deal with inconsistent, conflicting information
especially draws individuals’ attention to source information and hence may elicit
sourcing (e.g., Stadtler & Bromme, 2014; Strømsø et al., 2013). When establishing the
discrepancy-induced source comprehension effect, Braasch, Rouet, Vibert, and Britt
(2012) demonstrated that adult readers were more sensitive to source descriptions and
reported more source-related information in summary and memory tasks when
provided with two discrepant rather than two consistent assertions put forward by
different sources. Braasch et al. (2012) found this effect with short and conceptually
simple factual statements. Strømsø et al. (2013) let university students read multiple
controversial documents about a socioscientific issue. They showed that those
documents that expressed the most strongly competing positions evoked increased
sourcing. We would expect that, in particular, lay reasoning on controversial science-
based topics should be sensitive to source information, because laypeople have to
defer to experts to reach conclusions on such issues. Studies suggest that laypeople are
inclined to consult experts for advice rather than base their decision on their own
fragmentary understanding when they perceive science-based information to be
complex (Scharrer, Britt, Stadtler & Bromme, 2012). In an interview-based study,
Bromme, Thomm and Wolf (2015) showed that laypeople focused particularly on
source-related assessments when asked to resolve and decide on controversial online
information about a medical topic.
The salience and characteristics of the source may also influence laypeople’s sourcing (e.g., Britt & Rouet, 2012; Sperber et al., 2010). To avoid
misinformation and deception, laypersons must be critical toward experts’ abilities,
and they need to assess the link between their testimony and the motivational state
that might underlie their claim or advice (Keil, 2012; Mayer, Davis, & Schoorman,
1995; Shafto, Eaves, Navarro, & Perfors, 2012; Sperber et al., 2010). According to
Sperber et al. (2010), expertise and intentions influence the epistemic vigilance of
individuals, making them more or less cautious toward source information and thus toward the knowledge claims transmitted by these sources. Laypeople seem to possess
capacities enabling them to evaluate and identify relevant expertise (Bromme &
Thomm, 2016). Further empirical evidence suggests that individuals are specifically
attentive to source information associated with the commercial interests of the
source in providing certain information (e.g., Bråten et al., 2011; Critchley, 2008;
Cummings, 2014). Critchley (2008) revealed that laypersons perceive researchers at
publicly funded universities to be more credible than researchers working in
industry, assuming that the former act in the best interest of the public to a greater
extent than industrial researchers. When developing the Muenster Epistemic
Trustworthiness Inventory, Hendriks et al. (2015) suggested that laypeople may
sense differences between a source’s expertise, benevolence, and integrity. Against
this background, we examined specifically lay readers’ attention to source
information targeting source credibility, and the ways in which they used this
source information to interpret and evaluate conflicting scientific knowledge claims.
Source information may also serve as an immediate explanation for the conflict at
stake, and, as such, sourcing may play a relevant role in not only the evaluation but
inspect whether conflict explanations may, in turn, mediate the effect of sourcing on
source and claim evaluation.
Present study
The present study explored specific conditions and mechanisms that may allow us to
examine the role of sourcing in laypeople’s evaluations of conflicting science-based
information. We investigated when laypeople consider source information, especially information targeting source credibility, and how they use it to explain and
evaluate a scientific conflict. Research has shown that both the expertise and the
benevolence that can be ascribed to a source contribute to shaping its credibility
(Mayer et al., 1995; Sperber et al., 2010). An expert source can be understood as a
source that is experienced and knowledgeable in the relevant domain and therefore
is assumed to be competent (Hovland & Weiss, 1951; O’Keefe, 2002). A benevolent
source can be understood as one that intends to act in the interest of the reader and is
not guided by, for example, vested interests (Mayer et al., 1995; Thomm et al.,
2015). In this study, participants were always presented with at least one credible
source that was contradicted by an antagonist source that was either less expert or
less benevolent or equally expert and benevolent. Survey research also indicated
that researchers at public universities are perceived to be especially reliable and
credible sources on science (Besley, 2014; Castell et al., 2014). Therefore, this
reference source was represented consistently as a professor who has been working
for years at a public university and could therefore be perceived as expert and
benevolent. This setting allowed us to pursue four research goals.
First, we examined readers’ memory of critical source information in general.
Because it seems questionable whether readers account for source information at all,
we could not take it for granted that our participants would consider source
information when evaluating and explaining scientific conflicts. Therefore we
inspected (1) readers’ source memorization in order to determine whether source
information was acknowledged in principle.
Second, we examined (2) the impact of source information on readers’
assessments of source credibility and of their personal agreement with the claims
at stake. We expected that participants who have reasons (due to available source
information) to assume one source to be less expert or less benevolent compared to
the other one would tend to assess this source’s claim as being less credible than the
concurring claim advocated by the expert and benevolent source. Likewise, we
assumed that participants perceiving one source as having less expertise or as being
less benevolent would tend to agree less strongly with this source’s claim than with
the concurring claim presented by the expert and benevolent source. For example, a
researcher who works for an industrial company might be perceived as less
benevolent and might therefore be judged to be less credible than a researcher who
works at a public university. This source’s claim might also be agreed with less
strongly.
Third, we examined (3) the impact of source information on the preferences for
subjective conflict explanation. We assumed that perceived discrepancies in the
sources’ expertise and benevolence would specifically affect readers’ preference for
researcher-related explanations (i.e., differences in competencies and personal
motivations). It seems reasonable for participants to endorse personal motivation
explanations more highly when they perceive a violation of benevolent interests in
one of the competing sources than when they are confronted with equally expert and
benevolent sources. For instance, participants might be more inclined to attribute the
conflict to motivation explanations when the source contradicting the university
professor is a researcher in an industrial company who might be guided by vested
commercial interests.
Likewise, we expected that participants would endorse competence explanations
more strongly when they perceive a lack of expertise in one of the competing
sources than when explaining conflicts between equally expert and benevolent
sources. For example, participants might endorse competence explanations more
strongly when the competing source is a junior researcher who might possess less
working experience than a concurring university professor. In addition to these
specific assumptions, we explored the impact of source information on the
preference for two further kinds of research-process-related explanations that have
been found in preceding studies (i.e., differences in the research processes and in the
thematic complexity of the issue at stake). However, we did not expect specific
effects on these kinds of explanations, because the dimensions did not directly
address the researcher-related facets manipulated in the present study.
Finally, we examined (4) the interplay of sourcing, conflict explanation, and
conflict evaluation (i.e., assessment of perceived source credibility and claim
agreement). Conflict explanation may help readers to restore coherence and thus
may affect source and claim evaluation. Because we expected source information to
influence individuals’ evaluations and explanations, it may also play a role in the
relation between the two variables. Source information may evoke or accompany a
preference for specific conflict explanations, and this, in turn, may influence conflict
evaluation. Hence, the effect of source information on conflict evaluation may be
mediated by the individual preference for conflict explanations. Because we
manipulated primarily source characteristics, we expected the relation between
source information and conflict evaluation to be influenced particularly by
researcher-related explanations. It is plausible that personal motivation explanations
in particular may affect readers’ credibility judgments when they perceive
differences in the benevolent interests of the competing sources (see above).
Participants sensing a violation of benevolent interests in one of the rival sources
may attribute the conflict more strongly to personal motivation reasons than when
confronted with equally expert and benevolent sources. This, in turn, may also lead
to decreased claim credibility. Similarly, when sensing a lack of expertise in one of
the rival sources, they may endorse competence reasons more strongly than when
confronted with equally expert sources. This, in turn, may also result in a decrease
in the perceived claim credibility. In addition, we do not rule out the possibility that
research-process-related explanations might also mediate the relationship. It is
possible that participants who are presented with competing but equally expert and
benevolent sources may agree strongly with these explanations because they are not
provided with any contextual information that may suggest any cause or explanation
Methods
Participants
A total of 155 students from a German university participated in the study. Four
participants who reported studying natural sciences or closely adjacent fields were
dropped from the sample because they might have possessed substantial knowledge of the conflict topic and thus might no longer have been laypersons. Eight further participants who
read and answered the questionnaire in less than 7 min were also excluded. This
criterion was based on the time two well-trained readers needed to complete the
questionnaire. Further outlier analyses resulted in the exclusion of the data of four
participants who took much longer to complete the questionnaire (about 1 h).
The final analyses were based on the data of 139 (111 female) participants. On
average, they were 23 years old (SD = 4) and had completed 2 years of study
(SD = 1.5). Participants were told that the questionnaire would deal with a topic
stemming from current research on climate change. Given this context information,
they rated their interest in and knowledge about the specific topic of “whirlwinds”
on 6-point scales ranging from 1 (not at all) to 6 (very much). Participants reported
having a low interest in the topic (M = 2.4, SD = 0.91) and they assessed their own
knowledge to be low (M = 1.9, SD = 0.88).
Materials
Scenario
Participants were given three choices to indicate whether “the claims contradict each other,” “the claims do not contradict,” or “I don’t know.” Additionally, they
assessed the comprehensibility and credibility of each claim on a scale ranging from
1 (not at all) to 9 (very much). The majority of participants (16) judged the claims to
be contradictory. They perceived them to be generally comprehensible (Claim A:
M = 6.68, SD = 2.06; Claim B: M = 7.05, SD = 1.93) and fairly credible (Claim
A: M = 5.74, SD = 2.08; Claim B: M = 5.26, SD = 1.93). Claims did not differ
significantly in comprehensibility, t(18) = -1.07, r = .24, or credibility, t(18) = 0.78, r = .18. Because three participants were unsure about the contradictoriness, we modified some wording in the claims in order to stress the conflicting positions (the complete scenario is presented in Appendix A in ESM).
The claims were described as having been retrieved from specific websites hosted
by different expert sources. Drawing on previous procedures (Thomm et al., 2015),
we varied information about the sources systematically. This resulted in three
different experimental conditions (henceforth referred to as the source condition):
Consistently across the conditions, Source A was described as a professor working
at a public university. Depending on the source condition, the competing Source B was presented as either a researcher working in industry (benevolence condition), a junior researcher at a university (expertise condition), or another university professor (control condition).
The source information was reported in the introduction and repeated directly
before presenting each claim. The corresponding claims will be labeled Claim A
and Claim B. The order of source information and claims was randomized within
the scenario in order to operationalize all possible combinations. Source condition
was operationalized as a between-subjects factor. Participants were randomly
assigned to one of the three source conditions.
Dependent measures
Conflict explanation was measured with the Explaining Conflicting Science Claims
Questionnaire (ECSC; Thomm et al., 2015). The ECSC is a scenario-based,
standardized measurement of conflict explanation. Participants read the conflict
scenario and subsequently assessed how far they agreed with different explanations.
They received 23 statements, each presenting a different explanation, and judged how far they perceived each explanation to be a relevant reason for the specific conflict on a 6-point scale ranging from 1 (very much disagree) to 6 (very much agree). The ECSC scales measure four distinct conflict explanations: (a) differences in researchers’ personal motivations, (b) differences in researchers’ competence, (c) differences in the research processes, and (d) the thematic complexity of the issue at stake.
Source credibility
After reading the scenario and answering the ECSC, participants were presented
with each source again together with the claim it stated. Participants were then
instructed to assess the source’s credibility. Source credibility was measured for
each claim separately on a 6-point scale ranging from 1 (not at all credible) to 6
(very credible).
Claim agreement
Directly after asking for credibility judgments, participants indicated their personal
agreement with each claim on a 6-point scale ranging from 1 (I don’t agree at all) to
6 (I very much agree).
Source memorization
Participants were asked to mark the expert source that had made the claim for both
claims separately. Thus, for each claim, they were presented with all three kinds of
expert sources (university professor, researcher of industry, university junior
researcher), and they indicated which of these three options stated the specific
claim. Furthermore, they could select a fourth option indicating that they did not remember the source (“I don’t know”).
Procedure
Participants first reported some demographic data (age, gender, major study subject,
number of completed semesters) and assessed their own prior topic-specific
knowledge and interest in the conflict topic. Afterwards, they were instructed to read
the scenario carefully. Subsequently, they completed the ECSC questionnaire
indicating their subjective assessments of which explanations might be relevant.
After finishing the questionnaire, participants were prompted to evaluate the
conflicting claims by assessing the source credibility and by indicating their
personal agreement with each claim.1 Finally, we tested source memorization.
While working on the statements, participants were able to reread the scenario by
opening a pop-up window.
1 Further variables were measured but not reported here for reasons of space. Nevertheless, to complete
the given information, we shall list excluded variables: Participants assessed the ECSC from an assumed
expert perspective and indicated whether they would feel confident about deciding on the claims’ veracity
themselves or would need further expert advice to do so. These measures are not reported elsewhere and
were assessed mostly after the reported ones; therefore we do not expect confounding effects.
The study was administered online using “EFS Survey” by QuestBack for online polls. As a reward for their participation, students could participate in a lottery and
win vouchers for a well-known online store.
Analyses
We used Chi square tests to assess whether there was a random frequency
distribution of source memorization between the source conditions. We computed
analyses of variance to investigate the impact of source condition on measures of claim evaluation and conflict explanation, using partial eta squared (η²p) as a measure of effect size. To follow up meaningful effects, we computed protected multiple t tests for dependent and independent samples. Following Field’s (2009) recommendations, we report r as a measure of effect size for t tests: effect sizes above .1 were interpreted as small effects; those above .3, as medium effects; and those above .5, as large effects.
Following procedures recommended by Hayes and Preacher (2014), we conducted mediation analyses to examine whether conflict explanations mediated possible effects of source condition on claim evaluation measures.
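The r effect size used throughout the Results follows from a t statistic and its degrees of freedom via r = sqrt(t² / (t² + df)) (Field, 2009). A minimal sketch of this conversion and of Field’s benchmarks; the function names are illustrative, not taken from the paper:

```python
import math

def t_to_r(t: float, df: int) -> float:
    """Effect size r from a t statistic (Field, 2009): r = sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t ** 2 / (t ** 2 + df))

def label(r: float) -> str:
    """Field's rough benchmarks: above .1 small, above .3 medium, above .5 large."""
    if r >= 0.5:
        return "large"
    if r >= 0.3:
        return "medium"
    if r >= 0.1:
        return "small"
    return "negligible"

# The conversion reproduces values reported in the Results section:
print(round(t_to_r(3.88, 36), 2), label(t_to_r(3.88, 36)))  # 0.54 large
print(round(t_to_r(3.65, 87), 2), label(t_to_r(3.65, 87)))  # 0.36 medium
print(round(t_to_r(2.15, 85), 2), label(t_to_r(2.15, 85)))  # 0.23 small
```

For example, the benevolence-condition comparison t(36) = 3.88 yields r ≈ .54, the large effect reported in the Results.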
Results
Source credibility
Table 2 Means and standard deviations of source credibility and personal claim agreement

Source condition        Credibility of sources      Personal agreement with claims
                        Source A      Source B      Claim A       Claim B
Benevolence condition   4.22 (1.00)   3.30 (0.91)   3.76 (1.14)   3.16 (1.04)
Expertise condition     4.08 (0.93)   4.04 (0.97)   3.60 (0.87)   3.60 (1.01)
Control condition       3.72 (1.11)   3.68 (1.10)   3.60 (1.11)   3.36 (1.03)

In each condition, the source of Claim A (Source A) was presented as a credible source, whereas the source of Claim B (Source B) varied in its credibility
significantly less credible than Source A, t(36) = 3.88, p < .001, r = .54. In contrast, there was no difference between the sources in the expertise condition, t(51) = 0.21, p = .835, or in the control condition, t(49) = 0.19, p = .848. Thus, it was
specifically information about possible vested interests that influenced the judgment of
source credibility when comparing the sources directly within a controversy.
To complement the explanation of the interaction, we also compared the
credibility judgments of Source A and Source B, respectively, between conditions.
A one-way ANOVA with the between-subjects factor source condition showed that
there was only a marginal difference in credibility judgments of Source A between
the three conditions, F(2,136) = 2.88, p = .06, η²p = .04. Only participants in the
benevolence condition judged this source’s credibility slightly higher than
participants in the control condition, t(85) = 2.15, p = .034, r = .23. All other
comparisons indicated no differences.
However, the analysis confirmed an effect of source condition on participants’
credibility judgments of Source B, F(2,136) = 5.96, p = .003, η²p = .08. Multiple
t tests for independent samples showed that participants perceived the researcher
working in industry to be less credible than the junior university researcher,
t(87) = 3.65, p < .001, r = .36. However, there was only a marginal difference in
credibility judgments between the university professor and the junior researcher,
t(100) = 1.75, p = .083, r = .17, and between the university professor and the
researcher from industry, t(85) = 1.73, p = .088, r = .17.
A mixed ANOVA with the within-subjects factor personal agreement (Claim A vs.
Claim B) and the between-subjects factor source condition (benevolence condition
vs. expertise condition vs. control condition) showed neither a main effect of source
condition, F(2,136) = 0.72, p = .488, nor a significant interaction term,
F(2,136) = 1.36, p = .26. The main effect of personal agreement just failed to attain significance, F(1,136) = 3.74, p = .055, η²p = .03. A t test for dependent
samples did not confirm a meaningful difference between agreement with Claim A
and Claim B, t(138) = 1.72, p = .088.
Fig. 1 Mediation with competence explanations as mediator, revealing an indirect effect = -.10, 95 % CI [-.27, -.01]. Note: The pattern of effects indicates an inconsistent mediation (MacKinnon et al., 2007)
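The percentile-bootstrap estimate behind such an indirect-effect CI (in the spirit of Hayes & Preacher, 2014) can be sketched as follows. The data here are synthetic with a built-in negative a*b path; the variable names (cond, explanation, credibility) are illustrative assumptions, not the study’s actual data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 139  # final sample size in the study

# Synthetic data with a built-in negative indirect effect:
cond = rng.integers(0, 2, n).astype(float)          # source condition (0/1)
explanation = 0.8 * cond + rng.normal(0, 1, n)      # mediator (conflict explanation)
credibility = -0.6 * explanation + 0.1 * cond + rng.normal(0, 1, n)

def indirect(x, m, y):
    """a*b: a from regressing m on x; b from regressing y on m, controlling for x."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boots.append(indirect(cond[idx], explanation[idx], credibility[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])

print(f"indirect = {indirect(cond, explanation, credibility):.2f}, "
      f"95% CI [{lo:.2f}, {hi:.2f}]")
```

A percentile CI that excludes zero, as in Fig. 1, is the criterion for a reliable indirect effect under this approach.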
Following our assumptions, we also expected that the effect of source benevolence on credibility judgments and on personal claim agreement might be mediated by
motivation explanations. However, a prior ANOVA did not indicate that participants
in the benevolence condition differed from controls in their assessments of Claim B.
This finding already speaks against mediation. An inconsistent mediation was
unlikely, because both the direct and indirect effect would be expected to take the same
sign: Analyses revealed that participants in the benevolence condition judged
credibility lower, which would imply a negative direct effect on credibility judgment.
Because source benevolence was assumed to positively predict endorsement of
motivation explanations, and this, in turn, should be associated negatively with the
credibility of Claim B, the indirect effect would also have a negative sign. The same
applies to the analysis of mediating effects on personal claim agreement. Therefore, it
could be assumed that there is no mediation effect of motivation explanations in the
benevolence condition.
Discussion
The current study provides insights into readers’ sourcing and how they apply it
when explaining and evaluating conflicting scientific claims. Overall, the results
suggest that whereas prior research has generally shown that inconsistent
information increases readers’ sourcing activity (cf. Braasch et al., 2012; Stadtler
& Bromme, 2014), the sources (i.e., their credibility-related features) providing the
information matter as well. Thus, both the reception of conflicting scientific claims
and the presentation of sources that differ in critical characteristics interact and may
lead to increased sourcing activities.
Prior research on sourcing has indicated the need to consider the processes
mediating between a reader’s attention to source information and her or his final
claim evaluation. It is necessary to analyze how source-related information is used
when interpreting and evaluating content information (e.g., Barzilai et al., 2015;
Bromme et al., 2010). Tying in with this line of research, we examined whether
source information is used in conflict explanation and may, in turn, affect readers’
evaluation of the source and their agreement with the respective claim. The detected
mediation effect provides an indication of the interplay between explanation and
source credibility. Thus, readers considered source expertise and showed an
increased attribution to competence reasons that affected source credibility.
Consequently, under the circumstances modeled in this experimental condition
(two competing sources of different expertise), readers apply their source
knowledge to evaluate knowledge claims provided by these sources. Thus, under
specific conditions, conflict explanation does indeed affect individuals’ decisions on
whom to believe.
Although the size of the mediating effect of researcher-related explanations on
the interplay between discrepant source features and credibility assessment is small
to moderate, it is still remarkable, because, on average, our participants also
strongly endorsed explanations referring to the research processes and the thematic
complexity of the research topic. They held these explanations to be true, independent of further information about the specific sources. Findings from
research on individuals’ epistemic beliefs may cast light on this pattern of results. In
contrast to knowledge in the humanities (e.g., topics in history), individuals often
tend to perceive knowledge about science topics as being objective, based on
evidence, and less prone to researchers’ interpretation and opinions (e.g., Buehl &
Alexander, 2006; Hofer, 2000; Kuhn, Cheney, & Weinstock, 2000; Limón, 2006).
Such a perspective may affect individuals’ conflict explanations and may
specifically suppress the consideration of source features within conflict explanations. Analyses did not confirm differences across conditions in agreement with the research-process-related explanations, although participants endorsed these explanations strongly. However, analyses did confirm effects of source information on the researcher-related explanations. Thus, participants seemed to
consider aspects of the research process and the thematic complexity, but they also
appeared to be sensitive to social facets of scientific knowledge construction, and
hence the person of the researcher.
This study presents a first step in examining the relationship between the
sourcing, explaining, and evaluating of scientific information. Further research
could not only contribute to our understanding of sourcing and the explanation of
expert disagreement, but also enhance the generalizability of the present results.
An appropriate understanding of how and why scientists arrive at different
claims may strengthen readers' sourcing capacities by increasing their awareness
of the role of sources. This entails not only assumptions about the nature of
scientific knowledge, its generation, and its justification, but also
assumptions about the related social practices and the part experts play in
knowledge construction. Specifically, the latter may provide the means for
readers to evaluate multiple competing sources and hence to decide which one to
believe (in addition to what to believe). This makes it relevant to foster
school students' ideas about the active role of experts in knowledge
construction and interpretation at an early stage, and to draw their attention
to the influence of scientists' views and motivations.
One could argue that when examining the impact of sourcing, it would have been
consistent to analyze only data from participants who recalled the source
information correctly. However, because source memorization was measured at
the end of the questioning, we could not exclude the possibility that participants
referred to the correct source information while completing the questionnaire.
Therefore, we analyzed the complete sample.
With regard to the mediating effect of explanations, we have already pointed out
above that some effects are rather small. Furthermore, although we uncovered
effects of source information, the effect sizes or the proportion of explained
variance were frequently moderate. On the one hand, this might be due to the
specific social situation we modeled here: it asks for the assessment of experts
from a layperson's perspective. Compared to the status of our participants (who
were laypersons with respect to climate research), all presented sources might
have been perceived as experts, and this may well have had an impact on the
assessments. Furthermore, to
disentangle the influence of source information, we assigned great importance to
operationalizing source information as a between-subjects factor. This, in turn, may
have led to an increase in the error variance between participants. However, against
this background, it would seem to be all the more remarkable that we identified
effects of a reasonable magnitude. It might be interesting for future research
to contrast different source conditions directly within the same person. When
searching online for science-based information, readers usually come across
diverse kinds of sources. Direct comparisons of diverse sources would therefore
seem to be a realistic approach that may also help to extend the applicability
of our results.
Similarly, it would be valuable to extend research by investigating a range of
topics to further test the generalizability of our results. Using different topics from
not only the same but also other knowledge domains would help to identify
similarities and differences in the interplay of sourcing and the evaluation of
conflicting expert claims. Depending on the topic at stake, laypeople may
understand the role of experts in science knowledge construction differently. As a
result, they may well evaluate sources differently when reasoning about expert
controversies. In the current study, we used a different topic in climate change
research from those used previously. The current effect of sourcing on conflict
explanation corroborates previous findings on science topics. This may strengthen
the assumption of a specific relationship between sourcing and conflict explanation
endorsement in the natural sciences.
Furthermore, the study was conducted with a sample of participants who were
enrolled in higher education and were therefore well educated. These participants might
represent the social group of recipients who turn to the Internet to retrieve
information about science in order to inform their decisions (Fox, 2005). Although
they indicated possessing only low prior knowledge on the topic, they might well
have had some understanding of scientific knowledge construction and of the
different kinds of experts to be found on the Internet. Both could have influenced
their judgments. However, when considering previous research (see above), it
cannot be taken for granted that they were indeed attentive to source information.
Nonetheless, it would be worth investigating further samples to extend the present
findings. For instance, it would be interesting to compare the present data with
a dataset from school students in order to address the development of
individuals' use of source information in conflict explanations.
Finally, although we could confirm the impact of source information on
credibility judgments, we were unable to establish a similar effect on participants’
personal claim agreement. On the one hand, this finding might be explained by
practical features of the study material. Participants expressed only low interest in the
specific topic. Thus, although the broader issue of climate change might concern the
public, this specific topic might not have been perceived to be very relevant for
participants’ immediate daily lives. On the other hand, this finding seems reasonable
in that individuals’ claim agreement is not necessarily determined by their
credibility judgments (Scharrer et al., 2013; Thomm & Bromme, 2012). Readers
may use criteria to evaluate the credibility of scientific claims and sources (e.g.,
consider cues of adherence to good scientific practice) that they would not use to
assess their personal agreement (e.g., follow entrenched beliefs). However, this
leads us to ask which factors ultimately influence readers’ personal decision making.
Not least for that reason, it would also be interesting to continue investigating
factors that may also affect the interplay between sourcing, conflict explanation, and
evaluation. For example, it may well be worth further investigating the role of
sourcing when participants need to infer source information.
References
Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social
psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality
and Social Psychology, 51(6), 1173–1182. doi:10.1037/0022-3514.51.6.1173.
Barzilai, S., & Eshet-Alkalai, Y. (2015). The role of epistemic perspectives in comprehension of multiple
author viewpoints. Learning and Instruction, 36, 86–103. doi:10.1016/j.learninstruc.2014.12.003.
Barzilai, S., Tzadok, E., & Eshet-Alkalai, Y. (2015). Sourcing while reading divergent expert accounts:
Pathways from views of knowing to written argumentation. Instructional Science, 43(6), 737–766.
doi:10.1007/s11251-015-9359-4.
Besley, J. (2014). Science and technology: Public attitudes and understanding. In National Science Board
(Ed.), Science and engineering indicators 2014 (pp. 1–53). Arlington, VA: National Science
Foundation (NSB 14-01).
Braasch, J. L., Rouet, J. F., Vibert, N., & Britt, M. A. (2012). Readers’ use of source information in text
comprehension. Memory & Cognition, 40(3), 450–465. doi:10.3758/s13421-011-0160-6.
Brand-Gruwel, S., & Stadtler, M. (2011). Solving information-based problems: Evaluating sources and
information. Learning and Instruction, 21, 175–179. doi:10.1016/j.learninstruc.2010.02.008.
Bråten, I., Strømsø, H. I., & Britt, M. A. (2009). Trust matters: Examining the role of source evaluation in
students’ construction of meaning within and across multiple texts. Reading Research Quarterly,
44(1), 6–28. doi:10.1598/RRQ.44.1.1.
Bråten, I., Strømsø, H. I., & Salmerón, L. (2011). Trust and mistrust when students read multiple
information sources about climate change. Learning and Instruction, 21, 180–192. doi:10.1016/j.
learninstruc.2010.02.002.
Britt, M. A., & Aglinskas, C. (2002). Improving students’ ability to identify and use source information.
Cognition and Instruction, 20, 485–522. doi:10.1207/S1532690XCI2004_2.
Britt, M. A., & Rouet, J.-F. (2012). Learning with multiple documents: Component skills and their
acquisition. In J. R. Kirby & M. J. Lawson (Eds.), Enhancing the quality of learning: Dispositions,
instruction, and learning processes (pp. 276–314). New York, NY: Cambridge University Press.
Bromme, R., & Goldman, S. (2014). The public’s bounded understanding of science. Educational
Psychologist, 49(2), 59–69. doi:10.1080/00461520.2014.921572.
Bromme, R., Kienhues, D., & Porsch, T. (2010). Who knows what and who can we believe?
Epistemological beliefs are beliefs about knowledge (mostly) attained from others. In L.
D. Bendixen & F. C. Feucht (Eds.), Personal epistemology in the classroom: Theory, research,
and implications for practice (pp. 163–193). Cambridge: Cambridge University Press. doi:10.1017/
CBO9780511691904.006.
Bromme, R., & Thomm, E. (2016). Knowing who knows: Laypersons’ capabilities to judge experts’
pertinence for science topics. Cognitive Science, 40, 241–252. doi:10.1111/cogs.12252.
Bromme, R., Thomm, E., & Wolf, V. (2015). From understanding to deference: Laypersons’ and medical
students’ views on conflicts within medicine. International Journal of Science Education, Part B:
Communication and Public Engagement, 5(1), 68–91. doi:10.1080/21548455.2013.849017.
Buehl, M. M., & Alexander, P. A. (2006). Examining the dual nature of epistemological beliefs.
International Journal of Educational Research, 45, 28–42. doi:10.1016/j.ijer.2006.08.007.
Castell, S., Charlton, A., Clemence, M., Pettigrew, N., Pope, S., Quigley, A., et al. (2014). Public
attitudes to science 2014. London: Ipsos Mori. Retrieved from https://www.ipsos-mori.com/Assets/
Docs/Polls/pas-2014-main-report.pdf.
Chen, S., & Chaiken, S. (1999). The Heuristic-Systematic Model in its broader context. In S. Chaiken &
Y. Trope (Eds.), Dual-process theories in social psychology (pp. 73–96). New York, NY: Guilford.
Critchley, C. R. (2008). Public opinion and trust in scientists: The role of the research context, and the
perceived motivation of stem cell researchers. Public Understanding of Science, 17(3), 309–327.
doi:10.1177/0963662506070162.
Cummings, L. (2014). The ‘‘trust’’ heuristic: Arguments from authority in public health. Health
Communication, 29(10), 1043–1056. doi:10.1080/10410236.2013.831685.
Field, A. P. (2009). Discovering statistics using SPSS: And sex and drugs and rock ‘n’ roll (3rd ed.).
London: Sage.
Flanagin, A. J., & Metzger, M. J. (2007). The role of site features, user attributes, and information
verification behaviors on the perceived credibility of web-based information. New Media & Society,
9(2), 319–342. doi:10.1177/1461444807075015.
Fox, S. (2005). Health information online. Washington, DC: Pew Internet & American Life Project.
Goldman, S. R., Braasch, J. L., Wiley, J., Graesser, A. C., & Brodowinska, K. (2012). Comprehending
and learning from Internet sources: Processing patterns of better and poorer learners. Reading
Research Quarterly, 47, 356–381. doi:10.1002/RRQ.027.
Goldman, S. R., & Scardamalia, M. (2013). Managing, understanding, applying, and creating knowledge
in the information age: Next-generation challenges and opportunities. Cognition and Instruction,
31(2), 255–269. doi:10.1080/10824669.2013.773217.
Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A
regression-based approach. New York, NY: Guilford Press.
Hayes, A. F., & Preacher, K. J. (2014). Statistical mediation analysis with a multicategorical independent
variable. British Journal of Mathematical and Statistical Psychology, 67(3), 451–470. doi:10.1111/
bmsp.12028.
Hendriks, F., Kienhues, D., & Bromme, R. (2015). Measuring laypeople’s trust in experts in a digital age:
The Muenster Epistemic Trustworthiness Inventory (METI). PLoS ONE, 10(10), e0139309.
doi:10.1371/journal.pone.0139309.
Hofer, B. K. (2000). Dimensionality and disciplinary differences in personal epistemology. Contempo-
rary Educational Psychology, 25, 378–405. doi:10.1006/ceps.1999.1026.
Hovland, C. I., & Weiss, W. (1951). The influence of source credibility on communication effectiveness.
Public Opinion Quarterly, 15(4), 635. doi:10.1086/266350.
Kajanne, A., & Pirttilä-Backman, A. M. (1999). Laypeople’s viewpoints about the reasons for expert
controversy regarding food additives. Public Understanding of Science, 8, 303–315. doi:10.1088/
0963-6625/8/4/303.
Kammerer, Y., Bråten, I., Gerjets, P., & Strømsø, H. I. (2013). The role of Internet-specific epistemic
beliefs in laypersons’ source evaluations and decisions during Web search on a medical issue.
Computers in Human Behavior, 29(3), 1193–1203. doi:10.1016/j.chb.2012.10.012.
Kammerer, Y., & Gerjets, P. (2012). Effects of search interface and internet-specific epistemic beliefs on
source evaluations during web search for medical information: An eye-tracking study. Behaviour &
Information Technology, 31(1), 83–97. doi:10.1080/0144929X.2011.599040.
Keck, D., Kammerer, Y., & Starauschek, E. (2015). Reading science texts online: Does source
information influence the identification of contradictions within texts? Computers & Education, 82,
442–449. doi:10.1016/j.compedu.2014.12.005.
Keil, F. C. (2012). Running on empty? How folk science gets by with less. Current Directions in
Psychological Science, 21(5), 329–334. doi:10.1177/0963721412453721.
Kobayashi, K. (2014). Students’ consideration of source information during the reading of multiple texts
and its effect on intertextual conflict resolution. Instructional Science, 42(2), 183–205. doi:10.1007/
s11251-013-9276-3.
Kuhn, D., Cheney, R., & Weinstock, M. (2000). The development of epistemological understanding.
Cognitive Development, 15, 309–328. doi:10.1016/S0885-2014(00)00030-7.
Limón, M. (2006). The domain generality–specificity of epistemological beliefs: A theoretical problem, a
methodological problem or both? International Journal of Educational Research, 45, 7–27. doi:10.
1016/j.ijer.2006.08.002.
Longino, H. E. (2002). The fate of knowledge. Princeton, NJ: Princeton University Press.
MacKinnon, D. P., Fairchild, A. J., & Fritz, M. S. (2007). Mediation analysis. Annual Review of
Psychology, 58, 593–614.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust.
Academy of Management Review, 20, 709–734. doi:10.5465/AMR.1995.9508080335.
Metzger, M. J., & Flanagin, A. J. (2013). Credibility and trust of information in online environments: The
use of cognitive heuristics. Journal of Pragmatics, 59, 210–220. doi:10.1016/j.pragma.2013.07.012.
O’Keefe, D. J. (2002). Persuasion: Theory and research (2nd ed.). Thousand Oaks, CA: Sage.
Perfetti, C. A., Rouet, J.-F., & Britt, M. A. (1999). Toward a theory of documents representation. In H.
van Oostendorp & S. R. Goldman (Eds.), The construction of mental representations during reading
(pp. 99–122). Mahwah, NJ: Lawrence Erlbaum Associates.
Petty, R. E., & Cacioppo, J. T. (1986). Communication and persuasion: Central and peripheral routes to
attitude change. New York, NY: Springer.
Rouet, J.-F., & Britt, M. A. (2011). Relevance processes in multiple document comprehension. In M.
T. McCrudden, J. P. Magliano, & G. Schraw (Eds.), Text relevance and learning from text (pp.
19–52). Greenwich, CT: Information Age Publishing.
Scharrer, L., Britt, M. A., Stadtler, M., & Bromme, R. (2013). Easy to understand but difficult to decide:
Information comprehensibility and controversiality affect laypeople’s science-based decisions.
Discourse Processes, 50, 361–387. doi:10.1080/0163853X.2013.813835.
Scharrer, L., Bromme, R., Britt, M. A., & Stadtler, M. (2012). The seduction of easiness: How science
depictions influence laypeople’s reliance on their own evaluation of scientific information. Learning
and Instruction, 22(3), 231–243. doi:10.1016/j.learninstruc.2011.11.004.
Shafto, P., Eaves, B., Navarro, D. J., & Perfors, A. (2012). Epistemic trust: Modeling children’s reasoning
about others’ knowledge and intent. Developmental Science, 15(3), 436–447. doi:10.1111/j.1467-
7687.2012.01135.x.
Sparks, J. R., & Rapp, D. N. (2011). Readers’ reliance on source credibility in the service of
comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 230.
doi:10.1037/a0021331.
Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., & Origgi, G. (2010). Epistemic vigilance.
Mind and Language, 25, 359–393. doi:10.1111/j.1468-0017.2010.01394.x.
Stadtler, M., & Bromme, R. (2014). The content–source integration model: A taxonomic description of
how readers comprehend conflicting scientific information. In D. N. Rapp & J. Braasch (Eds.),
Processing inaccurate information: Theoretical and applied perspectives from cognitive science and
the educational sciences (pp. 379–402). Cambridge, MA: MIT Press.
Strømsø, H. I., Bråten, I., & Britt, M. A. (2010). Reading multiple texts about climate change: The
relationship between memory for sources and text comprehension. Learning and Instruction, 20,
192–204. doi:10.1016/j.learninstruc.2009.02.001.
Strømsø, H. I., Bråten, I., Britt, M. A., & Ferguson, L. E. (2013). Spontaneous sourcing among students
reading multiple documents. Cognition and Instruction, 31(2), 176–203. doi:10.1080/07370008.
2013.769994.
Tabak, I. (2015). Functional scientific literacy: Seeing the science within the words and across the web. In
L. Corno & E. M. Anderman (Eds.), Handbook of educational psychology (3rd ed., pp. 269–280).
London: Routledge.
Thomm, E., & Bromme, R. (2012). ‘‘It should at least seem scientific!’’ Textual features of
‘‘scientificness’’ and their impact on lay assessments of online information. Science Education,
96(2), 197–211. doi:10.1002/sce.20480.
Thomm, E., Hentschke, J., & Bromme, R. (2015). The explaining conflicting scientific claims (ECSC)
questionnaire: Measuring laypersons’ explanations for conflicts in science. Learning and
Individual Differences, 37, 139–152. doi:10.1016/j.lindif.2014.12.001.
Wiley, J., Goldman, S. R., Graesser, A. C., Sanchez, C. A., Ash, I. K., & Hemmerich, J. A. (2009). Source
evaluation, comprehension, and learning in Internet science inquiry tasks. American Educational
Research Journal, 46(4), 1060–1106. doi:10.3102/0002831209333183.
Wineburg, S. S. (1991). Historical problem solving: A study of the cognitive processes used in the
evaluation of documentary and pictorial evidence. Journal of Educational Psychology, 83(1), 73–87.
doi:10.1037/0022-0663.83.1.73.