

Computers & Education 160 (2021) 104038


Secondary students’ epistemic thinking and year as predictors of critical source evaluation of Internet blogs

Stephanie Pieschl 1,*, Deborah Sivyer
School of Education, The University of Newcastle, Callaghan, Australia

* Corresponding author. E-mail addresses: stephanie.pieschl@tu-darmstadt.de (S. Pieschl), deborah.sivyer@uon.edu.au (D. Sivyer).
1 Stephanie Pieschl is now at the Technische Universität Darmstadt, Germany.

https://doi.org/10.1016/j.compedu.2020.104038
Received 31 December 2019; Received in revised form 17 August 2020; Accepted 4 October 2020; Available online 7 October 2020.
0360-1315/© 2020 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

Keywords: Epistemic thinking; Source evaluation; Secondary education; Information literacy; 21st century abilities

Abstract: Students should develop competent epistemic thinking and critical source evaluation skills during secondary education. This study compared these skills and their interrelation between Australian students (n = 218) from Years 7, 9, and 11. In an online questionnaire, students critically evaluated the Trustworthiness of four fictitious Internet blog posts that varied in Reliability (reliable vs. unreliable) and Content (pro vs. contra computer games). They also completed an Epistemic Thinking Assessment, resulting in scores on Absolutism, Multiplism, and Evaluativism. Results show no significant differences between Years in Epistemic Thinking, but significant Year differences in Trustworthiness judgments: Year 9 and 11 students discriminated between reliable and unreliable blog posts while Year 7 students failed to do so. Additionally, not only being in Year 7 but also holding Multiplist beliefs (e.g., “everything is subjective”) predicted poor source evaluation skills. Potential explanations and implications for teaching practice are discussed.

1. Introduction

The Internet is now the primary information source for secondary school students tasked with research, be it for curriculum-based or personal enquiries (Braasch, Bråten, Strømsø, Anmarkrud, & Ferguson, 2013; Mason, Scrimin, Tornatora, Suitner, & Moè, 2018). The great diversity of information sources and their varying credibility pose new challenges for students, as effective comprehension requires skills to locate, select, evaluate, and integrate multiple texts in information-seeking tasks (Braasch et al., 2013; Bråten, Britt, Strømsø, & Rouet, 2011). Where an information source’s trustworthiness was once in the hands of publishing house editors, the responsibility to assess credibility now resides with the reader (Brand-Gruwel & Stadtler, 2011). As such, competent evaluation of Internet sources is an important part of 21st Century digital literacy (Leu, Kinzer, Coiro, Castek, & Henry, 2013) and reading literacy (Organisation for Economic Co-operation and Development [OECD], 2019). For example, the Australian Curriculum (Australian Curriculum, Assessment and Reporting Authority [ACARA], 2016) mandates that students should be able to assess information using “given criteria” by the end of Year 6 and to develop and use criteria systematically to evaluate the “quality, suitability and credibility of located data or information and sources” by the end of Year 10.
However, even though secondary school students use the Internet frequently, this does not ensure sufficient declarative and procedural knowledge to effectively evaluate sources (Mason, Junyent, & Tornatora, 2014), which is crucial to comprehending multiple texts (Kammerer, Meier, & Stahl, 2016; Strømsø, Bråten, Britt, & Ferguson, 2013). On average, students pay scant attention to source information essential for assessing trustworthiness (Braasch et al., 2013; Britt & Aglinskas, 2002), see web-based texts as
authorless (Alexander and Disciplined Reading & Learning Research Laboratory, 2012) and experience difficulty in understanding and
applying website evaluation criteria (Brem, Russell, & Weems, 2001; Potocki et al., 2020).
One important factor that seems to predict how students attend to source information on the Internet and synthesise information
from texts expressing diverse, even contrasting points of view, is their epistemic thinking (Bråten, Britt, et al., 2011a,b; Brem et al.,
2001; Kammerer, Bråten, Gerjets, & Strømsø, 2013; Paul, Macedo-Rouet, Rouet, & Stadtler, 2017; Strømsø, Bråten, & Britt, 2011).
Epistemic thinking refers to laypeople’s thoughts and beliefs about the nature of knowledge and knowing (Hofer & Pintrich, 1997).
Developing an adequate understanding of the epistemic nature of knowledge is an important goal of education (Schiefer et al., 2020).
For example, Australian students should be able to “understand the provisional nature of historical knowledge” by the end of Year 10
(ACARA, 2016). However, even though many studies support the assumption of an orderly developmental progression of epistemic
thinking (King & Kitchener, 2004; Kuhn, Cheney, & Weinstock, 2000), recent studies show mixed results (Chiu, Liang, & Tsai, 2016;
Mason, Boscolo, Tornatora, & Ronconi, 2013; Winberg, Hofverberg, & Lindfors, 2019).
Previous studies about epistemic thinking, source evaluation, and their relationship have mostly focused on university students,
children, or single year groups within secondary education. Those few studies with multiple year groups of secondary school students show inconsistent results (see 1.1–1.3), calling for more replication research in this area (Makel & Plucker, 2014).
Additionally, epistemic thinking and source evaluation seem to be influenced by culture (see 1.1–1.3). For example, Australian values
emphasize “mateship”, egalitarianism, tolerance, fair play, and giving a “fair go” (Department of Home Affairs, 2019; Plage, Willing, Skrbis, & Woodward, 2016; Tranter & Donoghue, 2015). Thus, Australians value conformity significantly less and equality significantly more than people from other Western English-speaking countries (Feather, 1998). These Australian values may manifest in considering knowledge subjective (see 1.1) and doubting “authoritative” information sources in favour of giving less reliable information sources a “fair go” (see 1.2). We extend previous research by conducting the first study with Australian secondary school
students that compares epistemic thinking and source evaluation between Year 7, Year 9, and Year 11 students. Results will also
provide practically relevant information to Australian teachers about the status quo of students’ epistemic thinking and critical source
evaluation.

1.1. Epistemic thinking

We use the term “epistemic thinking” in reference to the Epistemic Thinking Assessment (ETA; Barzilai & Weinstock, 2015) used in
this study. Barzilai and Weinstock (2015) define epistemic thinking as “meta-level understandings of the nature of knowledge and
knowing” (p. 144) and argue that these “theories-in-action” emerge when people engage in thinking about specific knowledge claims
and information sources. Despite this narrow definition, we will also review more diverse research about epistemic or epistemological
beliefs, understanding, perspectives, stances, theories, reflection, cognition, or personal epistemology. All of these diverse strands of
research are broadly about how laypeople - rather than philosophers - think about the nature of (scientific) knowledge and the process
of knowing (Barzilai & Eshet-Alkalai, 2015; Barzilai & Ka’adan, 2017); thus, the targeted constructs are lay beliefs or theories (Barzilai
& Ka’adan, 2017) that can be measured via questionnaires or interviews. To facilitate comprehension, we will use the term “epistemic
thinking” in all of these instances, even though not all of the presented theories and studies conform to the narrow definition presented above. In this section, we will not review research about “epistemic practices” conceptualized as manifest actions or strategies from
which people’s epistemologies can be inferred (rather see 1.3; Chinn, Rinehart, & Buckland, 2011).
There are numerous theoretical models of epistemic thinking, but we will only review those models underlying the ETA. Based on
Perry’s (1970) seminal work, developmental models focus on how people’s epistemic thinking changes by age and education (Hofer &
Pintrich, 1997; Sandoval, Greene, & Bråten, 2016). Kuhn and Weinstock (Kuhn et al., 2000; Kuhn & Weinstock, 2002) proposed the
most prominent model: They suggested that people start out as absolutists, who think that objective knowledge exists and knowledge
claims are either true or false. They may shift to a multiplist perspective, where knowledge is considered uncertain and subjective, to the
point that opinions are valued as much as scientific evidence. Finally, some people may become evaluativists who also accept
knowledge as constructed and uncertain but believe that knowledge claims can be evaluated against established criteria such as
scientific evidence. Developmental frameworks may use different terms for these levels, but agree in positing development along one
dimension.
In contrast, based on Schommer’s (1990) seminal research, dimensional models identify multiple independent dimensions of
epistemic thinking and relate these to cognition, learning, and achievement (Sandoval et al., 2016). Hofer and Pintrich (1997) proposed the most prominent framework. Their first two dimensions pertain to the “nature of knowledge”: (1) Beliefs regarding simplicity range from views that knowledge is simple and consists of discrete facts to considering knowledge complex and consisting of interwoven constructs. (2) Beliefs regarding certainty range from views of knowledge as absolute and stable to considering knowledge
evolving, tentative, and changing. The two other dimensions pertain to the “nature of knowing”: (3) Beliefs regarding the source of
knowledge concern the relationship between the knower and the known and range from views that knowledge resides outside oneself
and is transmitted to considering knowledge individually constructed. (4) Beliefs regarding the justification of knowledge are about
what makes a sufficient knowledge claim, and range from views that knowledge can be justified by authorities to views that it has to be
justified by the rules of inquiry, logic, or reason, based on evidence. Dimensional frameworks posit different kinds and numbers of
dimensions. Justification of knowledge has been proposed as the primary dimension (Greene, Torney-Purta, & Azevedo, 2010b) and
sub-divided into a trichotomous framework of justification by authority, personal justification, and justification by multiple sources
(Ferguson & Bråten, 2013).
Extensive research within both research traditions has shown that domain-general, domain-specific, and task-specific situated epistemic thinking (e.g., Chinn et al., 2014; Elby & Hammer, 2010) is relevant for human cognition, affect, and behaviour.
Endorsement of evaluativism and viewing knowledge as complex, tentative, socially constructed, and justified by evidence is generally
associated with beneficial outcomes regarding (self-regulated) learning and (metacognitive) learning strategies, (multiple) text
comprehension, or academic achievement (Hofer & Pintrich, 1997; Sandoval et al., 2016). These findings have been validated by a recent meta-analysis that shows significant relationships between epistemic thinking and learning and academic achievement from elementary through to graduate school (Greene, Cartiff, & Duke, 2018). Such beneficial epistemic thinking has been labelled “sophisticated”. However, value-laden normative labels can be challenged, for example, because relying on authority – traditionally
considered “naïve” – can be productive in some contexts (Bromme, Kienhues, & Porsch, 2010; Chinn & Sandoval, 2018). Thus,
“adaptive” epistemic thinking has been suggested as most productive (Bromme, Pieschl, & Stahl, 2010). To avoid these discussions, we
will only use descriptive labels.
Despite extensive research, there are open questions regarding epistemic development during secondary education. Classic longitudinal studies show increases in epistemic thinking levels (King & Kitchener, 2004) or in beliefs in complex and tentative knowledge
(Schommer, Calvert, Gariglietti, & Bajaj, 1997). Similar results have been found in classic cross-sectional studies (Boyes & Chandler,
1992; Gottlieb, 2007; Hallett, Chandler, & Krettenauer, 2002; King & Kitchener, 2004; Krettenauer, 2005). For example, Kuhn et al.
(2000) surveyed American 5th, 8th and 12th graders as well as adults and observed declining absolutism and increasing evaluativism.
Weinstock, Neuman, and Glassner (2006) surveyed Israeli 7th, 9th and 11th grade students and observed the highest percentage of
evaluativism in Year 11. Thus, conclusions about an “orderly progression in the levels of epistemological understanding” (Kuhn et al.,
2000, p. 324) seem warranted based on these classic studies. However, recent results are mixed: Some studies failed to show epistemic
development (Mason et al., 2013; Winberg et al., 2019). Other studies detected it only for selected domains or dimensions or yielded
inconsistent results (Chiu et al., 2016; Greene, Torney-Purta, & Azevedo, 2010; Iordanou, Muis, & Kendeou, 2019; Muis, Trevors,
Duffy, Ranellucci, & Foy, 2016). To account for such results, theoretical models of epistemic development acknowledge complex
interactions with contexts and posit recursive epistemic development (Rule & Bendixen, 2010; Sandoval et al., 2016). For example,
Schommer-Aikins, Bird, and Bakken (2010) propose that one recursion “occurs at about the beginning of adolescence” (p. 38).
To conclude, evaluativist epistemic thinking is an important goal of (secondary) education (ACARA, 2016; Kuhn et al., 2000;
Schiefer et al., 2020). To pursue this goal, teachers need accurate information about students’ epistemic thinking. But “there is still a
shortage of research on the epistemic development at different educational levels from junior high school through to college” (Chiu
et al., 2016, p. 288). Epistemic thinking also differs between and within cultures (Bernholt, Lindfors, & Winberg, 2019; Chan & Elliott, 2004; Gottlieb, 2007; Strømsø, Bråten, Anmarkrud, & Ferguson, 2016; Weinstock, 2015). Previous studies with Australian samples
have only focused on children, university students, or (pre-service) teachers (Brownlee et al., 2012; Tolhurst, 2007). To fill this gap and
add to epistemic developmental research, we compare epistemic thinking between Year 7, Year 9, and Year 11 Australian secondary
school students. For this purpose, we selected the ETA (Barzilai & Weinstock, 2015) because it is based on both developmental and
dimensional approaches, because it allows tracking students’ epistemic development which is one of our main research interests, and
because it captures students’ epistemic thinking regarding specific controversial scenarios where source evaluation should be relevant
(see 1.2). As the target dilemma, we chose the effects of video games because epistemic development is most likely during secondary
education regarding such an ill-structured social science topic with humanly constructed, uncertain, contested areas of social facts and
values (Chandler & Proulx, 2010; Greene, Torney-Purta, & Azevedo, 2010; Iordanou et al., 2019).

1.2. Source evaluation

In many scientific disciplines, it is necessary to deal with multiple sources that may complement or contradict one another (Scharrer & Salmerón, 2016). Source evaluation may help reconcile potential conflicts (Bråten, Britt, et al., 2011a,b; Stadtler & Bromme,
2013) and forming a coherent mental model (Stadtler & Bromme, 2014). Even for laypeople, source evaluation skills are essential
nowadays because the absence of editorial “gatekeepers” on the Internet results in public information of varying credibility
(Brand-Gruwel & Stadtler, 2011). We will review research about sourcing and about people’s subjective judgments about source
trustworthiness, credibility, authoritativeness, reliability, or quality – terms that we will use interchangeably as used by the cited
sources in this section (cf. Watson, 2014).
In his seminal work, Wineburg (1991) conceptualized sourcing as a heuristic where attention is first given to source attributes, such
as author, text type, and date and place of production, as a means of interpreting a text’s content and judging its trustworthiness. Whilst
Wineburg’s research focused on the domain of history, sourcing is now recognized as an essential component of multiple text
comprehension across topics and domains (List & Alexander, 2017). Concentrating on cognitive processes, Perfetti, Rouet, and Britt
(1999) proposed a theoretical framework for document representation. Competent readers who integrate information from multiple
texts should construct an intertext model, including information about rhetorical goals (intent or audience) and the source, for example
about the author (name, status, motives, or access), setting (place, time, or culture) or the form (language style or type) of texts. Other
theoretical models posit that readers should pay special attention to author expertise and benevolence (Stadtler & Bromme, 2014).
Research showed that people most frequently use four trustworthiness criteria, namely author, content, document text type, and their
personal opinion (Rouet, Britt, Mason, & Perfetti, 1996). Recent theoretical models elaborate conditions under which readers engage in
sourcing, for example, conflicts between texts may trigger sourcing (Braasch & Bråten, 2017).
Research shows that even adults rarely display perfect source evaluation skills. Domain experts are most likely to pay attention to
source attributes and use relevant criteria to judge trustworthiness, while novices mostly ignore source attributes, focus primarily on
content (Bråten, Strømsø, & Salmeron, 2011; Wineburg, 1991) or use superficial criteria such as document length (Walraven,
Brand-Gruwel, & Boshuizen, 2009) or source popularity (Mason, Ariasi, & Boldrin, 2011). Source evaluation is also influenced by reader characteristics such as their attitudes (van Strien, Kammerer, Brand-Gruwel, & Boshuizen, 2016) or topic familiarity
(McCrudden, Stenseth, Bråten, & Strømsø, 2016).
Despite extensive research, there are open questions regarding the development of source evaluation during secondary education.
Because of the lack of longitudinal research, we can only report results from cross-sectional studies: Metzger et al. (2015) conducted a
representative survey among 11-18-year-old US youths and found that older adolescents reported using more analytic credibility
evaluation strategies and were better able to identify a hoax website. Potocki et al. (2020) investigated 5th, 7th, and 9th grade and
undergraduate students and found significant progress regarding their ability to differentiate between competent and non-competent
authors (cf. Macedo-Rouet et al., 2019) and their use of progressively more source-based justifications than content-based justifications. Similarly, Mason and Boldrin (2008) summarized studies showing that Grade 7 students base their justifications of source
credibility frequently on quantity and content of information, but Grade 11 and 12 students often refer to the authoritativeness of
sources. However, other findings are less consistent because they show no development or poor performance across educational levels
(Brem et al., 2001; Britt & Aglinskas, 2002; McGrew et al., 2018).
Some of these inconsistencies might be explained by the use of different methodologies: If explicitly asked, students demonstrate
some knowledge about sourcing (Barzilai & Zohar, 2012; Mason et al., 2014, 2011; Paul et al., 2017). For example, Paul et al. (2017)
interviewed 9th grade students. Almost all students considered evaluating the trustworthiness of sources useful and referred to
relevant criteria such as author expertise (80%) or the justification of knowledge via scientific research (64%). If prompted, students’
sourcing may also improve (Barzilai & Zohar, 2012; Braasch et al., 2013; Mason et al., 2014). For example, Kammerer, Meier, and Stahl
(2016) instructed German 9th graders to evaluate multiple Internet texts on the topic of mobile phone health risks. Students with
prompting outperformed those without prompting in discerning reliable sources from less reliable ones. However, students seem
limited in their spontaneous sourcing (Brem et al., 2001; Britt & Aglinskas, 2002; Kiili et al., 2018; Maggioni, Fox, & Alexander, 2010;
Mason, Boldrin, & Ariasi, 2010; Mason et al., 2011; Walraven et al., 2009; Wineburg, 1991). They seem to evaluate information
incidentally only and focus primarily on content and on vague or superficial criteria (Barzilai & Zohar, 2012; Coiro et al., 2015; Kiili, Laurinen, & Marttunen, 2008; Mason et al., 2018; Paul et al., 2017; Watson, 2014). Only when information sources were significantly disparate were students able to consider important author attributes (McCrudden et al., 2016). Thus, there seems to be a gap between students’ source
evaluation knowledge and their actions, a general developmental trend towards better source evaluation during secondary school, but
suboptimal skills even in older students (Potocki et al., 2020).
To conclude, critical source evaluation skills are an important goal of (secondary) education (ACARA, 2016; Bråten, Brante, &
Strømsø, 2019; Iordanou et al., 2019). To pursue this goal, teachers need accurate information about students’ source evaluation, but
research is still “mixed and inconsistent” (Macedo-Rouet et al., 2019). Source evaluation skills may also differ between cultures (OECD, 2019; Paul et al., 2017), but studies about Australians’ source evaluation are rare (Watson, 2014). To fill this gap and add to source
evaluation developmental research, we compare critical source evaluation between Year 7, Year 9, and Year 11 Australian secondary
school students. We use the Trustworthiness and Trust Criteria measure (Strømsø et al., 2011) to prompt students to evaluate the
trustworthiness of four fictitious blogs. Blogs differ systematically in the highly relevant dimension of reliability (Kammerer et al., 2016;
Macedo-Rouet et al., 2019; Paul et al., 2017; Potocki et al., 2020; Rouet et al., 1996). To trigger sourcing (Braasch & Bråten, 2017),
blogs also differ in content, which may bias source evaluation in less skilled students (Bråten et al., 2011a,b; Wineburg, 1991; Potocki
et al., 2020; Rouet et al., 1996; van Strien et al., 2016).

1.3. The relationship between epistemic thinking and critical source evaluation

Epistemic thinking and critical source evaluation are inextricably linked. In research on epistemic thinking, online searching, source
evaluation, and conflicting information have been used frequently for theory building and testing (Hofer, 2004). For example, source
evaluation is conceptualized as situated “epistemic practice” (Chinn et al., 2014; Chinn & Sandoval, 2018; Elby & Hammer, 2010).
Thus, traces of students’ epistemic practices (Greene, Muis, & Pieschl, 2010) can be used to infer their epistemic thinking (Brem et al.,
2001; Mason et al., 2011). In research on source evaluation and multiple document comprehension, epistemic thinking is modelled and
tested as an important component (e.g., Bråten, Britt et al., 2011a,b). In developmental models, source evaluation seems embedded in
the stages of epistemic thinking (Barzilai, Tzadok, & Eshet-Alkalai, 2015; Kuhn & Weinstock, 2002): Absolutists believe in expert
authority; thus, they need to find a reliable expert. Multiplists believe that everything is subjective; thus, they may choose to believe anything without source evaluation. Only evaluativists need to engage in critical source evaluation, because they believe that
knowledge claims need to be evaluated. The link between epistemic thinking and source evaluation is also modelled on a metacognitive
level (Bromme et al., 2010a,b; Hofer, 2004; Mason & Boldrin, 2008). For example, source evaluation may operate on a metacognitive
level if reasoning involves beliefs about the certainty, source, structure and justification of knowledge claims (Barzilai & Ka’adan, 2017;
Barzilai & Zohar, 2014).
Prior research, mostly with university students, seems to indicate that epistemic thinking is related to whether and how people pay attention
to source information on the Internet (Barzilai & Eshet-Alkalai, 2015; Barzilai et al., 2015; Barzilai & Zohar, 2012; Brante & Strømsø,
2018; Bråten et al., 2011a,b; Brem et al., 2001; Britt & Aglinskas, 2002; Kammerer et al., 2013; Mason et al., 2010, 2011; Pérez,
Potocki, Stadtler, Macedo-Rouet, Paul & Salmerón, 2018). For example, Strømsø et al. (2011) found that epistemic beliefs were an
important predictor of Norwegian undergraduates’ ability to critically evaluate information sources on the topic of climate change.
Barzilai and colleagues (Barzilai & Eshet-Alkalai, 2015; Barzilai et al., 2015) found that Israeli undergraduates’ endorsement of
multiplism was related to less effective sourcing and a greater emphasis on personal opinion. Barzilai and Eshet-Alkalai (2015)
demonstrated that absolutism and multiplism were negative predictors, whilst evaluativism was a positive predictor of comprehension of
multiple author viewpoints regarding a socio-scientific topic. These studies with adults generally show a relationship between epistemic thinking and source evaluation, but detailed findings about which epistemic thinking levels or dimensions are related to
source evaluation under which specific conditions are mixed.
Significantly fewer studies have been conducted with secondary school students. Only one of those studies explicitly explores the
development of epistemic thinking and online epistemic processing cross-sectionally by comparing 11-12-year-olds, 13-14-year-olds,
and university students (Iordanou et al., 2019): Evaluativist epistemic thinking and age significantly predicted high epistemic pro­
cessing about the credibility and function of evidence. Epistemic thinking also significantly mediated the relationship between age and
high epistemic processing – for science but not for history. There are more studies with single educational levels: For example, 6th
grade evaluativists and absolutists differed little in their criteria for evaluating website trustworthiness (Barzilai & Zohar, 2012), 9th
grade students’ epistemic thinking did not predict their use of strategies to determine source reliability (Barzilai & Ka’adan, 2017), but
the more 8th grade students believed in the complexity of scientific knowledge, the more they reflected at a higher level about the
justification of online information (Mason et al., 2010), and 18-year-olds who believed in complex and variable knowledge viewed
internet forums more critically (Porsch & Bromme, 2011). Thus, significant relationships between epistemic thinking and source
evaluation were only found in selected domains (Iordanou et al., 2019) and contexts (Porsch & Bromme, 2011) or regarding selected
criteria (Barzilai & Zohar, 2012) and epistemic thinking dimensions (Mason, Boldrin, & Ariasi, 2010) – or not at all (Barzilai &
Ka’adan, 2017). Because of these mixed results, there is an urgent call for more developmental research about the relationship between
epistemic thinking and source evaluation (Iordanou et al., 2019).
To conclude, research about the development of the relationship between epistemic thinking and source evaluation in secondary
education is rare (Iordanou et al., 2019) and the results with single educational levels are inconsistent. Because of the cultural variability of both constructs (Bernholt et al., 2019; Chan & Elliott, 2004; Gottlieb, 2007; Paul et al., 2017; Strømsø et al., 2016; Weinstock,
2015), their relationship may also vary between cultures. To answer the call for more developmental research in this context, the
present study is the first to explore the relationship between epistemic thinking and source evaluation in Australian secondary school
students from Year 7, Year 9, and Year 11.

1.4. The current study and hypotheses

Policies mandate and theories predict that secondary school students should progress significantly in their epistemic thinking and
in their source evaluation skills throughout their secondary education. However, previous research has mostly focused on university
students, on single secondary school levels only, and/or the results are mixed. Additionally, both variables vary by culture, but
research in Australia is rare. To extend these lines of research, we conducted the first cross-sectional study with Australian secondary
school students from Year 7, Year 9, and Year 11 to test the following hypotheses:
Hypothesis 1. Higher Year Groups display higher Epistemic Thinking levels than lower Year Groups.
Hypothesis 2. Higher Year Groups display better Source Evaluation than lower Year Groups.
Hypothesis 3. Higher Epistemic Thinking levels are positively related to competent Source Evaluation.

2. Material and methods

2.1. Procedure and design

Australian secondary school students from Year 7, Year 9, and Year 11 participated in this cross-sectional field study that was
implemented as an anonymous online questionnaire. At the start, students reported the frequency of their own video game playing
(Game Frequency) and their opinion about video game effects (Game Attitude). In the first main part, students read four fictitious
Internet blogs about the effects of video games that were experimentally manipulated in a two-by-two design regarding the dimensions
of Reliability (unreliable vs. reliable) and Content (pro vs. contra video games) and that were presented in random order. To capture
their critical Source Evaluation skills, students evaluated the overall Trustworthiness of each blog. Additionally, they evaluated how
much they used the trust criteria of Author, Content, Type, and their own Opinion in these judgments. In the second main part, students
completed the scenario-based Epistemic Thinking Assessment (ETA) that diagnosed the extent of their Absolutism, Multiplism, and
Evaluativism. At the end, students reported demographic information about their gender, Year Group, and age. Data was collected in
group sessions at the school. At the end of each session, students were debriefed regarding the fictitious nature of the blog posts. The
University of Newcastle Human Research Ethics Committee (HREC) approved all materials and methods.

2.2. Participants

All students of Year 7, Year 9, and Year 11 from one Australian regional non-private middle-class secondary school were invited to
participate. This school caters predominantly to native-born mixed-ability Australians with English as their only language. Students
needed parental consent to volunteer and had to confirm their informed consent at the end of the online questionnaire. Within each
Year Group, a 50 AUD gift voucher was raffled off as incentive to participate. A priori, we set a minimum sample size of n = 30 per Year
Group.
We collected the data of n = 226 student volunteers, corresponding to a participation rate of approximately 34.2% of the roughly N = 660 invited students. We excluded five students who did not confirm their consent and three students who took less time to answer the questionnaire than a competent adult reader would need for reading the texts and questions (i.e., 6 min). Our final convenience sample
consisted of n = 218 students. Table 1 gives an overview of the characteristics of this final sample and each Year Group.
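
Expressed as a data-cleaning step, these exclusions amount to a simple filter; the pandas sketch below uses hypothetical column names ('consent', 'duration_min') rather than the actual data structure:

import pandas as pd

# Hypothetical raw responses: one row per volunteer, with a consent flag and
# the total time (in minutes) spent on the online questionnaire.
raw = pd.DataFrame({
    "consent": [True, True, False, True],
    "duration_min": [14.2, 5.1, 20.0, 9.8],
})

MIN_DURATION = 6  # minimum plausible time for reading the texts and questions

# Keep only consenting students who took at least the minimum plausible time.
final = raw[raw["consent"] & (raw["duration_min"] >= MIN_DURATION)]
print(len(final))  # in the study, the analogous filter yielded n = 218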

2.3. Blogs

We created four fictitious blogs (Appendix A) about the effects of video games that differed in Reliability (unreliable vs. reliable) and
Content (pro vs. contra video games). A priori we assumed that this topic should be personally relevant and interesting for students,
because previous research had shown that Australian adolescents played video games frequently and extensively (Brand et al., 2019;
Yu & Baxter, 2015). Additionally, blogs are a common format used in research (Barzilai et al., 2015) because laypeople as well as
scientists blog frequently (Brumfiel, 2009) and because laypeople consider blogs moderately reliable (Keskenidou, Kyridis, Valsamidou, & Soulani, 2014) and are sensitive to blog features indicating trustworthiness (Hendriks, Kienhues, & Bromme, 2016).
In this study, we use the term “reliability” in reference to reliable blogs that are characterized by author expertise and reliable
corroboration of claims (Coiro et al., 2015; Mason et al., 2010, 2014; Pérez et al., 2018; Potocki et al., 2020). Manipulating author
competence (e.g., layperson vs. PhD/university professor) has been used previously to diagnose source evaluation skills (Keskenidou
et al., 2014; Macedo-Rouet et al., 2019; Paul et al., 2017; Potocki et al., 2020). Based on the rationale that “sources are trustworthy in
their claims to the extent that they use reliable epistemic processes to produce the claims” (Chinn & Rinehart, 2016, pp. 1709–1710),
we went beyond manipulating author features: Unreliable blog posts were ostensibly written by mothers who wrote about their
one-time personal experiences with their own children. Reliable blog posts were ostensibly written by psychologists with PhD degrees
who worked at universities and wrote about key findings of scientific reviews or meta-analyses that summarized recently published
studies. Thus, these blogs also differ in the extent of corroborating their claims. The ability to distinguish between the trustworthiness
of reliable and unreliable blog posts with such challenging but authentic material should be diagnostic of students’ critical source
evaluation skills (see 1.2).
We also manipulated content. Pro blog posts reported that playing in virtual teams could foster cooperation. Contra blog posts
reported that violent video games could cause aggressive behaviour. Experts might be able to compare such claims with their rich
domain knowledge about high-quality scientific research about the effects of video games and validly judge “what is true” (Bromme
et al., 2010a,b). Thus, for experts, content could point to the underlying epistemic processes of scientific inquiry (see above) and could
be a valid indicator of blog reliability. However, please note that the content of all blogs in this study is based on scientific research
findings. Thus, even experts should not be able to rationally distinguish between the reliability of pro and contra blogs based on
content alone. Novices may intuitively consider some content more trustworthy. However, their personal opinion, experiential understanding, or anecdotal knowledge might bias such source evaluation (Sinatra, Kienhues, & Hofer, 2014; van Strien et al., 2016) –
such as in the case of gamers who devalue scientific research about violent video games (Nauroth et al., 2013). To conclude, novices
lack the necessary knowledge to rationally judge trustworthiness based on content. Thus, secondary school students should rather
focus on the second-hand evaluation of sources and decide “whom to believe” (Bromme et al., 2010a,b; see above). Presenting contradictory pro and contra blogs may trigger such source evaluation and “promote reader attention to any available source information”
(Braasch & Bråten, 2017, p. 177). Consequently, secondary school students may overcome their general tendency to focus on content
while ignoring source features (see 1.2).
The blogs were modelled after authentic blogs. Each fictitious blog included (from top to bottom) author’s name and credentials,
date of posting, URL, and body of text. The blogs were between 121 and 131 words long (M = 126, SD = 4.69). We ensured readability
as follows: We asked selected Year 7 students about readability issues and revised the blogs accordingly. We used the Lexile Analyzer®
(www.lexile.com) to ensure that the readability scores of the blogs (700L–1300L; higher scores indicate more difficult texts) were
within the suggested range for Australian Year 7 students (i.e., 1000L–1700L; ACARA, 2016). Within this study, students also judged
the difficulty of each blog post (“How difficult is this text to understand?”; 1 = very easy – 10 = very difficult). The unreliable pro video
game blog was judged easiest; for example, Year 7 students judged it to be quite easy (M = 1.94, SD = 1.45). The reliable pro video
games blog was judged most difficult; however, even Year 7 students considered it fairly readable (M = 3.94, SD = 2.14).

Table 1
Sample characteristics.

                 All            Year 7         Year 9         Year 11
n a              218 (100%)     51 (23.4%)     37 (17.0%)     130 (59.6%)
Gender b
  Female         134 (61.5%)    32 (62.7%)     20 (54.1%)     82 (63.1%)
  Male           82 (37.6%)     19 (37.3%)     16 (43.2%)     47 (36.2%)
  Other          2 (0.9%)       –              1 (2.7%)       1 (0.8%)
Age c            15.18 (1.75)   12.43 (0.54)   14.62 (0.55)   16.42 (0.54)

Note.
a Number of cases; percentages (in brackets) refer to the whole sample (row).
b Number of cases; percentages (in brackets) refer to specific Year Groups (columns).
c M and SD (in brackets).


2.4. Source evaluation

Because of its economical nature and its grounding in empirical research (Rouet et al., 1996), we chose the Trustworthiness and Trust
Criteria measure of Strømsø et al. (2011) and adapted it to our topic. After reading each blog, students answered the following
questions: First, students judged the overall Trustworthiness of the blog post (“To what extent do you trust the information in the above
text?”). Thus, in this study, the term “trustworthiness” refers to these subjective judgments about blog trustworthiness (Hendriks et al.,
2016; Paul et al., 2017; Wineburg, 1991). Second, students rated their use of four trust criteria for their judgments (“When you rated
your trust in the information in the text, to what extent did you base your judgment on?”). The trust criteria were Author (“who wrote the
text”), Content (“the content of the text”), Type (“type of text (blog, wiki, newspaper, article, etc.)”), and Opinion (“my own opinion on the
effects of video games”). Students made these judgments on 10-point scales from 1 = to a very little extent to 10 = to a very large extent.
In our statistical analyses, we consider raw Trustworthiness ratings and trust criteria ratings (Author, Content, Type, and Opinion).
Additionally, we computed composite scores based on the rationale elaborated in 2.3: Focusing on the Reliability of a source is at the
core of competent critical source evaluation and students’ ratings regarding reliable and unreliable blogs should differ substantially.
However, novices should not evaluate blogs of similar reliability differently because of different Content. We combined both aspects via
a formula that aggregates students’ ratings of all blogs into one score by subtracting the combined ratings for unreliable blogs from the
combined ratings of reliable blogs ([reliable pro + reliable contra] – [unreliable pro + unreliable contra]). Composite scores can reach
maximum positive values if students give very high ratings for both reliable blogs and very low ratings for both unreliable blogs. The
highest score would be eighteen ([10 + 10] – [1 + 1]). Negative values would indicate the reverse, namely that students rated the
unreliable blogs higher than the reliable ones. A composite score around zero indicates that students do not differentiate between
reliable and unreliable blog posts in their ratings (or that such differences were overshadowed by significantly different ratings
regarding pro and contra blogs). We computed five composite scores, one each for the overall Trustworthiness judgments and the four
trust criteria judgments (Author, Content, Type, and Opinion).
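
For illustration, the following minimal pandas sketch computes such a composite score; the column names are hypothetical stand-ins for the four blog ratings:

import pandas as pd

# One Trustworthiness rating (1-10) per blog for two hypothetical students.
ratings = pd.DataFrame({
    "reliable_pro":      [9, 5],
    "reliable_contra":   [8, 5],
    "unreliable_pro":    [2, 5],
    "unreliable_contra": [3, 5],
})

# Composite = [reliable pro + reliable contra] - [unreliable pro + unreliable contra].
# Range: -18 to +18; values around 0 indicate no differentiation between
# reliable and unreliable blog posts.
ratings["composite_trustworthiness"] = (
    ratings["reliable_pro"] + ratings["reliable_contra"]
    - ratings["unreliable_pro"] - ratings["unreliable_contra"]
)

print(ratings["composite_trustworthiness"].tolist())  # [12, 0]

In this toy example, the first student discriminates clearly between reliable and unreliable blogs (score 12), whereas the second rates all blogs identically (score 0).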
We consider judgments regarding Trustworthiness and Author to be most indicative of competent source evaluation. Content or Type
ratings may also be relevant if students interpreted these categories in reference to anecdotal personal blogs versus science blogs with
scientific evidence as content. Basing Trustworthiness ratings on personal Opinion would not be indicative of competent source
evaluation.

2.5. Epistemic thinking

The scenario-based Epistemic Thinking Assessment (ETA) was first developed and validated by Barzilai and Weinstock (2015) and
adapted for the secondary school context by Barzilai and Ka’adan (2017). Within the ETA, a controversy is presented and people are
asked questions about the nature, source, certainty, validity, and justification of knowledge regarding this topic. For each question,
they rate three potential answers that correspond to Absolutism, Multiplism, and Evaluativism.
We adapted the ETA previously used with 9th graders (Barzilai & Ka’adan, 2017) in two ways: Students answered these questions
regarding the topic effects of video games. We also did not present an additional controversy, but students were asked to consider the
disagreement between the previously presented pro and contra blogs when answering the ETA questions. For example, one question
was “Can there be certainty about the effects of video games?” with the corresponding answer options “Eventually one could know for
certain” (Absolutism), “One could never know for certain because it is impossible to agree on the topic” (Multiplism), and “There is never full
certainty, but it is possible to improve the degree of certainty” (Evaluativism). Students rated their agreement to all answer statements on
6-point scales from 1 = strongly disagree to 6 = strongly agree. Both the questions and the aligned answers within each question were
presented in random order.
Cronbach’s Alphas of the original ETA scales were mixed: Absolutism α = 0.65, Multiplism α = 0.72, and Evaluativism α = 0.54.
Because of the low internal consistency of Evaluativism, we tested the structure of the ETA via the procedures outlined by Barzilai and
Ka’adan (2017). However, we could not replicate their solution and found no better fitting solution (see Appendix B). Therefore, we
use the theoretically proposed original scales with eight items per scale despite the low internal consistency of Evaluativism.
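
For reference, Cronbach’s alpha for an eight-item scale can be computed directly from the item variances; the numpy sketch below uses hypothetical responses, not the study data:

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha; items is respondents x items (here: 8 per ETA scale)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 1-6 agreement ratings of 218 students on the eight Multiplism items.
rng = np.random.default_rng(0)
multiplism_items = rng.integers(1, 7, size=(218, 8)).astype(float)
print(round(cronbach_alpha(multiplism_items), 2))  # near 0 for random data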

3. Results

3.1. Descriptive data

We report descriptives of main variables in Table 2 and correlations between main variables in Table 3. On average, students self-
reported only a moderate frequency of video game playing (Game Frequency) and a moderately critical attitude towards the effects of
video games (Game Attitude, Table 2). Neither was significantly related to any of the main variables of this study² (Table 3).

² Please note that there were some correlations of small and medium effect size with fine-grained raw judgments. For example, high Game Frequency was associated with low Trustworthiness ratings of contra video game blogs, but with pronounced consideration of students’ own Opinion for pro video game blogs. Critical Game Attitude was associated with high Trustworthiness ratings of contra video game blogs. Thus, Game Frequency and Game Attitude seem to be somewhat related to students’ source evaluation. However, due to multiple comparisons (5 ratings × 4 blogs × 3 ETA scales = 60 correlations) any detected statistical significance might be due to chance. As we did not have a priori hypotheses about such fine-grained relationships, we will not discuss these results further.


Table 2
Descriptives (M, SD in brackets) of the main variables.

Variable                 All            Year 7         Year 9         Year 11

Potential Covariates
Game Frequency a         3.04 (1.82)    2.92 (1.64)    3.41 (2.14)    2.98 (1.80)
Game Attitude b          4.45 (1.46)    4.90 (1.53)    4.03 (1.24)    4.39 (1.46)
Epistemic Thinking Assessment (ETA) Scales c
Absolutism               3.99 (0.67)    4.01 (0.66)    4.06 (0.67)    3.96 (0.67)
Multiplism               3.55 (0.75)    3.69 (0.60)    3.58 (0.64)    3.49 (0.83)
Evaluativism             4.26 (0.50)    4.30 (0.51)    4.18 (0.53)    4.27 (0.50)
Source Evaluation Composite Scores d
Trustworthiness          1.72 (4.12)    0.00 (3.91)    2.84 (4.30)    2.08 (3.98)
Author                   1.15 (4.66)    −0.29 (3.69)   2.00 (4.77)    1.48 (4.87)
Content                  −0.16 (3.53)   −0.06 (2.35)   0.59 (3.65)    −0.41 (3.86)
Type                     0.85 (3.54)    0.25 (2.68)    1.03 (4.04)    1.03 (3.68)
Opinion                  −0.20 (3.12)   −0.41 (3.05)   −0.30 (3.37)   −0.08 (3.09)

Note.
a Game Frequency indicates answers to the question “How often do you play video games?” (1 = never – 7 = always).
b Game Attitude indicates answers to the question “Do you think video games have effects on players?” (1 = many positive effects – 4 = no effects – 7 = many negative effects).
c Answered on 6-point scales from 1 to 6; higher values indicate stronger endorsement of Absolutism, Multiplism, and Evaluativism, respectively.
d Positive values indicate higher values for reliable blog posts, negative values indicate the reverse. Values around zero indicate no differences between reliable and unreliable blog posts.

Table 3
Bivariate correlations between the main variables.

Variables                2        3      4       5       6       7        8        9       10

Potential Covariates
1. Game Frequency a      -.45***  -.13   .04     -.08    -.08    -.07     -.09     -.00    -.03
2. Game Attitude b       –        .18    .02     .19     -.02    .06      .91      .09     .04
Epistemic Thinking Assessment (ETA) Scales c
3. Absolutism                     –      -.26**  .17     .17     .12      .09      .08     -.14
4. Multiplism                            –       .28**   -.27**  -.06     .08      .02     .01
5. Evaluativism                                  –       -.03    .06      .08      .02     -.03
Source Evaluation Composite Scores d
6. Trustworthiness                                       –       .32***   .27**    .18     .12
7. Author                                                        –        .31***   .45***  .06
8. Content                                                                –        .36***  .18
9. Type                                                                            –       .04
10. Opinion                                                                                –

Note. Reported are Pearson’s product-moment correlations between variables in the whole sample (n = 218).
*p < .05, **p < .01, ***p < .001, after Bonferroni corrections.
a Game Frequency indicates answers to the question “How often do you play video games?” (1 = never – 7 = always).
b Game Attitude indicates answers to the question “Do you think video games have effects on players?” (1 = many positive effects – 4 = no effects – 7 = many negative effects).
c Answered on 6-point scales from 1 to 6; higher values indicate stronger endorsement of Absolutism, Multiplism, and Evaluativism, respectively.
d Positive values indicate higher values for reliable blog posts, negative values indicate the reverse. Values around zero indicate no differences between reliable and unreliable blog posts.

Therefore, we did not include Game Frequency or Game Attitude as covariates in the analyses. We only report significant results at p < .05.
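
As an aside on the multiple-comparison safeguard mentioned above (and in the note to Table 3), Bonferroni-corrected correlation tests can be computed as in the following sketch; variable names and data are hypothetical stand-ins for the study variables:

from itertools import combinations

import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

# Hypothetical stand-in for n = 218 students on a few of the main variables.
rng = np.random.default_rng(1)
data = pd.DataFrame(rng.normal(size=(218, 4)),
                    columns=["Multiplism", "Evaluativism", "Trustworthiness", "Author"])

pairs = list(combinations(data.columns, 2))
r_values, p_values = zip(*(pearsonr(data[a], data[b]) for a, b in pairs))

# Bonferroni correction: each raw p-value is multiplied by the number of tests.
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="bonferroni")
for (a, b), r, p, sig in zip(pairs, r_values, p_adjusted, reject):
    print(f"{a} x {b}: r = {r:.2f}, adjusted p = {p:.3f}, significant = {sig}")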

3.2. Hypothesis 1. Higher Year Groups display higher Epistemic Thinking levels than lower Year Groups

We computed one-way ANOVAs with the between-subject factor Year Group (Year 7, Year 9, and Year 11) for all ETA scales
separately. If ANOVA assumptions were violated, we computed non-parametric Kruskal-Wallis tests instead. We found no significant
differences between Year Groups for Absolutism, Multiplism, or Evaluativism.

3.3. Hypothesis 2. Higher Year Groups display better Source Evaluation than lower Year Groups

3.3.1. Trustworthiness
We computed a mixed three-by-two-by-two ANOVA with the between-subject factor Year Group (Year 7, Year 9, and Year 11) and
the within-subject factors Reliability (reliable vs. unreliable) and Content (pro vs. contra) of blogs for students’ raw Trustworthiness
judgments. Results show a significant main effect of Year Group, F(2, 215) = 3.443, p = .034, ηp² = 0.031, a significant main effect of Reliability, F(1, 215) = 27.610, p < .001, ηp² = 0.114, and a significant Year Group-by-Reliability interaction, F(2, 215) = 6.636, p = .002, ηp² = 0.058. Year 7 students did not differentiate between unreliable and reliable blog posts in their Trustworthiness judgments, but Year
9 and Year 11 did (Fig. 1).
Confirming these results, a one-way ANOVA with the between-subject factor Year Group (Year 7, Year 9, and Year 11) for the
Composite Trustworthiness Score shows significant Year Group differences, F(2, 215) = 214.390, p = .002, ηp² = 0.058. Tukey HSD tests
show significant differences between Year 7 and Year 9 students at p = .004 and between Year 7 and Year 11 students at p = .005.
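
For readers who want to reproduce this style of analysis, one possible Python workflow on synthetic data is sketched below. Note a simplification: pingouin’s mixed_anova supports a single within factor, so the sketch tests only Year Group × Reliability (the reported analysis additionally crossed Reliability with Content); all column names and values are illustrative, not the study data.

import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format data: one row per student x Reliability level,
# with Trustworthiness averaged over the pro/contra blogs.
rng = np.random.default_rng(2)
n = 60
long = pd.DataFrame({
    "id": np.repeat(np.arange(n), 2),
    "year_group": np.repeat(rng.choice(["Year 7", "Year 9", "Year 11"], size=n), 2),
    "reliability": np.tile(["reliable", "unreliable"], n),
    "trust": rng.integers(1, 11, size=2 * n).astype(float),
})

# Mixed ANOVA: between factor Year Group, within factor Reliability.
aov = pg.mixed_anova(data=long, dv="trust", within="reliability",
                     subject="id", between="year_group")
print(aov[["Source", "F", "p-unc", "np2"]])  # np2 = partial eta squared

# Follow-up on a composite score (reliable minus unreliable): one-way ANOVA
# across Year Groups, then Tukey HSD pairwise comparisons.
wide = long.pivot(index="id", columns="reliability", values="trust")
wide["composite"] = wide["reliable"] - wide["unreliable"]
wide["year_group"] = long.groupby("id")["year_group"].first()
F, p = f_oneway(*(g["composite"].to_numpy() for _, g in wide.groupby("year_group")))
print(F, p)
print(pairwise_tukeyhsd(wide["composite"], wide["year_group"]).summary())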

3.3.2. Trust criteria


We computed the same mixed three-by-two-by-two ANOVAs with the between-subject factor Year Group (Year 7, Year 9, and Year
11) and the within-subject factors Reliability (reliable vs. unreliable) and Content (pro vs. contra) of blogs for each of the raw trust
criteria judgments (Author, Content, Type, and Opinion). For Author, results show a significant main effect of Year Group, F(2, 215) = 4.885, p = .008, ηp² = 0.043, a significant main effect of Reliability, F(1, 215) = 8.790, p = .003, ηp² = 0.039, and a significant Year Group-by-Reliability interaction, F(2, 215) = 3.464, p = .033, ηp² = 0.031. Year 7 students did not differentiate between unreliable and reliable blog posts in their use of the Author trust criterion, but Year 9 and Year 11 did (Fig. 2). For Type, results show a significant main effect of Reliability, F(1, 215) = 7.861, p = .006, ηp² = 0.035; across all Year Groups, students used the Type trust criterion more for reliable than for unreliable blog posts. We found no significant effects regarding Content or Opinion.
Analyses of the Composite Scores for Author, Content, Type, and Opinion confirm this pattern of results. We computed non-parametric
Kruskal-Wallis H tests because of non-normal distributions. Results show significant differences between Year Groups regarding the
Composite Author Score, χ²(2) = 8.297, p = .016. Post-hoc Mann-Whitney U tests show significant differences between Year 7 and Year
9 students in their Composite Author Scores, U = 681.00, p = .025. We found no significant differences between Year Groups regarding
their Composite Content, Type, or Opinion Scores.
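
These non-parametric tests map directly onto scipy; a minimal sketch with hypothetical Composite Author Scores per Year Group follows:

from scipy.stats import kruskal, mannwhitneyu

# Hypothetical Composite Author Scores (higher = more reliance on the Author
# criterion for reliable than for unreliable blogs) for each Year Group.
year7 = [-1.0, 0.0, -2.0, 1.0, 0.0, -1.0]
year9 = [3.0, 2.0, 4.0, 1.0, 2.0, 3.0]
year11 = [2.0, 1.0, 3.0, 0.0, 2.0, 1.0]

H, p = kruskal(year7, year9, year11)  # omnibus test across the three groups
print(H, p)

# Post-hoc pairwise comparison (two-sided by default in recent scipy).
U, p_pair = mannwhitneyu(year7, year9)
print(U, p_pair)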

3.4. Hypothesis 3. Higher Epistemic Thinking levels are positively related to competent Source Evaluation

There is only one significant correlation between ETA scales (Absolutism, Multiplism, and Evaluativism) and source evaluation
composite scores (Trustworthiness, Author, Content, Type, and Opinion): Multiplism is negatively related to the Composite Trustworthiness
Score (Table 3). To explore whether Epistemic Thinking still explained significant variance in Source Evaluation when controlling for other
variables, we computed a series of forced-order hierarchical linear regression analyses, one for each of the source evaluation composite
scores (Trustworthiness, Author, Content, Type, and Opinion). To control for students’ personal experience of playing video games (Game
Frequency) and their opinion about video games (Game Attitude), we entered these variables in the first step. In the second step, we
entered the dummy coded Year Groups. In the critical third step, we entered the ETA scales (Absolutism, Multiplism, and Evaluativism).
We only report analyses with significant results regarding the ETA scales as Game Frequency and Game Attitude did not emerge as
significant predictors and differences between Year Groups are reported in section 3.3.
Regarding the Composite Trustworthiness Score, Step 2 and Step 3 were significant and Year 7 and Multiplism emerged as significant
predictors. Students in Year 7 – compared to those in Year 9 and Year 11 – and those students who endorsed Multiplism showed lower
Composite Trustworthiness Scores (Table 4). None of the ETA scales were significant predictors regarding any of the other composite
scores (Author, Content, Type, and Opinion).
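
One way to reproduce this forced-order hierarchical regression, including the R² change tests between steps, is sketched below with statsmodels on synthetic data; the predictor order and the dummy coding (Year 11 as reference) mirror the description above, but the column names and values are illustrative:

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the student data (values and names are illustrative).
rng = np.random.default_rng(3)
n = 218
df = pd.DataFrame({
    "game_frequency": rng.integers(1, 8, n).astype(float),
    "game_attitude": rng.integers(1, 8, n).astype(float),
    "year": rng.choice(["Y7", "Y9", "Y11"], n),
    "absolutism": rng.normal(4.0, 0.7, n),
    "multiplism": rng.normal(3.6, 0.8, n),
    "evaluativism": rng.normal(4.3, 0.5, n),
    "composite_trustworthiness": rng.normal(1.7, 4.1, n),
})

# Dummy-code Year Group with Year 11 as the reference category.
dummies = pd.get_dummies(df["year"], prefix="year", dtype=float)
df = pd.concat([df, dummies.drop(columns="year_Y11")], axis=1)

def fit(predictors):
    X = sm.add_constant(df[predictors])
    return sm.OLS(df["composite_trustworthiness"], X).fit()

step1 = fit(["game_frequency", "game_attitude"])
step2 = fit(["game_frequency", "game_attitude", "year_Y7", "year_Y9"])
step3 = fit(["game_frequency", "game_attitude", "year_Y7", "year_Y9",
             "absolutism", "multiplism", "evaluativism"])

# F tests of the R-squared change between nested steps: (F_change, p, df_diff).
print(step2.compare_f_test(step1))
print(step3.compare_f_test(step2))
print(step3.summary())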

Fig. 1. Trustworthiness ratings (y-axis) by Year Group (x-axis) and Reliability (lines).


Fig. 2. Author ratings (y-axis) by Year Group (x-axis) and Reliability (lines).

Table 4
Hierarchical regression model of the Composite Trustworthiness Score.

                                               B        SE      β

Step 1 a
  Game Frequency                               −0.24    0.17    −.11
  Game Attitude                                −0.18    0.21    −.06
Step 2 b
  Game Frequency                               −0.22    0.17    −.10
  Game Attitude                                −0.03    0.21    −.01
  Year Group
    Year 7                                     −2.08    0.67    −.21**
    Year 9                                     0.83     0.75    .08
Step 3 c
  Game Frequency                               −0.18    0.16    −.08
  Game Attitude                                −0.07    0.21    −.03
  Year Group
    Year 7                                     −1.84    0.66    −.19**
    Year 9                                     0.88     0.74    .08
  Epistemic Thinking Assessment (ETA) Scales
    Absolutism                                 0.81     0.43    .10
    Multiplism                                 −1.27    0.39    −.23**
    Evaluativism                               0.22     0.58    .03

Note. Game Frequency indicates answers to the question “How often do you play video games?” (1 = never – 7 = always). Game Attitude indicates answers to the question “Do you think video games have effects on players?” (1 = many positive effects – 4 = no effects – 7 = many negative effects). Year Group is dummy coded with Year 11 as reference category; the indicated year is always coded as 1, the other year groups as 0 (e.g., for Year 7, Year 7 is coded as 1 and Year 9 and Year 11 are coded as 0). ETA scales are answered on 6-point scales with higher values indicating stronger endorsement of Absolutism, Multiplism, and Evaluativism, respectively.
*p < .05, **p < .01, ***p < .001.
a Step 1: F(2, 215) = 0.99, ns., R = .095, R² = .009, Fchange(2, 215) = 0.99, ns.
b Step 2: F(4, 213) = 3.81, p = .005, R = .258, R² = .067, R²change = .058, Fchange(2, 213) = 6.57, p = .002.
c Step 3: F(7, 210) = 5.86, p < .001, R = .373, R² = .139, R²change = .073, Fchange(3, 210) = 5.90, p = .001.


4. Discussion

4.1. Hypothesis 1. Higher Year Groups display higher Epistemic Thinking levels than lower Year Groups

Hypothesis 1 was not confirmed. Students in different Year Groups did not differ significantly in their epistemic thinking. This
finding is inconsistent with classic developmental theories (Kuhn et al., 2000; Kuhn & Weinstock, 2002), the majority of research about
epistemic development (e.g., King & Kitchener, 2004; Kuhn et al., 2000; Schommer et al., 1997; Weinstock et al., 2006), and curriculum standards (ACARA, 2016). However, it is consistent with theoretical models emphasizing complex interactions between
epistemic thinking and context or recursive development (Rule & Bendixen, 2010; Sandoval et al., 2016; Schommer-Aikins et al.,
2010) and studies not finding epistemic development (Mason et al., 2013; Winberg et al., 2019).
Some potential explanations may be related to context: Epistemic thinking is context-sensitive (Barzilai & Weinstock, 2015; Barzilai
& Zohar, 2012; Bromme et al., 2010a,b; Kuhn et al., 2000). Contrary to previous research (Brand et al., 2019; Yu & Baxter, 2015),
students in this sample did not play video games frequently, nor did they have strong opinions about video games. Thus, the topic
effects of video games might not have been sufficiently authentic, challenging, or personally relevant to engage students deeply and
elicit responses reflective of their personal epistemology. Additionally, some of the lower ability or younger students might have been
unable to engage in the higher order thinking necessary to adequately understand the questions or articulate their epistemic thinking.
Moreover, not all students may have explicitly reflected on notions of knowledge and knowing prior to our study (Barzilai & Weinstock, 2015), while the strength with which an epistemic perspective is endorsed should predict its adoption in a particular context (Barzilai & Eshet-Alkalai, 2015). Thus, students at different educational levels might have provided similar answers, but from different recursive
stages (Sandoval et al., 2016; Schommer-Aikins et al., 2010). Contrary to Australian cultural values of egalitarianism and giving a “fair
go” (Department of Home Affairs, 2019; Feather, 1998; Plage et al., 2016; Tranter & Donoghue, 2015), we did not find pronounced
endorsement of Multiplism. In comparison with Israeli students (Barzilai & Ka’adan, 2017), we found no indication of cultural variation
(cf. Bernholt et al., 2019; Chan & Elliott, 2004; Gottlieb, 2007; Strømsø et al., 2016; Weinstock, 2015).
Other potential explanations are related to measurement. Evaluativism has low internal consistency and we could not confirm a
previously detected factor structure of the ETA (Barzilai & Ka’adan, 2017; Appendix B). At the same time, the ETA elicited a similar
order of endorsement (Evaluativist > Absolutist > Multiplist; Table 2) and absolute scale values similar to those in a study with Arab Israeli 9th
grade students regarding health and nutrition topics (Barzilai & Ka’adan, 2017). To conclude, contrary to previous findings (Barzilai &
Weinstock, 2015), the ETA may not be able to capture fine-grained topic-specific differences in epistemic thinking and/or may elicit
disproportionately high endorsement of evaluativism because such statements might seem more “socially acceptable” than students’
“true” answers (cf. social desirability). Although it can be considered a very promising approach (Mason, 2016), further research is
needed about the psychometric properties of the ETA in general and its validity for use among adolescents specifically. Please note that
this seems to be a global concern of research about epistemic thinking where written self-report questionnaires often suffer from poor
psychometric quality (Greene et al., 2018; Mason, 2016), respondents often do not understand what items are asking (Greene, Torney-Purta,
Azevedo, & Robertson, 2010; Greene & Yu, 2016), and theoretically proposed dimensions are rarely replicated empirically (Clarebout
et al., 2001; DeBacker, Crowson, Beesley, Thoma, & Hestevold, 2008).

4.2. Hypothesis 2. Higher Year Groups display better Source Evaluation than lower Year Groups

Hypothesis 2 was mostly confirmed. For Trustworthiness and Author ratings most indicative of competent critical source evaluation,
we found significant year group differences in the expected direction: Students in Years 9 and 11, compared to those in Year 7,
considered the reliable blog posts, compared to unreliable ones, more trustworthy and considered author attributes more. These
findings are consistent with the hypothesis of discrepancy-induced source comprehension (Braasch & Bråten, 2017) and similar to
research with French secondary students showing that younger students did not discriminate between more or less competent sources,
but older students did (Macedo-Rouet et al., 2019; Potocki et al., 2020). Across all Year Groups, students also used the Type criterion
significantly more to judge the Trustworthiness of reliable than unreliable blogs; they might have considered different types of personal
and science blogs when making these judgments. However, we found no significant differences between Year 9 and Year 11 students
and we found no developmental effects regarding the less relevant Trustworthiness criteria (i.e., Content, Type, and Opinion).
Despite some development, our results point to relatively poor source evaluation skills: On average, students considered all blogs
moderately trustworthy (see Fig. 1). Students might have been prone to a central tendency bias due to their general uncertainty and
lack of well-practiced knowledge and skills. This seems most problematic for Year 7 students who also demonstrated an inability to
distinguish between unreliable anecdotal blogs and reliable scientific blogs. However, this finding is in line with prior research
showing that less competent readers perceive texts on the Internet as authorless (Alexander & Disciplined Reading & Learning
Research Laboratory, 2012) and novices tend to ignore source attributes and judge texts primarily on content (Paul et al., 2017;
Wineburg, 1991). Even Year 9 and Year 11 students showed only rudimentary source evaluation skills. They discriminated significantly –
but only moderately – between reliable and unreliable blogs regarding Trustworthiness and the use of the Author as a criterion for these
judgments. Prior research reveals that experts at sourcing give salience to authors when discerning text reliability (Paul et al.,
2017; Wineburg, 1991). Thus, expert source evaluators might have completely discarded the unreliable blog posts due to the
non-expert authors and anecdotal evidence and they should have recognized expert authors and valid corroboration of knowledge
claims by scientific evidence in the reliable blogs. Year 9 and 11 students only judged unreliable blogs significantly less trustworthy
than Year 7 students, but they judged the trustworthiness of reliable blogs similarly to Year 7 students. We can only speculate about
potential reasons for this pattern of results: Year 9 and Year 11 students might have had some skills for identifying unreliable sources
and, due to explicit prompting to evaluate sources, might have become more critical of unreliable blogs. However, they might have
lacked the necessary knowledge about scientific methods and about what constitutes adequate evidence to corroborate scientific
claims to be sufficiently certain of judging blogs highly trustworthy. For example, even though we checked the comprehensibility of
our blogs, Year 9 and 11 students might not have been aware of the potentially high-quality evidence that the “meta-analyses” or
“reviews” referenced in the reliable blogs could provide. We had also expected significantly better critical source evaluation in Year 11 than
in Year 9 students, but found no such difference. The poor source evaluation skills of Year 11 students raise particular concern as they are in their penultimate year of
secondary schooling, yet they are not equipped with these important life skills. Prior research shows that such poor source evaluation –
even at the end of secondary schooling – is a global issue (Brem et al., 2001; Britt & Aglinskas, 2002; Kiili et al., 2018; Maggioni et al.,
2010; Mason et al., 2018; OECD, 2019; Paul et al., 2017; Wineburg, 1991). It is unlikely that Australian cultural values (Department of
Home Affairs, 2019; Feather, 1998; Plage et al., 2016; Tranter & Donoghue, 2015) resulted in significantly inflated doubt of
authoritative sources or giving poor sources more of a “fair go”.
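
To make the discrimination pattern discussed in this section concrete, the following minimal sketch computes the reliable-versus-unreliable trustworthiness contrast per Year Group. It is an illustration only, not the authors' code; the data file and column names are hypothetical.

import pandas as pd

ratings = pd.read_csv("ratings.csv")  # hypothetical: one row per student x blog
# assumed columns: year_group, reliability ("reliable"/"unreliable"), trust

means = ratings.pivot_table(index="year_group", columns="reliability",
                            values="trust", aggfunc="mean")
means["discrimination"] = means["reliable"] - means["unreliable"]
print(means)  # pattern reported above: near zero for Year 7, positive for Years 9 and 11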

4.3. Hypothesis 3. Higher Epistemic thinking levels are positively related to competent Source Evaluation

Hypothesis 3 was partly confirmed. The Composite Trustworthiness Score, most indicative of competent critical source evaluation,
was significantly and negatively predicted by Multiplism, even when controlling for students’ use of and attitude towards video games
as well as their Year Group. This is consistent with previous research, where Multiplism was associated with poor sourcing (Barzilai &
Eshet-Alkalai, 2015; Barzilai et al., 2015). Multiplists believe that everyone has a right to their opinion and that there can be equally
true knowledge claims (Kuhn & Weinstock, 2002). Thus, students endorsing Multiplism might have considered mothers and scientists
approximately equally authoritative.
However, contrary to previous research, we did not find positive relationships between Evaluativism and source evaluation (cf.
Barzilai & Zohar, 2012; Iordanou et al., 2019) or negative relationships between Absolutism and source evaluation (cf. Barzilai & Zohar,
2012). Furthermore, none of the Trustworthiness Criteria Composite Scores were predicted by any ETA scale. We also explored more
complex interactions between relevant variables to determine developmental progress (not reported). However, the relationship
between Multiplism and the Composite Trustworthiness Score did not differ between year groups, and epistemic thinking did not moderate
or mediate the relationship between age and source evaluation (cf. Iordanou et al., 2019; an illustrative sketch of such a moderation test follows this section). Such minimal relationships between
epistemic thinking and source evaluation are inconsistent with theoretical models from different lines of research that argue
convincingly that these variables should be inextricably linked (e.g., Barzilai et al., 2015; Bråten, Britt et al., 2011a,b; Bromme, Pieschl,
& Stahl, 2010; Chinn et al., 2014; Chinn & Sandoval, 2018; Elby & Hammer, 2010; Hofer, 2004; Kuhn & Weinstock, 2002; Mason &
Boldrin, 2008). However, these mixed findings are consistent with previous research that found relationships only for selected con­
texts, selected levels of epistemic beliefs, and/or selected source evaluation skills (Barzilai & Zohar, 2012; Iordanou et al., 2019; Mason
et al., 2010; Porsch & Bromme, 2011). Please refer to section 4.1 for potential explanations for this lack of effects based on issues with
students’ Epistemic Thinking and to section 4.4 for potential explanations related to methodological limitations. Based on such mixed
findings, we can only recommend further research about the development of the relationship between epistemic thinking and source
evaluation.
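
As referenced above, the following minimal sketch shows one way such a moderation check could be run: a regression in which Year Group (dummy coded with Year 11 as reference) interacts with Multiplism. Again, this is an illustration under hypothetical column names, not the authors' analysis code.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("students.csv")  # hypothetical data file

# C(year_group, Treatment(11)) dummy-codes Year Group with Year 11 as reference;
# '*' expands to both main effects plus the Multiplism x Year Group interactions.
model = smf.ols("trust_composite ~ multiplism * C(year_group, Treatment(11))",
                data=df).fit()
print(model.summary())  # non-significant interaction terms indicate no moderation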

4.4. Limitations

The current study has a number of limitations that should be considered. Firstly, participants were volunteers from one Australian
secondary school. Thus, self-selection biases might have occurred and the sample might not be representative of Australian secondary
schools. For example, we had more female than male participants and the majority of participants had little video game experience.
Furthermore, we had unequal numbers of participants per Year Group, which might also have influenced our results. Secondly, stu­
dents evaluated the blogs in an artificial environment and were asked to judge the blogs immediately after reading them. However, this
is not a valid indicator of their spontaneous sourcing; such explicit prompting generally results in superior performance (Barzilai & Zohar,
2012; Kammerer et al., 2016; Macedo-Rouet et al., 2019; Paul et al., 2017; Potocki et al., 2020). Additionally, the Trustworthiness and
Trust Criteria instrument uses one-item measures while trustworthiness may be a more complex construct (Strømsø et al., 2011).
Furthermore, we only presented the trust criteria of author, content, type, and opinion. Therefore, we do not know if students used
additional criteria which might have been more relevant for their trustworthiness judgments. Finally, the relationship between source
evaluation and epistemic thinking was investigated via correlations. Thus, we cannot draw any conclusions about causality and we
cannot rule out the relevance of confounding variables. All of these limitations point to the benefits of future research confirming our
results with more representative samples, with other social science and natural science topics, in more naturalistic environments, and
with alternative research designs and methodologies to capture source evaluation as well as epistemic thinking more thoroughly.

5. Conclusions and implications

Despite these limitations, this exploratory field study contributes significantly to the literature: To our knowledge, it is only the
second study comparing Epistemic Thinking and Source Evaluation between multiple secondary school Year Groups and it is the first
study about both of these topics within an Australian secondary school context.
From an educational perspective, the current study attests to Australian secondary students’ lack of epistemic development
throughout secondary school and their relatively poor epistemic source evaluation skills even at the end of secondary education. It is
surprising that we found development in the epistemic practice of source evaluation but not in epistemic thinking, because usually the
development of knowledge precedes actions (Potocki et al., 2020). We can only speculate that this might be a methodological artefact:
the epistemic thinking assessment may not have been engaging enough and may have been too abstract and difficult for students (see 4.1),
while assessing the epistemic practice of source evaluation might have been comparatively easy due to explicit prompting (see 4.4) and triggering
(Braasch & Bråten, 2017). Our findings are especially alarming given the importance of both of these skills for 21st Century digital
literacy (Leu et al., 2013; OECD, 2019) and their inclusion in the Australian Curriculum (ACARA, 2016). Thus, our results point to the
need for further education or training in these areas throughout secondary schooling.
Previous research indicates that some epistemic thinking interventions (Barzilai & Ka’adan, 2017; Porsch & Bromme, 2011) and
source evaluation interventions (Brante & Strømsø, 2018; Bråten et al., 2019; Britt & Aglinskas, 2002; Stadtler et al., 2016) show
promising results in secondary education. It might be beneficial to combine such interventions to target epistemic thinking, the
epistemic practice of critical source evaluation, and epistemic metacognitive reflection simultaneously (cf. Barzilai & Zohar, 2012;
Chinn et al., 2014) and to embed such training in regular secondary teaching, especially whenever Internet research is used. Inter­
estingly, French and German high school students reported that teachers did not establish sourcing as a common practice, nor did they
provide feedback on sourcing strategy use (Paul et al., 2017). This may point to an unfortunate shortfall in teacher training in these
areas and also emphasizes the continued need for professional development of teachers. We assume that epistemic thinking and source
evaluation skills will become even more important with more nuanced media use: Nowadays, scientists may communicate their latest
findings via blogs and presidents may use Twitter to reach their constituents. Thus, secondary students – and teachers – can no longer rely
exclusively on surface author or document type attributes to determine the reliability of sources. Instead, a deeper understanding
and appreciation of the underlying epistemic processes needed to substantiate scientific knowledge claims is warranted
(Chinn & Rinehart, 2016).

CRediT author statement

Stephanie Pieschl: Conceptualization, Methodology, Formal analysis, Data curation, Visualization, Writing - original draft,
reviewing & editing (lead), Supervision; Deborah R. Sivyer: Conceptualization, Methodology, Software, Investigation, Project
administration, Formal analysis, Data curation, Writing - original draft, reviewing & editing.

Declarations of interest

None.
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Acknowledgement

We thank Richard Kurzowski for help with data collection.

Appendix A and Appendix B. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.compedu.2020.104038.

References

Australian Curriculum, Assessment and Reporting Authority [ACARA]. (2016). Curriculum. Retrieved from https://www.acara.edu.au/curriculum/.
Alexander, P. A., & Disciplined Reading & Learning Research Laboratory. (2012). Reading into the future: Competence for the twenty-first century. Educational
Psychologist, 47(4), 259–280. https://doi.org/10.1080/00461520.2012.722511
Barzilai, S., & Eshet-Alkalai, Y. (2015). The role of epistemic perspectives in comprehension of multiple author viewpoints. Learning and Instruction, 36, 86–103.
https://doi.org/10.1016/j.learninstruc.2014.12.003
Barzilai, S., & Ka’adan, I. (2017). Learning to integrate divergent information sources: The interplay of epistemic cognition and epistemic metacognition.
Metacognition and Learning, 12(2), 193–232. https://doi.org/10.1007/s11409-016-9165-7
Barzilai, S., Tzadok, E., & Eshet-Alkalai, Y. (2015). Sourcing while reading divergent expert accounts: Pathways from views of knowing to written argumentation.
Instructional Science, 43(6), 737–766. https://doi.org/10.1007/s11251-015-9359-4
Barzilai, S., & Weinstock, M. (2015). Measuring epistemic thinking within and across topics: A scenario-based approach. Contemporary Educational Psychology, 42,
141–158. https://doi.org/10.1016/j.cedpsych.2015.06.006
Barzilai, S., & Zohar, A. (2012). Epistemic thinking in action: Evaluating and integrating online sources. Cognition and Instruction, 30(1), 39–85. https://doi.org/
10.1080/07370008.2011.636495
Barzilai, S., & Zohar, A. (2014). Reconsidering personal epistemology as metacognition: A multifaceted approach to the analysis of epistemic thinking. Educational
Psychologist, 49(1), 13–35. https://doi.org/10.1080/00461520.2013.863265
Bernholt, A., Lindfors, M., & Winberg, M. (2019). Students’ epistemic beliefs in Sweden and Germany and their interrelations with classroom characteristics.
Scandinavian Journal of Educational Research, 1–17. https://doi.org/10.1080/00313831.2019.1651763
Boyes, M. C., & Chandler, M. (1992). Cognitive development, epistemic doubt, and identity formation in adolescence. Journal of Youth and Adolescence, 21(3),
277–304. https://doi.org/10.1007/BF01537019
Braasch, J. L. G., & Bråten, I. (2017). The Discrepancy-Induced Source Comprehension (D-ISC) model: Basic assumptions and preliminary evidence. Educational
Psychologist, 52(3), 167–181. https://doi.org/10.1080/00461520.2017.1323219
Braasch, J. L., Bråten, I., Strømsø, H. I., Anmarkrud, Ø., & Ferguson, L. E. (2013). Promoting secondary school students’ evaluation of source features of multiple
documents. Contemporary Educational Psychology, 38(3), 180–195. https://doi.org/10.1016/j.cedpsych.2013.03.003


Brand-Gruwel, S., & Stadtler, M. (2011). Solving information-based problems: Evaluating sources and information. Learning and Instruction, 21(2), 175–179. https://
doi.org/10.1016/j.learninstruc.2010.02.008
Brand, J. E., Jervis, J., Huggins, P. M., & Wilson, T. W. (2019). Digital Australia 2020. Eveleigh, NSW: IGEA.
Brante, E. W., & Strømsø, H. I. (2018). Sourcing in text comprehension: A review of interventions targeting sourcing skills. Educational Psychology Review, 30(3),
773–799. https://doi.org/10.1007/s10648-017-9421-7
Bråten, I., Brante, E. W., & Strømsø, H. I. (2019). Teaching sourcing in upper secondary school: A comprehensive sourcing intervention with follow-up data. Reading
Research Quarterly, 54(4), 481–505. https://doi.org/10.1002/rrq.253
Bråten, I., Britt, M. A., Strømsø, H. I., & Rouet, J. (2011a). The role of epistemic beliefs in the comprehension of multiple expository texts: Toward an integrated model.
Educational Psychologist, 46(1), 48–70. https://doi.org/10.1080/00461520.2011.538647
Bråten, I., Strømsø, H. I., & Salmeron, L. (2011b). Trust and mistrust when students read multiple information sources about climate change. Learning and Instruction,
21(2), 180–192. https://doi.org/10.1016/j.learninstruc.2010.02.002
Brem, S. K., Russell, J., & Weems, L. (2001). Science on the web: Student evaluations of scientific arguments. Discourse Processes, 32(2/3), 191–213. https://doi.org/
10.1080/0163853X.2001.9651598
Britt, M. A., & Aglinskas, C. (2002). Improving students’ ability to identify and use source information. Cognition and Instruction, 20(4), 485–522. https://doi.org/
10.1207/S1532690XCI2004_2
Bromme, R., Kienhues, D., & Porsch, T. (2010a). Who knows what and who can we believe? Epistemological beliefs are beliefs about knowledge (mostly) attained
from others. In L. D. Bendixen, & F. C. Feucht (Eds.), Personal epistemology in the classroom: Theory, research, and implications for practice (pp. 163–193). Cambridge:
Cambridge University Press.
Bromme, R., Pieschl, S., & Stahl, E. (2010b). Epistemological beliefs are standards for adaptive learning: A functional theory about epistemological beliefs and
metacognition. Metacognition & Learning, 5(1), 7–26. https://doi.org/10.1007/s11409-009-9053-5
Brownlee, J., Syu, J.-J., Mascadri, J., Cobb-Moore, C., Walker, S., Johansson, E., et al. (2012). Teachers’ and children’s personal epistemologies for moral education:
Case studies in early years elementary education. Teaching and Teacher Education, 28(3), 440–450. https://doi.org/10.1016/j.tate.2011.11.012
Brumfiel, G. (2009). Science journalism: Supplanting the old media? Nature, 458, 274–277. https://doi.org/10.1038/458274a
Chandler, M. J., & Proulx, T. (2010). Stalking young persons’ changing beliefs about belief. In L. D. Bendixen, & F. C. Feucht (Eds.), Personal epistemology in the
classroom. Theory, research, and implications for practice (pp. 197–219). Cambridge University Press.
Chan, K.-W., & Elliott, R. G. (2004). Epistemological beliefs across cultures: Critique and analysis of belief structure studies. Educational Psychology, 24(2), 123–142.
https://doi.org/10.1080/0144341032000160100
Chinn, C. A., & Rinehart, R. W. (2016). Commentary: Advances in research on sourcing – source credibility and reliable processes for processing knowledge claims.
Reading and Writing, 29, 1701–1717. https://doi.org/10.1007/s11145-016-9675-3
Chinn, C. A., Rinehart, R. W., & Buckland, L. A. (2014). Epistemic cognition and evaluating information: Applying the AIR model of epistemic cognition. In D. Rapp, &
J. Braasch (Eds.), Processing inaccurate information: Theoretical and applied perspectives from cognitive science and the educational sciences (pp. 425–453). Cambridge,
MA: MIT Press.
Chinn, C., & Sandoval, W. (2018). Epistemic cognition and epistemic development. In F. Fischer, C. E. Hmelo-Silver, S. R. Goldman, & P. Reimann (Eds.), International
handbook of the learning sciences (pp. 24–33). Routledge.
Chiu, Y.-L., Liang, J.-C., & Tsai, C.-C. (2016). Exploring the roles of education and Internet search experience in students’ Internet-specific epistemic beliefs. Computers
in Human Behavior, 62, 286–291. https://doi.org/10.1016/j.chb.2016.03.091
Clarebout, G., Elen, J., Luyten, L., & Bamps, H. (2001). Assessing epistemological beliefs: Schommer’s questionnaire revisited. Educational Research and Evaluation, 7
(1), 53–77. https://doi.org/10.1076/edre.7.1.53.6927
Coiro, J., Coscarelli, C., Maykel, C., & Forzani, E. (2015). Investigating criteria that seventh graders use to evaluate the quality of online information. Journal of
Adolescent & Adult Literacy, 59(3), 287–297. https://doi.org/10.1002/jaal.448
DeBacker, T. K., Crowson, H. M., Beesley, A. D., Thoma, S. J., & Hestevold, N. L. (2008). The challenge of measuring epistemic beliefs: An analysis of three self-report
instruments. The Journal of Experimental Education, 76(3), 281–312. https://doi.org/10.3200/JEXE.76.3.281-314
Department of Home Affairs. (2019). Australian values statement. Form 1281. Retrieved from https://immi.homeaffairs.gov.au/form-listing/forms/1281.pdf.
Elby, A., & Hammer, D. (2010). Epistemological resources and framing: A cognitive framework for helping teachers interpret and respond to their students’
epistemologies. In L. D. Bendixen, & F. C. Feucht (Eds.), Personal epistemology in the classroom: Theory, research, and implications for practice (pp. 409–434). New
York, NY: Cambridge University Press.
Feather, N. T. (1998). Attitudes toward high achievers, self-esteem, and value priorities for Australian, American, and Canadian students. Journal of Cross-Cultural
Psychology, 29(6), 749–759. https://doi.org/10.1177/0022022198296005
Ferguson, L. E., & Bråten, I. (2013). Student profiles of knowledge and epistemic beliefs: Changes and relations to multiple-text comprehension. Learning and
Instruction, 25, 49–61. https://doi.org/10.1016/j.learninstruc.2012.11.003
Gottlieb, E. (2007). Learning how to believe: Epistemic development in cultural context. The Journal of the Learning Sciences, 16(1), 5–35. https://doi.org/10.1080/
10508400709336941
Greene, J. A., Cartiff, B. M., & Duke, R. F. (2018). A meta-analytic review of the relationship between epistemic cognition and academic achievement. Journal of
Educational Psychology, 110(8), 1084–1111. https://doi.org/10.1037/edu0000263
Greene, J. A., Muis, K. R., & Pieschl, S. (2010a). The role of epistemic beliefs in students’ self-regulated learning with computer-based learning environments:
Conceptual and methodological issues. Educational Psychologist, 45(4), 245–257. https://doi.org/10.1080/00461520.2010.515932
Greene, J. A., Torney-Purta, J., & Azevedo, R. (2010b). Empirical evidence regarding relations among a model of epistemic and ontological cognition, academic
performance, and educational level. Journal of Educational Psychology, 102, 234–255. https://doi.org/10.1037/a0017998
Greene, J. A., Torney-Purta, J., Azevedo, R., & Robertson, J. (2010c). Using cognitive interviewing to explore elementary and secondary school students’ epistemic
and ontological cognition. In L. D. Bendixen, & F. C. Feucht (Eds.), Personal epistemology in the classroom: Theory, research, and implications for practice (pp.
368–406). Cambridge: Cambridge University Press.
Greene, J. A., & Yu, S. B. (2016). Modeling and measuring epistemic cognition: A qualitative re-investigation. Contemporary Educational Psychology, 39(1), 12–28.
https://doi.org/10.1016/j.cedpsych.2013.10.002
Hallett, D., Chandler, M. J., & Krettenauer, T. (2002). Disentangling the course of epistemic development: Parsing knowledge by epistemic content. New Ideas in
Psychology, 20, 285–307. https://doi.org/10.1016/S0732-118X(02)00011-9
Hendriks, F., Kienhues, D., & Bromme, R. (2016). Disclose your flaws! Admission positively affects the perceived trustworthiness of an expert science blogger. Studies
in Communication Sciences, 16(2), 124–131. https://doi.org/10.1016/j.scoms.2016.10.003
Hofer, B. K. (2004). Epistemological understanding as a metacognitive process: Thinking aloud during online searching. Educational Psychologist, 39(1), 43–55.
https://doi.org/10.1207/s15326985ep3901_5
Hofer, B. K., & Pintrich, P. R. (1997). The development of epistemological theories: Beliefs about knowledge and knowing and their relation to learning. Review of
Educational Research, 67(1), 88–140. https://doi.org/10.2307/1170620
Iordanou, K., Muis, K. R., & Kendeou, P. (2019). Epistemic perspective and online epistemic processing of evidence: Developmental and domain differences. The
Journal of Experimental Education, 87(4), 531–551. https://doi.org/10.1080/00220973.2018.1482857
Kammerer, Y., Bråten, I., Gerjets, P., & Strømsø, H. I. (2013). The role of Internet-specific epistemic beliefs in laypersons’ source evaluations and decisions during Web
search on a medical issue. Computers in Human Behavior, 29(3), 1193–1203. https://doi.org/10.1016/j.chb.2012.10.012
Kammerer, Y., Meier, N., & Stahl, E. (2016). Fostering secondary-school students’ intertext model formation when reading a set of websites: The effectiveness of
source prompts. Computers & Education, 102, 52–64. https://doi.org/10.1016/j.compedu.2016.07.001
Keskenidou, M., Kyridis, A., Valsamidou, L. P., & Soulani, A.-H. (2014). The Internet as a source of information. The social role of blogs and their reliability.
Observatorio, 8(1), 203–228. https://doi.org/10.15847/obsOBS812014688


Kiili, C., Laurinen, L., & Marttunen, M. (2008). Students evaluating Internet sources: From versatile evaluators to uncritical readers. Journal of Educational Computing
Research, 39(1), 75–95. https://doi.org/10.2190/EC.39.1.e
Kiili, C., Leu, D. J., Martunen, M., Hautala, J., & Leppänen, P. H. T. (2018). Exploring early adolescents’ evaluation of academic and commercial online resources
related to health. Reading and Writing, 31, 533–557. https://doi.org/10.1007/s11145-017-9797-2
King, P. M., & Kitchener, K. S. (2004). Reflective judgment: Theory and research on the development of epistemic assumptions through adulthood. Educational
Psychologist, 39(1), 5–18. https://doi.org/10.1207/s15326985ep3901_2
Krettenauer, T. (2005). The role of epistemic cognition in adolescent identity formation: Further evidence. Journal of Youth and Adolescence, 34(3), 185–198. https://
doi.org/10.1007/s10964-005-4300-9
Kuhn, D., Cheney, R., & Weinstock, M. (2000). The development of epistemological understanding. Cognitive Development, 15(3), 309–328. https://doi.org/10.1016/
S0885-2014(00)00030-7
Kuhn, D., & Weinstock, M. (2002). What is epistemological thinking and why does it matter? In B. K. Hofer, & P. R. Pintrich (Eds.), Personal epistemology: The
psychology of beliefs about knowledge and knowing (pp. 121–144). Mahwah, NJ: Lawrence Erlbaum Associates.
Leu, D. J., Kinzer, C. K., Coiro, J., Castek, J., & Henry, L. A. (2013). New literacies: A dual level theory of the changing nature of literacy, instruction, and assessment.
In D. E. Alvermann, N. J. Unrau, & R. B. Ruddell (Eds.), Theoretical models and processes of reading (6th ed., pp. 1150–1181). Newark, DE: International Reading
Association.
List, A., & Alexander, P. A. (2017). Analyzing and integrating models of multiple text comprehension. Educational Psychologist, 52(3), 14–147. https://doi.org/
10.1080/00461520.2017.1328309
Macedo-Rouet, M., Potocki, A., Scharrer, L., Ros, C., Stadtler, M., Salmerón, L., et al. (2019). How good is this page? Benefits and limits of prompting on adolescents’
evaluation of web information quality. Reading Research Quarterly, 54(3), 299–321. https://doi.org/10.1002/rrq.241
Maggioni, L., Fox, E., & Alexander, P. (2010). The epistemic dimension of competence in the social sciences. Journal of Social Science Education, 9(4), 15–23. https://
doi.org/10.4119/jsse-538
Makel, M. C., & Plucker, J. A. (2014). Facts are more important than novelty: Replication in the education science. Educational Researcher, 43(6), 304–316. https://doi.
org/10.3102/0013189X14545513
Mason, L. (2016). Psychological perspectives on measuring epistemic cognition. In J. A. Greene, W. A. Sandoval, & I. Bråten (Eds.), Handbook of epistemic cognition (pp.
375–392). New York, NY: Routledge.
Mason, L., Ariasi, N., & Boldrin, A. (2011). Epistemic beliefs in action: Spontaneous reflections about knowledge and knowing during online information searching
and their influence on learning. Learning and Instruction, 21(1), 137–151. https://doi.org/10.1016/j.learninstruc.2010.01.001
Mason, L., & Boldrin, A. (2008). Epistemic metacognition in the context of information searching on the Web. In M. S. Khine (Ed.), Knowing, knowledge and beliefs (pp.
377–404). Dordrecht: Springer.
Mason, L., Boldrin, A., & Ariasi, N. (2010). Epistemic metacognition in context: Evaluating and learning online information. Metacognition & Learning, 5(1), 67–90.
https://doi.org/10.1007/s11409-009-9048-2
Mason, L., Boscolo, P., Tornatora, M. C., & Ronconi, L. (2013). Besides knowledge: A cross-sectional study on the relations between epistemic beliefs, achievement
goals, self-beliefs, and achievement in science. Instructional Science, 41, 49–79. https://doi.org/10.1007/s11251-012-9210-0
Mason, L., Junyent, A. A., & Tornatora, M. C. (2014). Epistemic evaluation and comprehension of web-source information on controversial science-related topics:
Effects of a short-term instructional intervention. Computers & Education, 76, 143–157. https://doi.org/10.1016/j.compedu.2014.03.016
Mason, L., Scrimin, S., Tornatora, M. C., Suitner, C., & Moè, A. (2018). Internet source evaluation: The role of implicit associations and psychophysiological self-
regulation. Computers & Education, 119, 59–75. https://doi.org/10.1016/j.compedu.2017.12.009
McCrudden, M. T., Stenseth, T., Bråten, I., & Strømsø, H. I. (2016). The effects of topic familiarity, author expertise, and content relevance on Norwegian students’
document selection: A mixed methods study. Journal of Educational Psychology, 108(2), 147–162. https://doi.org/10.1037/edu0000057
McGrew, S., Breakstone, J., Ortega, T., Smith, M., & Wineburg, S. (2018). Can students evaluate online sources? Learning from assessments of civic online reasoning.
Theory & Research in Social Education, 46(2), 165–193. https://doi.org/10.1080/00933104.2017.1416320
Metzger, M. J., Flanagin, A. J., Markov, A., Grossman, R., & Bulger, M. (2015). Believing the unbelievable: Understanding young people’s information literacy beliefs
and practices in the United States. Journal of Children and Media, 9(3), 325–348. https://doi.org/10.1080/17482798.2015.1056817
Muis, K. R., Trevors, G., Duffy, M., Ranellucci, J., & Foy, M. J. (2016). Testing the TIDE: Examining the nature of students’ epistemic beliefs using a multiple methods
approach. The Journal of Experimental Education, 84(2), 264–288. https://doi.org/10.1080/00220973.2015.1048843
Nauroth, P., Gollwitzer, M., Bender, J., & Rothmund, T. (2014). Gamers against science: The case of the violent video game debate. European Journal of Social
Psychology, 44, 104–116. https://doi.org/10.1002/ejsp.1998
Organisation for Economic Co-operation and Development [OECD]. (2019). PISA 2018 results (Volume I): What students know and can do. Paris: OECD Publishing.
https://doi.org/10.1787/5f07c754-en.
Paul, J., Macedo-Rouet, M., Rouet, J.-F., & Stadtler, M. (2017). Why attend to source information when reading online? The perspective of ninth grade students from
two different countries. Computers & Education, 113, 339–354. https://doi.org/10.1016/j.compedu.2017.05.020
Pérez, A., Potocki, A., Stadtler, M., Macedo-Rouet, M., Paul, J., Salmerón, L., et al. (2018). Fostering teenagers’ assessment of information reliability: Effects of a
classroom intervention focused on critical source dimensions. Learning and Instruction, 58, 53–64. https://doi.org/10.1016/j.learninstruc.2018.04.006
Perfetti, C. A., Rouet, J. F., & Britt, M. A. (1999). Towards a theory of documents representation. In H. van Oostendorp, & S. R. Goldman (Eds.), Construction of mental
representations during reading (pp. 99–122). Mahwah, NJ: Lawrence Erlbaum.
Perry, W. G. (1970). Forms of intellectual and ethical development in the college years: A scheme. New York: Holt, Rinehart and Winston.
Plage, S., Willing, I., Skrbis, Z., & Woodward, I. (2016). Australianness as fairness: Implications for cosmopolitan encounters. Journal of Sociology, 53(2), 318–333.
https://doi.org/10.1177/1440783316667641
Porsch, T., & Bromme, R. (2011). Effects of epistemological sensitization on source choices. Instructional Science, 39(6), 805–819. https://doi.org/10.1007/s11251-
010-9155-0
Potocki, A., de Pereyra, G., Ros, C., Macedo-Rouet, M., Stadtler, M., Salmerón, L., et al. (2020). The development of source evaluation skills during adolescence:
Exploring different levels of source processing and their relationship. Journal for the Study of Education and Development, 43(1), 19–59.
Rouet, J. F., Britt, M. A., Mason, R. A., & Perfetti, C. A. (1996). Using multiple sources of evidence to reason about history. Journal of Educational Psychology, 88(3),
478–493. https://doi.org/10.1037/0022-0663.88.3.478
Rule, D. C., & Bendixen, L. D. (2010). The integrative model of personal epistemology development: Theoretical underpinnings and implications for education. In
L. D. Bendixen, & F. C. Feucht (Eds.), Personal epistemology in the classroom. Theory, research, and implications for practice (pp. 94–123). Cambridge University Press.
Sandoval, W. A., Greene, J. A., & Bråten, I. (2016). Understanding and promoting thinking about knowledge: Origins, issues, and future directions of research on
epistemic cognition. Review of Research in Education, 40, 457–496. https://doi.org/10.3102/0091732X16669319
Scharrer, L., & Salmeron, L. (2016). Sourcing in the reading process: Introduction to the special issue. Reading and Writing, 29(8), 1539–1548. https://doi.org/
10.1007/s11145-016-9676-2
Schiefer, J., Golle, J., Tibus, M., Herbein, E., Gindele, V., Trautwein, U., et al. (2020). Effects of an extracurricular science intervention on elementary school children’s
epistemic beliefs: A randomized controlled trial. British Journal of Educational Psychology, 90, 382–402. https://doi.org/10.1111/bjep.12301
Schommer, M. (1990). Effects of beliefs about the nature of knowledge on comprehension. Journal of Educational Psychology, 82(3), 498–504. https://doi.org/
10.1037/0022-0663.82.3.498
Schommer-Aikins, M., Bird, M., & Bakken, L. (2010). Manifestations of an epistemological belief system in preschool to grade twelve classrooms. In L. D. Bendixen, &
F. C. Feucht (Eds.), Personal epistemology in the classroom. Theory, research, and implications for practice (pp. 31–54). Cambridge University Press.
Schommer, M., Calvert, C., Gariglietti, G., & Bajaj, A. (1997). The development of epistemological beliefs among secondary students: A longitudinal study. Journal of
Educational Psychology, 89(1), 37–40. https://doi.org/10.1037/0022-0663.89.1.37


Sinatra, G. M., Kienhues, D., & Hofer, B. K. (2014). Addressing challenges to public understanding of science: Epistemic cognition, motivated reasoning, and
conceptual change. Educational Psychologist, 49(2), 123–138. https://doi.org/10.1080/00461520.2014.916216
Stadtler, M., & Bromme, R. (2013). Multiple document comprehension: An approach to public understanding of science. Cognition and Instruction, 31(2), 122–129.
https://doi.org/10.1080/07370008.2013.771106
Stadtler, M., & Bromme, R. (2014). The content-source integration model: A taxonomic description of how readers comprehend conflicting scientific information. In
D. N. Rapp, & J. L. G. Braasch (Eds.), Processing inaccurate information: Theoretical and applied perspectives from cognitive science and the educational sciences (pp.
379–402). Cambridge, MA: MIT Press.
Stadtler, M., Scharrer, L., Macedo-Rouet, M., Rouet, J.-F., & Bromme, R. (2016). Improving vocational students’ consideration of source information when deciding
about science controversies. Reading and Writing, 29(4), 705–729. https://doi.org/10.1007/s11145-016-9623-2
van Strien, J. L. H., Kammerer, Y., Brand-Gruwel, S., & Boshuizen, H. P. A. (2016). How attitude strength biases information processing and evaluation on the web.
Computers in Human Behavior, 60, 245–252. https://doi.org/10.1016/j.chb.2016.02.057
Strømsø, H., Bråten, I., Anmarkrud, O., & Ferguson, L. E. (2016). Beliefs about justification for knowing when ethnic majority and ethnic minority students read
multiple conflicting documents. Educational Psychology, 36(4), 638–657. https://doi.org/10.1080/01443410.2014.920080
Strømsø, H., Bråten, I., & Britt, M. (2011). Do students’ beliefs about knowledge and knowing predict their judgement of texts’ trustworthiness? Educational
Psychology, 31(2), 177–206. https://doi.org/10.1080/01443410.2010.538039
Strømsø, H., Bråten, I., Britt, M., & Ferguson, L. (2013). Spontaneous sourcing among students reading multiple documents. Cognition and Instruction, 31(2), 176–203.
https://doi.org/10.1080/07370008.2013.769994
Tolhurst, D. (2007). The influence of learning environments on students’ epistemological beliefs and learning outcomes. Teaching in Higher Education, 12(2), 219–233.
https://doi.org/10.1080/13562510701191992
Tranter, B., & Donoghue, J. (2015). National identity and important Australians. Journal of Sociology, 51(2), 236. https://doi.org/10.1177/1440783314550057
Walraven, A., Brand-Gruwel, S., & Boshuizen, H. P. (2009). How students evaluate information and sources when searching the World Wide Web for information.
Computers & Education, 52(1), 234–246. https://doi.org/10.1016/j.compedu.2008.08.003
Watson, C. (2014). An exploratory study of secondary students’ judgments of the relevance and reliability of information. Journal of the Association for Information
Science and Technology, 65(7), 1385–1408. https://doi.org/10.1002/asi.23067
Weinstock, M. (2015). Changing epistemologies under conditions of social change in two Arab communities in Israel. International Journal of Psychology, 50(1), 29–36.
https://doi.org/10.1002/ijop.12130
Weinstock, M. P., Neuman, Y., & Glassner, A. (2006). Identification of informal reasoning fallacies as a function of epistemological level, grade level, and cognitive
ability. Journal of Educational Psychology, 98(2), 327–341. https://doi.org/10.1037/0022-0663.98.2.327
Winberg, T. M., Hofverberg, A., & Lindfors, M. (2019). Relationships between epistemic beliefs and achievement goals: Developmental trends over grades 5–11.
European Journal of Psychology of Education, 34(2), 295–315. https://doi.org/10.1007/s10212-018-0391-z
Wineburg, S. S. (1991). Historical problem solving: A study of the cognitive processes used in the evaluation of documentary and pictorial evidence. Journal of
Educational Psychology, 83(1), 73–87. https://doi.org/10.1037/0022-0663.83.1.73
Yu, M., & Baxter, J. (2015). Australian children’s screen time and participation in extracurricular activities. Growing up in Australia. The Longitudinal Study of Australian
Children (LSAC) Annual Statistical Report. Australian Institute of Family Studies.
