Theory & Research in Social Education, 46: 165–193, 2018
Copyright © College and University Faculty Assembly of
National Council for the Social Studies
ISSN 0093-3104 print / 2163-1654 online
DOI: https://doi.org/10.1080/00933104.2017.1416320

Can Students Evaluate Online Sources? Learning
From Assessments of Civic Online Reasoning

Sarah McGrew, Joel Breakstone, Teresa Ortega, Mark Smith, and Sam Wineburg
Stanford University

Correspondence should be sent to Sarah McGrew, Graduate School of Education, Stanford University, 485 Lasuen Mall, Stanford, CA 94305. Email: smcgrew@stanford.edu

Abstract: To be an informed citizen in today’s information-rich environment, individuals
must be able to evaluate information they encounter on the Internet. However,
teachers currently have limited options if they want to assess students’ evaluations of
digital content. In response, we created a range of short tasks that assess students’ civic
online reasoning—the ability to effectively search for, evaluate, and verify social and
political information online. Assessments ranged from paper-and-pencil tasks to open
Internet search tasks delivered via Google Forms. We outline a process of assessment
development in which middle school, high school, and college students in 12 states
completed tasks. We present a series of representative tasks and analyses of trends in
student performance. Across tasks and grade levels, students struggled to effectively
evaluate online claims, sources, and evidence. These results point to a need for
curriculum materials that support students’ development of civic online reasoning
competencies.

Keywords: assessment, civic education, digital literacy, online information

We are in the midst of an information revolution in which we increasingly
learn about the world from screens instead of print. Although the Internet has
the potential to democratize access to information, it puts enormous responsi-
bility on individuals to evaluate the reliability of information. If citizens are
not prepared to critically evaluate the information that bombards them online,
they are apt to be duped by false claims and misleading arguments. Schools,
and social studies classrooms in particular, are places to prepare students for
this challenge. However, few assessments exist to help teachers gauge their
students’ abilities to evaluate evidence and arguments online. We set out to fill
this void by developing assessments of students’ civic online reasoning—the
ability to effectively search for, evaluate, and verify social and political
information online.
Why should students learn to evaluate online sources? Scholars argue that
digital literacy is—and will increasingly be—necessary for informed and
engaged citizenship (Kahne, Lee, & Feezell, 2012; Knight Commission on
the Information Needs of Communities in a Democracy [Knight Commission],
2009; Mihailidis & Thevenin, 2013). “Information,” according to the Knight
Commission (2009), “is as vital to the functioning of healthy communities as
clean air, safe streets, good schools, and public health” (p. 19). Simply having
access to information is not enough. Citizens must also be able to make
judgments about the credibility of that information (Hobbs, 2010; Mihailidis
& Thevenin, 2013). Over time, a lax stance toward evaluating online informa-
tion diminishes our ability to develop informed opinions. As we increasingly
rely on the Internet as a source of political and social information, effective
discernment skills grow ever more critical. Indeed, young people reported that
they access nearly 75% of their news online and that social media was their
top source for news about the 2016 presidential election (American Press
Institute, 2015; Gottfried, Barthel, Shearer, & Mitchell, 2016).
Young people’s reliance on the Internet as a source of information pre-
sents both immense opportunities for democratic participation and formidable
challenges. Because traditional information gatekeepers are largely absent
online, information travels with relative freedom and astounding speed. If
citizens are able to use the Internet well, it can be an empowering and
enriching venue for information sharing (Kahne & Middaugh, 2012;
Rheingold, 2012). However, it is imperative for students to understand how
the Internet shapes the information they receive (e.g., Lynch, 2016; Mason &
Metzger, 2012; Pariser, 2011) and to know how to find reliable information
(Kahne, Hodgin, & Eidman-Aadahl, 2016; Metzger, 2007; Metzger, Flanagin,
& Medders, 2010). If young people consume information without determining
who is behind it and what that source’s agenda might be, they are easy marks
for digital rogues. This article presents our rationale for developing assess-
ments of civic online reasoning, descriptions of representative tasks, and
results from administering these tasks with students across the country.

LITERATURE REVIEW

How do people vet information they find online? To date, studies have
shown that adults use cues and heuristics to decide if a website is credible,
including the authority of a search engine, site design and functionality,
previous experience with or referral to a site, and perceived expertise
(cf. Flanagin & Metzger, 2007; Fogg et al., 2003; Metzger et al., 2010;
Sundar, 2008). However, these heuristics do not appear to be used system-
atically, and even if they are, such heuristics can often be misleading. For
example, 46% of over 2,500 participants in one study commented on site
design to explain why they found one site more credible than another. Design
was the most commonly mentioned factor in participants’ credibility assess-
ments. Some chose a site because it looked “professional,” “pleasing,” or, as
one person wrote, “It just looks more credible” (Fogg et al., 2003, p. 5). If
being attractive and typo-free is all that is required for a website to convince
users it is trustworthy, those users are likely to be repeatedly deceived.
Should we be more hopeful about young people’s skills online? Scholars
initially argued that “digital natives” grew up accessing information online and
would therefore learn and think in ways different than their parents and other
“digital immigrants” (Prensky, 2001). Researchers now believe the story is
more complicated (Bennett, 2012; Gasser, Cortesi, Malik, & Lee, 2012).
Students struggle with many aspects of gathering information online, includ-
ing searching for and evaluating information.
In the context of an open Internet search, young people interpret the order
of search results as a signal of websites’ trustworthiness. If a site is listed
higher up in the search results, students often assume it is more reliable
(Hargittai, Fullerton, Menchen-Trevino, & Thomas, 2010; Westerwick,
2013). Even when researchers manipulated search results so that the most
relevant results were at the bottom of the page, students persisted in their
tendency to select from the top few results (Pan et al., 2007).
Once they select a site from a list of search results, students struggle to
effectively evaluate it. They rarely consider the source of a website when
assessing its credibility (Bartlett & Miller, 2011; Barzilai & Zohar, 2012;
Flanagin & Metzger, 2010; Walraven, Brand-Gruwel, & Boshuizen, 2009).
In one study, few students commented on the authors of the information they
found online. No students further investigated the authors or verified their
credentials (Hargittai et al., 2010). Rather than carefully evaluating informa-
tion based on the credibility of its source or the veracity of its evidence,
students use many of the heuristics adults use, including site design, ease of
navigation, and goodness of fit between the content and the information
students are looking for (Barzilai & Zohar, 2012; Iding, Crosby,
Auernheimer, & Klemm, 2009; Walraven et al., 2009).
These studies paint a pessimistic portrait. There is even greater cause for
concern if we consider that these studies asked students to search for informa-
tion on straightforward health and science issues instead of contentious poli-
tical questions. In recent studies, young people have researched questions like
“Is chocolate healthy?” (Barzilai & Zohar, 2012) and “What caused the
eruption of Mt. St. Helens?” (Goldman, Braasch, Wiley, Graesser, &
Brodowinska, 2012). Researching why a volcano erupted is different than
trying to decide, for example, if stronger gun control laws would curb gun
violence—particularly if one already holds strong beliefs about the issue. If
people struggle with questions that have relatively clear right answers, they are
likely to do even worse in divisive political territory, where experts disagree
and cloaked sites with hidden agendas abound.

THEORETICAL FRAMEWORK

As evidence mounts about how young people struggle to evaluate online
information, it is increasingly clear that the Internet is reshaping participatory
politics. For example, the Internet changes how we research public policy
issues, communicate with our elected leaders, and organize political protests.
Kahne et al. (2016) included “investigation and research” as part of their core
practices of participatory politics, arguing that civic education must prepare
students to “analyze and evaluate information in order to learn about and
investigate pressing civic and political issues” (p. 9). Given that students will
turn to the Internet to conduct such research, Kahne et al. maintained that
youth must be prepared to evaluate the reliability of information, engage in
research using multiple sources, and analyze information gleaned from social
networks. The ability to research and investigate civic and political informa-
tion online is not the only civic skill students need help developing. However,
it is necessary to prepare students to become informed members of our
democracy.
Like Kahne et al.’s (2016) focus on research and investigation as central
practices of participatory politics in the digital age, civic online reasoning
encompasses the ability to effectively search for, evaluate, and verify social
and political information online. Civic online reasoning consists of three
primary constructs: Who is behind the information? What is the evidence?
What do other sources say? In asking who is behind information, students
should investigate the author and/or organization that is presenting the infor-
mation, inquire into the motives (commercial, ideological, or otherwise) those
people or organizations have for presenting the information, and ultimately
decide whether the source should be trusted. In order to investigate the
evidence, students should consider what evidence is presented, what source
provided it, and whether the evidence directly supports the claim(s) presented.
Students should also seek to verify arguments by consulting multiple sources.
These competencies were developed based on ongoing research on how
professional fact checkers evaluate online information (Wineburg &
McGrew, 2017). Together, these competencies encompass the ways of think-
ing students should possess to effectively evaluate online information.
We use the term civic online reasoning to highlight the essential civic
aims of this work. Civic education focuses broadly on equipping young people
with the knowledge and skills to participate in civic life. The ability to
evaluate online content, increasingly a prerequisite for informed democratic
participation, is part of this civic skillset.
Civic online reasoning skills are needed to make judgments about the
reliability of information and to resist drawing conclusions based solely on our
own beliefs. A range of motivations guide people as they produce, share,
evaluate, and interact with political content online and off (Kunda, 1990;
Lodge & Taber, 2013). Kahne and Bowyer (2017) showed that young people
were more likely to argue that a mocked-up social media post was accurate if
they agreed with the argument it was making. If we want students to do better,
we must first ensure that they have the skills necessary to conduct effective
evaluations based on the reliability of sources, strength of evidence, and
verification across multiple sources. “For accuracy to reduce bias,” as
Kunda (1990) argued, “it is crucial that subjects possess more appropriate
reasoning strategies, view these as superior to other strategies, and be capable
of accessing them at will” (p. 482).
We also use the term civic online reasoning to differentiate this set of
practices from related fields. Although focused on digital content, civic online
reasoning does not cover the broader domains of digital literacy or online
citizenship, which may include goals that range from protecting one’s privacy
to learning to code (e.g., Common Sense Media, n.d.; Mozilla, n.d.). Civic
online reasoning is more narrowly focused on how to evaluate and use online
information to make decisions about social and political matters than the larger
field of media literacy (e.g., National Association for Media Literacy
Education, 2007; National Council for the Social Studies, 2016).
As our dependence on the Internet as a source of information grows, civic
online reasoning takes on added importance. Unfortunately, assessments of
students’ abilities to evaluate online information are in short supply. Many
digital literacy lesson plans are accompanied by short assessments. These
assessments often take the form of multiple-choice or short-answer questions
that focus on rote knowledge or on what students say they would do in
hypothetical situations. For example, a lesson plan on “Identifying High
Quality Sites” by Common Sense Media (2012) suggests that teachers close
the lesson by asking students three questions, including, “How do you know
whether you can trust the information you find on a website?” (p. 4). A lesson
produced by Google and iKeepSafe (2013) takes a similar approach with True/
False questions. These include: “I should always be a skeptic when it comes to
information that I find online” and “I should always review the sources (or
author) of the website” (p. 4). Although questions like these target aspects of
civic online reasoning, they are limited. They ask students what they would do
rather than assessing what students actually do. A student could correctly
answer these questions and proceed to do something completely different
when faced with actual online content.
Additional problems accompany formal measures of news and media
literacy. One news media literacy measure (Ashley, Maksl, & Craft, 2013)
included 15 Likert-scale items that assessed three domains of news literacy:
authors and audiences (which included statements like “The owner of a
media company influences the content that is produced.”); media messages
and meanings (e.g., “Two people might see the same news story and get
different information from it.”); and representation and reality (e.g., “A
story about conflict is more likely to show up prominently.”). This measure
focused on discrete content knowledge and never placed students in front
of news content (in print or online) in order to judge whether they could
effectively evaluate it. Hobbs and Frost (2003) used a measure that asked
students to read, listen to, or watch short clips of media stories and answer
multiple-choice and short-answer questions based on those stories.
However, these questions were focused on traditional elements of media
literacy, such as understanding how authors construct messages (e.g.,
“What techniques were used to attract and hold attention?”); identifying
the target audience; and comparing and contrasting different stories. The
measure did not explicitly address the reliability or trustworthiness of the
stories students read.
Another formal measure of students’ digital literacy is the assessment
of Online Reading Comprehension Ability (ORCA). Developed by the
New Literacies Research Lab, ORCA uses an interactive platform to
measure students’ proficiency in locating, evaluating, synthesizing, and
communicating information (Leu, Coiro, Kulikowich, & Cui, 2012). In the
“evaluate” category, ORCA measures whether students can determine the
author of a website, decide if that author is an expert, determine the
author’s point of view, and make a summary judgment about whether
information is reliable. In a module in this category, an interactive chat
feature guides students through the process of evaluating the authorship of
an online text. The computer first directs students to an article about
energy drinks and prompts the student to identify the author. Once the
student responds with the author’s name, the computer asks the student
whether that author is an expert on energy drinks, as well as how the
student knows. Although ORCA moves a step beyond asking students
what they would do by placing them in an environment where they
evaluate online content, it falls short of fully assessing a student’s ability
to evaluate online information. The computer-posed questions direct stu-
dents to locate information, read it, and then make a judgment. The
questions do not assess whether students can independently evaluate the
kind of information they would typically encounter online.
Thus, existing assessments stop short of directly measuring whether
students can evaluate online content or do not measure skills central to civic
online reasoning, such as evaluating the reliability of sources. This study
sought to answer the following research questions: How do students perform
on assessments of civic online reasoning? What strategies do students use to
evaluate online information?

METHOD

Task Development

We developed a bank of assessments that measure students’ civic online
reasoning. In developing these assessments, we followed a research and
design sequence based on best practices in the field of measurement
(Pellegrino, Chudowsky, & Glaser, 2001; Schmeiser & Welch, 2006) and
honed through our experience developing short assessments of historical
thinking (Breakstone, 2014; Wineburg, Smith, & Breakstone, 2012).
First, we mapped the domain of civic online reasoning in order to identify
the constructs we sought to assess. Our tasks tap three primary constructs of
civic online reasoning, which we call core competencies of civic online
reasoning: Who is behind the information? What is the evidence? What do
other sources say?
We then drafted prototypes of tasks to measure these constructs. After
conducting logical analyses (Li, Ruiz-Primo, & Shavelson, 2006) of the tasks,
we engaged in multiple rounds of pilot testing and revision. As a final step in
evaluation, we conducted think-aloud interviews with students (Ericsson &
Simon, 1993; Taylor & Dionne, 2000) as they completed the tasks. To
determine whether the tasks tapped the intended constructs, we coded the
interview transcripts based on the cognitive processes students used to com-
plete each task and compared those to the processes the task was designed to
measure. To determine whether students’ written responses reflected the
processes they engaged in as they completed each task, we coded their written
responses based on the cognitive processes used and compared those to the
cognitive processes coded in the interview transcripts. Examining the good-
ness of fit between what developers think an assessment elicits and what it
actually elicits is known as “cognitive validity” (Pellegrino et al., 2001; Ruiz-
Primo, Shavelson, Li, & Schultz, 2001).
Our exercises assess different aspects of the core competencies and
vary in length and complexity in order to provide teachers with a range of
assessment options. They were designed for middle school, high school,
and college students. In our assessment battery, we have included a variety
of short paper-and-pencil tasks that reproduce material from the web, like
the splash page of a news website or a conversation on Facebook.
Although paper-and-pencil tasks do not fare well in terms of ecological
validity (cf. Bronfenbrenner, 1994), they can help teachers quickly gauge
student understanding of discrete skills. They are also useful for teachers
who do not have access to sets of classroom computers but still wish to
teach online reasoning. Our most complex tasks feature Internet searches
and measure whether students can orchestrate the skills necessary to eval-
uate the online sources they will encounter in the world beyond the
classroom.

We did not design tasks with specific grade levels in mind. Instead, we
attempted to create assessments across a range of complexity so that teachers
could select tasks based on their students’ sophistication with evaluating
information. Theoretically, a high school classroom where students have had
sustained instruction in online reasoning could use more complex assessments
than a college classroom where students have had little exposure to online
reasoning lessons. In deciding where to pilot each task, we sent the most
straightforward tasks (delivered in paper-and-pencil form and assessing a
single competency) to middle school students and the most complex assess-
ments (delivered online and requiring students to coordinate multiple compe-
tencies to successfully answer) to college students.

Participants and Procedures

Students in public and private schools in 12 states completed versions of
our assessments as we engaged in iterative rounds of piloting and revision.
Once we were confident, based on these rounds of piloting as well as think-
aloud interviews with students, that the tasks tapped the intended constructs,
we administered final versions of each task to larger groups of students. The
results reported here are drawn from student responses to each task’s final
version. There were 405 middle school students, 348 high school students, and
141 college students who completed the final versions of these tasks.
Because our primary goal was to ensure that the tasks performed as
expected, we only collected information about students’ grade levels. Final
versions of the middle school tasks were administered in a large school district
on the West Coast. During the school year in which the tasks were adminis-
tered, 22% of students in the district were eligible for free or reduced lunch.
Final versions of the high school tasks were administered in three districts in
the same geographic area of the West Coast. In these districts, 36%, 55%, and
68% of students, respectively, were eligible for free or reduced lunch. All
districts that participated in the final administration of the middle school and
high school tasks had diverse student populations. In each district, at least one-
third of students represented historically marginalized communities. Final
college tasks were administered at three universities on the East and West Coasts;
two were public and one was private. Results from administering the tasks in
these districts and universities closely matched results from earlier rounds of
piloting with larger numbers of students from across the country.
Packets of three tasks were randomly assigned to students within classes
so that half of the students in each class completed one packet of three tasks
while the other half completed a packet of three different tasks. In classrooms
that received paper-and-pencil tasks (the middle school and high school
classes), the teacher handed out the packets of tasks and gave students
approximately 30 minutes to complete them. In college classrooms, where
students completed tasks online and recorded their responses in Google
Forms, the administration of tasks proceeded in the same way, except that
instructors provided students the links to the assessments.

Analysis

We developed rubrics for each task as part of the assessment development
process. As students completed initial versions of the assessments, we scored
student responses using first drafts of rubrics. Multiple researchers scored each
task, examined inter-rater agreement, discussed whether the rubrics fully addressed
variations in student reasoning, and revised rubrics based on these deliberations.
The final versions of the rubrics included three categories: Beginning, Emerging,
and Mastery. The rubric for one particularly complex task (“Researching a Claim,”
in which an open Internet search is central to the task) contained four categories:
Beginning, Emerging, Partial Mastery, and Mastery. We designed rubrics to
contain just three or four levels in order to facilitate classroom use. Although
rubrics vary based on the specific demands of the task, they reflect levels of
performance within the broader domain of civic online reasoning. In “Mastery”
responses, students effectively evaluated online information by attending carefully
to the source, evaluating the evidence presented, or verifying information about the
source, evidence, or argument. Students whose responses were scored as
“Emerging” were on the right track in evaluating the source or evidence.
However, these responses either were not fully explained or included irrelevant
evaluation strategies. Finally, in responses that were scored as “Beginning,” students used
problematic or irrelevant strategies to evaluate information.
After the administration of the final versions of the assessments, two
research team members (one who was involved in rubric development and
one who was not) scored all of the student responses using the finalized
rubrics. Inter-rater agreement was 97% across the four tasks described in
depth here (Cohen’s Kappa = 0.92). Once student responses were grouped
into rubric categories, responses were analyzed in order to describe and
quantify the most common reasoning strategies (both productive and not)
that students used at each level on the rubric.
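
As an illustration of the agreement statistics above, the following minimal Python sketch shows one common way to compute percent agreement and Cohen's kappa from two raters' rubric scores. The rater lists and function names are hypothetical examples offered only to show the arithmetic behind figures like the 97% agreement and kappa of .92; this is not the research team's actual scoring code.

```python
from collections import Counter

def percent_agreement(rater1, rater2):
    """Proportion of responses both raters placed at the same rubric level."""
    return sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: observed agreement corrected for agreement expected by chance."""
    n = len(rater1)
    p_observed = percent_agreement(rater1, rater2)
    counts1, counts2 = Counter(rater1), Counter(rater2)
    # Chance agreement: product of the raters' marginal proportions, summed over rubric levels
    p_chance = sum((counts1[level] / n) * (counts2[level] / n)
                   for level in set(rater1) | set(rater2))
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical scores from two raters, for illustration only
rater_a = ["Beginning", "Emerging", "Mastery", "Beginning", "Mastery"]
rater_b = ["Beginning", "Emerging", "Mastery", "Emerging", "Mastery"]

print(percent_agreement(rater_a, rater_b))  # 0.8
print(cohens_kappa(rater_a, rater_b))       # roughly 0.71
```

Because kappa subtracts the agreement two raters would reach by chance from the agreement actually observed, it is a more conservative index than raw percent agreement.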

RESULTS

Fifteen tasks were developed, and 2,616 responses to the final versions of
the tasks were collected from middle school, high school, and college stu-
dents. Although these tasks differed in the constructs they assessed, the
content they included, and their level of complexity, there was a striking
degree of consistency in students’ performance on these tasks. In addition to
an overview of students’ performance on all of the assessments, this section
details one assessment from each competency in greater depth.

Who Is Behind the Information?

One set of tasks we developed targets the first core competency of civic
online reasoning: Can students determine who is behind information and
assess that source’s possible motivations for providing information? (See
Table 1 for a summary of tasks assessing this competency.)
Before reading any online source, students need to ask a fundamental
question: Where is this information coming from? One task, titled “Article
Analysis” (see Figure 1), presented students with an article about millennials’
money habits “presented by” Bank of America and written by a bank executive. The task
assessed a student’s ability to recognize the source of the article and address
why a sponsored post by a bank on this topic might be problematic.
Successful responses recognized the source of the article and explained why
the source’s motives and conflict of interest might lead to questions about the
article’s credibility.

Table 1. Tasks Assessing “Who Is Behind the Information?”

Article Analysis (middle school): Explain why sponsored content from a bank might not be a good source about money habits. Performance: 68% Beginning, 14% Emerging, 18% Mastery.
Homepage Analysis (middle school): Explain whether items on the homepage of an online news magazine are advertisements. Performance: 82% Beginning, 10% Emerging, 8% Mastery.
News Search (middle school): Explain which of two articles (one news, one opinion) is better to learn the facts about year-round schooling. Performance: 64% Beginning, 27% Emerging, 9% Mastery.
Comparing Articles (high school): Explain which of two sources (one sponsored content, one traditional news) is a more reliable source about climate change. Performance: 80% Beginning, 9% Emerging, 11% Mastery.
Social Media Video (college): Evaluate the strengths and weaknesses of a video posted on Facebook. Performance: 30% Beginning, 40% Emerging, 30% Mastery.
Website Reliability (college): Using any online sources, explain whether a website is a reliable source of information about children’s health. Performance: 57% Beginning, 12% Emerging, 31% Mastery.

Figure 1. “Article Analysis” Assessment

Over 500 middle school students completed this assessment in piloting, and
201 responded to the final version. Responses indicated that most students
struggled to identify sponsored posts and to understand who was responsible
for their content. Sixty-eight percent of student responses to the final version of
the task were scored “Beginning,” the lowest level of the rubric (see Appendix).
These students did not include concerns about relevant aspects of the article’s
authorship or sponsorship in their responses. Seventeen percent of students who
completed the task argued that the article could not represent the money habits of
all millennials. As one student wrote, “One reason I would not trust this article is
because not all millennials need help with financial planning. Some might know a
lot about it.” Similarly, 11% questioned the author’s qualifications (unrelated to
his job at the bank) to be writing the article. Some based their skepticism on the
author’s age; as one student argued, “Well for the first reason, it almost seems
from the picture that Andrew Plepler looks like a millennial. I say this because if
you look closely, you can’t really see any wrinkles on his skin.”
Only 14% of students composed answers that were scored as “Emerging,”
the rubric’s middle level. These students questioned the source based on
relevant concerns about its authorship or sponsorship but did not fully explain
their reasoning. For example, a student wrote, “One reason why I might not
trust the article is because the author is an executive of a company that sells
financial planning programs.” This student identified the conflict of interest
but did not fully explain why it was a conflict. Only 18% of the students
produced “Mastery” responses. As one of these students wrote:

One reason that I might not trust this article would be that it is presented
by a bank. Since the article is presented by a bank it would of course in
some way try to promote people to use that bank. Bank of America is
saying that millennials need help with financial planning to promote
millennials to apply for financial planning help with Bank of America.

An analogous task piloted with high school students yielded similar results.
The “Comparing Articles” task presented the top sections of two articles, both
of which appeared on the website of The Atlantic. One was an article from the
“Science” section, while the other was “Sponsored Content” from Shell Oil
Company. Asked which of the two articles was a more reliable source for
learning about policies to solve global climate change, just 11% of the 176
students who completed the task selected the article from the “Science”
section and raised concerns about the potential conflict of interest when an
oil company sponsors an article about climate change. The overwhelming
majority of students, 80%, wrote responses that were scored as “Beginning.”
Most (over 70% of all students) selected the sponsored content from Shell
because they believed it contained more data and information. These students
were heavily swayed by the graphics that accompanied each article: the news
article included an image of a militant Uncle Sam below its headline (“Why
Solving Climate Change Will Be Like Mobilizing for War”). Shell’s sponsored
content included a stylized pie chart projecting percentages that different
sources (coal, nuclear, renewables, natural gas, etc.) might contribute to help
fuel the “larger, energy-hungry world of tomorrow.” As one of these students
wrote, “I believe Article B is more reliable, because it’s easier to understand
with the graph and seems more reliable because the chart shows facts right in
front of you.”

What Is the Evidence?

Another set of tasks taps the second core competency of civic online
reasoning, assessing whether students can determine if the evidence presented
is trustworthy and sufficient to support a claim (see Table 2).
The “Evaluating Evidence” assessment gauged whether students, con-
fronted with a vivid photograph, would stop to ask two critical questions:
“Where does this evidence come from?” and “Does it actually support the
claim being made?” (see Figure 2). Students were presented with a post from
Imgur, a photo sharing website, which includes a picture of daisies along with
the claim that the flowers have “nuclear birth defects” from Japan’s
Fukushima Daiichi nuclear disaster.

Table 2. Tasks Assessing “What Is the Evidence?”

Comments Section (middle school): Explain whether statistics from a comment by “Joe Smith” should be used in a research paper. Performance: 53% Beginning, 21% Emerging, 26% Mastery.
Argument Analysis (high school): Read two comments in response to a news article and explain which commenter makes a stronger argument. Performance: 62% Beginning, 29% Emerging, 9% Mastery.
Facebook Argument (high school): Explain which poster in a Facebook conversation provides stronger evidence about gun laws. Performance: 61% Beginning, 28% Emerging, 11% Mastery.
Evaluating Evidence (high school): Evaluate the strength of evidence in a photograph posted on Imgur. Performance: 73% Beginning, 9% Emerging, 18% Mastery.

Figure 2. “Evaluating Evidence” Assessment



Although the image is compelling and invites the viewer to accept it at
face value, successful students argued that the photograph does not provide
solid evidence about conditions near the nuclear power plant. Ideally, students
should question the source of the evidence, arguing that they know nothing
about the credentials of the person who posted this photo (especially since it
appears on a site where anyone can upload a photo). Alternatively, students
could have pointed out that the post provides no proof that the picture was
taken near the power plant or that nuclear radiation caused the daisies’
irregular growth.
Various drafts of this task were piloted with 454 high school students.
One hundred seventy high school students completed the final version. Only
18% of students wrote “Mastery” responses (see rubric in Appendix). These
students wrote responses that questioned the source of the post or the source of
the photo. As one student wrote, “No, it does not really provide strong
evidence. A photo posted by a stranger online has little credibility. This
photo could very easily be Photoshopped or stolen from another completely
different source; we have no idea given this information, which makes it an
unreliable source.”
In contrast, the overwhelming majority of students were captivated by the
photograph and relied on it to evaluate the post while ignoring key details like
the source of the photo. Seventy-three percent of students wrote “Beginning”
responses. About half of these students argued that the post provided strong
evidence about conditions near the power plant. As one student argued, “This
post does provide strong evidence because it shows how the small and
beautiful things were affected greatly, that they look and grow completely
different than they are supposed to. Additionally, it suggests that such a
disaster could happen to humans.” In developing this task, six students
thought aloud as they reviewed the photo. All expressed similar reasoning.
As one student explained, “Nature doesn’t usually make mistakes. However, if
this man-made nuclear disaster could actually cause nature to create deformed
flowers, then it must be an extremely powerful and very hard-to-reverse
disaster.”
Nearly a quarter of the students argued that the post did not provide
strong evidence because it only showed flowers and not other plants or
animals that may have been affected by the nuclear radiation. Such responses
were also scored as “Beginning” because the students fully accepted the post’s
claim that the flowers were damaged by radiation. According to one such
student, “This photo does not provide strong evidence because it only shows a
small portion of the damage and effects caused by the nuclear disaster.” None
of these students stopped to question the veracity of the source, the lack of
proof that the picture was taken near the plant, or the missing causal link
between radiation and the flowers’ condition. Instead, these students accepted
the photograph at face value.

Who Is Behind Information and What Is the Evidence?

Several of our tasks assess students’ ability to engage in two core
competencies at once: analyze who is behind information and assess the
evidence that the source provides (see Table 3).
A task administered via Google Forms with college students assessed
students’ ability to weigh the strengths and weaknesses of the sources of a
tweet as well as the evidence it presents. Students received a link to a tweet
from the liberal advocacy organization MoveOn.org (2015) and were asked
about its strengths and weaknesses as a source of information about National
Rifle Association members’ opinions on background checks (see Figure 3).
They were allowed to search anywhere online in order to investigate the
tweet’s claim and the organizations behind it.
Successful responses identified both strengths and weaknesses of the
tweet (see rubric in Appendix). Students needed to consider the sources
providing information and the evidence presented in the tweet. For example,
a strength of the tweet is that it presents an argument based on polling data
from Public Policy Polling, a professional polling firm. A potential weakness,
however, is that both the source of the tweet (MoveOn.org) and the poll’s
sponsor (Center for American Progress) are liberal advocacy organizations.
Forty-three students at three universities completed the final version of
this task. Most of these students struggled to evaluate both the sources and
evidence presented. Only three students (7%) identified that the tweet was

Table 3. Tasks Assessing “Who Is Behind the Information?” and “What Is the Evidence?”

News on Twitter (middle school): Explain which of four tweets about a current event is most trustworthy. Performance: 60% Beginning, 24% Emerging, 16% Mastery.
News on Facebook (high school): Explain which of two news posts (one from a verified account, one not) is a better source. Performance: 62% Beginning, 18% Emerging, 20% Mastery.
Claims on Social Media (college): Evaluate the strengths (Question 1) and weaknesses (Question 2) of a tweet by a political advocacy group. Performance on Question 1: 59% Beginning, 35% Emerging, 7% Mastery. Performance on Question 2: 49% Beginning, 30% Emerging, 21% Mastery.

Figure 3. “Claims on Social Media” Assessment


Students began this task in the Google Form shown on the left and, when they clicked the link
provided in the form, were led to the tweet on the right (MoveOn.org, 2015).

Only three students (7%) identified that the tweet was based on a poll conducted
by a national polling firm and explained why
this might make it a strong source of information. As one of these students
wrote, “The polling information which the tweet references was collected
by Public Policy Polling, which appears to have a fairly strong accuracy
record, though with a Democratic bent.” This student found and cited a
Wall Street Journal article as his source of information about the polling
organization. Just 21% of students effectively identified weaknesses of the
tweet by explaining how the political agendas of MoveOn.org and the
Center for American Progress might influence the message. Only two of
43 students wrote “Mastery” responses about both the strengths and weak-
nesses of the tweet.
Most students wrote broad statements about the tweet’s utility. Of the
student responses about the strengths of the tweet, 59% were coded as
“Beginning.” The most common of these responses focused on the source of
information in simplistic or problematic ways. For example, one student
wrote, “MoveOn.org is America’s largest independent online political
group” and cited MoveOn’s description from its Twitter profile. This student
did not go beyond MoveOn’s description of itself or consider any potential
weaknesses of the source. Other students whose responses were scored
“Beginning” thought that the link to a press release, the graphic provided, or
the content of the tweet demonstrated that the tweet was a strong source of
information.
Almost half of the responses (49%) about the weaknesses of the tweet
were coded as “Beginning.” Many of these students focused on the fact that it
was a social media post, arguing that one should be wary of trusting anything
posted on Twitter. As one of these students wrote, “Twitter is a social platform
built for sharing opinions, and though there are plenty of news organizations
sharing facts on Twitter, I’d be more likely to trust an article than a tweet.”
Two of seven students who thought aloud while completing this task
expressed similar reasoning. As one explained, “It’s a tweet, so I don’t find
it that useful, personally,” and “Anything from Twitter can be falsified, so it’s
not a really reliable source.”

What Do Other Sources Say?

Some of our most complex tasks tap students’ ability to contend with the
third core competency of civic online reasoning: investigating multiple
sources before being satisfied that a claim is true or that a source is author-
itative (see Table 4).
The “Article Evaluation” task tapped students’ ability to investigate the
reliability of a website by checking what other sources say about the site’s
backers. Students were directed, via a Google Form, to an article entitled
“Denmark’s Dollar Forty-One Menu” posted on the website minimumwage.
com (2014b; see Figure 4). The article uses Denmark’s fast food industry as a
case study to argue that raising the minimum wage in the United States would
result in higher prices and fewer jobs.
Minimumwage.com is, at first glance, a reliable-looking website: It has
“Research” and “Media” tabs and describes itself (on its “About” page) as “a
non-profit research organization dedicated to studying public policy issues
surrounding employment growth.” The page adds that minimumwage.com is a
project of the Employment Policies Institute (EPI), which “sponsors nonparti-
san research which is conducted by independent economists at major univer-
sities around the country” (minimumwage.com, 2014a, para. 2). If one
performs an open search for EPI, however, credible and authoritative sources
dispute such anodyne descriptions.

Table 4. Tasks Assessing “What Do Other Sources Say?”

Article Evaluation (high school and college): Explain whether an article about Denmark’s food prices is a reliable source, using any resources available online. Performance: 80% Beginning, 12% Emerging, 8% Mastery.
Researching a Claim (college): Use an open Internet search to decide whether Margaret Sanger supported euthanasia and cite the sources used. Performance: 62% Beginning, 24% Emerging, 9% Partial Mastery, 5% Mastery.

Figure 4. “Article Evaluation” Assessment


Students began this task in the Google Form shown on the left and, when they clicked
the link provided in the form, were led to the webpage on the right (minimumwage.
com, 2014b).

The New York Times reported that EPI “is led by the advertising and public relations
executive Richard B. Berman, who has made millions of dollars in Washington by taking up the causes of
corporate America,” including the restaurant industry (Lipton, 2014, para.
10). A Salon headline reads, “Corporate America’s new scam: Industry P.R.
firm poses as think tank!” (Graves, 2013). Employment Policies Institute is
run out of the same suite of offices as Richard Berman’s PR firm (Lipton,
2014).
Groups of college (n = 58) and high school students in an Advanced
Placement U.S. history course (n = 95) completed this task. Instead of
investigating what other sources had to say about minimumwage.com, the
vast majority of students in both groups limited their evaluations to the article
itself. Most of these students (80%) earned “Beginning” scores (see rubric in
Appendix). They largely decided that the site was trustworthy based on easily
manipulated factors like the site’s appearance, its description on the “About”
page, and the fact that the article links to sources that students judged credible:
the New York Times and Columbia Journalism Review. As one student wrote,

I read the “About Us” page for MinimumWage.com and also for
Employment Policies Institute. EPI sponsors MinimumWage.com and is
a nonprofit research organization dedicated to studying policy issues
surrounding employment, and it funds “nonpartisan” studies by econo-
mists around the nation. The fact that the organization is a non-profit, that
it sponsors nonpartisan studies, and that it contains both pros and cons of
raising the minimum wage on its website, makes me trust this source.

Although this student expressed sound reasoning about factors that might lend
credibility to a source—particularly its basis in nonpartisan, university-based
research—the student relied entirely on what the organizations (minimumw-
age.com and EPI) said about themselves on their “About” pages. If the student
had gone outside these sources, different portrayals of the organizations would
have emerged.
Just 6% of college students and 9% of high school students wrote
responses scored as “Mastery.” These students investigated minimumwage.
com and its parent organization outside the sites and learned about their
connections to a PR firm that represented restaurant industry clients. One
student, for example, raised questions about the clearly anti-minimum wage
bias of the article on Denmark and then wrote,

Furthermore, a Google search about the Employment Policies Institute
(the parent organization that created the minimum wage site) suggests
that it actually might be an organization created by Rick Berman, a
lobbyist for various private industries whose interests would be in
conflict with raising the minimum wage (http://www.sourcewatch.org/
index.php/Employment_Policies_Institute).

This student both researched the organization and recognized the potential
conflict of interest involved.
These results show a great degree of consistency: Across the core com-
petencies, students struggled to effectively evaluate social and political infor-
mation online. Regardless of grade level, most students did not consider who
created content, did not consider the evidence presented, and did not consult
other sources to verify claims.

DISCUSSION

Patterns in students’ approaches to evaluating information were consistent
across the tasks. First, we saw ample evidence that students rarely ask who
created online sources. This pattern was clear throughout tasks where informa-
tion about the source was immediately available (as in the assessments using
stimuli from Bank of America and The Atlantic) and those where additional
research may have been necessary (as in the task using the MoveOn.org
tweet). These results show that students not only struggle to seek out informa-
tion, but they often fail to first ask the fundamental question we should ask
when evaluating a digital source: “Who is behind it?”
Instead of evaluating information by asking this question, students made
judgments about trustworthiness based on factors like the content of a post and
surface features of the page on which it appeared. Middle school students’ most
common critique of the Bank of America post was not its authorship or sponsor-
ship, but the fact that it was not possible for the article to provide accurate
information about the money habits of all millennials. Likewise, the vast
majority of high school students selected one of The Atlantic posts not by
weighing the authority of the sources but by comparing the amount of informa-
tion presented by the posts’ graphics. Students also made broad generalizations
based on the platform on which information appeared: Many college students
raised concerns about the tweet from MoveOn.org precisely because it appeared
on a social media platform. Even when students attempted to investigate the
source, they were often satisfied by shallow information. For example, middle
school students focused on the age of Andrew Plepler, the Bank of America
executive who authored the post that students were asked to evaluate. Although
these students focused on the source, they did not consider Plepler’s job at Bank
of America, which was far more relevant in this case than his age.
Students did not do much better at evaluating evidence presented to
support social or political claims. We saw numerous examples of students
being taken in by the appearance of evidence. For example, in the assessment
with an Imgur post, high school students were captivated by the vivid photo-
graph of “nuclear” daisies. Perhaps distracted by the vividness of this evi-
dence, students failed to raise questions about whether an unknown user on a
popular photo-sharing website was a trustworthy source. Additionally, few
students raised questions about whether the photo, even if it was authentic,
could sufficiently support the claims being made. In this case, students were
almost blinded by the photograph. In other assessments, we saw students
being taken in by statistics, quotes from seemingly authoritative figures, and
data displays without asking about the source of the evidence and whether it
was relevant to the claims being made. Even when the evidence was strong,
college students often could not articulate reasons why. In the MoveOn.org
task, most students did not focus on the evidence the tweet provided (a poll
conducted by a well-established polling firm) as a reason the tweet might be
useful. Instead, they resorted to judgments based on the appearance or content
of the tweet itself.
Even when given the opportunity in a live web task, students rarely
showed evidence of venturing outside the webpage on which they landed.
Although they were explicitly instructed that they were free to search outside
the site in the “Article Evaluation” task, only 13 of 95 high school students
and eight of 58 college students reported using outside websites to evaluate
minimumwage.com. This reluctance to leave the initial site stands in stark
contrast to the behavior of professional fact checkers as they evaluated
online information: Fact checkers regularly read laterally, departing a web-
site to open new tabs and see what other sources had to say about a source
(Wineburg & McGrew, 2017). Because students rarely ventured outside the
confines of the website where they started, they relied on the organization’s
description of itself—if they even got that far. In most cases, high
school and college students simply evaluated the content or appearance of
the initial page and never sought a broader perspective by turning to the
open Internet.

Limitations

Patterns in students’ reasoning about online information were clear
throughout the tasks. However, these data were not drawn from a representa-
tive sample, and further research is needed to confirm the consistency of the
results we report across purposefully sampled populations. Still, the consis-
tency of the results we reported—across varied schools, geographic regions,
and student populations—suggests that this problem is widespread.
We further recognize that these tasks are somewhat artificial in that they ask
students to evaluate content that they did not come to on their own. Students may
be more motivated to evaluate—and more or less successful at evaluating—
information they come across out of personal interest. However, the purpose of
this study was to design assessments that could yield information about students’
ability to evaluate online sources. Such assessments depend on every student
examining the same content. Future research could explore whether varying the
stimuli in a task influences how students respond to it.
Although the tasks presented here sample from the three competencies of
civic online reasoning, they do not represent all aspects of the domain. Moreover,
we did not intend for these tasks to be used collectively as a summative instru-
ment. Instead, we were attempting to create a range of items that assessed different
aspects of the domain of civic online reasoning. We continue to develop new
assessments that gauge additional dimensions of the domain.

The Need for Civic Online Reasoning

Civic online reasoning is necessary for students to be informed partici-
pants in civic life. As such, it should be part of the larger project of civic
education in which students learn civic content, develop democratic disposi-
tions, and build the skills necessary to engage with others about political
issues. Civic online reasoning does not fully encompass the fields of media
literacy or digital citizenship, and we do not believe it should supplant these
fields. Instead, we maintain that the ability to effectively evaluate online social
and political content is a critical aspect of preparing students for civic life. If
we do not help young people develop civic online reasoning strategies, we
limit their ability to accurately judge online information.
Our results suggest that students are not prepared to navigate the mael-
strom of information online. For example, in the “Article Analysis” task, most
students did not identify the fact that the post was sponsored by a bank and
written by a bank executive as the most concerning features of an article
questioning millennials’ money habits. What do we expect the same students
to do when they are faced with information about a more complex issue, a
source that is less forthcoming about its conflicts of interest, or a topic about
which they feel strongly? Further research is needed to investigate such
scenarios, but our findings do not generate great optimism.

Routes Forward

Teachers cannot ensure that students will use these skills outside the class-
room, but that dilemma is not unique to civic online reasoning. Teachers can,
however, provide students with opportunities to learn and practice these skills.
In fact, evidence suggests that explicit instruction may help students develop a
commitment to accuracy in online evaluations (Kahne & Bowyer, 2017).
Taken together, our results point to priorities for civic online reasoning
instruction. Results from assessments targeting the construct of “Who is behind
the information?” suggest that students need to be taught, first and foremost, that
determining the author or sponsoring organization of a story is a critical part of
evaluating it. Powerful examples may be useful here: Teachers could present
students with a source that may seem credible (such as the article sponsored by
Shell Oil Company) and help them question and complicate their initial assump-
tions. Next, students need support in learning how to investigate digital sources,
whether it is researching the author’s qualifications and motivations or probing
the sponsoring organization’s potential conflicts of interest. With repeated
practice identifying sources of information, researching relevant information
about those sources, and synthesizing what they learn to make judgments about
an article’s trustworthiness, students should be able to improve their skills.
Students also need to be explicitly taught how to evaluate evidence. They
need support as they practice evaluating the sources of evidence (just as they do
when they evaluate sources of articles or webpages). Additionally, they need
more opportunities to consider whether, and how, the evidence provided actually
supports a claim. A teacher could, for example, model how to examine the evidence
provided by the Imgur user in the “Evaluating Evidence” task. First, the teacher
could summarize the claim being made by the post’s title, “Fukushima Nuclear
Flowers,” and the caption, “Not much more to say, this is what happens when
flowers get nuclear birth defects.” The teacher could then examine whether the
“evidence” provided supports the claim that the Fukushima nuclear disaster
caused the daisies’ mutations. In the process, the teacher could raise questions
about the source and location of the photograph and the causal link between the
nuclear disaster and the flowers’ appearance. Students could then practice with
evidence presented about other topics.
Finally, students need support in learning how to consider multiple sources
of information as they investigate online content. If students completed the
minimumwage.com assessment, the teacher could ask the class to compare the
conclusions reached by two anonymous students—one who trusted what mini-
mumwage.com said about itself and one who sought to find out what others had
to say about the organization. The class could then discuss reasons why con-
sulting multiple sources is necessary and practice strategies for doing so online.

Implications

Our findings show that students struggled to engage in even basic evalua-
tions of authors, sources, and evidence. We need to help them develop the
skills necessary to find reliable sources about social and political topics. When
people struggle to evaluate information, they risk making decisions that go
against their own interests. In a democratic society, our fellow citizens’ online
reasoning skills affect us. As more people go online for social and political
information, the ability to find reliable sources can strengthen our society; an
inability to distinguish truth from falsehood weakens the quality of our
decisions and our capacity to advocate for our interests. In order
to capitalize on the promise of the Internet and not be victims of its ruses,
teachers need tools to prepare students to evaluate information and arguments
online. Student responses to our tasks show we have a long way to go, but
they also suggest a route forward to develop assessments and curricular tools
to support teachers and their students in this critical work.

FUNDING

This research was supported by the Robert R. McCormick Foundation
(Sam Wineburg, principal investigator) and the Spencer Foundation (Grant
No. 201600012, Sam Wineburg, principal investigator). The content is solely
the responsibility of the authors and does not necessarily represent the views
of the McCormick Foundation or the Spencer Foundation.

ORCID

Sarah McGrew http://orcid.org/0000-0002-8925-1468
Joel Breakstone http://orcid.org/0000-0003-0468-6399
Mark Smith http://orcid.org/0000-0002-1048-7786
Sam Wineburg http://orcid.org/0000-0002-4838-1522

REFERENCES

American Press Institute. (2015). How millennials get news: Inside the habits
of America’s first digital generation. Retrieved from http://www.americanpressinstitute.org
Ashley, S., Maksl, A., & Craft, S. (2013). Developing a news media literacy
scale. Journalism & Mass Communication Educator, 68, 7–21.
doi:10.1177/1077695812469802
Bartlett, J., & Miller, C. (2011). Truth, lies, and the Internet: A report into
young people’s digital fluency. London, UK: Demos. Retrieved from
https://www.demos.co.uk/
Barzilai, S., & Zohar, A. (2012). Epistemic thinking in action: Evaluating and
integrating online sources. Cognition and Instruction, 30, 39–85.
doi:10.1080/07370008.2011.636495
Bennett, S. (2012). Digital natives. In Z. Yan (Ed.), Encyclopedia of cyber
behavior: Volume 1 (pp. 212–219). Hershey, PA: IGI Global.
Breakstone, J. (2014). Try, try, try again: The process of designing new history
assessments. Theory & Research in Social Education, 42, 453–485.
doi:10.1080/00933104.2014.965860
Bronfenbrenner, U. (1994). Ecological models of human development. In T.
Husén & T. N. Postlethwaite (Eds.), International encyclopedia of educa-
tion (2nd ed., pp. 1643–1647). Oxford, UK: Elsevier.
Common Sense Media. (2012). Identifying high-quality sites [PDF docu-
ment]. Retrieved from https://www.commonsense.org/education/system/
files/uploads/classroom-curriculum/6-8-unit3-identifyinghighqualitysites-
2015.pdf?x=1
Common Sense Media. (n.d.). Scope and sequence: Common Sense K–12
digital citizenship curriculum. Retrieved from https://www.commonsense.
org/education/scope-and-sequence
Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as
data (Rev. ed.). Cambridge, MA: MIT Press.
Flanagin, A. J., & Metzger, M. J. (2007). The role of site features, user
attributes, and information verification behaviors on the perceived cred-
ibility of web-based information. New Media & Society, 9, 319–342.
doi:10.1177/1461444807075015
Flanagin, A. J., & Metzger, M. J. (2010). Kids and credibility: An empirical
examination of youth, digital media use, and information credibility.
Cambridge, MA: MIT Press.
Fogg, B. J., Soohoo, C., Danielson, D. R., Marable, L., Stanford, J., & Tauber, E. R.
(2003, June). How do users evaluate the credibility of web sites? A study with
over 2,500 participants. Paper presented at the Association for Computing
Machinery Conference on Designing for User Experiences, San Francisco, CA.
Gasser, U., Cortesi, S., Malik, M., & Lee, A. (2012). Youth and digital media:
From credibility to information quality. Cambridge, MA: The Berkman
Center for Internet and Society. Retrieved from https://papers.ssrn.com/
sol3/papers.cfm?abstract_id=2005272
Goldman, S. R., Braasch, J. L. G., Wiley, J., Graesser, A. C., & Brodowinska,
K. (2012). Comprehending and learning from Internet sources: Processing
patterns of better and poorer learners. Reading Research Quarterly, 47,
356–381. doi:10.1002/RRQ.027
Google, & iKeepSafe. (2013). Class 1: Become an online sleuth [PDF docu-
ment]. Retrieved from http://ikeepsafe.org/wp-content/uploads/2011/10/
Class-1_Become-an-Online-Sleuth_FINAL-1.pdf
Gottfried, J., Barthel, M., Shearer, E., & Mitchell, A. (2016, February 4). The
2016 presidential campaign—A news event that’s hard to miss.
Washington, DC: Pew Research Center. Retrieved from http://www.journalism.org/news-item/the-2016-presidential-campaign-a-news-event-thats-hard-to-miss/
Graves, L. (2013, November 13). Corporate America’s new scam: Industry P.
R. firm poses as think tank! Salon. Retrieved from http://www.salon.com/
2013/11/13/
corporate_americas_new_scam_industry_p_r_firm_poses_as_think_tank/
Hargittai, E., Fullerton, L., Menchen-Trevino, E., & Thomas, K. Y. (2010).
Trust online: Young adults’ evaluation of web content. International
Journal of Communication, 4, 468–494.
Hobbs, R. (2010). Digital and media literacy: A plan of action. Washington,
DC: The Aspen Institute. Retrieved from https://www.aspeninstitute.org/
publications/digital-media-literacy-plan-action-2/
Hobbs, R., & Frost, R. (2003). Measuring the acquisition of media-literacy
skills. Reading Research Quarterly, 38, 330–355. doi:10.1598/RRQ.38.3.2
Iding, M. K., Crosby, M. E., Auernheimer, B., & Klemm, E. B. (2009). Web
site credibility: Why do people believe what they believe? Instructional
Science, 37, 43–63. doi:10.1007/s11251-008-9080-7
Kahne, J., & Bowyer, B. T. (2017). Educating for democracy in a partisan age:
Confronting the challenges of motivated reasoning and misinformation.
American Educational Research Journal, 54, 3–34. doi:10.3102/
0002831216679817
Kahne, J., Hodgin, E., & Eidman-Aadahl, E. (2016). Redesigning civic
education for the digital age: Participatory politics and the pursuit of
democratic engagement. Theory & Research in Social Education, 44, 1–
35. doi:10.1080/00933104.2015.1132646
Kahne, J., Lee, N., & Feezell, J. T. (2012). Digital media literacy education
and online civic and political participation. International Journal of
Communication, 6, 1–24.
Kahne, J., & Middaugh, E. (2012). Digital media shapes youth participation in
politics. Phi Delta Kappan, 94(3), 52–56. doi:10.1177/003172171209400312
Knight Commission on the Information Needs of Communities in a
Democracy. (2009). Informing communities: Sustaining democracy in
the digital age. Retrieved from https://knightfoundation.org/reports/
informing-communities-sustaining-democracy-digital
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin,
108, 480–498. doi:10.1037/0033-2909.108.3.480
Leu, D. J., Coiro, J., Kulikowich, J. M., & Cui, W. (2012, November). Using
the psychometric characteristics of multiple-choice, open Internet, and
closed (simulated) Internet formats to refine the development of online
research and comprehension assessments in science: Year three of the
ORCA project. Paper presented at the annual meeting of the Literacy
Research Association, San Diego, CA.
Li, M., Ruiz-Primo, M., & Shavelson, R. (2006). Towards a science achieve-
ment framework: The case of TIMSS 1999. In S. Howie & T. Plomp
(Eds.), Contexts of learning mathematics and science: Lessons learned
from TIMSS (pp. 291–311). London, UK: Routledge.
Lipton, E. (2014, February 9). Fight over minimum wage illustrates web of
industry ties. New York Times. Retrieved from https://www.nytimes.com/
2014/02/10/us/politics/fight-over-minimum-wage-illustrates-web-of-
industry-ties.html
Lodge, M., & Taber, C. S. (2013). The rationalizing voter. Cambridge, UK:
Cambridge University Press.
Lynch, M. P. (2016). The Internet of us: Knowing more and understanding
less in the age of big data. New York, NY: Norton.
Mason, L., & Metzger, S. A. (2012). Reconceptualizing media literacy in the
social studies: A pragmatist critique of the NCSS position statement on
media literacy. Theory & Research in Social Education, 40, 436–455.
doi:10.1080/00933104.2012.724630
Metzger, M. J. (2007). Making sense of credibility on the web: Models for
evaluating online information and recommendations for future research.
Journal of the American Society for Information Science and Technology,
58, 2078–2091. doi:10.1002/asi.20672
Metzger, M. J., Flanagin, A. J., & Medders, R. B. (2010). Social and heuristic
approaches to credibility evaluation online. Journal of Communication,
60, 413–439. doi:10.1111/j.1460-2466.2010.01488.x
Mihailidis, P., & Thevenin, B. (2013). Media literacy as a core competency for
engaged citizenship in participatory democracy. American Behavioral
Scientist, 57, 1611–1622. doi:10.1177/0002764213489015
minimumwage.com. (2014a). About. Retrieved from https://www.minimumwage.com/about/
minimumwage.com. (2014b). Denmark’s dollar forty-one menu. Retrieved from
https://www.minimumwage.com/2014/10/denmarks-dollar-forty-one-menu
MoveOn.org. (2015, November 17). New polling shows the @NRA is out of
touch with gun owners and their own members. ampr.gs/1Pyw4qg
#NRAfail [Tweet]. Retrieved from https://twitter.com/MoveOn/status/
666772893846675456?lang=en
Mozilla. (n.d.). Web literacy. Retrieved from https://learning.mozilla.org/en-
US/web-literacy
National Association for Media Literacy Education. (2007). Core principles of
media literacy education in the United States. Retrieved from https://
namle.net/publications/core-principles/
National Council for the Social Studies. (2016). Media literacy. Social
Education, 80, 183–185.
Pan, B., Hembrooke, H., Joachims, T., Lorigo, L., Gay, G., & Granka, L.
(2007). In Google we trust: Users’ decisions on rank, position, and
relevance. Journal of Computer-Mediated Communication, 12, 801–
823. doi:10.1111/j.1083-6101.2007.00351.x
Pariser, E. (2011). The filter bubble: How the new personalized web is
changing what we read and how we think. New York, NY: Penguin Press.
Pellegrino, J. W., Chudowsky, N., & Glaser, R. (Eds.). (2001). Knowing what
students know. Washington, DC: National Academy Press.
Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5),
1–6. doi:10.1108/10748120110424816
Rheingold, H. (2012). Net smart: How to thrive online. Cambridge, MA: MIT
Press.
Ruiz-Primo, M. A., Shavelson, R. J., Li, M., & Schultz, S. E. (2001). On the
validity of cognitive interpretations of scores from alternative mapping
techniques. Educational Assessment, 7, 99–141. doi:10.1207/
S15326977EA0702_2
Schmeiser, C. B., & Welch, C. J. (2006). Test development. In R. L. Brennan
(Ed.), Educational measurement (pp. 307–354). Westport, CT: Praeger.
Sundar, S. S. (2008). The MAIN model: A heuristic approach to understand-
ing technology effects on credibility. In M. J. Metzger & A. J. Flanagin
(Eds.), Digital media, youth, and credibility (pp. 73–100). Cambridge,
MA: MIT Press.
Taylor, K. L., & Dionne, J. (2000). Accessing problem-solving strategy
knowledge: The complementary use of concurrent verbal protocols and
retrospective debriefing. Journal of Educational Psychology, 92, 413–
425. doi:10.1037//0022-0663.92.3.413
Walraven, A., Brand-Gruwel, S., & Boshuizen, H. (2009). How students
evaluate information and sources when searching the world wide web
for information. Computers & Education, 52, 234–246. doi:10.1016/j.
compedu.2008.08.003
Westerwick, A. (2013). Effects of sponsorship, web site design, and Google
ranking on the credibility of online information. Journal of Computer-
Mediated Communication, 18, 194–211. doi:10.1111/jcc4.12006
Wineburg, S., & McGrew, S. (2017). Lateral reading: Reading less and
learning more when evaluating digital information (Stanford History
Education Group Working Paper No. 2017-A1). Retrieved from https://
ssrn.com/abstract=3048994
Wineburg, S., Smith, M., & Breakstone, J. (2012). New directions in assess-
ment: Using Library of Congress sources to assess historical understand-
ing. Social Education, 76, 290–293.

APPENDIX

“Article Analysis” Rubric

Mastery: Student thoroughly explains that the source of the article (written by an
employee of Bank of America or presented by Bank of America) might make it
less trustworthy because the bank stands to gain if people believe they have
financial problems and seek counsel from bank officials.
Emerging: Student identifies the authorship (or sponsorship) of the article as a
factor that may make it less trustworthy. At the same time, the student does not
provide a complete explanation or makes statements that are incorrect or
irrelevant.
Beginning: Student argues that the article is untrustworthy for reasons that are
unrelated to authorship/sponsorship or provides an answer that is unclear or
irrelevant.

“Evaluating Evidence” Rubric

Mastery: Student argues the post does not provide strong evidence and questions
the source of the post (e.g., we don’t know anything about the author of the post)
and/or the source of the photograph (e.g., we don’t know where the photo was
taken).
Emerging: Student argues that the post does not provide strong evidence, but the
explanation does not consider the source of the post or the source of the
photograph, or the explanation is incomplete.
Beginning: Student argues that the post provides strong evidence or uses incorrect
or incoherent reasoning.

“Claims on Social Media” Rubric


Question 1: Why might this tweet be a useful source?

Mastery: Student fully explains that the tweet may be useful because it includes
data from a poll conducted by a polling firm.
Emerging: Student addresses the polling data and/or the source of the polling data
but does not fully explain how those elements may make the tweet useful.
Beginning: Student does not address the polling data or the source of the polling
data as a reason the tweet may be useful.

Question 2: Why might this tweet not be a useful source?

Mastery: Student fully explains how the political motivations of the organizations
may have influenced the content of the tweet and/or poll, which may make the
tweet less useful.
Emerging: Student addresses the source of the tweet or the source of the news
release but does not fully explain how those elements may make the tweet less
useful.
Beginning: Student does not address the source of the tweet or the source of the
news release as reasons the tweet may be less useful.

“Article Evaluation” Rubric

Mastery: Student rejects the website as a reliable source and provides a clear
rationale based on a thorough evaluation of the organizations behind
minimumwage.com.
Emerging: Student rejects the website as a reliable source and identifies the intent
of the website’s sponsors but does not provide a complete rationale.
Beginning: Student accepts the source as trustworthy or rejects the source based on
irrelevant considerations.
