Can Students Evaluate Online Sources? Learning From Assessments of Civic Online Reasoning
Sarah McGrew, Joel Breakstone, Teresa Ortega, Mark Smith & Sam Wineburg
To cite this article: Sarah McGrew, Joel Breakstone, Teresa Ortega, Mark Smith & Sam
Wineburg (2018) Can Students Evaluate Online Sources? Learning From Assessments
of Civic Online Reasoning, Theory & Research in Social Education, 46:2, 165-193, DOI:
10.1080/00933104.2017.1416320
this challenge. However, few assessments exist to help teachers gauge their
students’ abilities to evaluate evidence and arguments online. We set out to fill
this void by developing assessments of students’ civic online reasoning—the
ability to effectively search for, evaluate, and verify social and political
information online.
Why should students learn to evaluate online sources? Scholars argue that
digital literacy is—and will increasingly be—necessary for informed and
engaged citizenship (Kahne, Lee, & Feezell, 2012; Knight Commission on
the Information Needs of Communities in a Democracy [Knight Commission],
2009; Mihailidis & Thevenin, 2013). “Information,” according to the Knight
Commission (2009), “is as vital to the functioning of healthy communities as
clean air, safe streets, good schools, and public health” (p. 19). Simply having
access to information is not enough. Citizens must also be able to make
judgments about the credibility of that information (Hobbs, 2010; Mihailidis
& Thevenin, 2013). Over time, a lax stance toward evaluating online informa-
tion diminishes our ability to develop informed opinions. As we increasingly
rely on the Internet as a source of political and social information, effective
discernment skills grow ever more critical. Indeed, young people reported that
they access nearly 75% of their news online and that social media was their
top source for news about the 2016 presidential election (American Press
Institute, 2015; Gottfried, Barthel, Shearer, & Mitchell, 2016).
Young people’s reliance on the Internet as a source of information pre-
sents both immense opportunities for democratic participation and formidable
challenges. Because traditional information gatekeepers are largely absent
online, information travels with relative freedom and astounding speed. If
citizens are able to use the Internet well, it can be an empowering and
enriching venue for information sharing (Kahne & Middaugh, 2012;
Rheingold, 2012). However, it is imperative for students to understand how
the Internet shapes the information they receive (e.g., Lynch, 2016; Mason &
Metzger, 2012; Pariser, 2011) and to know how to find reliable information
(Kahne, Hodgin, & Eidman-Aadahl, 2016; Metzger, 2007; Metzger, Flanagin,
& Medders, 2010). If young people consume information without determining
who is behind it and what that source’s agenda might be, they are easy marks
for digital rogues. This article presents our rationale for developing assess-
ments of civic online reasoning, descriptions of representative tasks, and
results from administering these tasks with students across the country.
LITERATURE REVIEW
How do people vet information they find online? To date, studies have
shown that adults use cues and heuristics to decide if a website is credible,
including the authority of a search engine, site design and functionality,
previous experience with or referral to a site, and perceived expertise
(cf. Flanagin & Metzger, 2007; Fogg et al., 2003; Metzger et al., 2010;
Sundar, 2008). However, these heuristics do not appear to be used systematically, and even when they are, they can be misleading. For
example, 46% of over 2,500 participants in one study commented on site
design to explain why they found one site more credible than another. Design
was the most commonly mentioned factor in participants’ credibility assess-
ments. Some chose a site because it looked “professional,” “pleasing,” or, as
one person wrote, “It just looks more credible” (Fogg et al., 2003, p. 5). If
an attractive, typo-free appearance is all it takes for a website to convince users it is trustworthy, those users are likely to be repeatedly deceived.
Should we be more hopeful about young people’s skills online? Scholars
initially argued that “digital natives” grew up accessing information online and
would therefore learn and think in ways different from those of their parents and other
“digital immigrants” (Prensky, 2001). Researchers now believe the story is
more complicated (Bennett, 2012; Gasser, Cortesi, Malik, & Lee, 2012).
Students struggle with many aspects of gathering information online, includ-
ing searching for and evaluating information.
In the context of an open Internet search, young people interpret the order
of search results as a signal of websites’ trustworthiness. If a site is listed
higher up in the search results, students often assume it is more reliable
(Hargittai, Fullerton, Menchen-Trevino, & Thomas, 2010; Westerwick,
2013). Even when researchers manipulated search results so that the most
relevant results were at the bottom of the page, students persisted in their
tendency to select from the top few results (Pan et al., 2007).
Once they select a site from a list of search results, students struggle to
effectively evaluate it. They rarely consider the source of a website when
assessing its credibility (Bartlett & Miller, 2011; Barzilai & Zohar, 2012;
Flanagin & Metzger, 2010; Walraven, Brand-Gruwel, & Boshuizen, 2009).
In one study, few students commented on the authors of the information they
found online. No students further investigated the authors or verified their
credentials (Hargittai et al., 2010). Rather than carefully evaluating informa-
tion based on the credibility of its source or the veracity of its evidence,
students use many of the heuristics adults use, including site design, ease of
navigation, and goodness of fit between the content and the information
students are looking for (Barzilai & Zohar, 2012; Iding, Crosby,
Auernheimer, & Klemm, 2009; Walraven et al., 2009).
These studies paint a pessimistic portrait. There is even greater cause for
concern if we consider that these studies asked students to search for informa-
tion on straightforward health and science issues instead of contentious poli-
tical questions. In recent studies, young people have researched questions like
“Is chocolate healthy?” (Barzilai & Zohar, 2012) and “What caused the
eruption of Mt. St. Helens?” (Goldman, Braasch, Wiley, Graesser, &
Brodowinska, 2012). Researching why a volcano erupted is different from trying to decide, for example, if stronger gun control laws would curb gun violence.
THEORETICAL FRAMEWORK
METHOD
Task Development
We did not design tasks with specific grade levels in mind. Instead, we
attempted to create assessments across a range of complexity so that teachers
could select tasks based on their students’ sophistication with evaluating
information. Theoretically, a high school classroom where students have had
sustained instruction in online reasoning could use more complex assessments
than a college classroom where students have had little exposure to online
reasoning lessons. In deciding where to pilot each task, we sent the most
straightforward tasks (delivered in paper-and-pencil form and assessing a
single competency) to middle school students and the most complex assess-
ments (delivered online and requiring students to coordinate multiple compe-
tencies to successfully answer) to college students.
For tasks administered online via Google Forms, administration proceeded in the same way, except that instructors provided students with links to the assessments.
Analysis
RESULTS
Fifteen tasks were developed, and 2,616 responses to the final versions of
the tasks were collected from middle school, high school, and college stu-
dents. Although these tasks differed in the constructs they assessed, the
content they included, and their level of complexity, there was a striking
degree of consistency in students’ performance on these tasks. In addition to
an overview of students’ performance on all of the assessments, this section
details one assessment from each competency in greater depth.
One set of tasks we developed targets the first core competency of civic
online reasoning: Can students determine who is behind information and
assess that source’s possible motivations for providing information? (See
Table 1 for a summary of tasks assessing this competency.)
Before reading any online source, students need to ask a fundamental
question: Where is this information coming from? One task, titled “Article
Analysis” (see Figure 1), presented students with an article about millennials’
money habits “presented by” Bank of America and written by a bank executive. The task
assessed a student’s ability to recognize the source of the article and address
why a sponsored post by a bank on this topic might be problematic.
Successful responses recognized the source of the article and explained why
the source’s motives and conflict of interest might lead to questions about the
article’s credibility.
Over 500 middle school students completed this assessment in piloting, and
201 responded to the final version. Responses indicated that most students
struggled to identify sponsored posts and to understand who was responsible
for their content. Sixty-eight percent of student responses to the final version of
the task were scored “Beginning,” the lowest level of the rubric (see Appendix).
These students did not include concerns about relevant aspects of the article’s
authorship or sponsorship in their responses. Seventeen percent of students who
completed the task argued that the article could not represent the money habits of
all millennials. As one student wrote, “One reason I would not trust this article is
because not all millennials need help with financial planning. Some might know a
lot about it.” Similarly, 11% questioned the author’s qualifications to write the article, for reasons unrelated to his job at the bank. Some based their skepticism on the
author’s age; as one student argued, “Well for the first reason, it almost seems
from the picture that Andrew Plepler looks like a millennial. I say this because if
you look closely, you can’t really see any wrinkles on his skin.”
Only 14% of students composed answers that were scored as “Emerging,”
the rubric’s middle level. These students questioned the source based on
relevant concerns about its authorship or sponsorship but did not fully explain
their reasoning. For example, a student wrote, “One reason why I might not
trust the article is because the author is an executive of a company that sells
In contrast, responses scored as “Mastery,” the rubric’s highest level, recognized the article’s sponsorship and explained the resulting conflict of interest, as in this response:
One reason that I might not trust this article would be that it is presented
by a bank. Since the article is presented by a bank it would of course in
some way try to promote people to use that bank. Bank of America is
saying that millennials need help with financial planning to promote
millennials to apply for financial planning help with Bank of America.
An analogous task piloted with high school students yielded similar results.
The “Comparing Articles” task presented the top sections of two articles, both
of which appeared on the website of The Atlantic. One was an article from the
“Science” section, while the other was “Sponsored Content” from Shell Oil
Company. Asked which of the two articles was a more reliable source for
learning about policies to solve global climate change, just 11% of the 176
students who completed the task selected the article from the “Science”
section and raised concerns about the potential conflict of interest when an
oil company sponsors an article about climate change. The overwhelming
majority of students, 80%, wrote responses that were scored as “Beginning.”
Most (over 70% of all students) selected the sponsored content from Shell
because they believed it contained more data and information. These students
were heavily swayed by the graphics that accompanied each article: the news
article included an image of a militant Uncle Sam below its headline (“Why
Solving Climate Change Will Be Like Mobilizing for War”). Shell’s sponsored
content included a stylized pie chart projecting percentages that different
sources (coal, nuclear, renewables, natural gas, etc.) might contribute to help
fuel the “larger, energy-hungry world of tomorrow.” As one of these students
wrote, “I believe Article B is more reliable, because it’s easier to understand
with the graph and seems more reliable because the chart shows facts right in
front of you.”
Another set of tasks taps the second core competency of civic online reasoning: Can students determine whether the evidence presented is trustworthy and sufficient to support a claim? (See Table 2.)
The “Evaluating Evidence” assessment gauged whether students, con-
fronted with a vivid photograph, would stop to ask two critical questions:
“Where does this evidence come from?” and “Does it actually support the
claim being made?” (see Figure 2). Students were presented with a post from
Imgur, a photo sharing website, which includes a picture of daisies along with
the claim that the flowers have “nuclear birth defects” from Japan’s
Fukushima Daiichi nuclear disaster.
Table 3. Tasks Assessing “Who Is Behind the Information?” and “What Is the
Evidence?”
built for sharing opinions, and though there are plenty of news organizations
sharing facts on Twitter, I’d be more likely to trust an article than a tweet.”
Two of seven students who thought aloud while completing this task
expressed similar reasoning. As one explained, “It’s a tweet, so I don’t find
it that useful, personally,” and “Anything from Twitter can be falsified, so it’s
not a really reliable source.”
Some of our most complex tasks tap students’ ability to contend with the
third core competency of civic online reasoning: investigating multiple
sources before being satisfied that a claim is true or that a source is author-
itative (see Table 4).
The “Article Evaluation” task tapped students’ ability to investigate the
reliability of a website by checking what other sources say about the site’s
backers. Students were directed, via a Google Form, to an article entitled
“Denmark’s Dollar Forty-One Menu” posted on the website minimumwage.
com (2014b; see Figure 4). The article uses Denmark’s fast food industry as a
case study to argue that raising the minimum wage in the United States would
result in higher prices and fewer jobs.
Minimumwage.com is, at first glance, a reliable-looking website: It has
“Research” and “Media” tabs and describes itself (on its “About” page) as “a
non-profit research organization dedicated to studying public policy issues
surrounding employment growth.” The page adds that minimumwage.com is a
project of the Employment Policies Institute (EPI), which “sponsors nonparti-
san research which is conducted by independent economists at major univer-
sities around the country” (minimumwage.com, 2014a, para. 2). If one
performs an open search for EPI, however, credible and authoritative sources
dispute such anodyne descriptions. The New York Times reported that EPI “is
led by the advertising and public relations executive Richard B. Berman, who
I read the “About Us” page for MinimumWage.com and also for
Employment Policies Institute. EPI sponsors MinimumWage.com and is
a nonprofit research organization dedicated to studying policy issues
surrounding employment, and it funds “nonpartisan” studies by econo-
mists around the nation. The fact that the organization is a non-profit, that
it sponsors nonpartisan studies, and that it contains both pros and cons of
raising the minimum wage on its website, makes me trust this source.
Although this student expressed sound reasoning about factors that might lend credibility to a source, particularly its basis in nonpartisan, university-based research, the response took minimumwage.com’s description of itself at face value. Students who scored at the Mastery level, by contrast, researched the organization and recognized the potential conflict of interest involved.
These results show a great degree of consistency: Across the core com-
petencies, students struggled to effectively evaluate social and political infor-
mation online. Regardless of grade level, most students did not consider who
created content, did not consider the evidence presented, and did not consult
other sources to verify claims.
DISCUSSION
The majority of high school students selected one of The Atlantic posts not by
weighing the authority of the sources but by comparing the amount of informa-
tion presented by the posts’ graphics. Students also made broad generalizations
based on the platform on which information appeared: Many college students
raised concerns about the tweet from MoveOn.org precisely because it appeared
on a social media platform. Even when students attempted to investigate the
source, they were often satisfied by shallow information. For example, middle
school students focused on the age of Andrew Plepler, the Bank of America
executive who authored the post that students were asked to evaluate. Although
these students focused on the source, they did not consider Plepler’s job at Bank
of America, which was far more relevant in this case than his age.
Students did not do much better at evaluating evidence presented to
support social or political claims. We saw numerous examples of students
being taken in by the appearance of evidence. For example, in the assessment
with an Imgur post, high school students were captivated by the vivid photo-
graph of “nuclear” daisies. Perhaps distracted by the vividness of this evi-
dence, students failed to raise questions about whether an unknown user on a
popular photo-sharing website was a trustworthy source. Additionally, few
students raised questions about whether the photo, even if it was authentic,
could sufficiently support the claims being made. In this case, students were
almost blinded by the photograph. In other assessments, we saw students
being taken in by statistics, quotes from seemingly authoritative figures, and
data displays without asking about the source of the evidence and whether it
was relevant to the claims being made. Even when the evidence was strong,
college students often could not articulate reasons why. In the MoveOn.org
task, most students did not focus on the evidence the tweet provided (a poll
conducted by a well-established polling firm) as a reason the tweet might be
useful. Instead, they resorted to judgments based on the appearance or content
of the tweet itself.
Even when given the opportunity in a live web task, students rarely
showed evidence of venturing outside the webpage on which they landed.
Although they were explicitly instructed that they were free to search outside
the site in the “Article Evaluation” task, only 13 of 95 high school students
and 8 of 58 college students reported using outside websites to evaluate
minimumwage.com. This reluctance to leave the initial site stands in stark
contrast to the behavior of professional fact checkers as they evaluated
online information: Fact checkers regularly read laterally, departing a web-
site to open new tabs and see what other sources had to say about a source
(Wineburg & McGrew, 2017). Because students rarely ventured outside the
confines of the website where they started, students relied on the organiza-
tion’s description of itself—if they even got that far. In most cases, high
school and college students simply evaluated the content or appearance of
the initial page and never sought a broader perspective by turning to the
open Internet.
Limitations
Routes Forward
Teachers cannot ensure that students will use these skills outside the class-
room, but that dilemma is not unique to civic online reasoning. Teachers can,
however, provide students with opportunities to learn and practice these skills.
In fact, evidence suggests that explicit instruction may help students develop a
commitment to accuracy in online evaluations (Kahne & Bowyer, 2017).
Taken together, our results point to priorities for civic online reasoning
instruction. Results from assessments targeting the construct of “Who is behind
the information?” suggest that students need to be taught, first and foremost, that
determining the author or sponsoring organization of a story is a critical part of
evaluating it. Powerful examples may be useful here: Teachers could present
students with a source that may seem credible (such as the article sponsored by
Shell Oil Company) and help them question and complicate their initial assump-
tions. Next, students need support in learning how to investigate digital sources,
whether it is researching the author’s qualifications and motivations or probing
the sponsoring organization’s potential conflicts of interest. With repeated
practice identifying sources of information, researching relevant information
about those sources, and synthesizing what they learn to make judgments about
an article’s trustworthiness, students should be able to improve their skills.
Students also need to be explicitly taught how to evaluate evidence. They
need support as they practice evaluating the sources of evidence (just as they do
when they evaluate sources of articles or webpages). Additionally, they need
more opportunities to consider how and whether evidence provided actually
supports a claim. A teacher could, for example, model how to examine evidence
provided by the Imgur user in the “Evaluating Evidence” task. First, the teacher
could summarize the claim being made by the post’s title, “Fukushima Nuclear
Flowers,” and the caption, “Not much more to say, this is what happens when
flowers get nuclear birth defects.” The teacher could then examine whether the
“evidence” provided supports the claim that the Fukushima nuclear disaster
caused the daisies’ mutations. In the process, the teacher could raise questions
about the source and location of the photograph and the causal link between the
nuclear disaster and the flowers’ appearance. Students could then practice with
evidence presented about other topics.
Finally, students need support in learning how to consider multiple sources
of information as they investigate online content. If students completed the
minimumwage.com assessment, the teacher could ask the class to compare the
conclusions reached by two anonymous students—one who trusted what mini-
mumwage.com said about itself and one who sought to find out what others had
to say about the organization. The class could then discuss reasons why con-
sulting multiple sources is necessary and practice strategies for doing so online.
Implications
Our findings show that students struggled to engage in even basic evalua-
tions of authors, sources, and evidence. We need to help them develop the
skills necessary to find reliable sources about social and political topics. When
people struggle to evaluate information, they risk making decisions that go
against their own interests. In a democratic society, our fellow citizens’ online
reasoning skills affect us. As more people go online for social and political
information, the ability to find reliable information can strengthen our society.
Or, if we are unable to distinguish truth from falsehood, it can weaken the
quality of our decisions and our ability to advocate for our interests. In order
to capitalize on the promise of the Internet and not be victims of its ruses,
teachers need tools to prepare students to evaluate information and arguments
online. Student responses to our tasks show we have a long way to go, but
they also suggest a route forward to develop assessments and curricular tools
to support teachers and their students in this critical work.
FUNDING
ORCID
REFERENCES
American Press Institute. (2015). How millennials get news: Inside the habits
of America’s first digital generation. Retrieved from http://www.americanpressinstitute.org
Ashley, S., Maksl, A., & Craft, S. (2013). Developing a news media literacy
scale. Journalism & Mass Communication Educator, 68, 7–21.
doi:10.1177/1077695812469802
Bartlett, J., & Miller, C. (2011). Truth, lies, and the Internet: A report into
young people’s digital fluency. London, UK: Demos. Retrieved from
https://www.demos.co.uk/
Barzilai, S., & Zohar, A. (2012). Epistemic thinking in action: Evaluating and
integrating online sources. Cognition and Instruction, 30, 39–85.
doi:10.1080/07370008.2011.636495
Bennett, S. (2012). Digital natives. In Z. Yan (Ed.), Encyclopedia of cyber
behavior: Volume 1 (pp. 212–219). Hershey, PA: IGI Global.
Breakstone, J. (2014). Try, try, try again: The process of designing new history
assessments. Theory & Research in Social Education, 42, 453–485.
doi:10.1080/00933104.2014.965860
Bronfenbrenner, U. (1994). Ecological models of human development. In T.
Husén & T. N. Postlethwaite (Eds.), International encyclopedia of educa-
tion (2nd ed., pp. 1643–1647). Oxford, UK: Elsevier.
Common Sense Media. (2012). Identifying high-quality sites [PDF docu-
ment]. Retrieved from https://www.commonsense.org/education/system/
files/uploads/classroom-curriculum/6-8-unit3-identifyinghighqualitysites-
2015.pdf?x=1
Common Sense Media. (n.d.). Scope and sequence: Common Sense K–12
digital citizenship curriculum. Retrieved from https://www.commonsense.
org/education/scope-and-sequence
Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as
data (Rev. ed.). Cambridge, MA: MIT Press.
Flanagin, A. J., & Metzger, M. J. (2007). The role of site features, user
attributes, and information verification behaviors on the perceived cred-
ibility of web-based information. New Media & Society, 9, 319–342.
doi:10.1177/1461444807075015
Flanagin, A. J., & Metzger, M. J. (2010). Kids and credibility: An empirical
examination of youth, digital media use, and information credibility.
Cambridge, MA: MIT Press.
Fogg, B. J., Soohoo, C., Danielson, D. R., Marable, L., Stanford, J., & Tauber, E. R.
(2003, June). How do users evaluate the credibility of web sites? A study with
over 2,500 participants. Paper presented at the Association for Computing
Machinery Conference on Designing for User Experiences, San Francisco, CA.
Gasser, U., Cortesi, S., Malik, M., & Lee, A. (2012). Youth and digital media:
From credibility to information quality. Cambridge, MA: The Berkman
Center for Internet and Society. Retrieved from https://papers.ssrn.com/
sol3/papers.cfm?abstract_id=2005272
Goldman, S. R., Braasch, J. L. G., Wiley, J., Graesser, A. C., & Brodowinska,
K. (2012). Comprehending and learning from Internet sources: Processing
patterns of better and poorer learners. Reading Research Quarterly, 47,
356–381. doi:10.1002/RRQ.027
Google, & iKeepSafe. (2013). Class 1: Become an online sleuth [PDF docu-
ment]. Retrieved from http://ikeepsafe.org/wp-content/uploads/2011/10/
Class-1_Become-an-Online-Sleuth_FINAL-1.pdf
Gottfried, J., Barthel, M., Shearer, E., & Mitchell, A. (2016, February 4). The
2016 presidential campaign—A news event that’s hard to miss.
Washington, DC: Pew Research Center. Retrieved from http://www.journalism.org/news-item/the-2016-presidential-campaign-a-news-event-thats-hard-to-miss/
Graves, L. (2013, November 13). Corporate America’s new scam: Industry P.
R. firm poses as think tank! Salon. Retrieved from http://www.salon.com/2013/11/13/corporate_americas_new_scam_industry_p_r_firm_poses_as_think_tank/
Hargittai, E., Fullerton, L., Menchen-Trevino, E., & Thomas, K. Y. (2010).
Trust online: Young adults’ evaluation of web content. International
Journal of Communication, 4, 468–494.
Hobbs, R. (2010). Digital and media literacy: A plan of action. Washington,
DC: The Aspen Institute. Retrieved from https://www.aspeninstitute.org/
publications/digital-media-literacy-plan-action-2/
Hobbs, R., & Frost, R. (2003). Measuring the acquisition of media-literacy
skills. Reading Research Quarterly, 38, 330–355. doi:10.1598/RRQ.38.3.2
Iding, M. K., Crosby, M. E., Auernheimer, B., & Klemm, E. B. (2009). Web site credibility: Why do people believe what they believe? Instructional Science, 37, 43–63. doi:10.1007/s11251-008-9080-7
Kahne, J., & Bowyer, B. T. (2017). Educating for democracy in a partisan age:
Confronting the challenges of motivated reasoning and misinformation.
American Educational Research Journal, 54, 3–34. doi:10.3102/
0002831216679817
Kahne, J., Hodgin, E., & Eidman-Aadahl, E. (2016). Redesigning civic
education for the digital age: Participatory politics and the pursuit of
democratic engagement. Theory & Research in Social Education, 44, 1–
35. doi:10.1080/00933104.2015.1132646
Kahne, J., Lee, N., & Feezell, J. T. (2012). Digital media literacy education
and online civic and political participation. International Journal of
Communication, 6, 1–24.
Kahne, J., & Middaugh, E. (2012). Digital media shapes youth participation in
politics. Phi Delta Kappan, 94(3), 52–56. doi:10.1177/003172171209400312
Knight Commission on the Information Needs of Communities in a
Democracy. (2009). Informing communities: Sustaining democracy in
the digital age. Retrieved from https://knightfoundation.org/reports/
informing-communities-sustaining-democracy-digital
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin,
108, 480–498. doi:10.1037/0033-2909.108.3.480
Leu, D. J., Coiro, J., Kulikowich, J. M., & Cui, W. (2012, November). Using
the psychometric characteristics of multiple-choice, open Internet, and
National Council for the Social Studies. (2016). Media literacy. Social
Education, 80, 183–185.
Pan, B., Hembrooke, H., Joachims, T., Lorigo, L., Gay, G., & Granka, L.
(2007). In Google we trust: Users’ decisions on rank, position, and
relevance. Journal of Computer-Mediated Communication, 12, 801–
823. doi:10.1111/j.1083-6101.2007.00351.x
Pariser, E. (2011). The filter bubble: How the new personalized web is
changing what we read and how we think. New York, NY: Penguin Press.
Pellegrino, J. W., Chudowsky, N., & Glaser, R. (Eds.). (2001). Knowing what
students know. Washington, DC: National Academy Press.
Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5),
1–6. doi:10.1108/10748120110424816
Rheingold, H. (2012). Net smart: How to thrive online. Cambridge, MA: MIT
Press.
Ruiz-Primo, M. A., Shavelson, R. J., Li, M., & Schultz, S. E. (2001). On the
validity of cognitive interpretations of scores from alternative mapping
techniques. Educational Assessment, 7, 99–141. doi:10.1207/
S15326977EA0702_2
Schmeiser, C. B., & Welch, C. J. (2006). Test development. In R. L. Brennan
(Ed.), Educational measurement (pp. 307–354). Westport, CT: Praeger.
Sundar, S. S. (2008). The MAIN model: A heuristic approach to understand-
ing technology effects on credibility. In M. J. Metzger & A. J. Flanagin
(Eds.), Digital media, youth, and credibility (pp. 73–100). Cambridge,
MA: MIT Press.
Taylor, K. L., & Dionne, J. (2000). Accessing problem-solving strategy
knowledge: The complementary use of concurrent verbal protocols and
retrospective debriefing. Journal of Educational Psychology, 92, 413–
425. doi:10.1037/0022-0663.92.3.413
Walraven, A., Brand-Gruwel, S., & Boshuizen, H. (2009). How students
evaluate information and sources when searching the world wide web
for information. Computers & Education, 52, 234–246. doi:10.1016/j.
compedu.2008.08.003
Westerwick, A. (2013). Effects of sponsorship, web site design, and Google
ranking on the credibility of online information. Journal of Computer-
Mediated Communication, 18, 194–211. doi:10.1111/jcc4.12006
Wineburg, S., & McGrew, S. (2017). Lateral reading: Reading less and
learning more when evaluating digital information (Stanford History
Education Group Working Paper No. 2017-A1). Retrieved from https://
ssrn.com/abstract=3048994
Wineburg, S., Smith, M., & Breakstone, J. (2012). New directions in assess-
ment: Using Library of Congress sources to assess historical understand-
ing. Social Education, 76, 290–293.
APPENDIX
Rubric for the “Article Analysis” task

Mastery: Student thoroughly explains that the source of the article (written by an employee of Bank of America or presented by Bank of America) might make it less trustworthy because the bank stands to gain if people believe they have financial problems and seek counsel from bank officials.

Emerging: Student identifies the authorship (or sponsorship) of the article as a factor that may make it less trustworthy. At the same time, the student does not provide a complete explanation or makes statements that are incorrect or irrelevant.

Beginning: Student argues that the article is untrustworthy for reasons that are unrelated to authorship/sponsorship or provides an answer that is unclear or irrelevant.

Rubric for the “Evaluating Evidence” task

Mastery: Student argues the post does not provide strong evidence and questions the source of the post (e.g., we don’t know anything about the author of the post) and/or the source of the photograph (e.g., we don’t know where the photo was taken).

Emerging: Student argues that the post does not provide strong evidence, but the explanation does not consider the source of the post or the source of the photograph, or the explanation is incomplete.

Beginning: Student argues that the post provides strong evidence or uses incorrect or incoherent reasoning.

Rubric for the tweet task (usefulness of the polling data)

Mastery: Student fully explains that the tweet may be useful because it includes data from a poll conducted by a polling firm.

Emerging: Student addresses the polling data and/or the source of the polling data but does not fully explain how those elements may make the tweet useful.

Beginning: Student does not address the polling data or the source of the polling data as a reason the tweet may be useful.

Rubric for the tweet task (sources behind the tweet and poll)

Mastery: Student fully explains how the political motivations of the organizations may have influenced the content of the tweet and/or poll, which may make the tweet less useful.

Emerging: Student addresses the source of the tweet or the source of the news release but does not fully explain how those elements may make the tweet less useful.

Beginning: Student does not address the source of the tweet or the source of the news release as reasons the tweet may be less useful.

Rubric for the “Article Evaluation” task

Mastery: Student rejects the website as a reliable source and provides a clear rationale based on a thorough evaluation of the organizations behind minimumwage.com.

Emerging: Student rejects the website as a reliable source and identifies the intent of the website’s sponsors but does not provide a complete rationale.

Beginning: Student accepts the source as trustworthy or rejects the source based on irrelevant considerations.