Equipping Preservice Elementary Teachers for Data Use in the Classroom
Todd D. Reeves
To cite this article: Todd D. Reeves (2017) Equipping Preservice Elementary Teachers
for Data Use in the Classroom, Action in Teacher Education, 39:4, 361-380, DOI:
10.1080/01626620.2017.1336131
In the current era of assessment-driven accountability and reform, teachers’ capacity to analyze,
interpret, and use data to inform decisions is a critical component of their professional expertise
(Mandinach & Gummer, 2013a). Professional standards require teachers to analyze and interpret
assessment data to better understand student cognition and make instructional decisions (Interstate
Teacher Assessment and Support Consortium, 2011). By using data to inform decisions related to
instructional goals, methods, and time allocation, teachers can theoretically better target their
instruction to student needs, resulting in higher levels of student achievement (McDougall,
Saunders, & Goldenberg, 2007; Means, Padilla, DeBarger, & Bakia, 2009). Rigorous empirical
evidence also supports the contention that teacher engagement in data use can result in improved
student achievement growth (Carlson, Borman, & Robinson, 2011). For example, a school-wide
educator data use intervention implemented in the Netherlands was associated with overall student
achievement gains that translated to about a month of schooling (van Geel, Keuning, Visscher, &
Fox, 2016).
Research suggests that inservice teachers find the analysis, interpretation, and instructional use of
data difficult, however (DeLuca & Bellara, 2013; Stobaugh, Tassell, & Norman, 2010; Wayman &
Jimerson, 2013). This underscores the need to prepare and support teachers with respect to these
practices (Jacobs, Gregory, Hoppey, & Yendol-Hoppey, 2009; Kerr, Marsh, Ikemoto, Darilek, &
Barney, 2006; Mandinach & Gummer, 2013b). Despite the current emphasis on inservice teachers’
data literacy, recent research indicates that preservice teacher education inadequately equips teachers
by focusing only superficially on data literacy and more on assessment literacy (Greenberg & Walsh,
2012; Mandinach, Friedman, & Gummer, 2015; Mann & Simon, 2010, July). Such concerns are
CONTACT Todd D. Reeves treeves@niu.edu Educational Research and Evaluation, Northern Illinois University, 204A Gabel
Hall, 1425 W. Lincoln Highway, DeKalb, IL 60115, USA.
Color versions of one or more of the figures in the article can be found online at www.tandfonline.com/UATE.
© 2017 Association of Teacher Educators
especially salient in light of external mandates in some locales for preservice teachers to demonstrate
such skills to attain licensure. The edTPA, for example, calls for preservice teachers to engage in
processes such as analyzing, interpreting, and instructionally using data. Moreover, there is a paucity
of research concerning effective preservice mechanisms by which to equip teachers for data use
(Greenberg & Walsh, 2012; Mandinach & Gummer, 2013a; Reeves & Honig, 2015; Reeves, Summers, &
Grove, 2016).
These problems have led to calls for increased attention to the promotion of data literacy and data
use skills among preservice teachers, the future of our nation’s teaching force (Cramer, Little, &
McHatton, 2014; Data Quality Campaign, 2014). In response, the purpose of this article is to
describe a course-based classroom data literacy experience for preservice elementary teachers. The
experience engages preservice teachers in scoring (teacher-developed) traditional and performance
classroom assessments, and analyzing, interpreting, and making decisions based on the data. In
addition to describing the design of the experience, including its objectives, materials, and activities,
this article also discusses potential implementation considerations and challenges for the reader.
data statistically. Third, the experience represents the domain of transforming data into information,
including interpreting data, understanding different data displays and representations, examining patterns,
and articulating inferences and conclusions. Finally, the experience targets skills that are part of Mandinach
and Gummer’s transforming information into a decision domain (namely specifying next instructional
steps, and instructional strategies to use).
Teacher transformation of information into a decision has proved to be a particularly elusive
aspect of data use (Mandinach & Jimerson, 2016). There is evidence that, even among trained
teachers who regularly engage in data use (broadly), little time is spent considering the implications
of data for instruction, compared to other processes such as data collection and analysis (Slavit,
Nelson, & Deuel, 2013). The difficulty of this domain may be explained by its invocation of
multiple other forms of knowledge (e.g., content, pedagogy; Mandinach & Gummer, 2016). It is for
this reason that the innovation described here culminates with opportunities for preservice teachers
to consider potential instructional implications of classroom assessment data.
Collaboration
Teacher collaboration is prevalent in the literature as a central element of many inservice data use
interventions (Mandinach & Jimerson, 2016; Poortman & Schildkamp, 2016; van Geel et al., 2016). At
the same time, teachers increasingly engage in collaborative activities around data in the context of data
teams and professional learning communities (Carlson et al., 2011; Hamilton et al., 2009; Jacobs et al., 2009).
Although teacher collaboration can prove challenging, it affords the pooling of teachers’ expertise that
might be necessary for difficult practices such as data use (Athanases et al., 2013; Farley-Ripple & Buttram,
2014; Means et al., 2011). A study involving preservice teacher collaboration around data also reported
pretest–posttest increases in preservice teachers’ self-reported efficacy concerning data use practices (Piro
& Hutchinson, 2014).
Contextual Relevance
Another design element vis-à-vis data literacy and use interventions is alignment of the intervention with
teachers’ local context(s) (e.g., curriculum, assessments, students). In Gearhart and Osmundson’s (2009)
study, 23 science teachers developed data use skills through the examination of actual student work
gathered via their own assessments and from their own students. Interview-based evidence gathered
from inservice teachers also indicates that they value close alignment between data use professional
development and their classroom and curricular contexts (Wayman & Jimerson, 2013).
Use of Technology
The use of computer-based technology has also received attention in the literature with respect to the
development of teacher data literacy and data use. For example, such technology can be used to
automate data analysis processes (e.g., filtering, summarizing) and facilitate interpretation of data.
Data dashboards and data systems—which are increasingly common in schools—serve this purpose.
Technological tools used in these ways might be especially helpful given that teachers have been
shown to need support with data analysis and interpretation (Wayman & Jimerson, 2013).
Preservice teachers receive in-depth feedback on their assessments from a peer and the instructor.
Participants administer their assessments to a sample of K-8 students during a clinical experience
and bring all collected student work (e.g., completed tests, projects, rubrics, checklists) to the first
session. Prior to the experience, preservice teachers also receive direct instruction in scoring student
work, summarizing item- and test-level data (tabularly, graphically, statistically), and disaggregating
data by subgroup and content standard/behavioral objective.
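The summarizing and disaggregating steps described above can be sketched in code. The following is a minimal, hypothetical illustration only; the objective codes, subgroup labels, and scores are invented, and the actual workshop described in this article uses Excel worksheets rather than Python:

```python
# Hypothetical sketch of summarizing item-level scores by behavioral
# objective and disaggregating them by student subgroup. All codes,
# labels, and scores below are invented examples.
from statistics import mean

# Each record: a student, a subgroup label, and per-item scores keyed
# by a behavioral-objective code (e.g., "1SA" = one-step addition).
records = [
    {"student": "A", "subgroup": "ELL",     "scores": {"1SA": 2, "1SS": 1, "2SA": 0}},
    {"student": "B", "subgroup": "general", "scores": {"1SA": 2, "1SS": 2, "2SA": 1}},
    {"student": "C", "subgroup": "ELL",     "scores": {"1SA": 1, "1SS": 0, "2SA": 0}},
]

def mean_by_objective(records):
    """Average score on each objective across all students."""
    objectives = records[0]["scores"].keys()
    return {obj: mean(r["scores"][obj] for r in records) for obj in objectives}

def mean_by_subgroup(records, objective):
    """Average score on one objective, disaggregated by subgroup."""
    groups = {}
    for r in records:
        groups.setdefault(r["subgroup"], []).append(r["scores"][objective])
    return {g: mean(v) for g, v in groups.items()}

print(mean_by_objective(records))        # class performance per objective
print(mean_by_subgroup(records, "1SA"))  # subgroup comparison on one objective
```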
The design of the pedagogical strategy is grounded in theory and prior research related to data
literacy and use, especially research on successful data use interventions for pre- and inservice
teachers (e.g., Poortman & Schildkamp, 2016; Reeves & Honig, 2015; van Geel et al., 2016). Along
these lines, the experience provides opportunities for preservice teachers to engage in a variety of
behaviors inherent in data-driven decision making (Mandinach & Gummer, 2016; Wieman, 2014).
The experience addresses primarily components two through four of Mandinach and Gummer’s
(2016) framework cited earlier: using data, transforming data into information, and transforming
information into a decision. Identifying problems and framing questions is addressed only in a
limited way, as the questions to be addressed by participants are prescribed within the protocol. The
fifth component—evaluating outcomes—is not addressed due to practical constraints on preservice
teachers’ ability to reengage students vis-à-vis content instruction and evaluate the results.
The design of the experience itself is also anchored in prior scholarship on data literacy and use
interventions for teachers. First, the instructor of the assessment course (the author of this article)
facilitates the experience. Second, the experience is highly structured in nature. Materials provided to
participants include step-by-step protocols that frame each of the 2 days (see the appendix). The
instructor introduces each day by reading a script that explains the arc of the experience that day to
orient participants to each day’s scope of work. Each day of the experience also proceeds in stages,
for which a particular amount of time is allocated (e.g., interpretation of overall class performance,
interpretation of individual student performance).
Third, the assessment data the preservice teachers work with are collected from K-8 students with
whom they are completing a clinical experience. Thus, the experience itself is classroom contextualized, and the participants are familiar with the students from whom data are collected. During the
experience, each preservice teacher works with his or her own traditional and performance assessments, scoring guides, and corresponding data gathered within their clinical placement (even though
they may collaborate during the experience). As each preservice teacher has data from two assessments to work with, participants complete each activity twice—once for their traditional assessment
and once for their performance assessment. Having each preservice teacher work with data from two assessments is intended to maximize opportunities to practice scoring, data analysis and interpretation, and data-driven decision-making processes within the experience, in turn promoting transfer of training.
Fourth, tools are used to scaffold participants and facilitate scoring, data analysis and interpretation, and data-based decision-making processes (as elaborated below). For example, an Excel
document is used to facilitate interpretation of item-level data through tabular and graphical
representation of those data. As another example, a protocol contained in a Word document serves
to structure and facilitate preservice teachers’ articulation of claims/decisions about students/instruction and the provision of evidence in support of those claims/decisions. The Excel and Word
documents are made available in electronic and hard copy format, in the event that the preservice
teachers prefer to work on paper (and in case of technology failure).
Fifth, to promote data-related collaboration, the preservice teachers are assembled in four- to five-person grade-level roundtables approximately corresponding to the grade level of a concomitant
clinical experience. Participants previously worked in these roundtables twice during the course
(once to design and construct assessments, and once to review and revise them). Relative to
collaboration, preservice teachers are orally provided with the following direction: “While these
activities are ultimately to be completed individually, you are welcome to ask questions to and assist
your peers during any part of today’s workshop.”
Figure 1. Exemplar qualitative analysis worksheet for incorrect counting and subtraction responses. The counting task was designed to
elicit evidence (partly) of student mastery of the following Common Core State Standard for mathematics: “Count to 100 by ones and by
tens” (K.CC.A.1). The subtraction item was intended to align partly with the following Common Core State Standard for mathematics:
“Fluently add and subtract within 1000 using strategies and algorithms based on place value, properties of operations, and/or the
relationship between addition and subtraction” (3.NBT.A.2). In this example, preservice teachers identified common counting and
subtraction errors among K-8 students. Example preservice teacher responses are contained within the white cells.
During this second day, the preservice teachers first engage in data analysis and interpretation
activities in accord with a step-by-step protocol (see the appendix), during which they take notes
concerning their tentative conclusions. Preservice teachers interpret measures of central tendency (i.e.,
mean, median, mode) and dispersion (i.e., range, standard deviation) for particular items or rubric
dimensions and total scores (all automatically calculated in the Excel document from the first day).
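As a rough illustration of the computations the Excel document is described as automating, the sketch below calculates the same central tendency and dispersion statistics for item and total scores. The score matrix is invented, and the actual workshop materials are Excel worksheets, not Python:

```python
# Sketch of the summary statistics described above: central tendency
# (mean, median, mode) and dispersion (range, standard deviation) for
# item-level and total scores. The score matrix is an invented example
# of constructed-response items scored for partial credit (0, 1, or 2).
from statistics import mean, median, mode, stdev

# Rows = students, columns = items on one assessment.
scores = [
    [2, 1, 0, 2],   # student 1
    [2, 2, 1, 1],   # student 2
    [1, 2, 0, 2],   # student 3
    [2, 1, 1, 0],   # student 4
]

def summarize(values):
    """Central tendency and dispersion for one item or for total scores."""
    return {
        "mean": mean(values),
        "median": median(values),
        "mode": mode(values),
        "range": max(values) - min(values),
        "sd": stdev(values),  # sample standard deviation
    }

item_summaries = [summarize(col) for col in zip(*scores)]   # per item
total_summary = summarize([sum(row) for row in scores])     # per student total
print(total_summary)
```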
Preservice teachers also interpret the performance of three individual students on each assessment
overall and on particular items or rubric dimensions. In selecting individual students on whom to focus,
the preservice teachers are encouraged to select students who were legitimately engaged during the
assessment process and who are members of a special population (e.g., English language learners,
students with disabilities), “average,” or otherwise interesting to the preservice teacher.
Altogether, for each of two assessments the preservice teachers practice data analysis, interpretation,
and use at different levels of student aggregation and content grain sizes. With respect to level of student
aggregation, the preservice teachers focus their efforts at the overall class and individual student levels.
In terms of grain size, the preservice teachers examine mastery of content overall, by content standard/
behavioral objective, and by item or rubric dimension. The sequence of these activities is highly
prescribed in the protocols (e.g., examination of overall class performance on the traditional assessment
overall, then overall class performance on parts of the traditional assessment, etc.). After the preservice
teachers engage in data analysis and interpretation tasks to yield tentative conclusions, they then repeat
the process to formally articulate claims/decisions about students and instruction.
Figure 2. Completed raw data worksheet for a mathematics traditional assessment designed to elicit evidence of student mastery
of the following Common Core State Standard for mathematics: “Use addition and subtraction within 100 to solve one and two
step word problems involving situations of adding to, taking from, putting together, taking apart, and comparing with unknowns
in all positions by using drawings and equations with a symbol for the unknown number to represent the problem” (2.OA.A.1). In
this example, the preservice teacher parsed out five separate behavioral objectives from this content standard related to (1)
solving one-step addition word problems within 100 (1SA), (2) solving two-step addition word problems within 100 (2SA), (3)
solving one-step subtraction word problems within 100 (1SS), (4) solving two-step subtraction word problems within 100 (2SS),
and (5) solving two-step addition and subtraction word problems (2SAS). In the bottom-most row of the raw data worksheet,
descriptors or key words are provided for each item to facilitate data analysis and interpretation. For example, “1SA” at the
bottom-left represents the first item (“1”) on the assessment, a two-step mathematics problem which involved first subtraction
(“S”) and then addition (“A”). The items represented within the raw data worksheet constitute constructed-response items scored
for partial credit (0, 1, or 2).
Figure 3. Example excerpt of a completed item score frequency distribution worksheet for an English/language arts performance
assessment. The particular rubric dimension represented in the figure is intended to align with the following Common Core State
Standard for English/language arts: “Distinguish among fact, opinion, and reasoned judgment in a text” (RH.6–8.8). Although the
worksheet’s title references an “item,” it is also used to represent scores derived from a particular dimension of a rubric.
errors or misconceptions; instruction (e.g., changes to the lesson plan, next steps); and feedback that
could be provided to students based on the data. For each claim/decision, preservice teachers need to
cite supporting evidence (qualitative, tabular, graphical, or statistical). The organization (sequencing)
of the data-based decision-making worksheet parallels that of the earlier second-day data analysis
and interpretation activities. As such, when completing the data-based decision making worksheet,
preservice teachers are finalizing claims/decisions they began to initially formulate when they earlier
engaged in data analysis and interpretation. Figures 4–6 contain excerpts from completed data-based
decision-making worksheets. It is notable that the data-based decision-making worksheet contains
scaffolds throughout, as shown in Figures 4 through 6. For example, in the section concerning
instructional decisions about next steps for current students, the worksheet states “(e.g., move on
to particular behavioral objectives using a particular method, reteach the behavioral objectives using
a particular method).”
Immediately prior to beginning work with the data-based decision-making worksheet, the pre-
service teachers receive direct instruction in articulating evidence-based claims about and providing
high-quality feedback to students. In terms of articulating evidence-based claims, preservice teachers
are given guidelines for stating claims and advancing evidence in support of those claims. For
example, it is explained to preservice teachers that claims should focus on students’ degree of
mastery of the content targeted by their assessments, rather than test scores (as test scores are
evidence of knowledge or skills).
With respect to the provision of evidence, preservice teachers are instructed to provide specific
and relevant evidence. In particular, preservice teachers are instructed to: indicate which specific
item(s) or rubric dimension(s) scores are the basis for each decision/claim about what students know
or can do, and explicitly state why the evidence is relevant to a specific decision/claim (e.g., because
the items required students to do X). They are also reminded to put scores in context (e.g., when
reporting a class summary score, the total number of possible points should be reported as well). In
terms of feedback, the preservice teachers are alerted to the characteristics of high-quality feedback,
such as attending to strengths and weaknesses.
Finally, before completing the data-based decision-making worksheet, preservice teachers are also
reminded to keep in mind assessment quality issues (e.g., item quality, assessment task quality,
scoring guide quality, and reliability, validity, and fairness issues) as well as students’ engagement
during the assessment process. The preservice teachers are also told that if the data do not support a
Figure 4. Example excerpt from a data-based decision-making worksheet in which a preservice teacher articulates data-based claims about an overall class’ content mastery strengths and
weaknesses and provides evidence for those claims. The example shown is associated with a traditional assessment in a social studies context. The particular traditional assessment was designed
to elicit evidence of student mastery of two behavioral objectives related to government: (1) “Students will be able to identify what government does at local, state, and national levels,” and (2)
“Students will be able to describe how local, state, and national governments interact.”
Figure 5. Example excerpt from a data-based decision-making worksheet in which a preservice teacher articulates data-based
claims about an individual (focus) student’s content mastery strengths and weaknesses and provides evidence for those claims.
The example shown is associated with a performance assessment designed to elicit evidence of mastery of the Common Core
State Standard for English/language arts: “Write opinion pieces in which they introduce the topic or book they are writing about,
state an opinion, supply reasons that support the opinion, use linking words (because, and, also) to connect opinion and reasons,
and provide a concluding statement or section” (W.2.1). In this example, the preservice teacher has deconstructed the standard
and assessed the following four behavioral objectives specifically: (1) “Students will write opinion pieces in which they introduce
the topic or book they are writing about,” (2) “Students will write opinion pieces in which they state an opinion,” (3) “Students will
write opinion pieces in which they supply reasons that support their opinion,” and (4) “Students will write a concluding statement
or section at the end of their opinion piece.”
Example feedback addressing weaknesses (excerpted in Figure 6): “Make sure to include a topic sentence at the very beginning to tell your audience what they can expect to read about in your paper.”
Figure 6. Example excerpt from a data-based decision-making worksheet in which a preservice teacher articulates evidence-based
feedback to provide to an individual (focus) student related to their content mastery strengths and weaknesses. The example
shown is associated with the same writing performance assessment, and student, reflected in Figure 5.
particular claim/decision, they are to write, “No data.” For example, if a student answered all
multiple-choice questions correctly, it is not possible to identify or provide feedback on his or her
weaknesses related to the content assessed.
Discussion
Despite prominent attention to teacher data literacy and use, the current status of teacher education
for data use has been deemed inadequate. The treatment of data use in preservice teacher education,
in particular, is often superficial, sometimes absent entirely, and focused more on assessment and
assessment literacy than data literacy (Greenberg & Walsh, 2012; Mandinach et al., 2015; Mann &
Simon, 2010, July). At the same time, little is empirically known about how to develop such expertise
and practices during teacher education (DeLuca & Bellara, 2013; Greenberg & Walsh, 2012; Reeves &
Chiang, 2017). Addressing this concern, the purpose of this article was to describe an assessment
course-based pedagogical strategy designed to support the development of preservice teachers’
assessment data literacy (i.e., ability to analyze, interpret, and make decisions based on assessment
data). The experience involved scoring traditional and performance classroom assessments, and
analyzing, interpreting, and making decisions based on the data. Grounded in theory and research
concerning data literacy and use and their development, the described experience was facilitated,
collaborative, and contextually relevant, and featured step-by-step protocols and technological tools.
Implementation Challenges
A number of notable challenges naturally occurred during implementation of the pedagogical
strategy. Some of these implementation challenges resulted in changes being made to the experience
in subsequent iterations. First, during the initial implementation of the experience it was learned that
some of the preservice teachers did not begin the experience with high-quality, ready-to-use scoring
guides. Not having a scoring guide precluded these preservice teachers from participating meaningfully in the first day of the experience. In response to this challenge, the instructor decided to
formally review and provide feedback on drafts of all preservice teachers’ scoring guides prior to the
experience.
Second, some preservice teachers did not bring the collected K-12 student assessment artifacts
(e.g., completed tests, student work) to the first session; these preservice teachers were unable to
participate fully in the first day of the experience and were instructed to assist their peers during the
session. In response to this challenge, the instructor implemented additional electronic reminders to
preservice teachers concerning what they must bring to each session.
Third, the author identified problems with the evidence-based claims/decisions articulated by
participants (e.g., specificity of the claims, relevance of the evidence). In response to this challenge,
the instructor provided direct instruction during the experience concerning the formulation of
evidence-based claims and included related scaffolds on the data-based decision making worksheet.
Another implementation challenge concerns the pace of the experience. Some preservice teachers
perceived that the experience moved too quickly whereas others perceived that it moved too slowly (Reeves & Honig, 2015). These mixed perceptions concerning the pace potentially arose from the fact that the
participants’ assessments varied with respect to their length and scoring complexity, and the
numbers of K-12 students to whom the assessments had been administered varied. As such, the
base process of scoring, analyzing, and interpreting the data might have taken longer for some
participants. A related challenge was that some participants moved through the protocol faster than expected, perhaps due to higher prior levels of experience using data (e.g., Athanases et al., 2013). In response to this challenge, participants were encouraged to assist and/or discuss conclusions with their peers if they finished before the group advanced to the next phase.
incorporate facets of the evaluating outcomes domain. Preservice teachers could potentially implement the instructional changes they proposed during the experience and then reexamine the status of the problem after their implementation.
Notes
1. Studies by McDougall et al. (2007), Carlson et al. (2011), and van Geel et al. (2016) were experimental (or quasi-
experimental) in nature, whereas others cited in this section used other research designs. Although the set of
design features identified here is implicated in prior research, more randomized studies are needed to
understand unequivocally the optimal design of data use capacity-building activities for teachers.
2. In some cases, the preservice teachers’ assessments have more scored components (e.g., items), or more
students were assessed, than there are columns or rows (respectively) in the raw data worksheets; in such
cases, the instructor modifies the raw data worksheet. The instructor also offers to check any preservice
teacher’s raw data worksheets to make sure they are formulated correctly and are analyzing the data correctly.
3. If a constructed-response item is scored using an analytic rubric, score the item with respect to each rubric
dimension separately and then aggregate each rubric dimension score to yield a total score for that item.
Notes on Contributor
Todd Reeves is an assistant professor of educational research and evaluation at Northern Illinois University. His
research focuses on teacher education relative to assessment and data use.
ORCID
Todd D. Reeves http://orcid.org/0000-0001-8912-1690
References
Athanases, S. Z., Bennett, L. H., & Wahleithner, J. M. (2013). Fostering data literacy through preservice teacher inquiry
in English language arts. Teacher Educator, 48(1), 8–28. doi:10.1080/08878730.2012.740151
Athanases, S. Z., Wahleithner, J. M., & Bennett, L. H. (2012). Learning to attend to culturally and linguistically diverse
learners through teacher inquiry in teacher education. Teachers College Record, 114(7), 1–50.
Carlson, D., Borman, G., & Robinson, M. (2011). A multistate district-level cluster randomized trial of the impact of
data-driven reform on reading and mathematics achievement. Educational Evaluation and Policy Analysis, 33(3),
378–398. doi:10.3102/0162373711412765
Coburn, C. E., & Turner, E. O. (2011). Research on data use: A framework and analysis. Measurement:
Interdisciplinary Research & Perspective, 9(4), 173–206.
Cramer, E. D., Little, M. E., & McHatton, P. A. (2014). Demystifying the data-based decision-making process. Action
in Teacher Education, 36(5/6), 389–400. doi:10.1080/01626620.2014.977690
Data Quality Campaign. (2014). Teacher data literacy: It’s about time. Washington, DC: Author.
DeLuca, C., & Bellara, A. (2013). The current state of assessment education: Aligning policy, standards, and teacher
education curriculum. Journal of Teacher Education, 64(4), 356–372. doi:10.1177/0022487113488144
Farley-Ripple, E. N., & Buttram, J. L. (2014). Developing collaborative data use through professional learning
communities: Early lessons from Delaware. Studies in Educational Evaluation, 42, 41–53. doi:10.1016/j.
stueduc.2013.09.006
Gearhart, M., & Osmundson, E. (2009). Assessment portfolios as opportunities for teacher learning. Educational
Assessment, 14(1), 1–24. doi:10.1080/10627190902816108
Greenberg, J., & Walsh, K. (2012). What teacher preparation programs teach about K-12 assessment: A review. New York, NY: National Council on Teacher Quality.
Hamilton, L., Halverson, R., Jackson, S., Mandinach, E., Supovitz, J., & Wayman, J. (2009). Using student achievement
data to support instructional decision making (NCEE 2009-4067). Washington, DC: U.S. Department of Education,
Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance.
Interstate Teacher Assessment and Support Consortium. (2011). InTASC model core teaching standards. Retrieved
from http://www.ccsso.org/Resources/Publications/InTASC_Model_Core_Teaching_Standards_2011_MS_Word_
Version.html
Jacobs, J., Gregory, A., Hoppey, D., & Yendol-Hoppey, D. (2009). Data literacy: Understanding teachers’ data use in a
context of accountability and response to intervention. Action in Teacher Education, 31(3), 41–55. doi:10.1080/
01626620.2009.10463527
Kerr, K. A., Marsh, J. A., Ikemoto, G. S., Darilek, H., & Barney, H. (2006). Strategies to promote data use for
instructional improvement: Actions, outcomes, and lessons from three urban districts. American Journal of
Education, 112(4), 496–520. doi:10.1086/505057
Mandinach, E., & Gummer, E. S. (2013a). A systematic view of implementing data literacy in educator preparation.
Educational Researcher, 42(1), 30–37. doi:10.3102/0013189X12459803
Mandinach, E. B., Friedman, J. M., & Gummer, E. S. (2015). How can schools of education help to build educators’
capacity to use data: A systemic view of the issue. Teachers College Record, 117(4), 1–50.
Mandinach, E. B., & Gummer, E. S. (2013b). Building educators’ data literacy: Differing perspectives. Journal of
Educational Research & Policy Studies, 13(2), 1–5.
Mandinach, E. B., & Gummer, E. S. (2016). What does it mean for teachers to be data literate: Laying out the skills,
knowledge, and dispositions. Teaching and Teacher Education, 60, 366–376. doi:10.1016/j.tate.2016.07.011
Mandinach, E. B., & Jimerson, J. B. (2016). Teachers learning how to use data: A synthesis of the issues and what is
known. Teaching and Teacher Education, 60, 452–457. doi:10.1016/j.tate.2016.07.009
Mann, B., & Simon, T. (2010, July). Teaching teachers to use data. Paper presented at the NCES STATS-DC 2010 Data
Conference, Bethesda, MD.
Marsh, J. A. (2012). Interventions promoting educators’ use of data: Research insights and gaps. Teachers College
Record, 114(11), 1–48.
Marsh, J. A., McCombs, J. S., & Martorell, F. (2010). How instructional coaches support data-driven decision making:
Policy implementation and effects in Florida middle schools. Educational Policy, 24(6), 872–907. doi:10.1177/
0895904809341467
McDougall, D., Saunders, W. M., & Goldenberg, C. (2007). Inside the black box of school reform: Explaining the how
and why of change at getting results schools. International Journal of Disability, Development and Education, 54(1),
51–89. doi:10.1080/10349120601149755
Means, B., Chen, E., DeBarger, A., & Padilla, C. (2011). Teachers’ ability to use data to inform instruction: Challenges
and supports. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy
Development.
Means, B., Padilla, C., DeBarger, A., & Bakia, M. (2009). Implementing data-informed decision making in schools:
Teacher access, supports and use. Washington, DC: U.S. Department of Education.
Nelson, T., & Slavit, D. (2008). Supported teacher collaborative inquiry. Teacher Education Quarterly, 35(1), 99–116.
Piro, J. S., & Hutchinson, C. J. (2014). Using a data chat to teach instructional interventions: Student perceptions of
data literacy in an assessment course. The New Educator, 10(2), 95–111. doi:10.1080/1547688X.2014.898479
Poortman, C. L., & Schildkamp, K. (2016). Solving student achievement problems with a data use intervention for
teachers. Teaching and Teacher Education, 60, 425–433. doi:10.1016/j.tate.2016.06.010
Reeves, T. D., & Chiang, J. L. (2017). Building pre-service teacher capacity to use external assessment data: An
intervention study. The Teacher Educator, 52(2), 155–172.
Reeves, T. D., & Honig, S. L. (2015). A classroom assessment data literacy intervention for pre-service teachers.
Teaching and Teacher Education, 50, 90–101.
Reeves, T. D., Summers, K. H., & Grove, E. (2016). Examining the landscape of teacher learning for data use: The case
of Illinois. Cogent Education, 3(1).
Roehrig, A. D., Duggar, S. W., Moats, L., Glover, M., & Mincey, B. (2008). When teachers work to use progress
monitoring data to inform literacy instruction: Identifying potential supports and challenges. Remedial and Special
Education, 29(6), 364–382. doi:10.1177/0741932507314021
Slavit, D., Nelson, T. H., & Deuel, A. (2013). Teacher groups’ conceptions and uses of student-learning data. Journal of Teacher Education, 64(1), 8–21.
Stiggins, R. J. (1987). Design and development of performance assessments. Educational Measurement: Issues and
Practice, 6(3), 33–42. doi:10.1111/emip.1987.6.issue-3
Stobaugh, R. R., Tassell, J. L., & Norman, A. D. (2010). Improving preservice teacher preparation through the teacher
work sample: Exploring assessment and analysis of student learning. Action in Teacher Education, 32(1), 39–53.
doi:10.1080/01626620.2010.10463541
van Geel, M., Keuning, T., Visscher, A. J., & Fox, J. P. (2016). Assessing the effects of a school-wide data-based
decision-making intervention on student achievement growth in primary schools. American Educational Research
Journal, 53(2), 360–394. doi:10.3102/0002831216637346
Wayman, J. C., & Jimerson, J. B. (2013). Teacher needs for data-related professional learning. Studies in Educational
Evaluation, 42, 25–34. doi:10.1016/j.stueduc.2013.11.001
Wieman, R. (2014). Using data to improve instruction: Different approaches for different goals. Action in Teacher
Education, 36(5/6), 546–558. doi:10.1080/01626620.2014.977755
Windschitl, M., Thompson, J., & Braaten, M. (2011). Ambitious pedagogy by novice teachers: Who benefits from tool-
supported collaborative inquiry into practice and why. Teachers College Record, 113(7), 1311–1360.
Young, V. M., & Kim, D. H. (2010). Using assessments for instructional improvement: A literature review. Education
Policy Analysis Archives, 18(19), 19. doi:10.14507/epaa.v18n19.2010
Zambo, D., & Zambo, R. (2007). Action research in an undergraduate teacher education program: What promises does
it hold? Action in Teacher Education, 28(4), 62–74. doi:10.1080/01626620.2007.10463430
Appendix
Step-by-Step Protocol (Day One)
Traditional Assessment Scoring (Individual) (60 min)
(1) Score responses to all selected-response items, using the answer key and appropriate point values (e.g., 0 = incorrect,
1 = correct) (30 min)
(a) Score selected-response items one student at a time
(b) Indicate selected-response item scores on actual student work (if possible)
(c) Indicate selected-response item scores on the Traditional Assessment Raw Data worksheet (either electronic
or hard copy)
Note: If your traditional assessment has no selected-response items, proceed to Step 2.
(2) Score responses to all constructed-response items, using the item-specific scoring guides and appropriate point values (e.g., 0 = incorrect, 1 = partially correct, 2 = correct) (30 min)
(a) Score constructed-response items one item at a time (i.e., score all students on the first constructed-response
item, then score all students on the second constructed-response item, and so on)
(b) For each item, score students in random order and anonymously (if possible)
(c) Indicate constructed-response item scores on actual student work (if possible)
(d) Indicate constructed-response item scores on the Traditional Assessment Raw Data worksheet (either
electronic or hard copy)
Note: Each student’s total score will be computed automatically in the electronic version of the
Traditional Assessment Raw Data worksheet.
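The automatic total-score computation mentioned in the note above can be sketched as a sum of each student’s item scores. This is a minimal illustration, not the actual worksheet’s formula; the student names and item scores are invented examples.

```python
# Hypothetical sketch of what the electronic Traditional Assessment Raw Data
# worksheet computes automatically: each student's total score is the sum of
# that student's item scores (selected-response items scored 0/1,
# constructed-response items scored 0/1/2). All names and scores are invented.
item_scores = {
    "Student A": [1, 0, 1, 2],
    "Student B": [1, 1, 0, 1],
}

total_scores = {student: sum(scores) for student, scores in item_scores.items()}
print(total_scores)  # {'Student A': 4, 'Student B': 3}
```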
Qualitative Analysis of Incorrect Constructed-Responses (Collaborative) (30 min)
(3) Identify Roundtable Group members whose traditional assessments included constructed-response items
(4) Identify constructed-response items answered incorrectly by at least two students
(5) Select the two constructed-response items answered incorrectly by the largest numbers of students
(6) For the first constructed-response item
(a) Collaboratively examine incorrect responses to identify patterns (e.g., misconceptions, errors, partially applied strategies), if any
(b) Indicate patterns and exemplar responses on the Qualitative Analysis worksheet (Item 1 section)
(7) Repeat Step 6 for the second constructed-response item, indicating patterns and exemplar responses on the Qualitative Analysis worksheet (Item 2 section)
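The item-selection logic in Steps 4 and 5 (keep items missed by at least two students, then take the two missed by the most students) can be sketched as a simple tally. The item labels and the record of who missed what are invented examples.

```python
from collections import Counter

# Hypothetical record of which constructed-response items each student
# answered incorrectly (student names and item labels are invented).
incorrect = {
    "Student A": ["CR1", "CR3"],
    "Student B": ["CR1"],
    "Student C": ["CR1", "CR3"],
    "Student D": ["CR2"],
}

# Count how many students missed each item (Step 4 keeps items missed by
# at least two students; Step 5 selects the two most-missed items).
miss_counts = Counter(item for items in incorrect.values() for item in items)
candidates = {item: n for item, n in miss_counts.items() if n >= 2}
focus_items = [item for item, _ in Counter(candidates).most_common(2)]
print(focus_items)  # ['CR1', 'CR3']
```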
Performance Assessment Scoring (Individual)
(8) Score all performance assessments, using the scoring guide (e.g., checklist, rubric) and appropriate point values (e.g., 0, 1, 2, and 3)
(a) Score students in random order and anonymously (if possible)
(b) Indicate performance assessment scores (one score per rubric dimension) on actual student work (if
possible)
(c) Indicate performance assessment scores (one score per rubric dimension) on the Performance Assessment
Raw Data worksheet (electronic or hard copy)
Note: Each student’s total score will be computed automatically in the electronic version of the
Performance Assessment Raw Data worksheet.
Data Entry (Individual) (10 min)
(9) Enter numeric scores for each student for each traditional assessment item in the Traditional Assessment Raw Data
worksheet (electronic copy)
(10) Enter numeric scores for each student for each performance assessment rubric dimension in the Performance
Assessment Raw Data worksheet (electronic copy)
(11) Highlight item columns related to each behavioral objective in different colors in the Traditional Assessment Raw
Data worksheet (e.g., items related to behavioral objective 1 in blue, items related to behavioral objective 2 in red)
to facilitate later analysis and interpretation of the data by behavioral objective
(12) For both the traditional and performance assessment, enter in the Possible Points row for each item/task
component the maximum number of points students can receive, to facilitate later interpretation of item statistics
(13) For both the traditional and performance assessment, enter in the Item Descriptor row for each item/task
component a descriptor or key word, to facilitate data analysis and interpretation
(14) Put an X in the Total Score column for any blank rows in the Traditional Assessment Raw Data worksheet
(electronic copy)
(15) Put an X in the Total Score column for any blank rows in the Performance Assessment Raw Data worksheet
(electronic copy)
(16) Bring completed Traditional Assessment Raw Data and Performance Assessment Raw Data worksheets to next
week’s workshop
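The bookkeeping in Steps 14 and 15, marking an X in the Total Score column for students with no recorded scores, can be sketched as follows. The worksheet layout, student names, and scores are invented examples.

```python
# Hypothetical sketch of Steps 14-15: rows for students without recorded
# scores get an "X" in the Total Score column so they are visibly excluded
# from later analysis. Names and scores are invented.
raw_data = {
    "Student A": [1, 0, 2],
    "Student B": [],          # absent: no scores recorded
    "Student C": [1, 1, 1],
}

total_column = {
    student: sum(scores) if scores else "X"
    for student, scores in raw_data.items()
}
print(total_column)  # {'Student A': 3, 'Student B': 'X', 'Student C': 3}
```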
Step-by-Step Protocol (Day Two)
Overall Class: Traditional Assessment Total Score Analysis and Interpretation (10 min)
(1) Verify that the Raw Data worksheet’s total score column contains an X for any empty row
(2) Interpret the traditional assessment total score mean, median, and mode (central tendency), using the Traditional
Assessment Raw Data worksheet
(3) Interpret the range and standard deviation of traditional assessment scores (dispersion), using the Traditional
Assessment Raw Data worksheet
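The central tendency and dispersion statistics named in Steps 2 and 3 can be sketched with Python’s standard `statistics` module. The total scores below are invented; the worksheet may use a different (e.g., population) standard deviation formula.

```python
import statistics

# Hypothetical total scores for one class (values invented). Steps 2-3 ask
# for central tendency (mean, median, mode) and dispersion (range, SD).
totals = [4, 5, 5, 7, 9]

central = {
    "mean": statistics.mean(totals),
    "median": statistics.median(totals),
    "mode": statistics.mode(totals),
}
dispersion = {
    "range": max(totals) - min(totals),
    "sd": statistics.stdev(totals),  # sample standard deviation
}
print(central)     # {'mean': 6, 'median': 5, 'mode': 5}
print(dispersion)  # {'range': 5, 'sd': 2.0}
```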
Overall Class: Traditional Assessment Item-Level Analysis and Interpretation (15 min)
(8) Examine patterns in student performance on items related to particular behavioral objectives/content standards, or
knowledge/skills
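One way to surface the patterns Step 8 asks for is to express each item’s average score as a share of its possible points and group items by behavioral objective. The item labels, objectives, possible points, and scores below are invented examples.

```python
# Hypothetical sketch of Step 8: average each item's score as a share of its
# possible points, then group items by behavioral objective to spot patterns.
# Item labels, objectives, possible points, and scores are invented.
scores = {            # item -> list of student scores
    "item1": [1, 1, 0, 1],
    "item2": [1, 0, 0, 1],
    "item3": [2, 1, 0, 2],
}
possible = {"item1": 1, "item2": 1, "item3": 2}
objective = {"item1": "Obj 1", "item2": "Obj 1", "item3": "Obj 2"}

by_objective = {}
for item, s in scores.items():
    pct = sum(s) / (possible[item] * len(s))   # share of points earned
    by_objective.setdefault(objective[item], []).append((item, pct))
print(by_objective)
```

Low shares clustered under one objective would suggest a class-wide weakness on that objective rather than on an isolated item.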
(9) Verify that the Raw Data worksheet’s total score column contains an X for any empty row
(10) Select three focus students (e.g., struggling reader) who completed the traditional assessment
(11) For each focus student, interpret his/her total score relative to the mean total score for the class
(12) For each focus student, interpret his/her total score in light of the dispersion of total scores
(13) Select three focus students (e.g., struggling reader) who completed the traditional assessment
(14) Select two items
(15) For each focus student, interpret his/her item score relative to the class mean score on each of the two items
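The comparison in Steps 13 through 15, each focus student’s score on two selected items against the class mean on those items, can be sketched as below. Student names, item labels, and scores are all invented examples.

```python
# Hypothetical sketch of Steps 13-15: compare each focus student's item score
# to the class mean on two selected items. All names and scores are invented.
item_scores = {
    "item2": {"Ana": 0, "Ben": 1, "Cam": 1, "Dee": 0},
    "item5": {"Ana": 1, "Ben": 2, "Cam": 0, "Dee": 1},
}
focus_students = ["Ana", "Cam"]

comparison = {}  # (item, student) -> difference from class mean
for item, scores in item_scores.items():
    class_mean = sum(scores.values()) / len(scores)
    for student in focus_students:
        comparison[(item, student)] = scores[student] - class_mean
        print(f"{item}: {student} scored {scores[student]:d} "
              f"(class mean {class_mean:.2f})")
```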
Overall Class: Performance Assessment Total Score Analysis and Interpretation (10 min)
(16) Verify that the Raw Data worksheet’s total score column contains an X for any empty row
(17) Interpret the performance assessment total score mean, median, and mode (central tendency), using the
Performance Assessment Raw Data worksheet
(18) Interpret the range and standard deviation of performance assessment scores (dispersion), using the Performance Assessment Raw Data worksheet
Overall Class: Performance Assessment Item-Level Analysis and Interpretation (15 min)
(24) Examine patterns in student performance for rubric dimensions related to (sets of) particular behavioral
objectives/content standards, or knowledge/skills
(25) Verify that the Raw Data worksheet’s total score column contains an X for any empty row
(26) Select three focus students (e.g., struggling reader) who completed the performance assessment
(27) For each focus student, interpret his/her total score relative to the mean total score for the class
(28) For each focus student, interpret his/her total score in light of the dispersion of total scores
(29) Select three focus students (e.g., struggling reader) who completed the performance assessment
(30) Select two items (rubric dimensions)
(31) For each focus student, interpret his/her item (rubric dimension) score relative to the class mean score on each of
the two items (rubric dimensions)
Decisions
Complete the following steps and record your decisions, and evidence for those decisions, on the
Data-Based Decision Making worksheet. If the data do not support a particular decision, write “No
data.”
Overall Class: Traditional Assessment
(32) Make an overall decision about class performance on the traditional assessment
(33) Indicate any strengths and weaknesses for the whole class (e.g., high or low performance on particular items, or
items related to particular behavioral objectives/content standards, knowledge/skills)
(34) Indicate any common errors, misconceptions, partial understandings, or incompletely applied strategies
(35) Cite evidence for the above decisions (qualitative, tabular, graphical, statistical)
(36) Indicate any strengths and weaknesses on the traditional assessment (high or low performance on particular
items, or items related to particular behavioral objectives/content standards, knowledge/skills) for each of three
focus students
(37) Indicate any common errors, misconceptions, partial understandings, or incompletely applied strategies exhibited
by each of three focus students
(38) Cite evidence for the above decisions (scores received and their meaning, qualitative, tabular, graphical, statistical)
Instruction: Traditional Assessment Content
(39) Based on earlier decisions/claims, indicate specific changes to the lesson (e.g., activities, representations,
resources/materials, teaching strategies), if any
(40) Based on earlier decisions/claims, indicate next steps for instruction for the whole class, if any
(41) Based on earlier decisions/claims, indicate next steps for instruction for three focus students, if any
(42) Justify any instructional decisions on the basis of theory/research
(43) Based on earlier decisions/claims, indicate what specific feedback you would provide about the strengths and
weaknesses of the overall class (high or low performance on particular items, or items related to particular
behavioral objectives/content standards, knowledge/skills)
(44) Based on earlier decisions/claims, indicate what specific feedback you would provide about the strengths and
weaknesses of three focus students (high or low performance on particular items, or items related to particular
behavioral objectives/content standards, knowledge/skills)
Overall Class: Performance Assessment
(45) Make an overall decision about class performance on the performance assessment
(46) Indicate any strengths and weaknesses for the whole class (e.g., high or low performance on particular rubric dimensions, or dimensions related to particular behavioral objectives/content standards, knowledge/skills)
(47) Cite evidence for the above decisions (qualitative, tabular, graphical, statistical)
(48) Indicate any strengths and weaknesses on the performance assessment (high or low performance on particular
rubric dimensions, dimensions related to particular behavioral objectives/content standards, knowledge/skills) for
each of three focus students
(49) Cite evidence for the above decisions (scores received and their meaning)
Instruction: Performance Assessment Content
(50) Based on earlier decisions/claims, indicate specific changes to the lesson (e.g., activities, representations, resources/materials, teaching strategies), if any
(51) Based on earlier decisions/claims, indicate next steps for instruction for the whole class, if any
(52) Based on earlier decisions/claims, indicate next steps for instruction for three focus students, if any
(53) Justify any instructional decisions on the basis of theory/research
(54) Based on performance assessment data, indicate what specific feedback you would provide about the strengths
and weaknesses of the overall class (high or low performance on particular rubric dimensions, or dimensions
related to particular behavioral objectives/content standards, knowledge/skills)
(55) Based on performance assessment data, indicate what specific feedback you would provide about the strengths
and weaknesses of three focus students (high or low performance on particular rubric dimensions, or dimensions
related to particular behavioral objectives/content standards, knowledge/skills)