
Instructional Tools in Educational Measurement and Statistics (ITEMS) for School Personnel: Evaluation of Three Web-Based Training Modules

Rebecca Zwick, University of California, Santa Barbara,
Jeffrey C. Sklar, California State Polytechnic University, San Luis Obispo,
Graham Wakefield, University of California, Santa Barbara,
Cris Hamilton, Independent Consultant,
Alex Norman, University of California, Santa Barbara, and
Douglas Folsom, University of California, Santa Barbara
In the current No Child Left Behind era, K-12 teachers and principals are expected to have a sophisticated understanding of standardized test results, use them to improve instruction, and communicate them to others. The goal of our project, funded by the National Science Foundation, was to develop and evaluate three Web-based instructional modules in educational measurement and statistics to help school personnel acquire the “assessment literacy” required for these roles. Our first module, “What’s the Score?” was administered in 2005 to 113 educators who also completed an assessment literacy quiz. Viewing the module had a small but statistically significant positive effect on quiz scores. Our second module, “What Test Scores Do and Don’t Tell Us,” administered in 2006 to 104 educators, was even more effective, primarily among teacher education students. In evaluating our third module, “What’s the Difference?” we were able to recruit only 33 participants. Although those who saw the module before taking the quiz outperformed those who did not, results were not statistically significant. Now that the research phase is complete, all ITEMS instructional materials are freely available on our Website.

Keywords: assessment literacy, educational measurement, educational statistics, No Child Left Behind, psychometrics, standardized tests

Rebecca Zwick is a Professor, Department of Education, University of California, Santa Barbara, Santa Barbara, CA 93106; rzwick@education.ucsb.edu. Jeffrey C. Sklar is an Instructor, Department of CSM-Statistics, California State Polytechnic University, San Luis Obispo, CA 93407. Graham Wakefield is a PhD Candidate, Department of Media Arts and Technology, University of California, Santa Barbara. Cris Hamilton is an independent consultant. At the time of writing, Alex Norman and Douglas Folsom were both graduate students in the Department of Media Arts and Technology, University of California, Santa Barbara.

According to a 2002 report from the Center on Education Policy, “[s]chool leaders . . . must be adept at using data to improve teaching and learning. Too often, test data is used only for accountability, rather than to diagnose the needs of individual students and improve their education. Both principals and teachers must be trained to use test data to modify daily lesson plans and to tailor assistance for individual children. Much more test data will soon be available, because the new federal requirements require states to produce descriptive and diagnostic information, as well as individual test results” (Jennings, 2002). As the report predicted, teachers and school administrators in the current No Child Left Behind (NCLB) era are expected to have a sophisticated understanding of test results, to use them to make data-based decisions about classroom instruction, and to communicate them to others.

Although there is little formal research in this area, it is widely recognized that many school personnel have not had the opportunity to acquire the “assessment literacy” that is required for these roles. Little has

Copyright © 2008 by the National Council on Measurement in Education. Educational Measurement: Issues and Practice
changed, in fact, since the publication in 1990 of the Standards for Teacher Competence in Educational Assessment of Students (American Federation of Teachers, National Council on Measurement in Education [NCME], & National Education Association, 1990), which expressed concern about “the inadequacy with which teachers are prepared for assessing the educational progress of their students.” The document recommended that “assessment training . . . be widely available to practicing teachers through staff development programs at the district and building levels” (p. 1).1

Further evidence regarding the need for training in this area comes from two additional efforts undertaken by professional organizations during the 1990s. The Joint Committee on Competency Standards in Student Assessment for Educational Administrators, sponsored by the American Association of School Administrators, National Association of Elementary School Principals, National Association of Secondary School Principals, and NCME, surveyed 1,700 administrators from the first three of these sponsoring organizations about assessment-related skills. The three skills rated as most needed by educational administrators (out of a list of 37) were knowing the terminology associated with standardized tests, knowing the purposes of different kinds of testing, and understanding the linkage between curriculum content and various kinds of tests (Impara, 1993, p. 20). In 1995, NCME published the Code of Professional Responsibilities in Educational Measurement. This document, which is intended to apply to professionals involved in all aspects of educational assessment, including those who administer assessments and use assessment results, includes the responsibility to “maintain and improve . . . professional competence in educational assessment” (NCME, 1995, p. 1).

According to Stiggins (2002), however, “only a few states explicitly require competence in assessment as a condition for being licensed to teach. No licensing examination now in place at the state or federal level verifies competence in assessment. Since teacher-preparation programs are designed to prepare candidates for certification under these terms, the vast majority of programs fail to provide the assessment literacy required to prepare teachers to face emerging classroom-assessment challenges . . . Furthermore, lest we believe that teachers can turn to their principals for help, almost no states require competence in assessment for licensure as a principal or school administrator at any level. As a result, assessment training is almost nonexistent in administrator-training programs.”

The continuing need for professional development in this area was confirmed by two recent reports. A report on the licensing of school principals in the 50 states, sponsored by the Wallace Foundation (Adams & Copland, 2005), includes a discussion of the necessary knowledge and skills for principals, according to state licensing requirements. “Totally missing from state licensing frameworks,” according to the authors, “was any attention to the meaning and use of learning assessments. . .” In another recent study, by the National Board on Educational Testing and Public Policy at the Lynch School of Education at Boston College, researchers surveyed a nationally representative sample of teachers to ascertain their attitudes about state-mandated testing programs (Pedulla et al., 2003). When asked about the adequacy of professional development in the area of standardized test interpretation, almost a third of the 4,200 responding teachers reported that professional development in this area was inadequate or very inadequate (Pedulla et al., 2003). Further documentation of the “widespread deficits in assessment skills evidenced by practicing teachers” is described by Lukin, Bandalos, Eckhout, and Mickelson (2004, pp. 26–27).

As part of the preliminary work conducted in preparation for the ITEMS project, a survey assessing respondents’ understanding of educational measurement and statistics was developed and field-tested by two doctoral students under the first author’s supervision (Brown & Daw, 2004). The survey, which consisted mainly of multiple-choice questions, was completed by 24 University of California, Santa Barbara (UCSB) graduate students who were pursuing an M.Ed. and a multiple-subjects teaching credential. A revised version of the survey was subsequently offered over the Internet to students enrolled in a graduate education course at California State University, Northridge (CSUN). The 10 CSUN students who responded to the Web-based version of the survey were experienced teachers or administrators in K-12 schools. Results of the two survey administrations to teachers and future teachers suggest the existence of substantial gaps in the respondents’ knowledge of educational measurement and statistics. For example, only 10 of the 24 UCSB respondents were able to choose the correct definition of measurement error, and only 10 knew that a Z-score represents the distance from the mean in standard deviation units. Nearly half mistakenly thought the reliability of a test is “the correlation between student grades and student scores on the test.” When told that “20 students averaged 90 on an exam, and 30 students averaged 40,” only one-half of the CSUN group were able to calculate the combined average correctly, and only one in 10 chose the correct definition of measurement error.

As Popham (2006a, p. xiii) notes, “[t]oday’s educational leaders need to understand the basics of assessment or they are likely to become yesterday’s educational leaders . . .” How can the required level of understanding be attained? Ideally, teacher and principal certification programs will eventually be modified to increase course content in educational measurement and statistics. Substantial changes of this kind are likely to be slow, however, and may occur only if licensing requirements are first modified. To help K-12 schools respond rapidly to the training gap in these areas, the Instructional Tools in Educational Measurement and Statistics (ITEMS) for School Personnel project has created and evaluated a series of three instructional modules in educational measurement and statistics, to be available via the Web and also as CDs and DVDs.

The goal of the project was to provide immediate opportunities for teachers and administrators to increase their assessment literacy—more specifically, their understanding of the psychometric and statistical principles needed for the correct interpretation of standardized test scores. The ITEMS materials are intended to help prepare school personnel to use test results to optimize instructional decisions and to pinpoint schools, classes, or individuals that require additional instruction or resources, as well as to explain test results to students, parents, the school board, the press, and the general community. The provision of this training in a convenient and economical way is intended to assist schools with the successful implementation and interpretation of assessments.
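Two of the survey items described above have short worked solutions. As an illustration (the combined-average numbers are the ones quoted from the survey; the Z-score numbers are invented for this sketch), the combined average is a weighted mean, and a Z-score is a distance from the mean in standard deviation units:

```python
# Combined-average survey item: "20 students averaged 90 on an exam,
# and 30 students averaged 40." Each group mean is weighted by its size.
n1, mean1 = 20, 90
n2, mean2 = 30, 40
combined = (n1 * mean1 + n2 * mean2) / (n1 + n2)
print(combined)  # 60.0 -- not the unweighted midpoint, 65

# Z-score item: a Z-score is the distance of a score from the mean,
# expressed in standard deviation units.
def z_score(score, mean, sd):
    return (score - mean) / sd

print(z_score(70, mean=60, sd=5))  # 2.0 (two SDs above the mean)
```

The common wrong answer on the first item, 65, comes from averaging the two group means without weighting by group size.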

Alternative forms of professional development, such as intensive workshops or university courses, are much more time-consuming and expensive, and are unlikely to be funded in an era of limited school budgets.

Previous Statistics Education and Assessment Literacy Research

The ITEMS work is related to previous research in statistics education and assessment literacy. The goal of statistics education research is to promote effective techniques, methods, and products for teaching statistics, as well as to understand how individuals reason about statistical concepts. By contrast, the work described here as assessment literacy research focuses on the ways teachers use assessments to make instructional decisions.

Statistics Education

A prominent statistics education research effort is that of Joan Garfield and Robert delMas of the University of Minnesota and Beth Chance of California Polytechnic State University. In the current ARTIST (Assessment Resource Tools for Improving Statistical Thinking) project (https://app.gen.umn.edu/artist/), these researchers are working to develop effective assessments of statistics knowledge, which are made available online for teachers of first courses in statistics (Garfield, delMas, & Chance, 2003; see https://app.gen.umn.edu/artist/publications.html for a publications list). In a previous project, delMas, Garfield, and Chance (1999) investigated the use of computer simulations to improve the statistical reasoning of students in college-level introductory statistics courses. These projects built on an earlier program of research, Project Chance, headed by J. Laurie Snell of Dartmouth, the goal of which was to help students think critically about media reports that use probability and statistics (Snell & Finn, 1992).

The Statistics Education Research Group (SERG) (www.umass.edu/srri/serg/) at the University of Massachusetts, Amherst works on a variety of projects to improve instruction in statistics courses. Founded in the 1970s, the group originally investigated how individuals reason about statistical concepts before receiving any formal training, and then used results to improve K-12 and college-level statistics instruction. Their current focus is on younger students. Several data analysis software tools have emerged from projects conducted at SERG, including DataScope, ProbSim (Konold & Miller, 1994), and Tinkerplots™ (Konold & Miller, 2004). These products are designed for teaching concepts in probability and sampling using simulations. They have been used extensively in the classroom to teach concepts to students, and have also been used to investigate teachers’ approaches to analyzing data (e.g., see Hammerman & Rubin, 2002, 2004).

While much of the statistics education literature is focused ultimately on student learning, recent studies have also examined teachers’ statistical thinking and reasoning. Hammerman and Rubin (2002, 2004), Makar and Confrey (2002), and Confrey, Makar, and Kazak (2004) have examined ways that teachers reason about statistical concepts (in particular, variation in data) and how using computer graphics tools can facilitate their understanding.

Hammerman and Rubin (2002, 2004) studied the strategies that teachers used to comprehend variability in data and examined how teachers explored and analyzed data using the Tinkerplots™ software. After learning to “bin” and visualize data using the graphical tools, teachers were less likely to use the sample average as a single overall representation of the data.

Makar and Confrey (2002) examined mathematics teachers’ statistical thinking about high-stakes test results at a low-performing high school in Texas. Using the computer-based statistical learning tool Fathom™ (Finzer, 2001), teachers created graphs to compare distributions of test scores for male and female students to determine if the differences in the centers between the two groups of data were significant. However, the research showed that, after attending a workshop in data analysis, teachers still used intuition to determine whether differences in centers of the distributions of test scores were significant, ignoring concepts like variability and sampling distributions in their reasoning.

Confrey et al. (2004) conducted additional studies to explore the statistical understanding of high-stakes test data by teachers. The purpose was to determine if a professional development course in the area of testing and statistics could improve teacher reasoning about statistical concepts as it relates to test data. The results indicated that participants made significant gains in their understanding of statistical concepts; however, it was not determined if teachers made significant gains in their understanding of test results per se.

Assessment Literacy

Since the enactment of NCLB, an increasing number of schools and districts are collecting massive quantities of student data, including standardized test results, using software management products. (See Wayman, Stringfield, and Yakimowski (2004) for a discussion of various software products designed for storing, organizing, and analyzing student, school, and district level data.) Administrators and teachers are expected to use these data to make educational and instructional decisions. Boudett, City, and Murnane (2005) outline eight critical steps to effectively use assessments to improve instruction and raise student achievement. According to Boudett et al. (2005), the second step is to “build assessment literacy,” i.e., develop a working knowledge of common concepts related to test score results, and acquire appropriate skills to interpret test score results. The text by Popham (2006a) is intended to improve the assessment literacy of educational leaders, and another recent contribution by Popham (2006b) consists of 15 booklets on assessment topics that are intended to help teachers increase their assessment knowledge.

Other resources for teachers and administrators are available, although in many cases their effectiveness has not been formally assessed. Test publishing companies offer products for professional development in assessment literacy. For example, Educational Testing Service (ETS) and CTB McGraw-Hill provide written guides and workshops for professional development in which teachers can learn to use data for instructional improvement. The Pathwise Evidence-Centered Teaching and Assessment Workshops, offered through ETS, are designed for language arts and mathematics teachers as training sessions in the principles of formative assessment and linking of assessment to instruction. The Assessment Training Institute, founded by Rick Stiggins in 1993 and acquired by ETS in 2006, provides classroom assessment training needed to support educators. Books, videos, DVDs, and seminars are available through the institute.
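The kind of data use described here, in its simplest form, is a per-subgroup count and mean computed from student records. As a toy sketch (the records, subgroup labels, and scores below are invented, not drawn from any real data system):

```python
from collections import defaultdict
from statistics import mean

# Invented records: (subgroup label, scale score). A real district data
# system would pull these from its student information database.
scores = [
    ("ELL", 315), ("ELL", 330), ("ELL", 325),
    ("non-ELL", 360), ("non-ELL", 345), ("non-ELL", 352),
]

# Disaggregate: collect scores by subgroup, then compare group means.
by_group = defaultdict(list)
for group, score in scores:
    by_group[group].append(score)

for group in sorted(by_group):
    vals = by_group[group]
    print(f"{group}: n = {len(vals)}, mean = {mean(vals):.1f}")
```

Reporting the group size alongside each mean matters: as the modules emphasize, means based on few students are far less precise than means based on many.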



Project Overview

The ITEMS project differs from much of the previous work in at least three respects. First, the ultimate target audience is school personnel themselves, rather than K-12 or college students. Also, rather than focusing on either statistical reasoning (e.g., Chance, 2002 or Mills, 2002) or on classroom testing (like Stiggins & Chappuis, 2005), the project is intended to help teachers and principals become educated consumers with respect to a broad range of topics in educational measurement and statistics. Finally, while the effectiveness of many assessment products currently available has not been evaluated, a major research component of the ITEMS project has been dedicated to investigating the effectiveness of the modules.

The project work was organized so that one module was produced and evaluated in each of the three years of the project. The topics of the three modules are as follows:

Module 1: Test Scores and Score Distributions
Module 2: Imprecision in Individual and Average Test Scores
Module 3: Interpretation of Test Score Differences and Trends

A brief outline of the content of the three modules appears in Table 1. For each module, the project work consisted of the following four phases, which took place over a period of roughly one year:

Development phase. The module is conceptualized and produced.

Research phase. Data are collected and analyzed to allow formal evaluation of the effectiveness of a Web-based version of the module.

Program evaluation phase. The project evaluator collects data from participants to determine their views on the usefulness and effectiveness of the module.

Dissemination phase. The module is posted on our project Website as a freely available resource for educators, along with supplementary materials. CD and DVD versions of the module are mailed to those who request them.

Table 1. Content of Instructional Modules

1. “What’s the Score?” (2005): Test Scores and Score Distributions
• Mean, median, mode
• Symmetric vs. skewed distributions
• Range, standard deviation
• Percentage above a cut-point (NCLB)
• Raw scores
• Percentile ranks
• Grade-equivalents
• Norm-referenced and criterion-referenced score interpretation

2. “What Test Scores Do and Don’t Tell Us” (2006): Imprecision in Individual and Average Test Scores
• Measurement error and test reliability
• Standard error of measurement
• Confidence bands for test scores
• Effect of sample size on precision of test score means
• Precision of individual scores versus precision of means
• Test bias

3. “What’s the Difference?” (2007): Interpretation of Test Score Differences and Trends
• Importance of disaggregating data for key subgroups
• Effect of population changes on interpretation of trends
• Effect of number of students
• Effect of changes in tests and test forms; test equating

Principles of Module Development

The instructional modules rely on a case-based approach to learning (Lundeberg, Levin, & Harrington, 1999) in which realistic test score reports are used as a basis for explaining concepts and terminology. In each module, features of these reports are highlighted and explained. In computer-based learning environments, it has been found that individuals who are presented with material via an animated agent demonstrate better learning outcomes than those who are presented with the material via on-screen text and static graphs (Moreno, Mayer, Spires, & Lester, 2001). Therefore, the modules make liberal use of graphics, including computer animation, to present concepts.

For example, Figure 1 shows a screen shot from our second module that illustrates the concept of measurement error. The viewer is asked to imagine that Edgar, the child pictured in the display, takes a test several times, magically forgetting the content of the test between administrations. On the first occasion, he misreads a question; on the second, he guesses correctly; and on the third, he is accidentally given extra time. For these reasons, he gets slightly different scores on each imaginary test administration.

FIGURE 1. Illustration of measurement error from Module 2.

In designing the modules, we have incorporated other findings from the cognitive psychology literature on learning through technology, as reflected in the following principles:

Multimedia principle
Concepts are presented using both words and pictures (static or dynamic graphics). Research has indicated that “. . . human understanding occurs when learners are able to mentally integrate visual and verbal representations” (Mayer, 2001, p. 5).

Contiguity principle
Auditory and visual materials on the same topic are, whenever possible, presented simultaneously, rather than successively, and words and corresponding pictures appear on the screen together rather than separately. Materials that incorporate these principles of temporal and spatial contiguity have been shown to enhance learning (Mayer, 2001, pp. 81–112).

Modality principle
Verbal information that accompanies graphical presentations is, in general, presented in spoken form rather than as on-screen text. Research has indicated that spoken verbal material is preferable in this context because, unlike displayed text, it “does not compete with pictures for cognitive resources in the visual channel” (Mayer, 2001, p. 140).

Coherence principle
Efforts have been made to remove extraneous words, pictures, and sounds from the instructional presentations. Irrelevant material, however interesting, has been shown to interfere with the learning process (Mayer, 2001, pp. 113–133).

Prior knowledge principle
The modules are designed to “use words and pictures that help users invoke and connect their prior knowledge” to the content of the materials (Narayanan & Hegarty, 2002, p. 310). For example, while participants may be unfamiliar with the term “sampling error,” most will be familiar with statements like, “The results of this survey are accurate within plus or minus five percentage points.” Analogies and metaphors have also been shown to enhance mathematical learning (English, 1997).

Conversational style
A conversational rather than a formal style is used in the modules; this has been shown to enhance learning, perhaps because “learners may be more willing to accept that they are in a human-to-human conversation including all the conventions of trying hard to understand what the other person is saying” (Mayer, 2003, p. 135). In keeping with this principle, formulas are not used in the instructional modules. (The relevant formulas are given in the supplementary handbooks for those who are interested in learning them.)

Project Staffing

The project staff who created the three instructional modules described in this article consisted of six individuals (some of whom worked on only one or two modules): the principal investigator (first author), who is an education professor with over 25 years of research experience in educational measurement and statistics; a senior researcher (second author), who is a statistics professor with experience in education research; three technical specialists (third, fifth, and sixth authors), who were graduate students in media arts and technology with experience in computer programming, graphics, and animation; and a professional animator (fourth author). The project staff also included a project administrator, who was responsible for much of the day-to-day management of the project, and a project evaluator, who conducted a semi-independent review of the project’s effectiveness.

The project staff met three times a year with an advisory committee consisting of public school teachers and administrators and university experts in human-computer interaction, multimedia production, multimedia learning, cognitive psychology, teacher education, educational technology, theoretical statistics, and math and statistics education.

Principles of Program Evaluation

Generally stated, the questions to be addressed by the program evaluation procedures for the ITEMS project are consistent with those considered by Owen (2007, p. 48) under the rubric of impact evaluation: “Have the stated goals of the program been achieved? Have the needs of those served by the program been achieved? . . . Is the program more effective for some participants than for others?” In our case, the stated goal of the project was to present psychometric and statistical information about the interpretation of standardized test scores that



was comprehensible and applicable to everyday work situations encountered by the participants. Furthermore, we hoped that the information would be retained. In addition, we wanted the presentation format to be perceived as appealing and convenient by the users.

Four sources of information were used to address the evaluation questions. First, as described below, a quiz was constructed to correspond to each module. The purpose of the quiz was to determine whether participants comprehended the information presented in the module and could apply this knowledge to problems intended to resemble those encountered in their everyday work. An experimental paradigm, involving random assignment, was implemented in which half the participants took the quiz before seeing the module, and half took the quiz after viewing the module. Participants willing to be followed up took the quiz a second time, one month later, to measure retention. Second, a background survey was used to collect information about participants so that we could determine if the modules were more useful and effective for some participants than for others. Third, an independent evaluator conducted phone interviews (Module 1) and administered an evaluation survey (Modules 1, 2, and 3) to participants to assess the utility and effectiveness of the materials and to solicit suggestions for improvement. (The evaluator role was filled by one individual for the Module 1 evaluation and a second individual for the evaluation of Modules 2 and 3.) Fourth, participants provided comments via comment boxes included as part of the module and quiz administration, and, in some cases, via email messages to the project.

Development and Evaluation of Module 1, “What’s the Score?”

Module 1 Development

The development of our first module, “What’s the Score?” began in 2004. By displaying conversations between the cartoon characters Stan, a somewhat clueless teacher, and Norma, a more informed one, this 25-minute module explores topics such as test score distributions and their properties (mean, median, mode, range, standard deviation), types of test scores (raw scores, percentiles, scaled scores, and grade-equivalents), and norm-referenced and criterion-referenced score interpretation. Table 1 provides some additional information on the content of the module.

The module development process began with a wide-ranging review of literature, including research articles and software products related to statistics education and assessment literacy training, textbooks in measurement and statistics, the Standards for Teacher Competence in Educational Assessment of Students (American Federation of Teachers, NCME, & National Education Association, 1990), the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & NCME, 1999), research findings on the effectiveness of multimedia instruction (see above), and references on storyboarding and film-making (e.g., Begleiter, 2001). The module development process also drew on the results of a preliminary survey of the measurement and statistics knowledge of teachers, school administrators, and teacher education students.

Following these preliminary steps, a detailed outline of the module was developed. Text was then created and scenes were conceptualized and incorporated in a storyboard. To maximize accessibility, both low- and high-bandwidth versions of the Web-based module were produced using Macromedia Flash (now Adobe Flash) software. The module was revised over a period of six months to reflect the feedback received. For example, after viewing an initial version, our advisory committee recommended shortening the module and dividing it into short segments, each preceded by a title slide. This recommendation was implemented before the research phase began. Also, on the recommendations of participants, “pause” and “rewind” capabilities were added to allow viewers to navigate among the seven segments of the module.

Module 1 Research Phase

The primary means of evaluating the effectiveness of the module involved the administration of a Web-based assessment literacy quiz that we developed to address the topics covered in the module. A 20-item multiple-choice quiz was pilot-tested on a group of teacher education students at UCSB and then revised substantially to improve its correspondence to the material included in the module. The purpose of the quiz was to allow us to determine whether viewing that module improved understanding of the concepts that were presented.

Sixty-eight teacher education program (TEP) students from UCSB, as well as 45 teachers and administrators from local school districts, were recruited to take part in a formal study of the effectiveness of the Web-based module. Participants received $15 gift cards from Borders. Demographic information on the participants is given in Table 2. Overall, participants had an average age of 34 and had taught for an average of seven years. Seventy percent were women. The TEP students, of course, tended to be younger and have fewer years of teaching experience than the school personnel.

When they logged on, participants completed a background survey and then were randomly assigned to one of two conditions: In one condition, the module was viewed before the quiz was administered; in the other, the quiz was administered first. By comparing participants from the two conditions, we were able to test the hypothesis

Table 2. Module 1: Demographic Information for Participants

                                    Sample   Average   Percent   Percent in Administrative   Average Years of
                                    Size     Age       Female    Positions                   Teaching Experience
All Participants                    113      33.7      69.9      5.3                         7.4
Teacher Education (TEP) Students     68      25.4      75.0      0.0                         2.3
School District Personnel            45      46.6      62.2      13.3                        15.1

Summer 2008 19
Table 3. Module 1: Results on the 20-Item Quiz

                                    Module-First Group    Quiz-First Group    Effect   t-test p-value
                                    Mean   SD    n        Mean   SD    n      Size     (1-sided)
All Participants                    13.2   3.7   52       12.0   3.4   61     .34      .042
Teacher Education (TEP) Students    13.1   4.0   33       11.7   3.5   35     .37      .059
School District Personnel           13.4   3.2   19       12.5   3.2   26     .28      .198

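The effect sizes in Table 3 can be reproduced from the table's group means, standard deviations, and sample sizes. A minimal sketch, assuming the standard pooled-SD definition of the standardized mean difference (the article does not state its exact formula):

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

# Table 3, all participants: module-first (13.2, 3.7, 52) vs. quiz-first (12.0, 3.4, 61)
print(round(cohens_d(13.2, 3.7, 52, 12.0, 3.4, 61), 2))  # 0.34
```

The same calculation reproduces the tabled values for the TEP student (.37) and school district personnel (.28) rows.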
that those who viewed the module first were better able to answer the quiz questions.

Data analysis

Psychometric analysis of the assessment literacy quiz showed that, for the total group, the quiz had an average percent correct of 63, an average item discrimination (corrected item-total correlation) of .28, a reliability of .71, and a standard error of measurement of 1.9 points.

As indicated in Table 3, results for the 113 participants showed that viewing the module had a small but statistically significant positive effect on quiz scores: Those in the "module-first" group answered an average of 13.2 of 20 questions correctly, compared with an average score of 12.0 questions for the "quiz-first" group (an effect size of .34 standard deviation units).

Further analysis showed that the quiz item on which the two groups differed most addressed the topic of skewness of a test score distribution. On this question, 63% of the module-first group answered correctly, while only 34% of the quiz-first group did so. On the remaining quiz items, differences in the percentages correct for the two groups were typically less than 10 points.

When the overall results in Table 3 are disaggregated, it becomes apparent that the effect was larger among the TEP students than among the school personnel (see Figure 2). There are several possible explanations for this disparity. As noted, the TEP students differed in several ways from the school personnel (Table 2). In addition, Module 1 and the associated quiz were presented to these students in a group administration, with project staff available to troubleshoot. By contrast, school personnel participated on an individual basis, at a time and place of their own choice.

In addition to comparing the quiz results by TEP status, we were also interested in comparing quiz results by math instructor status. Participants who were either full-time math teachers or TEP student teachers were classified as math instructors if they indicated that math was the sole subject taught, while all other participants were classified as nonmath instructors. Because we expected math instructors to be more familiar with measurement and statistics concepts than other participants, we anticipated that they would perform better on the quiz. In fact, math instructors (n = 19) scored only slightly higher than participants who were not math instructors (n = 94). The average score for math instructors was 13.2 points (SD = 2.9), compared to an average score of 12.4 for other participants (SD = 3.7).

Follow-up quiz analysis

A follow-up analysis was conducted to determine the extent to which participants retained an understanding of the module material, as measured by their follow-up performance on the quiz. Individuals who agreed to participate in the follow-up phase of the Module 1 research were contacted about a month after they had first viewed the module and taken the quiz. They were given an online quiz identical to the one that they took during their first module-viewing session.

Unfortunately, of the original 113 participants, only 11 participated in the follow-up phase and completed the quiz again. Four participants were from the module-first group, and seven were from the quiz-first group. The average score on the quiz for the four members of the module-first group was 15.5 the first time they took the quiz, and it remained 15.5 when they retook the quiz (note that there was variability within the scores at the two different times). The average quiz score for the seven quiz-first follow-up participants was 14.3 the first time they took the quiz, and increased to 15.9 when they took it during the follow-up phase. Although the follow-up samples are small

FIGURE 2. Module 1: Average scores on 20-item quiz for school personnel and teacher education program students in quiz-first and module-first groups.

Note: In Figures 2, 3, and 4, the labels for the "Module-First" and "Quiz-First" groups were reversed. The accompanying data tables (Tables 3, 5, and 7) were correctly labeled.

Educational Measurement: Issues and Practice
and cannot be assumed to be representative of the original participants, it is worth noting that, on average, no loss of content knowledge was observed for either group.

Development and Evaluation of Module 2, "What Test Scores Do and Don't Tell Us"

Module 2 Development

Our second module, "What Test Scores Do and Don't Tell Us," shows Stan, the teacher introduced in Module 1, meeting with parents Maria and Tim to discuss the test results of their twins, Edgar and Mandy. The module focuses on the effect of measurement error on individual student test scores, the effect of sample size on the precision of average scores for groups of students, and the definition and effect of test bias. Table 1 provides some additional information on the content of the module.

As was the case in developing Module 1, a detailed outline was first developed. Text was then created, and scenes were conceptualized and incorporated in a storyboard. Following that, a Web-based version of the module was produced using Flash software and was revised over a period of several months to reflect the feedback received from our project advisory committee. Both low- and high-bandwidth versions were made available.

Module 2 includes two features not incorporated in Module 1. First, an optional closed captioning feature is available, which should facilitate the use of the module by participants who are hard of hearing. Also, the module includes four "embedded questions" (one for each main section) that allow participants to check their understanding of the material and view portions of the module a second time if they wish. The embedded questions are also intended to encourage participants to remain engaged in the process of watching the module.

Recruitment and Data Collection Procedures for Module 2

The 2006 research and program evaluation phases differed from their 2005 counterparts in three related respects. First, whereas we recruited only Central California teachers and educators in 2005, we recruited educators from around the nation in 2006. Our primary means of doing so was to place an advertisement in the educational magazine Phi Delta Kappan in January and February. We also adapted our Website to allow individuals to sign up for the project on the Website itself. In addition, we contacted professional organizations, focusing on those for minority professionals, to encourage them to sign up for project participation.

A second change that we implemented in 2006 was necessitated by the fact that we were working with participants who were in some cases thousands of miles away, making visits to schools impractical. We therefore developed an infrastructure that allowed all transactions that take place during the research and program evaluation phases to be conducted electronically. Provision of logon information to participants, collection of data, and even distribution of gift cards were electronic. (Participants received electronic certificates usable at Borders.com.)

A third change was that, rather than regarding a school or district as a target of recruitment, we directly enlisted the participation of individual teachers and administrators. This did not preclude the possibility that particular schools and districts would participate; rather, it allowed us to collect data from educators whether or not their schools or districts were participating on an institutional level.

Module 2 Research Phase

To test the effectiveness of Module 2, we used a new assessment literacy quiz, tailored to the topics of Module 2. Consistent with the recommendations of our advisory panel, we created an instrument that is more applications-oriented than the Module 1 quiz. Most questions include tables or graphics that resemble those that appear in standardized test results. Respondents are asked questions about the interpretation of these displays. Because the quiz was more directly linked to the material in the module than was the case for Module 1, we believe the Module 2 quiz provided a better test of the effectiveness of the module. This 16-item multiple-choice quiz was pilot-tested with UCSB TEP students.

One hundred four individuals participated in the research phase for Module 2, "What Test Scores Do and Don't Tell Us." Of those, 81 were TEP students from at least five universities, and 23 were school district personnel from 19 school districts across the country. Participants received $15 electronic gift certificates from Borders.com. Demographic information on the participants for this phase is given in Table 4. Sixty-eight percent of the participants were women, with an average age of 31 years and average teaching experience of 11 years. The TEP students were typically younger and had fewer years of teaching experience than the school personnel.

To develop a better picture of the school districts where participants worked, either full-time or as student teachers, we collected additional information about the community surrounding the school district and the minority composition of the school district. In the background survey, participants were asked if they would describe the community surrounding the school district as "central city," "urban fringe/large town," or "rural/small town." Of the 85 participants who responded to this question, 69% described the area surrounding their district as urban fringe, while 18% indicated that their community was rural, and 13% described it as central city.

Participants were also asked to estimate a range for the percentage of students in their school district who were African-American, Hispanic/Latino, Native American, or members of other ethnic minorities. The choices were as follows: less than 20%, 21%–40%, 41%–60%, 61%–80%, and 81%–100%. Diverse districts were well represented in the study. Of the 84 participants who responded to the question, 45% indicated

Table 4. Module 2: Demographic Information for Participants

                                    Sample   Average   Percent   Average Years of
                                    Size     Age       Female    Teaching Experience
All Participants                    104      31.4      68        11.3
Teacher Education (TEP) Students     81      25.7      73        2.6
School District Personnel            23      50.9      52        23.0

Table 5. Module 2: Results on the 16-Item Quiz

                                    Module-First Group    Quiz-First Group    Effect   t-test p-value
                                    Mean   SD    n        Mean   SD    n      Size     (1-sided)
All Participants                    12.6   3.0   51       10.2   3.5   53     .74      .000
Teacher Education (TEP) Students    12.6   3.2   40        9.5   3.7   41     .90      .000
School District Personnel           12.7   1.9   11       12.5   1.4   12     .12      .375

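The reliability and standard error of measurement reported for each quiz are linked by the classical test theory relationship SEM = SD × √(1 − reliability). A minimal sketch of that identity; the total-group SD of roughly 3.45 implied by the Module 2 figures is an inference, not a value reported in the article:

```python
import math

def sem(sd, reliability):
    """Standard error of measurement under classical test theory."""
    return sd * math.sqrt(1 - reliability)

# Module 2 quiz: reported reliability .79 and SEM 1.58 jointly imply
# a total-group SD of about 3.45 (an inferred, not reported, value).
print(round(sem(3.45, 0.79), 2))  # 1.58
```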
that they worked in a district composed of 41% to 60% minority students, 19% worked in a district that was 61% to 80% minority students, 13% worked in a district that was less than 20% minority students, 12% worked in a district that was 81% to 100% minority students, and 11% worked in a district that consisted of 21% to 40% minority students.

Psychometric analysis of the 16-item multiple-choice assessment literacy quiz showed that the quiz had an average percent correct of 71, an average item discrimination of .40, a reliability of .79, and a standard error of measurement of 1.58 points.

As indicated in Table 5, results for the 104 participants showed a statistically significant effect: Those in the module-first group answered an average of 12.6 out of 16 questions correctly, compared with an average score of 10.2 questions for the quiz-first group (an effect size of .74).

Further analysis of the quiz results showed that the items that had the largest group differences in the percentages correct—between 20% and 27%—were primarily about measurement error or about the relation between the stability of sample means and sample size. For example, a question about the standard error of measurement was answered correctly by 88% of the module-first group, compared to 62% of the quiz-first group. For the remainder of the questions, the typical difference between the groups in the percentages correct was between 5% and 10%.

Table 5 shows that, as in Module 1, the effect of viewing the module was much larger for the TEP students (an effect size of .90) than for the school personnel (.12), a finding that is discussed further below. In Figure 3, we can observe that school personnel performed about the same (on average) on the quiz regardless of whether they viewed the module before or after taking the quiz; however, there was a difference in the average scores between TEP students who viewed the module before and after taking the quiz. This effect is more apparent than that obtained in the Module 1 evaluation phase.

FIGURE 3. Module 2: Average scores on 16-item quiz for school personnel and teacher education program students in quiz-first and module-first groups.

To parallel the analysis in the Module 1 evaluation phase, we compared quiz results between math instructors and nonmath instructors. Math instructors consisted of both full-time math teachers and TEP student teachers in math; all other participants were classified as nonmath instructors. As in the Module 1 evaluation phase, math instructors (n = 5) scored slightly higher on average than nonmath instructors (n = 99). The average score for math instructors was 14.4 (SD = 1.5), compared to 11.2 for nonmath instructors (SD = 3.5). (Note that only five participants were designated as math instructors in the Module 2 analysis, as compared to 19 in the Module 1 analysis. This may be because information on teaching was collected slightly differently on the two occasions. In the Module 1 background survey, participants were asked to indicate the subject they taught most frequently, while in the Module 2 survey, they were given the option to indicate multiple subjects that they taught. Participants who indicated only "math" were then designated as math instructors.)

Follow-up quiz analysis

As in the first module research phase, a follow-up analysis was conducted to determine the extent to which participants retained an understanding of the material from the second module. Participants in the Module 2 research phase were contacted about a month after they had first viewed the module and taken the quiz. They were given the identical quiz that they had previously taken during their first module-viewing session.

Due to improvements in the online administration system, contact with participants was more easily achieved,



Table 6. Module 3: Demographic Information for Participants

                                    Sample   Average   Percent   Average Years of
                                    Size     Age       Female    Teaching Experience
All Participants                     33      37.3      82        6.6
Teacher Education (TEP) Students     14      25.0      93        1.0
School District Personnel            19      46.4      74        13.8

and a much higher follow-up rate was observed. Of the original 104 participants who took the quiz, 38 participated in the follow-up phase and completed the quiz again. Fifteen participants were from the module-first group, and 23 were from the quiz-first group. The average score on the quiz for the 15 members of the module-first group was 13.9 the first time they took the quiz, and it dropped slightly to 13.1 when they retook the quiz. However, this difference was not statistically significant (p = .20) at the .05 level, as determined by a matched-pairs t-test. Hence, there does not appear to be any significant loss of content knowledge for this group. The average quiz score for the 23 quiz-first follow-up participants was 10.6 the first time they took the quiz, and increased to 11.4 when they took it during the follow-up phase. This difference was marginally statistically significant (p = .05), which is consistent with an increase in content knowledge due to the effect of viewing the module. As in the Module 1 evaluation, interpretation is complicated by the fact that the follow-up samples are small and cannot be assumed to be representative of the original participants.

Development and Evaluation of Module 3, "What's the Difference?"

Module 3 Development

Our third module, "What's the Difference?" shows a press conference in which a superintendent is explaining recently released test results. Norma and Stan, the teachers introduced in the previous modules, help to demystify the test results for the reporters. The main topics addressed are the importance of disaggregating test data for key student groups and the implications for score trend interpretation of shifts in the student population, the number of students assessed, and changes in tests and test forms. (See Table 1 for details.)

As was the case in developing Modules 1 and 2, an outline and script were first developed. An animated Web-based module was then produced, revised, and incorporated into our data collection system. The closed captioning and embedded questions features introduced in Module 2 were retained.

Production values for Module 3 were substantially improved over the previous two modules. First, a professional animator with extensive cartooning experience joined the project. Second, students from the Dramatic Art department at UCSB were hired to voice the animated characters, rather than employing project staff for this purpose. Finally, the audio recording was conducted in a professional sound studio. These changes resulted in a module that was much more polished than its predecessors.

Recruitment and data collection procedures for Module 3 were similar to those in Module 2, except that we selected a different magazine—Learning and Leading with Technology—in which to place our advertisement, and we timed the ad to be concurrent with the data collection instead of preceding it. As an added incentive to participation, a feature was added that allowed participants the option of downloading a personalized certificate indicating that they had completed an ITEMS training module.

Module 3 Research Phase

The effectiveness of Module 3, "What's the Difference?" was tested using an assessment literacy quiz tailored to the topics of the module. We began with a 16-item instrument, which was reduced to 14 items after being pilot-tested with UCSB TEP students.

Primarily because of reduced participation by the UCSB TEP students during the research phase, we were initially able to recruit only 23 participants. Four were UCSB TEP students and 19 were school district personnel from seven states. We were subsequently able to collect data from 10 teacher education students at California State University, Fresno, bringing the number of TEP students to 14. As before, participants received $15 electronic gift certificates from Borders.com.

Demographic information on the participants for this phase is given in Table 6. About 82% of the participants were women, the average age was 37, and the average number of years of teaching experience was 6.6. The teacher education students were generally younger and had fewer years of teaching experience than the school personnel.

As in the Module 2 research phase, we collected information about the community surrounding the school district and the minority composition of the school district. Participants were asked if they would describe the community surrounding the school district as "central city," "urban fringe/large town," or "rural/small town." Of the 24 participants who responded to this question, 12 described the area surrounding their district as urban fringe, 3 indicated that their community was rural, and 9 described it as central city.

Participants were also asked to estimate a range for the percentage of students in their school district who were African-American, Hispanic/Latino, Native American, or members of other ethnic minorities. The choices were identical to those given in the Module 2 background survey: less than 20%, 21%–40%, 41%–60%, 61%–80%, and 81%–100%. Diverse districts were well-represented: Of the 24 participants who responded to the question, 8 worked in a district that consisted of 21% to 40% minority students, 7 indicated that they worked in a district composed of 41% to 60% minority students, 4 worked in a district that was less than 20% minority, 3 worked in a district that was 81% to 100% minority, and 2 worked in a district that was 61% to 80% minority.

Psychometric analysis of the 14-item multiple-choice assessment literacy quiz revealed that the quiz had an average percent correct of 63, an average item discrimination of .53, a reliability of .87, and a standard error of measurement of 1.47 points.

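The follow-up comparisons above rest on a matched-pairs t-test applied to each participant's two quiz scores. A minimal sketch of the statistic; the scores below are hypothetical illustrations, since the study's individual-level data are not reported:

```python
import math

def paired_t(first, followup):
    """Matched-pairs t statistic: mean score change divided by its standard error."""
    diffs = [a - b for a, b in zip(followup, first)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical quiz scores for five participants (NOT the study's data)
first = [12, 14, 10, 15, 13]
followup = [13, 14, 12, 16, 12]
print(round(paired_t(first, followup), 2))  # 1.18
```

The resulting statistic is referred to a t distribution with n − 1 degrees of freedom to obtain the p-values quoted in the text.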
Table 7. Module 3: Results on the 14-Item Quiz

                                    Module-First Group    Quiz-First Group    Effect   t-test p-value
                                    Mean   SD    n        Mean   SD    n      Size     (1-sided)
All Participants                     9.1   4.2   18        8.5   4.1   15     .14      .33
Teacher Education (TEP) Students     6.5   4.1    8        5.5   2.1    6     .32      —
School District Personnel           11.2   3.0   10       10.4   4.0    9     .23      —

Note: Because of small sample sizes, a single t-test was performed for all participants combined; separate tests were not conducted for TEP students and school district personnel.

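The t statistic behind the p-value in Table 7 can be approximated from the summary statistics alone. A sketch, assuming a pooled-variance two-sample test (the article does not specify which variant was used):

```python
import math

def two_sample_t(m1, s1, n1, m2, s2, n2):
    """Pooled-variance two-sample t statistic computed from summary statistics."""
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Table 7, all participants: module-first (9.1, 4.2, 18) vs. quiz-first (8.5, 4.1, 15)
print(round(two_sample_t(9.1, 4.2, 18, 8.5, 4.1, 15), 2))  # 0.41
```

Referring this statistic to a t distribution with n1 + n2 − 2 = 31 degrees of freedom yields a one-sided p-value close to the .33 reported in the table.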
As indicated in Table 7, the effect of the module was not statistically significant. (Because of small sample sizes, a single t-test was performed for all participants combined; separate tests were not conducted for TEP students and school district personnel.) Those in the module-first group did have a slightly higher average quiz score (9.1) than those in the quiz-first group (8.5), however (see Figure 4). In both the module-first and quiz-first groups, school personnel performed substantially better than teacher education students. There were not enough data to make a meaningful comparison between math and nonmath instructors as was done for the first two research phases. Only two participants were math instructors. One math instructor with three years of teaching experience received a perfect score on the quiz, while the other math instructor, who is currently a TEP student, answered 5 out of 14 items correctly.

Further analysis of the quiz results did not show much difference between the scores of the module-first and quiz-first groups on most items. However, two questions did reveal substantial group differences: An item on the effect of sample size on the interpretation of improvements in average test scores was answered correctly by 83% of the module-first group, compared to 60% of the quiz-first group. A question about test equating was answered correctly by 78% of the module-first group, compared to only 27% of the quiz-first group.

Follow-up quiz analysis

A follow-up analysis was conducted to determine the extent to which participants retained an understanding of the material from the third module. Participants in the Module 3 research phase were contacted about a month after they had first viewed the module and taken the quiz. They were given the identical quiz that they had previously taken during their first module-viewing session. Of the original 33 participants who took the quiz, 11 (33%) participated in the follow-up phase and completed the quiz again. Four participants were from the module-first group and seven were from the quiz-first group. The average score on the quiz for the four members of the module-first group was 12.3 the first time they took the quiz, and it remained the same when they retook the quiz. The average quiz score for the seven quiz-first follow-up participants was 10.7 the first time they took the quiz, and increased to 12.6 when they took it during the follow-up phase. Because of the very small number of participants in the follow-up phase, tests of significance were not conducted. As in the results from the previous follow-up studies, the slight increase in average score is consistent with an increase in content knowledge due to the effect of viewing the module.

FIGURE 4. Module 3: Average scores on 14-item quiz for school personnel and teacher education program students in quiz-first and module-first groups.

Program Evaluation and Dissemination Phases for Modules 1, 2, and 3

During the program evaluation phases, project participants were asked to provide data to the project evaluator about the quality and usefulness of the modules. Requests to participate in the evaluation came directly from the evaluator, rather than the project director, and no incentives were provided for participation in this phase. Our intention was to keep the program evaluation phase as separate as possible from the main activities of the project, in hopes that participants would feel free to provide honest comments on the



project materials. The response rate for the program evaluation phase was low in all three years. For Module 1, seven teachers and six administrators (11.5% of the original participants) agreed to participate in phone interviews or complete paper surveys administered in person or by mail by the evaluator. In the survey, participants were asked to rate several aspects of the project on a 4-point Likert scale (poor, fair, good, excellent). Questions about the overall program, comprehensiveness of the material, and presentation of the material yielded modal responses of "good"; questions on "relevance to my role" and availability and access yielded modal responses of "excellent." In addition, most participants said they had used or planned to use the materials in their work. Interview comments were "very positive, in general" according to the evaluation report, but included both complimentary and critical feedback about the content and technical features of the module. The more negative comments tended to focus on the unavailability of navigation features, a problem that was later corrected.

The Module 2 program evaluation phase included a 22-item evaluation survey that was administered online. Approximately 5 weeks after their original participation, individuals received an email invitation to participate in the evaluation. Eleven individuals (10.6%) completed the surveys, which contained Likert items about the presentation, content, and impact of the module and open-ended questions about the quality of the module. Responses regarding presentation (e.g., quality of navigational tools) were uniformly positive, and responses on content (e.g., quality of examples) were all positive as well, except for one response to one of the five content questions. In the impact section, most respondents reported that they learned from the module, increased their confidence in the subject matter, and expected to change the way they talked to others about test results, though there was some disagreement among respondents on these issues. The open-ended questions on quality yielded positive responses about the clarity of presentation and about the examples and terms included in the module, as well as some negative responses about animation quality and about specific aspects of the content.

Informal comments that participants entered in comment windows during the original or follow-up data collection were also analyzed. Roughly one-third of the overall comments were positive, one-third were neutral, and one-third were negative. Among the recommendations made by the participants were to make definitions of terms available in the module through pop-up windows or some other means, and to increase the number of examples used to illustrate concepts.

The Module 3 evaluation procedures and survey paralleled those of Module 2. Unfortunately, only two of the original 33 participants (6.1%) completed the survey. These two respondents provided uniformly positive responses about the presentation, content, and impact of the module. Nineteen participants made a total of 27 comments in the windows provided during the module and quiz phases. Eleven of these comments were positive, 10 were negative, and 6 were neutral. The comments lacked a common theme. For example, while two viewers found the video aspect of the module distracting, another praised the graphics. And while one viewer thought the video was slow and repetitive in its treatment of the material, another called it "somewhat advanced."

In addition to the information collected via the evaluation survey and comment boxes, we also asked Module 3 participants to respond, following the module, to a multiple-choice item soliciting their views about the embedded questions posed at the end of each scene. Of the 32 participants who responded, 30 found the questions "somewhat helpful" or "very helpful," while 2 other participants indicated they were "neither annoying nor helpful." None considered them "distracting or annoying."

Following their respective research and program evaluation phases, the three modules were revised according to the recommendations of participants and advisory committee members. They were then made freely available on the Web, along with their associated quizzes and online "handbooks" that provide formulas, supplementary explanations, and references. Educators who preferred not to use the Web-based version of the module could request CDs or DVDs, which were mailed to them at no cost.

Discussion

A key finding of this study is that our instructional modules are effective training tools for teacher education program students. TEP students who took the quiz after seeing the module performed better than those who took the quiz before seeing the module. The effect sizes for these students were .37 for Module 1 (Table 3), .90 for Module 2 (Table 5), and .32 for Module 3 (Table 7). Results were statistically significant only for the first two modules. Follow-up analyses suggested that this information was retained a month later; however, because the small number of individuals who participated in the follow-ups cannot be assumed to be representative of the total group of initial participants, this finding must be regarded as tentative.

In the case of school personnel, there was no statistically significant difference between those who saw the module first and those who took the quiz first (Tables 3, 5, and 7). For all three modules, school personnel, on average, outperformed TEP students. For Module 2, which produced the largest effect for TEP students, the average quiz score for the school personnel (12.7 and 12.5 for the module-first and quiz-first groups, respectively) was nearly identical to that of the TEP students who saw the module before taking the quiz (12.6). This suggests that, whether or not they had seen the module, the school personnel, who had an average of 11 years of experience, were as familiar with the included measurement and statistics material as the TEP students who had already viewed the module.

A factor that somewhat complicates the interpretation of the larger effects for the teacher education students for all three modules is that, one to four months before the research phase, a portion of them had participated in a pilot test of the assessment literacy quiz. Those who took part in the pilot test responded to the quiz and provided comments about the clarity of the wording and the difficulty of the material. They did not receive the answers to the quiz, nor did they view the module. Those who took part in the pilot were just as likely to end up in the module-first group as in the quiz-first group during the research phase, so that any effects of pilot participation should have affected both experimental groups equally. Furthermore, in the case of the Module 1 quiz, the

Summer 2008 25
instrument was almost entirely overhauled before the research phase, so that the final version bore little resemblance to the pilot version. Nevertheless, it is possible that pilot participation had some effect.

More generally, caution is warranted when interpreting the data analysis results. While the TEP students from UCSB were required by their own faculty to participate in the main data collection for Modules 1 and 2, the remainder of the individuals who participated in the research phase chose to do so, resulting in nonrandom samples of the corresponding populations. In addition, samples ranged from small to moderate in size and cannot be assumed to be representative of school personnel or teacher education program students nationwide. Nevertheless, the results are encouraging and do indicate that, overall, the modules are having a positive impact on content knowledge in educational measurement and statistics. Supplemental analysis of quiz performance showed that math instructors tended to perform better than other participants (Modules 1 and 2) and that the topics participants were most likely to learn about from the modules were skewness of a test score distribution (Module 1), measurement error (Module 2), the stability of sample means (Module 2), the effect of sample size on the interpretation of changes in average scores (Module 3), and test equating (Module 3).

Most comments relayed to the project via email (not included in the formal program evaluation) have been positive. Some examples are as follows:

“Very helpful and right to the point. If I were a building principal or a department chair today, all of the staff would go through this until everyone really understood it.”

“I am inclined to recommend [Module 1] as required viewing for all new hires in our K-12 district, and it certainly will be recommended . . . for inclusion in professional development on assessment literacy.”

“I will be sharing [Module 1] with my Assistant Superintendent with the hope of promoting it as a part of our new teacher induction process.”

In addition, the teacher education programs at UCSB and at California State University, Fresno have now incorporated the ITEMS materials into their curricula.

By far the greatest challenge in the ITEMS project has been the recruitment of participants. Overworked teachers who are overwhelmed by the demands of NCLB are unlikely to undertake an optional activity like this one. Ironically, then, one reason that educators do not have time to learn about the technical aspects of testing is that they are occupied with the preparation and administration of tests.

As part of our effort to increase awareness of the project, we have issued press releases and made contacts with the Corporation for Educational Network Initiatives in California (CENIC), the University of California Office of the President, the California Teachers Association Institute for Teaching, the California Department of Education Beginning Teacher Support and Assessment program, and the California County Superintendents Educational Services Association. In addition, we have made conference presentations, posted project information on education-oriented Websites and blogs, and used listservs to contact professional organizations, focusing on those for minority professionals.

Following the completion of the supplementary data collection, we will focus for the remainder of the project on the dissemination of our materials, particularly to teacher education programs and new teachers. It is our hope that even those educators who were hesitant to participate in the research aspects of the project will find the ITEMS materials useful as a resource.

Acknowledgments

We appreciate the support of the National Science Foundation (#0352519). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We are grateful to Liz Alix, Pamela Yeagley, and Lois Phillips for their contributions. For more information on the project, refer to the project Website at http://items.education.ucsb.edu or contact the first author at rzwick@education.ucsb.edu.

Note

1. These three organizations subsequently collaborated on the development of a manual, Interpreting and Communicating Assessment Results (National Council on Measurement in Education, 1997). The project was led by Barbara S. Plake and James C. Impara and was funded by a grant to NCME from the W. K. Kellogg Foundation.

References

Adams, J. E., & Copland, M. A. (2005). When learning counts: Rethinking licenses for school leaders. Available at http://www.ncsl.org/print/educ/WhenLearningCounts_Adams.pdf.

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

American Federation of Teachers, National Council on Measurement in Education, & National Education Association (1990). Standards for teacher competence in educational assessment of students. Available at http://ericsys.uncg.edu/AFT-NEA-AERA-Standards for Teacher Competence in Education.htm.

Begleiter, M. (2001). From word to image: Storyboarding and the filmmaking process. Studio City, CA: Michael Weise Productions.

Boudett, K. P., City, E. A., & Murnane, R. J. (Eds.) (2005). Data wise: A step-by-step guide to using assessment results to improve teaching and learning. Cambridge, MA: Harvard Education Press.

Brown, T., & Daw, R. (2004). How do K-12 teachers and administrators feel about standardized testing and how well can they interpret results of such tests? Unpublished report, Santa Barbara, CA: University of California.

Chance, B. (2002). Components of statistical thinking and implications for instruction and assessment. Journal of Statistics Education, 10(3). Available at www.amstat.org/publications/jse/v10n3/chance.html.

Confrey, J., Makar, K., & Kazak, S. (2004). Undertaking data analysis of student outcomes as professional development for teachers. International Reviews on Mathematical Education (Zentralblatt für Didaktik der Mathematik), 36(1), 32–40.

delMas, R. C., Garfield, J., & Chance, B. L. (1999). A model of classroom research in action: Developing simulation activities to improve students’ statistical reasoning. Journal of Statistics Education, 7(3). Available at http://www.amstat.org/publications/jse/secure/v7n3/delmas.cfm.

English, L. D. (1997). Mathematical reasoning: Analogies, metaphors, and images. Mahwah, NJ: Erlbaum.

Finzer, W. (2001). Fathom! (Version 1.12) [Computer software]. Emeryville, CA: Key Curriculum Press.

26 Educational Measurement: Issues and Practice


Garfield, J., delMas, R. C., & Chance, B. L. (2003). The Web-based ARTIST: Assessment resource tools for improving statistical thinking. Presented at the annual meeting of the American Educational Research Association, Chicago. Available at http://www.gen.umn.edu/artist/publications.html.

Hammerman, J., & Rubin, A. (2002). Visualizing a statistical world. Hands On!, 25(2), 1–23. Available at http://www.terc.edu/downloads/Ho Fall o2.pdf.

Hammerman, J., & Rubin, A. (2004). Strategies for managing statistical complexity with new software tools. Statistics Education Research Journal, 3(2), 17–41.

Impara, J. C. (1993). Joint Committee on Competency Standards in Student Assessment for Educational Administrators update: Assessment survey results. Paper presented at the annual meeting of the National Council on Measurement in Education. Available at www.eric.ed.gov.

Jennings, J. (2002). New leadership for new standards. Leaders Count Report, Wallace Readers Digest Funds, Spring/Summer issue. Available at http://www.ctredpol.org.

Konold, C., & Miller, C. (1994). Prob Sim: A probability simulation program [Computer software]. Santa Barbara, CA: Intellimation Library for the Macintosh.

Konold, C., & Miller, C. (2004). Tinkerplots (Version 1.0) [Computer software]. Emeryville, CA: Key Curriculum Press.

Lukin, L. E., Bandalos, D. L., Eckhout, T. J., & Mickelson, K. (2004). Facilitating the development of assessment literacy. Educational Measurement: Issues and Practice, 23(2), 26–32.

Lundeberg, M. A., Levin, B. B., & Harrington, H. L. (Eds.) (1999). Who learns what from cases and how? The research base for teaching and learning with cases. Mahwah, NJ: Erlbaum.

Makar, K., & Confrey, J. (2002). Comparing two distributions: Investigating secondary teachers’ statistical thinking. Presented at the Sixth International Conference on Teaching Statistics, Cape Town, South Africa. Available at http://www.stat.auckland.ac.nz/~iase/publications/1/10_18_ma.pdf.

Mayer, R. E. (2001). Multimedia learning. Cambridge, UK: Cambridge University Press.

Mayer, R. E. (2003). The promise of multimedia learning: Using the same instructional design methods across different media. Learning and Instruction, 13, 125–139.

Mills, J. (2002). Using computer simulation methods to teach statistics: A review of the literature. Journal of Statistics Education, 10(1). Available at http://www.amstat.org/publications/jse/v10n1/mills.html.

Moreno, R., Mayer, R., Spires, H., & Lester, J. (2001). The case for social agency in computer-based teaching: Do students learn more deeply when they interact with animated pedagogical agents? Cognition and Instruction, 19, 177–213.

Narayanan, N. H., & Hegarty, M. (2002). Multimedia design for communication of dynamic information. International Journal of Human-Computer Studies, 57, 279–315.

National Council on Measurement in Education (1997). Interpreting and communicating assessment results: Professional development resource materials. Washington, DC: Author.

National Council on Measurement in Education Ad Hoc Committee on the Development of a Code of Ethics (1995). Code of professional responsibilities in educational measurement. Available at www.ncme.org.

Owen, J. M. (2007). Program evaluation: Forms and approaches (3rd ed.). New York: Guilford.

Pedulla, J., Abrams, L., Madaus, G., Russell, M., Ramos, M., & Miao, J. (2003). Perceived effects of state-mandated testing programs on teaching and learning: Findings from a national survey of teachers. Chestnut Hill, MA: Center for the Study of Testing, Evaluation, and Educational Policy, Boston College.

Popham, W. J. (2006a). Assessment for educational leaders. Boston: Pearson.

Popham, W. J. (2006b). Mastering assessment: A self-service system for educators. New York: Routledge.

Snell, J. L., & Finn, J. (1992). A course called “Chance.” Chance Magazine, 5(3–4), 12–16.

Stiggins, R. (2002). Assessment for learning. Education Week, 21(26), 30, 32–33.

Stiggins, R., & Chappuis, J. (2005). Using student-involved classroom assessment to close achievement gaps. Theory Into Practice, 44(1), 11–18.

Wayman, J. C., Stringfield, S., & Yakimowski, M. (2004). Software enabling school improvement through the analysis of student data (Report No. 67). Baltimore, MD: Johns Hopkins University, Center for Research on the Education of Students Placed At Risk. Available at http://www.csos.jhu.edu/crespar/techReports/Report67.pdf.
