
Running Head: TECHNOLOGY-BASED ASSESSMENTS OF STUDENT WRITING

Using Technology to Assess Student Writing Samples in Secondary Education


Jon Reader
University of Maryland, University College (UMUC)


Abstract
In an extension of research studying the effects of computer-based versus paper-based assessments of
students' essays in secondary education, this paper examines the advantages and disadvantages
associated with the use of technology-based assessments for student writing samples in
middle and high schools. Studies have investigated the comparability of scores for paper and computer
versions of a writing test administered to 8th grade students. Results generally showed no significant mean
score differences between paper and computer delivery. Observations, interviews and a survey indicated
that automated writing evaluation (AWE) software programs like the Intelligent Essay Assessor (IEA)
using Latent Semantic Analysis (LSA), and MY Access! using artificial intelligence (AI) to score
student essays and support revision simplified classroom management and increased students' motivation
to write and revise (Grimes & Warschauer, 2010). The use of AWE software programs also
allows teachers to increase the number of writing assignments without increasing the amount of grading
and can serve as a highly effective and efficient tool for increasing students' exposure to writing.
Technology-based writing assessments provide a level of feedback that requires students to reflect on
their performance and also provide assessment methods that inform students about where they most need
assistance. A technology-based writing assessment allows students to practice their writing with efficient
and informative feedback and gives teachers the information they need to identify individual and class
strengths and weaknesses. Automated software programs enhance students' critical-thinking skills and can
accelerate student learning, leading to higher levels of student achievement. However, computer
familiarity significantly predicted online writing test performance after controlling for paper writing
skill. These results suggest that, for any given individual, a computer-based writing assessment may
produce different results than a paper one, depending upon that individual's level of computer familiarity
(Horkay, Elliot Bennett, Allen, Kaplan, & Yan, 2006).


Using Technology to Assess Student Writing Samples in Secondary Education


English teachers in secondary schools today are inundated with countless papers and
writing assignments needing evaluation and grades in their respective classes. These classes, it seems,
are growing in student size and diversity each year in the public school system. How can teachers grade
students' work accurately and fairly given the current sizes of their classes? How can teachers grade
students' work accurately and fairly given the degree of cultural disparity in their classes? How can
teachers grade students' work accurately and fairly given the degree of dissimilarity in student
performance in their classes? In the age of the Internet, it is imperative that educators begin to research
the advantages and benefits associated with the use of technology-based assessment software programs
to score students' papers and writing assignments effectively and efficiently.
The use of automated writing evaluation (AWE) software programs such as the Intelligent Essay
Assessor (IEA) and MY Access! (MA) to score student writing assignments is a topic of hot debate.
Advocates view AWE programs as a magic bullet, while critics consider AWE programs a threat to the
foundations of education. However, in my opinion, neither perspective is entirely true. If AWE programs
are reliable and valid, the practical question therefore becomes: what are the advantages and/or
disadvantages of technology-based versus paper-based writing assessments? The two main research
questions addressed in this study are the following:
1. Are automated writing evaluation (AWE) software programs such as the Intelligent Essay Assessor
(IEA) and MY Access! (MA), which score student writing assignments, reliable and valid?

2. What are the advantages and/or disadvantages for students and teachers of using technology-based
assessments in secondary schools for student essays and writing prompts?
Does it matter if students take their writing test on computer? Studies have
investigated the comparability of scores for paper and computer versions of a writing test
administered to 8th grade students. Analyses looked at overall differences in performance
between the delivery modes, interactions of delivery mode with group membership, and whether
computer familiarity was associated with online writing test performance. Results generally
showed no significant mean score differences between paper and computer delivery and no
significant mean score differences between group memberships. Observations, interviews and a
survey indicated that automated writing evaluation (AWE) software programs like the Intelligent
Essay Assessor (IEA) using Latent Semantic Analysis (LSA), and MY Access! using artificial
intelligence (AI) to score student essays and support revision simplified classroom management
and increased students' motivation to write and revise (Grimes & Warschauer, 2010).
However, computer familiarity significantly predicted online writing test performance
after controlling for paper writing skill. These results suggest that, for any given individual, a
computer-based writing assessment may produce different results than a paper one, depending
upon that individual's level of computer familiarity (Horkay, Elliot Bennett, Allen, Kaplan, &
Yan, 2006).
Most teachers evaluate students' writing assignments based upon mechanical features like
spelling, grammar, and punctuation. However, all of us who write essays and respond to questions in
writing understand that spelling, grammar, and appropriate punctuation are not everything. For example,
Foltz, Laham, and Landauer (1999) stated that, at an abstract level, one
can distinguish three properties of a student essay that are desirable to assess: the correctness and
completeness of its conceptual knowledge, the soundness of arguments that it presents in
discussion of issues, and the fluency, elegance, and comprehensibility of its writing. However,
the effort required to examine writing assignments so thoroughly is daunting for any
teacher.
Can AWE provide scores that prove to be an accurate measure of the quality of
essays? The Intelligent Essay Assessor (IEA) using Latent Semantic Analysis (LSA) has been
successfully applied to a number of simulations of cognitive and psycholinguistic
phenomena (Foltz, Laham, & Landauer, 1999). These simulations have shown that LSA captures
a great deal of the similarity of the meaning of words expressed in discourse because LSA
methods of assessment concentrate on the conceptual content, the knowledge conveyed in an
essay, rather than its style, or even its syntax or argument structure (Foltz, Laham, & Landauer,
1999). Based on a statistical analysis of a large amount of text, LSA derives a high-dimensional
semantic space that permits comparisons of the semantic similarity of words and passages (Foltz,
Laham, & Landauer, 1999). The LSA-measured similarities have been shown to closely mimic
human judgments of meaning similarity and human performance based on such similarity in a
variety of ways (Foltz, Laham, & Landauer, 1999).
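To make the mechanics of this comparison concrete, the following Python sketch illustrates the general LSA approach of deriving a reduced semantic space from a body of text and comparing two passages by cosine similarity. It is a minimal illustration only, not the IEA's actual implementation; the toy corpus, the scikit-learn calls, and the number of dimensions are all assumptions.

# Minimal LSA-style sketch (illustrative only, not the IEA's code).
# Assumes scikit-learn is installed; the corpus and dimension count are toy values.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "The heart pumps oxygenated blood from the lungs to the rest of the body.",
    "Blood returns to the heart through the veins and is sent back to the lungs.",
    "Valves keep blood flowing in one direction through the chambers of the heart.",
]  # a real LSA space is derived from a large amount of topic-relevant text

vectorizer = TfidfVectorizer(stop_words="english")
term_doc_matrix = vectorizer.fit_transform(corpus)

# Reduce the term-document matrix to a low-dimensional "semantic space".
svd = TruncatedSVD(n_components=2, random_state=0)  # hundreds of dimensions in practice
svd.fit(term_doc_matrix)

def passage_vector(text):
    """Project a passage into the reduced semantic space."""
    return svd.transform(vectorizer.transform([text]))

student_sentence = "The heart sends blood carrying oxygen out to the body."
expert_sentence = "Oxygen-rich blood is pumped by the heart to the body's tissues."
similarity = cosine_similarity(passage_vector(student_sentence),
                               passage_vector(expert_sentence))[0, 0]
print(f"Semantic similarity: {similarity:.2f}")

Cosine similarity in the reduced space is what allows two passages with little word overlap to register as semantically close, which is the property that makes content-focused scoring possible.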
Is automated scoring effective? Several techniques have been developed for assessing
students' essays. One technique is to compare students' essays to essays that have already been
graded. The score for each essay is determined based on how well the overall meaning or
similarity of content matches that of previously graded essays (Foltz, Laham, & Landauer, 1999).
This method has been tested on a large number of essays over a diverse set of topics. The content
of each of the new essays is compared against the content of a set of previously graded essays on
the same topic. In each case, the essays were also graded by at least two course instructors or
expert graders. According to the data provided by Foltz, Laham, & Landauer (1999), LSA's
performance produced reliabilities within the generally accepted guidelines for minimum
reliability correlation coefficients. For example, out of 188 essays describing how the human
heart functions, the average correlation between two graders was 0.83, while the correlation of
LSA's scores with the graders was 0.80 (Foltz, Laham, & Landauer, 1999).
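A rough sketch of this comparison-based scoring, together with the kind of grader-agreement check reported above, might look as follows in Python. The similarity-weighted averaging rule and the toy numbers are assumptions for illustration, not the published scoring procedure.

# Sketch: score a new essay from its similarity to previously graded essays,
# then check rater agreement with a Pearson correlation (illustrative only).
import numpy as np
from scipy.stats import pearsonr

def predict_score(new_vec, graded_vecs, graded_scores, k=10):
    """Average the human scores of the k most similar graded essays,
    weighted by cosine similarity (an assumed combination rule)."""
    sims = graded_vecs @ new_vec / (
        np.linalg.norm(graded_vecs, axis=1) * np.linalg.norm(new_vec) + 1e-12
    )
    nearest = np.argsort(sims)[-k:]
    weights = np.clip(sims[nearest], 0.0, None) + 1e-12
    return float(np.average(graded_scores[nearest], weights=weights))

graded_vecs = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]])  # toy semantic-space vectors
graded_scores = np.array([5, 2, 4])
new_essay_vec = np.array([0.8, 0.2])
print(f"Predicted score: {predict_score(new_essay_vec, graded_vecs, graded_scores, k=2):.1f}")

# Agreement check in the spirit of the reported reliabilities (e.g., r = 0.83
# between two human graders and r = 0.80 between LSA and the graders).
grader_a = np.array([4, 3, 5, 2, 4, 3])
grader_b = np.array([4, 3, 4, 2, 5, 3])
r, _ = pearsonr(grader_a, grader_b)
print(f"Human-human agreement: r = {r:.2f}")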
In a more recent study, the holistic method (i.e., one that accounts for all variables) was used to
grade two additional questions from the GMAT standardized test. The performance was compared
against two trained Educational Testing Service (ETS) graders. For one question, a set of 695 opinion
essays, the correlation between the two graders was 0.86, while LSA's correlation with the ETS
grades was also 0.86. For the second question, a set of 668 analyses of argument essays, the
correlation between the two graders was 0.87, while LSA's correlation with the ETS grades was 0.86
(Foltz, Laham, & Landauer, 1999).
What are the advantages and/or disadvantages for students and teachers of using
technology-based assessments in secondary schools for student essays and writing
prompts? Writing is a fundamental skill that requires years of practice and necessitates feedback
on the content, form, style, grammar, and spelling of the essay. However, with so many students
to assess on a daily basis, a teacher can only provide a limited amount of feedback, and students
are usually left unclear about the true quality of their work. What students really need is a
method of understanding not only the quality but also the depth of their work. And they need this
feedback consistently and often in order to make real progress.

The Intelligent Essay Assessor (IEA) uses Latent Semantic Analysis (LSA) for assisting
students in evaluating the content of their essay. LSA can compare each student's writing with the
writing of experts and create a report indicating how well the paper correlates in content on a
scale from 1 to 5. The numerical output does not give students specific feedback on what content
needs to change, but it helps them identify when more work needs to be done. Students can then
rewrite and submit their papers to the LSA system as many times as necessary to improve the
quality ranking. The result is that students' final essays have a much higher quality of content
when they complete the assignment. Accordingly, the students are forced to comprehensively
evaluate their own work before handing in the final product, thus allowing the teacher to spend
more time evaluating the content, creativity, and synthesis of ideas in students' writing.
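As a purely illustrative sketch of this revise-and-resubmit cycle (the 1-to-5 report is the IEA's; the similarity cut points below are invented for the example), the score banding could be expressed as:

# Hypothetical mapping from a 0-1 content-similarity value to the 1-5 report
# scale; the threshold values are assumptions, not the IEA's actual cut points.
def content_band(similarity_to_expert: float) -> int:
    thresholds = [0.2, 0.4, 0.6, 0.8]
    return 1 + sum(similarity_to_expert >= t for t in thresholds)

# Successive drafts of one essay moving up the scale as the student revises.
for draft_number, similarity in enumerate([0.35, 0.55, 0.72], start=1):
    print(f"Draft {draft_number}: content score {content_band(similarity)} of 5")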
Technology-based writing assessments provide a level of feedback that requires
students to reflect on their performance and also provide assessment methods that inform
students about where they most need assistance. A technology-based writing assessment allows
students to practice their writing with efficient and informative feedback and gives teachers the
information they need to identify individual and class strengths and weaknesses. Automated
software programs provide feedback on six traits of writing: ideas, organization, conventions,
sentence fluency, word choice, and voice. Students and teachers are free to focus on
each of these important dimensions of writing when using AWE programs. Students also
reported that they find the instructional environment both engaging and motivational. They are
encouraged to spend extra time on assignments and projects. Automated software programs
enhance students' critical-thinking skills and can accelerate student learning, leading to higher
levels of student achievement.
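A simple illustration of how such trait-level feedback might be represented and used to point a student toward the traits needing the most work follows; the field names and scores here are hypothetical, not the output format of any particular AWE product.

# Hypothetical trait-level feedback record for the six traits named above;
# weakest_traits() shows how a report can direct revision effort.
from dataclasses import dataclass

@dataclass
class TraitFeedback:
    ideas: int
    organization: int
    conventions: int
    sentence_fluency: int
    word_choice: int
    voice: int

    def weakest_traits(self, n: int = 2):
        """Return the n lowest-scoring traits, i.e., where help is most needed."""
        scores = vars(self)
        return sorted(scores, key=scores.get)[:n]

report = TraitFeedback(ideas=4, organization=3, conventions=2,
                       sentence_fluency=3, word_choice=4, voice=3)
print("Focus revision on:", ", ".join(report.weakest_traits()))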

Using technology to assess students' writing and essay prompts has other advantages,
including the electronic transfer of test results and the ability to tailor questions and content.
Currently, most teachers don't receive test results in a timely manner. Therefore, teachers have
difficulty adjusting classroom instruction and adapting their teaching methods and
curricula in order to meet individual student and classroom needs.
Specifically, the 2002 Writing Online (WOL) study explores the use of new technology in
administering the National Assessment of Educational Progress (NAEP). The study addresses
issues related to measurement, equity, efficiency, and operations in a computer-based writing
assessment (Horkay, Elliot Bennett, Allen, Kaplan, & Yan, 2006). This report describes the
results of testing a national sample of 8th grade students on a computer. The WOL study was
administered to students on school computers via the Internet or on NAEP laptop computers
brought into the schools. During April and May of 2002, data were collected from more than
1,300 students in about 160 schools. Student performance on WOL was compared to that of a
national sample that took the main NAEP paper-and-pencil writing assessment between January
and March 2002. For the samples taking WOL, background information concerning access to,
use of, and attitudes toward computers was also collected (Horkay, Elliot Bennett, Allen, Kaplan,
& Yan, 2006).
Informal feedback was acquired from students regarding their reactions to the test.
Overall, administrators reported far more positive reactions than negative ones from the students.
When asked what they liked most about WOL, 722 student comments were received compared
with 417 comments received regarding what students liked least (Horkay, Elliot Bennett, Allen,
Kaplan, & Yan, 2006). The most common positive responses from the students stated that they

Running Head: TECHNOLOGY-BASED ASSESSMENTS OF STUDENT WRITING


liked using the computer format (185), liked typing (68), test was easy (66), liked writing (42),
liked using the laptop (32), and it was fun (30) (Horkay, Elliot Bennett, Allen, Kaplan, & Yan,
2006). The most common negative responses from the students were the time limit was too short
or was too long (78), did not like writing (34), did not like typing (33), and did not like essay
portion (28) (Horkay, Elliot Bennett, Allen, Kaplan, & Yan, 2006).
Students were also asked if they thought they would write better on computer or paper.
Of the 929 responses, the overwhelming majority (76 percent) reported that they write better on
the computer, while 21 percent indicated that they write better on paper (Horkay, Elliot Bennett,
Allen, Kaplan, & Yan, 2006). Those students who reported that they write better on the computer
gave reasons such as the following: typing is faster (119), editing is easier (107), editing tools are
useful (102), neatness is improved (83), typing is easier (65), and writing by hand cramps their
hands (35) (Sandene, 2005). Students who reported that they write better on paper gave reasons
such as the following: writing is faster (43), not a proficient typist (29), easier to express ideas
(26) and not comfortable using the computer (26) (Horkay, Elliot Bennett, Allen, Kaplan, & Yan,
2006).
The primary disadvantage of AWE programs, though, is the monetary cost associated with
the start-up and maintenance of the technology. Additionally, there are concerns about losing assessment
data if computer systems crash and about providing adequate technical support for the schools
(Computer Based Assessment, 2010). Equity could also be an issue if students with more access to
computers and stronger keyboarding skills have a greater advantage on technology-based
writing assessments than on paper-based writing assessments (Computer Based Assessment, 2010).
In any case, using technology-based assessments for students' essays and writing prompts is
gaining credibility and will most likely become the standard form of assessment in the near future.
Educators, policymakers, assessment experts and testing companies must work closely to
minimize the disadvantages and maximize the advantages if technology-based writing
assessments are to become an efficient, effective and equitable way to assess students' writing
performances (Computer Based Assessment, 2010).
Several studies have looked at the relationship of computer familiarity to writing test
performance. Although the results are not entirely consistent, they suggest that computer and
paper-based writing tests may not measure the same type of skill for all students. For example,
Wolfe, Bolton, Feltovich, and Bangert (1996) and Wolfe, Bolton, Feltovich, and Niday (1996)
found that secondary school students with less experience writing on computer were
disadvantaged by having to test that way. Wolfe, Bolton, Feltovich, and Bangert (1996) found
that 10th grade students with little or no experience using computers outside of school scored
higher on pen-and-paper essays than on computer-written ones, whereas students with a lot of
computer experience showed no difference in performance across modes. In the second study,
Wolfe, Bolton, Feltovich, and Niday (1996) found that less experienced students achieved lower
scores, wrote fewer words, and wrote more simple sentences when tested on computer than when
they tested on paper. Students with more experience writing on computer achieved similar scores
in both modes, but wrote fewer words and more simple sentences on paper than on computer
(Wolfe, Bolton, Feltovich, and Niday, 1996).
Another study by Russell (1999) found that, after controlling for reading performance,
middle-school students with low keyboarding speed were disadvantaged by a computer-writing
test relative to students with similar low levels of keyboarding skill taking a paper test. The
opposite effect was detected for students with high keyboarding speed, who fared better on the
computer than on paper examinations (Russell, 1999). In a subsequent investigation, however,
Russell and Plati (2001) found that 8th and 10th grade students performed better on the computer writing test regardless of whether their keyboarding speed was high or low.
NAEP administrators also informally asked school staff for their reactions to the WOL
administration. Of the 124 school staff comments received, 96 were positive, 2 negative, and 26
mixed or neutral (Horkay, Elliot Bennett, Allen, Kaplan, & Yan, 2006). Staff also commented on
how eagerly and diligently the students participated in the WOL program.
Teachers: Classroom Management with Automated Scoring
The research conducted by Grimes & Warschauer (2010) studied how an AWE program
called MY Access! (MA) was used in eight middle schools in two Southern California school districts
over a three-year period. When all the technologies worked as planned, AWE simplified
classroom management. Teachers who were observed appeared more relaxed when students
wrote with AWE instead of pencil and paper, and students often became noticeably more focused
and engaged the minute the teacher let them start writing with AWE (Grimes & Warschauer,
2010). According to Grimes & Warschauer (2010), for the majority of teachers in this study,
using AWE simplified classroom management for several reasons: students were more motivated
to write and revise, they were more autonomous, and their writing portfolios were conveniently
organized.
The potential for AWE to ease teachers' stress sometimes backfired when technical
problems outstripped a teacher's troubleshooting skill and wasted precious class time.
Grimes & Warschauer (2010) reported that teachers were able to temporarily relax their role as
judges of student performance and play a more sympathetic, coaching role with students by
transferring the evaluation and scoring of essays to AWE. This would allow teachers to assume
more of a supportive role and not so much the judge and jury when it comes to scoring
students' writing assignments. Grimes & Warschauer (2010) found that if teachers relate poorly
to students, are preoccupied with technical concerns, or use automated scoring to determine
grades; using AWE is likely to dehumanize instruction. On the other hand, Grimes & Warschauer
(2010) also found that if a teacher uses the software to overcome students' reluctance to write
and to help with low-level errors so that she can focus on high-level concerns like ideas and
style, then it is likely to contribute to more human-oriented writing.
Teachers: Teaching Different Types of Students via Automated Scoring
The classrooms in this study included students with very diverse achievement levels,
English skills, and motivation to learn. During classroom observations, Grimes &
Warschauer (2010) found that students of all types appeared more focused when writing with
MA than when writing with pencil and paper. Teachers in the survey reported that MY Access!
(MA) assists writing development and encourages positive attitudes for English language
learners, special education students, gifted students, at-risk students, and general students without
special needs (Grimes & Warschauer, 2010).
Students: Attitudes and Writing Practices Using Automated Scoring
Students' attitudes and writing practices were examined in several sub-sections: Students'
Attitudes toward Automated Scores, Amount of Revision, Types of Revision, Use of Feedback, and
Students' Writing Development. Classroom observations revealed that even when students
realized that automated scores were sometimes ungrounded, the prospect of receiving a quick
score motivated them to focus more (Students' Attitudes toward Automated Scores) than if they
expected to wait days or weeks for their score (Grimes & Warschauer, 2010). That immediate
feedback seemed to increase the students' motivation to perform and was confirmed in the
survey by Grimes & Warschauer (2010), in which 30 out of 40 teachers agreed or strongly agreed
that students were more motivated to write with MA than with a word processor.
Interviews and surveys with teachers and students indicated increased amounts of
revision by the students as well. For example, 30 out of 40 participants on the teacher survey
agreed or strongly agreed that students revised more (Amount of Revision) when writing with MA.
Classroom observations and teacher interviews confirmed the survey results, indicating
increased student motivation to write and revise (Students' Writing Development) with MA. However,
there were some classrooms in which automated scoring appeared to shift the students' goal of
writing from communication to improving the score, at least temporarily (Grimes & Warschauer,
2010).
Implications
These studies have investigated the comparability of scores for paper and computer
versions of a writing test administered to 8th grade students. Analyses looked at overall
differences in performance between the delivery modes, interactions of delivery mode with
group membership, and whether computer familiarity was associated with online writing test
performance. Results generally showed no significant mean score differences between paper and
computer delivery. However, for any given individual, a computer-based writing assessment may
produce different results than a paper one, depending upon that individual's level of computer
familiarity (Horkay, Elliot Bennett, Allen, Kaplan, & Yan, 2006). In order for these findings to be
considered reliable and valid, the studies should first be substantiated with larger
samples before presuming that the two delivery modes are interchangeable for population
groups. Second, score comparability depends on whether the test is taken on a NAEP
laptop or on a school computer. For a given level of paper writing skill, students with more
hands-on computer facility appear to get higher scores on WOL than do students with less
keyboard proficiency (Horkay, Elliot Bennett, Allen, Kaplan, & Yan, 2006).
A second implication for interpretation is that the relationships of certain demographic
variables to writing proficiency might have been different if that proficiency had been measured
on a computer (Horkay, Elliot Bennett, Allen, Kaplan, & Yan, 2006). Ideally, scores taken across
a greater number of readers grading under less pressured conditions, in combination with other
measures of writing skill, would provide a sounder comparative standard (Horkay, Elliot
Bennett, Allen, Kaplan, & Yan, 2006). Even if automated scoring were less accurate, it would be
important to know the impact of that accuracy loss on NAEP population estimates. If the loss
were small enough, the use of automated scoring could have little negative impact on results but
a considerable effect in lowering costs and speeding up reporting (Horkay, Elliot Bennett, Allen, Kaplan,
& Yan, 2006).
A primary reason for high costs is that the school technology infrastructure is not yet
developed enough to support national delivery via the Web directly to school computers (Horkay,
Elliot Bennett, Allen, Kaplan, & Yan, 2006). Thus, NAEP will need to supplement web delivery
by bringing laptop computers into schools, though undoubtedly not to the same extent as in this
study because school technology is being improved continually (Horkay, Elliot Bennett, Allen,
Kaplan, & Yan, 2006).
There are several issues that future research on the delivery of electronic writing
assessment in NAEP should also address. First, this study only accounted for one grade (8th) and
for only two essay tasks. The findings of the study could very well be different if more grade
levels and population groups were included. If 4th grade students have more limited word
processing skills, or 12th graders more developed ones, student performance might vary much
more dramatically across modes than was observed for just the 8th grade participants in this study
(Horkay, Elliot Bennett, Allen, Kaplan, & Yan, 2006). Additionally, the study results could
dramatically vary if the tasks or writing assignments used in the study required significantly
longer or shorter responses. Future research should also take into account the impact of
differences in equipment configuration on NAEP population estimates (Horkay, Elliot Bennett,
Allen, Kaplan, & Yan, 2006).
Differences in students' performance were a function of whether a student used a NAEP
laptop or a school computer to take the writing test. As school computers become the
predominant delivery mechanism, variation across computers (e.g., monitor size, screen
resolution, connection speed) may play a greater role in affecting performance in ways unrelated to writing skill
(Horkay, Elliot Bennett, Allen, Kaplan, & Yan, 2006).
Technology-based assessment of students' writing performances in technology-rich
environments will help educators better understand how computers can help improve
the NAEP educational assessment and students' writing skills (Horkay, Elliot Bennett, Allen,
Kaplan, & Yan, 2006).

Annotated References
1. Utility in a Fallible Tool: A Multi-Site Case Study of Automated Writing Evaluation
Automated writing evaluation (AWE) software uses artificial intelligence (AI) to score student
essays and support revision. We studied how an AWE program called MY Access! was used in
eight middle schools in Southern California over a three-year period. Observations, interviews,
and a survey indicated that using AWE simplified classroom management and increased
students' motivation to write and revise.
Research Questions:

1. Teachers: What were teachers' attitudes toward and instructional practices with AWE?
2. Students: What were students' attitudes toward and writing practices with AWE?
Grimes, D., & Warschauer, M. (2010, March). Utility in a Fallible Tool: A Multi-Site Case Study
of Automated Writing Evaluation. Retrieved March 20, 2010 from The Journal of
Technology, Learning, and Assessment:
http://escholarship.bc.edu/cgi/viewcontent.cgi?article=1215&context=jtla
2. Does it Matter if I take My Writing Test on Computer? An Empirical Study of Mode
Effects in NAEP
This study investigated the comparability of scores for paper and computer versions of a writing
test administered to eighth grade students. Analyses looked at overall differences in performance
between the delivery modes, interactions of delivery mode with group membership, differences
in performance between those taking the computer test on different types of equipment (i.e.,
school machines vs. NAEP-supplied laptops), and whether computer familiarity was associated
with online writing test performance. Results generally showed no significant mean score
differences between paper and computer delivery. However, computer familiarity significantly
predicted online writing test performance after controlling for paper writing skill (Horkay, Elliot
Bennett, Allen, Kaplan, & Yan, 2006).
Research Questions:
1. Do students perform differently on computer-based versus paper-based writing
assessments?
2. Does test mode differentially affect the performance of NAEP reporting groups (e.g.,
those categorized by gender or by race/ethnicity)?
3. Do students who are relatively unfamiliar with computers perform differently from
students who are more familiar with them?
Horkay, N., Elliot Bennett, R., Allen, N., Kaplan, B., & Yan, F. (2006, November). Does it
Matter if I take My Writing Test on Computer? An Empirical Study of Mode Effects in
NAEP. Retrieved March 20, 2010, from The Journal of Technology, Learning, and
Assessment: http://escholarship.bc.edu/cgi/viewcontent.cgi?article=1071&context=jtla
Additional References
Foltz, P.W., Laham, D., & Landauer, T.K. (1999, October). The Intelligent Essay Assessor:
Applications to Educational Technology. Retrieved April 4, 2010, from Interactive
Multimedia Electronic Journal of Computer-Enhanced Learning:
http://imej.wfu.edu/articles/1999/2/04/
Burstein, J., Chodorow, M., & Leacock, C. (2004). Automated essay evaluation: the
Criterion Online writing service. AI Magazine, 25(3), 27–36.
Elliot, S.M., & Mikulas, C. (2004, April 12-16). The impact of MY Access! use on
student writing performance: A technology overview and four studies. Paper presented at
the Annual Meeting of the American Educational Research Association, San Diego, CA.
Foltz, P.W., Laham, D., & Landauer, T.K. (1999). Automated Essay Scoring: Applications to
Educational Technology [Electronic Version]. Interactive Multimedia Electronic Journal
of Computer-Enhanced Learning.
Russell, M., & Haney, W. (1997). Testing writing on computers: An experiment comparing
student performance on tests conducted via computer and via paper-and-pencil.
Education Policy Analysis Archives, 5(3). Retrieved June 27, 2003, from
http://epaa.asu.edu/epaa/v5n3.html
Russell, M., & Plati, T. (2001). Effects of computer versus paper administration of a state-mandated writing assessment. TCRecord. Retrieved June 27, 2003, from
http://www.tcrecord.org/Content.asp?ContentID=10709
Wolfe, E. W., Bolton, S., Feltovich, B., & Bangert, A. W. (1996). A study of word
processing experience and its effects on student essay writing. Journal of Educational
Computing Research, 14(3), 269–283.
Wolfe, E. W., Bolton, S., Feltovich, B., & Niday, D. M. (1996). The influence of student
experience with word processors on the quality of essays written for a direct writing
assessment. Assessing Writing, 3(2), 123–147.
Computer Based Assessment. (2010). Retrieved August 9, 2010, from the Education Commission of the States:
http://www.ecs.org/html/issue.asp?issueid=12&subIssueID=76
Horkay, N., Elliot Bennett, R., Allen, N., Kaplan, B., & Yan, F. (2005, August). Reports
From the NAEP Technology-Based Assessment Project, Research and Development
Series. Retrieved August 8, 2010, from National Center for Education Statistics:
http://www.ecs.org/html/offsite.asp?document=http%3A%2F%2Fnces%2Eed%2Egov%2
Fnationsreportcard%2Fpdf%2Fstudies%2F2005457%2Epdf
