Abstract
In an extension of research on the effects of computer-based versus paper-based assessment of
students' essays in secondary education, this paper examines the advantages and
disadvantages associated with the use of technology-based assessments for student writing samples in
middle and high schools. Studies have investigated the comparability of scores for paper and computer
versions of a writing test administered to 8th-grade students. Results generally showed no significant mean
score differences between paper and computer delivery. Observations, interviews, and a survey indicated
that automated writing evaluation (AWE) software programs, such as the Intelligent Essay Assessor (IEA),
based on Latent Semantic Analysis (LSA), and MY Access!, which uses artificial intelligence (AI) to score
student essays and support revision, simplified classroom management and increased students' motivation
to write and revise (Burstein, Chodorow, & Leacock, 2004). The use of AWE software programs also
allows teachers to increase the number of writing assignments without increasing the amount of grading,
and can serve as a highly effective and efficient tool for increasing students' exposure to writing.
Technology-based writing assessments provide a level of feedback that requires students to reflect on
their performance, and also provide assessment methods that inform students about where they most need
assistance. A technology-based writing assessment allows students to practice their writing with efficient
and informative feedback, and gives teachers the information they need to identify individual and class
strengths and weaknesses. Automated software programs enhance students' critical-thinking skills and can
accelerate student learning, leading to higher levels of student achievement. However, computer
familiarity significantly predicted online writing test performance after controlling for paper writing
skill. These results suggest that, for any given individual, a computer-based writing assessment may
produce different results than a paper one, depending upon that individual's level of computer familiarity
(Horkay, Bennett, Allen, Kaplan, & Yan, 2006).
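To make the LSA mechanism mentioned above concrete, the following is a minimal, hypothetical sketch of how an LSA-style system can compare a student essay against reference texts in a reduced "semantic" space. It is a toy illustration of the general technique, not the actual IEA or MY Access! implementation; all texts, function names, and the choice of k are assumptions for demonstration.

```python
# Toy Latent Semantic Analysis (LSA) similarity scoring: an illustrative
# sketch of the technique behind IEA-style essay scoring, NOT the actual
# IEA algorithm. Real AWE systems train on large scored-essay corpora.
import numpy as np

def term_doc_matrix(docs):
    # Build a raw term-frequency matrix (terms x documents).
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    m = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in d.lower().split():
            m[index[w], j] += 1
    return m

def lsa_scores(reference_docs, essay, k=2):
    # Project all documents into a k-dimensional latent space via SVD,
    # then return the cosine similarity of the essay to each reference.
    docs = reference_docs + [essay]
    m = term_doc_matrix(docs)
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    coords = (np.diag(s[:k]) @ vt[:k]).T  # one row per document
    e = coords[-1]
    sims = []
    for r in coords[:-1]:
        denom = np.linalg.norm(r) * np.linalg.norm(e)
        sims.append(float(r @ e / denom) if denom else 0.0)
    return sims

refs = ["the essay argues a clear thesis with evidence",
        "strong essays support claims with evidence"]
print(lsa_scores(refs, "my essay states a thesis and gives evidence"))
```

In a scoring context, such similarities to exemplar essays at known quality levels would feed a predictive model; this sketch shows only the dimensionality-reduction-and-compare step that gives LSA its name.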
Annotated References
1. Utility in a Fallible Tool: A Multi-Site Case Study of Automated Writing Evaluation
Automated writing evaluation (AWE) software uses artificial intelligence (AI) to score student
essays and support revision. We studied how an AWE program called MY Access! was used in
eight middle schools in Southern California over a three-year period. Observations, interviews,
and a survey indicated that using AWE simplified classroom management and increased
students' motivation to write and revise.
Research Questions: