Online Exams: Practical Implications and Future Directions: October 2012
All content following this page was uploaded by Gabriele Frankl on 30 May 2014.
Abstract: Teaching is continuously adapting to the needs and requirements of digital natives, but
testing is generally conducted in the same, antiquated paper-and-pencil way. While oral exams outclass
written exams from a qualitative point of view, paper-and-pencil exams have no serious advantage
over online exams and even hinder proper testing of student knowledge, particularly when that
knowledge was acquired via in-class software programs. Written exams also generate large workloads,
particularly in classes of hundreds of students, as is often the case for mandatory courses in the initial
phases of education. Furthermore, online exams offer the great advantage of answer readability for
open-text questions, even when smaller numbers of students are involved.
A “secure exam environment” (SEE) is presented in this paper which was developed in response to
the circumstances noted above, as well as the financial difficulties associated with acquisition and
maintenance of a large-scale computing facility. The system was inaugurated in June 2011 and
students now have the possibility of taking online exams with their own devices while, at the same
time, being prevented from accessing locally-stored files or non-specified Internet pages. It is also
possible to integrate tools that have been used in class. By the end of May 2012 we had conducted
47 such online exams with 1075 students, and are currently able to test up to 70 students
concurrently. Further developments to the SEE are planned for synchronous and concurrent online
testing of approximately 200 students.
One aspect of this paper therefore draws on practical experiences gained through the implementation
of this flexible solution for online testing at the Alpen-Adria-Universität Klagenfurt (AAUK). In
experimenting with the SEE we also conducted a survey among participating students which revealed
their general attitudes, concerns, technical obstacles and suggestions for improvements regarding
online exams. Our findings include several implications for modern online testing and successful
implementation of online assessments in a university environment. This paper thus discusses the
current status of online exams and associated didactic implications; outlines SEE functionalities
including issues of security, safety, privacy and organizational features; and discusses our survey
research results in consideration of future research directions.
Keywords: Online testing, Secure Exam Environment, Moodle, security, privacy, survey
1. Introduction
The general purpose of an exam is to test student knowledge: “At its most basic level, assessment is
the process of generating evidence of student learning and then making a judgment about that
evidence.” (Elliott, 2008, p. 1). To cite another author, “…assessment is a process of appraising an
individual’s knowledge, understanding, abilities or skills” (Marriott, 2009, p. 252), and the assessment
process, “…can be used as a means of channelling students’ energies, and the feedback that it
generates can provide students with an opportunity for reflection” (Marriott, 2009, p. 238).
Elliott (2009) argues that teaching methods have changed little over the last century, highlighting that
current practice regarding exams displays a “behaviouralist approach” whereby success is rewarded
with a “pass” and failure is punished by withholding certification (Elliott, 2009). This is in line with the
fact that university student behaviour is heavily driven by exams (Müller and Bayer, 2007). As Marriott
(2009) points out, the way lecturers teach and assess knowledge has a significant impact on the
learning experience of students: an assessment instrument, “…is perceived as the sole purpose of
their learning, with all their efforts going into passing the test rather than the acquisition of new
knowledge and skills.” (Elliott, 2008, p. 2).
Müller and Schmidt (2009), however, stress that assessments and exams should be regarded as part
of the learning process. They note that exams can be much more than a simple evaluation of student
performance in that they can form a crucial and central part of quality learning by involving students in
the design of exams.
Exams as such can have various functions. They can assume a recruitment / selection function
(deciding on who advances and who does not), a didactic function (monitoring the teaching and
learning process), a socialization function (success in an exam leads to a certain social status), as
well as the production of scientific knowledge (the writing of papers, theses, dissertations and related)
(Müller and Bayer, 2007).
It should be mentioned, however, that oral or non-standardized written exams are not an objective
means of evaluation in that they encompass subjective communications and are steeped in subjective
construction processes (Müller and Bayer, 2007). The “halo-effect” (Kahneman, 2011), as pointed out
by Müller and Bayer (2007), is such an example whereby the lecturer infers from one student attribute
(e.g., proper/illegible handwriting) that s/he is more generally similar in other areas of life. In this way
a lecturer may be influenced by handwriting or by the number of corrections which may,
subsequently, influence his or her objectivity.
On this point online exams offer an interesting and effective method for enhancing objectivity. For
standardized questions – such as multiple or single choice questions, or matching questions – the
evaluation is done automatically. It is nonetheless necessary to design questions intelligently in order
to avoid examining only superficial knowledge. But for free-text questions the lecturer must still correct
the answer, even if the workload is lessened through enhanced structure and readability. With online
testing it is also easier, for example, to evaluate all the answers to a single question at once, as it is
possible to switch between the different students’ answers and to keep track of which answers have
already been marked. Furthermore, different handwriting styles do not influence the
lecturer; it becomes easier to evaluate each question on its merits without being influenced by the
handwriting of other answers provided by the student.
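The automatic evaluation of standardized questions can be illustrated with a minimal sketch. The answer-key structure and the all-or-nothing scoring rule below are purely illustrative assumptions on our part, not the actual implementation used in the SEE or Moodle:

```python
# Illustrative sketch of automatic scoring for standardized questions.
# The answer key, response format and scoring rule are hypothetical.

def score_exam(answer_key, responses):
    """Return points per question: 1 for the correct choice, 0 otherwise."""
    return {q: 1 if responses.get(q) == correct else 0
            for q, correct in answer_key.items()}

# Hypothetical answer key and one student's responses.
key = {"q1": "b", "q2": "d", "q3": "a"}
student = {"q1": "b", "q2": "c", "q3": "a"}

points = score_exam(key, student)
print(points, sum(points.values()))  # {'q1': 1, 'q2': 0, 'q3': 1} 2
```

Because such a rule is applied identically to every submission, the evaluation of standardized questions is fully objective; the quality of the exam then rests entirely on intelligent question design.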
2. Online Exams
Today’s learners are faced with a plethora of possibilities to interact and collaborate online in the
university context. Elliott (2009) has stressed that education as such is in transformation and being
heavily affected by information and communication technology (ICT). Students are offered various
forms of learning management systems in order to make efficient use of their learning time, and
allocate their resources to maximize learning outcomes. Thus, teaching is continuously adapting to
the new needs and requirements of “digital natives” (as defined by Prensky, 2010). Testing, on the
other hand, is generally conducted in the same, antiquated paper-and-pencil way (Elliott, 2009).
In his historical review of examinations, Elliott (2008) positions paper-and-pencil exams as Generation
1.0 instruments: fixed in terms of time and space, formalized and controlled. This era was followed by
Generation 1.5 examinations that were electronic and held in computer rooms, but which simply
mirrored Generation 1.0 instruments by moving the “off-line” modality into an “on-line” context. But the
critical issue, writes Elliott, is that testing became the focal point of learning and occupied much of the
time that could have been otherwise used for teaching. He views Assessment 2.0 as an era
corresponding to Web 2.0, aimed at generating answers and evaluating procedural knowledge in the
form of interactive e-assessments (Elliott, 2008).
Chao et al. (2011) have assessed the adequacy of tools for online synchronous assessment in terms
of cyber classes and distance assessment via camera monitoring. This research identifies challenges
and issues such as the extent of monitoring and cheating, finds that there is a lack of adequate
software tools, and discusses the requirements for various online synchronous assessment methods.
Hewson (2012) has conducted a literature review regarding the benefits associated with online
exams. She finds that this examination modality has proved particularly beneficial since it saves time
and money given its automatic delivery, scoring and storage. She also finds that online exams
increase student engagement due to their relative novelty, and provide greater flexibility as compared
with traditional testing methods. Anakwe (2008) has assessed the application of online exams in
traditional class-based courses, comparing student performance on online exams and traditional in-
class exams. While no significant differences were found, she determined that the greater efficiency
of online exams lightened teaching workload and administration: “Thus, instructors may include online
tests in their traditional in-class courses without affecting the students’ test performance, while
reaping the benefits of online testing, which include instant grading and feedback to the students.”
(Anakwe, 2008, pp. 16-17).
Feedback plays a key role in assessment processes and is an important element of the learning
process (Anakwe, 2008; Marriott, 2009). The importance of feedback options in the teaching and
learning process relates to the determination of knowledge gaps between achieved and expected
learning outcomes (Marriott, 2009). Online exams provide both standardized and individualized
feedback possibilities (Hewson, 2012). Furthermore, online exams free up time that lecturers would
have otherwise dedicated to the administration and correction of the tests; hence, the time savings
can be devoted to new topics or in-depth discussion (Anakwe, 2008). Moreover, “[o]nline testing also
makes it easy to provide repeated testing opportunities for practice purposes. Multiple-choice, true or
false, and matching items can be easily administered through the Internet.” (Anakwe, 2008, p. 13).
A study by Marriott (2009) has found that phased online assessment encourages classroom
participation and student engagement in the teaching and learning process, as well as improved
feedback possibilities. Hewson’s results also stress that online examination methods, “…offer a fair
and valid alternative to traditional pen and paper approaches, and thus allows practitioners to more
confidently adopt such methods, taking advantage of the various benefits they can offer.” (Hewson,
2012, p. 8).
Online exams also face challenges, however, and among them the critical issue of reliability is
paramount since this modality is dependent on computers and computer networking technologies
(Hewson, 2012).
The Alpen-Adria-Universität Klagenfurt has developed a “Secure Exam Environment” (SEE) that is in
line with such thinking. A pilot project was launched in 2011 to implement the SEE for online testing
(Frankl et al., 2011) with the objective of providing up-to-date testing methods that go hand-in-hand
with the blended learning strategies in operation. The implemented learning management system
(LMS) at the Alpen-Adria-Universität is Moodle.
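As an illustration, standardized questions of the kinds discussed above can be authored for Moodle in its plain-text GIFT import format. The following is a minimal sketch with purely hypothetical question content:

```text
// Single-answer multiple-choice question (hypothetical content)
::Q1::Which of the following is a learning management system? {
  =Moodle
  ~Eclipse
  ~LibreOffice
}

// Matching question (hypothetical content)
::Q2::Match each abbreviation to its meaning. {
  =LMS -> Learning Management System
  =SEE -> Secure Exam Environment
}
```

Files in this format can be imported into a Moodle question bank, after which multiple-choice and matching answers are scored automatically by the quiz engine.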
Online exams are usually conducted in computer rooms that are often too small, and larger computer
rooms are usually unavailable or not economically feasible. Hence, the SEE makes use of existing
student resources, specifically their personal computers (typically notebooks and netbooks). The fact
that on-site computer rooms severely restrict the number of students for synchronous testing (a
maximum 15-20 students per run) was an important motivator for developing the system. The
efficiencies of allowing students to use their own devices are complemented by an effectiveness factor
since they are presumably familiar with these devices. The institution therefore is not faced with
expensive investments in new computing facilities and the associated maintenance costs. We are
currently able to test up to 70 students synchronously and plan to increase this to 200 by Autumn
2012.
It has also become common that many courses are based on or supported by different software tools
and programs, for example statistical programs or special mathematical software packages.
Traditional testing methods do not offer the possibility of testing student knowledge related to the use
and application of such software programs; the SEE, in contrast, does offer this possibility which is
consistent with pedagogical coherence suggested by Biggs and Tang (2011). For example, if Excel is
used in a course and students require the program, they will be in a position to use it for the exam.
We integrated a LibreOffice solution into the SEE and students can easily switch between the
program and the exam (contingent on lecturer preferences). We also conducted literature-focused
essay exams using OpenOffice word-processing programs, and we plan to integrate Eclipse into the
SEE so that programming can be tested more efficiently for the student as well as for the lecturer.
A recent research study has stated, “…there is also a clear need for further studies investigating
students’ attitudes, perceptions and preferences in relation to online assessment methods…”
(Hewson, 2012, p. 4). Our study contributes to current research in this vein by providing some
additional insights concerning student attitudes related to online testing.
3. Empirical study
3.2 Results
The first sections of the questionnaire assessed student attitudes towards online exams in general.
We identified student responses according to faculty (Management and Economics, Interdisciplinary
Studies, Humanities, and Technical Sciences) in order to refine the analysis (Figure 1). A review of
this figure indicates a positive attitude towards online exams across the four faculties. Respondents
from the faculty of Technical Sciences reported no negative feedback whatsoever, and the
proportion of “very positive” responses from the faculty of Interdisciplinary Studies outstripped all
others.
The questionnaire also included open questions asking students to report perceived benefits of online
testing using free text; we categorized these responses as displayed in Figure 2. The majority of the
students judged the key benefits of online testing to be quickly obtaining the results of an exam, the
time saved in its administration, and improved readability and structure for free-text answers.
Students also noted that online assessments are highly interesting, convenient and in general better
than paper-and-pencil exams. Some students reported that they do not see any difference compared
with conventional testing methods. Students also indicated that they appreciated the modernity
associated with online testing, including its environmentally friendly nature (paper saved) and the fact
that their hands were less tired than when completing a paper-and-pencil examination.
We also sought to determine the obstacles that students encountered, however, and the biggest
category of obstacles related to technical issues (Figure 3). Other problems frequently noted include
the additional time that some students require, difficulties with the structure and overview, the types of
questions employed, and a longer preparation time needed as compared to traditional testing
methods. Some students also had trouble with typing, and others were not familiar with devices on
loan to them. It is clear, however, that students reported more benefits than obstacles.
We also asked if students would prefer taking online exams in other courses. Figure 4 illustrates the
results among the four faculties and it is quite clear that most students answered in the affirmative.
Negative responses are comparatively low, in particular for the faculties of Technical Sciences and
Interdisciplinary Studies.
Figure 4: Student preferences for online exams in other courses
As regards the types of questions deployed in online tests, the majority (54%) utilized only
standardized questions, 42% included a mixture of standardized questions and free-text questions
and only 4% included free-text questions (Figure 5).
With nearly one year of experience with online testing at AAUK, we are improving our SEE and its
add-on tools, such as the integration of software programs. As the results of this study show,
students report that they are generally satisfied with online exams and would, in fact, like to go
further in this domain. We have thus laid plans for further integrating the SEE at our university. The
faculty of Management and Economics, in particular, intends to integrate online examinations
throughout its institutes and departments. We would also like to highlight that at the beginning of June
2012 we implemented a new survey tool for administering students’ feedback on online testing.
By mid-July 2012, 64 online exams with 1301 students participating had already been conducted;
analysis of these new data is planned.
The department of e-learning also provides personalized support in the weeks before an online exam.
Students can experiment with booting the SEE on their own devices, determine whether their
hardware is suitable for the SEE, and get in touch with online help before the actual exam, which
allays many fears. We also provide information materials for students and teachers.
5. References