Online Exams: Practical Implications and Future Directions
Gabriele Frankl, Sofie Bitter
Alpen-Adria-Universität Klagenfurt, Austria
gabriele.frankl@aau.at
sofie.bitter@aau.at

Abstract: Teaching is continuously adapting to the needs and requirements of digital natives, but
testing is generally conducted in the same, antiquated paper-and-pencil way. While oral exams outclass
written exams from a qualitative point of view, paper-and-pencil exams have no serious advantage
over online exams and even hinder the proper testing of student knowledge, particularly when that
knowledge was acquired via software programs used in class. Written exams also generate large workloads,
particularly in classes of hundreds of students, as is often the case for mandatory courses in the initial
phases of education. Furthermore, even when smaller numbers of students are involved, online exams
retain the great advantage of readable answers to open-text questions.

A “secure exam environment” (SEE) is presented in this paper which was developed in response to
the circumstances noted above, as well as the financial difficulties associated with acquisition and
maintenance of a large-scale computing facility. The system was inaugurated in June 2011 and
students now have the possibility of taking online exams with their own devices while, at the same
time, being prevented from accessing locally-stored files or non-specified Internet pages. It is also
possible to integrate tools that have been used in class. By the end of May 2012 we had conducted
47 such online exams with 1075 students, and are currently able to test up to 70 students
concurrently. Further developments to the SEE are planned for synchronous and concurrent online
testing of approximately 200 students.

One aspect of this paper therefore draws on practical experiences gained through the implementation
of this flexible solution for online testing at the Alpen-Adria-Universität Klagenfurt (AAUK). In
experimenting with the SEE we also conducted a survey among participating students which revealed
their general attitudes, concerns, technical obstacles and suggestions for improvements regarding
online exams. Our findings include several implications for modern online testing and successful
implementation of online assessments in a university environment. This paper thus discusses the
current status of online exams and associated didactic implications; outlines SEE functionalities
including issues of security, safety, privacy and organizational features; and discusses our survey
research results in consideration of future research directions.

Keywords: Online testing, Secure Exam Environment, Moodle, security, privacy, survey

1. Introduction
The general purpose of an exam is to test student knowledge: “At its most basic level, assessment is
the process of generating evidence of student learning and then making a judgment about that
evidence.” (Elliott, 2008, p. 1). To cite another author, “…assessment is a process of appraising an
individual’s knowledge, understanding, abilities or skills” (Marriott, 2009, p. 252), and the assessment
process, “…can be used as a means of channelling students’ energies, and the feedback that it
generates can provide students with an opportunity for reflection” (Marriott, 2009, p. 238).

Elliott (2009) argues that teaching methods have changed little over the last century, highlighting that
current practice regarding exams displays a “behaviouralist approach” whereby success is rewarded
with a “pass” and failure is punished by withholding certification (Elliott, 2009). This is in line with the
fact that university student behaviour is heavily driven by exams (Müller and Bayer, 2007). As Marriott
(2009) points out, the way lecturers teach and assess knowledge has a significant impact on the
learning experience of students: an assessment instrument, “…is perceived as the sole purpose of
their learning, with all their efforts going into passing the test rather than the acquisition of new
knowledge and skills.” (Elliott, 2008, p. 2).

Müller and Schmidt (2009), however, stress that assessments and exams should be regarded as part
of the learning process. They note that exams can be much more than a simple evaluation of student
performance in that they can form a crucial and central part of quality learning by involving students in
the design of exams.
Exams as such can have various functions. They can assume a recruitment/selection function
(deciding who advances and who does not), a didactic function (monitoring the teaching and
learning process), a socialization function (success in an exam leads to a certain social status), as
well as the production of scientific knowledge (the writing of papers, theses, dissertations and related works)
(Müller and Bayer, 2007).

It should be mentioned, however, that oral or non-standardized written exams are not an objective
means of evaluation in that they encompass subjective communications and are steeped in subjective
construction processes (Müller and Bayer, 2007). The “halo-effect” (Kahneman, 2011), as pointed out
by Müller and Bayer (2007), is one such example, whereby the lecturer infers from a single student attribute
(e.g., neat or illegible handwriting) more general qualities in other areas of life. In this way
a lecturer may be influenced by handwriting or by the number of corrections, which may,
subsequently, affect his or her objectivity.

On this point online exams offer an interesting and effective method for enhancing objectivity. For
standardized questions – such as multiple or single choice questions, or matching questions – the
evaluation is done automatically. It is nonetheless necessary to design questions intelligently in order
to avoid examining only superficial knowledge. But for free-text questions the lecturer must still correct
the answer, even if the workload is lessened through enhanced structure and readability. It is also easier,
for example, to evaluate all the answers to a single question at once with online testing, as
it is possible to switch between different students’ answers and keep track of which answers
have already been marked. Furthermore, different handwriting styles do not influence the
lecturer; it becomes easier to evaluate each question on its merits without being influenced by the
other answers a student has provided.

2. Online Exams

Today’s learners are faced with a plethora of possibilities to interact and collaborate online in the
university context. Elliott (2009) has stressed that education as such is in transformation and being
heavily affected by information and communication technology (ICT). Students are offered various
forms of learning management systems in order to make efficient use of their learning time, and
allocate their resources to maximize learning outcomes. Thus, teaching is continuously adapting to
the new needs and requirements of “digital natives” (as defined by Prensky, 2010). Testing, on the
other hand, is generally conducted in the same, antiquated paper-and-pencil way (Elliott, 2009).

In his historical review of examinations, Elliott (2008) positions paper-and-pencil exams as Generation
1.0 instruments: fixed in terms of time and space, formalized and controlled. This era was followed by
Generation 1.5 examinations that were electronic and held in computer rooms, but which simply
mirrored Generation 1.0 instruments by moving the “off-line” modality into an “on-line” context. But the
critical issue, writes Elliott, is that testing became the focal point of learning and occupied much of the
time that could have been otherwise used for teaching. He views Assessment 2.0 as an era
corresponding to Web 2.0, aimed at generating answers and evaluating procedural knowledge in the
form of interactive e-assessments (Elliott, 2008).

Chao et al. (2011) have assessed the adequacy of tools for online synchronous assessment in terms
of cyber classes and distance assessment via camera monitoring. This research identifies challenges
and issues such as the extent of monitoring and cheating, finds that there is a lack of adequate
software tools, and discusses the requirements for various online synchronous assessment methods.

2.1 Benefits and obstacles of online testing


Online testing methods increase assessment objectivity, and also lighten correction workloads. This is
particularly advantageous in classes with hundreds of students, such as mandatory courses of the
study entry and orientation phase (STEOP). It would seem clear that at this point in time, up-to-date
teaching requires up-to-date testing.

Hewson (2012) has conducted a literature review regarding the benefits associated with online
exams. She finds that this examination modality has proved particularly beneficial since it saves time
and money given its automatic delivery, scoring and storage. She also finds that online exams
increase student engagement due to their relative novelty, and provide greater flexibility as compared
with traditional testing methods. Anakwe (2008) has assessed the application of online exams in
traditional class-based courses, comparing student performance on online exams and traditional in-
class exams. While no significant differences were found, she determined that the greater efficiency
of online exams lightened teaching and administration workloads: “Thus, instructors may include online
tests in their traditional in-class courses without affecting the students’ test performance, while
reaping the benefits of online testing, which include instant grading and feedback to the students.”
(Anakwe, 2008, pp. 16-17).

Feedback plays a key role in assessment processes and is an important element of the learning
process (Anakwe, 2008; Marriott, 2009). The importance of feedback options in the teaching and
learning process relates to the determination of knowledge gaps between achieved and expected
learning outcomes (Marriott, 2009). Online exams provide both standardized and individualized
feedback possibilities (Hewson, 2012). Furthermore, online exams free up time that lecturers would
have otherwise dedicated to the administration and correction of the tests; hence, the time savings
can be devoted to new topics or in-depth discussion (Anakwe, 2008). Moreover, “[o]nline testing also
makes it easy to provide repeated testing opportunities for practice purposes. Multiple-choice, true or
false, and matching items can be easily administered through the Internet.” (Anakwe, 2008, p. 13).

A study by Marriott (2009) has found that phased online assessment encourages classroom
participation and student engagement in the teaching and learning process, as well as improved
feedback possibilities. Hewson’s results also stress that online examination methods, “…offer a fair
and valid alternative to traditional pen and paper approaches, and thus allows practitioners to more
confidently adopt such methods, taking advantage of the various benefits they can offer.” (Hewson,
2012, p. 8).

Online exams also face challenges, however, and among them the critical issue of reliability is
paramount since this modality is dependent on computers and computer networking technologies
(Hewson, 2012).

2.2 Testing as an integral part of the learning process


Biggs and Tang (2011) argue that a well-founded lecture design includes assessment, and they relate
this to the concept of “constructive alignment” – the necessity of establishing coherence between all
phases of the learning process. They argue that all elements of the learning process (intended
learning outcomes, teaching/learning activities, assessment tasks and grading) should support one
another (Biggs and Tang, 2011, p. 109). It is therefore possible to suggest that in order to
ensure coherence, the software tools used for teaching should be part of the examination process.
Müller and Schmidt (2009) similarly argue that learning targets, methods and examination methods
should act in harmony.

The Alpen-Adria-Universität Klagenfurt has developed a “Secure Exam Environment” (SEE) that is in
line with such thinking. A pilot project was launched in 2011 to implement the SEE for online testing
(Frankl et al., 2011) with the objective of providing up-to-date testing methods that go hand-in-hand
with the blended learning strategies in operation. The implemented learning management system
(LMS) at the Alpen-Adria-Universität is Moodle.

Online exams are usually conducted in computer rooms that are often too small, and larger computer
rooms are usually unavailable or not economically feasible. Hence, the SEE makes use of existing
student resources, specifically their personal computers (typically notebooks and netbooks). The fact
that on-site computer rooms severely restrict the number of students for synchronous testing (a
maximum of 15-20 students per run) was an important motivator for developing the system. The
efficiencies of allowing students to use their own devices are complemented by an effectiveness factor,
since students are presumably familiar with these devices. The institution is therefore not faced with
expensive investments in new computing facilities and the associated maintenance costs. We are
currently able to test up to 70 students synchronously and plan to increase this to 200 by Autumn
2012.
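
To make the restriction on “non-specified Internet pages” more concrete, the following is a minimal sketch of an allowlist check. It is an illustration only: the hostname and function names are assumptions, and the actual SEE works by booting the student’s device into a controlled environment rather than by running code like this.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: during an exam, only the exam server (e.g., the
# university's Moodle instance) should be reachable. The hostname below is an
# assumption for illustration, not the real SEE configuration.
ALLOWED_HOSTS = {"moodle.aau.at"}

def is_request_allowed(url: str) -> bool:
    """Return True only if the requested URL targets an allowed exam host."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS

# The exam platform remains reachable; arbitrary pages do not.
assert is_request_allowed("https://moodle.aau.at/mod/quiz/view.php")
assert not is_request_allowed("https://www.example.com/lecture-notes.html")
```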

It has also become common that many courses are based on or supported by different software tools
and programs, for example statistical programs or special mathematical software packages.
Traditional testing methods do not offer the possibility of testing student knowledge related to the use
and application of such software programs; the SEE, in contrast, does offer this possibility, which is
consistent with the pedagogical coherence suggested by Biggs and Tang (2011). For example, if Excel is
used in a course and students require the program, they will be in a position to use it for the exam.
We integrated a LibreOffice solution into the SEE and students can easily switch between the
program and the exam (contingent on lecturer preferences). We also conducted literature-focused
essay exams using OpenOffice word processing software, and we plan to integrate Eclipse into the
SEE so that programming can be tested more efficiently for the student as well as for the lecturer.

A recent research study has stated, “…there is also a clear need for further studies investigating
students’ attitudes, perceptions and preferences in relation to online assessment methods…”
(Hewson, 2012, p. 4). Our study contributes to current research in this vein by providing some
additional insights concerning student attitudes related to online testing.

3. Empirical study

3.1 Data collection


In June 2011 we began offering online exams with the SEE, and over the year 9 courses were
involved in a total of 10 online exams with 288 students. From January to May 2012, 20 courses
were involved in a running total (currently) of 37 online exams with 787 students. These figures
indicate a significant increase in the number of online exams and thus increased interest in online
assessment methods at Alpen-Adria-Universität Klagenfurt. In order to obtain feedback from
participating students, we integrated a brief survey tool in Moodle incorporating five questions about
general attitude, benefits, obstacles and technical problems encountered. We received 308 usable
questionnaires from a total of 1075 participating students, for a response rate of 29%.
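
For reference, the reported response rate follows directly from these figures:

\[
\frac{308}{1075} \approx 0.287 \approx 29\%
\]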

3.2 Results
The first section of the questionnaire assessed student attitudes towards online exams in general.
We identified student responses according to faculty (Management and Economics, Interdisciplinary
Studies, Humanities, and Technical Sciences) in order to refine the analysis (Figure 1). A review of
this figure indicates a positive attitude towards online exams across the four faculties. Respondents
from the faculty of Technical Sciences did not report any negative feedback whatsoever, and the
proportion of “very positive” responses from the faculty of Interdisciplinary Studies outstripped all
others.

Figure 1: Student attitudes toward online exams across four faculties

The questionnaire also included open questions asking students to report perceived benefits of online
testing using free text; we categorized these responses as displayed in Figure 2. The majority of the
students judged the key benefits of online testing to be quickly obtaining the results of an exam, the
time saved in its administration, and improved readability and structure for free-text answers.
Students also noted that online assessments are highly interesting, convenient and in general better
than paper-and-pencil exams. Some students reported that they do not see any difference compared
with conventional testing methods. Students also indicated that they appreciated the modern character
of online testing, including its environmentally friendly nature (paper saved) and the fact
that their hands were less tired than when completing a paper-and-pencil examination.

Figure 2: The benefits of online exams from the students' perspective

We also sought to determine the obstacles that students encountered; the largest category of obstacles
related to technical issues (Figure 3). Other problems frequently noted included the additional time
that some students required, difficulties with structure and overview, the types of questions employed,
and the longer preparation time needed as compared to traditional testing methods. Some students
also had trouble with typing, and others were not familiar with the devices on loan to them. It is clear,
however, that students reported more benefits than obstacles.

Figure 3: Obstacles to online exams as reported by students

We also asked if students would prefer taking online exams in other courses. Figure 4 illustrates the
results among the four faculties, and it is quite clear that most students answered in the affirmative.
Negative responses are comparatively low, in particular for the faculties of Technical Sciences and
Interdisciplinary Studies.
Figure 4: Student preferences for online exams in other courses

As regards the types of questions deployed in online tests, the majority of exams (54%) utilized only
standardized questions, 42% included a mixture of standardized and free-text questions,
and only 4% used free-text questions exclusively (Figure 5).

Figure 5: Types of question utilized in online exams

4. Implications and future directions

With nearly one year of experience with online testing at AAUK, we are continuing to improve the SEE
and its add-on tools, such as the integration of software programs. As the results of this study
show, students report that they are generally satisfied with online exams and would, in fact, like to go
further in this domain. We have thus laid plans for further integrating the SEE at our university. The
faculty of Management and Economics, in particular, intends to integrate online examinations
throughout its institutes and departments. We would like to highlight that at the beginning of June
2012 we implemented a new survey tool for administering students’ feedback on online testing.
By mid-July 2012, 64 online exams with 1301 participating students had already been
conducted; analysis of this data is planned for future work.

4.1 Practical implications


The results of this study point to a number of organizational and technical factors. First,
lecturers need support when conducting online exams: this is true in the preparation phase but also
when exams are executed. Secondly, lecturers (in our case) are afforded the support of specially
trained tutors (e-tutors) from the department of e-learning. These e-tutors help in the preparation
phase with setting up a test and choosing pedagogically suitable examination designs, and are present
as exams are conducted. Thirdly, technical support is provided for students throughout an
examination (e.g., in case there are difficulties with the booting procedure, etc.). This is necessary
since, for many students, online testing is a new situation, anxiety may be high, and they should not
be encumbered by the novelty of online assessment.

The department of e-learning also provides personalized support some weeks before the online
exams begin. Students can experiment with booting the SEE on their own devices, determine whether
their hardware is suitable for the SEE, and can thus get in touch with online help before the actual
exam, which allays many fears. We also provide information materials for students and teachers,
along with further personalized support.

4.2 Pedagogical implications


Online testing has pedagogical implications given its immense potential. First of all, in-depth
knowledge can be assessed with correctly designed multiple-choice questions. Even though some
lecturers have utilized standardized questions, this remains a new modality and requires appropriate
support. We therefore offer, for example, a checklist specific to the development of multiple and
single-choice questions, created by an internal expert (August Fenk). Secondly, online examinations
are superior in evaluating individual performance. Individualized exams are becoming a central topic
in education (see Elliott, 2008) and thus, “…E-assessment’s most exciting use is in assessing
functioning knowledge. Complex real-life situations can be given in multimedia presentations and
students asked to respond” (Biggs and Tang, 2011, p. 269). Other important pedagogical aspects
include the opportunity for feedback that online exams offer: individualized or even automated
feedback provides students the opportunity to obtain valuable information which they otherwise would
not have received (with paper-and-pencil tests). Constructive feedback as well as immediate results
support students in detecting deficiencies and fostering learning to improve performance (Marriott,
2009). Online tests definitely offer a comparative advantage in this context. According to Elliott (2008),
it is now necessary to place greater emphasis on e-learning and advance towards personalized learning and
assessment. Since life-long learning is of increasing importance, e-learning cannot be neglected.

5. References

Anakwe, B. (2008) “Comparison of Student Performance in Paper-Based Versus Computer-Based
Testing,” Journal of Education for Business, October, pp. 13-18.
Biggs, J. and Tang, C. (2011) Teaching for Quality Learning at University, Berkshire, McGraw Hill, 4th
ed.
Chao, K.J., Hung, I.C. and Chen, N.S. (2011) “On the design of online synchronous assessments in a
synchronous cyber classroom,” Journal of Computer Assisted Learning, Blackwell Publishing
Ltd, pp. 1-17.
Elliott, B. (2008), “Assessment 2.0: Modernising assessment in the age of Web 2.0,” Scottish
Qualifications Authority. Retrieved May 16, 2012, from
http://www.scribd.com/doc/461041/Assessment-20.
Elliott, B. (2009) “E-Pedagogy: Does e-learning require a new pedagogy?,” Scottish Qualifications
Authority. Retrieved May 16, 2012, from http://www.scribd.com/doc/932164/E-Pedagogy.
Frankl, G., Schartner, P. and Zebedin, G. (2011) “The ‘Secure Exam Environment’ for Online Testing
at the Alpen-Adria-Universität Klagenfurt/Austria,” World Conference on E-Learning in
Corporate, Government, Healthcare, and Higher Education, Hawaii, Association for the
Advancement of Computing in Education (AACE), pp. 498-505.
Hewson, C. (2012) “Can online course-based assessment methods be fair and equitable?
Relationships between students’ preferences and performance within online and offline
assessments,” Journal of Computer Assisted Learning, Blackwell Publishing Ltd.
Kahneman, D. (2011) Thinking, Fast and Slow, New York, Farrar, Straus and Giroux.
Marriott, P. (2009) “Students’ evaluation of the use of online summative assessment on an
undergraduate financial accounting module,” British Journal of Educational Technology,
Blackwell Publishing Ltd, Vol. 40 No. 2, pp. 237-254.
Müller, A. and Schmidt, B. (2009) “Prüfungen als Lernchance: Sinn, Ziele und Formen von
Hochschulprüfungen,” Zeitschrift für Hochschulentwicklung, Vol. 4 No. 1, pp. 23-45.
Müller, F.H. and Bayer, C. (2007) “Prüfungen: Vorbereitung - Durchführung - Bewertung,”
in Hawelka, B., Hammerl, M. and Gruber, H. (Eds.), Förderung von Kompetenzen in der
Hochschullehre, Kröning, Asanger, pp. 223-237.
Prensky, M. (2010) Teaching Digital Natives: Partnering for Real Learning, Thousand Oaks, Corwin.
